Re: [gpfsug-discuss] slow filesystem

2019-07-10 Thread Buterbaugh, Kevin L
Hi Damir, Have you checked to see whether gssio4 might have a failing internal HD / SSD? Thanks… Kevin On Jul 10, 2019, at 7:16 AM, Damir Krstic <damir.krs...@gmail.com> wrote: Over last couple of days our reads and writes on our compute cluster are experiencing real slow reads and w

Re: [gpfsug-discuss] Adding to an existing GPFS ACL

2019-03-27 Thread Buterbaugh, Kevin L
gpfsug-discuss-boun...@spectrumscale.org on behalf of Buterbaugh, Kevin L <kevin.buterba...@vanderbilt.edu> Sent: Wednesday, March 27, 2019 11:19:03 AM To: gpfsug main discussion list Subject: [EXT] Re: [gpfsug

Re: [gpfsug-discuss] Adding to an existing GPFS ACL

2019-03-27 Thread Buterbaugh, Kevin L
gpfsug-discuss-boun...@spectrumscale.org on behalf of Buterbaugh, Kevin L <kevin.buterba...@vanderbilt.edu> Sent: Wednesday, March 27, 2019 10:59:17 AM To: gpfsug main discussion list Subject: [E

[gpfsug-discuss] Adding to an existing GPFS ACL

2019-03-27 Thread Buterbaugh, Kevin L
Hi All, First off, I have very limited experience with GPFS ACL’s, so please forgive me if I’m missing something obvious here. AFAIK, this is the first time we’ve hit something like this… We have a fileset where all the files / directories have GPFS NFSv4 ACL’s set on them. However, unlike m

Re: [gpfsug-discuss] GPFS v5: Blocksizes and subblocks

2019-03-27 Thread Buterbaugh, Kevin L
Hi All, So I was looking at the presentation referenced below and it states - on multiple slides - that there is one system storage pool per cluster. Really? Shouldn’t that be one system storage pool per filesystem?!? If not, please explain how in my GPFS cluster with two (local) filesystems

Re: [gpfsug-discuss] SSDs for data - DWPD?

2019-03-18 Thread Buterbaugh, Kevin L
3TBpd > 2TB drive (using 1/2 capacity) = 6TBpd > > Simon > > From: gpfsug-discuss-boun...@spectrumscale.org > [gpfsug-discuss-boun...@spectrumscale.org] on behalf of Buterbaugh, Kevin L > [kevin.buterba...@vanderbilt.edu] > S

Re: [gpfsug-discuss] SSDs for data - DWPD?

2019-03-18 Thread Buterbaugh, Kevin L
... On Mar 8, 2019, at 10:24 AM, Buterbaugh, Kevin L <kevin.buterba...@vanderbilt.edu> wrote: Hi All, This is kind of a survey if you will, so for this one it might be best if you responded directly to me and I’ll summarize the results next week. Question 1 - do you use SSDs for dat

[gpfsug-discuss] SSDs for data - DWPD?

2019-03-10 Thread Buterbaugh, Kevin L
Hi All, This is kind of a survey if you will, so for this one it might be best if you responded directly to me and I’ll summarize the results next week. Question 1 - do you use SSDs for data? If not - i.e. if you only use SSDs for metadata (as we currently do) - thanks, that’s all! If, howeve

Re: [gpfsug-discuss] Clarification of mmdiag --iohist output

2019-02-21 Thread Buterbaugh, Kevin L
ase contact 1-800-237-5511 in the United States or your local IBM Service Center in other countries. The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team. From: "Buterbaugh, Kevin L"

Re: [gpfsug-discuss] Clarification of mmdiag --iohist output

2019-02-20 Thread Buterbaugh, Kevin L
ot; on queues where the "active" field is > 0). That doesn't necessarily mean you need to tune your queues but I'd suggest that if the disk I/O on your NSD server looks healthy (e.g. low latency, not overly-taxed) that you could benefit from queue tuning. -Aaron On Sat, F

[gpfsug-discuss] Clarification of mmdiag --iohist output

2019-02-16 Thread Buterbaugh, Kevin L
Hi All, Been reading man pages, docs, and Googling, and haven’t found a definitive answer to this question, so I knew exactly where to turn… ;-) I’m dealing with some slow I/O’s to certain storage arrays in our environments … like really, really slow I/O’s … here’s just one example from one of

Re: [gpfsug-discuss] Node ‘crash and restart’ event using GPFS callback?

2019-01-31 Thread Buterbaugh, Kevin L
Hi Bob, We use the nodeLeave callback to detect node expels … for what you’re wanting to do I wonder if nodeJoin might work?? If a node joins the cluster and then has an uptime of a few minutes you could go looking in /tmp/mmfs. HTH... -- Kevin Buterbaugh - Senior System Administrator Vanderb
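A minimal sketch of the nodeJoin idea above, assuming a hypothetical helper script /usr/local/sbin/check_crash.sh that inspects uptime and /tmp/mmfs on the joining node (verify the option list against the mmaddcallback man page for your release):

    # Register a callback that fires whenever a node (re)joins the cluster
    mmaddcallback nodeRejoinCheck \
        --command /usr/local/sbin/check_crash.sh \
        --event nodeJoin --async \
        --parms "%eventNode"
    # check_crash.sh (sketch): if the node's uptime is only a few minutes and
    # /tmp/mmfs holds fresh dumps, alert the admins that it likely crashed.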

Re: [gpfsug-discuss] Get list of filesets _without_ running mmlsfileset?

2019-01-21 Thread Buterbaugh, Kevin L
Hi All, I just wanted to follow up on this thread … the only way I have found to obtain a list of filesets and their associated junction paths as a non-root user is via the REST API (and thanks to those who suggested that). However, AFAICT querying the REST API via a script would expose the us
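For reference, a hedged sketch of the REST call being described, assuming a GUI/REST node named gui.example.edu, a filesystem named gpfs23, and a non-root GUI user; check the endpoint against the Scale REST API version you run:

    # List filesets (names and junction paths are in the returned JSON)
    curl -s -k -u restuser:restpass \
        "https://gui.example.edu:443/scalemgmt/v2/filesystems/gpfs23/filesets"

As noted above, any credentials embedded in a script are the real concern, not the call itself.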

Re: [gpfsug-discuss] Get list of filesets _without_ running mmlsfileset?

2019-01-15 Thread Buterbaugh, Kevin L
Hi Marc (All), Yes, I can easily determine where filesets are linked here … it is, as you said, in just one or two paths. The script as it stands now has been doing that for several years and only needs a couple of relatively minor tweaks to be even more useful to _us_ by whittling down a coup

Re: [gpfsug-discuss] Get list of filesets _without_ running mmlsfileset?

2019-01-15 Thread Buterbaugh, Kevin L
019 4:07 PM > To: gpfsug-discuss@spectrumscale.org > Reply-to: gpfsug-discuss@spectrumscale.org > Subject: Re: [gpfsug-discuss] Get list of filesets _without_ running > mmlsfileset? > > On Sat, 12 Jan 2019 03:07:29 +, "Buterbaugh, Kevin L" said: >> But from there

Re: [gpfsug-discuss] Get list of filesets _without_ running mmlsfileset?

2019-01-12 Thread Buterbaugh, Kevin L
out which ones are needed by looking at the group ownership, Its very slow and a little cumbersome. Not least because it was written ages ago in a mix of bash, sed, awk and find. On Tue, 2019-01-08 at 22:12 +, Buterbaugh, Kevin L wrote: Hi All, Happy New Year to all! Personally, I’l

Re: [gpfsug-discuss] Get list of filesets _without_ running mmlsfileset?

2019-01-10 Thread Buterbaugh, Kevin L
ne: 614-2133-7927 E-mail: abeat...@au1.ibm.com - Original message - From: "Buterbaugh, Kevin L" <kevin.buterba...@vanderbilt.edu> Sent by: gpfsug-discuss-boun...@spectrumscale.org

Re: [gpfsug-discuss] Get list of filesets _without_ running mmlsfileset?

2019-01-09 Thread Buterbaugh, Kevin L
..@spectrumscale.org [gpfsug-discuss-boun...@spectrumscale.org] on behalf of Buterbaugh, Kevin L [kevin.buterba...@vanderbilt.edu] Sent: 08 January 2019 22:12 To: gpfsug main discussion list Subject: [gpfsug-discuss] Get list of filesets _without_ running mmlsfileset? Hi All, Happy New Year to all!

[gpfsug-discuss] Get list of filesets _without_ running mmlsfileset?

2019-01-09 Thread Buterbaugh, Kevin L
Hi All, Happy New Year to all! Personally, I’ll gladly and gratefully settle for 2019 not being a dumpster fire like 2018 was (those who attended my talk at the user group meeting at SC18 know what I’m referring to), but I certainly wish all of you the best! Is there a way to get a list of th

[gpfsug-discuss] Couple of questions related to storage pools and mmapplypolicy

2018-12-18 Thread Buterbaugh, Kevin L
Hi All, As those of you who suffered thru my talk at SC18 already know, we’re really short on space on one of our GPFS filesystems as the output of mmdf piped to grep pool shows: Disks in storage pool: system (Maximum disk size allowed is 24 TB) (pool total) 4.318T
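A hedged sketch of the check being quoted, with the filesystem name gpfs23 assumed:

    # Per-pool capacity summary; the "(pool total)" lines show total and free space
    mmdf gpfs23 | grep -E "storage pool|pool total"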

[gpfsug-discuss] Anybody running GPFS over iSCSI?

2018-12-15 Thread Buterbaugh, Kevin L
Hi All, Googling “GPFS and iSCSI” doesn’t produce a ton of hits! But we are interested to know if anyone is actually using GPFS over iSCSI? The reason why I’m asking is that we currently use an 8 Gb FC SAN … QLogic SANbox 5800’s, QLogic HBA’s in our NSD servers … but we’re seeing signs that,

Re: [gpfsug-discuss] Best way to migrate data

2018-10-18 Thread Buterbaugh, Kevin L
Hi Dwayne, I’m assuming you can’t just let an rsync run, possibly throttled in some way? If not, and if you’re just tapping out your network, then would it be possible to go old school? We have parts of the Medical Center here where their network connections are … um, less than robust. So th

Re: [gpfsug-discuss] Job vacancy @Birmingham

2018-10-18 Thread Buterbaugh, Kevin L
Hi Nathan, Well, while I’m truly sorry for what you’re going thru, at least a majority of the voters in the UK did vote for it. Keep in mind that things could be worse. Some of us do happen to live in a country where a far worse thing has happened despite the fact that the majority of the vote

Re: [gpfsug-discuss] mmfileid on 2 NSDs simultaneously?

2018-10-15 Thread Buterbaugh, Kevin L
Marc, Ugh - sorry, completely overlooked that… Kevin On Oct 15, 2018, at 1:44 PM, Marc A Kaplan <makap...@us.ibm.com> wrote: How about using the -F option?

[gpfsug-discuss] mmfileid on 2 NSDs simultaneously?

2018-10-15 Thread Buterbaugh, Kevin L
Hi All, Is there a way to run mmfileid on two NSD’s simultaneously? Thanks… Kevin — Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education kevin.buterba...@vanderbilt.edu - (615)875-963
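The -F answer given in the reply above amounts to something like this sketch (filesystem gpfs23 and NSD names are placeholders; check the mmfileid man page for the exact descriptor-file format):

    # Put the NSD names of interest into a file, one per line...
    printf "nsd13\nnsd14\n" > /tmp/nsdlist
    # ...then scan both disks in a single mmfileid pass
    mmfileid gpfs23 -F /tmp/nsdlist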

[gpfsug-discuss] Long I/O's on client but not on NSD server(s)

2018-10-04 Thread Buterbaugh, Kevin L
Hi All, What does it mean if I have a few dozen very long I/O’s (50 - 75 seconds) on a gateway as reported by “mmdiag --iohist” and they all reference two of my eight NSD servers… … but then I go to those 2 NSD servers and I don’t see any long I/O’s at all? In other words, if the problem (this

Re: [gpfsug-discuss] What is this error message telling me?

2018-09-27 Thread Buterbaugh, Kevin L
Hi John, Thanks for the explanation and the link to your presentation … just what I was needing. Kevin — Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education kevin.buterba...@vanderbilt.edu

Re: [gpfsug-discuss] What is this error message telling me?

2018-09-27 Thread Buterbaugh, Kevin L
at 11:03 AM, Aaron Knister <aaron.s.knis...@nasa.gov> wrote: Kevin, Is the communication in this case by chance using IPoIB in connected mode? -Aaron On 9/27/18 11:04 AM, Buterbaugh, Kevin L wrote: Hi All, 2018-09-27_09:48:50.923-0500: [E] The TCP connection to IP address 1.2.3.4

[gpfsug-discuss] What is this error message telling me?

2018-09-27 Thread Buterbaugh, Kevin L
Hi All, 2018-09-27_09:48:50.923-0500: [E] The TCP connection to IP address 1.2.3.4 some client (socket 442) state is unexpected: ca_state=1 unacked=3 rto=27008000 Seeing errors like the above and trying to track down the root cause. I know that at last weeks’ GPFS User Group meeting at ORNL

Re: [gpfsug-discuss] RAID type for system pool

2018-09-11 Thread Buterbaugh, Kevin L
quot;, "logData", ... that doesn't mean those aren't metadata also. From:"Buterbaugh, Kevin L" mailto:kevin.buterba...@vanderbilt.edu>> To:gpfsug main discussion list mailto:gpfsug-discuss@spectrumscale.org>> Date:09/10/2018 03:12

[gpfsug-discuss] RAID type for system pool

2018-09-10 Thread Buterbaugh, Kevin L
From: gpfsug-discuss-ow...@spectrumscale.org Subject: Re: [gpfsug-discuss] RAID type for system pool Date: September 10, 2018 at 11:35:05 AM CDT To: k...@accre.vanderbilt.edu Hi All, So while I’m waiting for the pu

[gpfsug-discuss] RAID type for system pool

2018-09-10 Thread Buterbaugh, Kevin L
Hi All, So while I’m waiting for the purchase of new hardware to go thru, I’m trying to gather more data about the current workload. One of the things I’m trying to do is get a handle on the ratio of reads versus writes for my metadata. I’m using “mmdiag --iohist” … in this case “dm-12” is one
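A rough sketch of the read/write ratio count described above, assuming the device shows up as dm-12 and that the R/W indicator is the second column of the iohist output (column positions can vary by release, so adjust the awk field if needed):

    # Tally reads vs. writes against one device in the recent I/O history
    mmdiag --iohist | grep dm-12 | awk '{print $2}' | sort | uniq -c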

Re: [gpfsug-discuss] RAID type for system pool

2018-09-06 Thread Buterbaugh, Kevin L
Hi All, Wow - my query got more responses than I expected and my sincere thanks to all who took the time to respond! At this point in time we do have two GPFS filesystems … one which is basically “/home” and some software installations and the other which is “/scratch” and “/data” (former back

[gpfsug-discuss] RAID type for system pool

2018-09-05 Thread Buterbaugh, Kevin L
Hi All, We are in the process of finalizing the purchase of some new storage arrays (so no sales people who might be monitoring this list need contact me) to life-cycle some older hardware. One of the things we are considering is the purchase of some new SSD’s for our “/home” filesystem and I

Re: [gpfsug-discuss] gpfsug-discuss Digest, Vol 79, Issue 21: mmaddcallback documentation issue

2018-08-07 Thread Buterbaugh, Kevin L
or, via email, send a message with subject or body 'help' to gpfsug-discuss-requ...@spectrumscale.org You can reach the person managing the list at gpfsug-discuss-ow...@spectrumscale.org When replying, please edit your Subject line so it is more specific than "Re: Contents of gp

[gpfsug-discuss] mmaddcallback documentation issue

2018-08-06 Thread Buterbaugh, Kevin L
Hi All, So I’m _still_ reading about and testing various policies for file placement and migration on our test cluster (which is now running GPFS 5). On page 392 of the GPFS 5.0.0 Administration Guide it says: To add a callback, run this command. The following command is on one line: mmaddcal
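For context, a hedged sketch of the style of callback that section of the Administration Guide describes (the lowDiskSpace/noDiskSpace events and the bundled mmstartpolicy helper are standard; the callback identifier is a placeholder), shown with shell continuations rather than on one literal line:

    # Kick off mmapplypolicy automatically when a pool crosses its threshold
    mmaddcallback DISKSPACE \
        --command /usr/lpp/mmfs/bin/mmstartpolicy \
        --event lowDiskSpace,noDiskSpace \
        --parms "%eventName %fsName"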

Re: [gpfsug-discuss] Sub-block size not quite as expected on GPFS 5 filesystem?

2018-08-06 Thread Buterbaugh, Kevin L
Hi All, So I was just reading the GPFS 5.0.0 Administration Guide (yes, I actually do look at the documentation even if it seems sometimes that I don’t!) for some other information and happened to come across this at the bottom of page 358: The --metadata-block-size flag on the mmcrfs command

Re: [gpfsug-discuss] Sub-block size not quite as expected on GPFS 5 filesystem?

2018-08-03 Thread Buterbaugh, Kevin L
arguments to mmcrfs. My apologies… Kevin On Aug 3, 2018, at 1:01 AM, Olaf Weiser <olaf.wei...@de.ibm.com> wrote: Can u share your stanza file ? Sent from my iPhone. On 02.08.2018 at 23:15, Buterbaugh, Kevin L <kevin.buterba...@vanderbilt.edu> wrote: OK, so h

Re: [gpfsug-discuss] Sub-block size not quite as expected on GPFS 5 filesystem?

2018-08-02 Thread Buterbaugh, Kevin L
kevin.buterba...@vanderbilt.edu - (615)875-9633 On Aug 2, 2018, at 3:31 PM, Buterbaugh, Kevin L <kevin.buterba...@vanderbilt.edu> wrote: Hi All, Thanks for all the responses on this, although I have the sneaking suspicion that the most signi

Re: [gpfsug-discuss] Sub-block size not quite as expected on GPFS 5 filesystem?

2018-08-02 Thread Buterbaugh, Kevin L
Hi All, Thanks for all the responses on this, although I have the sneaking suspicion that the most significant thing that is going to come out of this thread is the knowledge that Sven has left IBM for DDN. ;-) or :-( or :-O depending on your perspective. Anyway … we have done some testing wh

Re: [gpfsug-discuss] Sub-block size wrong on GPFS 5 filesystem?

2018-08-01 Thread Buterbaugh, Kevin L
2018, at 4:01 PM, Sven Oehme <oeh...@gmail.com> wrote: the only way to get max number of subblocks for a 5.0.x filesystem with the released code is to have metadata and data use the same blocksize. sven On Wed, Aug 1, 2018 at 11:52 AM Buterbaugh, Kevin L <kevin.buterba.

Re: [gpfsug-discuss] Sub-block size wrong on GPFS 5 filesystem?

2018-08-01 Thread Buterbaugh, Kevin L
I haven't looked into all the details but here's a clue -- notice there is only one "subblocks-per-full-block" parameter. And it is the same for both metadata blocks and datadata blocks. So maybe (MAYBE) that is a constraint somewhe

Re: [gpfsug-discuss] Sub-block size wrong on GPFS 5 filesystem?

2018-08-01 Thread Buterbaugh, Kevin L
Administrator Vanderbilt University - Advanced Computing Center for Research and Education kevin.buterba...@vanderbilt.edu - (615)875-9633 On Aug 1, 2018, at 1:47 PM, Buterbaugh, Kevin L <kevin.buterba...@vanderbilt.edu> wrote: Hi Sven,

Re: [gpfsug-discuss] Sub-block size wrong on GPFS 5 filesystem?

2018-08-01 Thread Buterbaugh, Kevin L
" parameter. And it is the same for both metadata blocks and datadata blocks. So maybe (MAYBE) that is a constraint somewhere... Certainly, in the currently supported code, that's what you get. From:"Buterbaugh, Kevin L" mailto:kevin.buterba...@vanderbilt.edu>>

[gpfsug-discuss] Sub-block size wrong on GPFS 5 filesystem?

2018-08-01 Thread Buterbaugh, Kevin L
Hi All, Our production cluster is still on GPFS 4.2.3.x, but in preparation for moving to GPFS 5 I have upgraded our small (7 node) test cluster to GPFS 5.0.1-1. I am setting up a new filesystem there using hardware that we recently life-cycled out of our production environment. I “successful
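For reference, a hedged sketch of the kind of mmcrfs invocation under discussion (stanza file name and sizes are placeholders); per the rest of the thread, on the released 5.0.x code the full sub-block count is only obtained when data and metadata use the same block size, as below:

    # GPFS 5 (new-format) filesystem, 4 MiB blocks for both data and metadata
    mmcrfs gpfs5 -F /root/nsd.stanza -B 4M --metadata-block-size 4M -m 2 -r 2 -A yes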

Re: [gpfsug-discuss] Power9 / GPFS

2018-07-27 Thread Buterbaugh, Kevin L
Hi Simon, Have you tried running it with the “--silent” flag, too? Kevin — Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education kevin.buterba...@vanderbilt.edu - (615)875-9633 On Jul 2

Re: [gpfsug-discuss] mmdiag --iohist question

2018-07-23 Thread Buterbaugh, Kevin L
the Spectrum Scale (GPFS) team. "Buterbaugh, Kevin L" ---07/11/2018 10:34:32 PM---Hi All, Quick question about “mmdiag --iohist” that is not documented in the man page … what does it From: "Buterbaugh, Kevin L" <kevin.buterba...@vanderbilt.edu> To: gpfsug main

Re: [gpfsug-discuss] mmhealth - where is the info hiding?

2018-07-19 Thread Buterbaugh, Kevin L
Hi Valdis, Is this what you’re looking for (from an IBMer in response to another question a few weeks back)? assuming 4.2.3 code level this can be done by deleting and recreating the rule with changed settings: # mmhealth thresholds list ### Threshold Rules ### rule_name metric
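A hedged sketch of that delete-and-recreate sequence, using the DataPool_capUtil metric that appears elsewhere in this archive; the default rule name and option spellings below are assumptions, so confirm them with mmhealth thresholds list and the man page first:

    mmhealth thresholds list
    # Drop the existing rule, then recreate it with new limits
    mmhealth thresholds delete DataCapUtil_Rule
    mmhealth thresholds add DataPool_capUtil --errorlevel 95.0 --warnlevel 90.0 --name DataCapUtil_Rule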

Re: [gpfsug-discuss] mmchdisk hung / proceeding at a glacial pace?

2018-07-15 Thread Buterbaugh, Kevin L
in the suspend effort? Might be worth running some quick tracing on the FS manager to see what it’s up to. On July 15, 2018 at 13:27:54 EDT, Buterbaugh, Kevin L <kevin.buterba...@vanderbilt.edu> wrote: Hi All, We are in a partial cluster downtime today to do firmware upgrades on

[gpfsug-discuss] mmchdisk hung / proceeding at a glacial pace?

2018-07-15 Thread Buterbaugh, Kevin L
Hi All, We are in a partial cluster downtime today to do firmware upgrades on our storage arrays. It is a partial downtime because we have two GPFS filesystems: 1. gpfs23 - 900+ TB and which corresponds to /scratch and /data, and which I’ve unmounted across the cluster because it has data rep

[gpfsug-discuss] mmdiag --iohist question

2018-07-11 Thread Buterbaugh, Kevin L
Hi All, Quick question about “mmdiag --iohist” that is not documented in the man page … what does it mean if the client IP address field is blank? That the NSD server itself issued the I/O? Or ??? This only happens occasionally … and the way I discovered it was that our Python script that tak

Re: [gpfsug-discuss] High I/O wait times

2018-07-09 Thread Buterbaugh, Kevin L
e.org> Date: 07/07/2018 11:43 AM Subject: Re: [gpfsug-discuss] High I/O wait times Sent by: gpfsug-discuss-boun...@spectrumscale.org On 07/07/18 01:28, Buterbaugh, Kevin L wrote: [SNIP] >

Re: [gpfsug-discuss] What NSDs does a file have blocks on?

2018-07-09 Thread Buterbaugh, Kevin L
37748736) : [DMD_NSD4 c72f1m5u39ib0,c72f1m5u37ib0] [FILE: /mnt/gpfs3a/data_out/lf SUMMARY INFO] replica1: c72f1m5u37ib0,c72f1m5u39ib0: 5 chunk(s) c72f1m5u39ib0,c72f1m5u37ib0: 5 chunk(s) Thanks and Regards, -Kums From: "Buterbaugh, Kevin L" <kevi

[gpfsug-discuss] What NSDs does a file have blocks on?

2018-07-09 Thread Buterbaugh, Kevin L
Hi All, I am still working on my issue of the occasional high I/O wait times and that has raised another question … I know that I can run mmfileid to see what files have a block on a given NSD, but is there a way to do the opposite? I.e. I want to know what NSDs a single file has its’ blocks o

Re: [gpfsug-discuss] High I/O wait times

2018-07-06 Thread Buterbaugh, Kevin L
. Another possibility for troubleshooting, if you have sufficient free resources: you can just suspend the problematic LUNs in GPFS, as that will remove the write load from them, while still having them service read requests and not affecting users. Regards, Alex On Fri, Jul 6, 2018 at 9

Re: [gpfsug-discuss] High I/O wait times

2018-07-06 Thread Buterbaugh, Kevin L
en there are going to be IOs that are queued and waiting for a thread. Jim On Thursday, July 5, 2018, 9:30:30 PM EDT, Buterbaugh, Kevin L <kevin.buterba...@vanderbilt.edu> wrote: Hi All, First off, my apologies for the delay in responding back to the list … we’ve actually been wor

Re: [gpfsug-discuss] High I/O wait times

2018-07-05 Thread Buterbaugh, Kevin L
ut we've not yet found a smoking gun. The timing and description > of your problem sounded eerily similar to what we're seeing so I'd thought > I'd ask. > > -Aaron > > -- > Aaron Knister > NASA Center for Climate Simulation (Code 606.2) > God

Re: [gpfsug-discuss] High I/O wait times

2018-07-03 Thread Buterbaugh, Kevin L
> From: "Buterbaugh, Kevin L" mailto:kevin.buterba...@vanderbilt.edu>> To:gpfsug main discussion list mailto:gpfsug-discuss@spectrumscale.org>> Date:07/03/2018 05:41 PM Subject:Re: [gpfsug-discuss] High I/O wait times Sent by: gpfsug

Re: [gpfsug-discuss] High I/O wait times

2018-07-03 Thread Buterbaugh, Kevin L
0-8821 sto...@us.ibm.com From: "Buterbaugh, Kevin L" To: gpfsug main discussion list Date: 07/03/2018 03:49 PM Subject: [gpfsug-discuss] High I/O wait times Sent by: gpfsug-discuss-boun...@spectrumscale.org Hi all, We a

[gpfsug-discuss] High I/O wait times

2018-07-03 Thread Buterbaugh, Kevin L
Hi all, We are experiencing some high I/O wait times (5 - 20 seconds!) on some of our NSDs as reported by “mmdiag --iohist” and are struggling to understand why. One of the confusing things is that, while certain NSDs tend to show the problem more than others, the problem is not consistent … i.

Re: [gpfsug-discuss] File system manager - won't change to new node

2018-06-22 Thread Buterbaugh, Kevin L
Hi Bob, Have you tried explicitly moving it to a specific manager node? That’s what I always do … I personally never let GPFS pick when I’m moving the management functions for some reason. Thanks… Kevin On Jun 22, 2018, at 8:13 AM, Oesterlin, Robert mailto:robert.oester...@nuance.com>> wrot
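The explicit move described above is a one-liner; a sketch with assumed names (filesystem gpfs23, target manager node nsd32):

    # Move the filesystem manager role to a chosen node rather than letting GPFS pick
    mmchmgr gpfs23 nsd32
    mmlsmgr gpfs23   # verify where the manager now runs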

Re: [gpfsug-discuss] Capacity pool filling

2018-06-07 Thread Buterbaugh, Kevin L
Hi Uwe, Thanks for your response. So our restore software lays down the metadata first, then the data. While it has no specific knowledge of the extended attributes, it does back them up and restore them. So the only explanation that makes sense to me is that since the inode for the file say

Re: [gpfsug-discuss] Capacity pool filling

2018-06-07 Thread Buterbaugh, Kevin L
7, 2018 at 8:17 AM -0600, "Buterbaugh, Kevin L" <kevin.buterba...@vanderbilt.edu> wrote: Hi All, First off, I’m on day 8 of dealing with two different mini-catastrophes at work and am therefore very sleep deprived and possibly missing something obvious … with that discl

Re: [gpfsug-discuss] Capacity pool filling

2018-06-07 Thread Buterbaugh, Kevin L
t 9:53 AM, Jaime Pinto wrote: > > I think the restore is bringing back a lot of material with atime > 90, so > it is passing through gpfs23data and going directly to gpfs23capacity. > > I also think you may not have stopped the crontab script as you believe you > d

[gpfsug-discuss] Capacity pool filling

2018-06-07 Thread Buterbaugh, Kevin L
Hi All, First off, I’m on day 8 of dealing with two different mini-catastrophes at work and am therefore very sleep deprived and possibly missing something obvious … with that disclaimer out of the way… We have a filesystem with 3 pools: 1) system (metadata only), 2) gpfs23data (the default p

Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7

2018-05-15 Thread Buterbaugh, Kevin L
All, I have to kind of agree with Andrew … it seems that there is a broad range of takes on kernel upgrades … everything from “install the latest kernel the day it comes out” to “stick with this kernel, we know it works.” Related to that, let me throw out this question … what about those who ha

Re: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out

2018-05-11 Thread Buterbaugh, Kevin L
On the other hand, we are very excited by this (from the README): File systems: Traditional NSD nodes and servers can use checksums NSD clients and servers that are configured with IBM Spectrum Scale can use checksums to verify data integrity and detect network cor

Re: [gpfsug-discuss] Node list error

2018-05-10 Thread Buterbaugh, Kevin L
? -B From: gpfsug-discuss-boun...@spectrumscale.org [mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Buterbaugh, Kevin L Sent: Tuesday, May 08, 2018 1:24 PM To: gpfsug main discussion list <gpfsug-discuss@spectrumsc

[gpfsug-discuss] Node list error

2018-05-08 Thread Buterbaugh, Kevin L
Hi All, I can open a PMR for this if necessary, but does anyone know offhand what the following messages mean: 2018-05-08_12:16:39.567-0500: [I] Calling user exit script mmNodeRoleChange: event ccrFileChange, Async command /usr/lpp/mmfs/bin/mmsysmonc. 2018-05-08_12:16:39.719-0500: [I] Calling u

Re: [gpfsug-discuss] Not recommended, but why not?

2018-05-07 Thread Buterbaugh, Kevin L
node, i probably wouldn't limit memory but CPU as this is the more critical resource to prevent expels and other time sensitive issues. sven On Fri, May 4, 2018 at 8:39 AM Buterbaugh, Kevin L <kevin.buterba...@vanderbilt.edu> wrote: Hi All, In doing some research, I have

Re: [gpfsug-discuss] Not recommended, but why not?

2018-05-04 Thread Buterbaugh, Kevin L
general_lab_services] Phone: 55-19-2132-4317 E-mail: ano...@br.ibm.com [IBM] - Original message - From: "Buterbaugh, Kevin L" <kevin.buterba...@vanderbilt.edu> Sent by: gpfsug-discuss-boun...@sp

[gpfsug-discuss] Not recommended, but why not?

2018-05-04 Thread Buterbaugh, Kevin L
Hi All, In doing some research, I have come across numerous places (IBM docs, DeveloperWorks posts, etc.) where it is stated that it is not recommended to run CES on NSD servers … but I’ve not found any detailed explanation of why not. I understand that CES, especially if you enable SMB, can be

[gpfsug-discuss] GPFS GUI - DataPool_capUtil error

2018-04-09 Thread Buterbaugh, Kevin L
Hi All, I’m pretty new to using the GPFS GUI for health and performance monitoring, but am finding it very useful. I’ve got an issue that I can’t figure out. In my events I see: Event name: pool-data_high_error Component: File System Entity type: Pool Entity name: Event time: 3/26/18 4:44:10 PM Me

Re: [gpfsug-discuss] Dual server NSDs

2018-04-04 Thread Buterbaugh, Kevin L
Hi John, Yes, you can remove one of the servers and yes, we’ve done it and yes, the documentation is clear and correct. ;-) Last time I did this we were in a full cluster downtime, so unmounting wasn’t an issue. We were changing our network architecture and so the IP addresses of all NSD ser
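A hedged sketch of the server-list change being described, assuming an NSD named nsd1 currently served by nsd01-ib and nsd02-ib; mmchnsd takes the same stanza format as mmcrnsd, and older releases require the filesystem to be unmounted, as noted above:

    # After this change, nsd1 is served only by nsd01-ib
    cat > /tmp/newservers.stanza <<EOF
    %nsd: nsd=nsd1 servers=nsd01-ib
    EOF
    mmchnsd -F /tmp/newservers.stanza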

[gpfsug-discuss] Local event

2018-04-04 Thread Buterbaugh, Kevin L
Hi All, According to the man page for mmaddcallback: A local event triggers a callback only on the node on which the event occurred, such as mounting a file system on one of the nodes. We have two GPFS clusters here (well, three if you count our small test cluster).

Re: [gpfsug-discuss] mmfind -ls and so forth

2018-03-08 Thread Buterbaugh, Kevin L
Hi Marc, I test in production … just kidding. But - not kidding - I did read the entire mmfind.README, compiled the binary as described therein, and read the output of “mmfind -h”. But what I forgot was that when you run a bash shell script with “bash -x” it doesn’t show you the redirection y

Re: [gpfsug-discuss] mmfind performance

2018-03-07 Thread Buterbaugh, Kevin L
time to complete, even for this special case. -- Marc K of GPFS From: "Buterbaugh, Kevin L" <kevin.buterba...@vanderbilt.edu> To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org> Date: 03/06/2018 01:52 PM Subje

[gpfsug-discuss] mmfind performance

2018-03-06 Thread Buterbaugh, Kevin L
Hi All, In the README for the mmfind command it says: mmfind A highly efficient file system traversal tool, designed to serve as a drop-in replacement for the 'find' command as used against GPFS FSes. And: mmfind is expected to be slower than find on file systems with relatively few inode
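For context, a hedged example of the drop-in usage the README describes (path and predicates are placeholders); mmfind ships as a sample under /usr/lpp/mmfs/samples/ilm and has to be built first, as the other mmfind thread above mentions:

    # find-style traversal driven by the policy engine instead of readdir/stat
    ./mmfind /gpfs23/scratch -type f -size +10G -mtime +180 -ls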

Re: [gpfsug-discuss] Meltdown, Spectre, and impacts on GPFS

2018-03-06 Thread Buterbaugh, Kevin L
be used for priority messages to the Spectrum Scale (GPFS) team. "Buterbaugh, Kevin L" ---01/04/2018 01:11:59 PM---Happy New Year everyone, I’m sure that everyone is aware of Meltdown and Spectre by now … we, like m From: "Buterbaugh, Kevin L" <kevin.buterba..

Re: [gpfsug-discuss] Odd d????????? permissions

2018-02-14 Thread Buterbaugh, Kevin L
Hi John, We had a similar incident happen just a week or so ago here, although in our case it was that certain files within a directory showed up with the question marks, while others didn’t. The problem was simply that the node had been run out of RAM and the GPFS daemon couldn’t allocate mem

Re: [gpfsug-discuss] mmchdisk suspend / stop

2018-02-13 Thread Buterbaugh, Kevin L
were unaware that “major version” firmware upgrades could not be done live on our storage, but we’ve got a plan to work around this this time. Kevin > On Feb 13, 2018, at 7:43 AM, Jonathan Buzzard > wrote: > > On Fri, 2018-02-09 at 15:07 +, Buterbaugh, Kevin L wrote: >> Hi

Re: [gpfsug-discuss] mmchdisk suspend / stop

2018-02-09 Thread Buterbaugh, Kevin L
Hi All, Since several people have made this same suggestion, let me respond to that. We did ask the vendor - twice - to do that. Their response boils down to, “No, the older version has bugs and we won’t send you a controller with firmware that we know has bugs in it.” We have not had a full

Re: [gpfsug-discuss] mmchdisk suspend / stop (Buterbaugh, Kevin L)

2018-02-08 Thread Buterbaugh, Kevin L
+0000 > From: "Buterbaugh, Kevin L" > <kevin.buterba...@vanderbilt.edu> > To: gpfsug main discussion list > <gpfsug-discuss@spectrumscale.org> > Subject: [gpfsug-discuss] mmchdisk suspend / stop > Message-ID: > <8dca682d-9850-4c03-8930-

[gpfsug-discuss] mmchdisk suspend / stop

2018-02-08 Thread Buterbaugh, Kevin L
Hi All, We are in a bit of a difficult situation right now with one of our non-IBM hardware vendors (I know, I know, I KNOW - buy IBM hardware! ) and are looking for some advice on how to deal with this unfortunate situation. We have a non-IBM FC storage array with dual-“redundant” controllers.
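For reference, a hedged sketch of the suspend/stop options being asked about (filesystem gpfs23 and NSD names are placeholders); suspend only stops new block allocation while reads continue, whereas stop halts I/O to the disks entirely and so relies on replication for availability:

    # Option A: stop new block allocation only (reads still served)
    mmchdisk gpfs23 suspend -d "nsd_ctrlA_1;nsd_ctrlA_2"
    # Option B: halt all I/O to the disks (needs replication to stay available)
    mmchdisk gpfs23 stop -d "nsd_ctrlA_1;nsd_ctrlA_2"
    # After the firmware work, undo whichever was used
    mmchdisk gpfs23 resume -d "nsd_ctrlA_1;nsd_ctrlA_2"   # undoes suspend
    mmchdisk gpfs23 start -d "nsd_ctrlA_1;nsd_ctrlA_2"    # restarts stopped disks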

Re: [gpfsug-discuss] Metadata only system pool

2018-01-23 Thread Buterbaugh, Kevin L
ibm.com --- IBM Deutschland Business & Technology Services GmbH / Managing directors: Thomas Wolter, Sven Schooß / Registered office: Ehningen / Registry court: Amtsgericht Stuttgart, HRB 17122 From: "Buterbaugh, Kevin L"

Re: [gpfsug-discuss] Metadata only system pool

2018-01-23 Thread Buterbaugh, Kevin L
re allocated they cannot be de-allocated. Fred Fred Stock | IBM Pittsburgh Lab | 720-430-8821 sto...@us.ibm.com From: "Buterbaugh, Kevin L" To: gpfsug main discussion list <gpfs

[gpfsug-discuss] Metadata only system pool

2018-01-23 Thread Buterbaugh, Kevin L
Hi All, I was under the (possibly false) impression that if you have a filesystem where the system pool contains metadata only then the only thing that would cause the amount of free space in that pool to change is the creation of more inodes … is that correct? In other words, given that I hav

Re: [gpfsug-discuss] GPFS best practises : end user standpoint

2018-01-17 Thread Buterbaugh, Kevin L
n.buzz...@strath.ac.uk> wrote: On Tue, 2018-01-16 at 16:35 +, Buterbaugh, Kevin L wrote: [SNIP] I am quite sure someone storing 1PB has to pay more than someone storing 1TB, so why should someone storing 20 million files not have to pay more than someone storing 100k files? Becaus

Re: [gpfsug-discuss] GPFS best practises : end user standpoint

2018-01-16 Thread Buterbaugh, Kevin L
Hi Jonathan, Comments / questions inline. Thanks! Kevin > On Jan 16, 2018, at 10:08 AM, Jonathan Buzzard > wrote: > > On Tue, 2018-01-16 at 15:47 +, Carl Zetie wrote: >> Maybe this would make for a good session at a future user group >> meeting -- perhaps as an interactive session? IBM c

Re: [gpfsug-discuss] Meltdown, Spectre, and impacts on GPFS

2018-01-08 Thread Buterbaugh, Kevin L
s to the Spectrum Scale (GPFS) team. "Buterbaugh, Kevin L" ---01/04/2018 01:11:59 PM---Happy New Year everyone, I’m sure that everyone is aware of Meltdown and Spectre by now … we, like m From: "Buterbaugh, Kevin L" <kevin.buterba...@vanderbilt.edu>

Re: [gpfsug-discuss] Password to GUI forgotten

2018-01-05 Thread Buterbaugh, Kevin L
ale (GPFS) team. From: "Buterbaugh, Kevin L" <kevin.buterba...@vanderbilt.edu> To: "Hanley, Jesse A." <hanle...@ornl.gov> Cc: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org> Date: 12/19/2017

[gpfsug-discuss] Meltdown, Spectre, and impacts on GPFS

2018-01-04 Thread Buterbaugh, Kevin L
Happy New Year everyone, I’m sure that everyone is aware of Meltdown and Spectre by now … we, like many other institutions, will be patching for it at the earliest possible opportunity. Our understanding is that the most serious of the negative performance impacts of these patches will be for

Re: [gpfsug-discuss] Password to GUI forgotten

2017-12-18 Thread Buterbaugh, Kevin L
mscale.org> on behalf of "Buterbaugh, Kevin L" <kevin.buterba...@vanderbilt.edu> Reply-To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org> Date: Monday, December 18, 2017 at 2:52 PM To: gpfsug main discussion list <gpfsug-discuss@spectru

Re: [gpfsug-discuss] FW: Spectrum Scale 5.0 now available on Fix Central

2017-12-18 Thread Buterbaugh, Kevin L
Hi All, GPFS 5.0 was announced on Friday … and today: IBM Spectrum Scale : IBM Spectrum Scale: NFS operations may fail with IO-Error

Re: [gpfsug-discuss] Password to GUI forgotten

2017-12-18 Thread Buterbaugh, Kevin L
From: gpfsug-discuss-boun...@spectrumscale.org <gpfsug-discuss-boun...@s

[gpfsug-discuss] mmbackup log file size after GPFS 4.2.3.5 upgrade

2017-12-14 Thread Buterbaugh, Kevin L
Hi All, 26 mmbackupDors-20171023.log 26 mmbackupDors-20171024.log 26 mmbackupDors-20171025.log 26 mmbackupDors-20171026.log 2922752 mmbackupDors-20171027.log 137 mmbackupDors-20171028.log 59328 mmbackupDors-20171029.log 2748095 mmbackupDors

Re: [gpfsug-discuss] Password to GUI forgotten

2017-12-06 Thread Buterbaugh, Kevin L
cuss-boun...@spectrumscale.org> on behalf of "Buterbaugh, Kevin L" <kevin.buterba...@vanderbilt.edu> Reply-To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org> Date: Wednesday, December 6, 2017 at 5:15 PM To: gpfsug main discussion list <gpfsug

Re: [gpfsug-discuss] Password to GUI forgotten

2017-12-06 Thread Buterbaugh, Kevin L
using chuser. /usr/lpp/mmfs/gui/cli/chuser Usage is as follows (where userID = admin) chuser userID {-p | -l | -a | -d | -g | --expirePassword} [-o ] Josh K On Dec 6, 2017, at 4:56 PM, Buterbaugh, Kevin L <kevin.buterba...@vanderbilt.edu> wrote: Hi All, So this is emb
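Putting Josh's usage line into a concrete, hedged example for the forgotten-admin-password case below it (run on a GUI node):

    # Reset the GUI "admin" user's password from the CLI
    /usr/lpp/mmfs/gui/cli/chuser admin -p NewSecretPassword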

[gpfsug-discuss] Password to GUI forgotten

2017-12-06 Thread Buterbaugh, Kevin L
Hi All, So this is embarrassing to admit but I was playing around with setting up the GPFS GUI on our test cluster earlier this fall. However, I was gone pretty much the entire month of November for a combination of vacation and SC17 and the vacation was so relaxing that I’ve forgotten the adm

Re: [gpfsug-discuss] 5.0 features?

2017-11-29 Thread Buterbaugh, Kevin L
Simon is correct … I’d love to be able to support a larger block size for my users who have sane workflows while still not wasting a ton of space for the biomedical folks…. ;-) A question … will the new, much improved, much faster mmrestripefs that was touted at SC17 require a filesystem that w
