Re: [Linux-cluster] Feature Request: gfs_fsck has a yes to all response.

2009-02-23 Thread Shawn Hood
when 100,000 > potential blocks/200,000 keystrokes are concerned!! :-) > > Is there any way for the user to say "Yes to all"? > > At least if the default choice was "Yes" when the Enter key was pressed, the > user > could hold down the Enter key until the entire list of blocks had been fixed. > > Regards, > > Stewart -- Shawn Hood
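A common workaround for the prompt flood described above, sketched here with a placeholder device path, is to feed the confirmations from yes(1); where the installed gfs_fsck documents a -y switch (answer yes to all questions), that is the cleaner option:

  yes | gfs_fsck /dev/vg_data/lv_gfs   # device path is illustrative; use the real clustered LV
  gfs_fsck -y /dev/vg_data/lv_gfs      # if the installed gfs_fsck supports -y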

Re: [Linux-cluster] fencing problem

2008-10-16 Thread Shawn Hood
-- Shawn Hood 910.670.1819 m

[Linux-cluster] fencing problem

2008-10-16 Thread Shawn Hood
All, I'll provide some more config details a little later, but thought maybe some cursory information could yield a response. Simple four node cluster running RHEL4U7, latest RHEL cluster packages. Three GFS filesystems. This morning one of our nodes remained responsive, but was having some pro
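When one node wedges like this, a reasonable first check (the node name below is illustrative) is membership and the fence/DLM service states as seen from a healthy node, followed by a manual fence if the stuck node will not leave cleanly:

  cman_tool nodes                 # membership as seen from this node
  cman_tool services              # fence domain and DLM lock space states
  fence_node node3.example.com    # invoke the fence agent configured for that node in cluster.conf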

[Linux-cluster] Re: GFS hanging on 3 node RHEL4 cluster

2008-10-13 Thread Shawn Hood
High priority support request, I mean. On Mon, Oct 13, 2008 at 5:32 PM, Shawn Hood <[EMAIL PROTECTED]> wrote: > As a heads up, I'm about to open a high priority bug on this. It's > crippling us. Also, I meant to say it is a 4 node cluster, not a 3 > node. > > Plea

[Linux-cluster] Re: GFS hanging on 3 node RHEL4 cluster

2008-10-13 Thread Shawn Hood
l counters commands with the support request. Shawn On Tue, Oct 7, 2008 at 1:40 PM, Shawn Hood <[EMAIL PROTECTED]> wrote: > More info: > > All filesystems mounted using noatime,nodiratime,noquota. > > All filesystems report the same data from gfs_tool gettune: > > limit1 =
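For reference, the per-mount statistics mentioned above come from gfs_tool; the mount point here is a placeholder:

  gfs_tool gettune /mnt/gfs1     # current tunable values for this mount
  gfs_tool counters /mnt/gfs1    # glock/lock activity counters useful in a support case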

Re: [Linux-cluster] GFS reserved blocks?

2008-10-13 Thread Shawn Hood
and delete a lot >> of files because that is the loading area they used when new data >> comes in. In the last month I have seen it go up to 70 to 85% used but >> it usually comes back down to about 50% within about 24 hours. >> Hopefully they will find a fix for this soon.

Re: [Linux-cluster] GFS reserved blocks?

2008-10-13 Thread Shawn Hood
# df -h /l1load1
> Filesystem                           Size  Used Avail Use% Mounted on
> /dev/mapper/l1load1--vg-l1load1--lv  1.7T  1.3T  468G  74% /l1load1
> [EMAIL PROTECTED] ~]# du -sh /l1load1
> 18G   /l1load1
>
> Jason Huddleston, RHCE
>
> PS-

[Linux-cluster] GFS reserved blocks?

2008-10-13 Thread Shawn Hood
Does GFS reserve blocks for the superuser, a la ext3's "Reserved block count"? I've had a ~1.1TB FS report that it's full with df reporting ~100GB remaining. -- Shawn Hood 910.670.1819 m
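As far as I know GFS has no ext3-style reserved-blocks percentage; a more common cause of this df-versus-du gap on GFS1 is metadata that has been freed but not yet reclaimed. A hedged sketch, with an illustrative mount point:

  gfs_tool df /mnt/gfs1        # per-resource-group breakdown of data vs. metadata blocks
  gfs_tool reclaim /mnt/gfs1   # reclaim unused on-disk metadata so the space shows up as free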

Re: [Linux-cluster] GFS lockups ?

2008-10-08 Thread Shawn Hood
See my thread from yesterday. Same general thing, but the dlm kernel threads were eating cycles. Sent from my iPhone On Oct 8, 2008, at 7:24 PM, Janar Kartau <[EMAIL PROTECTED]> wrote: Hi, Recently our three-node webserver cluster started randomly crashing. I never had time to investigate w

[Linux-cluster] Re: GFS hanging on 3 node RHEL4 cluster

2008-10-07 Thread Shawn Hood
And for another follow-up in the interest of full disclosure, I don't recall the specifics, but it seems dlm_recvd was eating up all the CPU cycles on one of the machines, and others seemed to follow suit shortly thereafter. Sorry for the flood! Shawn

[Linux-cluster] Re: GFS hanging on 3 node RHEL4 cluster

2008-10-07 Thread Shawn Hood
, Oct 7, 2008 at 1:33 PM, Shawn Hood <[EMAIL PROTECTED]> wrote: > Problem: > It seems that IO on one machine in the cluster (not always the same > machine) will hang and all processes accessing clustered LVs will > block. Other machines will follow suit shortly thereafter until the &g

[Linux-cluster] GFS hanging on 3 node RHEL4 cluster

2008-10-07 Thread Shawn Hood
-- Shawn

Re: [Linux-cluster] GFS frozen again

2008-08-18 Thread Shawn Hood
Doh! I missed the part about there being nothing in the logs. ;) Shawn On Mon, Aug 18, 2008 at 11:43 AM, Shawn Hood <[EMAIL PROTECTED]> wrote: > Could you post the errors from syslog/dmesg? > > Shawn > > > On Mon, Aug 18, 2008 at 11:35 AM, Brett Cave <[EMAIL PROT

Re: [Linux-cluster] GFS frozen again

2008-08-18 Thread Shawn Hood
come unavailable. The entire system locks up when this > happens, and the only option I have is to reset all nodes in the > cluster to start up the cluster again. No errors in logs, nothing out > of the ordinary that I can see.

Re: [Linux-cluster] Can I create Redhat Cluster without Fence and SAN

2008-08-12 Thread Shawn Hood
See http://sources.redhat.com/cluster/faq.html 2008/8/12 haresh ghoghari <[EMAIL PROTECTED]> > > Dear Friends, > > I have Redhat 5, running on 3 Servers. > > I am creating Redhat Cluster for NFS. > > Thanks

Re: [Linux-cluster] creating file locks on a gfs volume

2008-08-12 Thread Shawn Hood
is to allow an application we are developing to create file locks > with different lock modes, based on what the app is doing (e.g. CR / > CW / PW / EX locks).

Re: [Linux-cluster] Cluster Shutdown - ideas?

2008-08-12 Thread Shawn Hood
tioned earlier. > > Does anyone have any preferences, ideas or other options we might consider? > > Chrissie -- Shawn Hood 910.670.1819 m
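For a whole-cluster shutdown on RHEL4, the commonly used ordering is to stop consumers before infrastructure on every node; this is only a sketch using the init scripts shipped with the RHEL4 cluster packages, not a definitive procedure:

  service rgmanager stop   # stop managed services first
  service gfs stop         # unmount GFS filesystems listed in fstab
  service clvmd stop
  service fenced stop
  service cman stop
  service ccsd stop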

[Linux-cluster] Problems after gfs_resize

2008-08-07 Thread Shawn Hood
Yesterday, I resized (+300GB) a clustered logical volume using lvresize. I then executed gfs_grow. All went well. Today, however, I'm getting the error message 'attempt to access beyond end of device.' See below. I wanted to give some cursory information in case this is a known issue with a qu
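For context, the grow sequence being described is roughly the following; the device, mount point, and size are placeholders, and the -T dry run assumes the installed gfs_grow supports it:

  lvresize -L +300G /dev/vg_data/lv_gfs   # extend the clustered LV
  gfs_grow -T /mnt/gfs1                   # test run: report what would be done without writing
  gfs_grow /mnt/gfs1                      # grow the mounted GFS filesystem into the new space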

Re: [Linux-cluster] quota and noatime configurations

2008-07-07 Thread Shawn Hood
ome gfs2 rw,noatime,hostdata=jid=0:id=196610:first=1 0 0 > > is this normal? > > thanks, -- Shawn Hood 910.670.1819 m
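The options under discussion would normally be set in /etc/fstab; this sketch shows a GFS1 mount like the ones described elsewhere in this archive, with an illustrative device and mount point:

  /dev/vg_data/lv_gfs  /mnt/gfs1  gfs  noatime,nodiratime,noquota  0 0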

[Linux-cluster] Re: cluster instability

2008-06-16 Thread Shawn Hood
And here's my cluster.conf:

[Linux-cluster] cluster instability

2008-06-16 Thread Shawn Hood
All, This message was sent out to my office, so the voice may seem a bit odd. We have a 4 node cluster running RHEL4U6 on Dell Poweredge 1950s. Fencing is done via DRAC. Using packages (from RHN): cman-kernel-smp-2.6.9-53.13 cman-1.0.17-0.el4_6.5 ccs-1.0.11-1.el4_6.1 fence-1.32.50-2.el4_6.1 lv

Re: [Linux-cluster] red hat enterprise

2008-04-15 Thread Shawn Hood
I hate to break it to you, but this kind of message isn't going to get you anywhere. I can assure you that many who read this message are thinking RTFM (see http://en.wikipedia.org/wiki/RTFM). You're going to have to hit the books like the rest of us. Shawn Hood 2008/4/15 Lexi Herre

Re: [Linux-cluster] how can I share a logical volume?

2008-04-14 Thread Shawn Hood
As far as I know, you should be able to at least SEE the logical volume as long as there is a path to the physical volumes on the other nodes. Are you able to see the same block devices (eg /dev/sd?) on the other nodes? Shawn Hood 2008/4/14 nch <[EMAIL PROTECTED]>: > > Hell
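A quick way to confirm that every node sees the same storage, and that the volume group is clustered so clvmd propagates metadata changes, is sketched below; the VG name is a placeholder:

  pvscan; vgscan; lvscan     # should list the same PVs, VGs, and LVs on every node
  vgs -o vg_name,vg_attr     # a 'c' in the attribute string marks a clustered VG
  vgchange -cy myvg          # mark the VG clustered (clvmd must be running on all nodes)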

[Linux-cluster] performance using low-latency network for cluster/GFS communication

2008-02-22 Thread Shawn Hood
, how inadvisable is it to use a shared gigabit network that is mildly to moderately utilized for these communications? It seems our network performance may be hurting our overall cluster performance, causing nodes to drop out of the cluster. Shawn Hood

Re: [Linux-cluster] Initiating transition message

2008-02-21 Thread Shawn Hood
While this explains the situation somewhat, I was trying to bring a bit more clarity to the problem (without examining the source). What exactly is happening when a node 'initiates transition'? Shawn On Thu, Feb 21, 2008 at 3:30 AM, Christine Caulfield <[EMAIL PROTECTED]> wrote: >

[Linux-cluster] Initiating transition message

2008-02-20 Thread Shawn Hood
Though one instance of the 'Initiating transition' message seems to be normal, what could the behavior shown in the following log indicate? What exactly is happening during an 'Initiating transition' message? Shawn Feb 14 15:25:55 odin kernel: CMAN: Initiating transition, generation 7 Feb 14 15:26:01
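When chasing transitions like the ones logged above, the membership view on each node can be compared with cman_tool:

  cman_tool status   # quorum, votes, and overall membership state on this node
  cman_tool nodes    # which nodes this node currently believes are cluster members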

[Linux-cluster] performance tuning

2008-02-04 Thread Shawn Hood
Hey all, My company has gone live with a GFS cluster this morning. It is a 4 node RHEL4U6 cluster, running RHCS and GFS. It mounts an Apple 4.5TB XRAID configured as RAID5, whose physical volumes are combined into one large volume group. From this volume group, five striped LVMs (striped across

[Linux-cluster] Speeding up GFS on 1 node cluster

2008-02-01 Thread Shawn Hood
I currently have several machines I'm about to cluster using RHCS, utilizing shared storage (XRAID w/ GFS). I currently have another existing XRAID which is mounted on one machine. I need to move all this data from this existing XRAID to the new GFS XRAID. Right now, I have a one-node cluster ru
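For a strictly single-node copy like this, GFS can be mounted with the no-op lock module so no DLM overhead is involved; the device and mount point are placeholders, and the filesystem must not be mounted on any other node while lock_nolock is in use:

  mount -t gfs -o lockproto=lock_nolock /dev/vg_old/lv_xraid /mnt/oldraid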

Re: [Linux-cluster] with gfs2, permission denied on first attempt to run an executable

2007-12-12 Thread Shawn Hood
nied" problem. > > -- > Linux-cluster mailing list > Linux-cluster@redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > -- Shawn Hood (910) 670-1819 Mobile -- Linux-cluster mailing list Linux-cluster@redhat.com https://www.redhat.com/mailman/listinfo/linux-cluster

[Linux-cluster] journal size, performance

2007-12-03 Thread Shawn Hood
Hey folks, I'm in the process of implementing a GFS cluster. A quick overview of our hardware: 1 Apple xraid (with plans to bring the two others into the SAN after testing), 3 Dell (2x 1950 / 1x 2950) boxes, running RHEL4u5, Qlogic SANbox, Qlogic HBAs lvscan of SAN volume group ACTIVE'/
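For reference, journal count and size are set at gfs_mkfs time (journals can be added later with gfs_jadd, but not shrunk); a hedged example for a three-node cluster with 128 MB journals, where the cluster name, filesystem name, and device are placeholders:

  gfs_mkfs -p lock_dlm -t mycluster:gfs01 -j 3 -J 128 /dev/vg_san/lv_gfs01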

[Linux-cluster] Fwd: cluster post

2007-11-16 Thread Shawn Hood
ent, Bulletproof Linux -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Shawn Hood Sent: Tuesday, November 13, 2007 9:59 PM To: linux-cluster@redhat.com Subject: [Linux-cluster] howdy Hey folks, Just thought I'd introduce myself. I found this list while

[Linux-cluster] howdy

2007-11-13 Thread Shawn Hood
umentation) related to RHCS/GFS? Are there any books that are imperative reads on the concepts of highly-available infrastructure? Shawn Hood