> ...when 100,000 potential blocks / 200,000 keystrokes are concerned!! :-)
>
> Is there any way for the user to say "Yes to all"?
>
> At least if the default choice was "Yes" when the Enter key was pressed,
> the user could hold down the Enter key until the entire list of blocks
> had been fixed.
>
> Regards,
>
> Stewart
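A side note on the prompting described above: gfs_fsck has -y and -n switches that answer yes or no to every question, so a repair pass over a large filesystem shouldn't need any keystrokes. A minimal sketch, assuming the filesystem is unmounted on every node and using a made-up device path:

# gfs_fsck -y /dev/vg_cluster/lv_gfs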
--
Shawn Hood
All,
I'll provide some more config details a little later, but thought
maybe some cursory information could yield a response. Simple four
node cluster running RHEL4U7, latest RHEL cluster packages. Three GFS
filesystems. This morning one of our nodes remained responsive, but
was having some pro
High priority support request, I mean.
On Mon, Oct 13, 2008 at 5:32 PM, Shawn Hood <[EMAIL PROTECTED]> wrote:
> As a heads up, I'm about to open a high priority bug on this. It's
> crippling us. Also, I meant to say it is a 4 node cluster, not a 3
> node.
>
> Plea
l counters commands with the support request.
Shawn
On Tue, Oct 7, 2008 at 1:40 PM, Shawn Hood <[EMAIL PROTECTED]> wrote:
> More info:
>
> All filesystems mounted using noatime,nodiratime,noquota.
>
> All filesystems report the same data from gfs_tool gettune:
>
> limit1 =
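The gettune output is cut off above; for reference, the tunables are read and written per mount point with gfs_tool. A rough sketch, with /mnt/gfs standing in for any of the three filesystems:

# gfs_tool gettune /mnt/gfs
# gfs_tool settune /mnt/gfs <tunable> <value>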
>> ...and delete a lot
>> of files because that is the loading area they used when new data
>> comes in. In the last month I have seen it go up to 70 to 85% used but
>> it usually comes back down to about 50% within about 24 hours.
>> Hopefully they will find a fix for this soon.
> # df -h /l1load1
> Filesystem            Size  Used Avail Use% Mounted on
> /dev/mapper/l1load1--vg-l1load1--lv
>                       1.7T  1.3T  468G  74% /l1load1
> [EMAIL PROTECTED] ~]# du -sh /l1load1
> 18G     /l1load1
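A hedged aside on the df/du gap shown above: gfs_tool can report how GFS is actually using its blocks, per resource group, and (if I remember the subcommand right) reclaim unlinked metadata blocks that deletes leave behind. Sketch only, reusing the mount point from the output:

# gfs_tool df /l1load1
# gfs_tool reclaim /l1load1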
>
>
> Jason Huddleston, RHCE
>
> PS-
Does GFS reserve blocks for the superuser, a la ext3's "Reserved block
count"? I've had a ~1.1TB FS report that it's full with df reporting
~100GB remaining.
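For comparison, ext3's reserved block count is visible with tune2fs, and GFS's own accounting can be read back with gfs_tool df; a quick sketch, device and mount point hypothetical:

# tune2fs -l /dev/sdb1 | grep -i 'reserved block'
# gfs_tool df /mnt/gfs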
--
Shawn Hood
910.670.1819 m
See my thread from yesterday. Same general thing, but the dlm kernel
threads were eating cycles.
Sent from my iPhone
On Oct 8, 2008, at 7:24 PM, Janar Kartau <[EMAIL PROTECTED]> wrote:
Hi,
Recently our three-node webserver cluster started randomly crashing. I
never had time to investigate w
And for another follow-up in the interest of full disclosure, I don't recall
the specifics, but it seems dlm_recvd was eating up all the CPU cycles on
one of the machines, and others seemed to follow suit shortly thereafter.
Sorry for the flood!
Shawn
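In case it helps anyone reproduce the observation, the offending kernel threads are easy to spot from userspace; a rough sketch:

# top -b -n 1 | head -25
# ps -eo pid,pcpu,stat,comm --sort=-pcpu | grep -E 'dlm|gfs' | head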
On Tue, Oct 7, 2008 at 1:33 PM, Shawn Hood <[EMAIL PROTECTED]> wrote:
> Problem:
> It seems that IO on one machine in the cluster (not always the same
> machine) will hang and all processes accessing clustered LVs will
> block. Other machines will follow suit shortly thereafter until the
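When processes hang on clustered LVs like this, a first step that usually helps is listing the D-state (uninterruptible) tasks and dumping their kernel stacks; a sketch, assuming magic sysrq is enabled:

# ps -eo state,pid,wchan:32,comm | awk '$1 == "D"'
# echo w > /proc/sysrq-trigger

(The blocked-task stack traces then show up in dmesg/syslog.)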
--
Shawn
Doh! I missed the part about there being nothing in the logs. ;)
Shawn
On Mon, Aug 18, 2008 at 11:43 AM, Shawn Hood <[EMAIL PROTECTED]> wrote:
> Could you post the errors from syslog/dmesg?
>
> Shawn
>
>
> On Mon, Aug 18, 2008 at 11:35 AM, Brett Cave <[EMAIL PROT
> ...become unavailable. The entire system locks up when this
> happens, and the only option I have is to reset all nodes in the
> cluster to start up the cluster again. No errors in logs, nothing out
> of the ordinary that I can see.
>
See http://sources.redhat.com/cluster/faq.html
2008/8/12 haresh ghoghari <[EMAIL PROTECTED]>
>
> Dear Friends,
>
> I have Red Hat 5, running on 3 servers.
>
> I am creating a Red Hat Cluster for NFS.
>
> Thanks
>
> ...is to allow an application we are developing to create file locks
> with different lock modes, based on what the app is doing (e.g. CR /
> CW / PW / EX locks).
> ...mentioned earlier.
>
> Does anyone have any preferences, ideas or other options we might consider?
>
> Chrissie
--
Shawn Hood
910.670.1819 m
Yesterday, I resized (+300GB) a clustered logical volume using lvresize. I
then executed gfs_grow. All went well. Today, however, I'm getting the
error message 'attempt to access beyond end of device.' See below. I
wanted to give some cursory information in case this is a known issue with a
qu
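For reference, the sequence was roughly the following, with placeholder names; gfs_grow is run once, on one node, against the mounted filesystem:

# lvresize -L +300G /dev/vg_san/lv_gfs
# gfs_grow /mnt/gfs

Since the LV is clustered, it may also be worth confirming with lvs on every node that they all see the new size before and after growing.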
> ...gfs2 rw,noatime,hostdata=jid=0:id=196610:first=1 0 0
>
> is this normal?
>
> thanks,
--
Shawn Hood
910.670.1819 m
And here's my cluster.conf:
All,
This message was sent out to my office, so the voice may seem a bit
odd. We have a 4 node cluster running RHEL4U6 on Dell Poweredge
1950s. Fencing is done via DRAC.
Using packages (from RHN):
cman-kernel-smp-2.6.9-53.13
cman-1.0.17-0.el4_6.5
ccs-1.0.11-1.el4_6.1
fence-1.32.50-2.el4_6.1
lv
I hate to break it to you, but this kind of message isn't going to get
you anywhere. I can assure you that many who read this message are
thinking RTFM (see http://en.wikipedia.org/wiki/RTFM). You're going
to have to hit the books like the rest of us.
Shawn Hood
2008/4/15 Lexi Herre
As far as I know, you should be able to at least SEE the logical
volume as long as there is a path to the physical volumes on the other
nodes. Are you able to see the same block devices (e.g. /dev/sd?) on
the other nodes?
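A sketch of the checks I mean, run on each node; these are all stock LVM2 and kernel interfaces:

# cat /proc/partitions
# pvscan
# lvscan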
Shawn Hood
2008/4/14 nch <[EMAIL PROTECTED]>:
>
> Hello,
...how inadvisable is it to use a shared gigabit
network that is mildly to moderately utilized for these
communications? It seems our network performance may be hurting our
overall cluster performance, causing nodes to drop out of the cluster.
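As far as I know, cman simply binds to whatever interface the cluster node names resolve to, so the usual way to keep heartbeat traffic off a shared network is to point the names used in cluster.conf at addresses on a dedicated NIC. A hedged sketch, hostnames and addresses made up:

# cat >> /etc/hosts <<'EOF'
10.10.10.1   node1-priv
10.10.10.2   node2-priv
EOF

The clusternode name= entries in cluster.conf would then use node1-priv, node2-priv, and so on.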
Shawn Hood
While this explains the situation somewhat, I was trying to bring a
bit more clarity to the problem (without examining the source). What
exactly is happening when a node 'initiates transition'?
Shawn
On Thu, Feb 21, 2008 at 3:30 AM, Christine Caulfield
<[EMAIL PROTECTED]> wrote:
>
Though one instance of the 'Initiating transition' message seems to be
normal, what could the behavior shown in the following log indicate?
What exactly is happening during an 'Initiating transition' message?
Shawn
Feb 14 15:25:55 odin kernel: CMAN: Initiating transition, generation 7
Feb 14 15:26:01
Hey all,
My company has gone live with a GFS cluster this morning. It is a 4
node RHEL4U6 cluster, running RHCS and GFS. It mounts an Apple 4.5TB
XRAID configured as RAID5, whose physical volumes are combined into
one large volume group. From this volume group, five striped LVMs
(striped across
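For anyone curious what the striped layout looks like in LVM terms, a generic sketch (names, sizes, and stripe counts are made up, not our actual configuration):

# pvcreate /dev/sdb /dev/sdc
# vgcreate vg_xraid /dev/sdb /dev/sdc
# lvcreate -i 2 -I 64 -L 500G -n lv_data vg_xraid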
I currently have several machines I'm about to cluster using RHCS,
utilizing shared storage (XRAID w/ GFS). I currently have another
existing XRAID which is mounted on one machine. I need to move all
this data from this existing XRAID to the new GFS XRAID. Right now, I
have a one-node cluster ru
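The copy itself is the straightforward part; a rough sketch of how I'd expect to move the data once both arrays are visible on the same node (paths hypothetical, and the trailing slashes matter to rsync):

# rsync -avH --progress /mnt/old_xraid/ /mnt/new_gfs/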
> ...denied" problem.
>
--
Shawn Hood
(910) 670-1819 Mobile
Hey folks,
I'm in the process of implementing a GFS cluster. A quick overview of our hardware:
1 Apple XRAID (with plans to bring the two others into the SAN after testing)
3 Dell (2x 1950 / 1x 2950) boxes, running RHEL4u5
Qlogic SANbox
Qlogic HBAs
lvscan of SAN volume group
ACTIVE'/
ent, Bulletproof Linux
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Shawn Hood
Sent: Tuesday, November 13, 2007 9:59 PM
To: linux-cluster@redhat.com
Subject: [Linux-cluster] howdy
Hey folks,
Just thought I'd introduce myself. I found this list while
umentation) related to RHCS/GFS? Are there any books
that are imperative reads on the concepts of highly-available
infrastructure?
Shawn Hood