I currently have several machines that I'm about to cluster using RHCS,
with shared storage (an XRAID running GFS). I also have an existing
XRAID that is mounted on one machine, and I need to move all of the data
from that existing XRAID to the new GFS XRAID. Right now, I
have a one-node cluster ru
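A rough sketch of one way to do that copy once both arrays are visible from the same node (device names and mount points below are made up for illustration; the GFS filesystem must already be created and mounted):

    # old XRAID mounted read-only alongside the new GFS mount
    mount -o ro /dev/sdc1 /mnt/old-xraid
    mount -t gfs /dev/sdd1 /mnt/gfs-xraid
    # copy preserving ownership, permissions and hard links; re-run to catch changes
    rsync -aH --numeric-ids --progress /mnt/old-xraid/ /mnt/gfs-xraid/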
Hi everyone!
How about this Red Hat Cluster Suite 5.1 configuration, if I have, for example, 2 nodes,
test1 and test2:
Network is configured like this:
Production network:
test1   ip=10.10.10.5
test2   ip=10.10.10.6
Private heartbeat network:
test1hb ip=192.168.0.2 (machine = test1)
test2hb ip=192.168.0.3
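One way to make cman heartbeat over the private 192.168.0.x network is to list the nodes in cluster.conf by their heartbeat hostnames (test1hb/test2hb, resolvable via /etc/hosts on both machines). A minimal two-node sketch, with fencing left out only to keep the example short (real fencing is needed before production use):

    <?xml version="1.0"?>
    <cluster name="testcluster" config_version="1">
      <cman two_node="1" expected_votes="1"/>
      <clusternodes>
        <clusternode name="test1hb" nodeid="1" votes="1"/>
        <clusternode name="test2hb" nodeid="2" votes="1"/>
      </clusternodes>
      <fencedevices/>
      <rm/>
    </cluster>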
Gordan Bobic <[EMAIL PROTECTED]>:
> For more meaningful results, try iozone.
>
> Gordan
>
Yeah, iozone -a is pretty easy. For anyone who's interested, here are some graphs from
results I got a couple of weeks ago while running it on 5 nodes at once:
https://mywebspace.wisc.edu/bpkroth/web/fs-cluster/fs-clust
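For anyone who hasn't run it before, a couple of example invocations on a mounted GFS (or NFS) filesystem, with the file name and sizes chosen arbitrarily -- -a walks the whole automatic test matrix, while the second line is a quicker throughput-only run:

    cd /mnt/gfs && iozone -a
    # quicker: 1 GB file, 1 MB records, write/rewrite and read/reread only
    iozone -s 1g -r 1m -i 0 -i 1 -f /mnt/gfs/iozone.tmp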
For more meaningful results, try iozone.
Gordan
[EMAIL PROTECTED] wrote:
Ah, wonderful day knee deep in fibre channel.
Does anyone know how I can test NFS speeds in a similar fashion to hdparm -tT?
All the testing I've ever done was with hdparm so that's my reference point.
Mike
On Thu, 31
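For the NFS case, one crude stand-in for hdparm -tT is timing dd against a file on the mount; the path and sizes below are only an example, and dropping the page cache between the write and the read needs root:

    # sequential write: 1 GB file on the NFS mount
    dd if=/dev/zero of=/mnt/nfs/ddtest bs=1M count=1024 conv=fsync
    # drop the client cache so the read really comes over the wire
    echo 3 > /proc/sys/vm/drop_caches
    # sequential read
    dd if=/mnt/nfs/ddtest of=/dev/null bs=1M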
[EMAIL PROTECTED] wrote:
I looked at it before, but it seemed like I'd have to spend some time learning
it, as I wasn't able to find any example commands for basic drive speed
stats.
Mike
I can't test it right now, but I think it should just be
bonnie -s 2000
the number being any size of
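If it's the bonnie++ variant you end up with, a sketch of a run against an NFS mount might look like this (directory and user are just examples; -s is the working file size in MB and should be at least twice RAM so the cache doesn't hide the real numbers):

    bonnie++ -d /mnt/nfs -s 4096 -u nobody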
I looked at it before, but it seemed like I'd have to spend some time learning
it, as I wasn't able to find any example commands for basic drive speed
stats.
Mike
On Fri, 01 Feb 2008 23:52:10 +0100, Johannes Russek wrote:
> [EMAIL PROTECTED] wrote:
>
>> Ah, wonderful day knee deep in fibre
[EMAIL PROTECTED] wrote:
Ah, wonderful day knee deep in fibre channel.
Does anyone know how I can test NFS speeds in a similar fashion to hdparm -tT?
All the testing I've ever done was with hdparm so that's my reference point.
Mike
Hi Mike,
hdparm -tT is nice and easy, but I'm afraid it
Ah, wonderful day knee deep in fibre channel.
Does anyone know how I can test NFS speeds in a similar fashion to hdparm -tT?
All the testing I've ever done was with hdparm so that's my reference point.
Mike
On Thu, 31 Jan 2008 17:39:56 -0600, Brian Kroth wrote:
> Johannes Russek <[EMAIL PROTEC
> I don't need the LVM log. Sorry if I wasn't clear. I wanted the syslog
> extracts for the DLM startup. I also wanted to see the netstat AFTER
> starting the dlm on the first node (and if there were any errors), then
> the same things (on both nodes) when the second node was added.
Ok, I'll resol
Update for anyone watching :).
I've eliminated GFS as the slowdown, at least. No ideas yet on the DLM
problems, but I'll get back to those after I figure out the slowdown.
It seems the storage controller has problems, so I'll change out either the
chassis or the controllers if I can. Once I nuke th
On Feb 1, 2008 10:30 AM, Terry <[EMAIL PROTECTED]> wrote:
> On Feb 1, 2008 10:09 AM, Lon Hohberger <[EMAIL PROTECTED]> wrote:
> > On Thu, 2008-01-31 at 23:10 -0600, Terry wrote:
> >
> > > [cluster.conf <service name="database" recovery="relocate"> block; XML stripped by the list archive]
I'm still getting very long delays before access. Is there some way of testing
this to see if it's the storage or the GFS setup? Like a test I could run with
the storage connected raw and then with GFS?
Perhaps I just kill the partition, run some test, format it as GFS again,
mount it, test?
Mike
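A sketch of that comparison, assuming the LUN shows up as /dev/sdb and the cluster is named mycluster (both are placeholders; gfs_mkfs will of course destroy whatever is on the device):

    # raw sequential read straight off the LUN, bypassing the page cache
    dd if=/dev/sdb of=/dev/null bs=1M count=4096 iflag=direct
    # then build GFS on it, mount it, and test again through the filesystem
    gfs_mkfs -p lock_dlm -t mycluster:gfstest -j 2 /dev/sdb
    mount -t gfs /dev/sdb /mnt/gfs
    dd if=/dev/zero of=/mnt/gfs/testfile bs=1M count=4096 conv=fsync
    echo 3 > /proc/sys/vm/drop_caches
    dd if=/mnt/gfs/testfile of=/dev/null bs=1M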
On Feb 1, 2008 10:09 AM, Lon Hohberger <[EMAIL PROTECTED]> wrote:
> On Thu, 2008-01-31 at 23:10 -0600, Terry wrote:
>
> > [cluster.conf <service name="database" recovery="relocate"> block; XML stripped by the list archive]
On Fri, 2008-02-01 at 14:57 +0530, Abhra Paul wrote:
> Respected Users
>
> I have a problem with the cluster. One user of this cluster needs a huge
> amount of space. In this cluster, one big partition (1 TB in size) is
> mounted on /data, so I provided that amount of space for his program
> execution. At 1
On Thu, 2008-01-31 at 23:10 -0600, Terry wrote:
> [cluster.conf <service name="database" recovery="relocate"> block; XML stripped by the list archive]
That's not going to work; the dependencies are backward
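For reference, rgmanager starts a parent resource before its children and stops the children first, so the filesystem and IP need to be the outer elements and the database start script the innermost one. A rough sketch of that shape (the names, paths, and the use of a plain init script rather than the postgres-8 resource agent are all just illustrative):

    <service autostart="1" name="database" recovery="relocate">
      <fs name="pgdata" device="/dev/vg0/pgdata" mountpoint="/var/lib/pgsql" fstype="ext3">
        <ip address="10.10.10.50" monitor_link="1">
          <script name="postgresql" file="/etc/init.d/postgresql"/>
        </ip>
      </fs>
    </service>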
On Thu, 2008-01-31 at 17:20 -0800, Vectorz Sigma wrote:
> I'm aware of how to do this for CMAN but I'm running gulm. I can't
> find information anywhere on how to do this.
>
> Anyone know?
According to the gulm.5 man page:
heartbeat_rate
The rate at which the heartbeats are
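As far as I recall, the GULM tunables go on the <gulm> element in cluster.conf, next to the lock-server list; the attribute names below are an assumption from memory, so check them against your gulm.5 man page before relying on them:

    <gulm heartbeat_rate="15" allowed_misses="2">
      <lockserver name="node1"/>
      <lockserver name="node2"/>
      <lockserver name="node3"/>
    </gulm>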
Hi,
CLVM is hung again. This time, the problem started when we restarted clvmd on
one node (xen1).
Xen2 started to report:
Feb 1 15:26:34 xen2 kernel: dlm: recover_master_copy -53 103e7
Feb 1 15:26:34 xen2 kernel: dlm: recover_master_copy -53 10264
Feb 1 15:26:34 xen2 kernel: dlm: recover_ma
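When it wedges like this, it's worth capturing the group and lockspace state on every node before restarting anything, so the stuck recovery can be matched against the syslog errors. A minimal sketch of what to grab (standard cman tools on RHEL 5):

    cman_tool status
    cman_tool services     # fence domain, DLM lockspaces (clvmd) and GFS mount groups
    group_tool ls          # the same groups as groupd sees them
    vgdisplay              # does LVM itself answer, or does it hang here too?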
jr <[EMAIL PROTECTED]>:
> On Thursday, 31.01.2008, at 19:29 -0600, [EMAIL PROTECTED] wrote:
>
> > I've not looked into this yet, so I don't know what to edit or add.
> >
> > On one node, the drive is /dev/sda, but on some nodes, it'll be sdx because
> > some of the machines already have SCSI d
On Thursday, 31.01.2008, at 19:29 -0600, [EMAIL PROTECTED] wrote:
> I've not looked into this yet, so I don't know what to edit or add.
>
> On one node, the drive is /dev/sda, but on some nodes, it'll be sdx because
> some of the machines already have SCSI drives in them. Not hard to see, just
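One way to sidestep the sda-versus-sdx problem entirely is to stop referring to the LUN by its sd name and use a persistent identifier instead; for example (the by-id name shown is obviously made up):

    # udev keeps stable symlinks keyed on the LUN's WWID
    ls -l /dev/disk/by-id/
    # mount the GFS volume by that path on every node
    mount -t gfs /dev/disk/by-id/scsi-3600a0b80001234560000abcd /mnt/gfs
    # or put the LUN under CLVM and use the /dev/<vg>/<lv> path everywhere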
[EMAIL PROTECTED] wrote:
>> Can you boot a single node, without any cluster software running, then
>> do a 'netstat -tap'? Then start the cluster software and do it again.
>
> compdev# netstat -tap
> Active Internet connections (servers and established)
> Proto Recv-Q Send-Q Local Address Foreig
Respected Users
I have a problem with the cluster. One user of this cluster needs a huge
amount of space. In this cluster, one big partition (1 TB in size) is
mounted on /data, so I provided that amount of space for his program
execution. At 1.30 PM he occupied 200 GB of this storage (which is
mounted on /dat
Hi all:
I have some problems with the GFS2 MDS and the cluster components.
The problems are as follows:
First problem: I want to know how the cluster components manage the GFS2
MDS.
Second problem: how to sync the MDS.
Thanks!
carry.chen
Hi,
On Fri, 1 Feb 2008, Cosimo Streppone wrote:
I also wouldn't recommend NFS, but I've never tried that myself with pg. I've
only had bad experiences with NFS shares hanging processes for a long time.
See the PostgreSQL archives -- NFS is considered *very* harmful for many apps,
and for PostgreSQL as well.
Terry wrote:
I am trying to get an active-passive postgres cluster going. I have
shared storage over NFS. I just can't get it going. I am using luci
to configure this, which, from what I have been reading, is somewhat
buggy in the postgres-8 arena. My first question is what components
of po