Re: [Linux-cluster] is ocfs2 limited to 16T?

2011-03-14 Thread yue
How do I rebuild ocfs2.ko? What needs to be changed? Thanks. At 2011-03-06 20:38:55, "Jakov Sosic" wrote: >On 03/06/2011 06:30 AM, yue wrote: >> is there a limit on ocfs2's volume? must it be less than 16T? > >For RHEL v5.x and derivatives, yes. But you can hack it and rebuild >kernel modules without l
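For context: as I understand it, the 16 TB ceiling on these kernels comes from the journal layer (JBD) addressing blocks with 32-bit numbers; with the default 4 KiB block size, that caps the volume size. A back-of-the-envelope check:

```shell
# Why a 32-bit block number with 4 KiB blocks tops out at 16 TiB.
blocks=$((1 << 32))                         # 2^32 addressable blocks
block_size=4096                             # bytes per block (4 KiB)
limit_tib=$(( blocks * block_size / (1024 * 1024 * 1024 * 1024) ))
echo "volume ceiling: ${limit_tib} TiB"     # 2^44 bytes = 16 TiB
```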

Re: [Linux-cluster] quorum device not getting a vote causes 2-node cluster to be inquorate

2011-03-14 Thread Fabio M. Di Nitto
On 03/15/2011 05:11 AM, berg...@merctech.com wrote: > I have been using a 2-node cluster with a quorum disk successfully for > about 2 years. Beginning today, the cluster will not boot correctly. > > The RHCS services start, but fencing fails with: > > dlm: no local IP address has bee

[Linux-cluster] quorum device not getting a vote causes 2-node cluster to be inquorate

2011-03-14 Thread bergman
I have been using a 2-node cluster with a quorum disk successfully for about 2 years. Beginning today, the cluster will not boot correctly. The RHCS services start, but fencing fails with: dlm: no local IP address has been set dlm: cannot start dlm lowcomms -107 This seem
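For readers hitting the same symptom: a quorum disk must be declared in cluster.conf and given votes so that expected_votes can still be met with one node down. A minimal, hypothetical fragment (the label, heuristic address, and timings are placeholders, not taken from the original poster's configuration):

```xml
<cman expected_votes="3"/>
<quorumd interval="1" tko="10" votes="1" label="qdisk">
    <heuristic program="ping -c1 192.168.0.1" score="1" interval="2"/>
</quorumd>
```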

Re: [Linux-cluster] GFS2 file system maintenance question.

2011-03-14 Thread yue
1. GFS2 is based on a 64-bit architecture, which can theoretically accommodate an 8 EB file system. However, the current supported maximum size of a GFS2 file system is 25 TB. If your system requires GFS2 file systems larger than 25 TB, contact your Red Hat service representative. At 2011-0
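The two figures quoted above line up as follows: 64-bit addressing gives a theoretical ceiling of 2^63 bytes, i.e. 8 EiB, while 25 TB is simply Red Hat's tested/supported limit, not an on-disk one. As arithmetic:

```shell
# 2^63 bytes expressed in EiB (1 EiB = 2^60 bytes): 2^(63-60) = 8.
theoretical_eib=$((1 << (63 - 60)))
echo "theoretical GFS2 ceiling: ${theoretical_eib} EiB; supported: 25 TB"
```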

[Linux-cluster] GFS2 file system maintenance question.

2011-03-14 Thread Jack Duston
Hello folks, I am planning to create a 2 node cluster with a GFS2 CLVM SAN. The following Note in the RHEL6 GFS2 manual jumped out at me: Chapter 3. Managing GFS2 Note: Once you have created a GFS2 file system with the mkfs.gfs2 command, you cannot decrease the size of the file system. You can,
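The flip side of that note is that a GFS2 file system can be grown online with gfs2_grow after extending the underlying CLVM volume. The sketch below is a dry run (the run helper only echoes, and the volume and mount-point names are hypothetical); to execute for real, replace the echo with "$@":

```shell
# Dry-run sketch: GFS2 can grow, never shrink.
run() { echo "+ $*"; }                        # echoes instead of executing
run lvextend -L +100G /dev/clustervg/gfs2lv   # 1. extend the clustered LV
run gfs2_grow /mnt/gfs2                       # 2. expand GFS2 into the new space
```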

Re: [Linux-cluster] Resource groups

2011-03-14 Thread Bob Peterson
- Original Message - | Bob: | | You say this in your best practice document: | | "Our performance testing lab has experimented with various resource | group sizes and found a performance problem with anything bigger than | 768MB. Until this is properly diagnosed, we recommend staying belo

[Linux-cluster] Resource groups

2011-03-14 Thread Alan Brown
Bob: You say this in your best practice document: "Our performance testing lab has experimented with various resource group sizes and found a performance problem with anything bigger than 768MB. Until this is properly diagnosed, we recommend staying below 768MB." What are the details? Nearly
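One way to see why the resource group size matters: block allocation scans resource-group metadata, and the rgrp size chosen at mkfs time (the mkfs.gfs2 -r option) fixes how many groups a volume carries. A hypothetical 25 TB volume at the recommended 768 MB:

```shell
# Resource group count for a hypothetical 25 TB GFS2 volume.
fs_mb=$((25 * 1024 * 1024))        # 25 TB expressed in MB
rgrp_mb=768                        # resource group size (mkfs.gfs2 -r 768)
echo "resource groups to scan: $(( fs_mb / rgrp_mb ))"
```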

[Linux-cluster] Two node cluster benchmark

2011-03-14 Thread Thiago Henrique
Hello, I have a two node cluster configured like: Ubuntu 10.04 + CMAN + DRBD + GFS2. As a benchmark, I run a script simultaneously on both nodes that makes write operations on the filesystem until it fills. But when I run the benchmark, foo-node remains almost the whole time waiting for bar-no
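A minimal sketch of the kind of fill benchmark described (the helper and paths are illustrative, not the poster's script). Note that if both nodes allocate in the same directory tree, they contend for the same resource-group locks, which by itself can produce the lopsided waiting described:

```shell
# Write fixed-size files into $1 until the file system reports an error
# (typically ENOSPC), then report how many files landed. Illustrative only.
fill() {
    i=0
    while dd if=/dev/zero of="$1/fill.$i" bs=1M count=64 2>/dev/null; do
        i=$((i + 1))
    done
    echo "$i"
}
# e.g. on each node, using a per-node directory:  fill /mnt/gfs2/$(hostname)
```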

Re: [Linux-cluster] time of left/join member

2011-03-14 Thread Digimer
On 03/14/2011 09:50 AM, iarly selbir wrote: > I was checking my two node cluster, I noticed that one node is down, my > question is how to find out when this node left the cluster? assuming > that services already running on the other node I can't see why this > node "maybe" was fenced and powered

[Linux-cluster] time of left/join member

2011-03-14 Thread iarly selbir
I was checking my two node cluster and noticed that one node is down. My question is: how do I find out when this node left the cluster? Assuming that services are already running on the other node, I can't see why this node "maybe" was fenced and powered off. /var/log/messages was not clear enough to me,

Re: [Linux-cluster] which is better, gfs2 or ocfs2?

2011-03-14 Thread Bob Peterson
- Original Message - | > (1) We recently found and fixed a problem that caused the | > dlm to pass locking traffic much slower than possible. | | Is this rolled into 2.6.18-238.5.1.el5 ? Yes, it was added starting with 2.6.18-232 | > (5) We recently identified and fixed a performanc

Re: [Linux-cluster] Two node cluster - a potential problem of node fencing each other?

2011-03-14 Thread Rajagopal Swaminathan
Greetings, On Sun, Mar 13, 2011 at 7:27 PM, Parvez Shaikh wrote: > redundant network link - I trust you were referring to ethernet bonding. > >> >> This is a fairly common problem called "split brain". The two nodes will >> go into a shootout, fencing each other. There are a few ways to prevent >
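For the archive: the usual two-node mitigations are either the two_node/expected_votes special case combined with a fence delay favoring one node, or a quorum disk acting as tiebreaker. A hypothetical cluster.conf fragment for the first approach (device name and delay value are illustrative, and delay support depends on the fence agent in use):

```xml
<cman two_node="1" expected_votes="1"/>
<!-- give node1 a head start so both nodes cannot fence each other at once -->
<clusternode name="node2" nodeid="2">
    <fence>
        <method name="1">
            <device name="myfencedevice" delay="15"/>
        </method>
    </fence>
</clusternode>
```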