how to rebuild ocfs2.ko
what needs to be changed?
thanks
At 2011-03-06 20:38:55,"Jakov Sosic" wrote:
>On 03/06/2011 06:30 AM, yue wrote:
>> Is there a limit on an OCFS2 volume? Must it be less than 16 TB?
>
>For RHEL v5.x and derivatives, yes. But you can hack it and rebuild
>kernel modules without l
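A quick sanity check on where the 16 TB figure comes from. This is my assumption, not something stated in the thread: the ceiling matches 32-bit block numbers at a 4 KiB block size.

```shell
# Assumption: the 16 TB ceiling is 32-bit block numbers x 4 KiB blocks.
blocks=$(( 1 << 32 ))    # blocks addressable with a 32-bit block number
blocksize=4096           # bytes per block (4 KiB)
echo "$(( blocks * blocksize / (1 << 40) )) TiB"    # -> 16 TiB
```

If that assumption holds, a larger block size or wider block counter is what a rebuilt module would have to provide.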
On 03/15/2011 05:11 AM, berg...@merctech.com wrote:
> I have been using a 2-node cluster with a quorum disk successfully for
> about 2 years. Beginning today, the cluster will not boot correctly.
>
> The RHCS services start, but fencing fails with:
>
> dlm: no local IP address has bee
I have been using a 2-node cluster with a quorum disk successfully for
about 2 years. Beginning today, the cluster will not boot correctly.
The RHCS services start, but fencing fails with:
dlm: no local IP address has been set
dlm: cannot start dlm lowcomms -107
This seem
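Error -107 is -ENOTCONN, and dlm derives its local address from the cluster node name, so a first thing to check is whether the node name in cluster.conf resolves to an IP that is actually configured on the host. A diagnostic sketch (assumptions: the node name matches `uname -n`, and your resolver is consistent with cluster.conf):

```shell
#!/bin/sh
# Sketch: verify the local node name resolves to an address that is
# configured on one of this host's interfaces (assumption: dlm picks
# its local IP from the cluster node name).
node=$(uname -n)
addr=$(getent hosts "$node" | awk '{print $1; exit}')
if [ -z "$addr" ]; then
  echo "WARN: $node does not resolve to any address"
elif ip -o addr show 2>/dev/null | grep -qw "$addr"; then
  echo "OK: $node -> $addr is configured locally"
else
  echo "WARN: $node -> $addr is not on any local interface"
fi
```

A stale /etc/hosts entry or a name that resolves to an address on a removed interface would produce exactly this kind of lowcomms startup failure.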
GFS2 is based on a 64-bit architecture, which can theoretically accommodate an
8 EB file system. However, the current supported maximum size of a GFS2 file
system is 25 TB. If your system requires GFS2 file systems larger than 25 TB,
contact your Red Hat service representative.
At 2011-0
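To put those two numbers side by side (ignoring TB/TiB rounding): 8 EiB is 8 * 2^20 TiB, so the theoretical 64-bit limit is several hundred thousand times the supported 25 TB maximum.

```shell
# Scale of the theoretical vs. supported GFS2 limits quoted above.
echo "$(( 8 * 1024 * 1024 )) TiB in 8 EiB"              # -> 8388608 TiB in 8 EiB
echo "$(( 8 * 1024 * 1024 / 25 ))x the 25 TB supported limit"   # -> 335544x
```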
Hello folks,
I am planning to create a 2-node cluster with a GFS2 CLVM SAN.
The following Note in the RHEL6 GFS2 manual jumped out at me:
Chapter 3. Managing GFS2
Note:
Once you have created a GFS2 file system with the mkfs.gfs2 command, you
cannot decrease the size of the file system. You can,
- Original Message -
| Bob:
|
| You say this in your best practice document:
|
| "Our performance testing lab has experimented with various resource
| group sizes and found a performance problem with anything bigger than
| 768MB. Until this is properly diagnosed, we recommend staying belo
Bob:
You say this in your best practice document:
"Our performance testing lab has experimented with various resource
group sizes and found a performance problem with anything bigger than
768MB. Until this is properly diagnosed, we recommend staying below 768MB."
What are the details? Nearly
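For context on the trade-off being discussed: the resource group size is fixed at mkfs time (via mkfs.gfs2's -r option, in MB), and a smaller size means more resource groups to scan when looking for free space. A rough count, using an illustrative filesystem size of my own choosing:

```shell
# How many resource groups a given rgrp size implies.
# The 5 TiB filesystem size here is hypothetical, for illustration only.
fs_mb=$(( 5 * 1024 * 1024 ))     # filesystem size in MiB
for rg in 256 768; do
  echo "rgrp ${rg}MB -> $(( fs_mb / rg )) resource groups"
done
```

So tripling the rgrp size cuts the number of groups by a factor of three, which is presumably why sizes just under the 768MB threshold are attractive despite the reported problem above it.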
Hello,
I have a two-node cluster configured as follows:
Ubuntu 10.04 + CMAN + DRBD + GFS2
In a benchmark, I run a script simultaneously on both nodes that makes
write operations in the filesystem until it fills.
But when I run the benchmark, foo-node remains almost the whole time
waiting for bar-no
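One plausible explanation for this pattern (an assumption on my part, not confirmed in the thread) is cross-node contention on a shared directory's glock. A common GFS2 mitigation is to give each node its own subdirectory so creates don't bounce a single directory lock between nodes:

```shell
#!/bin/sh
# Benchmark sketch: each node writes into its own subdirectory.
# FS is a hypothetical mount point; override via the environment.
FS=${FS:-/mnt/gfs2}
dir="$FS/$(uname -n)"        # per-node working directory
mkdir -p "$dir" || exit 1
i=0
while [ "$i" -lt 4 ]; do     # bounded here; a real fill test loops until ENOSPC
  dd if=/dev/zero of="$dir/f$i" bs=1M count=1 2>/dev/null || break
  i=$((i + 1))
done
echo "wrote $i files in $dir"
```

If the imbalance disappears with per-node directories, directory-lock contention was the bottleneck rather than raw write bandwidth.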
On 03/14/2011 09:50 AM, iarly selbir wrote:
> I was checking my two-node cluster and noticed that one node is down. My
> question is how to find out when this node left the cluster, given that
> the services are already running on the other node; I can't see why this
> node "maybe" was fenced and powered
I was checking my two-node cluster and noticed that one node is down. My
question is: how can I find out when this node left the cluster? The services
are already running on the other node, and I can't see why this node "maybe"
was fenced and powered off.
/var/log/messages was not clear enough to me,
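The membership and fencing daemons are the things to grep for on the surviving node. The log lines below are invented for illustration; exact formats vary by RHCS/openais version:

```shell
# Sample log (hypothetical lines, for illustration only).
cat > /tmp/messages.sample <<'EOF'
Mar 14 03:12:01 foo fenced[2110]: fencing node "bar"
Mar 14 03:12:05 foo fenced[2110]: fence "bar" success
Mar 14 03:12:05 foo kernel: dlm: closing connection to node 2
Mar 14 03:13:00 foo crond[3001]: (root) CMD (run-parts /etc/cron.hourly)
EOF
# Pull out cluster-membership and fencing events, skipping unrelated noise.
grep -E 'fenced|openais|corosync|dlm' /tmp/messages.sample
```

The timestamps on the first matching fence lines tell you when the node was evicted, even when the departed node's own logs end abruptly.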
- Original Message -
| > (1) We recently found and fixed a problem that caused the
| > dlm to pass locking traffic much slower than possible.
|
| Is this rolled into 2.6.18-238.5.1.el5 ?
Yes, it was added starting with 2.6.18-232
| > (5) We recently identified and fixed a performanc
Greetings,
On Sun, Mar 13, 2011 at 7:27 PM, Parvez Shaikh
wrote:
> Redundant network link - I trust you were referring to Ethernet bonding.
>
>>
>> This is a fairly common problem called "split brain". The two nodes will
>> go into a shootout, fencing each other. There are a few ways to prevent
>
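One of those ways, in a two-node cluster, is a quorum disk acting as tiebreaker, as in the first thread above. A cluster.conf fragment sketch; the attribute values here are illustrative, not tuned for any real deployment, and the heuristic target IP is hypothetical:

```xml
<!-- Sketch only: interval/tko/score values are illustrative. -->
<quorumd interval="1" tko="10" votes="1" label="qdisk">
  <heuristic program="ping -c1 -t1 10.0.0.1" score="1" interval="2" tko="3"/>
</quorumd>
```

With a working heuristic, the node that can still reach the gateway wins the tiebreak instead of both nodes fencing each other.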