Dear all,
Please let me know how I can add a storage quorum to an existing cluster.
I have an existing cluster of RHEL 4.5 servers.
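(For reference, the usual mechanism for a storage quorum on RHEL Cluster Suite
is a quorum disk managed by qdiskd. A rough sketch, assuming a small shared LUN
visible to all nodes at /dev/sdX; the device name, label and vote count below
are only placeholders:

  # initialise the quorum disk on the shared LUN (run on one node)
  mkqdisk -c /dev/sdX -l myqdisk

  # add a <quorumd> stanza to /etc/cluster/cluster.conf on every node, e.g.
  #   <quorumd interval="1" tko="10" votes="1" label="myqdisk"/>
  # then bump config_version and propagate the file as usual

  # start the quorum disk daemon on every node
  service qdiskd start
  chkconfig qdiskd on

Votes and heuristics depend on the node count, so check the qdisk(5) man page
for values that fit your cluster.)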
--
Regards,
Vishal Bordia
HCL Infosystems Ltd.
Mob :+91-9216883922
--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster
Hello. I have an HP MSA500 SCSI array and I was using an mdadm device as a quorum
on a two-node cluster until last week, when I updated from RHEL 5.2 to 5.3.
What I found is that the quorum no longer functions, seemingly because of
multipath. If I create a new quorum using a single physical path fr
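(The message is cut off here, but two things that are commonly tried when
multipath starts claiming a device after an update, sketched with placeholder
names and WWIDs:

  # Option 1: blacklist the quorum LUN in /etc/multipath.conf so the raw
  # sd devices stay visible to mdadm, e.g.
  #   blacklist {
  #       wwid 3600508b400015...
  #   }
  multipath -F               # flush existing multipath maps
  service multipathd restart

  # Option 2: skip the mdadm layer and run mkqdisk directly against the
  # multipath device (e.g. /dev/mapper/mpath0), letting device-mapper
  # handle path failover instead.

Either way, "multipath -ll" shows which device is actually holding the LUN.)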
- "Jeff Sturm" wrote:
| > What level of GFS driver is this? Are you up2date or running
| > a recent level?
|
| We aren't running Red Hat. We have:
|
| CentOS release 5.2 (Final)
| kmod-gfs-0.1.23-5.el5
Close enough. :7) That's a bit old, so it's possible you're hitting the
problem I pointed out.
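(For anyone following along, a quick way to check what GFS code a node is
actually running, assuming a stock CentOS/RHEL 5 install:

  rpm -q kmod-gfs gfs-utils   # packaged kmod and userland versions
  uname -r                    # running kernel, which the kmod must match
  modinfo gfs                 # details of the gfs module the kernel will load

If the packages were updated but the node hasn't been rebooted or the
filesystem remounted, the loaded module can still be the older one.)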
> -----Original Message-----
> From: linux-cluster-boun...@redhat.com
> [mailto:linux-cluster-boun...@redhat.com] On Behalf Of Bob Peterson
> Sent: Friday, March 06, 2009 8:47 AM
> To: linux clustering
> Subject: Re: [Linux-cluster] Strange directory listing
>
> - "Jeff Sturm" wrote:
> | We
- "Jeff Sturm" wrote:
| We keep Lucene search indexes on a GFS storage volume, mounted
| cluster-wide. This way each cluster node can perform a search, or
| append new content to the search index. Works great.
|
| Funny thing is, when I list the directory containing the search index,
| I so
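(A minimal sketch of what "mounted cluster-wide" typically looks like on RHEL 5
GFS; the volume /dev/vg_cluster/lv_search, the mount point /var/lib/search and
the cluster name "mycluster" are only illustrative:

  # filesystem created once, with one journal per node and DLM locking
  gfs_mkfs -p lock_dlm -t mycluster:search -j 3 /dev/vg_cluster/lv_search

  # identical /etc/fstab entry on every node, mounted by the gfs init script:
  #   /dev/vg_cluster/lv_search  /var/lib/search  gfs  defaults  0 0
  chkconfig gfs on
  service gfs start

Every node then sees the same directory tree, which is what lets each node
search or update the Lucene index in place.)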
OK,
I'll mail it as soon as I clean up the code and do some more tests.
Marcos David
Fabio M. Di Nitto wrote:
> Hi Marcos,
>
> On Thu, 2009-03-05 at 16:42 +, Marcos David wrote:
>
>> Once I have a stable version, where can I upload it so it can be added
>> to the cluster packages?
>>
>>
Hi,
Yes, you can issue a "poweroff server 3" to gracefully shut down a blade.
(I'm using "poweroff server # force" to ensure a fast shutdown; without
the "force" option the OS shutdown sometimes hung and things didn't work
properly.)
I'm not sure how it works internally, I'll have to shutdown or d
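(For anyone wiring this into a fence agent, the relevant Onboard Administrator
CLI commands are roughly the following; bay number 3 is just an example and the
output format varies with OA firmware:

  poweroff server 3 force    # hard power-off of the blade in bay 3
  poweron server 3           # power it back on
  show server status 3       # confirm the blade's power state

A fence "off" action would issue the poweroff and then poll "show server
status" until the blade reports its power as Off, rather than trusting the
command's return alone.)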
I have a c7000 too, with two test blades I'm going to install.
I'm available to test it if you like.
My planned OS will be Red Hat EL 5 U3 x86_64 with its Cluster Suite.
The blades will be 2 x BL685c G1 serving Oracle 10gR2.
At the moment the fw version of the c7000 is 2.25, while the iLO fw is 1.60.
One que
Hi Marcos,
On Thu, 2009-03-05 at 16:42 +, Marcos David wrote:
>
>
> Once I have a stable version, where can I upload it so it can be added
> to the cluster packages?
>
Can you please mail it to Jan and Marek? They are the maintainers of all
fence agents in our stack.
New agents are always
On Wed, 2009-03-04 at 07:34 -0800, Doug Bunger wrote:
> I'm having trouble making the cluster aware of changes in Fedora 10
> (x86_64). The setup has three VMs accessing a shared, attached
> partition, formatted as GFS. When modifying the cluster.conf and
> incrementing the version number, I have t
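(The message is cut off here, but for reference the usual sequence for pushing
a cluster.conf change to the other nodes of a cman-based cluster, assuming the
edit and the config_version bump have already been made on one node:

  # propagate the updated file to the other cluster members via ccsd
  ccs_tool update /etc/cluster/cluster.conf

  # tell cman about the new version number (use the value you just set)
  cman_tool version -r <new_config_version>

  # verify that every node reports the same config version
  cman_tool status | grep "Config Version"

If the versions diverge between nodes, copying the file by hand and restarting
cman on the out-of-date node is the usual fallback.)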
On Fri, 2009-03-06 at 12:17 +0100, Fabio M. Di Nitto wrote:
>
> - corosync 0.94 (strongly recommended to use svn rev 1791 and not
> higher)
Sorry for the typo... svn rev. 1792 should be used.
Fabio
The cluster team and its community are proud to announce the
3.0.0.alpha7 release from the STABLE3 branch.
The development cycle for 3.0.0 is about to end. The STABLE3 branch is
now collecting only bug fixes and the minimal updates required to build on
top of the latest upstream kernel/corosync/openais.
Hi all.
I have a question regarding GFS.

Is it possible to combine the block devices on multiple GNBD servers into
one GFS filesystem?

Of course, I already know that GFS supports multiple GNBD servers exporting one
block device to provide high availability,
and GNBD clients can import
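(The question is cut off here, but the usual way to aggregate devices imported
from several GNBD servers into one GFS filesystem is to layer clustered LVM on
top of the imports. A rough sketch; export names, hostnames and journal counts
are purely illustrative:

  # on each GNBD server (gnbd_serv running), export a local block device
  gnbd_export -d /dev/sdb1 -e store1        # on server1
  gnbd_export -d /dev/sdb1 -e store2        # on server2

  # on the GFS client nodes, import from both servers
  gnbd_import -i server1
  gnbd_import -i server2

  # stitch the imported devices together with clustered LVM (clvmd running)
  pvcreate /dev/gnbd/store1 /dev/gnbd/store2
  vgcreate vg_gfs /dev/gnbd/store1 /dev/gnbd/store2
  lvcreate -l 100%FREE -n lv_data vg_gfs

  # one GFS filesystem spanning both servers' storage
  gfs_mkfs -p lock_dlm -t mycluster:data -j 3 /dev/vg_gfs/lv_data

Note that this trades the multiple-servers-one-device HA setup for capacity:
if either GNBD server goes away, the whole volume group is affected.)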