Are these websites correct?
Steven Dake wrote:
Merged in whitetank and corosync trunk.
Thanks again! Also, please copy open...@lists.osdl.org in the future,
since it makes tracking patches easier for me.
-steve
On Mon, 2009-02-02 at 19:33 +0900, Masatake YAMATO wrote:
It seems that libtomcrypt.org is moved.
Sorry for the delay. Had an outage on the main system with all my ssh keys.
Regards
-steve
On Fri, 2009-01-30 at 17:02 +0900, Masatake YAMATO wrote:
> I've found a redundant statement in totemsrp.c.
> Could you apply the following patch?
>
>
> Masatake YAMATO
>
> I
Merged whitetank and corosync trunk.
Thanks again!
On Mon, 2009-02-02 at 16:34 +0900, Masatake YAMATO wrote:
> Could you apply this patch, if it looks good to you?
>
>
> Masatake YAMATO
>
>
> Index: exec/totemsrp.c
> ===
> --- exec/totemsr
On Mon, 2009-02-02 at 19:33 +0900, Masatake YAMATO wrote:
> It seems that libtomcrypt.org is moved.
>
> Masatake YAMATO
>
>
> Index
On Tue, Feb 17, 2009 at 04:23:20PM -0700, Gary Romo wrote:
>
> We had this issue a long time ago.
> What we did was remove the sg3_utils rpm and then did a chkconfig
> scsi_reserve off
Ahh, yes. If you don't intend to use SCSI-3 reservations, you
definitely need to turn off scsi_reserve.
Thanks
We had this issue a long time ago.
What we did was remove the sg3_utils rpm and then did a chkconfig
scsi_reserve off
Gary
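The steps Gary describes can be sketched as follows. This is a sketch for a RHEL 5 host, assuming the sg3_utils package and the scsi_reserve init script named in the thread; it must run as root, and should only be done if you do not use SCSI-3 persistent reservations for fencing.

```shell
# Remove the sg3_utils package, which ships the scsi_reserve init script
rpm -e sg3_utils

# Disable the scsi_reserve service so it no longer registers
# SCSI-3 persistent reservation keys at boot
chkconfig scsi_reserve off

# Verify the service is now off in all runlevels
chkconfig --list scsi_reserve
```

If you still want the sg_persist diagnostic tool, you can keep the package and only disable the service with the chkconfig step.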
Alan A
Can you dump the registered keys and reservation key?
# get a list of keys registered for a device
sg_persist -i -k
# shows which key holds the reservation
sg_persist -i -r
It appears that this node is trying to access /dev/sda but is not
registered with the device. WERO reservations wi
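A fuller invocation of those two commands might look like this. The device path /dev/sda is taken from the error messages in this thread, the long option names are equivalent to the short -i -k / -i -r forms, and the registration key 0x123abc is a placeholder — use whatever key your cluster assigns to this node.

```shell
# List all registration keys currently held on the device
sg_persist --in --read-keys /dev/sda

# Show which key holds the reservation, and the reservation type
sg_persist --in --read-reservation /dev/sda

# If this node's key is missing from the list, register it
# (0x123abc is a placeholder; substitute your node's key)
sg_persist --out --register --param-sark=0x123abc /dev/sda
```

Under a Write Exclusive, Registrants Only (WERO) reservation, writes from an unregistered initiator are rejected with a reservation conflict, which matches the kernel errors quoted below.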
Did anyone experience this? Any suggestions to fixing this error?
Feb 17 16:32:18 fendev04 kernel: sd 0:0:0:0: SCSI error: return code =
0x0018
Feb 17 16:32:18 fendev04 kernel: end_request: I/O error, dev sda, sector
11009281
Feb 17 16:32:18 fendev04 kernel: sd 0:0:0:0: reservation conflict
Fe
Hi Ken...
CTDB has an unofficial discussion channel on irc.freenode.net, #ctdb.
By the way, RHCS has a channel on the same server, #linux-cluster.
Some developers, plus advanced and newbie users, are there to discuss
this nice thing :)
I'm using re
Having an issue with my 2 node cluster. Think it is related to the quorum disk.
2 node RHEL 5.3 cluster with quorum disk. Virtual servers running on each node.
Whenever node1 takes over the master role in qdisk it loses quorum and
restarts all the virtual servers. It does regain quorum a few
Hi all.
I have a GFS2 cluster of three machines using an iSCSI disk.
Everything went fine on the initial tests and the cluster seems to work just
great.
A couple of days ago I submitted this cluster to a number of file creation
operations and whilst this was providing enough load to see some perfo
I just upgraded everything to RHEL 5.3, and this is what I get when trying to
"fence_node nodename":
[r...@fendev03 ~]# fence_node fendev04.x.com
agent "fence_apc" reports: Traceback (most recent call last):
  File "/sbin/fence_apc", line 207, in ?
    main()
  File "/sbin/fence_apc", line 191
Hi,
First, thank you very much for your answer.
You are right, I have no fencing devices at all, for one simple reason: I
haven't got any!
I'm just testing with 2 Xen virtual machines running on the same host and
mounting an iSCSI disk from another host to simulate shared storage.
On the other hand, I thin