  [logvolquorum_mimage_0] volgrp01 Iwi-ao 996,00M /dev/mpath/quorum01(0)
  [logvolquorum_mimage_1] volgrp01 Iwi-ao 996,00M unknown device(0)
  [logvolquorum_mlog]     volgrp01 lwi-ao   4,00M /dev/mpath/logquorum01(0)
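When one mirror leg shows up as "unknown device" like this, one possible
recovery path is the following sketch (hedged: /dev/mpath/quorum02 is an
assumed name for the returning LUN, not taken from the listing above):

    # drop the lost mirror leg, then rebuild the mirror once the LUN is back
    vgreduce --removemissing --force volgrp01
    lvconvert -m1 volgrp01/logvolquorum \
        /dev/mpath/quorum01 /dev/mpath/quorum02 /dev/mpath/logquorum01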
I hope this brings some light.
What is your current equipment architecture (where do you run the ESXi
hosts, and where are the virtual machines stored)?
Cheers,
Rafael
--
Rafael Micó Miranda
Note that you are talking about a failure of up to 3 nodes in a
cluster of 5 members. That situation may not even be reachable, because
depending on the configuration you can lose quorum before getting there.
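A rough sketch of the arithmetic (assuming the default of one vote per
node and no quorum disk):

    nodes=5
    quorum=$(( nodes / 2 + 1 ))     # 3 votes needed to stay quorate
    surviving=$(( nodes - 3 ))      # 2 votes left after three node failures
    [ "$surviving" -ge "$quorum" ] && echo quorate || echo inquorate   # prints "inquorate"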
> regards,
> Martin
>
>
Cheers,
Rafael
--
Rafael Micó Miranda
avoid using automated startup of cluster services
at system boot time.
If a cluster node fails, let an administrator check the node before
adding it back to the cluster.
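On RHEL 5 that can be done by taking the cluster daemons out of the
default runlevels (a minimal sketch; adjust to the services you actually
run):

    chkconfig cman off          # do not join the cluster automatically at boot
    chkconfig rgmanager off     # do not start cluster services automatically
    # after an administrator has verified the node:
    service cman start
    service rgmanager start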
I hope this helps. Cheers,
Rafael
--
Rafael Micó Miranda
"The default for this value is 1 (on)."
So my thoughts were wrong and this is the default behaviour, isn't it?
I'm pretty sure I did not see this behaviour in my previous tests.
Another question is: how does qdisk implement the "reboot" function? Is
it really a "hard reset"?
(30 seconds if
> I'm not wrong) + 5 seconds = 425 seconds.
>
> Brem
>
> 2009/12/16 Rafael Micó Miranda:
After some testing I have had to drop the "no_path_retry" to a smaller
value, or even to "fail". With the current value (equivalent to
"queue_if_no_path" with 12 retries, per the RHEL docs) MDADM saw the
failure of the device, so this is more or less working.
I'm also interested in the "flush_on_last_del" option.
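For reference, both settings live in /etc/multipath.conf; a hedged
sketch (the values are illustrative, not a recommendation):

    defaults {
        no_path_retry      12    # retry failed paths 12 times, then fail the I/O
        flush_on_last_del  yes   # flush queued I/O when the last path is removed
    }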
performance reasons, but I think
the qdisk is a must in the service for availability reasons. I'll take
note of your recommendation, and maybe I'll change the votes to make the
minimal number of nodes higher, possibly 2.
Thanks!
Rafael
--
Rafael Micó Miranda
the LUNs which build the LVM-mirror volume, and
what happens when the LUN comes back.
Thanks for your interest. Cheers,
Rafael
--
Rafael Micó Miranda
them, but this is not the case. That is, in fact, a really
interesting scenario :)
Thanks for your interest. Cheers,
Rafael
--
Rafael Micó Miranda
Hi Andreas
On Tue, 2009-12-15 at 15:31 +0100, Andreas Pfaffeneder wrote:
> Hi Rafael,
>
> On 14.12.2009 23:15, Rafael Micó Miranda wrote:
> > Hi all,
> >
> > I was wondering if there is a way to achieve a "quorum disk over a RAID
> >
Hi Jakov,
On Tue, 2009-12-15 at 11:58 +0100, Jakov Sosic wrote:
> On Mon, 2009-12-14 at 23:15 +0100, Rafael Micó Miranda wrote:
>
> > - Using an LVM-Mirror device as a Qdisk and creating additional LUNs for
> > mirror and log in both storage arrays: if the Qdisk is a Clu
online again?
- With MDRAID: same questions.
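For context, a hedged sketch of how such an LVM-mirror qdisk could be
built (the second LUN, /dev/mpath/quorum02, is an assumed name; the
other names and sizes follow the lvs listing quoted earlier in the
thread):

    pvcreate /dev/mpath/quorum01 /dev/mpath/quorum02 /dev/mpath/logquorum01
    vgcreate volgrp01 /dev/mpath/quorum01 /dev/mpath/quorum02 /dev/mpath/logquorum01
    # one mirror leg per storage array; the small LUN holds the mirror log
    lvcreate -m1 -L 996M -n logvolquorum volgrp01 \
        /dev/mpath/quorum01 /dev/mpath/quorum02 /dev/mpath/logquorum01
    mkqdisk -c /dev/volgrp01/logvolquorum -l QUORUMDISK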
Of course, any idea or proposal is welcome. Thanks in advance. Cheers,
Rafael
--
Rafael Micó Miranda
> <... shutdown_wait="0"/>
>
> <... myapp_home="/opt/myapp_22"
>      shutdown_wait="0"/>
>
> As you can see I don't know how to specify the node
>
> thanks in advance
>
> gilberto
>
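Binding a service to a specific node is usually done with a restricted
failover domain; a minimal sketch (all names are illustrative):

    <rm>
        <failoverdomains>
            <failoverdomain name="only_node1" restricted="1" ordered="0">
                <failoverdomainnode name="node1" priority="1"/>
            </failoverdomain>
        </failoverdomains>
        <service name="myapp_service" domain="only_node1" autostart="1">
            <!-- the service's resources go here -->
        </service>
    </rm>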
--
Rafael Micó Miranda
State
>
>
>
> [r...@node1 ~]# ps -edf | grep qdisk
> root      4409     1  0 Nov26 ?        00:04:00 qdiskd -Q
>
>
> Concerning your point 1, you may address this by giving a different
> score to each heuristic, but I clearly don't know
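For illustration, per-heuristic scores go in the quorumd section of
cluster.conf; a hedged sketch (the programs, scores and intervals are
examples only):

    <quorumd interval="1" tko="10" votes="1" label="QUORUMDISK">
        <!-- the default gateway is weighted higher than the backup one -->
        <heuristic program="ping -c1 -w1 192.168.1.254" score="2" interval="2" tko="4"/>
        <heuristic program="ping -c1 -w1 192.168.1.253" score="1" interval="2" tko="4"/>
    </quorumd>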
on the system.
Is there something wrong?
I'm using RHEL 5.3 with:
cman-2.0.98-1.el5.x86_64
openais-0.80.3-22.el5.x86_64
rgmanager-2.0.46-1.el5.x86_64
Thanks in advance. Cheers,
Rafael
--
Rafael Micó Miranda
Hi Andrew,
On Thu, 2009-11-26 at 11:35 +0100, Andrew Beekhof wrote:
> On Wed, Nov 25, 2009 at 8:41 PM, Rafael Micó Miranda
> wrote:
> > Hi all
> >
> > Is there any automated tool to test the functionality and availability
> > of a cluster configuration? Which
- Cluster daemon failures
- Fencing device failures
- Qdisk failures
- etc.
If the answer is "no", what battery of tests would you apply to your
cluster before considering it "stable and ready" for production
purposes?
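Lacking such a tool, a hand-rolled battery might look like this (a
rough sketch using standard RHCS commands; the node name is
illustrative, and this belongs on a test cluster only):

    clustat                      # baseline: all members online, services started
    killall -9 rgmanager         # simulate a resource-manager daemon crash
    cman_tool status             # verify the quorum state after each step
    fence_node node2             # confirm the fence device really works
    mkqdisk -L                   # check that the quorum disk is still visible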
Thanks in advance,
Rafael
--
Rafael Micó Miranda
https://www.redhat.com/archives/cluster-devel/2009-October/msg00012.html
I'm pretty interested in it, and I will make any necessary changes to it
if needed.
Thanks in advance,
Rafael
--
Rafael Micó Miranda
--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster
> > plan to use the two-node cluster configuration:
> >
> > Is there any other consideration to take into account to keep the
> > cluster quorate?
> >
> http://sources.redhat.com/cluster/wiki/FAQ/CMAN#two_node_dual may give
> you additional information.
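For reference, the special two-node mode that FAQ entry describes is
enabled with the cman flags below; quorum stays at one vote, so fencing
must work reliably:

    <cman two_node="1" expected_votes="1"/>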
Hi Andrew
On Tue, 2009-10-20 at 08:43 +0200, Andrew Beekhof wrote:
> On Mon, Oct 19, 2009 at 11:54 PM, Rafael Micó Miranda
> wrote:
>
> > This is not the situation, as I will not have
> > a shared storage device available, and this will be the first cluster
Is there any other consideration to take into account to keep the
cluster quorate?
Any help/recommendations will be appreciated.
Cheers,
Rafael
--
Rafael Micó Miranda
shared storage clusters.
I'm sorry that the only proposal is using VMware; maybe someone can
propose more solutions.
Cheers,
--
Rafael Micó Miranda
any kind of filesystem (you use a raw device).
I'm very interested in this situation because I think I will be
deploying a cluster in a similar situation before long, but without Xen,
so I hope I will get the benefits of this "monitor" method (if it exists
and works). Please keep me posted.
Thanks a lot for the report; I checked the Bugzilla just a couple of
days ago.
If Red Hat fixes this "bug", I'll try again to upload the
lvm-cluster.sh resource script, with some kind of README, to get it into
the project.
Cheers and thanks again,
Rafael
--
Rafael Micó Miranda
I did not even think that
it would be possible to use it. I would expect CLVM not to propagate
changes if it is set to 1. Have you done any tests on this? Is the
configuration working as you expected?
Cheers,
Rafael
--
Rafael Micó Miranda
2009, at 12:43 +0200, brem belguebli wrote:
> Hi,
>
> The last sentence seems to mean that Gigi wants to create a GFS on top
> of an NFS filesystem.
>
> This GFS will then be exported through NFS.
>
> Gigi, is that what you want to do?
>
> PS: how is it going, Rafael?
>
The documentation is here:
http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.4/html/Global_Network_Block_Device/index.html
I hope this helps.
Cheers,
Rafael
--
Rafael Micó Miranda
your fencing devices. Maybe they always return a "success" return code
even when it is not true.
On the other hand, about the LVM volume, please check this in order to
establish your service:
https://www.redhat.com/archives/linux-cluster/2009-July/msg00259.html
>
> My cluster.conf, with which I ran those tests, is here [1].
>
> [1] http://pastebin.com/m6a23734a
>
Cheers,
Rafael
--
Rafael Micó Miranda
> with clusvcadm, which causes the service to fail and become inactive,
> forcing me to disable and enable it again.
>
I tested it and it should work. Same answer as before: give us a copy of
your current cluster.conf.
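For reference, the disable/enable cycle mentioned above looks like this
(a sketch; the service and member names are illustrative):

    clusvcadm -d myservice            # disable the failed service
    clusvcadm -e myservice -m node1   # enable it again on a chosen member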
> Thanks.
>
> everything I need. I hope that it
> can. =)
>
I don't have any knowledge of that resource script; I'm sorry I can't
help.
Cheers,
Rafael
--
Rafael Micó Miranda
?
Yes, but if you have a management console for your VM service you will
need a virtual IP which "floats" with the service, so you can always
connect to the same IP to check the status of your virtualization
service. You can do that by attaching an IP resource to your service.
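In cluster.conf terms that is a one-liner inside the service; a minimal
sketch (the names and the address are illustrative):

    <service name="vm_service" autostart="1">
        <ip address="192.168.1.50" monitor_link="1"/>
        <!-- the VM / management-console resources go here -->
    </service>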
>
> [1]
> > the shortcomings of CMAN,
> > actually.
> >
> > I hope this helps. Just ask whatever you want.
> >
>
> Thank you very much.
>
Thanks for your interest. Cheers,
Rafael
--
Rafael Micó Miranda
cluster.conf.example
Description: XML document
executing
"lvchange -aey volgrp01/logvol01". This command is executed (with the
proper volume group and logical volume names) internally by the resource
script.
I designed the lvm-cluster.sh resource script to be verbose; maybe you
can paste your logs here (they go to /var/log/messages by default on
RHEL).
failure (hang-up, fencing...) the logical volume is not open
anymore, so it can be exclusively activated on a new node.
All this was tested manually, but this is the expected behaviour of the
lvm-cluster.sh resource script.
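A minimal sketch of the activation flow described above (assuming clvmd
is running and using the names from the earlier example):

    lvchange -aey volgrp01/logvol01   # exclusive activation: only this node opens the LV
    # ... the service runs; other nodes cannot activate the volume ...
    lvchange -aen volgrp01/logvol01   # exclusive deactivation before the service moves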
Link to lvm-cluster.sh resource script:
https://www.redhat.com/archives/cluster-devel/2009-Ju
ted.
>
> Brem
>
>
>
> 2009/7/21 Rafael Micó Miranda:
> Hi Brem,
>
> On Tue, 2009-07-21 at 16:40 +0200, brem belguebli wrote:
>
Please check this link:
https://www.redhat.com/archives/cluster-devel/2009-June/msg00020.html
I found exactly the same problem as you, and I developed the
"lvm-cluster.sh" script to solve the needs I had. You can find the
script in that thread.
https://www.redhat.com/mailman/listinfo/cluster-devel
I will subscribe to it right now.
Thanks,
--
Rafael Micó Miranda
--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster
Hi Jonathan
On Thu, 2009-06-04 at 16:39 -0500, Jonathan Brassow wrote:
> On Jun 4, 2009, at 1:48 PM, Rafael Micó Miranda wrote:
>
> I am sorry, I have not received your e-mail yet. I suppose it could
> have been caught by my spam filter. Could you please try to send it
> again?
Hi Jonathan,
On Thu, 2009-06-04 at 12:04 -0500, Jonathan Brassow wrote:
> I missed that post. Perhaps you could send it directly to me?
>
> brassow
>
>
I have just sent them to you.
Thanks in advance,
--
Rafael Micó Miranda
Hi Fabio,
On Tue, 2009-06-02 at 07:04 +0200, Fabio M. Di Nitto wrote:
> Hi Rafael,
>
> On Mon, 2009-06-01 at 21:17 +0200, Rafael Micó Miranda wrote:
[...]
>
>
> The best way to submit is to post the code to the cluster-de...@redhat.com
> mailing list. We don't have
I have done some testing, but of course
they need much more before they can be put into the main project.
Sincerely yours,
Rafael Micó Miranda
--
Rafael Micó Miranda