No, I'm asking something different:
To have the same service running on more than one node at the same time.
Sorry, I was out yesterday and most of today!
I finally got what I need partially running:
I created two resources with different mount points:
/opt/Central
/opt/Collector
At least it is a doable workaround, because the application can be installed anywhere.
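For illustration, a cluster.conf resource section with two such fs resources might look like the sketch below. Only the /opt/Central and /opt/Collector mount points come from this thread; the resource names, device paths, and flags are placeholders:

```xml
<rm>
  <resources>
    <!-- Two non-clustered ext3 filesystems with distinct mount points.
         Device paths are hypothetical, not from the thread. -->
    <fs name="Central_FS" mountpoint="/opt/Central"
        device="/dev/vg_app/central" fstype="ext3" force_unmount="1"/>
    <fs name="Collector_FS" mountpoint="/opt/Collector"
        device="/dev/vg_app/collector" fstype="ext3" force_unmount="1"/>
  </resources>
</rm>
```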
On 09/28/2011 10:09 AM, Ruben Sajnovetzky wrote:
> Thanks for this.
> I have still a long way to learn :)
That's why clustering is fun! :D
--
Digimer
E-Mail: digi...@alteeve.com
Freenode handle: digimer
Papers and Projects: http://alteeve.com
Node Assassin: http://nodeassa
Thanks for this.
I have still a long way to learn :)
We crossed e-mails :)
I sent the new, fresh configuration.
I thought about fencing; the problem is that we have a very "odd"
configuration, because I don't really need to fence anything ... Maybe I
can establish a kind of rule like "If Central is not working, Collector can't
work", or similar; I will think about it.
Ok, that *looks* fine. So when you start cman and rgmanager, what
does 'clustat' show?
Also, *set up fencing*. Without fencing configured, weird things will
happen. Once you have fencing configured and tested, paste the updated
cluster.conf and the output of clustat.
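As a minimal sketch of what "set up fencing" can look like in cluster.conf, assuming nodes with IPMI-capable management boards (node names, addresses, and credentials below are all placeholders; fence_ipmilan is one commonly used agent, not necessarily what this cluster needs):

```xml
<clusternodes>
  <clusternode name="node1.example.com" nodeid="1">
    <fence>
      <method name="ipmi">
        <device name="ipmi_n1"/>
      </method>
    </fence>
  </clusternode>
  <!-- node2 would be configured the same way, referencing its own fencedevice -->
</clusternodes>
<fencedevices>
  <!-- fence_ipmilan power-cycles a node through its BMC; all values are placeholders -->
  <fencedevice name="ipmi_n1" agent="fence_ipmilan"
               ipaddr="10.0.0.11" login="admin" passwd="secret"/>
</fencedevices>
```

Each fence device should then be tested before relying on it, for example by fencing each node manually and confirming it actually powers off or reboots.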
I copied the full cluster.conf; I deleted everything else to "concentrate"
on the issue.
Now I re-created everything from scratch, with only the FS service. I'm
copying here the files and the output you requested.
The situation is still the same.
cluster.conf file:
On 09/28/2011 06:20 AM, Ruben Sajnovetzky wrote:
> [quoted cluster.conf lost to the archive's HTML rendering; only the attribute post_join_delay="30" survives]
On 09/28/2011 05:49 AM, Rajagopal Swaminathan wrote:
> Greetings,
>
> On Wed, Sep 28, 2011 at 6:03 AM, Ruben Sajnovetzky wrote:
>>
>>FS Type: ext3
>
> Shouldn't it be GFS/GFS2?
You can use a non-clustered FS if you're not mounting the same device on
multiple nodes. I'm not sure why you'd want to.
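The distinction can be sketched in cluster.conf terms: a non-clustered ext3 filesystem uses the fs resource agent and may be mounted on only one node at a time, while a GFS2 filesystem uses the clusterfs agent and can be mounted on all nodes at once. Resource names and device paths below are illustrative only:

```xml
<resources>
  <!-- ext3 on a per-node (or failover) device: only one node may mount it at a time -->
  <fs name="app_fs" mountpoint="/opt/app"
      device="/dev/vg_app/local" fstype="ext3"/>
  <!-- GFS2 on shared storage: may be mounted concurrently on every node -->
  <clusterfs name="app_gfs2" mountpoint="/opt/app"
             device="/dev/vg_shared/app" fstype="gfs2"/>
</resources>
```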
On 09/28/2011 06:09 AM, Ruben Sajnovetzky wrote:
> This approach didn’t work either :(
> First server started service the second couldn’t start
You only shared a small snippet of your cluster.conf, and none of the
other requested info. I don't know what might be missing versus omitted.
Here is the cluster.conf (didn't get access to run other commands yet) :
This approach didn't work either :(
The first server started the service; the second couldn't start.
Greetings,
On Wed, Sep 28, 2011 at 6:03 AM, Ruben Sajnovetzky wrote:
>
> FS Type: ext3
Shouldn't it be GFS/GFS2?
--
Regards,
Rajagopal
--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster
I might be doing something wrong, because you say "you are fine" but it
didn't work :(
All servers have "/opt/app" mounted on the same internal disk partition.
They are not shared; it is just that they all have an identical layout.
I tried to create:
Resource name: Central_FS
Device: /dev/mapper/VolGro
On 09/27/2011 05:04 PM, Ruben Sajnovetzky wrote:
>
> Good example, thanks.
> Not sure if is doable because we could have 10 servers and the idea to have
> 10 service instances could be tricky to admin :(
Oh? How so? The file would be a bit long, but even with ten definitions
it should still be manageable.
Good example, thanks.
Not sure if it is doable, because we could have 10 servers, and having
10 service instances could be tricky to administer :(
What about the other question, about using the same device names and
mount points?
--
Sent from my PDP-11
Forgot to include an example;
This link shows RGManager/cluster.conf configured with two single-node
failoverdomains (for managing the storage services needed to be running
on both nodes in a 2-node cluster) and two failoverdomains used for a
service that can migrate (a VM, specifically).
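The linked example itself is lost from this archive, but the layout described above can be sketched as follows: two restricted, single-node failover domains each pin one storage service to one node, so both services run at once. All node, domain, and service names here are invented for illustration:

```xml
<rm>
  <failoverdomains>
    <!-- Each domain is restricted to exactly one node, pinning its service there -->
    <failoverdomain name="only_n1" restricted="1">
      <failoverdomainnode name="node1.example.com" priority="1"/>
    </failoverdomain>
    <failoverdomain name="only_n2" restricted="1">
      <failoverdomainnode name="node2.example.com" priority="1"/>
    </failoverdomain>
  </failoverdomains>
  <!-- One instance of the storage service per node; together they run cluster-wide.
       storage_fs would be an <fs> resource defined under <resources>. -->
  <service name="storage_n1" domain="only_n1" autostart="1">
    <fs ref="storage_fs"/>
  </service>
  <service name="storage_n2" domain="only_n2" autostart="1">
    <fs ref="storage_fs"/>
  </service>
</rm>
```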
Hello,
I'm in the process of designing a replacement for a Veritas
implementation and have to find similar functionality; I'm not
sure whether this is doable in Red Hat Cluster:
We have a distributed application that runs on several servers
simultaneously, and that application must run in a cluster.