Brem,
It's been my understanding that the kernel panic technique you are
describing is essentially undesirable because it leaves the kernel in an
unknown state. Basically anything can happen. The OS doesn't have to do a
sync, the HBA doesn't have to flush, etc. Since Red Hat isn't in the business of buildin
Thanks to all
shalom.kle...@hp.com
On Thu, Mar 4, 2010 at 12:00 AM, Lon Hohberger wrote:
> On Wed, 2010-03-03 at 13:10 +0200, שלום קלמר wrote:
> > Hi.
> >
> > I got 2 power supplies. But if someone pulls the power cables by
> > mistake, does that mean
> >
> > that the services will not fail over?
Hi all,
Just had a crash on our 3-node Red Hat Enterprise Linux 5.4 cluster
that looks a lot like
https://bugzilla.redhat.com/show_bug.cgi?id=520720. We're running
kernel 2.6.18-164.11.1.el5. Here is the traceback:
[2010-03-03 19:18:27] Unable to handle kernel NULL pointer dereference at
Greetings,
On Thu, Mar 4, 2010 at 7:53 AM, Jeff Karpinski wrote:
> Man, I feel dumb. I should forfeit my RHCE. :-P
>
Don't.
I am not an RHCE.
But with 26 years in the IT industry, I have a couple of feathers in my cap,
though they are of extinct birds like the Z80/Prime 550/Prime
750/MicroVAX/MS-DOS (1.0
Greetings,
On Thu, Mar 4, 2010 at 10:03 AM, Rajagopal Swaminathan wrote:
> quite some months and they are still ok for the last 5 years.
>
s/5/3/g
and oh they are terminal servers.
Regards,
Rajagopal
Greetings,
On Thu, Mar 4, 2010 at 1:15 AM, Leo Pleiman wrote:
> GFS can't be used with software raid, and since you won't be
> using GFS you won't need a fence device.
>
Urmm
I have administered a cluster with GFS over CLVM over DRBD over md for
quite some months and they are still ok for the last 5 years.
Hello again Leo,
I'm not exactly sure I follow you on this one.
# ps -efaww | grep md0
root      2629    75  0 18:30 ?        00:00:02 [md0_raid1]
# ps -efaww | grep 75
root        75     1  0 18:29 ?        00:00:00 [kthread]
# dmesg | grep "md:" | less
md: md driver 0.90.3 MAX_MD_DEVS=256, M
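For what it's worth, the quickest way to see what the md layer has actually
assembled, and to tear an array down by hand, is something like this (md0
below is just an example device):

# cat /proc/mdstat
# mdadm --detail /dev/md0
# mdadm --stop /dev/md0     (refuses while the array is mounted or in use)

The [md0_raid1] entry in ps is only the kernel thread backing the array;
stopping the array with mdadm is what actually makes it go away.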
> On Thu, Mar 4, 2010 at 5:22 AM, Lon Hohberger wrote:
> On Tue, 2010-03-02 at 10:37 +0800, Bernard Chew wrote:
>> >
>>
>> Hi Lon,
>>
>> Fencing works perfectly if just the VM dies.
>>
>> Thanks,
>> Bernard
>>
>
> Thanks -- this needs to be moved to bugzilla, I think. Maybe there's
> something obvious that I'm missing.
Our assigned Red Hat engineer was on-site today and pointed out the blindingly
obvious solution. Can't believe I didn't think of it: Run NFS as a clustered
service and have the VMs mount that. That way ANY system - even outside of the
cluster - can also access the data.
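For the archives, a minimal sketch of an rgmanager service of that shape
(untested; the export path, client range and floating IP below are made-up
placeholders, not the actual config):

# cat /etc/cluster/cluster.conf     (relevant <rm> extract)
<service autostart="1" name="nfs-vmstore" recovery="relocate">
  <fs name="vmstore-fs" device="/dev/vg_cluster/lv_vmstore"
      mountpoint="/export/vmstore" fstype="ext3" force_unmount="1">
    <nfsexport name="vmstore-export">
      <nfsclient name="vm-hosts" target="192.168.1.0/24" options="rw,no_root_squash"/>
    </nfsexport>
  </fs>
  <ip address="192.168.1.200" monitor_link="1"/>
</service>

Any host, clustered or not, then simply mounts 192.168.1.200:/export/vmstore.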
Man, I feel dumb. I should forfeit my RHCE. :-P
Hi Lon,
The problem could be addressed in a different manner.
Most cluster stacks that I know of treat a network failure (either link
down or inability to reach the other nodes), as opposed to a power failure,
as critical and hard-reboot (no sync) the failing node
On Wed, 03 Mar 2010 16:53:49 -0500, Lon Hohberger wrote:
> As it happens, the 'fs' file system type looks for child 'fs' resources:
>
> ... but it does not have an entry for 'lvm', which would be required to
> make it work in the order you specified.
With this argument I understand ex
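If it helps, the layout people usually end up with (sketch only, with made-up
resource names; the mount point is the one from the original mail) keeps lvm
and fs as siblings directly under the service, as in the HA-LVM examples,
so the volume is started before the file system:

# cat /etc/cluster/cluster.conf     (relevant extract)
<service autostart="1" name="TEST" recovery="relocate">
  <lvm name="newtemp-lvm" vg_name="vg_test" lv_name="lv_newtemp"/>
  <fs name="newtemp-fs" device="/dev/vg_test/lv_newtemp"
      mountpoint="/oradata/TEST/newtemp" fstype="ext3" force_unmount="1"/>
</service>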
On Wed, 2010-03-03 at 13:10 +0200, שלום קלמר wrote:
> Hi.
>
> I got 2 power supplies. But if someone pulls the power cables by
> mistake, does that mean
>
> that the services will not fail over?
The problem is:
no power = no ping + no DRAC access
no network = no ping, no DRAC access
If there'
On Wed, 2010-03-03 at 18:01 +0100, Gianluca Cecchi wrote:
> The new desired mount point is to be put under /oradata/TEST/newtemp
>
>
> Current extract of cluster.conf is
>
On Tue, 2010-03-02 at 10:37 +0800, Bernard Chew wrote:
> >
>
> Hi Lon,
>
> Fencing works perfectly if just the VM dies.
>
> Thanks,
> Bernard
>
Thanks -- this needs to be moved to bugzilla, I think. Maybe there's
something obvious that I'm missing.
-- Lon
If you remove it from the initrd then it won't be available to the cluster
software. At boot time both nodes will try to start the md devices. You'll need
to add an init script to stop the md devices very early in the boot process, an
S00 script would be appropriate. That will ensure that when y
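For what it's worth, a rough sketch of such an early stop script (untested;
it assumes a single array /dev/md0 and that mdadm lives on the root
filesystem):

#!/bin/bash
# /etc/init.d/stop-md -- linked in as S00stop-md so it runs before anything
# else; stops any md array the kernel/initrd auto-assembled at boot, leaving
# assembly to the cluster resource manager on the active node.
# chkconfig: 345 00 99
case "$1" in
  start)
    if grep -q "^md0" /proc/mdstat 2>/dev/null; then
      mdadm --stop /dev/md0
    fi
    ;;
  *)
    ;;
esac
exit 0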
Leo,
Thanks for the quick response. It's nice to know my initial thought was
close to what you are recommending as well.
As for the details - (disable MD startup); wouldn't this mean - I'd need
to rebuild with something like - "mkinitrd --omit-raid-modules" - but if
I did that... wouldn't t
One solution would be to build the two machines as a two-node cluster. Qdisk is
normally recommended but two-node clusters are supported without it. Use the
cluster resource manager to control md management of the FCAL drives. You'll
need to disable md startup and create a custom script to allow
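As an aside, the usual way to run two nodes without qdisk is cman's two_node
flag (sketch; cluster and node names below are placeholders):

# cat /etc/cluster/cluster.conf     (relevant extract)
<cluster name="fcal-cluster" config_version="1">
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="node1" nodeid="1" votes="1"/>
    <clusternode name="node2" nodeid="2" votes="1"/>
  </clusternodes>
</cluster>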
Hail Linux Cluster gurus,
I have researched myself into a corner and am looking for advice. I've
never been a "clustered storage guy", so I apologize for the potentially
naive set of questions. ( I am savvy on most other aspects of networks,
hardware, OS's etc... but not storage systems).
On Wednesday, 03 March 2010 at 14:23 +0100, brem belguebli wrote:
> Hi Xavier,
Hi Brem, Xavier,
> 2010/3/3 Xavier Montagutelli :
> > On Wednesday 03 March 2010 03:11:50 brem belguebli wrote:
> >> Hi,
> >>
> >> I experienced a strange cluster behavior that I couldn't explain.
> >>
> >> I have a 4
carlopmart wrote:
Christine Caulfield wrote:
On 03/03/10 09:02, carlopmart wrote:
mart...@tenheuvel.net wrote:
Hi all,
I am trying to set up a rh5.4 cluster with only two nodes, but I can't.
Under
/var/log/messages I can see a lot of errors like these:
These nodes have two network interfaces,
Hello,
my problem begins from this need:
- having a RHEL 5.4 cluster with 2 nodes where I have HA-LVM in place and
some lvm/fs pair resources composing one service
I want to add a new lvm/fs to the cluster, without disrupting the running
service.
My already configured and running lvm/mountpoints
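For what it's worth, the sequence I would expect (sketch only; volume group,
LV name and size are placeholders, and it assumes the VG is already covered
by your HA-LVM setup):

# on the node currently running the service
lvcreate -n lv_newtemp -L 10G vg_test
mkfs -t ext3 /dev/vg_test/lv_newtemp
mkdir -p /oradata/TEST/newtemp
# add the new lvm/fs pair to the service in /etc/cluster/cluster.conf and
# bump config_version, then push the new config to all nodes:
ccs_tool update /etc/cluster/cluster.conf
cman_tool version -r <new_config_version>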
Exactly.
2010/3/3 שלום קלמר :
> Hi.
>
> I got 2 power supplies. But if someone pulls the power cables by mistake,
> does that mean
>
> that the services will not fail over?
>
> Regards
>
> Shalom.
>
> On Wed, Mar 3, 2010 at 1:15 AM, Georgi Stanojevski wrote:
>>
>> On Tue, Mar 2, 2010 at 6:17 PM, ש
Hi Xavier,
2010/3/3 Xavier Montagutelli :
> On Wednesday 03 March 2010 03:11:50 brem belguebli wrote:
>> Hi,
>>
>> I experienced a strange cluster behavior that I couldn't explain.
>>
>> I have a 4 nodes Rhel 5.4 cluster (node1, node2, node3 and node4).
>>
>> Node1 and node2 are connected to an et
That won't work then, since pulling the power cable effectively disables the
DRAC port. You need some out-of-band controller for that type of fencing to
work. I use APC units; others will prefer different units. It's the same
problem as with an iLO. Actually, you can get into the same problem if you
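For completeness, a rough sketch of layering a switched PDU in as a second
fence method (all names, addresses and credentials below are made-up
placeholders):

# cat /etc/cluster/cluster.conf     (relevant extract)
<clusternode name="node1" nodeid="1">
  <fence>
    <method name="1">
      <device name="node1-drac"/>
    </method>
    <method name="2">
      <device name="apc-pdu" port="1"/>
    </method>
  </fence>
</clusternode>
<fencedevices>
  <fencedevice agent="fence_drac5" name="node1-drac" ipaddr="10.0.0.11" login="root" passwd="secret"/>
  <fencedevice agent="fence_apc" name="apc-pdu" ipaddr="10.0.0.20" login="apc" passwd="secret"/>
</fencedevices>

fenced tries method 1 first and falls back to method 2, so losing the DRAC
along with the node's power still lets the node be fenced through the PDU.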
Hi.
I got 2 power supplies. But if someone pulls the power cables by mistake, does
that mean
that the services will not fail over?
Regards
Shalom.
On Wed, Mar 3, 2010 at 1:15 AM, Georgi Stanojevski wrote:
> On Tue, Mar 2, 2010 at 6:17 PM, שלום קלמר wrote:
>
>>
>> The only test which faile
Hi.
I am running fence_ipmilan on iDRAC6.
Regards
Shalom
On Wed, Mar 3, 2010 at 6:22 AM, Rajagopal Swaminathan <
raju.rajs...@gmail.com> wrote:
> Greetings,
>
> On Tue, Mar 2, 2010 at 10:40 PM, שלום קלמר wrote:
> > Hello.
> >
> > Everything is working fine. We did some failover tests, all test
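As an aside, a quick way to sanity-check that agent against an iDRAC6 from
the command line (address and credentials are placeholders; -P selects IPMI
lanplus, which the iDRAC6 generally requires):

# fence_ipmilan -a 10.0.0.11 -l root -p secret -P -o status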
> On Wed, Mar 3, 2010 at 4:09 PM, Kit Gerrits wrote:
>
> Might it be a good idea to stick them in the same cluster, but with
> different failure domains?
> That way, chances are higher of staying quorate.
>
> Just a thought...
>
> Kit
>
> -Original Message-
> From: linux-cluster-boun...@re
Christine Caulfield wrote:
On 03/03/10 09:02, carlopmart wrote:
mart...@tenheuvel.net wrote:
Hi all,
I am trying to set up a rh5.4 cluster with only two nodes, but I can't.
Under
/var/log/messages I can see a lot of errors like these:
These nodes have two network interfaces, one on the same ne
On 03/03/10 09:02, carlopmart wrote:
mart...@tenheuvel.net wrote:
Hi all,
I am trying to set up a rh5.4 cluster with only two nodes, but I can't.
Under
/var/log/messages I can see a lot of errors like these:
These nodes have two network interfaces, one on the same network for
cluster
operation
mart...@tenheuvel.net wrote:
Hi all,
I am trying to set up a rh5.4 cluster with only two nodes, but I can't.
Under
/var/log/messages I can see a lot of errors like these:
These nodes have two network interfaces, one on the same network for
cluster
operation and another on a different subnet. L
> Hi all,
>
> I am trying to set up a rh5.4 cluster with only two nodes, but I can't.
> Under
> /var/log/messages I can see a lot of errors like these:
>
> These nodes have two network interfaces, one on the same network for
> cluster
> operation and another on a different subnet. Like this:
>
>
Might it be a good idea to stick them in the same cluster, but with
different failure domains?
That way, chances are higher of staying quorate.
Just a thought...
Kit
-Original Message-
From: linux-cluster-boun...@redhat.com
[mailto:linux-cluster-boun...@redhat.com] On Behalf Of Xavier