Re: [Linux-cluster] Where to find information on HA-LVM

2012-03-27 Thread Jankowski, Chris
Ming, I have never seen HA-LVM properly described either. Here is a little bit: http://www.nxnt.org/2010/09/redhat-cluster-howto/ The notion of tags is crucial to understanding how HA-LVM works. It worked pretty well the last time I used it about 2 years ago, but I did not do a lot of testin
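
For reference, a minimal sketch of the tag-based HA-LVM variant the note above refers to (hostnames and VG names hypothetical): the rgmanager lvm agent tags a volume group with the owning node's name, and lvm.conf restricts activation to the root VG plus VGs carrying the local tag.

    # /etc/lvm/lvm.conf on every node: only the root VG and VGs tagged
    # with this node's hostname may be activated.
    volume_list = [ "rootvg", "@node1.example.com" ]

    # Rebuild the initrd so the filter also applies at boot (RHEL 5/6):
    mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)

    # The tags the lvm agent sets are visible with:
    lvs -o vg_name,lv_name,lv_tags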

[Linux-cluster] Why RHEV 3.0 does not use GFS2?

2012-01-27 Thread Jankowski, Chris
I am curious why the designers of RHEV 3.0 did not use GFS2 for their shared storage. It seems that this would be a natural choice. Instead, RHEV 3.0 allows either NFS or raw shared LUNs, I believe. Does anybody have some thoughts on this subject? Thanks and regards, Chris Jankowski

Re: [Linux-cluster] GFS2 support EMC storage SRDF??

2011-12-13 Thread Jankowski, Chris
*Unidirectional* replication is probably a better phrase to describe what EMC SRDF and all other typical block mode storage arrays do for replication. Typically this is used for manual or semi-automated DR systems and works very well for this purpose. This approach splits the HA and DR domains.

Re: [Linux-cluster] GFS2 support EMC storage SRDF??

2011-12-11 Thread Jankowski, Chris
Yu, GFS2, like any other filesystem being replicated, is not aware at all of the block replication taking place in the storage layer. This is entirely transparent to the OS and to the filesystems, clustered or not. Replication happens entirely in the array/SAN layer and the servers are not involved at

Re: [Linux-cluster] DR node in a cluster

2011-07-07 Thread Jankowski, Chris
sk vote=3, node votes=3, total=6) which is not recommended, I guess (?). Steve On Wed, Jul 6, 2011 at 11:46 AM, Jankowski, Chris mailto:chris.jankow...@hp.com>> wrote: Paras, A curiosity question: How do you make sure that your storage will survive failure of *either* of your sites without

Re: [Linux-cluster] DR node in a cluster

2011-07-06 Thread Jankowski, Chris
Paras, A curiosity question: How do you make sure that your storage will survive failure of *either* of your sites without loss of data or continuity of service? What storage configuration are you using? Thanks and regards, Chris From: linux-cluster-boun...@redhat.com [mailto:linux-cluster-b

Re: [Linux-cluster] How do you HA your storage?

2011-05-30 Thread Jankowski, Chris
There is a school of thought among practitioners of Business Continuity that says: HA != DR The two cover different domains and mixing the two concepts leads to horror stories. Essentially, HA covers a single (small or large) component failure. If components can be duplicated and work

Re: [Linux-cluster] Announcing - RHCS2 on EL5, Xen, DRBD and rgmanager 2-node cluster tutorial

2011-05-15 Thread Jankowski, Chris
Digimer, I think you published an earlier version before. Isn't it time to introduce versioning, release dates, and also a list of deltas from version to version? Mundane things, I know. But if you want to make this a useful document for others, they are all very necessary, I think. Regards,

Re: [Linux-cluster] oracle DB is not failing over on killin PMON deamon

2011-05-11 Thread Jankowski, Chris
Sufyan, What username does the instance of Oracle DB run as? Is this "orainfra" or some other username? The scripts assume a user named "orainfra". If you use a different username then you need to modify the scripts accordingly. Regards, Chris Jankowski -Original Message- From: linux

Re: [Linux-cluster] How do you HA your storage?

2011-05-01 Thread Jankowski, Chris
and DR in multi-site environment. There are specialized forums for this. Regards, Chris -Original Message- From: urgrue [mailto:urg...@bulbous.org] Sent: Sunday, 1 May 2011 02:42 To: linux clustering Cc: Jankowski, Chris Subject: Re: [Linux-cluster] How do you HA your storage? I do

Re: [Linux-cluster] How do you HA your storage?

2011-04-30 Thread Jankowski, Chris
I am just wondering, why would you like to do it this way? If you have a SAN then by implication you have a storage array on the SAN. This storage array will normally have the capability to give you highly available storage through RAID{1,5,6}. Moreover, any decent array will also provide redundanc

Re: [Linux-cluster] Solution for HPC

2011-04-20 Thread Jankowski, Chris
Konrad, The first thing to do is to recompile your application using a parallelizing compiler with the parallelism parameter set to the number of cores on your server. This of course assumes that you have the source code for your application. For a properly written Fortran or C application, a modern
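
A hedged illustration of the idea (flags assume GCC; file names hypothetical):

    # Auto-parallelize loops across 8 cores, or build OpenMP-annotated
    # sources and pin the thread count at run time:
    gcc -O3 -ftree-parallelize-loops=8 -o app app.c
    gcc -O3 -fopenmp -o app_omp app_omp.c
    OMP_NUM_THREADS=8 ./app_omp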

Re: [Linux-cluster] Question with RHCS and oracle

2011-04-19 Thread Jankowski, Chris
uster, but I used gfs. If I use an active-passive conf, I believe that I shouldn't use gfs. Could anyone explain to me how to configure a cluster without gfs? Thanks! 2011/4/19 Jankowski, Chris mailto:chris.jankow...@hp.com>> Marcelo, The paper you mentioned is now 6 years old.

Re: [Linux-cluster] Question with RHCS and oracle

2011-04-19 Thread Jankowski, Chris
Marcelo, The paper you mentioned is now 6 years old. Quite a bit has changed in RHEL CS and Oracle DB since. If you need RAC and you are doing this for a living, I recommend that you invest in these two books: For Oracle 11g (published in 2010): http://www.amazon.com/Pro-Oracle-Database-11g-

Re: [Linux-cluster] gfs2 v. zfs?

2011-01-24 Thread Jankowski, Chris
>-Original Message- >From: linux-cluster-boun...@redhat.com >[mailto:linux-cluster-boun...@redhat.com] On Behalf Of Jeff Sturm >Sent: Tuesday, 25 January 2011 09:02 >To: linux clustering >Subject: Re: [Linux-cluster] gfs2 v. zfs? >> -Original Message- >> From: linux-cluster-boun..

Re: [Linux-cluster] rgmanager gets stuck on shutdown, if no services are running on its node.

2010-12-08 Thread Jankowski, Chris
outputs "Shutdown complete, exiting" and completes its own shutdown. - As a workaround, I set status_poll_interval="10" for the time being, although I believe that I should be forced to rely on short polling interval. Regards, Chris Jankowski -Original Mess

Re: [Linux-cluster] How do I implement an unmount only filesystem resource agent

2010-12-08 Thread Jankowski, Chris
...@redhat.com [mailto:linux-cluster-boun...@redhat.com] On Behalf Of Lon Hohberger Sent: Thursday, 9 December 2010 06:49 To: linux clustering Subject: Re: [Linux-cluster] How do I implement an unmount only filesystem resource agent On Mon, 2010-12-06 at 12:27 +, Jankowski, Chris wrote: > > T

Re: [Linux-cluster] rgmanager gets stuck on shutdown, if no services are running on its node.

2010-12-08 Thread Jankowski, Chris
clustering Subject: Re: [Linux-cluster] rgmanager gets stuck on shutdown, if no services are running on its node. On Wed, 2010-12-08 at 03:11 +, Jankowski, Chris wrote: > Hi, > > I configured a cluster of 2 RHEL6 nodes. > The cluster has only one HA service defined. > > I h

Re: [Linux-cluster] Heuristics for quorum disk used as a tiebreaker in a two node cluster.

2010-12-08 Thread Jankowski, Chris
n Behalf Of Lon Hohberger Sent: Thursday, 9 December 2010 07:33 To: linux clustering Subject: Re: [Linux-cluster] Heuristics for quorum disk used as a tiebreaker in a two node cluster. On Fri, 2010-12-03 at 10:10 +0000, Jankowski, Chris wrote: > This is exactly what I would like to achieve. I know w

Re: [Linux-cluster] rgmanager gets stuck on shutdown, if no services are running on its node.

2010-12-07 Thread Jankowski, Chris
on shutdown, if no services are running on its node. Hi, On 12/08/2010 04:11 AM, Jankowski, Chris wrote: > Hi, > > I configured a cluster of 2 RHEL6 nodes. > The cluster has only one HA service defined. > > I have a problem with rgmanager getting stuck on shutdown whe

[Linux-cluster] rgmanager gets stuck on shutdown, if no services are running on its node.

2010-12-07 Thread Jankowski, Chris
Hi, I configured a cluster of 2 RHEL6 nodes. The cluster has only one HA service defined. I have a problem with rgmanager getting stuck on shutdown when a certain set of conditions is met. The details follow. 1. If I execute "shutdown -h now" on the node that is *not* running the HA service th

[Linux-cluster] How do I implement an unmount only filesystem resource agent

2010-12-06 Thread Jankowski, Chris
Hi, I am configuring a service that uses HA-LVM and an XFS filesystem on top of it. The filesystem will be backed up by a separate script run from cron(8), creating an LVM snapshot of the filesystem and mounting it on a mountpoint. To have a foolproof HA service I need to: - Check if the sna

[Linux-cluster] Difference between -d and -s options of clusvcadm

2010-12-05 Thread Jankowski, Chris
Hi, What is the difference between -d and -s options of clusvcadm? When would I prefer using one over the other? The manual page for clusvcadm(8) says: -d Stops and disables the user service named -s Stops the service named until a member transition or until it is enabled a
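
In practice (service name hypothetical):

    # stop the service and keep it stopped until explicitly re-enabled,
    # surviving reboots and member transitions:
    clusvcadm -d ha_svc
    # stop the service, but allow it to start again on the next member
    # transition (or when enabled by hand):
    clusvcadm -s ha_svc
    # enable it again either way:
    clusvcadm -e ha_svc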

[Linux-cluster] Heuristics for quorum disk used as a tiebreaker in a two node cluster.

2010-12-03 Thread Jankowski, Chris
Hi, I am configuring a two node HA cluster that has only one service. The sole purpose of the cluster is to keep the service up with minimum disruption for the widest possible range of failure scenarios. I configured a quorum disk to make sure that after a failure of a node, the cluster (now co
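
A minimal quorumd sketch for this scenario, with a ping heuristic against the default gateway (label, address and timings hypothetical):

    <quorumd interval="1" tko="10" votes="1" label="qdisk">
        <heuristic program="ping -c1 -w1 192.168.1.254"
                   score="1" interval="2" tko="3"/>
    </quorumd>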

Re: [Linux-cluster] Validation failure of cluster.conf.

2010-12-03 Thread Jankowski, Chris
really appreciate it. Regards, Chris Jankowski -Original Message- From: Fabio M. Di Nitto [mailto:fdini...@redhat.com] Sent: Friday, 3 December 2010 19:27 To: linux clustering Cc: Jankowski, Chris Subject: Re: [Linux-cluster] Validation failure of cluster.conf. On 12/3/2010 6:33 AM

[Linux-cluster] Validation failure of cluster.conf.

2010-12-02 Thread Jankowski, Chris
Hi, I am in the process of building a cluster on RHEL6. I elected to build the /etc/cluster/cluster.conf (attached) by hand, i.e. no Conga. After I added fencing and fence devices, the configuration file no longer passes the validation check. ccs_config_validate reports the following error: [r...@boob
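
Two ways to run the check by hand on RHEL 6; the relax-ng schema path is the stock location, assuming the cman packages are installed:

    ccs_config_validate
    xmllint --relaxng /usr/share/cluster/cluster.rng --noout /etc/cluster/cluster.conf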

Re: [Linux-cluster] new cluster defined and storage added - what's next?

2010-11-28 Thread Jankowski, Chris
Yvette, You can: - either use GFS2 with concurrent access to your filesystem from both nodes, as this is a cluster filesystem - or use ext3/XFS as a failover filesystem - mounted by no more than one of the cluster nodes at any time. Either of the two approaches will have HA characteristics with
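
The two approaches map onto different rgmanager resource agents; hedged cluster.conf fragments (names and devices hypothetical):

    <!-- GFS2, mounted concurrently on every node -->
    <clusterfs name="data_gfs2" device="/dev/vg_data/lv_data"
               mountpoint="/data" fstype="gfs2"/>

    <!-- ext3/XFS as a failover filesystem, one node at a time -->
    <fs name="data_fs" device="/dev/vg_data/lv_data"
        mountpoint="/data" fstype="ext3" force_unmount="1"/>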

Re: [Linux-cluster] RHEL 6 cluster filesystem resource and LVM snapshots

2010-11-24 Thread Jankowski, Chris
action safe. On Wed, 2010-11-24 at 14:27 +, Jonathan Barber wrote: > On 24 November 2010 09:48, Xavier Montagutelli > wrote: > > On Wednesday 24 November 2010 09:34:48 Jankowski, Chris wrote: > >> Xavier, > >> > >> Thank you for the

Re: [Linux-cluster] RHEL 6 cluster filesystem resource and LVM snapshots

2010-11-24 Thread Jankowski, Chris
nux clustering Subject: Re: [Linux-cluster] RHEL 6 cluster filesystem resource and LVM snapshots On Wednesday 24 November 2010 01:20:42 Jankowski, Chris wrote: > Hi, > > 1. > I found in the "Logical Volume Manager Administration" manual for RHEL 6 on > p.12 and on p.35 the fo

Re: [Linux-cluster] RHEL 6 cluster filesystem resource and LVM snapshots

2010-11-23 Thread Jankowski, Chris
pport LVM snapshots? Thanks and regards, Chris Jankowski -----Original Message- From: Jankowski, Chris Sent: Wednesday, 24 November 2010 07:23 To: 'linux clustering' Subject: RE: [Linux-cluster] RHEL 6 cluster filesystem resource and LVM snapshots Roger, Thank you. I

Re: [Linux-cluster] RHEL 6 cluster filesystem resource and LVM snapshots

2010-11-23 Thread Jankowski, Chris
...@redhat.com] On Behalf Of Roger Pena Escobio Sent: Wednesday, 24 November 2010 01:32 To: linux clustering Subject: Re: [Linux-cluster] RHEL 6 cluster filesystem resource and LVM snapshots --- On Tue, 11/23/10, Jankowski, Chris wrote: > From: Jankowski, Chris > Subject: Re: [Linux-cluster]

Re: [Linux-cluster] RHEL 6 cluster filesystem resource and LVM snapshots

2010-11-23 Thread Jankowski, Chris
: [Linux-cluster] RHEL 6 cluster filesystem resource and LVM snapshots On Tuesday 23 November 2010 07:13:56 Jankowski, Chris wrote: > Hi, > > I am preparing a build of a RHEL 6 cluster with a filesystem resource(s) > (ext4 or XFS). The customer would like to use LVM snap

[Linux-cluster] RHEL 6 cluster filesystem resource and LVM snapshots

2010-11-22 Thread Jankowski, Chris
Hi, I am preparing a build of a RHEL 6 cluster with a filesystem resource(s) (ext4 or XFS). The customer would like to use LVM snapshots of the filesystems for tape backup. The tape backup may take a few hours after which the snapshot will be deleted. Questions: 1. Is the filesystem resourc
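
A hedged sketch of the backup flow being asked about (VG/LV names and sizes hypothetical); note that an XFS snapshot must be mounted with nouuid:

    lvcreate -s -L 10G -n lv_data_snap /dev/vg_data/lv_data
    mount -o ro,nouuid /dev/vg_data/lv_data_snap /mnt/snap   # nouuid needed for XFS only
    tar -cf /backup/data.tar -C /mnt/snap .
    umount /mnt/snap
    lvremove -f /dev/vg_data/lv_data_snap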

[Linux-cluster] XFS as a servicein RHEL 6 Linux Cluster.

2010-11-14 Thread Jankowski, Chris
Hi, RHEL 6 now officially supports XFS, as an additional subscription option, I believe. Does the RHEL 6 Linux Cluster provide the necessary module to configure an XFS filesystem as a failover service? Thanks and regards, Chris Jankowski

Re: [Linux-cluster] Starter Cluster / GFS

2010-11-11 Thread Jankowski, Chris
Regards, Chris Jankowski -Original Message- From: Digimer [mailto:li...@alteeve.com] Sent: Friday, 12 November 2010 14:44 To: Jankowski, Chris Cc: linux clustering Subject: Re: [Linux-cluster] Starter Cluster / GFS On 10-11-11 10:25 PM, Jankowski, Chris wrote: > Digimer, > >

Re: [Linux-cluster] Starter Cluster / GFS

2010-11-11 Thread Jankowski, Chris
need a minimum of 3 Ethernet interfaces per server and a minimum of 6 if all links will be bonded, but this is OK. Regards, Chris Jankowski -Original Message- From: Digimer [mailto:li...@alteeve.com] Sent: Friday, 12 November 2010 13:42 To: Jankowski, Chris Cc: linux clustering Subject: Re: [

Re: [Linux-cluster] Starter Cluster / GFS

2010-11-11 Thread Jankowski, Chris
luding login to the service processor is less than 1 ms. Delay or lack thereof is not a problem. The transactional nature of the processing is the issue. Regards, Chris Jankowski -Original Message- From: Digimer [mailto:li...@alteeve.com] Sent: Friday, 12 November 2010 03:39 To: linux

Re: [Linux-cluster] Starter Cluster / GFS

2010-11-11 Thread Jankowski, Chris
Of Digimer Sent: Friday, 12 November 2010 03:44 To: linux clustering Subject: Re: [Linux-cluster] Starter Cluster / GFS On 10-11-11 04:23 AM, Gordan Bobic wrote: > Jankowski, Chris wrote: >> Digimer, >> >> 1. >> Digimer wrote: >>>>> Both partitions will

Re: [Linux-cluster] Starter Cluster / GFS

2010-11-11 Thread Jankowski, Chris
: Thursday, 11 November 2010 21:08 To: linux clustering Subject: Re: [Linux-cluster] Starter Cluster / GFS Jankowski, Chris wrote: > Gordan, > > I do understand the mechanism. I was trying to gently point out that > this behaviour is unacceptable for my commercial IP customers. The >

Re: [Linux-cluster] Starter Cluster / GFS

2010-11-11 Thread Jankowski, Chris
: Re: [Linux-cluster] Starter Cluster / GFS Digimer wrote: > On 10-11-10 10:29 PM, Jankowski, Chris wrote: >> Digimer, >> >> 1. >> Digimer wrote: >>>>> Both partitions will try to fence the other, but the slower will lose and >>>>> get fenc

Re: [Linux-cluster] Starter Cluster / GFS

2010-11-10 Thread Jankowski, Chris
make sense in real world. I cannot think of one. Regards, Chris Jankowski -Original Message- From: Digimer [mailto:li...@alteeve.com] Sent: Thursday, 11 November 2010 15:30 To: linux clustering Cc: Jankowski, Chris Subject: Re: [Linux-cluster] Starter Cluster / GFS On 10-11-10 10:29 PM,

Re: [Linux-cluster] Starter Cluster / GFS

2010-11-10 Thread Jankowski, Chris
Digimer, 1. Digimer wrote: >>>Both partitions will try to fence the other, but the slower will lose and >>>get fenced before it can fence. Well, this is certainly not my experience in dealing with modern rack-mounted or blade servers where you use iLO (on HP) or DRAC (on Dell). What actually h
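
One way to make the race deterministic is a fencing delay on one node's device, so a chosen node always wins a mutual fence; a hedged fragment, assuming a fence agent version that honours the delay parameter (addresses and credentials hypothetical):

    <fencedevice agent="fence_ilo" name="ilo_node1"
                 ipaddr="10.0.0.11" login="fence" passwd="secret"/>
    <!-- node2's device waits 15 s, so node1 wins a mutual fence -->
    <fencedevice agent="fence_ilo" name="ilo_node2" delay="15"
                 ipaddr="10.0.0.12" login="fence" passwd="secret"/>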

Re: [Linux-cluster] Configurations of services?

2010-11-10 Thread Jankowski, Chris
Jakov, If you make it general enough you may end up with rsync. How would you position your tool in the continuum between ccs_tool update .. and rsync? Where would it add value? Regards, Chris Jankowski -Original Message- From: linux-cluster-boun...@redhat.com [mailto:linux-cluster

Re: [Linux-cluster] Starter Cluster / GFS

2010-11-10 Thread Jankowski, Chris
Robert, One reason is that with GFS2 you do not have to do an fsck on the surviving node after one node in the cluster failed. Doing fsck on a 20 TB filesystem with heaps of files may take well over an hour. So, if you built your cluster for HA, you'd rather avoid it. The locks need to be recovere

Re: [Linux-cluster] ha-lvm

2010-11-02 Thread Jankowski, Chris
Corey, I vaguely remember from my work on UNIX clusters many years ago that if /dir is the mount point of a mounted filesystem, then cd-ing to /dir or to any directory below /dir from an interactive shell will prevent an unmount of the filesystem, i.e. umount /dir will fail. I believe that this restr
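
This still holds on Linux; a quick demonstration, plus the tools that reveal the blocker:

    cd /dir
    umount /dir      # fails: umount: /dir: device is busy
    fuser -vm /dir   # PIDs and users holding the filesystem busy
    lsof +D /dir     # open files under the mount point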

[Linux-cluster] GFS2 changes between RHEL AP V5.x and 6?

2010-09-16 Thread Jankowski, Chris
Hi, I read the beta 2 release notes for RHEL 6. It mentions numerous changes in the cluster for RHEL 6, but nothing about GFS2. Are there any GFS2 changes in RHEL 6 compared with RHEL 5.x? Thanks and regards, Chris Jankowski -- Linux-cluster mailing list Linux-cluster@redhat.com https://www.

Re: [Linux-cluster] need help - Fencing problem

2010-09-08 Thread Jankowski, Chris
Why did you have to set iLO as non-shared? Thanks and regards, Chris From: linux-cluster-boun...@redhat.com [mailto:linux-cluster-boun...@redhat.com] On Behalf Of ESGLinux Sent: Wednesday, 8 September 2010 22:57 To: linux clustering Subject: Re: [Linux-cluster] n

Re: [Linux-cluster] Fencing through iLO and functioning of kdump

2010-08-28 Thread Jankowski, Chris
Ben, Thank you for pointing me at fence_scsi. It looks like fence_scsi will fit the bill elegantly. And it should be much more reliable than iLO fencing if the cluster uses a properly configured, dual-fabric FC SAN for shared storage. I read the fence_scsi manual page and have one more question.
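
fence_scsi works by revoking the victim node's SCSI-3 persistent-reservation key on the shared LUNs; the registration state can be inspected with sg3_utils (device path hypothetical):

    sg_persist --in --read-keys /dev/mapper/mpath0          # registered keys
    sg_persist --in --read-reservation /dev/mapper/mpath0   # current reservation holder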

[Linux-cluster] Fencing through iLO and functioning of kdump

2010-08-26 Thread Jankowski, Chris
Hi, How can I reconcile the need to have Kdump configured and operational on cluster nodes with the need for node fencing, most commonly and conveniently implemented through iLO on HP servers? Customers require Kdump configured and operational to be able to have kernel crashes analysed by

Re: [Linux-cluster] qdisk WITHOUT fencing

2010-06-18 Thread Jankowski, Chris
ki -Original Message- From: linux-cluster-boun...@redhat.com [mailto:linux-cluster-boun...@redhat.com] On Behalf Of Gordan Bobic Sent: Friday, 18 June 2010 18:38 To: linux clustering Subject: Re: [Linux-cluster] qdisk WITHOUT fencing On 06/18/2010 07:57 AM, Jankowski, Chris wrote: >

Re: [Linux-cluster] qdisk WITHOUT fencing

2010-06-18 Thread Jankowski, Chris
s constraint is normally impossible, though some scripting logic could allow one to bypass the fencing completely and still guarantee the integrity of the cluster. Brem On Thu, 2010-06-17 at 23:31 +0000, Jankowski, Chris wrote: > Jim, > > You hit architectural limitation of Linux Cluster, wh

Re: [Linux-cluster] qdisk WITHOUT fencing

2010-06-17 Thread Jankowski, Chris
Jim, You have hit an architectural limitation of Linux Cluster, one that is specific to the Linux Cluster design and that other clusters tend not to have. Linux Cluster assumes that you will *always* be able to execute fencing of *all* other nodes. In fact, this is a stated *prerequisite* for correct operatio

Re: [Linux-cluster] GFS (1 & partially 2) performance problems

2010-06-17 Thread Jankowski, Chris
e, I would be happy to listen to any additional suggestions to further improve performance. Thanks! Jankowski, Chris wrote: > Michael, > > I do not know the process for setting this up in a multipathing configuration, but the scheduler to test is the noop scheduler. > > Please let us

Re: [Linux-cluster] GFS (1 & partially 2) performance problems

2010-06-16 Thread Jankowski, Chris
he moment, the scheduler files for each blockdevice contain this line: "noop anticipatory deadline [cfq]" Maybe I would have to do something like "echo [noop] anticipatory deadline cfq > /sys/block/sd*/queue/scheduler" instead? Thanks for the help. Jankowski, Chr
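
For the record, the square brackets in that sysfs file only mark the currently active scheduler; the name is written bare (device names vary):

    cat /sys/block/sda/queue/scheduler     # noop anticipatory deadline [cfq]
    echo noop > /sys/block/sda/queue/scheduler
    cat /sys/block/sda/queue/scheduler     # [noop] anticipatory deadline cfq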

Re: [Linux-cluster] GFS (1 & partially 2) performance problems

2010-06-15 Thread Jankowski, Chris
rlying storage. This is extremely surprising and a bit shocking I must say. I guess for the Reads I will need to check the SAN itself, see if I can do any optimization on it.. That thing can't possibly be that bad when it comes to reading.. Thanks a lot for your ideas so far! Jankowski, Chr

Re: [Linux-cluster] GFS (1 & partially 2) performance problems

2010-06-14 Thread Jankowski, Chris
Michael, For comparison, could you do your dd(1) tests with a very large block size (1 MB) and tell us the results, please? I have a vague hunch that the problem may have something to do with whether IO operations are being coalesced. Also, which IO scheduler are you using? Thanks and regards, Chr
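
A hedged version of the suggested test (device and file paths hypothetical); direct IO bypasses the page cache so the array itself is measured:

    dd if=/dev/mapper/mpath0 of=/dev/null bs=1M count=4096 iflag=direct
    dd if=/dev/zero of=/gfs/ddtest bs=1M count=4096 oflag=direct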

Re: [Linux-cluster] Two node cluster, start CMAN fence the other node

2010-04-19 Thread Jankowski, Chris
to-increase-gfs2-performance-in-a-cluster.html Regards, Celso. ____ From: "Jankowski, Chris" To: linux clustering Sent: Fri, April 16, 2010 9:39:54 PM Subject: Re: [Linux-cluster] Two node cluster, start CMAN fence the other node Alex, 1. Thank you ver

Re: [Linux-cluster] Two node cluster, start CMAN fence the other node

2010-04-16 Thread Jankowski, Chris
thing to add... I'm going to play a little with the quorum devices. Hope it helps! Alex On 04/16/2010 05:00 PM, Jankowski, Chris wrote:

Re: [Linux-cluster] Two node cluster, start CMAN fence the other node

2010-04-16 Thread Jankowski, Chris
Alex, What exactly did you configure for IGMP? Did you also separate the cluster interconnect traffic in its own VLAN? Thanks and regards, Chris From: linux-cluster-boun...@redhat.com [mailto:linux-cluster-boun...@redhat.com] On Behalf Of Alex Re Sent: Friday,

Re: [Linux-cluster] fence_ilo halt instead reboot

2010-04-14 Thread Jankowski, Chris
ESG, Yes, there is a BIOS entry that you need to modify: "Boot on power on" or some such. I do not remember off the top of my head where it is in the RBSU menu structure, but you can certainly configure the server to reboot after a fence through iLO. I did that a few months ago for a customer on DL380

Re: [Linux-cluster] Listing openAIS parameters on RHEL Cluster Suite 5

2010-04-07 Thread Jankowski, Chris
, 8 April 2010 01:18 To: Jankowski, Chris Cc: linux clustering Subject: Re: [Linux-cluster] Listing openAIS parameters on RHEL Cluster Suite 5 On 07/04/10 04:33, Jankowski, Chris wrote: > Chrissie, > > Thank you for the explanation. > > With the expected_nodes, I have the para

Re: [Linux-cluster] RHCS: Multi site cluster

2010-04-04 Thread Jankowski, Chris
Another comment: You can certainly do it, but you may be surprised that the result is neither as resilient nor as highly available as initially hoped, due to the limitations of the cluster subsystems. I'll give just two examples where you may hit unresolvable difficulties -

[Linux-cluster] Is it worth setting up jumbo Ethernet frames on the cluster interconnect link?

2010-04-02 Thread Jankowski, Chris
Hi, On a heavily used cluster with GFS2, is it worth setting up jumbo Ethernet frames on the cluster interconnect link? Obviously, if only a minuscule portion of the packets travelling through this link is larger than the standard 1500-byte MTU, then why bother. I am seeing significant traffic on the
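
Measuring first is cheap; and if jumbo frames are enabled, every switch port on the interconnect VLAN must carry the larger MTU end to end (interface and address hypothetical):

    tcpdump -i eth2 -nn 'greater 1400' -c 100   # any frames near the 1500 limit?
    ip link set eth2 mtu 9000
    ping -M do -s 8972 192.168.10.2             # 8972 + 28 bytes of headers = 9000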

[Linux-cluster] Listing openAIS parameters on RHEL Cluster Suite 5

2010-04-02 Thread Jankowski, Chris
Hi, As per Red Hat Knowledgebase note 18886, on RHEL 5.4 I should be able to get the current in-memory values of the openAIS parameters by running the following commands: # openais-confdb-display totem.version = '2' totem.secauth = '1' # openais-confdb-display totem token totem.token = '1'

Re: [Linux-cluster] Cron Jobs

2010-03-30 Thread Jankowski, Chris
Hi, 1. >>>yeah, my first inkling was to symlink /etc/cron.daily but that breaks so >>>much existing functionality. I was actually thinking about /var/spool/cron/crontabs directory. You can put your cron definitions there in the old UNIX style. It works perfectly well and is more general and f

Re: [Linux-cluster] Cron Jobs

2010-03-30 Thread Jankowski, Chris
A few ideas. 1. What about replacing the directory containing the cron job descriptions in /var with a symbolic link to a directory on the shared filesystem? 2. Your application service start/stop script may modify the cron job description files. This is more complex, as it has to deal with rem
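
A hedged sketch of idea 2, with the service start/stop script installing and removing the crontab (user and path hypothetical):

    # in the service start script:
    crontab -u appuser /shared/app/backup.crontab
    # in the service stop script:
    crontab -u appuser -r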

Re: [Linux-cluster] GFS2 - monitoring the rate of Posix lock operations

2010-03-29 Thread Jankowski, Chris
: linux clustering Subject: Re: [Linux-cluster] GFS2 - monitoring the rate of Posix lock operations Hi, On Mon, 2010-03-29 at 12:15 +, Jankowski, Chris wrote: > Steven, > > >>>You can use localflocks on each node provided you never access any of the > >>>locked fil

Re: [Linux-cluster] GFS2 - monitoring the rate of Posix lock operations

2010-03-29 Thread Jankowski, Chris
boun...@redhat.com [mailto:linux-cluster-boun...@redhat.com] On Behalf Of Steven Whitehouse Sent: Monday, 29 March 2010 19:41 To: linux clustering Subject: Re: [Linux-cluster] GFS2 - monitoring the rate of Posix lock operations On Sun, 2010-03-28 at 02:32 +, Jankowski, Chris wrote: > Steve, &

Re: [Linux-cluster] GFS2 - monitoring the rate of Posix lock operations

2010-03-27 Thread Jankowski, Chris
Thanks and regards, Chris -Original Message- From: linux-cluster-boun...@redhat.com [mailto:linux-cluster-boun...@redhat.com] On Behalf Of Steven Whitehouse Sent: Saturday, 27 March 2010 00:26 To: linux clustering Subject: Re: [Linux-cluster] GFS2 - monitoring the rate of Posix lock operat

Re: [Linux-cluster] dump(8) for GFS2

2010-03-27 Thread Jankowski, Chris
-cluster-boun...@redhat.com [mailto:linux-cluster-boun...@redhat.com] On Behalf Of Steven Whitehouse Sent: Saturday, 27 March 2010 00:21 To: linux clustering Subject: Re: [Linux-cluster] dump(8) for GFS2 Hi, On Fri, 2010-03-26 at 02:48 +, Jankowski, Chris wrote: > Hi, > >

[Linux-cluster] dump(8) for GFS2

2010-03-25 Thread Jankowski, Chris
Hi, Question: - Are there any plans to develop a backup utility working on the same principles as dump(8) does for ext3fs? This means getting the backup done by walking the block structure contained in the inodes, instead of just reading the files the way tar(1), cpio(1) and others do.

[Linux-cluster] GFS2 - monitoring the rate of Posix lock operations

2010-03-25 Thread Jankowski, Chris
Hi, I understand that GFS2 by default limits the rate of POSIX lock operations to 100 per second. This limit can be removed by the following entry in /etc/cluster/cluster.conf: <gfs_controld plock_rate_limit="0"/> Question 1: How can I monitor the rate of POSIX lock operations? The reason I am asking this question is
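
One generic way to estimate the rate an application generates, assuming its PID is known: count fcntl() calls with strace and divide by the elapsed time (PID hypothetical):

    strace -f -c -e trace=fcntl -p 1234   # interrupt with Ctrl-C; the summary
                                          # table shows the fcntl call count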

[Linux-cluster] fence_scsi - restrictions in use

2010-03-25 Thread Jankowski, Chris
Hi, The man page that I have for fence_scsi(8) lists (among others) the following restrictions in use: - The fence_scsi fencing agent requires a minimum of three nodes in the cluster to operate. - In addition, fence_scsi cannot be used in conjunction with qdisk. I am puzzled by those restrict