Hi,
I set upper-case hostnames (GUEST03/GUEST04) and am running Pacemaker 1.1.9 +
Corosync 2.3.0.
[root@GUEST04 ~]# crm_mon -1
Last updated: Wed Apr 10 15:12:48 2013
Last change: Wed Apr 10 14:02:36 2013 via crmd on GUEST04
Stack: corosync
Current DC: GUEST04 (3232242817) - partition with quorum
Version:
Hi,
My previous patch had a spelling error; I revised it just a bit.
Thanks,
Junko
2012/5/30 Junko IKEDA tsukishima...@gmail.com:
Hi,
I am trying to set up an NFSv4 server using the nfsserver RA,
and am adding some handling for rpc.idmapd.
http://linux.die.net/man/8/rpc.idmapd
Please see the attached
Hi,
Thank you for your quick response!
This one seems to be missing. Or is it covered now by the monitor
test?
nfsserver_start () can now return $OCF_SUCCESS if it detects that nfs
server is already started.
With this change, this ocf_log debug can be reached without its message
argument, so ocf_log complains:
Not enough arguments [1] to ocf_log.
I added a check statement for this.
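The guard could look something like this self-contained sketch (log_if_nonempty is a hypothetical helper name, and this ocf_log is a simplified stand-in for the real one in ocf-shellfuncs; the actual change is in the attached patch):

```shell
#!/bin/sh
# Simplified stand-in for ocf_log from ocf-shellfuncs: it prints
# "Not enough arguments [1] to ocf_log." when the message is missing.
ocf_log() {
    level="$1"; shift
    if [ $# -eq 0 ]; then
        echo "Not enough arguments [1] to ocf_log." >&2
        return 4
    fi
    echo "$level: $*"
}

# The added guard: skip logging entirely when the message is empty,
# so ocf_log is never invoked with a severity but no message.
log_if_nonempty() {
    level="$1"; shift
    [ -n "$*" ] && ocf_log "$level" "$*"
    return 0
}

log_if_nonempty debug ""                            # silently skipped
log_if_nonempty debug "nfs server already started"  # logged normally
```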
Please see the attached.
Regards,
Junko IKEDA
NTT DATA INTELLILINK CORPORATION
nfsserver-validate-all.patch
Description: Binary data
nfsserver-check-start.patch
Description: Binary data
Hi,
Is my case hard to understand?
Multipath here means the Fibre Channel links; there are two cables for redundancy.
Thanks,
Junko
2012/5/9 Junko IKEDA tsukishima...@gmail.com:
Hi,
In my case, the umount succeeds when the Fibre Channel is disconnected,
so it seemed that the handling of the status file
(OCF_CHECK_LEVEL) caused a longer failover; it's enough to try to unmount
the file system, isn't it?
https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/Filesystem#L774
Regards,
Junko IKEDA
NTT DATA INTELLILINK CORPORATION
Filesystem.patch
Description: Binary data
Hi,
In my case, the umount succeeds when the Fibre Channel is disconnected,
so it seemed that the handling of the status file caused a longer failover,
as Dejan said.
If the umount fails, it will run into a timeout and might trigger a stonith
action; that case also makes sense (though I couldn't observe it).
Hi,
Thank you for pointing that out!
Regards,
Junko IKEDA
2012/1/17 Dejan Muhamedagic de...@suse.de:
On Mon, Jan 16, 2012 at 03:10:14PM +0100, Dejan Muhamedagic wrote:
On Sat, Jan 14, 2012 at 12:32:20PM +0100, Lars Ellenberg wrote:
On Mon, Jan 09, 2012 at 05:50:14PM +0100, Dejan Muhamedagic
, right?
named_monitor()
output=`$OCF_RESKEY_host $OCF_RESKEY_monitor_request $OCF_RESKEY_monitor_ip`
if [ $? -ne 0 ] || ! echo "$output" | grep -q '.* has address '"$OCF_RESKEY_monitor_response"
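The check quoted above can be sketched self-contained like this (fake_host and all the OCF_RESKEY_* values are invented for illustration; the real agent calls an actual host-style lookup command, and note the quoted expansions so the pattern survives word splitting):

```shell
#!/bin/sh
# Invented example values; the real RA takes these as parameters.
OCF_RESKEY_monitor_request="www.example.com"
OCF_RESKEY_monitor_ip="127.0.0.1"
OCF_RESKEY_monitor_response="192.0.2.10"

# Stub mimicking "host" output: "name has address addr"
fake_host() {
    echo "$1 has address 192.0.2.10"
}
OCF_RESKEY_host=fake_host

named_monitor() {
    output=`$OCF_RESKEY_host $OCF_RESKEY_monitor_request $OCF_RESKEY_monitor_ip`
    # quote the expansions so the grep pattern is not word-split
    if [ $? -ne 0 ] || ! echo "$output" | grep -q ".* has address $OCF_RESKEY_monitor_response"; then
        return 1    # OCF_ERR_GENERIC in the real agent
    fi
    return 0        # OCF_SUCCESS
}

named_monitor && echo "monitor OK"
```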
Would you please give me some advice?
Regards,
Junko IKEDA
NTT DATA INTELLILINK CORPORATION
named_ipv6
Hi Raoul,
Thank you for your comments!
This method should leave the slave alone if the master did not change
since the last sync. Consider:
crm node standby node02; crm node online node02
The slave should pick up where it left off, using MySQL's own way of saving
the last replication information.
Hi,
sorry, again.
My previous patch was wrong.
I attached the new one.
Thanks,
Junko
2011/11/11 Junko IKEDA tsukishima...@gmail.com:
Hi,
The current mysql RA sets the hostname (= uname -n) for its replication network,
but I have the following restriction.
# uname -n
node01
# cat /etc
Hi Raoul,
Sure, thanks!
Regards,
Junko
2011/11/14 Raoul Bhatia [IPAX] r.bha...@ipax.at:
Hello Junko-san!
I propose the following documentation update to clarify the parameter's
usage.
<parameter name="replication_hostname_suffix" unique="0" required="0">
<longdesc lang="en">
A hostname suffix that
Hi Marek, Florian,
Thank you for your comments!
Did you set evict_outdated_slaves?
No,
If set to false (the default), then the slave will be allowed to stay in
the cluster, but its master preference will be pushed down so it's not
promoted, and this seems to be Ikeda-san's preferred behavior.
These log messages are noisy; I think there is no problem if we change
their level from info to debug.
Please see attached.
Regards,
Junko IKEDA
NTT DATA INTELLILINK CORPORATION
mysql-log.patch
Description: Binary data
___
Linux-HA-Dev: Linux-HA-Dev@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha-dev
Home Page: http://linux-ha.org/
    ]; then
        # Sanitize a below-zero preference to just zero
        master_pref=0
    fi
    $CRM_MASTER -v $master_pref
fi
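Stubbed out as a self-contained sketch (CRM_MASTER normally expands to "crm_master -l reboot"; here echo stands in so it runs without a cluster, and set_master_pref is my name for the surrounding logic, not the RA's):

```shell
#!/bin/sh
# echo stands in for the real crm_master call so this runs anywhere
CRM_MASTER="echo crm_master -l reboot"

set_master_pref() {
    master_pref=$1
    if [ "$master_pref" -lt 0 ]; then
        # Sanitize a below-zero preference to just zero
        master_pref=0
    fi
    $CRM_MASTER -v $master_pref
}

set_master_pref -5    # clamped: crm_master -l reboot -v 0
set_master_pref 10    # passed through: crm_master -l reboot -v 10
```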
I'm less familiar with the replication behavior,
please advise me how to do it.
Regards,
Junko IKEDA
NTT DATA INTELLILINK CORPORATION
mysql
?
Regards,
Junko IKEDA
NTT DATA INTELLILINK CORPORATION
mysql-replication_hostname_suffix.patch
Description: Binary data
Hi Dejan,
Many thanks!
Can I get it from http://hg.linux-ha.org/glue/ ?
Regards,
Junko
2011/9/22 Dejan Muhamedagic de...@suse.de:
Hi Junko-san,
On Wed, Aug 17, 2011 at 10:22:40AM +0900, Junko IKEDA wrote:
Hi Dejan,
Thank you for your reply!
I attached the revised patch.
Just applied
Hi Dejan,
Thank you for your reply!
I attached the revised patch.
http://www.gossamer-threads.com/lists/linuxha/pacemaker/74350
I don't see the connection between the two.
I am trying to use the /tmp/ipmitool command for some tests,
and added its path for root,
so $PATH for root is here:
# echo
?
Best Regards,
Junko IKEDA
NTT DATA INTELLILINK CORPORATION
ipmi.patch
Description: Binary data
Hi,
The latest resource-agents has a man page for sfex_init,
and I added it to the .spec.
Please see the attached patch.
Best Regards,
Junko IKEDA
NTT DATA INTELLILINK CORPORATION
sfex_init.patch
Description: Binary data
'
gmake[1]: *** [all-recursive] Error 1
gmake[1]: Leaving directory `/root/Desktop/work/20110622/resource-agents'
make: *** [all] Error 2
Best Regards,
Junko IKEDA
NTT DATA INTELLILINK CORPORATION
ethmonitor.patch
Description: Binary data
Hi,
May I suggest that you go with the devel version, because
crm_cli.txt was converted to crm.8.txt. There are not many
textual changes, just some obsolete parts removed.
OK, I got crm.8.txt from devel.
The directory structure for Pacemaker 1.0, 1.1 and devel is just a bit
different.
Does
Hi,
I tried to compile the latest agents package from the mercurial repository,
but the new exportfs RA complained about something like this:
# hg clone http://hg.linux-ha.org/agents/
# cd agents
# ./autogen.sh
# ./configure --localstatedir=/var --disable-fatal-warnings
# make
/bin/sh:
Hi,
I have done some tests for this patch,
and I got the desired results.
I think this patch wouldn't affect the current usage.
Serge,
Thank you for your review!
Thanks,
Junko
--- Forwarded message ---
From: Serge Dubrouski serge...@gmail.com
To: Junko IKEDA ike
Hi,
If some failure happens during an online backup of PostgreSQL,
pgsql cannot handle the failover,
because backup_label, a file created by the Postgres backup process,
remains on the shared disk.
pgsql cannot start the DB if this file remains.
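A minimal sketch of the idea (cleanup_backup_label is a hypothetical helper and the data-directory handling is an assumption; the real change is in the attached patch):

```shell
#!/bin/sh
# Hypothetical sketch: before starting the DB after a failover, clear a
# stale backup_label left over from an interrupted online backup.
cleanup_backup_label() {
    # $1 = PostgreSQL data directory
    if [ -f "$1/backup_label" ]; then
        # a leftover backup_label prevents PostgreSQL from starting
        rm -f "$1/backup_label"
    fi
}

# demonstration against a throwaway directory
demo=$(mktemp -d)
touch "$demo/backup_label"
cleanup_backup_label "$demo"
rmdir "$demo"    # succeeds only because backup_label was removed
```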
Please see the attached.
Thanks,
Junko
; then
-        return $OCF_SUCCESS
-    fi
+        MSG=`$PING $PINGARGS 2>&1`
+        if [ $? = 0 ]; then
+            return $OCF_SUCCESS
+        fi
    done
-
+
+    ocf_log err "$MSG"
    return $OCF_ERR_GENERIC
}
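The same capture-then-log pattern, sketched standalone (run_and_log is a made-up wrapper and echo stands in for ocf_log err; $(...) is used instead of backticks only for readability):

```shell
#!/bin/sh
# Run a command, capture its combined stdout/stderr, and log the
# captured output at "err" level only when the command fails.
run_and_log() {
    MSG=$("$@" 2>&1)
    if [ $? = 0 ]; then
        return 0
    fi
    echo "err: $MSG"    # stand-in for: ocf_log err "$MSG"
    return 1
}

run_and_log true                              # succeeds, prints nothing
run_and_log sh -c 'echo ping failed; exit 1'  # prints: err: ping failed
```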
Thanks,
Junko
On Mon, 09 Nov 2009 18:13:29 +0900, Junko IKEDA
ike...@intellilink.co.jp
Hi,
I wonder why the IPaddr RA needs to run route del before it deletes the
target interface.
Does the old version of IPaddr contain route add?
If route del fails, the RA will still be able to return $OCF_SUCCESS,
but it feels a little strange when I see an error message from the route
command like this.
Hi,
By the way, this is a really trivial thing, but
I have some requests about the logging messages of IPaddr.
Please see the modified attachment.
Thanks,
Junko
On Mon, 09 Nov 2009 18:13:29 +0900, Junko IKEDA ike...@intellilink.co.jp
wrote:
Hi,
I wonder why IPaddr RA needs to run route del
Development List
Subject: Re: [Linux-ha-dev] xm dump-core from xen0
Hi,
On Mon, Mar 16, 2009 at 07:22:02PM +0900, Junko IKEDA wrote:
Hi,
I am running the new xen0 on domU now,
and need an additional feature for a dump destination.
I have RHEL5.2 x86_64 and xen 3.1.
This would dump
Hi,
My operation is here;
# ssh x3650g
# export dom0=x3650g
# export hostlist=dom-d1:/etc/xen/dom-d1 dom-d2:/etc/xen/dom-d2
# /usr/lib64/stonith/plugins/external/xen0 on dom-d1
# echo $?
0
dom-d1 was created well.
# /usr/lib64/stonith/plugins/external/xen0 reset dom-d1
# echo $?
1
Sorry for all of my mistakes...
I had a wrong /etc/hosts.
It works well now.
By the way, could I configure this plugin with two Dom0s and two DomUs?
e.g.) domU-1 on Dom0-1, and domU-2 on Dom0-2
Thanks,
Junko
Hi,
I ran the attached cib.xml.
It seems that this configuration works well (but I need more tests).
If there are any strange elements, please let me know.
Thanks,
Junko
be a big deal. I can add one more config parameter like
run_dump; then if it's set, the script will call xm dump-core before
destroying the domU.
On Tue, Mar 3, 2009 at 10:38 PM, Junko IKEDA ike...@intellilink.co.jp
wrote:
Hi Serge,
I'm trying to manage xen domain-U with xen0 plugin
4, 2009 at 6:45 PM, Junko IKEDA ike...@intellilink.co.jp
wrote:
Hi,
Attached is a patch that adds that functionality.
Many thanks!
I'll give it a try.
By the way, xen0 plugin should run on domain-0, right?
Is it possible to run it on domain-U?
Thanks,
Junko
On Tue, Mar 3
Hi Serge,
I'm trying to manage a xen domain-U with the xen0 plugin.
There are two xm commands, xm destroy and xm create, in xen0.
What do you think about adding xm dump-core to it?
If possible, I want to get a dump of domain-U when some fence event
happens.
Best Regards,
Junko Ikeda
Hi,
See also this page, please.
http://www.linux-ha.org/sfex/
Thanks,
Junko
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Xinwei Hu
Sent: Thursday, October 16, 2008 6:55 PM
To: High-Availability Linux Development List
Subject: Re: [Linux-ha-dev]
with /sbin/ldconfig for
RedHat?
Best Regards,
Junko Ikeda
NTT DATA INTELLILINK CORPORATION
If there's no objection I would like to push this patch into
the lha-2.1 repository; is there any problem with that?
sure
It seems that the latest pacemaker also presents the same behavior,
so I think both need to be fixed as well.
I thought it was fine?
sorry, that might have been
Btw. You do realize that setting ordered=false for the master resource
also means that the group's actions won't be ordered either, don't you?
You mean there's a possibility that the slave resource will start/stop
before the master's actions complete if I don't set ordered=true, right?
No.
of it next time.
Thanks,
Junko
2008/4/18 Junko IKEDA [EMAIL PROTECTED]:
Fixed by:
http://hg.clusterlabs.org/pacemaker/stable-0.6/rev/4817a7094683
It works well with group-master/slave, too.
Many thanks!
Please merge it into Heartbeat 2.1.4.
Thanks,
Junko
Any ideas as to why the current code doesn't work for you?
I failed to build the rpm on openSUSE 10.1 too...
It might be a potential problem in Heartbeat 2.1.3.
See the attached configure-213.log.
It's true that the summary says the CIM provider and TSA plugin would not be
built,
Build CRM
.
Is there something wrong with cib.xml?
This is a similar case to what Yamauchi-san posted.
Best Regards,
Junko Ikeda
NTT DATA INTELLILINK CORPORATION
Hi,
I keep failing to build lha-2.1 on RHEL5.1 for now.
It seems that --enable-cim-provider=no and --enable-tsa-plugin=no are
ineffective for ConfigureMe.
We don't need the CIM providers or TSA plugin, so I had a try at making
a patch for it.
Please check the attached.
Sorry for the annoyance.
The
Hi again,
Another request;
Would it be possible to include the following patch in release 2.1.4?
http://hg.linux-ha.org/dev/rev/6307bb091d02
It will help with the problems posted in Bugzilla 1814,
for all platforms, not only ppc.
Hi,
So, that said, I've pushed my proposed code to
http://hg.linux-ha.org/lha-2.1/. It, for reasons outlined above, likely
doesn't build yet (because the in-tree packaging is broken), but I
wanted to share the scope of changes with you.
There are some fixes about failcount in
Hi,
There were two bugs in the configure stuff:
1) It got the package name for pegasus wrong for Red Hat
2) It didn't work if you had pegasus installed but didn't
enable the CIM provider.
I tried this branch and it worked well.
right there;
it might be related to a CCM problem, I'm not sure.
If it's possible to handle this function as an RA,
could you consider introducing SF-EX into the next release as the first
step?
Thanks,
Junko IKEDA
NTT DATA INTELLILINK
If Node B updates the lock status _at just the right moment_,
sfex_update() detects that the other node is trying to update its
status,
and it will be terminated with exit(2).
This time window is enough to destroy all data if you are unlucky ;-(
Node B is just updating its lock status,
Assume we have 2 nodes.
1. Nodes A and B reach step 3) at the same time.
2. sfex_lock on Node B is scheduled out due to some other reason.
3. sfex_lock on Node A goes through steps 3 to 6, and Node A holds
the lock now.
Node A is sure to hold the lock at this moment.
sfex_lock() is going
: [Linux-ha-dev] Shared disk file Exclusiveness
control program for HB2
2007/8/9, Junko IKEDA [EMAIL PROTECTED]:
Hi,
sorry, my previous answer was off the mark...
When 2 nodes reach there at the same time,
node A notices that the other node wants to lock too, so it gives up the
lock itself.
I only see
for it?
We have no problem with improving it, but we don't know exactly how we
should do it.
Best Regards,
Junko Ikeda
NTT DATA INTELLILINK CORPORATION
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Alan
Robertson
Sent: Wednesday, August 08, 2007 1:12
Hi,
You know, that could be true...
but if it's called from the RA, 2 nodes wouldn't reach that part at the same
time, right?
Only one node will be able to reach it, according to the score rule.
Thanks,
Junko
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
] On Behalf Of Junko IKEDA
Sent: Thursday, August 09, 2007 4:30 PM
To: 'High-Availability Linux Development List'
Subject: RE: [Linux-ha-dev] Shared disk file Exclusiveness control
program for HB2
Hi,
You know that could be true...
but if it's called from RA, 2 nodes wouldn't reach that part
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Lars
Marowsky-Bree
Sent: Tuesday, April 17, 2007 11:13 PM
To: High-Availability Linux Development List
Subject: Re: [Linux-ha-dev] Split-Brain that uses the latest development
version
On