Hi,
I have begun testing. I defined 2 clusters:
Cluster 1: App1 and App3
Cluster 2: App2 and App4
Now, if I enable IPaddr as a resource on the clusters, it will be assigned to
both nodes, while the floating IP should run only on App3 and App4. Should I
use constraints to limit the IP
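Location constraints are indeed the usual tool here. A minimal sketch, assuming pcs and the node names App3/App4 from above; the resource name ClusterIP and the address are placeholders, not from the thread:

```shell
# Hypothetical floating IP resource; name, address, and netmask are placeholders.
pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=192.168.1.100 cidr_netmask=24

# Constrain it: prefer App3, allow App4 as fallback, forbid the other nodes.
pcs constraint location ClusterIP prefers App3=100
pcs constraint location ClusterIP prefers App4=50
pcs constraint location ClusterIP avoids App1
pcs constraint location ClusterIP avoids App2
```

With scores like these the IP runs on App3 when it is available, fails over to App4, and never lands on App1 or App2.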
Yes, our application does use shared memory, which is what we see when the
cluster is down.
On Tue, May 17, 2016 at 10:53 PM, Ken Gaillot wrote:
> On 05/17/2016 12:02 PM, Nikhil Utane wrote:
> > OK. Will do that.
> >
> > Actually I gave the /dev/shm usage when the
On 05/17/2016 12:02 PM, Nikhil Utane wrote:
> OK. Will do that.
>
> Actually I gave the /dev/shm usage when the cluster wasn't up.
> When it is up, I see it occupies close to 300 MB (it's also the DC).
Hmmm, there should be no usage if the cluster is stopped. Any memory
used by the cluster will
Hi Honza,
Just checking if you have the official patch available for this issue.
As far as I am concerned, barring a couple of issues, everything seems to be
working fine on our big-endian system. Much relieved. :)
-Thanks
Nikhil
On Thu, May 5, 2016 at 3:24 PM, Nikhil Utane
OK. Will do that.
Actually I gave the /dev/shm usage when the cluster wasn't up.
When it is up, I see it occupies close to 300 MB (it's also the DC).
tmpfs                 500.0M    329.4M    170.6M  66%  /dev/shm
On another node the same is 115 MB.
Anyway, I'll monitor the usage to know
I used another package build, so I want to keep it.
2016-05-17 14:39 GMT+03:00 Ferenc Wágner :
> Andrey Rogovsky writes:
>
> > I have deb rules coming from 1.12, and am trying to apply them to the current release.
>
> 1.1.14 is available in sid, stretch and jessie-backports,
On 05/17/2016 06:50 AM, Bogdan Dobrelya wrote:
> On 05/17/2016 01:17 PM, Adam Spiers wrote:
>> Bogdan Dobrelya wrote:
>>> On 05/16/2016 09:23 AM, Jan Friesse wrote:
> Hi,
>
> I have an idea: use Pacemaker with Zookeeper (instead of Corosync). Is
> it
On 05/16/2016 12:22 PM, Dimitri Maziuk wrote:
> On 05/13/2016 04:31 PM, Ken Gaillot wrote:
>
>> That is definitely not a properly functioning cluster. Something
>> is going wrong at some level.
>
> Yeah, well... how do I find out what/where?
What happens after "pcs resource cleanup"? "pcs
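A hedged sketch of the follow-up diagnostics implied here, assuming pcs and the standard Pacemaker tools; the log path varies by distribution and is an assumption:

```shell
pcs resource cleanup                 # clear failed-operation history cluster-wide
pcs status --full                    # full view, including failed actions
crm_mon -1rf                         # one-shot view with inactive resources and fail counts
grep -i error /var/log/cluster/corosync.log   # log location is distro-dependent (assumption)
```

If the failures reappear immediately after cleanup, the resource agent or its monitor operation is the next place to look.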
Ulrich Windl writes:
> Hi!
>
> I tried to move a primitive away from a node with crm shell:
>
> crm resource migrate clone PT5M
>
> This results in the message:
> Resource 'clone' not moved: active in 2 locations.
> You can prevent 'clone' from running on a
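The "active in 2 locations" message arises because a clone runs on several nodes at once, so "move to" is ambiguous; instead you ban it from the node you want vacated. A sketch using the lower-level crm_resource tool; the node name node1 is a placeholder:

```shell
# Ban the clone from one node for 5 minutes (ISO 8601 lifetime), then clear it.
crm_resource --ban --resource clone --node node1 --lifetime PT5M
# Remove the ban constraint when done:
crm_resource --clear --resource clone --node node1
```

The ban creates a temporary -INFINITY location constraint that expires after the given lifetime.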
Andrey Rogovsky writes:
> I have deb rules coming from 1.12, and am trying to apply them to the current release.
1.1.14 is available in sid, stretch and jessie-backports, any reason you
can't use those packages?
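If prebuilt packages are acceptable, a minimal sketch of pulling 1.1.14 from jessie-backports, assuming a plain jessie system (mirror URL and package selection are assumptions):

```shell
# Enable jessie-backports and install pacemaker from it.
echo 'deb http://httpredir.debian.org/debian jessie-backports main' \
    | sudo tee /etc/apt/sources.list.d/jessie-backports.list
sudo apt-get update
sudo apt-get -t jessie-backports install pacemaker crmsh
```

The `-t jessie-backports` flag raises the backports suite's priority for this one install, so the newer pacemaker is chosen over the jessie version.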
> In the building I get an error:
> dh_testroot -a
> rm -rf
Bogdan Dobrelya wrote:
> On 05/16/2016 09:23 AM, Jan Friesse wrote:
> >> Hi,
> >>
> >> I have an idea: use Pacemaker with Zookeeper (instead of Corosync). Is
> >> it possible?
> >> Is there any examination about that?
>
> Indeed, would be *great* to have a Pacemaker based
What I would like to understand is approximately how much total shared memory
Pacemaker would need, so that I can size the partition accordingly. Currently
it is 300 MB on our system. I recently ran into an insufficient-shared-memory
issue because of improper clean-up, so I would
like to
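One practical way to size the partition is to sample usage over time and take the peak. A minimal sketch; the interval, sample count, and log path are arbitrary choices, not from the thread:

```shell
#!/bin/sh
# Sample /dev/shm usage a few times so peak consumption can be estimated.
TARGET=/dev/shm
[ -d "$TARGET" ] || TARGET=/tmp      # fall back if /dev/shm is absent
LOG=/tmp/shm-usage.log
: > "$LOG"                           # truncate any previous log
i=0
while [ $i -lt 3 ]; do
    df -kP "$TARGET" | tail -n 1 >> "$LOG"   # one "used/avail/use%" line per sample
    i=$((i + 1))
    sleep 1
done
cat "$LOG"
```

Run over a day or two (with a longer interval and more samples), the maximum of the "used" column plus headroom gives a defensible tmpfs size.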
Hi,
Thank you again.
The servers are in one location. I have read a little about Booth, but I'm
wary of ticketing; these servers are very sensitive.
So, in this situation, would you recommend Booth or the attribute method?
Best Regards, H.Yavari
From: Klaus Wenninger
To:
Hi!
One of the main problems I identified with POSIX shared memory (/dev/shm) on
Linux is that changes to the shared memory don't affect the i-node, so you
cannot tell from an "ls -rtl" which segments are still active and which are not.
You can only see the creation time.
Maybe there should be
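Since mmap'd writes don't bump the timestamps, listing times only reveals creation. One workaround is to ask which processes still hold the segments open; a sketch, assuming lsof is installed:

```shell
# Timestamps here show creation only; mmap'd writes don't update them:
ls -lrt /dev/shm
# Which processes still hold segments open (assumes lsof is available):
lsof /dev/shm/* 2>/dev/null
# Or scan process memory maps directly, without lsof:
grep -l /dev/shm /proc/*/maps 2>/dev/null
```

A segment that appears in no process's maps and is held open by no one is a candidate leftover from an unclean shutdown.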
On 05/17/2016 08:20 AM, H Yavari wrote:
> Hi,
>
> I have a scenario and I'm confused, so I'm searching for solutions. Can
> you please check this:
> http://clusterlabs.org/pipermail/users/2016-April/002796.html
>
> I don't know how to achieve this: with Booth? With attributes? 2
>
Hi,
I have a scenario and I'm confused, so I'm searching for solutions. Can you
please check this:
http://clusterlabs.org/pipermail/users/2016-April/002796.html
I don't know how to achieve this: with Booth? With attributes? 2 clusters or 1
cluster?
Please show me a way.
Many thanks.