Does anyone know why trimming a filesystem mounted on a DRBD volume takes so
long? I mean like three days to trim a 1.2TB filesystem.
Here are some pertinent details:
OS: SLES 12 SP2
Kernel: 4.4.74-92.29
Drives: 6 x Samsung SSD 840 Pro 512GB
RAID: 0 (mdraid)
DRBD: 9.0.8
Protocol: C
Network:
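A first diagnostic step for a slow trim (a sketch, not specific to this setup; device and mount names below are assumptions): check whether discard requests actually propagate through the md/DRBD stack, since a discard granularity of 0 at any layer means TRIM is not being passed through efficiently.

```shell
# Check discard support at every layer of the stack (device names are
# examples; substitute your own). DISC-GRAN/DISC-MAX of 0 on a layer
# means discards are not passed through there.
lsblk --discard /dev/sda /dev/md0 /dev/drbd0

# Kernel-level view of one layer:
cat /sys/block/drbd0/queue/discard_granularity
cat /sys/block/drbd0/queue/discard_max_bytes

# Time a verbose trim of the mounted filesystem (mount point is an example):
time fstrim -v /mnt/data
```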
Hi,
I'm wondering where the default values for a resource's operations come from.
I tried to configure:
crm(live)# configure primitive prim_drbd_idcc_devel ocf:linbit:drbd params
drbd_resource=idcc-devel \
> op monitor interval=60
WARNING: prim_drbd_idcc_devel: default timeout 20s for
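The defaults come from the resource agent's metadata, which crmsh consults when an operation is defined without an explicit timeout. A sketch of how to inspect them and silence the warning (the timeout values below are illustrative, not tuned recommendations):

```shell
# Show the timeouts the agent advertises in its metadata:
crm ra info ocf:linbit:drbd

# Define the operations with explicit timeouts so crmsh stops warning
# (all values below are illustrative):
crm configure primitive prim_drbd_idcc_devel ocf:linbit:drbd \
    params drbd_resource=idcc-devel \
    op monitor interval=60 timeout=40 \
    op start timeout=240 \
    op stop timeout=100
```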
On 2017-08-01 03:05, Stephen Carville (HA List) wrote:
Can
clustering even be done reliably on CentOS 6? I have no objection to
moving to 7 but I was hoping I could get this up quicker than building
out a bunch of new balancers.
I have a number of CentOS 6 active/passive pairs running
Hey Marek,
I've run the command with --action off and uploaded the file on one of our
servers : https://cloud.iwgate.com/index.php/s/1SpZlG8mBSR1dNE
Interesting thing is that at the end of the file I found "Unable to
connect/login to fencing device" instead of "Failed: Timed out waiting to
power
On Tue, Aug 1, 2017 at 2:05 AM, Stephen Carville (HA List) <
62d2a...@opayq.com> wrote:
> On 07/31/2017 11:13 PM, Ulrich Windl [Masked] wrote:
>
> I guess you have no fencing configured, right?
>
> No. I didn't realize it was necessary unless there was shared storage
> involved. I guess it is
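Fencing is not only for shared storage: without STONITH, a split brain can leave the floating IP active on both nodes at once. A minimal sketch of enabling it (the agent choice, addresses, and credentials are placeholders; parameter names vary between fence-agent versions):

```shell
# Create one fence device per node (fence_ipmilan and its parameters
# are illustrative; pick the agent that matches your hardware):
pcs stonith create fence-scahadev01da fence_ipmilan \
    ipaddr=10.0.0.11 login=admin passwd=secret \
    pcmk_host_list=scahadev01da

# Turn fencing on cluster-wide:
pcs property set stonith-enabled=true
```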
Hey everyone!
Here's a quick update for the upcoming Clusterlabs Summit at the SUSE
office in Nuremberg in September:
The time to register for the pool of hotel rooms has now expired - we
have sent the final list of names to the hotel. There may still be hotel
rooms available at the Sorat Saxx
Hi,
> But when I call any of the power actions (on, off, reboot) I get "Failed:
> > Timed out waiting to power OFF".
> >
> > I've tried with all the combinations of --power-timeout and --power-wait
> > and same error without any change in the response time.
> >
> > Any ideas from where or how to
- On Aug 1, 2017, at 8:06 AM, Ulrich Windl
ulrich.wi...@rz.uni-regensburg.de wrote:
> "Lentes, Bernd" wrote on 31.07.2017 at 18:51 in message
> <641329685.12981098.1501519915026.javamail.zim...@helmholtz-muenchen.de>:
>> Hi,
>>
>> I'm
Hello Ulrich,
Thank you for the reply.
I tested that, and the reset action also fails with the same message.
I forgot to mention that the VM guests are CentOS 7.3; they power off in
about 2 seconds, and a full reboot takes about 10 seconds.
Also, in VMware I can see the SOAP task for "get id for UUID".
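When a fence agent times out like this, running it by hand with verbose output usually shows which SOAP call stalls. A sketch (the host, credentials, and VM name below are placeholders):

```shell
# Query status first; -v prints each SOAP request/response, so you can
# see where the delay occurs (all values are placeholders):
fence_vmware_soap -a vcenter.example.com -l fenceuser -p 'secret' \
    -z --ssl-insecure -n centos-guest-01 -o status -v

# If status is fast but "off" times out, retry with a longer timeout:
fence_vmware_soap -a vcenter.example.com -l fenceuser -p 'secret' \
    -z --ssl-insecure -n centos-guest-01 -o off --power-timeout 120
```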
On 07/31/2017 11:13 PM, Ulrich Windl [Masked] wrote:
>> I am experimenting with pacemaker for high availability for some load
>> balancers. I was able to successfully get two CentOS (6.9) machines
>> (scahadev01da and scahadev01db) to form a cluster and the shared IP was
>> assigned to
>>> "Stephen Carville (HA List)" <62d2a...@opayq.com> wrote on 31.07.2017 at
20:17 in message:
> I am experimenting with pacemaker for high availability for some load
> balancers. I was able to successfully get two CentOS (6.9) machines
>
>>> Octavian Ciobanu wrote on 31.07.2017 at 20:16 in
message
>>> "Lentes, Bernd" wrote on 31.07.2017 at
18:51 in message
<641329685.12981098.1501519915026.javamail.zim...@helmholtz-muenchen.de>:
> Hi,
>
> I'm currently a bit confused. I have several resources running as
> VirtualDomains; the VMs reside on plain