[DRBD-user] Recovery from split-brain condition, please advice.

2009-11-16 Thread Ivan
on of DRBD User's Guide, whilst the man file for drbdadm does not mention it. 3. After DRBD starts the synchronisation process, can I mount block devices on the master node, or do I have to wait until synchronisation is completed? Thank you very much for y

Re: [DRBD-user] Recovery from split-brain condition, please advice.

2009-11-20 Thread Ivan
Hi everyone. I would like to thank the members of the list who replied to my question. I followed your advice and was able to resolve the split-brain condition and to sync both nodes successfully. Regards, Ivan. On Tue, Nov 17, 2009 at 1:06 AM, Ivan wrote: > Hello everyone! > > I
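For readers landing on this thread via a search, the manual recovery procedure from the DRBD User's Guide looks roughly like the sketch below. It is only a sketch: the resource name r0 is a placeholder, and the flag placement differs slightly between DRBD 8.3 and 8.4.

    # on the node whose changes will be thrown away (the split-brain "victim")
    drbdadm secondary r0
    drbdadm connect --discard-my-data r0   # 8.4 syntax; on 8.3: drbdadm -- --discard-my-data connect r0

    # on the surviving node, only if it dropped to StandAlone
    drbdadm connect r0

After the victim reconnects, it resynchronises from the survivor and the split brain is considered resolved.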

[DRBD-user] device naming and udev + pacemaker issues

2014-09-14 Thread Ivan
the ocf file rather than relying on pacemaker's defined timeout? It would save a lot of debugging time for the 8.x->9.x transition. cheers ivan

Re: [DRBD-user] device naming and udev + pacemaker issues

2014-09-20 Thread Ivan
On 09/17/2014 12:09 PM, Lars Ellenberg wrote: On Sun, Sep 14, 2014 at 10:15:25AM +0300, Ivan wrote: (side question: I read that the by-res/ naming is 8.x legacy. Will the 9x series drop that and use exclusively /dev/drbd[0-9]+ and /dev/drbd_{resourcename} devices ?) I guess you are

Re: [DRBD-user] operation monitor failed 'not configured' - how to tell what's not configured?

2014-09-24 Thread Ivan
en manually recovering from a split brain) - I'll probably file a bug when I have time to test if the problem is still in the last rc. ivan

Re: [DRBD-user] drbd storage size

2014-10-25 Thread Ivan
/converged-network-adapters/ethernet-x520-qda1-brief.html At those speeds it would be interesting to test the upcoming 3.18 kernel with the bulk network transmission patch [1]; that should save you a bunch of CPU cycles. [1] http://lwn.net/Articles/615238 ivan In my Hardware setup also i

Re: [DRBD-user] Slow initial full sync

2014-10-29 Thread Ivan
Hi On 10/29/2014 06:34 AM, aurelien panizza wrote: Hi all, Here is my problem: I've got two servers connected via a switch with a dedicated 1Gb NIC. We have an SSD RAID 1 on both servers (software RAID on Red Hat 6.5). Using the dd command I can reach ~180MB/s (dd if=/dev/zero of=/home/oracle/output
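Threads about slow initial syncs usually come down to the resync throughput settings rather than raw disk speed. As a hedged illustration only (the resource name r0 and the numbers are placeholders; the option names are the DRBD 8.4 ones, while older 8.3 setups use a syncer { rate ...; } section instead):

    resource r0 {
      disk {
        c-plan-ahead 0;      # switch off the dynamic resync controller
        resync-rate 100M;    # fixed rate, leaving some headroom on a 1Gb link
      }
    }

With the dynamic controller left on, raising c-max-rate (and c-fill-target) has a similar effect.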

Re: [DRBD-user] Why so slow? Troubleshoot?

2014-11-22 Thread Ivan
. - use virtio drivers (both for disk and network) - trivial, but check that you don't accidentally have QoS rules ivan But when I check performance values I do not see any bottlenecks: - CPU IO wait is below 2% - CPU itself is ~98% idle - network is not busy at all. Anyone having a clue

Re: [DRBD-user] Best practice: drbd+lvm+gfs2+dm-crypt on dual primary

2015-02-02 Thread Ivan
On 02/02/2015 05:50 PM, Digimer wrote: I see no particular problem with this. I use DRBD -> Clustered LVM -> GFS2 all the time. If you wanted to add LUKS, I'd probably do it as DRBD -> Clustered LVM -> LUKS'ed LV -> GFS2. I'm not sure that two (or more) LUKS partitions are identical given ex
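For reference, a stack in the order Digimer describes could be created roughly as sketched below. All names are hypothetical (volume group cluster_vg, LV data, cluster name mycluster), and this is only an illustration of the layering, not a tested recipe:

    lvcreate -L 100G -n data cluster_vg                   # clustered LV sitting on the DRBD device
    cryptsetup luksFormat /dev/cluster_vg/data            # LUKS layer on top of the LV
    cryptsetup luksOpen /dev/cluster_vg/data data_crypt   # must be opened on every node that mounts
    mkfs.gfs2 -p lock_dlm -t mycluster:data -j 2 /dev/mapper/data_crypt

With this ordering the encryption sits above DRBD, so the replication link only ever carries already-encrypted blocks.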

Re: [DRBD-user] Best practice: drbd+lvm+gfs2+dm-crypt on dual primary

2015-02-02 Thread Ivan
I'm not sure that two (or more) LUKS partitions are identical given exactly the same cleartext content and the same keys. There must be some kind of sector randomization when writing data to make cryptanalysis harder, which makes me think they are not identical (that would require testing thoug

Re: [DRBD-user] DRBD dual primary mode implications

2015-03-05 Thread Ivan
pcs resource op add ${VM} monitor timeout="30s" interval="10s" pcs resource op add ${VM} migrate_from timeout="60s" interval="0" pcs resource op add ${VM} migrate_to timeout="120s" interval="0" The commands use pcs, but you can easily translate them to crm. Good luck, ivan
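The pcs operations above only cover the Pacemaker side of live migration; on the DRBD side the resource has to allow two primaries at once. A minimal, hedged fragment (resource name r0 is a placeholder, 8.4 syntax):

    resource r0 {
      net {
        protocol C;               # dual-primary requires synchronous replication
        allow-two-primaries yes;  # on 8.3 the option takes no argument
      }
    }

Fencing is strongly recommended whenever two primaries are allowed.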

Re: [DRBD-user] Dual Primary Mode: Shared Directory blocked after node crash until reboot

2015-05-12 Thread Ivan
On 05/12/2015 02:09 PM, DRBD User wrote: Hi @Cesar: thanks for your suggestion - but I don't want to do a manual fence. From Digimer's replies to your posts: 1- the dlm "lock" will be released once the crashed node is set to a *known* state in pacemaker. Without releasing, forget about using
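For completeness, the usual way to tie DRBD into Pacemaker's fencing, so that a crashed node really does get forced into a known state, is the fence-peer handler. A hedged sketch (resource name r0 is a placeholder; the helper script paths are the ones shipped with DRBD 8.4 and may differ per distribution):

    resource r0 {
      disk {
        fencing resource-and-stonith;   # freeze I/O and fence the peer on connection loss
      }
      handlers {
        fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
      }
    }

Pacemaker still needs a working STONITH device for this to actually resolve anything.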

Re: [DRBD-user] Determining whether a resource is in dual-primary mode

2015-10-29 Thread Ivan
little sed magic and you're set up. - neither the documented nor the official hidden commands of drbdadm seem to be able to return this information. What about: $ drbdadm role vm-file Primary/Primary Primary/Primary (resource vm-file here has 2 volumes) Any idea apprec
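Building on that output, a shell check for dual-primary could look like the sketch below. It assumes the 8.4 "drbdadm role" output format quoted above (one Primary/Primary line per volume) and reuses the resource name from this thread:

    # every volume must report Primary on both sides for the resource
    # to count as dual-primary
    if drbdadm role vm-file | grep -qv '^Primary/Primary$'; then
        echo "vm-file is not (fully) dual-primary"
    else
        echo "vm-file is dual-primary"
    fi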

Re: [DRBD-user] split-brain on Ubuntu 14.04 LTS after reboot of master node

2015-11-15 Thread Ivan
problem; maybe the reboot doesn't properly shut down pacemaker, or the network (link, firewall, ...) is torn down before pacemaker is stopped, ... cheers ivan

Re: [DRBD-user] split-brain on Ubuntu 14.04 LTS after reboot of master node

2015-11-15 Thread Ivan
On 11/15/2015 07:53 PM, Digimer wrote: On 15/11/15 11:36 AM, Ivan wrote: On 11/15/2015 05:04 PM, Digimer wrote: On 15/11/15 05:03 AM, Waldemar Brodkorb wrote: Hi, Digimer wrote, property $id="cib-bootstrap-options" \ dc-version="1.1.10-42f2063"

Re: [DRBD-user] Got stuck while installing DRBD for HA

2017-01-11 Thread Ivan
probably a disabled centos-base and/or centos-update repo. That said: 1- drbd-users is not the place to ask for this, as it has nothing to do with drbd; 2- no offense intended, but if you can't solve such basic problems you should probably stay away from drbd/pacemaker until you have a good k

[DRBD-user] drbd_send_uuids() causing kernel panic

2009-12-09 Thread Ivan Ho
now we intermittently see a kernel panic as a result of calling drbd_send_uuids() within after_state_ch(). Any help is much appreciated. Thanks, Ivan Starting DRBD resources: [ d(drbd0) d(drbd1) d(drbd2) d(drbd3) d(drbd4) CPU 2 Unable to handle kernel paging request at virtual address

[DRBD-user] Steps to resize resources on top of LVM, quick peer review is needed.

2009-12-15 Thread Ivan Teliatnikov
Hi everyone, I need to resize an online DRBD resource. DRBD is on top of LVM. resource drbd0 { on node1.aka.edu.au { device /dev/drbd0; disk /dev/lvmc1/export1; address 192.168.1.1:7789; meta-disk /dev/sdd1[0]; } on node2.aka.edu
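The online-resize procedure from the DRBD User's Guide, adapted to the names in this post, would look roughly like the sketch below. The size is a placeholder, and the resource must stay connected with both disks UpToDate throughout:

    # 1. grow the backing LV on BOTH nodes
    lvextend -L +50G /dev/lvmc1/export1
    # 2. on one node only, tell DRBD to pick up the new size
    drbdadm resize drbd0
    # 3. grow the filesystem on the primary (example for ext3/ext4)
    resize2fs /dev/drbd0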

[DRBD-user] Upgrading to latest DRBD on RHEL 5.5

2010-07-23 Thread Ivan Teliatnikov
Hi All, I am running a two-node active/passive HA cluster (RHEL 5.5) with DRBD 8.2.4, kernel version 2.6.18-194.8.1. What is the latest version of DRBD recommended for Red Hat? Thank you in advance, Ivan.

[DRBD-user] DRBD settings difficulties with understanding

2011-01-11 Thread Ivan Pavlenko
y performance? Stability? Thank you in advance, Ivan.

Re: [DRBD-user] DRBD settings difficulties with understanding

2011-01-12 Thread Ivan Pavlenko
of examples on the Internet and my question is "why". Why do lots of people choose it? Apparently, my understanding of split-brain behavior is wrong. Could you explain to me, please, how a two-node system will recover by itself with these settings? Thank you in advance, Ivan. (Sydney) On 01
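The settings the thread is circling around are the after-split-brain policies in the net section. A hedged example of the combination that shows up in many tutorials (resource name r0 is a placeholder, 8.x syntax):

    resource r0 {
      net {
        after-sb-0pri discard-zero-changes;  # neither node was primary: auto-resolve only if one side has no changes
        after-sb-1pri discard-secondary;     # one node was primary: the former secondary's changes are discarded
        after-sb-2pri disconnect;            # both were primary: no automatic decision, an operator must step in
      }
    }

Whether automatic discarding is acceptable depends entirely on how much data loss the application can tolerate.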

[DRBD-user] drbd network block device deadlock ?

2011-02-01 Thread Ivan Frain
solution to have more memory available was to write the dirty pages to disk. If someone has some information about that problem, I'm eager to read it. Thank you in advance. BR, Ivan

Re: [DRBD-user] drbd network block device deadlock ?

2011-02-01 Thread Ivan Frain
explain in my first post. I hope this clarifies the problem. Correct me if I have made wrong assumptions; I just want to be confident that this cannot happen in production. Ivan At Tuesday, 01/02/2011 on 17:44 Antonio Anselmi wrote: OS disk is usually a local device and not managed by drb

Re: [DRBD-user] drbd network block device deadlock ?

2011-02-03 Thread Ivan Frain
Thank you very much, Phil, for your answer. Sounds great. Best Regards, Ivan At Thursday, 03/02/2011 on 10:29 Philipp Reisner wrote: On Tuesday, 1 February 2011, at 16:49:05, Ivan Frain wrote: > Hi all, > > I am currently evaluating DRBD as a storage candidate for highly > availabl

[DRBD-user] List of drbd socket errors

2011-09-20 Thread Ivan Pavlenko
I can get info about error codes? Thank you in advance, Ivan

Re: [DRBD-user] List of drbd socket errors

2011-09-20 Thread Ivan Pavlenko
Sorry for the incomplete info. It's RHEL 5.5 with the 2.6.18-194.17.1.el5 kernel, and DRBD is version 8.3.8 (api:88). Thank you, Ivan On 09/21/2011 10:49 AM, David Coulson wrote: On 9/20/11 8:08 PM, Ivan Pavlenko wrote: Hi All, Recently I had a split brain on my cluster. There was a

Re: [DRBD-user] List of drbd socket errors

2011-09-21 Thread Ivan Pavlenko
"/usr/lib/drbd/notify-split-brain.sh root"; } on infplsm004 { address 192.168.10.9:7790; } on infplsm005 { address 192.168.10.10:7790; } } Thank you, Ivan On 09/21/2011 10:15 PM, Lars Ellenberg wrote: On Wed, Sep 21, 2011 at 10:08:42AM +1000, Ivan Pavlenko wrote: H

[DRBD-user] Split brain problem.

2011-12-01 Thread Ivan Pavlenko
stination I did it from my first server (INFPLSM017). And I get exactly the same result from the second one (INFPLSM018). Could you tell me, please, what the possible reason for this problem is and how I can fix it? Thank you in advance, Ivan

Re: [DRBD-user] Split brain problem.

2011-12-04 Thread Ivan Pavlenko
syslog_priority="info" to_logfile="yes" to_syslog="yes"> name="dlm_controld"/> name="gfs_controld"/> On 12/02/2011 03:05 PM, Digimer wrote: On 12/01/2011 07:30 PM, Ivan Pavlenko wrote: Hi ALL, Could you help me to fi

Re: [DRBD-user] Split brain problem.

2011-12-04 Thread Ivan Pavlenko
ng-timeout=20 --after-sb-2pri=disconnect --after-sb-1pri=discard-secondary --after-sb-0pri=discard-zero-changes --allow-two-primaries --discard-my-data' terminated with exit code 10 # I guess I need to stop cluster daemons, don't I? Thank you again, Ivan On 12/05/2011 12:21 PM, Digimer wrote

Re: [DRBD-user] Split brain problem.

2011-12-04 Thread Ivan Pavlenko
253,0 40962 / drbd1_wor 3414 root rtd DIR 253,0 40962 / drbd1_wor 3414 root txt unknown /proc/3414/exe kill -9 3414 doesn't do anything. I even tried to restart two nodes simultaneously - no luck. Ivan. On 12/05/2011 01:50 PM, Digimer wrote: On 12/04

Re: [DRBD-user] Split brain problem.

2011-12-05 Thread Ivan Pavlenko
Hi all, Digimer, thank you very much for your help. I've finally fixed the split brain. I had to stop the lvm2-monitor and clvmd daemons on both nodes of my cluster, and only after that was I able to repair the r0 volume. Thank you very much again, Ivan On 12/05/2011 03:11 PM, Digimer

[DRBD-user] DRBD uses a wrong interface.

2012-03-06 Thread Ivan Pavlenko
device /dev/drbd1; disk /dev/sdb1; address 10.10.24.11:7789; meta-disk internal; } } [root@infplsm018 ~]# Does somebody have any idea why my server (only one of them; the second is fine!) tries to use the wrong interface? Thank you in advance, Ivan

[DRBD-user] DRBD and Advanced Format hard drives

2012-06-26 Thread Yarovoy Ivan
I am planning to use DRBD with Advanced Format WD hard drives, which have a 4KB physical sector. Should I expect significant performance degradation in this case? All partitions will be aligned on a 4KB boundary. Thanks for any help, Ivan
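For anyone checking the same thing, alignment can be verified before deploying DRBD with standard tools; /dev/sda and the partition number below are placeholders:

    # report physical and logical sector sizes of the drive
    blockdev --getpbsz --getss /dev/sda
    # check that partition 1 starts on an optimal boundary
    parted /dev/sda align-check optimal 1

As long as every partition starts on a 4KB multiple, the drives should avoid the read-modify-write penalty that unaligned writes would incur.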

[DRBD-user] [PATCH] drivers: block: drbd: remove unused macros

2015-12-27 Thread Ivan Safonov
div_ceil and div_floor macros duplicate round_up and round_down from kernel.h Signed-off-by: Ivan Safonov --- drivers/block/drbd/drbd_int.h | 5 - 1 file changed, 5 deletions(-) diff --git a/drivers/block/drbd/drbd_int.h b/drivers/block/drbd/drbd_int.h index e66d453..08256ad 100644 --- a