div_ceil and div_floor macros duplicate round_up and round_down from kernel.h
Signed-off-by: Ivan Safonov <insafo...@gmail.com>
---
drivers/block/drbd/drbd_int.h | 5 -
1 file changed, 5 deletions(-)
diff --git a/drivers/block/drbd/drbd_int.h b/drivers/block/drbd/drbd_int.h
index e
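The hunk itself is cut off above, so as a rough illustration only - a minimal userspace sketch, assuming the removed macros had the classic open-coded form - here is the duplication side by side. DIV_ROUND_UP is the kernel.h helper for arbitrary divisors; round_up()/round_down() return a rounded multiple rather than a quotient and require a power-of-two second argument.

/* Sketch only, not the patch body: assumed local helpers next to a
 * kernel.h-style counterpart. */
#include <assert.h>

/* Local macros of the kind the patch removes (assumed form). */
#define div_ceil(A, B)  ((A)/(B) + ((A)%(B) ? 1 : 0))
#define div_floor(A, B) ((A)/(B))

/* kernel.h defines DIV_ROUND_UP this way; it works for any divisor. */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
	assert(div_ceil(10, 4)  == DIV_ROUND_UP(10, 4)); /* both 3 */
	assert(div_floor(10, 4) == 10 / 4);              /* both 2 */
	return 0;
}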
... doesn't properly shut down pacemaker, or the network (link, firewall,
...) is torn down before pacemaker is stopped, ...
cheers
ivan
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user
... a little sed magic and you're set up.
- neither the documented nor the official hidden commands of drbdadm seem
to be able to return this information
what about:
$ drbdadm role vm-file
Primary/Primary
Primary/Primary
(resource vm-file here has 2 volumes)
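If sed over that output ever feels brittle, the per-volume split is easy to reproduce programmatically as well; a minimal C sketch (the resource name vm-file is taken from this thread, and drbdadm is assumed to be in PATH):

#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
	/* Run "drbdadm role RES"; drbdadm prints one role line per
	 * volume of the resource, so number the lines as volumes. */
	char cmd[256], line[128];
	snprintf(cmd, sizeof(cmd), "drbdadm role %s",
	         argc > 1 ? argv[1] : "vm-file");
	FILE *p = popen(cmd, "r");
	if (!p) {
		perror("popen");
		return 1;
	}
	int vol = 0;
	while (fgets(line, sizeof(line), p)) {
		line[strcspn(line, "\n")] = '\0';
		printf("volume %d: %s\n", vol++, line);
	}
	return pclose(p) != 0;
}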
Any ideas appreciated.
On 05/12/2015 02:09 PM, DRBD User wrote:
Hi
@Cesar: thanks for your suggestion - but I don't want to do a manual fence.
from Digimer's replies to your posts:
1- the DLM lock will be released once the crashed node is set to a
*known* state in pacemaker. Without releasing, forget about using
} migrate_to timeout=120s interval=0
The commands use pcs, but you can easily translate them to crm.
good luck
ivan
(both for disk and network)
- trivial, but check that you don't accidentally have QoS rules
ivan
But when I check performance values I do not see any bottlenecks:
- CPU I/O wait is below 2%
- the CPU itself is ~98% idle
- the network is not busy at all.
Does anyone have a clue why this is so slow?
Here is my
Hi
On 10/29/2014 06:34 AM, aurelien panizza wrote:
Hi all,
Here is my problem :
I've got two servers connected via a switch with a dedicated 1Gb NIC. We
have an SSD RAID 1 on both servers (software RAID on Red Hat 6.5).
Using the dd command I can reach ~180MB/s (dd if=/dev/zero
/converged-network-adapters/ethernet-x520-qda1-brief.html
At those speeds it would be interesting to test the upcoming 3.18 kernel
with the bulk network transmission patch [1]; that should save you a
bunch of CPU cycles.
[1] http://lwn.net/Articles/615238
ivan
In my hardware setup also I
to test if the problem is still in the last rc.
ivan
On 09/17/2014 12:09 PM, Lars Ellenberg wrote:
On Sun, Sep 14, 2014 at 10:15:25AM +0300, Ivan wrote:
(side question: I read that the by-res/ naming is 8.x legacy. Will
the 9.x series drop that and use exclusively /dev/drbd[0-9]+ and
/dev/drbd_{resourcename} devices?)
I guess you
transition.
cheers
ivan
I am planning to use DRBD with Advanced Format WD hard drives which have a
4KB physical sector size. Should I expect significant performance degradation
in this case? All partitions will be aligned on a 4KB boundary.
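Not part of the original question, but one quick sanity check is to ask the kernel what it reports for such a drive; a minimal sketch using the standard BLKSSZGET/BLKPBSZGET ioctls (/dev/sdb is only an example path):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>   /* BLKSSZGET, BLKPBSZGET */

int main(int argc, char **argv)
{
	const char *dev = argc > 1 ? argv[1] : "/dev/sdb";
	int fd = open(dev, O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	int logical = 0;
	unsigned int physical = 0;
	if (ioctl(fd, BLKSSZGET, &logical) < 0 ||
	    ioctl(fd, BLKPBSZGET, &physical) < 0) {
		perror("ioctl");
		close(fd);
		return 1;
	}

	/* Advanced Format (512e) drives typically report 512/4096. */
	printf("%s: logical %d bytes, physical %u bytes\n",
	       dev, logical, physical);
	close(fd);
	return 0;
}

As long as every write stays 4KB-aligned, a 512e drive avoids its internal read-modify-write penalty, so aligned partitions (and aligned DRBD metadata) are the main thing to verify.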
Thanks for any help,
Ivan
Any idea why my server (only one - the second is fine!)
tries to use the wrong interface?
Thank you in advance,
Ivan
=disconnect --after-sb-1pri=discard-secondary
--after-sb-0pri=discard-zero-changes --allow-two-primaries
--discard-my-data' terminated with exit code 10
#
I guess I need to stop cluster daemons, don't I?
Thank you again,
Ivan
On 12/05/2011 12:21 PM, Digimer wrote:
On 12/04/2011 04:15 PM, Ivan Pavlenko wrote:
253,0 40962 /
drbd1_wor 3414 root rtd DIR 253,0 40962 /
drbd1_wor 3414 root txt unknown /proc/3414/exe
kill -9 3414 doesn't do anything. I even tried to restart both nodes
simultaneously - no luck.
Ivan.
On 12/05/2011 01:50 PM, Digimer wrote:
On 12/04/2011
	;
	}
	on infplsm004 {
		address 192.168.10.9:7790;
	}
	on infplsm005 {
		address 192.168.10.10:7790;
	}
}
Thank you,
Ivan
On 09/21/2011 10:15 PM, Lars Ellenberg wrote:
On Wed, Sep 21, 2011 at 10:08:42AM +1000, Ivan Pavlenko wrote:
Hi All,
Recently I had a split brain on my cluster
error codes?
Thank you in advance,
Ivan
the solution to have more memory available was to write the dirty pages to
disk.
If someone has some information about that problem, I'm eager to read it.
Thank you in advance.
BR,
Ivan
in my first post.
I hope this clarifies the problem.
Correct me if I have made wrong assumptions; I just want to be
confident that this cannot happen in production.
Ivan
On Tuesday, 01/02/2011 at 17:44, Antonio Anselmi wrote:
The OS disk is usually a local device and not managed by drbd, i.e.
,
Ivan.
, but now we intermittently see a kernel panic as a
result of calling drbd_send_uuids() within after_state_ch().
Any help is much appreciated.
Thanks,
Ivan
Starting DRBD resources: [ d(drbd0) d(drbd1) d(drbd2) d(drbd3)
d(drbd4) CPU 2 Unable to handle kernel paging request at virtual address
User's guide, whilst the man page for drbdadm does
not mention it.
3. After DRBD starts the synchronisation process, can I mount the block
devices on the master node, or do I have to wait until synchronisation
is completed?
Thank you very much for your help.
Ivan