2015-06-18 16:39 GMT+02:00 Robert Altnoeder:
> On 05/28/2015 05:05 PM, Ben RUBSON wrote:
Thank you for your answer, and don't worry, we understand you clearly!
What I want is the following:
When the secondary node is Outdated, I want it to be UpToDate as soon as
possible, even if new blocks arrive on the primary node, which would of course
also need to be replicated.
My priority i
Hello.
I didn't reply to your email because:
I'm a DRBD user; I'm not on the DRBD team.
My English is poor.
And I don't understand what your problem is, because in the email you
sent, the DRBD sync is approx. 680MB/s.
Is RAID-10 at 800MB/s a theoretical or a tested value?
Do you test link connectio
Hello,
I'm sorry to ask again, but could you help me with this, please?
I really don't know how to go further, or whether the behavior I would like
is supported by DRBD or not...
DRBD team?
Any support would really be appreciated.
Thank you again,
Best regards,
Ben
2015-05-28 17:05 GMT
So,
I played for hours with the dynamic resync rate controller.
Here are my settings:
c-plan-ahead 10;   // RTT is 10ms between my 2 nodes, but a minimum of 1 second
                   // (a value of 10, in 0.1s units) is recommended here:
                   // https://blogs.linbit.com/p/128/drbd-sync-rate-controller/
resync-rate 680M;  // mostly ignored while the dynamic controller is active
c-min-rate 400M;
c-max-rate 68
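For reference, settings like these would sit together in the resource's disk section. This is a sketch only: the resource name is hypothetical, and since the c-max-rate line above is cut off, the value used here is an assumption, not the actual setting from the message:

```
resource r0 {                 # hypothetical resource name
    disk {
        c-plan-ahead 10;      # controller agility, in 0.1s units (10 = 1 second)
        resync-rate 680M;     # mostly ignored once the dynamic controller is on
        c-min-rate 400M;      # resync is not throttled below this rate
        c-max-rate 700M;      # assumed value; the original message is truncated
    }
}
```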
Have you tested these values?
https://drbd.linbit.com/users-guide/s-throughput-tuning.html
On 26/05/15 at 13:16, Ben RUBSON wrote:
RAID controller is OK, yes.
Here is a 4-step example of the issue:
### 1 - initial state :
Source :
- sdb read MB/s : 0
- sdb write MB/s : 0
- eth1 incoming MB/s : 0
- eth1 outgoing MB/s : 0
Target :
- sdb read MB/s : 0
- sdb write MB/s : 0
- eth1 incoming MB/s : 0
- eth1 outgoing MB/s : 0
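The per-device and per-interface figures in the steps above can be sampled from the kernel's cumulative byte counters. A minimal Python sketch, assuming Linux sysfs paths such as /sys/class/net/eth1/statistics/tx_bytes (device names taken from the example; block-device throughput would come from the sector counts in /sys/block/sdb/stat instead):

```python
import time

def mb_per_sec(bytes_before: int, bytes_after: int, interval_s: float) -> float:
    """Turn two readings of a cumulative byte counter into an MB/s rate."""
    return (bytes_after - bytes_before) / interval_s / 1e6

def read_counter(path: str) -> int:
    """Read one cumulative kernel counter, e.g.
    /sys/class/net/eth1/statistics/tx_bytes (bytes sent since boot)."""
    with open(path) as f:
        return int(f.read().strip())

def sample(path: str, interval_s: float = 1.0) -> float:
    """Sample a counter twice and return the rate in MB/s."""
    before = read_counter(path)
    time.sleep(interval_s)
    return mb_per_sec(before, read_counter(path), interval_s)

# Usage (on the nodes from the example above):
#   sample("/sys/class/net/eth1/statistics/rx_bytes")  # eth1 incoming MB/s
#   sample("/sys/class/net/eth1/statistics/tx_bytes")  # eth1 outgoing MB/s
```

Tools like iostat do essentially the same two-sample subtraction.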
Are the cache and I/O settings in the RAID controller optimal? Write-back,
write-through, cache enabled, direct I/O, ...
On 25/05/15 at 11:50, Ben RUBSON wrote:
The link between nodes is a 10Gb/s link.
The DRBD resource is a RAID-10 array which is able to resync at up to 800M (as
you can see I have lowered it to 680M in my configuration file).
The "issue" here seems to be a prioritization "issue" between application IOs
and resync IOs.
Perhaps I miss-co
Is the link between the nodes 1Gb/s? 10Gb/s? ...
Are the hard disks SATA 7200rpm? SAS? SSD? ...
400M to 680M with a 10Gb/s link and SAS 15.000rpm is OK, but less ...
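As a sanity check on those numbers: a 10Gb/s link carries at most 1250MB/s, so both the 680M ceiling and the 400M floor fit comfortably on the wire. A quick back-of-the-envelope sketch (the 5% protocol-overhead figure is an assumption for illustration):

```python
def link_mb_per_s(gbit_per_s: float, overhead: float = 0.0) -> float:
    """Raw link bandwidth in MB/s, minus an assumed protocol-overhead fraction."""
    return gbit_per_s * 1e9 / 8 / 1e6 * (1.0 - overhead)

raw = link_mb_per_s(10)           # 10Gb/s link -> 1250.0 MB/s raw
usable = link_mb_per_s(10, 0.05)  # with an assumed ~5% overhead, ~1187.5 MB/s
# A 680MB/s resync ceiling therefore uses a bit over half the link.
```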
On Sun, 24-05-2015 at 20:43 +0200, Ben RUBSON wrote:
> On 12 Apr 2014 at 17:23, Ben RUBSON wrote:
Hello,
Let's assume the following configuration:
disk {
    c-plan-ahead 0;
    resync-rate 680M;
    c-min-rate 400M;
}
Both nodes are UpToDate, and on the primary, I have a test IO burst
running, using dd.
I then cut the replication link for a few minutes, so that the secondary node will
be several GB
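To put the scenario in numbers: cutting the link leaves the secondary several GB behind, and the catch-up time at the configured rates is easy to estimate. A sketch, with an assumed 5GB backlog (the actual figure is not in the message):

```python
def resync_seconds(backlog_gb: float, rate_mb_per_s: float) -> float:
    """Estimated catch-up time for an out-of-sync backlog at a given resync
    rate. Uses decimal units: 1 GB = 1000 MB."""
    return backlog_gb * 1000.0 / rate_mb_per_s

# An assumed 5GB backlog after cutting the link for a few minutes:
full_speed = resync_seconds(5, 680)  # ~7.4s at resync-rate 680M
throttled = resync_seconds(5, 400)   # 12.5s if held down at c-min-rate 400M
```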