Hi list!

 

I have a three-node drbdmanage cluster.

Two nodes act as storage backends (S1/S2).

One node is a satellite pure client (diskless, for future Nova usage).

The storage backends and the satellite client node are connected over a 20 Gbit/s LACP bond.
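
(For reference, I verify the bond mode on each node with the bonding proc file; bond0 is just a placeholder for the bond interface name:)

cat /proc/net/bonding/bond0    # bond0 is a placeholder for the bond interface name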

 

So, when I benchmark the DRBD device locally on a storage node, with the two storage nodes connected:

dd if=/dev/zero of=/dev/drbd104 bs=1M count=512 oflag=direct

I get almost 680 MB/s => that is fine for me.

 

Then I assign the resource to the satellite node and run the same test there:

dd if=/dev/zero of=/dev/drbd104 bs=1M count=512 oflag=direct

I only get 420 MB/s => why?
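
(For completeness, on the satellite the resource should show up as diskless and connected to both backends; I check that with drbdadm status, the resource name below being only a placeholder:)

drbdadm status r104    # r104 is a placeholder for the resource name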

 

If I run the same test on the satellite node, but with the resource disconnected on one of the storage backends:

dd if=/dev/zero of=/dev/drbd104 bs=1M count=512 oflag=direct

I get 650 MB/s => that is fine for me.
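
(The disconnect for that test is done on one of the storage backends, roughly like this; again the resource name is a placeholder:)

drbdadm disconnect r104    # on one backend, for the duration of the test (r104 = placeholder)
drbdadm connect r104       # reconnect afterwards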

 

The 20 Gbit/s bond should be able to carry both replication flows between the storage backends and the satellite.

I don't understand where the bottleneck or misconfiguration is.
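
(If it helps, I can also measure raw TCP throughput from the satellite to each backend with iperf3, something like this, with S1/S2 standing in for the real hostnames:)

iperf3 -s                  # on the storage backend (S1, then S2)
iperf3 -c S1 -P 2 -t 30    # on the satellite; S1 stands in for the real hostname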

 

(With read balancing I can read at about 800 MB/s; I haven't tried other settings to push it higher.)
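
(Read balancing is set in the disk section; the policy shown below is just an example of the syntax, not necessarily the one I actually use:)

disk {
    read-balancing round-robin;    # policy shown is only an example of the syntax
}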

 

Schema:

              -------------
              - Satellite -
              -------------
                    ||
          ---------------------
          -      Switch       -
          ---------------------
            ||             ||
          ------         ------
          - S1 -         - S2 -
          ------         ------

 

 

 

Cfg:

protocol C;
al-extents 6007;
md-flushes no;
disk-barrier no;
disk-flushes no;

Everything else is at defaults (I had good results with these).
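
(For clarity, this is roughly how those options sit in the resource configuration; the resource name is a placeholder and the layout is only a sketch, since drbdmanage generates the actual files:)

resource r104 {                    # r104 is a placeholder; drbdmanage generates the real file
    net {
        protocol C;
    }
    disk {
        al-extents   6007;
        md-flushes   no;
        disk-barrier no;
        disk-flushes no;
    }
}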

 

If you have any suggestions I would be very happy to hear them.

Thanks!

 

 
