On 10/12/2014 1:04 PM, Digimer wrote:
If it's only 1 Gbps, then the maximum sustainable write speed is ~120
MB/sec at most; the slower of the network or the disks determines the
ceiling. You want the sync rate to be ~30% of that maximum, or else
you will choke out the applications using the DRBD resource.
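For reference, a minimal sketch of what that cap looks like in 8.3-style
drbd.conf syntax (the resource name r0 is just a placeholder; ~35M is roughly
30% of a gigabit link's ~110-120 MB/s, and 8.4 moved this to a resync-rate
option in the disk section):

  resource r0 {
    syncer {
      rate 35M;    # cap background resync at ~30% of the link
    }
  }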
On 10/12/2014 9:30 AM, Digimer wrote:
I can't speak to backuppc, but I am curious how you're managing the
resources. Are you using cman + rgmanager or pacemaker?
strictly manually, with drbdadm and such. if the primary backup server
ever fails, I'll bring up the backup by hand.
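For what it's worth, that manual failover is roughly the following on the
surviving node; the resource name r0, the mount point /data and the backuppc
service name are only placeholders here:

  drbdadm primary r0            # promote the surviving replica
  mount /dev/drbd0 /data        # mount the backuppc pool
  service backuppc start        # bring the application back up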
On 12/10/14 04:07 AM, John R Pierce wrote:
On 10/11/2014 11:30 PM, John R Pierce wrote:
so I've had a drbd replica running for a while of a 16TB raid that's
used as a backuppc repository.
oh. this is running on a pair of centos 6.latest boxes, each dual xeon
x5650 w/ 48GB ram, with LSI SAS2 r
On 12/10/14 02:30 AM, John R Pierce wrote:
so I've had a drbd replica running for a while of a 16TB raid that's used
as a backuppc repository.
when I have rebooted the backuppc server, the replica doesn't seem to
auto-restart until I do it manually, and the backuppc /data file system on
this 16TB LU
On 10/11/2014 11:30 PM, John R Pierce wrote:
so I've had a drbd replica running for a while of a 16TB raid that's
used as a backuppc repository.
oh. this is running on a pair of centos 6.latest boxes, each dual xeon
x5650 w/ 48GB ram, with LSI SAS2 raid card hooked up to a whole lotta
sas/sa
so I've had a drbd replica running for a while of a 16TB raid that's used
as a backuppc repository.
when I have rebooted the backuppc server, the replica doesn't seem to
auto-restart until I do it manually, and the backuppc /data file system on
this 16TB LUN doesn't seem to automount, either.
I'
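A likely explanation, sketched on the assumption that this is the stock
CentOS 6 init setup: the drbd init script has to be enabled explicitly, and
the filesystem cannot live in fstab because it is only mountable after the
node has been promoted:

  chkconfig drbd on          # attach and connect the resource at boot
  service drbd status        # verify the replica came up after reboot
  # mount only after promotion, by hand or by a cluster manager:
  drbdadm primary r0 && mount /dev/drbd0 /data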
>
> I also haven't investigated yet if drbd devices can be 'grown' ... pause
> replication, lvextend slave and master, xfs_grow the master, and resume
> replication? or is that too easy and it won't work...
>
They can be grown (I used it for static image store on the Sky
Entertainment websites
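A sketch of the usual online-grow sequence, assuming the backing device is an
LV on both nodes, the resource is called r0, the filesystem is XFS mounted at
/data, and the resource stays connected throughout (names and sizes are only
illustrative):

  # on both nodes: grow the backing LV
  lvextend -L +2T /dev/vg0/drbd_backing
  # on the primary: tell DRBD to use the new space (syncs the new area)
  drbdadm resize r0
  # on the primary, once the resize is done: grow the mounted filesystem
  xfs_growfs /data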
On 2013-02-26, John R Pierce wrote:
>
> the use case is more like, if the primary backup server fails, I'd like
> to have the secondary backup server running within a few hours of
> futzing with the existing backups available for recovery.
If you're doing something rsync-like, and if your buil
On 2/26/2013 4:17 PM, Steve Thompson wrote:
> On Tue, 26 Feb 2013, John R Pierce wrote:
>
>> >the initial sync of the 8TB starting volumes is looking to be a 460 hour
>> >affair.
> Something wrong here. That's only 5 MB/sec; I did an initial sync of a
> 10TB volume in less than a day (dual bonded g
On Tue, 26 Feb 2013, John R Pierce wrote:
> the initial sync of the 8TB starting volumes is looking to be a 460 hour
> affair.
Something wrong here. That's only 5 MB/sec; I did an initial sync of a
10TB volume in less than a day (dual bonded gigabits, dedicated).
Steve
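For reference, the arithmetic behind that figure: 8 TB is about 8,000,000 MB,
and 460 hours is 1,656,000 seconds, so 8,000,000 / 1,656,000 ≈ 4.8 MB/sec,
which is where the ~5 MB/sec above comes from; a dedicated gigabit link
should sustain roughly twenty times that.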
On 2/26/2013 3:36 PM, Les Mikesell wrote:
> That should work, but what happens if they ever get out of sync? How
> long will it take drbd to catch up with something that size?
the initial sync of the 8TB starting volumes is looking to be a 460 hour
affair. yeouch. I might have to rethink th
On Tue, Feb 26, 2013 at 5:26 PM, John R Pierce wrote:
>
> don't have anywhere near that sort of uptime requirements, but when data
> starts spiralling out into the multi-terabytes with billions of file
> links, rsync is painfully slow.
Yes, the one problem with backuppc is that the number of hard
On 2/26/2013 3:03 PM, Patrick Flaherty wrote:
> That being said, if you have a requirement that your backup solution
> is up five nines, then yeah, use drbd / pacemaker, it's just not a use
> case I see very often.
don't have anywhere near that sort of uptime requirements, but when data
starts sp
On Tue, Feb 26, 2013 at 4:44 PM, John R Pierce wrote:
> hey, I have an application for drbd replication between a pair of EL6
> servers, and I just realized that drbd is no longer built in.
>
> googling found me this blog on doing it using ElRepo distributions...
> http://www.broexperts.com/2012/0
hey, I have an application for drbd replication between a pair of EL6
servers, and I just realized that drbd is no longer built in.
googling found me this blog on doing it using ElRepo distributions...
http://www.broexperts.com/2012/06/how-to-install-drbd-on-centos-6-2/
is that still best pract
>
> Anyone experienced in DRBD
>
Please read this:
http://planetmysql.ru/2010/07/21/mysql-ha-with-drdb-and-heartbeat-on-centos-5-5/
another url
http://alexsdba.spaces.live.com/blog/cns!F86565E81CD9BC16!150.entry
http://alexsdba.spaces.live.com/blog/cns!F86565E81CD9BC16!149.entry
http://alexsd
On 02/15/2011 09:02 AM, ann kok wrote:
> Hi
>
> Anyone experienced in DRBD
Sort of ...
>
> Is it good for making MySQL redundant?
>
I use DRBD to keep whole servers redundant ... one of the things it
keeps redundant is mysql.
If the machine was only a database server I would likely do it
diffe
On 2/15/2011 9:18 AM, Brunner, Brian T. wrote:
>
>> Anyone experienced in DRBD
>
> Is this a CentOS question, or a database language question?
> What mailing-list or news group is the best choice for this question?
DRBD is closer to an OS-related topic than a database... You could go
to the HA-
From: ann kok
> Anyone experienced in DRBD
> Is it good for making MySQL redundant?
> What is best to use?
Maybe check http://mysql-mmm.org/
JD
> -Original Message-
> From: centos-boun...@centos.org
> [mailto:centos-boun...@centos.org] On Behalf Of ann kok
> Sent: Tuesday, February 15, 2011 10:03 AM
> To: centos@centos.org
> Subject: [CentOS] DRBD question
>
> Hi
>
> Anyone experienced in DRBD
Hi
Anyone experienced in DRBD
Is it good for making MySQL redundant?
What is best to use?
Thank you
On Sun, 3 Oct 2010, Dag Wieers wrote:
> On Thu, 30 Sep 2010, Shad L. Lords wrote:
>
>> Can we get a refresh of the drbd packages to 8.3.8.1
>>
>> There was a fix to the resync protocol. 8.3.8 would stall under certain
>> circumstances.
>
> If you haven't tried the ELRepo DRBD packages yet, could
On 10/2/2010 6:12 PM, Dag Wieers wrote:
> If you haven't tried the ELRepo DRBD packages yet, could you please test
> the one at:
>
> http://elrepo.org/linux/testing/el5/i386/RPMS/
> http://elrepo.org/linux/testing/el5/x86_64/RPMS/
>
> and provide feedback ? The more people test and prov
>The more people test and provide feedback, the
>quicker we can move it out of testing, into the elrepo repository.
Dag,
I got these on my cluster at work; it will be exercised thoroughly this
weekend with a TB of data changing. I'll report back next week.
jlc
On Thu, 30 Sep 2010, Shad L. Lords wrote:
> Can we get a refresh of the drbd packages to 8.3.8.1
>
> There was a fix to the resync protocol. 8.3.8 would stall under certain
> circumstances.
Hi Shad,
If you haven't tried the ELRepo DRBD packages yet, could you please test
the one at:
h
Can we get a refresh of the drbd packages to 8.3.8.1
There was a fix to the resync protocol. 8.3.8 would stall under certain
circumstances.
Thanks,
-Shad
Hello,
Has anyone had problems recently with DRBD updates?
Thanks,
--
Daniel Bruno
http://danielbruno.eti.br
On Tue, Jun 22, 2010 at 9:03 AM, JohnS wrote:
>
> On Mon, 2010-06-21 at 17:11 +0200, Ralph Angenendt wrote:
>> On Mon, Jun 21, 2010 at 12:54 PM, Joseph L. Casale
>> wrote:
>> >>This seems like duplication of effort with the CentOS people, since they
>> >>already package DRBD for CentOS 5.x (and i
On Mon, 2010-06-21 at 17:11 +0200, Ralph Angenendt wrote:
> On Mon, Jun 21, 2010 at 12:54 PM, Joseph L. Casale
> wrote:
> >>This seems like duplication of effort with the CentOS people, since they
> >>already package DRBD for CentOS 5.x (and it works very well).
> >
> > No, it's not, the CentOS pac
On 6/21/2010 2:35 PM, Ralph Angenendt wrote:
> Am 21.06.10 20:45, schrieb Joseph L. Casale:
>>> http://dev.centos.org/testing/ tells me something else (and yes, this
>>> time they will go into extras).
>>
>> Well, we talked about it for some time and I never saw an update to
>> the effort someone m
Am 21.06.10 20:45, schrieb Joseph L. Casale:
>> http://dev.centos.org/testing/ tells me something else (and yes, this
>> time they will go into extras).
>
> Well, we talked about it for some time and I never saw an update to
> the effort someone made (my bad? I must have missed that). Couple
> thi
Am 21.06.10 18:21, schrieb Dag Wieers:
> Once again, I didn't want any controversy, we are just looking for CentOS
> people that are willing to test and provide feedback regarding the ELRepo
> kmod-drbd packages (preferably on the ELRepo bug-tracker / mailinglist to
> not cause even more contro
>http://dev.centos.org/testing/ tells me something else (and yes, this
>time they will go into extras).
Well, we talked about it for some time and I never saw an update to
the effort someone made (my bad? I must have missed that). Couple
this with the concerns posted by Dag that motivated him to s
On 6/21/2010 11:21 AM, Dag Wieers wrote:
>
> Now, it shouldn't really matter to users whether this is a duplication of
> effort or not. Users will now have additional choice, if CentOS delays or
> skips a release, ELRepo might have it available. Everybody wins.
On the other hand, it is likely to c
On Mon, 21 Jun 2010, Ralph Angenendt wrote:
> On Mon, Jun 21, 2010 at 12:54 PM, Joseph L. Casale
> wrote:
>
>>> This seems like duplication of effort with the CentOS people, since they
>>> already package DRBD for CentOS 5.x (and it works very well).
>>
>> No, it's not, the CentOS packages are no l
On Mon, Jun 21, 2010 at 12:54 PM, Joseph L. Casale
wrote:
>>This seems like duplication of effort with the CentOS people, since they
>>already package DRBD for CentOS 5.x (and it works very well).
>
> No, it's not; the CentOS packages are no longer maintained...
http://dev.centos.org/testing/ tells
>This seems like duplication of effort with the CentOS people, since they
>already package DRBD for CentOS 5.x (and it works very well).
No, it's not; the CentOS packages are no longer maintained...
Thanks, Juergen, for your response. I did not post until now because I've been
fighting with all the cluster stuff! :D
I mean GFS2. DRBD, MySQL and Heartbeat work fine in an active/passive
configuration.
What really does not fit my needs is the cluster stack, which I have to
use only to be able to mo
Hi,
Yes, you need to go with the cluster stuff...
Regarding your setup, I got the best results with drbd + gfs + iscsi
export. GNBD was not as stable as I expected, and its overall performance
was worse than iSCSI's, too.
Greetings
Juergen
On 03/27/2010 05:39 PM, Raffaele Camarda wrot
Hi all,
What I want to achieve:
1) two storage servers replicating a partition with DRBD
2) exporting the DRBD device (carrying GFS2) via GNBD from the primary server
3) importing the GNBD on some nodes and mounting it with GFS2
Assuming no logical errors in the points above, this is the
situati
On Thu, Mar 11, 2010 at 6:35 PM, Akemi Yagi wrote:
> Ralph? Are you the dev in charge of the maintenance?
Looks like I have to be :)
Cheers,
Ralph
On Fri, Mar 12, 2010 at 4:31 AM, Nicholas L. Soms wrote:
> OK, thank you!
>
> One more question.
> Does anyone have the same problem - when installing the drbd-kmdl package via
> yum, the madwifi packages need to be removed? Maybe I missed something, but I
> also need the madwifi packages =)
There is some conf
OK, thank you!
One more question.
Does anyone have the same problem - when installing the drbd-kmdl package via
yum, the madwifi packages need to be removed? Maybe I missed something, but I
also need the madwifi packages =)
2010/3/11 Akemi Yagi
> On Thu, Mar 11, 2010 at 9:30 AM, Akemi Yagi wrote:
> > On Thu
On Thu, Mar 11, 2010 at 9:35 AM, Joseph L. Casale
wrote:
>>No, it is a kernel version independent, kABI-tracking kernel module.
>>So, it should survive each kernel update. No need for rebuilding. (It
>>is different from drbd-kmdl)
>
> Akemi,
> Funny, there was just a thread on the extra repo's dr
>No, it is a kernel version independent, kABI-tracking kernel module.
>So, it should survive each kernel update. No need for rebuilding. (It
>is different from drbd-kmdl)
Akemi,
Funny, there was just a thread on the extra repo's drbd packages today
in the drbd list. Given those packages have bugs
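For anyone who wants to see the kABI-tracking behaviour described above for
themselves, a rough check (assuming the kmod-drbd package from this thread is
installed; exact paths and versions will vary by kernel):

  rpm -ql kmod-drbd | grep '\.ko'
  find /lib/modules -name 'drbd.ko*'     # the module lives under one kernel's
                                         # extra/ dir; weak-updates symlinks make
                                         # it usable by later compatible kernels
  modinfo drbd | grep -E '^filename|^vermagic'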
On Thu, Mar 11, 2010 at 9:30 AM, Akemi Yagi wrote:
> On Thu, Mar 11, 2010 at 9:19 AM, Nicholas L. Soms wrote:
>> Dear colleagues!
>>
>> I've got one question about drbd-kmdl packages.
>> As I can see at
>> $ rpm -qp --qf "%{DESCRIPTION}" kmod-drbd-8.0.16-5.el5_3.i686.rpm
>> This package provides t
On Thu, Mar 11, 2010 at 9:19 AM, Nicholas L. Soms wrote:
> Dear colleagues!
>
> I've got one question about drbd-kmdl packages.
> As I can see at
> $ rpm -qp --qf "%{DESCRIPTION}" kmod-drbd-8.0.16-5.el5_3.i686.rpm
> This package provides the drbd kernel modules built for the Linux
> kernel 2.6.18-1
Dear colleagues!
I've got one question about drbd-kmdl packages.
As I can see at
$ rpm -qp --qf "%{DESCRIPTION}" kmod-drbd-8.0.16-5.el5_3.i686.rpm
This package provides the drbd kernel modules built for the Linux
kernel 2.6.18-128.4.1.el5 for the i686 family of processors.
So, the question is - km
Hey Guys,
I want to use drbd (prot A) to handle replication for some backup volumes.
I see 8/8.2/8.3 available, are the 8.3 packages considered stable?
The files on the primary node will be from 2->400 gig in size. The nodes
are interconnected by gig fiber, the volumes will be lvm backed, given I
On Thu, Dec 31, 2009 at 1:45 PM, robert mena wrote:
> Hi,
> I am trying to use drbd in my centos 5.4 but I keep getting errors.
> When I try modprobe -v drbd I receive
> insmod /lib/modules/2.6.18-164.el5PAE/extra/drbd/drbd.ko
> FATAL: Error inserting drbd
> (/lib/modules/2.6.18-164.el5PAE/extra/d
Hi,
I am trying to use drbd in my centos 5.4 but I keep getting errors.
When I try modprobe -v drbd I receive
insmod /lib/modules/2.6.18-164.el5PAE/extra/drbd/drbd.ko
FATAL: Error inserting drbd
(/lib/modules/2.6.18-164.el5PAE/extra/drbd/drbd.ko): Invalid module format.
The current (running) ke
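"Invalid module format" is the kernel refusing a module built against a
different kernel ABI. A quick way to confirm, using the path from the error
message above (the usual culprit is a module built for the non-PAE kernel, or
for a different kernel release, than the one actually running):

  uname -r
  modinfo /lib/modules/2.6.18-164.el5PAE/extra/drbd/drbd.ko | grep vermagic
  # if vermagic does not match the running kernel, install the drbd
  # kmod/kmdl built for that exact kernel and retry the modprobe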
Scott McClanahan wrote:
> Would any of you be comfortable running the drbd packages from the
> extras repo? If so, any particular version .. I notice 8.0, 8.2, 8.3.
> I'll do my own due diligence but just curious if the list has any
> implementation based feedback. Thanks.
>
I've used DRBD on s
I am currently playing with the 8.3 package (8.2 redirects to 8.3 btw).
so far I haven't had any issues with it.
Jacob Bresciani
Linux Systems Administrator
Advanced Economic Research Systems / Terapeak
Cell: 250 418-5412
On 2009-12-18, at 8:53 AM, Flaherty, Patrick wrot
> Would any of you be comfortable running the drbd packages
> from the extras repo? If so, any particular version .. I
> notice 8.0, 8.2, 8.3.
> I'll do my own due diligence but just curious if the list has
> any implementation based feedback. Thanks.
I've been running 8.0 for a year or more
Would any of you be comfortable running the drbd packages from the
extras repo? If so, any particular version .. I notice 8.0, 8.2, 8.3.
I'll do my own due diligence but just curious if the list has any
implementation based feedback. Thanks.
On Aug 21, 2009, at 6:27 AM, Karanbir Singh
wrote:
> On 08/20/2009 05:46 PM, Coert Waagmeester wrote:
>> Xen DomU
>>
>> DRBD
>>
>> LVM Volume
>>
>> RAID 1
>>
>
> this makes no sense, you are losing about 12% of i/o capability
> here -
> even before hitt
On 08/20/2009 05:46 PM, Coert Waagmeester wrote:
> Xen DomU
>
> DRBD
>
> LVM Volume
>
> RAID 1
>
this makes no sense, you are losing about 12% of i/o capability here -
even before hitting drbd, and then taking another hit on whatever drbd
brings in ( depen
Coert Waagmeester wrote:
> Hello Alan,
>
> This is my current setup:
>
> Xen DomU
>
> DRBD
>
> LVM Volume
>
> RAID 1
>
>
> What I first wanted to do was:
>
> DomU | DRBD
>
> LVM Volume
>
> RAID 1
>
>
If I understand your diagram, y
On Thu, 2009-08-20 at 09:38 -0600, Alan Sparks wrote:
> Ross Walker wrote:
> > On Aug 20, 2009, at 10:22 AM, Coert Waagmeester wrote:
> >
> >> Hello all,
> >>
> >>
> >> I am running drbd protocol A to a secondary machine to have
> >> 'backups' of
> >> my xen domUs.
> >>
> >> Is it neces
Ross Walker wrote:
> On Aug 20, 2009, at 10:22 AM, Coert Waagmeester wrote:
>
>> Hello all,
>>
>>
>> I am running drbd protocol A to a secondary machine to have
>> 'backups' of
>> my xen domUs.
>>
>> Is it necessary to change the xen domains configs to use /dev/drbd*
>> instead of the LVM
On Aug 20, 2009, at 10:22 AM, Coert Waagmeester wrote:
> Hello all,
>
>
> I am running drbd protocol A to a secondary machine to have
> 'backups' of
> my xen domUs.
>
> Is it necessary to change the xen domains configs to use /dev/drbd*
> instead of the LVM volume that drbd mirrors, and which t
Hello all,
I am running drbd protocol A to a secondary machine to have 'backups' of
my xen domUs.
Is it necessary to change the xen domains' configs to use /dev/drbd*
instead of the LVM volume that drbd mirrors, and which the xen domU runs
off of?
regards,
Coert
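As far as I understand it, writes that go straight to the backing LV bypass
DRBD entirely, so the domU should point at the drbd device. A sketch of the
disk line, with the device and guest disk names purely illustrative:

  disk = [ 'phy:/dev/drbd5,xvda,w' ]

DRBD also ships a Xen block helper script that lets you write
'drbd:resourcename' instead and have the resource promoted automatically when
the domU starts, if that script is installed on the host.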
On Jul 29, 2009, at 2:30 PM, "Andrea Dell'Amico"
wrote:
> On Wed, 2009-07-29 at 16:16 +0200, Andrea Dell'Amico wrote:
>> On Wed, 2009-07-29 at 09:55 -0400, Ross Walker wrote:
>
>> I'm pretty sure the crash is DRBD related: until the secondary drbd
>> server is detached, all is working well. Th
On Wed, 2009-07-29 at 16:16 +0200, Andrea Dell'Amico wrote:
> On Wed, 2009-07-29 at 09:55 -0400, Ross Walker wrote:
> I'm pretty sure the crash is DRBD related: until the secondary drbd
> server is detached, all is working well. There are 23 guests running,
> right now, some of them paravirtualize
On Wed, 2009-07-29 at 09:55 -0400, Ross Walker wrote:
> I read on another forum how a user using iSCSI for domUs was
> experiencing network hangs due to the fact that dom0 didn't have
> enough scheduler credits to handle the network throughput. That might
> be related.
>
> http://lists.cent
On Jul 29, 2009, at 7:52 AM, "Andrea Dell'Amico"
wrote:
> On Tue, 2009-07-28 at 14:31 -0400, William L. Maltby wrote:
>
>>> When the two hosts are in sync, if I activate more than a few (six
>>> or
>>> seven) xen guests, the master server crashes spectacularly and
>>> reboots.
>>>
>>> I've
On Tue, 2009-07-28 at 14:31 -0400, William L. Maltby wrote:
> > When the two hosts are in sync, if I activate more than a few (six or
> > seven) xen guests, the master server crashes spectacularly and reboots.
> >
> > I've seen a kernel dump over the serial console, but the machine
> > restarts i
July 27, 2009 10:30 AM
Subject: Re: [CentOS] DRBD very slow
>
> On Mon, 2009-07-27 at 10:18 +0400, Roman Savelyev wrote:
>> > Invest in a HW RAID card with NVRAM cache that will negate the need
>> > for barrier writes from the OS as the controller will issue them async
On Tue, 2009-07-28 at 20:11 +0200, Andrea Dell'Amico wrote:
> Hello,
> I have a couple of Dell 2950 III, both of them with CentOS 5.3, Xen,
> drbd 8.2 and cluster suite.
> Hardware: 32GB RAM, RAID 5 with 6 SAS disks (one hot spare) on a PERC/6
> controller.
>
> I configured DRBD to use the main n
Hello,
I have a couple of Dell 2950 III, both of them with CentOS 5.3, Xen,
drbd 8.2 and cluster suite.
Hardware: 32GB RAM, RAID 5 with 6 SAS disks (one hot spare) on a PERC/6
controller.
I configured DRBD to use the main network interfaces (bnx2 driver), with
bonding and crossover cables to have
On Mon, 2009-07-27 at 18:18 -0400, Ross Walker wrote:
> On Jul 27, 2009, at 4:09 PM, Coert Waagmeester
> wrote:
>
>
>
>
> >
> > On Mon, 2009-07-27 at 12:37 +0200, Coert Waagmeester wrote:
> > > On Mon, 2009-07-27 at 12:02 +0200, Alexander Dalloz wrote:
> > > > >
> > > > > On Mon, 2009-07-27
On Jul 27, 2009, at 4:09 PM, Coert Waagmeester wrote:
On Mon, 2009-07-27 at 12:37 +0200, Coert Waagmeester wrote:
On Mon, 2009-07-27 at 12:02 +0200, Alexander Dalloz wrote:
On Mon, 2009-07-27 at 08:30 +0200, Coert Waagmeester wrote:
Hello Roman,
I am running drbd 8.2.6 (the standard ce
On Mon, 2009-07-27 at 12:37 +0200, Coert Waagmeester wrote:
> On Mon, 2009-07-27 at 12:02 +0200, Alexander Dalloz wrote:
> > >
> > > On Mon, 2009-07-27 at 08:30 +0200, Coert Waagmeester wrote:
> >
> > >> Hello Roman,
> > >>
> > >> I am running drbd 8.2.6 (the standard centos version)
> >
> >
>
On Mon, 2009-07-27 at 12:02 +0200, Alexander Dalloz wrote:
> >
> > On Mon, 2009-07-27 at 08:30 +0200, Coert Waagmeester wrote:
>
> >> Hello Roman,
> >>
> >> I am running drbd 8.2.6 (the standard centos version)
>
>
> Hi,
>
> have you considered to test the drbd-8.3 packages?
>
> http://bugs.c
>
> On Mon, 2009-07-27 at 08:30 +0200, Coert Waagmeester wrote:
>> Hello Roman,
>>
>> I am running drbd 8.2.6 (the standard centos version)
Hi,
have you considered to test the drbd-8.3 packages?
http://bugs.centos.org/view.php?id=3598
http://dev.centos.org/centos/5/testing/{i386,x86_64}/RPMS/
> On google I found the following page:
> http://www.nabble.com/Huge-latency-issue-with-8.2.6-td18947965.html
>
> I found the sndbuf-size option in the drbdsetup(8) man page, and I
> will try setting this.
>
> On the nabble page they talk about the TCP_NODELAY and TCP_QUICKACK
> socket optio
On Mon, 2009-07-27 at 08:30 +0200, Coert Waagmeester wrote:
> On Mon, 2009-07-27 at 10:18 +0400, Roman Savelyev wrote:
> > > Invest in a HW RAID card with NVRAM cache that will negate the need
> > > for barrier writes from the OS as the controller will issue them async
> > > from cache allowing I/
On Mon, 2009-07-27 at 10:18 +0400, Roman Savelyev wrote:
> > Invest in a HW RAID card with NVRAM cache that will negate the need
> > for barrier writes from the OS as the controller will issue them async
> > from cache allowing I/O to continue flowing. This really is the safest
> > method.
> It's
> Invest in a HW RAID card with NVRAM cache that will negate the need
> for barrier writes from the OS as the controller will issue them async
> from cache allowing I/O to continue flowing. This really is the safest
> method.
It's a better way. But the socket options in DRBD up to 8.2 (Nagle algorithm)
On Fri, 2009-07-24 at 09:27 -0400, Ross Walker wrote:
> On Jul 24, 2009, at 3:28 AM, Coert Waagmeester wrote:
>
> >
> > On Fri, 2009-07-24 at 10:21 +0400, Roman Savelyev wrote:
>> 1. You are hit by the Nagle algorithm (slow TCP response). You can
> >> build DRBD
> >> 8.3. In 8.3 "TCP_NODELAY"
On Jul 24, 2009, at 3:28 AM, Coert Waagmeester wrote:
>
> On Fri, 2009-07-24 at 10:21 +0400, Roman Savelyev wrote:
>> 1. You are hit by the Nagle algorithm (slow TCP response). You can
>> build DRBD
>> 8.3. In 8.3 "TCP_NODELAY" and "QUICK_RESPONSE" implemented in place.
>> 2. You are hit by DRBD pr
> I have googled the triple barriers thing but can't find that much
> information.
Please refer to drbdsetup(8) for a detailed description of the parameters:
no-disk-barrier, no-disk-flushes, no-disk-drain, no-md-flushes
> Would it help if I used IPv6 instead of IPv4?
No.
And small transaction must
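For reference, the 8.3-style way to drop the redundant write-ordering methods;
this is only safe when the controller really has a battery- or flash-backed
write cache, as discussed earlier in the thread (resource name is a
placeholder):

  resource r0 {
    disk {
      no-disk-barrier;   # disable barriers; DRBD falls back to flushes
      no-disk-flushes;   # disable flushes too - only do this with a
      no-md-flushes;     # non-volatile write cache on the controller
    }
  }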
On Fri, 2009-07-24 at 10:21 +0400, Roman Savelyev wrote:
> 1. You are hit by the Nagle algorithm (slow TCP response). You can build DRBD
> 8.3. In 8.3, "TCP_NODELAY" and "QUICK_RESPONSE" are implemented in place.
> 2. You are hit by the DRBD protocol. In most cases, "B" is enough.
> 3. You are hit by triple
1. You are hit by the Nagle algorithm (slow TCP response). You can build DRBD
8.3. In 8.3, "TCP_NODELAY" and "QUICK_RESPONSE" are implemented in place.
2. You are hit by the DRBD protocol. In most cases, "B" is enough.
3. You are hit by triple barriers. In most cases you need only one of
"barrier, flush,
On Wed, 2009-07-22 at 18:16 -0700, Ian Forde wrote:
> On Wed, 2009-07-22 at 11:16 +0200, Coert Waagmeester wrote:
> > The highest speed I can get through that link with drbd is 11 MB/sec
> > (megabytes)
>
> Not good...
>
> > But if I copy a 1 gig file over that link I get 110 MB/sec.
>
> That t
Hello all,
For completeness here is my current setup:
host1:
Xeon Quad-Core
8GB RAM
Centos 5.3 64bit
2x 1TB seagate sata disks in software raid level 1
LVM on top of the raid for the dom0 root fs and for all domU root filesystems
host2:
Xeon Dual-Core
8GB RAM
Centos 5.3 64bit
2x 1TB seagate sata disks i
On Wed, 2009-07-22 at 11:16 +0200, Coert Waagmeester wrote:
> The highest speed I can get through that link with drbd is 11 MB/sec
> (megabytes)
Not good...
> But if I copy a 1 gig file over that link I get 110 MB/sec.
That tells me that the network connection is fine. The issue is at a
higher
On Jul 22, 2009, at 9:57 AM, Ross Walker wrote:
> On Jul 22, 2009, at 5:59 AM, Coert Waagmeester wrote:
>
>>
>> I am reading up on this on the internet as well, but all the tcp
>> settings and disk settings make me slightly nervous...
>
> Just get it going without those tuning options, run som
On Jul 22, 2009, at 5:59 AM, Coert Waagmeester wrote:
>
> I am reading up on this on the internet as well, but all the tcp
> settings and disk settings make me slightly nervous...
> Just get it going without those tuning options, run some benchmarks
on it, see where it is not performing well, l
On Jul 22, 2009, at 5:16 AM, Coert Waagmeester wrote:
> Hello all,
>
> we have a new setup with xen on centos5.3
>
> I run drbd from lvm volumes to mirror data between the two servers.
>
> both servers are 1U nec rack mounts with 8GB RAM, 2x mirrored 1TB
> seagate satas.
>
> The one is a dual cor
On Wed, 2009-07-22 at 11:16 +0200, Coert Waagmeester wrote:
> Hello all,
>
> we have a new setup with xen on centos5.3
>
> I run drbd from lvm volumes to mirror data between the two servers.
>
> both servers are 1U nec rack mounts with 8GB RAM, 2x mirrored 1TB
> seagate satas.
>
> The one is a
Hello all,
we have a new setup with xen on centos5.3
I run drbd from lvm volumes to mirror data between the two servers.
both servers are 1U nec rack mounts with 8GB RAM, 2x mirrored 1TB
seagate satas.
The one is a dual core xeon, and the other a quad-core xeon.
I have a gigabit crossover link
On Thu, 2009-06-25 at 11:42 +0200, Kris Buytaert wrote:
> > Use a serial console, attach that to some "monitoring" host.
> > (you can use USB-to-Serial, they are cheap and work), and log
> > on that one. You'll get the last messages from there.
> >
> I had indeed hoped to see some output on the
2009/6/5 Giuseppe Fuggiano :
> Hi list.
>
> I am dealing with DRBD (+GFS as its DLM). GFS configuration needs a
> CLVMD configuration. So, after synchronizing my (two) /dev/drbd0 block
> devices, I start the clvmd service and try to create a clustered
> logical volume. I get this:
...snip...
> Wha
Hi list.
I am dealing with DRBD (+GFS as its DLM). GFS configuration needs a
CLVMD configuration. So, after synchronizing my (two) /dev/drbd0 block
devices, I start the clvmd service and try to create a clustered
logical volume. I get this:
On "alice":
[root@alice ~]# pvcreate /dev/drbd0
Phys
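For context, the usual shape of these clustered-LVM steps; the VG/LV names,
cluster name and journal count below are placeholders, and this assumes clvmd
is running on both nodes, locking_type = 3 is set in lvm.conf, and the DRBD
resource is in dual-primary mode:

  pvcreate /dev/drbd0
  vgcreate -c y vg_drbd /dev/drbd0          # -c y marks the VG as clustered
  lvcreate -n lv_gfs -l 100%FREE vg_drbd
  mkfs.gfs2 -p lock_dlm -t mycluster:gfs0 -j 2 /dev/vg_drbd/lv_gfs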