Hi!
On 19.05.2011 21:16, Digimer wrote:
As Felix stated, try 10M. If it gets up to that speed (and it can take
a while, be patient), then bump it to 20M, etc.
I tried syncing with 10M last night and indeed it became faster than
before and was around 10MB/s. Using "drbdsetup /dev/drbd0 syncer -r 20M"
and so on I could increase the sync speed up to 70M ... then it
interrupted and a new bitmap check started.
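The stepping approach described here can be sketched as a small loop. This is a hedged sketch (the device name /dev/drbd0 comes from the thread; the rate steps are examples), with the commands echoed rather than executed so nothing is changed by accident:

```shell
# Step the resync rate up gradually; on a real node, drop the "echo",
# watch /proc/drbd between steps, and wait for the rate to settle.
for rate in 10M 20M 40M 70M; do
  echo drbdsetup /dev/drbd0 syncer -r "$rate"
done
```

Note that `syncer -r` only changes the rate until the next resource reconfiguration; to make it permanent, set `rate` in the `syncer` section of drbd.conf.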
On Fri, May 20, 2011 at 10:12:29AM +0200, Daniel Meszaros wrote:
> Hi!
>
> On 19.05.2011 21:16, Digimer wrote:
> >As Felix stated, try 10M. If it gets up to that speed (and it can take
> >a while, be patient), then bump it to 20M, etc.
>
> I tried syncing with 10M last night and indeed it became faster than
> before and was around 10MB/s.
On 05/20/2011 10:12 AM, Daniel Meszaros wrote:
> I tried syncing with 10M last night and indeed it became faster than
> before and was around 10MB/s. Using "drbdsetup /dev/drbd0 syncer -r 20M"
> and so on I could increase the sync speed up to 70M ... then it
> interrupted and a new bitmap check started.
On 20.05.2011 10:23, Felix Frank wrote:
On 05/20/2011 10:12 AM, Daniel Meszaros wrote:
I tried syncing with 10M last night and indeed it became faster than
before and was around 10MB/s. Using "drbdsetup /dev/drbd0 syncer -r 20M"
and so on I could increase the sync speed up to 70M ... then it
interrupted and a new bitmap check started.
Hi,
We are trying to configure a DRBD resource as a Physical Volume, using
examples from the following document.
http://www.drbd.org/users-guide/s-lvm-drbd-as-pv.html
DRBD works well, but there are some performance problems.
We run "dd" like this:
# dd if=/dev/zero of=/mnt/dd/10GB.dat bs=1M count=
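A hedged version of such a throughput test, runnable anywhere (the path and size here are placeholders; point TARGET at a file on the DRBD-backed mount, e.g. /mnt/dd, for a real measurement):

```shell
# Sequential write test; conv=fsync makes dd flush before reporting,
# so the summary line reflects real disk (and replication) throughput.
TARGET=/tmp/ddtest.dat
dd if=/dev/zero of="$TARGET" bs=1M count=16 conv=fsync 2>&1 | tail -1
# where the filesystem supports it, oflag=direct additionally bypasses
# the page cache
```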
On Fri, May 20, 2011 at 05:32:52PM +0900, Junko IKEDA wrote:
> Hi,
>
> We are trying to configure a DRBD resource as a Physical Volume, using
> examples from the following document.
> http://www.drbd.org/users-guide/s-lvm-drbd-as-pv.html
>
> DRBD works well, but there are some performance problems
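The users-guide page referenced above also recommends adjusting LVM's device filter so that PV signatures are detected on the DRBD device rather than on its backing disk. A minimal lvm.conf sketch (the backing device name is an assumption):

```
# /etc/lvm/lvm.conf: accept the DRBD device, reject its backing disk
filter = [ "a|^/dev/drbd0$|", "r|^/dev/sda7$|" ]
```

After editing the filter, regenerate the LVM cache (e.g. with vgscan) so stale entries for the backing device are dropped.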
Hi,
I have a question that is not directly related to DRBD itself, but rather
to the performance of DRBD on hardware RAID.
This is our configuration:
2 HP machines with Ubuntu Linux (LTS)
HW RAID5 configured on 4 disks (DG146BB976 = 146.8GB 10k SAS) on each of the
two machines
LVM on that HW RAID with 3 logical
Hi Jan !
This is a tough question. It depends on many parameters, starting with the
size of the requested IOs. There are a couple of good articles on the net
talking about that, including these :
- 2 from Scott Lowe, which are fairly clear and interesting from the
IOPS estimation method
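As a worked example of the kind of back-of-the-envelope estimate those articles make (the 3 ms average seek time is an assumed figure for a 10k SAS drive; rotational latency is half a revolution):

```shell
# Rough per-disk random IOPS: 1 / (avg seek + avg rotational latency)
iops=$(awk 'BEGIN { seek_ms = 3.0; rot_ms = 60000.0 / 10000 / 2;
                    printf "%d", 1000 / (seek_ms + rot_ms) }')
echo "about $iops random IOPS per 10k SAS disk"
```

For the 4-disk RAID5 set described earlier, remember that every random write costs extra read/modify/write operations, so usable write IOPS are well below four times this figure.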
Quoting Alex Kuehne :
Quoting Lars Ellenberg :
On Tue, Jan 11, 2011 at 04:58:46PM +0100, Alex Kuehne wrote:
Quoting Lars Ellenberg :
On Mon, Jan 10, 2011 at 04:07:12PM +0100, Alex Kuehne wrote:
Quoting Alex Kuehne :
Hi guys,
This is another report of DRBD not working with Xenserver 5.6
On 20.05.2011 10:22, Lars Ellenberg wrote:
According to you, it "had been working" before with the exact same
hardware and configuration.
Now if it does not anymore, then I strongly suspect network problems.
Did you do some network benchmarks on the replication link recently?
Packet loss, exc
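A hedged sketch of such a benchmark run (PEER is a placeholder for the peer's replication address; the commands are echoed here rather than executed against a live link):

```shell
PEER=192.168.1.2
echo ping -c 100 -i 0.2 "$PEER"   # check the "packet loss" summary line
echo iperf -s                     # start on the peer first
echo iperf -c "$PEER" -t 30      # then measure sustained TCP throughput
```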
Mikael Andersson wrote:
>> We're running v8.3.7, though this cluster has been upgraded to 8.3.10 for a
>> short while and then downgraded back, but the resources were created with
>> 8.3.7.
> Why did you downgrade? Any problems with 8.3.10?
Not really: I was test-driving
Hi,
> DRBD version, LVM version, Device mapper version (kernel version),
> distribution?
DRBD version:
8.3.10 (we noticed this with 8.3.5 at first and updated DRBD after that)
LVM version:
LVM2 (included in RHEL 5.2)
kernel version:
2.6.18-92.el5 (x86_64)
distribution:
RHEL 5.2
> What about oflag=d
Andreas Hofmeister wrote:
On 17.05.2011 18:19, Herman wrote:
I made a change to IPTables, and did a "service iptables restart", and
next thing I knew, I had a split brain.
I would guess that the RHEL FW setup flushes the connection tracking
tables and has a default drop (or reject) rule.
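One hedged way to keep the replication connection alive across a firewall restart is to accept established traffic and the DRBD port explicitly ahead of any default reject rule. A sketch in /etc/sysconfig/iptables format (7788 is DRBD's conventional port for the first resource; adjust to the ports in your resource definitions):

```
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp --dport 7788 -j ACCEPT
```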
Hi Felix,
At 12:38 AM 5/19/2011, you wrote:
On 05/18/2011 10:02 PM, Richard Stockton wrote:
>> Is NFS mounted sync or async?
>
> NFS is mounted "sync" (NFS3 default, I believe).
This is very bad. Do use async; it's not as asynchronous as the name
implies (except if you run applications that rel
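For reference, the mount-side option being discussed looks like this in fstab; a sketch with placeholder server name and paths, noting that sync/async also exists separately as an export option in the server's /etc/exports:

```
# /etc/fstab on the NFS client (server name and paths are placeholders)
nfsserver:/export/data  /mnt/data  nfs  rw,async  0 0
```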