On Sun, Jun 10, 2012 at 4:13 PM, Jake Smith wrote:
> I ran into that (scheduler change) also after upgrading. I only accidentally
> stumbled onto that fact. I wish Ubuntu had made it clearer that dropping the
> separate server kernel had more implications than just the kernel itself!
It's correct t
Subject: [Pacemaker] DRBD < LVM < EXT4 < NFS performance
Date: Sun, Jun 10, 2012 8:59 am
Hi,
we have not solved the performance issue yet, but we could improve the
responsiveness of the system. We no longer get timeouts and have re-enabled
pacemaker.
The problem that led to an unresponsive system was that Ubuntu 12.04 LTS
uses the cfq I/O scheduler by default. Ubuntu 10.04 LTS used t
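For reference, the scheduler switch described above can be sketched as below. This is a generic sketch, not commands from the thread; the device name in the comments and the grub file path are assumptions for a stock Ubuntu 12.04 machine.

```shell
# List the active I/O scheduler for every block device; the bracketed
# entry on each line is the scheduler currently in use.
for f in /sys/block/*/queue/scheduler; do
  [ -e "$f" ] || continue
  printf '%s: %s\n' "${f%/queue/scheduler}" "$(cat "$f")"
done

# Switching to deadline at runtime (needs root, not persistent across reboots):
#   echo deadline > /sys/block/sda/queue/scheduler
# Making it persistent via the kernel command line in /etc/default/grub:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash elevator=deadline"
# followed by: update-grub
```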
> Dedicated replication link?
>
> Maybe the additional latency is all that kills you.
> Do you have non-volatile write cache on your IO backend?
> Did you post your drbd configuration settings already?
There is a dedicated 10GB Ethernet replication link between both nodes.
There is also a cache
On Thu, May 24, 2012 at 03:34:51PM +0300, Dan Frincu wrote:
> Hi,
>
> On Mon, May 21, 2012 at 4:24 PM, Christoph Bartoschek
> wrote:
> > Florian Haas wrote:
> >
> >>> Thus I would expect to have a write performance of about 100 MByte/s. But
> >>> dd gives me only 20 MByte/s.
> >>>
> >>> dd if=/d
Hi,
On Mon, May 21, 2012 at 4:24 PM, Christoph Bartoschek wrote:
> Florian Haas wrote:
>
>>> Thus I would expect to have a write performance of about 100 MByte/s. But
>>> dd gives me only 20 MByte/s.
>>>
>>> dd if=/dev/zero of=bigfile.10G bs=8192 count=1310720
>>> 1310720+0 records in
>>> 131072
Florian Haas wrote:
>> Thus I would expect to have a write performance of about 100 MByte/s. But
>> dd gives me only 20 MByte/s.
>>
>> dd if=/dev/zero of=bigfile.10G bs=8192 count=1310720
>> 1310720+0 records in
>> 1310720+0 records out
>> 10737418240 bytes (11 GB) copied, 498.26 s, 21.5 MB/s
>
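One caveat with dd figures like the ones quoted above: without a sync flag, dd can report the speed of the page cache rather than what actually reaches the device. A hedged variant (file names and sizes are illustrative, smaller than the 10G test in the thread) that forces data to stable storage before reporting a rate:

```shell
# conv=fdatasync makes dd fsync the file before printing the rate, so the
# figure includes the whole NFS/ext4/LVM/DRBD write path, not just RAM.
dd if=/dev/zero of=bigfile.sync bs=1M count=256 conv=fdatasync

# oflag=direct bypasses the client page cache entirely; comparing the two
# numbers helps separate caching effects from the storage stack itself.
dd if=/dev/zero of=bigfile.direct bs=1M count=256 oflag=direct
```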
On Sun, May 20, 2012 at 12:05 PM, Christoph Bartoschek
wrote:
> Hi,
>
> > we have a two node setup with drbd below LVM and an Ext4 filesystem that is
> > shared via NFS. The system shows low performance and lots of timeouts
> > resulting in unnecessary failovers from pacemaker.
>
> The connection between
Raoul Bhatia [IPAX] wrote:
> I haven't seen such an issue during my current tests.
>
>> Is ext4 unsuitable for such a setup? Or is the linux nfs3 implementation
>> broken? Are the buffers so large that one has to wait too long for a
>> flush?
>
> Maybe I'll have the time to switch from xfs to ex
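The buffering question quoted above often comes down to how the NFS server exports the filesystem. A hypothetical /etc/exports line for illustration (the path and network are invented, not from the thread):

```
# /etc/exports -- 'sync' makes the server commit each write to stable
# storage before replying to the client (safer across a failover, slower);
# 'async' lets the server buffer writes (faster, but risks data loss if
# the node fails before flushing).
/srv/nfs  192.168.1.0/24(rw,sync,no_subtree_check)
```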
On 2012-05-20 12:05, Christoph Bartoschek wrote:
Hi,
we have a two node setup with drbd below LVM and an Ext4 filesystem that is
shared via NFS. The system shows low performance and lots of timeouts
resulting in unnecessary failovers from pacemaker.
The connection between both nodes is capable o
That's normal: a setup without LVM, EXT4 and NFS works fine because you don't
have the three extra layers.
2012/5/20 Christoph Bartoschek
> emmanuel segura wrote:
>
> > Hello Christoph
> >
> > For tuning drbd you can look at this link
> >
> > http://www.drbd.org/users-guide/s-latency-tuning.html
> >
>
emmanuel segura wrote:
> Hello Christoph
>
> For tuning drbd you can look at this link
>
> http://www.drbd.org/users-guide/s-latency-tuning.html
>
Hi,
I do not have the impression that drbd is the problem here because a similar
setup without LVM, EXT4 and NFS above it works fine.
Hello Christoph
For tuning drbd you can look at this link
http://www.drbd.org/users-guide/s-latency-tuning.html
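As a rough idea of the knobs that guide covers, here is a hypothetical drbd.conf fragment in DRBD 8.3-era syntax. The values are illustrative, not recommendations from this thread, and the no-disk-* options are only safe when the I/O backend has a non-volatile (battery-backed) write cache:

```
resource r0 {
  net {
    sndbuf-size    512k;   # larger send buffer for a fast dedicated link
    max-buffers    8000;   # more in-flight buffer pages
    max-epoch-size 8000;   # more write requests per reorder domain
  }
  disk {
    no-disk-barrier;       # only with battery-backed write cache!
    no-disk-flushes;       # ditto -- otherwise risks data loss on power failure
  }
}
```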
2012/5/20 Christoph Bartoschek
> Hi,
>
> we have a two node setup with drbd below LVM and an Ext4 filesystem that is
> shared via NFS. The system shows low performance and l
Hi,
we have a two node setup with drbd below LVM and an Ext4 filesystem that is
shared via NFS. The system shows low performance and lots of timeouts
resulting in unnecessary failovers from pacemaker.
The connection between both nodes is capable of 1 GByte/s as shown by iperf.
The network betwe