On Fri, 4 Jun 2010, Sandon Van Ness wrote:
The problem is that just using rsync I am not getting gigabit. For me
gigabit maxes out at around 930-940 megabits. When I use rsync alone I
was only getting around 720 megabits incoming. This is only when it's
reading from the block device. When
On 06/05/2010 01:08 PM, Bob Friesenhahn wrote:
On Fri, 4 Jun 2010, Sandon Van Ness wrote:
The problem is that just using rsync I am not getting gigabit. For me
gigabit maxes out at around 930-940 megabits. When I use rsync alone I
was only getting around 720 megabits incoming. This
On 06/01/2010 07:57 AM, Bob Friesenhahn wrote:
On Mon, 31 May 2010, Sandon Van Ness wrote:
With sequential writes I don't see how parity writing would be any
different from when I just created a 20 disk zpool which is doing the
same writes every 5 seconds but the only difference is it isn't
On Fri, 4 Jun 2010, Sandon Van Ness wrote:
Interestingly enough, when I went to copy the data back I got even worse
download speeds than I did write speeds! It looks like I need some sort
of read-ahead, as unlike the writes it doesn't appear to be CPU bound as
using mbuffer/tar gives me full
On 06/04/2010 06:15 PM, Bob Friesenhahn wrote:
On Fri, 4 Jun 2010, Sandon Van Ness wrote:
Interestingly enough, when I went to copy the data back I got even worse
download speeds than I did write speeds! It looks like I need some sort
of read-ahead, as unlike the writes it doesn't appear to be
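The read-ahead being asked for here existed as file-level prefetch in ZFS of this era; one thing worth checking is whether it has been disabled. This is a sketch against a live OpenSolaris kernel, assuming the era-appropriate tunable name `zfs_prefetch_disable` (verify it on your build before writing to kernel memory):

```shell
# Print the current value of the ZFS prefetch kill switch (0 = prefetch enabled).
echo "zfs_prefetch_disable/D" | mdb -k
# To toggle it for an experiment (requires -w, write mode):
#   echo "zfs_prefetch_disable/W 1" | mdb -kw
```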
On Mon, 31 May 2010, Sandon Van Ness wrote:
With sequential writes I don't see how parity writing would be any
different from when I just created a 20 disk zpool which is doing the
same writes every 5 seconds but the only difference is it isn't maxing
out CPU usage when doing the writes and
On Sun, 30 May 2010, Sandon Van Ness wrote:
The problem is that when it does the write burst it's taking away CPU
usage from rsync, which is actually what might be causing the dip during
writes (not the I/O activity itself) but the CPU generated from the writes.
You didn't say which Solaris you
On 05/31/2010 01:13 PM, Bob Friesenhahn wrote:
On Sun, 30 May 2010, Sandon Van Ness wrote:
The problem is that when it does the write burst it's taking away CPU
usage from rsync, which is actually what might be causing the dip during
writes (not the I/O activity itself) but the CPU generated
On Mon, 31 May 2010, Sandon Van Ness wrote:
6586537 async zio taskqs can block out userland commands
Bob
I am using OpenSolaris snv_134:
r...@opensolaris: 01:32 PM :~# uname -a
SunOS opensolaris 5.11 snv_134 i86pc i386 i86pc
Is there a setting to change the CPU scheduler for the ZFS
On 05/31/2010 01:51 PM, Bob Friesenhahn wrote:
There are multiple factors at work. Your OpenSolaris should be new
enough to have the fix in which the zfs I/O tasks are run in a
scheduling class at lower priority than normal user processes.
However, there is also a throttling mechanism for
On Mon, May 31, 2010 at 4:32 PM, Sandon Van Ness san...@van-ness.com wrote:
On 05/31/2010 01:51 PM, Bob Friesenhahn wrote:
There are multiple factors at work. Your OpenSolaris should be new
enough to have the fix in which the zfs I/O tasks are run in a
scheduling class at lower priority
On 05/31/2010 02:32 PM, Sandon Van Ness wrote:
Well, it seems like when messing with the txg sync times and stuff like
that it did make the transfer smoother but didn't actually help with
speeds, as it just meant the hangs happened for a shorter time but at a
smaller interval, and actually
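The "txg sync times" knobs being tuned here were, on builds of this era, exposed as kernel tunables. A minimal sketch of the persistent form as an /etc/system config fragment, assuming the era-appropriate tunable name `zfs_txg_timeout` (defaults varied by build; verify against yours before applying):

```shell
# /etc/system fragment: cap the interval between txg commits at 5 seconds.
# Takes effect after reboot. Shorter intervals make commits smaller and
# more frequent, which smooths the stalls without raising total throughput.
set zfs:zfs_txg_timeout = 5
```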
On 05/31/2010 02:52 PM, Mike Gerdts wrote:
On Mon, May 31, 2010 at 4:32 PM, Sandon Van Ness san...@van-ness.com wrote:
On 05/31/2010 01:51 PM, Bob Friesenhahn wrote:
There are multiple factors at work. Your OpenSolaris should be new
enough to have the fix in which the zfs I/O tasks
On 05/31/2010 01:13 PM, Bob Friesenhahn wrote:
On Sun, 30 May 2010, Sandon Van Ness wrote:
The problem is that when it does the write burst it's taking away CPU
usage from rsync, which is actually what might be causing the dip during
writes (not the I/O activity itself) but the CPU generated
Sorry, turned on html mode to avoid gmail's line wrapping.
On Mon, May 31, 2010 at 4:58 PM, Sandon Van Ness san...@van-ness.com wrote:
On 05/31/2010 02:52 PM, Mike Gerdts wrote:
On Mon, May 31, 2010 at 4:32 PM, Sandon Van Ness san...@van-ness.com
wrote:
On 05/31/2010 01:51 PM, Bob
On Mon, 31 May 2010, Sandon Van Ness wrote:
I think I have come to the conclusion that the problem here is CPU, due
to the fact that it's only doing this with parity raid. I would think if
it was I/O based then it would be the same, as if anything it's heavier on
I/O on non-parity raid due to the
On 05/31/2010 04:45 PM, Bob Friesenhahn wrote:
On Mon, 31 May 2010, Sandon Van Ness wrote:
I think I have come to the conclusion that the problem here is CPU, due
to the fact that it's only doing this with parity raid. I would think if
it was I/O based then it would be the same, as if anything
Basically for a few seconds at a time I can get very nice speeds through
rsync (saturating a 1 gig link) which is around 112-113 megabytes/sec
which is about as good as I can expect after overhead. The problem is
that every 5 seconds when data is actually written to disks (physically
looking at
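The 5-second burst pattern described above is the txg commit cycle, and it can be watched directly from another terminal while the transfer runs; a minimal sketch ('tank' is a placeholder pool name):

```shell
# Print pool-wide bandwidth once per second. During a bursty transfer the
# write column shows near-zero samples punctuated by large spikes at each
# txg commit, matching the rsync throughput dips described in the thread.
zpool iostat tank 1
```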
On May 30, 2010, at 3:04 PM, Sandon Van Ness wrote:
Basically for a few seconds at a time I can get very nice speeds through
rsync (saturating a 1 gig link) which is around 112-113 megabytes/sec
which is about as good as I can expect after overhead. The problem is
that every 5 seconds when
On 05/30/2010 04:22 PM, Richard Elling wrote:
If you want to decouple the txg commit completely, then you might consider
using a buffer of some sort. I use mbuffer for pipes, but that may be tricky
to use in an rsync environment.
-- richard
I initially thought this was I/O but now I
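Richard's mbuffer suggestion, sketched for the tar-over-network case rather than rsync (hostname, port, and paths are placeholders; `-s` and `-m` are mbuffer's block-size and total-buffer-size options, `-O`/`-I` its network output/input modes):

```shell
# Sender: stream a tree as tar into mbuffer, which feeds the network.
tar -cf - /data | mbuffer -s 128k -m 1G -O receiver:9090

# Receiver: a 1 GiB RAM buffer absorbs the stalls during txg commits
# while tar unpacks into the pool.
mbuffer -s 128k -m 1G -I 9090 | tar -xf - -C /tank/data
```

The buffer sits between the network and the pool, so the sending side can keep the gigabit link saturated even while the receiver's writes stall for a commit.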