On Mon, October 4, 2010 3:27 am, Martin Matuska wrote:
Try using zfs receive with the -v flag (gives you some stats at the end):

# zfs send storage/bacula@transfer | zfs receive -v storage/compressed/bacula

And use the following sysctl (you may set that in /boot/loader.conf, too):

# sysctl vfs.zfs.txg.write_limit_override=805306368

I have good results with this value.
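For reference, 805306368 bytes is 768 MB. To make the setting permanent, the /boot/loader.conf form would be (a sketch, using the same tunable name as above):

# echo 'vfs.zfs.txg.write_limit_override=805306368' >> /boot/loader.conf
# sysctl vfs.zfs.txg.write_limit_override

The second command just echoes the live value back as a check.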
On Sun, Oct 3, 2010 at 6:11 PM, Dan Langille wrote:
> I'm rerunning my test after I had a drive go offline[1]. But I'm not
> getting anything like the previous test:
>
> time zfs send storage/bacula@transfer | mbuffer | zfs receive
> storage/compressed/bacula-buffer
>
> $ zpool iostat 10 10
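(A side note on reading zpool iostat: the first report it prints is an average over the pool's whole uptime; only the subsequent 10-second samples reflect the transfer itself.)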
I've just tested on my box and the loopback interface does not seem to be
the bottleneck. I can easily push ~400MB/s through two instances of
mbuffer.
--Artem
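For anyone who wants to repeat that measurement, something along these lines should work (a sketch based on mbuffer(1); the port number and sizes are arbitrary):

receiver:  mbuffer -s 128k -m 1G -I 9090 > /dev/null
sender:    mbuffer -s 128k -m 1G -O localhost:9090 < /dev/zero

mbuffer reports its in/out rates while running, so the sender's "out" rate is the loopback throughput.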
> As soon as I opened this email I knew what it would say.
>
>
> # time zfs send storage/bacula@transfer | mbuffer | zfs receive
> storage/compressed/bacula-mbuffer
> in @ 197 MB/s, out @ 205 MB/s, 1749 MB total, buffer 0% full
...
> Big difference. :)
I'm glad it helped.
Does anyone know wh
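(That status line is mbuffer's own: in/out transfer rates, total bytes moved, and buffer fill. "Buffer 0% full" means the receiving zfs was draining the buffer as fast as zfs send could fill it.)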
FYI: this is all on the same box.
--
Dan Langille
http://langille.org/
On Fri, Oct 1, 2010 at 3:49 PM, Dan Langille wrote:
> FYI: this is all on the same box.
In one of the previous emails you've used this command line:
> # mbuffer -s 128k -m 1G -I 9090 | zfs receive
You've used mbuffer in network client mode. I assumed that you did your
transfer over the network.
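To spell out the difference: with -I/-O mbuffer moves data over a TCP socket (the receiver listens with -I, the sender connects with -O); with neither flag it is just a big FIFO between two pipeline stages. A sketch of the network form, with receiver-host as a placeholder and dataset names as used in this thread:

receiver:  mbuffer -s 128k -m 1G -I 9090 | zfs receive storage/compressed/bacula
sender:    zfs send storage/bacula@transfer | mbuffer -s 128k -m 1G -O receiver-host:9090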
Hmm. It did help me a lot when I was replicating ~2TB worth of data
over GigE. Without mbuffer things were roughly in the ballpark of your
numbers. With mbuffer I've got around 100MB/s.
Assuming that you have two boxes connected via ethernet, it would be
good to check that nobody generates PAUSE frames.
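(How to check depends on the hardware. Switch port counters are one place to look; on the FreeBSD side some NIC drivers export xon/xoff counters via sysctl. Assuming an em(4) NIC, and noting that the exact sysctl tree varies by driver and release, something like: sysctl dev.em.0.mac_stats | grep -i xoff)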
On Wed, September 29, 2010 2:04 pm, Dan Langille wrote:
> $ zpool iostat 10
>                capacity     operations    bandwidth
> pool         used  avail   read  write   read  write
> ----------  -----  -----  -----  -----  -----  -----
> storage     7.67T  5.02T    358     38  43.1M  1.96M
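(zpool iostat reports physical pool I/O. If storage/compressed has compression enabled, as its name suggests, the write column shows post-compression bytes, so the logical transfer rate is somewhat higher than 1.96M suggests.)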
On Wed, Sep 29, 2010 at 11:04 AM, Dan Langille wrote:
> It's taken about 15 hours to copy 800GB. I'm sure there's some tuning I
> can do.
>
> The system is now running:
>
> # zfs send storage/bacula@transfer | zfs receive storage/compressed/bacula
Try piping zfs data through mbuffer (misc/mbuffer in ports).
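The local form of that suggestion is a plain pipe, with mbuffer soaking up the bursts between send and receive (buffer sizes are illustrative, not tuned):

# zfs send storage/bacula@transfer | mbuffer -s 128k -m 1G | zfs receive storage/compressed/bacula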
It's taken about 15 hours to copy 800GB. I'm sure there's some tuning I
can do.
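(For scale: 800 GB in 15 hours works out to roughly 15 MB/s sustained.)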
The system is now running:
# zfs send storage/bacula@transfer | zfs receive storage/compressed/bacula
All the drives are ATA-8 SATA 2.x devices.
from systat:

 1 users    Load  0.36  0.58  0.57