Re: [zfs-discuss] Data transfer taking a longer time than expected (Possibly dedup related)

2010-10-04 Thread Ville Ojamo
The article is probably correct. In my experience, and from other posts in the
archives, dedup really needs the RAM, and preferably an L2ARC device as well.
As someone else put it, "home servers need not apply".

I would also point out the "very slow dataset destroy" caveat, depending on
which build you are using. At least up to and including b134, destroying a
dataset that has at some point had dedup turned on, on a low-memory system,
results in a _very_ lengthy operation. Be prepared to give it a long time
(days?). If you absolutely must shut down the system during it, the next
restart will be painfully slow (think a week or so). There have been a few
posts about this, including mine. I might be mistaken, but someone suggested
it might have been due to the dataset destroy running alongside a resilver.


-V
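As a rough illustration (my own back-of-the-envelope numbers, not from the
article): each DDT entry is commonly estimated at around 320 bytes of ARC, so
for a given amount of unique data you can sketch the RAM need like this:

```shell
# Rough DDT memory estimate. Assumes ~320 bytes of ARC per DDT entry
# (a commonly cited figure; the real number varies by build) and a
# 128 KB average block size -- both are assumptions, adjust for your pool.
unique_bytes=$(( 2 * 1024 * 1024 * 1024 * 1024 ))  # 2 TB of unique data
blocksize=$(( 128 * 1024 ))                        # 128 KB average block
entries=$(( unique_bytes / blocksize ))
ram_mb=$(( entries * 320 / 1024 / 1024 ))
echo "DDT entries: $entries"
echo "Approx RAM for DDT: $ram_mb MB"
```

With smaller average block sizes the table grows proportionally, which is why
a 3 GB machine falls badly short for a workload like the one in this thread.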


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Data transfer taking a longer time than expected (Possibly dedup related)

2010-09-28 Thread Tom
Thanks a lot for that. I'm not experienced in reading dtrace output, but I'm
pretty sure dedup was the cause here: disabling it during the transfer
immediately raised the transfer speed to ~100MB/s.

Thanks for the article you linked to; it seems my system would need about
16GB of RAM for dedup to work smoothly in my case...





Re: [zfs-discuss] Data transfer taking a longer time than expected (Possibly dedup related)

2010-09-24 Thread Thomas S.
Thanks, I'm going to do that. I'm just worried about corrupting my data or
running into other problems. I wanted to make sure there is nothing I really
need to be careful with.
-- 
This message posted from opensolaris.org


Re: [zfs-discuss] Data transfer taking a longer time than expected (Possibly dedup related)

2010-09-24 Thread David Blasingame Oracle

How do you know it is dedup causing the problem?

You can check how much time is being spent in dedup by looking at the kernel
threads (look for ddt):

mdb -k

::threadlist -v

or trace it with DTrace, e.g.:

dtrace -n 'fbt:zfs:ddt*:entry'

You can disable dedup.  I believe existing dedup'd data stays deduped until
it gets overwritten.  I'm not sure what zfs send would do, but I would assume
that if dedup is not enabled on the receiving filesystem, the new filesystem
would not contain dedup'd data.

You might also want to read:

http://blogs.sun.com/roch/entry/dedup_performance_considerations1

As far as the impact of interrupting a move operation: when I did a test
moving a file from one filesystem to another and interrupted the operation,
the file was intact on the original filesystem but only partial on the new
one.  So you would have to be careful about which data has already been
copied.


Dave
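A slightly more useful form of that probe (my own sketch, assuming root
access on a system with DTrace and the zfs module loaded; the set of
functions matched by ddt* varies by build) aggregates call counts per
function over a fixed window:

```shell
# Count entries into ZFS dedup-table (ddt) functions for 30 seconds,
# then print a per-function tally. Run as root.
dtrace -n 'fbt:zfs:ddt*:entry { @[probefunc] = count(); } tick-30s { exit(0); }'
```

If the counts are large and climbing while the copy crawls, that is a fair
hint the DDT is in the hot path.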



Re: [zfs-discuss] Data transfer taking a longer time than expected (Possibly dedup related)

2010-09-24 Thread Scott Meilicke
"Can I disable dedup on the dataset while the transfer is going on?"
Yes. Only the blocks copied after disabling dedup will not be deduped; the
data you have already copied will remain deduped.

"Can I simply Ctrl-C the process to stop it?"
Yes, you can do that to a mv process.

Maybe stop the process, delete the deduped filesystem (your copy target), and
create a new filesystem without dedup, to see if that is any better?

Scott
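Concretely, that could look something like the following (a sketch only:
"tank/new" and /tank/temp are hypothetical names standing in for the poster's
actual datasets, and zfs destroy is irreversible, so double-check the target
before running it):

```shell
# 1. Stop the copy: Ctrl-C the mv running in the ssh session.
# 2. Destroy the partially populated, dedup'd target dataset.
zfs destroy tank/new
# 3. Recreate it with dedup off and redo the copy.
zfs create -o dedup=off tank/new
mv /tank/temp/* /tank/new/
```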


[zfs-discuss] Data transfer taking a longer time than expected (Possibly dedup related)

2010-09-24 Thread Thomas S.
Hi all

I'm currently moving a fairly big dataset (~2TB) within the same zpool. Data
is being moved from one dataset to another, which has dedup enabled.

The transfer started at a fairly slow speed, maybe 12MB/s, but it is now
crawling to a near halt: only 800GB has been moved in 48 hours.

I looked for similar problems on the forums and elsewhere, and it seems dedup
needs much more RAM than the server currently has (3GB) to perform smoothly
for such an operation.

My question is: how can I gracefully stop the ongoing operation? What I did
was simply "mv temp/* new/" in an ssh session (which is still open).

Can I disable dedup on the dataset while the transfer is going on? Can I
simply Ctrl-C the process to stop it? Should I be careful of anything?

Help would be appreciated.