Re: [zfs-discuss] deleting a link in ZFS

2012-08-29 Thread Murray Cullen

On 12-08-29 12:29 AM, Gregg Wonderly wrote:

On Aug 28, 2012, at 6:01 AM, Murray Cullen themurma...@gmail.com wrote:


I've copied an old home directory from an install of OS 134 to the data pool on 
my OI install. OpenSolaris apparently had wine installed, as I now have a link 
to / in my data pool. I've tried everything I can think of to remove this link, 
with one exception: I have not tried mounting the pool on a different OS yet, 
and I'm trying to avoid that.

Does anyone have any advice or suggestions? unlink and rm error out as root.

What is the error?  Is it permission denied, I/O error, or what?

Gregg

The error from unlink is "not owner", although I am the owner.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] deleting a link in ZFS

2012-08-29 Thread Casper . Dik

On 12-08-29 12:29 AM, Gregg Wonderly wrote:
 On Aug 28, 2012, at 6:01 AM, Murray Cullen themurma...@gmail.com wrote:

 I've copied an old home directory from an install of OS 134 to the data
 pool on my OI install. OpenSolaris apparently had wine installed as I now
 have a link to / in my data pool. I've tried everything I can think of to
 remove this link with one exception. I have not tried mounting the pool on
 a different OS yet, I'm trying to avoid that.

 Does anyone have any advice or suggestions? unlink and rm error out as root.
 What is the error?  Is it permission denied, I/O error, or what?

 Gregg
The error from unlink is "not owner", although I am the owner.


What exactly is the file?  In ZFS you cannot create a hard link to a
directory; so what does the link look like?


Casper

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] deleting a link in ZFS

2012-08-29 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Murray Cullen
 
 I've copied an old home directory from an install of OS 134 to the data
 pool on my OI install. Opensolaris apparently had wine installed as I
 now have a link to / in my data pool. I've tried everything I can think
 of to remove this link with one exception. I have not tried mounting the
 pool on a different OS yet, I'm trying to avoid that.
 
 Does anyone have any advice or suggestions? unlink and rm error out as root.

This doesn't sound like a link to me.  Can you send us the output?
ls -ld /path/to/the_link

zfs list | grep whatever_is_relevant

zfs get mountpoint the_zfs_filesystem
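
If it turns out to be wine's doing, here is a rough triage sketch along the
same lines (the path and pool names are placeholders; wine normally keeps its
link to / at ~/.wine/dosdevices/z:):

P=/datapool/home/olduser/.wine/dosdevices/z:
ls -ldi "$P"       # an 'l' in the mode column plus '-> target' means a symlink
df -h "$P"         # shows which filesystem the entry actually lives on
zfs list -r -o name,mountpoint datapool   # is any dataset mounted at or under it?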

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] deleting a link in ZFS

2012-08-29 Thread Murray Cullen
On 12-08-29 8:30 AM, Edward Ned Harvey 
(opensolarisisdeadlongliveopensolaris) wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Murray Cullen

I've copied an old home directory from an install of OS 134 to the data
pool on my OI install. Opensolaris apparently had wine installed as I
now have a link to / in my data pool. I've tried everything I can think
of to remove this link with one exception. I have not tried mounting the
pool on a different OS yet, I'm trying to avoid that.

Does anyone have any advice or suggestions? unlink and rm error out as root.

This doesn't sound like a link to me.  Can you send us the output?
ls -ld /path/to/the_link

zfs list | grep whatever_is_relevant

zfs get mountpoint the_zfs_filesystem

It now looks like that was a copy of the entire filesystem inside the 
folder in question, perhaps as a result of the copy following links off the 
old drive? Arrgh. Surprising how a night's sleep lets you resolve some issues :)
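
For anyone who hits the same thing later, a cautious cleanup sketch (the path
below is a placeholder): confirm the stray tree is not a live mount or a
nested dataset before removing it.

STRAY=/datapool/home/old-copy
zfs list -r -o name,mountpoint datapool   # nothing should be mounted under $STRAY
df -h "$STRAY"                            # should report the parent dataset, not a separate fs
# only once both checks look sane:
# rm -rf "$STRAY"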


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS test suite

2012-08-29 Thread anoopn80

Hi,

I found a ZFS test suite on the Oracle website, with a lot of useful 
tests that I can run for qualification.


http://hub.opensolaris.org/bin/view/Community+Group+zfs/zfstestsuite

How can I run this test suite on a FreeBSD platform?

Any help would be greatly appreciated.

Thanks
-Anoop
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS test suite

2012-08-29 Thread Vitaliy Gusev

On 08/29/2012 10:44 AM, anoopn80 wrote:

Hi,

I found a ZFS test suite on the oracle website and found a lot of useful
test which I can run for qualification.

http://hub.opensolaris.org/bin/view/Community+Group+zfs/zfstestsuite

How can I run this test suite on a FreeBSD platform?


It was carried on by Delphix. Please read:

http://blog.delphix.com/jkennedy/2012/01/18/resurrecting-the-zfs-test-suite/


Anyway, this updated zfstest suite requires /sbin/zhack, so you will need 
to sort that out for FreeBSD.


---
Vitaliy



Any help would be greatly appreciated.

Thanks
-Anoop



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS snapshot used space question

2012-08-29 Thread Truhn, Chad
All,

I apologize in advance for what appears to be a question asked quite often, but 
I am not sure I have ever seen an answer that explains it.  This may also be a 
bit long-winded so I apologize for that as well.

I would like to know how much unique space each individual snapshot is using.

I have a ZFS filesystem that shows:

$zfs list -o space rootpool/export/home
NAME                  AVAIL  USED   USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
rootpool/export/home  5.81G  14.4G  8.81G     5.54G   0              0

So reading this I see that I have a total of 14.4G of space used by this data 
set.  Currently 5.54G is active data that is available on the normal 
filesystem and 8.81G is used in snapshots.  8.81G + 5.54G = 14.4G (roughly).   I 
100% agree with these numbers and the world makes sense.

This is also backed up by:

$zfs get usedbysnapshots rootpool/export/home
NAME                  PROPERTY         VALUE  SOURCE
rootpool/export/home  usedbysnapshots  8.81G  -


Now if I wanted to see how much space any individual snapshot is currently 
using I would like to think that this would show me:

$zfs list -ro space rootpool/export/home

NAME                            AVAIL  USED   USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
rootpool/export/home            5.81G  14.4G  8.81G     5.54G   0              0
rootpool/export/home@week3      -      202M   -         -       -              -
rootpool/export/home@week2      -      104M   -         -       -              -
rootpool/export/home@7daysago   -      1.37M  -         -       -              -
rootpool/export/home@6daysago   -      1.20M  -         -       -              -
rootpool/export/home@5daysago   -      1020K  -         -       -              -
rootpool/export/home@4daysago   -      342K   -         -       -              -
rootpool/export/home@3daysago   -      1.28M  -         -       -              -
rootpool/export/home@week1      -      0      -         -       -              -
rootpool/export/home@2daysago   -      0      -         -       -              -
rootpool/export/home@yesterday  -      360K   -         -       -              -
rootpool/export/home@today      -      1.26M  -         -       -              -


So normal logic would tell me that if USEDSNAP is 8.81G and is composed of 11 
snapshots, adding up the sizes of those snapshots should roughly equal 8.81G.  
So time to break out the calculator:

202M + 104M + 1.37M + 1.20M + 1020K + 342K + 1.28M + 0 + 0 + 360K + 1.26M
equals...  ~312M!

That is nowhere near 8.81G.  I would accept it even if it were within 15%, but 
it's not even close.  That is definitely not metadata or ZFS overhead or anything.
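
For what it's worth, the same sum can be scripted instead of punched into a
calculator; a rough sketch, assuming this zfs list supports -H and -p for
exact byte values:

zfs list -Hp -r -t snapshot -o used rootpool/export/home | awk '{s+=$1} END {print s}'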

I understand that snapshots are just the delta between the time when the 
snapshot was taken and the current active filesystem and are truly just 
references to a block on disk rather than a copy.  I also understand how two 
(or more) snapshots can reference the same block on a disk but yet there is 
still only that one block used.  If I delete a recent snapshot I may not save 
as much space as advertised because some may be inherited by a parent 
snapshot.  But that inheritance is not creating duplicate used space on disk so 
it doesn't justify the huge difference in sizes. 

But even with this logic in place there is currently 8.81G of blocks referred 
to by snapshots which are not currently on the active filesystem and I don't 
believe anyone can argue with that.  Can something show me how much space a 
single snapshot has reserved?

I searched through some of the archives and found this thread 
(http://mail.opensolaris.org/pipermail/zfs-discuss/2012-August/052163.html) 
from early this month and I feel as if I have the same problem as the OP, but 
hopefully attacking it with a little more background.  I am not arguing with 
discrepancies between df/du and zfs output and I have read the Oracle 
documentation about it but haven't found what I feel like should be a simple 
answer.  I currently have a ticket open with Oracle, but I am getting answers 
to all kinds of questions except for the question I am asking so I am hoping 
someone out there might be able to help me.

I am a little concerned I am going to find out that there is no real way to 
show it and that makes for one sad SysAdmin.

Thanks,
Chad


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS snapshot used space question

2012-08-29 Thread Timothy Coalson
As I understand it, the used space of a snapshot does not include anything
that is in more than one snapshot.  There is a bit of a hack, using the
verbose and dry run options of zfs send, that will tell you how much data
must be transferred to replicate each snapshot incrementally, which should
help clear things up.  Try (with elevated privileges):

zfs send -nvR rootpool/export/home@today

You might also look at the REFER column in zfs list -r -t
snapshot rootpool/export/home, which tells you how much data was
referenced by the filesystem when the snapshot was taken.  However, this
could be uninformative if the majority of the snapshot data is rewritten
files rather than new files.
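
A minimal sketch of extending that to every consecutive pair of snapshots
(assuming zfs send supports -n and -v; the loop and variable names are mine):

# prints an estimated stream size for each increment, oldest to newest
FS=rootpool/export/home
prev=""
for snap in $(zfs list -H -r -t snapshot -o name -s creation "$FS"); do
  [ -n "$prev" ] && zfs send -nv -i "$prev" "$snap"
  prev="$snap"
done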

Tim

On Wed, Aug 29, 2012 at 1:12 PM, Truhn, Chad
chad.tr...@bowheadsupport.com wrote:

 I would like to know how much unique space each individual snapshot is
 using.

Re: [zfs-discuss] ZFS snapshot used space question

2012-08-29 Thread Stefan Ring
On Wed, Aug 29, 2012 at 8:58 PM, Timothy Coalson tsc...@mst.edu wrote:
 As I understand it, the used space of a snapshot does not include anything
 that is in more than one snapshot.

True. It shows the amount that would be freed if you destroyed the
snapshot right away. Data held onto by more than one snapshot cannot
be removed when you destroy just one of them, obviously. The act of
destroying a snapshot will likely change the USED value of the
neighbouring snapshots though.
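
Relatedly, a dry-run destroy can report how much would actually be reclaimed
before you commit to it; a sketch, assuming the zfs destroy on hand supports
-n and -v (and the % range syntax, where available):

zfs destroy -nv rootpool/export/home@week3
zfs destroy -nv rootpool/export/home@week3%7daysago   # a whole range at once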
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] what have you been buying for slog and l2arc?

2012-08-29 Thread Ian Collins

On 08/ 4/12 09:50 PM, Eugen Leitl wrote:

On Fri, Aug 03, 2012 at 08:39:55PM -0500, Bob Friesenhahn wrote:


Extreme write IOPS claims in consumer SSDs are normally based on large
write caches which can lose even more data if there is a power failure.

Intel 311 with a good UPS would seem to be a reasonable tradeoff.


The 313 series looks like a consumer-priced SLC drive aimed at the recent 
trend in Windows cache drives.


Should be worth a look.

--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss