Re: [zfs-discuss] What's eating my disk space? Missing snapshots?

2009-08-18 Thread Matthew Stevenson
The guide is good, but didn't tell me anything I didn't already know about this 
area unfortunately.

Anyway, I freed up a big chunk of space by first deleting the snapshot which 
was reported by zfs list as being the largest (2GB). Doing zfs list after this 
deletion revealed that several of the numbers had changed - all of a sudden a 
snapshot that was a couple of hundred MB turned into 5GB, so I deleted that as 
well.

So there must basically be lots of references to data that hide beneath the 
surface and can't really be found using zfs list. Is there another tool 
that helps visualize disk space usage and references to data, or could one 
potentially be made? E.g. I regained my disk space by trial and error, but if 
there had been some kind of visual tool that made it easy to see "if I 
delete that snapshot, then that one, I will get 7GB back", that would have been 
really handy.

Thanks,
Matt
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS ate my RAID-10 data

2009-08-18 Thread Ross
I'm no expert, but it sounds like this:
http://opensolaris.org/jive/thread.jspa?threadID=80232

Can you remove the faulted disk?

I found this as well, but I don't think I'd be too comfortable using zpool 
destroy as a recovery tool...
http://forums.sun.com/thread.jspa?threadID=5259623

It also appears that this may be a bug that is now fixed:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=2176098
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6633599

It might be worth booting an OpenSolaris Live CD, and seeing if you can import 
the pool there.  If that works, you can perform the pool recovery, then reboot 
back to your S10u4 and import the repaired pool.
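
Roughly, what I have in mind is (pool name is just a placeholder):

  (booted from the OpenSolaris Live CD)
  # zpool import -f tank
  # zpool status tank
    (check it came back healthy; scrub if in doubt)
  # zpool export tank
  (then reboot into S10u4 and "zpool import tank" as normal)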
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] What's eating my disk space? Missing snapshots?

2009-08-18 Thread Fajar A. Nugraha
On Tue, Aug 18, 2009 at 2:37 PM, Matthew Stevenson no-re...@opensolaris.org wrote:
 So there must be basically lots of references to data that hide themselves 
 from the surface and can't really be found using zfs list.

"zfs list -t all" usually works for me. Look at USED and REFER.

My understanding is like this:
REFER - how much data the fs/snapshot refers to, usually the same as
(or close to) the df output for the fs.
USED - how much data the fs (including its snapshots) uses, or how
much data that particular snapshot uses which is not also in the
previous snapshot.

You might also be interested in the output of zfs get
usedbysnapshots,usedbydataset,usedbychildren dataset_name

 Is there another tool that helps to visualize disk space usage and references 
 to data, or could one potentially be made? E.g. I regained my disk space by 
 trial and error, but if there had been some kind of visual tool that made it 
 easy to see that if I delete that snapshot, then that one, I will get 7GB 
 back, that would have been really handy.

The usedbysnapshots property should give an idea of how much space
will be regained if you delete all snapshots. It's a somewhat new
property (starting with zpool v13, I think).
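
For example, using rpool/export/home as a stand-in for the dataset in your
attachment (just a sketch, not your exact names):

# zfs get usedbysnapshots,usedbydataset,usedbychildren rpool/export/home
# zfs list -r -t snapshot -o name,used,refer rpool/export/home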

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] What's eating my disk space? Missing snapshots?

2009-08-18 Thread Matthew Stevenson
Hi, thanks for the info.

Can you have a look at the attachment on the original post for me?

Everything you said is what I expected to see in the output there, but a lot of 
the values are blank where I hoped they would at least be able to tell me a 
breakdown of the USEDSNAP figure.

As far as I know I'm using zpool version 13 (it might be higher - I didn't 
upgrade it manually, but I don't know if the updates from the /dev repository 
would have triggered an upgrade at any point).

Thanks,
Matt
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] What's eating my disk space? Missing snapshots?

2009-08-18 Thread Fajar A. Nugraha
On Tue, Aug 18, 2009 at 4:09 PM, Matthew Stevenson no-re...@opensolaris.org wrote:
 Hi, thanks for the info.

 Can you have a look at the attachment on the original post for me?

 Everything you said is what I expected to see in the output there, but a lot 
 of the values are blank where I hoped they would at least be able to tell me 
 a breakdown of the USEDSNAP figure

Well, I see USEDSNAP 13.8 GB for the dataset, so if you delete ALL
snapshots you'd probably be able to get that much.

As for which snapshot to delete to get the most space, that's a little
bit tricky. See
rpool/export/home/m...@zfs-auto-snap:monthly-2009-06-28-20:59, which
has USED 2.45G? If I understand correctly, it roughly means
rpool/export/home/m...@zfs-auto-snap:monthly-2009-06-28-20:59 and
rpool/export/home/m...@zfs-auto-snap:monthly-2009-07-05-13:43 have
about 2.45G of difference. Note that the space is probably
shared/referred to by the previous snapshots as well.

This means:
- Deleting only rpool/export/home/m...@zfs-auto-snap:monthly-2009-06-28-20:59
probably won't save you lots of space, as the used space would
probably be moved to
rpool/export/home/m...@zfs-auto-snap:monthly-2009-05-28-08:03
- Deleting all snapshots on and prior to
rpool/export/home/m...@zfs-auto-snap:monthly-2009-06-28-20:59 could
give you at least 2.45G of space.

It's a somewhat rough guess, but that's the best interpretation I can
come up with.
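
In practice that boils down to something like this (the snapshot name below is
a placeholder, not the exact one from your attachment):

# zfs list -r -t snapshot -o name,used,refer -s used rpool/export/home
# zfs destroy rpool/export/home/user@zfs-auto-snap:monthly-2009-06-28-20:59
# zfs list -r -t snapshot -o name,used,refer -s used rpool/export/home
  (re-run the listing after each destroy; the USED of the neighbouring
   snapshots usually changes once they become the sole owners of blocks)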


 As far as I know I'm using zpool version 13 (it might be higher - I didn't 
 upgrade it manually, but I don't know if the updates from /dev repository 
 would have triggered an upgrade at any point)

The system will be capable of the new version, but the pool will not
upgrade itself automatically.

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Crypto Updates [PSARC/2009/443 FastTrack timeout 08/24/2009]

2009-08-18 Thread Darren J Moffat

Garrett D'Amore wrote:

Darren J Moffat wrote:


Dataset rename restrictions
---

On rename a dataset can not be moved out of its wrapping key hierarchy,
i.e. where it inherits the keysource property from. This is best 
explained by example:

# zfs get -r keysource tank
NAME        PROPERTY   VALUE              SOURCE
tank        keysource  none               default
tank/A      keysource  passphrase,prompt  local
tank/A/b    keysource  passphrase,prompt  inherited from tank/A
tank/A/b/c  keysource  passphrase,prompt  inherited from tank/A
tank/D      keysource  none               default
tank/D/e    keysource  passphrase,prompt  local

Simple rename of leaf dataset in place:
# zfs rename tank/D/e tank/D/E          OK

Rename within keysource inheritance remains the same:

# zfs rename tank/A/b/c tank/A/c        OK

Rename out of keysource inheritance path:

# zfs rename tank/A/b/c tank/D/e/c      FAIL
  


I'd like to see draft text for a man page describing this behavior.  I 
suspect that this is likely to be potentially confusing.


The above is the draft man page text.  What more would you like to see?
Is it that you don't understand the behaviour, or do you want to do an 
editorial review of the man page text?  If the latter, then I'd assert 
that isn't ARC review.


As an alternative to failure, could one imagine having a -f (force) 
switch that allows the rename, and creates a new keysource root?


We can't do that because you need to specify what the keysource is.  So 
a force switch would be more like:

# zfs rename -o keysource=... tank/A/b/c tank/D/e/c

However 'zfs rename' doesn't have a -o capability today and it is out of 
the scope of the crypto project to add that support to rename. It is 
possible to add in the future but it has a non trivial impact beyond 
crypto support so I'm not willing to take that on just now - 
particularly since it requires changes to libzfs APIs that are in use by 
Fishworks.


I've logged an RFE for the general case: 6872829


Dataset mount
-
The zfs_mount() library call in libzfs, and thus the zfs(1M) mount command,
will load keys if they are available when a dataset is being
mounted.   Note that this means that 'zfs mount -a' can attempt to be
interactive if the keysource locator is prompt.  Note that this does
NOT cause a prompt for system boot and we do NOT wait looking for keys
(there is no facility to do so with SMF anyway).
  


So what is the behavior at boot for these file systems?  Are they left 
unmounted?  


Yes, and this is unchanged from the previous PSARC case for ZFS Crypto.

 Is there any indication to the administrator that this is
the situation?  


Unchanged from the previous PSARC case for ZFS Crypto.

They aren't mounted or shown in df, the mounted property of the dataset 
is false, and the keystatus property is unavailable.
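
As a rough illustration of how that looks to an admin (property names are the
ones described above; the code hasn't integrated yet, so treat this as a
sketch rather than final output):

# zfs get -r mounted,keystatus tank
# zfs mount tank/A
  (prompts for the passphrase when keysource is passphrase,prompt)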


 The man page indicates that zfs mount -a is run at boot,
but it seems like this might be a special case.  Again, this is one I'd 
like to see supporting man page diffs for.


This is unchanged from the previous case.

There is a private implementation detail where we ensure that the SMF 
service that runs at boot does not attempt to prompt. This is because 
SMF doesn't provide any means to do so, and even if it did we may not 
want that to happen.



Dnode Bonusbuf Encryption
-
Instead of encrypting the bonusbuf section of the dnodes, the ZFS Crypto 
feature will now depend on the ZFS fast system attributes project and will 
cause the bonusbuf to always completely spill.  Note there are no user visible
interface changes from this, and the ZFS fast system attributes project isn't
expected to be reviewed in ARC as it is an implementation detail only.
Management of the dependency is thus not an ARC issue but an internal team
coordination issue.
  


Implementation details that affect on-disk storage, or have other larger 
ramifications elsewhere on the system, should probably still be ARC'd.  


This has not been an ARC issue in many other ZFS cases and I'm not making 
it one in this case either, because it isn't this project that is 
introducing fast system attributes.  Not all ZFS changes that impact the 
on-disk format actually come to ARC anyway.


Is the fast system attributes project planning on changing the on-disk 
format?


Yes, but in a compatible way using the version system; there is nothing 
directly visible to admins.


To enable encryption you have to upgrade your pool version number anyway 
- this is standard ZFS behaviour when adding new features that have an 
on-disk impact.  That was already the case with the previous ZFS Crypto 
ARC case.  Encryption support will be a pool version after the fast 
system attributes project and encryption depends on it.  However I don't 
expect to see an ARC case presented to PSARC for the fast system 
attributes project since there is nothing visible to the admin other 
than the pool 

Re: [zfs-discuss] What's eating my disk space? Missing snapshots?

2009-08-18 Thread Matthew Stevenson
 Well, I see USEDSNAP 13.8 GB for the dataset, so if you delete ALL
 snapshots you'd probably be able to get that much.

I agree, it's just hard to see how...

 As for which snapshot to delete to get the most space, that's a little
 bit tricky. See
 rpool/export/home/m...@zfs-auto-snap:monthly-2009-06-28-20:59, which
 has USED 2.45G? If I understand correctly, it roughly means
 rpool/export/home/m...@zfs-auto-snap:monthly-2009-06-28-20:59 and
 rpool/export/home/m...@zfs-auto-snap:monthly-2009-07-05-13:43 have
 about 2.45G of difference. 

That's what I thought too, but by that logic I'd have thought that if you add 
up all the differences, you'd get the total USEDSNAP figure of 13.8GB, but 
you don't - it only adds up to around 5GB. 

 
 This means:
 - Deleting only rpool/export/home/m...@zfs-auto-snap:monthly-2009-06-28-20:59
 probably won't save you lots of space, as the used space would
 probably be moved to
 rpool/export/home/m...@zfs-auto-snap:monthly-2009-05-28-08:03
 - Deleting all snapshots on and prior to
 rpool/export/home/m...@zfs-auto-snap:monthly-2009-06-28-20:59 could
 give you at least 2.45G of space.

Yes, that's exactly what happened - the used data all moved to another 
snapshot, and deleting that snapshot freed it all up (I guess that must have 
been the point at which I first wrote the large files, or whatever). It seems 
that the more you delete, the easier it is to see where the total USEDSNAP 
figure is coming from.

It's almost like it can't tell you where the figure is coming from ahead of 
time - it requires you to take more actions before it can give you a more 
accurate figure. I'm sure there must be a way to find out though...

Thanks,
Matt
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Crypto Updates [PSARC/2009/443 FastTrack timeout 08/24/2009]

2009-08-18 Thread Brian Hechinger
On Tue, Aug 18, 2009 at 12:37:23AM +0100, Robert Milkowski wrote:
 Hi Darren,
 
 Thank you for the update.
 Have you got any ETA (build number) for the crypto project?

Also, is there any word on whether this will support the hardware crypto stuff
in the VIA CPUs natively?  That would be nice. :)

-brian
-- 
Coding in C is like sending a 3 year old to do groceries. You gotta
tell them exactly what you want or you'll end up with a cupboard full of
pop tarts and pancake mix. -- IRC User (http://www.bash.org/?841435)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Crypto Updates [PSARC/2009/443 FastTrack timeout 08/24/2009]

2009-08-18 Thread Constantin Gonzalez

Hi,

Brian Hechinger wrote:

On Tue, Aug 18, 2009 at 12:37:23AM +0100, Robert Milkowski wrote:

Hi Darren,

Thank you for the update.
Have you got any ETA (build number) for the crypto project?


Also, is there any word on if this will support the hardware crypto stuff
in the VIA CPUs natively?  That would be nice. :)


ZFS Crypto uses the Solaris Cryptographic Framework to do the actual
encryption work, so ZFS is agnostic to any hardware crypto acceleration.

The Cryptographic Framework project on OpenSolaris.org is looking for help
in implementing VIA Padlock support for the Solaris Cryptographic Framework:

  http://www.opensolaris.org/os/project/crypto/inprogress/padlock/

Cheers,
  Constantin

--
Constantin Gonzalez  Sun Microsystems GmbH, Germany
Principal Field Technologist    http://blogs.sun.com/constantin
Tel.: +49 89/4 60 08-25 91   http://google.com/search?q=constantin+gonzalez

Sitz d. Ges.: Sun Microsystems GmbH, Sonnenallee 1, 85551 Kirchheim-Heimstetten
Amtsgericht Muenchen: HRB 161028
Geschaeftsfuehrer: Thomas Schroeder, Wolfgang Engels, Wolf Frenkel
Vorsitzender des Aufsichtsrates: Martin Haering
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ETA for 6574286 removing a slog doesn't work?

2009-08-18 Thread Roman Naumenko
Is anybody aware of whether this bug is going to be fixed in the near future?
IBM just started to sell the new X25 model for half the price. 

--
Roman
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Crypto Updates [PSARC/2009/443 FastTrack timeout 08/24/2009]

2009-08-18 Thread Darren J Moffat

Brian Hechinger wrote:

On Tue, Aug 18, 2009 at 12:37:23AM +0100, Robert Milkowski wrote:

Hi Darren,

Thank you for the update.
Have you got any ETA (build number) for the crypto project?


Also, is there any word on if this will support the hardware crypto stuff


That has always been the plan and it comes completely for free given it 
uses the Crypto Framework APIs.  I already have acceleration on the 
UltraSPARC T2 processors without a single special line of code.



in the VIA CPUs natively?  That would be nice. :)


There isn't yet a driver/code for the VIA CPUs integrated into the 
crypto framework.  A prototype was started by someone outside of Sun but 
it hasn't yet been integrated:


http://opensolaris.org/os/project/crypto/inprogress/padlock/

The prototype code is here:

http://src.opensolaris.org/source/xref/crypto/padlock/

However, that has nothing to do with the ZFS Crypto project directly, but 
it will benefit if it is implemented. If you can and wish to help out, 
please contact crypto-disc...@opensolaris.org


--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] What's eating my disk space? Missing snapshots?

2009-08-18 Thread Thomas Burgess
It's pretty simple, if I understand it correctly.  When you add some blocks
to zfs:

  xxx

then take a snapshot:

  xxx
  (snapshot of x)

the disk has the space of the x's and the snapshot doesn't take up any space
yet.

Then you add more to the drive and maybe take another snapshot:

  xxx ddd
  (snapshot of x)
  (+ difference)
  (snapshot of d)

Each time you change stuff that's part of those data sets, the snapshots grow
(each one keeps track of the DIFFERENCES), so they start off small but grow
larger. Then, if you DELETE something from your current filesystem, it falls
to the snapshot. I hope I am explaining it well, but when you think about it
like this it's easy to see how this can happen.
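
If you want to see it with real numbers, a quick scratch-dataset experiment
(names and sizes made up) shows the same thing:

# zfs create tank/demo
# mkfile 1g /tank/demo/bigfile
# zfs snapshot tank/demo@snap1
  (the snapshot uses almost nothing at this point)
# rm /tank/demo/bigfile
# zfs list -r -t all -o name,used,refer tank/demo
  (now snap1's USED jumps to roughly 1G, because the snapshot is the only
   thing still holding those blocks)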
On Tue, Aug 18, 2009 at 8:47 AM, Matthew Stevenson no-re...@opensolaris.org wrote:

  Well, I see USEDSNAP 13.8 GB for the dataset, so if you delete ALL
  snapshots you'd probably be able to get that much.

 I agree, it's just hard to see how...

  As for which snapshot to delete to get the most space, that's a little
  bit tricky. See
  rpool/export/home/m...@zfs-auto-snap:monthly-2009-06-28-20:59, which
  has USED 2.45G? If I understand correctly, it roughly means
  rpool/export/home/m...@zfs-auto-snap:monthly-2009-06-28-20:59 and
  rpool/export/home/m...@zfs-auto-snap:monthly-2009-07-05-13:43 have
  about 2.45G of difference.

 That's what I thought too, but by that logic I'd have thought that if you
 add up all the differences, you'd get the total USEDSNAP figure of 13.8GB,
 but you don't - it only adds up to around 5GB.

 
  This means:
  - Deleting only rpool/export/home/m...@zfs-auto-snap:monthly-2009-06-28-20:59
  probably won't save you lots of space, as the used space would
  probably be moved to
  rpool/export/home/m...@zfs-auto-snap:monthly-2009-05-28-08:03
  - Deleting all snapshots on and prior to
  rpool/export/home/m...@zfs-auto-snap:monthly-2009-06-28-20:59 could
  give you at least 2.45G of space.

 Yes, that's exactly what happened - the used data all moved to another
 snapshot, and deleting that snapshot freed it all up (I guess that must have
 been the point at which I first wrote the large files, or whatever). It
 seems that the more you start to delete things, the easier it is to see
 where the total USEDSNAP figure is coming from.

 It's almost like it can't tell you where the figure is coming from ahead of
 time - it requires you to take more actions before it can give you a more
 accurate figure. I'm sure there must be a way to find out though...

 Thanks,
 Matt
 --
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs fragmentation

2009-08-18 Thread Mertol Ozyoney
There is work underway to make NDMP more efficient on highly fragmented file
systems with a lot of small files.
I am not a development engineer so I don't know much, and I do not think that
there is any committed work. However, the ZFS engineers on the forum may
comment more.
Mertol 



Mertol Ozyoney 
Storage Practice - Sales Manager

Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +90212335
Email mertol.ozyo...@sun.com


-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Ed Spencer
Sent: Sunday, August 09, 2009 12:14 AM
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] zfs fragmentation


On Sat, 2009-08-08 at 15:20, Bob Friesenhahn wrote:

 A SSD slog backed by a SAS 15K JBOD array should perform much better 
 than a big iSCSI LUN.

Now...yes. We implemented this pool years ago. I believe, then, the
server would crash if you had a zfs drive fail. We decided to let the
netapp handle the disk redundancy. It's worked out well. 

I've looked at those really nice Sun products adoringly. And a 7000
series appliance would also be a nice addition to our central NFS
service. Not to mention more cost effective than expanding our Network
Appliance (We have researchers who are quite hungry for storage and NFS
is always our first choice).

We now have quite an investment in the current implementation. It's
difficult to move away from. The netapp is quite a reliable product.

We are quite happy with zfs and our implementation. We just need to
address our backup performance and improve it just a little bit!

We were almost lynched this spring because we encountered some pretty
severe zfs bugs. We are still running the IDR named "A wad of ZFS bug
fixes for Solaris 10 Update 6". It took over a month to resolve the
issues.

I work at a university, and final exams and year end occur at the same
time. I don't recommend having email problems during this time! People
are intolerant of email problems.

I live in hope that a Netapp OS update, or a Solaris patch, or a zfs
patch, or an iscsi patch, or something will come along that improves our
performance just a bit so our backup people get off my back!

-- 
Ed 


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS nfs performance on ESX4i

2009-08-18 Thread Mertol Ozyoney
Hi Ashley;

 

A RAID-Z group is OK for throughput, but due to the design the whole RAID-Z
group behaves like a single disk, so your max IOPS is around 100. I'd personally
use RAID-10 instead. Also, you seem to have no write cache, which can affect
performance. Try using a log device.
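
For example (pool name taken from the quoted message below; device names are
placeholders):

# zpool add vmstorage log c2t0d0
or, if you can spare two devices, mirror it:
# zpool add vmstorage log mirror c2t0d0 c2t1d0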

 

 

Best regards

Mertol 

 

 



Mertol Ozyoney 
Storage Practice - Sales Manager

Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +90212335
Email mertol.ozyo...@sun.com

 

 

From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Ashley Avileli
Sent: Friday, August 14, 2009 2:21 PM
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] ZFS nfs performance on ESX4i

 

I have set up a pool called vmstorage and mounted it as nfs storage in esx4i.
The pool in freenas contains 4 sata2 disks in raidz.  I have 6 vms, 5 linux
and 1 windows, and performance is terrible.

Any suggestions on improving the performance of the current setup?

I have added the following, vfs.zfs.prefetch_disable=1, which improved the
performance slightly.



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] What's eating my disk space? Missing snapshots?

2009-08-18 Thread Matthew Stevenson
I do understand these concepts, but to me that still doesn't explain why adding 
the size of each snapshot together doesn't equal the size reported by zfs list 
in USEDSNAP.

I'm clearly missing something. Hmmm...
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] What's eating my disk space? Missing snapshots?

2009-08-18 Thread Richard Elling


On Aug 18, 2009, at 9:04 AM, Matthew Stevenson wrote:

I do understand these concepts, but to me that still doesn't explain  
why adding the size of each snapshot together doesn't equal the size  
reported by zfs list in USEDSNAP.


Here is the pertinent text from the ZFS Admin Guide.

usedbysnapshots
Read-only property that identifies the amount of space that is
consumed by snapshots of this dataset. In particular, it is the
amount of space that would be freed if all of this dataset's
snapshots were destroyed. Note that this is not simply the sum
of the snapshots' used properties, because space can be shared
by multiple snapshots. The property abbreviation is usedsnap.

Could you offer an alternative or clarifying statement?
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] What's eating my disk space? Missing snapshots?

2009-08-18 Thread Thomas Burgess
dude, i just explained it =)

ok...let me see if i can do better...

If you have a file that's 1 GB, in zfs you have those blocks added.  On a
normal filesystem, when you edit the file or add to it, it will erase the old
file and write a new one over it (more or less).

On zfs, you have the blocks added just like when you first add them on
another system, but when you edit it and change it, it doesn't erase the
file and add the changed file, it adds JUST what is different.  Because of
this, both copies of the file exist (the old one + blocks referencing what
has changed).

Now, a snapshot is a point-in-time COPY of what THIS looks like...so, when
files CHANGE, snapshots GROW, not always BASED on the size of the files, but
on the size of how different things are. Does this make more sense?  Then
when you DELETE a file or something, all of those original changes are still
referenced in snapshots...so the more snapshots you have, and the MORE
things change, the MORE SPACE you're going to take up.
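
A rough way to watch that happen on a scratch filesystem (hypothetical names,
sizes only illustrative):

# zfs snapshot tank/demo@before
# dd if=/dev/urandom of=/tank/demo/file bs=1024k count=100 conv=notrunc
  (rewrite 100MB of an existing file in place)
# zfs list -o name,used tank/demo@before
  (the snapshot's USED climbs toward ~100MB, since only the snapshot still
   references the overwritten blocks)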

On Tue, Aug 18, 2009 at 12:04 PM, Matthew Stevenson no-re...@opensolaris.org wrote:

 I do understand these concepts, but to me that still doesn't explain why
 adding the size of each snapshot together doesn't equal the size reported by
 zfs list in USEDSNAP.

 I'm clearly missing something. Hmmm...
 --
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Is it possible to replicate an entire zpool with AVS?

2009-08-18 Thread Paul Choi

Hello,

Is it possible to replicate an entire zpool with AVS? From what I see, 
you can replicate a zvol, because AVS is filesystem agnostic. I can 
create zvols within a pool, and AVS can replicate those, but 
that's not really what I want.


If I create a zpool called disk1,

paulc...@nfs01b:/dev/zvol# find /dev/zvol
/dev/zvol
/dev/zvol/dsk
/dev/zvol/dsk/rpool
/dev/zvol/dsk/rpool/dump
/dev/zvol/dsk/rpool/swap
/dev/zvol/rdsk
/dev/zvol/rdsk/rpool
/dev/zvol/rdsk/rpool/dump
/dev/zvol/rdsk/rpool/swap
paulc...@nfs01b:/dev/zvol#

The only zvol entries I see are for zvols that have been explicitly created.
Any tricks to using AVS with a zpool? Or should I just opt for periodic 
zfs snapshot and zfs send/receive?
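
If you end up going the send/receive route, the basic shape would be something
like this (hostname and snapshot names are made up, and this assumes your
release supports recursive -R streams):

# zfs snapshot -r disk1@rep1
# zfs send -R disk1@rep1 | ssh otherhost zfs receive -d -F disk1
and then incrementally:
# zfs snapshot -r disk1@rep2
# zfs send -R -i rep1 disk1@rep2 | ssh otherhost zfs receive -d -F disk1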

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] What's eating my disk space? Missing snapshots?

2009-08-18 Thread Matthew Stevenson
Ha ha, I know! Like I say, I do get COW principles!

I guess what I'm after is for someone to look at my specific example (in txt 
file attached to first post) and tell me specifically how to find out where the 
13.8GB number is coming from.

I feel like a total numpty for going on about this, I really do, but despite 
all the input I still don't see an answer to this basic question.

I promise this will be my last post querying the subject!

Thanks
Matt
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] What's eating my disk space? Missing snapshots?

2009-08-18 Thread Thomas Burgess
If you understand how copy-on-write works and how snapshots work, then the
concept of the extra space should make perfect sense.  If you want a
mathematical formula for how to figure it out, I would have to say that it would
be based on how DIFFERENT the data is between snapshots AND how MUCH data it
is...but it's not always a 10 GB file + 10 GB file = 20 GB...that is simply
NOT how ZFS or snapshots WORK.  Really, if you think about it, it makes
perfect sense...and it's not always as black and white as you'd like it to
be.

On Tue, Aug 18, 2009 at 1:52 PM, Matthew Stevenson no-re...@opensolaris.org wrote:

 Ha ha, I know! Like I say, I do get COW principles!

 I guess what I'm after is for someone to look at my specific example (in
 txt file attached to first post) and tell me specifically how to find out
 where the 13.8GB number is coming from.

 I feel like a total numpty for going on about this, I really do, but
 despite all the input I still don't see an answer to this basic question.

 I promise this will be my last post querying the subject!

 Thanks
 Matt
 --
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] *Almost* empty ZFS filesystem - 14GB?

2009-08-18 Thread Chris Murray
I don't have quotas set, so I think I'll have to put this down to some sort of 
bug. I'm on SXCE 105 at the minute, ZFS version is 3, but zpool is version 13 
(could be 14 if I upgrade). I don't have everything backed-up so won't do a 
zpool upgrade just at the minute. I think when SXCE 120 is released, I'll 
install that, upgrade my pool and see if the filesystem still registers as 
14GB. If it does, I'll destroy and recreate - no biggie! :-)
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] data disappear

2009-08-18 Thread Rafal Ciepiela
Bingo!
After several updates I have many boot environments.

Thanks.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] *Almost* empty ZFS filesystem - 14GB?

2009-08-18 Thread Nicolas Williams
Perhaps an open 14GB, zero-link file?
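
One rough way to check for that (mountpoint and pid are placeholders):

# fuser -c /tank/mostly-empty
  (lists the processes with files open on that filesystem)
# pfiles <pid>
  (look for a ~14GB file among the open descriptors)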
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs send speed

2009-08-18 Thread Paul Kraus
Posted from the wrong address the first time, sorry.

Is the speed of a 'zfs send' dependent on file size / number of files?

       We have a system with some large datasets (3.3 TB and about 35
million files) and conventional backups take a long time (using
Netbackup 6.5, a FULL takes between two and three days; differential
incrementals, even with very few files changing, take between 15 and
20 hours). We already use snapshots for day-to-day restores, but we
need the 'real' backups for DR.

       I have been testing zfs send throughput and have not been
getting promising results. Note that this is NOT OpenSolaris, but
Solaris 10U6 (10/08) with the IDR for the snapshot interrupts resilver
bug.

Server: V480, 4 CPU, 16 GB RAM (test server, production is an M4000)
Storage: two SE-3511, each with one 512 GB LUN presented

Simple mirror layout:

pkr...@nyc-sted1:/IDR-test/ppk zpool status
 pool: IDR-test
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Wed Jul  1 16:54:58 2009
config:

       NAME                                       STATE     READ WRITE CKSUM
       IDR-test                                   ONLINE       0     0     0
         mirror                                   ONLINE       0     0     0
           c6t600C0FF00927852FB91AD308d0  ONLINE       0     0     0
           c6t600C0FF00922614781B19008d0  ONLINE       0     0     0

errors: No known data errors
pkr...@nyc-sted1:/IDR-test/ppk

pkr...@nyc-sted1:/IDR-test/ppk zfs list
NAME                          USED  AVAIL  REFER  MOUNTPOINT
IDR-test                      101G   399G  24.3M  /IDR-test
idr-t...@1250597527          96.8M      -   101M  -
idr-t...@1250604834          20.1M      -  24.3M  -
idr-t...@1250605236            16K      -  24.3M  -
idr-t...@1250605400            20K      -  24.3M  -
idr-t...@1250606582            20K      -  24.3M  -
idr-t...@1250612553            20K      -  24.3M  -
idr-t...@1250616026            20K      -  24.3M  -
IDR-test/dataset              101G   399G   100G  /IDR-test/dataset
IDR-test/data...@1250597527   313K      -  87.1G  -
IDR-test/data...@1250604834   266K      -  87.1G  -
IDR-test/data...@1250605236   187M      -  88.2G  -
IDR-test/data...@1250605400   192M      -  89.3G  -
IDR-test/data...@1250606582   246K      -  95.4G  -
IDR-test/data...@1250612553   233K      -  95.4G  -
IDR-test/data...@1250616026   230K      -   100G  -
pkr...@nyc-sted1:/IDR-test/ppk

There are about 3.3 million files / directories in the 'dataset',
files range in size from 1 KB to 100 KB.

pkr...@nyc-sted1:/IDR-test/ppk time sudo zfs send
IDR-test/data...@1250616026 > /dev/null

real    91m19.024s
user    0m0.022s
sys     11m51.422s
pkr...@nyc-sted1:/IDR-test/ppk

Which translates to a little over 18 MB/sec. and 600 files/sec. That
would mean almost 16 hours per TB. Better, but not much better than
NBU.

I do not think the SE-3511 is limiting us, as I have seen much higher
throughput on them when resilvering one or more mirrors.

Any thoughts as to why I am not getting better throughput ?

Thanks.

--
{1-2-3-4-5-6-7-}
Paul Kraus
- Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
- Sound Designer, The Pajama Game @ Schenectady Light Opera Company
( http://www.sloctheater.org/ )
- Technical Advisor, Lunacon 2010 (http://www.lunacon.org/)
- Technical Advisor, RPI Players
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send speed

2009-08-18 Thread Joseph L. Casale
Is the speed of a 'zfs send' dependant on file size / number of files ?

I am going to say no. I have *far* inferior iron that I am running a backup
rig on, doing a send/recv over ssh through gige, and last night's replication
gave the following: received 40.2GB stream in 3498 seconds (11.8MB/sec).
I have seen it as high as your figures, but usually between this and your number.

I assumed it was a result of the ssh overhead (arcfour yielded the best 
results).

There are about 3.3 million files / directories in the 'dataset',
files range in size from 1 KB to 100 KB.

The number of files I am replicating would be ~100!
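
For reference, the pipeline I'm describing is roughly (host, dataset and
snapshot names are placeholders):

# zfs send -i tank/fs@prev tank/fs@now | ssh -c arcfour backuphost zfs receive -F tank/fs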

jlc
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs incremental send stream size

2009-08-18 Thread michael
Is there perhaps a workaround for this?  A way to condense the free blocks 
information?  

If not, any idea when an improvement might be implemented?

We are currently suffering from incremental snapshots that refer to zero new 
blocks, but whose incremental send streams require over a gigabyte even after 
gzip'ing.
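
For anyone who wants to reproduce the measurement, it amounts to (snapshot
names are placeholders):

# zfs send -i tank/fs@monday tank/fs@tuesday | gzip -c | wc -c
  (bytes in the compressed incremental stream, even when the newer snapshot
   references essentially no new blocks)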
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Behind the scenes of 'invalid vdev configuration'

2009-08-18 Thread Galen
I am dealing with a zpool that's refusing to import, reporting 
'invalid vdev configuration'.


How can I learn more about what exactly this means? Can I isolate  
which disk(s) are missing or corrupted/failing?


zpool import provides some information, but not enough. Confusingly,  
it lists all disks as 'online' as well, even if they are physically  
not present.


Suggestions anybody?

-Galen
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send speed

2009-08-18 Thread Mattias Pantzare
On Tue, Aug 18, 2009 at 22:22, Paul Kraus pk1...@gmail.com wrote:
 Posted from the wrong address the first time, sorry.

 Is the speed of a 'zfs send' dependant on file size / number of files ?

        We have a system with some large datasets (3.3 TB and about 35
 million files) and conventional backups take a long time (using
 Netbackup 6.5 a FULL takes between two and three days, differential
 incrementals, even with very few files changing, take between 15 and
 20 hours). We already use snapshots for day to day restores, but we
 need the 'real' backups for DR.

Conventional backups can be faster than that! I have not used
netbackup but you should be able to configure netbackup to run several
backup streams in parallel. You may have to point netbackup to subdirs
instead of the file system root.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss