Re: [zfs-discuss] ZFS causing slow boot up

2007-02-19 Thread Jesus Cea

Kory Wheatley wrote:
> We created 10,000 ZFS file systems with no data in them yet, and
> it seems that after we did this our boot-up process takes over an hour.

http://en.wikipedia.org/wiki/Zfs#Current_implementation_issues
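
A rough way to see the effect for yourself (the pool name "tank" is
made up; this is only a sketch, not a recommendation):

    # create 10,000 empty filesystems
    i=0
    while [ $i -lt 10000 ]; do
        zfs create tank/fs$i
        i=`expr $i + 1`
    done
    # the boot-time cost is roughly the cost of mounting them all:
    zfs umount -a && time zfs mount -a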

- --
Jesus Cea Avion
[EMAIL PROTECTED] - http://www.argo.es/~jcea/
jabber / xmpp:[EMAIL PROTECTED]
"Things are not so easy" / "My name is Dump, Core Dump"
"Love is to put your happiness in the happiness of another" - Leibniz
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Samba ACLs on ZFS

2007-02-19 Thread Rod
Are Samba ACLs now supported with ZFS?

I have been looking through the release notes of the Samba 3.0.24
release, but I can't see anything about ZFS and ACLs.

Does anybody know anything?


Thank you
 
 
This message posted from opensolaris.org


Re[2]: [zfs-discuss] Zfs best practice for 2U SATA iSCSI NAS

2007-02-19 Thread Robert Milkowski

Hello Nicholas,

Monday, February 19, 2007, 11:31:50 AM, you wrote:

> 2. What is the recommended version of OpenSolaris to use at the moment
>    that has iSCSI? Is there a stable-like branch, or is it better to
>    stay on the N-1 update?

> 3. Which leads to: coming from Debian, how easy are system updates? I
>    remember that with OpenBSD system updates used to be a pain.

> 4. I assume that since Solaris can't boot off ZFS yet, the best option
>    is two mirrored drives for the OS, and all the other drives in the
>    pool.

> 5. Is there a recommended amount of system memory for (only) serving
>    storage? I.e. is 4 GB sufficient for a 10x500 GB SATA array, or will
>    increasing this make a big difference to performance? How about CPU?
>    Would two Xeon 5130s be sufficient for 20-40 iSCSI targets?

2. If you are talking about the iSCSI client (initiator), it's in the
   Solaris 10 release. IIRC the iSCSI target should be in U4 - otherwise
   use Nevada.

3. You have two options to upgrade the system - via patches or via an
   upgrade. Both are fairly easy.
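
   For example (the patch id, install image path, and boot environment
   name below are made up):

       # option 1: patch in place
       patchadd /var/tmp/118855-36    # hypothetical patch id
       # option 2: Live Upgrade into an alternate boot environment
       lucreate -n newBE
       luupgrade -u -n newBE -s /net/install/solaris_u3
       luactivate newBE
       init 6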

4. That's what a lot of people are doing now. However, even with the
   zfsboot code I guess a lot of people will still mirror two drives for
   the system (but with ZFS) and create a separate pool for all the
   other disks.

5. There's no simple answer to this question, as it greatly depends on
   workload and data. One thing you should keep in mind: Solaris *has*
   to boot in 64-bit mode if you want to use all that memory as a cache
   for ZFS, so old 32-bit x86 CPUs are not welcome.
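
   A quick way to check what mode a given box runs in (both commands
   ship with Solaris):

       isainfo -kv            # should report a 64-bit kernel
       prtconf | grep Memory  # installed RAM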



--
Best regards,
Robert  mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com





Re: [zfs-discuss] Re: [osol-help] How to recover from rm *?

2007-02-19 Thread Robert Milkowski
Hello Dennis,

Monday, February 19, 2007, 12:20:49 AM, you wrote:

> On Sun, 18 Feb 2007, Calvin Liu wrote:
>
>> I wanted to run the command "rm Dis*" in a folder but mis-typed a
>> space in it, so it became "rm Dis *". Unfortunately I had pressed the
>> return key before I noticed the mistake. So you all know what
>> happened... :( :( :(
>
> Ouch!
>
>> How can I get the files back in this case?
>
> You restore them from your backups.
>
>> I haven't backed them up.

DC> This is one (of many) reasons why ZFS just rocks.  A snapshot would
DC> have saved you.  I don't consider a snapshot to be an actual backup,
DC> however.  I define a backup as something that you can actually
DC> restore to bare metal when your entire datacenter has vanished into
DC> a black hole.  That generally means tape.

DC> In the Lotus Notes/Domino world there is a very nice feature where
DC> you can have soft-deletions.  Essentially you can delete a record
DC> from a database and then still do a recovery if needed within a
DC> given retention time period.  Perhaps a soft-deletion feature in ZFS
DC> would be nice.  It would allow a sysadmin or maybe even a user to
DC> delete something and then come back later, check a deletion log and
DC> possibly just unrm the file.

Something similar was proposed here before and IIRC someone even has a
working implementation. I don't know what happened to it.

Anyone? That someone?

-- 
Best regards,
 Robertmailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



Re: [zfs-discuss] Samba ACLs on ZFS

2007-02-19 Thread Ivo De Decker
On Mon, Feb 19, 2007 at 04:07:25AM -0800, Rod wrote:

Hello,

> Are Samba ACLs now supported with ZFS?
>
> I have been looking through the release notes of the Samba 3.0.24
> release, but I can't see anything about ZFS and ACLs.
>
> Does anybody know anything?

The code is in the svn repository:
http://viewcvs.samba.org/cgi-bin/viewcvs.cgi/branches/SAMBA_3_0_25/source/modules/vfs_solarisacl.c?rev=21153&view=log

But it isn't in the 3.0.24 release. It seems that 3.0.24 is a security
release, and that new features that were scheduled for 3.0.24 will be in
3.0.25.

Greetings,

Ivo De Decker


Re: [zfs-discuss] Re: [osol-help] How to recover from rm *?

2007-02-19 Thread Jeremy Teo

> Something similar was proposed here before and IIRC someone even has a
> working implementation. I don't know what happened to it.

That would be me. AFAIK, no one really wanted it.  The problem that it
solves can be solved by putting snapshots in a cron job.
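
A minimal sketch of that approach (dataset and file names are made up):
an hourly entry in root's crontab such as

    0 * * * * /usr/sbin/zfs snapshot tank/home@hourly_`date +\%H`

and an accidentally removed file can then be copied back out of the
read-only snapshot directory:

    cp /tank/home/.zfs/snapshot/hourly_13/Distribution.txt /tank/home/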

--
Regards,
Jeremy


Re: [zfs-discuss] Samba ACLs on ZFS

2007-02-19 Thread Onno Molenkamp
On Monday 19 February 2007, Ivo De Decker wrote:
> The code is in the svn repository:
> http://viewcvs.samba.org/cgi-bin/viewcvs.cgi/branches/SAMBA_3_0_25/source/modules/vfs_solarisacl.c?rev=21153&view=log

There is no ZFS ACL code in there, just UFS ACL stuff.

Onno




Re: [zfs-discuss] Samba ACLs on ZFS

2007-02-19 Thread Ivo De Decker
On Mon, Feb 19, 2007 at 03:43:49PM +0100, Rodrigo Lería wrote:
> And when is the 3.0.25 version going to be released?

The samba-technical mailing list has more info:

http://lists.samba.org/archive/samba-technical/2007-February/051430.html

BTW this thread is also very interesting for ZFS users:

http://lists.samba.org/archive/samba-technical/2007-February/051506.html


Greetings,

Ivo De Decker


Re: [zfs-discuss] Re: SPEC SFS benchmark of NFS/ZFS/B56 - please help to improve it!

2007-02-19 Thread Roch - PAE

Leon Koll writes:

> An update:
>
> Not sure if it is related to the fragmentation, but I can say that the
> serious performance degradation in my NFS/ZFS benchmarks is a result
> of the on-disk ZFS data layout.
> Read operations on directories (NFSv3 readdirplus) are abnormally
> time-consuming. That kills the server. After a cold restart of the
> host the performance is still on the floor.
> My conclusion: it's not CPU, not memory, it's the ZFS on-disk
> structures.


As I understand the issue, a readdirplus is
2X slower when data is already cached in the client than
when it is not.

Given that the on-disk structure does not change between the 
2 runs, I can't really place the fault on it.

-r



Re: [zfs-discuss] Is ZFS file system supports short writes ?

2007-02-19 Thread Roch - PAE

dudekula mastan writes:

> If a write call attempts to write X bytes of data, and the write call
> writes only x (where x < X) bytes, then we call that write a short
> write.
>
> -Masthan

What kind of support do you want/need?

-r



Re: [zfs-discuss] Samba ACLs on ZFS

2007-02-19 Thread Eric Enright

On 2/19/07, Rod <[EMAIL PROTECTED]> wrote:

> Are Samba ACLs now supported with ZFS?
>
> I have been looking through the release notes of the Samba 3.0.24
> release, but I can't see anything about ZFS and ACLs.
>
> Does anybody know anything?


It's not there yet.  I spent some time looking at this a few weeks
ago, and last I looked there was a Sun engineer on the SFW team
working on ZFS ACL support, who said he'd have something in two or
three weeks.  That was several weeks ago, and I haven't looked into
it beyond a quick glance since.

One thing I did try out was loopback mounting the filesystem via NFS
and exporting /that/ with Samba, which seemed to work fine as far as
getting/setting ACLs via Explorer.  That is clearly not an optimal
solution, however, and I decided that I could live with the real
permissions being invisible.
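
For reference, the loopback arrangement was roughly this (pool name,
mount point and share name are made up):

    zfs set sharenfs=on tank/export        # export over NFS
    mkdir -p /mnt/loop
    mount -F nfs localhost:/tank/export /mnt/loop
    # then point the Samba share at the NFS mount in smb.conf:
    #   [export]
    #   path = /mnt/loop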

--
Eric Enright


[zfs-discuss] Exporting zvol properties to .zfs

2007-02-19 Thread Dale Ghent


Here at my university, I recently started selling disk space to users
from a server with 4.5 TB of space. They purchase space and I make them
their own volume, typically with compression on, and it's then exported
via NFS to their servers/workstations. So far this has gone quite well
(with zil_disable and a tuned-up nfsd, of course).


Anyhow, the frustration exhibited by a new customer of mine made me  
think of a new RFE possibility. This customer purchased some space  
and began moving his data (2TB's worth) over to it from his ailing  
RAID array. He became frantic at one point and said that the transfer  
was taking too long.


What he was doing was judging the speed of the move by doing a 'df' on
his NFS client and comparing that to the existing partition which holds
his data. What he didn't realize was that the transfer seemed slower
because his data on the ZFS-backed NFS server was being compressed at a
2:1 ratio... so, for example, although the df on his NFS client
reported 250 GB used, in reality approximately 500 GB had been
transferred and then compressed on ZFS.


This was explained to him and that averted his fury for the time
being... but it got me thinking about how things such as the current
compression ratio for a volume could be indicated over an otherwise
ZFS-agnostic NFS export. The .zfs snapdir came to mind. Perhaps ZFS
could maintain a special file under there, called compressratio for
example, and a remote client could cat it or whatever to be aware of
how volume compression factors into their space usage.
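
On the server itself that number is already one command away (the
dataset name is made up); the RFE is essentially about surfacing
something like it to the NFS client:

    zfs get compressratio,used,referenced tank/customer1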


Any thoughts? A quick b.o.o search did bring up an existing RFE along
these lines, so I thought I'd mention that here.


/dale



Re: [zfs-discuss] Zfs best practice for 2U SATA iSCSI NAS

2007-02-19 Thread Jason J. W. Williams

Hi Nicholas,


> Actually Virtual Iron; they have a nice system at the moment with live
> migration of Windows guests.


Ah. We looked at them for some Windows DR. They do have a nice product.


> 3. Which leads to: coming from Debian, how easy are system updates? I
> remember that with OpenBSD system updates used to be a pain.


Not a pain, but coming from Debian/Gentoo, not great either. Packaging
is one of the last areas where Solaris really needs an upgrade. You
might want to take a look at Nexenta, which is OpenSolaris with a GNU
userland and apt-get. It works pretty well. Once installed, you can
update it to Build 56 to get the iSCSI target.

-J


Re: [zfs-discuss] Zfs best practice for 2U SATA iSCSI NAS

2007-02-19 Thread Nicholas Lee

On 2/20/07, Jason J. W. Williams <[EMAIL PROTECTED]> wrote:

> Ah. We looked at them for some Windows DR. They do have a nice product.

Just waiting for them to get iSCSI and VLAN support - supposedly
sometime in the next couple of months. Combined with ZFS/iSCSI it will
make a very nice small data center solution.

> Not a pain, but coming from Debian/Gentoo, not great either. Packaging
> is one of the last areas where Solaris really needs an upgrade. You
> might want to take a look at Nexenta, which is OpenSolaris with a GNU
> userland and apt-get. It works pretty well. Once installed, you can
> update it to Build 56 to get the iSCSI target.



I've thought about this. How stable is it for just serving
(iSCSI/NFS/CIFS) storage? What about when zones are added with a
database (PostgreSQL, MySQL) instance?

Nicholas


Re: Re[2]: [zfs-discuss] Zfs best practice for 2U SATA iSCSI NAS

2007-02-19 Thread Nicholas Lee

On 2/19/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:

> 5. There's no simple answer to this question, as it greatly depends on
>    workload and data. One thing you should keep in mind: Solaris *has*
>    to boot in 64-bit mode if you want to use all that memory as a
>    cache for ZFS, so old 32-bit x86 CPUs are not welcome.



Any rules of thumb? E.g. 512 MB or 1024 MB of RAM per TB of storage?

Nicholas


Re: [zfs-discuss] Google paper on disk reliability

2007-02-19 Thread Richard Elling

Akhilesh Mritunjai wrote:

> I believe the word has gone around already: Google engineers have
> published a paper on disk reliability. It might supplement the ZFS FMA
> integration and, well, all the numerous debates on spares etc. over
> here.


Good paper.  They validate the old saying: complex systems fail in
complex ways. We've also done some internal (Sun) studies which cast
doubt on the ability of SMART to predict failures.


To quote /.:

> The Google engineers just published a paper on Failure Trends in a
> Large Disk Drive Population. Based on a study of 100,000 disk drives
> over 5 years they find some interesting stuff. To quote from the
> abstract: 'Our analysis identifies several parameters from the drive's
> self monitoring facility (SMART) that correlate highly with failures.
> Despite this high correlation, we conclude that models based on SMART
> parameters alone are unlikely to be useful for predicting individual
> drive failures. Surprisingly, we found that temperature and activity
> levels were much less correlated with drive failures than previously
> reported.'

The link to the paper is http://labs.google.com/papers/disk_failures.pdf


As for the spares debate, that is easy: use spares :-)
 -- richard


Re: [zfs-discuss] Google paper on disk reliability

2007-02-19 Thread Torrey McMahon

Richard Elling wrote:

> Akhilesh Mritunjai wrote:
>> I believe the word has gone around already: Google engineers have
>> published a paper on disk reliability. It might supplement the ZFS
>> FMA integration and, well, all the numerous debates on spares etc.
>> over here.
>
> Good paper.  They validate the old saying: complex systems fail in
> complex ways. We've also done some internal (Sun) studies which cast
> doubt on the ability of SMART to predict failures.


... which is why we were never really fans of turning it on.