Re: [zfs-discuss] zfs-fuse mirror unavailable after upgrade to ubuntu 9.04

2009-04-27 Thread Fajar A. Nugraha
On Tue, Apr 28, 2009 at 11:49 AM, Julius Roberts wrote: > Hi there, > > juli...@rainforest:~$ cat /etc/issue > Ubuntu 9.04 \n \l > juli...@rainforest:~$ dpkg -l | grep -i zfs-fuse > ii  zfs-fuse  0.5.1-1ubuntu5 First of all, this question might be more appropriate o…

[zfs-discuss] zfs-fuse mirror unavailable after upgrade to ubuntu 9.04

2009-04-27 Thread Julius Roberts
Hi there, juli...@rainforest:~$ cat /etc/issue Ubuntu 9.04 \n \l juli...@rainforest:~$ dpkg -l | grep -i zfs-fuse ii zfs-fuse 0.5.1-1ubuntu5 I have two 320 GB SATA disks connected to a PCI RAID controller: juli...@rainforest:~$ lspci | grep -i sata 00:08.0 RAID bu…

Re: [zfs-discuss] Raidz vdev size... again.

2009-04-27 Thread Scott Lawson
Richard Elling wrote: Some history below... Scott Lawson wrote: Michael Shadle wrote: On Mon, Apr 27, 2009 at 4:51 PM, Scott Lawson wrote: If possible though you would be best to let the 3ware controller expose the 16 disks as a JBOD to ZFS and create a RAIDZ2 within Solaris as you

Re: [zfs-discuss] storage & zilstat assistance

2009-04-27 Thread Bob Friesenhahn
I have now downloaded zilstat.ksh and this is the sort of loading it reports with my StorageTek 2540 while running the initial writer part of the benchmark: % ./zilstat.ksh -p Sun_2540 -l 30 10 N-Bytes N-Bytes/s N-Max-Rate B-Bytes B-Bytes/s B-Max-Rate ops <=4kB 4-32kB >=32kB 41301…

Re: [zfs-discuss] storage & zilstat assistance

2009-04-27 Thread Bob Friesenhahn
On Mon, 27 Apr 2009, Marion Hakanson wrote: I guess one question I'd add is: The "ops" numbers seem pretty small. Is it possible to give enough spindles to a pool to handle that many IOP's without needing an NVRAM cache? I know latency comes into play at some point, but are we at that point?
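The "enough spindles" question lends itself to a rough back-of-the-envelope check: divide the synchronous-write rate by what one spindle can deliver. A minimal sketch, with illustrative numbers that are assumptions, not measurements from this thread:

```shell
# Hypothetical figures, NOT from the thread: assume the ZIL sees
# ~1500 synchronous writes/s and one 7200 RPM SATA disk sustains ~100 IOPS.
SYNC_IOPS=1500
PER_DISK_IOPS=100

# Spindles needed to absorb the sync load without an NVRAM/slog device
# (ceiling division)
echo $(( (SYNC_IOPS + PER_DISK_IOPS - 1) / PER_DISK_IOPS ))
```

Even at these generous per-disk numbers the spindle count grows quickly, which is where latency, not just raw IOPS, starts to dominate the decision.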

[zfs-discuss] storage & zilstat assistance

2009-04-27 Thread Marion Hakanson
Greetings, We have a small Oracle project on ZFS (Solaris 10), using a SAN-connected array which is in need of replacement. I'm weighing whether to recommend a Sun 2540 array or a Sun J4200 JBOD as the replacement. The old array and the new ones all have 7200 RPM SATA drives. I've been watching the…

Re: [zfs-discuss] Raidz vdev size... again.

2009-04-27 Thread Bob Friesenhahn
On Mon, 27 Apr 2009, Michael Shadle wrote: I was still operating under the impression that vdevs larger than 7-8 disks typically make baby Jesus nervous. Baby Jesus might not be particularly nervous but if your drives don't perform consistently, then there will be more chance of performance

Re: [zfs-discuss] Raidz vdev size... again.

2009-04-27 Thread Richard Elling
Some history below... Scott Lawson wrote: Michael Shadle wrote: On Mon, Apr 27, 2009 at 4:51 PM, Scott Lawson wrote: If possible though you would be best to let the 3ware controller expose the 16 disks as a JBOD to ZFS and create a RAIDZ2 within Solaris as you will then gain the full b…

Re: [zfs-discuss] Raidz vdev size... again.

2009-04-27 Thread Scott Lawson
Michael Shadle wrote: On Mon, Apr 27, 2009 at 5:32 PM, Scott Lawson wrote: One thing you haven't mentioned is the drive type and size that you are planning to use as this greatly influences what people here would recommend. RAIDZ2 is built for big, slow SATA disks as reconstruction times

Re: [zfs-discuss] Raidz vdev size... again.

2009-04-27 Thread Michael Shadle
On Mon, Apr 27, 2009 at 5:32 PM, Scott Lawson wrote: > One thing you haven't mentioned is the drive type and size that you are > planning to use as this > greatly influences what people here would recommend. RAIDZ2 is built for > big, slow SATA > disks as reconstruction times in large RAIDZ's and

Re: [zfs-discuss] Raidz vdev size... again.

2009-04-27 Thread Scott Lawson
Michael Shadle wrote: On Mon, Apr 27, 2009 at 4:51 PM, Scott Lawson wrote: If possible though you would be best to let the 3ware controller expose the 16 disks as a JBOD to ZFS and create a RAIDZ2 within Solaris as you will then gain the full benefits of ZFS. Block self healing etc etc.

Re: [zfs-discuss] Raidz vdev size... again.

2009-04-27 Thread Michael Shadle
On Mon, Apr 27, 2009 at 4:51 PM, Scott Lawson wrote: > If possible though you would be best to let the 3ware controller expose > the 16 disks as a JBOD to ZFS and create a RAIDZ2 within Solaris as you > will then > gain the full benefits of ZFS. Block self healing etc etc. > > There isn't an iss…

Re: [zfs-discuss] Raidz vdev size... again.

2009-04-27 Thread Scott Lawson
Leon, RAIDZ2 is roughly equivalent to RAID6: two disks' worth of parity data, allowing a double drive failure while still keeping the pool available. If possible, though, you would be best to let the 3ware controller expose the 16 disks as a JBOD to ZFS and create a RAIDZ2 within Solaris, as you will then gain…
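The parity cost of RAIDZ2 works out simply: with 16 drives in a single RAIDZ2 vdev, two drives' worth of space goes to parity. A quick sketch of the arithmetic (the 1 TB drive size is an assumed example, not a figure from this thread):

```shell
# Assumed example: 16 x 1 TB drives in one RAIDZ2 vdev
DISKS=16
PARITY=2          # RAIDZ2 dedicates two disks' worth of space to parity
SIZE_TB=1

USABLE=$(( (DISKS - PARITY) * SIZE_TB ))
echo "usable: ${USABLE} TB of $(( DISKS * SIZE_TB )) TB raw"
```

So a single wide RAIDZ2 vdev spends only 2/16 of raw capacity on parity; the trade-off discussed in this thread is reconstruction time and random-IOPS behavior, not capacity.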

[zfs-discuss] Raidz vdev size... again.

2009-04-27 Thread Leon Meßner
Hi, I'm new to the list so please bear with me. This isn't an OpenSolaris-related problem but I hope it's still the right list to post to. I'm on the way to moving a backup server to ZFS-based storage, but I don't want to spend too many drives on parity (the 16 drives are attached to a 3ware r…

Re: [zfs-discuss] Peculiarities of COW over COW?

2009-04-27 Thread Robert Milkowski
Hello Jeff, Monday, April 27, 2009, 9:12:26 AM, you wrote: >> ZFS blocksize is dynamic, power of 2, with a max size == recordsize. JB> Minor clarification: recordsize is restricted to powers of 2, but JB> blocksize is not -- it can be any multiple of sector size (512 bytes). JB> For small files,

Re: [zfs-discuss] [Fwd: ZFS user/group quotas & space accounting [PSARC/2009/204 FastTrack timeout 04/08/2009]]

2009-04-27 Thread Lin Ling
On 04/27/09 14:13, ольга крыжановская wrote: Will this work with Linux rquota clients, too? Olga It should. The ZFS userquota support for rquotad (CR 6824968) went into snv_114. It uses the same rquotad protocol. As long as the client can talk to rquotad, it will receive the usage/quo…

Re: [zfs-discuss] can zfs create return with no error code before the mount takes place?

2009-04-27 Thread Robert Milkowski
Hello Alastair, Monday, April 27, 2009, 10:18:50 PM, you wrote: > Yes generally the filesystem gets created just that the mount seems not to take place. Seems, or did you confirm it with the mount or df command? Do you mount it manually then? -- Best regards, Robert Milkowski

Re: [zfs-discuss] What causes slow performance under load?

2009-04-27 Thread Gary Mills
On Sat, Apr 18, 2009 at 04:27:55PM -0500, Gary Mills wrote: > We have an IMAP server with ZFS for mailbox storage that has recently > become extremely slow on most weekday mornings and afternoons. When > one of these incidents happens, the number of processes increases, the > load average increase

Re: [zfs-discuss] can zfs create return with no error code before the mount takes place?

2009-04-27 Thread Alastair Neil
Yes generally the filesystem gets created just that the mount seems not to take place. On Mon, Apr 27, 2009 at 3:41 PM, Robert Milkowski wrote: > Hello Alastair, > > Monday, April 27, 2009, 7:17:51 PM, you wrote: > On Tue, Apr 21, 2009 at 12:34 PM, Alastair Neil wrote: …

Re: [zfs-discuss] [Fwd: ZFS user/group quotas & space accounting [PSARC/2009/204 FastTrack timeout 04/08/2009]]

2009-04-27 Thread ольга крыжановская
Will this work with Linux rquota clients, too? Olga On 4/1/09, Matthew Ahrens wrote: > Mike Gerdts wrote: > > On Tue, Mar 31, 2009 at 7:12 PM, Matthew Ahrens wrote: > > > River Tarnell wrote: > > > > Matthew Ahrens: > > > > > ZFS user quotas (like other zfs properties) wil…

Re: [zfs-discuss] can zfs create return with no error code before the mount takes place?

2009-04-27 Thread Robert Milkowski
Hello Alastair, Monday, April 27, 2009, 7:17:51 PM, you wrote: > On Tue, Apr 21, 2009 at 12:34 PM, Alastair Neil wrote: A very basic question. I have in recent releases of OpenSolaris found that a script I use to create a large number of account home directories has…

Re: [zfs-discuss] [Fwd: ZFS user/group quotas & space accounting [PSARC/2009/204 FastTrack timeout 04/08/2009]]

2009-04-27 Thread Mike Gerdts
On Tue, Mar 31, 2009 at 8:47 PM, River Tarnell wrote: > -BEGIN PGP SIGNED MESSAGE- > Hash: SHA1 > > Matthew Ahrens: >>> does this mean that without an account on the NFS server, a user cannot see >>> his >>> current disk use / quota? > >> That's correct. > > in this case, might i suggest

Re: [zfs-discuss] can zfs create return with no error code before the mount takes place?

2009-04-27 Thread Alastair Neil
On Tue, Apr 21, 2009 at 12:34 PM, Alastair Neil wrote: > A very basic question. I have in recent releases of OpenSolaris found that > a script I use to create a large number of account home directories has been > failing because the script attempts to create and modify the directories > after the…
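One defensive pattern for a provisioning script like this is to poll until the new filesystem's mountpoint is actually mounted before creating directories inside it. A minimal POSIX-sh sketch; the `wait_mounted` helper, the timeout, and the example dataset names are mine, not from the thread:

```shell
# Hypothetical helper: succeed once PATH is the root of a mounted
# filesystem, or fail after N one-second attempts. Uses only df/awk,
# so it works whether the mount is done by 'zfs create' or anything else.
wait_mounted() {
  path=$1
  tries=${2:-10}
  while [ "$tries" -gt 0 ]; do
    # df -P prints "Mounted on" in the 6th field; it equals $path
    # only when $path is itself a mountpoint.
    if [ "$(df -P "$path" 2>/dev/null | awk 'NR==2 {print $6}')" = "$path" ]; then
      return 0
    fi
    tries=$((tries - 1))
    sleep 1
  done
  return 1
}

# Usage sketch (dataset name is hypothetical):
#   zfs create tank/home/alice
#   wait_mounted /tank/home/alice && chown alice /tank/home/alice

# Demo on a path that is always a mountpoint:
wait_mounted / && echo "/ is mounted"
```

This sidesteps the race entirely: whether or not `zfs create` returns before the mount completes, the script only proceeds once the mountpoint is live.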

Re: [zfs-discuss] Peculiarities of COW over COW?

2009-04-27 Thread David Magda
On Mon, April 27, 2009 02:13, Tomas Ögren wrote: > On 26 April, 2009 - Gary Mills sent me these 1,3K bytes: > >> I prefer NFS too, but the IMAP server requires POSIX semantics. >> I believe that NFS doesn't support that, at least NFS version 3. > > What non-POSIXness are you referring to, or is it

Re: [zfs-discuss] What is the 32 GB 2.5-Inch SATA Solid State Drive?

2009-04-27 Thread Mike Watkins
Create the zpool with: zpool create ... log (for the ZIL); zpool create ... cache (for the L2ARC). On Sat, Apr 25, 2009 at 11:13 PM, Richard Elling wrote: > Gary Mills wrote: > > On Fri, Apr 24, 2009 at 09:08:52PM -0700, Richard Elling wrote: > > > Gary Mills wrote: > > > Does anyone k…

Re: [zfs-discuss] Peculiarities of COW over COW?

2009-04-27 Thread Jeff Bonwick
> ZFS blocksize is dynamic, power of 2, with a max size == recordsize. Minor clarification: recordsize is restricted to powers of 2, but blocksize is not -- it can be any multiple of sector size (512 bytes). For small files, this matters: a 37k file is stored in a 37k block. For larger, multi-bloc…
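The rounding Jeff describes can be sketched as follows: a file smaller than the recordsize gets a single block, sized up to the next multiple of the 512-byte sector. A small illustration (the specific byte counts are examples of mine, not from the post):

```shell
# Round a file size up to the next 512-byte sector boundary, as happens
# for single-block files below the recordsize (default 128k).
block_for() {
  bytes=$1
  echo $(( (bytes + 511) / 512 * 512 ))
}

block_for 37888    # a "37k" file: already sector-aligned, stays 37888
block_for 37000    # not aligned: rounds up to the next 512-byte multiple
```

So unlike a power-of-2 scheme, a 37k file costs 37k on disk rather than being padded out to 64k; only sub-sector slack (at most 511 bytes) is wasted.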