Re: [zfs-discuss] Missing zpool devices, what are the options

2007-11-14 Thread David Bustos
Quoth Mark Ashley on Mon, Nov 12, 2007 at 11:35:57AM +1100:
> Is it possible to tell ZFS to forget those SE6140 LUNs ever belonged to the
> zpool? I know that ZFS will have probably put some user data on them, but if
> there is a possibility of recovering any of those zvols on the zpool 
> it'd really help a lot, to put it mildly. My understanding is all the
> metadata will be spread around and polluted by now, even after a few
> days of the SE6140 LUNs being linked, but I thought I'd ask.

No.  I believe this is 4852783, "reduce pool capacity", which hasn't
been implemented yet.  (I don't know whether it's being worked on.)
I think your best bet is to copy off any data you can get to, and
recreate the pool.
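
Untested, and the pool, dataset, and device names below are made up, but
the copy-and-recreate step could look something like this:

  # copy whatever is still readable somewhere safe
  cd /tank && find . -print | cpio -pdm /scratch
  # then start over with only the devices you want to keep
  zpool destroy -f tank
  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0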


David
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Parallel zfs destroy results in No more processes

2007-10-24 Thread David Bustos
Quoth Stuart Anderson on Sun, Oct 21, 2007 at 07:09:10PM -0700:
> Running 102 parallel "zfs destroy -r" commands on an X4500 running S10U4 has
> resulted in "No more processes" errors in existing login shells for several
> minutes of time, but then fork() calls started working again.  However, none
> of the zfs destroy processes have actually completed yet, which is odd since
> some of the filesystems are trivially small.
...
> Is this a known issue?  Any ideas on what resource lots of zfs commands use
> up to prevent fork() from working?

ZFS is known to use a lot of memory.  I suspect this problem has
diminished in recent Nevada builds.  Can you try this on Nevada?
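
If you can still get a shell while it's happening, it might also be worth
checking whether kernel memory is what's being exhausted; the standard mdb
::memstat dcmd gives a rough breakdown (run as root):

  echo ::memstat | mdb -k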


David
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] dfratime on zfs

2007-08-16 Thread David Bustos
Quoth Darren Dunham on Wed, Aug 15, 2007 at 12:50:33PM -0700:
> But a traditional filesystem isn't going to write anything without a
> request.  ZFS is constantly updating the pool/uberblock status the way
> things currently work.  So even if you choose to defer the atime update
> until much longer, it won't prevent writes from being scheduled anyway.

Why does ZFS update the uberblock when there are no writes?


David
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] dfratime on zfs

2007-08-15 Thread David Bustos
Quoth Darren J Moffat on Thu, Aug 09, 2007 at 10:32:02AM +0100:
> Prompted by a recent /. article on atime vs relatime ranting by some 
> Linux kernel hackers (Linus included) I went back and looked at the 
> mount_ufs(1M) man page because I was sure that OpenSolaris had more than 
> just atime,noatime.  Yep sure enough UFS has dfratime.
> 
> So that got me wondering does ZFS need dfratime or is it just not a 
> problem because ZFS works in a different way.

I believe ZFS will delay atime updates waiting for more writes to come
in, but it will eventually write them anyway (5 seconds?).  dfratime
postpones the write until another write comes in, so it seems legitimate
to me for ZFS to have such an option.

>If ZFS did have dfratime 
> how would it impact the "always consistent on disk" requirement?  One 
> thought was that the ZIL would need to be used to ensure that the writes 
> got to disk eventually, but then that would mean we were still writing 
> just to the ZIL instead of the dataset itself.

I don't think dfratime changes the disk consistency at all.  Wouldn't
writing to the ZIL with dfratime on defeat the purpose?  If we need to
write to the ZIL for some other write, though, then it would be ok to
flush the atime updates out too.

All that said, I believe the primary use case for dfratime is laptops,
and therefore it shouldn't be a high priority for the ZFS team.


David
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] netbsd client can mount zfs snapshot dir but it never updates

2007-06-11 Thread David Bustos
Quoth Ed Ravin on Thu, Jun 07, 2007 at 09:57:52PM -0700:
> My Solaris 10 box is exporting a ZFS filesystem over NFS.  I'm
> accessing the data with a NetBSD 3.1 client, which only supports NFS
> 3.  Everything works except when I look at the .zfs/snapshot
> directory.  The first time I list out the .zfs/snapshot directory,
> I get a correct listing of the contents.  An hour later, when
> a snapshot has been deleted and a new one created, I still see the
> same listing.  If I type in the name of the new snapshot manually,
> I can access it, but the contents of the .zfs/snapshot directory as
> seen by the NetBSD 3.1 box never changes (well, not in the last 24
> hours since I started testing this).

I'm not an NFS expert, but I do know that non-Solaris clients have had
problems accessing the snapshot directory via NFSv3 in the past.  See
http://www.opensolaris.org/jive/thread.jspa?messageID=45927 .
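
As a workaround, since you said accessing a new snapshot by name works,
you could mount the snapshot you want directly from the client (server,
dataset, and snapshot names here are made up):

  mount_nfs server:/tank/fs/.zfs/snapshot/hourly.0 /mnt/snap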


David
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Trying to understand zfs RAID-Z

2007-05-18 Thread David Bustos
Quoth Steven Sim on Thu, May 17, 2007 at 09:55:37AM +0800:
>Gurus;
>I am exceedingly impressed by ZFS, although it is my humble opinion
>that Sun is not doing enough evangelizing for it.

What else do you think we should be doing?


David
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool(1M): import -a?

2007-03-12 Thread David Bustos
My copies of zpool(1M) have three entries for zpool import:

 zpool import [-d dir] [-D]
 zpool import [-d dir] [-D] [-f] [-o opts] [-R root] pool | id [newpool]
 zpool import [-d dir] [-D] [-f] [-a]

Shouldn't the last one be

 zpool import [-d dir] [-D] [-f] -a

?  That is, if the -a is optional, how can we tell the difference
between it and the first version?
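
For what it's worth, the three behaviors really are distinct in practice
(pool name made up):

  zpool import          # list pools available for import, change nothing
  zpool import tank     # import the pool named tank
  zpool import -a       # import every pool that can be found

so the synopsis for the last form would be clearer with a mandatory -a.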


David
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2007-01-01 Thread David Bustos
Quoth Darren J Moffat on Thu, Dec 21, 2006 at 03:31:59PM +:
> Pawel Jakub Dawidek wrote:
> >I like the idea, I really do, but it will be so expensive because of
> >ZFS' COW model. Not only file removal or truncation will call bleaching,
> >but every single file system modification... Heh, well, if privacy of
> >your data is important enough, you probably don't care too much about
> >performance. 
> 
> I'm not sure it will be that slow; the bleaching will be done in a 
> separate (new) transaction group in most (probably all) cases anyway so 
> it shouldn't really impact your write performance unless you are very 
> I/O bound and already running near the limit.  However this is 
> speculation until someone tries to implement this!

Bleaching previously used blocks will corrupt files pointed to by older
uberblocks.  I think that means that you'd have to verify that the new
uberblock is readable before you proceed, since part of ZFS's fault
tolerance is falling back to the most recent good uberblock if the
latest one is corrupt.  I don't think this makes bleaching unworkable,
but the interplay will require analysis.


David
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] raidz DEGRADED state

2006-12-05 Thread David Bustos
Quoth Thomas Garner on Thu, Nov 30, 2006 at 06:41:15PM -0500:
> I currently have a 400GB disk that is full of data on a linux system.
> If I buy 2 more disks and put them into a raid-z'ed zfs under solaris,
> is there a generally accepted way to build a degraded array with the
> 2 disks, copy the data to the new filesystem, and then move the
> original disk to complete the array?

No, because we currently can't add disks to a raidz array.  You could
create a mirror instead and then add in the other disk to make
a three-way mirror, though.
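
For example (device names made up), something like:

  zpool create tank mirror c1t1d0 c1t2d0
  # ... copy the data over from the Linux disk ...
  zpool attach tank c1t1d0 c1t3d0

where the attach turns the two-way mirror into a three-way one once the
old disk is free.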

Even doing that would be dicey if you only have a single machine,
though, since Solaris can't natively read the popular Linux filesystems.
I believe there is freeware to do it, but nothing supported.


David
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS on production servers with SLA

2006-09-15 Thread David Bustos
Quoth Darren J Moffat on Fri, Sep 08, 2006 at 01:59:16PM +0100:
> Nicolas Dorfsman wrote:
> > Regarding "system partitions" (/var, /opt, all mirrored + alternate 
> > disk), what would be YOUR recommendations ?  ZFS or not ?
> 
> /var for now must be UFS since Solaris 10 doesn't have ZFS root 
> support and that means /, /etc/, /var/, /usr.

Once 6354489 was fixed, I believe Stephen Hahn got zfs-on-/usr working.
That might be painful to upgrade, though.

> I've run systems with 
> /opt as a ZFS filesystem and it works just fine.  However note that the 
> Solaris installer puts stuff in /opt (for backwards compat reasons, 
> ideally it wouldn't) and that may cause issues with live upgrade or 
> require you to move that stuff onto your ZFS /opt datasets.

I also use zfs for /opt.  I have to unmount it before using Live
Upgrade, though, because it refuses to leave /opt on a separate
filesystem.  I suppose it's right, since the package database may refer
to files in /opt, but I haven't had any problems.
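
Roughly (boot environment and dataset names are made up, and substitute
whatever lucreate invocation you normally use):

  zfs unmount tank/opt
  lucreate -n newBE ...
  zfs mount tank/opt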


David
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] howto reduce ?zfs introduced? noise

2006-07-14 Thread David Bustos
Quoth Thomas Maier-Komor on Thu, Jul 13, 2006 at 04:19:11AM -0700:
> after switching over to zfs from ufs for my ~/ at home, I am a little
> bit disturbed by the noise the disks are making. To be more precise,
> I always have thunderbird and firefox running on my desktop and either
> or both seem to be writing to my ~/ at short intervals and ZFS flushes
> these transactions at intervals of about 2-5 seconds to the disks. In
> contrast UFS seems to be doing a little bit more aggressive caching,
> which reduces disk noise.

I'd bet that this is due to your applications repeatedly reading some
file, for which ZFS updates the atime, which requires writing more
blocks than with UFS.  If so, you could fix it by mounting with
noatime, but that's probably not a great idea.
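
For reference, the ZFS knob for that is a per-dataset property rather
than a mount option (dataset name made up):

  zfs set atime=off tank/home/thomas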


David
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs list - column width

2006-07-10 Thread David Bustos
Quoth [EMAIL PROTECTED] on Mon, Jul 10, 2006 at 10:33:44AM +0200:
...
> I would like to customise the column width to have a nicer view of the hierarchy.
> Wouldn't it be good to have some sort of configuration file in which I could
> set up the column width?

No.  What you really want is for zfs to keep the columns aligned by
expanding them appropriately.

  6349494 'zfs list' output annoying for even moderately long dataset names
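
In the meantime, -o and -H at least let you pick the columns and get raw,
tab-separated output that you can reformat however you like:

  zfs list -H -o name,used,avail,refer,mountpoint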


David
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss