Re: [zfs-discuss] Missing zpool devices, what are the options

2007-11-14 Thread David Bustos
Quoth Mark Ashley on Mon, Nov 12, 2007 at 11:35:57AM +1100:
 Is it possible to tell ZFS to forget those SE6140 LUNs ever belonged
 to the zpool? I know that ZFS will probably have put some user data on
 them, but if there is any possibility of recovering any of those zvols
 on the zpool it'd really help a lot, to put it mildly. My understanding
 is that the metadata will be spread around and polluted by now, even
 after a few days of the SE6140 LUNs being gone, but I thought I'd ask.

No.  I believe this is 4852783, reduce pool capacity, which hasn't
been implemented yet.  (I don't know whether it's being worked on.)
I think your best bet is to copy off any data you can get to, and
recreate the pool.
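
If the pool is still importable, something along these lines might work
(pool, dataset, and device names here are made up, and zfs send may not
be able to traverse a damaged pool, in which case a plain cp or tar of
whatever is still reachable works too):

    # save whatever is still readable
    zfs snapshot tank/data@rescue
    zfs send tank/data@rescue > /backup/data.zfs
    # recreate the pool on the devices you still trust
    zpool destroy tank
    zpool create tank mirror c2t0d0 c2t1d0
    zfs receive tank/data < /backup/data.zfs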


David


Re: [zfs-discuss] dfratime on zfs

2007-08-16 Thread David Bustos
Quoth Darren Dunham on Wed, Aug 15, 2007 at 12:50:33PM -0700:
 But a traditional filesystem isn't going to write anything without a
 request.  As things currently work, ZFS is constantly updating the
 pool/uberblock status.  So even if you choose to defer the atime
 update until much later, that won't prevent writes from being
 scheduled anyway.

Why does ZFS update the uberblock when there are no writes?
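
For what it's worth, this is easy to watch on an otherwise idle pool
(the pool name tank is made up):

    # one-second samples; the periodic nonzero write columns on an
    # idle pool are the transaction group commits
    zpool iostat tank 1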


David


Re: [zfs-discuss] netbsd client can mount zfs snapshot dir but it never updates

2007-06-11 Thread David Bustos
Quoth Ed Ravin on Thu, Jun 07, 2007 at 09:57:52PM -0700:
 My Solaris 10 box is exporting a ZFS filesystem over NFS.  I'm
 accessing the data with a NetBSD 3.1 client, which only supports NFS
 3.  Everything works except when I look at the .zfs/snapshot
 directory.  The first time I list out the .zfs/snapshot directory,
 I get a correct listing of the contents.  An hour later, when
 a snapshot has been deleted and a new one created, I still see the
 same listing.  If I type in the name of the new snapshot manually,
 I can access it, but the contents of the .zfs/snapshot directory as
 seen by the NetBSD 3.1 box never changes (well, not in the last 24
 hours since I started testing this).

I'm not an NFS expert, but I do know that non-Solaris clients have had
problems accessing the snapshot directory via NFSv3 in the past.  See
http://www.opensolaris.org/jive/thread.jspa?messageID=45927#45927 .
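
One workaround that sometimes helps is shrinking or disabling the
client's NFS attribute cache so that directory listings get refetched.
On a Solaris client that would look like the line below; NetBSD's
mount_nfs spells its options differently, so treat this purely as an
illustration:

    # NFSv3 mount with attribute caching disabled (Solaris syntax)
    mount -F nfs -o vers=3,noac server:/tank/fs /mnt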


David


Re: [zfs-discuss] Trying to understand zfs RAID-Z

2007-05-18 Thread David Bustos
Quoth Steven Sim on Thu, May 17, 2007 at 09:55:37AM +0800:
 Gurus:
 I am exceedingly impressed by ZFS, although it is my humble opinion
 that Sun is not doing enough evangelizing for it.

What else do you think we should be doing?


David


Re: [zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2007-01-01 Thread David Bustos
Quoth Darren J Moffat on Thu, Dec 21, 2006 at 03:31:59PM +:
 Pawel Jakub Dawidek wrote:
 I like the idea, I really do, but it will be so expensive because of
 ZFS's COW model. Not only will file removal or truncation trigger
 bleaching, but every single file system modification will... Heh,
 well, if the privacy of your data is important enough, you probably
 don't care too much about performance.
 
 I'm not sure it will be that slow; the bleaching will be done in a
 separate (new) transaction group in most (probably all) cases anyway,
 so it shouldn't really impact your write performance unless you are
 very I/O bound and already running near the limit.  However, this is
 speculation until someone tries to implement it!

Bleaching previously used blocks will corrupt files pointed to by older
uberblocks.  I think that means that you'd have to verify that the new
uberblock is readable before you proceed, since part of ZFS's fault
tolerance is falling back to the most recent good uberblock if the
latest one is corrupt.  I don't think this makes bleaching unworkable,
but the interplay will require analysis.
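
For anyone who wants to poke at this, zdb will show the uberblocks that
the fallback relies on (pool and device names here are made up):

    # print the pool's active uberblock
    zdb -u tank
    # dump an on-disk label; each label carries an array of recent
    # uberblocks
    zdb -l /dev/rdsk/c0t0d0s0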


David


Re: [zfs-discuss] raidz DEGRADED state

2006-12-05 Thread David Bustos
Quoth Thomas Garner on Thu, Nov 30, 2006 at 06:41:15PM -0500:
 I currently have a 400GB disk that is full of data on a linux system.
 If I buy 2 more disks and put them into a raid-z'ed zfs under solaris,
 is there a generally accepted way to build an degraded array with the
 2 disks, copy the data to the new filesystem, and then move the
 original disk to complete the array?

No, because we currently can't add disks to a raidz array.  You could
create a mirror instead and then add in the other disk to make
a three-way mirror, though.
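
Roughly, with made-up device names:

    # mirror the two new disks and copy the 400GB over
    zpool create tank mirror c1t1d0 c1t2d0
    # ...copy the data...
    # then attach the original disk to get a three-way mirror
    zpool attach tank c1t1d0 c1t3d0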

Even doing that would be dicey if you only have a single machine,
though, since Solaris can't natively read the popular Linux filesystems.
I believe there is freeware to do it, but nothing supported.


David


Re: [zfs-discuss] ZFS on production servers with SLA

2006-09-15 Thread David Bustos
Quoth Darren J Moffat on Fri, Sep 08, 2006 at 01:59:16PM +0100:
 Nicolas Dorfsman wrote:
  Regarding system partitions (/var, /opt, all mirrored + alternate
  disk), what would be YOUR recommendations?  ZFS or not?
 
 /var for now must be UFS, since Solaris 10 does not have ZFS root
 support, and that means /, /etc, /var, and /usr.

Once 6354489 was fixed, I believe Stephen Hahn got zfs-on-/usr working.
That might be painful to upgrade, though.

 I've run systems with
 /opt as a ZFS filesystem and it works just fine.  However, note that
 the Solaris installer puts stuff in /opt (for backwards compat
 reasons; ideally it wouldn't) and that may cause issues with Live
 Upgrade or require you to move that stuff onto your ZFS /opt datasets.

I also use ZFS for /opt.  I have to unmount it before using Live
Upgrade, though, because it refuses to leave /opt on a separate
filesystem.  I suppose it's right to refuse, since the package database
may refer to files in /opt, but I haven't had any problems.
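
For the record, my routine looks roughly like this (dataset and boot
environment names are made up, and lucreate usually wants more options
than I show here):

    # /opt lives on its own dataset
    zfs create -o mountpoint=/opt tank/opt
    # before Live Upgrade, get /opt out of the way
    zfs umount tank/opt
    lucreate -n newBE
    zfs mount tank/opt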


David


Re: [zfs-discuss] howto reduce ?zfs introduced? noise

2006-07-14 Thread David Bustos
Quoth Thomas Maier-Komor on Thu, Jul 13, 2006 at 04:19:11AM -0700:
 after switching over to zfs from ufs for my ~/ at home, I am a little
 bit disturbed by the noise the disks are making. To be more precise,
 I always have thunderbird and firefox running on my desktop and either
 or both seem to be writing to my ~/ at short intervals and ZFS flushes
 these transactions at intervals about 2-5 seconds to the disks. In
 contrast UFS seems to be doing a little bit more aggressive caching,
 which reduces disk noise.

I'd bet that this is due to your applications repeatedly reading some
file, for which ZFS updates the atime; that requires writing more
blocks than it would on UFS.  If so, you could fix it by mounting with
noatime, but that's probably not a great idea.
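
With ZFS, atime is a property rather than a mount option; for a
hypothetical home dataset:

    # stop atime-only writes for this filesystem
    zfs set atime=off tank/home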


David