Re: [zfs-discuss] ZFS/WAFL lawsuit

2007-09-10 Thread Joerg Schilling
David Hopwood [EMAIL PROTECTED] wrote:

 Al Hopper wrote:
  So back to patent portfolios: yes there will be (public and private) 
  posturing; yes there will be negotiations; and, ultimately, there will 
  be a resolution.  All of this won't affect ZFS or anyone running ZFS.

 It matters a great deal what the resolution is. The best outcome, for
 everyone wanting to use any COW and/or always-consistent-on-disk filesystem
 (including btrfs and others), would be for the invalidation part of
 NetApp's lawsuit to succeed, and the infringement part to fail.
 A cross-licensing deal or other out-of-court settlement would be much
 less helpful.

Invalidating COW filesystem patents would of course be the best.
Unfortunately those lawsuits are usually not handled in the open, and in order
to understand everything you would need to know about the background interests
of both parties.

Jörg

-- 
 EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
   [EMAIL PROTECTED](uni)  
   [EMAIL PROTECTED] (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Query on Zpool mount point

2007-09-10 Thread dudekula mastan
Hi All,
   
  At the time of zpool creation, the user controls the zpool mount point using the
-m option. Is there a way to change this mount point dynamically?
   
  Your help is appreciated.
   
  Thanks & Regards
  Masthan D

   
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Query on Zpool mount point

2007-09-10 Thread Darren J Moffat
dudekula mastan wrote:
 Hi All,
  
 At the time of zpool creation, user controls the zpool mount point by 
 using -m option. Is there a way to change this mount point dynamically ?

By dynamically I assume you mean after the pool has been created.  If yes, 
then do this if your pool is called tank and you want it mounted on 
/foo/bar:

zfs set mountpoint=/foo/bar tank

Alternatively see the -R option for zpool import.
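
As a quick sketch (pool name tank as above; /a is just a hypothetical
alternate root for the import case):

  zfs get mountpoint tank            # check the current setting
  zfs set mountpoint=/foo/bar tank   # takes effect immediately; the dataset is remounted
  zpool export tank
  zpool import -R /a tank            # all datasets mount relative to /a for this import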

-- 
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Consequences of adding a root vdev later?

2007-09-10 Thread Curtis Schiewek
So,

If I have a pool that is made up of 2 raidz vdevs, is all data striped across both?
So if I somehow lose a vdev, I lose all my data?!
 
 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Consequences of adding a root vdev later?

2007-09-10 Thread Mario Goebbels
 If I have a pool that made up of 2 raidz vdevs, all data is striped across?   
 So if I somehow lose a vdev I lose all my data?!

If your vdevs are RAID-Zs, it would take a rare coincidence
to break the pool (two disks failing in the same RAID-Z)...

But yeah, ZFS spreads blocks to different vdevs, trying to balance the
bandwidths of the vdevs.
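
For illustration (device names made up), a pool built from two RAID-Z vdevs
looks like this, and ZFS stripes blocks across both top-level vdevs:

  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
                    raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0
  zpool status tank   # two raidz1 groups; losing a whole group means losing the pool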

-mg



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] An Academic Sysadmin's Lament for ZFS ?

2007-09-10 Thread Brian H. Nelson
Stephen Usher wrote:
 Brian H. Nelson:

 I'm sure it would be interesting for those on the list if you could 
 outline the gotchas so that the rest of us don't have to re-invent the 
 wheel... or at least not fall down the pitfalls.
   

I believe I ran into one or both of these bugs:

6429996 zvols don't reserve enough space for requisite meta data
6430003 record size needs to affect zvol reservation size on RAID-Z

Basically what happened was that the zpool filled to 100% and broke UFS 
with 'no space left on device' errors. This was quite confusing to sort 
out, since UFS on the zvol still reported 30GB of free space.

I never got any replies to my request for more info and/or workarounds 
for the above bugs. My workaround and recommendation is to leave a 
'healthy' amount of un-allocated space in the zpool. I don't know what a 
good level for 'healthy' is. Currently I've left about 1% (2GB) on a 
200GB raid-z pool.
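
One way to enforce that kind of headroom (a rough sketch; the pool and dataset
names are hypothetical) is to park a reservation on an empty placeholder
dataset, so the other datasets can never consume the last couple of GB:

  zfs create tank/headroom
  zfs set reservation=2g tank/headroom
  zfs set mountpoint=none tank/headroom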

-Brian

-- 
---
Brian H. Nelson Youngstown State University
System Administrator   Media and Academic Computing
  bnelson[at]cis.ysu.edu
---

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS RAIDZ vs. RAID5.

2007-09-10 Thread Robert Milkowski
Hello Pawel,

Excellent job!

Now I guess it would be a good idea to get writes done properly,
even if it means making them slow (like with SVM). The end result
would be: if you want fast writes/slow reads, go ahead with
raid-z; if you need fast reads/slow writes, go with raid-5.

btw: I'm just thinking out loud - for raid-5 writes, couldn't you
somehow utilize the ZIL to make writes safe? I'm asking because we've
got the ability to put the zil somewhere else, like an NVRAM card...
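
For reference, on recent Nevada builds a separate intent log device can be
attached with something like the following (device name hypothetical; it would
be the NVRAM-backed LUN):

  zpool add tank log c3t0d0
  zpool status tank    # the device shows up under a separate logs section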


-- 
Best regards,
 Robert Milkowski  mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] An Academic Sysadmin's Lament for ZFS ?

2007-09-10 Thread Brian H. Nelson
Mike Gerdts wrote:
 The UFS on zvols option sounds intriguing to me, but I would guess
 that the following could be problems:

 1) Double buffering:  Will ZFS store data in the ARC while UFS uses
 traditional file system buffers?
   
This is probably an issue. You also have the journal+COW combination 
issue. I'm guessing that both would be performance concerns. My 
application is relatively low bandwidth, so I haven't dug deep into this 
area.
 2) Boot order dependencies.  How does the startup of zfs compare to
 processing of /etc/vfstab?  I would guess that this is OK due to
 legacy mount type supported by zfs.  If this is OK, then dfstab
 processing is probably OK.
Zvols by nature are not available under ZFS automatic mounting. You 
would need to add the /dev/zvol/dsk/... lines to /etc/vfstab just as you 
would for any other /dev/dsk... or /dev/md/dsk/... devices.

If you are not using the zpool itself for anything else, I would remove the 
automatic mount point for it.
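
A sketch of both steps (pool, zvol and mount point names are hypothetical):

  # /etc/vfstab entry for UFS on a zvol
  /dev/zvol/dsk/tank/vol1  /dev/zvol/rdsk/tank/vol1  /export/vol1  ufs  2  yes  logging

  # stop ZFS from mounting the pool's top-level dataset
  zfs set mountpoint=none tank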

-Brian

-- 
---
Brian H. Nelson Youngstown State University
System Administrator   Media and Academic Computing
  bnelson[at]cis.ysu.edu
---

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Consequences of adding a root vdev later?

2007-09-10 Thread Mario Goebbels
 I'm more worried about the availability of my data in the event of a
 controller failure.  I plan on using 4-channel SATA controllers and
 creating multiple 4-disk RAIDZ vdevs.  I want to use a single pool, but
 it looks like I can't, as controller failure = ZERO access, although the
 same can be said for any other non-redundant component.

ZFS labels each vdev to identify it as a drive, slice or partition
belonging to a pool. ZFS can easily reconstruct the pool from these labels
after a controller failure.

ZFS's preferred mode of operation is to use dumb disks, i.e. leaving jobs
like redundancy to the file system. So unless you're using
hardware RAID, which makes the disk configuration pretty much dependent
on the controller, changing a controller won't make your data blow up.

(I'm not sure what actually happens if you change the controller at random
and let the system boot - whether it automagically scans all controllers
and disks if zpool.cache doesn't match the system configuration, or whether
it blows up and requires manual intervention. I haven't yet had an occasion
to try it out.)
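
The manual-intervention path is not much work in any case (pool name
hypothetical):

  zpool export tank    # before swapping controllers, if the old one still works
  # ...move the disks...
  zpool import         # scans visible devices and lists importable pools
  zpool import tank    # add -f if the pool was never cleanly exported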

-mg



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] An Academic Sysadmin's Lament for ZFS ?

2007-09-10 Thread Brian H. Nelson
Stephen Usher wrote:

 Brian H. Nelson:

 I'm sure it would be interesting for those on the list if you could 
 outline the gotchas so that the rest of us don't have to re-invent the 
 wheel... or at least not fall down the pitfalls.
   
Also, here's a link to the ufs on zvol blog where I originally found the 
idea:

http://blogs.sun.com/scottdickson/entry/fun_with_zvols_-_ufs

-Brian

-- 
---
Brian H. Nelson Youngstown State University
System Administrator   Media and Academic Computing
  bnelson[at]cis.ysu.edu
---

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] An Academic Sysadmin's Lament for ZFS ?

2007-09-10 Thread Richard Elling
Mike Gerdts wrote:
 On 9/8/07, Richard Elling [EMAIL PROTECTED] wrote:
 Changing the topic slightly, the strategic question is:
 why are you providing disk space to students?
 
 For most programming and productivity (e.g. word processing, etc.)
 people will likely be better suited by having network access for their
 personal equipment with local storage.

Most students today are carrying around more storage in their pocket
than they'll get from the university.

 For cases when specialized expensive tools ($10k + per seat) are used,
 it is not practical to install them on hundreds or thousands of
 personal devices for a semester or two of work.  The typical computing
 lab that provides such tools is not well equipped to deal with
 removable media such as flash drives.

I disagree; any lab machine bought in the past 5 years or so has a USB
port, even SunRays.

 Further, such tools will often times be used to do designs that require
 simulations to run as batch jobs that run under grid computing tools such
 as Grid Engine, Condor, LSF, etc.

Yes, but you won't have 15,000 students running grid engine.  But even
if you do, you can adopt the services models now prevalent in the
industry.  For example, rather than providing storage for a class, let
Google or Yahoo do it.

 Then, of course, there are files that need to be shared, have reliable
 backups, etc.  Pushing that out to desktop or laptop machines is not
 really a good idea.

Clearly the business of a university has different requirements than
student instruction.  But even then, it seems we're stuck in the 1960s
rather than the 21st century.

I think I might have some home directory somewhere at USC, where I
currently attend, but I'm not really sure.  I know I have a (Sun-based :-)
email account with some sort of quota, but that isn't implemented as a
file system quota.  I keep my stuff in my pocket.  This won't work entirely
for situations like Steve's compute cluster, but it will for many.

There is also a long tail situation here, which is how I approached the
problem at eng.Auburn.edu.  1% of the users will use > 90% of the space. For
them, I had special places.  For everyone else, they were lumped into large-ish
buckets.  A daily cron job easily identifies the 1% and we could proactively
redistribute them, as needed.  Of course, quotas are also easily defeated
and the more clever students played a fun game of hide-and-seek, but I
digress.  There is more than one way to solve these allocation problems.
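
The daily job can be as dumb as a du(1) sweep mailed to the admins (path and
schedule hypothetical):

  # crontab entry: report the 20 largest home directories every night
  0 3 * * * du -sk /export/home/* | sort -rn | head -20 | mailx -s "disk hogs" root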

The real PITA was cost accounting, especially for government contracts :-(
The cost of managing the storage is much greater than the cost of the
storage, so the trend will inexorably be towards eliminating the management
costs -- hence the management structure of ZFS is simpler than the previous
solutions.  The main gap for .edu sites is quotas which will likely be solved
some other way in the long run...  Meanwhile, pile on
http://bugs.opensolaris.org/view_bug.do?bug_id=6501037
  -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] An Academic Sysadmin's Lament for ZFS ?

2007-09-10 Thread Darren J Moffat
Richard Elling wrote:
 There is also a long tail situation here, which is how I approached the
 problem at eng.Auburn.edu.  1% of the users will use > 90% of the space. For
 them, I had special places.  For everyone else, they were lumped into large-ish
 buckets.  A daily cron job easily identifies the 1% and we could proactively
 redistribute them, as needed.  Of course, quotas are also easily defeated
 and the more clever students played a fun game of hide-and-seek, but I
 digress.  There is more than one way to solve these allocation problems.

Ah, I remember those games well - they are one of the reasons I'm now a 
Solaris developer!  Though at Glasgow Uni's Comp Sci department it 
wasn't disk quotas (peer pressure was used for us) but print quotas, 
which were much more fun to try and bypass, and more environmentally 
responsible to quota in the first place.

-- 
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] I/O freeze after a disk failure

2007-09-10 Thread Richard Elling
Gino wrote:
 Richard, thank you for your detailed reply.
 Unfortunately another reason to stay with UFS in production ..

  IMHO, maturity is the primary reason to stick with UFS.  To look at
  this through the maturity lens, UFS is the great grandfather living on
  life support (prune juice and oxygen) while ZFS is the late adolescent,
  soon to bloom into a young adult.  The torch will pass when ZFS
  becomes the preferred root file system.
   -- richard

 I agree with you, but I don't understand why Sun has integrated ZFS into Solaris
 and declared it stable.
 Sun Sales tell you to trash your old redundant arrays and go with JBOD and ZFS...
 but don't tell you that you will probably need to reboot your SF25k because
 of a disk failure!!  :(

To put this in perspective, no system on the planet today handles all faults.
I would even argue that building such a system is theoretically impossible.

So the subset of faults which ZFS covers is different from the subset
that UFS covers, and different from what SVM covers.  For example, we *know*
that ZFS has allowed people to detect and recover from faulty SAN switches,
broken RAID arrays, and accidental deletions which UFS could never even have
detected.  There are some known gaps which are being closed in ZFS, but it is
simply not the case that UFS is superior in all RAS respects to ZFS.
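
For example, with a hypothetical pool named tank, a scrub re-reads every block
and the per-device checksum counters in the status output show which path is
returning bad data:

  zpool scrub tank
  zpool status -v tank
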
  -- richard
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] An Academic Sysadmin's Lament for ZFS ?

2007-09-10 Thread Wade . Stuart
[EMAIL PROTECTED] wrote on 09/10/2007 11:40:16 AM:

 Richard Elling wrote:
  There is also a long tail situation here, which is how I approached the
  problem at eng.Auburn.edu.  1% of the users will use > 90% of the space. For
  them, I had special places.  For everyone else, they were lumped into large-ish
  buckets.  A daily cron job easily identifies the 1% and we could proactively
  redistribute them, as needed.  Of course, quotas are also easily defeated
  and the more clever students played a fun game of hide-and-seek, but I
  digress.  There is more than one way to solve these allocation problems.

 Ah I remember those games well and they are one of the reasons I'm now a
 Solaris developer!  Though at Glasgow Uni's Comp Sci department it
 wasn't disk quotas (peer pressure was used for us) but print quotas
 which were much more fun to try and bypass and environmentally
 responsible to quota in the first place.


  Very true,  you could even pay people to track down heavy users and
bonk them on the head.  Why is everyone responding with alternate routes to
a simple need?  User quotas have been used in the past, and will be used in
the future because they work (well), are simple, tied into many existing
workflows/systems and very understandable for both end users and
administrators.  You can come up with 100 other ways to accomplish pseudo
user quotas or end runs around the core issue (did we really have google
space farming suggested -- we are reading a FS mailing list here?), but
quotas are tested and well understood fixes to these problems.  Just
because someone decided to call ZFS pool reservations quotas does not mean
the need for real user quotas is gone.

User quotas are a KISS solution to space hogs.
Zpool quotas (really pool reservations) are not unless you can divvy up
data slices into small fs mounts and have no user overlap in the partition.
user quotas + zfs quotas > zfs quotas;

-Wade

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS RAIDZ vs. RAID5.

2007-09-10 Thread Darren Dunham
 Now I guess it would be a good idea to get writes done properly,
 even if it means making them slow (like with SVM). The end result
 would be: if you want fast writes/slow reads, go ahead with
 raid-z; if you need fast reads/slow writes, go with raid-5.
 
 btw: I'm just thinking out loud - for raid-5 writes, couldn't you
 somehow utilize the ZIL to make writes safe? I'm asking because we've
 got the ability to put the zil somewhere else, like an NVRAM card...

But the safety of raidz (and the overall on-disk consistency of the
pool) does not currently depend on the ZIL.

It instead depends on the fact that blocks are never modified in-place,
but written first, then activated atomically.  So I guess this depends
on how the R5 is implemented in ZFS.  As long as all writes cause a new
block to be written (which has a full R5 stripe?), then the activation
will be atomic and there is no write hole.  The only problem comes if
existing blocks were modified (and that would cause problems with
snapshots anyway, right?)

-- 
Darren Dunham   [EMAIL PROTECTED]
Senior Technical Consultant TAOShttp://www.taos.com/
Got some Dr Pepper?   San Francisco, CA bay area
  This line left intentionally blank to confuse you. 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] An Academic Sysadmin's Lament for ZFS ?

2007-09-10 Thread Darren J Moffat
[EMAIL PROTECTED] wrote:
   Very true,  you could even pay people to track down heavy users and
 bonk them on the head.  Why is everyone responding with alternate routes to
 a simple need? 

For the simple reason that sometimes it is good to challenge existing 
practice and try and find the real need rather than "I need X because 
I've always done it using X".

We always used a vfstab and dfstab (or exportfs) file before and used a 
separate software RAID and filesystem before too.

  User quotas have been used in the past, and will be used in
 the future because they work (well), are simple, tied into many existing
 workflows/systems and very understandable for both end users and
 administrators.  You can come up with 100 other ways to accomplish psudo
 user quotas or end runs around the core issue (did we really have google
 space farming suggested -- we are reading a FS mailing list here?), but
 quotas are tested and well understood fixes to these problems.  Just
 because someone decided to call ZFS pool reservations quotas does not mean
 the need for real user quotas is gone.

Reservations in ZFS are quite different from quotas; ZFS has both 
concepts.  A reservation is a guaranteed minimum; a quota in ZFS is a 
guaranteed maximum.
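
For example (dataset name hypothetical):

  zfs set reservation=10g tank/proj   # tank/proj is guaranteed at least 10 GB
  zfs set quota=50g tank/proj         # tank/proj and its descendants can never use more than 50 GB
  zfs get reservation,quota tank/proj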



-- 
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] An Academic Sysadmin's Lament for ZFS ?

2007-09-10 Thread Wade . Stuart


[EMAIL PROTECTED] wrote on 09/10/2007 12:13:18 PM:

 [EMAIL PROTECTED] wrote:
   Very true,  you could even pay people to track down heavy users and
  bonk them on the head.  Why is everyone responding with alternate routes to
  a simple need?

 For the simple reason that sometimes it is good to challenge existing
 practice and try and find the real need rather than "I need X because
 I've always done it using X".

I am not against refactoring solutions, but zfs quotas and the lack of
user quotas in general leave people either trying to use zfs quotas in lieu
of user quotas, suggesting weak end runs around the problem (a cron job to
find the hogs), or belittling the need to actually limit disk usage per
user id.  None of the replies in this thread so far has come anywhere close
to the solution that user quotas allow.





 We always used a vfstab and dfstab (or exportfs) file before and used a
 separate software RAID and filesystem before too.

Yes, and the replacements (when talking ZFS) are at parity or better
-- that makes switching a win-win.  ENOSUCH when talking user quotas.


   User quotas have been used in the past, and will be used in
  the future because they work (well), are simple, tied into many existing
  workflows/systems and very understandable for both end users and
  administrators.  You can come up with 100 other ways to accomplish pseudo
  user quotas or end runs around the core issue (did we really have google
  space farming suggested -- we are reading a FS mailing list here?), but
  quotas are tested and well understood fixes to these problems.  Just
  because someone decided to call ZFS pool reservations quotas does not mean
  the need for real user quotas is gone.

 Reservations in ZFS are quite different to Quotas, ZFS has both
 concepts.  A reservation is a guaranteed minimum, a quota in ZFS is a
 guaranteed maximum.


Reservations (the general term in most of the disk virtualization
and pooling technologies in play today) usually cover both the floor
(guaranteed space) and the ceiling (max allocated space) for the pool volume,
dynamic store, or backing store.  ZFS quotas (reservations) can be called
whatever you want -- it has just become frustrating when people start
pushing ZFS quotas (reservations) as a drop-in replacement for user quotas.
They are tools for different issues with some overlap.  Even though one can
pound in a nail with a screwdriver, I would rather have a hammer.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS RAIDZ vs. RAID5.

2007-09-10 Thread Pawel Jakub Dawidek
On Mon, Sep 10, 2007 at 04:31:32PM +0100, Robert Milkowski wrote:
 Hello Pawel,
 
 Excellent job!
 
 Now I guess it would be a good idea to get writes done properly,
  even if it means making them slow (like with SVM). The end result
  would be: if you want fast writes/slow reads, go ahead with
  raid-z; if you need fast reads/slow writes, go with raid-5.

Writes in non-degraded mode already work. Only writes in degraded mode
don't work yet. My implementation is based on RAIDZ, so I'm planning to
support RAID6 as well.

  btw: I'm just thinking out loud - for raid-5 writes, couldn't you
  somehow utilize the ZIL to make writes safe? I'm asking because we've
  got the ability to put the zil somewhere else, like an NVRAM card...

The problem with RAID5 is that different blocks share the same parity,
which is not the case for RAIDZ. When you write a block in RAIDZ, you
write the data and the parity, and then you switch the pointer in the
uberblock. For RAID5, you write the data and you need to update the parity,
which also protects some other data. Now if you write the data, but
don't update the parity before a crash, you have a write hole. If you update
the parity before the write and then crash, the parity is inconsistent with
a different block in the same stripe.

My idea was to have one sector every 1GB on each disk for a journal to
keep a list of blocks being updated. For example, say you want to write 2kB of
data at offset 1MB. You first store offset+size in this journal, then
write the data and update the parity, and then remove offset+size from the
journal.  Unfortunately, we would need to flush the write cache twice: after
the offset+size addition and before the offset+size removal.
We could optimize it with lazy removal, e.g. wait for ZFS to flush the
write cache as part of a transaction and then remove the old offset+size
pairs.
But I still expect this to give too much overhead.

-- 
Pawel Jakub Dawidek   http://www.wheel.pl
[EMAIL PROTECTED]   http://www.FreeBSD.org
FreeBSD committer Am I Evil? Yes, I Am!


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] An Academic Sysadmin's Lament for ZFS ?

2007-09-10 Thread Richard Elling
[EMAIL PROTECTED] wrote:
 All of these threads to this point have not answered the needs in
 anyway close to an solution that user quotas allow.

I thought I did answer that... for some definition of answer...

   The main gap for .edu sites is quotas which will likely be solved
  some other way in the long run...  Meanwhile, pile on
  http://bugs.opensolaris.org/view_bug.do?bug_id=6501037

Or, if you're so inclined,
http://cvs.opensolaris.org/source/

The point being that either it isn't a high priority for the ZFS team, there
are other solutions to the problem (which may not require changes to ZFS),
or you can fix it on your own.  You can impact any or all of these things.
  -- richard
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best way to incorporate disk size tolerance into

2007-09-10 Thread MC
To expand on this:

 The recommended use of whole disks is for drives with volatile write caches 
 where ZFS will enable the cache if it owns the whole disk.

Does ZFS really never use disk cache when working with a disk slice?  Is there 
any way to force it to use the disk cache?
 
 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs mount points (all-or-nothing)

2007-09-10 Thread msl
Hello all,
 Is there a way to configure the zpool as a legacy mount, and have all the 
filesystems in that pool mounted automatically?
 I will try to explain better:
 - Imagine that I have a zfs pool with 1000 filesystems. 
 - I want to control the mount/unmount of that pool, so I configured the 
zpool as a legacy mount. 
 - But I don't want to have to mount the other 1000 filesystems... so, when I 
issue a mount -F zfs mypool, all the filesystems would be mounted too (I 
think the mount property is per-filesystem).
 Sorry if that is a dummy question, but the all-or-nothing configuration that 
I think is the solution is not what I really need.
 Thanks for your time!
 
 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best way to incorporate disk size tolerance into

2007-09-10 Thread Richard Elling
MC wrote:
 To expand on this:
 
 The recommended use of whole disks is for drives with volatile 
 write caches where ZFS will enable the cache if it owns the whole disk.
 
 Does ZFS really never use disk cache when working with a disk slice?  

This question doesn't quite make sense.  ZFS doesn't know anything about the
disk's cache.  But if ZFS has full control over the whole disk, then it will
attempt to enable the disk's volatile write cache.

 Is there any way to force it to use the disk cache?

ZFS doesn't know anything about the disk's cache.  But it will try to
issue the flush cache commands as needed.

To try to preempt the next question: some disks allow you to turn off
the volatile write cache, and some don't.  Some disks allow you to
enable or disable the write cache via the format(1m) command
in expert mode; some don't.  AFAIK, nobody has a list of these, so you
might just try it.  Caveat: do not enable the volatile write cache for UFS.
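
On drives and drivers that expose it, the format(1m) expert-mode dialog looks
roughly like this (a sketch; the exact menus vary by driver):

  format -e
  (select the disk)
  format> cache
  cache> write_cache
  write_cache> display
  Write Cache is enabled
  write_cache> disable
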
  -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] An Academic Sysadmin's Lament for ZFS ?

2007-09-10 Thread Phil Harman

On 10 Sep 2007, at 16:41, Brian H. Nelson wrote:


Stephen Usher wrote:

Brian H. Nelson:

I'm sure it would be interesting for those on the list if you could
outline the gotchas so that the rest of us don't have to re-invent the
wheel... or at least not fall down the pitfalls.

Also, here's a link to the ufs on zvol blog where I originally found the
idea:

http://blogs.sun.com/scottdickson/entry/fun_with_zvols_-_ufs


Not everything I've seen blogged about UFS and zvols fills me with  
warm fuzzies. For instance, the above takes no account of the fact  
that the UFS filesystem needs to be in a consistent state before a  
snapshot is taken - e.g. using lockfs(1M).


Example:


Preparation ...

basket# zfs create -V 10m pool0/v1
basket# newfs /dev/zvol/rdsk/pool0/v1
newfs: /dev/zvol/rdsk/pool0/v1 last mounted as /tmp/v1
newfs: construct a new file system /dev/zvol/rdsk/pool0/v1: (y/n)? y
Warning: 4130 sector(s) in last cylinder unallocated
/dev/zvol/rdsk/pool0/v1:20446 sectors in 4 cylinders of 48 tracks, 128 sectors

10.0MB in 1 cyl groups (14 c/g, 42.00MB/g, 20160 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32,

basket# mount -r /dev/zvol/dsk/pool0/v1 /tmp/v1


Scenario 1 ...

basket# date > /tmp/v1/f1; zfs snapshot pool0/v1@s1
basket# cat /tmp/v1/f1
Mon Sep 10 23:07:42 BST 2007
basket# mount -r /dev/zvol/dsk/pool0/v1@s1 /tmp/v1s1
basket# ls /tmp/v1s1
f1   lost+found/
basket# cat /tmp/v1s1/f1

basket# date > /tmp/v1/f1; zfs snapshot pool0/v1@s2
basket# mount -r /dev/zvol/dsk/pool0/v1@s2 /tmp/v1s2
basket# cat /tmp/v1s2/f1
Mon Sep 10 23:07:42 BST 2007
basket# cat /tmp/v1/f1
Mon Sep 10 23:09:19 BST 2007

Note: the first snapshot sees the file but not the contents, while  
the second snapshot sees stale data.



Scenario 2 ...

basket# date > /tmp/v1/f2; lockfs -wf /tmp/v1; zfs snapshot pool0/v1@s3; lockfs -u /tmp/v1

basket# mount -r /dev/zvol/dsk/pool0/v1@s3 /tmp/v1s3
mount: Mount point /tmp/v1s3 does not exist.
basket# mkdir /tmp/v1s3
basket# mount -r /dev/zvol/dsk/pool0/v1@s3 /tmp/v1s3
basket# cat /tmp/v1s3/f2
Mon Sep 10 23:18:17 BST 2007
basket# cat /tmp/v1/f2
Mon Sep 10 23:18:17 BST 2007
basket#

Note: the snapshot is consistent because of the lockfs(1M) calls.


Phil

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS/WAFL lawsuit

2007-09-10 Thread David Hopwood
Joerg Schilling wrote:
 David Hopwood [EMAIL PROTECTED] wrote:
 
 Al Hopper wrote:
 So back to patent portfolios: yes there will be (public and private) 
 posturing; yes there will be negotiations; and, ultimately, there will 
 be a resolution.  All of this won't affect ZFS or anyone running ZFS.
 It matters a great deal what the resolution is. The best outcome, for
 everyone wanting to use any COW and/or always-consistent-on-disk filesystem
 (including btrfs and others), would be for the invalidation part of
 NetApp's lawsuit to succeed, and the infringement part to fail.
 A cross-licensing deal or other out-of-court settlement would be much
 less helpful.
 
 Invalidating COW filesystem patents would of course be the best.
 Unfortunately those lawsuits are usually not handled in the open, and in order
 to understand everything you would need to know about the background interests
 of both parties.

IANAL, but I was under the impression that it was possible to file an
amicus brief or amicus curiae, which in this case would detail known
prior art -- whether or not it is prior art that benefits either party in
the case.

http://en.wikipedia.org/wiki/Amicus_curiae

The EFF does this quite often, and it has a patent busting project which
is described at http://www.eff.org/patent/. (The EFF wanted list does
not currently include the WAFL and ZFS patents, but it's only been a few
days since the NetApp suit was announced.)

It is true that, as the WP article says, "The decision whether to admit the
information lies with the court." But surely, even in East Texas, it would
be difficult to completely dismiss prior art on COW and always-consistent-
on-disk filesystems, no matter how submitted. Or am I being too naive?

-- 
David Hopwood [EMAIL PROTECTED]

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ext3 on zvols journal performance pathologies?

2007-09-10 Thread Joshua Goodall
I've been seeing read and write performance pathologies with Linux
ext3 over iSCSI to zvols, especially with small writes. Does running
a journalled filesystem on a zvol turn the block storage into swiss
cheese? I am considering serving ext3 journals (and possibly swap
too) off a raw, hardware-mirrored device. Before I do (and I'll
write up any results) I'd like to know if anyone tried/addressed
this already.

The lack of tools to analyse ZFS fragmentation means I'm somewhat
in the dark, so I'm just likely to suck it and see.

JG

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss