Re: [zfs-discuss] Who owns the dataset?

2010-07-16 Thread Frank Cusack

On 7/16/10 4:33 PM -0700 Johnson Earls wrote:

On 07/16/10 10:30 AM, Lori Alt wrote:

You can also run through the zones, doing 'zonecfg -z <zonename> info'
commands to look for datasets delegated to each zone.


That's not necessarily the current owner though, is it?


Re: [zfs-discuss] Who owns the dataset?

2010-07-16 Thread Johnson Earls
Lori,

Thanks for the reply.

By "floating" I mean we have a set of scripts that will shut down a zone, 
export all the ZFS pools attached to that zone, modify the zone config to not 
have those datasets associated anymore, then, on another system, import the ZFS 
pools, modify the new zone's config to include those datasets, and boot the 
zone.
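In case it's useful, the sequence is roughly this (a sketch only; the zone,
pool, and dataset names are made up):

# on the old host
zoneadm -z appzone halt
zonecfg -z appzone 'remove dataset name=apppool/data'
zpool export apppool

# on the new host
zpool import apppool
zonecfg -z appzone 'add dataset; set name=apppool/data; end'
zoneadm -z appzone boot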

I'll look at using the mount-option thing, thanks for that.

- Johnson

On 07/16/10 10:30 AM, Lori Alt wrote:
>
> On 07/14/10 05:45 PM, Johnson Earls wrote:
> > Hello,
> >
> > How would I go about finding out which zone owns a particular dataset from
> > a script running in the global zone?
> >
>
> > We have some ZFS datasets that can "float" between zones on different
> > servers in order to provide a manual application failover mechanism.
>
> I don't know what you mean by datasets "floating" between zones.  In 
> order for a zone to access a dataset, the dataset must have been 
> delegated to the zone, which requires some explicit action.
>
> But to answer your specific question, if you look at a mounted dataset's 
> entry in /etc/mnttab:
>
> rpool/z2-del/myz2zfs
> rw,nodevices,setuid,nonbmand,exec,xattr,atime,zone=z2,dev=16d001c
> 1279296850
>
> you'll see a 'zone=' entry if the dataset is delegated to the zone 
> (assuming it's mounted at all).
>
> Oddly enough, the "zone=" string doesn't appear for the zone 
> root.  I'm not sure if that's intentional or an oversight.  But in any 
> case, it doesn't appear that you're looking for zone roots.
>
> You can also run through the zones, doing 'zonecfg -z <zonename> info' 
> commands to look for datasets delegated to each zone.
>
>
> Lori

- Johnson
jea...@responsys.com




Re: [zfs-discuss] preparing for future drive additions

2010-07-16 Thread Richard Elling
On Jul 14, 2010, at 11:44 PM, David Dyer-Bennet wrote:
> On Wed, July 14, 2010 14:58, Daniel Taylor wrote:
> 
>> I'm about to build an OpenSolaris NAS system; currently we have two drives
>> and are planning on adding two more at a later date (2TB enterprise level
>> HDD are a bit expensive!).
> 
> Do you really need them?  Now?  Maybe 1TB drives are good now, and then
> add a pair of 2TB in a year?

I was recently at a large computer retailer and 1TB drives were not available
for purchase.  2TB 3.5" drives were $110, and 500GB 2.5" drives were available.  As
David notes, if you plan to expand, plan to expand by replacing drives or by
adding pairs.  This will be very cost-efficient while your data space needs
are modest.
 -- richard

-- 
Richard Elling
rich...@nexenta.com   +1-760-896-4422




Re: [zfs-discuss] Is there any support for bi-directional synchronization in zfs?

2010-07-16 Thread Richard Elling
On Jul 14, 2010, at 9:10 AM, Peter Taps wrote:
> Folks,
> 
> This is probably a very naive question.
> 
> Is it possible to set zfs for bi-directional synchronization of data across 
> two locations? I am thinking this is almost impossible. Consider two files A 
> and B at two different sites. There are three possible cases that require 
> synchronization:
> 
>   1. A is changed. B is unchanged.
>   2. B is changed. A is unchanged.
>   3. A is changed. B is changed.
> 
> While it is possible to achieve synchronization for the first two cases, case 
> 3 requires special merging and is almost impossible.

It is certainly not impossible, people do this every day.

> I am thinking it is the same problem even at the block level.

No, it is just much more difficult at the block level because blocks do not have
context.  Your view of A and B requires some level of context above the block
level. So you must do the reconciliation at that level, not below.  Hence the
recommendations to use unison, hg, svn, or even OpenOffice, which have the
tools at the contextual level of the data to reconcile differences between two
objects.
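
For example, a file-level tool like unison can propagate changes in both
directions and flags case 3 for manual resolution (the paths here are purely
illustrative):

# reconcile two replicas; conflicting updates are reported, not clobbered
unison /export/projects ssh://siteb//export/projects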

> Even to achieve 1 and 2 is a bit tricky given the latency between the two 
> sites. Is there anything in zfs that makes it easier? 

Don't try to solve this problem by removing data contextual knowledge, try to 
solve it by increasing data context. 
 -- richard


-- 
Richard Elling
rich...@nexenta.com   +1-760-896-4422





Re: [zfs-discuss] Legality and the future of zfs...

2010-07-16 Thread Frank Cusack

On 7/16/10 3:07 PM -0500 David Dyer-Bennet wrote:


On Fri, July 16, 2010 14:07, Frank Cusack wrote:

On 7/16/10 12:02 PM -0500 David Dyer-Bennet wrote:

It would be nice to have applications request to be notified
before a snapshot is taken, and when those that have requested
notification have acknowledged that they're ready, the snapshot
would be taken; and then another notification sent that it was
taken.  Prior to indicating they were ready, the apps could
have achieved a logically consistent on-disk state.  That
would eliminate the need for (for example) separate database
backups, if you could have a snapshot with the database on it
in a consistent state.


Any software dependent on cooperating with the filesystem to ensure that
the files are consistent in a snapshot fails the cord-yank test (which is
equivalent to the "processor explodes" test and the "power supply bursts
into flames" test and the "disk drive shatters" test and so forth).  It
can't survive unavoidable physical-world events.


It can, if said software can roll back to the last consistent state.
That may or may not be "recent" wrt a snapshot.  If an application is
very active, it's possible that many snapshots may be taken, none of
which are actually in a state the application can use to recover from,
rendering snapshots much less effective.


Wait, if the application can in fact survive the "cord pull" test then by
definition of "survive", all the snapshots are useful.


Useful, yes, but you missed my point about recency.  They may not be as
useful as they could be, and depending on how data changes, older data or
transactions may be unrecoverable due to an inconsistent snapshot.


 They'll be
everything consistent that was committed to disk by the time of the yank
(or snapshot); which, it seems to me, is the very best that anybody could
hope for.


This is true only if transactions are journaled somehow, and thus a snapshot
could return the application to its current state minus one.


Also, just administratively, and perhaps legally, it's highly desirable
to know that the time of a snapshot is the actual time that application
state can be recovered to or referenced to.


Maybe, but since that's not achievable for your core corporate asset (the
database), I think of it as a pipe dream rather than a goal.


Ah, because we can't achieve this ideal for some very critical application,
we shouldn't bother getting there for other applications.


Also, if an application cannot survive a cord-yank test, it might be
even more highly desirable that snapshots be a stable state from which
the application can be restarted.


If it cannot survive a cord-yank test, it should not be run, ever, by
anybody, for any purpose more important than playing a game.


Nice ideal world you live in ... wish I were there.

It's not as if a notification mechanism somehow makes things worse for
applications that don't use it.


Re: [zfs-discuss] Recommended RAM for ZFS on various platforms

2010-07-16 Thread Joerg Schilling
"Sam Fourman Jr."  wrote:

> Using FreeBSD 9 w/ ZFSv15 with default settings (nothing in loader.conf or
> sysctl.conf) and a GENERIC kernel, 12GB of memory seems to be all ZFS wants
> to use.  I have tried machines with 32GB, but ZFS never uses more unless you
> play with loader.conf settings.

On Solaris on a SunFire X4540 with 64 GB, I've seen ZFS RAM usage far beyond 
32 GB without doing anything. 

Do you know whether this FreeBSD behavior is intended?



Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily


Re: [zfs-discuss] Recommended RAM for ZFS on various platforms

2010-07-16 Thread Sam Fourman Jr.
On Fri, Jul 16, 2010 at 12:24 PM, Michael Johnson wrote:
> I'm currently planning on running FreeBSD with ZFS, but I wanted to 
> double-check
> how much memory I'd need for it to be stable.  The ZFS wiki currently says you
> can go as low as 1 GB, but recommends 2 GB; however, elsewhere I've seen 
> someone
> claim that you need at least 4 GB.  Does anyone here know how much RAM FreeBSD
> would need in this case?
>
> Likewise, how much RAM does OpenSolaris need for stability when running ZFS?
>  How about other OpenSolaris-based OSs, like NexentaStor?  (My searching found
> that OpenSolaris recommended at least 1 GB, while NexentaStor said 2 GB was
> okay, 4 GB was better.  I'd be interested in hearing your input, though.)
>
> If it matters, I'm currently planning on RAID-Z2 with 4x500GB consumer-grade
> SATA drives.  (I know that's not a very efficient configuration, but I'd 
> really
> like the redundancy of RAID-Z2 and I just don't need more than 1 TB of 
> available
> storage right now, or for the next several years.)  This is on an AMD64 
> system,
> and the OS in question will be running inside of VirtualBox, with raw access 
> to
> the drives.
>
> Thanks,
> Michael
>

Using FreeBSD 9 w/ ZFSv15 with default settings (nothing in loader.conf or
sysctl.conf) and a GENERIC kernel, 12GB of memory seems to be all ZFS wants to
use.  I have tried machines with 32GB, but ZFS never uses more unless you play
with loader.conf settings.
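
If you do want ZFS to take advantage of a bigger machine, the usual knobs on
FreeBSD live in /boot/loader.conf; something like this (the values are only
illustrative, size them to your own box):

# raise the kernel memory ceiling, then let the ARC grow into it
vm.kmem_size="40G"
vfs.zfs.arc_max="28G"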


If I were building a small office NAS, 12GB is where I would start.
I run mostly whitebox hardware (Asus motherboards, desktop disks, etc.).

In my experience, I have found FreeBSD to be much more stable than OpenSolaris,
but to be fair, I understand FreeBSD, and I have only loaded OpenSolaris
with default settings; the most RAM I ever gave OpenSolaris is an 8GB machine.
So if you want to go with OpenSolaris (for dedupe and such), I would use a lot
of RAM.

From many people I have talked to, OpenSolaris with 32GB of memory is really
stable.

-- 

Sam Fourman Jr.
Fourman Networks
http://www.fourmannetworks.com


Re: [zfs-discuss] Recommended RAM for ZFS on various platforms

2010-07-16 Thread Bob Friesenhahn

On Fri, 16 Jul 2010, Michael Johnson wrote:


Just curious, why do you say I'd be able to get away with less RAM in FreeBSD
(as compared to NexentaStor, I'm assuming)?  I don't know tons about the OSs in
question; is FreeBSD just leaner in general?


The FreeBSD OS itself is normally leaner, but FreeBSD plus zfs is not 
(yet) as memory-efficient as Solaris.  Solaris and zfs do the Vulcan 
mind-meld when it comes to memory, but FreeBSD does not.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


Re: [zfs-discuss] Legality and the future of zfs...

2010-07-16 Thread David Dyer-Bennet

On Fri, July 16, 2010 14:07, Frank Cusack wrote:
> On 7/16/10 12:02 PM -0500 David Dyer-Bennet wrote:
>>> It would be nice to have applications request to be notified
>>> before a snapshot is taken, and when those that have requested
>>> notification have acknowledged that they're ready, the snapshot
>>> would be taken; and then another notification sent that it was
>>> taken.  Prior to indicating they were ready, the apps could
>>> have achieved a logically consistent on-disk state.  That
>>> would eliminate the need for (for example) separate database
>>> backups, if you could have a snapshot with the database on it
>>> in a consistent state.
>>
>> Any software dependent on cooperating with the filesystem to ensure that
>> the files are consistent in a snapshot fails the cord-yank test (which
>> is
>> equivalent to the "processor explodes" test and the "power supply bursts
>> into flames" test and the "disk drive shatters" test and so forth).  It
>> can't survive unavoidable physical-world events.
>
> It can, if said software can roll back to the last consistent state.
> That may or may not be "recent" wrt a snapshot.  If an application is
> very active, it's possible that many snapshots may be taken, none of
> which are actually in a state the application can use to recover from,
> rendering snapshots much less effective.

Wait, if the application can in fact survive the "cord pull" test then by
definition of "survive", all the snapshots are useful.  They'll be
everything consistent that was committed to disk by the time of the yank
(or snapshot); which, it seems to me, is the very best that anybody could
hope for.

> Also, just administratively, and perhaps legally, it's highly desirable
> to know that the time of a snapshot is the actual time that application
> state can be recovered to or referenced to.

Maybe, but since that's not achievable for your core corporate asset (the
database), I think of it as a pipe dream rather than a goal.

> Also, if an application cannot survive a cord-yank test, it might be
> even more highly desirable that snapshots be a stable state from which
> the application can be restarted.

If it cannot survive a cord-yank test, it should not be run, ever, by
anybody, for any purpose more important than playing a game.

-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info



Re: [zfs-discuss] Recommended RAM for ZFS on various platforms

2010-07-16 Thread Garrett D'Amore
On Fri, 2010-07-16 at 11:57 -0700, Michael Johnson wrote:
> Just curious, why do you say I'd be able to get away with less RAM in FreeBSD
> (as compared to NexentaStor, I'm assuming)?  I don't know tons about the OSs
> in question; is FreeBSD just leaner in general?

Compared to Solaris, in my estimation, yes, it's a little leaner.  Not
necessarily a lot -- the bulk of memory consumption these days is ZFS
and applications (Firefox!).

- Garrett




Re: [zfs-discuss] 1tb SATA drives

2010-07-16 Thread Andrew Gabriel

Arne Jansen wrote:

Jordan McQuown wrote:
I'm curious to know what other people are running for HDs in white 
box systems? I'm currently looking at Seagate Barracudas and Hitachi 
Deskstars. I'm looking at the 1TB models. These will be attached to 
an LSI expander in a SC847E2 chassis driven by an LSI 9211-8i HBA. 
This system will be used as a large storage array for backups and 
archiving.


I wouldn't recommend using desktop drives in a server RAID. They don't
handle the vibration present in a server chassis well. I'd recommend
at least the Seagate Constellation or the Hitachi Ultrastar, though I
haven't tested the Deskstar myself.


I've been using a couple of 1TB Hitachi Ultrastars for about a year with 
no problem. I don't think mine are still available, but I expect they 
have something equivalent.


The pool is scrubbed 3 times a week, which takes nearly 19 hours now and 
hammers the heads quite hard. I keep meaning to reduce the scrub 
frequency now that it's taking so long, but haven't got around to 
it. What I really want is pause/resume scrub, and the ability to trigger 
the pause/resume from the screensaver (or something similar).
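
As far as I know, the closest you can get today is cancelling and later
starting over, which throws away all progress (pool name illustrative):

zpool scrub -s tank   # stop the scrub in flight
zpool scrub tank      # start again from the beginning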


--
Andrew Gabriel


Re: [zfs-discuss] Legality and the future of zfs...

2010-07-16 Thread Frank Cusack

On 7/16/10 12:02 PM -0500 David Dyer-Bennet wrote:

It would be nice to have applications request to be notified
before a snapshot is taken, and when those that have requested
notification have acknowledged that they're ready, the snapshot
would be taken; and then another notification sent that it was
taken.  Prior to indicating they were ready, the apps could
have achieved a logically consistent on-disk state.  That
would eliminate the need for (for example) separate database
backups, if you could have a snapshot with the database on it
in a consistent state.


Any software dependent on cooperating with the filesystem to ensure that
the files are consistent in a snapshot fails the cord-yank test (which is
equivalent to the "processor explodes" test and the "power supply bursts
into flames" test and the "disk drive shatters" test and so forth).  It
can't survive unavoidable physical-world events.


It can, if said software can roll back to the last consistent state.
That may or may not be "recent" wrt a snapshot.  If an application is
very active, it's possible that many snapshots may be taken, none of
which are actually in a state the application can use to recover from,
rendering snapshots much less effective.

Also, just administratively, and perhaps legally, it's highly desirable
to know that the time of a snapshot is the actual time that application
state can be recovered to or referenced to.

Also, if an application cannot survive a cord-yank test, it might be
even more highly desirable that snapshots be a stable state from which
the application can be restarted.

A notification mechanism is pretty desirable, IMHO.


Re: [zfs-discuss] Recommended RAM for ZFS on various platforms

2010-07-16 Thread Freddie Cash
On Fri, Jul 16, 2010 at 10:24 AM, Michael Johnson wrote:
> I'm currently planning on running FreeBSD with ZFS, but I wanted to 
> double-check
> how much memory I'd need for it to be stable.  The ZFS wiki currently says you
> can go as low as 1 GB, but recommends 2 GB; however, elsewhere I've seen 
> someone
> claim that you need at least 4 GB.  Does anyone here know how much RAM FreeBSD
> would need in this case?

There's no such thing as "too much RAM" when it comes to ZFS.  The
more RAM you add to the system, the better it will perform.  ZFS will
use all the RAM you give it for the ARC, enabling it to cache more and
more data.

On the flip side, if you spend enough time tuning ZFS and FreeBSD, you
can use ZFS on a system with 512 MB of RAM (there are reports on the
FreeBSD mailing lists of various people doing this on single-drive
laptops).
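
A setup that small means pinning the ARC down in /boot/loader.conf, along
these lines (values illustrative only):

vfs.zfs.arc_max="64M"         # keep the ARC from crowding out everything else
vfs.zfs.prefetch_disable="1"  # prefetch is usually the first thing to go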

However, the "rule of thumb" for ZFS is 2 GB of RAM as a bare minimum,
using the 64-bit version of FreeBSD.  The "sweet spot" is 4 GB of RAM.

But, more is always better.

-- 
Freddie Cash
fjwc...@gmail.com


Re: [zfs-discuss] Recommended RAM for ZFS on various platforms

2010-07-16 Thread Michael Johnson
Garrett D'Amore wrote:
>On Fri, 2010-07-16 at 10:24 -0700, Michael Johnson wrote:
>> I'm currently planning on running FreeBSD with ZFS, but I wanted to
>> double-check how much memory I'd need for it to be stable.  The ZFS wiki
>> currently says you can go as low as 1 GB, but recommends 2 GB; however,
>> elsewhere I've seen someone claim that you need at least 4 GB.  Does anyone
>> here know how much RAM FreeBSD would need in this case?
>>
>> Likewise, how much RAM does OpenSolaris need for stability when running ZFS?
>> How about other OpenSolaris-based OSs, like NexentaStor?  (My searching
>> found that OpenSolaris recommended at least 1 GB, while NexentaStor said
>> 2 GB was okay, 4 GB was better.  I'd be interested in hearing your input,
>> though.)

>
>1GB isn't enough for a real system.  2GB is a bare minimum.  If you're
>going to use dedup, plan on a *lot* more.  I think 4 or 8 GB are good
>for a typical desktop or home NAS setup.  With FreeBSD you may be able
>to get away with less.  (Probably, in fact.)

Fortunately, I don't need deduplication; it's kind of a nice feature, but the 
extra RAM it would take isn't worth it.

Just curious, why do you say I'd be able to get away with less RAM in FreeBSD 
(as compared to NexentaStor, I'm assuming)?  I don't know tons about the OSs in 
question; is FreeBSD just leaner in general?

>> If it matters, I'm currently planning on RAID-Z2 with 4x500GB consumer-grade
>> SATA drives.  (I know that's not a very efficient configuration, but I'd
>> really like the redundancy of RAID-Z2 and I just don't need more than 1 TB
>> of available storage right now, or for the next several years.)  This is on
>> an AMD64 system, and the OS in question will be running inside of
>> VirtualBox, with raw access to the drives.

>
>Btw, instead of RAIDZ2, I'd recommend simply using a stripe of mirrors.
>You'll have better performance, and good resilience against errors.  And
>you can grow later as you need to by just adding additional drive pairs.


A pair of mirrors would be nice, but it would only protect against 100% of
single-drive failures and 50% of two-drive failures.  Performance is less
important to me than redundancy; this setup won't be seeing tons of disk
activity, but I want it to be as reliable as possible.

Michael


  


Re: [zfs-discuss] 1tb SATA drives

2010-07-16 Thread Arne Jansen

Jordan McQuown wrote:
I'm curious to know what other people are running for HDs in white box 
systems? I'm currently looking at Seagate Barracudas and Hitachi 
Deskstars. I'm looking at the 1TB models. These will be attached to an 
LSI expander in a SC847E2 chassis driven by an LSI 9211-8i HBA. This 
system will be used as a large storage array for backups and archiving.


I wouldn't recommend using desktop drives in a server RAID. They don't
handle the vibration present in a server chassis well. I'd recommend
at least the Seagate Constellation or the Hitachi Ultrastar, though I
haven't tested the Deskstar myself.

--Arne

 
Thanks,

Jordan
 







[zfs-discuss] 1tb SATA drives

2010-07-16 Thread Jordan McQuown
I'm curious to know what other people are running for HDs in white box 
systems? I'm currently looking at Seagate Barracudas and Hitachi Deskstars. 
I'm looking at the 1TB models. These will be attached to an LSI expander in a 
SC847E2 chassis driven by an LSI 9211-8i HBA. This system will be used as a 
large storage array for backups and archiving.

Thanks,
Jordan



Re: [zfs-discuss] Recommended RAM for ZFS on various platforms

2010-07-16 Thread Andrew Gabriel

Garrett D'Amore wrote:

Btw, instead of RAIDZ2, I'd recommend simply using stripe of mirrors.
You'll have better performance, and good resilience against errors.  And
you can grow later as you need to by just adding additional drive pairs.

-- Garrett
  


Or in my case, I find my home data growth is slightly less than the rate 
of disk capacity increase, so every 18 months or so, I simply swap out 
the disks for higher capacity ones.
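
With a mirror, that swap is just a pair of replaces; once both sides are
bigger, the pool can use the new space (device names are hypothetical, and
the autoexpand property only exists on newer builds -- on older ones an
export/import does it):

zpool replace tank c1t0d0 c2t0d0   # wait for the resilver to finish
zpool replace tank c1t1d0 c2t1d0
zpool set autoexpand=on tank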



--
Andrew Gabriel


Re: [zfs-discuss] Recommended RAM for ZFS on various platforms

2010-07-16 Thread Garrett D'Amore
1GB isn't enough for a real system.  2GB is a bare minimum.  If you're
going to use dedup, plan on a *lot* more.  I think 4 or 8 GB are good
for a typical desktop or home NAS setup.  With FreeBSD you may be able
to get away with less.  (Probably, in fact.)

Btw, instead of RAIDZ2, I'd recommend simply using stripe of mirrors.
You'll have better performance, and good resilience against errors.  And
you can grow later as you need to by just adding additional drive pairs.
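
Concretely, that's one create and, later, one add (device names made up):

zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
# when you need more space, grow the stripe by another mirrored pair
zpool add tank mirror c0t4d0 c0t5d0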

-- Garrett

On Fri, 2010-07-16 at 10:24 -0700, Michael Johnson wrote:
> I'm currently planning on running FreeBSD with ZFS, but I wanted to 
> double-check 
> how much memory I'd need for it to be stable.  The ZFS wiki currently says 
> you 
> can go as low as 1 GB, but recommends 2 GB; however, elsewhere I've seen 
> someone 
> claim that you need at least 4 GB.  Does anyone here know how much RAM 
> FreeBSD 
> would need in this case?
> 
> Likewise, how much RAM does OpenSolaris need for stability when running ZFS? 
>  How about other OpenSolaris-based OSs, like NexentaStor?  (My searching 
> found 
> that OpenSolaris recommended at least 1 GB, while NexentaStor said 2 GB was 
> okay, 4 GB was better.  I'd be interested in hearing your input, though.)
> 
> If it matters, I'm currently planning on RAID-Z2 with 4x500GB consumer-grade 
> SATA drives.  (I know that's not a very efficient configuration, but I'd 
> really 
> like the redundancy of RAID-Z2 and I just don't need more than 1 TB of 
> available 
> storage right now, or for the next several years.)  This is on an AMD64 
> system, 
> and the OS in question will be running inside of VirtualBox, with raw access 
> to 
> the drives.
> 
> Thanks,
> Michael
> 
> 
>   
> 




[zfs-discuss] Recommended RAM for ZFS on various platforms

2010-07-16 Thread Michael Johnson
I'm currently planning on running FreeBSD with ZFS, but I wanted to 
double-check 
how much memory I'd need for it to be stable.  The ZFS wiki currently says you 
can go as low as 1 GB, but recommends 2 GB; however, elsewhere I've seen 
someone 
claim that you need at least 4 GB.  Does anyone here know how much RAM FreeBSD 
would need in this case?

Likewise, how much RAM does OpenSolaris need for stability when running ZFS? 
 How about other OpenSolaris-based OSs, like NexentaStor?  (My searching found 
that OpenSolaris recommended at least 1 GB, while NexentaStor said 2 GB was 
okay, 4 GB was better.  I'd be interested in hearing your input, though.)

If it matters, I'm currently planning on RAID-Z2 with 4x500GB consumer-grade 
SATA drives.  (I know that's not a very efficient configuration, but I'd really 
like the redundancy of RAID-Z2 and I just don't need more than 1 TB of 
available 
storage right now, or for the next several years.)  This is on an AMD64 system, 
and the OS in question will be running inside of VirtualBox, with raw access to 
the drives.

Thanks,
Michael


  


Re: [zfs-discuss] Legality and the future of zfs...

2010-07-16 Thread David Dyer-Bennet

On Fri, July 16, 2010 08:39, Richard L. Hamilton wrote:
>> > It'd be handy to have a mechanism where applications could register for
>> > snapshot notifications. When one is about to happen, they could be told
>> > about it and do what they need to do. Once all the applications have
>> > acknowledged the snapshot alert--and/or after a pre-set timeout--the file
>> > system would create the snapshot, and then notify the applications that
>> > it's done.
>> >
>> Why would an application need to be notified? I think you're under the
>> misconception that something happens when a ZFS snapshot is taken.
>> NOTHING happens when a snapshot is taken (OK, well, there is the
>> snapshot reference name created). Blocks aren't moved around, we don't
>> copy anything, etc. Applications have no need to "do anything" before a
>> snapshot is taken.
>
> It would be nice to have applications request to be notified
> before a snapshot is taken, and when those that have requested
> notification have acknowledged that they're ready, the snapshot
> would be taken; and then another notification sent that it was
> taken.  Prior to indicating they were ready, the apps could
> have achieved a logically consistent on-disk state.  That
> would eliminate the need for (for example) separate database
> backups, if you could have a snapshot with the database on it
> in a consistent state.

Any software dependent on cooperating with the filesystem to ensure that
the files are consistent in a snapshot fails the cord-yank test (which is
equivalent to the "processor explodes" test and the "power supply bursts
into flames" test and the "disk drive shatters" test and so forth).  It
can't survive unavoidable physical-world events.

Conversely, any scheme for a program writing to its files that PASSES
those tests will be fine with arbitrary snapshots, too.

For that matter, remember that the "snapshot" may be taken on a zfs server
on another continent which is making the storage available via iSCSI;
there's currently no notification channel to tell the software the
snapshot is happening.

-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info



Re: [zfs-discuss] Who owns the dataset?

2010-07-16 Thread Lori Alt

On 07/14/10 05:45 PM, Johnson Earls wrote:

Hello,

How would I go about finding out which zone owns a particular dataset from a 
script running in the global zone?



We have some ZFS datasets that can "float" between zones on different servers 
in order to provide a manual application failover mechanism.


I don't know what you mean by datasets "floating" between zones.  In 
order for a zone to access a dataset, the dataset must have been 
delegated to the zone, which requires some explicit action.


But to answer your specific question, if you look at a mounted dataset's 
entry in /etc/mnttab:


rpool/z2-del/myz2zfs
rw,nodevices,setuid,nonbmand,exec,xattr,atime,zone=z2,dev=16d001c
1279296850


you'll see a 'zone=' entry if the dataset is delegated to the zone 
(assuming it's mounted at all).
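
So a script in the global zone can recover the owner straight from mnttab;
a minimal sketch (the dataset name is just an example):

# print the zone a mounted dataset is delegated to, or "global"
ds=rpool/z2-del/myz2zfs
zone=$(nawk -v ds="$ds" '$1 == ds {
    n = split($4, opt, ",")
    for (i = 1; i <= n; i++)
        if (opt[i] ~ /^zone=/) { sub(/^zone=/, "", opt[i]); print opt[i] }
}' /etc/mnttab)
echo "$ds belongs to ${zone:-global}"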


Oddly enough, the "zone=" string doesn't appear for the zone 
root.  I'm not sure if that's intentional or an oversight.  But in any 
case, it doesn't appear that you're looking for zone roots.


You can also run through the zones, doing 'zonecfg -z <zonename> info' 
commands to look for datasets delegated to each zone.
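
A rough loop over every configured zone (sketch only):

for z in $(zoneadm list -c | grep -v '^global$'); do
    echo "=== $z ==="
    zonecfg -z "$z" info dataset
done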



Lori



I've got scripts that gather disk usage and I/O statistics per dataset, but 
I'd like to make those statistics available in the zone which owns the dataset, 
rather than the global zone (which is where the dtrace script has to run).  So 
I'd like to be able to find out which zone owns the dataset in order to direct 
output into a directory within that zone.  Is this possible?

Thanks in advance,
- Johnson
jea...@responsys.com






Re: [zfs-discuss] snapshot notification [was: Legality and the future of zfs...]

2010-07-16 Thread Richard Elling
On Jul 16, 2010, at 3:39 PM, Richard L. Hamilton wrote:
> Of course, another approach would be for a zfs aware app to be
> keeping its storage on a dedicated filesystem or zvol, and itself
> control when snapshots were taken of that.  As lightweight as
> zvols and filesystems are under zfs, having each app that needed
> such functionality have its own would be no big deal, and would
> even be handy insofar as each app could create snapshots on
> its own independent schedule.

No new API is needed.  Simply delegate to the owner of the process
the ability to take snapshots.  You need to do this anyway, for
security purposes.  Then use mkdir(2) to create a directory under the
dataset's .zfs/snapshot directory; that creates the snapshot.
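
For instance (names made up; the exact permission set needed for the mkdir
trick may vary by build):

# once, from the global administrator:
zfs allow appuser snapshot,mount tank/appdata

# thereafter the application checkpoints itself with a plain mkdir:
mkdir /tank/appdata/.zfs/snapshot/checkpoint-$(date +%Y%m%d-%H%M%S)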
 -- richard

-- 
Richard Elling
rich...@nexenta.com   +1-760-896-4422






Re: [zfs-discuss] zfs send : invalid option 'R'

2010-07-16 Thread Wenhwa Liu

The build is: Solaris 10 8/07 s10s_u4wos_12b SPARC
zfs version: v4

VER  DESCRIPTION
---  
1   Initial ZFS version
2   Ditto blocks (replicated metadata)
3   Hot spares and double parity RAID-Z
4   zpool history
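
From what I can tell, the -R option arrived in a release newer than this
build, so in the meantime I'm sending each dataset individually, roughly
(names made up):

for fs in $(zfs list -H -o name -r tank); do
    zfs send "$fs@backup" | ssh backuphost zfs receive -d tankbackup
done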

thanks,
wen


Brandon High wrote:

On Wed, Jul 14, 2010 at 5:01 PM, Wenhwa Liu  wrote:
  

I'm getting an invalid option error when I use the '-R' option with the
zfs send command.



What build of solaris are you using?

  


--
Wenhwa Liu
wenhwa@oracle.com
Office: x84799 / 650-786-4799




Re: [zfs-discuss] Legality and the future of zfs...

2010-07-16 Thread Richard L. Hamilton
> > It'd be handy to have a mechanism where applications could register for
> > snapshot notifications. When one is about to happen, they could be told
> > about it and do what they need to do. Once all the applications have
> > acknowledged the snapshot alert--and/or after a pre-set timeout--the file
> > system would create the snapshot, and then notify the applications that
> > it's done.
> >
> Why would an application need to be notified? I think you're under the
> misconception that something happens when a ZFS snapshot is taken.
> NOTHING happens when a snapshot is taken (OK, well, there is the
> snapshot reference name created). Blocks aren't moved around, we don't
> copy anything, etc. Applications have no need to "do anything" before a
> snapshot is taken.

It would be nice to have applications request to be notified
before a snapshot is taken, and when those that have requested
notification have acknowledged that they're ready, the snapshot
would be taken; and then another notification sent that it was
taken.  Prior to indicating they were ready, the apps could
have achieved a logically consistent on-disk state.  That
would eliminate the need for (for example) separate database
backups, if you could have a snapshot with the database on it
in a consistent state.

If I understand correctly, _that's_ what the notification mechanism
on Windows achieves.

Of course, another approach would be for a zfs aware app to be
keeping its storage on a dedicated filesystem or zvol, and itself
control when snapshots were taken of that.  As lightweight as
zvols and filesystems are under zfs, having each app that needed
such functionality have its own would be no big deal, and would
even be handy insofar as each app could create snapshots on
its own independent schedule.

Either way, the apps would have to be aware of how to
participate in coordinating their logical consistency on disk with
the snapshot (or vice versa).

> > Given that snapshots will probably be more popular in the future (WAFL
> > NFS/LUNs, ZFS, Btrfs, VMware disk image snapshots, etc.), an agreed-upon
> > consensus would be handy (D-Bus? POSIX?).

Hypothetically, one could hide some of the details
with suitable libraries and infrastructure.


Re: [zfs-discuss] Legality and the future of zfs...

2010-07-16 Thread Richard L. Hamilton
> never make it any better. Just for the record: Solaris 9 and 10 from Sun
> were plain crap to work with, and still are inconvenient, conservative
> stagnationware. They won't build free cool tools

Everybody but geeks _wants_ stagnationware, if you mean
something that just runs.  Even my old Sun Blade 100 at home
still has Solaris 9 on it, because I haven't had a day to kill to
split the mirror, load something newer like the last SXCE, and
get everything on there working.  (My other SPARC is running
a semi-recent SXCE, pending activation of an already-installed
most recent SXCE.  Sitting at a Sun, I still prefer CDE to GNOME,
and the best graphics card I have for that box won't work with
the newer Xorg server, so I can't see putting OpenSolaris on it.)

For instance, recent enough Solaris 10 updates to be able to do zfs
root are pretty decent; you get into the habit of doing live upgrades
even for patching, so you can minimize downtime.  Hardly stagnant,
considering that the initial release of Solaris 10 didn't even have
zfs in it yet.

> for Solaris, hence the whole thing will turn out to be a dry job for
> trained monkeys wearing suits in corporations. Nothing more. That's the
> philosophy of the last decade, but IT now is changing fast and is very
> different. That is why Oracle's idea to kill the community is totally
> stupid. And that's why IBM will win, because you run the same Linux on
> their hardware as you run at your home.
>
> Yes, Oracle will run good for a while, using the inertia of the hype
> (and their latest financial report proves that), but soon people will
> realize that Oracle is just another evil mean beast with great
> marketing and the same sh*tty products as they always had. Buy Solaris
> for any single little purpose? No way ever! I may buy support and/or
> security patches, updates. But not the OS itself. If that is the only
> option, then I'd rather stick to Linux from another vendor, i.e. RedHat.
> That will lead me to no more talk to Oracle about software at the OS
> level, only applications (if I am idiot enough to jump into APEX or
> something like that). Hence, if all I can do is talk only about
> hardware (well, not really, because no more hardware-only support!!!),
> then I'd better talk to IBM, if I need a brand and I consider myself
> too dumb to get SuperMicro instead. The IBM System x3550 M3 is still
> better by specs than the equivalent from Oracle, it is OEM if
> somebody needs that in the first place, and it is still cheaper than
> Oracle's similar class. And IBM stuff just works great (at least if we
> talk about hardware).

I'm not going to say you're wrong, because in part I agree
with you.  Systems people can run at home, desktops, laptops,
those are all what get future mindshare and eventually get
people with big bucks spending them.

But the simple fact that Sun went down suggests that
just being all lovey-dovey (and plenty of people thought that
Sun wasn't lovey-dovey _enough_?) won't keep you in business
either.

[...]
> > But for home users? I doubt it. I was about to build a big storage box
> > at home running OpenSolaris; I froze that project.

Mine's running SXCE, and unless I can find a solution
to getting decent graphics working with Xorg on it,
probably always will be.  But the big (well, target 9TB redundant;
presently 3TB redundant) storage is doing just fine.
Being super latest and greatest just isn't necessary for that.

> Same here. A lot of nice ideas and potential open-source tools are
> basically frozen and I think gonna be dumped. We (geeks) won't build
> stuff for Larry just for free. We need the OS back opened in reward. So I
> think OpenSolaris is pretty much game over, thanks to Oracle. Some
> Oracle fanboys might call it plain FUD, hope to get updates etc, but
> the reality is that Oracle is to OpenSolaris pretty much what
> Palm was to BeOS.
>
> Enjoy your last snv_134 build.
> 

I can't rule out that possibility, but I see some reasons
to think that it's worth being patient for a couple more
months.  As it is, I find myself updating my Mac and Windows
every darn week; so I'm pretty much past getting a kick out
of updating just to see what's kewl.


Re: [zfs-discuss] Legality and the future of zfs...

2010-07-16 Thread Richard L. Hamilton
> On Tue, 13 Jul 2010, Edward Ned Harvey wrote:
>
> > It is true there's no new build published in the last 3 months.  But you
> > can't use that to assume they're killing the community.
>
> Hmm, the community seems to think they're killing the community:
>
> http://developers.slashdot.org/story/10/07/14/1448209/OpenSolaris-Governing-Board-Closing-Shop?from=rss
> 
> 
> ZFS is great. It's pretty much the only reason we're running Solaris. But I
> don't have much confidence Oracle Solaris is going to be a product I'm
> going to want to run in the future. We barely put our ZFS stuff into
> production last year but quite frankly I'm already on the lookout for
> something to replace it.
>
> No new version of OpenSolaris (which we were about to start migrating to).
> No new update of Solaris 10. *Zero* information about what the hell's going
> on...

Presumably if you have a maintenance contract or some other
formal relationship, you could get an NDA briefing.  Not having
been to one yet myself, I don't know what that would tell you,
but presumably more than without it.

Still, the silence is quite unhelpful, and the apparent lack of
anyone willing to recognize that, and with the authority to do
anything about it, is troubling.


> ZFS will surely live on as the filesystem under the hood in the doubtlessly
> forthcoming Oracle "database appliances", and I'm sure they'll keep selling
> their NAS devices. But for home users? I doubt it. I was about to build a
> big storage box at home running OpenSolaris, I froze that project. Oracle
> is all about the money. Which I guess is why they're succeeding and Sun
> failed to the point of having to sell out to them. My home use wasn't
> exactly going to make them a profit, but on the other hand, the philosophy
> that led to my not using the product at home is a direct cause of my lack
> of desire to continue using it at work, and while we're not exactly a huge
> client we've dropped a decent penny or two in Sun's wallet over the years.

FWIW, you're not the only one that's tried to make that point!

> Who knows, maybe Oracle will start to play ball before August 16th and the
> OpenSolaris Governing Board won't shut themselves down. But I wouldn't hold
> my breath.

Postponement of respiration pending hypothetical actions by others
is seldom an effective survival strategy.

Nevertheless, the zfs on my Sun Blade 2000 currently running SXCE snv_97
(pending luactivate and reboot to switch to snv_129) is doing just fine
with what is presently 3TB of redundant storage, and will eventually grow
to 9TB as I populate the rest of the slots in my JBOD
(8 slots; 2 x 1TB mirror for root; presently also 2 x 2TB mirror for data,
but that will change to 5 x 2TB raidz + 1 2TB hot spare when I can afford
four more 2TB drives).

I have a spare power supply and some other odds and ends for the
Sun Blade 2000, so, with fingers crossed, it will run (and heat my house :-)
for quite some time to come, regardless of availability of future software
updates.  If not, I'm sure I have an ISO of SXCE 129 or so for x86 somewhere
too, which I could put on any cheap x86 box with a PCI-X slot for my SAS
controller, and just import the zpools and go.


Re: [zfs-discuss] ZFS mirror to RAIDz?

2010-07-16 Thread Erik Trimble

On 7/16/2010 5:54 AM, Ben wrote:

Hi all,

I currently have four drives in my OpenSolaris box.  The drives are split into
two mirrors, one mirror containing my rpool (disks 1 & 2) and one containing
other data (disks 3 & 4).

I'm running out of space on my data mirror and am thinking of upgrading it to 
two 2TB disks. I then considered replacing disk 2 with a 2TB disk and making a 
RAIDz from the three new drives.

I know this would leave my rpool vulnerable to hard drive failure, but I've got 
no data on it that can't be replaced with a reinstall.

Can this be done easily?  Or will I have to transfer all of my data to another 
machine and build the RAIDz from scratch, then transfer the data back?

Thanks for any advice,
Ben
   


You can't "convert" a mirror to a RAIDZ directly.  In your case, 
however, there is a bit of slight-of-hand that can work here.


Assume you have disks A, B, C, D, all the same size, where A & B are 
your rpool, and C & D are your datapool:


# zpool detach rpool B            (frees B from the root mirror)
# zpool detach datapool C          (datapool now runs on D alone)
# mkfile -n <size-of-disk> /foo    (sparse file standing in for the third disk)
# zpool create newpool raidz B C /foo
# zpool offline newpool /foo       (run the raidz degraded so /foo sees no I/O)
# rsync -a /datapool/. /newpool/.  (use whichever rsync options fit you best)

# zpool destroy datapool
# zpool replace newpool /foo D     (resilvers D into the raidz)
# rm /foo
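
Once the replace kicks off, it's worth confirming the resilver has completed
before touching any more drives:

# zpool status newpool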


During this process, the data on both of the now-unmirrored pools is exposed
to a single disk failure, and when it's complete, the rpool will of course
remain unprotected.



--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)



[zfs-discuss] ZFS mirror to RAIDz?

2010-07-16 Thread Ben
Hi all,

I currently have four drives in my OpenSolaris box.  The drives are split into 
two mirrors, one mirror containing my rpool (disks 1 & 2) and one containing 
other data (disks 3 & 4).

I'm running out of space on my data mirror and am thinking of upgrading it to 
two 2TB disks. I then considered replacing disk 2 with a 2TB disk and making a 
RAIDz from the three new drives.

I know this would leave my rpool vulnerable to hard drive failure, but I've got 
no data on it that can't be replaced with a reinstall.

Can this be done easily?  Or will I have to transfer all of my data to another 
machine and build the RAIDz from scratch, then transfer the data back?

Thanks for any advice,
Ben


Re: [zfs-discuss] Legality and the future of zfs...

2010-07-16 Thread Richard L. Hamilton
> Losing ZFS would indeed be disastrous, as it would leave Solaris with
> only the Veritas File System (VxFS) as a semi-modern filesystem, and a
> non-native FS at that (i.e. VxFS is a 3rd-party for-pay FS, which
> severely inhibits its uptake). UFS is just way too old to be competitive
> these days.

Having come to depend on them, the absence of some of the
features would certainly be significant.

But how come everyone forgets about QFS?
http://www.sun.com/storage/management_software/data_management/qfs/index.xml
http://en.wikipedia.org/wiki/QFS
http://hub.opensolaris.org/bin/view/Project+samqfs/WebHome