crypto properties (Was: Re: [zfs-discuss] ZFS inode equivalent)

2007-02-02 Thread Darren J Moffat

Pawel Jakub Dawidek wrote:

On Thu, Feb 01, 2007 at 11:00:07AM +, Darren J Moffat wrote:

Neil Perrin wrote:

No it's not the final version or even the latest!
The current on disk format version is 3. However, it hasn't
diverged much and the znode/acl stuff hasn't changed.

and it will get updated as part of zfs-crypto, I just haven't done so yet 
because I'm not finished designing yet.


Do you consider adding a new property type (next to readonly and
inherit) - a "oneway" property? Such propery could be only set if the
dataset has no children, no snapshots and no data, and once set can't be
modified. "oneway" would be the type of the "encryption" property.
On the other hand you may still want to support encryption algorithm
change and most likely key change.


I'm not sure I understand what you are asking for.

My current plan is that, once set, the encryption property that describes 
which algorithm is used (mechanism actually: algorithm, key length and mode, eg 
aes-128-ccm) can not be changed, and it would be inherited by any clones. 
When creating new child file systems "rooted" in an encrypted filesystem you 
would be allowed to turn it off (I'd like to have a policy like the acl 
one here) but by default it would be inherited.


Key change is a very difficult problem because in some cases it can mean 
rewriting all previous data; in other cases it just means starting to use 
the new key now but keeping the old one.   Which is correct depends on why 
you are doing a key change.  Key change for data at rest is a very 
different problem space from rekey in a network protocol.


In theory the algorithm could be different per dnode_phys_t just like 
checksum/compression are today, however having aes-128 on one dnode and 
aes-256 on another causes a problem because you also need different keys 
for them, it gets even more complex if you consider the algorithm mode 
and if you choose completely different algorithms.  Having a different 
algorithm and key length will certainly be possible for different 
filesystems though (eg root with aes-128 and home with aes-256).
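
As a rough illustration only (the zfs-crypto property names and values were
still being designed when this was written, so treat the syntax below as a
hypothetical sketch rather than the final interface), per-filesystem algorithm
choice and inheritance into clones might look something like this:

  # hypothetical zfs-crypto syntax; dataset names are examples
  zfs create -o encryption=aes-128-ccm tank/root
  zfs create -o encryption=aes-256-ccm tank/home
  zfs snapshot tank/home@now
  zfs clone tank/home@now tank/home-clone   # the clone inherits aes-256-ccm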


Which I think is a long way of saying "yes".

--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS vs NFS vs array caches, revisited

2007-02-02 Thread Roch - PAE

Marion, this is a common misinterpretation:

"Anyway, I've also read that if ZFS notices it's using "slices" instead 
of
whole disks, it will not enable/use the write cache. "

The reality is that

ZFS turns on the write cache when it owns the
whole disk.

_Independently_ of that,

ZFS flushes the write cache when ZFS needs to ensure 
that data reaches stable storage.

The point is that the flushes occur whether or not ZFS turned
the caches on (caches might be turned on by some
other means outside the visibility of ZFS).

The problem is that the flush cache command means 2
different things to the 2 components.

To ZFS :

"put on stable storage"

To Storage:

"flush the cache"

Until we get this  house in order,  storage needs  to ignore
the requests.
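
One way to see, outside of ZFS, whether a particular disk's write cache is
actually on is format(1M)'s expert-mode cache menu; this is a manual check and
the menu entries can vary with the disk type and driver:

  # format -e            (select the disk in question)
  format> cache
  cache> write_cache
  write_cache> display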

-r

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Meta data corruptions on ZFS.

2007-02-02 Thread dudekula mastan

  Hi All,
   
  In my test setup, I have one zpool of size 1000 MB.
   
  On this zpool, my application writes 100 files, each of size 10 MB.
   
  The first 96 files were written successfully without any problem.
   
  But the 97th file was not written successfully; only 5 MB were written (the 
return value of the write() call). 
   
  Since it was a short write, my application tried to truncate the file to 5 MB. But 
ftruncate() is failing with an error message saying there is no space on the device.
   
  Have you ever seen this kind of error message?
   
  After the ftruncate() failure I checked the size of the 97th file, and it is strange. The 
size is 7 MB but the expected size is only 5 MB.
   
  Your help is appreciated.
   
  Thanks & Regards
  Mastan
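
One thing worth noting here: on a copy-on-write filesystem even a truncate has
to allocate a little space for new metadata, so a completely full pool can
fail ftruncate() with "no space" errors.  A quick sketch of things to check
(commands are generic; substitute your own pool and mountpoint names):

  zpool list                  # pool size, allocated and free space
  zfs list                    # per-dataset used/available
  zfs get quota,reservation   # any quota or reservation capping the dataset
  df -h /your/mountpoint      # what the application actually sees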
   

 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] FYI: ZFS on USB sticks (from Germany)

2007-02-02 Thread Constantin Gonzalez
Hi Richard,

Richard Elling wrote:
> FYI,
> here is an interesting blog on using ZFS with a dozen USB drives from
> Constantin.
> http://blogs.sun.com/solarium/entry/solaris_zfs_auf_12_usb

thank you for spotting it :).

We're working on translating the video (hope we get the lip-syncing right...)
and will then re-release it in an English version. BTW, we've now hosted
the video on YouTube so it can be embedded in the blog.

Of course, I'll then write an English version of the blog entry with the
tech details.

Please hang on for a week or two... :).

Best regards,
   Constantin

-- 
Constantin Gonzalez                          Sun Microsystems GmbH, Germany
Platform Technology Group, Global Systems Engineering  http://www.sun.de/
Tel.: +49 89/4 60 08-25 91   http://blogs.sun.com/constantin/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: crypto properties (Was: Re: [zfs-discuss] ZFS inode equivalent)

2007-02-02 Thread Pawel Jakub Dawidek
On Fri, Feb 02, 2007 at 08:46:34AM +, Darren J Moffat wrote:
> Pawel Jakub Dawidek wrote:
> >On Thu, Feb 01, 2007 at 11:00:07AM +, Darren J Moffat wrote:
> >>Neil Perrin wrote:
> >>>No it's not the final version or even the latest!
> >>>The current on disk format version is 3. However, it hasn't
> >>>diverged much and the znode/acl stuff hasn't changed.
> >>and it will get updated as part of zfs-crypto, I just haven't done so yet 
> >>because I'm not finished designing yet.
> >Do you consider adding a new property type (next to readonly and
> >inherit) - a "oneway" property? Such propery could be only set if the
> >dataset has no children, no snapshots and no data, and once set can't be
> >modified. "oneway" would be the type of the "encryption" property.
> >On the other hand you may still want to support encryption algorithm
> >change and most likely key change.
> 
> I'm not sure I understand what you are asking for.

I'm sorry, it seems I started my explanation at too deep a level. I started to
play with encryption on my own by creating a "crypto" compression
algorithm.  Currently there are a few types of properties (readonly,
inherited, etc.), but none of them seems to be suitable for encryption.
When you enable encryption there should either be no data, or you know that
existing data is going to be encrypted and the plaintext data securely
removed automatically. Of course the latter is much more complex to
implement.

> My current plan is that once set the encryption property that describes which 
> algorithm (mechanism actually: algorithm, key length and mode, eg 
> aes-128-ccm) can not be 
> changed, it would be inherited by any clones. Creating new child file systems 
> "rooted" in an encrypted filesystem you would be allowed to turn if off (I'd 
> like to have a 
> policy like the acl one here) but by default it would be inherited.

Right. I forgot that a dataset created under another dataset doesn't
share data with the parent.

> Key change is a very difficult problem because in some cases it can mean 
> rewriting all previous data, in other cases it just means start using the 
> new key now but keep the 
> old one.   Which is correct depends on why you are doing a key change.  Key 
> change for data at rest is a very different problem space from rekey in a 
> network protocol.

Key change is nice, and the possibility of an algorithm change is also nice
in case the one you use becomes broken.
What I'm doing in geli (my disk encryption software for FreeBSD) is to
use a random, strong master key, which is encrypted with the user's passphrase,
keyfiles, etc. This is nice because changing the user's passphrase doesn't
affect the master key, and thus doesn't cost any I/O operations.
Another nice thing about it is that you can have many copies of the
master key protected by different passphrases. For example, two people
can decrypt your data: you and a security officer in your company.
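
A minimal sketch of that wrapped-master-key idea (conceptual only -- this is
not geli's or ZFS's actual on-disk format, and openssl here merely stands in
for a real key-wrapping implementation):

  # generate a random master key once, then wrap it under a passphrase
  openssl rand -hex 32 > master.key
  openssl enc -aes-256-cbc -salt -in master.key -out master.key.wrapped   # prompts for a passphrase
  rm master.key                        # in practice, erase this securely

  # passphrase change: unwrap with the old passphrase, re-wrap with the new one;
  # the bulk data encrypted under the master key is never touched
  openssl enc -d -aes-256-cbc -in master.key.wrapped | \
      openssl enc -aes-256-cbc -salt -out master.key.wrapped.new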

On the other hand, changing the master key should also be possible.

A good starting point IMHO would be to support changing the user's passphrase
(keyfile, etc.) without touching the master key, and to document
changing the master key, algorithm, key length, etc. via e.g. a local zfs
send/recv.

> In theory the algorithm could be different per dnode_phys_t just like 
> checksum/compression are today, however having aes-128 on one dnode and 
> aes-256 on another causes a 
> problem because you also need different keys for them, it gets even more 
> complex if you consider the algorithm mode and if you choose completely 
> different algorithms.  
> Having a different algorithm and key length will certainly be possible for 
> different filesystems though (eg root with aes-128 and home with aes-256).

Maybe keys should be pool properties? You add a new key to the pool and
then assign the selected key to the given datasets. You can then "unlock"
the key using zpool(1M) or you'll be asked to "unlock" all keys used by
a dataset when you want to mount/attach it (file system or zvol). Once
the key is "unlocked", the remaining datasets that use the same key can
be mounted/attached automatically. Just a thought...

-- 
Pawel Jakub Dawidek   http://www.wheel.pl
[EMAIL PROTECTED]   http://www.FreeBSD.org
FreeBSD committer Am I Evil? Yes, I Am!


pgpnlZmnidmyi.pgp
Description: PGP signature
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: ZFS and thin provisioning

2007-02-02 Thread Andre Lue
thanks Darren! I got led down the wrong path by following newfs.

Now my other question is: how would you add raw storage to the vtank (virtual 
filesystem) as the usage approaches the current underlying raw storage?

Going forward, would you just add it in the normal fashion (I will try this 
when I can add another disk)?
zpool add vtank cXtXdX

Is there a performance hit for having what seems to be a ZFS filesystem on top 
of a zpool on top of a zpool?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS and thin provisioning

2007-02-02 Thread Wade . Stuart






[EMAIL PROTECTED] wrote on 02/02/2007 10:34:22 AM:

> thanks Darren! I got led down the wrong path by following newfs.
>
> Now my other question is. How would you add raw storage to the vtank
> (virtual filesystem) as the usage approached the current underlying
> raw storage?
>
> Would you going forward just simply in the normal fashion ( i will
> try this when I can add another disk)
> zpool add vtank cXtXdX
>
> Is there a performance hit for having what seems to be a zfs on top
> a zpool on top a zpool?
>


I would think so.  Also, a good test would be to write a few blocks more data
to the final fs than the backing sparse volume actually has available.
My gut feeling is that this will cause a panic on current Solaris.  Is there any
reason why this multi-layering is even allowed -- it seems risky and hackish?
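
A sketch of that overcommit test, for a scratch box only (device names and
sizes are made up; the second pool sits on a sparse zvol that advertises more
space than 'tank' can actually back):

  zpool create tank c1t1d0
  zfs create -s -V 100g tank/vtank              # sparse zvol, intentionally oversized
  zpool create vtank /dev/zvol/dsk/tank/vtank
  # now overcommit it (ksh/bash):
  i=1; while mkfile 1g /vtank/fill.$i; do i=$((i+1)); done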


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: crypto properties (Was: Re: [zfs-discuss] ZFS inode equivalent)

2007-02-02 Thread Nicolas Williams
On Fri, Feb 02, 2007 at 08:46:34AM +, Darren J Moffat wrote:
> My current plan is that once set the encryption property that describes 
> which algorithm (mechanism actually: algorithm, key length and mode, eg 
> aes-128-ccm) can not be changed, it would be inherited by any clones. 
> Creating new child file systems "rooted" in an encrypted filesystem you 
> would be allowed to turn if off (I'd like to have a policy like the acl 
> one here) but by default it would be inherited.
> 
> Key change is a very difficult problem because in some cases it can mean 
> rewritting all previous data, in other cases it just means start using 
> the new key now but keep the old one.   Which is correct depends on why 
> you are doing a key change.  Key change for data at rest is a very 
> different problem space from rekey in a network protocol.

Re-keying and algorithm change should be seen as related.

And encryption off -> null encryption algorithm.

> In theory the algorithm could be different per dnode_phys_t just like 
> checksum/compression are today, however having aes-128 on one dnode and 
> aes-256 on another causes a problem because you also need different keys 
> for them, it gets even more complex if you consider the algorithm mode 
> and if you choose completely different algorithms.  Having a different 
> algorithm and key length will certainly be possible for different 
> filesystems though (eg root with aes-128 and home with aes-256).

I don't see why having a different key per-dnode is so difficult,
particularly if the per-dnode key isn't randomly generated, but derived
from the filesystem's master key and the dnode number (then you don't
have to store it anywhere).  And you might have to have per-dnode keys
because of the birthday bound on key usage.  Just make sure that the
master key is always randomly generated, rather than derived from a
passphrase.
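
A tiny sketch of that derivation idea (illustrative only, not anything
zfs-crypto actually specifies -- any keyed PRF would do, HMAC-SHA256 is just
easy to demonstrate from a shell):

  MASTER_KEY=$(openssl rand -hex 32)
  DNODE=12345
  PER_DNODE_KEY=$(printf '%s' "$DNODE" | openssl dgst -sha256 -hmac "$MASTER_KEY" | awk '{print $NF}')
  echo "$PER_DNODE_KEY"     # deterministic per dnode, so it never needs to be stored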

Now, the cipher modes you'll use matter, as you could, over the life of
a file, end up using the same block more than once with the same key.  A
simple counter mode with the counter being based on the block address
would leak whether the same data has been written encrypted in the same
key; if the counter is based on the file offset and you have per-dnode
keys then the leak would be whether the same data has been written at
the same file offset in the same file.

Can you replace the ZFS checksum function with the MAC from an
authenticated encryption cipher mode?

If not then not using authenticated encryption would mean not having to
worry about where to store a MAC, and the existing ZFS checksum
functionality (using a cryptographic hash function) in combination with
encryption should buy you integrity protection semantics very close to
what you'd get with an authenticated cipher mode.

Nico
-- 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS and thin provisioning

2007-02-02 Thread Darren Dunham
> thanks Darren! I got led down the wrong path by following newfs.
> 
> Now my other question is. How would you add raw storage to the vtank (virtual 
> filesystem) as the usage approached the current underlying raw storage?

You just increase the storage in the underlying pool.  In my case, I'd
just add storage to 'tank'.

> Would you going forward just simply in the normal fashion ( i will try this 
> when I can add another disk)
> zpool add vtank cXtXdX

No.  vtank is already large (but sparse).  It's 'tank' that is limited
and needs to be increased.
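
In other words the growth happens one layer down (the device name below is
just an example):

  zpool add tank c2t3d0    # grow the real pool backing the sparse vtank
  zpool list tank vtank    # vtank's advertised size is unchanged; tank gains space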

> Is there a performance hit for having what seems to be a zfs on top a
> zpool on top a zpool?

I'm sure.  Possibly significant.  I did it based on Ben's article just
as an exercise. 

-- 
Darren Dunham   [EMAIL PROTECTED]
Senior Technical Consultant         TAOS            http://www.taos.com/
Got some Dr Pepper?   San Francisco, CA bay area
 < This line left intentionally blank to confuse you. >
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS inode equivalent

2007-02-02 Thread Nicolas Williams
On Fri, Feb 02, 2007 at 12:25:04AM +0100, Pawel Jakub Dawidek wrote:
> On Thu, Feb 01, 2007 at 11:00:07AM +, Darren J Moffat wrote:
> > Neil Perrin wrote:
> > >No it's not the final version or even the latest!
> > >The current on disk format version is 3. However, it hasn't
> > >diverged much and the znode/acl stuff hasn't changed.
> > 
> > and it will get updated as part of zfs-crypto, I just haven't done so yet 
> > because I'm not finished designing yet.
> 
> Do you consider adding a new property type (next to readonly and
> inherit) - a "oneway" property? Such propery could be only set if the
> dataset has no children, no snapshots and no data, and once set can't be
> modified. "oneway" would be the type of the "encryption" property.
> On the other hand you may still want to support encryption algorithm
> change and most likely key change.

I think you're asking about some sort of magic property behaviour.  I
imagine magic properties like these:

 - all_ascii

   True if all filesystem object names are US-ASCII only (i.e., there
   are no bytes with the high bit set in such names).


 - all_utf8

   True if all filesystem object names are UTF-8 only (in terms of
   encoding; the filesystem couldn't tell if there's any
   codeset/encoding aliasing going on, of course).


 - all_encrypted

   True if all content is encrypted.


 - encryption_start

   Timestamp from which all new content is encrypted.


and so on.  These are all magical in that you can't set them, or if you
can you can only set them to a subset of their possible values.

Nico
-- 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS and thin provisioning

2007-02-02 Thread Darren Dunham
> > Is there a performance hit for having what seems to be a zfs on top
> > a zpool on top a zpool?
> 
> I would think so.  Also a good test would be to write on the final fs a few
> blocks more data than the backing sparse volume actually has available.
> Gut feeling is that will cause a panic on current solaris.

That's what I'm assuming.  I might go ahead and force it later if I have
time to watch the action.  Of course I've done this on a somewhat old
build.  I'd rather do it on a recent one, and I'm unlikely to get around
to that anytime soon on this machine.

> Is there any reason why this multi layering is even allowed -- seems
> risky and hackish?

Is this something that an administrator is likely to do accidentally or
without understanding the consequences?  Even though it's risky, is it really
useless in all situations?  Otherwise, is it worth adding code to prevent it?

-- 
Darren Dunham   [EMAIL PROTECTED]
Senior Technical Consultant         TAOS            http://www.taos.com/
Got some Dr Pepper?   San Francisco, CA bay area
 < This line left intentionally blank to confuse you. >
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS and question on repetative data migrating to it efficiently...

2007-02-02 Thread jason
Hi all,

Longtime reader, first time poster. Sorry for the lengthy intro; I'm not 
really sure the title matches what I'm trying to get at... I am trying to find 
a solution where making use of a zfs filesystem can shorten our backup window.  
Currently, our backup solution takes data from ufs or vxfs filesystems on a 
split mirrored disk, mounts it off host, and then writes that directly to tape 
from the off-host backup system.  This equates to a "full" backup and requires 
a significant amount of time during which the split mirror is attached to that 
system before it can be returned.  I'd like to fit a zfs filesystem into the mix, 
hopefully make use of "space saving" snapshot capabilities, and find out if 
there is a known way to migrate this to an "incremental" backup with known 
freeware or OS-level tools.

I get that if the source data storage was originally zfs instead of ufs|vxfs, 
I'd be able to take snapshots of the storage pre-mirror-split, mount that 
storage on the offhost, and then take deltas from the different snapshots to 
turn into files, or have them applied directly to another zfs filesystem on the 
offhost that was originally created from information off that detached mirror.  
We could also do this without the mirror split and just do a zfs send and pipe 
the data out to a remote host where it would recreate that snapshot there.  It 
will take a while to get to where I can have zfs running in production as it 
might involve some brainwashing of some DBAs to get it done, so in the 
meantime, what are some thoughts on how to do this without data sitting on a 
zfs source?

Some questions I have are in regards to trying to keep this management of data 
at a "file" level.  I would like to use a zfs filesystem as the repository of 
data, and have that repository most efficiently house the data.  I'd like to 
see that if I sent over binary database files that were sourced on a ufs|vxfs 
filesystem to the zfs filesystem, and then took a snapshot of that data, how 
could I update the data on that zfs filesystem with more current files, and 
then have zfs recognize that the files are mostly the same, and only have some 
differing bits.  If a file with the same name is fully overwritten on the live 
zfs filesystem, can it use the same amount of "block" space that is 
used in the snapshot?  I don't know if I'm stating that 
all clearly.  I don't know how I can recreate data on a zfs filesystem to the 
point where a zfs snapshot makes use of the same data if it is the same.  I 
know that if I tar -c - | tar -x or find | cpio data onto a zfs filesystem, 
take a snapshot of that zfs fs, and then do the operation again to the same set 
of files, take another snapshot, both snapshots say they consume space equal to 
the total size of the files copied.  So they are not 
sharing the same space on blocks of the disk.  Do I understand that correctly?

What I'm really looking for is a way to shrink our backup window, by making use 
of some "tool" that can look at a binary file that is at 2 different points in 
time, say one on a zfs snapshot, and one from a different filesystem, i.e. a 
current split of a mirror housing a zfs|ufs|vxfs filesystem mounted on a host 
that can see both filesystems.  Is there a way to compare the 2 files, and just 
get portions that differ to get written to the copy that is on the zfs 
filesystem, so that after a new snapshot is taken, the zfs snapshot could see 
the amount of changes in that file to only be the delta of bits inside the file 
that changed?  I thought rsync could deal with this, yet I think if the timestamp 
changes on your source file, it considers the whole file as changed and would 
copy over the whole thing, yet I'm really not that versed in rsync and could be 
completely wrong.

I guess I'm more after that "tool".  I know there are agents that can run and 
poll Oracle database files, find out which bits changed, and write those 
off somewhere.  RMAN can do that, yet that still keeps things down at a DBA 
level, yet I need to keep this backup processing at the SA level.  I'm just 
trying to find out how to migrate our data in a way that is fast, reliable, and 
optimal.

Was checking out these threads: 
http://www.opensolaris.org/jive/thread.jspa?threadID=20276&tstart=0
http://www.opensolaris.org/jive/thread.jspa?threadID=22724&tstart=0

And now just saw an update to http://blogs.sun.com/AVS/.  Maybe all my answers 
lie there... Will dig around there for more, but would welcome feedback and 
ideas for this.

TIA
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS and question on repetative data migrating to it efficiently...

2007-02-02 Thread Wade . Stuart





[EMAIL PROTECTED] wrote on 02/02/2007 11:16:32 AM:

> Hi all,
>
> Longtime reader, first time poster Sorry for the lengthy intro
> and not really sure the title matches what I'm trying to get at... I
> am trying to find a solution where making use of a zfs filesystem
> can shorten our backup window.  Currently, our backup solution takes
> data from ufs or vxfs filesystems from a split mirrored disk, mounts
> it off host, and then writes that directly to tape from the off host
> backup system.  This equates to a "full" backup and requires a
> significant amount of time that the split mirror is attached to that
> system till it can get returned.  I'd like to fit in a zfs
> filesystem into the mix, and hopefully make use of "space saving"
> snapshot capabilities and find out if there is a known way to
> migrate this into a "incremental" backup, with known freeware or os
> level tools.

What we are playing with here is rsync --inplace diffs from vxfs/ufs (large
systems, 7+TB, millions of files) multiple times per day to thumper and
have the thumper then spool to tape via netbackup.  In our situation this
has shortened our full backup windows from 2 days on our largest systems to
< 1 hour.  On the thumper side we snap after each rsync and because of the
--inplace the differential data requirements for the snaps are very close
to actual data delta on the primary server.   This also allows for (in our
case) 1 -> 8 snaps per day to be kept live nearline over extended periods
with very little overhead -- and it reduces the number of production-side
snaps holding delta data.  Most restore requests are done via snaps.
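
The rough shape of one such cycle (host, dataset and path names below are
examples, not our exact setup):

  rsync -a --inplace --delete /export/data/ thumper:/backup/data/
  ssh thumper zfs snapshot backup/data@$(date +%Y%m%d-%H%M)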


>
> I get that if the source data storage was originally zfs instead of
> ufs|vxfs, that I'd be able to take snapshots of the storage pre
> mirror split, mount that storage on the offhost, and then take
> deltas from the different snapshots to turn into files, or get
> applied directly to another zfs filesystem on the offhost that was
> originally created from information off that detached mirror.  We
> could also do this without the mirror split and just do a zfs send
> and pipe the data out to a remote host where it would recreate that
> snapshot there.  It will take a while to get to where I can have zfs
> running in production as it might involve some brainwashing of some
> DBAs to get it done, so in the meantime, what are some thoughts on
> how to do this without data sitting on a zfs source?

This was the same issue we had,  zfs is missing some features and has some
performance issues in certain workflows that do not allow us to migrate
most of our production systems (yet).  rsync is working well for us in
lieu of zfs send/receive.

>
> Some questions I have are in regards to trying to keep this
> management of data at a "file" level.  I would like to use a zfs
> filesystem as the repository of data, and have that repository most
> efficiently house the data.  I'd like to see that if I sent over
> binary database files that were sourced on a ufs|vxfs filesystem to
> the zfs filesystem, and then took a snapshot of that data, how could
> I update the data on that zfs filesystem with more current files,
> and then have zfs recognize that the files are mostly the same, and
> only have some differing bits.  Can a snapshotted zfs filesystem,
> get a file that is named the same, overwritten fully on the live zfs
> filesystem, and use the same amount of "block" space that is used in
> the snapshot?  I don't know if I'm stating that all clearly.  I
> don't know how I can recreate data on a zfs filesystem to the point
> where a zfs snapshot makes use of the same data if it is the same.
> I know that if I tar -c - | tar -x or find | cpio data onto a zfs
> filesystem, take a snapshot of that zfs fs, and then do the
> operation again to the same set of files, take another snapshot,
> both snapshots say they consume the amount of space that included
> the total of the amount of files copied.  So they are not sharing
> the same space on blocks of the disk.  Do I understand that correctly?

again, rsync --inplace overlays only changed files in place so that only
blocks that change are rewritten -- minimizing snap delta cost.
>
> What I'm really looking for is a way to shrink our backup window, by
> making use of some "tool" that can look at a binary file that is at
> 2 different points in time, say one on a zfs snapshot, and one from
> a different filesystem, i.e. a current split of a mirror housing a
> zfs|ufs|vxfs filesystem mounted on a host that can see both
> filesystems.  Is there a way to compare the 2 files, and just get
> portions that differ to get written to the copy that is on the zfs
> filesystem, so that after a new snapshot is taken, the zfs snapshot
> could see the amount of changes in that file to only be the delta of
> bits inside the file that changed?  I thought rsync could deal with
> this, yet I think if timestamp changes on your source file, it
> considers the whole file as change

[zfs-discuss] Re: ZFS panic on B54

2007-02-02 Thread John Weekley
Looks like bad memory.  I removed the affected DIMM and haven't had any reboots 
in about 24hrs.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Project Proposal: Availability Suite

2007-02-02 Thread Torrey McMahon

Jason J. W. Williams wrote:

Hi Jim,

Thank you very much for the heads up. Unfortunately, we need the
write-cache enabled for the application I was thinking of combining
this with. Sounds like SNDR and ZFS need some more soak time together
before you can use both to their full potential together?


Well...there is the fact that SNDR works with filesystems other than ZFS. 
(Yes, I know this is the ZFS list.) Working around architectural issues 
for ZFS and ZFS alone might cause issues for others.


I think the best-of-both-worlds approach would be to let SNDR plug in to 
ZFS along the same lines the crypto stuff will be able to plug in, 
different compression types, etc. There once was a slide that showed how 
that worked... or I'm hallucinating again.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Thumper Origins Q

2007-02-02 Thread Torrey McMahon

Richard Elling wrote:


One of the benefits of ZFS is that not only is head synchronization not
needed, but also block offsets do not have to be the same.  For example,
in a traditional mirror, block 1 on device 1 is paired with block 1 on
device 2.  In ZFS, this 1:1 mapping is not required.  I believe this will
result in ZFS being more resilient to disks with multiple block failures.
In order for a traditional RAID to implement this, it would basically
need to [re]invent a file system.


We had this fixed in T3 land a while ago, so I think most storage arrays 
don't do the 1:1 mapping anymore. It's striped down the drives. In 
theory, you could lose more than one drive in a T3 mirror and still 
maintain data in certain situations.



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Thumper Origins Q

2007-02-02 Thread Torrey McMahon

Dale Ghent wrote:



Yeah sure it "might" eat into STK profits, but one will still have to 
go there for redundant controllers.


Repeat after me: There is no STK. There is only Sun. 8-)


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS panic on B54

2007-02-02 Thread Ian Collins
John Weekley wrote:

>Looks like bad memory.  I removed the affected DIMM and haven't had any 
>reboots in about 24hrs.
> 
>  
>
Give memtest86 a whirl on that system.

Ian
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] hot spares - in standby?

2007-02-02 Thread Torrey McMahon

Richard Elling wrote:


Good question. If you consider that mechanical wear out is what ultimately
causes many failure modes, then the argument can be made that a spun down
disk should last longer. The problem is that there are failure modes which
are triggered by a spin up.  I've never seen field data showing the
difference between the two.


Often, the spare is up and running but for whatever reason you'll have a 
bad block on it and you'll die during the reconstruct. Periodically 
checking the spare means reading and writing from over time in order to 
make sure it's still ok. (You take the spare out of the trunk, you look 
at it, you check the tire pressure, etc.) The issue I see coming down 
the road is that we'll start getting into a "Golden Gate paint job" 
where it takes so long to check the spare that we'll just keep the 
process going constantly. Not as much wear and tear as real i/o but it 
will still be up and running the entire time and you won't be able to 
spin the spare down.
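
One crude way to exercise an idle spare today is simply to read it end to end
now and then; this is read-only, but schedule it off-peak (device and slice
names are examples and depend on how the disk is labelled):

  dd if=/dev/rdsk/c3t5d0s0 of=/dev/null bs=1024k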



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Project Proposal: Availability Suite

2007-02-02 Thread Nicolas Williams
On Fri, Jan 26, 2007 at 05:15:28PM -0700, Jason J. W. Williams wrote:
> Could the replication engine eventually be integrated more tightly
> with ZFS? That would be slick alternative to send/recv.

But a continuous zfs send/recv would be cool too.  In fact, I think ZFS
tightly integrated with SNDR wouldn't be that much different from a
continuous zfs send/recv.
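
Something crudely approximating that is already possible with nothing more
than a loop (names below are examples, and it assumes an initial full
send/recv has seeded the remote side and set $LAST):

  while true; do
          NEW=tank/fs@repl.$(date +%Y%m%d%H%M%S)
          zfs snapshot $NEW
          zfs send -i $LAST $NEW | ssh remotehost zfs receive -F tank/fs
          zfs destroy $LAST
          LAST=$NEW
          sleep 60
  done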

Nico
-- 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS or UFS - what to do?

2007-02-02 Thread Torrey McMahon

Marion Hakanson wrote:

However, given the default behavior of ZFS (as of Solaris-10U3) is to
panic/halt when it encounters a corrupted block that it can't repair,
I'm re-thinking our options, weighing against the possibility of a
significant downtime caused by a single-block corruption.


Guess what happens when UFS finds an inconsistency it can't fix either?

The issue is that ZFS has the chance to fix the inconsistency if the 
zpool is a mirror or raidz, not just that it finds the inconsistency in the 
first place. ZFS will simply find more of them, given the same set of errors, 
than other filesystems do.




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Project Proposal: Availability Suite

2007-02-02 Thread Torrey McMahon

Nicolas Williams wrote:

On Fri, Jan 26, 2007 at 05:15:28PM -0700, Jason J. W. Williams wrote:
  

Could the replication engine eventually be integrated more tightly
with ZFS? That would be slick alternative to send/recv.



But a continuous zfs send/recv would be cool too.  In fact, I think ZFS
tightly integrated with SNDR wouldn't be that much different from a
continuous zfs send/recv.


Even better with snapshots, and scoreboarding, and synch vs asynch and 
and and and .


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Project Proposal: Availability Suite

2007-02-02 Thread Nicolas Williams
On Fri, Feb 02, 2007 at 03:17:17PM -0500, Torrey McMahon wrote:
> Nicolas Williams wrote:
> >But a continuous zfs send/recv would be cool too.  In fact, I think ZFS
> >tightly integrated with SNDR wouldn't be that much different from a
> >continuous zfs send/recv.
> 
> Even better with snapshots, and scoreboarding, and synch vs asynch and 
> and and and .

Right.  I hadn't thought of that.  A replication system that is well
integrated with ZFS should have very similar properties whether designed
as a journalling scheme or as a scoreboarding scheme.

A continuous zfs send/recv as I imagine it would be like journalling
while ZFS+SNDR would be more like scoreboarding.

Unlike traditional journalling replication, a continuous ZFS send/recv
scheme could deal with resource constraints by taking a snapshot and
throttling replication until resources become available again.
Replication throttling would mean losing some transaction history, but
since we don't expose that right now, nothing would be lost.

Scoreboarding (what SNDR does) should perform better in general, but in
the case of COW filesystems and databases ISTM that it should be a wash
unless it's properly integrated with the COW system, and that's what
makes me think scoreboarding and journalling approach each other at the
limit when integrated with ZFS.

In general I would expect journalling to have better reliability
semantics (since you always know exactly the last transaction that was
successfully replicated).

Nico
-- 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Project Proposal: Availability Suite

2007-02-02 Thread Torrey McMahon

Nicolas Williams wrote:

On Fri, Feb 02, 2007 at 03:17:17PM -0500, Torrey McMahon wrote:
  

Nicolas Williams wrote:


But a continuous zfs send/recv would be cool too.  In fact, I think ZFS
tightly integrated with SNDR wouldn't be that much different from a
continuous zfs send/recv.
  
Even better with snapshots, and scoreboarding, and synch vs asynch and 
and and and .



Right.  I hadn't thought of that.  A replication system that is well
integrated with ZFS should have very similar properties whether designed
as a journalling scheme or as a scoreboarding scheme.


Here's another thing to think about: ZFS is a COW filesystem. Even if 
I'm changing one piece of data over and over, which in the past might be 
a set of blocks, I'm going to be writing out new blocks on disk. Many 
replication strategies take into account the fact that even though your 
data is changing quite a bit, the actual block-level changes on storage are 
much smaller.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Project Proposal: Availability Suite

2007-02-02 Thread Jonathan Edwards


On Feb 2, 2007, at 15:35, Nicolas Williams wrote:


Unlike traditional journalling replication, a continuous ZFS send/recv
scheme could deal with resource constraints by taking a snapshot and
throttling replication until resources become available again.
Replication throttling would mean losing some transaction history, but
since we don't expose that right now, nothing would be lost.

Scoreboarding (what SNDR does) should perform better in general, but in
the case of COW filesystems and databases ISTM that it should be a wash
unless it's properly integrated with the COW system, and that's what
makes me think scoreboarding and journalling approach each other at the
limit when integrated with ZFS.


hmm .. a COW scoreboard .. visions of Clustra with the notion of "each node
is an atomic failure unit" spring to mind .. of course in this light, there's not
much of a difference between just replication and global synchronization ..


very interesting ..

---
.je
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Project Proposal: Availability Suite

2007-02-02 Thread Torrey McMahon

Jonathan Edwards wrote:


On Feb 2, 2007, at 15:35, Nicolas Williams wrote:


Unlike traditional journalling replication, a continuous ZFS send/recv
scheme could deal with resource constraints by taking a snapshot and
throttling replication until resources become available again.
Replication throttling would mean losing some transaction history, but
since we don't expose that right now, nothing would be lost.

Scoreboarding (what SNDR does) should perform better in general, but in
the case of COW filesystems and databases ISTM that it should be a wash
unless it's properly integrated with the COW system, and that's what
makes me think scoreboarding and journalling approach each other at the
limit when integrated with ZFS.


hmm .. a COW scoreboard .. visions of Clustra with the notion of "each node
is an atomic failure unit" spring to mind .. of course in this light, there's not
much of a difference between just replication and global synchronization ..


But would you want a COW scoreboard or a transaction log? Or would there 
be a difference? Is it Friday yet? I think we need to start drinking on 
this one. ;)



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] FYI: ZFS on USB sticks (from Germany)

2007-02-02 Thread Bart Smaalders

Constantin Gonzalez wrote:

Hi Richard,

Richard Elling wrote:

FYI,
here is an interesting blog on using ZFS with a dozen USB drives from
Constantin.
http://blogs.sun.com/solarium/entry/solaris_zfs_auf_12_usb


thank you for spotting it :).

We're working on translating the video (hope we get the lip-syncing right...)
and will then re-release it in an english version. BTW, we've now hosted
the video on YouTube so it can be embedded in the blog.

Of course, I'll then write an english version of the blog entry with the
tech details.

Please hang on for a week or two... :).

Best regards,
   Constantin



Brilliant video, guys.  I particularly liked the fellow
in the background with the hardhat and snow shovel :-).

The USB stick machinations were pretty cool, too.

- Bart


--
Bart Smaalders  Solaris Kernel Performance
[EMAIL PROTECTED]   http://blogs.sun.com/barts
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] corrupted files and improved 'zpool status -v'

2007-02-02 Thread eric kustarz

For your reading pleasure:
http://blogs.sun.com/erickustarz/entry/damaged_files_and_zpool_status

eric

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] FYI: ZFS on USB sticks (from Germany)

2007-02-02 Thread Artem Kachitchkine



Brilliant video, guys.


Totally agreed, great work.

Boy, would I like to see Peter Stormare in that video %)

-Artem.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: hot spares - in standby?

2007-02-02 Thread Anton B. Rang
> Often, the spare is up and running but for whatever reason you'll have a 
> bad block on it and you'll die during the reconstruct.

Shouldn't SCSI/ATA block sparing handle this?  Reconstruction should be purely 
a matter of writing, so "bit rot" shouldn't be an issue; or are there cases I'm 
not thinking of? (Yes, I know there are a limited number of spare blocks, but I 
wouldn't expect a spare which is turned off to develop severe media 
problems...am I wrong?)
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: ZFS panic on B54

2007-02-02 Thread Anton B. Rang
The affected DIMM?  Did you have memory errors before this?

The message you posted looked like ZFS encountered an error writing to the 
drive (which could, admittedly, have been caused by bad memory).
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: ZFS and question on repetative data migrating to it efficiently...

2007-02-02 Thread Anton B. Rang
In general, your backup software should handle making incremental dumps, even 
from a split mirror. What are you using to write data to tape? Are you simply 
dumping the whole file system, rather than using standard backup software?

ZFS snapshots use a pure copy-on-write model. If you have a block containing 
some data, and you write exactly the same data to that block, ZFS will allocate 
a new block for it. (It would be possible to change this, but I can't think of 
many environments where detecting duplicate blocks would be advantageous, since 
most synchronization tools won't copy duplicate blocks.)

rsync does actually detect unchanged portions of files and avoids copying them. 
However, I'm not sure if it also avoids *rewriting* them, so it may not help 
you.

You also wrote:
>RMAN can [collect changes at the block level from Oracle files], yet that 
>still keeps things
>down at a DBA level, yet I need to keep this backup processing at the SA level.

This sounds like you have a political problem that really should be fixed. 
Splitting a mirror is not sufficient to have an Oracle backup from which you 
can safely restore, so the DBAs must already be cooperating with the SAs on 
backups. Proper use of the database backup tools can make the backup window 
shorter and 

zfs send/receive can be used to back up only changed blocks; vxfs also has 
incremental block-based backup available, but the licensing fees may be high.
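
For example (dataset and host names are placeholders):

  zfs snapshot dbpool/oradata@monday
  # ...a day of changes later...
  zfs snapshot dbpool/oradata@tuesday
  zfs send -i dbpool/oradata@monday dbpool/oradata@tuesday | \
      ssh backuphost zfs receive backup/oradata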
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS vs NFS vs array caches, revisited

2007-02-02 Thread Marion Hakanson
[EMAIL PROTECTED] said:
> The reality is that
>   ZFS turns on the write cache when it owns the
>   whole disk.
> _Independently_ of that,
>   ZFS flushes the write cache when ZFS needs to ensure 
>   that data reaches stable storage.
> 
> The point is that the flushes occur whether or not ZFS turned the caches on
> (caches might be turned on by some other means outside the visibility
> of ZFS). 

Thanks for taking the time to clear this up for us (assuming others besides
me had this misunderstanding :-).

Yet today I measured something that leaves me puzzled again.  How can we
explain the following results?

# zpool status -v
  pool: bulk_zp1
 state: ONLINE
 scrub: none requested
config:

        NAME                                                  STATE     READ WRITE CKSUM
        bulk_zp1                                              ONLINE       0     0     0
          raidz1                                              ONLINE       0     0     0
            c6t4849544143484920443630303133323230303230d0s0  ONLINE       0     0     0
            c6t4849544143484920443630303133323230303230d0s1  ONLINE       0     0     0
            c6t4849544143484920443630303133323230303230d0s2  ONLINE       0     0     0
            c6t4849544143484920443630303133323230303230d0s3  ONLINE       0     0     0
            c6t4849544143484920443630303133323230303230d0s4  ONLINE       0     0     0
            c6t4849544143484920443630303133323230303230d0s5  ONLINE       0     0     0
            c6t4849544143484920443630303133323230303230d0s6  ONLINE       0     0     0

errors: No known data errors
# prtvtoc -s /dev/rdsk/c6t4849544143484920443630303133323230303230d0
*                            First       Sector      Last
* Partition  Tag  Flags      Sector      Count       Sector      Mount Directory
       0      4    00            34   613563821   613563854
       1      4    00     613563855   613563821  1227127675
       2      4    00    1227127676   613563821  1840691496
       3      4    00    1840691497   613563821  2454255317
       4      4    00    2454255318   613563821  3067819138
       5      4    00    3067819139   613563821  3681382959
       6      4    00    3681382960   613563821  4294946780
       8     11    00    4294946783       16384  4294963166
# 

And, at a later time:
# zpool status -v bulk_sp1s
  pool: bulk_sp1s
 state: ONLINE
 scrub: none requested
config:

        NAME                                                STATE     READ WRITE CKSUM
        bulk_sp1s                                           ONLINE       0     0     0
          c6t4849544143484920443630303133323230303230d0s0  ONLINE       0     0     0
          c6t4849544143484920443630303133323230303230d0s1  ONLINE       0     0     0
          c6t4849544143484920443630303133323230303230d0s2  ONLINE       0     0     0
          c6t4849544143484920443630303133323230303230d0s3  ONLINE       0     0     0
          c6t4849544143484920443630303133323230303230d0s4  ONLINE       0     0     0
          c6t4849544143484920443630303133323230303230d0s5  ONLINE       0     0     0
          c6t4849544143484920443630303133323230303230d0s6  ONLINE       0     0     0

errors: No known data errors
# 


The storage is that same "single 2TB LUN" I used yesterday, except I've
used "format" to slice it up into 7 equal chunks, and made a raidz
(and later a simple striped) pool across all of them.  My "tar over NFS"
benchmark on these goes pretty fast.  If ZFS is making the flush-cache call,
it sure works faster than in the whole-LUN case:

ZFS on whole-disk FC-SATA LUN via NFS, yesterday:
real 968.13
user 0.33
sys 0.04
  7.9 KB/sec overall

ZFS on whole-disk FC-SATA LUN via NFS, ssd_max_throttle=32 today:
real 664.78
user 0.33
sys 0.04
  11.4 KB/sec overall

ZFS raidz on 7 slices of FC-SATA LUN via NFS today:
real 12.32
user 0.32
sys 0.03
  620.2 KB/sec overall

ZFS striped on 7 slices of FC-SATA LUN via NFS today:
real 6.51
user 0.32
sys 0.03
  1178.3 KB/sec overall

Not that I'm complaining, mind you.  I appear to have stumbled across
a way to get NFS over ZFS to work at a reasonable speed, without making
changes to the array (nor resorting to giving ZFS SVM soft partitions
instead of "real" devices).  Suboptimal, mind you, but it's workable
if our Hitachi folks don't turn up a way to tweak the array.

Guess I should go read the ZFS source code (though my 10U3 surely lags
the Opensolaris stuff).

Thanks and regards,

Marion


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Project Proposal: Availability Suite

2007-02-02 Thread mike

My two (everyman's) cents - could something like this be modeled after
MySQL replication or even something like DRBD (drbd.org) ? Seems like
possibly the same idea.

On 1/26/07, Jim Dunham <[EMAIL PROTECTED]> wrote:

Project Overview:
...

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: ZFS and question on repetative data migrating to it efficiently...

2007-02-02 Thread jason
> In general, your backup software should handle making
> incremental dumps, even from a split mirror. What are
> you using to write data to tape? Are you simply
> dumping the whole file system, rather than using
> standard backup software?
> 
We are using Veritas Netbackup 5 MP4.  It is performing a backup of a vxfs 
filesystem that gets mounted from a BCV (business continuance volume) volume 
split off a set of mirrors from an EMC DMX.  The dbf files prior to split are 
placed into hot backup mode so that the files are in a consistent state.  VxFS 
with checkpoints taken can record differing blocks between checkpoints, and 
house them for the ability to mount those checkpoints as read-only or 
read-write, yet the no-data method of taking checkpoints is a feature that 
would record the different blocks that change and could pipe that data to the 
netbackup master server, provided you have the correct netbackup agents that 
can do this.  Our issue is that the supported method of grabbing no-data 
checkpoints on a vxfs filesystem requires this to happen on the 
database host itself.  We would like, if possible, to keep the backups performed 
offhost, and not steal any cpu cycles from the database host.  AFAIK, there is 
not a method that Veritas Netbackup has that can see differences in files on a 
filesystem that gets presented to a host, then backed up on that host, and then 
BCV disks reattached|resynched to their source (hidden from the backup host 
when getting resynched|reattached to source mirrors), then resplit and 
remounted on the backup host for another backup cycle.  Netbackup does not know 
which blocks have changed, so it would treat all the dbf files as 
different from the previous backup and back up the whole file = a full backup.  I'd 
love to implement an incremental backup for these files, yet don't know which 
agents can do this if the storage is not present all the time on the off-host 
backup server.  Maybe that is not an issue, but would need to research more to 
see if it is.

> ZFS snapshots use a pure copy-on-write model. If you
> have a block containing some data, and you write
> exactly the same data to that block, ZFS will
> allocate a new block for it. (It would be possible to
> change this, but I can't think of many environments
> where detecting duplicate blocks would be
> advantageous, since most synchronization tools won't
> copy duplicate blocks.)
> 
I guess I understand this.  So anytime any new file is created, it will take a 
new block.  There would be no point in copying the same file on top of itself, 
yet I wanted to find an application that could see differences between 2 files, 
and if some bits are identical, do not change them, just change the differing 
bits to get the 2 files equivalent.  But if it goes to the point of rewriting 
the whole file, then no snapshot space saving is accomplished.

> rsync does actually detect unchanged portions of
> files and avoids copying them. However, I'm not sure
> if it also avoids *rewriting* them, so it may not
> help you.
> 
> You also wrote:
> >RMAN can [collect changes at the block level from
> Oracle files], yet that still keeps things
> >down at a DBA level, yet I need to keep this backup
> processing at the SA level.
> 
> This sounds like you have a political problem that
> really should be fixed. Splitting a mirror is not
> sufficient to have an Oracle backup from which you
> can safely restore, so the DBAs must already be
> cooperating with the SAs on backups. Proper use of
> the database backup tools can make the backup window
> shorter and 
> 
> zfs send/receive can be used to back up only changed
> blocks; vxfs also has incremental block-based backup
> available, but the licensing fees may be high.

It is true that our DBAs do support our existing configuration, yet I feel 
that full backups for each backup window are not the fastest method of backup.  
And if you have to back up fully, you need to restore fully to get a database 
back into a previous state.  This is for the method that we have, in that we do 
not do backups directly from the database host.  So our method for restore 
would be to restore the data from tapes back to a BCV disk, and then 
reverse-sync those disks (a BCV restore sync) back to the original.  This 
would be a time-consuming process, yet could be quicker on machines that have a 
vxfs filesystem on them since they could have a checkpoint remounted to be the 
live filesystem.  I failed to mention earlier that we do have some databases 
running on ufs filesystems and rely on this BCV synch|split process to back up 
their data off-host.  So anytime we would need to restore any data from tapes, 
it will be a long process.  

I'll keep looking into Veritas Netbackup agents and what solutions are 
available for off-host backup of Oracle database files.  Maybe I can spawn an 
Oracle database to read the content of the BCV which gets mounted on the 
Netbackup master server 

[zfs-discuss] Which label a ZFS/ZPOOL device has ? VTOC or EFI ?

2007-02-02 Thread dudekula mastan
Hi All,
   
  ZPOOL/ZFS commands write an EFI label on a device if we create a zpool/ZFS 
filesystem on it. Is that true?
   
  I formatted a device with a VTOC label and created a ZFS file system on it.
   
  Now which label does the ZFS device have? Is it the old VTOC label or EFI?
   
  After creating the ZFS file system on the VTOC-labeled disk, I am seeing the 
following warning messages:
   
  Feb  3 07:47:00 scoobyb Corrupt label; wrong magic number
  Feb  3 07:47:00 scoobyb scsi: [ID 107833 kern.warning] WARNING: 
/scsi_vhci/[EMAIL PROTECTED] (ssd156):
   
  Any idea about this?
   
  Your help is appreciated.
   
  Thanks & Regards
  Masthan
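
A few ways to check which label a disk ended up with (device names below are
examples): zpool status shows whether ZFS was given the whole disk (cXtXdX,
which it relabels with EFI) or only a slice (cXtXdXsN, which leaves the
existing label alone), and the label itself can be inspected by hand:

  zpool status                  # whole-disk vdevs show up as cXtXdX, slices as cXtXdXsN
  prtvtoc /dev/rdsk/c1t2d0s0    # an EFI label shows a reserved slice 8 and no backup slice 2
  format -e                     # the label menu offers SMI (VTOC) vs EFI explicitly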


 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss