Re: [zfs-discuss] Thoughts on patching + zfs root

2006-11-16 Thread Ceri Davies
On Wed, Nov 15, 2006 at 04:45:02PM -0700, Lori Alt wrote:
 Ceri Davies wrote:
 On Tue, Nov 14, 2006 at 07:32:08PM +0100, [EMAIL PROTECTED] wrote:
   
 Actually, we have considered this.  On both SPARC and x86, there will be
 a way to specify the root file system (i.e., the bootable dataset) to be
 booted, at either the GRUB prompt (for x86) or the OBP prompt (for SPARC).
 If no root file system is specified, the current default 'bootfs'
 specified in the root pool's metadata will be booted.  But it will be
 possible to override the default, which will provide that fallback boot
 capability.
   
 I was thinking of some automated mechanism such as:
 
 - BIOS which, when reset during POST, will switch to safe
   defaults and enter setup
 - Windows which, when reset during boot, will offer safe mode
   at the next boot.
 
 I was thinking of something that, on activation of a new boot environment,
 would automatically fall back on catastrophic failure.
 
 
 I don't wish to sound ungrateful or unconstructive but there's no other
 way to say this: I liked ZFS better when it was a filesystem + volume
 manager rather than the one-tool-fits-all monster that it seems to be
 turning into.
 
 I'm very concerned about bolting some flavour of boot loader on to the
 side, particularly one that's automatic.  I'm not doubting that the
 concept is way cool, but I want predictable behaviour every time; not
 way cool.
   
 
 All of these ideas about automated recovery are just ideas.  I don't think
 we've reached monsterdom just yet.  For right now, the planned behavior
 is more predictable: there is one dataset specified as the 'default
 bootable dataset' for the pool.  You will have to take explicit action
 (something like luactivate) to change that default.  You will always have
 a failsafe archive to boot if something goes terribly wrong and you need
 to fix your menu.lst or set a different default bootable dataset.  You
 will also be able to have multiple entries in the menu.lst file,
 corresponding to multiple BEs, but that will be optional.
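 
 To make that concrete, the sequence might end up looking something like
 this (purely a sketch; none of this has shipped yet, so the names and
 syntax below are illustrative, not a commitment):
 
   # the root pool records one default bootable dataset; changing it
   # is an explicit, luactivate-style step
   zpool set bootfs=rpool/ROOT/new_be rpool
 
   # optional menu.lst entries, one per BE, for manual override at boot
   title Solaris (new BE)
         bootfs rpool/ROOT/new_be
   title Solaris (previous BE)
         bootfs rpool/ROOT/old_be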
 
 But I'm open to these ideas of automatic recovery.  It's an interesting
 thing to consider.  Ultimately, it might need to be something that is
 optional, so that we could also get behavior that is more predictable.

OK, thanks for the clarification.  Optional sounds good to me,
whatever the default may be.

And thanks again for working on the monster :)

Ceri
-- 
That must be wonderful!  I don't understand it at all.
  -- Moliere




[zfs-discuss] How to backup/clone all filesystems *and* snapshots in a zpool?

2006-11-16 Thread Peter Eriksson
Suppose I have a server that is used as a backup system for many other
(live) servers.  It uses ZFS snapshots to enable people to recover files
from any date a year back (or so).

Now, I want to back up this backup server to some kind of external stable
storage in case disaster happens and this ZFS-backup server's disks get
corrupted.

If I just back up the normal current filesystem on this backup server,
then I can always restore that and return to some known point - however,
all the snapshots are lost, so after such a crash my users won't be able
to get back old files.  I could restore multiple backups from various
dates, but that would use up a lot of disk space.

Is there some way to dump all the information from a ZFS filesystem?  I
suppose I *could* back up the raw disk devices that are used by the zpool,
but that'll eat up a lot of tape space...

Any suggestions?
 
 


Re: [zfs-discuss] How to backup/clone all filesystems *and* snapshots in a zpool?

2006-11-16 Thread Dick Davies

On 16/11/06, Peter Eriksson [EMAIL PROTECTED] wrote:


Is there some way to dump all the information from a ZFS filesystem?  I
suppose I *could* back up the raw disk devices that are used by the zpool,
but that'll eat up a lot of tape space...


If you want to have another copy somewhere, use zfs send/recv.
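
Roughly like this (untested sketch; the dataset and snapshot names are
made up, and you would repeat it for each filesystem in the pool):

  # full stream of the oldest snapshot you care about
  zfs send tank/home@2005-11 | zfs recv backup/home

  # then one incremental per later snapshot; these carry the snapshots
  # over to the receiving side as well
  zfs send -i tank/home@2005-11 tank/home@2005-12 | zfs recv backup/home
  zfs send -i tank/home@2005-12 tank/home@2006-01 | zfs recv backup/home

You can also redirect a stream to a file or tape instead of piping it
straight into zfs recv, at the cost of having to keep the whole chain
around in order to restore.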

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/


[zfs-discuss] SVM - UFS Upgrade

2006-11-16 Thread Dan Christensen
Is it possible to convert/upgrade a file system that is currently under the 
control of Solaris Volume Manager to ZFS?

Thanks
 
 


Re: [zfs-discuss] SVM - UFS Upgrade

2006-11-16 Thread Torrey McMahon
Not automagically. You'll need to do a dump/restore or copy from one to the 
other.
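
For example (a rough sketch, with invented device and dataset names, and
assuming the pool already exists):

  # dump the UFS filesystem on the SVM metadevice into a ZFS filesystem
  zfs create tank/export
  ufsdump 0f - /dev/md/rdsk/d10 | (cd /tank/export && ufsrestore rf -)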


----- Original Message -----
From: Dan Christensen [EMAIL PROTECTED]
To: zfs-discuss@opensolaris.org
Sent: Thursday, November 16, 2006 5:52:51 PM
Subject: [zfs-discuss] SVM - UFS Upgrade

Is it possible to convert/upgrade a file system that is currently under the 
control of Solaris Volume Manager to ZFS?

Thanks
 
 


Re: [zfs-discuss] SVM - UFS Upgrade

2006-11-16 Thread Darren Dunham
 Is it possible to convert/upgrade a file system that is currently
 under the control of Solaris Volume Manager to ZFS?

SVM or not doesn't really matter.  There's no method for converting an
existing filesystem to ZFS in place.  

You'll have to populate the ZFS pool after allocating storage to it.
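
In other words, something along these lines (disk and dataset names
invented):

  # give ZFS its own disks and build the pool
  zpool create tank mirror c2t0d0 c2t1d0
  zfs create tank/data

  # then copy the existing data in with whatever tool you trust
  cd /export/data && find . -depth -print | cpio -pdm /tank/data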

-- 
Darren Dunham   [EMAIL PROTECTED]
Senior Technical Consultant TAOShttp://www.taos.com/
Got some Dr Pepper?   San Francisco, CA bay area
  This line left intentionally blank to confuse you. 


Re: [zfs-discuss] SVM - UFS Upgrade

2006-11-16 Thread Fred Zlotnick

This is the Migration Problem: given a dataset on a non-ZFS file system,
what is the safest and easiest way to move it to a ZFS pool?  There
are two and a half cases:

1. You need to reuse the existing storage.
1.5 You have some extra storage, but not enough for 2 copies of all
   your data.
2. You can create the zpool on new storage.

Clearly, case 1 is the hardest, and we currently have no automated tool
that will do this.  Back up, destroy the existing file systems, create the
new pool and file systems, and restore will work, but (a) you are offline
for a period of time, and (b) you have to really trust your backup and
restore software.

Case 2 can often be solved using rdist.  You may have to quiesce
your file systems for a while.
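
If memory of the distfile syntax serves, a minimal one for a local copy
looks roughly like this (paths invented, and it assumes rsh/ssh back to
the local host works; test it on something small first):

  # distfile: copy /export/home onto the same host, landing in the new pool
  ( /export/home ) -> localhost
          install /tank/home ;

  # run it with: rdist -f distfile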

Case 1.5 can usually be solved with a lot of hacking around with
partial versions of case 2.  A lot depends on how your existing data
is organized.

We have a number of clever ideas for how to automate case 2, and some
ideas for case 1.5, but no bandwidth to implement them now.

Note that in all cases, it really pays to give some thought up front
to how you want your ZFS pools and file systems to be organized.  ZFS
removes many of the arbitrary constraints that may have governed your
existing structure; free yourself from those constraints.

I'm curious to hear of any migration success stories - or not - that
folks on this alias have experienced.  You can send them to me and
I'll summarize to the alias.

Thanks,
Fred

Darren Dunham wrote:

Is it possible to convert/upgrade a file system that is currently
under the control of Solaris Volume Manager to ZFS?


SVM or not doesn't really matter.  There's no method for converting an
existing filesystem to ZFS in place.  


You'll have to populate the ZFS pool after allocating storage to it.




[zfs-discuss] sharing a zfs file system

2006-11-16 Thread Sanjay Nadkarni


I am trying to NFS-share a ZFS file system using "zfs share filesystem".

I get an error saying:
   'sump/install_image': legacy share

I have not set the mountpoint to 'legacy' on this dataset.  Here's what zfs get shows:

sump/install_image  mountpoint  /sump/install_image  default

Thanks

-Sanjay





Re: [zfs-discuss] SVM - UFS Upgrade

2006-11-16 Thread Bill Sommerfeld
On Thu, 2006-11-16 at 16:08 -0800, Fred Zlotnick wrote:
 I'm curious to hear of any migration success stories - or not - that
 folks on this alias have experienced.  You can send them to me and
 I'll summarize to the alias.

I sent one to this list some months ago.

To recap, I used a variant of case 2: When I set up the original SVM+UFS
filesystem, I knew a zfs migration was coming so I held back sufficient
storage to permit me to create the first raidz group.

rsync worked nicely to copy the bits.  Once the move was complete, the
SVM+UFS filesystem was taken apart and the underlying disks added to the
pool.

It took a few months before usage levelled out between the first raidz
group and the ones added later.
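
In command form the sequence was roughly this (the metadevice, disk and
path names here are invented, not the real ones):

  # copy while the old filesystem is still live, then a final pass
  # with the filesystem quiesced
  rsync -aHx /export/home/ /tank/home/

  # retire the SVM+UFS side and hand its disks to the pool
  umount /export/home
  metaclear -r d10
  zpool add tank raidz c1t3d0 c1t4d0 c1t5d0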

- Bill




Re: [zfs-discuss] sharing a zfs file system

2006-11-16 Thread James Dickens

On 11/16/06, Sanjay Nadkarni [EMAIL PROTECTED] wrote:



I am trying to NFS-share a ZFS file system using "zfs share filesystem".

I get an error saying:
   'sump/install_image': legacy share

I have not set the mountpoint to 'legacy' on this dataset.  Here's what zfs get shows:




You need to set the 'sharenfs' property to control how you want the
filesystem(s) shared by default:

zfs set sharenfs=rw sump

is a good setting if you are on a secure network.
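
Note that sharenfs is inherited, so setting it on 'sump' covers
sump/install_image as well.  Something like the following (untested
against your setup) shows what each dataset ends up with, or shares just
the one filesystem:

  zfs get -r sharenfs sump
  zfs set sharenfs=rw sump/install_image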

James Dickens
uadmin.blogspot.com


sump/install_image  mountpoint  /sump/install_image  default


Thanks

-Sanjay




