Re: [zfs-discuss] zpool split failing

2012-04-17 Thread Matt Keenan

Hi Cindy,

Tried out your example below in a vbox env; detaching a device from 
a pool makes that device simply unavailable, and it cannot be 
re-imported.


I then tried setting up a mirrored rpool within a vbox env (granted, 
neither device is USB). When booted into that rpool, the split worked. 
I then tried booting directly into the rpool on the faulty laptop, and 
the split still failed.


My only conclusions for the failure are:
 - The rpool I'm attempting to split has a LOT of history. It's been 
around for some 2 years now and has gone through a lot of upgrades 
etc., so there may be some ZFS history there that's not letting this 
happen. BTW the pool version is 33, which is current.
- Or is it possible that one of the devices being a USB device is 
causing the failure ? I don't know.


My reason for splitting the pool was so I could attach the clean USB 
rpool to another laptop, attach the disk from the new laptop, let it 
resilver, run installgrub on the new laptop's disk device, and boot it 
up; I would be back in action.


As a workaround I'm simply attaching my USB rpool to the new laptop 
and using zpool replace to effectively replace the offline device with 
the new laptop's disk device. So far so good, 12% resilvered, so 
fingers crossed this will work.
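
For the archives, that replace-and-reboot sequence would look roughly 
like this. This is a sketch, not a transcript: the device names are 
placeholders, and the installgrub paths assume a standard x86 
OpenSolaris/S11 layout.

   # zpool replace rpool <old-laptop-disk> <new-laptop-disk>
   # zpool status rpool
     (wait for the resilver to reach 100%)
   # installgrub /boot/grub/stage1 /boot/grub/stage2 \
       /dev/rdsk/<new-laptop-disk>s0

Once the resilver completes and the boot blocks are installed, the new 
laptop's internal disk should be bootable on its own.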


As an aside, I have noticed that the old laptop would not boot if the 
USB part of the mirror was not attached; a successful boot could only 
be achieved when both mirror devices were online. Is this a known 
issue with ZFS ? A bug ?


cheers

Matt


On 04/16/12 10:05 PM, Cindy Swearingen wrote:

Hi Matt,

I don't have a way to reproduce this issue and I don't know why
this is failing. Maybe someone else does. I know someone who
recently split a root pool running the S11 FCS release without
problems.

I'm not a fan of root pools on external USB devices.

I haven't tested these steps in a while but you might try
these steps instead. Make sure you have a recent snapshot
of your rpool on the unhealthy laptop.

1. Ensure that the existing root pool and disks are healthy.

# zpool status -x

2. Detach the USB disk.

# zpool detach rpool disk-name

3. Connect the USB disk to the new laptop.

4. Force import the pool on the USB disk.

# zpool import -f rpool rpool2

5. Device cleanup steps, something like:

Boot from media and import rpool2 as rpool.
Make sure the device info is visible.
Reset BIOS to boot from this disk.

On 04/16/12 04:12, Matt Keenan wrote:

Hi

Attempting to split a mirrored rpool fails with the error :

Unable to split rpool: pool already exists


I have a laptop with main disk mirrored to an external USB. However as
the laptop is not too healthy I'd like to split the pool into two pools
and attach the external drive to another laptop and mirror it to the new
laptop.

What I did :

- Booted laptop into a live DVD

- Import the rpool:
$ zpool import rpool

- Attempt to split :
$ zpool split rpool rpool-ext

- Error message shown and split fails :
Unable to split rpool: pool already exists

- So I tried exporting the pool
and re-importing it with a different name, and I still get the same
error. There are no other zpools on the system; both zpool list and
zpool export show nothing other than the rpool I've just imported.

I'm somewhat stumped... any ideas ?

cheers

Matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




Re: [zfs-discuss] zpool split failing

2012-04-17 Thread Matt Keenan

On 04/17/12 01:00 PM, Jim Klimov wrote:

2012-04-17 14:47, Matt Keenan wrote:

- or is it possible that one of the devices being a USB device is
causing the failure ? I don't know.


Might be; I've got little experience with those besides LiveUSB
images ;)


My reason for splitting the pool was so I could attach the clean USB
rpool to another laptop and simply attach the disk from the new laptop,
let it resilver, installgrub to new laptop disk device and boot it up
and I would be back in action.


If the USB disk split-off were to work, I'd rather try booting
the laptop off the USB disk, if BIOS permits, or I'd boot off
a LiveCD/LiveUSB (if Solaris 11 has one - or from installation
media and break out into a shell) and try to import the rpool
from USB disk and then attach the laptop's disk to it to resilver.


This is exactly what I am doing: booted the new laptop into a LiveCD, 
imported the USB pool, and am using zpool replace to replace the old 
laptop's disk device, which is in a degraded state, with the new 
laptop's disk device (after I partitioned it to keep the Windows 
install).





As a workaround I'm trying to simply attach my USB rpool to the new
laptop and use zfs replace to effectively replace the offline device
with the new laptop disk device. So far so good, 12% resilvering, so
fingers crossed this will work.


Won't this overwrite the USB disk with the new laptop's (empty)
disk? The way you describe it...


No, the offline disk in this instance is the old laptop's internal 
disk; the online device is the USB drive.





As an aside, I have noticed that on the old laptop, it would not boot if
the USB part of the mirror was not attached to the laptop, successful
boot could only be achieved when both mirror devices were online. Is
this a know issue with ZFS ? bug ?


It shouldn't be, as mirrors are meant to protect against disk failures.
What was your rpool's failmode pool-level property?
It might have some relevance, though it should only define the kernel's
reaction to catastrophic failures of the pool, and the loss of one
side of a mirror IMHO should not be one. Try failmode=continue and
see if that helps the rpool, to be certain. I think that's what the
installer should have done.


Exactly what I would have thought: ZFS should actually help here, not 
hinder. From what I can see, the default failmode as set by the 
installer is 'wait', which is exactly what is happening when I attempt 
to boot.


Just tried setting failmode=continue and unfortunately it still fails 
to boot; failmode=wait is definitely the default.
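
For reference, checking and changing the property goes something like 
this (rpool assumed; the sample output is illustrative):

   # zpool get failmode rpool
   NAME   PROPERTY  VALUE     SOURCE
   rpool  failmode  wait      default
   # zpool set failmode=continue rpool

The property only takes effect for I/O failures once the pool is 
imported, which may be why it made no difference to the boot-time 
behaviour here.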


cheers

Matt



[zfs-discuss] zpool split failing

2012-04-16 Thread Matt Keenan

Hi

Attempting to split a mirrored rpool fails with the error :

  Unable to split rpool: pool already exists


I have a laptop with main disk mirrored to an external USB. However as 
the laptop is not too healthy I'd like to split the pool into two pools 
and attach the external drive to another laptop and mirror it to the new 
laptop.


What I did :

- Booted laptop into a live DVD

- Import the rpool:
  $ zpool import rpool

- Attempt to split :
  $ zpool split rpool rpool-ext

- Error message shown and split fails :
  Unable to split rpool: pool already exists

- So I tried exporting the pool
  and re-importing it with a different name, and I still get the same
  error. There are no other zpools on the system; both zpool list and
  zpool export show nothing other than the rpool I've just imported.

I'm somewhat stumped... any ideas ?

cheers

Matt


Re: [zfs-discuss] Accessing Data from a detached device.

2012-03-30 Thread Matt Keenan

Hi,
As an addendum to this, I'm curious about how to grow the split pool in 
size.


Scenario: a mirrored pool comprising two disks, one 200GB and the 
other 300GB; naturally the size of the mirrored pool is 200GB, i.e. 
the smaller of the two devices.


I ran some tests within a vbox env and I'm curious why, after a zpool 
split, the pool with the larger disk does not increase in size to 
300GB; both pools remain at 200GB even if I export/import them. Sizes 
are reported via zpool list.


I checked the labels; both disks have a single EFI partition consuming 
100% of each disk, and format/partition shows slice 0 on both disks 
also consuming the entire disk.


So how does one force the pool with the larger disk to increase in size ?

cheers

Matt

On 03/30/12 12:55 AM, Daniel Carosone wrote:

On Thu, Mar 29, 2012 at 05:54:47PM +0200, casper@oracle.com wrote:

Is it possible to access the data from a detached device from an
mirrored pool.

If it is detached, I don't think there is a way to get access
to the mirror.  Had you used split, you should be able to reimport it.

(You can try aiming zpool import at the disk but I'm not hopeful)

The uberblocks have been invalidated as a precaution, so no.

If it's too late to use split instead of detach, see this thread:

  http://thread.gmane.org/gmane.os.solaris.opensolaris.zfs/15796/focus=15929

I renew my request for someone to adopt and nurture this tool.

--
Dan.




Re: [zfs-discuss] Accessing Data from a detached device.

2012-03-30 Thread Matt Keenan

Casper,

Yep, that's the lad. I set autoexpand to on and the split pool expands.
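
For anyone hitting this in the archives later, the sequence was 
roughly as follows (pool name is a placeholder):

   # zpool get autoexpand pool2
   # zpool set autoexpand=on pool2
   # zpool list pool2
     (SIZE should now reflect the larger disk)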

thanks

Matt

On 03/30/12 02:15 PM, casper@oracle.com wrote:

Hi,
As an addendum to this, I'm curious about how to grow the split pool in
size.

Scenario, mirrored pool comprising of two disks, one 200GB and the other
300GB, naturally the size of the mirrored pool is 200GB e.g. the smaller
of the two devices.

I ran some tests within vbox env and I'm curious why after a zpool split
one of the pools does not increase in size to 300gb, yet for some reason
both pools remain at 200gb even if I export/import them. Sizes are
reported via zpool list.

I checked the label, both disks have a single EFI partition consuming
100% of each disk. and format/partition shows slice 0 on both disks also
consuming the entire disk respectively.

So how does one force the pool with the larger disk to increase in size ?


What is the autoexpand setting (I think it is off by default)?


zpool get autoexpand splitted-pool


Casper





[zfs-discuss] Accessing Data from a detached device.

2012-03-29 Thread Matt Keenan

Hi,

Is it possible to access the data on a device detached from a 
mirrored pool?


Given a two-device mirrored pool, if you zpool detach one device, can 
the data on the removed device be accessed by some means? From what I 
can see you can attach the device back to the original pool, but this 
will simply resilver everything from the already-attached device back 
onto this device.


If I attach this device to a different pool it will simply get 
overwritten.


Any ideas ?

cheers

Matt


Re: [zfs-discuss] Accessing Data from a detached device.

2012-03-29 Thread Matt Keenan

Cindy/Casper,

Thanks for the pointer; luckily I hadn't done the detach before 
sending the email. split seems the way to go.


thanks again

Matt

On 03/29/12 05:13 PM, Cindy Swearingen wrote:

Hi Matt,

There is no easy way to access data from a detached device.

You could try to force import it on another system or under
a different name on the same system with the remaining device.

The easiest way is to split the mirrored pool. See the
steps below.

Thanks,

Cindy


# zpool status pool
  pool: pool
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Wed Mar 28 15:58:44 
2012

config:

NAME   STATE READ WRITE CKSUM
pool   ONLINE   0 0 0
  mirror-0 ONLINE   0 0 0
c0t2014C3F04F4Fd0  ONLINE   0 0 0
c0t2014C3F04F38d0  ONLINE   0 0 0

errors: No known data errors
# zpool split pool pool2
# zpool import pool2
# zpool status pool pool2
  pool: pool
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Wed Mar 28 15:58:44 
2012

config:

NAME STATE READ WRITE CKSUM
pool ONLINE   0 0 0
  c0t2014C3F04F4Fd0  ONLINE   0 0 0

errors: No known data errors

  pool: pool2
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Wed Mar 28 15:58:44 
2012

config:

NAME STATE READ WRITE CKSUM
pool2ONLINE   0 0 0
  c0t2014C3F04F38d0  ONLINE   0 0 0

errors: No known data errors
#



On 03/29/12 09:50, Matt Keenan wrote:

Hi,

Is it possible to access the data from a detached device from an
mirrored pool.

Given a two device mirrored pool, if you zpool detach one device. Can
the data on the removed device be accessed in some means. From what I
can see you can attach the device back to the original pool, but this
will simply re-silver everything from the already attached device back
onto this device.

If I attached this device to a different pool it will simply get
overwritten.

Any ideas ?

cheers

Matt




Re: [zfs-discuss] Ensure Newly created pool is imported automatically in new BE

2011-06-01 Thread Matt Keenan

Dan,

Tried exporting 'data' after beadm umount, but on reboot the 'data' 
pool is simply not imported at all.


So exporting 'data' before reboot doesn't appear to help.

thanks

Matt

On 06/01/11 01:35, Daniel Carosone wrote:

On Tue, May 31, 2011 at 05:32:47PM +0100, Matt Keenan wrote:

Jim,

Thanks for the response, I've nearly got it working, coming up against a
hostid issue.

Here's the steps I'm going through :

- At end of auto-install, on the client just installed before I manually
reboot I do the following :
   $ beadm mount solaris /a
   $ zpool export data
   $ zpool import -R /a -N -o cachefile=/a/etc/zfs/zpool.cache data
   $ beadm umount solaris
   $ reboot

- Before rebooting I check /a/etc/zfs/zpool.cache and it does contain
references to data.

- On reboot, the automatic import of data is attempted however following
message is displayed :

  WARNING: pool 'data' could not be loaded as it was last accessed by
another system (host: ai-client hostid: 0x87a4a4). See
http://www.sun.com/msg/ZFS-8000-EY.

- Host id on booted client is :
   $ hostid
   000c32eb

As I don't control the import command on boot i cannot simply add a -f
to force the import, any ideas on what else I can do here ?

Can you simply export the pool again before rebooting, but after the
cachefile in /a has been unmounted?

--
Dan.
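
In other words, the suggested ordering would be something like this 
(a sketch only; 'solaris' and 'data' are the BE and pool names from 
the thread):

   $ beadm mount solaris /a
   $ zpool export data
   $ zpool import -R /a -N -o cachefile=/a/etc/zfs/zpool.cache data
   $ beadm umount solaris
   $ zpool export data
     (export again, after /a is unmounted, so the pool is cleanly
      exported and the hostid check does not fire on the next boot)
   $ reboot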




Re: [zfs-discuss] Ensure Newly created pool is imported automatically in new BE

2011-05-31 Thread Matt Keenan

Jim,

Thanks for the response, I've nearly got it working, coming up against a 
hostid issue.


Here's the steps I'm going through :

- At the end of the auto-install, on the freshly installed client, 
before I manually reboot, I do the following :

  $ beadm mount solaris /a
  $ zpool export data
  $ zpool import -R /a -N -o cachefile=/a/etc/zfs/zpool.cache data
  $ beadm umount solaris
  $ reboot

- Before rebooting I check /a/etc/zfs/zpool.cache and it does contain 
references to data.


- On reboot, the automatic import of 'data' is attempted, however the 
following message is displayed :


 WARNING: pool 'data' could not be loaded as it was last accessed by 
another system (host: ai-client hostid: 0x87a4a4). See 
http://www.sun.com/msg/ZFS-8000-EY.


- Host id on booted client is :
  $ hostid
  000c32eb

As I don't control the import command on boot, I cannot simply add a 
-f to force the import. Any ideas on what else I can do here ?


cheers

Matt

On 05/27/11 13:43, Jim Klimov wrote:

Did you try it as a single command, somewhat like:

zpool create -R /a -o cachefile=/a/etc/zfs/zpool.cache mypool c3d0

Using altroot and cachefile(=none) explicitly is a nearly-documented
way to avoid caching pools which you would not want to see after
reboot, i.e. removable media.

I think that after the AI is done and before reboot you might want to
reset the altroot property to point to root (or be undefined), so that
the data pool is mounted into your new rpool's hierarchy and not
under /a/mypool again ;)

And if your AI setup does not use the data pool, you might be better
off not using altroot at all, maybe...

- Original Message -
From: Matt Keenan matt...@opensolaris.org
Date: Friday, May 27, 2011 13:25
Subject: [zfs-discuss] Ensure Newly created pool is imported 
automatically in new BE

To: zfs-discuss@opensolaris.org

 Hi,

 Trying to ensure a newly created data pool gets import on boot
 into a
 new BE.

 Scenario :
Just completed a AI install, and on the client
 before I reboot I want
 to create a data pool, and have this pool automatically imported
 on boot
 into the newly installed AI Boot Env.

Trying to use the -R altroot option to zpool create
 to achieve this or
 the zpool set -o cachefile property, but having no luck, and
 would like
 some advice on what the best means of achieving this would be.

 When the install completes, we have a default root pool rpool, which
 contains a single default boot environment, rpool/ROOT/solaris

 This is mounted on /a so I tried :
 zpool create -R /a mypool c3d0

 Also tried :
 zpool create mypool c3d0
 zpool set -o cachefile=/a mypool

 I can clearly see /a/etc/zfs/zpool.cache contains information
 for rpool,
 but it does not get any information about mypool. I would expect
 this
 file to contain some reference to mypool. So I tried :
 zpool set -o cachefile=/a/etc/zfs/zpool.cache

 Which fails.

 Any advice would be great.

 cheers

 Matt
--

++
||
| Климов Евгений, Jim Klimov |
| технический директор   CTO |
| ЗАО ЦОС и ВТ  JSC COSHT |
||
| +7-903-7705859 (cellular)  mailto:jimkli...@cos.ru |
|CC:ad...@cos.ru,jimkli...@gmail.com |
++
| ()  ascii ribbon campaign - against html mail  |
| /\- against microsoft attachments  |
++




[zfs-discuss] Ensure Newly created pool is imported automatically in new BE

2011-05-27 Thread Matt Keenan

Hi,

Trying to ensure a newly created data pool gets imported on boot into 
a new BE.


Scenario :
  I've just completed an AI install, and on the client, before I 
reboot, I want to create a data pool and have this pool automatically 
imported on boot into the newly installed AI Boot Env.


  I'm trying to use the -R altroot option to zpool create to achieve 
this, or the zpool set cachefile property, but I'm having no luck, and 
would like some advice on the best means of achieving this.


When the install completes, we have a default root pool rpool, which
contains a single default boot environment, rpool/ROOT/solaris

This is mounted on /a so I tried :
   zpool create -R /a mypool c3d0

Also tried :
   zpool create mypool c3d0
   zpool set -o cachefile=/a mypool

I can clearly see that /a/etc/zfs/zpool.cache contains information for 
rpool, but it does not contain any information about mypool. I would 
expect this file to contain some reference to mypool. So I tried :

   zpool set -o cachefile=/a/etc/zfs/zpool.cache

Which fails.

Any advice would be great.

cheers

Matt


[zfs-discuss] ZPOOL_CONFIG_IS_HOLE

2010-10-15 Thread Matt Keenan

Hi,

Can someone shed some light on what this ZPOOL_CONFIG entry is, 
exactly? At a guess, is it a bad sector of the disk, non-writable, 
and thus ZFS marks it as a hole ?


cheers

Matt


Re: [zfs-discuss] root pool mirror problems

2010-05-20 Thread Matt Keenan
As queried by Ian, the new disk being attached must be at least as big 
as the original root pool disk. It can be bigger, but the difference 
will not be used in the mirror.


cheers

Matt

On 05/20/10 10:11 AM, Ian Collins wrote:

On 05/20/10 08:39 PM, roi shidlovsky wrote:

hi.
I am trying to attach a mirror disk to my root pool. If the two disks 
are the same size it all works fine, but if the two disks are of 
different sizes (8GB and 7.5GB) I get an I/O error on the attach 
command.


can anybody tell me what am i doing wrong?

Trying to mirror a larger disk with a smaller one?

Recent builds can cope with a small difference to allow for variations 
between models of the same nominal size, but not 500MB.






Re: [zfs-discuss] Mirroring USB Drive with Laptop for Backup purposes

2010-05-10 Thread Matt Keenan

On 05/ 7/10 10:07 PM, Bill McGonigle wrote:

On 05/07/2010 11:08 AM, Edward Ned Harvey wrote:
I'm going to continue encouraging you to stay mainstream, because what 
people do the most is usually what's supported the best.


If I may be the contrarian, I hope Matt keeps experimenting with this, 
files bugs, and they get fixed. His use case is very compelling - I 
know lots of SOHO folks who could really use a NAS where this 'just 
worked'.


The ZFS team has done well by thinking liberally about conventional 
assumptions.


-Bill



My plan indeed is to continue with this setup (going to upgrade to 
build 138 to resolve my reboot issue). This particular use case is 
definitely compelling for me; the simple fact that I can plug my USB 
drive into another laptop and boot into the exact same environment is 
reason enough for me to continue with this setup and see how things go.


Mind you, doing occasional zfs sends to another backup drive might be 
something I'll do as well :-)
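
A rough sketch of that send/receive approach, with hypothetical pool 
and snapshot names ('backup' being a separate pool on the other drive):

   # zfs snapshot -r rpool@monday
   # zfs send -R rpool@monday | zfs receive -Fdu backup

   ... and later, incrementally :

   # zfs snapshot -r rpool@friday
   # zfs send -R -i rpool@monday rpool@friday | zfs receive -Fdu backup

The incremental send only transfers blocks changed since the previous 
snapshot, so subsequent backups are much quicker than the first full 
one.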


cheers

Matt


Re: [zfs-discuss] Mirroring USB Drive with Laptop for Backup purposes

2010-05-07 Thread Matt Keenan
After some playing around I've noticed some kinks, particularly around 
booting.


Some scenarios :

- Power off with the USB drive connected or removed: Solaris will not 
boot unless the USB drive is connected, and in some cases it needs to 
be attached to the exact same USB port it was last attached to. Is 
this a bug ?

- Take the USB drive offline via zpool offline and power off: this is 
much nastier, as the machine would not boot at all, regardless of 
whether the USB drive was connected or not. I had to boot into a 
LiveCD, zpool import (whilst the USB was attached), and bring the USB 
drive back online via zpool online pool disk.

Exact steps on what I did :
  http://blogs.sun.com/mattman/entry/bootable_usb_mirror
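
In outline, the recovery went something like this (device name is a 
placeholder):

   (boot from the LiveCD, with the USB drive attached)
   # zpool import -f rpool
   # zpool online rpool <usb-disk>
   # zpool status rpool
     (confirm both sides of the mirror are ONLINE again)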

As I find other caveats I'll add them... but it looks like having the 
drive connected at all times is preferable.


cheers

Matt

On 05/ 6/10 12:11 PM, Matt Keenan wrote:


Based on comments, some people say nay, some say yah. so I decided 
to give it a spin, and see

how I get on.

To make my mirror bootable I followed instructions posted here :
  http://www.taiter.com/blog/2009/04/opensolaris-200811-adding-disk.html

I plan to do a quick write up myself of my own experience, but so far 
everything is working fine.


Mirror size is 200GB (Smallest disk, happens to be laptop disk), once 
I attached the USB drive, it
started resilvering straight away, and only took 1hr 45mins to 
complete and it resilvered 120G !!

This I was very impressed with.

So far I've not noticed any system performance degradation with the 
drive attached. I did a quick test, yanked out the drive, degrades 
rpool as expected, but system continues to function fine.


I also did a quick test to see of the USB drive was indeed bootable, 
by connecting to another laptop, it booted perfectly.


Connecting the USB drive back to original laptop, the pool comes back 
online and resilvers seamlessly.


This is automatic 24/7 backup at it's best...

One thing I did notice, I powered down yesterday whilst USB was 
attached, this morning when booting up, I did so without the USB 
attached, laptop failed to boot, I had to connect the USB drive and it 
booted up fine.


Key would be to degrade the pool before shutdown, e.g. disconnect USB 
drive, might try using zpool offline and see how that works.


If I encounter issues, I'll post again.

cheers

Matt

On 05/ 5/10 09:34 PM, Edward Ned Harvey wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Matt Keenan

Just wondering whether mirroring a USB drive with main laptop disk for
backup purposes is recommended or not.

Plan would be to connect the USB drive, once or twice a week, let it
resilver, and then disconnect again. Connecting USB drive 24/7 would
AFAIK have performance issues for the Laptop.

MMmmm...  If it works, sounds good.  But I don't think it'll work as
expected, for a number of reasons, outlined below.

The suggestion I would have instead, would be to make the external 
drive its
own separate zpool, and then you can incrementally zfs send | zfs 
receive

onto the external.

Here are the obstacles I think you'll have with your proposed solution:

#1 I think all the entire used portion of the filesystem needs to 
resilver

every time.  I don't think there's any such thing as an incremental
resilver.

#2 How would you plan to disconnect the drive?  If you zpool detach 
it, I
think it's no longer a mirror, and not mountable.  If you simply yank 
out
the plug ... although that might work, it would certainly be 
nonideal.  If

you power off, disconnect, and power on ... Again, it should probably be
fine, but it's not designed to be used that way intentionally, so your
results ... are probably as-yet untested.

I don't want to go on.  This list could go on forever.  I will strongly
encourage you to simply use zfs send | zfs receive because that's a
standard practice thing to do.  It is known that the external drive 
is not
bootable this way, but that's why you have this article on how to 
make it

bootable:

http://docs.sun.com/app/docs/doc/819-5461/ghzur?l=en&a=view



This would have the added benefit of the USB drive being bootable.

By default, AFAIK, that's not correct.  When you mirror rpool to another
device, by default the 2nd device is not bootable, because it's just 
got an

rpool in there.  No boot loader.

Even if you do this mirror idea, which I believe will be slower and less
reliable than zfs send | zfs receive you still haven't gained 
anything as
compared to the zfs send | zfs receive procedure, which is known to 
work

reliable with optimal performance.







Re: [zfs-discuss] Mirroring USB Drive with Laptop for Backup purposes

2010-05-06 Thread Matt Keenan


Based on comments, some people say nay, some say yea, so I decided to 
give it a spin and see how I get on.

To make my mirror bootable I followed instructions posted here :
  http://www.taiter.com/blog/2009/04/opensolaris-200811-adding-disk.html

I plan to do a quick write up myself of my own experience, but so far 
everything is working fine.


Mirror size is 200GB (the smallest disk, which happens to be the 
laptop disk). Once I attached the USB drive, it started resilvering 
straight away, and it only took 1hr 45mins to complete, resilvering 
120GB !!

This I was very impressed with.

So far I've not noticed any system performance degradation with the 
drive attached. I did a quick test and yanked out the drive; it 
degrades rpool as expected, but the system continues to function fine.


I also did a quick test to see if the USB drive was indeed bootable by 
connecting it to another laptop; it booted perfectly.


Connecting the USB drive back to original laptop, the pool comes back 
online and resilvers seamlessly.


This is automatic 24/7 backup at it's best...

One thing I did notice: I powered down yesterday whilst the USB was 
attached; this morning I booted up without the USB attached and the 
laptop failed to boot. I had to connect the USB drive and then it 
booted up fine.


The key would be to degrade the pool before shutdown, e.g. disconnect 
the USB drive; I might try using zpool offline and see how that works.


If I encounter issues, I'll post again.

cheers

Matt

On 05/ 5/10 09:34 PM, Edward Ned Harvey wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Matt Keenan

Just wondering whether mirroring a USB drive with main laptop disk for
backup purposes is recommended or not.

Plan would be to connect the USB drive, once or twice a week, let it
resilver, and then disconnect again. Connecting USB drive 24/7 would
AFAIK have performance issues for the Laptop.
 

MMmmm...  If it works, sounds good.  But I don't think it'll work as
expected, for a number of reasons, outlined below.

The suggestion I would have instead, would be to make the external drive its
own separate zpool, and then you can incrementally zfs send | zfs receive
onto the external.

Here are the obstacles I think you'll have with your proposed solution:

#1 I think all the entire used portion of the filesystem needs to resilver
every time.  I don't think there's any such thing as an incremental
resilver.

#2 How would you plan to disconnect the drive?  If you zpool detach it, I
think it's no longer a mirror, and not mountable.  If you simply yank out
the plug ... although that might work, it would certainly be nonideal.  If
you power off, disconnect, and power on ... Again, it should probably be
fine, but it's not designed to be used that way intentionally, so your
results ... are probably as-yet untested.

I don't want to go on.  This list could go on forever.  I will strongly
encourage you to simply use zfs send | zfs receive because that's a
standard practice thing to do.  It is known that the external drive is not
bootable this way, but that's why you have this article on how to make it
bootable:

http://docs.sun.com/app/docs/doc/819-5461/ghzur?l=en&a=view


   

This would have the added benefit of the USB drive being bootable.
 

By default, AFAIK, that's not correct.  When you mirror rpool to another
device, by default the 2nd device is not bootable, because it's just got an
rpool in there.  No boot loader.

Even if you do this mirror idea, which I believe will be slower and less
reliable than zfs send | zfs receive you still haven't gained anything as
compared to the zfs send | zfs receive procedure, which is known to work
reliable with optimal performance.

   




[zfs-discuss] Mirroring USB Drive with Laptop for Backup purposes

2010-05-04 Thread Matt Keenan

Hi,

Just wondering whether mirroring a USB drive with main laptop disk for 
backup purposes is recommended or not.


Current setup, single root pool set up on 200GB internal laptop drive :

$ zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:
NAMESTATE READ WRITE CKSUM
rpool   ONLINE   0 0 0
  c5t0d0s0  ONLINE   0 0 0


I have a 320GB external USB drive which I'd like to configure as a 
mirror of this root pool (I know it will only use 200GB of the 
external one; not worried about that).


Plan would be to connect the USB drive, once or twice a week, let it 
resilver, and then disconnect again. Connecting USB drive 24/7 would 
AFAIK have performance issues for the Laptop.


This would have the added benefit of the USB drive being bootable.

- Recommended or not ?
- Are there known issues with this type of setup ?


cheers

Matt



[zfs-discuss] swap across multiple pools

2010-03-03 Thread Matt Keenan
The default install for OpenSolaris creates a single root pool, and creates 
swap and dump datasets within this pool.


In a multipool environment, would it make sense to add swap to a pool 
outside the root pool, either as the sole swap dataset or as extra swap ?
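
Mechanically it's straightforward. A sketch, with a hypothetical pool 
name and size:

   # zfs create -V 2G datapool/swap
   # swap -a /dev/zvol/dsk/datapool/swap
   # swap -l
     (verify both swap devices are now listed)

   and, to make it persistent, an /etc/vfstab line such as :

   /dev/zvol/dsk/datapool/swap  -  -  swap  -  no  -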


Would this have any performance implications ?

cheers

Matt