[zfs-discuss] Problem with memory recovery from arc cache

2009-11-05 Thread Peter Pickford
Hi All,

Has anyone seen problems with the ARC cache holding on to memory under
memory-pressure conditions?

We have several Oracle DB servers running ZFS for the root file
systems, with the databases on VxFS.

An unexpected number of clients connected and caused a memory shortage
such that some processes were swapped out.

The system partially recovered, with around 1 GB free; however, the ARC
cache was still around 9-10 GB.

It appears that the ARC cache didn't release memory as fast as memory was
being reclaimed from processes etc.

As a workaround we have limited the maximum ARC size to 2 GB.
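
For reference, this kind of cap normally goes in /etc/system on Solaris 10
(a minimal sketch; the 2 GB value just matches the workaround above and
takes effect at the next reboot):

* Cap the ZFS ARC at 2 GB (value is in bytes)
set zfs:zfs_arc_max = 2147483648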

Shouldn't the ARC cache be reclaimed in preference to active process memory?
Having two competing systems reclaiming memory does not make sense to
me, and it seems to result in a strange situation: a memory shortage
alongside a large ARC cache.

Also, would it be better if the minimum ARC size were based on the size of
the ZFS file systems rather than a percentage of total memory?
3 or 4 GB minimums seem huge!

Thanks

Peter
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS flar image.

2009-09-25 Thread Peter Pickford
Hi Peter,

Do you have any notes on what you did to restore a sendfile to an existing BE?

I'm interested in creating a 'golden image' and restoring this into a
new BE on a running system as part of a hardening project.
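
The capture side of that 'golden image' is, as I understand it, just a
recursive snapshot of the master's root dataset sent to a file. A minimal
sketch, with purely illustrative host, dataset and path names:

golden# zfs snapshot -r rpool/ROOT/s10be@golden
golden# zfs send -R rpool/ROOT/s10be@golden > /net/imagehost/images/s10-golden.zfs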

Thanks

Peter

2009/9/14 Peter Karlsson peter.karls...@sun.com:
 Hi Greg,

 We did a hack on those lines when we installed 100 Ultra 27s that was used
 during J1, but we automated the process by using AI to install a bootstrap
 image that had a SMF service that pulled over the zfs sendfile, create a new
 BE and received the sendfile to the new BE. Work fairly OK, there where a
 few things that we to run a few scripts to fix, but at large it was
 smooth I really need to get that blog entry done :)

 /peter

 Greg Mason wrote:

 As an alternative, I've been taking a snapshot of rpool on the golden
 system, sending it to a file, and creating a boot environment from the
 archived snapshot on target systems. After fiddling with the snapshots a
 little, I then either appropriately anonymize the system or provide it with
 its identity. When it boots up, it's ready to go.

 The only downfall to my method is that I still have to run the full
 OpenSolaris installer, and I can't exclude anything in the archive.

 Essentially, it's a poor man's flash archive.

 -Greg

 cindy.swearin...@sun.com wrote:

 Hi RB,

 We have a draft of the ZFS/flar image support here:

 http://opensolaris.org/os/community/zfs/boot/flash/

 Make sure you review the Solaris OS requirements.

 Thanks,

 Cindy

 On 09/14/09 11:45, RB wrote:

 Is it possible to create a flar image of a ZFS root filesystem to install it
 on other machines?


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cloning Systems using zpool

2009-09-25 Thread Peter Pickford
Hi Lori,

Is the U8 flash support for the whole root pool, or for an individual BE
using Live Upgrade?

Thanks

Peter

2009/9/24 Lori Alt lori@sun.com:
 On 09/24/09 15:54, Peter Pickford wrote:

 Hi Cindy,

 Wouldn't

 touch /reconfigure
 mv /etc/path_to_inst* /var/tmp/

 regenerate all device information?


 It might, but it's hard to say whether that would accomplish everything
 needed to move a root file system from one system to another.

 I just got done modifying flash archive support to work with zfs root on
 Solaris 10 Update 8.  For those not familiar with it, flash archives are a
 way to clone full boot environments across multiple machines.  The S10
 Solaris installer knows how to install one of these flash archives on a
 system and then do all the customizations to adapt it to the  local hardware
 and local network environment.  I'm pretty sure there's more to the
 customization than just a device reconfiguration.

 So feel free to hack together your own solution.  It might work for you, but
 don't assume that you've come up with a completely general way to clone root
 pools.

 lori

 AFAIK ZFS doesn't care about the device names; it scans for them.
 It would only affect things like vfstab.

 I did a restore from an E2900 to a V890 and it seemed to work:

 I created the pool and did a zfs receive.

 I would like to be able to have a zfs send of a minimal build and
 install it in an ABE and activate it.
 I tried that in test and it seems to work.

 It seems to work, but I'm just wondering what I may have missed.

 I saw someone else has done this on the list and was going to write a blog.

 It seems like a good way to get a minimal install on a server with
 reduced downtime.

 Now if I just knew how to run the installer in an ABE without there
 being an OS there already, that would be cool too.

 Thanks

 Peter

 2009/9/24 Cindy Swearingen cindy.swearin...@sun.com:


 Hi Peter,

 I can't provide it because I don't know what it is.

 Even if we could provide a list of items, tweaking
 the device information if the systems are not identical
 would be too difficult.

 cs

 On 09/24/09 12:04, Peter Pickford wrote:


 Hi Cindy,

 Could you provide a list of system specific info stored in the root pool?

 Thanks

 Peter

 2009/9/24 Cindy Swearingen cindy.swearin...@sun.com:


 Hi Karl,

 Manually cloning the root pool is difficult. We have a root pool recovery
 procedure that you might be able to apply as long as the
 systems are identical. I would not attempt this with LiveUpgrade
 and manually tweaking.


 http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Complete_Solaris_ZFS_Root_Pool_Recovery

 The problem is that the amount of system-specific info stored in the root
 pool and any kind of device differences might be insurmountable.

 Solaris 10 ZFS/flash archive support is available with patches but not
 for the Nevada release.

 The ZFS team is working on a split-mirrored-pool feature and that might
 be an option for future root pool cloning.

 If you're still interested in a manual process, see the steps below
 attempted by another community member who moved his root pool to a
 larger disk on the same system.

 This is probably more than you wanted to know...

 Cindy



 # zpool create -f altrpool c1t1d0s0
 # zpool set listsnapshots=on rpool
 # SNAPNAME=`date +%Y%m%d`
 # zfs snapshot -r rpool@$SNAPNAME
 # zfs list -t snapshot
 # zfs send -R rpool@$SNAPNAME | zfs recv -vFd altrpool
 # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk
 /dev/rdsk/c1t1d0s0
 for x86 do
 # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
 Set the bootfs property on the root pool BE.
 # zpool set bootfs=altrpool/ROOT/zfsBE altrpool
 # zpool export altrpool
 # init 5
 remove source disk (c1t0d0s0) and move target disk (c1t1d0s0) to slot0
 -insert solaris10 dvd
 ok boot cdrom -s
 # zpool import altrpool rpool
 # init 0
 ok boot disk1

 On 09/24/09 10:06, Karl Rossing wrote:


 I would like to clone the configuration on a v210 with snv_115.

 The current pool looks like this:

 -bash-3.2$ /usr/sbin/zpool status
   pool: rpool
  state: ONLINE
  scrub: none requested
 config:

       NAME          STATE     READ WRITE CKSUM
       rpool         ONLINE       0     0     0
         mirror      ONLINE       0     0     0
           c1t0d0s0  ONLINE       0     0     0
           c1t1d0s0  ONLINE       0     0     0

 errors: No known data errors

 After I run zpool detach rpool c1t1d0s0, how can I remount c1t1d0s0 on
 /tmp/a so that I can make the changes I need prior to removing the drive
 and putting it into the new v210?

 I suppose I could lucreate -n new_v210, lumount new_v210, edit what I need
 to, luumount new_v210, luactivate new_v210, zpool detach rpool c1t1d0s0 and
 then luactivate the original boot environment.


 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Which directories must be part of rpool?

2009-09-25 Thread Peter Pickford
Hi David,

I believe /opt is an essential file system, as it contains software
that is maintained by the packaging system.
In fact, anywhere you install software via pkgadd should probably be in
the BE under rpool/ROOT/<bename>.

AFAIK it should not even be split off from root in the BE under ZFS boot
(only a separate /var is supported); otherwise LU breaks.

I have subdirectories of /opt, like /opt/app, which do not contain
software installed via pkgadd.

I also split off /var/core and /var/crash.
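
For example, something along these lines (the dataset names are only how I
happen to lay things out, nothing required):

# zfs create -o mountpoint=/var/core  rpool/varcore
# zfs create -o mountpoint=/var/crash rpool/varcrash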

Unfortunately, when you need to boot -F and import the pool for
maintenance, it doesn't mount /var, causing the directories /var/core and
/var/crash to be created in the root file system.

The system then reboots, but when you do a lucreate or lumount it
fails, because /var/core and /var/crash existing on the / file system
cause the mount of /var to fail in the ABE.

I have found it a bit problematic to split off file systems from /
under ZFS boot and still have LU work properly.

I haven't tried putting split-off file systems (as opposed to
application file systems) on a different pool, but I believe there may
be mount-ordering issues with mounting dependent file systems from
different pools where the parent file systems are not part of the BE or
legacy mounts.

It is not possible to mount a VxFS file system under a non-legacy zone
root file system due to ordering issues with mounting at boot (legacy
mounts are done before automatic ZFS mounts).

Perhaps u7 addressed some of these issues, as I believe it is now
allowable to have a zone root file system on a non-root pool.

These are just my experiences, and I'm sure others can give more
definitive answers.
Perhaps it's easier to get some bigger disks.

Thanks

Peter

2009/9/25 David Abrahams d...@boostpro.com:

 on Fri Sep 25 2009, Cindy Swearingen Cindy.Swearingen-AT-Sun.COM wrote:

 Hi David,

 All system-related components should remain in the root pool, such as
 the components needed for booting and running the OS.

 Yes, of course.  But which *are* those?

 If you have datasets like /export/home or other non-system-related
 datasets in the root pool, then feel free to move them out.

 Well, for example, surely /opt can be moved?

 Moving OS components out of the root pool is not tested by us and I've
 heard of one example recently of breakage when usr and var were moved
 to a non-root RAIDZ pool.

 It would be cheaper and easier to buy another disk to mirror your root
 pool than it would be to take the time to figure out what could move out
 and then possibly deal with an unbootable system.

 Buy another disk and we'll all sleep better.

 Easy for you to say.  There's no room left in the machine for another disk.

 --
 Dave Abrahams
 BoostPro Computing
 http://www.boostpro.com

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cloning Systems using zpool

2009-09-24 Thread Peter Pickford
Hi Cindy,

Wouldn't

touch /reconfigure
mv /etc/path_to_inst* /var/tmp/

regenerate all device information?

AFAIK ZFS doesn't care about the device names; it scans for them.
It would only affect things like vfstab.

I did a restore from an E2900 to a V890 and it seemed to work:

I created the pool and did a zfs receive.

I would like to be able to have a zfs send of a minimal build and
install it in an ABE and activate it.
I tried that in test and it seems to work.

It seems to work, but I'm just wondering what I may have missed.

I saw someone else has done this on the list and was going to write a blog.

It seems like a good way to get a minimal install on a server with
reduced downtime.

Now if I just knew how to run the installer in an ABE without there
being an OS there already, that would be cool too.
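
One way to wire that up is roughly the following (a sketch only: the BE,
dataset and file names are made up, and whether luactivate is entirely happy
after the clone is replaced underneath it should be verified on a test box):

# lucreate -n minimalBE                    # new ABE, initially a clone of the current BE
# zfs destroy -r rpool/ROOT/minimalBE      # discard the clone's contents
# zfs receive rpool/ROOT/minimalBE < /var/tmp/minimal-build.zfs
# zfs set canmount=noauto rpool/ROOT/minimalBE
# zfs set mountpoint=/ rpool/ROOT/minimalBE
# luactivate minimalBE
# init 6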

Thanks

Peter

2009/9/24 Cindy Swearingen cindy.swearin...@sun.com:
 Hi Peter,

 I can't provide it because I don't know what it is.

 Even if we could provide a list of items, tweaking
 the device information if the systems are not identical
 would be too difficult.

 cs

 On 09/24/09 12:04, Peter Pickford wrote:

 Hi Cindy,

 Could you provide a list of system specific info stored in the root pool?

 Thanks

 Peter

 2009/9/24 Cindy Swearingen cindy.swearin...@sun.com:

 Hi Karl,

 Manually cloning the root pool is difficult. We have a root pool recovery
 procedure that you might be able to apply as long as the
 systems are identical. I would not attempt this with LiveUpgrade
 and manually tweaking.


 http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Complete_Solaris_ZFS_Root_Pool_Recovery

 The problem is that the amount of system-specific info stored in the root
 pool and any kind of device differences might be insurmountable.

 Solaris 10 ZFS/flash archive support is available with patches but not
 for the Nevada release.

 The ZFS team is working on a split-mirrored-pool feature and that might
 be an option for future root pool cloning.

 If you're still interested in a manual process, see the steps below
 attempted by another community member who moved his root pool to a
 larger disk on the same system.

 This is probably more than you wanted to know...

 Cindy



 # zpool create -f altrpool c1t1d0s0
 # zpool set listsnapshots=on rpool
 # SNAPNAME=`date +%Y%m%d`
 # zfs snapshot -r rpool@$SNAPNAME
 # zfs list -t snapshot
 # zfs send -R rpool@$SNAPNAME | zfs recv -vFd altrpool
 # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk
 /dev/rdsk/c1t1d0s0
 for x86 do
 # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
 Set the bootfs property on the root pool BE.
 # zpool set bootfs=altrpool/ROOT/zfsBE altrpool
 # zpool export altrpool
 # init 5
 remove source disk (c1t0d0s0) and move target disk (c1t1d0s0) to slot0
 -insert solaris10 dvd
 ok boot cdrom -s
 # zpool import altrpool rpool
 # init 0
 ok boot disk1

 On 09/24/09 10:06, Karl Rossing wrote:

 I would like to clone the configuration on a v210 with snv_115.

 The current pool looks like this:

 -bash-3.2$ /usr/sbin/zpool status
   pool: rpool
  state: ONLINE
  scrub: none requested
 config:

       NAME          STATE     READ WRITE CKSUM
       rpool         ONLINE       0     0     0
         mirror      ONLINE       0     0     0
           c1t0d0s0  ONLINE       0     0     0
           c1t1d0s0  ONLINE       0     0     0

 errors: No known data errors

 After I run zpool detach rpool c1t1d0s0, how can I remount c1t1d0s0 on
 /tmp/a so that I can make the changes I need prior to removing the drive
 and putting it into the new v210?

 I suppose I could lucreate -n new_v210, lumount new_v210, edit what I need
 to, luumount new_v210, luactivate new_v210, zpool detach rpool c1t1d0s0 and
 then luactivate the original boot environment.

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Incremental backup via zfs send / zfs receive

2009-09-20 Thread Peter Pickford
Just destroy the swap snapshot, and it doesn't get sent when you do a full send.
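
For example, with the pool and snapshot naming used in the script below (the
timestamp is just whatever $T was for that run):

# zfs destroy space/swap@2009-09-20:12:00:00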

2009/9/20 Frank Middleton f.middle...@apogeect.com:
 A while back I posted a script that does individual send/recvs
 for each file system, sending incremental streams if the remote
 file system exists, and regular streams if not.

 The reason for doing it this way rather than a full recursive
 stream is that there's no way to avoid sending certain file
 systems such as swap, and it would be nice not to always send
 certain properties such as mountpoint, and there might be file
 systems you want to keep on the receiving end.

 The problem with the regular stream is that most of the file
 system properties (such as mountpoint) are not copied as they
 are with a recursive stream. This may seem an advantage to some,
 (e.g., if the remote mountpoint is already in use, the mountpoint
 seems to default to legacy). However, did I miss anything in the
 documentation, or would it be worth submitting an RFE for an
 option to send/recv properties in a non-recursive stream?

 Oddly, incremental non-recursive streams do seem to override
 properties, such as mountpoint, hence the /opt problem. Am I
 missing something, or is this really an inconsistency? IMO
 non-recursive regular and incremental streams should behave the
 same way and both have options to send or not send properties.
 For my purposes the default behavior is reversed for what I
 would like to do...

 Thanks -- Frank

 Latest version of the  script follows; suggestions for improvements
 most welcome, especially the /opt problem where source and destination
 hosts have different /opts (host6-opt and host5-opt here) - see
 ugly hack below (/opt is on the data pool because the boot disks
 - soon to be SSDs - are filling up):

 #!/bin/bash
 #
 # "backup" is the alias for the host receiving the stream.
 # To start, do a full recursive send/receive and put the
 # name of the initial snapshot in cur_snap. In case of
 # disasters, the older snap name is saved in cur_snap_prev
 # and there's an option not to delete any snapshots when done.
 #
 if test ! -e cur_snap; then echo cur_snap not found; exit; fi
 P=`cat cur_snap`
 mv -f cur_snap cur_snap_prev
 T=`date +%Y-%m-%d:%H:%M:%S`
 echo $T > cur_snap
 echo snapping to space@$T
 echo "Starting backup from space@$P to space@$T at `date`" >> snap_time
 zfs snapshot -r space@$T
 echo snapshot done
 for FS in `zfs list -H | cut -f 1`
 do
 RFS=`ssh backup zfs list -H $FS 2>/dev/null | cut -f 1`
 case $FS in
 space/file system to skip here)
  echo skipping $FS
  ;;
 *)
  if test "$RFS"; then
    if [ $FS = space/swap ]; then
      echo skipping $FS
    else
      echo "do zfs send -i $FS@$P $FS@$T | ssh backup zfs recv -vF $RFS"
      zfs send -i $FS@$P $FS@$T | ssh backup zfs recv -vF $RFS
    fi
  else
    echo "do zfs send $FS@$T | ssh backup zfs recv -v $FS"
    zfs send $FS@$T | ssh backup zfs recv -v $FS
  fi
  if [ $FS = space/host5-opt ]; then
    echo "do ssh backup zfs set mountpoint=legacy space/host5-opt"
    ssh backup zfs set mountpoint=legacy space/host5-opt
  fi
  ;;
 esac
 done

 echo "--Ending backup from space@$P to space@$T at `date`" >> snap_time

 DOIT=1
 while [ $DOIT -eq 1 ]
 do
  read -p "Delete old snapshot y/n " REPLY
  REPLY=`echo $REPLY | tr '[:upper:]' '[:lower:]'`
  case $REPLY in
    y)
      ssh backup zfs destroy -r space@$P
      echo "Remote space@$P destroyed"
      zfs destroy -r space@$P
      echo "Local space@$P destroyed"
      DOIT=0
      ;;
    n)
      echo "Skipping:"
      echo "   ssh backup zfs destroy -r space@$P"
      echo "   zfs destroy -r space@$P"
      DOIT=0
      ;;
    *)
      echo "Please enter y or n"
      ;;
  esac
 done



 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris live CD that supports ZFS root mount for fs fixes

2009-07-16 Thread Peter Pickford
Will booting failsafe (boot -F failsafe) work?

2009/7/16 Matt Weatherford m...@u.washington.edu:

 Hi,

 I borked a libc.so library file on my solaris 10 server (zfs root) - was
 wondering if there
 is a good live CD that will be able to mount my ZFS root fs so that I can
 make this
 quick repair on the system boot drive and get back running again.  Are all
 ZFS roots created equal?  It's an x86 Solaris 10 box.  If I boot a BeleniX live
 CD, will it be able to mount this ZFS root?

 Thanks,

 Matt

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Fwd: Solaris 8/9 branded zones on ZFS root?

2009-02-25 Thread Peter Pickford
Hi Rich,

Solaris 8/9 zones seem to work fine with ZFS root for the zone.

The only problem so far is where to put the root file system for the zone
in the ZFS file system hierarchy.

Branded zones do not seem to be part of the Live Upgrade scheme.

At the moment I have another tree of file systems on rpool.

The problem is that the boot code doesn't mount the root file system for
the zone when you reboot.
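
For what it's worth, the layout I mean is a dedicated dataset per zone root
with a solaris9-branded zone pointed at it. A minimal sketch, with made-up
names and assuming the SUNWsolaris9 template from the Solaris 9 Containers
product:

# zfs create rpool/s9zones
# zfs create -o mountpoint=/zones/s9zone1 rpool/s9zones/s9zone1
# chmod 700 /zones/s9zone1
# zonecfg -z s9zone1
zonecfg:s9zone1> create -t SUNWsolaris9
zonecfg:s9zone1> set zonepath=/zones/s9zone1
zonecfg:s9zone1> commit
zonecfg:s9zone1> exit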


Thanks

Peter


2009/2/25 Rich Teer rich.t...@rite-group.com:
 Hi all,

 I have a situation where I need to consolidate a few servers running
 Solaris 9 and 8.  If the application doesn't run natively on Solaris
 10 or Nevada, I was thinking of using Solaris 9 or 8 branded zones.
 My intent would be for the global zone to use ZFS boot/root; would I
 be correct in thinking that this will be OK for the branded zones?
 That is, they don't care about the underlying file system type?

 Or am I stuck with using UFS for the root file systems of Solaris 8
 and 9 branded zones?  (I sure hope not!)

 Many TIA,

 --
 Rich Teer, SCSA, SCNA, SCSECA

 URLs: http://www.rite-group.com/rich
      http://www.linkedin.com/in/richteer
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Problem with zfs mount lu and solaris 8/9 containers

2009-02-06 Thread Peter Pickford
Hi,

If this is not a zfs question please direct me to the correct place
for this question.

I have a server with Solaris 10 u6 zfs root file system and Solaris 9
zones along with Solaris 10 zones.

What is the best way to configure the root file system of a Solaris 9
container WRT zfs file system location and options?

The live upgrade guide for zfs file systems and zones says to put the
root of the zone in a sub-file system of the boot environment and set
canmount=noauto.

The svc:/system/filesystem/minimal:default method
(/lib/svc/method/fs-minimal) mounts all the file systems in the
current BE, and life is good :)

The problem comes when you clone the BE using lucreate to patch the
global zone and the (Solaris 10) non-global zones.

It doesn't clone the root file system for the Solaris 9 zone (because it
can't patch it from the global zone?).

When the new BE is activated, the root for the Solaris 9 zone is not
part of the new BE and is not mounted, because it's set to
canmount=noauto.

So I tried to move the root file system out of the BE and set it canmount=on.

This causes svc:/system/filesystem/local:default to error at boot and be
placed in maintenance.

Currently I have the file system outside of the BE with canmount=noauto.

I'm thinking of writing a small SMF service to mount all of the file
systems where I keep the non-native (non-global) zone roots.
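
A rough sketch of the start method for such a service (the dataset layout
rpool/s9zones/<zone> and the service itself are hypothetical; a real manifest
and stop method are left out):

#!/sbin/sh
# Mount non-native zone roots that live outside the boot environment.
. /lib/svc/share/smf_include.sh

case "$1" in
start)
        for ds in `zfs list -H -o name -r rpool/s9zones`; do
                zfs mount $ds 2>/dev/null
        done
        ;;
*)
        exit $SMF_EXIT_ERR_CONFIG
        ;;
esac
exit $SMF_EXIT_OK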

Is there a better way of solving this issue?

How is Sun likely to resolve this, so that I can do more or less the same
thing and have it just work when the new patches arrive?

Thanks

Peter
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] s10u6 ludelete issues with zones on zfs root

2009-01-16 Thread Peter Pickford
This is what I discovered:

You can't have subdirectories of the zone root file system that are part of
the BE file system tree with ZFS and LU (no separate /var etc.).
Zone roots must be on the root pool for LU to work.
Extra file systems must come from a non-BE ZFS file system tree (I use
datasets).

[root@buildsun4u ~]# zfs list -r rpool/zones
NAME                                            USED  AVAIL  REFER  MOUNTPOINT
rpool/zones                                     162M  14.8G    21K  /zones
rpool/zones/zone1-restore_080915               73.2M  14.8G  73.2M  /zones/zone1
rpool/zones/zone1-restore_080915@patch_090115      0      -  73.2M  -
rpool/zones/zone1-restore_080915-patch_090115  7.76M  14.8G  76.1M  /.alt.patch_090115/zones/zone1
rpool/zones/zone2-restore_080915               73.4M  14.8G  73.4M  /zones/zone2
rpool/zones/zone2-restore_080915@patch_090115      0      -  73.4M  -
rpool/zones/zone2-restore_080915-patch_090115  7.75M  14.8G  76.3M  /.alt.patch_090115/zones/zone2

You can have datasets (and probably mounts) that are not part of the BE:

[root@buildsun4u ~]# zfs list -r rpool/zonesextra
NAME                                 USED  AVAIL  REFER  MOUNTPOINT
rpool/zonesextra                     284K  14.8G    18K  legacy
rpool/zonesextra/zone1               132K  14.8G    18K  legacy
rpool/zonesextra/zone1/app            18K  8.00G    18K  /opt/app
rpool/zonesextra/zone1/core           18K  8.00G    18K  /var/core
rpool/zonesextra/zone1/export       78.5K  14.8G    20K  /export
rpool/zonesextra/zone1/export/home  58.5K  8.00G  58.5K  /export/home
rpool/zonesextra/zone2               133K  14.8G    18K  legacy
rpool/zonesextra/zone2/app            18K  8.00G    18K  /opt/app
rpool/zonesextra/zone2/core           18K  8.00G    18K  /var/core
rpool/zonesextra/zone2/export         79K  14.8G    20K  /export
rpool/zonesextra/zone2/export/home    59K  8.00G    59K  /export/home

2009/1/16 amy.r...@tufts.edu

 cindy.swearingen http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Live_Upgrade_with_Zones

 Thanks, Cindy, that was in fact the page I had been originally referencing
 when I set up my datasets, and it was very helpful.  I found it by reading a
 comp.unix.solaris post in which someone else was talking about not being able
 to ludelete an old BE.  Unfortunately, it wasn't quite the same issue as you
 cover in Recover from BE Removal Failure (ludelete), and that fix had
 already been applied to my system.

 cindy.swearingen The entire Solaris 10 10/08 UFS to ZFS with zones migration
 cindy.swearingen is described here:
 cindy.swearingen http://docs.sun.com/app/docs/doc/819-5461/zfsboot-1?a=view

 Thanks, I find most of the ZFS stuff to be fairly straightforward.  And I'm
 never doing any migration from UFS (which is what many of the zones and zfs
 docs seem to be aimed at).  It's mixing ZFS, Zones, and liveupgrade that's
 been... challenging.  :}

 But now I know that there's definitely a bug involved, and I'll wait for the
 patch.  Thanks to you and Mark for your help.
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Changing GUID

2008-07-07 Thread Peter Pickford
Hi Jeff,

What I'm trying to do is import many copies of a pool that is cloned on a
storage array.

ZFS will only import the first disk (there is only one disk in the pool) and
any clones have the same pool name and GUID and are ignored.

Is there any chance Sun will support external cloned disks and add an option
to generate a new GUID on import in the near future?

Veritas 5.0 supports a similar idea and allows disks to be tagged and the
disk group to be imported using the tag with an option to generate new
GUIDs.

Cyril has kindly sent me some code so my immediate problem is probably
resolved but don't you think this would be better handled as part of zpool
import?

Thanks

Peter

2008/7/2 Jeff Bonwick [EMAIL PROTECTED]:

  How difficult would it be to write some code to change the GUID of a
 pool?

 As a recreational hack, not hard at all.  But I cannot recommend it
 in good conscience, because if the pool contains more than one disk,
 the GUID change cannot possibly be atomic.  If you were to crash or
 lose power in the middle of the operation, your data would be gone.

 What problem are you trying to solve?

 Jeff

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Changing GUID

2008-07-02 Thread Peter Pickford
Hi,

How difficult would it be to write some code to change the GUID of a pool?



Thanks

Peter
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss