Re: [zfs-discuss] Scripting zfs send / receive

2008-09-26 Thread Enda O'Connor ( Sun Micro Systems Ireland)
Hi
Clive King has a nice blog entry showing this in action
http://blogs.sun.com/clive/entry/replication_using_zfs

with associated script at:
http://blogs.sun.com/clive/resource/zfs_repl.ksh

Which I think answers most of your questions.
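For the error-checking questions in the quoted script below, a minimal sketch (hypothetical pool, dataset, host and address names; test each command's exit status rather than its output text) might look like this:

#!/bin/bash
# Sketch only: pool, dataset, host and address names are hypothetical.
POOL=tank
EMAIL="admin@example.com"
LOCKDIR=/var/tmp/zfsrepl.lock

# Skip this run if the previous one is still going (the 15-minute overlap case).
mkdir "$LOCKDIR" 2>/dev/null || exit 0
trap 'rmdir "$LOCKDIR"' EXIT

# The most recent existing auto snapshot becomes the incremental source.
PREVIOUSSNAP=$(/usr/sbin/zfs list -H -t snapshot -o name -s creation | \
    grep "^$POOL@auto-" | tail -1)
NEWSNAP="$POOL@auto-$(date +%Y%m%d%H%M)"

# Capture stderr so it can be mailed, and test $? rather than the output text.
ERR=$(/usr/sbin/zfs snapshot "$NEWSNAP" 2>&1)
if [ $? -ne 0 ]; then
    echo "$ERR" | /bin/mail -s "zfs snapshot failed" "$EMAIL"
    exit 1
fi

[ -n "$PREVIOUSSNAP" ] || exit 1    # nothing to send an incremental from yet

# $? only reflects the last command in a pipeline (the ssh); bash's PIPESTATUS
# array lets the zfs send side be checked as well.
/usr/sbin/zfs send -i "$PREVIOUSSNAP" "$NEWSNAP" 2>/tmp/send.err | \
    ssh user@remote-host /usr/sbin/zfs receive tank/backup 2>/tmp/recv.err
if [ "${PIPESTATUS[0]}" -ne 0 ] || [ "${PIPESTATUS[1]}" -ne 0 ]; then
    cat /tmp/send.err /tmp/recv.err | /bin/mail -s "zfs send/receive failed" "$EMAIL"
    exit 1
fi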

Enda
Ross wrote:
> Hey folks,
> 
> Is anybody able to help a Solaris scripting newbie with this? I want to put 
> together an automatic script to take snapshots on one system and send them 
> across to another. I've shown the manual process works, but only have a very 
> basic idea about how I'm going to automate this.
> 
> My current thinking is that I want to put together a cron job that will work 
> along these lines:
> 
> - Run every 15 mins
> - take a new snapshot of the pool
> - send the snapshot to the remote system with zfs send / receive and ssh.
> (am I right in thinking I can get ssh to work with no password if I create a 
> public/private key pair? http://www.go2linux.org/ssh-login-using-no-password)
> - send an e-mail alert if zfs send / receive fails for any reason (with the 
> text of the failure message)
> - send an e-mail alert if zfs send / receive takes longer than 15 minutes and 
> clashes with the next attempt
> - delete the oldest snapshot on both systems if the send / receive worked
> 
> Can anybody think of any potential problems I may have missed? 
> 
> Bearing in mind I've next to no experience in bash scripting, how does the 
> following look?
> 
> **
> #!/bin/bash
> 
> # Prepare variables for e-mail alerts
> SUBJECT="zfs send / receive error"
> EMAIL="[EMAIL PROTECTED]"
> 
> NEWSNAP="build filesystem + snapshot name here"
> RESULTS=$(/usr/sbin/zfs snapshot $NEWSNAP)
> # how do I check for a snapshot failure here?  Just look for non blank 
> $RESULTS?
> if $RESULTS; then
># send e-mail
>/bin/mail -s $SUBJECT $EMAIL $RESULTS
>exit
> fi
> 
> PREVIOUSSNAP="build filesystem + snapshot name here"
> RESULTS=$(/usr/sbin/zfs send -i $NEWSNAP $PREVIOUSSNAP | ssh -l *user* 
> *remote-system* /usr/sbin/zfs receive *filesystem*)
> # again, how do I check for error messages here?  Do I just look for a blank 
> $RESULTS to indicate success?
> if $RESULTS ok; then
>OBSOLETESNAP="build filesystem + name here"
>zfs destroy $OBSOLETESNAP
>ssh -l *user* *remote-system* /usr/sbin/zfs destroy $OBSOLETESNAP
> else 
># send e-mail with error message
>/bin/mail -s $SUBJECT $EMAIL $RESULTS
> fi
> **
> 
> One concern I have is what happens if the send / receive takes longer than 15 
> minutes. Do I need to check that manually, or will the script cope with this 
> already? Can anybody confirm that it will behave as I am hoping in that the 
> script will take the next snapshot, but the send / receive will fail and 
> generate an e-mail alert?
> 
> thanks,
> 
> Ross
> --
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] S10u6, zfs and zones

2008-08-05 Thread Enda O'Connor ( Sun Micro Systems Ireland)
dick hoogendijk wrote:
> My server runs S10u5. All slices are UFS. I run a couple of sparse
> zones on a separate slice mounted on /zones.
> 
> When S10u6 comes out booting of ZFS will become possible. That is great
> news. However, will it be possible to have those zones I run now too?
You can migrate your existing UFS setup to u6 zfs via lucreate, zones included.

There are no support issues for zones on a system with zfs root that I'm aware of, and LU (Live Upgrade) in u6 will support upgrading zones on zfs.
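A minimal sketch of that migration, assuming a hypothetical spare slice and pool name:

zpool create rpool c0t1d0s0    # new root pool on a spare SMI-labelled slice (assumption)
lucreate -n zfsBE -p rpool     # copy the current UFS BE, zones included, into the pool
luactivate zfsBE
init 6                         # boot into the new ZFS boot environment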
> I always understood ZFS and root zones are difficult. I hope to be able
> to change all FS to ZFS, including the space for the sparse zones.
Zones can be on zfs, or any other supported config, in combination with zfs root.

Is there a specific question you had in mind with regard to sparse zones and zfs root? I'm not too clear whether I've answered your actual query.

Enda
> 
> Does somebody have more information on this?
> 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can I trust ZFS?

2008-08-01 Thread Enda O'Connor ( Sun Micro Systems Ireland)
Dave wrote:
> 
> 
> Enda O'Connor wrote:
>>
>> As for thumpers, once 138053-02 (  marvell88sx driver patch ) releases 
>> within the next two weeks ( assuming no issues found ), then the 
>> thumper platform running s10 updates will be up to date in terms of 
>> marvel88sx driver fixes, which fixes some pretty important issues for 
>> thumper.
>> Strongly suggest applying this patch to thumpers going forward.
>> u6 will have the fixes by default.
>>
> 
> I'm assuming the fixes listed in these patches are already committed in 
> OpenSolaris (b94 or greater)?
> 
> -- 
> Dave
Yep.
I know this is an OpenSolaris list, but a lot of the folks asking questions do seem to be running various update releases.


Enda
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS boot - upgrade from UFS & swap slices

2008-07-24 Thread Enda O'Connor ( Sun Micro Systems Ireland)
[EMAIL PROTECTED] wrote:
> Alan,
> 
> Just make sure you use dumpadm to point to valid dump device and
> this setup should work fine. Please let us know if it doesn't.
> 
> The ZFS strategy behind automatically creating separate swap and
> dump devices includes the following:
> 
> o Eliminates the need to create separate slices
> o Enables underlying ZFS architecture for swap and dump devices
> o Enables you to set characteristics like compression on swap
> and dump devices, and eventually, encryption
Hi
It also makes resizing easy to do, e.g.:

zfs set volsize=8G lupool/dump
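For the swap volume a couple of extra steps are needed, since it has to be taken out of service first (the volume and device names here are assumptions):

swap -d /dev/zvol/dsk/lupool/swap    # remove the swap zvol from use
zfs set volsize=4G lupool/swap       # resize it
swap -a /dev/zvol/dsk/lupool/swap    # add it back at the new size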


Enda
> 
> Cindy
> 
> Alan Burlison wrote:
>> [EMAIL PROTECTED] wrote:
>>
>>> ZFS doesn't swap to a slice in build 92. In this build, a ZFS root
>>> environment requires separate ZFS volumes for swap and dump devices.
>>>
>>> The ZFS boot/install project and information trail starts here:
>>>
>>> http://opensolaris.org/os/community/zfs/boot/
>>
>> Is this going to be supported in a later build?
>>
>> I got it to use the existing swap slice by manually reconfiguring the 
>> ZFS-root BE post-install to use the swap slice as swap & dump - the 
>> resulting BE seems to work just fine, so I'm not sure why LU insists on 
>> creating ZFS swap & dump.
>>
>> Basically I want to migrate my root filesystem from UFS to ZFS and leave 
>> everything else as it is; there doesn't seem to be a way to do this.
>>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cannot attach mirror to SPARC zfs root pool

2008-07-24 Thread Enda O'Connor ( Sun Micro Systems Ireland)
Enda O'Connor ( Sun Micro Systems Ireland) wrote:
> Mike Gerdts wrote:
>> On Wed, Jul 23, 2008 at 11:36 AM,  <[EMAIL PROTECTED]> wrote:
>>> Rainer,
>>>
>>> Sorry for your trouble.
>>>
>>> I'm updating the installboot example in the ZFS Admin Guide with the
>>> -F zfs syntax now. We'll fix the installboot man page as well.
>>
>> Perhaps it also deserves a mention in the FAQ somewhere near
>> http://opensolaris.org/os/community/zfs/boot/zfsbootFAQ/#mirrorboot.
>>
>> 5. How do I attach a mirror to an existing ZFS root pool?
>>
>> Attach the second disk to form a mirror.  In this example, c1t1d0s0 is 
>> attached.
>>
>> # zpool attach rpool c1t0d0s0 c1t1d0s0
>>
>> Prior to build , bug 6668666 causes the following
>> platform-dependent steps to also be needed:
>>
>> On sparc systems:
>> # installboot -F zfs /usr/`uname -i`/lib/fs/zfs/bootblk 
>> /dev/rdsk/c1t1d0s0
> 
> That should be uname -m above, I think, and the path should be
> /platform/`uname -m`/lib/fs/zfs/bootblk for SPARC, i.e.:
> # installboot -F zfs /platform/`uname -m`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0
> 
> Others might correct me, though.
> 
>>
>> On x86 systems:
>> # ...
Meant to add that on x86 the following should do the trick (again, I'm open to correction):

installgrub /boot/grub/stage1 /zfsroot/boot/grub/stage2 /dev/rdsk/c1t0d0s0

I haven't tested the x86 one, though.
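Pulling those pieces together, the SPARC workflow being discussed would be roughly (same disk names as the example above; wait for the resilver to finish before relying on the new disk):

zpool attach rpool c1t0d0s0 c1t1d0s0
zpool status rpool                      # check that the resilver has completed
installboot -F zfs /platform/`uname -m`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0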

Enda
>>
> 
> 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cannot attach mirror to SPARC zfs root pool

2008-07-24 Thread Enda O'Connor ( Sun Micro Systems Ireland)
Mike Gerdts wrote:
> On Wed, Jul 23, 2008 at 11:36 AM,  <[EMAIL PROTECTED]> wrote:
>> Rainer,
>>
>> Sorry for your trouble.
>>
>> I'm updating the installboot example in the ZFS Admin Guide with the
>> -F zfs syntax now. We'll fix the installboot man page as well.
> 
> Perhaps it also deserves a mention in the FAQ somewhere near
> http://opensolaris.org/os/community/zfs/boot/zfsbootFAQ/#mirrorboot.
> 
> 5. How do I attach a mirror to an existing ZFS root pool?
> 
> Attach the second disk to form a mirror.  In this example, c1t1d0s0 is 
> attached.
> 
> # zpool attach rpool c1t0d0s0 c1t1d0s0
> 
> Prior to build , bug 6668666 causes the following
> platform-dependent steps to also be needed:
> 
> On sparc systems:
> # installboot -F zfs /usr/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0

That should be uname -m above, I think, and the path should be
/platform/`uname -m`/lib/fs/zfs/bootblk for SPARC, i.e.:
# installboot -F zfs /platform/`uname -m`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0

Others might correct me, though.

> 
> On x86 systems:
> # ...
> 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS/Install related question

2008-07-11 Thread Enda O'Connor ( Sun Micro Systems Ireland)
Andre wrote:
> Hi there,
> 
> I'm currently setting up a new system to my lab. 4 SATA drives would be 
> turned into the main file system (ZFS?) running on a soft raid (raid-z?).
> 
> My main target is reliability, my experience with Linux SoftRaid was 
> catastrophic and the array could not be restored after some testing simulating 
> power failures (thank god I did the tests before relying on that...)
> 
> For what I've seen so far, Solaris cannot boot from a raid-z system. Is that 
> correct?
Yes: ZFS can boot off a mirror or a single disk (not raidz or above). In fact, ZFS root can only boot off a slice of the disk, though you can make the whole disk into a single slice. The reason is that ZFS currently boots only from SMI-labelled disks, not EFI-labelled ones. You can use zpool attach to add a disk to the setup later if you want to convert to a mirror.
I tend to suggest a mirror for reliability.
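One possible layout for a 4-drive box like this, with hypothetical device names (root pool on mirrored SMI-labelled slices, raidz over the rest of the space):

zpool create rpool mirror c1t0d0s0 c1t1d0s0               # bootable root pool
zpool create tank raidz c1t0d0s7 c1t1d0s7 c1t2d0 c1t3d0   # data pool across all four drives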

It's not overly clear to me what you want. You say the 4 drives are going to be turned into the main file system: is this what you will be using for root (i.e. booting), or are there other disks you will be booting from?
Also, what Solaris release is this? ZFS root will be in s10 update 6, and is already in current Nevada builds.

Enda
> 
> In this case, what needs to be out of the array? Example, on a Linux system, 
> I could set the /boot to be on an old 256MB USB flash. (As long as the boot loader 
> and kernel were out of the array the system would boot.) What are the 
> requirements for booting from the USB but loading a system on the array?
> 
> Second, how do I proceed during the Install process?
> 
> I know it's a little bit weird but I must confess I'm doing it on purpose. :-)
> 
> I thank you in advance
>  
>  
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] mirroring zfs slice

2008-06-17 Thread Enda O'Connor ( Sun Micro Systems Ireland)
Hi
Use zpool attach
from
http://docs.sun.com/app/docs/doc/819-2240/zpool-1m

zpool attach [-f] pool device new_device

     Attaches new_device to an existing zpool device. The existing device cannot be part of a raidz configuration. If device is not currently part of a mirrored configuration, device automatically transforms into a two-way mirror of device and new_device. If device is part of a two-way mirror, attaching new_device creates a three-way mirror, and so on. In either case, new_device begins to resilver immediately.
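Applied to the pool shown in the output below, that would be an attach rather than a second zpool create:

zpool attach export c2t0d0s5 c2t2d0s5   # turns the existing slice into a two-way mirror
zpool status export                     # watch the resilver complete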


Enda
Srinivas Chadalavada wrote:
> Hi All,
> 
> I had a slice with a zfs file system which I want to mirror. I 
> followed the procedure mentioned in the admin guide but I am getting this 
> error. Can you tell me what I did wrong?
> 
>  
> 
> root # zpool list
> 
> NAME     SIZE   USED   AVAIL   CAP  HEALTH   ALTROOT
> 
> export   254G   230K   254G     0%  ONLINE   -
> 
> root # echo |format
> 
> Searching for disks...done
> 
>  
> 
>  
> 
> AVAILABLE DISK SELECTIONS:
> 
>0. c2t0d0 
> 
>   /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci1000,[EMAIL 
> PROTECTED]/[EMAIL PROTECTED],0
> 
>1. c2t2d0 
> 
>   /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci1000,[EMAIL 
> PROTECTED]/[EMAIL PROTECTED],0
> 
> Specify disk (enter its number): Specify disk (enter its number):
> 
> :root # zpool create export mirror c2t0d0s5 c2t2d0s5
> 
> invalid vdev specification
> 
> use '-f' to override the following errors:
> 
> /dev/dsk/c2t0d0s5 is part of active ZFS pool export. Please see zpool(1M).
> 
>  
> 
> Thanks,
> 
> Srini
> 
> 
> 
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Version Correct

2008-05-20 Thread Enda O'Connor ( Sun Micro Systems Ireland)
Kenny wrote:
> Back to the top
> 
> Is there a patch upgrade for ZFS on Solaris 10?  Where might I find it.

It's the kernel patch; depending on how far back you are in the updates, you might have to install multiple kernel patches.

The latest one is 127127-11/127128-11 (the u5 KU),
which depends on 120011-14/120012-14 (the u4 kernel),
which in turn depends on 118833-36/118855-36 (the u3 kernel).
The above are the sparc/x86 versions.

You can get them from sunsolve.sun.com:
http://sunsolve.sun.com/show.do?target=patchpage

Not sure about entitlement though; you will have to register at minimum (no account needed as far as I know), but you might need an account for certain patches.

Also make sure you have the latest patch utilities patch applied as well (119254/119255 for sparc/x86), and run patchadd -a first: the -a does a dry run and doesn't update the system. Examine the output, then drop the -a if all looks ok.
The Recommended cluster (under Downloads on the patch page) has all the latest patches and requirements, so it might be easier to grab that and work with it.
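A minimal sketch of that dry-run sequence (the patch ID is just the u5 KU mentioned above; where the patch is unpacked is an assumption):

cd /var/tmp/patches        # wherever the downloaded patch was unpacked (assumption)
patchadd -a 127127-11      # -a: dry run only, reports what would happen, changes nothing
patchadd 127127-11         # apply for real once the dry-run output looks clean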

Enda
> 
> TIA   --Kenny
>  
>  
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs snapshot -r hangs

2008-04-16 Thread Enda O'Connor ( Sun Micro Systems Ireland)

Sam Nicholson wrote:

Greetings,

snv_79a
AMD 64x2 in 64 bit kernel mode.

I'm in the middle of migrating a large zfs set from a pair of 1TB mirrors 
to a 1.3TB RAIDz.


I decided to use zfs send | zfs receive, so the first order of business 
was to snap the entire source filesystem.  


# zfs snapshot -r [EMAIL PROTECTED]

What happened was expected, the source drives flashed and wiggled :)
What happened next was not: the destination drives (or maybe the boot drive, as they share one disk activity light) began flashing and wiggling, and have been doing so for 12 hours now.


iostat shows no activity to speak of, and no transfers at all on any of the 
disks.  ditto for zpool iostat.


All zfs commands hang, and the lack of output from truss'ing the pids indicates they are stuck in the kernel. Heck, I can't even reboot, as that hangs.

So I was wondering whether there exists a dtrace recipe or some such that I can use to figure out where this is hung in the kernel.


Cheers!
-sam
 
 
This message posted from opensolaris.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Hi
echo "::walk thread|::findstack!munges" |mdb -k > sometestfile.txt

where munges is the script I have attached (courtesy of David Powell, I believe); i.e. place munges somewhere on your path and run the above.

This text file might be large (it most likely will be, but the munges bit will trim it down sufficiently), so examine it and see whether there is any zfs-related stuff in there.

That might be sufficient to get an idea of where zfs is stuck; otherwise we might need the entire text file.

Assuming that this actually works (seeing as reboot is apparently even stuck).

Enda
#!/bin/sh
#
# CDDL HEADER START
#
# The contents of this file are subject to the terms of the
# Common Development and Distribution License, Version 1.0 only
# (the "License").  You may not use this file except in compliance
# with the License.
#
# You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
# or http://www.opensolaris.org/os/licensing.
# See the License for the specific language governing permissions
# and limitations under the License.
#
# When distributing Covered Code, include this CDDL HEADER in each
# file and include the License file at usr/src/OPENSOLARIS.LICENSE.
# If applicable, add the following below this CDDL HEADER, with the
# fields enclosed by brackets "[]" replaced with your own identifying
# information: Portions Copyright [yyyy] [name of copyright owner]
#
# CDDL HEADER END
#
#
# Copyright 2005 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#

#
# Stack munging utility, written by David Powell.
#
# Takes the output of multiple ::findstack dcmds and groups similar 
# stacks together, presenting the most common ones first.  To use:
#
# > ::walk thread | ::findstack ! munges
#

foo="d"
bar=""
while getopts ab i; do
case $i in
b)  foo="s/\[\(.*\) ]/\1/";;
a)  bar="s/+[^(]*//";;
esac
done

sed "
/^P/ d
/(..*)$/ d
s/^s.*read \(.*:\).*$/\1/
t a
/^\[/ $foo
s/^ .* \(.*\)$/ \1/
$bar
H
$ !d
s/.*//
:a
x
1 d
s/\n//g
" | sort -t : -k 2 | uniq -c -f 1 | sort -rn  | sed '
s/) /)\
/g
s/^ *\([^ ]*\) *\(.*\): */\1##  tp: \2\
/
1 !s/^/\
/
'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [zones-discuss] 3510 Array and ZFS/Zones

2007-12-21 Thread Enda O'Connor ( Sun Micro Systems Ireland)
Mangan wrote:
> The 9/07 release appears to be for X86 only. The 8/07 release appears to be 
> for Sparc or X86. The 9/07 release is also titled " Express Developers 
> Edition 9/07".
> 
> Apparently not a release I can use.
> 
> Thanks for the quick feedback.
OK, my mistake; I was getting confused by release numbers. 9/07 was what 
Richard meant.

Enda

> When is the next release for Sparc due out?
> 
> Paul
> 
> 
-----Original Message-----
>> From: "Enda O'Connor ( Sun Micro Systems Ireland)" <[EMAIL PROTECTED]>
>> Sent: Dec 21, 2007 9:15 AM
>> To: Richard Elling <[EMAIL PROTECTED]>
>> Cc: zfs-discuss@opensolaris.org, [EMAIL PROTECTED], [EMAIL PROTECTED]
>> Subject: Re: [zones-discuss] [zfs-discuss] 3510 Array and ZFS/Zones
>>
>> Richard Elling wrote:
>>> Morris Hooten wrote:
>>>> I looked through the solarsinternals zfs best practices and not
>>>> completly sure
>>>> of the best scenario.
>>>>   
>>> ok, perhaps we should add some clarifications...
>>>
>>>> I have a Solaris 10 6/06 Generic_125100-10 box with attached 3510 array
>>>> and would like to use zfs on it. Should I create multiple logical disks
>>>> thru the raid
>>>> controller then create zfs raid file systems across the LD's?
>>>>
>>>>   
>>> That method will work ok.  Many people do this with various RAID
>>> arrays.  We can't answer the question "is it the best way?" because we
>>> would need more detailed information on what you are trying to
>>> accomplish and how you want to make design trade-offs.  So for now,
>>> I would say it works just like you would expect.
>>>
>>>> Can I also migrate zones that are on a ufs file system now into a newly
>>>> created zfs file system
>>>> although knowing the limitations with zones and zfs in 06/06?
>>>>   
>>> Zone limitations with ZFS should be well documented in the admin
>>> guides.  Currently, the install and patch process is not ZFS aware, which
>>> might cause you some difficulty with upgrading or patching.  There are
>>> alternative methods to solve this problem, but you should be aware of the
>>> current limitation.
>> the patch to fix the patching of zones on zfs is pending.
>> 119254/119255 revision 49, we hope to release this in the coming days ( 
>> maybe by COB today even )
>>>> Recommendations?
>>>>   
>>> Use Solaris 10 9/07.  It has more than a year's worth of improvements
>>> and enhancements to Solaris.
>> I think you mean the 8/07 (update 4) release?
>> But yes, this release is most advised.
>> Enda
>>>  -- richard
>>>
>>> ___
>>> zfs-discuss mailing list
>>> zfs-discuss@opensolaris.org
>>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>> ___
>> zones-discuss mailing list
>> [EMAIL PROTECTED]
> 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [zones-discuss] 3510 Array and ZFS/Zones

2007-12-21 Thread Enda O'Connor ( Sun Micro Systems Ireland)
Mangan wrote:
> Is this a release that can be downloaded from the website and will work on 
> SPARC systems. The write up says it is for VMware. Am I missing something?
> 
> 
>> Use Solaris 10 9/07.  It has more than a year's worth of improvements
>> and enhancements to Solaris.
>> -- richard
> 
> ___
> zones-discuss mailing list
> [EMAIL PROTECTED]
Hi
I haven't been following this thread, so I might be off topic...

I think this should be 8/07 (Solaris 10 update 4).
If so, then it's on the download site (or should be) and works for SPARC/x86 (same as any Solaris 10 release).

What write-up are you looking at?


Enda
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 3510 Array and ZFS/Zones

2007-12-21 Thread Enda O'Connor ( Sun Micro Systems Ireland)
Richard Elling wrote:
> Morris Hooten wrote:
>> I looked through the solarsinternals zfs best practices and not
>> completly sure
>> of the best scenario.
>>   
> 
> ok, perhaps we should add some clarifications...
> 
>> I have a Solaris 10 6/06 Generic_125100-10 box with attached 3510 array
>> and would like to use zfs on it. Should I create multiple logical disks
>> thru the raid
>> controller then create zfs raid file systems across the LD's?
>>
>>   
> 
> That method will work ok.  Many people do this with various RAID
> arrays.  We can't answer the question "is it the best way?" because we
> would need more detailed information on what you are trying to
> accomplish and how you want to make design trade-offs.  So for now,
> I would say it works just like you would expect.
> 
>> Can I also migrate zones that are on a ufs file system now into a newly
>> created zfs file system
>> although knowing the limitations with zones and zfs in 06/06?
>>   
> 
> Zone limitations with ZFS should be well documented in the admin
> guides.  Currently, the install and patch process is not ZFS aware, which
> might cause you some difficulty with upgrading or patching.  There are
> alternative methods to solve this problem, but you should be aware of the
> current limitation.

The patch to fix the patching of zones on zfs is pending:
119254/119255 revision 49. We hope to release this in the coming days (maybe by COB today even).
> 
>> Recommendations?
>>   
> 
> Use Solaris 10 9/07.  It has more than a year's worth of improvements
> and enhancements to Solaris.
I think you mean the 8/07 (update 4) release?
But yes, this release is most advised.
Enda
>  -- richard
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Does Oracle support ZFS as a file system with Oracle RAC?

2007-12-18 Thread Enda O'Connor ( Sun Micro Systems Ireland)
David Runyon wrote:
> Does anyone know this?
> 
> David Runyon
> Disk Sales Specialist
> 
> Sun Microsystems, Inc.
> 4040 Palm Drive
> Santa Clara, CA 95054 US
> Mobile 925 323-1211
> Email [EMAIL PROTECTED]
> 
> 
> 
> 
> Russ Lai wrote:
>> Dave;
>> Does ZFS support Oracle RAC?
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Metalink doc 403202.1 appears to support this config, but it reads a little unclear to me.


{
Applies to:
Oracle Server - Enterprise Edition - Version: 9.2.0.5 to 10.2.0.3
Solaris Operating System (SPARC 64-bit)
Goal
Is the Zeta File System (ZFS) of Solaris 10 certified/supported by 
ORACLE for:
- Database
- RAC

Solution
Oracle certifies and support the RDBMS on the whole OS for non-RAC 
installations. However if there is an exception, this should appear on 
the Release Notes, or in the OS Oracle specific documentation manual.

As you are not specific to cluster file systems for RAC installations, 
usually there is no problem on install Oracle on the file systems 
provided by OS vendor. But if any underlying OS error is found then it 
should be handled by the OS vendor.

Over the past few years Oracle has worked with all the leading system 
and storage vendors to validate their specialized storage products, 
under the Oracle Storage Compatibility Program (OSCP), to ensure these 
products were compatible for use with the Oracle database. Under the 
OSCP, Oracle and its partners worked together to validate specialized 
storage technology including NFS file servers, remote mirroring, and 
snapshot products.

At this time Oracle believes that these three specialized storage 
technologies are well understood by the customers, are very mature, and 
the Oracle technology requirements are well know. As of January, 2007, 
Oracle will no longer validate these products.

On a related note, many Oracle customers have embraced the concept of 
the resilient low-cost storage grid defined by Oracle's Resilient 
Low-Cost Storage Initiative (leveraging the Oracle Database 10g 
Automatic Storage Management (ASM) feature to make low-cost, modular 
storage arrays resilient), and many storage vendors continue to 
introduce new, low-cost, modular arrays for an Oracle storage grid 
environment. As of January, 2007, the Resilient Low-Cost Storage 
Initiative is discontinued.

For more information on the same please refer to Oracle Storage Program 
Change Notice

}
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] safe zfs-level snapshots with a UFS-on-ZVOL filesystem?

2007-10-08 Thread Enda O'Connor ( Sun Micro Systems Ireland)
Dick Davies wrote:
> I had some trouble installing a zone on ZFS with S10u4
> (bug in the postgres packages) that went away when I  used a
> ZVOL-backed UFS filesystem
> for the zonepath.
> 
Hi
Out of interest, what was the bug?

Enda
> I thought I'd push on with the experiment (in the hope Live Upgrade
> would be able to upgrade such a zone).
> It's a bit unwieldy, but everything worked reasonably well -
> performance isn't much worse than straight ZFS (it gets much faster
> with compression enabled, but that's another story).
> 
> The only fly in the ointment is that ZVOL level snapshots don't
> capture unsynced data up at the FS level. There's a workaround at:
> 
>   http://blogs.sun.com/pgdh/entry/taking_ufs_new_places_safely
> 
> but I wondered if there was anything else that could be done to avoid
> having to take such measures?
> I don't want to stop writes to get a snap, and I'd really like to avoid UFS
> snapshots if at all possible.
> 
> I tried mounting forcedirectio in the (mistaken) belief that this
> would bypass the UFS
> buffer cache, but it didn't help.
> 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool versioning

2007-09-13 Thread Enda O'Connor ( Sun Micro Systems Ireland)
Paul Armor wrote:
> Hi,
> I was wondering if anyone would know if this is just an accounting-type 
> error with the recorded "version=" stored on disk, or if there 
> are/could-be any deeper issues with an "upgraded" zpool?
> 
> I created a pool under a Sol10_x86_u3 install (11/06?), and zdb correctly 
> reported the pool as a "version=3" pool.  I reinstalled the OS with a u4 
> (08/07), ran zpool grade, was told I successfully upgraded from version 3 
> to version 4, but zdb reported "version=3".  I unmounted the zfs, 
> remounted, and zdb still reported "version=3".  I reran zpool upgrade, and 
> was told there were no pools to upgrade.
> 
> I blew away that pool, and created a new pool and zdb correctly reported 
> "version=4".
> 
> Perhaps I'm being pedantic, but the version thing on an upgraded pool 
> bugged me ;-)
> 
> Does anyone have any thoughts/experiences on other surprises that may be 
> lying in wait on an "upgraded" zpool?
> 
> Thanks,
> Paul
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Hi Paul
Is it not zpool upgrade -a? But I could be wrong.

I seem to remember zpool upgrade does not actually upgrade unless you specify the -a.
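For reference, the difference looks roughly like this (the pool name is hypothetical):

zpool upgrade            # with no arguments this only reports pool versions
zpool upgrade -a         # upgrades every pool on the system to the current version
zpool upgrade mypool     # or upgrade just one named pool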

Enda
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 120473-05

2007-04-12 Thread Enda O'Connor ( Sun Micro Systems Ireland)

Robert Milkowski wrote:

Hello Enda,

Wednesday, April 11, 2007, 4:21:35 PM, you wrote:

EOCSMSI> Robert Milkowski wrote:

Hello zfs-discuss,

  In order to get IDR126199-01 I need to install 120473-05 first.
  I can get 120473-07 but everything more than -05 is marked as
  incompatible with IDR126199-01 so I do not want to force it.

  Local Sun's support has problems with getting 120473-05 also so I'm
  stuck for now and I would really like to get that IDR running.

  Can someone help?



EOCSMSI> Hi
EOCSMSI> This patch will be on SunSolve possibly later today, tomorrow at latest
EOCSMSI> I suspect, as it has only just been pushed out from testing.
EOCSMSI> I have sent the patch in another mail for now.

Thank you for the patch - it worked (installed) properly along with the IDR.

However it seems like the problem is not solved by IDR :(


Hi Robert
So this IDR lists two bugs as fixed:
6458218 assertion failed: ss == NULL
6495013 Loops and recursion in metaslab_ff_alloc can kill performance, even on a pool with lots of free data

I have added the IDR's requestors so they can comment on which of the above fixes was not solved via this IDR in your testing.



Enda
___
zfs-discuss mailing list
[EMAIL PROTECTED]
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 120473-05

2007-04-11 Thread Enda O'Connor ( Sun Micro Systems Ireland)

Robert Milkowski wrote:

Hello zfs-discuss,

  In order to get IDR126199-01 I need to install 120473-05 first.
  I can get 120473-07 but everything more than -05 is marked as
  incompatible with IDR126199-01 so I do not want to force it.

  Local Sun's support has problems with getting 120473-05 also so I'm
  stuck for now and I would really like to get that IDR running.

  Can someone help?



Hi
This patch will be on SunSolve possibly later today, tomorrow at the latest 
I suspect, as it has only just been pushed out from testing.

I have sent the patch in another mail for now.


Enda
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss