[zfs-discuss] Pogo Linux ships NexentaStor pre-installed boxes

2008-08-01 Thread Erast Benson
Hi folks,

I wanted to share some exciting news with you. Pogo Linux is now shipping
boxes with NexentaStor pre-installed, like this 16TB - 24TB system:

http://www.pogolinux.com/quotes/editsys?sys_id=3989

And here is the announcement:

http://www.nexenta.com/corp/index.php?option=com_content&task=view&id=129&Itemid=56

Pogo says: "Managed Storage – NetApp features without the price"...

Go OpenSolaris, Go!



Re: [zfs-discuss] Disabling disks' write-cache in J4200 with ZFS?

2008-08-01 Thread Richard Elling
Todd E. Moore wrote:
>
> I want to disable write cache on the disk drives in our J4200 JBODs so 
> that fsync() actually writes to disk, not just to the cache on the drive.
>

ZFS will do this for you, via the way the ZIL works.
Neil explains it pretty well at
http://blogs.sun.com/perrin/entry/the_lumberjack

> I did this using 'format -e', but it displays a warning about the 
> drive being part of a zpool and also it says that the change is not 
> permanent.
>  
> Is 'format -e' the right way to do this with ZFS?  Is there no way to 
> make it permanent?

In general, you don't need to do this.  ZFS will send the cache
flush command to the disks as needed.
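
As a sanity check (a hedged aside; the tunable below exists on then-current
Solaris/Nevada builds), you can confirm that cache flushing has not been
disabled globally -- zfs_nocacheflush should be at its default of 0:

    echo zfs_nocacheflush/D | mdb -k    # 0 means ZFS still issues cache flushes

    # setting "set zfs:zfs_nocacheflush = 1" in /etc/system would suppress those
    # flushes and make fsync() unsafe on drives with volatile write caches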
 -- richard



Re: [zfs-discuss] Booting from a USB HD

2008-08-01 Thread W. Wayne Liauh
> W. Wayne Liauh wrote:
> > I installed OS 2008.05 onto a USB HD (WD Passport),
> and was able to boot from it (knock on wood!).
> >
> > However, when plugged into a different machine, I
> then am unable to boot from it.
> >
> > Is there any permission issue that I must address
> on this ZFS HD before I can boot from it?
> >   
> Could you provide a bit more information as to where
> it fails ? Does the 
> new system discover it ? Do you see the grub menu ? 
> 
> -Sanjay
> 

Thanks a whole bunch for responding to my question.

After trying it on another desktop machine (and failing), I was led to suspect 
that the problem may be caused by the system (computer) not delivering enough 
juice (electrical current) to the USB HD.  This is one of the first-generation 
portable USB HDs, and it may need more current to operate than later models.

So, before going through the whole process of re-installing OS 08.05 on a newer 
version of the WD Passport, I decided to give this old portable disk another 
try.

After canceling my dinner appointment, I plugged this OS 08.05-installed USB HD 
into an IBM ThinkPad R61i.  This time it worked beautifully.  (The squeaking 
noise never sounded so pleasant. :-)  )

I have always believed that using a customized USB HD will be one of the best ways 
to effectively (and more convincingly) propagate OpenSolaris.  More investigation 
is necessary, but, in short, the problem I experienced definitely has nothing 
to do with ZFS permission issues.

Thanks again for the response.
 
 


Re: [zfs-discuss] Booting from a USB HD

2008-08-01 Thread sanjay nadkarni (Laptop)
W. Wayne Liauh wrote:
> I installed OS 2008.05 onto a USB HD (WD Passport), and was able to boot from 
> it (knock on wood!).
>
> However, when plugged into a different machine, I then am unable to boot from 
> it.
>
> Is there any permission issue that I must address on this ZFS HD before I can 
> boot from it?
>   
Could you provide a bit more information as to where it fails ? Does the 
new system discover it ? Do you see the grub menu ? 

-Sanjay




[zfs-discuss] Booting from a USB HD

2008-08-01 Thread W. Wayne Liauh
I installed OS 2008.05 onto a USB HD (WD Passport), and was able to boot from 
it (knock on wood!).

However, when plugged into a different machine, I then am unable to boot from 
it.

Is there any permission issue that I must address on this ZFS HD before I can 
boot from it?
 
 


[zfs-discuss] How to get a file's crtime attribute from a znode?

2008-08-01 Thread Todd E. Moore




I'm used to using fstat() and other calls to get
atime, ctime, and mtime values, but I understand that the znode also
stores a file's creation time in its crtime attribute.

Which system call can I use to retrieve this information?
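
One way to at least inspect the value (a hedged sketch: zdb is a diagnostic tool
rather than a stable API, its output format varies between builds, and the pool,
dataset and file names below are made up) is to dump the file's object and look
for the crtime line:

    ls -i /tank/data/example.txt    # prints the file's object number, e.g. "1234 /tank/data/example.txt"
    zdb -dddd tank/data 1234        # the object dump includes atime/mtime/ctime/crtime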

-- 
Todd E. Moore
Sun Microsystems Incorporated
443.516.4002
AIM: toddmoore72462





Re: [zfs-discuss] Can I trust ZFS?

2008-08-01 Thread Brent Jones
I have done a bit of testing, and so far so good really.
I have a Dell 1800 with a Perc4e and a 14 drive Dell Powervault 220S.
I have a RaidZ2 volume named 'tank' that spans 6 drives. I have made 1 drive
available as a spare to ZFS.

Normal array:

# zpool status
  pool: tank
 state: ONLINE
 scrub: scrub completed with 0 errors on Fri Aug  1 19:37:33 2008
config:

NAMESTATE READ WRITE CKSUM
tankONLINE   0 0 0
  raidz2ONLINE   0 0 0
c0t1d0  ONLINE   0 0 0
c0t2d0  ONLINE   0 0 0
c0t3d0  ONLINE   0 0 0
c0t4d0  ONLINE   0 0 0
c0t5d0  ONLINE   0 0 0
c0t6d0  ONLINE   0 0 0
spares
  c0t13d0   AVAIL

errors: No known data errors



One drive removed:

# zpool status
  pool: tank
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-D3
 scrub: resilver completed with 0 errors on Fri Aug  1 20:30:39 2008
config:

NAME   STATE READ WRITE CKSUM
tank   DEGRADED 0 0 0
  raidz2   DEGRADED 0 0 0
c0t1d0 ONLINE   0 0 0
c0t2d0 ONLINE   0 0 0
spare  DEGRADED 0 0 0
  c0t3d0   UNAVAIL  0 0 0  cannot open
  c0t13d0  ONLINE   0 0 0
c0t4d0 ONLINE   0 0 0
c0t5d0 ONLINE   0 0 0
c0t6d0 ONLINE   0 0 0
spares
  c0t13d0  INUSE currently in use

errors: No known data errors


Now let's remove the hot spare ;)

# zpool status
  pool: tank
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-D3
 scrub: resilver completed with 0 errors on Fri Aug  1 20:30:39 2008
config:

NAME   STATE READ WRITE CKSUM
tank   DEGRADED 0 0 0
  raidz2   DEGRADED 0 0 0
c0t1d0 ONLINE   0 0 0
c0t2d0 ONLINE   0 0 0
spare  UNAVAIL  0   656 0  insufficient replicas
  c0t3d0   UNAVAIL  0 0 0  cannot open
  c0t13d0  UNAVAIL  0 0 0  cannot open
c0t4d0 ONLINE   0 0 0
c0t5d0 ONLINE   0 0 0
c0t6d0 ONLINE   0 0 0
spares
  c0t13d0  INUSE currently in use

errors: No known data errors


Now, this Perc4e doesn't support JBOD, so each drive is a standalone RAID0
(how annoying).
With that, I cannot plug the drives back in with the system running; the
controller will keep them offline until I enter the BIOS.

But in my scenario, this does demonstrate ZFS tolerates hot removal of
drives, without issuing a graceful removal of the device.
I was copying MP3s to the volume the whole time, and the copy continued
uninterrupted, without error.
I verified all data was written as well. All data should be online when I
reboot and put the pool back in normal state.
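
For reference, a hedged sketch of the commands I expect to need to put the pool
back into its normal state once the controller sees the drives again (device
names as in the status output above):

    zpool online tank c0t3d0      # bring the original device back online
    zpool detach tank c0t13d0     # return the hot spare to the AVAIL state
    zpool clear tank              # clear error counters accumulated during the outage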

I am very happy with the test. I don't know many hardware controllers
that'll lose 3 drives out of an array of 6 (with spare) and still function
normally (even if the controller supports RAID6, I've seen major issues
where writes were not committed).

I'll add my results to your forum thread as well.

Regards

Brent Jones
[EMAIL PROTECTED]

On Thu, Jul 31, 2008 at 11:56 PM, Ross Smith <[EMAIL PROTECTED]> wrote:

>  Hey Brent,
>
> On the Sun hardware like the Thumper you do get a nice bright blue "ready
> to remove" led as soon as you issue the "cfgadm -c unconfigure xxx"
> command.  On other hardware it takes a little more care, I'm labelling our
> drive bays up *very* carefully to ensure we always remove the right drive.
> Stickers are your friend, mine will probably be labelled "sata1/0",
> "sata1/1", "sata1/2", etc.
>
> I know Sun are working to improve the LED support, but I don't know whether
> that support will ever be extended to 3rd party hardware:
> http://blogs.sun.com/eschrock/entry/external_storage_enclosures_in_solaris
>
> I'd love to use Sun hardware for this, but while things like x2200 servers
> are great value for money, Sun don't have anything even remotely competitive
> to a standard 3U server with 16 SATA bays.  The x4240 is probably closest,
> but is at least double the price.  Even the J4200 arrays are more expensive
> than this entire server.
>
> Ross

[zfs-discuss] Disabling disks' write-cache in J4200 with ZFS?

2008-08-01 Thread Todd E. Moore






I want to disable write cache on the disk drives in our J4200 JBODs so
that fsync() actually writes to disk, not just to the cache on the drive.

I did this using 'format -e', but it displays a warning about the drive being
part of a zpool and also it says that the change is not permanent.

Is 'format -e' the right way to do this with ZFS?  Is there no way to make
it permanent?

-- 
Todd E. Moore
Sun Microsystems Incorporated
443.516.4002
AIM: toddmoore72462





Re: [zfs-discuss] 'legacy' mount after installing B95

2008-08-01 Thread Alan Burlison
Lori Alt wrote:

> Basically, it means that we don't want it mounted at all
> because it's a placeholder dataset.  It's just a container for
> all the boot environments on the system.
> Though, now that I think about it, we should have
> made it "none".

Ok, thanks for the explanation :-)

-- 
Alan Burlison
--


Re: [zfs-discuss] 'legacy' mount after installing B95

2008-08-01 Thread Lori Alt
Alan Burlison wrote:
> NAME        USED  AVAIL  REFER  MOUNTPOINT
> pool/ROOT  5.58G  53.4G    18K  legacy
>
> What's the legacy mount for?  Is it related to zones?
>
>
>   
Basically, it means that we don't want it mounted at all
because it's a placeholder dataset.  It's just a container for
all the boot environments on the system.
Though, now that I think about it, we should have
made it "none".
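
For anyone wondering about the distinction: 'legacy' defers mounting to
mount(1M) and /etc/vfstab, while 'none' means the dataset cannot be mounted at
all.  A container dataset like this could be switched with something along the
lines of (sketch only, using the dataset name above):

    zfs set mountpoint=none pool/ROOT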

Lori



[zfs-discuss] 'legacy' mount after installing B95

2008-08-01 Thread Alan Burlison
NAME        USED  AVAIL  REFER  MOUNTPOINT
pool/ROOT  5.58G  53.4G    18K  legacy

What's the legacy mount for?  Is it related to zones?

thanks,

-- 
Alan Burlison
--


[zfs-discuss] Announcing the OpenSolaris Storage Summit

2008-08-01 Thread Mark A. Carlson






The date for the OpenSolaris
Storage Summit has been set. We will be hosting the event at the
Santa Clara Hyatt Regency
hotel on the 21st of September, 2008. This is right before this year's Storage
Developer Conference at which Sun is a Platinum sponsor.

We already have a couple of keynote speakers lined up: Ben Rockwood and Mike Shapiro. It will take place
all day Sunday, so if you are coming to SDC this year, come a day early
and participate. We will have the first ever OpenSolaris Storage
Community meeting (face to face). A great way to meet some folks that
you may only know by their email addresses. Everybody has an
opportunity to give a Lightning talk and/or put a poster together about
what they are working on or doing with OpenSolaris storage.

Registration is via a wiki page here: http://www.genunix.org/wiki/index.php/OpenSolaris_Developer_Summit_08.
Just add your name and contact information into the Attendance List
table and you are registered! Attendance is free and we hope many folks
from the community will attend this year. Also, you should add yourself
to the summit list,
even if you are not sure you will be able to go, so you can keep up to
date with the planned activities. We may create ways to participate
even if you can't make it to Santa Clara this year (if we get people
asking for access). Once registered, scroll down to the bottom of the
wiki and suggest some topics or volunteer to participate as a poster or
lightning talk. Fill in some details about yourself (or someone else)
at the bottom of the page.

Check it out, get involved, and I hope to see you there! It will be a
blast. More details later...

-- mark




Re: [zfs-discuss] Diagnosing problems after botched upgrade - grub busted

2008-08-01 Thread Johan Hartzenberg
On Fri, Aug 1, 2008 at 11:43 PM, Johan Hartzenberg <[EMAIL PROTECTED]>wrote:

> [snip]
> I could now just re-install and recover my data (I keep my data far away
> from OS disks/pools), or I can try to fix grub.  I hope to learn from this
> process so my questions are:
>
> 1. What is up with grub here?  I don't get a menu, but it does remember the
> old menu entry name for the default entry.  This happens even when I try to
> boot without the External drive plugged in.
>
> 2. How can I edit the grub commands?  What does "Error 15: File not found"
> mean?  Is it looking for the grub menu?  Or a program to boot?
>
> 3. Removing the internal disk from the machine may help... I am not sure to
> what extent grub uses the BIOS boot disk priority... Maybe that will get the
> external disk bootable again?
>
> 4. Should I try to get the grub menu back (from where I can try options to
> edit the boot entries), or should I try to get the grub> prompt back?  Or
> should I try to get one of the pools to import?  Where do I go from here?
>
> Note: I have been careful not to touch or break anything on the external
> disk.  However I never tried to reboot since partitioning the new disk with
> an ACTIVE partition, the way it is at present.  I think this could also
> affect grub's perception of what disks are what.
>
> Thank you,
>   _Johan
>

I physically removed the internal disk.  I am now able to boot again, at
least temporarily.


Re: [zfs-discuss] 200805 Grub problems

2008-08-01 Thread Johan Hartzenberg
Hello kugutsumen, Did you have any luck in resolving your problems?

On Sun, Jun 8, 2008 at 10:53 AM, Kugutsumen <[EMAIL PROTECTED]>wrote:

> I've just installed 2008.05 on a 500 gig disk... Install went fine...
>
> I attached an identically partitioned and labeled disk as soon as the
> rpool was created during the installation.:
>
>  zpool attach rpool c5t0d0s0 c6t0d0s0
>
> Resilver completed right away... and everything seemed to work fine.
>
> Boot on 1st disk and 2nd disk both worked fine...
>
> I created a zfs filesystem, enabled samba sharing which worked fine:
>
> pkg install SUNWsmbs
> pkg install SUNWsmbskr
> svcadm enable -r smb/server
>
> echo >>/etc/pam.conf other password required pam_smb_passwd.so.1 nowarn
>
> zfs create -o casesensitivity=mixed -o nbmand=on -o sharesmb=on rpool/p
> zfs set sharesmb=name=p rpool/p
>
> I copied a bunch of stuff to /rpool/p
>
> rebooted and problem started:
>
> Grub drops me to the command prompt without menu...
>
> Trying bootfs rpool/ROOT/opensolaris
> kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
>
> failed with an inconsistent file system structure...
>
> Rebooted into install environment and did a 'zpool import -R /mnt -f rpool'
> ... rpool seems
> to be okay and rebooted.
>
> Grub drops me again to the command prompt without menu...
>
> Trying bootfs rpool/ROOT/opensolaris
> kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
>
> fails with Error 17: Cannot mount selected partition
>
> Rebooted with the install CD in text mode... and tried
>
>zpool import -R /mnt -f rpool
>mkdir /mnt2
>mount -F zfs rpool/ROOT/opensolaris /mnt2
>bootadm update-archive -R /mnt2
>zpool set bootfs=rpool/ROOT/opensolaris rpool
>
>installgrub /mnt/boot/grub/stage1 /mnt/boot/grub/stage2
> /dev/rdsk/c5t0d0s0
>installgrub /mnt/boot/grub/stage1 /mnt/boot/grub/stage2
> /dev/rdsk/c6t0d0s0
>
> What am I doing wrong?
>
>



-- 
Any sufficiently advanced technology is indistinguishable from magic.
Arthur C. Clarke

Afrikaanse Stap Website: http://www.bloukous.co.za

My blog: http://initialprogramload.blogspot.com

ICQ = 193944626, YahooIM = johan_hartzenberg, GoogleTalk =
[EMAIL PROTECTED], AIM = JohanHartzenberg


[zfs-discuss] Diagnosing problems after botched upgrade - grub busted

2008-08-01 Thread Johan Hartzenberg
I tried to be clever and botched my upgrade.  Now I don't get a grub menu,
only an error like this:

=
Booting 'BE3 Solaris xVM'

findroot (BE_BE3,1,a)

Error 15: File not found

Press any key to continue
=


I do not see a grub menu prior to this error, only the Stage1 Stage2 message
which goes past very fast.

Prior to this error I booted from a CD to single-user mode and ran
installgrub stage1 stage2 /dev/rdsk/Xs0

I did this because at that point grub just gave me a grub prompt and I don't
know grub well enough to boot from there.  I rather suspect that if I manage
to boot the system there will be a way to fix it permanently.  But now
rather let me give the sequence of events that led up to this in the order
they happened.

1.  I took the disk out of the laptop, and made it bootable in an external
enclosure.  This was a couple of days ago - I posted about the fun I had
with that previously, but essentially booting to safemode and importing the
rpool caused the on-disk device-path to be updated, making the disk once
more bootable.

2. I partitioned the new disk, creating a solaris2 partition and on that a
single hog-slice layout.  s0 is the whole partition, minus slice 8 and 9.

3. I create a new future root pool, like this
zpool create RPOOL -f c0d0s0

Note:  -f required because s2 overlaps.

4. Ran lucreate, like this
lucreate -p RPOOL -n BE4

This finished fine.  I used upper-case RPOOL to distinguish it from the BE3
rpool.

5. mounted new Nevada build ISO on /mnt and ran upgraded the live-upgrade
packages.

6. luupgrade -s /mnt -n BE4

7. lumount BE4 and peeked around in there a little.

After this I rebooted, and got no grub menu, just a grub> prompt.

I then booted from the CD and ran installgrub.  Not being able to get to man
pages, I have tried it two times with different options, with reboots in
between, like this:
> installgrub zfs_stage1_5 stage2 /dev/rds/s0
> installgrub -m stage1 stage2 /dev/rdsk/xxs2

This at least got me the error above (Am I now worse off or better off than
I was when I had the grub> prompt?).
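
For reference, the invocation shown in the other grub thread in this digest
passes the full paths to both stage files from the root pool imported under
/mnt; a sketch, with c0d0s0 as a placeholder for whatever slice holds the root
pool:

    zpool import -R /mnt -f rpool
    installgrub /mnt/boot/grub/stage1 /mnt/boot/grub/stage2 /dev/rdsk/c0d0s0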

I then booted from the CD again and tried /boot/solaris/bin/update_grub as I
found that in these forums, but it does not seem to have made any
difference.  I don't know if the command takes any options, I just ran it
and it finished very quickly and without errors.

Note: Due to past editing of the menu.lst file, the default item points to
the BE3 xVM entry.  I just tap the up-arrow and enter to load the "non-xVM"
entry.

Note: I never ran luactivate during the above procedure.

Note: When booting to single-user shell from the install CD, it tells me
that it finds both rpool (BE3) and RPOOL (BE4), allowing me to select one to
mount on /a, however they do not mount, I get an error but I forgot to write
that down.  I get the same error for both.

I could now just re-install and recover my data (I keep my data far away
from OS disks/pools), or I can try to fix grub.  I hope to learn from this
process so my questions are:

1. What is up with grub here?  I don't get a menu, but it does remember the
old menu entry name for the default entry.  This happens even when I try to
boot without the External drive plugged in.

2. How can I edit the grub commands?  What does "Error 15: File not found"
mean?  Is it looking for the grub menu?  Or a program to boot?

3. Removing the internal disk from the machine may help... I am not sure to
what extent grub uses the BIOS boot disk priority... Maybe that will get the
external disk bootable again?

4. Should I try to get the grub menu back (from where I can try options to
edit the boot entries), or should I try to get the grub> prompt back?  Or
should I try to get one of the pools to import?  Where do I go from here?

Note: I have been careful not to touch or break anything on the external
disk.  However I never tried to reboot since partitioning the new disk with
an ACTIVE partition, the way it is at present.  I think this could also
affect grub's perception of what disks are what.

Thank you,
  _Johan


[zfs-discuss] ZFS with Samba and the "previous versions"-tab under Windows explorer

2008-08-01 Thread Rene
Hello, 

I'm testing Ed Plese's Samba patches. As far as I understood his comments on 
http://www.edplese.com/samba-with-zfs.html I should see a "previous version" 
tab in the Windows explorer (explained on: 
http://www.petri.co.il/how_to_use_the_shadow_copy_client.htm) on my 
Samba/ZFS-share. But actually I don't have a previous version tab :(

Samba is working, I can connect from my Windows to my samba share and even 
browse in the .zfs/snapshot directory.

Here's my smb.conf:

 cat /usr/local/samba/lib/smb.conf 
[global]
workgroup = sambatest
security = user

[sambatest]
comment = samba_with_shadowcopies testing area
path = /export/sambatest
read only = no
vfs objects = shadow_copy
shadow_copy: sort = desc
shadow_copy: path = /export/sambatest/renny/.zfs/snapshot
shadow_copy: format = $Y.$m.$d-$H.$M.$S
shadow_copy: sort = desc
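
Two things stand out in the config above (both are assumptions on my part -- I
am going from the stock shadow_copy module conventions, not from Ed Plese's
patch docs, so please verify): the sort option is listed twice, and
strftime-style format strings normally use % rather than $.  A hedged sketch of
the share section with the format matching the snapshot names listed below:

[sambatest]
comment = samba_with_shadowcopies testing area
path = /export/sambatest
read only = no
vfs objects = shadow_copy
shadow_copy: path = /export/sambatest/renny/.zfs/snapshot
shadow_copy: format = GMT-%Y.%m.%d-%H.%M.%S
shadow_copy: sort = desc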

Here's what the /export/sambatest/renny/.zfs/snapshot directory looks like:

ls -lhart /export/sambatest/renny/.zfs/snapshot
total 18
drwxr-xr-x   2 root     sys    2 Jun 30 12:07 GMT-2008.06.30-10.08.56
dr-xr-xr-x   3 root     root   3 Jun 30 12:07 ..
dr-xr-xr-x   8 root     root   8 Jun 30 12:07 .
drwxr-xr-x   2 renny    staff  3 Jun 30 12:31 GMT-2008.06.30-10.32.26
drwxr-xr-x   3 renny    staff  5 Jun 30 14:36 GMT-2008.06.30-12.40.03
drwxr-xr-x   3 renny    staff  5 Jun 30 14:36 GMT-2008.06.30-12.39.01
drwxr-xr-x   3 renny    staff  6 Jul  2 10:58 GMT-2008.07.13-18.21.24
drwxr-xr-x   3 renny    staff  6 Jul  2 10:58 GMT-2008.07.13-18.19.00

So, I have a Samba user named renny and I can connect from Windows to the share 
and can browse in the snapshot directory, but I don't have a "previous versions" 
tab in my Windows Explorer. I've tried it with Windows XP Home and Pro and even with 
the ShadowCopyClient from Microsoft (you can get it from here: 
http://download.microsoft.com/download/4/9/d/49d18272-7622-42f7-85a5-7b01609e8d64/ShadowCopyClient.msi).
 So it works "basically", but with the tab it would be perfect ;)

Does anyone have an idea how I can get this tab, if it is possible at all?


Greetings, 

René
 
 


Re: [zfs-discuss] Terrible zfs performance under NFS load

2008-08-01 Thread Miles Nordin
> "cs" == Chris Siebenmann <[EMAIL PROTECTED]> writes:

cs> (Some versions of syslog let you turn this off for specific
cs> log files, which is very useful for high volume, low
cs> importance ones.)

 To ensure that kernel messages are written to disk promptly,
 syslogd(8) calls fsync(2) after writing messages from the kernel.
 Other messages are not synced explicitly.  You may disable syncing of
 files specified to receive kernel messages by prefixing the pathname
 with a minus sign `-'. 

That's from BSD, which fsync's kernel messages only, not messages from
libc.  Try adding a '-' to the start of your log filename.  It
probably won't work with your syslog variant, though.
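
In syslog.conf terms that would look something like the line below (BSD syntax;
whether your syslogd honours the leading '-' is exactly the open question
above):

    # the leading '-' turns off the fsync-per-message behaviour for this file
    kern.*  -/var/log/kernel.log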

If your syslog is calling fsync on all messages not just kernel
messages, then moving to the syslog protocol between client and ZFS
server instead of NFS might not help.  If you test more, let us know
what happens.




Re: [zfs-discuss] ButterFS

2008-08-01 Thread Vincent Fox
Once upon a time I ran a lab with a whole bunch of SGI workstations.

A company that barely exists now.

This ButterFS may be the Next Big Thing.  But I recall one time how hot 
everyone was for Reiser.  Look how that turned out.

3 years is an entire production lifecycle for the systems in this datacenter.  
So in 3 years I may re-evaluate ZFS.   Until then this is just an interesting 
newsbit.
 
 


[zfs-discuss] What's the best way to get pool vdev structure information?

2008-08-01 Thread Chris Siebenmann
 For various sorts of manageability reasons[*], I need to be able to
extract information about the vdev and device structure of our ZFS pools
(partly because we're using iSCSI and MPXIO, which create basically
opaque device names). Unfortunately Solaris 10 U5 doesn't seem to
currently provide any script/machine readable output form for this
information, so I need to build something to do it myself.

 I can think of three different ways to do this:
* parse the output of 'zpool status'
* write a C program that directly uses libzfs to dump the information
  in a more script-readable format
* use Will Murnane's recently announced 'pyzfs' module to dump the
  information (conveniently I am already writing some of the management
  programs in Python)

 Each approach has its own set of drawbacks, so I'm curious if people
have opinions on which one will probably be the best/most stable over
time/etc. And if anyone already has code (or experience of subtle things
to watch out for), I would love to hear from you.
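
For what it's worth, a minimal sketch of the first option (scraping 'zpool
status'; its text layout is not a committed interface, so treat this as fragile
and only a starting point):

    #!/bin/sh
    # List the vdevs/devices of a pool by scraping the config table of zpool status.
    pool=${1:-tank}
    zpool status "$pool" | awk '
        /NAME[ \t]+STATE/ { in_cfg = 1; next }   # header row of the config table
        in_cfg && /^errors:/ { exit }            # the table ends at the errors: line
        in_cfg && NF > 0     { print $1 }        # first column is the vdev/device name
    '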

 Thanks in advance.

- cks
[*: for example, we need to be able to generate a single list of all of
the iSCSI target+LUNs that are in use on all of the fileservers, and
how the usage is distributed among fileservers and ZFS pools.
]


Re: [zfs-discuss] ButterFS

2008-08-01 Thread Neal Pollack
dick hoogendijk wrote:
> I read this just now in the Unix Guardian:
>
> 
> BTRFS, pronounced ButterFS:
> BTRFS was launched in June 2007, and is a POSIX-compliant file system
> that will support very large files and volumes (16 exabytes) and a
> ridiculous number of files (two to the power of 64 files, to be
> precise). The file system has object-level mirroring and striping,
> checksums on data and metadata, online file system check, incremental
> backup and file system mirroring, subvolumes with their own file system
> roots, writable snapshots, and index and file packing to conserve
> space, among many other features. BTRFS is not anywhere near primetime,
> and Garbee figures it will take at least three years to get it out the
> door.
> 
>
> I thought that ZFS was/is the way to the future, but reading this it
> seems there are competitors out there ;-)
>   

Not yet :-)  Wait three years, if they are on time.
For today, this hour, you can actually use ZFS.

Also, no problem, choice is good.  It keeps up the
motivation for ongoing innovation.





Re: [zfs-discuss] Terrible zfs performance under NFS load

2008-08-01 Thread Chris Siebenmann
| Syslog is funny in that it does a lot of open/write/close cycles so
| that rotate can work trivially.

 I don't know of any version of syslog that does this (certainly Solaris
10 U5 syslog does not). The traditional syslog(d) performance issue
is that it fsync()'s after writing each log message, in an attempt to
maximize the chances that the log message will make it to disk and
survive a system crash, power outage, etc.

(Some versions of syslog let you turn this off for specific log files,
which is very useful for high volume, low importance ones.)

 I've heard that at one point, NFS + ZFS was known to have performance
issues with fsync()-heavy workloads. I don't know if that's still true
today (in either Solaris 10U5 or current OpenSolaris builds), or if all
of the issues have been fixed.

- cks


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-08-01 Thread Richard Elling
Florin Iucha wrote:
> On Fri, Aug 01, 2008 at 06:37:29AM -0700, Steve wrote:
>   
>> So, better AMD with ECC but not optimal power mgt (and seems cheaper), or 
>> Intel with NO-ECC but power mgt?
>> 
>
> How about we complain enough to shame somebody into adding power
> management to the K8 chips?  We can start by reminding SUN on how much
> it was trumpeting the early Opterons as 'green computing'.
>   

FWIW, the power management discussions on this are held over in the
laptop-discuss forum.  You can search for the threads there and see the
current status.
http://www.opensolaris.org/jive/forum.jspa?forumID=66

 -- richard



Re: [zfs-discuss] Supermicro AOC-SAT2-MV8 hang when drive removed

2008-08-01 Thread Richard Elling
Hi Andy, answer & pointer below...

Andrew Hisgen wrote:
> Question embedded below...
>
> Richard Elling wrote:
> ...
>> If you surf to http://www.sun.com/msg/ZFS-8000-HC you'll
>> see words to the effect that,
>> The pool has experienced I/O failures. Since the ZFS pool property
>>   'failmode' is set to 'wait', all I/Os (reads and writes) are
>>   blocked. See the zpool(1M) manpage for more information on the
>>   'failmode' property. Manual intervention is required for I/Os to
>>   be serviced.
>>
>>>  
>>> I would guess that ZFS is attempting to write to the disk in the 
>>> background, and that this is silently failing.
>>
>> It is clearly not silently failing.
>>
>> However, the default failmode property is set to "wait" which will 
>> patiently
>> wait forever.  If you would rather have the I/O fail, then you should 
>> change
>> the failmode to "continue"  I would not normally recommend a failmode of
>> "panic"
>
> Hi Richard,
>
> Does failmode==wait cause ZFS itself to retry i/o, that is, to retry an
> i/o where an earlier request (of that same i/o) returned from the driver
> with an error?  If so, that will compound timeouts even further.
>
> I'm also confused by your statement that wait means wait forever, given
> that the actual circumstances here are that zfs (and the rest of the
> i/o stack) returned after 9 minutes.

The details are in PSARC/2007/567.  Externally available at:
http://www.opensolaris.org/os/community/arc/caselog/2007/567/

With failmode=wait, I/Os will wait until "manual intervention" which
is shown as an administrator running zpool clear on the affected pool.
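
For completeness, the knobs involved look roughly like this (pool name is a
placeholder; the failmode property only exists on builds with the above case
integrated):

    zpool get failmode tank            # show the current setting (default is wait)
    zpool set failmode=continue tank   # have I/O return errors instead of blocking
    zpool clear tank                   # the "manual intervention": lets blocked I/O resume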

I see the need for a document to help people work through these
cases as they can be complex at many different levels.
 -- richard



Re: [zfs-discuss] Replacing the boot HDDs in x4500

2008-08-01 Thread Ross Smith

Sorry Ian, I was posting on the forum and missed the word "disks" from my 
previous post.  I'm still not used to Sun's mutant cross of a message board / 
mailing list.
 
Ross
> Date: Fri, 1 Aug 2008 21:08:08 +1200
> From: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
> CC: zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] Replacing the boot HDDs in x4500
>
> Ross wrote:
> > Wipe the snv_70b disks I meant.
>
> What disks? This message makes no sense without context.
>
> Context free messages are a pain in the arse for those of us who use the
> mail list.
>
> Ian
_
Make a mini you on Windows Live Messenger!
http://clk.atdmt.com/UKM/go/107571437/direct/01/___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Replacing the boot HDDs in x4500

2008-08-01 Thread Ian Collins
Ross wrote:
> Wipe the snv_70b disks I meant.
>  
>   
What disks?  This message makes no sense without context.

Context free messages are a pain in the arse for those of us who use the
mail list.

Ian


Re: [zfs-discuss] ButterFS

2008-08-01 Thread Michael Schuster
dick hoogendijk wrote:
> I read this just now in the Unix Guardian:
> 
> 
> BTRFS, pronounced ButterFS:
> BTRFS was launched in June 2007, and is a POSIX-compliant file system
> that will support very large files and volumes (16 exabytes) and a
> ridiculous number of files (two to the power of 64 files, to be
> precise). The file system has object-level mirroring and striping,
> checksums on data and metadata, online file system check, incremental
> backup and file system mirroring, subvolumes with their own file system
> roots, writable snapshots, and index and file packing to conserve
> space, among many other features. BTRFS is not anywhere near primetime,
> and Garbee figures it will take at least three years to get it out the
> door.
> 
> 
> I thought that ZFS was/is the way to the future, but reading this it
> seems there are competitors out there ;-)

I don't see any contradiction here - even if ZFS is the way to go, there's 
no objecting to other people trying their own path, right? ;-)

Michael
-- 
Michael Schuster          http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-08-01 Thread Florin Iucha
On Fri, Aug 01, 2008 at 06:37:29AM -0700, Steve wrote:
> So, better AMD with ECC but not optimal power mgt (and seems cheaper), or 
> Intel with NO-ECC but power mgt?

How about we complain enough to shame somebody into adding power
management to the K8 chips?  We can start by reminding SUN on how much
it was trumpeting the early Opterons as 'green computing'.

Cheers,
florin

-- 
Bruce Schneier expects the Spanish Inquisition.
  http://geekz.co.uk/schneierfacts/fact/163




Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-08-01 Thread Steve
I didn't thoroughly search, but it seems that Newegg doesn't have any micro ATX 
motherboard with the chipsets specified on Wikipedia that support ECC!... (query: 
Form Factor[Micro ATX ],North Bridge[Intel 925X ],North Bridge[Intel 975X 
],North Bridge[Intel X38 ],North Bridge[Intel X48 ])

So, better AMD with ECC but not optimal power mgt (and seems cheaper), or Intel 
with NO-ECC but power mgt?
 
 


Re: [zfs-discuss] Supermicro AOC-SAT2-MV8 hang when drive removed

2008-08-01 Thread Andrew Hisgen
Question embedded below...

Richard Elling wrote:
...
> If you surf to http://www.sun.com/msg/ZFS-8000-HC you'll
> see words to the effect that,
> The pool has experienced I/O failures. Since the ZFS pool property
>   'failmode' is set to 'wait', all I/Os (reads and writes) are
>   blocked. See the zpool(1M) manpage for more information on the
>   'failmode' property. Manual intervention is required for I/Os to
>   be serviced.
> 
>>  
>> I would guess that ZFS is attempting to write to the disk in the 
>> background, and that this is silently failing.
> 
> It is clearly not silently failing.
> 
> However, the default failmode property is set to "wait" which will patiently
> wait forever.  If you would rather have the I/O fail, then you should change
> the failmode to "continue"  I would not normally recommend a failmode of
> "panic"

Hi Richard,

Does failmode==wait cause ZFS itself to retry i/o, that is, to retry an
i/o where an earlier request (of that same i/o) returned from the driver
with an error?  If so, that will compound timeouts even further.

I'm also confused by your statement that wait means wait forever, given
that the actual circumstances here are that zfs (and the rest of the
i/o stack) returned after 9 minutes.

thanks,
Andy


Re: [zfs-discuss] Replacing the boot HDDs in x4500

2008-08-01 Thread Jorgen Lundman


Ross wrote:
> Not if you don't upgrade the pool it won't.  ZFS can import and work with an 
> old version of the filesystem fine.  The manual page for zpool upgrade says:
> "Older versions can continue to be used"
> 
> Just import it on Solaris 5/08 without doing the upgrade.  Your ZFS pool will 
> be available and can be served out from the new version.  If you do find any 
> problems (which I wouldn't expect to be honest), you can plug your old 
> snv_70b boot disk in if necessary.

The current (old) server's filesystems are ZFS version 2. The new boot HDDs/OS 
only support ZFS version 1. I do not think ZFS version 1 software will read 
version 2, and I see no script for converting a version 2 filesystem to version 1.



-- 
Jorgen Lundman   | <[EMAIL PROTECTED]>
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
Japan| +81 (0)3 -3375-1767  (home)


Re: [zfs-discuss] Can I trust ZFS?

2008-08-01 Thread Robert Milkowski
Hello Ross,

I know personally many environments using ZFS in a production for
quite some time. Quite often in business critical environments.
Some of them are small, some of them are rather large (hundreds of
TBs), some of them are clustered. Different usages like file servers,
MySQL on ZFS, Oracle on ZFS, mail on ZFS, virtualization on ZFS, ...

So far I haven't seen any data loss - I have hit some issues from time
to time, but nothing that couldn't be worked around.

That being said, ZFS is still a relatively young technology, so if your
top priority, regardless of anything else, is stability and confidence, I
would go with UFS or VxFS/VxVM, which have been on the market for many
years and are proven in a great many deployments.



-- 
Best regards,
 Robert Milkowski               mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



Re: [zfs-discuss] Replacing the boot HDDs in x4500

2008-08-01 Thread Ross
> > so you can still go back to snv_70b if needed.
> 
> Alas, it would be downgrade. Which is why I think it
> will fail.

Not if you don't upgrade the pool it won't.  ZFS can import and work with an 
old version of the filesystem fine.  The manual page for zpool upgrade says:
"Older versions can continue to be used"

Just import it on Solaris 5/08 without doing the upgrade.  Your ZFS pool will 
be available and can be served out from the new version.  If you do find any 
problems (which I wouldn't expect to be honest), you can plug your old snv_70b 
boot disk in if necessary.
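
If in doubt, the on-disk versions can be checked before deciding anything (pool
name is a placeholder):

    zpool upgrade          # lists pools formatted with an older on-disk version
    zpool upgrade -v       # lists the versions this build supports
    zpool import -f tank   # importing does not change the pool's version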

> zfs send of the /zvol/ufs volume would take 2 days. Currently it panics 
> at least once a day. There appears to be no way to resume a "half 
> transfered" zfs send. So, rsyncing smaller bits.

Aaah, that makes sense now.  I don't think you need to do this though, I really 
think your idea of swapping the boot disks is the best way of getting this 
server up & running.

The absolute worst case scenario is that Solaris 5/08 also crashes on the old 
Thumper which means you have faulty hardware.  If that happens you'll probably 
need to move your data drives to the new chassis and hope it's not a bad drive 
causing the fault.

Either way, let me know how you get on.

Ross
 
 


Re: [zfs-discuss] Replacing the boot HDDs in x4500

2008-08-01 Thread Jorgen Lundman


Ross wrote:
> I do think a zfs import after booting from the new drives should 
 > work fine, and it doesn't automatically upgrade the pool,
 > so you can still go back to snv_70b if needed.

Alas, it would be downgrade. Which is why I think it will fail.


> 
> PS.  In your first post you said you had no time to copy the filesystem, so 
> why are you trying to use send/receive?  Both rsync and send/receive will 
> take a long time to complete.
>  
>  

zfs send of the /zvol/ufs volume would take 2 days. Currently it panics 
at least once a day. There appears to be no way to resume a "half 
transferred" zfs send. So, I am rsyncing smaller bits.

zfs send -i only works if you have a full copy already, which we can't 
get from above.



-- 
Jorgen Lundman   | <[EMAIL PROTECTED]>
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
Japan| +81 (0)3 -3375-1767  (home)


[zfs-discuss] ButterFS

2008-08-01 Thread dick hoogendijk
I read this just now in the Unix Guardian:


BTRFS, pronounced ButterFS:
BTRFS was launched in June 2007, and is a POSIX-compliant file system
that will support very large files and volumes (16 exabytes) and a
ridiculous number of files (two to the power of 64 files, to be
precise). The file system has object-level mirroring and striping,
checksums on data and metadata, online file system check, incremental
backup and file system mirroring, subvolumes with their own file system
roots, writable snapshots, and index and file packing to conserve
space, among many other features. BTRFS is not anywhere near primetime,
and Garbee figures it will take at least three years to get it out the
door.


I thought that ZFS was/is the way to the future, but reading this it
seems there are competitors out there ;-)

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
++ http://nagual.nl/ + SunOS sxce snv94 ++


Re: [zfs-discuss] Replacing the boot HDDs in x4500

2008-08-01 Thread Ross
But zfs send/receive is very different to zfs import.  I'm not sure if zfs 
send/receive work across different versions of zfs, I vaguely remember reading 
something about it not working, but can't find anything specific about it right 
now.

I do think a zfs import after booting from the new drives should work fine, and 
it doesn't automatically upgrade the pool, so you can still go back to snv_70b 
if needed.  After all, if zfs import did change the version, the zfs upgrade 
command would be redundant.  See the following lines from the zfs manual:

zpool import [-d dir] [-D] [-f] [-o opts] [-R root] pool | id [newpool] 
Imports a specific pool. A pool can be identified by its name or the numeric 
identifier. If newpool is specified, the pool is imported using the name 
newpool. Otherwise, it is imported with the same name as its exported name.

If a device is removed from a system without running “zpool export” first, the 
device appears as potentially active. It cannot be determined if this was a 
failed export, or whether the device is really in use from another host. To 
import a pool in this state, the -f option is required.

zpool upgrade 
Displays all pools formatted using a different ZFS on-disk version. Older 
versions can continue to be used, but some features may not be available. These 
pools can be upgraded using “zpool upgrade -a”. Pools that are formatted with a 
more recent version are also displayed, although these pools will be 
inaccessible on the system.

Ross

PS.  In your first post you said you had no time to copy the filesystem, so why 
are you trying to use send/receive?  Both rsync and send/receive will take a 
long time to complete.
 
 


Re: [zfs-discuss] Can I trust ZFS?

2008-08-01 Thread Enda O'Connor ( Sun Micro Systems Ireland)
Dave wrote:
> 
> 
> Enda O'Connor wrote:
>>
>> As for thumpers, once 138053-02 (  marvell88sx driver patch ) releases 
>> within the next two weeks ( assuming no issues found ), then the 
>> thumper platform running s10 updates will be up to date in terms of 
>> marvel88sx driver fixes, which fixes some pretty important issues for 
>> thumper.
>> Strongly suggest applying this patch to thumpers going forward.
>> u6 will have the fixes by default.
>>
> 
> I'm assuming the fixes listed in these patches are already committed in 
> OpenSolaris (b94 or greater)?
> 
> -- 
> Dave
Yep.
I know this is the opensolaris list, but a lot of the folk asking questions do
seem to be running various update releases.


Enda


Re: [zfs-discuss] zfs-auto-snapshot 0.11 work (was Re: zfs-auto-snapshot with at schedul

2008-08-01 Thread Darren J Moffat
Tim Foster wrote:
>>> can roles run cron jobs ?),
>>
>> No. You need a user who can take on the role.
> 
> Darn, back to the drawing board.

I don't have all the context on this but Solaris RBAC roles *can* run 
cron jobs.  Roles don't have to have users assigned to them.

Roles normally have passwords and accounts that have valid passwords can 
run cron jobs.

To create an account that can not login but can run cron jobs use:
passwd -N username

Examples of such accounts are sys,adm,lp,postgres

Accounts that are locked (by passwd -l username) can not run cron jobs.
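
A minimal sketch of that, assuming cron is the scheduler (the role name and
home directory here are made up):

    roleadd -m -d /export/home/zfssnap zfssnap   # create the role account
    passwd -N zfssnap                            # "no password": valid for cron, no interactive login
    crontab -e zfssnap                           # as root, edit the role's crontab to add the job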

Tim feel free to explain to me offline what it was you were trying to 
use roles for.

-- 
Darren J Moffat


Re: [zfs-discuss] Replacing the boot HDDs in x4500

2008-08-01 Thread Jorgen Lundman

I am currently thinking that it will not work.  I ran into this situation:

x4500-01# zfs send zpool1/[EMAIL PROTECTED] | nc -v x4500-02 3334
x4500-02# nc -l -p  -vvv | zfs recv  -v zpool1/www

x4500-02# cannot mount 'zpool1/www': Operation not supported

Mismatched versions: File system is version 2 on-disk format, which is
incompatible with this software version 1!
cannot mount 'zpool1/www': Operation not supported

Bluntly, we are screwed. It is rsync, or nothing.
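
For anyone hitting the same wall: on builds recent enough to have the
subcommand, the filesystem (as opposed to pool) versions on each side can be
checked first; a sketch:

    zfs upgrade      # lists filesystems not running the current on-disk version
    zfs upgrade -v   # lists the ZFS filesystem versions this build supports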

Lund

Ross wrote:
> I'd expect that to work personally, although I'd just drop one of your boot 
> mirrors in myself.  That leaves the second drive untouched for your other 
> server.  It also means that if it works you could just wipe the old snv_70b 
> and re-establish the boot mirrors on each server with them.
>  
>  

-- 
Jorgen Lundman   | <[EMAIL PROTECTED]>
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
Japan| +81 (0)3 -3375-1767  (home)


Re: [zfs-discuss] Replacing the boot HDDs in x4500

2008-08-01 Thread Ross
Wipe the snv_70b disks I meant.
 
 


Re: [zfs-discuss] Replacing the boot HDDs in x4500

2008-08-01 Thread Ross
I'd expect that to work personally, although I'd just drop one of your boot 
mirrors in myself.  That leaves the second drive untouched for your other 
server.  It also means that if it works you could just wipe the old snv_70b and 
re-establish the boot mirrors on each server with them.
 
 