Re: [zfs-discuss] Sun samba <-> ZFS ACLs

2008-09-03 Thread Richard Elling
Wilkinson, Alex wrote:
> On Wed, Sep 03, 2008 at 12:57:52PM -0700, Paul B. Henson wrote: 
>
> >I tried installing the Sun provided samba source code package to try to do
> >some debugging on my own, but it won't even compile, configure fails with:
>
> Oh, where did you get that from ?
>   

Source packages are usually in a Solaris distribution (overloaded term,
but look at something like Solaris 10 5/08) and typically end in "S".
So look in the Product directory for something like SUNWsambaS.
Of course, this means that if you think you are installing everything
when you tell the installer to install all, then you are wrong
for assuming all meant everything -- a pet peeve of mine, and
probably a new pet peeve for you, too :-(
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool upgrade wrecked GRUB

2008-09-03 Thread Fred
I have a similar situation and would love some concise suggestions:

Had a working version of 2008.05 running snv_93 with the updated GRUB. I did a 
pkg-update to snv_95 and ran the zpool upgrade when it was suggested. The system ran 
fine until I did a reboot; then it wouldn't boot, and only the GRUB command line shows up.

From this post it appears that I'll have to install another disk to import the 
rpool and resurrect the system. Is this true? I'm downloading 
sol-nv-b97-x86-dvd.iso now. Can I use this?

Any guided suggestions would be wonderful.

Thanks.
--
This message posted from opensolaris.org


Re: [zfs-discuss] Sun samba <-> ZFS ACLs

2008-09-03 Thread Wilkinson, Alex
On Wed, Sep 03, 2008 at 12:57:52PM -0700, Paul B. Henson wrote: 

>I tried installing the Sun provided samba source code package to try to do
>some debugging on my own, but it won't even compile, configure fails with:

Oh, where did you get that from ?

 -aW





Re: [zfs-discuss] x4500 vs AVS ?

2008-09-03 Thread Marion Hakanson
[EMAIL PROTECTED] said:
> We did ask our vendor, but we were just told that AVS does not support
> x4500. 

You might have to use the open-source version of AVS, but it's not
clear whether that requires OpenSolaris or if it will run on Solaris 10.
Here's a description of how to set it up between two X4500's:

  http://blogs.sun.com/AVS/entry/avs_and_zfs_seamless

Regards,

Marion




Re: [zfs-discuss] SAS or SATA HBA with write cache

2008-09-03 Thread Bryan Wagoner
It doesn't really have a write cache, but some of us have been using this 
relatively inexpensive card with good, fast results. I've been using it with 
SATA rather than SAS.

AOC-USAS-L8i

http://www.supermicro.com/products/accessories/addon/AOC-USAS-L8i.cfm

Thread:
http://opensolaris.org/jive/thread.jspa?threadID=66128&tstart=60


[zfs-discuss] x4500 vs AVS ?

2008-09-03 Thread Jorgen Lundman

If we get two x4500s, and look at AVS, would it be possible to:

1) Setup AVS to replicate zfs, and zvol (ufs) from 01 -> 02 ? Supported 
by Sol 10 5/08 ?


Assuming 1: if we set up a home-made IP fail-over so that, should 01 go 
down, all clients are redirected to 02.


2) Fail-back, are there methods in AVS to handle fail-back? Since 02 has 
been used, it will have newer/modified files, and will need to replicate 
backwards until synchronised, before fail-back can occur.


We did ask our vendor, but we were just told that AVS does not support 
x4500.


Lund

-- 
Jorgen Lundman   | <[EMAIL PROTECTED]>
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
Japan| +81 (0)3 -3375-1767  (home)


[zfs-discuss] manual intervention needed on usb pool import

2008-09-03 Thread Michael Hunter
I have a pool on a USB device that I try to import with 'zpool import -f passport'.  I get 
an error in syslog: "Pool 'passport' has encountered an uncorrectable I/O error.  
Manual intervention is required."

The import at this point is hung and unkillable.

I didn't find anything in the man pages to cover this situation.

From Google and this list I could find procedures that covered seeing this 
error, but not on import (status and scrub don't work before import).

What is the procedure for working through this issue, and/or where do I find 
docs that cover it?

   mph


Re: [zfs-discuss] SAS or SATA HBA with write cache

2008-09-03 Thread Aaron Blew
On Wed, Sep 3, 2008 at 1:48 PM, Miles Nordin <[EMAIL PROTECTED]> wrote:

> I've never heard of a battery that's used for anything but RAID
> features.  It's an interesting question, if you use the controller in
> ``JBOD mode'' will it use the write cache or not?  I would guess not,
> but it might.  And if it doesn't, can you force it, even by doing
> sneaky things like making 2-disk mirrors where 1 disk happens to be
> missing thus wasting half the ports you bought, but turning on the
> damned write cache?  I don't know.
>

The X4150 SAS RAID controllers will use the on-board battery backed cache
even when disks are presented as individual LUNs.  You can also globally
enable/disable the disk write caches.
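
For reference, per-disk write caches on Solaris are usually toggled through
format's expert mode. A rough sketch of such a session follows (the disk
selection step is elided, and exact menu entries can vary by driver and disk
type, so treat this as illustrative):

bash-3.00# format -e
(select the disk from the menu)
format> cache
cache> write_cache
write_cache> enable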


Re: [zfs-discuss] SAS or SATA HBA with write cache

2008-09-03 Thread Richard Elling
comment at bottom...

Miles Nordin wrote:
> [full quote trimmed; the complete message appears below in this digest]

[The remainder of this message, including Richard's comment, was truncated in 
the archive.]

Re: [zfs-discuss] SAS or SATA HBA with write cache

2008-09-03 Thread Miles Nordin
> "mb" == Matt Beebe <[EMAIL PROTECTED]> writes:

mb> Anyone know of a SATA and/or SAS HBA with battery backed write
mb> cache?

I've never heard of a battery that's used for anything but RAID
features.  It's an interesting question, if you use the controller in
``JBOD mode'' will it use the write cache or not?  I would guess not,
but it might.  And if it doesn't, can you force it, even by doing
sneaky things like making 2-disk mirrors where 1 disk happens to be
missing thus wasting half the ports you bought, but turning on the
damned write cache?  I don't know.

The alternative is to get a battery-backed SATA slog like the Gigabyte
i-RAM.  However, beware, because once you add a slog to a pool, you can
never remove it.  You can't import the pool without the slog, not even
DEGRADED, not even if you want ZFS to pretend the slog is empty, not
even if the slog actually was empty.  IIRC (might be confused) Ross
found the pool will mount at boot without the slog if it's listed in
zpool.cache (why?  don't know, but I think he said it does), but once
you export the pool there is no way to get it back into zpool.cache
since zpool.cache is a secret binary config file.  Can you substitute
any empty device for the missing slog?  Nope---the slog has a secret
binary header label on it.

I'm guessing one of the reasons you wanted a non-RAID controller with
a write cache was so that if the controller failed, and the exact same
model wasn't available to replace it, most of your pool would still be
readable with any random controller, modulo risk of corruption from
the lost write cache.  so...with the slog, you don't have that,
because there are magic irreplaceable bits stored on the slog without
which your whole pool is useless.

bash-3.00# zpool import -d /usr/vdev
  pool: slogtest
id: 11808644862621052048
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

slogtest  ONLINE
  mirror  ONLINE
/usr/vdev/d0  ONLINE
/usr/vdev/d1  ONLINE
logs
slogtest  ONLINE
  /usr/vdev/slog  ONLINE
bash-3.00# mv vdev/slog .
bash-3.00# zpool import -d /usr/vdev
  pool: slogtest
id: 11808644862621052048
 state: FAULTED
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-6X
config:

slogtest  UNAVAIL  missing device
  mirror  ONLINE
/usr/vdev/d0  ONLINE
/usr/vdev/d1  ONLINE

Additional devices are known to be part of this pool, though their
exact configuration cannot be determined.
bash-3.00# 

damn.  ``no user-serviceable parts inside.''  however, if you were
sneaky enough to save a backup copy of your empty slog to get around
Solaris's obstinacy, maybe you can proceed:

bash-3.00# gzip slog                     <-- save a copy of the exported empty slog
bash-3.00# ls -l slog.gz
-rw-r--r--   1 root root  106209 Sep  3 16:17 slog.gz
bash-3.00# gunzip < slog.gz > vdev/slog
bash-3.00# zpool import -d /usr/vdev
  pool: slogtest
id: 11808644862621052048
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

slogtest  ONLINE
  mirror  ONLINE
/usr/vdev/d0  ONLINE
/usr/vdev/d1  ONLINE
logs
slogtest  ONLINE
  /usr/vdev/slog  ONLINE
bash-3.00# zpool import -d /usr/vdev slogtest
bash-3.00# pax -rwpe /usr/sfw/bin /slogtest
^C
bash-3.00# zpool export slogtest
bash-3.00# gunzip < slog.gz > vdev/slog  <-- wipe the slog
bash-3.00# zpool import -d /usr/vdev slogtest
bash-3.00# zfs list -r slogtest
NAME   USED  AVAIL  REFER  MOUNTPOINT
slogtest  18.1M  25.4M  17.9M  /slogtest
bash-3.00# zpool scrub slogtest
bash-3.00# zpool status slogtest
  pool: slogtest
 state: ONLINE
 scrub: scrub completed with 0 errors on Wed Sep  3 16:23:44 2008
config:

NAME  STATE READ WRITE CKSUM
slogtest  ONLINE   0 0 0
  mirror  ONLINE   0 0 0
/usr/vdev/d0  ONLINE   0 0 0
/usr/vdev/d1  ONLINE   0 0 0
logs  ONLINE   0 0 0
  /usr/vdev/slog  ONLINE   0 0 0

errors: No known data errors
bash-3.00# 

I'm not sure this will always work, because there probably wasn't
anything in the slog when I wiped it.  But I guess it's better than
``restore your pool from backup'' because of the pedantry of some
wallpaper tool and brittle windows-registry-style binary config files.
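
Miles's trick of keeping a compressed copy of the empty slog image can be
sketched generically. In this runnable sketch a plain file stands in for the
slog device and all paths are made up; on a real system you would read the raw
slog device with dd instead:

```shell
#!/bin/sh
# Demonstrate saving and restoring an "empty slog" image, with a plain
# file standing in for the real log device.  Paths are hypothetical.
set -e
mkdir -p /tmp/slogdemo
cd /tmp/slogdemo

# Stand-in for a freshly created, never-written slog (64 KB of zeros).
dd if=/dev/zero of=slog bs=1024 count=64 2>/dev/null

# Save a compressed copy of the empty image (the "gzip slog" step above).
gzip -c slog > slog.gz

# Simulate losing the slog device...
rm slog

# ...then restore the saved empty image so the pool could be imported again.
gunzip -c slog.gz > slog

# Show the restored image's size and checksum.
cksum slog
```

On a real pool the same idea would mean saving an image of the raw slog device
(e.g. with dd and gzip) immediately after adding it as a log vdev, exactly as
Miles's transcript does with file-backed vdevs.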




[zfs-discuss] Sun samba <-> ZFS ACLs

2008-09-03 Thread Paul B. Henson

Way back when I first started looking at ZFS I remember testing the sun
samba/zfs acl integration. I had some problems with the special ace's at
first, but I thought those were resolved by installing the latest samba
patch. However, after working on other pieces of our developing
infrastructure for a while, I went back to revisit samba, and it doesn't
work :(. I initially tested with S10U4; I'm currently running U5 with a
few additional patches.

Given a file with the following ACL:

-rw---   1 henson   csupomona   0 Sep  3 12:19 
/export/user/henson/test.file
owner@:rw-pdDaARWcC--:--:allow
group@:--:--:allow
 everyone@:--:--:allow


I connect to the samba share from Windows XP, right-click on the file,
click properties and then security, give "everyone" read privileges,  and
then after applying here is what happens:

-r--r--r--+  1 henson   csupomona   0 Sep  3 12:19 
/export/user/henson/test.file
group:csupomona:-s:--:allow
 everyone@:r-a-R-c--s:--:allow
   user:henson:rw-pdDaARWcC--:--:allow

The special owner/group entries are replaced with explicit user/group
entries, the order is changed, and the "s"  permission spuriously applied.
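
For anyone trying to reproduce this, the full ACL is easiest to read with
Solaris's ls -V, and a chmod with an absolute mode rewrites the list back to a
trivial ACL (exactly how depends on the dataset's aclmode property; the path is
the test file above):

bash-3.00# ls -V /export/user/henson/test.file
bash-3.00# chmod 600 /export/user/henson/test.file
bash-3.00# ls -V /export/user/henson/test.file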


I tried installing the Sun provided samba source code package to try to do
some debugging on my own, but it won't even compile, configure fails with:


checking for ldap_add_result_entry... no
configure: error: Active Directory support requires ldap_add_result_entry


Looking at the README.sfw included in the source package, there is
evidently some "libsunwrap.a" file necessary to access that function call
in the Sun LDAP library as it is not exported; this does not appear to be
included in the samba source package.

Anybody have any ideas about this? I'm considering trying to install
another S10U4 system like I initially tested with to confirm whether or not
it actually worked then or if I'm just being prematurely senile 8-/.


-- 
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  [EMAIL PROTECTED]
California State Polytechnic University  |  Pomona CA 91768


[zfs-discuss] disk UNAVAIL

2008-09-03 Thread Glaser, David
I have a disk that went 'bad' on a x4500. It came up with UNAVAIL in a zpool 
status and was 'unconfigured' in cfgadm. The x4500 has a cute little blue light 
that tells you when it's able to be removed. With it on, I replaced the disk 
and reconfigured it with cfgadm.

Now cfgadm lists it as configured and I can see it, but when I try to do a 
zpool status, it still lists the drive as UNAVAIL. I've tried rebooting, and 
applying every patch I can think of (the machine is up to date with patches).

When I run a 'zfs replace  c7t3d0 c7t3d0' the command just hangs. It's 
running S10u3. I don't have any ideas on how to tell zpool to actually use the 
disk again. Any ideas?

I know there's not much detail here, but I'm not sure exactly what more people 
would need to know to help out.

Thanks in advance
Dave


David Glaser
Systems Administrator
LSA Information Technology
University of Michigan



[zfs-discuss] SAS or SATA HBA with write cache

2008-09-03 Thread Matt Beebe
Anyone know of a SATA and/or SAS HBA with battery backed write cache?

Seems like using a full-blown RAID controller and exporting each individual 
drive back to ZFS as a single LUN is a waste of power and $$$.  Looking for any 
thoughts or ideas.

Thanks.

-Matt


Re: [zfs-discuss] What is the correct procedure to replace a non failed disk for another?

2008-09-03 Thread Mattias Pantzare
2008/9/3 Jerry K <[EMAIL PROTECTED]>:
> Hello Bob,
>
> Thank you for your reply.  Your final sentence is a gem I will keep.
>
> As far as the rest, I have a lot of production servers that are (2) drive
> systems, and I really hope that there is a mechanism to quickly R&R dead
> drives, resilvering aside.  I guess I need to do some more RTFMing into
> this.

If the drive is dead, the pool is already in degraded mode. You simply
replace the failed drive and tell ZFS that it was replaced:

zpool replace pool device


Re: [zfs-discuss] ZFS hangs/freezes after disk failure,

2008-09-03 Thread Joe S
On Fri, Aug 29, 2008 at 10:32 PM, Todd H. Poole <[EMAIL PROTECTED]> wrote:
> I can't agree with you more. I'm beginning to understand what the phrase 
> "Sun's software is great - as long as you're running it on Sun's hardware" 
> means...
>
> Whether it's deserved or not, I feel like this OS isn't mature yet. And maybe 
> it's not the whole OS, maybe it's some specific subsection (like ZFS), but my 
> general impression of OpenSolaris has been... not stellar.
>
> I don't think it's ready yet for a prime time slot on commodity hardware.

I agree, but with careful research, you can find the *right* hardware.
In my quest (took weeks) to find reports of reliable hardware, I found
that the AMD chipsets were way too buggy. I also noticed that of the
workstations that Sun sells, they use nVidia nForce chipsets for AMD
CPU's and Intel x38 (only intel desktop chipset that supports ecc) for
the Intel CPUs. I read good and bad stories about various hardware and
decided I would stay close to what Sun sells. I've found NO Sun
hardware using the same chipset as yours.

There are a couple of AHCI bugs with the AMD/ATI SB600 chipset. Both
Linux and Solaris were affected. Linux put in a workaround that may
hurt performance slightly. Sun still has the bug open, but for what
it's worth, who's gonna use or care about a buggy desktop chipset in a
storage server?

I have an nVidia nForce 750a chipset (not the same as the Sun
workstations, which use nForce Pro, but it's not too different) and the
same CPU (45 Watt dual core!) you have. My system works great (so
far). I haven't tried the disconnect-drive issue, though. I will try
it tonight.


Re: [zfs-discuss] faulty sub-mirror and CKSUM errors

2008-09-03 Thread Miles Nordin
> "rm" == Robert Milkowski <[EMAIL PROTECTED]> writes:

rm>   What bothers me is why I got CKSUM errors.

I think they accumulated latently while you had the pool imported on
Node 2 with half of the mirror missing.  ZFS seems to count unexpected
resilvering as CKSUM errors sometimes.  

Richard said you can tell the difference between real CKSUM errors and
resilvering by looking at fmdump, but I'm not sure how to do it.




Re: [zfs-discuss] What is the correct procedure to replace a non failed disk for another?

2008-09-03 Thread Jerry K
Hello Bob,

Thank you for your reply.  Your final sentence is a gem I will keep.

As far as the rest, I have a lot of production servers that are (2) drive 
systems, and I really hope that there is a mechanism to quickly R&R dead 
drives, resilvering aside.  I guess I need to do some more RTFMing into 
this.

Jerry K.


Bob Friesenhahn wrote:
> On Wed, 3 Sep 2008, Jerry K wrote:
> 
>> How would this work for servers that support only (2) drives, or systems
>>  that are configured to have pools of (2) drives, i.e. mirrors, and
>> there is no additional space to have a new disk, as shown in the sample
>> below.
> 
> You may be able to accomplish what you want by using an intermediate 
> temporary disk and doubling the work (two replacements).  Perhaps the 
> server supports USB so it can use an external USB drive as the initial 
> replacement.  There is also the possibility of replacing the disk with a 
> suitably sized disk file which is stored on some other server or an 
> independent local filesystem with enough space.  You could access 
> temporary storage on another server using iSCSI.  Server performance may 
> suck while the inferior temporary device is in place.
> 
> Whatever you do, make sure that the intermediate storage is never any 
> larger than the final device will be.
> 
> Bob
> ==
> Bob Friesenhahn
> [EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


Re: [zfs-discuss] What is the correct procedure to replace a non failed disk for another?

2008-09-03 Thread Bob Friesenhahn
On Wed, 3 Sep 2008, Jerry K wrote:

> How would this work for servers that support only (2) drives, or systems
>  that are configured to have pools of (2) drives, i.e. mirrors, and
> there is no additional space to have a new disk, as shown in the sample
> below.

You may be able to accomplish what you want by using an intermediate 
temporary disk and doubling the work (two replacements).  Perhaps the 
server supports USB so it can use an external USB drive as the initial 
replacement.  There is also the possibility of replacing the disk with 
a suitably sized disk file which is stored on some other server or an 
independent local filesystem with enough space.  You could access 
temporary storage on another server using iSCSI.  Server performance 
may suck while the inferior temporary device is in place.

Whatever you do, make sure that the intermediate storage is never any 
larger than the final device will be.
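
Bob's two-step replacement could be sketched as follows; the pool, device, and
file names are made up, and the temporary file vdev stands in for whatever
intermediate storage is available:

bash-3.00# mkfile 140g /spare/tempdisk       <-- no larger than the final disk
bash-3.00# zpool replace tank c1t1d0 /spare/tempdisk
(wait for the resilver to complete, then swap in the new physical disk)
bash-3.00# zpool replace tank /spare/tempdisk c1t1d0
bash-3.00# rm /spare/tempdisk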

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/



Re: [zfs-discuss] raidz2 group size

2008-09-03 Thread Richard Elling
Brandon High wrote:
> On Tue, Sep 2, 2008 at 2:15 PM, Richard Elling <[EMAIL PROTECTED]> wrote:
>   
>> Silly me.  It is still Monday, and I am coffee challenged.  RAIDoptimizer
>> is still an internal tool.  However, for those who are interested in the
>> results
>> of a RAIDoptimizer run for 48 disks, see:
>> http://blogs.sun.com/relling/entry/sample_raidoptimizer_output
>> 
>
>
> Richard --
>
> Is there a chance that RAIDoptimizer will be made available to the
> unwashed masses?
>   

Yes, I'm in the process of open-sourcing it.
 -- richard

> Could you post the results for a few runs with other numbers of disks,
> such as 8 (which is the number of drives I plan to use) or 12 (the
> number of drives in the 2510, etc)?
>
> -B
>
>   



[zfs-discuss] faulty sub-mirror and CKSUM errors

2008-09-03 Thread Robert Milkowski
Hello zfs-discuss,

  S10U5+patches, SPARC, Sun/qlogic 4Gb dual ported fc cards.

  ZFS does mirroring between two lun's, each is a lun comming from
  separate 6540 disk array.

  I got a kernel panic while pool was imported on one of the nodes
  (kernel panic - it's my fault). After reboot pool was imported
  however it was marked as degraded and one lun was marked as
  unavailable. If I run format on a unavailable disk I could read
  the label but it was definitely garbled. On the other node the
  slice layout for both luns (standard EFI layout for zfs) was
  ok. When I tried again I got a warning that I need to use fdisk on
  the disk... So I exported the pool and imported on the other node. Pool
  imported fine with both devices, no errors. I exported it again
  and tried to import on the first node. Same story. So I exported it,
  unconfigured devices via cfgadm, did devfsadm -vC, checked with
  format - and this time it could see proper labels/slices on both
  drives, and imported the pool without any issues. Then I ran a scrub and it
  detected over 5k CKSUM errors on that previously unavailable
  disk. I ran scrub a couple more times and saw no more errors.

  What bothers me is why I got CKSUM errors. Looks like
  something went terribly wrong outside of zfs (due to the label
  issue).


  Unfortunately I didn't have enough time to investigate it in
  more detail, I needed to quickly get it fixed.

  


-- 
Best regards,
 Robert Milkowski  mailto:[EMAIL PROTECTED]
 http://milek.blogspot.com



Re: [zfs-discuss] What is the correct procedure to replace a non failed disk for another?

2008-09-03 Thread Jerry K
How would this work for servers that support only (2) drives, or systems 
that are configured to have pools of (2) drives, i.e. mirrors, and 
there is no additional space to have a new disk, as shown in the sample 
below.

I still support lots of V490's, which hold only (2) drives.

Thanks,

Jerry



Ross wrote:
> Gaah, my command got nerfed by the forum, sorry, should have previewed.  What 
> you want is:
> # zpool replace poolname olddisk newdisk
> --



Re: [zfs-discuss] What is the correct procedure to replace a non failed disk for another?

2008-09-03 Thread Ross
Gaah, my command got nerfed by the forum, sorry, should have previewed.  What 
you want is:
# zpool replace poolname olddisk newdisk


Re: [zfs-discuss] What is the correct procedure to replace a non failed disk for another?

2008-09-03 Thread Ross
I'm pretty sure you just need the zpool replace command:
# zpool replace   

Run that for the disk you want to replace and let it resilver.  Once it's done, 
you can unconfigure the old disk with cfgadm and remove it.

If you have multiple mirror vdevs, you'll need to run the command a few times.
I expect you can replace several drives at once, but I've not tried that
personally.


Re: [zfs-discuss] What is the correct procedure to replace a non failed disk for another?

2008-09-03 Thread Enda O'Connor
Mark J. Musante wrote:
> 
> On 3 Sep 2008, at 05:20, "F. Wessels" <[EMAIL PROTECTED]> wrote:
> 
>> Hi,
>>
>> can anybody describe the correct procedure to replace a disk (in a  
>> working OK state) with a another disk without degrading my pool?
> 
> This command ought to do the trick:
> 
> zfs replace   
Slight typo above: zpool replace is the command.

By the way, what is the pool config? I assume you have a pool that 
supports this :-)

Once the disk is added, a resilver will occur, so do not take snapshots 
till it has finished, as the resilver will be restarted; this is fixed 
in snv_94, though.

Enda
> 
> The type of pool doesn't matter.
> 
> 
> Regards,
> markm
> 


Re: [zfs-discuss] What is the correct procedure to replace a non failed disk for another?

2008-09-03 Thread Mark J. Musante


On 3 Sep 2008, at 05:20, "F. Wessels" <[EMAIL PROTECTED]> wrote:

> Hi,
>
> can anybody describe the correct procedure to replace a disk (in a  
> working OK state) with another disk without degrading my pool?

This command ought to do the trick:

zfs replace   

The type of pool doesn't matter.


Regards,
markm



[zfs-discuss] What is the correct procedure to replace a non failed disk for another?

2008-09-03 Thread F. Wessels
Hi,

can anybody describe the correct procedure to replace a disk (in a working OK 
state) with another disk without degrading my pool?

For a mirror I thought of adding the spare; you'll get a three-device mirror. 
Let it resilver. Finally, remove the disk I want. 
But what would be the correct commands?
And what if I've got a pool consisting of multiple mirror vdev's?

And what about a raid-z or raid-z2 vdev? I can pull a disk and let the hot spare 
take its place. But that degrades the pool. I want to mirror the two disks and, 
when done, remove the source disk. This way I'll never have a degraded pool. Or 
am I asking for a new zpool feature?
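
For the mirror case, the attach-then-detach sequence Frederik describes maps
onto existing zpool commands; the pool and device names here are hypothetical:

bash-3.00# zpool attach tank c1t0d0 c2t0d0   <-- adds a third side to the mirror
(wait until zpool status shows the resilver is complete)
bash-3.00# zpool detach tank c1t0d0          <-- drop the disk being replaced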

Thanks,

Frederik


Re: [zfs-discuss] raidz2 group size

2008-09-03 Thread mike
Yeah, I'm looking at using 10 disks or 16 disks (depending on which
chassis I get) - and I would like reasonable redundancy (not HA-crazy
redundancy where I can suffer tons of failures, I can power this down
and replace disks, it's a home server) and maximize the amount of
usable space.

Putting up some page somewhere (if possible), or just exposing the
algorithms so maybe one of us can try to hack together a page, would be
cool. (I don't have OpenOffice/StarOffice, and admit I am too lazy to
download it to examine the file on Windows.)

On 9/2/08, Brandon High <[EMAIL PROTECTED]> wrote:
> On Tue, Sep 2, 2008 at 2:15 PM, Richard Elling <[EMAIL PROTECTED]> wrote:
> > Silly me.  It is still Monday, and I am coffee challenged.  RAIDoptimizer
> > is still an internal tool.  However, for those who are interested in the
> > results
> > of a RAIDoptimizer run for 48 disks, see:
> > http://blogs.sun.com/relling/entry/sample_raidoptimizer_output
>
>
> Richard --
>
> > Is there a chance that RAIDoptimizer will be made available to the
> unwashed masses?
>
> Could you post the results for a few runs with other numbers of disks,
> such as 8 (which is the number of drives I plan to use) or 12 (the
> number of drives in the 2510, etc)?
>
> -B
>
> --
> Brandon High [EMAIL PROTECTED]
> "You can't blow things up with schools and hospitals." -Stephen Dailey