Re: [zfs-discuss] Areca 1160 & ZFS

2009-05-07 Thread Gregory Skelton
Thanks for all your help; changing the mode from RAID to JBOD did the 
trick. I was hoping to have RAID 1+0 for the OS, but I guess with Areca 
it's all or nothing.
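
(For the record, a minimal sketch of the kind of data pool that could follow, assuming the fourteen non-OS disks show up in format as c3t2d0 through c3t15d0; the device names and the two raidz2 vdevs are illustrative assumptions, not the layout actually used on this box.)

# Example only: an x4500-style data pool built from the 14 pass-through
# disks as two 7-disk raidz2 vdevs, followed by a health check
zpool create tank \
    raidz2 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0 c3t8d0 \
    raidz2 c3t9d0 c3t10d0 c3t11d0 c3t12d0 c3t13d0 c3t14d0 c3t15d0
zpool status tank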


Cheers,
Gregory



On Fri, 8 May 2009, James C. McPherson wrote:


On Thu, 07 May 2009 16:59:01 -0400
milosz  wrote:


With pass-through disks on Areca controllers you have to set the LUN ID (I
believe) using the volume command.  When you issue a volume info, your disk
IDs should look like this (if you want Solaris to see the disks):

0/1/0
0/2/0
0/3/0
0/4/0
etc.

The middle number there (again, I think that's the LUN ID) is what you need
to set manually for each disk.  It's actually my #1 peeve with using Areca
with Solaris.


Bug 6784370 (enhance arcmsr to support auto-enumeration)
solves the "must add LUNs to sd.conf" problem.

If you're not running at least snv_107, then you will need
to add entries to your /kernel/drv/sd.conf file, regenerate
your boot archive, and then reboot.

The format of the entries is as follows:


name="sd" parent="arcmsr" target=2 lun=0;
name="sd" parent="arcmsr" target=3 lun=0;
name="sd" parent="arcmsr" target=4 lun=0;
name="sd" parent="arcmsr" target=5 lun=0;

etc etc. Oh, and do make sure you're using JBOD mode if
that's available.


James










James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog
Kernel Conference Australia - http://au.sun.com/sunnews/events/2009/kernel



--
Gregory R. Skelton        Phone: (414) 229-2678 (Office)
System Administrator             (920) 246-4415 (Cell)
1900 E. Kenwood Blvd      Email: gskel...@gravity.phys.uwm.edu
University of Wisconsin   AIM/ICQ: gregor159
Milwaukee, WI 53201       http://www.lsc-group.phys.uwm.edu/~gskelton
Emergency Email: grego...@vzw.blackberry.net
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Areca 1160 & ZFS

2009-05-07 Thread James C. McPherson
On Thu, 07 May 2009 16:59:01 -0400
milosz  wrote:

> With pass-through disks on Areca controllers you have to set the LUN ID (I
> believe) using the volume command.  When you issue a volume info, your disk
> IDs should look like this (if you want Solaris to see the disks):
> 
> 0/1/0
> 0/2/0
> 0/3/0
> 0/4/0
> etc.
> 
> The middle number there (again, I think that's the LUN ID) is what you need
> to set manually for each disk.  It's actually my #1 peeve with using Areca
> with Solaris.

Bug 6784370 (enhance arcmsr to support auto-enumeration)
solves the "must add LUNs to sd.conf" problem.

If you're not running at least snv_107, then you will need
to add entries to your /kernel/drv/sd.conf file, regenerate
your boot archive, and then reboot.

The format of the entries is as follows:


name="sd" parent="arcmsr" target=2 lun=0;
name="sd" parent="arcmsr" target=3 lun=0;
name="sd" parent="arcmsr" target=4 lun=0;
name="sd" parent="arcmsr" target=5 lun=0;
 
etc etc. Oh, and do make sure you're using JBOD mode if
that's available.
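
(A rough sketch of the whole sequence on a pre-snv_107 build, assuming a 16-bay box where targets 0 and 1 carry the boot volume and targets 2 through 15 are the pass-through disks; adjust the target range to whatever the controller actually exposes.)

# 1. Append one entry per pass-through target to /kernel/drv/sd.conf,
#    in the format shown above:
#      name="sd" parent="arcmsr" target=2 lun=0;
#        ...
#      name="sd" parent="arcmsr" target=15 lun=0;
# 2. Regenerate the boot archive and reboot so sd re-reads its configuration:
bootadm update-archive
init 6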


James







James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog
Kernel Conference Australia - http://au.sun.com/sunnews/events/2009/kernel
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Areca 1160 & ZFS

2009-05-07 Thread milosz
With pass-through disks on Areca controllers you have to set the LUN ID (I
believe) using the volume command.  When you issue a volume info, your disk
IDs should look like this (if you want Solaris to see the disks):

0/1/0
0/2/0
0/3/0
0/4/0
etc.

The middle number there (again, I think that's the LUN ID) is what you need
to set manually for each disk.  It's actually my #1 peeve with using Areca
with Solaris.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Areca 1160 & ZFS

2009-05-07 Thread Mark J Musante

On Thu, 7 May 2009, Mike Gerdts wrote:



Perhaps you have changed the configuration of the array since the last 
reconfiguration boot.  If you run "devfsadm" and then run format, does it 
see more disks?


Another thing to check is to see if the controller has a "jbod" mode as 
opposed to passthrough.



Regards,
markm
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Areca 1160 & ZFS

2009-05-07 Thread Mike Gerdts
On Thu, May 7, 2009 at 3:29 PM, Gregory Skelton
 wrote:
> Hi Everyone,
>
> I want to start out by saying ZFS has been a lifesaver to me and the
> scientific collaboration I work for. I can't imagine working with the TBs
> of data that we do without the snapshots or the ease of moving the data
> from one pool to another.
>
> Right now I'm trying to set up a whitebox with OpenSolaris. It has an Areca
> 1160 RAID controller (latest firmware), a SuperMicro H8SSL-I mobo, and a
> SuperMicro IPMI card. I haven't been working with Solaris for all that long,
> and I wanted to create a zpool similar to our x4500's. The documentation
> says to use the format command to locate the disks.
>
> OpenSolaris lives on a two-disk mirrored RAID volume, and I was hoping I
> could have the remaining disks passed through so that ZFS could manage the
> zpool. What am I doing wrong here that I can't see all the disks? Or do I
> have to use a RAID 5 volume underneath the zpool?
>
> Any and all help is appreciated.
> Thanks,
> Gregory
>
>
> r...@nfs0009:~# format
> Searching for disks...done
>
>
> AVAILABLE DISK SELECTIONS:
>        0. c3t0d0 
>
> /p...@0,0/pci1166,3...@1/pci1166,1...@d/pci8086,3...@1/pci17d3,1...@e/s...@0,0
>        1. c3t1d0 
>
> /p...@0,0/pci1166,3...@1/pci1166,1...@d/pci8086,3...@1/pci17d3,1...@e/s...@1,0
> Specify disk (enter its number):
>
>
> r...@nfs0009:~# ./cli64 disk info
>  # Ch# ModelName                       Capacity  Usage
> ===
>   1  1  WDC WD4000YS-01MPB1              400.1GB  Raid Set # 00
>   2  2  WDC WD4000YS-01MPB1              400.1GB  Raid Set # 00
>   3  3  WDC WD4000YS-01MPB1              400.1GB  Pass Through
>   4  4  WDC WD4000YS-01MPB1              400.1GB  Pass Through
>   5  5  WDC WD4000YS-01MPB1              400.1GB  Pass Through
>   6  6  WDC WD4000YS-01MPB1              400.1GB  Pass Through
>   7  7  WDC WD4000YS-01MPB1              400.1GB  Pass Through
>   8  8  WDC WD4000YS-01MPB1              400.1GB  Pass Through
>   9  9  WDC WD4000YS-01MPB1              400.1GB  Pass Through
>  10 10  WDC WD4000YS-01MPB1              400.1GB  Pass Through
>  11 11  WDC WD4000YS-01MPB1              400.1GB  Pass Through
>  12 12  WDC WD4000YS-01MPB1              400.1GB  Pass Through
>  13 13  WDC WD4000YS-01MPB1              400.1GB  Pass Through
>  14 14  WDC WD4000YS-01MPB1              400.1GB  Pass Through
>  15 15  WDC WD4000YS-01MPB1              400.1GB  Pass Through
>  16 16  WDC WD4000YS-01MPB1              400.1GB  Pass Through
> ===
> GuiErrMsg<0x00>: Success.
> r...@nfs0009:~#

Perhaps you have changed the configuration of the array since the last
reconfiguration boot.  If you run "devfsadm" and then run format, does it
see more disks?
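
(A minimal sketch of that check; devfsadm's -C flag prunes stale /dev links and -v prints what changed, so running it before re-checking format is harmless.)

# Rebuild the /dev disk links, then see whether the new targets appear
devfsadm -Cv
format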

-- 
Mike Gerdts
http://mgerdts.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Areca 1160 & ZFS

2009-05-07 Thread Gregory Skelton

Hi Everyone,

I want to start out by saying ZFS has been a lifesaver to me and the 
scientific collaboration I work for. I can't imagine working with the TBs of 
data that we do without the snapshots or the ease of moving the data from one 
pool to another.


Right now I'm trying to set up a whitebox with OpenSolaris. It has an Areca 
1160 RAID controller (latest firmware), a SuperMicro H8SSL-I mobo, and a 
SuperMicro IPMI card. I haven't been working with Solaris for all that long, 
and I wanted to create a zpool similar to our x4500's. The documentation 
says to use the format command to locate the disks.


OpenSolaris lives on a two-disk mirrored RAID volume, and I was hoping I could 
have the remaining disks passed through so that ZFS could manage the zpool. 
What am I doing wrong here that I can't see all the disks? Or do I have to 
use a RAID 5 volume underneath the zpool?


Any and all help is appreciated.
Thanks,
Gregory


r...@nfs0009:~# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
0. c3t0d0 

/p...@0,0/pci1166,3...@1/pci1166,1...@d/pci8086,3...@1/pci17d3,1...@e/s...@0,0
1. c3t1d0 

/p...@0,0/pci1166,3...@1/pci1166,1...@d/pci8086,3...@1/pci17d3,1...@e/s...@1,0
Specify disk (enter its number):


r...@nfs0009:~# ./cli64 disk info
  # Ch# ModelName   Capacity  Usage
===
   1  1  WDC WD4000YS-01MPB1  400.1GB  Raid Set # 00
   2  2  WDC WD4000YS-01MPB1  400.1GB  Raid Set # 00
   3  3  WDC WD4000YS-01MPB1  400.1GB  Pass Through
   4  4  WDC WD4000YS-01MPB1  400.1GB  Pass Through
   5  5  WDC WD4000YS-01MPB1  400.1GB  Pass Through
   6  6  WDC WD4000YS-01MPB1  400.1GB  Pass Through
   7  7  WDC WD4000YS-01MPB1  400.1GB  Pass Through
   8  8  WDC WD4000YS-01MPB1  400.1GB  Pass Through
   9  9  WDC WD4000YS-01MPB1  400.1GB  Pass Through
  10 10  WDC WD4000YS-01MPB1  400.1GB  Pass Through
  11 11  WDC WD4000YS-01MPB1  400.1GB  Pass Through
  12 12  WDC WD4000YS-01MPB1  400.1GB  Pass Through
  13 13  WDC WD4000YS-01MPB1  400.1GB  Pass Through
  14 14  WDC WD4000YS-01MPB1  400.1GB  Pass Through
  15 15  WDC WD4000YS-01MPB1  400.1GB  Pass Through
  16 16  WDC WD4000YS-01MPB1  400.1GB  Pass Through
===
GuiErrMsg<0x00>: Success.
r...@nfs0009:~#


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss