Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Will Murnane
On Thu, Jul 16, 2009 at 21:30, Rob Logan wrote:
>> c4                             scsi-bus     connected    configured   unknown
>> c4::dsk/c4t15d0                disk         connected    configured   unknown
>  :
>> c4::dsk/c4t33d0                disk         connected    configured   unknown
>> c4::es/ses0                    ESI          connected    configured   unknown
>
> Thanks! So SATA disks show up as JBOD in IT mode. Is there some magic that
> load balances the 4 SAS ports, as this shows up as one "scsi-bus"?
Hypothetically, yes.  In practical terms, though, I've seen more than
300 MB/s of I/O over it:
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
-----------  -----  -----  -----  -----  -----  -----
data1        1.06T  1.21T      1  1.61K  2.49K   200M
  mirror      460G   236G      0    522  1.15K  63.8M
    c4t18d0      -      -      0    518  6.38K  63.6M
    c4t21d0      -      -      0    518  12.8K  63.8M
  mirror      467G   229G      0    533    306  64.8M
    c4t23d0      -      -      0    523  6.38K  64.3M
    c4t25d0      -      -      0    529      0  65.0M
  mirror      153G   775G      0    597  1.05K  71.8M
    c4t20d0      -      -      0    589  12.8K  72.5M
    c4t22d0      -      -      0    584      0  71.8M
-----------  -----  -----  -----  -----  -----  -----

Note that the pool is only doing 200 MB/s, but the individual devices
are doing a total of about 400 MB/s (mirrored writes hit both sides of
each mirror).  It's not possible to put more than 300 MB/s (one 3 Gb/s
lane) into or out of a single device, so there's no "link aggregation"
to worry about.
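
If you want to reproduce this kind of measurement, the output above is just
'zpool iostat -v' sampled while the pool is busy; something along these lines
should do it (the pool name and mountpoint are from this setup, the rest is
generic):

dd if=/dev/zero of=/data1/bigfile bs=1024k count=20000 &   # write load; path assumes the pool mounts at /data1
zpool iostat -v data1 10                                   # per-vdev and per-disk throughput every 10 seconds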

Will
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Will Murnane
On Thu, Jul 16, 2009 at 21:35, Adam Sherman wrote:
> I'm looking at the LSI SAS3801X because it seems to be what Sun OEMs for my
> X4100s:
If you're given the choice (i.e., you have the M2 revision), PCI
Express is probably the bus to go with.  It's basically the same card,
but on a faster bus.  But there's nothing wrong with the PCI-X
version.
http://www.lsi.com/storage_home/products_home/host_bus_adapters/sas_hbas/lsisas3801e/index.html
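
If you're not sure which slot types a given chassis actually has, prtdiag
usually shows them (output format varies by platform; the grep is only a
convenience):

prtdiag -v | grep -i pci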

> $280 or so, looks like. Might be overkill for me though.
The 3442X-R is a little cheaper: $205 from Provantage.
http://www.provantage.com/lsi-logic-lsi00164~7LSIG06K.htm

Will
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Adam Sherman

On 16-Jul-09, at 21:17 , Will Murnane wrote:

>> Good to hear. What HBA(s) are you using against it?
>
> LSI 3442E-R.  It's connected through a Supermicro cable, CBL-0168L, so
> it can be attached via an external cable.



I'm looking at the LSI SAS3801X because it seems to be what Sun OEMs  
for my X4100s:


http://sunsolve.sun.com/handbook_private/validateUser.do?target=Devices/SCSI/SCSI_PCIX_SAS_SATA_HBA

$280 or so, looks like. Might be overkill for me though.

A.

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Rob Logan

> c4                             scsi-bus     connected    configured   unknown
> c4::dsk/c4t15d0                disk         connected    configured   unknown
 :
> c4::dsk/c4t33d0                disk         connected    configured   unknown
> c4::es/ses0                    ESI          connected    configured   unknown

Thanks! So SATA disks show up as JBOD in IT mode. Is there some magic that
load balances the 4 SAS ports, as this shows up as one "scsi-bus"?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Will Murnane
On Thu, Jul 16, 2009 at 21:16, Rob Logan wrote:
> I'm confused, I thought expanders only worked with SAS disks, and SATA disks
> took an entire SAS port. Could someone post an output showing more than 4 SATA
> drives across one InfiniBand cable (4 SAS ports)?
>
> 2 % cfgadm | grep sata
> sata1/0::dsk/c9t0d0            cd/dvd       connected    configured   ok
> sata1/1::dsk/c9t1d0            disk         connected    configured   ok
> sata1/2::dsk/c9t2d0            disk         connected    configured   ok
> sata1/3                        sata-port    empty        unconfigured ok
> sata1/4::dsk/c9t4d0            disk         connected    configured   ok
> sata1/5                        sata-port    empty        unconfigured ok
> sata2/0::dsk/c7t0d0            disk         connected    configured   ok
> sata2/1::dsk/c7t1d0            disk         connected    configured   ok
> sata2/2::dsk/c7t2d0            disk         connected    configured   ok
> sata2/3                        sata-port    empty        unconfigured ok
> sata2/4::dsk/c7t4d0            disk         connected    configured   ok
> sata2/5::dsk/c7t5d0            disk         connected    configured   ok
> sata2/6                        sata-port    empty        unconfigured ok
> sata2/7                        sata-port    empty        unconfigured ok
> sata3/0::dsk/c8t0d0            disk         connected    configured   ok
> sata3/1::dsk/c8t1d0            disk         connected    configured   ok
> sata3/2::dsk/c8t2d0            disk         connected    configured   ok
> sata3/3                        sata-port    empty        unconfigured ok
> sata3/4::dsk/c8t4d0            disk         connected    configured   ok
> sata3/5::dsk/c8t5d0            disk         connected    configured   ok
> sata3/6                        sata-port    empty        unconfigured ok
> sata3/7                        sata-port    empty        unconfigured ok
Here's the relevant part of cfgadm -al on our machine.  The disks are all SATA.

c4                             scsi-bus     connected    configured   unknown
c4::dsk/c4t15d0                disk         connected    configured   unknown
c4::dsk/c4t17d0                disk         connected    configured   unknown
c4::dsk/c4t18d0                disk         connected    configured   unknown
c4::dsk/c4t19d0                disk         connected    configured   unknown
c4::dsk/c4t20d0                disk         connected    configured   unknown
c4::dsk/c4t21d0                disk         connected    configured   unknown
c4::dsk/c4t22d0                disk         connected    configured   unknown
c4::dsk/c4t23d0                disk         connected    configured   unknown
c4::dsk/c4t24d0                disk         connected    configured   unknown
c4::dsk/c4t25d0                disk         connected    configured   unknown
c4::dsk/c4t26d0                disk         connected    configured   unknown
c4::dsk/c4t27d0                disk         connected    configured   unknown
c4::dsk/c4t28d0                disk         connected    configured   unknown
c4::dsk/c4t29d0                disk         connected    configured   unknown
c4::dsk/c4t30d0                disk         connected    configured   unknown
c4::dsk/c4t31d0                disk         connected    configured   unknown
c4::dsk/c4t32d0                disk         connected    configured   unknown
c4::dsk/c4t33d0                disk         connected    configured   unknown
c4::es/ses0                    ESI          connected    configured   unknown

Will
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Will Murnane
On Thu, Jul 16, 2009 at 20:20, Adam Sherman wrote:
> Ever seen/read about anyone use this kind of setup for HA clustering? I'm
> getting ideas about Open HA/Solaris Cluster on top of this setup with two
> systems connecting, that would rock!
It's possible that this would work with homogeneous hardware, but I
tried with another LSI-based expander and SATA disks, and had no luck.
 Perhaps SAS is necessary?

>> We have a SC846E1 at work; it's the 24-disk, 4u version of the 826e1.
>> It's working quite nicely as a SATA JBOD enclosure.  We'll probably be
>> buying another in the coming year to have more capacity.
>
> Good to hear. What HBA(s) are you using against it?
LSI 3442E-R.  It's connected through a Supermicro cable, CBL-0168L, so
it can be attached via an external cable.  There's a card needed,
CSE-PTJBOD-CB1, that allows the case to run without a motherboard in
it.  There's no monitoring for the power supplies, but I built one for
it; I can provide schematics and suggested part numbers if you're
interested.

>> We have a SC846E1 at work; it's the 24-disk, 4u version of the 826e1.
>> It's working quite nicely as a SATA JBOD enclosure.  We'll probably be
>> buying another in the coming year to have more capacity.
>
>
> I should also ask: any other solutions I should have a look at to get >=12
> SATA disks externally attached to my systems?
This was the best solution we found for the money.  The 826 is about
$750, while the 846 is $1100 shipped (wiredzone.com).  Per disk, the
846 is almost $20 cheaper.  If you only care for 12 disks, then one
might as well not spend the extra money, but if there's potential for
expansion it's a wise investment.
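
Roughly, the per-bay arithmetic works out as:

  826:  $750 / 12 bays  ~=  $62.50 per bay
  846: $1100 / 24 bays  ~=  $45.83 per bay
  difference            ~=  $17 per bay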

Will
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Rob Logan

>> We have a SC846E1 at work; it's the 24-disk, 4u version of the 826e1.
>> It's working quite nicely as a SATA JBOD enclosure.

> use the LSI SAS 3442e which also gives you an external SAS port.

I'm confused, I thought expanders only worked with SAS disks, and SATA disks
took an entire SAS port. Could someone post an output showing more than 4 SATA
drives across one InfiniBand cable (4 SAS ports)?

2 % cfgadm | grep sata
sata1/0::dsk/c9t0d0            cd/dvd       connected    configured   ok
sata1/1::dsk/c9t1d0            disk         connected    configured   ok
sata1/2::dsk/c9t2d0            disk         connected    configured   ok
sata1/3                        sata-port    empty        unconfigured ok
sata1/4::dsk/c9t4d0            disk         connected    configured   ok
sata1/5                        sata-port    empty        unconfigured ok
sata2/0::dsk/c7t0d0            disk         connected    configured   ok
sata2/1::dsk/c7t1d0            disk         connected    configured   ok
sata2/2::dsk/c7t2d0            disk         connected    configured   ok
sata2/3                        sata-port    empty        unconfigured ok
sata2/4::dsk/c7t4d0            disk         connected    configured   ok
sata2/5::dsk/c7t5d0            disk         connected    configured   ok
sata2/6                        sata-port    empty        unconfigured ok
sata2/7                        sata-port    empty        unconfigured ok
sata3/0::dsk/c8t0d0            disk         connected    configured   ok
sata3/1::dsk/c8t1d0            disk         connected    configured   ok
sata3/2::dsk/c8t2d0            disk         connected    configured   ok
sata3/3                        sata-port    empty        unconfigured ok
sata3/4::dsk/c8t4d0            disk         connected    configured   ok
sata3/5::dsk/c8t5d0            disk         connected    configured   ok
sata3/6                        sata-port    empty        unconfigured ok
sata3/7                        sata-port    empty        unconfigured ok
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Bob Friesenhahn

On Thu, 16 Jul 2009, Adam Sherman wrote:


I should also ask: any other solutions I should have a look at to get >=12 
SATA disks externally attached to my systems?


Depending on how much failure resiliency you want and how you plan to 
configure your pool, you may be better off with two independent disk 
trays with 12 disks each.  For example, if you were to use mirrors, 
you could split the mirrors across the disk trays.  If one tray fails, 
then your system still works.
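
For example, if the two trays enumerate as controllers c1 and c2 (device
names here are hypothetical), a split-mirror layout would be created along
these lines:

zpool create tank \
    mirror c1t0d0 c2t0d0 \
    mirror c1t1d0 c2t1d0 \
    mirror c1t2d0 c2t2d0
# ...and so on for the remaining bays; each vdev keeps one half in each tray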


If you are planning to use raidz1 or raidz2 then there is likely no 
benefit to going with two smaller trays.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Adam Sherman

On 16-Jul-09, at 18:01 , Will Murnane wrote:

> We have a SC846E1 at work; it's the 24-disk, 4u version of the 826e1.
> It's working quite nicely as a SATA JBOD enclosure.  We'll probably be
> buying another in the coming year to have more capacity.

I should also ask: any other solutions I should have a look at to get >=12 
SATA disks externally attached to my systems?


Thanks!

A.

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Adam Sherman

On 16-Jul-09, at 20:52 , James C. McPherson wrote:

>> Another thought in the same vein, I notice many of these systems
>> support "SES-2" for management. Does this do anything useful under
>> Solaris?
>
> We've got some integration between FMA and SES devices which
> allows us to do some management tasks.

So that would allow FMA to detect SATA disk failures then?

> libtopo, libscsi and libses are the main methods of getting
> that information out. For an example outside FMA, you could
> have a look into the ses/sgen plugin from pluggable fwflash.
>
> Is there anything you're specifically interested in wrt management
> uses of SES?

I'm really just exploring. Where can I read about how FMA is going to 
help with failures in my setup?


Thanks,

A.

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread James C. McPherson
On Thu, 16 Jul 2009 20:26:17 -0400
Adam Sherman  wrote:

> Another thought in the same vein, I notice many of these systems  
> support "SES-2" for management. Does this do anything useful under  
> Solaris?

We've got some integration between FMA and SES devices which
allows us to do some management tasks.

libtopo, libscsi and libses are the main methods of getting
that information out. For an example outside FMA, you could
have a look into the ses/sgen plugin from pluggable fwflash.
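
As a concrete starting point, a few commands that should already exist on a
current Solaris/OpenSolaris install (the grep pattern is only illustrative):

fmadm faulty                          # anything FMA has currently diagnosed as faulted
fmdump -eV | tail -60                 # raw error telemetry, including disk/transport ereports
/usr/lib/fm/fmd/fmtopo | grep -i bay  # topology snapshot; enclosure bays show up here if SES is wired in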

Is there anything you're specifically interested in wrt management
uses of SES?

thanks,
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog
Kernel Conference Australia - http://au.sun.com/sunnews/events/2009/kernel
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Jonathan Borden
>> We have a SC846E1 at work; it's the 24-disk, 4u version of the 826e1.
>> It's working quite nicely as a SATA JBOD enclosure.  We'll probably be
>> buying another in the coming year to have more capacity.
>
> Good to hear. What HBA(s) are you using against it?

I've got one too and it works great. I use the LSI SAS 3442e which also gives 
you an external SAS port. You don't need a fancy HBA with onboard RAID. 
Configure to IT mode.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Adam Sherman
Another thought in the same vein, I notice many of these systems  
support "SES-2" for management. Does this do anything useful under  
Solaris?


Sorry for these questions, I seem to be having a tough time locating  
relevant information on the web.


Thanks,

A.

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Adam Sherman

On 16-Jul-09, at 18:01 , Will Murnane wrote:

The "direct attached" backplane is right out.  This means that each
drive has its own individual sata port, meaning you'd need three SAS
wide ports just to connect the drives.

The single-expander version has one LSI SAS expander, which connects
to all the drives and has two "upstream" ports.  This means you plug
in one or two servers directly, and they can both see all the disks.
I've only tested this with one-server configurations.  It also has one
"downstream" port which you could use to daisy-chain more expanders
(i.e., more 826/846 cases) onto the same server.


That makes things a heck of a lot clearer, thank you very much for  
taking the time to explain!


Ever seen/read about anyone use this kind of setup for HA clustering?  
I'm getting ideas about Open HA/Solaris Cluster on top of this setup  
with two systems connecting, that would rock!



We have a SC846E1 at work; it's the 24-disk, 4u version of the 826e1.
It's working quite nicely as a SATA JBOD enclosure.  We'll probably be
buying another in the coming year to have more capacity.


Good to hear. What HBA(s) are you using against it?


Thanks for pointing to relevant documentation.

The manual for the Supermicro cases [1, 2] does a pretty good job IMO
explaining the different options.  See page D-14 and on in the 826
manual, or page D-11 and on in the 846 manual.



I'll read though that, thanks for the detailed pointers.

A.

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris live CD that supports ZFS root mount for fs fixes

2009-07-16 Thread Peter Pickford
Will

boot -F failsafe

work?

2009/7/16 Matt Weatherford :
>
> Hi,
>
> I borked a libc.so library file on my solaris 10 server (zfs root) - was
> wondering if there
> is a good live CD that will be able to mount my ZFS root fs so that I can
> make this
> quick repair on the system boot drive and get back running again.  Are all
> ZFS
> roots created equal? Its an x86 solaris 10 box. If I boot a belenix live CD
> will it be
> able to mount this ZFS root?
>
> Thanks,
>
> Matt
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris live CD that supports ZFS root mount for fs fixes

2009-07-16 Thread Jorgen Lundman
We used the OpenSolaris preview 2010.02 DVD from genunix.org to fix our 
broken ZFS boot after attempting to clone.  It had enough of the zpool and 
zfs tools to import, re-mount etc.


Lund


Matt Weatherford wrote:


Hi,

I borked a libc.so library file on my solaris 10 server (zfs root) - was 
wondering if there
is a good live CD that will be able to mount my ZFS root fs so that I 
can make this
quick repair on the system boot drive and get back running again.  Are 
all ZFS
roots created equal? Its an x86 solaris 10 box. If I boot a belenix live 
CD will it be

able to mount this ZFS root?

Thanks,

Matt

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



--
Jorgen Lundman   | 
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
Japan| +81 (0)3 -3375-1767  (home)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris live CD that supports ZFS root mount for fs fixes

2009-07-16 Thread Ian Collins

Matt Weatherford wrote:


> Hi,
>
> I borked a libc.so library file on my solaris 10 server (zfs root) - 
> was wondering if there is a good live CD that will be able to mount my 
> ZFS root fs so that I can make this quick repair on the system boot 
> drive and get back running again.  Are all ZFS roots created equal? 
> Its an x86 solaris 10 box. If I boot a belenix live CD will it be 
> able to mount this ZFS root?

It should, as long as the pool version is the same or older than the 
version supported by the live CD.  If you want to be cautious, mount 
your pool read-only first.
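
Once things look sane, one possible repair sequence from the live/failsafe
shell is roughly the following; the pool and boot-environment names are
guesses, so check what 'zpool import' and 'zfs list' actually report first:

zpool import -f -R /a rpool           # import the root pool under an alternate root
zfs mount rpool/ROOT/s10_be           # mount the damaged boot environment at /a (BE name assumed)
cp /lib/libc.so.1 /a/lib/libc.so.1    # put a good copy of libc back (from the live media or a backup)
bootadm update-archive -R /a          # rebuild the boot archive, just in case
zpool export rpool                    # clean up, then reboot from disk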


--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Solaris live CD that supports ZFS root mount for fs fixes

2009-07-16 Thread Matt Weatherford


Hi,

I borked a libc.so library file on my Solaris 10 server (ZFS root) - I was 
wondering if there is a good live CD that will be able to mount my ZFS root 
fs so that I can make this quick repair on the system boot drive and get 
back running again.  Are all ZFS roots created equal? It's an x86 Solaris 10 
box. If I boot a BeleniX live CD, will it be able to mount this ZFS root?

Thanks,

Matt

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Will Murnane
On Thu, Jul 16, 2009 at 17:02, Adam Sherman wrote:
> Hello All,
>
> I'm just starting to think about building some mass-storage arrays and am
> looking to better understand some of the components involved.
>
> For example, the Supermicro SC826 series of systems is available with three
> backplanes:
>
> 1. SAS / SATA Expander Backplane with single LSI SASX28 Expander Chip
> 2. SAS / SATA Expander Backplane with dual LSI SASX28 Expander Chips
> 3. SAS / SATA Direct Attached Backplane
>
> Assuming I am using this as an external array, connected to a server via SAS,
> how do these fit into my topology? Expander, dual-expanders and no expander?
> Huh?
The "direct attached" backplane is right out.  This means that each
drive has its own individual SATA port, meaning you'd need three SAS
wide ports just to connect the drives.

The single-expander version has one LSI SAS expander, which connects
to all the drives and has two "upstream" ports.  This means you plug
in one or two servers directly, and they can both see all the disks.
I've only tested this with one-server configurations.  It also has one
"downstream" port which you could use to daisy-chain more expanders
(i.e., more 826/846 cases) onto the same server.

We have a SC846E1 at work; it's the 24-disk, 4u version of the 826e1.
It's working quite nicely as a SATA JBOD enclosure.  We'll probably be
buying another in the coming year to have more capacity.

The dual-expander version has two LSI SAS expanders.  You need
dual-port SAS drives (not SATA).  This lets you have two paths all the
way to each drive; even if one expander fails (this seems pretty
unlikely to me, but if you're shooting for many nines it's worth
considering) you still have access to the disks.
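
One practical note if you do go dual-expander: with two paths to each
(dual-port SAS) drive, Solaris should have MPxIO enabled so the two paths
collapse into a single device. A hedged sketch; check stmsboot(1M) for the
exact options your release and HBA support:

stmsboot -e          # enable MPxIO on supported HBAs (it will ask to reboot)
# after the reboot:
mpathadm list lu     # each LUN should show two operational paths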

> Thanks for pointing to relevant documentation.
The manual for the Supermicro cases [1, 2] does a pretty good job IMO
explaining the different options.  See page D-14 and on in the 826
manual, or page D-11 and on in the 846 manual.

Will

[1]: http://supermicro.com/manuals/chassis/2U/SC826.pdf
[2]: http://supermicro.com/manuals/chassis/tower/SC846.pdf
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Adam Sherman

Hello All,

I'm just starting to think about building some mass-storage arrays and  
am looking to better understand some of the components involved.


For example, the Supermicro SC826 series of systems is available with  
three backplanes:


1. SAS / SATA Expander Backplane with single LSI SASX28 Expander Chip
2. SAS / SATA Expander Backplane with dual LSI SASX28 Expander Chips
3. SAS / SATA Direct Attached Backplane

Assuming I am using this as an external array, connected to a server via  
SAS, how do these fit into my topology? Expander, dual-expanders and  
no expander? Huh?


Thanks for pointing to relevant documentation.

A.


--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] permission problem using ZFS send and zfs receive across SSH

2009-07-16 Thread Ian Collins

Alexander Skwar wrote:
> Hi!
>
> On Thu, Jul 16, 2009 at 14:00, Cyril Ducrocq wrote:
>> moreover i added an "on the fly" compression using gzip
>
> You can dump the gzip|gunzip, if you use SSH on-the-fly compression, using
>
>   ssh -C

But test first, using compression is likely to slow down the transfer 
unless you have a very slow connection.


--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-16 Thread Bob Friesenhahn
I have received email that Sun CR numbers 6861397 & 6859997 have been 
created to get this performance problem fixed.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS pegging the system

2009-07-16 Thread Jeff Haferman

We have an SGE array task that we wish to run with elements 1-7.  
Each task generates output and takes roughly 20 seconds to 4 minutes  
of CPU time.  We're doing them on a machine with about 144 8-core nodes,
and we've divvied the job up to do about 500 at a time.

So, we have 500 jobs at a time writing to the same ZFS partition.

What is the best way to collect the results of the task? Currently we  
are having each task write to STDOUT and then are combining the  
results. This nails our ZFS partition to the wall and kills  
performance for other users of the system.  We tried setting up a  
MySQL server to receive the results, but it couldn't take 1000  
simultaneous inbound connections.

Jeff

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can't offline a RAID-Z2 device: "no valid replica"

2009-07-16 Thread Ross
Great news, thanks Tom!
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-16 Thread James Andrewartha
On Sun, 2009-07-12 at 16:38 -0500, Bob Friesenhahn wrote:
> In order to raise visibility of this issue, I invite others to see if 
> they can reproduce it in their ZFS pools.  The script at
> 
> http://www.simplesystems.org/users/bfriesen/zfs-discuss/zfs-cache-test.ksh

Here's the results from two machines, the first has 12x400MHz US-II
CPUs, 11GB of RAM and the disks are 18GB 10krpm SCSI in a split D1000:

System Configuration:  Sun Microsystems  sun4u 8-slot Sun Enterprise
4000/5000
System architecture: sparc
System release level: 5.11 snv_101
CPU ISA list: sparcv9+vis sparcv9 sparcv8plus+vis sparcv8plus sparcv8 
sparcv8-fsmuld sparcv7 sparc

Pool configuration:
  pool: space
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: scrub completed after 0h22m with 0 errors on Mon Jul 13 17:18:55 2009
config:

        NAME         STATE     READ WRITE CKSUM
        space        ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c0t3d0   ONLINE       0     0     0
            c2t11d0  ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c0t2d0   ONLINE       0     0     0
            c2t10d0  ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c0t4d0   ONLINE       0     0     0
            c2t12d0  ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c0t5d0   ONLINE       0     0     0
            c2t13d0  ONLINE       1     0     0  128K repaired

errors: No known data errors

zfs create space/zfscachetest
Creating data file set (3000 files of 8192000 bytes) under /space/zfscachetest 
...
Done!
zfs unmount space/zfscachetest
zfs mount space/zfscachetest

Doing initial (unmount/mount) 'cpio -C 131072 -o > /dev/null'
48000256 blocks

real    11m40.67s
user     0m20.32s
sys      5m27.16s

Doing second 'cpio -C 131072 -o > /dev/null'
48000256 blocks

real    31m29.42s
user     0m19.31s
sys      6m46.39s

Feel free to clean up with 'zfs destroy space/zfscachetest'.

The second has 2x1.2GHz US-III+, 4GB RAM and 10krpm FC disks on a single
loop.

System Configuration:  Sun Microsystems  sun4u Sun Fire 480R
System architecture: sparc
System release level: 5.11 snv_97
CPU ISA list: sparcv9+vis2 sparcv9+vis sparcv9 sparcv8plus+vis2 sparcv8plus+vis 
sparcv8plus sparcv8 sparcv8-fsmuld sparcv7 sparc

Pool configuration:
  pool: space
 state: ONLINE
 scrub: none requested
config: 

        NAME         STATE     READ WRITE CKSUM
        space        ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c1t34d0  ONLINE       0     0     0
            c1t48d0  ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c1t35d0  ONLINE       0     0     0
            c1t49d0  ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c1t36d0  ONLINE       0     0     0
            c1t51d0  ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c1t33d0  ONLINE       0     0     0
            c1t52d0  ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c1t38d0  ONLINE       0     0     0
            c1t53d0  ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c1t39d0  ONLINE       0     0     0
            c1t54d0  ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c1t40d0  ONLINE       0     0     0
            c1t55d0  ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c1t41d0  ONLINE       0     0     0
            c1t56d0  ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c1t42d0  ONLINE       0     0     0
            c1t57d0  ONLINE       0     0     0
        logs         ONLINE       0     0     0
          c1t50d0    ONLINE       0     0     0

errors: No known data errors

zfs create space/zfscachetest
Creating data file set (3000 files of 8192000 bytes) under /space/zfscachetest 
...
Done!
zfs unmount space/zfscachetest
zfs mount space/zfscachetest

Doing initial (unmount/mount) 'cpio -C 131072 -o > /dev/null'
48000256 blocks

real     5m45.66s
user     0m5.63s
sys      1m14.66s

Doing second 'cpio -C 131072 -o > /dev/null'
48000256 blocks

real    15m29.42s
user     0m5.65s
sys      1m37.83s

Feel free to clean up with 'zfs destroy space/zfscachetest'.

James Andrewartha

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] permission problem using ZFS send and zfs receive across SSH

2009-07-16 Thread Cyril Ducrocq
Thanks for the tip.

In the meantime I had trouble with "cannot receive incremental stream: 
destination rpool/bck_sauvegardes_windows has been modified since most recent 
snapshot"

...which I resolved using the -F option of the zfs recv command

(it was only a modification of the atime property of the destination during my 
checks).

I'm going to try this ZFS solution (probably coupled with a tool like "unison") on 
my real servers with a real amount of data and, unfortunately, a real bandwidth 
limitation due to SDSL - all this after my holidays.

B.R.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can't offline a RAID-Z2 device: "no valid replica"

2009-07-16 Thread Thomas Liesner
FYI:

In b117 it works as expected and stated in the documentation.

Tom
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] permission problem using ZFS send and zfs receive across SSH

2009-07-16 Thread Alexander Skwar
Hi!


On Thu, Jul 16, 2009 at 14:00, Cyril Ducrocq wrote:


> moreover i added an "on the fly" compression using gzip


You can dump the gzip|gunzip, if you use SSH on-the-fly compression, using

  ssh -C

ssh also uses gzip, so there won't be much difference.
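
Plugged into the kind of pipeline discussed in this thread, that would look
something like the following (dataset, user and host names are only
placeholders here):

zfs send sourcepool/fs@snap | \
    ssh -C repli@backuphost "pfexec /usr/sbin/zfs recv -F destpool/backupfs"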

Regards,

Alexander
-- 
[[ http://zensursula.net ]]
[ Soc. => http://twitter.com/alexs77 | http://www.plurk.com/alexs77 ]
[ Mehr => http://zyb.com/alexws77 ]
[ Chat => Jabber: alexw...@jabber80.com | Google Talk: a.sk...@gmail.com ]
[ Mehr => AIM: alexws77 ]
[ $[ $RANDOM % 6 ] = 0 ] && rm -rf / || echo 'CLICK!'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] permission problem using ZFS send and zfs receive across SSH

2009-07-16 Thread Cyril Ducrocq
I just found the solution!

I use pfexec to execute the zfs receive command with the needed rights, without 
being asked for a password.

Moreover, I added "on the fly" compression using gzip.

The solution looks like this:

zfs send rpool/sauvegardes_wind...@mercredi-16-07-09 | gzip | ssh 
re...@opensolaris_bck "gunzip | pfexec /usr/sbin/zfs recv 
rpool/bck_sauvegardes_windows"

B.R
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can't offline a RAID-Z2 device: "no valid replica"

2009-07-16 Thread Thomas Liesner
You're right, from the documentation it definitely should work. Still, it 
doesn't. At least not in Solaris 10. But I am not a ZFS developer, so this 
should probably be answered by them. I will give it a try with a recent 
OpenSolaris VM and check whether this works in newer implementations of ZFS.

> > The pool is not using the disk anymore anyway, so
> > (from the zfs point of view) there is no need to
> > offline the disk. If you want to stop the
> io-system
> > from trying to access the disk, pull it out or
> wait
> > until it gives up...
> 
> Yes, there is. I don't want the disk to become online
> if the system reboots, because what actually happens
> is that it *never* gives up (well, at least not in
> more than 24 hours), and all I/O to the zpool stop as
> long as there are those errors. Yes, I know it should
> continue working. In practice, it does not (though it
> used to be much worse in previous versions of S10,
> with all I/O stopping on all disks and volumes, both
> ZFS and UFS, and usually ending in a panic).
> And the zpool command hangs, and never finished. The
> only way to get out of it is to use cfgadm to send
> multiple hardware resets to the SATA device, then
> disconnect it. At this point, zpool completes and
> shows the disk as having faulted.

Again you are right that this is a very annoying behaviour. The same thing 
happens with DiskSuite pools and UFS when a disk is failing as well, though. 
For me it is not a ZFS problem, but a Solaris problem. The kernel should stop 
trying to access failing disks a LOT earlier instead of blocking the complete 
I/O for the whole system.
I always understood ZFS as a concept for hot-pluggable disks. This is the way I 
use it, and that is why I never really had this problem. Whenever I run into 
this behaviour, I simply pull the disk in question and replace it.  The times 
those "hiccups" affected the performance of our production environment have never 
been longer than a couple of minutes.
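
For the record, the pull-and-replace dance on a plain SATA controller usually
amounts to something like this (port and device names are made up):

cfgadm -c unconfigure sata1/3   # release the failing disk from the controller
# ...physically swap the drive, then:
cfgadm -c configure sata1/3
zpool replace tank c2t3d0       # resilver onto the new disk in the same slot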

Tom
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] permission problem using ZFS send and zfs receive across SSH

2009-07-16 Thread Cyril Ducrocq
Hello,
I'm a newbie on OpenSolaris and I'm very interested in the ZFS functionalities 
in order to set up a disk-based replicated backup system for my company.
I'm trying to bench it using 2 virtual machines.
ZFS snapshot commands work well on my main server, as I've got the root "role", 
but I planned to use ZFS send and receive across SSH as described in the 
Sun documentation, and there I ran into a problem I can't solve.

As I plan to do this replication from a crontab script, I need it to work 
without any human intervention (no login password asked).

I first tried to use the root "account" to log in via SSH on the 2nd server, but it 
seems you can't do that under OpenSolaris (even when modifying sshd_config to 
authorize it).

So I created a dedicated user "repli" and tried this command:

zfs send rpool/sauvegardes_wind...@mardi-15-07-09 | ssh 
re...@opensolaris_bck /usr/sbin/zfs recv -F rpool/bck_sauvegardes_windows

but I got this message:

cannot receive new filesystem stream: permission denied

It seems that the account "repli" does not have enough rights to do the ZFS 
receive (as a matter of fact, when I try to set up a ZFS hierarchy on the 2nd 
server using it, that doesn't work either).

As "rights" management under Solaris seems to be very different from the Linux 
one... I'm disappointed because I do not know how to give the account enough 
rights to be able to run the "zfs receive" command.

I also tried another way, using

zfs send rpool/sauvegardes_wind...@mardi-15-07-09 | ssh re...@opensolaris_bck 
su - root -c /usr/sbin/zfs recv -F rpool/bck_sauvegardes_windows

but then the root password is required (even if set to blank) and the command 
fails with
"su: désolé" ("sorry")

I'm in a deep :!ù$*, so does an angel here know how to manage such a situation?
Or is there any other way to do this ZFS replication across the network 
(using something other than SSH)?

B.R. from France.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can't offline a RAID-Z2 device: "no valid replica"

2009-07-16 Thread Laurent Blume
> You could offline the disk if [b]this[/b] disk (not
> the pool) had a replica. Nothing wrong with the
> documentation. Hmm, maybe it is little misleading
> here. I walked into the same "trap".

I apologize for being daft here, but I don't find any ambiguity in the 
documentation.
This is explicitly stated as being possible.

"This scenario is possible assuming that the systems in question see the 
storage once it is attached to the new switches, possibly through different 
controllers than before, and your pools are set up as RAID-Z or mirrored 
configurations."

And lower, it even says that it's not possible to offline two devices in a 
RAID-Z, with that exact error as an example:

"You cannot take a pool offline to the point where it becomes faulted. For 
example, you cannot take offline two devices out of a RAID-Z configuration, nor 
can you take offline a top-level virtual device.

# zpool offline tank c1t0d0
cannot offline c1t0d0: no valid replicas
"

http://docs.sun.com/app/docs/doc/819-5461/gazgm?l=en&a=view

I don't understand what you mean by this disk not having a replica. It's 
RAID-Z2: by definition, all the data it contains is replicated on two other 
disks in the pool. That's why the pool is still working fine.

> The pool is not using the disk anymore anyway, so
> (from the zfs point of view) there is no need to
> offline the disk. If you want to stop the io-system
> from trying to access the disk, pull it out or wait
> until it gives up...

Yes, there is. I don't want the disk to become online if the system reboots, 
because what actually happens is that it *never* gives up (well, at least not 
in more than 24 hours), and all I/O to the zpool stops as long as there are 
those errors. Yes, I know it should continue working. In practice, it does not 
(though it used to be much worse in previous versions of S10, with all I/O 
stopping on all disks and volumes, both ZFS and UFS, and usually ending in a 
panic).
And the zpool command hangs, and never finishes. The only way to get out of it 
is to use cfgadm to send multiple hardware resets to the SATA device, then 
disconnect it. At this point, zpool completes and shows the disk as having 
faulted.
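
For anyone else stuck in the same hang, the "hardware resets ... then
disconnect" sequence maps onto cfgadm's SATA-specific operations, roughly as
follows (the attachment point name is just an example):

cfgadm -x sata_reset_device sata1/3   # hardware reset of the device on that port
cfgadm -c disconnect sata1/3          # then drop the port so nothing keeps retrying it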


Laurent
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss