Re: [zfs-discuss] External drive enclosures + Sun Server for mass storage

2007-01-22 Thread Frank Cusack
On January 22, 2007 8:15:46 AM -0800 Frank Cusack [EMAIL PROTECTED] 
wrote:

On January 21, 2007 7:38:01 AM -0600 Al Hopper [EMAIL PROTECTED]
wrote:

On Sun, 21 Jan 2007, James C. McPherson wrote:

... snip 

Would you please expand upon this, because I'm really interested
in what your thoughts are. since I work on Sun's SAS driver :)


Hi James - just the man I have a couple of questions for... :)

Will the LsiLogic 3041E-R (4-port internal, SAS/SATA, PCI-e) HBA work as
a generic ZFS/JBOD SATA controller?


It does (I've used it).  Kind of.


eh, sorry, i had a 3042E-R.  I think that was the model #.  Same thing
though, just with 2 external and 2 internal ports instead of 4 internal.
I also had the PCI-X version and had the same issues.


When I've had it attached to an external JBOD, it works fine with only
1 or 2 drives, but when the JBOD (promise j300s) is fully populated
with 12 drives, it flakes out (I/O errors).  Windows had no problems.

It works better with the LSI drivers than the Sun mpt driver.

Sorry I don't remember many more details than that.  You can search
on comp.unix.solaris for a thread a few months ago about it.

It only works on x86.

I ended up selling it and the JBOD, just couldn't get it working reliably.
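
If anyone wants to poke at a similar setup, the per-device error counters
are a quick way to see it; e.g.

  iostat -En c2t3d0

reports the Soft/Hard/Transport error counts for a drive (standard Solaris
command; c2t3d0 is just a placeholder device name).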

-frank


Re: [zfs-discuss] External drive enclosures + Sun Server for mass storage

2007-01-22 Thread Frank Cusack
On January 20, 2007 6:08:07 PM -0800 Richard Elling 
[EMAIL PROTECTED] wrote:

Frank Cusack wrote:

On January 19, 2007 5:59:13 PM -0800 David J. Orman
[EMAIL PROTECTED] wrote:

card that supports SAS would be *ideal*,


Except that SAS support on Solaris is not very good.

One major problem is they treat it like scsi when instead they should
treat it like FC (or native SATA).


uhmm... SAS is serial attached SCSI, why wouldn't we treat it like SCSI?


On January 21, 2007 8:17:10 PM +1100 James C. McPherson 
[EMAIL PROTECTED] wrote:

Uh ... you do know that the second S in SAS stands for
serial-attached SCSI, right?


Uh ... you do know that the SCSI part of SAS refers to the command
set, right?  And not the physical topology and associated things.
(Please forgive any terminology errors, you know what I mean.)

That seems like saying, "Uh ... you do know that there is no SCSI in FC,
right?"  (Yet FC is still SCSI.)


Would you please expand upon this, because I'm really interested
in what your thoughts are. since I work on Sun's SAS driver :)


SAS is limited, by the Solaris driver, to 16 devices.  Not even that,
it's limited to devices with SCSI id's 0-15, so if you have 16 drives
and they start at id 10, well you only get access to 6 of them.

But SAS doesn't even really have scsi target id's.  It has WWN-like
identifiers.  I guess HBAs do some kind of mapping but it's not
reliable and can change, and inspecting or hardcoding device-id
mappings requires changing settings in the card's BIOS/OF.

Also, the HBA may renumber devices.  That can be a big problem.

It would be better to use the SASAddress the way the fibre channel
drivers use the WWN.  Drives could still be mapped to scsi id's, but
it should be done by the Solaris driver, not the HBA.  And when
multipathing the names should change like with FC.
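
To make that concrete, here's roughly the difference I mean (illustrative
device names, not from a real box):

  # today: names keyed to whatever target id the HBA hands out,
  # which can renumber under you
  /dev/rdsk/c2t0d0 ... /dev/rdsk/c2t15d0

  # FC-style: names keyed to the device's WWN, stable no matter what
  # the HBA does (this is what MPxIO/scsi_vhci names look like)
  /dev/rdsk/c4t600A0B8000123456d0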

That's one thing.  The other is unreliability with many devices
attached.  I've talked to others that have had this problem as well.
I offered to send my controller(s) and JBOD to Sun for testing, through
the support channel (I had a bug open on this for a while), but they
didn't want it.  I think it came down to the classic "we don't
sell that hardware" problem.  The onboard SAS controllers (x4100, v215
etc) work fine due to the limited topology.  I wonder how you fix
(hardcode) the scsi id's with those.  Because you're not doing it
with a PCI card.

-frank


Re: [zfs-discuss] External drive enclosures + Sun Server for mass storage

2007-01-22 Thread Frank Cusack

On January 19, 2007 10:01:43 PM -0800 Dan Mick [EMAIL PROTECTED] wrote:

Scouting around a bit, I see SIIG has a 3132 chip, for which they make a
card, eSATA II, available in PCIe and PCIe ExpressCard form factors.  I
can't promise, but chances seem good that it's supported by the si3124
driver in Solaris:

si3124 pci1095,3124
si3124 pci1095,3132

Street price for the PCIe card is $30-35.


Myself, I'd just like to have internal SATA with hot plug support.
(I'm using FC for external storage.)  I've only found cards like this:

http://www.cdw.com/shop/products/default.aspx?EDC=1070554

which is $57.  Could you share where I might find one for $30?

-frank


Re: SAS support on Solaris, was Re: [zfs-discuss] External drive enclosures + Sun Server for mass storage

2007-01-22 Thread Frank Cusack
On January 23, 2007 8:53:30 AM +1100 James C. McPherson 
[EMAIL PROTECTED] wrote:


Hi Frank,

Frank Cusack wrote:

Would you please expand upon this, because I'm really interested
in what your thoughts are. since I work on Sun's SAS driver :)

SAS is limited, by the Solaris driver, to 16 devices.


Correct.


Not even that,
it's limited to devices with SCSI id's 0-15, so if you have 16 drives
and they start at id 10, well you only get access to 6 of them.


Why would you start your numbering at 10?


Because you don't have a choice.  It is up to the HBA, and getting it
to do the right thing (i.e., what you want) isn't always easy.  IIRC,
the LSI Logic HBA(s) I had would automatically remember SASAddress-to-
SCSI-ID mappings.  So if you had attached 16 drives, removed one,
and replaced it with a different one (even in a JBOD, i.e. it would
be attached to the same PHY), it would be id 16, because the first 16
scsi id's (0-15) were already accounted for.  And then the new drive,
let's call it a replacement for a failed drive, would be inaccessible
under Solaris.

Why it would ever start at something other than 0, I'm not sure.  I
also kind of remember that scsi.conf had some setting to map the HBA
to target 7 (which doesn't apply to SAS! Yet the reference there was
specifically for the LSI 1068; again, IIRC).  I think that I was seeing
that drives started at 8 because of this initialization, and that
removing it allowed the drives to start at 0 -- once I reset the HBA
BIOS to forget the mappings it had already made.
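
If anyone wants to chase that down: I believe the knob was the
scsi-initiator-id property, i.e. something like

  # in the HBA driver's .conf file, e.g. /kernel/drv/mpt.conf
  scsi-initiator-id=7;

which reserves an id for the HBA itself the way parallel SCSI wants.  I'm
reconstructing this from memory, so treat the exact file and property
name as my assumption rather than gospel.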


But SAS doesn't even really have scsi target id's.  It has WWN-like
identifiers.  I guess HBAs do some kind of mapping but it's not
reliable and can change, and inspecting or hardcoding device-id
mappings requires changing settings in the card's BIOS/OF.


SAS has WWNs because that is what the standard requires.  SAS HBA
implementors are free to map WWNs to relatively user-friendly
identifiers, which is what the LSI SAS1064/SAS1064E chips do.


Also, the HBA may renumber devices.  That can be a big problem.


Agreed. No argument there!


It would be better to use the SASAddress the way the fibre channel
drivers use the WWN.  Drives could still be mapped to scsi id's, but
it should be done by the Solaris driver, not the HBA.  And when
multipathing the names should change like with FC.


That too is my preference. We're currently working on multipathing
with SAS.


That is good to hear.


That's one thing.  The other is unreliability with many devices
attached.  I've talked to others that have had this problem as well.
I offered to send my controller(s) and JBOD to Sun for testing, through
the support channel (I had a bug open on this for a while), but they
didn't want it.  I think it came down to the classic "we don't
sell that hardware" problem.  The onboard SAS controllers (x4100, v215
etc) work fine due to the limited topology.  I wonder how you fix
(hardcode) the scsi id's with those.  Because you're not doing it
with a PCI card.


With a physically limited topology numbering isn't an issue because
of the way that the ports are connected to the onboard devices. It's
external devices (requiring a plugin hba) where it's potentially a
problem. Of course, to fully exploit that situation you'd need to
have 64K addressable targets attached to a single controller, and
that hasn't happened yet. So we do have a window of opportunity :)


I believe SAS supports a maximum of 128 devices per controller, including
multipliers.

-frank


Re: SAS support on Solaris, was Re: [zfs-discuss] External drive enclosures + Sun Server for mass storage

2007-01-22 Thread James C. McPherson

Frank Cusack wrote:
On January 23, 2007 8:53:30 AM +1100 James C. McPherson 

...

Why would you start your numbering at 10?

Because you don't have a choice.  It is up to the HBA, and getting it
to do the right thing (i.e., what you want) isn't always easy.  IIRC,
the LSI Logic HBA(s) I had would automatically remember SASAddress-to-
SCSI-ID mappings.  So if you had attached 16 drives, removed one,
and replaced it with a different one (even in a JBOD, i.e. it would
be attached to the same PHY), it would be id 16, because the first 16
scsi id's (0-15) were already accounted for.  And then the new drive,
let's call it a replacement for a failed drive, would be inaccessible
under Solaris.


Oh heck. That sounds like one helluva broken way of doing things.


Why it would ever start at something other than 0, I'm not sure.  I
also kind of remember that scsi.conf had some setting to map the HBA
to target 7 (which doesn't apply to SAS! Yet the reference there was
specifically for the LSI 1068; again, IIRC).  I think that I was seeing
that drives started at 8 because of this initialization, and that
removing it allowed the drives to start at 0 -- once I reset the HBA
BIOS to forget the mappings it had already made.


/me groans ... more brokenness. I'll pass this on to some others in
our team who've been working on a similar issue.

...

With a physically limited topology numbering isn't an issue because
of the way that the ports are connected to the onboard devices. It's
external devices (requiring a plugin hba) where it's potentially a
problem. Of course, to fully exploit that situation you'd need to
have 64K addressable targets attached to a single controller, and
that hasn't happened yet. So we do have a window of opportunity :)

I believe SAS supports a maximum of 128 devices per controller, including
multipliers.


Not quite correct - each expander device can have 128 connections,
up to a max of 16256 devices in a single SAS domain. My figure of
64K addressable targets makes an assumption about the number of
SAS domains that a controller can have :)
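
(For the curious, 16256 is 128 x 127: fan out to 128 expander devices,
each with up to 127 end devices once you subtract the uplink.  That's my
reading of the topology rules; check the spec before quoting me.)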

Even so, we've still got that window.


cheers,
James C. McPherson
--
Solaris kernel software engineer
Sun Microsystems


Re: [zfs-discuss] External drive enclosures + Sun Server for mass storage

2007-01-21 Thread James C. McPherson

Frank Cusack wrote:
On January 19, 2007 5:59:13 PM -0800 David J. Orman 
[EMAIL PROTECTED] wrote:

card that supports SAS would be *ideal*,


Except that SAS support on Solaris is not very good.
One major problem is they treat it like scsi when instead they should
treat it like FC (or native SATA).


Uh ... you do know that the second S in SAS stands for
serial-attached SCSI, right? Native SATA is a subset
of native SAS, too. What I'm intrigued by is your assertion
that we should treat SAS the same way we treat FC.

Would you please expand upon this, because I'm really interested
in what your thoughts are. since I work on Sun's SAS driver :)

I would also like to get some feedback on what you and others
would like to see for Sun's SAS support. Not guaranteeing
anything, but I'm happy to act as a channel to the relevant
people who have signoff on things like this.



cheers,
James C. McPherson
--
Solaris kernel software engineer
Sun Microsystems


Re: [zfs-discuss] External drive enclosures + Sun Server for mass storage

2007-01-21 Thread Casper . Dik

What I gather from this is that today, SATA drives will either look like IDE
drives or SCSI drives, to some extent.  When they look like IDE drives, you
don't get all of the cfgadm or luxadm management options and you have to do
things like hot plug in a more-rather-than-less manual mode. When they look
like SCSI drives, then you'll also get the more-automatic hot plug features.

In the one case they're running the controller in compatibility mode; in
the other case you'll need the appropriate SATA controller driver.
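
Concretely, with a native SATA controller driver each port shows up as a
cfgadm(1M) attachment point, so hot plug becomes the usual

  cfgadm -c unconfigure sata0/3
  (swap the disk)
  cfgadm -c configure sata0/3

dance.  (sata0/3 is just an example attachment point name; they vary per
machine.)  In compatibility mode there's no attachment point for the
disk, hence the manual mode described above.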

Casper


Re: [zfs-discuss] External drive enclosures + Sun Server for mass storage

2007-01-21 Thread Al Hopper
On Sun, 21 Jan 2007, James C. McPherson wrote:

... snip 
 Would you please expand upon this, because I'm really interested
 in what your thoughts are. since I work on Sun's SAS driver :)

Hi James - just the man I have a couple of questions for... :)

Will the LsiLogic 3041E-R (4-port internal, SAS/SATA, PCI-e) HBA work as a
generic ZFS/JBOD SATA controller?
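
(By "generic ZFS/JBOD" duty I just mean handing the raw disks straight to
ZFS with no HW RAID in the path, e.g.

  zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0

with illustrative device names, obviously.)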

There are a few white-box hackers on this list looking for a
solid/reliable SATA HBA with a PCI-e (PCI Express) connector - rather than
the rock-solid Supermicro/Marvel board which is only available with a
64-bit PCI-X connector at the moment.

Thanks,

Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
   Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris.Org Community Advisory Board (CAB) Member - Apr 2005
 OpenSolaris Governing Board (OGB) Member - Feb 2006


Re: [zfs-discuss] External drive enclosures + Sun Server for mass storage

2007-01-21 Thread James C. McPherson

Al Hopper wrote:

On Sun, 21 Jan 2007, James C. McPherson wrote:
... snip 

Would you please expand upon this, because I'm really interested
in what your thoughts are. since I work on Sun's SAS driver :)

Hi James - just the man I have a couple of questions for... :)
Will the LsiLogic 3041E-R (4-port internal, SAS/SATA, PCI-e) HBA work as a
generic ZFS/JBOD SATA controller?
There are a few white-box hackers on this list looking for a
solid/reliable SATA HBA with a PCI-e (PCI Express) connector - rather than
the rock-solid Supermicro/Marvel board which is only available with a
64-bit PCI-X connector at the moment.


Hi Al,
according to the 3041E-R two-page PDF which I found at
http://www.lsi.com/documentation/storage/scg/hbas/sas/lsisas3041e-r_pb.pdf
the SAS ASIC is the LSISAS1064E which, to the best of my
knowledge, is supported with the mpt driver.

So the answer to your question is "I don't see why not" :)


That chip is also the onboard controller with the T1000,
T2000, Ultra25 and Ultra45.
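
A quick way to confirm once a card is in the box: prtconf -D prints the
driver bound to each device node, so something like

  prtconf -D | grep -i mpt

should list the controller if the binding worked.  Standard commands,
nothing exotic.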

cheers,
James C. McPherson
--
Solaris kernel software engineer
Sun Microsystems


Re: [zfs-discuss] External drive enclosures + Sun Server for mass storage

2007-01-20 Thread Shannon Roddy
Frank Cusack wrote:

 thumper (x4500) seems pretty reasonable ($/GB).

 -frank


I am always amazed that people consider thumper to be reasonable in
price.  A markup of 450% or more per drive over July 2006 street prices
doesn't seem reasonable to me, even after subtracting the cost
of the system.  I like the x4500, I wish I had one.  But, I can't pay
what Sun wants for it.  So, instead, I am stuck buying lower end Sun
systems and buying third party SCSI/SATA JBODs.  I like Sun.  I like
their products, but I can't understand their storage pricing most of the
time.

-Shannon



Re: [zfs-discuss] External drive enclosures + Sun Server for mass storage

2007-01-20 Thread Frank Cusack
On January 20, 2007 2:16:45 AM -0600 Shannon Roddy 
[EMAIL PROTECTED] wrote:

Frank Cusack wrote:


thumper (x4500) seems pretty reasonable ($/GB).

-frank



I am always amazed that people consider thumper to be reasonable in
price.  A markup of 450% or more per drive over July 2006 street prices
doesn't seem reasonable to me, even after subtracting the cost
of the system.  I like the x4500, I wish I had one.  But, I can't pay
what Sun wants for it.  So, instead, I am stuck buying lower end Sun
systems and buying third party SCSI/SATA JBODs.


But what data throughput do you get?  Thumper is phenomenal.

It is a shame (for the consumer) that it's not available without drives.
Sun has always had an obscene markup on drives.

-frank


Re: [zfs-discuss] External drive enclosures + Sun Server for mass storage

2007-01-20 Thread Ed Gould

Shannon Roddy wrote:

For Sun to charge 4-8 times street price for hard drives that
they order just the same as I do from the same manufacturers that I
order from is infuriating.


Are you sure they're really the same drives?  Mechanically, they 
probably are, but last I knew (I don't work in the Storage part of Sun, 
so I have no particular knowledge about current practices), Sun and 
other systems vendors (I know both Apple and DEC did) had custom 
firmware in the drives they resell.  One reason for this is that the 
systems vendors qualified the drives with a particular firmware load, 
and did not buy just the latest firmware that the drive manufacturer 
wanted to ship, for quality-control reasons.  At least some of the time, 
there were custom functionality changes as well.

--
--Ed


Re: [zfs-discuss] External drive enclosures + Sun Server for mass storage

2007-01-20 Thread Jason J. W. Williams

Hi Shannon,

The markup is still pretty high on a per-drive basis. That being said,
$1-2/GB is darn low for the capacity in a server. Plus, you're also
paying for having enough HyperTransport I/O to feed the PCI-E I/O.
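
(Rough math behind that: the 48 x 500GB configuration is 24,000GB raw, so
the $24,000 Startup Essentials price works out to $1/GB, and list price
still lands inside that $1-2/GB range.)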

Does anyone know what problems they had with the 250GB version of the
Thumper that caused them to pull it?

Best Regards,
Jason

On 1/20/07, Shannon Roddy [EMAIL PROTECTED] wrote:

Frank Cusack wrote:

 thumper (x4500) seems pretty reasonable ($/GB).

 -frank


I am always amazed that people consider thumper to be reasonable in
price.  A markup of 450% or more per drive over July 2006 street prices
doesn't seem reasonable to me, even after subtracting the cost
of the system.  I like the x4500, I wish I had one.  But, I can't pay
what Sun wants for it.  So, instead, I am stuck buying lower end Sun
systems and buying third party SCSI/SATA JBODs.  I like Sun.  I like
their products, but I can't understand their storage pricing most of the
time.

-Shannon



Re: [zfs-discuss] External drive enclosures + Sun Server for mass storage

2007-01-20 Thread Erik Trimble

Frank Cusack wrote:
On January 19, 2007 6:47:30 PM -0800 Erik Trimble 
[EMAIL PROTECTED] wrote:

Not to be picky, but the X2100 and X2200 series are NOT
designed/targeted for disk serving (they don't even have redundant power
supplies).  They're compute-boxes.  The X4100/X4200 are what you are
looking for to get a flexible box more oriented towards disk i/o and
expansion.


But x4100/x4200 only accept expensive 2.5" SAS drives, which have
small capacities.  That doesn't seem oriented towards disk serving.

-frank
Those are boot drives, meant for those with small amounts of data (and
you get 73GB and soon 143GB drives in that form factor, which isn't
really any different than typical 3.5" SCSI drive sizes).


No, I was talking about the internal architecture.  The X4100/X4200 have 
multiple independent I/O buses, with lots of PCI-E and PCI-X slots. So 
if you were looking to hook up external storage (which was the original 
poster's intent), the X4100/X4200 is a much better match.


-Erik

--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)



Re: [zfs-discuss] External drive enclosures + Sun Server for mass storage

2007-01-20 Thread Erik Trimble

Rich Teer wrote:

On Fri, 19 Jan 2007, Frank Cusack wrote:

  

But x4100/x4200 only accept expensive 2.5" SAS drives, which have
small capacities.  [...]



... and only 2 or 4 drives each.  Hence my blog entry a while back,
wishing for a Sun-badged 1U SAS JBOD with room for 8 drives.  I'm
amazed that Sun hasn't got a product to fill this obvious (to me
at least) hole in their storage catalogue.
  
The Sun 3120 does 4 x 3.5" SCSI drives in a 1U, and the Sun 3320 does 12
x 3.5" in 2U. Both come in JBOD configs (and the 3320 has HW RAID if you
want it).


Yes, I'm certain that having 8-10 SAS drives in a 1U might be useful; HP 
thinks so:  the MSA50  
(http://h18004.www1.hp.com/storage/disk_storage/msa_diskarrays/drive_enclosures/ma50/index.html)


But, given that Sun doesn't seem to be really targeting Small Business 
right now (at least, it appears that way), the 3120 works quite well, 
feature-wise, for Medium Business/Enterprise areas..


I priced out the HP MSA-series vs the Sun StorageTek 3000-series, and
the HP stuff is definitely cheaper, by a noticeable amount.  So I'd say
Sun has less of a hardware-selection gap than a pricing gap.  The
current low end of the Sun line just isn't cheap enough.




Of course the opinions expressed herein are my own, and I have no 
special knowledge of anything relevant to this discussion. (TM)


:-)


--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)



Re: [zfs-discuss] External drive enclosures + Sun Server for mass storage

2007-01-20 Thread Rich Teer
On Sat, 20 Jan 2007, Erik Trimble wrote:

 The Sun 3120 does 4 x 3.5" SCSI drives in a 1U, and the Sun 3320 does 12 x
 3.5" in 2U. Both come in JBOD configs (and the 3320 has HW RAID if you want
 it).

Yep; I know about those products.  But the entry-level 3120 (with
2 x 73GB disks) has a list price of $5K!  I'm a Sun supporter, but
those kinds of prices are akin to daylight robbery!  Or, to put it
another way, the list price of that simple JBOD is more than twice
as expensive as the X4100--a server it would probably be connected
to!

But more to the point, SAS seems to be the future, so it would be
really nice to have a Sun SAS JBOD array.  As I said in my blog
about this, if Sun could produce an 8-drive SAS 1U JBOD array,
with a starting price (say, 2 x 36GB drives with 2 hot-swappable
PSUs) of $2K, they'd sell 'em by the truckload.  I mean, let's
be honest: when we're talking about low-end JBOD arrays, we're
talking about one or two PSUs, some mechanism for holding the
drives, a bit of electronics, and a metal case to put it all in.
No expensive rocket science necessary.

 Yes, I'm certain that having 8-10 SAS drives in a 1U might be useful; HP
 thinks so:  the MSA50
 (http://h18004.www1.hp.com/storage/disk_storage/msa_diskarrays/drive_enclosures/ma50/index.html)

Yep, that's what I'm thinking of, only in a nice case that is similar
to the X4100 (for economies of scale and pretty data centers).

 But, given that Sun doesn't seem to be really targeting Small Business right
 now (at least, it appears that way), the 3120 works quite well, feature-wise,
 for Medium Business/Enterprise areas.

But that's the point: Sun IS targeting Small Business: that's
what the Sun Startup Essentials program is all about!  Not to
mention the programs aimed at developers.

Agreed, Sun isn't targeting the mum and dad kind of business,
but there are a huge number of businesses that need more storage
than will fit into an X4200/T2000 but less than what's available
with (say) the 3320.

 of a hardware-selection gap than a pricing gap. The current low end of the
 Sun line just isn't cheap enough.

Couldn't agree more.

-- 
Rich Teer, SCSA, SCNA, SCSECA, OpenSolaris CAB member

President,
Rite Online Inc.

Voice: +1 (250) 979-1638
URL: http://www.rite-group.com/rich


Re: [zfs-discuss] External drive enclosures + Sun Server for mass storage

2007-01-20 Thread Richard Elling

Frank Cusack wrote:
On January 19, 2007 5:59:13 PM -0800 David J. Orman 
[EMAIL PROTECTED] wrote:

card that supports SAS would be *ideal*,


Except that SAS support on Solaris is not very good.

One major problem is they treat it like scsi when instead they should
treat it like FC (or native SATA).


uhmm... SAS is serial attached SCSI, why wouldn't we treat it like SCSI?

BTW, the sd driver and ssd (SCSI over fibre channel) drivers have the same
source.  SATA will also use the sd driver, as Pawel describes in his blogs
on the SATA framework at http://blogs.sun.com/pawelblog

What I gather from this is that today, SATA drives will either look like IDE
drives or SCSI drives, to some extent.  When they look like IDE drives, you
don't get all of the cfgadm or luxadm management options and you have to do
things like hot plug in a more-rather-than-less manual mode. When they look
like SCSI drives, then you'll also get the more-automatic hot plug features.
 -- richard


[zfs-discuss] External drive enclosures + Sun Server for mass storage

2007-01-19 Thread David J. Orman
Hi,

I'm looking at Sun's 1U x64 server line, and at most they support two drives. 
This is fine for the root OS install, but obviously not sufficient for many 
users.

Specifically, I am looking at the: http://www.sun.com/servers/x64/x2200/ 
X2200M2.

It only has "Riser card assembly with two internal 64-bit, 8-lane, low-profile,
half-length PCI-Express slots" for expansion.

What I'm looking for is a SAS/SATA card that would allow me to add an external 
SATA enclosure (or some such device) to add storage. The supported list on the 
HCL is pretty slim, and I see no PCI-E stuff. A card that supports SAS would be 
*ideal*, but I can settle for normal SATA too.

So, anybody have any good suggestions for these two things:

#1 - SAS/SATA PCI-E card that would work with the Sun X2200M2.
#2 - Rack-mountable external enclosure for SAS/SATA drives, supporting hot swap 
of drives.

Basically, I'm trying to get around using Sun's extremely expensive storage 
solutions while waiting on them to release something reasonable now that ZFS 
exists.

Cheers,
David
 
 


Re: [zfs-discuss] External drive enclosures + Sun Server for mass storage

2007-01-19 Thread Jason J. W. Williams

Hi David,

I don't know if your company qualifies as a startup under Sun's regs
but you can get an X4500/Thumper for $24,000 under this program:
http://www.sun.com/emrkt/startupessentials/

Best Regards,
Jason

On 1/19/07, David J. Orman [EMAIL PROTECTED] wrote:

Hi,

I'm looking at Sun's 1U x64 server line, and at most they support two drives. 
This is fine for the root OS install, but obviously not sufficient for many 
users.

Specifically, I am looking at the: http://www.sun.com/servers/x64/x2200/ 
X2200M2.

It only has "Riser card assembly with two internal 64-bit, 8-lane, low-profile,
half-length PCI-Express slots" for expansion.

What I'm looking for is a SAS/SATA card that would allow me to add an external 
SATA enclosure (or some such device) to add storage. The supported list on the 
HCL is pretty slim, and I see no PCI-E stuff. A card that supports SAS would be 
*ideal*, but I can settle for normal SATA too.

So, anybody have any good suggestions for these two things:

#1 - SAS/SATA PCI-E card that would work with the Sun X2200M2.
#2 - Rack-mountable external enclosure for SAS/SATA drives, supporting hot swap 
of drives.

Basically, I'm trying to get around using Sun's extremely expensive storage 
solutions while waiting on them to release something reasonable now that ZFS 
exists.

Cheers,
David






Re: [zfs-discuss] External drive enclosures + Sun Server for mass storage

2007-01-19 Thread Erik Trimble
On Fri, 2007-01-19 at 17:59 -0800, David J. Orman wrote:
 Hi,
 
 I'm looking at Sun's 1U x64 server line, and at most they support two drives. 
 This is fine for the root OS install, but obviously not sufficient for many 
 users.
 
 Specifically, I am looking at the: http://www.sun.com/servers/x64/x2200/ 
 X2200M2.
 
 It only has "Riser card assembly with two internal 64-bit, 8-lane,
 low-profile, half-length PCI-Express slots" for expansion.
 
 What I'm looking for is a SAS/SATA card that would allow me to add an 
 external SATA enclosure (or some such device) to add storage. The supported 
 list on the HCL is pretty slim, and I see no PCI-E stuff. A card that 
 supports SAS would be *ideal*, but I can settle for normal SATA too.
 
 So, anybody have any good suggestions for these two things:
 
 #1 - SAS/SATA PCI-E card that would work with the Sun X2200M2.
 #2 - Rack-mountable external enclosure for SAS/SATA drives, supporting hot 
 swap of drives.
 
 Basically, I'm trying to get around using Sun's extremely expensive storage 
 solutions while waiting on them to release something reasonable now that ZFS 
 exists.
 
 Cheers,
 David

Not to be picky, but the X2100 and X2200 series are NOT
designed/targeted for disk serving (they don't even have redundant power
supplies).  They're compute-boxes.  The X4100/X4200 are what you are
looking for to get a flexible box more oriented towards disk i/o and
expansion.

That said (if you're set on an X2200 M2), you are probably better off
getting a PCI-E SCSI controller, and then attaching it to an external
SCSI-SATA JBOD.  There are plenty of external JBODs out there which use
Ultra320/Ultra160 as a host interface and SATA as a drive interface.
Sun will sell you a supported SCSI controller with the X2200 M2 (the
Sun StorageTek PCI-E Dual Channel Ultra320 SCSI HBA).

SCSI is far better for a host attachment mechanism than eSATA if you
plan on doing more than a couple of drives, which it sounds like you
are. While the SCSI HBA is going to cost quite a bit more than an eSATA
HBA, the external JBODs run about the same, and the total difference is
going to be $300 or so across the whole setup (which will cost you $5000
or more fully populated). So the cost to use SCSI vs eSATA as the host-
attach is a rounding error.
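
(To put numbers on it: roughly $300 extra on a $5000+ build is under 6%,
which is why I call it a rounding error.)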



-- 
Erik Trimble
Java System Support
Mailstop:  usca14-102
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)



Re: [zfs-discuss] External drive enclosures + Sun Server for mass storage

2007-01-19 Thread Dan Mick

David J. Orman wrote:

Hi,

I'm looking at Sun's 1U x64 server line, and at most they support two drives. 
This is fine for the root OS install, but obviously not sufficient for many 
users.

Specifically, I am looking at the: http://www.sun.com/servers/x64/x2200/ 
X2200M2.

It only has "Riser card assembly with two internal 64-bit, 8-lane, low-profile,
half-length PCI-Express slots" for expansion.

What I'm looking for is a SAS/SATA card that would allow me to add an external 
SATA enclosure (or some such device) to add storage. The supported list on the 
HCL is pretty slim, and I see no PCI-E stuff. A card that supports SAS would be 
*ideal*, but I can settle for normal SATA too.

So, anybody have any good suggestions for these two things:

#1 - SAS/SATA PCI-E card that would work with the Sun X2200M2.


Scouting around a bit, I see SIIG has a 3132 chip, for which they make a card,
eSATA II, available in PCIe and PCIe ExpressCard form factors.  I can't promise,
but chances seem good that it's supported by the si3124 driver in Solaris:


si3124 pci1095,3124
si3124 pci1095,3132

Street price for the PCIe card is $30-35.
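
If you want to sanity-check a card against the driver, the aliases live in
/etc/driver_aliases and prtconf will tell you whether the chip showed up,
e.g.

  grep si3124 /etc/driver_aliases
  prtconf -pv | grep -i 1095,3132

(standard Solaris commands; 1095,3132 is the PCI id from the alias list
above).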

Also, the first hit for PCIe eSATA was a card based on the JMicron JMB 360, 
which is supposed to support AHCI, and so should be supported by the brand-new 
ahci driver (just back in snv_56).  Street prices for the most popular card
were showing as $29.99 in quantity 1.


I don't know whether either of these will work, but it looks promising.  I also 
don't know about eSATA vs. SCSI.  Keep in mind that you'll only be able to 
support two drives with the SIIG card, and one with the other one; port 
multipliers may or may not be working yet.



#2 - Rack-mountable external enclosure for SAS/SATA drives, supporting hot swap 
of drives.

Basically, I'm trying to get around using Sun's extremely expensive storage 
solutions while waiting on them to release something reasonable now that ZFS 
exists.

Cheers,
David
 
 




Re: [zfs-discuss] External drive enclosures + Sun Server for mass storage

2007-01-19 Thread Frank Cusack
On January 19, 2007 5:59:13 PM -0800 David J. Orman 
[EMAIL PROTECTED] wrote:

card that supports SAS would be *ideal*,


Except that SAS support on Solaris is not very good.

One major problem is they treat it like scsi when instead they should
treat it like FC (or native SATA).


So, anybody have any good suggestions for these two things:

# 1 - SAS/SATA PCI-E card that would work with the Sun X2200M2.


I had the LSI Logic 3442-E working on x86 but not reliably.  That is
the only SAS controller Sun supports AFAIK.


# 2 - Rack-mountable external enclosure for SAS/SATA drives, supporting
# hot swap of drives.


Promise's VTrak J300s is the cheapest one I've found.  Adaptec's been
advertising one forever (6+ months?) but it's not in production; at
least, you won't be able to find one without hard drives, and you
won't be able to find the dual-controller model.


Basically, I'm trying to get around using Sun's extremely expensive
storage solutions while waiting on them to release something reasonable
now that ZFS exists.


thumper (x4500) seems pretty reasonable ($/GB).

-frank


Re: [zfs-discuss] External drive enclosures + Sun Server for mass storage

2007-01-19 Thread Frank Cusack
On January 19, 2007 6:47:30 PM -0800 Erik Trimble [EMAIL PROTECTED] 
wrote:

Not to be picky, but the X2100 and X2200 series are NOT
designed/targeted for disk serving (they don't even have redundant power
supplies).  They're compute-boxes.  The X4100/X4200 are what you are
looking for to get a flexible box more oriented towards disk i/o and
expansion.


But x4100/x4200 only accept expensive 2.5" SAS drives, which have
small capacities.  That doesn't seem oriented towards disk serving.

-frank