Re: [zfs-discuss] Using consumer drives in a zraid2

2009-08-26 Thread Adam Sherman
But the real question is whether the "enterprise" drives would have  
avoided your problem.


A.

--
Adam Sherman
+1.613.797.6819

On 2009-08-26, at 11:38, Troels Nørgaard Nielsen  wrote:


Hi Tim Cook.

If I were building my own system again, I would prefer not to go with
consumer hard drives.
I had a raidz pool containing eight drives on a snv_108 system. After
rebooting, four of the eight drives were so broken they could not be
seen by format, let alone by the zpool they belonged to.


This was with Samsung HD103UJ revision 1112 and 1113 disks. No amount
of hot spares, raidz2, or 3-way mirroring would have saved me, so it
was a matter of RMAing the drives, buying some new ones, and restoring
from backup. The controller was an LSI1068E. A cheap USB-to-SATA
adapter could see the disks, but with massive stalls and errors.
These were the cheapest 1 TB disks available at the time; now I
understand why.


But I'm still stuck with 6 of them in my system ;-(

Best regards
Troels Nørgaard

On 2009-08-26, at 07:46, Tim Cook wrote:

On Wed, Aug 26, 2009 at 12:22 AM, thomas   
wrote:

> I'll admit, I was cheap at first and my fileserver right now is
> consumer drives. You can bet all my future purchases will be of the
> enterprise grade. And guess what... none of the drives in my array
> are more than 5 years old, so even if they did die, and I had bought
> the enterprise versions, they'd be covered.

Anything particular happen that made you change your mind? I  
started with
"enterprise grade" because of similar information discussed in this  
thread.. but I've
also been wondering how zfs holds up with consumer level drives and  
if I could save
money by using them in the future. I guess I'm looking for horror  
stories that can be

attributed to them? ;)


When it comes to my ZFS project, I am currently lacking horror  
stories.  When it comes to "what the hell, this drive literally  
failed a week after the warranty was up", I unfortunately  
PERSONALLY have 3 examples.  I'm guessing (hoping) it's just bad  
luck.  Perhaps the luck wasn't SO bad though, as I had backups of  
all of those (proof that you should never rely on a single drive to  
last up to, or beyond, its warranty period).


--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




Re: [zfs-discuss] ZFS configuration input needed.

2009-08-21 Thread Adam Sherman

On 21-Aug-09, at 21:04 , Richard Elling wrote:
My point is, RAIDZx+1 SHOULD be simple.  I don't entirely  
understand why it hasn't been implemented.  I can only imagine,  
like so many other things, that it's because there hasn't been  
significant customer demand.  Unfortunate, if it's as simple as I  
believe it is to implement.  (No, don't ask me to do it, I put in  
my time programming in college and have no desire to do it again :))


You can get in the same ballpark with at least two top-level raidz2  
vdevs and copies=2.  If you have three or more top-level raidz2  
vdevs, then you can even do better with copies=3 ;-)
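
Concretely, the property is set per dataset and only affects data
written afterwards; a minimal sketch, with a made-up dataset name:

  # keep two copies of each block; ZFS spreads the ditto copies
  # across different vdevs when it can
  zfs set copies=2 tank/important
  zfs get copies tank/important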



Maybe this is noted somewhere, but I did not realize that "copies"  
invoked logic to distribute the copies among vdevs. Can you please  
provide some pointers about this?


Thanks,

A.

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can ZFS dynamically grow pool sizes? (re: Windows Home Server)

2009-08-12 Thread Adam Sherman

I believe you will get .5 TB in this example, no?

A.

--  
Adam Sherman

+1.613.797.6819

On 2009-08-12, at 16:44, Erik Trimble  wrote:


Eric D. Mudama wrote:

On Wed, Aug 12 at 12:11, Erik Trimble wrote:

Anyways, if I have a bunch of different size disks (1.5 TB, 1.0 TB,
500 GB, etc), can I put them all into one big array and have data
redundancy, etc?  (RAID-Z?)


Yes.  RAID-Z requires a minimum of 3 drives, and it can use
different drives. Depending on the size differences, it will do the
underlying layout in different ways.  Depending on the number and
size of the disks, ZFS is likely the best bet for using the most
total space.


I don't believe this is correct. As far as I understand it, RAID-Z
will use the lowest common denominator for sizing the overall array.
You'll get parity across all three drives, but it won't alter parity
schemes for different regions of the disks.

Yes, if you stick (say) a 1.5TB, 1TB, and .5TB drive together in a  
RAIDZ, you will get only 1TB of usable space.  Of course, there is  
always the ability to use partitions instead of the whole disk, but  
I'm not going to go into that.  Suffice it to say, RAIDZ (and  
practically all other RAID controllers and volume managers) doesn't  
easily deal with maximizing space across different-sized disks.
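
The arithmetic, with placeholder device names (each member is treated
as if it were the smallest disk in the set, here 0.5TB):

  # 3 x 0.5TB raw, minus one disk's worth of parity = ~1TB usable
  zpool create demo raidz c1t0d0 c1t1d0 c1t2d0
  zpool list demo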


--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Pool Layout Advice Needed

2009-08-10 Thread Adam Sherman

On 9-Aug-09, at 3:50 , Erik Trimble wrote:
Also, I'd recommend you not forget to remove the Swap and Dump  
volumes from the CF rpool.  You could theoretically keep Dump, but,  
honestly, why bother.   So, you could likely get a full install of  
OSOL on a 4GB CF card, and maybe even a 2GB card, especially if you  
are moving /var somewhere else.
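
Roughly, that cleanup looks like this, assuming the default zvol
names (a sketch, not a tested recipe):

  # detach the swap zvol, then reclaim its space
  swap -d /dev/zvol/dsk/rpool/swap
  zfs destroy rpool/swap
  # if you do drop Dump, repoint the dump device at a zvol in the
  # data pool first (made-up name), then reclaim rpool/dump
  dumpadm -d /dev/zvol/dsk/tank/dump
  zfs destroy rpool/dump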


So does that mean moving /opt & /var will work under OSOL without  
confusing the BE system?


As far as extra space goes, I honestly wouldn't use it for anything  
like a ZIL or L2ARC cache.  CF is even worse than the 1st-gen SSDs  
in terms of (random write) performance and doesn't have (much of)  
that nice advanced wear-leveling firmware.  I'd only use it for  
stuff that is WORM (or at least, hardly ever changes).  A sharable  
/usr/local or /opt springs to mind...


Understood. I have 8G cards coming in, so space won't be an issue. I'd  
like to have the CF cards as read-only as possible though.


By sharable, what do you mean exactly?

Thanks a lot for the advice,

A.

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Pool Layout Advice Needed

2009-08-07 Thread Adam Sherman

On 6-Aug-09, at 15:16 , Ian Collins wrote:
This ended up being a costly mistake, the environment I ended up  
with didn't play well with Live Upgrade.  So I suggest what ever you  
do, make sure you can create a new BE and boot into it before  
committing.


I assume this was old-style LU rather than the new-style ZFS-based  
"boot environments"?


Is there going to be a difference for me? I plan to run OSOL, latest.

A.

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Pool Layout Advice Needed

2009-08-07 Thread Adam Sherman

On 6-Aug-09, at 11:32 , Thomas Burgess wrote:
I've seen some people use USB sticks, and in practice it works on  
SOME machines.  The biggest difference is that the BIOS has to allow  
for USB booting.  Most of today's computers DO.  Personally I like  
CompactFlash because it is fairly easy to use as a cheap alternative  
to a hard drive.  I mirror the CF drives exactly like they are hard  
drives, so if one fails I just replace it.  USB is a little harder  
to do that with because the sticks are just not as consistent as  
CompactFlash.  But honestly it should work, and many people do this.



I've ended up purchasing two 8GB CF cards and the required CF-SATA  
adapters.


Once I install OpenSolaris on the system using the two CF cards as a  
mirrored ZFS root pool, how can I leverage any of the free space for  
some kind of ZFS-specific performance improvement? slog? etc?
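
Adding a slog later is a one-liner once a suitable device exists
(placeholder device name; per Erik's advice elsewhere in the thread,
CF itself is likely a poor fit):

  # dedicate a fast device as a separate ZIL (slog) for the tank
  zpool add tank log c3t0d0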


Thanks for everyone's input!

A.

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Pool Layout Advice Needed

2009-08-06 Thread Adam Sherman

Excellent advice, thanks Ian.

A.

--  
Adam Sherman

+1.613.797.6819

On 2009-08-06, at 15:16, Ian Collins  wrote:


Adam Sherman wrote:

On 4-Aug-09, at 16:54 , Ian Collins wrote:
Use a CompactFlash card (the board has a slot) for root, 8  
drives in raidz2 tank, back up the root regularly


If booting/running from CompactFlash works, then I like this  
one. Backing up root should be trivial since you can back it up  
into your big storage pool.  Usually root contains mostly non- 
critical data. The nice SAS backplane seems too precious to  
waste for booting.


Do you know if it is possible to put just grub, stage2, kernel on  
the CF card, instead of the entire root?


You can move some of root to another device, but I don't think you  
can move the bulk - /usr.


See:

http://docs.sun.com/source/820-4893-13/compact_flash.html#50589713_78631


Good link.

So I suppose I can move /var out and that would deal with most  
(all?) of the writes.


Good plan!


I also moved most of /opt out to save space.
This ended up being a costly mistake, the environment I ended up  
with didn't play well with Live Upgrade.  So I suggest what ever you  
do, make sure you can create a new BE and boot into it before  
committing.


--
Ian.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Pool Layout Advice Needed

2009-08-06 Thread Adam Sherman

On 6-Aug-09, at 11:50 , Kyle McDonald wrote:
I've seen some people use USB sticks, and in practice it works on  
SOME machines.  The biggest difference is that the BIOS has to allow  
for USB booting.  Most of today's computers DO.  Personally I like  
CompactFlash because it is fairly easy to use as a cheap alternative  
to a hard drive.  I mirror the CF drives exactly like they are hard  
drives, so if one fails I just replace it.  USB is a little harder  
to do that with because the sticks are just not as consistent as  
CompactFlash.  But honestly it should work, and many people do this.


This product looks really interesting:

http://www.addonics.com/products/flash_memory_reader/ad2sahdcf.asp

But I can't confirm it will show both cards as separate disks…
My read is that it won't (which is supported by the single SATA data  
connector), but it will do the mirroring for you.


Turns out the FAQ page explains that it will not, too bad.

I know that I generally prefer to let ZFS handle the redundancy for  
me, but for you it may be enough to let this do the mirroring for  
the root pool.


I'm with you there.

It seems too expensive to get 2.   Do they have a cheaper one that  
takes only 1 CF card?


I just ordered a pair of the Syba units, cheap enough to test out  
anyway.


Now to find some reasonably priced 8GB CompactFlash cards…

Thanks,

A.

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Pool Layout Advice Needed

2009-08-06 Thread Adam Sherman

On 6-Aug-09, at 11:32 , Thomas Burgess wrote:
I've seen some people use USB sticks, and in practice it works on  
SOME machines.  The biggest difference is that the BIOS has to allow  
for USB booting.  Most of today's computers DO.  Personally I like  
CompactFlash because it is fairly easy to use as a cheap alternative  
to a hard drive.  I mirror the CF drives exactly like they are hard  
drives, so if one fails I just replace it.  USB is a little harder  
to do that with because the sticks are just not as consistent as  
CompactFlash.  But honestly it should work, and many people do this.



This product looks really interesting:

http://www.addonics.com/products/flash_memory_reader/ad2sahdcf.asp

But I can't confirm it will show both cards as separate disks…

A.

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Pool Layout Advice Needed

2009-08-06 Thread Adam Sherman

On 4-Aug-09, at 16:54 , Ian Collins wrote:
Use a CompactFlash card (the board has a slot) for root, 8 drives  
in raidz2 tank, back up the root regularly


If booting/running from CompactFlash works, then I like this one.  
Backing up root should be trivial since you can back it up into  
your big storage pool.  Usually root contains mostly non-critical  
data. The nice SAS backplane seems too precious to waste for  
booting.


Do you know if it is possible to put just grub, stage2, kernel on  
the CF card, instead of the entire root?


You can move some of root to another device, but I don't think you  
can move the bulk - /usr.


See:

http://docs.sun.com/source/820-4893-13/compact_flash.html#50589713_78631


Good link.

So I suppose I can move /var out and that would deal with most (all?)  
of the writes.


Good plan!
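
In rough strokes, and keeping the Live Upgrade caveat raised
elsewhere in this thread firmly in mind, the move might look like
this (pool and dataset names are made up; do it from single-user
mode):

  # create a legacy-mounted dataset for /var in the data pool
  zfs create -o mountpoint=legacy tank/var
  mount -F zfs tank/var /mnt
  # copy the existing /var, preserving permissions and links
  cd /var && find . | cpio -pdm /mnt
  # then add a vfstab entry mounting tank/var on /var and reboot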

A.

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Pool Layout Advice Needed

2009-08-05 Thread Adam Sherman

On 5-Aug-09, at 12:21 , Bob Friesenhahn wrote:
I would be VERY surprised if you couldn't fit these in there  
SOMEWHERE. The SATA-to-CompactFlash adapter I got was about 1.75  
inches across and very, very thin; I was able to mount them side by  
side on top of the drive tray in my machine. You can easily make a  
bracket... I know a guy who used double-sided tape! But check out  
this picture.


Quite a few computers still come with a legacy PCI slot.  Are there  
PCI cards which act as a carrier for one or two CompactFlash devices  
and support system boot?



That's also a good idea. Of course, my system only has a single x16  
PCI-E slot in it. :)


A.

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Pool Layout Advice Needed

2009-08-05 Thread Adam Sherman

On 5-Aug-09, at 12:07 , Thomas Burgess wrote:
I would be VERY surprised if you couldn't fit these in there  
SOMEWHERE. The SATA-to-CompactFlash adapter I got was about 1.75  
inches across and very, very thin; I was able to mount them side by  
side on top of the drive tray in my machine. You can easily make a  
bracket... I know a guy who used double-sided tape! But check out  
this picture:

http://www.newegg.com/Product/Product.aspx?Item=N82E16812186051

Most of them can be found like this; they are VERY VERY thin and can  
be mounted just about anywhere.  They don't get very hot.  I've used  
them on a few machines, OpenSolaris and FreeBSD.  I'm a big fan of  
CompactFlash.



What about USB sticks? Is there a difference in practice?

Thanks for the advice,

A.

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Pool Layout Advice Needed

2009-08-05 Thread Adam Sherman

On 4-Aug-09, at 19:46 , Chris Du wrote:
Yes, Constellation, and they also have a SATA version. CA$350 is way  
too high. It's CA$280 for SAS and CA$235 for SATA, 500GB, in  
Vancouver.



Wow, that is a much better price than I've seen:

http://pricecanada.com/p.php/Seagate-Constellation-7200-500GB-7200-ST9500430SS-602367/?matched_search=ST9500430SS

Which retailer is that?

A.

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Pool Layout Advice Needed

2009-08-05 Thread Adam Sherman

On 5-Aug-09, at 0:14 , Thomas Burgess wrote:
I boot from CompactFlash.  It's not a big deal if you mirror it,  
because you shouldn't be booting up very often.  Also, they make  
these great CompactFlash-to-SATA adapters, so if your motherboard  
has 2 open SATA ports then you'll be golden there.
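
The ZFS side of that mirror is simple enough; a sketch with
placeholder device names, assuming the second card is already
partitioned and labelled:

  # attach the second CF card to the existing root pool
  zpool attach rpool c1t0d0s0 c1t1d0s0
  # put GRUB on the new half so either card can boot (x86)
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0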


You are suggesting booting from a mirrored pair of CF cards? I'll have  
to wait until I see the system to know if I have room, but that's a  
good idea.


I've got lots of unused SATA ports.

Thanks,

A.

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Pool Layout Advice Needed

2009-08-04 Thread Adam Sherman

On 4-Aug-09, at 16:18 , Chris Du wrote:
Another note, have you bought disks already? You may want to take a  
look at 2.5" SAS disks from Seagate as they are enterprise grade  
with different firmware for better error recovery. I know the SAS  
backplane is picky sometimes. You may see disks disconnected with  
consumer level disks.



I have purchased the drives: $60 CDN each. The intention is to swap  
them for something better in the future.


Which Seagates, the Constellation series? 
http://www.seagate.com/www/en-us/products/servers/constellation/constellation/

Those seem to be $350 CDN for the 500GB model, would have put this  
system way over budget.


A.

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Pool Layout Advice Needed

2009-08-04 Thread Adam Sherman

On 4-Aug-09, at 16:08 , Bob Friesenhahn wrote:

On Tue, 4 Aug 2009, Adam Sherman wrote:
4. Use a CompactFlash card (the board has a slot) for root, 8  
drives in raidz2 tank, back up the root regularly


If booting/running from CompactFlash works, then I like this one.  
Backing up root should be trivial since you can back it up into your  
big storage pool.  Usually root contains mostly non-critical data.  
The nice SAS backplane seems too precious to waste for booting.
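
That backup can be as little as a recursive snapshot streamed into
the big pool (names here are made up, and tank/backups must already
exist):

  zfs snapshot -r rpool@backup1
  zfs send -R rpool@backup1 | zfs receive -d tank/backups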



Do you know if it is possible to put just grub, stage2, kernel on the  
CF card, instead of the entire root?


Thanks for the response,

A.

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Pool Layout Advice Needed

2009-08-04 Thread Adam Sherman

Hi All,

I am about to set up a personal data server on some decent hardware  
(1U SuperServer, Xeon, LSI SAS controller, SAS backplane). Well, at  
least it's decent hardware to me. :)


After reading Richard's blog post, I'm still a little unsure how to  
proceed.


Details:

- I have 8 drives to play with on the system's backplane;
- Reliability over performance (320G 2.5" 5400 RPM SATA drives)
- High availability is not required, but I want the data to be as safe  
as possible (real backups will be infrequent and difficult)
- Potentially, I can add a ninth SATA drive by using some fancy  
bracket to replace the slim DVD that will be hard to source


These are the options I have figured out so far:

1. 2 mirrored drives for the root pool, 6 drives in raidz2 for the tank
2. 4 pairs of mirrored drives for a single pool that would also be  
root, but I think I can't boot from that, right?
3. 1 non-redundant drive for the root pool, 7 drives in raidz2 for the  
tank, back up the root pool regularly
4. Use a CompactFlash card (the board has a slot) for root, 8 drives  
in raidz2 tank, back up the root regularly (sketched after this list)
5. Figure out how to have only the kernel and bootloader on the CF  
card in order to have root on the raidz2 tank
5.5. Figure out how to have the kernel and bootloader on the CF card  
in order to have 4 pairs of mirrored drives in a tank, supposing #2  
doesn't work
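
For illustration, the tank in option 4 would be created roughly like
this (device names are placeholders):

  # 8-drive raidz2: any two drives can fail without data loss
  zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 \
      c2t4d0 c2t5d0 c2t6d0 c2t7d0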


Comments, suggestions, questions, criticism?

Thanks,

A.

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Another user looses his pool (10TB) in this case and 40 days work

2009-07-31 Thread Adam Sherman

On 31-Jul-09, at 20:00 , Jason A. Hoffman wrote:
I have thousands and thousands and thousands of zpools. I started  
collecting such zpools back in 2005. None have been lost.



Best regards, Jason


Jason A. Hoffman, PhD | Founder, CTO, Joyent Inc.



I believe I have about a TB of data on at least one of Jason's pools  
and it seems to still be around. ;)


A.

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] I Still Have My Data

2009-07-31 Thread Adam Sherman
My test setup of 8 x 2G virtual disks under Virtual Box on top of Mac  
OS X is running nicely! I haven't lost a *single* byte of data.


;)

A.

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Another user looses his pool (10TB) in this case and 40 days work

2009-07-27 Thread Adam Sherman

On 27-Jul-09, at 15:14 , David Magda wrote:
Also, I think it may have already been posted, but I haven't found  
the option to disable VirtualBox's disk cache. Anyone have the  
incantation handy?


http://forums.virtualbox.org/viewtopic.php?f=8&t=13661&start=0

It tells VB not to ignore the sync/flush command. Caching is still  
enabled (it wasn't the problem).
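
For reference, the setting from that thread is a per-VM extradata
key; it applies to IDE-attached disks, and the VM name and LUN number
below are placeholders:

  VBoxManage setextradata "MyVM" \
    "VBoxInternal/Devices/piix3ide/0/LUN#0/Config/IgnoreFlush" 0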


Thanks!

As Russell points out in the last post to that thread, it doesn't  
seem possible to do this with virtual SATA disks? Odd.


A.

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Another user looses his pool (10TB) in this case and 40 days work

2009-07-27 Thread Adam Sherman

On 27-Jul-09, at 13:54 , Chris Ridd wrote:
I was under the impression it was VirtualBox and its default  
setting that ignored the command, not the hard drive


Do other virtualization products (eg VMware, Parallels, Virtual PC)  
have the same default behaviour as VirtualBox?


I've a suspicion they all behave similarly dangerously, but actual  
data would be useful.


Also, I think it may have already been posted, but I haven't found  
the option to disable VirtualBox's disk cache. Anyone have the  
incantation handy?


Thanks,

A

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SSD's and ZFS...

2009-07-23 Thread Adam Sherman
In the context of a low-volume file server, for a few users, is the  
low-end Intel SSD sufficient?


A.

--
Adam Sherman
+1.613.797.6819

On 2009-07-23, at 14:09, Greg Mason  wrote:

I think it is a great idea, assuming the SSD has good write  
performance.
This one claims up to 230MB/s read and 180MB/s write and it's only  
$196.


http://www.newegg.com/Product/Product.aspx?Item=N82E16820609393

Compared to this one (250MB/s read and 170MB/s write) which is $699.


Oops. Forgot the link:

http://www.newegg.com/Product/Product.aspx?Item=N82E16820167014

Are those claims really trustworthy? They sound too good to be true!

-Kyle


Kyle-

The less expensive SSD is an MLC device. The Intel SSD is an SLC  
device.
That right there accounts for the cost difference. The SLC device  
(Intel

X25-E) will last quite a bit longer than the MLC device.

-Greg

--
Greg Mason
System Administrator
Michigan State University
High Performance Computing Center


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-21 Thread Adam Sherman

On 21-Jul-09, at 9:25 , F. Wessels wrote:
So, to wrap it up: according to Will, a Supermicro chassis using a  
single LSI expander connected to SATA disks can utilize the wide SAS  
port between the HBA and the chassis (like the J4500 Richard  
mentioned; much as I like those systems (Thumper etc.), they're way  
out of my budget). Will did see more throughput than a single link  
could handle, but with too few disks no more I/O could be generated  
to better demonstrate the available bandwidth of the wide SAS port.
I assume the J4500 is using dual expanders. Richard, can you confirm  
that the J4500 uses an active/active SATA mux per drive to allow  
failover between expanders? Aside from this, can you also confirm  
that a wide SAS link between an HBA and a single expander can be  
fully utilized when using SATA drives? To rephrase: is utilization  
independent of the disk type (SAS or SATA)?



The J-series JBODs aren't overly expensive; it's the darn drives for  
them that break the budget.


And, on that subject, is there truly a difference across Seagate's  
line-up of 7200 RPM drives? They seem to now have a bunch:


Model     Bus            Capacity  SKU            Cost (USD)
7200.12   SATA 3.0Gb/s   1TB       ST31000528AS   $90
ES.2      SATA 3.0Gb/s   1TB       ST31000340NS   $158
ES.2      SAS 3Gb/s      1TB       ST31000640SS   $215

Other manufacturers seem to have similar lineups. Is the difference  
going to matter to me when putting a mess of them into a SAS JBOD with  
an expander?


Thanks for everyone's great feedback, this thread has been highly  
educating.


A.

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-17 Thread Adam Sherman

On 17-Jul-09, at 1:45 , Will Murnane wrote:
I'm looking at the LSI SAS3801X because it seems to be what Sun  
OEMs for my

X4100s:

If you're given the choice (i.e., you have the M2 revision), PCI
Express is probably the bus to go with.  It's basically the same card,
but on a faster bus.  But there's nothing wrong with the PCI-X
version.


I have a stack of the original X4100s.


$280 or so, looks like. Might be overkill for me though.

The 3442X-R is a little cheaper: $205 from Provantage.
http://www.provantage.com/lsi-logic-lsi00164~7LSIG06K.htm



I don't get it. Why is that one cheaper than:

http://www.provantage.com/lsi-logic-lsi00124~7LSIG03W.htm

Just newer?

A.

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Adam Sherman

On 16-Jul-09, at 21:17 , Will Murnane wrote:

Good to hear. What HBA(s) are you using against it?

LSI 3442E-R.  It's connected through a Supermicro cable, CBL-0168L, so
it can be attached via an external cable.



I'm looking at the LSI SAS3801X because it seems to be what Sun OEMs  
for my X4100s:


http://sunsolve.sun.com/handbook_private/validateUser.do?target=Devices/SCSI/SCSI_PCIX_SAS_SATA_HBA

$280 or so, looks like. Might be overkill for me though.

A.

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Adam Sherman

On 16-Jul-09, at 18:01 , Will Murnane wrote:

We have a SC846E1 at work; it's the 24-disk, 4u version of the 826e1.
It's working quite nicely as a SATA JBOD enclosure.  We'll probably be
buying another in the coming year to have more capacity.



I should also ask: any other solutions I should have a look at to get  
>=12 SATA disks externally attached to my systems?


Thanks!

A.

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Adam Sherman

On 16-Jul-09, at 20:52 , James C. McPherson wrote:

Another thought in the same vein, I notice many of these systems
support "SES-2" for management. Does this do anything useful under
Solaris?


We've got some integration between FMA and SES devices which  
allows us to do some management tasks.

So that would allow FMA to detect SATA disk failures then?


libtopo, libscsi and libses are the main methods of getting
that information out. For an example outside FMA, you could
have a look into the ses/sgen plugin from pluggable fwflash.

Is there anything you're specifically interested in wrt management
uses of SES?


I'm really just exploring. Where can I read about how FMA is going to  
help with failures in my setup?


Thanks,

A.

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Adam Sherman
Another thought in the same vein, I notice many of these systems  
support "SES-2" for management. Does this do anything useful under  
Solaris?


Sorry for these questions, I seem to be having a tough time locating  
relevant information on the web.


Thanks,

A.

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Adam Sherman

On 16-Jul-09, at 18:01 , Will Murnane wrote:

The "direct attached" backplane is right out.  This means that each
drive has its own individual sata port, meaning you'd need three SAS
wide ports just to connect the drives.

The single-expander version has one LSI SAS expander, which connects
to all the drives and has two "upstream" ports.  This means you plug
in one or two servers directly, and they can both see all the disks.
I've only tested this with one-server configurations.  It also has one
"downstream" port which you could use to daisy-chain more expanders
(i.e., more 826/846 cases) onto the same server.


That makes things a heck of a lot clearer, thank you very much for  
taking the time to explain!


Ever seen/read about anyone using this kind of setup for HA  
clustering? I'm getting ideas about Open HA / Solaris Cluster on top  
of this setup with two systems connecting; that would rock!



We have a SC846E1 at work; it's the 24-disk, 4u version of the 826e1.
It's working quite nicely as a SATA JBOD enclosure.  We'll probably be
buying another in the coming year to have more capacity.


Good to hear. What HBA(s) are you using against it?


Thanks for pointing to relevant documentation.

The manual for the Supermicro cases [1, 2] does a pretty good job IMO
explaining the different options.  See page D-14 and on in the 826
manual, or page D-11 and on in the 846 manual.



I'll read though that, thanks for the detailed pointers.

A.

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Adam Sherman

Hello All,

I'm just starting to think about building some mass-storage arrays and  
am looking to better understand some of the components involved.


For example, the Supermicro SC826 series of systems is available with  
three backplanes:


1. SAS / SATA Expander Backplane with single LSI SASX28 Expander Chip
2. SAS / SATA Expander Backplane with dual LSI SASX28 Expander Chips
3. SAS / SATA Direct Attached Backplane

Assuming I am using this as an external array, connected to a server  
via SAS, how do these fit into my topology? Expander, dual expanders,  
no expander? Huh?


Thanks for pointing to relevant documentation.

A.


--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss