Re: hardware for home use large storage

2010-03-22 Thread Andrei Kolu
2010/3/21 Andriy Gapon :
> on 09/02/2010 14:53 Andriy Gapon said the following:
>> on 09/02/2010 12:32 Matthew D. Fuller said the following:
>>> On Tue, Feb 09, 2010 at 04:37:50PM +1030 I heard the voice of
>>> Daniel O'Connor, and lo! it spake thus:
>>>> Probably the result of idiotic penny pinching though :-/
>>> Irritating.  One of my favorite parts of AMD's amd64 chips is that I
>>> no longer have to spend through the nose or be a detective (or, often,
>>> both) to get ECC.  So far, it seems like there are relatively few
>>> hidden holes on that path, and I haven't stepped in one, but every new
>>> one I hear about increases my terror of the day when there are more
>>> holes than solid ground   :(
>>
>> Yep.
>> For sure, Gigabyte BIOS on this board is completely missing ECC initialization
>> code.  I mean not only the menus in setup, but the code that does memory
>> controller programming.
>> Not sure about the physical lanes though.
>
> BTW, not 100% sure my test method was correct, but it seems that the ECC pins
> of the DIMM sockets (CB0, CB1, etc.) on my motherboard (GA-MA780G-UD3H) are not
> connected anywhere.
> So it looks like Gigabyte is saving a few cents here.
>
Hi,

I got this reply from Gigabyte about my concern about actual ECC
support on board:
Model Name : GA-X48-DS4(rev. 1.3)
---
Dear Sir,

Thank you for your kindly mail and inquiry. About the issue you
mentioned, we do not guarantee all Third party H/W monitor utilities
will work properly with our motherboard because most of the S/W does
not know our H/W design and impossible optimize their S/W for our
motherboard. We are sorry if there is any inconvenience.

In addition, basically, if you could boot up your system with ECC
memory module, it means your motherboard could fully support the ECC
memory module. If the motherboard does not support the ECC memory, you
could not use this kind of memory module for the system to use. By the
way, ECC function will automatically enable in the BIOS program. You
do not need to turn on it manually.
---

I call bullshit on this, because not a single one of the testing
utilities I used confirmed any presence of ECC.

Also, most of the Asus boards are extremely unstable with ECC enabled.
In the end I replaced the Kingston ECC DDR2 in my Asus board with non-ECC
memory and bought an Intel S3210SHLX server board for my workstation.


Andrei Kolu
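For what it's worth, one way to cross-check a vendor's ECC claim from
FreeBSD is to read the SMBIOS tables. This is a hedged sketch only: it
assumes the sysutils/dmidecode port is installed, and the data is only
as trustworthy as the BIOS that populates it.

```sh
# Ask SMBIOS what error correction the memory controller reports.
# Requires the sysutils/dmidecode port; run as root.
dmidecode -t memory | grep -i 'error correction'
# A board with working ECC usually reports something like
#   Error Correction Type: Single-bit ECC
# while a board with unconnected ECC lanes tends to report "None".

# The PCI config space of the host bridge / memory controller can also
# be dumped with pciconf, but interpreting it is chipset-specific:
pciconf -lv | grep -i -A2 hostb
```

Neither check is conclusive on its own, which is why multiple utilities
were tried above.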
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: hardware for home use large storage

2010-03-20 Thread Andriy Gapon
on 09/02/2010 14:53 Andriy Gapon said the following:
> on 09/02/2010 12:32 Matthew D. Fuller said the following:
>> On Tue, Feb 09, 2010 at 04:37:50PM +1030 I heard the voice of
>> Daniel O'Connor, and lo! it spake thus:
>>> Probably the result of idiotic penny pinching though :-/
>> Irritating.  One of my favorite parts of AMD's amd64 chips is that I
>> no longer have to spend through the nose or be a detective (or, often,
>> both) to get ECC.  So far, it seems like there are relatively few
>> hidden holes on that path, and I haven't stepped in one, but every new
>> one I hear about increases my terror of the day when there are more
>> holes than solid ground   :(
> 
> Yep.
> For sure, Gigabyte BIOS on this board is completely missing ECC initialization
> code.  I mean not only the menus in setup, but the code that does memory
> controller programming.
> Not sure about the physical lanes though.

BTW, not 100% sure my test method was correct, but it seems that the ECC pins
of the DIMM sockets (CB0, CB1, etc.) on my motherboard (GA-MA780G-UD3H) are not
connected anywhere.
So it looks like Gigabyte is saving a few cents here.


-- 
Andriy Gapon


Re: hardware for home use large storage

2010-02-16 Thread Dan Langille

On Tue, February 16, 2010 2:05 pm, Alexander Motin wrote:
> Dan Langille wrote:
>> On Wed, February 10, 2010 10:00 pm, Bruce Simpson wrote:
>>> On 02/10/10 19:40, Steve Polyack wrote:
>>>> I haven't had such bad experience as the above, but it is certainly a
>>>> concern.  Using ZFS we simply 'offline' the device, pull, replace with
>>>> a new one, glabel, and zfs replace.  It seems to work fine as long as
>>>> nothing is accessing the device you are replacing (otherwise you will
>>>> get a kernel panic a few minutes down the line).  m...@freebsd.org has
>>>> also committed a large patch set to 9-CURRENT which implements
>>>> "proper" SATA/AHCI hot-plug support and error-recovery through CAM.
>>> I've been running with this patch in 8-STABLE for well over a week now
>>> on my desktop w/o issues; I am using main disk for dev, and eSATA disk
>>> pack for light multimedia use.
>>
>> MFC to 8.x?
>
> Merged.

Thank you.  :)

-- 
Dan Langille -- http://langille.org/



Re: hardware for home use large storage

2010-02-16 Thread Alexander Motin
Dan Langille wrote:
> On Wed, February 10, 2010 10:00 pm, Bruce Simpson wrote:
>> On 02/10/10 19:40, Steve Polyack wrote:
>>> I haven't had such bad experience as the above, but it is certainly a
>>> concern.  Using ZFS we simply 'offline' the device, pull, replace with
>>> a new one, glabel, and zfs replace.  It seems to work fine as long as
>>> nothing is accessing the device you are replacing (otherwise you will
>>> get a kernel panic a few minutes down the line).  m...@freebsd.org has
>>> also committed a large patch set to 9-CURRENT which implements
>>> "proper" SATA/AHCI hot-plug support and error-recovery through CAM.
>> I've been running with this patch in 8-STABLE for well over a week now
>> on my desktop w/o issues; I am using main disk for dev, and eSATA disk
>> pack for light multimedia use.
> 
> MFC to 8.x?

Merged.

-- 
Alexander Motin
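The replacement procedure Steve Polyack describes above (offline, swap,
glabel, zfs replace) can be sketched as a shell session. The pool name
"tank", the label "disk3", and the device name are all hypothetical, and
nothing else should be accessing the disk while it is swapped:

```sh
# Take the failing member out of service before pulling it.
zpool offline tank label/disk3

# ...physically swap the drive, then label the replacement:
glabel label disk3 /dev/ada3

# Resilver onto the new drive and watch progress.
zpool replace tank label/disk3
zpool status tank
```

Using glabel names rather than raw device nodes keeps the pool stable if
devices are renumbered after the hot-swap.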


Re: hardware for home use large storage

2010-02-16 Thread Peter C. Lai
On 2010-02-15 10:29:22PM +0100, Gót András wrote:
> On Hét, Február 15, 2010 10:15 pm, Dan Naumov wrote:
> >>> A C2Q CPU makes little sense right now from a performance POV. For
> >>> the price of that C2Q CPU + LGA775 board you can get an i5 750 CPU and
> >>> a 1156 socket motherboard that will run circles around that C2Q. You
> >>> would lose the ECC though, since that requires the more expensive 1366
> >>> socket CPUs and boards.
> >>>
> >>> - Sincerely,
> >>> Dan Naumov
> >>>
> >>
> >> Hi,
> >>
> >>
> >> Do you have tests showing this? I'm not really impressed with the i5 series.
> >>
> >>
> >> Regards,
> >> Andras
> >>
> >
> > There: http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=3634&p=10
> >
> >
> > The i5 750, which is a 180 euro CPU, beats Q9650 C2Q, which is a 300 euro
> > CPU.
> >
> >
> >
> > - Sincerely,
> > Dan Naumov
> >
> >
> 
> Oh, I was not up to date on the price/performance ratio. However, I'd compare
> the i5 750 to the Q8400, which is also a 2.66GHz part.
> 

Perhaps there is some confusion between the i5 and i3? A C2Q will probably
beat an i3 at the same clock speed, but the i5 750 has 8MB of unified cache and
the Turbo Boost feature. IMO a lot of the benchmark difference between an
i5 and a C2Q can be attributed to DDR3 and the on-die memory controller on the
i5 reducing latency (which, if one is to believe Herb Sutter, is the bane
of all modern CPUs).

-- 
===
Peter C. Lai | Bard College at Simon's Rock
Systems Administrator| 84 Alford Rd.
Information Technology Svcs. | Gt. Barrington, MA 01230 USA
peter AT simons-rock.edu | (413) 528-7428
===



Re: hardware for home use large storage

2010-02-16 Thread Peter C. Lai
On 2010-02-15 02:25:57PM -0800, Artem Belevich wrote:
> > How much ram are you running with?
> 
> 8GB on amd64. kmem_size=16G, zfs.arc_max=6G
> 
> > In a latest test with 8.0-R on i386 with 2GB of ram, an install to a ZFS
> > root *will* panic the kernel with kmem_size too small with default
> > settings. Even dropping down to Cy Schubert's uber-small config will panic
> > the kernel (vm.kmem_size_max = 330M, vfs.zfs.arc_size = 40M,
> > vfs.zfs.vdev.cache_size = 5M); the system is currently stable using DIST
> > kernel, vm.kmem_size/max = 512M, arc_size = 40M and vdev.cache_size = 5M.
> 
> On i386 you don't really have much wiggle room. Your address space is
> 32-bit and, to make things more interesting, it's split between
> user-land and kernel. You can keep bumping KVA_PAGES only so far and
> that's what limits your vm.kmem_size_max which is the upper limit for
> vm.kmem_size.
> 
> The bottom line -- if you're planning to use ZFS, do switch to amd64.
> Even with only 2GB of physical RAM available, your box will behave
> better. At the very least it will be possible to avoid the panics
> caused by kmem exhaustion.
> 
> --Artem

Well this ZFS box (which admittedly is mostly a testbed) is only a lowly 
NetBurst Gallatin Xeon, pre-amd64 :(

-- 
===
Peter C. Lai | Bard College at Simon's Rock
Systems Administrator| 84 Alford Rd.
Information Technology Svcs. | Gt. Barrington, MA 01230 USA
peter AT simons-rock.edu | (413) 528-7428
===



Re: hardware for home use large storage

2010-02-16 Thread Dan Langille

On 2/16/2010 6:28 AM, Miroslav Lachman wrote:

Dan Langille wrote:

Daniel O'Connor wrote:


[...]


Why even bother with the LSI card at all?
That board already has 6 SATA slots - depends how many disks you want
to use of course. (5 HDs + 1 DVD drive?)


Plus two SATA drives in a gmirror for the base OS, and one optical. I
want a minimum of 8 slots.


I think that 2 HDDs in gmirror just for the base OS is overkill if you
want this machine as home storage. You will be fine booting the
base OS from a CF card or USB stick (and you can put two USB flash disks
in a gmirror if you want redundancy).
This way you will save some money and some SATA ports/cards, and if you
use a fast, large USB stick, you can use part of it as L2ARC to speed up
ZFS read performance:
http://www.leidinger.net/blog/2010/02/10/making-zfs-faster/

I have my backup storage machine booting from a USB stick (as read-only
UFS) with 4x 1TB HDDs in RAIDZ. It has been running for a year and a half
without problems.


I agree.  However, the machine will be primarily storage, but it will 
also be running PostgreSQL and Bacula.  I already have smaller unused 
SATA drives laying around here.


Thank you

--
Dan Langille - http://langille.org/


Re: hardware for home use large storage

2010-02-16 Thread Daniel O'Connor
On Tue, 16 Feb 2010, Miroslav Lachman wrote:
> I have my backup storage machine booting from a USB stick (as read-only
> UFS) with 4x 1TB HDDs in RAIDZ. It has been running for a year and a half
> without problems.

Yeah, I am booting off a 4GB CF card with an adapter (I didn't trust the 
BIOS enough for USB :)

I wouldn't use it for a remote system without redundancy but for a home 
setup a very lightly used flash device seems like an obvious (and cost 
effective) choice.

-- 
Daniel O'Connor software and network engineer
for Genesis Software - http://www.gsoft.com.au
"The nice thing about standards is that there
are so many of them to choose from."
  -- Andrew Tanenbaum
GPG Fingerprint - 5596 B766 97C0 0E94 4347 295E E593 DC20 7B3F CE8C




Re: hardware for home use large storage

2010-02-16 Thread Miroslav Lachman

Dan Langille wrote:

Daniel O'Connor wrote:


[...]


Why even bother with the LSI card at all?
That board already has 6 SATA slots - depends how many disks you want
to use of course. (5 HDs + 1 DVD drive?)


Plus two SATA drives in a gmirror for the base OS, and one optical. I
want a minimum of 8 slots.


I think that 2 HDDs in gmirror just for the base OS is overkill if you
want this machine as home storage. You will be fine booting the
base OS from a CF card or USB stick (and you can put two USB flash disks
in a gmirror if you want redundancy).
This way you will save some money and some SATA ports/cards, and if you
use a fast, large USB stick, you can use part of it as L2ARC to speed up
ZFS read performance:

http://www.leidinger.net/blog/2010/02/10/making-zfs-faster/

I have my backup storage machine booting from a USB stick (as read-only
UFS) with 4x 1TB HDDs in RAIDZ. It has been running for a year and a half
without problems.


Miroslav Lachman
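The setup Miroslav describes can be sketched roughly as follows. The
device names (da0, da1, da2p2) and the pool name "tank" are hypothetical,
and the gmirror command is destructive to the sticks involved:

```sh
# Mirror two USB sticks for the base OS; installs the gmirror metadata
# on both devices and exposes /dev/mirror/boot.
gmirror label -v boot da0 da1
echo 'geom_mirror_load="YES"' >> /boot/loader.conf
# newfs and install the base OS onto /dev/mirror/boot as usual.

# Separately, dedicate a spare partition on a fast stick as an L2ARC
# cache device for the pool, as the linked blog post suggests:
zpool add tank cache da2p2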


Re: hardware for home use large storage

2010-02-16 Thread Dmitry Morozovsky
On Mon, 15 Feb 2010, jfar...@goldsword.com wrote:

> 
> Just out of curiosity, would not an older server like this:
> http://www.geeks.com/details.asp?InvtId=DL145-5R (~$75 + shipping) or
> http://www.geeks.com/details.asp?invtid=DL360-6R&cat=SYS (~$190 + shipping)
> 
> be a reasonable option? Unless you're looking to squeeze every last bit of
> speed or energy savings out of the machine, I would think bumping up the
> memory on one of these, adding one or more eSATA or SAS interfaces and an
> external drive rack would result in an exceptional "home" server with several
> TB of storage, decent speed, and still cost less than $1K USD.

way too noisy, even for the basement :(

-- 
Sincerely,
D.Marck [DM5020, MCK-RIPE, DM3-RIPN]
[ FreeBSD committer: ma...@freebsd.org ]

*** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- ma...@rinet.ru ***



Re: hardware for home use large storage

2010-02-15 Thread Dan Langille

Daniel O'Connor wrote:

On Tue, 16 Feb 2010, Steve Polyack wrote:

I'm not sure about that particular card, but we've never seen that
great of performance out of the LSI MegaRAID cards that ship with
Dell servers as the PERC.  The newest incarnations are better, but I
would try to get an Areca.  The ones we have tested have displayed
fantastic
performance.  They are fairly expensive in comparison, though.  If
you're using ZFS in place of the RAID on the LSI MegaRAID, I'd
instead recommend other simpler SAS cards which are known to have
good driver support.


Why even bother with the LSI card at all?
That board already has 6 SATA slots - depends how many disks you want to 
use of course. (5 HDs + 1 DVD drive?)


Plus two SATA drives in a gmirror for the base OS, and one optical.  I 
want a minimum of 8 slots.



Re: hardware for home use large storage

2010-02-15 Thread Daniel O'Connor
On Tue, 16 Feb 2010, Steve Polyack wrote:
> I'm not sure about that particular card, but we've never seen that
> great of performance out of the LSI MegaRAID cards that ship with
> Dell servers as the PERC.  The newest incarnations are better, but I
> would try to get an Areca.  The ones we have tested have displayed
> fantastic
> performance.  They are fairly expensive in comparison, though.  If
> you're using ZFS in place of the RAID on the LSI MegaRAID, I'd
> instead recommend other simpler SAS cards which are known to have
> good driver support.

Why even bother with the LSI card at all?
That board already has 6 SATA slots - depends how many disks you want to 
use of course. (5 HDs + 1 DVD drive?)

-- 
Daniel O'Connor software and network engineer
for Genesis Software - http://www.gsoft.com.au
"The nice thing about standards is that there
are so many of them to choose from."
  -- Andrew Tanenbaum
GPG Fingerprint - 5596 B766 97C0 0E94 4347 295E E593 DC20 7B3F CE8C




Re: hardware for home use large storage

2010-02-15 Thread Steve Polyack

On 2/15/2010 6:04 PM, Dan Langille wrote:

Steve Polyack wrote:

On 02/15/10 12:14, Dan Langille wrote:



   7. Supermicro LSI MegaRAID 8 Port SAS RAID Controller $118



Dan,
I'm not sure about that particular card, but we've never seen that 
great of performance out of the LSI MegaRAID cards that ship with 
Dell servers as the PERC.  The newest incarnations are better, but I 
would try to get an Areca.  The ones we have tested have displayed 
fantastic performance.  They are fairly expensive in comparison, 
though.  If you're using ZFS in place of the RAID on the LSI 
MegaRAID, I'd instead recommend other simpler SAS cards which are 
known to have good driver support.


Yes, the card will be used as straight pass-through and not for RAID. 
ZFS will be running raidz for me, possibly raidz2.  Given that, I'm 
not sure if you're suggesting the Areca or something else.


In addition, I'm not sure what makes a SAS card simpler and better
supported. Any recommendations?


Other cards I have considered include:

LSI SAS3041E-R 4 port $120
http://www.google.com/products/catalog?q=lsi+sas+pcie&hl=en&cid=1824913543877548833&sa=title#p 



SYBA SY-PEX40008 PCI Express SATA II 4 port $60
http://www.newegg.com/Product/Product.aspx?Item=N82E16816124027

LSISAS1064 chipset -> SAS3042e
http://www.lsi.com/DistributionSystem/AssetDocument/PCIe_3GSAS_UG.pdf

SUPERMICRO AOC-SAT2-MV8 64-bit PCI-X133MHz SATA Controller Card $99
http://www.newegg.com/Product/Product.aspx?Item=N82E16815121009



All I meant by simpler was that if you're not going to use the RAID 
portion then you do not have to pay for it.   Same goes for SAS - if 
you're not going to use SAS disks, you only need a SATA controller.  The 
SYBA card you have listed works great with the siis driver (NCQ 
support).  If you don't need SAS support, I think there are Areca cards 
around $100 that just do SATA w/o RAID.


As far as supported, I know that the Areca driver is very good, as is 
the recently revamped ahci generic and siis drivers.  Like I said, those 
newer LSI cards may be fine, but I've had bad experiences with the PERC4 
and PERC5 (/LSI/ MegaRAID SAS 8408E), which are both re-branded LSI 
MegaRAID cards.  They were always very reliable and handled failed disks 
quite well, but performance was often ... just bad. YMMV.


The PERC6 (one of the newer generations) isn't half as bad... If you end 
up with one of the LSI cards, let us know how it performs ;)



Re: hardware for home use large storage

2010-02-15 Thread Dan Langille

Steve Polyack wrote:

On 02/15/10 12:14, Dan Langille wrote:



   7. Supermicro LSI MegaRAID 8 Port SAS RAID Controller $118



Dan,
I'm not sure about that particular card, but we've never seen that great 
of performance out of the LSI MegaRAID cards that ship with Dell servers 
as the PERC.  The newest incarnations are better, but I would try to get 
an Areca.  The ones we have tested have displayed fantastic 
performance.  They are fairly expensive in comparison, though.  If 
you're using ZFS in place of the RAID on the LSI MegaRAID, I'd instead 
recommend other simpler SAS cards which are known to have good driver 
support.


Yes, the card will be used as straight pass-through and not for RAID. 
ZFS will be running raidz for me, possibly raidz2.  Given that, I'm not 
sure if you're suggesting the Areca or something else.


In addition, I'm not sure what makes a SAS card simpler and better
supported. Any recommendations?


Other cards I have considered include:

LSI SAS3041E-R 4 port $120
http://www.google.com/products/catalog?q=lsi+sas+pcie&hl=en&cid=1824913543877548833&sa=title#p

SYBA SY-PEX40008 PCI Express SATA II 4 port $60
http://www.newegg.com/Product/Product.aspx?Item=N82E16816124027

LSISAS1064 chipset -> SAS3042e
http://www.lsi.com/DistributionSystem/AssetDocument/PCIe_3GSAS_UG.pdf

SUPERMICRO AOC-SAT2-MV8 64-bit PCI-X133MHz SATA Controller Card $99
http://www.newegg.com/Product/Product.aspx?Item=N82E16815121009
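The layout described earlier in this message (controller as straight
pass-through, with ZFS providing the redundancy via raidz or raidz2)
would look roughly like this; the pool name and the six device names are
hypothetical:

```sh
# raidz2 survives any two simultaneous drive failures; with the
# controller in JBOD/pass-through mode, ZFS sees the raw disks.
zpool create tank raidz2 ada0 ada1 ada2 ada3 ada4 ada5
zpool status tank
```

With six drives, raidz2 trades the capacity of two disks for double
parity, which is usually the sensible choice for large consumer SATA
drives.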


Re: hardware for home use large storage

2010-02-15 Thread jfarmer


Just out of curiosity, would not an older server like this:  
http://www.geeks.com/details.asp?InvtId=DL145-5R (~$75 + shipping) or  
http://www.geeks.com/details.asp?invtid=DL360-6R&cat=SYS (~$190 +  
shipping)


be a reasonable option? Unless you're looking to squeeze every last  
bit of speed or energy savings out of the machine, I would think  
bumping up the memory on one of these, adding one or more eSATA or SAS  
interfaces and an external drive rack would result in an exceptional  
"home" server with several TB of storage, decent speed, and still cost  
less than $1K USD.


John

-
J. T. Farmer 
GoldSword Systems, Knoxville TN  Coach & Instructor
   Consulting, Knoxville Academy of the Blade
Software Development,  Maryville Fencing Club
 Project Management




Re: hardware for home use large storage

2010-02-15 Thread Dan Langille

Dan Naumov wrote:

On Mon, Feb 15, 2010 at 7:14 PM, Dan Langille  wrote:

Dan Naumov wrote:

On Sun, Feb 14, 2010 at 11:38 PM, Dan Langille  wrote:

Dan Naumov wrote:

On Sun, 14 Feb 2010, Dan Langille wrote:

After creating three different system configurations (Athena,
Supermicro, and HP), my configuration of choice is this Supermicro
setup:

  1. Samsung SATA CD/DVD Burner $20 (+ $8 shipping)
  2. SuperMicro 5046A $750 (+$43 shipping)
  3. LSI SAS 3081E-R $235
  4. SATA cables $60
  5. Crucial 3×2G ECC DDR3-1333 $191 (+ $6 shipping)
  6. Xeon W3520 $310

You do realise how much of a massive overkill this is and how much you
are overspending?

I appreciate the comments and feedback.  I'd also appreciate alternative
suggestions in addition to what you have contributed so far.  Spec out
the
box you would build.

==
Case: Fractal Design Define R2 - 89 euro:
http://www.fractal-design.com/?view=product&prod=32

Mobo/CPU: Supermicro X7SPA-H / Atom D510 - 180-220 euro:
http://www.supermicro.com/products/motherboard/ATOM/ICH9/X7SPA.cfm?typ=H

PSU: Corsair 400CX 80+ - 59 euro:
http://www.corsair.com/products/cx/default.aspx

RAM: Corsair 2x2GB, DDR2 800MHz SO-DIMM, CL5 - 85 euro
==
Total: ~435 euro

The motherboard has 6 native AHCI-capable ports on ICH9R controller
and you have a PCI-E slot free if you want to add an additional
controller card. Feel free to blow the money you've saved on crazy
fast SATA disks and if your system workload is going to have a lot of
random reads, then spend 200 euro on a 80gb Intel X25-M for use as a
dedicated L2ARC device for your pool.

Based on the Fractal Design case mentioned above, I was told about Lian Li
cases, which I think are great.  As a result, I've gone with a tower case
without hot-swap.  The parts are listed at the URL below and reproduced here:

 http://dan.langille.org/2010/02/15/a-full-tower-case/

  1. LIAN LI PC-A71F Black Aluminum ATX Full Tower Computer Case $240 (from
mwave)
  2. Antec EarthWatts EA650 650W PSU $80
  3. Samsung SATA CD/DVD Burner $20 (+ $8 shipping)
  4. Intel S3200SHV LGA 775 Intel 3200 m/b $200
  5. Intel Core2 Quad Q9400 CPU $190
  6. SATA cables $22
  7. Supermicro LSI MegaRAID 8 Port SAS RAID Controller $118
  8. Kingston ValueRAM 4GB (2 x 2GB) 240-Pin DDR2 SDRAM ECC $97

Total cost is about $1020 with shipping.  Plus HDD.

No purchases yet, but the above is what appeals to me now.


A C2Q CPU makes little sense right now from a performance POV. For the
price of that C2Q CPU + LGA775 board you can get an i5 750 CPU and a
1156 socket motherboard that will run circles around that C2Q. You
would lose the ECC though, since that requires the more expensive 1366
socket CPUs and boards.


ECC RAM appeals and yes, that comes with a cost.


Re: hardware for home use large storage

2010-02-15 Thread Artem Belevich
> How much ram are you running with?

8GB on amd64. kmem_size=16G, zfs.arc_max=6G

> In a latest test with 8.0-R on i386 with 2GB of ram, an install to a ZFS
> root *will* panic the kernel with kmem_size too small with default
> settings. Even dropping down to Cy Schubert's uber-small config will panic
> the kernel (vm.kmem_size_max = 330M, vfs.zfs.arc_size = 40M,
> vfs.zfs.vdev.cache_size = 5M); the system is currently stable using DIST
> kernel, vm.kmem_size/max = 512M, arc_size = 40M and vdev.cache_size = 5M.

On i386 you don't really have much wiggle room. Your address space is
32-bit and, to make things more interesting, it's split between
user-land and kernel. You can keep bumping KVA_PAGES only so far and
that's what limits your vm.kmem_size_max which is the upper limit for
vm.kmem_size.

The bottom line -- if you're planning to use ZFS, do switch to amd64.
Even with only 2GB of physical RAM available, your box will behave
better. At the very least it will be possible to avoid the panics
caused by kmem exhaustion.

--Artem
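For reference, the tunables being discussed live in /boot/loader.conf.
This fragment matches the amd64 values quoted above; the right numbers
depend on installed RAM, so treat it as a starting point rather than a
recommendation:

```sh
# /boot/loader.conf -- amd64 box with 8GB RAM (values from this thread)
vm.kmem_size="16G"
vfs.zfs.arc_max="6G"

# On i386 the ceiling is far lower: the 32-bit address space is split
# between userland and kernel, KVA_PAGES can only be bumped so far, and
# vm.kmem_size_max values in the 330M-512M range were what this thread
# found workable (with arc and vdev cache shrunk accordingly).
```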


Re: hardware for home use large storage

2010-02-15 Thread Gót András
On Hét, Február 15, 2010 10:15 pm, Dan Naumov wrote:
>>> A C2Q CPU makes little sense right now from a performance POV. For
>>> the price of that C2Q CPU + LGA775 board you can get an i5 750 CPU and
>>> a 1156 socket motherboard that will run circles around that C2Q. You
>>> would lose the ECC though, since that requires the more expensive 1366
>>> socket CPUs and boards.
>>>
>>> - Sincerely,
>>> Dan Naumov
>>>
>>
>> Hi,
>>
>>
>> Do you have tests showing this? I'm not really impressed with the i5 series.
>>
>>
>> Regards,
>> Andras
>>
>
> There: http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=3634&p=10
>
>
> The i5 750, which is a 180 euro CPU, beats Q9650 C2Q, which is a 300 euro
> CPU.
>
>
>
> - Sincerely,
> Dan Naumov
>
>

Oh, I was not up to date on the price/performance ratio. However, I'd compare
the i5 750 to the Q8400, which is also a 2.66GHz part.




Re: hardware for home use large storage

2010-02-15 Thread Dan Naumov
>> A C2Q CPU makes little sense right now from a performance POV. For the
>> price of that C2Q CPU + LGA775 board you can get an i5 750 CPU and a 1156
>> socket motherboard that will run circles around that C2Q. You would lose
>> the ECC though, since that requires the more expensive 1366 socket CPUs
>> and boards.
>>
>> - Sincerely,
>> Dan Naumov
>
> Hi,
>
> Do you have tests showing this? I'm not really impressed with the i5 series.
>
> Regards,
> Andras

There: http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=3634&p=10

The i5 750, which is a 180 euro CPU, beats the Q9650 C2Q, which is a 300 euro CPU.


- Sincerely,
Dan Naumov


Re: hardware for home use large storage

2010-02-15 Thread Steve Polyack

On 02/15/10 12:14, Dan Langille wrote:

Dan Naumov wrote:

On Sun, Feb 14, 2010 at 11:38 PM, Dan Langille  wrote:

Dan Naumov wrote:

On Sun, 14 Feb 2010, Dan Langille wrote:

After creating three different system configurations (Athena,
Supermicro, and HP), my configuration of choice is this Supermicro
setup:

   1. Samsung SATA CD/DVD Burner $20 (+ $8 shipping)
   2. SuperMicro 5046A $750 (+$43 shipping)
   3. LSI SAS 3081E-R $235
   4. SATA cables $60
   5. Crucial 3×2G ECC DDR3-1333 $191 (+ $6 shipping)
   6. Xeon W3520 $310

You do realise how much of a massive overkill this is and how much you
are overspending?


I appreciate the comments and feedback.  I'd also appreciate 
alternative
suggestions in addition to what you have contributed so far.  Spec 
out the

box you would build.


==
Case: Fractal Design Define R2 - 89 euro:
http://www.fractal-design.com/?view=product&prod=32

Mobo/CPU: Supermicro X7SPA-H / Atom D510 - 180-220 euro:
http://www.supermicro.com/products/motherboard/ATOM/ICH9/X7SPA.cfm?typ=H

PSU: Corsair 400CX 80+ - 59 euro:
http://www.corsair.com/products/cx/default.aspx

RAM: Corsair 2x2GB, DDR2 800MHz SO-DIMM, CL5 - 85 euro
==
Total: ~435 euro

The motherboard has 6 native AHCI-capable ports on ICH9R controller
and you have a PCI-E slot free if you want to add an additional
controller card. Feel free to blow the money you've saved on crazy
fast SATA disks and if your system workload is going to have a lot of
random reads, then spend 200 euro on a 80gb Intel X25-M for use as a
dedicated L2ARC device for your pool.


Based on the Fractal Design case mentioned above, I was told about
Lian Li cases, which I think are great.  As a result, I've gone with
a tower case without hot-swap.  The parts are listed at the URL below
and reproduced here:


  http://dan.langille.org/2010/02/15/a-full-tower-case/

   1. LIAN LI PC-A71F Black Aluminum ATX Full Tower Computer Case $240 
(from mwave)

   2. Antec EarthWatts EA650 650W PSU $80
   3. Samsung SATA CD/DVD Burner $20 (+ $8 shipping)
   4. Intel S3200SHV LGA 775 Intel 3200 m/b $200
   5. Intel Core2 Quad Q9400 CPU $190
   6. SATA cables $22
   7. Supermicro LSI MegaRAID 8 Port SAS RAID Controller $118
   8. Kingston ValueRAM 4GB (2 x 2GB) 240-Pin DDR2 SDRAM ECC $97

Total cost is about $1020 with shipping.  Plus HDD.

No purchases yet, but the above is what appeals to me now.

Dan,
I'm not sure about that particular card, but we've never seen that great 
of performance out of the LSI MegaRAID cards that ship with Dell servers 
as the PERC.  The newest incarnations are better, but I would try to get 
an Areca.  The ones we have tested have displayed fantastic 
performance.  They are fairly expensive in comparison, though.  If 
you're using ZFS in place of the RAID on the LSI MegaRAID, I'd instead 
recommend other simpler SAS cards which are known to have good driver 
support.





Re: hardware for home use large storage

2010-02-15 Thread Gót András
On Hét, Február 15, 2010 9:39 pm, Dan Naumov wrote:
> On Mon, Feb 15, 2010 at 7:14 PM, Dan Langille  wrote:
>
>> Dan Naumov wrote:
>>
>>>
>>> On Sun, Feb 14, 2010 at 11:38 PM, Dan Langille 
>>> wrote:
>>>

 Dan Naumov wrote:

>>
>> On Sun, 14 Feb 2010, Dan Langille wrote:
>>
>>>
>>> After creating three different system configurations (Athena,
>>>  Supermicro, and HP), my configuration of choice is this
>>> Supermicro
>>> setup:
>>>
>>>
>>>   1. Samsung SATA CD/DVD Burner $20 (+ $8 shipping)
>>>   2. SuperMicro 5046A $750 (+$43 shipping)
>>>   3. LSI SAS 3081E-R $235
>>>   4. SATA cables $60
>>>   5. Crucial 3×2G ECC DDR3-1333 $191 (+ $6 shipping)
>>>   6. Xeon W3520 $310
>>>
>
> You do realise how much of a massive overkill this is and how
> much you are overspending?

 I appreciate the comments and feedback.  I'd also appreciate
 alternative suggestions in addition to what you have contributed so
 far.  Spec out the box you would build.
>>>
>>> ==
>>> Case: Fractal Design Define R2 - 89 euro:
>>> http://www.fractal-design.com/?view=product&prod=32
>>>
>>>
>>> Mobo/CPU: Supermicro X7SPA-H / Atom D510 - 180-220 euro:
>>> http://www.supermicro.com/products/motherboard/ATOM/ICH9/X7SPA.cfm?typ
>>> =H
>>>
>>>
>>> PSU: Corsair 400CX 80+ - 59 euro:
>>> http://www.corsair.com/products/cx/default.aspx
>>>
>>>
>>> RAM: Corsair 2x2GB, DDR2 800MHz SO-DIMM, CL5 - 85 euro
>>> ==
>>> Total: ~435 euro
>>>
>>>
>>> The motherboard has 6 native AHCI-capable ports on ICH9R controller
>>> and you have a PCI-E slot free if you want to add an additional
>>> controller card. Feel free to blow the money you've saved on crazy
>>> fast SATA disks and if your system workload is going to have a lot of
>>>  random reads, then spend 200 euro on a 80gb Intel X25-M for use as a
>>>  dedicated L2ARC device for your pool.
>>
>> Based on the Fractal Design case mentioned above, I was told about Lian
>> Li cases, which I think are great.  As a result, I've gone with a tower
>> case without hot-swap.  The parts are listed at the URL below and
>> reproduced here:
>>
>>  http://dan.langille.org/2010/02/15/a-full-tower-case/
>>
>>
>>   1. LIAN LI PC-A71F Black Aluminum ATX Full Tower Computer Case $240
>> (from mwave)
>>   2. Antec EarthWatts EA650 650W PSU $80
>>   3. Samsung SATA CD/DVD Burner $20 (+ $8 shipping)
>>   4. Intel S3200SHV LGA 775 Intel 3200 m/b $200
>>   5. Intel Core2 Quad Q9400 CPU $190
>>   6. SATA cables $22
>>   7. Supermicro LSI MegaRAID 8 Port SAS RAID Controller $118
>>   8. Kingston ValueRAM 4GB (2 x 2GB) 240-Pin DDR2 SDRAM ECC $97
>>
>>
>> Total cost is about $1020 with shipping.  Plus HDD.
>>
>>
>> No purchases yet, but the above is what appeals to me now.
>>
>
> A C2Q CPU makes little sense right now from a performance POV. For the
> price of that C2Q CPU + LGA775 board you can get an i5 750 CPU and a 1156
> socket motherboard that will run circles around that C2Q. You would lose
> the ECC though, since that requires the more expensive 1366 socket CPUs
> and boards.
>
> - Sincerely,
> Dan Naumov

Hi,

Do you have any tests or benchmarks for this? I'm not really impressed with the i5 series.

Regards,
Andras




Re: hardware for home use large storage

2010-02-15 Thread Dan Naumov
On Mon, Feb 15, 2010 at 7:14 PM, Dan Langille  wrote:
> Dan Naumov wrote:
>>
>> On Sun, Feb 14, 2010 at 11:38 PM, Dan Langille  wrote:
>>>
>>> Dan Naumov wrote:
>
> On Sun, 14 Feb 2010, Dan Langille wrote:
>>
>> After creating three different system configurations (Athena,
>> Supermicro, and HP), my configuration of choice is this Supermicro
>> setup:
>>
>>   1. Samsung SATA CD/DVD Burner $20 (+ $8 shipping)
>>   2. SuperMicro 5046A $750 (+$43 shipping)
>>   3. LSI SAS 3081E-R $235
>>   4. SATA cables $60
>>   5. Crucial 3×2G ECC DDR3-1333 $191 (+ $6 shipping)
>>   6. Xeon W3520 $310

 You do realise how much of a massive overkill this is and how much you
 are overspending?
>>>
>>> I appreciate the comments and feedback.  I'd also appreciate alternative
>>> suggestions in addition to what you have contributed so far.  Spec out
>>> the
>>> box you would build.
>>
>> ==
>> Case: Fractal Design Define R2 - 89 euro:
>> http://www.fractal-design.com/?view=product&prod=32
>>
>> Mobo/CPU: Supermicro X7SPA-H / Atom D510 - 180-220 euro:
>> http://www.supermicro.com/products/motherboard/ATOM/ICH9/X7SPA.cfm?typ=H
>>
>> PSU: Corsair 400CX 80+ - 59 euro:
>> http://www.corsair.com/products/cx/default.aspx
>>
>> RAM: Corsair 2x2GB, DDR2 800MHz SO-DIMM, CL5 - 85 euro
>> ==
>> Total: ~435 euro
>>
>> The motherboard has 6 native AHCI-capable ports on ICH9R controller
>> and you have a PCI-E slot free if you want to add an additional
>> controller card. Feel free to blow the money you've saved on crazy
>> fast SATA disks and if your system workload is going to have a lot of
>> random reads, then spend 200 euro on a 80gb Intel X25-M for use as a
>> dedicated L2ARC device for your pool.
>
> Based on the Fractal Design case mentioned above, I was told about Lian Li
> cases, which I think are great.  As a result, I've gone with a tower case
> without hot-swap.  The parts are listed at the URL below and reproduced here:
>
>  http://dan.langille.org/2010/02/15/a-full-tower-case/
>
>   1. LIAN LI PC-A71F Black Aluminum ATX Full Tower Computer Case $240 (from
> mwave)
>   2. Antec EarthWatts EA650 650W PSU $80
>   3. Samsung SATA CD/DVD Burner $20 (+ $8 shipping)
>   4. Intel S3200SHV LGA 775 Intel 3200 m/b $200
>   5. Intel Core2 Quad Q9400 CPU $190
>   6. SATA cables $22
>   7. Supermicro LSI MegaRAID 8 Port SAS RAID Controller $118
>   8. Kingston ValueRAM 4GB (2 x 2GB) 240-Pin DDR2 SDRAM ECC $97
>
> Total cost is about $1020 with shipping.  Plus HDD.
>
> No purchases yet, but the above is what appeals to me now.

A C2Q CPU makes little sense right now from a performance POV. For the
price of that C2Q CPU + LGA775 board you can get an i5 750 CPU and a
1156 socket motherboard that will run circles around that C2Q. You
would lose the ECC though, since that requires the more expensive 1366
socket CPUs and boards.

- Sincerely,
Dan Naumov


Re: hardware for home use large storage

2010-02-15 Thread Dmitry Morozovsky
On Mon, 15 Feb 2010, Artem Belevich wrote:

AB> It used to be that vm.kmem_size_max needed to be bumped to allow for
AB> larger vm.kmem_size. It's no longer needed on amd64. Not sure about
AB> i386.
AB> 
AB> vm.kmem_size still needs tuning, though. While vm.kmem_size_max is no
AB> longer a limit, there are other checks in place that result in default
AB> vm.kmem_size being a bit on the conservative side for ZFS.

It seems so, at least: on a machine with 8G RAM, kmem_size is set to 2G, of 
which 1.5G is allocated for arc_max.

I'll try to increase them to 4G / 3G and test whether the machine is stable...
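These are boot-time loader tunables, so an experiment like the one above would typically go into /boot/loader.conf and take effect on the next boot. A sketch mirroring the 4G/3G values mentioned (values taken from this thread, not a general recommendation):

```
# /boot/loader.conf -- sketch of the 4G kmem / 3G ARC experiment above
vm.kmem_size="4G"
vfs.zfs.arc_max="3G"
```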

-- 
Sincerely,
D.Marck [DM5020, MCK-RIPE, DM3-RIPN]
[ FreeBSD committer: ma...@freebsd.org ]

*** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- ma...@rinet.ru ***



Re: hardware for home use large storage

2010-02-15 Thread Peter C. Lai

>>> * vm.kmem_size
>>> * vm.kmem_size_max
>>
>> I tried kmem_size_max on -current (this year), and I got a panic during
>> use,
>> I changed kmem_size to the same value I have for _max and it didn't
>> panic
>> anymore. It looks (from mails on the lists) that _max is supposed to
>> give a
>> max value for auto-enhancement, but at least it was not working with ZFS
>> last month (and I doubt it works now).
>
> It used to be that vm.kmem_size_max needed to be bumped to allow for
> larger vm.kmem_size. It's no longer needed on amd64. Not sure about
> i386.
>
> vm.kmem_size still needs tuning, though. While vm.kmem_size_max is no
> longer a limit, there are other checks in place that result in default
> vm.kmem_size being a bit on the conservative side for ZFS.
>
>>> Then, when it comes to debugging problems as a result of tuning
>>> improperly (or entire lack of), the following counters (not tunables)
>>> are thrown into the mix as "things people should look at":
>>>
>>>  kstat.zfs.misc.arcstats.c
>>>  kstat.zfs.misc.arcstats.c_min
>>>  kstat.zfs.misc.arcstats.c_max
>>
>> c_max is vfs.zfs.arc_max, c_min is vfs.zfs.arc_min.
>>
>>>  kstat.zfs.misc.arcstats.evict_skip
>>>  kstat.zfs.misc.arcstats.memory_throttle_count
>>>  kstat.zfs.misc.arcstats.size
>>
>> I'm not very sure about size and c... both represent some kind of
>> current
>> size, but they are not the same.
>
> arcstats.c -- adaptive ARC target size. I.e. that's what ZFS thinks it
> can grow ARC to. It's dynamically adjusted based on when/how ZFS is
> back-pressured for memory.
> arcstats.size -- current ARC size
> arcstats.p -- portion of arcstats.c that's used by "Most Recently
> Used" items. What's left of arcstats.c is used by "Most Frequently
> Used" items.
>
> --Artem

How much RAM are you running with?

In a latest test with 8.0-R on i386 with 2GB of ram, an install to a ZFS
root *will* panic the kernel with kmem_size too small with default
settings. Even dropping down to Cy Schubert's uber-small config will panic
the kernel (vm.kmem_size_max = 330M, vfs.zfs.arc_size = 40M,
vfs.zfs.vdev.cache_size = 5M); the system is currently stable using DIST
kernel, vm.kmem_size/max = 512M, arc_size = 40M and vdev.cache_size = 5M.
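For reference, here is a hedged loader.conf sketch of the configuration reported stable above. I'm assuming the "arc_size" and "cache_size" names in the mail correspond to the usual vfs.zfs.arc_max and vfs.zfs.vdev.cache.size tunables:

```
# /boot/loader.conf -- i386, 2GB RAM, ZFS root (values from the report above)
vm.kmem_size="512M"
vm.kmem_size_max="512M"
vfs.zfs.arc_max="40M"           # the mail says "arc_size = 40M"
vfs.zfs.vdev.cache.size="5M"
```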

-- 
Peter C. Lai
ITS Systems Administrator
Bard College at Simon's Rock
84 Alford Rd.
Great Barrington, MA 01230
(413) 528-7428
peter at simons-rock.edu


Re: hardware for home use large storage

2010-02-15 Thread Dan Langille

Dan Naumov wrote:

On Sun, Feb 14, 2010 at 11:38 PM, Dan Langille  wrote:

Dan Naumov wrote:

On Sun, 14 Feb 2010, Dan Langille wrote:

After creating three different system configurations (Athena,
Supermicro, and HP), my configuration of choice is this Supermicro
setup:

   1. Samsung SATA CD/DVD Burner $20 (+ $8 shipping)
   2. SuperMicro 5046A $750 (+$43 shipping)
   3. LSI SAS 3081E-R $235
   4. SATA cables $60
   5. Crucial 3×2G ECC DDR3-1333 $191 (+ $6 shipping)
   6. Xeon W3520 $310

You do realise how much of a massive overkill this is and how much you
are overspending?


I appreciate the comments and feedback.  I'd also appreciate alternative
suggestions in addition to what you have contributed so far.  Spec out the
box you would build.


==
Case: Fractal Design Define R2 - 89 euro:
http://www.fractal-design.com/?view=product&prod=32

Mobo/CPU: Supermicro X7SPA-H / Atom D510 - 180-220 euro:
http://www.supermicro.com/products/motherboard/ATOM/ICH9/X7SPA.cfm?typ=H

PSU: Corsair 400CX 80+ - 59 euro:
http://www.corsair.com/products/cx/default.aspx

RAM: Corsair 2x2GB, DDR2 800MHz SO-DIMM, CL5 - 85 euro
==
Total: ~435 euro

The motherboard has 6 native AHCI-capable ports on ICH9R controller
and you have a PCI-E slot free if you want to add an additional
controller card. Feel free to blow the money you've saved on crazy
fast SATA disks and if your system workload is going to have a lot of
random reads, then spend 200 euro on a 80gb Intel X25-M for use as a
dedicated L2ARC device for your pool.


Based on the Fractal Design case mentioned above, I was told about Lian 
Li cases, which I think are great.  As a result, I've gone with a tower 
case without hot-swap.  The parts are listed at the URL below and reproduced here:


  http://dan.langille.org/2010/02/15/a-full-tower-case/

   1. LIAN LI PC-A71F Black Aluminum ATX Full Tower Computer Case $240 (from mwave)
   2. Antec EarthWatts EA650 650W PSU $80
   3. Samsung SATA CD/DVD Burner $20 (+ $8 shipping)
   4. Intel S3200SHV LGA 775 Intel 3200 m/b $200
   5. Intel Core2 Quad Q9400 CPU $190
   6. SATA cables $22
   7. Supermicro LSI MegaRAID 8 Port SAS RAID Controller $118
   8. Kingston ValueRAM 4GB (2 x 2GB) 240-Pin DDR2 SDRAM ECC $97

Total cost is about $1020 with shipping.  Plus HDD.

No purchases yet, but the above is what appeals to me now.

Thank you.


Re: hardware for home use large storage

2010-02-15 Thread Artem Belevich
>> * vm.kmem_size
>> * vm.kmem_size_max
>
> I tried kmem_size_max on -current (this year), and I got a panic during use,
> I changed kmem_size to the same value I have for _max and it didn't panic
> anymore. It looks (from mails on the lists) that _max is supposed to give a
> max value for auto-enhancement, but at least it was not working with ZFS
> last month (and I doubt it works now).

It used to be that vm.kmem_size_max needed to be bumped to allow for
larger vm.kmem_size. It's no longer needed on amd64. Not sure about
i386.

vm.kmem_size still needs tuning, though. While vm.kmem_size_max is no
longer a limit, there are other checks in place that result in default
vm.kmem_size being a bit on the conservative side for ZFS.

>> Then, when it comes to debugging problems as a result of tuning
>> improperly (or entire lack of), the following counters (not tunables)
>> are thrown into the mix as "things people should look at":
>>
>>  kstat.zfs.misc.arcstats.c
>>  kstat.zfs.misc.arcstats.c_min
>>  kstat.zfs.misc.arcstats.c_max
>
> c_max is vfs.zfs.arc_max, c_min is vfs.zfs.arc_min.
>
>>  kstat.zfs.misc.arcstats.evict_skip
>>  kstat.zfs.misc.arcstats.memory_throttle_count
>>  kstat.zfs.misc.arcstats.size
>
> I'm not very sure about size and c... both represent some kind of current
> size, but they are not the same.

arcstats.c -- adaptive ARC target size. I.e. that's what ZFS thinks it
can grow ARC to. It's dynamically adjusted based on when/how ZFS is
back-pressured for memory.
arcstats.size -- current ARC size
arcstats.p -- portion of arcstats.c that's used by "Most Recently
Used" items. What's left of arcstats.c is used by "Most Frequently
Used" items.

--Artem


Re: hardware for home use large storage

2010-02-15 Thread Alexander Leidinger
Quoting Jeremy Chadwick  (from Mon, 15 Feb  
2010 04:27:44 -0800):



On Mon, Feb 15, 2010 at 10:50:00AM +0100, Alexander Leidinger wrote:

Quoting Jeremy Chadwick  (from Mon, 15 Feb
2010 01:07:56 -0800):

>On Mon, Feb 15, 2010 at 10:49:47AM +0200, Dan Naumov wrote:
>>> I had a feeling someone would bring up L2ARC/cache devices.  This gives
>>> me the opportunity to ask something that's been on my mind for quite
>>> some time now:
>>>
>>> Aside from the capacity difference (e.g. 40GB vs. 1GB), is there a
>>> benefit to dedicating a RAM disk (e.g. md(4)) to a pool for
>>> L2ARC/cache?  The ZFS documentation explicitly states that cache
>>> device content is considered volatile.
>>
>>Using a ramdisk as an L2ARC vdev doesn't make any sense at all. If you
>>have RAM to spare, it should be used by regular ARC.
>
>...except that it's already been proven on FreeBSD that the ARC getting
>out of control can cause kernel panics[1], horrible performance until


First and foremost, sorry for the long post.  I tried to keep it short,
but sometimes there's just a lot to be said.


And sometimes a shorter answer takes longer...


There are other ways (not related to ZFS) to shoot yourself in the foot
too; I'm tempted to say that this is
 a) a documentation bug
and
 b) a lack of sanity checking of the values... anyone out there with
a good algorithm for something like this?

Normally you do some testing with the values you use, so once you
resolved the issues, the system should be stable.


What documentation?  :-)  The Wiki?  If so, that's been outdated for


Hehe... :)


some time; I know Ivan Voras was doing his best to put good information
there, but it's hard given the below chaos.


Do you want write access to it (in case you don't have it already; I didn't check)?


The following tunables are recurrently mentioned as focal points, but no
one's explained in full how to tune these "properly", or which does what
(perfect example: vm.kmem_size_max vs. vm.kmem_size.  _max used to be
what you'd adjust to solve kmem exhaustion issues, but now people are
saying otherwise?).  I realise it may differ per system (given how much
RAM the system has), so different system configurations/examples would
need to be provided.  I realise that the behaviour of some has changed
too (e.g. -RELEASE differs from -STABLE, and 7.x differs from 8.x).
I've marked commonly-referred-to tunables with an asterisk:


It can also be that some people repeat things without really  
knowing what they are saying (based upon some kind of observed  
evidence, not out of bad faith).



  kern.maxvnodes


Needs to be tuned if you run out of vnodes... ok, this is obvious. I  
do not know how it will show up (panic or graceful error handling,  
e.g. ENOMEM).



* vm.kmem_size
* vm.kmem_size_max


I tried kmem_size_max on -current (this year), and I got a panic  
during use, I changed kmem_size to the same value I have for _max and  
it didn't panic anymore. It looks (from mails on the lists) that _max  
is supposed to give a max value for auto-enhancement, but at least it  
was not working with ZFS last month (and I doubt it works now).



* vfs.zfs.arc_min
* vfs.zfs.arc_max


_min = minimum even when the system is running out of memory (the ARC  
gives back memory if other parts of the kernel need it).
_max = maximum (with a recent ZFS on 7/8/9 (7.3 will have it, 8.1 will  
have it too) I've never seen the size exceed the _max anymore)



  vfs.zfs.prefetch_disable  (auto-tuned based on available RAM on 8-STABLE)
  vfs.zfs.txg.timeout


It looks like the txg tuning is just a workaround. I've read a little bit in  
Brendan's blog and it seems they noticed the periodic writes too (with  
the nice graphical performance monitoring of OpenStorage) and they are  
investigating the issue. It looks like we are more affected by this  
(for whatever reason). What it does (attention, this is an  
observation, not a technical description of code I've read!) seems to  
be to write out data to the disks earlier (and thus there is less  
data to write -> less blocking to notice).



  vfs.zfs.vdev.cache.size
  vfs.zfs.vdev.cache.bshift
  vfs.zfs.vdev.max_pending


Uhm... this smells like you got it out of one of my posts where I told  
that I experiment with this on a system. I can tell you that I have no  
system with this tuned anymore, tuning kmem_size (and KVA_PAGES during  
kernel compile) has a bigger impact.



  vfs.zfs.zil_disable


What it does should be obvious. IMHO this should not help much  
regarding stability (changing kmem_size should have a bigger impact).  
As I don't know what was tested on systems where this is disabled, I  
want to highlight the "IMHO" in the preceding sentence...



Then, when it comes to debugging problems as a result of tuning
improperly (or entire lack of), the following counters (not tunables)
are thrown into the mix as "things people should look at":

  kstat.zfs.misc.arcstats.c
  kstat.zfs.misc.arcstats.c_min
  kstat.zfs.misc.arcstats.c_max

Re: hardware for home use large storage

2010-02-15 Thread Dan Langille

Ulf Zimmermann wrote:

On Sun, Feb 14, 2010 at 07:33:07PM -0500, Dan Langille wrote:

Get a dock for holding 2 x 2,5" disks in a single 5,25" slot and put
it at the top, in the only 5,25" bay of the case.
That sounds very interesting.  I went looking around for such a thing, 
and could not find it.  Is there a more specific name? URL?


I had an Addonics 5.25" frame for 4x 2.5" SAS/SATA, but the small fans in it
are unfortunately of the cheap kind. I ended up using the 2x2.5" to 3.5"
frame from Silverstone (for the small Silverstone case I got).


Ahh, something like this:

http://silverstonetek.com/products/p_contents.php?pno=SDP08&area=usa

I understand now.  Thank you.


Re: hardware for home use large storage

2010-02-15 Thread Jeremy Chadwick
On Mon, Feb 15, 2010 at 10:50:00AM +0100, Alexander Leidinger wrote:
> Quoting Jeremy Chadwick  (from Mon, 15 Feb
> 2010 01:07:56 -0800):
> 
> >On Mon, Feb 15, 2010 at 10:49:47AM +0200, Dan Naumov wrote:
> >>> I had a feeling someone would bring up L2ARC/cache devices.  This gives
> >>> me the opportunity to ask something that's been on my mind for quite
> >>> some time now:
> >>>
> >>> Aside from the capacity difference (e.g. 40GB vs. 1GB), is there a
> >>> benefit to dedicating a RAM disk (e.g. md(4)) to a pool for
> >>> L2ARC/cache?  The ZFS documentation explicitly states that cache
> >>> device content is considered volatile.
> >>
> >>Using a ramdisk as an L2ARC vdev doesn't make any sense at all. If you
> >>have RAM to spare, it should be used by regular ARC.
> >
> >...except that it's already been proven on FreeBSD that the ARC getting
> >out of control can cause kernel panics[1], horrible performance until

First and foremost, sorry for the long post.  I tried to keep it short,
but sometimes there's just a lot to be said.

> There are other ways (not related to ZFS) to shoot yourself in the foot
> too; I'm tempted to say that this is
>  a) a documentation bug
> and
>  b) a lack of sanity checking of the values... anyone out there with
> a good algorithm for something like this?
> 
> Normally you do some testing with the values you use, so once you
> resolved the issues, the system should be stable.

What documentation?  :-)  The Wiki?  If so, that's been outdated for
some time; I know Ivan Voras was doing his best to put good information
there, but it's hard given the below chaos.

The following tunables are recurrently mentioned as focal points, but no
one's explained in full how to tune these "properly", or which does what
(perfect example: vm.kmem_size_max vs. vm.kmem_size.  _max used to be
what you'd adjust to solve kmem exhaustion issues, but now people are
saying otherwise?).  I realise it may differ per system (given how much
RAM the system has), so different system configurations/examples would
need to be provided.  I realise that the behaviour of some has changed
too (e.g. -RELEASE differs from -STABLE, and 7.x differs from 8.x).
I've marked commonly-referred-to tunables with an asterisk:

  kern.maxvnodes
* vm.kmem_size
* vm.kmem_size_max
* vfs.zfs.arc_min
* vfs.zfs.arc_max
  vfs.zfs.prefetch_disable  (auto-tuned based on available RAM on 8-STABLE)
  vfs.zfs.txg.timeout
  vfs.zfs.vdev.cache.size
  vfs.zfs.vdev.cache.bshift
  vfs.zfs.vdev.max_pending
  vfs.zfs.zil_disable

Then, when it comes to debugging problems as a result of tuning
improperly (or entire lack of), the following counters (not tunables)
are thrown into the mix as "things people should look at":

  kstat.zfs.misc.arcstats.c
  kstat.zfs.misc.arcstats.c_min
  kstat.zfs.misc.arcstats.c_max
  kstat.zfs.misc.arcstats.evict_skip
  kstat.zfs.misc.arcstats.memory_throttle_count
  kstat.zfs.misc.arcstats.size

None of these have sysctl descriptions (sysctl -d) either.  I can
provide posts to freebsd-stable, freebsd-current, freebsd-fs, or
freebsd-questions, or freebsd-users referencing these variables or
counters if you need context.
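Lacking `sysctl -d` descriptions, the whole counter subtree can at least be dumped in one go on a FreeBSD system running ZFS, e.g.:

```
sysctl kstat.zfs.misc.arcstats
```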

All that said:

I would be more than happy to write some coherent documentation that
folks could refer to "officially", but rather than spend my entire
lifetime reverse-engineering the ZFS code I think it'd make more sense
to get some official parties involved to explain things.

I'd like to add some kind of monitoring section as well -- how
administrators could keep an eye on things and detect, semi-early, if
additional tuning is required or something along those lines.

> >ZFS has had its active/inactive lists flushed[2], and brings into
> 
> Someone needs to sit down and play a little bit with ways to tell
> the ARC that there is free memory. The mail you reference already
> tells that the inactive/cached lists should maybe be taken into account
> too (I haven't had a look at this part of the ZFS code).
> 
> >question how proper tuning is to be established and what the effects are
> >on the rest of the system[3].  There are still reports of people
> 
> That's what I talk about regarding b) above. If you specify an
> arc_max which is too big (arc_max > kmem_size - SOME_SAFE_VALUE),
> there should be a message from the kernel and the value should be
> adjusted to a safe amount.
> 
> Until the problems are fixed, an MD for L2ARC may be a viable
> alternative (if you have enough memory to spare for this). Feel free to
> provide benchmark numbers, but in general I see this just as a
> workaround for the current issues.

I've played with this a bit (2-disk mirror + one 256MB md), but I'm not
completely sure how to read the bonnie++ results, nor am I sure I'm
using the right arguments (bonnie++ -s8192 -n64 -d/pool on a machine
that has 4GB).

L2ARC ("cache" vdev) is supposed to improve random reads, while a "log"
vdev (presumably something that links in with the ZIL) improves random
writes.  I'm not sure where bonn

Re: hardware for home use large storage

2010-02-15 Thread Alexander Leidinger


Quoting Jeremy Chadwick  (from Mon, 15 Feb  
2010 01:07:56 -0800):



On Mon, Feb 15, 2010 at 10:49:47AM +0200, Dan Naumov wrote:

> I had a feeling someone would bring up L2ARC/cache devices.  This gives
> me the opportunity to ask something that's been on my mind for quite
> some time now:
>
> Aside from the capacity difference (e.g. 40GB vs. 1GB), is there a
> benefit to dedicating a RAM disk (e.g. md(4)) to a pool for
> L2ARC/cache?  The ZFS documentation explicitly states that cache
> device content is considered volatile.

Using a ramdisk as an L2ARC vdev doesn't make any sense at all. If you
have RAM to spare, it should be used by regular ARC.


...except that it's already been proven on FreeBSD that the ARC getting
out of control can cause kernel panics[1], horrible performance until


There are other ways (not related to ZFS) to shoot yourself in the foot too;  
I'm tempted to say that this is

 a) a documentation bug
and
 b) a lack of sanity checking of the values... anyone out there with  
a good algorithm for something like this?


Normally you do some testing with the values you use, so once you  
resolved the issues, the system should be stable.



ZFS has had its active/inactive lists flushed[2], and brings into


Someone needs to sit down and play a little bit with ways to tell the  
ARC that there is free memory. The mail you reference already tells  
that the inactive/cached lists should maybe be taken into account too (I  
haven't had a look at this part of the ZFS code).



question how proper tuning is to be established and what the effects are
on the rest of the system[3].  There are still reports of people


That's what I talk about regarding b) above. If you specify an arc_max  
which is too big (arc_max > kmem_size - SOME_SAFE_VALUE), there should  
be a message from the kernel and the value should be adjusted to a  
safe amount.


Until the problems are fixed, an MD for L2ARC may be a viable  
alternative (if you have enough memory to spare for this). Feel free to  
provide benchmark numbers, but in general I see this just as a  
workaround for the current issues.



disabling ZIL "for stability reasons" as well.


For the ZIL you definitely do not want to have an MD. If you do not  
specify a log vdev for the pool, the ZIL will be written somewhere on  
the disks of the pool. When the data hits the ZIL, it has to be really  
on non-volatile storage. If you lose the ZIL, you lose data.



The "Internals" section of Brendan Gregg's blog[4] outlines where the
L2ARC sits in the scheme of things, or if the ARC could essentially
be disabled by setting the minimum size to something very small (a few
megabytes) and instead using L2ARC which is manageable.


At least in 7-stable, 8-stable and 9-current, arc_max now really  
corresponds to a max value, so it is more a matter of providing a safe  
arc_max than a minimal arc_max. No matter how you construct the L2ARC,  
ARC access will be faster than L2ARC access.


[1]:  
http://lists.freebsd.org/pipermail/freebsd-questions/2010-January/211009.html
[2]:  
http://lists.freebsd.org/pipermail/freebsd-stable/2010-January/053949.html
[3]:  
http://lists.freebsd.org/pipermail/freebsd-stable/2010-February/055073.html

[4]: http://blogs.sun.com/brendan/entry/test


Bye,
Alexander.

--
BOFH excuse #439:

Hot Java has gone cold

http://www.Leidinger.netAlexander @ Leidinger.net: PGP ID = B0063FE7
http://www.FreeBSD.org   netchild @ FreeBSD.org  : PGP ID = 72077137


Re: hardware for home use large storage

2010-02-15 Thread Ulf Zimmermann
On Sun, Feb 14, 2010 at 07:33:07PM -0500, Dan Langille wrote:
> >Get a dock for holding 2 x 2,5" disks in a single 5,25" slot and put
> >it at the top, in the only 5,25" bay of the case.
> 
> That sounds very interesting.  I went looking around for such a thing, 
> and could not find it.  Is there a more specific name? URL?

I had an Addonics 5.25" frame for 4x 2.5" SAS/SATA, but the small fans in it
are unfortunately of the cheap kind. I ended up using the 2x2.5" to 3.5"
frame from Silverstone (for the small Silverstone case I got).

-- 
Regards, Ulf.

-
Ulf Zimmermann, 1525 Pacific Ave., Alameda, CA-94501, #: 510-865-0204
You can find my resume at: http://www.Alameda.net/~ulf/resume.html


Re: hardware for home use large storage

2010-02-15 Thread Jeremy Chadwick
On Mon, Feb 15, 2010 at 10:49:47AM +0200, Dan Naumov wrote:
> > I had a feeling someone would bring up L2ARC/cache devices.  This gives
> > me the opportunity to ask something that's been on my mind for quite
> > some time now:
> >
> > Aside from the capacity difference (e.g. 40GB vs. 1GB), is there a
> > benefit to dedicating a RAM disk (e.g. md(4)) to a pool for
> > L2ARC/cache?  The ZFS documentation explicitly states that cache
> > device content is considered volatile.
> 
> Using a ramdisk as an L2ARC vdev doesn't make any sense at all. If you
> have RAM to spare, it should be used by regular ARC.

...except that it's already been proven on FreeBSD that the ARC getting
out of control can cause kernel panics[1], horrible performance until
ZFS has had its active/inactive lists flushed[2], and brings into
question how proper tuning is to be established and what the effects are
on the rest of the system[3].  There are still reports of people
disabling ZIL "for stability reasons" as well.

My thought process basically involves "getting rid" of the ARC and using
L2ARC entirely, given that it provides more control/containment which
cannot be achieved on FreeBSD (see above).  In English: I'd trust a
whole series of md(4) disks (with sizes that I choose) over something
"variable/dynamic" which cannot be controlled or managed effectively.

The "Internals" section of Brendan Gregg's blog[4] outlines where the
L2ARC sits in the scheme of things, or if the ARC could essentially
be disabled by setting the minimum size to something very small (a few
megabytes) and instead using L2ARC which is manageable.

[1]: 
http://lists.freebsd.org/pipermail/freebsd-questions/2010-January/211009.html
[2]: http://lists.freebsd.org/pipermail/freebsd-stable/2010-January/053949.html
[3]: http://lists.freebsd.org/pipermail/freebsd-stable/2010-February/055073.html
[4]: http://blogs.sun.com/brendan/entry/test
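[Editor's note: capping the ARC on FreeBSD of that era was done through
loader.conf tunables; a minimal sketch, with the standard vfs.zfs.arc_min
and vfs.zfs.arc_max tunable names and purely illustrative values:]

```sh
# /boot/loader.conf -- values are illustrative, not recommendations
vfs.zfs.arc_min="16M"    # shrink the ARC floor to a few megabytes
vfs.zfs.arc_max="64M"    # hard cap on ARC growth
```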

-- 
| Jeremy Chadwick   j...@parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.  PGP: 4BD6C0CB |

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


RE: hardware for home use large storage

2010-02-15 Thread Dan Naumov
> I had a feeling someone would bring up L2ARC/cache devices.  This gives
> me the opportunity to ask something that's been on my mind for quite
> some time now:
>
> Aside from the capacity difference (e.g. 40GB vs. 1GB), is there a
> benefit to dedicating a RAM disk (e.g. md(4)) to a pool for
> L2ARC/cache?  The ZFS documentation explicitly states that cache
> device content is considered volatile.

Using a ramdisk as an L2ARC vdev doesn't make any sense at all. If you
have RAM to spare, it should be used by regular ARC.


- Sincerely,
Dan Naumov


Re: hardware for home use large storage

2010-02-15 Thread Jeremy Chadwick
On Mon, Feb 15, 2010 at 08:57:10AM +0100, Alexander Leidinger wrote:
> Quoting Dan Naumov  (from Mon, 15 Feb 2010
> 01:10:49 +0200):
> 
> >Get a dock for holding 2 x 2,5" disks in a single 5,25" slot and put
> >it at the top, in the only 5,25" bay of the case. Now add an
> >additional PCI-E SATA controller card, like the often mentioned PCIE
> >SIL3124. Now you have 2 x 2,5" disk slots and 8 x 3,5" disk slots,
> >with 6 native SATA ports on the motherboard and more ports on the
> >controller card. Now get 2 x 80gb Intel SSDs and put them into the
> >dock. Now partition each of them in the following fashion:
> >
> >1: swap: 4-5gb
> >2: freebsd-zfs: ~10-15gb for root filesystem
> >3: freebsd-zfs: rest of the disk: dedicated L2ARC vdev
> 
> If you already have 2 SSDs, I suggest making 4 partitions. The
> additional one for the ZIL (decide yourself what you want to speed
> up "more" and size the L2ARC and ZIL partitions accordingly...).
> This should speed up write operations. The ZIL one should be zfs
> mirrored, because the ZIL is more sensitive to disk failures than
> the L2ARC: zpool add  log mirror  
> 
> >GMirror your SSD swap partitions.
> >Make a ZFS mirror pool out of your SSD root filesystem partitions
> >Build your big ZFS pool however you like out of the mechanical
> >disks you have.
> >Add the 2 x ~60gb partitions as dedicated independent L2ARC devices
> >for your SATA disk ZFS pool.
> 
> BTW, the cheap way of doing something like this is to add a USB
> memory stick as L2ARC:
> http://www.leidinger.net/blog/2010/02/10/making-zfs-faster/
> This will not give you the speed boost of a real SSD attached via
> SATA, but for the price (maybe you even got the memory stick for
> free somewhere) you cannot get anything better.

I had a feeling someone would bring up L2ARC/cache devices.  This gives
me the opportunity to ask something that's been on my mind for quite
some time now:

Aside from the capacity difference (e.g. 40GB vs. 1GB), is there a
benefit to dedicating a RAM disk (e.g. md(4)) to a pool for
L2ARC/cache?  The ZFS documentation explicitly states that cache
device content is considered volatile.

Example:

# zpool status storage
  pool: storage
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
storage ONLINE   0 0 0
  mirrorONLINE   0 0 0
ad10ONLINE   0 0 0
ad14ONLINE   0 0 0

errors: No known data errors

# mdconfig -a -t malloc -o reserve -s 256m -u 16
# zpool add storage cache md16
# zpool status storage
  pool: storage
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
storage ONLINE   0 0 0
  mirrorONLINE   0 0 0
ad10ONLINE   0 0 0
ad14ONLINE   0 0 0
cache
  md16  ONLINE   0 0 0


And removal:

# zpool remove storage md16
# mdconfig -d -u 16
#

-- 
| Jeremy Chadwick   j...@parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.  PGP: 4BD6C0CB |



Re: hardware for home use large storage

2010-02-15 Thread Wes Morgan
On Mon, 15 Feb 2010, Dmitry Morozovsky wrote:

> On Sun, 14 Feb 2010, Dan Langille wrote:
>
> [snip]
>
> DL> > SAS controller ($120):
> DL> > 
> http://www.buy.com/prod/supermicro-lsi-megaraid-lsisas1068e-8-port-sas-raid-controller-16mb/q/loc/101/207929556.html
> DL> > Note: You'll need to change or remove the mounting bracket since it is
> DL> > "backwards". I was able to find a bracket with matching screw holes on 
> an
> DL> > old nic and secure it to my case. It uses the same chipset as the more
> DL> > expensive 3081E-R, if I remember correctly.
> DL>
> DL> I follow what you say, but cannot comprehend why the bracket is backwards.
>
It's because the IO slot is on the other side of the bracket, like good old ISA

Yeah. Mirror image would be a more accurate description. I'm surprised I
had an ISA card that matched up with the mounting holes. Supermicro calls
it "UIO".



Re: hardware for home use large storage

2010-02-15 Thread Dmitry Morozovsky
On Mon, 15 Feb 2010, Dan Naumov wrote:

DN> >> PSU: Corsair 400CX 80+ - 59 euro -
DN> >
DN> >> http://www.corsair.com/products/cx/default.aspx
DN> >
DN> > http://www.newegg.com/Product/Product.aspx?Item=N82E16817139008 for $50
DN> >
DN> > Is that sufficient power up to 10 SATA HDD and an optical drive?
DN> 
DN> Disk power use varies from about 8 watt/disk for "green" disks to 20
DN> watt/disk for really power-hungry ones. So yes.

The only thing one should be aware of is that the startup current of
contemporary 3.5" SATA disks can exceed 2.5A on the 12V bus, so staggered
(delayed) disk spin-up is rather vital.

Or get 500-520 VA PSU to be sure. Or do both just to be on the safe side ;-)
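[Editor's note: the spin-up budget behind this advice is easy to check; the
2.5 A per-disk startup current is the thread's estimate, not a datasheet
figure:]

```python
# Rough 12V-rail spin-up budget for a 10-disk array.
STARTUP_CURRENT_A = 2.5   # per 3.5" disk at spin-up (thread's estimate)
RAIL_V = 12.0
DISKS = 10

per_disk_w = STARTUP_CURRENT_A * RAIL_V    # 30 W per disk
all_at_once_w = per_disk_w * DISKS         # 300 W if all disks spin up together
print(per_disk_w, all_at_once_w)           # -> 30.0 300.0
```

Hence staggered spin-up, or a generously sized PSU, when many disks share one supply.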

-- 
Sincerely,
D.Marck [DM5020, MCK-RIPE, DM3-RIPN]
[ FreeBSD committer: ma...@freebsd.org ]

*** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- ma...@rinet.ru ***



Re: hardware for home use large storage

2010-02-14 Thread Alexander Leidinger
Quoting Dan Naumov  (from Mon, 15 Feb 2010  
01:10:49 +0200):



Get a dock for holding 2 x 2,5" disks in a single 5,25" slot and put
it at the top, in the only 5,25" bay of the case. Now add an
additional PCI-E SATA controller card, like the often mentioned PCIE
SIL3124. Now you have 2 x 2,5" disk slots and 8 x 3,5" disk slots,
with 6 native SATA ports on the motherboard and more ports on the
controller card. Now get 2 x 80gb Intel SSDs and put them into the
dock. Now partition each of them in the following fashion:

1: swap: 4-5gb
2: freebsd-zfs: ~10-15gb for root filesystem
3: freebsd-zfs: rest of the disk: dedicated L2ARC vdev


If you already have 2 SSDs, I suggest making 4 partitions. The
additional one for the ZIL (decide yourself what you want to speed up  
"more" and size the L2ARC and ZIL partitions accordingly...). This  
should speed up write operations. The ZIL one should be zfs mirrored,  
because the ZIL is more sensitive to disk failures than the L2ARC:  
zpool add  log mirror  



GMirror your SSD swap partitions.
Make a ZFS mirror pool out of your SSD root filesystem partitions
Build your big ZFS pool however you like out of the mechanical disks  
you have.

Add the 2 x ~60gb partitions as dedicated independent L2ARC devices
for your SATA disk ZFS pool.


BTW, the cheap way of doing something like this is to add a USB memory  
stick as L2ARC:

http://www.leidinger.net/blog/2010/02/10/making-zfs-faster/
This will not give you the speed boost of a real SSD attached via  
SATA, but for the price (maybe you even got the memory stick for free  
somewhere) you cannot get anything better.


Bye,
Alexander.

--
Crito, I owe a cock to Asclepius; will you remember to pay the debt?
-- Socrates' last words

http://www.Leidinger.netAlexander @ Leidinger.net: PGP ID = B0063FE7
http://www.FreeBSD.org   netchild @ FreeBSD.org  : PGP ID = 72077137


Re: hardware for home use large storage

2010-02-14 Thread Dmitry Morozovsky
On Sun, 14 Feb 2010, Dan Langille wrote:

[snip]

DL> > SAS controller ($120):
DL> > 
http://www.buy.com/prod/supermicro-lsi-megaraid-lsisas1068e-8-port-sas-raid-controller-16mb/q/loc/101/207929556.html
DL> > Note: You'll need to change or remove the mounting bracket since it is
DL> > "backwards". I was able to find a bracket with matching screw holes on an
DL> > old nic and secure it to my case. It uses the same chipset as the more
DL> > expensive 3081E-R, if I remember correctly.
DL> 
DL> I follow what you say, but cannot comprehend why the bracket is backwards.

It's because the IO slot is on the other side of the bracket, like good old ISA


-- 
Sincerely,
D.Marck [DM5020, MCK-RIPE, DM3-RIPN]
[ FreeBSD committer: ma...@freebsd.org ]

*** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- ma...@rinet.ru ***



Re: hardware for home use large storage

2010-02-14 Thread Alexander Motin
Dan Langille wrote:
> Dan Naumov wrote:
>> Now add an
>> additional PCI-E SATA controller card, like the often mentioned PCIE
>> SIL3124. 
> 
> http://www.newegg.com/Product/Product.aspx?Item=N82E16816124026 for $35

This is PCI-X version. Unless you have PCI-X slot, PCIe x1 version seems
preferable: http://www.newegg.com/Product/Product.aspx?Item=N82E16816124027

-- 
Alexander Motin


Re: hardware for home use large storage

2010-02-14 Thread Alexander Motin
Dan Langille wrote:
> Alexander Motin wrote:
>> Steve Polyack wrote:
>>> On 2/10/2010 12:02 AM, Dan Langille wrote:
 Don't use a port multiplier and this goes away.  I was hoping to avoid
 a PM and using something like the Syba PCI Express SATA II 4 x Ports
 RAID Controller seems to be the best solution so far.

 http://www.amazon.com/Syba-Express-Ports-Controller-SY-PEX40008/dp/B002R0DZWQ/ref=sr_1_22?ie=UTF8&s=electronics&qid=1258452902&sr=1-22

>>> Dan, I can personally vouch for these cards under FreeBSD.  We have 3 of
>>> them in one system, with almost every port connected to a port
>>> multiplier (SiI5xxx PMs).  Using the siis(4) driver on 8.0-RELEASE
>>> provides very good performance, and supports both NCQ and FIS-based
>>> switching (an essential for decent port-multiplier performance).
>>>
>>> One thing to consider, however, is that the card is only single-lane
>>> PCI-Express.  The bandwidth available is only 2.5Gb/s (~312MB/sec,
>>> slightly less than that of the SATA-2 link spec), so if you have 4
>>> high-performance drives connected, you may hit a bottleneck at the
>>> bus.   I'd be particularly interested if anyone can find any similar
>>> Silicon Image SATA controllers with a PCI-E 4x or 8x interface ;)
>>
>> Here is SiI3124 based card with built-in PCIe x8 bridge:
>> http://www.addonics.com/products/host_controller/adsa3gpx8-4em.asp
>>
>> It is not so cheap, but with 12 disks connected via 4 Port Multipliers
>> it can give up to 1GB/s (4x250MB/s) of bandwidth.
>>
>> Cheaper PCIe x1 version mentioned above gave me up to 200MB/s, that is
>> maximum of what I've seen from PCIe 1.0 x1 controllers. Looking on NCQ
>> and FBS support it can be enough for many real-world applications, that
>> don't need so high linear speeds, but have many concurrent I/Os.
> 
> Is that the URL you meant to post?  "4 Port eSATA PCI-E 8x Controller
> for Mac Pro".  I'd rather use internal connections.

Not exactly what I meant, as it is a Mac version, but yes. At least such
controllers exist. May be they also could be found with internal SATA.
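[Editor's note: the single-lane bandwidth figures quoted above can be
sanity-checked; PCIe 1.0 x1 is 2.5 Gb/s line rate with 8b/10b encoding:]

```python
# PCIe 1.0 x1 throughput: raw figure vs. payload after 8b/10b encoding.
line_rate_gbps = 2.5

raw_MBps = line_rate_gbps * 1000 / 8             # ~312.5 MB/s, the figure quoted above
payload_MBps = line_rate_gbps * 0.8 * 1000 / 8   # ~250 MB/s usable after encoding
print(raw_MBps, payload_MBps)                    # -> 312.5 250.0
```

The ~200 MB/s observed in practice sits plausibly below the 250 MB/s payload ceiling once protocol overhead is included.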

-- 
Alexander Motin


Re: hardware for home use large storage

2010-02-14 Thread Daniel O'Connor
On Mon, 15 Feb 2010, Dan Langille wrote:
> > I priced a decent ZFS PC for a small business and it was AUD$2500
> > including the disks (5x750Gb), case, PSU etc..
>
> Yes, and this one doesn't yet have HDD.
>
> Can you supply details of your system?

1   AP400791A   4U Rackmount chassis (no PSU)
1   MB455SPF5 drive hot swap bay (in 3x5.25")
5   HAWD7502ABYSWD 750Gb 24x7 RAID
1   GA-MA770T-UD3P  Gigabyte AMD770T AM3 motherboard
1   CPAP-965AMD PhenomII X4 AM2+/3
2   MEK-4G1333D3D4R Kingston 4Gb DDR3/1333 ECC RAM
1   PSS-PSR700  Seasonic 700W PSU
1   VCMS4350-D512H  Radeon 4350 PCIe video card
1   FMCFP4G 4Gb CF card
1   n/a CF to IDE adapter

Note that I haven't actually built it yet; I don't expect any problems,
though.

I built a much cheaper version (non hot swap) at home using a Gigabyte 
GA-MA785GM-US2H, Athlon II X2 240 2.8GHz, 4Gb DDR2 RAM and 5 1Tb WD 
drives in an Antec NineHundred case. It boots off a CF card too, but has 
onboard video and only a 400W PSU (which is probably overkill, steady 
state draw was ~110W)

-- 
Daniel O'Connor software and network engineer
for Genesis Software - http://www.gsoft.com.au
"The nice thing about standards is that there
are so many of them to choose from."
  -- Andrew Tanenbaum
GPG Fingerprint - 5596 B766 97C0 0E94 4347 295E E593 DC20 7B3F CE8C




Re: hardware for home use large storage

2010-02-14 Thread Dan Langille

Charles Sprickman wrote:

On Sun, 14 Feb 2010, Dan Langille wrote:


Dan Naumov wrote:

On Sun, 14 Feb 2010, Dan Langille wrote:

After creating three different system configurations (Athena,
Supermicro, and HP), my configuration of choice is this Supermicro
setup:

1. Samsung SATA CD/DVD Burner $20 (+ $8 shipping)
2. SuperMicro 5046A $750 (+$43 shipping)
3. LSI SAS 3081E-R $235
4. SATA cables $60
5. Crucial 3×2G ECC DDR3-1333 $191 (+ $6 shipping)
6. Xeon W3520 $310


You do realise how much of a massive overkill this is and how much you
are overspending?



I appreciate the comments and feedback.  I'd also appreciate 
alternative suggestions in addition to what you have contributed so 
far.  Spec out the box you would build.


$1200, and I'll run any benchmarks you'd like to see:

http://secure.newegg.com/WishList/PublicWishDetail.aspx?WishListNumber=8441629 



This box is really only for backups, so no fancy CPU.  The sub-$100 
Celeron seems not to impact ZFS performance at all.  It does have ECC 
memory, and a fancy "server" mainboard.


That's pretty neat.  Especially given it has 4x1TB of disks.

For my needs, I'd like a bigger case and PSU: $720 without HDD.

https://secure.newegg.com/WishList/MySavedWishDetail.aspx?ID=8918889

My system will have a minimum of 8 SATA devices (5 for ZFS, 2 for the 
gmirror'd OS, and 1 for the optical drive).  Thus, I'd still need to buy 
another SATA controller on top of the above.


Thank you.


Re: hardware for home use large storage

2010-02-14 Thread Dan Naumov
>> PSU: Corsair 400CX 80+ - 59 euro -
>
>> http://www.corsair.com/products/cx/default.aspx
>
> http://www.newegg.com/Product/Product.aspx?Item=N82E16817139008 for $50
>
> Is that sufficient power up to 10 SATA HDD and an optical drive?

Disk power use varies from about 8 watt/disk for "green" disks to 20
watt/disk for really power-hungry ones. So yes.


- Sincerely,
Dan Naumov


Re: hardware for home use large storage

2010-02-14 Thread Dan Langille

Dan Naumov wrote:

On Mon, Feb 15, 2010 at 12:42 AM, Dan Naumov  wrote:

On Sun, Feb 14, 2010 at 11:38 PM, Dan Langille  wrote:

Dan Naumov wrote:

On Sun, 14 Feb 2010, Dan Langille wrote:

After creating three different system configurations (Athena,
Supermicro, and HP), my configuration of choice is this Supermicro
setup:

   1. Samsung SATA CD/DVD Burner $20 (+ $8 shipping)
   2. SuperMicro 5046A $750 (+$43 shipping)
   3. LSI SAS 3081E-R $235
   4. SATA cables $60
   5. Crucial 3×2G ECC DDR3-1333 $191 (+ $6 shipping)
   6. Xeon W3520 $310

You do realise how much of a massive overkill this is and how much you
are overspending?


I appreciate the comments and feedback.  I'd also appreciate alternative
suggestions in addition to what you have contributed so far.  Spec out the
box you would build.

==
Case: Fractal Design Define R2 - 89 euro:
http://www.fractal-design.com/?view=product&prod=32

Mobo/CPU: Supermicro X7SPA-H / Atom D510 - 180-220 euro:
http://www.supermicro.com/products/motherboard/ATOM/ICH9/X7SPA.cfm?typ=H

PSU: Corsair 400CX 80+ - 59 euro:
http://www.corsair.com/products/cx/default.aspx

RAM: Corsair 2x2GB, DDR2 800MHz SO-DIMM, CL5 - 85 euro
==
Total: ~435 euro

The motherboard has 6 native AHCI-capable ports on ICH9R controller
and you have a PCI-E slot free if you want to add an additional
controller card. Feel free to blow the money you've saved on crazy
fast SATA disks and if your system workload is going to have a lot of
random reads, then spend 200 euro on a 80gb Intel X25-M for use as a
dedicated L2ARC device for your pool.


And to expand a bit, if you want that crazy performance without
blowing silly amounts of money:

Get a dock for holding 2 x 2,5" disks in a single 5,25" slot and put
it at the top, in the only 5,25" bay of the case.


That sounds very interesting.  I just looked around for such a thing 
and could not find it.  Is there a more specific name? URL?



Now add an
additional PCI-E SATA controller card, like the often mentioned PCIE
SIL3124. 


http://www.newegg.com/Product/Product.aspx?Item=N82E16816124026 for $35


Now you have 2 x 2,5" disk slots and 8 x 3,5" disk slots,
with 6 native SATA ports on the motherboard and more ports on the
controller card. Now get 2 x 80gb Intel SSDs and put them into the
dock. Now partition each of them in the following fashion:

1: swap: 4-5gb
2: freebsd-zfs: ~10-15gb for root filesystem
3: freebsd-zfs: rest of the disk: dedicated L2ARC vdev

GMirror your SSD swap partitions.
Make a ZFS mirror pool out of your SSD root filesystem partitions
Build your big ZFS pool however you like out of the mechanical disks you have.
Add the 2 x ~60gb partitions as dedicated independent L2ARC devices
for your SATA disk ZFS pool.

Now you have redundant swap, redundant and FAST root filesystem and
your ZFS pool of SATA disks has 120gb worth of L2ARC space on the
SSDs. The L2ARC vdevs don't need to be redundant, because should an IO
error occur while reading off L2ARC, the IO is deferred to the "real"
data location on the pool of your SATA disks. You can also remove your
L2ARC vdevs from your pool at will, on a live pool.


That is nice.

Thank you.


Re: hardware for home use large storage

2010-02-14 Thread Dan Langille

Dan Naumov wrote:

On Sun, Feb 14, 2010 at 11:38 PM, Dan Langille  wrote:

Dan Naumov wrote:

On Sun, 14 Feb 2010, Dan Langille wrote:

After creating three different system configurations (Athena,
Supermicro, and HP), my configuration of choice is this Supermicro
setup:

   1. Samsung SATA CD/DVD Burner $20 (+ $8 shipping)
   2. SuperMicro 5046A $750 (+$43 shipping)
   3. LSI SAS 3081E-R $235
   4. SATA cables $60
   5. Crucial 3×2G ECC DDR3-1333 $191 (+ $6 shipping)
   6. Xeon W3520 $310

You do realise how much of a massive overkill this is and how much you
are overspending?


I appreciate the comments and feedback.  I'd also appreciate alternative
suggestions in addition to what you have contributed so far.  Spec out the
box you would build.


==
Case: Fractal Design Define R2 - 89 euro -
http://www.fractal-design.com/?view=product&prod=32


That is a nice case.  It's one slot short for what I need.  The trays 
are great.  I want three more slots for 2xSATA for a gmirror base-OS and 
an optical drive.  As someone mentioned on IRC, there are many similar 
non hot-swap cases.  From the website, I couldn't see this for sale in 
USA.  But converting your price, to US$, it is about $121.


Looking around, this case was suggested to me.  I like it a lot:

LIAN LI PC-A71F Black Aluminum ATX Full Tower Computer Case $240
http://www.newegg.com/Product/Product.aspx?Item=N82E1682244


Mobo/CPU: Supermicro X7SPA-H / Atom D510 - 180-220 euro -
http://www.supermicro.com/products/motherboard/ATOM/ICH9/X7SPA.cfm?typ=H


Non-ECC RAM, which is something I'd like to have.  $175


PSU: Corsair 400CX 80+ - 59 euro -

> http://www.corsair.com/products/cx/default.aspx

http://www.newegg.com/Product/Product.aspx?Item=N82E16817139008 for $50

Is that sufficient power up to 10 SATA HDD and an optical drive?

> RAM: Corsair 2x2GB, DDR2 800MHz SO-DIMM, CL5 - 85 euro

http://www.newegg.com/Product/Product.aspx?Item=N82E16820145238 $82



==
Total: ~435 euro


With my options, it's about $640 with shipping etc.


The motherboard has 6 native AHCI-capable ports on ICH9R controller
and you have a PCI-E slot free if you want to add an additional
controller card. Feel free to blow the money you've saved on crazy
fast SATA disks and if your system workload is going to have a lot of
random reads, then spend 200 euro on a 80gb Intel X25-M for use as a
dedicated L2ARC device for your pool.


I have been playing with the idea of an L2ARC device.  They sound crazy 
cool.


Thank you Dan.

-- dan


Re: hardware for home use large storage

2010-02-14 Thread Artem Belevich
> your ZFS pool of SATA disks has 120gb worth of L2ARC space

Keep in mind that housekeeping of a 120G L2ARC may potentially require a
fair amount of RAM, especially if you're dealing with tons of small
files.

See this thread:
http://www.mail-archive.com/zfs-disc...@opensolaris.org/msg34674.html
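[Editor's note: a rough back-of-the-envelope for that RAM overhead; the
~200 bytes-per-record ARC header size and the 8 KiB record size for a
small-file workload are assumptions, not figures from the thread:]

```python
# RAM needed just to index a 120 GiB L2ARC of small records.
l2arc_bytes = 120 * 1024**3
record_size = 8 * 1024     # small-file workload: ~8 KiB records (assumption)
header_bytes = 200         # per-record ARC header kept in RAM (assumption)

records = l2arc_bytes // record_size
ram_needed_gib = records * header_bytes / 1024**3
print(records, round(ram_needed_gib, 2))   # -> 15728640 2.93
```

So with small records, indexing 120 GiB of cache could itself consume on the order of 3 GiB of RAM.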

--Artem


Re: hardware for home use large storage

2010-02-14 Thread Tortise
- Original Message - 
From: "Dan Langille" 

To: "Wes Morgan" 
Cc: "FreeBSD Stable" 
Sent: Monday, February 15, 2010 12:07 PM
Subject: Re: hardware for home use large storage



Whether I use hardware or software RAID is undecided.

I think I am leaning towards software RAID, probably ZFS under FreeBSD 8.x
but I'm open to hardware RAID but I think the cost won't justify it given
ZFS.

Given that, what motherboard and RAM configuration would you recommend to
work with FreeBSD [and probably ZFS].  The lists seems to indicate that more
RAM is better with ZFS.



.


That is a nice card.  However, I don't want hardware RAID.  I want ZFS.


I hope it's not too rude to ask, and no rudeness is intended.

Is ZFS better under FreeBSD or OpenSolaris? (I gather a server version of
OpenSolaris is less than a couple of months away.)




Re: hardware for home use large storage

2010-02-14 Thread Dan Langille

Dmitry Morozovsky wrote:

On Mon, 8 Feb 2010, Dan Langille wrote:

DL> I'm looking at creating a large home use storage machine.  Budget is a
DL> concern, but size and reliability are also a priority.  Noise is also a
DL> concern, since this will be at home, in the basement.  That, and cost,
DL> pretty much rules out a commercial case, such as a 3U case.  It would be
DL> nice, but it greatly inflates the budget.  This pretty much restricts me to
DL> a tower case.

[snip]

We use the following at work, but it's still pretty cheap and pretty silent:

Chieftec WH-02B-B (9x5.25 bays)


$130 http://www.ncixus.com/products/33591/WH-02B-B-OP/Chieftec/ but not 
available


$87.96 at http://www.xpcgear.com/chieftec-wh-02b-b-mid-tower-case.html

http://www.chieftec.com/wh02b-b.html


filled with

2 x Supermicro CSE-MT35T 
http://www.supermicro.nl/products/accessories/mobilerack/CSE-M35T-1.cfm

for regular storage, 2 x raidz1


I could not find a price on that, but guessing at $100 each


1 x Promise SuperSwap 1600
http://www.promise.com/product/product_detail_eng.asp?product_id=169
for changeable external backups


$100 from 
http://www.overstock.com/Electronics/Promise-SuperSwap-1600-Drive-Enclosure/2639699/product.html


So that's $390.  Not bad.

Still need RAM, M/B, PSU, and possibly video.


and still have 2 5.25 bays for anything interesting ;-)


I'd be filling those three with DVD-RW and two SATA drives in a gmirror 
configuration.


other parts are regular SocketAM2+ motherboard, Athlon X4, 8G ram, 
FreeBSD/amd64


Let's say $150 for the M/B, $150 for the CPU, and $200 for the RAM.

Total is $890.  Nice.


Re: hardware for home use large storage

2010-02-14 Thread Dan Naumov
On Mon, Feb 15, 2010 at 12:42 AM, Dan Naumov  wrote:
> On Sun, Feb 14, 2010 at 11:38 PM, Dan Langille  wrote:
>> Dan Naumov wrote:

 On Sun, 14 Feb 2010, Dan Langille wrote:
>
> After creating three different system configurations (Athena,
> Supermicro, and HP), my configuration of choice is this Supermicro
> setup:
>
>    1. Samsung SATA CD/DVD Burner $20 (+ $8 shipping)
>    2. SuperMicro 5046A $750 (+$43 shipping)
>    3. LSI SAS 3081E-R $235
>    4. SATA cables $60
>    5. Crucial 3×2G ECC DDR3-1333 $191 (+ $6 shipping)
>    6. Xeon W3520 $310
>>>
>>> You do realise how much of a massive overkill this is and how much you
>>> are overspending?
>>
>>
>> I appreciate the comments and feedback.  I'd also appreciate alternative
>> suggestions in addition to what you have contributed so far.  Spec out the
>> box you would build.
>
> ==
> Case: Fractal Design Define R2 - 89 euro:
> http://www.fractal-design.com/?view=product&prod=32
>
> Mobo/CPU: Supermicro X7SPA-H / Atom D510 - 180-220 euro:
> http://www.supermicro.com/products/motherboard/ATOM/ICH9/X7SPA.cfm?typ=H
>
> PSU: Corsair 400CX 80+ - 59 euro:
> http://www.corsair.com/products/cx/default.aspx
>
> RAM: Corsair 2x2GB, DDR2 800MHz SO-DIMM, CL5 - 85 euro
> ==
> Total: ~435 euro
>
> The motherboard has 6 native AHCI-capable ports on ICH9R controller
> and you have a PCI-E slot free if you want to add an additional
> controller card. Feel free to blow the money you've saved on crazy
> fast SATA disks and if your system workload is going to have a lot of
> random reads, then spend 200 euro on a 80gb Intel X25-M for use as a
> dedicated L2ARC device for your pool.

And to expand a bit, if you want that crazy performance without
blowing silly amounts of money:

Get a dock for holding 2 x 2,5" disks in a single 5,25" slot and put
it at the top, in the only 5,25" bay of the case. Now add an
additional PCI-E SATA controller card, like the often mentioned PCIE
SIL3124. Now you have 2 x 2,5" disk slots and 8 x 3,5" disk slots,
with 6 native SATA ports on the motherboard and more ports on the
controller card. Now get 2 x 80gb Intel SSDs and put them into the
dock. Now partition each of them in the following fashion:

1: swap: 4-5gb
2: freebsd-zfs: ~10-15gb for root filesystem
3: freebsd-zfs: rest of the disk: dedicated L2ARC vdev

GMirror your SSD swap partitions.
Make a ZFS mirror pool out of your SSD root filesystem partitions
Build your big ZFS pool however you like out of the mechanical disks you have.
Add the 2 x ~60gb partitions as dedicated independent L2ARC devices
for your SATA disk ZFS pool.

Now you have redundant swap, redundant and FAST root filesystem and
your ZFS pool of SATA disks has 120gb worth of L2ARC space on the
SSDs. The L2ARC vdevs don't need to be redundant, because should an IO
error occur while reading off L2ARC, the IO is deferred to the "real"
data location on the pool of your SATA disks. You can also remove your
L2ARC vdevs from your pool at will, on a live pool.
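[Editor's note: the recipe above, as a hedged command sketch. Device names
(ada0/ada1), the pool name "tank", and the exact sizes are assumptions to
be adjusted; syntax is the FreeBSD 8-era gpart/gmirror/zpool style:]

```sh
# Partition one SSD (repeat for ada1): swap, root, L2ARC
gpart create -s gpt ada0
gpart add -t freebsd-swap -s 4G  ada0   # ada0p1: swap
gpart add -t freebsd-zfs  -s 15G ada0   # ada0p2: root pool
gpart add -t freebsd-zfs         ada0   # ada0p3: rest of disk -> L2ARC

# Mirror the two swap partitions with gmirror
gmirror label -b round-robin swap ada0p1 ada1p1

# Mirrored root pool on the SSD root partitions
zpool create zroot mirror ada0p2 ada1p2

# Add both L2ARC partitions as independent cache devices to the big pool
zpool add tank cache ada0p3 ada1p3

# Cache devices can later be removed from the live pool
zpool remove tank ada0p3 ada1p3
```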


- Sincerely,
Dan Naumov


Re: hardware for home use large storage

2010-02-14 Thread Dan Langille

Wes Morgan wrote:

On Sun, 14 Feb 2010, Dan Langille wrote:


Dan Langille wrote:

Hi,

I'm looking at creating a large home use storage machine.  Budget is a
concern, but size and reliability are also a priority.  Noise is also a
concern, since this will be at home, in the basement.  That, and cost,
pretty much rules out a commercial case, such as a 3U case.  It would be
nice, but it greatly inflates the budget.  This pretty much restricts me to
a tower case.

The primary use of this machine will be a backup server[1].  It will do
other secondary use will include minor tasks such as samba, CIFS, cvsup,
etc.

I'm thinking of 8x1TB (or larger) SATA drives.  I've found a case[2] with
hot-swap bays[3], that seems interesting.  I haven't looked at power
supplies, but given that number of drives, I expect something beefy with a
decent reputation is called for.

Whether I use hardware or software RAID is undecided.

I think I am leaning towards software RAID, probably ZFS under FreeBSD 8.x
but I'm open to hardware RAID but I think the cost won't justify it given
ZFS.

Given that, what motherboard and RAM configuration would you recommend to
work with FreeBSD [and probably ZFS]?  The lists seem to indicate that more
RAM is better with ZFS.

Thanks.


[1] - FYI running Bacula, but that's out of scope for this question

[2] - http://www.newegg.com/Product/Product.aspx?Item=N82E16811192058

[3] - nice to have, especially for a failure.

After creating three different system configurations (Athena, Supermicro, and
HP), my configuration of choice is this Supermicro setup:

   1. Samsung SATA CD/DVD Burner $20 (+ $8 shipping)
   2. SuperMicro 5046A $750 (+$43 shipping)
   3. LSI SAS 3081E-R $235
   4. SATA cables $60
   5. Crucial 3×2G ECC DDR3-1333 $191 (+ $6 shipping)
   6. Xeon W3520 $310

Total price with shipping $1560

Details and links at http://dan.langille.org/2010/02/14/supermicro/


Wow um... That's quite a setup. Do you really need the Xeon W3520? You
could get a regular core 2 system for much less and still use the ECC ram
(highly recommended). The case you're looking at only has 6 hot-swap bays
according to the manuals, although the pictures show 8 (???). 


Going to http://www.supermicro.com/products/system/tower/5046/SYS-5046A-X.cfm,
it does say 6 hot-swap and two spare.  I'm guessing they say that because
the M/B supports only 6 SATA connections:


http://www.supermicro.com/products/motherboard/Core2Duo/X58/C7X58.cfm


You could shave some off the case and CPU, upgrade your 3081E-R to an
ARC-1222 for $200 more, and have the hardware RAID option.


That is a nice card.  However, I don't want hardware RAID.  I want ZFS.



If I was building a tower system, I'd put together something like this:


Thank you for the suggestions.



Case with 8 hot-swap SATA bays ($250):
http://www.newegg.com/Product/Product.aspx?Item=N82E16811192058
Or if you prefer screwless, you can find the case without the 2 hotswap
bays and use an icy dock screwless version.


I do like this case, it's one I have priced:

  http://dan.langille.org/2010/02/14/pricing-the-athena/


Intel server board (for ECC support) ($200):
http://www.newegg.com/Product/Product.aspx?Item=N82E16813121328


ECC, nice, which is something I've found appealing.


SAS controller ($120):
http://www.buy.com/prod/supermicro-lsi-megaraid-lsisas1068e-8-port-sas-raid-controller-16mb/q/loc/101/207929556.html
Note: You'll need to change or remove the mounting bracket since it is
"backwards". I was able to find a bracket with matching screw holes on an
old nic and secure it to my case. It uses the same chipset as the more
expensive 3081E-R, if I remember correctly.


I follow what you say, but cannot comprehend why the bracket is backwards.


Quad-core CPU ($190):
http://www.newegg.com/Product/Product.aspx?Item=N82E16819115131

4x2GB RAM sticks ($97 × 2):
http://www.newegg.com/Product/Product.aspx?Item=N82E16820139045

same SATA cables for sata to mini-sas, same CD burner. Total cost probably
$400 less, which you can use to buy some of the drives.


I put this all together, and named it after you (hope you don't mind):

  http://dan.langille.org/2010/02/14/273/

You're right, $400 less.

I also wrote up the above suggestions with a Supermicro case instead:

SUPERMICRO CSE-743T-645B Black 4U Pedestal Chassis w/ 645W Power Supply 
 $320

http://www.newegg.com/Product/Product.aspx?Item=N82E16811152047

I like your suggestions with the above case.  It is now my preferred 
solution.



For my personal (overkill) setup I have a chenbro 4U chassis with 16
hotswap bays and mini-SAS backplanes, a zippy 2+1 640 watt redundant power
supply (sounds like a freight train). I cannot express the joy I felt in
ripping out all the little SATA cables and snaking a couple fat 8087s
under the fans. 8 of the bays are dedicated to my media array, and the
other 8 are there for swapping in and out of backup drives mostly, but the
time they REALLY come in handy is when you need to upgrade your array.

Re: hardware for home use large storage

2010-02-14 Thread Dan Naumov
On Sun, Feb 14, 2010 at 11:38 PM, Dan Langille  wrote:
> Dan Naumov wrote:
>>>
>>> On Sun, 14 Feb 2010, Dan Langille wrote:

 After creating three different system configurations (Athena,
 Supermicro, and HP), my configuration of choice is this Supermicro
 setup:

    1. Samsung SATA CD/DVD Burner $20 (+ $8 shipping)
    2. SuperMicro 5046A $750 (+$43 shipping)
    3. LSI SAS 3081E-R $235
    4. SATA cables $60
    5. Crucial 3×2G ECC DDR3-1333 $191 (+ $6 shipping)
    6. Xeon W3520 $310
>>
>> You do realise how much of a massive overkill this is and how much you
>> are overspending?
>
>
> I appreciate the comments and feedback.  I'd also appreciate alternative
> suggestions in addition to what you have contributed so far.  Spec out the
> box you would build.

==
Case: Fractal Design Define R2 - 89 euro:
http://www.fractal-design.com/?view=product&prod=32

Mobo/CPU: Supermicro X7SPA-H / Atom D510 - 180-220 euro:
http://www.supermicro.com/products/motherboard/ATOM/ICH9/X7SPA.cfm?typ=H

PSU: Corsair 400CX 80+ - 59 euro:
http://www.corsair.com/products/cx/default.aspx

RAM: Corsair 2x2GB, DDR2 800MHz SO-DIMM, CL5 - 85 euro
==
Total: ~435 euro

The motherboard has 6 native AHCI-capable ports on ICH9R controller
and you have a PCI-E slot free if you want to add an additional
controller card. Feel free to blow the money you've saved on crazy
fast SATA disks and if your system workload is going to have a lot of
random reads, then spend 200 euro on an 80GB Intel X25-M for use as a
dedicated L2ARC device for your pool.
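Attaching an SSD as L2ARC is a one-liner once the pool exists.  A dry-run
sketch with hypothetical names (pool `tank`, SSD at `/dev/ada6`), which
prints the command instead of executing it:

```shell
#!/bin/sh
# Dry-run sketch: add an SSD as a dedicated L2ARC (cache) device.
# Pool name and device node are hypothetical; run the echoed command for real.
POOL=tank
SSD=/dev/ada6
cmd="zpool add $POOL cache $SSD"
echo "$cmd"
```

A cache device can be removed again with `zpool remove`, so it is a
low-risk experiment compared to changing the pool's vdev layout.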


- Sincerely,
Dan Naumov
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: hardware for home use large storage

2010-02-14 Thread Charles Sprickman

On Sun, 14 Feb 2010, Dan Langille wrote:


Dan Naumov wrote:

On Sun, 14 Feb 2010, Dan Langille wrote:

After creating three different system configurations (Athena,
Supermicro, and HP), my configuration of choice is this Supermicro
setup:

1. Samsung SATA CD/DVD Burner $20 (+ $8 shipping)
2. SuperMicro 5046A $750 (+$43 shipping)
3. LSI SAS 3081E-R $235
4. SATA cables $60
5. Crucial 3×2G ECC DDR3-1333 $191 (+ $6 shipping)
6. Xeon W3520 $310


You do realise how much of a massive overkill this is and how much you
are overspending?



I appreciate the comments and feedback.  I'd also appreciate alternative 
suggestions in addition to what you have contributed so far.  Spec out the 
box you would build.


$1200, and I'll run any benchmarks you'd like to see:

http://secure.newegg.com/WishList/PublicWishDetail.aspx?WishListNumber=8441629

This box is really only for backups, so no fancy CPU.  The sub-$100 
Celeron doesn't seem to hold back ZFS performance at all.  It does have 
ECC memory, and a fancy "server" mainboard.


C



Re: hardware for home use large storage

2010-02-14 Thread Dan Langille

Alexander Motin wrote:

Steve Polyack wrote:

On 2/10/2010 12:02 AM, Dan Langille wrote:

Don't use a port multiplier and this goes away.  I was hoping to avoid
a PM and using something like the Syba PCI Express SATA II 4 x Ports
RAID Controller seems to be the best solution so far.

http://www.amazon.com/Syba-Express-Ports-Controller-SY-PEX40008/dp/B002R0DZWQ/ref=sr_1_22?ie=UTF8&s=electronics&qid=1258452902&sr=1-22

Dan, I can personally vouch for these cards under FreeBSD.  We have 3 of
them in one system, with almost every port connected to a port
multiplier (SiI5xxx PMs).  Using the siis(4) driver on 8.0-RELEASE
provides very good performance, and supports both NCQ and FIS-based
switching (an essential for decent port-multiplier performance).

One thing to consider, however, is that the card is only single-lane
PCI-Express.  The bandwidth available is only 2.5Gb/s (~312MB/sec,
slightly less than that of the SATA-2 link spec), so if you have 4
high-performance drives connected, you may hit a bottleneck at the
bus.   I'd be particularly interested if anyone can find any similar
Silicon Image SATA controllers with a PCI-E 4x or 8x interface ;)
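For what it's worth, the ~312 MB/s figure is the raw 2.5 Gb/s line rate
divided by 8; PCIe 1.0 uses 8b/10b line coding, so the usable data rate is
closer to 250 MB/s, which squares with the ~200 MB/s real-world numbers
reported for these cards.  The back-of-envelope, in integer shell
arithmetic:

```shell
#!/bin/sh
# PCIe 1.0 x1 back-of-envelope: 2.5 GT/s raw, 8b/10b line coding.
raw_mbps=2500                      # megabits/s on the wire
data_mbps=$((raw_mbps * 8 / 10))   # 2000 Mb/s after 8b/10b overhead
data_MBps=$((data_mbps / 8))       # 250 MB/s, before protocol overhead
echo "usable: ${data_MBps} MB/s"
```

Protocol overhead (TLP headers, flow control) shaves off a further slice,
hence observed throughput landing somewhat below 250 MB/s.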


Here is a SiI3124-based card with a built-in PCIe x8 bridge:
http://www.addonics.com/products/host_controller/adsa3gpx8-4em.asp

It is not so cheap, but with 12 disks connected via 4 Port Multipliers
it can give up to 1GB/s (4x250MB/s) of bandwidth.

The cheaper PCIe x1 version mentioned above gave me up to 200MB/s, which is
the maximum I've seen from PCIe 1.0 x1 controllers.  Given its NCQ and FBS
support, that can be enough for many real-world applications that don't
need such high linear speeds but have many concurrent I/Os.


Is that the URL you meant to post?  "4 Port eSATA PCI-E 8x Controller 
for Mac Pro".  I'd rather use internal connections.



Re: hardware for home use large storage

2010-02-14 Thread Dan Langille

Dmitry Morozovsky wrote:

On Wed, 10 Feb 2010, Dmitry Morozovsky wrote:

DM> other parts are regular SocketAM2+ motherboard, Athlon X4, 8G ram, 
DM> FreeBSD/amd64


well, not exactly "regular" - it's an ASUS M2N-LR-SATA with 10 SATA
channels, but I suppose there are comparable options in the "workstation"
mobo market now...


I couldn't find this one for sale, FWIW.  But looks interesting.  Thanks.


Re: hardware for home use large storage

2010-02-14 Thread Dan Langille

Dan Naumov wrote:

On Sun, 14 Feb 2010, Dan Langille wrote:

After creating three different system configurations (Athena,
Supermicro, and HP), my configuration of choice is this Supermicro
setup:

1. Samsung SATA CD/DVD Burner $20 (+ $8 shipping)
2. SuperMicro 5046A $750 (+$43 shipping)
3. LSI SAS 3081E-R $235
4. SATA cables $60
5. Crucial 3×2G ECC DDR3-1333 $191 (+ $6 shipping)
6. Xeon W3520 $310


You do realise how much of a massive overkill this is and how much you
are overspending?



I appreciate the comments and feedback.  I'd also appreciate alternative 
suggestions in addition to what you have contributed so far.  Spec out 
the box you would build.



Re: hardware for home use large storage

2010-02-14 Thread Alexander Motin
Steve Polyack wrote:
> On 2/10/2010 12:02 AM, Dan Langille wrote:
>> Don't use a port multiplier and this goes away.  I was hoping to avoid
>> a PM and using something like the Syba PCI Express SATA II 4 x Ports
>> RAID Controller seems to be the best solution so far.
>>
>> http://www.amazon.com/Syba-Express-Ports-Controller-SY-PEX40008/dp/B002R0DZWQ/ref=sr_1_22?ie=UTF8&s=electronics&qid=1258452902&sr=1-22
> 
> Dan, I can personally vouch for these cards under FreeBSD.  We have 3 of
> them in one system, with almost every port connected to a port
> multiplier (SiI5xxx PMs).  Using the siis(4) driver on 8.0-RELEASE
> provides very good performance, and supports both NCQ and FIS-based
> switching (an essential for decent port-multiplier performance).
> 
> One thing to consider, however, is that the card is only single-lane
> PCI-Express.  The bandwidth available is only 2.5Gb/s (~312MB/sec,
> slightly less than that of the SATA-2 link spec), so if you have 4
> high-performance drives connected, you may hit a bottleneck at the
> bus.   I'd be particularly interested if anyone can find any similar
> Silicon Image SATA controllers with a PCI-E 4x or 8x interface ;)

Here is a SiI3124-based card with a built-in PCIe x8 bridge:
http://www.addonics.com/products/host_controller/adsa3gpx8-4em.asp

It is not so cheap, but with 12 disks connected via 4 Port Multipliers
it can give up to 1GB/s (4x250MB/s) of bandwidth.

The cheaper PCIe x1 version mentioned above gave me up to 200MB/s, which is
the maximum I've seen from PCIe 1.0 x1 controllers.  Given its NCQ and FBS
support, that can be enough for many real-world applications that don't
need such high linear speeds but have many concurrent I/Os.

-- 
Alexander Motin


Re: hardware for home use large storage

2010-02-14 Thread Wes Morgan
On Sun, 14 Feb 2010, Dan Langille wrote:

> Dan Langille wrote:
> > Hi,
> >
> > I'm looking at creating a large home use storage machine.  Budget is a
> > concern, but size and reliability are also a priority.  Noise is also a
> > concern, since this will be at home, in the basement.  That, and cost,
> > pretty much rules out a commercial case, such as a 3U case.  It would be
> > nice, but it greatly inflates the budget.  This pretty much restricts me to
> > a tower case.
> >
> > The primary use of this machine will be a backup server[1].  Secondary
> > uses will include minor tasks such as samba, CIFS, cvsup, etc.
> >
> > I'm thinking of 8x1TB (or larger) SATA drives.  I've found a case[2] with
> > hot-swap bays[3], that seems interesting.  I haven't looked at power
> > supplies, but given that number of drives, I expect something beefy with a
> > decent reputation is called for.
> >
> > Whether I use hardware or software RAID is undecided.
> >
> > I think I am leaning towards software RAID, probably ZFS under FreeBSD
> > 8.x.  I'm open to hardware RAID, but I don't think the cost would be
> > justified given ZFS.
> >
> > Given that, what motherboard and RAM configuration would you recommend to
> > work with FreeBSD [and probably ZFS]?  The lists seem to indicate that more
> > RAM is better with ZFS.
> >
> > Thanks.
> >
> >
> > [1] - FYI running Bacula, but that's out of scope for this question
> >
> > [2] - http://www.newegg.com/Product/Product.aspx?Item=N82E16811192058
> >
> > [3] - nice to have, especially for a failure.
>
> After creating three different system configurations (Athena, Supermicro, and
> HP), my configuration of choice is this Supermicro setup:
>
>1. Samsung SATA CD/DVD Burner $20 (+ $8 shipping)
>2. SuperMicro 5046A $750 (+$43 shipping)
>3. LSI SAS 3081E-R $235
>4. SATA cables $60
>5. Crucial 3×2G ECC DDR3-1333 $191 (+ $6 shipping)
>6. Xeon W3520 $310
>
> Total price with shipping $1560
>
> Details and links at http://dan.langille.org/2010/02/14/supermicro/

Wow, um... that's quite a setup. Do you really need the Xeon W3520? You
could get a regular Core 2 system for much less and still use the ECC RAM
(highly recommended). The case you're looking at only has 6 hot-swap bays
according to the manuals, although the pictures show 8 (???). You could
shave some off the case and CPU, upgrade your 3081E-R to an ARC-1222 for
$200 more, and have the hardware RAID option.

If I was building a tower system, I'd put together something like this:

Case with 8 hot-swap SATA bays ($250):
http://www.newegg.com/Product/Product.aspx?Item=N82E16811192058
Or if you prefer screwless, you can find the case without the 2 hotswap
bays and use an icy dock screwless version.

Intel server board (for ECC support) ($200):
http://www.newegg.com/Product/Product.aspx?Item=N82E16813121328

SAS controller ($120):
http://www.buy.com/prod/supermicro-lsi-megaraid-lsisas1068e-8-port-sas-raid-controller-16mb/q/loc/101/207929556.html
Note: You'll need to change or remove the mounting bracket since it is
"backwards". I was able to find a bracket with matching screw holes on an
old nic and secure it to my case. It uses the same chipset as the more
expensive 3081E-R, if I remember correctly.

Quad-core CPU ($190):
http://www.newegg.com/Product/Product.aspx?Item=N82E16819115131

4x2GB RAM sticks ($97 × 2):
http://www.newegg.com/Product/Product.aspx?Item=N82E16820139045

same SATA cables for sata to mini-sas, same CD burner. Total cost probably
$400 less, which you can use to buy some of the drives.

For my personal (overkill) setup I have a chenbro 4U chassis with 16
hotswap bays and mini-SAS backplanes, a zippy 2+1 640 watt redundant power
supply (sounds like a freight train). I cannot express the joy I felt in
ripping out all the little SATA cables and snaking a couple fat 8087s
under the fans. 8 of the bays are dedicated to my media array, and the
other 8 are there for swapping in and out of backup drives mostly, but the
time they REALLY come in handy is when you need to upgrade your array. Buy
the replacement drives, pop them in, migrate the pool, and remove the old
drives.
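The upgrade path described above maps onto `zpool replace`, one disk at a
time.  A dry-run sketch with hypothetical names (pool `tank`, old/new disks
as labeled devices); each step is echoed rather than executed:

```shell
#!/bin/sh
# Dry-run sketch: grow an array in place by replacing every member disk.
# Pool and label names are hypothetical; drop the run() wrapper to execute.
run() { echo "+ $*"; }
POOL=tank
for n in 0 1 2 3; do
  run zpool replace "$POOL" "label/old$n" "label/new$n"
  # wait until 'zpool status' shows the resilver finished before the next disk
done
run zpool set autoexpand=on "$POOL"   # let the pool grow to the new disks' size
```

On older ZFS versions without the autoexpand property, the pool picks up the
new size after an export/import instead.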

I've been running with this for almost 3 years. If I had to do it over
again, I probably wouldn't get the power supply, it was more expensive
than the chassis and I don't think it has ever "saved" me from anything
(although I can't complain, it runs 24/7 and never had a glitch).

If I could find a good tower case I might consider it, but I've never seen
one I liked with mini-sas backplanes. Really the only thing I'm missing is
a nice 21U rack on casters, then the whole thing disappears into a corner
humming away.

RE: hardware for home use large storage

2010-02-14 Thread Dan Naumov
> On Sun, 14 Feb 2010, Dan Langille wrote:
>> After creating three different system configurations (Athena,
>> Supermicro, and HP), my configuration of choice is this Supermicro
>> setup:
>>
>> 1. Samsung SATA CD/DVD Burner $20 (+ $8 shipping)
>> 2. SuperMicro 5046A $750 (+$43 shipping)
>> 3. LSI SAS 3081E-R $235
>> 4. SATA cables $60
>> 5. Crucial 3×2G ECC DDR3-1333 $191 (+ $6 shipping)
>> 6. Xeon W3520 $310

You do realise how much of a massive overkill this is and how much you
are overspending?


- Dan Naumov


Re: hardware for home use large storage

2010-02-14 Thread Dan Langille

Daniel O'Connor wrote:

On Sun, 14 Feb 2010, Dan Langille wrote:

After creating three different system configurations (Athena,
Supermicro, and HP), my configuration of choice is this Supermicro
setup:

1. Samsung SATA CD/DVD Burner $20 (+ $8 shipping)
2. SuperMicro 5046A $750 (+$43 shipping)
3. LSI SAS 3081E-R $235
4. SATA cables $60
5. Crucial 3×2G ECC DDR3-1333 $191 (+ $6 shipping)
6. Xeon W3520 $310

Total price with shipping $1560

Details and links at http://dan.langille.org/2010/02/14/supermicro/

I'll probably start with 5 HDD in the ZFS array, 2x gmirror'd drives
for the boot, and 1 optical drive (so 8 SATA ports).


That is f**king expensive for a home setup :)

I priced a decent ZFS PC for a small business and it was AUD$2500 
including the disks (5x750Gb), case, PSU etc..


Yes, and this one doesn't yet have HDD.

Can you supply details of your system?


Re: hardware for home use large storage

2010-02-13 Thread Daniel O'Connor
On Sun, 14 Feb 2010, Daniel O'Connor wrote:
> On Sun, 14 Feb 2010, Dan Langille wrote:
> > After creating three different system configurations (Athena,
> > Supermicro, and HP), my configuration of choice is this Supermicro
> > setup:
> >
> >     1. Samsung SATA CD/DVD Burner $20 (+ $8 shipping)
> >     2. SuperMicro 5046A $750 (+$43 shipping)
> >     3. LSI SAS 3081E-R $235
> >     4. SATA cables $60
> >     5. Crucial 3×2G ECC DDR3-1333 $191 (+ $6 shipping)
> >     6. Xeon W3520 $310
> >
> > Total price with shipping $1560
> >
> > Details and links at http://dan.langille.org/2010/02/14/supermicro/
> >
> > I'll probably start with 5 HDD in the ZFS array, 2x gmirror'd
> > drives for the boot, and 1 optical drive (so 8 SATA ports).
>
> That is f**king expensive for a home setup :)
>
> I priced a decent ZFS PC for a small business and it was AUD$2500
> including the disks (5x750Gb), case, PSU etc..

Also, that one booted off a 4Gb CF card (non RAID/mirror though).

-- 
Daniel O'Connor software and network engineer
for Genesis Software - http://www.gsoft.com.au
"The nice thing about standards is that there
are so many of them to choose from."
  -- Andrew Tanenbaum
GPG Fingerprint - 5596 B766 97C0 0E94 4347 295E E593 DC20 7B3F CE8C




Re: hardware for home use large storage

2010-02-13 Thread Daniel O'Connor
On Sun, 14 Feb 2010, Dan Langille wrote:
> After creating three different system configurations (Athena,
> Supermicro, and HP), my configuration of choice is this Supermicro
> setup:
>
>     1. Samsung SATA CD/DVD Burner $20 (+ $8 shipping)
>     2. SuperMicro 5046A $750 (+$43 shipping)
>     3. LSI SAS 3081E-R $235
>     4. SATA cables $60
>     5. Crucial 3×2G ECC DDR3-1333 $191 (+ $6 shipping)
>     6. Xeon W3520 $310
>
> Total price with shipping $1560
>
> Details and links at http://dan.langille.org/2010/02/14/supermicro/
>
> I'll probably start with 5 HDD in the ZFS array, 2x gmirror'd drives
> for the boot, and 1 optical drive (so 8 SATA ports).

That is f**king expensive for a home setup :)

I priced a decent ZFS PC for a small business and it was AUD$2500 
including the disks (5x750Gb), case, PSU etc..

-- 
Daniel O'Connor software and network engineer
for Genesis Software - http://www.gsoft.com.au
"The nice thing about standards is that there
are so many of them to choose from."
  -- Andrew Tanenbaum
GPG Fingerprint - 5596 B766 97C0 0E94 4347 295E E593 DC20 7B3F CE8C




Re: hardware for home use large storage

2010-02-13 Thread Dan Langille

Dan Langille wrote:

Hi,

I'm looking at creating a large home use storage machine.  Budget is a 
concern, but size and reliability are also a priority.  Noise is also a 
concern, since this will be at home, in the basement.  That, and cost, 
pretty much rules out a commercial case, such as a 3U case.  It would be 
nice, but it greatly inflates the budget.  This pretty much restricts me 
to a tower case.


The primary use of this machine will be a backup server[1].  Secondary
uses will include minor tasks such as samba, CIFS, cvsup, etc.


I'm thinking of 8x1TB (or larger) SATA drives.  I've found a case[2] 
with hot-swap bays[3], that seems interesting.  I haven't looked at 
power supplies, but given that number of drives, I expect something 
beefy with a decent reputation is called for.


Whether I use hardware or software RAID is undecided.

I think I am leaning towards software RAID, probably ZFS under FreeBSD
8.x.  I'm open to hardware RAID, but I don't think the cost would be
justified given ZFS.


Given that, what motherboard and RAM configuration would you recommend
to work with FreeBSD [and probably ZFS]?  The lists seem to indicate
that more RAM is better with ZFS.


Thanks.


[1] - FYI running Bacula, but that's out of scope for this question

[2] - http://www.newegg.com/Product/Product.aspx?Item=N82E16811192058

[3] - nice to have, especially for a failure.


After creating three different system configurations (Athena, 
Supermicro, and HP), my configuration of choice is this Supermicro setup:


   1. Samsung SATA CD/DVD Burner $20 (+ $8 shipping)
   2. SuperMicro 5046A $750 (+$43 shipping)
   3. LSI SAS 3081E-R $235
   4. SATA cables $60
   5. Crucial 3×2G ECC DDR3-1333 $191 (+ $6 shipping)
   6. Xeon W3520 $310

Total price with shipping $1560

Details and links at http://dan.langille.org/2010/02/14/supermicro/

I'll probably start with 5 HDD in the ZFS array, 2x gmirror'd drives for 
the boot, and 1 optical drive (so 8 SATA ports).
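The 2x gmirror'd boot pair mentioned above is the classic GEOM mirror
setup.  A dry-run sketch with hypothetical device names (`ad4`/`ad6`);
each command is echoed rather than executed:

```shell
#!/bin/sh
# Dry-run sketch: mirror the two boot drives with gmirror(8).
# Device names are hypothetical; drop the run() wrapper to execute for real.
run() { echo "+ $*"; }
run gmirror label -v -b round-robin gm0 /dev/ad4   # create mirror on first disk
run gmirror insert gm0 /dev/ad6                    # attach the second disk
# then add geom_mirror_load="YES" to /boot/loader.conf so the mirror
# assembles at boot, and point /etc/fstab at /dev/mirror/gm0<partition>
run gmirror status gm0
```

Labeling must be done on an unmounted disk (e.g. from a fixit/live
environment) since gmirror stores its metadata in the last sector.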



Re: hardware for home use large storage

2010-02-12 Thread Dan Langille

On Wed, February 10, 2010 10:00 pm, Bruce Simpson wrote:
> On 02/10/10 19:40, Steve Polyack wrote:
>>
>> I haven't had such bad experience as the above, but it is certainly a
>> concern.  Using ZFS we simply 'offline' the device, pull, replace with
>> a new one, glabel, and zfs replace.  It seems to work fine as long as
>> nothing is accessing the device you are replacing (otherwise you will
>> get a kernel panic a few minutes down the line).  m...@freebsd.org has
>> also committed a large patch set to 9-CURRENT which implements
>> "proper" SATA/AHCI hot-plug support and error-recovery through CAM.
>
> I've been running with this patch in 8-STABLE for well over a week now
> on my desktop w/o issues; I am using main disk for dev, and eSATA disk
> pack for light multimedia use.

MFC to 8.x?

-- 
Dan Langille -- http://langille.org/



Re: hardware for home use large storage

2010-02-10 Thread Bruce Simpson

On 02/10/10 19:40, Steve Polyack wrote:


I haven't had such bad experience as the above, but it is certainly a 
concern.  Using ZFS we simply 'offline' the device, pull, replace with 
a new one, glabel, and zfs replace.  It seems to work fine as long as 
nothing is accessing the device you are replacing (otherwise you will 
get a kernel panic a few minutes down the line).  m...@freebsd.org has 
also committed a large patch set to 9-CURRENT which implements 
"proper" SATA/AHCI hot-plug support and error-recovery through CAM.


I've been running with this patch in 8-STABLE for well over a week now 
on my desktop w/o issues; I am using main disk for dev, and eSATA disk 
pack for light multimedia use.




Re: hardware for home use large storage

2010-02-10 Thread Dmitry Morozovsky
On Wed, 10 Feb 2010, Dan Langille wrote:

DL> Dmitry Morozovsky wrote:
DL> > On Wed, 10 Feb 2010, Dmitry Morozovsky wrote:
DL> > 
DL> > DM> other parts are regular SocketAM2+ motherboard, Athlon X4, 8G ram,
DL> > DM> FreeBSD/amd64
DL> > 
DL> > well, not exactly "regular" - it's ASUS M2N-LR-SATA with 10 SATA channels,
DL> > but I suppose there are comparable in "workstation" mobo market now...
DL> 
DL> 10 SATA channels?  Newegg claims only 6:

You refer to the regular M2N-LR; the M2N-LR-SATA contains an additional
4-channel Marvell chip:

ma...@moose:~> grep '^atapci.*: <' /var/run/dmesg.boot
atapci0:  port 
0x1f0-0x1f7,0x3f6,0x170-0x177,0x376,0xffa0-0xffaf at device 4.0 on pci0
atapci1:  port 
0xc400-0xc407,0xc080-0xc083,0xc000-0xc007,0xbc00-0xbc03,0xb880-0xb88f mem 
0xef9bd000-0xef9bdfff irq 21 at device 5.0 on pci0
atapci2:  port 
0xb800-0xb807,0xb480-0xb483,0xb400-0xb407,0xb080-0xb083,0xb000-0xb00f mem 
0xef9bc000-0xef9bcfff irq 22 at device 5.1 on pci0
atapci3:  port 
0xac00-0xac07,0xa880-0xa883,0xa800-0xa807,0xa480-0xa483,0xa400-0xa40f mem 
0xef9b3000-0xef9b3fff irq 23 at device 5.2 on pci0
atapci4:  port 0xe800-0xe8ff mem 
0xefd0-0xefdf irq 17 at device 0.0 on pci3
atapci5:  port 0xe400-0xe4ff mem 
0xefb0-0xefbf irq 18 at device 6.0 on pci3

(atapci4 is now used for a 1-disk Promise enclosure; I tried to use a SiI
card to drive the eSATA port natively, but it failed to initialize there,
so I use a simple SATA-eSATA bracket to bring eSATA capabilities to this
Eternal Beast [tm] ;-P)


 -- Sincerely,
D.Marck [DM5020, MCK-RIPE, DM3-RIPN]
[ FreeBSD committer: ma...@freebsd.org ]

*** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- ma...@rinet.ru ***



Re: hardware for home use large storage

2010-02-10 Thread Dan Langille

Dmitry Morozovsky wrote:

On Wed, 10 Feb 2010, Dmitry Morozovsky wrote:

DM> other parts are regular SocketAM2+ motherboard, Athlon X4, 8G ram, 
DM> FreeBSD/amd64


well, not exactly "regular" - it's an ASUS M2N-LR-SATA with 10 SATA channels,
but I suppose there are comparable options in the "workstation" mobo market
now...


10 SATA channels?  Newegg claims only 6:

  http://www.newegg.com/Product/Product.aspx?Item=N82E16813131134


Re: hardware for home use large storage

2010-02-10 Thread Dmitry Morozovsky
On Wed, 10 Feb 2010, Dmitry Morozovsky wrote:

DM> other parts are regular SocketAM2+ motherboard, Athlon X4, 8G ram, 
DM> FreeBSD/amd64

well, not exactly "regular" - it's an ASUS M2N-LR-SATA with 10 SATA channels,
but I suppose there are comparable options in the "workstation" mobo market
now...

-- 
Sincerely,
D.Marck [DM5020, MCK-RIPE, DM3-RIPN]
[ FreeBSD committer: ma...@freebsd.org ]

*** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- ma...@rinet.ru ***



Re: hardware for home use large storage

2010-02-10 Thread Dmitry Morozovsky
On Mon, 8 Feb 2010, Dan Langille wrote:

DL> I'm looking at creating a large home use storage machine.  Budget is a
DL> concern, but size and reliability are also a priority.  Noise is also a
DL> concern, since this will be at home, in the basement.  That, and cost,
DL> pretty much rules out a commercial case, such as a 3U case.  It would be
DL> nice, but it greatly inflates the budget.  This pretty much restricts me to
DL> a tower case.

[snip]

We use the following at work, but it's still pretty cheap and pretty silent:

Chieftec WH-02B-B (9x5.25 bays)

filled with

2 x Supermicro CSE-MT35T 
http://www.supermicro.nl/products/accessories/mobilerack/CSE-M35T-1.cfm
for regular storage, 2 x raidz1

1 x Promise SuperSwap 1600
http://www.promise.com/product/product_detail_eng.asp?product_id=169
for changeable external backups

and still have 2 5.25 bays for anything interesting ;-)

other parts are regular SocketAM2+ motherboard, Athlon X4, 8G ram, 
FreeBSD/amd64

-- 
Sincerely,
D.Marck [DM5020, MCK-RIPE, DM3-RIPN]
[ FreeBSD committer: ma...@freebsd.org ]

*** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- ma...@rinet.ru ***



Re: hardware for home use large storage

2010-02-10 Thread Jonathan

On 2/8/2010 12:01 AM, Dan Langille wrote:

Hi,

I'm thinking of 8x1TB (or larger) SATA drives. I've found a case[2] with
hot-swap bays[3], that seems interesting. I haven't looked at power
supplies, but given that number of drives, I expect something beefy with
a decent reputation is called for.


I have a system with two of these [1] and an 8 port LSI SAS card that 
runs fine for me.  I run an 8 drive ZFS array off the LSI card and then 
have 2 drives mirrored off the motherboard SATA ports for booting with 
ZFS.  Hotswap works fine for me as well with this hardware.


Jonathan

http://www.newegg.com/Product/Product.aspx?Item=N82E16816215001


Re: hardware for home use large storage

2010-02-10 Thread Steve Polyack

On 2/10/2010 12:02 AM, Dan Langille wrote:

Trying to make sense of stuff I don't know about...

Matthew Dillon wrote:


AHCI on-motherboard with equivalent capabilities do not appear to be
in wide distribution yet.  Most AHCI chips can do NCQ to a single
target (even a single target behind a PM), but not concurrently to
multiple targets behind a port multiplier.  Even though SATA bandwidth
constraints might seem to make this a reasonable alternative it
actually isn't because any seek heavy activity to multiple drives
will be serialized and perform EXTREMELY poorly.  Linear performance
will be fine.  Random performance will be horrible.


Don't use a port multiplier and this goes away.  I was hoping to avoid 
a PM and using something like the Syba PCI Express SATA II 4 x Ports 
RAID Controller seems to be the best solution so far.


http://www.amazon.com/Syba-Express-Ports-Controller-SY-PEX40008/dp/B002R0DZWQ/ref=sr_1_22?ie=UTF8&s=electronics&qid=1258452902&sr=1-22 



Dan, I can personally vouch for these cards under FreeBSD.  We have 3 of 
them in one system, with almost every port connected to a port 
multiplier (SiI5xxx PMs).  Using the siis(4) driver on 8.0-RELEASE 
provides very good performance, and supports both NCQ and FIS-based 
switching (an essential for decent port-multiplier performance).


One thing to consider, however, is that the card is only single-lane 
PCI-Express.  The bandwidth available is only 2.5Gb/s (~312MB/sec, 
slightly less than that of the SATA-2 link spec), so if you have 4 
high-performance drives connected, you may hit a bottleneck at the 
bus.   I'd be particularly interested if anyone can find any similar 
Silicon Image SATA controllers with a PCI-E 4x or 8x interface ;)
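To put rough numbers on that bus bottleneck (generic PCIe arithmetic, not figures from this thread): a PCIe 1.x lane signals at 2.5 Gb/s, but 8b/10b encoding leaves only 8 of every 10 bits for payload, so usable bandwidth is closer to 250 MB/s than the naive 312 MB/s:

```shell
#!/bin/sh
# PCIe 1.x per-lane arithmetic (generic figures, not measurements).
line_rate=2500000000                              # 2.5 Gb/s on the wire
raw_mb=$((line_rate / 8 / 1000000))               # naive MB/s, ignoring encoding
usable_mb=$((line_rate * 8 / 10 / 8 / 1000000))   # payload MB/s after 8b/10b
echo "raw ~${raw_mb} MB/s, usable ~${usable_mb} MB/s"
```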






It should be noted that while hotswap is supported with Silicon Image
chipsets and port multiplier enclosures (which also use SiI chips in
the enclosure), the hot-swap capability is not anywhere near as robust
as you would find with a more costly commercial SAS setup.  SiI chips
are very poorly made (this is the same company that went bust under
another name a few years back due to shoddy chipsets), and have a lot
of on-chip hardware bugs, but fortunately OSS driver writers (linux
guys) have been able to work around most of them.  So even though the
chipset is a bit shoddy, actual operation is quite good.  However,
this does mean you generally want to idle all activity on the enclosure
to safely hot swap anything, not just the drive you are pulling out.
I've done a lot of testing, and hot-swapping an idle disk while other
drives in the same enclosure are hot is not reliable (for a cheap port
multiplier enclosure using a SiI chip inside, which nearly all do).



I haven't had as bad an experience as the above, but it is certainly a 
concern.  Using ZFS we simply 'offline' the device, pull it, replace it with 
a new one, glabel it, and 'zpool replace'.  It seems to work fine as long as 
nothing is accessing the device you are replacing (otherwise you will 
get a kernel panic a few minutes down the line).  m...@freebsd.org has 
also committed a large patch set to 9-CURRENT which implements "proper" 
SATA/AHCI hot-plug support and error recovery through CAM.
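A rough sketch of that sequence (the pool and device names here are placeholders, not taken from the thread):

```sh
# Example only: pool "tank", new disk ada3, GEOM label "disk3".
zpool offline tank label/disk3    # stop I/O to the failing disk
# ...physically swap the drive while the enclosure is idle...
glabel label disk3 /dev/ada3      # re-create the label on the new disk
zpool replace tank label/disk3    # resilver onto the replacement
zpool status tank                 # watch resilver progress
```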


-Steve Polyack


Re: hardware for home use large storage

2010-02-10 Thread David N
On 10 February 2010 08:33, Christian Weisgerber  wrote:
> Matthew D. Fuller  wrote:
>
>> > I have something similar (5x1Tb) - I have a Gigabyte GA-MA785GM-US2H
>> > with an Athlon X2 and 4Gb of RAM (only half filled - 2x2Gb)
>> >
>> > Note that it doesn't support ECC, I don't know if that is a problem.
>>
>> How's that?  Is the BIOS just stupid, or is the board physically
>> missing traces?
>
> Doesn't matter really, does it?
>
> I have a GA-MA78G-DS3H.  According to the specs, it supports ECC
> memory.  And that is all the mention of ECC you will find anywhere.
> There is nothing in the BIOS.  My best guess is that they quite
> literally mean that you can plug ECC memory into the board and it
> will work, but that there are no provisions to actually use ECC.
>
> That said, I also have an Asus M2N-SLI Deluxe.  If I enable ECC in
> the BIOS, the board locks up sooner or later, even when just sitting
> in the BIOS.  memtest86 dies a screaming death immediately.  When
> I disable ECC, the board is solid, both in actual use and with
> memtest.
>
> I thought if I built a PC from components, I'd be already a step
> above the lowest dregs of the consumer market, but apparently not.
>
> --
> Christian "naddy" Weisgerber                          na...@mips.inka.de
>

I had an M2A-VM HDMI that had the ECC problem; ASUS released a BIOS
update for it. I'm not sure whether they fixed that problem for the M2N.

From what I've seen, most ASUS boards have the ECC option, but don't take
my word for it.

Regards
David N


Re: hardware for home use large storage

2010-02-10 Thread Boris Kochergin

Dan Langille wrote:

Boris Kochergin wrote:

Dan Langille wrote:

Boris Kochergin wrote:

Peter C. Lai wrote:

On 2010-02-09 06:37:47AM -0500, Dan Langille wrote:
 

Charles Sprickman wrote:
 

On Mon, 8 Feb 2010, Dan Langille wrote:
Also, it seems like
people who use zfs (or gmirror + gstripe) generally end up 
buying pricey hardware raid cards for compatibility reasons.  
There seem to be no decent add-on SATA cards that play nice with 
FreeBSD other than that weird supermicro card that has to be 
physically hacked about to fit.
  


Mostly only because certain cards have issues w/shoddy JBOD 
implementation. Some cards (most notably ones like Adaptec 2610A 
which was rebranded by Dell as the "CERC SATA 1.5/6ch" back in the 
day) won't let you run the drives in passthrough mode and seem to 
all want to stick their grubby little RAID paws into your JBOD 
setup (i.e. the only way to have minimal
participation from the "hardware" RAID is to set each disk as its 
own RAID-0/volume in the controller BIOS) which then cascades into 
issues with SMART, AHCI, "triple caching"/write reordering, etc on 
the FreeBSD side (the controller's own craptastic cache, ZFS vdev 
cache, vmm/app cache, oh my!). So *some* people go with something 
tried-and-true (basically bordering on server-level cards that let 
you ditch any BIOS type of RAID config and present the raw disk 
devices to the kernel)
As someone else has mentioned, recent SiL stuff works well. I have 
multiple 
http://www.newegg.com/Product/Product.aspx?Item=N82E16816132008 
cards servicing RAID-Z2 and GEOM_RAID3 arrays on 8.0-RELEASE and 
8.0-STABLE machines using both the old ata(4) driver and ATA_CAM. 
Don't let the RAID label scare you--that stuff is off by default 
and the controller just presents the disks to the operating system. 
Hot swap works. I haven't had the time to try the siis(4) driver 
for them, which would result in better performance.


That's a really good price. :)

If needed, I could host all eight SATA drives for $160, much cheaper 
than any of the other RAID cards I've seen.


The issue then is finding a motherboard which has 4x PCI Express 
slots.  ;)
If you want to go this route, I bought one a while ago so that I 
could stuff as many dual-port Gigabit Ethernet controllers into it as 
possible (it was a SPAN port replicator): 
http://www.newegg.com/Product/Product.aspx?Item=N82E16813130136. 
Newegg doesn't carry it anymore, but if you can find it elsewhere, I 
can vouch for its stability:


# uptime
1:20PM  up 494 days,  5:23, 1 user, load averages: 0.05, 0.07, 0.05

In my setups with those Silicon Image cards, though, they serve as 
additional controllers, with the following onboard SATA controllers 
being used to provide most of the ports:


I don't know what the above means.

I think it means you are primarily using the onboard SATA controllers 
and have those Silicon Image cards providing additional ports where 
required.

Correct.




SB600 (AMD/ATI)
SB700 (AMD/ATI)
ICH9 (Intel)
63XXESB2 (Intel)


These are the chipsets on that motherboard?
Those are the SATA controller chipsets. Here are the corresponding 
chipsets advertised on the motherboards, in north bridge/south bridge form:


SB600 SATA: AMD 770/AMD SB600
SB700 SATA: AMD SR5690/AMD SP5100
ICH9 SATA: Intel 3200/Intel ICH9
63XXESB2 SATA: Intel 5000X/Intel ESB2

-Boris




I haven't had any problems with any of them.

-Boris 





Re: hardware for home use large storage

2010-02-10 Thread Dan Langille

Boris Kochergin wrote:

Dan Langille wrote:

Boris Kochergin wrote:

Peter C. Lai wrote:

On 2010-02-09 06:37:47AM -0500, Dan Langille wrote:
 

Charles Sprickman wrote:
 

On Mon, 8 Feb 2010, Dan Langille wrote:
Also, it seems like
people who use zfs (or gmirror + gstripe) generally end up buying 
pricey hardware raid cards for compatibility reasons.  There seem 
to be no decent add-on SATA cards that play nice with FreeBSD 
other than that weird supermicro card that has to be physically 
hacked about to fit.
  


Mostly only because certain cards have issues w/shoddy JBOD 
implementation. Some cards (most notably ones like Adaptec 2610A 
which was rebranded by Dell as the "CERC SATA 1.5/6ch" back in the 
day) won't let you run the drives in passthrough mode and seem to 
all want to stick their grubby little RAID paws into your JBOD setup 
(i.e. the only way to have minimal
participation from the "hardware" RAID is to set each disk as its 
own RAID-0/volume in the controller BIOS) which then cascades into 
issues with SMART, AHCI, "triple caching"/write reordering, etc on 
the FreeBSD side (the controller's own craptastic cache, ZFS vdev 
cache, vmm/app cache, oh my!). So *some* people go with something 
tried-and-true (basically bordering on server-level cards that let 
you ditch any BIOS type of RAID config and present the raw disk 
devices to the kernel)
As someone else has mentioned, recent SiL stuff works well. I have 
multiple 
http://www.newegg.com/Product/Product.aspx?Item=N82E16816132008 cards 
servicing RAID-Z2 and GEOM_RAID3 arrays on 8.0-RELEASE and 8.0-STABLE 
machines using both the old ata(4) driver and ATA_CAM. Don't let the 
RAID label scare you--that stuff is off by default and the controller 
just presents the disks to the operating system. Hot swap works. I 
haven't had the time to try the siis(4) driver for them, which would 
result in better performance.


That's a really good price. :)

If needed, I could host all eight SATA drives for $160, much cheaper 
than any of the other RAID cards I've seen.


The issue then is finding a motherboard which has 4x PCI Express 
slots.  ;)
If you want to go this route, I bought one a while ago so that I could 
stuff as many dual-port Gigabit Ethernet controllers into it as possible 
(it was a SPAN port replicator): 
http://www.newegg.com/Product/Product.aspx?Item=N82E16813130136. Newegg 
doesn't carry it anymore, but if you can find it elsewhere, I can vouch 
for its stability:


# uptime
1:20PM  up 494 days,  5:23, 1 user, load averages: 0.05, 0.07, 0.05

In my setups with those Silicon Image cards, though, they serve as 
additional controllers, with the following onboard SATA controllers 
being used to provide most of the ports:


I don't know what the above means.

I think it means you are primarily using the onboard SATA controllers and 
have those Silicon Image cards providing additional ports where required.




SB600 (AMD/ATI)
SB700 (AMD/ATI)
ICH9 (Intel)
63XXESB2 (Intel)


These are the chipsets on that motherboard?



I haven't had any problems with any of them.

-Boris




Re: hardware for home use large storage

2010-02-10 Thread Boris Kochergin

Dan Langille wrote:

Boris Kochergin wrote:

Peter C. Lai wrote:

On 2010-02-09 06:37:47AM -0500, Dan Langille wrote:
 

Charles Sprickman wrote:
  

On Mon, 8 Feb 2010, Dan Langille wrote:
Also, it seems like
people who use zfs (or gmirror + gstripe) generally end up buying 
pricey hardware raid cards for compatibility reasons.  There seem 
to be no decent add-on SATA cards that play nice with FreeBSD 
other than that weird supermicro card that has to be physically 
hacked about to fit.
  


Mostly only because certain cards have issues w/shoddy JBOD 
implementation. Some cards (most notably ones like Adaptec 2610A 
which was rebranded by Dell as the "CERC SATA 1.5/6ch" back in the 
day) won't let you run the drives in passthrough mode and seem to 
all want to stick their grubby little RAID paws into your JBOD setup 
(i.e. the only way to have minimal
participation from the "hardware" RAID is to set each disk as its 
own RAID-0/volume in the controller BIOS) which then cascades into 
issues with SMART, AHCI, "triple caching"/write reordering, etc on 
the FreeBSD side (the controller's own craptastic cache, ZFS vdev 
cache, vmm/app cache, oh my!). So *some* people go with something 
tried-and-true (basically bordering on server-level cards that let 
you ditch any BIOS type of RAID config and present the raw disk 
devices to the kernel)
As someone else has mentioned, recent SiL stuff works well. I have 
multiple 
http://www.newegg.com/Product/Product.aspx?Item=N82E16816132008 cards 
servicing RAID-Z2 and GEOM_RAID3 arrays on 8.0-RELEASE and 8.0-STABLE 
machines using both the old ata(4) driver and ATA_CAM. Don't let the 
RAID label scare you--that stuff is off by default and the controller 
just presents the disks to the operating system. Hot swap works. I 
haven't had the time to try the siis(4) driver for them, which would 
result in better performance.


That's a really good price. :)

If needed, I could host all eight SATA drives for $160, much cheaper 
than any of the other RAID cards I've seen.


The issue then is finding a motherboard which has 4x PCI Express 
slots.  ;)
If you want to go this route, I bought one a while ago so that I could 
stuff as many dual-port Gigabit Ethernet controllers into it as possible 
(it was a SPAN port replicator): 
http://www.newegg.com/Product/Product.aspx?Item=N82E16813130136. Newegg 
doesn't carry it anymore, but if you can find it elsewhere, I can vouch 
for its stability:


# uptime
1:20PM  up 494 days,  5:23, 1 user, load averages: 0.05, 0.07, 0.05

In my setups with those Silicon Image cards, though, they serve as 
additional controllers, with the following onboard SATA controllers 
being used to provide most of the ports:


SB600 (AMD/ATI)
SB700 (AMD/ATI)
ICH9 (Intel)
63XXESB2 (Intel)

I haven't had any problems with any of them.

-Boris


Re: hardware for home use large storage

2010-02-10 Thread Matthew Dillon
:Correction -- more than likely on a consumer motherboard you *will not*
:be able to put a non-VGA card into the PCIe x16 slot.  I have numerous
:Asus and Gigabyte motherboards which only accept graphics cards in their
:PCIe x16 slots; this """feature""" is documented in user manuals.  I
:don't know how/why these companies chose to do this, but whatever.
:
:I would strongly advocate that the OP (who has stated he's focusing on
:stability and reliability over speed) purchase a server motherboard that
:has a PCIe x8 slot on it and/or server chassis (usually best to buy both
:of these things from the same vendor) and be done with it.
:
:-- 
:| Jeremy Chadwick   j...@parodius.com |

It is possible this is related to the way Intel on-board graphics
work in recent chipsets.  e.g. i915 or i925 chipsets.  The
on-motherboard video uses a 16-lane internal PCI-e connection which
is SHARED with the 16-lane PCI-e slot.  If you plug something into
the slot (e.g. a graphics card), it disables the on-motherboard
video.  I'm not sure if the BIOS can still boot if you plug something
other than a video card into these MBs and no video at all is available.
Presumably it should be able to, you just wouldn't have any video at
all.

Insofar as I know AMD-based MBs with on-board video don't have this
issue, though it should also be noted that AMD-based MBs tend to be
about 6-8 months behind Intel ones in terms of features.

-Matt



Re: hardware for home use large storage / remote management KVM card

2010-02-10 Thread Jeremy Chadwick
On Wed, Feb 10, 2010 at 02:30:54PM +0100, Miroslav Lachman wrote:
> Svein Skogen (Listmail Account) wrote:
> >-BEGIN PGP SIGNED MESSAGE-
> >Hash: SHA1
> >
> >On 09.02.2010 15:37, Miroslav Lachman wrote:
> >*SNIP*
> >>
> >>I can't agree with the last statement about HP's iLO. I have addon card
> >>in ML110 G5 (dedicated NIC), the card is "expensive" and bugs are
> >>amazing. The management NIC freezes once a day (or more often) with
> >>older firmware and must be restarted from inside the installed system by
> >>IPMI command on "localhost". With newer firmware, the interface is
> >>periodically restarted. The virtual media doesn't work at all. It is my
> >>worst experience with remote management cards.
> >>I believe that other HP servers with built-in card with different FW is
> >>working better, this is just my experience.
> >>
> >>Next one is eLOM in Sun Fire X2100 (shared NIC using bge + ASF). ASF
> >>works without problem, but virtual media works only if you are
> >>connecting by IP address, not by domain name (from Windows machines) and
> >>there is some issue with timeouts of virtual media / console.
> >>I reported this + 8 different bugs of web management interface to Sun
> >>more than year ago - none was fixed.
> >>
> >>Next place is for IBM 3650 + RSA II card (dedicated NIC). Expensive,
> >>something works, something doesn't. For example, the card can't read CPU
> >>temperature, so you will not receive any alert in case of overheating.
> >>(it was 2 years ago, maybe newer firmware is fixed)
> >>
> >>Then I have one Supermicro Twin server 6016TT-TF with built-in IPMI /
> >>KVM with dedicated NIC port. I found one bug with fan rpm readings (half
> >>the number compared to BIOS numbers) and one problem with FreeBSD 7.x
> >>sysinstall (USB keyboard not working, but sysinstall from 8.x works
> >>without problem). In installed FreeBSD system keyboard and virtual media
> >>is working without problems.
> >>
> >>On the top is Dell R610 DRAC (dedicated NIC) - I didn't find any bugs
> >>and there are a lot more features compared to concurrent products.
> >>
> >
> >I think the general consensus here is "nice theory lousy
> >implementation", and the added migraine of no such thing as a common
> >standard.
> >
> >Maybe creating a common standard for this could be a nice GSOC project,
> >to build a nice "remote console" based on SSH and arm/mips?
> >
> >p.s. I've seen the various proprietary remote console solutions. They
> >didn't really impress me much, so I ended up using off-the-shelf
> >components for building my servers. Not necessarily cheaper, but at
> >least it's under _MY_ control.
> >
> >//Svein
> 
> Does anybody have experiences with ATEN IP8000 card?
> I found it today
> http://www.aten.com/products/productItem.php?pcid=2006041110563001&psid=20060411131311002&pid=20080401180847001&layerid=subClass1
> 
> It is not cheap, but it seems like a universal solution for any
> motherboard with a PCI slot.
> 
> "Host-side OS support - Windows 2000/2003/XP/NT/Vista; Redhat 7.1 and
> above; FreeBSD, Novell"

There's also the PC Weasel[1], which does VGA-to-serial and provides
reset/power-cycle capability over the serial port.  100% OS-independent.
The concept itself is really cool[2], but there's 3 major problems:

1) PCI version is 5V; some systems are limited to 3.3V PCI slots (see
   Wikipedia: http://en.wikipedia.org/wiki/File:PCI_Keying.png) -- not
   to mention lots of systems are doing away with PCI altogether (in
   servers especially)
2) Limited to 38400 bps throughput (I run serial consoles at 115200),
3) Very expensive -- US$350 *per card*.

I'm surprised no one else has come up with a similar solution especially
given the regularity of DSPs, CPLDs, and FPGAs in this day and age.

[1]: http://www.realweasel.com/intro.html
[2]: http://www.realweasel.com/design.html

-- 
| Jeremy Chadwick   j...@parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.  PGP: 4BD6C0CB |



Re: hardware for home use large storage / remote management KVM card

2010-02-10 Thread Miroslav Lachman

Svein Skogen (Listmail Account) wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 09.02.2010 15:37, Miroslav Lachman wrote:
*SNIP*


I can't agree with the last statement about HP's iLO. I have an add-on card
in an ML110 G5 (dedicated NIC); the card is "expensive" and the bugs are
amazing. The management NIC freezes once a day (or more often) with
older firmware and must be restarted from inside the installed system by
an IPMI command on "localhost". With newer firmware, the interface is
periodically restarted. The virtual media doesn't work at all. It is my
worst experience with remote management cards.
I believe that other HP servers, with built-in cards and different firmware,
work better; this is just my experience.

Next one is the eLOM in a Sun Fire X2100 (shared NIC using bge + ASF). ASF
works without problems, but virtual media works only if you are
connecting by IP address, not by domain name (from Windows machines), and
there is some issue with timeouts of virtual media / console.
I reported this plus 8 different bugs in the web management interface to Sun
more than a year ago - none was fixed.

Next place is for the IBM 3650 + RSA II card (dedicated NIC). Expensive;
something works, something doesn't. For example, the card can't read CPU
temperature, so you will not receive any alert in case of overheating.
(That was 2 years ago; maybe newer firmware has fixed it.)

Then I have one Supermicro Twin server 6016TT-TF with built-in IPMI /
KVM with a dedicated NIC port. I found one bug with fan rpm readings (half
the number compared to BIOS numbers) and one problem with FreeBSD 7.x
sysinstall (USB keyboard not working, but sysinstall from 8.x works
without problems). In the installed FreeBSD system, keyboard and virtual
media work without problems.

At the top is the Dell R610 DRAC (dedicated NIC) - I didn't find any bugs,
and there are a lot more features compared to competing products.



I think the general consensus here is "nice theory lousy
implementation", and the added migraine of no such thing as a common
standard.

Maybe creating a common standard for this could be a nice GSOC project,
to build a nice "remote console" based on SSH and arm/mips?

p.s. I've seen the various proprietary remote console solutions. They
didn't really impress me much, so I ended up using off-the-shelf
components for building my servers. Not necessarily cheaper, but at
least it's under _MY_ control.

//Svein


Does anybody have experiences with ATEN IP8000 card?
I found it today
http://www.aten.com/products/productItem.php?pcid=2006041110563001&psid=20060411131311002&pid=20080401180847001&layerid=subClass1

It is not cheap, but it seems like a universal solution for any motherboard 
with a PCI slot.


"Host-side OS support - Windows 2000/2003/XP/NT/Vista; Redhat 7.1 and
above; FreeBSD, Novell"

Miroslav Lachman


Re: hardware for home use large storage

2010-02-10 Thread Gót András
On Wed, February 10, 2010 11:55 am, Jeremy Chadwick wrote:
> On Wed, Feb 10, 2010 at 11:27:53AM +0100, Pieter de Goeje wrote:
>
>> On Wednesday 10 February 2010 05:28:57 Dan Langille wrote:
>>
>>> Boris Kochergin wrote:
>>>
 Peter C. Lai wrote:

> On 2010-02-09 06:37:47AM -0500, Dan Langille wrote:
>
>> Charles Sprickman wrote:
>>
>>> On Mon, 8 Feb 2010, Dan Langille wrote:
>>> Also, it seems like
>>> people who use zfs (or gmirror + gstripe) generally end up
>>> buying pricey hardware raid cards for compatibility reasons.
>>> There seem to
>>> be no decent add-on SATA cards that play nice with FreeBSD
>>> other than that weird supermicro card that has to be
>>> physically hacked about to fit.
>
> Mostly only because certain cards have issues w/shoddy JBOD
> implementation. Some cards (most notably ones like Adaptec 2610A
> which was rebranded by Dell as the "CERC SATA 1.5/6ch" back in the
> day) won't let you run the drives in passthrough mode and seem to
> all want to stick their grubby little RAID paws into your JBOD
> setup (i.e. the only way to have minimal participation from the
> "hardware" RAID is to set each disk as its own
> RAID-0/volume in the controller BIOS) which then cascades into
> issues with SMART, AHCI, "triple caching"/write reordering, etc on
> the FreeBSD side (the controller's own craptastic cache, ZFS vdev
> cache, vmm/app cache, oh my!). So *some* people go with something
> tried-and-true (basically bordering on server-level cards that
> let you ditch any BIOS type of RAID config and present the raw
> disk devices to the kernel)

 As someone else has mentioned, recent SiL stuff works well. I have
 multiple
 http://www.newegg.com/Product/Product.aspx?Item=N82E16816132008
 cards servicing RAID-Z2 and GEOM_RAID3 arrays on 8.0-RELEASE and
 8.0-STABLE machines using both the old ata(4) driver and ATA_CAM.
 Don't
 let the RAID label scare you--that stuff is off by default and the
 controller just presents the disks to the operating system. Hot
 swap works. I haven't had the time to try the siis(4) driver for
 them, which would result in better performance.
>>>
>>> That's a really good price. :)
>>>
>>>
>>> If needed, I could host all eight SATA drives for $160, much cheaper
>>> than any of the other RAID cards I've seen.
>>>
>>> The issue then is finding a motherboard which has 4x PCI Express
>>> slots.  ;)
>>
>> You should be able to put PCIe 4x card in a PCIe 16x or 8x slot.
>> For an explanation allow me to quote wikipedia:
>>
>>
>> "A PCIe card will fit into a slot of its physical size or bigger, but
>> may not fit into a smaller PCIe slot. Some slots use open-ended sockets
>> to permit physically longer cards and will negotiate the best available
>> electrical connection. The number of lanes actually connected to a slot
>> may also be less than the number supported by the physical slot size. An
>> example is a x8 slot that actually only runs at ×1; these slots will
>> allow any ×1, ×2, ×4 or ×8 card to be used, though only running at the
>> ×1 speed. This type of socket is
>> described as a ×8 (×1 mode) slot, meaning it physically accepts up to ×8
>> cards but only runs at ×1 speed. The advantage gained is that a larger
>> range of PCIe cards can still be used without requiring the motherboard
>> hardware to support the full transfer rate, in so doing keeping design
>> and implementation costs down."
>
> Correction -- more than likely on a consumer motherboard you *will not*
> be able to put a non-VGA card into the PCIe x16 slot.  I have numerous Asus
> and Gigabyte motherboards which only accept graphics cards in their PCIe
> x16 slots; this """feature""" is documented in user manuals.  I don't know
> how/why these companies chose to do this, but whatever.
>
> I would strongly advocate that the OP (who has stated he's focusing on
> stability and reliability over speed) purchase a server motherboard that
> has a PCIe x8 slot on it and/or server chassis (usually best to buy both
> of these things from the same vendor) and be done with it.

Hi,

We're running an 'old' LSI U320 x4 (or x8) PCIe hardware RAID card in a
simple Gigabyte mobo without any problems. It was plug and play. The mobo
has a P35 chipset and an E7400 CPU. If the exact types are needed, I'll
look them up. (And yes, the good old U320 SCSI is lightning fast compared
to any new SATA drive, and only 3x36GB disks are in RAID5. I know it
won't win the capacity contest... :) )

I think these single-CPU server boards are quite overpriced relative to
the few extra features that would make someone buy them.

Anyway, I liked that Atom D510 Supermicro mobo that was mentioned earlier.
I think it would handle any good PCIe card and would fit in a nice
Supermicro tower. I'd also suggest going with as few disks as you can. 2TB
disks are here, so you can make a 4TB R5 array with

Re: hardware for home use large storage

2010-02-10 Thread Jeremy Chadwick
On Wed, Feb 10, 2010 at 11:27:53AM +0100, Pieter de Goeje wrote:
> On Wednesday 10 February 2010 05:28:57 Dan Langille wrote:
> > Boris Kochergin wrote:
> > > Peter C. Lai wrote:
> > >> On 2010-02-09 06:37:47AM -0500, Dan Langille wrote:
> > >>> Charles Sprickman wrote:
> >  On Mon, 8 Feb 2010, Dan Langille wrote:
> >  Also, it seems like
> >  people who use zfs (or gmirror + gstripe) generally end up buying
> >  pricey hardware raid cards for compatibility reasons.  There seem to
> >  be no decent add-on SATA cards that play nice with FreeBSD other
> >  than that weird supermicro card that has to be physically hacked
> >  about to fit.
> > >>
> > >> Mostly only because certain cards have issues w/shoddy JBOD
> > >> implementation. Some cards (most notably ones like Adaptec 2610A which
> > >> was rebranded by Dell as the "CERC SATA 1.5/6ch" back in the day)
> > >> won't let you run the drives in passthrough mode and seem to all want
> > >> to stick their grubby little RAID paws into your JBOD setup (i.e. the
> > >> only way to have minimal
> > >> participation from the "hardware" RAID is to set each disk as its own
> > >> RAID-0/volume in the controller BIOS) which then cascades into issues
> > >> with SMART, AHCI, "triple caching"/write reordering, etc on the
> > >> FreeBSD side (the controller's own craptastic cache, ZFS vdev cache,
> > >> vmm/app cache, oh my!). So *some* people go with something
> > >> tried-and-true (basically bordering on server-level cards that let you
> > >> ditch any BIOS type of RAID config and present the raw disk devices to
> > >> the kernel)
> > >
> > > As someone else has mentioned, recent SiL stuff works well. I have
> > > multiple http://www.newegg.com/Product/Product.aspx?Item=N82E16816132008
> > > cards servicing RAID-Z2 and GEOM_RAID3 arrays on 8.0-RELEASE and
> > > 8.0-STABLE machines using both the old ata(4) driver and ATA_CAM. Don't
> > > let the RAID label scare you--that stuff is off by default and the
> > > controller just presents the disks to the operating system. Hot swap
> > > works. I haven't had the time to try the siis(4) driver for them, which
> > > would result in better performance.
> > 
> > That's a really good price. :)
> > 
> > If needed, I could host all eight SATA drives for $160, much cheaper
> > than any of the other RAID cards I've seen.
> > 
> > The issue then is finding a motherboard which has 4x PCI Express slots.  ;)
> 
> You should be able to put PCIe 4x card in a PCIe 16x or 8x slot. 
> For an explanation allow me to quote wikipedia:
> 
> "A PCIe card will fit into a slot of its physical size or bigger, but may not 
> fit into a smaller PCIe slot. Some slots use open-ended sockets to permit 
> physically longer cards and will negotiate the best available electrical 
> connection. The number of lanes actually connected to a slot may also be less 
> than the number supported by the physical slot size. An example is a x8 slot 
> that actually only runs at ×1; these slots will allow any ×1, ×2, ×4 or ×8 
> card to be used, though only running at the ×1 speed. This type of socket is 
> described as a ×8 (×1 mode) slot, meaning it physically accepts up to ×8 
> cards 
> but only runs at ×1 speed. The advantage gained is that a larger range of 
> PCIe 
> cards can still be used without requiring the motherboard hardware to support 
> the full transfer rate, in so doing keeping design and implementation costs 
> down."

Correction -- more than likely on a consumer motherboard you *will not*
be able to put a non-VGA card into the PCIe x16 slot.  I have numerous
Asus and Gigabyte motherboards which only accept graphics cards in their
PCIe x16 slots; this """feature""" is documented in user manuals.  I
don't know how/why these companies chose to do this, but whatever.

I would strongly advocate that the OP (who has stated he's focusing on
stability and reliability over speed) purchase a server motherboard that
has a PCIe x8 slot on it and/or server chassis (usually best to buy both
of these things from the same vendor) and be done with it.

-- 
| Jeremy Chadwick   j...@parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.  PGP: 4BD6C0CB |

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: hardware for home use large storage

2010-02-10 Thread Pieter de Goeje
On Wednesday 10 February 2010 05:28:57 Dan Langille wrote:
> Boris Kochergin wrote:
> > Peter C. Lai wrote:
> >> On 2010-02-09 06:37:47AM -0500, Dan Langille wrote:
> >>> Charles Sprickman wrote:
>  On Mon, 8 Feb 2010, Dan Langille wrote:
>  Also, it seems like
>  people who use zfs (or gmirror + gstripe) generally end up buying
>  pricey hardware raid cards for compatibility reasons.  There seem to
>  be no decent add-on SATA cards that play nice with FreeBSD other
>  than that weird supermicro card that has to be physically hacked
>  about to fit.
> >>
> >> Mostly only because certain cards have issues w/shoddy JBOD
> >> implementation. Some cards (most notably ones like Adaptec 2610A which
> >> was rebranded by Dell as the "CERC SATA 1.5/6ch" back in the day)
> >> won't let you run the drives in passthrough mode and seem to all want
> >> to stick their grubby little RAID paws into your JBOD setup (i.e. the
> >> only way to have minimal
> >> participation from the "hardware" RAID is to set each disk as its own
> >> RAID-0/volume in the controller BIOS) which then cascades into issues
> >> with SMART, AHCI, "triple caching"/write reordering, etc on the
> >> FreeBSD side (the controller's own craptastic cache, ZFS vdev cache,
> >> vmm/app cache, oh my!). So *some* people go with something
> >> tried-and-true (basically bordering on server-level cards that let you
> >> ditch any BIOS type of RAID config and present the raw disk devices to
> >> the kernel)
> >
> > As someone else has mentioned, recent SiL stuff works well. I have
> > multiple http://www.newegg.com/Product/Product.aspx?Item=N82E16816132008
> > cards servicing RAID-Z2 and GEOM_RAID3 arrays on 8.0-RELEASE and
> > 8.0-STABLE machines using both the old ata(4) driver and ATA_CAM. Don't
> > let the RAID label scare you--that stuff is off by default and the
> > controller just presents the disks to the operating system. Hot swap
> > works. I haven't had the time to try the siis(4) driver for them, which
> > would result in better performance.
> 
> That's a really good price. :)
> 
> If needed, I could host all eight SATA drives for $160, much cheaper
> than any of the other RAID cards I've seen.
> 
> The issue then is finding a motherboard which has 4x PCI Express slots.  ;)

You should be able to put a PCIe 4x card in a PCIe 16x or 8x slot.
For an explanation, allow me to quote Wikipedia:

"A PCIe card will fit into a slot of its physical size or bigger, but may not 
fit into a smaller PCIe slot. Some slots use open-ended sockets to permit 
physically longer cards and will negotiate the best available electrical 
connection. The number of lanes actually connected to a slot may also be less 
than the number supported by the physical slot size. An example is a x8 slot 
that actually only runs at ×1; these slots will allow any ×1, ×2, ×4 or ×8 
card to be used, though only running at the ×1 speed. This type of socket is 
described as a ×8 (×1 mode) slot, meaning it physically accepts up to ×8 cards 
but only runs at ×1 speed. The advantage gained is that a larger range of PCIe 
cards can still be used without requiring the motherboard hardware to support 
the full transfer rate—in so doing keeping design and implementation costs 
down."
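As a rough illustration of the lane arithmetic in that quote (assuming first-generation PCIe at roughly 250 MB/s per lane per direction, the generation these SiI-based cards use, and a ballpark ~100 MB/s sustained per 7200 RPM drive):

```python
# PCIe 1.x moves ~250 MB/s per lane per direction (2.5 GT/s, 8b/10b
# encoding), so the negotiated lane count, not the physical slot size,
# sets the ceiling for an add-on SATA controller.
PCIE1_MB_PER_LANE = 250

def link_bandwidth_mb(lanes):
    """Approximate one-direction bandwidth of a PCIe 1.x link."""
    return lanes * PCIE1_MB_PER_LANE

# A x4 card in a x16 slot still negotiates a x4 link:
print(link_bandwidth_mb(min(4, 16)))    # -> 1000 (MB/s)

# Eight drives sustaining ~100 MB/s each fit behind a true x4 link,
# but not behind a "x8 (x1 mode)" slot actually running at x1:
print(8 * 100 <= link_bandwidth_mb(4))  # -> True
print(8 * 100 <= link_bandwidth_mb(1))  # -> False
```

So the eight-drive card is fine in a full x16 or x8 slot, but the "(x1 mode)" slots described above would bottleneck it.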

-- Pieter


Re: hardware for home use large storage

2010-02-09 Thread Niki Denev
On Wed, Feb 10, 2010 at 12:56 AM, Peter C. Lai  wrote:
> On 2010-02-09 05:32:02PM -0500, Charles Sprickman wrote:
>> On Tue, 9 Feb 2010, Jeremy Chadwick wrote:
>>> One similar product that does seem to work well is iLO, available on
>>> HP/Compaq hardware.
>>
>> I've heard great things about that.  It seems like a much better design -
>> it's essentially a small server that is independent from the main host. Has
>> it's own LAN and serial ports as well.
>>
>> Charles
>
> Dell PowerEdge Remote Access (DRAC) cards also provided this as well,
> and for a while there, you could actually VNC into them. But HP offers iLO
> for no extra charge or discount upon removal (DRACs are worth about $250)
> and has become such a prominent "must-have" datacenter feature that the
> "iLO" term is beginning to become genericized for web-accessible and virtual
> disc-capable onboard out-of-band IP-console management.
>
> --
> ===
> Peter C. Lai                 | Bard College at Simon's Rock
> Systems Administrator        | 84 Alford Rd.
> Information Technology Svcs. | Gt. Barrington, MA 01230 USA
> peter AT simons-rock.edu     | (413) 528-7428
> ===
>

I thought that their VNC implementation was non-standard, and
I wasn't able to VNC into them, at least on the latest Core i7 models.


Re: hardware for home use large storage

2010-02-09 Thread Dan Langille

Trying to make sense of stuff I don't know about...

Matthew Dillon wrote:

> AHCI on-motherboard with equivalent capabilities do not appear to be
> in wide distribution yet.  Most AHCI chips can do NCQ to a single
> target (even a single target behind a PM), but not concurrently to
> multiple targets behind a port multiplier.  Even though SATA bandwidth
> constraints might seem to make this a reasonable alternative it
> actually isn't because any seek heavy activity to multiple drives
> will be serialized and perform EXTREMELY poorly.  Linear performance
> will be fine.  Random performance will be horrible.

Don't use a port multiplier and this goes away.  I was hoping to avoid a
PM, and using something like the Syba PCI Express SATA II 4 x Ports RAID
Controller seems to be the best solution so far.

http://www.amazon.com/Syba-Express-Ports-Controller-SY-PEX40008/dp/B002R0DZWQ/ref=sr_1_22?ie=UTF8&s=electronics&qid=1258452902&sr=1-22

> It should be noted that while hotswap is supported with silicon image
> chipsets and port multiplier enclosures (which also use Sili chips in
> the enclosure), the hot-swap capability is not anywhere near as robust
> as you would find with a more costly commercial SAS setup.  SI chips
> are very poorly made (this is the same company that went bust under
> another name a few years back due to shoddy chipsets), and have a lot
> of on-chip hardware bugs, but fortunately OSS driver writers (linux
> guys) have been able to work around most of them.  So even though the
> chipset is a bit shoddy actual operation is quite good.  However,
> this does mean you generally want to idle all activity on the enclosure
> to safely hot swap anything, not just the drive you are pulling out.
> I've done a lot of testing and hot-swapping an idle disk while other
> drives in the same enclosure are hot is not reliable (for a cheap port
> multiplier enclosure using a Sili chip inside, which nearly all do).

What I'm planning to use is a SATA enclosure, but I'm pretty sure a port
multiplier is not involved:

http://www.athenapower.us/web_backplane_zoom/bp_sata3141b.html

> Also, a disk failure within the enclosure can create major command
> sequencing issues for other targets in the enclosure because error
> processing has to be serialized.  Fine for home use but don't expect
> miracles if you have a drive failure.

Another reason to avoid port multipliers.



Re: hardware for home use large storage

2010-02-09 Thread Dan Langille

Boris Kochergin wrote:
> Peter C. Lai wrote:
>> On 2010-02-09 06:37:47AM -0500, Dan Langille wrote:
>>> Charles Sprickman wrote:
>>>> On Mon, 8 Feb 2010, Dan Langille wrote:
>>>> Also, it seems like
>>>> people who use zfs (or gmirror + gstripe) generally end up buying
>>>> pricey hardware raid cards for compatibility reasons.  There seem to
>>>> be no decent add-on SATA cards that play nice with FreeBSD other
>>>> than that weird supermicro card that has to be physically hacked
>>>> about to fit.
>>
>> Mostly only because certain cards have issues w/shoddy JBOD
>> implementation. Some cards (most notably ones like Adaptec 2610A which
>> was rebranded by Dell as the "CERC SATA 1.5/6ch" back in the day)
>> won't let you run the drives in passthrough mode and seem to all want
>> to stick their grubby little RAID paws into your JBOD setup (i.e. the
>> only way to have minimal
>> participation from the "hardware" RAID is to set each disk as its own
>> RAID-0/volume in the controller BIOS) which then cascades into issues
>> with SMART, AHCI, "triple caching"/write reordering, etc on the
>> FreeBSD side (the controller's own craptastic cache, ZFS vdev cache,
>> vmm/app cache, oh my!). So *some* people go with something
>> tried-and-true (basically bordering on server-level cards that let you
>> ditch any BIOS type of RAID config and present the raw disk devices to
>> the kernel)
>
> As someone else has mentioned, recent SiL stuff works well. I have
> multiple http://www.newegg.com/Product/Product.aspx?Item=N82E16816132008
> cards servicing RAID-Z2 and GEOM_RAID3 arrays on 8.0-RELEASE and
> 8.0-STABLE machines using both the old ata(4) driver and ATA_CAM. Don't
> let the RAID label scare you--that stuff is off by default and the
> controller just presents the disks to the operating system. Hot swap
> works. I haven't had the time to try the siis(4) driver for them, which
> would result in better performance.

That's a really good price. :)

If needed, I could host all eight SATA drives for $160, much cheaper
than any of the other RAID cards I've seen.

The issue then is finding a motherboard which has 4x PCI Express slots.  ;)


Re: hardware for home use large storage

2010-02-09 Thread Emil Mikulic
On Tue, Feb 09, 2010 at 11:31:55AM -0500, Peter C. Lai wrote:
> Also does anybody know if benching dd if=/dev/zero onto a zfs volume that
> has compression turned on might affect what dd (which is getting what it
> knows from vfs/vmm) might report?

Absolutely!

Compression on:
4294967296 bytes transferred in 16.251397 secs (264282961 bytes/sec)
4294967296 bytes transferred in 16.578707 secs (259065276 bytes/sec)
4294967296 bytes transferred in 16.178586 secs (265472353 bytes/sec)
4294967296 bytes transferred in 16.069003 secs (267282747 bytes/sec)

Compression off:
4294967296 bytes transferred in 58.248351 secs (73735432 bytes/sec)
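A quick way to sanity-check numbers like these (the exact dd invocation isn't shown in the thread, but something along the lines of `dd if=/dev/zero of=/pool/test bs=1m count=4096` would produce them) is to recompute the rate from the byte and second counts. Note that /dev/zero is the best possible case for compression, so the "compression on" rows mostly measure CPU and RAM, not the disks:

```python
# Recompute dd's reported transfer rates (figures copied from the
# benchmark above) and the apparent speedup from compression.
runs = [
    # (bytes, seconds, reported bytes/sec)
    (4294967296, 16.251397, 264282961),  # compression on
    (4294967296, 16.578707, 259065276),  # compression on
    (4294967296, 58.248351, 73735432),   # compression off
]

for nbytes, secs, reported in runs:
    # dd reports an integer rate; allow a few bytes/sec of rounding slop
    assert abs(nbytes / secs - reported) < 5

# Writing all-zero (maximally compressible) data looks ~3.6x faster
print(round(264282961 / 73735432, 1))  # -> 3.6
```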


Re: hardware for home use large storage

2010-02-09 Thread Daniel O'Connor
On Wed, 10 Feb 2010, Christian Weisgerber wrote:
> Matthew D. Fuller  wrote:
> > > I have something similar (5x1Tb) - I have a Gigabyte
> > > GA-MA785GM-US2H with an Athlon X2 and 4Gb of RAM (only half
> > > filled - 2x2Gb)
> > >
> > > Note that it doesn't support ECC, I don't know if that is a
> > > problem.
> >
> > How's that?  Is the BIOS just stupid, or is the board physically
> > missing traces?
>
> Doesn't matter really, does it?
>
> I have a GA-MA78G-DS3H.  According to the specs, it supports ECC
> memory.  And that is all the mention of ECC you will find anywhere.
> There is nothing in the BIOS.  My best guess is that they quite
> literally mean that you can plug ECC memory into the board and it
> will work, but that there are no provisions to actually use ECC.

FWIW I can't see ECC support listed for that board on Gigabyte's 
website.. (vs the GA-MA770T-UD3P which does list ECC as supported - 
DDR3 board though)

-- 
Daniel O'Connor software and network engineer
for Genesis Software - http://www.gsoft.com.au
"The nice thing about standards is that there
are so many of them to choose from."
  -- Andrew Tanenbaum
GPG Fingerprint - 5596 B766 97C0 0E94 4347 295E E593 DC20 7B3F CE8C




Re: hardware for home use large storage

2010-02-09 Thread Peter C. Lai
On 2010-02-09 05:32:02PM -0500, Charles Sprickman wrote:
> On Tue, 9 Feb 2010, Jeremy Chadwick wrote:
>> One similar product that does seem to work well is iLO, available on
>> HP/Compaq hardware.
> 
> I've heard great things about that.  It seems like a much better design - 
> it's essentially a small server that is independent from the main host. Has 
> it's own LAN and serial ports as well.
> 
> Charles

Dell PowerEdge Remote Access (DRAC) cards also provided this as well,
and for a while there, you could actually VNC into them. But HP offers iLO
for no extra charge or discount upon removal (DRACs are worth about $250) 
and has become such a prominent "must-have" datacenter feature that the 
"iLO" term is beginning to become genericized for web-accessible and virtual 
disc-capable onboard out-of-band IP-console management.

-- 
===
Peter C. Lai | Bard College at Simon's Rock
Systems Administrator| 84 Alford Rd.
Information Technology Svcs. | Gt. Barrington, MA 01230 USA
peter AT simons-rock.edu | (413) 528-7428
===



Re: hardware for home use large storage

2010-02-09 Thread Charles Sprickman

On Tue, 9 Feb 2010, Jeremy Chadwick wrote:


> On Tue, Feb 09, 2010 at 06:53:26AM -0600, Karl Denninger wrote:
>> Jeremy Chadwick wrote:
>>> On Tue, Feb 09, 2010 at 05:21:32PM +1100, Andrew Snow wrote:
>>>> http://www.supermicro.com/products/motherboard/ATOM/ICH9/X7SPA.cfm?typ=H
>>>>
>>>> Supermicro just released a new Mini-ITX fanless Atom server board
>>>> with 6xSATA ports (based on Intel ICH9) and a PCIe 16x slot.  It
>>>> takes up to 4GB of RAM, and there's even a version with KVM-over-LAN
>>>> for headless operation and remote management.
>>>
>>> Neat hardware.  But with regards to the KVM-over-LAN stuff: it's IPMI,
>>> and Supermicro has a very, *very* long history of having shoddy IPMI
>>> support.  I've been told the latter by too many different individuals in
>>> the industry (some co-workers, some work at Yahoo, some at Rackable,
>>> etc.) for me to rely on it.  If you *have* to go this route, make sure
>>> you get the IPMI module which has its own dedicated LAN port on the
>>> module and ***does not*** piggyback on top of an existing LAN port on
>>> the mainboard.
>>
>> What's wrong with the Supermicro IPMI implementations?  I have several -
>> all have a SEPARATE LAN port on the main board for the IPMI KVM
>> (separate and distinct from the board's primary LAN ports), and I've not
>> had any trouble with any of them.
>
> http://unix.derkeiler.com/Mailing-Lists/FreeBSD/current/2008-01/msg01206.html
> http://forums.freebsd.org/showthread.php?t=7750
> http://www.beowulf.org/archive/2007-November/019925.html
> http://bivald.com/lessons-learned/2009/06/supermicro_ipmi_problems_web_i.html
> http://lists.freebsd.org/pipermail/freebsd-stable/2008-August/044248.html
> http://lists.freebsd.org/pipermail/freebsd-stable/2008-August/044237.html
>
> (Last thread piece does mention that the user was able to get keyboard
> working by disabling umass(4) of all things)


I have a box down at Softlayer (one of the few major server rental outfits 
that officially supports FreeBSD), and one of the reasons I went with them 
is that they advertised "IP-KVM support".  Turns out they run Supermicro 
boxes with the IPMI card.  It mostly works, but it is very quirky and you 
have to use a very wonky Java client app to get the remote console.  You 
have to build a kernel that omits certain USB devices to make the keyboard 
work over the KVM connection (and their stock FBSD install has it 
disabled).


I can usually get in, but sometimes I have to open a ticket with them and 
a tech does some kind of reset on the card.  I don't know if they a 
hitting a button on the card/chassis or if they have some way to do this 
remotely.  After they do that, I'll see something like this in dmesg:


umass0:  on 
uhub4
ums0:  on 
uhub4

ums0: 3 buttons and Z dir.
ukbd0:  on 
uhub4

kbd2 at ukbd0

The umass device is to support the "virtual media" feature that simply 
does not work.  It's supposed to allow you to point the ipmi card at an 
iso or disk image on an SMB server and boot your server off of it.  I had 
no luck with this.


All the IPMI power on/off, reset, and hw monitoring functions do work well 
though.



> It gets worse when you use one of the IPMI modules that piggybacks on an
> existing Ethernet port -- the NIC driver for the OS, from the ground up,
> has to be fully aware of ASF and any quirks/oddities involved.  For
> example, on bge(4) and bce(4), you'll find this (bge mentioned below):
>
>  hw.bge.allow_asf
>    Allow the ASF feature for cooperating with IPMI.  Can cause sys-
>    tem lockup problems on a small number of systems.  Disabled by
>    default.
>
> So unless the administrator intentionally sets the loader tunable prior
> to booting the OS installation, they'll find all kinds of MAC problems
> as a result of the IPMI piggybacking.  "Why isn't this enabled by
> default?"  I believe because there were reports of failures/problems on
> people's systems who *did not* have IPMI cards.  Lose-lose situation.


I don't think they have this setup, or if they do, they are using it on 
the internal LAN, so I don't notice any weirdness.
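For reference, the hw.bge.allow_asf tunable quoted above is set at boot time. A minimal sketch of the /boot/loader.conf entry (only worth setting on a system whose bge(4) NIC really does share its port with an IPMI/ASF controller; it defaults to off precisely because of the lockup reports mentioned):

```
# /boot/loader.conf -- opt in to ASF cooperation for IPMI-piggybacked
# bge(4) NICs; leave unset on systems without IPMI.
hw.bge.allow_asf="1"
```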



> If you really want me to dig up people at Yahoo who have dealt with IPMI
> on thousands of Supermicro servers and the insanity involved (due to
> bugs, quirks, or implementation differences between the IPMI firmwares
> and which revision/model of module used), I can do so.  Most of the
> complaints I've heard of stem from serial-over-IPMI.  I don't think
> it'd be a very positive/"supportive" thread, however.  :-)
>
> One similar product that does seem to work well is iLO, available on
> HP/Compaq hardware.


I've heard great things about that.  It seems like a much better design - 
it's essentially a small server that is independent from the main host. 
Has it's own LAN and serial ports as well.


Charles


> --
> | Jeremy Chadwick   j...@parodius.com |
> | Parodius Networking   http://www.parodius.com/ |
> | UNIX Systems Administrator  Mountain View, CA, USA |
> | Making life hard for others since 1977.  PGP: 4BD6C0CB |


Re: hardware for home use large storage

2010-02-09 Thread Charles Sprickman

On Tue, 9 Feb 2010, Dan Langille wrote:


> Charles Sprickman wrote:
>> On Mon, 8 Feb 2010, Dan Langille wrote:
>>> Also, it seems like
>> people who use zfs (or gmirror + gstripe) generally end up buying pricey
>> hardware raid cards for compatibility reasons.  There seem to be no decent
>> add-on SATA cards that play nice with FreeBSD other than that weird
>> supermicro card that has to be physically hacked about to fit.
>
> They use software RAID and hardware RAID at the same time?  I'm not sure
> what you mean by this.  Compatibility with FreeBSD?


From what I've seen on this list, people buy a nice Areca or 3Ware card 
and put it in JBOD mode and run ZFS on top of the drives.  The card is 
just being used to get lots of sata ports with a stable driver and known 
good hardware.  I've asked here a few times in the last few years for 
recommendations on a cheap SATA card and it seems like such a thing does 
not exist.  This might be a bit dated at this point, but you're playing it 
safe if you go with a 3ware/Areca/LSI card.


I don't recall all the details, but there were issues with siil, 
highpoint, etc.  IIRC it was not really FBSD's issue, but bugginess in 
those cards.  The intel ICH9 chipset works well, but there are no add-on 
PCIe cards that have an intel chip on them...


I'm sure someone will correct me if my info is now outdated or flat-out 
wrong. :)


Charles



Re: hardware for home use large storage

2010-02-09 Thread Christian Weisgerber
Matthew D. Fuller  wrote:

> > I have something similar (5x1Tb) - I have a Gigabyte GA-MA785GM-US2H
> > with an Athlon X2 and 4Gb of RAM (only half filled - 2x2Gb)
> >
> > Note that it doesn't support ECC, I don't know if that is a problem.
> 
> How's that?  Is the BIOS just stupid, or is the board physically
> missing traces?

Doesn't matter really, does it?

I have a GA-MA78G-DS3H.  According to the specs, it supports ECC
memory.  And that is all the mention of ECC you will find anywhere.
There is nothing in the BIOS.  My best guess is that they quite
literally mean that you can plug ECC memory into the board and it
will work, but that there are no provisions to actually use ECC.

That said, I also have an Asus M2N-SLI Deluxe.  If I enable ECC in
the BIOS, the board locks up sooner or later, even when just sitting
in the BIOS.  memtest86 dies a screaming death immediately.  When
I disable ECC, the board is solid, both in actual use and with
memtest.

I thought if I built a PC from components, I'd be already a step
above the lowest dregs of the consumer market, but apparently not.

-- 
Christian "naddy" Weisgerber  na...@mips.inka.de



Re: hardware for home use large storage

2010-02-09 Thread Matthew Dillon
The Silicon Image 3124A chipsets (the PCI-e version of the 3124.  The
original 3124 was PCI-x).  The 3124A's are starting to make their way
into distribution channels.  This is probably the best 'cheap' solution
which offers fully concurrent multi-target NCQ operation through a port
multiplier enclosure with more than the PCIe 1x bus the ultra-cheap
3132 offers.  I think the 3124A uses an 8x bus (not quite sure, but it
is more than 1x).

AHCI on-motherboard with equivalent capabilities do not appear to be
in wide distribution yet.  Most AHCI chips can do NCQ to a single
target (even a single target behind a PM), but not concurrently to
multiple targets behind a port multiplier.  Even though SATA bandwidth
constraints might seem to make this a reasonable alternative it
actually isn't because any seek heavy activity to multiple drives
will be serialized and perform EXTREMELY poorly.  Linear performance
will be fine.  Random performance will be horrible.

It should be noted that while hotswap is supported with silicon image
chipsets and port multiplier enclosures (which also use Sili chips in
the enclosure), the hot-swap capability is not anywhere near as robust
as you would find with a more costly commercial SAS setup.  SI chips
are very poorly made (this is the same company that went bust under
another name a few years back due to shoddy chipsets), and have a lot
of on-chip hardware bugs, but fortunately OSS driver writers (linux
guys) have been able to work around most of them.  So even though the
chipset is a bit shoddy actual operation is quite good.  However,
this does mean you generally want to idle all activity on the enclosure
to safely hot swap anything, not just the drive you are pulling out.
I've done a lot of testing and hot-swapping an idle disk while other
drives in the same enclosure are hot is not reliable (for a cheap port
multiplier enclosure using a Sili chip inside, which nearly all do).

Also, a disk failure within the enclosure can create major command
sequencing issues for other targets in the enclosure because error
processing has to be serialized.  Fine for home use but don't expect
miracles if you have a drive failure.

The Sili chips and port multiplier enclosures are definitely the
cheapest multi-disk solution.  You lose on aggregate bandwidth and
you lose on some robustness but you get the hot-swap basically for free.

--

Multi-HD setups for home use are usually a lose.  I've found over
the years that it is better to just buy a big whopping drive and
then another one or two for backups and not try to gang them together
in a RAID.  And yes, at one time in the past I was running three
separate RAID-5 using 3ware controllers.  I don't anymore and I'm
a lot happier.

If you have more than 2TB worth of critical data you don't have much
of a choice, but I'd go with as few physical drives as possible
regardless.  The 2TB Maxtor green or black drives are nice.  I
strongly recommend getting the highest-capacity drives you can
afford if you don't want your power bill to blow out your budget.

The bigger problem is always having an independent backup of the data.
Depending on a single-instanced filesystem, even one like ZFS, for a
lifetime's worth of data is not a good idea.  Fire, theft... there are
a lot of ways the data can be lost.  So when designing the main
system you have to take care to also design the backup regimen
including something off-site (or swapping the physical drive once
a month, etc). i.e. multiple backup regimens.

If single-drive throughput is an issue then using ZFS's caching
solution with a small SSD is the way to go (and yes, DFly has a SSD
caching solution now too but that's not pertinent to this thread).
The Intel SSDs are really nice, but I am singularly unimpressed with
the OCZ Colossus's which don't even negotiate NCQ.  I don't know much
re: other vendors.

A little $100 Intel 40G SSD has around a 40TB write endurance and can
last 10 years as a disk meta-data caching environment with a little care,
particularly if you only cache meta-data.  A very small incremental
cost gives you 120-200MB/sec of seek-agnostic bandwidth which is
perfect for network serving, backup, remote filesystems, etc.  Unless
the box has 10GigE or multiple 1xGigE network links there's no real
need to try to push HD throughput beyond what the network can do
so it really comes down to avoiding thrashing the HDs with random seeks.
That is what the small SSD cache gives you.  It can be like night and
day.
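A quick check of that endurance arithmetic (taking the ~40 TB total-write rating at face value; actual ratings vary by model and write pattern):

```python
# How long a 40 TB write-endurance budget lasts at a given daily write
# rate.  Figures are from the post above and are approximate.
ENDURANCE_GB = 40 * 1000  # ~40 TB of total writes

def years_of_life(gb_per_day):
    return ENDURANCE_GB / gb_per_day / 365

# Metadata-only caching that averages ~11 GB/day of writes stretches the
# budget to about a decade, matching the "last 10 years ... with a
# little care" claim above.
print(round(years_of_life(11), 1))  # -> 10.0
```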

-Matt


Re: hardware for home use large storage

2010-02-09 Thread Peter C. Lai
On 2010-02-09 07:52:05PM +0100, Andre Wensing wrote:
> 
> 
> Freddie Cash wrote:
>> On Tue, Feb 9, 2010 at 3:37 AM, Dan Langille  wrote:
>> 
>>> Charles Sprickman wrote:
>>> 
 On Mon, 8 Feb 2010, Dan Langille wrote:
 
> Also, it seems like
 people who use zfs (or gmirror + gstripe) generally end up buying pricey
 hardware raid cards for compatibility reasons.  There seem to be no decent
 add-on SATA cards that play nice with FreeBSD other than that weird
 supermicro card that has to be physically hacked about to fit.
 
>>> They use software RAID and hardware RAID at the same time?  I'm not sure
>>> what you mean by this.  Compatibility with FreeBSD?
>>> 
>>> Add-on (PCI-X/PCIe) RAID controllers tend to have solid drivers in FreeBSD.
>>   Add-on SATA controllers not so much.  The RAID controllers also tend to
>> support more SATA features like NCQ, hot-swap, monitoring, etc.  They also
>> enable you to use the same hardware across OSes (FreeBSD, Linux, etc).
>> 
>> For example, we use 3Ware controllers in all our servers, as they have good,
>> solid support under FreeBSD and Linux.  On the Linux servers, we use
>> hardware RAID.  On the FreeBSD servers, we use them as SATA controllers
>> (Single Disk arrays, not JBOD).  Either way, the management is the same, the
>> drivers are the same, the support is the same.
>> 
>> It's hard to find good, non-RAID, SATA controllers with solid FreeBSD
>> support, and good throughput, with any kind of management/monitoring
>> features.
>> 
> 
> And I thought I found one in the Adaptec 1405 Integrated SAS/SATA 
> controller, because it's marketed as an inexpensive SAS/SATA non-RAID 
> addon-card. On top of that, they advertise it as having FreeBSD6 and 
> FreeBSD7-support and drivers. So I ordered it for my storage-box (FreeNAS) 
> with great expectations. Sadly, they don't have support nor drivers for 
> FreeBSD ("drivers will be released Q4 2009") at all, so I'm thinking of 
> leaving FreeNAS and trying some linux-flavor that does support this card...
> But Adaptec doesn't have a great track-record for FreeBSD-support, does it?
> 

Everything is a repackage of some OEM these days (basically gone are the
days of Adaptec==LSI==mpt(4) and all). Find the actual chipset make and
model if you can, then you can look at what is supported, as the drivers
deal with the actual chipset and could care less about the brand of some
vertically-integrated fpga package.

Probably will want to give a shout-out on -hardware i.e.
http://markmail.org/message/b5imismi5s3iafc5#query:+page:1+mid:5htpj5fw7uijtzqp+state:results

-- 
===
Peter C. Lai | Bard College at Simon's Rock
Systems Administrator| 84 Alford Rd.
Information Technology Svcs. | Gt. Barrington, MA 01230 USA
peter AT simons-rock.edu | (413) 528-7428
===



Re: hardware for home use large storage

2010-02-09 Thread Andre Wensing



Freddie Cash wrote:
> On Tue, Feb 9, 2010 at 3:37 AM, Dan Langille  wrote:
>> Charles Sprickman wrote:
>>> On Mon, 8 Feb 2010, Dan Langille wrote:
>>>> Also, it seems like
>>> people who use zfs (or gmirror + gstripe) generally end up buying pricey
>>> hardware raid cards for compatibility reasons.  There seem to be no decent
>>> add-on SATA cards that play nice with FreeBSD other than that weird
>>> supermicro card that has to be physically hacked about to fit.
>>
>> They use software RAID and hardware RAID at the same time?  I'm not sure
>> what you mean by this.  Compatibility with FreeBSD?
>
> Add-on (PCI-X/PCIe) RAID controllers tend to have solid drivers in FreeBSD.
> Add-on SATA controllers not so much.  The RAID controllers also tend to
> support more SATA features like NCQ, hot-swap, monitoring, etc.  They also
> enable you to use the same hardware across OSes (FreeBSD, Linux, etc).
>
> For example, we use 3Ware controllers in all our servers, as they have good,
> solid support under FreeBSD and Linux.  On the Linux servers, we use
> hardware RAID.  On the FreeBSD servers, we use them as SATA controllers
> (Single Disk arrays, not JBOD).  Either way, the management is the same, the
> drivers are the same, the support is the same.
>
> It's hard to find good, non-RAID, SATA controllers with solid FreeBSD
> support, and good throughput, with any kind of management/monitoring
> features.



And I thought I found one in the Adaptec 1405 Integrated SAS/SATA 
controller, because it's marketed as an inexpensive SAS/SATA non-RAID 
addon-card. On top of that, they advertise it as having FreeBSD6 and 
FreeBSD7-support and drivers. So I ordered it for my storage-box 
(FreeNAS) with great expectations. Sadly, they don't have support nor 
drivers for FreeBSD ("drivers will be released Q4 2009") at all, so I'm 
thinking of leaving FreeNAS and trying some linux-flavor that does 
support this card...

But Adaptec doesn't have a great track-record for FreeBSD-support, does it?

André Wensing
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: hardware for home use large storage

2010-02-09 Thread Freddie Cash
On Tue, Feb 9, 2010 at 3:37 AM, Dan Langille  wrote:

> Charles Sprickman wrote:
>
>> On Mon, 8 Feb 2010, Dan Langille wrote:
>>
>> > Also, it seems like
>
>> people who use zfs (or gmirror + gstripe) generally end up buying pricey
>> hardware raid cards for compatibility reasons.  There seem to be no decent
>> add-on SATA cards that play nice with FreeBSD other than that weird
>> supermicro card that has to be physically hacked about to fit.
>>
>
> They use software RAID and hardware RAID at the same time?  I'm not sure
> what you mean by this.  Compatibility with FreeBSD?
>
> Add-on (PCI-X/PCIe) RAID controllers tend to have solid drivers in FreeBSD.
  Add-on SATA controllers not so much.  The RAID controllers also tend to
support more SATA features like NCQ, hot-swap, monitoring, etc.  They also
enable you to use the same hardware across OSes (FreeBSD, Linux, etc).

For example, we use 3Ware controllers in all our servers, as they have good,
solid support under FreeBSD and Linux.  On the Linux servers, we use
hardware RAID.  On the FreeBSD servers, we use them as SATA controllers
(Single Disk arrays, not JBOD).  Either way, the management is the same, the
drivers are the same, the support is the same.

It's hard to find good, non-RAID, SATA controllers with solid FreeBSD
support, and good throughput, with any kind of management/monitoring
features.

-- 
Freddie Cash
fjwc...@gmail.com


Re: hardware for home use large storage

2010-02-09 Thread Boris Kochergin

Peter C. Lai wrote:

On 2010-02-09 06:37:47AM -0500, Dan Langille wrote:
  

Charles Sprickman wrote:


On Mon, 8 Feb 2010, Dan Langille wrote:
Also, it seems like
people who use zfs (or gmirror + gstripe) generally end up buying pricey 
hardware raid cards for compatibility reasons.  There seem to be no decent 
add-on SATA cards that play nice with FreeBSD other than that weird 
supermicro card that has to be physically hacked about to fit.
  


Mostly only because certain cards have issues w/shoddy JBOD implementation. 
Some cards (most notably ones like Adaptec 2610A which was rebranded by 
Dell as the "CERC SATA 1.5/6ch" back in the day) won't let you run the 
drives in passthrough mode and seem to all want to stick their grubby 
little RAID paws into your JBOD setup (i.e. the only way to have minimal
participation from the "hardware" RAID is to set each disk as its own 
RAID-0/volume in the controller BIOS) which then cascades into issues with 
SMART, AHCI, "triple caching"/write reordering, etc on the FreeBSD side (the 
controller's own craptastic cache, ZFS vdev cache, vmm/app cache, oh my!). 
So *some* people go with something tried-and-true (basically bordering on 
server-level cards that let you ditch any BIOS type of RAID config and 
present the raw disk devices to the kernel)
As someone else has mentioned, recent SiL stuff works well. I have 
multiple http://www.newegg.com/Product/Product.aspx?Item=N82E16816132008 
cards servicing RAID-Z2 and GEOM_RAID3 arrays on 8.0-RELEASE and 
8.0-STABLE machines using both the old ata(4) driver and ATA_CAM. Don't 
let the RAID label scare you--that stuff is off by default and the 
controller just presents the disks to the operating system. Hot swap 
works. I haven't had the time to try the siis(4) driver for them, which 
would result in better performance.
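(For context on the ata(4)/ATA_CAM distinction Boris mentions: in the FreeBSD 8.x timeframe this meant building a kernel with the ATA_CAM option; the fragment below is illustrative, not his exact config.)

```
# Kernel configuration fragment (FreeBSD 8.x era): with ATA_CAM, disks on
# ata(4)-driven controllers attach through the CAM layer as ada(4) devices
# instead of ad(4), giving uniform hot-swap and NCQ handling.
options ATA_CAM
```

After booting such a kernel, `camcontrol devlist` should show the SiI-attached disks as CAM devices.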


-Boris


Re: hardware for home use large storage

2010-02-09 Thread Peter C. Lai
That's faster than just about anything I have at home. 
So you should be fine. It should be good enough to serve as primary media
center storage even (for retrievals, anyway, probably a tad bit slow for 
live transcoding).

Also does anybody know if benching dd if=/dev/zero onto a zfs volume that
has compression turned on might affect what dd (which is getting what it
knows from vfs/vmm) might report?
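(On the dd question: /dev/zero input is maximally compressible, so on a dataset with compression=on the pool barely touches the disks, and dd's reported rate mostly reflects how fast write(2) hands data to ZFS. A rough illustration, using gzip as a stand-in for the pool's compressor:)

```shell
# Zeros compress to almost nothing, random data not at all -- which is why
# dd-from-/dev/zero numbers on a compressed dataset are inflated.
# gzip here only stands in for ZFS's compressor to show the size difference.
dd if=/dev/zero bs=1048576 count=16 2>/dev/null | gzip -c | wc -c     # a few KiB
dd if=/dev/urandom bs=1048576 count=16 2>/dev/null | gzip -c | wc -c  # ~16 MiB
```

For a throughput number that survives compression, feed dd from /dev/urandom (or a pre-generated incompressible file) instead of /dev/zero.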

On 2010-02-09 03:16:13PM +, Tom Evans wrote:
> On Tue, Feb 9, 2010 at 3:01 PM, Dan Langille  wrote:
> >
> > On Tue, February 9, 2010 9:09 am, Tom Evans wrote:
> >> On Tue, Feb 9, 2010 at 1:45 PM, Dan Langille  wrote:
> >> One thing to point out about using a PM like this: you won't get
> >> fantastic bandwidth out of it. For my needs (home storage server),
> >> this really doesn't matter, I just want oodles of online storage, with
> >> redundancy and reliability.
> >
> >
> > A PM?  What's that?
> >
> > Yes, my priority is reliable storage.  Speed is secondary.
> >
> > What bandwidth are you getting?
> >
> 
> PM = Port Multiplier
> 
> I'm getting disk speed, as I only have one device behind the PM
> currently (just making sure it works properly :). The limits are that
> the link from siis to the PM is SATA (3Gb/s, 375MB/s), and the siis
> sits on a PCIe 1x bus (2Gb/s, 250 MB/s), so the bandwidth from that is
> shared amongst the up-to 5 disks behind the PM.
> 
> Writing from /dev/zero to the pool, I get around 120MB/s. Reading from
> the pool, and writing to /dev/null, I get around 170 MB/s.
> 
> Cheers
> 
> Tom

-- 
===
Peter C. Lai | Bard College at Simon's Rock
Systems Administrator| 84 Alford Rd.
Information Technology Svcs. | Gt. Barrington, MA 01230 USA
peter AT simons-rock.edu | (413) 528-7428
===



Re: hardware for home use large storage

2010-02-09 Thread Dan Langille

On Tue, February 9, 2010 10:16 am, Tom Evans wrote:
> On Tue, Feb 9, 2010 at 3:01 PM, Dan Langille  wrote:
>>
>> On Tue, February 9, 2010 9:09 am, Tom Evans wrote:
>>> On Tue, Feb 9, 2010 at 1:45 PM, Dan Langille  wrote:
>>> One thing to point out about using a PM like this: you won't get
>>> fantastic bandwidth out of it. For my needs (home storage server),
>>> this really doesn't matter, I just want oodles of online storage, with
>>> redundancy and reliability.
>>
>>
>> A PM?  What's that?
>>
>> Yes, my priority is reliable storage.  Speed is secondary.
>>
>> What bandwidth are you getting?
>>
>
> PM = Port Multiplier
>
> I'm getting disk speed, as I only have one device behind the PM
> currently (just making sure it works properly :). The limits are that
> the link from siis to the PM is SATA (3Gb/s, 375MB/s), and the siis
> sits on a PCIe 1x bus (2Gb/s, 250 MB/s), so the bandwidth from that is
> shared amongst the up-to 5 disks behind the PM.
>
> Writing from /dev/zero to the pool, I get around 120MB/s. Reading from
> the pool, and writing to /dev/null, I get around 170 MB/s.
>

That leads me to conclude that a number of SATA cards would be better than a
port multiplier.  But the impression I'm getting is that few of these work
well with FreeBSD.  Which is odd... I thought these cards would merely
present the HDD to the hardware and no driver was required.  As opposed to
RAID cards, for which OS-specific drivers are required.
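(Tom's figures make the trade-off concrete: the PCIe 1.0 x1 uplink's ~250 MB/s is shared by every disk behind the PM, whereas each extra SATA card brings its own link. The numbers below are quoted from his message; the arithmetic is just back-of-envelope.)

```shell
# Shared-uplink ceiling per disk behind the port multiplier
# (figures from the thread: PCIe 1.0 x1 ~ 250 MB/s, up to 5 disks).
pcie_mbs=250
disks=5
echo "per-disk ceiling: $((pcie_mbs / disks)) MB/s"
```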


-- 
Dan Langille -- http://langille.org/



Re: hardware for home use large storage

2010-02-09 Thread Peter C. Lai
On 2010-02-09 06:37:47AM -0500, Dan Langille wrote:
> Charles Sprickman wrote:
>> On Mon, 8 Feb 2010, Dan Langille wrote:
> > Also, it seems like
>> people who use zfs (or gmirror + gstripe) generally end up buying pricey 
>> hardware raid cards for compatibility reasons.  There seem to be no decent 
>> add-on SATA cards that play nice with FreeBSD other than that weird 
>> supermicro card that has to be physically hacked about to fit.

Mostly only because certain cards have issues w/shoddy JBOD implementation. 
Some cards (most notably ones like Adaptec 2610A which was rebranded by 
Dell as the "CERC SATA 1.5/6ch" back in the day) won't let you run the 
drives in passthrough mode and seem to all want to stick their grubby 
little RAID paws into your JBOD setup (i.e. the only way to have minimal
participation from the "hardware" RAID is to set each disk as its own 
RAID-0/volume in the controller BIOS) which then cascades into issues with 
SMART, AHCI, "triple caching"/write reordering, etc on the FreeBSD side (the 
controller's own craptastic cache, ZFS vdev cache, vmm/app cache, oh my!). 
So *some* people go with something tried-and-true (basically bordering on 
server-level cards that let you ditch any BIOS type of RAID config and 
present the raw disk devices to the kernel).

> 
> They use software RAID and hardware RAID at the same time?  I'm not sure 
> what you mean by this.  Compatibility with FreeBSD?

-- 
===
Peter C. Lai | Bard College at Simon's Rock
Systems Administrator| 84 Alford Rd.
Information Technology Svcs. | Gt. Barrington, MA 01230 USA
peter AT simons-rock.edu | (413) 528-7428
===



Re: hardware for home use large storage

2010-02-09 Thread Tom Evans
On Tue, Feb 9, 2010 at 3:01 PM, Dan Langille  wrote:
>
> On Tue, February 9, 2010 9:09 am, Tom Evans wrote:
>> On Tue, Feb 9, 2010 at 1:45 PM, Dan Langille  wrote:
>> One thing to point out about using a PM like this: you won't get
>> fantastic bandwidth out of it. For my needs (home storage server),
>> this really doesn't matter, I just want oodles of online storage, with
>> redundancy and reliability.
>
>
> A PM?  What's that?
>
> Yes, my priority is reliable storage.  Speed is secondary.
>
> What bandwidth are you getting?
>

PM = Port Multiplier

I'm getting disk speed, as I only have one device behind the PM
currently (just making sure it works properly :). The limits are that
the link from siis to the PM is SATA (3Gb/s, 375MB/s), and the siis
sits on a PCIe 1x bus (2Gb/s, 250 MB/s), so the bandwidth from that is
shared amongst the up-to 5 disks behind the PM.

Writing from /dev/zero to the pool, I get around 120MB/s. Reading from
the pool, and writing to /dev/null, I get around 170 MB/s.

Cheers

Tom


Re: hardware for home use large storage

2010-02-09 Thread Jeremy Chadwick
On Tue, Feb 09, 2010 at 10:01:23AM -0500, Dan Langille wrote:
> On Tue, February 9, 2010 9:09 am, Tom Evans wrote:
> > On Tue, Feb 9, 2010 at 1:45 PM, Dan Langille  wrote:
> > One thing to point out about using a PM like this: you won't get
> > fantastic bandwidth out of it. For my needs (home storage server),
> > this really doesn't matter, I just want oodles of online storage, with
> > redundancy and reliability.
> 
> A PM?  What's that?

Port multiplier.

-- 
| Jeremy Chadwick   j...@parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.  PGP: 4BD6C0CB |



Re: hardware for home use large storage

2010-02-09 Thread Svein Skogen (Listmail Account)

On 09.02.2010 15:37, Miroslav Lachman wrote:
*SNIP*
> 
> I can't agree with the last statement about HP's iLO. I have addon card
> in ML110 G5 (dedicated NIC), the card is "expensive" and bugs are
> amazing. The management NIC freezes once a day (or more often) with
> older firmware and must be restarted from inside the installed system by
> IPMI command on "localhost". With newer firmware, the interface is
> periodically restarted. The virtual media doesn't work at all. It is my
> worst experience with remote management cards.
> I believe that other HP servers, with a built-in card and different FW,
> work better; this is just my experience.
> 
> Next one is eLOM in Sun Fire X2100 (shared NIC using bge + ASF). ASF
> works without problem, but virtual media works only if you are
> connecting by IP address, not by domain name (from Windows machines) and
> there is some issue with timeouts of virtual media / console.
> I reported this + 8 different bugs of web management interface to Sun
> more than year ago - none was fixed.
> 
> Next place is for IBM 3650 + RSA II card (dedicated NIC). Expensive,
> something works, something not. For example the card can't read CPU
> temperature, so you will not receive any alert in case of overheating.
> (it was 2 years ago, maybe newer firmware is fixed)
> 
> Then I have one Supermicro Twin server 6016TT-TF with built-in IPMI /
> KVM with dedicated NIC port. I found one bug with fan rpm readings (half
> the number compared to BIOS numbers) and one problem with FreeBSD 7.x
> sysinstall (USB keyboard not working, but sysinstall from 8.x works
> without problem). In installed FreeBSD system keyboard and virtual media
> is working without problems.
> 
> On the top is Dell R610 DRAC (dedicated NIC) - I didn't find any bugs
> and there are a lot more features compared to concurrent products.
> 

I think the general consensus here is "nice theory lousy
implementation", and the added migraine of no such thing as a common
standard.

Maybe creating a common standard for this could be a nice GSOC project,
to build a nice "remote console" based on SSH and arm/mips?

p.s. I've seen the various proprietary remote console solutions. They
didn't really impress me much, so I ended up using off-the-shelf
components for building my servers. Not necessarily cheaper, but at
least it's under _MY_ control.

//Svein

- -- 
- +---+---
  /"\   |Svein Skogen   | sv...@d80.iso100.no
  \ /   |Solberg Østli 9| PGP Key:  0xE5E76831
   X|2020 Skedsmokorset | sv...@jernhuset.no
  / \   |Norway | PGP Key:  0xCE96CE13
|   | sv...@stillbilde.net
 ascii  |   | PGP Key:  0x58CD33B6
 ribbon |System Admin   | svein-listm...@stillbilde.net
Campaign|stillbilde.net | PGP Key:  0x22D494A4
+---+---
|msn messenger: | Mobile Phone: +47 907 03 575
|sv...@jernhuset.no | RIPE handle:SS16503-RIPE
- +---+---
 If you really are in a hurry, mail me at
   svein-mob...@stillbilde.net
 This mailbox goes directly to my cellphone and is checked
even when I'm not in front of my computer.
- 
 Picture Gallery:
  https://gallery.stillbilde.net/v/svein/
- 

