Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-09-10 Thread Al Hopper
On Wed, Sep 10, 2008 at 5:57 AM, W. Wayne Liauh <[EMAIL PROTECTED]> wrote:
>> I've been a fan of ZFS since I read about it last year.
>>
>> Now I'm on my way to building a home fileserver and I'm
>> thinking of going with OpenSolaris and eventually ZFS!!
>
> This seems to be a good candidate to build a home ZFS server:
>
> http://tinyurl.com/msi-so
>
> It's cheap, low power, fan-less; the only concern is the Realtek 8111C NIC.  
> According to a Sun Blogger, there is no Solaris driver:
>
> http://blogs.sun.com/roberth/entry/msi_wind_as_a_low
>
> (Thanks for the info)
> --

From the other reviews I've read on the Atom 230 and 270, I don't
think this box has enough CPU "horsepower" for a ZFS-based fileserver
- or maybe I have different performance expectations than the OP.  To
each his own.

I would like to give the list a heads-up on a mini-ITX board that is
already available based on the Atom 330 - the dual-core version of the
chip.  Here you'll find a couple of pictures of the board:
http://www.mp3car.com/vbulletin/general-hardware-discussion/123966-intel-d945gclf2-dual-core-atom.html
 NB: the "2" at the end of the part # indicates the Atom 330 based part; no
"2" indicates the board with the single-core Atom.  Also: the 330 has
twice the cache of the single-core Atom.  This board is already
available for around $85.  Bear in mind that the chipset used on this
board dissipates around 45 Watts - so don't just look at the power
dissipation numbers for the CPU.

I'm not specifically recommending this board for use as a ZFS based
fileserver - but it might provide a solution for someone on this list.

PS: Since the Atom supports hyperthreading, the Atom 330 will appear
to Solaris as 4 CPUs.
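
A minimal sketch of how to check this once the board is up (psrinfo is the
stock Solaris tool; the 4-entry count is my expectation for an Atom 330, not
something verified on this particular board):

  # List the virtual processors Solaris sees; an Atom 330 (2 cores x 2
  # hardware threads) should show 4 entries.
  psrinfo | wc -l
  # Show how the virtual processors map onto physical packages and cores.
  psrinfo -vp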

Regards,

-- 
Al Hopper Logical Approach Inc,Plano,TX [EMAIL PROTECTED]
 Voice: 972.379.2133 Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-09-10 Thread Mads Toftum
On Wed, Sep 10, 2008 at 03:57:13AM -0700, W. Wayne Liauh wrote:
> This seems to be a good candidate to build a home ZFS server:
> 
> http://tinyurl.com/msi-so
> 
> It's cheap, low power, fan-less; the only concern is the Realtek 8111C NIC.  
> According to a Sun Blogger, there is no Solaris driver:
> 
Looking at the pictures, there may not be a CPU fan, but there's still a
case fan. One could also argue that the case really isn't optimal for
multiple disks.

vh

Mads Toftum
-- 
http://soulfood.dk
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-09-10 Thread W. Wayne Liauh
> I've been a fan of ZFS since I read about it last year.
> 
> Now I'm on my way to building a home fileserver and I'm
> thinking of going with OpenSolaris and eventually ZFS!!

This seems to be a good candidate to build a home ZFS server:

http://tinyurl.com/msi-so

It's cheap, low power, fan-less; the only concern is the Realtek 8111C NIC.  
According to a Sun Blogger, there is no Solaris driver:

http://blogs.sun.com/roberth/entry/msi_wind_as_a_low

(Thanks for the info)
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-08-02 Thread Casper . Dik

>> How about we complain enough to shame somebody into
>> adding power
>> management to the K8 chips?  We can start by
>> reminding SUN on how much
>> it was trumpeting the early Opterons as 'green
>> computing'.
>> 
>> Cheers,
>> florin
>> 
>
>Casper's frkit power management script works very well with AMD's single-core 
>K8's.  Sun did a very admirable job of pioneering green computing at a time 
>when no one was paying any attention.


There's a reason why Intel and AMD changed the "TSC" so that it no longer
runs at the CPU frequency: you couldn't use the TSC for anything interesting
if you also wanted to change the frequency of the CPU.

Solaris uses the TSC everywhere, so using K8 AMDs with PowerNow! is impossible.

(My powernow driver makes dtrace's timestamps return wrong values.)

Casper

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-08-02 Thread W. Wayne Liauh
> How about we complain enough to shame somebody into
> adding power
> management to the K8 chips?  We can start by
> reminding SUN on how much
> it was trumpeting the early Opterons as 'green
> computing'.
> 
> Cheers,
> florin
> 

Casper's frkit power management script works very well with AMD's single-core 
K8's.  Sun did a very admirable job of pioneering green computing at a time 
when no one was paying any attention.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-08-01 Thread Richard Elling
Florin Iucha wrote:
> On Fri, Aug 01, 2008 at 06:37:29AM -0700, Steve wrote:
>   
>> So, better AMD with ECC but not optimal power mgt (and seems cheaper), or 
>> Intel with NO-ECC but power mgt?
>> 
>
> How about we complain enough to shame somebody into adding power
> management to the K8 chips?  We can start by reminding SUN on how much
> it was trumpeting the early Opterons as 'green computing'.
>   

FWIW, the power management discussions on this are held over in the
laptop-discuss forum.  You can search for the threads there and see the
current status.
http://www.opensolaris.org/jive/forum.jspa?forumID=66

 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-08-01 Thread Florin Iucha
On Fri, Aug 01, 2008 at 06:37:29AM -0700, Steve wrote:
> So, better AMD with ECC but not optimal power mgt (and seems cheaper), or 
> Intel with NO-ECC but power mgt?

How about we complain enough to shame somebody into adding power
management to the K8 chips?  We can start by reminding SUN on how much
it was trumpeting the early Opterons as 'green computing'.

Cheers,
florin

-- 
Bruce Schneier expects the Spanish Inquisition.
  http://geekz.co.uk/schneierfacts/fact/163


pgpYb5mVH6r01.pgp
Description: PGP signature
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-08-01 Thread Steve
I didn't thoroughly search, but it seems that Newegg doesn't have any micro-ATX 
motherboard with the chipsets specified on Wikipedia as supporting ECC!... (query: 
Form Factor[Micro ATX ],North Bridge[Intel 925X ],North Bridge[Intel 975X 
],North Bridge[Intel X38 ],North Bridge[Intel X48 ])

So, is it better to go with AMD with ECC but suboptimal power management (and 
seemingly cheaper), or Intel with no ECC but working power management?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-31 Thread Jonathan Loran


Miles Nordin wrote:
>> "s" == Steve  <[EMAIL PROTECTED]> writes:
>> 
>
>  s> http://www.newegg.com/Product/Product.aspx?Item=N82E16813128354
>
> no ECC:
>
>  http://en.wikipedia.org/wiki/List_of_Intel_chipsets#Core_2_Chipsets
>   

This MB will take these:

http://www.intel.com/products/processor/xeon3000/index.htm

Those do support ECC.  Now I'm not sure, but I suspect that this 
Gigabyte MB doesn't have the ECC lanes. 

It's a lot more cash, but the following MB is on the HCL, and I have one 
in service working just swell:

http://www.newegg.com/Product/Product.aspx?Item=N82E16813182105

It has the plus (or minus, I suppose) of four PCI-X slots to plug in the 
AOC-SAT2-MV8 cards.

Jon
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-31 Thread mike
I must pose the question then:

Is ECC required?

I am running non-ECC RAM right now on my machine (it's AMD and it would support 
ECC, I'd just have to buy it online and wait for it).

But will it have any negative effects on ZFS integrity/checksumming if ECC RAM 
is not used? Obviously it's nice to have as many error-correction systems in 
place as possible, but if all that my non-ECC RAM will do when it fails is make 
the machine crash and require me to pull out the bad DIMM, I am fine with that 
(at least on this machine).
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-31 Thread Miles Nordin
> "s" == Steve  <[EMAIL PROTECTED]> writes:

 s> http://www.newegg.com/Product/Product.aspx?Item=N82E16813128354

no ECC:

 http://en.wikipedia.org/wiki/List_of_Intel_chipsets#Core_2_Chipsets


pgpSbbK6c48b6.pgp
Description: PGP signature
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-31 Thread Steve
Since all the other components can be the same (RAM, CPU, HDD, case, etc.), why 
not spend $30 more for this? 
http://www.newegg.com/Product/Product.aspx?Item=N82E16813128354
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-30 Thread Arne Schwabe
Steve schrieb:
> If you're really crazy for miniaturization check out this: 
> http://www.elma.it/ElmaFrame.htm
>
> Is a 4 hot swappable case for 2.5" drives that fits in 1 slot for 5.25!
>
>   
Maybe only true for notebook 2.5" drives. Although I haven't checked, I 
don't think that 2.5" SAS disks at 10k-15k RPM use less energy than 3.5" 
7200 RPM disks, especially when taking the space/energy ratio into account :)

Arne

P.S.: 2.5" SAS disks have other advantages.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-30 Thread mike
Yeah but 2.5" aren't that big yet. What, they max out ~ 320 gig right?

I want 1tb+ disks :)
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-30 Thread Steve
If you're really crazy about miniaturization check out this: 
http://www.elma.it/ElmaFrame.htm

It's a hot-swappable enclosure for four 2.5" drives that fits in one 5.25" slot!

You'll get low power consumption (= low heat) and it will be easier to find a 
mini-ITX case that fits just this and the mobo! ;-)
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-30 Thread W. Wayne Liauh
> waynel wrote:
> > 
> > We have a couple of machines similar to your just
> > spec'ed.  They have worked great.  The only
> problem
> > is, the power management routine only works for
> K10
> > and later.  We will move to Intel core 2 duo for
> > future machines (mainly b/c power management
> > considerations).
> > 
> 
> So is Intel better? Which motherboard could be a good
> choice? (microatx?)

As I mentioned previously, Solaris works great on Athlon X2 CPUs.  The only 
reason we are moving to Core 2 Duo chips is power management.  We are also 
looking at the option of using the Athlon X3 and/or X4, for which power 
management in Solaris is available.  However, we do not have a firm answer as to 
whether we can simply plug the X3/X4 CPUs into the X2 board.

I am looking at the ASUS P5KPL-CM LGA 775 Intel G31 Micro ATX Intel Motherboard:

http://tinyurl.com/asusp5kpl

Basically I am interested in finding a cheap but high-performance system with 
minimal power consumption, something that lets enthusiasts try out a ZFS-enabled 
server before taking a look at Sun's catalog.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-29 Thread mike
Exactly.

That's why I'm trying to get an account on that site (looks like open 
registration for the forums is disabled) so I can shoot the breeze and talk 
about all this stuff too.

ZFS would be perfect for this, as most of these guys are trying to find hardware 
RAID cards that will fit, etc... with mini-ITX boards coming with 4 and now 6 
ports, that isn't an issue; as long as onboard SATA2+ZFS is "fast enough", 
everyone wins.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-29 Thread Steve
If I understood properly there is just one piece that has to be modified: a 
flat aluminium plate with a square hole in the center, which any fine mechanic 
around your city should be able to make very easily...

More than the noise, the problem in this tight case might be the temperature!
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-29 Thread mike
That mashie link might be exactly what I wanted...

That mini-ITX board w/ 6 SATA: use CF maybe for boot (might need an IDE-to-CF 
converter) - 5-drive holder (hotswap as a bonus) - you get 4 gig RAM, a 
Core 2-based chip (64-bit), onboard graphics, 5 SATA2 drives... that is cool.

However, it would need to be hacked up (and I don't have any metal-cutting 
tools), and who knows how loud it is without any front on those drives. I'd 
want a small cover on top to help with noise.

Looks like I might have to hang out over on the mashie site now too ;)
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-29 Thread Steve
A little case modding may not be so difficult... there are examples (and 
instructions) like: 
http://www.mashie.org/casemods/udat2.html

But for sure there are more advanced ones like:
http://forums.bit-tech.net/showthread.php?t=76374&pp=20

And here you can see a full display of human ingenuity!!
http://www.darkroastedblend.com/2007/06/cool-computer-case-mods.html
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-29 Thread mike
I'd say some good places to look are silentpcreview.com and mini-itx.com.

I found this tasty morsel in an ad at mini-itx...
http://www.american-portwell.com/product.php?productid=16133

6x onboard SATA, 4 gig support, Core 2 Duo support. Which means 64-bit = yes, 4 
gig = yes, and 6x SATA is nice.

Now if only I could find a chassis for this. AFAIK the Chenbro is the only > 2 
drive mini-ITX chassis so far. I wish I knew metalworking and could carve up my 
own :P
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-29 Thread Brandon High
On Tue, Jul 29, 2008 at 9:20 AM, Steve <[EMAIL PROTECTED]> wrote:
> So is Intel better? Which motherboard could be a good choice? (microatx?)

Inexpensive Intel motherboards do not support ECC memory, while all
current AMD CPUs do.

If ECC is important to you, Intel is not a good choice.

I'm disappointed that there is no support for power management on the
K8, which is a bit of a shock since Sun's been selling K8-based
systems for a few years now. The cost of an X3 ($125) and an AM2+ mobo
($80) is about the same as an Intel chip ($80) and motherboard ($150)
that supports ECC.

-B

-- 
Brandon High [EMAIL PROTECTED]
"The good is the enemy of the best." - Nietzsche
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-29 Thread Steve
waynel wrote:
> 
> We have a couple of machines similar to your just
> spec'ed.  They have worked great.  The only problem
> is, the power management routine only works for K10
> and later.  We will move to Intel core 2 duo for
> future machines (mainly b/c power management
> considerations).
> 

So is Intel better? Which motherboard could be a good choice? (microatx?)
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-29 Thread Bob Friesenhahn
On Tue, 29 Jul 2008, Steve wrote:

> I agree with mike503. If you create the awareness (of the 
> instability of recorded information) there is a large potential 
> market waiting for a ZFS/NAS little server!

The big mistake in the posting was to assume that Sun should be in 
this market.  Sun has no experience in the "consumer" market and as 
far as I know, it has never tried to field a "consumer" product.

Anyone here is free to go into business selling ready-made NAS servers 
based on OpenSolaris.  Except for Adaptec SnapServer (which is 
pricey), almost all of the competition for small NAS servers is based 
on a special version of Microsoft Windows targeted for NAS service and 
which only offers CIFS.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-29 Thread Steve
I agree with mike503. If you create awareness (of how unstable recorded 
information is), there is a large potential market waiting for a little ZFS/NAS 
server!

The thin-client idea is very nice. It would be good to also use the NAS server 
as a full server and access it remotely with a very thin client! (in this sense 
it can be larger ;-)
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-29 Thread mike
I didn't use any.

That would be my -ideal- setup :)

I waited and waited, and there's still no eSATA/port-multiplier support out 
there, or it isn't stable enough. So I scrapped it.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-29 Thread Justin Vassallo
> Actually, my ideal setup would be:

>Shuttle XPC w/ 2x PCI-e x8 or x16 lanes
>2x PCI-e eSATA cards (each with 4 eSATA port multiplier ports)

Mike, may I ask which eSATA controllers you used? I searched the Solaris HCL
and found very few listed there

Thanks
justin


smime.p7s
Description: S/MIME cryptographic signature
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-28 Thread mike
Holy crap! That sounds cool. Firmware-based-VPN connectivity!

At Intel we're getting better too I suppose.

Anyway... I don't know where you're at in the company but you should rattle 
some cages about my idea :)
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-28 Thread Willem van Schaik
W. Wayne Liauh wrote:
> As to cases, our experience is, unless you have good air-conditioning or have 
> a means to nicely enclose your machine (like the BlackBox :-)  ), get a box 
> as big as your space would allow.  We had enough bad experiences with mini 
> cases, especially those Shuttle-type boxes. 
>   
Unfortunately I must agree.

I've got two Shuttle servers and I really, really love them. They're small, 
beautiful (at least mine, an SN85G4), and pretty 
silent. The one in my study is a real desktop in the sense that it sits 
"on top of my desk" :) and the other one is the PVR in my basement. All 
is fine and humming silently along.

But when the summer nights start to get warmer, as happened here in 
the last few weeks, in my case :-) the CPU is still OK, but the on-board 
NIC starts to "break up" - kind of like doing a ping and 50%+ of the packets 
fail. Try to do an FTP with that. :-)

I still adhere to 'small is beautiful', but it does indeed give some 
problems. Luckily, sometimes solutions come from unexpected angles. In 
my case I got a silent desktop through my new "[EMAIL PROTECTED]" 
(http://blogs.sun.com/wwwillem/entry/silence_and_pavlov).

Willem



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-28 Thread mike
I would love to go back to using shuttles.

Actually, my ideal setup would be:

Shuttle XPC w/ 2x PCI-e x8 or x16 lanes
2x PCI-e eSATA cards (each with 4 eSATA port multiplier ports)
then I could chain up to 8 enclosures off a single small, nearly silent host 
machine.

8 enclosures x 5 drives = 40 drives... and it leaves open the possibility to 
change enclosures and find the quietest ones too. I got tired of  having large 
beefy machines a while ago. Problem is, eSATA/PMP support isn't very mature yet 
even in opensolaris.

I would be open to building a 5 drive mini-NAS box using ZFS, if I had more 
money to throw on ordering (and possibly returning, depending on if it worked 
or not) the right equipment. There's a variety of components I've been 
monitoring for just such a project, but I am wary due to hardware compatibility 
concerns.

I am not sure I can find small enough components that will allow for 4 gigs of 
RAM, silence, proper heat dissipation and a boot environment for Solaris. Also, 
that solves the hardware/main OS piece; creating some sort of fun web-based UI 
and making some sort of appliance out of it would be a great next step.

Sun should totally be looking into ZFS-based small business/home NAS boxes. 
Like the ReadyNAS style - basically just 5 drives, and somehow work with a case 
manufacturer to get a chassis that can fit a decent chip in there worthy of 
powering ZFS/CIFS/NFS and related stuff, a 64bit OS, 4 gig ram etc.

So far I've found the Chenbro 4-drive one; I built it, it's quiet and neat and 
all, but it's only 32-bit, I think it maxes out at 2 gig RAM, etc. The VIA 
stuff is cool too because the current generation comes with onboard crypto 
which is -amazingly- fast, and would be great in combination with ZFS crypto 
(think about all those dentist and doctor offices, banking/financial 
institutions, etc. which need workstation and backup storage... I see a -huge- 
market there myself, and crypto would meet the HIPAA and financial privacy 
needs); snapshots would probably be damn useful for all of them too!
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-28 Thread W. Wayne Liauh
> I have built mine the last few days, and it seems to
> be running fine right now.
> 
> Originally I wanted Solaris 10, but switched to using
> SXCE (nevada build 94, the latest right now) because
> I wanted the new CIFS support and some additional ZFS
> features.
> 
> Here's my setup. These were my goals:
> - Quiet as possible
> - Compact as possible
> - 6 drives minimum
> - Has all the right chipsets/etc. that Solaris of
> some sort supports
> 
> Case: Antec P182
> CPU: Athlon64 X2 Dual Core 4450e 2.3GHz (figured
> lower power is cool)
> Mobo: Asus M2N-SLI Deluxe (6 onboard SATA, 1 PATA [2
> devices], 1 other SATA, but not supported well good
> enough for booting it seems)
> Optical: some random IDE DVD-RW I picked up
> Boot: Seagate IDE
> Data: 6x1TB Seagate SATA2
> RAM: 4GB(2x2GB) DDR2 PC6400 800MHz Matched Pair
> Kingston (non ECC, unbuffered)
> Power Supply: CoolMax PS-V500 500W
> 



We have a couple of machines similar to what you just spec'ed.  They have worked 
great.  The only problem is, the power management routine only works for K10 
and later.  We will move to Intel Core 2 Duo for future machines (mainly because 
of power management considerations).

Another thought is about the SLI boards.  For use as servers, we really don't 
need fancy video cards, which consume large amounts of power, and we are 
actively exploring the option of micro MBs with built-in Intel video.

As to cases, our experience is, unless you have good air-conditioning or have a 
means to nicely enclose your machine (like the BlackBox :-)  ), get a box as 
big as your space would allow.  We had enough bad experiences with mini cases, 
especially those Shuttle-type boxes.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-28 Thread Florin Iucha
On Mon, Jul 28, 2008 at 04:13:54PM -0700, Steve wrote:
> Since the information obtained it seems that the better choice is ASUS 
> M2A-VM: tested "happily", enough cheap (47€), not bad performing, 4 sata, gb 
> ethernet, dvi, firewire, ecc. The only notice was a possible DMA bug of the 
> south bridge, but it seems not so important. (!)
> 
> Now the options will be processor and memory.
> 
> What about a (50€) Athlon64 X2 4450E (g2, 45W)?

The combo works and it is power-efficient.  Just not under Solaris, as
for some reason the power management cannot perform frequency scaling,
so you end up running at max frequency and idling at 110W instead of
60W.  Which is nice for latency... but bad for the environment.
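A minimal sketch of how to check whether frequency scaling is actually
happening (the kstat statistic name is my assumption of the usual cpu_info
counters and may vary between builds):

  # Current clock rate per virtual CPU; on this board it stays pinned at
  # the maximum frequency instead of dropping when idle.
  kstat -m cpu_info -s current_clock_Hz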

Cheers,
florin

PS: I filed a bug on Saturday, but I haven't received the
acknowledgment yet.

-- 
Bruce Schneier expects the Spanish Inquisition.
  http://geekz.co.uk/schneierfacts/fact/163


pgpPPuZimK6ew.pgp
Description: PGP signature
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-28 Thread mike
I built mine over the last few days, and it seems to be running fine right now.

Originally I wanted Solaris 10, but switched to using SXCE (nevada build 94, 
the latest right now) because I wanted the new CIFS support and some additional 
ZFS features.

Here's my setup. These were my goals:
- Quiet as possible
- Compact as possible
- 6 drives minimum
- Has all the right chipsets/etc. that Solaris of some sort supports

Case: Antec P182
CPU: Athlon64 X2 Dual Core 4450e 2.3GHz (figured lower power is cool)
Mobo: Asus M2N-SLI Deluxe (6 onboard SATA, 1 PATA [2 devices], 1 other SATA, 
but that one is not supported well enough for booting, it seems)
Optical: some random IDE DVD-RW I picked up
Boot: Seagate IDE
Data: 6x1TB Seagate SATA2
RAM: 4GB(2x2GB) DDR2 PC6400 800MHz Matched Pair Kingston (non ECC, unbuffered)
Power Supply: CoolMax PS-V500 500W

I got a Zalman heatsink/fan cooler that runs at 19dBa to replace the stock AMD 
one.

I also got a 5.25" -> 3.5" enclosure so I can put my boot drive in the case. 
The case itself only has room for 6 3.5" drives normally.

Originally I had a SATA DVD-ROM on that 7th port on the motherboard; it would 
boot the Solaris 10u5 and the Nevada 94 DVDs, but when it came time to install 
the OS/drivers, it could not load the DVD any longer. So it appears that 
chipset is not supported properly yet, even in snv_94 (or I just didn't know 
what I was doing).

The IDE DVD drive has no issues. I had my choice, and installed Solaris 10u5 
first, but then noticed it didn't have the in-kernel CIFS server, which I was 
really hoping to use. I'd like to get the most performance I can. I haven't 
done any benchmarks and I am new to Solaris, so I am still learning, but as of 
right now I think it is smooth sailing. I was able to easily set up a zpool, 
create some ZFS filesystems, share one of them via CIFS and mount it on XP, etc.
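
For reference, a minimal sketch of the kind of commands involved (the device,
pool and filesystem names are made up for illustration, and the in-kernel CIFS
part assumes an SXCE/Nevada build with the smb/server service available):

  # Raidz pool across the six data disks (hypothetical device names).
  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
  # A filesystem shared over the in-kernel CIFS server.
  zfs create tank/share
  zfs set sharesmb=on tank/share
  # Enable the CIFS service and its dependencies.
  svcadm enable -r smb/server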

It's a damn shame the HCL is so out of date. Also, die-hard Solaris folks don't 
seem to think this is a big issue, but someone coming from Linux/FreeBSD land 
finds the whole OpenSolaris vs. Nevada/SXCE vs. Solaris thing confusing as hell, 
and it made product selection a pain in the neck. I wanted to build an 
Intel-based machine (I get discounts), and some Intel motherboards have 8 
onboard SATA ports, but I didn't know if the NIC chipsets and onboard video and 
such were supported properly... so I didn't want to order those and wind up 
having to return/RMA them somehow if they didn't work.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-28 Thread Steve
From the information gathered, it seems that the better choice is the ASUS 
M2A-VM: tested "happily", cheap enough (47€), not bad performance, 4 SATA, GbE, 
DVI, FireWire, etc. The only caveat was a possible DMA bug in the south bridge, 
but it seems not so important. (!)


Now the options will be processor and memory.

What about a (50€) Athlon64 X2 4450E (g2, 45W)?


And about memory options, what and how much to choose? To give an idea of the 
offers:

CORSAIR DDR2 TWIN2X4096-6400C4DHX 4GB (2GB x 2) CAS 4
80,50EUR

KINGSTON 1GB 333MHz (PC-2700) DDR Non-ECC CL2.5
28,30EUR

OCZ Platinum PC3-12800 DDR3 XTC 2x1GB
128,00EUR

CORSAIR DDR PC400 512MB CL 2.5
16,30EUR

TEAM DDR2 800mhz 1GB CL5.0 (pc6400)
18,50EUR

OCZ DDR2 PC2-8500 Reaper HPC 4GB Edition
103,00EUR

V-DATA DDR2 800MHZ 1GB
15,90EUR

TEAM DDR2 800 2x2GB TEDD4096M800HC5DC
64,00EUR

OCZ DDR2 OCZ2T800C44GK Titanium 4GB (2GB x 2)
89,00EUR

GEIL DDR2-800 PC2-6400 KIT 2x1GB ULTRA (GX22GB6400UDC)
42,00EUR

CORSAIR DDR PC400 1GB CL3 VS1GB400C3
28,50EUR

CORSAIR DDR2 KIT 2x2GB PC8500 1066Mhz-555 XMS2-DHX + fan
112,60EUR

CORSAIR DDR2 800Mhz XMS2 2GB DDR2 KIT (2X1GB) DHX Cl.5
49,50EUR

G.Skill DDR2 800Mhz 4GB Cl.4 F2-6400CL4D-4GBPK (2x2GB)
85,00EUR

CORSAIR DDR2 800Mhz XMS2 2GB DDR2 KIT (2X1GB) DHX Cl.4
48,50EUR

A-DATA DDR2 800Mhz 1GB CL5.0
16,20EUR

A-DATA DDR PC400 1GB CL3
24,00EUR

KINGSTON ValueRAM DDR2 2Gb 800MHz PC2-6400 P/N: KVR800D2N5/2G
28,50EUR
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-25 Thread mike
Yeah, I have not been pleased with the quality of the HCL.

There's plenty of hardware discussed on the forums and in the bugs DB that has 
been confirmed and/or fixed to work on various builds of OSOL and Solaris 10.

I wound up buying an AMD-based machine (I wanted Intel) with 6 onboard SATA; 
Intel mobos had 8 onboard SATA, which would have saved me some hassle. It was 
ICH10 too, which is considered "Solaris Certified" or whatever. But the rest of 
the motherboard chipsets - graphics, network, etc. - I could not verify well 
enough, and I am not going to order parts online that don't work and then have 
to return them. I have had to do that too many times in the past.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-25 Thread Steve
http://app.blist.com/#/blist/mar.ste/Micro-mini-ATX-mainboards-for-Solaris-ZFS-NAS-server
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-25 Thread Steve
In order to try to track the discussion I've created a "wikiable" web list of 
what was discussed in this thread and what I found on the HCL!

The problem is still the same: which are the best ones to pick? ;-)

Comments are open (also on features to list) and everyone can edit the list!
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-25 Thread mike
Don't take my opinion; I am a newbie to everything Solaris.

From what it looks like in the HCL, some of the VIA stuff is supported. Like I 
said, I tried some Nexenta CD...

They don't make 64-bit, first off, and I am not sure if any of their mini-ITX 
boards support more than 2 gig RAM. ZFS loves 64-bit and RAM, so those might be 
some shortcomings. But you should still try it if you really want to. I wasn't 
patient enough to try learning the diffs between Solaris, SXCE, OSOL, Nexenta 
and hack drivers if needed, etc. I just needed a working box at the time.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-25 Thread Steve
Since it seems it is not working, I'm not going for this case!
And that's a pity for such a "perfect" case!
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-25 Thread mike
I have that chassis too. Did Solaris install for you? What version/build?

I think I tried a Nexenta build and it crapped out on install.

I also only have 2 gigs of RAM in it and a CF card to boot off of...

4 drives is too small for what I want; 5 drives would be my minimum. I was 
hoping this would work a little bit better. But it is a cute little case.

Note: be sure to get low-profile RAM if you get a slim/laptop optical drive!
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-24 Thread Steve
>  s> And, if better I'm open also to intel!
> intel you can possibly get onboard AHCI that works, and the intel
> gigabit MAC, and 16GB instead of 8GB RAM on a desktop board.  Also the
> video may be better-supported.  but it's, you know, intel.

Miles, sorry, but I'm probably missing something needed to properly understand 
your closing comment about intel!
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-24 Thread Steve
> On Thu, Jul 24, 2008 at 1:28 AM, Steve
> <[EMAIL PROTECTED]> wrote:
> > And interesting of booting from CF, but it seems is
> possible to boot from the zraid and I would go for
> it!
> 
> It's not possible to boot from a raidz volume yet.
> You can only boot
> from a single drive or a mirror.

If I understood properly, it has been possible since April this year, but yes, 
there are still open issues that are being resolved! :-)
http://opensolaris.org/os/community/zfs/boot/
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-24 Thread Steve
Thank you very much Brandon for pointing out the issue with the case!!
(Anyway, that's really a pity; I hope it will find a solution!...)

About the Atom, a person from Sun pointed out that the only good version for ZFS 
would be the N200 (64-bit). Anyway, I wouldn't make it a question of money 
(yet ;-), but of appropriateness (in the Atom's case maybe the heat / 
consumption). From the spec sheet the Intel one seems good to me... I just 
don't know whether it is fully or only partially compatible!

Hopefully this thread will suggest one or more reference MBs to me and to the 
many others who (I think) are thinking about building a "little" home ZFS NAS 
server!
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-24 Thread Neal Pollack
Miles Nordin wrote:
>> "s" == Steve  <[EMAIL PROTECTED]> writes:
>> 
>
>  s> About freedom: I for sure would prefere open source drivers
>  s> availability, let's account for it!
>
> There is source for the Intel gigabit cards in the source browser.
>
>   
> http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/io/e1000g/
>
> There is source for some Broadcom gigabit cards (bge)
>
>   
> http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/io/bge/
>
> but they don't always work well.  There is a closed-source bcme driver
> for the same cards downloaded from Broadcom that Benjamin Ellison is
> using instead.
>
> I believe this is source an nForce ethernet driver (!):
>
>   
> http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/io/nge/
>
> but can't promise this is the driver that actually attaches to your
> nForce 570 board.  Also there's this:
>
>   http://bugs.opensolaris.org/view_bug.do?bug_id=6728522
>
> wikipedia says the forcedeth chips are crap, and always were even with
> closed-source windows drivers, but they couldn't be worse than
> broadcom.
>
> I believe this source goes with the Realtek 8169/8110 gigabit MAC:
>
>   
> http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/io/rge/rge_main.c
>
> On NewEgg many boards say they have Realtek ethernet.  If it is 8169
> or 8110, that is an actual MAC chip.  usually it's a different number,
> and they are talking about the PHY chip which doesn't determine the
> driver.
>   

One of our engineers has also just updated the code to support the 
Realtek 8111c version chip.
That should be in build 95 of Nevada.

> This is Theo de Raadt's favorite chip because Realtek is cooperative
> with documentation.  However I think I've read on this list that chip
> is slow and flakey under Solaris.
>
>
> If using the Sil3124 with stable solaris, I guess you need a very new
> release:
>
>  http://bugs.opensolaris.org/view_bug.do?bug_id=2157034
>
> The other problem is that there are different versions of this chip,
> so the lack of bug reports doesn't give you much safety right after a
> new chip stepping silently starts oozing into the market, unmarked by
> the retailers.
>
> It looks like the SATA drivers that come with source have their source
> here:
>
>  
> http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/io/sata/adapters/
>
> The ATI chipset for AMD/AM2+ is ahci (but does not work well.  you'll
> need an add-on card.)  I assume the nForce chipset is nv_sata, which
> I'm astonished to find seems to come with source.  And, of course,
> there is Sil3124!
>
> The sil3112 driver is somewhere else.  I don't think you should use
> that one.  I think you should use a ``SATA framework'' chip.
>
> Marvell/thumper and LSI Logic mpt SATA drivers are closed-source, so
> if you want a system where most drivers come with source code you
> really need to build your own, not buy one of the Sun systems.  but
> there is what looks like a BSD-licensed LSI Logic driver here:
>
>  
> http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/io/mega_sas/
>
> so, I am not sure what is the open/closed status of the LSI board.  I
> was pretty sure the one in the Ultra 25 is the mpt and attaches to the
> SCSI/FC stack, not the SATA stack, and was closed-source.  so maybe
> this is another case of two drivers for one chip?  or maybe I was
> wrong?
>
> I'm not sure the ``SATA framework'' itself is open-source.  I believe
> at one time it was not, but I don't know where to find a comprehensive
> list of the unfree bits in your OpenSolaris 2008.5 CD.  
>
> I'm hoping if enough people rant about this nonsense, we will shift
> the situation.  For now it seems to be in Sun's best interest to be
> vague about what's open source and what's not because people see the
> name `Open' in OpenSolaris and impatiently assume the whole thing is
> open-source like most Linux CD's.  We should have a more defensive
> situation where their interest is better-served by being very detailed
> and up-front about what's open and what isn't.
>
>
> I haven't figured out an easy way to tell quickly which drivers are
> free and which are not, even with great effort.  Not only is an
> overall method missing, but a stumbling method does not work well
> because there are many decoy drivers which don't actually attach
> except in circumstances other than yours.  I need to find in the
> source a few more tables, the PCI ID to kernel module name mapping,
> and the kernel module name to build tree mapping.  I don't know if
> such files exist, or if the only way it's stored is through execution
> of spaghetti Makefiles available through numerous scattered ``gates''.
> Of course this won't help root out unfree ``frameworks'' either.
>
> For non-driver pieces of the OS, this is something the package
> management tool can do on Linux and BSD, albeit clumsily---you feed
> object filenames

Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-24 Thread Miles Nordin
> "s" == Steve  <[EMAIL PROTECTED]> writes:

 s> About freedom: I for sure would prefere open source drivers
 s> availability, let's account for it!

There is source for the Intel gigabit cards in the source browser.

  
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/io/e1000g/

There is source for some Broadcom gigabit cards (bge)

  
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/io/bge/

but they don't always work well.  There is a closed-source bcme driver
for the same cards downloaded from Broadcom that Benjamin Ellison is
using instead.

I believe this is source for an nForce ethernet driver (!):

  
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/io/nge/

but can't promise this is the driver that actually attaches to your
nForce 570 board.  Also there's this:

  http://bugs.opensolaris.org/view_bug.do?bug_id=6728522

wikipedia says the forcedeth chips are crap, and always were even with
closed-source windows drivers, but they couldn't be worse than
broadcom.

I believe this source goes with the Realtek 8169/8110 gigabit MAC:

  
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/io/rge/rge_main.c

On NewEgg many boards say they have Realtek ethernet.  If it is 8169
or 8110, that is an actual MAC chip.  usually it's a different number,
and they are talking about the PHY chip which doesn't determine the
driver.

This is Theo de Raadt's favorite chip because Realtek is cooperative
with documentation.  However, I think I've read on this list that that chip
is slow and flaky under Solaris.


If using the Sil3124 with stable solaris, I guess you need a very new
release:

 http://bugs.opensolaris.org/view_bug.do?bug_id=2157034

The other problem is that there are different versions of this chip,
so the lack of bug reports doesn't give you much safety right after a
new chip stepping silently starts oozing into the market, unmarked by
the retailers.

It looks like the SATA drivers that come with source have their source
here:

 
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/io/sata/adapters/

The ATI chipset for AMD/AM2+ is ahci (but does not work well.  you'll
need an add-on card.)  I assume the nForce chipset is nv_sata, which
I'm astonished to find seems to come with source.  And, of course,
there is Sil3124!

The sil3112 driver is somewhere else.  I don't think you should use
that one.  I think you should use a ``SATA framework'' chip.

Marvell/thumper and LSI Logic mpt SATA drivers are closed-source, so
if you want a system where most drivers come with source code you
really need to build your own, not buy one of the Sun systems.  but
there is what looks like a BSD-licensed LSI Logic driver here:

 
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/io/mega_sas/

so, I am not sure what is the open/closed status of the LSI board.  I
was pretty sure the one in the Ultra 25 is the mpt and attaches to the
SCSI/FC stack, not the SATA stack, and was closed-source.  so maybe
this is another case of two drivers for one chip?  or maybe I was
wrong?

I'm not sure the ``SATA framework'' itself is open-source.  I believe
at one time it was not, but I don't know where to find a comprehensive
list of the unfree bits in your OpenSolaris 2008.5 CD.  

I'm hoping if enough people rant about this nonsense, we will shift
the situation.  For now it seems to be in Sun's best interest to be
vague about what's open source and what's not because people see the
name `Open' in OpenSolaris and impatiently assume the whole thing is
open-source like most Linux CD's.  We should have a more defensive
situation where their interest is better-served by being very detailed
and up-front about what's open and what isn't.


I haven't figured out an easy way to tell quickly which drivers are
free and which are not, even with great effort.  Not only is an
overall method missing, but a stumbling method does not work well
because there are many decoy drivers which don't actually attach
except in circumstances other than yours.  I need to find in the
source a few more tables, the PCI ID to kernel module name mapping,
and the kernel module name to build tree mapping.  I don't know if
such files exist, or if the only way it's stored is through execution
of spaghetti Makefiles available through numerous scattered ``gates''.
Of course this won't help root out unfree ``frameworks'' either.

For non-driver pieces of the OS, this is something the package
management tool can do on Linux and BSD, albeit clumsily---you feed
object filenames to tools like rpm and pkg_info, and they slowly
awkwardly lead you back to the source code.

-8<-
zephiris:~$ pkg_info -E `which mutt`
/usr/local/bin/mutt: mutt-1.4.2.3
mutt-1.4.2.3tty-based e-mail client
zephiris:~$ pkg_info -P mutt-1.4.2.3
Information for inst:mutt-1.4.2.3

Pkgpath:
mail/mutt/stable

zephiris:~$ cd /usr/ports/mail/mutt/stable
zephiris:

Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-24 Thread Brandon High
On Thu, Jul 24, 2008 at 3:41 AM, Steve <[EMAIL PROTECTED]> wrote:
> Or "Atom" maybe viable?

The Atom CPU has pretty crappy performance. At 1.6 GHz, performance is
somewhere between a 900MHz Celeron-M and a 1.13GHz Pentium 3-M. It's also
single-core. It would probably work, but it could be CPU bound on
writes, especially if compression is enabled. If performance is
important, a cheap 2.3GHz dual core AMD and motherboard costs $95 vs.
a 1.6GHz Atom & motherboard for $75.

An embedded system using ZFS and the Atom could easily compete on
price and performance with something like the Infrant ReadyNAS. Being
able to increase the stripe width of raidz would help, too.

> - in the case 
> http://www.chenbro.com/corporatesite/products_detail.php?serno=100

Someone tried to use this case and posted about it. The hotswap
backplane in it didn't work so they had to modify the case to plug the
drives directly to the motherboard.
http://blog.flowbuzz.com/search/label/NAS

-B

-- 
Brandon High [EMAIL PROTECTED]
"The good is the enemy of the best." - Nietzsche
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-24 Thread Steve
PS: I scaled down to the mini-ITX form factor because it seems that the 
http://www.chenbro.com/corporatesite/products_detail.php?serno=100 is the 
PERFECT case for the job!
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-24 Thread Steve
Or this one, which seems a very, very nice Intel MB (4+ SATA in a mini package!):
- http://www.intel.com/Products/Desktop/Motherboards/DG45FC/DG45FC-overview.htm

Same question: could it be good (or the best) for the purpose?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-24 Thread Florin Iucha
On Thu, Jul 24, 2008 at 08:22:16AM -0400, Charles Menser wrote:
> Yes, I am very happy with the M2A-VM.

You will need at least SNV_93 to use it in AHCI mode.

The northbridge gets quite hot, but that does not seem to be impairing
its performance.  I have the M2A-VM with an AMD 64 BE-2400 (45W) and
a Scythe Ninja Mini heat sink and the only fans that I have in the case
are the two side fans (the case is Antec NSK-2440).  Quiet as a mouse.

florin

-- 
Bruce Schneier expects the Spanish Inquisition.
  http://geekz.co.uk/schneierfacts/fact/163


pgpUoATrg6Ykg.pgp
Description: PGP signature
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-24 Thread Charles Menser
I installed it with snv_86 in IDE controller mode, and have since
upgraded ending up at snv_93.

Do you know what implications there are for using AHCI vs IDE modes?

Thanks,
Charles

On Thu, Jul 24, 2008 at 9:26 AM, Florin Iucha <[EMAIL PROTECTED]> wrote:
> On Thu, Jul 24, 2008 at 08:22:16AM -0400, Charles Menser wrote:
>> Yes, I am very happy with the M2A-VM.
>
> You will need at least SNV_93 to use it in AHCI mode.
>
> The northbridge gets quite hot, but that does not seem to be impairing
> its performance.  I have the M2A-VM with an AMD 64 BE-2400 (45W) and
> a Scythe Ninja Mini heat sink and the only fans that I have in the case
> are the two side fans (the case is Antec NSK-2440).  Quiet as a mouse.
>
> florin
>
> --
> Bruce Schneier expects the Spanish Inquisition.
>  http://geekz.co.uk/schneierfacts/fact/163
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-24 Thread Florin Iucha
On Thu, Jul 24, 2008 at 10:38:49AM -0400, Charles Menser wrote:
> I installed it with snv_86 in IDE controller mode, and have since
> upgraded ending up at snv_93.
> 
> Do you know what implications there are for using AHCI vs IDE modes?

I had the same question and Neal Pollack <[EMAIL PROTECTED]> told
me that:

: For a built-in motherboard port, legacy mode is not really seen as
: much slower.  From my understanding, AHCI mainly adds (in theory)  NCQ
: and hotplug capability.  But hotplug means very little for a boot disk,
: and NCQ is a big joke, as I have not yet seen reproducible benchmarks
: that show any real measurable performance gain.  So I personally think
: legacy mode is fine for now.

I STFW for the topic and got similar comments on hardware review
site forums.

Best,
florin

-- 
Bruce Schneier expects the Spanish Inquisition.
  http://geekz.co.uk/schneierfacts/fact/163


pgpyUMapwTXS0.pgp
Description: PGP signature
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-24 Thread Charles Menser
Yes, I am very happy with the M2A-VM.

Charles

On Wed, Jul 23, 2008 at 5:05 PM, Steve <[EMAIL PROTECTED]> wrote:
> Thank you for all the replays!
> (and in the meantime I was just having a dinner! :-)
>
> To recap:
>
> tcook:
> you are right, in fact I'm thinking to have just 3/4 for now, without 
> anything else (no cd/dvd, no videocard, nothing else than mb and drives)
> the case will be the second choice, but I'll try to stick to micro ATX for 
> space reason
>
> Charles Menser:
> 4 is ok, so is the "ASUS M2A-VM" good?
>
> Matt Harrison:
> The post is superb (many compliments to Simon)! And in fact I was already on 
> that, but the MB is unfortunately ATX. If it turns out to be the only or the 
> suggested choice I would go for it, but I hope there will be a smaller one
>
> bhigh:
> so the best is 780G?
>
>
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-24 Thread Steve
Or "Atom" maybe viable?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-24 Thread Steve
Following the VIA link and googling a bit, I found something that seems 
interesting:
- MB: http://www.avmagazine.it/forum/showthread.php?s=&threadid=108695
- in the case http://www.chenbro.com/corporatesite/products_detail.php?serno=100

Are they viable??
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-24 Thread Brandon High
On Thu, Jul 24, 2008 at 1:28 AM, Steve <[EMAIL PROTECTED]> wrote:
> And interesting of booting from CF, but it seems is possible to boot from the 
> zraid and I would go for it!

It's not possible to boot from a raidz volume yet. You can only boot
from a single drive or a mirror.
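
As a hedged illustration of the mirror case (the disk and pool names are
hypothetical, and a recent Nevada build with ZFS root is assumed):

  # Attach a second disk to the existing root pool so the boot device
  # becomes a two-way mirror (root pools live on slices).
  zpool attach rpool c1t0d0s0 c1t1d0s0
  # Once the resilver finishes, put boot blocks on the new disk (x86).
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0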

-B

-- 
Brandon High [EMAIL PROTECTED]
"The good is the enemy of the best." - Nietzsche
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-24 Thread Steve
many on HD setup:

Thanks for the replies, but actual doubt is on MB.
I would go with the suggestion of different HD (even if I think that the speed 
will be aligned to the slowest of them), and may be raidz2 (even if I think 
raidz is enough for a home server)



bhigh:

It seems than 780G/SB700 and Nvidia 8200 are good choice.

Since the tom's hw website comparison 
(http://www.tomshardware.com/reviews/amd-nvidia-chipset,1972.html) I would 
choose AMD chipset, but there is very little difference on power (speed and 
consumption). So better to evaluate the compatibility feature!

And interesting of booting from CF, but it seems is possible to boot from the 
zraid and I would go for it!

PS: "The good is the enemy of the best.", so what is the best? ;-)

 
 
Miles Nordin:

Interesting the VIA stuff, but for sure I need something proven!...

About compatibility: it seems it will only improve with time, but since the only 
spare hw I have now is a 386SX with a 20MB HD, I have to buy something new!

About the bugs... that is exactly what I would like to avoid with this advice!

About freedom: I would certainly prefer open source driver availability, so let's 
account for it!

For the rest I'm a bit lost again... Let's say that for many reasons I would 
like to choose a motherboard with everything needed onboard... So I'm trying to 
understand how to use all the interesting advice in this thread... Is there a 
motherboard that can balance stability (some selected older options) and 
performance (newer options) for the expected life of the machine (the next 3 years)?

About reuse with Linux: for now I'm really interested in a fileserver with 
ZFS, so I would focus on Solaris compatibility (for various reasons I wouldn't 
choose the Mac or FreeBSD implementations).

And, if it is better, I'm also open to Intel!
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-23 Thread Donald Murray, P.Eng.
On Wed, Jul 23, 2008 at 7:21 PM, Miles Nordin <[EMAIL PROTECTED]> wrote:
*SNIP*

>
> Anyway, you can find more anecdotes in the archives of this list.
> IIRC someone else corroborated that he found, among non-DoA drives,
> failures are more likely in the first month than in the second month,
> but I couldn't find the post.
>
> I did find Richard Elling's posting of this paper:
>
>  
> http://www.usenix.org/event/fast08/tech/full_papers/bairavasundaram/bairavasundaram.pdf
>
> but it does not support my claim about first-month failures.  Maybe my
> experience is related to something NetApp didn't have, maybe related
> to the latest batch of consumer drives released after that study, or
> to the consumer supply chain.

*SNIP*

For another good read on drive failures, there's
also "Failure Trends in a Large Disk Drive Population":
http://labs.google.com/papers/disk_failures.html
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-23 Thread Bob Friesenhahn
On Wed, 23 Jul 2008, Miles Nordin wrote:

> the problem is that it's common for a very large drive to have
> unreadable sectors.  This can happen because the drive is so big that
> its bit-error-rate matters.  But usually it happens because the drive
> is starting to go bad but you don't realize this because you haven't
> been scrubbing it weekly.  Then, when some other drive actually does
> fail hard, you notice and replace the hard-failed drive, and you're
> forced to do an implicit scrub, and THEN you discover the second
> failed drive.  too late for mirrors or raidz to help.

The computed MTTDL is better for raidz2 than for two-way mirrors but 
the chance of loss is already small enough that humans are unlikely to 
notice.  Consider that during resilvering the mirror case only has to 
read data from one disk whereas with raidz2 it seems that the number 
of disks which need to be read are the number of total disks minus 
two.  This means that resilvering the mirror will be much faster and 
since it takes less time and fewer components are involved in the 
recovery, there is less opportunity for a second failure.  The concern 
over scrub is not usually an issue since a simple cron job can take 
care of it.
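
As a concrete example of that cron job (a minimal sketch; the pool name and
schedule are just placeholders), something like this in root's crontab scrubs
the pool every Sunday night:

   # crontab -e (as root): scrub the pool "tank" every Sunday at 03:00
   0 3 * * 0 /usr/sbin/zpool scrub tank

Running 'zpool status -x' afterwards is a cheap way to check whether the
scrub turned anything up.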

Richard's MTTDL writeup at 
http://blogs.sun.com/relling/entry/raid_recommendations_space_vs_mttdl 
is pretty interesting.  However, Richard's writeup is also `flawed' 
since it only considers the disks involved and ignores the rest of the 
system.  This is admitted early on in the statement that "MTTDL 
calculation is ONE attribute" of all the good things we are hoping 
for.
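
For reference, the usual first-order approximation behind that claim (whole-disk
failures only, assumed independent, which is part of why it is only ONE
attribute) is roughly:

   2-way mirror (per pair):   MTTDL ~ MTTF^2 / (2 * MTTR)
   raidz2 over N disks:       MTTDL ~ MTTF^3 / (N * (N-1) * (N-2) * MTTR^2)

Note that MTTR appears squared in the raidz2 case, so the longer resilver
times described above eat directly into its theoretical advantage.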

Raw disk space is cheap.  Mirrors are fast and simple and you can plan 
your hardware so that the data path to the disk is independent of the 
other disk.  When in doubt add a third mirror.  If you start out with 
just a little bit of data which grows over time, you can use three way 
mirroring and transition the extra mirror disks to become regular data 
disks later on.
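
A hedged sketch of that transition (hypothetical device names; check capacity
needs before giving up the third copy):

   # start out with a three-way mirror
   zpool create tank mirror c1t0d0 c1t1d0 c1t2d0

   # later, when capacity matters more than the third copy:
   zpool detach tank c1t2d0                # drop back to a two-way mirror
   zpool add tank mirror c1t2d0 c1t3d0     # reuse the freed disk in a new mirror pair

The second step pairs the freed disk with a newly bought one, so the pool
grows without ever running unmirrored.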

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-23 Thread Aaron Theodore
> >  3. burn in the raidset for at least one month before trusting the
> > disks to not all fail simultaneously. 
> > 
> Has anyone ever seen this happen for real?  I seriously doubt it will
> happen 
> with new drives. 

I have seen it happen on my own home ZFS fileserver...
I purchased two new 500GB drives (WD RE2 enterprise ones), and both started 
failing within a few days.
Luckily I managed to get both replaced without losing any data in my RAID-Z 
pool.
Looking at the drive serial numbers, they were part of the same batch.

Aaron
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-23 Thread Miles Nordin
> "ic" == Ian Collins <[EMAIL PROTECTED]> writes:

ic> I'd use mirrors rather than raidz2.  You should see better
ic> performance 

the problem is that it's common for a very large drive to have
unreadable sectors.  This can happen because the drive is so big that
its bit-error-rate matters.  But usually it happens because the drive
is starting to go bad but you don't realize this because you haven't
been scrubbing it weekly.  Then, when some other drive actually does
fail hard, you notice and replace the hard-failed drive, and you're
forced to do an implicit scrub, and THEN you discover the second
failed drive.  too late for mirrors or raidz to help.

 http://www.opensolaris.org/jive/message.jspa?messageID=255647&tstart=0#255647

If you don't scrub, in my limited experience this situation is the
rule rather than the exception.  especially with digital video from
security cameras and backups of large DVD movie collections---where
most blocks don't get read for years unless you scrub.

ic> you really can grab two of the disks and still leave behind a
ic> working file server!

this really works with 4-disk raidz2, too.

I don't fully understand ZFS's quorum rules, but I have tried a 4-disk
raidz2 pool running on only 2 disks.  

You're right, it doesn't work quite as simply as two 2-disk mirrors.
Since I have half my disks in one tower, half in another, and each
tower connected to ZFS with iSCSI, I often want to shutdown one whole
tower without rebooting the ZFS host.  I find I can do that with
mirrors, but not with 4-disk raidz2.  I'll elaborate.

The only shitty thing is, zpool will only let you offline one of the
four disks.  When you try to offline the second, it says ``no valid
replicas.''  A pair of mirrors doesn't have that problem.
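
For anyone who wants to reproduce this, the sequence is roughly as follows
(hypothetical device names; this describes the behaviour observed above, not a
documented guarantee):

   zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0
   zpool offline tank c2t0d0      # works: pool is DEGRADED but still serving data
   zpool offline tank c2t1d0      # refused: "no valid replicas"

so to power down the second tower you end up yanking the disks and letting
the pool run degraded, rather than offlining them cleanly.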

But, if you forcibly take two disks away from 4-disk raidz2, the pool
does keep working as promised.  The next problem(s) comes after you
give the two disks back.

 1. zpool shows all four disks ONLINE, and then resilvers.  There's no
indication as to which disks are being resilvered and which are
already ``current,'' though---it just shows all four as ONLINE.
so you don't know which two disks absolutely cannot be
removed---which are the target of the resilver and which are the
source.  SVM used to tell you this.  What happens when a disk
fails during the resilver?  Does something different happen
depending on whether it's an up-to-date disk or a resilveree disk?
probably worth testing, but I haven't.

Secondly, if you have many 4-disk raidz2 vdev's, there's no
indication about which vdev is being resilvered.  If I have 20
vdev's, I may very well want to proceed to another vdev, offline
one disk (or two, damnit!), maintain it, before the resilver
finishes.  not enough information in zpool status to do this.  Is
it even possible to 'zpool offline' a disk in another raidz2 vdev
during the resilver, or will it say 'no valid replicas'?  I
haven't tested, probably should, but I only have two towers so
far.

so, (a) disks which will result in 'no valid replicas' when you
attempt to offline them should not be listed as ONLINE in 'zpool
status'.  They're different and should be singled out.

and (b) the set of these disks should be as small as arrangeably
possible

 2. after resilvering says it's complete, 0 errors everywhere, zpool
still will not let you offline ANY of the four disks, not even
one.  no valid replicas.

 3. 'zpool scrub'

 4. now you can offline any one of the four disks.  You can also
online the disk, and offline a different disk, as much as you like
so long as only one disk is offline (but you're supposed to get
two!).  You do not need to scrub in between.  If you take away a
disk forcibly instead of offlining it, then you go back to step 1
and cannot offline anything without a scrub.

 5. insert a 'step 1.5, reboot' or 'step 2.5, reboot', and although I
didn't test it, I fear checksum errors.  I used to have that
problem, and 6675685 talks about it.  SVM could handle rebooting
during a resilver somewhat well.  I fear at least unwarranted
generosity, like I bet 'step 2.5 reboot' can substitute for 'step
3 scrub', letting me use zpool offline again even though whatever
failsafe was stopping me from using it before can't possibly have
resolved itself.

so, (c) the set of disks which result in 'no valid replicas' when
you attempt to offline them seems to have no valid excuse for
changing across a reboot, yet I'm pretty sure it does.

kind of annoying and confusing.

but, if your plan is to stuff two disks in your bag and catch the next
flight to Tel Aviv, my experience says raidz2 should work ok for that.

 c> 3. burn in the raidset for at least one month before trusting
 c> the disks to not all fail simultaneously.

ic> Has anyone ever seen this happen for real?

yeah.  Among 2

Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-23 Thread Bob Friesenhahn
On Wed, 23 Jul 2008, Brandon High wrote:
>
> With raidz2, you can grab any two disks. With mirroring, you have to
> grab the correct two.
>
> Personally, with only 4 drives I would use raidz to increase the
> available storage or mirroring for better performance rather than use
> raidz2.

If mirroring is chosen, then it is also useful to install two 
interface cards and split the mirrors across the cards so that if a 
card (or its driver) fails, the system keeps on running.  I was 
reminded of this just a few days ago when my dual-channel fiber 
channel card locked up and the system paniced since the ZFS pool was 
not accessible.  With two interface cards there would not have been a 
panic.

With raidz and raidz2 it is not easy to achieve the system robustness 
possible when using mirrors.
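
Laying the pool out that way is straightforward; a minimal sketch, assuming
one HBA shows up as c1 and the other as c2 (the device names are made up):

   # each mirror has one side on each controller, so losing a whole
   # card or driver still leaves every vdev with a working half
   zpool create tank mirror c1t0d0 c2t0d0 mirror c1t1d0 c2t1d0

The same split-path idea does not map cleanly onto a single raidz or raidz2
vdev, which is the point above.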

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-23 Thread Tomas Ögren
On 23 July, 2008 - Brandon High sent me these 1,3K bytes:

> On Wed, Jul 23, 2008 at 3:21 PM, Ian Collins <[EMAIL PROTECTED]> wrote:
> >>  2. get four disks and do raidz2.
> >>
> >> In addition to increasing MTTF, this is good because if you need
> >> to leave in a hurry, you can grab two of the disks and still leave
> >> behind a working file server.  I think this is important for home
> >> setups.
> >>
> > I'd use mirrors rather than raidz2.  You should see better performance and
> > you really can grab two of the disks and still leave behind a working file
> > server!
> 
> With raidz2, you can grab any two disks. With mirroring, you have to
> grab the correct two.
> 
> Personally, with only 4 drives I would use raidz to increase the
> available storage or mirroring for better performance rather than use
> raidz2.
> 
> >>  3. burn in the raidset for at least one month before trusting the
> >> disks to not all fail simultaneously.
> >>
> > Has anyone ever seen this happen for real?  I seriously doubt it will happen
> > with new drives.
> 
> My new workstation in the office had its (sole) 400GB drive die after
> about 2 months. It does happen. Production lots share failure
> characteristics.

Bit errors, failing the S.M.A.R.T. test after 27 hours.

/Tomas
-- 
Tomas Ögren, [EMAIL PROTECTED], http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-23 Thread Brandon High
On Wed, Jul 23, 2008 at 3:21 PM, Ian Collins <[EMAIL PROTECTED]> wrote:
>>  2. get four disks and do raidz2.
>>
>> In addition to increasing MTTF, this is good because if you need
>> to leave in a hurry, you can grab two of the disks and still leave
>> behind a working file server.  I think this is important for home
>> setups.
>>
> I'd use mirrors rather than raidz2.  You should see better performance and
> you really can grab two of the disks and still leave behind a working file
> server!

With raidz2, you can grab any two disks. With mirroring, you have to
grab the correct two.

Personally, with only 4 drives I would use raidz to increase the
available storage or mirroring for better performance rather than use
raidz2.

>>  3. burn in the raidset for at least one month before trusting the
>> disks to not all fail simultaneously.
>>
> Has anyone ever seen this happen for real?  I seriously doubt it will happen
> with new drives.

My new workstation in the office had its (sole) 400GB drive die after
about 2 months. It does happen. Production lots share failure
characteristics.

-B

-- 
Brandon High [EMAIL PROTECTED]
"The good is the enemy of the best." - Nietzsche
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-23 Thread Ian Collins
Miles Nordin writes: 

>> "mh" == Matt Harrison <[EMAIL PROTECTED]> writes:
> 
> mh> http://breden.org.uk/2008/03/02/home-fileserver-zfs-hardware/ 
> 
> that's very helpful.  I'll reshop for nForce 570 boards.  i think my
> untested guess was an nForce 630 or something, so it probably won't
> work. 
> 
> I would add: 
> 
>  1. do not get three disks all from the same manufacturer 
> 
>  2. get four disks and do raidz2. 
> 
> In addition to increasing MTTF, this is good because if you need
> to leave in a hurry, you can grab two of the disks and still leave
> behind a working file server.  I think this is important for home
> setups. 
> 
I'd use mirrors rather than raidz2.  You should see better performance and 
you really can grab two of the disks and still leave behind a working file 
server! 

>  3. burn in the raidset for at least one month before trusting the
> disks to not all fail simultaneously. 
> 
Has anyone ever seen this happen for real?  I seriously doubt it will happen 
with new drives. 

Ian 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-23 Thread Brandon High
On Wed, Jul 23, 2008 at 2:05 PM, Steve <[EMAIL PROTECTED]> wrote:
> bhigh:
> so the best is 780G?

I'm not sure if it's the best, but it's a good choice. A motherboard
and cpu can be had for about $150. Personally, I'm waiting for the AMD
790GX / SB750 which is due out this month. The 780G has 1 x16 PCIe
slot, the 790GX uses 2 x16 (x8 electrical) slots. I'm planning on
using an LSI 1068e based controller to add more drives, which has an
x8 physical connector.

The nForce 570 works and is well supported, but doesn't have
integrated video. The Nvidia 8200, which has video, should be supported
as well. I believe both chipsets support 6 SATA ports.

My current shopping list is here:
http://secure.newegg.com/WishList/PublicWishDetail.aspx?WishListNumber=7739092

This system will act as a NAS, backup location, and media server for
our Roku and Popcorn Hour media players.

The system will boot from flash using the CF to IDE converter. The two
cards will be mirrored. The drives will be in a raidz2.

The motherboard I've chosen is a 780G board, but has 2 x16 slots. If I
decide to add more drives, I want to have the option of a second
controller.

I could use a Sil3132 based card instead of the LSI, which would give
me exactly 8 SATA ports and save about $250. I may still go this route
but given the overall cost it's not that big of a deal.

-B

-- 
Brandon High [EMAIL PROTECTED]
"The good is the enemy of the best." - Nietzsche
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-23 Thread Miles Nordin
> "mh" == Matt Harrison <[EMAIL PROTECTED]> writes:

mh> http://breden.org.uk/2008/03/02/home-fileserver-zfs-hardware/

that's very helpful.  I'll reshop for nForce 570 boards.  i think my
untested guess was an nForce 630 or something, so it probably won't
work.

I would add:

 1. do not get three disks all from the same manufacturer

 2. get four disks and do raidz2.

In addition to increasing MTTF, this is good because if you need
to leave in a hurry, you can grab two of the disks and still leave
behind a working file server.  I think this is important for home
setups.

 3. burn in the raidset for at least one month before trusting the
disks to not all fail simultaneously.

The three steps are really necessary with the bottom-shelf drives they
are feeding us.
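
There is no magic to the burn-in either; a simple hedged sketch (pool name and
interval are placeholders) is a month of scheduled scrubs plus a status check:

   # run weekly for the first month, e.g. from cron
   /usr/sbin/zpool scrub tank
   /usr/sbin/zpool status -xv tank    # any read/checksum errors show up here

If a drive is going to fail out of the box, a few full-pool scrubs under real
load tend to flush it out before you trust the set with data.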


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-23 Thread Miles Nordin
> "s" == Steve  <[EMAIL PROTECTED]> writes:

 s> Apart from the other components, the main problem is to choose
 s> the motherboard. The offer is incredibly high and I'm lost.

here is cut-and-paste of my shopping so far:


2008-07-18
 via
  http://www.logicsupply.com/products/sn1eg -- 4 sata.  $251

 opteron
  1U barebones: Tyan B2935G28V4H
Supermicro H8DMU+
  amd opteron 2344he x2 $412
 bad choice.  stepping B3 needed to avoid TLB bug, xx50he or higher
  amd opteron 2352 x2   $628
  kingston kvr667d2d8p5/2g $440
  motherboard Supermicro H8DMU+ supports steppping BA
  Tyan 2915-E and other -E supports stepping BA
TYAN S3992G3NR-E $430
        also avail from https://secure.flickerdown.com/index.php?crn=290&rn=497&action=show_detail

 phenom
  phenom 9550  $175
 do not get 9600.  it has the B2 stepping TLB bug.
  crucial CT2KIT25672AA667 x2 ~$200
  ecs NFORCE6M-A(3.0)   $50
 downside: old, many reports of DoA, realtek ethernet according to newegg comment? -- often they uselessly give the PHY model, no builtin video?!
  ASRock ALiveNF7G or  ABIT AN-M2HD $85
 nforce ethernet, builtin video, relatively new (2007-09) chip.  downside: slow HT bus?

This is **NOT** very helpful to you because none of it is tested with
OpenSolaris.  There are a few things to consider:

 * can you possibly buy something, and then bury it in the sand for a
   year?  or two years if you want it to work with the stable Solaris build.
   or maybe replace a Linux box with new hardware, and run
   OpenSolaris on the old hardware?

 * look on wikipedia to see the stepping of the AMD chip you're
   looking at.  some steppings of the quad-core chips are
   unfashionable.

 * may have better hardware support in SXCE, because OpenSolaris can
   only include closed-source drivers which are freely
   redistributable.  It includes a lot of closed drivers, but maybe
   you'll get some more with SXCE, particularly for SATA chips.

   Unfortunately I don't know one page where you can get a quick view
   of the freedom status of each driver.  I think it is hard even to
   RTFS because some of the drivers are in different ``gates'' than
   the main one, but I'm not sure.  I care about software freedom and 
   get burned on this repeatedly.  And there are people in here a couple 
   times asking for Marvell source to fix a lockup bug or add hotplug, 
   and they cannot get it.  

 * the only network card which works well is the Intel gigabit cards.
   All the other cards, if they work, it is highly dependent on which
   exact stepping, revision, and PHY of the chip you get whether the
   card will work at all, and whether or not it'll have serious
   performance problems.  but intel cards, copper, fiber, new, old,
   3.3V, 5V, PCI-e, have a much better shot of working than the
   broadcom 57xx, via, or realtek.  i was planning to try an nForce on
   the cheap desktop board and hope for luck, then put an intel card
   in the slow 33mhz pci slot if it doesn't work.

 * a lot of motherboards on newegg say they have a ``realtek'' gigabit
   chip, but that's just because they're idiots.  It's really an
   nForce gigabit chip, with a realtek PHY.  i don't know if this
   works well.

 * it sounds like the only SATA card that works well with Solaris is
   the LSI mpt board.  There have been reports of problems and poor
   performance with basically everything else, and in particular the
   AMD northbridge (that's why I picked less-open NVidia chips above).
   the supermicro marvell card is highly sensitive to chipset? or
   BIOS? revisions.  maybe the Sil3124 is okay, I dont know.  I have
   been buying sil3124 from newegg, though they've been through two
   been through two chip steppings silently in the last 6 months.  In any
   case, you should plan on plugging your disks into a PCI card, not the
   motherboard, so that you can try a few different cards when the
   first one starts locking up for 2s every 5min, or locking up all
   the ports when a bad disk is attached to one port, or giving really
   slow performance, some other weird bullshit.

 * the server boards are nice for solaris because:

   + they can have 3.3V PCI slots, so you can use old boards (which
 have working drivers) on a 64-bit 100mhz bus.  The desktop boards
 will give you a fast interface only in PCIe format, not PCI.

   + they take 4x as much memory as desktop (2x as much per CPU, and 2
 CPUs), though you do have to buy ``registered/buffered'' memory instead
 of ``unregistered/unbuffered'')

   + the chipsets demanded by quad-core are older, I think, and maybe 
 more likely to work.  It is even possible to get LSI mpt onboard 
 with some of them, but maybe it is the wrong stepping of mpt or 
 something.

 * the nVidia boards with 6 sata ports have only 4 useable sata ports.
   the other two ports are behind some kind of goofyraid controller.  
   anyway, plan on running your di

Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-23 Thread Steve
Thank you for all the replies!
(and in the meantime I was just having dinner! :-)

To recap:

tcook:
you are right; in fact I'm thinking of having just 3 or 4 drives for now, without anything 
else (no CD/DVD, no video card, nothing other than the MB and drives)
the case will be the second choice, but I'll try to stick to micro ATX for 
space reasons

Charles Menser:
4 is ok, so is the "ASUS M2A-VM" good?

Matt Harrison:
The post is superb (many compliments to Simon)! In fact I was already looking at 
that, but the MB is unfortunately ATX. If it is the only or the recommended 
choice I will go for it, but I hope there will be a smaller one

bhigh:
so the best is 780G?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-23 Thread Brandon High
On Wed, Jul 23, 2008 at 12:37 PM, Steve <[EMAIL PROTECTED]> wrote:
> Minimum requisites should be:
> - working well with Open Solaris ;-)
> - micro ATX (I would put in a little case)
> - low power consumption but more important reliable (!)
> - with Gigabit ethernet
> - 4+ (even better 6+) sata 3gb controller

I'm pretty sure the AMD 780G/SB700 works with Opensolaris in AHCI
mode. There may be a few 780G/SB600 boards, so make sure you check.
I'm not sure how well the integrated video works. The chipset combined
with a 45W CPU should have low power draw. The SB700 can handle up to
6 SATA ports.

Be wary of the SB600 - There's a DMA issue with the controller when
using more than 2GB memory.

There are a lot of 780G boards available in all sorts of form factors
from almost every manufacturer.

> Also: what type of RAM to select together? (I would choose ECC if good, but 
> what about the rest?)

2GB or more of ECC should do it. I believe all the AMD CPUs support
ECC, but you should verify this before buying.

-B

-- 
Brandon High [EMAIL PROTECTED]
"The good is the enemy of the best." - Nietzsche
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-23 Thread Matt Harrison

Steve wrote:
| I'm a fan of ZFS since I've read about it last year.
|
| Now I'm on the way to build a home fileserver and I'm thinking to go
with Opensolaris and eventually ZFS!!
|
| Apart from the other components, the main problem is to choose the
motherboard. The offer is incredibly high and I'm lost.
|
| Minimum requisites should be:
| - working well with Open Solaris ;-)
| - micro ATX (I would put in a little case)
| - low power consumption but more important reliable (!)
| - with Gigabit ethernet
| - 4+ (even better 6+) sata 3gb controller
|
| Also: what type of RAM to select together? (I would choose ECC if good,
| but what about the rest?)
|
| Does it make sense? What are the possibilities?
|

I have just set up a home fileserver with ZFS on OpenSolaris. I used some
posts from a blog to choose my hardware and eventually went with exactly
the same as the author. I can confirm that after 3 months of running
there hasn't been even a hint of a problem with the hardware choice.

You can see the hardware post here

http://breden.org.uk/2008/03/02/home-fileserver-zfs-hardware/

Hope this helps you decide a bit more easily.

Matt



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-23 Thread Charles Menser
I am wondering how many SATA controllers most motherboards have for
their built-in SATA ports.

Mine, an ASUS M2A-VM, has four ports, but OpenSolaris reports them as
belonging to two controllers.

I have seen motherboards with 6+ SATA ports, and would love to know if
any of them have more controller density or if two-to-one is the norm.
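
One way to see the mapping for yourself (a sketch; the exact output depends on
which driver binds the ports, e.g. ahci vs. nv_sata vs. pci-ide):

   cfgadm -al      # SATA attachment points typically appear as sataN/M
   prtconf -D      # shows which driver instance each port sits under

On boards where the chipset exposes all ports through one AHCI controller you
would expect a single sata0 with several ports hanging off it.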

Charles

On Wed, Jul 23, 2008 at 3:37 PM, Steve <[EMAIL PROTECTED]> wrote:
> I'm a fan of ZFS since I've read about it last year.
>
> Now I'm on the way to build a home fileserver and I'm thinking to go with 
> Opensolaris and eventually ZFS!!
>
> Apart from the other components, the main problem is to choose the 
> motherboard. The offer is incredibly high and I'm lost.
>
> Minimum requisites should be:
> - working well with Open Solaris ;-)
> - micro ATX (I would put in a little case)
> - low power consumption but more important reliable (!)
> - with Gigabit ethernet
> - 4+ (even better 6+) sata 3gb controller
>
> Also: what type of RAM to select together? (I would choose ECC if good, but 
> what about the rest?)
>
> Does it make sense? What are the possibilities?
>
>
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-23 Thread Tim
On Wed, Jul 23, 2008 at 2:37 PM, Steve <[EMAIL PROTECTED]> wrote:

> I'm a fan of ZFS since I've read about it last year.
>
> Now I'm on the way to build a home fileserver and I'm thinking to go with
> Opensolaris and eventually ZFS!!
>
> Apart from the other components, the main problem is to choose the
> motherboard. The offer is incredibly high and I'm lost.
>
> Minimum requisites should be:
> - working well with Open Solaris ;-)
> - micro ATX (I would put in a little case)
> - low power consumption but more important reliable (!)
> - with Gigabit ethernet
> - 4+ (even better 6+) sata 3gb controller
>
> Also: what type of RAM to select together? (I would choose ECC if good, but
> what about the rest?)
>
> Does it make sense? What are the possibilities?
>
>
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>


Just wondering what case you're going to put a micro-atx motherboard in
that's going to support 6+ drives without overheating.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss