>>>>> "s" == Steve  <[EMAIL PROTECTED]> writes:

     s> Apart from the other components, the main problem is to choose
     s> the motherboard. The offer is incredibly high and I'm lost.

here is a cut-and-paste of my shopping notes so far:


2008-07-18
 via
  http://www.logicsupply.com/products/sn10000eg -- 4 sata.  $251

 opteron
  1U barebones: Tyan B2935G28V4H
                Supermicro H8DMU+
  amd opteron 2344he x2 $412
     bad choice: stepping B3 is needed to avoid the TLB bug, so get xx50he or higher
  amd opteron 2352 x2   $628
  kingston kvr667d2d8p5/2g $440
  motherboard Supermicro H8DMU+ supports stepping BA
              Tyan 2915-E and the other -E boards support stepping BA
    TYAN S3992G3NR-E $430
    also avail from
    https://secure.flickerdown.com/index.php?crn=290&rn=497&action=show_detail

 phenom
  phenom 9550                  $175
     do not get 9600.  it has the B2 stepping TLB bug.
  crucial CT2KIT25672AA667 x2 ~$200
  ecs NFORCE6M-A(3.0)           $50
     downside: old, many reports of DOA, realtek ethernet according to a newegg
     comment? (often they uselessly give only the PHY model), no builtin video?!
  ASRock ALiveNF7G or ABIT AN-M2HD  $85
     nforce ethernet, builtin video, relatively new (2007-09) chip.
     downside: slow HT bus?

This is **NOT** very helpful to you because none of it is tested with
OpenSolaris.  There are a few things to consider:

 * can you possibly buy something, and then bury it in the sand for a
   year?  or two years if you want it to work with the stable Solaris build.
   or maybe replace a Linux box with new hardware, and run
   OpenSolaris on the old hardware?

 * look on wikipedia to see the stepping of the AMD chip you're
   looking at.  some steppings of the quad-core chips are
   unfashionable (the B2 parts have the TLB erratum).
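   if you already have a Solaris box with the CPU in it, you can also
   read the stepping off the running system.  a minimal check, assuming
   the cpu_info kstats carry family/model/stepping the way I remember:

      $ kstat -m cpu_info -i 0 | egrep 'family|model|stepping'

   if I read the tables right, family 16 with stepping 2 is the B2
   (TLB-bug) part and stepping 3 is B3.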

 * you may have better hardware support in SXCE, because OpenSolaris
   can only include closed-source drivers which are freely
   redistributable.  It does include a lot of closed drivers, but maybe
   you'll get some more with SXCE, particularly for SATA chips.

   Unfortunately I don't know of one page where you can get a quick view
   of the freedom status of each driver.  I think it is hard even to
   RTFS because some of the drivers are in different ``gates'' than
   the main one, but I'm not sure.  I care about software freedom and
   get burned on this repeatedly.  People have come in here a couple of
   times asking for Marvell source to fix a lockup bug or add hotplug,
   and they cannot get it.  </rant off>

 * the only network cards which work well are the Intel gigabit cards.
   For all the other cards, whether they work at all, and whether they
   have serious performance problems, depends heavily on exactly which
   stepping, revision, and PHY of the chip you get.  but intel cards,
   copper, fiber, new, old, 3.3V, 5V, PCI-e, have a much better shot of
   working than the broadcom 57xx, via, or realtek.  i was planning to
   try an nForce on the cheap desktop board and hope for luck, then put
   an intel card in the slow 33mhz pci slot if it doesn't work.

 * a lot of motherboards on newegg say they have a ``realtek'' gigabit
   chip, but that's just because they're idiots.  It's really an
   nForce gigabit MAC with a realtek PHY.  i don't know if this
   combination works well.
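   one way to find out what the MAC actually is, rather than trusting
   the spec sheet, is to read the PCI IDs off a running box.  a rough
   sketch, assuming prtconf prints the properties the way I remember:

      # prtconf -pv | egrep 'model|vendor-id|device-id'

   vendor-id 10de is an nvidia MAC (nge territory); 10ec would be a
   real realtek MAC (rge).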

 * it sounds like the only SATA card that works well with Solaris is
   the LSI mpt board.  There have been reports of problems and poor
   performance with basically everything else, in particular the
   AMD northbridge (that's why I picked the less-open nVidia chips
   above).  the supermicro marvell card is highly sensitive to chipset?
   or BIOS? revisions.  maybe the Sil3124 is okay, I don't know.  I
   have been buying sil3124 from newegg, though they've silently gone
   through two chip steppings in the last 6 months.  In any case, you
   should plan on plugging your disks into a PCI card, not the
   motherboard, so that you can try a few different cards when the
   first one starts locking up for 2s every 5min, or locking up all
   the ports when a bad disk is attached to one port, or giving really
   slow performance, or some other weird bullshit.
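   the nice thing is that swapping the controller out from under a pool
   is pretty painless, because zfs finds the disks by their on-disk
   labels rather than by device name.  roughly (pool name made up):

      # zpool export tank
        ... power off, move the disks to the new card ...
      # zpool import tank
      # zpool status tank

   if the import can't find the pool, plain `zpool import' with no
   arguments lists whatever pools it can see.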

 * the server boards are nice for solaris because:

   + they can have 3.3V PCI slots, so you can use old cards (which
     have working drivers) on a 64-bit 100MHz bus.  The desktop boards
     will give you a fast interface only in PCIe format, not PCI.

   + they take 4x as much memory as a desktop board (2x as much per
     CPU, and 2 CPUs), though you do have to buy ``registered/buffered''
     memory instead of ``unregistered/unbuffered''.

   + the chipsets demanded by quad-core are older, I think, and maybe 
     more likely to work.  It is even possible to get LSI mpt onboard 
     with some of them, but maybe it is the wrong stepping of mpt or 
     something.

 * the nVidia boards with 6 sata ports have only 4 usable sata ports.
   the other two ports are behind some kind of goofyraid controller.  
   anyway, plan on running your disks off a PCI card, and plan on trying 
   a few PCI cards before you find a good one which is still in production.

 * maybe you should instead get an intel board with onboard intel
   gigabit, more RAM than possible with AMD desktop boards, and a very
   conservative AHCI chip.  I'm not shopping for intel myself, but
   objectively it is probably the better plan. :(

the problem is that the latest quad-core AMD CPUs need an extremely
new motherboard to supply their split power plane, or to work around
the BA/B3 CPU stepping errata, or something, and the new motherboards
you are forced to get (including the ones above) have new chipsets
that probably won't work well.

so, someone else go buy them and let me know before I spend
anything. :)  If it doesn't work, just run Linux and iSCSI Enterprise
Target on it.  (you can get disk encryption that way too)
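
for reference, exporting a whole disk with IET is only a couple of lines
of /etc/ietd.conf, something like this (IQN and device name are made up):

   Target iqn.2008-07.org.example:tower1.disk0
       Lun 0 Path=/dev/sdb,Type=blockio

the Solaris box then just sees one more disk once the initiator
discovers the target.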

this is in fact what I do, sorta.  There is an availability
advantage.  A lot of times when a disk goes bad, it screws up the
controller, the driver, or the whole storage stack.  With iSCSI, only
the tower containing the bad disk becomes unresponsive.  and I have
a 280R mirroring disks distributed across two peecees, so the pool
stays up in spite of this astonishingly low software quality in the
Linux SATA stack!  Then you take the bad tower away from
Solaris, forget about all this fancy FMD stuff, and baby that machine
until it finally admits which drive is the bad one.  The
so-far-unsolvable downside is that iSCSI is extremely slow.  It's
basically unusable during a scrub, a scrub of a few terabytes can
take days, and scrubs need to be done.  Also it's complicated to keep
track of three different dynamically-chosen names for a single disk.
so you should probably try for a direct-attached SATA setup.
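
for completeness, here is roughly what the two-tower mirror looks like
from the Solaris side.  the addresses, target names, and c#t#d# device
names are all made up; the real ones come from iscsiadm and format:

   # iscsiadm add discovery-address 10.0.0.11
   # iscsiadm add discovery-address 10.0.0.12
   # iscsiadm modify discovery --sendtargets enable
   # devfsadm -i iscsi
   # zpool create tank \
       mirror c2t<lun-on-tower1>d0 c3t<lun-on-tower2>d0 \
       mirror c2t<lun-on-tower1>d1 c3t<lun-on-tower2>d1
   # zpool scrub tank      <-- this is the part that takes days over iSCSI

each mirror has one side on each tower, so losing a whole tower only
degrades the mirrors instead of taking the pool down.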

ENJOY!
