Re: [zfs-discuss] Upgrading my ZFS server

2008-12-08 Thread Joe S
I did not use the Marvell NIC.

I use an Intel gigabit PCI NIC (e1000g0).
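
If you want to confirm which driver claimed a card, a quick check looks
something like this (illustrative; output varies by system and build):

# dladm show-dev                 (link state and speed per interface)
# prtconf -D | grep e1000g       (confirms which PCI devices e1000g attached to)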



On Sun, Dec 7, 2008 at 2:03 PM, SV <[EMAIL PROTECTED]> wrote:
> js.lists, or anyone else who is using an XFX MDA72P7509 motherboard ---
>
> The onboard NIC is a Marvell? Did you choose not to use it in favor of the
> Intel PCI NIC?
> Marvell provides Solaris 10 x86/x64 drivers on their website, and I was hoping
> the Marvell would work in OpenSolaris, because 97% of the AMD motherboards I
> researched have a Realtek NIC, which I don't want.
>
> XFX's website is one of those "register your serial number to get access"
> sites. I hate manufacturers that don't let you research the details before
> you buy!


Re: [zfs-discuss] Upgrading my ZFS server

2008-12-07 Thread SV
js.lists, or anyone else who is using an XFX MDA72P7509 motherboard ---

The onboard NIC is a Marvell? Did you choose not to use it in favor of the
Intel PCI NIC?
Marvell provides Solaris 10 x86/x64 drivers on their website, and I was hoping
the Marvell would work in OpenSolaris, because 97% of the AMD motherboards I
researched have a Realtek NIC, which I don't want.
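
As an aside: if the Marvell driver ships as a normal SVR4 package, I'd expect
the install to look roughly like this (the package and driver names here are
made up for illustration -- check the vendor's README, and verify the NIC's
actual PCI ID with prtconf -pv first):

# pkgadd -d MRVLyukon.pkg            (hypothetical vendor package name)
# add_drv -i '"pci11ab,4364"' yge    (bind the driver to the NIC's PCI ID;
                                      driver name is hypothetical too)
# dladm show-dev                     (verify the new interface appears)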

XFX's website is one of those "register your serial number to get access"
sites. I hate manufacturers that don't let you research the details before you buy!


Re: [zfs-discuss] Upgrading my ZFS server

2008-08-29 Thread Joe S
Just an update to this thread with my results. To summarize, I have no
problems with the nVidia 750a chipset. It's simply a newer version of
the 5xx-series chipsets that have reportedly worked well. Also, at
idle, this system uses 133 watts:

CPU - AMD Athlon X2 4850e

Motherboard - XFX MD-A72P-7509
  * nVidia nForce 750a SLI chipset
  * 6x SATA, 1x eSATA
  * 2x PCIe 2.0 x16 slots
  * nVidia GeForce 8 series integrated video
  * 1x Marvell gigabit ethernet (disabled in BIOS)

2x Kingston 2GB (2 x 1GB) 240-Pin DDR2 SDRAM ECC Unbuffered DDR2 800
(PC2 6400) Dual Channel Kit Server Memory
2x Intel PRO/1000 GT Desktop Adapter (82541PI)
6x Maxtor 6L300S0 drives (SATA)
1x 80GB IDE drive (OS)


# uname -a
SunOS  5.11 snv_96 i86pc i386 i86pc


# psrinfo -pv
The physical processor has 2 virtual processors (0 1)
  x86 (AuthenticAMD 60FB2 family 15 model 107 step 2 clock 2500 MHz)
        AMD Athlon(tm) Dual Core Processor 4850e


# isainfo -bv
64-bit amd64 applications
tscp ahf cx16 sse3 sse2 sse fxsr amd_3dnowx amd_3dnow amd_mmx mmx cmov
amd_sysc cx8 tsc fpu


# prtconf -D
System Configuration:  Sun Microsystems  i86pc
Memory size: 3840 Megabytes
System Peripherals (Software Nodes):

i86pc (driver name: rootnex)
    scsi_vhci, instance #0 (driver name: scsi_vhci)
    isa, instance #0 (driver name: isa)
        asy, instance #0 (driver name: asy)
        motherboard
        pit_beep, instance #0 (driver name: pit_beep)
    pci, instance #0 (driver name: npe)
        pci10de,cb84
        pci10de,cb84
        pci10de,cb84
        pci10de,cb84
        pci10de,cb84
        pci10de,cb84
        pci10de,cb84, instance #0 (driver name: ohci)
        pci10de,cb84, instance #0 (driver name: ehci)
        pci10de,cb84, instance #1 (driver name: ohci)
        pci10de,cb84, instance #1 (driver name: ehci)
        pci-ide, instance #0 (driver name: pci-ide)
            ide, instance #0 (driver name: ata)
                cmdk, instance #0 (driver name: cmdk)
            ide (driver name: ata)
        pci10de,75a, instance #0 (driver name: pci_pci)
            pci8086,1376, instance #0 (driver name: e1000g)
            pci8086,1376, instance #1 (driver name: e1000g)
        pci10de,cb84, instance #0 (driver name: ahci)
            disk, instance #1 (driver name: sd)
            disk, instance #2 (driver name: sd)
            disk, instance #3 (driver name: sd)
            disk, instance #4 (driver name: sd)
            disk, instance #5 (driver name: sd)
            disk, instance #6 (driver name: sd)
        pci10de,569, instance #1 (driver name: pci_pci)
            display, instance #0 (driver name: vgatext)
        pci10de,778 (driver name: pcie_pci)
        pci10de,75b (driver name: pcie_pci)
        pci10de,77a (driver name: pcie_pci)
        pci1022,1100, instance #0 (driver name: mc-amd)
        pci1022,1101, instance #1 (driver name: mc-amd)
        pci1022,1102, instance #2 (driver name: mc-amd)
        pci1022,1103, instance #0 (driver name: amd64_gart)
    pci, instance #0 (driver name: pci)
    iscsi, instance #0 (driver name: iscsi)
    pseudo, instance #0 (driver name: pseudo)
    options, instance #0 (driver name: options)
    agpgart, instance #0 (driver name: agpgart)
    xsvc, instance #0 (driver name: xsvc)
    used-resources
    cpus, instance #0 (driver name: cpunex)
        cpu (driver name: cpudrv)
        cpu (driver name: cpudrv)


# prtdiag
System Configuration: To Be Filled By O.E.M. To Be Filled By O.E.M.
BIOS Configuration: American Megatrends Inc. 080015  05/30/2008

==== Processor Sockets ====================================

Version                                  Location Tag
---------------------------------------- ------------
AMD Athlon(tm) Dual Core Processor 4850e CPU 1

==== Memory Device Sockets ================================

Type    Status Set Device Locator Bank Locator
------- ------ --- -------------- ------------
DDR2    in use 0   DIMM0          BANK0
DDR2    in use 0   DIMM1          BANK1
DDR2    in use 0   DIMM2          BANK2
DDR2    in use 0   DIMM3          BANK3

==== On-Board Devices =====================================
  To Be Filled By O.E.M.

==== Upgradeable Slots ====================================

ID  Status    Type     Description
--- --------- -------- -----------
0   in use    AGP 4X   AGP
1   in use    PCI      PCI1

# cfgadm
Ap_Id                          Type         Receptacle   Occupant     Condition
sata0/0::dsk/c2t0d0            disk         connected    configured   ok
sata0/1::dsk/c2t1d0            disk         connected    configured   ok
sata0/2::dsk/c2t2d0            disk         connected    configured   ok
sata0/3::dsk/c2t3d0            disk         connected    configured   ok
sata0/4::dsk/c2t4d0            disk         connected    configured   ok
sata0/5::dsk/c2t5d0            disk         connected    configured   ok
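
With all six disks configured, pool creation is the easy part. A sketch of
what I plan to run (raidz2 is just one reasonable layout for six disks, and
"tank" is a placeholder pool name):

# zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
# zpool status tank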





Re: [zfs-discuss] Upgrading my ZFS server

2008-08-23 Thread Joe S
While I wanted the Intel Core 2 Duo configuration, it was too much
money, even when I substituted a low-power Wolfdale, the E7200, for
the processor. The Intel option cost $126 more than the AMD/GeForce
option; I was up to $457.88 with shipping and tax.

Also, in some power benchmarks, the AMD 4850e used a lot less
power than the most efficient 45nm Intel Core 2 processor, the E7200.
Of course, from a performance point of view, the E7200 blows the AMD
out of the water, but my needs are for a low-power, 64-bit file
server. The AMD 4850e and nForce 750a chipset meet those needs. I'm
staying away from any AMD chipsets: there are too many bugs, and I
read that the brand-new AMD 750 chipset has the SAME AHCI issues as
the SB600, which AMD can't seem to fix. One other factor that helped
my decision was that Sun sells both X38- and nForce-based chipsets in
its new workstations. If they work for Sun...
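
On the frequency-scaling point: OpenSolaris drives CPU power management
from /etc/power.conf. A minimal sketch of what I expect to need (untested
on this board; assumes the CPU's P-states are exposed to the OS). Add to
/etc/power.conf:

cpupm enable
cpu-threshold 1s

Then apply the new settings and watch the clock drop at idle:

# pmconfig
# kstat -m cpu_info -s current_clock_Hz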


I ended up choosing this configuration:

> CPU - AMD Athlon X2 4850e 2.5GHz 2 x 512KB L2 Cache Socket AM2 45W
> Dual-Core ($77.00)
> Motherboard - XFX MDA72P7509 ($134.99)
>  * nVidia nForce 750a SLI chipset
>  * 6x SATA, 1x eSATA
>  * 2x PCIe 2.0 x16
>  * nVidia GeForce 8 series integrated video
>  * 1x Marvell gigabit ethernet

At Newegg, with tax, shipping, and 4GB of ECC RAM, this cost me $341.71.

This solution allows me to add a dual port gigabit ethernet card
(PCIe) and the LSI SATA card (PCIe) later on.
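
When the dual-port NIC goes in later, bringing up the extra ports should be
the usual routine (interface name below assumes the card binds to e1000g):

# ifconfig e1000g1 plumb
# ifconfig e1000g1 dhcp start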

Thanks for the replies.

I will follow up once I get the hardware.


Re: [zfs-discuss] Upgrading my ZFS server

2008-08-22 Thread Brandon High
On Fri, Aug 22, 2008 at 1:30 PM, Joe S <[EMAIL PROTECTED]> wrote:
> I picked the 790GX of the 790 series because it has integrated video.

The 790GX is a high-clocked 780G, so look at that chipset as well. The
boards are slightly cheaper, too. If you're not overclocking or
running Crossfire, there's no reason to use the GX.

> Also, I avoided the SB600 southbridge as I have read there are SATA
> DMA issues with it. The nForce and AMD chipsets are very new but

The SB700 has a few issues with AHCI as well from what I've heard,
though the specifics are eluding me at the moment. The SB750 is too
new to know but may be an improvement. I'd recommend an LSI 1068e
based HBA like the Supermicro AOC-USAS-L8i.

You may want to put an Intel NIC into the AMD system, since support
with other ethernet solutions seems spotty at best.
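
A cheap way to sanity-check support before buying any card is to look up its
PCI ID in the driver alias table on an existing install (the path is standard;
the grep patterns are just examples):

# grep e1000g /etc/driver_aliases    (PCI IDs the Intel driver claims)
# prtconf -pv | grep 8086            (Intel devices actually present)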

-B

-- 
Brandon High [EMAIL PROTECTED]
"You can't blow things up with schools and hospitals." -Stephen Dailey


[zfs-discuss] Upgrading my ZFS server

2008-08-22 Thread Joe S
I want to upgrade the hardware of my OpenSolaris b95 server at
home. It's currently running on 32-bit Intel hardware; I'm going
64-bit with the new hardware. I don't need server-grade hardware since
this is a home server, which means I'm not buying an Opteron, a Xeon,
or any quad-core processor. Those were too expensive for my budget.
Desktop-grade hardware will be fine. I want something that is energy
efficient, not expensive, and with room for expansion. I want multiple
PCIe (PCI Express) slots so that I can add an LSI MPT-based SATA HBA
(requires x4 or x8) and an Intel dual gigabit NIC (requires x4) in the
future. I've got my hardware selection down to a few choices and was
hoping someone could comment on their experiences with any of this
hardware. I've determined that in my situation, the biggest
determining factor is how well the chipset is supported in
OpenSolaris.

If I choose to go Intel, I like the Intel X38 chipset because it is
the only current Intel chipset that supports ECC memory. This list
contains a number of threads encouraging the use of ECC memory, so I'd
like to use it. All of the chipsets I list here support ECC memory.
Also, I've read that OpenSolaris has better support for CPU frequency
scaling on Intel processors than on AMD ( < family 16 ) processors.
Anyway, here is the Intel configuration I came up with:

CPU - Intel Core 2 Duo E8400 Wolfdale 3.0GHz LGA 775 65W Dual-Core ($169.99)
Motherboard - Intel DX38BT ($209.99)
 * Intel X38 chipset
 * 6x SATA, 2x eSATA
 * 2x PCIe 2.0 x16 slots
 * 1x PCIe 1.0a x16 (electrical x4)
 * 1x Intel 82566DC gigabit ethernet

From what I read, the X38 is not very different from the 925 chipset,
which is supported well in OpenSolaris. Also, I chose the Intel
chipset over the nVidia nForce chipsets because I guessed that Intel
CPU + Intel chipset would be a safe bet.

For the AMD configuration, I had trouble picking a chipset. nForce or
AMD? I don't know. Picking the CPU was easy. Here is the AMD + nForce
configuration:

CPU - AMD Athlon X2 4850e 2.5GHz 2 x 512KB L2 Cache Socket AM2 45W
Dual-Core ($77.00)
Motherboard - XFX MDA72P7509 ($134.99)
 * nVidia nForce 750a SLI chipset
 * 6x SATA, 1x eSATA
 * 2x PCIe 2.0 x16
 * nVidia GeForce 8 series integrated video
 * 1x Marvell gigabit ethernet

Here is the AMD + AMD 790GX configuration:

CPU - AMD Athlon X2 4850e 2.5GHz 2 x 512KB L2 Cache Socket AM2 45W
Dual-Core ($77.00)
Motherboard - ASUS M3A78-T ($149.99)
 * AMD 790GX / SB750 chipset
 * 5x SATA, 1x eSATA (not optimal, since I need 6, but if this is the better
chipset, I'll get my SATA HBA sooner)
 * 3 x PCIe 2.0 x16
 * ATI Radeon HD 3300 GPU
 * 1x Marvell 88E8056 gigabit ethernet

I picked the 790GX out of the 790 series because it has integrated
video. Also, I avoided the SB600 southbridge, as I have read there are
SATA DMA issues with it. The nForce and AMD chipsets are very new but
aren't too different from their predecessors. I don't mind going AMD,
because the CPU is rated at 45W, so I won't completely miss the
benefits of the CPU frequency scaling that the Intel supports. Right
now, I'm leaning towards the AMD + nForce 750a configuration. But the
Intel option isn't bad either: Sun sells the Intel X38 in the Ultra 24
workstation. The AMD 790 option is there because I'm not sure which
chipset to choose for AMD processors.

So, have any of you used any of these chipsets with Open Solaris? Any
success or failure stories?
Are there any reasons I should steer far away from any of the above chipsets?
Any solid reason I should pick one CPU over the other?

Thanks in advance!