Further followup to this thread...

After being beaten sufficiently with a clue-bat, it was determined that 
the nforce 750a could do AHCI mode for its SATA stuff.

I set it to AHCI, redid the devlinks etc., and cranked it up as AHCI.

I'm now regularly peaking at 100MB/s, though spending most of the time 
around 70MB/s.

*much better*
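(For anyone wanting to reproduce the numbers: it's the same quick dd write test as in the original mail quoted below. A sketch, with a placeholder path:)

```shell
# Quick-and-dirty sequential write test, same idea as the dd run in the
# quoted mail: push 1 GiB of zeros through the filesystem and time it.
# /tmp/delete.me is a placeholder; point TARGET at the ZFS pool instead.
TARGET=${TARGET:-/tmp/delete.me}
time dd if=/dev/zero of="$TARGET" bs=65536 count=16384
```

Divide 1024 MB by the elapsed time to get the MB/s figures above.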

The lesson here is: when in AHCI mode in the BIOS, *don't* match that 
PCI ID with the nv_sata driver. It's not what you want.

heh. *blush*.

Once I removed the extra nv_sata entries I had added to the 
driver_aliases in my miniroot, all was good.
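For anyone else who paints themselves into the same corner, the cleanup is just stripping the stray nv_sata lines back out of the miniroot's driver_aliases. A rough sketch - the path and PCI IDs below are invented examples, and it works on a scratch copy so it's safe to dry-run anywhere:

```shell
# Scratch copy so this is safe to dry-run; on the real box the file is
# <miniroot>/etc/driver_aliases. The PCI IDs here are invented examples.
ALIASES=/tmp/driver_aliases
cat > "$ALIASES" <<'EOF'
nv_sata "pci10de,37f"
nv_sata "pci10de,ad4"
ahci "pciclass,010601"
EOF

# Strip the hand-added nv_sata entries, leaving everything else alone
grep -v '^nv_sata ' "$ALIASES" > "$ALIASES.new" && mv "$ALIASES.new" "$ALIASES"
cat "$ALIASES"
```

On a live system, `update_drv -d -i '"pci10de,ad4"' nv_sata` should (if memory serves) do the same removal through the supported interface, followed by a `devfsadm -C` to clean up the stale links.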

On the NGE front, it turns out that Solaris does not seem to like the 
ethernet address of the card. Trying to set its OWN ethernet address 
using ifconfig yielded this:
# ifconfig nge0 ether 63:d0:b:7d:1d:0
ifconfig: dlpi_set_physaddr failed "nge0": DLSAP address in improper 
format or invalid
ifconfig: failed setting mac address on nge0

using

ifconfig nge0 ether 0:e:c:5b:54:45

worked just fine, and the interface now passes traffic and sees 
responses just fine. So, the workaround here is adding
   ether <a working ether address>
to /etc/hostname.nge0.
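i.e. /etc/hostname.nge0 ends up looking something like this (the MAC is the one that worked above; the first token is whatever name/address entry the file carried already - layout from memory, so double-check against ifconfig(1M)):

```shell
# /etc/hostname.nge0 -- contents are handed to ifconfig at boot,
# so this brings nge0 up with a forced (working) MAC address.
myhost ether 0:e:c:5b:54:45
```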

I guess I'll log a bug on that on Monday...

Awesome. Now to work on audio...

heh.

Nathan.

Nathan Kroenert wrote:
> Hey all -
> 
> Just spent quite some time trying to work out why my 2 disk mirrored ZFS 
> pool was running so slow, and found an interesting answer...
> 
> System: new Gigabyte M750sli-DS4, AMD 9550, 4GB memory and 2 X Seagate 
> 500GB SATA-II 32mb cache disks.
> 
> The SATA ports on the nforce 750a SLI chipset don't yet seem to be 
> supported by the nv_sata driver (I'm only running nv_89 at the mo, 
> though I'm not aware of new support going in just yet). I *can* get the 
> driver to attach, but not to see any disks. Interesting, but I digress...
> 
> Anyhoo - I'm stuck in IDE compatibility mode for the moment.
> 
> So - using plain dd to the zfs filesystem on said disk
> 
>       dd if=/dev/zero of=delete.me bs=65536
> 
> I could achieve only about 35-40MB/s write speed, whereas, if I dd to 
> the slice directly, I can get around 90-95MB/s
> 
> I tried using whole disks versus a slice and it made no appreciable 
> difference.
> 
> It turns out that when you are in IDE compatibility mode, having two 
> disks on the same 'controller' (c# in Solaris) behaves just like real 
> IDE... Crap!
> 
> Moving the second disk from c1 to c2 got me back to at least 50MB/s 
> with higher peaks, up to 60-70MB/s.
> 
> Also of note, on the Gigabyte board (and I guess other nforce 750a SLI 
> based chipsets) only 4 of the 6 SATA ports work when in IDE mode.
> 
> Other thoughts on the Nforce 750a:
>   - nge plumbs up OK and can send and 'see' packets, but does not seem 
> to know itself... In promiscuous mode, you can see returning icmp echo 
> requests, but they don't make it to the top of the stack.
>     I had to use an e1000g in a PCI slot to get my networking working 
> properly...
>   - Onboard Video works, including compiz, but you need to create an 
> xorg.conf and update the nvidia driver with the latest from the nvidia 
> website
> 
> Seems snappy enough. With 4 cores @ 2.2GHz (Phenom 9550) it's looking 
> like it'll do what I wanted quite nicely.
> 
> Later...
> 
> Nathan.
> 

-- 
//////////////////////////////////////////////////////////////////
// Nathan Kroenert              [EMAIL PROTECTED]         //
// Systems Engineer             Phone:  +61 3 9869-6255         //
// Sun Microsystems             Fax:    +61 3 9869-6288         //
// Level 7, 476 St. Kilda Road  Mobile: 0419 305 456            //
// Melbourne 3004   Victoria    Australia                       //
//////////////////////////////////////////////////////////////////
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
