Re: [zfs-discuss] ZFS Project Hardware

2008-05-23 Thread Pascal Vandeputte
That 1420SA will not work, period. Type "1420sa solaris" in Google and you'll 
find a thread about the problems I had with it.

I sold it and took the cheap route again with a Silicon Image 3124-based 
adapter, and had more problems, which by now would probably be solved by the 
latest Solaris updates.

Anyway, I finally settled for a motherboard with an Intel ICH9-R and couldn't 
be happier (Intel DG33TL/DG33TLM, 6 SATA ports). No hassles and very speedy.

That Supermicro card someone else is recommending should also work without any 
issues, and it's really cheap for what you get (8 ports). Your maximum 
throughput won't exceed 100MB/s, though, if you can't plug it into a PCI-X slot 
and have to use a regular PCI slot instead.

Greetings,

Pascal
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS raidz write performance: what to expect from SATA drives on ICH9R (AHCI)

2008-04-20 Thread Pascal Vandeputte
Hi,

First of all, my apologies for some of my posts appearing two or even three 
times here; the forum seems to be acting up. Although I received a Java 
exception for those double postings and they never appeared yesterday, they 
apparently still made it through eventually.

Back on topic: I fruitlessly tried to extract higher write speeds from the 
Seagate drives using an Addonics Silicon Image 3124 based SATA controller. I 
got exactly the same 21 MB/s for each drive (booted from a Knoppix cd).

I was planning on contacting Seagate support about this, but in the meantime I 
absolutely had to start using this system, even if it meant low write speeds. 
So I installed Solaris on a 1GB CF card and wanted to start configuring ZFS. I 
noticed that the first SATA disk was still shown with a different label by the 
"format" command (see my other post somewhere here). I tried to get rid of all 
disk labels (unsuccessfully), so I decided to boot Knoppix again and zero out 
the start and end sectors manually (erasing all GPT data).
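For the record, the wiping went roughly like this under Linux (a sketch from memory, not my exact commands). GPT keeps a backup label in the last sectors, which is why both ends of the disk need zeroing. On a real disk this is destructive, so the sketch defaults to a scratch image instead of /dev/sdX:

```shell
# Wipe GPT label data from both ends of a disk.
# DEV would be the real disk (e.g. /dev/sda); it defaults to a scratch
# image here so the sketch is safe to run as-is.
DEV=${DEV:-/tmp/fakedisk.img}
[ -e "$DEV" ] || dd if=/dev/zero of="$DEV" bs=1M count=8 2>/dev/null

# total size in 512-byte sectors (blockdev for a real block device,
# stat for the scratch image)
SECTORS=$(blockdev --getsz "$DEV" 2>/dev/null || echo $(( $(stat -c %s "$DEV") / 512 )))

# zero the first and last 2MiB: protective MBR and primary GPT at the
# start, backup GPT at the end
dd if=/dev/zero of="$DEV" bs=512 count=4096 conv=notrunc 2>/dev/null
dd if=/dev/zero of="$DEV" bs=512 seek=$(( SECTORS - 4096 )) count=4096 conv=notrunc 2>/dev/null
```

After that, zpool create happily puts its own EFI label on the whole disks.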

Back to Solaris. I ran "zpool create tank raidz c1t0d0 c1t1d0 c1t2d0" and tried 
a dd while monitoring with iostat -xn 1 to see the effect of not having a slice 
as part of the zpool (write cache etc). I was seeing write speeds in excess of 
50MB/s per drive! Whoa! I didn't understand this at all, because 5 minutes 
earlier I couldn't get more than 21MB/s in Linux using block sizes up to 
1048576 bytes. How could this be?

I decided to destroy the zpool and try to dd from Linux once more. This is when 
my jaw dropped to the floor:

# dd if=/dev/zero of=/dev/sda bs=4096
250916+0 records in
250915+0 records out
1027747840 bytes (1.0 GB) copied, 10.0172 s, 103 MB/s

Finally, the write speed one should expect from these drives, according to 
various reviews around the web.

I still get a healthy 52MB/s at the end of the disk:

# dd if=/dev/zero of=/dev/sda bs=4096 seek=18300
dd: writing `/dev/sda': No space left on device
143647+0 records in
143646+0 records out
588374016 bytes (588 MB) copied, 11.2223 s, 52.4 MB/s

But how is it possible that I didn't get these speeds earlier? This may be part 
of the explanation:

# dd if=/dev/zero of=/dev/sda bs=2048
101909+0 records in
101909+0 records out
208709632 bytes (209 MB) copied, 9.32228 s, 22.4 MB/s

Could it be that the firmware in these drives has issues with write requests of 
2048 bytes and smaller?

There must be more to it though, because I'm absolutely sure that I used larger 
block sizes when testing with Linux earlier (like 16384, 65536 and 1048576). 
It's impossible to tell, but maybe there was something fishy going on that was 
fixed by zeroing parts of the drives. I absolutely cannot explain it otherwise.
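To pin down the block-size effect, a sweep like this could be run against each drive (a sketch: it writes to a scratch file here; on the real system you'd point it at the raw device instead, and GNU dd prints the rate on its final status line):

```shell
# Compare sequential write throughput at several dd block sizes.
# TARGET is a scratch file; on the real system it would be the raw
# device (e.g. /dev/sda), which is destructive, so adjust with care.
TARGET=/tmp/ddtest.bin
for bs in 2048 4096 65536 1048576; do
    printf 'bs=%s: ' "$bs"
    # write 16MiB at this block size; GNU dd reports the MB/s on stderr
    dd if=/dev/zero of="$TARGET" bs="$bs" count=$(( 16777216 / bs )) 2>&1 | tail -1
done
rm -f "$TARGET"
```

If the 2048-byte runs come out consistently slower than the rest, that would point at the firmware theory above.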

Anyway, I'm still not seeing much more than 50MB/s per drive from ZFS, but I 
suspect the 2048-vs-4096-byte write block size effect may be influencing this. 
Having had a slice as part of the pool earlier perhaps magnified this behavior 
as well. Caching or swap problems are certainly not an issue now.

Any thoughts? I certainly want to thank everyone once more for your 
co-operation!

Greetings,

Pascal
 
 


Re: [zfs-discuss] ZFS raidz write performance: what to expect from SATA drives on ICH9R

2008-04-19 Thread Pascal Vandeputte
Thanks a lot for your input, I understand those numbers a lot better now! I'll 
look deeper into hardware issues. It's a pity that I can't get older BIOS 
versions flashed. But I've got some other hardware lying around.

Someone suggested lowering the 35-iops default, but I can't find any 
information anywhere on how to accomplish this (not with Google, and not in the 
ZFS Admin Guide either).
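For what it's worth, my guess (and it is only a guess) is that the "35" refers to the zfs_vdev_max_pending tunable, the per-vdev I/O queue depth. If that's right, it should be adjustable on a live system with mdb, or persistently via /etc/system, something like:

```
# on the fly (lost at reboot):
echo zfs_vdev_max_pending/W0t10 | mdb -kw

# persistent: add this line to /etc/system and reboot
set zfs:zfs_vdev_max_pending=10
```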

Greetings,

Pascal
 
 


Re: [zfs-discuss] ZFS raidz write performance: what to expect from SATA drives on ICH9R

2008-04-19 Thread Pascal Vandeputte
Great, superb write speeds with a similar setup, my motivation is growing again 
;-)

It just occurred to me that I have a spare Silicon Image 3124 SATA card lying 
around. I had been postponing testing these drives on my desktop because it has 
an Intel ICH9 SATA controller, probably quite similar to the ICH9R (the RAID 
variant) in my Solaris box, but that 3124 may give completely different results 
with the Seagates. Test coming up.

(the forum seems to be having technical difficulties, I hope my replies end up 
in the right places...)
 
 


Re: [zfs-discuss] ZFS raidz write performance: what to expect from SATA drives on ICH9R

2008-04-19 Thread Pascal Vandeputte
(the lt and gt symbols are filtered by the forum I guess; replaced with minus 
signs now)

# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
   0. c1t0d0 -DEFAULT cyl 45597 alt 2 hd 255 sec 126-
  /[EMAIL PROTECTED],0/pci8086,[EMAIL PROTECTED],2/[EMAIL PROTECTED],0
   1. c1t1d0 -ATA-ST3750330AS-SD15-698.64GB-
  /[EMAIL PROTECTED],0/pci8086,[EMAIL PROTECTED],2/[EMAIL PROTECTED],0
   2. c1t2d0 -ATA-ST3750330AS-SD15-698.64GB-
  /[EMAIL PROTECTED],0/pci8086,[EMAIL PROTECTED],2/[EMAIL PROTECTED],0
 
 


Re: [zfs-discuss] ZFS raidz write performance: what to expect from SATA drives on ICH9R

2008-04-19 Thread Pascal Vandeputte
Thanks, I'll try installing Solaris on a 1GB CF card in a CF-to-IDE adapter, 
so all disks will then be completely available to ZFS. Then I needn't worry 
about differently sized block devices either.

I also find it weird that the boot disk is displayed differently from the 
other two disks when I run the "format" command... (this could be normal 
though; as I said before, I'm new to Solaris)


# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
   0. c1t0d0 
  /[EMAIL PROTECTED],0/pci8086,[EMAIL PROTECTED],2/[EMAIL PROTECTED],0
   1. c1t1d0 
  /[EMAIL PROTECTED],0/pci8086,[EMAIL PROTECTED],2/[EMAIL PROTECTED],0
   2. c1t2d0 
  /[EMAIL PROTECTED],0/pci8086,[EMAIL PROTECTED],2/[EMAIL PROTECTED],0
 
 


Re: [zfs-discuss] ZFS raidz write performance: what to expect from SATA drives on ICH9R

2008-04-19 Thread Pascal Vandeputte
I see. I'll only be running a minimal Solaris install with ZFS and Samba on 
this machine, so I wouldn't expect immediate memory issues with 2 gigabytes of 
RAM. On the other hand, I've read that ZFS is a real memory hog, so I'll be 
careful.

I've tested swap on a ZFS volume now; it's really easy, so I'll try running 
without swap for some quick performance testing and use swap on ZFS after that. 
This also takes away my fears about using a swap slice on the CompactFlash card 
I'll be booting from.
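For anyone finding this in the archives, the whole setup boiled down to the following (the pool name and size are of course whatever suits your system):

```
# create a 2GB zvol and add it as swap
zfs create -V 2g tank/swap
swap -a /dev/zvol/dsk/tank/swap
swap -l    # verify

# to make it permanent, add a line like this to /etc/vfstab:
# /dev/zvol/dsk/tank/swap  -  -  swap  -  no  -
```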

Thanks!
 
 


Re: [zfs-discuss] ZFS raidz write performance: what to expect from SATA drives on ICH9R

2008-04-18 Thread Pascal Vandeputte
Hi,

Thanks for your input. Unfortunately, all 3 drives are identical Seagate 
7200.11 drives which I bought separately and they are attached in no particular 
order.

Thanks for the /dev/zero remark, I didn't know that.

From what I've seen this afternoon, I'm starting to suspect a 
hardware/firmware issue as well. Using Linux I cannot extract more than 24.5 
MB/s of sequential write performance from a single drive (writing directly to 
/dev/sdX, no filesystem overhead).

I tried flashing an older BIOS version, but that firmware update process fails 
somehow; reflashing the newest BIOS still works, however. It's a pity that I 
didn't benchmark before updating the BIOS & RAID firmware package. Maybe then I 
would have gotten decent Windows performance as well. It could even be an issue 
with the Seagate disks, as there have been problems with the SD04 and SD14 
firmware versions (they reported 0MB of cache to the system). Mine are SD15 and 
should be fine, though.

I'm at a loss, I'm thinking about just settling for the 20MB/s write speeds 
with a 3-drive raidz and enjoy life...

Which leaves me with my other previously asked questions:
 - does Solaris require swap space on disk?
 - does Solaris run from a CompactFlash card?
 - can ZFS handle raidz or mirror pools built from block devices of slightly 
different sizes, or am I risking data loss?

Thanks,

Pascal
 
 


Re: [zfs-discuss] ZFS raidz write performance: what to expect from SATA drives on ICH9R

2008-04-18 Thread Pascal Vandeputte
Thanks for all the replies!

Some output from "iostat -x 1" while doing a dd of /dev/zero to a file on a 
raidz of c1t0d0s3, c1t1d0 and c1t2d0 using bs=1048576:

                    extended device statistics
device   r/s    w/s   kr/s     kw/s  wait  actv  svc_t  %w  %b
sd0      0.0  104.0    0.0  13312.0   4.0  32.0  346.0 100 100
sd1      0.0  104.0    0.0  13312.0   3.0  32.0  336.4 100 100
sd2      0.0  104.0    0.0  13312.0   3.0  32.0  336.4 100 100
                    extended device statistics
device   r/s    w/s   kr/s     kw/s  wait  actv  svc_t  %w  %b
sd0      0.0  104.0    0.0  13311.5   4.0  32.0  346.0 100 100
sd1      0.0  106.0    0.0  13567.5   3.0  32.0  330.1 100 100
sd2      0.0  106.0    0.0  13567.5   3.0  32.0  330.1 100 100
                    extended device statistics
device   r/s    w/s   kr/s     kw/s  wait  actv  svc_t  %w  %b
sd0      0.0  135.0    0.0  12619.3   2.6  25.9  211.3  66 100
sd1      0.0  107.0    0.0   8714.6   1.1  16.3  163.3  38  66
sd2      0.0  101.0    0.0   8077.0   1.0  14.5  153.5  32  61
                    extended device statistics
device   r/s    w/s   kr/s     kw/s  wait  actv  svc_t  %w  %b
sd0      1.0   13.0    8.0     14.5   1.7   0.2  139.9  29  22
sd1      0.0    6.0    0.0      4.0   0.0   0.0    0.9   0   0
sd2      0.0    6.0    0.0      4.0   0.0   0.0    0.9   0   0
                    extended device statistics
device   r/s    w/s   kr/s     kw/s  wait  actv  svc_t  %w  %b
sd0      0.0   77.0    0.0   9537.9  19.7   0.6  264.5  63  63
sd1      0.0  122.0    0.0  13833.2   1.7  19.6  174.5  58  63
sd2      0.0  136.0    0.0  15497.6   1.7  19.6  156.8  59  63
                    extended device statistics
device   r/s    w/s   kr/s     kw/s  wait  actv  svc_t  %w  %b
sd0      0.0  106.0    0.0  13567.8  34.0   1.0  330.1 100 100
sd1      0.0  103.0    0.0  13183.8   3.0  32.0  339.7 100 100
sd2      0.0   97.0    0.0  12415.8   3.0  32.0  360.7 100 100
                    extended device statistics
device   r/s    w/s   kr/s     kw/s  wait  actv  svc_t  %w  %b
sd0      0.0  104.0    0.0  13311.7  34.0   1.0  336.4 100 100
sd1      0.0   83.0    0.0  10623.8   3.0  32.0  421.6 100 100
sd2      0.0   76.0    0.0   9727.8   3.0  32.0  460.4 100 100
                    extended device statistics
device   r/s    w/s   kr/s     kw/s  wait  actv  svc_t  %w  %b
sd0      0.0  104.0    0.0  13312.7  34.0   1.0  336.4 100 100
sd1      0.0  104.0    0.0  13312.7   3.0  32.0  336.4 100 100
sd2      0.0  105.0    0.0  13440.7   3.0  32.0  333.2 100 100
                    extended device statistics
device   r/s    w/s   kr/s     kw/s  wait  actv  svc_t  %w  %b
sd0      0.0  104.0    0.0  13311.9  34.0   1.0  336.4 100 100
sd1      0.0  106.0    0.0  13567.9   3.0  32.0  330.1 100 100
sd2      0.0  105.0    0.0  13439.9   3.0  32.0  333.2 100 100
                    extended device statistics
device   r/s    w/s   kr/s     kw/s  wait  actv  svc_t  %w  %b
sd0      0.0  106.0    0.0  13567.6  34.0   1.0  330.1 100 100
sd1      0.0  106.0    0.0  13567.6   3.0  32.0  330.1 100 100
sd2      0.0  104.0    0.0  13311.6   3.0  32.0  336.4 100 100
                    extended device statistics
device   r/s    w/s   kr/s     kw/s  wait  actv  svc_t  %w  %b
sd0      0.0  120.0    0.0  14086.7  17.0  18.0  291.6 100 100
sd1      0.0  104.0    0.0  13311.7   7.8  27.1  336.4 100 100
sd2      0.0  107.0    0.0  13695.7   7.3  27.7  327.0 100 100
                    extended device statistics
device   r/s    w/s   kr/s     kw/s  wait  actv  svc_t  %w  %b
sd0      0.0  103.0    0.0  13185.0   3.0  32.0  339.7 100 100
sd1      0.0  104.0    0.0  13313.0   3.0  32.0  336.4 100 100
sd2      0.0  104.0    0.0  13313.0   3.0  32.0  336.4 100 100
                    extended device statistics
device   r/s    w/s   kr/s     kw/s  wait  actv  svc_t  %w  %b
sd0      0.0  115.0    0.0  12824.4   3.0  32.0  304.3 100 100
sd1      0.0  131.0    0.0  14360.3   3.0  32.0  267.1 100 100
sd2      0.0  125.0    0.0  14104.8   3.0  32.0  279.9 100 100
                    extended device statistics
device   r/s    w/s   kr/s     kw/s  wait  actv  svc_t  %w  %b
sd0      0.0   99.0    0.0  12672.9   3.0  32.0  353.4 100 100
sd1      0.0   82.0    0.0  10496.8   3.0  32.0  426.7 100 100
sd2      0.0   95.0    0.0  12160.9   3.0  32.0  368.3 100 100
                    extended device statistics
device   r/s    w/s   kr/s     kw/s  wait  actv  svc_t  %w  %b
sd0      0.0  104.0    0.0  13311.7   3.0  32.0  336.4 100 100
sd1      0.0  103.0    0.0  13183.7   3.0  32.0  339.7 100 100
sd2      0.0  105.0    0.0  13439.7   3.0  32.0  333.2 100 100


Similar output when running "iostat -xn 1":

                    extended device statistics
   r/s    w/s   kr/s     kw/s  wait  actv wsvc_t asvc_t  %w  %b device
   0.0  103.0    0.0  13184.3   4.0  32.0   38.7  3

[zfs-discuss] ZFS raidz write performance: what to expect from SATA drives on ICH9R (AHCI)

2008-04-17 Thread Pascal Vandeputte
Hi everyone,

I've bought some new hardware a couple of weeks ago to replace my home 
fileserver:
 Intel DG33TL motherboard with Intel gigabit NIC and ICH9R
 Intel Pentium Dual E2160 (= 1.8GHz Core 2 Duo 64-bit architecture with less 
cache; cheap, cool, and more than fast enough)
 2 x 1 GB DDR2 RAM
 3 x Seagate 7200.11 750GB SATA drives

Originally I was going to keep running Windows 2003 for a month (to finish 
migrating some data files to an open-source friendly format) and then move to 
Solaris, but because the Intel Matrix RAID 5 write speeds were abysmally low no 
matter which stripe sizes/NTFS allocation unit size I selected, I've already 
thrown out W2K3 completely in favor of Solaris 10 u5.

I have updated the motherboard with the latest Intel BIOS (0413 3/6/2008). I 
have loaded "optimal defaults" and have put the SATA drives in AHCI mode.

At the moment I'm seeing read speeds of 200MB/s on a ZFS raidz filesystem 
consisting of c1t0d0s3, c1t1d0 and c1t2d0 (I'm booting from a small 700MB slice 
on the first SATA drive; c1t0d0s3 is about 690 "real" gigabytes, and ZFS just 
uses the same number of sectors on the other disks and leaves the rest 
untouched). As a single drive should top out at about 104MB/s for sequential 
access on the outer tracks, I'm very pleased with that.

But the write speeds I'm getting are still far below my expectations: about 
20MB/s (versus 14MB/s in Windows 2003 with Intel RAID driver). I was hoping for 
at least 100MB/s, maybe even more.

I'm doing simple dd read and write tests (with /dev/zero, /dev/null, etc.) 
using block sizes like 16384 and 65536.
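Concretely, the tests are of this shape (a sketch; the target here is a placeholder file, on the real box it lives on the raidz pool):

```shell
# sequential write, then read back; GNU dd prints the elapsed time and
# rate on its final status line.
F=/tmp/zfs_bench.bin
dd if=/dev/zero of="$F" bs=65536 count=1024 2>&1 | tail -1   # write 64MiB
dd if="$F" of=/dev/null bs=65536 2>&1 | tail -1              # read it back
```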

Shouldn't the write speed be substantially higher? If I monitor with "vmstat 
1", I see that CPU usage never exceeds 3% during writes (!) and 10% during 
reads.

I'm a Solaris newbie (but with the intention of learning a whole lot), so I may 
have overlooked something. I also don't really know where to start looking for 
bottlenecks.

Thanks!
 
 