I just tried ZFS on one of our slaves and ran into some really bad performance.

When I started the server yesterday, it was able to keep up with the main server 
without problems, but after two days of continuous running the server is being 
crushed by IO.

After running the DTrace script iopattern, I noticed that the workload is now 
100% random IO. Copying the database (140 GB) from one directory to another 
took more than 4 hours with no other tasks running on the server, and all 
reads on tables that had been updated were random... Keeping an eye on 
iopattern and zpool iostat, I saw that when the system accessed files that 
had not been changed, the disks read sequentially at more than 50 MB/s, but 
when reading files that change often, the speed dropped to 2-3 MB/s.
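For reference, this is roughly the kind of monitoring I was running (the pool 
name "rpool" and the DTraceToolkit install path are assumptions; adjust to your 
setup):

```shell
# Per-vdev bandwidth and IOPS, refreshed every 5 seconds
# (pool name "rpool" assumed)
zpool iostat -v rpool 5

# iopattern from the DTraceToolkit reports the %random vs %sequential
# split of the disk workload (install path /opt/DTT assumed)
/opt/DTT/iopattern 5
```

Watching both side by side makes the pattern obvious: sequential bursts on 
untouched files, near-100% random IO on the frequently updated ones.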

The server has plenty of free disk space, so it should not be showing this 
level of file fragmentation in such a short time.

For information, I'm using Solaris 10/08 with a mirrored root pool on two 1 TB 
SATA hard disks (slow with random IO). I'm using MySQL 5.0.67 with the MyISAM 
engine. The ZFS recordsize is 8k, as recommended in the ZFS guide.
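In case it matters, this is how the recordsize was applied (the dataset name 
"rpool/mysql" is an assumption). One thing worth noting: recordsize only 
affects blocks written after it is set, so it has to be in place before the 
data is loaded:

```shell
# Verify the current recordsize on the dataset holding the MySQL data
# (dataset name "rpool/mysql" assumed)
zfs get recordsize rpool/mysql

# Set an 8k recordsize; this applies only to newly written blocks,
# so set it before copying the database files in
zfs set recordsize=8k rpool/mysql
```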
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
