Re: [zfs-discuss] Lots of small files vs fewer big files

2009-07-07 Thread Miles Nordin
> "dt" == Don Turnbull  writes:

dt> Any idea why this is?

maybe prefetch?

WAG, though.
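
One rough way to test that guess (a sketch only, assuming Solaris 10;
zfs_prefetch_disable is the file-level prefetch switch, and you'll want
to set it back to 0 when you're done):

    # /etc/system -- disable ZFS file-level prefetch, then reboot
    set zfs:zfs_prefetch_disable = 1

    # or flip it on a running system with mdb (takes effect immediately,
    # not persistent across a reboot)
    echo "zfs_prefetch_disable/W 1" | mdb -kw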

dt> I work with Greenplum which is essentially a number of
dt> Postgres database instances clustered together.

haha, yeah I know who you are.  Too bad the open source postgres can't
do that. :/

AFFERO.




Re: [zfs-discuss] Lots of small files vs fewer big files

2009-07-07 Thread Don Turnbull

Thanks for the suggestion!

We've fiddled with this in the past.  Our app uses 32k blocks rather than 
8k, and it's data warehousing, so the I/O pattern is mostly long 
sequential reads.  Changing the ZFS recordsize has had very little 
effect for us.  I'll have to look at fsync; we hadn't considered that.  
Compression is a killer; it costs us up to 50% of our performance, 
sadly.  CPU is not always a problem for us, but it can be, depending on 
the query workload and the servers involved.
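
(For anyone wanting to repeat the comparison, a rough sketch of the knobs 
involved; tank/gpdata is a made-up dataset name, not our real layout:)

    # match recordsize to the app's 32k blocks
    # (only affects files written after the change)
    zfs set recordsize=32k tank/gpdata

    # measure what compression actually buys before paying for it
    zfs set compression=on tank/gpdata
    zfs get compressratio tank/gpdata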


Bryan Allen wrote:

> Have you set the recordsize for the filesystem to the block size Postgres is
> using (8K)?  Note that this has to be done before any files are created.
>
> Other thoughts: disable Postgres's fsync, and enable filesystem compression if
> disk I/O is your bottleneck rather than CPU.  I do this with MySQL and it has
> proven useful.  My rule of thumb there is 60% for the InnoDB cache, 40% for
> the ZFS ARC, but YMMV.
>
> http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide



Re: [zfs-discuss] Lots of small files vs fewer big files

2009-07-07 Thread Bryan Allen
Have you set the recordsize for the filesystem to the block size Postgres is
using (8K)?  Note that this has to be done before any files are created.
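
A minimal sketch, assuming a placeholder dataset name of tank/pgdata:

    # recordsize only applies to files created after it is set,
    # so do this before loading any data
    zfs create tank/pgdata
    zfs set recordsize=8k tank/pgdata
    zfs get recordsize tank/pgdata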

Other thoughts: disable Postgres's fsync, and enable filesystem compression if
disk I/O is your bottleneck rather than CPU.  I do this with MySQL and it has
proven useful.  My rule of thumb there is 60% for the InnoDB cache, 40% for the
ZFS ARC, but YMMV.
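
Roughly like this (again a sketch; the dataset name and the 4GB ARC cap are
illustrative, not recommendations):

    # postgresql.conf -- only if losing the last few transactions
    # on a crash is acceptable for your workload
    fsync = off

    # enable compression on the data filesystem
    zfs set compression=on tank/pgdata

    # cap the ARC via /etc/system (this example caps it at 4GB), then reboot
    set zfs:zfs_arc_max = 0x100000000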

http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
-- 
bda
cyberpunk is dead. long live cyberpunk.


[zfs-discuss] Lots of small files vs fewer big files

2009-07-07 Thread Don Turnbull
I work with Greenplum, which is essentially a number of Postgres database 
instances clustered together.  Being Postgres, the data is held in a lot 
of individual files, each of which can be fairly big (hundreds of MB or 
several GB) or fairly small (50MB or less).  We've noticed a performance 
difference when our database files are many and small versus few and large.


To test this outside the database, we built a zpool of striped mirrors 
(RAID-10; the same behavior shows up with RAID-Z too) and filled it with 
800 5MB files.  Then we used 4 concurrent dd processes to read 1/4 of 
the files each.  This required 123 seconds.


Then we destroyed the pool, recreated it, and filled it with 20 files of 
200MB each and 780 files of 0 bytes each (same number of files, same total 
space consumed).  The same dd reads took 15 seconds.
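
(A sketch of how one could reproduce that kind of test, not our exact 
script; /testpool/fs is a placeholder path, and this assumes ksh and the 
Solaris mkfile utility:)

    # create 800 x 5MB files
    i=1
    while [ $i -le 800 ]; do
        mkfile 5m /testpool/fs/f$i
        i=$((i + 1))
    done

    # read them back with 4 concurrent dd processes, 200 files each
    r=0
    while [ $r -lt 4 ]; do
        (
            i=$((r * 200 + 1))
            end=$(((r + 1) * 200))
            while [ $i -le $end ]; do
                dd if=/testpool/fs/f$i of=/dev/null bs=128k
                i=$((i + 1))
            done
        ) &
        r=$((r + 1))
    done
    wait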


Any idea why this is?  Various configurations of our product can divide 
the data in the databases into an enormous number of small files.  Varying 
the ARC cache size limit did not have any effect.  Are there other 
tunables available in Solaris 10 U7 (not OpenSolaris) that might affect 
this behavior?


Thanks!
   -dt