Re: [PERFORM] Inserts optimization?

2006-04-14 Thread Jim C. Nasby
On Thu, Apr 13, 2006 at 02:59:23PM -0400, Francisco Reyes wrote: In RAID 10, would it matter that the WAL is in the same RAID set? Would it be better: 4 disks in RAID 10 for data, 2 disks in RAID 1 for WAL, 2 hot spares? Well, benchmark it with your app and find out, but generally speaking unless your

Re: [PERFORM] Blocks read for index scans

2006-04-14 Thread Jim C. Nasby
On Thu, Apr 13, 2006 at 08:36:09PM -0400, Tom Lane wrote: Jim Nasby [EMAIL PROTECTED] writes: While working on determining a good stripe size for a database, I realized it would be handy to know what the average request size is. Getting this info is a simple matter of joining
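A minimal sketch of the kind of join being described, assuming the standard statistics views; the "blocks per tuple" math here is illustrative only, not the exact calculation from the thread:

  -- Rough heap blocks touched per tuple fetched, per table
  SELECT s.relname,
         io.heap_blks_read + io.heap_blks_hit AS heap_blocks,
         s.seq_tup_read + s.idx_tup_fetch     AS tuples_fetched,
         (io.heap_blks_read + io.heap_blks_hit)::numeric
           / NULLIF(s.seq_tup_read + s.idx_tup_fetch, 0) AS blocks_per_tuple
    FROM pg_stat_all_tables s
    JOIN pg_statio_all_tables io USING (relid)
   ORDER BY heap_blocks DESC;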

Re: [PERFORM] pg 7.4.x - pg_restore impossibly slow

2006-04-14 Thread Jim C. Nasby
On Thu, Apr 13, 2006 at 06:26:00PM -0700, patrick keshishian wrote: $ dropdb dbname $ createdb dbname $ pg_restore -vsOd dbname dbname.DUMP That step is pointless, because the next pg_restore will create the schema for you anyway. $ date > db.restore ; pg_restore -vcOd dbname \

Re: [PERFORM] Blocks read for index scans

2006-04-14 Thread Terje Elde
Jim Nasby wrote: While working on determining a good stripe size for a database, I realized it would be handy to know what the average request size is. Getting this info is a simple matter of joining pg_stat_all_tables and pg_statio_all_tables and doing some math, but there's one issue I've

Re: [PERFORM] pg 7.4.x - pg_restore impossibly slow

2006-04-14 Thread patrick keshishian
On 4/13/06, Jim C. Nasby [EMAIL PROTECTED] wrote: On Thu, Apr 13, 2006 at 06:26:00PM -0700, patrick keshishian wrote: $ dropdb dbname $ createdb dbname $ pg_restore -vsOd dbname dbname.DUMP That step is pointless, because the next pg_restore will create the schema for you anyway. Yes, I

Re: [PERFORM] Inserts optimization?

2006-04-14 Thread Francisco Reyes
Jim C. Nasby writes: On Thu, Apr 13, 2006 at 02:59:23PM -0400, Francisco Reyes wrote: In RAID 10, would it matter that the WAL is in the same RAID set? Would it be better: 4 disks in RAID 10 for data, 2 disks in RAID 1 for WAL, 2 hot spares? Well, benchmark it with your app and find out, but generally

Re: [PERFORM] Inserts optimization?

2006-04-14 Thread Michael Stone
On Thu, Apr 13, 2006 at 02:59:23PM -0400, Francisco Reyes wrote: In RAID 10, would it matter that the WAL is in the same RAID set? Would it be better: 4 disks in RAID 10 for data, 2 disks in RAID 1 for WAL, 2 hot spares? I guess the first question is why 2 hot spares? You don't have many spindles, so you don't

Re: [PERFORM] Inserts optimization?

2006-04-14 Thread Marc Cousin
I hope I'm not going to say stupid things, but here's what I know (or think I know :) ) about bacula + postgresql. If I remember correctly (I already discussed this with Kern Sibbald a while ago), bacula does each insert in its own transaction: that's how the program is done, and of course
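For illustration, a sketch of the difference being described; file_records is a hypothetical table, not bacula's actual schema:

  CREATE TABLE file_records (jobid integer, path text);

  -- One transaction per INSERT (what bacula reportedly does):
  -- every row pays for its own WAL flush at commit
  INSERT INTO file_records VALUES (1, '/etc/hosts');
  INSERT INTO file_records VALUES (1, '/etc/passwd');

  -- The same rows wrapped in one transaction: a single flush at COMMIT
  BEGIN;
  INSERT INTO file_records VALUES (1, '/etc/hosts');
  INSERT INTO file_records VALUES (1, '/etc/passwd');
  COMMIT;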

[PERFORM] pg_toast size

2006-04-14 Thread Julien Drouard
Hi everyone, I've noticed my pg_toast tables are becoming bigger and bigger. After some googling, I would like to raise my max_fsm_pages parameter to prevent that kind of problem. So I'm wondering whether changing this parameter is enough, and after that, how can I reduce the size of these tables? By
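A sketch of one way to look at this; my_bloated_table is a placeholder for whichever table owns the big toast relation. Raising max_fsm_pages only prevents further bloat; shrinking what is already there normally takes a VACUUM FULL (or CLUSTER) on the parent table:

  -- Which toast tables are largest? (relpages is in 8 kB blocks)
  SELECT relname, relpages
    FROM pg_class
   WHERE relname LIKE 'pg_toast%'
   ORDER BY relpages DESC
   LIMIT 10;

  -- VACUUM FULL on the parent table also compacts its toast table
  VACUUM FULL VERBOSE my_bloated_table;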

Re: [PERFORM] bad performance on Solaris 10

2006-04-14 Thread Jignesh K. Shah
Hi Bruce, I saw on this alias too that people assumed the default wal_sync_method was fsync on Solaris. I would select fsync or fdsync as the default on Solaris. (I prefer fsync, as it is already highlighted as the default in postgresql.) Another thing to improve the defaults on
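To see what a particular build actually defaults to (the value itself has to be changed in postgresql.conf and picked up with a reload; it is not settable per session):

  -- Possible values include open_datasync, fdatasync, fsync,
  -- fsync_writethrough, open_sync, depending on the platform
  SHOW wal_sync_method;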

Re: [PERFORM] Blocks read for index scans

2006-04-14 Thread Tom Lane
Jim C. Nasby [EMAIL PROTECTED] writes: In my case it would be helpful to break the heap access numbers out between seqscans and index scans, since each of those represents very different access patterns. Would adding that be a mess? Yes; it'd require more counters-per-table than we now keep,

[PERFORM] merge>hash>loop

2006-04-14 Thread Ian Westmacott
I have this query, where PG (8.1.2) prefers Merge Join over Hash Join over Nested Loop. However, actual performance increases in that order (the Nested Loop turns out to be fastest). I was hoping someone might be able to shed some light on why PG chooses the plans in this order, and what I might do to influence it otherwise.
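One way to probe this (a sketch, not from the thread; big_t and small_t are placeholders for the real tables) is to disable the planner's preferred join types one at a time and compare estimated costs against actual runtimes:

  EXPLAIN ANALYZE SELECT * FROM big_t JOIN small_t USING (id);

  SET enable_mergejoin = off;        -- take the merge join out of the running
  EXPLAIN ANALYZE SELECT * FROM big_t JOIN small_t USING (id);

  SET enable_hashjoin = off;         -- now only the nested loop is left
  EXPLAIN ANALYZE SELECT * FROM big_t JOIN small_t USING (id);

  RESET enable_mergejoin;
  RESET enable_hashjoin;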

Re: [PERFORM] Inserts optimization?

2006-04-14 Thread Francisco Reyes
Marc Cousin writes: If I remember correctly (I already discussed this with Kern Sibbald a while ago), bacula does each insert in its own transaction: that's how the program is done. Thanks for the info. For now, I could only get good performance with bacula and postgresql when disabling

Re: [PERFORM] merge>hash>loop

2006-04-14 Thread Tom Lane
Ian Westmacott [EMAIL PROTECTED] writes: I have this query, where PG (8.1.2) prefers Merge Join over Hash Join over Nested Loop. However, actual performance increases in that order (the Nested Loop turns out to be fastest). I was hoping someone might be able to shed some light on why PG chooses the plans in this order, and what

Re: [PERFORM] Inserts optimization?

2006-04-14 Thread Tom Lane
Francisco Reyes [EMAIL PROTECTED] writes: I think I am going to try increasing wal_buffers. That will help not at all, if the problem is too-short transactions, as it sounds to be. You really need to pester the authors of bacula to try to wrap multiple inserts per transaction. Or maybe find some

Re: [PERFORM] merge>hash>loop

2006-04-14 Thread Ian Westmacott
On Fri, 2006-04-14 at 12:13 -0400, Tom Lane wrote: Ian Westmacott [EMAIL PROTECTED] writes: I have this query, where PG (8.1.2) prefers Merge Join over Hash Join over Nested Loop. However, actual performance increases in that order (the Nested Loop turns out to be fastest). I was hoping someone might be able to shed some

Re: [PERFORM] merge>hash>loop

2006-04-14 Thread Tom Lane
Ian Westmacott [EMAIL PROTECTED] writes: That's what I feared, thanks. But then, if I simplify things a bit so that the row-count estimates are quite accurate, and PG still chooses a worse plan, can I conclude anything about my configuration settings (like random_page_cost)? Well, the other thing
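As a sketch of that kind of experiment (session-level only; big_t and small_t are placeholders again), one can lower random_page_cost from its stock default of 4 and watch whether the plan flips:

  SHOW random_page_cost;             -- stock default is 4

  SET random_page_cost = 2;          -- cheaper random I/O favors index scans
  EXPLAIN ANALYZE SELECT * FROM big_t JOIN small_t USING (id);
  RESET random_page_cost;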

Re: [PERFORM] Inserts optimization?

2006-04-14 Thread Francisco Reyes
Tom Lane writes: That will help not at all, if the problem is too-short transactions, as it sounds to be. How about commit_delay? You really need to pester the authors of bacula to try to wrap multiple inserts per transaction. Like any volunteer project, I am sure it's more an issue of
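commit_delay (together with commit_siblings) is a real pair of settings, though whether it helps a single-connection insert stream is exactly what is in question here; a hedged session-level experiment would look like:

  SHOW commit_delay;       -- microseconds to sleep before flushing WAL; default 0
  SHOW commit_siblings;    -- other open transactions required before the delay applies

  SET commit_delay = 10000;     -- 10 ms
  SET commit_siblings = 1;      -- with only one busy connection the delay
                                -- rarely ever triggers, so it may not help here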

Re: [PERFORM] Inserts optimization?

2006-04-14 Thread Francisco Reyes
Michael Stone writes: I guess the first question is why 2 hot spares? Because we are using RAID 10. A larger array with more spindles will outperform a smaller one with fewer, regardless of RAID level (assuming a decent battery-backed cache). Based on what I have read, RAID 10 is supposed

Re: [PERFORM] bad performance on Solaris 10

2006-04-14 Thread Josh Berkus
Jignesh, Don't get me wrong. As Luke mentioned, it took a while to see the potential of PostgreSQL on Solaris, and people like me started doing other complex workarounds in Solaris, like forcedirectio, etc. (Yeah, I did a test: if you force fsync as wal_sync_method while on Solaris, then you

Re: [PERFORM] Inserts optimization?

2006-04-14 Thread Michael Stone
On Fri, Apr 14, 2006 at 02:01:56PM -0400, Francisco Reyes wrote: Michael Stone writes: I guess the first question is why 2 hot spares? Because we are using RAID 10. I still don't follow that. Why would the RAID level matter? IOW, are you actually wanting 2 spares, or are you just stuck with

Re: [PERFORM] Inserts optimization?

2006-04-14 Thread Francisco Reyes
Michael Stone writes: I still don't follow that. Why would the RAID level matter? IOW, are you actually wanting 2 spares, or are you just stuck with that because you need a factor of two disks for your mirrors? RAID 10 needs pairs, so we can either have no spares or 2 spares. Mmm, it's a

Re: [PERFORM] Inserts optimization?

2006-04-14 Thread Scott Marlowe
On Fri, 2006-04-14 at 15:09, Francisco Reyes wrote: Michael Stone writes: I still don't follow that. Why would the RAID level matter? IOW, are you actually wanting 2 spares, or are you just stuck with that because you need a factor of two disks for your mirrors? RAID 10 needs pairs..

Re: [PERFORM] Inserts optimization?

2006-04-14 Thread Francisco Reyes
Scott Marlowe writes: Spares are placed in service one at a time. Ah.. that's your point. I know that. :-) You don't need 2 spares for RAID 10, trust me. We bought the machine with 8 drives. At one point we were considering RAID 5, then we decided to give RAID 10 a try. We have a