Re: [PERFORM] postgresql random io test with 2 SSD Kingston V+100 500GB in (software) Raid1

2011-04-21 Thread Laurent Laborde
On Wed, Apr 20, 2011 at 5:40 PM, Shaun Thomas stho...@peak6.com wrote:
 On 04/20/2011 09:01 AM, Laurent Laborde wrote:

 A review of the V+100 on the excellent anandtech :
 http://www.anandtech.com/show/4010/kingston-ssdnow-v-plus-100-review

 That's horrifying. 4.9MB/s random writes? 19.7MB/s random reads? That's at
 least an order of magnitude lower than other SSDs of that generation. I
 can't imagine that would be very good for database usage patterns by
 comparison. Especially with that aggressive garbage collection.

 I mean... an old Indilinx OCZ Vertex has better performance than that

We just ordered two Corsair C300 240GB drives to compare performance and see if
the difference is as huge as claimed in AnandTech's benchmark :)

-- 
Laurent ker2x Laborde
Sysadmin & DBA at http://www.over-blog.com/

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] postgresql random io test with 2 SSD Kingston V+100 500GB in (software) Raid1

2011-04-20 Thread Laurent Laborde
On Wed, Apr 20, 2011 at 2:39 PM, Jeff thres...@torgo.978.org wrote:

 The performance here looks like the old jmicron based ssds that had
 absolutely abysmal performance - the intel x25s do not suffer like this. The
 x25's however suffer from the power durability Greg has mentioned.  (And
 they will eventually need to be security erase'd to restore performance -
 you'll start getting major write stalls). Looks like you were on the cusp of
 stalling here.

A review of the V+100 on the excellent anandtech :
http://www.anandtech.com/show/4010/kingston-ssdnow-v-plus-100-review

 btw, yay pgiosim! :)

yay \o/

-- 
Laurent ker2x Laborde
Sysadmin & DBA at http://www.over-blog.com/

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


[PERFORM] postgresql random io test with 2 SSD Kingston V+100 500GB in (software) Raid1

2011-04-19 Thread Laurent Laborde
2 kingston V+100 500GB
Soft RAID1 (md)
ioscheduler [noop]
ext3
Linux pro-cdn1 2.6.26-2-amd64 #1 SMP Tue Jan 25 05:59:43 UTC 2011
x86_64 GNU/Linux
Filesystem            Size  Used Avail Use% Mounted on
/dev/md4  452G  301G  128G  71% /home/ssd


Random 8KB read/write with 1% write
./pgiosim  -w1 -a1 -v -b 100 /home/ssd/big.1
Write Mode: 1%
Stallcheck at 1.00
Verbose
Unknown units of blocks
Arg: 1
Read 100 blocks
Added /home/ssd/big.1

---

3.20%,  32036 read,300 written, 25625.08kB/sec 3203.14 iops
5.33%,  21283 read,198 written, 17026.31kB/sec 2128.29 iops
7.47%,  21356 read,245 written, 17084.73kB/sec 2135.59 iops
9.62%,  21511 read,234 written, 17208.72kB/sec 2151.09 iops
11.78%,  21591 read,216 written, 17271.71kB/sec 2158.96 iops
14.08%,  23032 read,245 written, 18425.52kB/sec 2303.19 iops
16.53%,  24527 read,228 written, 19621.52kB/sec 2452.69 iops
18.89%,  23535 read,225 written, 18827.91kB/sec 2353.49 iops
21.19%,  23003 read,229 written, 18402.34kB/sec 2300.29 iops
23.40%,  22139 read,211 written, 17711.13kB/sec 2213.89 iops
25.66%,  22628 read,225 written, 18102.33kB/sec 2262.79 iops
27.86%,  21983 read,238 written, 17586.32kB/sec 2198.29 iops
30.14%,  22823 read,211 written, 18258.31kB/sec 2282.29 iops
32.44%,  22975 read,240 written, 18379.92kB/sec 2297.49 iops
34.83%,  23870 read,214 written, 19095.92kB/sec 2386.99 iops
37.24%,  24129 read,213 written, 19303.10kB/sec 2412.89 iops
39.49%,  22450 read,210 written, 17959.92kB/sec 2244.99 iops
41.77%,  22827 read,235 written, 18261.53kB/sec 2282.69 iops
43.98%,  22138 read,218 written, 17710.30kB/sec 2213.79 iops
46.31%,  23293 read,241 written, 18634.30kB/sec 2329.29 iops
48.86%,  25422 read,258 written, 20337.52kB/sec 2542.19 iops
51.06%,  22091 read,222 written, 17672.72kB/sec 2209.09 iops
53.46%,  23970 read,215 written, 19175.93kB/sec 2396.99 iops
55.80%,  23359 read,224 written, 18687.12kB/sec 2335.89 iops
58.04%,  22472 read,232 written, 17977.24kB/sec 2247.16 iops
60.34%,  22981 read,230 written, 18384.72kB/sec 2298.09 iops
62.17%,  18228 read,192 written, 14580.33kB/sec 1822.54 iops
64.60%,  24336 read,229 written, 19465.85kB/sec 2433.23 iops
66.89%,  22912 read,210 written, 18329.52kB/sec 2291.19 iops
69.06%,  21677 read,231 written, 17341.54kB/sec 2167.69 iops
71.28%,  22255 read,210 written, 17803.91kB/sec 2225.49 iops
73.68%,  23928 read,243 written, 19142.30kB/sec 2392.79 iops
75.90%,  22255 read,205 written, 17803.93kB/sec 2225.49 iops
78.17%,  22641 read,233 written, 18112.72kB/sec 2264.09 iops
80.50%,  23328 read,235 written, 18662.29kB/sec 2332.79 iops
82.84%,  23379 read,230 written, 18703.11kB/sec 2337.89 iops
84.90%,  20670 read,236 written, 16535.95kB/sec 2066.99 iops
86.91%,  20018 read,222 written, 16012.14kB/sec 2001.52 iops
89.24%,  23321 read,235 written, 18654.39kB/sec 2331.80 iops
91.56%,  23224 read,227 written, 18579.13kB/sec 2322.39 iops
94.05%,  24880 read,262 written, 19903.93kB/sec 2487.99 iops
96.40%,  23549 read,205 written, 18839.14kB/sec 2354.89 iops
98.80%,  23956 read,230 written, 19164.73kB/sec 2395.59 iops

--
./pgiosim  -w10 -a1 -v -b 100 /home/ssd/big.1
Write Mode: 10%
Stallcheck at 1.00
Verbose
Unknown units of blocks
Arg: 1
Read 100 blocks
Added /home/ssd/big.1

1.62%,  16226 read,   1642 written, 12979.00kB/sec 1622.37 iops
1.67%,433 read, 38 written,  346.40kB/sec   43.30 iops
2.95%,  12839 read,   1282 written, 10271.06kB/sec 1283.88 iops
2.95%,  3 read,  0 written,2.40kB/sec0.30 iops
3.50%,   5500 read,548 written, 4399.83kB/sec  549.98 iops
4.95%,  14524 read,   1468 written, 11619.12kB/sec 1452.39 iops
4.95%,  3 read,  1 written,2.40kB/sec0.30 iops
4.95%,  8 read,  0 written,6.40kB/sec0.80 iops
6.20%,  12471 read,   1241 written, 9976.67kB/sec 1247.08 iops
6.20%,  3 read,  0 written,2.40kB/sec0.30 iops
7.63%,  14272 read,   1445 written, 11417.26kB/sec 1427.16 iops
7.63%,  3 read,  0 written,2.40kB/sec0.30 iops
7.65%,263 read, 22 written,  210.40kB/sec   26.30 iops
8.65%,   9930 read,990 written, 7943.83kB/sec  992.98 iops
8.67%,268 read, 24 written,  214.40kB/sec   26.80 iops
9.30%,   6296 read,621 written, 5036.78kB/sec  629.60 iops
9.83%,   5233 read,541 written, 4186.35kB/sec  523.29 iops
10.75%,   9222 read,960 written, 7377.56kB/sec  922.20 iops
10.80%,506 read, 52 written,  404.80kB/sec   50.60 iops
11.74%,   9417 read,933 written, 7533.53kB/sec  941.69 iops
11.77%,314 read, 29 written,  251.20kB/sec   31.40 iops
12.56%,   7906 read,793 written, 6324.78kB/sec  790.60 iops
13.37%,   8052 read,830 written, 6441.52kB/sec  805.19 iops
13.40%,309 read, 29 written,  247.20kB/sec   30.90 iops
14.01%,   6116 read,

Re: [PERFORM] postgresql random io test with 2 SSD Kingston V+100 500GB in (software) Raid1

2011-04-19 Thread Laurent Laborde
On Tue, Apr 19, 2011 at 2:07 PM, Greg Smith g...@2ndquadrant.com wrote:
 On 04/19/2011 05:15 AM, Laurent Laborde wrote:

 2 kingston V+100 500GB

4x250GB in Raid10 (see my 2nd post)

 Thanks for the performance report.  The V+100 is based on a Toshiba T6UG1XBG
 controller, and it doesn't have any durable cache from either a battery or
 capacitor.  As such, putting a database on that drive is very risky.  You
 can expect the database to be corrupted during an unusual power outage
 event.  See http://wiki.postgresql.org/wiki/Reliable_Writes for more
 information.

 At this point most people considering one of Kingston's drives for a
 database would be better off getting an Intel 320 series drive, which is
 around the same price but doesn't have this issue.

If we use them (unlikely), recovery in case of a power outage isn't a
problem, as we will use them on a slave database (using Slony-I) that can
be created/destroyed at will.
And anyway, our slaves have fsync=off, so a battery won't change
anything in case of a power outage :)

I am currently testing on a single V+100 250GB (without RAID).
A report will follow soon :)

-- 
Laurent ker2x Laborde
Sysadmin & DBA at http://www.over-blog.com/

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] postgresql random io test with 2 SSD Kingston V+100 500GB in (software) Raid1

2011-04-19 Thread Laurent Laborde
1 SSD Kingston V+100 250GB, no raid.

/home/pgiosim-0.5/pgiosim  -w1 -a1 -v -b 100 /home/ssd/big1
Write Mode: 1%
Stallcheck at 1.00
Verbose
Unknown units of blocks
Arg: 1
Read 100 blocks
Added /home/ssd/big1
3.57%,  35720 read,365 written, 28567.73kB/sec 3570.97 iops
6.14%,  25684 read,276 written, 20519.77kB/sec 2564.97 iops
8.17%,  20270 read,219 written, 16215.94kB/sec 2026.99 iops
9.66%,  14937 read,131 written, 11945.84kB/sec 1493.23 iops
12.91%,  32494 read,327 written, 25995.08kB/sec 3249.38 iops
14.06%,  11508 read,118 written, 9206.33kB/sec 1150.79 iops
16.09%,  20292 read,187 written, 16233.55kB/sec 2029.19 iops
17.57%,  14817 read,141 written, 11853.49kB/sec 1481.69 iops
19.62%,  20515 read,201 written, 16411.94kB/sec 2051.49 iops
21.90%,  22794 read,214 written, 18222.39kB/sec 2277.80 iops
23.92%,  20207 read,197 written, 16160.23kB/sec 2020.03 iops
26.11%,  21812 read,213 written, 17427.32kB/sec 2178.42 iops
28.29%,  21852 read,213 written, 17475.40kB/sec 2184.43 iops
30.73%,  24416 read,234 written, 19507.42kB/sec 2438.43 iops
32.46%,  17298 read,183 written, 13833.15kB/sec 1729.14 iops
33.25%,   7863 read, 87 written, 6290.35kB/sec  786.29 iops
35.67%,  24229 read,213 written, 19383.12kB/sec 2422.89 iops
37.71%,  20397 read,208 written, 16317.50kB/sec 2039.69 iops
39.61%,  19022 read,200 written, 15217.51kB/sec 1902.19 iops
41.63%,  20190 read,202 written, 16151.85kB/sec 2018.98 iops
44.00%,  23651 read,266 written, 18913.60kB/sec 2364.20 iops
45.30%,  13066 read,112 written, 10452.69kB/sec 1306.59 iops
47.37%,  20697 read,218 written, 16557.55kB/sec 2069.69 iops
49.75%,  23726 read,217 written, 18980.50kB/sec 2372.56 iops
51.55%,  18087 read,170 written, 14469.56kB/sec 1808.69 iops
53.47%,  19194 read,193 written, 15355.08kB/sec 1919.39 iops
55.30%,  18250 read,205 written, 14599.93kB/sec 1824.99 iops
57.00%,  16999 read,160 written, 13599.09kB/sec 1699.89 iops
58.79%,  17912 read,180 written, 14329.56kB/sec 1791.19 iops
61.76%,  29694 read,318 written, 23753.91kB/sec 2969.24 iops
62.96%,  12039 read,113 written, 9631.16kB/sec 1203.90 iops
65.67%,  27048 read,273 written, 21609.48kB/sec 2701.18 iops
67.00%,  13305 read,130 written, 10639.63kB/sec 1329.95 iops
69.22%,  9 read,227 written, 17783.07kB/sec .88 iops
71.13%,  19062 read,170 written, 15249.52kB/sec 1906.19 iops
72.06%,   9299 read, 97 written, 7437.79kB/sec  929.72 iops
74.31%,  22492 read,202 written, 17986.09kB/sec 2248.26 iops
76.66%,  23493 read,219 written, 18768.77kB/sec 2346.10 iops
78.75%,  20979 read,209 written, 16775.76kB/sec 2096.97 iops
80.68%,  19305 read,194 written, 15428.97kB/sec 1928.62 iops
83.05%,  23670 read,222 written, 18927.19kB/sec 2365.90 iops
84.59%,  15391 read,169 written, 12299.46kB/sec 1537.43 iops
86.32%,  17246 read,166 written, 13796.73kB/sec 1724.59 iops
88.33%,  20133 read,201 written, 16106.22kB/sec 2013.28 iops
89.98%,  16561 read,172 written, 13248.30kB/sec 1656.04 iops
92.81%,  28298 read,252 written, 22627.87kB/sec 2828.48 iops
94.85%,  20388 read,198 written, 16308.57kB/sec 2038.57 iops
96.75%,  18974 read,178 written, 15179.09kB/sec 1897.39 iops
98.45%,  16956 read,190 written, 13564.73kB/sec 1695.59 iops

-

/home/pgiosim-0.5/pgiosim  -w10 -a1 -v -b 100 /home/ssd/big1
Write Mode: 10%
Stallcheck at 1.00
Verbose
Unknown units of blocks
Arg: 1
Read 100 blocks
Added /home/ssd/big1
2.01%,  20122 read,   1978 written, 16097.57kB/sec 2012.20 iops
2.01%,  3 read,  0 written,2.40kB/sec0.30 iops
2.01%,  3 read,  0 written,2.40kB/sec0.30 iops
2.01%,  3 read,  0 written,2.40kB/sec0.30 iops
3.82%,  18036 read,   1779 written, 14428.73kB/sec 1803.59 iops
4.03%,   2175 read,209 written, 1739.98kB/sec  217.50 iops
4.03%,  3 read,  0 written,2.40kB/sec0.30 iops
4.03%,  2 read,  0 written,1.60kB/sec0.20 iops
4.04%,  4 read,  0 written,3.20kB/sec0.40 iops
4.04%,  4 read,  0 written,3.20kB/sec0.40 iops
5.62%,  15804 read,   1614 written, 12643.13kB/sec 1580.39 iops
5.62%,  3 read,  2 written,2.40kB/sec0.30 iops
5.62%,  4 read,  0 written,3.20kB/sec0.40 iops
5.62%,  3 read,  0 written,2.40kB/sec0.30 iops
5.62%,  3 read,  0 written,2.40kB/sec0.30 iops
6.86%,  12414 read,   1264 written, 9931.17kB/sec 1241.40 iops
6.86%,  3 read,  0 written,2.40kB/sec0.30 iops
6.86%,  2 read,  0 written,1.60kB/sec0.20 iops
7.18%,   3213 read,343 written, 2570.39kB/sec  321.30 iops
8.34%,  11563 read,   1215 written, 9250.36kB/sec 1156.30 iops
8.34%,  3 read,  0 written,2.40kB/sec0.30 iops
8.34%,  4 read,  0 written,3.20kB/sec0.40 iops
8.34%,  

Re: [PERFORM] postgresql random io test with 2 SSD Kingston V+100 500GB in (software) Raid1

2011-04-19 Thread Laurent Laborde
On Tue, Apr 19, 2011 at 3:21 PM, Nicholson, Brad (Toronto, ON, CA)
bnichol...@hp.com wrote:
 -Original Message-
 From: pgsql-performance-ow...@postgresql.org [mailto:pgsql-performance-
 ow...@postgresql.org] On Behalf Of Laurent Laborde
 Sent: Tuesday, April 19, 2011 8:37 AM
 To: pgsql-performance@postgresql.org
 Subject: Re: [PERFORM] postgresql random io test with 2 SSD Kingston
 V+100 500GB in (software) Raid1

 If we use them (unlikely), recovery in case of a power outage isn't a
 problem, as we will use them on a slave database (using Slony-I) that can
 be created/destroyed at will.
 And anyway, our slaves have fsync=off, so a battery won't change
 anything in case of a power outage :)

 Are these on the same UPS?  If so, you have a failure case that could cause 
 you to lose everything.

Oh, not at all.
We're doing balancing/switchover/failover between 2 different datacenters.
We can maintain (somewhat degraded) operation if one of the datacenters fails :)

-- 
Laurent ker2x Laborde
Sysadmin & DBA at http://www.over-blog.com/

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Analyse without locking?

2009-12-03 Thread Laurent Laborde
On Sat, Nov 28, 2009 at 6:57 PM, Richard Neill rn...@cam.ac.uk wrote:
 Greg Smith wrote:

 Richard Neill wrote:

 Or am I barking up the wrong tree entirely?

 If you haven't already tuned checkpoint behavior, it's more likely that's
 causing a dropout than autovacuum.  See the checkpoint_segments section of
 http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server for an intro.


 Thanks - I did that already - it's currently
   checkpoint_segments = 64

 Now, I understand that increasing checkpoint_segments is generally a good
 thing (subject to some limit), but doesn't that just mean that instead of
 say a 1 second outage every minute, it's a 10 second outage every 10
 minutes?

 Also, correct me if I'm wrong, but mere selects shouldn't cause any addition
 to the WAL. I'd expect that a simple row insert might require perhaps 1kB of
 disk writes(*), in which case we're looking at only a few kB/sec at most of
 writes in normal use.?

 Is it possible (or even sensible) to do a manual vacuum analyze with
 nice/ionice?

This is the job of autovacuum_vacuum_cost_delay and vacuum_cost_delay.

About checkpoints, you could possibly set:
synchronous_commit = off

Please note that you may lose some recently committed transactions if the server crashes badly
(but that shouldn't cause database corruption the way fsync = off can).

If you are running on Linux, you could try to monitor /proc/meminfo (RRD is
your friend), and specifically the Dirty field.

Read your syslog to see whether checkpoints are a problem.
Here is a sample of mine (cleaned up):
checkpoint complete: wrote 3117 buffers (1.2%); 0 transaction log
file(s) added, 0 removed, 3 recycled;
write=280.213 s, sync=0.579 s, total=280.797 s

The more dirty pages (/proc/meminfo), the longer your sync time.
A high sync time can easily lock up your server.

To reduce dirty pages, tune /proc/sys/vm/dirty_background_ratio.
I have set it to 1 on my 32GB servers.

You should also be careful with all the other /proc/sys/vm/dirty_* settings,
and specifically /proc/sys/vm/dirty_ratio:
the maximum percentage of total memory that can be filled with dirty pages
before processes are forced to write dirty buffers themselves during
their time slice instead of being allowed to do more writes.
Note that all processes are blocked for writes when this happens, not
just the one that filled the write buffers.
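For illustration, a minimal sketch of checking Dirty and setting those knobs
(the values 1 and 10 below are only examples, not tuned recommendations):

# check how much dirty data is waiting to be flushed
grep -E '^(Dirty|Writeback):' /proc/meminfo

# start background writeback early, and cap the hard limit (example values)
sysctl -w vm.dirty_background_ratio=1
sysctl -w vm.dirty_ratio=10

# persist across reboots
echo 'vm.dirty_background_ratio = 1' >> /etc/sysctl.conf
echo 'vm.dirty_ratio = 10' >> /etc/sysctl.conf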

About ionice: it only works with the CFQ I/O scheduler,
and CFQ is a very bad idea when using PostgreSQL.

-- 
Laurent ker2x Laborde
Sysadmin & DBA at http://www.over-blog.com/

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Cost of sort/order by not estimated by the query planner

2009-12-02 Thread Laurent Laborde
hummm Adding pgsql-perf :)

On Mon, Nov 30, 2009 at 5:54 PM, Laurent Laborde kerdez...@gmail.com wrote:
 Friendly greetings !
 I use postgresql 8.3.6.

 Here is some info about the table I'm querying:
 -
 - select count(*) from _article : 17301610
 - select count(*) from _article WHERE (_article.bitfield & getbit(0)) : 6729


 Here are both queries with problems:
 --

 QUERY 1 : Very fast !
 -

 explain SELECT *
 FROM   _article
 WHERE (_article.bitfield & getbit(0))
 ORDER BY _article.id ASC
 LIMIT 500;
                                              QUERY PLAN
 -
  Limit  (cost=66114.13..66115.38 rows=500 width=1114)
    ->  Sort  (cost=66114.13..66157.37 rows=17296 width=1114)
          Sort Key: id
          ->  Bitmap Heap Scan on _article  (cost=138.32..65252.29
 rows=17296 width=1114)
                Recheck Cond: (bitfield & B'1'::bit varying)
                ->  Bitmap Index Scan on idx_article_bitfield
 (cost=0.00..134.00 rows=17296 width=0)
                      Index Cond: (bitfield & B'1'::bit varying)




 QUERY 2 : Endless... (more than 30 minutes... I stopped the query)
 -

 explain SELECT *
 FROM   _article
 WHERE (_article.bitfield & getbit(0))
 ORDER BY _article.id ASC
 LIMIT 5;
                                            QUERY PLAN
 -
  Limit  (cost=0.00..2042.87 rows=5 width=1114)
    ->  Index Scan using _article_pkey on _article
 (cost=0.00..7066684.46 rows=17296 width=1114)
          Filter: (bitfield & B'1'::bit varying)
 (3 rows)


 With LIMIT 5 and LIMIT 500, the query plans are different.
 PostgreSQL estimates that it can do a simple index scan to find only 5 rows.
 With more than LIMIT ~400 it estimates that it's faster to use a more
 complex plan, and that makes sense!

 The problem is in the ORDER BY, of course.
 If I remove the ORDER BY, the LIMIT 5 is faster (0.044 ms) and does an
 index scan.
 At LIMIT 500 (without ORDER BY) it still uses an index scan and is
 slightly slower.
 At LIMIT 5000 (without ORDER BY) it switches to a Bitmap Index Scan +
 Bitmap Heap Scan, which is slower but acceptable (5.275 ms).

 Why, for QUERY 2, doesn't PostgreSQL estimate the cost of the
 Sort/ORDER BY?
 Of course, ignoring the ordering, both query plans are right and the
 choice between those different plans totally makes sense.

 But... if the planner were kind enough to consider the cost of the
 ORDER BY, it would certainly choose the Bitmap Index + Bitmap Heap
 scan for LIMIT 5, and not an index scan on the pkey!

 I have set the statistics target to 1000 for _article.bitfield, just in case
 (and ran a VACUUM ANALYZE); it doesn't change anything.

 Is that a bug? Any idea?

 Thank you :)

 --
 Laurent ker2x Laborde
 Sysadmin & DBA at http://www.over-blog.com/




-- 
Laurent ker2x Laborde
Sysadmin & DBA at http://www.over-blog.com/

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Cost of sort/order by not estimated by the query planner

2009-12-02 Thread Laurent Laborde
* without order by, limit 5 : 70ms
--
 explain analyze SELECT *
FROM   _article
WHERE (_article.bitfield & getbit(0))
LIMIT 5;

QUERY PLAN  :
Limit  (cost=0.00..20.03 rows=5 width=1109) (actual
time=70.190..70.265 rows=5 loops=1)
   ->  Index Scan using idx_article_bitfield on _article
(cost=0.00..69290.99 rows=17298 width=1109) (actual
time=70.188..70.260 rows=5 loops=1)
 Index Cond: (bitfield & B'1'::bit varying)
 Total runtime: 70.406 ms
(4 rows)

* without order by, limit 500 (same plan as above) : 371ms
--
explain analyze SELECT *
FROM   _article
WHERE (_article.bitfield & getbit(0))
LIMIT 500;

QUERY PLAN:
 Limit  (cost=0.00..2002.86 rows=500 width=1109) (actual
time=0.087..371.257 rows=500 loops=1)
   ->  Index Scan using idx_article_bitfield on _article
(cost=0.00..69290.99 rows=17298 width=1109) (actual
time=0.086..371.075 rows=500 loops=1)
 Index Cond: (bitfield & B'1'::bit varying)
 Total runtime: 371.369 ms

* without order by, limit 5000 (query plan changed) : 1307ms
---
 explain analyze SELECT *
FROM   _article
WHERE (_article.bitfield & getbit(0))
LIMIT 5000;

QUERY PLAN :
 Limit  (cost=138.34..18971.86 rows=5000 width=1109) (actual
time=53.782..1307.173 rows=5000 loops=1)
   ->  Bitmap Heap Scan on _article  (cost=138.34..65294.79 rows=17298
width=1109) (actual time=53.781..1305.565 rows=5000 loops=1)
 Recheck Cond: (bitfield & B'1'::bit varying)
 ->  Bitmap Index Scan on idx_article_bitfield
(cost=0.00..134.01 rows=17298 width=0) (actual time=53.606..53.606
rows=6743 loops=1)
   Index Cond: (bitfield & B'1'::bit varying)
 Total runtime: 1307.972 ms


So... *without* ORDER BY, with different LIMITs and different query plans:
the queries are fast.

* with order by, limit 5 :
--
explain analyze SELECT *
FROM   _article
WHERE (_article.bitfield & getbit(0))
ORDER BY _article.id ASC
LIMIT 5;

QUERY PLAN :
Mmmm... the query has been running for 2 hours now... waiting, waiting.


* with order by, limit 500 : 546ms
---
explain analyze SELECT *
FROM   _article
WHERE (_article.bitfield & getbit(0))
ORDER BY _article.id ASC
LIMIT 500;
QUERY PLAN :
 Limit  (cost=66156.73..66157.98 rows=500 width=1109) (actual
time=545.671..545.900 rows=500 loops=1)
   ->  Sort  (cost=66156.73..66199.98 rows=17298 width=1109) (actual
time=545.670..545.766 rows=500 loops=1)
 Sort Key: id
 Sort Method:  top-N heapsort  Memory: 603kB
 ->  Bitmap Heap Scan on _article  (cost=138.34..65294.79
rows=17298 width=1109) (actual time=1.059..541.359 rows=6729 loops=1)
   Recheck Cond: (bitfield & B'1'::bit varying)
   ->  Bitmap Index Scan on idx_article_bitfield
(cost=0.00..134.01 rows=17298 width=0) (actual time=0.922..0.922
rows=6743 loops=1)
 Index Cond: (bitfield & B'1'::bit varying)
 Total runtime: 546.163 ms


Now... with ORDER BY, a different LIMIT and a different query plan, the
LIMIT 5 query is insanely *SLOW* (while the LIMIT 500 is super fast).

What I think: the query planner does not consider the time taken by
the ORDER BY... which is *much* slower!!
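One workaround sometimes suggested for this kind of situation (a sketch, not
something tested in this thread) is to hide the ORDER BY column behind a
trivial expression, so the planner can no longer walk the pkey index in order
and falls back to the bitmap scan + top-N sort:

psql -c "
EXPLAIN ANALYZE
SELECT *
FROM   _article
WHERE  (_article.bitfield & getbit(0))
ORDER  BY _article.id + 0 ASC   -- the '+ 0' hides the pkey ordering from the planner
LIMIT  5;
"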


-- 
Laurent ker2x Laborde
Sysadmin & DBA at http://www.over-blog.com/

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] limiting performance impact of wal archiving.

2009-11-16 Thread Laurent Laborde
On Thu, Nov 12, 2009 at 3:21 PM, Laurent Laborde kerdez...@gmail.com wrote:
 Hi !

 Here is my plan :
 - rebuilding a spare with ext3, raid10, without lvm
 - switch the slony master to this new node.

Done 3 days ago : Problem solved ! It totally worked. \o/

-- 
ker2x
sysadmin & DBA @ http://www.over-blog.com/

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] limiting performance impact of wal archiving.

2009-11-12 Thread Laurent Laborde
Hi !

Here is my plan :
- rebuilding a spare with ext3, raid10, without lvm
- switch the slony master to this new node.

We'll see ...
Thx for all the info !!!

-- 
Ker2x

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


[PERFORM] limiting performance impact of wal archiving.

2009-11-10 Thread Laurent Laborde
Hi !
We recently had a problem with WAL archiving badly impacting the
performance of our PostgreSQL master.
And I discovered cstream, which can limit the bandwidth of a pipe stream.

Here is our new archive command, FYI, which limits the I/O bandwidth to 500KB/s:
archive_command = '/bin/cat %p | cstream -i  -o  -t -500k | nice
gzip -9 -c | /usr/bin/ncftpput etc...'


PS: While writing this mail, I just realized that I could replace:
cat %p | cstream -i  ...
with
cstream -i %p ...
*grins*
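For reference, the full simplified line would look something like this (same
illustrative 500KB/s limit, untested here, so treat it as a sketch):

archive_command = 'cstream -i %p -t -500k | nice gzip -9 -c | /usr/bin/ncftpput etc...'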


-- 
ker2x
Sysadmin & DBA @ http://Www.over-blog.com/

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] limiting performance impact of wal archiving.

2009-11-10 Thread Laurent Laborde
On Tue, Nov 10, 2009 at 3:05 PM, Ivan Voras ivo...@freebsd.org wrote:
 Laurent Laborde wrote:

 Hi !
 We recently had a problem with wal archiving badly impacting the
 performance of our postgresql master.

 Hmmm, do you want to say that copying 16 MB files over the network (and
 presumably you are not doing it absolutely continually - there are pauses
 between log shipping - or you wouldn't be able to use bandwidth limiting) in
 an age when desktop drives easily read 60 MB/s (and besides most of the file
 should be cached by the OS anyway) is a problem for you? Slow hardware?

 (or I've misunderstood the problem...)

A desktop drive can easily do 60MB/s in *sequential* read/write.
We use a high-performance array of 15,000rpm SAS disks on an octocore server
with 32GB of RAM, and I/O is always a problem.

Let me explain the problem:

This server (doing WAL archiving) is the master node of the
over-blog server farm:
hundreds of GB of data, tens of millions of articles and comments,
millions of users, ...
~250 read/write SQL requests per second on the master,
~500 read SQL requests per slave.

Awfully random access overloads our array at 10MB/s at best.
Of course, when doing sequential reads it goes up to 250MB/s :)

Waiting for memory to be cheap enough to have 512GB of RAM per server ;)

We thought about SSDs.
But interleaved read/write kills SSD performance, leaving them no better
than our current disks, just more expensive, with unknown behaviour as they age.

-- 
ker2x
sysadmin & DBA @ http://www.over-blog.com/

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] limiting performance impact of wal archiving.

2009-11-10 Thread Laurent Laborde
checkpoint log :


 checkpoint starting: time
 checkpoint complete: wrote 1972 buffers (0.8%); 0 transaction log
file(s) added, 0 removed, 13 recycled;
 write=179.123 s, sync=26.284 s, total=205.451 s

with a 10-minute checkpoint timeout.

-- 
ker2x

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] limiting performance impact of wal archiving.

2009-11-10 Thread Laurent Laborde
On Tue, Nov 10, 2009 at 4:11 PM, Ivan Voras ivo...@freebsd.org wrote:
 Laurent Laborde wrote:

 Ok, this explains it. It also means you are probably not getting much
 runtime performance benefits from the logging and should think about moving
 the logs to different drive(s), among other things because...

It is on a separate array, which holds everything except the tablespace (on a
separate array) and the index space (on another separate array).

 Of course, when doing sequential read it goes to +250MB/s :)

 ... it means you cannot dedicate 0.064 of second from the array to read
 through a single log file without your other transactions suffering.

Well, actually, I also changed the configuration to synchronous_commit=off.
It probably was *THE* problem with checkpointing and archiving :)

But adding cstream couldn't hurt performance, and I wanted to share
this with the list. :)

BTW, if you have any ideas to improve I/O performance, I'll happily read them.
We're 100% I/O bound.

E.g., historically we use JFS with LVM on Linux, from the good old days
when I/O wasn't a problem.
I heard that ext3 is no better for PostgreSQL. What else? XFS?

*hugs*

-- 
ker2x
Sysadmin & DBA @ http://www.over-blog.com/

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] limiting performance impact of wal archiving.

2009-11-10 Thread Laurent Laborde
On Tue, Nov 10, 2009 at 4:48 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
 Laurent Laborde kerdez...@gmail.com wrote:

 BTW, if you have any idea to improve IO performance, i'll happily
 read it.  We're 100% IO bound.

 At the risk of stating the obvious, you want to make sure you have
 high quality RAID adapters with large battery backed cache configured
 to write-back.

Not sure how high quality the 3ware is.
/c0 Driver Version = 2.26.08.004-2.6.18
/c0 Model = 9690SA-8I
/c0 Available Memory = 448MB
/c0 Firmware Version = FH9X 4.04.00.002
/c0 Bios Version = BE9X 4.01.00.010
/c0 Boot Loader Version = BL9X 3.08.00.001
/c0 Serial Number = L340501A7360026
/c0 PCB Version = Rev 041
/c0 PCHIP Version = 2.00
/c0 ACHIP Version = 1501290C
/c0 Controller Phys = 8
/c0 Connections = 8 of 128
/c0 Drives = 8 of 128
/c0 Units = 3 of 128
/c0 Active Drives = 8 of 128
/c0 Active Units = 3 of 32
/c0 Max Drives Per Unit = 32
/c0 Total Optimal Units = 2
/c0 Not Optimal Units = 1
/c0 Disk Spinup Policy = 1
/c0 Spinup Stagger Time Policy (sec) = 1
/c0 Auto-Carving Policy = off
/c0 Auto-Carving Size = 2048 GB
/c0 Auto-Rebuild Policy = on
/c0 Controller Bus Type = PCIe
/c0 Controller Bus Width = 8 lanes
/c0 Controller Bus Speed = 2.5 Gbps/lane


 If you haven't already done so, you might want to try
 elevator=deadline.

That's what we use.
I also tried the noop scheduler, without a significant performance change.
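For reference, a quick way to check and switch the scheduler at runtime
(sdX below is a placeholder for the actual device):

# the scheduler shown in brackets is the active one
cat /sys/block/sdX/queue/scheduler

# switch at runtime, no reboot needed
echo deadline > /sys/block/sdX/queue/scheduler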

-- 
ker2x

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] limiting performance impact of wal archiving.

2009-11-10 Thread Laurent Laborde
On Tue, Nov 10, 2009 at 5:35 PM, Greg Smith g...@2ndquadrant.com wrote:
 Laurent Laborde wrote:

 It is on a separate array which does everything but tablespace (on a
 separate array) and indexspace (another separate array).


 On Linux, the types of writes done to the WAL volume (where writes are
 constantly being flushed) require the WAL volume not be shared with anything
 else for that to perform well.  Typically you'll end up with other things
 being written out too because it can't just selectively flush just the WAL
 data.  The whole write barriers implementation should fix that, but in
 practice rarely does.

 If you put many drives into one big array, somewhere around 6 or more
 drives, at that point you might put the WAL on that big volume too and be OK
 (presuming a battery-backed cache which you have).  But if you're carving up
 array sections so finely for other purposes, it doesn't sound like your WAL
 data is on a big array.  Mixed onto a big shared array or single dedicated
 disks (RAID1) are the two WAL setups that work well, and if I have a bunch
 of drives I personally always prefer a dedicated drive mainly because it
 makes it easy to monitor exactly how much WAL activity is going on by
 watching that drive.

On the new slave I have 6 disks in RAID-10 and 2 disks in RAID-1.
I thought about doing the same thing with the master.


 Well, actually, i also change the configuration to synchronous_commit=off
 It probably was *THE* problem with checkpoint and archiving :)


 This is basically turning off the standard WAL implementation for one where
 you'll lose some data if there's a crash.  If you're OK with that, great; if
 not, expect to lose some number of transactions if the server ever goes down
 unexpectedly when configured like this.

I have 1 spare dedicated to hot standby, doing nothing but waiting for
the master to fail, plus 2 spare candidates for cluster mastering.

In theory, I could even disable fsync and all the safety features on the master.
In practice, I'd like to avoid using Slony's failover capabilities
if I can avoid it :)

 Generally if checkpoints and archiving are painful, the first thing to do is
 to increase checkpoint_segments to a very high amount (100), increase
 checkpoint_timeout too, and push shared_buffers up to be a large chunk of
 memory.

shared_buffers is 2GB.
I'll reread the documentation about checkpoint_segments.
Thanks.
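For illustration, the kind of settings described above would look roughly like
this in postgresql.conf (example values only, not tested here):

checkpoint_segments = 100      # default is 3; more segments = less frequent checkpoints
checkpoint_timeout = 15min     # default is 5min
shared_buffers = 8GB           # an example "large chunk" of a 32GB machine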

 Disabling synchronous_commit should be a last resort if your
 performance issues are so bad you have no choice but to sacrifice some data
 integrity just to keep things going, while you rearchitect to improve
 things.

 eg: historically, we use JFS with LVM on linux. from the good old time
 when IO wasn't a problem.
 i heard that ext3 is not better for postgresql. what else ? xfs ?


 You never want to use LVM under Linux if you care about performance.  It
 adds a bunch of overhead that drops throughput no matter what, and it's
 filled with limitations.  For example, I mentioned write barriers being one
 way to interleave WAL writes without other types without having to write the
 whole filesystem cache out.  Guess what:  they don't work at all regardless
 if you're using LVM.  Much like using virtual machines, LVM is an approach
 only suitable for low to medium performance systems where your priority is
 easier management rather than speed.

*doh* !!
Everybody told me "nooo! LVM is OK, no perceptible overhead, etc..."
Are you 100% sure about LVM? I'll happily trash it :)

 Given the current quality of Linux code, I hesitate to use anything but ext3
 because I consider that just barely reliable enough even as the most popular
 filesystem by far.  JFS and XFS have some benefits to them, but none so
 compelling to make up for how much less testing they get.  That said, there
 seem to be a fair number of people happily running high-performance
 PostgreSQL instances on XFS.

Thx for the info :)

-- 
ker2x

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] same query in high number of times

2009-06-23 Thread Laurent Laborde
On Mon, Jun 22, 2009 at 12:06 AM, Scott Marlowe scott.marl...@gmail.com wrote:
 On Sun, Jun 21, 2009 at 12:28 PM, Peter Alban peter.alb...@gmail.com wrote:
 Hi,

 Here is the query  :
 duration: 2533.734 ms  statement:

 SNIP

  Limit  (cost=4313.54..4313.55 rows=3 width=595) (actual
 time=288.525..288.528 rows=3 loops=1)

 According to this query plan, your query is taking up 288
 milliseconds.  I'm guessing the rest of the time is actually spent
 transferring data.

Huu ...
The cost is _certainly_ not the time in ms.
See the planner cost constants in a config file, or in any good documentation.

-- 
F4FQM
Kerunix Flan
Laurent Laborde

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] same query in high number of times

2009-06-23 Thread Laurent Laborde
On Tue, Jun 23, 2009 at 10:52 AM, Laurent Laborde kerdez...@gmail.com wrote:
 On Mon, Jun 22, 2009 at 12:06 AM, Scott Marlowe scott.marl...@gmail.com
 wrote:
 On Sun, Jun 21, 2009 at 12:28 PM, Peter Alban peter.alb...@gmail.com wrote:
 Hi,

 Here is the query  :
 duration: 2533.734 ms  statement:

 SNIP

  Limit  (cost=4313.54..4313.55 rows=3 width=595) (actual
 time=288.525..288.528 rows=3 loops=1)

 According to this query plan, your query is taking up 288
  milliseconds.  I'm guessing the rest of the time is actually spent
 transferring data.

 Huu ...
 The cost is _certainly_ not the time in ms.
 See the planner cost constants in a config file, or in any good documentation.

Oops... cost... time... my mistake... :)
*duck and cover*

-- 
F4FQM
Kerunix Flan
Laurent Laborde

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] What is the most optimal config parameters to keep stable write TPS ?..

2009-05-12 Thread Laurent Laborde
On Mon, May 11, 2009 at 6:31 PM, Dimitri dimitrik...@gmail.com wrote:
 Hi Kevin,

 PostgreSQL: 8.3.7  8.4
 Server: Sun M5000 32cores
 OS: Solaris 10

 current postgresql.conf:

 #
 max_connections = 2000                  # (change requires restart)

Are you sure about the 2000 connections?
Why don't you use pgbouncer or pgpool instead?
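As a sketch of the idea, a minimal pgbouncer setup could look like this
(the database name, pool mode and pool sizes are made-up example values,
not tuned for this workload):

cat > /etc/pgbouncer/pgbouncer.ini <<'EOF'
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
pool_mode = transaction
; clients can still open up to 2000 connections,
; but only ~50 server connections actually reach PostgreSQL
max_client_conn = 2000
default_pool_size = 50
EOF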


-- 
F4FQM
Kerunix Flan
Laurent Laborde

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] [HACKERS] high shared buffer and swap

2009-05-05 Thread Laurent Laborde
On Tue, May 5, 2009 at 11:15 AM, PFC li...@peufeu.com wrote:

 An octocore server with 32GB of RAM, running PostgreSQL 8.3.6.
 Running only PostgreSQL, Slony-I and pgbouncer.

 Just for testing purposes, I tried a setting with 26GB of shared_buffers.

 I quickly noticed that performance wasn't very good and the
 server started to swap, slowly but surely
  (but still up to 2000 queries/second as reported by pgfouine).

 It used all 2GB of swap.
 I removed the server from production, added 10GB of swap, and left it
 for the weekend with only Slony and PostgreSQL up, to keep it in sync
 with the master database.

 This morning I found that the whole 12GB of swap was used :

 Hm, do you really need swap with 32Gb of RAM ?
 One could argue yes but swap is useful to avoid out of memory
 errors.
 But if a loaded server starts to swap a lot, it is as good as dead
 anyway...

Not really, but we have it.
I tried with swappiness set to 0 and... it swaps!

I'm back to 4GB of shared_buffers :)
I'll try various settings, maybe 16GB, etc...
But my goal was to avoid the OS filesystem cache and use
shared_buffers instead: FAIL.

-- 
F4FQM
Kerunix Flan
Laurent Laborde

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


[PERFORM] high shared buffer and swap

2009-05-04 Thread Laurent Laborde
Friendly greetings!
I found something odd (something that I can't explain) this weekend.

An octocore server with 32GB of RAM, running PostgreSQL 8.3.6.
Running only PostgreSQL, Slony-I and pgbouncer.

Just for testing purposes, I tried a setting with 26GB of shared_buffers.

I quickly noticed that performance wasn't very good and the
server started to swap, slowly but surely
 (but still up to 2000 queries/second as reported by pgfouine).

It used all 2GB of swap.
I removed the server from production, added 10GB of swap, and left it
for the weekend with only Slony and PostgreSQL up, to keep it in sync
with the master database.

This morning I found that the whole 12GB of swap was used:
Mem:  32892008k total, 32714728k used,   177280k free,70872k buffers
Swap: 12582896k total, 12531812k used,51084k free, 27047696k cached

# cat /proc/meminfo
MemTotal: 32892008 kB
MemFree:171140 kB
Buffers: 70852 kB
Cached:   27065208 kB
SwapCached:4752492 kB
Active:   24362168 kB
Inactive:  7806884 kB
HighTotal:   0 kB
HighFree:0 kB
LowTotal: 32892008 kB
LowFree:171140 kB
SwapTotal:12582896 kB
SwapFree:53064 kB
Dirty:  122636 kB
Writeback:   0 kB
AnonPages:  280336 kB
Mapped:   14118588 kB
Slab:   224632 kB
PageTables: 235120 kB
NFS_Unstable:0 kB
Bounce:  0 kB
CommitLimit:  29028900 kB
Committed_AS: 28730620 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 12916 kB
VmallocChunk: 34359725307 kB

While I understand that a very high shared_buffers wasn't a good idea,
I don't understand this behaviour.
Any thoughts?

I tried this setup because having 2 levels of data caching doesn't make
sense to me (1 in the OS filesystem cache and 1 in shm (shared_buffers)).

I'd love to understand what's happening here! Thank you :)

-- 
F4FQM
Kerunix Flan
Laurent Laborde

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


[PERFORM] any interest of changing the page size ?

2009-04-28 Thread Laurent Laborde
Friendly greetings !

According to
http://developer.postgresql.org/pgdocs/postgres/storage-page-layout.html :
"Every table and index is stored as an array of pages of a fixed size
(usually 8 kB, although a different page size can be selected when
compiling the server)."

Is there any use/interest in changing this page size?

thank you.

-- 
F4FQM
Kerunix Flan
Laurent Laborde

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Using IOZone to simulate DB access patterns

2009-04-27 Thread Laurent Laborde
You can also play with this tiny, shiny tool:
http://pgfoundry.org/projects/pgiosim/
It just works and heavily stresses the disk with random reads/writes.


-- 
F4FQM
Kerunix Flan
Laurent Laborde

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] iowait bug?

2009-03-22 Thread Laurent Laborde
On Sun, Mar 22, 2009 at 8:49 AM, Laurent Wandrebeck
l.wandreb...@gmail.com wrote:
 2009/3/21 M. Edward (Ed) Borasky zzn...@gmail.com:
 I just discovered this on a LinkedIn user group:

 http://bugzilla.kernel.org/show_bug.cgi?id=12309

 Is anyone here seeing evidence of this in PostgreSQL??
 I've been hit by an I/O wait problem, as described here:
 https://bugzilla.redhat.com/show_bug.cgi?id=444759
 I've told it to that other bug, but no one seems to have followed that path.

We applied this mwi patch on 3 PostgreSQL servers and saw a great
performance improvement.
Using 3ware controllers, 8 SAS HDDs, an octocore (2x4) Xeon and 32GB of RAM,
on a custom 2.6.18 kernel.

-- 
Laurent Laborde
http://www.over-blog.com/

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Full statement logging problematic on larger machines?

2009-03-13 Thread Laurent Laborde
On Fri, Mar 13, 2009 at 9:28 AM, sathiya psql sathiya.p...@gmail.com wrote:
 for profiling, you can also use the epqa.

 http://epqa.sourceforge.net/

Or PGSI: http://bucardo.org/pgsi/
But it requires a syslog date format we don't use here, so I wasn't
able to test it :/

-- 
F4FQM
Kerunix Flan
Laurent Laborde

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Full statement logging problematic on larger machines?

2009-03-12 Thread Laurent Laborde
On Wed, Mar 11, 2009 at 11:42 PM, Frank Joerdens fr...@joerdens.de wrote:

 effective_cache_size            = 4GB

Only 4GB with 64GB of RAM?
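The implication being that effective_cache_size should reflect the memory
actually available for caching; as a rough sketch (the number below is only
an example, not a measured value):

effective_cache_size = 48GB    # example: most of a 64GB box ends up as OS page cache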

About logging, we have 3 partitions:
- data
- index
- everything else, including logging.

Usually, we log to a remote syslog (a dedicated log server for the
whole server farm).

For profiling (pgfouine), we have a crontab entry that changes the PostgreSQL
logging configuration for just a few minutes
and logs everything to the partition that holds everything but the PostgreSQL data.

Around 2000 queries/second per server, no problem.
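A sketch of what such a cron-driven toggle could look like (the file paths and
the 5-minute window are illustrative assumptions, not our exact script):

#!/bin/sh
# enable verbose statement logging for a short profiling window,
# then restore the normal configuration and reload
cp /etc/postgresql/postgresql.conf.profiling /etc/postgresql/postgresql.conf
psql -U postgres -c "SELECT pg_reload_conf();"
sleep 300
cp /etc/postgresql/postgresql.conf.normal /etc/postgresql/postgresql.conf
psql -U postgres -c "SELECT pg_reload_conf();"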


-- 
Laurent Laborde
Sysadmin at JFG-Networks / Over-blog

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance