Re: [PERFORM] ext4 finally doing the right thing

2010-01-21 Thread Pierre Frédéric Caillaud
Now, with ext4 moving to full barrier/fsync support, we could get to the point where WAL in the main data FS can mimic the state where WAL is separate, namely that WAL writes can jump the queue and be written without waiting for the data pages to be flushed down to disk, but also that you'll
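A quick way to see which flush-related settings a given server is actually relying on (a minimal diagnostic sketch, not part of the original message; all three are standard GUCs):

    SHOW fsync;               -- must be on for barriers/fsync to matter at all
    SHOW wal_sync_method;     -- how WAL writes are forced out to disk
    SHOW synchronous_commit;  -- whether COMMIT waits for that flush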

Re: [PERFORM] Inserting 8MB bytea: just 25% of disk perf used?

2010-01-16 Thread Pierre Frédéric Caillaud
I've changed the setup a bit: (1) Replaced the 7,200 rpm disk by a 10,000 rpm one, still SATA though. (2) Inserting only 10 rows (instead of 100), but 80 MB each, so having the same amount of 800 MB in total. (3) Changed the WAL path to the system disk (by the great 'junction' trick

Re: [PERFORM] Inserting 8MB bytea: just 25% of disk perf used?

2010-01-15 Thread Pierre Frédéric Caillaud
http://www.hagander.net/talks/Advanced%20PostgreSQL%20on%20Windows.pdf Great doc ! I'm keeping that ;)

Re: [PERFORM] Re: New server to improve performance on our large and busy DB - advice? (v2)

2010-01-15 Thread Pierre Frédéric Caillaud
2) Which Windows OS would you recommend? (currently 2008 x64 Server) Would not recommend Windows OS. BTW, I'd be interested to know the NTFS fragmentation stats of your database file.

Re: [PERFORM] new server I/O setup

2010-01-15 Thread Pierre Frédéric Caillaud
No-one has mentioned SSDs yet ?...

Re: [PERFORM] performance config help

2010-01-14 Thread Pierre Frédéric Caillaud
So, pgBouncer is pretty good. It doesn't appear to be as good as limiting TCON and using pconnect, but since we can't limit TCON in a production environment, we may not have a choice. Actually, you can : use lighttpd and php/fastcgi. Lighttpd handles the network stuff, and

Re: [PERFORM] Inserting 8MB bytea: just 25% of disk perf used?

2010-01-14 Thread Pierre Frédéric Caillaud
However the hard disk (SATA) could write 43 MB/s in the worst case! Why is write performance limited to 16 MB/s? Some more hints on what I do: I use PQexecParams() and the INSERT ... $001 notation so as NOT to build an additional escaped string from the data, but use a pointer to the 8 MB data
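For reference, the statement text sent through PQexecParams() in that scheme is just a placeholder insert; the 8 MB value is bound as a binary parameter and never expanded into an escaped string (table and column names below are invented for illustration):

    -- hypothetical table; the bytea value travels as binary parameter $1
    INSERT INTO blob_store (payload) VALUES ($1);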

Re: [PERFORM] a heavy duty operation on an unused table kills my server

2010-01-14 Thread Pierre Frédéric Caillaud
high CPU usage It might very well be high IO usage. Try this : Copy (using explorer, the shell, whatever) a huge file. This will create load similar to ALTER TABLE. Measure throughput, how much is it ? If your server blows up just like it did on ALTER TABLE,

Re: [PERFORM] Inserting 8MB bytea: just 25% of disk perf used?

2010-01-14 Thread Pierre Frédéric Caillaud
On Thu, 14 Jan 2010 22:28:07 +0100, fka...@googlemail.com wrote: Pierre Frédéric Caillaud: 4 Core CPU 3 GHz, WinXP, 1 TB SATA disk. Big CPU and slow disk... You should add another disk just for the WAL -- disks are pretty cheap these days.

Re: [PERFORM] Choice of bitmap scan over index scan

2010-01-11 Thread Pierre Frédéric Caillaud
Postgres is being conservative. The plan it uses (bitmap index scan) will perform much better than an index scan when the data is not in the cache, by maybe an order of magnitude, depending on your hardware setup. The index scan may perform better at the moment, but the bitmap index scan
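To reproduce the comparison on your own data, you can disable the bitmap scan for one session and time both plans (table and predicate below are placeholders, not from the thread):

    EXPLAIN ANALYZE SELECT * FROM my_table WHERE my_col BETWEEN 100 AND 200;  -- bitmap index scan
    SET enable_bitmapscan = off;
    EXPLAIN ANALYZE SELECT * FROM my_table WHERE my_col BETWEEN 100 AND 200;  -- plain index scan
    RESET enable_bitmapscan;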

Re: [PERFORM] PG optimization question

2010-01-11 Thread Pierre Frédéric Caillaud
On Sun, 10 Jan 2010 19:45:32 +0100, Robert Haas robertmh...@gmail.com wrote: 2010/1/10 Pierre Frédéric Caillaud li...@peufeu.com: If you transfer (delete from staging, insert into archive) in one transaction, then it will always be visible in exactly one of them, and exactly once in a view
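A sketch of that single-transaction transfer (the cutoff column and date are hypothetical); because the two statements commit atomically, other sessions see each row in exactly one of the two tables:

    BEGIN;
    INSERT INTO archive SELECT * FROM staging WHERE msg_date < '2010-01-01';
    DELETE FROM staging WHERE msg_date < '2010-01-01';
    COMMIT;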

Re: [PERFORM] performance config help

2010-01-11 Thread Pierre Frédéric Caillaud
Each of the 256 requests was being processed by a php process. So, it could certainly be faster. But, the fact that we're seeing the db performance degrade would seem to indicate that our application is fast enough to punish the db. Isn't that true? Not necessarily. Your DB still has

Re: [PERFORM] PG optimization question

2010-01-10 Thread Pierre Frédéric Caillaud
If you transfer (delete from staging, insert into archive) in one transaction, then it will always be visible in exactly one of them, and exactly once in a view over both staging and archive(s). Does the latest version implement this: INSERT INTO archive (...) DELETE FROM staging
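At the time the answer was no, but later releases added exactly this through writable CTEs (PostgreSQL 9.1 and up); a sketch, assuming both tables share the same column layout and using a hypothetical cutoff column:

    WITH moved AS (
        DELETE FROM staging
        WHERE msg_date < '2010-01-01'
        RETURNING *
    )
    INSERT INTO archive
    SELECT * FROM moved;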

Re: [PERFORM] Massive table (500M rows) update nightmare

2010-01-09 Thread Pierre Frédéric Caillaud
crank it up more and delay the checkpoints as much as possible during these updates. 64 segments is already 1024M. We have 425M rows, total table size is 78GB, so we can imagine a worst case UPDATE write is less than 200 bytes * number of rows specified in the update (is that logic
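Spelling out the arithmetic behind those two figures (both inputs come from the message itself):

    SELECT 64 * 16 AS checkpoint_wal_mb;                       -- 64 segments x 16 MB = 1024 MB
    SELECT round(78 * 1024.0^3 / 425000000) AS bytes_per_row;  -- 78 GB / 425M rows, roughly 197 bytes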

Re: [PERFORM] PG optimization question

2010-01-09 Thread Pierre Frédéric Caillaud
That may help with the queries speed (not a problem now), but we'll then have to add UNION statement for daily staging table for other 5% of requests, right? And there would be a moment when daily message is in archive table AND in daily table (while transferring from daily table to

Re: [PERFORM] performance while importing a very large data set in to database

2009-12-06 Thread Pierre Frédéric Caillaud
I have a very big database, around 15 million in size, and the dump file is around 12 GB. While importing this dump into the database I have noticed that initially query response time is very slow but it does improve with time. Any suggestions to improve performance after dump in
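One routine step after a restore of that size (the reply itself is truncated here, so this is only a general suggestion): a freshly restored database has no planner statistics, which often explains the "slow at first, better later" pattern:

    -- collect planner statistics for every table right after the restore
    ANALYZE;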

Re: [PERFORM] PG 8.3 and large shared buffer settings

2009-09-26 Thread Pierre Frédéric Caillaud
Is there any practical limit to the number of shared buffers PG 8.3.7 can handle before more becomes counter-productive? It is more efficient to have the page in shared buffers, rather than doing a context switch to the OS, copying the entire page from the OS's cache into shared buffers,
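With the contrib pg_buffercache module installed, you can watch how much of that cache is actually populated and dirty (a diagnostic sketch, not from the thread):

    SHOW shared_buffers;
    SELECT count(*)                                 AS buffers_in_use,
           sum(CASE WHEN isdirty THEN 1 ELSE 0 END) AS buffers_dirty
    FROM pg_buffercache
    WHERE relfilenode IS NOT NULL;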

Re: [PERFORM] Using Gprof with Postgresql

2009-09-08 Thread Pierre Frédéric Caillaud
I just compiled it with gcc and it produces the gmon.out file for every process; by the way I am running the script below in order to produce readable .out files: gprof .../pgsql/bin/postgres gmon.out > createtable2.out Is postgres the right executable? regards reydan Off topic, but

Re: [PERFORM] moving data between tables causes the db to overwhelm the system

2009-09-01 Thread Pierre Frédéric Caillaud
We have a table that's 2 billion rows big and growing fast. We've set up monthly partitions for it. Upon running the first of many select * from bigTable insert into partition statements (330 million rows per month) the entire box eventually goes out to lunch. Any thoughts/suggestions? Thanks

Re: [PERFORM] moving data between tables causes the db to overwhelm the system

2009-09-01 Thread Pierre Frédéric Caillaud
Indexes are on the partitions, my bad. If you need to insert lots of data, it is faster to create the indexes afterwards (and then you can also create them in parallel, since you have lots of RAM and cores). The explain plan looks like this: explain SELECT * from bigTable where time =
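A sketch of that load-first, index-afterwards pattern for one monthly partition (partition and index names are invented; the time column is the one shown in the plan):

    -- load the partition with no indexes in place, then build them in a single pass
    INSERT INTO bigtable_2009_08
    SELECT * FROM bigTable
    WHERE time >= '2009-08-01' AND time < '2009-09-01';

    CREATE INDEX bigtable_2009_08_time_idx ON bigtable_2009_08 (time);
    ANALYZE bigtable_2009_08;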

Re: [PERFORM] What exactly is postgres doing during INSERT/UPDATE ?

2009-08-28 Thread Pierre Frédéric Caillaud
top shows the cpu usage of the pg process ranges from zero to never more than ten percent of a cpu, and that one cpu is always ninety some odd percent in iowait. So what is postgres doing (with fsync off) that causes the cpu to spend so much time in iowait? Updating indexes ?

Re: [SQL] [PERFORM] SQL Query Performance - what gives?

2009-08-19 Thread Pierre Frédéric Caillaud
The bitmask allows the setting of multiple permissions but the table definition doesn't have to change (well, so long as the bits fit into a word!) Finally, this is a message forum - the actual code itself is template-driven and the bitmask permission structure is ALL OVER the templates;
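For reference, such a bitmask is usually tested with the integer AND operator; note that a plain btree index on the column cannot be used for this kind of predicate, so an expression or partial index would be needed (table, column, and flag value below are hypothetical):

    -- rows where the 'can_post' bit (value 4 here) is set
    SELECT * FROM forum_acl WHERE (permissions & 4) <> 0;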

Re: [PERFORM] Getting time of a postgresql-request

2009-08-18 Thread Pierre Frédéric Caillaud
On Tue, 18 Aug 2009 06:25:57 +0200, Russell Smith mr-r...@pws.com.au wrote: Kai Behncke wrote: But I would like to get it in a php-script, like $timerequest_result=pg_result($timerequest,0); (well, that does not work). I wonder: Is there another way to get the time a request needs? How

Re: [PERFORM] transaction delays to apply

2009-08-12 Thread Pierre Frédéric Caillaud
Does anybody know any way to solve this? I did monitor the system running at full load (~20 messages per second) - postmaster's processes didn't eat more than 10-20% of CPU and memory. Neither did any of my application's processes. now() like current_timestamp is the time of transaction
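The behaviour is easy to demonstrate: now() and current_timestamp are frozen at the start of the transaction, while clock_timestamp() keeps moving (a minimal example, not from the original message):

    BEGIN;
    SELECT now() AS txn_time, clock_timestamp() AS wall_clock;
    SELECT pg_sleep(2);
    SELECT now() AS txn_time, clock_timestamp() AS wall_clock;  -- now() unchanged, clock_timestamp() about 2 s later
    COMMIT;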

Re: [PERFORM] PG-related ACM Article: The Pathologies of Big Data

2009-08-08 Thread Pierre Frédéric Caillaud
I don't see how on any recent hardware, random access to RAM is slower than sequential from disk. RAM access, random or not, is measured in GB/sec... I don't think anybody's arguing that. http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2795&p=5 These guys mention about 50 ns memory
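The back-of-the-envelope version of that: at roughly 50 ns per random access, and assuming one 64-byte cache line per fetch (the line size is my assumption, not from the thread), random RAM reads still come out above a gigabyte per second:

    SELECT round(64 / 50e-9 / 1e9, 2) AS random_ram_gb_per_sec;  -- about 1.28 GB/s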