Now, with ext4 moving to full barrier/fsync support, we could get to the
point where WAL on the main data filesystem can mimic the state where WAL is
separate, namely that WAL writes can jump the queue and be written
without waiting for the data pages to be flushed down to disk, but also
that you'll
I've changed the setup a bit:
(1) Replaced the 7,200 rpm disk with a 10,000 rpm one, still SATA though.
(2) Inserting rows only 10 times (instead of 100 times),
but 80 MB each, so the same total of 800 MB.
(3) Changed the WAL path to the system disk (via the
great 'junction' trick from
http://www.hagander.net/talks/Advanced%20PostgreSQL%20on%20Windows.pdf).
Great doc! I'm keeping that ;)
2) Which Windows OS would you recommend? (currently 2008 x64 Server)
Would not recommend Windows OS.
BTW, I'd be interested to know the NTFS fragmentation stats of your
database file.
No one has mentioned SSDs yet?...
So, pgBouncer is pretty good. It doesn't appear to be as good as
limiting TCON and using pconnect, but since we can't limit TCON in a
production environment, we may not have a choice.
Actually, you can: use lighttpd and php/fastcgi.
Lighttpd handles the network stuff, and a fixed pool of FastCGI PHP
processes puts a hard cap on the number of concurrent database connections.
However the hard disk (SATA) could write 43 MB/s in the worst
case! Why is write performance limited to 16 MB/s?

Some more hints on what I do:
I use PQexecParams() and the INSERT ... $001 notation so that I do NOT
have to build an additional escaped copy of the data, but instead
pass a pointer to the 8 MB data.
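For illustration, a minimal sketch of the kind of parameterized statement
meant here (the table and column names are hypothetical, not from the
original post):

CREATE TABLE images (name text, data bytea);
-- statement text handed to PQexecParams(); $1/$2 are bound out of line,
-- so the 8 MB bytea value is never escaped into the SQL string
INSERT INTO images (name, data) VALUES ($1, $2);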
As for the high CPU usage: it might very well be high IO usage.

Try this:
Copy (using Explorer, the shell, whatever) a huge file.
This will create load similar to ALTER TABLE.
Measure the throughput: how much is it?
If your server blows up just like it did on ALTER TABLE,
then the bottleneck is raw disk throughput, not Postgres.
On Thu, 14 Jan 2010 22:28:07 +0100, fka...@googlemail.com wrote:
Pierre Frédéric Caillaud:
4-core CPU at 3 GHz, WinXP, 1 TB SATA disk.
Big CPU and slow disk...
You should add another disk just for the WAL -- disks are pretty cheap
these days.
Postgres is being conservative. The plan it uses (bitmap index scan)
will perform much better than an index scan when the data is not in the
cache, by maybe an order of magnitude, depending on your hardware setup.
The index scan may perform better at the moment, but the bitmap index
scan is the safer choice once the data no longer fits in cache.
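If you want to see what the plain index scan would do on your box, one way
(a session-local test only, not a production setting) is roughly:

SET enable_bitmapscan = off;
EXPLAIN ANALYZE SELECT ...;   -- the query in question
RESET enable_bitmapscan;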
Each of the 256 requests was being processed by a php process. So, it
could
certainly be faster. But, the fact that we're seeing the db performance
degrade would seem to indicate that our application is fast enough to
punish
the db. Isn't that true?
Not necessarily. Your DB still has
If you transfer (delete from staging, insert into archive) in one
transaction, then it will always be visible in exactly one of them,
and exactly once in a view over both staging and archive(s).
Does the latest version implement this :
INSERT INTO archive (...) DELETE FROM staging
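For what it's worth, a sketch of both forms, using the staging/archive
names from the thread. The two-statement version works on any version; the
single-statement version needs the writable CTEs added in PostgreSQL 9.1:

BEGIN;
INSERT INTO archive SELECT * FROM staging;
DELETE FROM staging;   -- assumes nothing inserts into staging concurrently;
                       -- otherwise restrict both statements with a WHERE clause
COMMIT;

-- 9.1 and later: move and archive in one statement
WITH moved AS (DELETE FROM staging RETURNING *)
INSERT INTO archive SELECT * FROM moved;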
Crank it up more and delay the checkpoints as much as possible during
these updates. 64 segments is already 1024 MB.
We have 425M rows, total table size is 78 GB, so we can imagine a worst
case UPDATE write is less than 200 bytes * number of rows specified in
the update (is that logic right?)
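As a rough sanity check on those numbers: 64 checkpoint_segments is
64 x 16 MB = 1024 MB of WAL between checkpoints, and at ~200 bytes of WAL
per updated row an UPDATE touching 5 million rows writes roughly 1 GB of
WAL, i.e. about one full set of 64 segments; updating all 425M rows would
be on the order of 85 GB of WAL in the worst case.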
That may help with the query speed (not a problem now), but we'll then
have to add a UNION statement for the daily staging table for the other 5% of
requests, right? And there would be a moment when a daily message is in the
archive table AND in the daily table (while transferring from the daily table to
the archive).
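A minimal sketch of that UNION view, with hypothetical table names:

CREATE VIEW all_messages AS
SELECT * FROM archive
UNION ALL
SELECT * FROM staging_daily;

If the move from staging_daily to archive happens in a single transaction,
as described above, readers of this view see each message exactly once.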
I have a fairly big database, around 15 million rows, and the dump file
is around 12 GB.
While importing this dump into the database I have noticed that initially
the query response time is very slow, but it does improve with time.
Any suggestions to improve performance after the dump import?
Is there any practical limit to the number of shared buffers PG 8.3.7
can handle before more becomes counter-productive?
It is more efficient to have the page in shared buffers, rather than doing
a context switch to the OS, copying the entire page from the OS's cache
into shared buffers, and then using it from there.
I just compiled it with gcc and it produces a gmon.out file for every
process; by the way, I am running the script below in order to produce
readable .out files:
gprof .../pgsql/bin/postgres gmon.out > createtable2.out
Is postgres the right executable?
regards
reydan
Off topic, but
We have a table that's 2 billion rows big and growing fast. We've set up
monthly partitions for it. Upon running the first of many
INSERT INTO partition ... SELECT * FROM bigTable statements (330 million
rows per month), the entire box eventually goes out to lunch.
Any thoughts/suggestions?
Thanks
Indexes are on the partitions, my bad.
If you need to insert lots of data, it is faster to create the indexes
afterwards (and then you can also create them in parallel, since you have
lots of RAM and cores).
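A sketch of that, with hypothetical table, column, and index names for one
monthly partition:

-- load the month's rows first, with no indexes on the partition yet
INSERT INTO bigtable_2010_01
SELECT * FROM bigtable
WHERE time >= '2010-01-01' AND time < '2010-02-01';

-- then build the indexes; several CREATE INDEX statements can run in
-- parallel sessions if you have the RAM and cores for it
CREATE INDEX bigtable_2010_01_time_idx ON bigtable_2010_01 (time);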
The explain plan looks like this:
explain SELECT * from bigTable
where
time =
top shows the CPU usage of the pg process ranges from zero to never more
than ten percent of a CPU, and that one CPU is always ninety-some
percent in iowait. So what is postgres doing (with fsync off) that
causes the CPU to spend so much time in iowait?

Updating indexes?
The bitmask allows the setting of multiple permissions but the table
definition doesn't have to change (well, so long as the bits fit into a
word!) Finally, this is a message forum - the actual code itself is
template-driven and the bitmask permission structure is ALL OVER the
templates;
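For illustration, a bitmask check of that kind looks roughly like this
(the table, column, and bit values are hypothetical):

-- bit 2 (value 4) = "may post"; adding a new permission is just a new bit,
-- the table definition stays the same
SELECT user_id
FROM forum_users
WHERE (permissions & 4) <> 0;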
On Tue, 18 Aug 2009 06:25:57 +0200, Russell Smith mr-r...@pws.com.au
wrote:
Kai Behncke wrote:
But I would like to get it in a php-script, like
$timerequest_result=pg_result($timerequest,0);
(well, that does not work).
I wonder: Is there another way to get the time a request needs?
How
Does anybody know any way to solve this? I did monitor the system
running at full load (~20 messages per second) - postmaster's processes
didn't eat more than 10-20% of CPU and memory. Neither did any of my
application's processes.
now(), like current_timestamp, is the time of the transaction start.
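A quick way to see this (clock_timestamp() is the function that returns
the actual wall-clock time):

BEGIN;
SELECT now();              -- frozen at the start of the transaction
SELECT pg_sleep(2);
SELECT now();              -- same value as above
SELECT clock_timestamp();  -- about 2 seconds later
COMMIT;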
I don't see how, on any recent hardware, random access to RAM is slower
than sequential access from disk. RAM access, random or not, is measured in
GB/sec...

I don't think anybody's arguing that.

http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2795&p=5
These guys mention about 50 ns memory latency.