On Thursday, March 10, 2011 05:26:00 PM Julius Tuskenis wrote:
3) A STABLE function should be executed once for every different set of
parameters
That's not true. That's not what any of the volatility information (like STABLE,
IMMUTABLE, VOLATILE) does.
See
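None of the following is in the original mail; it is just a sketch of the point being made (table and function names invented): STABLE only promises the function returns the same result for the same arguments *within one statement*, it does not make the executor cache results per argument set.

```sql
-- Hypothetical STABLE function; on pre-9.2 servers SQL function
-- bodies reference parameters as $1 rather than by name.
CREATE FUNCTION lookup_price(item_id int) RETURNS numeric
    STABLE LANGUAGE sql AS
$$ SELECT price FROM prices WHERE id = $1 $$;

-- The planner may still call lookup_price() once per row here.
-- STABLE merely *allows* optimizations (e.g. use in an index
-- condition); it does not guarantee one call per distinct argument.
SELECT lookup_price(item_id) FROM order_lines;
```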
On Friday, January 28, 2011 06:30:19 PM yazan suleiman wrote:
I am evaluating postgres 9 to migrate away from Oracle. The following
query runs too slow, also please find the explain plan:
First:
explain analyze
SELECT DISTINCT
EVENT.ID
,ORIGIN.ID AS ORIGINID
On Thursday, January 27, 2011 12:24:10 PM Cédric Villemain wrote:
maintenance_work_mem = 512MB
128MB is usually enough
Uhm, I don't want to be picky, but that's not really my experience. Sorts for
index creation are highly dependent on a high m_w_m. Quite regularly I find the
existing 1GB limit
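As a hedged illustration of the setting under discussion (the value here is illustrative, not a recommendation from the thread):

```
# postgresql.conf fragment -- illustrative value only.
# Sorts during CREATE INDEX / REINDEX are bounded by this setting,
# so large index builds can benefit from raising it well above the
# 128MB that suffices for routine maintenance.
maintenance_work_mem = 1GB
```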
On Thursday, January 27, 2011 02:23:48 PM Cédric Villemain wrote:
2011/1/27 Andres Freund and...@anarazel.de:
On Thursday, January 27, 2011 12:24:10 PM Cédric Villemain wrote:
maintenance_work_mem = 512MB
128MB is usually enough
Uhm, I don't want to be picky, but that's not really my
On Thursday, January 27, 2011 07:13:17 PM Scott Marlowe wrote:
On Thu, Jan 27, 2011 at 10:54 AM, Stephen Frost sfr...@snowman.net wrote:
* Michael Kohl (michael.k...@tupalo.com) wrote:
HDD: 2x 120 GB OCZ Vertex 2 SSD; RAID 1
I'm amazed no one else has mentioned this yet, but you should
On Wednesday, January 19, 2011 07:06:58 PM Chris Browne wrote:
kevin.gritt...@wicourts.gov (Kevin Grittner) writes:
Filip Rembiałkowski plk.zu...@gmail.com wrote:
2011/1/19 Charles.Hou giveme...@gmail.com:
Does a select * from mybook SQL command also increase the XID?
Yes. Single SELECT is a
On Friday, January 07, 2011 02:45:02 PM Florian Weimer wrote:
* Andres Freund:
On Friday, January 07, 2011 01:45:25 PM Florian Weimer wrote:
On 9.0, this configuration
checkpoint_segments = 512 # in logfile segments, min 1, 16MB each
results in 1034 segments, so the effective
Hi,
On Thursday 23 December 2010 17:53:24 Desmond Coertzen wrote:
Is is possible to create an index on a field on a function that returns a
data type that contains subfields?
Is this possible? How would I write the statement?
I am not sure I understood you correctly. Maybe you mean something
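The reply is truncated here, but the question it addresses has a standard answer: an expression index on a subfield of the composite value the function returns. A hypothetical sketch (type, function and column names invented here; the indexed function must be IMMUTABLE):

```sql
-- Hypothetical composite type and function returning it.
CREATE TYPE person AS (name text, age int);

CREATE FUNCTION f(src text) RETURNS person
    IMMUTABLE LANGUAGE sql AS
$$ SELECT ROW($1, 0)::person $$;

CREATE TABLE t (col text);

-- The extra parentheses around (f(col)).name are required syntax
-- for indexing a subfield of a function's composite result.
CREATE INDEX t_f_name_idx ON t (((f(col)).name));
```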
On Tuesday 07 December 2010 18:34:25 Tom Polak wrote:
Then I did the same test via Postgresql and it took 8.85 seconds! I tried
it again as I thought I did something wrong. I did a few tweaks such as
increasing the shared buffers. Still the best I could get it to was 7.5
seconds. This is
On Thursday 25 November 2010 13:02:08 t...@fuzzy.cz wrote:
I don't think you'll get performance improvement from running two
PostgreSQL clusters (one for DB1, one for DB2). And when running two
databases within the same cluster, there's no measurable performance
difference AFAIK.
That one is
On Wednesday 17 November 2010 00:31:34 Tom Lane wrote:
Josh Berkus j...@agliodbs.com writes:
On 11/16/10 12:39 PM, Greg Smith wrote:
I want to next go through and replicate some of the actual database
level tests before giving a full opinion on whether this data proves
it's worth changing
On Wednesday 17 November 2010 01:51:28 Tom Lane wrote:
Andres Freund and...@anarazel.de writes:
On Wednesday 17 November 2010 00:31:34 Tom Lane wrote:
Well, we're not going to increase the default to gigabytes
Especially not as I don't think it will have any effect after
On Wednesday 17 November 2010 02:04:28 Tom Lane wrote:
Andres Freund and...@anarazel.de writes:
On Wednesday 17 November 2010 01:51:28 Tom Lane wrote:
Well, there's a forced fsync after writing the last page of an xlog
file, but I don't believe that proves that more than 16MB of xlog
On Thursday 11 November 2010 19:58:49 Tom Lane wrote:
I wrote:
I do think that something based around a settable-per-table caching
percentage might be a reasonable way to proceed.
BTW ... on reflection it seems that this would *not* solve the use-case
Kevin described at the start of this
Hi,
On Monday 08 November 2010 23:12:57 Greg Smith wrote:
This seems to be ignoring the fact that unless you either added a
non-volatile cache or specifically turned off all write caching on your
drives, the results of all power-fail testing done on earlier versions
of Linux were that it
On Monday 08 November 2010 00:35:29 Greg Smith wrote:
Marti Raudsepp wrote:
I will grant you that the details were wrong, but I stand by the
conclusion. I can state for a fact that PostgreSQL's default
wal_sync_method varies depending on the fcntl.h header.
Yes; it's supposed to, and
On Friday 05 November 2010 21:15:20 Josh Berkus wrote:
All,
Domas (of Facebook/Wikipedia, MySQL geek) pointed me to this report:
http://www.phoronix.com/scan.php?page=articleitem=linux_perf_regressionsn
I guess that's the O_DSYNC thingy. See the Defaulting wal_sync_method to
fdatasync on
On Friday 05 November 2010 22:10:36 Greg Smith wrote:
Andres Freund wrote:
On Sunday 31 October 2010 20:59:31 Greg Smith wrote:
Writes only are sync'd out when you do a commit, or the database does a
checkpoint.
Hm? WAL is written out to disk after the space provided
Hi,
On Monday 01 November 2010 13:49:14 Divakar Singh wrote:
When I tried inserting 1M rows into a table with a Primary Key, it took
almost 62 seconds.
After adding a composite index of 2 columns, the performance degrades to
125 seconds.
I am using COPY to insert all data in 1 transaction.
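A conventional workaround for this pattern (not stated in the quoted mail, just a common sketch): create the table bare, COPY the data, then add the primary key and secondary index afterwards, so each index is built once rather than maintained per row. Table and path names below are invented.

```sql
BEGIN;
CREATE TABLE measurements (id int, a int, b int);

-- Bulk load with no indexes to maintain (hypothetical file path).
COPY measurements FROM '/tmp/measurements.csv' WITH CSV;

-- Build the PK and the composite index in one pass each, afterwards.
ALTER TABLE measurements ADD PRIMARY KEY (id);
CREATE INDEX measurements_a_b_idx ON measurements (a, b);
COMMIT;
```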
On Monday 01 November 2010 15:08:10 Divakar Singh wrote:
here are my parameters:
Which pg version is that?
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
On Monday 01 November 2010 15:16:49 Divakar Singh wrote:
I am using 9.0.1
Either that's not true or you cargo-culted loads of your config from a
significantly older pg version.
Things like:
#bgwriter_delay = 200# 10-1 milliseconds between rounds
bgwriter_lru_percent = 0#
On Monday 01 November 2010 15:28:19 Divakar Singh wrote:
Do you mean these parameters have been removed starting 9.X?
As I see on
http://www.network-theory.co.uk/docs/postgresql/vol3/BackgroundWriter.html
,these parameters were added starting from 8.0 right?
No, I mean setting to 0 is a bit
On Sunday 31 October 2010 20:59:31 Greg Smith wrote:
Writes only are sync'd out when you do a commit, or the database does a
checkpoint.
Hm? WAL is written out to disk after the space provided by wal_buffers (def
8) * XLOG_BLCKSZ (def 8192) is used. The default is 64kB, which you reach
On Thursday 21 October 2010 21:42:06 Kevin Grittner wrote:
Bruce Momjian br...@momjian.us wrote:
I assume we send a full 8k to the controller, and a failure during
that write is not registered as a write.
On what do you base that assumption? I assume that we send a full
8K to the OS
On Tuesday 17 August 2010 10:29:10 Greg Smith wrote:
Andres Freund wrote:
An fsync() equals a barrier so it has the effect of stopping
reordering around it - especially on systems with larger multi-disk
arrays that's pretty expensive.
You can achieve surprising speedups, at least in my
On Mon, Aug 16, 2010 at 01:46:21PM -0400, Greg Smith wrote:
Scott Carey wrote:
This is because an fsync on ext3 flushes _all dirty pages in the file
system_ to disk, not just those for the file being fsync'd.
One partition for WAL, one for data. If using ext3 this is
essentially a
On Mon, Aug 16, 2010 at 04:13:22PM -0400, Greg Smith wrote:
Andres Freund wrote:
Or use -o sync. Or configure a ridiculously low dirty_memory amount
(which has a problem on large systems because 1% can still be too
much. Argh.)...
-o sync completely trashes performance, and trying to set
On Mon, Aug 16, 2010 at 04:54:19PM -0400, Greg Smith wrote:
Andres Freund wrote:
A new checkpointing logic + a new syncing logic
(prepare_fsync() earlier and then fsync() later) would be a nice
thing. Do you plan to work on that?
The background writer already caches fsync calls into a queue
Hi,
On Mon, Aug 02, 2010 at 12:12:51PM -0700, Maciek Sakrejda wrote:
I'm running PostgreSQL 8.3 and I have a query with a couple of NOT IN
subqueries:
With 8.3 you will have to use manual antijoins (i.e. LEFT JOIN
... WHERE ... IS NULL). If you use 8.4 NOT EXISTS() will do that
automatically in many
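The manual antijoin being described might look like this (table and column names are invented for illustration):

```sql
-- On 8.3, rewrite NOT IN as a LEFT JOIN ... IS NULL antijoin.
-- Caveat: this is only equivalent to NOT IN when the joined column
-- is NOT NULL, because NOT IN has special semantics around NULLs.
SELECT a.*
FROM   orders a
LEFT JOIN cancelled b ON b.order_id = a.id
WHERE  b.order_id IS NULL;

-- On 8.4+, NOT EXISTS is planned as an antijoin automatically:
-- SELECT * FROM orders a
-- WHERE NOT EXISTS (SELECT 1 FROM cancelled b WHERE b.order_id = a.id);
```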
On Mon, Aug 02, 2010 at 01:06:00PM -0700, Maciek Sakrejda wrote:
All fields involved are declared NOT NULL, but thanks for the heads up.
Afair the planner doesn't use that atm.
Andres
On Mon, Aug 02, 2010 at 01:35:13PM -0700, Maciek Sakrejda wrote:
All fields involved are declared NOT NULL, but thanks for the heads up.
Afair the planner doesn't use that atm.
I was referring to not having to care about the strange NULL semantics
(as per your original comment), since I have
On Wednesday 28 July 2010 12:27:44 Louis-David Mitterrand wrote:
The EXPLAIN ANALYSE output is attached with, first the fast version and
then the slow one.
I think you forgot to attach it.
Andres
On Wed, Jul 28, 2010 at 04:10:08PM -0400, Tom Lane wrote:
Robert Haas robertmh...@gmail.com writes:
However, if we don't support that, we can't do any sort of pooling-ish
thing without the ability to pass file descriptors between processes;
and Tom seems fairly convinced there's no portable
Hi,
On Tue, Jul 27, 2010 at 04:08:16PM -0700, Peter Hussey wrote:
Now for the questions:
1) If we tell the customer to set his work_mem value to 500MB or 1GB in
postgres.config, what problems might they see? the documentation and the
guidelines we received from Rupinder Singh in support
On Mon, Jul 26, 2010 at 03:23:20PM -0600, Greg Spiegelberg wrote:
On Mon, Jul 26, 2010 at 1:45 PM, Greg Smith g...@2ndquadrant.com wrote:
Yeb Havinga wrote:
I did some ext3,ext4,xfs,jfs and also ext2 tests on the just-in-memory
read/write test. (scale 300) No real winners or losers, though
On Fri, Jul 23, 2010 at 01:28:53PM -0400, Robert Haas wrote:
On Fri, Jul 23, 2010 at 11:58 AM, Hannu Krosing ha...@krosing.net wrote:
On Thu, 2010-07-22 at 20:57 -0400, Robert Haas wrote:
On Thu, Jul 22, 2010 at 3:15 PM, Hannu Krosing ha...@krosing.net wrote:
On Thu, 2010-07-22 at 14:36
On Sat, Jul 24, 2010 at 01:23:08AM -0400, Greg Smith wrote:
Joshua Tolley wrote:
Relatively minor, but it would be convenient to avoid having to query
$external_pooler to determine the client_addr of an incoming connection.
You suggest this as a minor concern, but I consider it to be one of
On Thu, Jul 22, 2010 at 02:33:43PM -0400, Robert Haas wrote:
On Mon, Jul 12, 2010 at 6:58 AM, Craig Ringer
cr...@postnewspapers.com.au wrote:
On 12/07/10 17:45, Matthew Wakeling wrote:
I'm surprised. Doesn't apache httpd do this? Does it have to do a whole
load of non-portable stuff? It
On Monday 28 June 2010 13:39:27 Yeb Havinga wrote:
It looks like seq_scans are disabled, since the index scan has only a
filter expression but not an index cond.
Or it's using it to get an ordered result...
Andres
On Friday 04 June 2010 14:17:35 Jon Schewe wrote:
Some interesting data about different filesystems I tried with
PostgreSQL and how it came out.
I have an application that is backed in postgres using Java JDBC to
access it. The tests were all done on an opensuse 11.2 64-bit machine,
on the
On Friday 04 June 2010 16:25:30 Tom Lane wrote:
Andres Freund and...@anarazel.de writes:
On Friday 04 June 2010 14:17:35 Jon Schewe wrote:
XFS (logbufs=8): ~4 hours to finish
ext4: ~1 hour 50 minutes to finish
ext3: 15 minutes to finish
ext3 on LVM: 15 minutes to finish
My guess
On Friday 04 June 2010 15:59:05 Tom Lane wrote:
Marc Cousin cousinm...@gmail.com writes:
I hope I'm not going to expose an already known problem, but I couldn't
find it mailing list archives (I only found
http://archives.postgresql.org/pgsql- hackers/2009-12/msg01543.php).
You sure this
On Friday 04 June 2010 20:26:27 Jon Schewe wrote:
ext3 barrier=1: ~15 minutes
ext4 nobarrier: ~15 minutes
Any message in the kernel log about barriers or similar?
Andres
On Tuesday 25 May 2010 11:00:24 Joachim Worringen wrote:
On 25.05.2010 10:49, Grzegorz Jaśkiewicz wrote:
temporary tables are handled pretty much like regular tables. The
magic happens on schema level, new schema is setup for connection, so
that it can access its own temporary tables.
Hi,
On Monday 05 April 2010 16:28:35 Oliver Kindernay wrote:
Hi, I have a table with just one column named url (it's not a real url,
just random string for testing purposes), type text. I have lots of
entries in it (it's dynamic, i add and remove them on the fly), 100
000 and more. I've created
On Wednesday 17 March 2010 22:18:47 Hannu Krosing wrote:
On Wed, 2010-03-17 at 16:49 -0400, Greg Smith wrote:
Alvaro Herrera wrote:
Andres Freund wrote:
I find it much easier to believe such issues exist on tables in
contrast to indexes. The likelihood to get sequential
On Wednesday 17 March 2010 19:44:56 Greg Smith wrote:
Rob Wultsch wrote:
On Wed, Mar 17, 2010 at 7:30 AM, Tom Lane t...@sss.pgh.pa.us wrote:
No, it's not optimistic in the least, at least not since we implemented
synchronized seqscans (in 8.3 or thereabouts).
Where can I find details
On Wednesday 10 February 2010 15:56:40 Tom Lane wrote:
Franck Routier franck.rout...@axege.com writes:
I am wondering if deferring foreign key constraints (instead of
disabling them) would increase performance, compared to non-deferred
constraints
No, it wouldn't make any noticeable
On Monday 08 February 2010 05:53:23 Robert Haas wrote:
On Sun, Feb 7, 2010 at 10:09 PM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
Andres Freund wrote:
I personally think the fsync on the directory should be added to the
stable branches - other opinions?
If wanted I can
On Monday 08 February 2010 19:34:01 Greg Stark wrote:
On Mon, Feb 8, 2010 at 4:53 AM, Robert Haas robertmh...@gmail.com wrote:
On Sun, Feb 7, 2010 at 10:09 PM, Alvaro Herrera
Yeah, it seems there are two patches here -- one is the addition of
fsync_fname() and the other is the
On Sunday 07 February 2010 19:23:10 Robert Haas wrote:
On Sun, Feb 7, 2010 at 11:24 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Greg Smith g...@2ndquadrant.com writes:
This is turning into yet another one of those situations where something
simple and useful is being killed by trying to
On Sunday 07 February 2010 19:27:02 Andres Freund wrote:
On Sunday 07 February 2010 19:23:10 Robert Haas wrote:
On Sun, Feb 7, 2010 at 11:24 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Greg Smith g...@2ndquadrant.com writes:
This is turning into yet another one of those situations where
On Saturday 06 February 2010 06:03:30 Greg Smith wrote:
Andres Freund wrote:
On 02/03/10 14:42, Robert Haas wrote:
Well, maybe we should start with a discussion of what kernel calls
you're aware of on different platforms and then we could try to put an
API around it.
In linux
On 02/03/10 12:53, Greg Stark wrote:
On Tue, Feb 2, 2010 at 7:45 PM, Robert Haas robertmh...@gmail.com wrote:
I think you're probably right, but it's not clear what the new name
should be until we have a comment explaining what the function is
responsible for.
So I wrote some comments but
On 02/03/10 14:42, Robert Haas wrote:
On Wed, Feb 3, 2010 at 6:53 AM, Greg Stark gsst...@mit.edu wrote:
On Tue, Feb 2, 2010 at 7:45 PM, Robert Haas robertmh...@gmail.com wrote:
I think you're probably right, but it's not clear what the new name
should be until we have a comment explaining what
On Tuesday 02 February 2010 18:36:12 Robert Haas wrote:
On Fri, Jan 29, 2010 at 1:56 PM, Greg Stark gsst...@mit.edu wrote:
On Tue, Jan 19, 2010 at 3:25 PM, Tom Lane t...@sss.pgh.pa.us wrote:
That function *seriously* needs documentation, in particular the fact
that it's a no-op on machines
On Tuesday 02 February 2010 19:14:40 Robert Haas wrote:
On Tue, Feb 2, 2010 at 12:50 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Andres Freund and...@anarazel.de writes:
On Tuesday 02 February 2010 18:36:12 Robert Haas wrote:
I took a look at this patch today and I agree with Tom
On Tuesday 02 February 2010 20:06:32 Robert Haas wrote:
On Tue, Feb 2, 2010 at 1:34 PM, Andres Freund and...@anarazel.de wrote:
For now it could - but it very well might be converted to sync_file_range
or similar, which would have different side effects.
As the potential code duplication
On Wednesday 27 January 2010 15:49:06 Matthew Wakeling wrote:
On Wed, 27 Jan 2010, Thom Brown wrote:
Had a quick look at a benchmark someone put together of MySQL vs
PostgreSQL, and while PostgreSQL is generally faster, I noticed the bulk
delete was very slow:
On Tuesday 26 January 2010 01:39:48 nair rajiv wrote:
On Tue, Jan 26, 2010 at 1:01 AM, Craig James
craig_ja...@emolecules.com wrote:
I am working on a project that will take out structured content
from wikipedia
and put it in our database. Before putting the data into the database I
On Tuesday 19 January 2010 15:52:25 Greg Stark wrote:
On Mon, Jan 18, 2010 at 4:35 PM, Greg Stark gsst...@mit.edu wrote:
Looking at this patch for the commitfest I have a few questions.
So I've touched this patch up a bit:
1) moved the posix_fadvise call to a new fd.c function
Hi Greg,
On Monday 18 January 2010 17:35:59 Greg Stark wrote:
2) Why does the second pass to do the fsyncs read through fromdir to
find all the filenames. I find that odd and counterintuitive. It would
be much more natural to just loop through the files in the new
directory. But I suppose it
On Tuesday 19 January 2010 15:57:14 Greg Stark wrote:
On Tue, Jan 19, 2010 at 2:52 PM, Greg Stark gsst...@mit.edu wrote:
Barring any objections shall I commit it like this?
Actually before we get there could someone who demonstrated the
speedup verify that this patch still gets that same
On Tuesday 29 December 2009 11:48:10 Greg Stark wrote:
On Tue, Dec 29, 2009 at 2:05 AM, Andres Freund and...@anarazel.de wrote:
Reads Completed: 2, 8KiB  Writes Completed: 2362, 29672KiB
New:
Reads Completed: 0, 0KiB  Writes Completed: 550
On Saturday 12 December 2009 21:38:41 Andres Freund wrote:
On Saturday 12 December 2009 21:36:27 Michael Clemmons wrote:
If ppl think it's worth it I'll create a ticket
Thanks, no need. I will post a patch tomorrow or so.
Well. It was a long day...
Anyway.
In this patch I delay the fsync done
On Monday 28 December 2009 23:54:51 Andres Freund wrote:
On Saturday 12 December 2009 21:38:41 Andres Freund wrote:
On Saturday 12 December 2009 21:36:27 Michael Clemmons wrote:
If ppl think it's worth it I'll create a ticket
Thanks, no need. I will post a patch tomorrow or so.
Well
On Tuesday 29 December 2009 00:06:28 Tom Lane wrote:
Andres Freund and...@anarazel.de writes:
This speeds up CREATE DATABASE from ~9 seconds to something around 0.8s
on my laptop. Still slower than with fsync off (~0.25) but quite a
worthy improvement.
I can't help wondering whether
On Tuesday 29 December 2009 01:27:29 Greg Stark wrote:
On Mon, Dec 28, 2009 at 10:54 PM, Andres Freund and...@anarazel.de wrote:
fsync everything in that pass.
Including the directory - which was not done before and actually might be
necessary in some cases.
Er. Yes. At least on ext4
On Tuesday 29 December 2009 01:46:21 Greg Smith wrote:
Andres Freund wrote:
As I said the real benefit only occurred after adding posix_fadvise(..,
FADV_DONTNEED) which is somewhat plausible, because e.g. the directory
entries don't need to get scheduled for every file and because
On Tuesday 29 December 2009 03:53:12 Michael Clemmons wrote:
Andres,
Great job. Looking through the emails and thinking about why this works I
think this patch should significantly speed up 8.4 on most any file
system (obviously some more than others) unless the system has significantly
On Tuesday 29 December 2009 04:04:06 Michael Clemmons wrote:
Maybe not crash out but in this situation.
N=0
while(N >= 0):
CREATE DATABASE new_db_N;
Since the fsync is the part which takes the memory and time but is
happening in the background, won't the fsyncs pile up in the background?
On Saturday 12 December 2009 21:36:27 Michael Clemmons wrote:
If ppl think it's worth it I'll create a ticket
Thanks, no need. I will post a patch tomorrow or so.
Andres
Hi,
On Saturday 12 December 2009 00:59:13 Scott Marlowe wrote:
On Fri, Dec 11, 2009 at 3:52 PM, Michael Clemmons
Createdb takes
12 secs on my system (9.10, pg 8.4 and ext4) which is impossibly slow for
running 200unittests.
Fsync got it to .2secs or so which is blazing but
also the
On Thursday 10 December 2009 21:41:08 Michael Clemmons wrote:
Hey,
I've got a computer which runs both 8.3 and 8.4. To create a db it takes 4s
for 8.3 and 9s for 8.4. I have many unit tests which create databases all
of the time and now run much slower than 8.3 but it seems to be much longer
Hi,
On Thursday 10 December 2009 23:01:08 Michael Clemmons wrote:
I'm not sure what that means; ppl in my office with slower hd speeds using
8.4 can create a db in 2s vs my 8-12s.
- Possibly their config is different - they could have disabled the fsync
parameter which turns the database to be
Hi David,
On Tuesday 08 December 2009 10:59:51 Schmitz, David wrote:
With our data it is a performance difference from 1h16min
(8.3.8) to 2h43min (8.4.1)
On Postgresql 8.4.1
Total runtime: 101.446 ms
and on Postgresql 8.3.8:
Total runtime: 29.366 ms
Hm. There obviously is more going on
On Wednesday 09 December 2009 03:05:40 Greg Smith wrote:
On Linux having the WAL on a separate disk can improve things much more
than you might expect, simply because of how brain-dead the filesystem
fsync implementation is. Reducing the seeks for WAL traffic can help a
lot too.
Not using
Hi Andreas,
Could you please properly quote the email? The way you did it is quite
unreadable because you always have to guess who wrote what.
On Sunday 06 December 2009 17:06:39 Andreas Thiel wrote:
I'm going to work on the table size of the largest table (result_orig)
itself by
On Sunday 06 December 2009 19:20:17 Andreas Thiel wrote:
Hi Andres,
Thanks a lot for your answers. The bottom line, I think, is that I
have to rethink my DB structure.
Can't answer that one without knowing much more ;)
Could you please properly quote the email? The way you did it is
Hi,
On Saturday 05 December 2009 00:03:12 Andreas Thiel wrote:
I'm running PostgreSQL 8.3.6 on a 32-Bit Centos 4 machine (which I
probably should update to 64 Bit soon)
How much memory?
I'm going to work on the table size of the largest table (result_orig)
itself by eliminating columns,
On Thursday 26 November 2009 17:20:35 Richard Neill wrote:
Dear All,
I'm wondering whether Vacuum/analyse (notably by the autovacuum daemon)
is responsible for some deadlocks/dropouts I'm seeing.
One particular table gets hit about 5 times a second (for single row
updates and inserts) +
On Thursday 16 July 2009 03:11:29 ning wrote:
Hi Andres,
The log for the test you suggested is as follows in PostgreSQL8.2.4,
but I cannot find a clue to prove or disprove that PostgreSQL is doing
plan caching.
Well. How long is the PREPARE and the EXECUTEs taking?
Andres
On Thursday 16 July 2009 11:30:00 ning wrote:
Hi Andres,
By executing
#explain analyze execute test_query;
the first execution cost 0.389 seconds
the second cost 0.285 seconds
Seconds or milliseconds?
If seconds that would be by far slower than the plain SELECT, right?
Andres
On Thursday 16 July 2009 11:46:18 ning wrote:
I'm sorry, they are in milliseconds, not seconds.
The time used is quite the same as the result of the explain analyze select
I pasted above,
which was Total runtime: 0.479 ms.
Yea. Unfortunately that time does not include planning time. If you
On Wednesday 15 July 2009 10:27:50 Craig Ringer wrote:
On Wed, 2009-07-15 at 12:10 +0900, ning wrote:
First execution: PostgreSQL 0.006277 seconds / DB2 0.009028 seconds
Second execution: PostgreSQL 0.005932 seconds / DB2 0.000332 seconds
Actually, on second thoughts that looks a lot like
Hi Marc,
I don't have really extensive comments, but I found two small things...
On Monday 13 July 2009 15:40:18 Marc Cousin wrote:
I'm trying to solve big performance issues with PostgreSQL + bacula while
inserting very big sets of records.
I'm sorry, this email will be a long one, as I've
On 06/29/2009 03:33 PM, Sean Ma wrote:
I have a table about 50 million rows. There are a few writers to pump
data into the table at the rate of 4 rows/hour. Most of the time, the
SELECT is less than 100 ms. However sometimes it is very slow, from 30
seconds to 500 seconds. The database is vacuum
On 06/23/2009 01:12 PM, Mathieu Nebra wrote:
I'm running a quite large website which has its own forums. They are
currently heavily used and I'm getting performance issues. Most of them
are due to repeated UPDATE queries on a flags table.
This flags table has more or less the following fields:
On 06/23/2009 02:37 PM, Alexander Staubo wrote:
(That said, I believe PostgreSQL diffs tuple updates, so in practice
PostgreSQL might not be writing anything if you run an update with
the same value. I will let someone more intimate with the internal
details of updates to comment on this.)
No,
On 06/23/2009 04:54 PM, Mathieu Nebra wrote:
On 06/23/2009 01:12 PM, Mathieu Nebra wrote:
I'm running a quite large website which has its own forums.
They are currently heavily used and I'm getting performance
issues. Most of
them
are due to repeated UPDATE queries on a flags table.
This
Hi,
On 05/12/2009 12:46 AM, Dimitri wrote:
So, why I don't use prepare here: let's say I'm testing the worst
stress case :-) Imagine you have thousands of such kind of queries -
you cannot prepare all of them! :-) or you'll maybe prepare it once,
but as I showed previously in this thread