To: Jim C. Nasby; PFC
Cc: Greg Stark; Tom Lane; pgsql-performance@postgresql.org;
pgsql-hackers@postgresql.org
Subject: Re: [PERFORM] [HACKERS] Big IN() clauses etc : feature proposal
"Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> I'd hope that wasn't what's happening... is the backend smart enough to
> know not to fsync anything involved with the temp table?
The catalog entries required for it have to be fsync'd, unless you enjoy
putting your entire database at risk (a bad block …
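A quick sketch of what those catalog entries look like (the table name
tmp_demo is made up):

    BEGIN;
    CREATE TEMPORARY TABLE tmp_demo (id integer) ON COMMIT DROP;
    -- the temp table is recorded in the regular system catalogs like any
    -- other table; these inserts are what must be flushed at commit
    SELECT relname FROM pg_class WHERE relname = 'tmp_demo';
    SELECT count(*) FROM pg_attribute
    WHERE attrelid = 'tmp_demo'::regclass;  -- one row per column,
                                            -- plus the system columns
    COMMIT;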
> 0.450 ms INSERT INTO tmp SELECT * FROM bookmarks ORDER BY annonce_id DESC
> LIMIT 20
> 0.443 ms ANALYZE tmp
> 0.365 ms SELECT * FROM tmp
> 0.310 ms DROP TABLE tmp
> 32.918 ms COMMIT

The 32 ms for COMMIT can hardly be catalog related. It seems the file is
fsynced before it is dropped.
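Timings of that shape come straight from psql; a sketch of the measurement
setup (table definition borrowed from the quote above, fsync assumed on):

    \timing
    BEGIN;
    CREATE TEMPORARY TABLE tmp ( a INTEGER NOT NULL, b INTEGER NOT NULL,
        c TIMESTAMP NOT NULL, d INTEGER NOT NULL ) ON COMMIT DROP;
    INSERT INTO tmp SELECT * FROM bookmarks ORDER BY annonce_id DESC LIMIT 20;
    SELECT * FROM tmp;
    COMMIT;   -- with fsync on, the flush presumably lands here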
On Thu, May 11, 2006 at 11:35:34AM -0400, Greg Stark wrote:
> I can say that I've seen plenty of instances where the ability to create
> temporary tables very quickly with no overhead over the original query would
> be useful.
I wonder if this requires what the standard refers to as a global
temporary table.
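For context, the standard's flavor looks like this (a sketch; PostgreSQL
parses GLOBAL/LOCAL here but has never implemented the standard semantics,
so this behaves like an ordinary temp table):

    -- SQL standard: the definition is persistent and shared, the contents
    -- are private to each session, so no per-session catalog churn
    CREATE GLOBAL TEMPORARY TABLE session_scratch (
        id      integer NOT NULL,
        payload text
    ) ON COMMIT DELETE ROWS;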
"Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> Perhaps it would be worth creating a class of temporary tables that used
> a tuplestore, although that would greatly limit what could be done with
> that temp table.
I can say that I've seen plenty of instances where the ability to create
temporary tables very quickly with no overhead over the original query
would be useful.
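Perhaps the closest existing thing to such a tuplestore-backed table is a
holdable cursor, whose result set is spooled into a tuplestore at COMMIT
with no catalog entries at all (a sketch; the cursor name is made up):

    BEGIN;
    DECLARE recent_ids CURSOR WITH HOLD FOR
        SELECT annonce_id FROM bookmarks ORDER BY annonce_id DESC LIMIT 20;
    COMMIT;                     -- result is materialized into a tuplestore
    FETCH ALL FROM recent_ids;  -- still readable after commit
    CLOSE recent_ids;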
On Thu, May 11, 2006 at 09:55:15AM +0200, Zeugswetter Andreas DCP SD wrote:
> > 0.101 ms BEGIN
> > 1.451 ms CREATE TEMPORARY TABLE tmp ( a INTEGER NOT NULL, b INTEGER NOT
> > NULL, c TIMESTAMP NOT NULL, d INTEGER NOT NULL ) ON COMMIT DROP
>
> 1.4 ms is not great for create table, is that …
> Something else worth considering is not using the normal
> catalog methods
> for storing information about temp tables, but hacking that together
> would probably be a rather large task.
But the timings suggest that it cannot be the catalogs in the worst case
he showed.
> Use the source, Luke...

LOL, yeah, I should have, sorry.
On Tue, May 09, 2006 at 06:29:31PM +0200, PFC wrote:
> You mean the cursors' storage is in fact the same internal machinery
> as a temporary table ?

Use the source, Luke...

See tuplestore_begin_heap in backend/utils/sort/tuplestore.c and
heap_create_with_catalog in backend/catalog/heap.c.
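In SQL terms, the two code paths side by side (a sketch, reusing PFC's
query):

    BEGIN;
    -- tuplestore path: no catalog rows are created for a cursor
    DECLARE c CURSOR FOR
        SELECT * FROM bookmarks ORDER BY annonce_id DESC LIMIT 20;
    -- heap_create_with_catalog path: rows go into pg_class, pg_attribute, ...
    CREATE TEMPORARY TABLE tmp AS
        SELECT * FROM bookmarks ORDER BY annonce_id DESC LIMIT 20;
    COMMIT;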
> Creating cursors for a simple plan like a single sequential scan is fast
> because it's using the original data from the table.

I used the following query :

SELECT * FROM bookmarks ORDER BY annonce_id DESC LIMIT 20

It's a backward index scan + limit... not a seq scan. And it's …
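The plan shape is easy to confirm (a sketch; the index name is hypothetical
and EXPLAIN output varies across versions):

    EXPLAIN SELECT * FROM bookmarks ORDER BY annonce_id DESC LIMIT 20;
    --  Limit
    --    ->  Index Scan Backward using bookmarks_annonce_id_idx on bookmarks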
PFC <[EMAIL PROTECTED]> writes:
>
> I really like this. It's clean, efficient, and easy to use.
>
> This would be a lot faster than using temp tables.
> Creating cursors is very fast so we can create two, and avoid doing
> twice the same work (ie. hashing the ids from the res …
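Today's cursors already give part of that "don't do the same work twice"
effect (a sketch, not PFC's proposal; whether the second pass re-reads a
tuplestore or rescans the plan depends on whether the plan supports
backward scans):

    BEGIN;
    DECLARE ids SCROLL CURSOR FOR
        SELECT annonce_id FROM bookmarks ORDER BY annonce_id DESC LIMIT 20;
    FETCH ALL FROM ids;       -- first consumer
    MOVE ABSOLUTE 0 IN ids;   -- rewind to before the first row
    FETCH ALL FROM ids;       -- second consumer, no second query sent
    COMMIT;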
> The moral of the story is that you're probably better off running a
> bunch of small selects than in trying to optimize things with one
> gargantuan select.

Ever experiment with loading the parameters into a temp table and
joining to that?

Also, it might be worth re-testing that conclusion with …
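What that experiment might look like (a sketch; table and column names are
borrowed from PFC's example, and the inserted values are placeholders):

    BEGIN;
    CREATE TEMPORARY TABLE wanted (id integer NOT NULL) ON COMMIT DROP;
    INSERT INTO wanted VALUES (1), (2), (3);  -- or COPY the full list in
    ANALYZE wanted;                           -- give the planner row counts
    SELECT b.*
    FROM bookmarks b
    JOIN wanted w ON w.id = b.annonce_id;     -- a join instead of a huge IN()
    COMMIT;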