On Thu, May 11, 2006 at 08:03:19PM +0200, Martijn van Oosterhout wrote:
> On Thu, May 11, 2006 at 12:18:06PM -0500, Jim C. Nasby wrote:
> > > Yes, because there can be more than one active snapshot within a single
> > > transaction (think about volatile functions in particular).
> >
> > Any documentation on how snapshots work? They're a big mystery to me. :(
On Thu, May 11, 2006 at 12:18:06PM -0500, Jim C. Nasby wrote:
> > Yes, because there can be more than one active snapshot within a single
> > transaction (think about volatile functions in particular).
>
> Any documentation on how snapshots work? They're a big mystery to me.
> :(
A snapshot is a
On Thu, 2006-05-11 at 12:18, Jim C. Nasby wrote:
> On Wed, May 10, 2006 at 08:31:54PM -0400, Tom Lane wrote:
> > "Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> > > On Tue, May 09, 2006 at 03:13:01PM -0400, Tom Lane wrote:
> > >> PFC <[EMAIL PROTECTED]> writes:
> > >>> Fun thing is, the rowcount from a temp table (which is the problem
> > >>> here) should be available without ANALYZE...
On Wed, May 10, 2006 at 08:31:54PM -0400, Tom Lane wrote:
> "Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> > On Tue, May 09, 2006 at 03:13:01PM -0400, Tom Lane wrote:
> >> PFC <[EMAIL PROTECTED]> writes:
> >>> Fun thing is, the rowcount from a temp table (which is the problem here)
> >>> should be available without ANALYZE...
"Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> On Tue, May 09, 2006 at 03:13:01PM -0400, Tom Lane wrote:
>> PFC <[EMAIL PROTECTED]> writes:
>>> Fun thing is, the rowcount from a temp table (which is the problem here)
>>> should be available without ANALYZE; as the temp table is not concurrent,
>>> it would be simple to inc/decrement a counter on INSERT/DELETE...
On Tue, May 09, 2006 at 01:29:56PM +0200, PFC wrote:
> 0.101 ms BEGIN
> 1.451 ms CREATE TEMPORARY TABLE tmp ( a INTEGER NOT NULL, b INTEGER NOT
> NULL, c TIMESTAMP NOT NULL, d INTEGER NOT NULL ) ON COMMIT DROP
> 0.450 ms INSERT INTO tmp SELECT * FROM bookmarks ORDER BY annonce_id DESC
> LIMIT 20
On Tue, May 09, 2006 at 03:13:01PM -0400, Tom Lane wrote:
> PFC <[EMAIL PROTECTED]> writes:
> > Fun thing is, the rowcount from a temp table (which is the problem
> > here)
> > should be available without ANALYZE; as the temp table is not concurrent,
> > it would be simple to inc/decrement a counter on INSERT/DELETE...
On Tue, May 09, 2006 at 11:33:42AM +0200, PFC wrote:
> - Repeating the query might yield different results if records were
> added or deleted in the meantime.
BTW, SET TRANSACTION ISOLATION LEVEL SERIALIZABLE or BEGIN ISOLATION
LEVEL SERIALIZABLE would cure that.
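For illustration, a minimal sketch of the difference (reusing the
thread's bookmarks table; the data is made up): under the default READ
COMMITTED level each statement takes a fresh snapshot, while under
SERIALIZABLE the whole transaction reads from a single snapshot:

    -- Sketch: repeated reads inside one SERIALIZABLE transaction.
    BEGIN ISOLATION LEVEL SERIALIZABLE;
    SELECT count(*) FROM bookmarks;  -- first read fixes the snapshot
    -- ... another session INSERTs into bookmarks and COMMITs here ...
    SELECT count(*) FROM bookmarks;  -- same result: snapshot is unchanged
    COMMIT;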
--
Jim C. Nasby,
Hi, Nis,
Nis Jorgensen wrote:
> It will probably be quite common for the number to depend on the number
> of rows in other tables. Even if this is fairly constant within one db
> (some assumption), it is likely to be different in others using the same
> function definition. Perhaps a better solution...
Martijn van Oosterhout wrote:
> On Wed, May 10, 2006 at 04:38:31PM +0200, PFC wrote:
>> You need to do some processing to know how many rows the function
>> would return.
>> Often, this processing will be repeated in the function itself.
>> Sometimes it's very simple (ie. the function will RETURN NEXT each
>> element in an array, you know the array length...
Hi, PFC,
PFC wrote:
> You need to do some processing to know how many rows the function
> would return.
> Often, this processing will be repeated in the function itself.
> Sometimes it's very simple (ie. the function will RETURN NEXT each
> element in an array, you know the array length...
On Wed, May 10, 2006 at 04:38:31PM +0200, PFC wrote:
> You need to do some processing to know how many rows the function
> would return.
> Often, this processing will be repeated in the function itself.
> Sometimes it's very simple (ie. the function will RETURN NEXT each
> element in an array, you know the array length...
Hi, PFC,
PFC wrote:
> The problem is that you need a set-returning function to retrieve
> the values. SRFs don't have rowcount estimates, so the plans suck.
What about adding some way of rowcount estimation to SRFs, in the way of:
CREATE FUNCTION foo (para, meters) RETURNS SETOF bar AS
$$ ... function code ...
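For what it's worth, this is roughly what later PostgreSQL releases
(8.3+) provide as a constant ROWS clause on CREATE FUNCTION; a sketch
(function name and body are illustrative only):

    -- Sketch: a set-returning function carrying a planner row estimate.
    CREATE FUNCTION foo(integer) RETURNS SETOF integer AS
    $$
        SELECT * FROM generate_series(1, $1);
    $$ LANGUAGE sql
    ROWS 1000;  -- planner assumes ~1000 result rows instead of its default

A per-call estimator (a function of the arguments) is the harder part of
the proposal; ROWS only covers the constant case.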
On Tue, 2006-05-09 at 13:29 +0200, PFC wrote:
> 0.101 ms BEGIN
> 1.451 ms CREATE TEMPORARY TABLE tmp ( a INTEGER NOT NULL, b INTEGER NOT
> NULL, c TIMESTAMP NOT NULL, d INTEGER NOT NULL ) ON COMMIT DROP
> 0.450 ms INSERT INTO tmp SELECT * FROM bookmarks ORDER BY annonce_id DESC
> LIMIT 20
> 0.4
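Reconstructing the test as a runnable sketch (the table definition and
the bookmarks query are from the thread; ANALYZE added to show the
trade-off discussed below):

    \timing on
    BEGIN;
    CREATE TEMPORARY TABLE tmp (
        a INTEGER NOT NULL,
        b INTEGER NOT NULL,
        c TIMESTAMP NOT NULL,
        d INTEGER NOT NULL
    ) ON COMMIT DROP;
    INSERT INTO tmp SELECT * FROM bookmarks ORDER BY annonce_id DESC LIMIT 20;
    ANALYZE tmp;  -- gives the planner real row counts, but adds COMMIT cost
    -- ... run the queries against tmp here ...
    COMMIT;       -- ON COMMIT DROP removes tmp automatically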
PFC <[EMAIL PROTECTED]> writes:
> Fun thing is, the rowcount from a temp table (which is the problem
> here)
> should be available without ANALYZE; as the temp table is not concurrent,
> it would be simple to inc/decrement a counter on INSERT/DELETE...
No, because MVCC rules still apply.
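A sketch of why: even single-backend temp tables obey transaction
visibility, so a bare inc/decrement counter gives wrong answers as soon
as anything rolls back (table and values are made up):

    BEGIN;
    CREATE TEMPORARY TABLE t (x integer) ON COMMIT DROP;
    INSERT INTO t VALUES (1), (2);  -- naive counter: +2
    SAVEPOINT sp;
    INSERT INTO t VALUES (3);       -- naive counter: +1
    ROLLBACK TO sp;                 -- row 3 is dead; the +1 must be undone
    SELECT count(*) FROM t;         -- correct answer: 2
    COMMIT;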
> SELECT * FROM somewhere WHERE id IN (SELECT id FROM result)
Well, you can write
SELECT * FROM somewhere JOIN (SELECT id FROM result GROUP BY id) AS a
USING (id);
It's the same thing (and postgres knows it).
You might want to use PL to store values, say PLperl, or even C.
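The two spellings side by side, with EXPLAIN to check that the planner
treats them alike (a sketch; somewhere and result are the thread's
placeholder names):

    -- A semi-join written two ways; recent PostgreSQL usually picks
    -- the same plan shape for both.
    EXPLAIN ANALYZE
    SELECT * FROM somewhere WHERE id IN (SELECT id FROM result);

    EXPLAIN ANALYZE
    SELECT * FROM somewhere
    JOIN (SELECT id FROM result GROUP BY id) AS a USING (id);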
On 5/9/06, PFC <[EMAIL PROTECTED]> wrote:
> You might consider just selecting your primary key or a set of
> primary keys to involved relations in your search query. If you
> currently use "select *" this can make your result set very large.
>
> Copying all the result set to the temp. costs you
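A sketch of that keys-only approach (names reuse the thread's bookmarks
example, and the LIMIT stands in for whatever the search query does):
keep only primary keys in the temp table, then join back for the full
rows when they are actually displayed:

    BEGIN;
    CREATE TEMPORARY TABLE result_ids ON COMMIT DROP AS
        SELECT annonce_id FROM bookmarks ORDER BY annonce_id DESC LIMIT 20;
    SELECT b.*
    FROM bookmarks b
    JOIN result_ids r USING (annonce_id);
    COMMIT;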
> Does the time for commit change much if you leave out the analyze?
Yes, when I don't ANALYZE the temp table, commit time changes from 30 ms
to about 15 ms; but the queries get horrible plans (see below):
Fun thing is, the rowcount from a temp table (which is the problem here)
should be available without ANALYZE...
It would be interesting to know what the bottleneck is for temp tables
for you. They do not go via the buffer-cache, they are stored in
private memory in the backend, they are not xlogged. Nor flushed to
disk on backend exit. They're about as close to in-memory tables as
you're going to get...
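A knob that goes with this (assuming a reasonably recent PostgreSQL):
the private memory is the per-backend temp_buffers pool, which can only
be raised before the session first touches a temp table; a sketch:

    SET temp_buffers = '64MB';  -- per-backend cache for temp tables
    CREATE TEMPORARY TABLE big_tmp AS
        SELECT g AS x FROM generate_series(1, 100000) g;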
[snip]
> It would be interesting to know what the bottleneck is for temp tables
> for you. They do not go via the buffer-cache, they are stored in
[snip]
Is it possible that the temp table creation is the bottleneck? Would
that write into system catalogs? If yes, maybe the system catalogs are
not ...
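For reference, temp table creation does write catalog rows; a quick
probe (a sketch, table name made up):

    CREATE TEMPORARY TABLE tmp_probe (x integer);

    -- The temp table gets ordinary rows in pg_class/pg_attribute,
    -- under a per-backend temp schema (pg_temp_NN):
    SELECT n.nspname, c.relname
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relname = 'tmp_probe';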
On Tue, May 09, 2006 at 12:10:37PM +0200, PFC wrote:
> Yes, but in this case temp tables add too much overhead. I wish
> there were RAM based temp tables like in mysql. However I guess the
> current temp table slowness comes from the need to mark their existence in
> the system catalogs...
In addition to running your query, you are already transferring the whole result
set multiple times. First you copy it to the result table. Then you
read it again. Your subsequent queries will also have to read over
all the unneeded tuples just to get your primary key.
Considering that the resul