From: Andrei Bintintan
Sent: Wed 1/26/2005 11:11 AM
To: [EMAIL PROTECTED]; Greg Stark
Cc: Richard Huxton; pgsql-sql@postgresql.org; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] [SQL] OFFSET impact on Performance???
The problem still stays open. The thing is that I have about 20-30
clients that are using that SQL query where the offse…
As I read the docs, a temp table doesn't solve our problem, as it does
not persist between sessions. With a web page there is no guarantee
that you will receive the same connection between requests, so a temp
table doesn't solve the problem. It looks like you either have to
create a real table (…
PFC wrote:
>
> Supposing your searches display results which are rows coming from one
> specific table, you could create a cache table:
>
>   search_id   serial primary key
>   index_n     position of this result in the global result set
>   result_id   id of the resulting row.
>
> Then, making a search with 50k results woul…
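A minimal SQL sketch of the cache table described above. The searched table `items`, the filter, and the sort column are hypothetical stand-ins, not from the thread:

```sql
-- Cache table: one row per search result, in final sort order.
CREATE TABLE search_cache (
    search_id  integer NOT NULL,   -- one value per executed search
    index_n    integer NOT NULL,   -- position in the global result set
    result_id  integer NOT NULL,   -- id of the matching row
    PRIMARY KEY (search_id, index_n)
);

-- Fill it once, when the search is first run; a temporary sequence
-- numbers the rows in their sorted order.
CREATE TEMPORARY SEQUENCE search_pos;
INSERT INTO search_cache (search_id, index_n, result_id)
SELECT 1, nextval('search_pos'), id
FROM (SELECT id FROM items WHERE name LIKE 'a%' ORDER BY name) AS sorted;

-- Any later page is then a cheap range scan on the primary key:
SELECT t.*
FROM search_cache c
JOIN items t ON t.id = c.result_id
WHERE c.search_id = 1
  AND c.index_n BETWEEN 280 AND 299
ORDER BY c.index_n;
```

The join pulls the wide detail columns only for the 20 rows actually displayed; the expensive search runs exactly once per search, not once per page.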
"Merlin Moncure" <[EMAIL PROTECTED]> writes:
> what about
> CREATE AGGREGATE array_accum (
>     sfunc = array_append,
>     basetype = anyelement,
>     stype = anyarray,
>     initcond = '{}'
> );

huh, that is faster. It's only 14x slower than the C implementation.
For completeness, here are…
What you want is called UNNEST. It didn't get done in time for 8.0. But if
what you have is an array of integers the int_array_enum() function I quoted
in the other post is basically that.

Yes, I used it, thanks. That's what I wanted. The query plans are good.
You don't really need the int_…
Greg Stark wrote:
> test=> create or replace function array_push (anyarray, anyelement)
> returns anyarray as 'select $1 || $2' language sql immutable strict;
> CREATE FUNCTION
> test=> create aggregate array_aggregate (basetype=anyelement,
> sfunc=array_push, stype=anyarray, initcond = '{}');
>
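A short usage sketch for the aggregate just defined; the `items` table and its columns are hypothetical:

```sql
-- Collect all matching ids into a single int[] value:
SELECT array_aggregate(id) AS id_list
FROM items
WHERE name LIKE 'a%';
```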
However, it seems that integer && integer[] does not exist:

Try intset(id) && int[]. intset is an undocumented function :)
I'm going to add intset() to README.

SELECT * FROM table WHERE id && int[]

Mm.
intset(x) seems to be like array[x] ?
Actually what I want is the opposite. I have a btree…
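A sketch of how contrib/intarray could make such a query indexable. The table `t`, its columns, and the functional index are assumptions for illustration, not from the thread:

```sql
-- Requires contrib/intarray. intset(x) builds the one-element array {x},
-- and a functional GiST index on it lets the overlap operator && use
-- an index instead of a sequential scan:
CREATE INDEX t_intset_gist ON t USING gist (intset(id) gist__int_ops);

-- "id is any of these values", answered via the GiST index:
SELECT * FROM t WHERE intset(id) && '{123,456,789}'::int[];
```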
> Actually, you can if you assume you can "temporarily materialize" that
> view.
>
> Then, you use a join on my_query to pull the bits you want:
>
> select [big table.details] from [big table],
>   [select * from my_query order by [something] offset 280 limit 20]
> where [join criteria between…
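A concrete (hypothetical) rendering of the quoted pattern; `big_table`, `my_query`, and the sort column are invented for illustration:

```sql
-- Pick one page of ids from the cheap view first, then join back
-- to the big table for the expensive detail columns:
SELECT big.details
FROM big_table big
JOIN (SELECT id
      FROM my_query
      ORDER BY sort_col
      OFFSET 280 LIMIT 20) AS page ON page.id = big.id;
```

The OFFSET still scans and discards 280 rows, but only over the narrow id set, not over the wide detail rows.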
On Thu, 27 Jan 2005, PFC wrote:

for example,
http://www.sai.msu.su/~megera/postgres/gist/code/7.3/README.intarray
see OPERATIONS and EXAMPLE USAGE:

Thanks, I already know this documentation and have used intarray before
(I find it absolutely fabulous in the right application, it has a great
potential for getting out of t…
Oleg Bartunov writes:
> On Thu, 27 Jan 2005, PFC wrote:
>
> > > > beware that SELECT * FROM table WHERE id =ANY( array ) won't use an
> > > > index,
> > > contrib/intarray provides index access to such queries.
> >
> > Can you provide an example of such a query ? I've looked at the operato…
On Thu, 27 Jan 2005, PFC wrote:

The best part is that you can skip the LIMIT/OFFSET entirely if you
put page numbers in your cache table while inserting into it, via a
temporary sequence or something. Retrieving the results will then be
very fast, but beware that SELECT * FROM table WHERE id =ANY( array )
won't use an index.
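That variant can be sketched as follows; the cache table, page size, and filling query are hypothetical:

```sql
-- Compute the page number while filling the cache table, so fetching
-- a page needs no LIMIT/OFFSET at all.
CREATE TEMPORARY SEQUENCE search_pos;

INSERT INTO search_cache2 (search_id, page_n, result_id)
SELECT 1, (nextval('search_pos') - 1) / 20, id   -- 20 results per page
FROM (SELECT id FROM items WHERE name LIKE 'a%' ORDER BY name) AS sorted;

-- Page 15 is then a plain equality lookup:
SELECT result_id FROM search_cache2
WHERE search_id = 1 AND page_n = 14;
```

Note the integer division: rows 1-20 land on page 0, rows 21-40 on page 1, and so on.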
On Thu, 27 Jan 2005, PFC wrote:

That's a really good idea, just store a list of the sorted ids in the
temp table - small amount of data for insert... I like it!

Alex Turner
NetEconomist

The best part is that you can skip the LIMIT/OFFSET entirely if you put
page numbers in your cache table while inserting into it, via a tempor…
In an attempt to throw the authorities off his trail, [EMAIL PROTECTED]
("Merlin Moncure") transmitted:
> Alex wrote:
>> How do you create a temporary view that has only a small subset of the
>> data from the DB init? (Links to docs are fine - I can read ;). My
>> query isn't all that complex, and my number of records might be from
>> 10 to 2k depending on how I implement it.
The problem with this approach is TTFB (Time to first Byte). The
initial query is very slow, but additional requests are fast. In most
situations we do not want the user to have to wait a disproportionate
amount of time for the initial query. If this is the first time using
the system this will…
Regards,
Andy.

----- Original Message -----
From: "Greg Stark" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Cc: "Richard Huxton"; "Andrei Bintintan" <[EMAIL PROTECTED]>
Sent: Tuesday, January 25, 2005 8:28 PM
Subject: Re: [PERFORM] [SQL] OFFSET impact on Performance???
Alex Turner <[EMAIL PROTECTED]> writes:
> I am also very interested in this very question.. Is there any way to
> declare a persistent cursor that remains open between pg sessions?
> This would be better than a temp table because you would not have to
> do the initial select and insert into a fresh table and incur those IO
> costs, which are oft…
Alex wrote:
> How do you create a temporary view that has only a small subset of the
> data from the DB init? (Links to docs are fine - I can read ;). My
> query isn't all that complex, and my number of records might be from
> 10 to 2k depending on how I implement it.

Well, you can't. My point…
> > Now I read all the posts and I have some answers.
> >
> > Yes, I have a web application. I HAVE to know exactly how many pages I have
> > and I have to allow the user to jump to a specific page (this is where I
> > used limit and offset). We have this feature and I cannot take it out.

I'm afra…
> Now I read all the posts and I have some answers.
>
> Yes, I have a web application. I HAVE to know exactly how many pages I
> have and I have to allow the user to jump to a specific page (this is
> where I used limit and offset). We have this feature and I cannot take
> it out.

If your worki…
DIGGING for solutions.

Andy.

----- Original Message -----
From: "Ragnar Hafstað" <[EMAIL PROTECTED]>
To:
Cc: "Andrei Bintintan" <[EMAIL PROTECTED]>;
Sent: Thursday, January 20, 2005 9:23 PM
Subject: Re: [PERFORM] [SQL] OFFSET impact on Performance???

On Thu, 2005-01-…
How do you create a temporary view that has only a small subset of the
data from the DB init? (Links to docs are fine - I can read ;). My
query isn't all that complex, and my number of records might be from
10 to 2k depending on how I implement it.

Alex Turner
NetEconomist

On Thu, 20 Jan 2005…
> this will only work unchanged if the index is unique. imagine, for
> example, if you have more than 50 rows with the same value of col.
>
> one way to fix this is to use ORDER BY col,oid

nope! oid is
1. deprecated
2. not guaranteed to be unique even inside a (large) table.

Use a sequence inst…
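Ragnar's fix can be sketched as follows; the table `t`, its sequence-backed `seq` column, and the index are hypothetical:

```sql
-- A sequence-backed column gives a guaranteed-unique tiebreaker
-- (unlike oid), so paging on a non-unique sort column stays stable:
CREATE SEQUENCE t_seq;
ALTER TABLE t ADD COLUMN seq integer DEFAULT nextval('t_seq');
CREATE INDEX t_col_seq_idx ON t (col, seq);

-- Next page after the last row shown (col = 'abc', seq = 1234):
SELECT * FROM t
WHERE col > 'abc'
   OR (col = 'abc' AND seq > 1234)
ORDER BY col, seq
LIMIT 50;
```

The OR form is the classic way to express "after this (col, seq) pair" so the planner can keep walking the two-column index.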
On Thu, 2005-01-20 at 19:12 +, Ragnar Hafstað wrote:
> On Thu, 2005-01-20 at 11:59 -0500, Greg Stark wrote:
>
> > The best way to do pages for is not to use offset or cursors but to use an
> > index. This only works if you can enumerate all the sort orders the
> > application might be using an…
On Thu, 2005-01-20 at 11:59 -0500, Greg Stark wrote:
> The best way to do pages for is not to use offset or cursors but to use an
> index. This only works if you can enumerate all the sort orders the
> application might be using and can have an index on each of them.
>
> To do this the query woul…
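The index-driven paging Greg describes would look roughly like this; the table and sort column are invented for illustration:

```sql
-- One index per sort order the application offers:
CREATE INDEX items_name_idx ON items (name);

-- First page:
SELECT * FROM items ORDER BY name LIMIT 20;

-- "Next page": continue after the last value shown, instead of
-- re-scanning and discarding rows with OFFSET:
SELECT * FROM items
WHERE name > 'mueller'        -- last name on the previous page
ORDER BY name
LIMIT 20;
```

Each page is then a short index range scan whose cost does not grow with the page number, which is exactly what OFFSET cannot provide.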
To: Andrei Bintintan
Cc: pgsql-sql@postgresql.org; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] [SQL] OFFSET impact on Performance???

Andrei Bintintan wrote:
>> If you're using this to provide "pages" of results, could you use a
>> cursor?
>
> What…
Greg Stark wrote:

"Andrei Bintintan" <[EMAIL PROTECTED]> writes:
> If you're using this to provide "pages" of results, could you use a cursor?
> What do you mean by that? Cursor?
> Yes I'm using this to provide "pages", but if I jump to the last pages it
> goes very slow.

The best way to do pages for is…
Ron Mayer wrote:

Richard Huxton wrote:
> If you've got a web-application then you'll probably want to insert
> the results into a cache table for later use.

If I have quite a bit of activity like this (people selecting 1 out
of a few million rows and paging through them in a web browser), would
it be good to have…
Alex Turner wrote:
> I am also very interested in this very question.. Is there any way
> to declare a persistent cursor that remains open between pg sessions?

Not sure how this would work. What do you do with multiple connections?
Only one can access the cursor, so which should it be?

> This would b…
Andrei Bintintan wrote:
>> If you're using this to provide "pages" of results, could you use a
>> cursor?
>
> What do you mean by that? Cursor?
>
> Yes I'm using this to provide "pages", but if I jump to the last pages
> it goes very slow.

DECLARE mycursor CURSOR FOR SELECT * FROM ...
FETCH FORWARD 10 IN mycursor;
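Richard's cursor suggestion, spelled out; the table and sort column are hypothetical, and note that the cursor only lives inside one transaction and session:

```sql
BEGIN;
DECLARE mycursor CURSOR FOR
    SELECT * FROM items ORDER BY name;
FETCH FORWARD 10 FROM mycursor;   -- page 1
FETCH FORWARD 10 FROM mycursor;   -- page 2
MOVE FORWARD 100 IN mycursor;     -- skip ahead without returning rows
FETCH FORWARD 10 FROM mycursor;
CLOSE mycursor;
COMMIT;
```

This avoids re-running the query per page, but as discussed above it does not fit a web application where consecutive requests may arrive on different connections.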
Andrei Bintintan wrote:

Hi to all,

I have the following 2 examples. Now, regarding the offset, if it
is small(10) or big(>5) what is the impact on the performance of
the query?? I noticed that if I return more data (columns) or if I
make more joins then the query runs even slower if the OFFS…