>select min(time) from transactions where payment_id is null
>So for that situation I tried whether a specific index would help, i.e.:
>create index transactions__time_payment_id__null__idx on transactions(time)
>where payment_id is null;
>But this does not really seem to help. It might be better to
> It's possible to combine independent indexes for resolving AND-type
> queries, but the combination process does not preserve ordering, so
> it's useless for this type of situation.
Ok, I'm going to try the double-column index. Your suggestion about the index
with nulls left out worked great, btw
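For reference, a sketch of pairing that partial index with an order by/limit
form of the same min() (whether the planner actually walks the index this way
depends on the version and the statistics):

-- with the partial index from above in place:
select time from transactions
where payment_id is null
order by time limit 1;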
henk de wit <[EMAIL PROTECTED]> writes:
> I see, that sounds very interesting too. As you might have noticed, I'm not
> an expert in this field but I'm trying to learn. I was under the impression
> that the last few incarnations of postgresql automatically combined single
> column indexes for
> The only way I can see for that to be so slow is if you have a very
> large number of rows where payment_id is null --- is that the case?
The number of rows where payment_id is null is indeed large; it grows every day
to about 1 million by the end of the so-called "payment period" (so
cur
I wrote:
> The only way I can see for that to be so slow is if you have a very
> large number of rows where payment_id is null --- is that the case?
> There's not a lot you could do about that in existing releases :-(.
Actually, there is a possibility if you are willing to change the query:
make
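The rest of that reply is cut off above, but based on the query-change approach
that comes up elsewhere in this thread (the order by ... desc limit 1 form and
the "index with nulls left out"), a hedged sketch of what changing the query
could look like (index name is illustrative):

create index transactions_payment_id_nn_idx on transactions(payment_id)
where payment_id is not null;

-- max(payment_id) as a backward index walk that never has to step over nulls:
select payment_id from transactions
where payment_id is not null
order by payment_id desc limit 1;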
On Oct 12, 2007, at 4:48 PM, henk de wit wrote:
> > I have work_mem set to 256MB.
> Wow. That's inordinately high. I'd recommend dropping that to 32-43MB.
Ok, it seems I was totally wrong with the work_mem setting. I'll
adjust it to a saner level. Thanks a lot for the advice everyone!
On Friday 12 October 2007, henk de wit <[EMAIL PROTECTED]> wrote:
> > select payment_id from transactions order by payment_id desc limit 1;
>
> This one is indeed instant! Less than 50ms. In my case I can't use it for
> max though because payment_id can be null (which is an unfortunate design
> choice).
henk de wit <[EMAIL PROTECTED]> writes:
> The plan looks like this:
> "Result (cost=3D0.37..0.38 rows=3D1 width=3D0) (actual time=3D184231.636..=
> 184231.638 rows=3D1 loops=3D1)"
> " InitPlan"
> "-> Limit (cost=3D0.00..0.37 rows=3D1 width=3D8) (actual time=3D18423=
> 1.620..184231.622 row
> > I have work_mem set to 256MB.
> Wow. That's inordinately high. I'd recommend dropping that to 32-43MB.
Ok, it seems I was totally wrong with the work_mem setting. I'll adjust it to a
saner level. Thanks a lot for the advice everyone!
> Explain is your friend in that respect.
It sh
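That reply is cut off as well, but as a minimal illustration of the "explain is
your friend" advice, the slow statement can be run under explain analyze to
compare the planner's estimates with the actual row counts and timings:

explain analyze select max(payment_id) from transactions;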
> select payment_id from transactions order by payment_id desc limit 1;
This one is indeed instant! Less than 50ms. In my case I can't use it for max
though because payment_id can be null (which is an unfortunate design choice).
The other variant, however, didn't become instant.
On Oct 12, 2007, at 4:09 PM, henk de wit wrote:
> It looks to me like you have work_mem set optimistically large. This
> query seems to be doing *many* large sorts and hashes:
I have work_mem set to 256MB. Reading the PG documentation I now
realize that "several sort or hash operations might be running in parallel"
> Do you have autovacuum turned on? With what settings?
Yes, I have it turned on. The settings are:
autovacuum on
autovacuum_analyze_scale_factor 0.1
autovacuum_analyze_threshold 250
autovacuum_freeze_max_age 2
autovacuum_naptime 1min
autovacuum_vacuum_cost_delay
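Several of the values above are cut off; one way to read back the settings the
server is actually running with is the standard pg_settings view, e.g.:

select name, setting from pg_settings
where name like 'autovacuum%' order by name;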
> It looks to me like you have work_mem set optimistically large. This
> query seems to be doing *many* large sorts and hashes:
I have work_mem set to 256MB. Reading the PG documentation I now realize
that "several sort or hash operations might be running in parallel". So
this is most likely the
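Since each sort or hash node in a single query can use up to work_mem, a common
pattern is to keep the server-wide default modest and raise it only around a
known heavy query. A sketch (values are illustrative; on 8.2 the setting can
also be given as a plain kilobyte number):

-- postgresql.conf: work_mem = 32MB   (modest global default)
begin;
set local work_mem = '256MB';   -- only for this transaction
-- ... run the heavy reporting query here ...
commit;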
> This query takes a totally unrealistic amount of time for execution (I have
> it running for >30 minutes now on a machine with 8GB and 4 [EMAIL PROTECTED],
> and it still isn't finished).
To correct myself, I looked at the wrong window earlier; when I typed the email
the query had in fact finished
>>> On Fri, Oct 12, 2007 at 3:41 PM, in message
<[EMAIL PROTECTED]>, henk de wit
<[EMAIL PROTECTED]> wrote:
>
> I have a table with some 50 million rows in PG 8.2. The table has indexes
> on relevant columns. My problem is that most everything I do with this table
> (which are actually very basic selects) is unbearably slow.
Hi,
I have a table with some 50 million rows in PG 8.2. The table has indexes on
relevant columns. My problem is that most everything I do with this table
(which are actually very basic selects) is unbearably slow. For example:
select max(payment_id) from transactions
This takes 161 seconds.
henk de wit <[EMAIL PROTECTED]> writes:
> I indeed found them in the logs. Here they are:
It looks to me like you have work_mem set optimistically large. This
query seems to be doing *many* large sorts and hashes:
> HashBatchContext: 262144236 total in 42 blocks; 3977832 free (40 chunks);
> 258
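(For scale: 262144236 bytes is roughly 250 MB, i.e. a single hash batch context
already sitting at about the 256 MB work_mem ceiling; with several such sorts
and hashes active in one query, total memory use can be a multiple of work_mem.)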
>>> On Fri, Oct 12, 2007 at 9:57 AM, in message
<[EMAIL PROTECTED]>, Theo Kramer
<[EMAIL PROTECTED]> wrote:
>
> select * from foo where
> (a = a1 and b = b1 and c >= c1) or
> (a = a1 and b < b1) or
> (a > a1)
> order by a, b desc, c;
>
> I have, however, found that transforming the above
On Fri, 2007-10-12 at 09:02 -0400, Merlin Moncure wrote:
> fwiw, I converted a pretty large cobol app (acucobol) to postgresql
> backend translating queries on the fly. if this is a fresh effort,
> you definitely want to use the row-wise comparison feature of 8.2.
> not only is it much simpler, it
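For reference, a sketch of the 8.2 row-wise comparison Merlin refers to, written
against the example query shape quoted earlier in the thread (a1/b1/c1 stand for
the key values of the last row already fetched). Note it maps directly only onto
a uniformly ascending sort; the query above mixes asc and desc on b, which is
why some transformation of the condition is still needed:

select * from foo
where (a, b, c) > (a1, b1, c1)
order by a, b, c;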
On 10/12/07, Theo Kramer <[EMAIL PROTECTED]> wrote:
> On Thu, 2007-10-11 at 16:04 -0400, Merlin Moncure wrote:
> > is this a converted cobol app?
>
> :) - on the right track - it is a conversion from an ISAM-based package
> where I have changed the backend to PostgreSQL. Unfortunately there is
> w
Theo Kramer wrote:
On Thu, 2007-10-11 at 18:28 +, Andrew - Supernews wrote:
When you do this from the application, are you passing it 3 parameters,
or 5? The plan is clearly taking advantage of the fact that the two
occurrences of $1 and $2 are known to be the same value; if your app is
usi
On Thu, 2007-10-11 at 18:28 +, Andrew - Supernews wrote:
> On 2007-10-10, Theo Kramer <[EMAIL PROTECTED]> wrote:
> > When doing a 'manual' prepare and explain analyze I get the following
> >
> > rascal=# prepare cq (char(12), smallint, integer) as SELECT oid,
> > calllog_mainteng, calllog_phase
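The statement is cut off above; as a purely illustrative reconstruction of the
3-vs-5 parameter distinction Andrew asks about (the table name and the third
column below are assumptions, following the query shape quoted earlier in the
thread):

-- 3 parameters: the planner knows both occurrences of $1 (and of $2) are equal
prepare cq3 (char(12), smallint, integer) as
select oid, calllog_mainteng, calllog_phase from calllog   -- calllog, calllog_seq assumed
where (calllog_mainteng = $1 and calllog_phase = $2 and calllog_seq >= $3)
   or (calllog_mainteng = $1 and calllog_phase < $2)
   or (calllog_mainteng > $1)
order by calllog_mainteng, calllog_phase desc, calllog_seq;

-- 5 parameters: the same values may be passed twice, but the planner can no
-- longer assume $1 = $4 and $2 = $5, so it may pick a different plan
prepare cq5 (char(12), smallint, integer, char(12), smallint) as
select oid, calllog_mainteng, calllog_phase from calllog
where (calllog_mainteng = $1 and calllog_phase = $2 and calllog_seq >= $3)
   or (calllog_mainteng = $4 and calllog_phase < $5)
   or (calllog_mainteng > $4)
order by calllog_mainteng, calllog_phase desc, calllog_seq;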
On Thu, 2007-10-11 at 13:28 -0400, Merlin Moncure wrote:
> On 10/11/07, Theo Kramer <[EMAIL PROTECTED]> wrote:
> > On Thu, 2007-10-11 at 10:12 +0100, Richard Huxton wrote:
> > > Theo Kramer wrote:
> > > >
> > > > So I suspect that there is something more fundamental here...
> > >
> > > OK, so there
On Thu, 2007-10-11 at 16:04 -0400, Merlin Moncure wrote:
> On 10/11/07, Andrew - Supernews <[EMAIL PROTECTED]> wrote:
> > On 2007-10-10, Theo Kramer <[EMAIL PROTECTED]> wrote:
> > > When doing a 'manual' prepare and explain analyze I get the following
> > >
> > > rascal=# prepare cq (char(12), smal