On 10/16/2012 04:41 PM, Filippos Kalamidas wrote:
the first thing you should probably do is run an ANALYZE on one of
these tables and then run the DELETE statement again. If there are no
stats for these tables, it's normal not to get very good plans.
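Filippos's suggestion can be tried directly; a minimal sketch, using the table name from the EXPLAIN output later in the thread (note that EXPLAIN ANALYZE actually executes the DELETE, so run it where that is acceptable):

```sql
-- Refresh planner statistics for one of the affected tables,
-- then re-run the DELETE to see whether the plan changes.
ANALYZE agg_t377_incoming_a40_dst_net_f5;

-- Caution: EXPLAIN ANALYZE runs the statement for real.
EXPLAIN ANALYZE
DELETE FROM agg_t377_incoming_a40_dst_net_f5
WHERE start_date < 1346487911000;
```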
Yep, and the fact that the stats are that
> On Wednesday, October 17, 2012 4:30, Merlin Moncure wrote,
>
> On Tue, Oct 16, 2012 at 3:15 AM, Sam Wong wrote:
> > Hi communities,
> >
> > I am investigating a performance issue involved with LIKE '%' on
> > an index in a complex query with joins.
> >
> > The problem boils down to this s
Thanks Bruce,
I have, and I even thought I understood it :).
I just ran an EXPLAIN ANALYZE on another table, and since then the query plan
has changed. It's now using the index as expected. I guess I have some more
reading to do.
On Oct 16, 2012, at 20:31 , Bruce Momjian wrote:
>
> Have yo
On Tue, Oct 16, 2012 at 08:19:43PM -0400, Chris Ruprecht wrote:
>
> On Oct 16, 2012, at 20:01 , Evgeny Shishkin wrote:
>
> > Selecting 5 years of data is not selective at all, so Postgres decides it
> > is cheaper to do a seqscan.
> >
> > Do you have an index on patient.dnsortpersonnumber? Can
On Oct 16, 2012, at 20:01 , Evgeny Shishkin wrote:
> Selecting 5 years of data is not selective at all, so Postgres decides it is
> cheaper to do a seqscan.
>
> Do you have an index on patient.dnsortpersonnumber? Can you post a result
> from
> select count(*) from patient where dnsortpersonnu
On Oct 17, 2012, at 3:52 AM, Chris Ruprecht wrote:
> Hi guys,
>
> PG = 9.1.5
> OS = winDOS 2008R8
>
> I have a table that currently has 207 million rows.
> there is a timestamp field that contains data.
> more data gets copied from another database into this database.
> How do I make this do a
Hi guys,
PG = 9.1.5
OS = winDOS 2008R8
I have a table that currently has 207 million rows.
there is a timestamp field that contains data.
more data gets copied from another database into this database.
How do I make this do an index scan instead?
I did an "analyze audittrailclinical" to no avail.
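One way to check whether the cost estimates, rather than a missing index, are behind the seqscan: temporarily forbid sequential scans in the session and compare plans. A sketch — only the `audittrailclinical` table and its timestamp column come from the thread; the predicate is hypothetical:

```sql
-- See what plan the planner picks today.
EXPLAIN ANALYZE
SELECT * FROM audittrailclinical
WHERE time_stamp >= '2012-01-01';   -- hypothetical cutoff

-- Forbid seqscans in this session only; if the index plan is now
-- chosen and is much faster, the cost estimates are off.
SET enable_seqscan = off;
EXPLAIN ANALYZE
SELECT * FROM audittrailclinical
WHERE time_stamp >= '2012-01-01';
RESET enable_seqscan;
```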
On Tue, Oct 16, 2012 at 10:47 PM, Karl Denninger wrote:
> Put an index on time_stamp (I assume there is one on id_signal already)
Well the optimal index for this particular query would include both columns:
(id_signal, time_stamp) -- in this order.
Additionally, if you want to take advantage of
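Merlin's two-column index can be sketched as follows (the index name is mine):

```sql
-- Equality on id_signal plus ORDER BY time_stamp DESC can be
-- satisfied by scanning this index backward, so the LIMIT 1
-- only has to read a single index entry.
CREATE INDEX ism_floatvalues_signal_ts_idx
    ON ism_floatvalues (id_signal, time_stamp);
```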
On Tue, Oct 16, 2012 at 3:15 AM, Sam Wong wrote:
> Hi communities,
>
> I am investigating a performance issue involved with LIKE '%' on an
> index in a complex query with joins.
>
> > The problem boils down to this simple scenario:
> Scenario
> My database locale is C, using UTF-8 e
Put an index on time_stamp (I assume there is one on id_signal already)
On 10/15/2012 12:44 PM, Pedro Jiménez wrote:
> Hello,
> I'm trying to do a simple SQL query over PostgreSQL 9.0 running on
> Ubuntu.
>
> I have a large table (over 100 million records) with three fields,
> id_signal (bigint),
On 10/15/2012 12:44 PM, Pedro Jiménez wrote:
select var_value from ism_floatvalues where id_signal = 29660 order by
time_stamp desc limit 1;
Well, we'd have to see an EXPLAIN plan to really know what's going on
here, but it often boils down to the planner being overly optimistic
when low limit
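To see what the low LIMIT makes the planner do, the query from the thread can be run under EXPLAIN ANALYZE and the estimated row counts compared with the actual ones:

```sql
-- If this shows a sequential (or backward index) scan with a large
-- actual time, the statistics are leading the planner to assume
-- matching rows are common and will be found quickly.
EXPLAIN ANALYZE
SELECT var_value
FROM ism_floatvalues
WHERE id_signal = 29660
ORDER BY time_stamp DESC
LIMIT 1;
```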
2012/10/15 Pedro Jiménez :
> Hello,
> I'm trying to do a simple SQL query over PostgreSQL 9.0 running on Ubuntu.
>
> I have a large table (over 100 million records) with three fields, id_signal
> (bigint), time_stamp (timestamp) and var_value (float).
>
> My query looks like this:
>
> select var_v
Hello,
I'm trying to do a simple SQL query over PostgreSQL 9.0 running on Ubuntu.
I have a large table (over 100 million records) with three fields,
id_signal (bigint), time_stamp (timestamp) and var_value (float).
My query looks like this:
select var_value from ism_floatvalues where id_sign
Hi communities,
I am investigating a performance issue involved with LIKE '%' on an
index in a complex query with joins.
The problem boils down to this simple scenario:
Scenario
My database locale is C, using UTF-8 encoding. I tested this on 9.1.6 and 9.2.1.
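Some background that may help readers follow the scenario: under a C locale a plain btree index can serve left-anchored LIKE patterns directly, while under other locales a `text_pattern_ops` index is needed. A sketch with a hypothetical table `t`:

```sql
-- Under a C locale, a plain btree index already serves
-- left-anchored LIKE patterns:
CREATE INDEX t_name_idx ON t (name);

-- Under a non-C locale, an operator-class index is needed instead:
CREATE INDEX t_name_pattern_idx ON t (name text_pattern_ops);

-- Either can turn this into an index range scan:
SELECT * FROM t WHERE name LIKE 'abc%';

-- LIKE '%' matches every row, so even with an index the planner
-- may rightly prefer a sequential scan.
```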
Q1.
SELECT * FROM
On Sun, Oct 7, 2012 at 7:43 AM, Ants Aasma wrote:
> On Wed, Sep 26, 2012 at 11:11 PM, Jeff Janes wrote:
>> On Wed, Sep 12, 2012 at 11:40 PM, Ants Aasma wrote:
>>> I don't have any links for OS level monitoring, but with version 9.2
>>> track_io_timing would do the job.
>>
>> I don't know how to
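The 9.2 setting Ants mentions can be turned on per session (superuser-only to SET, as far as I recall) and then shows up in `EXPLAIN (ANALYZE, BUFFERS)` output; a sketch, borrowing a table name from elsewhere in the thread:

```sql
-- New in PostgreSQL 9.2: collect block I/O timings.
SET track_io_timing = on;

-- BUFFERS output now includes "I/O Timings: read=... write=..."
-- per plan node, separating disk waits from CPU time.
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM ism_floatvalues;
```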
On Tue, Oct 16, 2012 at 01:26:37PM +0100, Alejandro Carrillo wrote:
> Hi,
>
> Why does PostgreSQL from EnterpriseDB support create/alter/drop package
> while the open-source version doesn't?
> Is it a planned project, or will it never be supported?
>
Hi Alejandro,
Isn't that part of their Oracle compatibility secret sauce?
2012/10/16 Alejandro Carrillo :
> Hi,
>
> Why does PostgreSQL from EnterpriseDB support create/alter/drop package
> while the open-source version doesn't?
> Is it a planned project, or will it never be supported?
Packages are part of EnterpriseDB's Oracle compatibility layer.
PostgreSQL doesn't support this functionality. Packag
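A common substitute in stock PostgreSQL is to group related functions in a schema; a minimal sketch (the `billing` schema, function, and rate are hypothetical, and unlike a package there is no package-level state):

```sql
-- No CREATE PACKAGE in stock PostgreSQL; a schema plus functions
-- gives similar namespacing.
CREATE SCHEMA billing;

CREATE FUNCTION billing.net_price(gross numeric)
RETURNS numeric LANGUAGE sql AS $$
    SELECT $1 / 1.21;   -- hypothetical VAT rate
$$;

SELECT billing.net_price(121.0);
```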
Hi,
Why does PostgreSQL from EnterpriseDB support create/alter/drop package while
the open-source version doesn't?
Is it a planned project, or will it never be supported?
Thanks
the first thing you should probably do is run an ANALYZE on one of these
tables and then run the DELETE statement again. If there are no stats for
these tables, it's normal not to get very good plans.
On Tue, Oct 16, 2012 at 11:24 AM, Sylvain CAILLET wrote:
> Hi Craig,
>
> Here are the outpu
Hi Craig,
Here are the outputs :
flows=# explain analyze delete from agg_t377_incoming_a40_dst_net_f5 where
start_date < 1346487911000;
QUERY PLAN
---
Seq
On 10/16/2012 03:50 PM, Sylvain CAILLET wrote:
Hi to all,
I've got trouble with some DELETE statements. My db contains a little
more than 1 tables and runs on a dedicated server (Debian 6, bi-quad,
16 GB, SAS disks in RAID 0). Most of the tables contain between 2
and 3 million rows and no
Hi Sékine,
You're right: my question is why the planner doesn't use the index! My DELETE
statements have a WHERE clause like start_date < 134648610. They are executed
to delete rows that are too old.
My PostgreSQL version is 8.4. Below is an example of a table (they all have the
same structure):
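Two quick checks that fit this situation: confirm an index on `start_date` actually exists, and see whether the planner's row estimate for the cutoff is anywhere near reality (the table name is taken from the EXPLAIN output earlier in the thread):

```sql
-- Does an index on start_date exist for this table?
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'agg_t377_incoming_a40_dst_net_f5';

-- Compare the planner's row estimate with the actual count;
-- a wildly wrong estimate explains a seqscan being chosen.
EXPLAIN ANALYZE
SELECT count(*)
FROM agg_t377_incoming_a40_dst_net_f5
WHERE start_date < 1346487911000;
```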
Hi Sylvain,
Might sound like a nasty question, and gurus will correct me if I'm wrong,
but the first thing to investigate is why the index is not used:
- You have 2-3 million rows per table, so the planner should use the index.
A seqscan is preferred for small tables.
- Maybe the WHERE clause of your DEL