Thanks a lot for your detailed explanation. I will try ASAP with no
DISTINCT ( we are quite sure it is not needed anyway ), and report back
here.
Michaël
2017-08-21 23:52 GMT+02:00 David Rowley :
> On 19 August 2017 at 04:46, kimaidou wrote:
>
On 19 August 2017 at 04:46, kimaidou wrote:
> When we call the WHERE on the view:
>
> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)
> SELECT *
> FROM "qgep"."vw_qgep_reach"
> WHERE "progression_geometry" &&
>
Hi all
I also tried changing the values of join_collapse_limit and
from_collapse_limit to higher values than the default: 12, 50 or even 100, with
no improvement on the query plan.
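(For reference, these planner settings can be raised per session before running the query; the values below are just the ones tried above:
SET join_collapse_limit = 12;
SET from_collapse_limit = 12;
Higher values let the planner consider more join orderings, at the cost of longer planning time.)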
Is this a typical behavior, or is there something particular in my query
that causes this big difference between the
Hi all,
I have come across an unexpected behavior.
You can see the full details in an issue on the QGEP project on GitHub:
https://github.com/QGEP/QGEP/issues/308#issuecomment-323122514
Basically, we have this view with some LEFT JOINs:
http://paste.debian.net/982003/
We have indexes on some fields
You can give it a try on just that partition to see if your query plan
gets better. I prefer defining partitioning over ranges of an attribute, for
example: cid between 123 and 456. It makes more sense, especially when
there are attributes whose values strictly depend on the check attribute.
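A minimal sketch of such a range-based child table (only the parent table article and the cid column come from this thread; the child table and index names are made up):
CREATE TABLE article_p1 (
    CHECK (cid BETWEEN 123 AND 456)
) INHERITS (article);
CREATE INDEX article_p1_cid_idx ON article_p1 (cid);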
I tried to add an index on the partition key and it didn't help. We have
autovacuum running. The updates and inserts are very frequent on these
tables. The server kernel version is 3.5.0-22-generic. It has 376G memory.
max_connections = 2500  # (change requires restart)
shared_buffers = 32GB
Platform: pg 9.2.9 on Ubuntu 12.04.4 LTS.
I have a table which is partitioned into about 80 children. There are usually
several dozen connections accessing these tables concurrently. I found that
sometimes the query planning time is very long if I query against the parent
table with the partition key. The
On Tue, Aug 11, 2015 at 6:46 PM, Rural Hunter ruralhun...@gmail.com wrote:
Platform: pg 9.2.9 on Ubuntu 12.04.4 LTS.
I have a table which is partitioned into about 80 children. There are usually
several dozen connections accessing these tables concurrently. I found
sometimes the query
On Tue, Aug 11, 2015 at 11:44 PM, Rural Hunter ruralhun...@gmail.com
wrote:
# \dt+ article_729
                      List of relations
 Schema |    Name     | Type | Owner | Size | Description
--------+-------------+------+-------+------+-------------
 public | article_729 |
# \d article_729
          Table "public.article_729"
 Column |  Type  |        Modifiers
--------+--------+--------------------------
 aid    | bigint | not null default
# \d+ article_729
                    Table "public.article_729"
 Column | Type | Modifiers | Storage | Stats target | Description
Yes, I'm very sure. From what I observed, it has something to do with the
concurrent query planning. If I disconnect the other connections, the plan is
very quick.
2015-08-11 22:42 GMT+08:00 Maxim Boguk maxim.bo...@gmail.com:
Check constraints:
article_729_cid_check CHECK (cid = 729)
Used
# \dt+ article_729
                        List of relations
 Schema |    Name     | Type  |  Owner  |  Size  | Description
--------+-------------+-------+---------+--------+-------------
 public | article_729 | table | omuser1 | 655 MB |
(1 row)
The problem exists not only on this specific child
Check constraints:
article_729_cid_check CHECK (cid = 729)
The partition schema used looks very simple and straightforward, and should
have no issues with 80 partitions.
Are you sure that you have only 80 partitions and not (let's say) 800?
Does every other partition of the article table use the
Hi Rural Hunter,
Try to create an index on the cid attribute.
How many rows does article_729 have?
Pietro Pugni
On 11 Aug 2015 at 16:51, Rural Hunter ruralhun...@gmail.com wrote:
Yes, I'm very sure. From what I observed, it has something to do with the
concurrent query planning. If I disconnect the other
article_729 has about 0.8 million rows. The row counts of the child tables
vary from several thousand to dozens of millions. How can it help to
create an index on the partition key?
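(For reference, the suggested index would be something like the following; the index name is made up:
CREATE INDEX article_729_cid_idx ON article_729 (cid);
An index on the partition key mainly affects execution of queries that filter on cid; it does not by itself shorten planning time.)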
2015-08-12 1:03 GMT+08:00 Pietro Pugni pietro.pu...@gmail.com:
Hi Rural Hunter,
Try to create an index on cid
This is a question about how to read explain analyze. I've anonymized
column names and table names.
In the output of explain analyze below, what was the query doing
between actual time 1.426 and 17.077?
Kind regards,
Andomar
HashAggregate (cost=862.02..862.62 rows=48 width=90) (actual
Andomar ando...@aule.net wrote:
In the output of explain analyze below, what was the query
doing between actual time 1.426 and 17.077?
Looping through 384 index scans of tbl, each taking 0.040 ms.
That's 15.36 ms. That leaves 0.291 ms unaccounted for, which means
that's about how much time
Looping through 384 index scans of tbl, each taking 0.040 ms.
That's 15.36 ms. That leaves 0.291 ms unaccounted for, which means
that's about how much time the top level nested loop took to do its
work.
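(In numbers: 17.077 - 1.426 = 15.651 ms for that phase; 384 * 0.040 = 15.36 ms spent in the index scans; the remaining 15.651 - 15.36 = 0.291 ms is the nested loop's own overhead.)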
Thanks for your reply, interesting! I'd have thought that this line
actually implied 0
Hello,
At Wed, 22 Apr 2015 21:59:27 +0200, Andomar ando...@aule.net wrote in
5537fd9f.3060...@aule.net
Looping through 384 index scans of tbl, each taking 0.040 ms.
That's 15.36 ms. That leaves 0.291 ms unaccounted for, which means
that's about how much time the top level nested loop took
On Apr 22, 2015, at 1:59 PM, Andomar ando...@aule.net wrote:
Is there a way to tell postgres that a function will always return the same
result for the same parameter, within the same transaction?
Yup… read over the Function Volatility Categories
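(The category that matches "same result for the same parameter" is STABLE: stable within a single statement's snapshot, as opposed to IMMUTABLE, which must not depend on database contents at all. A minimal sketch, with a hypothetical function name and body:
CREATE FUNCTION lookup_rate(text) RETURNS numeric AS $$
    SELECT rate FROM rates WHERE code = $1;
$$ LANGUAGE sql STABLE;
)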
Thanks for your reply, Marti. As I answered to Tom a couple of days ago,
adjusting 'effective_cache_size' to 80% of RAM and 'random_page_cost'
from 2 to 1 helped me.
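(For reference, the corresponding postgresql.conf entries would look roughly like this; the absolute value is an assumption, since the machine's RAM is not stated here:
effective_cache_size = 26GB   # roughly 80% of RAM on a hypothetical 32GB machine
random_page_cost = 1
)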
On 8 October 2014 00:26, Marti Raudsepp ma...@juffo.org wrote:
On Fri, Oct 3, 2014 at 6:38 PM, Andrey Lizenko
Hi,
I have similar problem as in
http://www.postgresql.org/message-id/flat/52b311c4.1070...@gmail.com#52b311c4.1070...@gmail.com
server version is 9.3.4
There are only two quite simple tables:
db_new=# \d activities_example
Table public.activities_example
Column | Type | Modifiers
On Fri, Oct 3, 2014 at 6:38 PM, Andrey Lizenko lizenk...@gmail.com wrote:
Is it possible to force the optimizer to choose the second plan without doing set
enable_hashjoin = off; ?
Increasing 'effective_cache_size' leads to a similar thing with a merge join,
other options (work_mem, shared_buffers.
Thanks a lot, Tom,
reducing 'random_page_cost' from 2 to 1 and increasing 'effective_cache_size'
from 70% to 80% of RAM solved this at least on my virtual sandbox.
By the way, why can increasing only the cache size (with the same random_page_cost=2)
lead to merge-join selection?
On 5 October 2014
As I answered to Tom a few moments ago:
reducing 'random_page_cost' from 2 to 1 and increasing
'effective_cache_size' from 70% to 80% of RAM solved this at least on my
virtual sandbox.
I've observed the same behaviour both on a weak virtual machine and on a quite
powerful stress-test platform.
The
Hi,
I have similar problem as in
http://www.postgresql.org/message-id/flat/52b311c4.1070...@gmail.com#52b311c4.1070...@gmail.com
server version is 9.3.4
There are only two quite simple tables:
db_new=# \d activities_example
Table public.activities_example
Column | Type | Modifiers
2014-10-05 21:57 GMT+03:00 Andrey Lizenko lizenk...@gmail.com:
Increasing 'effective_cache_size' leads to a similar thing with a merge join;
other options (work_mem, shared_buffers, etc.) do not change anything.
I think increasing `work_mem` should have an effect, as the plan with `Nested
Loop` is
Andrey Lizenko lizenk...@gmail.com writes:
What is the reason for the Seq Scan on activities_example in the first case?
Is it possible to force the optimizer to choose the second plan without doing
set enable_hashjoin = off; ?
Disabling hashjoins altogether would be a pretty dangerous fix.
I think the
On Thu, May 15, 2014 at 10:52 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Scott Marlowe scott.marl...@gmail.com writes:
OK so we have a query that does OK in 8.4, goes to absolute crap in
9.2 and then works great in 9.3. Thing is we've spent several months
regression testing 9.2 and no time testing
OK so we have a query that does OK in 8.4, goes to absolute crap in
9.2 and then works great in 9.3. Thing is we've spent several months
regression testing 9.2 and no time testing 9.3, so we can't just go
to 9.3 in an afternoon. But we might have to. 9.2 seems hopelessly
broken here.
The query
Scott Marlowe scott.marl...@gmail.com writes:
OK so we have a query that does OK in 8.4, goes to absolute crap in
9.2 and then works great in 9.3. Thing is we've spent several months
regression testing 9.2 and no time testing 9.3, so we can't just go
to 9.3 in an afternoon. But we might have
On Thu, May 15, 2014 at 9:35 AM, Scott Marlowe scott.marl...@gmail.com wrote:
OK so we have a query that does OK in 8.4, goes to absolute crap in
9.2 and then works great in 9.3. Thing is we've spent several months
regression testing 9.2 and no time testing 9.3, so we can't just go
to 9.3 in
On 29/12/2013 19:51, Jeff Janes wrote:
On Thursday, December 19, 2013, Marc Cousin wrote:
Yeah, I had forgotten to set it up correctly on this test environment
(its value is correctly set in production environments). Putting it to a
few gigabytes here gives me this cost:
On Thursday, December 19, 2013, Marc Cousin wrote:
Yeah, I had forgotten to set it up correctly on this test environment
(its value is correctly set in production environments). Putting it to a
few gigabytes here gives me this cost:
bacula=# explain select pathid, filename from batch join
Hi,
I'm having something I feel is a bit of a limitation of the optimizer (or
something I don't understand :) ).
Sorry, this is a rather long mail.
I have a workaround for the problem below, but I don't like cheating the
optimizer for no good reason.
First a little bit of context, because
QUERY PLAN
--
Nested Loop (cost=0.56..4001768.10 rows=479020 width=26) (actual
time=2.303..15371.237
On 19/12/2013 19:33, Jeff Janes wrote:
QUERY PLAN
--
Nested Loop (cost=0.56..4001768.10
Marc Cousin cousinm...@gmail.com wrote:
Then we insert missing paths. This is one of the plans that fail
insert into path (path)
select path from batch
where not exists
(select 1 from path where path.path=batch.path)
group by path;
I know you said you wanted to focus
On 19/12/2013 21:36, Kevin Grittner wrote:
Marc Cousin cousinm...@gmail.com wrote:
Then we insert missing paths. This is one of the plans that fail
insert into path (path)
select path from batch
where not exists
(select 1 from path where path.path=batch.path)
group
Hi all,
I'm migrating a web application to an ORM framework (Doctrine) so I need a
new way to get statistics about entities into the application without
importing all data, only the results (e.g. load the total number of children
instead of loading all children into the application and counting them
Mathieu De Zutter math...@dezutter.org writes:
The problem is that this join is performing very badly when more than one
work is involved. It chooses a plan that is orders of magnitude slower.
I have attached
- The (simplified) table definitions
- The (simplified) view
- Two queries with
Hi all,
I'm looking at this query plan, an excerpt of which is shown here, and I am
just wondering how the estimated cost for the Nested Loop is calculated?
-> Nested Loop (cost=0.00..2888.16 rows=240 width=16) (actual
time=0.034..2.180 rows=91 loops=1)
Output: public.mg.lctime,
pg noob pgn...@gmail.com writes:
I'm looking at this query plan, an excerpt of which is shown here, and I am
just wondering how the estimated cost for the Nested Loop is calculated?
http://www.postgresql.org/docs/9.2/static/using-explain.html
There's a nestloop example about a third of the way
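(Very roughly, what that documentation example walks through: the nested-loop estimate is about the outer path's cost, plus the outer row count times the cost of re-running the inner path, plus small per-tuple CPU charges; those pieces sum to totals like the 2888.16 above.)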
Howdy, I've been debugging a client's slow query today and I'm curious
about the query plan. It's picking a plan that hashes lots of rows from the
versions table (on v9.0.10)...
EXPLAIN ANALYZE
SELECT COUNT(*) FROM notes a WHERE
a.project_id = 114 AND
EXISTS (
SELECT 1 FROM note_links b
Matt Daw m...@shotgunsoftware.com writes:
Howdy, I've been debugging a client's slow query today and I'm curious
about the query plan. It's picking a plan that hashes lots of rows from the
versions table (on v9.0.10)...
EXPLAIN ANALYZE
SELECT COUNT(*) FROM notes a WHERE
a.project_id = 114
Hi Tom, thank you very much. I'll load these tables onto a 9.2 instance and
report back.
Matt
On Fri, Sep 28, 2012 at 2:44 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Matt Daw m...@shotgunsoftware.com writes:
Howdy, I've been debugging a client's slow query today and I'm curious
about the query
Hi Tom, v9.2.1 looks good!
Aggregate (cost=420808.99..420809.00 rows=1 width=0) (actual
time=147.345..147.345 rows=1 loops=1)
-> Nested Loop Semi Join (cost=0.00..420786.71 rows=8914 width=0)
(actual time=13.847..147.219 rows=894 loops=1)
  -> Index Scan using
John Papandriopoulos dr.j...@gmail.com writes:
The memory issue has indeed disappeared---there was no noticeable memory
increase in the three queries below, with 4096 children. Inheritance
planning overhead is around 20x for UPDATE/DELETE compared to SELECT;
thankfully they are required
On 12/6/10 10:03 AM, Tom Lane wrote:
John Papandriopoulosdr.j...@gmail.com writes:
I am still wondering whether the inheritance_planner(...) can be avoided if
the rowtypes of children are the same as the parent?
Possibly, but it's far from a trivial change. The difficulty is that
you'd
On 12/4/10 3:38 PM, Jochen Erwied wrote:
Sunday, December 5, 2010, 12:19:29 AM you wrote:
Hmmm, what happens if I need 10 years of data, in monthly partitions? It
would be 120 partitions. Can you please elaborate on that limitation?
Any plans on lifting that restriction?
I'm running a
On 12/4/10 2:40 PM, Tom Lane wrote:
[ pokes at that for a bit ... ] Ah, I had forgotten that UPDATE/DELETE
go through inheritance_planner() while SELECT doesn't. And
inheritance_planner() makes a copy of the querytree, including the
already-expanded range table, for each target relation. So
On 12/4/10 3:19 PM, Mladen Gogala wrote:
Tom Lane wrote:
Partitioning using these techniques will work well with up to
perhaps a hundred partitions; don't try to use many thousands of
partitions.
regards, tom lane
Hmmm, what happens if I need 10 years of data, in monthly partitions? It
John Papandriopoulos dr.j...@gmail.com writes:
On 12/4/10 2:40 PM, Tom Lane wrote:
[ pokes at that for a bit ... ] Ah, I had forgotten that UPDATE/DELETE
go through inheritance_planner() while SELECT doesn't. And
inheritance_planner() makes a copy of the querytree, including the
On 12/5/10 12:14 PM, Tom Lane wrote:
I wrote:
You could get rid of the memory growth, at the cost of a lot of
tree-copying, by doing each child plan step in a discardable memory
context. I'm not sure that'd be a win for normal sizes of inheritance
trees though --- you'd need to copy the
On 12/3/10 10:20 PM, Tom Lane wrote:
John Papandriopoulosdr.j...@gmail.com writes:
I've found that a k-ary table inheritance tree works quite well to
reduce the O(n) CHECK constraint overhead [1] in the query planner
when enabling partition constraint exclusion.
Um ... you mean you're
John Papandriopoulos dr.j...@gmail.com writes:
I've recreated the same example with just one parent table, and 4096 child
tables.
SELECT query planning is lightning fast as before; DELETE and UPDATE cause my
machine to swap.
What's different about DELETE and UPDATE here?
Hmm. Rules?
On 12/4/10 8:42 AM, Tom Lane wrote:
John Papandriopoulosdr.j...@gmail.com writes:
I've recreated the same example with just one parent table, and 4096 child
tables.
SELECT query planning is lightning fast as before; DELETE and UPDATE cause my
machine to swap.
What's different about
Tom Lane wrote:
Hmm. Rules? Triggers? You seem to be assuming the problem is at the
planner stage but I'm not sure you've proven that.
regards, tom lane
Hmmm, I vaguely recollect a similar thread, started by me, although with
fewer partitions. In my experience,
John Papandriopoulos dr.j...@gmail.com writes:
I set up my schema using a machine-generated SQL file [1] that simply
creates a table
create table ptest ( id integer );
and N = 0..4095 inherited children
create table ptest_N (
check ( (id >= N_min) and (id <= N_max) )
)
Tom Lane wrote:
Partitioning using these techniques will work well with up to
perhaps a hundred partitions; don't try to use many thousands of
partitions.
regards, tom lane
Hmmm, what happens if I need 10 years of data, in monthly partitions? It
Sunday, December 5, 2010, 12:19:29 AM you wrote:
Hmmm, what happens if I need 10 years of data, in monthly partitions? It
would be 120 partitions. Can you please elaborate on that limitation?
Any plans on lifting that restriction?
I'm running a partitioning scheme using 256 tables with a
Hi everyone,
I've been trialling different inheritance schemes for partitioning into a large
number of tables. I am looking at ~1e9 records, totaling ~200GB.
I've found that a k-ary table inheritance tree works quite well to reduce the
O(n) CHECK constraint overhead [1] in the query planner
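(A minimal sketch of the idea; all names and ranges here are illustrative, only the general shape comes from the post. Each intermediate table carries a coarse CHECK range and has only k children, instead of one parent with thousands of leaves:
CREATE TABLE ptest (id integer);
CREATE TABLE ptest_0_1023 (CHECK (id >= 0 AND id <= 1023)) INHERITS (ptest);
CREATE TABLE ptest_0_63   (CHECK (id >= 0 AND id <= 63))   INHERITS (ptest_0_1023);
CREATE TABLE ptest_0_15   (CHECK (id >= 0 AND id <= 15))   INHERITS (ptest_0_63);
)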
John Papandriopoulos dr.j...@gmail.com writes:
I've found that a k-ary table inheritance tree works quite well to
reduce the O(n) CHECK constraint overhead [1] in the query planner
when enabling partition constraint exclusion.
Um ... you mean you're creating intermediate child tables for no
On Mon, Oct 5, 2009 at 8:35 PM, Guy Rouillier guyr-...@burntmail.com wrote:
Grzegorz Jaśkiewicz wrote:
well, as a rule of thumb - unless you can't think of a default value for a
column - don't use nulls. So using nulls as a default 'idunno' is a bad
practice, but everybody's opinion on that
Grzegorz Jaśkiewicz gryz...@gmail.com wrote:
Guy Rouillier guyr-...@burntmail.com wrote:
Grzegorz Jaśkiewicz wrote:
using nulls as default 'idunno' - is a bad practice
I don't understand this point of view. The concept of null was
introduced into the SQL vernacular by Codd and Date
Kevin Grittner wrote:
Grzegorz Jaśkiewicz gryz...@gmail.com wrote:
A failing of the SQL standard is that it uses the same mark (NULL) to
show the absence of a value because it is unknown as for the case
where it is known that no value exists (not applicable). Codd argued
for a distinction
Guy Rouillier guyr-...@burntmail.com wrote:
Kevin Grittner wrote:
A failing of the SQL standard is that it uses the same mark (NULL)
to show the absence of a value because it is unknown as for the
case where it is known that no value exists (not applicable). Codd
argued for a distinction
Kevin Grittner wrote:
Which leaves the issue open -- a flexible way to flag the *reason* (or
*reasons*) for the absence of a value could be a nice enhancement, if
someone could invent a good implementation. Of course, one could
always add a column to indicate the reason for a NULL; and perhaps
Craig James wrote:
Kevin Grittner wrote:
Which leaves the issue open -- a flexible way to flag the *reason* (or
*reasons*) for the absence of a value could be a nice enhancement, if
someone could invent a good implementation. Of course, one could
always add a column to indicate the reason for
mnw21-modmine-r13features-copy=# select count(*) from project;
count
---
10
(1 row)
mnw21-modmine-r13features-copy=# select count(*) from intermineobject;
count
--
26344616
(1 row)
mnw21-modmine-r13features-copy=# \d intermineobject;
Table public.intermineobject
Column |
On Mon, Oct 5, 2009 at 2:52 PM, Matthew Wakeling matt...@flymine.org wrote:
mnw21-modmine-r13features-copy=# select count(*) from project;
count
---
10
(1 row)
mnw21-modmine-r13features-copy=# select count(*) from intermineobject;
count
--
26344616
(1 row)
2009/10/5 Matthew Wakeling matt...@flymine.org
Yes, that does work, but only because id is NOT NULL. I thought Postgres
8.4 had had a load of these join types unified to make it less important how
the query is written?
well, as a rule of thumb - unless you can't think of a default value of
Matthew Wakeling matt...@flymine.org writes:
Yes, that does work, but only because id is NOT NULL. I thought Postgres
8.4 had had a load of these join types unified to make it less important
how the query is written?
NOT IN is not easily optimizable because of its odd behavior in the
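(For reference, a common rewrite that sidesteps NOT IN's behaviour with NULLs is an anti-join via NOT EXISTS; the table names are from this thread, the join column is an assumption:
SELECT count(*) FROM project p
WHERE NOT EXISTS (SELECT 1 FROM intermineobject o WHERE o.id = p.id);
)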
Grzegorz Jaśkiewicz wrote:
well, as a rule of thumb - unless you can't think of a default value for a
column - don't use nulls. So using nulls as a default 'idunno' is a bad
practice, but everybody's opinion on that differs.
I don't understand this point of view. The concept of null was
Brian Herlihy wrote:
We have a problem with some of our query plans. One of our
tables is quite volatile, but postgres always uses the last
statistics snapshot from the last time it was analyzed for query
planning. Is there a way to tell postgres that it should not
trust the statistics for
Hi,
We have a problem with some of our query plans. One of our tables is quite
volatile, but postgres always uses the last statistics snapshot from the last
time it was analyzed for query planning. Is there a way to tell postgres that
it should not trust the statistics for this table?
On Dec 9, 2008, at 3:27 PM, Tom Lane wrote:
Richard Yen [EMAIL PROTECTED] writes:
I've discovered a peculiarity with using btrim in an index and was
wondering if anyone has any input.
What PG version is this?
This is running on 8.3.3
In particular, I'm wondering if it's one of the early
Richard Yen [EMAIL PROTECTED] writes:
You guys are right. I tried Miller and it gave me the same result. Is
there any way to tune this so that for the common last names, the
query run time doesn't jump from 1s to 300s?
If the planner's estimation is that far off then there must be something
You guys are right. I tried Miller and it gave me the same result. Is there
any way to tune this so that for the common last names, the query run time
doesn't jump from 1s to 300s?
Thanks for the help!
Can you send the output of EXPLAIN ANALYZE for both cases?
...Robert
BTW, if your queries typically constrain both lastname and firstname,
it'd likely be worthwhile to make a 2-column index on
lower(btrim(x_lastname)), lower(btrim(x_firstname))
regards, tom lane
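(A minimal sketch of that index; the index name is made up, the table name m_object_paper is taken from the query shown in this thread:
CREATE INDEX m_object_paper_name_idx
  ON m_object_paper (lower(btrim(x_lastname)), lower(btrim(x_firstname)));
)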
On Dec 10, 2008, at 11:34 AM, Tom Lane wrote:
Richard Yen [EMAIL PROTECTED] writes:
You guys are right. I tried Miller and it gave me the same result. Is
there any way to tune this so that for the common last names, the
query run time doesn't jump from 1s to 300s?
If the planner's
On Dec 10, 2008, at 11:39 AM, Robert Haas wrote:
You guys are right. I tried Miller and it gave me the same result. Is there
any way to tune this so that for the common last names, the query run time
doesn't jump from 1s to 300s?
Thanks for the help!
Can you send the output of EXPLAIN
tii=# explain analyze SELECT m_object_paper.id FROM m_object_paper,
m_assignment WHERE m_object_paper.assignment = m_assignment.id AND
m_object_paper.owner=-1 AND m_assignment.class = 2450798 AND
lower(btrim(x_firstname)) = lower(btrim('Jordan')) and
lower(btrim(x_lastname)) =
Richard Yen [EMAIL PROTECTED] writes:
Is there any way to tune this so that for the common last names, the query
run time doesn't jump from 1s to 300s?
Well, as near as I can tell there's a factor of a couple hundred
difference between the frequencies of 'smith' and 'smithers', so
you shouldn't
On Dec 10, 2008, at 4:08 PM, Tom Lane wrote:
Richard Yen [EMAIL PROTECTED] writes:
Is there any way to tune this so that for the common last names,
the query
run time doesn't jump from 1s to 300s?
Well, as near as I can tell there's a factor of a couple hundred
difference between the
Hi,
I've discovered a peculiarity with using btrim in an index and was
wondering if anyone has any input.
My table is like this:
Table public.m_object_paper
Column| Type | Modifiers
On Tue, Dec 9, 2008 at 2:56 PM, Richard Yen [EMAIL PROTECTED] wrote:
In practice, the difference is 300+ seconds when $LASTNAME == 5 chars and 1
second when $LASTNAME != 5 chars.
Would anyone know what's going on here? Is there something about the way
btrim works, or perhaps with the way
Richard Yen [EMAIL PROTECTED] writes:
I've discovered a peculiarity with using btrim in an index and was
wondering if anyone has any input.
What PG version is this?
In particular, I'm wondering if it's one of the early 8.2.x releases,
which had some bugs in and around choose_bitmap_and()
On Jul 18, 2008, at 5:28 AM, Stefan Zweig wrote:
CREATE TABLE nw_tla_2008_4_deu
(
  "ID" bigint NOT NULL,
  "NET2CLASS" smallint,
  "FOW" smallint,
  CONSTRAINT nw_tla_2008_4_deu_pkey PRIMARY KEY ("ID")
)
WITHOUT OIDS;
You might want to give up on the double-quotes... you'll have to use
them everywhere.
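(A sketch of the unquoted variant, with lower-case identifiers so quoting is never needed; same structure as above:
CREATE TABLE nw_tla_2008_4_deu
(
  id bigint NOT NULL,
  net2class smallint,
  fow smallint,
  CONSTRAINT nw_tla_2008_4_deu_pkey PRIMARY KEY (id)
)
WITHOUT OIDS;
)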
Hi, I have a timestamptz field that I want to use with a query, but I
don’t need the full timestamp resolution, so I’ve created a
day_trunc(timestamptz) immutable function which I’ll use with the
query and with a new index:
logs= create index test_idx on blackbox (day_trunc(ts));
However,
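(For context, a minimal sketch of such a function; the body is an assumption, since the post only gives the name and signature:
CREATE FUNCTION day_trunc(timestamptz) RETURNS timestamptz AS $$
    SELECT date_trunc('day', $1);
$$ LANGUAGE sql IMMUTABLE;
date_trunc on timestamptz actually depends on the session timezone, which is why such a wrapper has to be explicitly declared IMMUTABLE before it can be used in an index expression.)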
Giorgio Valoti wrote:
Hi, I have a timestamptz field that I want to use with a query, but I
don’t need the full timestamp resolution, so I’ve created a
day_trunc(timestamptz) immutable function which I’ll use with the query
and with a new index:
logs= create index test_idx on blackbox
On 07/ago/08, at 10:35, Richard Huxton wrote:
Giorgio Valoti wrote:
Hi, I have a timestamptz field that I want to use with a query, but
I don’t need the full timestamp resolution, so I’ve created a
day_trunc(timestamptz) immutable function which I’ll use with the
query and with a new
Giorgio Valoti [EMAIL PROTECTED] writes:
GroupAggregate (cost=98431.58..119773.92 rows=74226 width=8)
  -> Sort (cost=98431.58..99050.92 rows=247736 width=8)
       Sort Key: (day_trunc(ts))
       -> Seq Scan on blackbox (cost=0.00..72848.36 rows=247736 width=8)
On 07/ago/08, at 17:50, Tom Lane wrote:
Giorgio Valoti [EMAIL PROTECTED] writes:
GroupAggregate (cost=98431.58..119773.92 rows=74226 width=8)
  -> Sort (cost=98431.58..99050.92 rows=247736 width=8)
       Sort Key: (day_trunc(ts))
       -> Seq Scan on blackbox (cost=0.00..72848.36
On 07/ago/08, at 20:37, Giorgio Valoti wrote:
[…]
If you haven't mucked with the cost parameters, the only way I can
think
of to get this result is to have an enormously bloated table that's
mostly empty. Maybe you need to review your vacuuming procedures.
I’ll review them.
I’ve
Giorgio Valoti [EMAIL PROTECTED] writes:
On 07/ago/08, at 17:50, Tom Lane wrote:
These numbers seem pretty bogus: there is hardly any scenario in
which a
full-table indexscan should be costed as significantly cheaper than a
seqscan. Have you put in silly values for random_page_cost?
No,
On 07/ago/08, at 23:01, Tom Lane wrote:
Giorgio Valoti [EMAIL PROTECTED] writes:
On 07/ago/08, at 17:50, Tom Lane wrote:
These numbers seem pretty bogus: there is hardly any scenario in
which a
full-table indexscan should be costed as significantly cheaper
than a
seqscan. Have you put in
Hi list,
I have a problem with a time-consuming query. First of all, my table structure:
CREATE TABLE nw_tla_2008_4_deu
(
  "ID" bigint NOT NULL,
  "NET2CLASS" smallint,
  "FOW" smallint,
  CONSTRAINT nw_tla_2008_4_deu_pkey PRIMARY KEY ("ID")
)
WITHOUT OIDS;
CREATE INDEX nw_tla_2008_4_deu_fow_idx
ON
Removing the constants definitely did take care of the issue on 8.3 (still
same query plan on 8.1). Thanks for your help in getting this resolved, and
sorry again for not including all relevant information on my initial request
On Fri, Apr 4, 2008 at 10:20 PM, Tom Lane [EMAIL PROTECTED] wrote: