Hello,
I wonder why Postgres does not use an index in the query below? It is
quite a common use case: you want to sort records by an arbitrary set
of columns but do not want to create a lot of compound indexes for all
possible combinations of them. It seems that if, for instance, your
query
It was just a minimal example. The real query looks like this.
select *
from commons.financial_documents fd
where fd.creation_time < '2011-11-07 10:39:07.285022+08'
or (fd.creation_time = '2011-11-07 10:39:07.285022+08' and
fd.financial_document_id < 100)
order by fd.creation_time desc
limit
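A common rewrite for this kind of keyset pagination is a row-value
comparison, which PostgreSQL can match against a single compound index
via a backward index scan. A minimal sketch, not from the original
thread; the index name and the limit value are illustrative:

create index fd_time_id_idx
    on commons.financial_documents (creation_time, financial_document_id);

select *
from commons.financial_documents fd
where (fd.creation_time, fd.financial_document_id)
      < ('2011-11-07 10:39:07.285022+08', 100)
order by fd.creation_time desc, fd.financial_document_id desc
limit 20;  -- illustrative; the original limit is truncated

Note the second sort key: adding financial_document_id to the ORDER BY
makes the ordering total, which is what lets the one index satisfy
both the row comparison and the sort.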
08.12.2010 22:46, Tom Lane writes:
Are you by any chance restoring from an 8.3 or older pg_dump file made
on Windows? If so, it's a known issue.
No, I tried Linux only.
Not without a complete reproducible example ... and not at all if it's
the known problem. The fix for that is to update pg
I investigated this issue a bit more. The -j option slows down BLOB
restoration: it's about 1000 times slower if you specify this option.
Does anybody plan to fix it?
I have encountered a problem while restoring the database. There is a
table that contains XML data (BLOB), ~ 3 000 000 records, ~ 5.5Gb of
data. pg_restore has been running for a week without any considerable
progress. There are plenty of lines like these in the log:
pg_restore: processing ite
There is a partitioned table with 2 partitions:
drop table if exists p cascade;
create table p (
id bigint not null primary key,
ts timestamp);
create table p_actual ( check (ts is null) ) inherits (p);
create table p_historical ( check (ts is not null) ) inherits (p);
-- I skipped the tri
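The snippet cuts off at the trigger. For context, a minimal sketch of
the redirect trigger this kind of inheritance partitioning needs; the
function and trigger names are illustrative, not recovered from the
post:

-- Route inserts on the parent to the matching child; returning null
-- suppresses the row in the parent itself.
create or replace function p_insert_router() returns trigger as $$
begin
    if new.ts is null then
        insert into p_actual values (new.*);
    else
        insert into p_historical values (new.*);
    end if;
    return null;
end;
$$ language plpgsql;

create trigger p_insert
    before insert on p
    for each row execute procedure p_insert_router();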
2010/4/28 Robert Haas :
On Mon, Apr 26, 2010 at 5:33 AM, Cédric Villemain
wrote:
In the first query, the planner doesn't use the information from the
values 2, 3, 4. It just does: I'll bet I'll have 2 rows in t1 (I think
it should say 3, but it doesn't).
So it divides the estimated number of ro
On Thu, Apr 22, 2010 at 10:37 PM, Vlad Arkhipov wrote:
I don't think this is just an issue with statistics, because the same
problem arises when I try executing a query like this:
I'm not sure how you think this proves that it isn't a problem with
statistics, but I
Greg Smith writes:
I can't replicate your problem on the current development 9.0; all
three plans come back with results quickly when I just tried it:
Nested Loop (cost=0.00..50.76 rows=204 width=32) (actual
time=0.049..0.959 rows=200 loops=1)
-> Seq Scan on t1 (cost=0.00..1.06 rows=1 widt
Greg Smith writes:
Vlad Arkhipov wrote:
Please do this small optimization if it is possible. It seems that the
optimizer has all the information to create a fast plan but it does
not do that.
This isn't strictly an optimization problem; it's an issue with
statistics the optimizer h
Please do this small optimization if it is possible. It seems that the
optimizer has all the information to create a fast plan but it does not
do that.
create temp table t1 (id bigint, t bigint);
insert into t1 values (1, 1);
insert into t1 values (2, 2);
insert into t1 values (2, 3);
insert i
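One thing worth noting about this test case (a general remedy, not
necessarily the thread's conclusion): autovacuum never analyzes
temporary tables, so without an explicit ANALYZE the planner is
guessing at t1's contents.

-- Temp tables are invisible to autovacuum; analyze by hand after
-- loading so the planner sees real row counts and value frequencies.
analyze t1;
explain select * from t1 where t = 2;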
Is there a way to limit the I/O bandwidth/CPU usage of a certain
backend? It seems that ionice/renice makes no sense because the
bgwriter/WAL writer processes are not part of a backend. I have a
periodic query (every hour) that generates a huge I/O load and should
run in the background. When this query runs all
I have a problem with a part of a big query because of incorrect
estimation. It's easy to emulate the case:
create table a (id bigint, id2 bigint);
create table b (id bigint, id2 bigint);
insert into a (id, id2)
select random() * 10, random() * 100
from generate_series(1, 10);
insert int
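The test case is truncated here, so as a generic first step (not the
thread's resolution): compare the planner's guess with reality, and if
the estimate is far off, give the sampler more detail and retry.

analyze a;
analyze b;
-- Estimated vs. actual row counts appear side by side in the output.
explain analyze select * from a join b using (id);
-- If the estimate is badly off, raise the sampling detail and retry.
alter table a alter column id set statistics 1000;
analyze a;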
I have a big table that is used in many queries. The most used index
is created on a date field. The number of records in this table where
the date field is a Saturday is about 5 times smaller than on other
days, and on Sunday this number is always 0. The statistics target is
1000. Many queries have problems when the condition o
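A way to check whether a statistics target of 1000 actually captured
the weekday skew is to inspect pg_stats directly; the table and column
names below are placeholders:

-- Most-common values and their frequencies for the date column; if the
-- skewed days don't show up here, the planner cannot know about them.
select most_common_vals, most_common_freqs, n_distinct
from pg_stats
where tablename = 'big_table'    -- placeholder name
  and attname = 'date_field';    -- placeholder name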
I wonder why the default value of join_collapse_limit is set to 8
while geqo_threshold is 12. The optimizer doesn't change the order of
JOINs in queries that contain from 8 to 11 tables. Why is that the
'wise' decision the documentation says it is?
from_collapse_limit (integer)
The planner will merge sub-queries
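For reference, the workaround usually suggested for the 8-to-11-table
gap is simply to raise the collapse limits up to geqo_threshold; a
sketch, with the usual caveat that planning time grows with the limit:

-- Let the exhaustive planner reorder joins for up to 12 relations,
-- matching geqo_threshold.
set join_collapse_limit = 12;
set from_collapse_limit = 12;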
Tom Lane writes:
Vlad Arkhipov <[EMAIL PROTECTED]> writes:
I've just discovered a problem with a quite simple query. It's really
confusing me.
PostgreSQL 8.3.1, random_page_cost=1.1. All tables were analyzed before
the query.
What have you got effective_c
I've just discovered a problem with a quite simple query. It's really
confusing me.
PostgreSQL 8.3.1, random_page_cost=1.1. All tables were analyzed before
the query.
EXPLAIN ANALYZE
SELECT i.c, d.r
FROM i
JOIN d ON d.cr = i.c
WHERE i.dd between '2007-08-01' and '2007-08-30'
Hash Join (cost=2505.4
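Tom's truncated question above is presumably about
effective_cache_size; it is cheap to check and to override per session
while experimenting. The value below is illustrative, not a
recommendation:

show effective_cache_size;
-- Tells the planner roughly how much of the database the OS can cache,
-- which biases it toward index scans; size it to the machine's RAM.
set effective_cache_size = '4GB';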
I run on PostgreSQL 8.3 with default settings (I also tried changing
random_page_cost to close to 1).
What do I need to change to make the second query run as fast as the
first? Setting enable_hashjoin to off solves this problem, but it's
not an approach I can use.
Statistics for all columns are at level 1000.
e
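Setting enable_hashjoin = off globally is heavy-handed, but it can be
confined to one transaction while comparing plans; a testing sketch,
not a production fix:

begin;
set local enable_hashjoin = off;  -- reverts automatically at rollback
-- run EXPLAIN ANALYZE on the slow query here and compare the plans
rollback;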
PFC writes:
On Thu, 24 Apr 2008 03:14:54 +0200, Vlad Arkhipov
<[EMAIL PROTECTED]> wrote:
I found a strange issue in a very simple query. Statistics for all
columns are at level 1000, but I also tried other levels.
create table g (
id bigint primary key,
isgroup boolean not null);
Albe Laurenz writes:
Vlad Arkhipov wrote:
I found a strange issue in a very simple query.
You forgot to mention what your problem is.
Yours,
Laurenz Albe
It was written below in my first post:
"These queries are part of a big query and the optimizer put them on
the leaf of the query tre
I found a strange issue in a very simple query. Statistics for all
columns are at level 1000, but I also tried other levels.
create table g (
id bigint primary key,
isgroup boolean not null);
create table a (
groupid bigint references g(id),
id bigint,
unique(id, groupid));
analyze g;
analyz
Tom Lane writes:
Vlad Arkhipov <[EMAIL PROTECTED]> writes:
I've come across this issue while writing a report-like query for 2
not very large tables. I've tried several methods to resolve it (see
below), but now I'm really stuck...
It looks like you are wishin
I've come across this issue while writing a report-like query for 2
not very large tables. I've tried several methods to resolve it (see
below), but now I'm really stuck...
PostgreSQL 8.3, default configuration
There are 2 tables (the structure was simplified to show only the
problematic place):
crea