Re: [PERFORM] CREATE STATISTICS and join selectivity

2017-11-15 Thread David Rowley
On 16 November 2017 at 09:19, Justin Pryzby <pry...@telsasoft.com> wrote:
> I just noticed that PG10 CREATE STATISTICS (dependencies) doesn't seem to work
> for joins on multiple columns; is that right?

Unfortunately, for now, they're not used for join selectivity
estimates, only for the base rel selectivities. That's all there was
time for with PG10. This is highly likely to be improved sometime in
the future.
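As a rough illustration (the table and column names here are hypothetical),
the dependency statistics do still improve estimates for quals on a single
relation:

CREATE STATISTICS t1_ab_deps (dependencies) ON a, b FROM t1;
ANALYZE t1;

-- the combined selectivity of these quals can now use the dependency stats:
EXPLAIN SELECT * FROM t1 WHERE a = 1 AND b = 1;

-- but a multi-column join such as:
--   SELECT * FROM t1 JOIN t2 ON t1.a = t2.a AND t1.b = t2.b;
-- is still estimated from the per-column statistics only.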

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Unnecessary DISTINCT while primary key in SQL

2017-11-05 Thread David Rowley
On 5 November 2017 at 04:20, 刘瑞 <whx20...@gmail.com> wrote:
> CREATE TABLE test_tbl ( k INT PRIMARY KEY, col text)
> INSERT into test_tbl select generate_series(1,1000), 'test';
>
> SQL with DISTINCT:
> test=# explain analyze select distinct col, k from test_tbl order by k limit
> 1000;
> QUERY PLAN
> --
>  Limit  (cost=1277683.22..1277690.72 rows=1000 width=36) (actual
> time=12697.994..12698.382 rows=1000 loops=1)
>->  Unique  (cost=1277683.22..1329170.61 rows=6864985 width=36) (actual
> time=12697.992..12698.311 rows=1000 loops=1)
>  ->  Sort  (cost=1277683.22..1294845.68 rows=6864985 width=36)
> (actual time=12697.991..12698.107 rows=1000 loops=1)
>Sort Key: k, col
>Sort Method: external sort  Disk: 215064kB
>->  Seq Scan on test_tbl  (cost=0.00..122704.85 rows=6864985
> width=36) (actual time=0.809..7561.215 rows=1000 loops=1)
>  Planning time: 2.368 ms
>  Execution time: 12728.471 ms
> (8 rows)

The current planner does not make much of an effort to record
which columns remain distinct at each level. I have ideas on how to
improve this, and they would cover your case here.

9.6 did improve a slight variation of your query, but this was for
GROUP BY instead of DISTINCT. There's probably no reason why the same
optimisation could not be applied to DISTINCT; I just didn't think of
it when writing the patch.

The item from the release notes [1] reads "Ignore GROUP BY columns
that are functionally dependent on other columns"

So, if you were to write the query as:

explain analyze select col, k from test_tbl group by col, k order by k
limit 1000;

It should run much more quickly, although still not as fast as it could be.

[1] https://www.postgresql.org/docs/9.6/static/release-9-6.html

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Cheaper subquery scan not considered unless offset 0

2017-10-29 Thread David Rowley
On 30 October 2017 at 00:24, Benjamin Coutu <ben.co...@zeyos.com> wrote:
>   ->  Index Scan using "PK_items_ID" on items a  (cost=0.42..1.05 rows=1 
> width=332) (actual time=0.003..0.003 rows=1 loops=15521 total=46.563)

I've never seen EXPLAIN output like that before.

Is this some modified version of PostgreSQL?

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] How does max_parallel_workers_per_gather change load averages?

2017-10-03 Thread David Rowley
On 4 October 2017 at 08:48, Ben Nachtrieb  wrote:
> I have 2 cores and my max_parallel_workers_per_gather = 2 and
> max_worker_processes = 8, but my load averages are between 8 and 5 with
> scheduled at 1/189 to 5/195. Are these so high because I increased
> max_parallel_workers_per_gather? My understanding is that if my load
> averages are greater than my number of cores the system is overloaded.
> Should I think about it differently once I increase
> max_parallel_workers_per_gather? How should I think about it?

Parallel query is not 100% efficient. For example, doubling the CPU will,
in theory, never double the performance; there's always some overhead.
It's really only useful on systems with spare CPU cycles to perform the
extra work. You don't seem to have much to spare, so you may get along
better if you disable parallel query.
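If you want to try that, one minimal way (shown per session here; it can
also be set in postgresql.conf) is:

SET max_parallel_workers_per_gather = 0;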


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] query of partitioned object doesnt use index in qa

2017-09-20 Thread David Rowley
On 21 September 2017 at 04:15, Mike Broers <mbro...@gmail.com> wrote:
> Ultimately I think this is just highlighting the need in my environment to
> set random_page_cost lower (we are on an SSD SAN anyway..), but I dont think
> I have a satisfactory reason by the row estimates are so bad in the QA
> planner and why it doesnt use that partition index there.

Without the index there are no stats to allow the planner to perform a
good estimate on "e.body->>'SID' is not null", so it applies a default
selectivity of 99.5%. As a simple example, say a partition has 1
million rows; applying 99.5% to that gives 995000 rows. Now if
you add the selectivity for "e.validation_status_code = 'P' ", let's
say that's 50%, the row estimate for the entire WHERE clause would be
497500 (1000000 * 0.995 * 0.5). Since the 99.5% is applied in both
cases, the only variable part is validation_status_code. Perhaps
validation_status_code = 'P' is much more common in QA than in
production.

You can look at the stats as gathered by ANALYZE with:

\x on
select * from pg_stats where tablename = 'event__' and attname
= 'validation_status_code';
\x off

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] query of partitioned object doesnt use index in qa

2017-09-13 Thread David Rowley
On 14 September 2017 at 08:28, Mike Broers <mbro...@gmail.com> wrote:
> I have a query of a partitioned table that uses the partition index in
> production but uses sequence scans in qa.  The only major difference I can
> tell is the partitions are much smaller in qa.  In production the partitions
> range in size from around 25 million rows to around 60 million rows, in QA
> the partitions are between 4 and 12 million rows.  I would think this would
> be big enough to get the planner to prefer the index but this is the major
> difference between the two database as far as I can tell.


QA:

> │   ->  Seq Scan on event__ e_1
> (cost=0.00..2527918.06 rows=11457484 width=782)│
>

Production:
>
> │   ->  Index Scan using
> ix_event__00011162_landing_id on event__00011162 e_1  (cost=0.56..15476.59
> rows=23400 width=572)   │


If the QA partitions have between 4 and 12 million rows and the planner
estimates that 11457484 rows will match the condition, then a seq scan is
likely best here. If that estimate is off then it might be worth
double-checking that your nightly ANALYZE is working correctly on QA.

The planner might be coaxed into using the index with a
higher effective_cache_size and/or a lower random_page_cost setting,
although you really should be looking at those row estimates first.
Showing us the EXPLAIN ANALYZE would have been much more useful, so
that we could have seen whether those estimates are accurate.
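For example (the values below are purely illustrative, not
recommendations), these settings can be tried per session before changing
anything globally:

SET effective_cache_size = '8GB';
SET random_page_cost = 1.1;
-- then re-run the problem query under EXPLAIN ANALYZE and compare plans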

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Hi

2017-08-25 Thread David G. Johnston
On Thu, Aug 24, 2017 at 11:49 PM, Daulat Ram <daulat@cyient.com> wrote:

> Hello,
>
>
>
> Would I request to help me on this query.
>
>
>
> SELECT 'Inspection Completed' as "ALL Status" ,COUNT(*) as "Number of
> Count" FROM ud_document WHERE status = 'Inspection Completed' union SELECT
> 'Pending', COUNT(*) FROM ud_document WHERE status = 'Pending' union SELECT
> 'Approved', COUNT(*) FROM ud_document WHERE status = 'Approved' union
> SELECT 'Rejected', COUNT(*) FROM ud_document WHERE status = 'Rejected'
> union SELECT 'Payment Due',count(*) from ud_document where payment_status =
> 'Payment Due' union SELECT 'Payment Done' ,count(*) from ud_document where
> payment_status = 'Payment Done'
>
>
>
> And now I want to exclude the uniqueid= '201708141701018' from the above
> query. how it can be ???
>
>
>
Your use of UNION here seems unnecessary.  Just write a normal GROUP BY
aggregation query.  You might need to get a bit creative since you are
collapsing status and payment_status into a single column.  "CASE ... WHEN
... THEN ... ELSE ... END" is quite helpful for doing stuff like that.  For
now I'll just leave them as two columns.

​SELECT status, payment_status, count(*)
FROM ud_document
WHERE uniqueid <> '201708141701018'
GROUP BY 1, 2;
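A sketch of the single-column CASE variant mentioned above (note the
semantics differ slightly: each row lands in exactly one bucket here,
whereas the original UNION counts a row once per matching condition):

SELECT CASE WHEN payment_status IN ('Payment Due', 'Payment Done')
            THEN payment_status
            ELSE status
       END AS "ALL Status",
       count(*) AS "Number of Count"
FROM ud_document
WHERE uniqueid <> '201708141701018'
GROUP BY 1;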

David J.


Re: [PERFORM] Query plan for views and WHERE clauses, Luke is not using the index

2017-08-21 Thread David Rowley
On 19 August 2017 at 04:46, kimaidou <kimai...@gmail.com> wrote:
> When we call the WHERE on the view:
>
> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)
> SELECT *
> FROM "qgep"."vw_qgep_reach"
> WHERE "progression_geometry" &&
> st_makeenvelope(1728327.03249295568093657,8240789.26074041239917278,1728608.10987572139129043,8240958.16933418624103069,3949)
>
>
> The query plan is "wrong", as PostgreSQL seems to consider it should do a
> seq scan on the tables, and only afterwards filter with the WHERE:
> https://explain.depesz.com/s/wXV
>
> The query takes about 1 second instead of less than 100ms.
>
> Do you have any hint on this kind of issue ?

This is by design due to the DISTINCT ON() clause. Only quals that
filter columns appearing in the DISTINCT ON clause can be safely pushed down.

Consider the following, where I've manually pushed the WHERE clause.

postgres=# create table tt (a int, b int);
CREATE TABLE
postgres=# create index on tt (a);
CREATE INDEX
postgres=# insert into tt values(1,1),(1,2),(2,1),(2,2);
INSERT 0 4
postgres=# select * from (select distinct on (a) a,b from tt order by
a,b) tt where b = 2;
 a | b
---+---
(0 rows)


postgres=# select * from (select distinct on (a) a,b from tt where b =
2 order by a,b) tt;
 a | b
---+---
 1 | 2
 2 | 2
(2 rows)

Note the results are not the same.

If I'd done WHERE a = 2, then the planner would have pushed the qual
down into the subquery.

More reading in check_output_expressions() in allpaths.c:

/* If subquery uses DISTINCT ON, check point 3 */
if (subquery->hasDistinctOn &&
    !targetIsInSortList(tle, InvalidOid, subquery->distinctClause))
{
    /* non-DISTINCT column, so mark it unsafe */
    safetyInfo->unsafeColumns[tle->resno] = true;
    continue;
}

The comment for point 3 reads:

 * 3. If the subquery uses DISTINCT ON, we must not push down any quals that
 * refer to non-DISTINCT output columns, because that could change the set
 * of rows returned.  (This condition is vacuous for DISTINCT, because then
 * there are no non-DISTINCT output columns, so we needn't check.  Note that
 * subquery_is_pushdown_safe already reported that we can't use volatile
 * quals if there's DISTINCT or DISTINCT ON.)


-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Unlogged tables

2017-08-09 Thread David G. Johnston
On Wed, Aug 9, 2017 at 3:39 AM, Michael Paquier <michael.paqu...@gmail.com>
wrote:

> This triggers a table rewrite and makes sure that all the data gets
> WAL-logged. The cost to pay for durability.
>
> > Is there a way to get my cake and eat it too?
>
> Not completely. Making data durable will have a cost at the end, but
> you can leverage it.
>
>
Aren't you over-playing the role of the WAL in providing durability? An
unlogged table remains intact after a clean shutdown, and so is "durable" if
one considers the primary "permanence" aspect of the word.

The trade-off the OP wishes for is "lose crash-safety to gain write-once
(to the data files) performance".  Seemingly, having this on a per-table
basis would be part of the appeal.  It sounds like the OP would be willing to
place the table into "read only" mode in order to ensure this - which is
something that is not presently possible.  I could envision that putting an
unlogged table into read-only mode would cause the system to ensure that
the data files are fully populated and then set a flag in the catalog that
informs the crash recovery process to go ahead and omit truncating that
particular unlogged table since the data files are known to be accurate.

David J.


Re: [PERFORM] join estimate of subqueries with range conditions and constraint exclusion

2017-06-03 Thread David G. Johnston
On Wed, May 24, 2017 at 2:17 PM, Justin Pryzby <pry...@telsasoft.com> wrote:

> We got bitten again by what appears to be the same issue I reported
> (perhaps
> poorly) here:
> https://www.postgresql.org/message-id/20170326193344.
> GS31628%40telsasoft.com
>
> We have PG9.6.3 table heirarchies partitioned by time.  Our reports use
> subqueries each with their own copies of a range clauses on time column, as
> needed to get constraint exclusion reference:
> https://www.postgresql.org/message-id/25076.1366321335%40sss.pgh.pa.us
>
> SELECT * FROM
> (SELECT * FROM t WHERE col>const) a JOIN
> (SELECT * FROM t WHERE col>const) b USING (col)
>
> I'm diagnosing a bad estimate/plan due to excessively high n_distinct
> leading
> to underestimated rowcount when selecting from a small fraction of the
> table
> heirarchy.  This leads intermittently to bad things, specifically a
> cascade of
> misestimates and associated nested loops around millions of rows.
>

​Justin,

I'm not going to be much help personally but I just wanted to say that with
PGCon just completed and Beta1 just starting, combined with the somewhat
specialized nature of the problem, a response should be forthcoming even
though it's taking a bit longer than usual.

David J.


Re: [PERFORM] Can postgresql plan a query using multiple CPU cores?

2017-05-22 Thread David G. Johnston
On Monday, May 22, 2017, Clemens Eisserer <linuxhi...@gmail.com> wrote:

> Hi,
>
> I have a letancy-sensitive legacy application, where the time consumed
> by query planning was always causing some headaches.
> Currently it is running on postgresql-8.4 - will postgresql-10 support
> generating plans using multiple CPU cores to reduce the time required
> to generate a single plan?
>
>
My understanding, from both list monitoring and the release notes, is that
query parallelization happens only during execution, not planning.  A
single process is still responsible for evaluating all (possibly partial)
plans and picking the best one - flagging those plan steps that can
leverage parallelism for possible execution.

David J.


Re: [PERFORM] Inefficient max query when using group by

2017-05-04 Thread David Rowley
On 4 May 2017 at 22:52,  <jesse.hieta...@vaisala.com> wrote:
> I have a performance problem with my query. As a simplified example, I have
> a table called Book, which has three columns: id, released (timestamp) and
> author_id. I have a need to search for the latest books released by multiple
> authors, at a specific point in the history. This could be latest book
> between beginning of time and now, or latest book released last year etc. In
> other words, only the latest book for each author, in specific time window.
> I have also a combined index for released and author_id columns.
>
> First, I tried a simple query that selects maximum value of released and the
> author_id, which are grouped by the author_id (then later do a join by these
> author_id, released columns to get the whole rows).  Performance of this
> query is pretty bad (Execution time around 250-300ms for five authors). See
> query and query plan in the link below:
>
> https://gist.github.com/jehie/ca9fac16b6e3c19612d815446a0e1bc0
>
>
>
> The execution time seems to grow linearly when the number of author_ids
> increase (50ms per author_id). I don’t completely understand why it takes so
> long for this query to execute and why it does not use the directional index
> scan?
>
> I also tried second query using limit (where I can only ask for one
> author_id at a time, so cannot use this directly when searching for books of
> multiple author), which performs nicely (0.2ms):
>
> https://gist.github.com/jehie/284e7852089f6debe22e05c63e73027f
>
>
>
> So, any ideas how to make multiple-author lookups (like in the first query)
> perform better? Or any other ideas?

Yes, you could sidestep the whole issue by using a LATERAL join.

Something like:

EXPLAIN ANALYZE
SELECT b.released, b.author_id
FROM (VALUES('1'),('2'),('3'),('4'),('5')) a (author_id)
CROSS JOIN LATERAL (SELECT released, author_id
                    FROM book
                    WHERE author_id = a.author_id
                      AND released <= to_timestamp(2e9)
                      AND released >= to_timestamp(0)
                    ORDER BY released DESC
                    LIMIT 1) b;

or you could write a function which just runs that query. Although,
with the above or the function method, if you give this enough
authors, then it'll eventually become slower than the problem query.
Perhaps if you know the number of authors will not be too great, then
you'll be ok.

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Slow query with 3 table joins

2017-04-25 Thread David Rowley
On 26 April 2017 at 15:19, Alessandro Ferrucci
<alessandroferru...@gmail.com> wrote:
> After about 40 minutes the slow query finally finished and the result of the
> EXPLAIN plan can be found here:
>
> https://explain.depesz.com/s/BX22

> Index Scan using field_unit_id_idx on field  (cost=0.00..8746678.52 
> rows=850149 width=8) (actual time=0.030..2414345.998 rows=10315 loops=1)"

This estimate seems a long way off. Are the stats up-to-date on the
table? Try again after running: ANALYZE field;

It might also be a good idea to ANALYZE all the tables. Is auto-vacuum
switched on?

The plan in question would work better if you create an index on field
(field_name, unit_id), but I think if you update the stats the plan will
switch.

A HashJoin, hashing "unit"  and index scanning on field_field_name_idx
would have been a much smarter plan choice for the planner to make.

Also, how many distinct field_names are there? SELECT COUNT(DISTINCT
field_name) FROM field;

You may want to increase the histogram buckets on that column if
there are more than 100 field names, and the number of rows with each
field name is highly variable: ALTER TABLE field ALTER COLUMN
field_name SET STATISTICS <n>; 100 is the default, and 10000
is the maximum.


-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Insert Concurrency

2017-04-18 Thread David McKelvie

>> To understand recursion, one must first understand recursion.

This makes no sense unless you also provide the base case.

David


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Insert Concurrency

2017-04-17 Thread David Rowley
On 18 April 2017 at 14:55, ROBERT PRICE <rprice...@hotmail.com> wrote:
> I come from an Oracle background and am porting an application to postgres.
> App has a table that will contain 100 million rows and has to be loaded by a
> process that reads messages off a SQS queue and makes web service calls to
> insert records one row at a time in a postgres RDS instance. I know slow by
> slow is not the ideal approach but I was wondering if postgres had
> partitioning or other ways to tune concurrent insert statements. Process
> will run 50 - 100 concurrent threads.

Have you tested performance and noticed that it is insufficient for
your needs? Or do you just assume PostgreSQL suffers from the same
issue as Oracle with regard to INSERT contention on a single table?

You may like to look at pgbench [1] to test the performance if you've
not done so already.

[1] https://www.postgresql.org/docs/9.6/static/pgbench.html
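As a rough sketch (the script, table, and column here are hypothetical), a
concurrent-insert test could look like:

$ cat insert_test.sql
INSERT INTO my_table (payload) VALUES ('x');

$ pgbench -n -f insert_test.sql -c 50 -j 4 -T 60 mydb

where -c is the number of concurrent clients, -j the number of pgbench
worker threads, and -T the duration of the run in seconds.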

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Create a materialized view in PostgreSQL which can be access by all the user account

2017-04-17 Thread David G. Johnston
On Mon, Apr 17, 2017 at 10:00 AM, Dinesh Chandra 12108 <
dinesh.chan...@cyient.com> wrote:

> Hi Experts,
>
>
>
> How can we create a materialized view in PostgreSQL which can be access by
> all the user account in all Database?
>

​Databases are isolated - while connected to one you cannot directly see
objects in another.  You need to use something like postgres_fdw to link
current database and the one containing the materialized view together.​

​https://www.postgresql.org/docs/9.6/static/postgres-fdw.html
​
And ensure the proper permissions are setup.

https://www.postgresql.org/docs/9.6/static/sql-grant.html
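A minimal sketch of what that might look like (server, database, and
object names are hypothetical):

CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER central_srv
  FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (host 'localhost', dbname 'central_db', port '5432');

CREATE USER MAPPING FOR CURRENT_USER
  SERVER central_srv
  OPTIONS (user 'app_user', password 'secret');

-- expose the remote materialized view as a foreign table locally
CREATE FOREIGN TABLE my_matview (
  id   integer,
  data text
) SERVER central_srv
  OPTIONS (schema_name 'public', table_name 'my_matview');

GRANT SELECT ON my_matview TO PUBLIC;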

David J.


Re: [PERFORM] Filter certain range of IP address.

2017-04-07 Thread David G. Johnston
On Fri, Apr 7, 2017 at 8:18 AM, Dinesh Chandra 12108 <
dinesh.chan...@cyient.com> wrote:

> Dear Vinny,
>
> Thanks for your valuable replay.
>
> but I need a select query, which select only that record which starts from
> IP "172.23.110" only from below table.
>
> xxx 172.23.110.175
> yyy 172.23.110.178
> zzz 172.23.110.177
> aaa 172.23.110.176
> bbb 172.23.111.180
> ccc 172.23.115.26
>

​SELECT ... WHERE substring(ip_addr::text, 1, 10) = '172.23.110'

David J.
 ​


Re: [PERFORM] anti-join with small table via text/varchar cannot estimate rows correctly

2017-03-01 Thread David G. Johnston
On Wed, Mar 1, 2017 at 5:24 PM, Jeff Janes <jeff.ja...@gmail.com> wrote:

> On Wed, Mar 1, 2017 at 2:12 PM, David G. Johnston <
> david.g.johns...@gmail.com> wrote:
>
>> On Wed, Mar 1, 2017 at 3:00 PM, Stefan Andreatta <s.andrea...@synedra.com
>> > wrote:
>>
>>> plain analyze
>>>  select tmp_san_1.id
>>>  from tmp_san_1
>>>left join tmp_san_2 on tmp_san_1.text = tmp_san_2.text
>>>  where tmp_san_2.id is null;
>>>
>>> ​Does it help if you check for "tmp_san_2.text is null"?
>>
>>
>>
> Yes.  And if you swap it so that the left join is on the integer while IS
> NULL is on the text, that also gets poorly estimated.  Also, if you make
> both column of both tables be integers, same thing--you get bad estimates
> when the join condition refers to one column and the where refers to the
> other.  I don't know why the estimate is poor, but it is not related to the
> types of the columns, but rather the identities of them.
>
>
I suspect it has to do with the lack of a NOT NULL constraint on either
column causing the planner to disregard the potential to implement the LEFT
JOIN using ANTI-JOIN semantics - or, also possible, the form itself is
invalid regardless of the presence or absence of constraints.  IIUC, while a
true anti-join syntax doesn't exist, the canonical form for one uses NOT
EXISTS - which would force the author to use only the correct column pair.

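For reference, a sketch of that NOT EXISTS form, using the tables from the
quoted query:

select tmp_san_1.id
from tmp_san_1
where not exists (
  select 1
  from tmp_san_2
  where tmp_san_2.text = tmp_san_1.text
);
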
David J.
​


Re: [PERFORM] anti-join with small table via text/varchar cannot estimate rows correctly

2017-03-01 Thread David G. Johnston
On Wed, Mar 1, 2017 at 3:00 PM, Stefan Andreatta <s.andrea...@synedra.com>
wrote:

> plain analyze
>  select tmp_san_1.id
>  from tmp_san_1
>left join tmp_san_2 on tmp_san_1.text = tmp_san_2.text
>  where tmp_san_2.id is null;
>
> ​Does it help if you check for "tmp_san_2.text is null"?

David J.


Re: [PERFORM] Number of characters in column preventing index usage

2017-02-17 Thread David G. Johnston
On Fri, Feb 17, 2017 at 3:49 PM, Tomas Vondra <tomas.von...@2ndquadrant.com>
wrote:

>  That may seem a bit strange, but I'd bet it finds the short value in some
> statistic (MCV, histogram) and so can provide very accurate estimate.


​​ ->  Index Only Scan using tab_idx1 on tab  (cost=0.27..8.29 rows=1
width=0) (actual time=0.043..0.043 rows=0 loops=1)

​I'm not seeing how any of the statistic columns would capture a value that
doesn't actually appear in the table...(actual ... row=0)​

Unless there is some prefix matching going on here since the short value is
a substring(1, n) of the longer one which does appear 5 times.

I guess that because the value doesn't appear, it uses the index (via an
index-only scan) to confirm absence (or near absence, i.e., at most 1 row),
while, knowing the larger value appears 5 times out of 223, it decides a
quick table scan is faster than any form of double lookup (whether on the
visibility map or the heap).

​https://www.postgresql.org/docs/9.6/static/indexes-index-only-scans.html​

​David J.​


Re: [PERFORM] Number of characters in column preventing index usage

2017-02-17 Thread David G. Johnston
On Fri, Feb 17, 2017 at 3:19 PM, Hustler DBA <hustler...@gmail.com> wrote:

>
> my_db=# create index tab_idx1 on tab(ID);
>
> CREATE INDEX
> my_db=# explain (analyze, buffers) select count(*) from tab where ID = '
> 01625cfa-2bf8-45cf' ;
>   QUERY PLAN
>
> 
> 
> ---
>  Aggregate  (cost=8.29..8.30 rows=1 width=0) (actual time=0.048..0.048
> rows=1 loops=1)
>Buffers: shared read=2
>->  Index Only Scan using tab_idx1 on tab  (cost=0.27..8.29 rows=1
> width=0) (actual time=0.043..0.043 rows=0 loops=1)
>  Index Cond: (ID = '01625cfa-2bf8-45cf'::text)
>
>

>->  Seq Scan on tab  (cost=0.00..14.79 rows=5 width=0) (actual
> time=0.031..0.108 rows=5 loops=1)
>  Filter: ((ID)::text = '01625cfa-2bf8-45cf-bf4c-aa5f3c6fa8ea
> '::text)
>  Rows Removed by Filter: 218
>  Buffers: shared hit=12
>  Planning time: 0.122 ms
>  Execution time: 0.180 ms
> (8 rows)
>

​IIRC the only reason the first query cares to use the index is because it
can perform an Index Only Scan and thus avoid touching the heap at all.  If
it cannot avoid touching the heap the planner is going to just use a
sequential scan to retrieve the records directly from the heap and save the
index lookup step.

David J.


Re: [PERFORM] Backup taking long time !!!

2017-01-31 Thread David Steele
On 1/20/17 10:06 AM, Stephen Frost wrote:

> Ah, yes, I noticed that you passed over the file twice but wasn't quite
> sure what functools.partial() was doing and a quick read of the docs
> made me think you were doing seeking there.
> 
> All the pages are the same size, so I'm surprised you didn't consider
> just having a format along the lines of: magic+offset+page,
> magic+offset+page, magic+offset+page, etc...
> 
> I'd have to defer to David on this, but I think he was considering
> having some kind of a bitmap to indicate which pages changed instead
> of storing the full offset as, again, all the pages are the same size.

I have actually gone through a few different ideas (including both of
the above) and haven't settled on anything final yet.  Most of the ideas
I've come up with so far are better for backup performance, but I
would rather bias towards restores, which tend to be more time-sensitive.

The ideal solution would be something that works well for both.

-- 
-David
da...@pgmasters.net





Re: [PERFORM] optimizing immutable vs. stable function calls?

2017-01-18 Thread David G. Johnston
On Wed, Jan 18, 2017 at 4:23 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:

> "David G. Johnston" <david.g.johns...@gmail.com> writes:
> > ​I'm feeling a bit dense here but even after having read a number of
> these
> > kinds of interchanges I still can't get it to stick.  I think part of the
> > problem is this sentence from the docs:
> > https://www.postgresql.org/docs/current/static/xfunc-volatility.html
>
> > (Stable): "​This category allows the optimizer to optimize multiple calls
> > of the function to a single call"
>


> If PostgreSQL cannot execute it only once at query start then all this
> talk
> > about optimization seems misleading.  At worse there should be an
> sentence
> > explaining when the optimizations noted in that paragraph cannot occur -
> > and probably examples of both as well since its not clear when it can
> occur.
>
> If you want an exact definition of when things will happen or not happen,
> start reading the source code.  I'm loath to document small optimizer
> details since they change all the time.
>

​That would not be a productive exercise for me, or most people who just
want
some idea of what to expect in terms of behavior when they write and use a
Stable function (Immutable and Volatile seem fairly easy to reason about).

Is there anything fatally wrong with the following comprehension?

"""
A STABLE function cannot modify the database and is guaranteed to
return the same results given the same arguments for all rows
within a single statement.

This category allows the optimizer to take an expression of the form
(indexed_column = stable_function(...)) and evaluate stable_function(...)
once at the beginning of the query and use the result to scan
the index. (Since an index scan will evaluate the comparison
value only once, not once at each row, it is not valid to use a VOLATILE
function in an index scan condition).  ?Note that should an index scan not
be chosen for the plan the function will be invoked once-per-row?

Expressions of the forms (constant = stable_function()),
and (SELECT stable_function() FROM generate_series(1,5)) are not presently
optimized to a single per-query evaluation.  To obtain the equivalent you
can invoke the function in a sub-query or CTE and reference the result
wherever it is needed.
"""

It probably isn't perfect, but if the average user isn't going to benefit
from anything besides "index_column = function()" with an index plan, then
the false hope created by the wording "allows ... in particular" should
probably be dispelled.
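As a concrete sketch of the sub-query/CTE workaround mentioned in the
draft text (the function and table names are hypothetical):

WITH s AS (SELECT some_stable_fn() AS v)
SELECT t.*
FROM t
JOIN s ON t.col > s.v;

Since the CTE is currently an optimization fence, some_stable_fn() is
evaluated once per query rather than once per row.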

Thanks!

David J.


Re: [PERFORM] optimizing immutable vs. stable function calls?

2017-01-18 Thread David G. Johnston
On Wed, Jan 18, 2017 at 3:54 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:

> Karl Czajkowski <kar...@isi.edu> writes:
> > The query planner does not seem to
> > recognize that it can eliminate redundant calls to a STABLE function.
>
> No, it doesn't.
>
> > In my case, the function call does not take any arguments and is thus
> > trivially independent of row data, and appears in a WHERE clause being
> > compared to constants. Why wouldn't the optimizer treat this case the
> > same as IMMUTABLE?
>
> "The same as IMMUTABLE" would be to reduce the function to a constant at
> plan time, which would be the wrong thing.  It would be valid to execute
> it only once at query start, but there's no built-in mechanism for that.
>

​I'm feeling a bit dense here but even after having read a number of these
kinds of interchanges I still can't get it to stick.  I think part of the
problem is this sentence from the docs:

https://www.postgresql.org/docs/current/static/xfunc-volatility.html

(Stable): "​This category allows the optimizer to optimize multiple calls
of the function to a single call"

I read that sentence (and the surrounding paragraph) and wonder why, then,
it doesn't do so in this case.

If PostgreSQL cannot execute it only once at query start then all this talk
about optimization seems misleading.  At worst there should be a sentence
explaining when the optimizations noted in that paragraph cannot occur -
and probably examples of both as well, since it's not clear when it can occur.

Some TLC to the docs here would be welcomed.

David J.


Re: [PERFORM] why we do not create indexes on master

2016-12-27 Thread David G. Johnston
On Tue, Dec 27, 2016 at 10:38 AM, Valerii Valeev <valerii.val...@mail.ru>
wrote:

> Thank you David,
>
> I used same rationale to convince my colleague — it didn’t work :)
> Sort of “pragmatic” person who does what seems working no matter what
> happens tomorrow.
> So I’m seeking for better understanding of what's happening to have other
> cause to convince him.
>
> Let me break it down once again. The experience is as follows:
>
> - partitioning follows the guide
>

​Only somewhat helpful...
​

> - master empty, no indexes
> - child tables have index on field “field”
> - query like
> SELECT * FROM “master” WHERE “field” BETWEEN ‘1' AND ‘2’
> takes more than 100 sec
>

​All retrieved data now exists in cache/buffers...


> - after that my mate adds index on “master”(“field”) — again, all data is
> in child tables
> - same query takes under 1sec
>

As Andreas said, if you really want to explore what is happening here you
need to use EXPLAIN ANALYZE.

Given the flow described above, I/O retrieval performance differences, or
the attempt to query the table kicking off an ANALYZE, seem like possible
contributing factors.


> Questions I’d love to clarify:
>
> - Q1: is it correct that described situation happens because index created
> on master does account data that is already there in child?
>

​No
​

> - Q2: is it correct that index on master created before inserting record
> to child tables will not take into account this record?
>

Yes
​

> - Q3: are there any other bad sides of indexes on master table?
>

No​

David J.


Re: [PERFORM] why we do not create indexes on master

2016-12-27 Thread David G. Johnston
On Tue, Dec 27, 2016 at 8:22 AM, Valerii Valeev <valerii.val...@mail.ru>
wrote:

> I have naive idea that it won’t help if index is created before the data
> is there  — i.e. indexes on master aren’t updated when data loaded to child
> table.
>

​Indexes on the master table of a partition scheme never reflect the
contents of child​ tables.

In most partitioning schemes the master table is empty so even if it
doesn't have an index on a particular field execution would typically be
quick.  This is why #4 on the page you linked to:

"""
For each partition, create an index on the key column(s), as well as any
other indexes you might want. (The key index is not strictly necessary, but
in most scenarios it is helpful. If you intend the key values to be unique
then you should always create a unique or primary-key constraint for each
partition.)
"""

doesn't say anything about creating other indexes on the master table.  See
#1 in that list for an explicit statement of this assumption.

If the master is not empty, and of considerable size, and the field being
searched is not indexed, then it is unsurprising that the query would take
a long time to execute when obtaining rows from the master table.  If this
is the case then you've gotten away from the expected usage of partitions
and so need to do things that aren't in the manual to make them work.

David J.


Re: [PERFORM] [HACKERS] temporary table vs array performance

2016-09-26 Thread David G. Johnston
It's considered bad form to post to multiple lists.  Please pick the most
relevant one - in this case I'd suggest -general.

On Mon, Sep 26, 2016 at 8:39 AM, dby...@163.com <dby...@163.com> wrote:

>
> Array is not convenient to use in function, whether
> there are other methods can be replaced temp table in function
>
>
​I have no difficulty using arrays in functions.

As for "other methods" - you can use CTE (WITH) to create a truly local
table - updating the catalogs by using a temp table is indeed quite
expensive.

WITH vals AS  ( VALUES (1, 'lw'), (2, 'lw2') )
SELECT * FROM vals;

David J.


Re: [PERFORM] Performance problems with 9.2.15

2016-07-21 Thread David G. Johnston
On Thu, Jul 21, 2016 at 2:24 PM, Claudio Freire <klaussfre...@gmail.com>
wrote:

> That cross join doesn't look right. It has no join condition.


That is the definition of a "CROSS JOIN"...

David J.


Re: [PERFORM] Capacitors, etc., in hard drives and SSD for DBMS machines...

2016-07-08 Thread Jean-David Beyer
On 07/08/2016 07:44 AM, vincent wrote:
> 
> 
> Op 7/8/2016 om 12:23 PM schreef Jean-David Beyer:
>> Why all this concern about how long a disk (or SSD) drive can stay up
>> after a power failure?
>>
>> It seems to me that anyone interested in maintaining an important
>> database would have suitable backup power on their entire systems,
>> including the disk drives, so they could coast over any power loss.
>>
> As others have mentioned; *any* link in the power line can fail, from
> the building's power
> to the plug literaly falling out of the harddisk itself. Using multiple
> power sources,
> UPS, BBU etc reduce the risk, but the internal capacitors of an SSD are
> the only thing
> that will *always* provide power to the disk, no matter what caused the
> power to fail.
> 
> It's like having a small UPS in the disk itself, with near-zero chance
> of failure.
> 
> 
Thank you for all the responses.

The only time I had a power supply fail in a computer was in a 10 year
old computer. When storm Sandy came by, the power went out and the
computer had plenty of time to do a controlled shutdown.

But when the power was restored about a week later, the power flipped on
and off at just the right rate to fry the power supply, before the
system even started up enough to shut down again. So I lost no data. All
I had to do is buy a new computer and restore from the backup tape.

Of course, those capacitors in the disk itself could fail. Fortunately,
there have been giant improvements in capacitor manufacture reliability
since I had to study reliability of large electronic systems for a
military contract way back then.

-- 
  .~.  Jean-David Beyer  Registered Linux User 85642.
  /V\  PGP-Key:166D840A 0C610C8B Registered Machine  1935521.
 /( )\ Shrewsbury, New Jerseyhttp://linuxcounter.net
 ^^-^^ 10:50:01 up 36 days, 16:52, 2 users, load average: 4.95, 5.23, 5.18


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


[PERFORM] Capacitors, etc., in hard drives and SSD for DBMS machines...

2016-07-08 Thread Jean-David Beyer
Why all this concern about how long a disk (or SSD) drive can stay up
after a power failure?

It seems to me that anyone interested in maintaining an important
database would have suitable backup power on their entire systems,
including the disk drives, so they could coast over any power loss.

I do not have any database that important, but my machine has an APC
Smart-UPS that has 2 1/2 hours of backup time with relatively new
batteries in it. It is so oversize because my previous computer used
much more power than this one does. And if my power company has a brown
out or black out of over 7 seconds, my natural gas fueled backup
generator picks up the load very quickly.

Am I overlooking something?

-- 
  .~.  Jean-David Beyer  Registered Linux User 85642.
  /V\  PGP-Key:166D840A 0C610C8B Registered Machine  1935521.
 /( )\ Shrewsbury, New Jerseyhttp://linuxcounter.net
 ^^-^^ 06:15:01 up 36 days, 12:17, 2 users, load average: 4.16, 4.26, 4.30


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Index not used

2016-06-16 Thread David G. Johnston
On Thu, Jun 16, 2016 at 11:05 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:

> meike.talb...@women-at-work.org writes:
> > When I query this through pgsql, the queries are fast as expected.
> > select * from push_topic where guid =
> 'DD748CCD-B8A4-3B9F-8F60-67F1F673CFE5'
> > Index Scan using push_topic_idx_topicguid on push_topic
> (cost=0.42..8.44 rows=1 width=103) (actual time=0.117..0.121 rows=1 loops=1)
> >   Index Cond: ((guid)::bpchar =
> 'DD748CCD-B8A4-3B9F-8F60-67F1F673CFE5'::bpchar)
> >   Buffers: shared hit=3 read=1
> > Total runtime: 0.191 ms
>
> > However when I run the exact query through a different application
> (CodeSynthesis ORM) the query is very slow (~ 115ms logged)
> > I noted this is due to a sequential scan happening on the table instead
> of an index scan.
>
> It looks like what that app is actually issuing is something different
> from what you tested by hand, to wit
>
> select * from push_topic where guid =
> 'DD748CCD-B8A4-3B9F-8F60-67F1F673CFE5'::text
>
> which causes the comparison to be resolved as texteq not bpchareq, ie you
> effectively have
>
> select * from push_topic where guid::text =
> 'DD748CCD-B8A4-3B9F-8F60-67F1F673CFE5'::text
>
> and that doesn't match a bpchar index.  If you can't persuade the app to
> label the comparison value as bpchar not text, the easiest fix would be
> to create an additional index on "guid::text".
>

Or, better, persuade the app to label the value "public.push_guid" since
that is the column's type...a type you haven't defined for us.
If you get to add explicit casts this should be easy...but I'm not familiar
with the framework you are using.
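For illustration, the two suggestions might look like this (the index name
is made up; the cast uses the column type mentioned above):

-- (a) an expression index matching the text comparison the app issues:
CREATE INDEX push_topic_idx_topicguid_text ON push_topic ((guid::text));

-- (b) casting the parameter to the column's declared type so the existing
--     index can be used:
SELECT * FROM push_topic
WHERE guid = 'DD748CCD-B8A4-3B9F-8F60-67F1F673CFE5'::public.push_guid;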

David J.


Re: [PERFORM] pg_restore seems very slow

2016-06-15 Thread David G. Johnston
On Wed, Jun 15, 2016 at 6:00 PM, Adrian Myers <hadrianmy...@gmail.com>
wrote:

> This is my first post to the mailing list, so I apologize for any
> etiquette issues.
>
> I have a few databases that I am trying to move from one system to
> another.  Both systems are running Windows 7 and Postgres 8.4, and they are
> pretty powerful machines (40-core Xeon workstations with decent hardware
> across the board). While the DBs vary in size, I'm working right now with
> one that is roughly 50 tables and probably 75M rows, and is about 300MB on
> disk when exported via pg_dump.
>
> I am exporting and restoring using these commands (on separate sytems):
> pg_dump -F c mydb > mydb.dump
> pg_restore -C -j 10 mydb.dump
>
> The dump process runs in about a minute and seems fine. The restore
> process has already been running for around 7 hours.
>
> Yesterday, I tried restoring a larger DB that is roughly triple the
> dimensions listed above, and it ran for over 16 hours without completing.
>
> I followed the advice given at
> http://www.databasesoup.com/2014/09/settings-for-fast-pgrestore.html and
> set the conf settings as directed and restarted the server.
>
> You can see in the command line that I am trying to use the -j parameter
> for parallelism, but I don't see much evidence of that in Task Manager. CPU
> load is consistently 1 or 2% and only a couple cores seem to be doing
> anything, there certainly aren't 10 cpu-bound cores. I'm not sure where to
> look for pg_restore's disk I/O, but there is an entry for pg_restore in
> Task Manager/Processes which shows almost no I/O Read Bytes and 0 I/O Write
> Bytes. Since that's just the parent process that might make sense but I
> don't see much activity elsewhere either.
>
> Is there something simple that I am missing here? Does the -j flag not
> work in 8.4 and I should use --jobs? It just seems like none of the CPU or
> RAM usage I'd expect from this process are evident, it's taking many times
> longer than I would expect, and I don't know how to verify if the things
> I'm trying are working or not.
>
> Any insight would be appreciated!
>
>
Did any databases restore properly?

Are there any messages in the logs or on the terminal?  You should add the
"--verbose" option to your pg_restore command to help produce some.

-C can be problematic at times.  Consider manually ensuring the desired
target database exists and is set up correctly (matches the original) and
then do a non-create restoration to it specifically.

-j should work fine in 8.4 (according to the docs)

You need to get to a point where you are seeing feedback from the
pg_restore process.  Once you get it telling you what it is doing (or
trying to do) then diagnosing can begin.

​David J.
​


Re: [PERFORM] Performance of LIKE/NOT LIKE when used in single query

2016-06-07 Thread David G. Johnston
On Wednesday, June 8, 2016, Ed Felstein <efelst...@gmail.com> wrote:

> Hello,
> First time poster here.  Bear with me.
> Using PostgreSQL 9.5
> I have a situation where I have a LIKE and a NOT LIKE in the same query to
> identify strings in a varchar field.  Since I am using wildcards, I have
> created a GIN index on the field in question, which makes LIKE '%%'
> searches run very fast.  The problem is the NOT LIKE phrases, which (as
> would be expected) force a sequential scan.  Being that we're talking about
> millions of records, this is not desirable.
> Here's the question...
> Is there a way, *using a single query*, to emulate the process of running
> the LIKE part first, then running the NOT LIKE just on those results?  I
> can accomplish this in a multi-step process by separating the single query
> into two queries, populating a temporary table with the results of the
> LIKEs, then running the NOT LIKEs on the temporary table.  For various
> reasons, this is not the ideal solution for me.
> Or is there another approach that would accomplish the same thing with the
> same level of performance?
>


Try AND...where col like '' and col not like ''

Or a CTE (with)

With likeqry as ( select where like )
Select from likeqry where not like

(sorry for brevity but not at a pc)
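Fleshed out a little (the table, column, and patterns are hypothetical),
the CTE form would be something like:

WITH likeqry AS (
    SELECT *
    FROM my_table
    WHERE col LIKE '%foo%'      -- can use the trigram GIN index
)
SELECT *
FROM likeqry
WHERE col NOT LIKE '%bar%';

On 9.5 a CTE acts as an optimization fence, so the NOT LIKE is only
evaluated against the rows the LIKE has already narrowed down.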

David J.


Re: [PERFORM] array size exceeds the maximum allowed (1073741823) when building a json

2016-06-07 Thread David G. Johnston
On Tue, Jun 7, 2016 at 8:42 AM, Nicolas Paris <nipari...@gmail.com> wrote:

>
>
> 2016-06-07 14:39 GMT+02:00 David G. Johnston <david.g.johns...@gmail.com>:
>
>> On Tue, Jun 7, 2016 at 8:36 AM, Nicolas Paris <nipari...@gmail.com>
>> wrote:
>>
>>> 2016-06-07 14:31 GMT+02:00 David G. Johnston <david.g.johns...@gmail.com
>>> >:
>>>
>>>> On Tue, Jun 7, 2016 at 7:44 AM, Nicolas Paris <nipari...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hello,
>>>>>
>>>>> I run a query transforming huge tables to a json document based on a 
>>>>> period.
>>>>> It works great for a modest period (little dataset).
>>>>> However, when increasing the period (huge dataset) I get this error:
>>>>>
>>>>> SQL ERROR[54000]
>>>>> ERROR: array size exceeds the maximum allowed (1073741823)
>>>>>
>>>>> ​https://www.postgresql.org/about/​
>>>>
>>>> ​Maximum Field Size: 1 GB​
>>>>
>>>
>>> It means a json cannot exceed 1GB in postgresql, right ?
>>>
>>
>> ​Yes​
>>
>>
>>> Then I must build it with an external tool ?
>>> ​
>>>
>>>
>>
>> ​​You have to do something different.  Using multiple columns and/or
>> multiple rows might be workable.
>>
>
> ​Certainly. Kind of disappointing, because I won't find any json builder
> as performant as postgresql.​
>
> ​
>
> Will this 1GO restriction is supposed to increase in a near future ?​
>
>
There has been zero chatter on the public lists about increasing any of the
limits on that page I linked to.

David J.
​


Re: [PERFORM] array size exceeds the maximum allowed (1073741823) when building a json

2016-06-07 Thread David G. Johnston
On Tue, Jun 7, 2016 at 8:36 AM, Nicolas Paris <nipari...@gmail.com> wrote:

> 2016-06-07 14:31 GMT+02:00 David G. Johnston <david.g.johns...@gmail.com>:
>
>> On Tue, Jun 7, 2016 at 7:44 AM, Nicolas Paris <nipari...@gmail.com>
>> wrote:
>>
>>> Hello,
>>>
>>> I run a query transforming huge tables to a json document based on a period.
>>> It works great for a modest period (little dataset).
>>> However, when increasing the period (huge dataset) I get this error:
>>>
>>> SQL ERROR[54000]
>>> ERROR: array size exceeds the maximum allowed (1073741823)
>>>
>>> ​https://www.postgresql.org/about/​
>>
>> ​Maximum Field Size: 1 GB​
>>
>
> It means a json cannot exceed 1GB in postgresql, right ?
>

​Yes​


> Then I must build it with an external tool ?
> ​
>
>

​​You have to do something different.  Using multiple columns and/or
multiple rows might be workable.
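For example (a rough sketch; the table and column names are hypothetical),
emitting one JSON document per row keeps each individual value well under
the limit and lets the client concatenate or stream the results:

SELECT row_to_json(t) AS doc
FROM   huge_table t
WHERE  t.created_at >= '2016-01-01'
AND    t.created_at <  '2016-07-01';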

David J.


Re: [PERFORM] array size exceeds the maximum allowed (1073741823) when building a json

2016-06-07 Thread David G. Johnston
On Tue, Jun 7, 2016 at 7:44 AM, Nicolas Paris <nipari...@gmail.com> wrote:

> Hello,
>
> I run a query transforming huge tables to a json document based on a period.
> It works great for a modest period (little dataset).
> However, when increasing the period (huge dataset) I get this error:
>
> SQL ERROR[54000]
> ERROR: array size exceeds the maximum allowed (1073741823)
>
> ​https://www.postgresql.org/about/​

​Maximum Field Size: 1 GB​

​It doesn't matter that the data never actually is placed into a physical
table.

David J.


Re: [PERFORM] similarity and operator '%'

2016-05-30 Thread David G. Johnston
On Mon, May 30, 2016 at 1:53 PM, Volker Boehm <vol...@vboehm.de> wrote:

>
> The reason for using the similarity function in place of the '%'-operator
> is that I want to use different similarity values in one query:
>
> select name, street, zip, city
> from addresses
> where name % $1
> and street % $2
> and (zip % $3 or city % $4)
> or similarity(name, $1) > 0.8
>
> which means: take all addresses where name, street, zip and city have
> little similarity _plus_ all addresses where the name matches very good.
>
>
> The only way I found, was to create a temporary table from the first
> query, change the similarity value with set_limit() and then select the
> second query UNION the temporary table.
>
> Is there a more elegant and straight forward way to achieve this result?
>

​Not that I can envision.

You are forced into using an operator due to our index implementation.

You are thus forced into using a GUC to control the parameter that the
index scanning function uses to compute true/false.

A GUC can only take on a single value within a given query - well, not
quite true[1] but the exception doesn't seem like it will help here.

Thus you are consigned to using two queries.

*​A functional index​ doesn't work since the second argument is query
specific

[1]​ When defining a function you can attach a "SET" clause to it; commonly
used for search_path but should work with any GUC.  If you could wrap the
operator comparison into a custom function you could use this capability.
It also would require a function that would take the threshold as a value -
the extension only provides variations that use the GUC.

I don't think this will use the index even if it compiles (not tested):

CREATE FUNCTION similarity_80(col text, val text)
RETURNS boolean
SET similarity_threshold = 0.80
LANGUAGE sql
AS $$
SELECT col % val;
$$;

​David J.​


Re: [PERFORM] Fast HashJoin only after a cluster/recreate table

2016-04-01 Thread David Rowley
  Index Cond: ((es09codemp = 1) AND
> (es09tipdoc = t3.es09tipdoc) AND (es09numdoc = t3.es09numdoc))
>  Filter: (es09tipdoc ~~ '%'::text)
>  Buffers: shared hit=6706
>->  Sort  (cost=37.35..38.71 rows=547 width=32) (actual
> time=0.592..2.206 rows=2919 loops=1)
>  Sort Key: t2.es08tipdoc
>  Sort Method: quicksort  Memory: 67kB
>  Buffers: shared hit=7
>  ->  Seq Scan on es08t t2  (cost=0.00..12.47 rows=547
> width=32) (actual time=0.003..0.126 rows=547 loops=1)
>Buffers: shared hit=7
>  ->  Materialize  (cost=0.56..287644.85 rows=716126 width=23)
> (actual time=0.027..68577.800 rows=993087854 loops=1)
>Buffers: shared hit=75342
>->  GroupAggregate  (cost=0.56..278693.28 rows=716126
> width=15) (actual time=0.025..4242.453 rows=3607573 loops=1)
>  Group Key: es09t1.es09codemp, es09t1.es09tipdoc,
> es09t1.es09numdoc
>  Buffers: shared hit=75342
>  ->  Index Only Scan using es09t1_pkey on es09t1
>  (cost=0.56..199919.49 rows=7161253 width=15) (actual time=0.016..1625.031
> rows=7160921 loops=1)
>Index Cond: (es09codemp = 1)
>Heap Fetches: 51499
>Buffers: shared hit=75342
>  Planning time: 50.129 ms
>  Execution time: 380419.435 ms
>


-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [PERFORM] Merge joins on index scans

2016-02-28 Thread David Rowley
On 27 February 2016 at 11:07, James Parks <james.pa...@meraki.net> wrote:
>
> CREATE TABLE a (id bigint primary key, nonce bigint);
> CREATE TABLE b (id bigint primary key, a_id bigint not null);
> CREATE INDEX a_idx ON b (a_id);
>
> The query:
>
> SELECT b.* FROM b JOIN a ON b.a_id = a.id WHERE a.nonce = ? ORDER BY b.id
> ASC;
>
> (skip down to [1] and [2] to see the query performance)
>
> What I know:
>
> If you force the query planner to use a merge join on the above query, it
> takes 10+ minutes to complete using the data as per below. If you force the
> query planner to use a hash join on the same data, it takes ~200
> milliseconds.

I believe I know what is going on here, but can you please test;

SELECT b.* FROM b WHERE EXISTS (SELECT 1 FROM a WHERE b.a_id = a.id AND
a.nonce = ?) ORDER BY b.id ASC;

using the merge join plan.

If this performs much better then the problem is due to the merge join
mark/restore causing the join to have to transition through many
tuples which don't match the a.nonce = ? predicate. The mark and
restore is not required for the rewritten query, as this uses a semi
join rather than a regular inner join. With the semi join the executor
knows that it's only meant to be matching a single tuple in "a", so
once the first match is found it can move to the next row in the outer
relation without having to restore the scan back to where it started
matching that inner row again.

If I'm right, to get around the problem you could; create index on a
(nonce, id);

If such an index is out of the question then a patch has been
submitted for review which should fix this problem in (hopefully)
either 9.6 or 9.7
https://commitfest.postgresql.org/9/129/
If you have a test environment handy, it would be nice if you could
test the patch on the current git head to see if this fixes your
problem. The findings would be quite interesting for me. Please note
this patch is for test environments only at this stage, not for
production use.

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Odd behavior with indices

2016-02-26 Thread David G. Johnston
On Fri, Feb 26, 2016 at 1:38 PM, joe meiring <josephmeir...@gmail.com>
wrote:

> Here's the distribution of parameter_id's
>
> select count(parameter_id), parameter_id from datavalue group by parameter_id
> 88169   142889171 815805   178570124257262 213947049 151225902 24091090 
> 3103877  10633764  11994442  1849232   2014935   4563638  132955919 7
>
>
OK... again, it's beyond my present experience, but it's what the planner
thinks about the distribution, and not what is actually present, that
matters.

David J.


Re: [PERFORM] Odd behavior with indices

2016-02-26 Thread David G. Johnston
On Fri, Feb 26, 2016 at 12:43 PM, joe meiring <josephmeir...@gmail.com>
wrote:

> Also available on S.O.:
>
>
> http://stackoverflow.com/questions/35658238/postgres-odd-behavior-with-indices
>
> I've got a datavalue table with ~200M rows or so, with indices on both
> site_id and parameter_id. I need to execute queries like "return all
> sites with data" and "return all parameters with data". The site table
> has only 200 rows or so, and the parameter table has only 100 or so rows.
>
> The site query is fast and uses the index:
>
> EXPLAIN ANALYZE
> select * from site
> where exists (
>   select 1 from datavalue
>   where datavalue.site_id = site.id limit 1);
>
> Seq Scan on site  (cost=0.00..64.47 rows=64 width=113) (actual 
> time=0.046..1.106 rows=89 loops=1)
>   Filter: (SubPlan 1)
>   Rows Removed by Filter: 39
>   SubPlan 1
> ->  Limit  (cost=0.44..0.47 rows=1 width=0) (actual time=0.008..0.008 
> rows=1 loops=128)
>   ->  Index Only Scan using ix_datavalue_site_id on datavalue  
> (cost=0.44..8142.71 rows=248930 width=0) (actual time=0.008..0.008 rows=1 
> loops=128)
> Index Cond: (site_id = site.id)
> Heap Fetches: 0
> Planning time: 0.361 ms
> Execution time: 1.149 ms
>
> The same query for parameters is rather slow and does NOT use the index:
>
> EXPLAIN ANALYZE
> select * from parameter
> where exists (
>   select 1 from datavalue
>   where datavalue.parameter_id = parameter.id limit 1);
>
> Seq Scan on parameter  (cost=0.00..20.50 rows=15 width=2648) (actual 
> time=2895.972..21331.701 rows=15 loops=1)
>   Filter: (SubPlan 1)
>   Rows Removed by Filter: 6
>   SubPlan 1
> ->  Limit  (cost=0.00..0.34 rows=1 width=0) (actual 
> time=1015.790..1015.790 rows=1 loops=21)
>   ->  Seq Scan on datavalue  (cost=0.00..502127.10 rows=1476987 
> width=0) (actual time=1015.786..1015.786 rows=1 loops=21)
> Filter: (parameter_id = parameter.id)
> Rows Removed by Filter: 7739355
> Planning time: 0.123 ms
> Execution time: 21331.736 ms
>
> What the deuce is going on here? Alternatively, whats a good way to do
> this?
>
> Any help/guidance appreciated!
>
>
>
> Some of the table description:
>
> \d datavalue
>
> id BIGINT DEFAULT nextval('datavalue_id_seq'::regclass) NOT NULL,
> value DOUBLE PRECISION NOT NULL,
> site_id INTEGER NOT NULL,
> parameter_id INTEGER NOT NULL,
> deployment_id INTEGER,
> instrument_id INTEGER,
> invalid BOOLEAN,
> Indexes:
> "datavalue_pkey" PRIMARY KEY, btree (id)
> "datavalue_datetime_utc_site_id_parameter_id_instrument_id_key" UNIQUE 
> CONSTRAINT, btree (datetime_utc, site_id, parameter_id, instrument_id)
> "ix_datavalue_instrument_id" btree (instrument_id)
> "ix_datavalue_parameter_id" btree (parameter_id)
> "ix_datavalue_site_id" btree (site_id)
> "tmp_idx" btree (site_id, datetime_utc)
> Foreign-key constraints:
> "datavalue_instrument_id_fkey" FOREIGN KEY (instrument_id) REFERENCES 
> instrument(id) ON UPDATE CASCADE ON DELETE CASCADE
> "datavalue_parameter_id_fkey" FOREIGN KEY (parameter_id) REFERENCES 
> parameter(id) ON UPDATE CASCADE ON DELETE CASCADE
> "datavalue_site_id_fkey" FOREIGN KEY (site_id) REFERENCES 
> coastal.site(id) ON UPDATE CASCADE ON DELETE CASCADE
> "datavalue_statistic_type_id_fkey"
>
>
I'm not great with the details but the short answer - aside from the fact
that you should consider increasing the statistics on these columns - is
that at a certain point querying the index and then subsequently checking
the table for visibility is more expensive than simply scanning and then
discarding the extra rows.

The fact that you could perform an INDEX ONLY scan in the first query makes
that cost go away since no subsequent heap check is required.  In the
parameters query the planner thinks it needs 1.5 million of the rows and
will have to check each of them for visibility.  It decided that scanning
the entire table was more efficient.
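
It may also be worth making sure the visibility map is current before judging
the index-only option - vacuum is what maintains it. A cheap thing to try (no
guarantee it changes the chosen plan):

VACUUM ANALYZE datavalue;

EXPLAIN ANALYZE
select * from parameter
where exists (
  select 1 from datavalue
  where datavalue.parameter_id = parameter.id);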

The LIMIT 1 in both queries should not be necessary.  The planner is smart
enough to stop once it finds what it is looking for.  In fact the LIMIT's
presence may be a contributing factor...but I cannot say for sure.

A better query seems like it would be:

WITH active_sites AS (
  SELECT DISTINCT site_id FROM datavalue
)
SELECT site.*
FROM site
JOIN active_sites ON active_sites.site_id = site.id;

David J.


Re: [PERFORM] PostgreSQL seems to create inefficient plans in simple conditional joins

2016-01-30 Thread David Rowley
On 31 January 2016 at 01:30, Hedayat Vatankhah <hedayat@gmail.com> wrote:
> Personally, I expect both queries below to perform exactly the same:
>
> SELECT
> t1.id, *
> FROM
> t1
> INNER JOIN
> t2 ON t1.id = t2.id
> where t1.id > -9223372036513411363;
>
> And:
>
> SELECT
> t1.id, *
> FROM
> t1
> INNER JOIN
> t2 ON t1.id = t2.id
> where t1.id > -9223372036513411363 and t2.id > -9223372036513411363;
>
> Unfortunately, they do not. PostgreSQL creates different plans for these
> queries, which results in very poor performance for the first one compared
> to the second (What I'm testing against is a DB with around 350 million
> rows in t1, and slightly less in t2).
>
> EXPLAIN output:
> First query: http://explain.depesz.com/s/uauk
> Second query: link: http://explain.depesz.com/s/uQd

Yes, unfortunately you've done about the only thing that you can do,
and that's just include both conditions in the query. Is there some
special reason why you can't just write the t2.id > ... condition in
the query too? or is the query generated dynamically by some software
that you have no control over?

I'd personally quite like to see improvements in this area, and even
wrote a patch [1] which fixes this problem too. The problem I had when
proposing the fix for this was that I was unable to report details
about how many people are hit by this planner limitation. The patch I
proposed caused a very small impact on planning time for many queries,
and was thought by many not to apply in enough cases for it to be
worth slowing down queries which cannot possibly benefit. Of course I
agree with this, I've no interest in slowing down planning on queries,
but at the same time understand the annoying poor optimisation in this
area.

Although please remember the patch I proposed was merely a first draft
proposal. Not for production use.

[1] 
http://www.postgresql.org/message-id/flat/cakjs1f9fk_x_5hkcpcseimy16owe3empmmgsgwlckkj_rw9...@mail.gmail.com#cakjs1f9fk_x_5hkcpcseimy16owe3empmmgsgwlckkj_rw9...@mail.gmail.com

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] PostgreSQL seems to create inefficient plans in simple conditional joins

2016-01-30 Thread David Rowley
On 31 January 2016 at 06:14, Vitalii Tymchyshyn <v...@tym.im> wrote:
> It may be more for -hackers, but I often hear "this wont be used because of
> planning time increase". Now as I know we have statistics on real query time
> after few runs that is used to decide if plan should be switched.
> Can this statistics be used to apply advanced planning features for
> relatively long running queries? E.g. a parameter like
> sophisticated_planning_l1_threshold=500ms. If query runs over this
> threshold, replan it with more sophisticated features taking few more
> millis. Possibly different levels can be introduced. Also allow to set
> threshold to 0, saying "apply to all queries right away".
> Another good option is to threshold against cumulative query time. E.g. if
> there were 10000 runs of 0.5 millis each, it may be beneficial to spend a few
> more millis to get 0.2 millis each.

I agree with you. I recently was working with long running queries on
a large 3TB database. I discovered a new optimisation was possible,
and wrote a patch to implement. On testing the extra work which the
optimiser performed took 7 microseconds, and this saved 6 hours of
execution time. Now, I've never been much of an investor in my life,
but a 3 billion times return on an investment seems quite favourable.
Of course, that's quite an extreme case, but it's hard to ignore the
benefit is still significant in less extreme cases.

The idea you've mentioned here is very similar to what I brought up at
the developer meeting a few days ago, see AOB section in [1]

Unfortunately I didn't really get many of the correct people on my
side with it, and some wanted examples of specific patches, which is
completely not what I wanted to talk about. I was more aiming for some
agreement for generic infrastructure to do exactly as you describe.

[1]  https://wiki.postgresql.org/wiki/FOSDEM/PGDay_2016_Developer_Meeting


-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Query order of magnitude slower with slightly different where clause

2016-01-17 Thread David Rowley
> Execution time: 5459.461 ms
>
>
> Please let me know if there is any more info I can provide to help figure
> out why it's choosing an undesirable plan with just a slight change in the
> clause.
>

Hi Adam,

This is fairly simple to explain. The reason you see better performance
with the single claim_id is that IN() clauses with a single item are
converted to a single equality expression. For example: (just using system
tables so you can try this too, without having to create any special tables)

# explain select * from pg_class where oid in(1);
 QUERY PLAN
-
 Index Scan using pg_class_oid_index on pg_class  (cost=0.27..8.29 rows=1
width=219)
   Index Cond: (oid = '1'::oid)

We get an index scan with the index condition: oid = 1.

If we have 2 items, then we don't get this.

# explain select * from pg_class where oid in(1,2);
   QUERY PLAN
-
 Bitmap Heap Scan on pg_class  (cost=8.56..14.03 rows=2 width=219)
   Recheck Cond: (oid = ANY ('{1,2}'::oid[]))
   ->  Bitmap Index Scan on pg_class_oid_index  (cost=0.00..8.56 rows=2
width=0)
 Index Cond: (oid = ANY ('{1,2}'::oid[]))
(4 rows)

Now I also need to explain that PostgreSQL will currently push ONLY
equality expressions into other relations. For example, if we write:

# explain select * from pg_class pc inner join pg_attribute pa on pc.oid =
pa.attrelid where pc.oid in(1);
   QUERY PLAN

 Nested Loop  (cost=0.55..22.63 rows=4 width=422)
   ->  Index Scan using pg_class_oid_index on pg_class pc  (cost=0.27..8.29
rows=1 width=223)
 Index Cond: (oid = '1'::oid)
   ->  Index Scan using pg_attribute_relid_attnum_index on pg_attribute pa
 (cost=0.28..14.30 rows=4 width=203)
 Index Cond: (attrelid = '1'::oid)
(5 rows)

You can see that I only put pg_class.oid = 1 in the query, but internally
the query planner also added the pg_attribute.attrelid = 1. It was able to
do this due to the join condition dictating that pc.oid = pa.attrelid,
therefore this will always be equal, and since pc.oid = 1, then pa.attrelid
must also be 1.

If we have 2 items in the IN() clause, then this no longer happens:

# explain select * from pg_class pc inner join pg_attribute pa on pc.oid =
pa.attrelid where pc.oid in(1,2);
   QUERY PLAN

 Nested Loop  (cost=8.84..54.84 rows=15 width=422)
   ->  Bitmap Heap Scan on pg_class pc  (cost=8.56..14.03 rows=2 width=223)
 Recheck Cond: (oid = ANY ('{1,2}'::oid[]))
 ->  Bitmap Index Scan on pg_class_oid_index  (cost=0.00..8.56
rows=2 width=0)
   Index Cond: (oid = ANY ('{1,2}'::oid[]))
   ->  Index Scan using pg_attribute_relid_attnum_index on pg_attribute pa
 (cost=0.28..20.33 rows=8 width=203)
 Index Cond: (attrelid = pc.oid)
(7 rows)

In your case the claim_id = 'e8a38718-7997-4304-bbfa-138deb84aa82'::uuid
was pushed down into the subqueries, thus giving them less work to do, and
also the flexibility of using indexes on claim_id in the tables contained
within the subqueries. PostgreSQL currently does not push any inequality
predicates down at all.

A few months ago I did a little bit of work to try and lift this
restriction, although I only made it cover the >=, >, < and <= operators as
a first measure.

Details here:
http://www.postgresql.org/message-id/flat/cakjs1f9fk_x_5hkcpcseimy16owe3empmmgsgwlckkj_rw9...@mail.gmail.com#cakjs1f9fk_x_5hkcpcseimy16owe3empmmgsgwlckkj_rw9...@mail.gmail.com

If you didn't have the VIEW, you could manually push these predicates into
each subquery. However this is not really possible to do with the VIEW.
Perhaps something could be done with a function and using dynamic SQL to
craft a query manually, or you could just get rid of the view and have the
application build the query. If that's not an option then maybe you could
response to the thread above to mention that you've been hit by this
problem and would +1 some solution to fix it, and perhaps cross link to
this thread. I did have a little bit of a hard time in convincing people
that this was in fact a fairly common problem in the above thread, so it
would be nice to see people who have hit this problem respond to that.
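
For what it's worth, the dynamic SQL route would look something like this
(the table and column names are entirely made up, since I don't have your
view definition - this only shows the shape of it):

create or replace function claim_summary(p_claim_id uuid)
returns table (claim_id uuid, detail_count bigint)
language plpgsql stable as
$$
begin
  -- repeat the literal inside each subquery so each one can use its own
  -- index on claim_id, without relying on predicate push-down
  return query execute format(
    'select d.claim_id, count(*)
       from (select claim_id from claim_details  where claim_id = %1$L) d
       join (select claim_id from claim_payments where claim_id = %1$L) p
            using (claim_id)
      group by d.claim_id',
    p_claim_id);
end;
$$;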

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [PERFORM] Getting an optimal plan on the first execution of a pl/pgsql function

2015-12-14 Thread David G. Johnston
On Mon, Dec 14, 2015 at 11:53 AM, Pedro França <pedro.fra...@golsat.com.br>
wrote:

> I have a really busy function that I need to optimize the best way I can.
> This function is just a nested select statement that is requested several
> times a sec by a legacy application. I'm running a PostgreSQL 9.4 on a
> CentOS 6;
>
> The indexes are in place but I've noticed that it is only used after the
> first execution of the function.
>

How do you know this?

I think that the problem is that Postgres isn't getting the best execution
> plan at first because of a parameter that it is highly exclusive in the
> majority of the cases, but it can be not as good sometimes. We can't change
> the way we call the function to a plain sql statement or a view because we
> can't change the application code itself.
>
> When I test with EXPLAIN ANALYZE after the first execution, the query runs
> really fast but the aplication sessions call the function only once and
> then are terminated. I need that the first execution use the actual
> optimized plan.
>
> We tried messing around with the connector driver that manage the
> connection pooling to issue a DISCARD TEMP instead of DISCARD ALL, so it
> could keep the cached plan of the sessions and the performance improved a
> lot, but I don't want to do that in a production environment.
>

Given the constraints you've listed this seems like it might be your only
avenue of improvement.  Your problem is that the performance improvement is
seen due to caching effects.  If you throw away the cache you lose the
improvement.


> I've tried to change the language to a sql function but it didn't help as
> the execution time didn't drop after the first execution.
>

Yes, this likely would make things worse...depending upon how it is called.

I've tried to add the "SET LOCAL join_collapse_limit = 1" too but it
> appears it doesn't work inside a function;
>

I wouldn't expect that parameter to have any effect in this scenario.

Here is the function code:
>
> CREATE OR REPLACE FUNCTION public.ap_keepalive_geteqpid_veiid(
> IN tcbserie bigint,
> IN protocolo integer)
>   RETURNS TABLE(eqpid integer, veiid integer, tcbid integer, veiplaca
> character varying, veiproprietariocliid integer, tcbtppid integer,
> tcbversao character, veirpmparametro double precision, tcbconfiguracao
> bigint, tcbevtconfig integer, veibitsalertas integer, sluid integer, harid
> integer) AS
> $BODY$
> BEGIN
>
> RETURN QUERY
> SELECT teqp.eqpID,
> teqp.eqpveiID AS veiID,
> tcb.tcbID,
> tvei.veiPlaca,
> tvei.veiProprietariocliID,
> tcb.tcbtppID,
> tcb.tcbVersao,
> tvei.veiRPMParametro,
> COALESCE(COALESCE(NULLIF(tcb.tcbConfiguracao, 0),
> tcc.clcConfiguracaoBitsVeic), 0) AS tcbConfiguracao,
> COALESCE(tcb.tcbevtConfig, 0) AS tcbevtConfig,
> COALESCE(tvei.veiBitsAlertas, 0) AS veiBitsAlertas,
> COALESCE(tvei.veisluID, 0) AS sluID,
> COALESCE(tcb.tcbharID, 0) AS harID
> FROM TabComputadorBordo tcb
> INNER JOIN TabEquipamento teqp ON teqp.eqptcbID = tcb.tcbID
> INNER JOIN TabPacoteProduto tpp ON teqp.eqptppID = tpp.tppID
> INNER JOIN TabVeiculos tvei ON teqp.eqpveiID = tvei.veiID
> LEFT JOIN TabCliente tcli ON tcli.cliid = tvei.veiProprietariocliID
> LEFT JOIN TabClienteConfig tcc ON tcc.clcCliID = tcli.cliID
> WHERE   tcb.tcbserie = $1
> AND teqp.eqpAtivo = 1
> AND tpp.tppIDProtocolo = $2
> AND tvei.veiBloqueioSinal = 0;
>
> END
> $BODY$
>   LANGUAGE plpgsql VOLATILE
>   COST 1
>   ROWS 1;
>
> Execution plan in the first execution:
>

You likely could make this STABLE instead of VOLATILE; though that doesn't
solve your problem.


> "Function Scan on ap_keepalive_geteqpid_veiid  (cost=0.25..0.26 rows=1
> width=116) (actual time=3.268..3.268 rows=1 loops=1)"
> "Planning time: 0.032 ms"
> "Execution time: 3.288 ms"
>
> Second execution:
>
> "Function Scan on ap_keepalive_geteqpid_veiid  (cost=0.25..0.26 rows=1
> width=116) (actual time=0.401..0.402 rows=1 loops=1)"
> "Planning time: 0.058 ms"
> "Execution time: 0.423 ms"
>
>
I'm doubting the query inside of the function is the problem here...it is
the function usage itself.  Calling a function has overhead in that the
body of the function needs to be processed.  This only has to happen once per
session.  The first call of the function incurs this overhead while
subsequent calls do not.

Pending others correcting me...I'm fairly certain regarding my conclusions
though somewhat inexperienced in doing this kind of diagnostics.

David J.


Re: [PERFORM] No index only scan on md5 index

2015-11-25 Thread David G. Johnston
On Wednesday, November 25, 2015, Adam Brusselback <adambrusselb...@gmail.com>
wrote:

> Hey all,
>
> I have an attachment table in my database which stores a file in a bytea
> column, the file name, and the size of the file.
>
> Schema:
> CREATE TABLE attachment
> (
>   attachment_id uuid NOT NULL DEFAULT gen_random_uuid(),
>   attachment_name character varying NOT NULL,
>   attachment_bytes_size integer NOT NULL,
>   attachment_bytes bytea NOT NULL,
>   CONSTRAINT attachment_pkey PRIMARY KEY (attachment_id)
> );
>
> I do lookups on this table based on the md5 of the attachment_bytes
> column, so I added an index:
> CREATE INDEX idx_attachment_bytes_md5 ON attachment
> ((md5(attachment_bytes)::uuid));
>
> Queries like this are sped up by the index no problem:
> SELECT attachment_id
> FROM attachment
> WHERE md5(attachment_bytes)::uuid = 'b2ab855ece13a72a398096dfb6c832aa';
>
> But if I wanted to return the md5 value, it seems to be totally unable to
> use an index only scan:
> SELECT md5(attachment_bytes)::uuid
> FROM attachment;
>
>
Ok.

Any reason not to add the uuid column to the table?

AFAIK the system is designed to return data from the heap, not an index.
While it possibly can in some instances, if you need to return the value you
should store it directly in the table.
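
If the md5 is worth returning it is probably worth storing, e.g. (column name
made up, and you would have to keep it in sync yourself or with a trigger):

ALTER TABLE attachment ADD COLUMN attachment_bytes_md5 uuid;
UPDATE attachment SET attachment_bytes_md5 = md5(attachment_bytes)::uuid;
CREATE INDEX ON attachment (attachment_bytes_md5);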

David J.


Re: [PERFORM] Why is now()::date so much faster than current_date

2015-11-17 Thread David Rowley
On 17 November 2015 at 21:49, Thomas Kellerer <spam_ea...@gmx.net> wrote:

> Hello,
>
> I stumbled over this answer: http://stackoverflow.com/a/9717125/330315
> and this sounded quite strange to me.
>
> So I ran this on my Windows laptop with Postgres 9.4.5, 64bit and indeed
> now()::date is much faster than current_date:
>
>   explain analyze
>   select current_date
>   from   generate_series (1, 1000000);
>
>   Function Scan on generate_series  (cost=0.00..6.00 rows=1000 width=0)
> (actual time=243.878..1451.839 rows=1000000 loops=1)
>   Planning time: 0.047 ms
>   Execution time: 1517.881 ms
>
> And:
>
>   explain analyze
>   select now()::date
>   from   generate_series (1, 1000000);
>
>   Function Scan on generate_series  (cost=0.00..6.00 rows=1000 width=0)
> (actual time=244.491..785.819 rows=1000000 loops=1)
>   Planning time: 0.037 ms
>   Execution time: 826.612 ms
>
>
>
The key to this is in the EXPLAIN VERBOSE output:

postgres=# explain verbose select current_date;
QUERY PLAN
--
 Result  (cost=0.00..0.01 rows=1 width=0)
   Output: ('now'::cstring)::date
(2 rows)

You can see that the implementation of current_date requires using the
date_in() function as well as the date_out() function. date_in() parses the
'now' string, then the resulting date is converted back into a date string
with date_out().  Using now()::date does not have to parse any date
strings, it just needs to call date_out() to give the final output.
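
The same EXPLAIN VERBOSE for the now()::date version only shows the output
cast, with no 'now' string left to parse at runtime (typed from memory rather
than copied, so the costs may not be exact):

postgres=# explain verbose select now()::date;
                QUERY PLAN
------------------------------------------
 Result  (cost=0.00..0.01 rows=1 width=0)
   Output: (now())::date
(2 rows)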

The reason for this is likely best explained by the comment in gram.y:

/*
* Translate as "'now'::text::date".
*
* We cannot use "'now'::date" because coerce_type() will
* immediately reduce that to a constant representing
* today's date.  We need to delay the conversion until
* runtime, else the wrong things will happen when
* CURRENT_DATE is used in a column default value or rule.
*
* This could be simplified if we had a way to generate
* an expression tree representing runtime application
* of type-input conversion functions.  (As of PG 7.3
* that is actually possible, but not clear that we want
* to rely on it.)
*
* The token location is attached to the run-time
* typecast, not to the Const, for the convenience of
* pg_stat_statements (which doesn't want these constructs
* to appear to be replaceable constants).
*/

--
 David Rowley   http://www.2ndQuadrant.com/
<http://www.2ndquadrant.com/>
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [PERFORM] Slow 3 Table Join with v bad row estimate

2015-11-11 Thread David Osborne
Sorry Igor - yes wrong plan.

Here's the new one ...
(running a wee bit slower this morning - still 20x faster that before
however)

http://explain.depesz.com/s/64YM


QUERY PLAN


 HashAggregate  (cost=70661.35..70661.36 rows=1 width=24) (actual
time=1305.098..1326.956 rows=52624 loops=1)
   Buffers: shared hit=232615 read=3871 dirtied=387
   ->  Nested Loop  (cost=1.29..70661.34 rows=1 width=24) (actual
time=6.307..1242.567 rows=53725 loops=1)
 Buffers: shared hit=232615 read=3871 dirtied=387
 ->  Index Scan using branch_po_state_idx on branch_purchase_order
o  (cost=0.42..822.22 rows=1768 width=17) (actual time=0.042..6.001
rows=1861 loops=1)
   Index Cond: ((po_state)::text = 'PLACED'::text)
   Filter: ((supplier)::text = 'XX'::text)
   Rows Removed by Filter: 3016
   Buffers: shared hit=2218
 ->  Nested Loop  (cost=0.87..39.49 rows=1 width=36) (actual
time=0.151..0.651 rows=29 loops=1861)
   Buffers: shared hit=230397 read=3871 dirtied=387
   ->  Index Scan using ssales_ib_replace_order_no on
stocksales_ib ss  (cost=0.44..33.59 rows=1 width=31) (actual
time=0.093..0.401 rows=29 loops=1861)
 Index Cond: (replace((order_no)::text, ' '::text,
''::text) = ((o.branch_code)::text || (o.po_number)::text))
 Filter: ((o.supplier)::bpchar = branch_code)
 Buffers: shared hit=13225 read=2994
   ->  Index Only Scan using
branch_purchase_order_products_po_id_product_code_idx on
branch_purchase_order_products p  (cost=0.43..5.90 rows=1 width=12) (actual
time=0.006..0.007 rows=1 loops=54396)
 Index Cond: ((po_id = o.po_id) AND (product_code =
(ss.product_code)::text))
 Heap Fetches: 54475
 Buffers: shared hit=217172 read=877 dirtied=387
 Total runtime: 1336.253 ms
(20 rows)


Re: [PERFORM] Slow 3 Table Join with v bad row estimate

2015-11-10 Thread David Osborne
Thanks very much Tom.

Doesn't seem to quite do the trick. I created both those indexes (or the
missing one at least)
Then I ran analyse on stocksales_ib and branch_purchase_order.
I checked there were stats held in pg_stats for both indexes, which there
were.
But the query plan still predicts 1 row and comes up with the same plan.

I also tried setting default_statistics_target to the maximum and reran analyse
on both tables with the same results.

In addition, there is also no change if I change the query to have the join
ss.order_no = o.branch_code || ' ' || o.po_number and create an index on
(branch_code || ' ' || o.po_number).

Am I right in thinking my workaround with the WITH clause is in no way
guaranteed to continue to perform better than the current query if I rolled
that out?



On 10 November 2015 at 15:03, Tom Lane  wrote:

>
> Yeah, the planner is not nearly smart enough to draw any useful
> conclusions about the selectivity of that clause from standard statistics.
> What you might try doing is creating functional indexes on the two
> subexpressions:
>
> create index on branch_purchase_order ((branch_code || po_number));
> create index on stocksales_ib (replace(order_no,' ',''));
>
> (actually it looks like you've already got the latter one) and then
> re-ANALYZING.  I'm not necessarily expecting that the planner will
> actually choose to use these indexes in its plan; but their existence
> will prompt ANALYZE to gather stats about the expression results,
> and that should at least let the planner draw more-accurate conclusions
> about the selectivity of the equality constraint.
>
> regards, tom lane
>


Re: [PERFORM] Slow 3 Table Join with v bad row estimate

2015-11-10 Thread David Osborne
Ok - wow.

Adding that index, I get the same estimate of 1 row, but a runtime of
~450ms.
A 23000ms improvement.

http://explain.depesz.com/s/TzF8h

This is great. So as a general rule of thumb, if I see a Join Filter
removing an excessive number of rows, I can check if that condition can be
added to an index from the same table which is already being scanned.

Thanks for this!

On 10 November 2015 at 17:05, Tom Lane  wrote:

>
> But taking a step back, it seems like the core problem in your explain
> output is here:
>
> >>->  Nested Loop  (cost=1.29..83263.71 rows=1 width=24) (actual
> time=0.196..23799.930 rows=53595 loops=1)
> >>  Join Filter: (o.po_id = p.po_id)
> >>  Rows Removed by Join Filter: 23006061
> >>  Buffers: shared hit=23217993 dirtied=1
>
> That's an awful lot of rows being formed by the join only to be rejected.
> You should try creating an index on
> branch_purchase_order_products(po_id, product_code)
> so that the po_id condition could be enforced at the inner indexscan
> instead of the join.
>
>
>


[PERFORM] Slow 3 Table Join with v bad row estimate

2015-11-10 Thread David Osborne
-
 HashAggregate  (cost=83263.72..83263.73 rows=1 width=24) (actual
time=23908.777..23927.461 rows=52500 loops=1)
   Buffers: shared hit=23217993 dirtied=1
   ->  Nested Loop  (cost=1.29..83263.71 rows=1 width=24) (actual
time=0.196..23799.930 rows=53595 loops=1)
 Join Filter: (o.po_id = p.po_id)
 Rows Removed by Join Filter: 23006061
 Buffers: shared hit=23217993 dirtied=1
 ->  Nested Loop  (cost=0.86..57234.41 rows=3034 width=23) (actual
time=0.162..129.508 rows=54259 loops=1)
   Buffers: shared hit=18520
   ->  Index Scan using branch_po_state_idx on
branch_purchase_order o  (cost=0.42..807.12 rows=1672 width=17) (actual
time=0.037..4.863 rows=1916 loops=1)
 Index Cond: ((po_state)::text = 'PLACED'::text)
 Filter: ((supplier)::text = 'XX'::text)
 Rows Removed by Filter: 3050
 Buffers: shared hit=2157
   ->  Index Scan using ssales_ib_replace_order_no on
stocksales_ib ss  (cost=0.44..33.74 rows=1 width=31) (actual
time=0.014..0.044 rows=28 loops=1916)
 Index Cond: (replace((order_no)::text, ' '::text,
''::text) = ((o.branch_code)::text || (o.po_number)::text))
 Filter: ((o.supplier)::bpchar = branch_code)
 Rows Removed by Filter: 0
 Buffers: shared hit=16363
 ->  Index Scan using branch_purchase_order_product_code_idx on
branch_purchase_order_products p  (cost=0.43..5.45 rows=250 width=12)
(actual time=0.018..0.335 rows=425 loops=54259)
   Index Cond: ((product_code)::text = (ss.product_code)::text)
   Buffers: shared hit=23199473 dirtied=1
 Total runtime: 23935.995 ms
(22 rows)


So we can see straight away that the outer Nested loop expects 1 row, and
gets 53595. This isn't going to help the planner pick the most efficient
plan I suspect.

I've tried increasing default_statistics_target to the max and re-analysing
all the tables involved but this does not help the estimate.
I suspect it's due to the join being based on a functional result, meaning any
stats are ignored?

What has improved runtimes is using a WITH clause to carry out the first
join explicitly. But although it runs in half the time, the stats are still
way out and I feel it is maybe just because I'm limiting the planner's
choices that it by chance picks a different, quicker, plan.
It does a Hash Join and Seq Scan


with bpo as (
select
o.branch_code || o.po_number as order_no,
o.po_id,
o.supplier,
o.branch_code,
p.product_code
from branch_purchase_order o
join branch_purchase_order_products p using(po_id)
where
o.po_state='PLACED'
and o.supplier='XX'
)
select
po_id,
product_code,
sum(qty) as dispatch_qty,
max(invoice_date) as dispatch_date,
count(invoice_date) as dispatch_count
from (

select
o.po_id,
o.product_code,
ss.qty,
ss.invoice_date
from
bpo o
join stocksales_ib ss on o.supplier=ss.branch_code
and o.product_code=ss.product_code
and o.order_no=replace(ss.order_no,' ','')

) x
group by po_id,product_code

Explain:
http://explain.depesz.com/s/r7v


Can anyone suggest a better approach for improving the plan for this type
of query?


 select version();
version

---
 PostgreSQL 9.3.10 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.6.3
20120306 (Red Hat 4.6.3-2), 64-bit


Regards,
-- 
David


Re: [PERFORM] GroupAggregate and Integer Arrays

2015-10-23 Thread David Osborne
Ah yes sorry:

I think these cover it...

CREATE AGGREGATE sum (
  sfunc = array_add,
  basetype = INTEGER[],
  stype = INTEGER[],
  initcond = '{}'
   );

CREATE OR REPLACE FUNCTION array_add(int[], int[]) RETURNS int[] AS $$
   -- Add two arrays.
   select ARRAY (
      SELECT coalesce($1[i],0) + coalesce($2[i],0)
      FROM (
         select generate_series(least(array_lower($1, 1), array_lower($2, 1)),
                                greatest(array_upper($1, 1), array_upper($2, 1)),
                                1) AS i
      ) sub
      GROUP BY i
      ORDER BY i
   );
$$ LANGUAGE sql STRICT IMMUTABLE;
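
For example:

select array_add('{1,2,3}'::int[], '{10,20}'::int[]);
-- should return {11,22,3}: the missing element is treated as zero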




On 23 October 2015 at 17:15, Jeff Janes <jeff.ja...@gmail.com> wrote:

> On Fri, Oct 23, 2015 at 7:29 AM, David Osborne <da...@qcode.co.uk> wrote:
>
>
>> Hi,
>>
>> Wondering if anyone could suggest how we could improve the performance of
>> this type of query?
>> The intensive part is the summing of integer arrays as far as I can see.
>>
>
>
> Postgres does not ship with any 'sum' function which takes array arguments.
>
> > select sum('{1,2,3,4,5,6}'::int[]);
>
> ERROR:  function sum(integer[]) does not exist
>
> Are you using a user defined function?  If so, how did you define it?
>
> Cheers,
>
> Jeff
>


[PERFORM] GroupAggregate and Integer Arrays

2015-10-23 Thread David Osborne
Hi,

Wondering if anyone could suggest how we could improve the performance of
this type of query?
The intensive part is the summing of integer arrays as far as I can see.
We're thinking there's not much we can do to improve performance apart from
throw more CPU at it... would love to be proven wrong though!


*Query:*

  explain (analyse,buffers)
  select
  sum(s2.array_a),sum(s2.array_b)
  from mytable s1 left join mytable s2
  on s1.code=s2.code and s1.buyer=s2.seller and s2.seller='XX'
  where s1.buyer='XX'
  group by s1.buyer,s1.code
;


*Depesz Explain Link:*

http://explain.depesz.com/s/m3XP


   QUERY PLAN


 GroupAggregate  (cost=275573.49..336223.36 rows=2547 width=524) (actual
time=1059.340..22946.772 rows=22730 loops=1)
   Buffers: shared hit=113596 read=1020 dirtied=15
   ->  Merge Left Join  (cost=275573.49..278850.09 rows=113560 width=524)
(actual time=1058.773..1728.186 rows=240979 loops=1)
 Merge Cond: ((s1.code)::text = (s2.code)::text)
 Join Filter: (s1.buyer = (s2.seller)::bpchar)
 Buffers: shared hit=113596 read=1020 dirtied=15
 ->  Index Only Scan using mytable_buyer_idx on mytable s1
 (cost=0.42..1226.06 rows=25465 width=12) (actual time=0.015..35.790
rows=22730 loops=1)
   Index Cond: (buyer = 'XX'::bpchar)
   Heap Fetches: 3739
   Buffers: shared hit=16805 dirtied=1
 ->  Sort  (cost=275573.07..275818.33 rows=98106 width=525) (actual
time=1058.736..1141.560 rows=231662 loops=1)
   Sort Key: s2.code
   Sort Method: quicksort  Memory: 241426kB
   Buffers: shared hit=96791 read=1020 dirtied=14
   ->  Bitmap Heap Scan on mytable s2
 (cost=12256.28..267439.07 rows=98106 width=525) (actual
time=60.330..325.730 rows=231662 loops=1)
 Recheck Cond: ((seller)::text = 'XX'::text)
 Filter: ((seller)::bpchar = 'XX'::bpchar)
 Buffers: shared hit=96791 read=1020 dirtied=14
 ->  Bitmap Index Scan on mytable_seller_idx
 (cost=0.00..12231.75 rows=254844 width=0) (actual time=40.474..40.474
rows=233244 loops=1)
   Index Cond: ((seller)::text = 'XX'::text)
   Buffers: shared hit=30 read=1020
 Total runtime: 22968.292 ms
(22 rows)



*Table size:*

=> select count(*) from mytable;
 count

 602669
(1 row)


*Array types:*

# select array_a,array_b from mytable limit 1;
  array_a  | array_b
---+---
 {0,0,0,0,0,0,0,0,0,0,0,0} | {0,0,0,0,0,0,0,0,0,0,0,0}


*Example schema:*

# \d mytable
Table "public.mytable"
  Column   | Type  |   Modifiers
---+---+
 buyer | character(2)  | not null
 code  | character varying(20) | not null
 seller| character varying(50) |
 array_a   | integer[] |
 array_b   | integer[] |
Indexes:
"mytable_buyer_code_idx" UNIQUE, btree (buyer, code) CLUSTER
"mytable_buyer_idx" btree (buyer)
"mytable_code_idx" btree (code)
"mytable_seller_idx" btree (seller)


*Version:*

> SELECT version() ;
   version

--
 PostgreSQL 9.3.6 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.6.3
20120306 (Red Hat 4.6.3-2), 64-bit
(1 row)

This is running on an AWS RDS instance.

Thanks for any pointers
-- 
David


Re: [PERFORM] Having some problems with concurrent COPY commands

2015-10-13 Thread David Rowley
On 14 October 2015 at 08:33, Shaun Thomas <bonesmo...@gmail.com> wrote:

> On Tue, Oct 13, 2015 at 7:23 AM, Andres Freund <and...@anarazel.de> wrote:
> > and send the results.
>
> Whelp, I'm an idiot. I can't account for how I did it, but I can only
> assume I didn't export my ports in the tests properly. I ran
> everything again and there's a marked difference between 9.3 and 9.4.
> The parallel copy times still inflate, but only from 1.4s to 2.5s at 4
> procs. Though it gets a bit dicey after that.
>
>
>
Do the times still inflate in the same way if you perform the COPY before
adding the indexes to the table?

--
 David Rowley   http://www.2ndQuadrant.com/
<http://www.2ndquadrant.com/>
 PostgreSQL Development, 24x7 Support, Training & Services


[PERFORM] Re: [PERFORM] Query > 1000× slowdown after adding datetime comparison

2015-08-31 Thread David G. Johnston
On Mon, Aug 31, 2015 at 12:09 PM, twoflower <standa.ku...@gmail.com> wrote:

> I have the following three tables:
>
> DOCUMENT
>   id (index)
>   documenttype
>   date_last_updated: timestamp(6) (indexed)
>
> EXTERNAL_TRANSLATION_UNIT
>   id (indexed)
>   fk_id_document (indexed)
>
> EXTERNAL_TRANSLATION
>   id (indexed)
>   fk_id_translation_unit (indexed)
>
> Table sizes:
>  DOCUMENT: 381 000
>  EXTERNAL_TRANSLATION_UNIT: 76 000 000
>  EXTERNAL_TRANSLATION: 76 000 000
>
> Now the following query takes about 36 minutes to finish:
>
>  SELECT u.id AS id_external_translation_unit,
> r.id AS id_external_translation,
> u.fk_id_language AS fk_id_source_language,
> r.fk_id_language AS fk_id_target_language,
> doc.fk_id_job
>  FROM "EXTERNAL_TRANSLATION_UNIT" u
>  JOIN "DOCUMENT" doc ON u.fk_id_document = doc.id
>  JOIN "EXTERNAL_TRANSLATION" r ON u.id = r.fk_id_translation_unit
>  WHERE doc.date_last_updated >= date(now() - '171:00:00'::interval)
>  ORDER BY r.id LIMIT 1000
>
> This is the query plan:
>
> <http://postgresql.nabble.com/file/n5864045/qp1.png>
>
> If I remove the WHERE condition, it returns immediately.
>
>
So does "SELECT 1;" - but since that doesn't give the same answer it is
not very relevant.


> Am I doing something obviously wrong?
>

Not obviously...


> Thank you for any ideas.
>

Consider updating the translation tables at the same time the document
table is updated.  That way you can apply the WHERE and ORDER BY clauses
against the same table.

I presume you've run ANALYZE on the data.

I would probably try something like:

WITH docs AS ( SELECT ... WHERE date > ...)
SELECT ... FROM (translations join translation_unit) t
WHERE EXISTS (SELECT 1 FROM docs WHERE t.doc_id = docs.doc_id)
ORDER BY t.id LIMIT 1000

You are trying to avoid the NESTED LOOP and the above has a decent chance
of materializing docs and then building either a bitmap or a hash map for both
docs and translations, thus performing a single sequential scan over each
instead of performing 70+ million index lookups.
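
Spelled out with the names from your query, roughly (untested, and note that
doc.fk_id_job would have to be carried through the CTE if you still need it):

WITH docs AS (
  SELECT id
  FROM "DOCUMENT"
  WHERE date_last_updated >= date(now() - '171:00:00'::interval)
)
SELECT u.id AS id_external_translation_unit,
       r.id AS id_external_translation,
       u.fk_id_language AS fk_id_source_language,
       r.fk_id_language AS fk_id_target_language
FROM "EXTERNAL_TRANSLATION_UNIT" u
JOIN "EXTERNAL_TRANSLATION" r ON u.id = r.fk_id_translation_unit
WHERE EXISTS (SELECT 1 FROM docs WHERE docs.id = u.fk_id_document)
ORDER BY r.id
LIMIT 1000;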

Take this with a grain of salt as my fluency in this area is limited - I
tend to work with trial-and-error but without data that is difficult.

I'm not sure if the planner could be smarter because you are asking a
question it is not particularly suited to estimating - namely cross-table
correlations.  Rethinking the model is likely to give you a better outcome
long-term though it does seem like there should be room for improvement
within the stated query and model.

As Tomas said you likely will benefit from increased working memory in
order to make materializing and hashing/bitmapping favorable compared to a
nested loop.

David J.


[PERFORM] Re: [PERFORM] Re: Query > 1000× slowdown after adding datetime comparison

2015-08-31 Thread David G. Johnston
On Mon, Aug 31, 2015 at 3:03 PM, twoflower <standa.ku...@gmail.com> wrote:

> Tomas Vondra-4 wrote
> > Please share explain plans for both the slow and the fast query. That
> > makes it easier to spot the difference, and possibly identify the cause.
> >
> > Also, what PostgreSQL version is this, and what are "basic" config
> > parameters (shared buffers, work mem)?
>
> I am running 9.4.4, here are the basic config parameters:
>
> work_mem = 32 MB
> shared_buffers = 8196 MB
> temp_buffers = 8 MB
> effective_cache_size = 4 GB
>
> I have run ANALYZE on all tables prior to running the queries. The query
> plan for the fast version (without the WHERE clause) follows:
>
> <http://postgresql.nabble.com/file/n5864075/qp2.png>
>
> What I don't understand is the difference between the inner NESTED LOOP
> between the slow and the fast query plan. In the fast one, both index scans
> have 1000 as the actual row count. I would expect that, given the LIMIT
> clause. The slow query plan, however, shows ~ 75 000 000 as the actual row
> count. Is the extra WHERE condition the only and *plausible* explanation
> for
> this difference?
>
>
In the slow query it requires evaluating every single document to
determine which of the 75 million translations can be discarded; after
which the first 1000 when sorted by translation id are returned.

In the first query the executor simply scans the translation index in
ascending order and stops after retrieving the first 1,000.

What you are expecting, I think, is for that same process to continue
beyond 1,000 should any of the first 1,000 be discarded due to the
corresponding document not being updated recently enough, until 1,000
translations are identified.  I'm not sure why the nested loop executor is
not intelligent enough to do this...

The important number in these plans is "loops", not "rows"

David J.


[PERFORM] Re: [PERFORM] Re: Query > 1000× slowdown after adding datetime comparison

2015-08-31 Thread David G. Johnston
On Mon, Aug 31, 2015 at 3:19 PM, twoflower <standa.ku...@gmail.com> wrote:

> And another thing which comes out as a little surprising to me - if I
> replace
> the *date_last_updated* condition with another one, say *doc.documenttype =
> 4*, the query finishes immediately. *documenttype* is an unindexed integer
> column.
>
>
The only index that matters here is the pkey on document.  The problem is
the failure to exit the nested loop once 1,000 translations have been
gathered.  Translation is related to document via key - hence the nested
loop.  A hashing-based plan would make use of the secondary indexes but
likely would not be particularly useful in this query (contrary to my
earlier speculation).

Here's the query plan:
>
> <http://postgresql.nabble.com/file/n5864080/qp3.png>
>
> What's so special about that *date_last_updated* condition that makes it so
> slow to use? Is it because it involves the *date()* function call that it
> makes it difficult for the planner to guess the data distribution in the
> DOCUMENT table?
>

What happens if you pre-compute the date condition and hard code it?
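
For example, keeping the query identical apart from swapping the expression
for a literal (date picked arbitrarily):

 WHERE doc.date_last_updated >= DATE '2015-08-24'
 -- instead of: WHERE doc.date_last_updated >= date(now() - '171:00:00'::interval)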

David J.


Re: [PERFORM] Most efficient way of querying M 'related' tables where N out of M may contain the key

2015-08-21 Thread David G. Johnston
On Fri, Aug 21, 2015 at 8:07 AM, Stephane Bailliez sbaill...@gmail.com
wrote:


 On Thu, Aug 20, 2015 at 8:19 PM, David G. Johnston 
 david.g.johns...@gmail.com wrote:


 SELECT [...]
 FROM (SELECT reference_id, [...] FROM table_where_referenced_id_is_a_pk
 WHERE reference_id EXISTS/IN/JOIN)

 src
 LEFT JOIN type1 USING (reference_id)
 LEFT JOIN type2 USING (reference_id)
 [...]



Place ^ in a CTE named (find_all)


 there are no tables where reference_id is a pk, I could create one or do :
 select reference_id from ( values (..), (...), (...)  )

 the tricky part with the join (and where I was not clear about it in my
 original description) is that a reference_id  can match in multiple tables
 (eg. it can be a fk in type1 and type2), so it then becomes a bit harder to
 collect all the common attributes and 'types' when doing joins like this.

 For example let's assume there is a group_id to be be retrieved among all
 tables as a common attribute:

 if reference_id was existing only in one table, I could do
 coalesce(type1.group_id, ... type5.group_id) as group_id in the main select
 however that would not work in this case.


WITH find_all (reference_id, type_identifier, type_id) AS ( ... )
SELECT type_identifier, array_agg(reference_id), array_agg(type_id)
FROM find_all
WHERE type_identifier IS NOT NULL
GROUP BY type_identifier

find_all will return at least one row, possibly empty if no matches are
present, and will return multiple rows if more than one matches.  You can
use array_agg as shown, or play around with custom composite types, or
even build a JSON document.

David J.


Re: [PERFORM] Most efficient way of querying M 'related' tables where N out of M may contain the key

2015-08-20 Thread David G. Johnston
On Thu, Aug 20, 2015 at 8:03 PM, Stephane Bailliez sbaill...@gmail.com
wrote:

 Pretty bad subject description... but let me try to explain.


 I'm trying to figure out what would be the most efficient way to query
 data from multiple tables using a foreign key.


SELECT [...]
FROM (SELECT reference_id, [...] FROM table_where_referenced_id_is_a_pk
WHERE reference_id EXISTS/IN/JOIN)

src
LEFT JOIN type1 USING (reference_id)
LEFT JOIN type2 USING (reference_id)
[...]

Or consider whether PostgreSQL Inheritance would work - though basically
it's a friendly API over the UNION ALL query you proposed.

David J.


Re: [PERFORM] Slow HashAggregate/cache access

2015-08-05 Thread David Rowley
On 6 August 2015 at 07:55, Andreas Joseph Krogh andr...@visena.com wrote:

 On Wednesday 5 August 2015 at 20:25:25, Maxim Boguk
 maxim.bo...@gmail.com wrote:

 [snip]

 I think I know where the issue is.
 The PostgreSQL planner is unable to pass join conditions into a subquery with
 aggregate functions (it's a well known limitation).
 [snip]


 I'm curious; will 9.5 help here as it has WHERE clause pushdown in
 subqueries with window functions?

 http://michael.otacoo.com/postgresql-2/postgres-9-5-feature-highlight-where-pushdown-with-window-function/



I've not looked at the query in any detail, but that particular patch won't
help as it only allows pushdown of predicate into subqueries with window
functions where the predicate is part of all of the subquery's PARTITION BY
clauses.

The query in question has no window clauses, so qual pushdown is not
disabled for that reason.

Regards

David Rowley
--
 David Rowley   http://www.2ndQuadrant.com/
http://www.2ndquadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services


Re: [PERFORM] Slow HashAggregate/cache access

2015-08-05 Thread David Rowley
On 6 August 2015 at 06:25, Maxim Boguk maxim.bo...@gmail.com wrote:



 On Wed, Aug 5, 2015 at 11:41 AM, Alexandre de Arruda Paes 
 adald...@gmail.com wrote:

 Hi,

 First, sorry to compare Post with other database system, but I know
 nothing about Oracle...

 This customer have an application made with a framework thats generates
 the SQL statements (so, We can't make any query optimizations) .

 We did the following tests:

 1) Postgresql 9.3 and Oracle 10 in a desktop machine(8 GB RAM, 1 SATA
 disk,Core i5)
 2) Postgresql 9.3 in a server + FC storage (128 GB RAM, Xeon 32 cores,
 SAS disks)


 I think I know where the issue is.
 The PostgreSQL planner is unable to pass join conditions into a subquery with
 aggregate functions (it's a well known limitation).


I think this statement is quite misleading. Let's look at an example:

create table t1 (a int not null, v int not null);
create table t2 (a int not null);
insert into t1 select s.i,10 from generate_series(1,1000)
s(i),generate_series(1,1000);
insert into t2 select generate_series(1,1000);
create index on t1 (a);


explain select t2.a,s.sumv from (select a,sum(v) sumv from t1 group by a) s
inner join t2 on t2.a = s.a where t2.a = 1;
QUERY PLAN
--
 Nested Loop  (cost=0.42..59.76 rows=1 width=12)
   ->  GroupAggregate  (cost=0.42..42.24 rows=1 width=8)
         Group Key: t1.a
         ->  Index Scan using t1_a_idx on t1  (cost=0.42..37.38 rows=969 width=8)
               Index Cond: (a = 1)
   ->  Seq Scan on t2  (cost=0.00..17.50 rows=1 width=4)
         Filter: (a = 1)
(7 rows)

As you can see, the predicate is pushes down just fine into a subquery with
aggregates.

The likely reason that PostgreSQL is not behaving the same as SQL Server
and Oracle is because the predicate pushdowns are limited to equality
operators only as internally these are all represented by a series of
equivalence classes which in this case say that 1 = t2.a = t1.a,
therefore it's possible to apply t1.a = 1 at the lowest level.

These equivalence classes don't currently handle non-equality operators.
Here's an example:

explain select t2.a,s.sumv from (select a,sum(v) sumv from t1 group by a) s
inner join t2 on t2.a = s.a where t2.a <= 1;
                            QUERY PLAN
------------------------------------------------------------------
 Hash Join  (cost=19442.51..19466.27 rows=1 width=12)
   Hash Cond: (t1.a = t2.a)
   ->  HashAggregate  (cost=19425.00..19435.00 rows=1000 width=8)
         Group Key: t1.a
         ->  Seq Scan on t1  (cost=0.00..14425.00 rows=1000000 width=8)
   ->  Hash  (cost=17.50..17.50 rows=1 width=4)
         ->  Seq Scan on t2  (cost=0.00..17.50 rows=1 width=4)
               Filter: (a <= 1)
(8 rows)

Notice the seq scan on t1 instead of the index scan on t1_a_idx.

A way around this is to manually push the predicate down into the subquery:

explain select t2.a,s.sumv from (select a,sum(v) sumv from t1 where t1.a <=
1 group by a) s inner join t2 on t2.a = s.a where t2.a <= 1;
                              QUERY PLAN
---------------------------------------------------------------------
 Nested Loop  (cost=0.42..21.98 rows=1 width=12)
   Join Filter: (t1.a = t2.a)
   ->  GroupAggregate  (cost=0.42..4.46 rows=1 width=8)
         Group Key: t1.a
         ->  Index Scan using t1_a_idx on t1  (cost=0.42..4.44 rows=1 width=8)
               Index Cond: (a <= 1)
   ->  Seq Scan on t2  (cost=0.00..17.50 rows=1 width=4)
         Filter: (a <= 1)
(8 rows)


The query in question is likely performing badly because of this:

 ->  Seq Scan on fr13t1  (cost=0.00..25072.50 rows=311800 width=21) (actual time=0.007..115.766 rows=311800 loops=1)
       Filter: (fr01codemp = '1'::smallint)
       Buffers: shared hit=21175

Just how selective is fr01codemp = '1'::smallint ? Is there an index on
that column ?
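
If not, it is cheap to check the distribution and add one (names taken from
the plan fragment above):

select fr01codemp, count(*)
from fr13t1
group by fr01codemp
order by count(*) desc;

create index on fr13t1 (fr01codemp);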

Regards

David Rowley

--
 David Rowley   http://www.2ndQuadrant.com/
http://www.2ndquadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services


Re: [PERFORM] hyperthreadin low performance

2015-07-21 Thread David Rowley
On 21 July 2015 at 14:59, Jeison Bedoya Delgado jeis...@audifarma.com.co
wrote:

 hi everyone,

 Recently we updated a database to a machine with RHEL7, but I see that the
 performance is better if the hyperthreading technology is deactivated and
 only 32 cores are used.

 Is it normal that the machine performance is better with 32 cores than with
 64 cores?


You might be interested in
http://www.postgresql.org/message-id/53f4f36e.6050...@agliodbs.com

Regards

David Rowley

--
 David Rowley   http://www.2ndQuadrant.com/
http://www.2ndquadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services


Re: [PERFORM] Insert vs Update

2015-07-15 Thread David G. Johnston
On Wed, Jul 15, 2015 at 12:16 PM, Robert DiFalco robert.difa...@gmail.com
wrote:

 First off I apologize if this is question has been beaten to death. I've
 looked around for a simple answer and could not find one.

 Given a database that will not have it's PKEY or indices modified, is it
 generally faster to INSERT or UPDATE data. And if there is a performance
 difference is it substantial?

 I have a situation where I can easily do one or the other to the same
 effect. For example, I have a journaling schema with a limited number of
 states for an entry. Currently each state is it's own table so I just
 insert them as they occur. But I could easily have a single entry table
 where the row is updated with column information for states (after the
 entry's initial insertion).

 Not a big deal but since it's so easy for me to take either approach I was
 wondering if one was more efficient (for a large DB) than another.


There is HOT (heap only tuple?) optimization that can occur if only
non-indexed data is altered.  I do not recall the specifics.

Dave


Re: [PERFORM] Insert vs Update

2015-07-15 Thread David G. Johnston
On Wednesday, July 15, 2015, Robert DiFalco robert.difa...@gmail.com
wrote:

 First off I apologize if this is question has been beaten to death. I've
 looked around for a simple answer and could not find one.

 Given a database that will not have it's PKEY or indices modified, is it
 generally faster to INSERT or UPDATE data. And if there is a performance
 difference is it substantial?


This seems odd.  If you have an option to update but choose to insert what
becomes of the other record?


Re: [PERFORM] Insert vs Update

2015-07-15 Thread David G. Johnston
On Wed, Jul 15, 2015 at 3:16 PM, Robert DiFalco robert.difa...@gmail.com
wrote:

 The different event types have differing amounts of related data.


On this basis alone I would select the multiple-table version as my
baseline and only consider something different if the performance of this
was insufficient and I could prove that an alternative arrangement was more
performant.

A single optional date with meta-data embedded in the column name is usually
workable, but if you then have a bunch of other columns with names like:

preparation_date, preparation_col1, preparation_col2, consumed_col1,
consumed_col2, consumed_date

I would find that to be undesirable.

You may be able to put Table Inheritance to good use here...
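
Something like this, just as a sketch (names invented to match your example);
selecting from the parent scans all of the children:

CREATE TABLE meal_event_base (
    meal_id     bigint      NOT NULL,
    occurred_at timestamptz NOT NULL DEFAULT now()
);
CREATE TABLE meal_prepared  () INHERITS (meal_event_base);
CREATE TABLE meal_consumed  () INHERITS (meal_event_base);
CREATE TABLE meal_abandoned (reason text) INHERITS (meal_event_base);

INSERT INTO meal_prepared (meal_id) VALUES (1);
SELECT * FROM meal_event_base WHERE meal_id = 1;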

I do not know (but doubt) if HOT optimization works when going from NULL to
non-NULL since the former is stored in a bitmap while the latter occupies
normal relation space, and thus the update would likely end up writing an
entirely new record upon each event category recording.

David J.


Re: [PERFORM] Insert vs Update

2015-07-15 Thread David G. Johnston
On Wed, Jul 15, 2015 at 1:56 PM, Robert DiFalco robert.difa...@gmail.com
wrote:



 On Wed, Jul 15, 2015 at 10:33 AM, David G. Johnston 
 david.g.johns...@gmail.com wrote:

 On Wednesday, July 15, 2015, Robert DiFalco robert.difa...@gmail.com
 wrote:

 First off I apologize if this is question has been beaten to death. I've
 looked around for a simple answer and could not find one.

 Given a database that will not have it's PKEY or indices modified, is it
 generally faster to INSERT or UPDATE data. And if there is a performance
 difference is it substantial?


 This seems odd.  If you have an option to update but choose to insert
 what becomes of the other record?



 Consider the two pseudo-schemas, I'm just making this up for example
 purposes:

 SCHEMA A
 =
 meal(id SEQUENCE,user_id, started DEFAULT NOW())
 meal_prepared(ref_meal_id, prepared DEFAULT NOW())
 meal_abandoned(ref_meal_id, abandoned ...)
 meal_consumed(ref_meal_id, consumed ...)
 etc.

 Then in response to different meal events you always have an insert.

 aMealId = INSERT INTO meal(user_id) VALUES (aUserId);

 When preparation starts:

 INSERT INTO meal_prepared(ref_meal_id) VALUES (aMealId);

 And so on for each event.

 Compare that to this:

 SCHEMA B
 =
 meal_event(id, started, prepared, abandoned, consumed, ...)

 The start of the meal is an INSERT:

 aMealId = INSERT INTO meal_event(user_id, started) VALUES (aUserId, NOW());

 When preparation starts:

 UPDATE meal_event SET prepared = NOW() WHERE id = aMealId;

 And so on.

 Basically the same data, in one case you always do inserts and add new
 tables for new events. In the other case you only insert once and then
 update for each state, then you add columns if you have new states.

 As I said this is just an example. But in SCHEMA A you have only inserts,
 lots of tables and in SCHEMA B you have a lot of updates and a lot of
 possibly NULL columns if certain events don't occur.

 Is that more clear?


Yes, you are trying to choose between a bunch of one-to-one (optional)
relationships versus adding additional columns to a table all of which can
be null.

I'd argue that neither option is normal (in the DB normalization sense).

CREATE TABLE meal (meal_id bigserial);
CREATE TABLE meal_event_type (meal_event_id bigserial);
CREATE TABLE meal_event (meal_id bigint, meal_event_id bigint, occurred_at
timestamptz);

So now the decision is one of how to denormalize - materialized views being
one of the ways to do so.  The specific solution would depend in part on the
final application queries that you need to write.

If you do want to model the de-normalized form, which I would likely be
tempted to do given a fixed set of events that do not require additional
related attributes, the approach would be to place the few event timestamps
on the main table and UPDATE them to non-null.

In the normal form you will likely find partial indexes to be quite useful.
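
For example (a sketch only - the literal 2 stands in for whichever
meal_event_type row means "consumed"):

CREATE INDEX meal_event_consumed_idx
    ON meal_event (meal_id)
    WHERE meal_event_id = 2;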

David J.


Re: [PERFORM] Insert vs Update

2015-07-15 Thread David G. Johnston
On Wed, Jul 15, 2015 at 4:53 PM, Michael Nolan htf...@gmail.com wrote:

 On Wed, Jul 15, 2015 at 3:16 PM, Robert DiFalco robert.difa...@gmail.com
 wrote:


 Thanks David, my example was a big simplification, but I appreciate your
 guidance. The different event types have differing amounts of related data.
 Query speed on this schema is not important, it's really the write speed
 that matters. So I was just wondering given the INSERT or UPDATE approach
 (with no indexed data being changed) if one is likely to be substantially
 faster than the other.


 As I understand how ACID compliance is done, updating a record will
 require updating any indexes for that record, even if the index keys are
 not changing.  That's because any pending transactions still need to be
 able to find the 'old' data, while new transactions need to be able to find
 the 'new' data.  And ACID also means an update is essentially a
 delete-and-insert.


I might be a bit pedantic here but what you describe is a byproduct of the
specific implementation that PostgreSQL uses to effect Consistency (the C
in ACID) as opposed to a foregone outcome of being ACID compliant.

http://www.postgresql.org/docs/9.4/static/mvcc-intro.html

I'm out of my comfort zone here, but the HOT optimization is designed to
leverage the fact that an update to a row that does not affect indexed
values is able to leave the index alone: during an index lookup the
index points to the old tuple, the executor notices that there is a chain
present, and walks that chain to find the currently active tuple.

In short, if the only index is a PK an update of the row can avoid touching
that index.

I mentioned that going from NULL to Not NULL may disrupt this but I'm
thinking I may have mis-spoken.

Also, with separate tables the amount of data to write is going to be less
because you'd have fewer columns on the affected tables.

While an update is a delete+insert, a delete is mostly just a bit-flip
action - at least mid-transaction.  Depending on volume, though, the
periodic impact of vacuuming may want to be taken into consideration.

David J.


Re: [PERFORM] Are there tuning parameters that don't take effect immediately?

2015-06-12 Thread David G. Johnston
On Fri, Jun 12, 2015 at 4:37 PM, Michael Nolan htf...@gmail.com wrote:

 The only thing I can come up that's happened since last night was that we
 ran the nightly vacuum analyze on that database, but I did not change the
 statistics target.



The answer to your question is no; parameter changes at worst take
effect after a reboot - though most are used on the very next query that
runs.

The vacuum would indeed likely account for the gains - there being
significantly fewer dead/invisible rows to have to scan over and discard
while retrieving the live rows that fulfill your query.

David J.


Re: [PERFORM] How to reduce writing on disk ? (90 gb on pgsql_tmp)

2015-06-03 Thread David G. Johnston
You should repost this directly and not through Nabble.  It has wrapped
your code in raw tags which the PostgreSQL mailing list software strips.

On Wednesday, June 3, 2015, ben.play benjamin.co...@playrion.com wrote:

 Hi all,

 We have a big database (more than 300 Gb) and we run a lot of queries each
 minute.

 However, once an hour, the (very complex) query writes A LOT on the disk
 (more than 95 Gb !!!)
 We have 64 Gb of RAM and this is our config :


 And my error on the query is :


 Do you know how to solve this problem ?

 Best regards,
 Benjamin.



 --
 View this message in context:
 http://postgresql.nabble.com/How-to-reduce-writing-on-disk-90-gb-on-pgsql-tmp-tp5852321.html
 Sent from the PostgreSQL - performance mailing list archive at Nabble.com.


 --
 Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-performance



Re: [PERFORM] Postgres is using 100% CPU

2015-05-31 Thread Jean-David Beyer
On 05/30/2015 09:46 AM, Ashik S L wrote:
 We are using postgres SQL version 8.4.17..
 Postgres DB szie is 900 MB and we are inserting 273 rows at once .and
 each row is of 60 bytes.Every time we insert 16380 bytes of data.

Way back when, I was inserting a lot of rows of data (millions of rows)
and it was taking many hours on a machine with 6 10,000 rpm Ultra/320
SCSI hard drives and 8 GBytes of ram. Each insert was a separate
transaction.

When I bunched up lots of rows (thousaands) into a single transaction,
the whole thing took less than an hour.

Is it possible that when you insert 273 rows at once, you are doing it
as 273 transactions instead of one?
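
Something along these lines, with hypothetical table and values - one
transaction around the whole batch instead of one per row:

BEGIN;
INSERT INTO readings (sensor_id, reading) VALUES
    (1, 60.0),
    (2, 61.5),
    (3, 59.8);   -- ...the rest of the 273-row batch...
COMMIT;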

-- 
  .~.  Jean-David Beyer  Registered Linux User 85642.
  /V\  PGP-Key:166D840A 0C610C8B Registered Machine  1935521.
 /( )\ Shrewsbury, New Jerseyhttp://linuxcounter.net
 ^^-^^ 09:00:01 up 3 days, 9:57, 2 users, load average: 4.89, 4.90, 4.91


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Fastest way / best practice to calculate next birthdays

2015-05-21 Thread David G. Johnston
On Thursday, May 21, 2015, Bosco Rama postg...@boscorama.com wrote:

 On 05/20/15 20:22, David G. Johnston wrote:
  On Monday, May 18, 2015, er.tejaspate...@gmail.com javascript:; 
  er.tejaspate...@gmail.com javascript:; wrote:
 
  If I have to find upcoming birthdays in current week and the current
 week
  fall into different months - how would you handle that?
 
 
  Extract(week from timestamptz_column)
 
  ISO weeks are not affected by month boundaries but do start on Monday.

 There is the year start/end boundary conditions to worry about there.

 If the current week covers Dec28-Jan02 then week of year won't help for
 a birthday on Jan01 or Jan02 if 'today' is in the Dec portion.  Ditto
 for birthday in Dec portion when 'today' is in the Jan portion.


You need to read the documentation regarding ISO year and ISO week more
carefully.  There is no issue at year boundaries as long as your definition
of a week starts with Monday and contains 7 days.  The ISO year for January
1st can be different from the Gregorian year for the same date.
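
A minimal sketch of that approach, with a hypothetical table and column:

SELECT p.*
FROM people p
WHERE extract(week from p.birthday) = extract(week from current_date);
-- ISO weeks always run Monday through Sunday, regardless of month boundaries.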

David J.


Re: [PERFORM] Fastest way / best practice to calculate next birthdays

2015-05-20 Thread David G. Johnston
On Monday, May 18, 2015, er.tejaspate...@gmail.com 
er.tejaspate...@gmail.com wrote:

 If I have to find upcoming birthdays in current week and the current week
 fall into different months - how would you handle that?


Extract(week from timestamptz_column)

ISO weeks are not affected by month boundaries but do start on Monday.

David J.


[PERFORM] Index Scan Backward Slow

2015-05-01 Thread David Osborne
=278731 width=0) (actual time=23.298..23.298 rows=275909 loops=1)
 Index Cond: (code = 'XX'::bpchar)
 Buffers: shared hit=765
 Total runtime: 184.043 ms
(13 rows)

http://explain.depesz.com/s/E9VE

Thanks in advance for any help.

Regards,
-- 
David Osborne
Qcode Software Limited
http://www.qcode.co.uk


Re: [PERFORM] Index Scan Backward Slow

2015-05-01 Thread David Osborne
Simple... that did it... thanks!

dev= create index on table(code,row_id);
CREATE INDEX
Time: 38088.482 ms
dev= explain (analyse,buffers)  select row_id as last_row_id from table
where code='XX' order by row_id desc limit 1;

   QUERY PLAN


 Limit  (cost=0.43..0.46 rows=1 width=4) (actual time=0.070..0.071 rows=1
loops=1)
   Buffers: shared hit=2 read=3
   ->  Index Only Scan Backward using table_code_row_id_idx on table
 (cost=0.43..7999.28 rows=278743 width=4) (actual time=0.067..0.067 rows=1
loops=1)
 Index Cond: (code = 'XX'::bpchar)
 Heap Fetches: 1
 Buffers: shared hit=2 read=3
 Total runtime: 0.097 ms
(7 rows)


On 1 May 2015 at 11:59, Evgeniy Shishkin itparan...@gmail.com wrote:


  On 01 May 2015, at 13:54, David Osborne da...@qcode.co.uk wrote:
 
  Hi,
 
  We have a query which finds the latest row_id for a particular code.
 
  We've found a backwards index scan is much slower than a forward one, to
 the extent that disabling indexscan altogether actually improves the query
 time.
 
  Can anyone suggest why this might be, and what's best to do to improve
 the query time?
 
 
 
  dev= \d table
 Table public.table
  Column|  Type  | Modifiers
  --++---
   row_id   | integer|
   code | character(2)   |
  Indexes:
  table_code_idx btree (code)
  table_row_idx btree (row_id)
 
  dev= select count(*) from table;
count
  -
   6090254
  (1 row)
 
  dev= select count(distinct(row_id)) from table;
count
  -
   5421022
  (1 row)
 
  dev= select n_distinct from pg_stats where tablename='table' and
 attname='row_id';
   n_distinct
  
-0.762951
  (1 row)
 
  dev= show work_mem;
   work_mem
  ---
   1249105kB
  (1 row)
 
  dev= select version();
 version
 
 --
   PostgreSQL 9.3.6 on x86_64-unknown-linux-gnu, compiled by gcc (GCC)
 4.6.3 20120306 (Red Hat 4.6.3-2), 64-bit
  (1 row)
 
 
  The query in question:
 
  dev= explain (analyse,buffers)  select row_id as last_row_id from table
 where code='XX' order by row_id desc limit 1;
 
  QUERY PLAN
 
 --
   Limit  (cost=0.43..1.67 rows=1 width=4) (actual time=835.281..835.282
 rows=1 loops=1)
 Buffers: shared hit=187961
  ->  Index Scan Backward using table_row_idx on table
 (cost=0.43..343741.98 rows=278731 width=4) (actual time=835.278..835.278
 rows=1 loops=1)
   Filter: (code = 'XX'::bpchar)
   Rows Removed by Filter: 4050971
   Buffers: shared hit=187961
   Total runtime: 835.315 ms
  (7 rows)
 
  http://explain.depesz.com/s/uGC
 
 
  So we can see it's doing a backwards index scan. Out of curiosity I
 tried a forward scan and it was MUCH quicker:
 
  dev= explain (analyse,buffers)  select row_id as first_row_id from
 table where code='XX' order by row_id asc limit 1;
 
  QUERY PLAN
 
 ---
   Limit  (cost=0.43..1.67 rows=1 width=4) (actual time=19.473..19.474
 rows=1 loops=1)
 Buffers: shared hit=26730
  ->  Index Scan using table_row_idx on table  (cost=0.43..343741.98
 rows=278731 width=4) (actual time=19.470..19.470 rows=1 loops=1)
   Filter: (code = 'XX'::bpchar)
   Rows Removed by Filter: 62786
   Buffers: shared hit=26730
   Total runtime: 19.509 ms
  (7 rows)
 
  http://explain.depesz.com/s/ASxD
 
 
  I thought adding a index on row_id desc might be the answer but it has
 little effect:
 
  dev= create index row_id_desc_idx on table(row_id desc);
  CREATE INDEX
  Time: 5293.812 ms
 
  dev= explain (analyse,buffers)  select row_id as last_row_id from table
 where code='XX' order by row_id desc limit 1;
 
 QUERY PLAN
 
 
   Limit  (cost=0.43..1.66 rows=1 width=4) (actual time=944.666..944.667
 rows=1 loops=1)
 Buffers: shared hit=176711 read=11071
  ->  Index Scan using row_id_desc_idx on table  (cost=0.43..342101.98
 rows=278731 width=4) (actual time=944.663..944.663 rows=1 loops=1)
   Filter: (code = 'XX'::bpchar)
   Rows Removed by Filter: 4050971
   Buffers: shared hit=176711 read=11071
   Total runtime: 944.699 ms
  (7 rows)
 
  http

Re: [PERFORM] unlogged tables

2015-04-13 Thread David G. Johnston
On Monday, April 13, 2015, Matheus de Oliveira matioli.math...@gmail.com
wrote:


 On Mon, Apr 13, 2015 at 4:31 PM, dgabriel gabriel.do...@gmail.com
 javascript:_e(%7B%7D,'cvml','gabriel.do...@gmail.com'); wrote:

 In the event of a normal shutdown, we can flush all the writes to disk
 so we know all the data has been written, so there is no need to
 truncate.

 Isn't possible to periodically flush data to disk and in case of crush
 postgres to load only the data that existed at last flush? The periodic
 flush could be configurable, for example every 30 minutes or after x rows
 updated/inserted.


 There is no such facility implemented for UNLOGGED TABLEs. That could be a
 feature request though.


Well, that is half right anyway.  UNLOGGED tables obey checkpoints just
like any other table.  The missing feature is an option to leave the relation
restored to the last checkpoint.  Instead, not knowing whether there were
changes since the last checkpoint, the system truncates the relation.

What use case is there for a behavior where the last checkpoint's data is left
in the relation upon restarting - not knowing whether other data could have
been written subsequently?

David J.


Re: [PERFORM] unlogged tables

2015-04-13 Thread David G. Johnston
On Mon, Apr 13, 2015 at 4:49 PM, Jeff Janes jeff.ja...@gmail.com wrote:

 On Mon, Apr 13, 2015 at 1:49 PM, David G. Johnston 
 david.g.johns...@gmail.com wrote:

 On Monday, April 13, 2015, Matheus de Oliveira matioli.math...@gmail.com
 wrote:


 On Mon, Apr 13, 2015 at 4:31 PM, dgabriel gabriel.do...@gmail.com
 wrote:

 In the event of a normal shutdown, we can flush all the writes to disk
 so we know all the data has been written, so there is no need to
 truncate.

 Isn't possible to periodically flush data to disk and in case of crush
 postgres to load only the data that existed at last flush? The periodic
 flush could be configurable, for example every 30 minutes or after x
 rows
 updated/inserted.


 There is no such facility implemented for UNLOGGED TABLEs. That could be
 a feature request though.


 One way would be to lock dirty buffers from unlogged relations into
 shared_buffers (which hardly seems like a good thing) until the start of a
 super-checkpoint and then write them all out as fast as possible (which
 kind of defeats checkpoint_completion_target).  And then if the crash
 happened during a super-checkpoint, the data would still be inconsistent
 and need to be truncated.




 Well, that is half right anyway.  UNLOGGED tables obey checkpoints just
 like any other table.


 Do they?  I thought they only obeyed shutdown checkpoints, not online
 checkpoints.  I do remember some changes around this area, but none that
 completely reverted that logic.


​I vaguely recall that conversation now...I'm not positive on the exact
mechanics here and, as it pertains to the OP, the difference you describe
is immaterial since in either case the status quo mandates an all or
nothing approach to an unlogged table's contents.​





  The missing feature is an option to leave the relation restored to the last
  checkpoint.  Instead, not knowing whether there were changes since the last
  checkpoint, the system truncates the relation.

  What use case is there for a behavior where the last checkpoint's data is
  left in the relation upon restarting - not knowing whether other data could
  have been written subsequently?


 I would like a way to have unlogged tables be available on a replica
 provided that no changes were made to them between the pg_basebackup and
 the recovery point.


 My use case is that I mark certain read-only-after-bulk-loading tables as
 unlogged solely to avoid blowing out the log archive during the loading
 phase and refresh phase.  This is stuff like vendor catalogs, NCBI
 datasets, ChEMBL datasets, etc, which can simply be re-derived from the
 reference.  It would be nice if these were still available (without having
 to repeat the ETL) after crashes provided they were not written to since a
 checkpoint, and available on cloned test servers without having to repeat
 the ETL on those as well.



​My gut reaction is that those should be in their own clusters and accessed
via postgres_fdw...

That particular use-case would probably best be served with a separate
replication channel which pushes data files from the primary to the slaves
and allows for the slave to basically rewrite its existing table by
pointing to the newly supplied version.  Some kind of CREATE STATIC TABLE
and PUSH STATIC TABLE TO {all | replica name} command combo...though
ideally with less manual intervention...

David J.


Re: [PERFORM] unlogged tables

2015-04-13 Thread David G. Johnston
On Mon, Apr 13, 2015 at 7:45 PM, Jim Nasby jim.na...@bluetreble.com wrote:

 On 4/13/15 7:32 PM, David G. Johnston wrote:

  That particular use-case would probably best be served with a separate
 replication channel which pushes data files from the primary to the
 slaves and allows for the slave to basically rewrite its existing
 table by pointing to the newly supplied version.  Some kind of CREATE
 STATIC TABLE and PUSH STATIC TABLE TO {all | replica name} command
 combo...though ideally with less manual intervention...


 You still have the same problem of knowing if someone has scribbled on the
 data since the last checkpoint.


​That seems like an automation concern though...the more limited idea was
to simply have a means for a table to exist on the master and allow the
user to cause an exact copy of that table to appear on a replica via direct
data transfer (i.e., without need to create a backup/dump).  If the table
already exists on the replica the existing version remains as-is until the
new table is fully push and then a filenode pointer update happens.  If
changes are made to the master the two tables will remain diverged until a
new push occurs.​

I imagine this same idea could be handled external to the database, though
I don't know enough to comment on the specific technical merits of each.


 There's been recent discussion of adding support for read-only tables. If
 we had those, we might be able to support something like...

 INSERT INTO unlogged;
 ALTER TABLE unlogged SET READ ONLY;
 CHECKPOINT;
 /* take backup */

 This should be safe as long as we WAL log changes to read-only status
 (which presumably we would).

 How much work that would entail though, I don't know.

 Ultimately you still have to get the data over to the other machine
 anyway. ISTM it'd be a LOT more useful to look at ways to make the WAL
 logging of bulk inserts (and especially COPY into a known empty table) a
 lot more efficient.


​Jeff Janes makes a comment about wanting ...to avoid blowing out the log
archive...; which I also don't quite follow...

WAL does seem to be designed to solve a different problem than what is
described here - lots of small changes versus few large changes.  Improving
WAL to move the size at which small becomes large is a win but another
channel designed for few large changes may be less complex to implement.
The current work in logical replication likely has merit here as well but
my familiarity with that technology is fairly limited.

David J.


Re: [PERFORM] Weird CASE WHEN behaviour causing query to be suddenly very slow

2015-03-31 Thread David G. Johnston
On Tue, Mar 31, 2015 at 8:58 AM, Kevin Viraud 
kevin.vir...@rocket-internet.de wrote:

 Touche ! Thanks a lot.

 Looking more at the data yes it goes very often to ELSE Clause.  And
 therefore reaching  the MAX_CACHED_RES.

 Is there any way to increase that value?

 Basically, I have several tables containing millions of rows and let say 5
 columns. Those five columns, depending of their  combination give me a 6th
 value.
 We have complex patterns to match and using simple LIKE / EQUAL and so on
 wouldn't be enough. This can be applied to N number of table so we
 refactored this process into a function that we can use in the SELECT
 statement, by giving only the 5 values each time.

 I wouldn't mind using a table and mapping it through a join  if it were for
 my own use.
 But the final query has to be readable and usable for almost-non-initiated
 SQL user... So using a function with encapsulated case when seemed to be a
 good idea and so far worked nicely.

 But we might consider changing it if we have no other choice...

 Regards,

 Kevin


​Thoughts...​

​Memoization: http://en.wikipedia.org/wiki/Memoization

Rewrite the function in pl/perl​ and compare performance

​Hierarchy of CASE statements allowing you to reduce the number of
possibilities in exchange for manually pre-processing the batches on a
significantly less complicated condition probably using only 1 or 2 columns
instead of all five.​

​I'm not familiar with the caching constraint or the data so its hard to
make more specific suggestions.

David J.


Re: [PERFORM] Query RE: Optimising UUID Lookups

2015-03-24 Thread David Rowley
On 21 March 2015 at 23:34, Roland Dunn roland.d...@gmail.com wrote:


 If we did add more RAM, would it be the effective_cache_size setting
 that we would alter? Is there a way to force PG to load a particular
 table into RAM? If so, is it actually a good idea?


Have you had a look at EXPLAIN (ANALYZE, BUFFERS) for the query?

Pay special attention to Buffers: shared read=NNN and Buffers: shared
hit=NNN, if you're not reading any buffers between runs then the pages are
in the PostgreSQL shared buffers. By the looks of your config you have 10GB
of these. On the other hand if you're getting buffer reads, then they're
either coming from disk, or from the OS cache. PostgreSQL won't really know
the difference.

If you're not getting any buffer reads and it's still slow, then the
problem is not I/O

Just for fun... What happens if you stick the 50 UUIDs in some table,
analyze it, then perform a join between the 2 tables, using IN() or
EXISTS()... Is that any faster?
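
A rough sketch of that experiment - the target table and column are
assumptions, and the UUID literals are placeholders for your 50 values:

CREATE TEMP TABLE wanted (id uuid PRIMARY KEY);
INSERT INTO wanted (id) VALUES
    ('00000000-0000-0000-0000-000000000001'),
    ('00000000-0000-0000-0000-000000000002');  -- ...the rest of the ids...
ANALYZE wanted;

EXPLAIN (ANALYZE, BUFFERS)
SELECT t.*
FROM some_table t   -- the table you're currently probing with the 50 UUIDs
WHERE EXISTS (SELECT 1 FROM wanted w WHERE w.id = t.id);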

Also how well does it perform with: set enable_bitmapscan = off; ?

Regards

David Rowley


Re: [PERFORM] query - laziness of lateral join with function

2015-02-12 Thread David G Johnston
Tom Lane-2 wrote
 paulcc <paulcc.two@...> writes:
select count(alpha.id) 
from alpha
cross join lateral some_function(alpha.id) as some_val
where alpha.test
 
Here the function is strict, and moreover its argument will never
be null - hence there should always be a non-null value returned. 
 
 In both cases though, I rather wonder why you're using LATERAL at all, as
 opposed to just calling the function in the main query when you want its
 result.  The query planner can't be expected to make up for arbitrary
 amounts of stupidity in the formulation of the submitted query.

I'm trying to answer this with a bit more detail but cannot because the OP
provided too little information which is then causing Tom to make
assumptions.  I'm not sure to what degree the ORM is being stupid here since
I do not know why it thinks LATERAL is more appropriate than a select-list
function call for a non-SRF function (which I have to presume this is, but
it is not stated).

With respect to "the function will never return NULL": this is not the
issue.  The issue is that the function could return nothing (i.e., zero
records), in which case the CROSS JOIN would suppress the corresponding
correlated row from the result.

Non-SRF functions are more easily used within the select-list of the query
instead of attached to a LATERAL clause; the only issue there is when the
function returns a composite and you try to immediately explode it into its
constituent parts - the function will be evaluated multiple times.  I'm not
sure if that is what Tom is saying above, but the combination of that
limitation and the more limited optimization of functions placed in LATERAL
seems to be the conflict here.
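
For a scalar function the select-list form is usually the simpler spelling
(reusing the names from the query above, and assuming some_function returns a
single value):

SELECT alpha.id, some_function(alpha.id) AS some_val
FROM alpha
WHERE alpha.test;
-- unlike CROSS JOIN LATERAL some_function(alpha.id), this cannot drop rows
-- if the function happens to return no row.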

There has been a recent uptick in interest in making PostgreSQL more ORM
friendly (i.e., more able to simply ignore stuff that is added to the query
even though a particular call doesn't actually need it) but I haven't seen
anyone looking into LATERAL.  More detailed reports may at least bring
exposure to what is being used in the wild and garner interest from other
parties in improving things.  Unfortunately this report is too limited to
really make a dent; lacking even the name of the ORM that is being used and
the entire queries that are being generated - and why.

David J.




--
View this message in context: 
http://postgresql.nabble.com/query-laziness-of-lateral-join-with-function-tp5837706p5837735.html
Sent from the PostgreSQL - performance mailing list archive at Nabble.com.


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


[PERFORM] Re: Migrating a FoxPro system and would like input on the best way to achieve optimal performance

2015-02-09 Thread David G Johnston
TonyS wrote
 Then each client has files within their own directory to keep the size of
 the tables manageable. Each client has 165 tables. These tables are all
 the same definition across the different groups.
 
 I have considered partitioning tables, but if I am correct that would
 result in 330,000 files and I am not certain if that will cause an issue
 with degraded file system performance.

I suggest you not think about files when pondering about PostgreSQL.  That
said, 330,000 tables within a single database, or even cluster, is likely to
be problematic.


 Is it possible to create a tablespace for each group and then create
 partitioned tables for the groups within the group's tablespace to limit
 the number of files in a directory? 

Same point about ignoring files and directories.  Tablespaces let you
place different kinds of data onto different filesystems; using them for
directory management is not particularly helpful.

Note that I presume you are planning on leaving the database backend on
Windows...my experience is more with Linux but your core issue is data model
which is largely O/S agnostic.


 I plan on utilizing the built-in streaming replication, so I assume if I
 went the tablespace route I would need to create directories for all
 future groups from the outset since creating them individually with code
 on the backup systems would be difficult.

Which is another reason why tablespaces should not implement logical
attributes of the system.


 Another option would be placing an extra field in each table identifying
 the group it belongs to and combining all of the separate tables of the
 same definition into one table. This would result in some tables having
 300 million entries currently and that would climb over the next 18
 months.

This is the canonical solution to multi-tenancy.  Physical partitioning then
occurs on a hash of whatever key you are using; you do not have one tenant
per table.
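
A minimal sketch of that layout, with hypothetical names - one shared set of
tables keyed by client, indexed so per-client access stays cheap:

CREATE TABLE clients (
    client_id integer PRIMARY KEY,
    name      text NOT NULL
);

CREATE TABLE readings (
    client_id   integer NOT NULL REFERENCES clients (client_id),
    recorded_at timestamptz NOT NULL,
    payload     text
);

-- every query is scoped by client, so lead the index with client_id
CREATE INDEX ON readings (client_id, recorded_at);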


 The final option I can see is creating a schema for each of the different
 clients. I am not certain if this is a better option than partitioned
 tables. I haven't been able to determine if schema objects are stored in a
 sub directory or if they are in the same directory as all of the other
 tables. If they are in the same directory then the same issue arises as
 the partitioned tables.

Depending on whether clients are able to get access to the data directly you
can also consider having a separate database for each client.  I would then
recommend using either dblink or postgres_fdw to connect to the single
shared database - or just replicate the shared schema and data subset into
each individual client database.


 Of course, I am certain there are a number of other possibilities that I
 am overlooking. I am just trying to determine the best way to move this
 over and get things into a more modern system.

Without understanding how your application works and makes use of the
existing data it is difficult to suggest alternatives.  Specifically around
data visibility and the mechanics behind how the application access
different clients' data.

I would personally choose only between having different databases for each
client or using a client_id column in conjunction with a multi-tenant
database.  Those are the two logical models; everything else (e.g.
partitioning) are physical implementation details.

David J.



--
View this message in context: 
http://postgresql.nabble.com/Migrating-a-FoxPro-system-and-would-like-input-on-the-best-way-to-achieve-optimal-performance-tp5837211p5837241.html
Sent from the PostgreSQL - performance mailing list archive at Nabble.com.


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Postgres does not use indexes with OR-conditions

2014-11-07 Thread David G Johnston
Kevin Grittner-5 wrote
 Andrew Dunstan <andrew@...> wrote:
 On 11/07/2014 12:06 AM, Vlad Arkhipov wrote:
 
 I need to rewrite it in the way below to make Postgres use the index.

 select *
 from commons.financial_documents fd
 where fd.creation_time <= '2011-11-07 10:39:07.285022+08'
  and (
    fd.creation_time < '2011-11-07 10:39:07.285022+08'
  or (fd.creation_time = '2011-11-07 10:39:07.285022+08' and
 fd.financial_document_id < 100)
  )
 order by fd.creation_time desc
 limit 200

 Could you not rewrite it as something like this?:

 where fd.creation_time <= '2011-11-07 10:39:07.285022+08'
 and (fd.creation_time < '2011-11-07 10:39:07.285022+08'
   or fd.financial_document_id < 100)
 
 Yeah, when there are two ways to write a query that are logically
 equivalent, it is better to put the AND at the higher level than
 the OR.  On the other hand, why not simply write it as?:
 
 select *
   from commons.financial_documents fd
   where (fd.creation_time, fd.financial_document_id)
        < ('2011-11-07 10:39:07.285022+08', 100)
   order by fd.creation_time desc
   limit 200

From personal experience and observation on these lists, record inequality is
not particularly intuitive.  I'm also not sure someone is likely to really
get it until they have a problem for which the above is the solution.

That said, is there a place where we supply solutions and idioms for common
queries?  This query as well as pagination-oriented queries are two that
come to mind.  I think the material would fit well in the tutorial section,
but having some kind of quick synopsis and cross-reference in the
performance chapter would aid someone who's looking to solve a problem and
not in general education mode.

David J.




--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Postgres-does-not-use-indexes-with-OR-conditions-tp5826027p5826065.html
Sent from the PostgreSQL - performance mailing list archive at Nabble.com.


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Postgres does not use indexes with OR-conditions

2014-11-06 Thread David Rowley
On Fri, Nov 7, 2014 at 5:16 PM, arhipov arhi...@dc.baikal.ru wrote:

 Hello,

 I have just came across interesting Postgres behaviour with OR-conditions.
 Are there any chances that the optimizer will handle this situation in the
 future?

 select *
 from commons.financial_documents fd
 where fd.creation_time <= '2011-11-07 10:39:07.285022+08'
 order by fd.creation_time desc
 limit 200

 select *
 from commons.financial_documents fd
 where fd.creation_time = '2011-11-07 10:39:07.285022+08'
   or fd.creation_time < '2011-11-07 10:39:07.285022+08'
 order by fd.creation_time desc
 limit 200


 It would certainly be possible, providing the constants compare equally,
but... Question: Would you really want to pay a, say, 1% increase in
planning time for ALL queries, so that you could have this unique case of
queries perform better at execution time?

Is there a valid reason why you don't just write the query with the <=
operator?

Regards

David Rowley


Re: [PERFORM] unnecessary sort in the execution plan when doing group by

2014-10-28 Thread David Rowley
On Tue, Oct 28, 2014 at 7:26 PM, Huang, Suya suya.hu...@au.experian.com
wrote:

  Hi,



 This is the Greenplum database 4.3.1.0.


Likely this is the wrong place to ask for help. The plan output that you've
pasted below looks very different to PostgreSQL's EXPLAIN output.



  QUERY PLAN

---

Gather Motion 24:1  (slice2; segments: 24)  (cost=31286842.08..31287447.81
rows=1683 width=536)

   Rows out:  15380160 rows at destination with 14860 ms to first row,
23856 ms to end, start offset by 104 ms.

   ->  HashAggregate  (cost=31286842.08..31287447.81 rows=1683 width=536)


-

Gather Motion 24:1  (slice2; segments: 24)
(cost=152269717.33..157009763.41 rows=1196982 width=568)

   Rows out:  15380160 rows at destination with 35320 ms to first row,
70091 ms to end, start offset by 102 ms.

   ->  GroupAggregate  (cost=152269717.33..157009763.41 rows=1196982
width=568)


Most likely the reason you're getting the difference in plan is that the
planner has decided that there would be too many hash entries for a
hash table based on the 3 grouping columns... Look at the estimates: 1683 with
2 columns and 1196982 with the 3 columns. If those estimates turned out to
be true, then the hash table for 3 columns would be massively bigger than it
would be with 2 columns. With PostgreSQL you might see the plan change if
you increased the work_mem setting. For Greenplum, I've no idea if that's
the same.
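
On the PostgreSQL side that experiment is just a session-level change before
re-running the query (the value is arbitrary and the table/columns are
placeholders):

SET work_mem = '256MB';

EXPLAIN ANALYZE
SELECT col1, col2, col3, count(*)
FROM some_table
GROUP BY col1, col2, col3;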

Databases are often not very good at knowing what the number of distinct
values would be over more than 1 column. Certain databases have solved this
with multi-column statistics, but PostgreSQL does not have these. Although
I just noticed last night that someone is working on them.

Regards

David Rowley


Re: [PERFORM] IS NOT NULL and LEFT JOIN

2014-10-21 Thread David Rowley
On Tue, Oct 21, 2014 at 2:58 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 Laurent Martelli laurent.marte...@enercoop.org writes:
  Do we agree that both queries are identical ?

 No, they *aren't* identical.  Go consult any SQL reference.  Left join
 conditions don't work the way you seem to be thinking: after the join,
 the RHS column might be null, rather than equal to the LHS column.



For what it's worth I'd say they are identical, at least if you discount
deferred foreign key constraints, or executing the query from within
a volatile function which was called by a query which just updated the
user_info table to break referential integrity.

The presence of the foreign key on contract_contract.user_info which
references user_user_info.id means that any non-null
contract_contract.user_info value must reference a valid user_user_info
record. The join is therefore not required to prove that a contract record
with a non-null user_info matches a user_user_info record, so the join
performed just to check that it exists is pretty much pointless in all the
cases that you're likely to care about.

Although, saying that I'm still a bit confused about the question. Are you
asking if there's some way to get PostgreSQL to run the 1st query faster?
Or are you asking if both queries are equivalent?

Regards

David Rowley


Re: [PERFORM] IS NOT NULL and LEFT JOIN

2014-10-20 Thread David G Johnston
Laurent Martelli wrote
 On 20/10/2014 15:58, Tom Lane wrote:
 Laurent Martelli <laurent.martelli@...> writes:
 Do we agree that both queries are identical ?
 No, they *aren't* identical.  Go consult any SQL reference.  Left join
 conditions don't work the way you seem to be thinking: after the join,
 the RHS column might be null, rather than equal to the LHS column.
 Yes, I was wrong to assume that c.user_info=u.id because of the LEFT JOIN.
 
 But since I only want rows where u.id IS NOT NULL, in any case I will 
 also have c.user_info IS NOT NULL.
 
 Also, having a foreign key, if c.user_info is not null, it will have a 
 match in u. So in that case, either both c.user_info and c.id are null 
 in the result rows, or they are equal.

The planner only expends so much effort converting between equivalent query
forms.  By adding u.id IS NOT NULL you are saying that you really meant to
use INNER JOIN instead of LEFT JOIN, but whether the planner can and/or does
act on that information in the WHERE clause to modify its joins is beyond my
knowledge.  It doesn't seem to, and it probably - correctly - isn't worth
adding the planner cycles to fix a poorly written/generated query on the fly.


Now that it has been pointed out that the two queries you supplied are
semantically different it is unclear what your point here is.  It is known
that Hibernate (and humans too) will generate sub-optimal plans that can be
rewritten using relational algebra and better optimized for having done so. 
But such work takes resources that would be expended for every single query
while manually rewriting the sub-optimal query solves the problem
once-and-for-all.

David J.




--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/IS-NOT-NULL-and-LEFT-JOIN-tp5823591p5823737.html
Sent from the PostgreSQL - performance mailing list archive at Nabble.com.


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] IS NOT NULL and LEFT JOIN

2014-10-20 Thread David G Johnston
David G Johnston wrote
 
 Laurent Martelli wrote
 On 20/10/2014 15:58, Tom Lane wrote:
 Laurent Martelli <laurent.martelli@...> writes:
 Do we agree that both queries are identical ?
 No, they *aren't* identical.  Go consult any SQL reference.  Left join
 conditions don't work the way you seem to be thinking: after the join,
 the RHS column might be null, rather than equal to the LHS column.
 Yes, I was wrong to assume that c.user_info=u.id because of the LEFT
 JOIN.
 
 But since I only want rows where u.id IS NOT NULL, in any case I will 
 also have c.user_info IS NOT NULL.
 
 Also, having a foreign key, if c.user_info is not null, it will have a 
 match in u. So in that case, either both c.user_info and c.id are null 
 in the result rows, or they are equal.
 The planner only expends so much effort converting between equivalent
 query forms.  By adding u.id IS NOT NULL you are saying that you really
 meant to use INNER JOIN instead of LEFT JOIN but whether the planner can
 and/or does act on that information in the WHERE clause to modify its
 joins is beyond my knowledge.  It doesn't seem to and probably correctly
 isn't worth adding the planner cycles to fix a poorly written/generated
 query on-the-fly.
 
 
 Now that it has been pointed out that the two queries you supplied are
 semantically different it is unclear what your point here is.  It is known
 that Hibernate (and humans too) will generate sub-optimal plans that can
 be rewritten using relational algebra and better optimized for having done
 so.  But such work takes resources that would be expended for every single
 query while manually rewriting the sub-optimal query solves the problem
 once-and-for-all.
 
 David J.

Didn't sound right what I wrote above...

The presence of the OR screws things up even further since it does force
the use of LEFT JOIN mechanics for the single case where the name and e-mail
match.

I would maybe try a UNION DISTINCT query instead of an OR clause if you want
to have a query that performs better than the Hibernate one...otherwise
others more knowledgeable than myself have not made any indication that the
planner is unintentionally deficient in its handling of your original query.

You may try posting your actual question, and not the SQL, and see if that
sparks any suggestions.

David J.




--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/IS-NOT-NULL-and-LEFT-JOIN-tp5823591p5823739.html
Sent from the PostgreSQL - performance mailing list archive at Nabble.com.


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] IS NOT NULL and LEFT JOIN

2014-10-19 Thread David Rowley
On Sun, Oct 19, 2014 at 5:10 PM, Laurent Martelli 
laurent.marte...@enercoop.org wrote:

  Hello there,

 I have a strange query plan involving an IS NOT NULL and a LEFT JOIN.

 I grant you that the query can be written without the JOIN on
 user_user_info,
 but it is generated like this by hibernate. Just changing the IS NOT NULL
 condition
 to the other side of useless JOIN makes a big difference in the query plan
 :

 -- THE BAD ONE : given the selectivity on c.name and c.email, barely more
 than one row will ever be returned


But it looks like you're ignoring the fact that the OR condition would
force the query to match not only the name and the email, but also any row
that finds a match in the user_user_info table, which, going by the
planner's estimates, is every row in the contract_contract table. This
is why the planner chooses a seqscan on the contract_contract table instead
of using the index on lower(name).

Is it really your intention to get the rows that match this 'martelli'
contract with this email, and along with that, every contract that
has a non-null user_info record?

I see that you have a foreign key on c.user_info referencing the user, so
this should match everything with a non-null user_info record.


explain analyze select c.*
from contact_contact c
left outer join user_user_info u on c.user_info=u.id
left outer join contact_address a on c.address=a.id
   where lower(c.name)='martelli'
 and c.email='ds...@ezrfz.com' or u.id is not null;

   QUERY
 PLAN

 
  Hash Left Join  (cost=1.83..2246.76 rows=59412 width=4012) (actual
 time=53.645..53.645 rows=0 loops=1)
Hash Cond: (c.user_info = u.id)
Filter: (((lower((c.name)::text) = 'martelli'::text) AND
 ((c.email)::text = 'ds...@ezrfz.com'::text)) OR (u.id IS NOT NULL))
Rows Removed by Filter: 58247
    ->  Seq Scan on contact_contact c  (cost=0.00..2022.12 rows=59412
 width=4012) (actual time=0.007..6.892 rows=58247 loops=1)
    ->  Hash  (cost=1.37..1.37 rows=37 width=8) (actual time=0.029..0.029
 rows=37 loops=1)
  Buckets: 1024  Batches: 1  Memory Usage: 2kB
  ->  Seq Scan on user_user_info u  (cost=0.00..1.37 rows=37
 width=8) (actual time=0.004..0.015 rows=37 loops=1)
  Planning time: 0.790 ms
  Execution time: 53.712 ms

 -- THE GOOD ONE (test IS NOT NULL on contact0_.user_info instead of
 userinfo1_.id)
 explain analyze select c.*
from contact_contact c
left outer join user_user_info u on c.user_info=u.id
left outer join contact_address a on c.address=a.id
   where lower(c.name)='martelli'
 and c.email='ds...@ezrfz.com' or c.user_info is not null;
  QUERY
 PLAN

 
  Bitmap Heap Scan on contact_contact c  (cost=8.60..16.41 rows=1
 width=4012) (actual time=0.037..0.037 rows=0 loops=1)
Recheck Cond: (((email)::text = 'ds...@ezrfz.com'::text) OR (user_info
 IS NOT NULL))
Filter: (((lower((name)::text) = 'martelli'::text) AND ((email)::text =
 'ds...@ezrfz.com'::text)) OR (user_info IS NOT NULL))
    ->  BitmapOr  (cost=8.60..8.60 rows=2 width=0) (actual
 time=0.034..0.034 rows=0 loops=1)
  ->  Bitmap Index Scan on idx_contact_email  (cost=0.00..4.30
 rows=2 width=0) (actual time=0.027..0.027 rows=0 loops=1)
Index Cond: ((email)::text = 'ds...@ezrfz.com'::text)
  ->  Bitmap Index Scan on contact_contact_user_info_idx
 (cost=0.00..4.30 rows=1 width=0) (actual time=0.007..0.007 rows=0 loops=1)
Index Cond: (user_info IS NOT NULL)
  Planning time: 0.602 ms
  Execution time: 0.118 ms



If you look closely at the 2nd query plan, you'll see that no joins are
performed, and it's only the contract_contract table that's looked at. This
is because PostgreSQL sees that none of the columns from the 2 tables
being left joined to are used, and also that the columns you're joining to
on those tables are unique, so joining to them cannot duplicate any rows.
Since these are left joins, a missing match wouldn't filter out rows from
the contract_contract table as it would with INNER JOINs. The planner sees
that these left joins are pointless, so it just removes them from the plan.

Regards

David Rowley


Re: [PERFORM] Slow query

2014-09-23 Thread David G Johnston
Ross Elliott-2 wrote
 Maybe someone can explain this. The following SQL will reproduce our
 issue:
 DROP TABLE IF EXISTS t1 CASCADE;
 CREATE TABLE t1 (name text,
  state text);
 CREATE INDEX t1_name ON t1(name);
 CREATE INDEX t1_state ON t1(state);
 CREATE INDEX t1_name_state ON t1(name,state);
 
 -- Create some sample data
 DO $$
 DECLARE
 states text[] := array['UNKNOWN', 'TODO', 'DONE', 'UNKNOWN'];
 BEGIN
 FOR v IN 1..20 LOOP
   INSERT INTO t1 VALUES('user'||v, states[floor(random()*4)]);
   INSERT INTO t1 VALUES('user'||v, states[floor(random()*4)]);
   INSERT INTO t1 VALUES('user'||v, states[floor(random()*4)]);
   INSERT INTO t1 VALUES('user'||v, states[floor(random()*4)]);
 END LOOP;
 END $$;
 
 
 CREATE OR REPLACE FUNCTION state_to_int(state character varying) RETURNS
 integer
 LANGUAGE plpgsql IMMUTABLE STRICT
 AS $$BEGIN
 IF state = 'UNKNOWN' THEN RETURN 0;
 ELSIF state = 'TODO' THEN RETURN 1;
 ELSIF state = 'DONE' THEN RETURN 2;
 ELSIF state = 'NOT REQUIRED' THEN RETURN 3;
 ELSE RAISE EXCEPTION 'state_to_int called with invalid state value';
 END IF;
 END;$$;
 
 CREATE OR REPLACE FUNCTION int_to_state(state integer) RETURNS character
 varying
 LANGUAGE plpgsql IMMUTABLE STRICT
 AS $$BEGIN
 IF state = 0 THEN RETURN 'UNKNOWN';
 ELSIF state = 1 THEN RETURN 'TODO';
 ELSIF state = 2 THEN RETURN 'DONE';
 ELSIF state = 3 THEN RETURN 'NOT REQUIRED';
 ELSE RAISE EXCEPTION 'int_to_state called with invalid state value';
 END IF;
 END;$$;
 
 -- Why is this a lot slower
 explain (analyse, buffers) select name,
 int_to_state(min(state_to_int(state))) as status from t1 group by t1.name;
 
 -- Than this?
 explain (analyze, buffers) select name, (array['UNKNOWN', 'TODO', 'DONE',
 'NOT REQUIRED'])[min(
 CASE state
 WHEN 'UNKNOWN' THEN 0
 WHEN 'TODO' THEN 1
 WHEN 'DONE' THEN 2
 WHEN 'NOT REQUIRED' THEN 3
 END)] AS status from t1 group by t1.name;
 
 -- This is also very much slower
 explain (analyze, buffers) select name, (array['UNKNOWN', 'TODO', 'DONE',
 'NOT REQUIRED'])[min(state_to_int(state))] AS status from t1 group by
 t1.name;
 
 This was done on:
 PostgreSQL 9.3.5 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu
 4.8.2-19ubuntu1) 4.8.2, 64-bit
 
 We get results like this:
 QUERY PLAN
 
 ---
  GroupAggregate  (cost=0.42..280042.62 rows=208120 width=15) (actual
 time=0.076..2439.066 rows=20 loops=1)
Buffers: shared hit=53146
   ->  Index Scan using t1_name on t1  (cost=0.42..21931.42 rows=80
 width=15) (actual time=0.009..229.477 rows=80 loops=1)
  Buffers: shared hit=53146
  Total runtime: 2460.860 ms
 (5 rows)
 
 QUERY PLAN
 
 ---
  GroupAggregate  (cost=0.42..36012.62 rows=208120 width=15) (actual
 time=0.017..559.384 rows=20 loops=1)
Buffers: shared hit=53146
   ->  Index Scan using t1_name on t1  (cost=0.42..21931.42 rows=80
 width=15) (actual time=0.008..197.133 rows=80 loops=1)
  Buffers: shared hit=53146
  Total runtime: 574.550 ms
 (5 rows)
 
 QUERY PLAN
 
 ---
  GroupAggregate  (cost=0.42..228012.62 rows=208120 width=15) (actual
 time=0.042..2089.367 rows=20 loops=1)
Buffers: shared hit=53146
   ->  Index Scan using t1_name on t1  (cost=0.42..21931.42 rows=80
 width=15) (actual time=0.008..237.854 rows=80 loops=1)
  Buffers: shared hit=53146
  Total runtime: 2111.004 ms
 (5 rows)
 
 
 We cannot change our table structure to reflect something more sensible.
 What we would really like to know is why using functions is so much slower
 than the unreadable method.
 
 Regards
 
 Ross

PL/pgSQL functions are black boxes and expensive to execute; you should
define these functions as SQL functions and see if that helps.
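
A sketch of that suggestion, reusing the state_to_int() definition from the
post above - as a single-statement SQL function the planner can often inline
it (note this version returns NULL for unexpected values instead of raising
an error):

CREATE OR REPLACE FUNCTION state_to_int(state character varying) RETURNS integer
LANGUAGE sql IMMUTABLE
AS $$
    SELECT CASE state
        WHEN 'UNKNOWN'      THEN 0
        WHEN 'TODO'         THEN 1
        WHEN 'DONE'         THEN 2
        WHEN 'NOT REQUIRED' THEN 3
    END;
$$;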

David J.






--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Slow-query-tp5820086p5820096.html
Sent from the PostgreSQL - performance mailing list archive at Nabble.com.


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] weird execution plan

2014-09-12 Thread David G Johnston
Huang, Suya wrote
 Both queries have been run several times so cache would have same effect
 on both of them?  Below is the plan with buffer information.

Not everyone does, so it's nice to make certain - especially since I'm not all
that familiar with the code involved.  But since no one else has answered I
will theorize.

SELECT count(*) FROM ( SELECT DISTINCT col FROM tbl )

vs

SELECT count(DISTINCT col) FROM tbl

The code for SELECT DISTINCT col is likely highly efficient because it
works on complete sets of records.

The code for SELECT count(DISTINCT col) is at a relative disadvantage
since it must evaluate one row at a time and remember whether it had seen
the same value previously before deciding whether to increment a counter.

With a large number of duplicate rows the process of making the row set
smaller before counting the end result will perform better, since fewer rows
must be evaluated in the less efficient count(DISTINCT) expression - the
time saved there more than offsets the cost of effectively
passing over that subset of the data a second time.

HashAggregate(1M rows) + Aggregate(200k rows) < Aggregate(1M rows)

David J.




--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/weird-execution-plan-tp5818730p5818905.html
Sent from the PostgreSQL - performance mailing list archive at Nabble.com.


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] weird execution plan

2014-09-11 Thread David G Johnston
Huang, Suya wrote
 Can someone figure out why the first query runs so slow  comparing to the
 second one? They generate the same result...

Try: EXPLAIN (ANALYZE, BUFFERS)

I believe you are only seeing caching effects.

David J.



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/weird-execution-plan-tp5818730p5818733.html
Sent from the PostgreSQL - performance mailing list archive at Nabble.com.


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] query performance with hstore vs. non-hstore

2014-09-01 Thread David G Johnston
Huang, Suya wrote
 See output of explain (analyze, timing off); the total runtime is close to
 the one with timing enabled.

Calling 43s close to 70s doesn't sound right...


 dev=# explain (analyze, timing off) select cha_type, sum(visits) from
 (select (each(visits)).key as cha_type,(each(visits)).value::numeric as
 visits from weekly_hstore a  join seg1 b on a.ref_id=b.ref_id )foo  group
 by cha_type  order by sum(visits) desc;

What version of PostgreSQL are you using?

Two calls to each() and cast to numeric are not free.

Your sequential scan savings is nearly 9 seconds but you lose all of that,
and more, when PostgreSQL evaluates the result of the scan and has to
process the each() and the cast before it performs the join against the
expanded result.  There is no planner node for this activity but it does
cost time - in this case more time than it would take to simply store the
native data types in separate rows.

You really should expand the hstore after the join (i.e., in the top-most
select-list) but in this case since the join removed hardly any rows the
gain from doing so would be minimal.  The idea being you should not expand
the hstore of any row that fails the join condition since it will not end up
in the final result anyway.

Also, in this specific case, the call to each(...).key is pointless - you
never use the data.

If you did need to use both columns, and are using 9.3, you should re-write
this to use LATERAL.
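
For example, something along these lines - table and column names are taken
from the thread, so treat them as assumptions:

SELECT kv.key AS cha_type, sum(kv.value::numeric) AS visits
FROM weekly_hstore a
JOIN seg1 b ON a.ref_id = b.ref_id
CROSS JOIN LATERAL each(a.visits) AS kv
GROUP BY kv.key
ORDER BY sum(kv.value::numeric) DESC;
-- each() runs once per joined row, and only for rows that survive the join.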

In 9.2- you, possibly using a CTE, could do something like this:

SELECT (each).* FROM (
SELECT each(hs) FROM ( VALUES('k=>1'::hstore) ) h (hs)
) src

This is a single call to each(), in a subquery, whose result is then
expanded using (col).* notation in the parent query.  This avoids calling
each() twice - and note that (each(...)).* does not work to avoid the
double call - you have to use a subquery or a CTE to ensure that it is
not collapsed (OFFSET 0 should work too, but I find the CTE a little
cleaner personally).

David J.





--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/query-performance-with-hstore-vs-non-hstore-tp5817109p5817281.html
Sent from the PostgreSQL - performance mailing list archive at Nabble.com.


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] autocommit (true/false) for more than 1 million records

2014-08-27 Thread David Johnston
On Wed, Aug 27, 2014 at 1:02 AM, Alex Goncharov 
alex.goncharov@gmail.com wrote:

  Thank you, Kevin -- this is helpful.

 Thank you David, too.


  But it still leaves questions for me.

 Still...


 Alex Goncharov alex.goncharov@gmail.com wrote:

  How do I decide, before starting a COPY data load, whether such a load
  protection (complexity) makes sense (is necessary)?

 This is *the* practical question.


 David G Johnston david.g.johns...@gmail.com wrote:

  You should probably consider something like: http://pgloader.io/

 This is not my question; I want to see if anybody can offer a
 meaningful situation evaluation strategy for a potential using or not
 using COPY for loading the big data.


​OK.  Though I presume that given limitations to copy - of which the whole
all-or-nothing is one - that pointing out more user-friendly API's would
be worthwhile.​


 If nobody can, fine: it'll give me the reason to claim Nobody knows.

  Normal case, with normal COPY,

 This is the case I am asking about: the COPY operation limitations for
 the big data: until what point a plain COPY can be used.

  you load a bad file into an empty table, it fails, you truncate and
  get better data for the next attempt.

 This is not how many businesses operate.


​Yet this is basically what you are asking about​



  How long that will take is system (IOPS/CPU) and data dependent.

 How long, was not the question: my question was originally about the
 behavior for a bad record at the end of a large data set submitted to
 COPY; when it was stated that the data in process becomes an
 invisible (until committed) part of the target table, it became
 obvious to me that the fundamental question has to be asked: How much
 can fit there, in the temporary operational space (whatever it's
 called in PostgreSQL.)?  df /mount - free or 2^32?

  The probability of failure is source dependent - and prior
  experience plays a large role here as well.

 Not the question.

  If you plan to load directly into a live table the wasted space from
  a bad load could kill you so smaller partial loads are better - if
  you can afford the implicit system inconsistency such a partial load
  would cause.

 Not the question.


​These were things to consider when deciding on whether it is worthwhile to
split the large file into chunks.​


  If you understand how the system works

 I don't, to the necessary extent, so I asked for an expert opinion :)

  you should be able to evaluate the different pieces and come to a
  conclusion as how best to proceed in a specific situation.  No one
  else on this list has the relevant information to make that
  judgement call.

 We'll see; too early to tell yet :)

  If this is just asking about rules-of-thumb

 Yes.

  I'd say figure out how many records 100MB consumes and COMMIT after that
  many records.

 Pardon me: I am running COPY and know how many records are processed
 so far?.. (Hmm... can't be.)


​Take you 1TB file, extract the first 100MB, count the number of
records-separators.  Commit after that many.​
​


  10,000 records is also a nice round number to pick - regardless of
  the amount of MB consumed.  Start there and tweak based upon
  experience.

 You are clearly suggesting to split the large data file into many
 small ones.  To split very intelligently, on the record boundaries.

 And since this is very hard and would involve quite another, external
 processing machinery, I am trying to understand until what point this
 is safe not to do (subject to what factors.)


​See thoughts to consider from previous e-mail.​


  If you are not taking advantage of the unlogged load optimization,

 I don't see any way to control this for COPY only.  Are you talking
 about the 'postgresql.conf' settings?


​I am not sure if this is the same thing but I am pretty sure he is
referring to creating an unlogged table as the copy target - thus avoiding
WAL.​


  If you only have 500k free in your archive directory that 1MB file
  will pose a problem...though if you have 4TB of archive available
  the 1TB would fit easily.

 So the answer to the How much data can fit in the COPY storage
 areas? question is solely a df /mount/point thing?

 I.e. before initiating the COPY, I should:

ls -l DATA-FILE
df -m /server/db-cluster/pg_data-or-something

 compare the two values and be assured that my COPY will reach the end
 of my DATA-FILE (whether is stumbles in the end or not) if the former
 value is meaningfully smaller than the latter?

 I would take this for the answer. (Let's see if there are other
 evaluation suggestions.)


​That should get the copy to succeed though whether you blow up your
archives or slaves would not be addressed.


  Do you compress your WAL files before shipping them off to the
  archive?  How compressible is your data?

 Try to give me the upper limit evaluation strategy, when all the
 compression and archive factors are working in my favor.


​Assume worse-case unless you

Re: [PERFORM] autocommit (true/false) for more than 1 million records

2014-08-26 Thread David G Johnston
 in particular are likely to be involved.  You get some leeway
depending on compression but that is data specific and thus something you
have to test yourself if you are operating at the margin of your system's
resources.

David J.




--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/autocommit-true-false-for-more-than-1-million-records-tp5815943p5816460.html
Sent from the PostgreSQL - performance mailing list archive at Nabble.com.

Re: [PERFORM] autocommit (true/false) for more than 1 million records

2014-08-25 Thread David Johnston
On Mon, Aug 25, 2014 at 9:40 AM, Emi Lu em...@encs.concordia.ca wrote:


 By the way, could someone let me know why set autocommit(false) is for
 sure faster than true please? Or, some online docs talk about this.


​Not sure about the docs specifically but:

Commit is expensive because as soon as it is issued all of the data has to
be guaranteed written.  ​While ultimately the same amount of data is
guaranteed by doing them in batches there is opportunity to achieve
economies of scale.

(I think...)
When you commit you flush data to disk - until then you can make use of
RAM.  Once you exhaust RAM you might as well commit and free up that RAM
for the next batch.

David J.


Re: [PERFORM] autocommit (true/false) for more than 1 million records

2014-08-22 Thread David G Johnston
Emi Lu-2 wrote
 Hello,
 
 Trying to insert into one table with 1 million records through java JDBC 
 into psql8.3. May I know (1) or (2) is better please?
 
 (1) set autocommit(true)
 (2) set autocommit(false)
   commit every n records (e.g., 100, 500, 1000, etc)
 
 Thanks a lot!
 Emi

Typically the larger the n the better.  Locking and risk of data loss on a
failure are the tradeoffs to consider.  Other factors, like memory, make
choosing too large an n bad so using 500,000 is probably wrong but 500 is
probably overly conservative.  Better advice depends on context and
hardware.

You should also consider upgrading to a newer, supported, version of
PostgreSQL.

David J.





--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/autocommit-true-false-for-more-than-1-million-records-tp5815943p5815946.html
Sent from the PostgreSQL - performance mailing list archive at Nabble.com.


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] query on parent partition table has bad performance

2014-08-20 Thread David G Johnston
Huang, Suya wrote
 Hi,
 
 I have a question about partition table query performance in postgresql,
 it's an old version 8.3.21, I know it's already out of support. so any
 words about the reason for the behavior would be very much appreciated.
 
 I have a partition table which name is test_rank_2014_monthly and it has 7
 partitions inherited from the parent table, each month with one partition. 
 The weird thing is query out of the parent partition is as slow as query
 from a non-partitioned table, however, query from child table directly is
 really fast.
 
 have no idea... is this an expected behavior of partition table in old
 releases?
 
 
 hitwise_uk=# explain analyze select * from test_rank_2014_07 r WHERE r.date = 201407 ;
                                                              QUERY PLAN
 ---------------------------------------------------------------------------------------------------------------------------------
  Seq Scan on test_rank_2014_07 r  (cost=0.00..169797.75 rows=7444220 width=54) (actual time=0.007..1284.622 rows=7444220 loops=1)
    Filter: (date = 201407)
  Total runtime: 1831.379 ms
 (3 rows)
 
 -- query on parent table
 hitwise_uk=# explain analyze select * from test_rank_2014_monthly r WHERE r.date = 201407 ;
                                                                 QUERY PLAN
 --------------------------------------------------------------------------------------------------------------------------------------
  Result  (cost=0.00..169819.88 rows=7444225 width=54) (actual time=0.009..4484.552 rows=7444220 loops=1)
    ->  Append  (cost=0.00..169819.88 rows=7444225 width=54) (actual time=0.008..2495.457 rows=7444220 loops=1)
          ->  Seq Scan on test_rank_2014_monthly r  (cost=0.00..22.12 rows=5 width=54) (actual time=0.000..0.000 rows=0 loops=1)
                Filter: (date = 201407)
          ->  Seq Scan on test_rank_2014_07 r  (cost=0.00..169797.75 rows=7444220 width=54) (actual time=0.007..1406.600 rows=7444220 loops=1)
                Filter: (date = 201407)
  Total runtime: 5036.092 ms
 (7 rows)
 
 --query on non-partitioned table
 hitwise_uk=# explain analyze select * from rank_2014_monthly r WHERE r.date = 201407 ;
                                                               QUERY PLAN
 -----------------------------------------------------------------------------------------------------------------------------------
  Seq Scan on rank_2014_monthly r  (cost=0.00..1042968.85 rows=7424587 width=54) (actual time=3226.983..4537.974 rows=7444220 loops=1)
    Filter: (date = 201407)
  Total runtime: 5086.096 ms
 (3 rows)
 
 
 check constraints on child table is something like below:
 ...
 Check constraints:
 test_rank_2014_07_date_check CHECK (date = 201407)
 Inherits: test_rank_2014_monthly
 
 Thanks,
 Suya

Given that the 2nd and 3rd queries perform about equally, the question is why
the first query performs so much better.  I suspect you are not taking any
care to avoid caching effects, and that is what you are seeing.  It's hard to
know for sure whether you ran the three queries in the order listed... if you
did, that would somewhat negate this theory.

Adding BUFFERS to your EXPLAIN (ANALYZE) would at least give some visibility
into caching effects - but since that option only exists in newer, supported
versions it is not available to you on 8.3.  Still, caching is the most likely
explanation for what you are seeing.

There is some time involved in processing partition constraint exclusion, but
I doubt it accounts for a full 3 seconds...
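
For reference, on a supported release the buffer accounting is requested like
this (just a sketch - the BUFFERS option does not exist in 8.3):

EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM test_rank_2014_monthly r WHERE r.date = 201407;
-- Each node then reports a line such as "Buffers: shared hit=... read=...",
-- where "hit" is pages found in shared_buffers and "read" is pages pulled
-- from the OS page cache or disk.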

David J.








Re: [PERFORM] two table join with order by on both tables attributes

2014-08-07 Thread David G Johnston
Evgeniy Shishkin wrote
 Hello,
 
 suppose you have two very simple tables with fk dependency, by which we
 join them
 and another attribute for sorting
 
 like this
 select * from users join  notifications on users.id=notifications.user_id
 ORDER BY users.priority desc ,notifications.priority desc limit 10;
 
 Very typical web query.
 
 No matter which composite indexes i try, postgresql can not make efficient
 nested loop plan using indexes.
 It chooses all sorts of seq scans and hash joins or merge join and always
 a sort node and then a limit 10.
 
 Neither plan provides acceptable performance. And tables tend to grow =\
 
 Can anybody suggest something or explain this behavior?

Can you explain why a nested loop is best for your data?  Given my
understanding of the cardinality one would expect of a priority column, I
would expect your ORDER BY to be extremely inefficient and not all that
compatible with a nested loop approach.

You can use the various parameters listed on this page to force the desired
plan and then provide EXPLAIN ANALYZE results for the various executed plans
and compare them.

http://www.postgresql.org/docs/9.3/interactive/runtime-config-query.html#RUNTIME-CONFIG-QUERY-ENABLE
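
For example, something along these lines in a throwaway session (the two
settings below are picked only to push the planner away from its current
choice, not as a production recommendation):

BEGIN;
SET LOCAL enable_hashjoin = off;
SET LOCAL enable_mergejoin = off;
EXPLAIN ANALYZE
SELECT *
FROM users
JOIN notifications ON users.id = notifications.user_id
ORDER BY users.priority DESC, notifications.priority DESC
LIMIT 10;
ROLLBACK;  -- the SET LOCAL changes disappear with the transaction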

And now for the obligatory read this link:

https://wiki.postgresql.org/wiki/SlowQueryQuestions

If you can show that in fact the nested loop (or some other plan) performs
better than the one chosen by the planner - and can provide data that the
developers can use to replicate the experiment - then improvements can be
made.  At worst you will come to understand why the planner is right and can
then explore alternative models.

David J.






Re: [PERFORM] Query performing very bad and sometimes good

2014-08-05 Thread David G Johnston
Andreas Joseph Krogh-2 wrote
 Hi all.  Running version:
 
 on=> select version();
                                                    version
 ------------------------------------------------------------------------------------------------------------
  PostgreSQL 9.3.2 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit

9.3.2 is not release-worthy


 Bad:
 Index Scan Backward using origo_email_delivery_received_idx on
 origo_email_delivery del (cost=0.42..1102717.48 rows=354038 width=98)
 (actual time=0.017..309196.670 rows=354296 loops=1)
 
Add 4 new records
 
 Good (-ish):
 Index Scan Backward using origo_email_delivery_received_idx on
 origo_email_delivery del (cost=0.42..1102717.48 rows=354038 width=98)
 (actual time=0.019..2431.773 rows=354300 loops=1)

The plans appear to be basically identical - and the queries/data as well
aside from the addition of 4 more unmatched records.

The difference between the two is likely attributable to system load
variations combined with the effect of caching after running the query the
first (slow) time.

Doing OFFSET/LIMIT pagination can be problematic, so I'd be curious what would
happen if you got rid of it.  In this specific case the result set is only 75
rows with 101 allowed anyway.
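
For what it's worth, the reason OFFSET-based paging tends to hurt as the
offset grows is that the skipped rows are still produced and then discarded.
A purely hypothetical illustration (table and column names invented, not the
query from this thread):

SELECT *
FROM some_table
ORDER BY some_timestamp DESC
LIMIT 101 OFFSET 5000;  -- the first 5000 ordered rows are built only to be thrown away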

The left joins seem to be marginal so I'd toss those out and optimize the
inner joins and, more likely, the correlated subqueries in the select list. 
You need to avoid nested looping over 300,000+ records somehow - though I'm
not going to be that helpful in the actual how part...

Note that in the inner-most loop the actual time for the cached run is half
that of the non-cached run.  While both are quite small (0.002 ms vs 0.004 ms
per loop), the 300,000+ loops do add up.  The same likely applies to the other
plan nodes, but I didn't dig that deep.

David J.






[PERFORM] Re: Slow query with indexed ORDER BY and LIMIT when using OR'd conditions

2014-07-21 Thread David G Johnston
johno wrote
 The question is... why is the query planner unable to make this
 optimization for the slow query? What am I missing?

Short answer - your first and last queries are not relationally equivalent,
and the optimizer cannot change the behavior of the query it is optimizing.
In other words, you did not make an optimization; rather, you chose to
reformulate the question so that it could be answered more easily while still
providing an acceptable answer.

The main question is better phrased as:

Give me 100 rows updated at t(0), but only those that come after a given ID.
If there are fewer than 100 such records, give me enough additional rows
having t > t(0) so that the total number of rows returned is equal to 100.

Both queries give the same answer, but only because of the final LIMIT 100.
They arrive there in different ways, which necessitates generating different
plans.  At a basic level the planner is unable to push a LIMIT down into a
WHERE clause, and it cannot add sub-queries that do not exist in the original
query - which includes adding a UNION node.
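
In SQL terms the reformulated question looks roughly like this (table and
column names are invented, since the original query is not quoted above; :t0
and :last_id stand for the timestamp and id of the last row already seen):

(SELECT * FROM events
  WHERE updated_at = :t0 AND id > :last_id
  ORDER BY updated_at, id
  LIMIT 100)
UNION ALL
(SELECT * FROM events
  WHERE updated_at > :t0
  ORDER BY updated_at, id
  LIMIT 100)
ORDER BY updated_at, id
LIMIT 100;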

David J.





