Hi!
> The double buffering itself does not slow anything down.
That is more or less what I was beginning to suspect. Double buffering may not
matter in our case, because the whole server is dedicated to PostgreSQL.
In our case we can, for example, reserve almost all of the memory for
PostgreSQL (shared_buffers etc.).
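The "reserve almost all memory" idea above can be sketched as postgresql.conf settings. The numbers below are illustrative assumptions for a hypothetical dedicated 64 GB server, not values taken from this thread:

```
# postgresql.conf -- illustrative values for a dedicated 64 GB box
shared_buffers = 16GB          # a common starting point is ~25% of RAM
effective_cache_size = 48GB    # hint to the planner about OS cache size
```

Note that even on a dedicated machine, shared_buffers is usually not set to all of RAM, since the OS page cache still serves reads.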
On Thu, Sep 27, 2018 at 04:50:36PM -0700, Vladimir Ryabtsev wrote:
> Additionally, I think author can try CREATE STATISTICS on the bunch of
> columns used in join. Very low rows estimate for this join may come from
> multiplying selectivities for each column assuming they are independent.
MV statistics
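As a sketch of the CREATE STATISTICS suggestion above (table and column names are hypothetical; extended statistics require PostgreSQL 10+):

```sql
-- Extended statistics so the planner stops multiplying per-column
-- selectivities as if the join columns were independent.
CREATE STATISTICS s_product_dep (dependencies)
    ON product_group_name, product_node_name
    FROM product_nodes;

-- Populate the new statistics object.
ANALYZE product_nodes;
```

After this, joins or filters that combine the two columns can use the functional-dependency statistics instead of the independence assumption.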
> It did --- see the Gather node.
But "Workers Launched: 1"...
In my opinion, such a dramatic difference cannot be explained by the lack of
parallelism alone; the query was simply stuck in a very inefficient plan (even
though almost all source data is read from cache).
Vladimir Ryabtsev writes:
>> The reason for the plan shape difference is probably that the bare SELECT
>> is allowed to use parallelism while INSERT/SELECT isn't.
> In case parallelism is used, should it report in the plan as something like
> "workers planned: N"?
It did --- see the Gather node.
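For reference, a parallel plan reports its worker counts on the Gather node. The plan text below is only an illustration of the shape of that output, not a plan from this thread's query:

```sql
EXPLAIN (ANALYZE) SELECT count(*) FROM big_table;
-- Gather
--   Workers Planned: 2
--   Workers Launched: 1   -- fewer launched than planned is possible
--                         -- when worker slots are unavailable
--   ->  Partial Aggregate
--       ...
```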
> The reason for the plan shape difference is probably that the bare SELECT
> is allowed to use parallelism while INSERT/SELECT isn't.
In case parallelism is used, should it report in the plan as something like
"workers planned: N"?
Vlad
"As a workaround/test, you could maybe add an expression index
ON( (vw2.product_group_name ||'.'|| vw2.product_node_name) )"
Unfortunately, vw2 is a view, but I had a similar thought. I'm looking
into splitting i.product_node_name into separate columns though, thanks!
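The suggested workaround would look roughly like this. The index has to be created on the base table behind the view (an index cannot be created on a plain view); the table name here is hypothetical:

```sql
-- Hypothetical: index the concatenation used in the join condition,
-- on the base table that vw2 selects from.
CREATE INDEX product_combo_idx
    ON product_nodes ( (product_group_name || '.' || product_node_name) );
```

For the planner to use it, the query's join condition must use the same concatenation expression.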
On Thu, Sep 27, 2018 at 03:37:57PM -0400, Arjun Ranade wrote:
> Yes, that join is concerning (red text below). The conditions all need to
> be checked so they are independent.
You can play with the join conditions to see which test is getting such a bad
estimate, or if it's a combination of tests
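One way to "play with the join conditions" is to EXPLAIN each predicate on its own and compare the estimated row count against the actual one. The tables and columns below are placeholders:

```sql
-- Check one condition at a time; a large gap between estimated and
-- actual rows points at the predicate with the bad selectivity.
EXPLAIN ANALYZE
SELECT *
FROM a
JOIN vw2 ON a.product_group = vw2.product_group_name;
```

Repeating this for each condition, then for pairs of conditions, shows whether one test or a combination is responsible for the low estimate.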
Yes, that join is concerning (red text below). The conditions all need to
be checked so they are independent.
The query (with consistent obfuscation) is below :
select distinct
    a.sale_id
  , a.test_date
  , a.product_id as original_product_id
  , vw2.product_id
  , a.volume as volume
On Thu, Sep 27, 2018 at 01:08:05PM -0400, Arjun Ranade wrote:
> When I look at the EXPLAIN ANALYZE output, it seems that it's using a
> drastically different query plan for the INSERT+SELECT than SELECT by
> itself.
The fast, SELECT plan is using parallel query, which isn't available for
INSERT+SELECT.
Hi Tom,
Thank you for your suggestions. I tried increasing from_collapse_limit and
join_collapse_limit to 16 in a specific session, and that significantly
improved my query performance (it takes < 2s now). My instinct is to
increase this globally, but I'm sure there are some drawbacks to doing so.
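The session-level change described above, and its global equivalent, might look like this. The trade-off with raising these limits is longer planning time for queries with many joins, since the planner searches a larger space of join orders:

```sql
-- Per session (what was tested in this thread):
SET from_collapse_limit = 16;
SET join_collapse_limit = 16;

-- Globally, for all future sessions (requires superuser):
ALTER SYSTEM SET from_collapse_limit = 16;
ALTER SYSTEM SET join_collapse_limit = 16;
SELECT pg_reload_conf();
```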
Arjun Ranade writes:
> I have a strange performance situation that I cannot resolve with my usual
> process.
> I have a SELECT statement that completes in about 12 seconds for the full
> result (~1100 rows).
> If I create an empty table first, and then INSERT with the SELECT query, it
> takes 6.5 minutes.
I have a strange performance situation that I cannot resolve with my usual
process.
I have a SELECT statement that completes in about 12 seconds for the full
result (~1100 rows).
If I create an empty table first, and then INSERT with the SELECT query, it
takes 6.5 minutes.
When I look at the EXPLAIN ANALYZE output, it seems that it's using a
drastically different query plan for the INSERT+SELECT than SELECT by itself.
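A minimal way to reproduce this comparison side by side (table and query are placeholders, not the ones from this thread):

```sql
-- Bare SELECT: eligible for a parallel plan (look for a Gather node).
EXPLAIN (ANALYZE, BUFFERS)
SELECT x FROM src;

-- Same query wrapped in INSERT: parallel query is not used for
-- INSERT ... SELECT in the PostgreSQL versions current at the time
-- of this thread, so the plan can differ drastically.
CREATE TABLE dst (x int);
EXPLAIN (ANALYZE, BUFFERS)
INSERT INTO dst SELECT x FROM src;
```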
On 09/26/2018 07:15 PM, Vladimir Ryabtsev wrote:
>> Since you have a very big toast table, given you are using spinning
>> disks, I think that increasing the block size will bring benefits.
> But will it worsen caching? I will have fewer slots in cache. Also, will
> it affect required storage space?