According to Tom Lane <[EMAIL PROTECTED]>:
> [EMAIL PROTECTED] writes:
> > In fact, I don't know how to have explain plan of remote node.
>
> You send it an EXPLAIN.
Please, could you tell me what to put at the end of this query:
select * from dblink('my_connexion', 'EXPLAIN select * from test where
number
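To make the question concrete, a complete call might look roughly like the sketch below. It assumes the 'my_connexion' connection is already open (via dblink_connect), and the filter value is only a placeholder for the truncated query above; the key point is that dblink returns a record set, so the EXPLAIN output has to be given a column-definition list with a single text column:

-- fetch the remote plan through dblink; EXPLAIN produces one text column,
-- which must be named and typed in the AS clause
SELECT plan
FROM dblink('my_connexion',
            'EXPLAIN SELECT * FROM test WHERE number < 100')
     AS remote_plan(plan text);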
Hello,
I want to split a partitioned table across two PostgreSQL servers (two hosts).
To query this remote object, I want to build a view that does a UNION over the
two servers through two dblink calls, along the lines of the sketch below.
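All names in this sketch are placeholders (connections 'conn_a' and 'conn_b', a remote table test partitioned on an id column); it only illustrates the idea of one dblink call per host combined with UNION ALL:

-- hypothetical example: one dblink call per host, combined with UNION ALL;
-- each remote query carries its own partition predicate, so each server
-- only scans its local partition
CREATE VIEW test_all AS
    SELECT *
    FROM dblink('conn_a', 'SELECT id, val FROM test WHERE id < 1000000')
         AS a(id integer, val text)
    UNION ALL
    SELECT *
    FROM dblink('conn_b', 'SELECT id, val FROM test WHERE id >= 1000000')
         AS b(id integer, val text);

Note that the SQL string passed to dblink is fixed text, so a WHERE clause applied to the view locally is not pushed down to the remote servers, which is exactly why the plan chosen on the remote node has to be checked separately.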
But how can I be sure that the optimizer plan on the remote node is the same as
on the local node (i.e. that the optimizer scans only the selected partition
Here is the result with hashagg set to false:
Nested Loop (cost=2487858.08..2490896.58 rows=1001 width=34) (actual time=1028044.781..1030251.260 rows=1000 loops=1)
 -> Subquery Scan "day" (cost=2487858.08..2487870.58 rows=1000 width=16) (actual time=1027996.748..1028000.969 rows=1000 loops=1)
You're right, the release is 7.4.7.
There are twenty million "query" records.
> On Tue, 2006-02-14 at 11:36, Tom Lane wrote:
> > [EMAIL PROTECTED] writes:
> > > Yes, I ran the ANALYZE command before sending the query.
> > > Note that the Postgres version is 7.3.4.
> >
> > Can't possibly be 7.3.4, t
Good morning,
I am trying to understand when the optimizer uses HashAggregate instead of
GroupAggregate, and I would like to know exactly what these two operations are
(benefits/drawbacks).
In my case, I get this explain plan.
---
Nested Loop (cost=2451676.23..2454714.73 rows=1001 width=
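One way to compare the two is to plan the same aggregate twice, once with hash aggregation allowed and once with it disabled; roughly speaking, HashAggregate builds an in-memory hash table with one entry per group and needs no sorted input, while GroupAggregate expects its input sorted on the GROUP BY keys and aggregates each group as it streams past. A sketch using the aggregate part of the query quoted elsewhere in this thread:

-- plan with the default settings (HashAggregate may be chosen if the
-- planner thinks the hash table of groups fits in sort_mem)
EXPLAIN
    SELECT SUM(occurence) AS ocu, query
    FROM daily.queries_detail_statistics
    WHERE date >= '2006-01-01' AND date <= '2006-01-30'
      AND portal IN (1, 2)
    GROUP BY query;

-- plan again with hash aggregation disabled, forcing Sort + GroupAggregate
SET enable_hashagg TO off;
EXPLAIN
    SELECT SUM(occurence) AS ocu, query
    FROM daily.queries_detail_statistics
    WHERE date >= '2006-01-01' AND date <= '2006-01-30'
      AND portal IN (1, 2)
    GROUP BY query;
RESET enable_hashagg;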
Good morning,
I have increased sort_mem all the way up to 2 GB,
and the "out of memory" error still appears.
Here is the query I am trying to run, with its explain plan:
Nested Loop (cost=2451676.23..2454714.73 rows=1001 width=34)
-> Subquery Scan "day" (cost=2451676.23..2451688.73 rows=1000 width=16)
The EXPLAIN ANALYZE command crashes with the "out of memory" error.
Note that I have tried many different values for the shared_buffers and
sort_mem parameters.
Right now, the values in the config file are:
sort_mem=32768
and shared_buffers=3
The server has 4 GB of RAM,
and kernel.shmmax=30720
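For reference, in the 7.x releases these settings use different units, so a postgresql.conf fragment in the spirit of the values above would look something like this (the numbers below are only illustrative, not a recommendation):

# postgresql.conf (7.x series) -- illustrative values only
sort_mem = 32768          # per-sort / per-hash memory, in kilobytes (32 MB)
shared_buffers = 32768    # counted in 8 kB buffers, so about 256 MB here

# kernel.shmmax is set on the operating-system side (in bytes, e.g. via
# sysctl) and must be large enough to hold the shared_buffers segment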
> On Tue, 2006-02-14 at 10
Yes, I ran the ANALYZE command before sending the query.
Note that the Postgres version is 7.3.4.
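Since the exact release matters here (hash aggregation only exists from 7.4 onwards), the server can simply be asked directly, for example:

-- report the exact version string of the server this session is connected to
SELECT version();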
> On Tue, 2006-02-14 at 10:03, [EMAIL PROTECTED] wrote:
> > Thanks for your response,
>
> SNIP
>
> > if the HashAgg operation ran out of memory, what can I do?
>
> 1: Don't top post.
>
> 2: Have you
Thanks for your response,
I have run this query:
SELECT query_string, DAY.ocu from search_data.query_string,
(SELECT SUM(occurence) as ocu, query
FROM daily.queries_detail_statistics
WHERE date >= '2006-01-01' AND date <= '2006-01-30'
AND portal IN (1,2)
GROUP BY query
ORDER BY ocu DESC
L
Hello,
I get an "out of memory" error with these traces:
TopMemoryContext: 32768 total in 3 blocks; 5152 free (1 chunks); 27616 used
TopTransactionContext: 8192 total in 1 blocks; 8136 free (0 chunks); 56 used
DeferredTriggerXact: 0 total in 0 blocks; 0 free (0 chunks); 0 used
MessageContext: 24576