On Wed, Jul 27, 2011 at 5:11 AM, shailesh singh wrote:
> I want to configure logging of Postgres in such a way that messages of
> different severity should be logged in different log files, e.g. all ERROR
> messages should be written to the error-msg.log file while all NOTICE messages
> should be written in
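Stock PostgreSQL writes all severities to a single log, so splitting by severity is usually done either by logging to syslog and letting the syslog daemon route messages by priority, or by switching to csvlog and filtering on the error_severity column afterwards. A minimal sketch of the syslog route follows; the facility name and the ALTER SYSTEM form are assumptions (on older releases the same settings go in postgresql.conf):

    ALTER SYSTEM SET log_destination = 'syslog';
    ALTER SYSTEM SET syslog_facility = 'LOCAL0';
    SELECT pg_reload_conf();
    -- Then configure the syslog daemon (rsyslog, syslog-ng, ...) to write the
    -- LOCAL0 facility to per-priority files. Note that PostgreSQL severities
    -- are mapped onto syslog priorities, so the mapping is not one-to-one.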
Mark Kirkwood writes:
> [ assorted examples showing that commit
> 7f3eba30c9d622d1981b1368f2d79ba0999cdff2 has got problems ]
Thanks for the test cases. After playing with these for a bit I believe
I've figured out the error in my previous thinking. Clamping the
ndistinct value like that can
On August 31, 2011 11:56:56 AM Andy Colson wrote:
> On 8/31/2011 1:51 PM, Alan Hodgson wrote:
> > On August 31, 2011 11:26:57 AM Andy Colson wrote:
> >> When you ran it, did it really feel like 30 seconds? Or did it come
> >> right back real quick?
> >>
> >> Because your report says:
> >> > 35.833 ms
"Kai Otto" wrote:
> Time taken:
>
> 35.833 ms (i.e. roughly 35 seconds)
Which is it? 35 ms or 35 seconds?
> Number of rows:
>
> 121830
>
> Number of columns:
>
> 38
> This is extremely slow for a database server.
>
> Can anyone help me in finding the problem?
> "Seq Scan on "Frame"
On 8/31/2011 1:51 PM, Alan Hodgson wrote:
On August 31, 2011 11:26:57 AM Andy Colson wrote:
When you ran it, did it really feel like 30 seconds? Or did it come
right back real quick?
Because your report says:
> 35.833 ms
That's ms, or milliseconds, or 0.035 seconds.
I think the "." is a thousands separator in some locale.
On August 31, 2011 11:26:57 AM Andy Colson wrote:
> When you ran it, did it really feel like 30 seconds? Or did it come
> right back real quick?
>
> Because your report says:
> > 35.833 ms
>
> That's ms, or milliseconds, or 0.035 seconds.
>
I think the "." is a thousands separator in some locale.
When you ran it, did it really feel like 30 seconds? Or did it come
right back real quick?
Because your report says:
> 35.833 ms
That's ms, or milliseconds, or 0.035 seconds.
-Andy
On 8/31/2011 8:04 AM, Kai Otto wrote:
Hi all,
I am running a simple query:
SELECT * FROM public."Frame"
Time taken:
Hi all,
I am running a simple query:
SELECT * FROM public."Frame"
Time taken:
35.833 ms (i.e. roughly 35 seconds)
Number of rows:
121830
Number of columns:
38
This is extremely slow for a database server.
Can anyone help me in finding the problem?
Thanks,
KOtto
Client
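The replies below suggest the 35.833 figure is milliseconds, not seconds. One way to tell where the time actually goes is to compare the server-side execution time with the client-side wall-clock time, since transferring and rendering 121830 rows of 38 columns in a client application can easily dominate. A rough sketch in psql:

    -- server-side time only: the query runs, but rows are not returned to the client
    EXPLAIN ANALYZE SELECT * FROM public."Frame";

    -- client-side wall-clock time, including transfer and rendering
    \timing
    SELECT * FROM public."Frame";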
On 31 August 2011, 15:59, Andy Colson wrote:
> I assume:
> Buckets: 16384 Batches: 1 Memory Usage: 4531kB
>
> That means a total of 4.5 meg of ram was used for the hash, so if my
> work_mem was lower than that it would swap? (or choose a different plan?)
Why don't you try that? Just set the work_mem
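A minimal sketch of that experiment; the table and join here are placeholders, since the query behind that hash node isn't shown in this snippet. work_mem is a per-session setting, so nothing needs a restart, and with it set below the 4531kB the hash reported, EXPLAIN ANALYZE should show the hash going to multiple batches or the planner picking a different join:

    SET work_mem = '2MB';             -- below the 4531kB the hash used
    EXPLAIN ANALYZE
    SELECT *
    FROM big_table b                  -- placeholder table names
    JOIN other_table o ON o.id = b.other_id;
    RESET work_mem;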
On 8/30/2011 8:33 PM, Craig Ringer wrote:
On 31/08/2011 4:30 AM, Andy Colson wrote:
Hi all,
I have read things someplace saying not exists was better than not
in... or something like that. Not sure if that was for in/exists and
not in/not exists, and for a lot of records or not.
`EXISTS' may perform faster than `IN', yes.
Jayadevan M wrote:
>> And the schema of the tables involved, and any indexes on them.
> The details of the tables and indexes may take a bit of effort to
> explain. Will do that.
In psql you can do \d to get a decent summary.
Without seeing the query and the table definitions, it's hard to say what's wrong.
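For reference, a quick way to capture that summary in psql; memmst is only a guess here, taken from the sort key in the plan fragment quoted below:

    \d memmst          -- columns, types, and indexes of the table
    \di+ memmst*       -- matching indexes with their sizes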
>
> A really interesting part is the sort near the bottom -
>
> -> Sort (cost=1895.95..1896.49 rows=215 width=61) (actual
> time=25.926..711784.723 rows=2673340321 loops=1)
> Sort Key: memmst.memshpsta
> Sort Method: quicksort Memory: 206kB
> -> Nested Loop (cost=0.01..1887.62 r
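The striking part of that fragment is the row count: the node was estimated at 215 rows but actually produced 2,673,340,321, and the sort plus the nested loop beneath it account for roughly 712 seconds of the ~20-minute runtime. A quick, hedged way to check whether one side of the join is fanning out; memmst.memshpsta is taken from the sort key above, and the real join key may well be a different column:

    SELECT memshpsta, count(*)
    FROM memmst
    GROUP BY memshpsta
    ORDER BY count(*) DESC
    LIMIT 10;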
On 31 August 2011, 13:19, Jayadevan M wrote:
> Hello,
>
>> >
>> > Please run EXPLAIN ANALYZE on the query and post that, it's hard to say
>> > what's wrong from just the query plan, without knowing where the time is
>> > actually spent.
>> Here is the explain analyze
>> http://explain.depesz.com
Missed looping in the community earlier...
On Wed, Aug 31, 2011 at 5:01 PM, Venkat Balaji wrote:
> Could you let us know the tables and columns on which indexes are built?
>
> The query is performing sorting based on the key upper(column), and that is
> where I believe the cost is high.
>
> The 'upper' functio
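If the expensive key really is upper(column), an expression index is the usual remedy. A sketch with placeholder names, since the actual table and column are not shown in this snippet:

    CREATE INDEX member_upper_name_idx ON member (upper(member_name));
    -- The planner can use this only when the query uses the same expression,
    -- e.g. WHERE upper(member_name) = 'SMITH' or ORDER BY upper(member_name).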
Hello,
> >
> > Please run EXPLAIN ANALYZE on the query and post that, it's hard to say
> > what's wrong from just the query plan, without knowing where the time is
> > actually spent.
> Here is the explain analyze
> http://explain.depesz.com/s/MY1
Going through the url tells me that statis
Hello,
>
> Please run EXPLAIN ANALYZE on the query and post that, it's hard to say
> what's wrong from just the query plan, without knowing where the time is
> actually spent.
Here is the explain analyze
http://explain.depesz.com/s/MY1
Regards,
Jayadevan
Hello,
> Please run EXPLAIN ANALYZE on the query and post that, it's hard to say
> what's wrong from just the query plan, without knowing where the time is
> actually spent. And the schema of the tables involved, and any indexes
> on them. (see also http://wiki.postgresql.org/wiki/SlowQueryQue
Where is the query? And also paste the \d to show the tables and
indexes.
-Sushant.
On Wed, 2011-08-31 at 14:30 +0530, Jayadevan M wrote:
> Hello all,
> I have a query which takes about 20 minutes to execute and retrieves
> 2000-odd records. The explain for the query is pasted here
> http://exp
On 31.08.2011 12:00, Jayadevan M wrote:
Hello all,
I have a query which takes about 20 minutes to execute and retrieves
2000-odd records. The explain for the query is pasted here
http://explain.depesz.com/s/52f
The same query, with similar data structures/indexes and data comes back
in 50 seconds
Hello all,
I have a query which takes about 20 minutes to execute and retrieves
2000-odd records. The explain for the query is pasted here
http://explain.depesz.com/s/52f
The same query, with similar data structures/indexes and data comes back
in 50 seconds in Oracle. We just ported the product t
On 31/08/2011 4:30 AM, Andy Colson wrote:
Hi all,
I have read things someplace saying not exists was better than not
in... or something like that. Not sure if that was for in/exists and
not in/not exists, and for a lot of records or not.
`EXISTS' may perform faster than `IN', yes. Using `IN
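For reference, the two forms being compared, using placeholder tables a and b. NOT IN returns unknown as soon as the subquery produces a NULL, and the planner often cannot turn it into an anti-join; NOT EXISTS is NULL-safe and is readily planned as an anti-join:

    -- rows of a with no matching row in b, NOT IN form
    SELECT a.id FROM a WHERE a.id NOT IN (SELECT b.a_id FROM b);

    -- same intent, NOT EXISTS form
    SELECT a.id
    FROM a
    WHERE NOT EXISTS (SELECT 1 FROM b WHERE b.a_id = a.id);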