[ Sorry, forgot to cc list ]
>> It is said to be 10%. I would like to raise that, because we are getting bad
>> estimates for n_distinct.
>
> More to the point, the estimator we use is going to be biased for many
> (probably most) distributions, no matter how large your sample size is.
>
> If
On 6/10/11 5:15 AM, Willy-Bas Loos wrote:
> Hi,
>
> is there a way to change the sample size for statistics (that analyze
> gathers)?
> It is said to be 10%. I would like to raise that, because we are getting bad
> estimates for n_distinct.
It's not 10%. We use a fixed sample size, which is configurable via the
statistics target: ANALYZE samples roughly 300 * the statistics target rows,
regardless of table size.
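For reference, the knobs involved look like this (a sketch using standard PostgreSQL DDL; `mytable` and `mycol` are placeholder names, and the values shown are arbitrary):

```sql
-- Raise the per-column statistics target (default 100, max 10000);
-- ANALYZE will then sample more rows and keep a larger MCV list/histogram.
ALTER TABLE mytable ALTER COLUMN mycol SET STATISTICS 1000;

-- If n_distinct is still badly estimated, it can be overridden outright
-- (PostgreSQL 9.0+; a negative value means a fraction of the row count):
ALTER TABLE mytable ALTER COLUMN mycol SET (n_distinct = 50000);

-- Re-gather statistics with the new settings:
ANALYZE mytable;
```

Setting default_statistics_target in postgresql.conf raises the target for every column at once instead of per-column.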
OK, I think I found a possible bottleneck.
The function that does some selects runs really fast, more than 1,000
executions per second.
But the whole thing slows down when an update of one record in a very, very
small table happens.
I tested with an insert instead of the update, and the same behavior occurred.
So, the only wa
On 06/10/2011 07:29 AM, Anibal David Acosta wrote:
When 1 client is connected, Postgres does 180 executions per second
With 2 clients connected, Postgres does 110 executions per second
With 3 clients connected, Postgres does 90 executions per second
Finally, with 6 clients connected, Postgres does 60 executions per second
When 1 client is connected, Postgres does 180 executions per second
This is suspiciously close to 10,000 executions per minute.
Have you got 10k RPM disks?
How is your I/O system set up?
Try setting synchronous_commit to off in postgresql.conf and see if that
changes the results. That'll give useful information.
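For the record, the suggested setting can be tried per-session before committing it to postgresql.conf. The trade-off: with synchronous_commit off, a server crash can lose the last few transactions, though the database stays consistent.

```sql
-- Session-level test, no server restart needed:
SET synchronous_commit = off;

-- If it helps, make it global in postgresql.conf:
-- synchronous_commit = off
```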
On Fri, Jun 10, 2011 at 1:22 PM, wrote:
>> If I had set the primary key to (diag_id, create_time), would simple
>> queries on diag_id still work well, i.e.
>> select * from tdiag where diag_id = 1234;
>
> Yes. IIRC the performance penalty for using a non-leading column of an index
> is negligible.
Excellent.
Thanks, I'll buy and read that book :)
Thanks!
-----Original Message-----
From: Craig Ringer [mailto:cr...@postnewspapers.com.au]
Sent: Friday, June 10, 2011 09:13 a.m.
To: Anibal David Acosta
CC: t...@fuzzy.cz; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] ho
On 06/10/2011 07:29 PM, Anibal David Acosta wrote:
I know that with this information you can figure some things out, but under
normal conditions, is it normal for per-connection performance to degrade as
connections are added?
With most loads, you will find that the throughput per-worker drops as more
workers are added, even though total throughput keeps rising until the
hardware saturates.
The version is Postgres 9.0.
Yes, I set up postgresql.conf according to the instructions in
http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
Cool, I will check this:
http://wiki.postgresql.org/wiki/Logging_Difficult_Queries
Looks like a great starting point for finding the bottleneck.
But so,
On 06/10/2011 03:39 PM, bakkiya wrote:
http://postgresql.1045698.n5.nabble.com/file/n4475458/untitled.bmp
404 file not found.
That's ... not overly useful.
Again, *PLEASE* read
http://wiki.postgresql.org/wiki/Guide_to_reporting_problems
and try posting again with enough information that someone can actually help.
Hi,
is there a way to change the sample size for statistics (that analyze
gathers)?
It is said to be 10%. I would like to raise that, because we are getting bad
estimates for n_distinct.
Cheers,
WBL
--
"Patriotism is the conviction that your country is superior to all others
because you were born in it."
I have a function in PL/pgSQL; this function does some selects on some tables
to verify some conditions and then does one insert into a table with NO
index. No updates are performed in the function.
When 1 client is connected, Postgres does 180 executions per second
With 2 clients connected, Postgres does 110 executions per second
> If I had set the primary key to (diag_id, create_time), would simple
> queries on diag_id still work well, i.e.
> select * from tdiag where diag_id = 1234;
Yes. IIRC the performance penalty for using a non-leading column of an index
is negligible. But why don't you try that on your own - just
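A quick way to try it yourself (a sketch mirroring the thread's example; the column types are guesses, since the full table definition isn't shown):

```sql
CREATE TABLE tdiag (
    diag_id     integer,
    create_time timestamptz NOT NULL,
    PRIMARY KEY (diag_id, create_time)
);

-- diag_id is the *leading* column of the composite primary key, so a lookup
-- on diag_id alone can use the index directly:
EXPLAIN SELECT * FROM tdiag WHERE diag_id = 1234;
-- With enough rows in the table, this should show an index scan on tdiag_pkey
-- with "Index Cond: (diag_id = 1234)".
```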
On Thu, Jun 9, 2011 at 7:44 PM, Greg Smith wrote:
> On 06/09/2011 07:43 AM, Willy-Bas Loos wrote:
>
> Well, after reading your article I have been reading some material about it
> on the internet, stating that separating indexes from data for performance
> benefits is a myth.
> I found your
On Wed, Jun 8, 2011 at 07:19, bakkiya wrote:
> We have a PostgreSQL 8.3.8 DB which consumes 100% of the CPU whenever we run
> any query. We got vmstat output. Machine details are below:
Any query? Does even "SELECT 1" not work? Or "SELECT * FROM sometable LIMIT 1"?
Or are you having problems with
On Wednesday 08 June 2011 19:47, t...@fuzzy.cz wrote:
> Have you tried to create a composite index on those two columns? Not sure
> if that helps but I'd try that.
>
> Tomas
This finally works well enough:
CREATE TABLE tdiag (
diag_id integer DEFAULT nextval('diag_id_seq'::text),
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/100-CPU-Utilization-when-we-run-queries-tp4465765p4475458.html
Sent from the PostgreSQL - performance mailing list archive at Nabble.com.