Ivan Voras wrote:
> It looks like a hack
Not to everyone. In the referenced section, Hellerstein,
Stonebraker and Hamilton say:
"any good multi-user system has an admission control policy"
In the case of PostgreSQL I understand the counter-argument,
although I'm inclined to think that it'
On 11/22/10 16:26, Kevin Grittner wrote:
> Ivan Voras wrote:
>> On 11/22/10 02:47, Kevin Grittner wrote:
>>> Ivan Voras wrote:
>>>> After 16 clients (which is still good since there are only 12
>>>> "real" cores in the system), the performance drops sharply
>>> Yet another data point to confirm the importance of connection pooling.
Ivan Voras wrote:
> On 11/22/10 02:47, Kevin Grittner wrote:
>> Ivan Voras wrote:
>>
>>> After 16 clients (which is still good since there are only 12
>>> "real" cores in the system), the performance drops sharply
>>
>> Yet another data point to confirm the importance of connection
>> pooling. :
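As an aside on the pooling point (not from the original thread): a rough way
to see how many backends are actually busy at once on a 9.0-era server is to
query pg_stat_activity, where idle sessions report '<IDLE>' as their current
query:

    SELECT count(*) AS total_backends,
           sum(CASE WHEN current_query <> '<IDLE>' THEN 1 ELSE 0 END) AS active_backends
    FROM pg_stat_activity;

If active_backends routinely exceeds the core count, a pooler such as
pgbouncer that caps concurrent connections is usually the first thing to try.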
Martin Boese wrote:
> The table has only ~1400 rows. A count(*) takes more than 70
> seconds. Other tables are fast as usual.
>
> When this happens I can also see my system's disks are suffering.
> 'systat -vm' shows 100% disk load at ~4MB/sec data rates.
>
> A simple VACUUM does *not* fix it
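A sketch of how one might check whether such a table is bloated (the table
name below is a placeholder, not from the original report): compare the page
count and row estimate in pg_class with the dead-tuple counters in
pg_stat_user_tables:

    SELECT relname, relpages, reltuples
    FROM pg_class
    WHERE relname = 'slow_table';

    SELECT n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
    FROM pg_stat_user_tables
    WHERE relname = 'slow_table';

If relpages is huge for ~1400 rows, a plain VACUUM only marks the dead space
as reusable; rewriting the table with VACUUM FULL or CLUSTER is what actually
shrinks it.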
> I believe you can set work_mem to a different value just for the duration
> of a single query, so you needn't have work_mem set so high for every
> query on the system. A single query may well use a multiple of work_mem,
> so you really probably don't want it that high all the time unless
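For example (the value here is only illustrative), work_mem can be raised for
a single transaction with SET LOCAL, so the server-wide default can stay
modest:

    BEGIN;
    SET LOCAL work_mem = '512MB';  -- reverts automatically at COMMIT/ROLLBACK
    -- run the one expensive query here
    COMMIT;

A plain SET work_mem = '512MB'; does the same for the rest of the session.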
On Sun, Nov 21, 2010 at 10:21 PM, Humair Mohammed wrote:
>
> Correct, the optimizer did not take the settings with the pg_ctl reload
> command. I did a pg_ctl restart and work_mem now displays the updated
> value. I had to bump up all the way to 2047 MB to get the response below
> (with work_mem at 1024 MB I see 7 seconds response time) and with 2047 M
Hi Ivan,
We have the same issue on our database machines (2 x 6-core Intel(R)
Xeon(R) X5670 CPUs @ 2.93GHz, 24 logical cores, 144 GB of RAM) -- they
run RHEL 5. The issue occurs with our normal OLTP
workload, so it's not just pgbench.
We use pgbouncer to limit total connections to 15 (thi
Hi,
I am using PostgreSQL 9.0.1 with PostGIS 1.5 on FreeBSD 7.0. I have at
least one table on which SELECTs turn terribly slow from time to time.
This happened at least three times, also on version 8.4.
The table has only ~1400 rows. A count(*) takes more than 70 seconds.
Other tables are fast as usual.
Correct, the optimizer did not take the settings with the pg_ctl reload
command. I did a pg_ctl restart and work_mem now displays the updated value. I
had to bump up all the way to 2047 MB to get the response below (with work_mem
at 1024 MB I see 7 seconds response time) and with 2047 MB (whic
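One way to double-check this kind of thing (not from the original message):
after editing postgresql.conf, reload and then look at the value from a fresh
session:

    SELECT pg_reload_conf();
    SHOW work_mem;

work_mem is a per-session setting, so a changed default is normally picked up
without a full restart; if SHOW still reports the old value, something else
(for example a per-role or per-database SET) may be overriding it.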
That was a typo:
work_mem = 2GB
shared_buffers = 2GB
> From: pavel.steh...@gmail.com
> Date: Sun, 21 Nov 2010 12:38:43 +0100
> Subject: Re: [PERFORM] Query Performance SQL Server vs. Postgresql
> To: huma...@hotmail.com
> CC: pgsql-performance@postgresql.org
>
> 2010/11/21 Humair Mohammed :
> >
>
Thanks for your information. I am using PostgreSQL 8.4, and this
version should already support HOT. The frequently updated columns are
not indexed columns, so the frequent updates should not create many
dead records. I also did a small test. If I don't execute vacuum, the
number of pages of
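A small sketch of how to check that assumption (the table name is a
placeholder): pg_stat_user_tables separates ordinary updates from HOT updates
and tracks dead tuples, so one can see whether the updates are really being
done HOT:

    SELECT relname, n_tup_upd, n_tup_hot_upd, n_dead_tup
    FROM pg_stat_user_tables
    WHERE relname = 'big_table';

Note that even HOT updates leave dead row versions behind; what they avoid is
creating new index entries, and the dead versions can often be pruned within
the page without a full vacuum.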
In my experiment, the analyze operation on the big table takes about
1-3 minutes, depending on the value of vacuum_cost_delay. I am not
surprised, because this table is a really big one (it now has over
200M records).
However, most of my concern is about the behavior of analyze/vacuum.
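For what it's worth, on 8.4 the cost delay can be tuned for a single manual
run, or per table, instead of globally (the table name and values below are
only illustrative):

    -- speed up one manual run by disabling the cost delay for this session
    SET vacuum_cost_delay = 0;
    ANALYZE big_table;

    -- or give the big table its own autovacuum/analyze settings
    ALTER TABLE big_table SET (
        autovacuum_vacuum_cost_delay = 10,
        autovacuum_analyze_scale_factor = 0.02
    );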