On 10/10/2011 12:14 PM, Leonardo Francalanci wrote:
database makes the fsync call, and suddenly the OS wants to flush 2-6GB of data
straight to disk. Without that background trickle, you now have a flood that
only the highest-end disk controller or a backing-store full of SSDs or PCIe
NVRAM could ever hope to absorb.
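To put rough numbers on that flood (the throughput figure below is an assumption about a typical mid-range controller, not something measured in this thread):

  2-6 GB of dirty data dumped at fsync time
  at ~200-300 MB/s sustained sequential write  ->  roughly 7-30 seconds of solid I/O
  mostly-random 8 kB page writes go far slower, so the stall can stretch to minutes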
On 10/10/2011 01:31 PM, alexandre - aldeia digital wrote:
I dropped checkpoint_timeout to 1min and turned on log_checkpoints:
<2011-10-10 14:18:48 BRT >LOG: checkpoint complete: wrote 6885
buffers (1.1%); 0 transaction log file(s) added, 0 removed, 1
recycled; write=29.862 s, sync=28.466 s, total=58.651 s
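For anyone reproducing that test, the line above is log_checkpoints output; a minimal postgresql.conf sketch of the experiment (the completion-target line is the usual companion knob, an assumption rather than something stated in the thread):

  log_checkpoints = on                 # one summary line per checkpoint, as quoted above
  checkpoint_timeout = 1min            # deliberately very low, only to observe the behaviour
  checkpoint_completion_target = 0.9   # spread the write phase over the interval (assumption)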
On Tue, Oct 11, 2011 at 12:02 AM, Samuel Gendler
wrote:
> The original question doesn't actually say that performance has gone down,
> only that cpu utilization has gone up. Presumably, with lots more RAM, it is
> blocking on I/O a lot less, so it isn't necessarily surprising that CPU
utilization has gone up.
On Mon, Oct 10, 2011 at 1:52 PM, Kevin Grittner wrote:
> alexandre - aldeia digital wrote:
>
> > I came to the list to see if anyone else has experienced the same
> > problem
>
> A high load average or low idle CPU isn't a problem, it's a
> potentially useful bit of information in diagnosing a problem.
alexandre - aldeia digital wrote:
> I came to the list to see if anyone else has experienced the same
> problem
A high load average or low idle CPU isn't a problem, it's a
potentially useful bit of information in diagnosing a problem. I
was hoping to hear what the actual problem was, since I'
On 10-10-2011 16:39, Kevin Grittner wrote:
alexandre - aldeia digital wrote:
From the point of view of the client, the question is simple:
until last Friday (with 16 GB of RAM), the load average of the
server rarely surpassed 4. Nothing changed in normal database use.
Really? The applica
On 10/10/2011 12:31 PM, alexandre - aldeia digital wrote:
<2011-10-10 14:18:48 BRT >LOG: checkpoint complete: wrote 6885 buffers
(1.1%); 0 transaction log file(s) added, 0 removed, 1 recycled;
write=29.862 s, sync=28.466 s, total=58.651 s
28.466s sync time?! That's horrifying. At this point, I
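The arithmetic behind that reaction, using only the numbers in the quoted line: the checkpoint itself wrote very little, so nearly all of the sync time must be the OS draining a backlog of other dirty pages.

  6885 buffers x 8 kB       ~= 54 MB actually written by this checkpoint
  6885 buffers / 1.1%       ~= 626,000 buffers total, i.e. shared_buffers of roughly 5 GB
  28.466 s to sync ~54 MB   -> the fsyncs are stuck behind a much larger OS writeback queue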
alexandre - aldeia digital wrote:
> From the point of view of the client, the question is simple:
> until last Friday (with 16 GB of RAM), the load average of the
> server rarely surpassed 4. Nothing changed in normal database use.
Really? The application still performs as well or better, and
On 10-10-2011 14:46, Kevin Grittner wrote:
alexandre - aldeia digital wrote:
Notice that we have no idle % in the CPU column.
So they're making full use of all the CPUs they paid for. That in
itself isn't a problem. Unfortunately you haven't given us nearly
enough information to know whether there is indeed a problem, or if so, what.
alexandre - aldeia digital wrote:
> Notice that we have no idle % in the CPU column.
So they're making full use of all the CPUs they paid for. That in
itself isn't a problem. Unfortunately you haven't given us nearly
enough information to know whether there is indeed a problem, or if
so, what.
On 10-10-2011 11:04, Shaun Thomas wrote:
That's not entirely surprising. The problem with having lots of memory
is... that you have lots of memory. The operating system likes to cache,
and this includes writes. Normally this isn't a problem, but with 48GB
of RAM, the defaults (for CentOS 5.5 in p
> Then the
> database makes the fsync call, and suddenly the OS wants to flush 2-6GB of
> data
> straight to disk. Without that background trickle, you now have a flood that
> only the highest-end disk controller or a backing-store full of SSDs or PCIe
> NVRAM could ever hope to absorb.
Isn
On 10/10/2011 10:04 AM, Shaun Thomas wrote:
The problem with having lots of memory is... that you have lots of
memory. The operating system likes to cache, and this includes writes.
Normally this isn't a problem, but with 48GB of RAM, the defaults (for
CentOS 5.5 in particular) are to use up to
On 10/10/2011 10:14 AM, Leonardo Francalanci wrote:
I don't understand: don't you want postgresql to issue the fsync
calls when it "makes sense" (and configure them), rather than having
the OS decide when it's best to flush to disk? That is: don't you
want all the memory to be used for caching,
> That's not entirely surprising. The problem with having lots of memory is...
> that you have lots of memory. The operating system likes to cache, and this
> includes writes. Normally this isn't a problem, but with 48GB of RAM, the
> defaults (for CentOS 5.5 in particular) are to use up to 40
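The "40" here is almost certainly vm.dirty_ratio, which defaults to 40 (percent of RAM) on the 2.6.18-era kernels CentOS 5.5 ships; on a 48 GB box that allows roughly 19 GB of dirty pages to pile up before the kernel forces writeback. A common mitigation is to lower the thresholds, for example (the specific values are illustrative, not a recommendation made in this thread):

  # check the current values
  cat /proc/sys/vm/dirty_background_ratio /proc/sys/vm/dirty_ratio

  # /etc/sysctl.conf, then run sysctl -p to apply
  vm.dirty_background_ratio = 1    # start background writeback much sooner
  vm.dirty_ratio = 10              # cap dirty pages well below the 40% default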
On 10/10/2011 08:26 AM, alexandre - aldeia digital wrote:
Yesterday, a customer increased the server memory from 16GB to 48GB.
Today, the load of the server hit 40 ~ 50 points.
With 16 GB, the load did not surpass 5 ~ 8 points.
That's not entirely surprising. The problem with having lots of memory is... that you have lots of memory.
alexandre - aldeia digital wrote:
> Yesterday, a customer increased the server memory from 16GB to
> 48GB.
That's usually for the better, but be aware that on some hardware
adding RAM beyond a certain point causes slower RAM access. Without
knowing more details, it's impossible to say whether
Hi,
Yesterday, a customer increased the server memory from 16GB to 48GB.
Today, the load of the server hit 40 ~ 50 points.
With 16 GB, the load did not surpass 5 ~ 8 points.
The only parameters that I changed are effective_cache_size (from 14 GB to
40 GB) and shared_buffers (from 1 GB to 5 GB). Set
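For reference, those two changes amount to something like this in postgresql.conf (layout illustrative; only the values come from the post):

  shared_buffers = 5GB            # was 1GB on the 16 GB machine
  effective_cache_size = 40GB     # was 14GB; a planner estimate only, it allocates nothing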
>> * Allow CLUSTER to sort the table rather than scanning the index
> when it seems likely to be cheaper (Leonardo Francalanci)
>
> Looks like I owe Leonardo Francalanci a pizza.
Well, the patch started from work by Gregory Stark, and Tom fixed
a nasty bug; but I'll take a slice ;)