On Friday, February 08, 2013 6:06 PM Karolis Pocius wrote:
> I've tried changing autovacuum_analyze_scale_factor as well as setting
> the job_batches table to auto-analyze every 500 changes (by setting the
> scale factor to 0 and the threshold to 500), but I still keep running
> into that issue, sometimes mi
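The per-table override described above can be written as a storage-parameter change (a sketch; `autovacuum_analyze_scale_factor` and `autovacuum_analyze_threshold` are standard PostgreSQL per-table options, and `job_batches` is the table named in the message):

```sql
-- Trigger auto-analyze on job_batches after ~500 row changes, regardless
-- of table size: a scale factor of 0 removes the proportional component,
-- leaving only the fixed threshold.
ALTER TABLE job_batches SET (
    autovacuum_analyze_scale_factor = 0,
    autovacuum_analyze_threshold = 500
);
```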
On Saturday, February 9, 2013, Scott Marlowe wrote:
> On Sat, Feb 9, 2013 at 1:16 PM, Jeff Janes wrote:
> > On Sat, Feb 9, 2013 at 6:51 AM, Scott Marlowe wrote:
> >> On Thu, Feb 7, 2013 at 7:41 AM, Charles Gomes wrote:
> >>> I've benchmarked shared_buffers with high and l
On Sat, Feb 9, 2013 at 1:16 PM, Jeff Janes wrote:
> On Sat, Feb 9, 2013 at 6:51 AM, Scott Marlowe wrote:
>> On Thu, Feb 7, 2013 at 7:41 AM, Charles Gomes wrote:
>>> I've benchmarked shared_buffers with high and low settings, in a server
>>> dedicated to postgres with 48GB my settings are:
>>> shared_buffers = 37GB
On Sat, Feb 9, 2013 at 6:51 AM, Scott Marlowe wrote:
> On Thu, Feb 7, 2013 at 7:41 AM, Charles Gomes wrote:
>> I've benchmarked shared_buffers with high and low settings, in a server
>> dedicated to postgres with 48GB my settings are:
>> shared_buffers = 37GB
>> effective_cache_size = 38GB
>>
>>
Johnny,
Sure thing, here's the systemtap script:

#! /usr/bin/env stap
global pauses, counts
probe begin {
    printf("%s\n", ctime(gettimeofday_s()))
}
probe kernel.function("compaction_alloc@mm/compaction.c").return {
    elapsed_time = gettimeofday_us() - @entry(gettimeofday_us())
    key = spri
On Thu, Feb 7, 2013 at 7:41 AM, Charles Gomes wrote:
> I've benchmarked shared_buffers with high and low settings, in a server
> dedicated to postgres with 48GB my settings are:
> shared_buffers = 37GB
> effective_cache_size = 38GB
>
> Having a small number and depending on OS caching is unpredictable
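One way to ground the shared_buffers-versus-OS-cache debate in data is to check how often block requests are served from PostgreSQL's own buffer cache rather than going to the kernel (a sketch; `pg_stat_database` and its `blks_hit`/`blks_read` columns are standard, `mydb` is a placeholder database name):

```sql
-- Percentage of block requests served from shared_buffers.
-- blks_hit counts buffer-cache hits; blks_read counts reads that fell
-- through to the kernel (which may still be satisfied by the OS page cache).
SELECT datname,
       round(blks_hit * 100.0 / nullif(blks_hit + blks_read, 0), 2) AS hit_pct
FROM pg_stat_database
WHERE datname = 'mydb';
```

A consistently low hit percentage with a large shared_buffers suggests the working set doesn't fit, while a high one with a small shared_buffers suggests the OS cache is doing little extra work.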
On Thu, Feb 7, 2013 at 11:16 PM, Tony Chan wrote:
> Hi,
>
> May I know what is your setting for OS cache?
>
>
Tony:
Wasn't sure if you were asking me, but here's the output from "free":
# free
             total       used       free     shared    buffers     cached
Mem:     198333224  187151280
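For reference, 2013-era procps `free` reports KiB, "used" is simply total minus free, and the "buffers" and "cached" columns are reclaimable page cache. A quick sketch of how to read the output; the buffers and cached values below are hypothetical, since the output above is truncated:

```python
# All values in KiB, as `free` prints by default.
total_kib = 198333224    # "total" column from the output above
used_kib = 187151280     # "used" column from the output above
buffers_kib = 300000     # hypothetical: truncated in the output above
cached_kib = 150000000   # hypothetical: truncated in the output above

# The "free" column is just total - used.
free_kib = total_kib - used_kib

# Memory effectively available to applications: free plus reclaimable page cache.
available_kib = free_kib + buffers_kib + cached_kib

print(f"free: {free_kib} KiB")
print(f"OS cache (buffers + cached): {buffers_kib + cached_kib} KiB")
print(f"available to applications: {available_kib} KiB")
```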
Josh:
Are you able to share your systemtap script? Our problem will be to try and
regenerate the same amount of traffic/load that we see in production. We
could replay our queries, but we don't even capture a full set because it'd
be roughly 150GB per day.
johnny
On Thu, Feb 7, 2013 at 12:49 PM