This is a super-interesting topic, thanks for all the info.
On Thu, Sep 4, 2014 at 7:44 AM, Shaun Thomas stho...@optionshouse.com
wrote:
Check /proc/meminfo for a better breakdown of how the memory is being
used. This should work:
grep -A1 Active /proc/meminfo
I suspect your inactive file
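The `grep -A1 Active /proc/meminfo` suggestion above pulls out the Active/Inactive line pairs. As a rough illustration of what to look at in that output, here is a small Python sketch that parses /proc/meminfo-style text and reports the file-backed page cache split; the field names (`Active(file)`, `Inactive(file)`) are the standard Linux ones, and the sample text below is made up to stand in for a real host's values:

```python
# Sketch: break down the page cache from /proc/meminfo-style text,
# the same fields that `grep -A1 Active /proc/meminfo` shows.

def parse_meminfo(text):
    """Return {field: kB} from /proc/meminfo-style text."""
    info = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        if rest:
            info[key.strip()] = int(rest.split()[0])  # values are in kB
    return info

# Invented sample values for a 64GB host, for illustration only.
sample = """\
MemTotal:       65966796 kB
Active:         30000000 kB
Inactive:       18000000 kB
Active(anon):   12000000 kB
Inactive(anon):   500000 kB
Active(file):   18000000 kB
Inactive(file): 17500000 kB
"""

info = parse_meminfo(sample)
file_cache = info["Active(file)"] + info["Inactive(file)"]
print(f"file-backed page cache: {file_cache} kB")
print(f"inactive file share: {info['Inactive(file)'] / file_cache:.1%}")
```

A large `Inactive(file)` pool is memory the kernel can reclaim cheaply, which is one reason "free" RAM can grow over time even though the data was once cached.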
On Tue, Aug 5, 2014 at 8:27 AM, Shaun Thomas stho...@optionshouse.com
wrote:
On 07/30/2014 12:51 PM, Kevin Goess wrote:
A couple months ago we upgraded the RAM on our database servers from
48GB to 64GB. Immediately afterwards the new RAM was being used for
page cache, which is what we want
A couple months ago we upgraded the RAM on our database servers from 48GB
to 64GB. Immediately afterwards the new RAM was being used for page cache,
which is what we want, but that seems to have dropped off over time, and
there's currently actually like 12GB of totally unused RAM.
Good suggestion, but nope, that ain't it:
$ cat /proc/sys/vm/zone_reclaim_mode
0
On Wed, Jul 30, 2014 at 11:49 AM, Merlin Moncure mmonc...@gmail.com wrote:
On Wed, Jul 30, 2014 at 12:51 PM, Kevin Goess kgo...@bepress.com wrote:
A couple months ago we upgraded the RAM on our database
So my conclusion is that for now, the best way to scale read-only queries
for a sharded master is to
implement map-reduce at the application level.
That's the conclusion I would expect. It's the price you pay for sharding,
it's part of the deal.
But it's also the benefit you get from sharding.
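To make the "map-reduce at the application level" idea concrete, here is a minimal Python sketch of the scatter-gather pattern: the same read-only query is fanned out to every shard, and the partial results are combined by the application. The shard "connections" are plain in-memory functions standing in for real database handles, and all names here are hypothetical:

```python
# Sketch of application-level map-reduce over shards: fan one read-only
# query out to all shards in parallel, then reduce the partial results.
from concurrent.futures import ThreadPoolExecutor

def query_shard(shard_rows, user_id):
    """Map step: each shard returns its local hit count for user_id."""
    return sum(hits for uid, hits in shard_rows if uid == user_id)

# Three pretend shards, each holding (userid, hits) rows.
shards = [
    [(1, 10), (2, 5)],
    [(1, 7), (3, 2)],
    [(2, 1), (1, 3)],
]

with ThreadPoolExecutor() as pool:
    partials = list(pool.map(lambda s: query_shard(s, 1), shards))

total = sum(partials)  # reduce step
print(total)  # 20
```

In a real deployment each map call would run the query on one shard's connection; the reduce step is whatever combiner the query needs (sum, merge-sort for ORDER BY ... LIMIT, etc.).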
On Thu, May 15, 2014 at 6:39 AM, Dorian Hoxha dorian.ho...@gmail.com wrote:
If you don't do read queries on the slave, then it will not have hot
data/pages/rows/tables/indexes in RAM like the primary?
Yeah, that was the first thing we noticed; the Cacti graph shows it took
two hours for the
We have a master/slave setup with replication. Today we failed over to the
slave and saw disk I/O go through the roof.
Are the pg_statistic statistics synced along with streaming replication?
Are you expected to have to do a vacuum analyze after failing over? That's
what we're trying now to see
We're looking at upgrading our database hosts running postgres 9.2 from
debian 6/squeeze to debian 7/wheezy. It seems to me that using apt to
upgrade in-place would be less work than wiping the boxes and reinstalling
debian 7 from scratch, but the latter way would be cleaner.
Does anybody have
On March 17, 2014, Kevin Goess kgo...@bepress.com wrote:
We had a big increase in load, iowait, and disk i/o on a dedicated
database host the other night.
Looking at the sar logs, the problem shows itself in a big increase in
pgpgout/s, which I believe is postgres paging out parts of itself
We had a big increase in load, iowait, and disk i/o on a dedicated database
host the other night.
Looking at the sar logs, the problem shows itself in a big increase in
pgpgout/s, which I believe is postgres paging out parts of itself to disk?
02:15:01 AM pgpgin/s pgpgout/s fault/s majflt/s
On Tue, Nov 12, 2013 at 2:47 PM, Tom Lane t...@sss.pgh.pa.us wrote:
That's right, we store 90 days and roll up data older than that into a
different table.
Ah-hah. The default statistics target is 100, so indeed ANALYZE is going
to be able to fit every date entry in the table into the
Thanks for the reply!
Your analysis matches everything I see here, so what you say is probably
the case. As to why it changed for us with the 9.0 -> 9.2 upgrade, I also
the case. As to why it changed for us with the 9.0 = 9.2 upgrade, I also
don't know--the change was pretty dramatic though. Since we've compensated
for it, and since you say the current behavior is
Given this table
articles_1=> \d hits_user_daily_count;
  Table "public.hits_user_daily_count"
 Column | Type    | Modifiers
--------+---------+-----------
 userid | integer | not null
 date   | date    | not null
We're seeing a problem with some of our processes hanging on locks. The
select below makes it look like it's *waiting* for a ShareLock on
transactionid, but it *has* an ExclusiveLock on the same value in
virtualxid.
That makes it look like the process has deadlocked on its own
transactionid. Or
On Tue, Jul 23, 2013 at 5:40 PM, Adrian Klaver adrian.kla...@gmail.com wrote:
On 07/23/2013 05:29 PM, Some Developer wrote:
I'm in the middle of building a database and was going to make extensive
use of stored procedures and trigger functions because it makes more
sense for the actions to
Thanks for looking into it, Tom. We're using 9.0.4, so that might indeed
be the problem. What additional data (if any) would you like to see? If
you want to look into it further, I can give you schema, though I hesitate
to spam the whole list. I could also mock up some tables and see what's the
On Mon, Mar 19, 2012 at 9:24 AM, Albe Laurenz laurenz.a...@wien.gv.at wrote:
That means that your statistics are not accurate.
As a first measure, you should ANALYZE the tables involved and see if
the problem persists. If yes, post the new plans.
Aha, thanks, that explains why my test table
My apologies, I'm sure this question has been asked before but I couldn't
find anything on the list that meant anything to me.
We have a table "contexts" with 1.6 million rows, and a table "articles"
with 1.4 million rows, where an article is a particular kind of
context. We want to select from a