Re: [ADMIN] Postgre Eating Up Too Much RAM

2012-11-13 Thread Gunnar Nick Bluth

On 14.11.2012 04:19, Tom Lane wrote:

Craig Ringer cr...@2ndquadrant.com writes:

On 11/14/2012 06:12 AM, Aaron Bono wrote:

Am I reading this right?  Are there individual connections using over
300 MB of RAM by themselves?

If I recall correctly, RSS is charged against a PostgreSQL back-end when
it touches `shared_buffers`. So that doesn't necessarily mean that the
back-end is using the full amount of memory listed as RSS.

Yeah.  Since Aaron's got shared_buffers set to 256MB, the shared memory
segment is something more than that (maybe 270-280MB, hard to be sure
without checking).  The RSS numbers probably count all or nearly all of
that for each process, but of course there's really only one copy of the
shared memory segment.  RSS is likely double-counting the postgres
executable as well, which means that the actual additional memory used
per process is probably just a few meg, which is in line with most
folks' experience with PG.
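
For illustration (the PID below is a placeholder, and this is only a rough
sketch of the accounting): on Linux you can split one backend's RSS into
its private and shared parts from /proc/<pid>/smaps, e.g.

  # sum private vs. shared pages of one postgres backend
  # (replace 12345 with a real backend PID):
  awk '/^Private_(Clean|Dirty):/ {priv += $2}
       /^Shared_(Clean|Dirty):/  {shared += $2}
       END {printf "private: %d kB, shared: %d kB\n", priv, shared}' \
      /proc/12345/smaps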

The free stats didn't look like a machine under any sort of memory
pressure --- there's zero swap usage, and nearly half of real RAM is
being used for disk cache, which means the kernel can find no better
use for it than caching copies of disk files.  Plus there's still 10G
that's totally free.  Maybe things get worse when the machine's been up
longer, but this sure isn't evidence of trouble.
Keep in mind though that (SysV) SHM is accounted as "cached" in all
Linux tools (that I know of), so the output of "free" is never complete
without "ipcs -m" and "ipcs -mu" alongside it. However, I second Tom
here; your machine looks perfectly healthy.
Note that the RSS of your sessions can quickly explode, though (due
to copy-on-write), if your clients start creating large result sets (and,
in the worst case, take a long time to collect them).
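
If clients really have to pull big result sets, letting them fetch
through a cursor at least avoids collecting everything in one go; with
psql, for example (database and table names are just placeholders):

  # fetch in batches of 10000 rows via a cursor instead of buffering
  # the whole result set at once:
  psql --variable=FETCH_COUNT=10000 -d mydb -c 'SELECT * FROM big_table'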


You might consider deploying atop (atoptool.nl), which can sum up
usage data per user and/or process name, and lets you look back at usage
stats from the past. Plus, the latest version may already have a
separate display for SHM usage (at least Gerlof promised me he'd add it ;-).
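
For example (path and interval below are just placeholders), you can have
atop write raw samples to a log and replay them later, aggregating per
user or per program:

  # record a sample every 600 seconds into a raw log:
  atop -w /var/log/atop/atop.raw 600 &
  # replay later; press 'u' (per user) or 'p' (per program) inside atop
  # to see e.g. all postgres backends summed up:
  atop -r /var/log/atop/atop.raw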


Cheers,

--
Gunnar Nick Bluth
RHCE/SCLA

Mobil   +49 172 8853339
Email: gunnar.bl...@pro-open.de
__
In 1984 mainstream users were choosing VMS over UNIX.  Ten years later
they are choosing Windows over UNIX.  What part of that message aren't you
getting? - Tom Payne





Re: [ADMIN] Detecting DB corruption

2012-11-01 Thread Gunnar Nick Bluth

On 01.11.2012 06:47, Craig Ringer wrote:

On 11/01/2012 01:10 PM, Scott Ribe wrote:

On Oct 31, 2012, at 8:50 PM, Craig Ringer wrote:


Seriously, if you're facing DB corruption then something is already
horribly wrong with your setup.
Horribly is not strong enough a word IMHO when we're discussing duplicate
primary key values... unless Raj is not using sequences to generate
them. Although, on the other hand, in that case it's an even
more horrible setup: questionable DB design on unreliable hardware.


Raj, would you mind pasting your schema somewhere, at least for the
tables in which you experienced the corruption?
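
Something like this would do (database and table names are placeholders):

  # dump only the schema, no data, of the affected table(s):
  pg_dump --schema-only --table=the_affected_table mydb > schema.sql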

True, but. In a past life, complaints from the db (it was a db that stored a 
checksum with every block) were the very first symptom when something went horribly 
wrong with the hardware. (Partial short between wires of an internal SCSI cable; 
eventually we determined that about every 1MB, 1 bit would get flipped between the 
controller and disk.)

So, if there were an official db verifier tool for PG, I for one would have it 
run periodically.

If there were a way to reliably detect corruption, so would I. As things
stand there are no block checksums, so if a bit gets flipped in some
random `text` field you're never going to know, corruption-checker or
no. Some forms of random corruption - like bad blocks on disks causing
I/O errors, zeroed blocks, truncated files, etc. - will become apparent
with general checking, but others won't be detectable unless you know
what the expected vs. actual data is.

I think checksums are currently being worked on and are to be expected
for 9.3. Might be interesting to scan -hackers for that once more...

If page checksumming or any other reliable method of detecting possible
incipient corruption were available I'd quite likely want to use it for
much the same reason you outlined. For that matter, if there were a
general "sanity check my tables and indexes" tool I'd probably use that
too. However, no such tool exists - and in a good setup, none should be
needed. I'd want to use one anyway purely out of paranoia.

--
Craig Ringer
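
A very crude stand-in for the "verifier" Scott is asking for - and only a
sketch, since it merely forces PostgreSQL to read every table block and
therefore only catches errors the server can detect itself (I/O errors,
obviously broken pages), not silently flipped bits - would be:

  # read all table data end-to-end and throw the dump away
  # ("mydb" is a placeholder); any read error shows up on stderr:
  pg_dump mydb > /dev/null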


On a side note, Raj, you might want to read the descriptions of MVCC and
WAL once more, then re-think your idea of updating all rows and
rolling back the transaction. That would potentially produce the effect
you're looking for with InnoDB or Oracle, but not with PG.
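
A quick way to see what actually happens in PG (table, column and database
names below are placeholders): the UPDATE writes a new version of every
row whether or not you roll back, and the ROLLBACK itself rewrites
nothing - the table simply grows and is left full of dead tuples:

  # size before:
  psql mydb -c "SELECT pg_relation_size('some_table')"
  # touch every row, then roll back; the new row versions stay as dead tuples:
  psql mydb -c "BEGIN; UPDATE some_table SET some_col = some_col; ROLLBACK"
  # size after - typically much larger, until VACUUM reclaims the space:
  psql mydb -c "SELECT pg_relation_size('some_table')"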


Cheers,

--
Gunnar Nick Bluth
RHCE/SCLA

Mobil   +49 172 8853339
Email: gunnar.bl...@pro-open.de
__
In 1984 mainstream users were choosing VMS over UNIX.  Ten years later
they are choosing Windows over UNIX.  What part of that message aren't you
getting? - Tom Payne


