[PERFORM] Re: PG as in-memory db? How to warm up and re-populate buffers? How to read in all tuples into memory?

2012-03-02 Thread Tomas Vondra
On 2.3.2012 03:05, Claudio Freire wrote:
 On Thu, Mar 1, 2012 at 10:13 PM, Tomas Vondra t...@fuzzy.cz wrote:

 Maybe. I still am not sure how fsync=off affects the eviction in your
 opinion. I think it does not (or just very remotely) and you were saying
 the opposite. IMHO the eviction of (dirty) buffers is either very fast
 or slow, no matter what the fsync setting is.
 
 I was thinking page cleanup, but if you're confident it doesn't happen
 on a read-only database, I'd have to agree on all your other points.
 
 I have seen a small amount of writes on a read-only devel DB I work
 with, though. Usually on the order of 100 kB/s of writes per 10 MB/s of
 reads - I attributed that to page cleanup. In that case it can add some
 wait time to fsync, even though it's really a low volume of writes.
 If you're right, I'm thinking it may be some other thing... atime
 updates maybe; I'd have to check the filesystem configuration, I guess.

I'd guess those writes were caused by hint bits (~ page cleanup, but
that's a one-time thing and should be fixed by VACUUM FREEZE right after
the load). Or maybe it was related to runtime stats (i.e. pgstat).
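
(As a sketch of the one-time cleanup mentioned above; "mydb" is just a
placeholder database name:

  psql -d mydb -c "VACUUM FREEZE ANALYZE;"

run once right after the load, this sets hint bits, freezes tuples and
refreshes the planner stats, so a purely read-only workload afterwards
shouldn't produce writes from that source anymore.)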

T.



[PERFORM] Re: PG as in-memory db? How to warm up and re-populate buffers? How to read in all tuples into memory?

2012-03-01 Thread Tomas Vondra
On 28.2.2012 17:42, Claudio Freire wrote:
 On Tue, Feb 28, 2012 at 1:05 PM, Tomas Vondra t...@fuzzy.cz wrote:
 On 28 February 2012, 15:24, Claudio Freire wrote:
 It speeds up a lot more than the initial load of data.

 Assuming the database is read-only, but not the filesystem (ie: it's
 not a slave, in which case all this is moot, as you said, there are no
 writes on a slave). That is, assuming this is a read-only master, then
 read-only queries don't mean read-only filesystem. Bookkeeping tasks
 like updating catalog dbs, statistics tables, page cleanup, stuff like
 that can actually result in writes.

 Writes that go through the WAL and then the filesystem.

 I'm not sure what maintenance tasks you mean. Sure, there are tasks that
 need to be performed after the load (stats, hint bits, updating system
 catalogs etc.) but this may happen once right after the load and then
 there's effectively zero write activity. Unless the database needs to
 write temp files, but that contradicts the 'fits into RAM' assumption ...
 
 AFAIK, stats need to be constantly updated.

Err, what kind of stats are we talking about? Statistics capturing
characteristics of the data or runtime stats? There's no point in
updating data stats (histograms, MCV, ...) for read-only data and
PostgreSQL doesn't do that.

Runtime stats OTOH are collected and written continuously, that's true.
But in most cases this is not a write-heavy task, and if it is, then it's
recommended to place pg_stat_tmp on a ramdrive (it's usually just a few
MB, written repeatedly).
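
(A sketch of that setup for a 9.1-era server, assuming Linux and example
paths:

  mkdir -p /mnt/pg_stat_tmp
  mount -t tmpfs -o size=64M tmpfs /mnt/pg_stat_tmp
  # postgresql.conf: stats_temp_directory = '/mnt/pg_stat_tmp'
  psql -c "SELECT pg_reload_conf();"

stats_temp_directory only needs a reload, not a restart.)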

 Not sure about the rest.

AFAIK it's like this:

  updating catalog tables - no updates on read-only data

  updating statistics - data stats: no, runtime stats: yes

  page cleanup - no (just once after the load)

 And yes, it's quite possible to require temp files even with a database
 that fits in memory: big OLAP-style queries and a small enough work_mem
 are all it takes.

Right. I'm not exactly sure how I arrived at the crazy conclusion that
writing temp files somehow contradicts the 'fits into RAM' assumption.
That's clearly nonsense ...

 
 The writes are always carried out by the OS - except when dirty_ratio is
 exceeded (but that's a different story) and WAL with direct I/O enabled.
 The best way to allow merging the writes in shared buffers or page cache
 is to set the checkpoint_segments / checkpoint_timeout high enough.
 That way the transactions won't need to wait for writes to data files
 (which is the part related to evictions of buffers from cache). And
 read-only transactions won't need to wait at all because they don't need
 to wait for fsync on WAL.
 
 Exactly
 
 In essence, what was required, to keep everything in RAM for as much
 as possible.

 It *does* in the same way affect buffer eviction - it makes eviction
 *very* quick, and re-population equally as quick, if everything fits
 into memory.

 No it doesn't. Only a write caused by a background process (due to full
 shared buffers) means immediate eviction. A simple write (caused by a
 checkpoint) does not evict the page from shared buffers. Not even the
 background writer evicts a page from shared buffers; it merely marks it
 as 'clean' and leaves it there. And all those writes happen in the
 background, so the clients don't need to wait for them to complete (except
 for xlog checkpoints).
 
 So, we're saying the same.

Maybe. I still am not sure how fsync=off affects the eviction in your
opinion. I think it does not (or just very remotely) and you were saying
the opposite. IMHO the eviction of (dirty) buffers is either very fast
or slow, no matter what the fsync setting is.

 With all that, and enough RAM, it already does what was requested.
 
 Maybe it would help to tune the shared_buffers-to-OS-cache ratio, and
 dirty_ratio to allow a big portion of RAM to be used for write caching (if
 there were enough writes, which I doubt), but, in essence, an
 unmodified postgres installation with enough RAM to hold the whole DB
 + shared buffers in RAM should perform quite optimally.

Probably, for a read-write database that fits into memory. In the case of a
read-only database I don't think this really matters, because the main
issue there is temp files, and if you can stuff those into the page cache
then you can just increase work_mem instead and you're golden.
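
(A sketch, with a made-up query and example values, of checking whether
work_mem is the limiting factor:

  # postgresql.conf: log_temp_files = 0     (log every temp file created)
  psql -d mydb -c "SET work_mem = '256MB';
                   EXPLAIN ANALYZE
                   SELECT g, count(*)
                   FROM generate_series(1, 5000000) g
                   GROUP BY g ORDER BY g;"

if the sort in the plan runs in memory instead of as an external merge and
nothing shows up in the log, the temp files are gone and the page cache is
left for the data itself.)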

Tomas



[PERFORM] Re: PG as in-memory db? How to warm up and re-populate buffers? How to read in all tuples into memory?

2012-03-01 Thread Claudio Freire
On Thu, Mar 1, 2012 at 10:13 PM, Tomas Vondra t...@fuzzy.cz wrote:

 Maybe. I still am not sure how fsync=off affects the eviction in your
 opinion. I think it does not (or just very remotely) and you were saying
 the opposite. IMHO the eviction of (dirty) buffers is either very fast
 or slow, no matter what the fsync setting is.

I was thinking page cleanup, but if you're confident it doesn't happen
on a read-only database, I'd have to agree on all your other points.

I have seen a small amount of writes on a read-only devel DB I work
with, though. Usually on the order of 100 kB/s of writes per 10 MB/s of
reads - I attributed that to page cleanup. In that case it can add some
wait time to fsync, even though it's really a low volume of writes.
If you're right, I'm thinking it may be some other thing... atime
updates maybe; I'd have to check the filesystem configuration, I guess.
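
(If atime turns out to be the culprit, a rough check and fix on Linux,
with an example mount point, would be:

  grep /var/lib/pgsql /proc/mounts
  mount -o remount,noatime /var/lib/pgsql

relatime or noatime avoids an inode write on every read.)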



[PERFORM] Re: PG as in-memory db? How to warm up and re-populate buffers? How to read in all tuples into memory?

2012-02-28 Thread Stefan Keller
Hi Wales

2012/2/27 Wales Wang wormw...@yahoo.com wrote:
 There are many approaches for running PostgreSQL in-memory.
 The quick and easy way is making a slave pgsql instance run on a persistent
 RAM filesystem; the slave is part of a master/slave replication cluster.

 The fstab entry and the script that make the RAM filesystem persistent
 are below.
 Setup:
 First, create a mountpoint for the disk:
 mkdir /mnt/ramdisk
 Secondly, add this line to /etc/fstab to mount the drive at boot time:
 tmpfs   /mnt/ramdisk tmpfs  defaults,size=65536M 0 0

 #!/bin/sh
 # /etc/init.d/ramdisk.sh
 #

 case "$1" in
   start)
     echo "Copying files to ramdisk"
     rsync -av /data/ramdisk-backup/ /mnt/ramdisk/
     echo "[$(date '+%Y-%m-%d %H:%M')] Ramdisk synched from HD" >> /var/log/ramdisk_sync.log
     ;;
   sync)
     echo "Synching files from ramdisk to hard disk"
     echo "[$(date '+%Y-%m-%d %H:%M')] Ramdisk synched to HD" >> /var/log/ramdisk_sync.log
     rsync -av --delete --recursive --force /mnt/ramdisk/ /data/ramdisk-backup/
     ;;
   stop)
     echo "Synching log files from ramdisk to hard disk"
     echo "[$(date '+%Y-%m-%d %H:%M')] Ramdisk synched to HD" >> /var/log/ramdisk_sync.log
     rsync -av --delete --recursive --force /mnt/ramdisk/ /data/ramdisk-backup/
     ;;
   *)
     echo "Usage: /etc/init.d/ramdisk {start|stop|sync}"
     exit 1
     ;;
 esac
 exit 0

 You can run it at startup and shutdown, and hourly from crontab.

 Wales Wang

Thank you for the tip.
Making a slave pgsql instance run on a persistent RAM filesystem is surely
at least a possibility which I'll try out.

But what I'm finally after is a solution where records don't get pushed
back to disk as soon as possible, but rather are held in memory as long
as possible, assuming there is enough memory.
I suspect that currently there is quite some overhead because of that
(besides the disk-oriented structures).
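
(A crude warm-up after a load is already possible from the shell; the
table name and data directory below are only placeholders:

  psql -d mydb -c "SELECT count(*) FROM big_table;"    # a sequential scan pulls the table through the OS cache
  find /var/lib/pgsql/9.1/data/base -type f -exec cat {} + > /dev/null    # or read the raw relation files

both simply force sequential reads so that later queries find the pages
already cached.)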

-Stefan



[PERFORM] Re: PG as in-memory db? How to warm up and re-populate buffers? How to read in all tuples into memory?

2012-02-28 Thread Claudio Freire
On Tue, Feb 28, 2012 at 5:30 AM, Stefan Keller sfkel...@gmail.com wrote:

 But what I'm finally after is a solution where records don't get pushed
 back to disk as soon as possible, but rather are held in memory as long
 as possible, assuming there is enough memory.

fsync = off ?



[PERFORM] Re: PG as in-memory db? How to warm up and re-populate buffers? How to read in all tuples into memory?

2012-02-28 Thread Tomas Vondra
On 28 February 2012, 14:08, Claudio Freire wrote:
 On Tue, Feb 28, 2012 at 5:30 AM, Stefan Keller sfkel...@gmail.com wrote:

 But what I'm finally after is a solution where records don't get pushed
 back to disk as soon as possible, but rather are held in memory as long
 as possible, assuming there is enough memory.

 fsync = off ?

I don't think this is a viable idea, unless you don't care about the data.

Moreover, fsync=off does not mean no writing, and writing does not mean
removal from shared buffers. A page written/fsynced during a checkpoint
may stay in shared buffers.

AFAIK the pages are not removed from shared buffers without a reason. So a
dirty buffer is written to disk (because it needs to be, to keep ACID), but
stays in shared buffers as clean (unless it was written out by a backend,
which means there's not enough memory).
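
(What actually sits in shared buffers can be inspected with the
pg_buffercache contrib module; a sketch, assuming PostgreSQL 9.1+ and a
database called mydb:

  psql -d mydb -c "CREATE EXTENSION pg_buffercache;"
  psql -d mydb -c "SELECT c.relname, count(*) AS buffers
                   FROM pg_buffercache b
                   JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
                   GROUP BY c.relname ORDER BY buffers DESC LIMIT 10;"

which lists the relations holding the most buffers.)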

Tomas




[PERFORM] Re: PG as in-memory db? How to warm up and re-populate buffers? How to read in all tuples into memory?

2012-02-28 Thread Claudio Freire
On Tue, Feb 28, 2012 at 10:38 AM, Tomas Vondra t...@fuzzy.cz wrote:
 On 28 February 2012, 14:08, Claudio Freire wrote:
 On Tue, Feb 28, 2012 at 5:30 AM, Stefan Keller sfkel...@gmail.com wrote:

 But what I'm finally after is a solution where records don't get pushed
 back to disk as soon as possible, but rather are held in memory as long
 as possible, assuming there is enough memory.

 fsync = off ?

 I don't think this is a viable idea, unless you don't care about the data.

Well, if you keep things in memory as long as possible (as per the
quoted message), then you don't care about the data. There's no way
memory-only DBs can provide ACID guarantees.

synchronous_commit=off goes half way there without sacrificing crash
recovery, which is another option.
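
(The setting in question is a one-liner, either cluster-wide in
postgresql.conf or per role; database and role names below are
placeholders:

  # postgresql.conf: synchronous_commit = off          # cluster-wide
  psql -d mydb -c "ALTER ROLE loader SET synchronous_commit = off;"   # or just for a loading role

commits then no longer wait for the WAL flush; a crash can lose the last
few transactions but cannot corrupt the cluster, unlike fsync=off.)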

 Moreover, fsync=off does not mean no writing, and writing does not mean
 removal from shared buffers. A page written/fsynced during a checkpoint
 may stay in shared buffers.

The OS will write in the background (provided there's enough memory,
which was an assumption on the quoted message). It will not interfere
with other operations, so, in any case, writing or not, you get what
you want.

 AFAIK the pages are not removed from shared buffers without a reason. So a
 dirty buffer is written to disk (because it needs to be, to keep ACID), but
 stays in shared buffers as clean (unless it was written out by a backend,
 which means there's not enough memory).

Just writing is not enough. ACID requires fsync. If you don't fsync
(be it with synchronous_commit=off or fsync=off), then it's no longer
fully ACID, because a crash at a bad moment can always make your data
non-persistent.

That's an unavoidable result of keeping things in memory.



[PERFORM] Re: PG as in-memory db? How to warm up and re-populate buffers? How to read in all tuples into memory?

2012-02-28 Thread Tomas Vondra
On 28 February 2012, 14:52, Claudio Freire wrote:
 On Tue, Feb 28, 2012 at 10:38 AM, Tomas Vondra t...@fuzzy.cz wrote:
 On 28 February 2012, 14:08, Claudio Freire wrote:
 On Tue, Feb 28, 2012 at 5:30 AM, Stefan Keller sfkel...@gmail.com
 wrote:

 But what I'm finally after is a solution where records don't get pushed
 back to disk as soon as possible, but rather are held in memory as long
 as possible, assuming there is enough memory.

 fsync = off ?

 I don't think this is a viable idea, unless you don't care about the
 data.

 Well, if you keep things in memory as long as possible (as per the
 quoted message), then you don't care about the data. There's no way
 memory-only DBs can provide ACID guarantees.

 synchronous_commit=off goes half way there without sacrificing crash
 recovery, which is another option.

 Moreover, fsync=off does not mean no writing, and writing does not mean
 removal from shared buffers. A page written/fsynced during a checkpoint
 may stay in shared buffers.

 The OS will write in the background (provided there's enough memory,
 which was an assumption on the quoted message). It will not interfere
 with other operations, so, in any case, writing or not, you get what
 you want.

 AFAIK the pages are not removed from shared buffers without a reason. So
 a dirty buffer is written to disk (because it needs to be, to keep ACID),
 but stays in shared buffers as clean (unless it was written out by a
 backend, which means there's not enough memory).

 Just writing is not enough. ACID requires fsync. If you don't fsync
 (be it with synchronous_commit=off or fsync=off), then it's no longer
 fully ACID, because a crash at a bad moment can always make your data
 non-persistent.

I haven't said writing is sufficient for ACID; I said it's required, which
is kind of obvious because of the durability part.

 That's an unavoidable result of keeping things in memory.

Why? IIRC the OP was interested in keeping the data in memory for querying,
and said that the database is read-only after it's populated with data
(once a day). How does properly writing the transaction logs / data files
interfere with that?

I haven't investigated why exactly the data are not cached initially, but
none of the options that I can think of could be fixed by setting
fsync=off. That's something that influences writes (not read-only
database) and I don't think it influences how buffers are evicted from
shared buffers / page cache.

It might speed up the initial load of data, but that's not what the OP was
asking.

kind regards
Tomas




[PERFORM] Re: PG as in-memory db? How to warm up and re-populate buffers? How to read in all tuples into memory?

2012-02-28 Thread Claudio Freire
On Tue, Feb 28, 2012 at 11:15 AM, Tomas Vondra t...@fuzzy.cz wrote:
 I haven't investigated why exactly the data are not cached initially, but
 none of the options that I can think of could be fixed by setting
 fsync=off. That's something that influences writes (not read-only
 database) and I don't think it influences how buffers are evicted from
 shared buffers / page cache.

 It might speed up the initial load of data, but that's not what the OP was
 asking.

It speeds up a lot more than the initial load of data.

Assuming the database is read-only, but not the filesystem (ie: it's
not a slave, in which case all this is moot, as you said, there are no
writes on a slave). That is, assuming this is a read-only master, then
read-only queries don't mean read-only filesystem. Bookkeeping tasks
like updating catalog dbs, statistics tables, page cleanup, stuff like
that can actually result in writes.

Writes that go through the WAL and then the filesystem.

With fsync=off, those writes happen in the background and are carried
out by the OS, effectively releasing postgres from having to wait on
them and, assuming there's enough RAM, merging repeated writes to the
same sectors into one operation in the end. For stats, bookkeeping, and
who knows what else, the merging would be quite effective. With enough
RAM to hold the entire DB, the merging would effectively keep
everything in RAM (in system buffers) until there's enough I/O
bandwidth to transparently push that to persistent storage.
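
(The knobs being discussed, shown as a sketch of a throw-away
postgresql.conf fragment; only sensible for data that can be rebuilt from
elsewhere:

  fsync = off
  full_page_writes = off    # often disabled together with fsync on disposable DBs

a crash with these settings can corrupt the whole cluster, not just lose
recent commits.)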

In essence, that is what was required: to keep everything in RAM as much
as possible.

It *does* in the same way affect buffer eviction - it makes eviction
*very* quick, and re-population equally as quick, if everything fits
into memory.



[PERFORM] Re: PG as in-memory db? How to warm up and re-populate buffers? How to read in all tuples into memory?

2012-02-28 Thread Jeff Janes
On Tue, Feb 28, 2012 at 12:30 AM, Stefan Keller sfkel...@gmail.com wrote:

 Thank you for the tip.
 Making a slave pgsql instance run on a persistent RAM filesystem is surely
 at least a possibility which I'll try out.

 But what I'm finally after is a solution where records don't get pushed
 back to disk as soon as possible, but rather are held in memory as long
 as possible, assuming there is enough memory.

That is already the case.  There are two separate issues, when dirty
data is written to disk, and when clean data is dropped from memory.
The only connection between them is that dirty data can't just be
dropped; it must be written first.  But having written it, there is no
reason to immediately drop it.  When a checkpoint cleans data from
shared_buffers, that now-clean data remains in shared_buffers.  And at
the OS level, when an fsync forces dirty data out to disk, the
now-clean data generally remains in cache (although I've seen nfs
implementations where that was not the case).

It is hard to figure out what problem you are facing.  Is your data
not getting loaded into cache, or is it not staying there?
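
(One rough way to tell the two cases apart from the OS side on Linux:

  grep -E '^(Cached|Dirty|Writeback):' /proc/meminfo

if Cached stays around the size of the database after the load, the data
is in the page cache and the problem lies elsewhere.)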

Cheers,

Jeff



[PERFORM] Re: PG as in-memory db? How to warm up and re-populate buffers? How to read in all tuples into memory?

2012-02-28 Thread Tomas Vondra
On 28 February 2012, 15:24, Claudio Freire wrote:
 On Tue, Feb 28, 2012 at 11:15 AM, Tomas Vondra t...@fuzzy.cz wrote:
 I haven't investigated why exactly the data are not cached initially,
 but
 none of the options that I can think of could be fixed by setting
 fsync=off. That's something that influences writes (not read-only
 database) and I don't think it influences how buffers are evicted from
 shared buffers / page cache.

 It might speed up the initial load of data, but that's not what the OP
 was
 asking.

 It speeds up a lot more than the initial load of data.

 Assuming the database is read-only, but not the filesystem (ie: it's
 not a slave, in which case all this is moot, as you said, there are no
 writes on a slave). That is, assuming this is a read-only master, then
 read-only queries don't mean read-only filesystem. Bookkeeping tasks
 like updating catalog dbs, statistics tables, page cleanup, stuff like
 that can actually result in writes.

 Writes that go through the WAL and then the filesystem.

I'm not sure what maintenance tasks you mean. Sure, there are tasks that
need to be performed after the load (stats, hint bits, updating system
catalogs etc.) but this may happen once right after the load and then
there's effectively zero write activity. Unless the database needs to
write temp files, but that contradicts the 'fits into RAM' assumption ...

 With fsync=off, those writes happen in the background and are carried
 out by the OS, effectively releasing postgres from having to wait on
 them and, assuming there's enough RAM, merging repeated writes to the
 same sectors into one operation in the end. For stats, bookkeeping, and
 who knows what else, the merging would be quite effective. With enough
 RAM to hold the entire DB, the merging would effectively keep
 everything in RAM (in system buffers) until there's enough I/O
 bandwidth to transparently push that to persistent storage.

The writes are always carried out by the OS - except when dirty_ratio is
exceeded (but that's a different story) and WAL with direct I/O enabled.
The best way to allow merging the writes in shared buffers or page cache
is to set the checkpoint_segments / checkpoint_timeout high enough.

That way the transactions won't need to wait for writes to data files
(which is the part related to evictions of buffers from cache). And
read-only transactions won't need to wait at all because they don't need
to wait for fsync on WAL.
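
(Illustrative values only, for a 9.1-era postgresql.conf; the right
numbers depend on write volume and acceptable WAL size:

  checkpoint_segments = 64               # ~1 GB of WAL between checkpoints
  checkpoint_timeout = 30min
  checkpoint_completion_target = 0.9     # spread the checkpoint writes out

the point is simply to checkpoint rarely and smoothly.)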

 In essence, what was required, to keep everything in RAM for as much
 as possible.

 It *does* in the same way affect buffer eviction - it makes eviction
 *very* quick, and re-population equally as quick, if everything fits
 into memory.

No it doesn't. Only a write caused by a background process (due to full
shared buffers) means immediate eviction. A simple write (caused by a
checkpoint) does not evict the page from shared buffers. Not even the
background writer evicts a page from shared buffers; it merely marks it
as 'clean' and leaves it there. And all those writes happen in the
background, so the clients don't need to wait for them to complete (except
for xlog checkpoints).
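
(This is easy to watch with the pg_buffercache contrib module, as a
sketch with a placeholder database name:

  psql -d mydb -c "SELECT isdirty, count(*) FROM pg_buffercache GROUP BY isdirty;"

after a checkpoint the dirty count drops while the total number of cached
buffers stays the same, i.e. pages are cleaned in place, not evicted.)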

kind regards
Tomas




[PERFORM] Re: PG as in-memory db? How to warm up and re-populate buffers? How to read in all tuples into memory?

2012-02-28 Thread Claudio Freire
On Tue, Feb 28, 2012 at 1:05 PM, Tomas Vondra t...@fuzzy.cz wrote:
 On 28 February 2012, 15:24, Claudio Freire wrote:
 It speeds up a lot more than the initial load of data.

 Assuming the database is read-only, but not the filesystem (ie: it's
 not a slave, in which case all this is moot, as you said, there are no
 writes on a slave). That is, assuming this is a read-only master, then
 read-only queries don't mean read-only filesystem. Bookkeeping tasks
 like updating catalog dbs, statistics tables, page cleanup, stuff like
 that can actually result in writes.

 Writes that go through the WAL and then the filesystem.

 I'm not sure what maintenance tasks you mean. Sure, there are tasks that
 need to be performed after the load (stats, hint bits, updating system
 catalogs etc.) but this may happen once right after the load and then
 there's effectively zero write activity. Unless the database needs to
 write temp files, but that contradicts the 'fits into RAM' assumption ...

AFAIK, stats need to be constantly updated.
Not sure about the rest.

And yes, it's quite possible to require temp files even with a database
that fits in memory: big OLAP-style queries and a small enough work_mem
are all it takes.

 The writes are always carried out by the OS - except when dirty_ratio is
 exceeded (but that's a different story) and WAL with direct I/O enabled.
 The best way to allow merging the writes in shared buffers or page cache
 is to set the checkpoint_segments / checkpoint_timeout high enough.
 That way the transactions won't need to wait for writes to data files
 (which is the part related to evictions of buffers from cache). And
 read-only transactions won't need to wait at all because they don't need
 to wait for fsync on WAL.

Exactly

 In essence, what was required, to keep everything in RAM for as much
 as possible.

 It *does* in the same way affect buffer eviction - it makes eviction
 *very* quick, and re-population equally as quick, if everything fits
 into memory.

 No it doesn't. Only a write caused by a background process (due to full
 shared buffers) means immediate eviction. A simple write (caused by a
 checkpoint) does not evict the page from shared buffers. Not even the
 background writer evicts a page from shared buffers; it merely marks it
 as 'clean' and leaves it there. And all those writes happen in the
 background, so the clients don't need to wait for them to complete (except
 for xlog checkpoints).

So, we're saying the same.

With all that, and enough RAM, it already does what was requested.

Maybe it would help to tune the shared_buffers-to-OS-cache ratio, and
dirty_ratio to allow a big portion of RAM to be used for write caching (if
there were enough writes, which I doubt), but, in essence, an
unmodified postgres installation with enough RAM to hold the whole DB
+ shared buffers in RAM should perform quite optimally.
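
(The kernel side of that, with example values; these control how much
dirty data the Linux page cache may accumulate before writeback kicks in:

  sysctl vm.dirty_ratio vm.dirty_background_ratio
  sysctl -w vm.dirty_ratio=40 vm.dirty_background_ratio=10

with fsync left on, this changes only when the writes happen, not
durability.)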
