On Sun, Dec 6, 2009 at 12:09 PM, Andres Freund wrote:
> I know of several instances running with a larger fsm_pages - you could try to
> reduce the fsm_relations setting - I don't know if there are problems lurking
> with such an oversized value.
I run a db with 10M max_fsm_pages and 500k max_fsm_relations.
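For reference, these are postgresql.conf settings (relevant to PostgreSQL 8.3 and earlier; the free space map became self-managing in 8.4). The values below mirror the ones mentioned above; the memory estimate is approximate:

```
# postgresql.conf fragment (pre-8.4 only)
max_fsm_pages = 10000000      # ~6 bytes of shared memory each, so roughly 60 MB
max_fsm_relations = 500000    # ~70 bytes each
```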
On Sunday 06 December 2009 19:20:17 Andreas Thiel wrote:
> Hi Andres,
>
> Thanks a lot for your answers. As bottom line I think the answer is I
> have to rethink my DB structure.
Can't answer that one without knowing much more ;)
> > Could you please properly quote the email? The way you did it is quite
> > unreadable because you always have to guess who wrote what.
Hi Andreas,
Could you please properly quote the email? The way you did it is quite
unreadable because you always have to guess who wrote what.
On Sunday 06 December 2009 17:06:39 Andreas Thiel wrote:
> > I'm going to work on the table size of the largest table (result_orig)
> > itself by elimina
Kris Kewley wrote:
Does postgres have the concept of "pinning" procs, functions, etc to
cache.
No. Everything that's in PostgreSQL's cache gets a usage count attached
to it. When the buffer is used by something else, that count gets
incremented. And when new buffers need to be allocated, the sweep
decrements each buffer's count as it passes, and buffers whose count
has reached zero become candidates for eviction.
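To make the usage-count idea concrete, here is a minimal sketch of a clock-sweep replacement policy in Python. This is an illustration of the general technique, not PostgreSQL's actual buffer manager; the class and method names are invented, and the cap of 5 matches PostgreSQL's usage-count limit:

```python
class Buffer:
    """One cached page with its usage count."""
    def __init__(self, page):
        self.page = page
        self.usage_count = 1  # set when the page is first loaded

class ClockSweep:
    """Toy clock-sweep cache: hits bump usage counts, the sweeping
    hand decrements them, and count-zero buffers get evicted."""
    def __init__(self, size):
        self.slots = [None] * size
        self.hand = 0

    def touch(self, page):
        # Cache hit: bump the usage count (capped at 5, as in PostgreSQL).
        for buf in self.slots:
            if buf and buf.page == page:
                buf.usage_count = min(buf.usage_count + 1, 5)
                return buf
        return self._load(page)

    def _load(self, page):
        # Sweep the hand around the slots, decrementing counts;
        # reuse the first empty or count-zero slot found.
        while True:
            buf = self.slots[self.hand]
            slot = self.hand
            self.hand = (self.hand + 1) % len(self.slots)
            if buf is None or buf.usage_count == 0:
                new = Buffer(page)
                self.slots[slot] = new
                return new
            buf.usage_count -= 1
```

The point of the usage count is exactly why there is no "pinning": a frequently touched buffer keeps its count high and simply never comes up for eviction, so hot objects stay cached without any manual intervention.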
> Does postgres have the concept of "pinning" procs, functions, etc to
> cache.
As you mention, frequently used statements are typically cached,
improving performance.
If waiting for the DBMS to do this is not an option, then pinning
critical ones should improve performance immediately following a restart.
I have a very big database, around 15 million in size, and the dump file
is around 12 GB.
While importing this dump into the database I have noticed that query
response time is initially very slow, but it does improve with time.
Any suggestions to improve performance after the dump is imported?
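A common cause of slow queries right after a restore is that the planner has no statistics yet, so it falls back on default estimates. A hedged sketch of the usual post-import steps (standard PostgreSQL commands; the table name result_orig is taken from earlier in this thread and stands in for whatever your hot table is):

```
-- Refresh planner statistics (and reclaim space) after the load:
VACUUM ANALYZE;

-- Optionally pre-warm a hot table into the OS/buffer cache by
-- scanning it once:
SELECT count(*) FROM result_orig;
```

Performance also tends to improve on its own as repeated queries pull the working set into shared_buffers and the OS page cache, which matches the "improves with time" behaviour you describe.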