On Thu, Jan 7, 2016 at 12:32 AM, Sachin Srivastava <ssr.teleat...@gmail.com>
wrote:

> Dear David,
>
> Q: RAM holds data that is recently accessed - how much of that will you
> have?
>
> Ans: Kindly confirm, regarding your question “RAM holds data that is
> recently accessed”: how do we figure out how much data we will have? Does
> it depend on the total WAL files (I have set "checkpoint_segments" to 32),
> or am I thinking about this wrong? Please clarify this for me.
>
> Right now we have 10 GB of RAM for the first database server and 3 GB of
> RAM for the other database server.
>
>
Using WAL to measure your active dataset is not going to work.  WAL
activity occurs when you WRITE data, while in many cases the data in RAM
was written to the WAL a long time ago.
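The distinction can be sketched with a toy model (my illustration, not anything from PostgreSQL itself): WAL volume tracks writes, while a buffer cache keeps whatever is *accessed* most recently, so pages written long ago can stay hot forever without generating any new WAL.

```python
from collections import OrderedDict

class LRUCache:
    """Toy buffer cache: evicts the least recently *accessed* page."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()

    def access(self, page):
        if page in self.pages:
            self.pages.move_to_end(page)  # a read keeps a page hot
        else:
            self.pages[page] = True
            if len(self.pages) > self.capacity:
                self.pages.popitem(last=False)  # evict oldest access

cache = LRUCache(capacity=3)
wal_writes = 0

# Write pages 1-3 once, "long ago"; each write generates WAL.
for page in (1, 2, 3):
    cache.access(page)
    wal_writes += 1

# Afterwards the workload only READS pages 1 and 2 -- no WAL at all.
for _ in range(100):
    cache.access(1)
    cache.access(2)

# One new write evicts page 3 (least recently accessed), not 1 or 2.
cache.access(4)
wal_writes += 1

print(sorted(cache.pages))  # [1, 2, 4]
print(wal_writes)           # 4 -- WAL count says nothing about what's hot
```

The heavily read pages survive in the cache even though their WAL activity is ancient, which is why WAL or checkpoint_segments cannot tell you the size of your working set.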


>
> Q: Cores help service concurrent requests - how many of those will you
> have?  How fast will they complete?
>
> Ans: It means that if we have more cores then we can do our work faster.
> For example, from 9.3 onwards pg_dump can share its load among separate
> threads if the machine has multiple cores.
>
> So, if possible for us, more cores should be available on the database
> server for better performance; please clarify the benefit of more cores
> to me.
>
> Right now we have 1 core for the first database server and 2 cores for
> the other database server.
>

PostgreSQL is process-oriented and presently uses only a single process to
service a single connection.  Application software can open up multiple
connections, which is what pg_dump does.  More typically you'd have
something like a web server where all of the incoming requests are funneled
through a connection pool, which opens a number of connections to the
database and then shares them among those requests.
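As a rough illustration of that pooling pattern (a minimal sketch of my own; a real deployment would use something like pgbouncer or an application-side pool, and `fake_connect` stands in for a real driver's connect call):

```python
import queue
import threading

class ConnectionPool:
    """Minimal pool: a fixed set of connections shared by many requests."""
    def __init__(self, connect, size):
        self._idle = queue.Queue()
        for _ in range(size):
            self._idle.put(connect())  # open the connections up front

    def acquire(self):
        return self._idle.get()        # blocks until a connection is free

    def release(self, conn):
        self._idle.put(conn)

# Hypothetical stand-in for a real driver's connect() -- counts how many
# actual database connections ever get opened.
counter = [0]
def fake_connect():
    counter[0] += 1
    return f"conn-{counter[0]}"

pool = ConnectionPool(fake_connect, size=2)
served = []
lock = threading.Lock()

def handle_request(n):
    conn = pool.acquire()              # each "web request" borrows a conn
    with lock:
        served.append((n, conn))
    pool.release(conn)                 # and returns it for the next request

threads = [threading.Thread(target=handle_request, args=(i,))
           for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(served), counter[0])  # 10 requests served over only 2 connections
```

Ten incoming requests are serviced by just two database connections, which is why the number of cores matters for the pool's connections (each is one backend process) rather than for the raw count of application requests.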

If you want advice you are going to have to give considerably more detail
of your application and database usage patterns than you have.

David J.
