On 03/28/2016 02:55 PM, Mat Arye wrote:
This will run on EC2 (or another cloud service) on SSDs.
Right now it runs on an m4.4xlarge with 64 GiB of RAM.
I'm willing to pay for beefier instances if that means better performance.


On Mon, Mar 28, 2016 at 4:49 PM, Rob Sargent <robjsarg...@gmail.com> wrote:



    On 03/28/2016 02:41 PM, Mat Arye wrote:

        Hi All,

        I am writing a program that needs to handle a
        time-series, insert-mostly workload. I need to make the
        system scalable to many thousands of inserts/s. One of
        the techniques I plan to use is time-based table
        partitioning (a rough sketch follows below), and I am
        trying to figure out how large to make each time-based
        partition.

        Does anybody have any hints on optimal table sizes,
        either in terms of rows or in terms of size on disk?
        Are there any rules of thumb for table size relative to
        the amount of memory on the machine? Is the size of the
        index more important than the size of the table (if
        queries mostly use indexes)?

        Basically, I am asking for pointers about how to think about
        this problem and any experiences people have had.

        Thanks,
        Mat

        P.S. I am aware of the limits listed here:
        http://www.postgresql.org/about/. I am asking about
        practical size limits for performance reasons.
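
        To make "time-based table partitioning" a bit more
        concrete, here is a minimal sketch of the
        inheritance-plus-trigger approach available in current
        (9.x) PostgreSQL; the table name, columns, and the
        one-day partition width are placeholders for
        illustration, not a recommendation:

        -- Parent table; the child tables hold the actual rows.
        CREATE TABLE metrics (
            ts        timestamptz NOT NULL,
            device_id bigint      NOT NULL,
            value     double precision
        );

        -- One child per time range, with a CHECK constraint so
        -- constraint exclusion can skip irrelevant children.
        CREATE TABLE metrics_2016_03_28 (
            CHECK (ts >= '2016-03-28' AND ts < '2016-03-29')
        ) INHERITS (metrics);

        -- Index each child, not just the parent.
        CREATE INDEX ON metrics_2016_03_28 (ts DESC);

        -- Route inserts on the parent to the right child.
        CREATE OR REPLACE FUNCTION metrics_insert_trigger()
        RETURNS trigger AS $$
        BEGIN
            IF NEW.ts >= '2016-03-28' AND NEW.ts < '2016-03-29' THEN
                INSERT INTO metrics_2016_03_28 VALUES (NEW.*);
            ELSE
                RAISE EXCEPTION 'no partition for ts %', NEW.ts;
            END IF;
            RETURN NULL;
        END;
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER metrics_insert
            BEFORE INSERT ON metrics
            FOR EACH ROW EXECUTE PROCEDURE metrics_insert_trigger();

        For very high insert rates it is common to bypass the
        trigger and COPY straight into the current child table;
        with constraint_exclusion = partition the planner can
        prune children whose CHECK constraints rule them out.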

    Your current hardware, or hardware budget, might play into the answer.





Those who supply real answers on this list, um, er, discourage top-posting. (Not my fave, but there you go.)

