2011/9/11 pasman pasmański <pasma...@gmail.com>

> For a 10 TB table and 3 hours, the disks should have a transfer rate of
> about 1 GB/s (seqscan).
>
>

I have 6 Gb/s disk drives, so it should not be too far off; maybe 5 hours for
a seqscan.
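
Just to spell out the arithmetic behind those figures, here is a small Python
sketch; the per-drive rate is my assumption, and note that 6 Gb/s is the
SATA/SAS link speed rather than what a single spindle sustains:

    # Back-of-envelope seqscan arithmetic (a sketch; the per-drive
    # figure is an assumption, not a measurement).
    TB = 10**12                    # decimal terabytes, as drive vendors count

    table_size = 10 * TB           # ~10 TB table
    target_time = 3 * 3600         # 3 hours, in seconds
    print(f"required rate: {table_size / target_time / 1e9:.2f} GB/s")  # ~0.93

    # 6 Gb/s is the link speed (~600 MB/s after 8b/10b encoding); a single
    # 7200 RPM spindle typically sustains on the order of 130 MB/s.
    assumed_per_drive = 130e6      # assumed ~130 MB/s sustained per drive
    print(f"one drive alone: {table_size / assumed_per_drive / 3600:.1f} h")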

> 2011/9/11, Scott Marlowe <scott.marl...@gmail.com>:
> > On Sun, Sep 11, 2011 at 6:35 AM, Igor Chudov <ichu...@gmail.com> wrote:
> >> I have a server with about 18 TB of storage and 48 GB of RAM, and 12
> >> CPU cores.
> >
> > One or two fast cores are plenty for what you're doing, but the drive
> > array and how it's configured are very important.  There's a huge
> > difference between 10 2 TB 7200 RPM SATA drives in a software RAID-5 and
> > 36 500 GB 15k RPM SAS drives in a RAID-10 (SW or HW would both be OK for
> > a data warehouse).
> >
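
Making that concrete with the same kind of rough arithmetic (the per-drive
rates below are assumptions, and RAID/controller overhead is ignored
entirely):

    # Rough aggregate sequential-read estimates for the two example arrays.
    # Per-drive rates are assumptions; RAID and controller overhead ignored.
    arrays = {
        "10 x 2TB 7200RPM SATA, RAID-5":  (10, 130e6),  # ~130 MB/s assumed
        "36 x 500GB 15kRPM SAS, RAID-10": (36, 180e6),  # ~180 MB/s assumed
    }
    table_size = 10e12  # 10 TB

    for name, (drives, per_drive) in arrays.items():
        rate = drives * per_drive
        print(f"{name}: ~{rate / 1e9:.1f} GB/s ideal aggregate, "
              f"~{table_size / rate / 3600:.1f} h to scan 10 TB")
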
> >> I do not know much about Postgres, but I am very eager to learn and
> >> see if I can use it for my purposes more effectively than MySQL.
> >> I cannot shell out $47,000 per CPU for Oracle for this project.
> >> To be more specific, the batch queries that I would do, I hope,
> >
> > Hopefully, if need be, you can spend some small percentage of that on a
> > fast IO subsystem.
> >
> >> would either use small JOINS of a small dataset to a large dataset, or
> >> just SELECTS from one big table.
> >> So... Can Postgres support a 5-10 TB database with the use pattern
> >> stated above?
> >
> > I use it on a ~3TB DB and it works well enough.  Fast IO is the key
> > here.  Lots of drives in RAID-10 or HW RAID-6 if you don't do a lot of
> > random writing.
> >
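
One way to see what an existing array actually delivers to a seqscan is to
time a full scan of a large table and divide by its on-disk size. A sketch
with psycopg2 (the DSN and table name are placeholders, and it assumes the
table is much larger than RAM so caching doesn't dominate):

    # Sketch: measure effective sequential-scan throughput of one table.
    # The connection string and table name are placeholders.
    import time
    import psycopg2

    conn = psycopg2.connect("dbname=mydb")          # placeholder DSN
    cur = conn.cursor()

    cur.execute("SELECT pg_relation_size(%s)", ("big_table",))
    size_bytes = cur.fetchone()[0]

    start = time.time()
    cur.execute("SELECT count(*) FROM big_table")   # typically a full seqscan
    cur.fetchone()
    elapsed = time.time() - start

    print(f"{size_bytes / 1e9:.1f} GB in {elapsed:.0f} s = "
          f"{size_bytes / elapsed / 1e6:.0f} MB/s effective")
    cur.close()
    conn.close()
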
>
>
> --
> ------------
> pasman
>
