On Wed, 2020-02-05 at 12:03 -0500, Arya F wrote:
> I'm looking to write about 1100 rows per second to tables up to 100 million
> rows. I'm trying to come up with a design where I can do all the writes to a
> database with no indexes. With indexes, the write performance slows down
> dramatically after the table gets bigger than 30 million rows.
> 
> I was thinking of having a server dedicated for all the writes and have 
> another server for reads
> that has indexes and use logical replication to update the read only server.
> 
> Would that work? Or any recommendations how I can achieve good performance 
> for a lot of writes?

Logical replication wouldn't make a difference: with many indexes, replay of the
inserts on the subscriber would be just as slow, and replication would lag further
and further behind.

No matter what you do, there is no magic way to have your tables indexed and
get fast inserts at the same time.

One idea I can come up with is a table that is partitioned by a column that
appears in a selective search condition, but with no indexes on the table, so
that you always get away with a sequential scan of a single partition.
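As a rough sketch of that idea (table and column names are hypothetical; the
partition count and key would have to match your actual workload):

```sql
-- Hypothetical table, hash-partitioned on the column that appears in the
-- selective search condition (here: device_id).  No indexes are created,
-- so inserts stay fast.
CREATE TABLE measurements (
    device_id   integer     NOT NULL,
    recorded_at timestamptz NOT NULL,
    payload     jsonb
) PARTITION BY HASH (device_id);

-- Create the partitions; choose the modulus so that a single partition
-- stays small enough for a fast sequential scan.
CREATE TABLE measurements_p0 PARTITION OF measurements
    FOR VALUES WITH (MODULUS 64, REMAINDER 0);
CREATE TABLE measurements_p1 PARTITION OF measurements
    FOR VALUES WITH (MODULUS 64, REMAINDER 1);
-- ... and so on for the remaining 62 partitions ...

-- A query with a selective condition on the partition key is pruned to a
-- single partition, which is then scanned sequentially:
-- SELECT * FROM measurements WHERE device_id = 42;
```

With 100 million rows spread over 64 partitions, each sequential scan would
cover roughly 1.5 million rows, which may be acceptable depending on row size
and how selective the condition is.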

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com
