Re: Table performance with millions of rows (partitioning)

2017-12-27 Thread pinker
No, it's unfortunately not possible. The documentation says in the Caveats section: "Constraint exclusion only works when the query's WHERE clause contains constants (or externally supplied parameters). For example, a comparison against a non-immutable function such as CURRENT_TIMESTAMP cannot be optimized,
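A minimal sketch of the caveat pinker quotes (the table and column names here are hypothetical, not from the thread):

```sql
-- Prunes child tables: the planner sees a constant at plan time.
SELECT * FROM measurements
WHERE  logdate >= DATE '2017-12-01';

-- Is NOT pruned by constraint exclusion: CURRENT_TIMESTAMP is only
-- known at run time, so the planner must keep every child table.
SELECT * FROM measurements
WHERE  logdate >= CURRENT_TIMESTAMP - INTERVAL '30 days';
```

A common workaround is to compute the cutoff timestamp in the application (or a plpgsql function) and pass it in as a literal or parameter.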

Re: Table performance with millions of rows (partitioning)

2017-12-27 Thread Robert Blayzor
On Dec 27, 2017, at 8:20 PM, Justin Pryzby wrote: > > That's one of the major use cases for partitioning (DROP rather than DELETE > and > thus avoiding any following vacuum+analyze). > https://www.postgresql.org/docs/10/static/ddl-partitioning.html#DDL-PARTITIONING-OVERVIEW That’s the plan to

Re: Table performance with millions of rows (partitioning)

2017-12-27 Thread Justin Pryzby
On Wed, Dec 27, 2017 at 07:54:23PM -0500, Robert Blayzor wrote: > Question on large tables… > > When should one consider table partitioning vs. just stuffing 10 million rows > into one table? IMO, whenever constraint exclusion, DROP vs DELETE, or seq scan on individual children justify the minor
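A minimal sketch of the DROP-vs-DELETE pattern Justin describes, using PostgreSQL 10 declarative range partitioning (table and column names are hypothetical):

```sql
-- Parent table partitioned by call time.
CREATE TABLE cdr (
    call_time  timestamptz NOT NULL,
    caller     text
) PARTITION BY RANGE (call_time);

-- One child per month.
CREATE TABLE cdr_2017_12 PARTITION OF cdr
    FOR VALUES FROM ('2017-12-01') TO ('2018-01-01');

-- Pruning a month is a cheap catalog operation that leaves no dead
-- tuples behind, so no follow-up VACUUM/ANALYZE is needed:
DROP TABLE cdr_2017_12;

-- ...versus DELETE, which touches every affected row and leaves dead
-- tuples for a later vacuum pass:
-- DELETE FROM cdr WHERE call_time < '2018-01-01';
```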

Table performance with millions of rows

2017-12-27 Thread Robert Blayzor
Question on large tables… When should one consider table partitioning vs. just stuffing 10 million rows into one table? I currently have CDRs that are injected into a table at a rate of over 100,000 a day, which is large. At some point I'll want to prune these records out, so being able t

Re: Batch insert heavily affecting query performance.

2017-12-27 Thread David Miller
Jean, It is very likely you are running out of IOPS with that size of server. We have several Postgres databases running at AWS. We consistently run out of IOPS on our development servers due to the types of queries and the sizing of our development databases. I would check the AWS monitoring graphs to

RE: Batch insert heavily affecting query performance.

2017-12-27 Thread Mike Sofen
In my experience, that 77ms will stay quite constant even if your db grew to > 1TB. Postgres IS amazing. BTW, for a db, you should always have provisioned IOPS or else your performance can vary wildly, since the SSDs are shared. Re Lambda: another team is working on a new web app using Lam

Re: Batch insert heavily affecting query performance.

2017-12-27 Thread Alvaro Hernandez
On 27/12/17 18:02, Jean Baro wrote: Sorry guys, The performance problem is not caused by PG. 'Index Scan using idx_user_country on public.old_card  (cost=0.57..1854.66 rows=460 width=922) (actual time=3.442..76.606 rows=200 loops=1)' '  Output: id, user_id, user_country, user_channel, user_

Re: Batch insert heavily affecting query performance.

2017-12-27 Thread Jean Baro
Sorry guys, The performance problem is not caused by PG. 'Index Scan using idx_user_country on public.old_card (cost=0.57..1854.66 rows=460 width=922) (actual time=3.442..76.606 rows=200 loops=1)' ' Output: id, user_id, user_country, user_channel, user_role, created_by_system_key, created_by_us

Re: Batch insert heavily affecting query performance.

2017-12-27 Thread Jean Baro
General purpose, 500GB, but we are planning to increase it to 1TB before going into production. 500GB → 1,500 IOPS (with bursts to 3,000 IOPS); 1TB → 3,000 IOPS. Em 27 de dez de 2017 14:23, "Jeff Janes" escreveu: > On Sun, Dec 24, 2017 at 11:51 AM, Jean Baro wrote: > >> Hi there, >> >> We are testing

RE: Batch insert heavily affecting query performance.

2017-12-27 Thread Jean Baro
Thanks Mike, We are using the standard RDS instance m4.large; it's not Aurora, which is a much more powerful server (according to AWS). Yes, we could install it on EC2, but it would take some extra effort on our side, though it could be a worthwhile investment if it helps us find the bottle ne

Re: Batch insert heavily affecting query performance.

2017-12-27 Thread Jean Baro
Thanks Jeremy, We will provide a more complete EXPLAIN as other people have suggested. I am glad we might end up with a much better performance (currently each query takes around 2 seconds!). Cheers Em 27 de dez de 2017 14:02, "Jeremy Finzel" escreveu: > The EXPLAIN > > 'Index Scan using i

Re: Batch insert heavily affecting query performance.

2017-12-27 Thread Jean Baro
Thanks Rick, We are now partitioning the DB (one table) into 100 sets of data. As soon as we finish this new experiment we will provide a better EXPLAIN as you suggested. :) Em 27 de dez de 2017 13:38, "Rick Otten" escreveu: On Wed, Dec 27, 2017 at 10:13 AM, Jean Baro wrote: > Hello, > > W

Re: Batch insert heavily affecting query performance.

2017-12-27 Thread Jeff Janes
On Sun, Dec 24, 2017 at 11:51 AM, Jean Baro wrote: > Hi there, > > We are testing a new application to try to find performance issues. > > AWS RDS m4.large 500GB storage (SSD) > Is that general purpose SSD, or provisioned IOPS SSD? If provisioned, what is the level of provisioning? Cheers, Je

Re: Batch insert heavily affecting query performance.

2017-12-27 Thread Jeremy Finzel
> > The EXPLAIN > > 'Index Scan using idx_user_country on card (cost=0.57..1854.66 rows=460 > width=922)' > ' Index Cond: (((user_id)::text = '4684'::text) AND (user_country = > 'BR'::bpchar))' > Show 3 runs of the full explain analyze plan on given condition so that we can also see cold vs warm
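The output Jeremy is asking for can be gathered roughly like this (the predicate values come from the plan quoted in the thread; adding BUFFERS is an extra suggestion, since buffer hit counts help distinguish the cold run from the warm ones):

```sql
-- Run this three times in a row and compare the timings;
-- "shared read" vs "shared hit" in the BUFFERS output shows
-- how much of the difference is disk vs cache.
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM   old_card
WHERE  user_id = '4684'
  AND  user_country = 'BR';
```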

RE: Batch insert heavily affecting query performance.

2017-12-27 Thread Mike Sofen
Hi Jean, I’ve used Postgres on a regular EC2 instance (an m4.xlarge), storing complex genomic data, hundreds of millions of rows in a table and “normal” queries that used an index returned in 50-100ms, depending on the query…so this isn’t a Postgres issue per se. Your table and index s

Re: Batch insert heavily affecting query performance.

2017-12-27 Thread Rick Otten
On Wed, Dec 27, 2017 at 10:13 AM, Jean Baro wrote: > Hello, > > We are still seeing queries (by UserID + UserCountry) taking over 2 > seconds, even when there is no batch insert going on at the same time. > > Each query returns from 100 to 200 messages, which would be a 400 KB > payload, which

Re: Batch insert heavily affecting query performance.

2017-12-27 Thread Jean Baro
Hello, We are still seeing queries (by UserID + UserCountry) taking over 2 seconds, even when there is no batch insert going on at the same time. Each query returns from 100 to 200 messages, which would be a 400 KB payload, which is super tiny. I don't know what else I can do with the limitati