Re: How many billion rows of data I can store in PostgreSQL RDS.

2019-02-22 Thread Adrian Klaver
On 2/22/19 4:46 AM, github kran wrote: On Fri, Feb 22, 2019 at 5:48 AM Samuel Teixeira Santos <arcano...@gmail.com> wrote: Just adding that in my case it's not Amazon RDS, it's a common server, if I can say that... Apologies, I missed mentioning that this is a

Re: How many billion rows of data I can store in PostgreSQL RDS.

2019-02-22 Thread github kran
On Fri, Feb 22, 2019 at 5:48 AM Samuel Teixeira Santos wrote: > Just adding that in my case it's not Amazon RDS, it's a common server, if I can say that... Apologies, I missed mentioning that this is a question for the PostgreSQL community. We are currently using PostgreSQL. ( Aurora

Re: How many billion rows of data I can store in PostgreSQL RDS.

2019-02-22 Thread Samuel Teixeira Santos
Just adding that in my case it's not Amazon RDS, it's a common server, if I can say that...

Re: How many billion rows of data I can store in PostgreSQL RDS.

2019-02-22 Thread Samuel Teixeira Santos
Hi all. Taking advantage of the topic, I would like to know the recommendations on how to upgrade to a newer Postgres version with that amount of data. Could anyone share their experience? Thanks in advance. Regards, Samuel
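For a dataset that size, one common low-downtime upgrade path is logical replication into a freshly built cluster running the new version (available since PostgreSQL 10). A minimal sketch, with placeholder host, database, and role names:

    -- On the old (source) cluster: publish every table.
    CREATE PUBLICATION upgrade_pub FOR ALL TABLES;

    -- On the new cluster, after restoring the schema (pg_dump --schema-only):
    -- the subscription performs the initial copy, then streams changes.
    CREATE SUBSCRIPTION upgrade_sub
        CONNECTION 'host=old-host dbname=mydb user=replicator'
        PUBLICATION upgrade_pub;

Once the subscriber has caught up, clients can be switched over with minimal downtime; in-place pg_upgrade with --link is the faster alternative when a maintenance window is acceptable.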

Re: How many billion rows of data I can store in PostgreSQL RDS.

2019-02-21 Thread Michael Paquier
On Thu, Feb 21, 2019 at 09:14:24PM -0800, Adrian Klaver wrote: > This would be a question for AWS RDS support. And this also depends a lot on your schema, your column alignment, and the level of bloat of your relations. -- Michael
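To put numbers on the schema-and-bloat point, the built-in size functions and the pgstattuple extension can report per-relation footprint and dead-tuple percentage; a quick sketch (the table name is a placeholder):

    -- Total on-disk footprint per table, including indexes and TOAST:
    SELECT relname,
           pg_size_pretty(pg_total_relation_size(oid)) AS total_size
    FROM pg_class
    WHERE relkind = 'r'
    ORDER BY pg_total_relation_size(oid) DESC
    LIMIT 10;

    -- Dead-tuple percentage for a single table:
    CREATE EXTENSION IF NOT EXISTS pgstattuple;
    SELECT dead_tuple_percent FROM pgstattuple('my_big_table');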

Re: How many billion rows of data I can store in PostgreSQL RDS.

2019-02-21 Thread Adrian Klaver
On 2/21/19 9:08 PM, github kran wrote: Hello Pgsql-General, We currently have around 6 TB of data and have plans to move some historic data, close to 1 TB, into RDS. The partitioned tables hold around 6 billion rows today, and we have plans to keep the data long
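At billions of rows, range partitioning by time keeps individual partitions and their indexes a manageable size and lets historic data be detached or dropped cheaply. A minimal sketch of declarative partitioning (PostgreSQL 10+), with hypothetical table and column names:

    -- Parent table, partitioned by event time:
    CREATE TABLE events (
        event_time timestamptz NOT NULL,
        payload    jsonb
    ) PARTITION BY RANGE (event_time);

    -- One partition per month; old months can be detached and archived:
    CREATE TABLE events_2019_02 PARTITION OF events
        FOR VALUES FROM ('2019-02-01') TO ('2019-03-01');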