Re: Partitioning options

2024-02-21 Thread Alec Lazarescu
Hi, Justin.

The linked example has self-contained DDL that creates the partitions (in
flat vs. composite mode for comparison) and then creates the FKs on each,
showing the marked speed difference for the same net number of partitions
(1,200 flat vs. 80x15 = 1,200 composite):
https://www.postgresql.org/message-id/CAE%2BE%3DSQacy6t_3XzCWnY1eiRcNWfz4pp02FER0N7mU_F%2Bo8G_Q%40mail.gmail.com
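For orientation, here is a minimal sketch of the two layouts (table and
column names are invented for illustration; the link above has the exact DDL):

    -- referenced table for the FK test
    CREATE TABLE ref_table (id int PRIMARY KEY);

    -- flat: 1,200 direct LIST partitions
    CREATE TABLE events_flat (tenant_id int NOT NULL, ref_id int)
        PARTITION BY LIST (tenant_id);
    CREATE TABLE events_flat_1 PARTITION OF events_flat FOR VALUES IN (1);
    -- ... repeated through events_flat_1200

    -- composite: 80 LIST partitions, each split into 15 HASH subpartitions
    CREATE TABLE events_comp (tenant_id int NOT NULL, ref_id int)
        PARTITION BY LIST (tenant_id);
    CREATE TABLE events_comp_1 PARTITION OF events_comp FOR VALUES IN (1)
        PARTITION BY HASH (ref_id);
    CREATE TABLE events_comp_1_1 PARTITION OF events_comp_1
        FOR VALUES WITH (MODULUS 15, REMAINDER 0);
    -- ... 15 HASH subpartitions per LIST partition, for 80 LIST partitions

    -- the step whose timing diverges between the two layouts:
    ALTER TABLE events_flat ADD FOREIGN KEY (ref_id) REFERENCES ref_table (id);
    ALTER TABLE events_comp ADD FOREIGN KEY (ref_id) REFERENCES ref_table (id);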

Alec

On Tue, Feb 20, 2024 at 11:59 AM Justin  wrote:
>
>
> On Sun, Feb 18, 2024 at 5:20 PM Alec Lazarescu  wrote:
>>
>> "Would probably look at a nested partitioning"
>>
>> I'm not the original poster, but I have a schema with nested
>> (composite) partitions and I do run into some significant
>> inefficiencies compared to flat partitions in various schema metadata
>> operations (queries to get the list of tables, creating foreign keys,
>> etc.) in tables with 1,000+ total partitions.
>>
>> One example: 
>> https://www.postgresql.org/message-id/CAE%2BE%3DSQacy6t_3XzCWnY1eiRcNWfz4pp02FER0N7mU_F%2Bo8G_Q%40mail.gmail.com
>>
>> Alec
>>
>>
>
> Hi Alec,
>
> I would need to see the DDL of the partitions and the queries accessing
> these partitions to have an opinion.
>
> Thank you
> Justin




Re: Partitioning options

2024-02-20 Thread Justin
On Sun, Feb 18, 2024 at 5:20 PM Alec Lazarescu  wrote:

> "Would probably look at a nested partitioning"
>
> I'm not the original poster, but I have a schema with nested
> (composite) partitions and I do run into some significant
> inefficiencies compared to flat partitions in various schema metadata
> operations (queries to get the list of tables, creating foreign keys,
> etc.) in tables with 1,000+ total partitions.
>
> One example:
> https://www.postgresql.org/message-id/CAE%2BE%3DSQacy6t_3XzCWnY1eiRcNWfz4pp02FER0N7mU_F%2Bo8G_Q%40mail.gmail.com
>
> Alec
>
>
>
Hi Alec,

I would need to see the DDL of the partitions and the queries accessing these
partitions to have an opinion.

Thank you
Justin


Re: Partitioning options

2024-02-18 Thread Alec Lazarescu
"Would probably look at a nested partitioning"

I'm not the original poster, but I have a schema with nested
(composite) partitions and I do run into some significant
inefficiencies compared to flat partitions in various schema metadata
operations (queries to get the list of tables, creating foreign keys,
etc.) in tables with 1,000+ total partitions.

One example: 
https://www.postgresql.org/message-id/CAE%2BE%3DSQacy6t_3XzCWnY1eiRcNWfz4pp02FER0N7mU_F%2Bo8G_Q%40mail.gmail.com
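To make the metadata point concrete: listing the partitions of a flat layout
is a single direct catalog lookup, while a nested layout needs a walk of the
whole tree. A sketch, with 'parent_tbl' as a placeholder name:

    -- direct children only: sufficient for flat partitioning
    SELECT inhrelid::regclass AS partition
      FROM pg_inherits
     WHERE inhparent = 'parent_tbl'::regclass;

    -- full tree, needed once partitions are nested (PostgreSQL 12+)
    SELECT relid, level, isleaf
      FROM pg_partition_tree('parent_tbl'::regclass);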

Alec

On Sun, Feb 11, 2024 at 8:25 AM Justin  wrote:
>
> Hi Marc,
>
> Nested partitioning still allows for simple data deletion by dropping the 
> table that falls in that date range.
>
> You're probably thinking of partitioning by multicolumn rules, which is very
> complex to set up.
>
> On Fri, Feb 9, 2024, 10:29 AM Marc Millas  wrote:
>>
>>
>>
>>
>> On Thu, Feb 8, 2024 at 10:25 PM Justin  wrote:
>>>
>>> Hi Sud,
>>>
>>> I would not look at HASH partitioning, as it is very expensive to increase
>>> or decrease the number of partitions.
>>>
>>> I would probably look at nested partitioning: by customer ID, using a range
>>> or list of IDs, then by transaction date. It's easy to add partitions and
>>> balance the partition segments.
>>
>>
>> I would not do that because, when getting rid of obsolete data, you must
>> then delete a huge number of records and vacuum each partition.
>> If you partition by date, you greatly ease the cleanup by simply dropping
>> obsolete partitions, which is quite speedy (no delete, no vacuum, no index
>> updates, ...).
>> Marc
>>
>>>
>>> Keep in mind that SELECT queries against the partitioned table must use the
>>> partitioning KEY in the WHERE clause, or performance will suffer.
>>>
>>> I suggest doing a query analysis before deploying partitioning, to confirm
>>> that the queries' WHERE clauses match the planned partition rule. At least
>>> 80% of the executed queries should match the partition rule; if they don't,
>>> either don't deploy partitioning, or change all the queries in the
>>> application to match the partition rule.
>>>
>>>
>>> On Thu, Feb 8, 2024 at 3:51 PM Greg Sabino Mullane  
>>> wrote:
>
> Out of curiosity: as the OP mentioned that there will be joins and also
> filters on the customer_id column, why don't you think that subpartitioning
> by customer_id would be a good option? I understand list subpartitioning may
> not be an option, considering that new customer_ids get added slowly over
> time (and a default list may not be allowed), and the OP also mentioned
> there is a skewed distribution of data for the customer_id column. However,
> what is the problem if the OP opts for HASH subpartitioning on customer_id
> in this situation?


 It doesn't really gain you much, given you would be hashing it, the 
 customers are unevenly distributed, and OP talked about filtering on the 
 customer_id column. A hash partition would just be a lot more work and 
 complexity for us humans and for Postgres. Partitioning for the sake of 
 partitioning is not a good thing. Yes, smaller tables are better, but they 
 have to be smaller targeted tables.

 sud wrote:

> 130GB of storage space as we verified using the "pg_relation_size" 
> function, for a sample data set.


 You might also want to closely examine your schema. At that scale, every 
 byte saved per row can add up.

 Cheers,
 Greg





Re: Partitioning options

2024-02-11 Thread Justin
Hi Marc,

Nested partitioning still allows for simple data deletion by dropping the
table that falls in that date range.

You're probably thinking of partitioning by multicolumn rules, which is very
complex to set up.

On Fri, Feb 9, 2024, 10:29 AM Marc Millas  wrote:

>
>
>
> On Thu, Feb 8, 2024 at 10:25 PM Justin  wrote:
>
>> Hi Sud,
>>
>> I would not look at HASH partitioning, as it is very expensive to increase
>> or decrease the number of partitions.
>>
>> I would probably look at nested partitioning: by customer ID, using a range
>> or list of IDs, then by transaction date. It's easy to add partitions and
>> balance the partition segments.
>>
>
> I would not do that because, when getting rid of obsolete data, you must
> then delete a huge number of records and vacuum each partition.
> If you partition by date, you greatly ease the cleanup by simply dropping
> obsolete partitions, which is quite speedy (no delete, no vacuum, no index
> updates, ...).
> Marc
>
>
>> Keep in mind that SELECT queries against the partitioned table must use the
>> partitioning KEY in the WHERE clause, or performance will suffer.
>>
>> I suggest doing a query analysis before deploying partitioning, to confirm
>> that the queries' WHERE clauses match the planned partition rule. At least
>> 80% of the executed queries should match the partition rule; if they don't,
>> either don't deploy partitioning, or change all the queries in the
>> application to match the partition rule.
>>
>>
>> On Thu, Feb 8, 2024 at 3:51 PM Greg Sabino Mullane 
>> wrote:
>>
>>>> Out of curiosity: as the OP mentioned that there will be joins and also
>>>> filters on the customer_id column, why don't you think that subpartitioning
>>>> by customer_id would be a good option? I understand list subpartitioning may
>>>> not be an option, considering that new customer_ids get added slowly over
>>>> time (and a default list may not be allowed), and the OP also mentioned
>>>> there is a skewed distribution of data for the customer_id column. However,
>>>> what is the problem if the OP opts for HASH subpartitioning on customer_id
>>>> in this situation?

>>>
>>> It doesn't really gain you much, given you would be hashing it, the
>>> customers are unevenly distributed, and OP talked about filtering on the
>>> customer_id column. A hash partition would just be a lot more work and
>>> complexity for us humans and for Postgres. Partitioning for the sake of
>>> partitioning is not a good thing. Yes, smaller tables are better, but they
>>> have to be smaller targeted tables.
>>>
>>> sud wrote:
>>>
>>> 130GB of storage space as we verified using the "pg_relation_size"
 function, for a sample data set.
>>>
>>>
>>> You might also want to closely examine your schema. At that scale, every
>>> byte saved per row can add up.
>>>
>>> Cheers,
>>> Greg
>>>
>>>


Re: Partitioning options

2024-02-09 Thread Marc Millas
On Thu, Feb 8, 2024 at 10:25 PM Justin  wrote:

> Hi Sud,
>
> I would not look at HASH partitioning, as it is very expensive to increase
> or decrease the number of partitions.
>
> I would probably look at nested partitioning: by customer ID, using a range
> or list of IDs, then by transaction date. It's easy to add partitions and
> balance the partition segments.
>

I would not do that because, when getting rid of obsolete data, you must then
delete a huge number of records and vacuum each partition.
If you partition by date, you greatly ease the cleanup by simply dropping
obsolete partitions, which is quite speedy (no delete, no vacuum, no index
updates, ...).
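For example, purging one day then becomes a single, near-instant DDL command
(the partition name here is hypothetical):

    -- detach first if you want to archive the data, otherwise just drop
    ALTER TABLE transactions DETACH PARTITION transactions_2023_09_01;
    DROP TABLE transactions_2023_09_01;

instead of deleting hundreds of millions of rows and vacuuming afterwards.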
Marc


> Keep in mind that SELECT queries against the partitioned table must use the
> partitioning KEY in the WHERE clause, or performance will suffer.
>
> I suggest doing a query analysis before deploying partitioning, to confirm
> that the queries' WHERE clauses match the planned partition rule. At least
> 80% of the executed queries should match the partition rule; if they don't,
> either don't deploy partitioning, or change all the queries in the
> application to match the partition rule.
>
>
> On Thu, Feb 8, 2024 at 3:51 PM Greg Sabino Mullane 
> wrote:
>
>>> Out of curiosity: as the OP mentioned that there will be joins and also
>>> filters on the customer_id column, why don't you think that subpartitioning
>>> by customer_id would be a good option? I understand list subpartitioning may
>>> not be an option, considering that new customer_ids get added slowly over
>>> time (and a default list may not be allowed), and the OP also mentioned
>>> there is a skewed distribution of data for the customer_id column. However,
>>> what is the problem if the OP opts for HASH subpartitioning on customer_id
>>> in this situation?
>>>
>>
>> It doesn't really gain you much, given you would be hashing it, the
>> customers are unevenly distributed, and OP talked about filtering on the
>> customer_id column. A hash partition would just be a lot more work and
>> complexity for us humans and for Postgres. Partitioning for the sake of
>> partitioning is not a good thing. Yes, smaller tables are better, but they
>> have to be smaller targeted tables.
>>
>> sud wrote:
>>
>> 130GB of storage space as we verified using the "pg_relation_size"
>>> function, for a sample data set.
>>
>>
>> You might also want to closely examine your schema. At that scale, every
>> byte saved per row can add up.
>>
>> Cheers,
>> Greg
>>
>>


Re: Partitioning options

2024-02-08 Thread Justin
Hi Sud,

I would not look at HASH partitioning, as it is very expensive to increase or
decrease the number of partitions.

I would probably look at nested partitioning: by customer ID, using a range or
list of IDs, then by transaction date. It's easy to add partitions and balance
the partition segments.
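A rough sketch of that layout (all names, types, and ranges here are
illustrative, not a tested design):

    CREATE TABLE transactions (
        customer_id      bigint  NOT NULL,
        transaction_date date    NOT NULL,
        amount           numeric
    ) PARTITION BY RANGE (customer_id);

    -- one partition per customer-ID band, each sub-partitioned by date
    CREATE TABLE transactions_c1 PARTITION OF transactions
        FOR VALUES FROM (1) TO (1000000)
        PARTITION BY RANGE (transaction_date);

    CREATE TABLE transactions_c1_2024_02 PARTITION OF transactions_c1
        FOR VALUES FROM ('2024-02-01') TO ('2024-03-01');

An uneven band can later be split by detaching it and adding narrower ranges,
which is far cheaper than changing a HASH modulus, where every partition must
be rebuilt.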

Keep in mind that SELECT queries against the partitioned table must use the
partitioning KEY in the WHERE clause, or performance will suffer.
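For example, against the sketch above this query prunes to a single leaf
partition, while a filter on amount alone would scan every partition:

    EXPLAIN SELECT *
      FROM transactions
     WHERE customer_id = 42
       AND transaction_date = DATE '2024-02-15';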

I suggest doing a query analysis before deploying partitioning, to confirm
that the queries' WHERE clauses match the planned partition rule. At least 80%
of the executed queries should match the partition rule; if they don't, either
don't deploy partitioning, or change all the queries in the application to
match the partition rule.


On Thu, Feb 8, 2024 at 3:51 PM Greg Sabino Mullane 
wrote:

>> Out of curiosity: as the OP mentioned that there will be joins and also
>> filters on the customer_id column, why don't you think that subpartitioning
>> by customer_id would be a good option? I understand list subpartitioning may
>> not be an option, considering that new customer_ids get added slowly over
>> time (and a default list may not be allowed), and the OP also mentioned
>> there is a skewed distribution of data for the customer_id column. However,
>> what is the problem if the OP opts for HASH subpartitioning on customer_id
>> in this situation?
>>
>
> It doesn't really gain you much, given you would be hashing it, the
> customers are unevenly distributed, and OP talked about filtering on the
> customer_id column. A hash partition would just be a lot more work and
> complexity for us humans and for Postgres. Partitioning for the sake of
> partitioning is not a good thing. Yes, smaller tables are better, but they
> have to be smaller targeted tables.
>
> sud wrote:
>
> 130GB of storage space as we verified using the "pg_relation_size"
>> function, for a sample data set.
>
>
> You might also want to closely examine your schema. At that scale, every
> byte saved per row can add up.
>
> Cheers,
> Greg
>
>


Re: Partitioning options

2024-02-08 Thread Greg Sabino Mullane
>
> Out of curiosity: as the OP mentioned that there will be joins and also
> filters on the customer_id column, why don't you think that subpartitioning
> by customer_id would be a good option? I understand list subpartitioning may
> not be an option, considering that new customer_ids get added slowly over
> time (and a default list may not be allowed), and the OP also mentioned
> there is a skewed distribution of data for the customer_id column. However,
> what is the problem if the OP opts for HASH subpartitioning on customer_id
> in this situation?
>

It doesn't really gain you much, given you would be hashing it, the
customers are unevenly distributed, and OP talked about filtering on the
customer_id column. A hash partition would just be a lot more work and
complexity for us humans and for Postgres. Partitioning for the sake of
partitioning is not a good thing. Yes, smaller tables are better, but they
have to be smaller targeted tables.

sud wrote:

130GB of storage space as we verified using the "pg_relation_size"
> function, for a sample data set.


You might also want to closely examine your schema. At that scale, every
byte saved per row can add up.

Cheers,
Greg


Re: Partitioning options

2024-02-08 Thread Jim Nasby

On 2/8/24 1:43 PM, veem v wrote:
> On Thu, 8 Feb 2024 at 20:08, Greg Sabino Mullane wrote:
>
>>> Should we go for simple daily range partitioning on the
>>> transaction_date column?
>>
>> This one gets my vote. That and some good indexes.

> Hello Greg,
>
> Out of curiosity: as the OP mentioned that there will be joins and also
> filters on the customer_id column, why don't you think that subpartitioning
> by customer_id would be a good option? I understand list subpartitioning may
> not be an option, considering that new customer_ids get added slowly over
> time (and a default list may not be allowed), and the OP also mentioned
> there is a skewed distribution of data for the customer_id column. However,
> what is the problem if the OP opts for HASH subpartitioning on customer_id
> in this situation?
>
> Is it because the number of partitions will be higher, i.e.:
>
> If you go with simple range partitioning, for 5 months you will have ~150
> daily range partitions, and each index adds that many more relations; e.g.,
> with 10 indexes the total will be ~150 table partitions + (10*150) index
> partitions = 1,650 relations.
>
> If the OP goes for range-hash, and the hash count will mostly have to be
> 2^N, say 8 hash sub-partitions, then the total will be (8*150) table
> partitions + (8*150*10) index partitions = ~13,200 relations.
>
> Though there are no theoretical limits to the number of partitions in
> Postgres, there have been some serious issues reported in the past with
> higher numbers of table partitions. One such report is below. Is this the
> reason?
>
> https://www.kylehailey.com/post/postgres-partition-pains-lockmanager-waits



The issue with partitioning by customer_id is that it won't do much (if 
anything) to improve data locality. When partitioning by date, you can 
at least benefit from partition elimination *IF* your most frequent 
queries limit the number of days that the query will look at. Per the 
OP, all queries will include transaction date. Note that does NOT 
actually mean the number of days/partitions will be limited (i.e., WHERE 
date > today - 150 will hit all the partitions), but if we assume that 
the majority of queries will limit themselves to the past few days then 
partitioning by date should greatly increase data locality.
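For example, with daily partitions (the table name is hypothetical):

    -- prunes to roughly the last 3 daily partitions
    SELECT count(*) FROM transactions
     WHERE transaction_date >= CURRENT_DATE - 3;

    -- date-constrained, yet still touches all ~150 partitions
    SELECT count(*) FROM transactions
     WHERE transaction_date > CURRENT_DATE - 150;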


Also, when it comes to customer partitioning... really what you probably 
want there isn't partitioning but sharding.

--
Jim Nasby, Data Architect, Austin TX





Re: Partitioning options

2024-02-08 Thread veem v
On Thu, 8 Feb 2024 at 20:08, Greg Sabino Mullane  wrote:

> On Thu, Feb 8, 2024 at 12:42 AM sud  wrote:
> ...
>
>> The key transaction table is going to have ~450 Million transactions per
>> day and the data querying/filtering will always happen based on the
>> "transaction date" column.
>>
> ...
>
>> Should we go for simple daily range partitioning on the transaction_date
>> column?
>>
>
> This one gets my vote. That and some good indexes.
>
> Cheers,
> Greg
>
>
Hello Greg,

Out of curiosity: as the OP mentioned that there will be joins and also
filters on the customer_id column, why don't you think that subpartitioning by
customer_id would be a good option? I understand list subpartitioning may not
be an option, considering that new customer_ids get added slowly over time
(and a default list may not be allowed), and the OP also mentioned there is a
skewed distribution of data for the customer_id column. However, what is the
problem if the OP opts for HASH subpartitioning on customer_id in this
situation?

Is it because the number of partitions will be higher, i.e.:

If you go with simple range partitioning, for 5 months you will have ~150
daily range partitions, and each index adds that many more relations; e.g.,
with 10 indexes the total will be ~150 table partitions + (10*150) index
partitions = 1,650 relations.

If the OP goes for range-hash, and the hash count will mostly have to be 2^N,
say 8 hash sub-partitions, then the total will be (8*150) table partitions +
(8*150*10) index partitions = ~13,200 relations.
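(On PostgreSQL 12+ such relation counts can be checked directly; the table and
index names below are hypothetical:

    SELECT count(*) FROM pg_partition_tree('transactions'::regclass);
    SELECT count(*) FROM pg_partition_tree('transactions_date_idx'::regclass);

Note these counts also include the parent and any intermediate partitioned
relations, so they run slightly higher than the arithmetic above.)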

Though there are no theoretical limits to the number of partitions in
Postgres, there have been some serious issues reported in the past with higher
numbers of table partitions. One such report is below. Is this the reason?

https://www.kylehailey.com/post/postgres-partition-pains-lockmanager-waits

Regards
Veem


Re: Partitioning options

2024-02-08 Thread Greg Sabino Mullane
On Thu, Feb 8, 2024 at 12:42 AM sud  wrote:
...

> The key transaction table is going to have ~450 Million transactions per
> day and the data querying/filtering will always happen based on the
> "transaction date" column.
>
...

> Should we go for simple daily range partitioning on the transaction_date
> column?
>

This one gets my vote. That and some good indexes.

Cheers,
Greg