On Thu, Dec 5, 2013 at 7:36 AM, Dave Johansen wrote:
> I'm managing a database that is adding about 10-20M records per day to a
> table and time is a core part of most queries,
>
What is the nature of how the time column is used in the queries?
Depending on how it is used, you might not get much benefit from
partitioning on it.
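For illustration, a rough sketch of the difference (table and column
names are hypothetical), assuming an events table partitioned by day
into children that carry CHECK constraints on a ts column:

  SET constraint_exclusion = partition;

  -- A literal range on ts can be compared against each child's CHECK
  -- constraint, so the planner skips the irrelevant partitions:
  EXPLAIN SELECT count(*) FROM events
  WHERE ts >= '2013-12-01' AND ts < '2013-12-02';

  -- Wrapping ts in a function defeats constraint exclusion, so every
  -- partition gets scanned:
  EXPLAIN SELECT count(*) FROM events
  WHERE date_trunc('day', ts) = '2013-12-01';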
Hi Dave,
About the number of partitions: I haven't had many problems with
hundreds of partitions (e.g. 360 days in a year).
Moreover, you can bypass the trigger overhead with a direct insert into
the target partition, which also lets you insert in parallel without
firing the trigger for every row.
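A minimal sketch of that (names are hypothetical, using the
inheritance-style partitioning current in these releases):

  CREATE TABLE events (
      id      bigserial,
      ts      timestamptz NOT NULL,
      payload text
  );

  -- One child per day, with a CHECK constraint so constraint exclusion
  -- can prune it out of queries on other days:
  CREATE TABLE events_2013_12_07 (
      CHECK (ts >= '2013-12-07' AND ts < '2013-12-08')
  ) INHERITS (events);

  -- Inserting into the parent would need a routing trigger; inserting
  -- directly into the child skips that per-row overhead:
  INSERT INTO events_2013_12_07 (ts, payload)
  VALUES ('2013-12-07 10:00', 'direct insert, no trigger fired');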
On Thu, Dec 05, 2013 at 02:42:10AM -0800, Max wrote:
> Hello,
>
> We are starting a new project to deploy a solution in the cloud with the
> possibility of it being used by 2,000+ clients. Each of these clients will
> use several tables to store their information (our model has about 500+
> tables, but there
hi,
thank you so much for the input.
Can you please clarify the following points:
*1. Output of BitmapAnd = 303660 rows*
-> BitmapAnd (cost=539314.51..539314.51 rows=303660 width=0) (actual time=9083.085..9083.085 rows=0 loops=1)
      -> Bitmap Index Scan on groupid_index (cost