> AFAIU the OP is trying to give the cache a chance of doing some useful
> work by partitioning by time so it's going to be forced to go to disk
> less.

Exactly
> have you considered a couple of "levels" to your hierarchy. Maybe
> bi-hourly (~15 million records?) within the current
On Mon, Jun 14, 2010 at 08:27:49AM -0400, David Wilson wrote:
> On Mon, Jun 14, 2010 at 5:24 AM, Leonardo F wrote:
> > > For "inserts" I do not see the reason why it would be better to
> > > use index partitioning because AFAIK b-tree would behave exactly
> > > the same in both cases.
> >
> > no, when the index gets very big inserting random values gets
> > very slow.
On Mon, Jun 14, 2010 at 8:38 AM, Peter Hunsberger <
peter.hunsber...@gmail.com> wrote:
>
> Can you define acceptable? IIRC the OP is looking for 20,000+ inserts /
> sec.
>

He's actually only looking for 2k inserts/sec. With a battery backed
controller I can sustain that, yes. That's also on
On Mon, Jun 14, 2010 at 7:27 AM, David Wilson wrote:
>
> On Mon, Jun 14, 2010 at 5:24 AM, Leonardo F wrote:
>>
>> > For "inserts" I do not see the reason why it would be better to
>> > use index partitioning because AFAIK b-tree would behave exactly
>> > the same in both cases.
>>
On Mon, Jun 14, 2010 at 5:24 AM, Leonardo F wrote:
> > For "inserts" I do not see the reason why it would be better to
> > use index partitioning because AFAIK b-tree would behave exactly
> > the same in both cases.
>
> no, when the index gets very big inserting random values gets
> very slow.
On 14 June 2010 13:24, Leonardo F wrote:
>> For "inserts" I do not see the reason why it would be better to
>> use index partitioning because AFAIK b-tree would behave exactly
>> the same in both cases.
>
> no, when the index gets very big inserting random values gets
> very slow.
Hm, inter
> For "inserts" I do not see the reason why it would be better to
> use index partitioning because AFAIK b-tree would behave exactly
> the same in both cases.

no, when the index gets very big inserting random values gets
very slow.
But still, my approach doesn't work because I thought Postg
On 11 June 2010 17:15, Leonardo F wrote:
> Basically what I'm trying to do is to partition the index in the table
> where the data is going to be inserted into smaller indexes, but
> without using partitions: I would use partial indexes.
> "Historic" data will have just the big index...
Well, you
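The partial-index idea described above can be sketched in DDL. This is only a hypothetical illustration, assuming a table and column names not given in the thread: one small partial index per hour on the current day's data, so that each insert updates a hot, mostly cache-resident b-tree instead of one huge index. Note that the planner will only use a partial index when a query's WHERE clause provably implies the index predicate, which is a common pitfall with this approach.

```sql
-- Hypothetical sketch (table/column names are assumptions, not from
-- the thread). The real table stores ~2000 rows/sec for 20 days, with
-- two indexed columns holding almost-random values.
CREATE TABLE events (
    ts   timestamp NOT NULL,
    key1 bigint    NOT NULL,  -- almost-random values
    key2 bigint    NOT NULL   -- almost-random values
);

-- One small partial index per hour instead of one huge global index;
-- only the current hour's index is actively written.
CREATE INDEX events_key1_h08 ON events (key1)
    WHERE ts >= '2010-06-11 08:00' AND ts < '2010-06-11 09:00';
CREATE INDEX events_key1_h09 ON events (key1)
    WHERE ts >= '2010-06-11 09:00' AND ts < '2010-06-11 10:00';
-- ... and likewise for key2, one pair of indexes per hour.
```

A lookup such as `SELECT * FROM events WHERE key1 = 42` cannot use these indexes unless it also constrains `ts` to match an index predicate, so queries would have to be written (or generated) with the hour boundaries in mind.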
> Well the situation is still ambiguous so:
> Is it possible to provide this table and indexes definitions?
> And it would be great if you describe the queries you are going to do
> on this table or just provide the SQL.
Sure!
Basically what I'm trying to do is to partition the index in
On 11 June 2010 16:29, Leonardo F wrote:
>
>> Could you please explain the reason to do so many partitions?
>
> Because otherwise there would be tons of rows in each partition, and
> randomly "updating" the index for that many rows 2000 times per second
> isn't doable (the indexes get so big that it would be like writing a
> multi-GB file random
> Could you please explain the reason to do so many partitions?

Because otherwise there would be tons of rows in each partition, and
randomly "updating" the index for that many rows 2000 times per second
isn't doable (the indexes get so big that it would be like writing a
multi-GB file random
On 11 June 2010 13:00, Leonardo F wrote:
> a) create 480 partitions, 1 for each hour of the day. 2 indexes on each
> partition
> b) create 20 partitions, and create 24*2 partial indexes on the current
> partition; then the next day (overnight) create 2 global indexes for the
> table and drop the 2
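Option (a) above could look roughly like the following. This is a minimal sketch with assumed names; the thread predates declarative partitioning, so it uses table inheritance with CHECK constraints, one child table per hour (24 hours x 20 days = 480 children), each carrying its own two small indexes. Inserts would still have to be routed to the correct child by a trigger or by the application.

```sql
-- Hypothetical sketch of option (a); all names are assumptions.
CREATE TABLE events (
    ts   timestamp NOT NULL,
    key1 bigint    NOT NULL,
    key2 bigint    NOT NULL
);

-- One child per hour, constrained so the planner can exclude it
-- from queries on other time ranges (constraint exclusion).
CREATE TABLE events_20100611_h08 (
    CHECK (ts >= '2010-06-11 08:00' AND ts < '2010-06-11 09:00')
) INHERITS (events);

CREATE INDEX events_20100611_h08_key1 ON events_20100611_h08 (key1);
CREATE INDEX events_20100611_h08_key2 ON events_20100611_h08 (key2);
-- ... repeated for each of the 480 hourly children.
```

The trade-off the thread is weighing: 480 partitions keep every active index tiny, but queries that don't filter on `ts` must scan 960 indexes, whereas option (b)'s nightly global index rebuild concentrates that cost overnight.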
Hi all,
I have a very big table (2000 inserts per sec, I have to store 20 days of data).
The table has 2 indexes, in columns that have almost-random values.
Since keeping those two indexes up-to-date can't be done (updating 2000
times per second 2 indexes with random values on such a huge table