On Wed, Apr 14, 2010 at 8:01 AM, Jan Krcmar wrote:
> hi
>
> I've got a database (about 300G) and it's still growing.
>
> I'm inserting new data (about 2G/day) into the database (there is
> only one table in it) and I'm also deleting about 2G/day (data older
> than a month).
>
> the documentation says one should run VACUUM if there are many
> changes in the database, but vacuumdb never finishes before the
> new data has to be imported.
On Thursday 15 April 2010 15.56:20 Jan Krcmar wrote:
> I'm doing one big insert per day, and one big delete per day.
>
> anyway, I've found this article:
> http://www.postgresql.org/docs/8.4/interactive/ddl-partitioning.html
>
> could partitioning be helpful in this situation?
Yes, I'm quite sure it would be.
hi
2010/4/14 Adrian von Bidder:
> -> vacuum can run concurrently with other work, so there is no need to
> wait for it to finish.
> -> in most cases, autovacuum should do the Right Thing(tm) automatically,
> so you should not need to run vacuum manually.
>
> This is with a recent pg version.
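For what it's worth, autovacuum is tuned through postgresql.conf. A rough
sketch of the relevant knobs for a table with heavy churn; the values below
are only illustrative, not recommendations for this particular workload:

```
# on by default since 8.3
autovacuum = on
# vacuum a table once ~5% of its rows have changed (default is 20%,
# which on a 300G table means a lot of dead rows before vacuum starts)
autovacuum_vacuum_scale_factor = 0.05
# more memory lets each vacuum pass process more dead tuples at once
maintenance_work_mem = 256MB
```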
On Wednesday 14 April 2010 16.01:39 Jan Krcmar wrote:
> the documentation says one should run VACUUM if there are many
> changes in the database, but vacuumdb never finishes before the
> new data has to be imported.
>
> is there any technique that can solve this problem?

-> vacuum can run concurrently with other work, so there is no need to
wait for it to finish.
-> in most cases, autovacuum should do the Right Thing(tm) automatically,
so you should not need to run vacuum manually.

This is with a recent pg version.
Hi
> >
> > > You might consider partitioning this table by date, either by day or by
> > > week, and instead of deleting old rows, drop entire old partitions
> >
> > this is not really a good workaround...

As a first choice, this is a very good workaround for your present
situation.
As a second
On Wednesday 14 April 2010, Jan Krcmar wrote:
>
> > You might consider partitioning this table by date, either by day or by
> > week, and instead of deleting old rows, drop entire old partitions
>
> this is not really a good workaround...

Actually it's a very good workaround that a lot of people use.
Jan Krcmar wrote:
> > You might consider partitioning this table by date, either by day or
> > by week, and instead of deleting old rows, drop entire old partitions.
>
> this is not really a good workaround...

It is in fact the only good workaround for your problem, which you'll
eventually come to realize.
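To make that concrete, here is a minimal sketch of the inheritance-based
partitioning the linked 8.4 docs describe. The table and column names are
made up, and a real setup would also add indexes per child table and a
trigger or rule to route inserts through the parent:

```sql
CREATE TABLE logdata (
    logged_at  timestamp NOT NULL,
    payload    text
);

-- one child table per month, constrained to that month's range so the
-- planner can skip it (with constraint_exclusion enabled)
CREATE TABLE logdata_2010_04 (
    CHECK (logged_at >= DATE '2010-04-01' AND logged_at < DATE '2010-05-01')
) INHERITS (logdata);

-- the daily bulk load can go straight into the current child table;
-- queries against the parent still see all partitions

-- retiring a month of data is then an instant metadata operation that
-- leaves no dead rows behind for VACUUM to chew through
DROP TABLE logdata_2010_03;
```

Dropping a partition replaces the daily 2G DELETE entirely, which is
exactly what takes the pressure off vacuum.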