On Aug 25, 2007, at 2:58 PM, Erik Jones wrote:

On Aug 24, 2007, at 7:41 PM, Benjamin Arai wrote:
> Hi,
> I have an application which loads millions of NEW documents each month
> into a PostgreSQL tsearch2 table. I have the initial version completed
> and searching performance is great, but my problem is that each time a
> new month rolls around ...

On Aug 24, 2007, at 7:41 PM, Benjamin Arai wrote:

Hi,

I have an application which loads millions of NEW documents each month
into a PostgreSQL tsearch2 table. I have the initial version completed
and searching performance is great, but my problem is that each time a
new month rolls around I have to drop all the indexes, do a COPY, and
re-index.

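For concreteness, the monthly cycle being described looks roughly like the
following sketch. The table, column, index, and file names are invented for
illustration, since the actual schema never appears in the thread:

  -- Hypothetical table: articles(id bigint, body text, body_tsv tsvector)

  -- Drop the full-text index so the bulk load is not slowed down by
  -- per-row index maintenance.
  DROP INDEX articles_body_tsv_idx;

  -- Bulk-load the new month's documents.
  COPY articles (id, body, body_tsv) FROM '/tmp/new_month.copy';

  -- Rebuild the index over the entire table; this is the step that
  -- gets more expensive every month as the table grows.
  CREATE INDEX articles_body_tsv_idx ON articles USING gist (body_tsv);

The pain is entirely in the last step: the rebuilt index has to cover all of
the old rows as well as the new ones.
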
Since I am using tsearch2 on the table I think there is going to be a
significant performance hit - e.g., I partition by batch (batches are
not separated by date, they are essentially random subsets of a much
larger data-set). I am querying ...

Benjamin Arai wrote:
> As stated in the previous email if I use partitioning then queries will
> be executed sequentially - i.e., instead of log(n) it would be
> (# partitions) * log(n). Right?

The planner will consider every relevant partition during ...

> As stated in the previous email if I use partitioning then queries will
> be executed sequentially - i.e., instead of log(n) it would be
> (# partitions) * log(n). Right?

Depends: since indexes would be hit for each child table, the time for
the query is dependent on the amount of data that is ...

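To make the "indexes would be hit for each child table" point concrete, here
is a rough sketch of inheritance-based partitioning with one child per batch.
All object names are hypothetical, and GiST is just one reasonable choice of
index type for a tsvector column:

  CREATE TABLE articles (id bigint, body text, body_tsv tsvector);

  CREATE TABLE articles_batch_001 () INHERITS (articles);
  CREATE INDEX articles_batch_001_tsv_idx
      ON articles_batch_001 USING gist (body_tsv);
  -- ...and so on, one child table plus index per batch.

  -- A search against the parent is planned as an Append over the
  -- children, and each child scan can use that child's own index:
  SELECT id
    FROM articles
   WHERE body_tsv @@ to_tsquery('postgresql & partition');

Because the batches are random subsets rather than, say, date ranges, there is
no CHECK constraint that would let the planner exclude any child, so every
partition's index is consulted; that is where the (# partitions) * log(n)
intuition comes from.
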
As stated in the previous email if I use partitioning then queries
will be executed sequentially - i.e., instead of log(n) it would be
(# partitions) * log(n). Right?

Benjamin

On Aug 25, 2007, at 9:18 AM, Joshua D. Drake wrote:

Benjamin Arai wrote:
> This is kind of disappointing, I was hoping there was more that could
> be done.
>
> There has to be another way to do incremental indexing without losing
> that much performance.

What makes you think you are losing performance by ...

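Joshua's question presumably points at the fact that a GiST (or, on 8.2, GIN)
index is maintained automatically as rows are inserted, so incremental loading
does not by itself require dropping and rebuilding anything. A minimal sketch,
again with invented names:

  -- Load the new month's rows straight into the already-indexed table;
  -- the tsearch2 index is updated as part of the COPY.
  COPY articles (id, body, body_tsv) FROM '/tmp/new_month.copy';

  -- Keep the planner's statistics current afterwards.
  VACUUM ANALYZE articles;

The COPY itself runs slower than it would into an unindexed table, but it
trades a per-row maintenance cost for the ever-growing full re-index.
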
This is kind of disappointing, I was hoping there was more that could
be done.

There has to be another way to do incremental indexing without
losing that much performance.

Benjamin

On Aug 24, 2007, at 6:29 PM, Joshua D. Drake wrote:

Brandon Shalton wrote:
> Benjamin,
>
>> In order to avoid the re-indexing I was thinking of instead creating
>> a new table each month (building its indexes and etc) and accessing
>> them all through a view. This way I only have to index the new data
>> each month.

Benjamin,

> In order to avoid the re-indexing I was thinking of instead creating a
> new table each month (building its indexes and etc) and accessing them
> all through a view. This way I only have to index the new data each
> month.

Take a look at bizgres.org (based on postgres).

They have a ...

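A rough sketch of the one-table-per-month-behind-a-view idea, with all object
names invented and assuming a matching table already exists for each earlier
month:

  -- New month: create, load, and index only this month's table.
  CREATE TABLE articles_2007_08 (id bigint, body text, body_tsv tsvector);
  COPY articles_2007_08 (id, body, body_tsv) FROM '/tmp/2007_08.copy';
  CREATE INDEX articles_2007_08_tsv_idx
      ON articles_2007_08 USING gist (body_tsv);

  -- Recreate the umbrella view to include the new table.
  CREATE OR REPLACE VIEW articles_all AS
      SELECT * FROM articles_2007_07
      UNION ALL
      SELECT * FROM articles_2007_08;

A tsearch2 WHERE clause on articles_all should be pushed down into each
UNION ALL branch, so each month's own index can be used, which is essentially
the same plan shape as the inheritance approach.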