explain SELECT qrydocumentos.coddocumento, qrydocumentos.nomedocumento,
qrydocumentos.conteudo, qrydocumentos.tamanho, qrydocumentos.hora,
qrydocumentos.data, qrydocumentos.codfonte, qrydocumentos.nomefonte,
qrydocumentos.numeroimagens as "numeroImagens", qrydocumentos.subtitulo,
qrydocumentos.
The issue here might be just organizing the data differently. Or getting
an Opteron server with 16GB RAM :-) Based on the strength of the
developers' recommendations in this newsgroup, we recently upgraded to a
dual Opteron 2GHz with 16GB RAM and 15K RPM hard drives. We set
shared_buffers to 40,000.
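For context, on PostgreSQL versions of that era shared_buffers was measured in 8 kB pages rather than bytes, so 40,000 works out to roughly 320 MB. A sketch of the relevant postgresql.conf lines (the effective_cache_size value is an assumption, not from the thread):

```
# postgresql.conf -- 7.x/8.0-era units: both settings are in 8 kB pages
shared_buffers = 40000          # ~320 MB of shared buffer cache
effective_cache_size = 1000000  # ~8 GB; planner hint for the OS cache (a guess)
```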
Look into running Swish-e instead:
http://www.swish-e.org
Great speed, nice engine, excellent boolean searches. We run it on
several sites, each with over 500,000 documents. Performance is
consistently sub-second response time, and we also integrate it with
PHP, Perl, and PostgreSQL.
Hello,
I have a single table that just went over 234GB in size, with about 290M+
rows. I think I'm starting to approach some limits, since things
have gotten quite a bit slower over the last couple of days. The table is
really simple and I'm mostly doing simple data-mining queries like the
quer
Diogo Biazus wrote:
Hi folks,
I have a database using tsearch2 to index 300 000 documents.
I've already optimized the queries, and the database is vacuumed on
a daily basis.
The stat function tells me that my index has approx. 460 000 unique words
(I'm using a stemmer and a nice stopword list).
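For reference, the stat() function mentioned here is the one from the tsearch2 contrib module; it can also rank lexemes by frequency, which helps when tuning a stopword list. A sketch (the table name "documents" and its tsvector column "idxfti" are assumed names, not from the thread):

```sql
-- Top 10 most frequent lexemes in a tsearch2 index.
-- ndoc = documents containing the word, nentry = total occurrences.
SELECT word, ndoc, nentry
FROM stat('SELECT idxfti FROM documents')
ORDER BY ndoc DESC
LIMIT 10;
```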
Does anyone have a simple example showing how to perform non-blocking
queries using libpq that they can post? Thus far I can initiate a
non-blocking connection, and use PQsendQuery but I am having a hard
time figuring out when and how to consume the results.
Cheers,
Randall
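Randall's question maps onto a small polling loop: PQsendQuery dispatches without waiting, PQconsumeInput/PQisBusy drive the read side, and PQgetResult drains the results. A minimal sketch, assuming the conninfo string and query are placeholders; a real program would wait on PQsocket(conn) with select()/poll() instead of spinning:

```c
/* Minimal sketch of asynchronous query processing with libpq.
   Conninfo and query are placeholders; error handling is trimmed. */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");   /* placeholder conninfo */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connect: %s", PQerrorMessage(conn));
        return 1;
    }
    PQsetnonblocking(conn, 1);                   /* non-blocking sends */

    if (!PQsendQuery(conn, "SELECT now()")) {    /* dispatch, don't wait */
        fprintf(stderr, "send: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* PQconsumeInput reads whatever has arrived; PQisBusy says whether
       PQgetResult would still block. */
    while (PQisBusy(conn)) {
        if (!PQconsumeInput(conn)) {
            fprintf(stderr, "consume: %s", PQerrorMessage(conn));
            break;
        }
        /* in a real program: select() on PQsocket(conn) here */
    }

    /* Drain every result; PQgetResult returns NULL when the query is done. */
    PGresult *res;
    while ((res = PQgetResult(conn)) != NULL) {
        if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) > 0)
            printf("%s\n", PQgetvalue(res, 0, 0));
        PQclear(res);
    }

    PQfinish(conn);
    return 0;
}
```

The key point is that PQgetResult must be called repeatedly until it returns NULL, even for a single query, or the connection stays busy.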
---
Can we see the schema for the table qrydocumentos ?
-----Original Message-----
From: Diogo Biazus [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 30, 2004 2:32 PM
To: Dann Corbit
Cc: [EMAIL PROTECTED]
Subject: Re: [GENERAL] Which hardware suits best for large full-text
indexed databases
Dann Corb
Dann Corbit wrote:
What does the EXPLAIN command say about the slowest queries?
explain SELECT qrydocumentos.coddocumento, qrydocumentos.nomedocumento,
qrydocumentos.conteudo, qrydocumentos.tamanho, qrydocumentos.hora,
qrydocumentos.data, qrydocumentos.codfonte, qrydocumentos.nomefonte,
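When posting plans for slow queries, EXPLAIN ANALYZE is usually more useful than plain EXPLAIN because it executes the query and reports actual row counts and timings, which exposes bad planner estimates. A sketch against the query above (the FROM/WHERE clauses were truncated in the thread, so the predicate here is an assumption in tsearch2 style):

```sql
-- EXPLAIN shows the chosen plan; EXPLAIN ANALYZE also runs the query
-- and reports actual times and rows alongside the estimates.
EXPLAIN ANALYZE
SELECT coddocumento, nomedocumento, conteudo
FROM qrydocumentos
WHERE idxfti @@ to_tsquery('placeholder');  -- column and predicate assumed
```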
> -----Original Message-----
> From: Diogo Biazus [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, March 30, 2004 1:55 PM
> To: [EMAIL PROTECTED]
> Subject: [GENERAL] Which hardware suits best for large
> full-text indexed databases
>
>
> Hi folks,
>
> I have a database using tsearch2 to index 300 00
Hi folks,
I have a database using tsearch2 to index 300 000 documents.
I've already optimized the queries, and the database is vacuumed on
a daily basis.
The stat function tells me that my index has approx. 460 000 unique words
(I'm using a stemmer and a nice stopword list).
The problem is per
> I would like to know if I can have a time-based
> trigger, for example a procedure that could be run every day at, say, 10 at
> night. Thanking you,
Isn't this what cron is for? Just set it up so it invokes pg with a
command-line query.
Chris
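A sketch of such a crontab entry, installed with `crontab -e` (the database name, the nightly_procedure() function, and the log path are all hypothetical):

```
# m h dom mon dow  command -- run every day at 22:00
0 22 * * * psql -d mydb -c 'SELECT nightly_procedure();' >> /var/log/nightly.log 2>&1
```

One design note: running maintenance through cron keeps the scheduling outside the database, so a hung query can't block the scheduler itself.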
--
At 06:16 PM 3/29/2004 -0600, Mike Nolan wrote:
> Now, that doesn't preclude clients from seeing the names of other
> clients' databases using \l, but unless there is gross mismanagement of the
> pg_hba.conf, seeing the names of other databases doesn't give other
> clients any benefit ...
That ra
Jan Wieck wrote:
If you don't know the answers to that, I assume it isn't as easy as
people try to make it seem. And in case the answer is "that is not
possible, but ...", then you'd better think again about what you want that
replication setup for.
Although I agree with your points (especially having
> I would like to know if I can have a time-based
> trigger, for example a procedure that could be run every day at, say, 10 at
> night. Thanking you,
Cron?