Bruce Momjian wrote:
> Tom Lane wrote:
> > Dimitri Fontaine writes:
> > > Josh Berkus writes:
> > >> a) Eliminate WAL logging entirely
> > >> b) Eliminate checkpointing
> > >> c) Turn off the background writer
> >> d) Have PostgreSQL refuse to restart after a crash and instead call an ex…
Benjamin Krajmalnik wrote:
> Rajesh,
>
> I had a similar situation a few weeks ago whereby performance all of a
> sudden decreased.
> The one tunable which resolved the problem in my case was increasing the
> number of checkpoint segments.
> After increasing them, everything went back to its normal state.
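For reference, the change described above would look roughly like this in postgresql.conf (values are illustrative starting points, not a recommendation for every workload; checkpoint_completion_target is a related knob not mentioned in the thread):

```ini
# postgresql.conf -- illustrative values for a write-heavy workload
checkpoint_segments = 32            # default is 3; raising it makes checkpoints less frequent
checkpoint_completion_target = 0.9  # spread checkpoint writes across the interval
```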
Rajesh Kumar Mallah writes:
> co_name_vec is actually the auxiliary tsvector column that is maintained
> via an update trigger, and the index that you suggested is there.
Well, in that case it's just a costing/statistics issue. The planner is
probably estimating there are more tsvector matches than there actually are.
> The way to make this go faster is to set up the actually recommended
> infrastructure for full text search, namely create an index on
> (co_name_vec)::tsvector (either directly or using an auxiliary tsvector
> column). If you don't want to maintain such an index, fine, but don't
> expect full text search to be fast.
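A sketch of the two indexing options described above (the table name below is hypothetical; co_name and co_name_vec come from the thread):

```sql
-- Option 1: index the tsvector expression directly
CREATE INDEX co_name_fts_idx ON profile_master
    USING gin (to_tsvector('english', co_name));

-- Option 2: keep an auxiliary tsvector column (maintained by trigger)
-- and index that instead
CREATE INDEX co_name_vec_idx ON profile_master USING gin (co_name_vec);

-- Queries must then match the indexed expression/column, e.g.:
-- SELECT ... WHERE co_name_vec @@ to_tsquery('english', 'acme');
```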
Dear Tom/Kevin/List
Thanks for the insight, I will check the suggestion more closely and post
the results.
regds
Rajesh Kumar Mallah.
On 6/25/10 12:03 PM, Greg Smith wrote:
Craig James wrote:
I've got a new server and want to make sure it's running well.
Any changes to the postgresql.conf file? Generally you need at least a
moderate shared_buffers (1GB or so at a minimum) and checkpoint_segments
(32 or higher) in order for the server to perform well.
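The minimal postgresql.conf changes Greg suggests, as a starting point (exact values depend on available RAM and workload):

```ini
shared_buffers = 1GB        # "1GB or so at a minimum" per the advice above
checkpoint_segments = 32    # "32 or higher"
```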
"Kevin Grittner" writes:
> Rajesh Kumar Mallah wrote:
>> just by removing the order by co_name reduces the query time
>> dramatically from ~ 9 sec to 63 ms. Can anyone please help.
> The reason is that one query allows it to return *any* 25 rows,
> while the other query requires it to find a *specific* set of 25
> rows.
Rajesh Kumar Mallah wrote:
> just by removing the order by co_name reduces the query time
> dramatically from ~ 9 sec to 63 ms. Can anyone please help.
The reason is that one query allows it to return *any* 25 rows,
while the other query requires it to find a *specific* set of 25
rows. It h
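Kevin's point in code form: with a btree index on the sort column, the planner can read rows already in co_name order and stop after 25, instead of fetching and sorting the whole result set. (Table name hypothetical.)

```sql
CREATE INDEX companies_co_name_idx ON companies (co_name);

-- Can now be satisfied by walking the index in order and stopping early:
-- SELECT * FROM companies ORDER BY co_name LIMIT 25;
```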
On Mon, Jun 28, 2010 at 5:09 PM, Yeb Havinga wrote:
> Rajesh Kumar Mallah wrote:
>
>> Dear List,
>>
>> just by removing the order by co_name reduces the query time dramatically
>> from ~ 9 sec to 63 ms. Can anyone please help.
>>
> The 63 ms query result is probably useless since it returns a limit of
> 25 rows from an unordered result.
On Monday 28 June 2010 13:39:27 Yeb Havinga wrote:
> It looks like seq_scans are disabled, since the index scan has only a
> filter expression but not an index cond.
Or it's using it to get an ordered result...
Andres
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
Rajesh Kumar Mallah wrote:
Dear List,
just by removing the order by co_name reduces the query time dramatically
from ~ 9 sec to 63 ms. Can anyone please help.
The 63 ms query result is probably useless since it returns a limit of
25 rows from an unordered result. It is not surprising that this is faster.
Dear List,
just by removing the order by co_name reduces the query time dramatically
from ~ 9 sec to 63 ms. Can anyone please help.
Regds
Rajesh Kumar Mallah.
explain analyze SELECT * from ( SELECT
a.profile_id,a.userid,a.amount,a.category_id,a.catalog_id,a.keywords,b.co_name
from general.c
t...@exquisiteimages.com writes:
> I am wondering how I should architect this in PostgreSQL. Should I follow
> a similar strategy and have a separate database for each client and one
> database that contains the global data?
As others said already, there are more problems to foresee in doing so
than…
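One common layout for the situation described above, shared only as a sketch (schema names hypothetical): a schema per client plus a shared schema for global data, selected per connection via search_path.

```sql
CREATE SCHEMA global;          -- shared reference data
CREATE SCHEMA client_a;        -- one schema per client
CREATE SCHEMA client_b;

-- At connection time, scope a session to one client plus the shared data:
SET search_path = client_a, global;
```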