Yes. I have run ANALYZE on the table, and I have also looked at the EXPLAIN 
plan for the CITEXT query. It was not using the index. The column is not the 
primary key, and I was surprised to learn that CITEXT would use an index only 
if the column is the primary key. That is interesting and new to me.
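
For reference, this is roughly the kind of check I ran; the table and column 
names below are only placeholders, not our real schema:

    -- placeholder schema, for illustration only
    CREATE EXTENSION IF NOT EXISTS citext;

    CREATE TABLE accounts (
        id    serial PRIMARY KEY,
        email citext NOT NULL
    );

    -- plain b-tree index on the citext column
    CREATE INDEX accounts_email_idx ON accounts (email);

    -- EXPLAIN ANALYZE shows whether the planner chooses an index scan
    -- or falls back to a sequential scan
    EXPLAIN ANALYZE
    SELECT * FROM accounts WHERE email = 'Radha.Krishna';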
 
Upper and lower functions are not the right choice when the table has more 
than 2.5 million rows and we also have heavy insert transactions.
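
For completeness, this is the kind of expression index that was suggested 
(a minimal sketch with placeholder names); the concern above is the extra 
write cost of maintaining such an index under heavy inserts:

    -- placeholder table with a plain text column, for illustration only
    CREATE TABLE customers (
        id   serial PRIMARY KEY,
        name text NOT NULL
    );

    -- expression index on lower(); text_pattern_ops lets the b-tree
    -- also serve LIKE 'prefix%' searches
    CREATE INDEX customers_name_lower_idx
        ON customers (lower(name) text_pattern_ops);

    -- the query must repeat the lower() expression to match the index,
    -- and the pattern should already be lower-cased
    EXPLAIN ANALYZE
    SELECT * FROM customers WHERE lower(name) LIKE 'rad%';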

I doubt we can keep the table cached if there are frequent inserts/updates. 
The better approach would be a database-level case-insensitive configuration, 
as in SQL Server. I would go for that solution, if Postgres supported it.
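
(For readers finding this thread later: PostgreSQL 12 and newer do offer 
something close, via nondeterministic ICU collations. A minimal sketch, 
assuming a PostgreSQL 12+ server built with ICU; names are placeholders:)

    -- case-insensitive, nondeterministic ICU collation (PostgreSQL 12+)
    CREATE COLLATION case_insensitive (
        provider = icu,
        locale = 'und-u-ks-level2',
        deterministic = false
    );

    CREATE TABLE users (
        id   serial PRIMARY KEY,
        name text COLLATE case_insensitive NOT NULL
    );

    -- plain equality now compares case-insensitively
    SELECT * FROM users WHERE name = 'radha';  -- also matches 'Radha', 'RADHA'

(Note that LIKE and other pattern matching are restricted with 
nondeterministic collations, at least in earlier releases, so this helps 
equality searches more than prefix searches.)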
 
Thanks for all the replies and help.
 
> Date: Sat, 29 Jun 2013 09:02:12 -0700
> From: j...@commandprompt.com
> To: udayabhanu1...@hotmail.com
> CC: kgri...@mail.com; adrian.kla...@gmail.com; pgsql-general@postgresql.org; 
> pgadmin-supp...@postgresql.org; laurenz.a...@wien.gv.at; 
> chris.trav...@gmail.com; mag...@hagander.net
> Subject: Re: [GENERAL] Postgres case insensitive searches
> 
> 
> On 06/28/2013 03:21 AM, bhanu udaya wrote:
> > Hello,
> >
> > Greetings,
> >
> > What is the best way of doing case-insensitive searches in Postgres
> > using LIKE?
> >
> > ILIKE - does not use indexes.
> > Function-based indexes are not as fast as required.
> > CITEXT - it is still taking 600 ms - 1 second on 2.2 million rows... does
> > not use the index.
> > Collation index creation with POSIX - does not really work.
> > GiST/GIN indexes are faster when using LIKE, but are not case-insensitive.
> >
> > Is there a better way of resolving these case-insensitive searches with
> > fast retrieval?
> 
> O.k. there is not anywhere near enough information here to provide you 
> with a proper answer, but here are a few things you should look at:
> 
> CITEXT: You said it takes 600ms - 1 second. Is that a first run, or is 
> the relation cached? Second, how do you know it isn't using the index? 
> Have you run an EXPLAIN ANALYZE? In order for CITEXT to use an index, 
> the column being searched must be the PRIMARY KEY; is your column the 
> primary key?
> 
> Second, you have provided us with zero information on your hardware 
> configuration. 2.2 million rows is a lot of rows to seqscan; if they 
> aren't cached or if you don't have reasonable hardware, it is going to 
> take time no matter what you do.
> 
> Third, have you tried this with unlogged tables (for performance)?
> 
> Fourth, there was another person who suggested using UPPER(); that is a 
> reasonable suggestion. The docs clearly suggest using lower(). I don't 
> actually know if there is a difference, but lower() is the common way to do 
> it, and it will use an index IF you make a functional index on the column 
> using lower().
> 
> JD
> 
> 
> 
> 
> >
> > Thanks and Regards
> > Radha Krishna
> >
> 
> 
> -- 
> Command Prompt, Inc. - http://www.commandprompt.com/  509-416-6579
> PostgreSQL Support, Training, Professional Services and Development
> High Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc
> For my dreams of your image that blossoms
>     a rose in the deeps of my heart. - W.B. Yeats
                                          
