Re: [GENERAL] Working with huge amount of data. RESULTS!

2008-02-12 Thread Oleg Bartunov
On Tue, 12 Feb 2008, Mario Lopez wrote: Hi! I optimized the LIKE 'keyword%' and LIKE '%keyword' with the following results: # time /Library/PostgreSQL8/bin/psql -U postgres -d testdb -c "select * from table1 where varchar_reverse(data) like varchar_reverse('%keyword');" real 0m0.055s u
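
The thread does not show how varchar_reverse() or its supporting index were created; a minimal sketch, assuming the function is the plperl helper mentioned elsewhere in the thread and that the column is called data, might look like this:

  -- Assumed plperl implementation of varchar_reverse(); marked IMMUTABLE
  -- so it can be used in an index expression.
  CREATE OR REPLACE FUNCTION varchar_reverse(text) RETURNS text AS $$
      return scalar reverse($_[0]);
  $$ LANGUAGE plperl IMMUTABLE STRICT;

  -- Expression index; text_pattern_ops lets LIKE use the index even when
  -- the database locale is not C.
  CREATE INDEX table1_data_rev_idx
      ON table1 (varchar_reverse(data) text_pattern_ops);

  -- reverse('%keyword') is 'drowyek%', a left-anchored pattern, so the
  -- suffix search becomes an ordinary index scan:
  SELECT * FROM table1
  WHERE varchar_reverse(data) LIKE varchar_reverse('%keyword');

Because varchar_reverse() is immutable, the right-hand side is folded to a constant at plan time, so the pattern stays left-anchored and indexable.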

Re: [GENERAL] Working with huge amount of data. RESULTS!

2008-02-12 Thread hubert depesz lubaczewski
On Tue, Feb 12, 2008 at 03:45:51PM +0100, Mario Lopez wrote: > the reversed index which takes like 20 minutes, I guess it has to do > with the plperl function, perhaps a C function for reversing would build > it in less time. Sure, take a look at this: http://www.depesz.com/index.php/2007/09/0
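
If the plperl reverse is the bottleneck during the index build, the usual fix is to register the reverse function in C and build the index on that instead. The declaration below is only a sketch: the function name, shared-library path, and symbol name are placeholders, and the actual C implementation (as in the linked post) is not reproduced here:

  -- Hypothetical C-language replacement for the plperl reverse; the
  -- library path and symbol name are assumptions.
  CREATE OR REPLACE FUNCTION reverse_c(text) RETURNS text
      AS '/usr/local/pgsql/lib/reverse_string', 'reverse_string'
      LANGUAGE C IMMUTABLE STRICT;

  -- Rebuilding the expression index with the C version avoids the
  -- per-row interpreter overhead during the (reportedly ~20 minute) build.
  CREATE INDEX table1_data_rev_c_idx
      ON table1 (reverse_c(data) text_pattern_ops);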

Re: [GENERAL] Working with huge amount of data. RESULTS!

2008-02-12 Thread Alvaro Herrera
Mario Lopez wrote: > The problem is still with the LIKE '%keyword%'; my problem is that I am > not searching for words in a dictionary fashion. Suppose my "data" is > random garbage that has common consecutive bytes. How could I > generate a dictionary from this random garbage to make it
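
The reply is truncated here, but one common way to "generate a dictionary" from arbitrary byte strings is to split every value into short fixed-length substrings (n-grams) and index those. The sketch below only illustrates that idea and is not taken from the thread; the trigram table and column names are assumptions:

  -- Side table holding every 3-character substring of table1.data.
  CREATE TABLE table1_trigrams (
      id      integer NOT NULL,   -- assumed primary key of table1
      trigram text    NOT NULL
  );

  INSERT INTO table1_trigrams (id, trigram)
  SELECT id, substr(data, generate_series(1, length(data) - 2), 3)
  FROM table1;

  CREATE INDEX table1_trigrams_idx ON table1_trigrams (trigram);

  -- Narrow down candidates via one trigram of the keyword, then re-check
  -- the full pattern with LIKE on the base table.
  SELECT DISTINCT t.*
  FROM table1 t
  JOIN table1_trigrams tg ON tg.id = t.id
  WHERE tg.trigram = substr('keyword', 1, 3)
    AND t.data LIKE '%keyword%';

The pg_trgm contrib module is built around the same trigram idea.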

Re: [GENERAL] Working with huge amount of data. RESULTS!

2008-02-12 Thread Mario Lopez
Hi! I optimized the LIKE 'keyword%' and LIKE '%keyword' with the following results: # time /Library/PostgreSQL8/bin/psql -U postgres -d testdb -c "select * from table1 where varchar_reverse(data) like varchar_reverse('%keyword');" real 0m0.055s user 0m0.011s sys 0m0.006s # time
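
For the left-anchored case, LIKE 'keyword%', no reversed expression is needed; a plain pattern-ops index on the column is enough. A sketch, assuming data is a varchar column as the function name suggests:

  -- varchar_pattern_ops makes left-anchored LIKE indexable regardless of
  -- the database locale; use text_pattern_ops if the column is text.
  CREATE INDEX table1_data_prefix_idx
      ON table1 (data varchar_pattern_ops);

  SELECT * FROM table1 WHERE data LIKE 'keyword%';

Together with the reversed-expression index shown above, this covers both the 'keyword%' and '%keyword' timings quoted in the message.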