i thought you shared my frustration :-) (see
http://postgresql.1045698.n5.nabble.com/tsearch2-plainto-tsquery-with-OR-td1885955.html).
anyway, "...plainto_tsquery('...')" is then pretty much useless, since
it fails as soon as someone inserts a single boolean operator - back to
"...to_tsquery('...')" and inse
John Smith writes:
> i can run "...@@ to_tsquery('cat | dog')".
> but if i run "...@@ to_tsquery('cat dog')", it gives me a syntax error
> (#42601).
> so i run "...@@ plainto_tsquery('cat dog')".
> but then i can't run "...@@ plainto_tsquery('cat | dog')".
Yeah ... that's pretty much exactly the
i can run "...@@ to_tsquery('cat | dog')".
but if i run "...@@ to_tsquery('cat dog')", it gives me a syntax error (#42601).
so i run "...@@ plainto_tsquery('cat dog')".
but then i can't run "...@@ plainto_tsquery('cat | dog')".
can you help before i give up on tsearch2?
thks, jzs
http://postg
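A minimal sketch of the two behaviors the posters describe (the 'english' configuration name is an assumption; any configuration shows the same contrast):

```sql
-- to_tsquery() expects boolean syntax; bare words with no operator
-- between them raise syntax error 42601:
SELECT to_tsquery('english', 'cat | dog');       -- 'cat' | 'dog'
-- SELECT to_tsquery('english', 'cat dog');      -- ERROR 42601

-- plainto_tsquery() never errors, but it discards operators and ANDs
-- every lexeme together, so OR cannot be expressed through it:
SELECT plainto_tsquery('english', 'cat dog');    -- 'cat' & 'dog'
SELECT plainto_tsquery('english', 'cat | dog');  -- also 'cat' & 'dog'
```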
Ok, so to be sure I understand everything - first I should install a
postgresql-contrib extension. Next, a contrib/dict_int
directory will appear with the dict_int source code inside, which I can modify. Then
I'll be able to install this modified dictionary, and it would be
working properl
Please,
take a look at contrib/dict_int and create your own dict_noop.
It should be easy. I think you could document it and share it
with people (wiki.postgresql.org ?), since there were other people
interested in a noop dictionary. Also, don't forget to modify
your configuration - use ts_debug(), i
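A noop-style dictionary can also be approximated without writing C, using the built-in "simple" template, which accepts every word as-is (lowercased). A hedged sketch on 8.3+ syntax; the dictionary and configuration names here are hypothetical:

```sql
-- "simple" accepts any word, so nothing falls through unindexed:
CREATE TEXT SEARCH DICTIONARY my_noop (TEMPLATE = pg_catalog.simple);

-- Map ordinary word tokens in a hypothetical configuration to it:
ALTER TEXT SEARCH CONFIGURATION my_config
    ALTER MAPPING FOR asciiword, word WITH my_noop;

-- ts_debug() shows which dictionary handled each token:
SELECT alias, token, dictionaries, lexemes
FROM ts_debug('my_config', 'some sample text');
```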
Hello,
I encountered such a problem: my goal is to extract links from a text
using tsearch2. Everything seemed to work well until I got some YouTube
links - there are lowercase and uppercase letters inside, and the tsearch
parser is lowercasing everything (from http://youtube.com/Y6dsHDX I got
http://y
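The case-folding happens in the dictionary stage, not in the parser itself. A hedged sketch (8.3-era functions; exact token aliases may vary) of pulling raw URL tokens out before any dictionary lowercases them:

```sql
-- ts_parse() emits raw tokens before dictionary processing, so the
-- mixed-case URL survives; ts_token_type() maps tokid to its alias:
SELECT p.token
FROM ts_parse('default', 'watch http://youtube.com/Y6dsHDX today') AS p
JOIN ts_token_type('default') AS t ON p.tokid = t.tokid
WHERE t.alias IN ('url', 'host');
```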
Ivan,
did you find your misunderstanding? You forgot how dictionaries work.
You need to add a dictionary which recognizes anything, like the simple or a
stemmer dictionary, to handle 'unknown' words. Look into the documentation.
Oleg
On Wed, 2 Jun 2010, Ivan Voras wrote:
hello,
I think I have a
hello,
I think I have a problem with tsearch2 configuration I'm trying to use.
I have created a text search configuration as:
--
CREATE TEXT SEARCH DICTIONARY hr_ispell (
TEMPLATE = ispell,
DictFile = 'hr',
AffFile = 'hr',
StopWords = 'hr'
);
CREATE TEXT SEARCH CONFIGURATION publ
AI Rumman wrote:
> When I am using the query:
>
> select length(description),
> to_tsvector('default',description) as c from crmentity ;
>
> Getting error:
>
> NOTICE: word is too long
>
> Postgresql 8.1.
>
> Could anyone please tell me why?
Because there is a "word" in the "descriptio
When I am using the query:
select length(description), to_tsvector('default',description) as c from
crmentity ;
Getting error:
NOTICE: word is too long
Postgresql 8.1.
Could anyone please tell me why?
Ah, I finally found it http://pgfoundry.org/projects/textsearch-ja/
- Original Message -
From: "Gordon Callan"
To: pgsql-general@postgresql.org
Sent: Monday, November 9, 2009 2:36:18 PM GMT -08:00 US/Canada Pacific
Subject: [GENERAL] Tsearch2 with Japanese
Does a
Does anyone know where I can locate a Japanese parser and dictionary to use
with Tsearch2? There was a link (http://www.oss.ecl.ntt.co.jp/tsearch2j/
) to a Contrib at one time but this link is now dead :-( Any leads would be
appreciated.
Hi,
I am using a Synology NAS and want to use the installed postgresql 8.2.5
database as backend for mediawiki 1.25.1.
The problem is that mediawiki needs tsearch2 installed, but I can't find a
solution to get tsearch2 on my NAS. There are several packages for nearly
all linux distribution
Hi!
I realized that loading a dictionary with ~16 words consumes
additional ~40 MB of memory for each connection. It obviously doesn't
use shared memory. Is it possible to decrease the memory consumption?
I found this thread
http://www.mail-archive.com/pgsql-general@postgresql.org/msg116924
Oleg Bartunov wrote:
> contrib/test_parser - an example parser code.
Using that as a template, I seem to be on track to use the regexp.c
code to pick out statute cites from the text in my start function, and
recognize when I'm positioned on one in my getlexeme (GETTOKEN)
function, delegating ev
Tom Lane wrote:
> "Kevin Grittner" writes:
>> Can I use a different set of dictionaries
>> for creating the tsquery than I did for the tsvector?
>
> Sure, as long as the tokens (normalized words) that they produce
> match up for words that you want to have match. Once the tokens
> come out, t
On Tue, 7 Apr 2009, Kevin Grittner wrote:
Oleg Bartunov wrote:
of course, you can build the tsquery yourself, but once your parser can
recognize your very own token 'xxx', it'd be much better to have a
mapping xxx -> dict_xxx, where dict_xxx knows all the semantics.
I probably just need to have that "A
"Kevin Grittner" writes:
> Can I use a different set of dictionaries
> for creating the tsquery than I did for the tsvector?
Sure, as long as the tokens (normalized words) that they produce match
up for words that you want to have match. Once the tokens come out,
they're just strings as far as
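A small illustration of the point: matching only requires that both sides normalize to the same lexemes, even if they were built with different configurations (configuration names here are the built-in ones):

```sql
-- 'english' stems 'dogs' to 'dog'; the unstemmed 'simple' vector
-- contains 'dog' literally, so the two sides still match:
SELECT to_tsvector('simple', 'dog') @@ to_tsquery('english', 'dogs');
-- true
```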
Oleg Bartunov wrote:
> of course, you can build the tsquery yourself, but once your parser can
> recognize your very own token 'xxx', it'd be much better to have a
> mapping xxx -> dict_xxx, where dict_xxx knows all the semantics.
I probably just need to have that "Aha!" moment, slap my forehead, and
move
On Tue, 7 Apr 2009, Kevin Grittner wrote:
If the document text contains '341.15(3)' I want to find it with a
search string of '341', '341.15', '341.15(3)' but not '341.15(3)(b)',
'341.1', or '15'. How do I handle that? Do I have to build my
tsquery values myself as text and cast to tsquery, or
Oleg Bartunov wrote:
> contrib/test_parser - an example parser code.
Thanks! Sorry I missed that.
-Kevin
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
Kevin,
contrib/test_parser - an example parser code.
On Mon, 6 Apr 2009, Kevin Grittner wrote:
Tom Lane wrote:
"Kevin Grittner" writes:
People are likely to search for statute cites, which tend to have a
hierarchical form.
I think what you need is a custom parser
I've just returned to
Tom Lane wrote:
> regexp substitution
I found a way to at least keep the cite in one piece. Perhaps I can
do the rest in custom dictionaries, which are more pluggable.
select ts_debug
('State Statute pertaining to');
ts_debug
-
Tom Lane wrote:
> Perhaps you could pass the texts and the queries through a regexp
> substitution that converts digit-dot-digit to digit-dash-digit?
This doesn't seem to get me anywhere. For cite '9.125.07(4A)(3)'
I got this:
select ts_debug('9-125-07-4A-3');
ts_
Tom Lane wrote:
> "Kevin Grittner" writes:
>> People are likely to search for statute cites, which tend to have a
>> hierarchical form.
> I think what you need is a custom parser
I've just returned to this and after review have become convinced that
this is absolutely necessary; once the def
>>> Oleg Bartunov wrote:
> On Tue, 10 Mar 2009, Tom Lane wrote:
>> "Kevin Grittner" writes:
>>> People are likely to search for statute cites, which tend to have a
>>> hierarchical form. I'm not sure the prefix approach will work for
>>> this. For example, there is a section 939.64 in the stat
On Tue, 10 Mar 2009, Tom Lane wrote:
"Kevin Grittner" writes:
People are likely to search for statute cites, which tend to have a
hierarchical form. I'm not sure the prefix approach will work for
this. For example, there is a section 939.64 in the state statutes
dealing with commission of a
"Kevin Grittner" writes:
> People are likely to search for statute cites, which tend to have a
> hierarchical form. I'm not sure the prefix approach will work for
> this. For example, there is a section 939.64 in the state statutes
> dealing with commission of a crime while wearing a bulletproof
I broached this topic last year[1], but the project got tabled until
now; so I raise it again. We want to be able to search text
(extracted from character-based PDF files) which will contain legal
terms and statute cites, and we want to be able to do tsearch2
searches (under 8.3.recent). It's cle
Hello
this bug was reported two weeks ago and it is fixed in 8.3.6.
regards
Pavel Stehule
2009/2/11 Howard Cole :
> Hi,
>
> I am in the process of updating a database from 8.2 to 8.3 and need a little
> help with the tsearch2 update.
>
> Prior to restoring my 8.3 backup, I ran the tsearch2.sql o
RTFM!
Just read the part about ditching the tsearch2 function. Sincere apologies.
Howard.
Howard Cole wrote:
execute procedure tsearch2('fts','column1','column2');
Hi,
I am in the process of updating a database from 8.2 to 8.3 and need a
little help with the tsearch2 update.
Prior to restoring my 8.3 backup, I ran the tsearch2.sql on the new
database, however I am having a little problem with triggers - when I
run an update on a table I get the followi
(reposted, in case people who could help me have missed my 2-weeks-ago post)
Hello.
I am looking for a Chinese dictionary for TSearch2 Postgresql feature,
with no success yet.
I found a reference on a chinese webpage:
http://code.google.com/p/nlpbamboo/wiki/TSearch2
With the help of transla
Hello.
I am looking for a Chinese dictionary for TSearch2 Postgresql feature,
with no success yet.
I found a reference on a chinese webpage:
http://code.google.com/p/nlpbamboo/wiki/TSearch2
With the help of translate.google.com, I thought I managed to figure out
what to do: I installed "nlpba
On Oct 31, 2008, at 6:30 AM, Jodok Batlogg wrote:
nevertheless I still have the problem that words with '/' are being
interpreted as file paths instead of words. Any idea how I could tweak
this?
The easiest solution I found was to replace '/' with a space before
parsing the text.
John
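The pre-substitution John describes can be done inline when building the vector; for example:

```sql
-- Replacing the slash before parsing keeps the parser from treating
-- the whole thing as a file path:
SELECT to_tsvector('english', replace('red/green flags', '/', ' '));
-- 'flag':3 'green':2 'red':1
```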
Sergio,
On Fri, 31 Oct 2008, Ivan Sergio Borgonovo wrote:
On Fri, 31 Oct 2008 13:10:20 +0300 (MSK)
Oleg Bartunov <[EMAIL PROTECTED]> wrote:
Jodok,
you got what you defined. Please read the documentation.
In short, a word isn't indexed if it is not recognized by any
dictionary in the stack of
On Fri, 31 Oct 2008, Jodok Batlogg wrote:
hi oleg,
thanks for your quick response,
2008/10/31 Oleg Bartunov <[EMAIL PROTECTED]>:
Jodok,
you got what you defined. Please read the documentation.
In short, a word isn't indexed if it is not recognized by any
dictionary in the stack of dictionarie
On Fri, 31 Oct 2008 13:10:20 +0300 (MSK)
Oleg Bartunov <[EMAIL PROTECTED]> wrote:
> Jodok,
>
> you got what's you defined. Please, read documentation.
> In short, word doesn't indexed if it is not recognized by any
> dictionaried from stack of dictionaries. Put stemming dictionary
> at the end, w
hi oleg,
thanks for your quick response,
2008/10/31 Oleg Bartunov <[EMAIL PROTECTED]>:
> Jodok,
>
> you got what you defined. Please read the documentation.
> In short, a word isn't indexed if it is not recognized by any
> dictionary in the stack of dictionaries. Put a stemming dictionary at the end
Jodok,
you got what you defined. Please read the documentation.
In short, a word isn't indexed if it is not recognized by any
dictionary in the stack of dictionaries. Put a stemming dictionary at the end,
which recognizes everything.
Oleg
On Fri, 31 Oct 2008, Jodok Batlogg wrote:
we're using tsea
we're using tsearch2 with the german dictionary
http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/dicts/ispell/ispell-german-compound.tar.gz
for fulltext search.
the indexing is configured as follows:
CREATE TEXT SEARCH DICTIONARY public.german (
TEMPLATE = ispell,
DictFile = german,
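Following Oleg's advice above, the stack for this setup would put the ispell dictionary first and a stemmer last, so words ispell does not recognize are still indexed. A hedged sketch; the configuration name is hypothetical and the token list is only the common word types:

```sql
-- Unrecognized words fall through public.german to german_stem,
-- which accepts everything:
ALTER TEXT SEARCH CONFIGURATION public.german_cfg
    ALTER MAPPING FOR asciiword, word, hword, hword_part
    WITH public.german, pg_catalog.german_stem;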
On Tue, 21 Oct 2008 13:40:33 -0400
Tom Lane <[EMAIL PROTECTED]> wrote:
> Ivan Sergio Borgonovo <[EMAIL PROTECTED]> writes:
> > I missed it. Thanks. Nearly perfect. Now I have to understand what
> > a {} is.
> > An array with a null element? an empty array? an array
> > containing ''?
>
> Hmm ... it
Ivan Sergio Borgonovo <[EMAIL PROTECTED]> writes:
> I missed it. Thanks. Nearly perfect. Now I have to understand what a
> {} is.
> An array with a null element? an empty array? an array containing ''?
Hmm ... it appears that ts_lexize is returning a one-dimensional array of
no elements, whereas '{}
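The distinction under discussion is visible directly with ts_lexize() on a built-in dictionary:

```sql
-- NULL means "not recognized by this dictionary";
-- an empty array means "recognized as a stop word":
SELECT ts_lexize('english_stem', 'the');    -- {}
SELECT ts_lexize('english_stem', 'posts');  -- {post}
```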
On Tue, 21 Oct 2008 10:36:20 -0400
Tom Lane <[EMAIL PROTECTED]> wrote:
> Ivan Sergio Borgonovo <[EMAIL PROTECTED]> writes:
> > It would still be nice to be able to directly work with tsvector
> > and tsquery so people could exploit the parser, lexer etc... and
> > recycle the config.
>
> > I'd th
Ivan Sergio Borgonovo <[EMAIL PROTECTED]> writes:
> It would still be nice to be able to directly work with tsvector
> and tsquery so people could exploit the parser, lexer etc... and
> recycle the config.
> I was thinking of something along the lines of
> for lex in select * from to_tsvector('jsjdjd fdsds
On Tue, 21 Oct 2008 13:20:12 +0200
Ivan Sergio Borgonovo <[EMAIL PROTECTED]> wrote:
> On Tue, 21 Oct 2008 10:29:52 +0200
> Ivan Sergio Borgonovo <[EMAIL PROTECTED]> wrote:
>
> I came across this:
> http://grokbase.com/topic/2007/08/07/general-tsear
On Tue, 21 Oct 2008 10:29:52 +0200
Ivan Sergio Borgonovo <[EMAIL PROTECTED]> wrote:
I came across this:
http://grokbase.com/topic/2007/08/07/general-tsearch2-plainto-tsquery-with-or/r92nI5l_k9S4iKcWdCxKs05yFQk
And I find it is strictly related to my needs.
Working around ts_parse I could
plainto_tsquery is handy for turning a string from a user into a
tsquery.
It strips "control" characters and glues lexemes together with &.
Now I have several strings coming from user input, and what I'd like to
do is assign a different token to each part.
eg.
input1 = "ratto && matto | gatto & the"
input2
On Thu, 2 Oct 2008, Matthew Terenzio wrote:
There are less than 20,000 records being searched here, but the query takes
several minutes.
I know this may not be enough info, but would one suggest I optimize the
query or put my attention towards other areas.
SELECT id,date,headline as head,head
There are less than 20,000 records being searched here, but the query takes
several minutes.
I know this may not be enough info, but would one suggest I optimize the
query or put my attention towards other areas.
SELECT id,date,headline as head,headline(body,q),rank(vectors,q),timestamp
FROM sto
Hi Richard
Thanks for your help. I'll try that.
Darragh
On Wed, Oct 1, 2008 at 6:03 PM, Richard Huxton <[EMAIL PROTECTED]> wrote:
> Darragh Gammell wrote:
> > I am currently upgrading from 8.1 to 8.3 and am getting errors when
> > restoring the dump from 8.1 into 8.3. Like below:
>
> > I have
Darragh Gammell wrote:
> I am currently upgrading from 8.1 to 8.3 and am getting errors when
> restoring the dump from 8.1 into 8.3. Like below:
> I have read this is due to the tsearch2 functions being moved into the core
> section of postgres and I'll need to do some editing after the dump to
>
Hi
I am currently upgrading from 8.1 to 8.3 and am getting errors when
restoring the dump from 8.1 into 8.3. Like below:
ERROR: could not find function "gtsvector_in" in file
"/usr/lib/postgresql/8.3/lib/tsearch2.so"
ERROR: function public.gtsvector_in(cstring) does not exist
ERROR: could not
>
> explain analyze
>> select * from test.test_tsq
>> where to_tsvector('40x40') @@ q
>>
>
> why do you need tsvector @@ q ? Much better to use tsquery = tsquery
>
> test=# explain analyze select * from test_tsq where q =
> '40x40'::tsque>
>
On Fri, 12 Sep 2008, Dmitry Koterov wrote:
Hello.
TSearch2 allows searching a table of tsvectors with a single tsquery.
I need to solve the reverse problem.
*I have a large table of tsqueries. I need to find all tsqueries in that table
that match a single document tsvector:
*
CREATE TABLE "test"."
Hello.
TSearch2 allows searching a table of tsvectors with a single tsquery.
I need to solve the reverse problem.
*I have a large table of tsqueries. I need to find all tsqueries in that table
that match a single document tsvector:
*
CREATE TABLE "test"."test_tsq" (
"id" SERIAL,
"q" TSQUERY NOT N
Hello!
I found the solution:
Normal export.
Normal upgrade procedure.
su - postgres
# Upgrade tsearch2
#
http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/docs/tsearch-V2-intro.html
createdb wikidb
psql wikidb < /usr/share/pgsql/contrib/tsearch2.sql
psql < pg_dumpall.sql postgres
Ciao,
Hello,
We definitely came across this issue recently. When a new postgres
backend is started it uses ~3MB of memory according to pmap.
When one runs within this backend several typical queries that our
application generates, its memory consumption increases to 5-8MB, which is
not critical for us. B
Craig Ringer wrote:
This is probably a stupid question, but: with PostgreSQL's use of
shared memory, is it possible to load dictionaries into a small
reserved shm area when the first backend starts, then use the
preloaded copy in subsequent backends?
That way the postmaster doesn't have to do an
Tom Lane wrote:
What I think *is* worth doing is spending some time on making dictionary
loading go faster.
This is probably a stupid question, but: with PostgreSQL's use of shared
memory, is it possible to load dictionaries into a small reserved shm
area when the first backend starts, then
Teodor Sigaev <[EMAIL PROTECTED]> writes:
>> Hmm, good point; I presume "accept the fact that settings change won't
>> propagate to other backends until reconnect" would not be acceptable
>> behavior, even if documented along with the relevant configuration option?
> I suppose so. That was one o
Teodor Sigaev wrote:
As for downsides, I only really see two:
* Tracking updates of dictionaries - but it's reasonable to believe
that new connections are opened more often than the dictionary gets
updated. Also, this might be easily solved by stat()-ing the
dictionary file before starting up s
Hmm, good point; I presume "accept the fact that settings change won't
propagate to other backends until reconnect" would not be acceptable
behavior, even if documented along with the relevant configuration option?
I suppose so. That was one of the reasons to move tsearch into core and it wi
* Considering the database is loaded separately for each session, does
this also imply that each running backend has a separate dictionary
stored in memory?
Yes.
As for downsides, I only really see two:
* Tracking updates of dictionaries - but it's reasonable to believe
that new connection
Hello,
I'd like to ask about two separate things regarding tsearch2 in
PostgreSQL 8.3.
Firstly, I've noticed that dictionary is loaded on-demand specifically
for each session, and apparently this behavior cannot be changed in any way.
If that's the case, would it be reasonable to ask for an
Hello!
I want to upgrade from 8.2 to 8.3.1 but I have problems:
I did a pg_dumpall but this doesn't work. I found the migration guide with
a trick to load the new contrib/tsearch2 module. But how is this done
exactly?
-
http
Hi,
I'm trying to use tsearch2 with PostgreSQL 8.2. What I am trying to
do is: from a text, search the text and synonyms, excluding words
that don't mean anything, like ("what", "the", "of").
How can I configure the dictionaries to use both synonym and stop-word
dictionaries?
Can anyone creat
Craig Ringer <[EMAIL PROTECTED]> writes:
> Is there any chance your contrib package does not match the core
> PostgreSQL version or is from a different source?
qsort_arg was added in 8.2, so it seems certain he's trying to load an
8.2 tsearch2 into his 8.1 engine.
regards,
Corin Schedler wrote:
> Hi all,
>
> I'm having some trouble installing tsearch2 in to my database. I'm
> running 8.1.11 on CentOS 5.
Where did the packages come from? Where they part of CentOS / RHEL, or
are they obtained from somewhere else?
Is there any chance your contrib package does not mat
Hi all,
I'm having some trouble installing tsearch2 in to my database. I'm running
8.1.11 on CentOS 5.
I'm getting the following output after trying 'psql db < tsearch2.sql'.
SET
BEGIN
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index
"pg_ts_dict_pkey" for table "pg_ts_dict"
CREATE
Mario Ignacio Rodríguez Cortés wrote:
> But in postgresql-8.3.1:
>
> SELECT to_tsvector('spanish','estadística');
> to_tsvector
> -
> 'stic':2
> (1 row)
It works for me:
alvherre=# SELECT to_tsvector('spanish','estadística');
to_tsvector
--
'estadist':1
(1 fila)
Hi All:
I have installed PostgreSQL 8.3.1 on a Gentoo server, but I think that
the Spanish dictionary isn't the correct one, because I have another two
machines with other PostgreSQL versions and tsearch2 installed, and a
simple test that I do is to make a query with the Spanish dictionary; I get
the follo
On Fri, 2008-04-11 at 22:07 +0400, Oleg Bartunov wrote:
> We have the same problem with names in astronomy, so we implemented
> dict_regex http://vo.astronet.ru/arxiv/dict_regex.html
> Check it out !
Oleg-
This gets me a lot closer. Thank you. I have two remaining problems.
The first problem
We have the same problem with names in astronomy, so we implemented
dict_regex http://vo.astronet.ru/arxiv/dict_regex.html
Check it out !
Oleg
On Thu, 10 Apr 2008, Reece Hart wrote:
I'd like to use tsearch2 to index protein and gene names. Unfortunately,
such names are written inconsistently a
Reece Hart <[EMAIL PROTECTED]> writes:
> For the purposes of indexing these names, I suspect I'd get the majority
> of cases by removing a hyphen when it's followed by 1 or 2 chars from
> [a-zA-Z0-9]. Does that require a custom parser?
Yeah, looks like it:
regression=# select * from ts_debug('MCL
I'd like to use tsearch2 to index protein and gene names. Unfortunately,
such names are written inconsistently and sometimes with hyphens. For
example, MCL-1 and MCL1 are semantically equivalent but with the default
parser and to_tsvector, I see this:
[EMAIL PROTECTED]> select to_tsvector(
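Short of the custom parser Tom suggests, a regexp preprocessing pass over both documents and queries covers the "majority of cases" rule Reece states. The pattern is an assumption, not the thread's solution; \M matches end-of-word in PostgreSQL's ARE regex syntax:

```sql
-- Strip a hyphen when it is followed by a 1-2 character
-- alphanumeric suffix at a word boundary (MCL-1 -> MCL1):
SELECT regexp_replace('MCL-1 binds MCL1', '(\w)-(\w{1,2})\M', '\1\2', 'g');
-- MCL1 binds MCL1
```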
Martijn van Oosterhout <[EMAIL PROTECTED]> writes:
> On Wed, Mar 19, 2008 at 07:55:40PM -0400, Tom Lane wrote:
>> (that's \303\240 or 0xc3 0xa0). I am thinking that something decided
>> the \240 was junk and removed it.
> Hmm, it is coincidently the space character +0x80, which is defined as
> a
On Wed, Mar 19, 2008 at 07:55:40PM -0400, Tom Lane wrote:
> (that's \303\240 or 0xc3 0xa0). I am thinking that something decided
> the \240 was junk and removed it.
Hmm, it is coincidently the space character +0x80, which is defined as
a non-breaking space in many Latin encodings. Perhaps ctype d
SELECT 'abc'::text || 'def'::text;
it's working fine (no need to convert the query ASCII to UTF8 or such
i am using pgadmin (1.8.2) to pass the query:
show client_encoding = UNICODE.
in postgresql.conf i have:
client_encoding; Value = UTF8, Current value = UNICODE;
i tried to restart postgresql
Richard Huxton <[EMAIL PROTECTED]> writes:
> Missed the mailing list on the last reply
>> patrick wrote:
>>> thoses queries are not working, same message:
>>> ERROR: invalid byte sequence for encoding "UTF8": 0xc3
>>>
>>> what i found is in postgresql.conf if i change:
>>> default_text_search_confi
Missed the mailing list on the last reply
Richard Huxton wrote:
patrick wrote:
hi richard,
thanks for your help! i found something... but first let me answer
your question:
UPDATE product SET search_vector = to_tsvector(name);
UPDATE product SET search_vector = setweight(to_tsvector(name),
Richard Huxton <[EMAIL PROTECTED]> writes:
> The issue is what characters were in your script file.
I'm wondering about non-UTF8 characters in the dictionary file(s) used
by the text search configuration. Failure to load a configuration
file would explain why it only shows up in tsearch-related q
patrick wrote:
Can you identify which row(s) are causing this problem? If we have the
value that's causing this, someone can reproduce it.
i have only 1 row:
46; "the product name"; "the description";
i don't see any special chars or accents.
I think I've reproduced it here, and it's not yo
patrick wrote:
SELECT 'abc'::text || 'def'::text;
it's working fine (no need to convert the query ASCII to UTF8 or such
OK, now try each of these in turn:
UPDATE product SET search_vector = to_tsvector(name);
UPDATE product SET search_vector = setweight(to_tsvector(name), 'A');
UPDATE product
Can you identify which row(s) are causing this problem? If we have the
value that's causing this, someone can reproduce it.
i have only 1 row:
46; "the product name"; "the description";
i don't see any special chars or accents.
knowing that some of my clients are french, should i use LATIN9 a
patrick wrote:
hi,
i have an issue with tsearch2, i just installed postgresql 8.3.1 on
windows using UTF8 server encoding / client encoding and LOCALE Canada /
French.
UPDATE product SET search_vector = setweight(to_tsvector(name), 'A') ||
to_tsvector(description);
ERROR: invalid byte sequ
hi,
i have an issue with tsearch2, i just installed postgresql 8.3.1 on windows
using UTF8 server encoding / client encoding and LOCALE Canada / French.
CREATE DATABASE mydb WITH OWNER = me ENCODING = 'UTF8';
CREATE TABLE product
(
product_id SERIAL NOT NULL,
name VAR
On Thu, 13 Mar 2008, Sushant Sinha wrote:
A document may contain date in the traditional format. For example it
may contain '11/1/2007'. It will be useful if we can directly search for
year in a document. However, the 'default' tsearch2 parser does not
break down integers separated by '/'. So I
A document may contain date in the traditional format. For example it
may contain '11/1/2007'. It will be useful if we can directly search for
year in a document. However, the 'default' tsearch2 parser does not
break down integers separated by '/'. So my search for '2007' will not
match tsvector
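The same pre-substitution workaround discussed elsewhere in this archive applies here; a sketch (the default parser keeps slash-joined text as one path-like token, so the slashes must go before parsing):

```sql
-- After substitution the date splits into separate integer tokens,
-- so '2007' is individually searchable:
SELECT to_tsvector('simple', replace('filed 11/1/2007', '/', ' '));
-- '1':3 '11':2 '2007':4 'filed':1
```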
On Tuesday 12 February 2008 10:26, Tom Lane wrote:
> Richard Huxton <[EMAIL PROTECTED]> writes:
> > Oliver Weichhold wrote:
> >> Is there something like a Migration Guide from 8.2 to
> >> 8.3 for tsearch2 users?
> >
> > Hmm - there was a blog posting recently that linked to a load of
> > migration s
Richard Huxton <[EMAIL PROTECTED]> writes:
> Oliver Weichhold wrote:
>> Is there something like a Migration Guide from 8.2 to
>> 8.3 for tsearch2 users?
> Hmm - there was a blog posting recently that linked to a load of
> migration stuff...
There's always RTFM:
http://www.postgresql.org/docs/8.3/
Oliver Weichhold wrote:
Hi
I run a site with several MediaWiki installations all running on PostgreSQL
8.2.5 utilizing TSearch2. Is there something like a Migration Guide from 8.2 to
8.3 for tsearch2 users?
Hmm - there was a blog posting recently that linked to a load of
migration stuff...
H
Hi
I run a site with several MediaWiki installations all running on PostgreSQL
8.2.5 utilizing TSearch2. Is there something like a Migration Guide from 8.2 to
8.3 for tsearch2 users?
Cheers
Oliver
"James Reynolds" <[EMAIL PROTECTED]> writes:
> I want to convert a TEXT string that I am mangling to TSVECTOR with a cast.
> I am using Postgresql 8.1.6 and tsearch2.
> According to the documentation this should work although I am getting an
> ERROR.
> tsearch2 reference on www.sai.msu.su says th
Hi,
I want to convert a TEXT string that I am mangling to TSVECTOR with a cast.
I am using Postgresql 8.1.6 and tsearch2.
According to the documentation this should work although I am getting an
ERROR.
tsearch2 reference on www.sai.msu.su says that
text::TSVECTOR RETURNS TSVECTOR
FWIW, I am
Hi All
I have PostgreSQL 8.2.6 running on Windows. I tried to install the Slovak
dictionary for tsearch2.
INSERT INTO pg_ts_dict
VALUES('ispell_slovak','spell_init(internal)','DictFile="C:/slovak_utf8.dict",
AffFile="C:/slovak_utf8.aff", StopFile="C:/slovak_utf8.stop"',
'spell_lexize(internal,
On Sat, 26 Jan 2008, Sushant Sinha wrote:
I want to remove stop words but do not want to stem the words. Is there
an interface in tsearch2 that allows me to do this?
Basically I am trying to implement spelling corrections and do not want
to correct stop words.
Create custom dictionary using
I want to remove stop words but do not want to stem the words. Is there
an interface in tsearch2 that allows me to do this?
Basically I am trying to implement spelling corrections and do not want
to correct stop words.
Thanks,
-Sushant.
---(end of broadcast)
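On 8.3+ syntax, the "custom dictionary" Oleg points to can be built from the built-in simple template, which lowercases and filters stop words without stemming. The dictionary name below is hypothetical:

```sql
-- Stop words are dropped, everything else passes through unstemmed:
CREATE TEXT SEARCH DICTIONARY english_nostem (
    TEMPLATE = pg_catalog.simple,
    StopWords = english
);

SELECT ts_lexize('english_nostem', 'the');      -- {}
SELECT ts_lexize('english_nostem', 'Running');  -- {running}
```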
"Satch Jones" <[EMAIL PROTECTED]> writes:
> Hello - I can't get tsearch2 running in a long-functioning instance of
> PostgreSQL 8.1.9 on Fedora Core 5, and could use some help.
Rather than trying to compile it yourself, why don't you just install
the postgresql-contrib RPM that goes with the postg