On 12/13/2011 02:29 AM, Ephraim Stevens wrote:
I'm using a custom tokenizer in each scenario (yes it works and the proof
is enclosed). In the first dataset, the data was tokenized such that any
alphanumeric character qualifies as part of a token.

In the second dataset, the data was tokenized such that anything other than
a semicolon qualifies as part of a token.

The issues I'm raising are that 1) the '*' expansion doesn't seem to work
for alphanumeric/non-alphanumeric token matches, and 2) neither does the OR
operator. I haven't figured out what I'm missing here. I've been banging my
head all morning against this. I appreciate any help.

Do your custom tokenizers allow whitespace or "*" characters
to be part of tokens? If so, try changing them so that they
do not.
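To illustrate the advice above: a minimal sketch of the two tokenization rules described in the question (this is plain Python, not SQLite's actual fts3_tokenizer C API, and the predicate names are made up for illustration). Under the "anything but a semicolon" rule, whitespace and '*' end up inside tokens, so the query text "foo*" is itself one token containing '*' and FTS can no longer interpret the '*' as its prefix operator.

```python
# Sketch only: shows why a tokenizer that admits '*' or whitespace
# into tokens defeats FTS prefix ('*') and OR query handling.

def tokenize(text, is_token_char):
    """Split text into tokens using a per-character predicate."""
    tokens, current = [], []
    for ch in text:
        if is_token_char(ch):
            current.append(ch)
        elif current:
            tokens.append("".join(current))
            current = []
    if current:
        tokens.append("".join(current))
    return tokens

# Rule 1 (first dataset): only alphanumeric characters are token characters.
alnum = lambda ch: ch.isalnum()

# Rule 2 (second dataset): anything other than a semicolon is a token character.
not_semi = lambda ch: ch != ";"

print(tokenize("foo* bar;baz", alnum))     # ['foo', 'bar', 'baz']
print(tokenize("foo* bar;baz", not_semi))  # ['foo* bar', 'baz']
```

With rule 1 the '*' is a separator, so "foo*" in a MATCH expression tokenizes to "foo" and the trailing '*' can act as the prefix operator; with rule 2 the '*' (and the space) are swallowed into a single token, which is consistent with the failure being reported.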



_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
