You can definitely use the Pattern Tokenizer to define your own token 
separators (i.e. the word boundary breaks), but you will add complexity and 
lose some of the benefits of the StandardTokenizer.

First, regarding complexity: if you want certain characters not to become 
token separators, your regex will grow increasingly complex and lengthy ... I 
know from experience, as I have used the Pattern Tokenizer previously. 
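
To give a rough idea, the settings end up looking something like the sketch 
below (the tokenizer / analyzer names and the exact pattern are just an 
illustration, not something from my plugin) - a negated character class that 
you have to keep extending for every symbol you want to preserve:

  {
    "settings": {
      "analysis": {
        "tokenizer": {
          "keep_mentions_hashtags": {
            "type": "pattern",
            "pattern": "[^\\w@#]+"
          }
        },
        "analyzer": {
          "social_text": {
            "type": "custom",
            "tokenizer": "keep_mentions_hashtags",
            "filter": ["lowercase"]
          }
        }
      }
    }
  }

Every new symbol you decide to keep (say '$' for cashtags) means widening 
that character class and reindexing, and the pattern still only knows about 
ASCII word characters unless you start adding Unicode flags / classes to it.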

Second, you will lose the benefits of how the StandardTokenizer handles 
international characters / symbols and special characters outside the ASCII 
range of Unicode (e.g. left-to-right marks, etc.). It's probably impossible 
to capture everything the StandardTokenizer does in a single regular 
expression.
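
As a quick illustration of that second point (the output shown is roughly 
what you'd get back from the _analyze API), the standard tokenizer deals 
with non-ASCII letters out of the box, while the ASCII-only \w in the 
pattern sketch above turns accented characters into word breaks:

  curl -XGET 'localhost:9200/_analyze?tokenizer=standard' -d 'café crème @foo'
  # tokens: "café", "crème", "foo"

  # assuming the pattern tokenizer above is defined on a (hypothetical)
  # index named "test"
  curl -XGET 'localhost:9200/test/_analyze?tokenizer=keep_mentions_hashtags' -d 'café crème @foo'
  # tokens: "caf", "cr", "me", "@foo"

You can work around that with (?U) flags or \p{L} classes in the pattern, but 
that's exactly the kind of regex complexity I was referring to above.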

If you are dealing with very clean data and perhaps English-only text, then 
the Pattern Tokenizer could be a viable solution. But when dealing with 
web / user-generated data and many languages, the StandardTokenizer is 
your friend.

- Bryan

On Monday, September 8, 2014 11:36:08 PM UTC-4, vineeth mohan wrote:
>
> Hello Bryan ,
>
> Congrats on your first plugin. 
> I have a question here - could you implement the whole plugin using the 
> pattern tokenizer? 
> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/analysis-pattern-tokenizer.html
>
> Does your plugin provide any advantage over this approach?
>
> Thanks
>           Vineeth
>
> On Tue, Sep 9, 2014 at 7:56 AM, Bryan Warner <bryan....@gmail.com> wrote:
>
>> Hi all,
>>
>> Recently, I've been working on an extension to Lucene's Standard 
>> Tokenizer that allows the user to customize / override the default word 
>> boundary break rules for Unicode characters. The Standard Tokenizer 
>> implements the word break rules from the Unicode Text segmentation 
>> <http://www.unicode.org/reports/tr29/> algorithm, where most punctuation 
>> symbols (except for underscore '_') are treated as hard word breaks (e.g. 
>> "@foo", "#foo" are tokenized to "foo"). While the Standard Tokenizer works 
>> great in most cases, I found that being unable to override the default word 
>> break rules was quite limiting, especially since a lot of these punctuation 
>> symbols have important meaning now on the web (@ - mentions, # - hashtags, 
>> etc.).
>>
>> I've wrapped this extension to the Standard Tokenizer in an ElasticSearch 
>> plugin, which can be found at - 
>> https://github.com/bbguitar77/elasticsearch-analysis-standardext ... 
>> definitely looking for feedback as this is my first go at an ElasticSearch 
>> plugin!
>>
>> I'm hoping other ElasticSearch / Lucene users find this helpful.
>>
>> Cheers!
>> Bryan
>>
>
>
