[
https://issues.apache.org/jira/browse/LUCENE-8526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16640244#comment-16640244
]
Steve Rowe commented on LUCENE-8526:
------------------------------------
bq. Lucene should update to the recently released JFlex 1.7, which supports
Unicode 9.0. (I'll go make an issue.)
See LUCENE-8527
> StandardTokenizer doesn't separate hangul characters from other non-CJK chars
> -----------------------------------------------------------------------------
>
> Key: LUCENE-8526
> URL: https://issues.apache.org/jira/browse/LUCENE-8526
> Project: Lucene - Core
> Issue Type: Improvement
> Reporter: Jim Ferenczi
> Priority: Minor
>
> It was first reported here:
> https://github.com/elastic/elasticsearch/issues/34285.
> I don't know if it's the expected behavior, but the StandardTokenizer does not
> split words which are composed of a mix of non-CJK characters and hangul
> syllables. For instance, "한국2018" or "한국abc" is kept as is by this tokenizer
> and marked as an alpha-numeric group. This breaks the CJKBigram token filter,
> which will not build bigrams on such groups. The other CJK characters are
> correctly split when they are mixed with other alphabets, so I'd expect the
> same for hangul.
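A minimal sketch of how the reported behavior can be observed, assuming a recent Lucene 7.x on the classpath; the class name HangulSplitCheck and the sample strings are illustrative only, not part of the issue:

{code:java}
import java.io.StringReader;

import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.TypeAttribute;

public class HangulSplitCheck {
    public static void main(String[] args) throws Exception {
        // Tokenize mixed Hangul/Latin and Hangul/digit strings and print each
        // token with its type. Per this issue, "한국abc" and "한국2018" are
        // reported to come back as single alpha-numeric tokens instead of being
        // split at the script boundary, which is what CJKBigramFilter needs.
        StandardTokenizer tokenizer = new StandardTokenizer();
        tokenizer.setReader(new StringReader("한국abc 한국2018"));
        CharTermAttribute term = tokenizer.addAttribute(CharTermAttribute.class);
        TypeAttribute type = tokenizer.addAttribute(TypeAttribute.class);
        tokenizer.reset();
        while (tokenizer.incrementToken()) {
            System.out.println(term.toString() + " -> " + type.type());
        }
        tokenizer.end();
        tokenizer.close();
    }
}
{code}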