[jira] [Updated] (LUCENE-2522) add simple japanese tokenizer, based on tinysegmenter

2014-03-15 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-2522:
-

Fix Version/s: (was: 4.7)
   4.8

> add simple japanese tokenizer, based on tinysegmenter
> -
>
> Key: LUCENE-2522
> URL: https://issues.apache.org/jira/browse/LUCENE-2522
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Reporter: Robert Muir
>Priority: Minor
> Fix For: 4.8
>
> Attachments: LUCENE-2522.patch, LUCENE-2522.patch, LUCENE-2522.patch
>
>
> TinySegmenter (http://www.chasen.org/~taku/software/TinySegmenter/) is a tiny 
> japanese segmenter.
> It was ported to java/lucene by Kohei TAKETA , 
> and is under friendly license terms (BSD, some files explicitly disclaim 
> copyright to the source code, giving a blessing instead)
> Koji knows the author, and already contacted about incorporating into lucene:
> {noformat}
> I've contacted Takeda-san who is the creater of Java version of
> TinySegmenter. He said he is happy if his program is part of Lucene.
> He is a co-author of my book about Solr published in Japan, BTW. ;-)
> {noformat}






[jira] [Updated] (LUCENE-2522) add simple japanese tokenizer, based on tinysegmenter

2013-05-09 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-2522:
--

Fix Version/s: (was: 4.3)
   4.4

> add simple japanese tokenizer, based on tinysegmenter
> -
>
> Key: LUCENE-2522
> URL: https://issues.apache.org/jira/browse/LUCENE-2522
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Reporter: Robert Muir
>Priority: Minor
> Fix For: 4.4
>
> Attachments: LUCENE-2522.patch, LUCENE-2522.patch, LUCENE-2522.patch
>
>
> TinySegmenter (http://www.chasen.org/~taku/software/TinySegmenter/) is a tiny 
> japanese segmenter.
> It was ported to java/lucene by Kohei TAKETA , 
> and is under friendly license terms (BSD, some files explicitly disclaim 
> copyright to the source code, giving a blessing instead)
> Koji knows the author, and already contacted about incorporating into lucene:
> {noformat}
> I've contacted Takeda-san who is the creater of Java version of
> TinySegmenter. He said he is happy if his program is part of Lucene.
> He is a co-author of my book about Solr published in Japan, BTW. ;-)
> {noformat}




[jira] [Updated] (LUCENE-2522) add simple japanese tokenizer, based on tinysegmenter

2011-04-23 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-2522:


Attachment: LUCENE-2522.patch

Attached is an updated patch; it's still a work in progress (it needs more tests, benchmarking, and some other little fixes).

There's a general pattern for these segmenters (this one, smartchinese, sen) that's a little tricky: they really want to look at whole sentences to determine how to segment.

So I added a base class for this to make writing these segmenters easier, and also to hopefully improve segmentation accuracy (I would like to switch smartchinese over to it). The class makes it easy to split the input into sentences with a sentence BreakIterator first. In my opinion it doesn't matter how theoretically good the word tokenization is for these things if the sentence tokenization is really bad (I found this problem with both sen and smartchinese).

I hope to get it committable soon.
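
For illustration, a minimal sketch of the sentence-first idea using java.text.BreakIterator (just the general pattern; the class name and example text are made up, and this is not the actual base class in the patch):

{code:java}
import java.text.BreakIterator;
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

public class SentenceFirstExample {

  /** Splits text into sentences with a sentence BreakIterator. */
  static List<String> sentences(String text, Locale locale) {
    BreakIterator bi = BreakIterator.getSentenceInstance(locale);
    bi.setText(text);
    List<String> out = new ArrayList<String>();
    int start = bi.first();
    for (int end = bi.next(); end != BreakIterator.DONE; start = end, end = bi.next()) {
      out.add(text.substring(start, end));
    }
    return out;
  }

  public static void main(String[] args) {
    // each sentence would then be handed to the word segmenter separately
    for (String s : sentences("今日は良い天気です。明日は雨が降るでしょう。", Locale.JAPANESE)) {
      System.out.println(s);
    }
  }
}
{code}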

> add simple japanese tokenizer, based on tinysegmenter
> -
>
> Key: LUCENE-2522
> URL: https://issues.apache.org/jira/browse/LUCENE-2522
> Project: Lucene - Java
>  Issue Type: New Feature
>  Components: contrib/analyzers
>Reporter: Robert Muir
>Priority: Minor
> Fix For: 4.0
>
> Attachments: LUCENE-2522.patch, LUCENE-2522.patch, LUCENE-2522.patch
>
>
> TinySegmenter (http://www.chasen.org/~taku/software/TinySegmenter/) is a tiny 
> japanese segmenter.
> It was ported to java/lucene by Kohei TAKETA , 
> and is under friendly license terms (BSD, some files explicitly disclaim 
> copyright to the source code, giving a blessing instead)
> Koji knows the author, and already contacted about incorporating into lucene:
> {noformat}
> I've contacted Takeda-san who is the creater of Java version of
> TinySegmenter. He said he is happy if his program is part of Lucene.
> He is a co-author of my book about Solr published in Japan, BTW. ;-)
> {noformat}




[jira] Updated: (LUCENE-2522) add simple japanese tokenizer, based on tinysegmenter

2011-01-16 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-2522:


Fix Version/s: (was: 3.1)

> add simple japanese tokenizer, based on tinysegmenter
> -
>
> Key: LUCENE-2522
> URL: https://issues.apache.org/jira/browse/LUCENE-2522
> Project: Lucene - Java
>  Issue Type: New Feature
>  Components: contrib/analyzers
>Reporter: Robert Muir
>Priority: Minor
> Fix For: 4.0
>
> Attachments: LUCENE-2522.patch, LUCENE-2522.patch
>
>
> TinySegmenter (http://www.chasen.org/~taku/software/TinySegmenter/) is a tiny 
> japanese segmenter.
> It was ported to java/lucene by Kohei TAKETA , 
> and is under friendly license terms (BSD, some files explicitly disclaim 
> copyright to the source code, giving a blessing instead)
> Koji knows the author, and already contacted about incorporating into lucene:
> {noformat}
> I've contacted Takeda-san who is the creater of Java version of
> TinySegmenter. He said he is happy if his program is part of Lucene.
> He is a co-author of my book about Solr published in Japan, BTW. ;-)
> {noformat}




[jira] Updated: (LUCENE-2522) add simple japanese tokenizer, based on tinysegmenter

2010-07-08 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-2522:


Affects Version/s: (was: 3.0.3)

> add simple japanese tokenizer, based on tinysegmenter
> -
>
> Key: LUCENE-2522
> URL: https://issues.apache.org/jira/browse/LUCENE-2522
> Project: Lucene - Java
>  Issue Type: New Feature
>  Components: contrib/analyzers
>Reporter: Robert Muir
>Priority: Minor
> Fix For: 3.1, 4.0
>
> Attachments: LUCENE-2522.patch, LUCENE-2522.patch
>
>
> TinySegmenter (http://www.chasen.org/~taku/software/TinySegmenter/) is a tiny 
> japanese segmenter.
> It was ported to java/lucene by Kohei TAKETA , 
> and is under friendly license terms (BSD, some files explicitly disclaim 
> copyright to the source code, giving a blessing instead)
> Koji knows the author, and already contacted about incorporating into lucene:
> {noformat}
> I've contacted Takeda-san who is the creater of Java version of
> TinySegmenter. He said he is happy if his program is part of Lucene.
> He is a co-author of my book about Solr published in Japan, BTW. ;-)
> {noformat}




[jira] Updated: (LUCENE-2522) add simple japanese tokenizer, based on tinysegmenter

2010-07-08 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-2522:


Fix Version/s: 3.1
   4.0
Affects Version/s: 3.0.3

> add simple japanese tokenizer, based on tinysegmenter
> -
>
> Key: LUCENE-2522
> URL: https://issues.apache.org/jira/browse/LUCENE-2522
> Project: Lucene - Java
>  Issue Type: New Feature
>  Components: contrib/analyzers
>Affects Versions: 3.0.3
>Reporter: Robert Muir
>Priority: Minor
> Fix For: 3.1, 4.0
>
> Attachments: LUCENE-2522.patch, LUCENE-2522.patch
>
>
> TinySegmenter (http://www.chasen.org/~taku/software/TinySegmenter/) is a tiny 
> japanese segmenter.
> It was ported to java/lucene by Kohei TAKETA , 
> and is under friendly license terms (BSD, some files explicitly disclaim 
> copyright to the source code, giving a blessing instead)
> Koji knows the author, and already contacted about incorporating into lucene:
> {noformat}
> I've contacted Takeda-san who is the creater of Java version of
> TinySegmenter. He said he is happy if his program is part of Lucene.
> He is a co-author of my book about Solr published in Japan, BTW. ;-)
> {noformat}




[jira] Updated: (LUCENE-2522) add simple japanese tokenizer, based on tinysegmenter

2010-07-02 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-2522:


Attachment: LUCENE-2522.patch

I refactored TinySegmenterConstants to use ints and switch statements instead of all the hashmaps.

This creates a larger .java file, but a smaller .class, and scoring no longer has to create 24 strings per character.
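
As a hypothetical before/after sketch of that change (the class name, character-class codes, and weights below are made up, not the real generated constants):

{code:java}
// Before: per-character feature scores were looked up in HashMap<String,Integer>,
// allocating a key String for every lookup. After: scores come from a switch
// over an int character-class code, so no Strings are created while scoring.
final class CharClassScoreSketch {

  // illustrative character-class codes
  static final int KANJI = 0, HIRAGANA = 1, KATAKANA = 2, OTHER = 3;

  static int uc1(int charClass) {
    switch (charClass) {
      case KANJI:    return  145;  // example weights only
      case HIRAGANA: return -599;
      case KATAKANA: return  285;
      default:       return    0;
    }
  }
}
{code}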


> add simple japanese tokenizer, based on tinysegmenter
> -
>
> Key: LUCENE-2522
> URL: https://issues.apache.org/jira/browse/LUCENE-2522
> Project: Lucene - Java
>  Issue Type: New Feature
>  Components: contrib/analyzers
>Reporter: Robert Muir
>Priority: Minor
> Attachments: LUCENE-2522.patch, LUCENE-2522.patch
>
>
> TinySegmenter (http://www.chasen.org/~taku/software/TinySegmenter/) is a tiny 
> japanese segmenter.
> It was ported to java/lucene by Kohei TAKETA , 
> and is under friendly license terms (BSD, some files explicitly disclaim 
> copyright to the source code, giving a blessing instead)
> Koji knows the author, and already contacted about incorporating into lucene:
> {noformat}
> I've contacted Takeda-san who is the creater of Java version of
> TinySegmenter. He said he is happy if his program is part of Lucene.
> He is a co-author of my book about Solr published in Japan, BTW. ;-)
> {noformat}




[jira] Updated: (LUCENE-2522) add simple japanese tokenizer, based on tinysegmenter

2010-07-01 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-2522:


Attachment: LUCENE-2522.patch

Here is a really quickly done patch, just to get started (not really for committing):

* converted their tests to BaseTokenStreamTestCase tests
* changed it to use CharTermAttribute instead of TermAttribute
* added clearAttributes()
* made the class final
* added a Solr factory

The code is nice, and it is set up to work on Unicode codepoints, etc., but I think we can improve it by using CharArrayMaps for speed and by using Lucene's codepoint I/O utilities in CharUtils.
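
As a rough sketch of the attribute pattern applied here (written against the Tokenizer API of this era; the class name and the placeholder whitespace segmenter are made up and stand in for TinySegmenter):

{code:java}
import java.io.IOException;
import java.io.Reader;
import java.util.ArrayList;
import java.util.List;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

/**
 * Minimal sketch of the pattern: a final Tokenizer that uses CharTermAttribute
 * and calls clearAttributes() for every emitted token.
 */
public final class AttributePatternTokenizer extends Tokenizer {

  private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
  private List<String> tokens; // tokens of the whole input, produced lazily
  private int index;

  public AttributePatternTokenizer(Reader input) {
    super(input);
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (tokens == null) {
      tokens = segment(readAll());
      index = 0;
    }
    if (index >= tokens.size()) {
      return false;
    }
    clearAttributes(); // must be called before populating attributes
    termAtt.setEmpty().append(tokens.get(index++));
    return true;
  }

  @Override
  public void reset() throws IOException {
    super.reset();
    tokens = null;
    index = 0;
  }

  /** Reads the whole input Reader into a String. */
  private String readAll() throws IOException {
    StringBuilder sb = new StringBuilder();
    char[] buf = new char[1024];
    for (int n = input.read(buf); n != -1; n = input.read(buf)) {
      sb.append(buf, 0, n);
    }
    return sb.toString();
  }

  /** Placeholder segmentation: split on whitespace (not TinySegmenter). */
  private static List<String> segment(String text) {
    List<String> out = new ArrayList<String>();
    for (String t : text.split("\\s+")) {
      if (t.length() > 0) out.add(t);
    }
    return out;
  }
}
{code}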


> add simple japanese tokenizer, based on tinysegmenter
> -
>
> Key: LUCENE-2522
> URL: https://issues.apache.org/jira/browse/LUCENE-2522
> Project: Lucene - Java
>  Issue Type: New Feature
>  Components: contrib/analyzers
>Reporter: Robert Muir
>Priority: Minor
> Attachments: LUCENE-2522.patch
>
>
> TinySegmenter (http://www.chasen.org/~taku/software/TinySegmenter/) is a tiny 
> japanese segmenter.
> It was ported to java/lucene by Kohei TAKETA , 
> and is under friendly license terms (BSD, some files explicitly disclaim 
> copyright to the source code, giving a blessing instead)
> Koji knows the author, and already contacted about incorporating into lucene:
> {noformat}
> I've contacted Takeda-san who is the creater of Java version of
> TinySegmenter. He said he is happy if his program is part of Lucene.
> He is a co-author of my book about Solr published in Japan, BTW. ;-)
> {noformat}
