[ https://issues.apache.org/jira/browse/LUCENE-1799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12893288#action_12893288 ]

Robert Muir commented on LUCENE-1799:
-------------------------------------

Yonik, what were you benchmarking? I think you should benchmark overall 
indexing time, of which encode is just a blip (less than 1% of it).

And yes, since the start state is 0x40, the FIRST CJK char is encoded as a 
diff from 0x40, but any subsequent ones yield savings.
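
To make that concrete, here is a toy sketch (in Java) of the difference-coding 
idea. The class and method names are mine, and the byte-count ranges and state 
rule are simplified stand-ins, not the real BOCU-1 byte layout:

    // toy difference coder in the spirit of BOCU-1 -- NOT the real spec
    public class DiffSketch {
      static int approxEncodedLength(String s) {
        int prev = 0x40;                    // BOCU-1 starts its state at 0x40
        int total = 0;
        for (int i = 0; i < s.length(); ) {
          int cp = s.codePointAt(i);
          i += Character.charCount(cp);
          int diff = cp - prev;
          if (diff >= -64 && diff <= 63)    total += 1; // small diff: one byte
          else if (Math.abs(diff) <= 10513) total += 2; // medium diff (approximate range)
          else                              total += 3; // big jump, e.g. 0x40 -> CJK
          prev = cp; // real BOCU-1 adjusts prev toward the middle of the current
                     // script block, so runs within one script keep diffs small
        }
        return total;
      }
      public static void main(String[] args) {
        // prints 5: the first CJK char costs 3 bytes (diff from 0x40),
        // the two repeats cost 1 byte each
        System.out.println(approxEncodedLength("\u4e2d\u4e2d\u4e2d"));
      }
    }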

In general you won't get much compression for Chinese; I'd say 25% at most.
For Russian, Arabic, Hebrew, and Japanese it will do a lot better: up to 40%.
For Indian languages you tend to get about 50%.

I also don't know how you encoded a word at a time, because I get quite 
different results. I focused a lot on making 'single-byte diffs' fast (e.g., 
just a subtraction), and I think I do a lot better for English than the 160% 
described in http://unicode.org/notes/tn6/
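
(For reference, the fast path I mean is literally a subtraction and an add. 
Sketch below; 0x90 really is the BOCU-1 midpoint byte for a zero diff, but 
encodeOne/encodeSlow are hypothetical names and the multi-byte path is 
stubbed out:)

    // fast path: adjacent code points that are close become one byte,
    // computed with plain integer arithmetic and no table lookups
    static int encodeOne(int cp, int prev, byte[] out, int pos) {
      int diff = cp - prev;
      if (diff >= -64 && diff <= 63) {
        out[pos] = (byte) (0x90 + diff); // single-byte diffs land in 0x50..0xCF
        return pos + 1;
      }
      return encodeSlow(diff, out, pos); // multi-byte cases, omitted in this sketch
    }
    static int encodeSlow(int diff, byte[] out, int pos) {
      throw new UnsupportedOperationException("sketch only covers the fast path");
    }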

Furthermore, UTF-8 is a complete no-op for English, so a compression 
algorithm that is only 29% slower than (byte) char is good in my book, but I 
don't measure 29% for English.
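
(The no-op claim is easy to check: every ASCII char encodes to the identical 
single byte in UTF-8, so pure-English terms come out byte-for-byte the same 
as a plain (byte) char cast. A quick check, names mine:)

    import java.nio.charset.StandardCharsets;

    public class AsciiNoOp {
      public static void main(String[] args) {
        String term = "compression";
        byte[] utf8 = term.getBytes(StandardCharsets.UTF_8);
        for (int i = 0; i < term.length(); i++) {
          assert utf8[i] == (byte) term.charAt(i); // identical bytes for ASCII
        }
        System.out.println(utf8.length == term.length()); // true: 1 byte per char
      }
    }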

I don't think there is any problem with encode speed at all.

> Unicode compression
> -------------------
>
>                 Key: LUCENE-1799
>                 URL: https://issues.apache.org/jira/browse/LUCENE-1799
>             Project: Lucene - Java
>          Issue Type: New Feature
>          Components: Store
>    Affects Versions: 2.4.1
>            Reporter: DM Smith
>            Priority: Minor
>         Attachments: LUCENE-1779.patch, LUCENE-1799.patch, LUCENE-1799.patch, 
> LUCENE-1799.patch, LUCENE-1799.patch, LUCENE-1799.patch, LUCENE-1799.patch, 
> LUCENE-1799.patch, LUCENE-1799.patch, LUCENE-1799.patch, LUCENE-1799.patch, 
> LUCENE-1799.patch, LUCENE-1799.patch, LUCENE-1799.patch, LUCENE-1799.patch, 
> LUCENE-1799_big.patch
>
>
> In LUCENE-1793, there is the off-topic suggestion to provide compression of 
> Unicode data. The motivation was a custom encoding in a Russian analyzer. The 
> original supposition was that it provided a more compact index.
> This led to the comment that a different or compressed encoding would be a 
> generally useful feature. 
> BOCU-1 was suggested as a possibility. This is a patented algorithm by IBM 
> with an implementation in ICU. If Lucene provides its own implementation, a 
> freely available, royalty-free license would need to be obtained.
> SCSU is another Unicode compression algorithm that could be used. 
> An advantage of these methods is that they work on the whole of Unicode. If 
> that is not needed, an encoding such as ISO-8859-1 (or whatever covers the 
> input) could be used.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

