[
https://issues.apache.org/jira/browse/LUCENE-2205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13112467#comment-13112467
]
Michael McCandless commented on LUCENE-2205:
--------------------------------------------
I ported luceneutil's PKLookupTest to 3.x (see
http://code.google.com/a/apache-extras.org/p/luceneutil/source/browse/perf/PKLookupPerfTest3X.java),
and ran a test w/ 100M docs, spread across 25 segs, doing 100K PK
lookups. I temporarily disabled the terms lookup cache.
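For reference, the core of the lookup loop is roughly the following (a hedged sketch against the 3.x TermDocs API, not the actual PKLookupPerfTest3X source; the "id" field name and the fabricated key set are assumptions):
{noformat}
import java.io.File;
import java.util.Random;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermDocs;
import org.apache.lucene.store.FSDirectory;

// Sketch only: times a batch of primary-key lookups through TermDocs.seek,
// which is where the TermInfosReader terms-index lookup happens.
public class PKLookupSketch {
  public static void main(String[] args) throws Exception {
    IndexReader reader = IndexReader.open(FSDirectory.open(new File(args[0])), true);

    // The real test looks up keys known to exist in the index; here we just
    // fabricate zero-padded ids as an illustrative assumption.
    final int numLookups = 100000;
    String[] keys = new String[numLookups];
    Random rnd = new Random(17);
    for (int i = 0; i < numLookups; i++) {
      keys[i] = String.format("%010d", rnd.nextInt(100000000));
    }

    TermDocs termDocs = reader.termDocs();
    long start = System.currentTimeMillis();
    int hits = 0;
    for (String key : keys) {
      termDocs.seek(new Term("id", key));  // terms-index lookup, then tis scan
      if (termDocs.next()) {
        hits++;                            // docID would be termDocs.doc()
      }
    }
    long msec = System.currentTimeMillis() - start;
    System.out.println(msec + " msec for " + numLookups + " lookups ("
        + (1000.0 * msec / numLookups) + " us per lookup); hits=" + hits);

    termDocs.close();
    reader.close();
  }
}
{noformat}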
3.x:
{noformat}
Reader=ReadOnlyDirectoryReader(segments_1 _d6(3.5):C18018000 _3x(3.5):C18018000
_70(3.5):C18018000 _a3(3.5):C18018000 _bh(3.5):C1801800 _g9(3.5):C18018000
_fp(3.5):C1801800 _gk(3.5):C1801800 _g1(3.5):C180180 _g2(3.5):C180180
_gu(3.5):C1801800 _g7(3.5):C180180 _gd(3.5):C180180 _ge(3.5):C180180
_gj(3.5):C180180 _gl(3.5):C180180 _gp(3.5):C180180 _gv(3.5):C180180
_gw(3.5):C180180 _gx(3.5):C180180 _gy(3.5):C180180 _gz(3.5):C180180
_h0(3.5):C180180 _h1(3.5):C180180 _h2(3.5):C100)
Cycle: warm
Lookup...
WARM: 10428 msec for 100000 lookups (104.28 us per lookup)
Cycle: test
Lookup...
10309 msec for 100000 lookups (103.09 us per lookup)
Cycle: test
Lookup...
10333 msec for 100000 lookups (103.33 us per lookup)
Cycle: test
Lookup...
10333 msec for 100000 lookups (103.33 us per lookup)
Cycle: test
Lookup...
10506 msec for 100000 lookups (105.06 us per lookup)
Cycle: test
Lookup...
10499 msec for 100000 lookups (104.99 us per lookup)
Cycle: test
Lookup...
10297 msec for 100000 lookups (102.97 us per lookup)
Cycle: test
Lookup...
10345 msec for 100000 lookups (103.45 us per lookup)
Cycle: test
Lookup...
10396 msec for 100000 lookups (103.96 us per lookup)
Cycle: test
Lookup...
10302 msec for 100000 lookups (103.02 us per lookup)
{noformat}
Patch:
{noformat}
Reader=ReadOnlyDirectoryReader(segments_1 _d6(3.5):C18018000 _3x(3.5):C18018000
_70(3.5):C18018000 _a3(3.5):C18018000 _bh(3.5):C1801800 _g9(3.5):C18018000
_fp(3.5):C1801800 _gk(3.5):C1801800 _g1(3.5):C180180 _g2(3.5):C180180
_gu(3.5):C1801800 _g7(3.5):C180180 _gd(3.5):C180180 _ge(3.5):C180180
_gj(3.5):C180180 _gl(3.5):C180180 _gp(3.5):C180180 _gv(3.5):C180180
_gw(3.5):C180180 _gx(3.5):C180180 _gy(3.5):C180180 _gz(3.5):C180180
_h0(3.5):C180180 _h1(3.5):C180180 _h2(3.5):C100)
Cycle: warm
Lookup...
WARM: 11164 msec for 100000 lookups (111.64 us per lookup)
Cycle: test
Lookup...
10838 msec for 100000 lookups (108.38 us per lookup)
Cycle: test
Lookup...
10882 msec for 100000 lookups (108.82 us per lookup)
Cycle: test
Lookup...
10873 msec for 100000 lookups (108.73 us per lookup)
Cycle: test
Lookup...
10871 msec for 100000 lookups (108.71 us per lookup)
Cycle: test
Lookup...
10870 msec for 100000 lookups (108.7 us per lookup)
Cycle: test
Lookup...
10896 msec for 100000 lookups (108.96 us per lookup)
Cycle: test
Lookup...
10840 msec for 100000 lookups (108.4 us per lookup)
Cycle: test
Lookup...
10860 msec for 100000 lookups (108.6 us per lookup)
Cycle: test
Lookup...
10847 msec for 100000 lookups (108.47 us per lookup)
{noformat}
So net/net the patch is a bit (~5%) slower, as expected since PK lookup is
the worst case here, but I think the enormous gains in RAM reduction /
startup time / GC load make this tradeoff acceptable.
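For reference, averaging the nine test cycles above gives roughly:
{noformat}
3.x:   ~103.7 us per lookup (average of the 9 test cycles)
Patch: ~108.6 us per lookup
(108.6 - 103.7) / 103.7 ~= 4.8% slower
{noformat}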
> Rework of the TermInfosReader class to remove the Terms[], TermInfos[], and
> the index pointer long[] and create a more memory efficient data structure.
> -------------------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: LUCENE-2205
> URL: https://issues.apache.org/jira/browse/LUCENE-2205
> Project: Lucene - Java
> Issue Type: Improvement
> Components: core/index
> Environment: Java5
> Reporter: Aaron McCurry
> Assignee: Michael McCandless
> Fix For: 3.5
>
> Attachments: RandomAccessTest.java, TermInfosReader.java,
> TermInfosReaderIndex.java, TermInfosReaderIndexDefault.java,
> TermInfosReaderIndexSmall.java, lowmemory_w_utf8_encoding.patch,
> patch-final.txt, rawoutput.txt
>
>
> Basically, packing those three arrays into a byte array with an int array as
> an offset index.
> The performance benefits are staggering on my test index (of size 6.2 GB, with
> ~1,000,000 documents and ~175,000,000 terms): the memory needed to load the
> terminfos into memory was reduced to 17% of its original size, from 291.5
> MB to 49.7 MB. The random access speed improved by 1-2%, load
> time of the segments is ~40% faster as well, and full GCs on my JVM were
> made 7 times faster.
> I have already performed the work and am offering this code as a patch.
> Currently all tests in the trunk pass with this new code enabled. I did write
> a system property switch to allow the original implementation to be used
> as well:
> -Dorg.apache.lucene.index.TermInfosReader=default or small
> I have also written a blog post about this patch; here is the link:
> http://www.nearinfinity.com/blogs/aaron_mccurry/my_first_lucene_patch.html
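As an aside, here is a minimal sketch of the packing idea described in the original report above (one byte[] holding the serialized entries plus an int[] of record offsets, replacing the parallel Term[]/TermInfo[]/long[] arrays). This is illustrative only, not the actual TermInfosReaderIndex code, and the record layout is an assumption:
{noformat}
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.List;

// Illustrative only: packs (term text, freq pointer, prox pointer) records
// into a single byte[] and keeps an int[] of record start offsets, instead
// of three parallel object/long arrays.
public class PackedTermIndexSketch {
  private final byte[] data;    // serialized records, back to back
  private final int[] offsets;  // offsets[i] = start of record i in data

  public PackedTermIndexSketch(List<String> terms, long[] freqPointers,
                               long[] proxPointers) throws IOException {
    offsets = new int[terms.size()];
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(bytes);
    for (int i = 0; i < terms.size(); i++) {
      offsets[i] = out.size();
      out.writeUTF(terms.get(i));     // term text
      out.writeLong(freqPointers[i]); // a real impl would delta/vLong encode
      out.writeLong(proxPointers[i]);
    }
    data = bytes.toByteArray();
  }

  // Random access: decode record i on demand; only data[] and offsets[]
  // stay on the heap.
  public String term(int i) throws IOException {
    DataInputStream in = new DataInputStream(
        new ByteArrayInputStream(data, offsets[i], data.length - offsets[i]));
    return in.readUTF();
  }
}
{noformat}
Binary search over such an index compares the query term against term(mid) decoded on the fly, which is why random access pays a small decode cost in exchange for the large memory savings discussed above.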