[ https://issues.apache.org/jira/browse/LUCENE-10315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17516857#comment-17516857 ]

Ignacio Vera commented on LUCENE-10315:
---------------------------------------

Yes, that is strange, but I can reproduce it consistently on my current working 
machine. So I ran the benchmarks in two other environments, and there the 
results are closer to what you see:

{code}
java -version
openjdk version "17.0.2" 2022-01-18
OpenJDK Runtime Environment (build 17.0.2+8-86)
OpenJDK 64-Bit Server VM (build 17.0.2+8-86, mixed mode, sharing)

OS name: "mac os x", version: "12.3.1", arch: "aarch64", family: "mac"

Benchmark                                      Mode  Cnt  Score   Error   Units
ReadInts24Benchmark.readInts24ForUtilVisitor  thrpt   25  1,137 ± 0,001  ops/us
ReadInts24Benchmark.readInts24Visitor         thrpt   25  1,549 ± 0,003  ops/us
{code}

{code}
java -version
java version "17.0.2" 2022-01-18 LTS
Java(TM) SE Runtime Environment (build 17.0.2+8-LTS-86)
Java HotSpot(TM) 64-Bit Server VM (build 17.0.2+8-LTS-86, mixed mode, sharing)

OS name: "mac os x", version: "12.3", arch: "x86_64", family: "mac"

Benchmark                                      Mode  Cnt  Score   Error   Units
ReadInts24Benchmark.readInts24ForUtilVisitor  thrpt   25  0,823 ± 0,009  ops/us
ReadInts24Benchmark.readInts24Visitor         thrpt   25  0,895 ± 0,005  ops/us
{code}

{code}
java -version
java version "17.0.1" 2021-10-19 LTS
Java(TM) SE Runtime Environment (build 17.0.1+12-LTS-39)
Java HotSpot(TM) 64-Bit Server VM (build 17.0.1+12-LTS-39, mixed mode, sharing)

OS name: "linux", version: "4.15.0-147-generic", arch: "amd64", family: "unix"

Benchmark                                      Mode  Cnt  Score   Error   Units
ReadInts24Benchmark.readInts24ForUtilVisitor  thrpt   25  0.908 ± 0.005  ops/us
ReadInts24Benchmark.readInts24Visitor         thrpt   25  0.996 ± 0.002  ops/us
{code}

So yes, a bit puzzling.
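
For reference, the numbers above are JMH throughput runs. A minimal skeleton of 
the kind of benchmark that produces this output would look roughly like the 
following; this is a hypothetical sketch, not the actual ReadInts24Benchmark 
source, and the ForUtil variant is only indicated in a comment:

{code:java}
// Hypothetical JMH skeleton producing output in the shape shown above;
// the real ReadInts24Benchmark body is not reproduced here.
import java.util.Random;
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;

@BenchmarkMode(Mode.Throughput)        // reported as ops/us
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@State(Scope.Benchmark)
@Fork(5)                               // 5 forks x 5 iterations -> Cnt 25
@Warmup(iterations = 5)
@Measurement(iterations = 5)
public class ReadInts24Benchmark {

  private byte[] packed;
  private int[] scratch;

  @Setup
  public void setup() {
    packed = new byte[512 * 3];        // 512 values packed at 24 bits each
    new Random(42).nextBytes(packed);
    scratch = new int[512];
  }

  @Benchmark
  public int[] readInts24Visitor() {
    // scalar unpacking of 24-bit ints, one value at a time; the
    // readInts24ForUtilVisitor variant would decode the same block through
    // the 512-int ForUtil bulk path instead (omitted here)
    for (int i = 0; i < 512; i++) {
      int o = i * 3;
      scratch[i] = ((packed[o] & 0xFF) << 16)
          | ((packed[o + 1] & 0xFF) << 8)
          | (packed[o + 2] & 0xFF);
    }
    return scratch;
  }
}
{code}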

> Speed up BKD leaf block ids codec by a 512 ints ForUtil
> -------------------------------------------------------
>
>                 Key: LUCENE-10315
>                 URL: https://issues.apache.org/jira/browse/LUCENE-10315
>             Project: Lucene - Core
>          Issue Type: Improvement
>            Reporter: Feng Guo
>            Assignee: Feng Guo
>            Priority: Major
>         Attachments: addall.svg, cpu_profile_baseline.html, 
> cpu_profile_path.html
>
>          Time Spent: 6h 20m
>  Remaining Estimate: 0h
>
> Elasticsearch (which is based on Lucene) can automatically infer types for 
> users with its dynamic mapping feature. When users index low-cardinality 
> fields, such as gender / age / status... they often use numbers to 
> represent the values, so ES infers these fields as {{{}long{}}}, and 
> ES uses BKD as the index for {{long}} fields. When the data volume grows, 
> building the result set of low-cardinality fields makes the CPU usage and 
> load very high.
> This is a flame graph we obtained from the production environment:
> [^addall.svg]
> It can be seen that almost all CPU is used in addAll. When we reindexed 
> {{long}} to {{{}keyword{}}}, the cluster load and search latency were greatly 
> reduced (we spent weeks reindexing all indices...). I know the ES 
> documentation recommends {{keyword}} for term/terms queries and {{long}} for 
> range queries, but there are always users who don't realize this and keep 
> the habits they formed with SQL databases, or dynamic mapping selects the 
> type for them automatically. All in all, users won't realize that there can 
> be such a big difference in performance between {{long}} and {{keyword}} 
> fields at low cardinality. So from my point of view it makes sense to make 
> BKD work better for low/medium-cardinality fields.
> As far as I can see, for low-cardinality fields there are two advantages of 
> {{keyword}} over {{{}long{}}}:
> 1. The {{ForUtil}} used in {{keyword}} postings is much more efficient than 
> BKD's delta VInt, because of its batch reading (readLongs) and SIMD decoding 
> (see the sketch after this list).
> 2. When the query term count is less than 16, {{TermsInSetQuery}} can lazily 
> materialize its result set, so when another small result clause intersects 
> with this low-cardinality condition, the low-cardinality field can avoid 
> reading all docIds into memory.
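> To make point 1 concrete, here is a rough, self-contained sketch of the two 
> decoding styles (a hypothetical illustration only, not the actual Lucene 
> code paths): a per-value delta-VInt loop with data-dependent branches versus 
> a fixed-width bulk unpack that the JIT can auto-vectorize.
> {code:java}
> // Hypothetical, self-contained illustration; not the actual Lucene code.
> public class DecodeStyles {
> 
>   // Delta-VInt style: every value is variable-length, so each read depends
>   // on the previous byte and the loop branches on every iteration.
>   static void decodeDeltaVInt(byte[] in, int[] docIds, int count) {
>     int pos = 0;
>     int docId = 0;
>     for (int i = 0; i < count; i++) {
>       int delta = 0;
>       int shift = 0;
>       byte b;
>       do {
>         b = in[pos++];
>         delta |= (b & 0x7F) << shift;
>         shift += 7;
>       } while ((b & 0x80) != 0);
>       docId += delta;
>       docIds[i] = docId;
>     }
>   }
> 
>   // ForUtil style: the whole block shares one fixed width (24 bits here),
>   // so the loop is branch-free and friendly to SIMD auto-vectorization.
>   static void decodePacked24(byte[] in, int[] docIds, int count) {
>     for (int i = 0; i < count; i++) {
>       int o = i * 3;
>       docIds[i] = ((in[o] & 0xFF) << 16)
>           | ((in[o + 1] & 0xFF) << 8)
>           | (in[o + 2] & 0xFF);
>     }
>   }
> }
> {code}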
> This issue targets the first point. The basic idea is to use a 512-int 
> {{ForUtil}} for the BKD ids codec. I benchmarked this optimization by 
> mocking some random {{LongPoint}} fields and querying them with 
> {{PointInSetQuery}}.
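> The setup amounts to something like the following simplified, hypothetical 
> harness built on the public {{LongPoint}} / {{PointInSetQuery}} APIs; the 
> field name, path, sizes, and timing loop are made up for illustration, and 
> the actual benchmark code is not included here:
> {code:java}
> // Simplified, hypothetical harness; field name, path, and sizes are made up.
> import java.nio.file.Paths;
> import java.util.Random;
> import org.apache.lucene.document.Document;
> import org.apache.lucene.document.LongPoint;
> import org.apache.lucene.index.DirectoryReader;
> import org.apache.lucene.index.IndexWriter;
> import org.apache.lucene.index.IndexWriterConfig;
> import org.apache.lucene.search.IndexSearcher;
> import org.apache.lucene.search.Query;
> import org.apache.lucene.store.FSDirectory;
> 
> public class PointInSetBench {
>   public static void main(String[] args) throws Exception {
>     int docCount = 10_000_000, cardinality = 1024, queryPoints = 8;
>     Random r = new Random(0);
>     try (FSDirectory dir = FSDirectory.open(Paths.get("/tmp/bkd-bench"))) {
>       try (IndexWriter w = new IndexWriter(dir, new IndexWriterConfig())) {
>         for (int i = 0; i < docCount; i++) {
>           Document doc = new Document();
>           doc.add(new LongPoint("field", r.nextInt(cardinality)));
>           w.addDocument(doc);
>         }
>         w.forceMerge(1);
>       }
>       // pick some random points to look up; LongPoint.newSetQuery builds a
>       // PointInSetQuery under the hood
>       long[] values = new long[queryPoints];
>       for (int i = 0; i < queryPoints; i++) {
>         values[i] = r.nextInt(cardinality);
>       }
>       Query q = LongPoint.newSetQuery("field", values);
>       try (DirectoryReader reader = DirectoryReader.open(dir)) {
>         IndexSearcher searcher = new IndexSearcher(reader);
>         long start = System.nanoTime();
>         int hits = searcher.count(q);  // time repeated runs of this for QPS
>         System.out.println(hits + " hits in "
>             + (System.nanoTime() - start) / 1_000_000 + " ms");
>       }
>     }
>   }
> }
> {code}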
> *Benchmark Result*
> |doc count|field cardinality|query point|baseline QPS|candidate QPS|diff percentage|
> |100000000|32|1|51.44|148.26|188.22%|
> |100000000|32|2|26.8|101.88|280.15%|
> |100000000|32|4|14.04|53.52|281.20%|
> |100000000|32|8|7.04|28.54|305.40%|
> |100000000|32|16|3.54|14.61|312.71%|
> |100000000|128|1|110.56|350.26|216.81%|
> |100000000|128|8|16.6|89.81|441.02%|
> |100000000|128|16|8.45|48.07|468.88%|
> |100000000|128|32|4.2|25.35|503.57%|
> |100000000|128|64|2.13|13.02|511.27%|
> |100000000|1024|1|536.19|843.88|57.38%|
> |100000000|1024|8|109.71|251.89|129.60%|
> |100000000|1024|32|33.24|104.11|213.21%|
> |100000000|1024|128|8.87|30.47|243.52%|
> |100000000|1024|512|2.24|8.3|270.54%|
> |100000000|8192|1|3333.33|5000|50.00%|
> |100000000|8192|32|139.47|214.59|53.86%|
> |100000000|8192|128|54.59|109.23|100.09%|
> |100000000|8192|512|15.61|36.15|131.58%|
> |100000000|8192|2048|4.11|11.14|171.05%|
> |100000000|1048576|1|2597.4|3030.3|16.67%|
> |100000000|1048576|32|314.96|371.75|18.03%|
> |100000000|1048576|128|99.7|116.28|16.63%|
> |100000000|1048576|512|30.5|37.15|21.80%|
> |100000000|1048576|2048|10.38|12.3|18.50%|
> |100000000|8388608|1|2564.1|3174.6|23.81%|
> |100000000|8388608|32|196.27|238.95|21.75%|
> |100000000|8388608|128|55.36|68.03|22.89%|
> |100000000|8388608|512|15.58|19.24|23.49%|
> |100000000|8388608|2048|4.56|5.71|25.22%|
> The index size is reduced for low-cardinality fields and flat for 
> high-cardinality fields.
> {code:java}
> 113M    index_100000000_doc_32_cardinality_baseline
> 114M    index_100000000_doc_32_cardinality_candidate
> 140M    index_100000000_doc_128_cardinality_baseline
> 133M    index_100000000_doc_128_cardinality_candidate
> 193M    index_100000000_doc_1024_cardinality_baseline
> 174M    index_100000000_doc_1024_cardinality_candidate
> 241M    index_100000000_doc_8192_cardinality_baseline
> 233M    index_100000000_doc_8192_cardinality_candidate
> 314M    index_100000000_doc_1048576_cardinality_baseline
> 315M    index_100000000_doc_1048576_cardinality_candidate
> 392M    index_100000000_doc_8388608_cardinality_baseline
> 391M    index_100000000_doc_8388608_cardinality_candidate
> {code}


