[jira] [Comment Edited] (SOLR-9764) Design a memory efficient DocSet if a query returns all docs

2016-12-05 Thread Michael Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15723002#comment-15723002
 ] 

Michael Sun edited comment on SOLR-9764 at 12/5/16 6:45 PM:


bq.  if the DocSet just produced has size==numDocs, then just use liveDocs
[~yo...@apache.org] Can you give me some more details on how to implement this 
check? Somehow I can't find a clean way to do it. Thanks.
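
For reference, this is the kind of check I have been picturing (a sketch only; 
MatchAllDocSet comes from this patch, and the wrapper method is a made-up name, 
not actual patch code):

{code:java}
import org.apache.lucene.index.IndexReader;
import org.apache.solr.search.DocSet;

// Sketch only: wrap the DocSet just produced; if it covers every live
// doc, replace it with a MatchAllDocSet that stores only the count.
DocSet maybeShrink(DocSet result, IndexReader reader) {
  // IndexReader.numDocs() counts live (non-deleted) documents
  if (result.size() == reader.numDocs()) {
    return new MatchAllDocSet(reader.numDocs());  // no long[] needed
  }
  return result;
}
{code}

The part I haven't found a clean place for is hooking such a check into every 
spot where a DocSet is produced.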





> Design a memory efficient DocSet if a query returns all docs
> 
>
> Key: SOLR-9764
> URL: https://issues.apache.org/jira/browse/SOLR-9764
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Michael Sun
> Attachments: SOLR-9764.patch, SOLR-9764.patch, SOLR-9764.patch, 
> SOLR-9764.patch, SOLR-9764.patch, SOLR_9764_no_cloneMe.patch
>
>
> In some use cases, particularly with time series data where a collection 
> alias partitions data into multiple small collections by timestamp, a filter 
> query can match all documents in a collection. Currently a BitDocSet is used, 
> which contains a large array of long integers with every bit set to 1. After 
> querying, the resulting DocSet saved in the filter cache is large and becomes 
> one of the main memory consumers in these use cases.
> For example, suppose a Solr setup has 14 collections for the last 14 days of 
> data, each collection holding one day. A filter query for the last week of 
> data would result in at least six DocSets in the filter cache, each matching 
> all documents of one of six collections.   
> This issue is to design a new DocSet that is memory efficient for such a use 
> case. The new DocSet removes the large array, reducing memory usage and GC 
> pressure without losing the benefit of a large filter cache.
> In particular, for use cases with time series data, a collection alias, and 
> data partitioned into multiple small collections by timestamp, the gain can 
> be large.
> For further optimization, it may be helpful to design a DocSet with run 
> length encoding. Thanks [~mmokhtar] for the suggestion. 






[jira] [Comment Edited] (SOLR-9764) Design a memory efficient DocSet if a query returns all docs

2016-12-03 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15718383#comment-15718383
 ] 

Shawn Heisey edited comment on SOLR-9764 at 12/3/16 5:02 PM:
-

How much of a performance speedup (forgetting for a moment about memory 
savings) are we talking about for the "match all docs" enhancement?  For my 
environment, it would only apply to manual queries and the load balancer ping 
requests (every five seconds), but NOT to queries made by users.  The ping 
handler does a distributed query using q=\*:\* with no filters and rows=1.  If 
the speedup is significant, then my load balancer health checks might get 
faster, which would be a good thing.






[jira] [Comment Edited] (SOLR-9764) Design a memory efficient DocSet if a query returns all docs

2016-12-02 Thread Michael Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15716573#comment-15716573
 ] 

Michael Sun edited comment on SOLR-9764 at 12/2/16 9:29 PM:


bq.  I do not know how it would perform when actually used as a filterCache 
entry, compared to the current bitset implementation.
RoaringDocIdSet looks pretty interesting. Judging from the link in the 
comments (https://www.elastic.co/blog/frame-of-reference-and-roaring-bitmaps), 
however, it looks like RoaringDocIdSet doesn't save any memory in the case 
where a query matches all docs.

Basically the idea of RoaringDocIdSet is to divide the entire bitmap into 
multiple chunks. For each chunk, either a bitmap or an integer array (using 
diff compression) can be used, depending on the number of matched docs in that 
chunk. If the number of matched docs in a chunk is above a certain threshold, 
a bitmap is used for that chunk; otherwise an integer array is used. That can 
help in some use cases, but it falls back to something equivalent to a 
FixedBitSet in this use case.

In addition, the 'official' website for roaring bitmaps, 
http://roaringbitmap.org, mentions that roaring bitmaps can also use run 
length encoding to store a bitmap chunk, but also that one of the main goals 
of roaring bitmaps is to solve the problem with run length encoding, which is 
expensive random access. I need to dig into the source code to understand it 
better. Any suggestion is welcome.
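
To make the chunking idea concrete, here is my reading of the format as an 
illustration only (the chunk size and the 4096 threshold follow the roaring 
papers; this is not RoaringDocIdSet's actual code):

{code:java}
// Illustration of the roaring idea: doc ids are split into chunks of
// 65536; each chunk stores either a sorted short[] of offsets (sparse)
// or a 1024-word bitmap (dense), whichever is smaller.
static Object encodeChunk(int[] sortedOffsets /* values in [0, 65536) */) {
  // 4096 shorts = 8KB = the size of a full 65536-bit bitmap, so the
  // array representation only wins below that count.
  if (sortedOffsets.length < 4096) {
    short[] sparse = new short[sortedOffsets.length];
    for (int i = 0; i < sortedOffsets.length; i++) {
      sparse[i] = (short) sortedOffsets[i];
    }
    return sparse;                      // sparse chunk
  }
  long[] dense = new long[65536 / 64];  // dense chunk: fixed 8KB bitmap
  for (int off : sortedOffsets) {
    dense[off >>> 6] |= 1L << (off & 63);
  }
  return dense;
}
{code}

For a chunk where every doc matches, the dense branch is always taken, so a 
match-all query pays the full bitmap cost in every chunk; that is the 
fall-back I mean above.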








[jira] [Comment Edited] (SOLR-9764) Design a memory efficient DocSet if a query returns all docs

2016-11-28 Thread Michael Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703181#comment-15703181
 ] 

Michael Sun edited comment on SOLR-9764 at 11/28/16 9:27 PM:
-

Uploaded a new patch with all tests passed.

bq. What is the issue with intDocSet?
Basically, in DocSetBase.equals(), both DocSets are converted to FixedBitSets 
and then the two FixedBitSets are compared. However, the two DocSets may go 
through different code paths and resize differently during conversion, even 
if the two DocSets are equal. The result is that one FixedBitSet ends up with 
more zero padding than the other, which makes FixedBitSet.equals() think they 
are different.

The fix is to resize both FixedBitSets to the same larger size before the 
comparison in DocSetBase.equals(). Since DocSetBase.equals() is marked as for 
test purposes only, the cost of the extra resizing is not a concern.
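
In sketch form the idea looks like this (comparing up to the common padded 
size rather than literally resizing; not the exact patch code):

{code:java}
import org.apache.lucene.util.FixedBitSet;

// Sketch: compare two FixedBitSets as sets of doc ids, so that trailing
// zero padding in the larger backing array cannot make logically equal
// sets compare unequal.
static boolean sameBits(FixedBitSet a, FixedBitSet b) {
  int max = Math.max(a.length(), b.length());       // common padded size
  for (int doc = 0; doc < max; doc++) {
    boolean inA = doc < a.length() && a.get(doc);   // false in the padding
    boolean inB = doc < b.length() && b.get(doc);
    if (inA != inB) {
      return false;
    }
  }
  return true;  // O(maxDoc), acceptable for a test-only equals()
}
{code}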






[jira] [Comment Edited] (SOLR-9764) Design a memory efficient DocSet if a query returns all docs

2016-11-22 Thread Michael Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15685203#comment-15685203
 ] 

Michael Sun edited comment on SOLR-9764 at 11/22/16 5:03 PM:
-

Here are some single user test results for the amount of memory saved.

Setup: Solr with a collection alias mapping to two collections, each with 4 
days of data. 
Test: Restart Solr, run a query with a filter for the last 7 days, and collect 
a memory histogram on one server afterwards. The filter hits both collections, 
with one matching all docs and the other matching partially.
Result (extracted from the histogram):
||Patched||#BitDocSet instances||#MatchAllDocSet instances||bytes for [J||Mem Saving||
|Y|1|1|9998408496|3.4M|
|N|2|0|10001843704| |
|Y|2|2|10001833664|6.9M or 3.4M per hit|
|N|4|0|10008701640| |

Analysis:
The difference in bytes for long[] is 3435208 bytes (3.4M) when one 
MatchAllDocSet is hit. That's the total amount of memory saved by this patch 
for one query, per server, per matched collection. On the other side, the core 
under study has 27M documents; without the patch, a BitDocSet over it would 
require a long[] of about 3.4M (27M/8), which lines up with the memory saved.
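
To double check the arithmetic (runnable as a plain snippet):

{code:java}
// Back-of-envelope check of the numbers above: a BitDocSet over a core
// with ~27M documents needs one bit per doc, stored in a long[].
long numDocs = 27_000_000L;
long words = (numDocs + 63) / 64;  // 421_875 longs, 64 bits each
long bytes = words * 8;            // 3_375_000 bytes, i.e. ~3.4M
// close to the measured 3_435_208-byte difference; the real maxDoc of
// the core is slightly above 27M
{code}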












[jira] [Comment Edited] (SOLR-9764) Design a memory efficient DocSet if a query returns all docs

2016-11-21 Thread Michael Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15685364#comment-15685364
 ] 

Michael Sun edited comment on SOLR-9764 at 11/22/16 1:41 AM:
-

Ah, I see. The implementation of clone() in DocSetBase makes the difference. 
It's good to know. Thanks [~dsmiley] for the help. 

Uploaded an updated patch with cloneMe() removed, using [~dsmiley]'s code as 
an example. The DocSetBase.clone() is simplified though.

Just curious: what logic in the JVM requires clone() to be implemented in 
DocSetBase in this case? DocSetBase is an abstract class, which normally is 
not required to implement a method.
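
To make the question concrete, here is the general Java pattern at play (a 
minimal standalone example, not Solr code): Object.clone() is declared 
protected, so code holding only the abstract type cannot clone through it 
unless the base class widens clone() to public.

{code:java}
abstract class Base implements Cloneable {
  @Override
  public Base clone() {               // public override in the base...
    try {
      return (Base) super.clone();
    } catch (CloneNotSupportedException e) {
      throw new AssertionError(e);    // cannot happen: we are Cloneable
    }
  }
}

class Impl extends Base {
  int value;
}

class Demo {
  static Base copy(Base b) {
    return b.clone();                 // ...lets callers clone via Base
  }
}
{code}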









[jira] [Comment Edited] (SOLR-9764) Design a memory efficient DocSet if a query returns all docs

2016-11-20 Thread Michael Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15681560#comment-15681560
 ] 

Michael Sun edited comment on SOLR-9764 at 11/20/16 5:49 PM:
-

[~elyograg] Thanks for reviewing. The patch was wrong; I uploaded an updated 
one. There was a mistake in the git command used to create the patch. 
Apologies for that.

For run length encoding, it can be a good direction for further memory 
optimization. [~mmokhtar] initially suggested this idea, as mentioned in the 
JIRA description. In the meantime, I am trying to gather some supporting data 
to justify the effort and the potential risk. Any help would be great.
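
To make the direction concrete, a sketch of what a run length encoded DocSet 
could look like (illustration only, not a design proposal):

{code:java}
import java.util.ArrayList;
import java.util.List;

// Illustration only: store a sorted doc-id set as (start, length) runs.
// A set matching docs 0..26_999_999 collapses to a single run.
class RunLengthDocSet {
  final int[] starts;   // first doc id of each run
  final int[] lengths;  // number of consecutive docs in each run

  RunLengthDocSet(int[] sortedDocs) {
    List<int[]> runs = new ArrayList<>();
    int i = 0;
    while (i < sortedDocs.length) {
      int start = sortedDocs[i];
      int j = i + 1;
      while (j < sortedDocs.length && sortedDocs[j] == sortedDocs[j - 1] + 1) {
        j++;                        // extend the current run
      }
      runs.add(new int[] {start, j - i});
      i = j;
    }
    starts = new int[runs.size()];
    lengths = new int[runs.size()];
    for (int r = 0; r < runs.size(); r++) {
      starts[r] = runs.get(r)[0];
      lengths[r] = runs.get(r)[1];
    }
  }
}
{code}

Random access would need a binary search over the runs, which is exactly the 
cost concern raised earlier about run length encoding.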










