[jira] [Commented] (SOLR-5302) Analytics Component

2013-10-23 Thread Andrew Psaltis (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13802629#comment-13802629
 ] 

Andrew Psaltis commented on SOLR-5302:
--

[~sbower] This is great; we have been playing around with this against Solr 
4.5. What would it take to implement pivot faceting so that a defined stat 
could be applied across multiple dimensions? Can you point me in the 
right direction to do this?

 Analytics Component
 ---

 Key: SOLR-5302
 URL: https://issues.apache.org/jira/browse/SOLR-5302
 Project: Solr
  Issue Type: New Feature
Reporter: Steven Bower
Assignee: Erick Erickson
 Attachments: Search Analytics Component.pdf, SOLR-5302.patch, 
 solr_analytics-2013.10.04-2.patch, Statistical Expressions.pdf


 This ticket is to track a replacement for the StatsComponent. The 
 AnalyticsComponent supports the following features:
 * All functionality of StatsComponent (SOLR-4499)
 * Field Faceting (SOLR-3435)
 ** Support for limit
 ** Sorting (by bucket name or any stat in the bucket)
 ** Support for offset
 * Range Faceting
 ** Supports all options of standard range faceting
 * Query Faceting (SOLR-2925)
 * Ability to use overall/field facet statistics as input to range/query 
 faceting (i.e. calculate min/max date and then facet over that range)
 * Support for more complex aggregate/mapping operations (SOLR-1622)
 ** Aggregations: min, max, sum, sum-of-square, count, missing, stddev, mean, 
 median, percentiles
 ** Operations: negation, abs, add, multiply, divide, power, log, date math, 
 string reversal, string concat
 ** Easily pluggable framework to add additional operations
 * New / cleaner output format
 Outstanding Issues:
 * Multi-value field support for stats (supported for faceting)
 * Multi-shard support (may not be possible for some operations, e.g. median)



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5300) Split shards with custom hash ranges

2013-10-23 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-5300:


Attachment: SOLR-5300-cover-shardrange.patch

# DocRouter.Range implements Comparable
# Split shard copies the ranges provided, sorts them and checks that they cover 
the entire hash range
# Added a test in ShardSplitTest
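The coverage check described in point 2 could be sketched as follows (an illustrative sketch only, with hypothetical class and method names, not the actual DocRouter code):

```java
import java.util.Arrays;

public class RangeCoverage {
    // A hash range [min, max], comparable by its lower bound, mirroring
    // the idea that DocRouter.Range implements Comparable.
    static final class Range implements Comparable<Range> {
        final int min, max;
        Range(int min, int max) { this.min = min; this.max = max; }
        public int compareTo(Range o) { return Integer.compare(min, o.min); }
    }

    // Returns true if the supplied sub-ranges, once sorted, exactly
    // cover the parent range with no gaps and no overlaps.
    static boolean covers(Range parent, Range[] subRanges) {
        Range[] sorted = subRanges.clone();
        Arrays.sort(sorted);
        if (sorted[0].min != parent.min) return false;
        for (int i = 1; i < sorted.length; i++) {
            if (sorted[i].min != sorted[i - 1].max + 1) return false;
        }
        return sorted[sorted.length - 1].max == parent.max;
    }

    public static void main(String[] args) {
        // ranges=0-1f4,1f5-3e8,3e9-5dc from the issue description:
        // 0x1f4 = 500, 0x3e8 = 1000, 0x5dc = 1500.
        Range parent = new Range(0, 0x5dc);
        Range[] subs = {
            new Range(0x0, 0x1f4), new Range(0x1f5, 0x3e8), new Range(0x3e9, 0x5dc)
        };
        System.out.println(covers(parent, subs)); // prints "true"
    }
}
```

A range list with a gap (e.g. 0-1f4,1f6-5dc) would fail the check and the split request would be rejected.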

 Split shards with custom hash ranges
 

 Key: SOLR-5300
 URL: https://issues.apache.org/jira/browse/SOLR-5300
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: 4.6, 5.0

 Attachments: SOLR-5300-cover-shardrange.patch, SOLR-5300.patch, 
 SOLR-5300.patch


 Currently, shards can only be split at the midpoint of their hash range. 
 This makes it difficult to control the distribution of data in the sub-shards.
 We should make it possible to specify the ranges to be used for splitting. A 
 ranges parameter can be added which accepts hash ranges in hexadecimal, 
 e.g. ranges=0-1f4,1f5-3e8,3e9-5dc will split a shard with range 0-1500 into 
 three shards with ranges [0-500], [501-1000] and [1001-1500] respectively.






[jira] [Updated] (SOLR-5320) Multi level compositeId router

2013-10-23 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-5320:
---

Attachment: SOLR-5320.patch

Working on refactoring and testing this.
Will be uploading a refactored test later in the day.

 Multi level compositeId router
 --

 Key: SOLR-5320
 URL: https://issues.apache.org/jira/browse/SOLR-5320
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Anshum Gupta
 Attachments: SOLR-5320.patch

   Original Estimate: 336h
  Remaining Estimate: 336h

 This would enable multi-level routing, as compared to the 2-level routing 
 available as of now. As an example of the usage:
 Document Id: myapp!dummyuser!doc
 myapp!dummyuser! can be used as the shard key for searching content for 
 dummyuser.
 myapp! can be used for searching across all users of myapp.
 I am looking at either 3- (or 4-) level routing. The 32-bit hash would then 
 be composed of an 8-bit component from each part (in the 4-level case).
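The hash composition described above could be sketched like this (purely illustrative; the helper below is hypothetical, and Solr's real compositeId router derives the per-part bits from a murmur hash of each key part rather than String.hashCode):

```java
public class MultiLevelHash {
    // Compose a 32-bit hash from up to 4 key parts, taking the top
    // 8 bits of each part's hash, most significant part first.
    static int composeHash(String... parts) {
        int hash = 0;
        for (int i = 0; i < 4; i++) {
            int partHash = i < parts.length ? parts[i].hashCode() : 0;
            // top 8 bits of this part, placed at bit positions 24, 16, 8, 0
            hash |= (partHash >>> 24) << (24 - 8 * i);
        }
        return hash;
    }

    public static void main(String[] args) {
        // For "myapp!dummyuser!doc", all documents of dummyuser share the
        // top 16 bits, so a 16-bit prefix range selects that user's shard(s).
        int h1 = composeHash("myapp", "dummyuser", "doc1");
        int h2 = composeHash("myapp", "dummyuser", "doc2");
        System.out.println((h1 >>> 16) == (h2 >>> 16)); // prints "true"
    }
}
```

Querying with shard key myapp! would then translate into a hash range covering all values that share myapp's top 8 bits.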






[jira] [Created] (SOLR-5378) Suggester Version 2

2013-10-23 Thread Areek Zillur (JIRA)
Areek Zillur created SOLR-5378:
--

 Summary: Suggester Version 2
 Key: SOLR-5378
 URL: https://issues.apache.org/jira/browse/SOLR-5378
 Project: Solr
  Issue Type: New Feature
  Components: search
Reporter: Areek Zillur


The idea is to add a new Suggester Component that will eventually replace the 
Suggester support currently provided through the SpellCheck Component.
This will enable Solr to fully utilize the Lucene suggester module (along with 
supporting most of the existing features) in the following ways:
   - Dictionary pluggability (give users the option to choose the dictionary 
implementation their suggesters consume)
   - Mapping of the suggester options and suggester result format (e.g. support 
for payloads)
   - Beefier Lookup support instead of resorting to collation and such, with 
more freedom to move computation from query time to index time

In addition, this suggester version should also support distributed queries, 
which were awkward at best with the previous implementation due to SpellCheck 
requirements.

Example query:
{code}
http://localhost:8983/solr/suggest?suggest.dictionary=mySuggester&suggest=true&suggest.build=true&suggest.q=elec
{code}
Distributed query:
{code}
http://localhost:7574/solr/suggest?suggest.dictionary=mySuggester&suggest=true&suggest.build=true&suggest.q=elec&shards=localhost:8983/solr,localhost:7574/solr&shards.qt=/suggest
{code}
Example config file:
{code}
  <searchComponent name="suggest" class="solr.SuggestComponent">
    <lst name="suggester">
      <str name="name">mySuggester</str>
      <str name="lookupImpl">FuzzyLookupFactory</str>
      <str name="dictionaryImpl">DocumentDictionaryFactory</str>
      <str name="field">cat</str>
      <str name="weightField">price</str>
      <str name="suggestAnalyzerFieldType">string</str>
    </lst>
  </searchComponent>
{code}






[jira] [Updated] (SOLR-5378) Suggester Version 2

2013-10-23 Thread Areek Zillur (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Areek Zillur updated SOLR-5378:
---

Attachment: SOLR-5378.patch

Initial Patch:
   - implements SuggestComponent and SolrSuggester (along with classes for 
results, params, etc) 

TODO:
  - add more tests
  - fix hard-coded defaults
  - add documentation
  

 Suggester Version 2
 ---

 Key: SOLR-5378
 URL: https://issues.apache.org/jira/browse/SOLR-5378
 Project: Solr
  Issue Type: New Feature
  Components: search
Reporter: Areek Zillur
 Attachments: SOLR-5378.patch


 The idea is to add a new Suggester Component that will eventually replace the 
 Suggester support currently provided through the SpellCheck Component.
 This will enable Solr to fully utilize the Lucene suggester module (along 
 with supporting most of the existing features) in the following ways:
    - Dictionary pluggability (give users the option to choose the dictionary 
 implementation their suggesters consume)
    - Mapping of the suggester options and suggester result format (e.g. 
 support for payloads)
    - Beefier Lookup support instead of resorting to collation and such, with 
 more freedom to move computation from query time to index time
 In addition, this suggester version should also support distributed queries, 
 which were awkward at best with the previous implementation due to SpellCheck 
 requirements.
 Example query:
 {code}
 http://localhost:8983/solr/suggest?suggest.dictionary=mySuggester&suggest=true&suggest.build=true&suggest.q=elec
 {code}
 Distributed query:
 {code}
 http://localhost:7574/solr/suggest?suggest.dictionary=mySuggester&suggest=true&suggest.build=true&suggest.q=elec&shards=localhost:8983/solr,localhost:7574/solr&shards.qt=/suggest
 {code}
 Example config file:
 {code}
 <searchComponent name="suggest" class="solr.SuggestComponent">
   <lst name="suggester">
     <str name="name">mySuggester</str>
     <str name="lookupImpl">FuzzyLookupFactory</str>
     <str name="dictionaryImpl">DocumentDictionaryFactory</str>
     <str name="field">cat</str>
     <str name="weightField">price</str>
     <str name="suggestAnalyzerFieldType">string</str>
   </lst>
 </searchComponent>
 {code}






[jira] [Updated] (SOLR-5378) Suggester Version 2

2013-10-23 Thread Areek Zillur (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Areek Zillur updated SOLR-5378:
---

Description: 
The idea is to add a new Suggester Component that will eventually replace the 
Suggester support currently provided through the SpellCheck Component.
This will enable Solr to fully utilize the Lucene suggester module (along with 
supporting most of the existing features) in the following ways:
   - Dictionary pluggability (give users the option to choose the dictionary 
implementation their suggesters consume)
   - Mapping of the suggester options and suggester result format (e.g. support 
for payloads)
   - Beefier Lookup support instead of resorting to collation and such, with 
more freedom to move computation from query time to index time

In addition, this suggester version should also support distributed queries, 
which were awkward at best with the previous implementation due to SpellCheck 
requirements.

Example query:
{code}
http://localhost:8983/solr/suggest?suggest.dictionary=mySuggester&suggest=true&suggest.build=true&suggest.q=elec
{code}
Distributed query:
{code}
http://localhost:7574/solr/suggest?suggest.dictionary=mySuggester&suggest=true&suggest.build=true&suggest.q=elec&shards=localhost:8983/solr,localhost:7574/solr&shards.qt=/suggest
{code}

Example Response:
{code}
<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">28</int>
  </lst>
  <str name="command">build</str>
  <result name="response" numFound="0" start="0" maxScore="0.0"/>
  <lst name="suggest">
    <lst name="suggestions">
      <lst name="e">
        <int name="numFound">3</int>
        <lst name="suggestion">
          <str name="term">electronics and computer1</str>
          <long name="weight">2199</long>
          <str name="payload"/>
        </lst>
        <lst name="suggestion">
          <str name="term">electronics</str>
          <long name="weight">649</long>
          <str name="payload"/>
        </lst>
        <lst name="suggestion">
          <str name="term">electronics</str>
          <long name="weight">649</long>
          <str name="payload"/>
        </lst>
      </lst>
    </lst>
  </lst>
</response>
{code}
Example config file:
{code}
  <searchComponent name="suggest" class="solr.SuggestComponent">
    <lst name="suggester">
      <str name="name">mySuggester</str>
      <str name="lookupImpl">FuzzyLookupFactory</str>
      <str name="dictionaryImpl">DocumentDictionaryFactory</str>
      <str name="field">cat</str>
      <str name="weightField">price</str>
      <str name="suggestAnalyzerFieldType">string</str>
    </lst>
  </searchComponent>
{code}

  was:
The idea is to add a new Suggester Component that will eventually replace the 
Suggester support currently provided through the SpellCheck Component.
This will enable Solr to fully utilize the Lucene suggester module (along with 
supporting most of the existing features) in the following ways:
   - Dictionary pluggability (give users the option to choose the dictionary 
implementation their suggesters consume)
   - Mapping of the suggester options and suggester result format (e.g. support 
for payloads)
   - Beefier Lookup support instead of resorting to collation and such, with 
more freedom to move computation from query time to index time

In addition, this suggester version should also support distributed queries, 
which were awkward at best with the previous implementation due to SpellCheck 
requirements.

Example query:
{code}
http://localhost:8983/solr/suggest?suggest.dictionary=mySuggester&suggest=true&suggest.build=true&suggest.q=elec
{code}
Distributed query:
{code}
http://localhost:7574/solr/suggest?suggest.dictionary=mySuggester&suggest=true&suggest.build=true&suggest.q=elec&shards=localhost:8983/solr,localhost:7574/solr&shards.qt=/suggest
{code}
Example config file:
{code}
  <searchComponent name="suggest" class="solr.SuggestComponent">
    <lst name="suggester">
      <str name="name">mySuggester</str>
      <str name="lookupImpl">FuzzyLookupFactory</str>
      <str name="dictionaryImpl">DocumentDictionaryFactory</str>
      <str name="field">cat</str>
      <str name="weightField">price</str>
      <str name="suggestAnalyzerFieldType">string</str>
    </lst>
  </searchComponent>
{code}


 Suggester Version 2
 ---

 Key: SOLR-5378
 URL: https://issues.apache.org/jira/browse/SOLR-5378
 Project: Solr
  Issue Type: New Feature
  Components: search
Reporter: Areek Zillur
 Attachments: SOLR-5378.patch


 The idea is to add a new Suggester Component that will eventually replace the 
 Suggester support through the SpellCheck Component.
 This will enable Solr to fully utilize the Lucene suggester module (along 
 with supporting most of the existing features) in the following ways:
- Dictionary pluggability (give users the option to choose the dictionary 
 implementation to use for their suggesters to consume)
- Map the suggester options/ suggester result format (e.g. support for 
 payload)
- The new Component will also allow us to have beefier Lookup support 
 instead of resorting to collation and such. (Move 

Suggester Version 2.0! All grown up and Independent

2013-10-23 Thread Areek Zillur
Given the current development of the Lucene suggest module, I believe it
makes sense to refactor the Solr Suggester out of the existing SpellCheck
Component into its own component.

https://issues.apache.org/jira/browse/SOLR-5378 [patch uploaded]
It would be really great if you could look at the current
input/output/config of the Component and provide some feedback! (The
details are in the JIRA description.) The patch has a working
implementation of the new Component (SuggestComponent and SolrSuggester).

The idea is to give the Solr Suggester flexibility in terms of
response, processing (distributed & normal) and pluggability for utilizing
the new features of the Lucene counterpart. The new Component will:
  - allow input (dictionary) pluggability [users can choose from the Dictionary
implementations available, including utilizing the new expressions module
in Lucene for term weights]
  - allow us to add new ways to build the suggesters [move computation from
query time to index time]
  - have input and output formats suitable for the suggester.

I plan to do the following:
  - add tests & documentation
  - add support for all the Lucene suggesters [FreeTextSuggester is the
only one without support]
  - add additional Lookup factories to allow users to filter out
suggestions based on categories instead of doing query-time processing

Thanks in advance,

Areek Zillur


[jira] [Commented] (SOLR-5378) Suggester Version 2

2013-10-23 Thread Areek Zillur (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13802726#comment-13802726
 ] 

Areek Zillur commented on SOLR-5378:


Note: the dictionary implementations still have spellcheck parameters in the 
tests; I will fix this in the next patch.

 Suggester Version 2
 ---

 Key: SOLR-5378
 URL: https://issues.apache.org/jira/browse/SOLR-5378
 Project: Solr
  Issue Type: New Feature
  Components: search
Reporter: Areek Zillur
 Attachments: SOLR-5378.patch


 The idea is to add a new Suggester Component that will eventually replace the 
 Suggester support currently provided through the SpellCheck Component.
 This will enable Solr to fully utilize the Lucene suggester module (along 
 with supporting most of the existing features) in the following ways:
    - Dictionary pluggability (give users the option to choose the dictionary 
 implementation their suggesters consume)
    - Mapping of the suggester options and suggester result format (e.g. 
 support for payloads)
    - Beefier Lookup support instead of resorting to collation and such, with 
 more freedom to move computation from query time to index time
 In addition, this suggester version should also support distributed queries, 
 which were awkward at best with the previous implementation due to SpellCheck 
 requirements.
 Example query:
 {code}
 http://localhost:8983/solr/suggest?suggest.dictionary=mySuggester&suggest=true&suggest.build=true&suggest.q=elec
 {code}
 Distributed query:
 {code}
 http://localhost:7574/solr/suggest?suggest.dictionary=mySuggester&suggest=true&suggest.build=true&suggest.q=elec&shards=localhost:8983/solr,localhost:7574/solr&shards.qt=/suggest
 {code}
 Example Response:
 {code}
 <response>
   <lst name="responseHeader">
     <int name="status">0</int>
     <int name="QTime">28</int>
   </lst>
   <str name="command">build</str>
   <result name="response" numFound="0" start="0" maxScore="0.0"/>
   <lst name="suggest">
     <lst name="suggestions">
       <lst name="e">
         <int name="numFound">3</int>
         <lst name="suggestion">
           <str name="term">electronics and computer1</str>
           <long name="weight">2199</long>
           <str name="payload"/>
         </lst>
         <lst name="suggestion">
           <str name="term">electronics</str>
           <long name="weight">649</long>
           <str name="payload"/>
         </lst>
         <lst name="suggestion">
           <str name="term">electronics</str>
           <long name="weight">649</long>
           <str name="payload"/>
         </lst>
       </lst>
     </lst>
   </lst>
 </response>
 {code}
 Example config file:
 {code}
 <searchComponent name="suggest" class="solr.SuggestComponent">
   <lst name="suggester">
     <str name="name">mySuggester</str>
     <str name="lookupImpl">FuzzyLookupFactory</str>
     <str name="dictionaryImpl">DocumentDictionaryFactory</str>
     <str name="field">cat</str>
     <str name="weightField">price</str>
     <str name="suggestAnalyzerFieldType">string</str>
   </lst>
 </searchComponent>
 {code}






[jira] [Commented] (LUCENE-2899) Add OpenNLP Analysis capabilities as a module

2013-10-23 Thread simon raphael (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13802735#comment-13802735
 ] 

simon raphael commented on LUCENE-2899:
---

Hi,

I'm new to Solr and OpenNLP.
I followed the tutorial to install this patch: I downloaded branch_4x, then 
downloaded and applied LUCENE-2899-current.patch, and then ran ant compile.

Everything works fine, but no opennlp folder is created in /solr/contrib/.

What am I doing wrong?

Thanks for your help :)

 Add OpenNLP Analysis capabilities as a module
 -

 Key: LUCENE-2899
 URL: https://issues.apache.org/jira/browse/LUCENE-2899
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/analysis
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
Priority: Minor
 Fix For: 4.6

 Attachments: LUCENE-2899-current.patch, LUCENE-2899.patch, 
 LUCENE-2899.patch, LUCENE-2899.patch, LUCENE-2899.patch, LUCENE-2899.patch, 
 LUCENE-2899.patch, LUCENE-2899-RJN.patch, LUCENE-2899-x.patch, 
 LUCENE-2899-x.patch, LUCENE-2899-x.patch, OpenNLPFilter.java, 
 OpenNLPFilter.java, OpenNLPTokenizer.java, opennlp_trunk.patch


 Now that OpenNLP is an ASF project and has a nice license, it would be nice 
 to have a submodule (under analysis) that exposed capabilities for it. Drew 
 Farris, Tom Morton and I have code that does:
 * Sentence Detection as a Tokenizer (could also be a TokenFilter, although it 
 would have to change slightly to buffer tokens)
 * NamedEntity recognition as a TokenFilter
 We are also planning a Tokenizer/TokenFilter that can put parts of speech as 
 either payloads (PartOfSpeechAttribute?) on a token or at the same position.
 I'd propose it go under:
 modules/analysis/opennlp






[jira] [Commented] (SOLR-5300) Split shards with custom hash ranges

2013-10-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13802768#comment-13802768
 ] 

ASF subversion and git services commented on SOLR-5300:
---

Commit 1534974 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1534974 ]

SOLR-5300: Check that the supplied hash ranges actually cover the entire range 
of the shard

 Split shards with custom hash ranges
 

 Key: SOLR-5300
 URL: https://issues.apache.org/jira/browse/SOLR-5300
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: 4.6, 5.0

 Attachments: SOLR-5300-cover-shardrange.patch, SOLR-5300.patch, 
 SOLR-5300.patch


 Currently, shards can only be split at the midpoint of their hash range. 
 This makes it difficult to control the distribution of data in the sub-shards.
 We should make it possible to specify the ranges to be used for splitting. A 
 ranges parameter can be added which accepts hash ranges in hexadecimal, 
 e.g. ranges=0-1f4,1f5-3e8,3e9-5dc will split a shard with range 0-1500 into 
 three shards with ranges [0-500], [501-1000] and [1001-1500] respectively.






[jira] [Commented] (SOLR-5300) Split shards with custom hash ranges

2013-10-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13802769#comment-13802769
 ] 

ASF subversion and git services commented on SOLR-5300:
---

Commit 1534975 from sha...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1534975 ]

SOLR-5300: Check that the supplied hash ranges actually cover the entire range 
of the shard

 Split shards with custom hash ranges
 

 Key: SOLR-5300
 URL: https://issues.apache.org/jira/browse/SOLR-5300
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: 4.6, 5.0

 Attachments: SOLR-5300-cover-shardrange.patch, SOLR-5300.patch, 
 SOLR-5300.patch


 Currently, shards can only be split at the midpoint of their hash range. 
 This makes it difficult to control the distribution of data in the sub-shards.
 We should make it possible to specify the ranges to be used for splitting. A 
 ranges parameter can be added which accepts hash ranges in hexadecimal, 
 e.g. ranges=0-1f4,1f5-3e8,3e9-5dc will split a shard with range 0-1500 into 
 three shards with ranges [0-500], [501-1000] and [1001-1500] respectively.






[jira] [Commented] (LUCENE-5285) FastVectorHighlighter copies segments scores when splitting segments across multi-valued fields

2013-10-23 Thread Nik Everett (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13802776#comment-13802776
 ] 

Nik Everett commented on LUCENE-5285:
-

I realized last night that I did the WeightedFragList incorrectly in that 
patch.  I'll upload another one as time permits.

 FastVectorHighlighter copies segments scores when splitting segments across 
 multi-valued fields
 ---

 Key: LUCENE-5285
 URL: https://issues.apache.org/jira/browse/LUCENE-5285
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Nik Everett
Priority: Minor
 Attachments: LUCENE-5285.patch


 FastVectorHighlighter copies segments scores when splitting segments across 
 multi-valued fields.  This is only a problem when you want to sort the 
 fragments by score. Technically BaseFragmentsBuilder (line 261 in my copy of 
 the source) does the copying.
 Rather than copying the score I _think_ it'd be more right to pull that 
 copying logic into a protected method that child classes (such as 
 ScoreOrderFragmentsBuilder) can override to do more intelligent things.  
 Exactly what that means isn't clear to me at the moment.






What is recommended version of jdk 1.7?

2013-10-23 Thread Danil ŢORIN
We had some problems with u45.
I know there are several JIRAs, and a bug report for Oracle.

But my question is more pragmatic: when running tests for a release like the
latest 4.5.1, which JVM (preferably 1.7) did you use?

What is the latest but safe version to use with Lucene?


[jira] [Commented] (SOLR-5308) Split all documents of a route key into another collection

2013-10-23 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13802845#comment-13802845
 ] 

Noble Paul commented on SOLR-5308:
--

We can have a simpler routing rule as follows

{code:javascript}
"routingRules": {
  "a!": {
    "ranges": ["3c25-3c25"],
    "expireAt": "1382449837040",
    "targetCollection": "migrate_routekey_test_targetCollection"
  }
}
{code}

The target shard does not make sense because it is something that can be easily 
derived and may change if there is a further split on the target collection
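A rule of this shape could be evaluated roughly as follows (a hypothetical sketch, not Solr's actual implementation; the class and method names are invented for illustration):

```java
public class RoutingRule {
    final int rangeMin, rangeMax;   // parsed from a hex range like "3c25-3c25"
    final long expireAt;            // epoch millis after which the rule is dropped
    final String targetCollection;

    RoutingRule(String hexRange, long expireAt, String targetCollection) {
        String[] ends = hexRange.split("-");
        this.rangeMin = Integer.parseInt(ends[0], 16);
        this.rangeMax = Integer.parseInt(ends[1], 16);
        this.expireAt = expireAt;
        this.targetCollection = targetCollection;
    }

    // Forward to the target collection only while the rule is alive and
    // the document's hash falls inside the rule's range; otherwise route
    // normally within the source collection (returns null here).
    String route(int docHash, long now) {
        if (now < expireAt && docHash >= rangeMin && docHash <= rangeMax) {
            return targetCollection;
        }
        return null;
    }

    public static void main(String[] args) {
        RoutingRule rule = new RoutingRule("3c25-3c25", 1382449837040L, "target");
        System.out.println(rule.route(0x3c25, 0L)); // prints "target"
    }
}
```

Because the rule carries only the range and target collection, the destination shard can be re-derived at routing time even after further splits on the target collection.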



 Split all documents of a route key into another collection
 --

 Key: SOLR-5308
 URL: https://issues.apache.org/jira/browse/SOLR-5308
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: 4.6, 5.0

 Attachments: SOLR-5308.patch, SOLR-5308.patch, SOLR-5308.patch


 Enable SolrCloud users to split out a set of documents from a source 
 collection into another collection.
 This will be useful in multi-tenant environments. This feature will make it 
 possible to split a tenant out of a collection and put them into their own 
 collection which can be scaled separately.






[jira] [Comment Edited] (SOLR-5308) Split all documents of a route key into another collection

2013-10-23 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13802845#comment-13802845
 ] 

Noble Paul edited comment on SOLR-5308 at 10/23/13 12:20 PM:
-

We can have a simpler routing rule as follows

{code:javascript}
"routingRules": {
  "a!": {
    "ranges": ["3c25-3c25"],
    "expireAt": "1382449837040",
    "targetCollection": "migrate_routekey_test_targetCollection"
  }
}
{code}

The target shard does not make sense because it is something that can be easily 
derived and may change if there is a further split on the target collection. 
Note that ranges is now an array so that multiple values can be set.




was (Author: noble.paul):
We can have a simpler routing rule as follows

{code:javascript}
"routingRules": {
  "a!": {
    "ranges": ["3c25-3c25"],
    "expireAt": "1382449837040",
    "targetCollection": "migrate_routekey_test_targetCollection"
  }
}
{code}

The target shard does not make sense because it is something that can be easily 
derived and may change if there is a further split on the target collection



 Split all documents of a route key into another collection
 --

 Key: SOLR-5308
 URL: https://issues.apache.org/jira/browse/SOLR-5308
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: 4.6, 5.0

 Attachments: SOLR-5308.patch, SOLR-5308.patch, SOLR-5308.patch


 Enable SolrCloud users to split out a set of documents from a source 
 collection into another collection.
 This will be useful in multi-tenant environments. This feature will make it 
 possible to split a tenant out of a collection and put them into their own 
 collection which can be scaled separately.






RE: What is recommended version of jdk 1.7?

2013-10-23 Thread Uwe Schindler
Use u25; it is the latest stable version and works fine.

 

-

Uwe Schindler

H.-H.-Meier-Allee 63, D-28213 Bremen

http://www.thetaphi.de http://www.thetaphi.de/ 

eMail: u...@thetaphi.de

 

From: Danil ŢORIN [mailto:torin...@gmail.com] 
Sent: Wednesday, October 23, 2013 1:40 PM
To: lucene-...@apache.org
Subject: What is recommended version of jdk 1.7?

 

We had some problems with u45.

I know there are several JIRAs, and a bug report for Oracle.

But my question is more pragmatic: when running tests for a release like the latest 
4.5.1, which JVM (preferably 1.7) did you use?

What is the latest but safe version to use with Lucene?

 



RE: Testing Java Updates with your projects.

2013-10-23 Thread Uwe Schindler
Thanks Rory for the info!

 

I am changing the recipients of this mail, so it no longer goes to the private 
list.

 

@ dev@lao: FYI, the clone() bug seems to be fixed so we can soon upgrade to 
JDK8 latest and run tests again.

 

Uwe

 

-

Uwe Schindler

H.-H.-Meier-Allee 63, D-28213 Bremen

 http://www.thetaphi.de/ http://www.thetaphi.de

eMail: u...@thetaphi.de

 

From: Rory O'Donnell [mailto:rory.odonn...@oracle.com] 
Sent: Wednesday, October 23, 2013 1:08 PM
To: rory.odonn...@oracle.com
Cc: Uwe Schindler; 'Dawid Weiss'; 'Robert Muir'; 'Dalibor Topic'; 'Donald 
Smith'; 'Seán Coffey'; 'Balchandra Vaidya'; priv...@lucene.apache.org
Subject: Re: Testing Java Updates with your projects.

 

Hi Uwe,

I noticed https://bugs.openjdk.java.net/browse/JDK-8026394 has moved into the 
fixed state.
I hope to see this included in the near future and will keep you updated.

It is not in b112 (https://jdk8.java.net/download.html), which is now available.

Rgds, Rory



On 18/10/2013 15:51, Rory O'Donnell Oracle, Dublin Ireland wrote:

Hi Uwe, 

It turns out this is a duplicate of 
https://bugs.openjdk.java.net/browse/JDK-8026394 

Rgds,Rory 

On 18/10/2013 10:09, Rory O'Donnell Oracle, Dublin Ireland wrote: 



Hi Uwe, 


Balchandra has logged a bug for this issue: 

https://bugs.openjdk.java.net/browse/JDK-8026845 

Rgds,Rory 

On 17/10/2013 18:27, Uwe Schindler wrote: 



Hi, 

I was able to reproduce with a simple test case that emulates the UIMA code. 
See attached test case, just compile it with any JDK and run with b111: 

With Java 7 or JDK8b109: 




javac TestCloneInterface.java 
java TestCloneInterface 

With JDK8b111: 




java TestCloneInterface 

Exception in thread "main" java.lang.IllegalAccessError: tried to access method 
java.lang.Object.clone()Ljava/lang/Object; from class TestCloneInterface 
 at TestCloneInterface.test(TestCloneInterface.java:15) 
 at TestCloneInterface.main(TestCloneInterface.java:19) 
The bug happens if the clone() method is declared in a superinterface only. 
Without the additional interface in between, the test passes. 
Instead of the real interface (the "o" local variable, which is of type 
FoobarIntf), it checks the access flags on "this", which is of type 
TestCloneInterface. 
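The attached test case itself is not reproduced in the mail, so here is a hypothetical reconstruction of what Uwe describes; the interface names and structure are assumed from the stack trace and his explanation:

```java
// Assumed reconstruction of TestCloneInterface: clone() is declared only in
// a superinterface, with an extra interface level in between.
interface CloneIntf extends Cloneable {
    Object clone() throws CloneNotSupportedException;
}

interface FoobarIntf extends CloneIntf {
    // Intermediate interface: without this level, the test passed on b111.
}

public class TestCloneInterface implements FoobarIntf {
    @Override
    public Object clone() throws CloneNotSupportedException {
        return super.clone();
    }

    static Object test(FoobarIntf o) throws CloneNotSupportedException {
        // JDK 8 b111 threw IllegalAccessError here, because the VM checked
        // the access flags of clone() against the caller's type instead of
        // the interface type of "o".
        return o.clone();
    }

    public static void main(String[] args) throws Exception {
        // On Java 7, JDK 8 b109, and fixed builds this completes normally.
        System.out.println(test(new TestCloneInterface()) instanceof TestCloneInterface);
    }
}
```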

Uwe 

- 
Uwe Schindler 
uschind...@apache.org 
Apache Lucene PMC Chair / Committer 
Bremen, Germany 
http://lucene.apache.org/ 





-Original Message- 
From: Rory O'Donnell Oracle, Dublin Ireland 
[mailto:rory.odonn...@oracle.com] 
Sent: Thursday, October 17, 2013 7:19 PM 
To: Uwe Schindler 
Cc: 'Dawid Weiss'; 'Robert Muir'; 'Dalibor Topic'; 'Donald Smith'; 'Seán 
Coffey'; 
priv...@lucene.apache.org; Balchandra Vaidya 
Subject: Re: Testing Java Updates with your projects. 

Hi Uwe, 

The more info you can provide the better. 
Adding Balchandra to email. 

Rgds,Rory 

On 17/10/2013 17:41, Uwe Schindler wrote: 



Hi Rory, 

we found a new bug in JDK 8 b111, not appearing in Java 7 or Java 8 b109: 
It seems to be related to one recent commit, reproduces all the time 

(reproduces with bytecode compiled with JDK8b111 and ran with JDK8b111, 
but also with bytecode compiled with JDK7 and ran with JDK8b111). I am 
trying to understand what's happening, but it looks like the patch fails to 
check the access flags of the method on the correct instance/type/whatever. 



This is the commit that I think causes this: 
http://hg.openjdk.java.net/jdk8/jdk8/hotspot/rev/36b97be47bde 
Issue: https://bugs.openjdk.java.net/browse/JDK-8011311 

What happens: 
When running Apache Lucene's UIMA tests (UIMA is foreign code, not part 

of Lucene): 



cd lucene/analysis/uima 
ant test 

*All* tests fail here: 

java.lang.IllegalAccessError: tried to access method 

java.lang.Object.clone()Ljava/lang/Object; from class 
org.apache.uima.analysis_engine.impl.AnalysisEngineImplBase 



at 

__randomizedtesting.SeedInfo.seed([BC36C2DC5FC6C107:4A94D14D35381F8 
8]:0) 



at 

org.apache.uima.analysis_engine.impl.AnalysisEngineImplBase.initialize(Anal 
ysisEngineImplBase.java:163) 



at 

org.apache.uima.analysis_engine.impl.AggregateAnalysisEngine_impl.initializ 
e(AggregateAnalysisEngine_impl.java:127) 



at 

org.apache.uima.impl.AnalysisEngineFactory_impl.produceResource(Analysi 
sEngineFactory_impl.java:94) 



at 

org.apache.uima.impl.CompositeResourceFactory_impl.produceResource(C 
ompositeResourceFactory_impl.java:62) 



at 

org.apache.uima.UIMAFramework.produceResource(UIMAFramework.java: 
267) 



at 

org.apache.uima.UIMAFramework.produceAnalysisEngine(UIMAFramework 
.java:335) 



at 

org.apache.lucene.analysis.uima.ae.BasicAEProvider.getAE(BasicAEProvider.j 
ava:73) 



at 

org.apache.lucene.analysis.uima.BaseUIMATokenizer.analyzeInput(BaseUI 
MATokenizer.java:63) 



at 

org.apache.lucene.analysis.uima.UIMAAnnotationsTokenizer.initializeIterato 
r(UIMAAnnotationsTokenizer.java:60) 



at 

org.apache.lucene.analysis.uima.UIMAAnnotationsTokenizer.incrementToke 

Re: Testing Java Updates with your projects.

2013-10-23 Thread Rory O'Donnell


On 23/10/2013 14:03, Uwe Schindler wrote:


Thanks Rory for the info!

I am changing the recipients of this mail, so it no longer goes to the 
private list.


@ dev@lao: FYI, the clone() bug seems to be fixed so we can soon 
upgrade to JDK8 latest and run tests again.


Uwe - the fix is not yet available, maybe b113 or b114, will let you 
know when it is.


Rgds,Rory


Re: Testing Java Updates with your projects.

2013-10-23 Thread Dalibor Topic
On 10/23/13 3:03 PM, Uwe Schindler wrote:
 Thanks Rory for the info!
 
  
 
 I am changing the recipients of this mail, so it no longer goes to the 
 private list.
 
  
 
 @ dev@lao: FYI, the clone() bug seems to be fixed so we can soon upgrade to 
 JDK8 latest and run tests again.
 

b112 is here now: https://twitter.com/robilad/status/393000187213279232 

cheers,
dalibor topic

-- 
Oracle http://www.oracle.com
Dalibor Topic | Principal Product Manager
Phone: +494089091214 | Mobile: +491737185961
Oracle Java Platform Group

ORACLE Deutschland B.V. & Co. KG | Kühnehöfe 5 | 22761 Hamburg

ORACLE Deutschland B.V. & Co. KG
Hauptverwaltung: Riesstr. 25, D-80992 München
Registergericht: Amtsgericht München, HRA 95603
Geschäftsführer: Jürgen Kunz

Komplementärin: ORACLE Deutschland Verwaltung B.V.
Hertogswetering 163/167, 3543 AS Utrecht, Niederlande
Handelsregister der Handelskammer Midden-Niederlande, Nr. 30143697
Geschäftsführer: Alexander van der Ven, Astrid Kepper, Val Maher

Green Oracle (http://www.oracle.com/commitment): Oracle is committed to 
developing practices and products that help protect the environment

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5320) Multi level compositeId router

2013-10-23 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13802869#comment-13802869
 ] 

Noble Paul commented on SOLR-5320:
--

The router object should be immutable and threadsafe. You are not supposed to 
change the state (I am referring to the masks).
I see repetition of the same code across multiple methods. Can you not put the 
entire logic in one place and reuse it?

 Multi level compositeId router
 --

 Key: SOLR-5320
 URL: https://issues.apache.org/jira/browse/SOLR-5320
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Anshum Gupta
 Attachments: SOLR-5320.patch

   Original Estimate: 336h
  Remaining Estimate: 336h

 This would enable multi level routing as compared to the 2 level routing 
 available as of now. On the usage bit, here's an example:
 Document Id: myapp!dummyuser!doc
 myapp!dummyuser! can be used as the shardkey for searching content for 
 dummyuser.
 myapp! can be used for searching across all users of myapp.
 I am looking at either a 3 (or 4) level routing. The 32-bit hash would then 
 comprise 8x4 components, 8 bits from each part (in the case of 4 levels).



--
This message was sent by Atlassian JIRA
(v6.1#6144)




[jira] [Updated] (SOLR-5308) Split all documents of a route key into another collection

2013-10-23 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-5308:


Attachment: SOLR-5308.patch

Thanks Noble. That is certainly simpler. This patch has it as:
{code}
"routingRules":{"a!":{
  "routeRanges":"3c25-3c25",
  "expireAt":1382535453866,
  "targetCollection":"migrate_routekey_test_targetCollection"}}
{code}

I won't keep the routeRanges as a JSON list; I'll use a comma-separated string instead.
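As a tiny illustration of that serialization choice (the class and method names here are invented, not Solr's):

```java
import java.util.Arrays;
import java.util.List;

public class RouteRangesFormat {
    // Hypothetical helpers showing the comma-separated form of routeRanges
    // (e.g. "3c25-3c25,4000-5000") instead of a JSON list.
    static String join(List<String> ranges) {
        return String.join(",", ranges);
    }

    static List<String> split(String serialized) {
        return Arrays.asList(serialized.split(","));
    }

    public static void main(String[] args) {
        String s = join(List.of("3c25-3c25", "4000-5000"));
        System.out.println(s);               // 3c25-3c25,4000-5000
        System.out.println(split(s).size()); // 2
    }
}
```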

 Split all documents of a route key into another collection
 --

 Key: SOLR-5308
 URL: https://issues.apache.org/jira/browse/SOLR-5308
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: 4.6, 5.0

 Attachments: SOLR-5308.patch, SOLR-5308.patch, SOLR-5308.patch, 
 SOLR-5308.patch


 Enable SolrCloud users to split out a set of documents from a source 
 collection into another collection.
 This will be useful in multi-tenant environments. This feature will make it 
 possible to split a tenant out of a collection and put them into their own 
 collection which can be scaled separately.






[jira] [Commented] (SOLR-5320) Multi level compositeId router

2013-10-23 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13802875#comment-13802875
 ] 

Anshum Gupta commented on SOLR-5320:


[~noble.paul] Yes, I already realized that and have a patch almost ready. Will 
just put that up in some time.




[jira] [Updated] (LUCENE-5300) SORTED_SET could use SORTED encoding when the field is actually single-valued

2013-10-23 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-5300:
-

Attachment: LUCENE-5300.patch

Here is a patch.

 SORTED_SET could use SORTED encoding when the field is actually single-valued
 -

 Key: LUCENE-5300
 URL: https://issues.apache.org/jira/browse/LUCENE-5300
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-5300.patch


 It would be nice to detect when a SORTED_SET field is single-valued in order 
 to optimize storage.
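A minimal sketch of the detection idea (names are assumed; the actual patch works inside the doc-values writer, not over a list like this):

```java
import java.util.List;

public class SingleValuedCheck {
    // A SORTED_SET field can fall back to the cheaper SORTED (single-valued)
    // encoding when no document carries more than one ordinal.
    static boolean isEffectivelySingleValued(List<int[]> ordsPerDoc) {
        for (int[] ords : ordsPerDoc) {
            if (ords.length > 1) {
                return false; // found a genuinely multi-valued document
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isEffectivelySingleValued(
                List.of(new int[]{3}, new int[]{}, new int[]{7}))); // true
    }
}
```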






[jira] [Created] (LUCENE-5300) SORTED_SET could use SORTED encoding when the field is actually single-valued

2013-10-23 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-5300:


 Summary: SORTED_SET could use SORTED encoding when the field is 
actually single-valued
 Key: LUCENE-5300
 URL: https://issues.apache.org/jira/browse/LUCENE-5300
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor


It would be nice to detect when a SORTED_SET field is single-valued in order to 
optimize storage.






[jira] [Commented] (LUCENE-5300) SORTED_SET could use SORTED encoding when the field is actually single-valued

2013-10-23 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13802890#comment-13802890
 ] 

Michael McCandless commented on LUCENE-5300:


+1

I wonder if we could somehow do this generically so that any DVFormat (not 
just Lucene45) would get it ... but that can be later.




[jira] [Commented] (SOLR-5364) SolrCloud stops accepting updates

2013-10-23 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13802907#comment-13802907
 ] 

Mark Miller commented on SOLR-5364:
---

java version 1.7.0_25

All the default java command line args other than more heap.

 SolrCloud stops accepting updates
 -

 Key: SOLR-5364
 URL: https://issues.apache.org/jira/browse/SOLR-5364
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.4, 4.5, 4.6
Reporter: Chris
Priority: Blocker

 I'm attempting to import data into a SolrCloud cluster. After a certain 
 amount of time, the cluster stops accepting updates.
 I have tried numerous suggestions in IRC from Elyograg and others, without 
 success.
 I have had this issue with 4.4, and understood there was a deadlock issue 
 fixed in 4.5; that hasn't resolved the issue, and neither have the 4.6 snapshots.
 I've tried with Tomcat, various Tomcat configuration changes to threading, 
 and with Jetty. I tried various index merging configurations, as I 
 initially thought there was a deadlock with the concurrent merge scheduler; 
 however, the same issue occurs with SerialMergeScheduler.
 The cluster stops accepting updates after some amount of time; this seems to 
 vary and is inconsistent. Sometimes I manage to index 400k docs, other times 
 ~1 million. Querying the cluster continues to work. I can reproduce the 
 issue consistently, and it is currently blocking our transition to Solr.
 I can provide stack traces, thread dumps, jstack dumps as required.
 Here are two jstacks thus far:
 http://pastebin.com/1ktjBYbf
 http://pastebin.com/8JiQc3rb
 I have got these jstacks from the latest 4.6 snapshot, also running the solrj 
 snapshot. The issue is also consistently reproducible with the BinaryRequest 
 writer.






[jira] [Updated] (SOLR-5363) NoClassDefFoundError when using Apache Log4J2

2013-10-23 Thread Petar Tahchiev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Petar Tahchiev updated SOLR-5363:
-

Attachment: SOLR-5363.patch

Hi guys,

It seems that the CoreContainer checks whether the slf4j implementation name 
contains "log4j" and, if so, instantiates the Log4j watcher. The problem is that 
with Log4J2 the slf4j implementation is: 
{code}
org.slf4j.helpers.Log4jLoggerFactory
{code}

so you see it does contain "Log4j", and thus Solr will try to instantiate the 
Log4JWatcher, which was written for Log4J1. The Log4J1 slf4j implementation is 
called: 
{code}
org.slf4j.impl.Log4jLoggerFactory
{code}

What I have done is change the if to compare the full class name. If you 
apply this patch I guess the issue can be closed, but further on it might be a 
good idea to implement a Log4J2Watcher. 
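The distinction Petar describes can be sketched like this (the class and method names are invented for illustration; the real check lives in CoreContainer):

```java
public class Log4jWatcherCheck {
    // Substring matching on "Log4j" wrongly matches the Log4j2 binding
    // (org.slf4j.helpers.Log4jLoggerFactory); comparing against the full
    // Log4J1 binding class name does not.
    static boolean isLog4j1Binding(String slf4jFactoryClass) {
        return "org.slf4j.impl.Log4jLoggerFactory".equals(slf4jFactoryClass);
    }

    public static void main(String[] args) {
        System.out.println(isLog4j1Binding("org.slf4j.impl.Log4jLoggerFactory"));    // true
        System.out.println(isLog4j1Binding("org.slf4j.helpers.Log4jLoggerFactory")); // false
    }
}
```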

 NoClassDefFoundError when using Apache Log4J2
 -

 Key: SOLR-5363
 URL: https://issues.apache.org/jira/browse/SOLR-5363
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.5
Reporter: Petar Tahchiev
  Labels: log4j2
 Attachments: SOLR-5363.patch


 Hey guys,
 I'm using Log4J2 + SLF4J in my project. Unfortunately my embedded solr server 
 throws this error when starting:
 {code}
 Caused by: org.springframework.beans.factory.BeanDefinitionStoreException: 
 Factory method [public org.springframework.da
 ta.solr.core.SolrOperations 
 com.x.platform.core.config.SolrsearchConfig.defaultSolrTemplate() throws 
 javax.xml.par
 sers.ParserConfigurationException,java.io.IOException,org.xml.sax.SAXException]
  threw exception; nested exception is org
 .springframework.beans.factory.BeanCreationException: Error creating bean 
 with name 'defaultSolrServer' defined in class
  path resource [com/x/platform/core/config/SolrsearchConfig.class]: 
 Instantiation of bean failed; nested exception
  is org.springframework.beans.factory.BeanDefinitionStoreException: Factory 
 method [public org.apache.solr.client.solrj.
 SolrServer 
 com.xx.platform.core.config.SolrsearchConfig.defaultSolrServer() throws 
 javax.xml.parsers.ParserConfigur
 ationException,java.io.IOException,org.xml.sax.SAXException] threw exception; 
 nested exception is java.lang.NoClassDefFo
 undError: org/apache/log4j/Priority
 at 
 org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy
 .java:181)
 at 
 org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolv
 er.java:570)
 ... 105 more
 Caused by: org.springframework.beans.factory.BeanCreationException: Error 
 creating bean with name 'defaultSolrServer' de
 fined in class path resource 
 [com/xx/platform/core/config/SolrsearchConfig.class]: Instantiation of 
 bean failed; ne
 sted exception is 
 org.springframework.beans.factory.BeanDefinitionStoreException: Factory 
 method [public org.apache.solr
 .client.solrj.SolrServer 
 com.xxx.platform.core.config.SolrsearchConfig.defaultSolrServer() throws 
 javax.xml.parsers.
 ParserConfigurationException,java.io.IOException,org.xml.sax.SAXException] 
 threw exception; nested exception is java.lan
 g.NoClassDefFoundError: org/apache/log4j/Priority
 at 
 org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolv
 er.java:581)
 at 
 org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(Ab
 stractAutowireCapableBeanFactory.java:1025)
 at 
 org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutow
 ireCapableBeanFactory.java:921)
 at 
 org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCap
 ableBeanFactory.java:487)
 at 
 org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapab
 leBeanFactory.java:458)
 at 
 org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:295)
 at 
 org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegis
 try.java:223)
 at 
 org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:292)
 at 
 org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:194)
 at 
 org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(Configurati
 onClassEnhancer.java:298)
 at 
 com.xx.platform.core.config.SolrsearchConfig$$EnhancerByCGLIB$$c571c5a6.defaultSolrServer(generated)
 at 
 

[jira] [Commented] (SOLR-5363) NoClassDefFoundError when using Apache Log4J2

2013-10-23 Thread Shikhar Bhushan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13802926#comment-13802926
 ] 

Shikhar Bhushan commented on SOLR-5363:
---

Confirming the issue & the Petar's assessment, ran into this as well

 NoClassDefFoundError when using Apache Log4J2
 -

 Key: SOLR-5363
 URL: https://issues.apache.org/jira/browse/SOLR-5363
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.5
Reporter: Petar Tahchiev
  Labels: log4j2
 Attachments: SOLR-5363.patch



[jira] [Comment Edited] (SOLR-5363) NoClassDefFoundError when using Apache Log4J2

2013-10-23 Thread Shikhar Bhushan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13802926#comment-13802926
 ] 

Shikhar Bhushan edited comment on SOLR-5363 at 10/23/13 3:03 PM:
-

Confirming the issue & Petar's assessment, ran into this as well


was (Author: shikhar):
Confirming the issue & the Petar's assessment, ran into this as well

 NoClassDefFoundError when using Apache Log4J2
 -

 Key: SOLR-5363
 URL: https://issues.apache.org/jira/browse/SOLR-5363
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.5
Reporter: Petar Tahchiev
  Labels: log4j2
 Attachments: SOLR-5363.patch



[jira] [Assigned] (SOLR-5363) NoClassDefFoundError when using Apache Log4J2

2013-10-23 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward reassigned SOLR-5363:
---

Assignee: Alan Woodward

 NoClassDefFoundError when using Apache Log4J2
 -

 Key: SOLR-5363
 URL: https://issues.apache.org/jira/browse/SOLR-5363
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.5
Reporter: Petar Tahchiev
Assignee: Alan Woodward
  Labels: log4j2
 Attachments: SOLR-5363.patch


 Hey guys,
 I'm using Log4J2 + SLF4J in my project. Unfortunately my embedded solr server 
 throws this error when starting:
 {code}
 Caused by: org.springframework.beans.factory.BeanDefinitionStoreException: Factory method [public org.springframework.data.solr.core.SolrOperations com.x.platform.core.config.SolrsearchConfig.defaultSolrTemplate() throws javax.xml.parsers.ParserConfigurationException,java.io.IOException,org.xml.sax.SAXException] threw exception; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'defaultSolrServer' defined in class path resource [com/x/platform/core/config/SolrsearchConfig.class]: Instantiation of bean failed; nested exception is org.springframework.beans.factory.BeanDefinitionStoreException: Factory method [public org.apache.solr.client.solrj.SolrServer com.xx.platform.core.config.SolrsearchConfig.defaultSolrServer() throws javax.xml.parsers.ParserConfigurationException,java.io.IOException,org.xml.sax.SAXException] threw exception; nested exception is java.lang.NoClassDefFoundError: org/apache/log4j/Priority
     at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:181)
     at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:570)
     ... 105 more
 Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'defaultSolrServer' defined in class path resource [com/xx/platform/core/config/SolrsearchConfig.class]: Instantiation of bean failed; nested exception is org.springframework.beans.factory.BeanDefinitionStoreException: Factory method [public org.apache.solr.client.solrj.SolrServer com.xxx.platform.core.config.SolrsearchConfig.defaultSolrServer() throws javax.xml.parsers.ParserConfigurationException,java.io.IOException,org.xml.sax.SAXException] threw exception; nested exception is java.lang.NoClassDefFoundError: org/apache/log4j/Priority
     at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:581)
     at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1025)
     at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:921)
     at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:487)
     at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:458)
     at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:295)
     at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:223)
     at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:292)
     at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:194)
     at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:298)
     at com.xx.platform.core.config.SolrsearchConfig$$EnhancerByCGLIB$$c571c5a6.defaultSolrServer(generated)
     at com.x.platform.core.config.SolrsearchConfig.defaultSolrTemplate(SolrsearchConfig.java:37)
     at com.xx.platform.core.config.SolrsearchConfig$$EnhancerByCGLIB$$c571c5a6.CGLIB$defaultSolrTemplate$2(generated)
     at com.x.platform.core.config.SolrsearchConfig$$EnhancerByCGLIB$$c571c5a6$$FastClassByCGLIB$$f67069c2.invoke(generated)
     at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:228)
     at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:286)
 {code}

[jira] [Commented] (SOLR-5363) NoClassDefFoundError when using Apache Log4J2

2013-10-23 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13802939#comment-13802939
 ] 

Alan Woodward commented on SOLR-5363:
-

Hi Petar, thanks for this.  It looks as though your patch is against a slightly 
older version of Solr (the relevant code has moved out of CoreContainer and 
into LogWatcher now), but it was simple to apply.
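The failure mode here is that log-watching code references log4j 1.x classes (org.apache.log4j.Priority) that a Log4J2/SLF4J classpath does not provide. A minimal sketch of one defensive pattern, not the actual SOLR-5363 patch, is to probe the classpath before touching the legacy classes (class and method names below are illustrative):

```java
// Hypothetical sketch, not the SOLR-5363 fix itself: guard log4j-1.x-specific
// code behind a classpath probe so that running under Log4J2 (which does not
// ship org.apache.log4j.Priority) cannot trigger a NoClassDefFoundError.
public class Log4jDetector {
    /** Returns true only if the legacy log4j 1.x classes are on the classpath. */
    public static boolean log4jAvailable() {
        try {
            // Loading by name fails cleanly instead of throwing
            // NoClassDefFoundError from a direct static reference.
            Class.forName("org.apache.log4j.Priority");
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // With only an SLF4J/Log4J2 binding present this prints false,
        // and the caller can skip registering a log4j-1.x log watcher.
        System.out.println("log4j 1.x present: " + log4jAvailable());
    }
}
```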


[jira] [Created] (LUCENE-5301) All PackedInts APIs should share a common interface for random-access reads

2013-10-23 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-5301:


 Summary: All PackedInts APIs should share a common interface for 
random-access reads
 Key: LUCENE-5301
 URL: https://issues.apache.org/jira/browse/LUCENE-5301
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor


It would be convenient if all PackedInts structures had a super-class with the 
{{long get(long index)}} method. Maybe this super-class could even be 
NumericDocValues, so that doc values formats don't need to wrap e.g. 
BlockPackedReader into this kind of construct:
{code}
final BlockPackedReader reader = new BlockPackedReader(data, 
entry.packedIntsVersion, entry.blockSize, entry.count, true);
return new LongNumericDocValues() {
  @Override
  public long get(long id) {
return reader.get(id);
  }
};
{code}

Instead, they could just
{code}
return new BlockPackedReader(data, entry.packedIntsVersion, 
entry.blockSize, entry.count, true);
{code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5301) All PackedInts APIs should share a common interface for random-access reads

2013-10-23 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-5301:
-

Attachment: LUCENE-5301.patch

Here is a patch. All PackedInts structures now extend NumericDocValues.







[jira] [Commented] (LUCENE-5300) SORTED_SET could use SORTED encoding when the field is actually single-valued

2013-10-23 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13802956#comment-13802956
 ] 

Robert Muir commented on LUCENE-5300:
-

I'm not so happy about this:

{code}
   @Override
   public SortedSetDocValues getSortedSet(FieldInfo field) throws IOException {
if (!ordIndexes.containsKey(field.number)) {
// if (entry is missing look in another place)
{code}

Can we just explicitly write the way the field is encoded instead of the 
fallback? The fallback could be confusing in the case of real bugs.
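The "write it explicitly" alternative can be illustrated in miniature. In this sketch (the constants and layout are invented for illustration, not the LUCENE-5300 format), the writer records a format byte per field, so the reader dispatches on it and a corrupt value fails loudly instead of silently falling back:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Toy version of "write the encoding explicitly instead of inferring it":
// one format byte per field, dispatched on at read time.
public class ExplicitEncoding {
    public static final byte SORTED = 0;
    public static final byte SORTED_SET = 1;

    public static byte[] writeField(byte format) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        out.writeByte(format); // the encoding is stated, never guessed
        out.close();
        return bytes.toByteArray();
    }

    public static String readField(byte[] data) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        byte format = in.readByte();
        switch (format) {
            case SORTED: return "SORTED";
            case SORTED_SET: return "SORTED_SET";
            default: throw new IOException("corrupt format byte: " + format);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(readField(writeField(SORTED))); // prints SORTED
    }
}
```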

 SORTED_SET could use SORTED encoding when the field is actually single-valued
 -

 Key: LUCENE-5300
 URL: https://issues.apache.org/jira/browse/LUCENE-5300
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-5300.patch


 It would be nice to detect when a SORTED_SET field is single-valued in order 
 to optimize storage.






[jira] [Resolved] (SOLR-5186) SolrZkClient can leak threads if it doesn't start correctly

2013-10-23 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward resolved SOLR-5186.
-

   Resolution: Fixed
Fix Version/s: 4.6

Fixed by SOLR-5359

 SolrZkClient can leak threads if it doesn't start correctly
 ---

 Key: SOLR-5186
 URL: https://issues.apache.org/jira/browse/SOLR-5186
 Project: Solr
  Issue Type: Bug
Reporter: Alan Woodward
Assignee: Alan Woodward
Priority: Minor
 Fix For: 4.6

 Attachments: SOLR-5186.patch


 Noticed this while writing tests for the embedded ZooKeeper servers.  If the 
 connection manager can't connect to a ZK server before the 
 clientConnectTimeout, or there's an Exception thrown during 
 ZkClientConnectionStrategy.connect(), then the client's SolrZooKeeper 
 instance isn't shutdown.
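The leak pattern (and its fix) can be sketched with invented names standing in for SolrZkClient/SolrZooKeeper: if connect() throws, the already-created client instance must still be shut down before the exception propagates, or its background threads leak.

```java
// Minimal sketch of the thread-leak pattern described above; FakeZooKeeper is
// a hypothetical stand-in for SolrZooKeeper, not the actual Solr code.
public class ConnectGuard {
    public static class FakeZooKeeper {
        public boolean closed = false;
        public void close() { closed = true; }
    }

    /** Connects, guaranteeing close() runs on any failure instead of leaking. */
    public static void connect(FakeZooKeeper zk, boolean fail) {
        try {
            if (fail) {
                throw new RuntimeException("connect timeout");
            }
        } catch (RuntimeException e) {
            zk.close(); // the fix: shut the instance down before rethrowing
            throw e;
        }
    }

    public static void main(String[] args) {
        FakeZooKeeper zk = new FakeZooKeeper();
        try {
            connect(zk, true);
        } catch (RuntimeException expected) {
            // expected: the connect failed, but close() has already run
        }
        System.out.println("closed after failed connect: " + zk.closed);
    }
}
```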






[jira] [Created] (SOLR-5379) Multi-word synonym filter

2013-10-23 Thread Nguyen Manh Tien (JIRA)
Nguyen Manh Tien created SOLR-5379:
--

 Summary: Multi-word synonym filter
 Key: SOLR-5379
 URL: https://issues.apache.org/jira/browse/SOLR-5379
 Project: Solr
  Issue Type: Improvement
  Components: query parsers
Reporter: Nguyen Manh Tien
Priority: Minor
 Fix For: 4.5.1


When dealing with synonyms at query time, Solr fails to work with multi-word 
synonyms for two reasons:
- First, the Lucene query parser tokenizes the user query on whitespace, so it 
splits a multi-word term into separate terms before they reach the synonym 
filter, and the filter can't recognize the multi-word term to expand it.
- Second, if the synonym filter expands into multiple terms that contain a 
multi-word synonym, SolrQueryParserBase currently uses MultiPhraseQuery to 
handle synonyms, but MultiPhraseQuery doesn't work when the terms have 
different numbers of words.

For the first issue, we can quote all multi-word synonyms in the user query so 
that the Lucene query parser doesn't split them. There is a related JIRA task: 
https://issues.apache.org/jira/browse/LUCENE-2605.

For the second, we can replace MultiPhraseQuery with a BooleanQuery of SHOULD 
clauses containing one PhraseQuery per expansion whenever the token stream has 
a multi-word synonym.
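The pre-quoting workaround for the first problem can be sketched as a plain string transform (the helper below is invented for illustration and is not the actual patch): wrap every known multi-word synonym in quotes before the query parser sees it, so whitespace tokenization cannot split it ahead of the synonym filter.

```java
import java.util.Arrays;
import java.util.List;

// Rough sketch of the pre-quoting idea; names and the naive substring
// matching are illustrative only, not the quoted.patch implementation.
public class SynonymQuoter {
    /** Quotes each multi-word synonym-vocabulary entry found in the query. */
    public static String quoteMultiWordSynonyms(String query, List<String> multiWordSynonyms) {
        String result = query;
        for (String phrase : multiWordSynonyms) {
            if (phrase.contains(" ") && result.contains(phrase)) {
                // Quoting keeps the phrase intact through whitespace
                // tokenization, so the synonym filter sees it whole.
                result = result.replace(phrase, "\"" + phrase + "\"");
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> synonyms = Arrays.asList("new york", "big apple");
        System.out.println(quoteMultiWordSynonyms("hotels in new york", synonyms));
        // prints: hotels in "new york"
    }
}
```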







[jira] [Updated] (SOLR-5379) Multi-word synonym filter

2013-10-23 Thread Nguyen Manh Tien (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nguyen Manh Tien updated SOLR-5379:
---

Attachment: synonym-expander.patch
quoted.patch

Here are two patches for the two issues above:
1. quoted.patch extends EDismaxQParser with a new option to quote multi-word 
synonyms in the user query.
2. synonym-expander.patch builds the new query structure when the user query 
contains a multi-word synonym.







[jira] [Commented] (SOLR-5363) NoClassDefFoundError when using Apache Log4J2

2013-10-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13802965#comment-13802965
 ] 

ASF subversion and git services commented on SOLR-5363:
---

Commit 1535065 from [~romseygeek] in branch 'dev/trunk'
[ https://svn.apache.org/r1535065 ]

SOLR-5363: Solr doesn't start up properly with Log4J2


[jira] [Commented] (SOLR-5363) NoClassDefFoundError when using Apache Log4J2

2013-10-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13802966#comment-13802966
 ] 

ASF subversion and git services commented on SOLR-5363:
---

Commit 1535066 from [~romseygeek] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1535066 ]

SOLR-5363: Solr doesn't start up properly with Log4J2


[jira] [Resolved] (SOLR-5363) NoClassDefFoundError when using Apache Log4J2

2013-10-23 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward resolved SOLR-5363.
-

Resolution: Fixed

Thanks Petar!


[jira] [Created] (LUCENE-5302) Make StemmerOverrideMap methods public

2013-10-23 Thread Alan Woodward (JIRA)
Alan Woodward created LUCENE-5302:
-

 Summary: Make StemmerOverrideMap methods public
 Key: LUCENE-5302
 URL: https://issues.apache.org/jira/browse/LUCENE-5302
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Alan Woodward
Priority: Minor


StemmerOverrideFilter is configured with an FST-based map that you can build at 
construction time from a list of entries.  Building this FST offline and 
loading it directly as a bytestream makes construction a lot quicker, but you 
can't do that conveniently at the moment as all the methods of 
StemmerOverrideMap are package-private.
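
The speed-up Alan describes is the build-once/load-many pattern: compile the override map offline, then load it as a bytestream at analyzer-construction time instead of rebuilding it from entries. A hedged, language-agnostic sketch of that pattern (a Python dict and pickle stand in for Lucene's FST; the real methods this issue would expose are on StemmerOverrideMap):

```python
import io
import pickle

def build_override_map(entries):
    """Offline step: compile stemmer overrides into a lookup structure.
    (Lucene uses an FST; a plain dict stands in for it here.)"""
    return dict(entries)

def save_map(mapping, buf):
    # Serialize once, offline; analogous to writing the FST as a bytestream.
    pickle.dump(mapping, buf)

def load_map(buf):
    # At construction time, just deserialize -- no per-entry rebuilding.
    return pickle.load(buf)

buf = io.BytesIO()
save_map(build_override_map([("running", "run"), ("mice", "mouse")]), buf)
buf.seek(0)
overrides = load_map(buf)
print(overrides["mice"])  # mouse
```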



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5379) Multi-word synonym filter

2013-10-23 Thread Otis Gospodnetic (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Otis Gospodnetic updated SOLR-5379:
---

Fix Version/s: 4.6

 Multi-word synonym filter
 -

 Key: SOLR-5379
 URL: https://issues.apache.org/jira/browse/SOLR-5379
 Project: Solr
  Issue Type: Improvement
  Components: query parsers
Reporter: Nguyen Manh Tien
Priority: Minor
  Labels: multi-word, queryparser, synonym
 Fix For: 4.5.1, 4.6

 Attachments: quoted.patch, synonym-expander.patch


 While dealing with synonyms at query time, Solr fails to handle multi-word 
 synonyms for two reasons:
 - First, the Lucene query parser tokenizes the user query on whitespace, so it 
 splits a multi-word term into separate terms before feeding them to the 
 synonym filter, and the synonym filter can't recognize the multi-word term to 
 do the expansion.
 - Second, if the synonym filter expands into multiple terms that contain a 
 multi-word synonym, SolrQueryParserBase currently uses MultiPhraseQuery to 
 handle synonyms, but MultiPhraseQuery doesn't work when the terms have 
 different numbers of words.
 For the first problem, we can quote every multi-word synonym in the user query 
 so that the Lucene query parser doesn't split it. There is a related JIRA 
 issue: https://issues.apache.org/jira/browse/LUCENE-2605.
 For the second, we can replace MultiPhraseQuery with a BooleanQuery of SHOULD 
 clauses, one PhraseQuery per alternative, whenever the token stream contains a 
 multi-word synonym.
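
The second proposal can be sketched independently of the parser internals: instead of one MultiPhraseQuery, emit a boolean of phrases, one quoted phrase per synonym alternative, so alternatives with different word counts coexist. A minimal illustration (hypothetical helper, not Solr code; query shown as a string for readability):

```python
def expand_synonyms(phrase, synonyms):
    """Turn a phrase plus its multi-word synonyms into a boolean-of-phrases
    query string: one PhraseQuery-like quoted phrase per alternative,
    OR'ed (SHOULD) together."""
    alternatives = [phrase] + synonyms.get(phrase, [])
    return "(" + " OR ".join('"%s"' % alt for alt in alternatives) + ")"

synonyms = {"sea biscuit": ["sea biscit", "seabiscuit"]}
print(expand_synonyms("sea biscuit", synonyms))
# ("sea biscuit" OR "sea biscit" OR "seabiscuit")
```

Note the alternatives have one, two, and two words respectively; a MultiPhraseQuery cannot express that mix, while a boolean of independent phrase queries can.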






[jira] [Updated] (LUCENE-5302) Make StemmerOverrideMap methods public

2013-10-23 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-5302:
--

Attachment: LUCENE-5302.patch

 Make StemmerOverrideMap methods public
 --

 Key: LUCENE-5302
 URL: https://issues.apache.org/jira/browse/LUCENE-5302
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Alan Woodward
Priority: Minor
 Attachments: LUCENE-5302.patch


 StemmerOverrideFilter is configured with an FST-based map that you can build 
 at construction time from a list of entries.  Building this FST offline and 
 loading it directly as a bytestream makes construction a lot quicker, but you 
 can't do that conveniently at the moment as all the methods of 
 StemmerOverrideMap are package-private.






[jira] [Commented] (LUCENE-2899) Add OpenNLP Analysis capabilities as a module

2013-10-23 Thread Lance Norskog (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13802982#comment-13802982
 ] 

Lance Norskog commented on LUCENE-2899:
---

Hi-

The latest patch is LUCENE-2899-x.patch, please try that. Also, apply it with:
patch -p0 < patchfile

Lance




 Add OpenNLP Analysis capabilities as a module
 -

 Key: LUCENE-2899
 URL: https://issues.apache.org/jira/browse/LUCENE-2899
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/analysis
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
Priority: Minor
 Fix For: 4.6

 Attachments: LUCENE-2899-current.patch, LUCENE-2899.patch, 
 LUCENE-2899.patch, LUCENE-2899.patch, LUCENE-2899.patch, LUCENE-2899.patch, 
 LUCENE-2899.patch, LUCENE-2899-RJN.patch, LUCENE-2899-x.patch, 
 LUCENE-2899-x.patch, LUCENE-2899-x.patch, OpenNLPFilter.java, 
 OpenNLPFilter.java, OpenNLPTokenizer.java, opennlp_trunk.patch


 Now that OpenNLP is an ASF project and has a nice license, it would be nice 
 to have a submodule (under analysis) that exposed capabilities for it. Drew 
 Farris, Tom Morton and I have code that does:
 * Sentence Detection as a Tokenizer (could also be a TokenFilter, although it 
 would have to change slightly to buffer tokens)
 * NamedEntity recognition as a TokenFilter
 We are also planning a Tokenizer/TokenFilter that can put parts of speech as 
 either payloads (PartOfSpeechAttribute?) on a token or at the same position.
 I'd propose it go under:
 modules/analysis/opennlp






[jira] [Created] (SOLR-5380) Using cloudSolrServer.setDefaultCollection(collectionId) does not work as intended for an alias spanning more than 1 collection.

2013-10-23 Thread Mark Miller (JIRA)
Mark Miller created SOLR-5380:
-

 Summary: Using cloudSolrServer.setDefaultCollection(collectionId) 
does not work as intended for an alias spanning more than 1 collection.
 Key: SOLR-5380
 URL: https://issues.apache.org/jira/browse/SOLR-5380
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.6, 5.0









[jira] [Comment Edited] (SOLR-5380) Using cloudSolrServer.setDefaultCollection(collectionId) does not work as intended for an alias spanning more than 1 collection.

2013-10-23 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13802988#comment-13802988
 ] 

Mark Miller edited comment on SOLR-5380 at 10/23/13 4:10 PM:
-

As reported by Thomas Egense on the user list and confirmed by [~elyograg]

http://lucene.472066.n3.nabble.com/Minor-bug-with-CloudSolrServer-and-collection-alias-td4097191.html


was (Author: markrmil...@gmail.com):
As reported by Thomas Egense on the user list and confirmed by [~elyograg]

 Using cloudSolrServer.setDefaultCollection(collectionId) does not work as 
 intended for an alias spanning more than 1 collection.
 

 Key: SOLR-5380
 URL: https://issues.apache.org/jira/browse/SOLR-5380
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.6, 5.0

 Attachments: SOLR-5380.patch









[jira] [Commented] (SOLR-5380) Using cloudSolrServer.setDefaultCollection(collectionId) does not work as intended for an alias spanning more than 1 collection.

2013-10-23 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13802988#comment-13802988
 ] 

Mark Miller commented on SOLR-5380:
---

As reported by Thomas Egense on the user list and confirmed by [~elyograg]

 Using cloudSolrServer.setDefaultCollection(collectionId) does not work as 
 intended for an alias spanning more than 1 collection.
 

 Key: SOLR-5380
 URL: https://issues.apache.org/jira/browse/SOLR-5380
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.6, 5.0

 Attachments: SOLR-5380.patch









[jira] [Updated] (SOLR-5380) Using cloudSolrServer.setDefaultCollection(collectionId) does not work as intended for an alias spanning more than 1 collection.

2013-10-23 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-5380:
--

Attachment: SOLR-5380.patch

 Using cloudSolrServer.setDefaultCollection(collectionId) does not work as 
 intended for an alias spanning more than 1 collection.
 

 Key: SOLR-5380
 URL: https://issues.apache.org/jira/browse/SOLR-5380
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.6, 5.0

 Attachments: SOLR-5380.patch









[jira] [Updated] (SOLR-5379) Multi-word synonym filter

2013-10-23 Thread Otis Gospodnetic (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Otis Gospodnetic updated SOLR-5379:
---

Priority: Major  (was: Minor)

 Multi-word synonym filter
 -

 Key: SOLR-5379
 URL: https://issues.apache.org/jira/browse/SOLR-5379
 Project: Solr
  Issue Type: Improvement
  Components: query parsers
Reporter: Nguyen Manh Tien
  Labels: multi-word, queryparser, synonym
 Fix For: 4.5.1, 4.6

 Attachments: quoted.patch, synonym-expander.patch


 While dealing with synonyms at query time, Solr fails to handle multi-word 
 synonyms for two reasons:
 - First, the Lucene query parser tokenizes the user query on whitespace, so it 
 splits a multi-word term into separate terms before feeding them to the 
 synonym filter, and the synonym filter can't recognize the multi-word term to 
 do the expansion.
 - Second, if the synonym filter expands into multiple terms that contain a 
 multi-word synonym, SolrQueryParserBase currently uses MultiPhraseQuery to 
 handle synonyms, but MultiPhraseQuery doesn't work when the terms have 
 different numbers of words.
 For the first problem, we can quote every multi-word synonym in the user query 
 so that the Lucene query parser doesn't split it. There is a related JIRA 
 issue: https://issues.apache.org/jira/browse/LUCENE-2605.
 For the second, we can replace MultiPhraseQuery with a BooleanQuery of SHOULD 
 clauses, one PhraseQuery per alternative, whenever the token stream contains a 
 multi-word synonym.






[jira] [Commented] (SOLR-5302) Analytics Component

2013-10-23 Thread Houston Putman (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13803001#comment-13803001
 ] 

Houston Putman commented on SOLR-5302:
--

Andrew Psaltis, I would look at FacetingAccumulator and the 
fieldFacetAccumulator and implement something similar to those. I don't know 
much about pivot faceting, but from what I can tell it is nested field facets. 
The FacetingAccumulator acts as a wrapper around the BasicAccumulator to add 
functionality for facets, so I would add a wrapper on top of the 
FacetingAccumulator to support the (nested) pivoting.

Once the functionality is there, you will want to make a PivotFacetingRequest 
class, and look at AnalyticsRequestFactory, AnalyticsStats and 
AnalyticsRequest to make sure your pivot params get parsed and computed 
correctly.
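
The "nested field facets" idea can be sketched without the accumulator classes: for each value of an outer field, bucket by an inner field and accumulate a statistic per (outer, inner) pair. A hypothetical illustration (not the AnalyticsComponent API; field names are made up):

```python
from collections import defaultdict

def pivot_facet_sum(docs, outer_field, inner_field, stat_field):
    """Nested (pivot) faceting: for each value of outer_field, bucket by
    inner_field and accumulate a sum of stat_field per bucket pair."""
    buckets = defaultdict(lambda: defaultdict(float))
    for doc in docs:
        buckets[doc[outer_field]][doc[inner_field]] += doc[stat_field]
    return {k: dict(v) for k, v in buckets.items()}

docs = [
    {"country": "US", "year": 2013, "price": 10.0},
    {"country": "US", "year": 2012, "price": 5.0},
    {"country": "DE", "year": 2013, "price": 7.0},
]
print(pivot_facet_sum(docs, "country", "year", "price"))
# {'US': {2013: 10.0, 2012: 5.0}, 'DE': {2013: 7.0}}
```

A wrapper accumulator, as suggested above, would do the same thing one level up: each outer bucket owns an inner accumulator.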

 Analytics Component
 ---

 Key: SOLR-5302
 URL: https://issues.apache.org/jira/browse/SOLR-5302
 Project: Solr
  Issue Type: New Feature
Reporter: Steven Bower
Assignee: Erick Erickson
 Attachments: Search Analytics Component.pdf, SOLR-5302.patch, 
 solr_analytics-2013.10.04-2.patch, Statistical Expressions.pdf


 This ticket is to track a replacement for the StatsComponent. The 
 AnalyticsComponent supports the following features:
 * All functionality of StatsComponent (SOLR-4499)
 * Field Faceting (SOLR-3435)
 ** Support for limit
 ** Sorting (bucket name or any stat in the bucket)
 ** Support for offset
 * Range Faceting
 ** Supports all options of standard range faceting
 * Query Faceting (SOLR-2925)
 * Ability to use overall/field facet statistics as input to range/query 
 faceting (i.e. calc min/max date and then facet over that range)
 * Support for more complex aggregate/mapping operations (SOLR-1622)
 ** Aggregations: min, max, sum, sum-of-square, count, missing, stddev, mean, 
 median, percentiles
 ** Operations: negation, abs, add, multiply, divide, power, log, date math, 
 string reversal, string concat
 ** Easily pluggable framework to add additional operations
 * New / cleaner output format
 Outstanding Issues:
 * Multi-value field support for stats (supported for faceting)
 * Multi-shard support (may not be possible for some operations, eg median)






[jira] [Commented] (SOLR-5380) Using cloudSolrServer.setDefaultCollection(collectionId) does not work as intended for an alias spanning more than 1 collection.

2013-10-23 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13803006#comment-13803006
 ] 

Mark Miller commented on SOLR-5380:
---

The problem was that no collection parameter was set for the request in the 
case of using the default collection. Because of this, the alias used as the 
default collection was not propagated.

Because it's difficult to update the params for the request, I chose to change 
how we create URLs in the default collection case - rather than using base_url 
+ core_name, I use base_url + default collection. SolrDispatchFilter then 
handles the rest.
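
The change amounts to picking a different URL-building strategy; a hedged sketch (illustrative only -- the real logic lives in CloudSolrServer and SolrDispatchFilter, and the host/core names below are made up):

```python
def request_url(base_url, core_name, default_collection=None):
    """Old behavior: base_url + core_name, which resolves the alias client-side
    and drops it. New behavior: base_url + default collection, so the alias
    name itself reaches the server, where SolrDispatchFilter resolves it."""
    if default_collection is not None:
        return "%s/%s" % (base_url, default_collection)  # new: alias propagates
    return "%s/%s" % (base_url, core_name)               # old: alias lost

# With an alias spanning two collections, the alias must appear in the URL:
print(request_url("http://host:8983/solr", "collection1_shard1_replica1",
                  default_collection="myalias"))
# http://host:8983/solr/myalias
```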

 Using cloudSolrServer.setDefaultCollection(collectionId) does not work as 
 intended for an alias spanning more than 1 collection.
 

 Key: SOLR-5380
 URL: https://issues.apache.org/jira/browse/SOLR-5380
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.6, 5.0

 Attachments: SOLR-5380.patch









[jira] [Updated] (LUCENE-5300) SORTED_SET could use SORTED encoding when the field is actually single-valued

2013-10-23 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-5300:
-

Attachment: LUCENE-5300.patch

It was tempting to check for {{ordIndexes}} for simplicity, but I agree it is 
safer to explicitly write the format. Here is a patch that fixes that.

 SORTED_SET could use SORTED encoding when the field is actually single-valued
 -

 Key: LUCENE-5300
 URL: https://issues.apache.org/jira/browse/LUCENE-5300
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-5300.patch, LUCENE-5300.patch


 It would be nice to detect when a SORTED_SET field is single-valued in order 
 to optimize storage.
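
The detection itself is simple: if no document in the segment has more than one ordinal, the per-document ord index is redundant and the field can be written with the cheaper single-valued SORTED encoding. A sketch of that decision (illustrative, not the DocValues format code):

```python
def choose_encoding(docs_ords):
    """docs_ords: per-document lists of ordinals for a SORTED_SET field.
    When every document has at most one value, fall back to the SORTED
    (single-valued) encoding and skip the ord index entirely."""
    single_valued = all(len(ords) <= 1 for ords in docs_ords)
    return "SORTED" if single_valued else "SORTED_SET"

print(choose_encoding([[0], [2], []]))   # SORTED
print(choose_encoding([[0, 1], [2]]))    # SORTED_SET
```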






[jira] [Commented] (SOLR-5380) Using cloudSolrServer.setDefaultCollection(collectionId) does not work as intended for an alias spanning more than 1 collection.

2013-10-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13803028#comment-13803028
 ] 

ASF subversion and git services commented on SOLR-5380:
---

Commit 1535076 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1535076 ]

SOLR-5380: Using cloudSolrServer.setDefaultCollection(collectionId) does not 
work as intended for an alias spanning more than 1 collection.

 Using cloudSolrServer.setDefaultCollection(collectionId) does not work as 
 intended for an alias spanning more than 1 collection.
 

 Key: SOLR-5380
 URL: https://issues.apache.org/jira/browse/SOLR-5380
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.6, 5.0

 Attachments: SOLR-5380.patch









[jira] [Created] (SOLR-5381) Split Clusterstate and scale

2013-10-23 Thread Noble Paul (JIRA)
Noble Paul created SOLR-5381:


 Summary: Split Clusterstate and scale 
 Key: SOLR-5381
 URL: https://issues.apache.org/jira/browse/SOLR-5381
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul


clusterstate.json is a single point of contention for all components in 
SolrCloud. It would be hard to scale SolrCloud beyond a few thousand nodes 
because there are too many updates and too many nodes need to be notified of 
the changes. As the number of nodes goes up, the size of clusterstate.json 
keeps growing and will soon exceed the limit imposed by ZK.

The first step is to store the shard information in separate nodes, so that 
each node can just listen to the shard node it belongs to. We may also need to 
split each collection into its own node, with clusterstate.json holding just 
the names of the collections.

This is an umbrella issue.
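
The proposal can be sketched as a move from one globally watched document to per-collection/per-shard znodes, so each node watches only the state it actually hosts (the paths below are hypothetical, not the final design):

```python
def watched_paths_monolithic(node):
    # Today: every node watches the single shared document, so every
    # state change anywhere notifies every node.
    return ["/clusterstate.json"]

def watched_paths_split(node):
    # Proposed: a node watches only the shard nodes it belongs to, plus a
    # small top-level list of collection names.
    return ["/collections"] + [
        "/collections/%s/state/%s" % (coll, shard)
        for coll, shard in node["shards"]
    ]

node = {"shards": [("products", "shard1"), ("logs", "shard3")]}
print(watched_paths_split(node))
# ['/collections', '/collections/products/state/shard1', '/collections/logs/state/shard3']
```

The win is that an update to one shard's state notifies only that shard's watchers instead of the whole cluster.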






[jira] [Commented] (SOLR-5380) Using cloudSolrServer.setDefaultCollection(collectionId) does not work as intended for an alias spanning more than 1 collection.

2013-10-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13803030#comment-13803030
 ] 

ASF subversion and git services commented on SOLR-5380:
---

Commit 1535077 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1535077 ]

SOLR-5380: Using cloudSolrServer.setDefaultCollection(collectionId) does not 
work as intended for an alias spanning more than 1 collection.

 Using cloudSolrServer.setDefaultCollection(collectionId) does not work as 
 intended for an alias spanning more than 1 collection.
 

 Key: SOLR-5380
 URL: https://issues.apache.org/jira/browse/SOLR-5380
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.6, 5.0

 Attachments: SOLR-5380.patch









[jira] [Updated] (SOLR-5379) Query time multi-word synonym expansion

2013-10-23 Thread Otis Gospodnetic (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Otis Gospodnetic updated SOLR-5379:
---

Summary: Query time multi-word synonym expansion  (was: Multi-word synonym 
filter)

 Query time multi-word synonym expansion
 ---

 Key: SOLR-5379
 URL: https://issues.apache.org/jira/browse/SOLR-5379
 Project: Solr
  Issue Type: Improvement
  Components: query parsers
Reporter: Nguyen Manh Tien
  Labels: multi-word, queryparser, synonym
 Fix For: 4.5.1, 4.6

 Attachments: quoted.patch, synonym-expander.patch


 While dealing with synonyms at query time, Solr fails to handle multi-word 
 synonyms for two reasons:
 - First, the Lucene query parser tokenizes the user query on whitespace, so it 
 splits a multi-word term into separate terms before feeding them to the 
 synonym filter, and the synonym filter can't recognize the multi-word term to 
 do the expansion.
 - Second, if the synonym filter expands into multiple terms that contain a 
 multi-word synonym, SolrQueryParserBase currently uses MultiPhraseQuery to 
 handle synonyms, but MultiPhraseQuery doesn't work when the terms have 
 different numbers of words.
 For the first problem, we can quote every multi-word synonym in the user query 
 so that the Lucene query parser doesn't split it. There is a related JIRA 
 issue: https://issues.apache.org/jira/browse/LUCENE-2605.
 For the second, we can replace MultiPhraseQuery with a BooleanQuery of SHOULD 
 clauses, one PhraseQuery per alternative, whenever the token stream contains a 
 multi-word synonym.






[jira] [Updated] (SOLR-5379) Query-time multi-word synonym expansion

2013-10-23 Thread Otis Gospodnetic (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Otis Gospodnetic updated SOLR-5379:
---

Summary: Query-time multi-word synonym expansion  (was: Query time 
multi-word synonym expansion)

 Query-time multi-word synonym expansion
 ---

 Key: SOLR-5379
 URL: https://issues.apache.org/jira/browse/SOLR-5379
 Project: Solr
  Issue Type: Improvement
  Components: query parsers
Reporter: Nguyen Manh Tien
  Labels: multi-word, queryparser, synonym
 Fix For: 4.5.1, 4.6

 Attachments: quoted.patch, synonym-expander.patch


 While dealing with synonyms at query time, Solr fails to handle multi-word 
 synonyms for two reasons:
 - First, the Lucene query parser tokenizes the user query on whitespace, so it 
 splits a multi-word term into separate terms before feeding them to the 
 synonym filter, and the synonym filter can't recognize the multi-word term to 
 do the expansion.
 - Second, if the synonym filter expands into multiple terms that contain a 
 multi-word synonym, SolrQueryParserBase currently uses MultiPhraseQuery to 
 handle synonyms, but MultiPhraseQuery doesn't work when the terms have 
 different numbers of words.
 For the first problem, we can quote every multi-word synonym in the user query 
 so that the Lucene query parser doesn't split it. There is a related JIRA 
 issue: https://issues.apache.org/jira/browse/LUCENE-2605.
 For the second, we can replace MultiPhraseQuery with a BooleanQuery of SHOULD 
 clauses, one PhraseQuery per alternative, whenever the token stream contains a 
 multi-word synonym.






[jira] [Commented] (SOLR-1632) Distributed IDF

2013-10-23 Thread David (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13803043#comment-13803043
 ] 

David commented on SOLR-1632:
-

is this patch currently working in 5.0?

 Distributed IDF
 ---

 Key: SOLR-1632
 URL: https://issues.apache.org/jira/browse/SOLR-1632
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 1.5
Reporter: Andrzej Bialecki 
 Fix For: 5.0

 Attachments: 3x_SOLR-1632_doesntwork.patch, distrib-2.patch, 
 distrib.patch, SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, 
 SOLR-1632.patch, SOLR-1632.patch


 Distributed IDF is a valuable enhancement for distributed search across 
 non-uniform shards. This issue tracks the proposed implementation of an API 
 to support this functionality in Solr.
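
Why non-uniform shards need this can be shown with a small sketch: IDF computed per shard diverges from IDF over the union of shards, so identical documents can score differently depending on which shard they landed on. The classic Lucene idf formula, 1 + ln(numDocs / (docFreq + 1)), is used here purely for illustration:

```python
import math

def idf(num_docs, doc_freq):
    # Classic Lucene TF-IDF idf: 1 + ln(numDocs / (docFreq + 1))
    return 1.0 + math.log(num_docs / (doc_freq + 1.0))

# Shard A: the term is rare; shard B: the term is common.
shard_a = {"num_docs": 1000, "doc_freq": 1}
shard_b = {"num_docs": 1000, "doc_freq": 499}

local_a = idf(shard_a["num_docs"], shard_a["doc_freq"])
local_b = idf(shard_b["num_docs"], shard_b["doc_freq"])

# Distributed IDF: aggregate the stats across shards, then compute idf once.
global_idf = idf(shard_a["num_docs"] + shard_b["num_docs"],
                 shard_a["doc_freq"] + shard_b["doc_freq"])

print(local_a > global_idf > local_b)  # True: per-shard scores straddle global
```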






[jira] [Created] (SOLR-5382) Update to Hadoop 2.2 GA release.

2013-10-23 Thread Mark Miller (JIRA)
Mark Miller created SOLR-5382:
-

 Summary: Update to Hadoop 2.2 GA release.
 Key: SOLR-5382
 URL: https://issues.apache.org/jira/browse/SOLR-5382
 Project: Solr
  Issue Type: Task
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 4.6, 5.0









[jira] [Commented] (SOLR-1632) Distributed IDF

2013-10-23 Thread David (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13803040#comment-13803040
 ] 

David commented on SOLR-1632:
-

It seems like this task should have a much higher priority. Distributed IDF is 
very important for scoring across non-uniform shards. I am currently using 
SolrCloud with grouping, and without distributed IDF my boost functions are 
rendered nearly useless in terms of the expected result ordering.

 Distributed IDF
 ---

 Key: SOLR-1632
 URL: https://issues.apache.org/jira/browse/SOLR-1632
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 1.5
Reporter: Andrzej Bialecki 
 Fix For: 5.0

 Attachments: 3x_SOLR-1632_doesntwork.patch, distrib-2.patch, 
 distrib.patch, SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, 
 SOLR-1632.patch, SOLR-1632.patch


 Distributed IDF is a valuable enhancement for distributed search across 
 non-uniform shards. This issue tracks the proposed implementation of an API 
 to support this functionality in Solr.






[jira] [Issue Comment Deleted] (SOLR-1632) Distributed IDF

2013-10-23 Thread David (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David updated SOLR-1632:


Comment: was deleted

(was: It seems like this task should have a much higher priority. Distributed 
IDF is very important for scoring across non-uniform shards. I am currently 
using Solr Cloud with grouping and without distributed IDF my boost functions 
are rendered nearly useless in terms of the result ordering expected.)

 Distributed IDF
 ---

 Key: SOLR-1632
 URL: https://issues.apache.org/jira/browse/SOLR-1632
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 1.5
Reporter: Andrzej Bialecki 
 Fix For: 5.0

 Attachments: 3x_SOLR-1632_doesntwork.patch, distrib-2.patch, 
 distrib.patch, SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, 
 SOLR-1632.patch, SOLR-1632.patch


 Distributed IDF is a valuable enhancement for distributed search across 
 non-uniform shards. This issue tracks the proposed implementation of an API 
 to support this functionality in Solr.






[jira] [Updated] (LUCENE-2844) benchmark geospatial performance based on geonames.org

2013-10-23 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-2844:
-

Attachment: LUCENE-2844_spatial_benchmark.patch

The attached patch adds documentation. I chose to leave the extensive option 
list in spatial.alg instead of redundantly listing it elsewhere, but I do point 
readers to spatial.alg for the listing.

I put a compressed allCountries.txt up on people.apache.org; it is a 
randomized-line-order version of the one from geonames. This is fetched instead 
of the live one for reproducibility of test results.

I made various other fairly minor improvements too. Notably, if another 
SpatialStrategy implementation needs to be tested, it should be feasible to do 
so by extending SpatialDocMaker without duplicating much code.

I intend to commit this in a couple of days.

 benchmark geospatial performance based on geonames.org
 --

 Key: LUCENE-2844
 URL: https://issues.apache.org/jira/browse/LUCENE-2844
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/benchmark, modules/spatial
Reporter: David Smiley
Assignee: David Smiley
Priority: Minor
 Fix For: 4.6, 5.0

 Attachments: benchmark-geo.patch, benchmark-geo.patch, 
 LUCENE-2844_spatial_benchmark.patch, LUCENE-2844_spatial_benchmark.patch


 See comments for details.
 In particular, the original patch benchmark-geo.patch is fairly different 
 than LUCENE-2844.patch






[jira] [Commented] (SOLR-5381) Split Clusterstate and scale

2013-10-23 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13803056#comment-13803056
 ] 

Mark Miller commented on SOLR-5381:
---

I think this is just one of many issues you will hit at a few thousand nodes 
currently.

 Split Clusterstate and scale 
 -

 Key: SOLR-5381
 URL: https://issues.apache.org/jira/browse/SOLR-5381
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
   Original Estimate: 2,016h
  Remaining Estimate: 2,016h

 clusterstate.json is a single point of contention for all components in 
 SolrCloud. It would be hard to scale SolrCloud beyond a few thousand nodes 
 because there are too many updates and too many nodes need to be notified of 
 the changes. As the number of nodes goes up, the size of clusterstate.json 
 keeps growing and will soon exceed the limit imposed by ZK.
 The first step is to store the shard information in separate nodes, so that 
 each node can just listen to the shard node it belongs to. We may also need to 
 split each collection into its own node, with clusterstate.json holding just 
 the names of the collections.
 This is an umbrella issue.






[jira] [Commented] (SOLR-5381) Split Clusterstate and scale

2013-10-23 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13803059#comment-13803059
 ] 

Mark Miller commented on SOLR-5381:
---

bq.  size of clusterstate.json keeps going up and it will soon exceed the limit 
imposed by ZK.

That limit is adjustable - I think even at a couple thousand nodes you are only 
talking a couple/few MB at most, which moves pretty quickly over a fast network.

I'm not saying we shouldn't look at this, but my testing of this at 1000 nodes 
was pretty smooth, so I would guess a couple thousand nodes is also reasonable 
- and to my knowledge there is no one approaching that scale with SolrCloud 
currently.







[jira] [Commented] (SOLR-1632) Distributed IDF

2013-10-23 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13803064#comment-13803064
 ] 

Markus Jelsma commented on SOLR-1632:
-

No, it does not work at all. I did spend some time on it but had other things 
to do. In the end I removed my (non-working) changes and uploaded a patch that 
at least compiles against the revision of that time.



 Distributed IDF
 ---

 Key: SOLR-1632
 URL: https://issues.apache.org/jira/browse/SOLR-1632
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 1.5
Reporter: Andrzej Bialecki 
 Fix For: 5.0

 Attachments: 3x_SOLR-1632_doesntwork.patch, distrib-2.patch, 
 distrib.patch, SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, 
 SOLR-1632.patch, SOLR-1632.patch


 Distributed IDF is a valuable enhancement for distributed search across 
 non-uniform shards. This issue tracks the proposed implementation of an API 
 to support this functionality in Solr.






[jira] [Commented] (SOLR-5381) Split Clusterstate and scale

2013-10-23 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13803068#comment-13803068
 ] 

Noble Paul commented on SOLR-5381:
--

[~hakeber] The requirement is to scale to 100,000's of nodes. 







[jira] [Comment Edited] (SOLR-5381) Split Clusterstate and scale

2013-10-23 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13803068#comment-13803068
 ] 

Noble Paul edited comment on SOLR-5381 at 10/23/13 5:58 PM:


[~hakeber] The requirement is to scale to 100,000's of nodes. 

Each STATE command means every node will need to download the entire 
clusterstate.json, and soon we will break ZK. 


was (Author: noble.paul):
[~hakeber] The requirement is to scale to 100,000's of nodes. 







[jira] [Commented] (SOLR-5381) Split Clusterstate and scale

2013-10-23 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13803077#comment-13803077
 ] 

Mark Miller commented on SOLR-5381:
---

We have not even nailed 1000 nodes fully yet - seems silly to start working on 
100,000's.







[jira] [Commented] (SOLR-5382) Update to Hadoop 2.2 GA release.

2013-10-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13803076#comment-13803076
 ] 

ASF subversion and git services commented on SOLR-5382:
---

Commit 1535083 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1535083 ]

SOLR-5382: Update to Hadoop 2.2 GA release.

 Update to Hadoop 2.2 GA release.
 

 Key: SOLR-5382
 URL: https://issues.apache.org/jira/browse/SOLR-5382
 Project: Solr
  Issue Type: Task
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 4.6, 5.0









[jira] [Comment Edited] (SOLR-5381) Split Clusterstate and scale

2013-10-23 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13803077#comment-13803077
 ] 

Mark Miller edited comment on SOLR-5381 at 10/23/13 6:00 PM:
-

We have not even nailed 1000 nodes fully yet - seems silly to start working on 
100,000's.


was (Author: markrmil...@gmail.com):
We have not even nailed 1000 nails fully yet - seems silly to start working on 
100,000's.







[jira] [Commented] (SOLR-5381) Split Clusterstate and scale

2013-10-23 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13803080#comment-13803080
 ] 

Noble Paul commented on SOLR-5381:
--

 We need to eliminate the known bottlenecks if we want to scale. Are there any 
other obvious issues we need to address to scale beyond the current limit?







[jira] [Commented] (SOLR-5379) Query-time multi-word synonym expansion

2013-10-23 Thread Otis Gospodnetic (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13803086#comment-13803086
 ] 

Otis Gospodnetic commented on SOLR-5379:


[~tiennm] How does this differ from SOLR-4381?  Which cases does SOLR-4381 not 
handle that this patch handles?

 Query-time multi-word synonym expansion
 ---

 Key: SOLR-5379
 URL: https://issues.apache.org/jira/browse/SOLR-5379
 Project: Solr
  Issue Type: Improvement
  Components: query parsers
Reporter: Nguyen Manh Tien
  Labels: multi-word, queryparser, synonym
 Fix For: 4.5.1, 4.6

 Attachments: quoted.patch, synonym-expander.patch


 While dealing with synonyms at query time, Solr fails to work with multi-word 
 synonyms for two reasons:
 - First, the Lucene query parser tokenizes the user query by whitespace, so it 
 splits a multi-word term into separate terms before feeding them to the 
 synonym filter; the synonym filter therefore can't recognize the multi-word 
 term in order to expand it.
 - Second, if the synonym filter expands into terms that contain a multi-word 
 synonym, SolrQueryParserBase currently uses MultiPhraseQuery to handle 
 synonyms, but MultiPhraseQuery doesn't work with terms that have different 
 numbers of words.
 For the first, we can quote all multi-word synonyms in the user query so that 
 the Lucene query parser doesn't split them. There is a related JIRA task: 
 https://issues.apache.org/jira/browse/LUCENE-2605.
 For the second, we can replace MultiPhraseQuery with a BooleanQuery of SHOULD 
 clauses containing multiple PhraseQuery instances when the token stream has a 
 multi-word synonym.
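The first workaround (pre-quoting multi-word synonyms so a whitespace-based parser keeps them intact) can be sketched like this. It is an illustrative sketch only, not the attached patch; the synonym list is hypothetical:

```python
# Pre-quote known multi-word synonyms in the raw query string so a
# whitespace-based query parser treats each one as a single phrase.
MULTI_WORD_SYNONYMS = ["sea biscit", "new york"]  # hypothetical dictionary

def quote_multiword_synonyms(query):
    # Longest-first so overlapping entries don't clobber each other.
    for phrase in sorted(MULTI_WORD_SYNONYMS, key=len, reverse=True):
        if phrase in query.lower():
            start = query.lower().index(phrase)
            original = query[start:start + len(phrase)]
            query = query.replace(original, '"%s"' % original)
    return query

print(quote_multiword_synonyms("hotels in new york"))  # hotels in "new york"
```

The quoted phrase then survives tokenization as a unit and can be matched by the synonym filter downstream.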






[jira] [Comment Edited] (SOLR-5379) Query-time multi-word synonym expansion

2013-10-23 Thread Otis Gospodnetic (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13803086#comment-13803086
 ] 

Otis Gospodnetic edited comment on SOLR-5379 at 10/23/13 6:09 PM:
--

[~tiennm] How does this differ from SOLR-4381?  Which cases does SOLR-4381 not 
handle that this patch handles?


was (Author: otis):
[~tiennm] How does this diff from SOLR-4381?  Which cases does SOLR-4381 not 
handle that this patch handles?







[jira] [Commented] (SOLR-5381) Split Clusterstate and scale

2013-10-23 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13803109#comment-13803109
 ] 

Mark Miller commented on SOLR-5381:
---

bq. ZK documentation says 1mb is the recommended limit

That's because it's kept in RAM and they want to discourage bad patterns. 1 MB 
has not scaled with networks and hardware though - it's arbitrary to say 1 MB 
and not 3 MB (which handles thousands of nodes). 3 MB will perform just as well 
as 1 MB. With modern server RAM and network speeds, this stuff flies around 
easily - I saw that on my 1000-node test; the UI was the main bottleneck there - 
it takes a long time to render the cloud screen due to the rendering speed.

We also are not constantly working with large files - in a steady state we don't 
pull or push large files to ZK at all; it's only on a cluster state change. 
All of this makes 1 MB or 5 MB pretty irrelevant for us - you can test it out 
and see.
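The "couple/few MB for thousands of nodes" claim can be checked with back-of-envelope arithmetic. The bytes-per-replica figure below is an assumption for illustration, not a measured value:

```python
# Rough size estimate for a monolithic clusterstate.json.
BYTES_PER_REPLICA_ENTRY = 200  # hypothetical JSON overhead per replica entry

def clusterstate_size_mb(nodes, replicas_per_node=1):
    """Estimated clusterstate.json size in MB for a given cluster size."""
    return nodes * replicas_per_node * BYTES_PER_REPLICA_ENTRY / (1024 * 1024)

for n in (1000, 2000, 10000):
    print(n, round(clusterstate_size_mb(n), 2), "MB")
```

Under these assumptions even 10,000 single-replica nodes stay around 2 MB, which is why the default 1 MB znode limit, rather than network transfer time, is the first thing to hit.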







[jira] [Commented] (SOLR-5381) Split Clusterstate and scale

2013-10-23 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13803110#comment-13803110
 ] 

Mark Miller commented on SOLR-5381:
---

I do agree that it becomes relevant once you are talking 10,000 or 100,000 
nodes, etc. But like I said, we have not even proved out a couple thousand 
nodes, so it seems like we are getting ahead of ourselves if we are already 
focusing on 100,000-node issues.







[jira] [Commented] (SOLR-5382) Update to Hadoop 2.2 GA release.

2013-10-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13803111#comment-13803111
 ] 

ASF subversion and git services commented on SOLR-5382:
---

Commit 1535104 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1535104 ]

SOLR-5382: Update to Hadoop 2.2 GA release.










[jira] [Resolved] (SOLR-5382) Update to Hadoop 2.2 GA release.

2013-10-23 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-5382.
---

Resolution: Fixed










[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0-ea-b109) - Build # 8011 - Failure!

2013-10-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/8011/
Java: 32bit/jdk1.8.0-ea-b109 -client -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 22974 lines...]
check-licenses:
 [echo] License check under: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr
 [licenses] MISSING sha1 checksum file for: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/protobuf-java-2.5.0.jar
 [licenses] EXPECTED sha1 checksum file : 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/protobuf-java-2.5.0.jar.sha1

[...truncated 1 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:422: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:67: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build.xml:254: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/custom-tasks.xml:62:
 License check failed. Check the logs.

Total time: 45 minutes 56 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 32bit/jdk1.8.0-ea-b109 -client -XX:+UseG1GC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure
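The failure above is the license checker complaining about a missing `.sha1` checksum file for a newly added jar. The expected file simply contains the jar's hex SHA-1 digest; a sketch of producing one (this is plain Python, not the project's ant task):

```python
import hashlib

def sha1_hex(data):
    """Hex SHA-1 digest, the content the license checker expects to find
    in a <jar-name>.sha1 file next to the jar's license files."""
    return hashlib.sha1(data).hexdigest()

# Writing the checksum file would look like (path taken from the log above):
# with open("solr/licenses/protobuf-java-2.5.0.jar.sha1", "w") as f:
#     f.write(sha1_hex(open("solr/core/lib/protobuf-java-2.5.0.jar", "rb").read()))
print(sha1_hex(b"example bytes"))
```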




[jira] [Created] (LUCENE-5303) OrdinalsCache should use reader.getCoreCacheKey()

2013-10-23 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-5303:
--

 Summary: OrdinalsCache should use reader.getCoreCacheKey()
 Key: LUCENE-5303
 URL: https://issues.apache.org/jira/browse/LUCENE-5303
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/facet
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 4.6, 5.0


I'm doing some facet performance tests, and I tried using the 
CachedOrdsCountingFacetsAggregator to cache the decoded ords per doc × field 
... but noticed it was generating way too many cache entries, because it's 
currently using the NDV instance as the cache key.

NDV instances are thread-private, so this results in way too many entries in the 
cache.






[jira] [Commented] (LUCENE-5303) OrdinalsCache should use reader.getCoreCacheKey()

2013-10-23 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13803193#comment-13803193
 ] 

Michael McCandless commented on LUCENE-5303:


Sorry, I meant BDV (BinaryDocValues) not NDV ...







[jira] [Commented] (LUCENE-5303) OrdinalsCache should use reader.getCoreCacheKey()

2013-10-23 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13803196#comment-13803196
 ] 

Shai Erera commented on LUCENE-5303:


Good catch!

I guess we should use a compound cache key: coreCacheKey + clp.field?
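The compound-key idea can be sketched as follows: key the cache on the segment's core cache key plus the field name, instead of on a thread-private reader instance. The names below are illustrative, not the Lucene API:

```python
# Cache keyed on (coreCacheKey, field): threads holding different per-thread
# reader instances over the same segment now share one cached entry.
class OrdinalsCache:
    def __init__(self):
        self._cache = {}

    def get_or_compute(self, core_cache_key, field, compute):
        key = (core_cache_key, field)
        if key not in self._cache:
            self._cache[key] = compute()
        return self._cache[key]

cache = OrdinalsCache()
# Two lookups that would each have created an entry under instance-based
# keying collapse into a single entry here:
cache.get_or_compute("segment-0", "facet_field", lambda: [1, 2, 3])
cache.get_or_compute("segment-0", "facet_field", lambda: [1, 2, 3])
print(len(cache._cache))  # 1
```

Keying on the core cache key also means entries survive reopens that share segment cores, while distinct fields on the same segment still get distinct entries.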







[jira] [Commented] (SOLR-5379) Query-time multi-word synonym expansion

2013-10-23 Thread Otis Gospodnetic (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13803199#comment-13803199
 ] 

Otis Gospodnetic commented on SOLR-5379:


My understanding of how this synonym expander (the synonym-expander.patch) 
works is:

Assume synonyms are:
{code}
Seabiscuit, Sea biscit, Biscit
{code}

For the query Seabiscuit article, the regular edismax will construct a 
MultiPhraseQuery like (Seabiscuit|Sea|biscit, biscit, article).

Instead of that, this patch rewrites the query differently:
PhraseQuery(Seabiscuit article) OR PhraseQuery(Sea biscit article) OR 
PhraseQuery(biscit article)
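That rewrite can be sketched at the string level: one PhraseQuery per synonym variant, OR'ed together. The rendering below is illustrative only, not Solr's internal query objects:

```python
# Rewrite a query containing a synonym-group member into an OR of
# phrase queries, one per variant in the group.
SYNONYMS = [["Seabiscuit", "Sea biscit", "Biscit"]]  # example group from above

def expand_to_phrase_queries(query):
    for group in SYNONYMS:
        for variant in group:
            if variant.lower() in query.lower():
                # The part of the query outside the matched synonym.
                rest = query.lower().replace(variant.lower(), "").strip()
                return " OR ".join(
                    'PhraseQuery("%s %s")' % (v, rest) for v in group
                )
    return query

print(expand_to_phrase_queries("Seabiscuit article"))
```

Because each alternative is a complete phrase, variants with different word counts no longer have to be squeezed into one MultiPhraseQuery position.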








[jira] [Updated] (SOLR-5320) Multi level compositeId router

2013-10-23 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-5320:
---

Attachment: SOLR-5320-refactored.patch

Refactored and fixed thread safety (+other) issues.


 Multi level compositeId router
 --

 Key: SOLR-5320
 URL: https://issues.apache.org/jira/browse/SOLR-5320
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Anshum Gupta
 Attachments: SOLR-5320.patch, SOLR-5320-refactored.patch

   Original Estimate: 336h
  Remaining Estimate: 336h

 This would enable multi-level routing, as compared to the 2-level routing 
 available as of now. On the usage side, here's an example:
 Document Id: myapp!dummyuser!doc
 myapp!dummyuser! can be used as the shard key for searching content for 
 dummyuser.
 myapp! can be used for searching across all users of myapp.
 I am looking at either 3- or 4-level routing. The 32-bit hash would then 
 comprise 8x4 components, one from each part (in the 4-level case).
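One way to read the "8x4" scheme is 8 bits of hash per "!"-separated id part, packed high-to-low into the 32-bit route hash. The sketch below is a guess at that allocation for illustration; the real compositeId router's hashing and bit split may differ:

```python
import zlib

def route_hash(doc_id):
    """Compose a 32-bit route hash from up to 4 '!'-separated id parts,
    taking 8 bits from each part's hash (hypothetical bit allocation)."""
    parts = doc_id.split("!")[:4]
    bits_per_part = 32 // 4
    h = 0
    for i, part in enumerate(parts):
        piece = zlib.crc32(part.encode()) & 0xFF      # keep 8 bits per part
        h |= piece << (32 - bits_per_part * (i + 1))  # pack high-to-low
    return h

# Ids sharing a "myapp!dummyuser!" prefix share their top 16 hash bits,
# so they land in the same shard range:
a = route_hash("myapp!dummyuser!doc1")
b = route_hash("myapp!dummyuser!doc2")
print(hex(a), hex(b), (a >> 16) == (b >> 16))
```

Because the high-order bits come from the leading parts, a range query on the top bits selects all documents under a given app or app+user prefix.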






[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0-ea-b109) - Build # 7916 - Failure!

2013-10-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/7916/
Java: 32bit/jdk1.8.0-ea-b109 -client -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 23156 lines...]
check-licenses:
 [echo] License check under: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr
 [licenses] MISSING sha1 checksum file for: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/core/lib/protobuf-java-2.5.0.jar
 [licenses] EXPECTED sha1 checksum file : 
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/licenses/protobuf-java-2.5.0.jar.sha1

[...truncated 1 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:428: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:67: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/build.xml:254: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/tools/custom-tasks.xml:62:
 License check failed. Check the logs.

Total time: 44 minutes 20 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 32bit/jdk1.8.0-ea-b109 -client -XX:+UseConcMarkSweepGC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure




[JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4423 - Failure

2013-10-23 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4423/

All tests passed

Build Log:
[...truncated 22995 lines...]
check-licenses:
 [echo] License check under: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr
 [licenses] MISSING sha1 checksum file for: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/core/lib/protobuf-java-2.5.0.jar
 [licenses] EXPECTED sha1 checksum file : 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/protobuf-java-2.5.0.jar.sha1

[...truncated 1 lines...]
BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/build.xml:422:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/build.xml:67:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/build.xml:254:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/lucene/tools/custom-tasks.xml:62:
 License check failed. Check the logs.

Total time: 73 minutes 0 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.7.0_45) - Build # 3384 - Failure!

2013-10-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/3384/
Java: 32bit/jdk1.7.0_45 -server -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 22901 lines...]
check-licenses:
 [echo] License check under: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr
 [licenses] MISSING sha1 checksum file for: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\core\lib\protobuf-java-2.5.0.jar
 [licenses] EXPECTED sha1 checksum file : 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\licenses\protobuf-java-2.5.0.jar.sha1

[...truncated 1 lines...]
BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:422: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:67: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build.xml:254: 
The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\tools\custom-tasks.xml:62:
 License check failed. Check the logs.

Total time: 99 minutes 1 second
Build step 'Invoke Ant' marked build as failure
Description set: Java: 32bit/jdk1.7.0_45 -server -XX:+UseConcMarkSweepGC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Updated] (LUCENE-5303) OrdinalsCache should use reader.getCoreCacheKey()

2013-10-23 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera updated LUCENE-5303:
---

Attachment: LUCENE-5303.patch

Patch changes the map to be a WeakHashMap<Object,Map<String,CachedOrds>> so the 
outer map is keyed by reader.getCoreCacheKey() and the inner map is from field 
to CachedOrds, where field is the BinaryDV field which holds the facet 
ordinals.

I also added ramBytesUsed and a test which verifies that ramBytesUsed does not 
change between threads.
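A minimal sketch of the two-level cache shape the patch describes (the class and method bodies below are illustrative stand-ins, not the actual patch code):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.WeakHashMap;

// Illustrative stand-in for the real CachedOrds.
class CachedOrds {
    long ramBytesUsed() { return 0L; }
}

class OrdinalsCacheSketch {
    // Outer map: weakly keyed by reader.getCoreCacheKey(), so entries
    // disappear when the segment core is closed. Inner map: BinaryDV
    // field name -> decoded ordinals for that field.
    private final Map<Object, Map<String, CachedOrds>> cache = new WeakHashMap<>();

    CachedOrds get(Object coreCacheKey, String field) {
        return cache
            .computeIfAbsent(coreCacheKey, k -> new HashMap<>())
            .computeIfAbsent(field, f -> new CachedOrds());
    }
}
```

Because the key is the core cache key rather than a thread-private NDV instance, all threads sharing a segment core hit the same entry.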

 OrdinalsCache should use reader.getCoreCacheKey()
 -

 Key: LUCENE-5303
 URL: https://issues.apache.org/jira/browse/LUCENE-5303
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/facet
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 4.6, 5.0

 Attachments: LUCENE-5303.patch


 I'm doing some facet performance tests, and I tried using the 
 CachedOrdsCountingFacetsAggregator to cache the decoded ords per doc X field 
 ... but noticed it was generating way too many cache entries, because it's 
 currently using the NDV instance as the cache key.
 NDV instances are thread-private so this results in way too many entries in 
 the cache.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5303) OrdinalsCache should use reader.getCoreCacheKey()

2013-10-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13803293#comment-13803293
 ] 

ASF subversion and git services commented on LUCENE-5303:
-

Commit 1535163 from [~shaie] in branch 'dev/trunk'
[ https://svn.apache.org/r1535163 ]

LUCENE-5303: OrdinalsCache should use reader.getCoreCacheKey()

 OrdinalsCache should use reader.getCoreCacheKey()
 -

 Key: LUCENE-5303
 URL: https://issues.apache.org/jira/browse/LUCENE-5303
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/facet
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 4.6, 5.0

 Attachments: LUCENE-5303.patch


 I'm doing some facet performance tests, and I tried using the 
 CachedOrdsCountingFacetsAggregator to cache the decoded ords per doc X field 
 ... but noticed it was generating way too many cache entries, because it's 
 currently using the NDV instance as the cache key.
 NDV instances are thread-private so this results in way too many entries in 
 the cache.






[jira] [Comment Edited] (SOLR-5302) Analytics Component

2013-10-23 Thread Houston Putman (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13803001#comment-13803001
 ] 

Houston Putman edited comment on SOLR-5302 at 10/23/13 8:41 PM:


Andrew Psaltis, I would look at FacetingAccumulator and the 
fieldFacetAccumulator and implement something similar to those. I don't know 
much about pivot faceting, but from what I can tell it is nested field facets. 
The FacetingAccumulator acts like a wrapper on the BasicAccumulator to add 
functionality for Facets; so I would add another BasicAccumulator wrapper that 
deals with pivoting. 

When the functionality is there, you will want to make a PivotFacetingRequest 
class, and look at the AnalyticsRequestFactory, AnalyticsStats and 
AnalyticsRequest to make sure your pivot params get parsed correctly and 
computed.
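As a very rough sketch of that wrapper idea (every name below is hypothetical; the real accumulators in the analytics code have much larger contracts):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.IntFunction;
import java.util.function.Supplier;

// Hypothetical minimal accumulator contract; BasicAccumulator's real
// surface is much richer than this single method.
interface Accumulator {
    void collect(int doc);
}

// Wrapper that re-dispatches each collected doc into a per-bucket child
// accumulator, mirroring how FacetingAccumulator wraps BasicAccumulator.
// Nesting two of these would give pivot (nested-facet) behavior.
class PivotingAccumulator implements Accumulator {
    private final Map<String, Accumulator> buckets = new HashMap<>();
    private final IntFunction<String> bucketOf;        // doc -> facet value
    private final Supplier<Accumulator> childFactory;  // builds one child per bucket

    PivotingAccumulator(IntFunction<String> bucketOf,
                        Supplier<Accumulator> childFactory) {
        this.bucketOf = bucketOf;
        this.childFactory = childFactory;
    }

    public void collect(int doc) {
        buckets.computeIfAbsent(bucketOf.apply(doc), b -> childFactory.get())
               .collect(doc);
    }
}
```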


was (Author: houstonputman):
Andrew Psaltis, I would look at FacetingAccumulator and the 
fieldFacetAccumulator and implement something similar to those. I don't know 
much about pivot faceting, but from what I can tell it is nested field facets. 
The FacetingAccumulator acts like a wrapper on the BasicAccumulator to add 
functionality for Facets; so I would add a wrapper on top of the 
FacetingAccumulator to support the (nested) pivoting. 

When the functionality is there, you will want to make a PivotFacetingRequest 
class, and look at the AnalyticsRequestFactory, AnalyticsStats and 
AnalyticsRequest to make sure your pivot params get parsed correctly and 
computed.

 Analytics Component
 ---

 Key: SOLR-5302
 URL: https://issues.apache.org/jira/browse/SOLR-5302
 Project: Solr
  Issue Type: New Feature
Reporter: Steven Bower
Assignee: Erick Erickson
 Attachments: Search Analytics Component.pdf, SOLR-5302.patch, 
 solr_analytics-2013.10.04-2.patch, Statistical Expressions.pdf


 This ticket is to track a replacement for the StatsComponent. The 
 AnalyticsComponent supports the following features:
 * All functionality of StatsComponent (SOLR-4499)
 * Field Faceting (SOLR-3435)
 ** Support for limit
 ** Sorting (bucket name or any stat in the bucket)
 ** Support for offset
 * Range Faceting
 ** Supports all options of standard range faceting
 * Query Faceting (SOLR-2925)
 * Ability to use overall/field facet statistics as input to range/query 
 faceting (i.e. calc min/max date and then facet over that range)
 * Support for more complex aggregate/mapping operations (SOLR-1622)
 ** Aggregations: min, max, sum, sum-of-square, count, missing, stddev, mean, 
 median, percentiles
 ** Operations: negation, abs, add, multiply, divide, power, log, date math, 
 string reversal, string concat
 ** Easily pluggable framework to add additional operations
 * New / cleaner output format
 Outstanding Issues:
 * Multi-value field support for stats (supported for faceting)
 * Multi-shard support (may not be possible for some operations, eg median)






[jira] [Commented] (LUCENE-5303) OrdinalsCache should use reader.getCoreCacheKey()

2013-10-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13803299#comment-13803299
 ] 

ASF subversion and git services commented on LUCENE-5303:
-

Commit 1535167 from [~shaie] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1535167 ]

LUCENE-5303: OrdinalsCache should use reader.getCoreCacheKey()

 OrdinalsCache should use reader.getCoreCacheKey()
 -

 Key: LUCENE-5303
 URL: https://issues.apache.org/jira/browse/LUCENE-5303
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/facet
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 4.6, 5.0

 Attachments: LUCENE-5303.patch


 I'm doing some facet performance tests, and I tried using the 
 CachedOrdsCountingFacetsAggregator to cache the decoded ords per doc X field 
 ... but noticed it was generating way too many cache entries, because it's 
 currently using the NDV instance as the cache key.
 NDV instances are thread-private so this results in way too many entries in 
 the cache.






[jira] [Resolved] (LUCENE-5303) OrdinalsCache should use reader.getCoreCacheKey()

2013-10-23 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera resolved LUCENE-5303.


   Resolution: Fixed
 Assignee: Shai Erera  (was: Michael McCandless)
Lucene Fields: New,Patch Available  (was: New)

Committed to trunk and 4x.

 OrdinalsCache should use reader.getCoreCacheKey()
 -

 Key: LUCENE-5303
 URL: https://issues.apache.org/jira/browse/LUCENE-5303
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/facet
Reporter: Michael McCandless
Assignee: Shai Erera
 Fix For: 4.6, 5.0

 Attachments: LUCENE-5303.patch


 I'm doing some facet performance tests, and I tried using the 
 CachedOrdsCountingFacetsAggregator to cache the decoded ords per doc X field 
 ... but noticed it was generating way too many cache entries, because it's 
 currently using the NDV instance as the cache key.
 NDV instances are thread-private so this results in way too many entries in 
 the cache.






[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0-ea-b109) - Build # 8012 - Still Failing!

2013-10-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/8012/
Java: 64bit/jdk1.8.0-ea-b109 -XX:-UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 22887 lines...]
check-licenses:
 [echo] License check under: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr
 [licenses] MISSING sha1 checksum file for: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/protobuf-java-2.5.0.jar
 [licenses] EXPECTED sha1 checksum file : 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/protobuf-java-2.5.0.jar.sha1

[...truncated 1 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:422: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:67: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build.xml:254: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/custom-tasks.xml:62:
 License check failed. Check the logs.

Total time: 42 minutes 3 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 64bit/jdk1.8.0-ea-b109 -XX:-UseCompressedOops 
-XX:+UseParallelGC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-5381) Split Clusterstate and scale

2013-10-23 Thread Jack Krupansky (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13803347#comment-13803347
 ] 

Jack Krupansky commented on SOLR-5381:
--

It might be helpful to clarify the intended use cases. I mean, normally, 
traditionally, a cluster is some number of nodes that have an application 
need to talk with each other, such as a Solr query fanning out to shards and 
then aggregating the results.

So, are we really talking about individual collections with many thousands of 
shards?

Or, are we talking more about having many thousands of collections, each of 
which may only have a rather modest number of shards?

And, are we talking about multitenancy here?

 Split Clusterstate and scale 
 -

 Key: SOLR-5381
 URL: https://issues.apache.org/jira/browse/SOLR-5381
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
   Original Estimate: 2,016h
  Remaining Estimate: 2,016h

 clusterstate.json is a single point of contention for all components in 
 SolrCloud. It would be hard to scale SolrCloud beyond a few thousand nodes 
 because there are too many updates and too many nodes need to be notified of 
 the changes. As the no. of nodes goes up, the size of clusterstate.json keeps 
 growing, and it will soon exceed the limit imposed by ZK.
 The first step is to store the shard information in separate nodes, so each 
 node can just listen to the shard node it belongs to. We may also need to 
 split each collection into its own node, with clusterstate.json just 
 holding the names of the collections.
 This is an umbrella issue
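 A hypothetical sketch of the resulting ZK layout (paths below are illustrative only, not a committed design):

```
/clusterstate.json                 # shrinks to little more than collection names
/collections/coll1/state.json      # per-collection state; only coll1's nodes watch it
/collections/coll1/shards/shard1   # per-shard node; each replica watches only its shard
/collections/coll2/state.json
```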






[JENKINS] Lucene-Solr-NightlyTests-4.x - Build # 414 - Still Failing

2013-10-23 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-4.x/414/

1 tests failed.
FAILED:  
org.apache.lucene.search.suggest.DocumentExpressionDictionaryTest.testWithoutPayload

Error Message:
CompositeReader is not supported

Stack Trace:
java.lang.IllegalArgumentException: CompositeReader is not supported
at 
__randomizedtesting.SeedInfo.seed([4393102586925672:F90997B172D8B897]:0)
at 
org.apache.lucene.search.suggest.DocumentExpressionDictionary$DocumentExpressionInputIterator.<init>(DocumentExpressionDictionary.java:110)
at 
org.apache.lucene.search.suggest.DocumentExpressionDictionary.getWordsIterator(DocumentExpressionDictionary.java:98)
at 
org.apache.lucene.search.suggest.DocumentExpressionDictionaryTest.testWithoutPayload(DocumentExpressionDictionaryTest.java:128)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:679)




Build Log:
[...truncated 8359 lines...]
   [junit4] Suite: 
org.apache.lucene.search.suggest.DocumentExpressionDictionaryTest
   [junit4]   2> NOTE: download the large Jenkins line-docs file by running 
'ant get-jenkins-line-docs' in the lucene directory.
   [junit4]   2> NOTE: reproduce with: ant test  

[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1007: POMs out of sync

2013-10-23 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1007/

No tests ran.

Build Log:
[...truncated 24200 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-4.x-Windows (64bit/jdk1.7.0_45) - Build # 3309 - Failure!

2013-10-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Windows/3309/
Java: 64bit/jdk1.7.0_45 -XX:-UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 23137 lines...]
check-licenses:
 [echo] License check under: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\solr
 [licenses] MISSING sha1 checksum file for: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\solr\core\lib\protobuf-java-2.5.0.jar
 [licenses] EXPECTED sha1 checksum file : 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\solr\licenses\protobuf-java-2.5.0.jar.sha1

[...truncated 1 lines...]
BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\build.xml:428: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\build.xml:67: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\solr\build.xml:254: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\lucene\tools\custom-tasks.xml:62:
 License check failed. Check the logs.

Total time: 95 minutes 35 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 64bit/jdk1.7.0_45 -XX:-UseCompressedOops -XX:+UseG1GC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Created] (SOLR-5383) Group Faceting does not play nice with Trie fields that use precisionStep

2013-10-23 Thread Hoss Man (JIRA)
Hoss Man created SOLR-5383:
--

 Summary: Group Faceting does not play nice with Trie fields that 
use precisionStep
 Key: SOLR-5383
 URL: https://issues.apache.org/jira/browse/SOLR-5383
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man


(Initially reported by DavidBoychuck on IRC & solr-user)

Field faceting normally works fine on Trie fields, even when precisionStep is 
used -- but when you activate group faceting, then the artificially injected 
terms in the trie fields start surfacing as facet values.

---

Workaround is to use precisionStep=0 for the fields you want to use group 
faceting on, or copyField between a precisionStep=0 field for group faceting 
and another field with multiple precisions for the improved range query 
performance.
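The workaround can be sketched in schema.xml terms (field and type names below are illustrative):

```xml
<!-- precisionStep=0: no extra terms injected, safe for group.facet -->
<fieldType name="tint_facet" class="solr.TrieIntField" precisionStep="0"/>
<!-- multi-precision variant kept for fast range queries -->
<fieldType name="tint_range" class="solr.TrieIntField" precisionStep="8"/>

<field name="num_facet" type="tint_facet" indexed="true" stored="false"/>
<field name="num"       type="tint_range" indexed="true" stored="true"/>

<!-- index the same value into both fields -->
<copyField source="num" dest="num_facet"/>
```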






Re: [JENKINS] Lucene-Solr-4.x-Windows (64bit/jdk1.7.0_45) - Build # 3309 - Failure!

2013-10-23 Thread Mark Miller
My fault, updated jar checksum coming.

- Mark

On Oct 23, 2013, at 5:51 PM, Policeman Jenkins Server <jenk...@thetaphi.de> 
wrote:

 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Windows/3309/
 Java: 64bit/jdk1.7.0_45 -XX:-UseCompressedOops -XX:+UseG1GC
 
 All tests passed
 
 Build Log:
 [...truncated 23137 lines...]
 check-licenses:
 [echo] License check under: 
 C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\solr
 [licenses] MISSING sha1 checksum file for: 
 C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\solr\core\lib\protobuf-java-2.5.0.jar
 [licenses] EXPECTED sha1 checksum file : 
 C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\solr\licenses\protobuf-java-2.5.0.jar.sha1
 
 [...truncated 1 lines...]
 BUILD FAILED
 C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\build.xml:428: The 
 following error occurred while executing this line:
 C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\build.xml:67: The 
 following error occurred while executing this line:
 C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\solr\build.xml:254: 
 The following error occurred while executing this line:
 C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\lucene\tools\custom-tasks.xml:62:
  License check failed. Check the logs.
 
 Total time: 95 minutes 35 seconds
 Build step 'Invoke Ant' marked build as failure
 Description set: Java: 64bit/jdk1.7.0_45 -XX:-UseCompressedOops -XX:+UseG1GC
 Archiving artifacts
 Recording test results
 Email was triggered for: Failure
 Sending email for trigger: Failure
 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5383) Group Faceting does not play nice with Trie fields that use precisionStep

2013-10-23 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13803393#comment-13803393
 ] 

Hoss Man commented on SOLR-5383:


Steps to reproduce...

1) start up solr example

2) index a doc like so...
{code}
{id:test,num_tf:1234.567,num_ti:1234567,num_f:1234.567,num_i:1234567}
{code}

3) confirm correct results from simple faceting...
http://localhost:8983/solr/select?q=id:test&facet=true&facet.field=num_tf&facet.field=num_ti&facet.field=num_f&facet.field=num_i
{code}
<lst name="facet_counts">
  <lst name="facet_queries"/>
  <lst name="facet_fields">
    <lst name="num_tf">
      <int name="1234.567">1</int>
    </lst>
    <lst name="num_ti">
      <int name="1234567">1</int>
    </lst>
    <lst name="num_f">
      <int name="1234.567">1</int>
    </lst>
    <lst name="num_i">
      <int name="1234567">1</int>
    </lst>
  </lst>
  <lst name="facet_dates"/>
  <lst name="facet_ranges"/>
</lst>
{code}

4) confirm that the _tf and _ti fields generate incorrect results when using 
group.facet...
http://localhost:8983/solr/select?q=id:test&facet=true&facet.field=num_tf&facet.field=num_ti&facet.field=num_f&facet.field=num_i&group=true&group.field=id&group.facet=true
{code}
<lst name="facet_counts">
  <lst name="facet_queries"/>
  <lst name="facet_fields">
    <lst name="num_tf">
      <int name="1234.5625">1</int>
      <int name="1234.567">0</int>
    </lst>
    <lst name="num_ti">
      <int name="1234432">1</int>
      <int name="1234567">0</int>
    </lst>
    <lst name="num_f">
      <int name="1234.567">1</int>
    </lst>
    <lst name="num_i">
      <int name="1234567">1</int>
    </lst>
  </lst>
  <lst name="facet_dates"/>
  <lst name="facet_ranges"/>
</lst>
{code}

 Group Faceting does not play nice with Trie fields that use precisionStep
 

 Key: SOLR-5383
 URL: https://issues.apache.org/jira/browse/SOLR-5383
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man

 (Initially reported by DavidBoychuck on IRC & solr-user)
 Field faceting normally works fine on Trie fields, even when precisionStep is 
 used -- but when you activate group faceting, then the artificially injected 
 terms in the trie fields start surfacing as facet values.
 ---
 Workaround is to use precisionStep=0 for the fields you want to use group 
 faceting on, or copyField between a precisionStep=0 field for group 
 faceting and another field with multiple precisions for the improved range 
 query performance.






[jira] [Commented] (SOLR-5382) Update to Hadoop 2.2 GA release.

2013-10-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13803441#comment-13803441
 ] 

ASF subversion and git services commented on SOLR-5382:
---

Commit 1535195 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1535195 ]

SOLR-5382: Update protobuf checksum

 Update to Hadoop 2.2 GA release.
 

 Key: SOLR-5382
 URL: https://issues.apache.org/jira/browse/SOLR-5382
 Project: Solr
  Issue Type: Task
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 4.6, 5.0








