[jira] [Updated] (LUCENE-5425) Make creation of FixedBitSet in FacetsCollector overridable

2014-02-05 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera updated LUCENE-5425:
---

Attachment: LUCENE-5425.patch

* I implemented FixedBitSet.iterator() to return what I think is a more 
optimized version instead of OpenBitSetIterator. I also saved two additions in 
nextSetBit (and in the iterator). I think we may want to commit those two 
changes irrespective of whether we cut over to a general DocIdSet in 
MatchingDocs.

I then reviewed the patch more carefully and I noticed a couple of issues, all 
fixed in this patch:

* FacetsCollector.createHitSet() returned a MutableDocIdSet which internally 
uses OpenBitSet rather than FixedBitSet. This affects add() too, especially 
since it used set() and not fastSet(). So I modified it to use FixedBitSet, as 
the number of bits is known in advance.

* I moved MutableDocIdSet inside FacetsCollector and changed it to not extend 
DocIdSet, but rather expose two methods: add() and getDocs(). The latter 
returns a DocIdSet.
** That way, MatchingDocs declares DocIdSet rather than MutableDocIdSet, which 
makes more sense since we don't want users of MatchingDocs to be able to 
modify the doc id set.

* I noticed many places in the code still included a {{++doc}} even though it's 
not needed anymore, so I removed them.
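The kind of nextSetBit discussed above can be sketched in a simplified, 
self-contained form. FixedBits below is a hypothetical stand-in for 
illustration only, not Lucene's actual FixedBitSet implementation:

```java
// Simplified bit set illustrating a nextSetBit that indexes the word
// array directly, avoiding extra per-call arithmetic.
class FixedBits {
    private final long[] words;

    FixedBits(int numBits) {
        this.words = new long[(numBits + 63) >>> 6];
    }

    void set(int bit) {
        words[bit >>> 6] |= 1L << (bit & 63);
    }

    /** Returns the first set bit at or after {@code index}, or -1 if none. */
    int nextSetBit(int index) {
        int i = index >>> 6;                   // word containing 'index'
        long word = words[i] >>> (index & 63); // drop bits below 'index'
        if (word != 0) {
            return index + Long.numberOfTrailingZeros(word);
        }
        while (++i < words.length) {           // scan the remaining words
            word = words[i];
            if (word != 0) {
                return (i << 6) + Long.numberOfTrailingZeros(word);
            }
        }
        return -1;
    }
}
```

An iterator over such a set would simply call nextSetBit(doc + 1) until it 
returns -1.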

I wonder if the 10% loss that Mike saw was related to both the usage of 
OpenBitSet (which affects collection) and OpenBitSetIterator (which affects 
accumulation), and how this patch performs vs. trunk. I will try to set up my 
environment to run the facets benchmark, but Mike, if you can repeat the test 
w/ this patch and post the results, that would be great.

 Make creation of FixedBitSet in FacetsCollector overridable
 ---

 Key: LUCENE-5425
 URL: https://issues.apache.org/jira/browse/LUCENE-5425
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/facet
Affects Versions: 4.6
Reporter: John Wang
 Attachments: LUCENE-5425.patch, facetscollector.patch, 
 facetscollector.patch, fixbitset.patch


 In FacetsCollector, the bits in MatchingDocs are allocated per query. For 
 large indexes where maxDoc is large, creating a bitset of maxDoc bits is 
 expensive and generates a lot of garbage.
 The attached patch makes this allocation customizable while maintaining the 
 current behavior.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5530) SolrJ NoOpResponseParser

2014-02-05 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-5530:


Attachment: SOLR-5530.patch

Thanks Vitaliy!

The only change in this patch is that I have combined both the tests into a 
single test class called NoOpResponseParserTest.

 SolrJ NoOpResponseParser
 

 Key: SOLR-5530
 URL: https://issues.apache.org/jira/browse/SOLR-5530
 Project: Solr
  Issue Type: New Feature
  Components: clients - java
Reporter: Upayavira
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Attachments: PATCH-5530.txt, SOLR-5530.patch, SOLR-5530.patch


 If you want the raw response string out of SolrJ, the advice seems to be to 
 just use an HttpClient directly. 
 However, sometimes you may have a lot of SolrJ infrastructure already in 
 place to build out queries, etc, so it would be much simpler to just use 
 SolrJ to do the work.
 This patch offers a NoOpResponseParser, which simply puts the entire response 
 into an entry in a NamedList.
 Because the response isn't parsed into a QueryResponse, usage is slightly 
 different:
 HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");
 SolrQuery query = new SolrQuery("*:*");
 QueryRequest req = new QueryRequest(query);
 server.setParser(new NoOpResponseParser());
 NamedList<Object> resp = server.request(req);
 String responseString = (String) resp.get("response");






[jira] [Commented] (LUCENE-5434) NRT support for file systems that do not have delete on last close or cannot delete while referenced semantics.

2014-02-05 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13891961#comment-13891961
 ] 

Michael McCandless commented on LUCENE-5434:


Thanks Mark, that looks great.

I think we should modify existing test(s) to confirm IW never even attempts to 
delete a still-open file when only NRT readers are being opened/closed.

E.g. maybe we could add an "acts like HDFS" mode to MockDirectoryWrapper, where 
if a still-open file is deleted it then refuses to allow any further operations 
against that file.  Or, to make debugging easier, just have MDW throw an 
unchecked exception when an attempt is made to delete a still-open file?
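The "throw on deleting a still-open file" idea amounts to tracking open counts 
per file name. A self-contained sketch under assumed names (OpenFileTracker is 
made up here; it is not MockDirectoryWrapper's real API):

```java
import java.util.HashMap;
import java.util.Map;

// Tracks how many handles are open per file name and refuses deletes
// while any handle is still open, mimicking HDFS-like semantics.
class OpenFileTracker {
    private final Map<String, Integer> openCounts = new HashMap<>();

    void onOpen(String name) {
        openCounts.merge(name, 1, Integer::sum);
    }

    void onClose(String name) {
        openCounts.merge(name, -1, Integer::sum);
        if (openCounts.get(name) <= 0) {
            openCounts.remove(name);
        }
    }

    /** Throws if the file is still open; otherwise the delete may proceed. */
    void onDelete(String name) {
        if (openCounts.containsKey(name)) {
            throw new IllegalStateException(
                "cannot delete still-open file: " + name);
        }
    }
}
```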

 NRT support for file systems that do not have delete on last close or cannot 
 delete while referenced semantics.
 --

 Key: LUCENE-5434
 URL: https://issues.apache.org/jira/browse/LUCENE-5434
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.7

 Attachments: LUCENE-5434.patch


 See SOLR-5693 and our HDFS support - for something like HDFS to work with 
 NRT, we need an ability for near realtime readers to hold references to their 
 files to prevent deletes.






[jira] [Commented] (SOLR-5623) Better diagnosis of RuntimeExceptions in analysis

2014-02-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13891973#comment-13891973
 ] 

ASF subversion and git services commented on SOLR-5623:
---

Commit 1564700 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1564700 ]

SOLR-5623: Use root locale in String.format and do not wrap SolrExceptions

 Better diagnosis of RuntimeExceptions in analysis
 -

 Key: SOLR-5623
 URL: https://issues.apache.org/jira/browse/SOLR-5623
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.6.1
Reporter: Benson Margulies
Assignee: Benson Margulies
 Fix For: 5.0, 4.7

 Attachments: SOLR-5623-nowrap.patch, SOLR-5623-nowrap.patch


 If an analysis component (tokenizer, filter, etc.) really gets into a hissy 
 fit and throws a RuntimeException, the resulting log traffic is less than 
 informative, lacking any pointer to the doc under discussion (in the doc 
 case). It would be better if there were a try/catch shortstop that logged 
 this more informatively.






[jira] [Commented] (SOLR-5623) Better diagnosis of RuntimeExceptions in analysis

2014-02-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13891976#comment-13891976
 ] 

ASF subversion and git services commented on SOLR-5623:
---

Commit 1564701 from sha...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1564701 ]

SOLR-5623: Use root locale in String.format and do not wrap SolrExceptions

 Better diagnosis of RuntimeExceptions in analysis
 -

 Key: SOLR-5623
 URL: https://issues.apache.org/jira/browse/SOLR-5623
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.6.1
Reporter: Benson Margulies
Assignee: Benson Margulies
 Fix For: 5.0, 4.7

 Attachments: SOLR-5623-nowrap.patch, SOLR-5623-nowrap.patch


 If an analysis component (tokenizer, filter, etc.) really gets into a hissy 
 fit and throws a RuntimeException, the resulting log traffic is less than 
 informative, lacking any pointer to the doc under discussion (in the doc 
 case). It would be better if there were a try/catch shortstop that logged 
 this more informatively.






[jira] [Commented] (LUCENE-5404) Add support to get number of entries a Suggester Lookup was built with and minor refactorings

2014-02-05 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13891977#comment-13891977
 ] 

Michael McCandless commented on LUCENE-5404:


+1, patch looks great, thanks Areek.

{quote}
On a different note, I also see the different options that can be fed in the 
lookup() for different suggesters, I was thinking an object (LookupOptions?) 
can be passed instead (which will encapsulate all the params). I think this 
will make the API 'cleaner' and allow suggester specific options to be passed 
by just using the Lookup class, Thoughts? (I will probably just do this 
separately)
{quote}

I think maybe each suggester should just have its own lookup method, taking its 
additional params?  I.e., I'm not sure how consistently each one will have 
options that the others would want to use.  E.g. AnalyzingInfixSuggester 
accepts two additional booleans: allTermsRequired and doHighlight.  But I don't 
think other suggesters can support these options...
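The two API shapes under discussion might be sketched like this (hypothetical 
types for illustration; none of these are Lucene's actual suggester classes):

```java
import java.util.Arrays;
import java.util.List;

// Shape 1: a shared options bag passed to every suggester's lookup().
// Suggester-specific flags end up mixed in with the common ones.
class LookupOptions {
    String key;
    int num;
    boolean allTermsRequired; // only meaningful for an infix suggester
    boolean doHighlight;      // only meaningful for an infix suggester
}

// Shape 2: each suggester declares its own lookup overload with exactly
// the parameters it supports.
interface InfixLookup {
    List<String> lookup(String key, int num,
                        boolean allTermsRequired, boolean doHighlight);
}
```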

 Add support to get number of entries a Suggester Lookup was built with and 
 minor refactorings
 -

 Key: LUCENE-5404
 URL: https://issues.apache.org/jira/browse/LUCENE-5404
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Reporter: Areek Zillur
 Fix For: 5.0, 4.7

 Attachments: LUCENE-5404.patch, LUCENE-5404.patch, LUCENE-5404.patch, 
 LUCENE-5404.patch


 It would be nice to be able to tell the number of entries a suggester lookup 
 was built with. This would let components using lookups keep some stats 
 regarding how many entries were used to build a lookup.
 Additionally, Dictionary could use InputIterator rather than 
 BytesRefIterator, as most of the implementations now use it.






[jira] [Commented] (SOLR-5530) SolrJ NoOpResponseParser

2014-02-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13891990#comment-13891990
 ] 

ASF subversion and git services commented on SOLR-5530:
---

Commit 1564709 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1564709 ]

SOLR-5530: Added a NoOpResponseParser for SolrJ which puts the entire raw 
response into an entry in the NamedList

 SolrJ NoOpResponseParser
 

 Key: SOLR-5530
 URL: https://issues.apache.org/jira/browse/SOLR-5530
 Project: Solr
  Issue Type: New Feature
  Components: clients - java
Reporter: Upayavira
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Attachments: PATCH-5530.txt, SOLR-5530.patch, SOLR-5530.patch


 If you want the raw response string out of SolrJ, the advice seems to be to 
 just use an HttpClient directly. 
 However, sometimes you may have a lot of SolrJ infrastructure already in 
 place to build out queries, etc, so it would be much simpler to just use 
 SolrJ to do the work.
 This patch offers a NoOpResponseParser, which simply puts the entire response 
 into an entry in a NamedList.
 Because the response isn't parsed into a QueryResponse, usage is slightly 
 different:
 HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");
 SolrQuery query = new SolrQuery("*:*");
 QueryRequest req = new QueryRequest(query);
 server.setParser(new NoOpResponseParser());
 NamedList<Object> resp = server.request(req);
 String responseString = (String) resp.get("response");






[jira] [Commented] (SOLR-5530) SolrJ NoOpResponseParser

2014-02-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13891994#comment-13891994
 ] 

ASF subversion and git services commented on SOLR-5530:
---

Commit 1564710 from sha...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1564710 ]

SOLR-5530: Added a NoOpResponseParser for SolrJ which puts the entire raw 
response into an entry in the NamedList

 SolrJ NoOpResponseParser
 

 Key: SOLR-5530
 URL: https://issues.apache.org/jira/browse/SOLR-5530
 Project: Solr
  Issue Type: New Feature
  Components: clients - java
Reporter: Upayavira
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Attachments: PATCH-5530.txt, SOLR-5530.patch, SOLR-5530.patch


 If you want the raw response string out of SolrJ, the advice seems to be to 
 just use an HttpClient directly. 
 However, sometimes you may have a lot of SolrJ infrastructure already in 
 place to build out queries, etc, so it would be much simpler to just use 
 SolrJ to do the work.
 This patch offers a NoOpResponseParser, which simply puts the entire response 
 into an entry in a NamedList.
 Because the response isn't parsed into a QueryResponse, usage is slightly 
 different:
 HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");
 SolrQuery query = new SolrQuery("*:*");
 QueryRequest req = new QueryRequest(query);
 server.setParser(new NoOpResponseParser());
 NamedList<Object> resp = server.request(req);
 String responseString = (String) resp.get("response");






[jira] [Resolved] (SOLR-5530) SolrJ NoOpResponseParser

2014-02-05 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-5530.
-

   Resolution: Fixed
Fix Version/s: 4.7
   5.0

Thanks Upayavira and Vitaliy!

 SolrJ NoOpResponseParser
 

 Key: SOLR-5530
 URL: https://issues.apache.org/jira/browse/SOLR-5530
 Project: Solr
  Issue Type: New Feature
  Components: clients - java
Reporter: Upayavira
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 5.0, 4.7

 Attachments: PATCH-5530.txt, SOLR-5530.patch, SOLR-5530.patch


 If you want the raw response string out of SolrJ, the advice seems to be to 
 just use an HttpClient directly. 
 However, sometimes you may have a lot of SolrJ infrastructure already in 
 place to build out queries, etc, so it would be much simpler to just use 
 SolrJ to do the work.
 This patch offers a NoOpResponseParser, which simply puts the entire response 
 into an entry in a NamedList.
 Because the response isn't parsed into a QueryResponse, usage is slightly 
 different:
 HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");
 SolrQuery query = new SolrQuery("*:*");
 QueryRequest req = new QueryRequest(query);
 server.setParser(new NoOpResponseParser());
 NamedList<Object> resp = server.request(req);
 String responseString = (String) resp.get("response");






[jira] [Commented] (SOLR-5379) Query-time multi-word synonym expansion

2014-02-05 Thread Eric Bus (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892010#comment-13892010
 ] 

Eric Bus commented on SOLR-5379:


Has anyone modified this patch to work on 4.6.1? I tried to do a manual merge 
of the second patch, but a lot has changed in the SolrQueryParserBase.java 
file.

 Query-time multi-word synonym expansion
 ---

 Key: SOLR-5379
 URL: https://issues.apache.org/jira/browse/SOLR-5379
 Project: Solr
  Issue Type: Improvement
  Components: query parsers
Reporter: Tien Nguyen Manh
  Labels: multi-word, queryparser, synonym
 Fix For: 4.7

 Attachments: quoted.patch, synonym-expander.patch


 While dealing with synonyms at query time, Solr fails to work with multi-word 
 synonyms for a couple of reasons:
 - First, the Lucene query parser tokenizes the user query by spaces, so it 
 splits a multi-word term into separate terms before feeding them to the 
 synonym filter; the synonym filter therefore can't recognize the multi-word 
 term to expand it.
 - Second, if the synonym filter expands into multiple terms that contain a 
 multi-word synonym, SolrQueryParserBase currently uses MultiPhraseQuery to 
 handle synonyms, but MultiPhraseQuery doesn't work with terms that have 
 different numbers of words.
 For the first, we can quote all multi-word synonyms in the user query so that 
 the Lucene query parser doesn't split them. There is a JIRA issue related to 
 this: https://issues.apache.org/jira/browse/LUCENE-2605.
 For the second, we can replace MultiPhraseQuery with an appropriate 
 BooleanQuery of SHOULD clauses containing multiple PhraseQuery instances when 
 the token stream has multi-word synonyms.
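The proposed rewrite, one SHOULD phrase clause per synonym variant, can be 
illustrated with a string-based sketch (a hypothetical helper for this digest, 
not the actual SolrQueryParserBase code):

```java
import java.util.List;
import java.util.StringJoiner;

class SynonymQuerySketch {
    /**
     * Given synonym variants of one user phrase (e.g. "sea biscuit" and
     * "seabiscuit"), emit one quoted SHOULD clause per variant, so variants
     * with different word counts can coexist in a single boolean query.
     */
    static String toBooleanOfPhrases(List<String> variants) {
        StringJoiner should = new StringJoiner(" OR ", "(", ")");
        for (String v : variants) {
            should.add("\"" + v + "\"");
        }
        return should.toString();
    }
}
```

A MultiPhraseQuery, by contrast, requires the same position count in every 
alternative, which is exactly what breaks for variants of different lengths.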






[jira] [Commented] (SOLR-5530) SolrJ NoOpResponseParser

2014-02-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892013#comment-13892013
 ] 

ASF subversion and git services commented on SOLR-5530:
---

Commit 1564712 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1564712 ]

SOLR-5530: Fix forbidden-api-check failure

 SolrJ NoOpResponseParser
 

 Key: SOLR-5530
 URL: https://issues.apache.org/jira/browse/SOLR-5530
 Project: Solr
  Issue Type: New Feature
  Components: clients - java
Reporter: Upayavira
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 5.0, 4.7

 Attachments: PATCH-5530.txt, SOLR-5530.patch, SOLR-5530.patch


 If you want the raw response string out of SolrJ, the advice seems to be to 
 just use an HttpClient directly. 
 However, sometimes you may have a lot of SolrJ infrastructure already in 
 place to build out queries, etc, so it would be much simpler to just use 
 SolrJ to do the work.
 This patch offers a NoOpResponseParser, which simply puts the entire response 
 into an entry in a NamedList.
 Because the response isn't parsed into a QueryResponse, usage is slightly 
 different:
 HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");
 SolrQuery query = new SolrQuery("*:*");
 QueryRequest req = new QueryRequest(query);
 server.setParser(new NoOpResponseParser());
 NamedList<Object> resp = server.request(req);
 String responseString = (String) resp.get("response");






[jira] [Commented] (SOLR-5530) SolrJ NoOpResponseParser

2014-02-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892014#comment-13892014
 ] 

ASF subversion and git services commented on SOLR-5530:
---

Commit 1564713 from sha...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1564713 ]

SOLR-5530: Fix forbidden-api-check failure

 SolrJ NoOpResponseParser
 

 Key: SOLR-5530
 URL: https://issues.apache.org/jira/browse/SOLR-5530
 Project: Solr
  Issue Type: New Feature
  Components: clients - java
Reporter: Upayavira
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 5.0, 4.7

 Attachments: PATCH-5530.txt, SOLR-5530.patch, SOLR-5530.patch


 If you want the raw response string out of SolrJ, the advice seems to be to 
 just use an HttpClient directly. 
 However, sometimes you may have a lot of SolrJ infrastructure already in 
 place to build out queries, etc, so it would be much simpler to just use 
 SolrJ to do the work.
 This patch offers a NoOpResponseParser, which simply puts the entire response 
 into an entry in a NamedList.
 Because the response isn't parsed into a QueryResponse, usage is slightly 
 different:
 HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");
 SolrQuery query = new SolrQuery("*:*");
 QueryRequest req = new QueryRequest(query);
 server.setParser(new NoOpResponseParser());
 NamedList<Object> resp = server.request(req);
 String responseString = (String) resp.get("response");






[jira] [Commented] (SOLR-5530) SolrJ NoOpResponseParser

2014-02-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892030#comment-13892030
 ] 

ASF subversion and git services commented on SOLR-5530:
---

Commit 1564720 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1564720 ]

SOLR-5530: Remove empty throws clause

 SolrJ NoOpResponseParser
 

 Key: SOLR-5530
 URL: https://issues.apache.org/jira/browse/SOLR-5530
 Project: Solr
  Issue Type: New Feature
  Components: clients - java
Reporter: Upayavira
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 5.0, 4.7

 Attachments: PATCH-5530.txt, SOLR-5530.patch, SOLR-5530.patch


 If you want the raw response string out of SolrJ, the advice seems to be to 
 just use an HttpClient directly. 
 However, sometimes you may have a lot of SolrJ infrastructure already in 
 place to build out queries, etc, so it would be much simpler to just use 
 SolrJ to do the work.
 This patch offers a NoOpResponseParser, which simply puts the entire response 
 into an entry in a NamedList.
 Because the response isn't parsed into a QueryResponse, usage is slightly 
 different:
 HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");
 SolrQuery query = new SolrQuery("*:*");
 QueryRequest req = new QueryRequest(query);
 server.setParser(new NoOpResponseParser());
 NamedList<Object> resp = server.request(req);
 String responseString = (String) resp.get("response");






[jira] [Commented] (SOLR-5530) SolrJ NoOpResponseParser

2014-02-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892032#comment-13892032
 ] 

ASF subversion and git services commented on SOLR-5530:
---

Commit 1564722 from sha...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1564722 ]

SOLR-5530: Remove empty throws clause

 SolrJ NoOpResponseParser
 

 Key: SOLR-5530
 URL: https://issues.apache.org/jira/browse/SOLR-5530
 Project: Solr
  Issue Type: New Feature
  Components: clients - java
Reporter: Upayavira
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 5.0, 4.7

 Attachments: PATCH-5530.txt, SOLR-5530.patch, SOLR-5530.patch


 If you want the raw response string out of SolrJ, the advice seems to be to 
 just use an HttpClient directly. 
 However, sometimes you may have a lot of SolrJ infrastructure already in 
 place to build out queries, etc, so it would be much simpler to just use 
 SolrJ to do the work.
 This patch offers a NoOpResponseParser, which simply puts the entire response 
 into an entry in a NamedList.
 Because the response isn't parsed into a QueryResponse, usage is slightly 
 different:
 HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");
 SolrQuery query = new SolrQuery("*:*");
 QueryRequest req = new QueryRequest(query);
 server.setParser(new NoOpResponseParser());
 NamedList<Object> resp = server.request(req);
 String responseString = (String) resp.get("response");






[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.7.0_60-ea-b04) - Build # 9263 - Still Failing!

2014-02-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/9263/
Java: 32bit/jdk1.7.0_60-ea-b04 -server -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 50473 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:459: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:398: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/extra-targets.xml:87: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/extra-targets.xml:185: The 
following files are missing svn:eol-style (or binary svn:mime-type):
* ./solr/core/src/test-files/solr/analysisconfs/analysis-err-schema.xml
* 
./solr/core/src/test/org/apache/solr/analysis/ThrowingMockTokenFilterFactory.java
* ./solr/core/src/test/org/apache/solr/update/AnalysisErrorHandlingTest.java

Total time: 58 minutes 13 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 32bit/jdk1.7.0_60-ea-b04 -server -XX:+UseG1GC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Updated] (SOLR-5598) LanguageIdentifierUpdateProcessor ignores all but the first value of multiValued string fields

2014-02-05 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-5598:


Attachment: SOLR-5598.patch

Thanks Vitaliy!

This patch removes the empty exception javadoc line (it causes "Javadoc: 
Description expected after this reference" errors). I also removed content 
from the log warning.

In the future, please run "ant precommit" from the checkout directory, which 
will catch usage of forbidden APIs as well as javadoc errors.

 LanguageIdentifierUpdateProcessor ignores all but the first value of 
 multiValued string fields
 --

 Key: SOLR-5598
 URL: https://issues.apache.org/jira/browse/SOLR-5598
 Project: Solr
  Issue Type: Bug
  Components: contrib - LangId
Affects Versions: 4.5.1
Reporter: Andreas Hubold
Assignee: Shalin Shekhar Mangar
 Fix For: 4.7

 Attachments: SOLR-5598.patch, SOLR-5598.patch, SOLR-5598.patch


 The LanguageIdentifierUpdateProcessor just uses the first value of the 
 multiValued field to detect the language. 
 Method {{concatFields}} calls {{doc.getFieldValue(fieldName)}} but should 
 instead iterate over {{doc.getFieldValues(fieldName)}}.
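The fix amounts to concatenating every value of the field rather than only the 
first. A simplified stand-alone version (the Map below is a hypothetical 
stand-in for SolrInputDocument, and concatField for the real concatFields):

```java
import java.util.List;
import java.util.Map;

class ConcatFieldsSketch {
    /** Joins all values of a multiValued field, not just the first one. */
    static String concatField(Map<String, List<Object>> doc, String fieldName) {
        StringBuilder sb = new StringBuilder();
        // ~ doc.getFieldValues(fieldName) instead of doc.getFieldValue(...)
        List<Object> values = doc.get(fieldName);
        if (values != null) {
            for (Object v : values) {
                if (v instanceof String) {
                    if (sb.length() > 0) sb.append(' ');
                    sb.append((String) v);
                }
            }
        }
        return sb.toString();
    }
}
```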






[jira] [Commented] (SOLR-5302) Analytics Component

2014-02-05 Thread Mehmet Erkek (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892040#comment-13892040
 ] 

Mehmet Erkek commented on SOLR-5302:


Thanks Shawn, nice answer. I think we need this component sooner. In that case, 
my question is: is there anything we can do to help get this feature included 
in one of the 4.x versions?

 Analytics Component
 ---

 Key: SOLR-5302
 URL: https://issues.apache.org/jira/browse/SOLR-5302
 Project: Solr
  Issue Type: New Feature
Reporter: Steven Bower
Assignee: Erick Erickson
 Attachments: SOLR-5302.patch, SOLR-5302.patch, SOLR-5302.patch, 
 SOLR-5302.patch, Search Analytics Component.pdf, Statistical Expressions.pdf, 
 solr_analytics-2013.10.04-2.patch


 This ticket is to track a replacement for the StatsComponent. The 
 AnalyticsComponent supports the following features:
 * All functionality of StatsComponent (SOLR-4499)
 * Field Faceting (SOLR-3435)
 ** Support for limit
 ** Sorting (by bucket name or any stat in the bucket)
 ** Support for offset
 * Range Faceting
 ** Supports all options of standard range faceting
 * Query Faceting (SOLR-2925)
 * Ability to use overall/field facet statistics as input to range/query 
 faceting (i.e. calc min/max date and then facet over that range)
 * Support for more complex aggregate/mapping operations (SOLR-1622)
 ** Aggregations: min, max, sum, sum-of-square, count, missing, stddev, mean, 
 median, percentiles
 ** Operations: negation, abs, add, multiply, divide, power, log, date math, 
 string reversal, string concat
 ** Easily pluggable framework to add additional operations
 * New / cleaner output format
 Outstanding Issues:
 * Multi-value field support for stats (supported for faceting)
 * Multi-shard support (may not be possible for some operations, eg median)






[jira] [Commented] (SOLR-5598) LanguageIdentifierUpdateProcessor ignores all but the first value of multiValued string fields

2014-02-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892041#comment-13892041
 ] 

ASF subversion and git services commented on SOLR-5598:
---

Commit 1564732 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1564732 ]

SOLR-5598: LanguageIdentifierUpdateProcessor ignores all but the first value of 
multiValued string fields

 LanguageIdentifierUpdateProcessor ignores all but the first value of 
 multiValued string fields
 --

 Key: SOLR-5598
 URL: https://issues.apache.org/jira/browse/SOLR-5598
 Project: Solr
  Issue Type: Bug
  Components: contrib - LangId
Affects Versions: 4.5.1
Reporter: Andreas Hubold
Assignee: Shalin Shekhar Mangar
 Fix For: 4.7

 Attachments: SOLR-5598.patch, SOLR-5598.patch, SOLR-5598.patch


 The LanguageIdentifierUpdateProcessor just uses the first value of the 
 multiValued field to detect the language. 
 Method {{concatFields}} calls {{doc.getFieldValue(fieldName)}} but should 
 instead iterate over {{doc.getFieldValues(fieldName)}}.






[jira] [Commented] (SOLR-5598) LanguageIdentifierUpdateProcessor ignores all but the first value of multiValued string fields

2014-02-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892042#comment-13892042
 ] 

ASF subversion and git services commented on SOLR-5598:
---

Commit 1564733 from sha...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1564733 ]

SOLR-5598: LanguageIdentifierUpdateProcessor ignores all but the first value of 
multiValued string fields







[jira] [Updated] (SOLR-5426) org.apache.solr.common.SolrException; org.apache.solr.common.SolrException: org.apache.lucene.search.highlight.InvalidTokenOffsetsException: Token 0 exceeds length of prov

2014-02-05 Thread Arun Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Kumar updated SOLR-5426:
-

Attachment: OffsetLimitTokenFilter.java.patch

Disregard my previous comments. Even if a string exceeds the max offset limit, 
it shouldn't blow up with an exception. On further investigation I found that 
leftover token state from a larger string, whose token count exceeds the max 
offset limit, carries over to the next available string in the loop. Attached a 
patch file for the fix; it resolves the issue.
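
The carried-over state can be sketched in isolation; this hypothetical counter only illustrates the symptom described above and is not the actual OffsetLimitTokenFilter source (the attached patch is the authoritative fix):

```java
// Hypothetical sketch of a token filter that counts tokens against a
// max-offset limit but keeps its counter across input strings; leftover
// state from a long string then wrongly limits the next string. This is an
// illustration only, not the real OffsetLimitTokenFilter code.
public class OffsetLimitSketch {
    private final int maxTokens;
    private int consumed; // state that must be cleared between strings

    public OffsetLimitSketch(int maxTokens) { this.maxTokens = maxTokens; }

    // Returns true while the current stream is still under the limit.
    public boolean accept() { return consumed++ < maxTokens; }

    // The shape of the fix: clear per-stream state before the next string.
    public void reset() { consumed = 0; }

    public static void main(String[] args) {
        OffsetLimitSketch f = new OffsetLimitSketch(2);
        f.accept(); f.accept(); f.accept();  // first (long) string exhausts the limit
        System.out.println(f.accept());      // false: stale state carried over
        f.reset();
        System.out.println(f.accept());      // true: next string starts fresh
    }
}
```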

 org.apache.solr.common.SolrException; org.apache.solr.common.SolrException: 
 org.apache.lucene.search.highlight.InvalidTokenOffsetsException: Token 0 
 exceeds length of provided text sized 840
 --

 Key: SOLR-5426
 URL: https://issues.apache.org/jira/browse/SOLR-5426
 Project: Solr
  Issue Type: Bug
  Components: highlighter
Affects Versions: 4.4, 4.5.1
Reporter: Nikolay
Priority: Minor
 Attachments: OffsetLimitTokenFilter.java.patch, highlighter.zip


 Highlighter does not work correctly on test-data.
 I added index- and config- files (see attached highlighter.zip) for 
 reproducing this issue.
 Everything works fine if I search without highlighting:
 http://localhost:8983/solr/global/select?q=aa&wt=json&indent=true
 But if search with highlighting: 
 http://localhost:8983/solr/global/select?q=aa&wt=json&indent=true&hl=true&hl.fl=*_stx&hl.simple.pre=em&hl.simple.post=%2Fem
 I get the error:
 ERROR - 2013-11-07 10:17:15.797; org.apache.solr.common.SolrException; 
 null:org.apache.solr.common.SolrException: 
 org.apache.lucene.search.highlight.InvalidTokenOffsetsException: Token 0 
 exceeds length of provided text sized 840
   at 
 org.apache.solr.highlight.DefaultSolrHighlighter.doHighlightingByHighlighter(DefaultSolrHighlighter.java:542)
   at 
 org.apache.solr.highlight.DefaultSolrHighlighter.doHighlighting(DefaultSolrHighlighter.java:414)
   at 
 org.apache.solr.handler.component.HighlightComponent.process(HighlightComponent.java:139)
   at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:208)
   at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1859)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:703)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:406)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:195)
   at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
   at 
 org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
   at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
   at 
 org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
   at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
   at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
   at 
 org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
   at 
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
   at 
 org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
   at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
   at 
 org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
   at 
 org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
   at 
 org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
   at org.eclipse.jetty.server.Server.handle(Server.java:368)
   at 
 org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
   at 
 org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
   at 
 org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
   at 
 org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
   at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
   at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
   at 
 org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
   at 
 org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
   at 
 

[jira] [Commented] (SOLR-5623) Better diagnosis of RuntimeExceptions in analysis

2014-02-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13892044#comment-13892044
 ] 

ASF subversion and git services commented on SOLR-5623:
---

Commit 1564737 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1564737 ]

SOLR-5623: Added svn:eol-style

 Better diagnosis of RuntimeExceptions in analysis
 -

 Key: SOLR-5623
 URL: https://issues.apache.org/jira/browse/SOLR-5623
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.6.1
Reporter: Benson Margulies
Assignee: Benson Margulies
 Fix For: 5.0, 4.7

 Attachments: SOLR-5623-nowrap.patch, SOLR-5623-nowrap.patch


 If an analysis component (tokenizer, filter, etc.) really gets into a hissy 
 fit and throws a RuntimeException, the resulting log traffic is less than 
 informative, lacking any pointer to the doc under discussion (in the doc 
 case). It would be better if there were a try/catch shortstop that logged 
 this more informatively.






[jira] [Commented] (SOLR-5623) Better diagnosis of RuntimeExceptions in analysis

2014-02-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13892045#comment-13892045
 ] 

ASF subversion and git services commented on SOLR-5623:
---

Commit 1564741 from sha...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1564741 ]

SOLR-5623: Added svn:eol-style







[jira] [Resolved] (SOLR-5598) LanguageIdentifierUpdateProcessor ignores all but the first value of multiValued string fields

2014-02-05 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-5598.
-

   Resolution: Fixed
Fix Version/s: 5.0

Thanks Andreas and Vitaliy!

 LanguageIdentifierUpdateProcessor ignores all but the first value of 
 multiValued string fields
 --

 Key: SOLR-5598
 URL: https://issues.apache.org/jira/browse/SOLR-5598
 Project: Solr
  Issue Type: Bug
  Components: contrib - LangId
Affects Versions: 4.5.1
Reporter: Andreas Hubold
Assignee: Shalin Shekhar Mangar
 Fix For: 5.0, 4.7

 Attachments: SOLR-5598.patch, SOLR-5598.patch, SOLR-5598.patch


 The LanguageIdentifierUpdateProcessor just uses the first value of the 
 multiValued field to detect the language. 
 Method {{concatFields}} calls {{doc.getFieldValue(fieldName)}} but should 
 instead iterate over {{doc.getFieldValues(fieldName)}}.






[jira] [Commented] (SOLR-5623) Better diagnosis of RuntimeExceptions in analysis

2014-02-05 Thread Benson Margulies (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13892049#comment-13892049
 ] 

Benson Margulies commented on SOLR-5623:


[~shalinmangar] Apparently I haven't learned to read the output of ant test 
very well, and fooled myself into believing that all was well. Thanks for 
cleaning up after me.








[jira] [Commented] (SOLR-5623) Better diagnosis of RuntimeExceptions in analysis

2014-02-05 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13892054#comment-13892054
 ] 

Shalin Shekhar Mangar commented on SOLR-5623:
-

No problem, it happens to all of us. I've been guilty of it more often than 
others, I think. Running the test suite is not enough; you need 'ant precommit' 
to pass as well.







Why is String.format[] forbidden?

2014-02-05 Thread Benson Margulies
Or, more specifically, what's the minimum JVM for 4.x versus 5.x? I
had the idea that even 4.x required 1.6.




Re: Why is String.format[] forbidden?

2014-02-05 Thread Michael McCandless
I think you must use the String.format that takes a Locale; otherwise it 
relies on the default Locale, which is dangerous.

4.x requires Java 1.6 and trunk (5.0) requires Java 1.7.
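
For illustration, the danger of the Locale-less overload is easy to see with a decimal separator (stdlib only; Locale.GERMANY is just a convenient stand-in for whatever default locale a user's JVM happens to run with):

```java
import java.util.Locale;

public class FormatLocaleDemo {
    public static void main(String[] args) {
        // Explicit locale: output is stable regardless of the JVM's default.
        System.out.println(String.format(Locale.ROOT, "%.2f", 1.5));    // 1.50
        // Locale-dependent: under a German default locale the separator
        // becomes a comma, silently changing machine-parsed output.
        System.out.println(String.format(Locale.GERMANY, "%.2f", 1.5)); // 1,50
    }
}
```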

Mike McCandless

http://blog.mikemccandless.com


On Wed, Feb 5, 2014 at 7:45 AM, Benson Margulies bimargul...@gmail.com wrote:
 Or, more specifically, what's the minimum JVM for 4.x versus 5.x? I
 had the idea that even 4.x required 1.6.

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org





RE: Why is String.format[] forbidden?

2014-02-05 Thread Uwe Schindler
Hi Benson,

If you use String#format without an explicit Locale, it's forbidden because it 
is not platform independent, which is a requirement for a library like Lucene. 
If you really know for sure that you want to use the default locale, add 
Locale.getDefault() explicitly. The forbidden-API checker mentions that fact 
when it reports the error.
Lucene 5 (trunk) is based on Java 7, Lucene 4 is based on Java 6.

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Benson Margulies [mailto:bimargul...@gmail.com]
 Sent: Wednesday, February 05, 2014 1:46 PM
 To: dev@lucene.apache.org
 Subject: Why is String.format[] forbidden?
 
 Or, more specifically, what's the minimum JVM for 4.x versus 5.x? I had the
 idea that even 4.x required 1.6.
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional
 commands, e-mail: dev-h...@lucene.apache.org





Re: Why is String.format[] forbidden?

2014-02-05 Thread Benson Margulies
OK, I get it.

On Wed, Feb 5, 2014 at 7:53 AM, Uwe Schindler u...@thetaphi.de wrote:
 Hi Benson,

 If you use String#format without an explicit Locale, its forbidden because 
 not platform independent, which is a requirement for an library like Lucene. 
 If you really know for sure that you want to use the default locale, add 
 Locale.getDefault() explicit. Forbidden checker mentions that fact when it 
 reports the error.
 Lucene 5 (trunk) is based on Java 7, Lucene 4 is based on Java 6.

 Uwe

 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de


 -Original Message-
 From: Benson Margulies [mailto:bimargul...@gmail.com]
 Sent: Wednesday, February 05, 2014 1:46 PM
 To: dev@lucene.apache.org
 Subject: Why is String.format[] forbidden?

 Or, more specifically, what's the minimum JVM for 4.x versus 5.x? I had the
 idea that even 4.x required 1.6.

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional
 commands, e-mail: dev-h...@lucene.apache.org


 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org





[jira] [Resolved] (SOLR-5308) Split all documents of a route key into another collection

2014-02-05 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-5308.
-

Resolution: Fixed

 Split all documents of a route key into another collection
 --

 Key: SOLR-5308
 URL: https://issues.apache.org/jira/browse/SOLR-5308
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: 5.0, 4.7

 Attachments: SOLR-5308-bitsep-fix.patch, SOLR-5308-fixes.patch, 
 SOLR-5308.patch, SOLR-5308.patch, SOLR-5308.patch, SOLR-5308.patch, 
 SOLR-5308.patch, SOLR-5308.patch, SOLR-5308.patch, SOLR-5308.patch


 Enable SolrCloud users to split out a set of documents from a source 
 collection into another collection.
 This will be useful in multi-tenant environments. This feature will make it 
 possible to split a tenant out of a collection and put them into their own 
 collection which can be scaled separately.






[jira] [Updated] (SOLR-5146) Figure out what it would take for lazily-loaded cores to play nice with SolrCloud

2014-02-05 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-5146:


  Component/s: SolrCloud
Fix Version/s: 5.0
 Assignee: Shalin Shekhar Mangar

This is next. Exciting times ahead :-)

 Figure out what it would take for lazily-loaded cores to play nice with 
 SolrCloud
 -

 Key: SOLR-5146
 URL: https://issues.apache.org/jira/browse/SOLR-5146
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.5, 5.0
Reporter: Erick Erickson
Assignee: Shalin Shekhar Mangar
 Fix For: 5.0


 The whole lazy-load core thing was implemented with non-SolrCloud use-cases 
 in mind. There are several user-list threads that ask about using lazy cores 
 with SolrCloud, especially in multi-tenant use-cases.
 This is a marker JIRA to investigate what it would take to make lazy-load 
 cores play nice with SolrCloud. It's especially interesting how this all 
 works with shards, replicas, leader election, recovery, etc.
 NOTE: This is pretty much totally unexplored territory. It may be that a few 
 trivial modifications are all that's needed. OTOH, It may be that we'd have 
 to rip apart SolrCloud to handle this case. Until someone dives into the 
 code, we don't know.






[jira] [Assigned] (SOLR-5659) Ignore or throw proper error message for bad delete containing bad composite ID

2014-02-05 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar reassigned SOLR-5659:
---

Assignee: Shalin Shekhar Mangar

 Ignore or throw proper error message for bad delete containing bad composite 
 ID
 ---

 Key: SOLR-5659
 URL: https://issues.apache.org/jira/browse/SOLR-5659
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 5.0
 Environment: 5.0-SNAPSHOT 1480985:1559676M - markus - 2014-01-20 
 13:48:08
Reporter: Markus Jelsma
Assignee: Shalin Shekhar Mangar
 Fix For: 5.0


 The following error is thrown when sending deleteById via SolrJ with an ID 
 ending with an exclamation mark; it is also the case for deletes by ID via 
 the URL. For some curious reason delete by query using the id field does not 
 fail, but I would expect the same behaviour.
 * fails: /solr/update?commit=true&stream.body=<delete><id>a!</id></delete>
 * ok: 
 /solr/update?commit=true&stream.body=<delete><query>id:a!</query></delete>
 {code}
 2014-01-22 15:32:48,826 ERROR [solr.core.SolrCore] - [http-8080-exec-5] - : 
 java.lang.ArrayIndexOutOfBoundsException: 1
 at 
 org.apache.solr.common.cloud.CompositeIdRouter$KeyParser.getHash(CompositeIdRouter.java:291)
 at 
 org.apache.solr.common.cloud.CompositeIdRouter.sliceHash(CompositeIdRouter.java:58)
 at 
 org.apache.solr.common.cloud.HashBasedRouter.getTargetSlice(HashBasedRouter.java:33)
 at 
 org.apache.solr.update.processor.DistributedUpdateProcessor.setupRequest(DistributedUpdateProcessor.java:218)
 at 
 org.apache.solr.update.processor.DistributedUpdateProcessor.processDelete(DistributedUpdateProcessor.java:961)
 at 
 org.apache.solr.update.processor.UpdateRequestProcessor.processDelete(UpdateRequestProcessor.java:55)
 at 
 org.apache.solr.handler.loader.XMLLoader.processDelete(XMLLoader.java:347)
 at 
 org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:278)
 at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:174)
 at 
 org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:92)
 at 
 org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1915)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:785)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:203)
 at 
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
 at 
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
 at 
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
 at 
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
 at 
 org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
 at 
 org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
 at 
 org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
 at 
 org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
 at 
 org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:889)
 at 
 org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:744)
 at 
 org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2282)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:724) 
 {code}
 See also: 
 http://lucene.472066.n3.nabble.com/AIOOBException-on-trunk-since-21st-or-22nd-build-td4112753.html
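
The reported ArrayIndexOutOfBoundsException is consistent with splitting the ID on '!' and reading the second part unconditionally, since Java's String.split drops trailing empty strings. This stdlib-only sketch just illustrates that behaviour; it is not CompositeIdRouter's actual parsing code:

```java
// Why an ID ending in '!' can trigger an ArrayIndexOutOfBoundsException:
// String.split drops trailing empty strings, so "a!" splits into a single
// element and reading parts[1] fails. Illustration only, not the actual
// CompositeIdRouter parsing code.
public class CompositeIdSplitDemo {
    public static void main(String[] args) {
        String[] ok = "a!b".split("!");   // shard key "a", doc id "b"
        System.out.println(ok.length);    // 2

        String[] bad = "a!".split("!");   // trailing empty string is dropped
        System.out.println(bad.length);   // 1 -- reading parts[1] would throw
    }
}
```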






[jira] [Commented] (LUCENE-5418) Don't use .advance on costly (e.g. distance range facets) filters

2014-02-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13892084#comment-13892084
 ] 

ASF subversion and git services commented on LUCENE-5418:
-

Commit 1564765 from [~mikemccand] in branch 'dev/branches/lucene5376'
[ https://svn.apache.org/r1564765 ]

LUCENE-5418: fix typo

 Don't use .advance on costly (e.g. distance range facets) filters
 -

 Key: LUCENE-5418
 URL: https://issues.apache.org/jira/browse/LUCENE-5418
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/facet
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 5.0, 4.7

 Attachments: LUCENE-5418.patch, LUCENE-5418.patch


 If you use a distance filter today (see 
 http://blog.mikemccandless.com/2014/01/geospatial-distance-faceting-using.html
  ), then drill down on one of those ranges, under the hood Lucene is using 
 .advance on the Filter, which is very costly because we end up computing 
 distance on (possibly many) hits that don't match the query.
 It's better for performance to find the hits matching the Query first, and 
 then check the filter.
 FilteredQuery can already do this today, when you use its 
 QUERY_FIRST_FILTER_STRATEGY.  This essentially accomplishes the same thing as 
 Solr's post filters (I think?) but with a far simpler/better/less code 
 approach.
 E.g., I believe ElasticSearch uses this API when it applies costly filters.
 Longish term, I think  Query/Filter ought to know itself that it's expensive, 
 and cases where such a Query/Filter is MUST'd onto a BooleanQuery (e.g. 
 ConstantScoreQuery), or the Filter is a clause in BooleanFilter, or it's 
 passed to IndexSearcher.search, we should also be smart here and not call 
 .advance on such clauses.  But that'd be a biggish change ... so for today 
 the workaround is the user must carefully construct the FilteredQuery 
 themselves.
 In the mean time, as another workaround, I want to fix DrillSideways so that 
 when you drill down on such filters it doesn't use .advance; this should give 
 a good speedup for the normal path API usage with a costly filter.
 I'm iterating on the lucene server branch (LUCENE-5376) but once it's working 
 I plan to merge this back to trunk / 4.7.
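
The strategy difference can be sketched with plain predicates. The names below are stand-ins, not Lucene classes (in real code the equivalent is constructing a FilteredQuery with QUERY_FIRST_FILTER_STRATEGY); the call counts show why evaluating the costly filter only on query hits is cheaper:

```java
import java.util.*;
import java.util.function.IntPredicate;

// Stand-in sketch: evaluating a costly filter only on query hits
// (query-first) vs. advancing the filter over all candidate docs.
public class QueryFirstSketch {
    public static int costlyCalls; // counts evaluations of the costly filter

    public static IntPredicate costlyFilter(Set<Integer> allowed) {
        return doc -> { costlyCalls++; return allowed.contains(doc); };
    }

    // Filter-first (like .advance on the Filter): the costly check runs on
    // every candidate, including docs the query would never match.
    public static List<Integer> filterFirst(List<Integer> docs, IntPredicate query, IntPredicate filter) {
        List<Integer> out = new ArrayList<>();
        for (int d : docs) if (filter.test(d) && query.test(d)) out.add(d);
        return out;
    }

    // Query-first: the costly check runs only on docs the cheap query matched.
    public static List<Integer> queryFirst(List<Integer> docs, IntPredicate query, IntPredicate filter) {
        List<Integer> out = new ArrayList<>();
        for (int d : docs) if (query.test(d) && filter.test(d)) out.add(d);
        return out;
    }

    public static void main(String[] args) {
        List<Integer> docs = new ArrayList<>();
        for (int i = 0; i < 1000; i++) docs.add(i);
        IntPredicate query = d -> d % 100 == 0; // cheap query matching 10 docs
        IntPredicate filter = costlyFilter(new HashSet<>(Arrays.asList(0, 500)));

        costlyCalls = 0;
        filterFirst(docs, query, filter);
        System.out.println(costlyCalls); // 1000 costly evaluations

        costlyCalls = 0;
        queryFirst(docs, query, filter);
        System.out.println(costlyCalls); // 10 costly evaluations
    }
}
```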






[jira] [Commented] (LUCENE-5376) Add a demo search server

2014-02-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13892087#comment-13892087
 ] 

ASF subversion and git services commented on LUCENE-5376:
-

Commit 1564769 from [~mikemccand] in branch 'dev/branches/lucene5376'
[ https://svn.apache.org/r1564769 ]

LUCENE-5376: minor cleanups in replication

 Add a demo search server
 

 Key: LUCENE-5376
 URL: https://issues.apache.org/jira/browse/LUCENE-5376
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: lucene-demo-server.tgz


 I think it'd be useful to have a demo search server for Lucene.
 Rather than being fully featured, like Solr, it would be minimal, just 
 wrapping the existing Lucene modules to show how you can make use of these 
 features in a server setting.
 The purpose is to demonstrate how one can build a minimal search server on 
 top of APIs like SearchManager, SearcherLifetimeManager, etc.
 This is also useful for finding rough edges / issues in Lucene's APIs that 
 make building a server unnecessarily hard.
 I don't think it should have back compatibility promises (except Lucene's 
 index back compatibility), so it's free to improve as Lucene's APIs change.
 As a starting point, I'll post what I built for the "eating your own dog 
 food" search app for Lucene's & Solr's jira issues 
 http://jirasearch.mikemccandless.com (blog: 
 http://blog.mikemccandless.com/2013/05/eating-dog-food-with-lucene.html ). It 
 uses Netty to expose basic indexing & searching APIs via JSON, but it's very 
 rough (lots of nocommits).






[jira] [Commented] (LUCENE-5418) Don't use .advance on costly (e.g. distance range facets) filters

2014-02-05 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13892088#comment-13892088
 ] 

Michael McCandless commented on LUCENE-5418:


I think the last patch is ready except for a small typo ... I'll commit soon.







[jira] [Commented] (SOLR-5659) Ignore or throw proper error message for bad delete containing bad composite ID

2014-02-05 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13892096#comment-13892096
 ] 

Yonik Seeley commented on SOLR-5659:


This sounds like a bug in CompositeIdRouter - there should not be any ID for 
which it throws an exception (it was made the default router because it is 
completely transparent, other than the hash codes it produces).

 at 
 org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:744)
 at 
 org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2282)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:724) 
 {code}
 See also: 
 http://lucene.472066.n3.nabble.com/AIOOBException-on-trunk-since-21st-or-22nd-build-td4112753.html



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-5659) Ignore or throw proper error message for bad delete containing bad composite ID

2014-02-05 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892096#comment-13892096
 ] 

Yonik Seeley edited comment on SOLR-5659 at 2/5/14 1:33 PM:


This sounds like a bug in CompositeIdRouter - there should not be any ID for 
which it throws an exception (it was made the default because it is completely 
transparent, other than the hash codes it produces).


was (Author: ysee...@gmail.com):
This sounds like a bug in CompositeIdRouter - it should not be any ID for which 
it throws an exception (it was made the default because it is completely 
transparent, other than the hash codes it produces).




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (SOLR-5659) Ignore or throw proper error message for bad delete containing bad composite ID

2014-02-05 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892107#comment-13892107
 ] 

Shalin Shekhar Mangar commented on SOLR-5659:
-

Yes, I think this was introduced with SOLR-5320




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-5659) Ignore or throw proper error message for bad delete containing bad composite ID

2014-02-05 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892107#comment-13892107
 ] 

Shalin Shekhar Mangar edited comment on SOLR-5659 at 2/5/14 1:47 PM:
-

Yes, I think this was introduced with SOLR-5320


was (Author: shalinmangar):
Yes, I think this was introduced with SOLR-5659




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.6.0_45) - Build # 9264 - Still Failing!

2014-02-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/9264/
Java: 32bit/jdk1.6.0_45 -server -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 8985 lines...]
[javac] Compiling 133 source files to 
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/build/solr-solrj/classes/java
[javac] 
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/solrj/src/java/org/apache/solr/client/solrj/impl/NoOpResponseParser.java:60:
 illegal start of type
[javac]   NamedList<Object> list = new NamedList<>();
[javac]  ^
[javac] 
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/solrj/src/java/org/apache/solr/client/solrj/impl/NoOpResponseParser.java:74:
 illegal start of type
[javac]   NamedList<Object> list = new NamedList<>();
[javac]  ^
[javac] 2 errors

BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:459: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:439: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:39: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/extra-targets.xml:37: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/build.xml:189: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/common-build.xml:491: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/common-build.xml:413: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/common-build.xml:359: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/common-build.xml:379: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/common-build.xml:507: 
The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/common-build.xml:1764: 
Compile failed; see the compiler error output for details.

Total time: 26 minutes 27 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 32bit/jdk1.6.0_45 -server -XX:+UseSerialGC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
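The "illegal start of type" errors above are what javac 1.6 reports when it meets the Java 7 diamond operator, so a plausible guess at the fix on this Java 6 branch is to spell out the type argument. A hedged sketch, using `ArrayList` as a self-contained stand-in for the SolrJ `NamedList` class:

```java
import java.util.ArrayList;
import java.util.List;

public class DiamondFix {
    // Under javac 1.6 this fails with "illegal start of type":
    //     List<Object> list = new ArrayList<>();
    // Spelling out the type argument compiles on Java 6 and later:
    static List<Object> makeList() {
        return new ArrayList<Object>();
    }

    public static void main(String[] args) {
        System.out.println(makeList().size()); // prints 0
    }
}
```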

[jira] [Commented] (LUCENE-5425) Make creation of FixedBitSet in FacetsCollector overridable

2014-02-05 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892131#comment-13892131
 ] 

Shai Erera commented on LUCENE-5425:


I ran this on a 2013 Wikipedia dump w/ 6.7M docs (full docs, not 1K) and Date 
facet:

{noformat}
                Task    QPS base  StdDev    QPS comp  StdDev    Pct diff
    HighSloppyPhrase        1.80  (9.7%)        1.75  (6.1%)   -2.9% ( -16% -   14%)
           OrHighLow        5.53  (2.2%)        5.43  (2.7%)   -1.9% (  -6% -    3%)
          OrHighHigh        3.81  (2.2%)        3.74  (2.7%)   -1.8% (  -6% -    3%)
        OrHighNotLow        9.27  (2.1%)        9.13  (2.6%)   -1.5% (  -5% -    3%)
        HighSpanNear        3.77  (5.4%)        3.71  (6.2%)   -1.4% ( -12% -   10%)
        OrHighNotMed       14.84  (2.1%)       14.64  (2.5%)   -1.3% (  -5% -    3%)
       OrHighNotHigh        8.06  (2.4%)        7.96  (2.8%)   -1.2% (  -6% -    4%)
     MedSloppyPhrase        1.66  (7.2%)        1.64  (4.3%)   -1.2% ( -11% -   11%)
        OrNotHighLow       30.04  (4.6%)       29.71  (4.9%)   -1.1% ( -10% -    8%)
           OrHighMed       12.16  (2.2%)       12.04  (2.3%)   -1.0% (  -5% -    3%)
          HighPhrase        2.28 (10.2%)        2.26  (9.2%)   -0.7% ( -18% -   20%)
       OrNotHighHigh       13.08  (3.0%)       13.00  (3.1%)   -0.7% (  -6% -    5%)
             Respell       24.49  (3.3%)       24.33  (3.3%)   -0.6% (  -7% -    6%)
        OrNotHighMed       18.02  (4.1%)       17.99  (4.0%)   -0.2% (  -7% -    8%)
           LowPhrase        5.73  (7.0%)        5.72  (6.9%)   -0.2% ( -13% -   14%)
         MedSpanNear       14.97  (3.8%)       14.99  (4.3%)    0.1% (  -7% -    8%)
          AndHighLow      199.51  (2.9%)      200.05  (3.6%)    0.3% (  -6% -    6%)
         LowSpanNear        4.57  (4.0%)        4.59  (4.7%)    0.3% (  -8% -    9%)
           MedPhrase       79.00  (7.4%)       79.23  (6.3%)    0.3% ( -12% -   15%)
              Fuzzy2       25.42  (3.0%)       25.56  (3.1%)    0.6% (  -5% -    6%)
              Fuzzy1       35.84  (2.7%)       36.11  (3.7%)    0.7% (  -5% -    7%)
     LowSloppyPhrase       20.55  (2.7%)       20.73  (2.3%)    0.9% (  -4% -    6%)
            HighTerm       22.31  (3.7%)       22.59  (2.6%)    1.2% (  -4% -    7%)
          AndHighMed       16.17  (1.8%)       16.39  (2.3%)    1.3% (  -2% -    5%)
         AndHighHigh       15.85  (2.3%)       16.17  (1.7%)    2.1% (  -1% -    6%)
             MedTerm       26.51  (3.9%)       27.11  (4.0%)    2.3% (  -5% -   10%)
             LowTerm       98.07  (4.6%)      101.55  (5.5%)    3.5% (  -6% -   14%)
              IntNRQ        8.61  (4.3%)        9.20  (4.6%)    6.9% (  -1% -   16%)
            Wildcard       12.96  (3.0%)       14.30  (3.6%)   10.3% (   3% -   17%)
             Prefix3       74.18  (2.7%)       96.70  (4.9%)   30.4% (  22% -   38%)
{noformat}

Results are consistent with yours. So should we proceed w/ the API change?

 Make creation of FixedBitSet in FacetsCollector overridable
 ---

 Key: LUCENE-5425
 URL: https://issues.apache.org/jira/browse/LUCENE-5425
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/facet
Affects Versions: 4.6
Reporter: John Wang
 Attachments: LUCENE-5425.patch, facetscollector.patch, 
 facetscollector.patch, fixbitset.patch


 In FacetsCollector, the bits in MatchingDocs are allocated per query. For 
 large indexes, where maxDoc is large, creating a bitset of maxDoc bits is 
 expensive and creates a lot of garbage.
 The attached patch makes this allocation customizable while maintaining the 
 current behavior.
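The overridable-allocation idea can be sketched as a protected factory method. All names here are hypothetical, and `java.util.BitSet` stands in for Lucene's `FixedBitSet` so the sketch stays self-contained:

```java
import java.util.BitSet;

public class SketchCollector {
    // Default behavior: allocate maxDoc bits per query, as the current
    // FacetsCollector does. Subclasses can override this hook to pool
    // or reuse sets on large indexes.
    protected BitSet createHitSet(int maxDoc) {
        return new BitSet(maxDoc);
    }

    public static void main(String[] args) {
        ReusingCollector rc = new ReusingCollector();
        BitSet first = rc.createHitSet(1000);
        BitSet second = rc.createHitSet(1000);
        System.out.println(first == second); // prints "true": instance reused
    }
}

// Hypothetical subclass that reuses one set across queries instead of
// generating maxDoc bits of garbage per query.
class ReusingCollector extends SketchCollector {
    private BitSet cached;

    @Override
    protected BitSet createHitSet(int maxDoc) {
        if (cached == null || cached.size() < maxDoc) {
            cached = new BitSet(maxDoc); // grow (or first use)
        } else {
            cached.clear();              // reuse existing allocation
        }
        return cached;
    }
}
```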



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3854) SolrCloud does not work with https

2014-02-05 Thread Steve Davids (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892130#comment-13892130
 ] 

Steve Davids commented on SOLR-3854:


Just thinking about this a little bit more: if the urlScheme is defined in 
the clusterprops.json file then we can drop the urlScheme configuration in 
solr.xml - whatever is defined in clusterprops is applied to all cores on 
all servers within the cluster. This may make configuration a bit easier, but 
at the potential loss of flexibility: is there a use case where someone wants 
to run https on only certain cores or certain machines within the cluster 
(not all)? Alternatively, we can use the value defined in the cluster props 
as the default when no urlScheme is defined specifically on a core (and if it 
is not specified anywhere, default to http). Just a few thoughts...
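The fallback order just described (per-core urlScheme, else the cluster-wide clusterprops.json value, else plain http) can be sketched as below. The helper and its names are hypothetical, not Solr's actual resolution logic:

```java
public class UrlSchemeResolver {
    // Hypothetical helper: a core-level setting wins, then the
    // cluster-wide clusterprops.json value, then the "http" default.
    static String resolve(String coreScheme, String clusterScheme) {
        if (coreScheme != null && !coreScheme.isEmpty()) {
            return coreScheme;
        }
        if (clusterScheme != null && !clusterScheme.isEmpty()) {
            return clusterScheme;
        }
        return "http";
    }

    public static void main(String[] args) {
        System.out.println(resolve(null, "https")); // prints "https"
    }
}
```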

 SolrCloud does not work with https
 --

 Key: SOLR-3854
 URL: https://issues.apache.org/jira/browse/SOLR-3854
 Project: Solr
  Issue Type: Bug
Reporter: Sami Siren
Assignee: Mark Miller
 Fix For: 5.0, 4.7

 Attachments: SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, 
 SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch


 There are a few places in the current codebase that assume http is used. This 
 prevents using https when running Solr in cloud mode.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5425) Make creation of FixedBitSet in FacetsCollector overridable

2014-02-05 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892144#comment-13892144
 ] 

Shai Erera commented on LUCENE-5425:


John/Lei, can you please review the new patch to confirm this API will work for 
you?




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.x-MacOSX (64bit/jdk1.6.0) - Build # 1267 - Still Failing!

2014-02-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-MacOSX/1267/
Java: 64bit/jdk1.6.0 -XX:+UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 8851 lines...]
[javac] Compiling 133 source files to 
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/solr/build/solr-solrj/classes/java
[javac] 
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/solr/solrj/src/java/org/apache/solr/client/solrj/impl/NoOpResponseParser.java:60:
 illegal start of type
[javac]   NamedList<Object> list = new NamedList<>();
[javac]  ^
[javac] 
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/solr/solrj/src/java/org/apache/solr/client/solrj/impl/NoOpResponseParser.java:74:
 illegal start of type
[javac]   NamedList<Object> list = new NamedList<>();
[javac]  ^
[javac] 2 errors

BUILD FAILED
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/build.xml:459: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/build.xml:439: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/build.xml:39: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/extra-targets.xml:37: The 
following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/solr/build.xml:189: The 
following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/solr/common-build.xml:491: The 
following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/solr/common-build.xml:413: The 
following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/solr/common-build.xml:359: The 
following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/solr/common-build.xml:379: The 
following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/lucene/common-build.xml:507: 
The following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/lucene/common-build.xml:1764: 
Compile failed; see the compiler error output for details.

Total time: 37 minutes 34 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 64bit/jdk1.6.0 -XX:+UseCompressedOops -XX:+UseSerialGC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-SmokeRelease-4.x - Build # 148 - Failure

2014-02-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-4.x/148/

No tests ran.

Build Log:
[...truncated 23708 lines...]
[javac] Compiling 133 source files to 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/solr/build/solr-solrj/classes/java
[javac] 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/solr/solrj/src/java/org/apache/solr/client/solrj/impl/NoOpResponseParser.java:60:
 illegal start of type
[javac]   NamedList<Object> list = new NamedList<>();
[javac]  ^
[javac] 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/solr/solrj/src/java/org/apache/solr/client/solrj/impl/NoOpResponseParser.java:74:
 illegal start of type
[javac]   NamedList<Object> list = new NamedList<>();
[javac]  ^
[javac] 2 errors

BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/build.xml:363:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/solr/build.xml:434:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/solr/common-build.xml:392:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/lucene/common-build.xml:507:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/lucene/common-build.xml:1764:
 Compile failed; see the compiler error output for details.

Total time: 12 minutes 7 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (LUCENE-5425) Make creation of FixedBitSet in FacetsCollector overridable

2014-02-05 Thread John Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892204#comment-13892204
 ] 

John Wang commented on LUCENE-5425:
---

Yes, this works great for us!

Thanks Shai and Mike!

-John




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5379) Query-time multi-word synonym expansion

2014-02-05 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892203#comment-13892203
 ] 

Markus Jelsma commented on SOLR-5379:
-

[~nolanlawson] is the outcome you describe desired behaviour? I don't really 
believe it is. For synonyms [a b,x y] and q="a b" you get 
PhraseQuery(content:"x y a b"). While the phrases "a b" and "x y" would 
ordinarily match some documents, "x y a b" will never match. Or is this 
supposed to expand synonyms at index time too?

 Query-time multi-word synonym expansion
 ---

 Key: SOLR-5379
 URL: https://issues.apache.org/jira/browse/SOLR-5379
 Project: Solr
  Issue Type: Improvement
  Components: query parsers
Reporter: Tien Nguyen Manh
  Labels: multi-word, queryparser, synonym
 Fix For: 4.7

 Attachments: quoted.patch, synonym-expander.patch


 While dealing with synonyms at query time, Solr fails to work with multi-word 
 synonyms for two reasons:
 - First, the Lucene query parser tokenizes the user query on whitespace, so it 
 splits a multi-word term into separate terms before feeding them to the synonym 
 filter; the synonym filter therefore can't recognize the multi-word term and do 
 the expansion.
 - Second, if the synonym filter expands into multiple terms which contain a 
 multi-word synonym, SolrQueryParserBase currently uses MultiPhraseQuery to 
 handle synonyms. But MultiPhraseQuery doesn't work with terms that have 
 different numbers of words.
 For the first one, we can quote all multi-word synonyms in the user query so 
 that the Lucene query parser doesn't split them. There is a JIRA task related 
 to this one: https://issues.apache.org/jira/browse/LUCENE-2605.
 For the second, we can replace MultiPhraseQuery with an appropriate BooleanQuery 
 of SHOULD clauses containing multiple PhraseQuery clauses in case the token 
 stream has a multi-word synonym.
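
The BooleanQuery-of-PhraseQuery idea can be sketched at the query-string level. This is a toy illustration, not Solr's actual code: the synonym map, field name, and output syntax are all made up for the sketch:

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of the proposed fix: instead of one MultiPhraseQuery,
// emit a BooleanQuery of SHOULD'ed PhraseQuery clauses when a phrase has a
// multi-word synonym. Synonym map and field name are hypothetical.
public class SynonymExpansionSketch {
    static final Map<String, String> SYNONYMS = new HashMap<String, String>();
    static {
        SYNONYMS.put("a b", "x y");
    }

    // Returns '(field:"a b" OR field:"x y")' when the phrase has a synonym,
    // otherwise a plain phrase-query string.
    static String expand(String field, String phrase) {
        String base = field + ":\"" + phrase + "\"";
        String syn = SYNONYMS.get(phrase);
        if (syn == null) {
            return base;
        }
        return "(" + base + " OR " + field + ":\"" + syn + "\")";
    }

    public static void main(String[] args) {
        System.out.println(expand("content", "a b"));
    }
}
```

Either phrase then matches on its own, which a single flattened phrase query cannot do.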



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.6.0_45) - Build # 9265 - Still Failing!

2014-02-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/9265/
Java: 32bit/jdk1.6.0_45 -client -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 8834 lines...]
[javac] Compiling 133 source files to 
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/build/solr-solrj/classes/java
[javac] 
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/solrj/src/java/org/apache/solr/client/solrj/impl/NoOpResponseParser.java:60:
 illegal start of type
[javac]   NamedList<Object> list = new NamedList<>();
[javac]  ^
[javac] 
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/solrj/src/java/org/apache/solr/client/solrj/impl/NoOpResponseParser.java:74:
 illegal start of type
[javac]   NamedList<Object> list = new NamedList<>();
[javac]  ^
[javac] 2 errors

BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:459: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:439: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:39: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/extra-targets.xml:37: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/build.xml:189: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/common-build.xml:491: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/common-build.xml:413: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/common-build.xml:359: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/common-build.xml:379: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/common-build.xml:507: 
The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/common-build.xml:1764: 
Compile failed; see the compiler error output for details.

Total time: 26 minutes 26 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 32bit/jdk1.6.0_45 -client -XX:+UseSerialGC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

RE: [JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.6.0_45) - Build # 9265 - Still Failing!

2014-02-05 Thread Uwe Schindler
Hi,

Lucene 4.x is Java 6 only, so no diamond operator is available.
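
The difference is purely syntactic; a minimal example using ArrayList rather than Solr's NamedList:

```java
import java.util.ArrayList;
import java.util.List;

// Java 6 vs Java 7 generics syntax: under -source 1.6 the type arguments
// must be repeated on the right-hand side; the diamond form
// "new ArrayList<>()" only compiles from Java 7 on.
public class DiamondSketch {
    // Java 6 compatible: explicit type arguments on both sides.
    static List<String> makeListJava6Style() {
        return new ArrayList<String>();
    }

    public static void main(String[] args) {
        List<String> list = makeListJava6Style();
        list.add("ok");
        System.out.println(list.size()); // prints 1
    }
}
```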

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Policeman Jenkins Server [mailto:jenk...@thetaphi.de]
 Sent: Wednesday, February 05, 2014 4:17 PM
 To: dev@lucene.apache.org
 Subject: [JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.6.0_45) - Build # 9265 -
 Still Failing!
 
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/9265/
 Java: 32bit/jdk1.6.0_45 -client -XX:+UseSerialGC
 
 All tests passed
 
 Build Log:
 [...truncated 8834 lines...]
 [javac] Compiling 133 source files to /mnt/ssd/jenkins/workspace/Lucene-
 Solr-4.x-Linux/solr/build/solr-solrj/classes/java
 [javac] /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-
 Linux/solr/solrj/src/java/org/apache/solr/client/solrj/impl/NoOpResponsePa
 rser.java:60: illegal start of type
 [javac]   NamedList<Object> list = new NamedList<>();
 [javac]  ^
 [javac] /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-
 Linux/solr/solrj/src/java/org/apache/solr/client/solrj/impl/NoOpResponsePa
 rser.java:74: illegal start of type
 [javac]   NamedList<Object> list = new NamedList<>();
 [javac]  ^
 [javac] 2 errors
 
 BUILD FAILED
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:459: The
 following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:439: The
 following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:39: The
 following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/extra-targets.xml:37:
 The following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/build.xml:189: The
 following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/common-
 build.xml:491: The following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/common-
 build.xml:413: The following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/common-
 build.xml:359: The following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/common-
 build.xml:379: The following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/common-
 build.xml:507: The following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/common-
 build.xml:1764: Compile failed; see the compiler error output for details.
 
 Total time: 26 minutes 26 seconds
 Build step 'Invoke Ant' marked build as failure Description set: Java:
 32bit/jdk1.6.0_45 -client -XX:+UseSerialGC Archiving artifacts Recording test
 results Email was triggered for: Failure Sending email for trigger: Failure
 



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5146) Figure out what it would take for lazily-loaded cores to play nice with SolrCloud

2014-02-05 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13892221#comment-13892221
 ] 

Erick Erickson commented on SOLR-5146:
--

Shalin:

Let me know how I can help. The last two months I've been pretty much out of 
touch, culminating in a cross-country move. But we're at our destination now so 
I have some bandwidth, maybe

Best,
Erick

 Figure out what it would take for lazily-loaded cores to play nice with 
 SolrCloud
 -

 Key: SOLR-5146
 URL: https://issues.apache.org/jira/browse/SOLR-5146
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.5, 5.0
Reporter: Erick Erickson
Assignee: Shalin Shekhar Mangar
 Fix For: 5.0


 The whole lazy-load core thing was implemented with non-SolrCloud use-cases 
 in mind. There are several user-list threads that ask about using lazy cores 
 with SolrCloud, especially in multi-tenant use-cases.
 This is a marker JIRA to investigate what it would take to make lazy-load 
 cores play nice with SolrCloud. It's especially interesting how this all 
 works with shards, replicas, leader election, recovery, etc.
 NOTE: This is pretty much totally unexplored territory. It may be that a few 
 trivial modifications are all that's needed. OTOH, It may be that we'd have 
 to rip apart SolrCloud to handle this case. Until someone dives into the 
 code, we don't know.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.6.0_45) - Build # 9265 - Still Failing!

2014-02-05 Thread Shalin Shekhar Mangar
Ah, I'll fix.

On Wed, Feb 5, 2014 at 8:50 PM, Uwe Schindler u...@thetaphi.de wrote:
 Hi,

 Lucene 4.x is Java 6 only, so no diamond operator is available.

 Uwe

 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de


 -Original Message-
 From: Policeman Jenkins Server [mailto:jenk...@thetaphi.de]
 Sent: Wednesday, February 05, 2014 4:17 PM
 To: dev@lucene.apache.org
 Subject: [JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.6.0_45) - Build # 9265 -
 Still Failing!

 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/9265/
 Java: 32bit/jdk1.6.0_45 -client -XX:+UseSerialGC

 All tests passed

 Build Log:
 [...truncated 8834 lines...]
 [javac] Compiling 133 source files to /mnt/ssd/jenkins/workspace/Lucene-
 Solr-4.x-Linux/solr/build/solr-solrj/classes/java
 [javac] /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-
 Linux/solr/solrj/src/java/org/apache/solr/client/solrj/impl/NoOpResponsePa
 rser.java:60: illegal start of type
 [javac]   NamedList<Object> list = new NamedList<>();
 [javac]  ^
 [javac] /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-
 Linux/solr/solrj/src/java/org/apache/solr/client/solrj/impl/NoOpResponsePa
 rser.java:74: illegal start of type
 [javac]   NamedList<Object> list = new NamedList<>();
 [javac]  ^
 [javac] 2 errors

 BUILD FAILED
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:459: The
 following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:439: The
 following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:39: The
 following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/extra-targets.xml:37:
 The following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/build.xml:189: The
 following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/common-
 build.xml:491: The following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/common-
 build.xml:413: The following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/common-
 build.xml:359: The following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/common-
 build.xml:379: The following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/common-
 build.xml:507: The following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/common-
 build.xml:1764: Compile failed; see the compiler error output for details.

 Total time: 26 minutes 26 seconds
 Build step 'Invoke Ant' marked build as failure Description set: Java:
 32bit/jdk1.6.0_45 -client -XX:+UseSerialGC Archiving artifacts Recording test
 results Email was triggered for: Failure Sending email for trigger: Failure




 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




-- 
Regards,
Shalin Shekhar Mangar.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5530) SolrJ NoOpResponseParser

2014-02-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13892224#comment-13892224
 ] 

ASF subversion and git services commented on SOLR-5530:
---

Commit 1564802 from sha...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1564802 ]

SOLR-5530: Don't use diamond operator on branch_4x

 SolrJ NoOpResponseParser
 

 Key: SOLR-5530
 URL: https://issues.apache.org/jira/browse/SOLR-5530
 Project: Solr
  Issue Type: New Feature
  Components: clients - java
Reporter: Upayavira
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 5.0, 4.7

 Attachments: PATCH-5530.txt, SOLR-5530.patch, SOLR-5530.patch


 If you want the raw response string out of SolrJ, the advice seems to be to 
 just use an HttpClient directly. 
 However, sometimes you may have a lot of SolrJ infrastructure already in 
 place to build out queries, etc, so it would be much simpler to just use 
 SolrJ to do the work.
 This patch offers a NoOpResponseParser, which simply puts the entire response 
 into an entry in a NamedList.
 Because the response isn't parsed into a QueryResponse, usage is slightly 
 different:
 HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");
 SolrQuery query = new SolrQuery("*:*");
 QueryRequest req = new QueryRequest(query);
 server.setParser(new NoOpResponseParser());
 NamedList<Object> resp = server.request(req);
 String responseString = (String) resp.get("response");



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5302) Analytics Component

2014-02-05 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13892225#comment-13892225
 ] 

Erick Erickson commented on SOLR-5302:
--

Mehmet:

We're still trying to track down what's behind the test failures, that effort 
is being tracked in SOLR-5488. That discussion shows a way to reproduce the 
test failures we see, albeit intermittently.

You could certainly help if you can
1. reproduce the problem. Note the discussion at SOLR-5488 about
   ant test -Dtestcase=ExpressionTest -Dtests.iters=1
2. figure out why/create a patch.

and/or

3. exercise trunk as much as possible to see that it all works.

Let's move the rest of the discussion over to SOLR-5488 though, this JIRA is 
gated by that one.

 Analytics Component
 ---

 Key: SOLR-5302
 URL: https://issues.apache.org/jira/browse/SOLR-5302
 Project: Solr
  Issue Type: New Feature
Reporter: Steven Bower
Assignee: Erick Erickson
 Attachments: SOLR-5302.patch, SOLR-5302.patch, SOLR-5302.patch, 
 SOLR-5302.patch, Search Analytics Component.pdf, Statistical Expressions.pdf, 
 solr_analytics-2013.10.04-2.patch


 This ticket is to track a replacement for the StatsComponent. The 
 AnalyticsComponent supports the following features:
 * All functionality of StatsComponent (SOLR-4499)
 * Field Faceting (SOLR-3435)
 ** Support for limit
 ** Sorting (bucket name or any stat in the bucket)
 ** Support for offset
 * Range Faceting
 ** Supports all options of standard range faceting
 * Query Faceting (SOLR-2925)
 * Ability to use overall/field facet statistics as input to range/query 
 faceting (ie calc min/max date and then facet over that range)
 * Support for more complex aggregate/mapping operations (SOLR-1622)
 ** Aggregations: min, max, sum, sum-of-square, count, missing, stddev, mean, 
 median, percentiles
 ** Operations: negation, abs, add, multiply, divide, power, log, date math, 
 string reversal, string concat
 ** Easily pluggable framework to add additional operations
 * New / cleaner output format
 Outstanding Issues:
 * Multi-value field support for stats (supported for faceting)
 * Multi-shard support (may not be possible for some operations, eg median)
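
The median caveat in the last item can be seen with a tiny example: the median of per-shard medians generally differs from the median of the combined values, so per-shard medians cannot simply be merged. The shard contents below are invented for illustration:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Why per-shard medians don't merge: shard medians are 2 and 100, whose
// median is 51.0, but the true median of all six values is 3.5.
public class MedianMergeSketch {
    static double median(List<Integer> xs) {
        List<Integer> s = new ArrayList<Integer>(xs);
        Collections.sort(s);
        int n = s.size();
        return n % 2 == 1 ? s.get(n / 2) : (s.get(n / 2 - 1) + s.get(n / 2)) / 2.0;
    }

    public static void main(String[] args) {
        List<Integer> shard1 = Arrays.asList(1, 2, 3);      // median 2
        List<Integer> shard2 = Arrays.asList(4, 100, 200);  // median 100
        List<Integer> all = new ArrayList<Integer>(shard1);
        all.addAll(shard2);
        System.out.println(median(all));                    // 3.5 (true median)
        System.out.println(median(Arrays.asList(2, 100)));  // 51.0 (median of medians)
    }
}
```

Computing an exact distributed median requires shipping more than one number per shard (e.g. the full sorted values or a sketch), which is why it is flagged as possibly unsupported.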



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5488) Fix up test failures for Analytics Component

2014-02-05 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13892227#comment-13892227
 ] 

Erick Erickson commented on SOLR-5488:
--

[~sbower] We've alit at our rental after a cross-country move. Anything I can 
do to expedite this? Point me at the code you suspect and perhaps I can be 
another set of eyes.



 Fix up test failures for Analytics Component
 

 Key: SOLR-5488
 URL: https://issues.apache.org/jira/browse/SOLR-5488
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0, 4.7
Reporter: Erick Erickson
Assignee: Erick Erickson
 Attachments: SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, 
 SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, eoe.errors


 The analytics component has a few test failures, perhaps 
 environment-dependent. This is just to collect the test fixes in one place 
 for convenience when we merge back into 4.x



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-5697) Delete by query does not work properly with customly configured query parser

2014-02-05 Thread Dmitry Kan (JIRA)
Dmitry Kan created SOLR-5697:


 Summary: Delete by query does not work properly with customly 
configured query parser
 Key: SOLR-5697
 URL: https://issues.apache.org/jira/browse/SOLR-5697
 Project: Solr
  Issue Type: Bug
  Components: query parsers, update
Affects Versions: 4.3.1
Reporter: Dmitry Kan
 Attachments: query_parser_maven_project.tgz

The shard with the configuration illustrating the issue is attached.
Also attached is an example query parser Maven project. The binary has 
already been deployed into the lib directories of each core.

Start the shard using startUp_multicore.sh.


1. curl 'http://localhost:8983/solr/metadata/update?commit=false&debugQuery=on' 
--data-binary '<delete><query>Title:this_title</query></delete>' -H 
'Content-type:text/xml'

This query produces an exception:

<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">400</int><int 
name="QTime">33</int></lst><lst name="error"><str name="msg">Unknown query 
parser 'lucene'</str><int name="code">400</int></lst>
</response>


2. Change the multicore/metadata/solrconfig.xml and 
multicore/statements/solrconfig.xml by uncommenting the defType parameters on 
<requestHandler name="/select">.

Issue the same query. The result is the same:

<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">400</int><int 
name="QTime">30</int></lst><lst name="error"><str name="msg">Unknown query 
parser 'lucene'</str><int name="code">400</int></lst>
</response>


3. Keep the same config as in 2. and specify the query parser in the local params:

curl 'http://localhost:8983/solr/metadata/update?commit=false&debugQuery=on' 
--data-binary '<delete><query>{!qparser1}Title:this_title</query></delete>' -H 
'Content-type:text/xml'

The result:

<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">400</int><int 
name="QTime">3</int></lst><lst name="error"><str name="msg">no field name 
specified in query and no default specified via 'df' param</str><int 
name="code">400</int></lst>
</response>


The reason is that our query parser is misbehaving in that it removes 
colons from the input queries => we get on the server side:

Modified input query: Title:this_title ---> Titlethis_title
5593 [qtp2121668094-15] INFO  
org.apache.solr.update.processor.LogUpdateProcessor  – [metadata] webapp=/solr 
path=/update params={debugQuery=oncommit=false} {} 0 31
5594 [qtp2121668094-15] ERROR org.apache.solr.core.SolrCore  – 
org.apache.solr.common.SolrException: no field name specified in query and no 
default specified via 'df' param
at 
org.apache.solr.parser.SolrQueryParserBase.checkNullField(SolrQueryParserBase.java:924)
at 
org.apache.solr.parser.SolrQueryParserBase.getFieldQuery(SolrQueryParserBase.java:944)
at 
org.apache.solr.parser.SolrQueryParserBase.handleBareTokenQuery(SolrQueryParserBase.java:765)
at org.apache.solr.parser.QueryParser.Term(QueryParser.java:300)
at org.apache.solr.parser.QueryParser.Clause(QueryParser.java:186)
at org.apache.solr.parser.QueryParser.Query(QueryParser.java:108)
at org.apache.solr.parser.QueryParser.TopLevelQuery(QueryParser.java:97)
at 
org.apache.solr.parser.SolrQueryParserBase.parse(SolrQueryParserBase.java:160)
at 
org.apache.solr.search.LuceneQParser.parse(LuceneQParserPlugin.java:72)
at org.apache.solr.search.QParser.getQuery(QParser.java:142)
at 
org.apache.solr.update.DirectUpdateHandler2.getQuery(DirectUpdateHandler2.java:319)
at 
org.apache.solr.update.DirectUpdateHandler2.deleteByQuery(DirectUpdateHandler2.java:349)
at 
org.apache.solr.update.processor.RunUpdateProcessor.processDelete(RunUpdateProcessorFactory.java:80)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processDelete(UpdateRequestProcessor.java:55)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.doDeleteByQuery(DistributedUpdateProcessor.java:931)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processDelete(DistributedUpdateProcessor.java:772)
at 
org.apache.solr.update.processor.LogUpdateProcessor.processDelete(LogUpdateProcessorFactory.java:121)
at 
org.apache.solr.handler.loader.XMLLoader.processDelete(XMLLoader.java:346)
at 
org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:277)
at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:173)
at 
org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:92)
at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1820)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:656)
at 

[jira] [Updated] (SOLR-5697) Delete by query does not work properly with customly configured query parser

2014-02-05 Thread Dmitry Kan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Kan updated SOLR-5697:
-

Attachment: query_parser_maven_project.tgz

 Delete by query does not work properly with customly configured query parser
 

 Key: SOLR-5697
 URL: https://issues.apache.org/jira/browse/SOLR-5697
 Project: Solr
  Issue Type: Bug
  Components: query parsers, update
Affects Versions: 4.3.1
Reporter: Dmitry Kan
 Attachments: query_parser_maven_project.tgz


 The shard with the configuration illustrating the issue is attached.
 Also attached is an example query parser Maven project. The binary has 
 already been deployed into the lib directories of each core.
 Start the shard using startUp_multicore.sh.
 1. curl 
 'http://localhost:8983/solr/metadata/update?commit=false&debugQuery=on' 
 --data-binary '<delete><query>Title:this_title</query></delete>' -H 
 'Content-type:text/xml'
 This query produces an exception:
 <?xml version="1.0" encoding="UTF-8"?>
 <response>
 <lst name="responseHeader"><int name="status">400</int><int 
 name="QTime">33</int></lst><lst name="error"><str name="msg">Unknown query 
 parser 'lucene'</str><int name="code">400</int></lst>
 </response>
 2. Change the multicore/metadata/solrconfig.xml and 
 multicore/statements/solrconfig.xml by uncommenting the defType parameters on 
 <requestHandler name="/select">.
 Issue the same query. The result is the same:
 <?xml version="1.0" encoding="UTF-8"?>
 <response>
 <lst name="responseHeader"><int name="status">400</int><int 
 name="QTime">30</int></lst><lst name="error"><str name="msg">Unknown query 
 parser 'lucene'</str><int name="code">400</int></lst>
 </response>
 3. Keep the same config as in 2. and specify the query parser in the local params:
 curl 'http://localhost:8983/solr/metadata/update?commit=false&debugQuery=on' 
 --data-binary '<delete><query>{!qparser1}Title:this_title</query></delete>' 
 -H 'Content-type:text/xml'
 The result:
 <?xml version="1.0" encoding="UTF-8"?>
 <response>
 <lst name="responseHeader"><int name="status">400</int><int 
 name="QTime">3</int></lst><lst name="error"><str name="msg">no field name 
 specified in query and no default specified via 'df' param</str><int 
 name="code">400</int></lst>
 </response>
 The reason is that our query parser is misbehaving in that it 
 removes colons from the input queries => we get on the server side:
 Modified input query: Title:this_title ---> Titlethis_title
 5593 [qtp2121668094-15] INFO  
 org.apache.solr.update.processor.LogUpdateProcessor  – [metadata] 
 webapp=/solr path=/update params={debugQuery=oncommit=false} {} 0 31
 5594 [qtp2121668094-15] ERROR org.apache.solr.core.SolrCore  – 
 org.apache.solr.common.SolrException: no field name specified in query and no 
 default specified via 'df' param
   at 
 org.apache.solr.parser.SolrQueryParserBase.checkNullField(SolrQueryParserBase.java:924)
   at 
 org.apache.solr.parser.SolrQueryParserBase.getFieldQuery(SolrQueryParserBase.java:944)
   at 
 org.apache.solr.parser.SolrQueryParserBase.handleBareTokenQuery(SolrQueryParserBase.java:765)
   at org.apache.solr.parser.QueryParser.Term(QueryParser.java:300)
   at org.apache.solr.parser.QueryParser.Clause(QueryParser.java:186)
   at org.apache.solr.parser.QueryParser.Query(QueryParser.java:108)
   at org.apache.solr.parser.QueryParser.TopLevelQuery(QueryParser.java:97)
   at 
 org.apache.solr.parser.SolrQueryParserBase.parse(SolrQueryParserBase.java:160)
   at 
 org.apache.solr.search.LuceneQParser.parse(LuceneQParserPlugin.java:72)
   at org.apache.solr.search.QParser.getQuery(QParser.java:142)
   at 
 org.apache.solr.update.DirectUpdateHandler2.getQuery(DirectUpdateHandler2.java:319)
   at 
 org.apache.solr.update.DirectUpdateHandler2.deleteByQuery(DirectUpdateHandler2.java:349)
   at 
 org.apache.solr.update.processor.RunUpdateProcessor.processDelete(RunUpdateProcessorFactory.java:80)
   at 
 org.apache.solr.update.processor.UpdateRequestProcessor.processDelete(UpdateRequestProcessor.java:55)
   at 
 org.apache.solr.update.processor.DistributedUpdateProcessor.doDeleteByQuery(DistributedUpdateProcessor.java:931)
   at 
 org.apache.solr.update.processor.DistributedUpdateProcessor.processDelete(DistributedUpdateProcessor.java:772)
   at 
 org.apache.solr.update.processor.LogUpdateProcessor.processDelete(LogUpdateProcessorFactory.java:121)
   at 
 org.apache.solr.handler.loader.XMLLoader.processDelete(XMLLoader.java:346)
   at 
 org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:277)
   at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:173)
   at 
 org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:92)
   at 
 org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
   at 
 

[jira] [Updated] (SOLR-5697) Delete by query does not work properly with customly configured query parser

2014-02-05 Thread Dmitry Kan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Kan updated SOLR-5697:
-

Description: 
The shard with the configuration illustrating the issue is attached. Since the 
size of the archive exceeds the upload limit, I have dropped the solr.war from 
the webapps. Please add it (Solr 4.3.1).


Also attached is an example query parser Maven project. The binary has 
already been deployed into the lib directories of each core.

Start the shard using startUp_multicore.sh.


1. curl 'http://localhost:8983/solr/metadata/update?commit=false&debugQuery=on' 
--data-binary '<delete><query>Title:this_title</query></delete>' -H 
'Content-type:text/xml'

This query produces an exception:

<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">400</int><int 
name="QTime">33</int></lst><lst name="error"><str name="msg">Unknown query 
parser 'lucene'</str><int name="code">400</int></lst>
</response>


2. Change the multicore/metadata/solrconfig.xml and 
multicore/statements/solrconfig.xml by uncommenting the defType parameters on 
<requestHandler name="/select">.

Issue the same query. The result is the same:

<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">400</int><int 
name="QTime">30</int></lst><lst name="error"><str name="msg">Unknown query 
parser 'lucene'</str><int name="code">400</int></lst>
</response>


3. Keep the same config as in 2. and specify the query parser in the local params:

curl 'http://localhost:8983/solr/metadata/update?commit=false&debugQuery=on' 
--data-binary '<delete><query>{!qparser1}Title:this_title</query></delete>' -H 
'Content-type:text/xml'

The result:

<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">400</int><int 
name="QTime">3</int></lst><lst name="error"><str name="msg">no field name 
specified in query and no default specified via 'df' param</str><int 
name="code">400</int></lst>
</response>


The reason is that our query parser is misbehaving in that it removes 
colons from the input queries => we get on the server side:

Modified input query: Title:this_title ---> Titlethis_title
5593 [qtp2121668094-15] INFO  
org.apache.solr.update.processor.LogUpdateProcessor  – [metadata] webapp=/solr 
path=/update params={debugQuery=oncommit=false} {} 0 31
5594 [qtp2121668094-15] ERROR org.apache.solr.core.SolrCore  – 
org.apache.solr.common.SolrException: no field name specified in query and no 
default specified via 'df' param
at 
org.apache.solr.parser.SolrQueryParserBase.checkNullField(SolrQueryParserBase.java:924)
at 
org.apache.solr.parser.SolrQueryParserBase.getFieldQuery(SolrQueryParserBase.java:944)
at org.apache.solr.parser.SolrQueryParserBase.handleBareTokenQuery(SolrQueryParserBase.java:765)
at org.apache.solr.parser.QueryParser.Term(QueryParser.java:300)
at org.apache.solr.parser.QueryParser.Clause(QueryParser.java:186)
at org.apache.solr.parser.QueryParser.Query(QueryParser.java:108)
at org.apache.solr.parser.QueryParser.TopLevelQuery(QueryParser.java:97)
at org.apache.solr.parser.SolrQueryParserBase.parse(SolrQueryParserBase.java:160)
at org.apache.solr.search.LuceneQParser.parse(LuceneQParserPlugin.java:72)
at org.apache.solr.search.QParser.getQuery(QParser.java:142)
at org.apache.solr.update.DirectUpdateHandler2.getQuery(DirectUpdateHandler2.java:319)
at org.apache.solr.update.DirectUpdateHandler2.deleteByQuery(DirectUpdateHandler2.java:349)
at org.apache.solr.update.processor.RunUpdateProcessor.processDelete(RunUpdateProcessorFactory.java:80)
at org.apache.solr.update.processor.UpdateRequestProcessor.processDelete(UpdateRequestProcessor.java:55)
at org.apache.solr.update.processor.DistributedUpdateProcessor.doDeleteByQuery(DistributedUpdateProcessor.java:931)
at org.apache.solr.update.processor.DistributedUpdateProcessor.processDelete(DistributedUpdateProcessor.java:772)
at org.apache.solr.update.processor.LogUpdateProcessor.processDelete(LogUpdateProcessorFactory.java:121)
at org.apache.solr.handler.loader.XMLLoader.processDelete(XMLLoader.java:346)
at org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:277)
at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:173)
at org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:92)
at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1820)
at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:656)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:359)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:155)
 

[jira] [Updated] (SOLR-5697) Delete by query does not work properly with customly configured query parser

2014-02-05 Thread Dmitry Kan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Kan updated SOLR-5697:
-

Attachment: shard.tgz

Shard with config files, without the solr.war file.

 Delete by query does not work properly with customly configured query parser
 

 Key: SOLR-5697
 URL: https://issues.apache.org/jira/browse/SOLR-5697
 Project: Solr
  Issue Type: Bug
  Components: query parsers, update
Affects Versions: 4.3.1
Reporter: Dmitry Kan
 Attachments: query_parser_maven_project.tgz, shard.tgz


 The shard with the configuration illustrating the issue is attached. Since 
 the size of the archive exceeds the upload limit, I have dropped the solr.war 
 from the webapps. Please add it (Solr 4.3.1).
 Also attached is an example query parser Maven project. The binary has 
 already been deployed into the lib directories of each core.
 Start the shard using startUp_multicore.sh.
 1. curl 
 'http://localhost:8983/solr/metadata/update?commit=false&debugQuery=on' 
 --data-binary '<delete><query>Title:this_title</query></delete>' -H 
 'Content-type:text/xml'
 This query produces an exception:
 <?xml version="1.0" encoding="UTF-8"?>
 <response>
 <lst name="responseHeader"><int name="status">400</int><int 
 name="QTime">33</int></lst><lst name="error"><str name="msg">Unknown query 
 parser 'lucene'</str><int name="code">400</int></lst>
 </response>
 2. Change the multicore/metadata/solrconfig.xml and 
 multicore/statements/solrconfig.xml by uncommenting the defType parameters on 
 <requestHandler name="/select">.
 Issue the same query. The result is the same:
 <?xml version="1.0" encoding="UTF-8"?>
 <response>
 <lst name="responseHeader"><int name="status">400</int><int 
 name="QTime">30</int></lst><lst name="error"><str name="msg">Unknown query 
 parser 'lucene'</str><int name="code">400</int></lst>
 </response>
 3. Keep the same config as in 2. and specify the query parser in the local params:
 curl 'http://localhost:8983/solr/metadata/update?commit=false&debugQuery=on' 
 --data-binary '<delete><query>{!qparser1}Title:this_title</query></delete>' 
 -H 'Content-type:text/xml'
 The result:
 <?xml version="1.0" encoding="UTF-8"?>
 <response>
 <lst name="responseHeader"><int name="status">400</int><int 
 name="QTime">3</int></lst><lst name="error"><str name="msg">no field name 
 specified in query and no default specified via 'df' param</str><int 
 name="code">400</int></lst>
 </response>
 The reason is that our query parser misbehaves: it removes colons from the 
 input queries, so on the server side we get:
 Modified input query: Title:this_title ---> Titlethis_title
 5593 [qtp2121668094-15] INFO  
 org.apache.solr.update.processor.LogUpdateProcessor  – [metadata] 
 webapp=/solr path=/update params={debugQuery=on&commit=false} {} 0 31
 5594 [qtp2121668094-15] ERROR org.apache.solr.core.SolrCore  – 
 org.apache.solr.common.SolrException: no field name specified in query and no 
 default specified via 'df' param
   at org.apache.solr.parser.SolrQueryParserBase.checkNullField(SolrQueryParserBase.java:924)
   at org.apache.solr.parser.SolrQueryParserBase.getFieldQuery(SolrQueryParserBase.java:944)
   at org.apache.solr.parser.SolrQueryParserBase.handleBareTokenQuery(SolrQueryParserBase.java:765)
   at org.apache.solr.parser.QueryParser.Term(QueryParser.java:300)
   at org.apache.solr.parser.QueryParser.Clause(QueryParser.java:186)
   at org.apache.solr.parser.QueryParser.Query(QueryParser.java:108)
   at org.apache.solr.parser.QueryParser.TopLevelQuery(QueryParser.java:97)
   at org.apache.solr.parser.SolrQueryParserBase.parse(SolrQueryParserBase.java:160)
   at org.apache.solr.search.LuceneQParser.parse(LuceneQParserPlugin.java:72)
   at org.apache.solr.search.QParser.getQuery(QParser.java:142)
   at org.apache.solr.update.DirectUpdateHandler2.getQuery(DirectUpdateHandler2.java:319)
   at org.apache.solr.update.DirectUpdateHandler2.deleteByQuery(DirectUpdateHandler2.java:349)
   at org.apache.solr.update.processor.RunUpdateProcessor.processDelete(RunUpdateProcessorFactory.java:80)
   at org.apache.solr.update.processor.UpdateRequestProcessor.processDelete(UpdateRequestProcessor.java:55)
   at org.apache.solr.update.processor.DistributedUpdateProcessor.doDeleteByQuery(DistributedUpdateProcessor.java:931)
   at org.apache.solr.update.processor.DistributedUpdateProcessor.processDelete(DistributedUpdateProcessor.java:772)
   at org.apache.solr.update.processor.LogUpdateProcessor.processDelete(LogUpdateProcessorFactory.java:121)
   at org.apache.solr.handler.loader.XMLLoader.processDelete(XMLLoader.java:346)
   at org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:277)
   at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:173)
   at 
 

[jira] [Updated] (SOLR-5697) Delete by query does not work properly with customly configured query parser

2014-02-05 Thread Dmitry Kan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Kan updated SOLR-5697:
-

Description: 
The shard with the configuration illustrating the issue is attached. Since the 
size of the archive exceeds the upload limit, I have dropped the solr.war from 
the webapps directory. Please add it (Solr 4.3.1).


Also attached is an example query parser Maven project. The binary has already 
been deployed into the lib directories of each core.

Start the shard using startUp_multicore.sh.


1. curl 'http://localhost:8983/solr/metadata/update?commit=false&debugQuery=on' 
--data-binary '<delete><query>Title:this_title</query></delete>' -H 
'Content-type:text/xml'

This query produces an exception:

<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">400</int><int 
name="QTime">33</int></lst><lst name="error"><str name="msg">Unknown query 
parser 'lucene'</str><int name="code">400</int></lst>
</response>


2. Change the multicore/metadata/solrconfig.xml and 
multicore/statements/solrconfig.xml by uncommenting the defType parameters on 
<requestHandler name="/select">.

Issue the same query. The result is the same:

<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">400</int><int 
name="QTime">30</int></lst><lst name="error"><str name="msg">Unknown query 
parser 'lucene'</str><int name="code">400</int></lst>
</response>


3. Keep the same config as in 2. and specify the query parser in the local params:

curl 'http://localhost:8983/solr/metadata/update?commit=false&debugQuery=on' 
--data-binary '<delete><query>{!qparser1}Title:this_title</query></delete>' -H 
'Content-type:text/xml'


The result:

<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">400</int><int 
name="QTime">3</int></lst><lst name="error"><str name="msg">no field name 
specified in query and no default specified via 'df' param</str><int 
name="code">400</int></lst>
</response>


The reason is that our query parser misbehaves: it removes colons from the 
input queries, so on the server side we get:

Modified input query: Title:this_title ---> Titlethis_title
5593 [qtp2121668094-15] INFO  
org.apache.solr.update.processor.LogUpdateProcessor  – [metadata] webapp=/solr 
path=/update params={debugQuery=on&commit=false} {} 0 31
5594 [qtp2121668094-15] ERROR org.apache.solr.core.SolrCore  – 
org.apache.solr.common.SolrException: no field name specified in query and no 
default specified via 'df' param
at org.apache.solr.parser.SolrQueryParserBase.checkNullField(SolrQueryParserBase.java:924)
at org.apache.solr.parser.SolrQueryParserBase.getFieldQuery(SolrQueryParserBase.java:944)
at org.apache.solr.parser.SolrQueryParserBase.handleBareTokenQuery(SolrQueryParserBase.java:765)
at org.apache.solr.parser.QueryParser.Term(QueryParser.java:300)
at org.apache.solr.parser.QueryParser.Clause(QueryParser.java:186)
at org.apache.solr.parser.QueryParser.Query(QueryParser.java:108)
at org.apache.solr.parser.QueryParser.TopLevelQuery(QueryParser.java:97)
at org.apache.solr.parser.SolrQueryParserBase.parse(SolrQueryParserBase.java:160)
at org.apache.solr.search.LuceneQParser.parse(LuceneQParserPlugin.java:72)
at org.apache.solr.search.QParser.getQuery(QParser.java:142)
at org.apache.solr.update.DirectUpdateHandler2.getQuery(DirectUpdateHandler2.java:319)
at org.apache.solr.update.DirectUpdateHandler2.deleteByQuery(DirectUpdateHandler2.java:349)
at org.apache.solr.update.processor.RunUpdateProcessor.processDelete(RunUpdateProcessorFactory.java:80)
at org.apache.solr.update.processor.UpdateRequestProcessor.processDelete(UpdateRequestProcessor.java:55)
at org.apache.solr.update.processor.DistributedUpdateProcessor.doDeleteByQuery(DistributedUpdateProcessor.java:931)
at org.apache.solr.update.processor.DistributedUpdateProcessor.processDelete(DistributedUpdateProcessor.java:772)
at org.apache.solr.update.processor.LogUpdateProcessor.processDelete(LogUpdateProcessorFactory.java:121)
at org.apache.solr.handler.loader.XMLLoader.processDelete(XMLLoader.java:346)
at org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:277)
at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:173)
at org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:92)
at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1820)
at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:656)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:359)
at 
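The root cause reported above can be reproduced without Solr: once a parser strips the colon from a bare term, no field name remains, which is exactly the state that makes SolrQueryParserBase.checkNullField() fail when no 'df' default is configured. A minimal standalone sketch (the mangle() method is a hypothetical stand-in for the custom parser, not the reporter's actual code):

```java
public class ColonStripSketch {
    // Stand-in for the misbehaving custom parser: it drops colons from the query.
    static String mangle(String query) {
        return query.replace(":", "");
    }

    public static void main(String[] args) {
        String query = "Title:this_title";
        String mangled = mangle(query); // "Titlethis_title"
        // Without a colon there is no explicit field, so the parser must fall
        // back to the 'df' default field; if none is configured, Solr reports
        // "no field name specified in query and no default specified via 'df' param".
        boolean hasExplicitField = mangled.contains(":");
        System.out.println(mangled + " hasExplicitField=" + hasExplicitField);
    }
}
```

This is why step 3 above changes the error: the query parser is found, but the colon-stripped term no longer names a field.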

[jira] [Comment Edited] (SOLR-5248) Data Import Handler support for Twitter

2014-02-05 Thread Hasan Emre ERKEK (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13847556#comment-13847556
 ] 

Hasan Emre ERKEK edited comment on SOLR-5248 at 2/5/14 3:55 PM:


example of data-config.xml

{code:xml}
<dataConfig>
  <document>
    <entity processor="edu.anadolu.solr.handler.dataimport.TwitterEntityProcessor"
            name="t"
            oAuthAccessTokenSecret="{twitter.oAuthAccessTokenSecret}"
            oAuthAccessToken="{twitter.oAuthAccessTokenSecret}"
            oAuthConsumerKey="{twitter.oAuthAccessTokenSecret}"
            oAuthConsumerSecret="{twitter.oAuthAccessTokenSecret}"
            type="filter"
            tracks="anadolu üniversitesi,eskişehir"
            dataSource="null">
      <field column="html" name="html" stripHTML="true"/>
      <field column="lastEditDate" template='${dataimporter.last_index_time}'/>
    </entity>
  </document>
</dataConfig>
{code}


was (Author: heerkek):
example of data-config.xml

 Data Import Handler support for Twitter
 ---

 Key: SOLR-5248
 URL: https://issues.apache.org/jira/browse/SOLR-5248
 Project: Solr
  Issue Type: New Feature
  Components: contrib - DataImportHandler
Affects Versions: 4.4
Reporter: Hasan Emre ERKEK
Priority: Minor
  Labels: DIH, Twitter
 Attachments: SOLR-5248.patch, data-config.xml


 The Twitter Entity Processor allows indexing the Twitter stream with Solr.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: svn commit: r1564739 - in /lucene/dev/trunk/solr/core/src/test/org/apache/solr: BasicFunctionalityTest.java handler/JsonLoaderTest.java search/TestSolr4Spatial.java

2014-02-05 Thread Shalin Shekhar Mangar
Uwe, I have already fixed the underlying issue. I'll re-enable the tests.

On Wed, Feb 5, 2014 at 5:53 PM,  uschind...@apache.org wrote:
 Author: uschindler
 Date: Wed Feb  5 12:23:33 2014
 New Revision: 1564739

 URL: http://svn.apache.org/r1564739
 Log:
 Ignore tests that fail since the last few commits. The responsible person 
 should fix those. This commit is only to make Jenkins quiet again!

 Modified:
 
 lucene/dev/trunk/solr/core/src/test/org/apache/solr/BasicFunctionalityTest.java
 
 lucene/dev/trunk/solr/core/src/test/org/apache/solr/handler/JsonLoaderTest.java
 
 lucene/dev/trunk/solr/core/src/test/org/apache/solr/search/TestSolr4Spatial.java

 Modified: 
 lucene/dev/trunk/solr/core/src/test/org/apache/solr/BasicFunctionalityTest.java
 URL: 
 http://svn.apache.org/viewvc/lucene/dev/trunk/solr/core/src/test/org/apache/solr/BasicFunctionalityTest.java?rev=1564739r1=1564738r2=1564739view=diff
 ==
 --- 
 lucene/dev/trunk/solr/core/src/test/org/apache/solr/BasicFunctionalityTest.java
  (original)
 +++ 
 lucene/dev/trunk/solr/core/src/test/org/apache/solr/BasicFunctionalityTest.java
  Wed Feb  5 12:23:33 2014
 @@ -345,7 +345,7 @@ public class BasicFunctionalityTest exte
}


 -  @Test
 +  @Test @Ignore("Please fix me!")
public void testClientErrorOnMalformedNumbers() throws Exception {

  final String BAD_VALUE = "NOT_A_NUMBER";

 Modified: 
 lucene/dev/trunk/solr/core/src/test/org/apache/solr/handler/JsonLoaderTest.java
 URL: 
 http://svn.apache.org/viewvc/lucene/dev/trunk/solr/core/src/test/org/apache/solr/handler/JsonLoaderTest.java?rev=1564739r1=1564738r2=1564739view=diff
 ==
 --- 
 lucene/dev/trunk/solr/core/src/test/org/apache/solr/handler/JsonLoaderTest.java
  (original)
 +++ 
 lucene/dev/trunk/solr/core/src/test/org/apache/solr/handler/JsonLoaderTest.java
  Wed Feb  5 12:23:33 2014
 @@ -31,6 +31,7 @@ import org.apache.solr.update.DeleteUpda
  import org.apache.solr.update.processor.BufferingRequestProcessor;
  import org.junit.BeforeClass;
  import org.junit.Test;
 +import org.junit.Ignore;
  import org.xml.sax.SAXException;

  import java.math.BigDecimal;
 @@ -391,7 +392,7 @@ public class JsonLoaderTest extends Solr
}


 -  @Test
 +  @Test @Ignore("Please fix me!")
public void testAddBigIntegerValueToTrieField() throws Exception {
  // Adding a BigInteger to a long field should fail
  // BigInteger.longValue() returns only the low-order 64 bits.
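The comment in the hunk above is easy to verify standalone: BigInteger.longValue() silently keeps only the low-order 64 bits, so a value just past Long.MAX_VALUE wraps around instead of failing. A quick illustration, independent of Solr:

```java
import java.math.BigInteger;

public class LongValueTruncation {
    public static void main(String[] args) {
        // 2^63 does not fit in a signed long; longValue() keeps the low 64 bits,
        // which wraps to Long.MIN_VALUE rather than throwing.
        BigInteger big = BigInteger.ONE.shiftLeft(63); // 9223372036854775808
        System.out.println(big.longValue());                   // prints -9223372036854775808
        System.out.println(big.longValue() == Long.MIN_VALUE); // prints true
    }
}
```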

 Modified: 
 lucene/dev/trunk/solr/core/src/test/org/apache/solr/search/TestSolr4Spatial.java
 URL: 
 http://svn.apache.org/viewvc/lucene/dev/trunk/solr/core/src/test/org/apache/solr/search/TestSolr4Spatial.java?rev=1564739r1=1564738r2=1564739view=diff
 ==
 --- 
 lucene/dev/trunk/solr/core/src/test/org/apache/solr/search/TestSolr4Spatial.java
  (original)
 +++ 
 lucene/dev/trunk/solr/core/src/test/org/apache/solr/search/TestSolr4Spatial.java
  Wed Feb  5 12:23:33 2014
 @@ -35,6 +35,7 @@ import org.apache.solr.util.SpatialUtils
  import org.junit.Before;
  import org.junit.BeforeClass;
  import org.junit.Test;
 +import org.junit.Ignore;

  import java.text.ParseException;
  import java.util.Arrays;
 @@ -71,7 +72,7 @@ public class TestSolr4Spatial extends So
  assertU(commit());
}

 -  @Test
 +  @Test @Ignore("Please fix me!")
public void testBadShapeParse400() {
  assertQEx(null, req(
  "fl", "id," + fieldName, "q", "*:*", "rows", "1000",





-- 
Regards,
Shalin Shekhar Mangar.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5623) Better diagnosis of RuntimeExceptions in analysis

2014-02-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892309#comment-13892309
 ] 

ASF subversion and git services commented on SOLR-5623:
---

Commit 1564831 from hoss...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1564831 ]

SOLR-5623: revert r1564739, shalin already fixed the bug that caused these 
failures, but Uwe didn't know that

 Better diagnosis of RuntimeExceptions in analysis
 -

 Key: SOLR-5623
 URL: https://issues.apache.org/jira/browse/SOLR-5623
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.6.1
Reporter: Benson Margulies
Assignee: Benson Margulies
 Fix For: 5.0, 4.7

 Attachments: SOLR-5623-nowrap.patch, SOLR-5623-nowrap.patch


 If an analysis component (tokenizer, filter, etc.) gets into a hissy fit and 
 throws a RuntimeException, the resulting log traffic is less than 
 informative, lacking any pointer to the doc under discussion (in the doc 
 case). It would be better if there were a try/catch shortstop that logged 
 this more informatively.
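One hedged sketch of the "try/catch shortstop" idea described above (names are illustrative, not the actual SOLR-5623 patch): wrap each analysis step so a RuntimeException is rethrown with the offending document's id attached.

```java
import java.util.function.Supplier;

public class AnalysisGuard {
    // Runs one analysis step; on failure, rethrows with the document id in the message.
    static <T> T withDocContext(String docId, Supplier<T> step) {
        try {
            return step.get();
        } catch (RuntimeException e) {
            throw new RuntimeException("Exception while analyzing doc [" + docId + "]", e);
        }
    }

    public static void main(String[] args) {
        try {
            withDocContext("doc-42", () -> { throw new IllegalStateException("hissy fit"); });
        } catch (RuntimeException e) {
            System.out.println(e.getMessage()); // message now names the offending document
        }
    }
}
```

The point is only that the wrapper preserves the original exception as the cause while adding the context the log was missing.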



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5623) Better diagnosis of RuntimeExceptions in analysis

2014-02-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892313#comment-13892313
 ] 

ASF subversion and git services commented on SOLR-5623:
---

Commit 1564834 from hoss...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1564834 ]

SOLR-5623: revert r1564739, shalin already fixed the bug that caused these 
failures, but Uwe didn't know that (merge r1564831)

 Better diagnosis of RuntimeExceptions in analysis
 -

 Key: SOLR-5623
 URL: https://issues.apache.org/jira/browse/SOLR-5623
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.6.1
Reporter: Benson Margulies
Assignee: Benson Margulies
 Fix For: 5.0, 4.7

 Attachments: SOLR-5623-nowrap.patch, SOLR-5623-nowrap.patch


 If an analysis component (tokenizer, filter, etc.) gets into a hissy fit and 
 throws a RuntimeException, the resulting log traffic is less than 
 informative, lacking any pointer to the doc under discussion (in the doc 
 case). It would be better if there were a try/catch shortstop that logged 
 this more informatively.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: svn commit: r1564834 - in /lucene/dev/branches/branch_4x: ./ dev-tools/ lucene/ lucene/analysis/ lucene/analysis/common/src/java/org/apache/lucene/analysis/standard/std40/ lucene/analysis/icu/src/

2014-02-05 Thread Uwe Schindler
Hi Hoss,

Thanks for reverting. When I added the Ignore annotations, the tests were failing 
for me locally, so I was not aware that the underlying bug had been fixed in the 
meantime. Maybe the fix was committed at about the same time.

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: hoss...@apache.org [mailto:hoss...@apache.org]
 Sent: Wednesday, February 05, 2014 6:11 PM
 To: comm...@lucene.apache.org
 Subject: svn commit: r1564834 - in /lucene/dev/branches/branch_4x: ./ dev-
 tools/ lucene/ lucene/analysis/
 lucene/analysis/common/src/java/org/apache/lucene/analysis/standard/std
 40/ lucene/analysis/icu/src/java/org/apache/lucene/collation/
 lucene/backwards/ luce...
 
 Author: hossman
 Date: Wed Feb  5 17:11:27 2014
 New Revision: 1564834
 
 URL: http://svn.apache.org/r1564834
 Log:
 SOLR-5623: revert r1564739, shalin already fixed the bug that caused these
 failures, but Uwe didn't know that (merge r1564831)
 
 Modified:
 lucene/dev/branches/branch_4x/   (props changed)
 lucene/dev/branches/branch_4x/dev-tools/   (props changed)
 lucene/dev/branches/branch_4x/lucene/   (props changed)
 lucene/dev/branches/branch_4x/lucene/BUILD.txt   (props changed)
 lucene/dev/branches/branch_4x/lucene/CHANGES.txt   (props changed)
 lucene/dev/branches/branch_4x/lucene/JRE_VERSION_MIGRATION.txt
 (props changed)
 lucene/dev/branches/branch_4x/lucene/LICENSE.txt   (props changed)
 lucene/dev/branches/branch_4x/lucene/MIGRATE.txt   (props changed)
 lucene/dev/branches/branch_4x/lucene/NOTICE.txt   (props changed)
 lucene/dev/branches/branch_4x/lucene/README.txt   (props changed)
 lucene/dev/branches/branch_4x/lucene/SYSTEM_REQUIREMENTS.txt
 (props changed)
 lucene/dev/branches/branch_4x/lucene/analysis/   (props changed)
 
 lucene/dev/branches/branch_4x/lucene/analysis/common/src/java/org/apa
 che/lucene/analysis/standard/std40/ASCIITLD.jflex-macro   (props changed)
 
 lucene/dev/branches/branch_4x/lucene/analysis/common/src/java/org/apa
 che/lucene/analysis/standard/std40/SUPPLEMENTARY.jflex-macro   (props
 changed)
 
 lucene/dev/branches/branch_4x/lucene/analysis/common/src/java/org/apa
 che/lucene/analysis/standard/std40/StandardTokenizerImpl40.java   (props
 changed)
 
 lucene/dev/branches/branch_4x/lucene/analysis/common/src/java/org/apa
 che/lucene/analysis/standard/std40/StandardTokenizerImpl40.jflex   (props
 changed)
 
 lucene/dev/branches/branch_4x/lucene/analysis/common/src/java/org/apa
 che/lucene/analysis/standard/std40/UAX29URLEmailTokenizerImpl40.java
 (props changed)
 
 lucene/dev/branches/branch_4x/lucene/analysis/common/src/java/org/apa
 che/lucene/analysis/standard/std40/UAX29URLEmailTokenizerImpl40.jflex
 (props changed)
 
 lucene/dev/branches/branch_4x/lucene/analysis/common/src/java/org/apa
 che/lucene/analysis/standard/std40/package.html   (props changed)
 
 lucene/dev/branches/branch_4x/lucene/analysis/icu/src/java/org/apache/l
 ucene/collation/ICUCollationKeyFilterFactory.java   (props changed)
 lucene/dev/branches/branch_4x/lucene/backwards/   (props changed)
 lucene/dev/branches/branch_4x/lucene/benchmark/   (props changed)
 lucene/dev/branches/branch_4x/lucene/build.xml   (props changed)
 lucene/dev/branches/branch_4x/lucene/classification/   (props changed)
 lucene/dev/branches/branch_4x/lucene/classification/build.xml   (props
 changed)
 lucene/dev/branches/branch_4x/lucene/classification/ivy.xml   (props
 changed)
 lucene/dev/branches/branch_4x/lucene/classification/src/   (props
 changed)
 lucene/dev/branches/branch_4x/lucene/codecs/   (props changed)
 lucene/dev/branches/branch_4x/lucene/common-build.xml   (props
 changed)
 lucene/dev/branches/branch_4x/lucene/core/   (props changed)
 
 lucene/dev/branches/branch_4x/lucene/core/src/test/org/apache/lucene/i
 ndex/TestBackwardsCompatibility.java   (props changed)
 
 lucene/dev/branches/branch_4x/lucene/core/src/test/org/apache/lucene/i
 ndex/index.40.cfs.zip   (props changed)
 
 lucene/dev/branches/branch_4x/lucene/core/src/test/org/apache/lucene/i
 ndex/index.40.nocfs.zip   (props changed)
 
 lucene/dev/branches/branch_4x/lucene/core/src/test/org/apache/lucene/i
 ndex/index.40.optimized.cfs.zip   (props changed)
 
 lucene/dev/branches/branch_4x/lucene/core/src/test/org/apache/lucene/i
 ndex/index.40.optimized.nocfs.zip   (props changed)
 
 lucene/dev/branches/branch_4x/lucene/core/src/test/org/apache/lucene/s
 earch/TestSort.java   (props changed)
 
 lucene/dev/branches/branch_4x/lucene/core/src/test/org/apache/lucene/s
 earch/TestSortDocValues.java   (props changed)
 
 lucene/dev/branches/branch_4x/lucene/core/src/test/org/apache/lucene/s
 earch/TestSortRandom.java   (props changed)
 
 lucene/dev/branches/branch_4x/lucene/core/src/test/org/apache/lucene/s
 earch/TestTopFieldCollector.java   (props changed)
 
 lucene/dev/branches/branch_4x/lucene/core/src/test/org/apache/lucene/s
 

[jira] [Resolved] (LUCENE-5416) Performance of a FixedBitSet variant that uses Long.numberOfTrailingZeros()

2014-02-05 Thread Paul Elschot (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Elschot resolved LUCENE-5416.
--

Resolution: Done

See measurements and patch with Long.numberOfTrailingZeros at LUCENE-5425.

 Performance of a FixedBitSet variant that uses Long.numberOfTrailingZeros()
 ---

 Key: LUCENE-5416
 URL: https://issues.apache.org/jira/browse/LUCENE-5416
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Affects Versions: 5.0
Reporter: Paul Elschot
Priority: Minor
 Fix For: 5.0


 On my machine the current byte index used in OpenBitSetIterator is slower 
 than Long.numberOfTrailingZeros() for advance().
 The pull request contains the code for benchmarking this taken from an early 
 stage of DocBlocksIterator.
 In case the benchmark shows improvements on more machines, well, we know what 
 to do...
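A minimal sketch of the Long.numberOfTrailingZeros()-based advance that this issue benchmarks (simplified; not the actual Lucene iterator): instead of a byte-index lookup, shift away the consumed bits of the current word and let numberOfTrailingZeros() locate the lowest remaining set bit.

```java
public class NtzBitSet {
    private final long[] bits;

    NtzBitSet(int numBits) { bits = new long[(numBits + 63) >>> 6]; }

    void set(int i) { bits[i >>> 6] |= 1L << i; } // Java shifts take the count mod 64

    // First set bit at index or later, or -1 if none.
    int nextSetBit(int index) {
        int w = index >>> 6;
        if (w >= bits.length) return -1;
        long word = bits[w] >>> index; // drops bits below 'index' in the current word
        if (word != 0) return index + Long.numberOfTrailingZeros(word);
        while (++w < bits.length) {
            if (bits[w] != 0) return (w << 6) + Long.numberOfTrailingZeros(bits[w]);
        }
        return -1;
    }

    public static void main(String[] args) {
        NtzBitSet s = new NtzBitSet(192);
        s.set(3); s.set(70); s.set(130);
        System.out.println(s.nextSetBit(0) + " " + s.nextSetBit(4) + " " + s.nextSetBit(71));
        // prints 3 70 130
    }
}
```

Long.numberOfTrailingZeros() is an intrinsic on common JVMs, which is why it can beat a table-driven byte index for advance().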



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



lucene-solr pull request: Fbs perf 1

2014-02-05 Thread PaulElschot
Github user PaulElschot closed the pull request at:

https://github.com/apache/lucene-solr/pull/22


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5425) Make creation of FixedBitSet in FacetsCollector overridable

2014-02-05 Thread Lei Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892338#comment-13892338
 ] 

Lei Wang commented on LUCENE-5425:
--

Yes! With this patch we can write our own FacetsCollector and customize it.

One small suggestion: createHitSet is marked as protected, but the class 
itself is final, so no subclass can override it other than by creating a new 
FacetsCollector. Can we remove the final modifier from the class and add 
final to the methods we don't want the user to override?
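The suggestion above in sketch form (hypothetical names, not the actual Lucene classes): open the class, seal the methods, and leave only the hit-set factory as the extension point.

```java
import java.util.BitSet;

// Non-final class: subclasses exist only to override the hit-set factory.
class HitCollector {
    // The one extension point; everything else stays final.
    protected BitSet createHitSet(int maxDoc) {
        return new BitSet(maxDoc);
    }

    public final BitSet collectAll(int maxDoc) {
        BitSet hits = createHitSet(maxDoc);
        hits.set(0, maxDoc); // trivial stand-in for real collection
        return hits;
    }
}

// Example override: reuse one bit set across queries to cut per-query garbage.
class ReusingHitCollector extends HitCollector {
    private BitSet cached;

    @Override
    protected BitSet createHitSet(int maxDoc) {
        if (cached == null || cached.size() < maxDoc) cached = new BitSet(maxDoc);
        cached.clear();
        return cached;
    }
}

public class Demo {
    public static void main(String[] args) {
        HitCollector c = new ReusingHitCollector();
        System.out.println(c.collectAll(8).cardinality()); // prints 8
    }
}
```

Making only the factory protected keeps the collector's contract intact while letting callers control allocation.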

 Make creation of FixedBitSet in FacetsCollector overridable
 ---

 Key: LUCENE-5425
 URL: https://issues.apache.org/jira/browse/LUCENE-5425
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/facet
Affects Versions: 4.6
Reporter: John Wang
 Attachments: LUCENE-5425.patch, facetscollector.patch, 
 facetscollector.patch, fixbitset.patch


 In FacetsCollector, the bits in MatchingDocs are allocated per query. For 
 large indexes, where maxDoc is large, creating a bitset of maxDoc bits is 
 expensive and generates a lot of garbage.
 The attached patch makes this allocation customizable while maintaining the 
 current behavior.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5434) NRT support for file systems that do not have delete on last close or cannot delete while referenced semantics.

2014-02-05 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated LUCENE-5434:


Attachment: LUCENE-5434.patch

Adds asserts to the test so that it fails without this change.

 NRT support for file systems that do not have delete on last close or cannot 
 delete while referenced semantics.
 --

 Key: LUCENE-5434
 URL: https://issues.apache.org/jira/browse/LUCENE-5434
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.7

 Attachments: LUCENE-5434.patch, LUCENE-5434.patch


 See SOLR-5693 and our HDFS support: for something like HDFS to work with 
 NRT, we need the ability for near-real-time readers to hold references to 
 their files to prevent deletes.






[jira] [Commented] (LUCENE-5425) Make creation of FixedBitSet in FacetsCollector overridable

2014-02-05 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892363#comment-13892363
 ] 

Shai Erera commented on LUCENE-5425:


bq. The createHitsSet is marked as protected, but the class itself is final

Duh, will remove final from class, thanks for noticing that! :).

I will run some tests and then commit the patch.

 Make creation of FixedBitSet in FacetsCollector overridable
 ---

 Key: LUCENE-5425
 URL: https://issues.apache.org/jira/browse/LUCENE-5425
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/facet
Affects Versions: 4.6
Reporter: John Wang
 Attachments: LUCENE-5425.patch, facetscollector.patch, 
 facetscollector.patch, fixbitset.patch


 In FacetsCollector, the bits in MatchingDocs are allocated per query. 
 For large indexes where maxDoc is large, creating a bitset of maxDoc bits 
 is expensive and creates a lot of garbage.
 The attached patch makes this allocation customizable while maintaining 
 the current behavior.






[jira] [Updated] (LUCENE-5435) CommonTermsQuery should be able to query fields other than the one used as a source of commonness

2014-02-05 Thread Nik Everett (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nik Everett updated LUCENE-5435:


Attachment: LUCENE-5435.patch

 CommonTermsQuery should be able to query fields other than the one used as a 
 source of commonness
 -

 Key: LUCENE-5435
 URL: https://issues.apache.org/jira/browse/LUCENE-5435
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Nik Everett
 Attachments: LUCENE-5435.patch


 It'd be wonderful if I could use the commonness of one field, say the 
 contents of a document, to power a search across both the document and its 
 title.  Continuing the metaphor, I'd like to be able to build a query like this:
 the first
 that is rewritten into: 
 (title:the OR body:the) +(title:first OR body:first)
 with the help of the CommonTermsQuery logic.  Essentially, I'd like 
 CommonTermsQuery to soften the implicit AND for "the" into an OR because it 
 is common.






[jira] [Updated] (LUCENE-5435) CommonTermsQuery should be able to query fields other than the one used as a source of commonness

2014-02-05 Thread Nik Everett (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nik Everett updated LUCENE-5435:


Priority: Minor  (was: Major)

 CommonTermsQuery should be able to query fields other than the one used as a 
 source of commonness
 -

 Key: LUCENE-5435
 URL: https://issues.apache.org/jira/browse/LUCENE-5435
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Nik Everett
Priority: Minor
 Attachments: LUCENE-5435.patch


 It'd be wonderful if I could use the commonness of one field, say the 
 contents of a document, to power a search across both the document and its 
 title.  Continuing the metaphor, I'd like to be able to build a query like this:
 the first
 that is rewritten into: 
 (title:the OR body:the) +(title:first OR body:first)
 with the help of the CommonTermsQuery logic.  Essentially, I'd like 
 CommonTermsQuery to soften the implicit AND for "the" into an OR because it 
 is common.






[jira] [Created] (LUCENE-5435) CommonTermsQuery should be able to query fields other than the one used as a source of commonness

2014-02-05 Thread Nik Everett (JIRA)
Nik Everett created LUCENE-5435:
---

 Summary: CommonTermsQuery should be able to query fields other 
than the one used as a source of commonness
 Key: LUCENE-5435
 URL: https://issues.apache.org/jira/browse/LUCENE-5435
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Nik Everett
 Attachments: LUCENE-5435.patch

It'd be wonderful if I could use the commonness of one field, say the contents 
of a document, to power a search across both the document and its title.  
Continuing the metaphor, I'd like to be able to build a query like this:
the first
that is rewritten into: 
(title:the OR body:the) +(title:first OR body:first)
with the help of the CommonTermsQuery logic.  Essentially, I'd like 
CommonTermsQuery to soften the implicit AND for "the" into an OR because it is 
common.
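
The proposed rewrite can be sketched in plain Java, independent of Lucene's actual CommonTermsQuery API (the class, method, and parameter names below are illustrative only): rare terms become required (+) clauses, common terms become optional OR clauses, and each term is expanded across both fields.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Set;

// Illustrative sketch of the cross-field rewrite described above.
// Names are hypothetical; this is not Lucene's CommonTermsQuery.
public class CrossFieldCommonTerms {
    static String rewrite(List<String> terms, Set<String> commonTerms, List<String> fields) {
        StringBuilder sb = new StringBuilder();
        for (String term : terms) {
            if (sb.length() > 0) sb.append(' ');
            // Rare terms stay required (+); common terms become optional.
            if (!commonTerms.contains(term)) sb.append('+');
            sb.append('(');
            for (int i = 0; i < fields.size(); i++) {
                if (i > 0) sb.append(" OR ");
                sb.append(fields.get(i)).append(':').append(term);
            }
            sb.append(')');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(rewrite(Arrays.asList("the", "first"),
                                   Collections.singleton("the"),
                                   Arrays.asList("title", "body")));
        // prints: (title:the OR body:the) +(title:first OR body:first)
    }
}
```

The commonness decision itself would still come from one designated field's term statistics, which is exactly the separation this issue asks for.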






[jira] [Commented] (LUCENE-5434) NRT support for file systems that do not have delete on last close or cannot delete while referenced semantics.

2014-02-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892385#comment-13892385
 ] 

Mark Miller commented on LUCENE-5434:
-

It seems we can't easily do it on a more general basis because 
IndexFileDeleter.checkpoint will often delete files that are still open.

 NRT support for file systems that do not have delete on last close or cannot 
 delete while referenced semantics.
 --

 Key: LUCENE-5434
 URL: https://issues.apache.org/jira/browse/LUCENE-5434
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.7

 Attachments: LUCENE-5434.patch, LUCENE-5434.patch


 See SOLR-5693 and our HDFS support - for something like HDFS to work with 
 NRT, we need an ability for near realtime readers to hold references to their 
 files to prevent deletes.






[jira] [Commented] (LUCENE-5434) NRT support for file systems that do not have delete on last close or cannot delete while referenced semantics.

2014-02-05 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892393#comment-13892393
 ] 

Robert Muir commented on LUCENE-5434:
-

This makes sense I think, because often tests are not really using NRT but just 
pulling regular DirectoryReaders?

 NRT support for file systems that do not have delete on last close or cannot 
 delete while referenced semantics.
 --

 Key: LUCENE-5434
 URL: https://issues.apache.org/jira/browse/LUCENE-5434
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.7

 Attachments: LUCENE-5434.patch, LUCENE-5434.patch


 See SOLR-5693 and our HDFS support - for something like HDFS to work with 
 NRT, we need an ability for near realtime readers to hold references to their 
 files to prevent deletes.






[jira] [Commented] (LUCENE-5434) NRT support for file systems that do not have delete on last close or cannot delete while referenced semantics.

2014-02-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892397#comment-13892397
 ] 

Mark Miller commented on LUCENE-5434:
-

Yeah - it's fine because with nfs or hdfs, you reserve commit points and if 
files are deleted via merging and you don't have an nrt reader on them, that's 
okay and expected.

 NRT support for file systems that do not have delete on last close or cannot 
 delete while referenced semantics.
 --

 Key: LUCENE-5434
 URL: https://issues.apache.org/jira/browse/LUCENE-5434
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.7

 Attachments: LUCENE-5434.patch, LUCENE-5434.patch


 See SOLR-5693 and our HDFS support - for something like HDFS to work with 
 NRT, we need an ability for near realtime readers to hold references to their 
 files to prevent deletes.






[jira] [Created] (SOLR-5698) exceptionally long terms are silently ignored during indexing

2014-02-05 Thread Hoss Man (JIRA)
Hoss Man created SOLR-5698:
--

 Summary: exceptionally long terms are silently ignored during 
indexing
 Key: SOLR-5698
 URL: https://issues.apache.org/jira/browse/SOLR-5698
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man


As reported on the user list, when a term is greater than 2^15 bytes it is 
silently ignored at indexing time -- no error is given at all.

we should investigate:
* if there is a way to get the lower level lucene code to propagate up an error 
we can return to the user instead of silently ignoring these terms
* if there is no way to generate a low level error:
** is there at least a way to make this limit configurable so it's more obvious 
to users that this limit exists?
** should we make things like StrField do explicit size checking on the terms 
they produce and explicitly throw their own error?






[jira] [Commented] (SOLR-5698) exceptionally long terms are silently ignored during indexing

2014-02-05 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892429#comment-13892429
 ] 

Hoss Man commented on SOLR-5698:


Easy steps to reproduce using the example configs...

{noformat}
hossman@frisbee:~$ perl -le 'print "a,aaa"; print "z," . ("Z" x 32767);' | curl 'http://localhost:8983/solr/update?header=false&fieldnames=name,long_s&rowid=id&commit=true' -H 'Content-Type: application/csv' --data-binary @- 
<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">0</int><int name="QTime">572</int></lst>
</response>
hossman@frisbee:~$ curl 'http://localhost:8983/solr/select?q=*:*&fl=id,name&wt=json&indent=true'
{
  "responseHeader":{
    "status":0,
    "QTime":12,
    "params":{
      "fl":"id,name",
      "indent":"true",
      "q":"*:*",
      "wt":"json"}},
  "response":{"numFound":2,"start":0,"docs":[
      {
        "name":"a",
        "id":"0"},
      {
        "name":"z",
        "id":"1"}]
  }}
hossman@frisbee:~$ curl 'http://localhost:8983/solr/select?q=long_s:*&wt=json&indent=true'
{
  "responseHeader":{
    "status":0,
    "QTime":1,
    "params":{
      "indent":"true",
      "q":"long_s:*",
      "wt":"json"}},
  "response":{"numFound":1,"start":0,"docs":[
      {
        "name":"a",
        "long_s":"aaa",
        "id":"0",
        "_version_":1459225819107819520}]
  }}
{noformat}

 exceptionally long terms are silently ignored during indexing
 -

 Key: SOLR-5698
 URL: https://issues.apache.org/jira/browse/SOLR-5698
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man

 As reported on the user list, when a term is greater than 2^15 bytes it is 
 silently ignored at indexing time -- no error is given at all.
 we should investigate:
 * if there is a way to get the lower level lucene code to propagate up an 
 error we can return to the user instead of silently ignoring these terms
 * if there is no way to generate a low level error:
 ** is there at least a way to make this limit configurable so it's more obvious 
 to users that this limit exists?
 ** should we make things like StrField do explicit size checking on the terms 
 they produce and explicitly throw their own error?






[jira] [Commented] (LUCENE-5425) Make creation of FixedBitSet in FacetsCollector overridable

2014-02-05 Thread Lei Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892435#comment-13892435
 ] 

Lei Wang commented on LUCENE-5425:
--

Thanks!

 Make creation of FixedBitSet in FacetsCollector overridable
 ---

 Key: LUCENE-5425
 URL: https://issues.apache.org/jira/browse/LUCENE-5425
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/facet
Affects Versions: 4.6
Reporter: John Wang
 Attachments: LUCENE-5425.patch, facetscollector.patch, 
 facetscollector.patch, fixbitset.patch


 In FacetsCollector, the bits in MatchingDocs are allocated per query. 
 For large indexes where maxDoc is large, creating a bitset of maxDoc bits 
 is expensive and creates a lot of garbage.
 The attached patch makes this allocation customizable while maintaining 
 the current behavior.






[jira] [Commented] (LUCENE-5434) NRT support for file systems that do not have delete on last close or cannot delete while referenced semantics.

2014-02-05 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892436#comment-13892436
 ] 

Michael McCandless commented on LUCENE-5434:


Patch looks good!

Yes, any test pulling non-NRT readers cannot enable the new 
MDW.assertNoDeleteOpenFile... but if the test is only pulling NRT readers, we 
should enable the assert.  TestNRTReaderWithThreads is a good start.

Hmm, why did you need to change MDW.close?  Actually, why does MDW.close() even 
check noDeleteOpenFile when throwing exc because files are still open...?  
Shouldn't it always throw an exc if there are still open files (or, open 
locks)?  Tests seem to pass when I remove that, at least once :)

 NRT support for file systems that do not have delete on last close or cannot 
 delete while referenced semantics.
 --

 Key: LUCENE-5434
 URL: https://issues.apache.org/jira/browse/LUCENE-5434
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.7

 Attachments: LUCENE-5434.patch, LUCENE-5434.patch


 See SOLR-5693 and our HDFS support - for something like HDFS to work with 
 NRT, we need an ability for near realtime readers to hold references to their 
 files to prevent deletes.






[jira] [Commented] (SOLR-5698) exceptionally long terms are silently ignored during indexing

2014-02-05 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892442#comment-13892442
 ] 

Hoss Man commented on SOLR-5698:


Things i'm confident of...

* the limit is IndexWriter.MAX_TERM_LENGTH
* it is not configurable
* a message is written to the infoStream by DocFieldProcessor when this is 
exceeded...{code}
if (docState.maxTermPrefix != null && docState.infoStream.isEnabled("IW")) {
  docState.infoStream.message("IW", "WARNING: document contains at least " +
      "one immense term (whose UTF8 encoding is longer than the max length " +
      DocumentsWriterPerThread.MAX_TERM_LENGTH_UTF8 + "), all of which were skipped.  " +
      "Please correct the analyzer to not produce such terms.  The prefix of the first " +
      "immense term is: '" + docState.maxTermPrefix + "...'");
  docState.maxTermPrefix = null;
}
{code}

Things i _think_ i understand, but am not certain of...
* by the time DocumentsWriterPerThread sees this problem, and logs this to the 
infoStream, it's already too late to throw an exception up the call stack 
(because it's happening in another thread)

Rough idea only half considered...
* update the tokenstream producers in Solr to explicitly check the terms they 
are about to return and throw an exception if they exceed this length (mention 
using LengthFilter in this error message)
* this wouldn't help if people use their own concrete Analyzer class -- but it 
would solve the problem with things like StrField, or anytime analysis 
factories are used
* we could conceivably wrap any user-configured concrete Analyzer class to do 
this check -- but i'm not sure we should, since it will add cycles and the 
Analyzer might already be well behaved.

thoughts?
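
The explicit check suggested above could look roughly like the following standalone sketch. The 32766 value mirrors DocumentsWriterPerThread.MAX_TERM_LENGTH_UTF8 in 4.x, but the class and method names here are hypothetical, not Solr code:

```java
import java.nio.charset.StandardCharsets;

// Hypothetical sketch: reject terms whose UTF-8 encoding exceeds the indexing
// limit instead of letting them be dropped silently. The limit constant is
// assumed to match DocumentsWriterPerThread.MAX_TERM_LENGTH_UTF8 (4.x).
public class TermLengthCheck {
    static final int MAX_TERM_LENGTH_UTF8 = 32766;

    static void checkTerm(String term) {
        // Length must be measured in encoded bytes, not Java chars.
        int utf8Len = term.getBytes(StandardCharsets.UTF_8).length;
        if (utf8Len > MAX_TERM_LENGTH_UTF8) {
            throw new IllegalArgumentException(
                "term too long (" + utf8Len + " > " + MAX_TERM_LENGTH_UTF8
                + " UTF-8 bytes); consider LengthFilter; prefix: "
                + term.substring(0, Math.min(30, term.length())) + "...");
        }
    }

    public static void main(String[] args) {
        checkTerm("aaa");                            // short term: accepted
        StringBuilder big = new StringBuilder();
        for (int i = 0; i < 32767; i++) big.append('Z');
        try {
            checkTerm(big.toString());               // 32767 bytes: over limit
        } catch (IllegalArgumentException e) {
            System.out.println("rejected");
        }
    }
}
```

Calling this from the tokenstream producers (StrField and friends) would turn today's silent drop into the loud error the issue asks for, without touching user-supplied Analyzers.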

 exceptionally long terms are silently ignored during indexing
 -

 Key: SOLR-5698
 URL: https://issues.apache.org/jira/browse/SOLR-5698
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man

 As reported on the user list, when a term is greater than 2^15 bytes it is 
 silently ignored at indexing time -- no error is given at all.
 we should investigate:
 * if there is a way to get the lower level lucene code to propagate up an 
 error we can return to the user instead of silently ignoring these terms
 * if there is no way to generate a low level error:
 ** is there at least a way to make this limit configurable so it's more obvious 
 to users that this limit exists?
 ** should we make things like StrField do explicit size checking on the terms 
 they produce and explicitly throw their own error?






[jira] [Commented] (SOLR-5698) exceptionally long terms are silently ignored during indexing

2014-02-05 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892460#comment-13892460
 ] 

Michael McCandless commented on SOLR-5698:
--

Actually, I think Lucene should just throw an exc when this happens?  DWPT 
should be the right place (it isn't a different thread)...

Separately, this limit is absurdly large.

 exceptionally long terms are silently ignored during indexing
 -

 Key: SOLR-5698
 URL: https://issues.apache.org/jira/browse/SOLR-5698
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man

 As reported on the user list, when a term is greater than 2^15 bytes it is 
 silently ignored at indexing time -- no error is given at all.
 we should investigate:
 * if there is a way to get the lower level lucene code to propagate up an 
 error we can return to the user instead of silently ignoring these terms
 * if there is no way to generate a low level error:
 ** is there at least a way to make this limit configurable so it's more obvious 
 to users that this limit exists?
 ** should we make things like StrField do explicit size checking on the terms 
 they produce and explicitly throw their own error?






[jira] [Commented] (LUCENE-5425) Make creation of FixedBitSet in FacetsCollector overridable

2014-02-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892481#comment-13892481
 ] 

ASF subversion and git services commented on LUCENE-5425:
-

Commit 1564898 from [~shaie] in branch 'dev/trunk'
[ https://svn.apache.org/r1564898 ]

LUCENE-5425: Make creation of FixedBitSet in FacetsCollector overridable

 Make creation of FixedBitSet in FacetsCollector overridable
 ---

 Key: LUCENE-5425
 URL: https://issues.apache.org/jira/browse/LUCENE-5425
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/facet
Affects Versions: 4.6
Reporter: John Wang
 Attachments: LUCENE-5425.patch, facetscollector.patch, 
 facetscollector.patch, fixbitset.patch


 In FacetsCollector, the bits in MatchingDocs are allocated per query. 
 For large indexes where maxDoc is large, creating a bitset of maxDoc bits 
 is expensive and creates a lot of garbage.
 The attached patch makes this allocation customizable while maintaining 
 the current behavior.






[jira] [Commented] (LUCENE-5425) Make creation of FixedBitSet in FacetsCollector overridable

2014-02-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892494#comment-13892494
 ] 

ASF subversion and git services commented on LUCENE-5425:
-

Commit 1564907 from [~shaie] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1564907 ]

LUCENE-5425: Make creation of FixedBitSet in FacetsCollector overridable

 Make creation of FixedBitSet in FacetsCollector overridable
 ---

 Key: LUCENE-5425
 URL: https://issues.apache.org/jira/browse/LUCENE-5425
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/facet
Affects Versions: 4.6
Reporter: John Wang
 Fix For: 5.0, 4.7

 Attachments: LUCENE-5425.patch, facetscollector.patch, 
 facetscollector.patch, fixbitset.patch


 In FacetsCollector, the bits in MatchingDocs are allocated per query. 
 For large indexes where maxDoc is large, creating a bitset of maxDoc bits 
 is expensive and creates a lot of garbage.
 The attached patch makes this allocation customizable while maintaining 
 the current behavior.






[jira] [Resolved] (LUCENE-5425) Make creation of FixedBitSet in FacetsCollector overridable

2014-02-05 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera resolved LUCENE-5425.


   Resolution: Fixed
Fix Version/s: 4.7
   5.0
 Assignee: Shai Erera

Committed to trunk and 4x. I renamed MutableDocIdSet to Docs as it wasn't a 
DocIdSet anymore.

Thanks John and Lei for your contribution!

 Make creation of FixedBitSet in FacetsCollector overridable
 ---

 Key: LUCENE-5425
 URL: https://issues.apache.org/jira/browse/LUCENE-5425
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/facet
Affects Versions: 4.6
Reporter: John Wang
Assignee: Shai Erera
 Fix For: 5.0, 4.7

 Attachments: LUCENE-5425.patch, facetscollector.patch, 
 facetscollector.patch, fixbitset.patch


 In FacetsCollector, the bits in MatchingDocs are allocated per query. 
 For large indexes where maxDoc is large, creating a bitset of maxDoc bits 
 is expensive and creates a lot of garbage.
 The attached patch makes this allocation customizable while maintaining 
 the current behavior.






[jira] [Commented] (LUCENE-5434) NRT support for file systems that do not have delete on last close or cannot delete while referenced semantics.

2014-02-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892503#comment-13892503
 ] 

Mark Miller commented on LUCENE-5434:
-

bq. Hmm, why did you need to change MDW.close? 

I just mimicked what was happening with noDeleteOpenFile* for windows support. 
Seems we can remove this check for both.

 NRT support for file systems that do not have delete on last close or cannot 
 delete while referenced semantics.
 --

 Key: LUCENE-5434
 URL: https://issues.apache.org/jira/browse/LUCENE-5434
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.7

 Attachments: LUCENE-5434.patch, LUCENE-5434.patch


 See SOLR-5693 and our HDFS support - for something like HDFS to work with 
 NRT, we need an ability for near realtime readers to hold references to their 
 files to prevent deletes.






[jira] [Updated] (LUCENE-5434) NRT support for file systems that do not have delete on last close or cannot delete while referenced semantics.

2014-02-05 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated LUCENE-5434:


Attachment: LUCENE-5434.patch

 NRT support for file systems that do not have delete on last close or cannot 
 delete while referenced semantics.
 --

 Key: LUCENE-5434
 URL: https://issues.apache.org/jira/browse/LUCENE-5434
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.7

 Attachments: LUCENE-5434.patch, LUCENE-5434.patch, LUCENE-5434.patch


 See SOLR-5693 and our HDFS support - for something like HDFS to work with 
 NRT, we need an ability for near realtime readers to hold references to their 
 files to prevent deletes.






[jira] [Commented] (LUCENE-5434) NRT support for file systems that do not have delete on last close or cannot delete while referenced semantics.

2014-02-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892527#comment-13892527
 ] 

Mark Miller commented on LUCENE-5434:
-

One moment, another patch coming.

 NRT support for file systems that do not have delete on last close or cannot 
 delete while referenced semantics.
 --

 Key: LUCENE-5434
 URL: https://issues.apache.org/jira/browse/LUCENE-5434
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.7

 Attachments: LUCENE-5434.patch, LUCENE-5434.patch, LUCENE-5434.patch


 See SOLR-5693 and our HDFS support - for something like HDFS to work with 
 NRT, we need an ability for near realtime readers to hold references to their 
 files to prevent deletes.






[jira] [Updated] (LUCENE-5434) NRT support for file systems that do not have delete on last close or cannot delete while referenced semantics.

2014-02-05 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated LUCENE-5434:


Attachment: LUCENE-5434.patch

 NRT support for file systems that do not have delete on last close or cannot 
 delete while referenced semantics.
 --

 Key: LUCENE-5434
 URL: https://issues.apache.org/jira/browse/LUCENE-5434
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.7

 Attachments: LUCENE-5434.patch, LUCENE-5434.patch, LUCENE-5434.patch, 
 LUCENE-5434.patch


 See SOLR-5693 and our HDFS support - for something like HDFS to work with 
 NRT, we need an ability for near realtime readers to hold references to their 
 files to prevent deletes.






[jira] [Updated] (SOLR-4072) Error message is incorrect for linkconfig in ZkCLI

2014-02-05 Thread Vamsee Yarlagadda (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vamsee Yarlagadda updated SOLR-4072:


Attachment: SOLR-4072.patch

Patch available.

 Error message is incorrect for linkconfig in ZkCLI
 --

 Key: SOLR-4072
 URL: https://issues.apache.org/jira/browse/SOLR-4072
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.0
Reporter: Adam Hahn
Priority: Trivial
 Attachments: SOLR-4072.patch

   Original Estimate: 5m
  Remaining Estimate: 5m

 If you don't include both the collection and confname when doing a 
 linkconfig, it shows you an incorrect error message stating that the CONFDIR 
 is required for linkconfig.  That should be changed to COLLECTION.  The 
 incorrect code is below.
 else if (line.getOptionValue(CMD).equals(LINKCONFIG)) {
   if (!line.hasOption(COLLECTION) || !line.hasOption(CONFNAME)) {
     System.out.println("-" + {color:red}CONFDIR{color} + " and -" + CONFNAME
         + " are required for " + LINKCONFIG);
     System.exit(1);
   }






[jira] [Commented] (SOLR-5698) exceptionally long terms are silently ignored during indexing

2014-02-05 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892571#comment-13892571
 ] 

Hoss Man commented on SOLR-5698:


bq. I think Lucene should just throw an exc when this happens? ... (it isn't a 
different thread)

I wasn't entirely sure about that -- and since it currently logs to the infoStream 
but does *not* throw an exception, i assumed that was because of the threading.

If you think we should convert this to a LUCENE issue and throw a 
RuntimeException i'm all for that.



 exceptionally long terms are silently ignored during indexing
 -

 Key: SOLR-5698
 URL: https://issues.apache.org/jira/browse/SOLR-5698
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man

 As reported on the user list, when a term is greater than 2^15 bytes it is 
 silently ignored at indexing time -- no error is given at all.
 we should investigate:
 * if there is a way to get the lower level lucene code to propagate up an 
 error we can return to the user instead of silently ignoring these terms
 * if there is no way to generate a low level error:
 ** is there at least a way to make this limit configurable so it's more obvious 
 to users that this limit exists?
 ** should we make things like StrField do explicit size checking on the terms 
 they produce and explicitly throw their own error?






[jira] [Commented] (LUCENE-5434) NRT support for file systems that do not have delete on last close or cannot delete while referenced semantics.

2014-02-05 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892582#comment-13892582
 ] 

Michael McCandless commented on LUCENE-5434:


Thanks for fixing that crash in MDW.close.

This looks great (several tests cutover).

I don't think you need both ex.printStackTrace and assert false? Can you 
just throw new AssertionError(...) instead of assert false?
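The suggested pattern can be sketched as follows. This is illustrative test-style code, not the actual patch; the method names are hypothetical:

```java
// Illustrative only: replacing printStackTrace + assert false with a
// thrown AssertionError always fails (even when JVM assertions, -ea,
// are disabled) and carries the original exception as the cause.
public class FailurePatterns {
    // Before: silently does nothing when -ea is off, and the stack
    // trace goes only to stderr.
    static void handleBefore(Exception ex) {
        ex.printStackTrace();
        assert false : "unexpected exception";
    }

    // After: unconditional failure, cause preserved for the test report.
    static void handleAfter(Exception ex) {
        throw new AssertionError("unexpected exception", ex);
    }

    public static void main(String[] args) {
        try {
            handleAfter(new IllegalStateException("boom"));
        } catch (AssertionError e) {
            System.out.println("cause: " + e.getCause().getMessage()); // prints "cause: boom"
        }
    }
}
```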


 NRT support for file systems that do not have delete on last close or cannot 
 delete while referenced semantics.
 --

 Key: LUCENE-5434
 URL: https://issues.apache.org/jira/browse/LUCENE-5434
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.7

 Attachments: LUCENE-5434.patch, LUCENE-5434.patch, LUCENE-5434.patch, 
 LUCENE-5434.patch


 See SOLR-5693 and our HDFS support - for something like HDFS to work with 
 NRT, we need an ability for near realtime readers to hold references to their 
 files to prevent deletes.






[jira] [Reopened] (SOLR-5476) Overseer Role for nodes

2014-02-05 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reopened SOLR-5476:
---


See the common Jenkins test failure - something is off here, with either the 
impl or the test. An Overseer reading the queue can run into another Overseer 
having already removed the node being looked at.

 Overseer Role for nodes
 ---

 Key: SOLR-5476
 URL: https://issues.apache.org/jira/browse/SOLR-5476
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0, 4.7

 Attachments: SOLR-5476.patch, SOLR-5476.patch, SOLR-5476.patch, 
 SOLR-5476.patch, SOLR-5476.patch


 In a very large cluster the Overseer is likely to be overloaded. If the same 
 node is also serving a few other shards, it can lead to the Overseer getting 
 slowed down due to GC pauses, or simply too much work. If the cluster is 
 really large, it is possible to dedicate high-end h/w for Overseers.
 It works as a new collection admin command:
 command=addrole&role=overseer&node=192.168.1.5:8983_solr
 This results in the creation of an entry in /roles.json in ZK which would 
 look like the following:
 {code:javascript}
 {
 "overseer" : ["192.168.1.5:8983_solr"]
 }
 {code}
 If a node is designated for overseer it gets preference over others when 
 overseer election takes place. If no designated servers are available, another 
 random node becomes the Overseer.
 Later on, if one of the designated nodes is brought up, it takes over the 
 Overseer role from the current Overseer.
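The preference rule described above could be sketched roughly like this. This is illustrative Java, not Solr's actual election code; the method and class names are hypothetical:

```java
import java.util.List;

// Sketch of the designated-overseer preference rule: pick the first
// live node that appears in the /roles.json "overseer" list; if none
// is designated (or live), fall back to the first live node.
public class OverseerPreference {
    static String electOverseer(List<String> liveNodes, List<String> designated) {
        for (String node : liveNodes) {
            if (designated.contains(node)) {
                return node; // designated node wins the election
            }
        }
        // No designated server available: any node may become Overseer.
        return liveNodes.isEmpty() ? null : liveNodes.get(0);
    }

    public static void main(String[] args) {
        List<String> live = List.of("n1:8983_solr", "192.168.1.5:8983_solr");
        List<String> roles = List.of("192.168.1.5:8983_solr");
        System.out.println(electOverseer(live, roles));     // prints 192.168.1.5:8983_solr
        System.out.println(electOverseer(live, List.of())); // prints n1:8983_solr
    }
}
```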






[jira] [Commented] (SOLR-5698) exceptionally long terms are silently ignored during indexing

2014-02-05 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892586#comment-13892586
 ] 

Michael McCandless commented on SOLR-5698:
--

bq. If you think we should convert this to a LUCENE issue and throw a 
RuntimeException, I'm all for that.

+1

I don't think this leniency is good: it hides that your app is indexing trash.




[jira] [Comment Edited] (SOLR-5476) Overseer Role for nodes

2014-02-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892587#comment-13892587
 ] 

Mark Miller edited comment on SOLR-5476 at 2/5/14 9:23 PM:
---

See common jenkins test fail - something is off here, with the impl or the 
test. An Overseer reading the queue can run into another Overseer having 
already removed the distrib queue node being looked at.


was (Author: markrmil...@gmail.com):
See common jenkins test fail - something is off here, with the impl or the 
test. An Overseer reading the queue can run into another Overseer having 
already removed the node being looked at.




[jira] [Commented] (SOLR-5365) Bad version of common-compress

2014-02-05 Thread Patrick Uhlmann (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892591#comment-13892591
 ] 

Patrick Uhlmann commented on SOLR-5365:
---

I just tested it with Solr 4.6.1. Same problem. In order to make it 
work you just need to replace commons-compress-1.4.1 in the folder 
contrib/extract with commons-compress-1.7. Neither version 1.4.1 nor 1.5 has 
the method setDecompressConcatenated in the class CompressorStreamFactory.
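One quick way to verify which jar version is actually on the classpath is a reflective probe for the missing method. The sketch below is generic and demonstrated against a JDK class so it runs standalone; the commons-compress probe in the comment is how it would be used in a Solr deployment:

```java
// Generic reflective probe: does the classpath version of a class
// expose a given method? Useful for diagnosing NoSuchMethodError
// caused by a stale jar shadowing the expected one.
public class MethodProbe {
    static boolean hasMethod(String className, String methodName, Class<?>... params) {
        try {
            Class.forName(className).getMethod(methodName, params);
            return true;
        } catch (ClassNotFoundException | NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // In a Solr deployment one would probe, e.g.:
        // hasMethod("org.apache.commons.compress.compressors.CompressorStreamFactory",
        //           "setDecompressConcatenated", boolean.class);
        System.out.println(hasMethod("java.lang.String", "isEmpty"));      // prints true
        System.out.println(hasMethod("java.lang.String", "noSuchMethod")); // prints false
    }
}
```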

 Bad version of common-compress
 --

 Key: SOLR-5365
 URL: https://issues.apache.org/jira/browse/SOLR-5365
 Project: Solr
  Issue Type: Bug
  Components: contrib - Solr Cell (Tika extraction)
Affects Versions: 4.4, 4.5
 Environment: MS Windows 2008 Release 2
Reporter: Roland Everaert

 When a WMZ file is sent to solr on resource /update/extract, the following 
 exception is thrown by solr:
 ERROR - 2013-10-17 18:13:48.902; org.apache.solr.common.SolrException; 
 null:java.lang.RuntimeException: java.lang.NoSuchMethodError: 
 org.apache.commons.compress.compressors.CompressorStreamFactory.setDecompressConcatenated(Z)V
 at 
 org.apache.solr.servlet.SolrDispatchFilter.sendError(SolrDispatchFilter.java:673)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:383)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:158)
 at 
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
 at 
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
 at 
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)
 at 
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)
 at 
 org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
 at 
 org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
 at 
 org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:953)
 at 
 org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
 at 
 org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
 at 
 org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1023)
 at 
 org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589)
 at 
 org.apache.tomcat.util.net.AprEndpoint$SocketProcessor.run(AprEndpoint.java:1852)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)
 Caused by: java.lang.NoSuchMethodError: 
 org.apache.commons.compress.compressors.CompressorStreamFactory.setDecompressConcatenated(Z)V
 at 
 org.apache.tika.parser.pkg.CompressorParser.parse(CompressorParser.java:102)
 at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:242)
 at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:242)
 at 
 org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:120)
 at 
 org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:219)
 at 
 org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at 
 org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:241)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1904)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:659)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:362)
 ... 16 more
 According to Koji Sekiguchi, Tika 1.4, the version bundled with Solr, should 
 use commons-compress-1.5, but version 1.4.1 is present in the 
 solr/contrib/extraction/lib/ directory.
 During our testing, the ignoreTikaException flag was set to true.






[jira] [Commented] (SOLR-5476) Overseer Role for nodes

2014-02-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892595#comment-13892595
 ] 

Mark Miller commented on SOLR-5476:
---

I don't think this is right. Where do you ensure the old Overseer is stopped 
first? You can't just force a new one by deleting the election node and 
running the leader election process.




[jira] [Commented] (SOLR-5476) Overseer Role for nodes

2014-02-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892603#comment-13892603
 ] 

Mark Miller commented on SOLR-5476:
---

Trying to force a leader like that is complicated, and I think there are a 
bunch of holes here. 




[jira] [Commented] (SOLR-5476) Overseer Role for nodes

2014-02-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892644#comment-13892644
 ] 

Mark Miller commented on SOLR-5476:
---

I think the tests are too light as well. You want strong testing around this 
special overseer takeover - there are so many things that could go wrong - you 
need to make sure things continue merrily after several disaster scenarios. And 
it needs to ensure that there is never more than one Overseer running at a 
time.

I think this was committed before it was ready.





[jira] [Commented] (SOLR-5476) Overseer Role for nodes

2014-02-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892653#comment-13892653
 ] 

Mark Miller commented on SOLR-5476:
---

I've worked a little to try to do some of the right things along the current 
approach, but I'm starting to feel I'm -1 on this approach to leader 
prioritization. I think if there were proper testing here, there is no way 
doing things this way would survive it, and I'm not sure I can make this idea 
work.

A better first approach probably involves having prospective leaders give up 
their position in line if they see a better candidate behind them. The trick 
then is how to deal with a node that looks like a better candidate but for 
some reason keeps failing to become leader.
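That step-aside idea could be sketched like this. Again illustrative only, not Solr code; the class, method, and node names are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the alternative approach: a node at the head of the
// election queue steps aside (re-queues at the back) when a preferred
// candidate is waiting behind it, letting the preferred node advance.
public class ElectionQueue {
    static List<String> reorder(List<String> queue, List<String> preferred) {
        List<String> result = new ArrayList<>(queue);
        // Head is not preferred but some later node is: move the head
        // to the back of the line.
        if (!result.isEmpty()
                && !preferred.contains(result.get(0))
                && result.stream().anyMatch(preferred::contains)) {
            result.add(result.remove(0));
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> q = List.of("plain1", "designated", "plain2");
        System.out.println(reorder(q, List.of("designated")));
        // head "plain1" steps aside: [designated, plain2, plain1]
    }
}
```

A real implementation would also need the safeguard mentioned above: a cap on how many times the queue reorders for the same preferred node, so a candidate that repeatedly fails to become leader cannot starve the election.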



