Re: [ANNOUNCE] Apache PyLucene 4.3.0

2013-05-15 Thread Andi Vajda


On Wed, 15 May 2013, Robert Muir wrote:


I got it working, with 2 steps:

1. force staging to rebuild, by making a commit under templates/ that
just added a space inside a comment
2. deploy staging to production, by clicking
https://cms.apache.org/lucene/publish


Oh, I didn't know about step 2. I just svn commit'ted the changes and 
expected it to percolate. Last time I did this, it just worked (tm).


Andi..



Usually #1 is not necessary, but I do it when things don't take.

On Wed, May 15, 2013 at 2:08 PM, Andi Vajda va...@apache.org wrote:


On May 15, 2013, at 10:08, Michael McCandless luc...@mikemccandless.com wrote:


Hmm the web site News section doesn't have 4.3.0?


I checked in the web site change last night. It probably needs some propagation 
time?

Andi..



Mike McCandless

http://blog.mikemccandless.com


On Wed, May 15, 2013 at 11:14 AM, Andi Vajda va...@apache.org wrote:


I am pleased to announce the availability of Apache PyLucene 4.3.0, the
first PyLucene release wrapping the Lucene 4.x API.

Apache PyLucene, a subproject of Apache Lucene, is a Python extension for
accessing Apache Lucene Core. Its goal is to allow you to use Lucene's text
indexing and searching capabilities from Python. It is API compatible with
the latest version of Lucene 4.x Core, 4.3.0.

This release contains a number of bug fixes and improvements. Details can be
found in the changes files:

http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_4_3_0/CHANGES
http://svn.apache.org/repos/asf/lucene/pylucene/trunk/jcc/CHANGES

Apache PyLucene is available from the following download page:
http://www.apache.org/dyn/closer.cgi/lucene/pylucene/pylucene-4.3.0-1-src.tar.gz

When downloading from a mirror site, please remember to verify the downloads
using signatures found on the Apache site:
https://dist.apache.org/repos/dist/release/lucene/pylucene/KEYS

For more information on Apache PyLucene, visit the project home page:
 http://lucene.apache.org/pylucene

Andi..




[jira] [Updated] (SOLR-1913) QParserPlugin plugin for Search Results Filtering Based on Bitwise Operations on Integer Fields

2013-05-15 Thread Christopher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christopher updated SOLR-1913:
--

Attachment: WEB-INF lib.jpg

 QParserPlugin plugin for Search Results Filtering Based on Bitwise Operations 
 on Integer Fields
 ---

 Key: SOLR-1913
 URL: https://issues.apache.org/jira/browse/SOLR-1913
 Project: Solr
  Issue Type: New Feature
  Components: search
Reporter: Israel Ekpo
 Fix For: 4.4

 Attachments: bitwise_filter_plugin.jar, SOLR-1913.bitwise.tar.gz, 
 SOLR-1913-src.tar.gz, solr-bitwise-plugin.jar, WEB-INF lib.jpg

   Original Estimate: 1h
  Remaining Estimate: 1h

 BitwiseQueryParserPlugin is an org.apache.solr.search.QParserPlugin that
 allows users to filter the documents returned from a query by performing
 bitwise operations between a particular integer field in the index and the
 specified value.
 This Solr plugin is based on the BitwiseFilter in LUCENE-2460.
 See https://issues.apache.org/jira/browse/LUCENE-2460 for more details.
 This is the syntax for searching in Solr:
 http://localhost:8983/path/to/solr/select/?q={!bitwise field=fieldname op=OPERATION_NAME source=sourcevalue negate=boolean}remainder of query
 Example:
 http://localhost:8983/solr/bitwise/select/?q={!bitwise field=user_permissions op=AND source=3 negate=true}state:FL
 The field parameter is the name of the integer field.
 The op parameter is the name of the operation; one of {AND, OR, XOR}.
 The source parameter is the specified integer value.
 The negate parameter is an optional boolean indicating whether or not to
 negate the results of the bitwise operation.
 To test out this plugin, simply copy the jar file containing the plugin
 classes into your $SOLR_HOME/lib directory and then add the following to
 your solrconfig.xml file after the dismax request handler:
 <queryParser name="bitwise" class="org.apache.solr.bitwise.BitwiseQueryParserPlugin" basedOn="dismax"/>
 Restart your servlet container.
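As a rough illustration of the parameter semantics above, here is a Python sketch (the helper name `bitwise_match` is hypothetical; it assumes a document matches when the result of the operation equals the source value, which fits the permission-check example but may not be the plugin's exact rule):

```python
def bitwise_match(field_value, op, source, negate=False):
    """Decide whether a document's integer field matches a bitwise filter.

    Assumption (not confirmed by the issue text): the document matches
    when (field_value op source) == source, e.g. AND checks that every
    bit of `source` is set in the stored permissions value.
    """
    ops = {
        "AND": field_value & source,
        "OR": field_value | source,
        "XOR": field_value ^ source,
    }
    matched = ops[op] == source
    # negate=true in the query inverts the result of the operation.
    return not matched if negate else matched
```

Under that assumed rule, `{!bitwise field=user_permissions op=AND source=3 negate=true}` would keep documents whose `user_permissions` field does *not* have both low bits set.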

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-1913) QParserPlugin plugin for Search Results Filtering Based on Bitwise Operations on Integer Fields

2013-05-15 Thread Christopher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13658126#comment-13658126
 ] 

Christopher commented on SOLR-1913:
---

Hi Deepthi Sigireddi, I attached a screenshot of my WEB-INF/lib.




[jira] [Comment Edited] (SOLR-1913) QParserPlugin plugin for Search Results Filtering Based on Bitwise Operations on Integer Fields

2013-05-15 Thread Christopher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13658126#comment-13658126
 ] 

Christopher edited comment on SOLR-1913 at 5/15/13 7:07 AM:


Hi Deepthi Sigireddi, I attached a screenshot of my WEB-INF/lib.
I'll try with a new install of 4.2.1

  was (Author: nekudot):
Hi Deepthi Sigireddi, I attached a screenshot of my WEB-INF/lib
  



[jira] [Commented] (SOLR-1913) QParserPlugin plugin for Search Results Filtering Based on Bitwise Operations on Integer Fields

2013-05-15 Thread Christopher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13658143#comment-13658143
 ] 

Christopher commented on SOLR-1913:
---

It works with Solr 4.2.1!




Re: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0-ea-b86) - Build # 5661 - Still Failing!

2013-05-15 Thread Steve Rowe
Shai, that .sha1 file was not leftover - I just added it.

After you removed it, I now get the following from 'ant 
validate-maven-dependencies':

-
-validate-maven-dependencies:
 [licenses] MISSING sha1 checksum file for: 
/Users/sarowe/.m2/repository/org/eclipse/jetty/orbit/javax.servlet/3.0.0.v201112011016/javax.servlet-3.0.0.v201112011016.jar
 [licenses] Scanned 14 JAR file(s) for licenses (in 0.05s.), 1 error(s).
-

I don't understand why this file is being reported as causing a dirty checkout 
(see quoted error message below) - I looked at the groovy script that does this 
checking, and this particular message is given when a file is either 
unversioned or missing.  Neither condition should be true in this case.
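The kind of check Steve describes can be sketched in Python (hypothetical helper; it scans `svn status` output, where a leading '?' marks an unversioned file and '!' a missing one, the two conditions that make a checkout "dirty"):

```python
def find_offending_files(svn_status_output):
    """Collect paths that would make a checkout 'dirty': unversioned
    ('?') or missing ('!') entries in `svn status` output."""
    offending = []
    for line in svn_status_output.splitlines():
        status, _, path = line.partition(" ")
        if status in ("?", "!"):
            offending.append(path.strip())
    return offending
```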

Uwe, do you understand what's going on here?

Steve

On May 15, 2013, at 1:09 AM, Shai Erera ser...@gmail.com wrote:

 Removed the leftover license file.
 
 Shai
 
 On Wed, May 15, 2013 at 7:24 AM, Policeman Jenkins Server 
 jenk...@thetaphi.de wrote:
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/5661/
 Java: 64bit/jdk1.8.0-ea-b86 -XX:+UseCompressedOops -XX:+UseSerialGC
[…]
 BUILD FAILED
 /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:377: The 
 following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:316: The 
 following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/extra-targets.xml:122: 
 Source checkout is dirty after running tests!!! Offending files:
 * lucene/licenses/javax.servlet-3.0.0.v201112011016.jar.sha1





Re: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0-ea-b86) - Build # 5661 - Still Failing!

2013-05-15 Thread Shai Erera
Steve, I discussed this w/ Robert before I removed the file.

Robert explained to me that jar-checksums removes any license files without a
matching .jar, and then check-svn-working-copy reports a dirty checkout.


Why is this file required by Maven?

At any rate, I don't understand what's going on either. Whatever fixes this,
I'm fine with it.

Shai






Re: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0-ea-b86) - Build # 5661 - Still Failing!

2013-05-15 Thread Steve Rowe
Shai,

As I mentioned in another email thread, the Ant+Ivy setup for the replicator 
module renames remote dependency javax.servlet-3.0.0.v201112011016.jar to local 
filename servlet-api-3.0.jar - I don't know how or why.  As a result, the 
checksum, license and notice files in lucene/licenses/ are named servlet-api-*.

Maven has no such renaming facility, so the dependency name is the same as the 
remote file name, and the checksum checker, for which there is no mapping 
facility, expects the .sha1 file to be the dependency filename with .sha1 
appended.  That's why this file is required, not by Maven, but by the Ant 
build's 'validate-maven-dependencies' target.
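The naming rule described above can be sketched as follows (hypothetical helper name; the point is that the checker appends ".sha1" to the remote dependency's filename verbatim, with no renaming step):

```python
import os

def expected_sha1_path(license_dir, dependency_jar):
    # The checksum checker has no mapping facility: it looks for the
    # remote dependency's filename with ".sha1" appended, unchanged.
    return os.path.join(license_dir, os.path.basename(dependency_jar) + ".sha1")
```

So once Ant+Ivy renames the jar to servlet-api-3.0.jar locally, the checker's expected path no longer matches the files actually present in lucene/licenses/.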

Steve





RE: svn commit: r1482642 - /lucene/dev/branches/lucene_solr_4_3/lucene/core/src/java/org/apache/lucene/util/Constants.java

2013-05-15 Thread Uwe Schindler
Are we sure that this is the right thing? The LUCENE_MAIN_VERSION is used for 
index compatibility and should always be only in X.Y format.

Please revert this!

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de
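Uwe's point — that the index-compatibility constant must stay in X.Y form — can be sketched like this (hypothetical helper; bugfix releases must map to the same main version):

```python
def index_main_version(version):
    # Index compatibility is keyed on "X.Y" only, so a bugfix release
    # like "4.3.1" must map to the same main version as "4.3".
    return ".".join(version.split(".")[:2])
```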

 -Original Message-
 From: hoss...@apache.org [mailto:hoss...@apache.org]
 Sent: Wednesday, May 15, 2013 1:41 AM
 To: comm...@lucene.apache.org
 Subject: svn commit: r1482642 -
 /lucene/dev/branches/lucene_solr_4_3/lucene/core/src/java/org/apache/l
 ucene/util/Constants.java
 
 Author: hossman
 Date: Tue May 14 23:41:23 2013
 New Revision: 1482642
 
 URL: http://svn.apache.org/r1482642
 Log:
 LUCENE-5001: update Constants.LUCENE_MAIN_VERSION for 4.3.1
 
Modified:
    lucene/dev/branches/lucene_solr_4_3/lucene/core/src/java/org/apache/lucene/util/Constants.java

Modified: lucene/dev/branches/lucene_solr_4_3/lucene/core/src/java/org/apache/lucene/util/Constants.java
URL: http://svn.apache.org/viewvc/lucene/dev/branches/lucene_solr_4_3/lucene/core/src/java/org/apache/lucene/util/Constants.java?rev=1482642&r1=1482641&r2=1482642&view=diff
==============================================================================
--- lucene/dev/branches/lucene_solr_4_3/lucene/core/src/java/org/apache/lucene/util/Constants.java (original)
+++ lucene/dev/branches/lucene_solr_4_3/lucene/core/src/java/org/apache/lucene/util/Constants.java Tue May 14 23:41:23 2013
@@ -122,7 +122,7 @@ public final class Constants {
   /**
    * This is the internal Lucene version, recorded into each segment.
    */
-  public static final String LUCENE_MAIN_VERSION = ident("4.3");
+  public static final String LUCENE_MAIN_VERSION = ident("4.3.1");
 
   /**
    * This is the Lucene version for display purposes.






Re: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0-ea-b86) - Build # 5661 - Still Failing!

2013-05-15 Thread Shai Erera
I copied the replicator's ivy dependencies from Solr's. In
replicator/build.xml you can find the copy/renaming thing.
I think that I also tried it without the renaming, but something else broke; I
don't remember what, though.

So the question is, I guess: how does the Maven dependency check pass for
Solr?

Shai






Re: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0-ea-b86) - Build # 5661 - Still Failing!

2013-05-15 Thread Steve Rowe
Looks like the Solr build is just ignoring the servlet jar for the purposes of 
license/checksum validation - from solr/common-build.xml:

-
<target name="-validate-maven-dependencies" depends="-validate-maven-dependencies.init">
  <m2-validate-dependencies pom.xml="${maven.pom.xml}" licenseDirectory="${license.dir}">
    <additional-filters>
      <replaceregex pattern="jetty([^/]+)$" replace="jetty" flags="gi" />
      <replaceregex pattern="slf4j-([^/]+)$" replace="slf4j" flags="gi" />
      <replaceregex pattern="(bcmail|bcprov)-([^/]+)$" replace="\1" flags="gi" />
    </additional-filters>
    <excludes>
      <rsel:or>
        <rsel:name name="**/lucene-*-${maven.version.glob}.jar" handledirsep="true"/>
        <rsel:name name="**/solr-*-${maven.version.glob}.jar" handledirsep="true"/>
        <!-- TODO: figure out what is going on here with servlet-apis -->
        <rsel:name name="**/*servlet*.jar" handledirsep="true"/>
      </rsel:or>
    </excludes>
  </m2-validate-dependencies>
</target>
-
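The exclude patterns in that target amount to a filename test roughly like this sketch (hypothetical helper; glob handling simplified to a substring/regex check):

```python
import os
import re

def is_excluded(jar_path):
    # Mirrors the Ant excludes: Lucene/Solr's own artifacts and any
    # servlet-api jar are skipped during license/checksum validation.
    name = os.path.basename(jar_path)
    if name.endswith(".jar") and "servlet" in name:
        return True
    return re.match(r"(lucene|solr)-.+\.jar$", name) is not None
```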

Steve




Re: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0-ea-b86) - Build # 5661 - Still Failing!

2013-05-15 Thread Shai Erera
I added something like that to lucene/build.xml$check-licenses. Perhaps we
should add the same to validate-maven-dependencies?

Shai


On Wed, May 15, 2013 at 11:33 AM, Steve Rowe sar...@gmail.com wrote:

 Looks like the Solr build is just ignoring the servlet jar for the
 purposes of license/checksum validation - from solr/common-build.xml:

 -
  target name=-validate-maven-dependencies
 depends=-validate-maven-dependencies.init
 m2-validate-dependencies pom.xml=${maven.pom.xml}
 licenseDirectory=${license.dir}
   additional-filters
 replaceregex pattern=jetty([^/]+)$ replace=jetty flags=gi /
 replaceregex pattern=slf4j-([^/]+)$ replace=slf4j flags=gi
 /
 replaceregex pattern=(bcmail|bcprov)-([^/]+)$ replace=\1
 flags=gi /
   /additional-filters
   excludes
 rsel:or
   rsel:name name=**/lucene-*-${maven.version.glob}.jar
 handledirsep=true/
   rsel:name name=**/solr-*-${maven.version.glob}.jar
 handledirsep=true/
   !-- TODO: figure out what is going on here with servlet-apis --
   rsel:name name=**/*servlet*.jar handledirsep=true/
 /rsel:or
   /excludes
 /m2-validate-dependencies
   /target
 -

 Steve

 On May 15, 2013, at 4:25 AM, Shai Erera ser...@gmail.com wrote:

  I copied the replicator's ivy dependencies from Solr's. In
 replicator/build.xml you can find the copy/renaming thing,
  I think that I tried also without it, but something else broke, I don't
 remember what though.
 
  So the question is I guess, how does the maven dependency check passes
 for Solr?
 
  Shai
 
 
  On Wed, May 15, 2013 at 11:08 AM, Steve Rowe sar...@gmail.com wrote:
  Shai,
 
  As I mentioned in another email thread, the Ant+Ivy setup for the
 replicator module renames remote dependency
 javax.servlet-3.0.0.v201112011016.jar to local filename servlet-api-3.0.jar
 - I don't know how or why.  As a result, the checksum, license and notice
 files in lucene/licenses/ are named servlet-api-*.
 
  Maven has no such renaming facility, so the dependency name is the same
 as the remote file name, and the checksum checker, for which there is no
 mapping facility, expects the .sha1 file to be the dependency filename with
 .sha1 appended.  That's why this file is required, not by Maven, but by
 the Ant build's 'validate-maven-dependencies' target.
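
  The expected sidecar naming Steve describes (the dependency filename with .sha1 appended) can be sketched as follows; the helper name is hypothetical and this is only an illustration of the convention, not the actual build task:

```python
import hashlib
from pathlib import Path

def write_sha1_sidecar(jar_path: str) -> str:
    """Compute the SHA-1 of a jar and write it next to the jar as
    '<jarname>.jar.sha1', i.e. the dependency filename with '.sha1'
    appended -- the name the checksum checker looks for."""
    digest = hashlib.sha1(Path(jar_path).read_bytes()).hexdigest()
    Path(jar_path + ".sha1").write_text(digest + "\n")
    return digest
```

  Under this convention a renamed jar (servlet-api-3.0.jar) and the original remote name (javax.servlet-3.0.0.v201112011016.jar) need two different .sha1 files, which is exactly the mismatch discussed here.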
 
  Steve
 
 
  On May 15, 2013, at 3:57 AM, Shai Erera ser...@gmail.com wrote:
 
   Steve, I discussed this w/ Robert before I removed the file.
  
    Robert explained to me that jar-checksums removes any license file
 without a matching .jar, and then check-svn-working-copy reports a
 dirty checkout.
  
  
   Why is this file required by maven?
  
    At any rate, I don't understand what's going on either. Whatever fixes
 this, I'm fine with it.
  
   Shai
  
  
   On Wed, May 15, 2013 at 10:41 AM, Steve Rowe sar...@gmail.com wrote:
   Shai, that .sha1 file was not leftover - I just added it.
  
   After you removed it, I now get the following from 'ant
 validate-maven-dependencies':
  
   -
   -validate-maven-dependencies:
[licenses] MISSING sha1 checksum file for:
 /Users/sarowe/.m2/repository/org/eclipse/jetty/orbit/javax.servlet/3.0.0.v201112011016/javax.servlet-3.0.0.v201112011016.jar
[licenses] Scanned 14 JAR file(s) for licenses (in 0.05s.), 1
 error(s).
   -
  
   I don't understand why this file is being reported as causing a dirty
 checkout (see quoted error message below) - I looked at the groovy script
 that does this checking, and this particular message is given when a file
 is either unversioned or missing.  Neither condition should be true in this
 case.
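
   The unversioned-or-missing check described above amounts to scanning `svn status` output for the two offending status codes. A minimal Python sketch, assuming the standard single-letter `svn status` codes (the actual check is a groovy script in the build):

```python
def offending_files(status_output: str) -> list[str]:
    """Return paths that 'svn status' flags as unversioned ('?') or
    missing ('!') -- the two conditions reported as a dirty checkout."""
    offending = []
    for line in status_output.splitlines():
        if line and line[0] in "?!":
            offending.append(line[1:].strip())
    return offending
```

   A versioned, present file like the committed .sha1 should carry neither code, which is why its appearance in the "Offending files" list is surprising.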
  
   Uwe, do you understand what's going on here?
  
   Steve
  
   On May 15, 2013, at 1:09 AM, Shai Erera ser...@gmail.com wrote:
  
Removed the leftover license file.
   
Shai
   
On Wed, May 15, 2013 at 7:24 AM, Policeman Jenkins Server 
 jenk...@thetaphi.de wrote:
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/5661/
Java: 64bit/jdk1.8.0-ea-b86 -XX:+UseCompressedOops -XX:+UseSerialGC
   […]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:377:
 The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:316:
 The following error occurred while executing this line:
   
 /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/extra-targets.xml:122:
 Source checkout is dirty after running tests!!! Offending files:
* lucene/licenses/javax.servlet-3.0.0.v201112011016.jar.sha1
  
  
  
  
 
 
 
 


 

Re: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0-ea-b86) - Build # 5661 - Still Failing!

2013-05-15 Thread Steve Rowe
Here's lucene/build.xml$check-licenses - nothing in there about the servlet api 
jar?:

-
<target name="check-licenses" depends="compile-tools,resolve,load-custom-tasks"
        description="Validate license stuff.">
  <license-check-macro dir="${basedir}" licensedir="${common.dir}/licenses">
    <additional-filters>
      <replaceregex pattern="jetty([^/]+)$" replace="jetty" flags="gi" />
      <replaceregex pattern="slf4j-([^/]+)$" replace="slf4j" flags="gi" />
      <replaceregex pattern="(bcmail|bcprov)-([^/]+)$" replace="\1" flags="gi" />
    </additional-filters>
  </license-check-macro>
</target>
-

Steve

On May 15, 2013, at 4:58 AM, Shai Erera ser...@gmail.com wrote:

 I added something like that to lucene/build.xml$check-licenses. Perhaps we 
 should add the same to validate-maven-dependencies?
 
 Shai
 
 

Re: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0-ea-b86) - Build # 5661 - Still Failing!

2013-05-15 Thread Shai Erera
Right, I now remember it was for jetty, because there are many jetty jars,
but only one license.
Maybe just add the same exclude from Solr's build to lucene/build.xml?

Shai
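
The filename-to-license mapping that those replaceregex filters perform can be illustrated in Python; the patterns are copied from the build snippets quoted in this thread, while the helper name is hypothetical:

```python
import re

# Same patterns as the <additional-filters> in the quoted build files:
# many jetty-*.jar files collapse to the single "jetty" license name.
FILTERS = [
    (r"jetty([^/]+)$", "jetty"),
    (r"slf4j-([^/]+)$", "slf4j"),
    (r"(bcmail|bcprov)-([^/]+)$", r"\1"),
]

def license_key(jar_name: str) -> str:
    """Map a jar filename to the base name its license files use."""
    for pattern, repl in FILTERS:
        jar_name = re.sub(pattern, repl, jar_name, flags=re.IGNORECASE)
    return jar_name
```

Jars that match no filter keep their own name, so they still need individually named license and checksum files.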


On Wed, May 15, 2013 at 12:04 PM, Steve Rowe sar...@gmail.com wrote:

 Here's lucene/build.xml$check-licenses - nothing in there about the
 servlet api jar?:

 -
 <target name="check-licenses" depends="compile-tools,resolve,load-custom-tasks"
         description="Validate license stuff.">
   <license-check-macro dir="${basedir}" licensedir="${common.dir}/licenses">
     <additional-filters>
       <replaceregex pattern="jetty([^/]+)$" replace="jetty" flags="gi" />
       <replaceregex pattern="slf4j-([^/]+)$" replace="slf4j" flags="gi" />
       <replaceregex pattern="(bcmail|bcprov)-([^/]+)$" replace="\1" flags="gi" />
     </additional-filters>
   </license-check-macro>
 </target>
 -

 Steve


Re: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0-ea-b86) - Build # 5661 - Still Failing!

2013-05-15 Thread Steve Rowe
I'd rather not skip the license/checksum checking - I'll look into fixing the 
Solr issue once the replicator situation has stabilized.

I'm investigating not renaming the jar in the Ant+Ivy build, which should make 
the Maven stuff just work, I hope.

I'll get back to this in a few hours.

Steve

On May 15, 2013, at 5:12 AM, Shai Erera ser...@gmail.com wrote:

 Right, I now remember it was for jetty, because there are many jetty jars, 
 but only one license.
 Maybe just add the same ignore from solr to lucene/build.xml?
 
 Shai
 
 

[jira] [Commented] (SOLR-1913) QParserPlugin plugin for Search Results Filtering Based on Bitwise Operations on Integer Fields

2013-05-15 Thread Christopher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13658198#comment-13658198
 ] 

Christopher commented on SOLR-1913:
---

I don't know if I am using the plugin correctly or if there is a bug, but I get these 
results:
I have a document with this field: <int name="acl">21</int> (00010101)
Case 1: q={!bitwise field=acl op=AND source=2}* => no match (OK) (0010) 
Case 2: q={!bitwise field=acl op=AND source=1}* => match (KO) (0001) 

For the second case, the result would be right for the OR operator, but it is the 
AND operator I am using. Could the two be confused in the 
plugin?

Thank you for your help
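
The arithmetic behind the two cases can be checked with a small sketch. This assumes the plugin treats a non-zero result of the bitwise operation as a match, which the thread does not confirm; the function is purely illustrative:

```python
def bitwise_match(field_value: int, op: str, source: int, negate: bool = False) -> bool:
    """One plausible match rule: a document matches when the bitwise op
    between its field value and the source value is non-zero."""
    result = {"AND": field_value & source,
              "OR":  field_value | source,
              "XOR": field_value ^ source}[op] != 0
    return (not result) if negate else result
```

Under this rule 21 & 2 == 0 (no match) but 21 & 1 == 1 (match), so the reported Case 2 behavior would be consistent with AND semantics rather than evidence of an AND/OR mix-up; whether the plugin actually uses a non-zero test is an assumption here.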

 QParserPlugin plugin for Search Results Filtering Based on Bitwise Operations 
 on Integer Fields
 ---

 Key: SOLR-1913
 URL: https://issues.apache.org/jira/browse/SOLR-1913
 Project: Solr
  Issue Type: New Feature
  Components: search
Reporter: Israel Ekpo
 Fix For: 4.4

 Attachments: bitwise_filter_plugin.jar, SOLR-1913.bitwise.tar.gz, 
 SOLR-1913-src.tar.gz, solr-bitwise-plugin.jar, WEB-INF lib.jpg

   Original Estimate: 1h
  Remaining Estimate: 1h

 BitwiseQueryParserPlugin is an org.apache.solr.search.QParserPlugin that allows 
 users to filter the documents returned from a query
 by performing bitwise operations between a particular integer field in the index
 and the specified value.
 This Solr plugin is based on the BitwiseFilter in LUCENE-2460.
 See https://issues.apache.org/jira/browse/LUCENE-2460 for more details.
 This is the syntax for searching in Solr:
 http://localhost:8983/path/to/solr/select/?q={!bitwise field=fieldname 
 op=OPERATION_NAME source=sourcevalue negate=boolean}remainder of query
 Example:
 http://localhost:8983/solr/bitwise/select/?q={!bitwise field=user_permissions 
 op=AND source=3 negate=true}state:FL
 The negate parameter is optional.
 The field parameter is the name of the integer field.
 The op parameter is the name of the operation; one of {AND, OR, XOR}.
 The source parameter is the specified integer value.
 The negate parameter is a boolean indicating whether or not to negate the 
 results of the bitwise operation.
 To test out this plugin, simply copy the jar file containing the plugin 
 classes into your $SOLR_HOME/lib directory and then
 add the following to your solrconfig.xml file after the dismax request 
 handler:
 <queryParser name="bitwise" class="org.apache.solr.bitwise.BitwiseQueryParserPlugin" basedOn="dismax"/>
 Restart your servlet container.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira













[jira] [Comment Edited] (SOLR-1913) QParserPlugin plugin for Search Results Filtering Based on Bitwise Operations on Integer Fields

2013-05-15 Thread Christopher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13658198#comment-13658198
 ] 

Christopher edited comment on SOLR-1913 at 5/15/13 9:41 AM:


I don't know if I correctly use the plugin or if there is a bug but I get these 
results:

I have a document with this field: int name=acl21/ int (00010101)

Case 1: codeq{!bitwise field=acl op=AND source=2}* = not match (OK) 
(0010)/code

Case 2: q{!bitwise field=acl op=AND source=1}* = match (KO) (0001) 

For the second case, if the operator was OR ok for the result but it is 
opérareur AND I use. Is it that there is no confusion between the two in the 
plugin?

Thank you for your help

  was (Author: nekudot):
I don't know if I am using the plugin correctly or if there is a bug, but I get 
these results:

I have a document with this field: <int name="acl">21</int> (binary 00010101)

Case 1: q={!bitwise field=acl op=AND source=2}* => no match (OK) (0010) 

Case 2: q={!bitwise field=acl op=AND source=1}* => match (KO) (0001) 

For the second case, the result would be fine if the operator were OR, but it is 
the AND operator I am using. Could there be a confusion between the two in the 
plugin?

Thank you for your help
  
 QParserPlugin plugin for Search Results Filtering Based on Bitwise Operations 
 on Integer Fields
 ---

 Key: SOLR-1913
 URL: https://issues.apache.org/jira/browse/SOLR-1913
 Project: Solr
  Issue Type: New Feature
  Components: search
Reporter: Israel Ekpo
 Fix For: 4.4

 Attachments: bitwise_filter_plugin.jar, SOLR-1913.bitwise.tar.gz, 
 SOLR-1913-src.tar.gz, solr-bitwise-plugin.jar, WEB-INF lib.jpg

   Original Estimate: 1h
  Remaining Estimate: 1h

 BitwiseQueryParserPlugin is a org.apache.solr.search.QParserPlugin that 
 allows 
 users to filter the documents returned from a query
 by performing bitwise operations between a particular integer field in the 
 index
 and the specified value.
 This Solr plugin is based on the BitwiseFilter in LUCENE-2460
 See https://issues.apache.org/jira/browse/LUCENE-2460 for more details
 This is the syntax for searching in Solr:
 http://localhost:8983/path/to/solr/select/?q={!bitwise field=fieldname 
 op=OPERATION_NAME source=sourcevalue negate=boolean}remainder of query
 Example :
 http://localhost:8983/solr/bitwise/select/?q={!bitwise field=user_permissions 
 op=AND source=3 negate=true}state:FL
 The negate parameter is optional
 The field parameter is the name of the integer field
 The op parameter is the name of the operation; one of {AND, OR, XOR}
 The source parameter is the specified integer value
 The negate parameter is a boolean indicating whether or not to negate the 
 results of the bitwise operation
 To test out this plugin, simply copy the jar file containing the plugin 
 classes into your $SOLR_HOME/lib directory and then
 add the following to your solrconfig.xml file after the dismax request 
 handler:
 <queryParser name="bitwise" 
 class="org.apache.solr.bitwise.BitwiseQueryParserPlugin" basedOn="dismax" />
 Restart your servlet container.






Re: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0-ea-b86) - Build # 5661 - Still Failing!

2013-05-15 Thread Shai Erera
Fine with me. I don't really care what the jar is called :). I was only trying
to make the build happy.

Shai


On Wed, May 15, 2013 at 12:19 PM, Steve Rowe sar...@gmail.com wrote:

 I'd rather not skip the license/checksum checking - I'll look into fixing
 the Solr issue once the replicator situation has stabilized.

 I'm investigating not renaming the jar in the Ant+Ivy build, which should
 make the Maven stuff just work, I hope.

 I'll get back to this in a few hours.

 Steve

 On May 15, 2013, at 5:12 AM, Shai Erera ser...@gmail.com wrote:

  Right, I now remember it was for jetty, because there are many jetty
 jars, but only one license.
  Maybe just add the same ignore from solr to lucene/build.xml?
 
  Shai
 
 
  On Wed, May 15, 2013 at 12:04 PM, Steve Rowe sar...@gmail.com wrote:
  Here's lucene/build.xml$check-licenses - nothing in there about the
 servlet api jar?:
 
  -
  <target name="check-licenses" depends="compile-tools,resolve,load-custom-tasks"
          description="Validate license stuff.">
    <license-check-macro dir="${basedir}" licensedir="${common.dir}/licenses">
      <additional-filters>
        <replaceregex pattern="jetty([^/]+)$" replace="jetty" flags="gi" />
        <replaceregex pattern="slf4j-([^/]+)$" replace="slf4j" flags="gi" />
        <replaceregex pattern="(bcmail|bcprov)-([^/]+)$" replace="\1" flags="gi" />
      </additional-filters>
    </license-check-macro>
  </target>
  -
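The replaceregex filters in the snippet above map versioned jar file names onto a single license basename. Their effect can be reproduced with plain java.util.regex (the patterns are the ones from the target, using Java's $1 in place of Ant's \1; the sample jar names are made up for illustration):

```java
public class LicenseNameFilters {
    // Apply the same name-normalizing filters as the check-licenses target:
    // any versioned jetty/slf4j/bouncycastle jar collapses to one license basename.
    static String filter(String name) {
        return name
            .replaceAll("(?i)jetty([^/]+)$", "jetty")
            .replaceAll("(?i)slf4j-([^/]+)$", "slf4j")
            .replaceAll("(?i)(bcmail|bcprov)-([^/]+)$", "$1");
    }

    public static void main(String[] args) {
        System.out.println(filter("jetty-continuation-8.1.10.jar")); // prints jetty
        System.out.println(filter("slf4j-api-1.6.6.jar"));           // prints slf4j
        System.out.println(filter("bcprov-jdk15-1.45.jar"));         // prints bcprov
    }
}
```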
 
  Steve
 
  On May 15, 2013, at 4:58 AM, Shai Erera ser...@gmail.com wrote:
 
   I added something like that to lucene/build.xml$check-licenses.
 Perhaps we should add the same to validate-maven-dependencies?
  
   Shai
  
  
   On Wed, May 15, 2013 at 11:33 AM, Steve Rowe sar...@gmail.com wrote:
   Looks like the Solr build is just ignoring the servlet jar for the
 purposes of license/checksum validation - from solr/common-build.xml:
  
   -
   <target name="-validate-maven-dependencies" depends="-validate-maven-dependencies.init">
     <m2-validate-dependencies pom.xml="${maven.pom.xml}" licenseDirectory="${license.dir}">
       <additional-filters>
         <replaceregex pattern="jetty([^/]+)$" replace="jetty" flags="gi" />
         <replaceregex pattern="slf4j-([^/]+)$" replace="slf4j" flags="gi" />
         <replaceregex pattern="(bcmail|bcprov)-([^/]+)$" replace="\1" flags="gi" />
       </additional-filters>
       <excludes>
         <rsel:or>
           <rsel:name name="**/lucene-*-${maven.version.glob}.jar" handledirsep="true"/>
           <rsel:name name="**/solr-*-${maven.version.glob}.jar" handledirsep="true"/>
           <!-- TODO: figure out what is going on here with servlet-apis -->
           <rsel:name name="**/*servlet*.jar" handledirsep="true"/>
         </rsel:or>
       </excludes>
     </m2-validate-dependencies>
   </target>
   -
  
   Steve
  
   On May 15, 2013, at 4:25 AM, Shai Erera ser...@gmail.com wrote:
  
 I copied the replicator's ivy dependencies from Solr's. In
 replicator/build.xml you can find the copy/renaming thing.
 I think I also tried without it, but something else broke - I
 don't remember what, though.
   
 So the question is, I guess: how does the maven dependency check
 pass for Solr?
   
Shai
   
   
On Wed, May 15, 2013 at 11:08 AM, Steve Rowe sar...@gmail.com
 wrote:
Shai,
   
As I mentioned in another email thread, the Ant+Ivy setup for the
 replicator module renames remote dependency
 javax.servlet-3.0.0.v201112011016.jar to local filename servlet-api-3.0.jar
 - I don't know how or why.  As a result, the checksum, license and notice
 files in lucene/licenses/ are named servlet-api-*.
   
Maven has no such renaming facility, so the dependency name is the
 same as the remote file name, and the checksum checker, for which there is
 no mapping facility, expects the .sha1 file to be the dependency filename
 with .sha1 appended.  That's why this file is required, not by Maven, but
 by the Ant build's 'validate-maven-dependencies' target.
   
Steve
   
   
On May 15, 2013, at 3:57 AM, Shai Erera ser...@gmail.com wrote:
   
 Steve, I discussed this w/ Robert before I removed the file.

 Robert explained to me that jar-checksums removes any license files
 without a matching .jar, and that check-svn-working-copy then reports a
 dirty checkout.


 Why is this file required by maven?

 At any rate, I don't understand what's going on either. Whatever
 fixes this, I'm fine with

 Shai


 On Wed, May 15, 2013 at 10:41 AM, Steve Rowe sar...@gmail.com
 wrote:
 Shai, that .sha1 file was not leftover - I just added it.

 After you removed it, I now get the following from 'ant
 validate-maven-dependencies':

 -
 -validate-maven-dependencies:
  [licenses] MISSING sha1 checksum file for:
 /Users/sarowe/.m2/repository/org/eclipse/jetty/orbit/javax.servlet/3.0.0.v201112011016/javax.servlet-3.0.0.v201112011016.jar
  [licenses] Scanned 14 JAR file(s) for licenses (in 0.05s.), 1
 error(s).
 -

 I 

[jira] [Updated] (LUCENE-4901) TestIndexWriterOnJRECrash should work on any JRE vendor via Runtime.halt()

2013-05-15 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-4901:


Attachment: LUCENE-4901.patch

A version with a Runtime.halt() fallback for JVMs on which the Unsafe NPE trick 
doesn't crash the process.

I also fixed the stream pumping from the subprocess; previously it could have hung 
if something was printed to stderr (due to a blocked pipe).
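The pipe deadlock mentioned here is a classic: if the parent never drains a child's stderr, the child blocks once the OS pipe buffer fills. A minimal stdlib sketch of the pumping pattern (this is not the test's actual code; it launches `java -version`, which writes to stderr, assuming a JDK layout where `java.home/bin/java` exists, and drains both streams on separate threads):

```java
import java.io.*;
import java.nio.file.Paths;

public class StreamPumperSketch {
    // Drain a stream fully on its own thread, so the child can never block
    // on a full stdout/stderr pipe while the parent waits on the other stream.
    static Thread pump(InputStream in, StringBuilder sink) {
        Thread t = new Thread(() -> {
            try (BufferedReader r = new BufferedReader(new InputStreamReader(in))) {
                String line;
                while ((line = r.readLine()) != null) {
                    synchronized (sink) { sink.append(line).append('\n'); }
                }
            } catch (IOException ignored) {}
        });
        t.start();
        return t;
    }

    public static void main(String[] args) throws Exception {
        String java = Paths.get(System.getProperty("java.home"), "bin", "java").toString();
        Process p = new ProcessBuilder(java, "-version").start();
        StringBuilder out = new StringBuilder(), err = new StringBuilder();
        Thread t1 = pump(p.getInputStream(), out);
        Thread t2 = pump(p.getErrorStream(), err);
        int exit = p.waitFor();
        t1.join(); t2.join();
        System.out.println("exit=" + exit + ", stderr chars=" + err.length());
    }
}
```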

 TestIndexWriterOnJRECrash should work on any JRE vendor via Runtime.halt()
 --

 Key: LUCENE-4901
 URL: https://issues.apache.org/jira/browse/LUCENE-4901
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/test
 Environment: Red Hat EL 6.3
 IBM Java 1.6.0
 ANT 1.9.0
Reporter: Rodrigo Trujillo
Assignee: Dawid Weiss
Priority: Minor
 Fix For: 5.0, 4.4

 Attachments: LUCENE-4901.patch, LUCENE-4901.patch, 
 test-IBM-java-vendor.patch


 I successfully compiled Lucene 4.2 with IBM.
 Then ran unit tests with the nightly option set to true
 The test case TestIndexWriterOnJRECrash was skipped returning IBM 
 Corporation JRE not supported:
 [junit4:junit4] Suite: org.apache.lucene.index.TestIndexWriterOnJRECrash
 [junit4:junit4] IGNOR/A 0.28s | TestIndexWriterOnJRECrash.testNRTThreads
 [junit4:junit4] Assumption #1: IBM Corporation JRE not supported.
 [junit4:junit4] Completed in 0.68s, 1 test, 1 skipped




[jira] [Resolved] (LUCENE-4901) TestIndexWriterOnJRECrash should work on any JRE vendor via Runtime.halt()

2013-05-15 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss resolved LUCENE-4901.
-

Resolution: Fixed

 TestIndexWriterOnJRECrash should work on any JRE vendor via Runtime.halt()
 --

 Key: LUCENE-4901
 URL: https://issues.apache.org/jira/browse/LUCENE-4901
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/test
 Environment: Red Hat EL 6.3
 IBM Java 1.6.0
 ANT 1.9.0
Reporter: Rodrigo Trujillo
Assignee: Dawid Weiss
Priority: Minor
 Fix For: 5.0, 4.4

 Attachments: LUCENE-4901.patch, LUCENE-4901.patch, 
 test-IBM-java-vendor.patch


 I successfully compiled Lucene 4.2 with IBM.
 Then ran unit tests with the nightly option set to true
 The test case TestIndexWriterOnJRECrash was skipped returning IBM 
 Corporation JRE not supported:
 [junit4:junit4] Suite: org.apache.lucene.index.TestIndexWriterOnJRECrash
 [junit4:junit4] IGNOR/A 0.28s | TestIndexWriterOnJRECrash.testNRTThreads
 [junit4:junit4] Assumption #1: IBM Corporation JRE not supported.
 [junit4:junit4] Completed in 0.68s, 1 test, 1 skipped




[jira] [Updated] (SOLR-4734) Can not create collection via collections API on empty solr

2013-05-15 Thread Alexander Eibner (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Eibner updated SOLR-4734:
---

Affects Version/s: 4.3

 Can not create collection via collections API on empty solr
 ---

 Key: SOLR-4734
 URL: https://issues.apache.org/jira/browse/SOLR-4734
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.3, 4.2.1
 Environment: Linux 64bit on 3.2.0-33-generic kernel
 Solr: 4.2.1
 ZooKeeper: 3.4.5
 Tomcat 7.0.27 
Reporter: Alexander Eibner
 Attachments: config-logs.zip


 The following setup and steps always lead to the same error:
 app01: ZooKeeper
 app02: ZooKeeper, Solr (in Tomcat)
 app03: ZooKeeper, Solr (in Tomcat) 
 *) Start ZooKeeper as ensemble on all machines.
 *) Start tomcat on app02/app03
 {code:javascript|title=clusterstate.json}
 null
 cZxid = 0x10014
 ctime = Thu Apr 18 10:59:24 CEST 2013
 mZxid = 0x10014
 mtime = Thu Apr 18 10:59:24 CEST 2013
 pZxid = 0x10014
 cversion = 0
 dataVersion = 0
 aclVersion = 0
 ephemeralOwner = 0x0
 dataLength = 0
 numChildren = 0
 {code}
 *) Upload the configuration (on app02) for the collection via the following 
 command:
 {noformat}
 zkcli.sh -cmd upconfig --zkhost app01:4181,app02:4181,app03:4181 
 --confdir config/solr/storage/conf/ --confname storage-conf 
 {noformat}
 *) Linking the configuration (on app02) via the following command:
 {noformat}
 zkcli.sh -cmd linkconfig --collection storage --confname storage-conf 
 --zkhost app01:4181,app02:4181,app03:4181
 {noformat}
 *) Create Collection via: 
 {noformat}
 http://app02/solr/admin/collections?action=CREATE&name=storage&numShards=1&replicationFactor=2&collection.configName=storage-conf
 {noformat}
 {code:javascript|title=clusterstate.json}
 {"storage":{
   "shards":{"shard1":{
     "range":"8000-7fff",
     "state":"active",
     "replicas":{
       "app02:9985_solr_storage_shard1_replica2":{
         "shard":"shard1",
         "state":"down",
         "core":"storage_shard1_replica2",
         "collection":"storage",
         "node_name":"app02:9985_solr",
         "base_url":"http://app02:9985/solr"},
       "app03:9985_solr_storage_shard1_replica1":{
         "shard":"shard1",
         "state":"down",
         "core":"storage_shard1_replica1",
         "collection":"storage",
         "node_name":"app03:9985_solr",
         "base_url":"http://app03:9985/solr"}}}},
   "router":"compositeId"}}
 cZxid = 0x10014
 ctime = Thu Apr 18 10:59:24 CEST 2013
 mZxid = 0x10047
 mtime = Thu Apr 18 11:04:06 CEST 2013
 pZxid = 0x10014
 cversion = 0
 dataVersion = 2
 aclVersion = 0
 ephemeralOwner = 0x0
 dataLength = 847
 numChildren = 0
 {code}
 This creates the replicas of the shard on app02 and app03, but neither of 
 them is marked as leader; both are marked as DOWN.
 Afterwards I can not access the collection.
 In the browser I get:
 {noformat}
 SEVERE: org.apache.solr.common.SolrException: no servers hosting shard:
 {noformat}
 The following stacktrace in the logs:
 {code}
 Apr 18, 2013 11:04:05 AM org.apache.solr.common.SolrException log
 SEVERE: org.apache.solr.common.SolrException: Error CREATEing SolrCore 
 'storage_shard1_replica2': 
 at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:483)
 at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:140)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:591)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:192)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:141)
 at 
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
 at 
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
 at 
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:225)
 at 
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:169)
 at 
 org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168)
 at 
 org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98)
 at 
 org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
 at 
 org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
 at 
 org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:999)
 at 
 

Re: New query parser?

2013-05-15 Thread Roman Chyla
Hi Jan,

Thanks for the thumbs up!


On Tue, May 14, 2013 at 11:14 AM, Jan Høydahl jan@cominvent.com wrote:

 Hello :)

 I think it has been the intention of the dev community for a long time to
 start using the flex parser framework, and in this regard this contribution
 is much welcome as a kickstarter for that.
 I have not looked much at the code, but I hope it could be a starting
 point for writing future parsers in a less spaghetti way.

 One question. Say we want to add a new operator such as NEAR/N. Ideally
 this should be added in Lucene, then all the Solr QParsers extending the
 lucene flex parser would benefit from the same new operator. Would this be
 easily achieved with your code you think? We also have a ton of



Adding a new operator is very simple at the syntax level -- i.e. when I want
the NEAR/x operator, I just change the ANTLR grammar, which produces the
appropriate abstract syntax tree. The flex parser then consumes this.

Yet, imagine the following query

dog NEAR/5 cat

if you are using synonyms, an analyzer could have expanded dog with
synonyms, it becomes something like

(dog | canin) NEAR/5 cat

and since Lucene cannot handle these queries, the flex builder must rewrite
them, effectively producing

SpanNear(SpanOr(dog | canin), SpanTerm(cat), 5)

but you could also argue, that a better way to handle this query is:

SpanNear(dog, cat, 5) OR SpanNear(canin, cat, 5)

If that is the case, then a different builder will have to be used -

Just an example where the syntax is relatively simple but the semantics are the
hard part. But I believe the flex parser gives all the necessary tools to deal
with that and avoid the spaghetti problem
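The two rewrite strategies Roman describes can be modeled without Lucene: treat the queries as small trees and compare the "OR inside NEAR" form against the distributed "NEAR inside OR" form. The method names below are illustrative, not the flex parser's API:

```java
public class NearRewriteSketch {
    // Strategy 1: keep the synonym expansion inside the proximity query.
    static String orInsideNear(String[] synonyms, String other, int slop) {
        return "SpanNear(SpanOr(" + String.join(", ", synonyms)
                + "), SpanTerm(" + other + "), " + slop + ")";
    }

    // Strategy 2: distribute the proximity query over each synonym.
    static String nearInsideOr(String[] synonyms, String other, int slop) {
        StringBuilder sb = new StringBuilder();
        for (String s : synonyms) {
            if (sb.length() > 0) sb.append(" OR ");
            sb.append("SpanNear(").append(s).append(", ")
              .append(other).append(", ").append(slop).append(")");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String[] dogSyns = {"dog", "canin"};
        System.out.println(orInsideNear(dogSyns, "cat", 5)); // SpanNear(SpanOr(dog, canin), SpanTerm(cat), 5)
        System.out.println(nearInsideOr(dogSyns, "cat", 5)); // SpanNear(dog, cat, 5) OR SpanNear(canin, cat, 5)
    }
}
```

Which form scores or performs better is exactly the kind of decision that belongs in a pluggable builder rather than in the grammar.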


--roman



 feature requests on the eDisMax parser for new kinds of query syntax
 support. Before we start implementing that on top of the
 already-hard-to-maintain eDismax code, we should think about
 re-implementing eDismax on top of flex, perhaps on top of Roman's contrib
 here?


btw: I am using edismax in one of my grammars -- i.e. users can type: query
AND edismax(foo OR (dog AND cat)) -- and the edismax(...) part will be parsed
by edismax. But I hit problems there as well: it is not doing such a
nice job with operators, and of course it doesn't know how to handle
multi-token synonym expansion. I think it could be nicely extracted
into a flex processor and effectively become a plugin for a Solr parser
(now it is a parser of its own, which makes it hard to extend)






  --
 Jan Høydahl, search solution architect
 Cominvent AS - www.cominvent.com

 14. mai 2013 kl. 17:07 skrev Roman Chyla roman.ch...@gmail.com:

 Hello World!

 Following the recommended practice I'd like to let you know that I am
 about to start porting our existing query parser into JIRA with the aim of
 making it available to Lucene/SOLR community.

 The query parser is built on top of the flexible query parser, but it
 separates the parsing (ANTLR) and the query building - it allows for very
 sophisticated custom logic and has self-introspecting methods, so one can
 actually 'see' what is going on - I have had lots of FUN working with it
 (which I consider to be a feature, not a shameless plug ;)).

 Some write up is here:
 http://29min.wordpress.com/category/antlrqueryparser/

 You can see the source code at:

 https://github.com/romanchyla/montysolr/tree/master/contrib/antlrqueryparser


 If you think this project is duplicating something or even being useless
 (I hope not!) please let me know, stop me, say something...

 Thank you!

   roman





[JENKINS-MAVEN] Lucene-Solr-Maven-4.x #327: POMs out of sync

2013-05-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-4.x/327/

No tests ran.

Build Log:
[...truncated 11447 lines...]




Re: svn commit: r1482642 - /lucene/dev/branches/lucene_solr_4_3/lucene/core/src/java/org/apache/lucene/util/Constants.java

2013-05-15 Thread Robert Muir
On Wed, May 15, 2013 at 4:23 AM, Uwe Schindler u...@thetaphi.de wrote:
 Are we sure that this is the right thing? The LUCENE_MAIN_VERSION is used for 
 index compatibility and should always be only in X.Y format.

 Please revert this!


Uwe is correct: please only adjust build.xml here or whatever, but
don't change this!




[jira] [Commented] (LUCENE-3422) IndeIndexWriter.optimize() throws FileNotFoundException and IOException

2013-05-15 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13658277#comment-13658277
 ] 

Michael McCandless commented on LUCENE-3422:


If you are keeping an IndexWriter open across multiple changes, then you should 
not see a new segment (set of _N.* files) after each entity save, i.e. 
IndexWriter instead should be buffering 2MB worth of changes and only then 
flushing a new segment.

When a merge completes, it's only after updating the SegmentInfos that it goes 
and deletes files from the directory.

Is it possible you accidentally have two IndexWriters open on the same 
directory?  Are you changing the LockFactory used by the Directory?  (The purpose 
of locking is to prevent two IndexWriters on one directory.)

 IndeIndexWriter.optimize() throws FileNotFoundException and IOException
 ---

 Key: LUCENE-3422
 URL: https://issues.apache.org/jira/browse/LUCENE-3422
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Elizabeth Nisha

 I am using lucene 3.0.2 search APIs for my application. 
 Indexed data is about 350MB and time taken for indexing is 25 hrs. Search 
 indexing and Optimization runs in two different threads. Optimization runs 
 for every 1 hour and it doesn't run while indexing is going on and vice 
 versa. When optimization is going on using IndexWriter.optimize(), 
 FileNotFoundException and IOException are seen in my log and the index file 
 is getting corrupted, log says
 1. java.io.IOException: No sub-file with id _5r8.fdt found 
 [The file name in this message changes over time (_5r8.fdt, _6fa.fdt, 
 _6uh.fdt, ..., _emv.fdt) ]
 2. java.io.FileNotFoundException: 
 /local/groups/necim/index_5.3/index/_bdx.cfs (No such file or directory)  
 3. java.io.FileNotFoundException: 
 /local/groups/necim/index_5.3/index/_hkq.cfs (No such file or directory)
   Stack trace: java.io.IOException: background merge hit exception: 
 _hkp:c100-_hkp _hkq:c100-_hkp _hkr:c100-_hkr _hks:c100-_hkr _hxb:c5500 
 _hx5:c1000 _hxc:c198
 84 into _hxd [optimize] [mergeDocStores]
at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:2359)
at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:2298)
at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:2268)
at com.telelogic.cs.search.SearchIndex.doOptimize(SearchIndex.java:130)
at 
 com.telelogic.cs.search.SearchIndexerThread$1.run(SearchIndexerThread.java:337)
at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
 Caused by: java.io.FileNotFoundException: 
 /local/groups/necim/index_5.3/index/_hkq.cfs (No such file or directory)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:212)
at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput$Descriptor.<init>(SimpleFSDirectory.java:76)
at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput.<init>(SimpleFSDirectory.java:97)
at org.apache.lucene.store.NIOFSDirectory$NIOFSIndexInput.<init>(NIOFSDirectory.java:87)
at org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:67)
at org.apache.lucene.index.CompoundFileReader.<init>(CompoundFileReader.java:67)
at org.apache.lucene.index.SegmentReader$CoreReaders.<init>(SegmentReader.java:114)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:590)
at 
 org.apache.lucene.index.IndexWriter$ReaderPool.get(IndexWriter.java:616)
at 
 org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4309)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3965)
at 
 org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:231)
at 
 org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:288)
  




[jira] [Commented] (LUCENE-3422) IndeIndexWriter.optimize() throws FileNotFoundException and IOException

2013-05-15 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13658280#comment-13658280
 ] 

Robert Muir commented on LUCENE-3422:
-

Is this on an NFS filesystem?

 IndeIndexWriter.optimize() throws FileNotFoundException and IOException
 ---

 Key: LUCENE-3422
 URL: https://issues.apache.org/jira/browse/LUCENE-3422
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Elizabeth Nisha

 I am using lucene 3.0.2 search APIs for my application. 
 Indexed data is about 350MB and time taken for indexing is 25 hrs. Search 
 indexing and Optimization runs in two different threads. Optimization runs 
 for every 1 hour and it doesn't run while indexing is going on and vice 
 versa. When optimization is going on using IndexWriter.optimize(), 
 FileNotFoundException and IOException are seen in my log and the index file 
 is getting corrupted, log says
 1. java.io.IOException: No sub-file with id _5r8.fdt found 
 [The file name in this message changes over time (_5r8.fdt, _6fa.fdt, 
 _6uh.fdt, ..., _emv.fdt) ]
 2. java.io.FileNotFoundException: 
 /local/groups/necim/index_5.3/index/_bdx.cfs (No such file or directory)  
 3. java.io.FileNotFoundException: 
 /local/groups/necim/index_5.3/index/_hkq.cfs (No such file or directory)
   Stack trace: java.io.IOException: background merge hit exception: 
 _hkp:c100-_hkp _hkq:c100-_hkp _hkr:c100-_hkr _hks:c100-_hkr _hxb:c5500 
 _hx5:c1000 _hxc:c198
 84 into _hxd [optimize] [mergeDocStores]
at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:2359)
at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:2298)
at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:2268)
at com.telelogic.cs.search.SearchIndex.doOptimize(SearchIndex.java:130)
at 
 com.telelogic.cs.search.SearchIndexerThread$1.run(SearchIndexerThread.java:337)
at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
 Caused by: java.io.FileNotFoundException: 
 /local/groups/necim/index_5.3/index/_hkq.cfs (No such file or directory)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:212)
at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput$Descriptor.<init>(SimpleFSDirectory.java:76)
at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput.<init>(SimpleFSDirectory.java:97)
at org.apache.lucene.store.NIOFSDirectory$NIOFSIndexInput.<init>(NIOFSDirectory.java:87)
at org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:67)
at org.apache.lucene.index.CompoundFileReader.<init>(CompoundFileReader.java:67)
at org.apache.lucene.index.SegmentReader$CoreReaders.<init>(SegmentReader.java:114)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:590)
at 
 org.apache.lucene.index.IndexWriter$ReaderPool.get(IndexWriter.java:616)
at 
 org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4309)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3965)
at 
 org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:231)
at 
 org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:288)
  




Re: svn commit: r1482642 - /lucene/dev/branches/lucene_solr_4_3/lucene/core/src/java/org/apache/lucene/util/Constants.java

2013-05-15 Thread Robert Muir
I double checked (also 3.6.1, 3.6.2, and 4.2.1 releases): hossman's
commit is correct actually.

This is the one used for back compat (it's tested with the version
comparator), but it's just marking which version of Lucene wrote the
segment. The variable name should really be changed; MAIN says
nothing.

This one is the more important point though: another reason why it's not
triggered by the version sysprop from build.xml is that our comparator cannot
deal with -SNAPSHOT or any maven suffixes or any of that
horseshit. It needs real version numbers.
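The constraint described above can be illustrated with a minimal comparator sketch. This is a hypothetical illustration, not Lucene's actual implementation: a dotted-numeric comparator parses each segment with Integer.parseInt, so any maven-style suffix such as "-SNAPSHOT" fails outright.

```java
import java.util.Arrays;

// Minimal sketch of a dotted-numeric version comparator of the kind
// described above. Hypothetical code, not Lucene's implementation:
// it only accepts plain X.Y or X.Y.Z strings.
public class VersionCompareSketch {
    static int[] parse(String v) {
        // Integer.parseInt throws NumberFormatException on "3-SNAPSHOT",
        // which is the failure mode the mail refers to.
        return Arrays.stream(v.split("\\.")).mapToInt(Integer::parseInt).toArray();
    }

    static int compare(String a, String b) {
        int[] pa = parse(a), pb = parse(b);
        for (int i = 0; i < Math.max(pa.length, pb.length); i++) {
            int x = i < pa.length ? pa[i] : 0;
            int y = i < pb.length ? pb[i] : 0;
            if (x != y) return Integer.compare(x, y);
        }
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(compare("4.3", "4.2.1") > 0);  // newer wins
        System.out.println(compare("4.3.0", "4.3") == 0); // missing segments count as 0
        // compare("4.3-SNAPSHOT", "4.3") would throw NumberFormatException
    }
}
```

This is why only real X.Y version numbers can be fed to the back-compat check.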

On Wed, May 15, 2013 at 7:57 AM, Robert Muir rcm...@gmail.com wrote:
 On Wed, May 15, 2013 at 4:23 AM, Uwe Schindler u...@thetaphi.de wrote:
 Are we sure that this is the right thing? The LUCENE_MAIN_VERSION is used 
 for index compatibility and should always be only in X.Y format.

 Please revert this!


 Uwe is correct: please only adjust build.xml here or whatever, but
 don't change this!




[jira] [Commented] (LUCENE-4583) StraightBytesDocValuesField fails if bytes > 32k

2013-05-15 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13658287#comment-13658287
 ] 

Robert Muir commented on LUCENE-4583:
-

You convinced me, don't worry, you convinced me: we shouldn't do anything on this 
whole issue at all, because the stuff you outlined here is absolutely the wrong 
path for us to be going down.

 StraightBytesDocValuesField fails if bytes > 32k
 

 Key: LUCENE-4583
 URL: https://issues.apache.org/jira/browse/LUCENE-4583
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Affects Versions: 4.0, 4.1, 5.0
Reporter: David Smiley
Priority: Critical
 Fix For: 4.4

 Attachments: LUCENE-4583.patch, LUCENE-4583.patch, LUCENE-4583.patch, 
 LUCENE-4583.patch, LUCENE-4583.patch


 I didn't observe any limitations on the size of a bytes-based DocValues field 
 value in the docs. It appears that the limit is 32k, although I didn't get 
 any friendly error telling me that was the limit. 32k is kind of small IMO; 
 I suspect this limit is unintended and as such is a bug. The following 
 test fails:
 {code:java}
   public void testBigDocValue() throws IOException {
     Directory dir = newDirectory();
     IndexWriter writer = new IndexWriter(dir, writerConfig(false));
     Document doc = new Document();
     BytesRef bytes = new BytesRef((4 + 4) * 4097); // 4096 works
     bytes.length = bytes.bytes.length; // byte data doesn't matter
     doc.add(new StraightBytesDocValuesField("dvField", bytes));
     writer.addDocument(doc);
     writer.commit();
     writer.close();
     DirectoryReader reader = DirectoryReader.open(dir);
     DocValues docValues = MultiDocValues.getDocValues(reader, "dvField");
     // FAILS IF BYTES IS BIG!
     docValues.getSource().getBytes(0, bytes);
     reader.close();
     dir.close();
   }
 {code}




[jira] [Comment Edited] (SOLR-4744) Version conflict error during shard split test

2013-05-15 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13657460#comment-13657460
 ] 

Yonik Seeley edited comment on SOLR-4744 at 5/15/13 12:26 PM:
--

Nice job tracking that down.

bq. Replicate to sub shard leader synchronously (before local update)

This seems like the right fix (I had thought it was this way already).  This 
should include returning failure to the client of course.


  was (Author: ysee...@gmail.com):
Nice job tracking that down.

bq. Replicate to sub shard leader synchronously (before local update)

This seems like the right fix (I had thought it was this way already).
  
 Version conflict error during shard split test
 --

 Key: SOLR-4744
 URL: https://issues.apache.org/jira/browse/SOLR-4744
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 4.4


 ShardSplitTest fails sometimes with the following error:
 {code}
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.861; 
 org.apache.solr.cloud.Overseer$ClusterStateUpdater; Update shard state 
 invoked for collection: collection1
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.861; 
 org.apache.solr.cloud.Overseer$ClusterStateUpdater; Update shard state shard1 
 to inactive
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.861; 
 org.apache.solr.cloud.Overseer$ClusterStateUpdater; Update shard state 
 shard1_0 to active
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.861; 
 org.apache.solr.cloud.Overseer$ClusterStateUpdater; Update shard state 
 shard1_1 to active
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.873; 
 org.apache.solr.update.processor.LogUpdateProcessor; [collection1] webapp= 
 path=/update params={wt=javabin&version=2} {add=[169 (1432319507166134272)]} 
 0 2
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.877; 
 org.apache.solr.common.cloud.ZkStateReader$2; A cluster state change: 
 WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 5)
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.877; 
 org.apache.solr.common.cloud.ZkStateReader$2; A cluster state change: 
 WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 5)
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.877; 
 org.apache.solr.common.cloud.ZkStateReader$2; A cluster state change: 
 WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 5)
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.877; 
 org.apache.solr.common.cloud.ZkStateReader$2; A cluster state change: 
 WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 5)
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.877; 
 org.apache.solr.common.cloud.ZkStateReader$2; A cluster state change: 
 WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 5)
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.877; 
 org.apache.solr.common.cloud.ZkStateReader$2; A cluster state change: 
 WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 5)
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.884; 
 org.apache.solr.update.processor.LogUpdateProcessor; 
 [collection1_shard1_1_replica1] webapp= path=/update 
 params={distrib.from=http://127.0.0.1:41028/collection1/&update.distrib=FROMLEADER&wt=javabin&distrib.from.parent=shard1&version=2}
  {} 0 1
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.885; 
 org.apache.solr.update.processor.LogUpdateProcessor; [collection1] webapp= 
 path=/update 
 params={distrib.from=http://127.0.0.1:41028/collection1/&update.distrib=FROMLEADER&wt=javabin&distrib.from.parent=shard1&version=2}
  {add=[169 (1432319507173474304)]} 0 2
 [junit4:junit4]   1 ERROR - 2013-04-14 19:05:26.885; 
 org.apache.solr.common.SolrException; shard update error StdNode: 
 http://127.0.0.1:41028/collection1_shard1_1_replica1/:org.apache.solr.common.SolrException:
  version conflict for 169 expected=1432319507173474304 actual=-1
 [junit4:junit4]   1  at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:404)
 [junit4:junit4]   1  at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:181)
 [junit4:junit4]   1  at 
 org.apache.solr.update.SolrCmdDistributor$1.call(SolrCmdDistributor.java:332)
 [junit4:junit4]   1  at 
 

[jira] [Created] (SOLR-4823) Split LBHttpLoadBalancer into two classes one for the solrj use case and one for the solr cloud use case

2013-05-15 Thread philip hoy (JIRA)
philip hoy created SOLR-4823:


 Summary: Split LBHttpLoadBalancer into two classes one for the 
solrj use case and one for the solr cloud use case
 Key: SOLR-4823
 URL: https://issues.apache.org/jira/browse/SOLR-4823
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: philip hoy
Priority: Minor


The LBHttpSolrServer has too many responsibilities. It could perhaps be broken 
into two classes: one in solrj, to be used in place of an external load 
balancer, balancing across a known set of Solr servers defined at 
construction time; and one in solr core, to be used by the SolrCloud components, 
balancing across servers dependent on the request.

To save code duplication, if much arises, an abstract base class could be 
introduced into solrj.




[jira] [Commented] (SOLR-4816) Change CloudSolrServer to send updates to the correct shard

2013-05-15 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13658306#comment-13658306
 ] 

Joel Bernstein commented on SOLR-4816:
--

Hoss, sounds good, I'll work this into the next patch.

Stephen, sure go ahead and post the changes. I'm going to make some changes to 
the patch shortly and I'll work with your changes as well.

 Change CloudSolrServer to send updates to the correct shard
 ---

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch, SOLR-4816.patch


 This issue adds a directUpdate method to CloudSolrServer which routes 
 update requests to the correct shard. This would be a nice feature to have to 
 eliminate the document routing overhead on the Solr servers.
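The routing idea in this description can be sketched roughly as follows. Everything in the sketch is illustrative only: the shard names, the two-shard layout, and the use of String.hashCode() as a stand-in for the MurmurHash3-based compositeId router that Solr actually uses.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of client-side document routing: each shard owns a
// contiguous slice of the 32-bit hash space (mirroring the "range" entries
// in clusterstate.json), and the document id is hashed to pick the shard.
public class ShardRouterSketch {
    static final Map<String, int[]> RANGES = new LinkedHashMap<>();
    static {
        RANGES.put("shard1", new int[]{Integer.MIN_VALUE, -1});
        RANGES.put("shard2", new int[]{0, Integer.MAX_VALUE});
    }

    static String route(String docId) {
        int hash = docId.hashCode(); // stand-in for MurmurHash3 in real Solr
        for (Map.Entry<String, int[]> e : RANGES.entrySet()) {
            if (hash >= e.getValue()[0] && hash <= e.getValue()[1]) {
                return e.getKey();
            }
        }
        throw new IllegalStateException("ranges must cover the full hash space");
    }
}
```

Sending the update straight to the leader of route(id) is what removes the extra forwarding hop on the Solr servers.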




[jira] [Commented] (SOLR-4618) Integrate LucidWorks' Solr Reference Guide with Solr documentation

2013-05-15 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13658311#comment-13658311
 ] 

Alexandre Rafalovitch commented on SOLR-4618:
-

Just some thoughts:
# The Apache version of the software is a full release behind the latest. Are they not 
planning to move, and/or to use Solr as a chance to test the latest version?
# Comments are not a terribly big thing on the Solr Wiki. Most of them are more 
like calls to action/bug reports. Maybe integrating an external (e.g. Disqus) 
module is sufficient?
# Is the goal to merge content or to replace it? I think the wikis overlap, but I 
am not sure about one being a superset of the other.

 Integrate LucidWorks' Solr Reference Guide with Solr documentation
 --

 Key: SOLR-4618
 URL: https://issues.apache.org/jira/browse/SOLR-4618
 Project: Solr
  Issue Type: Improvement
  Components: documentation
Affects Versions: 4.1
Reporter: Cassandra Targett
Assignee: Hoss Man
 Attachments: NewSolrStyle.css, SolrRefGuide4.1-ASF.zip


 LucidWorks would like to donate the Apache Solr Reference Guide, maintained 
 by LucidWorks tech writers, to the Solr community. It was first produced in 
 2009 as a download-only PDF for Solr 1.4, but since 2011 it has been online 
 at http://docs.lucidworks.com/display/solr/ and updated for Solr 3.x releases 
 and for Solr 4.0 and 4.1.
 I've prepared an XML export from our Confluence installation, which can be 
 easily imported into the Apache Confluence installation by someone with 
 system admin rights. The doc has not yet been updated for 4.2, so it covers 
 Solr 4.1 so far. I'll add some additional technical notes about the export 
 itself in a comment. 
 Since we use Confluence at LucidWorks, I can also offer assistance getting 
 Confluence set up, importing this package into it, or any other help needed 
 for the community to start using this. 




[jira] [Updated] (SOLR-4823) Split LBHttpSolrServer into two classes one for the solrj use case and one for the solr cloud use case

2013-05-15 Thread philip hoy (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

philip hoy updated SOLR-4823:
-

Summary: Split LBHttpSolrServer into two classes one for the solrj use case 
and one for the solr cloud use case  (was: Split LBHttpLoadBalancer into two 
classes one for the solrj use case and one for the solr cloud use case)

 Split LBHttpSolrServer into two classes one for the solrj use case and one 
 for the solr cloud use case
 --

 Key: SOLR-4823
 URL: https://issues.apache.org/jira/browse/SOLR-4823
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: philip hoy
Priority: Minor

 The LBHttpSolrServer has too many responsibilities. It could perhaps be 
 broken into two classes: one in solrj, to be used in place of an external 
 load balancer, balancing across a known set of Solr servers defined at 
 construction time; and one in solr core, to be used by the SolrCloud 
 components, balancing across servers dependent on the request.
 To save code duplication, if much arises, an abstract base class could be 
 introduced into solrj.




[jira] [Assigned] (SOLR-4734) Can not create collection via collections API on empty solr

2013-05-15 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reassigned SOLR-4734:
-

Assignee: Mark Miller

 Can not create collection via collections API on empty solr
 ---

 Key: SOLR-4734
 URL: https://issues.apache.org/jira/browse/SOLR-4734
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.3, 4.2.1
 Environment: Linux 64bit on 3.2.0-33-generic kernel
 Solr: 4.2.1
 ZooKeeper: 3.4.5
 Tomcat 7.0.27 
Reporter: Alexander Eibner
Assignee: Mark Miller
 Attachments: config-logs.zip


 The following setup and steps always lead to the same error:
 app01: ZooKeeper
 app02: ZooKeeper, Solr (in Tomcat)
 app03: ZooKeeper, Solr (in Tomcat) 
 *) Start ZooKeeper as ensemble on all machines.
 *) Start tomcat on app02/app03
 {code:javascript|title=clusterstate.json}
 null
 cZxid = 0x10014
 ctime = Thu Apr 18 10:59:24 CEST 2013
 mZxid = 0x10014
 mtime = Thu Apr 18 10:59:24 CEST 2013
 pZxid = 0x10014
 cversion = 0
 dataVersion = 0
 aclVersion = 0
 ephemeralOwner = 0x0
 dataLength = 0
 numChildren = 0
 {code}
 *) Upload the configuration (on app02) for the collection via the following 
 command:
 {noformat}
 zkcli.sh -cmd upconfig --zkhost app01:4181,app02:4181,app03:4181 
 --confdir config/solr/storage/conf/ --confname storage-conf 
 {noformat}
 *) Linking the configuration (on app02) via the following command:
 {noformat}
 zkcli.sh -cmd linkconfig --collection storage --confname storage-conf 
 --zkhost app01:4181,app02:4181,app03:4181
 {noformat}
 *) Create Collection via: 
 {noformat}
 http://app02/solr/admin/collections?action=CREATE&name=storage&numShards=1&replicationFactor=2&collection.configName=storage-conf
 {noformat}
 {code:javascript|title=clusterstate.json}
 {"storage":{
     "shards":{"shard1":{
         "range":"8000-7fff",
         "state":"active",
         "replicas":{
           "app02:9985_solr_storage_shard1_replica2":{
             "shard":"shard1",
             "state":"down",
             "core":"storage_shard1_replica2",
             "collection":"storage",
             "node_name":"app02:9985_solr",
             "base_url":"http://app02:9985/solr"},
           "app03:9985_solr_storage_shard1_replica1":{
             "shard":"shard1",
             "state":"down",
             "core":"storage_shard1_replica1",
             "collection":"storage",
             "node_name":"app03:9985_solr",
             "base_url":"http://app03:9985/solr",
             "router":"compositeId"}}}}}}
 cZxid = 0x10014
 ctime = Thu Apr 18 10:59:24 CEST 2013
 mZxid = 0x10047
 mtime = Thu Apr 18 11:04:06 CEST 2013
 pZxid = 0x10014
 cversion = 0
 dataVersion = 2
 aclVersion = 0
 ephemeralOwner = 0x0
 dataLength = 847
 numChildren = 0
 {code}
 This creates the replicas of the shard on app02 and app03, but neither of 
 them is marked as leader; both are marked as DOWN.
 And afterwards I cannot access the collection.
 In the browser I get:
 {noformat}
 SEVERE: org.apache.solr.common.SolrException: no servers hosting shard:
 {noformat}
 The following stacktrace in the logs:
 {code}
 Apr 18, 2013 11:04:05 AM org.apache.solr.common.SolrException log
 SEVERE: org.apache.solr.common.SolrException: Error CREATEing SolrCore 
 'storage_shard1_replica2': 
 at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:483)
 at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:140)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:591)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:192)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:141)
 at 
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
 at 
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
 at 
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:225)
 at 
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:169)
 at 
 org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168)
 at 
 org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98)
 at 
 org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
 at 
 org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
 at 
 org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:999)
 at 
 

[jira] [Commented] (SOLR-4734) Can not create collection via collections API on empty solr

2013-05-15 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13658327#comment-13658327
 ] 

Mark Miller commented on SOLR-4734:
---

Could use a better error message - it's not finding an update log. Looking at the 
config in your zip file, you have updateLog defined in the wrong place - it needs 
to be inside updateHandler.
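The placement described above looks roughly like this (a sketch only; the handler class and dir value shown are the stock defaults, adjust to your own config):

```xml
<!-- updateLog must sit inside updateHandler, not at the top level of
     solrconfig.xml; values here are illustrative defaults. -->
<updateHandler class="solr.DirectUpdateHandler2">
  <updateLog>
    <str name="dir">${solr.ulog.dir:}</str>
  </updateLog>
</updateHandler>
```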

 Can not create collection via collections API on empty solr
 ---

 Key: SOLR-4734
 URL: https://issues.apache.org/jira/browse/SOLR-4734
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.3, 4.2.1
 Environment: Linux 64bit on 3.2.0-33-generic kernel
 Solr: 4.2.1
 ZooKeeper: 3.4.5
 Tomcat 7.0.27 
Reporter: Alexander Eibner
 Attachments: config-logs.zip


 The following setup and steps always lead to the same error:
 app01: ZooKeeper
 app02: ZooKeeper, Solr (in Tomcat)
 app03: ZooKeeper, Solr (in Tomcat) 
 *) Start ZooKeeper as ensemble on all machines.
 *) Start tomcat on app02/app03
 {code:javascript|title=clusterstate.json}
 null
 cZxid = 0x10014
 ctime = Thu Apr 18 10:59:24 CEST 2013
 mZxid = 0x10014
 mtime = Thu Apr 18 10:59:24 CEST 2013
 pZxid = 0x10014
 cversion = 0
 dataVersion = 0
 aclVersion = 0
 ephemeralOwner = 0x0
 dataLength = 0
 numChildren = 0
 {code}
 *) Upload the configuration (on app02) for the collection via the following 
 command:
 {noformat}
 zkcli.sh -cmd upconfig --zkhost app01:4181,app02:4181,app03:4181 
 --confdir config/solr/storage/conf/ --confname storage-conf 
 {noformat}
 *) Linking the configuration (on app02) via the following command:
 {noformat}
 zkcli.sh -cmd linkconfig --collection storage --confname storage-conf 
 --zkhost app01:4181,app02:4181,app03:4181
 {noformat}
 *) Create Collection via: 
 {noformat}
 http://app02/solr/admin/collections?action=CREATE&name=storage&numShards=1&replicationFactor=2&collection.configName=storage-conf
 {noformat}
 {code:javascript|title=clusterstate.json}
 {"storage":{
     "shards":{"shard1":{
         "range":"8000-7fff",
         "state":"active",
         "replicas":{
           "app02:9985_solr_storage_shard1_replica2":{
             "shard":"shard1",
             "state":"down",
             "core":"storage_shard1_replica2",
             "collection":"storage",
             "node_name":"app02:9985_solr",
             "base_url":"http://app02:9985/solr"},
           "app03:9985_solr_storage_shard1_replica1":{
             "shard":"shard1",
             "state":"down",
             "core":"storage_shard1_replica1",
             "collection":"storage",
             "node_name":"app03:9985_solr",
             "base_url":"http://app03:9985/solr",
             "router":"compositeId"}}}}}}
 cZxid = 0x10014
 ctime = Thu Apr 18 10:59:24 CEST 2013
 mZxid = 0x10047
 mtime = Thu Apr 18 11:04:06 CEST 2013
 pZxid = 0x10014
 cversion = 0
 dataVersion = 2
 aclVersion = 0
 ephemeralOwner = 0x0
 dataLength = 847
 numChildren = 0
 {code}
 This creates the replicas of the shard on app02 and app03, but neither of 
 them is marked as leader; both are marked as DOWN.
 And afterwards I cannot access the collection.
 In the browser I get:
 {noformat}
 SEVERE: org.apache.solr.common.SolrException: no servers hosting shard:
 {noformat}
 The following stacktrace in the logs:
 {code}
 Apr 18, 2013 11:04:05 AM org.apache.solr.common.SolrException log
 SEVERE: org.apache.solr.common.SolrException: Error CREATEing SolrCore 
 'storage_shard1_replica2': 
 at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:483)
 at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:140)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:591)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:192)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:141)
 at 
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
 at 
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
 at 
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:225)
 at 
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:169)
 at 
 org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168)
 at 
 org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98)
 at 
 org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
 at 
 

[jira] [Updated] (SOLR-4734) Can not create collection via collections API on empty solr

2013-05-15 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-4734:
--

Fix Version/s: 4.4
   5.0

 Can not create collection via collections API on empty solr
 ---

 Key: SOLR-4734
 URL: https://issues.apache.org/jira/browse/SOLR-4734
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.3, 4.2.1
 Environment: Linux 64bit on 3.2.0-33-generic kernel
 Solr: 4.2.1
 ZooKeeper: 3.4.5
 Tomcat 7.0.27 
Reporter: Alexander Eibner
Assignee: Mark Miller
 Fix For: 5.0, 4.4

 Attachments: config-logs.zip


 The following setup and steps always lead to the same error:
 app01: ZooKeeper
 app02: ZooKeeper, Solr (in Tomcat)
 app03: ZooKeeper, Solr (in Tomcat) 
 *) Start ZooKeeper as ensemble on all machines.
 *) Start tomcat on app02/app03
 {code:javascript|title=clusterstate.json}
 null
 cZxid = 0x10014
 ctime = Thu Apr 18 10:59:24 CEST 2013
 mZxid = 0x10014
 mtime = Thu Apr 18 10:59:24 CEST 2013
 pZxid = 0x10014
 cversion = 0
 dataVersion = 0
 aclVersion = 0
 ephemeralOwner = 0x0
 dataLength = 0
 numChildren = 0
 {code}
 *) Upload the configuration (on app02) for the collection via the following 
 command:
 {noformat}
 zkcli.sh -cmd upconfig --zkhost app01:4181,app02:4181,app03:4181 
 --confdir config/solr/storage/conf/ --confname storage-conf 
 {noformat}
 *) Linking the configuration (on app02) via the following command:
 {noformat}
 zkcli.sh -cmd linkconfig --collection storage --confname storage-conf 
 --zkhost app01:4181,app02:4181,app03:4181
 {noformat}
 *) Create Collection via: 
 {noformat}
 http://app02/solr/admin/collections?action=CREATE&name=storage&numShards=1&replicationFactor=2&collection.configName=storage-conf
 {noformat}
 {code:javascript|title=clusterstate.json}
 {"storage":{
     "shards":{"shard1":{
         "range":"8000-7fff",
         "state":"active",
         "replicas":{
           "app02:9985_solr_storage_shard1_replica2":{
             "shard":"shard1",
             "state":"down",
             "core":"storage_shard1_replica2",
             "collection":"storage",
             "node_name":"app02:9985_solr",
             "base_url":"http://app02:9985/solr"},
           "app03:9985_solr_storage_shard1_replica1":{
             "shard":"shard1",
             "state":"down",
             "core":"storage_shard1_replica1",
             "collection":"storage",
             "node_name":"app03:9985_solr",
             "base_url":"http://app03:9985/solr",
             "router":"compositeId"}}}}}}
 cZxid = 0x10014
 ctime = Thu Apr 18 10:59:24 CEST 2013
 mZxid = 0x10047
 mtime = Thu Apr 18 11:04:06 CEST 2013
 pZxid = 0x10014
 cversion = 0
 dataVersion = 2
 aclVersion = 0
 ephemeralOwner = 0x0
 dataLength = 847
 numChildren = 0
 {code}
 This creates the replicas of the shard on app02 and app03, but neither of 
 them is marked as leader; both are marked as DOWN.
 And afterwards I cannot access the collection.
 In the browser I get:
 {noformat}
 SEVERE: org.apache.solr.common.SolrException: no servers hosting shard:
 {noformat}
 The following stacktrace in the logs:
 {code}
 Apr 18, 2013 11:04:05 AM org.apache.solr.common.SolrException log
 SEVERE: org.apache.solr.common.SolrException: Error CREATEing SolrCore 
 'storage_shard1_replica2': 
 at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:483)
 at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:140)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:591)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:192)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:141)
 at 
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
 at 
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
 at 
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:225)
 at 
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:169)
 at 
 org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168)
 at 
 org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98)
 at 
 org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
 at 
 org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
 at 
 org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:999)
 

[jira] [Commented] (SOLR-1913) QParserPlugin plugin for Search Results Filtering Based on Bitwise Operations on Integer Fields

2013-05-15 Thread Deepthi Sigireddi (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13658348#comment-13658348
 ] 

Deepthi Sigireddi commented on SOLR-1913:
-

Christopher,
I'm sorry, but I don't understand the issue.
Case 2: q={!bitwise field=acl op=AND source=1} - should match and is matching? So what is 
the issue?

 QParserPlugin plugin for Search Results Filtering Based on Bitwise Operations 
 on Integer Fields
 ---

 Key: SOLR-1913
 URL: https://issues.apache.org/jira/browse/SOLR-1913
 Project: Solr
  Issue Type: New Feature
  Components: search
Reporter: Israel Ekpo
 Fix For: 4.4

 Attachments: bitwise_filter_plugin.jar, SOLR-1913.bitwise.tar.gz, 
 SOLR-1913-src.tar.gz, solr-bitwise-plugin.jar, WEB-INF lib.jpg

   Original Estimate: 1h
  Remaining Estimate: 1h

 BitwiseQueryParserPlugin is a org.apache.solr.search.QParserPlugin that 
 allows 
 users to filter the documents returned from a query
 by performing bitwise operations between a particular integer field in the 
 index
 and the specified value.
 This Solr plugin is based on the BitwiseFilter in LUCENE-2460
 See https://issues.apache.org/jira/browse/LUCENE-2460 for more details
 This is the syntax for searching in Solr:
 http://localhost:8983/path/to/solr/select/?q={!bitwise field=fieldname 
 op=OPERATION_NAME source=sourcevalue negate=boolean}remainder of query
 Example :
 http://localhost:8983/solr/bitwise/select/?q={!bitwise field=user_permissions 
 op=AND source=3 negate=true}state:FL
 The negate parameter is optional
 The field parameter is the name of the integer field
 The op parameter is the name of the operation; one of {AND, OR, XOR}
 The source parameter is the specified integer value
 The negate parameter is a boolean indicating whether or not to negate the 
 results of the bitwise operation
 To test out this plugin, simply copy the jar file containing the plugin 
 classes into your $SOLR_HOME/lib directory and then
 add the following to your solrconfig.xml file after the dismax request 
 handler:
 <queryParser name="bitwise" 
 class="org.apache.solr.bitwise.BitwiseQueryParserPlugin" basedOn="dismax"/>
 Restart your servlet container.
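The parameters above boil down to a simple per-document predicate. The exact match rule of LUCENE-2460 isn't restated in this thread, so the sketch below assumes one plausible reading (a document matches when the bitwise op between the field value and the source yields a non-zero result, flipped by negate):

```java
// Hypothetical sketch of the per-document predicate the field/op/source/
// negate parameters describe. The "non-zero result" match condition is an
// assumption, not taken from the plugin's source.
public class BitwiseMatchSketch {
    public static boolean matches(int fieldValue, String op, int source, boolean negate) {
        int result;
        switch (op) {
            case "AND": result = fieldValue & source; break;
            case "OR":  result = fieldValue | source; break;
            case "XOR": result = fieldValue ^ source; break;
            default: throw new IllegalArgumentException("unknown op: " + op);
        }
        boolean match = result != 0;
        return negate ? !match : match;
    }
}
```

Under this reading, op=AND source=3 against a user_permissions field keeps documents that have either of the two low permission bits set, and negate=true inverts that selection.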




[jira] [Updated] (SOLR-4816) Change CloudSolrServer to send updates to the correct shard

2013-05-15 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-4816:
-

Attachment: SOLR-4816.patch

No longer going to the core url directly with updates and instead sending to 
the baseUrl+"/"+collection. Also passing through the original params to each 
request. Added the getter/setter for the defaultId. 

Stephen, let me know if this addresses the bugs you found.

 Change CloudSolrServer to send updates to the correct shard
 ---

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch


 This issue adds a directUpdate method to CloudSolrServer which routes 
 update requests to the correct shard. This would be a nice feature to have to 
 eliminate the document routing overhead on the Solr servers.




[jira] [Comment Edited] (SOLR-4816) Change CloudSolrServer to send updates to the correct shard

2013-05-15 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13658360#comment-13658360
 ] 

Joel Bernstein edited comment on SOLR-4816 at 5/15/13 1:40 PM:
---

No longer going to the core URL directly with updates and instead sending to 
the baseUrl+"/"+collection. Also passing through the original params to each 
request. Added the getter/setter for the defaultId. 

Stephen, let me know if this addresses the bugs you found.

  was (Author: joel.bernstein):
No longer going to the core url directly with updates and instead sending 
to the baseUrl+/+collectio. Also passing through the orignal params to each 
request. Added the getter/setter for the defaultId. 

Stephen, let me know if this addressed the bugs you found.
  
 Change CloudSolrServer to send updates to the correct shard
 ---

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch


 This issue adds a directUpdate method to CloudSolrServer which routes 
 update requests to the correct shard. This would be a nice feature to have to 
 eliminate the document routing overhead on the Solr servers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [jira] [Created] (SOLR-4822) SOLR for SharePoint Indexing and search

2013-05-15 Thread Jack Krupansky

This is not a bug in Solr!

Please ask this type of question on the Solr user email list first. I'm sure 
people will be happy to answer it there.


-- Jack Krupansky

-Original Message- 
From: jaya shankar (JIRA)

Sent: Wednesday, May 15, 2013 12:59 AM
To: dev@lucene.apache.org
Subject: [jira] [Created] (SOLR-4822) SOLR for SharePoint Indexing and 
search


jaya shankar created SOLR-4822:
--

Summary: SOLR for SharePoint Indexing and search
Key: SOLR-4822
URL: https://issues.apache.org/jira/browse/SOLR-4822
Project: Solr
 Issue Type: Bug
 Components: SearchComponents - other
   Reporter: jaya shankar


Hello Team,

I am using Solr for some of my RDBMS and filesystem indexing and search, but 
I am not sure how I can use Solr for SharePoint indexing. I feel this is one 
of the major areas to improve. Can anyone help me with how I can achieve this task?


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA 
administrators

For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org 



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4816) Change CloudSolrServer to send updates to the correct shard

2013-05-15 Thread Stephen Riesenberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Riesenberg updated SOLR-4816:
-

Attachment: SOLR-4816-sriesenberg.patch

First attempt. This patch fixes issues with the directUpdate() method and adds 
defaultIdField. A next attempt would integrate it better into the request() 
method. This is still untested, as I'm going to be deploying it today for load 
testing.

 Change CloudSolrServer to send updates to the correct shard
 ---

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816-sriesenberg.patch


 This issue adds a directUpdate method to CloudSolrServer which routes 
 update requests to the correct shard. This would be a nice feature to have to 
 eliminate the document routing overhead on the Solr servers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4816) Change CloudSolrServer to send updates to the correct shard

2013-05-15 Thread Stephen Riesenberg (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13658364#comment-13658364
 ] 

Stephen Riesenberg commented on SOLR-4816:
--

Oops, I missed your latest. I'll check it out.

 Change CloudSolrServer to send updates to the correct shard
 ---

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816-sriesenberg.patch


 This issue adds a directUpdate method to CloudSolrServer which routes 
 update requests to the correct shard. This would be a nice feature to have to 
 eliminate the document routing overhead on the Solr servers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-4816) Change CloudSolrServer to send updates to the correct shard

2013-05-15 Thread Stephen Riesenberg (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13658364#comment-13658364
 ] 

Stephen Riesenberg edited comment on SOLR-4816 at 5/15/13 1:55 PM:
---

Oops, I missed your latest. I'll check it out.

*Edit*: The main issue is an NPE when using params (params = new ...), and a 
missing use of the defaultCollection in params.get().

  was (Author: sriesenberg):
Oops, I missed your latest. I'll check it out.
  
 Change CloudSolrServer to send updates to the correct shard
 ---

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816-sriesenberg.patch


 This issue adds a directUpdate method to CloudSolrServer which routes 
 update requests to the correct shard. This would be a nice feature to have to 
 eliminate the document routing overhead on the Solr servers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4816) Change CloudSolrServer to send updates to the correct shard

2013-05-15 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13658386#comment-13658386
 ] 

Joel Bernstein commented on SOLR-4816:
--

Stephen, I see what you did with the defaultCollection. I'll work that into the 
patch. I made a few other changes so I'll use the patch I'm working with.

 Change CloudSolrServer to send updates to the correct shard
 ---

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816-sriesenberg.patch


 This issue adds a directUpdate method to CloudSolrServer which routes 
 update requests to the correct shard. This would be a nice feature to have to 
 eliminate the document routing overhead on the Solr servers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4816) Change CloudSolrServer to send updates to the correct shard

2013-05-15 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-4816:
-

Attachment: SOLR-4816.patch

Added Stephen's code fixing an NPE when params and/or collection is not set.


 Change CloudSolrServer to send updates to the correct shard
 ---

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816-sriesenberg.patch


 This issue adds a directUpdate method to CloudSolrServer which routes 
 update requests to the correct shard. This would be a nice feature to have to 
 eliminate the document routing overhead on the Solr servers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-4816) Change CloudSolrServer to send updates to the correct shard

2013-05-15 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13658392#comment-13658392
 ] 

Joel Bernstein edited comment on SOLR-4816 at 5/15/13 2:21 PM:
---

Added Stephen's code fixing NPE when params and/or collection is not set.


  was (Author: joel.bernstein):
Added Stephen's code fixing NPE with when params and/or collection is not 
set.

  
 Change CloudSolrServer to send updates to the correct shard
 ---

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816-sriesenberg.patch


 This issue adds a directUpdate method to CloudSolrServer which routes 
 update requests to the correct shard. This would be a nice feature to have to 
 eliminate the document routing overhead on the Solr servers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-1913) QParserPlugin plugin for Search Results Filtering Based on Bitwise Operations on Integer Fields

2013-05-15 Thread Christopher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13658421#comment-13658421
 ] 

Christopher commented on SOLR-1913:
---

Sorry, I made a mistake, the plugin works correctly! Thank you so much!

 QParserPlugin plugin for Search Results Filtering Based on Bitwise Operations 
 on Integer Fields
 ---

 Key: SOLR-1913
 URL: https://issues.apache.org/jira/browse/SOLR-1913
 Project: Solr
  Issue Type: New Feature
  Components: search
Reporter: Israel Ekpo
 Fix For: 4.4

 Attachments: bitwise_filter_plugin.jar, SOLR-1913.bitwise.tar.gz, 
 SOLR-1913-src.tar.gz, solr-bitwise-plugin.jar, WEB-INF lib.jpg

   Original Estimate: 1h
  Remaining Estimate: 1h

 BitwiseQueryParserPlugin is a org.apache.solr.search.QParserPlugin that 
 allows 
 users to filter the documents returned from a query
 by performing bitwise operations between a particular integer field in the 
 index
 and the specified value.
 This Solr plugin is based on the BitwiseFilter in LUCENE-2460
 See https://issues.apache.org/jira/browse/LUCENE-2460 for more details
 This is the syntax for searching in Solr:
 http://localhost:8983/path/to/solr/select/?q={!bitwise field=fieldname 
 op=OPERATION_NAME source=sourcevalue negate=boolean}remainder of query
 Example :
 http://localhost:8983/solr/bitwise/select/?q={!bitwise field=user_permissions 
 op=AND source=3 negate=true}state:FL
 The negate parameter is optional
 The field parameter is the name of the integer field
 The op parameter is the name of the operation; one of {AND, OR, XOR}
 The source parameter is the specified integer value
 The negate parameter is a boolean indicating whether or not to negate the 
 results of the bitwise operation
 To test out this plugin, simply copy the jar file containing the plugin 
 classes into your $SOLR_HOME/lib directory and then
 add the following to your solrconfig.xml file after the dismax request 
 handler:
 <queryParser name="bitwise" class="org.apache.solr.bitwise.BitwiseQueryParserPlugin" basedOn="dismax"/>
 Restart your servlet container.
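 The matching rule described above can be illustrated with a small standalone sketch. This is not the plugin's actual code; the class name and the assumption that a non-zero result of the bitwise operation counts as a match are mine:

```java
// Hypothetical sketch of the filter semantics: a document matches when
// (fieldValue OP source) is non-zero; negate=true inverts the decision.
public class BitwiseFilterSketch {
    public static boolean matches(int fieldValue, String op, int source, boolean negate) {
        int result;
        switch (op) {
            case "AND": result = fieldValue & source; break;
            case "OR":  result = fieldValue | source; break;
            case "XOR": result = fieldValue ^ source; break;
            default: throw new IllegalArgumentException("op must be AND, OR, or XOR");
        }
        boolean match = result != 0;
        return negate ? !match : match;
    }
}
```

 Under these assumptions, the example query above (op=AND, source=3, negate=true) would match a user_permissions value of 4, since 4 & 3 == 0 and the result is negated.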

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4734) Can not create collection via collections API on empty solr

2013-05-15 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13658431#comment-13658431
 ] 

Shawn Heisey commented on SOLR-4734:


One other thing in addition to Mark's note - the step where you link the config 
with zkcli isn't necessary, and at that point, the collection doesn't exist, so 
it can't be linked.

The best-case scenario is that the linkconfig step isn't doing anything at all. 
 The worst-case scenario is that linkconfig puts something in zookeeper that 
prevents the CREATE from working properly.

The CREATE action takes care of linking the config to the collection, via the 
collection.configName parameter.
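
As a minimal sketch of the point above, the CREATE call carries the config name itself, so no separate linkconfig step is needed. The helper below just assembles that URL (host and names taken from this issue; the class is hypothetical and nothing is sent over the wire):

```java
// Hypothetical helper: assemble the Collections API CREATE URL, showing
// that collection.configName links the config at creation time.
public class CreateCollectionUrl {
    public static String build(String host, String name, int numShards,
                               int replicationFactor, String configName) {
        return "http://" + host + "/solr/admin/collections?action=CREATE"
                + "&name=" + name
                + "&numShards=" + numShards
                + "&replicationFactor=" + replicationFactor
                + "&collection.configName=" + configName;
    }
}
```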


 Can not create collection via collections API on empty solr
 ---

 Key: SOLR-4734
 URL: https://issues.apache.org/jira/browse/SOLR-4734
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.3, 4.2.1
 Environment: Linux 64bit on 3.2.0-33-generic kernel
 Solr: 4.2.1
 ZooKeeper: 3.4.5
 Tomcat 7.0.27 
Reporter: Alexander Eibner
Assignee: Mark Miller
 Fix For: 5.0, 4.4

 Attachments: config-logs.zip


 The following setup and steps always lead to the same error:
 app01: ZooKeeper
 app02: ZooKeeper, Solr (in Tomcat)
 app03: ZooKeeper, Solr (in Tomcat) 
 *) Start ZooKeeper as ensemble on all machines.
 *) Start tomcat on app02/app03
 {code:javascript|title=clusterstate.json}
 null
 cZxid = 0x10014
 ctime = Thu Apr 18 10:59:24 CEST 2013
 mZxid = 0x10014
 mtime = Thu Apr 18 10:59:24 CEST 2013
 pZxid = 0x10014
 cversion = 0
 dataVersion = 0
 aclVersion = 0
 ephemeralOwner = 0x0
 dataLength = 0
 numChildren = 0
 {code}
 *) Upload the configuration (on app02) for the collection via the following 
 command:
 {noformat}
 zkcli.sh -cmd upconfig --zkhost app01:4181,app02:4181,app03:4181 
 --confdir config/solr/storage/conf/ --confname storage-conf 
 {noformat}
 *) Linking the configuration (on app02) via the following command:
 {noformat}
 zkcli.sh -cmd linkconfig --collection storage --confname storage-conf 
 --zkhost app01:4181,app02:4181,app03:4181
 {noformat}
 *) Create Collection via: 
 {noformat}
 http://app02/solr/admin/collections?action=CREATE&name=storage&numShards=1&replicationFactor=2&collection.configName=storage-conf
 {noformat}
 {code:javascript|title=clusterstate.json}
 {"storage":{
     "shards":{"shard1":{
         "range":"8000-7fff",
         "state":"active",
         "replicas":{
           "app02:9985_solr_storage_shard1_replica2":{
             "shard":"shard1",
             "state":"down",
             "core":"storage_shard1_replica2",
             "collection":"storage",
             "node_name":"app02:9985_solr",
             "base_url":"http://app02:9985/solr"},
           "app03:9985_solr_storage_shard1_replica1":{
             "shard":"shard1",
             "state":"down",
             "core":"storage_shard1_replica1",
             "collection":"storage",
             "node_name":"app03:9985_solr",
             "base_url":"http://app03:9985/solr"}}}},
     "router":"compositeId"}}
 cZxid = 0x10014
 ctime = Thu Apr 18 10:59:24 CEST 2013
 mZxid = 0x10047
 mtime = Thu Apr 18 11:04:06 CEST 2013
 pZxid = 0x10014
 cversion = 0
 dataVersion = 2
 aclVersion = 0
 ephemeralOwner = 0x0
 dataLength = 847
 numChildren = 0
 {code}
 This creates the replicas of the shard on app02 and app03, but neither of 
 them is marked as leader; both are marked as DOWN.
 Afterwards I cannot access the collection.
 In the browser I get:
 {noformat}
 SEVERE: org.apache.solr.common.SolrException: no servers hosting shard:
 {noformat}
 The following stacktrace in the logs:
 {code}
 Apr 18, 2013 11:04:05 AM org.apache.solr.common.SolrException log
 SEVERE: org.apache.solr.common.SolrException: Error CREATEing SolrCore 
 'storage_shard1_replica2': 
 at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:483)
 at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:140)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:591)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:192)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:141)
 at 
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
 at 
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
 at 
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:225)
 at 
 

[ANNOUNCE] Apache PyLucene 4.3.0

2013-05-15 Thread Andi Vajda


I am pleased to announce the availability of Apache PyLucene 4.3.0, the 
first PyLucene release wrapping the Lucene 4.x API.


Apache PyLucene, a subproject of Apache Lucene, is a Python extension for
accessing Apache Lucene Core. Its goal is to allow you to use Lucene's text
indexing and searching capabilities from Python. It is API compatible with
the latest version of Lucene 4.x Core, 4.3.0.

This release contains a number of bug fixes and improvements. Details can be 
found in the changes files:


http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_4_3_0/CHANGES
http://svn.apache.org/repos/asf/lucene/pylucene/trunk/jcc/CHANGES

Apache PyLucene is available from the following download page:
http://www.apache.org/dyn/closer.cgi/lucene/pylucene/pylucene-4.3.0-1-src.tar.gz

When downloading from a mirror site, please remember to verify the downloads 
using signatures found on the Apache site:

https://dist.apache.org/repos/dist/release/lucene/pylucene/KEYS

For more information on Apache PyLucene, visit the project home page:
  http://lucene.apache.org/pylucene

Andi..


[jira] [Commented] (SOLR-3369) shards.tolerant=true broken on group and facet queries

2013-05-15 Thread Jabouille jean Charles (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13658443#comment-13658443
 ] 

Jabouille jean Charles commented on SOLR-3369:
--

Hi,

here is a test, you just have to copy it in solr/core/src/test/org/apache/solr/:

{code:title=TestDistributedGroupingWithShardTolerantActivated.java|borderStyle=solid}
package org.apache.solr;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.apache.lucene.util.LuceneTestCase.Slow;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.embedded.JettySolrRunner;
import org.apache.solr.cloud.ChaosMonkey;
import org.apache.solr.common.params.CommonParams;
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.common.params.ShardParams;


@Slow
public class TestDistributedGroupingWithShardTolerantActivated extends 
BaseDistributedSearchTestCase {

  String t1="a_t";
  String i1="a_si";
  String s1="a_s";
  String tlong = "other_tl1";
  String tdate_a = "a_n_tdt";
  String tdate_b = "b_n_tdt";
  String oddField="oddField_s";

  @Override
  public void doTest() throws Exception {
    del("*:*");
    commit();

    handle.clear();
    handle.put("QTime", SKIPVAL);
    handle.put("timestamp", SKIPVAL);
    handle.put("grouped", UNORDERED);   // distrib grouping doesn't guarantee
                                        // order of top level group commands

    indexr(id,1, i1, 100, tlong, 100,t1,"now is the time for all good men",
           tdate_a, "2010-04-20T11:00:00Z",
           tdate_b, "2009-08-20T11:00:00Z",
           "foo_f", 1.414f, "foo_b", true, "foo_d", 1.414d);
    indexr(id,2, i1, 50 , tlong, 50,t1,"to come to the aid of their country.",
           tdate_a, "2010-05-02T11:00:00Z",
           tdate_b, "2009-11-02T11:00:00Z");
    indexr(id,3, i1, 2, tlong, 2,t1,"how now brown cow",
           tdate_a, "2010-05-03T11:00:00Z");
    indexr(id,4, i1, -100 ,tlong, 101,
           t1,"the quick fox jumped over the lazy dog",
           tdate_a, "2010-05-03T11:00:00Z",
           tdate_b, "2010-05-03T11:00:00Z");
    indexr(id,5, i1, 500, tlong, 500 ,
           t1,"the quick fox jumped way over the lazy dog",
           tdate_a, "2010-05-05T11:00:00Z");
    indexr(id,6, i1, -600, tlong, 600 ,t1,"humpty dumpy sat on a wall");
    indexr(id,7, i1, 123, tlong, 123 ,t1,"humpty dumpy had a great fall");
    indexr(id,8, i1, 876, tlong, 876,
           tdate_b, "2010-01-05T11:00:00Z",
           t1,"all the kings horses and all the kings men");
    indexr(id,9, i1, 7, tlong, 7,t1,"couldn't put humpty together again");
    indexr(id,10, i1, 4321, tlong, 4321,t1,"this too shall pass");
    indexr(id,11, i1, -987, tlong, 987,
           t1,"An eye for eye only ends up making the whole world blind.");
    indexr(id,12, i1, 379, tlong, 379,
           t1,"Great works are performed, not by strength, but by perseverance.");

    indexr(id, 14, "SubjectTerms_mfacet", new String[]  {"mathematical models",
        "mathematical analysis"});
    indexr(id, 15, "SubjectTerms_mfacet", new String[]  {"test 1", "test 2",
        "test3"});
    indexr(id, 16, "SubjectTerms_mfacet", new String[]  {"test 1", "test 2",
        "test3"});
    String[] vals = new String[100];
    for (int i=0; i<100; i++) {
      vals[i] = "test " + i;
    }
    indexr(id, 17, "SubjectTerms_mfacet", vals);

    indexr(
        id, 18, i1, 232, tlong, 332,
        t1,"no eggs on wall, lesson learned",
        oddField, "odd man out"
    );
    indexr(
        id, 19, i1, 232, tlong, 432,
        t1, "many eggs on wall",
        oddField, "odd man in"
    );
    indexr(
        id, 20, i1, 232, tlong, 532,
        t1, "some eggs on wall",
        oddField, "odd man between"
    );
    indexr(
        id, 21, i1, 232, tlong, 632,
        t1, "a few eggs on wall",
        oddField, "odd man under"
    );
    indexr(
        id, 22, i1, 232, tlong, 732,
        t1, "any eggs on wall",
        oddField, "odd man above"
    );
    indexr(
        id, 23, i1, 233, tlong, 734,
        t1, "dirty eggs",
        oddField, "odd eggs"
    );

    for (int i = 100; i < 150; i++) {
      indexr(id, i);
    }

    int[] values = new int[]{ /* … */ 9, 99, 999};
    for (int shard = 0; shard < clients.size(); shard++) {
      int groupValue = values[shard];
      for (int i = 500; i < 600; i++) {
        index_specific(shard, i1, groupValue, s1, "a", id, i * (shard + 1), t1,
            shard);
      }
    }

    commit();

    // SOLR-3369: shards.tolerant=true with grouping
    for (int numDownServers = 0; numDownServers < jettys.size() - 1;
        numDownServers++) {
      List<JettySolrRunner> upJettys = new ArrayList<JettySolrRunner>(jettys);
      List<SolrServer> upClients = new ArrayList<SolrServer>(clients);
      List<JettySolrRunner> downJettys = new ArrayList<JettySolrRunner>();
      List<String> upShards = new ArrayList<String>(Arrays.asList(shardsArr));
      for (int i = 0; i < numDownServers; i++) {
        // shut down some of the jettys

[jira] [Commented] (SOLR-4734) Can not create collection via collections API on empty solr

2013-05-15 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13658445#comment-13658445
 ] 

Shawn Heisey commented on SOLR-4734:


Another note: The linkconfig might in fact be failing, but you aren't seeing an 
error message because of SOLR-4807.  The fix for that problem will be in 4.3.1 
when it gets released.  If you download the 4.3.1 source code (or a nightly 
build) and copy its cloud-scripts directory over to your 4.3.0 install, you'll 
have logging.


 Can not create collection via collections API on empty solr
 ---

 Key: SOLR-4734
 URL: https://issues.apache.org/jira/browse/SOLR-4734
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.3, 4.2.1
 Environment: Linux 64bit on 3.2.0-33-generic kernel
 Solr: 4.2.1
 ZooKeeper: 3.4.5
 Tomcat 7.0.27 
Reporter: Alexander Eibner
Assignee: Mark Miller
 Fix For: 5.0, 4.4

 Attachments: config-logs.zip


 The following setup and steps always lead to the same error:
 app01: ZooKeeper
 app02: ZooKeeper, Solr (in Tomcat)
 app03: ZooKeeper, Solr (in Tomcat) 
 *) Start ZooKeeper as ensemble on all machines.
 *) Start tomcat on app02/app03
 {code:javascript|title=clusterstate.json}
 null
 cZxid = 0x10014
 ctime = Thu Apr 18 10:59:24 CEST 2013
 mZxid = 0x10014
 mtime = Thu Apr 18 10:59:24 CEST 2013
 pZxid = 0x10014
 cversion = 0
 dataVersion = 0
 aclVersion = 0
 ephemeralOwner = 0x0
 dataLength = 0
 numChildren = 0
 {code}
 *) Upload the configuration (on app02) for the collection via the following 
 command:
 {noformat}
 zkcli.sh -cmd upconfig --zkhost app01:4181,app02:4181,app03:4181 
 --confdir config/solr/storage/conf/ --confname storage-conf 
 {noformat}
 *) Linking the configuration (on app02) via the following command:
 {noformat}
 zkcli.sh -cmd linkconfig --collection storage --confname storage-conf 
 --zkhost app01:4181,app02:4181,app03:4181
 {noformat}
 *) Create Collection via: 
 {noformat}
 http://app02/solr/admin/collections?action=CREATE&name=storage&numShards=1&replicationFactor=2&collection.configName=storage-conf
 {noformat}
 {code:javascript|title=clusterstate.json}
 {"storage":{
     "shards":{"shard1":{
         "range":"8000-7fff",
         "state":"active",
         "replicas":{
           "app02:9985_solr_storage_shard1_replica2":{
             "shard":"shard1",
             "state":"down",
             "core":"storage_shard1_replica2",
             "collection":"storage",
             "node_name":"app02:9985_solr",
             "base_url":"http://app02:9985/solr"},
           "app03:9985_solr_storage_shard1_replica1":{
             "shard":"shard1",
             "state":"down",
             "core":"storage_shard1_replica1",
             "collection":"storage",
             "node_name":"app03:9985_solr",
             "base_url":"http://app03:9985/solr"}}}},
     "router":"compositeId"}}
 cZxid = 0x10014
 ctime = Thu Apr 18 10:59:24 CEST 2013
 mZxid = 0x10047
 mtime = Thu Apr 18 11:04:06 CEST 2013
 pZxid = 0x10014
 cversion = 0
 dataVersion = 2
 aclVersion = 0
 ephemeralOwner = 0x0
 dataLength = 847
 numChildren = 0
 {code}
 This creates the replicas of the shard on app02 and app03, but neither of 
 them is marked as leader; both are marked as DOWN.
 Afterwards I cannot access the collection.
 In the browser I get:
 {noformat}
 SEVERE: org.apache.solr.common.SolrException: no servers hosting shard:
 {noformat}
 The following stacktrace in the logs:
 {code}
 Apr 18, 2013 11:04:05 AM org.apache.solr.common.SolrException log
 SEVERE: org.apache.solr.common.SolrException: Error CREATEing SolrCore 
 'storage_shard1_replica2': 
 at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:483)
 at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:140)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:591)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:192)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:141)
 at 
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
 at 
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
 at 
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:225)
 at 
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:169)
 at 
 org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168)
 at 
 

[jira] [Commented] (SOLR-4734) Can not create collection via collections API on empty solr

2013-05-15 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13658459#comment-13658459
 ] 

Mark Miller commented on SOLR-4734:
---

bq.  at that point, the collection doesn't exist, so it can't be linked.

You can link before the collection exists - this feature was added to support 
some more complicated scenarios. When the collection is actually created, it 
will find the link.

 Can not create collection via collections API on empty solr
 ---

 Key: SOLR-4734
 URL: https://issues.apache.org/jira/browse/SOLR-4734
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.3, 4.2.1
 Environment: Linux 64bit on 3.2.0-33-generic kernel
 Solr: 4.2.1
 ZooKeeper: 3.4.5
 Tomcat 7.0.27 
Reporter: Alexander Eibner
Assignee: Mark Miller
 Fix For: 5.0, 4.4

 Attachments: config-logs.zip


 The following setup and steps always lead to the same error:
 app01: ZooKeeper
 app02: ZooKeeper, Solr (in Tomcat)
 app03: ZooKeeper, Solr (in Tomcat) 
 *) Start ZooKeeper as ensemble on all machines.
 *) Start tomcat on app02/app03
 {code:javascript|title=clusterstate.json}
 null
 cZxid = 0x10014
 ctime = Thu Apr 18 10:59:24 CEST 2013
 mZxid = 0x10014
 mtime = Thu Apr 18 10:59:24 CEST 2013
 pZxid = 0x10014
 cversion = 0
 dataVersion = 0
 aclVersion = 0
 ephemeralOwner = 0x0
 dataLength = 0
 numChildren = 0
 {code}
 *) Upload the configuration (on app02) for the collection via the following 
 command:
 {noformat}
 zkcli.sh -cmd upconfig --zkhost app01:4181,app02:4181,app03:4181 
 --confdir config/solr/storage/conf/ --confname storage-conf 
 {noformat}
 *) Linking the configuration (on app02) via the following command:
 {noformat}
 zkcli.sh -cmd linkconfig --collection storage --confname storage-conf 
 --zkhost app01:4181,app02:4181,app03:4181
 {noformat}
 *) Create Collection via: 
 {noformat}
 http://app02/solr/admin/collections?action=CREATE&name=storage&numShards=1&replicationFactor=2&collection.configName=storage-conf
 {noformat}
 {code:javascript|title=clusterstate.json}
 {"storage":{
     "shards":{"shard1":{
         "range":"8000-7fff",
         "state":"active",
         "replicas":{
           "app02:9985_solr_storage_shard1_replica2":{
             "shard":"shard1",
             "state":"down",
             "core":"storage_shard1_replica2",
             "collection":"storage",
             "node_name":"app02:9985_solr",
             "base_url":"http://app02:9985/solr"},
           "app03:9985_solr_storage_shard1_replica1":{
             "shard":"shard1",
             "state":"down",
             "core":"storage_shard1_replica1",
             "collection":"storage",
             "node_name":"app03:9985_solr",
             "base_url":"http://app03:9985/solr"}}}},
     "router":"compositeId"}}
 cZxid = 0x10014
 ctime = Thu Apr 18 10:59:24 CEST 2013
 mZxid = 0x10047
 mtime = Thu Apr 18 11:04:06 CEST 2013
 pZxid = 0x10014
 cversion = 0
 dataVersion = 2
 aclVersion = 0
 ephemeralOwner = 0x0
 dataLength = 847
 numChildren = 0
 {code}
 This creates the replicas of the shard on app02 and app03, but neither of 
 them is marked as leader; both are marked as DOWN.
 Afterwards I cannot access the collection.
 In the browser I get:
 {noformat}
 SEVERE: org.apache.solr.common.SolrException: no servers hosting shard:
 {noformat}
 The following stack trace appears in the logs:
 {code}
 Apr 18, 2013 11:04:05 AM org.apache.solr.common.SolrException log
 SEVERE: org.apache.solr.common.SolrException: Error CREATEing SolrCore 
 'storage_shard1_replica2': 
 at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:483)
 at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:140)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:591)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:192)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:141)
 at 
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
 at 
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
 at 
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:225)
 at 
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:169)
 at 
 org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168)
 at 
 org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98)
 at 
 

RE: svn commit: r1482642 - /lucene/dev/branches/lucene_solr_4_3/lucene/core/src/java/org/apache/lucene/util/Constants.java

2013-05-15 Thread Uwe Schindler
Hi,

ok, I just wanted to be sure!

 I double checked (also 3.6.1, 3.6.2, and 4.2.1 releases): hossman's commit is
 actually correct.
 
 This is the one used for back compat (it's tested with the version
 comparator), but it's just marking which version of Lucene wrote the
 segment. The variable name should really be changed; MAIN says nothing.
 
 This one is the more important one though: another reason why it's not
 triggered by the version sysprop from build.xml is that our comparator cannot
 deal with -SNAPSHOT or any Maven suffixes or any of that horseshit. It needs
 real version numbers.

In trunk and branch_4x we already changed the numbering in common-build to fix 
some bugs with Jenkins! We now have:
  <property name="dev.version.base" value="4.4"/>
Maybe we should backport this commit to 4.3, too.

So maybe we can use this to check consistency, or pass this version to the 
tests somehow.

Our version comparator can handle those versions: it expands missing parts 
with .0, so e.g. "4.3.1" is considered greater than "4.3" (which is treated 
as "4.3.0").
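That expansion rule can be sketched in a few lines (illustrative Python only, not Lucene's actual comparator):

```python
def pad_version(v, parts=3):
    """Expand missing version parts with 0, e.g. "4.3" -> (4, 3, 0)."""
    nums = [int(p) for p in v.split(".")]
    return tuple(nums + [0] * (parts - len(nums)))

def compare_versions(a, b):
    """Return -1, 0, or 1; "4.3" compares equal to "4.3.0"."""
    pa, pb = pad_version(a), pad_version(b)
    return (pa > pb) - (pa < pb)
```

With this padding, "4.3.1" sorts after "4.3" even though the two strings have a different number of parts.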

 On Wed, May 15, 2013 at 7:57 AM, Robert Muir rcm...@gmail.com wrote:
  On Wed, May 15, 2013 at 4:23 AM, Uwe Schindler u...@thetaphi.de
 wrote:
  Are we sure that this is the right thing? The LUCENE_MAIN_VERSION is
 used for index compatibility and should always be only in X.Y format.
 
  Please revert this!
 
 
  Uwe is correct: please only adjust build.xml here or whatever, but
  don't change this!
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional
 commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4734) Leader election fails with an NPE if there is no UpdateLog.

2013-05-15 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-4734:
--

Priority: Minor  (was: Major)
 Summary: Leader election fails with an NPE if there is no UpdateLog.  
(was: Can not create collection via collections API on empty solr)

 Leader election fails with an NPE if there is no UpdateLog.
 ---

 Key: SOLR-4734
 URL: https://issues.apache.org/jira/browse/SOLR-4734
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.3, 4.2.1
 Environment: Linux 64bit on 3.2.0-33-generic kernel
 Solr: 4.2.1
 ZooKeeper: 3.4.5
 Tomcat 7.0.27 
Reporter: Alexander Eibner
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, 4.4

 Attachments: config-logs.zip


 The following setup and steps always lead to the same error:
 app01: ZooKeeper
 app02: ZooKeeper, Solr (in Tomcat)
 app03: ZooKeeper, Solr (in Tomcat) 
 *) Start ZooKeeper as ensemble on all machines.
 *) Start tomcat on app02/app03
 {code:javascript|title=clusterstate.json}
 null
 cZxid = 0x10014
 ctime = Thu Apr 18 10:59:24 CEST 2013
 mZxid = 0x10014
 mtime = Thu Apr 18 10:59:24 CEST 2013
 pZxid = 0x10014
 cversion = 0
 dataVersion = 0
 aclVersion = 0
 ephemeralOwner = 0x0
 dataLength = 0
 numChildren = 0
 {code}
 *) Upload the configuration (on app02) for the collection via the following 
 command:
 {noformat}
 zkcli.sh -cmd upconfig --zkhost app01:4181,app02:4181,app03:4181 
 --confdir config/solr/storage/conf/ --confname storage-conf 
 {noformat}
 *) Linking the configuration (on app02) via the following command:
 {noformat}
 zkcli.sh -cmd linkconfig --collection storage --confname storage-conf 
 --zkhost app01:4181,app02:4181,app03:4181
 {noformat}
 *) Create Collection via: 
 {noformat}
 http://app02/solr/admin/collections?action=CREATE&name=storage&numShards=1&replicationFactor=2&collection.configName=storage-conf
 {noformat}
 {code:javascript|title=clusterstate.json}
 {"storage":{
   "shards":{"shard1":{
     "range":"8000-7fff",
     "state":"active",
     "replicas":{
       "app02:9985_solr_storage_shard1_replica2":{
         "shard":"shard1",
         "state":"down",
         "core":"storage_shard1_replica2",
         "collection":"storage",
         "node_name":"app02:9985_solr",
         "base_url":"http://app02:9985/solr"},
       "app03:9985_solr_storage_shard1_replica1":{
         "shard":"shard1",
         "state":"down",
         "core":"storage_shard1_replica1",
         "collection":"storage",
         "node_name":"app03:9985_solr",
         "base_url":"http://app03:9985/solr",
         "router":"compositeId"}}
 cZxid = 0x10014
 ctime = Thu Apr 18 10:59:24 CEST 2013
 mZxid = 0x10047
 mtime = Thu Apr 18 11:04:06 CEST 2013
 pZxid = 0x10014
 cversion = 0
 dataVersion = 2
 aclVersion = 0
 ephemeralOwner = 0x0
 dataLength = 847
 numChildren = 0
 {code}
 This creates the replicas of the shard on app02 and app03, but neither of 
 them is marked as leader; both are marked as DOWN.
 Afterwards I cannot access the collection.
 In the browser I get:
 {noformat}
 SEVERE: org.apache.solr.common.SolrException: no servers hosting shard:
 {noformat}
 The following stack trace appears in the logs:
 {code}
 Apr 18, 2013 11:04:05 AM org.apache.solr.common.SolrException log
 SEVERE: org.apache.solr.common.SolrException: Error CREATEing SolrCore 
 'storage_shard1_replica2': 
 at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:483)
 at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:140)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:591)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:192)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:141)
 at 
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
 at 
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
 at 
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:225)
 at 
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:169)
 at 
 org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168)
 at 
 org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98)
 at 
 org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
 at 
 

[jira] [Commented] (SOLR-4734) Leader election fails with an NPE if there is no UpdateLog.

2013-05-15 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13658478#comment-13658478
 ] 

Shawn Heisey commented on SOLR-4734:


bq. You can link before the collection exists - this feature was added to 
support some more complicated scenarios. When the collection is actually 
created, it will find the link.

Thanks, Mark.  I can always learn new things!  I think I can envision the 
scenario - upload the config, link it to the collection that doesn't exist yet, 
then skip the collections API and manually create each core with CoreAdmin.
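As a rough sketch of that manual-creation step, one could build the CoreAdmin CREATE request like this (the host and core names below are the hypothetical ones from this report; action/name/collection/shard are the standard CoreAdmin parameters):

```python
from urllib.parse import urlencode

def core_admin_create_url(base_url, core_name, collection, shard):
    """Build a CoreAdmin CREATE request URL for one core of a collection."""
    params = {
        "action": "CREATE",
        "name": core_name,
        "collection": collection,
        "shard": shard,
    }
    return "%s/admin/cores?%s" % (base_url, urlencode(params))

# Example against the app02 host from the report above:
url = core_admin_create_url("http://app02:9985/solr",
                            "storage_shard1_replica2", "storage", "shard1")
```

Issuing one such request per desired replica, against each hosting node, is the manual equivalent of what the Collections API does for you.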


 Leader election fails with an NPE if there is no UpdateLog.
 ---

 Key: SOLR-4734
 URL: https://issues.apache.org/jira/browse/SOLR-4734
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.3, 4.2.1
 Environment: Linux 64bit on 3.2.0-33-generic kernel
 Solr: 4.2.1
 ZooKeeper: 3.4.5
 Tomcat 7.0.27 
Reporter: Alexander Eibner
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, 4.4

 Attachments: config-logs.zip


 The following setup and steps always lead to the same error:
 app01: ZooKeeper
 app02: ZooKeeper, Solr (in Tomcat)
 app03: ZooKeeper, Solr (in Tomcat) 
 *) Start ZooKeeper as ensemble on all machines.
 *) Start tomcat on app02/app03
 {code:javascript|title=clusterstate.json}
 null
 cZxid = 0x10014
 ctime = Thu Apr 18 10:59:24 CEST 2013
 mZxid = 0x10014
 mtime = Thu Apr 18 10:59:24 CEST 2013
 pZxid = 0x10014
 cversion = 0
 dataVersion = 0
 aclVersion = 0
 ephemeralOwner = 0x0
 dataLength = 0
 numChildren = 0
 {code}
 *) Upload the configuration (on app02) for the collection via the following 
 command:
 {noformat}
 zkcli.sh -cmd upconfig --zkhost app01:4181,app02:4181,app03:4181 
 --confdir config/solr/storage/conf/ --confname storage-conf 
 {noformat}
 *) Linking the configuration (on app02) via the following command:
 {noformat}
 zkcli.sh -cmd linkconfig --collection storage --confname storage-conf 
 --zkhost app01:4181,app02:4181,app03:4181
 {noformat}
 *) Create Collection via: 
 {noformat}
 http://app02/solr/admin/collections?action=CREATE&name=storage&numShards=1&replicationFactor=2&collection.configName=storage-conf
 {noformat}
 {code:javascript|title=clusterstate.json}
 {"storage":{
   "shards":{"shard1":{
     "range":"8000-7fff",
     "state":"active",
     "replicas":{
       "app02:9985_solr_storage_shard1_replica2":{
         "shard":"shard1",
         "state":"down",
         "core":"storage_shard1_replica2",
         "collection":"storage",
         "node_name":"app02:9985_solr",
         "base_url":"http://app02:9985/solr"},
       "app03:9985_solr_storage_shard1_replica1":{
         "shard":"shard1",
         "state":"down",
         "core":"storage_shard1_replica1",
         "collection":"storage",
         "node_name":"app03:9985_solr",
         "base_url":"http://app03:9985/solr",
         "router":"compositeId"}}
 cZxid = 0x10014
 ctime = Thu Apr 18 10:59:24 CEST 2013
 mZxid = 0x10047
 mtime = Thu Apr 18 11:04:06 CEST 2013
 pZxid = 0x10014
 cversion = 0
 dataVersion = 2
 aclVersion = 0
 ephemeralOwner = 0x0
 dataLength = 847
 numChildren = 0
 {code}
 This creates the replicas of the shard on app02 and app03, but neither of 
 them is marked as leader; both are marked as DOWN.
 Afterwards I cannot access the collection.
 In the browser I get:
 {noformat}
 SEVERE: org.apache.solr.common.SolrException: no servers hosting shard:
 {noformat}
 The following stack trace appears in the logs:
 {code}
 Apr 18, 2013 11:04:05 AM org.apache.solr.common.SolrException log
 SEVERE: org.apache.solr.common.SolrException: Error CREATEing SolrCore 
 'storage_shard1_replica2': 
 at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:483)
 at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:140)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:591)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:192)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:141)
 at 
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
 at 
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
 at 
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:225)
 at 
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:169)
 at 
 

[jira] [Resolved] (SOLR-4822) SOLR for SharePoint Indexing and search

2013-05-15 Thread Ryan McKinley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McKinley resolved SOLR-4822.
-

Resolution: Invalid

check:
http://manifoldcf.apache.org


 SOLR for SharePoint Indexing and search
 ---

 Key: SOLR-4822
 URL: https://issues.apache.org/jira/browse/SOLR-4822
 Project: Solr
  Issue Type: Bug
  Components: SearchComponents - other
Reporter: jaya shankar

 Hello Team,
 I am using Solr for indexing and searching some of my RDBMS and file system 
 content, but I am not sure how I can use Solr for SharePoint indexing. I feel 
 this is a major area to improve. Can anyone help me with how to achieve this?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4816) Change CloudSolrServer to send updates to the correct shard

2013-05-15 Thread Stephen Riesenberg (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13658510#comment-13658510
 ] 

Stephen Riesenberg commented on SOLR-4816:
--

Fixed several other NPEs before getting to the code where it uses the 
DocRouter. It looks like the router that is used is the ImplicitDocRouter, not 
the HashBasedRouter, so it returns a null Slice. Any idea how to get the 
correct hash-based DocRouter from the API? Or could we just create a new 
HashBasedRouter instance and use it?
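For intuition, hash-based routing maps a 32-bit hash of the document id into the slice whose hash range contains it. The sketch below is illustrative only: Solr's HashBasedRouter uses MurmurHash3 over the id, and CRC32 here is just a deterministic stand-in.

```python
import zlib

def hash32(doc_id):
    # Stand-in for Solr's MurmurHash3: any deterministic signed 32-bit hash
    h = zlib.crc32(doc_id.encode("utf-8"))
    return h - 0x1_0000_0000 if h >= 0x8000_0000 else h

def slice_for_id(doc_id, slices):
    """slices: list of (name, lo, hi) signed 32-bit hash ranges."""
    h = hash32(doc_id)
    for name, lo, hi in slices:
        if lo <= h <= hi:
            return name
    return None  # no range matched (analogous to getting back a null Slice)
```

If the slices' ranges together cover the full signed 32-bit space, every id routes to exactly one slice; a router without ranges (like the implicit one) has nothing to match against.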

 Change CloudSolrServer to send updates to the correct shard
 ---

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816-sriesenberg.patch


 This issue adds a directUpdate method to CloudSolrServer which routes 
 update requests to the correct shard. This would be a nice feature to have to 
 eliminate the document routing overhead on the Solr servers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4816) Change CloudSolrServer to send updates to the correct shard

2013-05-15 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13658516#comment-13658516
 ] 

Joel Bernstein commented on SOLR-4816:
--

Stephen, let's discuss offline. Just sent you an email.

 Change CloudSolrServer to send updates to the correct shard
 ---

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816-sriesenberg.patch


 This issue adds a directUpdate method to CloudSolrServer which routes 
 update requests to the correct shard. This would be a nice feature to have to 
 eliminate the document routing overhead on the Solr servers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4981) Deprecate PositionFilter

2013-05-15 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13658517#comment-13658517
 ] 

Steve Rowe commented on LUCENE-4981:


Adrien,

I looked at 
[http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.PositionFilterFactory],
 and at where it originally came from 
(http://markmail.org/message/g4habmbyeuckmix6 and LUCENE-1380), and I don't 
think existing query parser functionality, including 
{{QueryParser.setAutoGeneratePhraseQueries}}, will cover the use case it was 
created to handle.

That use case is roughly: given an indexed non-tokenized string field (e.g. "a 
b"), and a multi-word query against that field, create a disjunction query of 
all possible word n-grams, where 0 < n <= N and N is as large as the expected 
longest query.  E.g. a query "a b c" would result in 'a' OR 'a b' OR 'a b c' 
OR 'b' OR 'b c' OR 'c', and would match a doc with field value "a b".
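That n-gram expansion can be sketched as follows (illustrative Python, not the actual ShingleFilter):

```python
def word_ngrams(query, max_n=None):
    """All word n-grams of a whitespace-tokenized query, for 0 < n <= max_n."""
    words = query.split()
    top = max_n or len(words)
    return [" ".join(words[i:i + n])
            for n in range(1, top + 1)
            for i in range(len(words) - n + 1)]

# The query "a b c" expands to a disjunction over these grams; a doc whose
# field value is "a b" matches because "a b" is among them.
grams = word_ngrams("a b c")
```

ShingleFilter produces these grams as tokens, and PositionFilter then stacks them at one position so the query parser ORs them together.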

[~michaelsembwever], the guy who started the thread and created the issue, was 
able to handle this use case by stringing together:

# Quoting the query, to allow the configured analyzer to see all of the terms 
instead of one-at-a-time
# ShingleFilter, to create the n-grams
# The new PositionFilter, to place all terms at the same position
# QueryParser's synonym handling functionality, which produces a 
MultiPhraseQuery, which when given multiple terms at the same single position, 
creates a BooleanQuery with one SHOULD TermQuery for each term.

Without PositionFilter, is there some way to achieve the same goal?

I don't think we should get rid of PositionFilter unless we have an alternate 
way to handle the (IMHO legitimate) use case it was originally designed to 
cover.

 Deprecate PositionFilter
 

 Key: LUCENE-4981
 URL: https://issues.apache.org/jira/browse/LUCENE-4981
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-4981.patch


 According to the documentation 
 (http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.PositionFilterFactory),
  PositionFilter is mainly useful to make query parsers generate boolean 
 queries instead of phrase queries although this problem can be solved at 
 query parsing level instead of analysis level (eg. using 
 QueryParser.setAutoGeneratePhraseQueries).
 So given that PositionFilter corrupts token graphs (see TestRandomChains), I 
 propose to deprecate it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4816) Change CloudSolrServer to send updates to the correct shard

2013-05-15 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13658531#comment-13658531
 ] 

Mark Miller commented on SOLR-4816:
---

Seems we should be batching bulk docs up based on which leader they will go to 
and propagate the bulk adds where we can.
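The batching idea might look like this sketch, where route and leader_of are hypothetical stand-ins for the DocRouter lookup and leader resolution:

```python
from collections import defaultdict

def batch_by_leader(docs, route, leader_of):
    """Group update docs so each shard leader receives a single bulk request."""
    batches = defaultdict(list)
    for doc in docs:
        leader = leader_of(route(doc["id"]))
        batches[leader].append(doc)
    return dict(batches)

# Toy routing: ids before "c" go to shard1, the rest to shard2.
docs = [{"id": "a"}, {"id": "b"}, {"id": "c"}]
batches = batch_by_leader(
    docs,
    route=lambda doc_id: "shard1" if doc_id < "c" else "shard2",
    leader_of=lambda shard: shard + "_leader",
)
```

One bulk add per leader, instead of one request per document, is what removes the per-document routing overhead on the server side.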

 Change CloudSolrServer to send updates to the correct shard
 ---

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816-sriesenberg.patch


 This issue adds a directUpdate method to CloudSolrServer which routes 
 update requests to the correct shard. This would be a nice feature to have to 
 eliminate the document routing overhead on the Solr servers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4816) Change CloudSolrServer to send updates to the correct shard

2013-05-15 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13658532#comment-13658532
 ] 

Mark Miller commented on SOLR-4816:
---

Ah, I glanced too quickly - that is what is happening. It looks a lot cleaner 
than your initial comment indicated, nice.

 Change CloudSolrServer to send updates to the correct shard
 ---

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816-sriesenberg.patch


 This issue adds a directUpdate method to CloudSolrServer which routes 
 update requests to the correct shard. This would be a nice feature to have to 
 eliminate the document routing overhead on the Solr servers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4816) Change CloudSolrServer to send updates to the correct shard

2013-05-15 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13658536#comment-13658536
 ] 

Mark Miller commented on SOLR-4816:
---

bq. Fixed several other NPEs before getting to the code where it uses the 
DocRouter. 

We should make sure we have tests that would have caught these as well!

 Change CloudSolrServer to send updates to the correct shard
 ---

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816-sriesenberg.patch


 This issue adds a directUpdate method to CloudSolrServer which routes 
 update requests to the correct shard. This would be a nice feature to have to 
 eliminate the document routing overhead on the Solr servers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4816) Change CloudSolrServer to send updates to the correct shard

2013-05-15 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13658543#comment-13658543
 ] 

Yonik Seeley commented on SOLR-4816:


bq. Any idea how to get the correct DocRouter for hash-based from the API?

It's collection specific.
See DocCollection.getRouter()

 Change CloudSolrServer to send updates to the correct shard
 ---

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816-sriesenberg.patch


 This issue adds a directUpdate method to CloudSolrServer which routes 
 update requests to the correct shard. This would be a nice feature to have to 
 eliminate the document routing overhead on the Solr servers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4981) Deprecate PositionFilter

2013-05-15 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13658546#comment-13658546
 ] 

Adrien Grand commented on LUCENE-4981:
--

bq. Without PositionFilter, is there some way to achieve the same goal?

My understanding is that this use case wants the query parser to interpret 
phrase queries as sub-phrase queries. But instead of creating a specific query 
parser in order to process phrase queries differently (by overriding 
newFieldQuery, for example), it hacks the token stream so that the 
default query parser generates the expected query. So I don't really think this 
is a use case for PositionFilter?

 Deprecate PositionFilter
 

 Key: LUCENE-4981
 URL: https://issues.apache.org/jira/browse/LUCENE-4981
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-4981.patch


 According to the documentation 
 (http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.PositionFilterFactory),
  PositionFilter is mainly useful to make query parsers generate boolean 
 queries instead of phrase queries although this problem can be solved at 
 query parsing level instead of analysis level (eg. using 
 QueryParser.setAutoGeneratePhraseQueries).
 So given that PositionFilter corrupts token graphs (see TestRandomChains), I 
 propose to deprecate it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [ANNOUNCE] Apache PyLucene 4.3.0

2013-05-15 Thread Michael McCandless
Hmm the web site News section doesn't have 4.3.0?

Mike McCandless

http://blog.mikemccandless.com


On Wed, May 15, 2013 at 11:14 AM, Andi Vajda va...@apache.org wrote:

 I am pleased to announce the availability of Apache PyLucene 4.3.0, the
 first PyLucene release wrapping the Lucene 4.x API.

 Apache PyLucene, a subproject of Apache Lucene, is a Python extension for
 accessing Apache Lucene Core. Its goal is to allow you to use Lucene's text
 indexing and searching capabilities from Python. It is API compatible with
 the latest version of Lucene 4.x Core, 4.3.0.

 This release contains a number of bug fixes and improvements. Details can be
 found in the changes files:

 http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_4_3_0/CHANGES
 http://svn.apache.org/repos/asf/lucene/pylucene/trunk/jcc/CHANGES

 Apache PyLucene is available from the following download page:
 http://www.apache.org/dyn/closer.cgi/lucene/pylucene/pylucene-4.3.0-1-src.tar.gz

 When downloading from a mirror site, please remember to verify the downloads
 using signatures found on the Apache site:
 https://dist.apache.org/repos/dist/release/lucene/pylucene/KEYS

 For more information on Apache PyLucene, visit the project home page:
   http://lucene.apache.org/pylucene

 Andi..


[jira] [Resolved] (SOLR-4734) Leader election fails with an NPE if there is no UpdateLog.

2013-05-15 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-4734.
---

Resolution: Fixed

 Leader election fails with an NPE if there is no UpdateLog.
 ---

 Key: SOLR-4734
 URL: https://issues.apache.org/jira/browse/SOLR-4734
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.3, 4.2.1
 Environment: Linux 64bit on 3.2.0-33-generic kernel
 Solr: 4.2.1
 ZooKeeper: 3.4.5
 Tomcat 7.0.27 
Reporter: Alexander Eibner
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, 4.4

 Attachments: config-logs.zip


 The following setup and steps always lead to the same error:
 app01: ZooKeeper
 app02: ZooKeeper, Solr (in Tomcat)
 app03: ZooKeeper, Solr (in Tomcat) 
 *) Start ZooKeeper as ensemble on all machines.
 *) Start tomcat on app02/app03
 {code:javascript|title=clusterstate.json}
 null
 cZxid = 0x10014
 ctime = Thu Apr 18 10:59:24 CEST 2013
 mZxid = 0x10014
 mtime = Thu Apr 18 10:59:24 CEST 2013
 pZxid = 0x10014
 cversion = 0
 dataVersion = 0
 aclVersion = 0
 ephemeralOwner = 0x0
 dataLength = 0
 numChildren = 0
 {code}
 *) Upload the configuration (on app02) for the collection via the following 
 command:
 {noformat}
 zkcli.sh -cmd upconfig --zkhost app01:4181,app02:4181,app03:4181 
 --confdir config/solr/storage/conf/ --confname storage-conf 
 {noformat}
 *) Linking the configuration (on app02) via the following command:
 {noformat}
 zkcli.sh -cmd linkconfig --collection storage --confname storage-conf 
 --zkhost app01:4181,app02:4181,app03:4181
 {noformat}
 *) Create Collection via: 
 {noformat}
 http://app02/solr/admin/collections?action=CREATE&name=storage&numShards=1&replicationFactor=2&collection.configName=storage-conf
 {noformat}
 {code:javascript|title=clusterstate.json}
 {"storage":{
   "shards":{"shard1":{
     "range":"8000-7fff",
     "state":"active",
     "replicas":{
       "app02:9985_solr_storage_shard1_replica2":{
         "shard":"shard1",
         "state":"down",
         "core":"storage_shard1_replica2",
         "collection":"storage",
         "node_name":"app02:9985_solr",
         "base_url":"http://app02:9985/solr"},
       "app03:9985_solr_storage_shard1_replica1":{
         "shard":"shard1",
         "state":"down",
         "core":"storage_shard1_replica1",
         "collection":"storage",
         "node_name":"app03:9985_solr",
         "base_url":"http://app03:9985/solr",
         "router":"compositeId"}}
 cZxid = 0x10014
 ctime = Thu Apr 18 10:59:24 CEST 2013
 mZxid = 0x10047
 mtime = Thu Apr 18 11:04:06 CEST 2013
 pZxid = 0x10014
 cversion = 0
 dataVersion = 2
 aclVersion = 0
 ephemeralOwner = 0x0
 dataLength = 847
 numChildren = 0
 {code}
 This creates the replicas of the shard on app02 and app03, but neither of 
 them is marked as leader; both are marked as DOWN.
 Afterwards I cannot access the collection.
 In the browser I get:
 {noformat}
 SEVERE: org.apache.solr.common.SolrException: no servers hosting shard:
 {noformat}
 The following stack trace appears in the logs:
 {code}
 Apr 18, 2013 11:04:05 AM org.apache.solr.common.SolrException log
 SEVERE: org.apache.solr.common.SolrException: Error CREATEing SolrCore 
 'storage_shard1_replica2': 
at org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:483)
at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:140)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:591)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:192)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:141)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:225)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:169)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:999)

[jira] [Assigned] (SOLR-4805) Calling Collection RELOAD where collection has a single core, leaves collection offline and unusable till reboot

2013-05-15 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reassigned SOLR-4805:
-

Assignee: Mark Miller

 Calling Collection RELOAD where collection has a single core, leaves 
 collection offline and unusable till reboot
 

 Key: SOLR-4805
 URL: https://issues.apache.org/jira/browse/SOLR-4805
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Jared Rodriguez
Assignee: Mark Miller

 If you have a collection that is composed of a single core, then calling 
 reload on that collection leaves the core offline.  This happens even if 
 nothing at all has changed about the collection or its config.  This happens 
 whether you call reload via an http GET or if you directly call reload via 
 the collections api. 
 I tried a collection with a single core that contains data, changed nothing 
 about the config in ZK, and called reload on the collection.  The call 
 completes, but ZK flags that replica with state:down.
 Trying it where the single core contains no data, the same thing happens: 
 ZK updates the config and broadcasts state:down for the replica.
 I did not try this in a multicore or replicated-core environment.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4805) Calling Collection RELOAD where collection has a single core, leaves collection offline and unusable till reboot

2013-05-15 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-4805:
--

Fix Version/s: 4.4
   5.0

 Calling Collection RELOAD where collection has a single core, leaves 
 collection offline and unusable till reboot
 

 Key: SOLR-4805
 URL: https://issues.apache.org/jira/browse/SOLR-4805
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Jared Rodriguez
Assignee: Mark Miller
 Fix For: 5.0, 4.4


 If you have a collection that is composed of a single core, then calling 
 reload on that collection leaves the core offline.  This happens even if 
 nothing at all has changed about the collection or its config.  This happens 
 whether you call reload via an http GET or if you directly call reload via 
 the collections api. 
 Tried a collection with a single core that contains data, change nothing 
 about the config in ZK and call reload and the collection.  The call 
 completes, but ZK flags that replica with state:down
 Try it where a the single core contains no data and the same thing happens, 
 ZK config updates and broadcasts state:down for the replica.
 I did not try this in a multicore or replicated core environment.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4981) Deprecate PositionFilter

2013-05-15 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13658577#comment-13658577
 ] 

Steve Rowe commented on LUCENE-4981:


bq. So I don't really think this is a use-case for PositionFilter?

I agree, subclassing the QP and overriding {{newFieldQuery}} and 
{{getFieldQuery}} should be sufficient to handle this use case.  Current 
PositionFilter users will have to maintain their own code outside of Lucene and 
Solr's codebase, rather than having a configuration-only solution.

I think the {{@deprecated}} annotation on PositionFilter in branch_4x should be 
augmented to help people find this alternative.  Similarly, in the backcompat 
section of trunk CHANGES.txt, and/or MIGRATE.txt, this issue should be 
mentioned.
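For reference, the suggested alternative can be sketched as a small QueryParser subclass. This is only an illustrative sketch, not code from the issue: the class name is invented, and it assumes the 4.x classic QueryParser API.

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.queryparser.classic.ParseException;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.Version;

// Hypothetical subclass showing the replacement for PositionFilter:
// force multi-token field queries to become boolean queries rather
// than phrase queries.
public class NonPhraseQueryParser extends QueryParser {

  public NonPhraseQueryParser(Version matchVersion, String field, Analyzer a) {
    super(matchVersion, field, a);
    // The configuration-level knob mentioned in LUCENE-4981:
    setAutoGeneratePhraseQueries(false);
  }

  @Override
  protected Query newFieldQuery(Analyzer analyzer, String field,
                                String queryText, boolean quoted)
      throws ParseException {
    // Treat every field query as unquoted, so an analyzer that emits
    // several tokens yields a BooleanQuery instead of a PhraseQuery.
    return super.newFieldQuery(analyzer, field, queryText, false);
  }
}
```

Either the constructor flag or the override alone would usually be enough; showing both makes the two hooks Steve mentions explicit.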


 Deprecate PositionFilter
 

 Key: LUCENE-4981
 URL: https://issues.apache.org/jira/browse/LUCENE-4981
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-4981.patch


 According to the documentation 
 (http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.PositionFilterFactory),
  PositionFilter is mainly useful to make query parsers generate boolean 
 queries instead of phrase queries although this problem can be solved at 
 query parsing level instead of analysis level (eg. using 
 QueryParser.setAutoGeneratePhraseQueries).
 So given that PositionFilter corrupts token graphs (see TestRandomChains), I 
 propose to deprecate it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



slf4j version included in Solr

2013-05-15 Thread Shawn Heisey
I'm wondering if there's a valid technical reason why we're on an older 
minor release of slf4j in Solr.  I've upgraded my slf4j components to 
1.7.5 and haven't seen any problem.


Thanks,
Shawn

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [ANNOUNCE] Apache PyLucene 4.3.0

2013-05-15 Thread Andi Vajda

On May 15, 2013, at 10:08, Michael McCandless luc...@mikemccandless.com wrote:

 Hmm the web site News section doesn't have 4.3.0?

I checked in the web site change last night. It probably needs some propagation 
time ?

Andi..

 
 Mike McCandless
 
 http://blog.mikemccandless.com
 
 
 On Wed, May 15, 2013 at 11:14 AM, Andi Vajda va...@apache.org wrote:
 
 I am pleased to announce the availability of Apache PyLucene 4.3.0, the
 first PyLucene release wrapping the Lucene 4.x API.
 
 Apache PyLucene, a subproject of Apache Lucene, is a Python extension for
 accessing Apache Lucene Core. Its goal is to allow you to use Lucene's text
 indexing and searching capabilities from Python. It is API compatible with
 the latest version of Lucene 4.x Core, 4.3.0.
 
 This release contains a number of bug fixes and improvements. Details can be
 found in the changes files:
 
 http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_4_3_0/CHANGES
 http://svn.apache.org/repos/asf/lucene/pylucene/trunk/jcc/CHANGES
 
 Apache PyLucene is available from the following download page:
 http://www.apache.org/dyn/closer.cgi/lucene/pylucene/pylucene-4.3.0-1-src.tar.gz
 
 When downloading from a mirror site, please remember to verify the downloads
 using signatures found on the Apache site:
 https://dist.apache.org/repos/dist/release/lucene/pylucene/KEYS
 
 For more information on Apache PyLucene, visit the project home page:
  http://lucene.apache.org/pylucene
 
 Andi..


[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #851: POMs out of sync

2013-05-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/851/

1 tests failed.
REGRESSION:  org.apache.solr.cloud.SyncSliceTest.testDistribSearch

Error Message:
shard1 is not consistent.  Got 305 from 
http://127.0.0.1:28511/collection1lastClient and got 5 from 
http://127.0.0.1:51104/collection1

Stack Trace:
java.lang.AssertionError: shard1 is not consistent.  Got 305 from 
http://127.0.0.1:28511/collection1lastClient and got 5 from 
http://127.0.0.1:51104/collection1
at __randomizedtesting.SeedInfo.seed([25FD682693834C62:A41BE63EE4DC2C5E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:963)
at org.apache.solr.cloud.SyncSliceTest.doTest(SyncSliceTest.java:238)




Build Log:
[...truncated 24260 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-3369) shards.tolerant=true broken on group and facet queries

2013-05-15 Thread Russell Black (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13658664#comment-13658664
 ] 

Russell Black commented on SOLR-3369:
-

This test case fail sometimes because it occasionally tries to use one of the 
down servers as the collator.  I'll fix this and include it in a new patch.

 shards.tolerant=true broken on group and facet queries
 --

 Key: SOLR-3369
 URL: https://issues.apache.org/jira/browse/SOLR-3369
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.0-ALPHA
 Environment: Distributed environment (shards)
Reporter: Russell Black
  Labels: patch
 Attachments: SOLR-3369-shards-tolerant.patch


 In a distributed environment, shards.tolerant=true allows for partial results 
 to be returned when individual shards are down.  For group=true and 
 facet=true queries, using this feature results in an error when shards are 
 down.  This patch allows users to use the shard tolerance feature with facet 
 and grouping queries. 
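For illustration only (not code from the patch), a SolrJ sketch of the request combination this issue targets, faceting and grouping together with shard tolerance; the field names are invented:

```java
import org.apache.solr.client.solrj.SolrQuery;

// Build a query that facets and groups while tolerating dead shards.
SolrQuery q = new SolrQuery("*:*");
q.set("shards.tolerant", true);    // return partial results from live shards
q.setFacet(true);
q.addFacetField("date");           // hypothetical facet field
q.set("group", true);
q.set("group.field", "category");  // hypothetical grouping field
```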

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-3369) shards.tolerant=true broken on group and facet queries

2013-05-15 Thread Russell Black (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13658664#comment-13658664
 ] 

Russell Black edited comment on SOLR-3369 at 5/15/13 6:53 PM:
--

This test case fails sometimes because it occasionally tries to use one of the 
down servers as the collator.  I'll fix this and include it in a new patch.

  was (Author: rblack):
This test case fail sometimes because it occasionally tries to use one of 
the down servers as the collator.  I'll fix this and include it in a new 
patch.
  
 shards.tolerant=true broken on group and facet queries
 --

 Key: SOLR-3369
 URL: https://issues.apache.org/jira/browse/SOLR-3369
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.0-ALPHA
 Environment: Distributed environment (shards)
Reporter: Russell Black
  Labels: patch
 Attachments: SOLR-3369-shards-tolerant.patch


 In a distributed environment, shards.tolerant=true allows for partial results 
 to be returned when individual shards are down.  For group=true and 
 facet=true queries, using this feature results in an error when shards are 
 down.  This patch allows users to use the shard tolerance feature with facet 
 and grouping queries. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-4824) Faceting results are changed after ingestion of documents past a certain number

2013-05-15 Thread Lakshmi Venkataswamy (JIRA)
Lakshmi Venkataswamy created SOLR-4824:
--

 Summary: Faceting results are changed after ingestion of documents 
past a certain number 
 Key: SOLR-4824
 URL: https://issues.apache.org/jira/browse/SOLR-4824
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.3, 4.2
 Environment: Ubuntu 12.04 LTS 12.04.2 
jre1.7.0_17
jboss-as-7.1.1.Final
Reporter: Lakshmi Venkataswamy


In upgrading from SOLR 3.6 to 4.2/4.3 and comparing results on fuzzy queries, 
I found that after a certain number of documents were ingested, the fuzzy 
query returned a drastically lower number of results.  We have approximately 
18,000 documents per day, and after ingesting approximately 40 days of 
documents, the next incremental day of documents results in a lower number of 
results for a fuzzy search.

The query :  
http://10.100.1.48:8080/solr/coreTV3/select?q=cc:worde~1&facet=on&facet.field=date&fl=date&facet.sort=

produces the following result before the threshold is crossed

<response><lst name="responseHeader">
<int name="status">0</int><int name="QTime">2349</int><lst name="params">
<str name="facet">on</str><str name="fl">date</str><str name="facet.sort"/>
<str name="q">cc:worde~1</str>
<str name="facet.field">date</str></lst></lst>
<result name="response" numFound="362803" start="0"></result>
<lst name="facet_counts"><lst name="facet_queries"/>
<lst name="facet_fields"><lst name="date">
<int name="2012-12-31">2866</int>
<int name="2013-01-01">11372</int>
<int name="2013-01-02">11514</int>
<int name="2013-01-03">12015</int>
<int name="2013-01-04">11746</int>
<int name="2013-01-05">10853</int>
<int name="2013-01-06">11053</int>
<int name="2013-01-07">11815</int>
<int name="2013-01-08">11427</int>
<int name="2013-01-09">11475</int>
<int name="2013-01-10">11461</int>
<int name="2013-01-11">12058</int>
<int name="2013-01-12">11335</int>
<int name="2013-01-13">12039</int>
<int name="2013-01-14">12064</int>
<int name="2013-01-15">12234</int>
<int name="2013-01-16">12545</int>
<int name="2013-01-17">11766</int>
<int name="2013-01-18">12197</int>
<int name="2013-01-19">11414</int>
<int name="2013-01-20">11633</int>
<int name="2013-01-21">12863</int>
<int name="2013-01-22">12378</int>
<int name="2013-01-23">11947</int>
<int name="2013-01-24">11822</int>
<int name="2013-01-25">11882</int>
<int name="2013-01-26">10474</int>
<int name="2013-01-27">11051</int>
<int name="2013-01-28">11776</int>
<int name="2013-01-29">11957</int>
<int name="2013-01-30">11260</int>
<int name="2013-01-31">8511</int>
</lst></lst><lst name="facet_dates"/><lst name="facet_ranges"/></lst></response>

Once the 40-day ingestion threshold is crossed, the results drop as shown 
below for the same query:

<response><lst name="responseHeader">
<int name="status">0</int><int name="QTime">2</int><lst name="params">
<str name="facet">on</str><str name="fl">date</str><str name="facet.sort"/>
<str name="q">cc:worde~1</str><str name="facet.field">date</str></lst></lst>
<result name="response" numFound="1338" start="0"></result>
<lst name="facet_counts"><lst name="facet_queries"/>
<lst name="facet_fields"><lst name="date">
<int name="2012-12-31">0</int>
<int name="2013-01-01">41</int>
<int name="2013-01-02">21</int>
<int name="2013-01-03">24</int>
<int name="2013-01-04">19</int>
<int name="2013-01-05">9</int>
<int name="2013-01-06">11</int>
<int name="2013-01-07">17</int>
<int name="2013-01-08">14</int>
<int name="2013-01-09">24</int>
<int name="2013-01-10">43</int>
<int name="2013-01-11">14</int>
<int name="2013-01-12">52</int>
<int name="2013-01-13">57</int>
<int name="2013-01-14">25</int>
<int name="2013-01-15">17</int>
<int name="2013-01-16">34</int>
<int name="2013-01-17">11</int>
<int name="2013-01-18">16</int>
<int name="2013-01-19">121</int>
<int name="2013-01-20">33</int>
<int name="2013-01-21">26</int>
<int name="2013-01-22">59</int>
<int name="2013-01-23">27</int>
<int name="2013-01-24">10</int>
<int name="2013-01-25">9</int>
<int name="2013-01-26">6</int>
<int name="2013-01-27">16</int>
<int name="2013-01-28">11</int>
<int name="2013-01-29">15</int>
<int name="2013-01-30">21</int>
<int name="2013-01-31">109</int>
<int name="2013-02-01">11</int>
<int name="2013-02-02">7</int>
<int name="2013-02-03">10</int>
<int name="2013-02-04">8</int>
<int name="2013-02-05">13</int>
<int name="2013-02-06">75</int>
<int name="2013-02-07">77</int>
<int name="2013-02-08">31</int>
<int name="2013-02-09">35</int>
<int name="2013-02-10">22</int>
<int name="2013-02-11">18</int>
<int name="2013-02-12">11</int>
<int name="2013-02-13">68</int>
<int name="2013-02-14">40</int>
</lst></lst><lst name="facet_dates"/><lst name="facet_ranges"/></lst></response>

I have also tested this with different months of data and have seen the same 
issue at around the same number of documents.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4824) Fuzzy / Faceting results are changed after ingestion of documents past a certain number

2013-05-15 Thread Lakshmi Venkataswamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lakshmi Venkataswamy updated SOLR-4824:
---

Summary: Fuzzy / Faceting results are changed after ingestion of documents 
past a certain number   (was: Faceting results are changed after ingestion of 
documents past a certain number )

 Fuzzy / Faceting results are changed after ingestion of documents past a 
 certain number 
 

 Key: SOLR-4824
 URL: https://issues.apache.org/jira/browse/SOLR-4824
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.2, 4.3
 Environment: Ubuntu 12.04 LTS 12.04.2 
 jre1.7.0_17
 jboss-as-7.1.1.Final
Reporter: Lakshmi Venkataswamy

 In upgrading from SOLR 3.6 to 4.2/4.3 I and comparing results on fuzzy 
 queries, I found that after a certain number of documents were ingested the 
 fuzzy query has drastically lower number of results.  We have approximately 
 18,000 documents per day and after ingesting approximately 40 days of 
 documents, the next incremental day of documents results in a lower number of 
 results of a fuzzy search.
 The query :  
 http://10.100.1.48:8080/solr/coreTV3/select?q=cc:worde~1&facet=on&facet.field=date&fl=date&facet.sort=
 produces the following result before the threshold is crossed
 <response><lst name="responseHeader">
 <int name="status">0</int><int name="QTime">2349</int><lst name="params">
 <str name="facet">on</str><str name="fl">date</str><str name="facet.sort"/>
 <str name="q">cc:worde~1</str>
 <str name="facet.field">date</str></lst></lst>
 <result name="response" numFound="362803" start="0"></result>
 <lst name="facet_counts"><lst name="facet_queries"/>
 <lst name="facet_fields"><lst name="date">
 <int name="2012-12-31">2866</int>
 <int name="2013-01-01">11372</int>
 <int name="2013-01-02">11514</int>
 <int name="2013-01-03">12015</int>
 <int name="2013-01-04">11746</int>
 <int name="2013-01-05">10853</int>
 <int name="2013-01-06">11053</int>
 <int name="2013-01-07">11815</int>
 <int name="2013-01-08">11427</int>
 <int name="2013-01-09">11475</int>
 <int name="2013-01-10">11461</int>
 <int name="2013-01-11">12058</int>
 <int name="2013-01-12">11335</int>
 <int name="2013-01-13">12039</int>
 <int name="2013-01-14">12064</int>
 <int name="2013-01-15">12234</int>
 <int name="2013-01-16">12545</int>
 <int name="2013-01-17">11766</int>
 <int name="2013-01-18">12197</int>
 <int name="2013-01-19">11414</int>
 <int name="2013-01-20">11633</int>
 <int name="2013-01-21">12863</int>
 <int name="2013-01-22">12378</int>
 <int name="2013-01-23">11947</int>
 <int name="2013-01-24">11822</int>
 <int name="2013-01-25">11882</int>
 <int name="2013-01-26">10474</int>
 <int name="2013-01-27">11051</int>
 <int name="2013-01-28">11776</int>
 <int name="2013-01-29">11957</int>
 <int name="2013-01-30">11260</int>
 <int name="2013-01-31">8511</int>
 </lst></lst><lst name="facet_dates"/><lst name="facet_ranges"/></lst></response>
 Once the 40 days of documents ingested threshold is crossed the results drop 
 as show below for the same query
 <response><lst name="responseHeader">
 <int name="status">0</int><int name="QTime">2</int><lst name="params">
 <str name="facet">on</str><str name="fl">date</str><str name="facet.sort"/>
 <str name="q">cc:worde~1</str><str name="facet.field">date</str></lst></lst>
 <result name="response" numFound="1338" start="0"></result>
 <lst name="facet_counts"><lst name="facet_queries"/>
 <lst name="facet_fields"><lst name="date">
 <int name="2012-12-31">0</int>
 <int name="2013-01-01">41</int>
 <int name="2013-01-02">21</int>
 <int name="2013-01-03">24</int>
 <int name="2013-01-04">19</int>
 <int name="2013-01-05">9</int>
 <int name="2013-01-06">11</int>
 <int name="2013-01-07">17</int>
 <int name="2013-01-08">14</int>
 <int name="2013-01-09">24</int>
 <int name="2013-01-10">43</int>
 <int name="2013-01-11">14</int>
 <int name="2013-01-12">52</int>
 <int name="2013-01-13">57</int>
 <int name="2013-01-14">25</int>
 <int name="2013-01-15">17</int>
 <int name="2013-01-16">34</int>
 <int name="2013-01-17">11</int>
 <int name="2013-01-18">16</int>
 <int name="2013-01-19">121</int>
 <int name="2013-01-20">33</int>
 <int name="2013-01-21">26</int>
 <int name="2013-01-22">59</int>
 <int name="2013-01-23">27</int>
 <int name="2013-01-24">10</int>
 <int name="2013-01-25">9</int>
 <int name="2013-01-26">6</int>
 <int name="2013-01-27">16</int>
 <int name="2013-01-28">11</int>
 <int name="2013-01-29">15</int>
 <int name="2013-01-30">21</int>
 <int name="2013-01-31">109</int>
 <int name="2013-02-01">11</int>
 <int name="2013-02-02">7</int>
 <int name="2013-02-03">10</int>
 <int name="2013-02-04">8</int>
 <int name="2013-02-05">13</int>
 <int name="2013-02-06">75</int>
 <int name="2013-02-07">77</int>
 <int name="2013-02-08">31</int>
 <int name="2013-02-09">35</int>
 <int name="2013-02-10">22</int>
 <int name="2013-02-11">18</int>
 <int name="2013-02-12">11</int>
 <int name="2013-02-13">68</int>
 <int name="2013-02-14">40</int>
 </lst></lst><lst name="facet_dates"/><lst name="facet_ranges"/></lst></response>
 I have also tested this with different months of data and have seen the same 
 issue  around the number of documents.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (SOLR-4824) Fuzzy / Faceting results are changed after ingestion of documents past a certain number

2013-05-15 Thread Lakshmi Venkataswamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lakshmi Venkataswamy updated SOLR-4824:
---

Description: 
In upgrading from SOLR 3.6 to 4.2/4.3 and comparing results on fuzzy queries, I 
found that after a certain number of documents were ingested the fuzzy query 
had a drastically lower number of results.  We have approximately 18,000 
documents per day and after ingesting approximately 40 days of documents, the 
next incremental day of documents results in a lower number of results of a 
fuzzy search.

The query :  
http://10.100.1.xx:8080/solr/corex/select?q=cc:worde~1&facet=on&facet.field=date&fl=date&facet.sort=

produces the following result before the threshold is crossed

<response><lst name="responseHeader">
<int name="status">0</int><int name="QTime">2349</int><lst name="params">
<str name="facet">on</str><str name="fl">date</str><str name="facet.sort"/>
<str name="q">cc:worde~1</str>
<str name="facet.field">date</str></lst></lst>
<result name="response" numFound="362803" start="0"></result>
<lst name="facet_counts"><lst name="facet_queries"/>
<lst name="facet_fields"><lst name="date">
<int name="2012-12-31">2866</int>
<int name="2013-01-01">11372</int>
<int name="2013-01-02">11514</int>
<int name="2013-01-03">12015</int>
<int name="2013-01-04">11746</int>
<int name="2013-01-05">10853</int>
<int name="2013-01-06">11053</int>
<int name="2013-01-07">11815</int>
<int name="2013-01-08">11427</int>
<int name="2013-01-09">11475</int>
<int name="2013-01-10">11461</int>
<int name="2013-01-11">12058</int>
<int name="2013-01-12">11335</int>
<int name="2013-01-13">12039</int>
<int name="2013-01-14">12064</int>
<int name="2013-01-15">12234</int>
<int name="2013-01-16">12545</int>
<int name="2013-01-17">11766</int>
<int name="2013-01-18">12197</int>
<int name="2013-01-19">11414</int>
<int name="2013-01-20">11633</int>
<int name="2013-01-21">12863</int>
<int name="2013-01-22">12378</int>
<int name="2013-01-23">11947</int>
<int name="2013-01-24">11822</int>
<int name="2013-01-25">11882</int>
<int name="2013-01-26">10474</int>
<int name="2013-01-27">11051</int>
<int name="2013-01-28">11776</int>
<int name="2013-01-29">11957</int>
<int name="2013-01-30">11260</int>
<int name="2013-01-31">8511</int>
</lst></lst><lst name="facet_dates"/><lst name="facet_ranges"/></lst></response>

Once the 40-day ingestion threshold is crossed, the results drop as shown 
below for the same query:

<response><lst name="responseHeader">
<int name="status">0</int><int name="QTime">2</int><lst name="params">
<str name="facet">on</str><str name="fl">date</str><str name="facet.sort"/>
<str name="q">cc:worde~1</str><str name="facet.field">date</str></lst></lst>
<result name="response" numFound="1338" start="0"></result>
<lst name="facet_counts"><lst name="facet_queries"/>
<lst name="facet_fields"><lst name="date">
<int name="2012-12-31">0</int>
<int name="2013-01-01">41</int>
<int name="2013-01-02">21</int>
<int name="2013-01-03">24</int>
<int name="2013-01-04">19</int>
<int name="2013-01-05">9</int>
<int name="2013-01-06">11</int>
<int name="2013-01-07">17</int>
<int name="2013-01-08">14</int>
<int name="2013-01-09">24</int>
<int name="2013-01-10">43</int>
<int name="2013-01-11">14</int>
<int name="2013-01-12">52</int>
<int name="2013-01-13">57</int>
<int name="2013-01-14">25</int>
<int name="2013-01-15">17</int>
<int name="2013-01-16">34</int>
<int name="2013-01-17">11</int>
<int name="2013-01-18">16</int>
<int name="2013-01-19">121</int>
<int name="2013-01-20">33</int>
<int name="2013-01-21">26</int>
<int name="2013-01-22">59</int>
<int name="2013-01-23">27</int>
<int name="2013-01-24">10</int>
<int name="2013-01-25">9</int>
<int name="2013-01-26">6</int>
<int name="2013-01-27">16</int>
<int name="2013-01-28">11</int>
<int name="2013-01-29">15</int>
<int name="2013-01-30">21</int>
<int name="2013-01-31">109</int>
<int name="2013-02-01">11</int>
<int name="2013-02-02">7</int>
<int name="2013-02-03">10</int>
<int name="2013-02-04">8</int>
<int name="2013-02-05">13</int>
<int name="2013-02-06">75</int>
<int name="2013-02-07">77</int>
<int name="2013-02-08">31</int>
<int name="2013-02-09">35</int>
<int name="2013-02-10">22</int>
<int name="2013-02-11">18</int>
<int name="2013-02-12">11</int>
<int name="2013-02-13">68</int>
<int name="2013-02-14">40</int>
</lst></lst><lst name="facet_dates"/><lst name="facet_ranges"/></lst></response>

I have also tested this with different months of data and have seen the same 
issue at around the same number of documents.

  was:
In upgrading from SOLR 3.6 to 4.2/4.3 I and comparing results on fuzzy queries, 
I found that after a certain number of documents were ingested the fuzzy query 
has drastically lower number of results.  We have approximately 18,000 
documents per day and after ingesting approximately 40 days of documents, the 
next incremental day of documents results in a lower number of results of a 
fuzzy search.

The query :  
http://10.100.1.48:8080/solr/coreTV3/select?q=cc:worde~1&facet=on&facet.field=date&fl=date&facet.sort=

produces the following result before the threshold is crossed

<response><lst name="responseHeader">
<int name="status">0</int><int name="QTime">2349</int><lst name="params">
<str name="facet">on</str><str name="fl">date</str><str name="facet.sort"/>
<str name="q">cc:worde~1</str>
<str name="facet.field">date</str></lst></lst>
<result name="response" numFound="362803" start="0"></result>
<lst name="facet_counts"><lst name="facet_queries"/>
<lst name="facet_fields"><lst name="date">
<int name="2012-12-31">2866</int>
<int name="2013-01-01">11372</int>
<int name="2013-01-02">11514</int>
int 

[jira] [Updated] (SOLR-4816) Change CloudSolrServer to send updates to the correct shard

2013-05-15 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-4816:
-

Description: This issue changes CloudSolrServer so it routes update 
requests to the correct shard. This would be a nice feature to have to 
eliminate the document routing overhead on the Solr servers.  (was: This issue 
adds a directUpdate method to CloudSolrServer which routes update requests to 
the correct shard. This would be a nice feature to have to eliminate the 
document routing overhead on the Solr servers.)
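A sketch of the client-side view under this change; the ZooKeeper host names and collection name are placeholders, and the routing itself would happen inside CloudSolrServer:

```java
import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.common.SolrInputDocument;

// With direct routing, the update below would be hashed on the uniqueKey
// and sent straight to the leader of the owning shard, rather than being
// forwarded between Solr nodes.
CloudSolrServer server = new CloudSolrServer("zk1:2181,zk2:2181,zk3:2181");
server.setDefaultCollection("collection1");

SolrInputDocument doc = new SolrInputDocument();
doc.addField("id", "doc-1");
server.add(doc);
server.commit();
server.shutdown();
```

The point of the patch is that this calling code stays the same while the per-document forwarding hop on the server side disappears.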

 Change CloudSolrServer to send updates to the correct shard
 ---

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816-sriesenberg.patch


 This issue changes CloudSolrServer so it routes update requests to the 
 correct shard. This would be a nice feature to have to eliminate the document 
 routing overhead on the Solr servers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4823) Split LBHttpSolrServer into two classes one for the solrj use case and one for the solr cloud use case

2013-05-15 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-4823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13658700#comment-13658700
 ] 

Jan Høydahl commented on SOLR-4823:
---

Yea, load balancing should be handled separately for cloud.

How is LB for cloud handled today? Does it round-robin between all nodes in 
the cluster, or does it intelligently load balance only between nodes which 
have a core in the collection? And if you specify an explicit shard ID, will 
it load balance between all replicas in that shard only?

 Split LBHttpSolrServer into two classes one for the solrj use case and one 
 for the solr cloud use case
 --

 Key: SOLR-4823
 URL: https://issues.apache.org/jira/browse/SOLR-4823
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: philip hoy
Priority: Minor

 The LBHttpSolrServer has too many responsibilities. It could perhaps be 
 broken into two classes: one in solrj, to be used in place of an external 
 load balancer, that balances across a known set of solr servers defined at 
 construction time; and one in solr core, to be used by the solr cloud 
 components, that balances across servers dependent on the request.
 To save code duplication, if much arises, an abstract base class could be 
 introduced into solrj.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4824) Fuzzy / Faceting results are changed after ingestion of documents past a certain number

2013-05-15 Thread Jack Krupansky (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13658718#comment-13658718
 ] 

Jack Krupansky commented on SOLR-4824:
--

Lucene FuzzyQuery has a parameter named maxExpansions, which defaults to 50, 
which I believe is the largest number of candidate terms the fuzzy query will 
rewrite, so that once you have that many matches, I don't think any more will 
be found. Robert or one of the other Lucene experts can confirm.

At the Lucene level this can be changed, with the FuzzyQuery(Term term, int 
maxEdits, int prefixLength, int maxExpansions, boolean transpositions) 
constructor, but the Solr query parser uses the FuzzyQuery(Term term, int 
maxEdits, int prefixLength) constructor, so there is no provision for 
overriding that limit of 50.
Also note that even in Lucene, maxExpansions is limited to maxBooleanClauses, 
which would be 1024 unless you override that in solrconfig.xml. Not that that 
would do you any good unless you had a query parser that let you set 
maxExpansions.

Still, that is a reasonable enhancement request.
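The capping effect described above can be illustrated with a self-contained sketch. This is not Lucene's rewrite code; `maxExpansions` here is just a local limit mirroring FuzzyQuery's default of 50, to show the arithmetic of why extra candidate terms are silently dropped:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Toy illustration of maxExpansions-style capping: only the first N
// candidate terms survive the rewrite, so matches beyond them are lost.
public class ExpansionCap {
    static List<String> expand(List<String> candidates, int maxExpansions) {
        return candidates.stream()
                .limit(maxExpansions)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // 120 candidate terms within edit distance, but a cap of 50:
        List<String> candidates = IntStream.range(0, 120)
                .mapToObj(i -> "term" + i)
                .collect(Collectors.toList());
        // 70 candidate terms (and all their matching documents) are dropped.
        System.out.println(expand(candidates, 50).size()); // 50
    }
}
```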


 Fuzzy / Faceting results are changed after ingestion of documents past a 
 certain number 
 

 Key: SOLR-4824
 URL: https://issues.apache.org/jira/browse/SOLR-4824
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.2, 4.3
 Environment: Ubuntu 12.04 LTS 12.04.2 
 jre1.7.0_17
 jboss-as-7.1.1.Final
Reporter: Lakshmi Venkataswamy

 In upgrading from SOLR 3.6 to 4.2/4.3 and comparing results on fuzzy queries, 
 I found that after a certain number of documents were ingested, the fuzzy 
 query returned a drastically lower number of results.  We have approximately 
 18,000 documents per day, and after ingesting approximately 40 days of 
 documents, the next incremental day of documents results in far fewer results 
 for a fuzzy search.
 The query :  
 http://10.100.1.xx:8080/solr/corex/select?q=cc:worde~1&facet=on&facet.field=date&fl=date&facet.sort
 produces the following result before the threshold is crossed
 <response><lst name="responseHeader">
 <int name="status">0</int><int name="QTime">2349</int><lst name="params">
 <str name="facet">on</str><str name="fl">date</str><str name="facet.sort"/>
 <str name="q">cc:worde~1</str><str name="facet.field">date</str></lst></lst>
 <result name="response" numFound="362803" start="0"></result>
 <lst name="facet_counts"><lst name="facet_queries"/>
 <lst name="facet_fields"><lst name="date">
 <int name="2012-12-31">2866</int>
 <int name="2013-01-01">11372</int>
 <int name="2013-01-02">11514</int>
 <int name="2013-01-03">12015</int>
 <int name="2013-01-04">11746</int>
 <int name="2013-01-05">10853</int>
 <int name="2013-01-06">11053</int>
 <int name="2013-01-07">11815</int>
 <int name="2013-01-08">11427</int>
 <int name="2013-01-09">11475</int>
 <int name="2013-01-10">11461</int>
 <int name="2013-01-11">12058</int>
 <int name="2013-01-12">11335</int>
 <int name="2013-01-13">12039</int>
 <int name="2013-01-14">12064</int>
 <int name="2013-01-15">12234</int>
 <int name="2013-01-16">12545</int>
 <int name="2013-01-17">11766</int>
 <int name="2013-01-18">12197</int>
 <int name="2013-01-19">11414</int>
 <int name="2013-01-20">11633</int>
 <int name="2013-01-21">12863</int>
 <int name="2013-01-22">12378</int>
 <int name="2013-01-23">11947</int>
 <int name="2013-01-24">11822</int>
 <int name="2013-01-25">11882</int>
 <int name="2013-01-26">10474</int>
 <int name="2013-01-27">11051</int>
 <int name="2013-01-28">11776</int>
 <int name="2013-01-29">11957</int>
 <int name="2013-01-30">11260</int>
 <int name="2013-01-31">8511</int>
 </lst></lst><lst name="facet_dates"/><lst name="facet_ranges"/></lst></response>
 Once the 40-day ingestion threshold is crossed, the results drop 
 as shown below for the same query
 <response><lst name="responseHeader">
 <int name="status">0</int><int name="QTime">2</int><lst name="params">
 <str name="facet">on</str><str name="fl">date</str><str name="facet.sort"/>
 <str name="q">cc:worde~1</str><str name="facet.field">date</str></lst></lst>
 <result name="response" numFound="1338" start="0"></result>
 <lst name="facet_counts"><lst name="facet_queries"/>
 <lst name="facet_fields"><lst name="date">
 <int name="2012-12-31">0</int>
 <int name="2013-01-01">41</int>
 <int name="2013-01-02">21</int>
 <int name="2013-01-03">24</int>
 <int name="2013-01-04">19</int>
 <int name="2013-01-05">9</int>
 <int name="2013-01-06">11</int>
 <int name="2013-01-07">17</int>
 <int name="2013-01-08">14</int>
 <int name="2013-01-09">24</int>
 <int name="2013-01-10">43</int>
 <int name="2013-01-11">14</int>
 <int name="2013-01-12">52</int>
 <int name="2013-01-13">57</int>
 <int name="2013-01-14">25</int>
 <int name="2013-01-15">17</int>
 <int name="2013-01-16">34</int>
 <int name="2013-01-17">11</int>
 <int name="2013-01-18">16</int>
 <int name="2013-01-19">121</int>
 <int name="2013-01-20">33</int>
 <int name="2013-01-21">26</int>
 <int name="2013-01-22">59</int>
 <int name="2013-01-23">27</int>
 <int name="2013-01-24">10</int>
 <int name="2013-01-25">9</int>
 <int name="2013-01-26">6</int>
 <int name="2013-01-27">16</int>
 <int name="2013-01-28">11</int>
 <int name="2013-01-29">15</int>
 <int name="2013-01-30">21</int>
 <int name="2013-01-31">109</int>
 <int name="2013-02-01">11</int>
 int 

[jira] [Updated] (SOLR-4048) Add a getRecursive method to NamedList

2013-05-15 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-4048:
---

Attachment: SOLR-4048.patch

Updated patch following advice from [~rcmuir] and improving the javadoc.


 Add a getRecursive method to NamedList
 

 Key: SOLR-4048
 URL: https://issues.apache.org/jira/browse/SOLR-4048
 Project: Solr
  Issue Type: New Feature
Affects Versions: 4.0
Reporter: Shawn Heisey
Priority: Minor
 Fix For: 4.4

 Attachments: SOLR-4048.patch, SOLR-4048.patch, SOLR-4048.patch, 
 SOLR-4048.patch, SOLR-4048.patch, SOLR-4048.patch


 Most of the time when accessing data from a NamedList, what you'll be doing 
 is using get() to retrieve another NamedList, and doing so over and over 
 until you reach the final level, where you'll actually retrieve the value you 
 want.
 I propose adding a method to NamedList which would do all that heavy lifting 
 for you.  I created the following method for my own code.  It could be 
 adapted fairly easily for inclusion into NamedList itself.  The only reason I 
 did not include it as a patch is because I figure you'll want to ensure it 
 meets all your particular coding guidelines, and that the JavaDoc is much 
 better than I have done here:
 {code}
    /**
     * Recursively parse a NamedList and return the value at the last level,
     * assuming that the object found at each level is also a NamedList. For
     * example, if response is the NamedList response from the Solr4 mbean
     * handler, the following code makes sense:
     *
     * String coreName = (String) getRecursiveFromResponse(response, new
     * String[] { "solr-mbeans", "CORE", "core", "stats", "coreName" });
     *
     * @param namedList the NamedList to parse
     * @param args A list of values to recursively request
     * @return the object at the last level.
     * @throws SolrServerException
     */
    @SuppressWarnings("unchecked")
    private final Object getRecursiveFromResponse(
            NamedList<Object> namedList, String[] args)
            throws SolrServerException
    {
        NamedList<Object> list = null;
        Object value = null;
        try
        {
            for (String key : args)
            {
                if (list == null)
                {
                    list = namedList;
                }
                else
                {
                    list = (NamedList<Object>) value;
                }
                value = list.get(key);
            }
            return value;
        }
        catch (Exception e)
        {
            throw new SolrServerException(
                    "Failed to recursively parse NamedList", e);
        }
    }
 {code}
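For illustration, the traversal this method performs can be exercised against plain nested maps. NamedList is Solr's class, so the stand-in below uses java.util.Map and is only a sketch of the same recursive lookup, not the proposed NamedList API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for NamedList<Object>, to show the traversal only.
public class RecursiveGetDemo {
    @SuppressWarnings("unchecked")
    static Object getRecursive(Map<String, Object> list, String... args) {
        Object value = list;
        // Descend one level per key, assuming each intermediate value
        // is itself a map (as the javadoc above assumes for NamedList).
        for (String key : args) {
            value = ((Map<String, Object>) value).get(key);
        }
        return value;
    }

    public static void main(String[] args) {
        Map<String, Object> stats = new HashMap<>();
        stats.put("coreName", "collection1");
        Map<String, Object> core = new HashMap<>();
        core.put("stats", stats);
        Map<String, Object> response = new HashMap<>();
        response.put("core", core);
        // Walks response -> core -> stats -> coreName in one call.
        System.out.println(getRecursive(response, "core", "stats", "coreName")); // collection1
    }
}
```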

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4048) Add a getRecursive method to NamedList

2013-05-15 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13658724#comment-13658724
 ] 

Shawn Heisey commented on SOLR-4048:


Adding find and deprecating get seems reasonable to me, though it would be a 
new issue.


 Add a getRecursive method to NamedList
 

 Key: SOLR-4048
 URL: https://issues.apache.org/jira/browse/SOLR-4048
 Project: Solr
  Issue Type: New Feature
Affects Versions: 4.0
Reporter: Shawn Heisey
Priority: Minor
 Fix For: 4.4

 Attachments: SOLR-4048.patch, SOLR-4048.patch, SOLR-4048.patch, 
 SOLR-4048.patch, SOLR-4048.patch, SOLR-4048.patch


 Most of the time when accessing data from a NamedList, what you'll be doing 
 is using get() to retrieve another NamedList, and doing so over and over 
 until you reach the final level, where you'll actually retrieve the value you 
 want.
 I propose adding a method to NamedList which would do all that heavy lifting 
 for you.  I created the following method for my own code.  It could be 
 adapted fairly easily for inclusion into NamedList itself.  The only reason I 
 did not include it as a patch is because I figure you'll want to ensure it 
 meets all your particular coding guidelines, and that the JavaDoc is much 
 better than I have done here:
 {code}
    /**
     * Recursively parse a NamedList and return the value at the last level,
     * assuming that the object found at each level is also a NamedList. For
     * example, if response is the NamedList response from the Solr4 mbean
     * handler, the following code makes sense:
     *
     * String coreName = (String) getRecursiveFromResponse(response, new
     * String[] { "solr-mbeans", "CORE", "core", "stats", "coreName" });
     *
     * @param namedList the NamedList to parse
     * @param args A list of values to recursively request
     * @return the object at the last level.
     * @throws SolrServerException
     */
    @SuppressWarnings("unchecked")
    private final Object getRecursiveFromResponse(
            NamedList<Object> namedList, String[] args)
            throws SolrServerException
    {
        NamedList<Object> list = null;
        Object value = null;
        try
        {
            for (String key : args)
            {
                if (list == null)
                {
                    list = namedList;
                }
                else
                {
                    list = (NamedList<Object>) value;
                }
                value = list.get(key);
            }
            return value;
        }
        catch (Exception e)
        {
            throw new SolrServerException(
                    "Failed to recursively parse NamedList", e);
        }
    }
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-4825) Port SolrLogFormatter to log4j

2013-05-15 Thread Mark Miller (JIRA)
Mark Miller created SOLR-4825:
-

 Summary: Port SolrLogFormatter to log4j
 Key: SOLR-4825
 URL: https://issues.apache.org/jira/browse/SOLR-4825
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller
Priority: Critical


Cloud tests are extremely difficult to debug without this logging - can't go 
back to the early days of sys outs at this point. I'll port this formatter to 
work with log4j as a Layout.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4824) Fuzzy / Faceting results are changed after ingestion of documents past a certain number

2013-05-15 Thread Lakshmi Venkataswamy (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13658732#comment-13658732
 ] 

Lakshmi Venkataswamy commented on SOLR-4824:


Not sure I understand.  When I have 30 days of data I get 362,803 results.  
When I add another 11 days' worth of data, the same search returns 1,338 
results.  Even if there is a maximum limit, would I not see a capping of the 
results as opposed to a drastic drop?

 Fuzzy / Faceting results are changed after ingestion of documents past a 
 certain number 
 

 Key: SOLR-4824
 URL: https://issues.apache.org/jira/browse/SOLR-4824
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.2, 4.3
 Environment: Ubuntu 12.04 LTS 12.04.2 
 jre1.7.0_17
 jboss-as-7.1.1.Final
Reporter: Lakshmi Venkataswamy

 In upgrading from SOLR 3.6 to 4.2/4.3 and comparing results on fuzzy queries, 
 I found that after a certain number of documents were ingested, the fuzzy 
 query returned a drastically lower number of results.  We have approximately 
 18,000 documents per day, and after ingesting approximately 40 days of 
 documents, the next incremental day of documents results in far fewer results 
 for a fuzzy search.
 The query :  
 http://10.100.1.xx:8080/solr/corex/select?q=cc:worde~1&facet=on&facet.field=date&fl=date&facet.sort
 produces the following result before the threshold is crossed
 <response><lst name="responseHeader">
 <int name="status">0</int><int name="QTime">2349</int><lst name="params">
 <str name="facet">on</str><str name="fl">date</str><str name="facet.sort"/>
 <str name="q">cc:worde~1</str><str name="facet.field">date</str></lst></lst>
 <result name="response" numFound="362803" start="0"></result>
 <lst name="facet_counts"><lst name="facet_queries"/>
 <lst name="facet_fields"><lst name="date">
 <int name="2012-12-31">2866</int>
 <int name="2013-01-01">11372</int>
 <int name="2013-01-02">11514</int>
 <int name="2013-01-03">12015</int>
 <int name="2013-01-04">11746</int>
 <int name="2013-01-05">10853</int>
 <int name="2013-01-06">11053</int>
 <int name="2013-01-07">11815</int>
 <int name="2013-01-08">11427</int>
 <int name="2013-01-09">11475</int>
 <int name="2013-01-10">11461</int>
 <int name="2013-01-11">12058</int>
 <int name="2013-01-12">11335</int>
 <int name="2013-01-13">12039</int>
 <int name="2013-01-14">12064</int>
 <int name="2013-01-15">12234</int>
 <int name="2013-01-16">12545</int>
 <int name="2013-01-17">11766</int>
 <int name="2013-01-18">12197</int>
 <int name="2013-01-19">11414</int>
 <int name="2013-01-20">11633</int>
 <int name="2013-01-21">12863</int>
 <int name="2013-01-22">12378</int>
 <int name="2013-01-23">11947</int>
 <int name="2013-01-24">11822</int>
 <int name="2013-01-25">11882</int>
 <int name="2013-01-26">10474</int>
 <int name="2013-01-27">11051</int>
 <int name="2013-01-28">11776</int>
 <int name="2013-01-29">11957</int>
 <int name="2013-01-30">11260</int>
 <int name="2013-01-31">8511</int>
 </lst></lst><lst name="facet_dates"/><lst name="facet_ranges"/></lst></response>
 Once the 40-day ingestion threshold is crossed, the results drop 
 as shown below for the same query
 <response><lst name="responseHeader">
 <int name="status">0</int><int name="QTime">2</int><lst name="params">
 <str name="facet">on</str><str name="fl">date</str><str name="facet.sort"/>
 <str name="q">cc:worde~1</str><str name="facet.field">date</str></lst></lst>
 <result name="response" numFound="1338" start="0"></result>
 <lst name="facet_counts"><lst name="facet_queries"/>
 <lst name="facet_fields"><lst name="date">
 <int name="2012-12-31">0</int>
 <int name="2013-01-01">41</int>
 <int name="2013-01-02">21</int>
 <int name="2013-01-03">24</int>
 <int name="2013-01-04">19</int>
 <int name="2013-01-05">9</int>
 <int name="2013-01-06">11</int>
 <int name="2013-01-07">17</int>
 <int name="2013-01-08">14</int>
 <int name="2013-01-09">24</int>
 <int name="2013-01-10">43</int>
 <int name="2013-01-11">14</int>
 <int name="2013-01-12">52</int>
 <int name="2013-01-13">57</int>
 <int name="2013-01-14">25</int>
 <int name="2013-01-15">17</int>
 <int name="2013-01-16">34</int>
 <int name="2013-01-17">11</int>
 <int name="2013-01-18">16</int>
 <int name="2013-01-19">121</int>
 <int name="2013-01-20">33</int>
 <int name="2013-01-21">26</int>
 <int name="2013-01-22">59</int>
 <int name="2013-01-23">27</int>
 <int name="2013-01-24">10</int>
 <int name="2013-01-25">9</int>
 <int name="2013-01-26">6</int>
 <int name="2013-01-27">16</int>
 <int name="2013-01-28">11</int>
 <int name="2013-01-29">15</int>
 <int name="2013-01-30">21</int>
 <int name="2013-01-31">109</int>
 <int name="2013-02-01">11</int>
 <int name="2013-02-02">7</int>
 <int name="2013-02-03">10</int>
 <int name="2013-02-04">8</int>
 <int name="2013-02-05">13</int>
 <int name="2013-02-06">75</int>
 <int name="2013-02-07">77</int>
 <int name="2013-02-08">31</int>
 <int name="2013-02-09">35</int>
 <int name="2013-02-10">22</int>
 <int name="2013-02-11">18</int>
 <int name="2013-02-12">11</int>
 <int name="2013-02-13">68</int>
 <int name="2013-02-14">40</int>
 </lst></lst><lst name="facet_dates"/><lst name="facet_ranges"/></lst></response>
 I have also tested this with different months of data and have seen the same 
 issue around the number of documents.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your 

[jira] [Updated] (SOLR-4825) Port SolrLogFormatter to log4j

2013-05-15 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-4825:
--

Attachment: SOLR-4825.patch

Patch attached.

 Port SolrLogFormatter to log4j
 --

 Key: SOLR-4825
 URL: https://issues.apache.org/jira/browse/SOLR-4825
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller
Priority: Critical
 Attachments: SOLR-4825.patch


 Cloud tests are extremely difficult to debug without this logging - can't go 
 back to the early days of sys outs at this point. I'll port this formatter to 
 work with log4j as a Layout.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4825) Port SolrLogFormatter to log4j

2013-05-15 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-4825:
--

Fix Version/s: 4.4
   5.0

 Port SolrLogFormatter to log4j
 --

 Key: SOLR-4825
 URL: https://issues.apache.org/jira/browse/SOLR-4825
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller
Priority: Critical
 Fix For: 5.0, 4.4

 Attachments: SOLR-4825.patch


 Cloud tests are extremely difficult to debug without this logging - can't go 
 back to the early days of sys outs at this point. I'll port this formatter to 
 work with log4j as a Layout.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-4825) Port SolrLogFormatter to log4j

2013-05-15 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reassigned SOLR-4825:
-

Assignee: Mark Miller

 Port SolrLogFormatter to log4j
 --

 Key: SOLR-4825
 URL: https://issues.apache.org/jira/browse/SOLR-4825
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Critical
 Fix For: 5.0, 4.4

 Attachments: SOLR-4825.patch


 Cloud tests are extremely difficult to debug without this logging - can't go 
 back to the early days of sys outs at this point. I'll port this formatter to 
 work with log4j as a Layout.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4816) Change CloudSolrServer to send updates to the correct shard

2013-05-15 Thread Stephen Riesenberg (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13658772#comment-13658772
 ] 

Stephen Riesenberg commented on SOLR-4816:
--

{quote}It's collection specific.
See DocCollection.getRouter(){quote}

No javadoc on that method. In my environment, collections were set up for 
implicit routing because numShards was not specified at create time. Joel 
straightened us out. Only thing I found was at 
http://wiki.apache.org/solr/SolrCloud#Creating_cores_via_CoreAdmin which 
doesn't talk about document routing.

The NPEs I mentioned before have been fixed, one other fix was made to set the 
commitWithinMS and pass params to the sub-requests. Now I am running a load 
test against this to see how it performs. I'll post an updated patch with my 
changes a bit later.

{quote}We should make sure we have tests that would have caught these as 
well!{quote}

I didn't run the test suite. Seems like a good idea! I'll get caught up 
eventually. Having run it now with the earlier patches, seeing the NPEs. Good 
stuff. Thanks.

 Change CloudSolrServer to send updates to the correct shard
 ---

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816-sriesenberg.patch


 This issue changes CloudSolrServer so it routes update requests to the 
 correct shard. This would be a nice feature to have to eliminate the document 
 routing overhead on the Solr servers.
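The client-side routing idea amounts to hashing the document id onto a shard slot before sending the update. A minimal sketch follows, assuming a plain hashCode-based slot function; SolrCloud's real router hashes differently (and per-collection, as the comments above note), so the names and hash here are illustrative only:

```java
// Illustrative client-side shard routing; not CloudSolrServer's real logic.
public class ShardRouter {
    // Map a document id onto one of numShards slots.
    static int shardFor(String docId, int numShards) {
        return Math.floorMod(docId.hashCode(), numShards);
    }

    public static void main(String[] args) {
        int numShards = 4;
        // The same id always routes to the same slot, so the client can send
        // the update directly to that shard instead of letting the receiving
        // Solr server forward it (the routing overhead this issue removes).
        System.out.println(shardFor("doc-17", numShards) == shardFor("doc-17", numShards)); // true
    }
}
```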

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-4816) Change CloudSolrServer to send updates to the correct shard

2013-05-15 Thread Stephen Riesenberg (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13658772#comment-13658772
 ] 

Stephen Riesenberg edited comment on SOLR-4816 at 5/15/13 8:57 PM:
---

{quote}It's collection specific.
See DocCollection.getRouter(){quote}

No javadoc on that method. In my environment, collections were set up for 
implicit routing because numShards was not specified at create time. Joel 
straightened us out. Only thing I found was at 
http://wiki.apache.org/solr/SolrCloud#Creating_cores_via_CoreAdmin which 
doesn't talk about document routing.

The NPEs I mentioned before have been fixed, one other fix was made to set the 
commitWithinMS and pass params to the sub-requests *and* document router. Now I 
am running a load test against this to see how it performs. I'll post an 
updated patch with my changes a bit later.

{quote}We should make sure we have tests that would have caught these as 
well!{quote}

I didn't run the test suite. Seems like a good idea! I'll get caught up 
eventually. Having run it now with the earlier patches, seeing the NPEs. Good 
stuff. Thanks.

  was (Author: sriesenberg):
{quote}It's collection specific.
See DocCollection.getRouter(){quote}

No javadoc on that method. In my environment, collections were set up for 
implicit routing because numShards was not specified at create time. Joel 
straightened us out. Only thing I found was at 
http://wiki.apache.org/solr/SolrCloud#Creating_cores_via_CoreAdmin which 
doesn't talk about document routing.

The NPEs I mentioned before have been fixed, one other fix was made to set the 
commitWithinMS and pass params to the sub-requests. Now I am running a load 
test against this to see how it performs. I'll post an updated patch with my 
changes a bit later.

{quote}We should make sure we have tests that would have caught these as 
well!{quote}

I didn't run the test suite. Seems like a good idea! I'll get caught up 
eventually. Having run it now with the earlier patches, seeing the NPEs. Good 
stuff. Thanks.
  
 Change CloudSolrServer to send updates to the correct shard
 ---

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816-sriesenberg.patch


 This issue changes CloudSolrServer so it routes update requests to the 
 correct shard. This would be a nice feature to have to eliminate the document 
 routing overhead on the Solr servers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-4826) TikaException Parsing PPTX file

2013-05-15 Thread Thomas Weidman (JIRA)
Thomas Weidman created SOLR-4826:


 Summary: TikaException Parsing PPTX file
 Key: SOLR-4826
 URL: https://issues.apache.org/jira/browse/SOLR-4826
 Project: Solr
  Issue Type: Bug
Reporter: Thomas Weidman


Error parsing PPTX file:


org.apache.tika.exception.TikaException: TIKA-198: Illegal IOException from 
org.apache.tika.parser.microsoft.ooxml.OOXMLParser@33d839d1


org.apache.solr.common.SolrException: org.apache.tika.exception.TikaException: 
TIKA-198: Illegal IOException from 
org.apache.tika.parser.microsoft.ooxml.OOXMLParser@33d839d1
at 
org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:225)
at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
at 
org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:240)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1699)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:455)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:276)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at 
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at 
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at 
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at 
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:555)
at 
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
at 
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
at 
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
at 
org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
at java.lang.Thread.run(Thread.java:619)
Caused by: org.apache.tika.exception.TikaException: TIKA-198: Illegal 
IOException from org.apache.tika.parser.microsoft.ooxml.OOXMLParser@33d839d1
at 
org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:248)
at 
org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:242)
at 
org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:120)
at 
org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:219)
... 19 more
Caused by: java.io.IOException: Unable to read entire header; 0 bytes read; 
expected 512 bytes
at 
org.apache.poi.poifs.storage.HeaderBlock.alertShortRead(HeaderBlock.java:226)
at 
org.apache.poi.poifs.storage.HeaderBlock.readFirst512(HeaderBlock.java:207)
at 
org.apache.poi.poifs.storage.HeaderBlock.&lt;init&gt;(HeaderBlock.java:104)
at 
org.apache.poi.poifs.filesystem.POIFSFileSystem.&lt;init&gt;(POIFSFileSystem.java:138)
at 
org.apache.tika.parser.microsoft.ooxml.AbstractOOXMLExtractor.handleEmbeddedOLE(AbstractOOXMLExtractor.java:149)
at 
org.apache.tika.parser.microsoft.ooxml.AbstractOOXMLExtractor.handleEmbeddedParts(AbstractOOXMLExtractor.java:129)
at 
org.apache.tika.parser.microsoft.ooxml.AbstractOOXMLExtractor.getXHTML(AbstractOOXMLExtractor.java:107)
at 
org.apache.tika.parser.microsoft.ooxml.OOXMLExtractorFactory.parse(OOXMLExtractorFactory.java:112)
at 
org.apache.tika.parser.microsoft.ooxml.OOXMLParser.parse(OOXMLParser.java:82)
at 
org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:242)
... 22 more

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Help working with patch for SOLR-3076 (Block Joins)

2013-05-15 Thread Tom Burton-West
Hello,

I would like to build Solr with the July 12th SOLR-3076 patch.  How do I
determine what version/revision of Solr I need to check out to build this
patch against?  I tried using the latest branch_4x and got a bunch of
errors.  I suspect I need an earlier revision, or maybe trunk.  This patch
also seems to have been created with git instead of svn, so maybe I am doing
something wrong.  (Error messages appended below)


Tom


branch_4x
Checked out revision 1483079

patch -p0 -i SOLR-3076.patch --dry-run
can't find file to patch at input line 5
Perhaps you used the wrong -p or --strip option?
The text leading up to this was:
--
|diff --git a/lucene/module-build.xml b/lucene/module-build.xml
|index 62cfd96..2a746fc 100644
|--- a/lucene/module-build.xml
|+++ b/lucene/module-build.xml
--

Tried again with the git patch command
 patch -p1 -i SOLR-3076.patch --dry-run
patching file lucene/module-build.xml
Hunk #1 succeeded at 433 (offset 85 lines).
patching file solr/common-build.xml
Hunk #1 FAILED at 89.
Hunk #2 FAILED at 134.
Hunk #3 FAILED at 151.
3 out of 3 hunks FAILED -- saving rejects to file solr/common-build.xml.rej
patching file
solr/core/src/java/org/apache/solr/handler/UpdateRequestHandler.java
...
lots more failed messages


[jira] [Updated] (SOLR-3369) shards.tolerant=true broken on group and facet queries

2013-05-15 Thread Russell Black (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russell Black updated SOLR-3369:


Attachment: (was: SOLR-3369-shards-tolerant.patch)

 shards.tolerant=true broken on group and facet queries
 --

 Key: SOLR-3369
 URL: https://issues.apache.org/jira/browse/SOLR-3369
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.0-ALPHA
 Environment: Distributed environment (shards)
Reporter: Russell Black
  Labels: patch

 In a distributed environment, shards.tolerant=true allows for partial results 
 to be returned when individual shards are down.  For group=true and 
 facet=true queries, using this feature results in an error when shards are 
 down.  This patch allows users to use the shard tolerance feature with facet 
 and grouping queries. 
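The contract shards.tolerant=true is meant to guarantee can be sketched as "merge whatever shards responded, record which ones failed" instead of failing the whole request. The names below are hypothetical, not Solr's implementation:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Toy partial-results merge: a failed shard is skipped, not fatal.
public class TolerantMerge {
    static List<String> merge(List<String> shards,
                              Function<String, List<String>> query,
                              List<String> failed) {
        List<String> out = new ArrayList<>();
        for (String shard : shards) {
            try {
                out.addAll(query.apply(shard));
            } catch (RuntimeException e) {
                failed.add(shard); // shards.tolerant=true: note it and move on
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> failed = new ArrayList<>();
        List<String> docs = merge(List.of("s1", "s2"),
                s -> {
                    if (s.equals("s2")) throw new RuntimeException("shard down");
                    return List.of(s + "-doc");
                },
                failed);
        System.out.println(docs);   // [s1-doc]
        System.out.println(failed); // [s2]
    }
}
```

The bug this issue fixes is, in effect, that the group/facet merge paths lacked the try/skip step that the plain query path already had.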

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-3369) shards.tolerant=true broken on group and facet queries

2013-05-15 Thread Russell Black (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russell Black updated SOLR-3369:


Attachment: SOLR-3369-shards-tolerant.patch

Added test case

 shards.tolerant=true broken on group and facet queries
 --

 Key: SOLR-3369
 URL: https://issues.apache.org/jira/browse/SOLR-3369
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.0-ALPHA
 Environment: Distributed environment (shards)
Reporter: Russell Black
  Labels: patch
 Attachments: SOLR-3369-shards-tolerant.patch


 In a distributed environment, shards.tolerant=true allows for partial results 
 to be returned when individual shards are down.  For group=true and 
 facet=true queries, using this feature results in an error when shards are 
 down.  This patch allows users to use the shard tolerance feature with facet 
 and grouping queries. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-3369) shards.tolerant=true broken on group and facet queries

2013-05-15 Thread Russell Black (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13659018#comment-13659018
 ] 

Russell Black edited comment on SOLR-3369 at 5/15/13 11:33 PM:
---

As requested, I added a test case to the patch file.  

  was (Author: rblack):
Added test case
  



[jira] [Commented] (SOLR-3369) shards.tolerant=true broken on group and facet queries

2013-05-15 Thread Ryan McKinley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13659021#comment-13659021
 ] 

Ryan McKinley commented on SOLR-3369:
-

The latest patch adds:
{code}
+  // test group query
+  queryPartialResults(upShards, upClients,
+      "q", "*:*",
+      "rows", 100,
+      "fl", "id," + i1,
+      "group", "true",
+      "group.query", t1 + ":kings OR " + t1 + ":eggs",
+      "group.limit", 10,
+      "sort", i1 + " asc, id asc",
+      CommonParams.TIME_ALLOWED, 1,
+      ShardParams.SHARDS_INFO, "true",
+      ShardParams.SHARDS_TOLERANT, "true");
{code}


but does not include TestDistributedGroupingWithShardTolerantActivated.java

Is this intentional?




[JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.6.0_45) - Build # 2791 - Failure!

2013-05-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Windows/2791/
Java: 32bit/jdk1.6.0_45 -client -XX:+UseParallelGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.update.processor.RegexBoostProcessorTest

Error Message:
Clean up static fields (in @AfterClass?), your test seems to hang on to 
approximately 11,697,192 bytes (threshold is 10,485,760). Field reference sizes 
(counted individually):
  - 12,322,064 bytes, protected static org.apache.solr.servlet.SolrRequestParsers org.apache.solr.update.processor.RegexBoostProcessorTest._parser
  - 4,840 bytes, private static org.apache.solr.update.processor.RegexpBoostProcessorFactory org.apache.solr.update.processor.RegexBoostProcessorTest.factory
  - 4,264 bytes, private static org.apache.solr.update.processor.RegexpBoostProcessor org.apache.solr.update.processor.RegexBoostProcessorTest.reProcessor
  - 712 bytes, protected static org.apache.solr.common.params.ModifiableSolrParams org.apache.solr.update.processor.RegexBoostProcessorTest.parameters
  - 256 bytes, public static org.junit.rules.TestRule org.apache.solr.SolrTestCaseJ4.solrClassRules
  - 248 bytes, protected static java.lang.String org.apache.solr.SolrTestCaseJ4.testSolrHome
  - 120 bytes, private static java.lang.String org.apache.solr.SolrTestCaseJ4.factoryProp
  - 64 bytes, private static java.lang.String org.apache.solr.SolrTestCaseJ4.coreName

Stack Trace:
junit.framework.AssertionFailedError: Clean up static fields (in @AfterClass?), 
your test seems to hang on to approximately 11,697,192 bytes (threshold is 
10,485,760). Field reference sizes (counted individually):
  - 12,322,064 bytes, protected static 
org.apache.solr.servlet.SolrRequestParsers 
org.apache.solr.update.processor.RegexBoostProcessorTest._parser
  - 4,840 bytes, private static 
org.apache.solr.update.processor.RegexpBoostProcessorFactory 
org.apache.solr.update.processor.RegexBoostProcessorTest.factory
  - 4,264 bytes, private static 
org.apache.solr.update.processor.RegexpBoostProcessor 
org.apache.solr.update.processor.RegexBoostProcessorTest.reProcessor
  - 712 bytes, protected static 
org.apache.solr.common.params.ModifiableSolrParams 
org.apache.solr.update.processor.RegexBoostProcessorTest.parameters
  - 256 bytes, public static org.junit.rules.TestRule 
org.apache.solr.SolrTestCaseJ4.solrClassRules
  - 248 bytes, protected static java.lang.String 
org.apache.solr.SolrTestCaseJ4.testSolrHome
  - 120 bytes, private static java.lang.String 
org.apache.solr.SolrTestCaseJ4.factoryProp
  - 64 bytes, private static java.lang.String 
org.apache.solr.SolrTestCaseJ4.coreName
at __randomizedtesting.SeedInfo.seed([D2F15045584904A2]:0)
at 
com.carrotsearch.randomizedtesting.rules.StaticFieldsInvariantRule$1.afterAlways(StaticFieldsInvariantRule.java:127)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:662)
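[Editor's illustration: the failure above comes from the randomizedtesting StaticFieldsInvariantRule, which flags suites whose static fields retain too much memory after the suite finishes. The usual fix is to null heavy static fields in an @AfterClass hook. The sketch below shows the pattern with a placeholder field; it is not the actual Solr test code.]

```java
public class StaticCleanupSketch {
    // Stands in for a heavy static field such as
    // RegexBoostProcessorTest._parser (a SolrRequestParsers instance).
    static byte[] heavyResource;

    // Would carry @BeforeClass in a real JUnit suite.
    static void setUpClass() {
        heavyResource = new byte[16 * 1024 * 1024];
    }

    // Would carry @AfterClass: release the reference so the class does not
    // pin the memory after the suite completes.
    static void tearDownClass() {
        heavyResource = null;
    }

    public static void main(String[] args) {
        setUpClass();
        tearDownClass();
        if (heavyResource != null) {
            throw new AssertionError("static field still referenced");
        }
        System.out.println("static fields released");
    }
}
```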




Build Log:
[...truncated 9234 lines...]
[junit4:junit4] Suite: org.apache.solr.update.processor.RegexBoostProcessorTest
[junit4:junit4]   2> 2460099 T6983 oas.SolrTestCaseJ4.initCore initCore
[junit4:junit4]   2> Creating dataDir: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\solr\build\solr-core\test\J0\.\solrtest-RegexBoostProcessorTest-1368660967932
[junit4:junit4]   2> 2460101 T6983 oasc.SolrResourceLoader.<init> new 
SolrResourceLoader for directory: 
'C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\solr\build\solr-core\test-files\solr\collection1\'
[junit4:junit4]   2> 2460103 T6983 oasc.SolrResourceLoader.replaceClassLoader 
Adding 
'file:/C:/Users/JenkinsSlave/workspace/Lucene-Solr-4.x-Windows/solr/build/solr-core/test-files/solr/collection1/lib/classes/'
 to classloader
[junit4:junit4]   2> 2460104 T6983 oasc.SolrResourceLoader.replaceClassLoader 
Adding 
'file:/C:/Users/JenkinsSlave/workspace/Lucene-Solr-4.x-Windows/solr/build/solr-core/test-files/solr/collection1/lib/README'
 to classloader
[junit4:junit4]   2> 2460206 T6983 oasc.SolrConfig.<init> Using Lucene 
MatchVersion: LUCENE_44
[junit4:junit4]   2> 2460318 T6983 oasc.SolrConfig.<init> Loaded SolrConfig: 
solrconfig.xml
[junit4:junit4]   2> 2460319 T6983 oass.IndexSchema.readSchema Reading Solr 
Schema from schema12.xml
[junit4:junit4]   2> 2460325 T6983 
