[jira] [Updated] (SOLR-5529) Add Support for queries to use multiple suggesters (new Suggester Component)

2013-12-08 Thread Areek Zillur (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Areek Zillur updated SOLR-5529:
---

Attachment: SOLR-5529.patch

Updated Patch:
  - Allow all the suggesters in a component to be built or reloaded with a single 
command (buildAll & reloadAll)
  - Added tests for normal & distributed cases

> Add Support for queries to use multiple suggesters (new Suggester Component)
> 
>
> Key: SOLR-5529
> URL: https://issues.apache.org/jira/browse/SOLR-5529
> Project: Solr
>  Issue Type: Improvement
>  Components: SearchComponents - other
>Affects Versions: 5.0, 4.7
>Reporter: Areek Zillur
> Fix For: 5.0, 4.7
>
> Attachments: SOLR-5529.patch, SOLR-5529.patch
>
>
> Following the discussion on SOLR-5528. It would be nice to support suggest 
> queries to be processed by more than one suggesters configured in one suggest 
> component.
> The new response format is as follows:
> {code}
> {
>   responseHeader: {
>     status: 0,
>     QTime: 3
>   },
>   suggest: {
>     suggester1: {
>       e: {
>         numFound: 1,
>         suggestions: [
>           {
>             term: "electronics and computer1",
>             weight: 100,
>             payload: ""
>           }
>         ]
>       }
>     },
>     suggester2: {
>       e: {
>         numFound: 1,
>         suggestions: [
>           {
>             term: "electronics and computer1",
>             weight: 10,
>             payload: ""
>           }
>         ]
>       }
>     }
>   }
> }
> {code}
> where 'suggester1' and 'suggester2' are the names of the configured suggesters 
> and 'e' is the query.
> Example query:
> {code}
> localhost:8983/solr/suggest?suggest=true&suggest.dictionary=suggester1&suggest.dictionary=suggester2&suggest.q=e
> {code}
> Example configuration:
> {code}
>   <searchComponent name="suggest" class="solr.SuggestComponent">
>     <lst name="suggester">
>       <str name="name">suggester1</str>
>       <str name="lookupImpl">FuzzyLookupFactory</str>
>       <str name="dictionaryImpl">DocumentDictionaryFactory</str>
>       <str name="field">cat</str>
>       <str name="weightField">price</str>
>       <str name="suggestAnalyzerFieldType">string</str>
>     </lst>
>     <lst name="suggester">
>       <str name="name">suggester2</str>
>       <str name="lookupImpl">FuzzyLookupFactory</str>
>       <str name="dictionaryImpl">DocumentDictionaryFactory</str>
>       <str name="field">name</str>
>       <str name="weightField">price</str>
>       <str name="suggestAnalyzerFieldType">string</str>
>     </lst>
>   </searchComponent>
> {code}
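A quick sketch of how a client might walk the response format above (plain Python; the dict literal mirrors the example JSON from this issue, and the helper name is ours):

```python
# Sketch: walking the multi-suggester response format shown above.
# The dict literal mirrors the example JSON from this issue.
response = {
    "responseHeader": {"status": 0, "QTime": 3},
    "suggest": {
        "suggester1": {
            "e": {
                "numFound": 1,
                "suggestions": [
                    {"term": "electronics and computer1", "weight": 100, "payload": ""},
                ],
            },
        },
        "suggester2": {
            "e": {
                "numFound": 1,
                "suggestions": [
                    {"term": "electronics and computer1", "weight": 10, "payload": ""},
                ],
            },
        },
    },
}

def suggestions_for(resp, query):
    """Collect (suggester name, term, weight) across every configured suggester."""
    out = []
    for name, per_query in resp["suggest"].items():
        for s in per_query.get(query, {}).get("suggestions", []):
            out.append((name, s["term"], s["weight"]))
    return out

for name, term, weight in suggestions_for(response, "e"):
    print(name, term, weight)
```

Each suggester keyed under `suggest` carries its own `numFound`/`suggestions` block for the query, so a client can merge or rank across dictionaries however it likes.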



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 1102 - Failure!

2013-12-08 Thread Dawid Weiss
We've already reported this Mac OS X JNU_NewStringPlatform failure.

Dawid

On Sun, Dec 8, 2013 at 7:59 PM, Policeman Jenkins Server
 wrote:
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1102/
> Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseParallelGC
>
> All tests passed
>

[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 1102 - Failure!

2013-12-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1102/
Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 10268 lines...]
   [junit4] JVM J0: stdout was not empty, see: 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test/temp/junit4-J0-20131208_184453_810.sysout
   [junit4] >>> JVM J0: stdout (verbatim) 
   [junit4] #
   [junit4] # A fatal error has been detected by the Java Runtime Environment:
   [junit4] #
   [junit4] #  SIGSEGV (0xb) at pc=0x00010a44aa2b, pid=187, tid=133123
   [junit4] #
   [junit4] # JRE version: Java(TM) SE Runtime Environment (7.0_45-b18) (build 
1.7.0_45-b18)
   [junit4] # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.45-b08 mixed mode 
bsd-amd64 compressed oops)
   [junit4] # Problematic frame:
   [junit4] # C  [libjava.dylib+0x9a2b]  JNU_NewStringPlatform+0x1d3
   [junit4] #
   [junit4] # Failed to write core dump. Core dumps have been disabled. To 
enable core dumping, try "ulimit -c unlimited" before starting Java again
   [junit4] #
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test/J0/hs_err_pid187.log
   [junit4] [thread 123907 also had an error]
   [junit4] #
   [junit4] # If you would like to submit a bug report, please visit:
   [junit4] #   http://bugreport.sun.com/bugreport/crash.jsp
   [junit4] # The crash happened outside the Java Virtual Machine in native 
code.
   [junit4] # See problematic frame for where to report the bug.
   [junit4] #
   [junit4] <<< JVM J0: EOF 

[...truncated 1 lines...]
   [junit4] ERROR: JVM J0 ended with an exception, command line: 
/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/bin/java 
-XX:+UseCompressedOops -XX:+UseParallelGC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/heapdumps 
-Dtests.prefix=tests -Dtests.seed=FF84A3F52D86DBAF -Xmx512M -Dtests.iters= 
-Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random 
-Dtests.postingsformat=random -Dtests.docvaluesformat=random 
-Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random 
-Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=5.0 
-Dtests.cleanthreads=perClass 
-Djava.util.logging.config.file=/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.slow=true 
-Dtests.asserts.gracious=false -Dtests.multiplier=1 -DtempDir=. 
-Djava.io.tmpdir=. 
-Djunit4.tempDir=/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test/temp
 
-Dclover.db.dir=/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/clover/db
 -Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Djava.security.policy=/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/tools/junit4/tests.policy
 -Dlucene.version=5.0-SNAPSHOT -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Dtests.disableHdfs=true -Dfile.encoding=ISO-8859-1 -classpath 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/classes/test:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-test-framework/classes/java:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/test-framework/lib/junit4-ant-2.0.13.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test-files:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/test-framework/classes/java:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/codecs/classes/java:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-solrj/classes/java:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/classes/java:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/analysis/common/lucene-analyzers-common-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/analysis/kuromoji/lucene-analyzers-kuromoji-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/analysis/phonetic/lucene-analyzers-phonetic-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/codecs/lucene-codecs-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/highlighter/lucene-highlighter-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/memory/lucene-memory-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/misc/lucene-misc-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/spatial/lucene-spatial-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/expressions/lucene-expressions-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/

[JENKINS] Lucene-Solr-SmokeRelease-trunk - Build # 139 - Still Failing

2013-12-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-trunk/139/

No tests ran.

Build Log:
[...truncated 52054 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/fakeRelease
 [copy] Copying 431 files to 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/fakeRelease/lucene
 [copy] Copying 230 files to 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/fakeRelease/solr
 [exec] JAVA7_HOME is /home/hudson/tools/java/latest1.7
 [exec] NOTE: output encoding is US-ASCII
 [exec] 
 [exec] Load release URL 
"file:/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/fakeRelease/"...
 [exec] 
 [exec] Test Lucene...
 [exec]   test basics...
 [exec]   get KEYS
 [exec] 0.1 MB in 0.01 sec (10.8 MB/sec)
 [exec]   check changes HTML...
 [exec]   download lucene-5.0.0-src.tgz...
 [exec] 26.9 MB in 0.04 sec (603.3 MB/sec)
 [exec] verify md5/sha1 digests
 [exec]   download lucene-5.0.0.tgz...
 [exec] 61.5 MB in 0.09 sec (650.2 MB/sec)
 [exec] verify md5/sha1 digests
 [exec]   download lucene-5.0.0.zip...
 [exec] 71.2 MB in 0.08 sec (840.6 MB/sec)
 [exec] verify md5/sha1 digests
 [exec]   unpack lucene-5.0.0.tgz...
 [exec] verify JAR metadata/identity/no javax.* or java.* classes...
 [exec] test demo with 1.7...
 [exec]   got 5692 hits for query "lucene"
 [exec] check Lucene's javadoc JAR
 [exec]   unpack lucene-5.0.0.zip...
 [exec] verify JAR metadata/identity/no javax.* or java.* classes...
 [exec] test demo with 1.7...
 [exec]   got 5692 hits for query "lucene"
 [exec] check Lucene's javadoc JAR
 [exec]   unpack lucene-5.0.0-src.tgz...
 [exec] make sure no JARs/WARs in src dist...
 [exec] run "ant validate"
 [exec] run tests w/ Java 7 and testArgs='-Dtests.jettyConnector=Socket 
 -Dtests.disableHdfs=true'...
 [exec] test demo with 1.7...
 [exec]   got 226 hits for query "lucene"
 [exec] generate javadocs w/ Java 7...
 [exec] 
 [exec] Crawl/parse...
 [exec] 
 [exec] Verify...
 [exec] 
 [exec] Test Solr...
 [exec]   test basics...
 [exec]   get KEYS
 [exec] 0.1 MB in 0.01 sec (6.8 MB/sec)
 [exec]   check changes HTML...
 [exec]   download solr-5.0.0-src.tgz...
 [exec] 32.5 MB in 0.50 sec (65.0 MB/sec)
 [exec] verify md5/sha1 digests
 [exec]   download solr-5.0.0.tgz...
 [exec] 117.0 MB in 0.94 sec (124.6 MB/sec)
 [exec] verify md5/sha1 digests
 [exec]   download solr-5.0.0.zip...
 [exec] 122.6 MB in 0.56 sec (219.4 MB/sec)
 [exec] verify md5/sha1 digests
 [exec]   unpack solr-5.0.0.tgz...
 [exec] verify JAR metadata/identity/no javax.* or java.* classes...
 [exec] unpack lucene-5.0.0.tgz...
 [exec]   **WARNING**: skipping check of 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/fakeReleaseTmp/unpack/solr-5.0.0/contrib/dataimporthandler/lib/mail-1.4.1.jar:
 it has javax.* classes
 [exec]   **WARNING**: skipping check of 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/fakeReleaseTmp/unpack/solr-5.0.0/contrib/dataimporthandler/lib/activation-1.1.jar:
 it has javax.* classes
 [exec] Traceback (most recent call last):
 [exec]   File 
"/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py",
 line 1334, in <module>
 [exec] main()
 [exec]   File 
"/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py",
 line 1278, in main
 [exec] smokeTest(baseURL, svnRevision, version, tmpDir, isSigned, 
testArgs)
 [exec]   File 
"/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py",
 line 1322, in smokeTest
 [exec] unpackAndVerify('solr', tmpDir, artifact, svnRevision, version, 
testArgs, baseURL)
 [exec]   File 
"/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py",
 line 627, in unpackAndVerify
 [exec] verifyUnpacked(project, artifact, unpackPath, svnRevision, 
version, testArgs, tmpDir, baseURL)
 [exec]   File 
"/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py",
 line 752, in verifyUnpacked
 [exec] checkAllJARs(os.getcwd(), project, svnRevision, version, 
tmpDir, baseURL)
 [exec]   File 
"/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py",
 line 276, in checkAllJARs
 [exec] noJavaPackageClass

[jira] [Commented] (SOLR-1301) Add a Solr contrib that allows for building Solr indexes via Hadoop's Map-Reduce.

2013-12-08 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13842555#comment-13842555
 ] 

Steve Rowe commented on SOLR-1301:
--

The Maven Jenkins build on trunk has been failing for a while because 
{{com.sun.jersey:jersey-bundle:1.8}}, a morphlines-core dependency, causes 
{{ant validate-maven-dependencies}} to fail - here's a log excerpt from the 
most recent failure 
[https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1046/console]:

{noformat}
 [echo] Building solr-map-reduce...

-validate-maven-dependencies.init:

-validate-maven-dependencies:
[artifact:dependencies] [INFO] snapshot org.apache.solr:solr-cell:5.0-SNAPSHOT: 
checking for updates from maven-restlet
[artifact:dependencies] [INFO] snapshot org.apache.solr:solr-cell:5.0-SNAPSHOT: 
checking for updates from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.solr:solr-morphlines-cell:5.0-SNAPSHOT: checking for updates from 
maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.solr:solr-morphlines-cell:5.0-SNAPSHOT: checking for updates from 
releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.solr:solr-morphlines-core:5.0-SNAPSHOT: checking for updates from 
maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.solr:solr-morphlines-core:5.0-SNAPSHOT: checking for updates from 
releases.cloudera.com
[artifact:dependencies] An error has occurred while processing the Maven 
artifact tasks.
[artifact:dependencies]  Diagnosis:
[artifact:dependencies] 
[artifact:dependencies] Unable to resolve artifact: Unable to get dependency 
information: Unable to read the metadata file for artifact 
'com.sun.jersey:jersey-bundle:jar': Cannot find parent: 
com.sun.jersey:jersey-project for project: null:jersey-bundle:jar:null for 
project null:jersey-bundle:jar:null
[artifact:dependencies]   com.sun.jersey:jersey-bundle:jar:1.8
[artifact:dependencies] 
[artifact:dependencies] from the specified remote repositories:
[artifact:dependencies]   central (http://repo1.maven.org/maven2),
[artifact:dependencies]   releases.cloudera.com 
(https://repository.cloudera.com/artifactory/libs-release),
[artifact:dependencies]   maven-restlet (http://maven.restlet.org),
[artifact:dependencies]   Nexus (http://repository.apache.org/snapshots)
[artifact:dependencies] 
[artifact:dependencies] Path to dependency: 
[artifact:dependencies] 1) 
org.apache.solr:solr-map-reduce:jar:5.0-SNAPSHOT
[artifact:dependencies] 
[artifact:dependencies] 
[artifact:dependencies] Not a v4.0.0 POM. for project 
com.sun.jersey:jersey-project at 
/home/hudson/.m2/repository/com/sun/jersey/jersey-project/1.8/jersey-project-1.8.pom
{noformat}

I couldn't reproduce locally.

Turns out the parent POM in question, at 
{{/home/hudson/.m2/repository/com/sun/jersey/jersey-project/1.8/jersey-project-1.8.pom}},
 has the wrong contents:

{noformat}

<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/0.6.39</center>
</body>
</html>


{noformat}

I fixed this by manually downloading the correct POM and its checksum file 
from Maven Central and putting them in the hudson user's local Maven repository.

[~markrmil...@gmail.com]: While investigating this failure, I tried dropping 
the triggering Ivy dependency com.sun.jersey:jersey-bundle, and all enabled 
tests succeed.  Okay with you to drop this dependency?  The description from 
the POM says:

{code:xml}
<description>
A bundle containing code of all jar-based modules that provide JAX-RS and 
Jersey-related features. Such a bundle is *only intended* for developers that 
do not use Maven's dependency system. The bundle does not include code for 
contributes, tests and samples.
</description>
{code}

Sounds like it's a sneaky replacement for transitive dependencies?  IMHO, if we 
need some of the classes this jar provides, we should declare direct 
dependencies on the appropriate artifacts.

> Add a Solr contrib that allows for building Solr indexes via Hadoop's 
> Map-Reduce.
> -
>
> Key: SOLR-1301
> URL: https://issues.apache.org/jira/browse/SOLR-1301
> Project: Solr
>  Issue Type: New Feature
>Reporter: Andrzej Bialecki 
>Assignee: Mark Miller
> Fix For: 5.0, 4.7
>
> Attachments: README.txt, SOLR-1301-hadoop-0-20.patch, 
> SOLR-1301-hadoop-0-20.patch, SOLR-1301-maven-intellij.patch, SOLR-1301.patch, 
> SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, 
> SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, 
> SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, 
> SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, 
> SOLR-1301.patch, SolrRecordWriter.java, commons-logging-1.0.4.jar, 
> commons-logging-api-1.0.4.jar, hadoop-0.19.1-core.jar, 
> hadoop-0.20.1-core.jar, hadoop-co

[jira] [Updated] (SOLR-3191) field exclusion from fl

2013-12-08 Thread Andrea Gazzarini (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrea Gazzarini updated SOLR-3191:
---

Attachment: SOLR-3191.patch

New version of the patch -- ReturnFields without constants. 

> field exclusion from fl
> ---
>
> Key: SOLR-3191
> URL: https://issues.apache.org/jira/browse/SOLR-3191
> Project: Solr
>  Issue Type: Improvement
>Reporter: Luca Cavanna
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
> Attachments: SOLR-3191.patch, SOLR-3191.patch
>
>
> I think it would be useful to add a way to exclude field from the Solr 
> response. If I have for example 100 stored fields and I want to return all of 
> them but one, it would be handy to list just the field I want to exclude 
> instead of the 99 fields for inclusion through fl.
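The requested behaviour is essentially a set-complement over the stored fields. A sketch of the selection logic (Python; the '-field' exclusion syntax is the behaviour proposed in this issue, not an existing Solr feature):

```python
# Sketch of fl-style field selection with exclusions. The '-field' marker is
# the proposal from this issue, not (yet) something Solr implements.
def select_fields(doc, fl):
    specs = [s.strip() for s in fl.split(",") if s.strip()]
    excluded = {s[1:] for s in specs if s.startswith("-")}
    included = [s for s in specs if not s.startswith("-")]
    if excluded and not included:
        # Pure exclusion list: start from all stored fields.
        return {k: v for k, v in doc.items() if k not in excluded}
    return {k: v for k, v in doc.items() if k in included}

doc = {"id": "1", "name": "widget", "price": 9.99, "internal_notes": "secret"}
print(select_fields(doc, "-internal_notes"))
```

So `fl=-internal_notes` would return the other 99 stored fields without listing them one by one.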






[jira] [Commented] (LUCENE-5362) IndexReader and friends should check ref count when incrementing

2013-12-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13842472#comment-13842472
 ] 

ASF subversion and git services commented on LUCENE-5362:
-

Commit 1549013 from [~simonw] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1549013 ]

LUCENE-5362: IndexReader and SegmentCoreReaders now throw 
AlreadyClosedException if the refCount is incremented but is less than 1.
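The guarded increment the commit describes can be sketched as follows (an illustrative Python model of the pattern, not the actual Lucene code):

```python
# Illustrative model of the refCount guard: refuse to "resurrect" an
# instance whose count has already fallen below 1.
class AlreadyClosedException(Exception):
    pass

class RefCounted:
    def __init__(self):
        self.ref_count = 1  # starts live

    def inc_ref(self):
        # The fix: check before incrementing instead of blindly incrementing.
        if self.ref_count < 1:
            raise AlreadyClosedException("refCount is %d" % self.ref_count)
        self.ref_count += 1

    def dec_ref(self):
        self.ref_count -= 1
        if self.ref_count == 0:
            self._close()

    def _close(self):
        pass  # release underlying resources here

r = RefCounted()
r.inc_ref()
r.dec_ref()
r.dec_ref()      # count hits 0 -> closed
try:
    r.inc_ref()  # must now fail fast
except AlreadyClosedException:
    print("raised as expected")
```

Without the check, the increment could succeed on an already-closed instance and the error would surface much later, far from the cause.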

> IndexReader and friends should check ref count when incrementing
> 
>
> Key: LUCENE-5362
> URL: https://issues.apache.org/jira/browse/LUCENE-5362
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 4.6
>Reporter: Simon Willnauer
> Fix For: 5.0, 4.7
>
> Attachments: LUCENE-5362.patch, LUCENE-5362.patch
>
>
> IndexReader and SegmentCoreReaders blindly increment their refCount, which 
> could already have been counted down to 0; this might allow an IndexReader to 
> "rise from the dead" and use an already closed SCR instance. Even if that is 
> caught, we should make a best effort to raise an ACE as soon as possible.






[jira] [Resolved] (LUCENE-5362) IndexReader and friends should check ref count when incrementing

2013-12-08 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer resolved LUCENE-5362.
-

Resolution: Fixed
  Assignee: Simon Willnauer

> IndexReader and friends should check ref count when incrementing
> 
>
> Key: LUCENE-5362
> URL: https://issues.apache.org/jira/browse/LUCENE-5362
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 4.6
>Reporter: Simon Willnauer
>Assignee: Simon Willnauer
> Fix For: 5.0, 4.7
>
> Attachments: LUCENE-5362.patch, LUCENE-5362.patch
>
>
> IndexReader and SegmentCoreReaders blindly increment their refCount, which 
> could already have been counted down to 0; this might allow an IndexReader to 
> "rise from the dead" and use an already closed SCR instance. Even if that is 
> caught, we should make a best effort to raise an ACE as soon as possible.






[jira] [Commented] (LUCENE-5362) IndexReader and friends should check ref count when incrementing

2013-12-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13842471#comment-13842471
 ] 

ASF subversion and git services commented on LUCENE-5362:
-

Commit 1549012 from [~simonw] in branch 'dev/trunk'
[ https://svn.apache.org/r1549012 ]

LUCENE-5362: IndexReader and SegmentCoreReaders now throw 
AlreadyClosedException if the refCount is incremented but is less than 1.

> IndexReader and friends should check ref count when incrementing
> 
>
> Key: LUCENE-5362
> URL: https://issues.apache.org/jira/browse/LUCENE-5362
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 4.6
>Reporter: Simon Willnauer
> Fix For: 5.0, 4.7
>
> Attachments: LUCENE-5362.patch, LUCENE-5362.patch
>
>
> IndexReader and SegmentCoreReaders blindly increment their refCount, which 
> could already have been counted down to 0; this might allow an IndexReader to 
> "rise from the dead" and use an already closed SCR instance. Even if that is 
> caught, we should make a best effort to raise an ACE as soon as possible.






[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1046: POMs out of sync

2013-12-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1046/

No tests ran.

Build Log:
[...truncated 38338 lines...]



[jira] [Commented] (LUCENE-5350) Add Context Aware Suggester

2013-12-08 Thread Areek Zillur (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13842461#comment-13842461
 ] 

Areek Zillur commented on LUCENE-5350:
--

Disregard the previous benchmark stats. There was a bug in how the keys were 
used for building the suggester (hence the unexplainable QPS).
The updated benchmark results are as follows:
{code}
-- Input stats
Input size: 53022, numContexts: 2666, Avg. Input/Context: 20

-- prefixes: 2-4, num: 7, onlyMorePopular: false
ContextAwareAnalyzingSuggester queries: 53022, time[ms]: 2630 [+- 124.14], 
~kQPS: 20
AnalyzingSuggester queries: 53022, time[ms]: 2249 [+- 25.16], ~kQPS: 24

-- RAM consumption
AnalyzingSuggester size[B]:4,767,705
ContextAwareAnalyzingSuggester size[B]:4,837,187

-- construction time
AnalyzingSuggester input: 53022, time[ms]: 10184 [+- 207.64]
ContextAwareAnalyzingSuggester input: 53022, time[ms]: 1831 [+- 81.89]

-- prefixes: 6-9, num: 7, onlyMorePopular: false
ContextAwareAnalyzingSuggester queries: 53022, time[ms]: 1457 [+- 163.04], 
~kQPS: 36
AnalyzingSuggester queries: 53022, time[ms]: 1140 [+- 28.59], ~kQPS: 47

-- prefixes: 100-200, num: 7, onlyMorePopular: false
ContextAwareAnalyzingSuggester queries: 53022, time[ms]: 1276 [+- 58.97], 
~kQPS: 42
AnalyzingSuggester queries: 53022, time[ms]: 1004 [+- 81.69], ~kQPS: 53
{code}

From the above benchmarks, it seems the only improvement for the new suggester 
is in the construction time. The QPS for all three cases seems to be ~20% lower 
and the RAM usage is ~3% higher.
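As a sanity check, the ~kQPS figures above are just queries divided by elapsed milliseconds (queries per millisecond = thousands of queries per second):

```python
# ~kQPS sanity check: queries / time[ms] is queries-per-millisecond,
# i.e. thousands of queries per second.
def kqps(queries, time_ms):
    return queries / time_ms

# Figures from the "prefixes: 2-4" run above:
print(round(kqps(53022, 2630)))  # ContextAwareAnalyzingSuggester -> 20
print(round(kqps(53022, 2249)))  # AnalyzingSuggester -> 24
```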

> Add Context Aware Suggester
> ---
>
> Key: LUCENE-5350
> URL: https://issues.apache.org/jira/browse/LUCENE-5350
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/search
>Reporter: Areek Zillur
> Fix For: 5.0, 4.7
>
> Attachments: LUCENE-5350-benchmark.patch, 
> LUCENE-5350-benchmark.patch, LUCENE-5350.patch, LUCENE-5350.patch
>
>
> It would be nice to have a Context Aware Suggester (i.e. a suggester that 
> could return suggestions depending on some specified context(s)).
> Use-cases: 
>   - location-based suggestions:
>   -- returns suggestions which 'match' the context of a particular area
>   --- suggest restaurants names which are in Palo Alto (context -> 
> Palo Alto)
>   - category-based suggestions:
>   -- returns suggestions for items that are only in certain 
> categories/genres (contexts)
>   --- suggest movies that are of the genre sci-fi and adventure 
> (context -> [sci-fi, adventure])






[jira] [Updated] (LUCENE-5350) Add Context Aware Suggester

2013-12-08 Thread Areek Zillur (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Areek Zillur updated LUCENE-5350:
-

Attachment: LUCENE-5350-benchmark.patch

Fixed benchmark code

> Add Context Aware Suggester
> ---
>
> Key: LUCENE-5350
> URL: https://issues.apache.org/jira/browse/LUCENE-5350
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/search
>Reporter: Areek Zillur
> Fix For: 5.0, 4.7
>
> Attachments: LUCENE-5350-benchmark.patch, 
> LUCENE-5350-benchmark.patch, LUCENE-5350.patch, LUCENE-5350.patch
>
>
> It would be nice to have a Context Aware Suggester (i.e. a suggester that 
> could return suggestions depending on some specified context(s)).
> Use-cases: 
>   - location-based suggestions:
>   -- returns suggestions which 'match' the context of a particular area
>   --- suggest restaurants names which are in Palo Alto (context -> 
> Palo Alto)
>   - category-based suggestions:
>   -- returns suggestions for items that are only in certain 
> categories/genres (contexts)
>   --- suggest movies that are of the genre sci-fi and adventure 
> (context -> [sci-fi, adventure])






[jira] [Updated] (LUCENE-5350) Add Context Aware Suggester

2013-12-08 Thread Areek Zillur (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Areek Zillur updated LUCENE-5350:
-

Attachment: LUCENE-5350.patch

Minor lookup optimization (in case of a single context)

> Add Context Aware Suggester
> ---
>
> Key: LUCENE-5350
> URL: https://issues.apache.org/jira/browse/LUCENE-5350
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/search
>Reporter: Areek Zillur
> Fix For: 5.0, 4.7
>
> Attachments: LUCENE-5350-benchmark.patch, LUCENE-5350.patch, 
> LUCENE-5350.patch
>
>
> It would be nice to have a Context Aware Suggester (i.e. a suggester that 
> could return suggestions depending on some specified context(s)).
> Use-cases: 
>   - location-based suggestions:
>   -- returns suggestions which 'match' the context of a particular area
>   --- suggest restaurants names which are in Palo Alto (context -> 
> Palo Alto)
>   - category-based suggestions:
>   -- returns suggestions for items that are only in certain 
> categories/genres (contexts)
>   --- suggest movies that are of the genre sci-fi and adventure 
> (context -> [sci-fi, adventure])






[jira] [Updated] (LUCENE-5271) A slightly more accurate SloppyMath distance

2013-12-08 Thread Gilad Barkai (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gilad Barkai updated LUCENE-5271:
-

Attachment: LUCENE-5271.patch

Ryan, thanks for looking at this.

bq. If the lat/lon values are large, then the index would be out of bounds for 
the table
Nice catch! I did not check for values over 90 degs Lat. Added a % with the 
table's size.

bq. Why was this test removed? assertEquals(314.40338, haversin(1, 2, 3, 4), 
10e-5)
Well, the test's expected result was wrong :) The new, more accurate method gets 
different results. I added another test instead:
{code}
double earthRadiusKMs = 6378.137;
double halfCircle = earthRadiusKMs * Math.PI;
assertEquals(halfCircle, haversin(0, 0, 0, 180), 0D);
{code}
This computes half of Earth's circumference along the equator using both the 
haversin method and a simple circle equation with Earth's equatorial radius.
It differs by over 20 km from the old haversin result, btw.

bq. Could you move the 2 * radius computation into the table?
Awesome! Renamed the table to diameter rather than radius.

bq. I know this is an already existing problem, but could you move the division 
by 2 from h1/h2 to h?
Done.

> A slightly more accurate SloppyMath distance
> 
>
> Key: LUCENE-5271
> URL: https://issues.apache.org/jira/browse/LUCENE-5271
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/other
>Reporter: Gilad Barkai
>Priority: Minor
> Attachments: LUCENE-5271.patch, LUCENE-5271.patch, LUCENE-5271.patch
>
>
> SloppyMath, introduced in LUCENE-5258, uses Earth's avg. (according to WGS84) 
> ellipsoid radius as an approximation for computing the "spherical" distance 
> (the TO_KILOMETERS constant).
> While this is pretty accurate for long distances (latitude-wise), it may 
> introduce some small errors when computing distances close to the equator 
> (as the Earth's radius there is larger than the avg.).
> A more accurate approximation would be taking the avg. Earth radius at the 
> source and destination points. But computing the ellipsoid radius at any given 
> point is a heavy function, and this distance should be usable in a scoring 
> function. So two optimizations are possible:
> * Pre-compute a table with an Earth radius per latitude (the longitude does 
> not affect the radius)
> * Instead of using the two-point radius avg., figure out the avg. latitude 
> (exactly between the src and dst points) and get its radius.
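The radius-per-latitude table proposed above can be sketched as follows (Python; the WGS84 geocentric-radius formula and the one-degree table granularity are our assumptions, not taken from the patch):

```python
import math

# WGS84 semi-axes in km.
A, B = 6378.137, 6356.7523142

def ellipsoid_radius(lat_deg):
    """Geocentric Earth radius at a given latitude (km)."""
    phi = math.radians(lat_deg)
    c, s = math.cos(phi), math.sin(phi)
    num = (A * A * c) ** 2 + (B * B * s) ** 2
    den = (A * c) ** 2 + (B * s) ** 2
    return math.sqrt(num / den)

# Precompute once: diameter per integer latitude
# (the '%' below guards against out-of-range latitudes).
DIAMETER = [2.0 * ellipsoid_radius(lat) for lat in range(91)]

def haversin_km(lat1, lon1, lat2, lon2):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlam = math.radians(lon2 - lon1)
    h = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    # Use the diameter at the average latitude of the two points.
    mid = abs(int(round((lat1 + lat2) / 2))) % 91
    return DIAMETER[mid] * math.asin(math.sqrt(h))

# Half circle along the equator equals Earth's equatorial radius times pi.
print(haversin_km(0, 0, 0, 180))
```

This mirrors the test quoted earlier in the thread: along the equator the table entry is the equatorial diameter, so the haversin half-circle agrees exactly with the simple circle equation.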


