Re: [VOTE] Release PyLucene 3.6.2
Hi,

build succeeded on Mac OS X 10.8.2 with Python 2.7.2 and Java 1.6.0_37 (= +1). I'm currently at home (Xmas break) and don't have access to my Windows build environment, but I can test on Win7 next week too.

regards
Thomas

--
On 26.12.2012 at 03:56, Andi Vajda va...@apache.org wrote:

The PyLucene 3.6.2-1 release, tracking the recent release of Apache Lucene 3.6.2, is ready.

A release candidate is available from:
http://people.apache.org/~vajda/staging_area/

A list of changes in this release can be seen at:
http://svn.apache.org/repos/asf/lucene/pylucene/branches/pylucene_3_6/CHANGES

PyLucene 3.6.2 is built with JCC 2.15, included in these release artifacts. A list of JCC changes can be seen at:
http://svn.apache.org/repos/asf/lucene/pylucene/trunk/jcc/CHANGES

A list of Lucene Java changes can be seen at:
http://svn.apache.org/repos/asf/lucene/dev/tags/lucene_solr_3_6_2/lucene/CHANGES.txt

Please vote to release these artifacts as PyLucene 3.6.2-1.

Thanks!

Andi..

ps: the KEYS file for PyLucene release signing is at:
http://svn.apache.org/repos/asf/lucene/pylucene/dist/KEYS
http://people.apache.org/~vajda/staging_area/KEYS

pps: here is my +1
[jira] [Commented] (LUCENE-4258) Incremental Field Updates through Stacked Segments
[ https://issues.apache.org/jira/browse/LUCENE-4258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13541053#comment-13541053 ]

Sivan Yogev commented on LUCENE-4258:
-------------------------------------

Started switching to the invert-first approach following Mike's advice. My thought was to have a single directory for each fields update, and when flushing do something similar to IndexWriter.addIndexes(IndexReader...) to build the stacked segment. However, I encountered two problems with this approach:
1. If a certain document is updated more than once in a certain generation, two inverted documents need to be merged into one.
2. An extension of 1, where a field added in the first update is replaced in the second one.
So, what I will try to do in such cases is to move the later updates to a new update generation. This will increase the number of generations, but I think it's a fair price to pay in light of the benefits offered by the invert-first approach.

Incremental Field Updates through Stacked Segments
--------------------------------------------------

Key: LUCENE-4258
URL: https://issues.apache.org/jira/browse/LUCENE-4258
Project: Lucene - Core
Issue Type: Improvement
Components: core/index
Reporter: Sivan Yogev
Fix For: 4.2, 5.0
Attachments: IncrementalFieldUpdates.odp, LUCENE-4258-API-changes.patch, LUCENE-4258.r1410593.patch, LUCENE-4258.r1412262.patch, LUCENE-4258.r1416438.patch, LUCENE-4258.r1416617.patch, LUCENE-4258.r1422495.patch, LUCENE-4258.r1423010.patch
Original Estimate: 2,520h
Remaining Estimate: 2,520h

Shai and I would like to start working on the proposal to Incremental Field Updates outlined here (http://markmail.org/message/zhrdxxpfk6qvdaex).

--
This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-3735) Relocate the example mime-to-extension mapping
[ https://issues.apache.org/jira/browse/SOLR-3735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13541063#comment-13541063 ]

Erik Hatcher commented on SOLR-3735:
------------------------------------

bq. I also added the new velocity version to the Maven poms. Will you backport this to 4.x?

Sorry about that oversight. Done in r1426916 on 4x now.

Relocate the example mime-to-extension mapping
----------------------------------------------

Key: SOLR-3735
URL: https://issues.apache.org/jira/browse/SOLR-3735
Project: Solr
Issue Type: Improvement
Components: web gui
Affects Versions: 4.0-BETA, 4.0
Reporter: Erik Hatcher
Assignee: Erik Hatcher
Priority: Minor
Fix For: 4.1, 5.0
Attachments: SOLR-3735.patch

A mime-to-extension mapping was added to VelocityResponseWriter recently. This really belongs in the templates themselves, not in VrW, as it is specific to the example search results and not meant for all VrW templates.
[jira] [Commented] (SOLR-3735) Relocate the example mime-to-extension mapping
[ https://issues.apache.org/jira/browse/SOLR-3735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13541066#comment-13541066 ]

Commit Tag Bot commented on SOLR-3735:
--------------------------------------

[branch_4x commit] Erik Hatcher
http://svn.apache.org/viewvc?view=revision&revision=1426916

SOLR-3735: fix maven POM for upgraded Velocity JAR
[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 21 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/21/ Java: 64bit/jdk1.7.0 -XX:+UseG1GC All tests passed Build Log: [...truncated 723 lines...] [junit4:junit4] ERROR: JVM J0 ended with an exception, command line: /Library/Java/JavaVirtualMachines/jdk1.7.0_10.jdk/Contents/Home/jre/bin/java -XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/heapdumps -Dtests.prefix=tests -Dtests.seed=6BADAA10EB7E906B -Xmx512M -Dtests.iters= -Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random -Dtests.postingsformat=random -Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random -Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=5.0 -Dtests.cleanthreads=perMethod -Djava.util.logging.config.file=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/lucene/tools/junit4/logging.properties -Dtests.nightly=false -Dtests.weekly=false -Dtests.slow=true -Dtests.asserts.gracious=false -Dtests.multiplier=1 -DtempDir=. -Djava.io.tmpdir=. 
-Djunit4.tempDir=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/core/test/temp -Dclover.db.dir=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/clover/db -Djava.security.manager=org.apache.lucene.util.TestSecurityManager -Djava.security.policy=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/lucene/tools/junit4/tests.policy -Dlucene.version=5.0-SNAPSHOT -Djetty.testMode=1 -Djetty.insecurerandom=1 -Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory -Djava.awt.headless=true -Dfile.encoding=UTF-8 -classpath /Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/codecs/classes/java:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/test-framework/classes/java:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/lucene/test-framework/lib/junit-4.10.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/lucene/test-framework/lib/randomizedtesting-runner-2.0.7.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/core/classes/java:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/core/classes/test:/Users/jenkins/jenkins-slave/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-launcher.jar:/Users/jenkins/.ant/lib/ivy-2.2.0.jar:/Users/jenkins/jenkins-slave/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-antlr.jar:/Users/jenkins/jenkins-slave/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-bcel.jar:/Users/jenkins/jenkins-slave/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-bsf.jar:/Users/jenkins/jenkins-slave/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-log4j.jar:/Users/jenkins/jenkins-slave/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-oro.jar:/Users/jenkins/jenkins-slave/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-regexp.jar:/Users/jenkins/jenkins-slave/tools/hu
dson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-resolver.jar:/Users/jenkins/jenkins-slave/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-xalan2.jar:/Users/jenkins/jenkins-slave/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-commons-logging.jar:/Users/jenkins/jenkins-slave/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-commons-net.jar:/Users/jenkins/jenkins-slave/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jai.jar:/Users/jenkins/jenkins-slave/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-javamail.jar:/Users/jenkins/jenkins-slave/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jdepend.jar:/Users/jenkins/jenkins-slave/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jmf.jar:/Users/jenkins/jenkins-slave/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jsch.jar:/Users/jenkins/jenkins-slave/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-junit.jar:/Users/jenkins/jenkins-slave/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-junit4.jar:/Users/jenkins/jenkins-slave/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-netrexx.jar:/Users/jenkins/jenkins-slave/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-swing.jar:/Users/jenkins/jenkins-slave/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-testutil.jar:/Users/jenkins/jenkins-slave/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_10.jdk/Contents/Home/lib/tools.jar:/Users/jenkins/.ivy2/cache/com.carrotsearch.randomizedtesting/junit4-ant/jars/junit4-ant-2.0.7.jar -ea:org.apache.lucene... -ea:org.apache.solr... com.carrotsearch.ant.tasks.junit4.slave.SlaveMainSafe -flush -eventsfile /Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/core/test/temp/junit4-J0-20121230_025221_076.events
[JENKINS] Lucene-Solr-4.x-MacOSX ([[ Exception while replacing ENV. Please report this as a bug. ]] {{ java.lang.NullPointerException }}) - Build # 10 - Still Failing!

Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-MacOSX/10/
Java: [[ Exception while replacing ENV. Please report this as a bug. ]] {{ java.lang.NullPointerException }}

No tests ran.

Build Log:
[...truncated 149 lines...]
FATAL: hudson.remoting.RequestAbortedException: java.io.IOException: Unexpected termination of the channel
hudson.remoting.RequestAbortedException: hudson.remoting.RequestAbortedException: java.io.IOException: Unexpected termination of the channel
	at hudson.remoting.Request.call(Request.java:174)
	at hudson.remoting.Channel.call(Channel.java:665)
	at hudson.FilePath.act(FilePath.java:841)
	at hudson.FilePath.act(FilePath.java:825)
	at hudson.scm.SubversionSCM.checkout(SubversionSCM.java:771)
	at hudson.scm.SubversionSCM.checkout(SubversionSCM.java:713)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1325)
	at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:676)
	at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:88)
	at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:581)
	at hudson.model.Run.execute(Run.java:1543)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:236)
Caused by: hudson.remoting.RequestAbortedException: java.io.IOException: Unexpected termination of the channel
	at hudson.remoting.Request.abort(Request.java:299)
	at hudson.remoting.Channel.terminate(Channel.java:725)
	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:69)
Caused by: java.io.IOException: Unexpected termination of the channel
	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:50)
Caused by: java.io.EOFException
	at java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:2553)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1296)
	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:350)
	at hudson.remoting.Command.readFrom(Command.java:90)
	at hudson.remoting.ClassicCommandTransport.read(ClassicCommandTransport.java:59)
	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:48)
[jira] [Commented] (LUCENE-4649) kill ThreadInterruptedException
[ https://issues.apache.org/jira/browse/LUCENE-4649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13541072#comment-13541072 ]

Michael McCandless commented on LUCENE-4649:
--------------------------------------------

So e.g. we would fix IW methods to throw the [checked] InterruptedException, or would they also wrap it under an IOException? Why should FSDir.sync pretend this was an IOException, not an interrupt?

The problem here is that using Thread.interrupt is dangerous if you use MMapDir or NIOFSDir, i.e. the interrupt may close file handles and make the IR unusable (and e.g. lose a flushed segment if it's IW). So advertising that tons of methods now throw the checked InterruptedException might make users think these methods are in fact safely interruptible when they are not ...

kill ThreadInterruptedException
-------------------------------

Key: LUCENE-4649
URL: https://issues.apache.org/jira/browse/LUCENE-4649
Project: Lucene - Core
Issue Type: Bug
Reporter: Robert Muir

The way we currently do this is bogus. For example in FSDirectory's fsync, I think we should instead:

{noformat}
} catch (InterruptedException ie) {
-  throw new ThreadInterruptedException(ie);
+  Thread.currentThread().interrupt(); // restore status
+  IOException e = new java.io.InterruptedIOException("fsync() interrupted");
+  e.initCause(ie);
+  throw e;
{noformat}

and the crazy code in IndexWriter etc. that catches ThreadInterruptedExc just to restore status should be removed. Instead the guy doing the catch (InterruptedException) should do the right thing.
[jira] [Commented] (LUCENE-4649) kill ThreadInterruptedException
[ https://issues.apache.org/jira/browse/LUCENE-4649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13541076#comment-13541076 ]

Robert Muir commented on LUCENE-4649:
-------------------------------------

No no no. I thought my example was pretty good. I guess not.

This is unrelated to mmap etc. This is about guys catching InterruptedException from Thread.sleep and so on. An example of this is given above in the description of this issue: that's FSDirectory's sync impl. It sleeps sometimes. If it's interrupted during sleep, we should:
1. Restore status
2. Throw a non-crazy exc.

We don't need to change method signatures. Interested users can check interrupt status and it will now actually work the way it should in a Java program. ThreadInterruptedExc is not that. The current scheme is senseless and broken.
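The pattern proposed in the issue description (restore the thread's interrupt status, then throw a standard java.io.InterruptedIOException) can be sketched as a standalone example. Class and method names here are hypothetical, not actual Lucene code; the sleep stands in for the retry loop in FSDirectory's sync impl:

```java
import java.io.IOException;
import java.io.InterruptedIOException;

public class InterruptDemo {
    // Sketch of the proposed pattern: on InterruptedException, restore the
    // interrupt status and throw the standard checked InterruptedIOException
    // (an IOException subclass) instead of a custom unchecked exception.
    static void sleepRetry(long millis) throws IOException {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt(); // restore status
            IOException e = new InterruptedIOException("fsync() interrupted");
            e.initCause(ie);
            throw e;
        }
    }

    public static void main(String[] args) throws Exception {
        Thread.currentThread().interrupt(); // simulate an interrupt arriving
        try {
            sleepRetry(1000); // throws immediately: status was already set
        } catch (InterruptedIOException e) {
            // Interested callers can now check the status the normal Java way:
            System.out.println("interrupted=" + Thread.interrupted()); // interrupted=true
        }
    }
}
```

No method signatures change (the method still declares IOException), yet the interrupt is neither swallowed nor disguised: callers who care can test Thread.interrupted() after catching.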
Check me on this, CoreDescriptor.setCoreProperties seems backwards
Mostly checking to see if I'm understanding things correctly.

If system properties are supposed to override properties-file settings, then this method seems wrong: the putAll will make anything set in the properties file win over what's specified as a system property.

My guess is that this logic is a little twisted and solrcore.properties files are supposed to override system properties, so it's actually correct. I don't really see how this would work otherwise: a system property setting for, say, instanceDir would then override _all_ individually-set instanceDirs, and that just seems wrong.

Thanks,
Erick
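The precedence behavior in question can be illustrated with plain java.util.Properties (a hypothetical sketch, not the actual CoreDescriptor code): with putAll, whichever map is copied last wins.

```java
import java.util.Properties;

public class PropsOrderDemo {
    public static void main(String[] args) {
        // Simulate values loaded from a solrcore.properties file:
        Properties fromFile = new Properties();
        fromFile.setProperty("instanceDir", "from-solrcore.properties");

        // Start with the system-property value, then copy the file values in:
        Properties merged = new Properties();
        merged.setProperty("instanceDir", "from-system-property");
        merged.putAll(fromFile); // file settings overwrite the earlier value

        System.out.println(merged.getProperty("instanceDir")); // from-solrcore.properties
    }
}
```

Reversing the two steps (put the file values first, then the system properties) would give system properties the last word instead; the call order encodes the precedence policy.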
RE: [JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 21 - Failure!
Hi,

This is really crazy. The problem here is: the virtual machine is suspended when the node shuts down. After it comes back, of course the system time is wrong. As Mac OS X does not have the VM client tools (it's not supported), after wakeup the VMM does not reset the wall time in the VM, so the whole thing relies on ntpd to reset the time - and then of course it jumps hard to the new wall clock. Unfortunately NTP does this very delayed, so the wall clock changes suddenly after approx. 5 minutes. Robert and I were expecting Solr tests to fail (because they depend on the wall clock not jumping), but funnily the whole JDK crashed this time.

I have now changed the setup of the Jenkins slave to shut down the VM completely. This will unfortunately take longer when it comes up again, but then it always starts with a freshly booted OS. Currently I don't revert the hard disk to its initial state, but as the node is killed hard (instead of an ACPI shutdown), the file system may get corrupted. If this is the case, I can revert to a snapshot, but I have to do this manually. In that case the workspaces and ivy cache would be empty again. I did this now (reverted to snapshot).

Altogether it looks like the Darwin kernel does not do well in VMs... Maybe this is the reason why Apple officially disallows installing it inside a VM. If somebody has a real Macintosh machine where we can connect to and run the slave, that would be fine. There is no special machine setup needed, only a separate user account (Jenkins) with SSH access and JDK 1.6, JDK 1.7, and Python 3.2 installed at the standard Apple locations. The master node automatically installs the Jenkins slave after connecting with SSH, so no setup needed.
Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de

-Original Message-
From: Policeman Jenkins Server [mailto:jenk...@thetaphi.de]
Sent: Sunday, December 30, 2012 1:10 PM
To: dev@lucene.apache.org; markrmil...@apache.org
Subject: [JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 21 - Failure!

Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/21/
Java: 64bit/jdk1.7.0 -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 723 lines...]
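The failure mode Uwe describes (tests that assume the wall clock never jumps) can be illustrated with a small hypothetical demo, not test-framework code: elapsed time measured with System.currentTimeMillis() is exposed to clock resets, while System.nanoTime() is monotonic and immune to ntpd corrections.

```java
public class ClockDemo {
    // Sketch of why a hard wall-clock jump breaks timing code: if ntpd
    // resets the clock mid-measurement, wallElapsed can be negative or
    // huge, while the nanoTime-based measurement stays correct.
    public static void main(String[] args) throws Exception {
        long wallStart = System.currentTimeMillis(); // affected by clock resets
        long monoStart = System.nanoTime();          // monotonic, unaffected

        Thread.sleep(50); // stand-in for the work being timed

        long wallElapsedMs = System.currentTimeMillis() - wallStart;
        long monoElapsedMs = (System.nanoTime() - monoStart) / 1_000_000;
        // On a VM whose clock jumps during the sleep, wallElapsedMs is
        // unreliable; monoElapsedMs stays close to the real 50 ms.
        System.out.println("wall=" + wallElapsedMs + "ms mono=" + monoElapsedMs + "ms");
    }
}
```

Timeout and scheduling code that must survive suspended VMs generally needs the monotonic source.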
RE: svn commit: r1426916 - in /lucene/dev/branches/branch_4x: ./ dev-tools/ dev-tools/maven/ lucene/ lucene/analysis/ lucene/analysis/icu/src/java/org/apache/lucene/collation/ lucene/backwards/ lucene
I wonder why you did this; in 4.x the Ivy builds still have the old Velocity version? Now they differ! I fixed the Maven build error yesterday in trunk by applying that fix, but not in 4.x, because 4.x did not contain your new Velocity stuff.

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de

-Original Message-
From: ehatc...@apache.org [mailto:ehatc...@apache.org]
Sent: Sunday, December 30, 2012 12:26 PM
To: comm...@lucene.apache.org
Subject: svn commit: r1426916 - in /lucene/dev/branches/branch_4x: ./ dev-tools/ dev-tools/maven/ lucene/ lucene/analysis/ lucene/analysis/icu/src/java/org/apache/lucene/collation/ lucene/backwards/ lucene/benchmark/ lucene/codecs/ lucene/core/ lucene/core/src/...

Author: ehatcher
Date: Sun Dec 30 11:26:26 2012
New Revision: 1426916

URL: http://svn.apache.org/viewvc?rev=1426916&view=rev
Log: SOLR-3735: fix maven POM for upgraded Velocity JAR

Modified:
lucene/dev/branches/branch_4x/ (props changed)
lucene/dev/branches/branch_4x/dev-tools/ (props changed)
lucene/dev/branches/branch_4x/dev-tools/maven/pom.xml.template
lucene/dev/branches/branch_4x/lucene/ (props changed)
lucene/dev/branches/branch_4x/lucene/BUILD.txt (props changed)
lucene/dev/branches/branch_4x/lucene/CHANGES.txt (props changed)
lucene/dev/branches/branch_4x/lucene/JRE_VERSION_MIGRATION.txt (props changed)
lucene/dev/branches/branch_4x/lucene/LICENSE.txt (props changed)
lucene/dev/branches/branch_4x/lucene/MIGRATE.txt (props changed)
lucene/dev/branches/branch_4x/lucene/NOTICE.txt (props changed)
lucene/dev/branches/branch_4x/lucene/README.txt (props changed)
lucene/dev/branches/branch_4x/lucene/SYSTEM_REQUIREMENTS.txt (props changed)
lucene/dev/branches/branch_4x/lucene/analysis/ (props changed)
lucene/dev/branches/branch_4x/lucene/analysis/icu/src/java/org/apache/lucene/collation/ICUCollationKeyFilterFactory.java (props changed)
lucene/dev/branches/branch_4x/lucene/backwards/ (props changed)
lucene/dev/branches/branch_4x/lucene/benchmark/ (props changed)
lucene/dev/branches/branch_4x/lucene/build.xml (props changed)
lucene/dev/branches/branch_4x/lucene/codecs/ (props changed)
lucene/dev/branches/branch_4x/lucene/common-build.xml (props changed)
lucene/dev/branches/branch_4x/lucene/core/ (props changed)
lucene/dev/branches/branch_4x/lucene/core/src/test/org/apache/lucene/index/TestBackwardsCompatibility.java (props changed)
lucene/dev/branches/branch_4x/lucene/core/src/test/org/apache/lucene/index/index.40.cfs.zip (props changed)
lucene/dev/branches/branch_4x/lucene/core/src/test/org/apache/lucene/index/index.40.nocfs.zip (props changed)
lucene/dev/branches/branch_4x/lucene/core/src/test/org/apache/lucene/index/index.40.optimized.cfs.zip (props changed)
lucene/dev/branches/branch_4x/lucene/core/src/test/org/apache/lucene/index/index.40.optimized.nocfs.zip (props changed)
lucene/dev/branches/branch_4x/lucene/demo/ (props changed)
lucene/dev/branches/branch_4x/lucene/facet/ (props changed)
lucene/dev/branches/branch_4x/lucene/grouping/ (props changed)
lucene/dev/branches/branch_4x/lucene/highlighter/ (props changed)
lucene/dev/branches/branch_4x/lucene/ivy-settings.xml (props changed)
lucene/dev/branches/branch_4x/lucene/join/ (props changed)
lucene/dev/branches/branch_4x/lucene/licenses/ (props changed)
lucene/dev/branches/branch_4x/lucene/memory/ (props changed)
lucene/dev/branches/branch_4x/lucene/misc/ (props changed)
lucene/dev/branches/branch_4x/lucene/module-build.xml (props changed)
lucene/dev/branches/branch_4x/lucene/queries/ (props changed)
lucene/dev/branches/branch_4x/lucene/queryparser/ (props changed)
lucene/dev/branches/branch_4x/lucene/sandbox/ (props changed)
lucene/dev/branches/branch_4x/lucene/site/ (props changed)
lucene/dev/branches/branch_4x/lucene/spatial/ (props changed)
lucene/dev/branches/branch_4x/lucene/suggest/ (props changed)
lucene/dev/branches/branch_4x/lucene/test-framework/ (props changed)
lucene/dev/branches/branch_4x/lucene/tools/ (props changed)
lucene/dev/branches/branch_4x/solr/ (props changed)
lucene/dev/branches/branch_4x/solr/CHANGES.txt (props changed)
lucene/dev/branches/branch_4x/solr/LICENSE.txt (props changed)
lucene/dev/branches/branch_4x/solr/NOTICE.txt (props changed)
lucene/dev/branches/branch_4x/solr/README.txt (props changed)
lucene/dev/branches/branch_4x/solr/SYSTEM_REQUIREMENTS.txt (props changed)
lucene/dev/branches/branch_4x/solr/build.xml (props changed)
lucene/dev/branches/branch_4x/solr/cloud-dev/ (props changed)
lucene/dev/branches/branch_4x/solr/common-build.xml (props changed)
[jira] [Commented] (SOLR-3735) Relocate the example mime-to-extension mapping
[ https://issues.apache.org/jira/browse/SOLR-3735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13541089#comment-13541089 ]

Uwe Schindler commented on SOLR-3735:
-------------------------------------

But you did not backport the fix itself! Now we have different Velocity versions in Ivy vs. Maven! This issue is 5.x only!
[jira] [Commented] (SOLR-3735) Relocate the example mime-to-extension mapping
[ https://issues.apache.org/jira/browse/SOLR-3735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13541096#comment-13541096 ]

Erik Hatcher commented on SOLR-3735:
------------------------------------

Uwe - geez, sorry, you had me all flustered saying I needed to backport to 4.x, but I got too flustered to realize that I had intentionally made the change on trunk only. I have reverted the change on 4.x (hopefully properly).
[jira] [Commented] (SOLR-3735) Relocate the example mime-to-extension mapping
[ https://issues.apache.org/jira/browse/SOLR-3735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13541098#comment-13541098 ]

Commit Tag Bot commented on SOLR-3735:
--------------------------------------

[branch_4x commit] Erik Hatcher
http://svn.apache.org/viewvc?view=revision&revision=1426953

SOLR-3735: sorry, revert stupid last commit
[jira] [Commented] (SOLR-3735) Relocate the example mime-to-extension mapping
[ https://issues.apache.org/jira/browse/SOLR-3735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13541099#comment-13541099 ]

Uwe Schindler commented on SOLR-3735:
-------------------------------------

No problem! I was just confused; my first idea was that you had backported the whole thing, but then I realized that you only merged my fix from yesterday :-)
[jira] [Commented] (SOLR-3393) Implement an optimized LFUCache
[ https://issues.apache.org/jira/browse/SOLR-3393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13541106#comment-13541106 ] Shawn Heisey commented on SOLR-3393: A recent commit adding a large number of @Override annotations resulted in a lot of manual work to apply this patch to a 4x checkout. I have done this work, and I have also added a minDecayIntervalMs option, defaulting to five minutes (300000 ms). Implement an optimized LFUCache --- Key: SOLR-3393 URL: https://issues.apache.org/jira/browse/SOLR-3393 Project: Solr Issue Type: Improvement Components: search Affects Versions: 3.6, 4.0-ALPHA Reporter: Shawn Heisey Assignee: Hoss Man Priority: Minor Fix For: 4.1 Attachments: SOLR-3393.patch, SOLR-3393.patch, SOLR-3393.patch, SOLR-3393.patch, SOLR-3393.patch, SOLR-3393.patch, SOLR-3393.patch SOLR-2906 gave us an inefficient LFU cache modeled on FastLRUCache/ConcurrentLRUCache. It could use some serious improvement. The following project includes an Apache 2.0 licensed O(1) implementation. The second link is the paper (PDF warning) it was based on: https://github.com/chirino/hawtdb http://dhruvbird.com/lfu.pdf Using this project and paper, I will attempt to make a new O(1) cache called FastLFUCache that is modeled on LRUCache.java. This will (for now) leave the existing LFUCache/ConcurrentLFUCache implementation in place.
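The O(1) LFU structure from the linked paper can be sketched roughly as follows. This is a minimal illustration of the idea (a value map plus a map from use-count to an insertion-ordered key set, so the least-frequently and then least-recently used key is found in constant time); the class and method names are hypothetical and not what the SOLR-3393 patch actually contains, and it omits the concurrency and size-accounting a real Solr cache needs.

```java
import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.Map;

// Minimal O(1) LFU sketch: values, per-key use counts, and frequency
// buckets keyed by use count. minCount tracks the lowest occupied bucket.
public class LFUCacheSketch<K, V> {
    private final int capacity;
    private final Map<K, V> values = new HashMap<>();
    private final Map<K, Integer> counts = new HashMap<>();
    private final Map<Integer, LinkedHashSet<K>> buckets = new HashMap<>();
    private int minCount = 0;

    public LFUCacheSketch(int capacity) { this.capacity = capacity; }

    public V get(K key) {
        if (!values.containsKey(key)) return null;
        touch(key);
        return values.get(key);
    }

    public void put(K key, V value) {
        if (capacity <= 0) return;
        if (values.containsKey(key)) {
            values.put(key, value);
            touch(key);
            return;
        }
        if (values.size() >= capacity) {
            // Evict the oldest key in the lowest-frequency bucket.
            K evict = buckets.get(minCount).iterator().next();
            buckets.get(minCount).remove(evict);
            values.remove(evict);
            counts.remove(evict);
        }
        values.put(key, value);
        counts.put(key, 1);
        buckets.computeIfAbsent(1, c -> new LinkedHashSet<>()).add(key);
        minCount = 1;
    }

    // Move key from its current frequency bucket to the next one up.
    private void touch(K key) {
        int c = counts.get(key);
        counts.put(key, c + 1);
        buckets.get(c).remove(key);
        if (buckets.get(c).isEmpty() && c == minCount) minCount = c + 1;
        buckets.computeIfAbsent(c + 1, n -> new LinkedHashSet<>()).add(key);
    }
}
```

Every operation is a constant number of hash-map and linked-set updates, which is the property that distinguishes this design from the earlier ConcurrentLRUCache-style implementation.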
Bad comments for the mlt.interestingTerms parameter, possible improvement
I ran into these bad comments in MoreLikeThisParams.java:

// Do you want to include the original document in the results or not
public final static String INTERESTING_TERMS = PREFIX + "interestingTerms";  // "false","details",("list" or "true")

1. The leading comment is just plain wrong – it is a copy/paste error, a copy of the comment for the mlt.match.include parameter.
2. The trailing comment suggests that "true" is treated the same as "list", which is not the case. The wiki says that the options are "list", "details", or "none", which is consistent with the code:

public static TermStyle get( String p ) {
  if( p != null ) {
    p = p.toUpperCase(Locale.ROOT);
    if( p.equals( "DETAILS" ) ) {
      return DETAILS;
    }
    else if( p.equals( "LIST" ) ) {
      return LIST;
    }
  }
  return NONE;
}

As an "Improvement", I would suggest that the options be "list" (or "true"), "details", or "none" (or "false").

-- Jack Krupansky
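The suggested improvement could be sketched as follows. This is a hypothetical standalone copy mirroring the shape of Solr's MoreLikeThisParams.TermStyle, not the actual Solr change: it accepts "true" as an alias for "list", while "false" (like "none" or anything unrecognized) falls through to NONE.

```java
import java.util.Locale;

// Sketch of the suggested option aliases: "true" -> LIST, "false" -> NONE.
public enum TermStyle {
    NONE, LIST, DETAILS;

    public static TermStyle get(String p) {
        if (p != null) {
            p = p.toUpperCase(Locale.ROOT);
            if (p.equals("DETAILS")) {
                return DETAILS;
            } else if (p.equals("LIST") || p.equals("TRUE")) {
                return LIST;
            }
        }
        // covers null, "none", "false", and any unrecognized value
        return NONE;
    }
}
```

This keeps the existing "list"/"details"/"none" behavior unchanged while making the boolean-style spellings do something sensible instead of silently mapping to NONE.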
[jira] [Updated] (SOLR-3393) Implement an optimized LFUCache
[ https://issues.apache.org/jira/browse/SOLR-3393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shawn Heisey updated SOLR-3393: --- Attachment: SOLR-3393-trunk-withdecay.patch SOLR-3393-4x-withdecay.patch New patches against recent trunk and 4x checkouts. Because two of the files have a getSource method that SVN changes on checkout, applying the patch with standard Linux patch tools is problematic. SVN-aware patch utilities (I tried with TortoiseSVN) seem to apply with no problems.
[jira] [Comment Edited] (SOLR-3393) Implement an optimized LFUCache
[ https://issues.apache.org/jira/browse/SOLR-3393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13541116#comment-13541116 ] Shawn Heisey edited comment on SOLR-3393 at 12/30/12 5:51 PM: -- New patches against recent trunk and 4x checkouts that also implement a slow decay. Because two of the files have a getSource method that SVN changes on checkout, applying the patch with standard Linux patch tools is problematic. SVN-aware patch utilities (I tried with TortoiseSVN) seem to apply with no problems. was (Author: elyograg): New patches against recent trunk and 4x checkouts. Because two of the files have a getSource method that SVN changes on checkout, applying the patch with standard Linux patch tools is problematic. SVN-aware patch utilities (I tried with TortoiseSVN) seem to apply with no problems.
[jira] [Commented] (SOLR-3393) Implement an optimized LFUCache
[ https://issues.apache.org/jira/browse/SOLR-3393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13541144#comment-13541144 ] Shawn Heisey commented on SOLR-3393: I would like to deprecate ConcurrentLFUCache (the patch renames the old LFUCache implementation to ConcurrentLFUCache) in this patch for 4.x and eliminate it entirely for trunk, but I will leave that decision up to the committer who takes this on. The existing patches do not do this. I also realized that I have not included a CHANGES.txt entry. I have the following suggestion for that: 4x: SOLR-3393: New LFUCache implementation with much better performance. Deprecate old implementation and rename it to ConcurrentLFUCache. trunk: SOLR-3393: New LFUCache implementation with much better performance. Removed old implementation.
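The 4.x deprecation path described above can be sketched like this. The class names and the empty-subclass delegation are purely illustrative (not what the actual patch does): the point is that the old name survives as a deprecated alias so existing configurations keep working, while trunk can drop it entirely.

```java
// Sketch of a backward-compatible deprecation: the renamed implementation
// holds the code, and the old name is kept as a deprecated empty subclass.
public class CacheDeprecationSketch {

    // Hypothetical stand-in for the new O(1) implementation.
    static class FastLFUCache {
    }

    /** @deprecated use {@link FastLFUCache} instead; removed in trunk. */
    @Deprecated
    static class ConcurrentLFUCache extends FastLFUCache {
        // empty subclass: the old name delegates entirely to the new code
    }
}
```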
[jira] [Commented] (SOLR-3393) Implement an optimized LFUCache
[ https://issues.apache.org/jira/browse/SOLR-3393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13541153#comment-13541153 ] Shawn Heisey commented on SOLR-3393: When I built the 4x patch, I accidentally checked out a specific old revision instead of the newest. The patch will apply successfully to the most recent revision as long as the SVN URL glitch is dealt with first, or you use svn-aware tools.
[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 23 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/23/ Java: 64bit/jdk1.7.0 -XX:+UseSerialGC All tests passed Build Log: [...truncated 9057 lines...] [junit4:junit4] ERROR: JVM J0 ended with an exception, command line: /Library/Java/JavaVirtualMachines/jdk1.7.0_10.jdk/Contents/Home/jre/bin/java -XX:+UseSerialGC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/heapdumps -Dtests.prefix=tests -Dtests.seed=977FFD9A2C9181D2 -Xmx512M -Dtests.iters= -Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random -Dtests.postingsformat=random -Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random -Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=5.0 -Dtests.cleanthreads=perClass -Djava.util.logging.config.file=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/solr/testlogging.properties -Dtests.nightly=false -Dtests.weekly=false -Dtests.slow=true -Dtests.asserts.gracious=false -Dtests.multiplier=1 -DtempDir=. -Djava.io.tmpdir=. -Djunit4.tempDir=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test/temp -Dclover.db.dir=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/clover/db -Djava.security.manager=org.apache.lucene.util.TestSecurityManager -Djava.security.policy=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/lucene/tools/junit4/tests.policy -Dlucene.version=5.0-SNAPSHOT -Djetty.testMode=1 -Djetty.insecurerandom=1 -Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory -Djava.awt.headless=true -Dfile.encoding=US-ASCII -classpath
[jira] [Commented] (SOLR-3393) Implement an optimized LFUCache
[ https://issues.apache.org/jira/browse/SOLR-3393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13541157#comment-13541157 ] Shawn Heisey commented on SOLR-3393: N.B.: Subversion appears to have gained the 'patch' subcommand in version 1.7 - CentOS 6 has v1.6. I always find that Redhat's stable offerings are quite outdated.
Re: [JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 23 - Failure!
[junit4:junit4] JVM J0: stderr (verbatim) [junit4:junit4] java(198,0x13484) malloc: *** error for object 0x13682ef60: pointer being freed was not allocated [junit4:junit4] *** set a breakpoint in malloc_error_break to debug [junit4:junit4] JVM J0: EOF Seems like a legitimate JVM error. D.
Re: [JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 23 - Failure!
Yes. But I did not get a hs-err file. It looks like this error is created by the MacOS runtime environment (libc). Uwe Dawid Weiss dawid.we...@cs.put.poznan.pl schrieb: [junit4:junit4] JVM J0: stderr (verbatim) [junit4:junit4] java(198,0x13484) malloc: *** error for object 0x13682ef60: pointer being freed was not allocated [junit4:junit4] *** set a breakpoint in malloc_error_break to debug [junit4:junit4] JVM J0: EOF Seems like a legitimate JVM error. D. -- Uwe Schindler H.-H.-Meier-Allee 63, 28213 Bremen http://www.thetaphi.de
[jira] [Updated] (SOLR-3118) We need a better error message when failing due to a slice that is part of collection is not available
[ https://issues.apache.org/jira/browse/SOLR-3118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3118: -- Attachment: SOLR-3118.patch I'm not sure what the absolute best way to solve this is, but it's pretty annoying so we need to do something. Here is a patch that finds the slice name when a shard is set to . When we don't find a shard, we simply set its shard value to and propagate no info along - this patch just correlates the shard to its slice name and passes it to httpshardhandler by param, so no APIs are changed. It seems to work fine for the SolrCloud case. Anyone else have any thoughts? We need a better error message when failing due to a slice that is part of collection is not available -- Key: SOLR-3118 URL: https://issues.apache.org/jira/browse/SOLR-3118 Project: Solr Issue Type: Improvement Components: SolrCloud Affects Versions: 4.0-ALPHA Reporter: Sami Siren Assignee: Mark Miller Priority: Minor Attachments: SOLR-3118.patch When indexing to/searching from an incomplete collection (for example a slice does not have any shards registered/available) a cruel error without a proper explanation is shown to the user.
These errors are from running example1.sh and creating a new collection with coreadminhandler: Slices with no shards: Indexing: {code} Error 500 No registered leader was found, collection:collection2 slice:shard4 java.lang.RuntimeException: No registered leader was found, collection:collection2 slice:shard4 at org.apache.solr.common.cloud.ZkStateReader.getLeaderProps(ZkStateReader.java:408) at org.apache.solr.common.cloud.ZkStateReader.getLeaderProps(ZkStateReader.java:393) at org.apache.solr.update.processor.DistributedUpdateProcessor.setupRequest(DistributedUpdateProcessor.java:154) at org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:210) at org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:115) at org.apache.solr.handler.XMLLoader.processUpdate(XMLLoader.java:135) at org.apache.solr.handler.XMLLoader.load(XMLLoader.java:79) at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:59) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129) at org.apache.solr.core.SolrCore.execute(SolrCore.java:1523) at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:339) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:234) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450) at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230) at 
org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.Server.handle(Server.java:326) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542) at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:945) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:756) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404) at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228) at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) {code} Searching: {code} HTTP ERROR 503 Problem accessing /solr/coreX/select/. Reason: no servers hosting shard: Powered by Jetty:// {code} Surprisingly the error is different when searching from a collection after removing a core from an collection that was in OK condition: {code} HTTP ERROR 500 Problem accessing /solr/coreX/select/. Reason: null java.util.concurrent.RejectedExecutionException at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:1768) at
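The improvement under discussion amounts to carrying the slice name through to the error string instead of an empty value. A minimal sketch of that idea, using hypothetical names rather than Solr's actual classes or the patch itself:

```java
// Sketch: build the "no servers hosting shard" error with the slice name
// included, falling back to the current unhelpful form when it is missing.
public class ShardErrorSketch {

    static String noServersMessage(String collection, String sliceName) {
        if (sliceName == null || sliceName.isEmpty()) {
            // what users effectively see today
            return "no servers hosting shard: ";
        }
        return "no servers hosting shard " + sliceName
            + " of collection " + collection;
    }
}
```

With the slice name attached, the 503 above would at least tell the user which logical shard of which collection has no live servers.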
[jira] [Commented] (SOLR-3118) We need a better error message when failing due to a slice that is part of collection is not available
[ https://issues.apache.org/jira/browse/SOLR-3118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13541169#comment-13541169 ] Mark Miller commented on SOLR-3118: --- The above references issue 2 around searching - we often return the error no servers hosting shard: and then don't give the shard - this is because it's already been set to - supposedly to help with partial results.
[jira] [Updated] (SOLR-3118) We need a better error message when failing due to a slice that is part of collection is not available
[ https://issues.apache.org/jira/browse/SOLR-3118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3118: -- Fix Version/s: 5.0 4.1
[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.7.0_09) - Build # 2342 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/2342/ Java: 32bit/jdk1.7.0_09 -server -XX:+UseParallelGC All tests passed Build Log: [...truncated 30596 lines...] BUILD FAILED C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:245: The following error occurred while executing this line: C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build.xml:707: The following error occurred while executing this line: C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\common-build.xml:1791: Failed to copy C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\contrib\velocity\lib\velocity-1.7.jar to C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\null145459437\velocity-1.7.jar due to There is not enough space on the disk Total time: 65 minutes 52 seconds Build step 'Invoke Ant' marked build as failure FATAL: Remote call on Windows VBOX failed java.io.IOException: Remote call on Windows VBOX failed at hudson.remoting.Channel.call(Channel.java:674) at hudson.Launcher$RemoteLauncher.kill(Launcher.java:877) at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:609) at hudson.model.Run.execute(Run.java:1543) at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46) at hudson.model.ResourceController.execute(ResourceController.java:88) at hudson.model.Executor.run(Executor.java:236) Caused by: java.lang.Error: Unable to load resource winp.x64.dll at hudson.remoting.RemoteClassLoader.findResource(RemoteClassLoader.java:241) at java.lang.ClassLoader.getResource(Unknown Source) at org.jvnet.winp.Native.load(Native.java:61) at org.jvnet.winp.Native.clinit(Native.java:52) at org.jvnet.winp.WinProcess.enableDebugPrivilege(WinProcess.java:200) at hudson.util.ProcessTree$Windows.clinit(ProcessTree.java:465) at hudson.util.ProcessTree.get(ProcessTree.java:335) at hudson.Launcher$RemoteLauncher$KillTask.call(Launcher.java:889) at 
hudson.Launcher$RemoteLauncher$KillTask.call(Launcher.java:880) at hudson.remoting.UserRequest.perform(UserRequest.java:118) at hudson.remoting.UserRequest.perform(UserRequest.java:48) at hudson.remoting.Request$2.run(Request.java:326) at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72) at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source) at java.util.concurrent.FutureTask.run(Unknown Source) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source) at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at hudson.remoting.Engine$1$1.run(Engine.java:60) at java.lang.Thread.run(Unknown Source) Caused by: java.io.IOException: There is not enough space on the disk at java.io.FileOutputStream.writeBytes(Native Method) at java.io.FileOutputStream.write(Unknown Source) at hudson.remoting.RemoteClassLoader.makeResource(RemoteClassLoader.java:310) at hudson.remoting.RemoteClassLoader.findResource(RemoteClassLoader.java:237) ... 18 more
[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #721: POMs out of sync
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/721/ 1 tests failed. FAILED: org.apache.solr.cloud.SyncSliceTest.testDistribSearch Error Message: shard1 should have just been set up to be inconsistent - but it's still consistent Stack Trace: java.lang.AssertionError: shard1 should have just been set up to be inconsistent - but it's still consistent at __randomizedtesting.SeedInfo.seed([DDCDAFFDB356E8D2:5C2B21E5C40988EE]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNotNull(Assert.java:526) at org.apache.solr.cloud.SyncSliceTest.doTest(SyncSliceTest.java:214) at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:794) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559) at com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
[jira] [Updated] (SOLR-3029) Poor json formatting of spelling collation info
[ https://issues.apache.org/jira/browse/SOLR-3029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3029: -- Fix Version/s: 5.0 4.2 Priority: Major (was: Blocker) Poor json formatting of spelling collation info --- Key: SOLR-3029 URL: https://issues.apache.org/jira/browse/SOLR-3029 Project: Solr Issue Type: Bug Components: spellchecker Affects Versions: 4.0-ALPHA Reporter: Antony Stubbs Fix For: 4.2, 5.0 Attachments: SOLR-3029.patch
{noformat}
spellcheck: {
  suggestions: [
    dalllas,
    { snip
      { word: canallas, freq: 1 }
    ] },
  correctlySpelled, false,
  collation, dallas
  ]
}
{noformat}
The correctlySpelled and collation key/values are stored as consecutive elements in an array - quite odd. Is there a reason it's not a key/value map like most things? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
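The flat layout complained about above is how Solr serializes a NamedList when keys may repeat (e.g. with the json.nl=flat response option): keys and values appear as alternating, consecutive array elements rather than as a map. A minimal client-side sketch of converting such a flat tail into a dict — the helper name and sample values are illustrative, not part of Solr:

```python
def parse_flat_pairs(items):
    """Turn a flat [key, value, key, value, ...] list into a dict.

    Assumes an even-length list of alternating keys and values, which is
    the shape the bug report shows for the spellcheck response tail.
    """
    if len(items) % 2 != 0:
        raise ValueError("expected alternating key/value pairs")
    return dict(zip(items[0::2], items[1::2]))

# The tail of the reported spellcheck response, as a Python list:
tail = ["correctlySpelled", False, "collation", "dallas"]
print(parse_flat_pairs(tail))  # {'correctlySpelled': False, 'collation': 'dallas'}
```

Switching the response to json.nl=map would make Solr emit these as a JSON object directly, at the cost of losing any repeated keys.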
[jira] [Updated] (SOLR-1764) While indexing a java.lang.IllegalStateException: Can't overwrite cause exception is thrown
[ https://issues.apache.org/jira/browse/SOLR-1764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-1764: -- Priority: Major (was: Blocker) While indexing a java.lang.IllegalStateException: Can't overwrite cause exception is thrown - Key: SOLR-1764 URL: https://issues.apache.org/jira/browse/SOLR-1764 Project: Solr Issue Type: Bug Components: clients - java Affects Versions: 1.4 Environment: Windows XP, JBoss 4.2.3 GA Reporter: Michael McGowan Labels: IllegalStateException I get an exception while indexing. It seems that I'm unable to see the root cause of the exception because it is masked by another java.lang.IllegalStateException: Can't overwrite cause exception. Here is the stacktrace : 16:59:04,292 ERROR [STDERR] Feb 8, 2010 4:59:04 PM org.apache.solr.update.processor.LogUpdateProcessor finish INFO: {} 0 15 16:59:04,292 ERROR [STDERR] Feb 8, 2010 4:59:04 PM org.apache.solr.common.SolrException log SEVERE: java.lang.IllegalStateException: Can't overwrite cause at java.lang.Throwable.initCause(Throwable.java:320) at com.ctc.wstx.compat.Jdk14Impl.setInitCause(Jdk14Impl.java:70) at com.ctc.wstx.exc.WstxException.init(WstxException.java:46) at com.ctc.wstx.exc.WstxIOException.init(WstxIOException.java:16) at com.ctc.wstx.stax.WstxInputFactory.doCreateSR(WstxInputFactory.java:536) at com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:592) at com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:648) at com.ctc.wstx.stax.WstxInputFactory.createXMLStreamReader(WstxInputFactory.java:319) at org.apache.solr.handler.XMLLoader.load(XMLLoader.java:68) at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:54) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131) at org.apache.solr.core.SolrCore.execute(SolrCore.java:1316) at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:338) at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:241) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:230) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175) at org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:182) at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:84) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:157) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:262) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:844) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:446) at java.lang.Thread.run(Thread.java:619) 16:59:04,292 ERROR [STDERR] Feb 8, 2010 4:59:04 PM org.apache.solr.core.SolrCore execute INFO: [] webapp=/solr path=/update params={wt=xmlversion=2.2} status=500 QTime=15 16:59:04,292 ERROR [STDERR] Feb 8, 2010 4:59:04 PM org.apache.solr.common.SolrException log SEVERE: java.lang.IllegalStateException: Can't overwrite cause at 
java.lang.Throwable.initCause(Throwable.java:320) at com.ctc.wstx.compat.Jdk14Impl.setInitCause(Jdk14Impl.java:70) at com.ctc.wstx.exc.WstxException.init(WstxException.java:46) at com.ctc.wstx.exc.WstxIOException.init(WstxIOException.java:16) at com.ctc.wstx.stax.WstxInputFactory.doCreateSR(WstxInputFactory.java:536) at
[jira] [Updated] (SOLR-2899) Custom DIH Functions in Delta-Query have null Context
[ https://issues.apache.org/jira/browse/SOLR-2899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-2899: -- Priority: Major (was: Blocker) Custom DIH Functions in Delta-Query have null Context - Key: SOLR-2899 URL: https://issues.apache.org/jira/browse/SOLR-2899 Project: Solr Issue Type: Bug Affects Versions: 3.4 Reporter: Jens Zastrow Labels: custom, dih, functions We must use a custom function in the deltaQuery, but the passed-in Context is always null, preventing any variable resolution. In full-import mode the behavior is correct. Looking into the sources showed that the Context is obtained from a thread-local Context.CURRENT_CONTEXT.get(), which is never set (via Context.CURRENT_CONTEXT.set()) for the Context created in DocBuilder.java:871
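The root cause described here — a thread-local read on a thread where it was never set — can be reproduced in miniature with any thread-local mechanism. A sketch using Python's threading.local (the variable names are illustrative; the real code is Java's Context.CURRENT_CONTEXT):

```python
import threading

CURRENT_CONTEXT = threading.local()  # stands in for Context.CURRENT_CONTEXT

def delta_import_worker(results):
    # This worker never assigns CURRENT_CONTEXT.value, so the value set on
    # the main thread below is invisible here -- mirroring the null Context
    # seen by custom DIH functions during delta-import.
    results.append(getattr(CURRENT_CONTEXT, "value", None))

CURRENT_CONTEXT.value = "context set on the main thread"
results = []
worker = threading.Thread(target=delta_import_worker, args=(results,))
worker.start()
worker.join()
print(results)  # [None] -- the worker thread saw no context
```

The fix implied by the report is for the delta-import path to call Context.CURRENT_CONTEXT.set(...) on whichever thread later calls get(), just as the full-import path does.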
[jira] [Updated] (SOLR-4108) SolrCloud: Unexpected behavior when doing atomic updates or document reindexations.
[ https://issues.apache.org/jira/browse/SOLR-4108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-4108: -- Priority: Major (was: Blocker) SolrCloud: Unexpected behavior when doing atomic updates or document reindexations. --- Key: SOLR-4108 URL: https://issues.apache.org/jira/browse/SOLR-4108 Project: Solr Issue Type: Bug Components: SolrCloud Affects Versions: 4.0 Environment: Note: This issue is related to JIRA-4080. Context: SolrCloud deployed with nShards=1, two Solr servers, each one with two cores/collections. We then have one leader and one replica for each shard. Reporter: Luis Cappa Banda Fix For: 4.2, 5.0 The situation is the following: 1. SolrCloud with one shard and two Solr instances. 2. Indexation via SolrJ with CloudServer and a custom BinaryLBHttpSolrServer that uses BinaryRequestWriter to execute atomic updates correctly. Check JIRA-4080. 3. An asynchronous process partially updates some document fields. After that operation I automatically execute a commit, so the index must be reloaded. What I have checked is that with both atomic updates and complete document reindexations, random documents are not updated, even though while debugging I saw that the add() and commit() operations were executed correctly and without errors. In other words, something strange happens when you both index and update documents asynchronously at the same time. Also, if I debug line by line (blocking other indexation/update processes) and I check with my own eyes when an index operation is done, I confirm that the document itself updates correctly. What I think is that there is some critical problem with both SolrCloud and the CloudSolrServer interface that has something to do with index blocking while writing and forwarding document updates to replicas. If I'm right, and considering also JIRA-4080, I would not recommend SolrCloud in production at the moment. -- This message is automatically generated by JIRA. 
[jira] [Resolved] (SOLR-4244) When coming back from session expiration we should not wait for the leader to see us in the down state if we are the node that must become the leader.
[ https://issues.apache.org/jira/browse/SOLR-4244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller resolved SOLR-4244. --- Resolution: Fixed When coming back from session expiration we should not wait for the leader to see us in the down state if we are the node that must become the leader. -- Key: SOLR-4244 URL: https://issues.apache.org/jira/browse/SOLR-4244 Project: Solr Issue Type: Bug Components: SolrCloud Affects Versions: 4.0-ALPHA, 4.0-BETA, 4.0 Reporter: Mark Miller Assignee: Mark Miller Fix For: 4.1, 5.0
[jira] [Resolved] (SOLR-3131) details command fails when a replication is forced with a fetchIndex command on a non-slave server
[ https://issues.apache.org/jira/browse/SOLR-3131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller resolved SOLR-3131. --- Resolution: Fixed Fix Version/s: (was: 4.1) 4.0 details command fails when a replication is forced with a fetchIndex command on a non-slave server -- Key: SOLR-3131 URL: https://issues.apache.org/jira/browse/SOLR-3131 Project: Solr Issue Type: Bug Components: replication (java) Affects Versions: 3.5 Reporter: Tomás Fernández Löbbe Assignee: Mark Miller Priority: Minor Fix For: 4.0 Attachments: SOLR-3131.patch Steps to reproduce the problem: 1) Start a master Solr instance (called A) 2) Start a Solr instance with the replication handler configured, but with no slave configuration (called B) 3) Issue the request http://B:port/solr/replication?command=fetchindex&masterUrl=http://A:port/solr/replication 4) While B is fetching the index, issue the request: http://B:port/solr/replication?command=details Expected behavior: See the replication details as usual. 
Getting an exception instead: java.lang.NullPointerException at org.apache.solr.handler.ReplicationHandler.isPollingDisabled(ReplicationHandler.java:447) at org.apache.solr.handler.ReplicationHandler.getReplicationDetails(ReplicationHandler.java:611) at org.apache.solr.handler.ReplicationHandler.handleRequestBody(ReplicationHandler.java:211) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129) at org.apache.solr.core.SolrCore.execute(SolrCore.java:1523) at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:339) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:234) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450) at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230) at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.Server.handle(Server.java:326) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542) at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404) at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228) at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) -- This message is automatically 
generated by JIRA.
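The NullPointerException above comes from the details command reading slave-only replication state (isPollingDisabled) on a server that has no slave configuration. A language-neutral sketch of the guard such a fix needs — all names here are hypothetical stand-ins, not Solr's actual fields:

```python
def replication_details(handler_state):
    """Build a details response without assuming slave state exists.

    handler_state is a hypothetical dict standing in for the handler's
    fields; the 'slave' entry is absent when no slave section is configured.
    """
    details = {"master": handler_state.get("master", {})}
    slave = handler_state.get("slave")  # None on a master-only server
    if slave is not None:
        # Only consult slave-only flags when slave state actually exists.
        details["slave"] = {"isPollingDisabled": slave.get("polling_disabled", False)}
    return details

# A master-only server (step 2 of the reproduction) no longer crashes:
print(replication_details({"master": {"replicateAfter": "commit"}}))
# {'master': {'replicateAfter': 'commit'}}
```

The same shape applies in the Java handler: check for a configured slave before dereferencing its polling state.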
[jira] [Assigned] (SOLR-4246) log forwarded updates
[ https://issues.apache.org/jira/browse/SOLR-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yonik Seeley reassigned SOLR-4246: -- Assignee: Yonik Seeley log forwarded updates - Key: SOLR-4246 URL: https://issues.apache.org/jira/browse/SOLR-4246 Project: Solr Issue Type: Bug Reporter: Yonik Seeley Assignee: Yonik Seeley Updates that were forwarded from one solr node to another are not logged on the receiving side when complete, making debugging more difficult than it should be.
[jira] [Created] (SOLR-4246) log forwarded updates
Yonik Seeley created SOLR-4246: -- Summary: log forwarded updates Key: SOLR-4246 URL: https://issues.apache.org/jira/browse/SOLR-4246 Project: Solr Issue Type: Bug Reporter: Yonik Seeley Updates that were forwarded from one solr node to another are not logged on the receiving side when complete, making debugging more difficult than it should be.
[jira] [Commented] (SOLR-4246) log forwarded updates
[ https://issues.apache.org/jira/browse/SOLR-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13541233#comment-13541233 ] Commit Tag Bot commented on SOLR-4246: -- [trunk commit] Yonik Seeley http://svn.apache.org/viewvc?view=revision&revision=1427037 SOLR-4246: log forwarded updates log forwarded updates - Key: SOLR-4246 URL: https://issues.apache.org/jira/browse/SOLR-4246 Project: Solr Issue Type: Bug Reporter: Yonik Seeley Assignee: Yonik Seeley Updates that were forwarded from one solr node to another are not logged on the receiving side when complete, making debugging more difficult than it should be.
[jira] [Commented] (SOLR-4246) log forwarded updates
[ https://issues.apache.org/jira/browse/SOLR-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13541235#comment-13541235 ] Commit Tag Bot commented on SOLR-4246: -- [branch_4x commit] Yonik Seeley http://svn.apache.org/viewvc?view=revision&revision=1427039 SOLR-4246: log forwarded updates log forwarded updates - Key: SOLR-4246 URL: https://issues.apache.org/jira/browse/SOLR-4246 Project: Solr Issue Type: Bug Reporter: Yonik Seeley Assignee: Yonik Seeley Updates that were forwarded from one solr node to another are not logged on the receiving side when complete, making debugging more difficult than it should be.
[jira] [Resolved] (SOLR-4246) log forwarded updates
[ https://issues.apache.org/jira/browse/SOLR-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yonik Seeley resolved SOLR-4246. Resolution: Fixed Fix Version/s: 4.1 log forwarded updates - Key: SOLR-4246 URL: https://issues.apache.org/jira/browse/SOLR-4246 Project: Solr Issue Type: Bug Reporter: Yonik Seeley Assignee: Yonik Seeley Fix For: 4.1 Updates that were forwarded from one solr node to another are not logged on the receiving side when complete, making debugging more difficult than it should be.
[jira] [Commented] (LUCENE-2899) Add OpenNLP Analysis capabilities as a module
[ https://issues.apache.org/jira/browse/LUCENE-2899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13541285#comment-13541285 ] Lance Norskog commented on LUCENE-2899: --- Wow, someone tried it! I apologize for not noticing your question. bq. I'm able to get the posTagger working, yet I still have not found a way to incorporate either the Chunker or the NER Models into my Solr project. The schema.xml file includes samples for all of the models: {{/lusolr_4x_opennlp/solr/contrib/opennlp/src/test-files/opennlp/solr/collection1/conf/schema.xml}} This is for the chunker. The chunker works from parts-of-speech tags, not the original words. The chunker needs a parts-of-speech model as well as a chunker model. This should throw an error if the parts-of-speech model is not there. I will fix this.
{code:xml}
<filter class="solr.OpenNLPFilterFactory" posTaggerModel="opennlp/en-test-pos-maxent.bin" chunkerModel="opennlp/en-test-chunker.bin" />
{code}
Is the NER configuration still not working? Add OpenNLP Analysis capabilities as a module - Key: LUCENE-2899 URL: https://issues.apache.org/jira/browse/LUCENE-2899 Project: Lucene - Core Issue Type: New Feature Components: modules/analysis Reporter: Grant Ingersoll Assignee: Grant Ingersoll Priority: Minor Fix For: 4.1 Attachments: LUCENE-2899.patch, LUCENE-2899.patch, LUCENE-2899.patch, LUCENE-2899.patch, LUCENE-2899.patch, LUCENE-2899.patch, OpenNLPFilter.java, OpenNLPTokenizer.java, opennlp_trunk.patch Now that OpenNLP is an ASF project and has a nice license, it would be nice to have a submodule (under analysis) that exposed capabilities for it. Drew Farris, Tom Morton and I have code that does: * Sentence Detection as a Tokenizer (could also be a TokenFilter, although it would have to change slightly to buffer tokens) * NamedEntity recognition as a TokenFilter We are also planning a Tokenizer/TokenFilter that can put parts of speech as either payloads (PartOfSpeechAttribute?) 
on a token or at the same position. I'd propose it go under: modules/analysis/opennlp