[jira] [Assigned] (SOLR-9269) Ability to create/delete/list snapshots for a solr core
[ https://issues.apache.org/jira/browse/SOLR-9269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

David Smiley reassigned SOLR-9269:
----------------------------------

    Assignee: David Smiley

> Ability to create/delete/list snapshots for a solr core
> -------------------------------------------------------
>
>                 Key: SOLR-9269
>                 URL: https://issues.apache.org/jira/browse/SOLR-9269
>             Project: Solr
>          Issue Type: Sub-task
>          Components: SolrCloud
>            Reporter: Hrishikesh Gadre
>            Assignee: David Smiley
>         Attachments: SOLR-9269.patch
>
> Support snapshot create/delete/list functionality @ the Solr core level.
> Please refer to parent JIRA for more details.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_92) - Build # 1016 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1016/
Java: 64bit/jdk1.8.0_92 -XX:-UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 10977 lines...]
   [junit4] JVM J1: stdout was not empty, see: /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/temp/junit4-J1-20160701_045532_918.sysout
   [junit4] >>> JVM J1 emitted unexpected output (verbatim)
   [junit4] java.lang.OutOfMemoryError: GC overhead limit exceeded
   [junit4] Dumping heap to /home/jenkins/workspace/Lucene-Solr-6.x-Linux/heapdumps/java_pid29278.hprof ...
   [junit4] Heap dump file created [450329487 bytes in 2.196 secs]
   [junit4] <<< JVM J1: EOF
   [junit4] JVM J1: stderr was not empty, see: /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/temp/junit4-J1-20160701_045532_918.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim)
   [junit4] WARN: Unhandled exception in event serialization. -> java.lang.OutOfMemoryError: GC overhead limit exceeded
   [junit4] 	at java.util.Arrays.copyOf(Arrays.java:3332)
   [junit4] 	at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:137)
   [junit4] 	at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:121)
   [junit4] 	at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:622)
   [junit4] 	at java.lang.StringBuilder.append(StringBuilder.java:202)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.events.AbstractEvent.toAscii(AbstractEvent.java:108)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.events.AbstractEvent.writeBinaryProperty(AbstractEvent.java:36)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.events.AppendStdErrEvent.serialize(AppendStdErrEvent.java:30)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.events.Serializer$2.run(Serializer.java:101)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.events.Serializer$2.run(Serializer.java:96)
   [junit4] 	at java.security.AccessController.doPrivileged(Native Method)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.events.Serializer.flushQueue(Serializer.java:96)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.events.Serializer.serialize(Serializer.java:81)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.SlaveMain$3$2.write(SlaveMain.java:456)
   [junit4] 	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
   [junit4] 	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
   [junit4] 	at java.io.PrintStream.flush(PrintStream.java:338)
   [junit4] 	at java.io.FilterOutputStream.flush(FilterOutputStream.java:140)
   [junit4] 	at java.io.PrintStream.write(PrintStream.java:482)
   [junit4] 	at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
   [junit4] 	at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:282)
   [junit4] 	at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:125)
   [junit4] 	at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:135)
   [junit4] 	at java.io.OutputStreamWriter.write(OutputStreamWriter.java:220)
   [junit4] 	at java.io.Writer.write(Writer.java:157)
   [junit4] 	at org.apache.log4j.helpers.QuietWriter.write(QuietWriter.java:48)
   [junit4] 	at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:310)
   [junit4] 	at org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
   [junit4] 	at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
   [junit4] 	at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
   [junit4] 	at org.apache.log4j.Category.callAppenders(Category.java:206)
   [junit4] 	at org.apache.log4j.Category.forcedLog(Category.java:391)
   [junit4] <<< JVM J1: EOF
[...truncated 1418 lines...]
   [junit4] ERROR: JVM J1 ended with an exception, command line: /home/jenkins/tools/java/64bit/jdk1.8.0_92/jre/bin/java -XX:-UseCompressedOops -XX:+UseParallelGC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/jenkins/workspace/Lucene-Solr-6.x-Linux/heapdumps -ea -esa -Dtests.prefix=tests -Dtests.seed=D6C86E4CEDDE0828 -Xmx512M -Dtests.iters= -Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random -Dtests.postingsformat=random -Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random -Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=6.2.0 -Dtests.cleanthreads=perClass -Djava.util.logging.config.file=/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/tools/junit4/logging.properties -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false -Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=3 -DtempDir=./temp -Djava.io.tmpdir=./temp -Djunit4.tempDir=/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/temp
[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 244 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/244/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 63433 lines...]
-ecj-javadoc-lint-tests:
    [mkdir] Created dir: /var/folders/qg/h2dfw5s161s51l2bn79mrb7rgn/T/ecj1133236478
 [ecj-lint] Compiling 671 source files to /var/folders/qg/h2dfw5s161s51l2bn79mrb7rgn/T/ecj1133236478
 [ecj-lint] invalid Class-Path header in manifest of jar file: /Users/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: /Users/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] ----------
 [ecj-lint] 1. WARNING in /Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/test/org/apache/solr/cloud/OverseerTest.java (at line 267)
 [ecj-lint] 	ZkStateReader reader = new ZkStateReader(zkClient);
 [ecj-lint] 	              ^^^^^^
 [ecj-lint] Resource leak: 'reader' is never closed
 [ecj-lint] ----------
 [ecj-lint] 2. WARNING in /Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/test/org/apache/solr/cloud/OverseerTest.java (at line 322)
 [ecj-lint] 	ZkStateReader reader = new ZkStateReader(zkClient);
 [ecj-lint] 	              ^^^^^^
 [ecj-lint] Resource leak: 'reader' is never closed
 [ecj-lint] ----------
 [ecj-lint] 3. ERROR in /Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/test/org/apache/solr/cloud/TestAuthenticationFramework.java (at line 46)
 [ecj-lint] 	import org.apache.solr.client.solrj.impl.HttpClientUtil;
 [ecj-lint] The import org.apache.solr.client.solrj.impl.HttpClientUtil is never used
 [ecj-lint] ----------
 [ecj-lint] 4. WARNING in /Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/test/org/apache/solr/core/HdfsDirectoryFactoryTest.java (at line 146)
 [ecj-lint] 	HdfsDirectoryFactory hdfsFactory = new HdfsDirectoryFactory();
 [ecj-lint] 	                     ^^^^^^^^^^^
 [ecj-lint] Resource leak: 'hdfsFactory' is never closed
 [ecj-lint] ----------
 [ecj-lint] 5. WARNING in /Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/test/org/apache/solr/handler/admin/SecurityConfHandlerTest.java (at line 53)
 [ecj-lint] 	BasicAuthPlugin basicAuth = new BasicAuthPlugin();
 [ecj-lint] 	                ^^^^^^^^^
 [ecj-lint] Resource leak: 'basicAuth' is never closed
 [ecj-lint] ----------
 [ecj-lint] 6. WARNING in /Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/test/org/apache/solr/handler/component/DistributedDebugComponentTest.java (at line 163)
 [ecj-lint] 	SolrClient client = random().nextBoolean() ? collection1 : collection2;
 [ecj-lint] 	           ^^^^^^
 [ecj-lint] Resource leak: 'client' is never closed
 [ecj-lint] ----------
 [ecj-lint] 7. WARNING in /Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/test/org/apache/solr/handler/component/DistributedDebugComponentTest.java (at line 221)
 [ecj-lint] 	throw new AssertionError(q.toString() + ": " + e.getMessage(), e);
 [ecj-lint] 	^^
 [ecj-lint] Resource leak: 'client' is not closed at this location
 [ecj-lint] ----------
 [ecj-lint] 8. WARNING in /Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/test/org/apache/solr/highlight/HighlighterTest.java (at line 185)
 [ecj-lint] 	Analyzer a1 = new WhitespaceAnalyzer();
 [ecj-lint] 	         ^^
 [ecj-lint] Resource leak: 'a1' is never closed
 [ecj-lint] ----------
 [ecj-lint] 9. WARNING in /Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/test/org/apache/solr/highlight/HighlighterTest.java (at line 188)
 [ecj-lint] 	OffsetWindowTokenFilter tots = new OffsetWindowTokenFilter(tokenStream);
 [ecj-lint] Resource leak: 'tots' is never closed
 [ecj-lint] ----------
 [ecj-lint] 10. WARNING in /Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/test/org/apache/solr/highlight/HighlighterTest.java (at line 192)
 [ecj-lint] 	Analyzer a2 = new WhitespaceAnalyzer();
 [ecj-lint] 	         ^^
 [ecj-lint] Resource leak: 'a2' is never closed
 [ecj-lint] ----------
 [ecj-lint] 11. WARNING in /Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/test/org/apache/solr/search/TestDocSet.java (at line 241)
 [ecj-lint] 	return loadfactor!=0 ? new HashDocSet(a,0,n,1/loadfactor) : new HashDocSet(a,0,n);
 [ecj-lint] 	^^
 [ecj-lint] Resource leak: '' is never closed
 [ecj-lint] ----------
 [ecj-lint] 12. WARNING in
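[Editor's note] The ecj-lint warnings above all have the same shape: an AutoCloseable is created but never reaches a close() call. The sketch below illustrates the try-with-resources fix pattern using a stand-in class; TrackedResource is hypothetical, since the flagged classes (ZkStateReader, HdfsDirectoryFactory, etc.) need a running cluster and are not usable in a standalone snippet.

```java
// Minimal sketch of the fix pattern for the "Resource leak" warnings above.
// TrackedResource is a stand-in for AutoCloseable classes like ZkStateReader;
// it only records whether close() was called.
class ResourceLeakFix {
    static final class TrackedResource implements AutoCloseable {
        boolean closed = false;
        @Override public void close() { closed = true; }
    }

    // Leaky shape: the resource escapes without a guaranteed close() call,
    // which is what ecj-lint reports at e.g. OverseerTest.java line 267.
    static TrackedResource leaky() {
        TrackedResource reader = new TrackedResource();
        return reader; // caller may forget to close it
    }

    // Fixed shape: try-with-resources guarantees close(), even on exceptions.
    static boolean fixed() {
        TrackedResource reader;
        try (TrackedResource r = new TrackedResource()) {
            reader = r;
            // ... use the resource ...
        }
        return reader.closed; // true: close() ran when the block exited
    }

    public static void main(String[] args) {
        if (!fixed()) throw new AssertionError("resource was not closed");
        System.out.println("closed=" + fixed());
    }
}
```

In the actual tests the fix is the same move: wrap the flagged local in a try-with-resources block (or close it in a finally/@After), which silences the warning and releases the resource deterministically.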
[jira] [Commented] (SOLR-8787) TestAuthenticationFramework should not extend TestMiniSolrCloudCluster
[ https://issues.apache.org/jira/browse/SOLR-8787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15358386#comment-15358386 ]

ASF subversion and git services commented on SOLR-8787:
-------------------------------------------------------

Commit ed38a29a9f497eda1c8d7bce374cc2bbdb281054 in lucene-solr's branch refs/heads/branch_6x from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ed38a29 ]

SOLR-8787: Fix broken build due to unused import

> TestAuthenticationFramework should not extend TestMiniSolrCloudCluster
> ----------------------------------------------------------------------
>
>                 Key: SOLR-8787
>                 URL: https://issues.apache.org/jira/browse/SOLR-8787
>             Project: Solr
>          Issue Type: Bug
>          Components: Tests
>            Reporter: Shalin Shekhar Mangar
>            Assignee: Shalin Shekhar Mangar
>            Priority: Minor
>              Labels: difficulty-easy, newdev
>             Fix For: 6.2, master (7.0)
>
>         Attachments: SOLR-8787.patch
>
> TestAuthenticationFramework should not extend TestMiniSolrCloudCluster.
> TestMiniSolrCloudCluster is actually a test for MiniSolrCloudCluster, not
> a generic test framework class. I saw a local failure for
> TestAuthenticationFramework.testSegmentTerminateEarly, which should never be
> executed in the first place.
Re: [JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_92) - Build # 1015 - Still Failing!
Unused import checked in while merging commit to branch_6x on SOLR-8787. I
pushed a fix.

On Fri, Jul 1, 2016 at 8:04 AM, Policeman Jenkins Server <
jenk...@thetaphi.de> wrote:

> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1015/
> Java: 32bit/jdk1.8.0_92 -client -XX:+UseSerialGC
>
> All tests passed
>
> Build Log:
> [...truncated 61804 lines...]
> -ecj-javadoc-lint-tests:
>     [mkdir] Created dir: /tmp/ecj1154503317
>  [ecj-lint] Compiling 671 source files to /tmp/ecj1154503317
>  [ecj-lint] invalid Class-Path header in manifest of jar file:
> /home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
>  [ecj-lint] invalid Class-Path header in manifest of jar file:
> /home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
>  [ecj-lint] ----------
>  [ecj-lint] 1. WARNING in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test/org/apache/solr/cloud/OverseerTest.java (at line 267)
>  [ecj-lint] 	ZkStateReader reader = new ZkStateReader(zkClient);
>  [ecj-lint] 	              ^^^^^^
>  [ecj-lint] Resource leak: 'reader' is never closed
>  [ecj-lint] ----------
>  [ecj-lint] 2. WARNING in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test/org/apache/solr/cloud/OverseerTest.java (at line 322)
>  [ecj-lint] 	ZkStateReader reader = new ZkStateReader(zkClient);
>  [ecj-lint] 	              ^^^^^^
>  [ecj-lint] Resource leak: 'reader' is never closed
>  [ecj-lint] ----------
>  [ecj-lint] 3. ERROR in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test/org/apache/solr/cloud/TestAuthenticationFramework.java (at line 46)
>  [ecj-lint] 	import org.apache.solr.client.solrj.impl.HttpClientUtil;
>  [ecj-lint] The import org.apache.solr.client.solrj.impl.HttpClientUtil is never used
>  [ecj-lint] ----------
>  [ecj-lint] 4. WARNING in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test/org/apache/solr/core/HdfsDirectoryFactoryTest.java (at line 146)
>  [ecj-lint] 	HdfsDirectoryFactory hdfsFactory = new HdfsDirectoryFactory();
>  [ecj-lint] 	                     ^^^^^^^^^^^
>  [ecj-lint] Resource leak: 'hdfsFactory' is never closed
>  [ecj-lint] ----------
>  [ecj-lint] 5. WARNING in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test/org/apache/solr/handler/admin/SecurityConfHandlerTest.java (at line 53)
>  [ecj-lint] 	BasicAuthPlugin basicAuth = new BasicAuthPlugin();
>  [ecj-lint] 	                ^^^^^^^^^
>  [ecj-lint] Resource leak: 'basicAuth' is never closed
>  [ecj-lint] ----------
>  [ecj-lint] 6. WARNING in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test/org/apache/solr/handler/component/DistributedDebugComponentTest.java (at line 163)
>  [ecj-lint] 	SolrClient client = random().nextBoolean() ? collection1 : collection2;
>  [ecj-lint] 	           ^^^^^^
>  [ecj-lint] Resource leak: 'client' is never closed
>  [ecj-lint] ----------
>  [ecj-lint] 7. WARNING in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test/org/apache/solr/handler/component/DistributedDebugComponentTest.java (at line 221)
>  [ecj-lint] 	throw new AssertionError(q.toString() + ": " + e.getMessage(), e);
>  [ecj-lint] 	^^
>  [ecj-lint] Resource leak: 'client' is not closed at this location
>  [ecj-lint] ----------
>  [ecj-lint] 8. WARNING in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test/org/apache/solr/highlight/HighlighterTest.java (at line 185)
>  [ecj-lint] 	Analyzer a1 = new WhitespaceAnalyzer();
>  [ecj-lint] 	         ^^
>  [ecj-lint] Resource leak: 'a1' is never closed
>  [ecj-lint] ----------
>  [ecj-lint] 9. WARNING in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test/org/apache/solr/highlight/HighlighterTest.java (at line 188)
>  [ecj-lint] 	OffsetWindowTokenFilter tots = new OffsetWindowTokenFilter(tokenStream);
>  [ecj-lint] Resource leak: 'tots' is never closed
>  [ecj-lint] ----------
>  [ecj-lint] 10. WARNING in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test/org/apache/solr/highlight/HighlighterTest.java (at line 192)
>  [ecj-lint] 	Analyzer a2 = new WhitespaceAnalyzer();
>  [ecj-lint] 	         ^^
>  [ecj-lint] Resource leak: 'a2' is never closed
>  [ecj-lint] ----------
>  [ecj-lint] 11. WARNING in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test/org/apache/solr/search/TestDocSet.java (at line 241)
>  [ecj-lint] 	return loadfactor!=0 ? new HashDocSet(a,0,n,1/loadfactor) : new HashDocSet(a,0,n);
>  [ecj-lint]
[jira] [Updated] (SOLR-9242) Collection level backup/restore should provide a param for specifying the repository implementation it should use
[ https://issues.apache.org/jira/browse/SOLR-9242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hrishikesh Gadre updated SOLR-9242:
-----------------------------------

    Attachment: SOLR-9242.patch

[~varunthacker] Fixed a bug in the earlier patch.

> Collection level backup/restore should provide a param for specifying the
> repository implementation it should use
> -------------------------------------------------------------------------
>
>                 Key: SOLR-9242
>                 URL: https://issues.apache.org/jira/browse/SOLR-9242
>             Project: Solr
>          Issue Type: Improvement
>            Reporter: Hrishikesh Gadre
>            Assignee: Varun Thacker
>         Attachments: SOLR-9242.patch, SOLR-9242.patch, SOLR-9242.patch
>
> SOLR-7374 provides the BackupRepository interface to enable storing Solr index
> data on a configured file-system (e.g. HDFS, local file-system, etc.). This
> JIRA is to track the work required to extend this functionality to the
> collection level.
[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_92) - Build # 1015 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1015/
Java: 32bit/jdk1.8.0_92 -client -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 61804 lines...]
-ecj-javadoc-lint-tests:
    [mkdir] Created dir: /tmp/ecj1154503317
 [ecj-lint] Compiling 671 source files to /tmp/ecj1154503317
 [ecj-lint] invalid Class-Path header in manifest of jar file: /home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: /home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] ----------
 [ecj-lint] 1. WARNING in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test/org/apache/solr/cloud/OverseerTest.java (at line 267)
 [ecj-lint] 	ZkStateReader reader = new ZkStateReader(zkClient);
 [ecj-lint] 	              ^^^^^^
 [ecj-lint] Resource leak: 'reader' is never closed
 [ecj-lint] ----------
 [ecj-lint] 2. WARNING in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test/org/apache/solr/cloud/OverseerTest.java (at line 322)
 [ecj-lint] 	ZkStateReader reader = new ZkStateReader(zkClient);
 [ecj-lint] 	              ^^^^^^
 [ecj-lint] Resource leak: 'reader' is never closed
 [ecj-lint] ----------
 [ecj-lint] 3. ERROR in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test/org/apache/solr/cloud/TestAuthenticationFramework.java (at line 46)
 [ecj-lint] 	import org.apache.solr.client.solrj.impl.HttpClientUtil;
 [ecj-lint] The import org.apache.solr.client.solrj.impl.HttpClientUtil is never used
 [ecj-lint] ----------
 [ecj-lint] 4. WARNING in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test/org/apache/solr/core/HdfsDirectoryFactoryTest.java (at line 146)
 [ecj-lint] 	HdfsDirectoryFactory hdfsFactory = new HdfsDirectoryFactory();
 [ecj-lint] 	                     ^^^^^^^^^^^
 [ecj-lint] Resource leak: 'hdfsFactory' is never closed
 [ecj-lint] ----------
 [ecj-lint] 5. WARNING in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test/org/apache/solr/handler/admin/SecurityConfHandlerTest.java (at line 53)
 [ecj-lint] 	BasicAuthPlugin basicAuth = new BasicAuthPlugin();
 [ecj-lint] 	                ^^^^^^^^^
 [ecj-lint] Resource leak: 'basicAuth' is never closed
 [ecj-lint] ----------
 [ecj-lint] 6. WARNING in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test/org/apache/solr/handler/component/DistributedDebugComponentTest.java (at line 163)
 [ecj-lint] 	SolrClient client = random().nextBoolean() ? collection1 : collection2;
 [ecj-lint] 	           ^^^^^^
 [ecj-lint] Resource leak: 'client' is never closed
 [ecj-lint] ----------
 [ecj-lint] 7. WARNING in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test/org/apache/solr/handler/component/DistributedDebugComponentTest.java (at line 221)
 [ecj-lint] 	throw new AssertionError(q.toString() + ": " + e.getMessage(), e);
 [ecj-lint] 	^^
 [ecj-lint] Resource leak: 'client' is not closed at this location
 [ecj-lint] ----------
 [ecj-lint] 8. WARNING in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test/org/apache/solr/highlight/HighlighterTest.java (at line 185)
 [ecj-lint] 	Analyzer a1 = new WhitespaceAnalyzer();
 [ecj-lint] 	         ^^
 [ecj-lint] Resource leak: 'a1' is never closed
 [ecj-lint] ----------
 [ecj-lint] 9. WARNING in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test/org/apache/solr/highlight/HighlighterTest.java (at line 188)
 [ecj-lint] 	OffsetWindowTokenFilter tots = new OffsetWindowTokenFilter(tokenStream);
 [ecj-lint] Resource leak: 'tots' is never closed
 [ecj-lint] ----------
 [ecj-lint] 10. WARNING in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test/org/apache/solr/highlight/HighlighterTest.java (at line 192)
 [ecj-lint] 	Analyzer a2 = new WhitespaceAnalyzer();
 [ecj-lint] 	         ^^
 [ecj-lint] Resource leak: 'a2' is never closed
 [ecj-lint] ----------
 [ecj-lint] 11. WARNING in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test/org/apache/solr/search/TestDocSet.java (at line 241)
 [ecj-lint] 	return loadfactor!=0 ? new HashDocSet(a,0,n,1/loadfactor) : new HashDocSet(a,0,n);
 [ecj-lint] 	^^
 [ecj-lint] Resource leak: '' is never closed
 [ecj-lint] ----------
 [ecj-lint] 12. WARNING in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test/org/apache/solr/search/TestDocSet.java (at line 531)
 [ecj-lint] 	DocSet a = new BitDocSet(bs);
 [ecj-lint] 	       ^
 [ecj-lint] Resource leak: 'a' is
[JENKINS] Lucene-Solr-6.x-Windows (32bit/jdk1.8.0_92) - Build # 285 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/285/
Java: 32bit/jdk1.8.0_92 -client -XX:+UseSerialGC

2 tests failed.

FAILED:  junit.framework.TestSuite.org.apache.solr.schema.TestManagedSchemaAPI

Error Message:
ObjectTracker found 8 object(s) that were not released!!! [MDCAwareThreadPoolExecutor, MockDirectoryWrapper, MockDirectoryWrapper, TransactionLog, TransactionLog, MDCAwareThreadPoolExecutor, MockDirectoryWrapper, MockDirectoryWrapper]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 8 object(s) that were not released!!! [MDCAwareThreadPoolExecutor, MockDirectoryWrapper, MockDirectoryWrapper, TransactionLog, TransactionLog, MDCAwareThreadPoolExecutor, MockDirectoryWrapper, MockDirectoryWrapper]
	at __randomizedtesting.SeedInfo.seed([FD633F85EC5E265D]:0)
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.assertTrue(Assert.java:43)
	at org.junit.Assert.assertNull(Assert.java:551)
	at org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:257)
	at sun.reflect.GeneratedMethodAccessor31.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
	at java.lang.Thread.run(Thread.java:745)

FAILED:  junit.framework.TestSuite.org.apache.solr.schema.TestManagedSchemaAPI

Error Message:
Could not remove the following files (in the order of attempts):
   C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.schema.TestManagedSchemaAPI_FD633F85EC5E265D-001\tempDir-001\node1\testschemaapi_shard1_replica2\data\tlog\tlog.000: java.nio.file.FileSystemException: C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.schema.TestManagedSchemaAPI_FD633F85EC5E265D-001\tempDir-001\node1\testschemaapi_shard1_replica2\data\tlog\tlog.000: The process cannot access the file because it is being used by another process.
   C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.schema.TestManagedSchemaAPI_FD633F85EC5E265D-001\tempDir-001\node1\testschemaapi_shard1_replica2\data\tlog: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.schema.TestManagedSchemaAPI_FD633F85EC5E265D-001\tempDir-001\node1\testschemaapi_shard1_replica2\data\tlog
   C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.schema.TestManagedSchemaAPI_FD633F85EC5E265D-001\tempDir-001\node1\testschemaapi_shard1_replica2\data: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.schema.TestManagedSchemaAPI_FD633F85EC5E265D-001\tempDir-001\node1\testschemaapi_shard1_replica2\data
[jira] [Commented] (SOLR-9038) Ability to create/delete/list snapshots for a solr collection
[ https://issues.apache.org/jira/browse/SOLR-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15358224#comment-15358224 ]

Hrishikesh Gadre commented on SOLR-9038:
----------------------------------------

[~dsmiley] Thanks for the feedback! I have created a new sub-task to track the implementation of this functionality at the Solr core level (which is now complete). I am planning to work on extending it to the collection level early next week.

bq. , though there were some minor improvements I suggested RE Java 8 streams.

I think I have addressed those comments. Please take a look at the attached patch for SOLR-9269.

bq. Do tests pass & "ant precommit"?

Yes, I verified that all tests as well as precommit are passing.

> Ability to create/delete/list snapshots for a solr collection
> -------------------------------------------------------------
>
>                 Key: SOLR-9038
>                 URL: https://issues.apache.org/jira/browse/SOLR-9038
>             Project: Solr
>          Issue Type: New Feature
>          Components: SolrCloud
>            Reporter: Hrishikesh Gadre
>            Assignee: David Smiley
>
> Currently work is under way to implement a backup/restore API for SolrCloud
> (SOLR-5750). SOLR-5750 is about providing an ability to "copy" index files
> and collection metadata to a configurable location.
> In addition to this, we should also provide a facility to create "named"
> snapshots for a Solr collection. By "snapshot" I mean configuring the
> underlying Lucene IndexDeletionPolicy to not delete a specific commit point
> (e.g. using PersistentSnapshotIndexDeletionPolicy). This should not be
> confused with SOLR-5340, which implements core-level "backup" functionality.
> The primary motivation of this feature is to decouple recording/preserving a
> known consistent state of a collection from actually "copying" the relevant
> files to a physically separate location. This decoupling has a number of
> advantages:
> - We can use specialized data-copying tools for transferring Solr index
> files. e.g. in a Hadoop environment, typically the
> [distcp|https://hadoop.apache.org/docs/r1.2.1/distcp2.html] tool is used to
> copy files from one location to another. This tool provides various options to
> configure the degree of parallelism and bandwidth usage, as well as integration
> with different types and versions of file systems (e.g. AWS S3, Azure Blob
> store, etc.)
> - This separation of concerns would also help Solr focus on its key
> functionality (i.e. querying and indexing) while delegating the copy
> operation to tools built for that purpose.
> - Users can decide if/when to copy the data files, as opposed to just creating
> a snapshot. e.g. a user may want to create a snapshot of a collection before
> making an experimental change (e.g. updating/deleting docs, a schema change,
> etc.). If the experiment is successful, they can delete the snapshot (without
> having to copy the files). If the experiment fails, they can copy the
> files associated with the snapshot and restore.
> Note that the Apache Blur project also provides a similar feature
> [BLUR-132|https://issues.apache.org/jira/browse/BLUR-132]
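[Editor's note] The decoupling described above — a snapshot records a commit point instead of copying files — can be sketched with a toy registry. This is only a model of the idea, not Solr code: the real implementation sits on Lucene's IndexDeletionPolicy (e.g. PersistentSnapshotIndexDeletionPolicy), and every name below (SnapshotRegistry, isCommitProtected, ...) is illustrative.

```java
import java.util.*;

// Toy model of "named snapshots as protected commit points": creating a
// snapshot records a commit generation under a name; the deletion policy may
// purge any commit that no snapshot references. No files are ever copied.
class SnapshotRegistry {
    private final Map<String, Long> snapshots = new HashMap<>();

    void createSnapshot(String name, long commitGeneration) {
        snapshots.put(name, commitGeneration);
    }

    void deleteSnapshot(String name) {
        snapshots.remove(name);
    }

    Set<String> listSnapshots() {
        return Collections.unmodifiableSet(snapshots.keySet());
    }

    // The deletion policy would consult this before purging an old commit.
    boolean isCommitProtected(long commitGeneration) {
        return snapshots.containsValue(commitGeneration);
    }

    public static void main(String[] args) {
        SnapshotRegistry reg = new SnapshotRegistry();
        reg.createSnapshot("before-experiment", 42L);
        // Commit 42 is kept alive by reference; older commits may be purged.
        System.out.println(reg.isCommitProtected(42L));
        System.out.println(reg.isCommitProtected(41L));
        reg.deleteSnapshot("before-experiment"); // experiment succeeded
        System.out.println(reg.listSnapshots());
    }
}
```

The point of the model: "delete snapshot" is a metadata operation (remove one map entry), while an actual backup is the separate, optional step of copying the files the protected commit references — exactly the separation the issue argues for.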
[jira] [Updated] (SOLR-9269) Ability to create/delete/list snapshots for a solr core
[ https://issues.apache.org/jira/browse/SOLR-9269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hrishikesh Gadre updated SOLR-9269:
-----------------------------------

    Attachment: SOLR-9269.patch

[~dsmiley] Please find the patch attached. It addresses all the review comments posted on github except one.

https://github.com/hgadre/lucene-solr/commit/1ab2b5022a2ed970e0bad733a4bdb284bb7a0830#commitcomment-18007499

Any thoughts here?

> Ability to create/delete/list snapshots for a solr core
> -------------------------------------------------------
>
>                 Key: SOLR-9269
>                 URL: https://issues.apache.org/jira/browse/SOLR-9269
>             Project: Solr
>          Issue Type: Sub-task
>          Components: SolrCloud
>            Reporter: Hrishikesh Gadre
>         Attachments: SOLR-9269.patch
>
> Support snapshot create/delete/list functionality @ the Solr core level.
> Please refer to parent JIRA for more details.
[jira] [Created] (SOLR-9269) Ability to create/delete/list snapshots for a solr core
Hrishikesh Gadre created SOLR-9269:
--------------------------------------

             Summary: Ability to create/delete/list snapshots for a solr core
                 Key: SOLR-9269
                 URL: https://issues.apache.org/jira/browse/SOLR-9269
             Project: Solr
          Issue Type: Sub-task
            Reporter: Hrishikesh Gadre

Support snapshot create/delete/list functionality @ the Solr core level.
Please refer to parent JIRA for more details.
[jira] [Comment Edited] (SOLR-9185) Solr's "Lucene"/standard query parser should not split on whitespace before sending terms to analysis
[ https://issues.apache.org/jira/browse/SOLR-9185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15358208#comment-15358208 ] Steve Rowe edited comment on SOLR-9185 at 7/1/16 1:44 AM:
--
New patch; switches back to ignoring whitespace (along with comments). Added a new LuceneQParser param {{sow}} (*S*plit *O*n *W*hitespace) to control whether to split on whitespace. Defaults to {{SolrQueryParser.DEFAULT_SPLIT_ON_WHITESPACE}} (true). All Solr core tests pass (with the existing split-on-whitespace behavior preserved as the default), and I've added a couple of basic multi-word synonym tests. Needs more tests to ensure multi-word analysis is properly interrupted in the presence of operators.

> Solr's "Lucene"/standard query parser should not split on whitespace before
> sending terms to analysis
> -
>
> Key: SOLR-9185
> URL: https://issues.apache.org/jira/browse/SOLR-9185
> Project: Solr
> Issue Type: Bug
> Reporter: Steve Rowe
> Assignee: Steve Rowe
> Attachments: SOLR-9185.patch, SOLR-9185.patch, SOLR-9185.patch
>
> Copied from LUCENE-2605:
> The query parser parses input on whitespace and sends each whitespace-separated term to its own independent token stream.
> This breaks the following at query time, because they can't see across whitespace boundaries:
> * n-gram analysis
> * shingles
> * synonyms (especially multi-word for whitespace-separated languages)
> * languages where a 'word' can contain whitespace (e.g. Vietnamese)
> It's also rather unexpected, as users think their charfilters/tokenizers/tokenfilters will do the same thing at index and query time, but in many cases they can't. Instead, preferably the query parser would parse around only real 'operators'.
--
This message was sent by Atlassian JIRA (v6.3.4#6332)
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
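To make the failure mode concrete: when each whitespace-separated chunk is sent to its own token stream, a multi-word synonym rule can never see both words at once. A minimal Python sketch of the two behaviors (a toy analyzer for illustration, not Lucene code):

```python
# Toy multi-word synonym rule: "sea biscuit" -> "seabiscuit".
SYNONYMS = {("sea", "biscuit"): "seabiscuit"}

def analyze(tokens):
    """Apply multi-word synonyms across a token stream."""
    out, i = [], 0
    while i < len(tokens):
        pair = tuple(tokens[i:i + 2])
        if pair in SYNONYMS:
            out.append(SYNONYMS[pair])
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

def parse_splitting_first(query):
    # Legacy behavior: split on whitespace, then analyze each chunk
    # in isolation -- the two-word synonym key can never match.
    result = []
    for chunk in query.split():
        result.extend(analyze([chunk]))
    return result

def parse_whole_string(query):
    # Behavior enabled by SOLR-9185/LUCENE-2605: the whole string
    # reaches analysis, so the synonym rule can fire.
    return analyze(query.split())

print(parse_splitting_first("sea biscuit race"))  # ['sea', 'biscuit', 'race']
print(parse_whole_string("sea biscuit race"))     # ['seabiscuit', 'race']
```

The {{sow}} parameter added in the patch chooses between exactly these two modes, with splitting (the first function) remaining the default.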
[jira] [Updated] (SOLR-9185) Solr's "Lucene"/standard query parser should not split on whitespace before sending terms to analysis
[ https://issues.apache.org/jira/browse/SOLR-9185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Rowe updated SOLR-9185:
-
Attachment: SOLR-9185.patch

New patch; switches back to ignoring whitespace (along with comments). Added a new LuceneQParser param {{sow}} (*S*plit *O*n *W*hitespace) to control whether to split on whitespace. Defaults to {{SolrQueryParser.DEFAULT_SPLIT_ON_WHITESPACE}} (true). All Solr core tests pass (with the existing split-on-whitespace behavior preserved as the default), and I've added a couple of basic multi-word synonym tests. Needs more tests to ensure multi-word analysis is properly interrupted in the presence of operators.

> Solr's "Lucene"/standard query parser should not split on whitespace before
> sending terms to analysis
> -
>
> Key: SOLR-9185
> URL: https://issues.apache.org/jira/browse/SOLR-9185
> Project: Solr
> Issue Type: Bug
> Reporter: Steve Rowe
> Assignee: Steve Rowe
> Attachments: SOLR-9185.patch, SOLR-9185.patch, SOLR-9185.patch
>
> Copied from LUCENE-2605:
> The query parser parses input on whitespace and sends each whitespace-separated term to its own independent token stream.
> This breaks the following at query time, because they can't see across whitespace boundaries:
> * n-gram analysis
> * shingles
> * synonyms (especially multi-word for whitespace-separated languages)
> * languages where a 'word' can contain whitespace (e.g. Vietnamese)
> It's also rather unexpected, as users think their charfilters/tokenizers/tokenfilters will do the same thing at index and query time, but in many cases they can't. Instead, preferably the query parser would parse around only real 'operators'.
--
This message was sent by Atlassian JIRA (v6.3.4#6332)
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-2605) queryparser parses on whitespace
[ https://issues.apache.org/jira/browse/LUCENE-2605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Rowe updated LUCENE-2605:
---
Attachment: LUCENE-2605.patch

Okay, really final patch. On SOLR-9185 I was having trouble integrating the Solr standard QP's comment support with the whitespace tokenization I introduced here, so I tried switching the Solr parser back to ignoring both whitespace and comments, and it worked. The patch brings this grammar simplification back here too: in addition to many fewer whitespace mentions in the rules, fewer (and less complicated) lookaheads are required. I've included the generated files in the patch. No tests changed from the last patch. All Lucene tests pass, and precommit passes.

> queryparser parses on whitespace
>
> Key: LUCENE-2605
> URL: https://issues.apache.org/jira/browse/LUCENE-2605
> Project: Lucene - Core
> Issue Type: Bug
> Components: core/queryparser
> Reporter: Robert Muir
> Assignee: Steve Rowe
> Attachments: LUCENE-2605.patch, LUCENE-2605.patch, LUCENE-2605.patch, LUCENE-2605.patch, LUCENE-2605.patch, LUCENE-2605.patch
>
> The query parser parses input on whitespace and sends each whitespace-separated term to its own independent token stream.
> This breaks the following at query time, because they can't see across whitespace boundaries:
> * n-gram analysis
> * shingles
> * synonyms (especially multi-word for whitespace-separated languages)
> * languages where a 'word' can contain whitespace (e.g. Vietnamese)
> It's also rather unexpected, as users think their charfilters/tokenizers/tokenfilters will do the same thing at index and query time, but in many cases they can't. Instead, preferably the query parser would parse around only real 'operators'.
--
This message was sent by Atlassian JIRA (v6.3.4#6332)
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+124) - Build # 17110 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17110/ Java: 64bit/jdk-9-ea+124 -XX:-UseCompressedOops -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.schema.TestManagedSchemaAPI.test Error Message: Error from server at http://127.0.0.1:46724/solr/testschemaapi_shard1_replica1: ERROR: [doc=2] unknown field 'myNewField1' Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at http://127.0.0.1:46724/solr/testschemaapi_shard1_replica1: ERROR: [doc=2] unknown field 'myNewField1' at __randomizedtesting.SeedInfo.seed([8AEA38A7121D556A:2BE077DBCE13892]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:697) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1109) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:998) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:934) at org.apache.solr.schema.TestManagedSchemaAPI.testAddFieldAndDocument(TestManagedSchemaAPI.java:86) at org.apache.solr.schema.TestManagedSchemaAPI.test(TestManagedSchemaAPI.java:55) at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native Method) at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62) at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:533) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at
[jira] [Commented] (LUCENE-7287) New lemmatizer plugin for Ukrainian language.
[ https://issues.apache.org/jira/browse/LUCENE-7287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15358188#comment-15358188 ] Andriy Rysin commented on LUCENE-7287:
--
Hey [~mikemccand], can we please merge the pull request above? That should wrap up the dictionary-based analyzer for Ukrainian. Thanks!

> New lemmatizer plugin for Ukrainian language.
> --
>
> Key: LUCENE-7287
> URL: https://issues.apache.org/jira/browse/LUCENE-7287
> Project: Lucene - Core
> Issue Type: New Feature
> Components: modules/analysis
> Reporter: Dmytro Hambal
> Priority: Minor
> Labels: analysis, language, plugin
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7287.patch, Screen Shot 2016-06-23 at 8.23.01 PM.png, Screen Shot 2016-06-23 at 8.41.28 PM.png
>
> Hi all,
> I wonder whether you are interested in supporting a plugin which provides a mapping between Ukrainian word forms and their lemmas. Some tests and docs go out-of-the-box =) .
> https://github.com/mrgambal/elasticsearch-ukrainian-lemmatizer
> It's really simple but still works and generates some value for its users.
> More: https://github.com/elastic/elasticsearch/issues/18303
--
This message was sent by Atlassian JIRA (v6.3.4#6332)
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+124) - Build # 1014 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1014/ Java: 32bit/jdk-9-ea+124 -client -XX:+UseG1GC 1 tests failed. FAILED: org.apache.solr.handler.TestReqParamsAPI.test Error Message: Could not get expected value 'CY val' for path 'response/params/y/c' full output: { "responseHeader":{ "status":0, "QTime":0}, "response":{ "znodeVersion":0, "params":{"x":{ "a":"A val", "b":"B val", "":{"v":0}, from server: https://127.0.0.1:44171/cvbr/py/collection1 Stack Trace: java.lang.AssertionError: Could not get expected value 'CY val' for path 'response/params/y/c' full output: { "responseHeader":{ "status":0, "QTime":0}, "response":{ "znodeVersion":0, "params":{"x":{ "a":"A val", "b":"B val", "":{"v":0}, from server: https://127.0.0.1:44171/cvbr/py/collection1 at __randomizedtesting.SeedInfo.seed([37C0A182702A39BF:BF949E58DED65447]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:481) at org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:159) at org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:61) at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native Method) at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62) at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:533) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (SOLR-9236) AutoAddReplicas feature with one replica loses some documents not committed during failover
[ https://issues.apache.org/jira/browse/SOLR-9236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15358090#comment-15358090 ] Eungsop Yoo commented on SOLR-9236:
---
LGTM

> AutoAddReplicas feature with one replica loses some documents not committed during failover
> ---
>
> Key: SOLR-9236
> URL: https://issues.apache.org/jira/browse/SOLR-9236
> Project: Solr
> Issue Type: Bug
> Components: hdfs, SolrCloud
> Reporter: Eungsop Yoo
> Assignee: Mark Miller
> Priority: Minor
> Attachments: SOLR-9236.patch, SOLR-9236.patch
>
> I need to index a huge amount of logs, so I decided to use the AutoAddReplicas feature with only one replica.
> When using AutoAddReplicas with one replica, some benefits are expected:
> - no redundant data files for replicas
> -- saving disk usage
> - best indexing performance
> I expected that Solr would fail over just like HBase.
> The feature worked almost as expected, except for some missing documents during failover.
> I found two reasons for the missing documents.
> 1. The leader replica does not replay any transaction logs. But when there is only one replica, it should be the leader.
> So I made the leader replica replay the transaction logs when using AutoAddReplicas with one replica.
> But the above fix did not resolve the problem.
> 2. As failover occurred, the transaction log directory got a deeper directory depth each time, like this: tlog/tlog/tlog/...
> The transaction log could not be replayed, because the transaction log directory was changed during failover.
> So I made the transaction log directory unchanged during failover.
> After these fixes, AutoAddReplicas with one replica fails over well without losing any documents.
--
This message was sent by Atlassian JIRA (v6.3.4#6332)
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
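The tlog/tlog/tlog/... nesting described above comes from re-appending the log directory name to a path that already ends in it; a guard at path-resolution time prevents that. A hedged Python sketch of the idea (the directory name and function here are hypothetical illustrations, not the actual patch):

```python
import os

TLOG_DIR_NAME = "tlog"  # hypothetical constant for illustration

def resolve_tlog_dir(data_dir):
    """Return the transaction-log directory for a core without nesting
    another 'tlog' level when data_dir already points at one."""
    normalized = os.path.normpath(data_dir)
    if os.path.basename(normalized) == TLOG_DIR_NAME:
        # Already the tlog dir; keep it stable across failover.
        return normalized
    return os.path.join(normalized, TLOG_DIR_NAME)

print(resolve_tlog_dir("/solr/core1/data"))       # /solr/core1/data/tlog
print(resolve_tlog_dir("/solr/core1/data/tlog"))  # /solr/core1/data/tlog
```

Because the function is idempotent, calling it again on its own output during a second failover cannot deepen the path.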
[JENKINS] Lucene-Solr-Tests-6.x - Build # 302 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/302/ All tests passed Build Log: [...truncated 63503 lines...] -ecj-javadoc-lint-tests: [mkdir] Created dir: /tmp/ecj1406011900 [ecj-lint] Compiling 671 source files to /tmp/ecj1406011900 [ecj-lint] invalid Class-Path header in manifest of jar file: /x1/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar [ecj-lint] invalid Class-Path header in manifest of jar file: /x1/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar [ecj-lint] -- [ecj-lint] 1. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/test/org/apache/solr/cloud/OverseerTest.java (at line 267) [ecj-lint] ZkStateReader reader = new ZkStateReader(zkClient); [ecj-lint] ^^ [ecj-lint] Resource leak: 'reader' is never closed [ecj-lint] -- [ecj-lint] 2. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/test/org/apache/solr/cloud/OverseerTest.java (at line 322) [ecj-lint] ZkStateReader reader = new ZkStateReader(zkClient); [ecj-lint] ^^ [ecj-lint] Resource leak: 'reader' is never closed [ecj-lint] -- [ecj-lint] -- [ecj-lint] 3. ERROR in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/test/org/apache/solr/cloud/TestAuthenticationFramework.java (at line 46) [ecj-lint] import org.apache.solr.client.solrj.impl.HttpClientUtil; [ecj-lint] [ecj-lint] The import org.apache.solr.client.solrj.impl.HttpClientUtil is never used [ecj-lint] -- [ecj-lint] -- [ecj-lint] 4. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/test/org/apache/solr/core/HdfsDirectoryFactoryTest.java (at line 146) [ecj-lint] HdfsDirectoryFactory hdfsFactory = new HdfsDirectoryFactory(); [ecj-lint] ^^^ [ecj-lint] Resource leak: 'hdfsFactory' is never closed [ecj-lint] -- [ecj-lint] -- [ecj-lint] 5. 
WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/test/org/apache/solr/handler/admin/SecurityConfHandlerTest.java (at line 53) [ecj-lint] BasicAuthPlugin basicAuth = new BasicAuthPlugin(); [ecj-lint] ^ [ecj-lint] Resource leak: 'basicAuth' is never closed [ecj-lint] -- [ecj-lint] -- [ecj-lint] 6. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/test/org/apache/solr/handler/component/DistributedDebugComponentTest.java (at line 163) [ecj-lint] SolrClient client = random().nextBoolean() ? collection1 : collection2; [ecj-lint]^^ [ecj-lint] Resource leak: 'client' is never closed [ecj-lint] -- [ecj-lint] 7. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/test/org/apache/solr/handler/component/DistributedDebugComponentTest.java (at line 221) [ecj-lint] throw new AssertionError(q.toString() + ": " + e.getMessage(), e); [ecj-lint] ^^ [ecj-lint] Resource leak: 'client' is not closed at this location [ecj-lint] -- [ecj-lint] -- [ecj-lint] 8. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/test/org/apache/solr/highlight/HighlighterTest.java (at line 185) [ecj-lint] Analyzer a1 = new WhitespaceAnalyzer(); [ecj-lint] ^^ [ecj-lint] Resource leak: 'a1' is never closed [ecj-lint] -- [ecj-lint] 9. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/test/org/apache/solr/highlight/HighlighterTest.java (at line 188) [ecj-lint] OffsetWindowTokenFilter tots = new OffsetWindowTokenFilter(tokenStream); [ecj-lint] [ecj-lint] Resource leak: 'tots' is never closed [ecj-lint] -- [ecj-lint] 10. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/test/org/apache/solr/highlight/HighlighterTest.java (at line 192) [ecj-lint] Analyzer a2 = new WhitespaceAnalyzer(); [ecj-lint] ^^ [ecj-lint] Resource leak: 'a2' is never closed [ecj-lint] -- [ecj-lint] -- [ecj-lint] 11. 
WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/test/org/apache/solr/search/TestDocSet.java (at line 241) [ecj-lint] return loadfactor!=0 ? new HashDocSet(a,0,n,1/loadfactor) : new HashDocSet(a,0,n); [ecj-lint]^^ [ecj-lint] Resource leak: '' is never closed [ecj-lint] -- [ecj-lint] 12. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/test/org/apache/solr/search/TestDocSet.java (at line 531) [ecj-lint]
[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 233 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/233/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC All tests passed Build Log: [...truncated 63560 lines...] BUILD FAILED /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/build.xml:740: The following error occurred while executing this line: /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/build.xml:101: The following error occurred while executing this line: /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/build.xml:652: The following error occurred while executing this line: /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/lucene/common-build.xml:1982: The following error occurred while executing this line: /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/lucene/common-build.xml:2015: Compile failed; see the compiler error output for details. Total time: 88 minutes 10 seconds Build step 'Invoke Ant' marked build as failure Archiving artifacts [WARNINGS] Skipping publisher since build result is FAILURE Recording test results Email was triggered for: Failure - Any Sending email for trigger: Failure - Any - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Closed] (SOLR-9172) Refactor/rename some methods in stream.expr.StreamFactory
[ https://issues.apache.org/jira/browse/SOLR-9172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dennis Gove closed SOLR-9172.
-
Resolution: Won't Fix

I'm killing this. It led me down a rabbit hole that just kept going deeper and deeper, and the benefits just don't outweigh the cost of completing it.

> Refactor/rename some methods in stream.expr.StreamFactory
> -
>
> Key: SOLR-9172
> URL: https://issues.apache.org/jira/browse/SOLR-9172
> Project: Solr
> Issue Type: Improvement
> Components: SolrJ
> Reporter: Dennis Gove
> Attachments: SOLR-9172.patch, SOLR-9172.patch
>
> Refactors a bunch of methods in StreamFactory to make them clearer and easier to use. Also adds documentation for public methods.
--
This message was sent by Atlassian JIRA (v6.3.4#6332)
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9076) Update to Hadoop 2.7.2
[ https://issues.apache.org/jira/browse/SOLR-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15357851#comment-15357851 ] Mark Miller commented on SOLR-9076: --- Bah, no such luck with using existing Rome and Bouncy Castle dependencies - it requires the versions I have above rather than the ones we have. > Update to Hadoop 2.7.2 > -- > > Key: SOLR-9076 > URL: https://issues.apache.org/jira/browse/SOLR-9076 > Project: Solr > Issue Type: Improvement >Reporter: Mark Miller >Assignee: Mark Miller > Fix For: 6.2, master (7.0) > > Attachments: SOLR-9076-Fix-dependencies.patch, SOLR-9076-Hack.patch, > SOLR-9076-fixnetty.patch, SOLR-9076.patch, SOLR-9076.patch, SOLR-9076.patch, > SOLR-9076.patch, SOLR-9076.patch, SOLR-9076.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9163) Confusing solrconfig.xml in the downloaded solr*.zip
[ https://issues.apache.org/jira/browse/SOLR-9163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15357801#comment-15357801 ] Yonik Seeley commented on SOLR-9163:
-
I just ran into some of this craziness myself... I would have expected the differences between basic_configs and data_driven_schema_configs to only be what is necessary for "schemaless". It seems like, to the degree possible, those configs should be identical.
- the only difference in the schema should perhaps be the "copyField *" that's in the schemaless one? I don't like that copyField myself, but at least it's limited to the schemaless config.
- the only difference in the solrconfig should be if the add-unknown-fields-to-the-schema update processor is enabled or not (i.e. it should be defined in both).
Everything else should be the same? Is there a way to use params.json or anything else to further confine the differences? Once we have sync'd these configsets they should be kept in sync.

> Confusing solrconfig.xml in the downloaded solr*.zip
>
> Key: SOLR-9163
> URL: https://issues.apache.org/jira/browse/SOLR-9163
> Project: Solr
> Issue Type: Bug
> Reporter: Sachin Goyal
>
> Here are the solrconfig.xml files when I download and unzip Solr:
> {code}
> find . -name solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/db/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/mail/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/rss/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/solr/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/tika/conf/solrconfig.xml
> ./solr-5.5.1/example/files/conf/solrconfig.xml
> ./solr-5.5.1/server/solr/configsets/basic_configs/conf/solrconfig.xml
> ./solr-5.5.1/server/solr/configsets/data_driven_schema_configs/conf/solrconfig.xml
> ./solr-5.5.1/server/solr/configsets/sample_techproducts_configs/conf/solrconfig.xml
> {code}
> Most likely, the ones I want to use are in server/solr/configsets, I assume.
> But then which ones among those three?
> Searching online does not provide much detailed information.
> And diff-ing among them yields even more confusing results.
> Example: When I diff basic_configs/conf/solrconfig.xml with data_driven_schema_configs/conf/solrconfig.xml, I am not sure why the latter has these extra constructs:
> # solr.LimitTokenCountFilterFactory and all the comments around it.
> # deletionPolicy class="solr.SolrDeletionPolicy"
> # Commented out infoStream file="INFOSTREAM.txt"
> # Extra comments for "Update Related Event Listeners"
> # indexReaderFactory
> # And so for lots of other constructs and comments.
> The point is that it is difficult to find out exactly what extra features in the latter are making it data-driven. Hence it is difficult to know what features I am losing by not taking the data-driven schema.
> It would be good to sync the above 3 files together (each file should have the same comments and differ only in the configuration which makes them different). Also, some good documentation should be put online about them; otherwise it is very confusing for non-committers and vanilla users.
--
This message was sent by Atlassian JIRA (v6.3.4#6332)
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
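One way to audit exactly what separates the configsets, per the suggestion above, is a unified diff restricted to real differences. A small Python sketch using difflib, with illustrative stand-in file contents (not the real solrconfig.xml files):

```python
import difflib

# Illustrative stand-ins for the two configsets' solrconfig.xml.
basic = """<config>
  <luceneMatchVersion>6.1.0</luceneMatchVersion>
</config>
""".splitlines(keepends=True)

data_driven = """<config>
  <luceneMatchVersion>6.1.0</luceneMatchVersion>
  <updateRequestProcessorChain name="add-unknown-fields-to-the-schema"/>
</config>
""".splitlines(keepends=True)

diff = list(difflib.unified_diff(
    basic, data_driven,
    fromfile="basic_configs/conf/solrconfig.xml",
    tofile="data_driven_schema_configs/conf/solrconfig.xml"))
print("".join(diff), end="")
```

If the two files were kept otherwise identical, as proposed above, such a diff would shrink to exactly the schemaless machinery.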
[jira] [Commented] (SOLR-9242) Collection level backup/restore should provide a param for specifying the repository implementation it should use
[ https://issues.apache.org/jira/browse/SOLR-9242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15357790#comment-15357790 ] Hrishikesh Gadre commented on SOLR-9242: [~varunthacker] Filed SOLR-9268 to track API support for solr.xml configuration. > Collection level backup/restore should provide a param for specifying the > repository implementation it should use > - > > Key: SOLR-9242 > URL: https://issues.apache.org/jira/browse/SOLR-9242 > Project: Solr > Issue Type: Improvement >Reporter: Hrishikesh Gadre >Assignee: Varun Thacker > Attachments: SOLR-9242.patch, SOLR-9242.patch > > > SOLR-7374 provides BackupRepository interface to enable storing Solr index > data to a configured file-system (e.g. HDFS, local file-system etc.). This > JIRA is to track the work required to extend this functionality at the > collection level. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-9268) Support updating configuration in solr.xml via API
Hrishikesh Gadre created SOLR-9268: -- Summary: Support updating configuration in solr.xml via API Key: SOLR-9268 URL: https://issues.apache.org/jira/browse/SOLR-9268 Project: Solr Issue Type: New Feature Security Level: Public (Default Security Level. Issues are Public) Reporter: Hrishikesh Gadre Currently users need to manually modify solr.xml in Zookeeper to update the configuration parameters (and restart Solr cluster). This is not quite user friendly. We should provide an API to update this configuration. (This came up during the discussions in SOLR-9242). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-9242) Collection level backup/restore should provide a param for specifying the repository implementation it should use
[ https://issues.apache.org/jira/browse/SOLR-9242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hrishikesh Gadre updated SOLR-9242: --- Attachment: SOLR-9242.patch [~varunthacker] Thanks for the feedback. I have incorporated these review comments in this patch. Please take a look and let me know. > Collection level backup/restore should provide a param for specifying the > repository implementation it should use > - > > Key: SOLR-9242 > URL: https://issues.apache.org/jira/browse/SOLR-9242 > Project: Solr > Issue Type: Improvement >Reporter: Hrishikesh Gadre >Assignee: Varun Thacker > Attachments: SOLR-9242.patch, SOLR-9242.patch > > > SOLR-7374 provides BackupRepository interface to enable storing Solr index > data to a configured file-system (e.g. HDFS, local file-system etc.). This > JIRA is to track the work required to extend this functionality at the > collection level. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
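For illustration, a collection-level backup request that names its repository might look like the following; the parameter names here are assumptions based on this thread's discussion, not the final API:

```python
from urllib.parse import urlencode

# Hypothetical request parameters for a collection backup that selects
# a BackupRepository implementation explicitly.
params = {
    "action": "BACKUP",
    "name": "mybackup",
    "collection": "collection1",
    "location": "/backups",
    "repository": "hdfs",  # assumed param choosing the repository impl
}
url = "http://localhost:8983/solr/admin/collections?" + urlencode(params)
print(url)
```

Omitting the assumed {{repository}} param would then fall back to whatever default repository is configured (e.g. via solr.xml, per the SOLR-9268 discussion).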
[JENKINS] Lucene-Solr-Tests-6.x - Build # 301 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/301/ All tests passed Build Log: [...truncated 63627 lines...] BUILD FAILED /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/build.xml:740: The following error occurred while executing this line: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/build.xml:101: The following error occurred while executing this line: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/build.xml:652: The following error occurred while executing this line: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/lucene/common-build.xml:1982: The following error occurred while executing this line: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/lucene/common-build.xml:2015: Compile failed; see the compiler error output for details. Total time: 76 minutes 19 seconds Build step 'Invoke Ant' marked build as failure Archiving artifacts Recording test results Email was triggered for: Failure - Any Sending email for trigger: Failure - Any - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 243 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/243/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC All tests passed Build Log: [...truncated 63571 lines...] BUILD FAILED /Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/build.xml:740: The following error occurred while executing this line: /Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/build.xml:101: The following error occurred while executing this line: /Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/build.xml:652: The following error occurred while executing this line: /Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/lucene/common-build.xml:1982: The following error occurred while executing this line: /Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/lucene/common-build.xml:2015: Compile failed; see the compiler error output for details. Total time: 109 minutes 34 seconds Build step 'Invoke Ant' marked build as failure Archiving artifacts [WARNINGS] Skipping publisher since build result is FAILURE Recording test results Email was triggered for: Failure - Any Sending email for trigger: Failure - Any - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-9267) Cloud MLT field boost not working
[ https://issues.apache.org/jira/browse/SOLR-9267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brian Feldman updated SOLR-9267: Affects Version/s: 5.0 5.1 5.2 5.2.1 5.3 5.3.1 5.3.2 5.4 5.4.1 6.0 6.0.1 6.1 > Cloud MLT field boost not working > - > > Key: SOLR-9267 > URL: https://issues.apache.org/jira/browse/SOLR-9267 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: MoreLikeThis >Affects Versions: 5.0, 5.1, 5.2, 5.2.1, 5.3, 5.3.1, 5.3.2, 5.4, 5.4.1, > 5.5, 5.5.1, 5.5.2, 6.0, 6.0.1, 6.1 >Reporter: Brian Feldman > > When boosting by field "fieldname otherFieldName^4.0" the boost is not > stripped from the field name when adding to fieldNames ArrayList. So on line > 133 of CloudMLTQParser when adding field content to the filteredDocument the > field is not found (incorrectly trying to find 'otherFieldName^4.0'). > The easiest but perhaps hackiest solution is to overwrite qf: > {code} > if (localParams.get("boost") != null) { > mlt.setBoost(localParams.getBool("boost")); > boostFields = SolrPluginUtils.parseFieldBoosts(qf); > qf = boostFields.keySet().toArray(qf); > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
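To illustrate the parsing described in SOLR-9267 above, here is a minimal, hypothetical Java sketch of splitting boosted field specs like "otherFieldName^4.0" into a bare field name plus boost. The class and method names are illustrative only, not Solr's actual CloudMLTQParser code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: split "field^boost" specs so the bare field name
// can be used for document field lookups, as the report suggests.
public class FieldBoostSplitter {
    public static Map<String, Float> parse(String qf) {
        Map<String, Float> boosts = new LinkedHashMap<>();
        for (String spec : qf.trim().split("\\s+")) {
            int caret = spec.indexOf('^');
            if (caret >= 0) {
                boosts.put(spec.substring(0, caret),
                           Float.parseFloat(spec.substring(caret + 1)));
            } else {
                boosts.put(spec, 1.0f); // no explicit boost
            }
        }
        return boosts;
    }

    public static void main(String[] args) {
        // keys are the field names with the ^boost suffix stripped
        System.out.println(parse("fieldname otherFieldName^4.0"));
    }
}
```

With the boost stripped, a lookup for "otherFieldName" in the filtered document would succeed where "otherFieldName^4.0" fails.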
[jira] [Comment Edited] (SOLR-9193) Add scoreNodes Streaming Expression
[ https://issues.apache.org/jira/browse/SOLR-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15357626#comment-15357626 ] Joel Bernstein edited comment on SOLR-9193 at 6/30/16 6:37 PM: --- I'm also planning on making the /terms handler an implicit handler in this ticket. was (Author: joel.bernstein): I'm also planning on making the Terms handler an implicit handler in this ticket. > Add scoreNodes Streaming Expression > --- > > Key: SOLR-9193 > URL: https://issues.apache.org/jira/browse/SOLR-9193 > Project: Solr > Issue Type: New Feature >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Fix For: 6.2 > > Attachments: SOLR-9193.patch > > > The scoreNodes Streaming Expression is another *GraphExpression*. It will > decorate a gatherNodes expression and use a tf-idf scoring algorithm to score > the nodes. > The gatherNodes expression only gathers nodes and aggregations. This is > similar in nature to tf in search ranking, where the number of times a node > appears in the traversal represents the tf. But this skews recommendations > towards nodes that appear frequently in the index. > Using the idf for each node we can score each node as a function of tf and > idf. This will provide a boost to nodes that appear less frequently in the > index. > The scoreNodes expression will gather the idf's from the shards for each node > emitted by the underlying gatherNodes expression. It will then assign the > score to each node. > The computed score will be added to each node in the *nodeScore* field. The > docFreq of the node across the entire collection will be added to each node > in the *nodeFreq* field. Other streaming expressions can then perform a > ranking based on the nodeScore or compute their own score using the nodeFreq. 
> proposed syntax: > {code} > top(n="10", > sort="nodeScore desc", > scoreNodes(gatherNodes(...))) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
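The tf/idf node scoring described in SOLR-9193 above can be sketched roughly as follows. The exact formula used by the patch is not shown in the issue, so the log-based idf here is an assumption for illustration only:

```java
// Illustrative sketch of a tf-idf style node score: tf is the number of
// times a node appeared in the traversal, docFreq is its frequency across
// the collection. The formula is an assumption, not the patch's code.
public class NodeScoreSketch {
    public static double score(int tf, long docFreq, long numDocs) {
        double idf = Math.log(1.0 + (double) numDocs / (double) (docFreq + 1));
        return tf * idf;
    }

    public static void main(String[] args) {
        // a node seen 5 times but rare in the index outranks one seen
        // 5 times that is common in the index
        System.out.println(score(5, 10, 1_000_000));
        System.out.println(score(5, 500_000, 1_000_000));
    }
}
```

This matches the stated goal: boosting nodes that appear less frequently in the index relative to their traversal count.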
[jira] [Comment Edited] (SOLR-9193) Add scoreNodes Streaming Expression
[ https://issues.apache.org/jira/browse/SOLR-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15357624#comment-15357624 ] Joel Bernstein edited comment on SOLR-9193 at 6/30/16 6:38 PM: --- First patch with a working scoreNodes expression. A simple test case is included. This builds on the work to the TermsComponent in SOLR-9243. was (Author: joel.bernstein): First patch with the scoreNodes expression working. A simple testcase is included. This builds on the work to the TermsComponent in SOLR-9243. > Add scoreNodes Streaming Expression > --- > > Key: SOLR-9193 > URL: https://issues.apache.org/jira/browse/SOLR-9193 > Project: Solr > Issue Type: New Feature >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Fix For: 6.2 > > Attachments: SOLR-9193.patch > > > The scoreNodes Streaming Expression is another *GraphExpression*. It will > decorate a gatherNodes expression and use a tf-idf scoring algorithm to score > the nodes. > The gatherNodes expression only gathers nodes and aggregations. This is > similar in nature to tf in search ranking, where the number of times a node > appears in the traversal represents the tf. But this skews recommendations > towards nodes that appear frequently in the index. > Using the idf for each node we can score each node as a function of tf and > idf. This will provide a boost to nodes that appear less frequently in the > index. > The scoreNodes expression will gather the idf's from the shards for each node > emitted by the underlying gatherNodes expression. It will then assign the > score to each node. > The computed score will be added to each node in the *nodeScore* field. The > docFreq of the node across the entire collection will be added to each node > in the *nodeFreq* field. Other streaming expressions can then perform a > ranking based on the nodeScore or compute their own score using the nodeFreq. 
> proposed syntax: > {code} > top(n="10", > sort="nodeScore desc", > scoreNodes(gatherNodes(...))) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9193) Add scoreNodes Streaming Expression
[ https://issues.apache.org/jira/browse/SOLR-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15357626#comment-15357626 ] Joel Bernstein commented on SOLR-9193: -- I'm also planning on making the Terms handler an implicit handler in this ticket. > Add scoreNodes Streaming Expression > --- > > Key: SOLR-9193 > URL: https://issues.apache.org/jira/browse/SOLR-9193 > Project: Solr > Issue Type: New Feature >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Fix For: 6.2 > > Attachments: SOLR-9193.patch > > > The scoreNodes Streaming Expression is another *GraphExpression*. It will > decorate a gatherNodes expression and use a tf-idf scoring algorithm to score > the nodes. > The gatherNodes expression only gathers nodes and aggregations. This is > similar in nature to tf in search ranking, where the number of times a node > appears in the traversal represents the tf. But this skews recommendations > towards nodes that appear frequently in the index. > Using the idf for each node we can score each node as a function of tf and > idf. This will provide a boost to nodes that appear less frequently in the > index. > The scoreNodes expression will gather the idf's from the shards for each node > emitted by the underlying gatherNodes expression. It will then assign the > score to each node. > The computed score will be added to each node in the *nodeScore* field. The > docFreq of the node across the entire collection will be added to each node > in the *nodeFreq* field. Other streaming expressions can then perform a > ranking based on the nodeScore or compute their own score using the nodeFreq. > proposed syntax: > {code} > top(n="10", > sort="nodeScore desc", > scoreNodes(gatherNodes(...))) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-9193) Add scoreNodes Streaming Expression
[ https://issues.apache.org/jira/browse/SOLR-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-9193: - Attachment: SOLR-9193.patch First patch with the scoreNodes expression working. A simple test case is included. This builds on the work to the TermsComponent in SOLR-9243. > Add scoreNodes Streaming Expression > --- > > Key: SOLR-9193 > URL: https://issues.apache.org/jira/browse/SOLR-9193 > Project: Solr > Issue Type: New Feature >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Fix For: 6.2 > > Attachments: SOLR-9193.patch > > > The scoreNodes Streaming Expression is another *GraphExpression*. It will > decorate a gatherNodes expression and use a tf-idf scoring algorithm to score > the nodes. > The gatherNodes expression only gathers nodes and aggregations. This is > similar in nature to tf in search ranking, where the number of times a node > appears in the traversal represents the tf. But this skews recommendations > towards nodes that appear frequently in the index. > Using the idf for each node we can score each node as a function of tf and > idf. This will provide a boost to nodes that appear less frequently in the > index. > The scoreNodes expression will gather the idf's from the shards for each node > emitted by the underlying gatherNodes expression. It will then assign the > score to each node. > The computed score will be added to each node in the *nodeScore* field. The > docFreq of the node across the entire collection will be added to each node > in the *nodeFreq* field. Other streaming expressions can then perform a > ranking based on the nodeScore or compute their own score using the nodeFreq. > proposed syntax: > {code} > top(n="10", > sort="nodeScore desc", > scoreNodes(gatherNodes(...))) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: [jira] [Commented] (SOLR-8522) ImplicitSnitch to support IPv4 fragment tags
Hello Arcadius, Noble, I have a single Solr cluster set up across two DCs with the similar configuration below. I am now looking to use the preferredNodes feature/rule so that search queries executed from a DC1 client use all dc1 replicas and queries from a DC2 client use all dc2 replicas, for faster query response. I am a bit confused by the current documentation about which steps need to be taken on the client and ZooKeeper side. Can you please summarize what needs to be done in the SolrJ client configuration/properties and the ZooKeeper clusterstate (MODIFYCOLLECTION) to make it work? DC1 - 3 dc1 shard replicas and 3 dc2 shard replicas DC2 - 3 dc2 shard replicas and 3 dc1 shard replicas Thanks, Susheel On Fri, Jun 3, 2016 at 2:47 AM, Arcadius Ahouansou (JIRA) wrote: > > [ > https://issues.apache.org/jira/browse/SOLR-8522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313750#comment-15313750 > ] > > Arcadius Ahouansou commented on SOLR-8522: > -- > > Hi [~k317h] > I commented on SOLR-9183 > > > ImplicitSnitch to support IPv4 fragment tags > > > > > > Key: SOLR-8522 > > URL: https://issues.apache.org/jira/browse/SOLR-8522 > > Project: Solr > > Issue Type: Improvement > > Components: SolrCloud > >Affects Versions: 5.4 > >Reporter: Arcadius Ahouansou > >Assignee: Noble Paul > >Priority: Minor > > Fix For: 6.0 > > > > Attachments: SOLR-8522.patch, SOLR-8522.patch, SOLR-8522.patch, > SOLR-8522.patch, SOLR-8522.patch > > > > > > This is a description from [~noble.paul]'s comment on SOLR-8146 > > h3. IPv4 fragment tags > > Lets assume a Solr node IPv4 address is {{192.93.255.255}} . > > This is about enhancing the current {{ImplicitSnitch}} to support IP > based tags like: > > - {{hostfrag_1 = 255}} > > - {{hostfrag_2 = 255}} > > - {{hostfrag_3 = 93}} > > - {{hostfrag_4 = 192}} > > Note that IPv6 support will be implemented by a separate ticket > > h3. Host name fragment tags > > Lets assume a Solr node host name {{serv1.dc1.country1.apache.org}} . 
> > This is about enhancing the current {{ImplicitSnitch}} to support tags > like: > > - {{hostfrag_1 = org}} > > - {{hostfrag_2 = apache}} > > - {{hostfrag_3 = country1}} > > - {{hostfrag_4 = dc1}} > > - {{hostfrag_5 = serv1}} > > > > -- > This message was sent by Atlassian JIRA > (v6.3.4#6332) > > - > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org > For additional commands, e-mail: dev-h...@lucene.apache.org > >
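The fragment-tag scheme in the quoted SOLR-8522 description (fragments numbered from the right, so hostfrag_1 is the last octet or the TLD) can be sketched as follows. HostFragTags is a hypothetical illustration, not the actual ImplicitSnitch code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the fragment-tag scheme from the issue description: dot-separated
// parts numbered from the right. Purely illustrative, not Solr's snitch code.
public class HostFragTags {
    public static Map<String, String> tags(String host) {
        String[] parts = host.split("\\.");
        Map<String, String> tags = new LinkedHashMap<>();
        for (int i = 0; i < parts.length; i++) {
            tags.put("hostfrag_" + (i + 1), parts[parts.length - 1 - i]);
        }
        return tags;
    }

    public static void main(String[] args) {
        // for 192.93.255.255: hostfrag_1 and hostfrag_2 are 255, hostfrag_4 is 192
        System.out.println(tags("192.93.255.255"));
        System.out.println(tags("serv1.dc1.country1.apache.org"));
    }
}
```

Rules such as the preferredNodes setup asked about above could then match on these tags, e.g. a rule keyed on hostfrag_4 for a DC-level fragment.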
[jira] [Commented] (LUCENE-7355) Leverage MultiTermAwareComponent in query parsers
[ https://issues.apache.org/jira/browse/LUCENE-7355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15357613#comment-15357613 ] Robert Muir commented on LUCENE-7355: - I think normalize should call `end` for consistency? It's defined on TokenStream, and it's going to always be called in the ordinary case, so it's strange if "for wildcards" it's not called; I can see bugs from that. > Leverage MultiTermAwareComponent in query parsers > - > > Key: LUCENE-7355 > URL: https://issues.apache.org/jira/browse/LUCENE-7355 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Adrien Grand >Priority: Minor > Attachments: LUCENE-7355.patch, LUCENE-7355.patch, LUCENE-7355.patch > > > MultiTermAwareComponent is designed to make it possible to do the right thing > in query parsers when it comes to analysis of multi-term queries. However, > since query parsers just take an analyzer and since analyzers do not > propagate the information about what to do for multi-term analysis, query > parsers cannot do the right thing out of the box. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-9251) Allow a tag role:!overseer in replica placement rules
[ https://issues.apache.org/jira/browse/SOLR-9251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul updated SOLR-9251: - Attachment: SOLR-9251.patch > Allow a tag role:!overseer in replica placement rules > - > > Key: SOLR-9251 > URL: https://issues.apache.org/jira/browse/SOLR-9251 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Noble Paul >Assignee: Noble Paul > Attachments: SOLR-9251.patch > > > The reason to assign an overseer role to a node is to ensure that the node > is exclusively used as overseer. replica placement should support tag called > {{role}} > So if a collection is created with {{rule=role:!overseer}} no replica should > be created in nodes designated as overseer -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9262) Connection and read timeouts are being ignored by UpdateShardHandler
[ https://issues.apache.org/jira/browse/SOLR-9262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15357608#comment-15357608 ] Mark Miller commented on SOLR-9262: --- Actually, I don't think we want that anymore. Our new retry policy should be universal. I did just look, and it seems it's not on by default, so I do think we want to fix that. > Connection and read timeouts are being ignored by UpdateShardHandler > > > Key: SOLR-9262 > URL: https://issues.apache.org/jira/browse/SOLR-9262 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Affects Versions: master (7.0) >Reporter: Shalin Shekhar Mangar >Assignee: Shalin Shekhar Mangar > Fix For: master (7.0) > > Attachments: SOLR-9262.patch > > > SOLR-4509 removed the usage of distribUpdateSoTimeout and > distribUpdateConnTimeout from UpdateShardHandler causing the http client to > be created with its default values of connection and read timeout. > https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;a=blobdiff;f=solr/core/src/java/org/apache/solr/update/UpdateShardHandler.java;h=4fe869c25c9ea0588903d8d366e8d3533835b601;hp=a44b8f87b766d4f998d534156ceb83f4d42eadbb;hb=ce172ac;hpb=3f217aba6d4422d829be5ad77b02068c130dc7d3 -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7871) Platform independent config file instead of solr.in.sh and solr.in.cmd
[ https://issues.apache.org/jira/browse/SOLR-7871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15357509#comment-15357509 ] Jan Høydahl commented on SOLR-7871: --- I have continued a bit on this one. My plan is to move the following functionality from {{solr.sh|cmd}} into a new {{SolrCliConfig.java}}: * Locate correct config file and location * Parse the config file * Resolve SOLR_PID_DIR, SOLR_TIP, DEFAULT_SERVER_DIR etc * Configure defaults for SOLR_URL_SCHEME, SOLR_SSL_OPTS, SOLR_JETTY_CONFIG etc The shell script will then call SolrCliConfig and get all variables in return. In the first phase, we'll just return a string which the script can evaluate to set all variables. I propose that the new config file will be of a simple "properties" format similar to {{solr.in.sh}} and be called {{solr.conf}}. The new SolrCliConfig will use {{solr.conf}} if found, else fallback to and parse solr.in.\* as well, for a smooth transition. > Platform independent config file instead of solr.in.sh and solr.in.cmd > -- > > Key: SOLR-7871 > URL: https://issues.apache.org/jira/browse/SOLR-7871 > Project: Solr > Issue Type: Improvement > Components: scripts and tools >Affects Versions: 5.2.1 >Reporter: Jan Høydahl >Assignee: Jan Høydahl > Labels: bin/solr > Fix For: 6.0 > > > Spinoff from SOLR-7043 > The config files {{solr.in.sh}} and {{solr.in.cmd}} are currently executable > batch files, but all they do is to set environment variables for the start > scripts on the format {{key=value}} > Suggest to instead have one central platform independent config file e.g. > {{bin/solr.yml}} or {{bin/solrstart.properties}} which is parsed by > {{SolrCLI.java}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
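The first phase of the SOLR-7871 plan above (parse a key=value solr.conf and return variables the shell script can evaluate) could look roughly like this. The class name SolrCliConfig and the solr.conf file name come from the proposal; using java.util.Properties for parsing is an assumption for illustration:

```java
import java.io.IOException;
import java.io.StringReader;
import java.io.UncheckedIOException;
import java.util.Properties;

// Rough sketch of the proposed idea: parse a key=value "solr.conf"
// (same # comment and key=value conventions as solr.in.sh) and emit
// variable assignments a shell script could evaluate. Details assumed.
public class SolrConfSketch {
    public static Properties load(String text) {
        Properties props = new Properties();
        try {
            props.load(new StringReader(text));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return props;
    }

    public static void main(String[] args) {
        Properties p = load("# example solr.conf\nSOLR_PID_DIR=/var/run/solr\nSOLR_URL_SCHEME=http\n");
        // print in a form a shell script could eval, as phase one proposes
        p.forEach((k, v) -> System.out.println(k + "=\"" + v + "\""));
    }
}
```

The fallback to solr.in.* would need a separate shell-style parser, since those files are executable scripts rather than plain properties.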
[jira] [Updated] (SOLR-9236) AutoAddReplicas feature with one replica loses some documents not committed during failover
[ https://issues.apache.org/jira/browse/SOLR-9236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-9236: -- Attachment: SOLR-9236.patch bq. the leader does not replay any transaction logs I think that may actually be a mistake. Here is a patch that uses your test additions and attempts to fix it a bit differently. > AutoAddReplicas feature with one replica loses some documents not committed > during failover > --- > > Key: SOLR-9236 > URL: https://issues.apache.org/jira/browse/SOLR-9236 > Project: Solr > Issue Type: Bug > Components: hdfs, SolrCloud >Reporter: Eungsop Yoo >Assignee: Mark Miller >Priority: Minor > Attachments: SOLR-9236.patch, SOLR-9236.patch > > > I need to index a huge amount of logs, so I decided to use the AutoAddReplicas > feature with only one replica. > When using AutoAddReplicas with one replica, some benefits are expected. > - no redundant data files for replicas > -- saving disk usage > - best indexing performance > I expected that Solr fails over just like HBase. > The feature worked almost as expected, except for some missing > documents during failover. > I found two reasons for the missing documents. > 1. The leader replica does not replay any transaction logs. But when there is > only one replica, it should be the leader. > So I made the leader replica replay the transaction logs when using > AutoAddReplicas with one replica. > But the above fix did not resolve the problem. > 2. As failover occurred, the transaction log directory had a deeper directory > depth. Just like this, tlog/tlog/tlog/... > The transaction log could not be replayed, because the transaction log > directory was changed during failover. > So I made the transaction log directory not change during failover. > After these fixes, AutoAddReplicas with one replica fails over well without > losing any documents. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3375 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3375/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC 1 tests failed. FAILED: org.apache.solr.common.cloud.TestCollectionStateWatchers.testWatchesWorkForStateFormat1 Error Message: CollectionStateWatcher not notified of stateformat=1 collection creation Stack Trace: java.lang.AssertionError: CollectionStateWatcher not notified of stateformat=1 collection creation at __randomizedtesting.SeedInfo.seed([AE69E002E2DC9902:C9AEFCEC1664BDA3]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.common.cloud.TestCollectionStateWatchers.testWatchesWorkForStateFormat1(TestCollectionStateWatchers.java:267) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) Build Log: [...truncated 12915 lines...] [junit4] Suite: org.apache.solr.common.cloud.TestCollectionStateWatchers [junit4] 2> Creating
[jira] [Updated] (LUCENE-7351) BKDWriter should compress doc ids when all values in a block are the same
[ https://issues.apache.org/jira/browse/LUCENE-7351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adrien Grand updated LUCENE-7351: - Attachment: LUCENE-7351.patch Hmm I can remove both actually, they do not bring value now that the detection of whether doc ids are sorted is based on the doc ids themselves rather than the fact that there is a single value in a block. > BKDWriter should compress doc ids when all values in a block are the same > - > > Key: LUCENE-7351 > URL: https://issues.apache.org/jira/browse/LUCENE-7351 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Adrien Grand >Priority: Minor > Attachments: LUCENE-7351.patch, LUCENE-7351.patch, LUCENE-7351.patch > > > BKDWriter writes doc ids using 4 bytes per document. I think it should > compress similarly to postings when all docs in a block have the same packed > value. This can happen either when a field has a default value which is > common across documents or when quantization makes the number of unique > values so small that a large index will necessarily have blocks that all > contain the same value (eg. there are only 63490 unique half-float values). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
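The idea discussed in LUCENE-7351 above (detect whether a block's doc ids are sorted from the ids themselves, and store small deltas instead of 4 bytes per id) can be sketched as below. This is a simplified illustration; the actual BKDWriter encoding differs:

```java
// Simplified sketch: sortedness detection over a block's doc ids, and
// delta encoding of a sorted block. Small deltas compress well with
// variable-length ints; this is illustrative, not Lucene's encoding.
public class DocIdBlockSketch {
    public static boolean sorted(int[] docIds) {
        for (int i = 1; i < docIds.length; i++) {
            if (docIds[i] < docIds[i - 1]) return false;
        }
        return true;
    }

    public static int[] deltas(int[] sortedIds) {
        int[] out = new int[sortedIds.length];
        int prev = 0;
        for (int i = 0; i < sortedIds.length; i++) {
            out[i] = sortedIds[i] - prev; // gap to the previous doc id
            prev = sortedIds[i];
        }
        return out;
    }
}
```

This also shows why basing the decision on the ids rather than on "single value per block" generalizes: any sorted block benefits, whether or not its packed values are all equal.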
[jira] [Resolved] (SOLR-8787) TestAuthenticationFramework should not extend TestMiniSolrCloudCluster
[ https://issues.apache.org/jira/browse/SOLR-8787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shalin Shekhar Mangar resolved SOLR-8787. - Resolution: Fixed Assignee: Shalin Shekhar Mangar Thanks Trey! > TestAuthenticationFramework should not extend TestMiniSolrCloudCluster > -- > > Key: SOLR-8787 > URL: https://issues.apache.org/jira/browse/SOLR-8787 > Project: Solr > Issue Type: Bug > Components: Tests >Reporter: Shalin Shekhar Mangar >Assignee: Shalin Shekhar Mangar >Priority: Minor > Labels: difficulty-easy, newdev > Fix For: 6.2, master (7.0) > > Attachments: SOLR-8787.patch > > > TestAuthenticationFramework should not extend TestMiniSolrCloudCluster. The > TestMiniSolrCloudCluster is actually a test for MiniSolrCloudCluster and not > a generic test framework class. I saw a local failure for > TestAuthenticationFramework.testSegmentTerminateEarly which should never be > executed in the first place. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8787) TestAuthenticationFramework should not extend TestMiniSolrCloudCluster
[ https://issues.apache.org/jira/browse/SOLR-8787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15357476#comment-15357476 ] ASF subversion and git services commented on SOLR-8787: --- Commit 18434526d6ef73373796481ef3ccd637694e3dfe in lucene-solr's branch refs/heads/branch_6x from [~shalinmangar] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1843452 ] SOLR-8787: Shutdown MiniSolrCloudCluster in a finally block (cherry picked from commit 0a15699) > TestAuthenticationFramework should not extend TestMiniSolrCloudCluster > -- > > Key: SOLR-8787 > URL: https://issues.apache.org/jira/browse/SOLR-8787 > Project: Solr > Issue Type: Bug > Components: Tests >Reporter: Shalin Shekhar Mangar >Priority: Minor > Labels: difficulty-easy, newdev > Fix For: 6.2, master (7.0) > > Attachments: SOLR-8787.patch > > > TestAuthenticationFramework should not extend TestMiniSolrCloudCluster. The > TestMiniSolrCloudCluster is actually a test for MiniSolrCloudCluster and not > a generic test framework class. I saw a local failure for > TestAuthenticationFramework.testSegmentTerminateEarly which should never be > executed in the first place. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8787) TestAuthenticationFramework should not extend TestMiniSolrCloudCluster
[ https://issues.apache.org/jira/browse/SOLR-8787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15357475#comment-15357475 ] ASF subversion and git services commented on SOLR-8787: --- Commit 0a15699caa5d7d3a6b72977f90857d0a78a2fd70 in lucene-solr's branch refs/heads/master from [~shalinmangar] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0a15699 ] SOLR-8787: Shutdown MiniSolrCloudCluster in a finally block > TestAuthenticationFramework should not extend TestMiniSolrCloudCluster > -- > > Key: SOLR-8787 > URL: https://issues.apache.org/jira/browse/SOLR-8787 > Project: Solr > Issue Type: Bug > Components: Tests >Reporter: Shalin Shekhar Mangar >Priority: Minor > Labels: difficulty-easy, newdev > Fix For: 6.2, master (7.0) > > Attachments: SOLR-8787.patch > > > TestAuthenticationFramework should not extend TestMiniSolrCloudCluster. The > TestMiniSolrCloudCluster is actually a test for MiniSolrCloudCluster and not > a generic test framework class. I saw a local failure for > TestAuthenticationFramework.testSegmentTerminateEarly which should never be > executed in the first place. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8787) TestAuthenticationFramework should not extend TestMiniSolrCloudCluster
[ https://issues.apache.org/jira/browse/SOLR-8787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15357467#comment-15357467 ] ASF subversion and git services commented on SOLR-8787: --- Commit c7d82e7b38676acffdb867514ebc3344c0b5faa9 in lucene-solr's branch refs/heads/branch_6x from [~shalinmangar] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c7d82e7 ] SOLR-8787: TestAuthenticationFramework should not extend TestMiniSolrCloudCluster (cherry picked from commit 6528dac) > TestAuthenticationFramework should not extend TestMiniSolrCloudCluster > -- > > Key: SOLR-8787 > URL: https://issues.apache.org/jira/browse/SOLR-8787 > Project: Solr > Issue Type: Bug > Components: Tests >Reporter: Shalin Shekhar Mangar >Priority: Minor > Labels: difficulty-easy, newdev > Fix For: 6.2, master (7.0) > > Attachments: SOLR-8787.patch > > > TestAuthenticationFramework should not extend TestMiniSolrCloudCluster. The > TestMiniSolrCloudCluster is actually a test for MiniSolrCloudCluster and not > a generic test framework class. I saw a local failure for > TestAuthenticationFramework.testSegmentTerminateEarly which should never be > executed in the first place. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-7365) Don't use BooleanScorer for small segments
[ https://issues.apache.org/jira/browse/LUCENE-7365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15357459#comment-15357459 ] Adrien Grand commented on LUCENE-7365: -- Yes, exactly. > Don't use BooleanScorer for small segments > -- > > Key: LUCENE-7365 > URL: https://issues.apache.org/jira/browse/LUCENE-7365 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Alan Woodward >Assignee: Alan Woodward > Attachments: LUCENE-7365-query.patch, LUCENE-7365.patch, > LUCENE-7365.patch > > > If a BooleanQuery meets certain criteria (only contains disjunctions, is > likely to match large numbers of docs) then we use a BooleanScorer to score > groups of 1024 docs at a time. This allocates arrays of 1024 Bucket objects > up-front. On very small segments (for example, a MemoryIndex) this is very > wasteful of memory, particularly if the query is large or deeply-nested. We > should avoid using a bulk scorer on these segments. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
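The trade-off described in this issue boils down to a segment-size check. A minimal sketch in plain Java of that heuristic (the class and method names below are illustrative only, not Lucene's actual API; the window size of 1024 matches the Bucket array mentioned above):

```java
// Hypothetical sketch of the heuristic proposed in LUCENE-7365; the class
// and method names are illustrative, not Lucene's actual API.
public class BulkScorerHeuristic {
    // BooleanScorer scores documents in fixed-size windows, allocating
    // one Bucket object per slot up-front.
    static final int WINDOW_SIZE = 1024;

    // Skip the bulk scorer when a segment is smaller than one window:
    // on a tiny segment (e.g. a MemoryIndex) most of the up-front
    // allocation would be wasted, especially for deeply-nested queries.
    static boolean useBulkScorer(int maxDoc) {
        return maxDoc >= WINDOW_SIZE;
    }

    public static void main(String[] args) {
        System.out.println(useBulkScorer(2));       // tiny segment
        System.out.println(useBulkScorer(100_000)); // large segment
    }
}
```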
[jira] [Assigned] (SOLR-9236) AutoAddReplicas feature with one replica loses some documents not committed during failover
[ https://issues.apache.org/jira/browse/SOLR-9236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller reassigned SOLR-9236: - Assignee: Mark Miller > AutoAddReplicas feature with one replica loses some documents not committed > during failover > --- > > Key: SOLR-9236 > URL: https://issues.apache.org/jira/browse/SOLR-9236 > Project: Solr > Issue Type: Bug > Components: hdfs, SolrCloud >Reporter: Eungsop Yoo >Assignee: Mark Miller >Priority: Minor > Attachments: SOLR-9236.patch > > > I need to index huge amount of logs, so I decide to use AutoAddReplica > feature with only one replica. > When using AutoAddReplicas with one replica, some benefits are expected. > - no redundant data files for replicas > -- saving disk usage > - best indexing performance > I expected that Solr fails over just like HBase. > The feature worked almost as it was expected, except for some missing > documents during failover. > I found two regions for the missing. > 1. The leader replica does not replay any transaction logs. But when there is > only one replica, it should be the leader. > So I made the leader replica replay the transaction logs when using > AutoAddReplicas with on replica. > But the above fix did not resolve the problem. > 2. As failover occurred, the transaction log directory had a deeper directory > depth. Just like this, tlog/tlog/tlog/... > The transaction log could not be replayed, because the transaction log > directory was changed during failover. > So I made the transaction log directory not changed during failover. > After these fixes, AutoAddReplicas with one replica fails over well without > losing any documents. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
(Note on the description above: "two regions for the missing" should read "two reasons for the missing documents", and "AutoAddReplicas with on replica" should read "with one replica".)
[jira] [Updated] (SOLR-9076) Update to Hadoop 2.7.2
[ https://issues.apache.org/jira/browse/SOLR-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-9076: -- Attachment: SOLR-9076-Fix-dependencies.patch I think this is the patch we need here. It's slightly different versions than I commented above though, so I won't have full confidence again until SOLR-9073 is resolved, and that has turned out to be quite annoying to solve nicely rather than just via hack. > Update to Hadoop 2.7.2 > -- > > Key: SOLR-9076 > URL: https://issues.apache.org/jira/browse/SOLR-9076 > Project: Solr > Issue Type: Improvement >Reporter: Mark Miller >Assignee: Mark Miller > Fix For: 6.2, master (7.0) > > Attachments: SOLR-9076-Fix-dependencies.patch, SOLR-9076-Hack.patch, > SOLR-9076-fixnetty.patch, SOLR-9076.patch, SOLR-9076.patch, SOLR-9076.patch, > SOLR-9076.patch, SOLR-9076.patch, SOLR-9076.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-SmokeRelease-6.x - Build # 98 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-6.x/98/ No tests ran. Build Log: [...truncated 40566 lines...] prepare-release-no-sign: [mkdir] Created dir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist [copy] Copying 476 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/lucene [copy] Copying 245 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/solr [smoker] Java 1.8 JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8 [smoker] NOTE: output encoding is UTF-8 [smoker] [smoker] Load release URL "file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/"... [smoker] [smoker] Test Lucene... [smoker] test basics... [smoker] get KEYS [smoker] 0.2 MB in 0.01 sec (14.6 MB/sec) [smoker] check changes HTML... [smoker] download lucene-6.2.0-src.tgz... [smoker] 29.8 MB in 0.03 sec (1118.3 MB/sec) [smoker] verify md5/sha1 digests [smoker] download lucene-6.2.0.tgz... [smoker] 64.4 MB in 0.06 sec (1122.4 MB/sec) [smoker] verify md5/sha1 digests [smoker] download lucene-6.2.0.zip... [smoker] 75.0 MB in 0.07 sec (1097.6 MB/sec) [smoker] verify md5/sha1 digests [smoker] unpack lucene-6.2.0.tgz... [smoker] verify JAR metadata/identity/no javax.* or java.* classes... [smoker] test demo with 1.8... [smoker] got 6032 hits for query "lucene" [smoker] checkindex with 1.8... [smoker] check Lucene's javadoc JAR [smoker] unpack lucene-6.2.0.zip... [smoker] verify JAR metadata/identity/no javax.* or java.* classes... [smoker] test demo with 1.8... [smoker] got 6032 hits for query "lucene" [smoker] checkindex with 1.8... [smoker] check Lucene's javadoc JAR [smoker] unpack lucene-6.2.0-src.tgz... [smoker] make sure no JARs/WARs in src dist... [smoker] run "ant validate" [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'... 
[smoker] test demo with 1.8... [smoker] got 224 hits for query "lucene" [smoker] checkindex with 1.8... [smoker] generate javadocs w/ Java 8... [smoker] [smoker] Crawl/parse... [smoker] [smoker] Verify... [smoker] confirm all releases have coverage in TestBackwardsCompatibility [smoker] find all past Lucene releases... [smoker] run TestBackwardsCompatibility.. [smoker] success! [smoker] [smoker] Test Solr... [smoker] test basics... [smoker] get KEYS [smoker] 0.2 MB in 0.00 sec (44.1 MB/sec) [smoker] check changes HTML... [smoker] download solr-6.2.0-src.tgz... [smoker] 39.1 MB in 0.38 sec (102.4 MB/sec) [smoker] verify md5/sha1 digests [smoker] download solr-6.2.0.tgz... [smoker] 137.1 MB in 2.29 sec (59.8 MB/sec) [smoker] verify md5/sha1 digests [smoker] download solr-6.2.0.zip... [smoker] 145.7 MB in 1.70 sec (86.0 MB/sec) [smoker] verify md5/sha1 digests [smoker] unpack solr-6.2.0.tgz... [smoker] verify JAR metadata/identity/no javax.* or java.* classes... [smoker] unpack lucene-6.2.0.tgz... [smoker] **WARNING**: skipping check of /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.2.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar: it has javax.* classes [smoker] **WARNING**: skipping check of /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.2.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar: it has javax.* classes [smoker] copying unpacked distribution for Java 8 ... [smoker] test solr example w/ Java 8... [smoker] start Solr instance (log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.2.0-java8/solr-example.log)... 
[smoker] No process found for Solr node running on port 8983 [smoker] Running techproducts example on port 8983 from /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.2.0-java8 [smoker] Creating Solr home directory /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.2.0-java8/example/techproducts/solr [smoker] [smoker] Starting up Solr on port 8983 using command: [smoker] bin/solr start -p 8983 -s "example/techproducts/solr" [smoker] [smoker] Waiting up to 30 seconds to see Solr running on port 8983 [|] [/] [-] [\] [|] [/] [-] [\] [|] [/] [-]
[jira] [Updated] (SOLR-9267) Cloud MLT field boost not working
[ https://issues.apache.org/jira/browse/SOLR-9267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brian Feldman updated SOLR-9267: Description: When boosting by field "fieldname otherFieldName^4.0" the boost is not stripped from the field name when adding to fieldNames ArrayList. So on line 133 of CloudMLTQParser when adding field content to the filteredDocument the field is not found (incorrectly trying to find 'otherFieldName^4.0'). The easiest but perhaps hackiest solution is to overwrite qf: {code} if (localParams.get("boost") != null) { mlt.setBoost(localParams.getBool("boost")); boostFields = SolrPluginUtils.parseFieldBoosts(qf); qf = boostFields.keySet().toArray(qf); } {code} was: When boosting by field "fieldname otherFieldName^4.0" the boost is not stripped from the field name when adding to fieldNames ArrayList. So on line 133 of CloudMLTQParser when adding field content to the filteredDocument the field is not found (incorrectly trying to find 'otherFieldName^4.0'). The easiest but perhaps hackiest solution is to overwrite qf: if (localParams.get("boost") != null) { mlt.setBoost(localParams.getBool("boost")); boostFields = SolrPluginUtils.parseFieldBoosts(qf); qf = boostFields.keySet().toArray(qf); } > Cloud MLT field boost not working > - > > Key: SOLR-9267 > URL: https://issues.apache.org/jira/browse/SOLR-9267 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: MoreLikeThis >Affects Versions: 5.5, 5.5.1, 5.5.2 >Reporter: Brian Feldman > > When boosting by field "fieldname otherFieldName^4.0" the boost is not > stripped from the field name when adding to fieldNames ArrayList. So on line > 133 of CloudMLTQParser when adding field content to the filteredDocument the > field is not found (incorrectly trying to find 'otherFieldName^4.0'). 
> The easiest but perhaps hackiest solution is to overwrite qf: > {code} > if (localParams.get("boost") != null) { > mlt.setBoost(localParams.getBool("boost")); > boostFields = SolrPluginUtils.parseFieldBoosts(qf); > qf = boostFields.keySet().toArray(qf); > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
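The root cause described above is that the `^boost` suffix survives into the field-name lookup. A self-contained sketch of the kind of parsing the workaround relies on (illustrative only; Solr's real `SolrPluginUtils.parseFieldBoosts` differs in its details):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative re-implementation of boost stripping, showing why a lookup
// for 'otherFieldName^4.0' fails while 'otherFieldName' succeeds.
public class FieldBoostParser {
    static Map<String, Float> parseFieldBoosts(String qf) {
        Map<String, Float> boosts = new LinkedHashMap<>();
        for (String tok : qf.trim().split("\\s+")) {
            int caret = tok.indexOf('^');
            if (caret >= 0) {
                // strip the boost so the bare field name becomes the key
                boosts.put(tok.substring(0, caret),
                           Float.parseFloat(tok.substring(caret + 1)));
            } else {
                boosts.put(tok, null); // no explicit boost given
            }
        }
        return boosts;
    }

    public static void main(String[] args) {
        Map<String, Float> b = parseFieldBoosts("fieldname otherFieldName^4.0");
        System.out.println(b.keySet()); // field names without boost suffixes
    }
}
```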
[jira] [Commented] (LUCENE-7365) Don't use BooleanScorer for small segments
[ https://issues.apache.org/jira/browse/LUCENE-7365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15357368#comment-15357368 ] Alan Woodward commented on LUCENE-7365: --- I'm not sure I understand what you mean by wrapping at the Weight level - do you mean by subclassing IndexSearcher and overriding createNormalizedWeight()? > Don't use BooleanScorer for small segments > -- > > Key: LUCENE-7365 > URL: https://issues.apache.org/jira/browse/LUCENE-7365 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Alan Woodward >Assignee: Alan Woodward > Attachments: LUCENE-7365-query.patch, LUCENE-7365.patch, > LUCENE-7365.patch > > > If a BooleanQuery meets certain criteria (only contains disjunctions, is > likely to match large numbers of docs) then we use a BooleanScorer to score > groups of 1024 docs at a time. This allocates arrays of 1024 Bucket objects > up-front. On very small segments (for example, a MemoryIndex) this is very > wasteful of memory, particularly if the query is large or deeply-nested. We > should avoid using a bulk scorer on these segments. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-7351) BKDWriter should compress doc ids when all values in a block are the same
[ https://issues.apache.org/jira/browse/LUCENE-7351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15357388#comment-15357388 ] Robert Muir commented on LUCENE-7351: - I like this better than the last patch, I think the optimization is more general. I think in the base test class, {{tesMostEqual()}} is a mistake? > BKDWriter should compress doc ids when all values in a block are the same > - > > Key: LUCENE-7351 > URL: https://issues.apache.org/jira/browse/LUCENE-7351 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Adrien Grand >Priority: Minor > Attachments: LUCENE-7351.patch, LUCENE-7351.patch > > > BKDWriter writes doc ids using 4 bytes per document. I think it should > compress similarly to postings when all docs in a block have the same packed > value. This can happen either when a field has a default value which is > common across documents or when quantization makes the number of unique > values so small that a large index will necessarily have blocks that all > contain the same value (eg. there are only 63490 unique half-float values). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-9267) Cloud MLT field boost not working
Brian Feldman created SOLR-9267: --- Summary: Cloud MLT field boost not working Key: SOLR-9267 URL: https://issues.apache.org/jira/browse/SOLR-9267 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Components: MoreLikeThis Affects Versions: 5.5.2, 5.5.1, 5.5 Reporter: Brian Feldman When boosting by field "fieldname otherFieldName^4.0" the boost is not stripped from the field name when adding to fieldNames ArrayList. So on line 133 of CloudMLTQParser when adding field content to the filteredDocument the field is not found (incorrectly trying to find 'otherFieldName^4.0'). The easiest but perhaps hackiest solution is to overwrite qf: if (localParams.get("boost") != null) { mlt.setBoost(localParams.getBool("boost")); boostFields = SolrPluginUtils.parseFieldBoosts(qf); qf = boostFields.keySet().toArray(qf); } -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-7351) BKDWriter should compress doc ids when all values in a block are the same
[ https://issues.apache.org/jira/browse/LUCENE-7351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adrien Grand updated LUCENE-7351: - Attachment: LUCENE-7351.patch Updated patch. It now specializes both reading doc ids into an array and feeding a visitor, which seems to help get the performance back to what it is on master, or at least less than 1% slower (it is not easy to distinguish minor slowdowns from noise at this stage). It has 3 cases: - increasing doc ids, which is expected to happen for either sorted segments or when all docs in a block have the same value. In that case, we delta-encode using vints. - doc ids requiring less than 24 bits, which are encoded on 3 bytes. - doc ids requiring less than 32 bits, which are encoded on 4 bytes like on master today. I think it's ready to go? > BKDWriter should compress doc ids when all values in a block are the same > - > > Key: LUCENE-7351 > URL: https://issues.apache.org/jira/browse/LUCENE-7351 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Adrien Grand >Priority: Minor > Attachments: LUCENE-7351.patch, LUCENE-7351.patch > > > BKDWriter writes doc ids using 4 bytes per document. I think it should > compress similarly to postings when all docs in a block have the same packed > value. This can happen either when a field has a default value which is > common across documents or when quantization makes the number of unique > values so small that a large index will necessarily have blocks that all > contain the same value (eg. there are only 63490 unique half-float values). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
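The three cases in that comment amount to a strategy selection over a block of doc ids. A hypothetical sketch of that selection (the enum and method names are illustrative, not the patch's actual code), assuming the 24-bit cutoff described above:

```java
// Hypothetical sketch of the encoding choice described in LUCENE-7351;
// the enum and method names are illustrative, not the patch's actual code.
public class DocIdEncodingChooser {
    enum Strategy {
        DELTA_VINT, // increasing ids (sorted segment or one-value block): delta + vint
        BITS_24,    // all ids fit in 24 bits: 3 bytes each
        BITS_32     // fallback: 4 bytes each, as on master today
    }

    static Strategy choose(int[] docIds) {
        boolean increasing = true;
        int max = 0;
        for (int i = 0; i < docIds.length; i++) {
            if (i > 0 && docIds[i] < docIds[i - 1]) increasing = false;
            max = Math.max(max, docIds[i]);
        }
        if (increasing) return Strategy.DELTA_VINT;
        if (max < (1 << 24)) return Strategy.BITS_24;
        return Strategy.BITS_32;
    }

    public static void main(String[] args) {
        System.out.println(choose(new int[]{3, 7, 7, 12})); // increasing ids
        System.out.println(choose(new int[]{12, 3}));       // small, unsorted ids
        System.out.println(choose(new int[]{1 << 25, 3}));  // ids needing > 24 bits
    }
}
```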
[jira] [Commented] (SOLR-9266) zero size fdx file being created and commit taking 2 to 3 hours
[ https://issues.apache.org/jira/browse/SOLR-9266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15357334#comment-15357334 ] rajat commented on SOLR-9266: - Hi Erick, thanks for the prompt reply, but can you please help me with the problem? What might the cause actually be? I have checked the index with the index checker and it says it's all fine. My stack is Ubuntu 14.04, Java 8, Solr 4.2, Apache Tomcat 8. Regards, Rajat On Thu, Jun 30, 2016 at 7:41 PM, Erick Erickson (JIRA)> zero size fdx file being created and commit taking 2 to 3 hours > --- > > Key: SOLR-9266 > URL: https://issues.apache.org/jira/browse/SOLR-9266 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 4.2 > Environment: ubuntu 14.04 lts , apache tomcat 9 , java 8 >Reporter: rajat > > index size 100 gbs > not using compound file format > During indexing, zero size fdx files are being created and commits are taking > a lot of time (2 to 3 hours). I have been using Solr 4.2 for the past 2-2.5 > years and have faced such a problem for the first time. > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-6.x-Windows (64bit/jdk1.8.0_92) - Build # 284 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/284/ Java: 64bit/jdk1.8.0_92 -XX:+UseCompressedOops -XX:+UseG1GC 1 tests failed. FAILED: org.apache.solr.handler.TestReqParamsAPI.test Error Message: Could not get expected value 'CY val' for path 'params/c' full output: { "responseHeader":{ "status":0, "QTime":0}, "params":{ "a":"A val", "b":"B val", "wt":"json", "useParams":""}, "context":{ "webapp":"", "path":"/dump1", "httpMethod":"GET"}}, from server: http://127.0.0.1:60602/collection1 Stack Trace: java.lang.AssertionError: Could not get expected value 'CY val' for path 'params/c' full output: { "responseHeader":{ "status":0, "QTime":0}, "params":{ "a":"A val", "b":"B val", "wt":"json", "useParams":""}, "context":{ "webapp":"", "path":"/dump1", "httpMethod":"GET"}}, from server: http://127.0.0.1:60602/collection1 at __randomizedtesting.SeedInfo.seed([4895AA26FC66F968:C0C195FC529A9490]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:481) at org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:171) at org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:61) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (SOLR-9216) Support collection.configName in MODIFYCOLLECTION request
[ https://issues.apache.org/jira/browse/SOLR-9216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15357284#comment-15357284 ] Keith Laban commented on SOLR-9216: --- Never mind, I didn't realize you made a change to use it in some of the other classes > Support collection.configName in MODIFYCOLLECTION request > - > > Key: SOLR-9216 > URL: https://issues.apache.org/jira/browse/SOLR-9216 > Project: Solr > Issue Type: Improvement >Reporter: Keith Laban >Assignee: Noble Paul > Fix For: 6.2 > > Attachments: SOLR-9216.patch, SOLR-9216.patch, SOLR-9216.patch > > > MODIFYCOLLECTION should support updating the > {{/collections/}} value of "configName" in zookeeper -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9216) Support collection.configName in MODIFYCOLLECTION request
[ https://issues.apache.org/jira/browse/SOLR-9216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15357280#comment-15357280 ] Keith Laban commented on SOLR-9216: --- [~noble.paul] did you mean to commit that change to SolrParams? > Support collection.configName in MODIFYCOLLECTION request > - > > Key: SOLR-9216 > URL: https://issues.apache.org/jira/browse/SOLR-9216 > Project: Solr > Issue Type: Improvement >Reporter: Keith Laban >Assignee: Noble Paul > Fix For: 6.2 > > Attachments: SOLR-9216.patch, SOLR-9216.patch, SOLR-9216.patch > > > MODIFYCOLLECTION should support updating the > {{/collections/}} value of "configName" in zookeeper -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 106 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/106/ 2 tests failed. FAILED: org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test Error Message: Captured an uncaught exception in thread: Thread[id=50182, name=collection2, state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest] Stack Trace: com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=50182, name=collection2, state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest] Caused by: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:37887/t/u: collection already exists: awholynewstresscollection_collection2_0 at __randomizedtesting.SeedInfo.seed([502018AF5E6CE827]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:590) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:259) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:403) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:356) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1228) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:998) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:934) at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1599) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1620) at org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:987) FAILED: junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandler 
Error Message: ObjectTracker found 11 object(s) that were not released!!! [NRTCachingDirectory, NRTCachingDirectory, NRTCachingDirectory, NRTCachingDirectory, NRTCachingDirectory, NRTCachingDirectory, NRTCachingDirectory, NRTCachingDirectory, NRTCachingDirectory, NRTCachingDirectory, NRTCachingDirectory] Stack Trace: java.lang.AssertionError: ObjectTracker found 11 object(s) that were not released!!! [NRTCachingDirectory, NRTCachingDirectory, NRTCachingDirectory, NRTCachingDirectory, NRTCachingDirectory, NRTCachingDirectory, NRTCachingDirectory, NRTCachingDirectory, NRTCachingDirectory, NRTCachingDirectory, NRTCachingDirectory] at __randomizedtesting.SeedInfo.seed([502018AF5E6CE827]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNull(Assert.java:551) at org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:257) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at
[jira] [Commented] (SOLR-9253) solrcloud goes dowm
[ https://issues.apache.org/jira/browse/SOLR-9253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15357170#comment-15357170 ] Junfeng Mu commented on SOLR-9253: -- Mr. Mangar! Would you mind we communicate by email? My personal email is "kent...@live.cn", you can call me Kent. Would you please spare some time to answer my problems that I came across? > solrcloud goes dowm > --- > > Key: SOLR-9253 > URL: https://issues.apache.org/jira/browse/SOLR-9253 > Project: Solr > Issue Type: Wish > Security Level: Public(Default Security Level. Issues are Public) > Components: clients - java >Affects Versions: 4.9.1 > Environment: jboss, zookeeper >Reporter: Junfeng Mu > Attachments: 20160627161845.png, javacore.165.txt > > Original Estimate: 96h > Remaining Estimate: 96h > > We use solrcloud in our project. now we use solr, but the data grows bigger > and bigger, so we want to switch to solrcloud, however, once we switch to > solrcloud, solrcloud goes down, It seems that solrcloud blocked, can not deal > with the new query, please see the attachments and help us ASAP. Thanks! -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-7280) Load cores in sorted order and tweak coreLoadThread counts to improve cluster stability on restarts
[ https://issues.apache.org/jira/browse/SOLR-7280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul updated SOLR-7280: - Attachment: SOLR-7280.patch moved patch over from SOLR-7191 > Load cores in sorted order and tweak coreLoadThread counts to improve cluster > stability on restarts > --- > > Key: SOLR-7280 > URL: https://issues.apache.org/jira/browse/SOLR-7280 > Project: Solr > Issue Type: Sub-task > Components: SolrCloud >Reporter: Shalin Shekhar Mangar >Assignee: Noble Paul > Fix For: 5.2, 6.0 > > Attachments: SOLR-7280.patch > > > In SOLR-7191, Damien mentioned that by loading solr cores in a sorted order > and tweaking some of the coreLoadThread counts, he was able to improve the > stability of a cluster with thousands of collections. We should explore some > of these changes and fold them into Solr. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7191) Improve stability and startup performance of SolrCloud with thousands of collections
[ https://issues.apache.org/jira/browse/SOLR-7191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15357167#comment-15357167 ] Noble Paul commented on SOLR-7191: -- I have simplified this patch and moved it over to SOLR-7280 . I plan to commit that soon > Improve stability and startup performance of SolrCloud with thousands of > collections > > > Key: SOLR-7191 > URL: https://issues.apache.org/jira/browse/SOLR-7191 > Project: Solr > Issue Type: Bug > Components: SolrCloud >Affects Versions: 5.0 >Reporter: Shawn Heisey >Assignee: Shalin Shekhar Mangar > Labels: performance, scalability > Attachments: SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, > SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, > lots-of-zkstatereader-updates-branch_5x.log > > > A user on the mailing list with thousands of collections (5000 on 4.10.3, > 4000 on 5.0) is having severe problems with getting Solr to restart. > I tried as hard as I could to duplicate the user setup, but I ran into many > problems myself even before I was able to get 4000 collections created on a > 5.0 example cloud setup. Restarting Solr takes a very long time, and it is > not very stable once it's up and running. > This kind of setup is very much pushing the envelope on SolrCloud performance > and scalability. It doesn't help that I'm running both Solr nodes on one > machine (I started with 'bin/solr -e cloud') and that ZK is embedded. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_92) - Build # 5946 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/5946/ Java: 64bit/jdk1.8.0_92 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: org.apache.solr.common.cloud.TestCollectionStateWatchers.testSimpleCollectionWatch Error Message: Stack Trace: java.util.concurrent.TimeoutException at __randomizedtesting.SeedInfo.seed([ACE149333A9DD931:F1DA86437D90460F]:0) at org.apache.solr.common.cloud.ZkStateReader.waitForState(ZkStateReader.java:1206) at org.apache.solr.client.solrj.impl.CloudSolrClient.waitForState(CloudSolrClient.java:593) at org.apache.solr.common.cloud.TestCollectionStateWatchers.testSimpleCollectionWatch(TestCollectionStateWatchers.java:92) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) Build Log: [...truncated 13127 lines...] [junit4] Suite: org.apache.solr.common.cloud.TestCollectionStateWatchers [junit4] 2> Creating dataDir:
[jira] [Commented] (SOLR-8787) TestAuthenticationFramework should not extend TestMiniSolrCloudCluster
[ https://issues.apache.org/jira/browse/SOLR-8787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15357154#comment-15357154 ] ASF subversion and git services commented on SOLR-8787: --- Commit 6528dacb0e5c71f9e412655d8abee0857f4bda8f in lucene-solr's branch refs/heads/master from [~shalinmangar] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6528dac ] SOLR-8787: TestAuthenticationFramework should not extend TestMiniSolrCloudCluster > TestAuthenticationFramework should not extend TestMiniSolrCloudCluster > -- > > Key: SOLR-8787 > URL: https://issues.apache.org/jira/browse/SOLR-8787 > Project: Solr > Issue Type: Bug > Components: Tests >Reporter: Shalin Shekhar Mangar >Priority: Minor > Labels: difficulty-easy, newdev > Fix For: 6.2, master (7.0) > > Attachments: SOLR-8787.patch > > > TestAuthenticationFramework should not extend TestMiniSolrCloudCluster. The > TestMiniSolrCloudCluster is actually a test for MiniSolrCloudCluster and not > a generic test framework class. I saw a local failure for > TestAuthenticationFramework.testSegmentTerminateEarly which should never be > executed in the first place. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-7365) Don't use BooleanScorer for small segments
[ https://issues.apache.org/jira/browse/LUCENE-7365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15357152#comment-15357152 ] Adrien Grand commented on LUCENE-7365: -- I think you forgot to put LinearScoringIndexSearcher in the previous patch? I am fine with the wrapper approach too, but if we do it I think we should wrap at the weight level directly rather than at the query level. This way we can still modify the way the query is executed, but without modifying the query tree. > Don't use BooleanScorer for small segments > -- > > Key: LUCENE-7365 > URL: https://issues.apache.org/jira/browse/LUCENE-7365 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Alan Woodward >Assignee: Alan Woodward > Attachments: LUCENE-7365-query.patch, LUCENE-7365.patch, > LUCENE-7365.patch > > > If a BooleanQuery meets certain criteria (only contains disjunctions, is > likely to match large numbers of docs) then we use a BooleanScorer to score > groups of 1024 docs at a time. This allocates arrays of 1024 Bucket objects > up-front. On very small segments (for example, a MemoryIndex) this is very > wasteful of memory, particularly if the query is large or deeply-nested. We > should avoid using a bulk scorer on these segments. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7191) Improve stability and startup performance of SolrCloud with thousands of collections
[ https://issues.apache.org/jira/browse/SOLR-7191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15357146#comment-15357146 ] Noble Paul commented on SOLR-7191: -- yeah, normally you are fine. If there is a GC pause in the overseer node, a lot of messages can get stuck in the queue and this will lead to even more threads waiting indefinitely (consuming more memory ) and aggravating the situation. > Improve stability and startup performance of SolrCloud with thousands of > collections > > > Key: SOLR-7191 > URL: https://issues.apache.org/jira/browse/SOLR-7191 > Project: Solr > Issue Type: Bug > Components: SolrCloud >Affects Versions: 5.0 >Reporter: Shawn Heisey >Assignee: Shalin Shekhar Mangar > Labels: performance, scalability > Attachments: SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, > SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, > lots-of-zkstatereader-updates-branch_5x.log > > > A user on the mailing list with thousands of collections (5000 on 4.10.3, > 4000 on 5.0) is having severe problems with getting Solr to restart. > I tried as hard as I could to duplicate the user setup, but I ran into many > problems myself even before I was able to get 4000 collections created on a > 5.0 example cloud setup. Restarting Solr takes a very long time, and it is > not very stable once it's up and running. > This kind of setup is very much pushing the envelope on SolrCloud performance > and scalability. It doesn't help that I'm running both Solr nodes on one > machine (I started with 'bin/solr -e cloud') and that ZK is embedded. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9194) Enhance the bin/solr script to perform file operations to/from Zookeeper
[ https://issues.apache.org/jira/browse/SOLR-9194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15357143#comment-15357143 ] Erick Erickson commented on SOLR-9194: -- OK, I'll commit this over the weekend. Any additional testing (especially on Windows with Jan's patch) would be most welcome. > Enhance the bin/solr script to perform file operations to/from Zookeeper > > > Key: SOLR-9194 > URL: https://issues.apache.org/jira/browse/SOLR-9194 > Project: Solr > Issue Type: Improvement >Reporter: Erick Erickson >Assignee: Erick Erickson >Priority: Minor > Attachments: SOLR-9194.patch, SOLR-9194.patch, SOLR-9194.patch, > SOLR-9194.patch, SOLR-9194.patch > > > There are a few other files that can reasonably be pushed to Zookeeper, e.g. > solr.xml, security.json, clusterprops.json. Who knows? Even > /state.json for the brave. > This could reduce further the need for bouncing out to zkcli. > Assigning to myself just so I don't lose track, but I would _love_ it if > someone else wanted to take it... > I'm thinking the commands would be > bin/solr zk -putfile -z -p -f > bin/solr zk -getfile -z -p -f > but I'm not wedded to those, all suggestions welcome. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-8546) TestLazyCores is failing a lot on the Jenkins cluster.
[ https://issues.apache.org/jira/browse/SOLR-8546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson resolved SOLR-8546. -- Resolution: Fixed Fix Version/s: master (7.0) 6.2 No failures since the last patch, so closing. > TestLazyCores is failing a lot on the Jenkins cluster. > -- > > Key: SOLR-8546 > URL: https://issues.apache.org/jira/browse/SOLR-8546 > Project: Solr > Issue Type: Test >Reporter: Mark Miller >Assignee: Erick Erickson > Fix For: 6.2, master (7.0) > > Attachments: SOLR-8546.patch > > > Looks like two issues: > * A thread leak due to searcherExecutor > * An ObjectTracker failure because a SolrCore is left unclosed. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-9266) zero size fdx file being created and commit taking 2 to 3 hours
[ https://issues.apache.org/jira/browse/SOLR-9266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson resolved SOLR-9266. -- Resolution: Invalid Please raise this kind of issue on the user's list before raising a JIRA; we try to reserve JIRAs for known code problems. Plus, it's highly unlikely any patches for the 4x code line will be forthcoming. This has not been reported by anyone else in this code line, so it's quite likely something local to your environment and not amenable to Solr changes. > zero size fdx file being created and commit taking 2 to 3 hours > --- > > Key: SOLR-9266 > URL: https://issues.apache.org/jira/browse/SOLR-9266 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 4.2 > Environment: ubuntu 14.04 lts , apache tomcat 9 , java 8 >Reporter: rajat > > index size 100 gbs > not using compound file format > During indexing zero size fdx files are being created and commits are taking > a lot of time (2 to 3 hours ) I have been using solr 4.2 for past 2 -2.5 > years > faced such a problem first time > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9088) solr.schema.TestManagedSchemaAPI.test failures ([doc=2] unknown field 'myNewField1')
[ https://issues.apache.org/jira/browse/SOLR-9088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15357120#comment-15357120 ] Noble Paul commented on SOLR-9088: -- I understand that. But as a general API contract core references are not kept around. The problem is that the anonymous inner class object holds a reference to the core even after it may be closed and prevents it from getting garbage collected > solr.schema.TestManagedSchemaAPI.test failures ([doc=2] unknown field > 'myNewField1') > > > Key: SOLR-9088 > URL: https://issues.apache.org/jira/browse/SOLR-9088 > Project: Solr > Issue Type: Test >Reporter: Christine Poerschke >Priority: Minor > Attachments: SOLR-9088.patch > > > e.g. > http://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3256/ > http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/588/ > http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Linux/266/ -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
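[Aside for readers following SOLR-9088] The capture Noble describes can be reproduced with plain-JDK stand-ins (these are illustrative classes, not Solr's real SolrCore or listener APIs): an anonymous inner class that touches any member of its enclosing instance gets a synthetic `this$0` field pointing at that instance, so a long-lived listener registry transitively pins the "core".

```java
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

// Hypothetical registry standing in for the ZK config-listener list.
class ConfigRegistry {
    static final List<Runnable> LISTENERS = new ArrayList<>();
}

class LeakyCore {
    final String name;
    LeakyCore(String name) { this.name = name; }

    // The problematic pattern: the anonymous Runnable references `name`,
    // so javac makes it capture the whole enclosing LeakyCore instance.
    Runnable registerListener() {
        Runnable r = new Runnable() {
            @Override public void run() { System.out.println("reload " + name); }
        };
        ConfigRegistry.LISTENERS.add(r);
        return r;
    }

    // Detects the synthetic outer-instance field (`this$0`) via reflection.
    static boolean capturesOuter(Object o) {
        for (Field f : o.getClass().getDeclaredFields()) {
            if (f.getName().startsWith("this$")) return true;
        }
        return false;
    }
}
```

Until the listener is removed from the registry, the core stays strongly reachable even after it is "closed", which is exactly why it cannot be garbage collected.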
[jira] [Updated] (SOLR-8787) TestAuthenticationFramework should not extend TestMiniSolrCloudCluster
[ https://issues.apache.org/jira/browse/SOLR-8787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shalin Shekhar Mangar updated SOLR-8787: Attachment: (was: SOLR-8787.patch) > TestAuthenticationFramework should not extend TestMiniSolrCloudCluster > -- > > Key: SOLR-8787 > URL: https://issues.apache.org/jira/browse/SOLR-8787 > Project: Solr > Issue Type: Bug > Components: Tests >Reporter: Shalin Shekhar Mangar >Priority: Minor > Labels: difficulty-easy, newdev > Fix For: 6.2, master (7.0) > > Attachments: SOLR-8787.patch > > > TestAuthenticationFramework should not extend TestMiniSolrCloudCluster. The > TestMiniSolrCloudCluster is actually a test for MiniSolrCloudCluster and not > a generic test framework class. I saw a local failure for > TestAuthenticationFramework.testSegmentTerminateEarly which should never be > executed in the first place. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Issue Comment Deleted] (SOLR-8787) TestAuthenticationFramework should not extend TestMiniSolrCloudCluster
[ https://issues.apache.org/jira/browse/SOLR-8787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shalin Shekhar Mangar updated SOLR-8787: Comment: was deleted (was: Patch updated to master. I'll commit this shortly.) > TestAuthenticationFramework should not extend TestMiniSolrCloudCluster > -- > > Key: SOLR-8787 > URL: https://issues.apache.org/jira/browse/SOLR-8787 > Project: Solr > Issue Type: Bug > Components: Tests >Reporter: Shalin Shekhar Mangar >Priority: Minor > Labels: difficulty-easy, newdev > Fix For: 6.2, master (7.0) > > Attachments: SOLR-8787.patch > > > TestAuthenticationFramework should not extend TestMiniSolrCloudCluster. The > TestMiniSolrCloudCluster is actually a test for MiniSolrCloudCluster and not > a generic test framework class. I saw a local failure for > TestAuthenticationFramework.testSegmentTerminateEarly which should never be > executed in the first place. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-8787) TestAuthenticationFramework should not extend TestMiniSolrCloudCluster
[ https://issues.apache.org/jira/browse/SOLR-8787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shalin Shekhar Mangar updated SOLR-8787: Attachment: SOLR-8787.patch Patch updated to master. I'll commit this shortly. > TestAuthenticationFramework should not extend TestMiniSolrCloudCluster > -- > > Key: SOLR-8787 > URL: https://issues.apache.org/jira/browse/SOLR-8787 > Project: Solr > Issue Type: Bug > Components: Tests >Reporter: Shalin Shekhar Mangar >Priority: Minor > Labels: difficulty-easy, newdev > Fix For: 6.2, master (7.0) > > Attachments: SOLR-8787.patch, SOLR-8787.patch > > > TestAuthenticationFramework should not extend TestMiniSolrCloudCluster. The > TestMiniSolrCloudCluster is actually a test for MiniSolrCloudCluster and not > a generic test framework class. I saw a local failure for > TestAuthenticationFramework.testSegmentTerminateEarly which should never be > executed in the first place. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9088) solr.schema.TestManagedSchemaAPI.test failures ([doc=2] unknown field 'myNewField1')
[ https://issues.apache.org/jira/browse/SOLR-9088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15357078#comment-15357078 ] Varun Thacker commented on SOLR-9088: - Hi Noble, I assume the core could be gone in this scenario: we are deleting a replica and, during that window, the config znode change triggered this listener? If the core is gone, do we even need to execute the {{getConfListener}} method? In {{registerConfListener}} we call {{ZkController#registerConfListenerForCore}}; maybe we should add a core close hook to deregister the listener? > solr.schema.TestManagedSchemaAPI.test failures ([doc=2] unknown field > 'myNewField1') > > > Key: SOLR-9088 > URL: https://issues.apache.org/jira/browse/SOLR-9088 > Project: Solr > Issue Type: Test >Reporter: Christine Poerschke >Priority: Minor > Attachments: SOLR-9088.patch > > > e.g. > http://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3256/ > http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/588/ > http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Linux/266/ -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
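[Aside] Varun's close-hook suggestion, sketched with hypothetical stand-ins (Solr's actual core and hook APIs differ; this only shows the lifecycle idea): registering the config listener and scheduling its deregistration in the same place ties the listener's lifetime to the core's, so it cannot be forgotten.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical shared registry standing in for the ZK listener list.
class ListenerRegistry {
    static final List<Runnable> LISTENERS = new ArrayList<>();
}

class ClosableCore implements AutoCloseable {
    private final List<Runnable> closeHooks = new ArrayList<>();

    void addCloseHook(Runnable hook) { closeHooks.add(hook); }

    void registerConfListener(Runnable listener) {
        ListenerRegistry.LISTENERS.add(listener);
        // Deregister automatically when this core is closed.
        addCloseHook(() -> ListenerRegistry.LISTENERS.remove(listener));
    }

    @Override public void close() { closeHooks.forEach(Runnable::run); }
}
```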
[jira] [Updated] (LUCENE-7365) Don't use BooleanScorer for small segments
[ https://issues.apache.org/jira/browse/LUCENE-7365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Woodward updated LUCENE-7365: -- Attachment: LUCENE-7365-query.patch This is an alternative idea, a ForceNoBulkScoringQuery implementation that wraps an existing query and ensures use of the DefaultBulkScorer. > Don't use BooleanScorer for small segments > -- > > Key: LUCENE-7365 > URL: https://issues.apache.org/jira/browse/LUCENE-7365 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Alan Woodward >Assignee: Alan Woodward > Attachments: LUCENE-7365-query.patch, LUCENE-7365.patch, > LUCENE-7365.patch > > > If a BooleanQuery meets certain criteria (only contains disjunctions, is > likely to match large numbers of docs) then we use a BooleanScorer to score > groups of 1024 docs at a time. This allocates arrays of 1024 Bucket objects > up-front. On very small segments (for example, a MemoryIndex) this is very > wasteful of memory, particularly if the query is large or deeply-nested. We > should avoid using a bulk scorer on these segments. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
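[Aside] The guard this issue asks for amounts to a small predicate. The sketch below is only a guess at its shape, not the attached patch: the 1024 figure comes from the window size mentioned in the issue description, and the real fix may use a different condition entirely or wrap at the Weight level as Adrien suggests.

```java
// Hedged sketch of the decision: skip the bucket-based BooleanScorer when
// a segment is too small to amortize its up-front allocation of 1024
// Bucket objects.
class BulkScorerPolicy {
    static final int WINDOW_SIZE = 1024;  // BooleanScorer scores docs in windows of this size

    static boolean useBulkScorer(int segmentMaxDoc, boolean disjunctionsOnly) {
        // A MemoryIndex-style segment with a handful of docs never fills
        // even one window, so the allocation is pure overhead.
        return disjunctionsOnly && segmentMaxDoc >= WINDOW_SIZE;
    }
}
```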
[jira] [Assigned] (SOLR-7871) Platform independent config file instead of solr.in.sh and solr.in.cmd
[ https://issues.apache.org/jira/browse/SOLR-7871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jan Høydahl reassigned SOLR-7871: - Assignee: Jan Høydahl > Platform independent config file instead of solr.in.sh and solr.in.cmd > -- > > Key: SOLR-7871 > URL: https://issues.apache.org/jira/browse/SOLR-7871 > Project: Solr > Issue Type: Improvement > Components: scripts and tools >Affects Versions: 5.2.1 >Reporter: Jan Høydahl >Assignee: Jan Høydahl > Labels: bin/solr > Fix For: 6.0 > > > Spinoff from SOLR-7043 > The config files {{solr.in.sh}} and {{solr.in.cmd}} are currently executable > batch files, but all they do is to set environment variables for the start > scripts on the format {{key=value}} > Suggest to instead have one central platform independent config file e.g. > {{bin/solr.yml}} or {{bin/solrstart.properties}} which is parsed by > {{SolrCLI.java}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9264) Optimize ZkController.publishAndWaitForDownStates
[ https://issues.apache.org/jira/browse/SOLR-9264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15357029#comment-15357029 ] ASF subversion and git services commented on SOLR-9264: --- Commit 1ce09b482e9370649e9a7421b4961a67e744e46f in lucene-solr's branch refs/heads/branch_6x from [~shalinmangar] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1ce09b4 ] SOLR-9264: Remove unused imports (cherry picked from commit 74c8606) > Optimize ZkController.publishAndWaitForDownStates > - > > Key: SOLR-9264 > URL: https://issues.apache.org/jira/browse/SOLR-9264 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: Shalin Shekhar Mangar >Assignee: Shalin Shekhar Mangar > Fix For: 6.2, master (7.0) > > Attachments: SOLR-9264.patch, SOLR-9264.patch, SOLR-9264.patch > > > ZkController.publishAndWaitForDownStates keeps looping over all collections > in the cluster state to ensure that every replica hosted on the current node > has been marked as down. This is wasteful when you have a large number of > collections because each access to a non-watched collection gets data from > ZK. Instead, we can watch the interesting collections (i.e. which have > replicas hosted locally) and wait till we see the required state. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
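[Aside] The watch-and-wait approach the issue describes can be modeled without any ZooKeeper dependency. The class below is a generic stand-in, not Solr's ZkStateReader: a state change published by the watch callback releases a latch that waiters block on, instead of each waiter re-reading every collection's state in a loop.

```java
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.function.Predicate;

// Minimal watch-and-wait model: publish() plays the role of the ZK watch
// callback; waitForState() blocks until a predicate on the state holds.
class WatchedState {
    private volatile String state;
    private final CopyOnWriteArrayList<Waiter> waiters = new CopyOnWriteArrayList<>();

    private static final class Waiter {
        final Predicate<String> predicate;
        final CountDownLatch latch = new CountDownLatch(1);
        Waiter(Predicate<String> p) { this.predicate = p; }
    }

    WatchedState(String initial) { this.state = initial; }

    // Called by the watch callback when a state change is reported.
    void publish(String newState) {
        state = newState;
        for (Waiter w : waiters) {
            if (w.predicate.test(newState)) w.latch.countDown();
        }
    }

    boolean waitForState(Predicate<String> predicate, long timeoutMs) {
        if (predicate.test(state)) return true;   // fast path, no waiting
        Waiter w = new Waiter(predicate);
        waiters.add(w);
        try {
            if (predicate.test(state)) return true;  // re-check to close the race
            return w.latch.await(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        } finally {
            waiters.remove(w);
        }
    }
}
```

The fast-path check plus re-check after registration is what keeps a waiter from missing a change that lands between its first read and its latch registration.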
Re: [JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 232 - Failure!
My fault. I pushed a fix. On Thu, Jun 30, 2016 at 5:58 PM, Policeman Jenkins Server < jenk...@thetaphi.de> wrote: > Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/232/ > Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC > > All tests passed > > Build Log: > [...truncated 63278 lines...] > -ecj-javadoc-lint-src: > [mkdir] Created dir: /var/tmp/ecj1815376812 > [ecj-lint] Compiling 936 source files to /var/tmp/ecj1815376812 > [ecj-lint] invalid Class-Path header in manifest of jar file: > /export/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar > [ecj-lint] invalid Class-Path header in manifest of jar file: > /export/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar > [ecj-lint] -- > [ecj-lint] 1. WARNING in > /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/java/org/apache/solr/cloud/Assign.java > (at line 101) > [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> { > [ecj-lint]^^^ > [ecj-lint] (Recovered) Internal inconsistency detected during lambda > shape analysis > [ecj-lint] -- > [ecj-lint] 2. WARNING in > /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/java/org/apache/solr/cloud/Assign.java > (at line 101) > [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> { > [ecj-lint]^^^ > [ecj-lint] (Recovered) Internal inconsistency detected during lambda > shape analysis > [ecj-lint] -- > [ecj-lint] 3. WARNING in > /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/java/org/apache/solr/cloud/Assign.java > (at line 101) > [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> { > [ecj-lint]^^^ > [ecj-lint] (Recovered) Internal inconsistency detected during lambda > shape analysis > [ecj-lint] -- > [ecj-lint] -- > [ecj-lint] 4. 
ERROR in > /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/java/org/apache/solr/cloud/ZkController.java > (at line 46) > [ecj-lint] import java.util.concurrent.atomic.AtomicBoolean; > [ecj-lint]^ > [ecj-lint] The import java.util.concurrent.atomic.AtomicBoolean is never > used > [ecj-lint] -- > [ecj-lint] 5. ERROR in > /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/java/org/apache/solr/cloud/ZkController.java > (at line 80) > [ecj-lint] import org.eclipse.jetty.util.ConcurrentHashSet; > [ecj-lint] > [ecj-lint] The import org.eclipse.jetty.util.ConcurrentHashSet is never > used > [ecj-lint] -- > [ecj-lint] -- > [ecj-lint] 6. WARNING in > /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java > (at line 213) > [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> { > [ecj-lint] ^^^ > [ecj-lint] (Recovered) Internal inconsistency detected during lambda > shape analysis > [ecj-lint] -- > [ecj-lint] 7. WARNING in > /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java > (at line 213) > [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> { > [ecj-lint] ^^^ > [ecj-lint] (Recovered) Internal inconsistency detected during lambda > shape analysis > [ecj-lint] -- > [ecj-lint] 8. WARNING in > /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java > (at line 213) > [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> { > [ecj-lint] ^^^ > [ecj-lint] (Recovered) Internal inconsistency detected during lambda > shape analysis > [ecj-lint] -- > [ecj-lint] -- > [ecj-lint] 9. 
WARNING in > /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/java/org/apache/solr/core/HdfsDirectoryFactory.java > (at line 227) > [ecj-lint] dir = new BlockDirectory(path, hdfsDir, cache, null, > blockCacheReadEnabled, false, cacheMerges, cacheReadOnce); > [ecj-lint] > > ^^ > [ecj-lint] Resource leak: 'dir' is never closed > [ecj-lint] -- > [ecj-lint] -- > [ecj-lint] 10. WARNING in > /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/java/org/apache/solr/handler/AnalysisRequestHandlerBase.java > (at line 120) > [ecj-lint] reader = cfiltfac.create(reader); > [ecj-lint] > [ecj-lint] Resource leak: 'reader' is not closed at this location >
[jira] [Commented] (SOLR-9264) Optimize ZkController.publishAndWaitForDownStates
[ https://issues.apache.org/jira/browse/SOLR-9264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15357028#comment-15357028 ] ASF subversion and git services commented on SOLR-9264: --- Commit 74c86063cf94dcc4dc022776bba31ae278686b42 in lucene-solr's branch refs/heads/master from [~shalinmangar] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=74c8606 ] SOLR-9264: Remove unused imports > Optimize ZkController.publishAndWaitForDownStates > - > > Key: SOLR-9264 > URL: https://issues.apache.org/jira/browse/SOLR-9264 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: Shalin Shekhar Mangar >Assignee: Shalin Shekhar Mangar > Fix For: 6.2, master (7.0) > > Attachments: SOLR-9264.patch, SOLR-9264.patch, SOLR-9264.patch > > > ZkController.publishAndWaitForDownStates keeps looping over all collections > in the cluster state to ensure that every replica hosted on the current node > has been marked as down. This is wasteful when you have a large number of > collections because each access to a non-watched collection gets data from > ZK. Instead, we can watch the interesting collections (i.e. which have > replicas hosted locally) and wait till we see the required state. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9253) solrcloud goes dowm
[ https://issues.apache.org/jira/browse/SOLR-9253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15357015#comment-15357015 ] Junfeng Mu commented on SOLR-9253: -- Dear Mr. Erickson! I have sent a mail to "solr-user-subscribe" with the same title, "solrcloud goes down", but got no response, so I posted the question here. Sorry to disturb! > solrcloud goes dowm > --- > > Key: SOLR-9253 > URL: https://issues.apache.org/jira/browse/SOLR-9253 > Project: Solr > Issue Type: Wish > Security Level: Public(Default Security Level. Issues are Public) > Components: clients - java >Affects Versions: 4.9.1 > Environment: jboss, zookeeper >Reporter: Junfeng Mu > Attachments: 20160627161845.png, javacore.165.txt > > Original Estimate: 96h > Remaining Estimate: 96h > > We use solrcloud in our project. now we use solr, but the data grows bigger > and bigger, so we want to switch to solrcloud, however, once we switch to > solrcloud, solrcloud goes down, It seems that solrcloud blocked, can not deal > with the new query, please see the attachments and help us ASAP. Thanks!
[jira] [Commented] (SOLR-9253) solrcloud goes dowm
[ https://issues.apache.org/jira/browse/SOLR-9253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15357009#comment-15357009 ] Junfeng Mu commented on SOLR-9253: -- Mr. Mangar! I did the configuration as you said before. I added "maxConnections" and "maxConnectionsPerHost" in the "", but the problem occurred again. The configuration is as below: 1 500. Besides, we use the singleton pattern to create and get the Solr server connection; I wonder if this pattern is OK. Once I shut down ZooKeeper, the application cannot do the Solr query; the error is "no live SolrServer available to handle this request", so I need to restart our connection to reconnect to SolrCloud. As we use the singleton pattern, we do not call "shutdown" to release the SolrServer connection. Will this be a problem? Or do we need to create the connection on demand every time? Please help me; I look forward to your reply. Thanks a lot!
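[Editor's note] On the singleton question above: the usual Java idiom is a lazily initialized, thread-safe holder that is shared for the life of the application and released exactly once at shutdown. The sketch below uses a stand-in "Connection" class purely to show the lifecycle; SolrServerHolder is an invented name, not part of SolrJ:

```java
// Initialization-on-demand holder idiom: one shared instance, created on
// first use, released once at application shutdown. "Connection" stands in
// for a SolrJ client object (e.g. a CloudSolrServer in the 4.x line).
public class SolrServerHolder {
    // Stand-in for the real client; only the lifecycle matters here.
    public static class Connection implements AutoCloseable {
        public volatile boolean closed = false;
        @Override public void close() { closed = true; }
    }

    // The nested class is not loaded (and the instance not created) until
    // getInstance() is first called; class loading makes this thread-safe
    // without explicit locking.
    private static class Holder {
        static final Connection INSTANCE = new Connection();
    }

    public static Connection getInstance() {
        return Holder.INSTANCE;
    }

    public static void main(String[] args) {
        Connection a = getInstance();
        Connection b = getInstance();
        System.out.println(a == b); // true: same shared instance everywhere
        a.close();                  // release once, at shutdown
    }
}
```

Sharing one client instance is generally the intended usage pattern; the "no live SolrServer available" error described above is about the client's view of live nodes going stale while ZooKeeper is unreachable, which creating a fresh connection per request would not fix.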
[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 232 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/232/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC All tests passed Build Log: [...truncated 63278 lines...] -ecj-javadoc-lint-src: [mkdir] Created dir: /var/tmp/ecj1815376812 [ecj-lint] Compiling 936 source files to /var/tmp/ecj1815376812 [ecj-lint] invalid Class-Path header in manifest of jar file: /export/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar [ecj-lint] invalid Class-Path header in manifest of jar file: /export/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar [ecj-lint] -- [ecj-lint] 1. WARNING in /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/java/org/apache/solr/cloud/Assign.java (at line 101) [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> { [ecj-lint]^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 2. WARNING in /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/java/org/apache/solr/cloud/Assign.java (at line 101) [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> { [ecj-lint]^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 3. WARNING in /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/java/org/apache/solr/cloud/Assign.java (at line 101) [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> { [ecj-lint]^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] -- [ecj-lint] 4. ERROR in /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/java/org/apache/solr/cloud/ZkController.java (at line 46) [ecj-lint] import java.util.concurrent.atomic.AtomicBoolean; [ecj-lint]^ [ecj-lint] The import java.util.concurrent.atomic.AtomicBoolean is never used [ecj-lint] -- [ecj-lint] 5. 
ERROR in /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/java/org/apache/solr/cloud/ZkController.java (at line 80) [ecj-lint] import org.eclipse.jetty.util.ConcurrentHashSet; [ecj-lint] [ecj-lint] The import org.eclipse.jetty.util.ConcurrentHashSet is never used [ecj-lint] -- [ecj-lint] -- [ecj-lint] 6. WARNING in /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java (at line 213) [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> { [ecj-lint] ^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 7. WARNING in /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java (at line 213) [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> { [ecj-lint] ^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 8. WARNING in /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java (at line 213) [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> { [ecj-lint] ^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] -- [ecj-lint] 9. WARNING in /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/java/org/apache/solr/core/HdfsDirectoryFactory.java (at line 227) [ecj-lint] dir = new BlockDirectory(path, hdfsDir, cache, null, blockCacheReadEnabled, false, cacheMerges, cacheReadOnce); [ecj-lint] ^^ [ecj-lint] Resource leak: 'dir' is never closed [ecj-lint] -- [ecj-lint] -- [ecj-lint] 10. 
WARNING in /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/java/org/apache/solr/handler/AnalysisRequestHandlerBase.java (at line 120) [ecj-lint] reader = cfiltfac.create(reader); [ecj-lint] [ecj-lint] Resource leak: 'reader' is not closed at this location [ecj-lint] -- [ecj-lint] 11. WARNING in /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/java/org/apache/solr/handler/AnalysisRequestHandlerBase.java (at line 144) [ecj-lint] return namedList; [ecj-lint] ^ [ecj-lint] Resource leak:
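[Editor's note] The two ecj-lint resource-leak warnings above share one shape: a Closeable is created (or rewrapped) and assigned to a local variable on a path where an exception can escape before it is closed. A minimal stdlib sketch of the flagged pattern and the usual fix; the class names are illustrative, not the Solr ones:

```java
import java.io.Closeable;

// Minimal model of the ecj-lint "Resource leak" warnings: a resource held
// in a local variable leaks if an exception escapes before close().
public class LeakDemo {
    static class TrackedResource implements Closeable {
        boolean closed = false;
        @Override public void close() { closed = true; }
    }

    // Shape ecj-lint flags: if doWork() throws, 'r' is never closed.
    static TrackedResource leaky() {
        TrackedResource r = new TrackedResource();
        doWork();          // may throw -> 'r' leaks
        return r;
    }

    // Fix: close on the failure path, hand ownership out on success.
    static TrackedResource safe() {
        TrackedResource r = new TrackedResource();
        try {
            doWork();
            return r;      // caller now owns 'r' and must close it
        } catch (RuntimeException e) {
            r.close();     // no leak on the exceptional path
            throw e;
        }
    }

    static void doWork() { /* may throw in real code */ }

    public static void main(String[] args) {
        TrackedResource r = safe();
        r.close();
        System.out.println(r.closed); // true
    }
}
```

The second warning ("'reader' is not closed at this location") is the wrapping variant of the same problem: each `reader = filter.create(reader)` step replaces the reference, so if a later step throws, the intermediate wrapper is orphaned unclosed.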
[jira] [Commented] (LUCENE-7366) Allow RAMDirectory to copy any Directory
[ https://issues.apache.org/jira/browse/LUCENE-7366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15357006#comment-15357006 ] Uwe Schindler commented on LUCENE-7366: --- I just checked, the ctor taking FSDirectory was originally taking Directory, but this was changed in LUCENE-6241. As said before, we should just remove the check. > Allow RAMDirectory to copy any Directory > - > > Key: LUCENE-7366 > URL: https://issues.apache.org/jira/browse/LUCENE-7366 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Rob Audenaerde > > Uwe: "The FSDirectory passed to RAMDirectory in the ctor could be changed to > Directory easily. The additional check for "not is a directory inode" is in > my opinion lo longer needed, because listFiles should only return files." > Use case: For increasing the speed of some of my application tests, I want to > re-use/copy a pre-populated RAMDirectory over and over.
[jira] [Comment Edited] (LUCENE-7366) Allow RAMDirectory to copy any Directory
[ https://issues.apache.org/jira/browse/LUCENE-7366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15356997#comment-15356997 ] Uwe Schindler edited comment on LUCENE-7366 at 6/30/16 12:22 PM: - So I would suggest to just remove the check (or only do it is instanceof FSDirectory). If somebody has subdirs in an FSDirectory, the copy ctor will just fail - as this is brokenness anyways. was (Author: thetaphi): So I would suggest to just remove the check (or only do it is instanceof FSDirectory). If somebody has subdirs in an FSDirectory, the copy ctor will just fail.
[jira] [Commented] (LUCENE-7366) Allow RAMDirectory to copy any Directory
[ https://issues.apache.org/jira/browse/LUCENE-7366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15356997#comment-15356997 ] Uwe Schindler commented on LUCENE-7366: --- So I would suggest to just remove the check (or only do it is instanceof FSDirectory). If somebody has subdirs in an FSDirectory, the copy ctor will just fail.
[jira] [Commented] (LUCENE-7366) Allow RAMDirectory to copy any Directory
[ https://issues.apache.org/jira/browse/LUCENE-7366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15356983#comment-15356983 ] Robert Muir commented on LUCENE-7366: - I don't agree with that. That costs a readAttribute for every file.
[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+124) - Build # 17104 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17104/ Java: 64bit/jdk-9-ea+124 -XX:+UseCompressedOops -XX:+UseSerialGC 1 tests failed. FAILED: org.apache.solr.core.TestDynamicLoading.testDynamicLoading Error Message: Could not get expected value 'X val' for path 'x' full output: { "responseHeader":{ "status":0, "QTime":0}, "params":{"wt":"json"}, "context":{ "webapp":"", "path":"/test1", "httpMethod":"GET"}, "class":"org.apache.solr.core.BlobStoreTestRequestHandler", "x":null}, from server: null Stack Trace: java.lang.AssertionError: Could not get expected value 'X val' for path 'x' full output: { "responseHeader":{ "status":0, "QTime":0}, "params":{"wt":"json"}, "context":{ "webapp":"", "path":"/test1", "httpMethod":"GET"}, "class":"org.apache.solr.core.BlobStoreTestRequestHandler", "x":null}, from server: null at __randomizedtesting.SeedInfo.seed([6AD651CDB2A8804:DEE0484B2CF72DA4]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:481) at org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:232) at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native Method) at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62) at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:533) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (LUCENE-7366) Allow RAMDirectory to copy any Directory
[ https://issues.apache.org/jira/browse/LUCENE-7366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15356966#comment-15356966 ] Uwe Schindler commented on LUCENE-7366: --- I think to fix this, we should first fix FSDirectory to filter the returned filenames in listAll() to only show regular files. I had the impression we had already fixed this. No other directory implementation allows one to create directories or list them, only FSDirectory. So for consistency FSDirectory.listAll() should exclude non-regular files.
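[Editor's note] Both sides of the LUCENE-7366 exchange above, Uwe's suggestion to filter listAll() down to regular files and Robert's objection that the check costs an attribute read per file, are visible in a small stdlib sketch (ListRegularFiles is an invented name, not Lucene's FSDirectory code):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class ListRegularFiles {
    // Returns only regular-file names in 'dir'. Subdirectories and other
    // non-regular entries are skipped. The Files.isRegularFile call is the
    // per-entry attribute read (a stat) that Robert's objection refers to.
    static List<String> listAll(Path dir) throws IOException {
        List<String> names = new ArrayList<>();
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir)) {
            for (Path entry : stream) {
                if (Files.isRegularFile(entry)) {  // extra stat per entry
                    names.add(entry.getFileName().toString());
                }
            }
        }
        return names;
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("demo");
        Files.createFile(tmp.resolve("_0.cfs"));
        Files.createDirectory(tmp.resolve("not-an-index-file"));
        System.out.println(listAll(tmp)); // only the regular file is listed
    }
}
```

This is the trade-off under discussion: filtering makes listAll() consistent with directory implementations that cannot contain subdirectories at all, at the price of one extra filesystem metadata lookup per listed entry.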
[jira] [Assigned] (SOLR-7280) Load cores in sorted order and tweak coreLoadThread counts to improve cluster stability on restarts
[ https://issues.apache.org/jira/browse/SOLR-7280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul reassigned SOLR-7280: Assignee: Noble Paul (was: Shalin Shekhar Mangar) > Load cores in sorted order and tweak coreLoadThread counts to improve cluster > stability on restarts > --- > > Key: SOLR-7280 > URL: https://issues.apache.org/jira/browse/SOLR-7280 > Project: Solr > Issue Type: Sub-task > Components: SolrCloud >Reporter: Shalin Shekhar Mangar >Assignee: Noble Paul > Fix For: 5.2, 6.0 > > > In SOLR-7191, Damien mentioned that by loading solr cores in a sorted order > and tweaking some of the coreLoadThread counts, he was able to improve the > stability of a cluster with thousands of collections. We should explore some > of these changes and fold them into Solr.
[jira] [Updated] (LUCENE-7366) Allow RAMDirectory to copy any Directory
[ https://issues.apache.org/jira/browse/LUCENE-7366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rob Audenaerde updated LUCENE-7366: --- Description: Uwe: "The FSDirectory passed to RAMDirectory in the ctor could be changed to Directory easily. The additional check for "not is a directory inode" is in my opinion lo longer needed, because listFiles should only return files." Use case: For increasing the speed of some of my application tests, I want to re-use/copy a pre-populated RAMDirectory over and over. was: The FSDirectory passed to RAMDirectory in the ctor could be changed to Directory easily. The additional check for "not is a directory inode" is in my opinion lo longer needed, because listFiles should only return files. Use case: For increasing the speed of some of my application tests, I want to re-use/copy a pre-populated RAMDirectory over and over.
[jira] [Updated] (LUCENE-7366) Allow RAMDirectory to copy any Directory
[ https://issues.apache.org/jira/browse/LUCENE-7366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rob Audenaerde updated LUCENE-7366: --- Description: The FSDirectory passed to RAMDirectory in the ctor could be changed to Directory easily. The additional check for "not is a directory inode" is in my opinion lo longer needed, because listFiles should only return files. Use case: For increasing the speed of some of my application tests, I want to re-use/copy a pre-populated RAMDirectory over and over. was: The FSDirectory passed to RAMDirectory could be changed to Directory easily. The additional check for "not is a directory inode" is in my opinion lo longer needed, because listFiles should only return files. Use case: For increasing the speed of some of my application tests, I want to re-use/copy a pre-populated RAMDirectory over and over.
[jira] [Created] (LUCENE-7366) Allow RAMDirectory to copy any Directory
Rob Audenaerde created LUCENE-7366: -- Summary: Allow RAMDirectory to copy any Directory Key: LUCENE-7366 URL: https://issues.apache.org/jira/browse/LUCENE-7366 Project: Lucene - Core Issue Type: Improvement Reporter: Rob Audenaerde The FSDirectory passed to RAMDirectory could be changed to Directory easily. The additional check for "not is a directory inode" is in my opinion lo longer needed, because listFiles should only return files. Use case: For increasing the speed of some of my application tests, I want to re-use/copy a pre-populated RAMDirectory over and over.
[jira] [Updated] (LUCENE-7365) Don't use BooleanScorer for small segments
[ https://issues.apache.org/jira/browse/LUCENE-7365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Woodward updated LUCENE-7365: -- Attachment: LUCENE-7365.patch I like the idea of a specialised IndexSearcher, that's a lot less invasive. Here's a patch. LinearScoringIndexSearcher is a separate, public class, because I can see situations other than MemoryIndex where you might want to disable bulk scoring (for example, luwak also allows you to match against small batches of documents, and the same caveats apply to these as to MI). In this patch it's in the memory/ module, but that does force DefaultBulkScorer to become public, so maybe it would be better in core? > Don't use BooleanScorer for small segments > -- > > Key: LUCENE-7365 > URL: https://issues.apache.org/jira/browse/LUCENE-7365 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Alan Woodward >Assignee: Alan Woodward > Attachments: LUCENE-7365.patch, LUCENE-7365.patch > > > If a BooleanQuery meets certain criteria (only contains disjunctions, is > likely to match large numbers of docs) then we use a BooleanScorer to score > groups of 1024 docs at a time. This allocates arrays of 1024 Bucket objects > up-front. On very small segments (for example, a MemoryIndex) this is very > wasteful of memory, particularly if the query is large or deeply-nested. We > should avoid using a bulk scorer on these segments.
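[Editor's note] The trade-off behind LUCENE-7365, a bulk scorer that allocates a fixed 1024-slot bucket window up front, which pays off on large segments but is pure waste on tiny ones like a single-document MemoryIndex, can be modeled with a toy chooser. The threshold and all names below are invented for illustration and are not Lucene's actual heuristic:

```java
// Toy model of the LUCENE-7365 trade-off: bulk scoring pays a fixed
// upfront allocation (one 1024-slot bucket window) that is only worth it
// when the segment holds enough documents; tiny segments should fall back
// to linear, doc-at-a-time scoring.
public class ScorerChooser {
    static final int WINDOW = 1024;

    enum Mode { BULK, LINEAR }

    // Invented heuristic: bulk-score only when the segment can fill at
    // least one full window of documents.
    static Mode choose(int maxDoc) {
        return maxDoc >= WINDOW ? Mode.BULK : Mode.LINEAR;
    }

    // Upfront allocation, in bucket slots, implied by the choice.
    static int upfrontSlots(int maxDoc) {
        return choose(maxDoc) == Mode.BULK ? WINDOW : 0;
    }

    public static void main(String[] args) {
        System.out.println(choose(1));         // LINEAR: MemoryIndex-sized
        System.out.println(choose(1_000_000)); // BULK: large segment
    }
}
```

Alan's patch makes the same decision structurally rather than by threshold: a dedicated LinearScoringIndexSearcher simply never asks for a bulk scorer, so callers who know their segments are tiny (MemoryIndex, luwak batches) opt out of the window allocation entirely.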
[jira] [Resolved] (SOLR-9264) Optimize ZkController.publishAndWaitForDownStates
[ https://issues.apache.org/jira/browse/SOLR-9264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shalin Shekhar Mangar resolved SOLR-9264. - Resolution: Fixed Thanks for the review Hrishikesh!
[jira] [Commented] (SOLR-9264) Optimize ZkController.publishAndWaitForDownStates
[ https://issues.apache.org/jira/browse/SOLR-9264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15356837#comment-15356837 ] ASF subversion and git services commented on SOLR-9264: --- Commit 8cb37842ec531a84469607971024336b68c6ed50 in lucene-solr's branch refs/heads/branch_6x from [~shalinmangar] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8cb3784 ] SOLR-9264: Optimize ZkController.publishAndWaitForDownStates to not read all collection states and watch relevant collections instead (cherry picked from commit 015e0fc)
[jira] [Commented] (SOLR-9264) Optimize ZkController.publishAndWaitForDownStates
[ https://issues.apache.org/jira/browse/SOLR-9264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15356835#comment-15356835 ] ASF subversion and git services commented on SOLR-9264: --- Commit 015e0fc1cf1d581c9657cd8f5588062c02588793 in lucene-solr's branch refs/heads/master from [~shalinmangar] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=015e0fc ] SOLR-9264: Optimize ZkController.publishAndWaitForDownStates to not read all collection states and watch relevant collections instead
[jira] [Updated] (LUCENE-7355) Leverage MultiTermAwareComponent in query parsers
[ https://issues.apache.org/jira/browse/LUCENE-7355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adrien Grand updated LUCENE-7355: - Attachment: LUCENE-7355.patch I think I have something better now:
- the method is {{BytesRef normalize(String field, String text)}}, it can be configured with a subset of the char filters / token filters of the default analysis chain, and uses the same AttributeFactory as the default analysis chain
- {{setLowerCaseExpandedTerms}} has been removed from query parsers, which now use {{Analyzer.normalize}} to process range/prefix/fuzzy/wildcard/regexp queries
- {{AnalyzingQueryParser}} and the classic {{QueryParser}} have been merged together
- both {{SimpleQueryParser}} and the classic {{QueryParser}} now work with a non-default AttributeFactory that eg. uses a different encoding for terms (it was only the case before for wildcard queries and the classic QueryParser when analyzeRangeTerms was true).
Other query parsers could be fixed too but it will require more work as they are using String representations for terms rather than binary. > Leverage MultiTermAwareComponent in query parsers > - > > Key: LUCENE-7355 > URL: https://issues.apache.org/jira/browse/LUCENE-7355 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Adrien Grand >Priority: Minor > Attachments: LUCENE-7355.patch, LUCENE-7355.patch, LUCENE-7355.patch > > > MultiTermAwareComponent is designed to make it possible to do the right thing > in query parsers when in comes to analysis of multi-term queries. However, > since query parsers just take an analyzer and since analyzers do not > propagate the information about what to do for multi-term analysis, query > parsers cannot do the right thing out of the box.
[jira] [Created] (SOLR-9266) zero size fdx file being created and commit taking 2 to 3 hours
rajat created SOLR-9266:
---------------------------
             Summary: zero size fdx file being created and commit taking 2 to 3 hours
                 Key: SOLR-9266
                 URL: https://issues.apache.org/jira/browse/SOLR-9266
             Project: Solr
          Issue Type: Bug
      Security Level: Public (Default Security Level. Issues are Public)
    Affects Versions: 4.2
         Environment: ubuntu 14.04 lts, apache tomcat 9, java 8
            Reporter: rajat

Index size is 100 GB; not using the compound file format. During indexing, zero-size .fdx files are being created and commits are taking a very long time (2 to 3 hours). I have been using Solr 4.2 for the past 2-2.5 years and this is the first time I have run into this problem.
[jira] [Commented] (SOLR-9242) Collection level backup/restore should provide a param for specifying the repository implementation it should use
[ https://issues.apache.org/jira/browse/SOLR-9242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15356701#comment-15356701 ]

Varun Thacker commented on SOLR-9242:
-------------------------------------

Hi Hrishikesh,

bq. we can restrict the users to configure only a single repository at-a-time. This will avoid the problem mentioned above and they can use the current property.

Personally I don't like the idea of limiting our users to one repo for all of the 6.x line. Let's say we follow this order:
1. If a "location" param was provided as a query param, use that.
2. Else, if the "repository" configured in solr.xml has a "location" param, use that.
3. If the specified "repository" doesn't specify a "location" param, see if one is specified via the cluster property API.

The code will throw an error if the location is bogus or is not valid for this repository. It has to fail, since the repository will fail to read/write that location.

I thought about the "repoName:/path" syntax idea that you proposed. It seems to me that we want to do all of this because solr.xml doesn't have an API to update it; we have to hand-edit the file and, at the very least, upload it to ZK. Let's not complicate it any further: keep the "location" cluster prop the way it is for now and support it. We can work towards adding API support for solr.xml and then get rid of the "location" cluster prop, or the entire cluster prop API, in the future.

> Collection level backup/restore should provide a param for specifying the
> repository implementation it should use
> --------------------------------------------------------------------------
>
>                 Key: SOLR-9242
>                 URL: https://issues.apache.org/jira/browse/SOLR-9242
>             Project: Solr
>          Issue Type: Improvement
>            Reporter: Hrishikesh Gadre
>            Assignee: Varun Thacker
>         Attachments: SOLR-9242.patch
>
> SOLR-7374 provides the BackupRepository interface to enable storing Solr index
> data to a configured file-system (e.g. HDFS, local file-system etc.). This
> JIRA is to track the work required to extend this functionality to the
> collection level.
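The three-step lookup order proposed in the comment above can be sketched in plain Java. This is an illustration only, not Solr's actual code: the names `requestParams`, `repositoryConfig`, and `clusterProps` are hypothetical stand-ins for the corresponding Solr structures (query parameters, the repository element in solr.xml, and the cluster properties).

```java
import java.util.Map;
import java.util.Optional;

// Sketch of the proposed precedence for resolving the backup "location":
//   1. the "location" query parameter,
//   2. else the "location" attribute of the repository in solr.xml,
//   3. else the "location" cluster property.
// Validation of the resolved location (does the repository accept it?)
// is intentionally out of scope here; per the comment, an invalid
// location should simply cause the repository read/write to fail.
public class LocationResolver {

    static Optional<String> resolveLocation(Map<String, String> requestParams,
                                            Map<String, String> repositoryConfig,
                                            Map<String, String> clusterProps) {
        if (requestParams.containsKey("location")) {
            return Optional.of(requestParams.get("location"));       // step 1
        }
        if (repositoryConfig.containsKey("location")) {
            return Optional.of(repositoryConfig.get("location"));    // step 2
        }
        return Optional.ofNullable(clusterProps.get("location"));    // step 3
    }
}
```

With this order, an explicit query parameter always wins, and the cluster property remains a usable fallback until solr.xml gains an update API.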
[jira] [Resolved] (SOLR-9216) Support collection.configName in MODIFYCOLLECTION request
[ https://issues.apache.org/jira/browse/SOLR-9216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Noble Paul resolved SOLR-9216.
------------------------------
       Resolution: Fixed
    Fix Version/s: 6.2

> Support collection.configName in MODIFYCOLLECTION request
> ---------------------------------------------------------
>
>                 Key: SOLR-9216
>                 URL: https://issues.apache.org/jira/browse/SOLR-9216
>             Project: Solr
>          Issue Type: Improvement
>            Reporter: Keith Laban
>            Assignee: Noble Paul
>             Fix For: 6.2
>
>      Attachments: SOLR-9216.patch, SOLR-9216.patch, SOLR-9216.patch
>
> MODIFYCOLLECTION should support updating the
> {{/collections/}} value of "configName" in zookeeper
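The request enabled by this fix can be sketched as a URL built from the Collections API parameters named in the issue (action=MODIFYCOLLECTION plus collection.configName). The host, collection, and config names below are hypothetical placeholders.

```java
// Sketch only: compose a MODIFYCOLLECTION request URL that updates a
// collection's configName. The action and collection.configName parameter
// come from this issue; host/collection/config values are examples.
public class ModifyCollectionUrl {

    static String modifyCollectionUrl(String host, String collection, String configName) {
        return host + "/solr/admin/collections?action=MODIFYCOLLECTION"
                + "&collection=" + collection
                + "&collection.configName=" + configName;
    }

    public static void main(String[] args) {
        System.out.println(modifyCollectionUrl(
                "http://localhost:8983", "gettingstarted", "myNewConfig"));
    }
}
```

Sending such a request (e.g. with curl) would, per this issue, update the configName recorded for the collection in ZooKeeper.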