[JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 92999 - Failure!

2014-08-21 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/92999/

All tests passed

Build Log:
[...truncated 1477 lines...]
   [junit4] JVM J7: stdout was not empty, see: 
/var/lib/jenkins/workspace/Lucene-trunk-Linux-Java7-64-test-only/checkout/lucene/build/core/test/temp/junit4-J7-20140821_080036_795.sysout
   [junit4]  JVM J7: stdout (verbatim) 
   [junit4] #
   [junit4] # A fatal error has been detected by the Java Runtime Environment:
   [junit4] #
   [junit4] #  SIGSEGV (0xb) at pc=0x7fc6daab6170, pid=10688, 
tid=140491759924992
   [junit4] #
   [junit4] # JRE version: Java(TM) SE Runtime Environment (7.0_65-b17) (build 
1.7.0_65-b17)
   [junit4] # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.65-b04 mixed mode 
linux-amd64 compressed oops)
   [junit4] # Problematic frame:
   [junit4] # V  [libjvm.so+0x853170]  Parse::do_one_bytecode()+0x3190
   [junit4] #
   [junit4] # Failed to write core dump. Core dumps have been disabled. To 
enable core dumping, try ulimit -c unlimited before starting Java again
   [junit4] #
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
/var/lib/jenkins/workspace/Lucene-trunk-Linux-Java7-64-test-only/checkout/lucene/build/core/test/J7/hs_err_pid10688.log
   [junit4] #
   [junit4] # If you would like to submit a bug report, please visit:
   [junit4] #   http://bugreport.sun.com/bugreport/crash.jsp
   [junit4] #
   [junit4]  JVM J7: EOF 

[...truncated 98 lines...]
   [junit4] ERROR: JVM J7 ended with an exception, command line: 
/var/lib/jenkins/tools/hudson.model.JDK/Java_7_64bit_u65/jre/bin/java 
-Dtests.prefix=tests -Dtests.seed=547483E40389D7F8 -Xmx512M -Dtests.iters= 
-Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random 
-Dtests.postingsformat=random -Dtests.docvaluesformat=random 
-Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random 
-Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=5.0.0 
-Dtests.cleanthreads=perMethod 
-Djava.util.logging.config.file=/var/lib/jenkins/workspace/Lucene-trunk-Linux-Java7-64-test-only/checkout/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts.gracious=false -Dtests.multiplier=1 
-DtempDir=./temp -Djava.io.tmpdir=./temp 
-Djunit4.tempDir=/var/lib/jenkins/workspace/Lucene-trunk-Linux-Java7-64-test-only/checkout/lucene/build/core/test/temp
 
-Dclover.db.dir=/var/lib/jenkins/workspace/Lucene-trunk-Linux-Java7-64-test-only/checkout/lucene/build/clover/db
 -Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Djava.security.policy=/var/lib/jenkins/workspace/Lucene-trunk-Linux-Java7-64-test-only/checkout/lucene/tools/junit4/tests.policy
 -Dtests.LUCENE_VERSION=5.0.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Dtests.leaveTemporary=false -Dtests.filterstacks=true -classpath 

[jira] [Commented] (LUCENE-5897) performance bug (adversary) in StandardTokenizer

2014-08-21 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14105104#comment-14105104
 ] 

Steve Rowe commented on LUCENE-5897:


bq. So more generally, can we optimize the general case to also remove what 
appears to be a backtracking algo? I know JFlex is more general than what ICU 
offers, so it's like comparing apples and oranges, but I can't help but wonder...

Sorry, I don't know enough about how the automaton is constructed and run to 
know if this is possible.

 performance bug (adversary) in StandardTokenizer
 --

 Key: LUCENE-5897
 URL: https://issues.apache.org/jira/browse/LUCENE-5897
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir

 There seem to be some conditions (I don't know how rare or what conditions) 
 that cause StandardTokenizer to essentially hang on input: I haven't looked 
 hard yet, but as it's essentially a DFA I think something weird might be going 
 on.
 An easy way to reproduce is with 1MB of underscores, it will just hang 
 forever.
 {code}
   public void testWorthyAdversary() throws Exception {
 char buffer[] = new char[1024 * 1024];
 Arrays.fill(buffer, '_');
 int tokenCount = 0;
 Tokenizer ts = new StandardTokenizer();
 ts.setReader(new StringReader(new String(buffer)));
 ts.reset();
 while (ts.incrementToken()) {
   tokenCount++;
 }
 ts.end();
 ts.close();
 assertEquals(0, tokenCount);
   }
 {code} 



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5896) A few potential reproducibility issues

2014-08-21 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14105108#comment-14105108
 ] 

Dawid Weiss commented on LUCENE-5896:
-

bq. As things stand right now, we can at least write a test that does this...

This is a good idea if you make those methods hidden from general use (for 
example package-private, like you said). The drawback is it'd require an 
implementation of equals on the returned result, which may be a pain.
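The pitfall behind this issue can be illustrated with a toy model (plain Java, not the actual LuceneTestCase API; the names here are illustrative): a config builder that consults any shared random source is not reproducible from the seed the caller passes in, while one that draws only from the caller's Random is, and an equals-style comparison over the produced values is essentially the test discussed above.

```java
import java.util.Random;

// Toy illustration of the global-vs-local Random pitfall: only the variant
// that draws exclusively from the caller's seeded Random is reproducible.
public class SeedPitfall {
    static final Random GLOBAL = new Random(); // shared, unseeded state

    // Buggy pattern (modeled): mixes the caller's seeded Random with global
    // state, so the same seed does not guarantee the same "config" value.
    static int configValueBuggy(Random local) {
        return local.nextInt(100) * 100 + GLOBAL.nextInt(100);
    }

    // Fixed pattern: draws only from the caller's Random.
    static int configValueFixed(Random local) {
        return local.nextInt(100) * 100 + local.nextInt(100);
    }

    public static void main(String[] args) {
        // Same seed -> same value for the fixed variant.
        System.out.println(
            configValueFixed(new Random(42)) == configValueFixed(new Random(42))); // true
    }
}
```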

 A few potential reproducibility issues
 --

 Key: LUCENE-5896
 URL: https://issues.apache.org/jira/browse/LUCENE-5896
 Project: Lucene - Core
  Issue Type: Test
  Components: general/test
Affects Versions: 4.9
Reporter: Simon Willnauer
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5896.patch


 I realized that passing the same seeded Random instance to 
 LuceneTestCase#newIndexWriterConfig doesn't necessarily produce the same IWC, 
 and I found a bunch of issues in that class using the global random rather 
 than the local random. I went over the file to spot others, but we might need 
 to think about a more automated way to spot those...






[JENKINS] Lucene-Solr-4.x-Linux (64bit/jdk1.7.0_67) - Build # 10941 - Still Failing!

2014-08-21 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/10941/
Java: 64bit/jdk1.7.0_67 -XX:+UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 61618 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:474: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:63: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/build.xml:548: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/build.xml:523: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/common-build.xml:2446: 
Can't get https://issues.apache.org/jira/rest/api/2/project/SOLR to 
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/build/docs/changes/jiraVersionList.json

Total time: 113 minutes 8 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 64bit/jdk1.7.0_67 
-XX:+UseCompressedOops -XX:+UseParallelGC
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[jira] [Commented] (LUCENE-5897) performance bug (adversary) in StandardTokenizer

2014-08-21 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14105181#comment-14105181
 ] 

Steve Rowe commented on LUCENE-5897:


I removed the buffer expansion logic in {{StandardTokenizerImpl.zzRefill()}}, 
and the tokenizer still functions - as I had hoped, partial match searches are 
limited to the buffer size:

{code:java}
@@ -509,16 +509,6 @@
   zzStartRead = 0;
 }
 
-/* is the buffer big enough? */
-if (zzCurrentPos >= zzBuffer.length - zzFinalHighSurrogate) {
-  /* if not: blow it up */
-  char newBuffer[] = new char[zzBuffer.length*2];
-  System.arraycopy(zzBuffer, 0, newBuffer, 0, zzBuffer.length);
-  zzBuffer = newBuffer;
-  zzEndRead += zzFinalHighSurrogate;
-  zzFinalHighSurrogate = 0;
-}
-
 /* fill the buffer with new input */
 int requested = zzBuffer.length - zzEndRead;   
 int totalRead = 0;
{code}

and ran Robert's {{testWorthyAdversary()}} with the input length ranging from 
100k to 3.2M chars, and varying the buffer size from 4k chars (the default) to 
255, compared to the current implementation, where unlimited buffer expansion 
is allowed (NBE = no buffer expansion; times are in seconds; Oracle Java 
1.7.0_55; OS X 10.9.4):

||Input chars||current impl.||4k buff, NBE||2k buff, NBE||1k buff, NBE||255 buff, NBE||
|100k|29s|3s|1s|1s|1s|
|200k|136s|5s|3s|1s|1s|
|400k|547s|11s|5s|3s|1s|
|800k|2,272s|22s|11s|5s|1s|
|1,600k|9,000s (est.)|43s|23s|11s|3s|
|3,200k|40,000s (est.)|91s|43s|22s|6s|

I didn't actually run the test against the current implementation with 1.6M and 
3.2M input chars - the numbers above with (est.) after them are estimates - but 
for the ones I did measure, doubling the input length roughly quadruples the 
run time.

By contrast, when the buffer length is limited, doubling the input length only 
doubles the run time.

When the buffer length is limited, doubling the buffer length doubles the run 
time.
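This scaling can be sketched with a toy cost model (an assumption about the mechanism, not the actual JFlex scanner): if a longest-match scanner may read ahead to the end of the buffered input from every start position before backtracking, total character reads grow quadratically in the input length; capping the lookahead at the buffer size makes them linear in the input length and linear in the cap, matching the measurements above.

```java
// Toy cost model for longest-match scanning over a run of n non-token chars.
public class BacktrackCost {
    // Unbounded buffer: from each start position the scanner can read ahead
    // to the end of input before giving up -> about n^2/2 character reads.
    static long uncapped(int n) {
        long reads = 0;
        for (int start = 0; start < n; start++) reads += n - start;
        return reads;
    }

    // Bounded buffer: lookahead is limited to cap chars -> about n * cap reads.
    static long capped(int n, int cap) {
        long reads = 0;
        for (int start = 0; start < n; start++) reads += Math.min(cap, n - start);
        return reads;
    }

    public static void main(String[] args) {
        // Doubling n roughly quadruples the uncapped cost
        // but only doubles the capped one.
        System.out.println(uncapped(200_000) / (double) uncapped(100_000));
        System.out.println(capped(200_000, 4096) / (double) capped(100_000, 4096));
    }
}
```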

Based on this, I'd like to introduce a new max buffer size setter to 
StandardTokenizer, which defaults to the initial buffer size.  That way, by 
default buffer expansion is disabled, but can be re-enabled by setting a max 
buffer size larger than the initial buffer size.

I ran luceneutil's {{TestAnalyzerPerf}}, just testing {{StandardAnalyzer}} 
using enwiki-20130102-lines.txt, with unpatched trunk against trunk patched to 
disable buffer expansion, and with a buffer size of 255 (the default max token 
size), 5 runs each:

|| ||Million tokens/sec, trunk||Million tokens/sec, patched||
|run 1|7.162|7.020|
|run 2|7.079|7.245|
|run 3|7.381|7.200|
|run 4|7.352|7.192|
|run 5|7.160|7.169|
|mean|7.227|7.166|
|stddev|0.1323|0.08589|

These are pretty noisy, but comparing the best throughput numbers, the patched 
version has 1.8% lower throughput. 

Based on the above, I'd also like to:

# set the initial buffer size to the max token length
# when basing the initial buffer size on the max token length, don't go above 
1M or 2M chars, to guard against people specifying {{Integer.MAX_VALUE}} for 
the max token length
\\
\\
and from above: 
\\
\\
# add a max buffer size setter to StandardTokenizer, which defaults to the 
initial buffer size.
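The proposed defaults can be sketched as a small policy (class, method, and constant names here are hypothetical, not the actual StandardTokenizer API): the initial buffer size follows the max token length but is clamped, and expansion stays off until a caller raises the max buffer size above the initial size.

```java
// Hypothetical sketch of the buffer-sizing policy proposed above;
// names are illustrative, not the real StandardTokenizer API.
public class BufferPolicy {
    // Clamp so that maxTokenLength = Integer.MAX_VALUE cannot demand
    // a huge up-front allocation.
    static final int INITIAL_BUFFER_CAP = 1024 * 1024; // 1M chars

    static int initialBufferSize(int maxTokenLength) {
        return Math.min(maxTokenLength, INITIAL_BUFFER_CAP);
    }

    // maxBufferSize defaults to the initial size, so expansion is
    // disabled unless the caller explicitly raises it.
    static boolean mayExpand(int currentSize, int maxBufferSize) {
        return currentSize < maxBufferSize;
    }

    public static void main(String[] args) {
        int init = initialBufferSize(Integer.MAX_VALUE);
        System.out.println(init);                  // clamped to 1048576
        System.out.println(mayExpand(init, init)); // false: no expansion by default
    }
}
```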

 performance bug (adversary) in StandardTokenizer
 --

 Key: LUCENE-5897
 URL: https://issues.apache.org/jira/browse/LUCENE-5897
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir

 There seem to be some conditions (I don't know how rare or what conditions) 
 that cause StandardTokenizer to essentially hang on input: I haven't looked 
 hard yet, but as it's essentially a DFA I think something weird might be going 
 on.
 An easy way to reproduce is with 1MB of underscores, it will just hang 
 forever.
 {code}
   public void testWorthyAdversary() throws Exception {
 char buffer[] = new char[1024 * 1024];
 Arrays.fill(buffer, '_');
 int tokenCount = 0;
 Tokenizer ts = new StandardTokenizer();
 ts.setReader(new StringReader(new String(buffer)));
 ts.reset();
 while (ts.incrementToken()) {
   tokenCount++;
 }
 ts.end();
 ts.close();
 assertEquals(0, tokenCount);
   }
 {code} 






[jira] [Updated] (LUCENE-5889) AnalyzingInfixSuggester should expose commit()

2014-08-21 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated LUCENE-5889:
--

Attachment: LUCENE-5889.patch

Thanks for the review!

New patch which addresses all the inputs you provided.

 AnalyzingInfixSuggester should expose commit()
 --

 Key: LUCENE-5889
 URL: https://issues.apache.org/jira/browse/LUCENE-5889
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spellchecker
Reporter: Mike Sokolov
 Attachments: LUCENE-5889.patch, LUCENE-5889.patch


 There is no way short of close() for a user of AnalyzingInfixSuggester to 
 cause it to commit() its underlying index: only refresh() is provided.  But 
 callers might want to ensure the index is flushed to disk without closing.
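The gap can be modeled in a few lines (plain Java, not the real AnalyzingInfixSuggester internals; all names here are illustrative): close() is the only durable flush today, so the request is a commit() that flushes buffered state without invalidating the instance.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal model of the API gap: flushing to "disk" without releasing resources.
public class SuggesterModel {
    private final List<String> buffered = new ArrayList<>();
    private final List<String> onDisk = new ArrayList<>();
    private boolean closed = false;

    void add(String entry) { buffered.add(entry); }

    // The proposed operation: durably flush buffered entries, stay usable.
    void commit() {
        onDisk.addAll(buffered);
        buffered.clear();
    }

    // Previously the only way to flush: commit, then become unusable.
    void close() { commit(); closed = true; }

    int onDiskCount() { return onDisk.size(); }
    boolean isClosed() { return closed; }
}
```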






Re: FW: [jira] [Commented] (LUCENE-5886) current ecj-javadoc-lint crashes on SharedFSAutoReplicaFailoverUtilsTest.java

2014-08-21 Thread Balchandra Vaidya



Hi Uwe,

On 08/20/14 07:02 PM, Uwe Schindler wrote:

Hi Balchandra,

Thanks for the info! I installed JDK 8u20 and JDK 7u67 on the Jenkins machines 
(Linux, Windows, OSX).
In addition, on Linux, we now also test JDK 9 build 26.

Excellent..


Give me a note if we should also look at stuff like the 7u80 and 8u40 previews!

Yes, that is correct.


Thanks
Balchandra




Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de



-Original Message-
From: Balchandra Vaidya [mailto:balchandra.vai...@oracle.com]
Sent: Wednesday, August 20, 2014 1:42 PM
To: Uwe Schindler
Cc: dev@lucene.apache.org; rory.odonn...@oracle.com; 'Dalibor Topic'
Subject: Re: FW: [jira] [Commented] (LUCENE-5886) current ecj-javadoc-lint
crashes on SharedFSAutoReplicaFailoverUtilsTest.java


Hi Uwe,

As you might have already noticed, Java SE 8u20 has been released and is
available from
http://www.oracle.com/technetwork/java/javase/downloads/index.html

Thanks
Balchandra


On 08/18/14 11:48 AM, Balchandra Vaidya wrote:

Hi Uwe,

On 08/15/14 11:34 PM, Uwe Schindler wrote:

Hi Rory,

I opened https://issues.apache.org/jira/browse/LUCENE-5890 to track
any problems. I will install JDK 9 and update the other JDKs during
the next week.
Is there any release date for Java 8 update 20? If yes, I could
combine the updates, because it always causes downtime of virtual
machines.

8u20 is expected to be released soon.
http://openjdk.java.net/projects/jdk8u/releases/8u20.html

We will inform you as soon as the release goes out.


Kind regards,
Balchandra



Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de



-Original Message-
From: Rory O'Donnell Oracle, Dublin Ireland
[mailto:rory.odonn...@oracle.com]
Sent: Friday, August 15, 2014 7:20 PM
To: Uwe Schindler; 'Balchandra Vaidya'; 'Dalibor Topic'
Cc: dev@lucene.apache.org
Subject: Re: FW: [jira] [Commented] (LUCENE-5886) current
ecj-javadoc-lint crashes on
SharedFSAutoReplicaFailoverUtilsTest.java

Thanks for the update Uwe!
On 15/08/2014 17:49, Uwe Schindler wrote:

Hi Rory,

FYI, the JDK 9 b26 build seems to work now with Lucene. I have not
yet

completed the tests (no Solr up to now, only Lucene), so we might
add it as build JDK to the Policeman Jenkins server soon!

As you see in the attached issue mail

(https://issues.apache.org/jira/browse/LUCENE-5886), I will add Java 
9 support to our build files (and a new constant accessible from 
Lucene classes), so some conditionals in the Ant build work correctly 
(we do some checks only on specific JVMs).

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de
eMail: u...@thetaphi.de



--
Rgds,Rory O'Donnell
Quality Engineering Manager
Oracle EMEA , Dublin, Ireland





[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 1781 - Still Failing!

2014-08-21 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1781/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.schema.TestCloudSchemaless.testDistribSearch

Error Message:
Timeout occured while waiting response from server at: 
https://127.0.0.1:49222/se_gb/qc/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: https://127.0.0.1:49222/se_gb/qc/collection1
at 
__randomizedtesting.SeedInfo.seed([FA34DCD4045B30A6:7BD252CC7304509A]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:558)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
at 
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:68)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:54)
at 
org.apache.solr.schema.TestCloudSchemaless.doTest(TestCloudSchemaless.java:140)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 

[jira] [Issue Comment Deleted] (SOLR-4527) Atomic updates when running distributed seem broken.

2014-08-21 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-4527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

José Joaquín updated SOLR-4527:
---

Comment: was deleted

(was: The NullPointerException returned by a real time get when a collection 
with more than a shard is requested was fixed in 4.7.1 (maybe before). But it's 
happening again in version 4.9.)

 Atomic updates when running distributed seem broken.
 

 Key: SOLR-4527
 URL: https://issues.apache.org/jira/browse/SOLR-4527
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud, update
Affects Versions: 4.1
Reporter: mike st. john
 Fix For: 4.9, 5.0


 When using SolrCloud as a NoSQL solution, I've run into an issue where I've 
 sent some atomic updates and I'm receiving an error "missing required 
 field:", implying that this is an add instead of an update. When I add 
 distrib=false to the URL and send the doc to the index where it resides, the 
 update is applied.
 Possibly related... when I try to do a real-time get for the id, it's 
 throwing an NPE
  trace:java.lang.NullPointerException\n\tat 
 org.apache.solr.handler.component.RealTimeGetComponent.createSubRequests(RealTimeGetComponent.java:368)\n\tat
  
 org.apache.solr.handler.component.RealTimeGetComponent.distributedProcess(RealTimeGetComponent.java:325)\n\tat
  
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:244)\n\tat
  
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)\n\tat
  org.apache.solr.core.SolrCore.execute(SolrCore.java:1808)\n\tat 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:583)\n\tat
  
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:293)\n\tat
  
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)\n\tat
  
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)\n\tat
  
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)\n\tat
  
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)\n\tat
  
 org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)\n\tat
  
 org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)\n\tat
  
 org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:936)\n\tat
  
 org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)\n\tat
  
 org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)\n\tat
  
 org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1004)\n\tat
  
 org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589)\n\tat
  
 org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:310)\n\tat
  
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)\n\tat
  
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)\n\tat
  java.lang.Thread.run(Thread.java:679)\n,
 code:500}}
 The command will succeed if I use the URL the doc exists on and add 
 distrib=false to the end.






[JENKINS] Lucene-trunk-Linux-java7-64-analyzers - Build # 13167 - Failure!

2014-08-21 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-java7-64-analyzers/13167/

1 tests failed.
REGRESSION:  
org.apache.lucene.analysis.miscellaneous.TestWordDelimiterFilter.testRandomHugeStrings

Error Message:
some thread(s) failed

Stack Trace:
java.lang.RuntimeException: some thread(s) failed
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:535)
at 
org.apache.lucene.analysis.miscellaneous.TestWordDelimiterFilter.testRandomHugeStrings(TestWordDelimiterFilter.java:383)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 1087 lines...]
   [junit4] Suite: 
org.apache.lucene.analysis.miscellaneous.TestWordDelimiterFilter
   [junit4]   2 aug 21, 2014 2:16:16 PM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
   [junit4]   2 WARNING: Uncaught exception in thread: 
Thread[Thread-70,5,TGRP-TestWordDelimiterFilter]
   [junit4]   2 java.lang.OutOfMemoryError: GC overhead limit exceeded
   [junit4]   2 
   [junit4]   2 aug 21, 2014 2:16:18 PM 

[jira] [Commented] (LUCENE-5894) refactor bulk merge logic

2014-08-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14105371#comment-14105371
 ] 

ASF subversion and git services commented on LUCENE-5894:
-

Commit 1619392 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1619392 ]

LUCENE-5894: refactor bulk merge logic

 refactor bulk merge logic
 -

 Key: LUCENE-5894
 URL: https://issues.apache.org/jira/browse/LUCENE-5894
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-5894.patch


 Today its only usable really by stored fields/term vectors, has hardcoded 
 logic in SegmentMerger specific to certain impls, etc.
 It would be better if this was generalized to terms/postings/norms/docvalues 
 as well.
 Bulk merge is boring, the real idea is to allow codecs to do more: e.g. with 
 this patch they could do streaming checksum validation, or prevent the 
 loading of latent norms, or other things we cannot do today.






[jira] [Commented] (SOLR-6314) Multi-threaded facet counts differ when SolrCloud has 1 shard

2014-08-21 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14105395#comment-14105395
 ] 

Mark Miller commented on SOLR-6314:
---

bq. Multi-threaded facet counts differ when SolrCloud has 1 shard

This seems like the wrong title for the JIRA and especially the wrong title in 
CHANGES. Makes it sound like a severe bug rather than what I'm reading above.

 Multi-threaded facet counts differ when SolrCloud has 1 shard
 --

 Key: SOLR-6314
 URL: https://issues.apache.org/jira/browse/SOLR-6314
 Project: Solr
  Issue Type: Bug
  Components: SearchComponents - other, SolrCloud
Affects Versions: 5.0
Reporter: Vamsee Yarlagadda
Assignee: Erick Erickson
 Fix For: 5.0, 4.10

 Attachments: SOLR-6314.patch, SOLR-6314.patch, SOLR-6314.patch


 I am trying to work with multi-threaded faceting on SolrCloud, and in the 
 process I was hit by some issues.
 I am currently running the below upstream test on different SolrCloud 
 configurations, and I am getting a different result set per configuration.
 https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/test/org/apache/solr/request/TestFaceting.java#L654
 Setup:
 - *Indexed 50 docs into SolrCloud.*
 - *If the SolrCloud has only 1 shard, the facet field query has the below 
 output (which matches with the expected upstream test output - # facet fields 
 ~ 50).*
 {code}
 $ curl 
 "http://localhost:8983/solr/collection1/select?facet=true&fl=id&indent=true&q=id%3A*&facet.limit=-1&facet.threads=1000&facet.field=f0_ws&facet.field=f0_ws&facet.field=f0_ws&facet.field=f0_ws&facet.field=f0_ws&facet.field=f1_ws&facet.field=f1_ws&facet.field=f1_ws&facet.field=f1_ws&facet.field=f1_ws&facet.field=f2_ws&facet.field=f2_ws&facet.field=f2_ws&facet.field=f2_ws&facet.field=f2_ws&facet.field=f3_ws&facet.field=f3_ws&facet.field=f3_ws&facet.field=f3_ws&facet.field=f3_ws&facet.field=f4_ws&facet.field=f4_ws&facet.field=f4_ws&facet.field=f4_ws&facet.field=f4_ws&facet.field=f5_ws&facet.field=f5_ws&facet.field=f5_ws&facet.field=f5_ws&facet.field=f5_ws&facet.field=f6_ws&facet.field=f6_ws&facet.field=f6_ws&facet.field=f6_ws&facet.field=f6_ws&facet.field=f7_ws&facet.field=f7_ws&facet.field=f7_ws&facet.field=f7_ws&facet.field=f7_ws&facet.field=f8_ws&facet.field=f8_ws&facet.field=f8_ws&facet.field=f8_ws&facet.field=f8_ws&facet.field=f9_ws&facet.field=f9_ws&facet.field=f9_ws&facet.field=f9_ws&facet.field=f9_ws&rows=1&wt=xml"
 <?xml version="1.0" encoding="UTF-8"?>
 <response>
 <lst name="responseHeader">
   <int name="status">0</int>
   <int name="QTime">21</int>
   <lst name="params">
     <str name="facet">true</str>
     <str name="fl">id</str>
     <str name="indent">true</str>
     <str name="q">id:*</str>
     <str name="facet.limit">-1</str>
     <str name="facet.threads">1000</str>
     <arr name="facet.field">
       <str>f0_ws</str>
       <str>f0_ws</str>
       <str>f0_ws</str>
       <str>f0_ws</str>
       <str>f0_ws</str>
       <str>f1_ws</str>
       <str>f1_ws</str>
       <str>f1_ws</str>
       <str>f1_ws</str>
       <str>f1_ws</str>
       <str>f2_ws</str>
       <str>f2_ws</str>
       <str>f2_ws</str>
       <str>f2_ws</str>
       <str>f2_ws</str>
       <str>f3_ws</str>
       <str>f3_ws</str>
       <str>f3_ws</str>
       <str>f3_ws</str>
       <str>f3_ws</str>
       <str>f4_ws</str>
       <str>f4_ws</str>
       <str>f4_ws</str>
       <str>f4_ws</str>
       <str>f4_ws</str>
       <str>f5_ws</str>
       <str>f5_ws</str>
       <str>f5_ws</str>
       <str>f5_ws</str>
       <str>f5_ws</str>
       <str>f6_ws</str>
       <str>f6_ws</str>
       <str>f6_ws</str>
       <str>f6_ws</str>
       <str>f6_ws</str>
       <str>f7_ws</str>
       <str>f7_ws</str>
       <str>f7_ws</str>
       <str>f7_ws</str>
       <str>f7_ws</str>
       <str>f8_ws</str>
       <str>f8_ws</str>
       <str>f8_ws</str>
       <str>f8_ws</str>
       <str>f8_ws</str>
       <str>f9_ws</str>
       <str>f9_ws</str>
       <str>f9_ws</str>
       <str>f9_ws</str>
       <str>f9_ws</str>
     </arr>
     <str name="wt">xml</str>
     <str name="rows">1</str>
   </lst>
 </lst>
 <result name="response" numFound="50" start="0">
   <doc>
     <float name="id">0.0</float></doc>
 </result>
 <lst name="facet_counts">
   <lst name="facet_queries"/>
   <lst name="facet_fields">
     <lst name="f0_ws">
       <int name="zero_1">25</int>
       <int name="zero_2">25</int>
     </lst>
     <lst name="f0_ws">
       <int name="zero_1">25</int>
       <int name="zero_2">25</int>
     </lst>
     <lst name="f0_ws">
       <int name="zero_1">25</int>
       <int name="zero_2">25</int>
     </lst>
     <lst name="f0_ws">
       <int name="zero_1">25</int>
       <int name="zero_2">25</int>
     </lst>
     <lst name="f0_ws">
       <int name="zero_1">25</int>
       <int name="zero_2">25</int>
     </lst>
     <lst name="f1_ws">
       <int name="one_1">33</int>
       <int name="one_3">17</int>
     </lst>
     <lst name="f1_ws">
       <int name="one_1">33</int>
       <int name="one_3">17</int>
     </lst>
     <lst name="f1_ws">
       <int name="one_1">33</int>
       <int name="one_3">17</int>
     </lst>
     <lst name="f1_ws">
       <int name="one_1">33</int>
       <int name="one_3">17</int>
     </lst>
     <lst name="f1_ws">
       <int 

[jira] [Commented] (SOLR-6314) Multi-threaded facet counts differ when SolrCloud has 1 shard

2014-08-21 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14105397#comment-14105397
 ] 

Mark Miller commented on SOLR-6314:
---

bq. SOLR-6314: Multi-threaded facet counts differ when SolrCloud has 1 shard 
(Erick Erickson)

Also missing credit for Vamsee in CHANGES.

 Multi-threaded facet counts differ when SolrCloud has 1 shard
 --

 Key: SOLR-6314
 URL: https://issues.apache.org/jira/browse/SOLR-6314
 Project: Solr
  Issue Type: Bug
  Components: SearchComponents - other, SolrCloud
Affects Versions: 5.0
Reporter: Vamsee Yarlagadda
Assignee: Erick Erickson
 Fix For: 5.0, 4.10

 Attachments: SOLR-6314.patch, SOLR-6314.patch, SOLR-6314.patch


 I am trying to work with multi-threaded faceting on SolrCloud, and in the 
 process I was hit by some issues.
 I am currently running the upstream test below on different SolrCloud 
 configurations, and I am getting a different result set per configuration.
 https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/test/org/apache/solr/request/TestFaceting.java#L654
 Setup:
 - *Indexed 50 docs into SolrCloud.*
 - *If the SolrCloud has only 1 shard, the facet field query has the below 
 output (which matches with the expected upstream test output - # facet fields 
 ~ 50).*
 {code}
 $ curl 
 "http://localhost:8983/solr/collection1/select?facet=true&fl=id&indent=true&q=id%3A*&facet.limit=-1&facet.threads=1000&facet.field=f0_ws&facet.field=f0_ws&facet.field=f0_ws&facet.field=f0_ws&facet.field=f0_ws&facet.field=f1_ws&facet.field=f1_ws&facet.field=f1_ws&facet.field=f1_ws&facet.field=f1_ws&facet.field=f2_ws&facet.field=f2_ws&facet.field=f2_ws&facet.field=f2_ws&facet.field=f2_ws&facet.field=f3_ws&facet.field=f3_ws&facet.field=f3_ws&facet.field=f3_ws&facet.field=f3_ws&facet.field=f4_ws&facet.field=f4_ws&facet.field=f4_ws&facet.field=f4_ws&facet.field=f4_ws&facet.field=f5_ws&facet.field=f5_ws&facet.field=f5_ws&facet.field=f5_ws&facet.field=f5_ws&facet.field=f6_ws&facet.field=f6_ws&facet.field=f6_ws&facet.field=f6_ws&facet.field=f6_ws&facet.field=f7_ws&facet.field=f7_ws&facet.field=f7_ws&facet.field=f7_ws&facet.field=f7_ws&facet.field=f8_ws&facet.field=f8_ws&facet.field=f8_ws&facet.field=f8_ws&facet.field=f8_ws&facet.field=f9_ws&facet.field=f9_ws&facet.field=f9_ws&facet.field=f9_ws&facet.field=f9_ws&rows=1&wt=xml"
 <?xml version="1.0" encoding="UTF-8"?>
 <response>
 <lst name="responseHeader">
   <int name="status">0</int>
   <int name="QTime">21</int>
   <lst name="params">
     <str name="facet">true</str>
     <str name="fl">id</str>
     <str name="indent">true</str>
     <str name="q">id:*</str>
     <str name="facet.limit">-1</str>
     <str name="facet.threads">1000</str>
     <arr name="facet.field">
       <str>f0_ws</str>
       <str>f0_ws</str>
       <str>f0_ws</str>
       <str>f0_ws</str>
       <str>f0_ws</str>
       <str>f1_ws</str>
       <str>f1_ws</str>
       <str>f1_ws</str>
       <str>f1_ws</str>
       <str>f1_ws</str>
       <str>f2_ws</str>
       <str>f2_ws</str>
       <str>f2_ws</str>
       <str>f2_ws</str>
       <str>f2_ws</str>
       <str>f3_ws</str>
       <str>f3_ws</str>
       <str>f3_ws</str>
       <str>f3_ws</str>
       <str>f3_ws</str>
       <str>f4_ws</str>
       <str>f4_ws</str>
       <str>f4_ws</str>
       <str>f4_ws</str>
       <str>f4_ws</str>
       <str>f5_ws</str>
       <str>f5_ws</str>
       <str>f5_ws</str>
       <str>f5_ws</str>
       <str>f5_ws</str>
       <str>f6_ws</str>
       <str>f6_ws</str>
       <str>f6_ws</str>
       <str>f6_ws</str>
       <str>f6_ws</str>
       <str>f7_ws</str>
       <str>f7_ws</str>
       <str>f7_ws</str>
       <str>f7_ws</str>
       <str>f7_ws</str>
       <str>f8_ws</str>
       <str>f8_ws</str>
       <str>f8_ws</str>
       <str>f8_ws</str>
       <str>f8_ws</str>
       <str>f9_ws</str>
       <str>f9_ws</str>
       <str>f9_ws</str>
       <str>f9_ws</str>
       <str>f9_ws</str>
     </arr>
     <str name="wt">xml</str>
     <str name="rows">1</str>
   </lst>
 </lst>
 <result name="response" numFound="50" start="0">
   <doc>
     <float name="id">0.0</float></doc>
 </result>
 <lst name="facet_counts">
   <lst name="facet_queries"/>
   <lst name="facet_fields">
     <lst name="f0_ws">
       <int name="zero_1">25</int>
       <int name="zero_2">25</int>
     </lst>
     <lst name="f0_ws">
       <int name="zero_1">25</int>
       <int name="zero_2">25</int>
     </lst>
     <lst name="f0_ws">
       <int name="zero_1">25</int>
       <int name="zero_2">25</int>
     </lst>
     <lst name="f0_ws">
       <int name="zero_1">25</int>
       <int name="zero_2">25</int>
     </lst>
     <lst name="f0_ws">
       <int name="zero_1">25</int>
       <int name="zero_2">25</int>
     </lst>
     <lst name="f1_ws">
       <int name="one_1">33</int>
       <int name="one_3">17</int>
     </lst>
     <lst name="f1_ws">
       <int name="one_1">33</int>
       <int name="one_3">17</int>
     </lst>
     <lst name="f1_ws">
       <int name="one_1">33</int>
       <int name="one_3">17</int>
     </lst>
     <lst name="f1_ws">
       <int name="one_1">33</int>
       <int name="one_3">17</int>
     </lst>
     <lst name="f1_ws">
       <int name="one_1">33</int>
       <int name="one_3">17</int>
     </lst>
     <lst name="f2_ws">
       <int 

[jira] [Commented] (SOLR-6268) HdfsUpdateLog has a race condition that can expose a closed HDFS FileSystem instance and should close it's FileSystem instance if either inherited close method is called

2014-08-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14105399#comment-14105399
 ] 

ASF subversion and git services commented on SOLR-6268:
---

Commit 1619402 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1619402 ]

SOLR-6268: HdfsUpdateLog has a race condition that can expose a closed HDFS 
FileSystem instance and should close it's FileSystem instance if either 
inherited close method is called.

 HdfsUpdateLog has a race condition that can expose a closed HDFS FileSystem 
 instance and should close it's FileSystem instance if either inherited close 
 method is called.
 --

 Key: SOLR-6268
 URL: https://issues.apache.org/jira/browse/SOLR-6268
 Project: Solr
  Issue Type: Bug
  Components: hdfs
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.10
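The race described above, one thread closing a shared FileSystem while another path can still hand it out, is the classic lazy-init/close race. A minimal sketch of the guarded pattern the fix implies (hypothetical names, not Solr's actual HdfsUpdateLog code):

```java
// Hypothetical sketch: hand out and close the shared resource under one
// lock, and make close() idempotent so that either inherited close path
// can call it safely without exposing an already-closed instance.
public class GuardedResource<T> {
    private T resource;          // lazily created
    private boolean closed;
    private final java.util.function.Supplier<T> factory;

    public GuardedResource(java.util.function.Supplier<T> factory) {
        this.factory = factory;
    }

    // Never exposes the resource after close() has run.
    public synchronized T get() {
        if (closed) throw new IllegalStateException("already closed");
        if (resource == null) resource = factory.get();
        return resource;
    }

    // Safe to call from either close path; runs the real close once.
    public synchronized void close(java.util.function.Consumer<T> closer) {
        if (closed) return;
        closed = true;
        if (resource != null) closer.accept(resource);
        resource = null;
    }
}
```

Without the shared lock, a caller can observe the resource between the close check and the real close, which is exactly the window the issue title describes.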









[jira] [Commented] (SOLR-6268) HdfsUpdateLog has a race condition that can expose a closed HDFS FileSystem instance and should close it's FileSystem instance if either inherited close method is called

2014-08-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14105401#comment-14105401
 ] 

ASF subversion and git services commented on SOLR-6268:
---

Commit 1619404 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1619404 ]

SOLR-6268: HdfsUpdateLog has a race condition that can expose a closed HDFS 
FileSystem instance and should close it's FileSystem instance if either 
inherited close method is called.

 HdfsUpdateLog has a race condition that can expose a closed HDFS FileSystem 
 instance and should close it's FileSystem instance if either inherited close 
 method is called.
 --

 Key: SOLR-6268
 URL: https://issues.apache.org/jira/browse/SOLR-6268
 Project: Solr
  Issue Type: Bug
  Components: hdfs
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.10









[JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 93025 - Failure!

2014-08-21 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/93025/

1 tests failed.
REGRESSION:  org.apache.lucene.index.TestCodecs.testDisableImpersonation

Error Message:
this codec can only be used for reading

Stack Trace:
java.lang.UnsupportedOperationException: this codec can only be used for reading
at 
__randomizedtesting.SeedInfo.seed([2FCBBCB5F0F2EECC:5949E1CD11ACE021]:0)
at 
org.apache.lucene.codecs.lucene40.Lucene40RWStoredFieldsFormat.fieldsWriter(Lucene40RWStoredFieldsFormat.java:36)
at 
org.apache.lucene.index.DefaultIndexingChain.initStoredFieldsWriter(DefaultIndexingChain.java:84)
at 
org.apache.lucene.index.DefaultIndexingChain.startStoredFields(DefaultIndexingChain.java:254)
at 
org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:337)
at 
org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:241)
at 
org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:454)
at 
org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1390)
at 
org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1105)
at 
org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1086)
at 
org.apache.lucene.index.TestCodecs.testDisableImpersonation(TestCodecs.java:863)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 

[jira] [Resolved] (SOLR-6268) HdfsUpdateLog has a race condition that can expose a closed HDFS FileSystem instance and should close it's FileSystem instance if either inherited close method is called.

2014-08-21 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-6268.
---

Resolution: Fixed

 HdfsUpdateLog has a race condition that can expose a closed HDFS FileSystem 
 instance and should close it's FileSystem instance if either inherited close 
 method is called.
 --

 Key: SOLR-6268
 URL: https://issues.apache.org/jira/browse/SOLR-6268
 Project: Solr
  Issue Type: Bug
  Components: hdfs
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.10









[jira] [Resolved] (SOLR-5656) Add autoAddReplicas feature for shared file systems.

2014-08-21 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-5656.
---

   Resolution: Fixed
Fix Version/s: 4.10
   5.0

 Add autoAddReplicas feature for shared file systems.
 

 Key: SOLR-5656
 URL: https://issues.apache.org/jira/browse/SOLR-5656
 Project: Solr
  Issue Type: New Feature
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.10

 Attachments: SOLR-5656.patch, SOLR-5656.patch, SOLR-5656.patch, 
 SOLR-5656.patch


 When using HDFS, the Overseer should have the ability to reassign the cores 
 from failed nodes to running nodes.
 Given that the index and transaction logs are in HDFS, it's simple for 
 surviving hardware to take over serving cores for failed hardware.
 There are some tricky issues around having the Overseer handle this for you, 
 but it seems a simple first pass is not too difficult.
 This will add another alternative to replicating with both HDFS and Solr.
 It shouldn't be specific to HDFS, and would be an option for any shared file 
 system Solr supports.
 https://reviews.apache.org/r/23371/






[jira] [Commented] (LUCENE-5736) Separate the classifiers to online and caching where possible

2014-08-21 Thread Tommaso Teofili (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14105430#comment-14105430
 ] 

Tommaso Teofili commented on LUCENE-5736:
-

I have a doubt about the CachingNaiveBayesClassifier#reInitCache method: the 
_termList List_ is populated but never used. Either it is useless and can be 
removed, or it was ignored by mistake and should be properly used. Which is 
it? (To me the first seems most likely, as there is already the 
_frequencyMap_ object.)


 Separate the classifiers to online and caching where possible
 -

 Key: LUCENE-5736
 URL: https://issues.apache.org/jira/browse/LUCENE-5736
 Project: Lucene - Core
  Issue Type: Sub-task
  Components: modules/classification
Reporter: Gergő Törcsvári
Assignee: Tommaso Teofili
 Attachments: 0803-caching.patch, 0810-caching.patch, 
 CachingNaiveBayesClassifier.java


 The Lucene classifier implementations are now near-online if they get a near-
 real-time reader. That is good for users who have a continuously changing 
 dataset, but slow for datasets that do not change.
 The idea is: what if we implement a cache and speed up the results where 
 possible?
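The caching idea can be shown with a toy sketch (hypothetical, not the attached patch): compute term frequencies once against an unchanging dataset and reuse them across classification calls, invalidating only when the reader changes.

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of the caching idea: frequencies are computed once per
// (unchanging) dataset and reused across classification calls.
public class CachedFrequencies {
    private final Map<String, Integer> frequencyMap = new HashMap<>();
    private boolean cached = false;
    private int recomputations = 0;   // exposed only to observe the sketch

    public Map<String, Integer> frequencies(Iterable<String> terms) {
        if (!cached) {                // pay the counting cost only once
            recomputations++;
            for (String t : terms) frequencyMap.merge(t, 1, Integer::sum);
            cached = true;
        }
        return frequencyMap;
    }

    // Call when the underlying reader changed (the "online" case).
    public void invalidate() { frequencyMap.clear(); cached = false; }

    public int recomputations() { return recomputations; }
}
```

The trade-off is exactly the one the description names: an invalidated cache degrades to the online behavior, while a static dataset amortizes the counting across every classify call.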






[jira] [Commented] (SOLR-6089) When using the HDFS block cache, when a file is deleted, it's underlying data entries in the block cache are not removed, which is a problem with the global block cache

2014-08-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14105441#comment-14105441
 ] 

ASF subversion and git services commented on SOLR-6089:
---

Commit 1619427 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1619427 ]

SOLR-6089: When using the HDFS block cache, when a file is deleted, it's 
underlying data entries in the block cache are not removed, which is a problem 
with the global block cache option.

 When using the HDFS block cache, when a file is deleted, it's underlying data 
 entries in the block cache are not removed, which is a problem with the 
 global block cache option.
 

 Key: SOLR-6089
 URL: https://issues.apache.org/jira/browse/SOLR-6089
 Project: Solr
  Issue Type: Bug
  Components: hdfs
Reporter: Mark Miller
Assignee: Mark Miller
 Attachments: SOLR-6089.patch


 Patrick Hunt noticed this. Without the global block cache, the block cache 
 was not reused after a directory was closed. Now that it is reused when using 
 the global cache, leaving the underlying entries presents a problem if that 
 directory is created again because blocks from the previous directory may be 
 read. This could happen when you remove a solrcore and recreate it with the 
 same data directory (or a collection with the same name). I could only 
 reproduce it easily using index merges (core admin) with the sequence: merge 
 index, delete collection, create collection, merge index. Reads on the final 
 merged index can look corrupt or queries may just return no results.
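The bug pattern is easy to reproduce in isolation (hypothetical names, not Solr's actual block cache classes): a global cache keyed by (file, block) must purge a file's entries when the file is deleted, or a file recreated under the same name can read the previous file's blocks.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the stale-entry problem: a shared cache keyed by
// (fileName, blockId) must drop a file's blocks when the file is deleted.
public class FileBlockCache {
    private final Map<String, Map<Long, byte[]>> blocksByFile = new HashMap<>();

    public void put(String file, long block, byte[] data) {
        blocksByFile.computeIfAbsent(file, f -> new HashMap<>()).put(block, data);
    }

    public byte[] get(String file, long block) {
        Map<Long, byte[]> blocks = blocksByFile.get(file);
        return blocks == null ? null : blocks.get(block);
    }

    // Without this purge, a file recreated under the same name would
    // see blocks cached for the previous, deleted file -- the symptom
    // described above (corrupt-looking reads, empty query results).
    public void onDelete(String file) {
        blocksByFile.remove(file);
    }
}
```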






[jira] [Commented] (SOLR-6089) When using the HDFS block cache, when a file is deleted, it's underlying data entries in the block cache are not removed, which is a problem with the global block cache

2014-08-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14105444#comment-14105444
 ] 

ASF subversion and git services commented on SOLR-6089:
---

Commit 1619431 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1619431 ]

SOLR-6089: When using the HDFS block cache, when a file is deleted, it's 
underlying data entries in the block cache are not removed, which is a problem 
with the global block cache option.

 When using the HDFS block cache, when a file is deleted, it's underlying data 
 entries in the block cache are not removed, which is a problem with the 
 global block cache option.
 

 Key: SOLR-6089
 URL: https://issues.apache.org/jira/browse/SOLR-6089
 Project: Solr
  Issue Type: Bug
  Components: hdfs
Reporter: Mark Miller
Assignee: Mark Miller
 Attachments: SOLR-6089.patch


 Patrick Hunt noticed this. Without the global block cache, the block cache 
 was not reused after a directory was closed. Now that it is reused when using 
 the global cache, leaving the underlying entries presents a problem if that 
 directory is created again because blocks from the previous directory may be 
 read. This could happen when you remove a solrcore and recreate it with the 
 same data directory (or a collection with the same name). I could only 
 reproduce it easily using index merges (core admin) with the sequence: merge 
 index, delete collection, create collection, merge index. Reads on the final 
 merged index can look corrupt or queries may just return no results.






[jira] [Commented] (LUCENE-5736) Separate the classifiers to online and caching where possible

2014-08-21 Thread JIRA

[ 
https://issues.apache.org/jira/browse/LUCENE-5736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14105446#comment-14105446
 ] 

Gergő Törcsvári commented on LUCENE-5736:
-

Yes, I remember now. It was used for iterating through the frequencyMap, but 
I started to refactor that for-cycle to use Map.Entry, and I mistakenly 
left the termList in.

 Separate the classifiers to online and caching where possible
 -

 Key: LUCENE-5736
 URL: https://issues.apache.org/jira/browse/LUCENE-5736
 Project: Lucene - Core
  Issue Type: Sub-task
  Components: modules/classification
Reporter: Gergő Törcsvári
Assignee: Tommaso Teofili
 Attachments: 0803-caching.patch, 0810-caching.patch, 
 CachingNaiveBayesClassifier.java


 The Lucene classifier implementations are now near-online if they get a near-
 real-time reader. That is good for users who have a continuously changing 
 dataset, but slow for datasets that do not change.
 The idea is: what if we implement a cache and speed up the results where 
 possible?






[jira] [Resolved] (SOLR-6089) When using the HDFS block cache, when a file is deleted, it's underlying data entries in the block cache are not removed, which is a problem with the global block cache o

2014-08-21 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-6089.
---

   Resolution: Fixed
Fix Version/s: 4.10
   5.0

 When using the HDFS block cache, when a file is deleted, it's underlying data 
 entries in the block cache are not removed, which is a problem with the 
 global block cache option.
 

 Key: SOLR-6089
 URL: https://issues.apache.org/jira/browse/SOLR-6089
 Project: Solr
  Issue Type: Bug
  Components: hdfs
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.10

 Attachments: SOLR-6089.patch


 Patrick Hunt noticed this. Without the global block cache, the block cache 
 was not reused after a directory was closed. Now that it is reused when using 
 the global cache, leaving the underlying entries presents a problem if that 
 directory is created again because blocks from the previous directory may be 
 read. This could happen when you remove a solrcore and recreate it with the 
 same data directory (or a collection with the same name). I could only 
 reproduce it easily using index merges (core admin) with the sequence: merge 
 index, delete collection, create collection, merge index. Reads on the final 
 merged index can look corrupt or queries may just return no results.






[jira] [Commented] (LUCENE-5894) refactor bulk merge logic

2014-08-21 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14105483#comment-14105483
 ] 

Robert Muir commented on LUCENE-5894:
-

This change is good to backport, but I would prefer it not go into 4.10 at 
the last minute.

[~rjernst] would you be ok with creating release branch soon?

 refactor bulk merge logic
 -

 Key: LUCENE-5894
 URL: https://issues.apache.org/jira/browse/LUCENE-5894
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-5894.patch


 Today it is really only usable by stored fields/term vectors, has hardcoded 
 logic in SegmentMerger specific to certain impls, etc.
 It would be better if this were generalized to terms/postings/norms/docvalues 
 as well.
 Bulk merge is boring; the real idea is to allow codecs to do more: e.g. with 
 this patch they could do streaming checksum validation, prevent the 
 loading of latent norms, or other things we cannot do today.






[jira] [Commented] (SOLR-3617) Consider adding start scripts.

2014-08-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14105489#comment-14105489
 ] 

ASF subversion and git services commented on SOLR-3617:
---

Commit 1619461 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1619461 ]

SOLR-3617: clean-up a few error messages and update changes to add to 4.10 
release

 Consider adding start scripts.
 --

 Key: SOLR-3617
 URL: https://issues.apache.org/jira/browse/SOLR-3617
 Project: Solr
  Issue Type: New Feature
Reporter: Mark Miller
Assignee: Timothy Potter
 Attachments: SOLR-3617.patch, SOLR-3617.patch, SOLR-3617.patch, 
 SOLR-3617.patch, SOLR-3617.patch


 I've always found that starting Solr with java -jar start.jar is a little odd 
 if you are not a java guy, but I think there are bigger pros than looking 
 less odd in shipping some start scripts.
 Not only do you get a cleaner start command:
 sh solr.sh or solr.bat or something
 But you also can do a couple other little nice things:
 * it becomes fairly obvious for a new casual user to see how to start the 
 system without reading doc.
 * you can make the working dir the location of the script - this lets you 
 call the start script from another dir and still have all the relative dir 
 setup work.
 * have an out of the box place to save startup params like -Xmx.
 * we could have multiple start scripts - say solr-dev.sh that logged to the 
 console and default to sys default for RAM - and also solr-prod which was 
 fully configured for logging, pegged Xms and Xmx at some larger value (1GB?) 
 etc.
 You would still of course be able to run the java command directly - and that 
 is probably what you would do when it's time to run as a service - but these 
 could be good starter scripts to get people on the right track and improve 
 the initial user experience.






[jira] [Commented] (LUCENE-5859) Remove Version from Analyzer constructors

2014-08-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14105508#comment-14105508
 ] 

ASF subversion and git services commented on LUCENE-5859:
-

Commit 1619466 from [~rjernst] in branch 'dev/trunk'
[ https://svn.apache.org/r1619466 ]

LUCENE-5859: Update changes entry to reflect backport

 Remove Version from Analyzer constructors
 -

 Key: LUCENE-5859
 URL: https://issues.apache.org/jira/browse/LUCENE-5859
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
Assignee: Ryan Ernst
 Fix For: 5.0

 Attachments: LUCENE-5859.patch, LUCENE-5859_dead_code.patch


 This has always been a mess: analyzers are easy enough to make on your own; 
 we don't need to take responsibility for the user's analysis chain for 2 
 major releases.
 The code maintenance is horrible here.
 This creates a huge usability issue too, and as seen from numerous mailing 
 list issues, users don't even understand how this versioning works anyway.
 I'm sure someone will whine if I try to remove these constants, but we can at 
 least make no-arg ctors forwarding to VERSION_CURRENT so that people who 
 don't care about back compat (e.g. just prototyping) don't have to deal with 
 the horribly complex versioning system.
 If you want to make the argument that doing this is trappy (I have heard this 
 before), I think that's bogus, and I'll counter by trying to remove them. 
 Either way, I'm personally not going to add any of this kind of back-compat 
 logic myself ever again.
 Updated: the description of the issue has been updated as expected. We should 
 remove this API completely. No one else on the planet has APIs that require a 
 mandatory version parameter.
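The "no-arg ctors forwarding to VERSION_CURRENT" proposal amounts to the following schematic sketch (MyAnalyzer and the nested Version enum are stand-ins for illustration, not the real Lucene classes):

```java
// Schematic sketch of the proposal: keep the Version ctor for users who
// care about back compat, and add a no-arg ctor that forwards to the
// current version. MyAnalyzer and Version are illustrative stand-ins.
public class MyAnalyzer {
    enum Version {
        LUCENE_4_9, LUCENE_4_10;
        static final Version CURRENT = LUCENE_4_10;
    }

    private final Version matchVersion;

    // Existing ctor: explicit version for back-compat-sensitive users.
    public MyAnalyzer(Version matchVersion) {
        this.matchVersion = matchVersion;
    }

    // Proposed convenience ctor: prototyping users skip versioning entirely.
    public MyAnalyzer() {
        this(Version.CURRENT);
    }

    public Version matchVersion() {
        return matchVersion;
    }
}
```

Users who never pass a version get current behavior by default, while the explicit ctor keeps old-index compatibility reachable for those who need it.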






[jira] [Commented] (LUCENE-5859) Remove Version from Analyzer constructors

2014-08-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14105513#comment-14105513
 ] 

ASF subversion and git services commented on LUCENE-5859:
-

Commit 1619467 from [~rjernst] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1619467 ]

LUCENE-5859: Update changes entry to reflect backport




[jira] [Commented] (LUCENE-5859) Remove Version from Analyzer constructors

2014-08-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14105517#comment-14105517
 ] 

ASF subversion and git services commented on LUCENE-5859:
-

Commit 1619468 from [~rjernst] in branch 'dev/branches/lucene_solr_4_10'
[ https://svn.apache.org/r1619468 ]

LUCENE-5859: Update changes entry to reflect backport




[JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4808 - Still Failing

2014-08-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4808/

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandlerBackup

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.handler.TestReplicationHandlerBackup: 1) Thread[id=18419, 
name=Thread-2467, state=RUNNABLE, group=TGRP-TestReplicationHandlerBackup]  
   at java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
 at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
 at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)  
   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at 
java.net.Socket.connect(Socket.java:579) at 
java.net.Socket.connect(Socket.java:528) at 
sun.net.NetworkClient.doConnect(NetworkClient.java:180) at 
sun.net.www.http.HttpClient.openServer(HttpClient.java:432) at 
sun.net.www.http.HttpClient.openServer(HttpClient.java:527) at 
sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:652) at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
 at java.net.URL.openStream(URL.java:1037) at 
org.apache.solr.handler.TestReplicationHandlerBackup$BackupThread.run(TestReplicationHandlerBackup.java:313)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.handler.TestReplicationHandlerBackup: 
   1) Thread[id=18419, name=Thread-2467, state=RUNNABLE, 
group=TGRP-TestReplicationHandlerBackup]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at java.net.Socket.connect(Socket.java:528)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:652)
at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
at java.net.URL.openStream(URL.java:1037)
at 
org.apache.solr.handler.TestReplicationHandlerBackup$BackupThread.run(TestReplicationHandlerBackup.java:313)
at __randomizedtesting.SeedInfo.seed([4B12C10A5FB99CCE]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandlerBackup

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=18419, name=Thread-2467, state=RUNNABLE, 
group=TGRP-TestReplicationHandlerBackup] at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
 at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
 at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)  
   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at 
java.net.Socket.connect(Socket.java:579) at 
java.net.Socket.connect(Socket.java:528) at 
sun.net.NetworkClient.doConnect(NetworkClient.java:180) at 
sun.net.www.http.HttpClient.openServer(HttpClient.java:432) at 
sun.net.www.http.HttpClient.openServer(HttpClient.java:527) at 
sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:652) at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
 at java.net.URL.openStream(URL.java:1037) at 
org.apache.solr.handler.TestReplicationHandlerBackup$BackupThread.run(TestReplicationHandlerBackup.java:313)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=18419, name=Thread-2467, state=RUNNABLE, 
group=TGRP-TestReplicationHandlerBackup]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at java.net.Socket.connect(Socket.java:528)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at 
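
The leaked `BackupThread` above is blocked inside a socket connect with no timeout, which is why the framework cannot terminate it. A minimal sketch of the defensive pattern, in Python rather than the actual Java test code, and with an illustrative host/port rather than anything from the real test:

```python
import socket
import threading

def try_connect(host, port, timeout=1.0):
    """Attempt a TCP connect with a hard timeout.

    A connect with no timeout (the situation in the trace above) can block
    indefinitely and leak the thread past suite teardown. Bounding the
    connect, and running the worker as a daemon thread, guarantees the
    thread can always be reaped.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)  # connect/read can block at most this long
    try:
        s.connect((host, port))
        return True
    except OSError:
        return False
    finally:
        s.close()

# Run the connect attempt on a daemon thread so it can never outlive the suite.
result = []
t = threading.Thread(target=lambda: result.append(try_connect("127.0.0.1", 1)),
                     daemon=True)
t.start()
t.join(5.0)
```

Port 1 is almost certainly closed, so the connect fails fast instead of hanging; the same timeout bound would cap the wait even against an unresponsive host.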

[jira] [Updated] (SOLR-6314) Facet counts returned multiple times if specified more than once on the request

2014-08-21 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-6314:
-

Priority: Minor  (was: Major)
 Summary: Facet counts returned multiple times if specified more than once 
on the request  (was: Multi-threaded facet counts differ when SolrCloud has 1 
shard)

 Facet counts returned multiple times if specified more than once on the 
 request
 ---

 Key: SOLR-6314
 URL: https://issues.apache.org/jira/browse/SOLR-6314
 Project: Solr
  Issue Type: Bug
  Components: SearchComponents - other, SolrCloud
Affects Versions: 5.0
Reporter: Vamsee Yarlagadda
Assignee: Erick Erickson
Priority: Minor
 Fix For: 5.0, 4.10

 Attachments: SOLR-6314.patch, SOLR-6314.patch, SOLR-6314.patch


 I am trying to work with multi-threaded faceting on SolrCloud, and in the 
 process I was hit by some issues.
 I am currently running the upstream test below on different SolrCloud 
 configurations, and I am getting a different result set per configuration.
 https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/test/org/apache/solr/request/TestFaceting.java#L654
 Setup:
 - *Indexed 50 docs into SolrCloud.*
 - *If the SolrCloud has only 1 shard, the facet field query has the below 
 output (which matches the expected upstream test output: # facet fields 
 ~ 50).*
 {code}
 $ curl "http://localhost:8983/solr/collection1/select?facet=true&fl=id&indent=true&q=id%3A*&facet.limit=-1&facet.threads=1000&facet.field=f0_ws&facet.field=f0_ws&facet.field=f0_ws&facet.field=f0_ws&facet.field=f0_ws&facet.field=f1_ws&facet.field=f1_ws&facet.field=f1_ws&facet.field=f1_ws&facet.field=f1_ws&facet.field=f2_ws&facet.field=f2_ws&facet.field=f2_ws&facet.field=f2_ws&facet.field=f2_ws&facet.field=f3_ws&facet.field=f3_ws&facet.field=f3_ws&facet.field=f3_ws&facet.field=f3_ws&facet.field=f4_ws&facet.field=f4_ws&facet.field=f4_ws&facet.field=f4_ws&facet.field=f4_ws&facet.field=f5_ws&facet.field=f5_ws&facet.field=f5_ws&facet.field=f5_ws&facet.field=f5_ws&facet.field=f6_ws&facet.field=f6_ws&facet.field=f6_ws&facet.field=f6_ws&facet.field=f6_ws&facet.field=f7_ws&facet.field=f7_ws&facet.field=f7_ws&facet.field=f7_ws&facet.field=f7_ws&facet.field=f8_ws&facet.field=f8_ws&facet.field=f8_ws&facet.field=f8_ws&facet.field=f8_ws&facet.field=f9_ws&facet.field=f9_ws&facet.field=f9_ws&facet.field=f9_ws&facet.field=f9_ws&rows=1&wt=xml"
 <?xml version="1.0" encoding="UTF-8"?>
 <response>
 <lst name="responseHeader">
   <int name="status">0</int>
   <int name="QTime">21</int>
   <lst name="params">
     <str name="facet">true</str>
     <str name="fl">id</str>
     <str name="indent">true</str>
     <str name="q">id:*</str>
     <str name="facet.limit">-1</str>
     <str name="facet.threads">1000</str>
     <arr name="facet.field">
       <str>f0_ws</str>
       <str>f0_ws</str>
       <str>f0_ws</str>
       <str>f0_ws</str>
       <str>f0_ws</str>
       <str>f1_ws</str>
       <str>f1_ws</str>
       <str>f1_ws</str>
       <str>f1_ws</str>
       <str>f1_ws</str>
       <str>f2_ws</str>
       <str>f2_ws</str>
       <str>f2_ws</str>
       <str>f2_ws</str>
       <str>f2_ws</str>
       <str>f3_ws</str>
       <str>f3_ws</str>
       <str>f3_ws</str>
       <str>f3_ws</str>
       <str>f3_ws</str>
       <str>f4_ws</str>
       <str>f4_ws</str>
       <str>f4_ws</str>
       <str>f4_ws</str>
       <str>f4_ws</str>
       <str>f5_ws</str>
       <str>f5_ws</str>
       <str>f5_ws</str>
       <str>f5_ws</str>
       <str>f5_ws</str>
       <str>f6_ws</str>
       <str>f6_ws</str>
       <str>f6_ws</str>
       <str>f6_ws</str>
       <str>f6_ws</str>
       <str>f7_ws</str>
       <str>f7_ws</str>
       <str>f7_ws</str>
       <str>f7_ws</str>
       <str>f7_ws</str>
       <str>f8_ws</str>
       <str>f8_ws</str>
       <str>f8_ws</str>
       <str>f8_ws</str>
       <str>f8_ws</str>
       <str>f9_ws</str>
       <str>f9_ws</str>
       <str>f9_ws</str>
       <str>f9_ws</str>
       <str>f9_ws</str>
     </arr>
     <str name="wt">xml</str>
     <str name="rows">1</str>
   </lst>
 </lst>
 <result name="response" numFound="50" start="0">
   <doc>
     <float name="id">0.0</float>
   </doc>
 </result>
 <lst name="facet_counts">
   <lst name="facet_queries"/>
   <lst name="facet_fields">
     <lst name="f0_ws">
       <int name="zero_1">25</int>
       <int name="zero_2">25</int>
     </lst>
     <lst name="f0_ws">
       <int name="zero_1">25</int>
       <int name="zero_2">25</int>
     </lst>
     <lst name="f0_ws">
       <int name="zero_1">25</int>
       <int name="zero_2">25</int>
     </lst>
     <lst name="f0_ws">
       <int name="zero_1">25</int>
       <int name="zero_2">25</int>
     </lst>
     <lst name="f0_ws">
       <int name="zero_1">25</int>
       <int name="zero_2">25</int>
     </lst>
     <lst name="f1_ws">
       <int name="one_1">33</int>
       <int name="one_3">17</int>
     </lst>
     <lst name="f1_ws">
       <int name="one_1">33</int>
       <int name="one_3">17</int>
     </lst>
     <lst name="f1_ws">
       <int name="one_1">33</int>
       <int name="one_3">17</int>
     </lst>
     <lst name="f1_ws">
       <int name="one_1">33</int>
       <int name="one_3">17</int>
     </lst>
     <lst name="f1_ws">
       <int name="one_1">33</int>
 

[jira] [Updated] (SOLR-6314) Facet counts duplicated in the response if specified more than once on the request.

2014-08-21 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-6314:
-

Summary: Facet counts duplicated in the response if specified more than 
once on the request.  (was: Facet counts returned multiple times if specified 
more than once on the request)


[jira] [Commented] (SOLR-6314) Facet counts duplicated in the response if specified more than once on the request.

2014-08-21 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14105524#comment-14105524
 ] 

Erick Erickson commented on SOLR-6314:
--

Yep, you're right. The counts were correct; they were just repeated unnecessarily 
in the response under some conditions.

Fixing up both.
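
Until a fixed build is deployed, a client can avoid the duplicated facet blocks by deduplicating the repeated parameter before sending the request. A minimal Python sketch (the helper name and example URL are illustrative, not part of Solr):

```python
from urllib.parse import urlparse, parse_qsl, urlencode

def dedupe_repeated_param(url, name):
    """Drop repeated (name, value) pairs for one query parameter, keeping
    the first occurrence and the overall parameter order. Sending
    facet.field=f0_ws five times made pre-fix Solr echo five facet blocks,
    so collapsing duplicates client-side sidesteps the bug.
    """
    parts = urlparse(url)
    seen, kept = set(), []
    for k, v in parse_qsl(parts.query):
        if k == name and v in seen:
            continue  # skip duplicate facet.field=... entries
        if k == name:
            seen.add(v)
        kept.append((k, v))
    return parts._replace(query=urlencode(kept)).geturl()

url = ("http://localhost:8983/solr/collection1/select"
       "?facet=true&facet.field=f0_ws&facet.field=f0_ws&facet.field=f1_ws&rows=1")
clean = dedupe_repeated_param(url, "facet.field")
```

Distinct values of the parameter are preserved (each field is still faceted once); only exact repeats are dropped.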


[jira] [Commented] (SOLR-6314) Facet counts duplicated in the response if specified more than once on the request.

2014-08-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14105526#comment-14105526
 ] 

ASF subversion and git services commented on SOLR-6314:
---

Commit 1619470 from [~erickoerickson] in branch 'dev/trunk'
[ https://svn.apache.org/r1619470 ]

SOLR-6314: updated bug title to be more accurate and include Vamsee in credits, 
all in CHANGES.txt. No code changes


[jira] [Commented] (SOLR-6314) Facet counts duplicated in the response if specified more than once on the request.

2014-08-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14105528#comment-14105528
 ] 

ASF subversion and git services commented on SOLR-6314:
---

Commit 1619471 from [~erickoerickson] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1619471 ]

SOLR-6314: updated bug title to be more accurate and include Vamsee in credits, 
all in CHANGES.txt. No code changes


[jira] [Commented] (LUCENE-5894) refactor bulk merge logic

2014-08-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14105538#comment-14105538
 ] 

ASF subversion and git services commented on LUCENE-5894:
-

Commit 1619477 from [~rcmuir] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1619477 ]

LUCENE-5894: refactor bulk merge logic

 refactor bulk merge logic
 -

 Key: LUCENE-5894
 URL: https://issues.apache.org/jira/browse/LUCENE-5894
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-5894.patch


 Today it's only really usable by stored fields/term vectors, has hardcoded 
 logic in SegmentMerger specific to certain impls, etc.
 It would be better if this was generalized to terms/postings/norms/docvalues 
 as well.
 Bulk merge is boring; the real idea is to allow codecs to do more: e.g. with 
 this patch they could do streaming checksum validation, or prevent the 
 loading of latent norms, or other things we cannot do today.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6314) Facet counts duplicated in the response if specified more than once on the request.

2014-08-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14105542#comment-14105542
 ] 

ASF subversion and git services commented on SOLR-6314:
---

Commit 1619478 from [~rjernst] in branch 'dev/branches/lucene_solr_4_10'
[ https://svn.apache.org/r1619478 ]

SOLR-6314: updated bug title to be more accurate and include Vamsee in credits, 
all in CHANGES.txt. No code changes

 Facet counts duplicated in the response if specified more than once on the 
 request.
 ---

 Key: SOLR-6314
 URL: https://issues.apache.org/jira/browse/SOLR-6314
 Project: Solr
  Issue Type: Bug
  Components: SearchComponents - other, SolrCloud
Affects Versions: 5.0
Reporter: Vamsee Yarlagadda
Assignee: Erick Erickson
Priority: Minor
 Fix For: 5.0, 4.10

 Attachments: SOLR-6314.patch, SOLR-6314.patch, SOLR-6314.patch


 I am trying to work with multi-threaded faceting on SolrCloud and in the 
 process I was hit by some issues.
 I am currently running the below upstream test on different SolrCloud 
 configurations and I am getting a different result set per configuration.
 https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/test/org/apache/solr/request/TestFaceting.java#L654
 Setup:
 - *Indexed 50 docs into SolrCloud.*
 - *If the SolrCloud has only 1 shard, the facet field query has the below 
 output (which matches with the expected upstream test output - # facet fields 
 ~ 50).*
 {code}
 $ curl 
 "http://localhost:8983/solr/collection1/select?facet=true&fl=id&indent=true&q=id%3A*&facet.limit=-1&facet.threads=1000&facet.field=f0_ws&facet.field=f0_ws&facet.field=f0_ws&facet.field=f0_ws&facet.field=f0_ws&facet.field=f1_ws&facet.field=f1_ws&facet.field=f1_ws&facet.field=f1_ws&facet.field=f1_ws&facet.field=f2_ws&facet.field=f2_ws&facet.field=f2_ws&facet.field=f2_ws&facet.field=f2_ws&facet.field=f3_ws&facet.field=f3_ws&facet.field=f3_ws&facet.field=f3_ws&facet.field=f3_ws&facet.field=f4_ws&facet.field=f4_ws&facet.field=f4_ws&facet.field=f4_ws&facet.field=f4_ws&facet.field=f5_ws&facet.field=f5_ws&facet.field=f5_ws&facet.field=f5_ws&facet.field=f5_ws&facet.field=f6_ws&facet.field=f6_ws&facet.field=f6_ws&facet.field=f6_ws&facet.field=f6_ws&facet.field=f7_ws&facet.field=f7_ws&facet.field=f7_ws&facet.field=f7_ws&facet.field=f7_ws&facet.field=f8_ws&facet.field=f8_ws&facet.field=f8_ws&facet.field=f8_ws&facet.field=f8_ws&facet.field=f9_ws&facet.field=f9_ws&facet.field=f9_ws&facet.field=f9_ws&facet.field=f9_ws&rows=1&wt=xml"
 <?xml version="1.0" encoding="UTF-8"?>
 <response>
 <lst name="responseHeader">
   <int name="status">0</int>
   <int name="QTime">21</int>
   <lst name="params">
     <str name="facet">true</str>
     <str name="fl">id</str>
     <str name="indent">true</str>
     <str name="q">id:*</str>
     <str name="facet.limit">-1</str>
     <str name="facet.threads">1000</str>
     <arr name="facet.field">
       <str>f0_ws</str>
       <str>f0_ws</str>
       <str>f0_ws</str>
       <str>f0_ws</str>
       <str>f0_ws</str>
       <str>f1_ws</str>
       <str>f1_ws</str>
       <str>f1_ws</str>
       <str>f1_ws</str>
       <str>f1_ws</str>
       <str>f2_ws</str>
       <str>f2_ws</str>
       <str>f2_ws</str>
       <str>f2_ws</str>
       <str>f2_ws</str>
       <str>f3_ws</str>
       <str>f3_ws</str>
       <str>f3_ws</str>
       <str>f3_ws</str>
       <str>f3_ws</str>
       <str>f4_ws</str>
       <str>f4_ws</str>
       <str>f4_ws</str>
       <str>f4_ws</str>
       <str>f4_ws</str>
       <str>f5_ws</str>
       <str>f5_ws</str>
       <str>f5_ws</str>
       <str>f5_ws</str>
       <str>f5_ws</str>
       <str>f6_ws</str>
       <str>f6_ws</str>
       <str>f6_ws</str>
       <str>f6_ws</str>
       <str>f6_ws</str>
       <str>f7_ws</str>
       <str>f7_ws</str>
       <str>f7_ws</str>
       <str>f7_ws</str>
       <str>f7_ws</str>
       <str>f8_ws</str>
       <str>f8_ws</str>
       <str>f8_ws</str>
       <str>f8_ws</str>
       <str>f8_ws</str>
       <str>f9_ws</str>
       <str>f9_ws</str>
       <str>f9_ws</str>
       <str>f9_ws</str>
       <str>f9_ws</str>
     </arr>
     <str name="wt">xml</str>
     <str name="rows">1</str>
   </lst>
 </lst>
 <result name="response" numFound="50" start="0">
   <doc>
     <float name="id">0.0</float></doc>
 </result>
 <lst name="facet_counts">
   <lst name="facet_queries"/>
   <lst name="facet_fields">
     <lst name="f0_ws">
       <int name="zero_1">25</int>
       <int name="zero_2">25</int>
     </lst>
     <lst name="f0_ws">
       <int name="zero_1">25</int>
       <int name="zero_2">25</int>
     </lst>
     <lst name="f0_ws">
       <int name="zero_1">25</int>
       <int name="zero_2">25</int>
     </lst>
     <lst name="f0_ws">
       <int name="zero_1">25</int>
       <int name="zero_2">25</int>
     </lst>
     <lst name="f0_ws">
       <int name="zero_1">25</int>
       <int name="zero_2">25</int>
     </lst>
     <lst name="f1_ws">
       <int name="one_1">33</int>
       <int name="one_3">17</int>
     </lst>
     <lst name="f1_ws">
       <int name="one_1">33</int>
       <int name="one_3">17</int>
     </lst>
     <lst name="f1_ws">
       <int name="one_1">33</int>
       <int name="one_3">17</int>
     </lst>
     <lst 
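The duplicated facet blocks above come from Solr faceting once per repeated facet.field parameter. A minimal sketch of the idea behind the fix, order-preserving de-duplication of repeated request parameters; the function name and tuple representation are illustrative, not the actual SOLR-6314 patch:

```python
def dedupe_params(params):
    """Collapse repeated (key, value) pairs while preserving first-seen order.

    `params` is a list of (key, value) tuples as parsed from a query string.
    With this applied, facet.field=f0_ws repeated five times is faceted once.
    """
    seen = set()
    out = []
    for kv in params:
        if kv not in seen:       # keep only the first occurrence of each pair
            seen.add(kv)
            out.append(kv)
    return out

params = [("facet.field", "f0_ws")] * 5 + [("facet.field", "f1_ws")] * 5
print(dedupe_params(params))  # each field survives exactly once
```

Distinct values of the same key (e.g. facet.field=f0_ws and facet.field=f1_ws) are deliberately both kept; only exact repeats are dropped.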

[jira] [Resolved] (LUCENE-5894) refactor bulk merge logic

2014-08-21 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-5894.
-

   Resolution: Fixed
Fix Version/s: 4.11
   5.0

 refactor bulk merge logic
 -

 Key: LUCENE-5894
 URL: https://issues.apache.org/jira/browse/LUCENE-5894
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Fix For: 5.0, 4.11

 Attachments: LUCENE-5894.patch


 Today its only usable really by stored fields/term vectors, has hardcoded 
 logic in SegmentMerger specific to certain impls, etc.
 It would be better if this was generalized to terms/postings/norms/docvalues 
 as well.
 Bulk merge is boring, the real idea is to allow codecs to do more: e.g. with 
 this patch they could do streaming checksum validation, or prevent the 
 loading of latent norms, or other things we cannot do today.






[jira] [Commented] (SOLR-3617) Consider adding start scripts.

2014-08-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14105550#comment-14105550
 ] 

ASF subversion and git services commented on SOLR-3617:
---

Commit 1619480 from [~thelabdude] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1619480 ]

SOLR-3617: add bin/solr and bin/solr.cmd scripts for starting, stopping, and 
running examples.

 Consider adding start scripts.
 --

 Key: SOLR-3617
 URL: https://issues.apache.org/jira/browse/SOLR-3617
 Project: Solr
  Issue Type: New Feature
Reporter: Mark Miller
Assignee: Timothy Potter
 Attachments: SOLR-3617.patch, SOLR-3617.patch, SOLR-3617.patch, 
 SOLR-3617.patch, SOLR-3617.patch


 I've always found that starting Solr with java -jar start.jar is a little odd 
 if you are not a java guy, but I think there are bigger pros than looking 
 less odd in shipping some start scripts.
 Not only do you get a cleaner start command:
 sh solr.sh or solr.bat or something
 But you also can do a couple other little nice things:
 * it becomes fairly obvious for a new casual user to see how to start the 
 system without reading doc.
 * you can make the working dir the location of the script - this lets you 
 call the start script from another dir and still have all the relative dir 
 setup work.
 * have an out of the box place to save startup params like -Xmx.
 * we could have multiple start scripts - say solr-dev.sh that logged to the 
 console and default to sys default for RAM - and also solr-prod which was 
 fully configured for logging, pegged Xms and Xmx at some larger value (1GB?) 
 etc.
 You would still of course be able to make the java cmd directly - and that is 
 probably what you would do when it's time to run as a service - but these 
 could be good starter scripts to get people on the right track and improve 
 the initial user experience.






[jira] [Commented] (SOLR-6233) Provide basic command line tools for checking Solr status and health.

2014-08-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14105552#comment-14105552
 ] 

ASF subversion and git services commented on SOLR-6233:
---

Commit 1619482 from [~thelabdude] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1619482 ]

SOLR-6233: Provide basic command line tools for checking Solr status and health.

 Provide basic command line tools for checking Solr status and health.
 -

 Key: SOLR-6233
 URL: https://issues.apache.org/jira/browse/SOLR-6233
 Project: Solr
  Issue Type: Improvement
Reporter: Timothy Potter
Assignee: Timothy Potter
Priority: Minor
 Fix For: 5.0, 4.10


 As part of the start script development work SOLR-3617, example restructuring 
 SOLR-3619, and the overall curb appeal work SOLR-4430, I'd like to have an 
 option on the SystemInfoHandler that gives a shorter, well formatted JSON 
 synopsis of essential information. I know essential is vague ;-) but right 
 now using curl to http://host:port/solr/admin/info/system?wt=json gives too 
 much information when I just want a synopsis of a Solr server. 
 Maybe something like overview=true?
 Result would be:
 {noformat}
 {
   "address": "http://localhost:8983/solr",
   "mode": "solrcloud",
   "zookeeper": "localhost:2181/foo",
   "uptime": "2 days, 3 hours, 4 minutes, 5 seconds",
   "version": "5.0-SNAPSHOT",
   "status": "healthy",
   "memory": "4.2g of 6g"
 }
 {noformat}
 Now of course, one may argue all this information can be easily parsed from 
 the JSON but consider cross-platform command-line tools that don't have 
 immediate access to a JSON parser, such as the bin/solr start script.
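The proposed synopsis could be produced by projecting the full system-info payload down to a handful of fields. A hedged sketch, the nested key names below are assumptions for illustration and may not match the real SystemInfoHandler schema:

```python
def overview(info):
    """Condense a full /admin/info/system payload (parsed JSON dict)
    into a short synopsis like the one proposed in SOLR-6233.

    All keys read from `info` are hypothetical stand-ins.
    """
    jvm = info.get("jvm", {})
    mem = jvm.get("memory", {})
    return {
        "address": info.get("address"),
        "mode": info.get("mode", "standalone"),
        "zookeeper": info.get("zkHost"),
        "version": info.get("lucene", {}).get("solr-spec-version"),
        "memory": "%s of %s" % (mem.get("used"), mem.get("total")),
    }

sample = {
    "address": "http://localhost:8983/solr",
    "mode": "solrcloud",
    "zkHost": "localhost:2181/foo",
    "jvm": {"memory": {"used": "4.2g", "total": "6g"}},
    "lucene": {"solr-spec-version": "5.0-SNAPSHOT"},
}
print(overview(sample))
```

A start script could then shell out to this kind of helper instead of bundling a full JSON parser.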






[jira] [Resolved] (SOLR-6233) Provide basic command line tools for checking Solr status and health.

2014-08-21 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter resolved SOLR-6233.
--

   Resolution: Fixed
Fix Version/s: 4.10
   5.0

 Provide basic command line tools for checking Solr status and health.
 -

 Key: SOLR-6233
 URL: https://issues.apache.org/jira/browse/SOLR-6233
 Project: Solr
  Issue Type: Improvement
Reporter: Timothy Potter
Assignee: Timothy Potter
Priority: Minor
 Fix For: 5.0, 4.10


 As part of the start script development work SOLR-3617, example restructuring 
 SOLR-3619, and the overall curb appeal work SOLR-4430, I'd like to have an 
 option on the SystemInfoHandler that gives a shorter, well formatted JSON 
 synopsis of essential information. I know essential is vague ;-) but right 
 now using curl to http://host:port/solr/admin/info/system?wt=json gives too 
 much information when I just want a synopsis of a Solr server. 
 Maybe something like overview=true?
 Result would be:
 {noformat}
 {
   "address": "http://localhost:8983/solr",
   "mode": "solrcloud",
   "zookeeper": "localhost:2181/foo",
   "uptime": "2 days, 3 hours, 4 minutes, 5 seconds",
   "version": "5.0-SNAPSHOT",
   "status": "healthy",
   "memory": "4.2g of 6g"
 }
 {noformat}
 Now of course, one may argue all this information can be easily parsed from 
 the JSON but consider cross-platform command-line tools that don't have 
 immediate access to a JSON parser, such as the bin/solr start script.






[jira] [Updated] (SOLR-6398) Add IterativeMergeStrategy to support Parallel Iterative Algorithms

2014-08-21 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-6398:
-

Summary: Add IterativeMergeStrategy to support Parallel Iterative 
Algorithms  (was: Add IterativeMergeStrategy to support the execution of 
Parallel Iterative Algorithms )

 Add IterativeMergeStrategy to support Parallel Iterative Algorithms
 ---

 Key: SOLR-6398
 URL: https://issues.apache.org/jira/browse/SOLR-6398
 Project: Solr
  Issue Type: Improvement
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-6398.patch


 The main inspiration for this ticket came from this presentation:
 http://www.slideshare.net/jpatanooga/hadoop-summit-eu-2013-parallel-linear-regression-iterativereduce-and-yarn
 This ticket builds on the existing AnalyticsQuery / MergeStrategy framework 
 by adding the abstract class IterativeMergeStrategy,  which has built-in 
 support for call-backs to the shards. By extending this class you can plugin 
 parallel iterative algorithms that will run efficiently inside Solr.
 I will update this ticket with more information about the design soon.






[jira] [Resolved] (SOLR-3617) Consider adding start scripts.

2014-08-21 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter resolved SOLR-3617.
--

   Resolution: Fixed
Fix Version/s: 4.10
   5.0

 Consider adding start scripts.
 --

 Key: SOLR-3617
 URL: https://issues.apache.org/jira/browse/SOLR-3617
 Project: Solr
  Issue Type: New Feature
Reporter: Mark Miller
Assignee: Timothy Potter
 Fix For: 5.0, 4.10

 Attachments: SOLR-3617.patch, SOLR-3617.patch, SOLR-3617.patch, 
 SOLR-3617.patch, SOLR-3617.patch


 I've always found that starting Solr with java -jar start.jar is a little odd 
 if you are not a java guy, but I think there are bigger pros than looking 
 less odd in shipping some start scripts.
 Not only do you get a cleaner start command:
 sh solr.sh or solr.bat or something
 But you also can do a couple other little nice things:
 * it becomes fairly obvious for a new casual user to see how to start the 
 system without reading doc.
 * you can make the working dir the location of the script - this lets you 
 call the start script from another dir and still have all the relative dir 
 setup work.
 * have an out of the box place to save startup params like -Xmx.
 * we could have multiple start scripts - say solr-dev.sh that logged to the 
 console and default to sys default for RAM - and also solr-prod which was 
 fully configured for logging, pegged Xms and Xmx at some larger value (1GB?) 
 etc.
 You would still of course be able to make the java cmd directly - and that is 
 probably what you would do when it's time to run as a service - but these 
 could be good starter scripts to get people on the right track and improve 
 the initial user experience.






[jira] [Commented] (LUCENE-5895) Add per-segment and per-commit id to help replication

2014-08-21 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14105608#comment-14105608
 ] 

Michael McCandless commented on LUCENE-5895:


I think the last patch is ready ... I think it's net/net low risk, since it 
just adds a new field to SIS/SI, so I'd like to commit for 4.10.  We can always 
improve the uniqueness of the id generation later (it's opaque).

 Add per-segment and per-commit id to help replication
 -

 Key: LUCENE-5895
 URL: https://issues.apache.org/jira/browse/LUCENE-5895
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5895.patch, LUCENE-5895.patch


 It would be useful if Lucene recorded a unique id for each segment written 
 and each commit point.  This way, file-based replicators can use this to 
 know whether the segment/commit they are looking at on a source machine and 
 dest machine are in fact that same.
 I know this would have been very useful when I was playing with NRT 
 replication (LUCENE-5438).
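The idea can be sketched outside Lucene: stamp each segment with an opaque unique id at write time, so a replicator compares ids rather than trusting file name and length. Everything below (function names, dict layout) is illustrative, not the committed LUCENE-5895 code:

```python
import os

def new_segment_id():
    """Generate an opaque 128-bit id, hex-encoded.
    Only uniqueness matters; the id carries no ordering or meaning."""
    return os.urandom(16).hex()

def same_segment(src_meta, dst_meta):
    """A file-based replicator decides whether source and destination
    hold the same segment by comparing opaque ids."""
    src_id = src_meta.get("id")
    return src_id is not None and src_id == dst_meta.get("id")

seg = {"name": "_0", "id": new_segment_id()}
copied = dict(seg)                      # faithful copy on the dest machine
assert same_segment(seg, copied)
assert not same_segment(seg, {"name": "_0", "id": new_segment_id()})
```

The `is not None` guard mirrors the back-compat concern: segments written before the feature existed have no id and must never be treated as matching.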






[jira] [Commented] (LUCENE-5672) Addindexes does not call maybeMerge

2014-08-21 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14105621#comment-14105621
 ] 

Mark Miller commented on LUCENE-5672:
-

Can we get this in for 4.10 [~rcmuir]?

 Addindexes does not call maybeMerge
 ---

 Key: LUCENE-5672
 URL: https://issues.apache.org/jira/browse/LUCENE-5672
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
Assignee: Robert Muir
 Attachments: LUCENE-5672.patch


 I don't know why this was removed, but this is buggy and just asking for 
 trouble.






[jira] [Commented] (LUCENE-5895) Add per-segment and per-commit id to help replication

2014-08-21 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14105626#comment-14105626
 ] 

Uwe Schindler commented on LUCENE-5895:
---

I am fine with the patch.

 Add per-segment and per-commit id to help replication
 -

 Key: LUCENE-5895
 URL: https://issues.apache.org/jira/browse/LUCENE-5895
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5895.patch, LUCENE-5895.patch


 It would be useful if Lucene recorded a unique id for each segment written 
 and each commit point.  This way, file-based replicators can use this to 
 know whether the segment/commit they are looking at on a source machine and 
 dest machine are in fact that same.
 I know this would have been very useful when I was playing with NRT 
 replication (LUCENE-5438).






[jira] [Commented] (LUCENE-5672) Addindexes does not call maybeMerge

2014-08-21 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14105641#comment-14105641
 ] 

Michael McCandless commented on LUCENE-5672:


+1

 Addindexes does not call maybeMerge
 ---

 Key: LUCENE-5672
 URL: https://issues.apache.org/jira/browse/LUCENE-5672
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
Assignee: Robert Muir
 Attachments: LUCENE-5672.patch


 I don't know why this was removed, but this is buggy and just asking for 
 trouble.






[jira] [Commented] (SOLR-3617) Consider adding start scripts.

2014-08-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14105654#comment-14105654
 ] 

ASF subversion and git services commented on SOLR-3617:
---

Commit 1619490 from [~thelabdude] in branch 'dev/branches/lucene_solr_4_10'
[ https://svn.apache.org/r1619490 ]

SOLR-3617: backport bin/solr scripts into the 4.10 branch

 Consider adding start scripts.
 --

 Key: SOLR-3617
 URL: https://issues.apache.org/jira/browse/SOLR-3617
 Project: Solr
  Issue Type: New Feature
Reporter: Mark Miller
Assignee: Timothy Potter
 Fix For: 5.0, 4.10

 Attachments: SOLR-3617.patch, SOLR-3617.patch, SOLR-3617.patch, 
 SOLR-3617.patch, SOLR-3617.patch


 I've always found that starting Solr with java -jar start.jar is a little odd 
 if you are not a java guy, but I think there are bigger pros than looking 
 less odd in shipping some start scripts.
 Not only do you get a cleaner start command:
 sh solr.sh or solr.bat or something
 But you also can do a couple other little nice things:
 * it becomes fairly obvious for a new casual user to see how to start the 
 system without reading doc.
 * you can make the working dir the location of the script - this lets you 
 call the start script from another dir and still have all the relative dir 
 setup work.
 * have an out of the box place to save startup params like -Xmx.
 * we could have multiple start scripts - say solr-dev.sh that logged to the 
 console and default to sys default for RAM - and also solr-prod which was 
 fully configured for logging, pegged Xms and Xmx at some larger value (1GB?) 
 etc.
 You would still of course be able to make the java cmd directly - and that is 
 probably what you would do when it's time to run as a service - but these 
 could be good starter scripts to get people on the right track and improve 
 the initial user experience.






[jira] [Commented] (SOLR-6233) Provide basic command line tools for checking Solr status and health.

2014-08-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14105655#comment-14105655
 ] 

ASF subversion and git services commented on SOLR-6233:
---

Commit 1619491 from [~thelabdude] in branch 'dev/branches/lucene_solr_4_10'
[ https://svn.apache.org/r1619491 ]

SOLR-6233: backport SolrCLI (needed by bin/solr scripts) into the 4.10 branch

 Provide basic command line tools for checking Solr status and health.
 -

 Key: SOLR-6233
 URL: https://issues.apache.org/jira/browse/SOLR-6233
 Project: Solr
  Issue Type: Improvement
Reporter: Timothy Potter
Assignee: Timothy Potter
Priority: Minor
 Fix For: 5.0, 4.10


 As part of the start script development work SOLR-3617, example restructuring 
 SOLR-3619, and the overall curb appeal work SOLR-4430, I'd like to have an 
 option on the SystemInfoHandler that gives a shorter, well formatted JSON 
 synopsis of essential information. I know essential is vague ;-) but right 
 now using curl to http://host:port/solr/admin/info/system?wt=json gives too 
 much information when I just want a synopsis of a Solr server. 
 Maybe something like overview=true?
 Result would be:
 {noformat}
 {
   "address": "http://localhost:8983/solr",
   "mode": "solrcloud",
   "zookeeper": "localhost:2181/foo",
   "uptime": "2 days, 3 hours, 4 minutes, 5 seconds",
   "version": "5.0-SNAPSHOT",
   "status": "healthy",
   "memory": "4.2g of 6g"
 }
 {noformat}
 Now of course, one may argue all this information can be easily parsed from 
 the JSON but consider cross-platform command-line tools that don't have 
 immediate access to a JSON parser, such as the bin/solr start script.






[jira] [Commented] (SOLR-6233) Provide basic command line tools for checking Solr status and health.

2014-08-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14105657#comment-14105657
 ] 

ASF subversion and git services commented on SOLR-6233:
---

Commit 1619494 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1619494 ]

SOLR-6233: Add to 4.10 section of CHANGES

 Provide basic command line tools for checking Solr status and health.
 -

 Key: SOLR-6233
 URL: https://issues.apache.org/jira/browse/SOLR-6233
 Project: Solr
  Issue Type: Improvement
Reporter: Timothy Potter
Assignee: Timothy Potter
Priority: Minor
 Fix For: 5.0, 4.10


 As part of the start script development work SOLR-3617, example restructuring 
 SOLR-3619, and the overall curb appeal work SOLR-4430, I'd like to have an 
 option on the SystemInfoHandler that gives a shorter, well formatted JSON 
 synopsis of essential information. I know essential is vague ;-) but right 
 now using curl to http://host:port/solr/admin/info/system?wt=json gives too 
 much information when I just want a synopsis of a Solr server. 
 Maybe something like overview=true?
 Result would be:
 {noformat}
 {
   "address": "http://localhost:8983/solr",
   "mode": "solrcloud",
   "zookeeper": "localhost:2181/foo",
   "uptime": "2 days, 3 hours, 4 minutes, 5 seconds",
   "version": "5.0-SNAPSHOT",
   "status": "healthy",
   "memory": "4.2g of 6g"
 }
 {noformat}
 Now of course, one may argue all this information can be easily parsed from 
 the JSON but consider cross-platform command-line tools that don't have 
 immediate access to a JSON parser, such as the bin/solr start script.






[jira] [Commented] (SOLR-6396) Allow the name of core.properties file used in discovery to be specified by an environment variable

2014-08-21 Thread Ryan Josal (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14105662#comment-14105662
 ] 

Ryan Josal commented on SOLR-6396:
--

Confirmed it does work the way Hoss suggests; using property expansion in 
core.properties meets the need.

 Allow the name of core.properties file used in discovery to be specified by 
 an environment variable
 ---

 Key: SOLR-6396
 URL: https://issues.apache.org/jira/browse/SOLR-6396
 Project: Solr
  Issue Type: Improvement
Affects Versions: 4.9, 5.0
Reporter: Erick Erickson
Assignee: Erick Erickson
Priority: Minor

 This was brought up on the user's list. I haven't thought this through, but 
 it seems reasonable.
 This has some interesting implications in the core rename case, i.e. the 
 unloaded props file will have a different name as well.






[jira] [Commented] (LUCENE-5672) Addindexes does not call maybeMerge

2014-08-21 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14105671#comment-14105671
 ] 

Shai Erera commented on LUCENE-5672:


+1. Now that we can set the MP dynamically, if someone doesn't want to perform 
merges, he can set the MP to NoMP, call addIndexes, then change it back.

 Addindexes does not call maybeMerge
 ---

 Key: LUCENE-5672
 URL: https://issues.apache.org/jira/browse/LUCENE-5672
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
Assignee: Robert Muir
 Attachments: LUCENE-5672.patch


 I don't know why this was removed, but this is buggy and just asking for 
 trouble.






[jira] [Commented] (LUCENE-5672) Addindexes does not call maybeMerge

2014-08-21 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14105676#comment-14105676
 ] 

Ryan Ernst commented on LUCENE-5672:


If this was that important, why wasn't it pushed on weeks ago? I hate to play 
the bad guy, but I'm a little scared to throw this into the release branch, 
when it has undergone minimal testing (i.e. it hasn't been hammered by jenkins for 
days or weeks).

 Addindexes does not call maybeMerge
 ---

 Key: LUCENE-5672
 URL: https://issues.apache.org/jira/browse/LUCENE-5672
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
Assignee: Robert Muir
 Attachments: LUCENE-5672.patch


 I don't know why this was removed, but this is buggy and just asking for 
 trouble.






[jira] [Commented] (SOLR-5894) Speed up high-cardinality facets with sparse counters

2014-08-21 Thread Toke Eskildsen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14105715#comment-14105715
 ] 

Toke Eskildsen commented on SOLR-5894:
--

Forked Lucene/Solr on GitHub for better project management: 
https://github.com/tokee/lucene-solr . Experimental branches for sparse 
faceting are lucene_solr_4_8_sparse and lucene_solr_4_9_sparse and will contain 
the latest code. The patch above for 4.7.1 is a little behind the GitHub 
branches, as it does not have the speed-up for the second phase (getting counts 
for specific terms) of distributed faceting.

 Speed up high-cardinality facets with sparse counters
 -

 Key: SOLR-5894
 URL: https://issues.apache.org/jira/browse/SOLR-5894
 Project: Solr
  Issue Type: Improvement
  Components: SearchComponents - other
Affects Versions: 4.7.1
Reporter: Toke Eskildsen
Priority: Minor
  Labels: faceted-search, faceting, memory, performance
 Attachments: SOLR-5894.patch, SOLR-5894.patch, SOLR-5894.patch, 
 SOLR-5894.patch, SOLR-5894.patch, SOLR-5894.patch, SOLR-5894.patch, 
 SOLR-5894.patch, SOLR-5894.patch, SOLR-5894_test.zip, SOLR-5894_test.zip, 
 SOLR-5894_test.zip, SOLR-5894_test.zip, SOLR-5894_test.zip, 
 author_7M_tags_1852_logged_queries_warmed.png, 
 sparse_200docs_fc_cutoff_20140403-145412.png, 
 sparse_500docs_20140331-151918_multi.png, 
 sparse_500docs_20140331-151918_single.png, 
 sparse_5051docs_20140328-152807.png


 Field based faceting in Solr has two phases: Collecting counts for tags in 
 facets and extracting the requested tags.
 The execution time for the collecting phase is approximately linear to the 
 number of hits and the number of references from hits to tags. This phase is 
 not the focus here.
 The extraction time scales with the number of unique tags in the search 
 result, but is also heavily influenced by the total number of unique tags in 
 the facet as every counter, 0 or not, is visited by the extractor (at least 
 for count order). For fields with millions of unique tag values this means 
 10s of milliseconds added to the minimum response time (see 
 https://sbdevel.wordpress.com/2014/03/18/sparse-facet-counting-on-a-real-index/
  for a test on a corpus with 7M unique values in the facet).
 The extractor needs to visit every counter due to the current counter 
 structure being a plain int-array of size #unique_tags. Switching to a sparse 
 structure, where only the tag counters > 0 are visited, makes the extraction 
 time linear to the number of unique tags in the result set.
 Unfortunately the number of unique tags in the result set is unknown at 
 collect time, so it is not possible to reliably select sparse counting vs. 
 full counting up front. Luckily there exists solutions for sparse sets that 
 has the property of switching to non-sparse-mode without a switch-penalty, 
 when the sparse-threshold is exceeded (see 
 http://programmingpraxis.com/2012/03/09/sparse-sets/ for an example). This 
 JIRA aims to implement this functionality in Solr.
 Current status: Sparse counting is implemented for field cache faceting, both 
 single- and multi-value, with and without doc-values. Sort by count only. The 
 patch applies cleanly to Solr 4.6.1 and should integrate well with everything 
 as all functionality is unchanged. After patching, the following new 
 parameters are possible:
 * facet.sparse=true enables sparse faceting.
 * facet.sparse.mintags=1 the minimum amount of unique tags in the given 
 field for sparse faceting to be active. This is used for auto-selecting 
 whether sparse should be used or not.
 * facet.sparse.fraction=0.08 the overhead used for the sparse tracker. 
 Setting this too low means that only very small result sets are handled as 
 sparse. Setting this too high will result in a large performance penalty if 
 the result set blows the sparse tracker. Values between 0.04 and 0.1 seems to 
 work well.
 * facet.sparse.packed=true use PackecInts for counters instead of int[]. This 
 saves memory, but performance will differ. Whether performance will be better 
 or worse depends on the corpus. Experiment with it.
 * facet.sparse.cutoff=0.90 if the estimated number (based on hitcount) of 
 unique tags in the search result exceeds this fraction of the sparse tracker, 
 do not perform sparse tracking. The estimate is based on the assumption that 
 references from documents to tags are distributed randomly.
 * facet.sparse.pool.size=2 the maximum amount of sparse trackers to clear and 
 keep in memory, ready for usage. Clearing and re-using a counter is faster 
 that allocating it fresh from the heap. Setting the pool size to 0 means than 
 a new sparse counter will be allocated each time, just as standard Solr 
 faceting works.
 * 
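For readers unfamiliar with the linked sparse-set trick, here is a minimal, self-contained Java sketch of the idea (the class and method names are invented for illustration; the patch's real classes and heuristics differ): increments stay O(1), and while the number of touched counters is below the tracker capacity, extraction and clearing visit only the touched entries rather than all #unique_tags slots.

```java
import java.util.Arrays;

// Toy sketch of a sparse counter (names are hypothetical, not from the patch).
// Increments are O(1); while the number of touched counters stays under the
// tracker capacity, extraction and clear() only visit the touched entries.
class SparseCounter {
    private final int[] counts;   // one slot per unique tag
    private final int[] touched;  // ordinals of counters that became non-zero
    private int numTouched = 0;
    private final int maxTracked; // tracker capacity (the "fraction" overhead)

    SparseCounter(int uniqueTags, double fraction) {
        counts = new int[uniqueTags];
        maxTracked = (int) (uniqueTags * fraction);
        touched = new int[maxTracked];
    }

    void inc(int ordinal) {
        if (counts[ordinal]++ == 0 && numTouched < maxTracked) {
            touched[numTouched++] = ordinal; // first touch: remember ordinal
        }
        // If the tracker overflows, counts[] is still correct; we merely
        // lose sparseness and fall back to scanning the full array.
    }

    int get(int ordinal) {
        return counts[ordinal];
    }

    boolean isSparse() {
        return numTouched < maxTracked;
    }

    /** Visits only non-zero counters when sparse, all counters otherwise. */
    int nonZeroCounters() {
        if (isSparse()) {
            return numTouched;
        }
        int n = 0;
        for (int c : counts) if (c > 0) n++;
        return n;
    }

    /** Clearing is proportional to the touched counters when sparse. */
    void clear() {
        if (isSparse()) {
            for (int i = 0; i < numTouched; i++) counts[touched[i]] = 0;
        } else {
            Arrays.fill(counts, 0);
        }
        numTouched = 0;
    }
}
```

Overflowing the tracker costs nothing extra: counts[] is always correct, so the code simply falls back to a full scan, which is the no-switch-penalty property described above.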

[jira] [Commented] (LUCENE-5897) performance bug (adversary) in StandardTokenizer

2014-08-21 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14105729#comment-14105729
 ] 

Robert Muir commented on LUCENE-5897:
-

do we need a separate max buffer size parameter? can it just be an impl detail 
based on max token length?

 performance bug (adversary) in StandardTokenizer
 --

 Key: LUCENE-5897
 URL: https://issues.apache.org/jira/browse/LUCENE-5897
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir

 There seem to be some conditions (I don't know how rare or what conditions) 
 that cause StandardTokenizer to essentially hang on input: I haven't looked 
 hard yet, but as it's essentially a DFA I think something weird might be going 
 on.
 An easy way to reproduce is with 1MB of underscores; it will just hang 
 forever.
 {code}
 public void testWorthyAdversary() throws Exception {
   char[] buffer = new char[1024 * 1024];
   Arrays.fill(buffer, '_');
   int tokenCount = 0;
   Tokenizer ts = new StandardTokenizer();
   ts.setReader(new StringReader(new String(buffer)));
   ts.reset();
   while (ts.incrementToken()) {
     tokenCount++;
   }
   ts.end();
   ts.close();
   assertEquals(0, tokenCount);
 }
 {code} 
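As a rough intuition for why a single huge run can look like a hang (this is a toy model with an assumed buffer-growth policy, not Lucene's actual JFlex-generated scanner): if a tokenizer grows its buffer chunk by chunk while trying to match one enormous token, and re-examines the whole accumulated run after every refill, the total work is quadratic in the run length.

```java
// Hypothetical cost model of a scanner that accumulates a long run of
// characters, refilling its buffer by a fixed chunk and re-scanning the
// entire run from the start after each refill. Work grows quadratically
// with input length, so a 1MB run feels like an infinite hang.
class RescanCost {
    static long charsExamined(int inputLen, int chunk) {
        long examined = 0;
        int buffered = 0;
        while (buffered < inputLen) {
            buffered = Math.min(inputLen, buffered + chunk); // refill
            examined += buffered;                            // re-scan from start
        }
        return examined;
    }
}
```

In this model, growing the input 10x grows the work roughly 100x, which matches the "essentially hangs" symptom without any actual infinite loop.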



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5897) performance bug (adversary) in StandardTokenizer

2014-08-21 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14105745#comment-14105745
 ] 

Steve Rowe commented on LUCENE-5897:


bq. do we need a separate max buffer size parameter? can it just be an impl 
detail based on max token length?

It depends on whether we think anybody will want the (apparently minor) benefit 
of having a larger buffer, regardless of max token length.

 performance bug (adversary) in StandardTokenizer
 --

 Key: LUCENE-5897
 URL: https://issues.apache.org/jira/browse/LUCENE-5897
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir







[jira] [Commented] (LUCENE-5897) performance bug (adversary) in StandardTokenizer

2014-08-21 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14105763#comment-14105763
 ] 

Steve Rowe commented on LUCENE-5897:


Oh, and one other side effect that people might want: when buffer size is 
larger than max token length, too-large tokens are not emitted, and no attempt 
is made to find smaller matching prefixes.

These two seem like very minor benefits for a small audience, so I'm fine going 
without a separate max buffer size parameter.

 performance bug (adversary) in StandardTokenizer
 --

 Key: LUCENE-5897
 URL: https://issues.apache.org/jira/browse/LUCENE-5897
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir







Re: [CONF] Apache Solr Reference Guide Internal - TODO List

2014-08-21 Thread Chris Hostetter
: 
: I moved the items to the bottom (out of the 4.10 section), and didn’t 
out-right delete anything.

Ah ... sorry, i overlooked that part of the diff.

I'm going to move your comments back up into the 4.10 section for now so 
they're more obvious/visible to other folks working on 4.10 changes 
(example: someone might see your comment about not having any variable 
section to put ${dih.handlerName} in and be motivated to add one now, or 
might point out another place where that concept is briefly mentioned that 
could be updated, etc...)

: 
:   Erik
: 
: On Aug 20, 2014, at 5:24 PM, Chris Hostetter hossman_luc...@fucit.org wrote:
: 
:  
:  erik: please don't delete things from the todo list completely ... strike 
:  them out if they are done or N/A, and post a comment -- but don't delete 
:  them completely.
:  
:  that way they are still on people's radar and you can get more eyeballs on 
:  the changes (or lack of changes if people point out additional edits that 
:  are needed).
:  
:  
:  : Date: Wed, 20 Aug 2014 18:08:00 + (UTC)
:  : From: Erik Hatcher (Confluence) conflue...@apache.org
:  : Reply-To: dev@lucene.apache.org
:  : To: comm...@lucene.apache.org
:  : Subject: [CONF] Apache Solr Reference Guide  Internal - TODO List
:  : 
:  : [IMAGE]
:  : Erik Hatcher edited the page:
:  : 
:  : [IMAGE] INTERNAL - TODO LIST
:  : 
:  : View Online · Like · View Changes · Add Comment
:  : Stop watching space · Manage Notifications
:  : This message was sent by Atlassian Confluence 5.0.3, Team Collaboration 
Software
:  : 
:  
:  -Hoss
:  http://www.lucidworks.com/
:  
:  -
:  To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
:  For additional commands, e-mail: dev-h...@lucene.apache.org
: 
: 
: -
: To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
: For additional commands, e-mail: dev-h...@lucene.apache.org
: 
: 

-Hoss
http://www.lucidworks.com/


[jira] [Commented] (LUCENE-5672) Addindexes does not call maybeMerge

2014-08-21 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14105804#comment-14105804
 ] 

Mark Miller commented on LUCENE-5672:
-

I think we would have way more major concerns if this was a dangerous change.

This is a really ugly bug with a simple fix.

 Addindexes does not call maybeMerge
 ---

 Key: LUCENE-5672
 URL: https://issues.apache.org/jira/browse/LUCENE-5672
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
Assignee: Robert Muir
 Attachments: LUCENE-5672.patch


 I don't know why this was removed, but this is buggy and just asking for 
 trouble.






[jira] [Updated] (SOLR-6379) CloudSolrServer can query the wrong replica if a collection has a SolrCore name that matches a collection name.

2014-08-21 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-6379:
---

Attachment: SOLR-6379.pristine_collection.test.patch

bq. This looks like it's simply one of those corner case bugs that manifests 
when you have a collection that has core names that match another collection 
name. 

FWIW: i wanted to prove to myself that this was really the only problem - so i 
mangled TestQueriesWhileReplicasComeOnline.java into a 
TestQueriesWhileReplicasComeOnlineOfPristineCollection.java with the following 
changes:
* creates a new collection from scratch with a randomly generated name
* skips the initial batch of queries against the static index
* uses CLUSTERSTATUS to get the list of shards & replicas (since the test 
framework plumbing for this is all built around collection1)
* includes the random DELETEREPLICA logic that was missing from the previous 
test.
* loops until a min number of replica add/delete commands have been sent 
(async) instead of a fixed number of times

Even w/o anshum's change to the core vs collection name resolution, this new 
test sort of passes for me -- by which i mean it doesn't fail any assertions on 
comparing the results while it's randomly adding/removing replicas - but it 
does die horribly with tons of zombie ZK threads (why? I have no idea)


 CloudSolrServer can query the wrong replica if a collection has a SolrCore 
 name that matches a collection name.
 ---

 Key: SOLR-6379
 URL: https://issues.apache.org/jira/browse/SOLR-6379
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Hoss Man
Assignee: Anshum Gupta
Priority: Minor
 Fix For: 5.0, 4.10

 Attachments: SOLR-6379.patch, SOLR-6379.patch, SOLR-6379.patch, 
 SOLR-6379.patch, SOLR-6379.patch, SOLR-6379.pristine_collection.test.patch


 spin off of SOLR-2894 where sarowe & miller were getting failures from 
 TestCloudPivot that seemed unrelated to any of the distrib pivot logic itself.
 in particular: adding a call to waitForThingsToLevelOut at the start of the 
 test, even before indexing any docs, seemed to work around the problem -- but 
 even if all replicas aren't yet up when the test starts, we should either get 
 a failure when adding docs (ie: no replica hosting the target shard) or 
 queries should only be routed to the replicas that are up and fully caught up 
 with the rest of the collection.
 (NOTE: we're specifically talking about a situation where the set of docs in 
 the collection is static during the query request)






[jira] [Commented] (LUCENE-5672) Addindexes does not call maybeMerge

2014-08-21 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14105811#comment-14105811
 ] 

Robert Muir commented on LUCENE-5672:
-

While we think about it i will go to trunk and 4x with the change.

 Addindexes does not call maybeMerge
 ---

 Key: LUCENE-5672
 URL: https://issues.apache.org/jira/browse/LUCENE-5672
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
Assignee: Robert Muir
 Attachments: LUCENE-5672.patch








[jira] [Commented] (SOLR-6379) CloudSolrServer can query the wrong replica if a collection has a SolrCore name that matches a collection name.

2014-08-21 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14105837#comment-14105837
 ] 

Anshum Gupta commented on SOLR-6379:


[~hossman] So, I'll translate the above comment as we diagnosed the problem 
correctly. Let's look at the zk thread leaks in another JIRA?

 CloudSolrServer can query the wrong replica if a collection has a SolrCore 
 name that matches a collection name.
 ---

 Key: SOLR-6379
 URL: https://issues.apache.org/jira/browse/SOLR-6379
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Hoss Man
Assignee: Anshum Gupta
Priority: Minor
 Fix For: 5.0, 4.10

 Attachments: SOLR-6379.patch, SOLR-6379.patch, SOLR-6379.patch, 
 SOLR-6379.patch, SOLR-6379.patch, SOLR-6379.pristine_collection.test.patch








[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 608 - Still Failing

2014-08-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/608/

1 tests failed.
REGRESSION:  org.apache.lucene.index.TestIndexWriterOutOfMemory.testBasics

Error Message:
MockDirectoryWrapper: cannot close: there are still open files: {_1_1.dvd=1}

Stack Trace:
java.lang.RuntimeException: MockDirectoryWrapper: cannot close: there are still 
open files: {_1_1.dvd=1}
at 
__randomizedtesting.SeedInfo.seed([3C3A06BDFCB270AC:1E2A891C45C2EDC]:0)
at 
org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:672)
at 
org.apache.lucene.index.TestIndexWriterOutOfMemory.testBasics(TestIndexWriterOutOfMemory.java:85)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: unclosed IndexInput: _1_1.dvd
at 
org.apache.lucene.store.MockDirectoryWrapper.addFileHandle(MockDirectoryWrapper.java:559)
at 
org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:603)
at 
org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer.&lt;init&gt;(Lucene410DocValuesProducer.java:120)
at 

[jira] [Commented] (LUCENE-5895) Add per-segment and per-commit id to help replication

2014-08-21 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14105867#comment-14105867
 ] 

Michael McCandless commented on LUCENE-5895:


Since we are so close to releasing 4.10, I think this change should wait for 
the next release ...

 Add per-segment and per-commit id to help replication
 -

 Key: LUCENE-5895
 URL: https://issues.apache.org/jira/browse/LUCENE-5895
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 5.0, 4.11

 Attachments: LUCENE-5895.patch, LUCENE-5895.patch


 It would be useful if Lucene recorded a unique id for each segment written 
 and each commit point.  This way, file-based replicators can use this to 
 know whether the segment/commit they are looking at on a source machine and 
 dest machine are in fact the same.
 I know this would have been very useful when I was playing with NRT 
 replication (LUCENE-5438).
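One cheap way to obtain such an id (a sketch only; the attached patch may well derive ids differently, and `SegmentId`/`newId` are names invented here) is a random 128-bit value generated at write time: with that many random bits, accidental collisions between independently written segments are practically impossible, so matching ids imply matching segments.

```java
import java.security.SecureRandom;

// Sketch: a random 128-bit id per segment/commit, hex-encoded so it can be
// stored as metadata. Equal ids on source and destination then imply the
// two machines are looking at the same segment/commit.
class SegmentId {
    private static final SecureRandom RANDOM = new SecureRandom();

    static String newId() {
        byte[] bits = new byte[16];          // 128 random bits
        RANDOM.nextBytes(bits);
        StringBuilder hex = new StringBuilder(32);
        for (byte b : bits) {
            hex.append(String.format("%02x", b & 0xff));
        }
        return hex.toString();
    }
}
```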






[jira] [Updated] (LUCENE-5895) Add per-segment and per-commit id to help replication

2014-08-21 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-5895:
---

Fix Version/s: (was: 4.10)
   4.11

 Add per-segment and per-commit id to help replication
 -

 Key: LUCENE-5895
 URL: https://issues.apache.org/jira/browse/LUCENE-5895
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 5.0, 4.11

 Attachments: LUCENE-5895.patch, LUCENE-5895.patch








[jira] [Updated] (SOLR-6398) Add IterativeMergeStrategy to support Parallel Iterative Algorithms

2014-08-21 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-6398:
-

Attachment: SOLR-6398.patch

New patch with a simplified callBack mechanism. Will also provide more granular 
callBack support.

 Add IterativeMergeStrategy to support Parallel Iterative Algorithms
 ---

 Key: SOLR-6398
 URL: https://issues.apache.org/jira/browse/SOLR-6398
 Project: Solr
  Issue Type: Improvement
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-6398.patch, SOLR-6398.patch


 The main inspiration for this ticket came from this presentation:
 http://www.slideshare.net/jpatanooga/hadoop-summit-eu-2013-parallel-linear-regression-iterativereduce-and-yarn
 This ticket builds on the existing AnalyticsQuery / MergeStrategy framework 
 by adding the abstract class IterativeMergeStrategy,  which has built-in 
 support for call-backs to the shards. By extending this class you can plugin 
 parallel iterative algorithms that will run efficiently inside Solr.
 I will update this ticket with more information about the design soon.






[jira] [Commented] (LUCENE-5205) [PATCH] SpanQueryParser with recursion, analysis and syntax very similar to classic QueryParser

2014-08-21 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14105906#comment-14105906
 ] 

Paul Elschot commented on LUCENE-5205:
--

I just tried the pull command:
git pull https://github.com/tballison/lucene-solr lucene5205
from current trunk, commit 91dd8c4b1e430 .

This gave merge conflicts for TestComplexPhraseQuery.java and 
TestMultiAnalyzer.java, among others because of the recent removal of version 
arguments in the analysis code.

Also, the patch:
https://github.com/apache/lucene-solr/pull/68.patch
contains some unrelated code, see the list of commits above.

Tim, could you resolve the conflicts and post a new pull request and/or patch?
I'd like to have a starting point for LUCENE-5758.

 [PATCH] SpanQueryParser with recursion, analysis and syntax very similar to 
 classic QueryParser
 ---

 Key: LUCENE-5205
 URL: https://issues.apache.org/jira/browse/LUCENE-5205
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/queryparser
Reporter: Tim Allison
  Labels: patch
 Fix For: 4.9

 Attachments: LUCENE-5205-cleanup-tests.patch, 
 LUCENE-5205-date-pkg-prvt.patch, LUCENE-5205.patch.gz, LUCENE-5205.patch.gz, 
 LUCENE-5205_dateTestReInitPkgPrvt.patch, 
 LUCENE-5205_improve_stop_word_handling.patch, 
 LUCENE-5205_smallTestMods.patch, LUCENE_5205.patch, 
 SpanQueryParser_v1.patch.gz, patch.txt


 This parser extends QueryParserBase and includes functionality from:
 * Classic QueryParser: most of its syntax
 * SurroundQueryParser: recursive parsing for near and not clauses.
 * ComplexPhraseQueryParser: can handle near queries that include multiterms 
 (wildcard, fuzzy, regex, prefix),
 * AnalyzingQueryParser: has an option to analyze multiterms.
 At a high level, there's a first pass BooleanQuery/field parser and then a 
 span query parser handles all terminal nodes and phrases.
 Same as classic syntax:
 * term: test 
 * fuzzy: roam~0.8, roam~2
 * wildcard: te?t, test*, t*st
 * regex: /\[mb\]oat/
 * phrase: "jakarta apache"
 * phrase with slop: "jakarta apache"~3
 * default or clause: jakarta apache
 * grouping or clause: (jakarta apache)
 * boolean and +/-: (lucene OR apache) NOT jakarta; +lucene +apache -jakarta
 * multiple fields: title:lucene author:hatcher
  
 Main additions in SpanQueryParser syntax vs. classic syntax:
 * Can require in order for phrases with slop with the \~ operator: 
 jakarta apache\~3
 * Can specify not near: fever bieber!\~3,10 ::
 find fever but not if bieber appears within 3 words before or 10 
 words after it.
 * Fully recursive phrasal queries with \[ and \]; as in: \[\[jakarta 
 apache\]~3 lucene\]\~4 :: 
 find jakarta within 3 words of apache, and that hit has to be within 
 four words before lucene
 * Can also use \[\] for single level phrasal queries instead of  as in: 
 \[jakarta apache\]
 * Can use or grouping clauses in phrasal queries: apache (lucene solr)\~3 
 :: find apache and then either lucene or solr within three words.
 * Can use multiterms in phrasal queries: jakarta\~1 ap*che\~2
 * Did I mention full recursion: \[\[jakarta\~1 ap*che\]\~2 (solr~ 
 /l\[ou\]\+\[cs\]\[en\]\+/)]\~10 :: Find something like jakarta within two 
 words of ap*che and that hit has to be within ten words of something like 
 solr or that lucene regex.
 * Can require at least x number of hits at boolean level: apache AND (lucene 
 solr tika)~2
 * Can use negative only query: -jakarta :: Find all docs that don't contain 
 jakarta
 * Can use an edit distance > 2 for fuzzy query via SlowFuzzyQuery (beware of 
 potential performance issues!).
 Trivial additions:
 * Can specify prefix length in fuzzy queries: jakarta~1,2 (edit distance = 1, 
 prefix = 2)
 * Can specify Optimal String Alignment (OSA) vs Levenshtein for distance 
 <= 2: jakarta~1 (OSA) vs jakarta~1 (Levenshtein)
 This parser can be very useful for concordance tasks (see also LUCENE-5317 
 and LUCENE-5318) and for analytical search.  
 Until LUCENE-2878 is closed, this might have a use for fans of SpanQuery.
 Most of the documentation is in the javadoc for SpanQueryParser.
 Any and all feedback is welcome.  Thank you.






[jira] [Resolved] (SOLR-5683) Documentation of Suggester V2

2014-08-21 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett resolved SOLR-5683.
-

Resolution: Fixed

I'm ready to call this done at 
https://cwiki.apache.org/confluence/display/solr/Suggester.

[~varunthacker], [~areek]: if either of you have a chance, I would appreciate 
it if you could take a look and let me know if I got anything wrong. If you 
have edits, feel free to add them as comments to that page and I (or someone) 
will incorporate them as soon as I can.

 Documentation of Suggester V2
 -

 Key: SOLR-5683
 URL: https://issues.apache.org/jira/browse/SOLR-5683
 Project: Solr
  Issue Type: Task
  Components: SearchComponents - other
Reporter: Areek Zillur
Assignee: Cassandra Targett
 Fix For: 5.0, 4.10


 Place holder for documentation that will eventually end up in the Solr Ref 
 guide.
 
 The new Suggester Component allows Solr to fully utilize the Lucene 
 suggesters. 
 The main features are:
 - lookup pluggability (TODO: add description):
   -- AnalyzingInfixLookupFactory
   -- AnalyzingLookupFactory
   -- FuzzyLookupFactory
   -- FreeTextLookupFactory
   -- FSTLookupFactory
   -- WFSTLookupFactory
   -- TSTLookupFactory
   --  JaspellLookupFactory
- Dictionary pluggability (give users the option to choose the dictionary 
 implementation to use for their suggesters to consume)
-- Input from search index
   --- DocumentDictionaryFactory – user can specify suggestion field along 
 with optional weight and payload fields from their search index.
   --- DocumentExpressionFactory – same as DocumentDictionaryFactory but 
 allows users to specify arbitrary expression using existing numeric fields.
  --- HighFrequencyDictionaryFactory – user can specify a suggestion field 
 and specify a threshold to prune out less frequent terms.   
-- Input from external files
  --- FileDictionaryFactory – user can specify a file which contains 
 suggest entries, along with optional weights and payloads.
 Config (index time) options:
   - name - name of the suggester
   - sourceLocation - external file location (for file-based suggesters)
   - lookupImpl - type of lookup to use [default JaspellLookupFactory]
   - dictionaryImpl - type of dictionary to use (lookup input) [default
     (sourceLocation == null ? HighFrequencyDictionaryFactory :
     FileDictionaryFactory)]
   - storeDir - location on disk to store the in-memory data structure
   - buildOnCommit - whether to build the suggester on every commit
   - buildOnOptimize - whether to build the suggester on every optimize
 Query time options:
   - suggest.dictionary - name of the suggester to use (can occur multiple
     times to batch suggester requests)
   - suggest.count - number of suggestions to return
   - suggest.q - query to use for the lookup
   - suggest.build - command to build the suggester
   - suggest.reload - command to reload the suggester
   - suggest.buildAll - command to build all suggesters in the component
   - suggest.reloadAll - command to reload all suggesters in the component
 Example query:
 {code}
 http://localhost:8983/solr/suggest?suggest.dictionary=suggester1&suggest=true&suggest.build=true&suggest.q=elec
 {code}
 Distributed query:
 {code}
 http://localhost:7574/solr/suggest?suggest.dictionary=suggester2&suggest=true&suggest.build=true&suggest.q=elec&shards=localhost:8983/solr,localhost:7574/solr&shards.qt=/suggest
 {code}
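Because the raw suggest URLs carry several parameters, assembling them programmatically avoids mistakes with separators and escaping. A small illustrative sketch (hypothetical client code, not part of the patch), using only Python's standard library to build the query string from the documented suggest.* options:

```python
from urllib.parse import urlencode

# Suggest request parameters from the examples above; the host and port
# are just the example values (localhost:8983).
params = {
    "suggest.dictionary": "suggester1",
    "suggest": "true",
    "suggest.build": "true",
    "suggest.q": "elec",
}

# urlencode joins the pairs with '&' and percent-encodes values as needed.
url = "http://localhost:8983/solr/suggest?" + urlencode(params)
print(url)
```

The same approach extends to the distributed case by adding `shards` and `shards.qt` entries to the dict.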
 Response Format:
 The response format can be either XML or JSON. The typical response structure 
 is as follows:
 {code}
 {
   suggest: {
     suggester_name: {
       suggest_query: {
         numFound: ..,
         suggestions: [ {term: .., weight: .., payload: ..}, .. ]
       }
     }
   }
 }
 {code}
   
 Example Response:
 {code}
 {
   responseHeader: {
     status: 0,
     QTime: 3
   },
   suggest: {
     suggester1: {
       e: {
         numFound: 1,
         suggestions: [
           {
             term: "electronics and computer1",
             weight: 100,
             payload: ""
           }
         ]
       }
     },
     suggester2: {
       e: {
         numFound: 1,
         suggestions: [
           {
             term: "electronics and computer1",
             weight: 10,
             payload: ""
           }
         ]
       }
     }
   }
 }
 {code}
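To illustrate consuming a response of this shape, here is a minimal client-side sketch (hypothetical code, not part of the patch), assuming the JSON response has already been parsed into a Python dict mirroring the example above:

```python
# A parsed suggest response mirroring the example: two suggesters,
# each answering the lookup query "e" with one suggestion.
response = {
    "responseHeader": {"status": 0, "QTime": 3},
    "suggest": {
        "suggester1": {
            "e": {
                "numFound": 1,
                "suggestions": [
                    {"term": "electronics and computer1",
                     "weight": 100, "payload": ""}
                ],
            }
        },
        "suggester2": {
            "e": {
                "numFound": 1,
                "suggestions": [
                    {"term": "electronics and computer1",
                     "weight": 10, "payload": ""}
                ],
            }
        },
    },
}

def collect_suggestions(resp):
    """Return {suggester_name: [(term, weight), ...]} from a suggest response."""
    out = {}
    for name, queries in resp.get("suggest", {}).items():
        pairs = []
        # Each suggester maps the lookup query string to its result body.
        for body in queries.values():
            for s in body.get("suggestions", []):
                pairs.append((s["term"], s["weight"]))
        out[name] = pairs
    return out

print(collect_suggestions(response))
```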
 Example solrconfig snippet with multiple suggester configuration:
 {code}
 <searchComponent name="suggest" class="solr.SuggestComponent">
   <lst name="suggester">
     <str name="name">suggester1</str>
     <str name="lookupImpl">FuzzyLookupFactory</str>
     <str name="dictionaryImpl">DocumentDictionaryFactory</str>
   </lst>
 </searchComponent>
 {code}

[jira] [Updated] (SOLR-6398) Add IterativeMergeStrategy to support Parallel Iterative Algorithms

2014-08-21 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-6398:
-

Attachment: SOLR-6398.patch

 Add IterativeMergeStrategy to support Parallel Iterative Algorithms
 ---

 Key: SOLR-6398
 URL: https://issues.apache.org/jira/browse/SOLR-6398
 Project: Solr
  Issue Type: Improvement
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-6398.patch, SOLR-6398.patch, SOLR-6398.patch


 The main inspiration for this ticket came from this presentation:
 http://www.slideshare.net/jpatanooga/hadoop-summit-eu-2013-parallel-linear-regression-iterativereduce-and-yarn
 This ticket builds on the existing AnalyticsQuery / MergeStrategy framework 
 by adding the abstract class IterativeMergeStrategy,  which has built-in 
 support for call-backs to the shards. By extending this class you can plugin 
 parallel iterative algorithms that will run efficiently inside Solr.
 I will update this ticket with more information about the design soon.
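The pattern described here, a merge step that can call back to the shards and iterate until some stopping condition, can be sketched abstractly. The snippet below is purely illustrative and is not the Solr API: plain Python lists stand in for shards, and the per-iteration "call-back" is a local function that each shard answers with a partial gradient, which the coordinator merges to find the global mean by iterative descent.

```python
# Toy model of an iterative merge strategy: each round, the coordinator
# "calls back" to every shard for a partial result, merges them, and
# updates its estimate before the next round.
shards = [[1.0, 2.0, 3.0], [4.0, 5.0], [6.0]]

def shard_gradient(shard, center):
    # Per-shard contribution to the gradient of the squared-error
    # objective sum((center - x)^2) / 2 over that shard's values.
    return sum(center - x for x in shard)

def iterative_merge(shards, rounds=100, lr=0.1):
    center = 0.0
    n = sum(len(s) for s in shards)
    for _ in range(rounds):
        # Call back to every shard, then merge the partial gradients.
        grad = sum(shard_gradient(s, center) for s in shards) / n
        center -= lr * grad
    return center

# Converges toward the global mean of all shard values (3.5 here).
print(round(iterative_merge(shards), 3))
```

In Solr, the per-iteration call-backs would presumably go to the shards over HTTP, with the merge loop living in the IterativeMergeStrategy subclass, as the ticket describes.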



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6400) SolrCloud tests are not properly testing session expiration.

2014-08-21 Thread Mark Miller (JIRA)
Mark Miller created SOLR-6400:
-

 Summary: SolrCloud tests are not properly testing session 
expiration.
 Key: SOLR-6400
 URL: https://issues.apache.org/jira/browse/SOLR-6400
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller


We are using a test method from the ZK project to pause the connection for a 
length of time. A while back, I found that the pause time did not really 
matter. All that happens is that the connection is closed and the zk client 
creates a new one. So it just causes a dc event, but never reaches expiration.







[jira] [Updated] (SOLR-6398) Add IterativeMergeStrategy to support Parallel Iterative Algorithms

2014-08-21 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-6398:
-

Attachment: (was: SOLR-6398.patch)

 Add IterativeMergeStrategy to support Parallel Iterative Algorithms
 ---

 Key: SOLR-6398
 URL: https://issues.apache.org/jira/browse/SOLR-6398
 Project: Solr
  Issue Type: Improvement
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-6398.patch, SOLR-6398.patch, SOLR-6398.patch








[jira] [Updated] (SOLR-6398) Add IterativeMergeStrategy to support Parallel Iterative Algorithms

2014-08-21 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-6398:
-

Attachment: SOLR-6398.patch

 Add IterativeMergeStrategy to support Parallel Iterative Algorithms
 ---

 Key: SOLR-6398
 URL: https://issues.apache.org/jira/browse/SOLR-6398
 Project: Solr
  Issue Type: Improvement
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-6398.patch, SOLR-6398.patch, SOLR-6398.patch








[jira] [Commented] (SOLR-6379) CloudSolrServer can query the wrong replica if a collection has a SolrCore name that matches a collection name.

2014-08-21 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14105937#comment-14105937
 ] 

Hoss Man commented on SOLR-6379:


bq. Let's look at the zk thread leaks in another JIRA?

It could just be a stupid mistake in my test -- I'll defer to you on that one 
since you're more familiar with the add/delete replica stuff.

More to the point, though: in terms of the original concern (cores/replicas 
being used in queries when they shouldn't be), both tests seem useful.

 CloudSolrServer can query the wrong replica if a collection has a SolrCore 
 name that matches a collection name.
 ---

 Key: SOLR-6379
 URL: https://issues.apache.org/jira/browse/SOLR-6379
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Hoss Man
Assignee: Anshum Gupta
Priority: Minor
 Fix For: 5.0, 4.10

 Attachments: SOLR-6379.patch, SOLR-6379.patch, SOLR-6379.patch, 
 SOLR-6379.patch, SOLR-6379.patch, SOLR-6379.pristine_collection.test.patch


 spin off of SOLR-2894 where sarowe & miller were getting failures from 
 TestCloudPivot that seemed unrelated to any of the distrib pivot logic itself.
 in particular: adding a call to waitForThingsToLevelOut at the start of the 
 test, even before indexing any docs, seemed to work around the problem -- but 
 even if all replicas aren't yet up when the test starts, we should either get 
 a failure when adding docs (ie: no replica hosting the target shard) or 
 queries should only be routed to the replicas that are up and fully caught up 
 with the rest of the collection.
 (NOTE: we're specifically talking about a situation where the set of docs in 
 the collection is static during the query request)






[JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 93045 - Failure!

2014-08-21 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/93045/

1 tests failed.
REGRESSION:  org.apache.lucene.util.TestVersion.testDeprecations

Error Message:
LUCENE_4_11_0 should be deprecated

Stack Trace:
java.lang.AssertionError: LUCENE_4_11_0 should be deprecated
at 
__randomizedtesting.SeedInfo.seed([A45A2E778D4EE590:47F36B501E3E5D61]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.lucene.util.TestVersion.testDeprecations(TestVersion.java:176)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 829 lines...]
   [junit4] Suite: org.apache.lucene.util.TestVersion
   [junit4]   2 NOTE: reproduce with: ant test  -Dtestcase=TestVersion 
-Dtests.method=testDeprecations -Dtests.seed=A45A2E778D4EE590 -Dtests.slow=true 
-Dtests.locale=es_MX -Dtests.timezone=Europe/Vienna -Dtests.file.encoding=UTF-8
   [junit4] FAILURE 0.05s J6 | TestVersion.testDeprecations 
   [junit4] Throwable #1: java.lang.AssertionError: LUCENE_4_11_0 should 
be deprecated
   [junit4]at 

[jira] [Created] (SOLR-6401) Update CSS for Solr Ref Guide to format code examples automatically

2014-08-21 Thread Cassandra Targett (JIRA)
Cassandra Targett created SOLR-6401:
---

 Summary: Update CSS for Solr Ref Guide to format code examples 
automatically
 Key: SOLR-6401
 URL: https://issues.apache.org/jira/browse/SOLR-6401
 Project: Solr
  Issue Type: Improvement
  Components: documentation
Reporter: Cassandra Targett
Assignee: Uwe Schindler


In the online version of the Solr Ref Guide 
(https://cwiki.apache.org/confluence/solr), we have a number of code example 
boxes. We manually change the default styling to have a solid black border. It 
used to be relatively easy to set those properties each time you made a code 
box, but with the Confluence upgrades it's no longer simple: it requires a 
special level of access, and the convention isn't known by everyone, so we're 
getting a melange of styles.

What we'd like to do instead is set the formatting via the CSS. It's a simple 
change, but editing the CSS for the Solr space is limited to system admins of 
the Confluence instance only.

1. Go to 
https://cwiki.apache.org/confluence/spaces/viewstylesheet.action?key=solr and 
click edit (at the bottom of the page).

2. Add the following to the bottom of the existing CSS:

{code}
#content .code {
   border-color: black;
   border-width: 1px;
   border-style: solid;
}
{code}

3. Save and you're done.

[~thetaphi], I've assigned to you because AFAIK you are the only one in 
Lucene/Solr with system admin-level access to CWIKI. If someone else has that 
access, please reassign as appropriate.






[jira] [Updated] (SOLR-6400) SolrCloud tests are not properly testing session expiration.

2014-08-21 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-6400:
--

Attachment: SOLR-6400.patch

 SolrCloud tests are not properly testing session expiration.
 

 Key: SOLR-6400
 URL: https://issues.apache.org/jira/browse/SOLR-6400
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
 Attachments: SOLR-6400.patch


 We are using a test method from the ZK project to pause the connection for a 
 length of time. A while back, I found that the pause time did not really 
 matter. All that happens is that the connection is closed and the zk client 
 creates a new one. So it just causes a dc event, but never reaches expiration.






[JENKINS] Lucene-4x-Linux-Java7-64-test-only - Build # 29371 - Failure!

2014-08-21 Thread builder
Build: builds.flonkings.com/job/Lucene-4x-Linux-Java7-64-test-only/29371/

2 tests failed.
REGRESSION:  org.apache.lucene.util.TestVersion.testDeprecations

Error Message:
LUCENE_4_11_0 should be deprecated

Stack Trace:
java.lang.AssertionError: LUCENE_4_11_0 should be deprecated
at 
__randomizedtesting.SeedInfo.seed([FC4F704819C60CDB:1FE6356F8AB6B42A]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.lucene.util.TestVersion.testDeprecations(TestVersion.java:179)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


REGRESSION:  org.apache.lucene.util.TestVersion.testOnOrAfter

Error Message:
LATEST must be always onOrAfter(4.11.0)

Stack Trace:
java.lang.AssertionError: LATEST must be always onOrAfter(4.11.0)
at 
__randomizedtesting.SeedInfo.seed([FC4F704819C60CDB:E007E76CBDA4369]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 

Re: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 93045 - Failure!

2014-08-21 Thread Ryan Ernst
I just pushed a fix. My mistake in adding the new constant.

On Thu, Aug 21, 2014 at 1:54 PM,  buil...@flonkings.com wrote:
 Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/93045/

 1 tests failed.
 REGRESSION:  org.apache.lucene.util.TestVersion.testDeprecations

 Error Message:
 LUCENE_4_11_0 should be deprecated

 Stack Trace:
 java.lang.AssertionError: LUCENE_4_11_0 should be deprecated




 Build Log:
 [...truncated 829 lines...]
[junit4] Suite: org.apache.lucene.util.TestVersion
[junit4]   2 NOTE: reproduce with: ant test  -Dtestcase=TestVersion 
 -Dtests.method=testDeprecations -Dtests.seed=A45A2E778D4EE590 
 -Dtests.slow=true -Dtests.locale=es_MX -Dtests.timezone=Europe/Vienna 
 -Dtests.file.encoding=UTF-8
[junit4] FAILURE 

Re: [JENKINS] Lucene-4x-Linux-Java7-64-test-only - Build # 29371 - Failure!

2014-08-21 Thread Ryan Ernst
I fixed this as well.

On Thu, Aug 21, 2014 at 2:02 PM,  buil...@flonkings.com wrote:
 Build: builds.flonkings.com/job/Lucene-4x-Linux-Java7-64-test-only/29371/

 2 tests failed.
 REGRESSION:  org.apache.lucene.util.TestVersion.testDeprecations

 Error Message:
 LUCENE_4_11_0 should be deprecated

 Stack Trace:
 java.lang.AssertionError: LUCENE_4_11_0 should be deprecated


 REGRESSION:  org.apache.lucene.util.TestVersion.testOnOrAfter

 Error Message:
 LATEST must be always onOrAfter(4.11.0)

 Stack Trace:
 java.lang.AssertionError: LATEST must be always onOrAfter(4.11.0)
 at 
 __randomizedtesting.SeedInfo.seed([FC4F704819C60CDB:E007E76CBDA4369]:0)
  

[jira] [Commented] (SOLR-6400) SolrCloud tests are not properly testing session expiration.

2014-08-21 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14105973#comment-14105973
 ] 

Timothy Potter commented on SOLR-6400:
--

Great catch Mark! I feel bad because I noticed this too, assumed it was working 
as designed, and then just worked around it by calling expire directly on the 
session ;-)

 SolrCloud tests are not properly testing session expiration.
 

 Key: SOLR-6400
 URL: https://issues.apache.org/jira/browse/SOLR-6400
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
 Attachments: SOLR-6400.patch


 We are using a test method from the ZK project to pause the connection for a 
 length of time. A while back, I found that the pause time did not really 
 matter. All that happens is that the connection is closed and the zk client 
 creates a new one. So it just causes a dc event, but never reaches expiration.






RE: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 93045 - Failure!

2014-08-21 Thread Uwe Schindler
But that is a cool test! :-)

This is what I love - also with the new and old common-build.xml checks:

This one shows the next problem in 4.x branch:
   [junit4]   2 NOTE: reproduce with: ant test  -Dtestcase=TestVersion 
-Dtests.method=testLatestVersionCommonBuild -Dtests.seed=48874B3AF6322450 
-Dtests.locale=tr_TR -Dtests.timezone=America/Indiana/Indianapolis 
-Dtests.file.encoding=US-ASCII
   [junit4] FAILURE 0.05s | TestVersion.testLatestVersionCommonBuild 
   [junit4] Throwable #1: org.junit.ComparisonFailure: Version.LATEST does 
not match the one given in common-build.xml expected:<4.1[1].0> but 
was:<4.1[0].0>
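
A standalone sketch of the two invariants these tests enforce (this is not
Lucene's actual Version class; the version strings below are illustrative):
Version.LATEST must compare on-or-after every released version, and it must
equal the version declared in common-build.xml.

```java
import java.util.Arrays;

// Standalone sketch (not Lucene's Version class) of the version invariants
// checked by testOnOrAfter and testLatestVersionCommonBuild.
public class VersionCheck {

  // Parses "major.minor.bugfix" into numeric components.
  static int[] parse(String v) {
    return Arrays.stream(v.split("\\.")).mapToInt(Integer::parseInt).toArray();
  }

  // True if version a is the same as b or later.
  static boolean onOrAfter(String a, String b) {
    int[] x = parse(a), y = parse(b);
    for (int i = 0; i < Math.max(x.length, y.length); i++) {
      int xi = i < x.length ? x[i] : 0;
      int yi = i < y.length ? y[i] : 0;
      if (xi != yi) return xi > yi;
    }
    return true;  // equal versions count as "on or after"
  }

  public static void main(String[] args) {
    String latest = "4.11.0";       // stand-in for Version.LATEST
    String commonBuild = "4.10.0";  // stand-in for the common-build.xml value
    System.out.println(onOrAfter(latest, "4.11.0"));  // true
    // A mismatch here is exactly what the failure above reports:
    System.out.println(latest.equals(commonBuild));   // false
  }
}
```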

I am really happy,
Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Ryan Ernst [mailto:r...@iernst.net]
 Sent: Thursday, August 21, 2014 11:05 PM
 To: dev@lucene.apache.org
 Cc: sim...@apache.org
 Subject: Re: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 93045
 - Failure!
 
 I just pushed a fix. My mistake in adding the new constant.
 
 On Thu, Aug 21, 2014 at 1:54 PM,  buil...@flonkings.com wrote:
  Build:
  builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/93045/
 
  1 tests failed.
  REGRESSION:  org.apache.lucene.util.TestVersion.testDeprecations
 
  Error Message:
  LUCENE_4_11_0 should be deprecated
 
  Stack Trace:
  java.lang.AssertionError: LUCENE_4_11_0 should be deprecated
 at __randomizedtesting.SeedInfo.seed([A45A2E778D4EE590:47F36B501E3E5D61]:0)
 at org.junit.Assert.fail(Assert.java:93)
 at org.junit.Assert.assertTrue(Assert.java:43)
 at org.apache.lucene.util.TestVersion.testDeprecations(TestVersion.java:176)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
 at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
 at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
 at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
 at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
 at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
 at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
 at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
 at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
 at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
 at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
 at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
 at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
 at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
 at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at
 

[jira] [Commented] (SOLR-3617) Consider adding start scripts.

2014-08-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14105977#comment-14105977
 ] 

ASF subversion and git services commented on SOLR-3617:
---

Commit 1619591 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1619591 ]

SOLR-3617: minor issue if the user selects a port that is in use for the cloud 
example.

 Consider adding start scripts.
 --

 Key: SOLR-3617
 URL: https://issues.apache.org/jira/browse/SOLR-3617
 Project: Solr
  Issue Type: New Feature
Reporter: Mark Miller
Assignee: Timothy Potter
 Fix For: 5.0, 4.10

 Attachments: SOLR-3617.patch, SOLR-3617.patch, SOLR-3617.patch, 
 SOLR-3617.patch, SOLR-3617.patch


 I've always found that starting Solr with java -jar start.jar is a little odd 
 if you are not a java guy, but I think there are bigger pros than looking 
 less odd in shipping some start scripts.
 Not only do you get a cleaner start command:
 sh solr.sh or solr.bat or something
 But you also can do a couple other little nice things:
 * it becomes fairly obvious for a new casual user to see how to start the 
 system without reading doc.
 * you can make the working dir the location of the script - this lets you 
 call the start script from another dir and still have all the relative dir 
 setup work.
 * have an out of the box place to save startup params like -Xmx.
 * we could have multiple start scripts - say solr-dev.sh that logged to the 
 console and default to sys default for RAM - and also solr-prod which was 
 fully configured for logging, pegged Xms and Xmx at some larger value (1GB?) 
 etc.
 You would still of course be able to make the java cmd directly - and that is 
 probably what you would do when it's time to run as a service - but these 
 could be good starter scripts to get people on the right track and improve 
 the initial user experience.
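
 The "working dir" and "startup params" points above can be sketched in a few
 lines of shell. This is a hypothetical illustration only, not the script that
 eventually shipped with Solr:

```shell
#!/bin/sh
# Hypothetical start-script sketch, not the shipped Solr script.
# Resolve the directory containing this script, so relative paths in the
# config keep working no matter where the user invokes it from.
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
cd "$SCRIPT_DIR" || exit 1

# Out-of-the-box place to keep startup params like -Xmx (env-overridable).
JAVA_OPTS="${JAVA_OPTS:--Xmx512m}"

# A real script would `exec` this command; it is echoed here for illustration.
CMD="java $JAVA_OPTS -jar start.jar"
echo "$CMD"
```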






[jira] [Commented] (SOLR-3617) Consider adding start scripts.

2014-08-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14105978#comment-14105978
 ] 

ASF subversion and git services commented on SOLR-3617:
---

Commit 1619592 from [~thelabdude] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1619592 ]

SOLR-3617: minor issue if the user selects a port that is in use for the cloud 
example.







[jira] [Commented] (SOLR-3617) Consider adding start scripts.

2014-08-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14105979#comment-14105979
 ] 

ASF subversion and git services commented on SOLR-3617:
---

Commit 1619594 from [~thelabdude] in branch 'dev/branches/lucene_solr_4_10'
[ https://svn.apache.org/r1619594 ]

SOLR-3617: minor issue if the user selects a port that is in use for the cloud 
example.







[JENKINS] Lucene-4x-Linux-Java7-64-test-only - Build # 29372 - Still Failing!

2014-08-21 Thread builder
Build: builds.flonkings.com/job/Lucene-4x-Linux-Java7-64-test-only/29372/

1 tests failed.
REGRESSION:  org.apache.lucene.util.TestVersion.testLatestVersionCommonBuild

Error Message:
Version.LATEST does not match the one given in common-build.xml 
expected:<4.1[1].0> but was:<4.1[0].0>

Stack Trace:
org.junit.ComparisonFailure: Version.LATEST does not match the one given in 
common-build.xml expected:<4.1[1].0> but was:<4.1[0].0>
at 
__randomizedtesting.SeedInfo.seed([901BD37392503E42:742C2314416AF9DD]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at 
org.apache.lucene.util.TestVersion.testLatestVersionCommonBuild(TestVersion.java:190)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 136 lines...]
   [junit4] Suite: org.apache.lucene.util.TestVersion
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestVersion 
-Dtests.method=testLatestVersionCommonBuild -Dtests.seed=901BD37392503E42 
-Dtests.slow=true -Dtests.locale=de -Dtests.timezone=Universal 
-Dtests.file.encoding=UTF-8
   [junit4] 

[jira] [Commented] (LUCENE-5672) Addindexes does not call maybeMerge

2014-08-21 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14105980#comment-14105980
 ] 

Ryan Ernst commented on LUCENE-5672:


{quote}
I think we would have way more major concerns if this was a dangerous change.
This is a really ugly bug with a simple fix.
{quote}

I don't know enough about the history of this issue here, so I trust your 
judgement.  I'm just trying to be protective from a RM standpoint. :)

 Addindexes does not call maybeMerge
 ---

 Key: LUCENE-5672
 URL: https://issues.apache.org/jira/browse/LUCENE-5672
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
Assignee: Robert Muir
 Attachments: LUCENE-5672.patch


 I don't know why this was removed, but this is buggy and just asking for 
 trouble.






[JENKINS] Lucene-4x-Linux-Java7-64-test-only - Build # 29373 - Still Failing!

2014-08-21 Thread builder
Build: builds.flonkings.com/job/Lucene-4x-Linux-Java7-64-test-only/29373/

1 tests failed.
FAILED:  org.apache.lucene.util.TestVersion.testLatestVersionCommonBuild

Error Message:
Version.LATEST does not match the one given in common-build.xml 
expected:<4.1[1].0> but was:<4.1[0].0>

Stack Trace:
org.junit.ComparisonFailure: Version.LATEST does not match the one given in 
common-build.xml expected:<4.1[1].0> but was:<4.1[0].0>
at 
__randomizedtesting.SeedInfo.seed([6FB5ED8C852523F5:8B821DEB561FE46A]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at 
org.apache.lucene.util.TestVersion.testLatestVersionCommonBuild(TestVersion.java:190)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 1353 lines...]
   [junit4] Suite: org.apache.lucene.util.TestVersion
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestVersion 
-Dtests.method=testLatestVersionCommonBuild -Dtests.seed=6FB5ED8C852523F5 
-Dtests.slow=true -Dtests.locale=ar_KW -Dtests.timezone=America/Regina 
-Dtests.file.encoding=ISO-8859-1
   

Re: [JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 608 - Still Failing

2014-08-21 Thread Michael McCandless
I dug a bit on this, I think it's just that we are missing a try /
finally in the new SegmentDocValuesProducer to close all producers
that had been opened if we hit an exc.

Mike McCandless

http://blog.mikemccandless.com
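

The fix Mike describes is the standard close-what-you-opened-on-failure
pattern: when opening several closeable resources in sequence, any failure
partway through must close the ones already opened, or their files leak,
which is exactly what the MockDirectoryWrapper "still open files" check
reports. A self-contained sketch (not the actual SegmentDocValuesProducer
code; names are illustrative):

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Sketch of closing partially-opened resources when a later open fails.
public class CloseOnFailure {

  interface Opener { Closeable open() throws IOException; }

  // Opens every producer; on any failure, closes those opened so far
  // before rethrowing, so no file handles are leaked.
  static List<Closeable> openAll(List<Opener> openers) throws IOException {
    List<Closeable> opened = new ArrayList<>();
    boolean success = false;
    try {
      for (Opener o : openers) {
        opened.add(o.open());
      }
      success = true;
      return opened;
    } finally {
      if (!success) {
        for (Closeable c : opened) {
          try { c.close(); } catch (IOException ignored) {}
        }
      }
    }
  }

  public static void main(String[] args) throws Exception {
    List<Boolean> closed = new ArrayList<>();
    Opener ok = () -> () -> closed.add(true);  // close() records itself
    Opener bad = () -> { throw new IOException("simulated open failure"); };
    try {
      openAll(List.of(ok, ok, bad));
    } catch (IOException expected) {
      // Both successfully opened producers were closed before the rethrow.
      System.out.println("closed=" + closed.size());  // closed=2
    }
  }
}
```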


On Thu, Aug 21, 2014 at 3:47 PM, Apache Jenkins Server
jenk...@builds.apache.org wrote:
 Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/608/

 1 tests failed.
 REGRESSION:  org.apache.lucene.index.TestIndexWriterOutOfMemory.testBasics

 Error Message:
 MockDirectoryWrapper: cannot close: there are still open files: {_1_1.dvd=1}

 Stack Trace:
 java.lang.RuntimeException: MockDirectoryWrapper: cannot close: there are 
 still open files: {_1_1.dvd=1}
 at 
 __randomizedtesting.SeedInfo.seed([3C3A06BDFCB270AC:1E2A891C45C2EDC]:0)
 at 
 org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:672)
 at 
 org.apache.lucene.index.TestIndexWriterOutOfMemory.testBasics(TestIndexWriterOutOfMemory.java:85)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
 at 
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
 at 
 org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
 at java.lang.Thread.run(Thread.java:745)
 Caused by: java.lang.RuntimeException: unclosed IndexInput: 

ant precommit fails because of SimpleNaiveBayesClassifier?

2014-08-21 Thread Michael McCandless
Is anyone else seeing ant precommit failures...:

 [exec] 
build/docs/classification/org/apache/lucene/classification/SimpleNaiveBayesClassifier.html
 [exec]   missing Fields: analyzer
 [exec]   missing Fields: atomicReader
 [exec]   missing Fields: classFieldName
 [exec]   missing Fields: indexSearcher
 [exec]   missing Fields: query
 [exec]   missing Fields: textFieldNames
 [exec]   missing Methods: countDocsWithClass()
 [exec]   missing Methods: tokenizeDoc(java.lang.String)
 [exec]
 [exec] Missing javadocs were found!


Mike McCandless

http://blog.mikemccandless.com




RE: ant precommit fails because of SimpleNaiveBayesClassifier?

2014-08-21 Thread Uwe Schindler
Jenkins complains all the time...

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Michael McCandless [mailto:luc...@mikemccandless.com]
 Sent: Friday, August 22, 2014 12:22 AM
 To: Lucene/Solr dev
 Subject: ant precommit fails because of SimpleNaiveBayesClassifier?
 
 Is anyone else seeing ant precommit failures...:
 
  [exec] build/docs/classification/org/apache/lucene/classification/SimpleNaiveBayesClassifier.html
  [exec]   missing Fields: analyzer
  [exec]   missing Fields: atomicReader
  [exec]   missing Fields: classFieldName
  [exec]   missing Fields: indexSearcher
  [exec]   missing Fields: query
  [exec]   missing Fields: textFieldNames
  [exec]   missing Methods: countDocsWithClass()
  [exec]   missing Methods: tokenizeDoc(java.lang.String)
  [exec]
  [exec] Missing javadocs were found!
 
 
 Mike McCandless
 
 http://blog.mikemccandless.com
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional
 commands, e-mail: dev-h...@lucene.apache.org





Re: [JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 608 - Still Failing

2014-08-21 Thread Michael McCandless
I committed a fix.

Mike McCandless

http://blog.mikemccandless.com


On Thu, Aug 21, 2014 at 5:48 PM, Michael McCandless
luc...@mikemccandless.com wrote:
 I dug a bit on this, I think it's just that we are missing a try /
 finally in the new SegmentDocValuesProducer to close all producers
 that had been opened if we hit an exc.

 Mike McCandless

 http://blog.mikemccandless.com


 On Thu, Aug 21, 2014 at 3:47 PM, Apache Jenkins Server
 jenk...@builds.apache.org wrote:
 Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/608/
 

[jira] [Commented] (SOLR-6400) SolrCloud tests are not properly testing session expiration.

2014-08-21 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-6400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106072#comment-14106072 ]

ASF subversion and git services commented on SOLR-6400:
---

Commit 1619612 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1619612 ]

SOLR-6400: SolrCloud tests are not properly testing session expiration.

 SolrCloud tests are not properly testing session expiration.
 

 Key: SOLR-6400
 URL: https://issues.apache.org/jira/browse/SOLR-6400
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
 Attachments: SOLR-6400.patch


 We are using a test method from the ZK project to pause the connection for a
 length of time. A while back, I found that the pause time did not really
 matter. All that happens is that the connection is closed and the ZK client
 creates a new one. So it just causes a disconnect event, but never reaches expiration.
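The distinction described above can be sketched with a toy model (this is NOT the ZooKeeper client API; the class, method, and timeout values below are invented for illustration): a connection outage shorter than the session timeout only produces a disconnect, because the client reconnects and keeps its session. Since the broken test helper closed the connection and let the client reconnect immediately, the effective pause was near zero and expiration was never reached.

```java
// Toy model of disconnect vs. session expiration (names and semantics are
// invented, not the ZooKeeper API). The broken test helper closed the
// connection and the client reconnected at once, so the effective pause
// was ~0 ms and the session never expired.
public class SessionModel {
  // Event the client would see after its connection is down for pauseMs,
  // given a server-side session timeout of sessionTimeoutMs.
  static String eventAfterPause(long pauseMs, long sessionTimeoutMs) {
    return pauseMs < sessionTimeoutMs ? "Disconnected" : "Expired";
  }

  public static void main(String[] args) {
    long timeout = 10_000;
    // What the test intended: pause past the timeout => expiration.
    System.out.println(eventAfterPause(30_000, timeout)); // Expired
    // What actually happened: immediate reconnect => disconnect only.
    System.out.println(eventAfterPause(0, timeout));      // Disconnected
  }
}
```

The fix therefore has to expire the session on the server side rather than merely interrupting the connection.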



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5467) Provide Solr Ref Guide in .epub format

2014-08-21 Thread Hoss Man (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-5467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106071#comment-14106071 ]

Hoss Man commented on SOLR-5467:


I found a confluence plugin that might work.  Hard to be sure how well it will 
handle the quantity/formatting we use, but I've requested that infra install 
it: INFRA-8225

 Provide Solr Ref Guide in .epub format
 --

 Key: SOLR-5467
 URL: https://issues.apache.org/jira/browse/SOLR-5467
 Project: Solr
  Issue Type: Wish
  Components: documentation
Reporter: Cassandra Targett

 From the solr-user list, a request for an .epub version of the Solr Ref Guide.
 There are two possible approaches that immediately come to mind:
 * Ask infra to install a plugin that automatically outputs the Confluence 
 pages in .epub
 * Investigate converting HTML export to .epub with something like calibre
 There might be other options, and there would be additional issues for 
 automating the process of creation and publication, so for now just recording 
 the request with an issue.






[jira] [Commented] (SOLR-6400) SolrCloud tests are not properly testing session expiration.

2014-08-21 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-6400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106077#comment-14106077 ]

ASF subversion and git services commented on SOLR-6400:
---

Commit 1619613 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1619613 ]

SOLR-6400: SolrCloud tests are not properly testing session expiration.

 SolrCloud tests are not properly testing session expiration.
 

 Key: SOLR-6400
 URL: https://issues.apache.org/jira/browse/SOLR-6400
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
 Attachments: SOLR-6400.patch


 We are using a test method from the ZK project to pause the connection for a
 length of time. A while back, I found that the pause time did not really
 matter. All that happens is that the connection is closed and the ZK client
 creates a new one. So it just causes a disconnect event, but never reaches expiration.






Re: ant precommit fails because of SimpleNaiveBayesClassifier?

2014-08-21 Thread Michael McCandless
Looks like this particular failure is from LUCENE-5699 ...

Mike McCandless

http://blog.mikemccandless.com


On Thu, Aug 21, 2014 at 6:24 PM, Uwe Schindler u...@thetaphi.de wrote:
 Jenkins complains all the time...

 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de


 -Original Message-
 From: Michael McCandless [mailto:luc...@mikemccandless.com]
 Sent: Friday, August 22, 2014 12:22 AM
 To: Lucene/Solr dev
 Subject: ant precommit fails because of SimpleNaiveBayesClassifier?

 Is anyone else seeing ant precommit failures...:

  [exec] build/docs/classification/org/apache/lucene/classification/SimpleNaiveBayesClassifier.html
  [exec]   missing Fields: analyzer
  [exec]   missing Fields: atomicReader
  [exec]   missing Fields: classFieldName
  [exec]   missing Fields: indexSearcher
  [exec]   missing Fields: query
  [exec]   missing Fields: textFieldNames
  [exec]   missing Methods: countDocsWithClass()
  [exec]   missing Methods: tokenizeDoc(java.lang.String)
  [exec]
  [exec] Missing javadocs were found!


 Mike McCandless

 http://blog.mikemccandless.com








[jira] [Resolved] (SOLR-6235) SyncSliceTest fails on jenkins with no live servers available error

2014-08-21 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-6235.
---

   Resolution: Fixed
Fix Version/s: 5.0

 SyncSliceTest fails on jenkins with no live servers available error
 ---

 Key: SOLR-6235
 URL: https://issues.apache.org/jira/browse/SOLR-6235
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud, Tests
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: 5.0, 4.10

 Attachments: SOLR-6235.patch, SOLR-6235.patch


 {code}
 1 tests failed.
 FAILED:  org.apache.solr.cloud.SyncSliceTest.testDistribSearch
 Error Message:
 No live SolrServers available to handle this request
 Stack Trace:
 org.apache.solr.client.solrj.SolrServerException: No live SolrServers 
 available to handle this request
 at __randomizedtesting.SeedInfo.seed([685C57B3F25C854B:E9BAD9AB8503E577]:0)
 at org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:317)
 at org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:659)
 at org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
 at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
 at org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1149)
 at org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1118)
 at org.apache.solr.cloud.SyncSliceTest.doTest(SyncSliceTest.java:236)
 at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:865)
 {code}






[jira] [Commented] (LUCENE-5859) Remove Version from Analyzer constructors

2014-08-21 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-5859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106128#comment-14106128 ]

ASF subversion and git services commented on LUCENE-5859:
-

Commit 1619623 from [~sar...@syr.edu] in branch 'dev/trunk'
[ https://svn.apache.org/r1619623 ]

LUCENE-5859, LUCENE-5871: Remove Version.LUCENE_CURRENT from javadocs

 Remove Version from Analyzer constructors
 -

 Key: LUCENE-5859
 URL: https://issues.apache.org/jira/browse/LUCENE-5859
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
Assignee: Ryan Ernst
 Fix For: 5.0

 Attachments: LUCENE-5859.patch, LUCENE-5859_dead_code.patch


 This has always been a mess: analyzers are easy enough to make on your own; 
 we don't need to take responsibility for the user's analysis chain for 2 
 major releases.
 The code maintenance is horrible here.
 This creates a huge usability issue too, and as seen from numerous mailing 
 list issues, users don't even understand how this versioning works anyway.
 I'm sure someone will whine if I try to remove these constants, but we can at 
 least make no-arg ctors forwarding to VERSION_CURRENT so that people who 
 don't care about back compat (e.g. just prototyping) don't have to deal with 
 the horribly complex versioning system.
 If you want to make the argument that doing this is trappy (I heard this 
 before), I think that's bogus, and I'll counter by trying to remove them. 
 Either way, I'm personally not going to add any of this kind of back-compat 
 logic myself ever again.
 Updated: description of the issue updated as expected. We should remove this 
 API completely. No one else on the planet has APIs that require a mandatory 
 version parameter.
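The no-arg-ctor idea above can be sketched as follows (the class name and the int stand-in for Version are hypothetical, not the real Lucene API): keep the Version-taking constructor for users who pin back compat, and add a no-arg constructor that forwards to the current version.

```java
// Hypothetical sketch of the proposed pattern; MyAnalyzer and the int
// stand-in for Version.LUCENE_CURRENT are illustrative, not Lucene's API.
public class MyAnalyzer {
  static final int VERSION_CURRENT = 50;  // stand-in for Version.LUCENE_CURRENT

  final int matchVersion;

  // For prototypers who don't care about back compat: no Version at all.
  public MyAnalyzer() {
    this(VERSION_CURRENT);
  }

  // Back-compat users can still pin an older version explicitly.
  public MyAnalyzer(int matchVersion) {
    this.matchVersion = matchVersion;
  }
}
```

A caller who just wants current behavior writes `new MyAnalyzer()`; only code that needs index back compat passes a version.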






[jira] [Commented] (LUCENE-5871) Simplify or remove use of Version in IndexWriterConfig

2014-08-21 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-5871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106129#comment-14106129 ]

ASF subversion and git services commented on LUCENE-5871:
-

Commit 1619623 from [~sar...@syr.edu] in branch 'dev/trunk'
[ https://svn.apache.org/r1619623 ]

LUCENE-5859, LUCENE-5871: Remove Version.LUCENE_CURRENT from javadocs

 Simplify or remove use of Version in IndexWriterConfig
 --

 Key: LUCENE-5871
 URL: https://issues.apache.org/jira/browse/LUCENE-5871
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Ryan Ernst
Assignee: Ryan Ernst
 Fix For: 5.0

 Attachments: LUCENE-5871.iwclose.4x.patch, 
 LUCENE-5871.iwclose.trunk.patch, LUCENE-5871.patch, LUCENE-5871.patch, 
 LUCENE-5871.patch, LUCENE-5871.patch, LUCENE-5871.patch


 {{IndexWriter}} currently uses Version from {{IndexWriterConfig}} to 
 determine the semantics of {{close()}}.  This is a trapdoor for users, as 
 they often default to just sending Version.LUCENE_CURRENT since they don't 
 understand what it will be used for.  Instead, we should make the semantics 
 of close a direct option in IWC.






[jira] [Commented] (LUCENE-5859) Remove Version from Analyzer constructors

2014-08-21 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-5859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106138#comment-14106138 ]

ASF subversion and git services commented on LUCENE-5859:
-

Commit 1619625 from [~sar...@syr.edu] in branch 'dev/trunk'
[ https://svn.apache.org/r1619625 ]

LUCENE-5859: Remove Version.LUCENE_CURRENT from htmlentity.py code generator 
for HTMLStripCharFilter

 Remove Version from Analyzer constructors
 -

 Key: LUCENE-5859
 URL: https://issues.apache.org/jira/browse/LUCENE-5859
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
Assignee: Ryan Ernst
 Fix For: 5.0

 Attachments: LUCENE-5859.patch, LUCENE-5859_dead_code.patch


 This has always been a mess: analyzers are easy enough to make on your own; 
 we don't need to take responsibility for the user's analysis chain for 2 
 major releases.
 The code maintenance is horrible here.
 This creates a huge usability issue too, and as seen from numerous mailing 
 list issues, users don't even understand how this versioning works anyway.
 I'm sure someone will whine if I try to remove these constants, but we can at 
 least make no-arg ctors forwarding to VERSION_CURRENT so that people who 
 don't care about back compat (e.g. just prototyping) don't have to deal with 
 the horribly complex versioning system.
 If you want to make the argument that doing this is trappy (I heard this 
 before), I think that's bogus, and I'll counter by trying to remove them. 
 Either way, I'm personally not going to add any of this kind of back-compat 
 logic myself ever again.
 Updated: description of the issue updated as expected. We should remove this 
 API completely. No one else on the planet has APIs that require a mandatory 
 version parameter.






[jira] [Commented] (LUCENE-5895) Add per-segment and per-commit id to help replication

2014-08-21 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-5895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106136#comment-14106136 ]

ASF subversion and git services commented on LUCENE-5895:
-

Commit 1619624 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1619624 ]

LUCENE-5895: fix version in javadocs

 Add per-segment and per-commit id to help replication
 -

 Key: LUCENE-5895
 URL: https://issues.apache.org/jira/browse/LUCENE-5895
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 5.0, 4.11

 Attachments: LUCENE-5895.patch, LUCENE-5895.patch


 It would be useful if Lucene recorded a unique id for each segment written 
 and each commit point.  This way, file-based replicators can use this to 
 know whether the segment/commit they are looking at on a source machine and 
 dest machine are in fact the same.
 I know this would have been very useful when I was playing with NRT 
 replication (LUCENE-5438).
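The idea can be sketched like this (an illustrative sketch, not the actual Lucene implementation; the class and method names are invented): stamp each segment/commit with a random 128-bit id at write time, and let a replicator compare ids instead of trusting file names and lengths.

```java
import java.security.SecureRandom;
import java.util.Arrays;

// Illustrative sketch, not the actual Lucene implementation: a random
// 128-bit id recorded with each segment/commit lets a file-based
// replicator decide whether source and dest really hold the same data.
public class SegmentIds {
  private static final SecureRandom RANDOM = new SecureRandom();

  // 16 random bytes (128 bits): collision probability is negligible.
  static byte[] newId() {
    byte[] id = new byte[16];
    RANDOM.nextBytes(id);
    return id;
  }

  // Source and dest are the same iff their recorded ids match.
  static boolean sameData(byte[] sourceId, byte[] destId) {
    return Arrays.equals(sourceId, destId);
  }
}
```

The key property is that two independently written segments get different ids even when their file names and sizes coincide.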






[jira] [Reopened] (LUCENE-5699) Lucene classification score calculation normalize and return lists

2014-08-21 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless reopened LUCENE-5699:



Reopening to resolve the ant precommit failures and maybe the backport question ...

 Lucene classification score calculation normalize and return lists
 --

 Key: LUCENE-5699
 URL: https://issues.apache.org/jira/browse/LUCENE-5699
 Project: Lucene - Core
  Issue Type: Sub-task
  Components: modules/classification
Reporter: Gergő Törcsvári
Assignee: Tommaso Teofili
  Labels: gsoc2014
 Fix For: 5.0

 Attachments: 06-06-5699.patch, 0730.patch, 0803-base.patch, 
 0810-base.patch


 Now the classifiers can return only the best-matching classes. If somebody 
 wants to use them for more complex tasks, they need to modify these classes 
 to get the second- and third-best results too. If it is possible to return a 
 list, and it does not cost a lot of resources, why don't we do that? (We 
 iterate over a list anyway.)
 The Bayes classifier returned values that were too small, and there was a bug 
 with the zero floats; it was fixed with logarithms. It would be nice to scale 
 the sum of the class scores to one, so that we could compare the returned 
 score and relevance of two documents. (If we don't do this, the word count in 
 the test documents affects the result score.)
 With bulletpoints:
 * In the Bayes classification, normalize score values and return result 
 lists.
 * In the KNN classifier, add the possibility to return a result list.
 * Make ClassificationResult Comparable for list sorting.
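The bullet points could look roughly like this (class and method names are made up and do not match the classification module's real API): a Comparable result type, per-class scores normalized to sum to one, and the whole ranked list returned instead of only the single best class.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;

// Illustrative sketch of the proposal; names are invented, not the
// classification module's actual API.
public class RankedClassification {
  static class Result implements Comparable<Result> {
    final String assignedClass;
    final double score;
    Result(String assignedClass, double score) {
      this.assignedClass = assignedClass;
      this.score = score;
    }
    @Override public int compareTo(Result other) {
      return Double.compare(other.score, score);  // sort descending by score
    }
  }

  // Normalize raw per-class scores so they sum to 1, then return the
  // full list sorted best-first.
  static List<Result> rank(Map<String, Double> rawScores) {
    double sum = 0;
    for (double s : rawScores.values()) sum += s;
    List<Result> ranked = new ArrayList<>();
    for (Map.Entry<String, Double> e : rawScores.entrySet()) {
      ranked.add(new Result(e.getKey(), e.getValue() / sum));
    }
    Collections.sort(ranked);
    return ranked;
  }
}
```

Because the scores are normalized to sum to one, the top score is comparable across documents regardless of their word counts.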





