[jira] [Created] (SOLR-7588) naturalSort.js is provided as coffeescript instead of plain javascript

2015-05-22 Thread Derek Wood (JIRA)
Derek Wood created SOLR-7588:


 Summary: naturalSort.js is provided as coffeescript instead of 
plain javascript
 Key: SOLR-7588
 URL: https://issues.apache.org/jira/browse/SOLR-7588
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: Trunk
 Environment: Fedora 21

openjdk version "1.8.0_45"
OpenJDK Runtime Environment (build 1.8.0_45-b14)
OpenJDK 64-Bit Server VM (build 25.45-b02, mixed mode)
Reporter: Derek Wood


The Dataimport tab of a core will hang with a loading screen or display the 
previously accessed tab instead of showing the expected dataimport screen.

The console in Chrome shows the following errors, which make it obvious that 
the browser is trying to run un-transpiled CoffeeScript:

{noformat}
naturalSort.js?_=6.0.0:30 Uncaught SyntaxError: Unexpected token ILLEGAL
jquery.sammy.js?_=6.0.0:120 [Fri May 22 2015 23:36:59 GMT-0700 (MST)] runRoute 
get #/db/dataimport
dataimport.js?_=6.0.0:48 Uncaught ReferenceError: naturalSort is not defined
{noformat}

The file in question can be viewed here: 
https://svn.apache.org/viewvc/lucene/dev/trunk/solr/webapp/web/js/lib/naturalSort.js?view=markup

I was able to verify this in my own build as well as the nightly builds hosted 
on the Apache Jenkins server with the default DIH example ({{bin/solr start -e 
dih}}).

After replacing the coffeescript file with one transpiled to javascript 
(available at 
https://github.com/jarinudom/naturalSort.js/blob/master/dist/naturalSort.js), 
the dataimport tab worked as expected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_45) - Build #12782 - Failure!

2015-05-22 Thread Erick Erickson
thanks! I'll give it a whirl. It's particularly weird because my Mac Pro runs
everything just fine, while my laptop fails all the time.

On Fri, May 22, 2015 at 10:10 PM, Ishan Chattopadhyaya
 wrote:
> Not sure if it is related or helpful, but while debugging tests for
> SOLR-7468 yesterday, I encountered this
> java.lang.NoSuchFieldError:
> totalTermCount
>
> a few times. I had to forcefully clean at the root of the project and it
> worked. I remember Anshum had to do that clean thing more than once to make it
> work, and he remarked "don't ask why".
>
> Sent from my Windows Phone
> 
> From: Erick Erickson
> Sent: ‎5/‎23/‎2015 6:15 AM
> To: dev@lucene.apache.org
> Subject: Re: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_45) - Build
> #12782 - Failure!
>
> OK, this is somewhat weird. I still have the original tree that I
> checked in from, which was up to date before I committed the code, and
> the tests run fine from there. But a current trunk fails every time.
> Now, the machine it works on is my Mac Pro, and the failures are on my
> MacBook, so there may be something going on there.
>
> I've got to leave for a while, I'll copy the tree that works on the
> Pro, update the copy and see if this test fails when I get back. If
> they fail, I can diff the trees to see what changed and see if I can
> make any sense out of this.
>
> I can always @Ignore this test to cut down on the noise, probably do
> that tonight if I don't have any revelations.
>
> I see this stack trace which makes no sense to me whatsoever (see the
> lines with lots of * in front). I looked at where the code
> originates (BufferedUpdatesStream[277]) and it looks like this:
>
> if (coalescedUpdates != null && coalescedUpdates.totalTermCount != 0) {
>
> And it's telling me there's no such field? Wha
>
> Which is freaking me out since I don't see how this would trigger the
> exception. Is this a red herring? And, of course, this doesn't fail in
> IntelliJ but it does fail every time from the shell. Shhh.
>
> Of course if this were something fundamental to Lucene, it seems like
> this would be failing all over the place so I assume it's something to
> do with CDCR... But what do I know?
>
> 1:56434/source_collection_shard2_replica1/&commit_end_point=true&wt=javabin&version=2&expungeDeletes=false}
> status=0 QTime=8
> *   [junit4]   2> 143699 T370 n:127.0.0.1:56443_
> c:source_collection s:shard1 r:core_node3
> x:source_collection_shard1_replica1 C122 oasc.SolrException.log ERROR
> null:java.lang.RuntimeException: java.lang.NoSuchFieldError:
> totalTermCount
>[junit4]   2> at
> org.apache.solr.servlet.HttpSolrCall.sendError(HttpSolrCall.java:579)
>[junit4]   2> at
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:451)
>[junit4]   2> at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>[junit4]   2> at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>[junit4]   2> at
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>[junit4]   2> at
> org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:105)
>[junit4]   2> at
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>[junit4]   2> at
> org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:83)
>[junit4]   2> at
> org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:300)
>[junit4]   2> at
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>[junit4]   2> at
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>[junit4]   2> at
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
>[junit4]   2> at
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>[junit4]   2> at
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>[junit4]   2> at
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>[junit4]   2> at
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>[junit4]   2> at
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>[junit4]   2> at
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>[junit4]   2> at org.eclipse.jetty.server.Server.handle(Server.java:497)
>[junit4]   2> at
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>[junit4]   2> at
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>[junit4]   2> at
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>[junit4]   2> at
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>[junit4]   2> at
> org.eclipse.je

[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_45) - Build # 4843 - Still Failing!

2015-05-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4843/
Java: 64bit/jdk1.8.0_45 -XX:-UseCompressedOops -XX:+UseG1GC

5 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.CdcrRequestHandlerTest

Error Message:
Some resources were not closed, shutdown, or released.

Stack Trace:
java.lang.AssertionError: Some resources were not closed, shutdown, or released.
at __randomizedtesting.SeedInfo.seed([46273C4DCB58A707]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:234)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:799)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.CdcrRequestHandlerTest

Error Message:
5 threads leaked from SUITE scope at org.apache.solr.cloud.CdcrRequestHandlerTest:
   1) Thread[id=563, name=zkCallback-101-thread-1, state=TIMED_WAITING, group=TGRP-CdcrRequestHandlerTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
   2) Thread[id=586, name=zkCallback-101-thread-2, state=TIMED_WAITING, group=TGRP-CdcrRequestHandlerTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
   3) Thread[id=561, name=zkCallback-72-thread-3-processing-{node_name=127.0.0.1:52489_jn%2Fbc}-Send

[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_60-ea-b12) - Build # 12786 - Still Failing!

2015-05-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12786/
Java: 32bit/jdk1.8.0_60-ea-b12 -server -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CdcrReplicationDistributedZkTest.doTest

Error Message:
expected: but was:

Stack Trace:
org.junit.ComparisonFailure: expected: but was:
at 
__randomizedtesting.SeedInfo.seed([EF101301FFD92986:4854ABA592623A3F]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.assertState(BaseCdcrDistributedZkTest.java:255)
at 
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.doTestTargetCollectionNotAvailable(CdcrReplicationDistributedZkTest.java:113)
at 
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.doTest(CdcrReplicationDistributedZkTest.java:42)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrot

RE: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_45) - Build #12782 - Failure!

2015-05-22 Thread Ishan Chattopadhyaya
Not sure if it is related or helpful, but while debugging tests for SOLR-7468 
yesterday, I encountered this
java.lang.NoSuchFieldError:
totalTermCount

a few times. I had to forcefully clean at the root of the project and it
worked. I remember Anshum had to do that clean thing more than once to make it
work, and he remarked "don't ask why".

Sent from my Windows Phone

-Original Message-
From: "Erick Erickson" 
Sent: ‎5/‎23/‎2015 6:15 AM
To: "dev@lucene.apache.org" 
Subject: Re: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_45) - Build 
#12782 - Failure!

OK, this is somewhat weird. I still have the original tree that I
checked in from, which was up to date before I committed the code, and
the tests run fine from there. But a current trunk fails every time.
Now, the machine it works on is my Mac Pro, and the failures are on my
MacBook, so there may be something going on there.

I've got to leave for a while, I'll copy the tree that works on the
Pro, update the copy and see if this test fails when I get back. If
they fail, I can diff the trees to see what changed and see if I can
make any sense out of this.

I can always @Ignore this test to cut down on the noise, probably do
that tonight if I don't have any revelations.

I see this stack trace which makes no sense to me whatsoever (see the
lines with lots of * in front). I looked at where the code
originates (BufferedUpdatesStream[277]) and it looks like this:

if (coalescedUpdates != null && coalescedUpdates.totalTermCount != 0) {

And it's telling me there's no such field? Wha

Which is freaking me out since I don't see how this would trigger the
exception. Is this a red herring? And, of course, this doesn't fail in
IntelliJ but it does fail every time from the shell. Shhh.

Of course if this were something fundamental to Lucene, it seems like
this would be failing all over the place so I assume it's something to
do with CDCR... But what do I know?
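
One plausible reading, consistent with Ishan's note above that a forceful clean
fixed it: the test JVM is picking up a stale class file left over from an
incremental build, compiled before totalTermCount existed, so the field lookup
fails at link time even though the field is plainly there in the current source.
A minimal standalone sketch of the mechanism (hypothetical classes, not Lucene
code):

{noformat}
// Counter.java -- hypothetical stand-in for the class that gained the new field.
public class Counter {
  public int totalTermCount = 42;
}

// Caller.java -- hypothetical stand-in for the code doing the field access.
public class Caller {
  public static void main(String[] args) {
    Counter c = new Counter();
    // Field access is resolved by name when this class is linked. If Caller.class
    // was compiled against a Counter that has totalTermCount, but the classpath
    // still contains an older Counter.class without it (a stale incremental build),
    // this line throws java.lang.NoSuchFieldError: totalTermCount at runtime.
    // A full clean rebuild removes the stale class and the error goes away.
    System.out.println(c.totalTermCount);
  }
}
{noformat}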

1:56434/source_collection_shard2_replica1/&commit_end_point=true&wt=javabin&version=2&expungeDeletes=false}
status=0 QTime=8
*   [junit4]   2> 143699 T370 n:127.0.0.1:56443_
c:source_collection s:shard1 r:core_node3
x:source_collection_shard1_replica1 C122 oasc.SolrException.log ERROR
null:java.lang.RuntimeException: java.lang.NoSuchFieldError:
totalTermCount
   [junit4]   2> at
org.apache.solr.servlet.HttpSolrCall.sendError(HttpSolrCall.java:579)
   [junit4]   2> at
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:451)
   [junit4]   2> at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
   [junit4]   2> at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
   [junit4]   2> at
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
   [junit4]   2> at
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:105)
   [junit4]   2> at
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
   [junit4]   2> at
org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:83)
   [junit4]   2> at
org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:300)
   [junit4]   2> at
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
   [junit4]   2> at
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
   [junit4]   2> at
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
   [junit4]   2> at
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
   [junit4]   2> at
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
   [junit4]   2> at
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
   [junit4]   2> at
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
   [junit4]   2> at
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
   [junit4]   2> at
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
   [junit4]   2> at org.eclipse.jetty.server.Server.handle(Server.java:497)
   [junit4]   2> at
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
   [junit4]   2> at
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
   [junit4]   2> at
org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
   [junit4]   2> at
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
   [junit4]   2> at
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
   [junit4]   2> at java.lang.Thread.run(Thread.java:745)
   [junit4]   2> Caused by: java.lang.NoSuchFieldError: totalTermCount
*   [junit4]   2> at
org.apache.lucene.index.BufferedUpdatesStream.applyDeletesAndUpdates(BufferedUpdatesStream.java:277)
   [junit4]   2> at
org.apache.lucene.index

[jira] [Resolved] (SOLR-7335) Multivalue field that is boosted on indexing time has wrong norm.

2015-05-22 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-7335.

   Resolution: Fixed
Fix Version/s: 5.2
   Trunk

thanks for the patch!

> Multivalue field that is boosted on indexing time has wrong norm.
> -
>
> Key: SOLR-7335
> URL: https://issues.apache.org/jira/browse/SOLR-7335
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10, 5.0, 5.1
>Reporter: Shingo Sasaki
>Assignee: Hoss Man
>Priority: Critical
> Fix For: Trunk, 5.2
>
> Attachments: SOLR-7335.patch
>
>
> A multivalued field has a wrong norm when the field value is tokenized, the
> field or document is boosted, and the field is not the source of a copyField.
> {noformat}
> $ java -jar start.jar &
> $ echo '{
> "add": {
>   "doc": {
> "id":"no-boosted",
> "features": ["a","b","c"],
> "dyn_not_copied_txt": ["a","b","c"]
>   }
> },
> "add": {
>   "boost": 10,
>   "doc": {
> "id":"boosted",
> "features": ["a","b","c"],
> "dyn_not_copied_txt": ["a","b","c"]
>   }
> }}' > test.json
> $ curl 'http://localhost:8983/solr/update/json?commit=true' -H 
> 'Content-type:application/json' --data-binary @test.json
> {"responseHeader":{"status":0,"QTime":41}}
> $ curl 'http://localhost:8983/solr/select' -d 
> 'omitHeader=true&wt=json&indent=on&q=*:*&fl=id,norm(features),norm(dyn_not_copied_txt)'
> {
>   "response":{"numFound":2,"start":0,"docs":[
>   {
> "id":"no-boosted",
> "norm(features)":0.5,
> "norm(dyn_not_copied_txt)":0.5},
>   {
> "id":"boosted",
> "norm(features)":5.0,
> "norm(dyn_not_copied_txt)":512.0}]
>   }}
> {noformat}
> In the above example, "features" is the source of a copyField, while
> "dyn_not_copied_txt" is not.
> "features" and "dyn_not_copied_txt" have the same type attribute
> (type="text_general"), the same values ( ["a","b","c"] ) and the same boost,
> so both fields should have the same norm in the document.
> But in the boosted document only, the field that is not copied gets a much
> larger norm.
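
The arithmetic lines up if you assume DefaultSimilarity's length norm (index-time
boost divided by the square root of the token count) plus Lucene's lossy one-byte
norm encoding, which rounds 0.577 down to 0.5, 5.77 down to 5.0, and 577 down to
512. A rough back-of-the-envelope sketch of the two behaviours (hypothetical
class, not Solr/Lucene code):

{noformat}
// NormSketch.java -- hypothetical sketch of the numbers reported above.
public class NormSketch {
  public static void main(String[] args) {
    int numTerms = 3;                                  // ["a","b","c"] -> 3 tokens
    double lengthNorm = 1.0 / Math.sqrt(numTerms);     // ~0.577, encoded/stored as 0.5

    double docBoost = 10.0;

    // Expected behaviour: the document boost is applied once to the field.
    double expected = docBoost * lengthNorm;           // ~5.77, stored as 5.0

    // Buggy behaviour (SOLR-7335): for a multivalued field that is not the source
    // of a copyField, the boost was multiplied in once per field value.
    double buggy = Math.pow(docBoost, numTerms) * lengthNorm;  // ~577, stored as 512.0

    System.out.printf("expected ~ %.2f, buggy ~ %.2f%n", expected, buggy);
  }
}
{noformat}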



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: official ASF Jenkins Slave moved to Ubuntu 14.04

2015-05-22 Thread Shalin Shekhar Mangar
Thank you Uwe!

On Sat, May 23, 2015 at 2:53 AM, Uwe Schindler  wrote:

> Hi all,
>
> the old lucene-zones.apache.org machine, running on FreeBSD, was disabled
> an hour ago and all jobs were migrated. This old machine was not able to run
> Java 8 at all (it crashed all the time and had the famous FreeBSD blackhole).
> In addition, it was about to be decommissioned soon, so we moved the whole
> slave to an Ubuntu 14.04 machine.
>
> From now on all builds are running in a VMware machine (
> lucene1-us-west.apache.org) running Ubuntu 14.04 with 4 (virtual) cores
> [/proc/cpuinfo says: Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz] and 16 GiB
> of RAM.
>
> I changed all jobs to use the correct JDK version (because those are
> installed automatically now) - sorry for the mail flood about broken jobs.
> I hope all is fine now; if you find a problem, please respond to this mail
> with the Jenkins job and build number. Possible errors could be files not found
> (nightly jobs using Wikipedia dumps), the Maven upload to the snapshot repo not
> working, or cases where I missed changing the JDK version.
>
> I will now clean up the test security policy file so it no longer has the
> hardcoded FreeBSD localhost address workaround with the hardcoded hostname
> (will just heavy commit).
>
> Uwe
>
> P.S.: Thanks to Chris Lambertus and Gavin McDonald for their help during
> the migration.
>
> -
> Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: u...@thetaphi.de
>
>
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


-- 
Regards,
Shalin Shekhar Mangar.


[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b60) - Build # 12785 - Still Failing!

2015-05-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12785/
Java: 64bit/jdk1.9.0-ea-b60 -XX:-UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 10763 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/temp/junit4-J1-20150523_014702_234.syserr
   [junit4] >>> JVM J1: stderr (verbatim) 
   [junit4] WARN: Unhandled exception in event serialization. -> 
java.lang.StackOverflowError
   [junit4] at sun.nio.cs.UTF_8$Encoder.encodeLoop(UTF_8.java:691)
   [junit4] at 
java.nio.charset.CharsetEncoder.encode(CharsetEncoder.java:578)
   [junit4] at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:274)
   [junit4] at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:125)
   [junit4] at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:135)
   [junit4] at java.io.OutputStreamWriter.write(OutputStreamWriter.java:220)
   [junit4] at java.io.Writer.write(Writer.java:157)
   [junit4] at 
com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.stream.JsonWriter.newline(JsonWriter.java:548)
   [junit4] at 
com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.stream.JsonWriter.beforeValue(JsonWriter.java:589)
   [junit4] at 
com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.stream.JsonWriter.value(JsonWriter.java:363)
   [junit4] at 
com.carrotsearch.ant.tasks.junit4.events.Serializer.serialize(Serializer.java:86)
   [junit4] at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMain$4.write(SlaveMain.java:410)
   [junit4] at 
java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
   [junit4] at 
java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
   [junit4] at java.io.PrintStream.flush(PrintStream.java:338)
   [junit4] at java.io.FilterOutputStream.flush(FilterOutputStream.java:142)
   [junit4] at java.io.PrintStream.write(PrintStream.java:482)
   [junit4] at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
   [junit4] at 
sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:294)
   [junit4] at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:298)
   [junit4] at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:141)
   [junit4] at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
   [junit4] at 
org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:59)
   [junit4] at 
org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:324)
   [junit4] at 
org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
   [junit4] at 
org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
   [junit4] at 
org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
   [junit4] at org.apache.log4j.Category.callAppenders(Category.java:206)
   [junit4] at org.apache.log4j.Category.forcedLog(Category.java:391)
   [junit4] at org.apache.log4j.Category.log(Category.java:856)
   [junit4] at 
org.slf4j.impl.Log4jLoggerAdapter.info(Log4jLoggerAdapter.java:304)
   [junit4] at 
org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:207)
   [junit4] at 
org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:198)
   [junit4] at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:159)
   [junit4] at 
org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:348)
   [junit4] at 
org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:256)
   [junit4] at 
org.apache.solr.cloud.ShardLeaderElectionContext.rejoinLeaderElection(ElectionContext.java:492)
   [junit4] at 
org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:248)
   [junit4] at 
org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:198)
   [junit4] at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:159)
   [junit4] at 
org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:348)
   [junit4] at 
org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:256)
   [junit4] at 
org.apache.solr.cloud.ShardLeaderElectionContext.rejoinLeaderElection(ElectionContext.java:492)
   [junit4] at 
org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:248)
   [junit4] at 
org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:198)
   [junit4] at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:159)
   [junit4] at 
org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:348)
   [junit4] at 
org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:256)
   [junit4] at 
org.apache.solr.cloud.ShardLeaderElectionContext.rejoinLeaderElection(ElectionContex

[jira] [Commented] (SOLR-7335) Multivalue field that is boosted on indexing time has wrong norm.

2015-05-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14557113#comment-14557113
 ] 

ASF subversion and git services commented on SOLR-7335:
---

Commit 1681260 from hoss...@apache.org in branch 'dev/branches/lucene_solr_5_2'
[ https://svn.apache.org/r1681260 ]

SOLR-7335: Fix doc boosts to no longer be multiplied in each field value in 
multivalued fields that are not used in copyFields (merge r1681249)

> Multivalue field that is boosted on indexing time has wrong norm.
> -
>
> Key: SOLR-7335
> URL: https://issues.apache.org/jira/browse/SOLR-7335
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10, 5.0, 5.1
>Reporter: Shingo Sasaki
>Assignee: Hoss Man
>Priority: Critical
> Attachments: SOLR-7335.patch
>
>
> A multivalued field has a wrong norm when the field value is tokenized, the
> field or document is boosted, and the field is not the source of a copyField.
> {noformat}
> $ java -jar start.jar &
> $ echo '{
> "add": {
>   "doc": {
> "id":"no-boosted",
> "features": ["a","b","c"],
> "dyn_not_copied_txt": ["a","b","c"]
>   }
> },
> "add": {
>   "boost": 10,
>   "doc": {
> "id":"boosted",
> "features": ["a","b","c"],
> "dyn_not_copied_txt": ["a","b","c"]
>   }
> }}' > test.json
> $ curl 'http://localhost:8983/solr/update/json?commit=true' -H 
> 'Content-type:application/json' --data-binary @test.json
> {"responseHeader":{"status":0,"QTime":41}}
> $ curl 'http://localhost:8983/solr/select' -d 
> 'omitHeader=true&wt=json&indent=on&q=*:*&fl=id,norm(features),norm(dyn_not_copied_txt)'
> {
>   "response":{"numFound":2,"start":0,"docs":[
>   {
> "id":"no-boosted",
> "norm(features)":0.5,
> "norm(dyn_not_copied_txt)":0.5},
>   {
> "id":"boosted",
> "norm(features)":5.0,
> "norm(dyn_not_copied_txt)":512.0}]
>   }}
> {noformat}
> In the above example, "features" is the source of a copyField, while
> "dyn_not_copied_txt" is not.
> "features" and "dyn_not_copied_txt" have the same type attribute
> (type="text_general"), the same values ( ["a","b","c"] ) and the same boost,
> so both fields should have the same norm in the document.
> But in the boosted document only, the field that is not copied gets a much
> larger norm.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



why didn't the test timeout? -- was: Re: [jira] [Updated] (SOLR-7587) TestSpellCheckResponse stalled and never timed out -- possible VersionBucket bug? (5.2 branch)

2015-05-22 Thread Chris Hostetter

Dawid: separate from the questions raised in Jira about the underlying 
problem in Solr, any ideas why the framework didn't time this test out 
long before the 110-minute mark when i noticed it still running?

(I don't see anything in the test or its base class overriding the default 
timeouts.)
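
For reference, a minimal sketch of how a per-test timeout can be pinned down with
the com.carrotsearch.randomizedtesting annotations the test framework is built on
(a hypothetical standalone test, not a claim about what TestSpellCheckResponse or
its base class actually configures):

{noformat}
import com.carrotsearch.randomizedtesting.RandomizedRunner;
import com.carrotsearch.randomizedtesting.annotations.Timeout;
import org.junit.Test;
import org.junit.runner.RunWith;

// TimeoutSketchTest.java -- hypothetical example only.
@RunWith(RandomizedRunner.class)
public class TimeoutSketchTest {

  @Test
  @Timeout(millis = 5000)       // fail this test method if it runs longer than ~5 seconds
  public void stalls() throws Exception {
    Thread.sleep(60000);        // simulated stall; the runner aborts it at the timeout
  }
}
{noformat}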


: Date: Sat, 23 May 2015 03:23:17 + (UTC)
: From: "Hoss Man (JIRA)" 
: Reply-To: dev@lucene.apache.org
: To: dev@lucene.apache.org
: Subject: [jira] [Updated] (SOLR-7587) TestSpellCheckResponse stalled and never
:  timed out -- possible VersionBucket bug? (5.2 branch)
: 
: 
:  [ 
https://issues.apache.org/jira/browse/SOLR-7587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]
: 
: Hoss Man updated SOLR-7587:
: ---
: Attachment: jstack.1.txt
: jstack.2.txt
: junit4-J0-20150522_181244_599.events
: junit4-J0-20150522_181244_599.spill
: junit4-J0-20150522_181244_599.suites
: 
: 2 thread dumps, and the non-empty J0 files from 
solr/build/solr-solrj/test/temp/
: 
: Most interesting looking thread...
: 
: {noformat}
: "TEST-TestSpellCheckResponse.testSpellCheckResponse-seed#[FA0A9DF72EDC5BCD]" 
prio=10 tid=0x7f10843da000 nid=0x2ff9 waiting on condition 
[0x7f10c10f1000]
:java.lang.Thread.State: WAITING (parking)
: at sun.misc.Unsafe.park(Native Method)
: - parking to wait for  <0xf7f383e0> (a 
java.util.concurrent.locks.ReentrantReadWriteLock$FairSync)
: at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
: at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
: at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:867)
: at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1197)
: at 
java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock(ReentrantReadWriteLock.java:945)
: at 
org.apache.solr.update.VersionInfo.blockUpdates(VersionInfo.java:118)
: at 
org.apache.solr.update.UpdateLog.onFirstSearcher(UpdateLog.java:1604)
: at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1810)
: at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1505)
: at 
org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:617)
: - locked <0xf6f09a10> (a java.lang.Object)
: at 
org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:95)
: at 
org.apache.solr.update.processor.UpdateRequestProcessor.processCommit(UpdateRequestProcessor.java:64)
: at 
org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalCommit(DistributedUpdateProcessor.java:1635)
: at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processCommit(DistributedUpdateProcessor.java:1612)
: at 
org.apache.solr.update.processor.LogUpdateProcessor.processCommit(LogUpdateProcessorFactory.java:161)
: at 
org.apache.solr.handler.RequestHandlerUtils.handleCommit(RequestHandlerUtils.java:69)
: at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
: at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
: at org.apache.solr.core.SolrCore.execute(SolrCore.java:2051)
: at 
org.apache.solr.client.solrj.embedded.EmbeddedSolrServer.request(EmbeddedSolrServer.java:179)
: at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
: at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:483)
: at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:502)
: at 
org.apache.solr.client.solrj.response.TestSpellCheckResponse.testSpellCheckResponse(TestSpellCheckResponse.java:51)
: {noformat}
: 
: 
: 
: > TestSpellCheckResponse stalled and never timed out -- possible 
VersionBucket bug? (5.2 branch)
: > 
--
: >
: > Key: SOLR-7587
: > URL: https://issues.apache.org/jira/browse/SOLR-7587
: > Project: Solr
: >  Issue Type: Bug
: >Reporter: Hoss Man
: > Attachments: jstack.1.txt, jstack.2.txt, 
junit4-J0-20150522_181244_599.events, junit4-J0-20150522_181244_599.spill, 
junit4-J0-20150522_181244_599.suites
: >
: >
: > On the 5.2 branch (r1681250), I encountered a solrj test stalled for over 
110 minutes before i finally killed it...
: > {noformat}
: >[junit4] Suite: org.apache.solr.common.util.TestRetryUtil
: >[junit4] Completed [55/60] on J1 in 1.04s, 1 test
: >[junit4] 
: >[junit4] HEARTBEAT J0 PID(12147@tray): 2015-05-22T18:14:56, stalled for  
121s at: TestSpellCheckRe

[jira] [Updated] (SOLR-7587) TestSpellCheckResponse stalled and never timed out -- possible VersionBucket bug? (5.2 branch)

2015-05-22 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-7587:
---
 Priority: Blocker  (was: Major)
Fix Version/s: 5.2
 Assignee: Timothy Potter

marking as a blocker for 5.2 and assigning to tim to triage.

(i know tim recently made some VersionBucket changes in SOLR-7332, so I'm 
suspicious that this might be related -- tim, if i'm wrong, and you're 
confident this is unrelated, please feel free to unassign and mark non-blocker.)

> TestSpellCheckResponse stalled and never timed out -- possible VersionBucket 
> bug? (5.2 branch)
> --
>
> Key: SOLR-7587
> URL: https://issues.apache.org/jira/browse/SOLR-7587
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Timothy Potter
>Priority: Blocker
> Fix For: 5.2
>
> Attachments: jstack.1.txt, jstack.2.txt, 
> junit4-J0-20150522_181244_599.events, junit4-J0-20150522_181244_599.spill, 
> junit4-J0-20150522_181244_599.suites
>
>
> On the 5.2 branch (r1681250), I encountered a solrj test stalled for over 110 
> minutes before i finally killed it...
> {noformat}
>[junit4] Suite: org.apache.solr.common.util.TestRetryUtil
>[junit4] Completed [55/60] on J1 in 1.04s, 1 test
>[junit4] 
>[junit4] HEARTBEAT J0 PID(12147@tray): 2015-05-22T18:14:56, stalled for  
> 121s at: TestSpellCheckResponse.testSpellCheckResponse
>[junit4] HEARTBEAT J0 PID(12147@tray): 2015-05-22T18:15:56, stalled for  
> 181s at: TestSpellCheckResponse.testSpellCheckResponse
> ...
>[junit4] HEARTBEAT J0 PID(12147@tray): 2015-05-22T20:00:56, stalled for 
> 6481s at: TestSpellCheckResponse.testSpellCheckResponse
>[junit4] HEARTBEAT J0 PID(12147@tray): 2015-05-22T20:01:56, stalled for 
> 6541s at: TestSpellCheckResponse.testSpellCheckResponse
>[junit4] HEARTBEAT J0 PID(12147@tray): 2015-05-22T20:02:56, stalled for 
> 6601s at: TestSpellCheckResponse.testSpellCheckResponse
> {noformat}
> I'll attach some jstack output as well as all the temp files from the J0 
> runner.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7587) TestSpellCheckResponse stalled and never timed out -- possible VersionBucket bug? (5.2 branch)

2015-05-22 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-7587:
---
Attachment: jstack.1.txt
jstack.2.txt
junit4-J0-20150522_181244_599.events
junit4-J0-20150522_181244_599.spill
junit4-J0-20150522_181244_599.suites

2 thread dumps, and the non-empty J0 files from solr/build/solr-solrj/test/temp/

Most interesting looking thread...

{noformat}
"TEST-TestSpellCheckResponse.testSpellCheckResponse-seed#[FA0A9DF72EDC5BCD]" 
prio=10 tid=0x7f10843da000 nid=0x2ff9 waiting on condition 
[0x7f10c10f1000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0xf7f383e0> (a 
java.util.concurrent.locks.ReentrantReadWriteLock$FairSync)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:867)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1197)
at 
java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock(ReentrantReadWriteLock.java:945)
at org.apache.solr.update.VersionInfo.blockUpdates(VersionInfo.java:118)
at org.apache.solr.update.UpdateLog.onFirstSearcher(UpdateLog.java:1604)
at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1810)
at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1505)
at 
org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:617)
- locked <0xf6f09a10> (a java.lang.Object)
at 
org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:95)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processCommit(UpdateRequestProcessor.java:64)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalCommit(DistributedUpdateProcessor.java:1635)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processCommit(DistributedUpdateProcessor.java:1612)
at 
org.apache.solr.update.processor.LogUpdateProcessor.processCommit(LogUpdateProcessorFactory.java:161)
at 
org.apache.solr.handler.RequestHandlerUtils.handleCommit(RequestHandlerUtils.java:69)
at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2051)
at 
org.apache.solr.client.solrj.embedded.EmbeddedSolrServer.request(EmbeddedSolrServer.java:179)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:483)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:502)
at 
org.apache.solr.client.solrj.response.TestSpellCheckResponse.testSpellCheckResponse(TestSpellCheckResponse.java:51)
{noformat}
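
For what it's worth, that thread is parked on the write side of a fair
ReentrantReadWriteLock inside VersionInfo.blockUpdates, which is what you would
see if some other thread were still holding the read lock and never releasing it
(or if a thread tried to upgrade from read to write, which ReentrantReadWriteLock
does not allow). A minimal standalone sketch of that state (hypothetical code,
not Solr's):

{noformat}
import java.util.concurrent.locks.ReentrantReadWriteLock;

// WriteLockStall.java -- hypothetical demo; kill the JVM to stop it.
public class WriteLockStall {
  public static void main(String[] args) throws InterruptedException {
    // Fair lock, matching the ReentrantReadWriteLock$FairSync in the jstack above.
    ReentrantReadWriteLock lock = new ReentrantReadWriteLock(true);

    // Simulate an update thread that grabbed the read lock and never lets go.
    Thread reader = new Thread(() -> {
      lock.readLock().lock();
      try {
        Thread.sleep(Long.MAX_VALUE);          // never finishes, never unlocks
      } catch (InterruptedException ignored) {
      } finally {
        lock.readLock().unlock();
      }
    }, "stuck-reader");
    reader.setDaemon(true);
    reader.start();
    Thread.sleep(200);                         // let the reader win the lock first

    System.out.println("blocking updates (writeLock().lock()) ...");
    lock.writeLock().lock();                   // parks forever: WAITING (parking) on FairSync
    System.out.println("never reached");
  }
}
{noformat}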



> TestSpellCheckResponse stalled and never timed out -- possible VersionBucket 
> bug? (5.2 branch)
> --
>
> Key: SOLR-7587
> URL: https://issues.apache.org/jira/browse/SOLR-7587
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
> Attachments: jstack.1.txt, jstack.2.txt, 
> junit4-J0-20150522_181244_599.events, junit4-J0-20150522_181244_599.spill, 
> junit4-J0-20150522_181244_599.suites
>
>
> On the 5.2 branch (r1681250), I encountered a solrj test stalled for over 110 
> minutes before i finally killed it...
> {noformat}
>[junit4] Suite: org.apache.solr.common.util.TestRetryUtil
>[junit4] Completed [55/60] on J1 in 1.04s, 1 test
>[junit4] 
>[junit4] HEARTBEAT J0 PID(12147@tray): 2015-05-22T18:14:56, stalled for  
> 121s at: TestSpellCheckResponse.testSpellCheckResponse
>[junit4] HEARTBEAT J0 PID(12147@tray): 2015-05-22T18:15:56, stalled for  
> 181s at: TestSpellCheckResponse.testSpellCheckResponse
> ...
>[junit4] HEARTBEAT J0 PID(12147@tray): 2015-05-22T20:00:56, stalled for 
> 6481s at: TestSpellCheckResponse.testSpellCheckResponse
>[junit4] HEARTBEAT J0 PID(12147@tray): 2015-05-22T20:01:56, stalled for 
> 6541s at: TestSpellCheckResponse.testSpellCheckResponse
>[junit4] HEARTBEAT J0 PID(12147@tray): 2015-05-22T20:02:56, stalled for 
> 6601s at: TestSpellCheckResponse.testSpellCheckResponse
> {noformat}
> I'll attach some jstack output as well as all the temp files from the J0 
> runner.



--
This message was sen

[jira] [Created] (SOLR-7587) TestSpellCheckResponse stalled and never timed out -- possible VersionBucket bug? (5.2 branch)

2015-05-22 Thread Hoss Man (JIRA)
Hoss Man created SOLR-7587:
--

 Summary: TestSpellCheckResponse stalled and never timed out -- 
possible VersionBucket bug? (5.2 branch)
 Key: SOLR-7587
 URL: https://issues.apache.org/jira/browse/SOLR-7587
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man


On the 5.2 branch (r1681250), I encountered a solrj test stalled for over 110 
minutes before i finally killed it...


{noformat}
   [junit4] Suite: org.apache.solr.common.util.TestRetryUtil
   [junit4] Completed [55/60] on J1 in 1.04s, 1 test
   [junit4] 
   [junit4] HEARTBEAT J0 PID(12147@tray): 2015-05-22T18:14:56, stalled for  
121s at: TestSpellCheckResponse.testSpellCheckResponse
   [junit4] HEARTBEAT J0 PID(12147@tray): 2015-05-22T18:15:56, stalled for  
181s at: TestSpellCheckResponse.testSpellCheckResponse
...
   [junit4] HEARTBEAT J0 PID(12147@tray): 2015-05-22T20:00:56, stalled for 
6481s at: TestSpellCheckResponse.testSpellCheckResponse
   [junit4] HEARTBEAT J0 PID(12147@tray): 2015-05-22T20:01:56, stalled for 
6541s at: TestSpellCheckResponse.testSpellCheckResponse
   [junit4] HEARTBEAT J0 PID(12147@tray): 2015-05-22T20:02:56, stalled for 
6601s at: TestSpellCheckResponse.testSpellCheckResponse
{noformat}


I'll attach some jstack output as well as all the temp files from the J0 runner.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_45) - Build # 12784 - Still Failing!

2015-05-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12784/
Java: 64bit/jdk1.8.0_45 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CdcrRequestHandlerTest.doTest

Error Message:
expected:<[dis]abled> but was:<[en]abled>

Stack Trace:
org.junit.ComparisonFailure: expected:<[dis]abled> but was:<[en]abled>
at 
__randomizedtesting.SeedInfo.seed([16BD265503763F07:B1F99EF16ECD2CBE]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.assertState(BaseCdcrDistributedZkTest.java:256)
at 
org.apache.solr.cloud.CdcrRequestHandlerTest.doTestBufferActions(CdcrRequestHandlerTest.java:139)
at 
org.apache.solr.cloud.CdcrRequestHandlerTest.doTest(CdcrRequestHandlerTest.java:41)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.ra

[jira] [Commented] (SOLR-7335) Multivalue field that is boosted on indexing time has wrong norm.

2015-05-22 Thread Shingo Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14557050#comment-14557050
 ] 

Shingo Sasaki commented on SOLR-7335:
-

Thanks for the commit!

> Multivalue field that is boosted on indexing time has wrong norm.
> -
>
> Key: SOLR-7335
> URL: https://issues.apache.org/jira/browse/SOLR-7335
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10, 5.0, 5.1
>Reporter: Shingo Sasaki
>Assignee: Hoss Man
>Priority: Critical
> Attachments: SOLR-7335.patch
>
>
> A multivalued field has a wrong norm when the field value is tokenized, the
> field or document is boosted, and the field is not the source of a copyField.
> {noformat}
> $ java -jar start.jar &
> $ echo '{
> "add": {
>   "doc": {
> "id":"no-boosted",
> "features": ["a","b","c"],
> "dyn_not_copied_txt": ["a","b","c"]
>   }
> },
> "add": {
>   "boost": 10,
>   "doc": {
> "id":"boosted",
> "features": ["a","b","c"],
> "dyn_not_copied_txt": ["a","b","c"]
>   }
> }}' > test.json
> $ curl 'http://localhost:8983/solr/update/json?commit=true' -H 
> 'Content-type:application/json' --data-binary @test.json
> {"responseHeader":{"status":0,"QTime":41}}
> $ curl 'http://localhost:8983/solr/select' -d 
> 'omitHeader=true&wt=json&indent=on&q=*:*&fl=id,norm(features),norm(dyn_not_copied_txt)'
> {
>   "response":{"numFound":2,"start":0,"docs":[
>   {
> "id":"no-boosted",
> "norm(features)":0.5,
> "norm(dyn_not_copied_txt)":0.5},
>   {
> "id":"boosted",
> "norm(features)":5.0,
> "norm(dyn_not_copied_txt)":512.0}]
>   }}
> {noformat}
> In the above example, "features" is the source of a copyField, while
> "dyn_not_copied_txt" is not.
> "features" and "dyn_not_copied_txt" have the same type attribute
> (type="text_general"), the same values ( ["a","b","c"] ) and the same boost,
> so both fields should have the same norm in the document.
> But in the boosted document only, the field that is not copied gets a much
> larger norm.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7335) Multivalue field that is boosted on indexing time has wrong norm.

2015-05-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14557048#comment-14557048
 ] 

ASF subversion and git services commented on SOLR-7335:
---

Commit 1681253 from hoss...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1681253 ]

SOLR-7335: Fix doc boosts to no longer be multiplied in each field value in 
multivalued fields that are not used in copyFields (merge r1681249)

> Multivalue field that is boosted on indexing time has wrong norm.
> -
>
> Key: SOLR-7335
> URL: https://issues.apache.org/jira/browse/SOLR-7335
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10, 5.0, 5.1
>Reporter: Shingo Sasaki
>Assignee: Hoss Man
>Priority: Critical
> Attachments: SOLR-7335.patch
>
>
> A multivalued field has a wrong norm when the field value is tokenized, the
> field or document is boosted, and the field is not the source of a copyField.
> {noformat}
> $ java -jar start.jar &
> $ echo '{
> "add": {
>   "doc": {
> "id":"no-boosted",
> "features": ["a","b","c"],
> "dyn_not_copied_txt": ["a","b","c"]
>   }
> },
> "add": {
>   "boost": 10,
>   "doc": {
> "id":"boosted",
> "features": ["a","b","c"],
> "dyn_not_copied_txt": ["a","b","c"]
>   }
> }}' > test.json
> $ curl 'http://localhost:8983/solr/update/json?commit=true' -H 
> 'Content-type:application/json' --data-binary @test.json
> {"responseHeader":{"status":0,"QTime":41}}
> $ curl 'http://localhost:8983/solr/select' -d 
> 'omitHeader=true&wt=json&indent=on&q=*:*&fl=id,norm(features),norm(dyn_not_copied_txt)'
> {
>   "response":{"numFound":2,"start":0,"docs":[
>   {
> "id":"no-boosted",
> "norm(features)":0.5,
> "norm(dyn_not_copied_txt)":0.5},
>   {
> "id":"boosted",
> "norm(features)":5.0,
> "norm(dyn_not_copied_txt)":512.0}]
>   }}
> {noformat}
> In the above example, "features" is the source of a copyField, while
> "dyn_not_copied_txt" is not.
> "features" and "dyn_not_copied_txt" have the same type attribute
> (type="text_general"), the same values ( ["a","b","c"] ) and the same boost,
> so both fields should have the same norm in the document.
> But in the boosted document only, the field that is not copied gets a much
> larger norm.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_45) - Build # 4842 - Still Failing!

2015-05-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4842/
Java: 64bit/jdk1.8.0_45 -XX:-UseCompressedOops -XX:+UseSerialGC

5 tests failed.
FAILED:  org.apache.solr.cloud.CdcrReplicationDistributedZkTest.doTest

Error Message:
expected: but was:

Stack Trace:
org.junit.ComparisonFailure: expected: but was:
at 
__randomizedtesting.SeedInfo.seed([2C54B4B78A95EA2F:8B100C13E72EF996]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.assertState(BaseCdcrDistributedZkTest.java:255)
at 
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.doTestTargetCollectionNotAvailable(CdcrReplicationDistributedZkTest.java:113)
at 
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.doTest(CdcrReplicationDistributedZkTest.java:42)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
c

[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1442: POMs out of sync

2015-05-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1442/

No tests ran.

Build Log:
[...truncated 47724 lines...]
-validate-maven-dependencies:
[artifact:dependencies] Downloading: 
org/apache/mahout/mahout-collections/1.0/mahout-collections-1.0.pom from 
repository maven-restlet at http://maven.restlet.org
[artifact:dependencies] Unable to locate resource in repository
[artifact:dependencies] [INFO] Unable to find resource 
'org.apache.mahout:mahout-collections:pom:1.0' in repository maven-restlet 
(http://maven.restlet.org)
[artifact:dependencies] Downloading: 
org/apache/mahout/mahout-collections/1.0/mahout-collections-1.0.pom from 
repository releases.cloudera.com at 
https://repository.cloudera.com/artifactory/libs-release
[artifact:dependencies] Unable to locate resource in repository
[artifact:dependencies] [INFO] Unable to find resource 
'org.apache.mahout:mahout-collections:pom:1.0' in repository 
releases.cloudera.com (https://repository.cloudera.com/artifactory/libs-release)
[artifact:dependencies] Downloading: 
org/apache/mahout/mahout-collections/1.0/mahout-collections-1.0.pom from 
repository central at http://repo1.maven.org/maven2
[artifact:dependencies] Transferring 10K from central
[artifact:dependencies] Downloading: org/apache/apache/6/apache-6.pom from 
repository maven-restlet at http://maven.restlet.org
[artifact:dependencies] Unable to locate resource in repository
[artifact:dependencies] [INFO] Unable to find resource 
'org.apache:apache:pom:6' in repository maven-restlet (http://maven.restlet.org)
[artifact:dependencies] Downloading: org/apache/apache/6/apache-6.pom from 
repository releases.cloudera.com at 
https://repository.cloudera.com/artifactory/libs-release
[artifact:dependencies] Unable to locate resource in repository
[artifact:dependencies] [INFO] Unable to find resource 
'org.apache:apache:pom:6' in repository releases.cloudera.com 
(https://repository.cloudera.com/artifactory/libs-release)
[artifact:dependencies] Downloading: org/apache/apache/6/apache-6.pom from 
repository central at http://repo1.maven.org/maven2
[artifact:dependencies] Transferring 12K from central
[artifact:dependencies] Downloading: 
org/apache/mahout/mahout-math/0.6/mahout-math-0.6.pom from repository 
maven-restlet at http://maven.restlet.org
[artifact:dependencies] Unable to locate resource in repository
[artifact:dependencies] [INFO] Unable to find resource 
'org.apache.mahout:mahout-math:pom:0.6' in repository maven-restlet 
(http://maven.restlet.org)
[artifact:dependencies] Downloading: 
org/apache/mahout/mahout-math/0.6/mahout-math-0.6.pom from repository 
releases.cloudera.com at 
https://repository.cloudera.com/artifactory/libs-release
[artifact:dependencies] Unable to locate resource in repository
[artifact:dependencies] [INFO] Unable to find resource 
'org.apache.mahout:mahout-math:pom:0.6' in repository releases.cloudera.com 
(https://repository.cloudera.com/artifactory/libs-release)
[artifact:dependencies] Downloading: 
org/apache/mahout/mahout-math/0.6/mahout-math-0.6.pom from repository central 
at http://repo1.maven.org/maven2
[artifact:dependencies] Transferring 4K from central
[artifact:dependencies] Downloading: 
org/apache/mahout/mahout/0.6/mahout-0.6.pom from repository maven-restlet at 
http://maven.restlet.org
[artifact:dependencies] Unable to locate resource in repository
[artifact:dependencies] [INFO] Unable to find resource 
'org.apache.mahout:mahout:pom:0.6' in repository maven-restlet 
(http://maven.restlet.org)
[artifact:dependencies] Downloading: 
org/apache/mahout/mahout/0.6/mahout-0.6.pom from repository 
releases.cloudera.com at 
https://repository.cloudera.com/artifactory/libs-release
[artifact:dependencies] Unable to locate resource in repository
[artifact:dependencies] [INFO] Unable to find resource 
'org.apache.mahout:mahout:pom:0.6' in repository releases.cloudera.com 
(https://repository.cloudera.com/artifactory/libs-release)
[artifact:dependencies] Downloading: 
org/apache/mahout/mahout/0.6/mahout-0.6.pom from repository central at 
http://repo1.maven.org/maven2
[artifact:dependencies] Transferring 32K from central
[artifact:dependencies] Downloading: 
org/carrot2/carrot2-mini/3.9.0/carrot2-mini-3.9.0.pom from repository 
maven-restlet at http://maven.restlet.org
[artifact:dependencies] Unable to locate resource in repository
[artifact:dependencies] [INFO] Unable to find resource 
'org.carrot2:carrot2-mini:pom:3.9.0' in repository maven-restlet 
(http://maven.restlet.org)
[artifact:dependencies] Downloading: 
org/carrot2/carrot2-mini/3.9.0/carrot2-mini-3.9.0.pom from repository 
releases.cloudera.com at 
https://repository.cloudera.com/artifactory/libs-release
[artifact:dependencies] Unable to locate resource in repository
[artifact:dependencies] [INFO] Unable to find resource 
'org.carrot2:carrot2-mini:pom:3.9.0' in repository releases.cloudera.com 
(https://repository.cloudera.com/artifactory/libs-release)
[artifact:dependencies] Downloading: 
or

Re: 5.2 release branch created

2015-05-22 Thread Alexandre Rafalovitch
On 22 May 2015 at 17:28, Anshum Gupta  wrote:
> The 5.2 branch has been created -
> https://svn.apache.org/repos/asf/lucene/dev/branches/lucene_solr_5_2/

Not on GitHub mirror yet. Slow link again I guess.

Regards,
   Alex.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7335) Multivalue field that is boosted on indexing time has wrong norm.

2015-05-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14557010#comment-14557010
 ] 

ASF subversion and git services commented on SOLR-7335:
---

Commit 1681249 from hoss...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1681249 ]

SOLR-7335: Fix doc boosts to no longer be multiplied in each field value in 
multivalued fields that are not used in copyFields

> Multivalue field that is boosted on indexing time has wrong norm.
> -
>
> Key: SOLR-7335
> URL: https://issues.apache.org/jira/browse/SOLR-7335
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10, 5.0, 5.1
>Reporter: Shingo Sasaki
>Assignee: Hoss Man
>Priority: Critical
> Attachments: SOLR-7335.patch
>
>
> A multivalued field gets a wrong norm when the field value is tokenized, the field 
> or document is boosted, and the field is not the source of a copyField.
> {noformat}
> $ java -jar start.jar &
> $ echo '{
> "add": {
>   "doc": {
> "id":"no-boosted",
> "features": ["a","b","c"],
> "dyn_not_copied_txt": ["a","b","c"]
>   }
> },
> "add": {
>   "boost": 10,
>   "doc": {
> "id":"boosted",
> "features": ["a","b","c"],
> "dyn_not_copied_txt": ["a","b","c"]
>   }
> }}' > test.json
> $ curl 'http://localhost:8983/solr/update/json?commit=true' -H 
> 'Content-type:application/json' --data-binary @test.json
> {"responseHeader":{"status":0,"QTime":41}}
> $ curl 'http://localhost:8983/solr/select' -d 
> 'omitHeader=true&wt=json&indent=on&q=*:*&fl=id,norm(features),norm(dyn_not_copied_txt)'
> {
>   "response":{"numFound":2,"start":0,"docs":[
>   {
> "id":"no-boosted",
> "norm(features)":0.5,
> "norm(dyn_not_copied_txt)":0.5},
>   {
> "id":"boosted",
> "norm(features)":5.0,
> "norm(dyn_not_copied_txt)":512.0}]
>   }}
> {noformat}
> In the above example, "features" is the source of a copyField. On the other hand, 
> "dyn_not_copied_txt" is not. 
> "features" and "dyn_not_copied_txt" have the same type attribute 
> (type="text_general"), the same values ( ["a","b","c"] ) and the same boost. 
> So, both fields must have the same norm in the document.
> But, in the boosted document only, the field that is not copied has a much larger 
> norm.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-5.x #944: POMs out of sync

2015-05-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-5.x/944/

No tests ran.

Build Log:
[...truncated 47387 lines...]
-validate-maven-dependencies:
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-test-framework:5.3.0-SNAPSHOT: checking for updates 
from maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-test-framework:5.3.0-SNAPSHOT: checking for updates 
from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-parent:5.3.0-SNAPSHOT: checking for updates from 
sonatype.releases
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-parent:5.3.0-SNAPSHOT: checking for updates from 
maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-parent:5.3.0-SNAPSHOT: checking for updates from 
releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-analyzers-common:5.3.0-SNAPSHOT: checking for updates 
from maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-analyzers-common:5.3.0-SNAPSHOT: checking for updates 
from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-analyzers-kuromoji:5.3.0-SNAPSHOT: checking for 
updates from maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-analyzers-kuromoji:5.3.0-SNAPSHOT: checking for 
updates from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-analyzers-phonetic:5.3.0-SNAPSHOT: checking for 
updates from maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-analyzers-phonetic:5.3.0-SNAPSHOT: checking for 
updates from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-backward-codecs:5.3.0-SNAPSHOT: checking for updates 
from maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-backward-codecs:5.3.0-SNAPSHOT: checking for updates 
from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-codecs:5.3.0-SNAPSHOT: checking for updates from 
maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-codecs:5.3.0-SNAPSHOT: checking for updates from 
releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-core:5.3.0-SNAPSHOT: checking for updates from 
maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-core:5.3.0-SNAPSHOT: checking for updates from 
releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-expressions:5.3.0-SNAPSHOT: checking for updates from 
maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-expressions:5.3.0-SNAPSHOT: checking for updates from 
releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-grouping:5.3.0-SNAPSHOT: checking for updates from 
maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-grouping:5.3.0-SNAPSHOT: checking for updates from 
releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-highlighter:5.3.0-SNAPSHOT: checking for updates from 
maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-highlighter:5.3.0-SNAPSHOT: checking for updates from 
releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-join:5.3.0-SNAPSHOT: checking for updates from 
maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-join:5.3.0-SNAPSHOT: checking for updates from 
releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-memory:5.3.0-SNAPSHOT: checking for updates from 
maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-memory:5.3.0-SNAPSHOT: checking for updates from 
releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-misc:5.3.0-SNAPSHOT: checking for updates from 
maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-misc:5.3.0-SNAPSHOT: checking for updates from 
releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-queries:5.3.0-SNAPSHOT: checking for updates from 
maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-queries:5.3.0-SNAPSHOT: checking for updates from 
releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-queryparser:5.3.0-SNAPSHOT: checking for updates from 
maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-queryparser:5.3.0-SNAPSHOT: checking for updates from 
releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-spatial:5.3.0-SNAPSHOT: checking for updates from 
maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-spatial:5.3.0-SNAPSHOT: checking for updates from 
releases.cloudera.com
[artifact:dependencies] [INFO]

[jira] [Commented] (SOLR-5955) Add config templates to SolrCloud.

2015-05-22 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556994#comment-14556994
 ] 

Gregory Chanan commented on SOLR-5955:
--

Some thoughts on this and SOLR-7570:

1) The motivation here is really about having some immutability around 
configsets.  This is important in a couple of scenarios:
  a) you want built-in immutable templates that people can use to get started 
quickly.  These should be immutable so no one else accidentally screws them up.
  b) in a secure setup, you don't want end-users to write directly to zookeeper 
or to disk.  We've seen security complaints when we've allowed that sort of 
thing in the past.  Providing an immutable template that end-users can build on 
and modify via Config APIs is much more sensible.  Again, you want to make this 
immutable so no one accidentally or maliciously screws up the template.

2) A template is really just an immutable configset where instantiating creates 
a mutable copy.
From a complexity POV, it seems not worth it to maintain a separate "template" 
concept if mutability is the only difference from configsets.  I.e. templates 
are stored in a different location in ZK, have different zkcli commands that 
need to be maintained, would need separate handling from ConfigSets in 
a UI, and require a different parameter in the Collections API for instantiation 
("configTemplate" above).  If ConfigSets could be marked as mutable vs 
immutable, a ConfigSet API could have reasonable semantics here, like copying 
an immutable configset makes a mutable copy (since making an immutable copy 
seems pointless), or copy is disallowed on immutable ConfigSets and instead 
you have to call a different command like "instantiate".  Anshum's 
idea above (/collections/collection_name/config/xxx) could be implemented in 
this setup by just creating a collection-specific mutable diff as described 
in SOLR-7570.  From an engineering complexity perspective, maintaining 
mutable-vs-immutable seems a lot simpler than having templates.

That said, from the end-user perspective, referring to immutable configsets as 
"templates" is great -- I think most users would immediately understand what 
that means and why it is important, rather than "immutable configsets."  Perhaps 
the correct way to go here is to allow (do we already?) configs in 
subdirectories, e.g. we put all "immutable configsets" under /configs/templates 
and refer to them as "templates" in the documentation, but they don't 
need any special handling in the code compared to ConfigSets (i.e. you could 
create a collection in one step via 
&collection.configName=templates/secureTemplate), as sketched below.

So, TLDR: we should have the concept of immutable ConfigSets, we don't need a 
separate concept of templates.  Thoughts?
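
To make that concrete, here is a rough sketch of how the flow could look. The 
templates/secureTemplate configset path and the field name are illustrative 
assumptions, not an existing layout; the CREATE call and the Schema API request 
themselves are the standard ones.

{noformat}
# an immutable "template" configset is assumed to live under configs/templates in ZK;
# creating a collection just references it by that name
curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=mycoll&numShards=1&collection.configName=templates/secureTemplate'

# later customizations go through the Schema/Config APIs and land in a mutable,
# collection-specific copy (or diff, as in SOLR-7570), never in the template itself
curl 'http://localhost:8983/solr/mycoll/schema' -H 'Content-type:application/json' \
  -d '{"add-field":{"name":"title_t","type":"text_general","stored":true}}'
{noformat}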

> Add config templates to SolrCloud.
> --
>
> Key: SOLR-5955
> URL: https://issues.apache.org/jira/browse/SOLR-5955
> Project: Solr
>  Issue Type: New Feature
>Reporter: Mark Miller
> Attachments: SOLR-5955.patch
>
>
> You should be able to upload config sets to a templates location and then 
> specify a template as your starting config when creating new collections via 
> REST API. We can have a default template that we ship with.
> This will let you create collections from scratch via REST API, and then you 
> can use things like the schema REST API to customize the template config to 
> your needs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7335) Multivalue field that is boosted on indexing time has wrong norm.

2015-05-22 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-7335:
---
Affects Version/s: (was: Trunk)
   5.1

> Multivalue field that is boosted on indexing time has wrong norm.
> -
>
> Key: SOLR-7335
> URL: https://issues.apache.org/jira/browse/SOLR-7335
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10, 5.0, 5.1
>Reporter: Shingo Sasaki
>Assignee: Hoss Man
>Priority: Critical
> Attachments: SOLR-7335.patch
>
>
> A multivalued field gets a wrong norm when the field value is tokenized, the field 
> or document is boosted, and the field is not the source of a copyField.
> {noformat}
> $ java -jar start.jar &
> $ echo '{
> "add": {
>   "doc": {
> "id":"no-boosted",
> "features": ["a","b","c"],
> "dyn_not_copied_txt": ["a","b","c"]
>   }
> },
> "add": {
>   "boost": 10,
>   "doc": {
> "id":"boosted",
> "features": ["a","b","c"],
> "dyn_not_copied_txt": ["a","b","c"]
>   }
> }}' > test.json
> $ curl 'http://localhost:8983/solr/update/json?commit=true' -H 
> 'Content-type:application/json' --data-binary @test.json
> {"responseHeader":{"status":0,"QTime":41}}
> $ curl 'http://localhost:8983/solr/select' -d 
> 'omitHeader=true&wt=json&indent=on&q=*:*&fl=id,norm(features),norm(dyn_not_copied_txt)'
> {
>   "response":{"numFound":2,"start":0,"docs":[
>   {
> "id":"no-boosted",
> "norm(features)":0.5,
> "norm(dyn_not_copied_txt)":0.5},
>   {
> "id":"boosted",
> "norm(features)":5.0,
> "norm(dyn_not_copied_txt)":512.0}]
>   }}
> {noformat}
> In the above example, "features" is the source of a copyField. On the other hand, 
> "dyn_not_copied_txt" is not. 
> "features" and "dyn_not_copied_txt" have the same type attribute 
> (type="text_general"), the same values ( ["a","b","c"] ) and the same boost. 
> So, both fields must have the same norm in the document.
> But, in the boosted document only, the field that is not copied has a much larger 
> norm.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-7335) Multivalue field that is boosted on indexing time has wrong norm.

2015-05-22 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man reassigned SOLR-7335:
--

Assignee: Hoss Man  (was: Shalin Shekhar Mangar)

Bug looks terrible, patch looks good.

Running tests now, and then I'm going to start committing & backporting to 5x 
and 5.2.
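
For anyone wondering where the numbers in the description below come from: with the 
default similarity the norm is roughly boost / sqrt(numTerms), stored with a lossy 
single-byte encoding. The reported values are consistent with the document boost 
being multiplied in once per value of the multivalued field (this is a reading of 
the symptom, not text taken from the patch):

{noformat}
no boost:                        1 / sqrt(3) ~= 0.577  -> stored as 0.5
doc boost applied once:         10 / sqrt(3) ~= 5.77   -> stored as 5.0
doc boost applied per value:  10^3 / sqrt(3) ~= 577    -> stored as 512.0
{noformat}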

> Multivalue field that is boosted on indexing time has wrong norm.
> -
>
> Key: SOLR-7335
> URL: https://issues.apache.org/jira/browse/SOLR-7335
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10, 5.0, Trunk
>Reporter: Shingo Sasaki
>Assignee: Hoss Man
>Priority: Critical
> Attachments: SOLR-7335.patch
>
>
> A multivalued field gets a wrong norm when the field value is tokenized, the field 
> or document is boosted, and the field is not the source of a copyField.
> {noformat}
> $ java -jar start.jar &
> $ echo '{
> "add": {
>   "doc": {
> "id":"no-boosted",
> "features": ["a","b","c"],
> "dyn_not_copied_txt": ["a","b","c"]
>   }
> },
> "add": {
>   "boost": 10,
>   "doc": {
> "id":"boosted",
> "features": ["a","b","c"],
> "dyn_not_copied_txt": ["a","b","c"]
>   }
> }}' > test.json
> $ curl 'http://localhost:8983/solr/update/json?commit=true' -H 
> 'Content-type:application/json' --data-binary @test.json
> {"responseHeader":{"status":0,"QTime":41}}
> $ curl 'http://localhost:8983/solr/select' -d 
> 'omitHeader=true&wt=json&indent=on&q=*:*&fl=id,norm(features),norm(dyn_not_copied_txt)'
> {
>   "response":{"numFound":2,"start":0,"docs":[
>   {
> "id":"no-boosted",
> "norm(features)":0.5,
> "norm(dyn_not_copied_txt)":0.5},
>   {
> "id":"boosted",
> "norm(features)":5.0,
> "norm(dyn_not_copied_txt)":512.0}]
>   }}
> {noformat}
> In the above example, "features" is the source of a copyField. On the other hand, 
> "dyn_not_copied_txt" is not. 
> "features" and "dyn_not_copied_txt" have the same type attribute 
> (type="text_general"), the same values ( ["a","b","c"] ) and the same boost. 
> So, both fields must have the same norm in the document.
> But, in the boosted document only, the field that is not copied has a much larger 
> norm.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Save your data from lucene.zones.apache.org

2015-05-22 Thread Uwe Schindler
Hi,

 

Jenkins itself has already moved successfully, but besides Jenkins, some people also had a 
user folder on lucene.zones.apache.org:

drwxr-xr-x   7 dweiss      dweiss      17 Jul  1  2014 dweiss
drwxr-xr-x   6 gmcdonald   gmcdonald    8 Apr 19  2011 gmcdonald
drwxr-xr-x   2 gsingers    gsingers    12 Oct  1  2010 gsingers
lrwxr-xr-x   1 root        wheel        7 Aug 19  2014 hudson -> jenkins
drwxr-xr-x  17 jenkins     jenkins     33 Dec 30 21:51 jenkins
drwx------   3 root        wheel        3 Jun 19  2014 joes
drwxr-xr-x   2 markmiller  markmiller  10 Jan 27  2012 markmiller
drwxr-xr-x  14 mikemccand  mikemccand  29 Sep 13  2013 mikemccand
drwxr-xr-x   4 ngn         ngn         23 Feb 10  2013 ngn
drwxr-xr-x   2 pgollucci   pgollucci   11 Dec 12  2009 pgollucci
drwxr-xr-x  10 rcmuir      rcmuir      39 Oct  9  2013 rcmuir
drwxr-xr-x   6 sarowe      sarowe      32 Feb 26 13:53 sarowe
drwxr-xr-x   2 simonw      simonw      11 Jun 22  2011 simonw
drwxr-xr-x   3 thelabdude  thelabdude  12 Nov 24 18:02 thelabdude
drwxr-xr-x  11 uschindler  uschindler  30 May  9 17:03 uschindler
drwxr-xr-x   3 yonik       yonik       13 Dec  3  2010 yonik

 

It would be good if everybody on this account list logged into this machine 
and backed up the data in their home directory somewhere else.

Those people who want an account on the new Ubuntu machine should drop a note 
on: https://issues.apache.org/jira/browse/INFRA-9096

 

Uwe

 

-

Uwe Schindler

H.-H.-Meier-Allee 63, D-28213 Bremen

http://www.thetaphi.de

eMail: u...@thetaphi.de

 

 



[jira] [Commented] (SOLR-7264) DocValues should support BoolField

2015-05-22 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556975#comment-14556975
 ] 

Hoss Man commented on SOLR-7264:


took a quick look at the patch...

At first glance it seems mostly fine -- but tests are key. Take a look at 
DocValuesTest.java, DocValuesMissingTest.java, and DocValuesMultiTest.java (and 
the corresponding schema files) for examples of the bare-bones basics we should 
sanity check to make sure they work properly with the new code.
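
For reference, this is roughly what the patch is meant to make legal in a schema -- 
an illustrative snippet only (the field and type names are made up), since today 
the error quoted below is what you get instead:

{noformat}
<!-- declaring docValues on a BoolField-based type/field, which the patch aims to allow -->
<fieldType name="boolean" class="solr.BoolField" sortMissingLast="true"/>
<field name="inStock_b" type="boolean" indexed="true" stored="true" docValues="true"/>
{noformat}

Sorting, faceting, and function queries on such a field could then use docValues 
rather than the FieldCache.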

> DocValues should support BoolField
> --
>
> Key: SOLR-7264
> URL: https://issues.apache.org/jira/browse/SOLR-7264
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 5.0
>Reporter: Kevin Osborn
> Attachments: SOLR-7264.patch
>
>
> DocValues supports numerics and strings, but it currently does not support 
> booleans. Please add this support.
> Here is the error message you get if you try to use DocValues with a 
> BoolField.
> ERROR - 2015-03-18 00:49:54.041; org.apache.solr.common.SolrException; 
> null:org.apache.solr.common.SolrException: SolrCore 'test' is not available 
> due to init failure: Could not load conf for core test: Field type 
> boolean{class=org.apache.solr.schema.BoolField,analyzer=org.apache.solr.schema.FieldType$DefaultAnalyzer,args={sortMissingLast=true, 
> class=solr.BoolField}} does not support doc values. Schema file is 
> /Users/kosborn/solr/server/solr/test/conf/schema.xml



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_60-ea-b12) - Build # 12783 - Still Failing!

2015-05-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12783/
Java: 64bit/jdk1.8.0_60-ea-b12 -XX:-UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  org.apache.solr.cloud.CdcrReplicationHandlerTest.doTest

Error Message:
There are still nodes recoverying - waited for 330 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 330 
seconds
at 
__randomizedtesting.SeedInfo.seed([35642EE5937987FB:92209641FEC29442]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:172)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:133)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:128)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForRecoveriesToFinish(BaseCdcrDistributedZkTest.java:419)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.clearSourceCollection(BaseCdcrDistributedZkTest.java:285)
at 
org.apache.solr.cloud.CdcrReplicationHandlerTest.doTestPartialReplicationAfterPeerSync(CdcrReplicationHandlerTest.java:159)
at 
org.apache.solr.cloud.CdcrReplicationHandlerTest.doTest(CdcrReplicationHandlerTest.java:53)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.S

[jira] [Commented] (LUCENE-6481) Improve GeoPointField type to only visit high precision boundary terms

2015-05-22 Thread Nicholas Knize (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556943#comment-14556943
 ] 

Nicholas Knize commented on LUCENE-6481:


This patch is ready to go. In fact, this performance improvement should 
supersede the existing patch in 
[LUCENE-6450|https://issues.apache.org/jira/browse/LUCENE-6450].

> Improve GeoPointField type to only visit high precision boundary terms 
> ---
>
> Key: LUCENE-6481
> URL: https://issues.apache.org/jira/browse/LUCENE-6481
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index
>Reporter: Nicholas Knize
> Attachments: LUCENE-6481.patch, LUCENE-6481.patch, LUCENE-6481.patch, 
> LUCENE-6481.patch, LUCENE-6481_WIP.patch
>
>
> Current GeoPointField [LUCENE-6450 | 
> https://issues.apache.org/jira/browse/LUCENE-6450] computes a set of ranges 
> along the space-filling curve that represent a provided bounding box.  This 
> determines which terms to visit in the terms dictionary and which to skip. 
> This is suboptimal for large bounding boxes as we may end up visiting all 
> terms (which could be quite large). 
> This incremental improvement is to improve GeoPointField to only visit high 
> precision terms in boundary ranges and use the postings list for ranges that 
> are completely within the target bounding box.
> A separate improvement is to switch over to auto-prefix and build an 
> Automaton representing the bounding box.  That can be tracked in a separate 
> issue.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6481) Improve GeoPointField type to only visit high precision boundary terms

2015-05-22 Thread Nicholas Knize (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Knize updated LUCENE-6481:
---
Attachment: LUCENE-6481.patch

Updated patch to fix false negatives. This now improves performance of 
[LUCENE-6450|https://issues.apache.org/jira/browse/LUCENE-6450] to 0.02 sec / 
query by using the postings list instead of visiting every term.
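
For anyone skimming the thread, a rough sketch of the idea only -- the GeoRange / 
TermCursor types below are made-up stand-ins, not the classes used in the patch. 
Ranges that lie completely inside the query box take their postings wholesale; only 
ranges that straddle the box boundary need their high precision terms decoded and 
verified point by point.

{noformat}
import java.util.BitSet;
import java.util.List;

// Made-up stand-ins for illustration; not the patch's actual classes.
final class GeoRange {
  final long minHash, maxHash;   // span of space-filling-curve (Morton) codes
  final boolean withinQuery;     // true if the whole range lies inside the bounding box
  GeoRange(long minHash, long maxHash, boolean withinQuery) {
    this.minHash = minHash; this.maxHash = maxHash; this.withinQuery = withinQuery;
  }
}

interface TermCursor {
  boolean seek(long hash);          // position on the first indexed term >= hash; false if exhausted
  long hash();                      // curve code of the current term
  double lat();                     // decoded latitude of the current (high precision) term
  double lon();                     // decoded longitude of the current term
  void collectDocs(BitSet result);  // add all docs from the current term's postings
  boolean next();                   // advance to the next term; false if exhausted
}

final class BoundingBoxScan {
  static void scan(List<GeoRange> ranges, TermCursor terms, double minLat, double maxLat,
                   double minLon, double maxLon, BitSet result) {
    for (GeoRange r : ranges) {
      if (!terms.seek(r.minHash)) return;       // nothing indexed at or after this range
      while (terms.hash() <= r.maxHash) {
        if (r.withinQuery) {
          // interior range: every point under these terms is inside the box,
          // so accept the postings without decoding/verifying each term
          terms.collectDocs(result);
        } else if (terms.lat() >= minLat && terms.lat() <= maxLat
                && terms.lon() >= minLon && terms.lon() <= maxLon) {
          // boundary range: keep the term only if its decoded point is really in the box
          terms.collectDocs(result);
        }
        if (!terms.next()) return;
      }
    }
  }
}
{noformat}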

> Improve GeoPointField type to only visit high precision boundary terms 
> ---
>
> Key: LUCENE-6481
> URL: https://issues.apache.org/jira/browse/LUCENE-6481
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index
>Reporter: Nicholas Knize
> Attachments: LUCENE-6481.patch, LUCENE-6481.patch, LUCENE-6481.patch, 
> LUCENE-6481.patch, LUCENE-6481_WIP.patch
>
>
> Current GeoPointField [LUCENE-6450 | 
> https://issues.apache.org/jira/browse/LUCENE-6450] computes a set of ranges 
> along the space-filling curve that represent a provided bounding box.  This 
> determines which terms to visit in the terms dictionary and which to skip. 
> This is suboptimal for large bounding boxes as we may end up visiting all 
> terms (which could be quite large). 
> This incremental improvement is to improve GeoPointField to only visit high 
> precision terms in boundary ranges and use the postings list for ranges that 
> are completely within the target bounding box.
> A separate improvement is to switch over to auto-prefix and build an 
> Automaton representing the bounding box.  That can be tracked in a separate 
> issue.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-5692) StackOverflowError during SolrCloud leader election process

2015-05-22 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-5692.

Resolution: Duplicate

Resolving as duplicate of SOLR-6213 (this issue is older, but that one has more 
discussion/context)

> StackOverflowError during SolrCloud leader election process
> ---
>
> Key: SOLR-5692
> URL: https://issues.apache.org/jira/browse/SOLR-5692
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.6.1
>Reporter: Bojan Smid
>  Labels: difficulty-hard, impact-medium
> Attachments: recovery-stackoverflow.txt
>
>
> I have a SolrCloud cluster with 7 nodes, each with a few 1000 cores. I got this 
> StackOverflowError a few times when starting one of the nodes (just a piece of the 
> stack trace; the rest repeats -- the leader election process obviously got stuck in 
> an infinite repetition of steps):
> [2/4/14 3:42:43 PM] Bojan: 2014-02-04 15:18:01,947 
> [localhost-startStop-1-EventThread] ERROR org.apache.zookeeper.ClientCnxn- 
> Error while calling watcher 
> java.lang.StackOverflowError
> at java.security.AccessController.doPrivileged(Native Method)
> at java.io.PrintWriter.(PrintWriter.java:116)
> at java.io.PrintWriter.(PrintWriter.java:100)
> at org.apache.solr.common.SolrException.toStr(SolrException.java:138)
> at org.apache.solr.common.SolrException.log(SolrException.java:113)
> [2/4/14 3:42:58 PM] Bojan: at 
> org.apache.solr.cloud.ShardLeaderElectionContext.rejoinLeaderElection(ElectionContext.java:377)
> at 
> org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:184)
> at 
> org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:162)
> at 
> org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:106)
> at 
> org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:272)
> at 
> org.apache.solr.cloud.ShardLeaderElectionContext.rejoinLeaderElection(ElectionContext.java:380)
> at 
> org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:184)
>  at 
> org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:162)
> at 
> org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:106)
> at 
> org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:272)
> at 
> org.apache.solr.cloud.ShardLeaderElectionContext.rejoinLeaderElection(ElectionContext.java:380)
> at 
> org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:184)
> at 
> org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:162)
> at 
> org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:106)
> at 
> org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:272)
> at 
> org.apache.solr.cloud.ShardLeaderElectionContext.rejoinLeaderElection(ElectionContext.java:380)
> at 
> org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:184)
> at 
> org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:162)
> at 
> org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:106)
> at 
> org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:272)
> at 
> org.apache.solr.cloud.ShardLeaderElectionContext.rejoinLeaderElection(ElectionContext.java:380)
> at 
> org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:184)
> at 
> org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:162)
> at 
> org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:106)
> at 
> org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:272)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: official ASF Jenkins Slave moved to Ubuntu 14.04

2015-05-22 Thread Anshum Gupta
Thanks for the effort Uwe!

On Fri, May 22, 2015 at 2:23 PM, Uwe Schindler  wrote:

> Hi all,
>
> the old lucene-zones.apache.org machine, running on FreeBSD, was disabled
> an hour ago and all Jobs migrated. This old machine was not able to run
> Java 8 at all (crashed all the time and had the famous FreeBSD blackhole).
> In addition, it was about to be decommissioned soon; we moved the whole
> slave to a Ubuntu 14.04 machine.
>
> From now on all builds are running in a VMware machine (
> lucene1-us-west.apache.org) running Ubuntu 14.04 with 4 (virtual) cores
> [/proc/cpuinfo says: Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz] and 16 GiB
> of RAM.
>
> I changed all jobs to use the correct JDK version (because those are
> installed automatically now) - sorry for the mail flood about broken jobs.
> I hope all is fine now, if you find a problem, please respond to this mail
> with Jenkins Job and build number. Possible errors could be files not found
> (nightly jobs using Wikipedia dumps, Maven upload to snapshot repo not
> working, or cases where I missed to change JDK version).
>
> I will now cleanup the policy file of test security to no longer have the
> hardcoded FreeBSD localhost address workaround with hardcoded hostname
> (will just heavy commit).
>
> Uwe
>
> P.S.: Thanks to Chris Lambertus and Gavin McDonald for their help during
> the migration.
>
> -
> Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: u...@thetaphi.de
>
>
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


-- 
Anshum Gupta


Re: official ASF Jenkins Slave moved to Ubuntu 14.04

2015-05-22 Thread Erick Erickson
+1!

On Fri, May 22, 2015 at 2:29 PM, david.w.smi...@gmail.com
 wrote:
> Thanks for your hard work on keeping Lucene/Solr well tested!
>
> On Fri, May 22, 2015 at 5:23 PM Uwe Schindler  wrote:
>>
>> Hi all,
>>
>> the old lucene-zones.apache.org machine, running on FreeBSD, was disabled
>> an hour ago and all Jobs migrated. This old machine was not able to run Java
>> 8 at all (crashed all the time and had the famous FreeBSD blackhole). In
>> addition, it was about to be decommissioned soon; we moved the whole slave
>> to a Ubuntu 14.04 machine.
>>
>> From now on all builds are running in a VMware machine
>> (lucene1-us-west.apache.org) running Ubuntu 14.04 with 4 (virtual) cores
>> [/proc/cpuinfo says: Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz] and 16 GiB of
>> RAM.
>>
>> I changed all jobs to use the correct JDK version (because those are
>> installed automatically now) - sorry for the mail flood about broken jobs. I
>> hope all is fine now, if you find a problem, please respond to this mail
>> with Jenkins Job and build number. Possible errors could be files not found
>> (nightly jobs using Wikipedia dumps, Maven upload to snapshot repo not
>> working, or cases where I missed to change JDK version).
>>
>> I will now cleanup the policy file of test security to no longer have the
>> hardcoded FreeBSD localhost address workaround with hardcoded hostname (will
>> just heavy commit).
>>
>> Uwe
>>
>> P.S.: Thanks to Chris Lambertus and Gavin McDonald for their help during
>> the migration.
>>
>> -
>> Uwe Schindler
>> H.-H.-Meier-Allee 63, D-28213 Bremen
>> http://www.thetaphi.de
>> eMail: u...@thetaphi.de
>>
>>
>>
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: official ASF Jenkins Slave moved to Ubuntu 14.04

2015-05-22 Thread david.w.smi...@gmail.com
Thanks for your hard work on keeping Lucene/Solr well tested!

On Fri, May 22, 2015 at 5:23 PM Uwe Schindler  wrote:

> Hi all,
>
> the old lucene-zones.apache.org machine, running on FreeBSD, was disabled
> an hour ago and all Jobs migrated. This old machine was not able to run
> Java 8 at all (crashed all the time and had the famous FreeBSD blackhole).
> In addition, it was about to be decommissioned soon; we moved the whole
> slave to a Ubuntu 14.04 machine.
>
> From now on all builds are running in a VMware machine (
> lucene1-us-west.apache.org) running Ubuntu 14.04 with 4 (virtual) cores
> [/proc/cpuinfo says: Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz] and 16 GiB
> of RAM.
>
> I changed all jobs to use the correct JDK version (because those are
> installed automatically now) - sorry for the mail flood about broken jobs.
> I hope all is fine now, if you find a problem, please respond to this mail
> with Jenkins Job and build number. Possible errors could be files not found
> (nightly jobs using Wikipedia dumps, Maven upload to snapshot repo not
> working, or cases where I missed to change JDK version).
>
> I will now cleanup the policy file of test security to no longer have the
> hardcoded FreeBSD localhost address workaround with hardcoded hostname
> (will just heavy commit).
>
> Uwe
>
> P.S.: Thanks to Chris Lambertus and Gavin McDonald for their help during
> the migration.
>
> -
> Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: u...@thetaphi.de
>
>
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


official ASF Jenkins Slave moved to Ubuntu 14.04

2015-05-22 Thread Uwe Schindler
Hi all,

the old lucene-zones.apache.org machine, running on FreeBSD, was disabled an 
hour ago and all Jobs migrated. This old machine was not able to run Java 8 at 
all (crashed all the time and had the famous FreeBSD blackhole). In addition, 
it was about to be decommissioned soon; we moved the whole slave to a Ubuntu 
14.04 machine.

From now on all builds are running in a VMware machine 
(lucene1-us-west.apache.org) running Ubuntu 14.04 with 4 (virtual) cores 
[/proc/cpuinfo says: Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz] and 16 GiB of 
RAM.

I changed all jobs to use the correct JDK version (because those are installed 
automatically now) - sorry for the mail flood about broken jobs. I hope all is 
fine now, if you find a problem, please respond to this mail with Jenkins Job 
and build number. Possible errors could be files not found (nightly jobs using 
Wikipedia dumps, Maven upload to snapshot repo not working, or cases where I 
missed to change JDK version).

I will now cleanup the policy file of test security to no longer have the 
hardcoded FreeBSD localhost address workaround with hardcoded hostname (will 
just heavy commit).

Uwe

P.S.: Thanks to Chris Lambertus and Gavin McDonald for their help during the 
migration.

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de




-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.8.0) - Build # 2287 - Still Failing!

2015-05-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2287/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  
org.apache.solr.handler.component.StatsComponentTest.testFieldStatisticsDocValuesAndMultiValuedIntegerFacetStats

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([11C9E20C38F900EA]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.component.StatsComponentTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([11C9E20C38F900EA]:0)




Build Log:
[...truncated 11124 lines...]
   [junit4] Suite: org.apache.solr.handler.component.StatsComponentTest
   [junit4]   2> Creating dataDir: 
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/build/solr-core/test/J0/temp/solr.handler.component.StatsComponentTest
 11C9E20C38F900EA-001/init-core-data-001
   [junit4]   2> 1575001 T8355 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl 
(false) and clientAuth (false)
   [junit4]   2> 1575003 T8355 oas.SolrTestCaseJ4.initCore initCore
   [junit4]   2> 1575003 T8355 oasc.SolrResourceLoader. new 
SolrResourceLoader for directory: 
'/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/core/src/test-files/solr/collection1/'
   [junit4]   2> 1575004 T8355 oasc.SolrResourceLoader.replaceClassLoader 
Adding 
'file:/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/core/src/test-files/solr/collection1/lib/classes/'
 to classloader
   [junit4]   2> 1575005 T8355 oasc.SolrResourceLoader.replaceClassLoader 
Adding 
'file:/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/core/src/test-files/solr/collection1/lib/README'
 to classloader
   [junit4]   2> 1575059 T8355 oasc.SolrConfig.refreshRequestParams current 
version of requestparams : -1
   [junit4]   2> 1575067 T8355 oasc.SolrConfig. Using Lucene 
MatchVersion: 5.3.0
   [junit4]   2> 1575085 T8355 oasc.SolrConfig. Loaded SolrConfig: 
solrconfig.xml
   [junit4]   2> 1575086 T8355 oass.IndexSchema.readSchema Reading Solr Schema 
from 
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/core/src/test-files/solr/collection1/conf/schema11.xml
   [junit4]   2> 1575091 T8355 oass.IndexSchema.readSchema [null] Schema 
name=example
   [junit4]   2> 1575132 T8355 oass.IndexSchema.readSchema default search field 
in schema is text
   [junit4]   2> 1575134 T8355 oass.IndexSchema.readSchema unique key field: id
   [junit4]   2> 1575150 T8355 oasc.SolrResourceLoader.locateSolrHome JNDI not 
configured for solr (NoInitialContextEx)
   [junit4]   2> 1575150 T8355 oasc.SolrResourceLoader.locateSolrHome using 
system property solr.solr.home: 
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/core/src/test-files/solr
   [junit4]   2> 1575150 T8355 oasc.SolrResourceLoader. new 
SolrResourceLoader for directory: 
'/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/core/src/test-files/solr/'
   [junit4]   2> 1575166 T8355 oasc.CoreContainer. New CoreContainer 
294614085
   [junit4]   2> 1575166 T8355 oasc.CoreContainer.load Loading cores into 
CoreContainer 
[instanceDir=/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/core/src/test-files/solr/]
   [junit4]   2> 1575166 T8355 oasc.CoreContainer.load loading shared library: 
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/core/src/test-files/solr/lib
   [junit4]   2> 1575167 T8355 oasc.SolrResourceLoader.addToClassLoader WARN 
Can't find (or read) directory to add to classloader: lib (resolved as: 
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/core/src/test-files/solr/lib).
   [junit4]   2> 1575176 T8355 oashc.HttpShardHandlerFactory.init created with 
socketTimeout : 60,connTimeout : 6,maxConnectionsPerHost : 
20,maxConnections : 1,corePoolSize : 0,maximumPoolSize : 
2147483647,maxThreadIdleTime : 5,sizeOfQueue : -1,fairnessPolicy : 
false,useRetries : false,
   [junit4]   2> 1575181 T8355 oasu.UpdateShardHandler. Creating 
UpdateShardHandler HTTP client with params: 
socketTimeout=3&connTimeout=3&retry=true
   [junit4]   2> 1575181 T8355 oasl.LogWatcher.createWatcher SLF4J impl is 
org.slf4j.impl.Log4jLoggerFactory
   [junit4]   2> 1575182 T8355 oasl.LogWatcher.newRegisteredLogWatcher 
Registering Log Listener [Log4j (org.slf4j.impl.Log4jLoggerFactory)]
   [junit4]   2> 1575183 T8355 oasc.CoreContainer.load Node Name: testNode
   [junit4]   2> 1575183 T8355 
oasc.CoreContainer.initializeAuthenticationPlugin No authentication plugin used.
   [junit4]   2> 1575186 T8355 oasc.CoreDescriptor. CORE DESCRIPTOR: 
{name=collection1, config=solrconfig.xml, transient=false, schema=schema11.xml, 
loadOnStartup=true, instanceDir=collection1, collection=collection1, 
absoluteInstDir=/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/core/src/test-files/solr/collection1/,
 
dataD

[jira] [Updated] (SOLR-7583) API to download a snapshot by name

2015-05-22 Thread Greg Solovyev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Solovyev updated SOLR-7583:

Attachment: SOLR-7583.patch

This patch implements the feature on the 5_x branch. The patch includes a unit 
test for downloading a snapshot as a zip file.
REST API for downloading a zipped snapshot:

http://localhost:8983/solr/collection1/replication?command=downloadbackup&name=namedBackupName&wt=filestream

The response returns a chunked filestream. The first 8 bytes hold the file size, 
and the rest of the stream is structured the same way as the response to a 
"filecontent" request: 
 - 4 bytes for the size of the next chunk
 - N bytes for the next chunk

There is no support for checksums yet.
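
A rough client-side sketch for consuming that stream (an illustration only: it assumes 
the two size fields are written in Java DataOutputStream / big-endian order, which the 
description above does not spell out, and it ignores checksums since they are not 
supported yet):

{noformat}
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.net.URL;

public class DownloadBackupClient {
  public static void main(String[] args) throws Exception {
    String url = "http://localhost:8983/solr/collection1/replication"
        + "?command=downloadbackup&name=namedBackupName&wt=filestream";
    try (InputStream in = new URL(url).openStream();
         DataInputStream data = new DataInputStream(in);
         FileOutputStream out = new FileOutputStream("namedBackupName.zip")) {
      long fileSize = data.readLong();      // first 8 bytes: total size of the zipped snapshot
      long remaining = fileSize;
      byte[] buf = new byte[64 * 1024];
      while (remaining > 0) {
        int chunkSize = data.readInt();     // next 4 bytes: size of the following chunk
        if (chunkSize <= 0) break;          // defensive: treat a zero/negative length as end of stream
        int toRead = chunkSize;
        while (toRead > 0) {
          int n = data.read(buf, 0, Math.min(buf.length, toRead));
          if (n < 0) throw new EOFException("stream ended mid-chunk");
          out.write(buf, 0, n);
          toRead -= n;
          remaining -= n;
        }
      }
    }
  }
}
{noformat}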

> API to download a snapshot by name
> --
>
> Key: SOLR-7583
> URL: https://issues.apache.org/jira/browse/SOLR-7583
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Greg Solovyev
> Attachments: SOLR-7583.patch
>
>
> What we are looking for:
> SolrCloud and Solr should have APIs to download a snapshot via HTTP. 
> For single node Solr, this API will find a snapshot and stream it back over 
> HTTP. For SolrCloud, this API will find a Replica that has the snapshot with 
> requested name and stream the snapshot from that replica. Since there are 
> multiple files inside a snapshot, the API should probably zip the snapshot 
> folder before sending it back to the client.
> Why we need this:
> this will allow us to create and fetch fully contained archives of customer 
> data where each backup archive will contain Solr index as well as other 
> customer data (DB, metadata, files, etc).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7567) Replication handler to support restore via upload

2015-05-22 Thread Greg Solovyev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Solovyev updated SOLR-7567:

Attachment: (was: SOLR-7583.patch)

> Replication handler to support restore via upload
> -
>
> Key: SOLR-7567
> URL: https://issues.apache.org/jira/browse/SOLR-7567
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Greg Solovyev
> Fix For: Trunk
>
> Attachments: SOLR-7567.patch
>
>
> Sometimes the snapshot is not available on a file system that can be accessed 
> by Solr or SolrCloud. It would be useful to be able to send snapshot  files 
> to Solr over HTTP.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Issue Comment Deleted] (SOLR-7567) Replication handler to support restore via upload

2015-05-22 Thread Greg Solovyev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Solovyev updated SOLR-7567:

Comment: was deleted

(was: This patch implements the feature on the 5_x branch. The patch includes 
a unit test for downloading a snapshot as a zip file.
REST API for downloading a zipped snapshot:

http://localhost:8983/solr/collection1/replication?command=downloadbackup&name=namedBackupName&wt=filestream

The response returns a chunked filestream. The first 8 bytes contain the file 
size, and the rest of the stream is structured the same way as if you were 
making a "filecontent" request: 
 - 4 bytes for the chunk size
 - N bytes for the next chunk

There is no support for checksums yet.)

> Replication handler to support restore via upload
> -
>
> Key: SOLR-7567
> URL: https://issues.apache.org/jira/browse/SOLR-7567
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Greg Solovyev
> Fix For: Trunk
>
> Attachments: SOLR-7567.patch
>
>
> Sometimes the snapshot is not available on a file system that can be accessed 
> by Solr or SolrCloud. It would be useful to be able to send snapshot  files 
> to Solr over HTTP.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7567) Replication handler to support restore via upload

2015-05-22 Thread Greg Solovyev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Solovyev updated SOLR-7567:

Attachment: SOLR-7583.patch

This patch implements the feature on the 5_x branch. The patch includes a unit 
test for downloading a snapshot as a zip file.
REST API for downloading a zipped snapshot:

http://localhost:8983/solr/collection1/replication?command=downloadbackup&name=namedBackupName&wt=filestream

The response returns a chunked filestream. The first 8 bytes contain the file 
size, and the rest of the stream is structured the same way as if you were 
making a "filecontent" request: 
 - 4 bytes for the chunk size
 - N bytes for the next chunk

There is no support for checksums yet.

> Replication handler to support restore via upload
> -
>
> Key: SOLR-7567
> URL: https://issues.apache.org/jira/browse/SOLR-7567
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Greg Solovyev
> Fix For: Trunk
>
> Attachments: SOLR-7567.patch, SOLR-7583.patch
>
>
> Sometimes the snapshot is not available on a file system that can be accessed 
> by Solr or SolrCloud. It would be useful to be able to send snapshot  files 
> to Solr over HTTP.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_45) - Build # 12782 - Failure!

2015-05-22 Thread Erick Erickson
Looking..


On Fri, May 22, 2015 at 2:06 PM, Policeman Jenkins Server
 wrote:
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12782/
> Java: 64bit/jdk1.8.0_45 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC
>
> 4 tests failed.
> FAILED:  
> junit.framework.TestSuite.org.apache.solr.cloud.CdcrReplicationDistributedZkTest
>
> Error Message:
> Some resources were not closed, shutdown, or released.
>
> Stack Trace:
> java.lang.AssertionError: Some resources were not closed, shutdown, or 
> released.
> at __randomizedtesting.SeedInfo.seed([77F4964252CFD56A]:0)
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:234)
> at sun.reflect.GeneratedMethodAccessor40.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:799)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
> at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
> at 
> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
> at java.lang.Thread.run(Thread.java:745)
>
>
> FAILED:  
> junit.framework.TestSuite.org.apache.solr.cloud.CdcrReplicationDistributedZkTest
>
> Error Message:
> 5 threads leaked from SUITE scope at 
> org.apache.solr.cloud.CdcrReplicationDistributedZkTest: 1) 
> Thread[id=4675, 
> name=zkCallback-547-thread-1-processing-{node_name=127.0.0.1:42803_}-SendThread(127.0.0.1:54300),
>  state=TIMED_WAITING, group=TGRP-CdcrReplicationDistributedZkTest] at 
> java.lang.Thread.sleep(Native Method) at 
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:994)2) 
> Thread[id=4719, name=zkCallback-697-thread-2, state=TIMED_WAITING, 
> group=TGRP-CdcrReplicationDistributedZkTest] at 
> sun.misc.Unsafe.park(Native Method) at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>  at 
> java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
>  at 
> java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
>  at 
> java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) 
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) 
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  at java.lang.Thread.run(Thread.java:745)3) Thread[id=4676, 
> name=zkCallback-547-thread-1-processing-{node_name=127.0.0.1:42803_}-EventThread,
>  state=WAITING, group=TGRP-CdcrReplicationDistributedZkTest] at 
> sun.misc.Unsafe.park(Native Method) at 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java

[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_45) - Build # 12782 - Failure!

2015-05-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12782/
Java: 64bit/jdk1.8.0_45 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

4 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CdcrReplicationDistributedZkTest

Error Message:
Some resources were not closed, shutdown, or released.

Stack Trace:
java.lang.AssertionError: Some resources were not closed, shutdown, or released.
at __randomizedtesting.SeedInfo.seed([77F4964252CFD56A]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:234)
at sun.reflect.GeneratedMethodAccessor40.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:799)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CdcrReplicationDistributedZkTest

Error Message:
5 threads leaked from SUITE scope at 
org.apache.solr.cloud.CdcrReplicationDistributedZkTest: 1) Thread[id=4675, 
name=zkCallback-547-thread-1-processing-{node_name=127.0.0.1:42803_}-SendThread(127.0.0.1:54300),
 state=TIMED_WAITING, group=TGRP-CdcrReplicationDistributedZkTest] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:994)2) 
Thread[id=4719, name=zkCallback-697-thread-2, state=TIMED_WAITING, 
group=TGRP-CdcrReplicationDistributedZkTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)3) Thread[id=4676, 
name=zkCallback-547-thread-1-processing-{node_name=127.0.0.1:42803_}-EventThread,
 state=WAITING, group=TGRP-CdcrReplicationDistributedZkTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
4) Thread[id=4720, name=zkCallback-697-thread-3, state=TIMED_WAITING, 
group=TGRP-C

[jira] [Commented] (SOLR-6082) Umbrella JIRA for Admin UI and SolrCloud.

2015-05-22 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556820#comment-14556820
 ] 

Upayavira commented on SOLR-6082:
-

I've started thinking about a collections API panel. Here's what I have in mind:

A list of collections down the left hand side. A list of nodes down the right 
hand side. Each of these lists is independently scrollable.

At the top of the collections box is a big (+) symbol, clicking on which allows 
us to create a new collection.

Click on a collection and it will expand to show its shards. Click on a node 
and it will expand to show the shards/collections it hosts.

You can drag a shard from a collection onto a node. You can drag a shard 
from one node to another (visually, it will leave the shard behind, which will 
make it clear that you are cloning). 

There's more to it than this, but this is the basic idea.

This should all be pretty straightforward. Ironically, the one thing I've got 
to work out now is how to work with the CSS, as, to date, I've always had the 
old UI to base my work upon :-)

I am deliberately not addressing configurations as a part of the above. I am 
working on the assumption that relevant configs are already in place.

> Umbrella JIRA for Admin UI and SolrCloud.
> -
>
> Key: SOLR-6082
> URL: https://issues.apache.org/jira/browse/SOLR-6082
> Project: Solr
>  Issue Type: Improvement
>  Components: web gui
>Affects Versions: 4.9, Trunk
>Reporter: Erick Erickson
>Assignee: Shalin Shekhar Mangar
>
> It would be very helpful if the admin UI were more "cloud friendly". This is 
> an umbrella JIRA so we can collect sub-tasks as necessary. I think there 
> might be scattered JIRAs about this, let's link them in as we find them.
> [~steffkes] - I've taken the liberty of assigning it to you since you 
> expressed some interest. Feel free to assign it back if you want...
> Let's imagine that a user has a cluster with _no_ collections assigned and 
> start from there.
> Here's a simple way to set this up. Basically you follow the reference guide 
> tutorial but _don't_ define a collection.
> 1> completely delete the "collection1" directory from example
> 2> cp -r example example2
> 3> in example, execute "java -DzkRun -jar start.jar"
> 4> in example2, execute "java -Djetty.port=7574 -DzkHost=localhost:9983 -jar 
> start.jar"
> Now the "cloud link" appears. If you expand the tree view, you see the two 
> live nodes. But, there's nothing in the graph view, no cores are selectable, 
> etc.
> First problem (need to solve before any sub-jiras, so including it here): You 
> have to push a configuration directory to ZK.
> [~thetapi] The _last_ time Stefan and I started allowing files to be written 
> to Solr from the UI it was...unfortunate. I'm assuming that there's something 
> similar here. That is, we shouldn't allow pushing the Solr config _to_ 
> ZooKeeper through the Admin UI, where they'd be distributed to all the solr 
> nodes. Is that true? If this is a security issue, we can keep pushing the 
> config dirs to ZK a manual step for now...
> Once we determine how to get configurations up, we can work on the various 
> sub-jiras.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7586) State of external collections not displayed in cloud graph panel (In AngularJS)

2015-05-22 Thread Upayavira (JIRA)
Upayavira created SOLR-7586:
---

 Summary: State of external collections not displayed in cloud 
graph panel (In AngularJS)
 Key: SOLR-7586
 URL: https://issues.apache.org/jira/browse/SOLR-7586
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 5.2
Reporter: Upayavira
Priority: Minor
 Fix For: 5.3


I have just tracked down SOLR-5810. This code still needs porting over to the 
AngularJS admin UI. With this ticket as background, it will be much easier to 
do.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_45) - Build # 12607 - Failure!

2015-05-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12607/
Java: 64bit/jdk1.8.0_45 -XX:+UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithKerberos

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([E90436806BFC92C9]:0)


FAILED:  org.apache.solr.cloud.TestSolrCloudWithKerberos.testKerberizedSolr

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([E90436806BFC92C9]:0)




Build Log:
[...truncated 11307 lines...]
   [junit4] Suite: org.apache.solr.cloud.TestSolrCloudWithKerberos
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.TestSolrCloudWithKerberos
 E90436806BFC92C9-001/init-core-data-001
   [junit4]   2> 1173274 T8758 
oas.BaseDistributedSearchTestCase.initHostContext Setting hostContext system 
property: /_cq/
   [junit4]   2> 1184948 T8758 
oadsc.DefaultDirectoryService.showSecurityWarnings WARN You didn't change the 
admin password of directory service instance 'DefaultKrbServer'.  Please update 
the admin password as soon as possible to prevent a possible security breach.
   [junit4]   2> 1194163 T8758 oasc.ZkTestServer.run STARTING ZK TEST SERVER
   [junit4]   2> 1194164 T8766 oasc.ZkTestServer$2$1.setClientPort client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1194164 T8766 oasc.ZkTestServer$ZKServerMain.runFromConfig 
Starting server
   [junit4]   2> 1194264 T8758 oasc.ZkTestServer.run start zk server on 
port:33198
   [junit4]   2> 1194264 T8758 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2> 1194264 T8758 oascc.ConnectionManager.waitForConnected Waiting 
for client to connect to ZooKeeper
   [junit4]   2> 1194265 T8771 oaz.ClientCnxn$SendThread.startConnect WARN SASL 
configuration failed: javax.security.auth.login.LoginException: No JAAS 
configuration section named 'Client' was found in specified JAAS configuration 
file: 
'/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.TestSolrCloudWithKerberos
 E90436806BFC92C9-001/tempDir-001/minikdc/jaas-client.conf'. Will continue 
connection to Zookeeper server without SASL authentication, if Zookeeper server 
allows it.
   [junit4]   2> 1194265 T8773 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@2f66d4c9 
name:ZooKeeperConnection Watcher:127.0.0.1:33198 got event WatchedEvent 
state:AuthFailed type:None path:null path:null type:None
   [junit4]   2> 1194265 T8773 oascc.ConnectionManager.process WARN zkClient 
received AuthFailed
   [junit4]   2> 1194267 T8773 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@2f66d4c9 
name:ZooKeeperConnection Watcher:127.0.0.1:33198 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 1194267 T8758 oascc.ConnectionManager.waitForConnected Client 
is connected to ZooKeeper
   [junit4]   2> 1194267 T8758 oascc.SolrZkClient.createZkACLProvider Using 
default ZkACLProvider
   [junit4]   2> 1194267 T8758 oascc.SolrZkClient.makePath makePath: /solr
   [junit4]   2> 1194268 T8758 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2> 1194269 T8758 oascc.ConnectionManager.waitForConnected Waiting 
for client to connect to ZooKeeper
   [junit4]   2> 1194269 T8774 oaz.ClientCnxn$SendThread.startConnect WARN SASL 
configuration failed: javax.security.auth.login.LoginException: No JAAS 
configuration section named 'Client' was found in specified JAAS configuration 
file: 
'/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.TestSolrCloudWithKerberos
 E90436806BFC92C9-001/tempDir-001/minikdc/jaas-client.conf'. Will continue 
connection to Zookeeper server without SASL authentication, if Zookeeper server 
allows it.
   [junit4]   2> 1194269 T8776 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@6c2ad4bb 
name:ZooKeeperConnection Watcher:127.0.0.1:33198/solr got event WatchedEvent 
state:AuthFailed type:None path:null path:null type:None
   [junit4]   2> 1194269 T8776 oascc.ConnectionManager.process WARN zkClient 
received AuthFailed
   [junit4]   2> 1194269 T8776 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@6c2ad4bb 
name:ZooKeeperConnection Watcher:127.0.0.1:33198/solr got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 1194269 T8758 oascc.ConnectionManager.waitForConnected Client 
is connected to ZooKeeper
   [junit4]   2> 1194270 T8758 oascc.SolrZk

[jira] [Resolved] (SOLR-6820) The sync on the VersionInfo bucket in DistributedUpdateProcesser#addDocument appears to be a large bottleneck when using replication.

2015-05-22 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter resolved SOLR-6820.
--
Resolution: Fixed

> The sync on the VersionInfo bucket in DistributedUpdateProcesser#addDocument 
> appears to be a large bottleneck when using replication.
> -
>
> Key: SOLR-6820
> URL: https://issues.apache.org/jira/browse/SOLR-6820
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Timothy Potter
> Fix For: Trunk, 5.2
>
> Attachments: SOLR-6820.patch, threads.png
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-6820) The sync on the VersionInfo bucket in DistributedUpdateProcesser#addDocument appears to be a large bottleneck when using replication.

2015-05-22 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556774#comment-14556774
 ] 

Timothy Potter edited comment on SOLR-6820 at 5/22/15 8:19 PM:
---

This has been committed to 5.2 but I forgot to include the ticket # in the 
commit message for the 5.2 branch :-(

Commit on 5.2 branch was: revision 1681229.


was (Author: thelabdude):
This has been committed to 5.2 but I forgot to include the ticket # in the 
commit message for the 5.2 branch :-(

> The sync on the VersionInfo bucket in DistributedUpdateProcesser#addDocument 
> appears to be a large bottleneck when using replication.
> -
>
> Key: SOLR-6820
> URL: https://issues.apache.org/jira/browse/SOLR-6820
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Timothy Potter
> Fix For: Trunk, 5.2
>
> Attachments: SOLR-6820.patch, threads.png
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (SOLR-6820) The sync on the VersionInfo bucket in DistributedUpdateProcesser#addDocument appears to be a large bottleneck when using replication.

2015-05-22 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter reopened SOLR-6820:
--

> The sync on the VersionInfo bucket in DistributedUpdateProcesser#addDocument 
> appears to be a large bottleneck when using replication.
> -
>
> Key: SOLR-6820
> URL: https://issues.apache.org/jira/browse/SOLR-6820
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Timothy Potter
> Fix For: Trunk, 5.2
>
> Attachments: SOLR-6820.patch, threads.png
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Solr-Artifacts-trunk - Build # 2559 - Failure

2015-05-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Solr-Artifacts-trunk/2559/

No tests ran.

Build Log:
[...truncated 10412 lines...]


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-6820) The sync on the VersionInfo bucket in DistributedUpdateProcesser#addDocument appears to be a large bottleneck when using replication.

2015-05-22 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556774#comment-14556774
 ] 

Timothy Potter commented on SOLR-6820:
--

This has been committed to 5.2 but I forgot to include the ticket # in the 
commit message for the 5.2 branch :-(

> The sync on the VersionInfo bucket in DistributedUpdateProcesser#addDocument 
> appears to be a large bottleneck when using replication.
> -
>
> Key: SOLR-6820
> URL: https://issues.apache.org/jira/browse/SOLR-6820
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Timothy Potter
> Fix For: Trunk, 5.2
>
> Attachments: SOLR-6820.patch, threads.png
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Artifacts-trunk - Build # 2666 - Failure

2015-05-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Artifacts-trunk/2666/

No tests ran.

Build Log:
[...truncated 12149 lines...]
BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Lucene-Artifacts-trunk/lucene/common-build.xml:716:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Artifacts-trunk/lucene/common-build.xml:427:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Artifacts-trunk/lucene/common-build.xml:464:
 Ivy is not available

Total time: 0 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Publishing Javadoc
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

RE: [JENKINS] Lucene-Artifacts-5.x - Build # 861 - Failure

2015-05-22 Thread Uwe Schindler
Hi,

we are currently moving the FreeBSD Jenkins to Ubuntu. It's now running; we just 
have the wrong versions of the JDKs selected. Will fix that now.

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


> -Original Message-
> From: Apache Jenkins Server [mailto:jenk...@builds.apache.org]
> Sent: Friday, May 22, 2015 10:06 PM
> To: dev@lucene.apache.org
> Subject: [JENKINS] Lucene-Artifacts-5.x - Build # 861 - Failure
> 
> Build: https://builds.apache.org/job/Lucene-Artifacts-5.x/861/
> 
> No tests ran.
> 
> Build Log:
> [...truncated 10619 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Artifacts-5.x - Build # 861 - Failure

2015-05-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Artifacts-5.x/861/

No tests ran.

Build Log:
[...truncated 10619 lines...]


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-7468) Kerberos authentication module

2015-05-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556756#comment-14556756
 ] 

ASF subversion and git services commented on SOLR-7468:
---

Commit 1681226 from [~anshumg] in branch 'dev/branches/lucene_solr_5_2'
[ https://svn.apache.org/r1681226 ]

SOLR-7468: Fix the Kerberos test to use a reconfigured client always.(merge 
from branch_5x)

> Kerberos authentication module
> --
>
> Key: SOLR-7468
> URL: https://issues.apache.org/jira/browse/SOLR-7468
> Project: Solr
>  Issue Type: New Feature
>  Components: security
>Reporter: Ishan Chattopadhyaya
>Assignee: Anshum Gupta
> Fix For: 5.2
>
> Attachments: SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, 
> SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, 
> SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, 
> SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, 
> SOLR-7468.patch, SOLR-7468.patch
>
>
> SOLR-7274 introduces a pluggable authentication framework. This issue 
> provides a Kerberos plugin implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 3163 - Still Failing

2015-05-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/3163/

No tests ran.

Build Log:
[...truncated 10622 lines...]


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-7468) Kerberos authentication module

2015-05-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556753#comment-14556753
 ] 

ASF subversion and git services commented on SOLR-7468:
---

Commit 1681220 from [~anshumg] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1681220 ]

SOLR-7468: Fix the Kerberos test to use a reconfigured client always.(merge 
from trunk)

> Kerberos authentication module
> --
>
> Key: SOLR-7468
> URL: https://issues.apache.org/jira/browse/SOLR-7468
> Project: Solr
>  Issue Type: New Feature
>  Components: security
>Reporter: Ishan Chattopadhyaya
>Assignee: Anshum Gupta
> Fix For: 5.2
>
> Attachments: SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, 
> SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, 
> SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, 
> SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, 
> SOLR-7468.patch, SOLR-7468.patch
>
>
> SOLR-7274 introduces a pluggable authentication framework. This issue 
> provides a Kerberos plugin implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.2-Java7 - Build # 1 - Failure

2015-05-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.2-Java7/1/

No tests ran.

Build Log:
[...truncated 10618 lines...]


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Resolved] (SOLR-7582) The SolrCloud example (bin/solr -e cloud) should have soft auto-commits enabled by default.

2015-05-22 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter resolved SOLR-7582.
--
   Resolution: Fixed
Fix Version/s: Trunk

> The SolrCloud example (bin/solr -e cloud) should have soft auto-commits 
> enabled by default.
> ---
>
> Key: SOLR-7582
> URL: https://issues.apache.org/jira/browse/SOLR-7582
> Project: Solr
>  Issue Type: New Feature
>Affects Versions: 5.1
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Fix For: Trunk, 5.2
>
> Attachments: SOLR-7582.patch
>
>
> I think the SolrCloud example (bin/solr -e cloud) should enable soft 
> auto-commits by default. The script should enable soft-commits using the 
> Config API, which will give a good example of using the Config API for new 
> users.
> Also, the data_driven configs should allow setting the auto-commit values 
> using -D sys props as in the techproducts solrconfig.xml.
> I'd like to get this into 5.2 as I've run into several people that send data 
> into their collection only and don't see any docs (because soft-commits are 
> disabled). So this is a usability issue for new users.
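
As a concrete illustration of the kind of Config API call involved, here is a 
minimal sketch using only the JDK HTTP client. The Solr URL and collection name 
are placeholders, and the 3000 ms value is only an example; check the reference 
guide for the exact editable property names supported by your Solr version.

{noformat}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch: POST a set-property command to a collection's /config
// endpoint to enable soft auto-commits (placeholder URL and collection name).
public class EnableSoftAutoCommit {
    public static void main(String[] args) throws Exception {
        String body = "{ \"set-property\": { \"updateHandler.autoSoftCommit.maxTime\": 3000 } }";
        URL url = new URL("http://localhost:8983/solr/gettingstarted/config");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + conn.getResponseCode());  // 200 on success
    }
}
{noformat}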



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7582) The SolrCloud example (bin/solr -e cloud) should have soft auto-commits enabled by default.

2015-05-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556741#comment-14556741
 ] 

ASF subversion and git services commented on SOLR-7582:
---

Commit 1681219 from [~thelabdude] in branch 'dev/branches/lucene_solr_5_2'
[ https://svn.apache.org/r1681219 ]

SOLR-7582: Allow auto-commit to be set with system properties in 
data_driven_schema_configs and enable auto soft-commits for the bin/solr -e 
cloud example using the Config API.

> The SolrCloud example (bin/solr -e cloud) should have soft auto-commits 
> enabled by default.
> ---
>
> Key: SOLR-7582
> URL: https://issues.apache.org/jira/browse/SOLR-7582
> Project: Solr
>  Issue Type: New Feature
>Affects Versions: 5.1
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Fix For: Trunk, 5.2
>
> Attachments: SOLR-7582.patch
>
>
> I think the SolrCloud example (bin/solr -e cloud) should enable soft 
> auto-commits by default. The script should enable soft-commits using the 
> Config API, which will give a good example of using the Config API for new 
> users.
> Also, the data_driven configs should allow setting the auto-commit values 
> using -D sys props as in the techproducts solrconfig.xml.
> I'd like to get this into 5.2 as I've run into several people that send data 
> into their collection only and don't see any docs (because soft-commits are 
> disabled). So this is a usability issue for new users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7560) Parallel SQL Support

2015-05-22 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-7560:
-
Fix Version/s: 5.3

> Parallel SQL Support
> 
>
> Key: SOLR-7560
> URL: https://issues.apache.org/jira/browse/SOLR-7560
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java, search
>Reporter: Joel Bernstein
> Fix For: 5.3
>
>
> This ticket provides support for executing *Parallel SQL* queries across 
> SolrCloud collections. The SQL engine will be built on top of the Streaming 
> API (SOLR-7082), which provides support for *parallel relational algebra* and 
> *real-time map-reduce*.
> Basic design:
> 1) A new SQLHandler will be added to process SQL requests. The SQL statements 
> will be compiled to live Streaming API objects for parallel execution across 
> SolrCloud worker nodes.
> 2) SolrCloud collections will be abstracted as *Relational Tables*. 
> 3) The Presto SQL parser will be used to parse the SQL statements.
> 4) A JDBC thin client will be added as a Solrj client.
> This ticket will focus on putting the framework in place and providing basic 
> SELECT support and GROUP BY aggregate support.
> Future releases will build on this framework to provide additional SQL 
> features.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7582) The SolrCloud example (bin/solr -e cloud) should have soft auto-commits enabled by default.

2015-05-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556723#comment-14556723
 ] 

ASF subversion and git services commented on SOLR-7582:
---

Commit 1681216 from [~thelabdude] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1681216 ]

SOLR-7582: Allow auto-commit to be set with system properties in 
data_driven_schema_configs and enable auto soft-commits for the bin/solr -e 
cloud example using the Config API.

> The SolrCloud example (bin/solr -e cloud) should have soft auto-commits 
> enabled by default.
> ---
>
> Key: SOLR-7582
> URL: https://issues.apache.org/jira/browse/SOLR-7582
> Project: Solr
>  Issue Type: New Feature
>Affects Versions: 5.1
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Fix For: 5.2
>
> Attachments: SOLR-7582.patch
>
>
> I think the SolrCloud example (bin/solr -e cloud) should enable soft 
> auto-commits by default. The script should enable soft-commits using the 
> Config API, which will give a good example of using the Config API for new 
> users.
> Also, the data_driven configs should allow setting the auto-commit values 
> using -D sys props as in the techproducts solrconfig.xml.
> I'd like to get this into 5.2 as I've run into several people that send data 
> into their collection only and don't see any docs (because soft-commits are 
> disabled). So this is a usability issue for new users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7585) ConcurrentLFUCache throws NoSuchElementException under a write-heavy load

2015-05-22 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556707#comment-14556707
 ] 

Shawn Heisey commented on SOLR-7585:


The LFU cache implementation is my code, but I didn't write it from scratch.  I 
started with the existing LRU code.  Now that I look at ConcurrentLRUCache, the 
markAndSweep method there seems to be quite a bit different, and I can't 
remember what I actually did or whether the differences came afterwards.

I tried adding just the test so I could see if the test fails without the 
patch.  I did so on branch_5x, which is how I discovered that the test uses a 
lambda expression -- it won't compile on branch_5x because the build is 
targeted for a 1.7 compile version, and lambda expressions require 1.8.  Since I 
haven't yet wrestled with lambda expressions, I have no idea what they actually 
do.  Can you rewrite the test so it's compatible with Java 7?
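
(For readers who have not used lambdas: the usual Java 7 rewrite is to replace 
the lambda with an anonymous inner class. The snippet below is only a generic, 
hypothetical illustration -- the actual test code from the patch is not shown 
in this thread.)

{noformat}
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical example: the Java 8 lambda
//     Runnable writer = () -> cache.put("k", "v");
// rewritten as an anonymous inner class so it compiles with a 1.7 target.
public class Java7LambdaRewrite {
    public static void main(String[] args) throws InterruptedException {
        final ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();
        Runnable writer = new Runnable() {
            @Override
            public void run() {
                cache.put("k", "v");
            }
        };
        Thread t = new Thread(writer);
        t.start();
        t.join();
        System.out.println(cache);  // prints {k=v}
    }
}
{noformat}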


> ConcurrentLFUCache throws NoSuchElementException under a write-heavy load
> -
>
> Key: SOLR-7585
> URL: https://issues.apache.org/jira/browse/SOLR-7585
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.1
>Reporter: Maciej Zasada
>Priority: Minor
> Attachments: SOLR-7585.patch
>
>
> Under a write-heavy load {{ConcurrentLFUCache}} throws 
> {{NoSuchElementException}}. The problem lies within the 
> {{org.apache.solr.util.ConcurrentLFUCache#put}} method, which allows for a 
> race condition between the check and the call to the {{markAndSweep}} method. 
> Even though a thread must acquire a lock to perform sweeping, it's still 
> possible for multiple threads to detect a need for calling 
> markAndSweep. If they then execute it sequentially, subsequent runs will fail with 
> {{NoSuchElementException}}.
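
To make the failure mode concrete, here is a heavily simplified, self-contained 
sketch of the check-then-act pattern the description refers to. It is not Solr's 
ConcurrentLFUCache code; it only shows why a second, back-to-back sweep over an 
already-drained structure ends in NoSuchElementException.

{noformat}
import java.util.TreeSet;

// Hypothetical illustration: two callers both pass the "needs sweeping" check
// before either one sweeps. The first sweep drains the structure, so the
// second sweep calls first() on an empty set and throws NoSuchElementException.
public class CheckThenSweepDemo {
    public static void main(String[] args) {
        TreeSet<Integer> entries = new TreeSet<>();
        entries.add(1);
        entries.add(2);
        entries.add(3);

        boolean callerAWantsSweep = entries.size() > 2;  // caller A's check
        boolean callerBWantsSweep = entries.size() > 2;  // caller B checks before A sweeps

        if (callerAWantsSweep) {
            sweep(entries);  // drains the set
        }
        if (callerBWantsSweep) {
            sweep(entries);  // throws NoSuchElementException: nothing left to evict
        }
    }

    static void sweep(TreeSet<Integer> entries) {
        // a sweep that assumes there is always at least one entry to evict
        do {
            Integer evicted = entries.first();  // fails on an empty set
            entries.remove(evicted);
        } while (!entries.isEmpty());
    }
}
{noformat}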



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6497) Allow subclasses of FieldType to check frozen state

2015-05-22 Thread Ryan Ernst (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Ernst resolved LUCENE-6497.

   Resolution: Fixed
Fix Version/s: 5.2
   Trunk

> Allow subclasses of FieldType to check frozen state
> ---
>
> Key: LUCENE-6497
> URL: https://issues.apache.org/jira/browse/LUCENE-6497
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ryan Ernst
> Fix For: Trunk, 5.2
>
> Attachments: LUCENE-6497.patch
>
>
> checkIfFrozen() is currently private. We should make this protected, so subclasses 
> of FieldType can add additional state that is protected by freezing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6497) Allow subclasses of FieldType to check frozen state

2015-05-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556687#comment-14556687
 ] 

ASF subversion and git services commented on LUCENE-6497:
-

Commit 1681212 from [~rjernst] in branch 'dev/branches/lucene_solr_5_2'
[ https://svn.apache.org/r1681212 ]

LUCENE-6497: Allow subclasses of FieldType to check frozen state (merged 
r1681211)

> Allow subclasses of FieldType to check frozen state
> ---
>
> Key: LUCENE-6497
> URL: https://issues.apache.org/jira/browse/LUCENE-6497
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ryan Ernst
> Attachments: LUCENE-6497.patch
>
>
> checkIfFrozen() is currently private. We should make this protected, so subclasses 
> of FieldType can add additional state that is protected by freezing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6497) Allow subclasses of FieldType to check frozen state

2015-05-22 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556686#comment-14556686
 ] 

Ryan Ernst commented on LUCENE-6497:


I spoke with Anshum and will push this to 5.2 as well.

> Allow subclasses of FieldType to check frozen state
> ---
>
> Key: LUCENE-6497
> URL: https://issues.apache.org/jira/browse/LUCENE-6497
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ryan Ernst
> Attachments: LUCENE-6497.patch
>
>
> checkIfFrozen() is currently private. We should make this protected, so subclasses 
> of FieldType can add additional state that is protected by freezing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6497) Allow subclasses of FieldType to check frozen state

2015-05-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556683#comment-14556683
 ] 

ASF subversion and git services commented on LUCENE-6497:
-

Commit 1681211 from [~rjernst] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1681211 ]

LUCENE-6497: Allow subclasses of FieldType to check frozen state (merged 
r1681207)

> Allow subclasses of FieldType to check frozen state
> ---
>
> Key: LUCENE-6497
> URL: https://issues.apache.org/jira/browse/LUCENE-6497
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ryan Ernst
> Attachments: LUCENE-6497.patch
>
>
> checkIfFrozen() is currently private. We should make this protected, so subclasses 
> of FieldType can add additional state that is protected by freezing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6497) Allow subclasses of FieldType to check frozen state

2015-05-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556678#comment-14556678
 ] 

ASF subversion and git services commented on LUCENE-6497:
-

Commit 1681207 from [~rjernst] in branch 'dev/trunk'
[ https://svn.apache.org/r1681207 ]

LUCENE-6497: Allow subclasses of FieldType to check frozen state

> Allow subclasses of FieldType to check frozen state
> ---
>
> Key: LUCENE-6497
> URL: https://issues.apache.org/jira/browse/LUCENE-6497
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ryan Ernst
> Attachments: LUCENE-6497.patch
>
>
> checkIfFrozen() is currently private. We should make this protected, so subclasses 
> of FieldType can add additional state that is protected by freezing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6497) Allow subclasses of FieldType to check frozen state

2015-05-22 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556676#comment-14556676
 ] 

Michael McCandless commented on LUCENE-6497:


+1

> Allow subclasses of FieldType to check frozen state
> ---
>
> Key: LUCENE-6497
> URL: https://issues.apache.org/jira/browse/LUCENE-6497
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ryan Ernst
> Attachments: LUCENE-6497.patch
>
>
> checkIfFrozen() is currently private. We should make this protected, so subclasses 
> of FieldType can add additional state that is protected by freezing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7468) Kerberos authentication module

2015-05-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556672#comment-14556672
 ] 

ASF subversion and git services commented on SOLR-7468:
---

Commit 1681198 from [~anshumg] in branch 'dev/trunk'
[ https://svn.apache.org/r1681198 ]

SOLR-7468: Fix the Kerberos test to use a reconfigured client always.

> Kerberos authentication module
> --
>
> Key: SOLR-7468
> URL: https://issues.apache.org/jira/browse/SOLR-7468
> Project: Solr
>  Issue Type: New Feature
>  Components: security
>Reporter: Ishan Chattopadhyaya
>Assignee: Anshum Gupta
> Fix For: 5.2
>
> Attachments: SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, 
> SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, 
> SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, 
> SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, 
> SOLR-7468.patch, SOLR-7468.patch
>
>
> SOLR-7274 introduces a pluggable authentication framework. This issue 
> provides a Kerberos plugin implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7567) Replication handler to support restore via upload

2015-05-22 Thread Greg Solovyev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Solovyev updated SOLR-7567:

Fix Version/s: (was: 5.2)

> Replication handler to support restore via upload
> -
>
> Key: SOLR-7567
> URL: https://issues.apache.org/jira/browse/SOLR-7567
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Greg Solovyev
> Fix For: Trunk
>
> Attachments: SOLR-7567.patch
>
>
> Sometimes the snapshot is not available on a file system that can be accessed 
> by Solr or SolrCloud. It would be useful to be able to send snapshot  files 
> to Solr over HTTP.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6273) Cross Data Center Replication

2015-05-22 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556630#comment-14556630
 ] 

Erick Erickson commented on SOLR-6273:
--

I'm going to let this bake on trunk for a week or so, then merge into 5.3.

Thanks Renaud, Yonik, et al.!

> Cross Data Center Replication
> -
>
> Key: SOLR-6273
> URL: https://issues.apache.org/jira/browse/SOLR-6273
> Project: Solr
>  Issue Type: New Feature
>Reporter: Yonik Seeley
>Assignee: Erick Erickson
> Attachments: SOLR-6273-trunk.patch, SOLR-6273-trunk.patch, 
> SOLR-6273.patch, SOLR-6273.patch, SOLR-6273.patch, SOLR-6273.patch
>
>
> This is the master issue for Cross Data Center Replication (CDCR)
> described at a high level here: 
> http://heliosearch.org/solr-cross-data-center-replication/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6273) Cross Data Center Replication

2015-05-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556624#comment-14556624
 ] 

ASF subversion and git services commented on SOLR-6273:
---

Commit 1681186 from [~erickoerickson] in branch 'dev/trunk'
[ https://svn.apache.org/r1681186 ]

SOLR-6273: Cross Data Center Replication

> Cross Data Center Replication
> -
>
> Key: SOLR-6273
> URL: https://issues.apache.org/jira/browse/SOLR-6273
> Project: Solr
>  Issue Type: New Feature
>Reporter: Yonik Seeley
>Assignee: Erick Erickson
> Attachments: SOLR-6273-trunk.patch, SOLR-6273-trunk.patch, 
> SOLR-6273.patch, SOLR-6273.patch, SOLR-6273.patch, SOLR-6273.patch
>
>
> This is the master issue for Cross Data Center Replication (CDCR)
> described at a high level here: 
> http://heliosearch.org/solr-cross-data-center-replication/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6497) Allow subclasses of FieldType to check frozen state

2015-05-22 Thread Ryan Ernst (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Ernst updated LUCENE-6497:
---
Attachment: LUCENE-6497.patch

Simple patch.

> Allow subclasses of FieldType to check frozen state
> ---
>
> Key: LUCENE-6497
> URL: https://issues.apache.org/jira/browse/LUCENE-6497
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ryan Ernst
> Attachments: LUCENE-6497.patch
>
>
> checkIfFrozen() is currently private. We should make this protected, so subclasses 
> of FieldType can add additional state that is protected by freezing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6497) Allow subclasses of FieldType to check frozen state

2015-05-22 Thread Ryan Ernst (JIRA)
Ryan Ernst created LUCENE-6497:
--

 Summary: Allow subclasses of FieldType to check frozen state
 Key: LUCENE-6497
 URL: https://issues.apache.org/jira/browse/LUCENE-6497
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Ryan Ernst


checkIfFrozen() is currently private. We should make this protected, so subclasses 
of FieldType can add additional state that is protected by freezing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: 5.2 release branch created

2015-05-22 Thread Anshum Gupta
+1 on that. Low regression risk + high performance improvement = totally
worth it!

On Fri, May 22, 2015 at 11:19 AM, Timothy Potter 
wrote:

> Thanks Anshum!
>
> I have one more minor change to SOLR-6820 (which is already in the 5.2
> branch), but I was waiting on some feedback before increasing the
> default number of version buckets for the Solr UpdateLog. That fix is
> in, and I would like to commit that change; the risk of regression is near
> nil and the benefit is faster indexing performance ;-)
>
> On Fri, May 22, 2015 at 1:28 AM, Anshum Gupta 
> wrote:
> > The 5.2 branch has been created -
> > https://svn.apache.org/repos/asf/lucene/dev/branches/lucene_solr_5_2/
> >
> > @Tim Potter: I know you want to get SOLR-7582 in and you'd mentioned to
> me
> > about it. feel free to commit that change to the branch.
> >
> > If there's anything else that someone feels must go in with this release,
> > please take a judgement call while also considering the ramifications of
> > such a commit (if any).
> >
> > *Bug fixes* that aren't huge or complex and/or fix a critical bug are
> fine.
> >
> > Let's play nice with the release branch. :)
> >
> > I plan on creating the RC sometime on Monday/Tuesday.
> >
> > --
> > Anshum Gupta
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


-- 
Anshum Gupta


Re: 5.2 release branch created

2015-05-22 Thread Timothy Potter
Thanks Anshum!

I have one more minor change to SOLR-6820 (which is already in the 5.2
branch), but I was waiting on some feedback before increasing the
default number of version buckets for the Solr UpdateLog. That fix is
in, and I would like to commit that change; the risk of regression is near
nil and the benefit is faster indexing performance ;-)

On Fri, May 22, 2015 at 1:28 AM, Anshum Gupta  wrote:
> The 5.2 branch has been created -
> https://svn.apache.org/repos/asf/lucene/dev/branches/lucene_solr_5_2/
>
> @Tim Potter: I know you want to get SOLR-7582 in and you'd mentioned to me
> about it. feel free to commit that change to the branch.
>
> If there's anything else that someone feels must go in with this release,
> please take a judgement call while also considering the ramifications of
> such a commit (if any).
>
> *Bug fixes* that aren't huge or complex and/or fix a critical bug are fine.
>
> Let's play nice with the release branch. :)
>
> I plan on creating the RC sometime on Monday/Tuesday.
>
> --
> Anshum Gupta

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7582) The SolrCloud example (bin/solr -e cloud) should have soft auto-commits enabled by default.

2015-05-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556529#comment-14556529
 ] 

ASF subversion and git services commented on SOLR-7582:
---

Commit 1681177 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1681177 ]

SOLR-7582: Allow auto-commit to be set with system properties in 
data_driven_schema_configs and enable auto soft-commits for the bin/solr -e 
cloud example using the Config API.

> The SolrCloud example (bin/solr -e cloud) should have soft auto-commits 
> enabled by default.
> ---
>
> Key: SOLR-7582
> URL: https://issues.apache.org/jira/browse/SOLR-7582
> Project: Solr
>  Issue Type: New Feature
>Affects Versions: 5.1
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Fix For: 5.2
>
> Attachments: SOLR-7582.patch
>
>
> I think the SolrCloud example (bin/solr -e cloud) should enable soft 
> auto-commits by default. The script should enable soft-commits using the 
> Config API, which will give a good example of using the Config API for new 
> users.
> Also, the data_driven configs should allow setting the auto-commit values 
> using -D sys props as in the techproducts solrconfig.xml.
> I'd like to get this into 5.2 as I've run into several people that send data 
> into their collection only and don't see any docs (because soft-commits are 
> disabled). So this is a usability issue for new users.
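
For reference, the sys-prop pattern mentioned above is roughly what the sample
techproducts solrconfig.xml already does (a sketch from memory, not a verbatim
copy of the shipped config):

{noformat}
<autoCommit>
  <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>

<autoSoftCommit>
  <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
</autoSoftCommit>
{noformat}

With that in place, a -Dsolr.autoSoftCommit.maxTime=3000 system property at
startup (or the Config API) can turn soft commits on without editing the
config set.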



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7582) The SolrCloud example (bin/solr -e cloud) should have soft auto-commits enabled by default.

2015-05-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556524#comment-14556524
 ] 

ASF subversion and git services commented on SOLR-7582:
---

Commit 1681175 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1681175 ]

Revert changes committed under wrong JIRA #, should have been SOLR-7582, not 
7583

> The SolrCloud example (bin/solr -e cloud) should have soft auto-commits 
> enabled by default.
> ---
>
> Key: SOLR-7582
> URL: https://issues.apache.org/jira/browse/SOLR-7582
> Project: Solr
>  Issue Type: New Feature
>Affects Versions: 5.1
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Fix For: 5.2
>
> Attachments: SOLR-7582.patch
>
>
> I think the SolrCloud example (bin/solr -e cloud) should enable soft- 
> auto-commits by default. The script should enable soft-commits using the 
> Config API, which will give a good example of using the Config API for new 
> users.
> Also, the data_driven configs should allow setting the auto-commit values 
> using -D sys props as in the techproducts solrconfig.xml.
> I'd like to get this into 5.2 as I've run into several people that send data 
> into their collection only and don't see any docs (because soft-commits are 
> disabled). So this is a usability issue for new users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7583) API to download a snapshot by name

2015-05-22 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556522#comment-14556522
 ] 

Timothy Potter commented on SOLR-7583:
--

Wrong ticket number ... reverting! sorry for the noise

> API to download a snapshot by name
> --
>
> Key: SOLR-7583
> URL: https://issues.apache.org/jira/browse/SOLR-7583
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Greg Solovyev
>
> What we are looking for:
> SolrCloud and Solr should have APIs to download a snapshot via HTTP. 
> For single node Solr, this API will find a snapshot and stream it back over 
> HTTP. For SolrCloud, this API will find a Replica that has the snapshot with 
> requested name and stream the snapshot from that replica. Since there are 
> multiple files inside a snapshot, the API should probably zip the snapshot 
> folder before sending it back to the client.
> Why we need this:
> this will allow us to create and fetch fully contained archives of customer 
> data where each backup archive will contain Solr index as well as other 
> customer data (DB, metadata, files, etc).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7583) API to download a snapshot by name

2015-05-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556518#comment-14556518
 ] 

ASF subversion and git services commented on SOLR-7583:
---

Commit 1681173 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1681173 ]

SOLR-7583: Allow auto-commit to be set with system properties in 
data_driven_schema_configs and enable auto soft-commits for the bin/solr -e 
cloud example using the Config API.

> API to download a snapshot by name
> --
>
> Key: SOLR-7583
> URL: https://issues.apache.org/jira/browse/SOLR-7583
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Greg Solovyev
>
> What we are looking for:
> SolrCloud and Solr should have APIs to download a snapshot via HTTP. 
> For single node Solr, this API will find a snapshot and stream it back over 
> HTTP. For SolrCloud, this API will find a Replica that has the snapshot with 
> requested name and stream the snapshot from that replica. Since there are 
> multiple files inside a snapshot, the API should probably zip the snapshot 
> folder before sending it back to the client.
> Why we need this:
> this will allow us to create and fetch fully contained archives of customer 
> data where each backup archive will contain Solr index as well as other 
> customer data (DB, metadata, files, etc).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6820) The sync on the VersionInfo bucket in DistributedUpdateProcesser#addDocument appears to be a large bottleneck when using replication.

2015-05-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556512#comment-14556512
 ] 

ASF subversion and git services commented on SOLR-6820:
---

Commit 1681171 from [~thelabdude] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1681171 ]

SOLR-6820: Increase the default number of buckets to 65536 instead of 256

> The sync on the VersionInfo bucket in DistributedUpdateProcesser#addDocument 
> appears to be a large bottleneck when using replication.
> -
>
> Key: SOLR-6820
> URL: https://issues.apache.org/jira/browse/SOLR-6820
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Timothy Potter
> Fix For: Trunk, 5.2
>
> Attachments: SOLR-6820.patch, threads.png
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6820) The sync on the VersionInfo bucket in DistributedUpdateProcesser#addDocument appears to be a large bottleneck when using replication.

2015-05-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556497#comment-14556497
 ] 

ASF subversion and git services commented on SOLR-6820:
---

Commit 1681169 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1681169 ]

SOLR-6820: Increase the default number of buckets to 65536 instead of 256

> The sync on the VersionInfo bucket in DistributedUpdateProcesser#addDocument 
> appears to be a large bottleneck when using replication.
> -
>
> Key: SOLR-6820
> URL: https://issues.apache.org/jira/browse/SOLR-6820
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Timothy Potter
> Fix For: Trunk, 5.2
>
> Attachments: SOLR-6820.patch, threads.png
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7582) The SolrCloud example (bin/solr -e cloud) should have soft auto-commits enabled by default.

2015-05-22 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556467#comment-14556467
 ] 

Anshum Gupta commented on SOLR-7582:


LGTM, +1!
It would be good to add pre-checks for all 3 tests though, i.e. that a 
property exists before you delete it and confirm, etc.

> The SolrCloud example (bin/solr -e cloud) should have soft auto-commits 
> enabled by default.
> ---
>
> Key: SOLR-7582
> URL: https://issues.apache.org/jira/browse/SOLR-7582
> Project: Solr
>  Issue Type: New Feature
>Affects Versions: 5.1
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Fix For: 5.2
>
> Attachments: SOLR-7582.patch
>
>
> I think the SolrCloud example (bin/solr -e cloud) should enable soft- 
> auto-commits by default. The script should enable soft-commits using the 
> Config API, which will give a good example of using the Config API for new 
> users.
> Also, the data_driven configs should allow setting the auto-commit values 
> using -D sys props as in the techproducts solrconfig.xml.
> I'd like to get this into 5.2 as I've run into several people that send data 
> into their collection only and don't see any docs (because soft-commits are 
> disabled). So this is a usability issue for new users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 855 - Still Failing

2015-05-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/855/

2 tests failed.
REGRESSION:  org.apache.solr.cloud.RecoveryZkTest.test

Error Message:
shard1 is not consistent.  Got 864 from 
http://127.0.0.1:61546/up_lvc/gp/collection1lastClient and got 223 from 
http://127.0.0.1:61558/up_lvc/gp/collection1

Stack Trace:
java.lang.AssertionError: shard1 is not consistent.  Got 864 from 
http://127.0.0.1:61546/up_lvc/gp/collection1lastClient and got 223 from 
http://127.0.0.1:61558/up_lvc/gp/collection1
at 
__randomizedtesting.SeedInfo.seed([19F35FD276FB405C:91A76008D8072DA4]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.apache.solr.cloud.RecoveryZkTest.test(RecoveryZkTest.java:123)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$Sta

Re: Configsets and Config APIs in Solr

2015-05-22 Thread Tomás Fernández Löbbe
> TLDR: we should think about this as configset base vs per-collection diff,
> not as immutable base vs per-collection mutable.

Makes sense, I was mostly thinking of it being immutable from the current
Config APIs. Editing a configset for multiple collections is a valid and
useful feature; the problem is doing that from inside one collection's API
call.

> So then the question becomes, do we want an API that can *also* make
> collection-specific changes to a shared config?

If we feel there is no need for collection-specific config changes, I'm OK,
but again, the API should be outside of the collection, like a Configset
API. The "generate configset based on X" should also be a command of this
API. In addition, this could allow users to edit a configset that's not
currently being used by any collection.

Tomás


On Fri, May 22, 2015 at 7:10 AM, Yonik Seeley  wrote:

> Makes sense Greg.
>
> Just looking at it from the ZK perspective (APIs aside), the original
> idea behind referencing a config set by name was so that you could
> change it in one place and everyone relying on it would get the
> changes.
>
> If one wants collections to have separate independent config sets they
> can already do that.
>
> So then the question becomes, do we want an API that can *also* make
> collection-specific changes to a shared config?
>
> An alternative would be a command to make a copy of a config set, and
> a command to switch a specific collection to use that new config set.
> Then any further changes would be collection specific.  That's sort of
> like SOLR-5955 - config templates - but you can "template" off of any
> other config set, at any point in time.  Actually, that type of
> functionality seems generally useful regardless.
>
> -Yonik
>
>
> On Thu, May 21, 2015 at 8:07 PM, Gregory Chanan 
> wrote:
> > I'm +1 on the general idea, but I'm not convinced about the
> > mutable/immutable separation.
> >
> > Do we not think it is valid to modify a single config(set) that affects
> > multiple collections?  I can imagine a case where my data with the same
> > config(set) is partitioned into many different collections, whether by
> date,
> > sorted order, etc. that all use the same underlying config(set).  Let's
> say
> > I have collections partitioned by month and I decide I want to add
> another
> > field; I don't want to have to modify
> > jan/schema
> > feb/schema
> > mar/schema
> > etc.
> >
> > I just want to modify the single underlying config(set).  You can imagine
> > having a configset API that lets me do that, so if I wanted to modify a
> > single collection's config I would call:
> > jan/schema
> > but if i wanted to modify the underlying config(set) I would call:
> > configset/month_partitioned_config
> >
> > My point is this: if the problem is that it is confusing to have
> configsets
> > modified when you make collection-level calls, then we should fix that
> (I'm
> > 100% in agreement with that, btw).  You can fix that by having a
> configset
> > and a per-collection diff; defining the configset as immutable doesn't
> solve
> > the problem, only locks us into an implementation that doesn't support the
> > use case above.  I'm not even saying we should implement a configset API,
> > only that defining this as an immutable vs mutable implementation blocks
> us
> > from doing that.
> >
> > TLDR: we should think about this as configset base vs per-collection
> diff,
> > not as immutable base vs per-collection mutable.
> >
> > Thoughts?
> > Greg
> >
> >
> > On Tue, May 19, 2015 at 10:52 AM, Tomás Fernández Löbbe
> >  wrote:
> >>
> >> I created https://issues.apache.org/jira/browse/SOLR-7570
> >>
> >> On Fri, May 15, 2015 at 10:31 AM, Alan Woodward 
> wrote:
> >>>
> >>> +1
> >>>
> >>> A nice way of doing it would be to make it part of the
> SolrResourceLoader
> >>> interface.  The ZK resource loader could check in the
> collection-specific
> >>> zknode first, and then under configs/, and we could add a
> writeResource()
> >>> method that writes to the collection-specific node as well.  Then all
> config
> >>> I/O goes via the resource loader, and we have a way of keeping certain
> parts
> >>> immutable.
> >>>
> >>> On 15 May 2015, at 17:39, Tomás Fernández Löbbe  >
> >>> wrote:
> >>>
> >>> I agree about differentiating the mutable part (configoverlay,
> generated
> >>> schema, etc) and the immutable (the configset) , but I think it would
> be
> >>> better if the mutable part is placed under /collections/x/...,
> otherwise
> >>> "/configs" would have a mix of ConfigSets and collection-specific
> >>> configuration.
> >>>
> >>> On Fri, May 15, 2015 at 6:38 AM, Noble Paul 
> wrote:
> 
>  I think this needs more discussion
> 
>  When a collection is created we should have two things
> 
>  an immutable part and a mutable part
> 
>  for instance my collection name is "x" and it uses schemaless example
>  conf
> 
>  I must now have two conf dirs
> 
>  configs/schemaless 

[JENKINS] Lucene-Solr-5.x-Windows (64bit/jdk1.7.0_80) - Build # 4718 - Failure!

2015-05-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4718/
Java: 64bit/jdk1.7.0_80 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.core.RequestHandlersTest.testStatistics

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([78307338B0DFCF72]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.RequestHandlersTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([78307338B0DFCF72]:0)




Build Log:
[...truncated 0 lines...]
   [junit4] Suite: org.apache.solr.core.RequestHandlersTest
   [junit4]   2> Creating dataDir: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.RequestHandlersTest
 78307338B0DFCF72-001\init-core-data-001
   [junit4]   2> 897451 T6840 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl 
(true) and clientAuth (false)
   [junit4]   2> 897456 T6840 oas.SolrTestCaseJ4.initCore initCore
   [junit4]   2> 897456 T6840 oasc.SolrResourceLoader. new 
SolrResourceLoader for directory: 
'C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\test-files\solr\collection1\'
   [junit4]   2> 897457 T6840 oasc.SolrResourceLoader.replaceClassLoader Adding 
'file:/C:/Users/JenkinsSlave/workspace/Lucene-Solr-5.x-Windows/solr/core/src/test-files/solr/collection1/lib/classes/'
 to classloader
   [junit4]   2> 897457 T6840 oasc.SolrResourceLoader.replaceClassLoader Adding 
'file:/C:/Users/JenkinsSlave/workspace/Lucene-Solr-5.x-Windows/solr/core/src/test-files/solr/collection1/lib/README'
 to classloader
   [junit4]   2> 897526 T6840 oasc.SolrConfig.refreshRequestParams current 
version of requestparams : -1
   [junit4]   2> 897573 T6840 oasc.SolrConfig. Using Lucene MatchVersion: 
5.3.0
   [junit4]   2> 897630 T6840 oasc.SolrConfig. Loaded SolrConfig: 
solrconfig.xml
   [junit4]   2> 897631 T6840 oass.IndexSchema.readSchema Reading Solr Schema 
from 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\test-files\solr\collection1\conf\schema.xml
   [junit4]   2> 897638 T6840 oass.IndexSchema.readSchema [null] Schema 
name=test
   [junit4]   2> 898088 T6840 oass.OpenExchangeRatesOrgProvider.init 
Initialized with rates=open-exchange-rates.json, refreshInterval=1440.
   [junit4]   2> 898101 T6840 oass.IndexSchema.readSchema default search field 
in schema is text
   [junit4]   2> 898118 T6840 oass.IndexSchema.readSchema unique key field: id
   [junit4]   2> 898124 T6840 oass.FileExchangeRateProvider.reload Reloading 
exchange rates from file currency.xml
   [junit4]   2> 898127 T6840 oass.FileExchangeRateProvider.reload Reloading 
exchange rates from file currency.xml
   [junit4]   2> 898130 T6840 oass.OpenExchangeRatesOrgProvider.reload 
Reloading exchange rates from open-exchange-rates.json
   [junit4]   2> 898130 T6840 
oass.OpenExchangeRatesOrgProvider$OpenExchangeRates. WARN Unknown key 
IMPORTANT NOTE
   [junit4]   2> 898131 T6840 
oass.OpenExchangeRatesOrgProvider$OpenExchangeRates. WARN Expected key, 
got STRING
   [junit4]   2> 898132 T6840 oass.OpenExchangeRatesOrgProvider.reload 
Reloading exchange rates from open-exchange-rates.json
   [junit4]   2> 898132 T6840 
oass.OpenExchangeRatesOrgProvider$OpenExchangeRates. WARN Unknown key 
IMPORTANT NOTE
   [junit4]   2> 898132 T6840 
oass.OpenExchangeRatesOrgProvider$OpenExchangeRates. WARN Expected key, 
got STRING
   [junit4]   2> 898133 T6840 oasc.SolrResourceLoader.locateSolrHome JNDI not 
configured for solr (NoInitialContextEx)
   [junit4]   2> 898133 T6840 oasc.SolrResourceLoader.locateSolrHome using 
system property solr.solr.home: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\test-files\solr
   [junit4]   2> 898133 T6840 oasc.SolrResourceLoader. new 
SolrResourceLoader for directory: 
'C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\test-files\solr\'
   [junit4]   2> 898155 T6840 oasc.CoreContainer. New CoreContainer 
1188332566
   [junit4]   2> 898155 T6840 oasc.CoreContainer.load Loading cores into 
CoreContainer 
[instanceDir=C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\test-files\solr\]
   [junit4]   2> 898155 T6840 oasc.CoreContainer.load loading shared library: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\test-files\solr\lib
   [junit4]   2> 898155 T6840 oasc.SolrResourceLoader.addToClassLoader WARN 
Can't find (or read) directory to add to classloader: lib (resolved as: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\test-files\solr\lib).
   [junit4]   2> 898164 T6840 oashc.HttpShardHandlerFactory.init created with 
socketTimeout : 60,connTimeout : 6,maxConnectionsPerHost : 
20,maxConnections : 1

[jira] [Commented] (SOLR-6820) The sync on the VersionInfo bucket in DistributedUpdateProcesser#addDocument appears to be a large bottleneck when using replication.

2015-05-22 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556417#comment-14556417
 ] 

Yonik Seeley commented on SOLR-6820:


I don't like that it's a band-aid around the real problem, but it's the best 
pseudo-workaround we currently have I guess (it's based on luck... we're just 
lowering the likelihood of a different thread hitting the blocked bucket).

+1 for raising to 65536 for 5.2
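
For anyone who wants to size this per core, the bucket count ends up as an
updateLog setting in solrconfig.xml; a sketch, assuming the numVersionBuckets
name and the solr.ulog.numVersionBuckets sys prop this change wires up:

{noformat}
<updateLog>
  <str name="dir">${solr.ulog.dir:}</str>
  <int name="numVersionBuckets">${solr.ulog.numVersionBuckets:65536}</int>
</updateLog>
{noformat}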

> The sync on the VersionInfo bucket in DistributedUpdateProcesser#addDocument 
> appears to be a large bottleneck when using replication.
> -
>
> Key: SOLR-6820
> URL: https://issues.apache.org/jira/browse/SOLR-6820
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Timothy Potter
> Fix For: Trunk, 5.2
>
> Attachments: SOLR-6820.patch, threads.png
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6820) The sync on the VersionInfo bucket in DistributedUpdateProcesser#addDocument appears to be a large bottleneck when using replication.

2015-05-22 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556412#comment-14556412
 ] 

Erick Erickson commented on SOLR-6820:
--

Should we put the 65K bucket default into 5.2? I don't see a good reason not to.

> The sync on the VersionInfo bucket in DistributedUpdateProcesser#addDocument 
> appears to be a large bottleneck when using replication.
> -
>
> Key: SOLR-6820
> URL: https://issues.apache.org/jira/browse/SOLR-6820
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Timothy Potter
> Fix For: Trunk, 5.2
>
> Attachments: SOLR-6820.patch, threads.png
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: TermsQuery in ‘join’ module redundant?

2015-05-22 Thread Adrien Grand
On the one hand, I don't like that the TermsQuery might encourage data
modeling that requires building queries over thousands of terms. On
the other hand I think it has become more reasonable recently and
tries to do the right thing given the terms it has been created with:
 - it rewrites to a disjunction when there are few terms
 - it uses a compressed bit set if it merges sparse postings lists

So I'm +1 on moving it to lucene/core.
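
For context, building one of these over many ids looks roughly like this (a
sketch assuming the 5.x queries-module constructor that takes a list of terms,
and an illustrative "id" field):

{noformat}
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.index.Term;
import org.apache.lucene.queries.TermsQuery;

public class TermsQueryExample {
  public static TermsQuery idsQuery(List<String> ids) {
    List<Term> terms = new ArrayList<>();
    for (String id : ids) {
      terms.add(new Term("id", id));  // "id" is just an illustrative field name
    }
    // Few terms: rewrites to a plain disjunction.
    // Many terms: builds a per-segment doc-id set instead.
    return new TermsQuery(terms);
  }
}
{noformat}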

At the same time, maybe it would make sense to move
DocValuesRangeQuery and DocValuesTermsQuery from lucene/core to
lucene/queries? Even if two-phase support made them better, it's still
a bit scary to have queries whose approximations match all documents?


On Thu, May 21, 2015 at 11:00 PM, david.w.smi...@gmail.com
 wrote:
> Today I noticed org.apache.lucene.search.join.TermsQuery (package access) in
> the join module that is functionally equivalent to one by the same name in
> the queries module.  It may be a bit historical since the one in the queries
> module until recently was a Filter, not a Query.  But now there is
> redundancy.  Based on recent changes in Lucene 5.2 done by Adrien to
> TermsQuery; I suspect both implementations would perform about the same.
> Thus, I think the one in the ‘join’ module should be deleted.
>
> Note the ‘join’ module does not yet depend on the ‘queries’ module. I think
> the TermsQuery is of such broad general utility that it should go into
> Lucene core.
>
> ~ David



-- 
Adrien

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7582) The SolrCloud example (bin/solr -e cloud) should have soft auto-commits enabled by default.

2015-05-22 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-7582:
-
Attachment: SOLR-7582.patch

Here's a patch that has:

* updates the data_driven auto-commit settings to match what's in the other 
configsets

* adds a new "config" action to SolrCLI for setting a config property from 
bin/solr

* enables soft-auto-commit for 3 seconds for the gettingstarted collection of 
the bin/solr -e cloud example

* improvements to the unit test for SolrCLI
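
For illustration, the Config API change for the gettingstarted collection
boils down to roughly this (a sketch assuming the default port and the
standard set-property command):

{noformat}
curl http://localhost:8983/solr/gettingstarted/config \
  -H 'Content-type:application/json' \
  -d '{"set-property": {"updateHandler.autoSoftCommit.maxTime": 3000}}'
{noformat}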

> The SolrCloud example (bin/solr -e cloud) should have soft auto-commits 
> enabled by default.
> ---
>
> Key: SOLR-7582
> URL: https://issues.apache.org/jira/browse/SOLR-7582
> Project: Solr
>  Issue Type: New Feature
>Affects Versions: 5.1
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Fix For: 5.2
>
> Attachments: SOLR-7582.patch
>
>
> I think the SolrCloud example (bin/solr -e cloud) should enable soft- 
> auto-commits by default. The script should enable soft-commits using the 
> Config API, which will give a good example of using the Config API for new 
> users.
> Also, the data_driven configs should allow setting the auto-commit values 
> using -D sys props as in the techproducts solrconfig.xml.
> I'd like to get this into 5.2 as I've run into several people that send data 
> into their collection only and don't see any docs (because soft-commits are 
> disabled). So this is a usability issue for new users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-7547) SDF should short circuit for static content request

2015-05-22 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta resolved SOLR-7547.

Resolution: Fixed

> SDF should short circuit for static content request
> ---
>
> Key: SOLR-7547
> URL: https://issues.apache.org/jira/browse/SOLR-7547
> Project: Solr
>  Issue Type: Bug
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Fix For: 5.2
>
> Attachments: SOLR-7547.patch
>
>
> As of now, when we request the Admin UI page, I see those requests coming 
> into SDF and creating the HttpSolrCall object. This shouldn't happen and 
> requests for those paths should just short circuit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5955) Add config templates to SolrCloud.

2015-05-22 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556395#comment-14556395
 ] 

Anshum Gupta commented on SOLR-5955:


It sounds like what I suggested but with the ability to dynamically switch a 
config-set for a collection. That might get tricky unless you're only talking 
about allowing a one-time move from a shared config-set to a 
collection-specific config-set. e.g.
# collection1 uses conf1 (shared config-set)
# you want collection1 to have its own copy of conf1, so you make the API 
call, which copies the config to another location 
{{(/collections/collection1/config/... )}} and then links it there.
# Going forward, you could edit this config using the API without worrying 
about the impact on other collections.

It would be tricky to have an API that allows collection1 to be linked to a new 
config called conf2, and then in a while switch it to, say, conf3, which may or 
may not even be compatible.

> Add config templates to SolrCloud.
> --
>
> Key: SOLR-5955
> URL: https://issues.apache.org/jira/browse/SOLR-5955
> Project: Solr
>  Issue Type: New Feature
>Reporter: Mark Miller
> Attachments: SOLR-5955.patch
>
>
> You should be able to upload config sets to a templates location and then 
> specify a template as your starting config when creating new collections via 
> REST API. We can have a default template that we ship with.
> This will let you create collections from scratch via REST API, and then you 
> can use things like the schema REST API to customize the template config to 
> your needs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_45) - Build # 12780 - Failure!

2015-05-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12780/
Java: 32bit/jdk1.8.0_45 -server -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestSolrCloudWithKerberos.testKerberizedSolr

Error Message:
Error from server at http://127.0.0.1:55467: Expected mime type 
application/octet-stream but got text/html.Error 
401HTTP ERROR: 401 Problem accessing 
/admin/collections. Reason: Authentication required Powered by Jetty://   

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:55467: Expected mime type 
application/octet-stream but got text/html. 


Error 401 


HTTP ERROR: 401
Problem accessing /admin/collections. Reason:
Authentication required
Powered by Jetty://



at 
__randomizedtesting.SeedInfo.seed([1CD9F54B95055022:B7C0238CDF649015]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:529)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:235)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:227)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:376)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:328)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1086)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:856)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:799)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:152)
at 
org.apache.solr.cloud.TestSolrCloudWithKerberos.testKerberizedSolr(TestSolrCloudWithKerberos.java:152)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStore

[jira] [Commented] (SOLR-6846) deadlock in UninvertedField#getUninvertedField()

2015-05-22 Thread Avishai Ish-Shalom (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556341#comment-14556341
 ] 

Avishai Ish-Shalom commented on SOLR-6846:
--

You're right of course, cache.wait() isn't the problem here. It seems the 
problem is monitor granularity: the code uses the same lock and notifies all 
waiting threads regardless of what field they are waiting for.
It's been a while since I worked on this issue; I'll try reproducing it when I 
have some time on my hands.

> deadlock in UninvertedField#getUninvertedField()
> 
>
> Key: SOLR-6846
> URL: https://issues.apache.org/jira/browse/SOLR-6846
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.10.2
>Reporter: Avishai Ish-Shalom
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6846.patch
>
>
> Multiple concurrent calls to UninvertedField#getUninvertedField may deadlock: 
> if a call gets to {{cache.wait()}} before another thread gets to the 
> synchronized block around {{cache.notifyAll()}} code will deadlock because 
> {{cache.wait()}} is synchronized with the same monitor object.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7555) Display total space and available space in Admin

2015-05-22 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556336#comment-14556336
 ] 

Erik Hatcher commented on SOLR-7555:


How does this play out if the index is stored in HDFS? I see it using 
java.nio's Files and Paths API, which makes me worry that this might break when 
the index is on a different kind of beast.   I'll await feedback that this 
scenario is ok before committing this patch - but good idea!  
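
For what it's worth, the local-filesystem case is just a java.nio FileStore
lookup; a minimal sketch (hypothetical data dir, not the patch itself), which
indeed would not see an HDFS-backed index:

{noformat}
import java.io.IOException;
import java.nio.file.FileStore;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class DiskSpaceExample {
  public static void main(String[] args) throws IOException {
    // Hypothetical core data dir; the real code would use the core's index path.
    Path dataDir = Paths.get(args.length > 0 ? args[0] : ".");
    FileStore store = Files.getFileStore(dataDir);
    System.out.println("total bytes:  " + store.getTotalSpace());
    System.out.println("usable bytes: " + store.getUsableSpace());
  }
}
{noformat}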

> Display total space and available space in Admin
> 
>
> Key: SOLR-7555
> URL: https://issues.apache.org/jira/browse/SOLR-7555
> Project: Solr
>  Issue Type: Improvement
>  Components: web gui
>Affects Versions: 5.1
>Reporter: Eric Pugh
>Assignee: Erik Hatcher
>Priority: Minor
> Fix For: 5.2
>
> Attachments: SOLR-7555.patch
>
>
> Frequently I have access to the Solr Admin console, but not the underlying 
> server, and I'm curious how much space remains available.   This little patch 
> exposes total Volume size as well as the usable space remaining:
> !https://monosnap.com/file/VqlReekCFwpK6utI3lP18fbPqrGI4b.png!
> I'm not sure if this is the best place to put this, as every shard will share 
> the same data, so maybe it should be on the top level Dashboard?  Also not 
> sure what to call the fields! 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7579) Angular admin UI core analysis screen field name/type dropdown issues

2015-05-22 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556309#comment-14556309
 ] 

Erik Hatcher commented on SOLR-7579:


I reverted this bogus misfired commit of junk lying around.   Sorry! 

> Angular admin UI core analysis screen field name/type dropdown issues
> -
>
> Key: SOLR-7579
> URL: https://issues.apache.org/jira/browse/SOLR-7579
> Project: Solr
>  Issue Type: Bug
>  Components: UI
>Reporter: Erik Hatcher
>Assignee: Erik Hatcher
> Fix For: Trunk, 5.2
>
> Attachments: SOLR-7579-schema.json, SOLR-7579.patch, screenshot-1.png
>
>
> field name/type drop-down too narrow and unusable



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-7579) Angular admin UI core analysis screen field name/type dropdown issues

2015-05-22 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher resolved SOLR-7579.

   Resolution: Fixed
Fix Version/s: Trunk

Thanks [~upayavira]!  Applied on branch_5x, lucene_solr_5_2, and trunk

> Angular admin UI core analysis screen field name/type dropdown issues
> -
>
> Key: SOLR-7579
> URL: https://issues.apache.org/jira/browse/SOLR-7579
> Project: Solr
>  Issue Type: Bug
>  Components: UI
>Reporter: Erik Hatcher
>Assignee: Erik Hatcher
> Fix For: Trunk, 5.2
>
> Attachments: SOLR-7579-schema.json, SOLR-7579.patch, screenshot-1.png
>
>
> field name/type drop-down too narrow and unusable



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6846) deadlock in UninvertedField#getUninvertedField()

2015-05-22 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556278#comment-14556278
 ] 

Yonik Seeley commented on SOLR-6846:


It's not clear from the description what the problem is.

bq. because cache.wait() is synchronized with the same monitor object.

This is how wait/notify are supposed to work - use the same monitor.  The call 
to wait() will release the monitor while the thread that called it is blocked.
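
A minimal illustration of that pattern (not the Solr code itself): wait()
releases the monitor it was called under, so another thread can enter the
synchronized block and call notifyAll().

{noformat}
public class WaitNotifyExample {
  private final Object cache = new Object();
  private boolean ready = false;

  void consumer() throws InterruptedException {
    synchronized (cache) {
      while (!ready) {   // guard against spurious wakeups
        cache.wait();    // releases the 'cache' monitor while blocked
      }
      // ... use the cached value ...
    }
  }

  void producer() {
    synchronized (cache) {  // acquirable because waiters released it
      ready = true;
      cache.notifyAll();
    }
  }
}
{noformat}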

> deadlock in UninvertedField#getUninvertedField()
> 
>
> Key: SOLR-6846
> URL: https://issues.apache.org/jira/browse/SOLR-6846
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.10.2
>Reporter: Avishai Ish-Shalom
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6846.patch
>
>
> Multiple concurrent calls to UninvertedField#getUninvertedField may deadlock: 
> if a call gets to {{cache.wait()}} before another thread gets to the 
> synchronized block around {{cache.notifyAll()}} code will deadlock because 
> {{cache.wait()}} is synchronized with the same monitor object.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7579) Angular admin UI core analysis screen field name/type dropdown issues

2015-05-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556259#comment-14556259
 ] 

ASF subversion and git services commented on SOLR-7579:
---

Commit 1681136 from [~ehatcher] in branch 'dev/branches/lucene_solr_5_2'
[ https://svn.apache.org/r1681136 ]

SOLR-7579: Fix Angular admin UI analysis screen drop-down issue

> Angular admin UI core analysis screen field name/type dropdown issues
> -
>
> Key: SOLR-7579
> URL: https://issues.apache.org/jira/browse/SOLR-7579
> Project: Solr
>  Issue Type: Bug
>  Components: UI
>Reporter: Erik Hatcher
>Assignee: Erik Hatcher
> Fix For: 5.2
>
> Attachments: SOLR-7579-schema.json, SOLR-7579.patch, screenshot-1.png
>
>
> field name/type drop-down too narrow and unusable



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7579) Angular admin UI core analysis screen field name/type dropdown issues

2015-05-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556258#comment-14556258
 ] 

ASF subversion and git services commented on SOLR-7579:
---

Commit 1681135 from [~ehatcher] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1681135 ]

SOLR-7579: Fix Angular admin UI analysis screen drop-down issue

> Angular admin UI core analysis screen field name/type dropdown issues
> -
>
> Key: SOLR-7579
> URL: https://issues.apache.org/jira/browse/SOLR-7579
> Project: Solr
>  Issue Type: Bug
>  Components: UI
>Reporter: Erik Hatcher
>Assignee: Erik Hatcher
> Fix For: 5.2
>
> Attachments: SOLR-7579-schema.json, SOLR-7579.patch, screenshot-1.png
>
>
> field name/type drop-down too narrow and unusable



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6496) Updatable OrdinalMap

2015-05-22 Thread Martijn van Groningen (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn van Groningen updated LUCENE-6496:
--
Attachment: LUCENE-6496.patch

After I chatted with Robert, I removed the ImmutableOrdinalMap impl and just 
let MultiDocValues.OrdinalMap implement the OrdinalMap interface. 

Also I moved the UpdatableOrdinalMap to the sandbox module, so the updatable 
impl can be ironed out. For example, the updatable ordinal stuff may also be 
implemented as a DirectoryReader impl.

> Updatable OrdinalMap 
> -
>
> Key: LUCENE-6496
> URL: https://issues.apache.org/jira/browse/LUCENE-6496
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Martijn van Groningen
>Priority: Minor
> Attachments: LUCENE-6496.patch, LUCENE-6496.patch
>
>
> The MultiDocValues.OrdinalMap that we have today requires a rebuild on 
> each reopen. When the OrdinalMap has been built, lookups are fast and the 
> logic is simple. Many times rebuilding the OrdinalMap isn't even an issue, 
> because for low to medium cardinality fields the rebuilding doesn't take that 
> much time. The time required to build the OrdinalMap depends on the number of 
> unique terms in a field.
> For high cardinality fields (let's say >= 1M terms) rebuilding the OrdinalMap 
> can take some time to complete. This can then impact the NRT aspect of many 
> applications (facets may rely on ordinal maps being rebuilt before a new 
> search can happen after the reopen).
> I'd like to explore a different OrdinalMap implementation that doesn't need to 
> be rebuilt on each reopen. There are simple improvements that can be made:
> * Let's say docs have only been marked as deleted; then we basically reuse the 
> OrdinalMap that has already been built. 
> * If no new terms have been introduced, we can just add segment-ordinal to 
> global-ordinal lookups to the OrdinalMap that has already been built.
> I think a complete OrdinalMap rebuild is inevitable, but it would be great if 
> we could rebuild on a flush / merge instead of on each reopen.
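
For context, a rough sketch of the per-reopen rebuild being described, assuming
the 5.x MultiDocValues API and a hypothetical "category" field (the cost of
build() grows with the number of unique terms across segments):

{noformat}
import java.io.IOException;

import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.DocValues;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.MultiDocValues;
import org.apache.lucene.index.SortedDocValues;
import org.apache.lucene.util.packed.PackedInts;

public class OrdinalMapRebuild {
  static MultiDocValues.OrdinalMap rebuild(DirectoryReader reader) throws IOException {
    SortedDocValues[] perLeaf = new SortedDocValues[reader.leaves().size()];
    for (LeafReaderContext ctx : reader.leaves()) {
      SortedDocValues dv = ctx.reader().getSortedDocValues("category");
      perLeaf[ctx.ord] = dv == null ? DocValues.emptySorted() : dv;
    }
    // Rebuilt from scratch on every reader reopen today.
    return MultiDocValues.OrdinalMap.build(reader.getCoreCacheKey(), perLeaf, PackedInts.DEFAULT);
  }
}
{noformat}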



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


