[JENKINS] Lucene-Solr-4.x-Linux (64bit/jdk1.6.0_45) - Build # 5488 - Still Failing!

2013-05-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/5488/
Java: 64bit/jdk1.6.0_45 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
REGRESSION:  org.apache.solr.cloud.ChaosMonkeyShardSplitTest.testDistribSearch

Error Message:
Wrong doc count on shard1_1 expected:<335> but was:<334>

Stack Trace:
java.lang.AssertionError: Wrong doc count on shard1_1 expected:<335> but was:<334>
    at __randomizedtesting.SeedInfo.seed([CC2958DE1FAD9484:4DCFD6C668F2F4B8]:0)
    at org.junit.Assert.fail(Assert.java:93)
    at org.junit.Assert.failNotEquals(Assert.java:647)
    at org.junit.Assert.assertEquals(Assert.java:128)
    at org.junit.Assert.assertEquals(Assert.java:472)
    at org.apache.solr.cloud.ShardSplitTest.checkDocCountsAndShardStates(ShardSplitTest.java:160)
    at org.apache.solr.cloud.ChaosMonkeyShardSplitTest.doTest(ChaosMonkeyShardSplitTest.java:155)
    at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:815)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
    at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
 

[jira] [Updated] (SOLR-4791) solr.xml sharedLib does not work in 4.3.0

2013-05-08 Thread JIRA

 [ https://issues.apache.org/jira/browse/SOLR-4791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jan Høydahl updated SOLR-4791:
--

Component/s: multicore

> solr.xml sharedLib does not work in 4.3.0
> -
>
> Key: SOLR-4791
> URL: https://issues.apache.org/jira/browse/SOLR-4791
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 4.3
>Reporter: Jan Høydahl
>Assignee: Erick Erickson
> Attachments: SOLR-4791.patch, SOLR-4791.patch
>
>
> The sharedLib attribute on the {{solr}} tag in solr.xml stopped working in 4.3.
> Using old-style solr.xml with sharedLib defined on the solr tag, Solr does not
> load any of the shared libs. Simply swapping out solr.war with the 4.2.1 one
> brings sharedLib loading back.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4791) solr.xml sharedLib does not work in 4.3.0

2013-05-08 Thread JIRA

 [ https://issues.apache.org/jira/browse/SOLR-4791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jan Høydahl updated SOLR-4791:
--

Fix Version/s: 4.4
   5.0

> solr.xml sharedLib does not work in 4.3.0
> -
>
> Key: SOLR-4791
> URL: https://issues.apache.org/jira/browse/SOLR-4791
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 4.3
>Reporter: Jan Høydahl
>Assignee: Erick Erickson
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-4791.patch, SOLR-4791.patch
>
>
> The sharedLib attribute on the {{solr}} tag in solr.xml stopped working in 4.3.
> Using old-style solr.xml with sharedLib defined on the solr tag, Solr does not
> load any of the shared libs. Simply swapping out solr.war with the 4.2.1 one
> brings sharedLib loading back.




[jira] [Updated] (SOLR-4791) solr.xml sharedLib does not work in 4.3.0

2013-05-08 Thread JIRA

 [ https://issues.apache.org/jira/browse/SOLR-4791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jan Høydahl updated SOLR-4791:
--

Fix Version/s: 4.3.1

> solr.xml sharedLib does not work in 4.3.0
> -
>
> Key: SOLR-4791
> URL: https://issues.apache.org/jira/browse/SOLR-4791
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 4.3
>Reporter: Jan Høydahl
>Assignee: Erick Erickson
> Fix For: 5.0, 4.4, 4.3.1
>
> Attachments: SOLR-4791.patch, SOLR-4791.patch
>
>
> The sharedLib attribute on the {{solr}} tag in solr.xml stopped working in 4.3.
> Using old-style solr.xml with sharedLib defined on the solr tag, Solr does not
> load any of the shared libs. Simply swapping out solr.war with the 4.2.1 one
> brings sharedLib loading back.




Re: VOTE: solr no longer webapp

2013-05-08 Thread Federico Méndez
+1...because the best way to manage system resources (memory, CPU,
etc.) in an environment that involves Solr is to have an isolated Solr
server that doesn't compete for resources with other web
applications; this is especially the case with large Solr indexes. Of course,
you can achieve that by installing a servlet container for Solr (only!), but
that's not very different from running Solr as a separate service.


On Tue, May 7, 2013 at 4:48 PM, Shawn Heisey  wrote:

> On 5/7/2013 8:02 AM, Ariel Zerbib wrote:
> > It also means that it will be necessary to run 2 big JVMs on the same
> > server or to have 2 servers. You need to take it in account; do you
> > really want to force to run solr standalone as a database?
>
> We are always telling people that Solr works best when it is the only
> thing that runs on the server.  Solr loves resources, and if something
> else is using them, it won't perform as well.
>
> Typically the amount of heap memory required for your servlet container
> and the JVM is nothing compared to Solr itself, so if you can remove
> Solr from your web server container, you can reduce the size of that
> JVM.  That will free up system memory for use with Solr.  To be
> perfectly frank here, if your memory is so tight that this will be a
> problem, then your server is undersized and performance problems with
> Solr are inevitable.
>
> Thanks,
> Shawn
>
>
>
>


[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_21) - Build # 5550 - Failure!

2013-05-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/5550/
Java: 32bit/jdk1.7.0_21 -client -XX:+UseG1GC

1 tests failed.
REGRESSION:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch

Error Message:
There are still nodes recoverying - waited for 230 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 230 seconds
    at __randomizedtesting.SeedInfo.seed([4EC15F3E2AF168FC:CF27D1265DAE08C0]:0)
    at org.junit.Assert.fail(Assert.java:93)
    at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:173)
    at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:131)
    at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:126)
    at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:512)
    at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:146)
    at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:815)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
    at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)

Re: VOTE: solr no longer webapp

2013-05-08 Thread Toke Eskildsen
On Tue, 2013-05-07 at 16:02 +0200, Ariel Zerbib wrote:
> But it's only a point of view of an user.

You need to remove the "But" and the "only" from that sentence.

If we are talking about Solr in large-scale deployment (whether we
measure by load, latency requirements, index size or machine counts), it
makes a lot of sense to shun the webapp solutions and treat Solr as
something you install explicitly and something that is prioritized very
high in terms of resources. It could even require root if that would
provide any advantages.

At the other end of the scale, Solr is also used for less intensive
tasks. It might take the role of secondary functionality for another
webapp (e.g. search for persons in an email application), or the corpus
or user base may be so small that performance is simply not a problem. For
these situations, the "run it as a WAR together with 5 other WARs" approach
is sane enough and might be preferable.

What I see in this thread is a lot of focus on the large-scale view and
little on the small-scale. That might be the right approach, but I really
have no clue as to how many Solr installations fall into either camp.

It is natural that developers see Solr as the most important component,
and that makes us somewhat blind to the situations where Solr is just
another cog in complex machinery. As the choice of how Solr is
deployed is highly relevant for users and maintenance guys, hearing
their point of view is important.

- Toke Eskildsen, State and University Library, Denmark





[jira] [Updated] (LUCENE-4975) Add Replication module to Lucene

2013-05-08 Thread Shai Erera (JIRA)

 [ https://issues.apache.org/jira/browse/LUCENE-4975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shai Erera updated LUCENE-4975:
---

Attachment: LUCENE-4975.patch

Patch fixes a bug in IndexReplicationHandler (still need to fix it in
IndexAndTaxonomy) and adds some nocommits which I want to take care of before
I commit it.

However, I hit a new test failure, which reproduces with the following command 
{{ant test -Dtestcase=IndexReplicationClientTest 
-Dtests.method=testConsistencyOnExceptions 
-Dtests.seed=EAF5294292642F1:6EE70BB59A9FC3CA}}.

The error is weird. I ran the test w/ -Dtests.verbose=true and here are the
troubling parts from the log:

{noformat}
ReplicationThread-index: MockDirectoryWrapper: now throw random exception 
during open file=segments_a
java.lang.Throwable
    at org.apache.lucene.store.MockDirectoryWrapper.maybeThrowIOExceptionOnOpen(MockDirectoryWrapper.java:364)
    at org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:522)
    at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:281)
    at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:340)
    at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:668)
    at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:515)
    at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:343)
    at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:682)
    at org.apache.lucene.replicator.IndexReplicationHandler.revisionReady(IndexReplicationHandler.java:208)
    at org.apache.lucene.replicator.ReplicationClient.doUpdate(ReplicationClient.java:248)
    at org.apache.lucene.replicator.ReplicationClient.access$1(ReplicationClient.java:188)
    at org.apache.lucene.replicator.ReplicationClient$ReplicationThread.run(ReplicationClient.java:76)
IFD 0 [Wed May 08 22:47:46 WST 2013; ReplicationThread-index]: init: current 
segments file is "segments_9"; 
deletionPolicy=org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy@117da39a
IFD 0 [Wed May 08 22:47:46 WST 2013; ReplicationThread-index]: init: load 
commit "segments_9"
IFD 0 [Wed May 08 22:47:46 WST 2013; ReplicationThread-index]: init: load 
commit "segments_a"
IFD 0 [Wed May 08 22:47:46 WST 2013; ReplicationThread-index]: now checkpoint 
"_0(5.0):C1 _1(5.0):C1 _2(5.0):c1 _3(5.0):c1 _4(5.0):c1 _5(5.0):c1 _6(5.0):c1 
_7(5.0):c1 _8(5.0):c1" [9 segments ; isCommit = false]
IFD 0 [Wed May 08 22:47:46 WST 2013; ReplicationThread-index]: 0 msec to 
checkpoint
IFD 0 [Wed May 08 22:47:46 WST 2013; ReplicationThread-index]: deleteCommits: 
now decRef commit "segments_9"
IFD 0 [Wed May 08 22:47:46 WST 2013; ReplicationThread-index]: delete 
"segments_9"
IW 0 [Wed May 08 22:47:46 WST 2013; ReplicationThread-index]: init: create=false



IW 0 [Wed May 08 22:47:46 WST 2013; ReplicationThread-index]: startCommit(): 
start
IW 0 [Wed May 08 22:47:46 WST 2013; ReplicationThread-index]: startCommit 
index=_0(5.0):C1 _1(5.0):C1 _2(5.0):c1 _3(5.0):c1 _4(5.0):c1 _5(5.0):c1 
_6(5.0):c1 _7(5.0):c1 _8(5.0):c1 changeCount=1
IW 0 [Wed May 08 22:47:46 WST 2013; ReplicationThread-index]: done all syncs: 
[_2.si, _7.si, _5.cfs, _1.fnm, _4.cfs, _8.si, _4.cfe, _5.cfe, _0.si, _0.fnm, 
_6.cfe, _8.cfs, _3.cfs, _4.si, _7.cfe, _2.cfs, _5.si, _6.cfs, _1.fdx, _8.cfe, 
_1.fdt, _1.si, _7.cfs, _0.fdx, _3.si, _6.si, _3.cfe, _2.cfe, _0.fdt]
IW 0 [Wed May 08 22:47:46 WST 2013; ReplicationThread-index]: commit: 
pendingCommit != null
IW 0 [Wed May 08 22:47:46 WST 2013; ReplicationThread-index]: commit: wrote 
segments file "segments_a"
IFD 0 [Wed May 08 22:47:46 WST 2013; ReplicationThread-index]: now checkpoint 
"_0(5.0):C1 _1(5.0):C1 _2(5.0):c1 _3(5.0):c1 _4(5.0):c1 _5(5.0):c1 _6(5.0):c1 
_7(5.0):c1 _8(5.0):c1" [9 segments ; isCommit = true]
IFD 0 [Wed May 08 22:47:46 WST 2013; ReplicationThread-index]: deleteCommits: 
now decRef commit "segments_a"
IFD 0 [Wed May 08 22:47:46 WST 2013; ReplicationThread-index]: delete "_9.cfe"
IFD 0 [Wed May 08 22:47:46 WST 2013; ReplicationThread-index]: delete "_9.cfs"
IFD 0 [Wed May 08 22:47:46 WST 2013; ReplicationThread-index]: delete "_9.si"
IFD 0 [Wed May 08 22:47:46 WST 2013; ReplicationThread-index]: 0 msec to 
checkpoint
IW 0 [Wed May 08 22:47:46 WST 2013; ReplicationThread-index]: commit: done
IW 0 [Wed May 08 22:47:46 WST 2013; ReplicationThread-index]: at close: 
_0(5.0):C1 _1(5.0):C1 _2(5.0):c1 _3(5.0):c1 _4(5.0):c1 _5(5.0):c1 _6(5.0):c1 
_7(5.0):c1 _8(5.0):c1
IndexReplicationHandler 0 [Wed May 08 22:47:46 WST 2013; 
ReplicationThread-index]: updateHandlerState(): currentVersion=a 
currentRevisionFiles={index=[Lorg.apache.lucene.replicator.RevisionFile;@9bc2e26e}
IndexReplicationHandler 0 [Wed May 08 22:47:46 WST 2013; 
ReplicationThread-index]: {version=9}
{noformat}

I debug traced it and here's what I think is happening:

* MDW throws FNFE for segmen

[JENKINS] Lucene-Solr-SmokeRelease-4.x - Build # 71 - Still Failing

2013-05-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-4.x/71/

No tests ran.

Build Log:
[...truncated 33010 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/lucene/build/fakeRelease
 [copy] Copying 401 files to 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/lucene/build/fakeRelease/lucene
 [copy] Copying 194 files to 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/lucene/build/fakeRelease/solr
 [exec] JAVA6_HOME is /home/hudson/tools/java/latest1.6
 [exec] JAVA7_HOME is /home/hudson/tools/java/latest1.7
 [exec] NOTE: output encoding is US-ASCII
 [exec] 
 [exec] Load release URL 
"file:/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/lucene/build/fakeRelease/"...
 [exec] 
 [exec] Test Lucene...
 [exec]   test basics...
 [exec]   get KEYS
 [exec] 0.1 MB in 0.01 sec (17.5 MB/sec)
 [exec]   check changes HTML...
 [exec]   download lucene-4.4.0-src.tgz...
 [exec] 26.8 MB in 0.04 sec (654.3 MB/sec)
 [exec] verify md5/sha1 digests
 [exec]   download lucene-4.4.0.tgz...
 [exec] 48.5 MB in 0.09 sec (536.6 MB/sec)
 [exec] verify md5/sha1 digests
 [exec]   download lucene-4.4.0.zip...
 [exec] 58.3 MB in 0.14 sec (414.8 MB/sec)
 [exec] verify md5/sha1 digests
 [exec]   unpack lucene-4.4.0.tgz...
 [exec] verify JAR/WAR metadata...
 [exec] test demo with 1.6...
 [exec]   got 5572 hits for query "lucene"
 [exec] test demo with 1.7...
 [exec]   got 5572 hits for query "lucene"
 [exec] check Lucene's javadoc JAR
 [exec]   unpack lucene-4.4.0.zip...
 [exec] verify JAR/WAR metadata...
 [exec] test demo with 1.6...
 [exec]   got 5572 hits for query "lucene"
 [exec] test demo with 1.7...
 [exec]   got 5572 hits for query "lucene"
 [exec] check Lucene's javadoc JAR
 [exec]   unpack lucene-4.4.0-src.tgz...
 [exec] make sure no JARs/WARs in src dist...
 [exec] run "ant validate"
 [exec] run tests w/ Java 6...
 [exec] test demo with 1.6...
 [exec]   got 223 hits for query "lucene"
 [exec] generate javadocs w/ Java 6...
 [exec] run tests w/ Java 7...
 [exec] test demo with 1.7...
 [exec]   got 223 hits for query "lucene"
 [exec] generate javadocs w/ Java 7...
 [exec] 
 [exec] Crawl/parse...
 [exec] 
 [exec]   
file:///usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/lucene/build/fakeReleaseTmp/unpack/lucene-4.4.0/build/docs/analyzers-common/org/apache/lucene/analysis/util/FilteringTokenFilter.html
 [exec] WARNING: anchor "version" appears more than once
 [exec] 
 [exec] Verify...
 [exec] 
 [exec] Test Solr...
 [exec]   test basics...
 [exec]   get KEYS
 [exec] 0.1 MB in 0.05 sec (3.1 MB/sec)
 [exec]   check changes HTML...
 [exec]   download solr-4.4.0-src.tgz...
 [exec] 30.5 MB in 0.39 sec (77.9 MB/sec)
 [exec] verify md5/sha1 digests
 [exec]   download solr-4.4.0.tgz...
 [exec] 112.4 MB in 0.44 sec (254.7 MB/sec)
 [exec] verify md5/sha1 digests
 [exec]   download solr-4.4.0.zip...
 [exec] 116.9 MB in 0.61 sec (190.5 MB/sec)
 [exec] verify md5/sha1 digests
 [exec]   unpack solr-4.4.0.tgz...
 [exec] verify JAR/WAR metadata...
 [exec]   **WARNING**: skipping check of 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/lucene/build/fakeReleaseTmp/unpack/solr-4.4.0/contrib/dataimporthandler/lib/activation-1.1.jar:
 it has javax.* classes
 [exec]   **WARNING**: skipping check of 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/lucene/build/fakeReleaseTmp/unpack/solr-4.4.0/contrib/dataimporthandler/lib/mail-1.4.1.jar:
 it has javax.* classes
 [exec] make sure WAR file has no javax.* or java.* classes...
 [exec] copying unpacked distribution for Java 6 ...
 [exec] test solr example w/ Java 6...
 [exec]   start Solr instance 
(log=/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/lucene/build/fakeReleaseTmp/unpack/solr-4.4.0-java6/solr-example.log)...
 [exec]   startup done
 [exec]   test utf8...
 [exec] 
 [exec] command "sh ./exampledocs/test_utf8.sh" failed:
 [exec] ERROR: Could not curl to Solr - is curl installed? Is Solr not 
running?
 [exec] 
 [exec] 
 [exec]   stop server (SIGINT)...
 [exec] Traceback (most recent call last):
 [exec]   File 
"/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/dev-tools/scripts/smokeTestRelease.py",
 line 1423, in 
 [exec] main()
 [exec]   File 
"/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/de

[jira] [Commented] (LUCENE-4975) Add Replication module to Lucene

2013-05-08 Thread Shai Erera (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-4975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651762#comment-13651762 ]

Shai Erera commented on LUCENE-4975:


Ok some more insights ... I think this additional chain of events occurs:

* IW's ctor reads gen 9, and SegmentInfos.getSegmentsFileName returns 
segments_9.
* IFD then successfully reads both segments_9 and segments_a, ending up w/ two 
commit points.
* IFD sorts them and passes to IndexDeletionPolicy (KeepLastCommit) which 
deletes segments_9 and keeps segments_a
* IFD marks that startingCommitDeleted, as it is

And here's what I still don't understand -- for some reason, IW creates a new 
commit point, segments_a, with the commitData from segments_9. Still need to 
dig into that.

In the meanwhile, I made the above change to the handler (to rollback(), not
close()), and 430 iterations passed. Not sure if that's the right way to go ...
if IW.close() didn't commit ... ;)
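For readers following along, the keep-only-last behavior described above can be sketched with a stdlib-only toy. This is a model of the sequence in the log (IFD finds commits segments_9 and segments_a, sorts them, and the keep-only-last policy deletes all but the newest), not the real Lucene IndexDeletionPolicy API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Toy model (NOT the Lucene API) of the situation above: the deletion
// policy sees two commit points and deletes every commit except the
// one with the highest generation -- which is why segments_9 goes away
// even though IW started from gen 9.
public class KeepLastCommitToy {
    // Returns the commit generations a keep-only-last policy would delete.
    static List<Long> commitsToDelete(List<Long> generations) {
        List<Long> sorted = new ArrayList<Long>(generations);
        Collections.sort(sorted);
        // everything except the highest generation is deleted
        return sorted.subList(0, sorted.size() - 1);
    }

    public static void main(String[] args) {
        // gen 9 = segments_9, gen 10 = segments_a ('a' is gen 10 in base 36)
        List<Long> deleted = commitsToDelete(Arrays.asList(9L, 10L));
        System.out.println("deleted generations: " + deleted); // [9]
    }
}
```

The point of the toy is only that with two live commit points, the older one is always deleted; whether the handler should expose that older commit at all is the open question above.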

> Add Replication module to Lucene
> 
>
> Key: LUCENE-4975
> URL: https://issues.apache.org/jira/browse/LUCENE-4975
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-4975.patch, LUCENE-4975.patch, LUCENE-4975.patch, 
> LUCENE-4975.patch, LUCENE-4975.patch, LUCENE-4975.patch
>
>
> I wrote a replication module which I think will be useful to Lucene users who 
> want to replicate their indexes for e.g high-availability, taking hot backups 
> etc.
> I will upload a patch soon where I'll describe in general how it works.




[jira] [Created] (SOLR-4800) java.lang.IllegalArgumentException: No enum const class org.apache.lucene.util.Version.LUCENE_43

2013-05-08 Thread Roald Hopman (JIRA)
Roald Hopman created SOLR-4800:
--

 Summary: java.lang.IllegalArgumentException: No enum const class org.apache.lucene.util.Version.LUCENE_43
 Key: SOLR-4800
 URL: https://issues.apache.org/jira/browse/SOLR-4800
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.3
 Environment: Solr 4.3.0 on Ubuntu 12.04.2 LTS

Reporter: Roald Hopman


java.lang.IllegalArgumentException: No enum const class org.apache.lucene.util.Version.LUCENE_43

solr-4.3.0/example/solr/collection1/conf/solrconfig.xml has {{LUCENE_43}} as its
{{luceneMatchVersion}} value.

Which causes: 

SolrCore Initialization Failures

collection1: org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: Could not load config for solrconfig.xml

From catalina.out:

SEVERE: Unable to create core: collection1
org.apache.solr.common.SolrException: Could not load config for solrconfig.xml
at 
org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:991)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1051)
at org.apache.solr.core.CoreContainer$3.call(CoreContainer.java:634)
at org.apache.solr.core.CoreContainer$3.call(CoreContainer.java:629)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:679)
Caused by: org.apache.solr.common.SolrException: Invalid luceneMatchVersion 
'LUCENE_43', valid values are: [LUCENE_30, LUCENE_31, LUCENE_32, LUCENE_33, 
LUCENE_34, LUCENE_35, LUCENE_36, LUCENE_40, LUCENE_41, LUCENE_42, 
LUCENE_CURRENT] or a string in format 'V.V'
at org.apache.solr.core.Config.parseLuceneVersionString(Config.java:313)
at org.apache.solr.core.Config.getLuceneVersion(Config.java:298)
at org.apache.solr.core.SolrConfig.(SolrConfig.java:119)
at 
org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:989)
... 11 more
Caused by: java.lang.IllegalArgumentException: No enum const class 
org.apache.lucene.util.Version.LUCENE_43
at java.lang.Enum.valueOf(Enum.java:214)
at org.apache.lucene.util.Version.valueOf(Version.java:34)
at org.apache.lucene.util.Version.parseLeniently(Version.java:133)
at org.apache.solr.core.Config.parseLuceneVersionString(Config.java:311)
... 14 more
May 7, 2013 9:10:00 PM org.apache.solr.common.SolrException log
SEVERE: null:org.apache.solr.common.SolrException: Unable to create core: 
collection1
at 
org.apache.solr.core.CoreContainer.recordAndThrow(CoreContainer.java:1672)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1057)
at org.apache.solr.core.CoreContainer$3.call(CoreContainer.java:634)
at org.apache.solr.core.CoreContainer$3.call(CoreContainer.java:629)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:679)
Caused by: org.apache.solr.common.SolrException: Could not load config for 
solrconfig.xml
at 
org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:991)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1051)
... 10 more
Caused by: org.apache.solr.common.SolrException: Invalid luceneMatchVersion 
'LUCENE_43', valid values are: [LUCENE_30, LUCENE_31, LUCENE_32, LUCENE_33, 
LUCENE_34, LUCENE_35, LUCENE_36, LUCENE_40, LUCENE_41, LUCENE_42, 
LUCENE_CURRENT] or a string in format 'V.V'
at org.apache.solr.core.Config.parseLuceneVersionString(Config.java:313)
at org.apache.solr.core.Config.getLuceneVersion(Config.java:298)
at org.apache.solr.core.SolrConfig.<init>(SolrConfig.java:119)
at 
org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:989)
... 11 more
Caused by: java.lang.IllegalArgumentException: No enum const class 
org.apache.lucene.util.Version.LUCENE_43
at java.lang.Enum.valueOf(Enum.java:214)
at or

[jira] [Commented] (LUCENE-4975) Add Replication module to Lucene

2013-05-08 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651786#comment-13651786
 ] 

Robert Muir commented on LUCENE-4975:
-

I'm trying really hard not to say anything sarcastic here :)

> Add Replication module to Lucene
> 
>
> Key: LUCENE-4975
> URL: https://issues.apache.org/jira/browse/LUCENE-4975
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-4975.patch, LUCENE-4975.patch, LUCENE-4975.patch, 
> LUCENE-4975.patch, LUCENE-4975.patch, LUCENE-4975.patch
>
>
> I wrote a replication module which I think will be useful to Lucene users who 
> want to replicate their indexes, e.g. for high availability, taking hot backups, 
> etc.
> I will upload a patch soon where I'll describe in general how it works.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 448 - Still Failing!

2013-05-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/448/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch

Error Message:
collection already exists: awholynewcollection_0

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: 
collection already exists: awholynewcollection_0
at 
__randomizedtesting.SeedInfo.seed([40C35E1A4590BB04:C125D00232CFDB38]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:424)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:264)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:306)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1400)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:438)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:146)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:815)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAsserti

[jira] [Updated] (SOLR-4791) solr.xml sharedLib does not work in 4.3.0

2013-05-08 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-4791:
-

Attachment: SOLR-4791.patch

Ryan:

Looks great! I'll check it in today on trunk and 4x.

I took out a couple of unused imports I noticed that had probably been hanging 
around since forever, but otherwise I didn't change anything.

> solr.xml sharedLib does not work in 4.3.0
> -
>
> Key: SOLR-4791
> URL: https://issues.apache.org/jira/browse/SOLR-4791
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 4.3
>Reporter: Jan Høydahl
>Assignee: Erick Erickson
> Fix For: 5.0, 4.4, 4.3.1
>
> Attachments: SOLR-4791.patch, SOLR-4791.patch, SOLR-4791.patch
>
>
> The sharedLib attribute on the {{solr}} tag in solr.xml stopped working in 4.3.
> Using old-style solr.xml with sharedLib defined on the solr tag, Solr does not 
> load any of the shared libraries. Simply swapping out solr.war with the 4.2.1 
> one brings sharedLib loading back.




[jira] [Resolved] (SOLR-4800) java.lang.IllegalArgumentException: No enum const class org.apache.lucene.util.Version.LUCENE_43

2013-05-08 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-4800.
--

Resolution: Invalid

Works fine for me, and it's highly unlikely that this would get through the 
smoke/pre-release tests. I suspect you have a classpath issue whereby you're 
using a 4.3 solrconfig.xml with a pre-4.3 set of jars.

We can re-open if necessary, but please ask on the user list before re-opening.



> java.lang.IllegalArgumentException: No enum const class 
> org.apache.lucene.util.Version.LUCENE_43
> 
>
> Key: SOLR-4800
> URL: https://issues.apache.org/jira/browse/SOLR-4800
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.3
> Environment: Solr 4.3.0 on Ubuntu 12.04.2 LTS
>Reporter: Roald Hopman
>
> java.lang.IllegalArgumentException: No enum const class 
> org.apache.lucene.util.Version.LUCENE_43
> solr-4.3.0/example/solr/collection1/conf/solrconfig.xml has 
> <luceneMatchVersion>LUCENE_43</luceneMatchVersion>
> Which causes: 
> SolrCore Initialization Failures
> collection1: 
> org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
> Could not load config for solrconfig.xml 
> From catalina.out :
> SEVERE: Unable to create core: collection1
> org.apache.solr.common.SolrException: Could not load config for solrconfig.xml
>   at 
> org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:991)
>   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1051)
>   at org.apache.solr.core.CoreContainer$3.call(CoreContainer.java:634)
>   at org.apache.solr.core.CoreContainer$3.call(CoreContainer.java:629)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:679)
> Caused by: org.apache.solr.common.SolrException: Invalid luceneMatchVersion 
> 'LUCENE_43', valid values are: [LUCENE_30, LUCENE_31, LUCENE_32, LUCENE_33, 
> LUCENE_34, LUCENE_35, LUCENE_36, LUCENE_40, LUCENE_41, LUCENE_42, 
> LUCENE_CURRENT] or a string in format 'V.V'
>   at org.apache.solr.core.Config.parseLuceneVersionString(Config.java:313)
>   at org.apache.solr.core.Config.getLuceneVersion(Config.java:298)
>   at org.apache.solr.core.SolrConfig.<init>(SolrConfig.java:119)
>   at 
> org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:989)
>   ... 11 more
> Caused by: java.lang.IllegalArgumentException: No enum const class 
> org.apache.lucene.util.Version.LUCENE_43
>   at java.lang.Enum.valueOf(Enum.java:214)
>   at org.apache.lucene.util.Version.valueOf(Version.java:34)
>   at org.apache.lucene.util.Version.parseLeniently(Version.java:133)
>   at org.apache.solr.core.Config.parseLuceneVersionString(Config.java:311)
>   ... 14 more
> May 7, 2013 9:10:00 PM org.apache.solr.common.SolrException log
> SEVERE: null:org.apache.solr.common.SolrException: Unable to create core: 
> collection1
>   at 
> org.apache.solr.core.CoreContainer.recordAndThrow(CoreContainer.java:1672)
>   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1057)
>   at org.apache.solr.core.CoreContainer$3.call(CoreContainer.java:634)
>   at org.apache.solr.core.CoreContainer$3.call(CoreContainer.java:629)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:679)
> Caused by: org.apache.solr.common.SolrException: Could not load config for 
> solrconfig.xml
>   at 
> org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:991)
>   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1051)
>   ... 10 more
> Caused by: org.apache.solr.common.SolrException: Invalid luceneMatchVersion 
> 'LUCENE_43', valid values are: [LUCENE_30, LUCENE_31, LUCENE_32, LUC

[jira] [Commented] (SOLR-4800) java.lang.IllegalArgumentException: No enum const class org.apache.lucene.util.Version.LUCENE_43

2013-05-08 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651798#comment-13651798
 ] 

Robert Muir commented on SOLR-4800:
---

This seems to be a common theme with tomcat users. Recommend ... not using 
tomcat :)

> java.lang.IllegalArgumentException: No enum const class 
> org.apache.lucene.util.Version.LUCENE_43
> 
>
> Key: SOLR-4800
> URL: https://issues.apache.org/jira/browse/SOLR-4800
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.3
> Environment: Solr 4.3.0 on Ubuntu 12.04.2 LTS
>Reporter: Roald Hopman
>
> java.lang.IllegalArgumentException: No enum const class 
> org.apache.lucene.util.Version.LUCENE_43
> solr-4.3.0/example/solr/collection1/conf/solrconfig.xml has 
> <luceneMatchVersion>LUCENE_43</luceneMatchVersion>
> Which causes: 
> SolrCore Initialization Failures
> collection1: 
> org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
> Could not load config for solrconfig.xml 
> From catalina.out :
> SEVERE: Unable to create core: collection1
> org.apache.solr.common.SolrException: Could not load config for solrconfig.xml
>   at 
> org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:991)
>   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1051)
>   at org.apache.solr.core.CoreContainer$3.call(CoreContainer.java:634)
>   at org.apache.solr.core.CoreContainer$3.call(CoreContainer.java:629)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:679)
> Caused by: org.apache.solr.common.SolrException: Invalid luceneMatchVersion 
> 'LUCENE_43', valid values are: [LUCENE_30, LUCENE_31, LUCENE_32, LUCENE_33, 
> LUCENE_34, LUCENE_35, LUCENE_36, LUCENE_40, LUCENE_41, LUCENE_42, 
> LUCENE_CURRENT] or a string in format 'V.V'
>   at org.apache.solr.core.Config.parseLuceneVersionString(Config.java:313)
>   at org.apache.solr.core.Config.getLuceneVersion(Config.java:298)
>   at org.apache.solr.core.SolrConfig.<init>(SolrConfig.java:119)
>   at 
> org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:989)
>   ... 11 more
> Caused by: java.lang.IllegalArgumentException: No enum const class 
> org.apache.lucene.util.Version.LUCENE_43
>   at java.lang.Enum.valueOf(Enum.java:214)
>   at org.apache.lucene.util.Version.valueOf(Version.java:34)
>   at org.apache.lucene.util.Version.parseLeniently(Version.java:133)
>   at org.apache.solr.core.Config.parseLuceneVersionString(Config.java:311)
>   ... 14 more
> May 7, 2013 9:10:00 PM org.apache.solr.common.SolrException log
> SEVERE: null:org.apache.solr.common.SolrException: Unable to create core: 
> collection1
>   at 
> org.apache.solr.core.CoreContainer.recordAndThrow(CoreContainer.java:1672)
>   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1057)
>   at org.apache.solr.core.CoreContainer$3.call(CoreContainer.java:634)
>   at org.apache.solr.core.CoreContainer$3.call(CoreContainer.java:629)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:679)
> Caused by: org.apache.solr.common.SolrException: Could not load config for 
> solrconfig.xml
>   at 
> org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:991)
>   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1051)
>   ... 10 more
> Caused by: org.apache.solr.common.SolrException: Invalid luceneMatchVersion 
> 'LUCENE_43', valid values are: [LUCENE_30, LUCENE_31, LUCENE_32, LUCENE_33, 
> LUCENE_34, LUCENE_35, LUCENE_36, LUCENE_40, LUCENE_41, LUCENE_42, 
> LUCENE_CURRENT] or a string in format 'V.V'
>   at org.apache.solr.core.Config.parseLuceneVersionSt

[jira] [Created] (LUCENE-4986) NRT reader doesn't see changes after successful IW.tryDeleteDocument

2013-05-08 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-4986:
--

 Summary: NRT reader doesn't see changes after successful 
IW.tryDeleteDocument
 Key: LUCENE-4986
 URL: https://issues.apache.org/jira/browse/LUCENE-4986
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 5.0, 4.4


Reported by Reg on the java-user list, subject 
"TrackingIndexWriter.tryDeleteDocument(IndexReader, int) vs 
deleteDocuments(Query)":

When IW.tryDeleteDocument succeeds, it marks the document as deleted in the 
pending BitVector in ReadersAndLiveDocs, but then when the NRT reader checks if 
it's still current by calling IW.nrtIsCurrent, we fail to catch changes to the 
BitVector, resulting in the NRT reader thinking it's current and not reopening.
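The reported behavior can be modeled without Lucene: an isCurrent check keyed on a change counter silently misses any mutation that does not bump the counter. Below is a minimal, self-contained Java sketch of that failure mode; ToyWriter/ToyReader and their fields are hypothetical names standing in for Lucene's internals, not actual Lucene code:

```java
import java.util.BitSet;

// Toy writer: deletions are recorded in a pending live-docs bitset, and
// readers decide whether to reopen by comparing change counters.
class ToyWriter {
    private final BitSet deleted = new BitSet();
    private long changeCount = 0;

    void deleteByQuery(int docId) {
        deleted.set(docId);
        changeCount++;               // normal path: counter is bumped
    }

    // Models the buggy path: the doc is marked deleted in the pending
    // bitset, but nothing the reader's currency check looks at changes.
    boolean tryDeleteDocument(int docId) {
        deleted.set(docId);
        return true;                 // success, yet changeCount is unchanged
    }

    long changeCount() { return changeCount; }
}

class ToyReader {
    private final long seenChangeCount;
    ToyReader(ToyWriter w) { this.seenChangeCount = w.changeCount(); }

    // nrtIsCurrent analogue: current iff no counted changes since open.
    boolean isCurrent(ToyWriter w) { return w.changeCount() == seenChangeCount; }
}

public class NrtCurrentDemo {
    public static void main(String[] args) {
        ToyWriter w = new ToyWriter();
        ToyReader r = new ToyReader(w);
        w.tryDeleteDocument(7);
        // Bug reproduced: a successful delete, but the reader still
        // believes it is current and would skip reopening.
        System.out.println(r.isCurrent(w));  // true
    }
}
```

In this model the fix suggested later on the issue (have the reader track the writer's changeCount, and have every successful mutation bump it) amounts to incrementing the counter inside tryDeleteDocument as well.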




[jira] [Updated] (LUCENE-4986) NRT reader doesn't see changes after successful IW.tryDeleteDocument

2013-05-08 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-4986:
---

Attachment: LUCENE-4986.patch

Patch with test case showing the failure, from Reg (I tweaked it a bit to adapt 
to trunk APIs).

I think to fix this, the NRT reader should also (only?) track the changeCount 
that IndexWriter uses.

> NRT reader doesn't see changes after successful IW.tryDeleteDocument
> 
>
> Key: LUCENE-4986
> URL: https://issues.apache.org/jira/browse/LUCENE-4986
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 5.0, 4.4
>
> Attachments: LUCENE-4986.patch
>
>
> Reported by Reg on the java-user list, subject 
> "TrackingIndexWriter.tryDeleteDocument(IndexReader, int) vs 
> deleteDocuments(Query)":
> When IW.tryDeleteDocument succeeds, it marks the document as deleted in the 
> pending BitVector in ReadersAndLiveDocs, but then when the NRT reader checks 
> if it's still current by calling IW.nrtIsCurrent, we fail to catch changes to 
> the BitVector, resulting in the NRT reader thinking it's current and not 
> reopening.




[jira] [Commented] (SOLR-4800) java.lang.IllegalArgumentException: No enum const class org.apache.lucene.util.Version.LUCENE_43

2013-05-08 Thread Roald Hopman (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651815#comment-13651815
 ] 

Roald Hopman commented on SOLR-4800:


I'll ask on the user list but I don't think it's a classpath issue.

If I change LUCENE_43 to LUCENE_42 it works. The admin webpage reports the 
following versions:

solr-spec : 4.2.1.2013.03.26.08.26.55
solr-impl : 4.2.1 1461071 - mark - 2013-03-26 08:26:55
lucene-spec : 4.2.1
lucene-impl : 4.2.1 1461071 - mark - 2013-03-26 08:23:34
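The version report above is the crux: a lucene-core 4.2.1 jar has no LUCENE_43 enum constant, so Version.parseLeniently can only throw. The following self-contained sketch models that lenient lookup; the ToyVersion enum and its regex normalization are illustrative stand-ins, not Lucene's actual implementation:

```java
import java.util.Locale;

// Toy model of org.apache.lucene.util.Version as shipped in a 4.2.1 jar:
// the newest constant it knows about is LUCENE_42.
enum ToyVersion {
    LUCENE_40, LUCENE_41, LUCENE_42, LUCENE_CURRENT;

    // Roughly what a lenient parse does: accept "4.2"-style or
    // "LUCENE_42"-style strings, normalize, then delegate to Enum.valueOf.
    static ToyVersion parseLeniently(String version) {
        String v = version.toUpperCase(Locale.ROOT)
                          .replaceFirst("^(\\d+)\\.(\\d+)$", "LUCENE_$1$2");
        return valueOf(v);  // throws IllegalArgumentException for LUCENE_43
    }
}

public class VersionParseDemo {
    public static void main(String[] args) {
        System.out.println(ToyVersion.parseLeniently("4.2"));  // LUCENE_42
        try {
            ToyVersion.parseLeniently("LUCENE_43");
        } catch (IllegalArgumentException e) {
            // Same failure mode as the stack trace: old jar, newer config value.
            System.out.println("rejected: LUCENE_43");
        }
    }
}
```

This is why swapping LUCENE_43 for LUCENE_42 in solrconfig.xml "works": it is the config matching the old jars again, not the 4.3 war being loaded.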


> java.lang.IllegalArgumentException: No enum const class 
> org.apache.lucene.util.Version.LUCENE_43
> 
>
> Key: SOLR-4800
> URL: https://issues.apache.org/jira/browse/SOLR-4800
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.3
> Environment: Solr 4.3.0 on Ubuntu 12.04.2 LTS
>Reporter: Roald Hopman
>
> java.lang.IllegalArgumentException: No enum const class 
> org.apache.lucene.util.Version.LUCENE_43
> solr-4.3.0/example/solr/collection1/conf/solrconfig.xml has 
> <luceneMatchVersion>LUCENE_43</luceneMatchVersion>
> Which causes: 
> SolrCore Initialization Failures
> collection1: 
> org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
> Could not load config for solrconfig.xml 
> From catalina.out :
> SEVERE: Unable to create core: collection1
> org.apache.solr.common.SolrException: Could not load config for solrconfig.xml
>   at 
> org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:991)
>   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1051)
>   at org.apache.solr.core.CoreContainer$3.call(CoreContainer.java:634)
>   at org.apache.solr.core.CoreContainer$3.call(CoreContainer.java:629)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:679)
> Caused by: org.apache.solr.common.SolrException: Invalid luceneMatchVersion 
> 'LUCENE_43', valid values are: [LUCENE_30, LUCENE_31, LUCENE_32, LUCENE_33, 
> LUCENE_34, LUCENE_35, LUCENE_36, LUCENE_40, LUCENE_41, LUCENE_42, 
> LUCENE_CURRENT] or a string in format 'V.V'
>   at org.apache.solr.core.Config.parseLuceneVersionString(Config.java:313)
>   at org.apache.solr.core.Config.getLuceneVersion(Config.java:298)
>   at org.apache.solr.core.SolrConfig.<init>(SolrConfig.java:119)
>   at 
> org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:989)
>   ... 11 more
> Caused by: java.lang.IllegalArgumentException: No enum const class 
> org.apache.lucene.util.Version.LUCENE_43
>   at java.lang.Enum.valueOf(Enum.java:214)
>   at org.apache.lucene.util.Version.valueOf(Version.java:34)
>   at org.apache.lucene.util.Version.parseLeniently(Version.java:133)
>   at org.apache.solr.core.Config.parseLuceneVersionString(Config.java:311)
>   ... 14 more
> May 7, 2013 9:10:00 PM org.apache.solr.common.SolrException log
> SEVERE: null:org.apache.solr.common.SolrException: Unable to create core: 
> collection1
>   at 
> org.apache.solr.core.CoreContainer.recordAndThrow(CoreContainer.java:1672)
>   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1057)
>   at org.apache.solr.core.CoreContainer$3.call(CoreContainer.java:634)
>   at org.apache.solr.core.CoreContainer$3.call(CoreContainer.java:629)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:679)
> Caused by: org.apache.solr.common.SolrException: Could not load config for 
> solrconfig.xml
>   at 
> org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:991)
>   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1051)
>   ... 10 more
> Caused by: org.apache.solr.common.SolrException: Invalid luceneMatchVersion 

[jira] [Commented] (SOLR-4791) solr.xml sharedLib does not work in 4.3.0

2013-05-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-4791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651816#comment-13651816
 ] 

Jan Høydahl commented on SOLR-4791:
---

Tested the patch in my environment, works well (with old-style solr.xml, not 
tested with new style).

> solr.xml sharedLib does not work in 4.3.0
> -
>
> Key: SOLR-4791
> URL: https://issues.apache.org/jira/browse/SOLR-4791
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 4.3
>Reporter: Jan Høydahl
>Assignee: Erick Erickson
> Fix For: 5.0, 4.4, 4.3.1
>
> Attachments: SOLR-4791.patch, SOLR-4791.patch, SOLR-4791.patch
>
>
> The sharedLib attribute on the {{solr}} tag in solr.xml stopped working in 4.3.
> Using old-style solr.xml with sharedLib defined on the solr tag, Solr does not 
> load any of the shared libraries. Simply swapping out solr.war with the 4.2.1 
> one brings sharedLib loading back.




[jira] [Commented] (SOLR-4791) solr.xml sharedLib does not work in 4.3.0

2013-05-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-4791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651820#comment-13651820
 ] 

Jan Høydahl commented on SOLR-4791:
---

Tested with new style solr.xml, and it works as well.

> solr.xml sharedLib does not work in 4.3.0
> -
>
> Key: SOLR-4791
> URL: https://issues.apache.org/jira/browse/SOLR-4791
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 4.3
>Reporter: Jan Høydahl
>Assignee: Erick Erickson
> Fix For: 5.0, 4.4, 4.3.1
>
> Attachments: SOLR-4791.patch, SOLR-4791.patch, SOLR-4791.patch
>
>
> The sharedLib attribute on the {{solr}} tag in solr.xml stopped working in 4.3.
> Using old-style solr.xml with sharedLib defined on the solr tag, Solr does not 
> load any of the shared libraries. Simply swapping out solr.war with the 4.2.1 
> one brings sharedLib loading back.




[jira] [Commented] (LUCENE-4956) the korean analyzer that has a korean morphological analyzer and dictionaries

2013-05-08 Thread Christian Moen (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651826#comment-13651826
 ] 

Christian Moen commented on LUCENE-4956:


Updates:

* Added {{text_kr}} field type to {{schema.xml}}
* Fixed Solr factories to load field type {{text_kr}} in the example
* Updated javadoc so that it compiles cleanly (mostly removed illegal javadoc)
* Updated various build things related to including Korean in the Solr 
distribution
* Added placeholder stopwords file
* Added services for arirang

Korean analysis using field type {{text_kr}} seems to be doing the right thing 
out-of-the-box now, but some configuration options in the factories aren't 
working yet.  There are several other things that need polishing up, but 
we're making progress.

> the korean analyzer that has a korean morphological analyzer and dictionaries
> -
>
> Key: LUCENE-4956
> URL: https://issues.apache.org/jira/browse/LUCENE-4956
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Affects Versions: 4.2
>Reporter: SooMyung Lee
>Assignee: Christian Moen
>  Labels: newbie
> Attachments: kr.analyzer.4x.tar
>
>
> Korean language has specific characteristics. When developing a search service 
> with lucene & solr in korean, there are some problems in searching and 
> indexing. The korean analyzer solves these problems with a korean morphological 
> analyzer. It consists of a korean morphological analyzer, dictionaries, a 
> korean tokenizer and a korean filter. The korean analyzer is made for lucene 
> and solr. If you develop a search service with lucene in korean, it is the 
> best idea to choose the korean analyzer.




[jira] [Commented] (LUCENE-4975) Add Replication module to Lucene

2013-05-08 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651839#comment-13651839
 ] 

Shai Erera commented on LUCENE-4975:


bq. I'm trying really hard not to say anything sarcastic here

Heh, I expected something from you :).

I chatted about this with Mike and he confirmed my reasoning. This is a very 
slim chance, and usually indicates a truly bad (or crazy) IO subsystem (i.e. not 
like MDW throwing random IOEs on opening the same file over and over again). I 
think perhaps this can be solved in IW by having it refer to the latest commit 
point read by IFD and not what it read. This seems safe to me, but perhaps 
overkill. Anyway, it belongs in a different issue.

What also happened here is that IW overwrote segments_a (violating write-once 
policy), which MDW didn't catch because for replicator tests I need to turn off 
preventDoubleWrite. Mike also says that IW doesn't guarantee write-once if it 
hits exceptions ...

So I think the safest solution is to deleteUnused() + rollback(), as anyway the 
handler must ensure that commits are not created by this "kiss".

I will resolve the remaining nocommits and post a new patch.

> Add Replication module to Lucene
> 
>
> Key: LUCENE-4975
> URL: https://issues.apache.org/jira/browse/LUCENE-4975
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-4975.patch, LUCENE-4975.patch, LUCENE-4975.patch, 
> LUCENE-4975.patch, LUCENE-4975.patch, LUCENE-4975.patch
>
>
> I wrote a replication module which I think will be useful to Lucene users who 
> want to replicate their indexes for e.g high-availability, taking hot backups 
> etc.
> I will upload a patch soon where I'll describe in general how it works.




[jira] [Commented] (LUCENE-4975) Add Replication module to Lucene

2013-05-08 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651847#comment-13651847
 ] 

Robert Muir commented on LUCENE-4975:
-

{quote}
I chatted about this with Mike and he confirmed my reasoning. This is very slim 
chance, and usually indicates a truly bad (or crazy) IO subsystem (i.e. not 
like MDW throwing random IOEs on opening the same file over and over again). I 
think perhaps this can be solved in IW by having it refer to the latest commit 
point read by IFD and not what it read. This seems safe to me, but perhaps an 
overkill. Anyway, it belongs in a different issue.

What also happened here is that IW overwrote segments_a (violating write-once 
policy), which MDW didn't catch because for replicator tests I need to turn off 
preventDoubleWrite. Mike also says that IW doesn't guarantee write-once if it 
hits exceptions ...
{quote}

Sorry, i disagree and am against this.

The bug is that IndexWriter.close() waits for merges and commits. Let's quit 
kidding ourselves, OK?





[jira] [Comment Edited] (LUCENE-4975) Add Replication module to Lucene

2013-05-08 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651850#comment-13651850
 ] 

Shai Erera edited comment on LUCENE-4975 at 5/8/13 12:37 PM:
-

bq. The bug is that IndexWriter.close() waits for merges and commits. Let's quit 
kidding ourselves, OK?

Not sure I agree ... the bug is real: if somebody did new 
IW().commit().close() without making any change, he might be surprised by that 
commit too, and IW would also overwrite an existing file. The only thing 
"close-not-committing" would solve in this case is that I could call close() 
instead of rollback(). The bug won't disappear.

  was (Author: shaie):
bq. The bug is that IndexWriter.close() waits for merges and commits. Lets 
quit kidding ourselves ok?

Not sure I agree .. the bug is real, and if somebody did new 
IW().commit().close() without making any change, he might be surprised about 
that commit too, and also IW would overwrite any existing file. The only thing 
"close-not-committing" would solve in this case is that I won't need to call 
rollback(), but close(). The bug won't disappear.
  




[jira] [Commented] (LUCENE-4975) Add Replication module to Lucene

2013-05-08 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651850#comment-13651850
 ] 

Shai Erera commented on LUCENE-4975:


bq. The bug is that IndexWriter.close() waits for merges and commits. Let's quit 
kidding ourselves, OK?

Not sure I agree ... the bug is real: if somebody did new 
IW().commit().close() without making any change, he might be surprised by that 
commit too, and IW would also overwrite an existing file. The only thing 
"close-not-committing" would solve in this case is that I could call close() 
instead of rollback(). The bug won't disappear.





[jira] [Resolved] (SOLR-4791) solr.xml sharedLib does not work in 4.3.0

2013-05-08 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-4791.
--

Resolution: Fixed

trunk: 1480228
4x: 1480252

Thanks Ryan for the patch and Jan for testing!

Ryan: My apologies, I forgot to add this to CHANGES.txt. I have another patch 
to commit sometime in the next couple of days and I'll add this (with 
attribution!) in then.

> solr.xml sharedLib does not work in 4.3.0
> -
>
> Key: SOLR-4791
> URL: https://issues.apache.org/jira/browse/SOLR-4791
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 4.3
>Reporter: Jan Høydahl
>Assignee: Erick Erickson
> Fix For: 5.0, 4.4, 4.3.1
>
> Attachments: SOLR-4791.patch, SOLR-4791.patch, SOLR-4791.patch
>
>
> The sharedLib attribute on the {{solr}} tag in solr.xml stopped working in 4.3.
> Using old-style solr.xml with sharedLib defined on the solr tag, Solr does not 
> load any of the shared libraries. Simply swapping solr.war for the 4.2.1 one 
> brings sharedLib loading back.




[JENKINS] Lucene-Solr-4.x-MacOSX (64bit/jdk1.7.0) - Build # 435 - Still Failing!

2013-05-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-MacOSX/435/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch

Error Message:
There are still nodes recoverying - waited for 230 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 230 
seconds
at 
__randomizedtesting.SeedInfo.seed([223750121E635B13:A3D1DE0A693C3B2F]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:173)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:131)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:126)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:512)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:146)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:815)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)

[jira] [Created] (LUCENE-4987) Test framework may fail internally under J9 (some serious JVM exclusive-section issue).

2013-05-08 Thread Dawid Weiss (JIRA)
Dawid Weiss created LUCENE-4987:
---

 Summary: Test framework may fail internally under J9 (some serious 
JVM exclusive-section issue).
 Key: LUCENE-4987
 URL: https://issues.apache.org/jira/browse/LUCENE-4987
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Minor


This was reported by Shai. The runner failed with an exception:
{code}
[junit4:junit4] Caused by: java.util.NoSuchElementException
[junit4:junit4] at java.util.ArrayDeque.removeFirst(ArrayDeque.java:289)
[junit4:junit4] at java.util.ArrayDeque.pop(ArrayDeque.java:518)
[junit4:junit4] at 
com.carrotsearch.ant.tasks.junit4.JUnit4$1.onSlaveIdle(JUnit4.java:809)
[junit4:junit4] ... 17 more
{code}

The problem is that this is impossible because the code around JUnit4.java:809 
looks like this:

{code}
 final Deque stealingQueue = new ArrayDeque(...);
 aggregatedBus.register(new Object() {
@Subscribe
public void onSlaveIdle(SlaveIdle slave) {
  if (stealingQueue.isEmpty()) {
...
  } else {
String suiteName = stealingQueue.pop();
...
  }
}
  });
{code}

and the contract on Guava's EventBus states that:

{code}
 * The EventBus guarantees that it will not call a handler method from
 * multiple threads simultaneously, unless the method explicitly allows it by
 * bearing the {@link AllowConcurrentEvents} annotation.  If this annotation is
 * not present, handler methods need not worry about being reentrant, unless
 * also called from outside the EventBus
{code}

I wrote a simple snippet of code that exercises this in a loop and indeed, two 
threads can appear in the critical section at once. This is not reproducible on 
Hotspot and only appears to be a problem on J9/1.7/Windows (J9 1.6 works fine).

I'll provide a workaround in the runner (an explicit monitor seems to be 
working) but this is some serious J9 issue.
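A minimal, self-contained sketch of such a workaround (hypothetical names, plain JDK, no EventBus involved): guard the shared deque with an explicit monitor object rather than a synchronized handler method, so correctness does not depend on the JVM honoring method-level synchronization:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.atomic.AtomicInteger;

public class ExplicitMonitorSketch {
    // Explicit monitor object guarding the shared deque. This mirrors the
    // workaround described above: an explicit critical section instead of a
    // synchronized handler method that the JIT might compile incorrectly.
    private static final Object monitor = new Object();
    private static final Deque<String> stealingQueue = new ArrayDeque<>();
    private static final AtomicInteger popped = new AtomicInteger();

    static void onSlaveIdle() {
        synchronized (monitor) {            // explicit critical section
            if (!stealingQueue.isEmpty()) {
                stealingQueue.pop();        // cannot throw NoSuchElementException:
                popped.incrementAndGet();   // emptiness check holds the same lock
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 4; i++) stealingQueue.push("Suite" + i);
        Thread[] workers = new Thread[8];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(ExplicitMonitorSketch::onSlaveIdle);
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        // 4 suites queued, 8 idle "slaves": exactly 4 pops succeed,
        // the other 4 handlers observe an empty queue and do nothing.
        System.out.println("popped=" + popped.get()
            + " remaining=" + stealingQueue.size());
    }
}
```

Without the explicit monitor, two threads entering the handler simultaneously could both pass the emptiness check and race on pop(), which is exactly the NoSuchElementException seen in the original report.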






[jira] [Updated] (LUCENE-4987) Test framework may fail internally under J9 (some serious JVM exclusive-section issue).

2013-05-08 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-4987:


Attachment: j9.zip

Compile and run under J9 (and hotspot):
{code}
javac -cp guava-14.0.1.jar;junit-4.10.jar;randomizedtesting-runner-2.0.9.jar 
J9SanityCheck.java
java -cp guava-14.0.1.jar;junit-4.10.jar;randomizedtesting-runner-2.0.9.jar;. 
J9SanityCheck
{code}





[jira] [Created] (SOLR-4801) "Can't find (or read) directory to add to classloader" at startup

2013-05-08 Thread Leith Tussing (JIRA)
Leith Tussing created SOLR-4801:
---

 Summary: "Can't find (or read) directory to add to classloader" at 
startup
 Key: SOLR-4801
 URL: https://issues.apache.org/jira/browse/SOLR-4801
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.3
 Environment: Windows 2008 R2
Java JRE 1.6.0_45 x64
Jetty 8.1.10
Solr 4.3.0 + lib\etc files
Reporter: Leith Tussing
Priority: Minor


I upgraded from 4.2.1 by replacing the war file and adding the lib\etc files 
from the example section into my Jetty installation.  Now every time I start 
the services I get WARN messages I've never gotten before during startup.  

Additionally there appears to be profanity in the last WARN message as it looks 
for a "/total/crap/dir/ignored" after failing the other items.

{code:xml}
WARN  - 2013-05-08 08:46:06.710; org.apache.solr.core.SolrResourceLoader; Can't 
find (or read) directory to add to classloader: ../../../contrib/extraction/lib 
(resolved as: C:\Jetty\solr\instance1\..\..\..\contrib\extraction\lib).
WARN  - 2013-05-08 08:46:06.714; org.apache.solr.core.SolrResourceLoader; Can't 
find (or read) directory to add to classloader: ../../../dist/ (resolved as: 
C:\Jetty\solr\instance1\..\..\..\dist).
WARN  - 2013-05-08 08:46:06.715; org.apache.solr.core.SolrResourceLoader; Can't 
find (or read) directory to add to classloader: 
../../../contrib/clustering/lib/ (resolved as: 
C:\Jetty\solr\instance1\..\..\..\contrib\clustering\lib).
WARN  - 2013-05-08 08:46:06.716; org.apache.solr.core.SolrResourceLoader; Can't 
find (or read) directory to add to classloader: ../../../dist/ (resolved as: 
C:\Jetty\solr\instance1\..\..\..\dist).
WARN  - 2013-05-08 08:46:06.716; org.apache.solr.core.SolrResourceLoader; Can't 
find (or read) directory to add to classloader: ../../../contrib/langid/lib/ 
(resolved as: C:\Jetty\solr\instance1\..\..\..\contrib\langid\lib).
WARN  - 2013-05-08 08:46:06.717; org.apache.solr.core.SolrResourceLoader; Can't 
find (or read) directory to add to classloader: ../../../dist/ (resolved as: 
C:\Jetty\solr\instance1\..\..\..\dist).
WARN  - 2013-05-08 08:46:06.717; org.apache.solr.core.SolrResourceLoader; Can't 
find (or read) directory to add to classloader: ../../../contrib/velocity/lib 
(resolved as: C:\Jetty\solr\instance1\..\..\..\contrib\velocity\lib).
WARN  - 2013-05-08 08:46:06.718; org.apache.solr.core.SolrResourceLoader; Can't 
find (or read) directory to add to classloader: ../../../dist/ (resolved as: 
C:\Jetty\solr\instance1\..\..\..\dist).
WARN  - 2013-05-08 08:46:06.719; org.apache.solr.core.SolrResourceLoader; Can't 
find (or read) directory to add to classloader: /total/crap/dir/ignored 
(resolved as: C:\Jetty\solr\instance1\total\crap\dir\ignored).
{code}
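For reference, these warnings map one-to-one to {{lib}} directives shipped in the stock example solrconfig.xml; the sketch below shows what such entries look like (paths and regexes are illustrative and should be checked against the actual 4.3 example config):

```xml
<!-- Illustrative sketch of the <lib/> entries in the example solrconfig.xml
     that trigger the WARNs above when the directories are absent.
     A dir that cannot be resolved is logged at WARN level and skipped. -->
<lib dir="../../../contrib/extraction/lib" regex=".*\.jar" />
<lib dir="../../../dist/" regex="solr-cell-\d.*\.jar" />
<lib dir="../../../contrib/clustering/lib/" regex=".*\.jar" />
<lib dir="../../../contrib/langid/lib/" regex=".*\.jar" />
<lib dir="../../../contrib/velocity/lib" regex=".*\.jar" />
<!-- deliberately bogus path kept in the example config to demonstrate
     that missing directories are tolerated -->
<lib dir="/total/crap/dir/ignored" />
```

Removing or adjusting these entries (or creating the expected directories) silences the warnings; the missing directories are otherwise harmless.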




[jira] [Commented] (LUCENE-4987) Test framework may fail internally under J9 (some serious JVM exclusive-section issue).

2013-05-08 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651882#comment-13651882
 ] 

Dawid Weiss commented on LUCENE-4987:
-

I think I found the reason. J9 seems to be optimizing away the following code:
{code}
  @Override public synchronized void handleEvent(Object event)
  throws InvocationTargetException {
super.handleEvent(event);
  }
{code}
This method is removed entirely, as the stack trace from the 'wtf' exception 
shows:
{code}
java.lang.reflect.InvocationTargetException
at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:613)
at 
com.google.common.eventbus.EventHandler.handleEvent(EventHandler.java:74)
at com.google.common.eventbus.EventBus.dispatch(EventBus.java:314)
at 
com.google.common.eventbus.EventBus.dispatchQueuedEvents(EventBus.java:296)
at com.google.common.eventbus.EventBus.post(EventBus.java:267)
at 
com.carrotsearch.ant.tasks.junit4.it.TestJ9SanityCheck$2.call(TestJ9SanityCheck.java:75)
at 
com.carrotsearch.ant.tasks.junit4.it.TestJ9SanityCheck$2.call(TestJ9SanityCheck.java:1)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:345)
at java.util.concurrent.FutureTask.run(FutureTask.java:177)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1121)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:614)
at java.lang.Thread.run(Thread.java:777)
Caused by: java.lang.RuntimeException: Wtf? two threads in a handler: 
Thread[pool-100-thread-1,5,TGRP-TestJ9SanityCheck] and 
Thread[pool-100-thread-2,5,TGRP-TestJ9SanityCheck]
at 
com.carrotsearch.ant.tasks.junit4.it.TestJ9SanityCheck$1.onSlaveIdle(TestJ9SanityCheck.java:52)
... 14 more
{code}

The event handler type is definitely SynchronizedEventHandler, and on Hotspot you 
do get a synchronized indirection layer (a call under a synchronized monitor), 
whereas on J9 this becomes a full-blown race condition.







[jira] [Commented] (SOLR-4801) "Can't find (or read) directory to add to classloader" at startup

2013-05-08 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651885#comment-13651885
 ] 

Mark Miller commented on SOLR-4801:
---

Woah, that's some vulgar stuff ;) Don't worry, I know who is responsible and 
they won't get away with it unscathed.





[jira] [Commented] (SOLR-4801) "Can't find (or read) directory to add to classloader" at startup

2013-05-08 Thread Leith Tussing (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651886#comment-13651886
 ] 

Leith Tussing commented on SOLR-4801:
-

The profanity part doesn't bother me, but there's probably someone out there 
who will bust a blood vessel if they saw it.  :D


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4987) Test framework may fail internally under J9 (some serious JVM exclusive-section issue).

2013-05-08 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651888#comment-13651888
 ] 

Shai Erera commented on LUCENE-4987:


Thanks Dawid. I filed a PMR with the J9 team. Will post back when they 
respond/resolve the issue.

> Test framework may fail internally under J9 (some serious JVM 
> exclusive-section issue).
> ---
>
> Key: LUCENE-4987
> URL: https://issues.apache.org/jira/browse/LUCENE-4987
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
> Attachments: j9.zip
>
>
> This was reported by Shai. The runner failed with an exception:
> {code}
> [junit4:junit4] Caused by: java.util.NoSuchElementException
> [junit4:junit4] at 
> java.util.ArrayDeque.removeFirst(ArrayDeque.java:289)
> [junit4:junit4] at java.util.ArrayDeque.pop(ArrayDeque.java:518)
> [junit4:junit4] at 
> com.carrotsearch.ant.tasks.junit4.JUnit4$1.onSlaveIdle(JUnit4.java:809)
> [junit4:junit4] ... 17 more
> {code}
> The problem is that this is impossible because the code around 
> JUnit4.java:809 looks like this:
> {code}
>  final Deque<String> stealingQueue = new ArrayDeque<String>(...);
>  aggregatedBus.register(new Object() {
> @Subscribe
> public void onSlaveIdle(SlaveIdle slave) {
>   if (stealingQueue.isEmpty()) {
> ...
>   } else {
> String suiteName = stealingQueue.pop();
> ...
>   }
> }
>   });
> {code}
> and the contract on Guava's EventBus states that:
> {code}
>  * The EventBus guarantees that it will not call a handler method from
>  * multiple threads simultaneously, unless the method explicitly allows it by
>  * bearing the {@link AllowConcurrentEvents} annotation.  If this annotation 
> is
>  * not present, handler methods need not worry about being reentrant, unless
>  * also called from outside the EventBus
> {code}
> I wrote a simple snippet of code that does it in a loop and indeed, two 
> threads can appear in the critical section at once. This is not reproducible 
> on Hotspot and only appears to be the problem on J9/1.7/Windows (J9 1.6 works 
> fine).
> I'll provide a workaround in the runner (an explicit monitor seems to be 
> working) but this is some serious J9 issue.
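The explicit-monitor workaround mentioned above can be sketched as follows. This is an illustration, not the actual JUnit4 runner change: a plain `synchronized` block restores mutual exclusion around the shared deque even when the EventBus serialization guarantee is broken, so the `isEmpty()`/`pop()` pair can never race.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the workaround: do not rely on EventBus serializing handler
// calls; guard the shared deque with an explicit monitor instead.
public class MonitorGuardDemo {
    private final Deque<String> stealingQueue = new ArrayDeque<String>();
    private final Object lock = new Object();
    private final AtomicInteger handled = new AtomicInteger();

    public MonitorGuardDemo(int suites) {
        for (int i = 0; i < suites; i++) {
            stealingQueue.push("suite-" + i);
        }
    }

    // Analogue of onSlaveIdle: the isEmpty()/pop() pair is atomic under the
    // monitor, so a second thread can never observe a non-empty queue and
    // then pop from an empty one (the NoSuchElementException above).
    public void onSlaveIdle() {
        synchronized (lock) {
            if (!stealingQueue.isEmpty()) {
                stealingQueue.pop();
                handled.incrementAndGet();
            }
        }
    }

    public int handled() {
        return handled.get();
    }

    public static void main(String[] args) throws InterruptedException {
        final MonitorGuardDemo demo = new MonitorGuardDemo(1000);
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(new Runnable() {
                public void run() {
                    for (int j = 0; j < 250; j++) {
                        demo.onSlaveIdle();
                    }
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        System.out.println(demo.handled()); // 1000: each suite handled exactly once
    }
}
```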




[jira] [Commented] (SOLR-4801) "Can't find (or read) directory to add to classloader" at startup

2013-05-08 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651889#comment-13651889
 ] 

Mark Miller commented on SOLR-4801:
---

I've mainly only heard of laughter as a response, so perhaps it's all balanced 
out in the world. I'm still ready to scold the culprit. I saw a guy end his 
interview on the BBC early by using such language :)

I do think those warning messages are (kind of oddly) expected now unless you 
remove those lib directives from the solrconfig.xml.
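For reference, the warnings come from {{<lib>}} directives like these in the example solrconfig.xml (paths as in the stock example; exact regexes may differ between releases). Deleting the ones for contrib modules you don't use silences the warnings:

{code:xml}
<!-- Each <lib> whose dir does not resolve logs one
     "Can't find (or read) directory" warning at startup. -->
<lib dir="../../../contrib/extraction/lib" regex=".*\.jar" />
<lib dir="../../../dist/" regex="solr-cell-\d.*\.jar" />
{code}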

> "Can't find (or read) directory to add to classloader" at startup
> -
>
> Key: SOLR-4801
> URL: https://issues.apache.org/jira/browse/SOLR-4801
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.3
> Environment: Windows 2008 R2
> Java JRE 1.6.0_45 x64
> Jetty 8.1.10
> Solr 4.3.0 + lib\etc files
>Reporter: Leith Tussing
>Priority: Minor
>




[jira] [Commented] (SOLR-4801) "Can't find (or read) directory to add to classloader" at startup

2013-05-08 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651895#comment-13651895
 ] 

Shawn Heisey commented on SOLR-4801:


The profanity is in the log because it's in the example solrconfig.xml file, 
and this config was likely based on the example.  We might want to replace that 
path in the example config with something less controversial.  I find it 
amusing, and I hate giving in to such pressures, but we have to live in the real 
world.

I'm wondering if these warnings might be another symptom of SOLR-4791.

[~brentil], does everything work despite the warnings?  If you used the entire 
Solr download as-is, those paths should exist, but if you've copied only 
required bits from example into a separate Jetty installation, they would not 
exist.  Do you need any of the contrib features in one of the directories that 
are mentioned?  Is testing a branch_4x checkout something you can do?  A more 
relevant option would be to try 4.3.0 source with the patch from SOLR-4791.


> "Can't find (or read) directory to add to classloader" at startup
> -
>
> Key: SOLR-4801
> URL: https://issues.apache.org/jira/browse/SOLR-4801
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.3
> Environment: Windows 2008 R2
> Java JRE 1.6.0_45 x64
> Jetty 8.1.10
> Solr 4.3.0 + lib\etc files
>Reporter: Leith Tussing
>Priority: Minor
>




Re: VOTE: solr no longer webapp

2013-05-08 Thread Mark Miller

On May 8, 2013, at 4:52 AM, Toke Eskildsen  wrote:

> What I see in this thread is that there is a lot of focus on the
> large-scale view and little on the small-scale.

The small scale is what embedded Solr can be used for. Much like you might use 
embedded Derby or something. I personally develop for the large scale - that's 
where the party is - and I want the software I work on to be the best for the 
large scale that it can be out of the box. It's much easier for a user to work 
that out for a smaller scale than the reverse.

I want Solr out of the box to be a scaling monster. I have very little concern 
for the guy who wants to run Solr with 5 other webapps. That guy has to 
suffer for the greater good. And as his little solution grows with success, 
perhaps he won't curse us later when he realizes how poorly he planned ahead.

- Mark



Re: VOTE: solr no longer webapp

2013-05-08 Thread Alan Woodward
> The small scale is what embedded Solr can be used for. 

Could we merge the embedded server in SolrJ with the code in 
SolrDispatchFilter?  Then we basically get the war version 'for free' - it's 
just a servlet wrapper around EmbeddedSolrServer.

In fact, we should probably do this anyway.  There's a bunch of stuff 
duplicated between the two classes, and all the new schema-rest functionality 
is missing from ESS.



[jira] [Commented] (SOLR-4799) SQLEntityProcessor for zipper join

2013-05-08 Thread James Dyer (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651923#comment-13651923
 ] 

James Dyer commented on SOLR-4799:
--

Mikhail,

Let me clarify that DIH is not mostly considered a "playground tool".  It 
performs very well and has a rich feature-set.  We use this in production to 
import millions of documents each day, with each document consisting of fields 
from 50+ data sources.  For simpler imports, it is a quick and easy way to get 
your data into Solr and run imports.  Many many installations use this in 
production and it works well in many cases.

That said, the codebase has suffered from years of neglect.  Over time people 
have been more willing to add features rather than refactor.  A lot of the code 
needs to be simplified, re-worked, less-important features removed, etc.  The 
tests need further improvement as well.

Your idea has great merit.  I think this would be an awesome feature to have in 
DIH.  I've wished for it before.  But I personally tend to shy away from 
committing big features to DIH because the code is not stable enough in my 
opinion.  I even have features in JIRA that I've developed and use in 
Production but feel uneasy about committing until more refactoring and test 
improvement work is done.

> SQLEntityProcessor for zipper join
> --
>
> Key: SOLR-4799
> URL: https://issues.apache.org/jira/browse/SOLR-4799
> Project: Solr
>  Issue Type: New Feature
>  Components: contrib - DataImportHandler
>Reporter: Mikhail Khludnev
>Priority: Minor
>  Labels: dih
>
> DIH is mostly considered a playground tool, and real usages end up with 
> SolrJ. I want to contribute a few improvements targeting DIH performance.
> This one provides a performant approach for joining SQL entities with minimal 
> memory, in contrast to 
> http://wiki.apache.org/solr/DataImportHandler#CachedSqlEntityProcessor
> The idea is:
> * the parent table is explicitly ordered by its PK in SQL
> * the children table is explicitly ordered by the parent_id FK in SQL
> * the children entity processor joins the ordered resultsets with a ‘zipper’ algorithm.
> Do you think it’s worth contributing to DIH?
> cc: [~goksron] [~jdyer]
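The proposed 'zipper' join can be sketched as a single forward pass over two result sets that are each sorted by the join key. This is an illustration, not the actual patch: children are attached to parents in streaming fashion, with no child-table cache like CachedSqlEntityProcessor would build.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a "zipper" merge join over two pre-sorted result sets:
// parentIds is sorted ascending, childParentIds is sorted ascending by
// the FK, so one forward cursor over the children suffices.
public class ZipperJoin {
    static List<List<String>> join(int[] parentIds, int[] childParentIds, String[] childValues) {
        List<List<String>> out = new ArrayList<List<String>>();
        int c = 0; // single cursor over the children result set
        for (int parentId : parentIds) {
            List<String> vals = new ArrayList<String>();
            // skip children whose parent was not selected
            while (c < childParentIds.length && childParentIds[c] < parentId) c++;
            // collect all children matching this parent
            while (c < childParentIds.length && childParentIds[c] == parentId) {
                vals.add(childValues[c++]);
            }
            out.add(vals);
        }
        return out;
    }

    public static void main(String[] args) {
        int[] parents = {1, 2, 3};
        int[] childParents = {1, 1, 3};
        String[] childValues = {"x", "y", "z"};
        System.out.println(join(parents, childParents, childValues)); // [[x, y], [], [z]]
    }
}
```

Memory stays bounded by a single parent's children, regardless of table size, which is the point of the proposal.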




[jira] [Updated] (LUCENE-4583) StraightBytesDocValuesField fails if bytes > 32k

2013-05-08 Thread Barakat Barakat (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barakat Barakat updated LUCENE-4583:


Attachment: LUCENE-4583.patch

I recently switched from 4.1 to 4.3, and my patch needed to be updated because 
of the changes to DocValues. The problem was almost fixed for BinaryDocValues, 
but it just needed one little change. I've attached a patch that removes the 
BinaryDocValues exception when the length is over BYTE_BLOCK_SIZE (32k), fixes 
ByteBlockPool#readBytes:348, and changes the 
TestDocValuesIndexing#testTooLargeBytes test to check for accuracy.


> StraightBytesDocValuesField fails if bytes > 32k
> 
>
> Key: LUCENE-4583
> URL: https://issues.apache.org/jira/browse/LUCENE-4583
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.0, 4.1, 5.0
>Reporter: David Smiley
>Priority: Critical
> Fix For: 4.3
>
> Attachments: LUCENE-4583.patch, LUCENE-4583.patch
>
>
> I didn't observe any limitations on the size of a bytes based DocValues field 
> value in the docs.  It appears that the limit is 32k, although I didn't get 
> any friendly error telling me that was the limit.  32k is kind of small IMO; 
> I suspect this limit is unintended and as such is a bug.The following 
> test fails:
> {code:java}
>   public void testBigDocValue() throws IOException {
> Directory dir = newDirectory();
> IndexWriter writer = new IndexWriter(dir, writerConfig(false));
> Document doc = new Document();
> BytesRef bytes = new BytesRef((4+4)*4097);//4096 works
> bytes.length = bytes.bytes.length;//byte data doesn't matter
> doc.add(new StraightBytesDocValuesField("dvField", bytes));
> writer.addDocument(doc);
> writer.commit();
> writer.close();
> DirectoryReader reader = DirectoryReader.open(dir);
> DocValues docValues = MultiDocValues.getDocValues(reader, "dvField");
> //FAILS IF BYTES IS BIG!
> docValues.getSource().getBytes(0, bytes);
> reader.close();
> dir.close();
>   }
> {code}




[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.7.0_21) - Build # 2799 - Failure!

2013-05-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/2799/
Java: 64bit/jdk1.7.0_21 -XX:-UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.ResourceLoaderTest

Error Message:
Resource in scope SUITE failed to close. Resource was registered from thread 
Thread[id=1159, 
name=TEST-ResourceLoaderTest.testClassLoaderLibs-seed#[333E0EA661C38064], 
state=RUNNABLE, group=TGRP-ResourceLoaderTest], registration stack trace below.

Stack Trace:
com.carrotsearch.randomizedtesting.ResourceDisposalError: Resource in scope 
SUITE failed to close. Resource was registered from thread Thread[id=1159, 
name=TEST-ResourceLoaderTest.testClassLoaderLibs-seed#[333E0EA661C38064], 
state=RUNNABLE, group=TGRP-ResourceLoaderTest], registration stack trace below.
at __randomizedtesting.SeedInfo.seed([333E0EA661C38064]:0)
at java.lang.Thread.getStackTrace(Thread.java:1567)
at 
com.carrotsearch.randomizedtesting.RandomizedContext.closeAtEnd(RandomizedContext.java:150)
at 
org.apache.lucene.util.LuceneTestCase.closeAfterSuite(LuceneTestCase.java:524)
at org.apache.lucene.util._TestUtil.getTempDir(_TestUtil.java:125)
at 
org.apache.solr.core.ResourceLoaderTest.testClassLoaderLibs(ResourceLoaderTest.java:141)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(Stateme

[jira] [Updated] (SOLR-4796) zkcli.sh should honor JAVA_HOME

2013-05-08 Thread Jan Høydahl (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-4796:
--

Fix Version/s: (was: 4.3)
   4.4

> zkcli.sh should honor JAVA_HOME
> ---
>
> Key: SOLR-4796
> URL: https://issues.apache.org/jira/browse/SOLR-4796
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 4.2
>Reporter: Roman Shaposhnik
> Fix For: 4.4
>
> Attachments: SOLR-4796.patch.txt
>
>
> On a system with GNU java installed, the fact that zkcli.sh doesn't honor 
> JAVA_HOME can lead to a hard-to-diagnose failure:
> {noformat}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org.apache.solr.cloud.ZkCLI
>at gnu.java.lang.MainThread.run(libgcj.so.7rh)
> Caused by: java.lang.ClassNotFoundException: org.apache.solr.cloud.ZkCLI not 
> found in gnu.gcj.runtime.SystemClassLoader{urls=[], 
> parent=gnu.gcj.runtime.ExtensionClassLoader{urls=[], parent=null}}
>at java.net.URLClassLoader.findClass(libgcj.so.7rh)
>at java.lang.ClassLoader.loadClass(libgcj.so.7rh)
>at java.lang.ClassLoader.loadClass(libgcj.so.7rh)
>at gnu.java.lang.MainThread.run(libgcj.so.7rh)
> {noformat}
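The usual shape of such a fix can be sketched as below. This is illustrative only (the attached SOLR-4796 patch may differ): prefer the JVM under JAVA_HOME when it is set and executable, and fall back to whatever `java` resolves to on the PATH otherwise.

```shell
#!/bin/sh
# Sketch, not the committed patch: resolve the JVM from JAVA_HOME when
# available, otherwise fall back to "java" on the PATH.
if [ -n "$JAVA_HOME" ] && [ -x "$JAVA_HOME/bin/java" ]; then
  JAVA="$JAVA_HOME/bin/java"
else
  JAVA=java
fi
echo "Using JVM: $JAVA"
```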




[jira] [Commented] (SOLR-4655) The Overseer should assign node names by default.

2013-05-08 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651934#comment-13651934
 ] 

Mark Miller commented on SOLR-4655:
---

I'm a little confused how the tests are currently passing - makes me think 
something is off.

OverseerCollectionProcessor will try to wait for the coreNodeName of: 
cmd.setCoreNodeName(nodeName + "_" + subShardName);

But according to your output above, the core node name will be core_node{n}

I'm not sure how that wait ends up working right.

> The Overseer should assign node names by default.
> -
>
> Key: SOLR-4655
> URL: https://issues.apache.org/jira/browse/SOLR-4655
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 4.3, 5.0
>
> Attachments: SOLR-4655.patch, SOLR-4655.patch, SOLR-4655.patch, 
> SOLR-4655.patch, SOLR-4655.patch, SOLR-4655.patch
>
>
> Currently we make a unique node name by using the host address as part of the 
> name. This means that if you want a node with a new address to take over, the 
> node name is misleading. It's best if you set custom names for each node 
> before starting your cluster. This is cumbersome though, and cannot currently 
> be done with the collections API. Instead, the overseer could assign a more 
> generic name such as nodeN by default. Then you can easily swap in another 
> node with no pre planning and no confusion in the name.




[jira] [Commented] (SOLR-4800) java.lang.IllegalArgumentException: No enum const class org.apache.lucene.util.Version.LUCENE_43

2013-05-08 Thread Jan Høydahl (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651937#comment-13651937
 ] 

Jan Høydahl commented on SOLR-4800:
---

Start from scratch and deploy your app making sure you use the 4.3 war.

Note that you may have to remove old unpacked "webapps/solr/" folder if you 
previously deployed 4.2.

rm -rf tomcat/webapps/solr

Then restart Tomcat, and it will pick up the new solr.war and unpack it 
(whether the appserver always unpacks wars depends on your settings).

> java.lang.IllegalArgumentException: No enum const class 
> org.apache.lucene.util.Version.LUCENE_43
> 
>
> Key: SOLR-4800
> URL: https://issues.apache.org/jira/browse/SOLR-4800
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.3
> Environment: Solr 4.3.0 on Ubuntu 12.04.2 LTS
>Reporter: Roald Hopman
>
> java.lang.IllegalArgumentException: No enum const class 
> org.apache.lucene.util.Version.LUCENE_43
> solr-4.3.0/example/solr/collection1/conf/solrconfig.xml has 
> <luceneMatchVersion>LUCENE_43</luceneMatchVersion>
> Which causes: 
> SolrCore Initialization Failures
> collection1: 
> org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
> Could not load config for solrconfig.xml 
> From catalina.out :
> SEVERE: Unable to create core: collection1
> org.apache.solr.common.SolrException: Could not load config for solrconfig.xml
>   at 
> org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:991)
>   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1051)
>   at org.apache.solr.core.CoreContainer$3.call(CoreContainer.java:634)
>   at org.apache.solr.core.CoreContainer$3.call(CoreContainer.java:629)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:679)
> Caused by: org.apache.solr.common.SolrException: Invalid luceneMatchVersion 
> 'LUCENE_43', valid values are: [LUCENE_30, LUCENE_31, LUCENE_32, LUCENE_33, 
> LUCENE_34, LUCENE_35, LUCENE_36, LUCENE_40, LUCENE_41, LUCENE_42, 
> LUCENE_CURRENT] or a string in format 'V.V'
>   at org.apache.solr.core.Config.parseLuceneVersionString(Config.java:313)
>   at org.apache.solr.core.Config.getLuceneVersion(Config.java:298)
>   at org.apache.solr.core.SolrConfig.(SolrConfig.java:119)
>   at 
> org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:989)
>   ... 11 more
> Caused by: java.lang.IllegalArgumentException: No enum const class 
> org.apache.lucene.util.Version.LUCENE_43
>   at java.lang.Enum.valueOf(Enum.java:214)
>   at org.apache.lucene.util.Version.valueOf(Version.java:34)
>   at org.apache.lucene.util.Version.parseLeniently(Version.java:133)
>   at org.apache.solr.core.Config.parseLuceneVersionString(Config.java:311)
>   ... 14 more
> May 7, 2013 9:10:00 PM org.apache.solr.common.SolrException log
> SEVERE: null:org.apache.solr.common.SolrException: Unable to create core: 
> collection1
>   at 
> org.apache.solr.core.CoreContainer.recordAndThrow(CoreContainer.java:1672)
>   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1057)
>   at org.apache.solr.core.CoreContainer$3.call(CoreContainer.java:634)
>   at org.apache.solr.core.CoreContainer$3.call(CoreContainer.java:629)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:679)
> Caused by: org.apache.solr.common.SolrException: Could not load config for 
> solrconfig.xml
>   at 
> org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:991)
>   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1051)
>   ... 10 more
> Caused by: org.apache.solr.common.SolrException: Inv

[jira] [Commented] (SOLR-4801) "Can't find (or read) directory to add to classloader" at startup

2013-05-08 Thread Leith Tussing (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651943#comment-13651943
 ] 

Leith Tussing commented on SOLR-4801:
-

That's correct, I took the war file and the lib\etc files from the package 
and put them into a non-example Jetty installation.  At first I had copied 
over just the dist\solr-4.3.0.war and renamed it to solr.war, which then 
failed to load.  Looking at the logs informed me I was missing slf4j 
components, so I then read the newly created 
[http://wiki.apache.org/solr/SolrLogging], which told me to copy over the 
solr/example/lib/ext items as well.

From everything I've tested, the functionality is working as expected.  I was 
able to add new entries and search existing ones.  As far as I know we do not 
need the contrib items; however, if I've incorrectly installed Solr I'd like 
to know how to fix it.

I'm not exactly sure how to run branch_4x (is this correct? 
[http://svn.apache.org/viewvc/lucene/dev/branches/branch_4x/solr/]) or the 
normal 4.3.0 from source.  If there is some guidance documentation you can 
point me to, I would gladly give it a try.

> "Can't find (or read) directory to add to classloader" at startup
> -
>
> Key: SOLR-4801
> URL: https://issues.apache.org/jira/browse/SOLR-4801
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.3
> Environment: Windows 2008 R2
> Java JRE 1.6.0_45 x64
> Jetty 8.1.10
> Solr 4.3.0 + lib\etc files
>Reporter: Leith Tussing
>Priority: Minor
>
> I upgraded from 4.2.1 by replacing the war file and adding the lib\etc files 
> from the example section into my Jetty installation.  Now every time I start 
> the services I get WARN messages I've never gotten before during startup.  
> Additionally there appears to be profanity in the last WARN message as it 
> looks for a "/total/crap/dir/ignored" after failing the other items.
> {code:xml}
> WARN  - 2013-05-08 08:46:06.710; org.apache.solr.core.SolrResourceLoader; 
> Can't find (or read) directory to add to classloader: 
> ../../../contrib/extraction/lib (resolved as: 
> C:\Jetty\solr\instance1\..\..\..\contrib\extraction\lib).
> WARN  - 2013-05-08 08:46:06.714; org.apache.solr.core.SolrResourceLoader; 
> Can't find (or read) directory to add to classloader: ../../../dist/ 
> (resolved as: C:\Jetty\solr\instance1\..\..\..\dist).
> WARN  - 2013-05-08 08:46:06.715; org.apache.solr.core.SolrResourceLoader; 
> Can't find (or read) directory to add to classloader: 
> ../../../contrib/clustering/lib/ (resolved as: 
> C:\Jetty\solr\instance1\..\..\..\contrib\clustering\lib).
> WARN  - 2013-05-08 08:46:06.716; org.apache.solr.core.SolrResourceLoader; 
> Can't find (or read) directory to add to classloader: ../../../dist/ 
> (resolved as: C:\Jetty\solr\instance1\..\..\..\dist).
> WARN  - 2013-05-08 08:46:06.716; org.apache.solr.core.SolrResourceLoader; 
> Can't find (or read) directory to add to classloader: 
> ../../../contrib/langid/lib/ (resolved as: 
> C:\Jetty\solr\instance1\..\..\..\contrib\langid\lib).
> WARN  - 2013-05-08 08:46:06.717; org.apache.solr.core.SolrResourceLoader; 
> Can't find (or read) directory to add to classloader: ../../../dist/ 
> (resolved as: C:\Jetty\solr\instance1\..\..\..\dist).
> WARN  - 2013-05-08 08:46:06.717; org.apache.solr.core.SolrResourceLoader; 
> Can't find (or read) directory to add to classloader: 
> ../../../contrib/velocity/lib (resolved as: 
> C:\Jetty\solr\instance1\..\..\..\contrib\velocity\lib).
> WARN  - 2013-05-08 08:46:06.718; org.apache.solr.core.SolrResourceLoader; 
> Can't find (or read) directory to add to classloader: ../../../dist/ 
> (resolved as: C:\Jetty\solr\instance1\..\..\..\dist).
> WARN  - 2013-05-08 08:46:06.719; org.apache.solr.core.SolrResourceLoader; 
> Can't find (or read) directory to add to classloader: /total/crap/dir/ignored 
> (resolved as: C:\Jetty\solr\instance1\total\crap\dir\ignored).
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4583) StraightBytesDocValuesField fails if bytes > 32k

2013-05-08 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651945#comment-13651945
 ] 

Robert Muir commented on LUCENE-4583:
-

I don't think we should just bump the limit like this.

The patch is not safe: some codecs rely upon this limit (e.g. they use 
64k-size PagedBytes and other things).

But in general I'm not sure what the real use cases are for storing > 32kb 
inside a single document value.


> StraightBytesDocValuesField fails if bytes > 32k
> 
>
> Key: LUCENE-4583
> URL: https://issues.apache.org/jira/browse/LUCENE-4583
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.0, 4.1, 5.0
>Reporter: David Smiley
>Priority: Critical
> Fix For: 4.3
>
> Attachments: LUCENE-4583.patch, LUCENE-4583.patch
>
>
> I didn't observe any limitations on the size of a bytes based DocValues field 
> value in the docs.  It appears that the limit is 32k, although I didn't get 
> any friendly error telling me that was the limit.  32k is kind of small IMO; 
> I suspect this limit is unintended and as such is a bug.  The following 
> test fails:
> {code:java}
>   public void testBigDocValue() throws IOException {
> Directory dir = newDirectory();
> IndexWriter writer = new IndexWriter(dir, writerConfig(false));
> Document doc = new Document();
> BytesRef bytes = new BytesRef((4+4)*4097);//4096 works
> bytes.length = bytes.bytes.length;//byte data doesn't matter
> doc.add(new StraightBytesDocValuesField("dvField", bytes));
> writer.addDocument(doc);
> writer.commit();
> writer.close();
> DirectoryReader reader = DirectoryReader.open(dir);
> DocValues docValues = MultiDocValues.getDocValues(reader, "dvField");
> //FAILS IF BYTES IS BIG!
> docValues.getSource().getBytes(0, bytes);
> reader.close();
> dir.close();
>   }
> {code}




[jira] [Commented] (SOLR-4508) DIHConfiguration and Document classes are suspiciously similar

2013-05-08 Thread Vadim Kirilchuk (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651946#comment-13651946
 ] 

Vadim Kirilchuk commented on SOLR-4508:
---

[~sar...@syr.edu] 
It looks like you finally wiped it out in r1470539.
Thanks. I am closing the issue.

> DIHConfiguration and Document classes are suspiciously similar
> --
>
> Key: SOLR-4508
> URL: https://issues.apache.org/jira/browse/SOLR-4508
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - DataImportHandler
>Reporter: Vadim Kirilchuk
>
> Hi, 
> trunk/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/config/DIHConfiguration.java
> trunk/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/config/Document.java
> I see that there are two classes in the DIH codebase which are quite 
> similar, and it looks like the latter (Document.java) has no usages.
> Is it dead code? Should Document.java be removed?
> Thanks




[jira] [Closed] (SOLR-4508) DIHConfiguration and Document classes are suspiciously similar

2013-05-08 Thread Vadim Kirilchuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Kirilchuk closed SOLR-4508.
-

   Resolution: Fixed
Fix Version/s: 5.0

> DIHConfiguration and Document classes are suspiciously similar
> --
>
> Key: SOLR-4508
> URL: https://issues.apache.org/jira/browse/SOLR-4508
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - DataImportHandler
>Reporter: Vadim Kirilchuk
> Fix For: 5.0
>
>




[jira] [Commented] (SOLR-4655) The Overseer should assign node names by default.

2013-05-08 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651951#comment-13651951
 ] 

Anshum Gupta commented on SOLR-4655:


I'll just patch up and have a look at it later in the day.  I think I know 
what's going on there: it waits (for the wrong coreNodeName) and times out, 
but this isn't getting caught.
Once I confirm that, I'll also create a JIRA to fix it.

> The Overseer should assign node names by default.
> -
>
> Key: SOLR-4655
> URL: https://issues.apache.org/jira/browse/SOLR-4655
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 4.3, 5.0
>
> Attachments: SOLR-4655.patch, SOLR-4655.patch, SOLR-4655.patch, 
> SOLR-4655.patch, SOLR-4655.patch, SOLR-4655.patch
>
>
> Currently we make a unique node name by using the host address as part of the 
> name. This means that if you want a node with a new address to take over, the 
> node name is misleading. It's best if you set custom names for each node 
> before starting your cluster. This is cumbersome though, and cannot currently 
> be done with the collections API. Instead, the overseer could assign a more 
> generic name such as nodeN by default. Then you can easily swap in another 
> node with no pre planning and no confusion in the name.




[jira] [Commented] (SOLR-4801) "Can't find (or read) directory to add to classloader" at startup

2013-05-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-4801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651952#comment-13651952
 ] 

Jan Høydahl commented on SOLR-4801:
---

I vote for getting rid of all of these from the example solrconfig.xml and 
instead wiring up the paths in solr.xml.

What about an option for recursively adding shared libs in solr.xml, i.e. 
adding all jars from all sub-folders?

{code:xml}
  ${solr.base.dir}/contrib
  ${solr.base.dir}/dist
{code}

We could then make the example Jetty automatically set/detect 
{{solr.base.dir}}, or, if someone places them elsewhere, they can override it 
on the command line.
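The {code} block above lost its XML markup in the mail archive; below is a hedged sketch of what such a recursive shared-lib entry in solr.xml might look like. The element and attribute names (sharedLib, recursive) are hypothetical illustrations of the proposal, not an existing Solr option:

```xml
<!-- Hypothetical solr.xml fragment (names illustrative only): recursively
     add every jar found under these base directories to the classloader. -->
<sharedLib dir="${solr.base.dir}/contrib" recursive="true"/>
<sharedLib dir="${solr.base.dir}/dist"    recursive="true"/>
```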





[jira] [Commented] (SOLR-4801) "Can't find (or read) directory to add to classloader" at startup

2013-05-08 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651958#comment-13651958
 ] 

Robert Muir commented on SOLR-4801:
---

Guys, I seriously don't know if I should laugh or cry.

If one of these lib directories does not exist, Solr should fail hard. This 
total/crap/ignored is an anti-feature.





[jira] [Commented] (SOLR-4655) The Overseer should assign node names by default.

2013-05-08 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651967#comment-13651967
 ] 

Mark Miller commented on SOLR-4655:
---

What's odd is that I have another branch I'm working on where it does get 
caught up over that...

Another problem: in ZkController#waitForCoreNodeName, it must use 
getSlicesMap rather than getActiveSlicesMap, otherwise these subshards don't 
find their set coreNodeNames.





[jira] [Commented] (SOLR-4655) The Overseer should assign node names by default.

2013-05-08 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651970#comment-13651970
 ] 

Mark Miller commented on SOLR-4655:
---

Re: the wrong coreNodeName

Because the coreNodeName is set in the call to the CoreAdminHandler, there is 
no real way to find out what it's set to without polling the clusterstate, I 
think... I'm going to look into that real quick.





[jira] [Created] (SOLR-4802) Atomic Updated for large documents is very slow and at some point Server stops responding.

2013-05-08 Thread Aditya (JIRA)
Aditya created SOLR-4802:


 Summary: Atomic Updated for large documents is very slow and at 
some point Server stops responding. 
 Key: SOLR-4802
 URL: https://issues.apache.org/jira/browse/SOLR-4802
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.2.1
 Environment: Jboss 5.1 with Solr 4.2.1 and JDk 1.6.0_33u
Reporter: Aditya
Priority: Critical


I am updating three fields in the document, which are of type long and float. 
This is an atomic update. I am updating around 30K documents and always get 
stuck after 80%.
I update 200 docs per request in a thread and execute 5 such threads in 
parallel. 
The current workaround is that I have auto-commit set up for 1 docs with 
openSearcher = false.

I guess this is related to SOLR-4589.
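For reference, the auto-commit workaround described above would live in solrconfig.xml along these lines; the maxDocs value below is hypothetical, since the document count in the original message was truncated ("1 docs"):

```xml
<!-- solrconfig.xml sketch of the reporter's workaround: hard-commit
     periodically without opening a new searcher. maxDocs is a guess. -->
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxDocs>10000</maxDocs>
    <openSearcher>false</openSearcher>
  </autoCommit>
</updateHandler>
```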




[jira] [Updated] (SOLR-2834) AnalysisResponseBase.java doesn't handle org.apache.solr.analysis.HTMLStripCharFilter

2013-05-08 Thread Shane (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane updated SOLR-2834:


Labels: patch  (was: )
  Priority: Blocker  (was: Major)
 Affects Version/s: 4.2
Remaining Estimate: 5m
 Original Estimate: 5m

This is still a confirmed issue in version 4.2 and presumably in 4.3 
(AnalysisResponseBase.java doesn't appear to have changed between versions).

> AnalysisResponseBase.java doesn't handle 
> org.apache.solr.analysis.HTMLStripCharFilter
> -
>
> Key: SOLR-2834
> URL: https://issues.apache.org/jira/browse/SOLR-2834
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java, Schema and Analysis
>Affects Versions: 3.4, 3.6, 4.2
>Reporter: Shane
>Priority: Blocker
>  Labels: patch
> Attachments: AnalysisResponseBase.patch
>
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> When using FieldAnalysisRequest.java to analyze a field, a 
> ClassCastException is thrown if the schema defines the filter 
> org.apache.solr.analysis.HTMLStripCharFilter.  The exception is:
> java.lang.ClassCastException: java.lang.String cannot be cast to 
> java.util.List
>at 
> org.apache.solr.client.solrj.response.AnalysisResponseBase.buildPhases(AnalysisResponseBase.java:69)
>at 
> org.apache.solr.client.solrj.response.FieldAnalysisResponse.setResponse(FieldAnalysisResponse.java:66)
>at 
> org.apache.solr.client.solrj.request.FieldAnalysisRequest.process(FieldAnalysisRequest.java:107)
> My schema definition is:
> (field-type definition using HTMLStripCharFilter; the XML markup was 
> stripped by the mail archive)
> The response part is:
> (analysis response for the text "testing analysis"; the XML markup was 
> stripped by the mail archive)
> A simplistic fix would be to test whether the Entry value is an instance of 
> List.
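A minimal sketch of the instanceof guard suggested above (class and method names here are hypothetical, not the actual SolrJ code):

```java
import java.util.Arrays;
import java.util.List;

// Sketch of the suggested fix: only treat a phase entry's value as a token
// list when it actually is a List; a CharFilter stage reports a plain String.
public class PhaseParseSketch {
    static int countTokens(Object entryValue) {
        if (entryValue instanceof List) {
            return ((List<?>) entryValue).size(); // analyzer stage: token list
        }
        return 0; // CharFilter stage: just the filtered text, no tokens
    }

    public static void main(String[] args) {
        System.out.println(countTokens(Arrays.asList("testing", "analysis"))); // 2
        System.out.println(countTokens("testing analysis"));                   // 0
    }
}
```

The same guard, dropped into AnalysisResponseBase.buildPhases, would skip CharFilter entries instead of casting them to List.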




[jira] [Updated] (SOLR-4802) Atomic Updated for large documents is very slow and at some point Server stops responding.

2013-05-08 Thread Aditya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya updated SOLR-4802:
-

Description: 
I am updating three fields in the document, which are of type long and float. 
This is an atomic update. I am updating around 30K documents and always get 
stuck after 80%.
I update 200 docs per request in a thread and execute 5 such threads in 
parallel. 
The current workaround is that I have auto-commit set up for 1 docs with 
openSearcher = false.

I guess this is related to SOLR-4589.

Some additional information: 
the machine is 6-core with a 5GB heap. ramBuffer=1024MB


  was:
I am updating three fields in the document which are of type long and float. 
This is an atomic update. Updating around 30K doucments and i always get stuck 
after 80%.
Update 200 docs per request in a thread and i execute 5 such thread in 
parallel. 
The current work around is that i have auto-commit setup for 1 docs with 
openSearcher = false.

i guess that this is related to SOLR-4589


> Atomic Updated for large documents is very slow and at some point Server 
> stops responding. 
> ---
>
> Key: SOLR-4802
> URL: https://issues.apache.org/jira/browse/SOLR-4802
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.2.1
> Environment: Jboss 5.1 with Solr 4.2.1 and JDk 1.6.0_33u
>Reporter: Aditya
>Priority: Critical
>




Compressed field reader and memory problems with atomic updates

2013-05-08 Thread Erick Erickson
I'm seeing a case (reported to me, not on my personal machine) where Solr's 
heap is being exhausted, apparently by the compressed field reader. Here's a 
sample line from a summary of a heap dump:

CompressingStoredFieldReader 735 instances

where the CompressingStoredFieldReader instances are taking up 7.5G of
heap space (out of 8 allocated). The GC cycles are a pattern I've seen
before
gc for a long time, recover a few meg
run for a _very_ short time
gc again, recover a few meg

This app happens to be doing a lot of atomic updates, running with CMS
collector, java 1.7. There is very little querying going on, and it is
happening on several different machines. Solr 4.2.

Now, there is a bit of custom update code involved; we're testing:
1> whether not compressing the stored fields changes things
2> whether disabling the custom code changes things
3> whether Solr 4.3 changes things
4> whether memory usage grows over time linearly or spikes (pretty 
sure the former).

This is an index-heavy/query-light application, and we're hosing a 
_bunch_ of documents at the indexing process. But still, assuming the 
memory reporting above is accurate, each CompressingStoredFieldReader is 
apparently taking up 10M, and there are over 700 of them. The documents are 
being batched (I don't quite know how many per batch; will find out).

Mainly, I'm asking if this rings any bells. I don't see anything on a
quick look at the JIRAs.

Thanks,
Erick




[jira] [Commented] (LUCENE-4583) StraightBytesDocValuesField fails if bytes > 32k

2013-05-08 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651990#comment-13651990
 ] 

Shai Erera commented on LUCENE-4583:


A user hit this limitation on the user list a couple of weeks ago while 
trying to index a very large number of facets: 
http://markmail.org/message/dfn4mk3qe7advzcd. So this is one use case, and it 
also appears there is an exception thrown at indexing time?
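For context, the arithmetic from the testBigDocValue snippet in the issue description: the failing allocation sits just above the 32k boundary the working one lands on (the exact internal limit is not stated in this thread):

```java
// Sizes taken from the testBigDocValue snippet in the issue description.
public class DocValueSizeSketch {
    public static void main(String[] args) {
        int working = (4 + 4) * 4096; // "4096 works" per the test comment
        int failing = (4 + 4) * 4097; // the size the failing test allocates
        System.out.println(working);  // 32768
        System.out.println(failing);  // 32776
        // The observed threshold therefore lies between 32768 and 32776
        // bytes, consistent with a ~32k block/page size inside the codec.
    }
}
```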





[JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_21) - Build # 2768 - Failure!

2013-05-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Windows/2768/
Java: 32bit/jdk1.7.0_21 -client -XX:+UseSerialGC

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.ResourceLoaderTest

Error Message:
Resource in scope SUITE failed to close. Resource was registered from thread 
Thread[id=3356, 
name=TEST-ResourceLoaderTest.testClassLoaderLibs-seed#[8E4AE938ACF8AFEB], 
state=RUNNABLE, group=TGRP-ResourceLoaderTest], registration stack trace below.

Stack Trace:
com.carrotsearch.randomizedtesting.ResourceDisposalError: Resource in scope 
SUITE failed to close. Resource was registered from thread Thread[id=3356, 
name=TEST-ResourceLoaderTest.testClassLoaderLibs-seed#[8E4AE938ACF8AFEB], 
state=RUNNABLE, group=TGRP-ResourceLoaderTest], registration stack trace below.
at __randomizedtesting.SeedInfo.seed([8E4AE938ACF8AFEB]:0)
at java.lang.Thread.getStackTrace(Thread.java:1567)
at 
com.carrotsearch.randomizedtesting.RandomizedContext.closeAtEnd(RandomizedContext.java:150)
at 
org.apache.lucene.util.LuceneTestCase.closeAfterSuite(LuceneTestCase.java:525)
at org.apache.lucene.util._TestUtil.getTempDir(_TestUtil.java:125)
at 
org.apache.solr.core.ResourceLoaderTest.testClassLoaderLibs(ResourceLoaderTest.java:141)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Commented] (SOLR-4788) Multiple Entities DIH delta import: dataimporter.[entityName].last_index_time is empty

2013-05-08 Thread Eric (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651992#comment-13651992
 ] 

Eric commented on SOLR-4788:


Same issue 

solr-spec
4.1.0.2013.01.16.17.21.36

solr-impl
4.1.0 1434440 - sarowe - 2013-01-16 17:21:36

lucene-spec
4.1.0 

> Multiple Entities DIH delta import: dataimporter.[entityName].last_index_time 
> is empty
> --
>
> Key: SOLR-4788
> URL: https://issues.apache.org/jira/browse/SOLR-4788
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.2, 4.3
> Environment: solr-spec
> 4.2.1.2013.03.26.08.26.55
> solr-impl
> 4.2.1 1461071 - mark - 2013-03-26 08:26:55
> lucene-spec
> 4.2.1
> lucene-impl
> 4.2.1 1461071 - mark - 2013-03-26 08:23:34
> OR
> solr-spec
> 4.3.0
> solr-impl
> 4.3.0 1477023 - simonw - 2013-04-29 15:10:12
> lucene-spec
> 4.3.0
> lucene-impl
> 4.3.0 1477023 - simonw - 2013-04-29 14:55:14
>Reporter: chakming wong
>
> {code:title=conf/dataimport.properties|borderStyle=solid}
> entity1.last_index_time=2013-05-06 03\:02\:06
> last_index_time=2013-05-06 03\:05\:22
> entity2.last_index_time=2013-05-06 03\:03\:14
> entity3.last_index_time=2013-05-06 03\:05\:22
> {code}
> {code:title=conf/solrconfig.xml|borderStyle=solid}
> <?xml version="1.0" encoding="UTF-8" ?>
> ...
> <requestHandler name="/dataimport"
> class="org.apache.solr.handler.dataimport.DataImportHandler">
>   <lst name="defaults">
>     <str name="config">dihconfig.xml</str>
>   </lst>
> </requestHandler>
> ...
> {code}
> {code:title=conf/dihconfig.xml|borderStyle=solid}
> <?xml version="1.0" encoding="UTF-8" ?>
> <dataConfig>
>   <dataSource type="JdbcDataSource" driver="com.mysql.jdbc.Driver"
>               url="jdbc:mysql://*:*/*"
>               user="*" password="*"/>
>   <document>
>     <entity name="entity1"
>             query="SELECT * FROM table_a"
>             deltaQuery="SELECT table_a_id FROM table_b WHERE 
>                 last_modified > '${dataimporter.entity1.last_index_time}'"
>             deltaImportQuery="SELECT * FROM table_a WHERE id = 
>                 '${dataimporter.entity1.id}'"
>             transformer="TemplateTransformer">
>       <field ... />
>       ...
>     </entity>
>     <entity name="entity2" ...>
>       ...
>     </entity>
>     <entity name="entity3" ...>
>       ...
>     </entity>
>   </document>
> </dataConfig>
> {code}
> In the above setup, *dataimporter.entity1.last_index_time* is an *empty 
> string*, which causes an error in the SQL query.
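A commonly suggested workaround, offered here as an assumption rather than something confirmed in this thread, is to fall back on the un-scoped property, which the dataimport.properties shown above does populate. It is less precise (one timestamp shared by all entities) but avoids the empty entity-scoped value:

```
<!-- Hypothetical workaround: use the global last_index_time instead of the
     entity-scoped ${dataimporter.entity1.last_index_time}. -->
<entity name="entity1"
        query="SELECT * FROM table_a"
        deltaQuery="SELECT table_a_id FROM table_b WHERE 
            last_modified > '${dataimporter.last_index_time}'"
        deltaImportQuery="SELECT * FROM table_a WHERE id = 
            '${dataimporter.entity1.id}'">
  ...
</entity>
```

The trade-off is that a delta import may re-select rows already imported by a sibling entity's more recent run, which is usually harmless since delta imports are idempotent per unique key.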




[jira] [Commented] (LUCENE-4583) StraightBytesDocValuesField fails if bytes > 32k

2013-05-08 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651994#comment-13651994
 ] 

Robert Muir commented on LUCENE-4583:
-

Over 6,000 facets per doc?

Yes, there is an exception thrown at index time. I added it intentionally and 
added a test that ensures you get it.

> StraightBytesDocValuesField fails if bytes > 32k
> 
>
> Key: LUCENE-4583
> URL: https://issues.apache.org/jira/browse/LUCENE-4583
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.0, 4.1, 5.0
>Reporter: David Smiley
>Priority: Critical
> Fix For: 4.3
>
> Attachments: LUCENE-4583.patch, LUCENE-4583.patch
>
>




[jira] [Commented] (SOLR-4800) java.lang.IllegalArgumentException: No enum const class org.apache.lucene.util.Version.LUCENE_43

2013-05-08 Thread Roald Hopman (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651999#comment-13651999
 ] 

Roald Hopman commented on SOLR-4800:


I solved it by setting up a new virtual machine. Apparently Tomcat was still 
using 4.2.1 somehow.

Thanks!
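The root cause described above, a stale 4.2.1 lucene-core jar on Tomcat's classpath, can be reproduced in miniature: the 4.2-era Version enum simply has no LUCENE_43 constant. The sketch below is illustrative only; the constant list is copied from the error message in this report, not from Lucene itself, and the real check lives in org.apache.lucene.util.Version.

```java
import java.util.Arrays;
import java.util.List;

public class MatchVersionCheck {
    // Constants a 4.2.1 lucene-core jar knows about, per the
    // "valid values are: [...]" line in the stack trace above.
    static final List<String> VERSIONS_42 = Arrays.asList(
        "LUCENE_30", "LUCENE_31", "LUCENE_32", "LUCENE_33", "LUCENE_34",
        "LUCENE_35", "LUCENE_36", "LUCENE_40", "LUCENE_41", "LUCENE_42",
        "LUCENE_CURRENT");

    static boolean supports(String matchVersion) {
        return VERSIONS_42.contains(matchVersion);
    }

    public static void main(String[] args) {
        // A 4.3.0 solrconfig.xml parsed against the stale jar fails:
        System.out.println(supports("LUCENE_43"));
        System.out.println(supports("LUCENE_42"));
    }
}
```

This is why replacing only the war is not enough: any older lucene jar left on the container's shared classpath can shadow the new one.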

> java.lang.IllegalArgumentException: No enum const class 
> org.apache.lucene.util.Version.LUCENE_43
> 
>
> Key: SOLR-4800
> URL: https://issues.apache.org/jira/browse/SOLR-4800
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.3
> Environment: Solr 4.3.0 on Ubuntu 12.04.2 LTS
>Reporter: Roald Hopman
>
> java.lang.IllegalArgumentException: No enum const class 
> org.apache.lucene.util.Version.LUCENE_43
> solr-4.3.0/example/solr/collection1/conf/solrconfig.xml has
> <luceneMatchVersion>LUCENE_43</luceneMatchVersion>
> Which causes: 
> SolrCore Initialization Failures
> collection1: 
> org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
> Could not load config for solrconfig.xml 
> From catalina.out :
> SEVERE: Unable to create core: collection1
> org.apache.solr.common.SolrException: Could not load config for solrconfig.xml
>   at 
> org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:991)
>   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1051)
>   at org.apache.solr.core.CoreContainer$3.call(CoreContainer.java:634)
>   at org.apache.solr.core.CoreContainer$3.call(CoreContainer.java:629)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:679)
> Caused by: org.apache.solr.common.SolrException: Invalid luceneMatchVersion 
> 'LUCENE_43', valid values are: [LUCENE_30, LUCENE_31, LUCENE_32, LUCENE_33, 
> LUCENE_34, LUCENE_35, LUCENE_36, LUCENE_40, LUCENE_41, LUCENE_42, 
> LUCENE_CURRENT] or a string in format 'V.V'
>   at org.apache.solr.core.Config.parseLuceneVersionString(Config.java:313)
>   at org.apache.solr.core.Config.getLuceneVersion(Config.java:298)
>   at org.apache.solr.core.SolrConfig.<init>(SolrConfig.java:119)
>   at 
> org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:989)
>   ... 11 more
> Caused by: java.lang.IllegalArgumentException: No enum const class 
> org.apache.lucene.util.Version.LUCENE_43
>   at java.lang.Enum.valueOf(Enum.java:214)
>   at org.apache.lucene.util.Version.valueOf(Version.java:34)
>   at org.apache.lucene.util.Version.parseLeniently(Version.java:133)
>   at org.apache.solr.core.Config.parseLuceneVersionString(Config.java:311)
>   ... 14 more
> May 7, 2013 9:10:00 PM org.apache.solr.common.SolrException log
> SEVERE: null:org.apache.solr.common.SolrException: Unable to create core: 
> collection1
>   at 
> org.apache.solr.core.CoreContainer.recordAndThrow(CoreContainer.java:1672)
>   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1057)
>   at org.apache.solr.core.CoreContainer$3.call(CoreContainer.java:634)
>   at org.apache.solr.core.CoreContainer$3.call(CoreContainer.java:629)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:679)
> Caused by: org.apache.solr.common.SolrException: Could not load config for 
> solrconfig.xml
>   at 
> org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:991)
>   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1051)
>   ... 10 more
> Caused by: org.apache.solr.common.SolrException: Invalid luceneMatchVersion 
> 'LUCENE_43', valid values are: [LUCENE_30, LUCENE_31, LUCENE_32, LUCENE_33, 
> LUCENE_34, LUCENE_35, LUCENE_36, LUCENE_40, LUCENE_41, LUCENE_42, 
> LUCENE_CURRENT] or a string in format 'V.V'
>   at org.apache.solr.core.Con

AW: [VOTE] Release PyLucene 4.3.0-1

2013-05-08 Thread Thomas Koch
Hi,
I was able to build JCC 2.16 and PyLucene 4.3.0-1 on Win32 with Python 2.7, Java
1.6, and ant 1.9. "make test" raises the typical Windows issues. I also
tried the basic Index/SearchFiles example (with no problems).

+1

Regards
Thomas

-Ursprüngliche Nachricht-
Von: Andi Vajda [mailto:va...@apache.org] 
Gesendet: Dienstag, 7. Mai 2013 02:27
An: pylucene-...@lucene.apache.org
Cc: gene...@lucene.apache.org
Betreff: [VOTE] Release PyLucene 4.3.0-1


It looks like the time has finally come for a PyLucene 4.x release!

The PyLucene 4.3.0-1 release tracking the recent release of Apache Lucene 
4.3.0 is ready.

A release candidate is available from:
http://people.apache.org/~vajda/staging_area/

A list of changes in this release can be seen at:
http://svn.apache.org/repos/asf/lucene/pylucene/branches/pylucene_4_3/CHANGES

PyLucene 4.3.0 is built with JCC 2.16 included in these release artifacts:
http://svn.apache.org/repos/asf/lucene/pylucene/trunk/jcc/CHANGES

A list of Lucene Java changes can be seen at:
http://svn.apache.org/repos/asf/lucene/dev/tags/lucene_solr_4_3_0/lucene/CHANGES.txt

Please vote to release these artifacts as PyLucene 4.3.0-1.

Thanks!

Andi..

ps: the KEYS file for PyLucene release signing is at:
http://svn.apache.org/repos/asf/lucene/pylucene/dist/KEYS
http://people.apache.org/~vajda/staging_area/KEYS

pps: here is my +1



Re: [jira] [Commented] (SOLR-4801) "Can't find (or read) directory to add to classloader" at startup

2013-05-08 Thread Erick Erickson
FWIW, total/crap/ignored is already gone in 4.3.

On Wed, May 8, 2013 at 10:59 AM, Robert Muir (JIRA)  wrote:
>
> [ 
> https://issues.apache.org/jira/browse/SOLR-4801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651958#comment-13651958
>  ]
>
> Robert Muir commented on SOLR-4801:
> ---
>
> Guys, I seriously don't know if I should laugh or cry.
>
> If one of these lib directories does not exist, solr should fail hard. This 
> total/crap/ignored is an anti-feature.
>
>> "Can't find (or read) directory to add to classloader" at startup
>> -
>>
>> Key: SOLR-4801
>> URL: https://issues.apache.org/jira/browse/SOLR-4801
>> Project: Solr
>>  Issue Type: Bug
>>Affects Versions: 4.3
>> Environment: Windows 2008 R2
>> Java JRE 1.6.0_45 x64
>> Jetty 8.1.10
>> Solr 4.3.0 + lib\etc files
>>Reporter: Leith Tussing
>>Priority: Minor
>>
>> I upgraded from 4.2.1 by replacing the war file and adding the lib\etc files 
>> from the example section into my Jetty installation.  Now every time I start 
>> the services I get WARN messages I've never gotten before during startup.
>> Additionally there appears to be profanity in the last WARN message as it 
>> looks for a "/total/crap/dir/ignored" after failing the other items.
>> {code:xml}
>> WARN  - 2013-05-08 08:46:06.710; org.apache.solr.core.SolrResourceLoader; 
>> Can't find (or read) directory to add to classloader: 
>> ../../../contrib/extraction/lib (resolved as: 
>> C:\Jetty\solr\instance1\..\..\..\contrib\extraction\lib).
>> WARN  - 2013-05-08 08:46:06.714; org.apache.solr.core.SolrResourceLoader; 
>> Can't find (or read) directory to add to classloader: ../../../dist/ 
>> (resolved as: C:\Jetty\solr\instance1\..\..\..\dist).
>> WARN  - 2013-05-08 08:46:06.715; org.apache.solr.core.SolrResourceLoader; 
>> Can't find (or read) directory to add to classloader: 
>> ../../../contrib/clustering/lib/ (resolved as: 
>> C:\Jetty\solr\instance1\..\..\..\contrib\clustering\lib).
>> WARN  - 2013-05-08 08:46:06.716; org.apache.solr.core.SolrResourceLoader; 
>> Can't find (or read) directory to add to classloader: ../../../dist/ 
>> (resolved as: C:\Jetty\solr\instance1\..\..\..\dist).
>> WARN  - 2013-05-08 08:46:06.716; org.apache.solr.core.SolrResourceLoader; 
>> Can't find (or read) directory to add to classloader: 
>> ../../../contrib/langid/lib/ (resolved as: 
>> C:\Jetty\solr\instance1\..\..\..\contrib\langid\lib).
>> WARN  - 2013-05-08 08:46:06.717; org.apache.solr.core.SolrResourceLoader; 
>> Can't find (or read) directory to add to classloader: ../../../dist/ 
>> (resolved as: C:\Jetty\solr\instance1\..\..\..\dist).
>> WARN  - 2013-05-08 08:46:06.717; org.apache.solr.core.SolrResourceLoader; 
>> Can't find (or read) directory to add to classloader: 
>> ../../../contrib/velocity/lib (resolved as: 
>> C:\Jetty\solr\instance1\..\..\..\contrib\velocity\lib).
>> WARN  - 2013-05-08 08:46:06.718; org.apache.solr.core.SolrResourceLoader; 
>> Can't find (or read) directory to add to classloader: ../../../dist/ 
>> (resolved as: C:\Jetty\solr\instance1\..\..\..\dist).
>> WARN  - 2013-05-08 08:46:06.719; org.apache.solr.core.SolrResourceLoader; 
>> Can't find (or read) directory to add to classloader: 
>> /total/crap/dir/ignored (resolved as: 
>> C:\Jetty\solr\instance1\total\crap\dir\ignored).
>> {code}
>
> --
> This message is automatically generated by JIRA.
> If you think it was sent incorrectly, please contact your JIRA administrators
> For more information on JIRA, see: http://www.atlassian.com/software/jira
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>




[jira] [Commented] (SOLR-4802) Atomic Updated for large documents is very slow and at some point Server stops responding.

2013-05-08 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652006#comment-13652006
 ] 

Erick Erickson commented on SOLR-4802:
--

Are you seeing any out of memory errors?

> Atomic Updated for large documents is very slow and at some point Server 
> stops responding. 
> ---
>
> Key: SOLR-4802
> URL: https://issues.apache.org/jira/browse/SOLR-4802
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.2.1
> Environment: Jboss 5.1 with Solr 4.2.1 and JDk 1.6.0_33u
>Reporter: Aditya
>Priority: Critical
>
> I am updating three fields in the document, which are of type long and float. 
> This is an atomic update. I am updating around 30K documents and always get 
> stuck after 80%.
> I update 200 docs per request in a thread and execute 5 such threads in 
> parallel. 
> The current workaround is that I have auto-commit set up for 1 docs with 
> openSearcher = false.
> I guess that this is related to SOLR-4589.
> Some additional information: 
> the machine is 6-core with 5GB heap. ramBuffer=1024MB
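For readers unfamiliar with the format, an atomic update sends per-field modifier maps (e.g. {"set": value}) rather than whole documents. The field names below are hypothetical stand-ins for the reporter's three long/float fields, and the sketch uses plain collections instead of SolrJ's SolrInputDocument so it stands alone:

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public class AtomicUpdateSketch {
    // Builds the {"set": value} modifier structure an atomic update carries
    // for each field; in SolrJ these maps are the field values of a
    // SolrInputDocument. Field names here are illustrative only.
    static Map<String, Object> atomicDoc(String id, long views,
                                         float score, float boost) {
        Map<String, Object> doc = new LinkedHashMap<>();
        doc.put("id", id); // the unique key is sent as-is, not as a modifier
        doc.put("views", Collections.singletonMap("set", views));
        doc.put("score", Collections.singletonMap("set", score));
        doc.put("boost", Collections.singletonMap("set", boost));
        return doc;
    }

    public static void main(String[] args) {
        System.out.println(atomicDoc("doc-1", 42L, 1.5f, 2.0f));
    }
}
```

Note that even though only three fields are sent, Solr re-reads and re-indexes the whole stored document for each atomic update, which is why large documents make this path expensive.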




Re: Compressed field reader and memory problems with atomic updates

2013-05-08 Thread Erick Erickson
Since what we're seeing is increasing heap usage, it doesn't quite
smell like SOLR-4802 or SOLR-4589, but it's possible.


On Wed, May 8, 2013 at 11:21 AM, Erick Erickson  wrote:
> I'm seeing a case (reported to me, not on my personal machine)
> where Solr's heap is being exhausted, apparently by compressed field
> readers. Here's a sample line from a summary of a heap dump:
>
> CompressingStoredFieldReader 735 instances
>
> where the CompressingStoredFieldReader instances are taking up 7.5G of
> heap space (out of 8 allocated). The GC cycles are a pattern I've seen
> before
> gc for a long time, recover a few meg
> run for a _very_ short time
> gc again, recover a few meg
>
> This app happens to be doing a lot of atomic updates, running with CMS
> collector, java 1.7. There is very little querying going on, and it is
> happening on several different machines. Solr 4.2.
>
> Now, there is a bit of custom update code involved; we're testing:
> 1> whether not compressing the stored fields changes things
> 2> whether disabling the custom code changes things
> 3> whether Solr 4.3 changes things
> 4> whether memory usage grows over time linearly or spikes (pretty
> sure the former).
>
> This is an index-heavy/query-light application, and we're hosing a
> _bunch_ of documents at the indexing process. But still, assuming that
> the memory reporting above is accurate, each
> CompressingStoredFieldReader is apparently taking up 10M, and
> there are over 700 of them. The documents are being batched (I don't
> quite know how many per batch; I will find out).
>
> Mainly, I'm asking if this rings any bells. I don't see anything on a
> quick look at the JIRAs.
>
> Thanks,
> Erick




[jira] [Commented] (LUCENE-4583) StraightBytesDocValuesField fails if bytes > 32k

2013-05-08 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652012#comment-13652012
 ] 

Shai Erera commented on LUCENE-4583:


bq. Over 6,000 facets per doc?

Right. Not sure how realistic his case is (something about proteins, he 
explains it here: http://markmail.org/message/2wxavktzyxjtijqe), and whether he 
should enable facet partitions or not, but he agreed these are extreme cases 
and expects only a few docs like that. At any rate, you asked for a use case :). 
Not saying we should support it, but it is one. If it can be supported without 
too much pain, then perhaps we should. Otherwise, I think we can live with the 
limitation and leave it to the app to worry about a workaround / write its own 
Codec.

> StraightBytesDocValuesField fails if bytes > 32k
> 
>
> Key: LUCENE-4583
> URL: https://issues.apache.org/jira/browse/LUCENE-4583
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.0, 4.1, 5.0
>Reporter: David Smiley
>Priority: Critical
> Fix For: 4.3
>
> Attachments: LUCENE-4583.patch, LUCENE-4583.patch
>
>




Re: [jira] [Commented] (SOLR-4801) "Can't find (or read) directory to add to classloader" at startup

2013-05-08 Thread Robert Muir
But the problem is not total/crap/ignored vs
/non/existent/dir/yields/warning

This makes no difference.

My complaint is about loose error checking.

On Wed, May 8, 2013 at 11:51 AM, Erick Erickson wrote:

> FWIW, total/crap/ignored is already gone in 4.3.
>
> On Wed, May 8, 2013 at 10:59 AM, Robert Muir (JIRA) 
> wrote:
> >
> > [
> https://issues.apache.org/jira/browse/SOLR-4801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651958#comment-13651958]
> >
> > Robert Muir commented on SOLR-4801:
> > ---
> >
> > Guys I seriously dont know if I should laugh or cry.
> >
> > If one of these lib directories does not exist, solr should fail hard.
> This total/crap/ignored is an anti-feature.


[jira] [Commented] (LUCENE-4583) StraightBytesDocValuesField fails if bytes > 32k

2013-05-08 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652014#comment-13652014
 ] 

Robert Muir commented on LUCENE-4583:
-

For the faceting use case, we have SortedSetDocValuesField, which can hold 2B 
facets per field, and you can have maybe 2B fields per doc.

So the limitation here is not docvalues.

The BINARY datatype isn't designed for faceting.

> StraightBytesDocValuesField fails if bytes > 32k
> 
>
> Key: LUCENE-4583
> URL: https://issues.apache.org/jira/browse/LUCENE-4583
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.0, 4.1, 5.0
>Reporter: David Smiley
>Priority: Critical
> Fix For: 4.3
>
> Attachments: LUCENE-4583.patch, LUCENE-4583.patch
>
>




[jira] [Reopened] (SOLR-4791) solr.xml sharedLib does not work in 4.3.0

2013-05-08 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reopened SOLR-4791:
--


Apparently causing a test failure...

> solr.xml sharedLib does not work in 4.3.0
> -
>
> Key: SOLR-4791
> URL: https://issues.apache.org/jira/browse/SOLR-4791
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 4.3
>Reporter: Jan Høydahl
>Assignee: Erick Erickson
> Fix For: 5.0, 4.4, 4.3.1
>
> Attachments: SOLR-4791.patch, SOLR-4791.patch, SOLR-4791.patch
>
>
> The sharedLib attribute on the {{<solr>}} tag in solr.xml stopped working in 4.3, 
> using old-style solr.xml with sharedLib defined on the solr tag: Solr does not 
> load any of the shared libs. Simply swapping the solr.war for the 4.2.1 one brings 
> sharedLib loading back.




[jira] [Updated] (SOLR-4791) solr.xml sharedLib does not work in 4.3.0

2013-05-08 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-4791:
-

Attachment: SOLR-4791-test.patch

Could someone with a Windows machine try this patch out? Other than having to 
indent, all it does is try to delete the directory created in the new test; the 
only changed code is the added try and finally block. Passes on my Mac.
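For illustration, the try/finally cleanup described above can be sketched roughly like this (a standalone sketch with hypothetical names, not the actual test code from the patch):

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class SharedLibTestSketch {
    // Create the test directory, run the test body, and always delete the
    // directory afterward, so repeated runs (and Windows file locks) don't
    // leave stale state behind.
    public static void runWithCleanup() throws IOException {
        File libDir = Files.createTempDirectory("sharedLib").toFile();
        try {
            // ... exercise sharedLib loading against libDir here ...
        } finally {
            deleteRecursively(libDir);
        }
    }

    // Best-effort recursive delete: remove children first, then the
    // directory itself.
    static void deleteRecursively(File f) {
        File[] children = f.listFiles();
        if (children != null) {
            for (File child : children) {
                deleteRecursively(child);
            }
        }
        f.delete();
    }
}
```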

> solr.xml sharedLib does not work in 4.3.0
> -
>
> Key: SOLR-4791
> URL: https://issues.apache.org/jira/browse/SOLR-4791
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 4.3
>Reporter: Jan Høydahl
>Assignee: Erick Erickson
> Fix For: 5.0, 4.4, 4.3.1
>
> Attachments: SOLR-4791.patch, SOLR-4791.patch, SOLR-4791.patch, 
> SOLR-4791-test.patch
>
>




[jira] [Commented] (SOLR-4753) SEVERE: Too many close [count:-1] for SolrCore in logs (4.2.1)

2013-05-08 Thread Yago Riveiro (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652018#comment-13652018
 ] 

Yago Riveiro commented on SOLR-4753:


Hi,

Today I upgraded Solr to 4.3 and hit the same issue.


{noformat}
65907 4159431 [node02.solrcloud-startStop-1-EventThread] INFO  
org.apache.solr.common.cloud.ZkStateReader  – A cluster state change: 
WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, 
has occurred - updating... (live nodes size: 4)
65908 4159698 [RecoveryThread] INFO  org.apache.solr.cloud.RecoveryStrategy  – 
Finished recovery process. core=ST-0712
65909 4159698 [RecoveryThread] INFO  org.apache.solr.core.SolrCore  – [ST-0712] 
 CLOSING SolrCore org.apache.solr.core.SolrCore@73e004a
65910 4159698 [RecoveryThread] INFO  org.apache.solr.update.UpdateHandler  – 
closing DirectUpdateHandler2{commits=0,autocommit maxDocs=5000,autocommit 
maxTime=1ms,autocommits=0,soft autocommit maxTime=2500ms,soft 
autocommits=0,optimizes=0,rollbacks=0,expungeDeletes=0,docsPending=0,adds=0,deletesById=0,deletesByQuery=0,errors=0,cumulative_adds=0,cumulative_deletesById=0,cumulative_deletesByQuery=0,cumulative_errors=0}
65911 4159699 [RecoveryThread] INFO  org.apache.solr.core.SolrCore  – [ST-0712] 
Closing main searcher on request.
65912 4159699 [catalina-exec-12] ERROR org.apache.solr.core.SolrCore  – Too 
many close [count:-1] on org.apache.solr.core.SolrCore@73e004a. Please report 
this exception to solr-u...@lucene.apache.org
65913 4159699 [catalina-exec-12] INFO  org.apache.solr.core.CoreContainer  – 
Persisting cores config to /opt/node02.solrcloud/solr/home/solr.xml
65914 4160185 [catalina-exec-12] INFO  org.apache.solr.core.SolrXMLSerializer  
– Persisting cores config to /opt/node02.solrcloud/solr/home/solr.xml
{noformat}

> SEVERE: Too many close [count:-1] for SolrCore in logs (4.2.1)
> --
>
> Key: SOLR-4753
> URL: https://issues.apache.org/jira/browse/SOLR-4753
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 5.0, 4.4
>
>
> a user reported core reference counting issues in 4.2.1...
> http://markmail.org/message/akrrj5o24prasm6e




[jira] [Commented] (SOLR-4791) solr.xml sharedLib does not work in 4.3.0

2013-05-08 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652028#comment-13652028
 ] 

Robert Muir commented on SOLR-4791:
---

Probably need to close() the classloader, I think? I'll test on a Windows box 
whether this does the trick.

> solr.xml sharedLib does not work in 4.3.0
> -
>
> Key: SOLR-4791
> URL: https://issues.apache.org/jira/browse/SOLR-4791
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 4.3
>Reporter: Jan Høydahl
>Assignee: Erick Erickson
> Fix For: 5.0, 4.4, 4.3.1
>
> Attachments: SOLR-4791.patch, SOLR-4791.patch, SOLR-4791.patch, 
> SOLR-4791-test.patch
>
>




[jira] [Commented] (SOLR-4791) solr.xml sharedLib does not work in 4.3.0

2013-05-08 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652040#comment-13652040
 ] 

Robert Muir commented on SOLR-4791:
---

We can also only do this in trunk: there is no close() method in Java 6. So I 
think in branch_4x the test should have an assumeFalse(Constants.WINDOWS).
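The close() idea can be sketched as follows (an illustration under stated assumptions, not the actual Solr patch): on Java 7+, URLClassLoader implements Closeable, so a loader opened over the sharedLib directory can release its jar handles when no longer needed, which is what lets Windows delete the directory afterward.

```java
import java.io.IOException;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Path;

public class CloseLoaderSketch {
    // Open a classloader over a lib directory, use it, and close it in a
    // finally block. close() (Java 7+) releases the open jar file handles;
    // Java 6 has no close(), so there the handles stay open until GC.
    public static void loadAndClose(Path libDir) throws IOException {
        URLClassLoader loader = new URLClassLoader(
                new URL[] { libDir.toUri().toURL() });
        try {
            // ... resolve plugin classes through 'loader' here ...
        } finally {
            loader.close(); // releases jar handles so the dir can be deleted
        }
    }
}
```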

> solr.xml sharedLib does not work in 4.3.0
> -
>
> Key: SOLR-4791
> URL: https://issues.apache.org/jira/browse/SOLR-4791
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 4.3
>Reporter: Jan Høydahl
>Assignee: Erick Erickson
> Fix For: 5.0, 4.4, 4.3.1
>
> Attachments: SOLR-4791.patch, SOLR-4791.patch, SOLR-4791.patch, 
> SOLR-4791-test.patch
>
>




[jira] [Commented] (LUCENE-4583) StraightBytesDocValuesField fails if bytes > 32k

2013-05-08 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652045#comment-13652045
 ] 

Michael McCandless commented on LUCENE-4583:


I think we should fix the limit in core, and then existing codecs should 
enforce their own limits at indexing time?

This way users that sometimes need to store a > 32 KB binary doc value can do 
so, with the right DVFormat.
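A hypothetical illustration of this suggestion (names and the exact limit value are assumptions, not Lucene's actual code): each docvalues format checks its own maximum when a value is added, failing fast at indexing time with a clear message instead of failing obscurely later.

```java
public class BinaryDocValuesLimitSketch {
    // This particular format's own cap; a different format could allow
    // arbitrarily large values and skip the check entirely.
    static final int FORMAT_MAX_LENGTH = 32766;

    // Called by the format at indexing time, before the value is written.
    public static void checkLength(String field, int valueLength) {
        if (valueLength > FORMAT_MAX_LENGTH) {
            throw new IllegalArgumentException(
                "DocValues field \"" + field + "\" is too large: length="
                + valueLength + ", this format's maximum is "
                + FORMAT_MAX_LENGTH
                + "; choose a format without this limit");
        }
    }
}
```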

> StraightBytesDocValuesField fails if bytes > 32k
> 
>
> Key: LUCENE-4583
> URL: https://issues.apache.org/jira/browse/LUCENE-4583
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.0, 4.1, 5.0
>Reporter: David Smiley
>Priority: Critical
> Fix For: 4.3
>
> Attachments: LUCENE-4583.patch, LUCENE-4583.patch
>
>




[jira] [Updated] (SOLR-4791) solr.xml sharedLib does not work in 4.3.0

2013-05-08 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated SOLR-4791:
--

Attachment: closeLoader.patch

Totally untested patch. I'll try it on my Windows machine in a few.

I know that Uwe knows a trick for Java 6 (at least a best effort we can do, 
which we should). But I still think we should have an assume() for Windows 
there, since it may not work on all JVMs.

> solr.xml sharedLib does not work in 4.3.0
> -
>
> Key: SOLR-4791
> URL: https://issues.apache.org/jira/browse/SOLR-4791
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 4.3
>Reporter: Jan Høydahl
>Assignee: Erick Erickson
> Fix For: 5.0, 4.4, 4.3.1
>
> Attachments: closeLoader.patch, SOLR-4791.patch, SOLR-4791.patch, 
> SOLR-4791.patch, SOLR-4791-test.patch
>
>




[jira] [Commented] (SOLR-4791) solr.xml sharedLib does not work in 4.3.0

2013-05-08 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652063#comment-13652063
 ] 

Robert Muir commented on SOLR-4791:
---

This works on my Windows box with trunk (where it fails before). I'll take care 
of this...

> solr.xml sharedLib does not work in 4.3.0
> -
>
> Key: SOLR-4791
> URL: https://issues.apache.org/jira/browse/SOLR-4791
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 4.3
>Reporter: Jan Høydahl
>Assignee: Erick Erickson
> Fix For: 5.0, 4.4, 4.3.1
>
> Attachments: closeLoader.patch, SOLR-4791.patch, SOLR-4791.patch, 
> SOLR-4791.patch, SOLR-4791-test.patch
>
>




[jira] [Commented] (SOLR-4799) SQLEntityProcessor for zipper join

2013-05-08 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652066#comment-13652066
 ] 

Shalin Shekhar Mangar commented on SOLR-4799:
-

James, I think it is time to do some of the tasks that you have outlined and 
I'm willing to do the work.

Mikhail, I'm happy to review and commit such a patch. I think it is a very nice 
improvement.

> SQLEntityProcessor for zipper join
> --
>
> Key: SOLR-4799
> URL: https://issues.apache.org/jira/browse/SOLR-4799
> Project: Solr
>  Issue Type: New Feature
>  Components: contrib - DataImportHandler
>Reporter: Mikhail Khludnev
>Priority: Minor
>  Labels: dih
>
> DIH is mostly considered a playground tool, and real usages end up with 
> SolrJ. I want to contribute a few improvements targeting DIH performance.
> This one provides a performant approach to joining SQL entities with minimal 
> memory, in contrast to 
> http://wiki.apache.org/solr/DataImportHandler#CachedSqlEntityProcessor
> The idea is:
> * the parent table is explicitly ordered by its PK in SQL
> * the children table is explicitly ordered by the parent_id FK in SQL
> * the children entity processor joins the ordered result sets with a 'zipper' algorithm.
> Do you think it's worth contributing to DIH?
> cc: [~goksron] [~jdyer]
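The 'zipper' idea above (both result sets already sorted on the join key, merged in one forward pass) can be sketched roughly like this. This is a standalone illustration with hypothetical names and int[] rows, not DIH code:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class ZipperJoinSketch {
    // parents: rows {pk}, ordered by pk.
    // children: rows {parentFk, value}, ordered by parentFk.
    // One forward pass attaches each parent's children using O(1) extra
    // memory, instead of caching the whole child table in memory.
    public static List<String> join(List<int[]> parents, List<int[]> children) {
        List<String> out = new ArrayList<String>();
        Iterator<int[]> childIt = children.iterator();
        int[] child = childIt.hasNext() ? childIt.next() : null;
        for (int[] parent : parents) {
            // skip any children whose FK precedes this parent's PK
            while (child != null && child[0] < parent[0]) {
                child = childIt.hasNext() ? childIt.next() : null;
            }
            StringBuilder row = new StringBuilder("parent=" + parent[0]);
            // consume the run of children matching this parent's PK
            while (child != null && child[0] == parent[0]) {
                row.append(" child=").append(child[1]);
                child = childIt.hasNext() ? childIt.next() : null;
            }
            out.add(row.toString());
        }
        return out;
    }
}
```

Because each cursor only ever moves forward, the memory footprint is independent of the child table size, which is the contrast with CachedSqlEntityProcessor drawn above.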




[jira] [Commented] (SOLR-4791) solr.xml sharedLib does not work in 4.3.0

2013-05-08 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652081#comment-13652081
 ] 

Erick Erickson commented on SOLR-4791:
--

Thanks, man. I was taking a wild shot in the dark, obviously missing.

> solr.xml sharedLib does not work in 4.3.0
> -
>
> Key: SOLR-4791
> URL: https://issues.apache.org/jira/browse/SOLR-4791
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 4.3
>Reporter: Jan Høydahl
>Assignee: Erick Erickson
> Fix For: 5.0, 4.4, 4.3.1
>
> Attachments: closeLoader.patch, SOLR-4791.patch, SOLR-4791.patch, 
> SOLR-4791.patch, SOLR-4791-test.patch
>
>




[jira] [Commented] (SOLR-4799) SQLEntityProcessor for zipper join

2013-05-08 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652088#comment-13652088
 ] 

Mikhail Khludnev commented on SOLR-4799:


Ok. James, I've got your point. Let me collect code and tests for submitting. 

> SQLEntityProcessor for zipper join
> --
>
> Key: SOLR-4799
> URL: https://issues.apache.org/jira/browse/SOLR-4799
> Project: Solr
>  Issue Type: New Feature
>  Components: contrib - DataImportHandler
>Reporter: Mikhail Khludnev
>Priority: Minor
>  Labels: dih
>




[jira] [Updated] (LUCENE-4986) NRT reader doesn't see changes after successful IW.tryDeleteDocument

2013-05-08 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-4986:
---

Attachment: LUCENE-4986.patch

Patch w/ fix, plus some cleanup in StandardDirectoryReader ... I think it's 
ready.

> NRT reader doesn't see changes after successful IW.tryDeleteDocument
> 
>
> Key: LUCENE-4986
> URL: https://issues.apache.org/jira/browse/LUCENE-4986
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 5.0, 4.4
>
> Attachments: LUCENE-4986.patch, LUCENE-4986.patch
>
>
> Reported by Reg on the java-user list, subject 
> "TrackingIndexWriter.tryDeleteDocument(IndexReader, int) vs 
> deleteDocuments(Query)":
> When IW.tryDeleteDocument succeeds, it marks the document as deleted in the 
> pending BitVector in ReadersAndLiveDocs, but then when the NRT reader checks 
> if it's still current by calling IW.nrtIsCurrent, we fail to catch changes to 
> the BitVector, resulting in the NRT reader thinking it's current and not 
> reopening.
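The failure mode described above can be illustrated with a minimal change-counter sketch (an illustration of the pattern only, not Lucene's actual internals): a reader is "current" only while the writer's change counter hasn't moved since the reader was opened, so a delete path that forgets to bump the counter leaves the reader thinking it is current.

```java
import java.util.concurrent.atomic.AtomicLong;

public class NrtCurrencySketch {
    // Bumped on every change the writer makes that a reader should see.
    private final AtomicLong changeCount = new AtomicLong();

    // A reader records this version when it is opened.
    public long openReaderVersion() {
        return changeCount.get();
    }

    public void deleteDocument() {
        // ... mark the doc deleted in the pending live-docs bits ...
        changeCount.incrementAndGet(); // the step the bug effectively skipped
    }

    // Analogous to nrtIsCurrent: without the increment above, this would
    // keep returning true after the delete and the reader would not reopen.
    public boolean isCurrent(long readerVersion) {
        return readerVersion == changeCount.get();
    }
}
```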




[jira] [Commented] (SOLR-4791) solr.xml sharedLib does not work in 4.3.0

2013-05-08 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652096#comment-13652096
 ] 

Robert Muir commented on SOLR-4791:
---

I committed the close()'ing (and a best-effort hack+assume). So I think we are 
now ok.

> solr.xml sharedLib does not work in 4.3.0
> -
>
> Key: SOLR-4791
> URL: https://issues.apache.org/jira/browse/SOLR-4791
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 4.3
>Reporter: Jan Høydahl
>Assignee: Erick Erickson
> Fix For: 5.0, 4.4, 4.3.1
>
> Attachments: closeLoader.patch, SOLR-4791.patch, SOLR-4791.patch, 
> SOLR-4791.patch, SOLR-4791-test.patch
>
>




[jira] [Commented] (LUCENE-4583) StraightBytesDocValuesField fails if bytes > 32k

2013-05-08 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652097#comment-13652097
 ] 

Shai Erera commented on LUCENE-4583:


bq. For the faceting use case, we have SortedSetDocValuesField

No, this is for one type of faceting, no hierarchies etc. It's also slower than 
the BINARY DV method ...

bq. BINARY datatype isnt designed for faceting.

Maybe, but that's the best we have for now.

> StraightBytesDocValuesField fails if bytes > 32k
> 
>
> Key: LUCENE-4583
> URL: https://issues.apache.org/jira/browse/LUCENE-4583
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.0, 4.1, 5.0
>Reporter: David Smiley
>Priority: Critical
> Fix For: 4.3
>
> Attachments: LUCENE-4583.patch, LUCENE-4583.patch
>
>




[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 449 - Still Failing!

2013-05-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/449/
Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch

Error Message:
There are still nodes recoverying - waited for 230 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 230 
seconds
at 
__randomizedtesting.SeedInfo.seed([6919CC37A34BDBE6:E8FF422FD414BBDA]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:173)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:131)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:126)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:512)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:146)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:815)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java

[jira] [Commented] (LUCENE-4583) StraightBytesDocValuesField fails if bytes > 32k

2013-05-08 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652105#comment-13652105
 ] 

Robert Muir commented on LUCENE-4583:
-

{quote}
No, this is for one type of faceting, no hierarchies etc. It's also slower than 
the BINARY DV method ...
{quote}

It's not inherently slower. I just didn't spend a month inlining vInt codes or 
writing custom codecs like you did for the other faceting method; instead it's 
the simplest thing that can possibly work right now.

I will call you guys out on this every single time you bring it up.

I'm -1 on bumping the limit.

> StraightBytesDocValuesField fails if bytes > 32k
> 
>
> Key: LUCENE-4583
> URL: https://issues.apache.org/jira/browse/LUCENE-4583
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.0, 4.1, 5.0
>Reporter: David Smiley
>Priority: Critical
> Fix For: 4.3
>
> Attachments: LUCENE-4583.patch, LUCENE-4583.patch
>
>




[jira] [Commented] (LUCENE-4583) StraightBytesDocValuesField fails if bytes > 32k

2013-05-08 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652108#comment-13652108
 ] 

David Smiley commented on LUCENE-4583:
--

I looked over the patch carefully and stepped through the relevant code in a 
debugging session, and I think it's good.

I don't see why this arbitrary limit was here. The use case where Barakat hit 
this involves storing values per document so that a custom ValueSource I wrote 
can examine them. It's for spatial multi-valued fields, and some businesses 
(docs) have a ton of locations out there (e.g. McDonald's). FWIW very few docs 
have such large values, and if it were to become slow I have ways to more 
cleverly examine a subset of the returned bytes. I think it's silly to force 
the app to write a Codec (even if it's trivial) to escape this arbitrary 
limit.



> StraightBytesDocValuesField fails if bytes > 32k
> 
>
> Key: LUCENE-4583
> URL: https://issues.apache.org/jira/browse/LUCENE-4583
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.0, 4.1, 5.0
>Reporter: David Smiley
>Priority: Critical
> Fix For: 4.3
>
> Attachments: LUCENE-4583.patch, LUCENE-4583.patch
>
>
> I didn't observe any limitations on the size of a bytes based DocValues field 
> value in the docs.  It appears that the limit is 32k, although I didn't get 
> any friendly error telling me that was the limit.  32k is kind of small IMO; 
> I suspect this limit is unintended and as such is a bug. The following 
> test fails:
> {code:java}
>   public void testBigDocValue() throws IOException {
> Directory dir = newDirectory();
> IndexWriter writer = new IndexWriter(dir, writerConfig(false));
> Document doc = new Document();
> BytesRef bytes = new BytesRef((4+4)*4097);//4096 works
> bytes.length = bytes.bytes.length;//byte data doesn't matter
> doc.add(new StraightBytesDocValuesField("dvField", bytes));
> writer.addDocument(doc);
> writer.commit();
> writer.close();
> DirectoryReader reader = DirectoryReader.open(dir);
> DocValues docValues = MultiDocValues.getDocValues(reader, "dvField");
> //FAILS IF BYTES IS BIG!
> docValues.getSource().getBytes(0, bytes);
> reader.close();
> dir.close();
>   }
> {code}




[JENKINS] Lucene-Solr-4.x-Linux (64bit/ibm-j9-jdk7) - Build # 5495 - Failure!

2013-05-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/5495/
Java: 64bit/ibm-j9-jdk7 

1 tests failed.
REGRESSION:  
org.apache.solr.client.solrj.embedded.SolrExampleJettyTest.testLukeHandler

Error Message:
IOException occured when talking to server at: 
https://127.0.0.1:54733/solr/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: https://127.0.0.1:54733/solr/collection1
at 
__randomizedtesting.SeedInfo.seed([555DBFC9398FEB3A:913DAE6BB2A8EC52]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:435)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
at 
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:117)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:116)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:102)
at 
org.apache.solr.client.solrj.SolrExampleTests.testLukeHandler(SolrExampleTests.java:770)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:88)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:613)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnor

MockDirWrapper always syncs

2013-05-08 Thread Shai Erera
Hi

Look at this code from MDW.sync:

if (true || LuceneTestCase.rarely(randomState) || delegate instanceof
NRTCachingDirectory) {
  // don't wear out our hardware so much in tests.
  delegate.sync(names);
}

Is the 'if (true)' intentional or left there by mistake? The comment
afterwards suggests it should not be there, but I wanted to confirm before
I remove it.

Shai
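If the `true ||` above is indeed leftover debugging code, the rarely-sync behavior the comment describes would presumably look like the sketch below. The names here (MockSync, rarely(), delegateNeedsSync) are illustrative stand-ins, not the real MockDirectoryWrapper/LuceneTestCase API:

```java
import java.util.Random;

// Sketch of the presumably intended MDW.sync behavior once the leftover
// "true ||" is removed: actually fsync only rarely, to spare the hardware
// in tests. MockSync, rarely() and delegateNeedsSync are illustrative
// stand-ins, not the real MockDirectoryWrapper/LuceneTestCase API.
public class MockSync {
    static final Random RANDOM = new Random(42); // fixed seed, like a test seed
    static int realSyncs = 0;

    // Mimics LuceneTestCase.rarely(): true roughly 10% of the time.
    static boolean rarely() {
        return RANDOM.nextInt(100) < 10;
    }

    static void sync(boolean delegateNeedsSync) {
        // don't wear out our hardware so much in tests
        if (rarely() || delegateNeedsSync) {
            realSyncs++; // stand-in for delegate.sync(names)
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            sync(false);
        }
        // With the current "if (true || ...)" this would always be 1000.
        System.out.println("real syncs out of 1000: " + realSyncs);
    }
}
```

With the `if (true || ...)` currently in MDW, every call falls through to `delegate.sync(names)`, which is exactly what the comment says the code is trying to avoid.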


[jira] [Commented] (SOLR-4801) "Can't find (or read) directory to add to classloader" at startup

2013-05-08 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652167#comment-13652167
 ] 

Shawn Heisey commented on SOLR-4801:


bq.  As far as I know we do not need the contrib items, however if I've 
incorrectly installed Solr I'd like to know how to fix it.

In my mind, your minimalist install approach is the right way to go.  I also 
take a minimalist approach to forking the example solrconfig.xml.  When forking 
the example schema.xml, my approach is semi-minimalist - the "primitive" 
fieldType entries are very useful to keep around.

bq. If one of these lib directories does not exist, solr should fail hard. This 
total/crap/ignored is an anti-feature.

As much as the little bit of profanity amuses me, I agree with Robert.  Config 
elements that reference invalid information or have typos should result in a 
launch failure.  For solrconfig.xml, that would result in that core not 
loading.  The same goes for errors in a  tag in solr.xml.  For parts of 
solr.xml outside of the  tags, it should result in Solr failing to start. 
 Unrelated note:  This is another indirect argument in favor of Solr no longer 
being a webapp.

[~janhoy], I hope that your proposed config example was just a quick brain dump 
and you don't really want "str" tags in solr.xml.  That would probably make it 
very hard to catch config errors at startup time.

On a larger XML topic, using generic tags (str, int, etc.) might make sense in 
parts of solrconfig.xml where they are used extensively, but I have sometimes 
wondered whether there would be fewer problems if we used more specific tags 
and fewer generic ones.  Making it through the transition period might be a 
nightmare, though.
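The fail-hard behavior being argued for could be sketched as below. StrictLibLoader and addToClassLoader are hypothetical names for illustration; the real SolrResourceLoader only logs the WARN shown in the issue and continues:

```java
import java.io.File;
import java.io.IOException;

// Hypothetical strict variant of the lib-directory handling: instead of
// logging "Can't find (or read) directory ..." and continuing, throw so
// that a config typo fails the core (or Solr) at startup.
public class StrictLibLoader {
    static void addToClassLoader(File instanceDir, String libPath) throws IOException {
        File resolved = new File(instanceDir, libPath);
        if (!resolved.isDirectory() || !resolved.canRead()) {
            throw new IOException("Can't find (or read) directory to add to classloader: "
                + libPath + " (resolved as: " + resolved.getAbsolutePath() + ")");
        }
        // ...real code would add the directory's JARs to the classloader here...
    }

    public static void main(String[] args) {
        try {
            addToClassLoader(new File("."), "total/crap/dir/ignored");
        } catch (IOException e) {
            // Under fail-hard semantics this would abort core loading.
            System.out.println("startup failure: " + e.getMessage());
        }
    }
}
```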


> "Can't find (or read) directory to add to classloader" at startup
> -
>
> Key: SOLR-4801
> URL: https://issues.apache.org/jira/browse/SOLR-4801
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.3
> Environment: Windows 2008 R2
> Java JRE 1.6.0_45 x64
> Jetty 8.1.10
> Solr 4.3.0 + lib\etc files
>Reporter: Leith Tussing
>Priority: Minor
>
> I upgraded from 4.2.1 by replacing the war file and adding the lib\etc files 
> from the example section into my Jetty installation.  Now every time I start 
> the services I get WARN messages I've never gotten before during startup.  
> Additionally there appears to be profanity in the last WARN message as it 
> looks for a "/total/crap/dir/ignored" after failing the other items.
> {code:xml}
> WARN  - 2013-05-08 08:46:06.710; org.apache.solr.core.SolrResourceLoader; 
> Can't find (or read) directory to add to classloader: 
> ../../../contrib/extraction/lib (resolved as: 
> C:\Jetty\solr\instance1\..\..\..\contrib\extraction\lib).
> WARN  - 2013-05-08 08:46:06.714; org.apache.solr.core.SolrResourceLoader; 
> Can't find (or read) directory to add to classloader: ../../../dist/ 
> (resolved as: C:\Jetty\solr\instance1\..\..\..\dist).
> WARN  - 2013-05-08 08:46:06.715; org.apache.solr.core.SolrResourceLoader; 
> Can't find (or read) directory to add to classloader: 
> ../../../contrib/clustering/lib/ (resolved as: 
> C:\Jetty\solr\instance1\..\..\..\contrib\clustering\lib).
> WARN  - 2013-05-08 08:46:06.716; org.apache.solr.core.SolrResourceLoader; 
> Can't find (or read) directory to add to classloader: ../../../dist/ 
> (resolved as: C:\Jetty\solr\instance1\..\..\..\dist).
> WARN  - 2013-05-08 08:46:06.716; org.apache.solr.core.SolrResourceLoader; 
> Can't find (or read) directory to add to classloader: 
> ../../../contrib/langid/lib/ (resolved as: 
> C:\Jetty\solr\instance1\..\..\..\contrib\langid\lib).
> WARN  - 2013-05-08 08:46:06.717; org.apache.solr.core.SolrResourceLoader; 
> Can't find (or read) directory to add to classloader: ../../../dist/ 
> (resolved as: C:\Jetty\solr\instance1\..\..\..\dist).
> WARN  - 2013-05-08 08:46:06.717; org.apache.solr.core.SolrResourceLoader; 
> Can't find (or read) directory to add to classloader: 
> ../../../contrib/velocity/lib (resolved as: 
> C:\Jetty\solr\instance1\..\..\..\contrib\velocity\lib).
> WARN  - 2013-05-08 08:46:06.718; org.apache.solr.core.SolrResourceLoader; 
> Can't find (or read) directory to add to classloader: ../../../dist/ 
> (resolved as: C:\Jetty\solr\instance1\..\..\..\dist).
> WARN  - 2013-05-08 08:46:06.719; org.apache.solr.core.SolrResourceLoader; 
> Can't find (or read) directory to add to classloader: /total/crap/dir/ignored 
> (resolved as: C:\Jetty\solr\instance1\total\crap\dir\ignored).
> {code}



[JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 38460 - Failure!

2013-05-08 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/38460/

1 tests failed.
REGRESSION:  
org.apache.lucene.index.TestIndexWriterOnDiskFull.testImmediateDiskFull

Error Message:
did not hit disk full

Stack Trace:
java.lang.AssertionError: did not hit disk full
at 
__randomizedtesting.SeedInfo.seed([E79CBF17F77C5418:76DA2D7B8B7DBADC]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.lucene.index.TestIndexWriterOnDiskFull.testImmediateDiskFull(TestIndexWriterOnDiskFull.java:537)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:722)




Build Log:
[...truncated 945 lines...]
[junit4:junit4] Suite: org.apache.lucene.index.TestIndexWriterOnDiskFull
[junit4:junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestIndexWriterOnDiskFull -Dtests.method=testImmediateDiskFull 
-Dtests.seed=E79CBF17F77C5418 -Dtests.slow=true -Dtests.locale=sr_RS_#Latn 
-Dtests.timezone=America/Montreal -Dtests.file.encoding=UTF-8
[junit4:junit4] FAILURE 0.03s J6 | 
TestIndexWriterOnDiskFull.testImmediateDiskFull <<<
[junit4:junit4]> Throwable #1: java.lang.AssertionError: did not hit disk 
full
[junit4:junit4]>at 
__randomizedte

[jira] [Commented] (SOLR-4791) solr.xml sharedLib does not work in 4.3.0

2013-05-08 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652188#comment-13652188
 ] 

Uwe Schindler commented on SOLR-4791:
-

Thanks for referring to my Schindler-hack in the comment! I did this the same 
way in forbidden-apis, so the hack comes from there: 
[https://code.google.com/p/forbidden-apis/source/browse/trunk/src/main/java/de/thetaphi/forbiddenapis/AbstractCheckMojo.java#229]

So I told Robert privately (I was on the phone) how to handle URLClassLoader 
without reflection. The addition of the Closeable interface to URLClassLoader 
is one of the "important" Java 7 bug fixes, and it is listed in Java 7's 
release notes (it was added so that e.g. Tomcat can remove a webapp's JAR 
files when redeploying it on a Windows container...).
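Since Java 7 made URLClassLoader implement Closeable, the loader can be released with a plain instanceof check instead of reflection, along the lines of this sketch (closeIfPossible is an illustrative name; compiled against Java 6 the instanceof test is simply false at runtime):

```java
import java.io.Closeable;
import java.io.IOException;
import java.net.URL;
import java.net.URLClassLoader;

// Sketch of the Closeable-based cleanup described above: on Java 7+ a
// URLClassLoader implements Closeable, so its open JAR files can be
// released without any reflection; on Java 6 the instanceof test is
// false and the loader is left alone.
public class LoaderCleanup {
    static boolean closeIfPossible(ClassLoader loader) {
        if (loader instanceof Closeable) {
            try {
                ((Closeable) loader).close();
                return true;
            } catch (IOException e) {
                // best effort: the JARs stay locked until GC
            }
        }
        return false;
    }

    public static void main(String[] args) {
        URLClassLoader loader = new URLClassLoader(new URL[0]);
        System.out.println("closed: " + closeIfPossible(loader));
    }
}
```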

> solr.xml sharedLib does not work in 4.3.0
> -
>
> Key: SOLR-4791
> URL: https://issues.apache.org/jira/browse/SOLR-4791
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 4.3
>Reporter: Jan Høydahl
>Assignee: Erick Erickson
> Fix For: 5.0, 4.4, 4.3.1
>
> Attachments: closeLoader.patch, SOLR-4791.patch, SOLR-4791.patch, 
> SOLR-4791.patch, SOLR-4791-test.patch
>
>
> The sharedLib attribute on {{}} tag in solr.xml stopped working in 4.3.
> Using old-style solr.xml with sharedLib defined on solr tag. Solr does not 
> load any of them. Simply swapping out solr.war with the 4.2.1 one, brings 
> sharedLib loading back.




Re: [jira] [Commented] (SOLR-4801) "Can't find (or read) directory to add to classloader" at startup

2013-05-08 Thread Erick Erickson
bq: But the problem is not total/crap/ignored vs
/non/existent/dir/yields/warning

bq: This makes no difference

Agreed, that was just a side comment.

On Wed, May 8, 2013 at 12:03 PM, Robert Muir  wrote:
> But the problem is not total/crap/ignored vs
> /non/existent/dir/yields/warning
>
> This makes no difference.
>
> My complaint is about loose error checking.
>
>
> On Wed, May 8, 2013 at 11:51 AM, Erick Erickson 
> wrote:
>>
>> FWIW, total/crap/ignored is already gone in 4.3.
>>
>> On Wed, May 8, 2013 at 10:59 AM, Robert Muir (JIRA) 
>> wrote:
>> >
>> > [
>> > https://issues.apache.org/jira/browse/SOLR-4801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651958#comment-13651958
>> > ]
>> >
>> > Robert Muir commented on SOLR-4801:
>> > ---
>> >
>> > Guys I seriously don't know if I should laugh or cry.
>> >
>> > If one of these lib directories does not exist, solr should fail hard.
>> > This total/crap/ignored is an anti-feature.
>> >
>> >> "Can't find (or read) directory to add to classloader" at startup
>> >> -
>> >>
>> >> Key: SOLR-4801
>> >> URL: https://issues.apache.org/jira/browse/SOLR-4801
>> >> Project: Solr
>> >>  Issue Type: Bug
>> >>Affects Versions: 4.3
>> >> Environment: Windows 2008 R2
>> >> Java JRE 1.6.0_45 x64
>> >> Jetty 8.1.10
>> >> Solr 4.3.0 + lib\etc files
>> >>Reporter: Leith Tussing
>> >>Priority: Minor
>> >>
>> >> I upgraded from 4.2.1 by replacing the war file and adding the lib\etc
>> >> files from the example section into my Jetty installation.  Now every 
>> >> time I
>> >> start the services I get WARN messages I've never gotten before during
>> >> startup.
>> >> Additionally there appears to be profanity in the last WARN message as
>> >> it looks for a "/total/crap/dir/ignored" after failing the other items.
>> >> {code:xml}
>> >> WARN  - 2013-05-08 08:46:06.710;
>> >> org.apache.solr.core.SolrResourceLoader; Can't find (or read) directory to
>> >> add to classloader: ../../../contrib/extraction/lib (resolved as:
>> >> C:\Jetty\solr\instance1\..\..\..\contrib\extraction\lib).
>> >> WARN  - 2013-05-08 08:46:06.714;
>> >> org.apache.solr.core.SolrResourceLoader; Can't find (or read) directory to
>> >> add to classloader: ../../../dist/ (resolved as:
>> >> C:\Jetty\solr\instance1\..\..\..\dist).
>> >> WARN  - 2013-05-08 08:46:06.715;
>> >> org.apache.solr.core.SolrResourceLoader; Can't find (or read) directory to
>> >> add to classloader: ../../../contrib/clustering/lib/ (resolved as:
>> >> C:\Jetty\solr\instance1\..\..\..\contrib\clustering\lib).
>> >> WARN  - 2013-05-08 08:46:06.716;
>> >> org.apache.solr.core.SolrResourceLoader; Can't find (or read) directory to
>> >> add to classloader: ../../../dist/ (resolved as:
>> >> C:\Jetty\solr\instance1\..\..\..\dist).
>> >> WARN  - 2013-05-08 08:46:06.716;
>> >> org.apache.solr.core.SolrResourceLoader; Can't find (or read) directory to
>> >> add to classloader: ../../../contrib/langid/lib/ (resolved as:
>> >> C:\Jetty\solr\instance1\..\..\..\contrib\langid\lib).
>> >> WARN  - 2013-05-08 08:46:06.717;
>> >> org.apache.solr.core.SolrResourceLoader; Can't find (or read) directory to
>> >> add to classloader: ../../../dist/ (resolved as:
>> >> C:\Jetty\solr\instance1\..\..\..\dist).
>> >> WARN  - 2013-05-08 08:46:06.717;
>> >> org.apache.solr.core.SolrResourceLoader; Can't find (or read) directory to
>> >> add to classloader: ../../../contrib/velocity/lib (resolved as:
>> >> C:\Jetty\solr\instance1\..\..\..\contrib\velocity\lib).
>> >> WARN  - 2013-05-08 08:46:06.718;
>> >> org.apache.solr.core.SolrResourceLoader; Can't find (or read) directory to
>> >> add to classloader: ../../../dist/ (resolved as:
>> >> C:\Jetty\solr\instance1\..\..\..\dist).
>> >> WARN  - 2013-05-08 08:46:06.719;
>> >> org.apache.solr.core.SolrResourceLoader; Can't find (or read) directory to
>> >> add to classloader: /total/crap/dir/ignored (resolved as:
>> >> C:\Jetty\solr\instance1\total\crap\dir\ignored).
>> >> {code}
>> >
>> > --
>> > This message is automatically generated by JIRA.
>> > If you think it was sent incorrectly, please contact your JIRA
>> > administrators
>> > For more information on JIRA, see:
>> > http://www.atlassian.com/software/jira
>> >
>> > -
>> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> > For additional commands, e-mail: dev-h...@lucene.apache.org
>> >
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>


Re: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 38460 - Failure!

2013-05-08 Thread Steve Rowe
This seed reproduces for me on trunk and branch_4x, on OS X w/Apple 1.6 
(branch_4x) and Oracle 1.7 (trunk) JVMs. - Steve
 
On May 8, 2013, at 11:21 AM, buil...@flonkings.com wrote:

> Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/38460/
> 
> 1 tests failed.
> REGRESSION:  
> org.apache.lucene.index.TestIndexWriterOnDiskFull.testImmediateDiskFull
> 
> Error Message:
> did not hit disk full
> 
> Stack Trace:
> java.lang.AssertionError: did not hit disk full
>   at 
> __randomizedtesting.SeedInfo.seed([E79CBF17F77C5418:76DA2D7B8B7DBADC]:0)
>   at org.junit.Assert.fail(Assert.java:93)
>   at 
> org.apache.lucene.index.TestIndexWriterOnDiskFull.testImmediateDiskFull(TestIndexWriterOnDiskFull.java:537)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
>   at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
>   at 
> org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
>   at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
>   at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
>   at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
>   at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
>   at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
>   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
>   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
>   at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
>   at 
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
>   at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
>   at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
>   at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
>   at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
>   at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
>   at 
> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
>   at java.lang.Thread.run(Thread.java:722)
> 
> 
> 
> 
> Build Log:
> [...truncated 945 lines...]
> [junit4:junit4] Suite: org.apache.lucene.index.TestIndexWriterOnDiskFull
> [junit4:junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestIndexWriterOnDiskFull -Dtests.method=testImmediateDiskFull 
> -Dtests.seed=E79CBF17F77C5418 -Dtests.slow=true -Dtests

[jira] [Comment Edited] (SOLR-4791) solr.xml sharedLib does not work in 4.3.0

2013-05-08 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652188#comment-13652188
 ] 

Uwe Schindler edited comment on SOLR-4791 at 5/8/13 6:37 PM:
-

Thanks for referring to my Schindler-hack in the comment! I did this the same 
way in forbidden-apis, so the hack comes from there: 
[https://code.google.com/p/forbidden-apis/source/browse/trunk/src/main/java/de/thetaphi/forbiddenapis/AbstractCheckMojo.java?r=153#229]

So I told this Robert privately (was on phone) how to handle URLClassLoader 
without reflecting methods. The addition of the Closeable interface to 
URLClassLoader is one of the "important" Java 7 bug fixes, which is also listed 
in Java 7's release notes (which was added to make e.g. Tomcat be able to 
remove the JAR files of a webapp when redeploying them on a windows 
container...).

  was (Author: thetaphi):
Thanks for referring to my Schindler-hack in the comment! I did this the 
same way in forbidden-apis, so the hack comes from there: 
[https://code.google.com/p/forbidden-apis/source/browse/trunk/src/main/java/de/thetaphi/forbiddenapis/AbstractCheckMojo.java#229]

So I told this Robert privately (was on phone) how to handle URLClassLoader 
without reflecting methods. The addition of the Closeable interface to 
URLClassLoader is one of the "important" Java 7 bug fixes, which is also listed 
in Java 7's release notes (which was added to make e.g. Tomcat be able to 
remove the JAR files of a webapp when redeploying them on a windows 
container...).
  
> solr.xml sharedLib does not work in 4.3.0
> -
>
> Key: SOLR-4791
> URL: https://issues.apache.org/jira/browse/SOLR-4791
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 4.3
>Reporter: Jan Høydahl
>Assignee: Erick Erickson
> Fix For: 5.0, 4.4, 4.3.1
>
> Attachments: closeLoader.patch, SOLR-4791.patch, SOLR-4791.patch, 
> SOLR-4791.patch, SOLR-4791-test.patch
>
>
> The sharedLib attribute on {{}} tag in solr.xml stopped working in 4.3.
> Using old-style solr.xml with sharedLib defined on solr tag. Solr does not 
> load any of them. Simply swapping out solr.war with the 4.2.1 one, brings 
> sharedLib loading back.




[JENKINS] Lucene-Solr-4.x-MacOSX (64bit/jdk1.6.0) - Build # 436 - Still Failing!

2013-05-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-MacOSX/436/
Java: 64bit/jdk1.6.0 -XX:+UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
ERROR: SolrIndexSearcher opens=36 closes=35

Stack Trace:
java.lang.AssertionError: ERROR: SolrIndexSearcher opens=36 closes=35
at __randomizedtesting.SeedInfo.seed([F8C42785E683BA1B]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.apache.solr.SolrTestCaseJ4.endTrackingSearchers(SolrTestCaseJ4.java:252)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:101)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:700)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:680)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 1) Thread[id=284, name=searcherExecutor-194-thread-1, state=WAITING, group=TGRP-TestLazyCores] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:156) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1987) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:399) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:957) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917) at java.lang.Thread.run(Thread.java:680)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 
   1) Thread[id=284, name=searcherExecutor-194-thread-1, state=WAITING, group=TGRP-TestLazyCores]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1987)
at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:399)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:957)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
at java.lang.Thread.run(Thread.java:680)
at __randomizedtesting.SeedInfo.seed([F8C42785E683BA1B]:0)
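
(For context: the leaked thread above is an idle executor worker parked in LinkedBlockingQueue.take(). The minimal standalone sketch below, which is illustrative only and not Solr's code, shows the shutdown step whose absence leaves a worker thread in exactly this WAITING state.)

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ExecutorLeakSketch {
    // Submit one no-op task, then shut the pool down. With shutdown() the
    // pool terminates cleanly; without it, the worker thread sits WAITING
    // in LinkedBlockingQueue.take() forever and a thread-leak checker
    // (like the one in the failure above) will flag it.
    static boolean runAndShutdown() throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        pool.submit(() -> {});
        pool.shutdown();   // the step a leaking suite is missing
        return pool.awaitTermination(5, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("terminated cleanly: " + runAndShutdown());
    }
}
```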


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
There are still zombie threads that couldn't be terminated:
   1) Thread[id=284, name=searcherExecutor-194-thread-1, state=WAITING, group=TGRP-TestLazyCores] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:1

[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_21) - Build # 5557 - Failure!

2013-05-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/5557/
Java: 32bit/jdk1.7.0_21 -client -XX:+UseSerialGC

1 tests failed.
REGRESSION:  org.apache.solr.handler.TestReplicationHandler.doTestStressReplication

Error Message:
timed out waiting for collection1 startAt time to exceed: Thu May 09 03:46:53 KST 2013

Stack Trace:
java.lang.AssertionError: timed out waiting for collection1 startAt time to exceed: Thu May 09 03:46:53 KST 2013
at __randomizedtesting.SeedInfo.seed([F5C23E803E5F39:DB5EC2F88516368A]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.apache.solr.handler.TestReplicationHandler.watchCoreStartAt(TestReplicationHandler.java:1473)
at org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:777)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)

[jira] [Updated] (SOLR-4797) Sub shards have wrong hash range in cluster state except when using PlainIdRouter

2013-05-08 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-4797:


Attachment: SOLR-4797.patch

Changes:
# Partition the range with the correct router
# Index docs with composite ids

> Sub shards have wrong hash range in cluster state except when using 
> PlainIdRouter
> -
>
> Key: SOLR-4797
> URL: https://issues.apache.org/jira/browse/SOLR-4797
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.3
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 4.4
>
> Attachments: SOLR-4797.patch
>
>
> The overseer collection processor always uses PlainIdRouter to partition the 
> hash range. However, some router implementations (most notably 
> CompositeIdRouter) can provide a different implementation to partition hash 
> ranges. So there can be a mismatch between the hash ranges which are 
> persisted in clusterstate.xml for sub shards and the actual index which is 
> split using the collection's configured router impl.
> This bug does not affect collections using PlainIdRouter.
> The overseer collection processor should always use the collection's 
> configured router to partition ranges.
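
The mismatch described above can be sketched in isolation. The sketch below uses hypothetical names (this is not Solr's actual DocRouter/CompositeIdRouter math, just an illustration of two router styles disagreeing on a split point, assuming the composite style aligns boundaries to an id-prefix bit layout):

```java
public class RangeSplitSketch {
    // "Plain" router style: split the hash range at the arithmetic midpoint.
    static long plainSplitPoint(long min, long max) {
        return min + (max - min) / 2;
    }

    // A composite-id style router may instead align the split point, e.g.
    // by zeroing the low 16 bits so boundaries respect an id-prefix layout.
    // (Illustrative only; not Solr's actual CompositeIdRouter logic.)
    static long compositeSplitPoint(long min, long max) {
        return plainSplitPoint(min, max) & ~0xFFFFL;
    }

    public static void main(String[] args) {
        long min = 0, max = 0x7FFFFFFFL;
        // The two routers disagree, so partitioning with the wrong one
        // persists sub-shard ranges that do not match the split index.
        System.out.println("plain split at:     " + plainSplitPoint(min, max));
        System.out.println("composite split at: " + compositeSplitPoint(min, max));
    }
}
```

This is why the overseer must delegate to the collection's configured router rather than hard-coding one partitioning scheme.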

--



Re: MockDirWrapper always syncs

2013-05-08 Thread Simon Willnauer
Seems unintentional... +1 to remove it.

On Wed, May 8, 2013 at 8:16 PM, Shai Erera  wrote:
> Hi
>
> Look at this code from MDW.sync:
>
> if (true || LuceneTestCase.rarely(randomState) || delegate instanceof
> NRTCachingDirectory) {
>   // don't wear out our hardware so much in tests.
>   delegate.sync(names);
> }
>
> Is the 'if (true)' intentional or left there by mistake? The comment
> afterwards suggests it should not be there, but I wanted to confirm before I
> remove it.
>
> Shai




Re: MockDirWrapper always syncs

2013-05-08 Thread Robert Muir
Originally the if (true) did not exist.

The problem is (apart from a system crash, where it matters to any directory)
that NRTCachingDirectory's sync() actually causes stuff to go to disk;
otherwise it might just be sitting in RAM.

However, we then added RateLimitingDirectory, so now you have the possibility
of RateLimitingDirectory(NRTCachingDirectory(...)) and other nested structures.
So it's not easily possible to only sync sometimes without breaking things.

On Wed, May 8, 2013 at 2:16 PM, Shai Erera  wrote:

> Hi
>
> Look at this code from MDW.sync:
>
> if (true || LuceneTestCase.rarely(randomState) || delegate instanceof
> NRTCachingDirectory) {
>   // don't wear out our hardware so much in tests.
>   delegate.sync(names);
> }
>
> Is the 'if (true)' intentional or left there by mistake? The comment
> afterwards suggests it should not be there, but I wanted to confirm before
> I remove it.
>
> Shai
>
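
The short-circuit behavior under discussion is easy to see in isolation. A tiny standalone sketch (names are illustrative stand-ins, not the MockDirectoryWrapper code itself):

```java
public class ShortCircuitSketch {
    static int rarelyCalls = 0;

    // Stand-in for LuceneTestCase.rarely(randomState); counts invocations.
    static boolean rarely() {
        rarelyCalls++;
        return false;
    }

    // Same shape as the condition in MDW.sync: because of the leading
    // `true ||`, Java short-circuits and never evaluates the other operands,
    // so the delegate sync always runs.
    static boolean shouldSync() {
        return true || rarely();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            shouldSync();
        }
        // rarely() was never called: the randomness is effectively disabled.
        System.out.println("rarely() evaluated " + rarelyCalls + " times");
    }
}
```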

