[jira] [Commented] (SOLR-4511) Repeater doesn't return correct index version to slaves

2013-03-01 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-4511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590319#comment-13590319
 ] 

Raúl Grande commented on SOLR-4511:
---

Thank you, I will try to install the patch asap. If I find any issues I will 
let you know.

 Repeater doesn't return correct index version to slaves
 ---

 Key: SOLR-4511
 URL: https://issues.apache.org/jira/browse/SOLR-4511
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Affects Versions: 4.1
Reporter: Raúl Grande
Assignee: Mark Miller
 Fix For: 4.2, 5.0

 Attachments: o8uzad.jpg, SOLR-4511.patch


 Related to SOLR-4471. I have a master-repeater-2slaves architecture. The 
 replication between master and repeater is working fine, but the slaves aren't 
 able to replicate because their master (the repeater node) is returning an old 
 index version, even though the admin UI on the repeater shows the correct 
 version.
 When I request http://localhost:17045/solr/replication?command=indexversion 
 the response contains <long name="generation">29037</long> when it should be 
 29042.
 If I restart the repeater node this URL returns the correct index version, 
 but after a while it fails again.
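 For reference, the indexversion response quoted above can be checked 
 programmatically. A minimal sketch, assuming the XML shape shown in this 
 report and using only JDK classes (the sample values are the ones reported):

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class ReplicationResponse {
    // Extract a named <long> value (e.g. "generation" or "indexversion")
    // from a replication-handler XML response body.
    public static long longValue(String xml, String name) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        NodeList longs = doc.getElementsByTagName("long");
        for (int i = 0; i < longs.getLength(); i++) {
            Element e = (Element) longs.item(i);
            if (name.equals(e.getAttribute("name"))) {
                return Long.parseLong(e.getTextContent().trim());
            }
        }
        throw new IllegalArgumentException("no <long name=\"" + name + "\"> in response");
    }

    public static void main(String[] args) throws Exception {
        // Shape of the stale response body described above.
        String xml = "<response><long name=\"generation\">29037</long></response>";
        System.out.println(longValue(xml, "generation")); // prints 29037
    }
}
```

 Polling this value and comparing it with the generation shown in the master's 
 admin UI would make it easy to spot the moment the repeater starts serving 
 the stale number.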

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0-ea-b65) - Build # 4529 - Still Failing!

2013-03-01 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/4529/
Java: 32bit/jdk1.8.0-ea-b65 -server -XX:+UseG1GC

1 tests failed.
REGRESSION:  org.apache.lucene.TestExternalCodecs.testPerFieldCodec

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([FDFA7D9384D827BE:51DFE29F7C043144]:0)
at 
org.apache.lucene.index.FreqProxTermsWriterPerField.flush(FreqProxTermsWriterPerField.java:452)
at 
org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:85)
at org.apache.lucene.index.TermsHash.flush(TermsHash.java:116)
at org.apache.lucene.index.DocInverter.flush(DocInverter.java:53)
at 
org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:81)
at 
org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:494)
at 
org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:422)
at 
org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:559)
at 
org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:2638)
at 
org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2782)
at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:2762)
at 
org.apache.lucene.TestExternalCodecs.testPerFieldCodec(TestExternalCodecs.java:84)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:474)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)

[jira] [Commented] (SOLR-4517) make FieldType.properties protected

2013-03-01 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590327#comment-13590327
 ] 

Commit Tag Bot commented on SOLR-4517:
--

[trunk commit] Robert Muir
http://svn.apache.org/viewvc?view=revision&revision=1451521

SOLR-4517: make FieldType.properties protected for subclassing


 make FieldType.properties protected
 ---

 Key: SOLR-4517
 URL: https://issues.apache.org/jira/browse/SOLR-4517
 Project: Solr
  Issue Type: Improvement
  Components: Schema and Analysis
Reporter: Robert Muir
 Attachments: SOLR-4517.patch


 Currently it's not possible to make a field type plugin via the normal lib/ 
 mechanism that hides field type impl details (e.g. you just want a no-arg 
 IDFieldType). 
 This is because you can't do sheisty package-private stuff in a different 
 classloader without extra sheisty reflection.
 So I think package-private access is not very good for things intended to be 
 plugins; otherwise you can only make a custom war...
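 To illustrate the point, a minimal sketch; FieldTypeStub and IdFieldType are 
 hypothetical stand-ins (not Solr's actual classes) showing what a protected 
 field allows a subclass to do that package-private access would forbid from 
 another package or classloader:

```java
// Hypothetical stand-in for org.apache.solr.schema.FieldType, simplified.
class FieldTypeStub {
    // If this were package-private (no modifier), a plugin subclass loaded
    // from another package/classloader could not touch it without reflection.
    protected int properties;
}

// A no-arg field type in the spirit of the IDFieldType mentioned above.
class IdFieldType extends FieldTypeStub {
    static final int OMIT_NORMS = 1 << 4; // illustrative flag value, not Solr's

    IdFieldType() {
        properties |= OMIT_NORMS; // legal precisely because 'properties' is protected
    }

    int props() {
        return properties;
    }
}
```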




[jira] [Commented] (SOLR-4517) make FieldType.properties protected

2013-03-01 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590339#comment-13590339
 ] 

Commit Tag Bot commented on SOLR-4517:
--

[branch_4x commit] Robert Muir
http://svn.apache.org/viewvc?view=revision&revision=1451524

SOLR-4517: make FieldType.properties protected for subclassing





[jira] [Created] (SOLR-4520) clean up war file/solr-core dependencies

2013-03-01 Thread Robert Muir (JIRA)
Robert Muir created SOLR-4520:
-

 Summary: clean up war file/solr-core dependencies
 Key: SOLR-4520
 URL: https://issues.apache.org/jira/browse/SOLR-4520
 Project: Solr
  Issue Type: Improvement
  Components: Build
Reporter: Robert Muir


Spinoff from SOLR-3843.

We should clean up the dependencies and tests here, so that solr-core.jar does 
not depend on things it really doesn't depend on.

Instead the webapp can depend on those. It would also be good to move the 
example tests to something like webapp/, so solr-core.jar isn't depending on 
analyzers it doesn't really need just because some tests use configs with 
them, and so on.





[jira] [Commented] (SOLR-4506) [solr4.0.0] many index.{date} dir in replication node

2013-03-01 Thread zhuojunjian (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590369#comment-13590369
 ] 

zhuojunjian commented on SOLR-4506:
---

Hi,
we have reproduced the issue today:
Step 1: kill one replica node (node A) and keep it down.
Step 2: import a lot of data into the SolrCloud cluster so that node A's 
leader creates many new index files.
Step 3: bring node A back up; it starts downloading files from its leader.
Step 4: before node A finishes the download, kill node A again.
Step 5: bring node A back up again; there are now two index directories under 
../data/, and if we keep repeating steps 3-4 the number of index directories 
keeps increasing.

I think it may be a bug. Do you have any idea about that? 
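To see how many leftover copies have accumulated, the index.{date} directories 
can be listed with plain NIO. A sketch (the data-directory path is whatever 
your core uses; note that in this setup the live index may itself be one of 
the index.{date} directories, referenced from index.properties, so this 
listing is diagnostic only):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class IndexDirScan {
    // List directories under dataDir whose names start with "index." --
    // candidates for the leftover index.{date} copies described above.
    public static List<String> indexDirs(Path dataDir) throws IOException {
        List<String> result = new ArrayList<>();
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(dataDir, "index.*")) {
            for (Path p : stream) {
                if (Files.isDirectory(p)) {
                    result.add(p.getFileName().toString());
                }
            }
        }
        return result;
    }
}
```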

 [solr4.0.0] many index.{date} dir in replication node 
 --

 Key: SOLR-4506
 URL: https://issues.apache.org/jira/browse/SOLR-4506
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0
 Environment: the solr4.0 runs on suse11.
 mem:32G
 cpu:16 cores
Reporter: zhuojunjian
 Fix For: 4.0

   Original Estimate: 12h
  Remaining Estimate: 12h

 In our test we used the SolrCloud feature in Solr 4.0 (version detail: 
 4.0.0.2012.10.06.03.04.33).
 The SolrCloud configuration is 3 shards with 2 replicas per shard.
 We found more than 25 directories named index.{date} in one replica node 
 belonging to shard 3. 
 For example:
 index.2013021725864  index.20130218012211880  index.20130218015714713  
 index.20130218023101958  index.20130218030424083  tlog
 index.20130218005648324  index.20130218012751078  index.20130218020141293  
 The issue looks like SOLR-1781, but that was fixed in 4.0-BETA and 5.0. 
 Is it also fixed in Solr 4.0? If so, why do we see the issue again?
 What can I do?




[jira] [Resolved] (SOLR-3843) Add lucene-codecs to Solr libs?

2013-03-01 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved SOLR-3843.
---

Resolution: Fixed

I opened SOLR-4520 to clean up the dependencies.

 Add lucene-codecs to Solr libs?
 ---

 Key: SOLR-3843
 URL: https://issues.apache.org/jira/browse/SOLR-3843
 Project: Solr
  Issue Type: Wish
Affects Versions: 4.0
Reporter: Adrien Grand
Assignee: Robert Muir
Priority: Critical
 Fix For: 4.2, 5.0

 Attachments: SOLR-3843.patch, SOLR-3843.patch, SOLR-3843.patch


 Solr gives its users the ability to select the postings format to use on a 
 per-field basis, but only Lucene40PostingsFormat is available by default 
 (unless users add lucene-codecs to the Solr lib directory). Maybe we should 
 add lucene-codecs to the Solr libs (I mean in the WAR file) so that people can 
 try our non-default postings formats with minimum effort?
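 For context, once lucene-codecs is on the classpath, a non-default postings 
 format is selected per field type in schema.xml. A hedged fragment (the field 
 type name is made up; "Memory" is one of the formats shipped in the 
 lucene-codecs module, and solrconfig.xml needs the schema-aware codec factory 
 if it is not already configured):

```xml
<!-- solrconfig.xml: let the schema drive per-field codec selection -->
<codecFactory class="solr.SchemaCodecFactory"/>

<!-- schema.xml: try an alternative postings format on one field type -->
<fieldType name="string_memory" class="solr.StrField" postingsFormat="Memory"/>
```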




[jira] [Commented] (SOLR-3843) Add lucene-codecs to Solr libs?

2013-03-01 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590379#comment-13590379
 ] 

Commit Tag Bot commented on SOLR-3843:
--

[trunk commit] Robert Muir
http://svn.apache.org/viewvc?view=revision&revision=1451542

SOLR-3843: add lucene-codecs.jar





[jira] [Commented] (SOLR-4465) Configurable Collectors

2013-03-01 Thread Dan Rosher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590382#comment-13590382
 ] 

Dan Rosher commented on SOLR-4465:
--

Looking at the patch, I think that default needs to be in the solrconfig, 
otherwise it would result in an NPE. Perhaps replace default with new 
DefaultCollectorFactory(...)? Also, if the user requests a collector that 
doesn't exist, this results in an NPE too. Would it be better to throw an 
exception in this case? The other option is to fall back to a default, but 
that would give unexpected results. 

Additionally, since the collector is free to alter results between requests, I 
think it should be used to create the QueryResultKey object for caching 
docSets, otherwise you're going to get unexpected results. Perhaps 
CollectorFactory should be an interface with signatures for getCollector, 
getDocSetCollector, hashCode and equals. QueryResultKey can then delegate to 
CollectorFactory.hashCode. Then have a default implementation implementing the 
current hashCode behavior for QueryResultKey. This would ensure 
CollectorFactory implementors have thought about hashCode and are free to 
simply extend the default CollectorFactory if they wish. 
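The interface shape proposed above could look roughly like this. The names 
mirror the comment, but everything here is an illustrative sketch, not the 
patch's actual API (Collector is an empty stand-in for Lucene's class):

```java
// Stand-in for org.apache.lucene.search.Collector, kept empty for the sketch.
interface Collector {}

interface CollectorFactory {
    Collector getCollector(int len);
    Collector getDocSetCollector(int len);

    // Redeclared so implementors are forced to think about cache identity:
    // QueryResultKey would delegate to these, so two factories must compare
    // equal exactly when they produce identically-ordered results.
    boolean equals(Object other);
    int hashCode();
}

// Default implementation: stateless, so all instances are interchangeable
// from the cache's point of view.
class DefaultCollectorFactory implements CollectorFactory {
    public Collector getCollector(int len) { return new Collector() {}; }
    public Collector getDocSetCollector(int len) { return new Collector() {}; }
    public boolean equals(Object other) { return other instanceof DefaultCollectorFactory; }
    public int hashCode() { return DefaultCollectorFactory.class.hashCode(); }
}
```

A custom factory with sort-altering state would extend DefaultCollectorFactory 
and fold that state into equals/hashCode.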

 Configurable Collectors
 ---

 Key: SOLR-4465
 URL: https://issues.apache.org/jira/browse/SOLR-4465
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 4.1
Reporter: Joel Bernstein
 Fix For: 4.2, 5.0

 Attachments: SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, 
 SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, 
 SOLR-4465.patch, SOLR-4465.patch


 This issue is to add configurable custom collectors to Solr. This expands the 
 design and work done in issue SOLR-1680 to include:
 1) CollectorFactory configuration in solconfig.xml
 2) Http parameters to allow clients to dynamically select a CollectorFactory 
 and construct a custom Collector.
 3) Make aspects of QueryComponent pluggable so that the output from 
 distributed search can conform with custom collectors at the shard level.




[jira] [Commented] (SOLR-3843) Add lucene-codecs to Solr libs?

2013-03-01 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590400#comment-13590400
 ] 

Commit Tag Bot commented on SOLR-3843:
--

[branch_4x commit] Robert Muir
http://svn.apache.org/viewvc?view=revision&revision=1451543

SOLR-3843: add lucene-codecs.jar





[jira] [Created] (LUCENE-4808) Add workaround for a JDK 8 class library bug which is still under discussion and may *not* be fixed

2013-03-01 Thread Uwe Schindler (JIRA)
Uwe Schindler created LUCENE-4808:
-

 Summary: Add workaround for a JDK 8 class library bug which is 
still under discussion and may *not* be fixed
 Key: LUCENE-4808
 URL: https://issues.apache.org/jira/browse/LUCENE-4808
 Project: Lucene - Core
  Issue Type: Bug
  Components: general/build
Affects Versions: 4.1, 5.0
Reporter: Uwe Schindler
Assignee: Uwe Schindler


With JDK 8 build 78 a backwards-compatibility regression was introduced which 
may not be fixed before release, because Oracle is possibly accepting this 
backwards break with regard to cross-compilation to JDK 6.

The full thread on the OpenJDK mailing list: 
[http://mail.openjdk.java.net/pipermail/compiler-dev/2013-February/005737.html] 
(continues in the next month: 
http://mail.openjdk.java.net/pipermail/compiler-dev/2013-March/005748.html)

*In short:* JDK 8 adds so-called default implementations on interfaces 
(meaning you can add a new method to an interface and provide a default 
implementation, so code implementing this interface is not required to 
implement the new method. This is really cool and would also help Lucene to 
make use of interfaces instead of abstract classes, which don't allow 
polymorphism).

In Lucene we are still compatible with Java 7 and Java 6. So like millions of 
other open source projects, we use -source 1.6 -target 1.6 to produce class 
files which are Java 1.6 conformant, although you use a newer JDK to compile. 
Of course this approach has problems (e.g. if you use new methods not 
available in earlier JDKs). Because of this we must at least compile Lucene 
with a legacy JDK 1.6 and also release the class files with this version. For 
3.6, the RM also has to install JDK 1.5 (which makes it impossible to do this 
on a Mac). So -source/-target is an alternative to at least produce 1.6/1.5 
compliant classes. According to Oracle, this is *not* the correct way to do 
it: Oracle says you have to use -source, -target and -Xbootclasspath to really 
cross-compile - and the last thing is what breaks here. To correctly set the 
bootclasspath, you need to have an older JDK installed, or you should be able 
to at least download it from Maven (which is not available to my knowledge).

The problem with JDK 8 is now: if you compile with -source/-target but without 
the bootclasspath, it can happen that the compiler no longer understands new 
JDK 8 class file structures in the new rt.jar, producing compile failures. In 
the case of this bug: AnnotatedElement#isAnnotationPresent() has existed since 
Java 1.5 in the interface, but all implementing classes have almost the same 
implementation: return getAnnotation(..) != null;. So the designers of the 
class library decided to move that method as a so-called default method into 
the interface itself, removing code duplication. If you then compile code with 
-source 1.6 -target 1.6 using that method, the javac compiler does not know 
about the new default-method feature and simply says the method is not found 
in java.lang.Class:

{noformat}
[javac] Compiling 113 source files to C:\Users\Uwe 
Schindler\Projects\lucene\trunk-lusolr3\lucene\build\test-framework\classes\java
[javac] C:\Users\Uwe 
Schindler\Projects\lucene\trunk-lusolr3\lucene\test-framework\src\java\org\apache\lucene\util\TestRuleSetupAndRestoreClassEnv.java:134:
 error: cannot find symbol
[javac] if (targetClass.isAnnotationPresent(SuppressCodecs.class)) {
[javac]^
[javac]   symbol:   method isAnnotationPresent(Class<SuppressCodecs>)
[javac]   location: variable targetClass of type Class<?>
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] 1 error
{noformat}

But until the Oracle people have a good workaround (I suggested to still 
implement the method on the implementation classes like Class/Method/... but 
delegate to the interface's default impl), we can quickly commit a replacement 
of this broken method call by (getAnnotation(..) != null). I want to do this 
so we can enable Jenkins builds with recent JDK 8 again.
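The equivalence of the two calls is easy to demonstrate with plain reflection; 
SuppressCodecsDemo below is a made-up annotation standing in for Lucene's 
SuppressCodecs:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class AnnotationCheck {
    @Retention(RetentionPolicy.RUNTIME)
    @interface SuppressCodecsDemo {} // stand-in for Lucene's SuppressCodecs

    @SuppressCodecsDemo
    static class Annotated {}

    public static void main(String[] args) {
        Class<?> targetClass = Annotated.class;
        // The call that fails to compile with -source 1.6 against JDK 8 b78's rt.jar:
        boolean viaIsPresent = targetClass.isAnnotationPresent(SuppressCodecsDemo.class);
        // The proposed drop-in replacement, semantically identical:
        boolean viaGetAnnotation = targetClass.getAnnotation(SuppressCodecsDemo.class) != null;
        System.out.println(viaIsPresent + " " + viaGetAnnotation); // prints: true true
    }
}
```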




[jira] [Commented] (LUCENE-4797) Fix remaining Lucene/Solr Javadocs issue

2013-03-01 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590421#comment-13590421
 ] 

Uwe Schindler commented on LUCENE-4797:
---

There is another problem in JDK8 b78, I opened LUCENE-4808 about it. Otherwise 
the code and javadocs now compile.

 Fix remaining Lucene/Solr Javadocs issue
 

 Key: LUCENE-4797
 URL: https://issues.apache.org/jira/browse/LUCENE-4797
 Project: Lucene - Core
  Issue Type: Bug
  Components: general/javadocs
Affects Versions: 4.1
Reporter: Uwe Schindler
Assignee: Uwe Schindler

 Java 8 has a new feature (enabled by default): 
 http://openjdk.java.net/jeps/172
 It fails the build on:
 - incorrect links (@see, @link,...)
 - incorrect HTML entities
 - invalid HTML in general
 Thanks to our linter, written with HTML Tidy and Python, most of the bugs are 
 already solved in our source code, but the Oracle linter finds some more 
 problems that our linter does not:
 - missing escapes 
 - invalid entities
 Unfortunately the versions of JDK8 released up to today have a bug making 
 optional closing tags (which are valid HTML4), like </p>, mandatory. This 
 will be fixed in b78.
 Currently there is another bug in the Oracle javadoc tool (it fails to copy 
 doc-files folders), but this is under investigation at the moment.
 We should clean up our javadocs so they pass the new JDK8 javadoc tool with 
 build 78+. Maybe we can put our own linter out of service once we rely on 
 Java 8 :-)




[jira] [Commented] (LUCENE-4808) Add workaround for a JDK 8 class library bug which is still under discussion and may *not* be fixed

2013-03-01 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590422#comment-13590422
 ] 

Uwe Schindler commented on LUCENE-4808:
---

One note: code that is already compiled and calls isAnnotationPresent() will 
not break at runtime, as the JVM emulates default method calls in HotSpot. 
It's only javac with -source 1.6 that breaks. So if you download older Lucene 
versions as binaries and compile your code against them, e.g. SolrJ will still 
work, although it calls this method. But you are no longer able to compile 
Lucene/Solr with JDK 8 - and that's the regression.

[jira] [Updated] (LUCENE-4808) Add workaround for a JDK 8 class library bug which is still under discussion and may *not* be fixed

2013-03-01 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-4808:
--

Attachment: LUCENE-4808.patch

This patch fixes some tests using isAnnotationPresent() and SolrJ. We should 
maybe add a comment about this, but it's not a major thing at all (the 
implementation does the same thing, so we even save one method call...).

 Add workaround for a JDK 8 class library bug which is still under 
 discussion any may *not* be fixed
 -

 Key: LUCENE-4808
 URL: https://issues.apache.org/jira/browse/LUCENE-4808
 Project: Lucene - Core
  Issue Type: Bug
  Components: general/build
Affects Versions: 4.1, 5.0
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Attachments: LUCENE-4808.patch


 With JDK8 build 78 a backwards-compatibility regression was introduced 
 which may not be fixed until release, because Oracle is possibly accepting 
 this backwards break regarding cross-compilation to JDK6.
 The full thread on the OpenJDK mailing list: 
 [http://mail.openjdk.java.net/pipermail/compiler-dev/2013-February/005737.html]
  (continues in the next month: 
 http://mail.openjdk.java.net/pipermail/compiler-dev/2013-March/005748.html)
 *In short:* JDK 8 adds so-called default implementations on interfaces (meaning 
 you can add a new method to an interface and provide a default 
 implementation, so code implementing this interface is not required to 
 implement the new method. This is really cool and would also help Lucene to 
 make use of interfaces instead of abstract classes, which don't allow 
 polymorphism).
 In Lucene we are still compatible with Java 7 and Java 6. So like millions of 
 other open source projects, we use -source 1.6 -target 1.6 to produce class 
 files which are Java 1.6 conformant, although a newer JDK is used to compile. Of 
 course this approach has problems (e.g. if you use newer methods not 
 available in earlier JDKs). Because of this we must at least compile Lucene 
 with a legacy JDK 1.6 and also release the class files with this version. 
 For 3.6, the RM also has to install JDK 1.5 (which makes it impossible to do 
 this on a Mac). So -source/-target is an alternative to at least produce 1.6/1.5 
 compliant classes. According to Oracle, this is *not* the correct way to do 
 this: Oracle says you have to use -source, -target and -Xbootclasspath to 
 really cross-compile - and the last item is what breaks here. To correctly 
 set the bootclasspath, you need to have an older JDK installed, or you should 
 be able to at least download it from Maven (which is not available to my 
 knowledge).
 The problem with JDK8 now is: if you compile with -source/-target but without the 
 bootclasspath, the compiler may no longer understand new 
 JDK8 class file structures in the new rt.jar, producing compile failures. 
 In the case of this bug: AnnotatedElement#isAnnotationPresent() has existed since 
 Java 1.5 in the interface, but all implementing classes have almost the same 
 implementation: return getAnnotation(..) != null;. So the designers of the 
 class library decided to move that method as a so-called default method into 
 the interface itself, removing code duplication. If you then compile code 
 with -source 1.6 -target 1.6 using that method, the javac compiler does not 
 know about the new default-method feature and simply says: method not found 
 in java.lang.Class:
 {noformat}
 [javac] Compiling 113 source files to C:\Users\Uwe 
 Schindler\Projects\lucene\trunk-lusolr3\lucene\build\test-framework\classes\java
 [javac] C:\Users\Uwe 
 Schindler\Projects\lucene\trunk-lusolr3\lucene\test-framework\src\java\org\apache\lucene\util\TestRuleSetupAndRestoreClassEnv.java:134:
  error: cannot find symbol
 [javac] if (targetClass.isAnnotationPresent(SuppressCodecs.class)) {
 [javac]^
 [javac]   symbol:   method isAnnotationPresent(Class<SuppressCodecs>)
 [javac]   location: variable targetClass of type Class<?>
 [javac] Note: Some input files use or override a deprecated API.
 [javac] Note: Recompile with -Xlint:deprecation for details.
 [javac] 1 error
 {noformat}
 But until the Oracle people have a good workaround (I suggested that they still 
 implement the method on the implementation classes like Class/Method/... but 
 delegate to the interface's default impl), we can quickly commit a 
 replacement of this broken method call by (getAnnotation(..) != null). I want 
 to do this, so we can enable Jenkins builds with recent JDK 8 again.
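The workaround is mechanical: each call site that used the interface's isAnnotationPresent() is rewritten to the equivalent null check. A minimal self-contained sketch of that replacement (SuppressCodecs here is a reduced stand-in for Lucene's test annotation, not the real class):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class AnnotationWorkaround {

  @Retention(RetentionPolicy.RUNTIME)
  @interface SuppressCodecs { String[] value(); }

  @SuppressCodecs({"Lucene3x"})
  static class AnnotatedTarget {}

  // The replacement call: getAnnotation(..) != null compiles fine under
  // -source 1.6, because getAnnotation() is declared directly on Class,
  // while isAnnotationPresent() became an interface default method in b78.
  static boolean hasSuppressCodecs(Class<?> targetClass) {
    return targetClass.getAnnotation(SuppressCodecs.class) != null;
  }

  public static void main(String[] args) {
    System.out.println(hasSuppressCodecs(AnnotatedTarget.class)); // true
    System.out.println(hasSuppressCodecs(String.class));          // false
  }
}
```

Behavior is identical to isAnnotationPresent(), since the JDK's own implementations are exactly this null check.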

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Updated] (LUCENE-4808) Add workaround for a JDK 8 class library bug which is still under discussion and may *not* be fixed

2013-03-01 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-4808:
--

Summary: Add workaround for a JDK 8 class library bug which is still 
under discussion and may *not* be fixed  (was: Add workaround for a JDK 8 
class library bug which is still under discussion any may *not* be fixed)

 Add workaround for a JDK 8 class library bug which is still under 
 discussion and may *not* be fixed
 -

 Key: LUCENE-4808
 URL: https://issues.apache.org/jira/browse/LUCENE-4808
 Project: Lucene - Core
  Issue Type: Bug
  Components: general/build
Affects Versions: 4.1, 5.0
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Attachments: LUCENE-4808.patch



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (LUCENE-4809) FuzzyLikeThisQuery fails if field does not exist or is not indexed with NPE during rewrite

2013-03-01 Thread Simon Willnauer (JIRA)
Simon Willnauer created LUCENE-4809:
---

 Summary: FuzzyLikeThisQuery fails if field does not exist or is 
not indexed with NPE during rewrite
 Key: LUCENE-4809
 URL: https://issues.apache.org/jira/browse/LUCENE-4809
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/other
Affects Versions: 4.1, 4.0
Reporter: Simon Willnauer
Priority: Minor
 Fix For: 4.2, 5.0


this occurred here: https://github.com/elasticsearch/elasticsearch/issues/2690

{noformat}
 at 
org.apache.lucene.sandbox.queries.SlowFuzzyTermsEnum$LinearFuzzyTermsEnum.init(SlowFuzzyTermsEnum.java:89)
at 
org.apache.lucene.sandbox.queries.SlowFuzzyTermsEnum.maxEditDistanceChanged(SlowFuzzyTermsEnum.java:58)
at 
org.apache.lucene.search.FuzzyTermsEnum.bottomChanged(FuzzyTermsEnum.java:211)
at org.apache.lucene.search.FuzzyTermsEnum.init(FuzzyTermsEnum.java:144)
at 
org.apache.lucene.sandbox.queries.SlowFuzzyTermsEnum.init(SlowFuzzyTermsEnum.java:48)
at 
org.apache.lucene.sandbox.queries.FuzzyLikeThisQuery.addTerms(FuzzyLikeThisQuery.java:209)
at 
org.apache.lucene.sandbox.queries.FuzzyLikeThisQuery.rewrite(FuzzyLikeThisQuery.java:262)
{noformat}
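The patch itself is attached below rather than shown here, but the failure mode is a classic one: looking up terms for a field that was never indexed yields null, which the enum constructor then dereferences. A hedged sketch of the guard pattern, using a plain map in place of the Lucene reader so the example stays self-contained (none of these names are from the actual patch):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class NullFieldGuard {
  // A plain map stands in for the index: field name -> indexed terms.
  static final Map<String, List<String>> INDEX = new HashMap<String, List<String>>();
  static {
    INDEX.put("title", Arrays.asList("lucene", "solr"));
  }

  // Without the null check, asking for a field that does not exist (or is
  // not indexed) dereferences null -- the shape of the NPE in the trace above.
  static int countTerms(String field) {
    List<String> terms = INDEX.get(field);
    if (terms == null) {
      return 0; // missing/unindexed field: nothing to enumerate
    }
    return terms.size();
  }

  public static void main(String[] args) {
    System.out.println(countTerms("title")); // 2
    System.out.println(countTerms("body"));  // 0 rather than a NullPointerException
  }
}
```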

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4809) FuzzyLikeThisQuery fails if field does not exist or is not indexed with NPE during rewrite

2013-03-01 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-4809:


Attachment: LUCENE-4809.patch

here is a patch

 FuzzyLikeThisQuery fails if field does not exist or is not indexed with NPE 
 during rewrite
 --

 Key: LUCENE-4809
 URL: https://issues.apache.org/jira/browse/LUCENE-4809
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/other
Affects Versions: 4.0, 4.1
Reporter: Simon Willnauer
Priority: Minor
 Fix For: 4.2, 5.0

 Attachments: LUCENE-4809.patch



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4806) change FacetIndexingParams.DEFAULT_FACET_DELIM_CHAR to U+001F

2013-03-01 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590479#comment-13590479
 ] 

Commit Tag Bot commented on LUCENE-4806:


[trunk commit] Michael McCandless
http://svn.apache.org/viewvc?view=revision&revision=1451578

LUCENE-4806: change facet delim character to use 3 bytes instead of 1 (in UTF-8)


 change FacetIndexingParams.DEFAULT_FACET_DELIM_CHAR to U+001F
 -

 Key: LUCENE-4806
 URL: https://issues.apache.org/jira/browse/LUCENE-4806
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/facet
Reporter: Michael McCandless
Assignee: Michael McCandless
Priority: Minor
 Attachments: LUCENE-4806.patch


 The current delim char takes 3 bytes as UTF-8 ... but U+001F (= 
 INFORMATION_SEPARATOR, which seems appropriate) takes only 1 byte.
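The byte counts are easy to verify with the JDK alone. A small sketch (U+E000, a private-use-area character, stands in for the previous delimiter, which is not named in this thread; any code point at or above U+0800 needs three UTF-8 bytes):

```java
import java.nio.charset.StandardCharsets;

public class DelimByteLength {
  public static void main(String[] args) {
    // U+001F (INFORMATION SEPARATOR ONE) is in the ASCII range,
    // so it encodes to a single byte in UTF-8.
    String newDelim = "\u001F";
    // Stand-in for the old default: code points >= U+0800 take 3 bytes.
    String oldStyleDelim = "\uE000";
    System.out.println(newDelim.getBytes(StandardCharsets.UTF_8).length);      // 1
    System.out.println(oldStyleDelim.getBytes(StandardCharsets.UTF_8).length); // 3
  }
}
```

Since the delimiter appears once per drill-down term, a one-byte separator shaves two bytes off every such indexed term.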

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4809) FuzzyLikeThisQuery fails if field does not exist or is not indexed with NPE during rewrite

2013-03-01 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590480#comment-13590480
 ] 

Commit Tag Bot commented on LUCENE-4809:


[trunk commit] Simon Willnauer
http://svn.apache.org/viewvc?view=revision&revision=1451577

LUCENE-4809: FuzzyLikeThisQuery fails if field does not exist or is not indexed 
with NPE during rewrite


 FuzzyLikeThisQuery fails if field does not exist or is not indexed with NPE 
 during rewrite
 --

 Key: LUCENE-4809
 URL: https://issues.apache.org/jira/browse/LUCENE-4809
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/other
Affects Versions: 4.0, 4.1
Reporter: Simon Willnauer
Priority: Minor
 Fix For: 4.2, 5.0

 Attachments: LUCENE-4809.patch



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-4806) change FacetIndexingParams.DEFAULT_FACET_DELIM_CHAR to U+001F

2013-03-01 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-4806.


   Resolution: Fixed
Fix Version/s: 5.0
   4.2

 change FacetIndexingParams.DEFAULT_FACET_DELIM_CHAR to U+001F
 -

 Key: LUCENE-4806
 URL: https://issues.apache.org/jira/browse/LUCENE-4806
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/facet
Reporter: Michael McCandless
Assignee: Michael McCandless
Priority: Minor
 Fix For: 4.2, 5.0

 Attachments: LUCENE-4806.patch


 The current delim char takes 3 bytes as UTF-8 ... but U+001F (= 
 INFORMATION_SEPARATOR, which seems appropriate) takes only 1 byte.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-4809) FuzzyLikeThisQuery fails if field does not exist or is not indexed with NPE during rewrite

2013-03-01 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer resolved LUCENE-4809.
-

Resolution: Fixed

 FuzzyLikeThisQuery fails if field does not exist or is not indexed with NPE 
 during rewrite
 --

 Key: LUCENE-4809
 URL: https://issues.apache.org/jira/browse/LUCENE-4809
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/other
Affects Versions: 4.0, 4.1
Reporter: Simon Willnauer
Priority: Minor
 Fix For: 4.2, 5.0

 Attachments: LUCENE-4809.patch



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4806) change FacetIndexingParams.DEFAULT_FACET_DELIM_CHAR to U+001F

2013-03-01 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590493#comment-13590493
 ] 

Commit Tag Bot commented on LUCENE-4806:


[branch_4x commit] Michael McCandless
http://svn.apache.org/viewvc?view=revision&revision=1451579

LUCENE-4806: change facet delim character to use 3 bytes instead of 1 (in UTF-8)


 change FacetIndexingParams.DEFAULT_FACET_DELIM_CHAR to U+001F
 -

 Key: LUCENE-4806
 URL: https://issues.apache.org/jira/browse/LUCENE-4806
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/facet
Reporter: Michael McCandless
Assignee: Michael McCandless
Priority: Minor
 Fix For: 4.2, 5.0

 Attachments: LUCENE-4806.patch


 The current delim char takes 3 bytes as UTF-8 ... but U+001F (= 
 INFORMATION_SEPARATOR, which seems appropriate) takes only 1 byte.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4808) Add workaround for a JDK 8 class library bug which is still under discussion and may *not* be fixed

2013-03-01 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-4808:
--

Attachment: LUCENE-4808.patch

Patch with a comment (TODO).

I will now commit this and enable JDK8 b78 on Jenkins, so we can at least check 
that HotSpot works. Javadocs are still not working, but they were already 
disabled for Java 8 in an earlier commit.

I will keep this issue open without fix version, to revert the commit once 
Oracle fixes this bug.

 Add workaround for a JDK 8 class library bug which is still under 
 discussion and may *not* be fixed
 -

 Key: LUCENE-4808
 URL: https://issues.apache.org/jira/browse/LUCENE-4808
 Project: Lucene - Core
  Issue Type: Bug
  Components: general/build
Affects Versions: 4.1, 5.0
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Attachments: LUCENE-4808.patch, LUCENE-4808.patch



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators

[jira] [Commented] (LUCENE-4809) FuzzyLikeThisQuery fails if field does not exist or is not indexed with NPE during rewrite

2013-03-01 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590509#comment-13590509
 ] 

Commit Tag Bot commented on LUCENE-4809:


[branch_4x commit] Simon Willnauer
http://svn.apache.org/viewvc?view=revision&revision=1451581

LUCENE-4809: FuzzyLikeThisQuery fails if field does not exist or is not indexed 
with NPE during rewrite


 FuzzyLikeThisQuery fails if field does not exist or is not indexed with NPE 
 during rewrite
 --

 Key: LUCENE-4809
 URL: https://issues.apache.org/jira/browse/LUCENE-4809
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/other
Affects Versions: 4.0, 4.1
Reporter: Simon Willnauer
Priority: Minor
 Fix For: 4.2, 5.0

 Attachments: LUCENE-4809.patch



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4808) Add workaround for a JDK 8 class library bug which is still under discussion and may *not* be fixed

2013-03-01 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590510#comment-13590510
 ] 

Commit Tag Bot commented on LUCENE-4808:


[trunk commit] Uwe Schindler
http://svn.apache.org/viewvc?view=revision&revision=1451584

LUCENE-4808: Add workaround for a JDK 8 class library bug which is still 
under discussion and may *not* be fixed


 Add workaround for a JDK 8 class library bug which is still under 
 discussion and may *not* be fixed
 -

 Key: LUCENE-4808
 URL: https://issues.apache.org/jira/browse/LUCENE-4808
 Project: Lucene - Core
  Issue Type: Bug
  Components: general/build
Affects Versions: 4.1, 5.0
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Attachments: LUCENE-4808.patch, LUCENE-4808.patch



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

JDK 8 b78 installed on Policeman Jenkins

2013-03-01 Thread Uwe Schindler
After some workarounds and lots of special cases for JDK 8, we now have a 
compiling version of Lucene that can be tested with a recent JDK 8, see 
https://issues.apache.org/jira/browse/LUCENE-4808 and 
https://issues.apache.org/jira/browse/LUCENE-4797.

I updated Jenkins to use JDK 8 b78. I hope HotSpot no longer corrupts 
itself (like b65 sometimes did). :-)

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de




-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4506) [solr4.0.0] many index.{date} dir in replication node

2013-03-01 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590522#comment-13590522
 ] 

Mark Miller commented on SOLR-4506:
---

I think it's a known issue that interrupted replications will leave dirs around. 
We can look at cleaning them up on startup or something...
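A startup sweep along those lines could look like the hypothetical sketch below. This is not Solr's actual cleanup code: the directory names follow the index.{date} pattern from the report, and in a real implementation the directory to keep would come from index.properties rather than being passed in by hand.

```java
import java.io.File;

public class StaleIndexDirCleanup {
    // Remove index.<timestamp> directories left behind by interrupted
    // replications, keeping only the directory named by `keep`.
    static int removeStaleIndexDirs(File dataDir, String keep) {
        int removed = 0;
        File[] children = dataDir.listFiles();
        if (children == null) return 0;
        for (File f : children) {
            if (f.isDirectory() && f.getName().startsWith("index.")
                    && !f.getName().equals(keep)) {
                if (deleteRecursively(f)) removed++;
            }
        }
        return removed;
    }

    static boolean deleteRecursively(File f) {
        File[] kids = f.listFiles();
        if (kids != null) for (File k : kids) deleteRecursively(k);
        return f.delete();
    }

    public static void main(String[] args) {
        // Simulate a data dir with two stale copies plus the live index dir.
        File dataDir = new File(System.getProperty("java.io.tmpdir"), "solr-data-demo");
        new File(dataDir, "index.20130218012211880").mkdirs();
        new File(dataDir, "index.20130218015714713").mkdirs();
        new File(dataDir, "index.20130218023101958").mkdirs(); // the one in use
        System.out.println(removeStaleIndexDirs(dataDir, "index.20130218023101958"));
        // prints 2
    }
}
```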

 [solr4.0.0] many index.{date} dir in replication node 
 --

 Key: SOLR-4506
 URL: https://issues.apache.org/jira/browse/SOLR-4506
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0
 Environment: the solr4.0 runs on suse11.
 mem:32G
 cpu:16 cores
Reporter: zhuojunjian
 Fix For: 4.0

   Original Estimate: 12h
  Remaining Estimate: 12h

 In our test, we used the SolrCloud feature in solr4.0 (version detail: 
 4.0.0.2012.10.06.03.04.33).
 The SolrCloud configuration is 3 shards and 2 replicas per shard.
 We found more than 25 directories named index.{date} in one 
 replica node belonging to shard 3. 
 For example:
 index.2013021725864  index.20130218012211880  index.20130218015714713  
 index.20130218023101958  index.20130218030424083  tlog
 index.20130218005648324  index.20130218012751078  index.20130218020141293  
 The issue seems similar to SOLR-1781, but that was fixed in 4.0-BETA and 5.0. 
 Is solr4.0 affected too? If it is also fixed in solr4.0, why do we see the issue again?
 What can I do?   




[jira] [Commented] (SOLR-4519) corrupt tlog causes fullCopy download index files every time reboot a node

2013-03-01 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590526#comment-13590526
 ] 

Mark Miller commented on SOLR-4519:
---

bq. the tlog should be fixed.

Currently, when you replicate, you get nothing in the tlog. Yonik has brought 
up perhaps doing a little trick on replication to populate the tlog a bit, but 
nothing has been started on that front. So once you replicate, unless some docs 
are then added, the next fail will require another replication.

However, we may actually be able to take advantage of replication itself 
noticing that it doesn't need to do a full replicate. Currently, in SolrCloud 
we force a replication every time no matter what when we call replicate - now 
that std replication has had some bugs fixed and has better tests, we may not 
have to force that anymore - and so the next full replication would not 
actually move any files.

 corrupt tlog causes fullCopy download index files every time reboot a node
 --

 Key: SOLR-4519
 URL: https://issues.apache.org/jira/browse/SOLR-4519
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.0
 Environment: The solrcloud is deployed on three servers. There are 
 three solr instances on each server. The collection has three shards. Every 
 shard has three replicas. Replicas in the same shard run in solr instances on 
 different servers.
Reporter: Simon Scofield

 There are two questions:
 1. The tlog of one replica of shard1 was damaged for some reason. We are still 
 looking for the cause. Please give us a clue if you are familiar with this 
 problem.
 2. The broken replica succeeded in recovering via a fullCopy download of index 
 files from the leader. Then I killed the instance and started it again, and the 
 recovery process was still a fullCopy download. In my opinion, after the first 
 fullCopy recovery, the tlog should be fixed. Here is some log: 
 2013-02-28 15:04:58,622 INFO org.apache.solr.cloud.ZkController:757 - Core 
 needs to recover:metadata
 2013-02-28 15:04:58,622 INFO org.apache.solr.update.DefaultSolrCoreState:214 
 - Running recovery - first canceling any ongoing recovery
 2013-02-28 15:04:58,625 INFO org.apache.solr.cloud.RecoveryStrategy:217 - 
 Starting recovery process.  core=metadata recoveringAfterStartup=true
 2013-02-28 15:04:58,626 INFO org.apache.solr.common.cloud.ZkStateReader:295 - 
 Updating cloud state from ZooKeeper...
 2013-02-28 15:04:58,628 ERROR org.apache.solr.update.UpdateLog:957 - 
 Exception reading versions from log
 java.io.EOFException
 at 
 org.apache.solr.common.util.FastInputStream.readUnsignedByte(FastInputStream.java:72)
 at 
 org.apache.solr.common.util.FastInputStream.readInt(FastInputStream.java:206)
 at 
 org.apache.solr.update.TransactionLog$ReverseReader.next(TransactionLog.java:705)
 at 
 org.apache.solr.update.UpdateLog$RecentUpdates.update(UpdateLog.java:906)
 at 
 org.apache.solr.update.UpdateLog$RecentUpdates.access$000(UpdateLog.java:846)
 at 
 org.apache.solr.update.UpdateLog.getRecentUpdates(UpdateLog.java:996)
 at 
 org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:256)
 at 
 org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:220)
 2013-02-28 15:05:01,857 INFO org.apache.solr.cloud.RecoveryStrategy:399 - 
 Begin buffering updates. core=metadata
 2013-02-28 15:05:01,857 INFO org.apache.solr.update.UpdateLog:1015 - Starting 
 to buffer updates. FSUpdateLog{state=ACTIVE, tlog=null}
 2013-02-28 15:05:01,857 INFO org.apache.solr.cloud.RecoveryStrategy:126 - 
 Attempting to replicate from http://23.61.21.121:65201/solr/metadata/. 
 core=metadata
 2013-02-28 15:05:02,882 INFO org.apache.solr.handler.SnapPuller:305 - 
 Master's generation: 6993
 2013-02-28 15:05:02,882 INFO org.apache.solr.handler.SnapPuller:306 - Slave's 
 generation: 6993
 2013-02-28 15:05:02,882 INFO org.apache.solr.handler.SnapPuller:307 - 
 Starting replication process
 2013-02-28 15:05:02,893 INFO org.apache.solr.handler.SnapPuller:312 - Number 
 of files in latest index in master: 422
 2013-02-28 15:05:02,897 INFO org.apache.solr.handler.SnapPuller:325 - 
 Starting download to 
 /solr/nodes/node1/bin/../solr/metadata/data/index.20130228150502893 
 fullCopy=true
 2013-02-28 15:33:55,848 INFO org.apache.solr.handler.SnapPuller:334 - Total 
 time taken for download : 1732 secs (The size of index files is 94G)


[jira] [Created] (SOLR-4521) Consider not using 'force' replications in SolrCloud recovery.

2013-03-01 Thread Mark Miller (JIRA)
Mark Miller created SOLR-4521:
-

 Summary: Consider not using 'force' replications in SolrCloud 
recovery.
 Key: SOLR-4521
 URL: https://issues.apache.org/jira/browse/SOLR-4521
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 4.2, 5.0


Now that replication has some better tests and bugs fixed, we may be able to 
stop forcing a replication on every replication call and let the snap pull 
determine if one is actually needed. This never worked quite right in the past, 
so I got around it by forcing a replication on recovery whether it was needed 
or not - the peer sync phase made this not the biggest deal. However, there are 
cases where it would still be useful - see SOLR-4519.




[jira] [Commented] (SOLR-4519) corrupt tlog causes fullCopy download index files every time reboot a node

2013-03-01 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590529#comment-13590529
 ] 

Mark Miller commented on SOLR-4519:
---

I filed SOLR-4521

 corrupt tlog causes fullCopy download index files every time reboot a node
 --

 Key: SOLR-4519
 URL: https://issues.apache.org/jira/browse/SOLR-4519
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.0
 Environment: The solrcloud is deployed on three servers. There are 
 three solr instances on each server. The collection has three shards. Every 
 shard has three replicas. Replicas in the same shard run in solr instances on 
 different servers.
Reporter: Simon Scofield

 There are two questions:
 1. The tlog of one replica of shard1 was damaged for some reason. We are still 
 looking for the cause. Please give us a clue if you are familiar with this 
 problem.
 2. The broken replica succeeded in recovering via a fullCopy download of index 
 files from the leader. Then I killed the instance and started it again, and the 
 recovery process was still a fullCopy download. In my opinion, after the first 
 fullCopy recovery, the tlog should be fixed. Here is some log: 
 2013-02-28 15:04:58,622 INFO org.apache.solr.cloud.ZkController:757 - Core 
 needs to recover:metadata
 2013-02-28 15:04:58,622 INFO org.apache.solr.update.DefaultSolrCoreState:214 
 - Running recovery - first canceling any ongoing recovery
 2013-02-28 15:04:58,625 INFO org.apache.solr.cloud.RecoveryStrategy:217 - 
 Starting recovery process.  core=metadata recoveringAfterStartup=true
 2013-02-28 15:04:58,626 INFO org.apache.solr.common.cloud.ZkStateReader:295 - 
 Updating cloud state from ZooKeeper...
 2013-02-28 15:04:58,628 ERROR org.apache.solr.update.UpdateLog:957 - 
 Exception reading versions from log
 java.io.EOFException
 at 
 org.apache.solr.common.util.FastInputStream.readUnsignedByte(FastInputStream.java:72)
 at 
 org.apache.solr.common.util.FastInputStream.readInt(FastInputStream.java:206)
 at 
 org.apache.solr.update.TransactionLog$ReverseReader.next(TransactionLog.java:705)
 at 
 org.apache.solr.update.UpdateLog$RecentUpdates.update(UpdateLog.java:906)
 at 
 org.apache.solr.update.UpdateLog$RecentUpdates.access$000(UpdateLog.java:846)
 at 
 org.apache.solr.update.UpdateLog.getRecentUpdates(UpdateLog.java:996)
 at 
 org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:256)
 at 
 org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:220)
 2013-02-28 15:05:01,857 INFO org.apache.solr.cloud.RecoveryStrategy:399 - 
 Begin buffering updates. core=metadata
 2013-02-28 15:05:01,857 INFO org.apache.solr.update.UpdateLog:1015 - Starting 
 to buffer updates. FSUpdateLog{state=ACTIVE, tlog=null}
 2013-02-28 15:05:01,857 INFO org.apache.solr.cloud.RecoveryStrategy:126 - 
 Attempting to replicate from http://23.61.21.121:65201/solr/metadata/. 
 core=metadata
 2013-02-28 15:05:02,882 INFO org.apache.solr.handler.SnapPuller:305 - 
 Master's generation: 6993
 2013-02-28 15:05:02,882 INFO org.apache.solr.handler.SnapPuller:306 - Slave's 
 generation: 6993
 2013-02-28 15:05:02,882 INFO org.apache.solr.handler.SnapPuller:307 - 
 Starting replication process
 2013-02-28 15:05:02,893 INFO org.apache.solr.handler.SnapPuller:312 - Number 
 of files in latest index in master: 422
 2013-02-28 15:05:02,897 INFO org.apache.solr.handler.SnapPuller:325 - 
 Starting download to 
 /solr/nodes/node1/bin/../solr/metadata/data/index.20130228150502893 
 fullCopy=true
 2013-02-28 15:33:55,848 INFO org.apache.solr.handler.SnapPuller:334 - Total 
 time taken for download : 1732 secs (The size of index files is 94G)




[jira] [Commented] (LUCENE-4808) Add workaround for a JDK 8 class library bug which is still under discussion and may *not* be fixed

2013-03-01 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590556#comment-13590556
 ] 

Commit Tag Bot commented on LUCENE-4808:


[branch_4x commit] Uwe Schindler
http://svn.apache.org/viewvc?view=revision&revision=1451585

Merged revision(s) 1451584 from lucene/dev/trunk:
LUCENE-4808: Add workaround for a JDK 8 class library bug which is still 
under discussion and may *not* be fixed


 Add workaround for a JDK 8 class library bug which is still under 
 discussion and may *not* be fixed
 -

 Key: LUCENE-4808
 URL: https://issues.apache.org/jira/browse/LUCENE-4808
 Project: Lucene - Core
  Issue Type: Bug
  Components: general/build
Affects Versions: 4.1, 5.0
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Attachments: LUCENE-4808.patch, LUCENE-4808.patch


 With JDK8 build 78 there was introduced a backwards compatibility regression 
 which may not be fixed until release, because Oracle is possibly accepting 
 this backwards break regarding cross-compilation to JDK6.
 The full thread on the OpenJDK mailing list: 
 [http://mail.openjdk.java.net/pipermail/compiler-dev/2013-February/005737.html]
  (continues in next month: 
 http://mail.openjdk.java.net/pipermail/compiler-dev/2013-March/005748.html)
 *In short:* JDK 8 adds so-called default implementations on interfaces (meaning 
 you can add a new method to an interface and provide a default 
 implementation, so code implementing this interface is not required to 
 implement the new method). This is really cool and would also help Lucene to 
 make use of interfaces instead of abstract classes, which don't allow 
 polymorphism.
 In Lucene we are still compatible with Java 7 and Java 6. So like millions of 
 other open source projects, we use -source 1.6 -target 1.6 to produce class 
 files which are Java 1.6 conform, although you use a newer JDK to compile. Of 
 course this approach has problems (e.g. if you use newer methods not 
 available in earlier JDKs). Because of this we must at least compile Lucene 
 with a legacy JDK 1.6 and also release the class files with this version. 
 For 3.6, the RM also has to install JDK 1.5 (which makes it impossible to do 
 this on a Mac). So -source/-target is an alternative to at least produce 1.6/1.5 
 compliant classes. According to Oracle, this is *not* the correct way to do 
 this: Oracle says, you have to use: -source, -target and -Xbootclasspath to 
 really crosscompile - and the last thing is what breaks here. To correctly 
 set the bootclasspath, you need to have an older JDK installed or you should 
 be able to at least download it from maven (which is not available to my 
 knowledge).
 The problem with JDK8 is now: if you compile with -source/-target but without the 
 bootclasspath, the compiler no longer understands new 
 JDK8 class file structures in the new rt.jar, producing compile failures. 
 In the case of this bug: AnnotatedElement#isAnnotationPresent() exists since 
 Java 1.5 in the interface, but all implementing classes have almost the same 
 implementation: return getAnnotation(..) != null;. So the designers of the 
 class library decided to move that method as so called default method into 
 the interface itsself, removing code duplication. If you then compile code 
 with -source 1.6 -target 1.6 using that method, the javac compier does not 
 know about the new default method feature and simply says: Method not found 
 in java.lang.Class:
 {noformat}
 [javac] Compiling 113 source files to C:\Users\Uwe 
 Schindler\Projects\lucene\trunk-lusolr3\lucene\build\test-framework\classes\java
 [javac] C:\Users\Uwe 
 Schindler\Projects\lucene\trunk-lusolr3\lucene\test-framework\src\java\org\apache\lucene\util\TestRuleSetupAndRestoreClassEnv.java:134:
  error: cannot find symbol
 [javac] if (targetClass.isAnnotationPresent(SuppressCodecs.class)) {
 [javac]^
 [javac]   symbol:   method isAnnotationPresent(Class<SuppressCodecs>)
 [javac]   location: variable targetClass of type Class<?>
 [javac] Note: Some input files use or override a deprecated API.
 [javac] Note: Recompile with -Xlint:deprecation for details.
 [javac] 1 error
 {noformat}
 But until the Oracle people have a good workaround (I suggested to still 
 implement the method on the implementation classes like Class/Method/... but 
 delegate to the interface's default impl), we can quickly commit a 
 replacement of this broken method call by (getAnnotation(..) != null). I want 
 to do this, so we can enable jenkins builds with recent JDK 8 again.
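The proposed replacement can be illustrated with a tiny self-contained example. SuppressCodecs below is a stand-in annotation for illustration, not Lucene's real one:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class AnnotationCheck {
    // Stand-in for Lucene's SuppressCodecs annotation, just for illustration.
    @Retention(RetentionPolicy.RUNTIME)
    @interface SuppressCodecs {}

    @SuppressCodecs
    static class Annotated {}

    public static void main(String[] args) {
        Class<?> targetClass = Annotated.class;
        // targetClass.isAnnotationPresent(SuppressCodecs.class) is the call that
        // fails to compile with -source/-target 1.6 against JDK 8's rt.jar,
        // because the method became a default method on AnnotatedElement.
        // The equivalent null check avoids the problem:
        boolean present = targetClass.getAnnotation(SuppressCodecs.class) != null;
        System.out.println(present); // prints "true"
    }
}
```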


[jira] [Commented] (SOLR-4521) Consider not using 'force' replications in SolrCloud recovery.

2013-03-01 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590557#comment-13590557
 ] 

Mark Miller commented on SOLR-4521:
---

I just tried this out and the cloud tests do actually pass now (had SOLR-4511 
applied).

 Consider not using 'force' replications in SolrCloud recovery.
 --

 Key: SOLR-4521
 URL: https://issues.apache.org/jira/browse/SOLR-4521
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 4.2, 5.0


 Now that replication has some better tests and bugs fixed, we may be able to 
 stop forcing a replication on every replication call and let the snap pull 
 determine if one is actually needed. This never worked quite right in the 
 past, so I got around it by forcing a replication on recovery whether it was 
 needed or not - the peer sync phase made this not the biggest deal. However, 
 there are cases where it would still be useful - see SOLR-4519.




[jira] [Updated] (SOLR-4449) Enable backup requests for the internal solr load balancer

2013-03-01 Thread philip hoy (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

philip hoy updated SOLR-4449:
-

Attachment: SOLR-4449.patch

added logging to the load balancer and fixed a bug.

 Enable backup requests for the internal solr load balancer
 --

 Key: SOLR-4449
 URL: https://issues.apache.org/jira/browse/SOLR-4449
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: philip hoy
Priority: Minor
 Attachments: patch-4449.txt, SOLR-4449.patch, SOLR-4449.patch


 Add the ability to configure the built-in solr load balancer such that it 
 submits a backup request to the next server in the list if the initial 
 request takes too long. Employing such an algorithm could improve the latency 
 of the 9xth percentile albeit at the expense of increasing overall load due 
 to additional requests. 
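The idea can be sketched roughly as follows. This is a hypothetical illustration of the backup-request pattern, not the attached patch: fire the request at the first server, and if it has not answered within a threshold, also fire it at the next server and take whichever response arrives first.

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.*;

public class BackupRequestSketch {
    // Send the request to servers.get(0); if no answer within backupAfterMs,
    // also send it to servers.get(1) and return whichever finishes first.
    static <T> T requestWithBackup(List<Callable<T>> servers, long backupAfterMs) {
        ExecutorService pool = Executors.newCachedThreadPool();
        try {
            CompletionService<T> cs = new ExecutorCompletionService<>(pool);
            cs.submit(servers.get(0));
            Future<T> first = cs.poll(backupAfterMs, TimeUnit.MILLISECONDS);
            if (first != null) return first.get();
            cs.submit(servers.get(1)); // backup request to the next server
            return cs.take().get();    // whichever of the two finishes first wins
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) {
        Callable<String> slow = () -> { Thread.sleep(500); return "slow"; };
        Callable<String> fast = () -> "fast";
        // The slow server misses the 50 ms threshold, so the backup request wins.
        System.out.println(requestWithBackup(Arrays.asList(slow, fast), 50)); // prints "fast"
    }
}
```

The trade-off described above is visible here: the second request is extra load, but the caller's latency is bounded by the faster of the two responses.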




[jira] [Commented] (SOLR-2155) Geospatial search using geohash prefixes

2013-03-01 Thread Sandeep Tucknat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590636#comment-13590636
 ] 

Sandeep Tucknat commented on SOLR-2155:
---

I have a similar requirement to Sujan. I am doing a filtering spatial query, 
trying to find businesses (with multiple locations stored in a multi-valued 
field) available within a radius of a given point. I also need to know how many 
locations are actually within the radius as well as which one is the closest. 
Wondering if that's possible with the Solr 3 or 4 spatial implementation.

 Geospatial search using geohash prefixes
 

 Key: SOLR-2155
 URL: https://issues.apache.org/jira/browse/SOLR-2155
 Project: Solr
  Issue Type: Improvement
Reporter: David Smiley
Assignee: David Smiley
 Attachments: GeoHashPrefixFilter.patch, GeoHashPrefixFilter.patch, 
 GeoHashPrefixFilter.patch, Solr2155-1.0.2-project.zip, 
 Solr2155-1.0.3-project.zip, Solr2155-1.0.4-project.zip, 
 Solr2155-for-1.0.2-3.x-port.patch, 
 SOLR-2155_GeoHashPrefixFilter_with_sorting_no_poly.patch, SOLR.2155.p3.patch, 
 SOLR.2155.p3tests.patch


 {panel:title=NOTICE} The status of this issue is a plugin for Solr 3.x 
 located here: https://github.com/dsmiley/SOLR-2155.  Look at the introductory 
 readme and download the plugin .jar file.  Lucene 4's new spatial module is 
 largely based on this code.  The Solr 4 glue for it should come very soon but 
 as of this writing it's hosted temporarily at https://github.com/spatial4j.  
 For more information on using SOLR-2155 with Solr 3, see 
 http://wiki.apache.org/solr/SpatialSearch#SOLR-2155  This JIRA issue is 
 closed because it won't be committed in its current form.
 {panel}
 There currently isn't a solution in Solr for doing geospatial filtering on 
 documents that have a variable number of points.  This scenario occurs when 
 there is location extraction (i.e. via a gazetteer) occurring on free text.  
 None, one, or many geospatial locations might be extracted from any given 
 document and users want to limit their search results to those occurring in a 
 user-specified area.
 I've implemented this by furthering the GeoHash based work in Lucene/Solr 
 with a geohash prefix based filter.  A geohash refers to a lat-lon box on the 
 earth.  Each successive character added further subdivides the box into a 4x8 
 (or 8x4 depending on the even/odd length of the geohash) grid.  The first 
 step in this scheme is figuring out which geohash grid squares cover the 
 user's search query.  I've added various extra methods to GeoHashUtils (and 
 added tests) to assist in this purpose.  The next step is an actual Lucene 
 Filter, GeoHashPrefixFilter, that uses these geohash prefixes in 
 TermsEnum.seek() to skip to relevant grid squares in the index.  Once a 
 matching geohash grid is found, the points therein are compared against the 
 user's query to see if it matches.  I created an abstraction GeoShape 
 extended by subclasses named PointDistance... and CartesianBox to support 
 different queried shapes so that the filter need not care about these details.
 This work was presented at LuceneRevolution in Boston on October 8th.
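The character-by-character subdivision described above can be seen in a minimal geohash encoder. This is a simplified sketch for illustration, not the GeoHashUtils implementation: each base-32 output character consumes 5 bisection bits, alternating between longitude and latitude, which is why successive characters refine the cell on an alternating 4x8 / 8x4 grid.

```java
public class GeoHashSketch {
    private static final String BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz";

    // Minimal geohash encoder: interleave longitude/latitude bisections,
    // emitting one base-32 character per 5 bits.
    static String encode(double lat, double lon, int precision) {
        double[] latRange = {-90, 90}, lonRange = {-180, 180};
        StringBuilder hash = new StringBuilder();
        boolean evenBit = true; // even-numbered bits encode longitude
        int bit = 0, ch = 0;
        while (hash.length() < precision) {
            double[] range = evenBit ? lonRange : latRange;
            double value = evenBit ? lon : lat;
            double mid = (range[0] + range[1]) / 2;
            ch <<= 1;
            if (value >= mid) { ch |= 1; range[0] = mid; } else { range[1] = mid; }
            evenBit = !evenBit;
            if (++bit == 5) { hash.append(BASE32.charAt(ch)); bit = 0; ch = 0; }
        }
        return hash.toString();
    }

    public static void main(String[] args) {
        // Well-known example point near Jutland: 57.64911, 10.40744
        System.out.println(encode(57.64911, 10.40744, 11)); // prints "u4pruydqqvj"
    }
}
```

A prefix filter then only needs to compare indexed hashes against the prefixes of the cells covering the query shape, which is what enables the TermsEnum.seek() skipping described above.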




[jira] [Commented] (SOLR-3955) Return only matched multiValued field

2013-03-01 Thread Sandeep Tucknat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590645#comment-13590645
 ] 

Sandeep Tucknat commented on SOLR-3955:
---

This is especially important in a spatial search since there's an important 
business case of finding the branches/locations for an entity within a spatial 
filtering query. While the multi-valued spatial field implementation provides 
for filtering and scoring, it does not return this information to the client at 
the moment.

 Return only matched multiValued field
 -

 Key: SOLR-3955
 URL: https://issues.apache.org/jira/browse/SOLR-3955
 Project: Solr
  Issue Type: Improvement
  Components: search
Affects Versions: 4.0
Reporter: Dotan Cohen
  Labels: features

 Assuming a multivalued, stored and indexed field named comment. When 
 performing a search, it would be very helpful if there were a way to return 
 only the values of comment which contain the match. For example:
 When searching for "gold", instead of getting this result:
 <doc>
 <arr name="comment">
 <str>There's a lady who's sure</str>
 <str>all that glitters is gold</str>
 <str>and she's buying a stairway to heaven</str>
 </arr>
 </doc>
 I would prefer to get this result:
 <doc>
 <arr name="comment">
 <str>all that glitters is gold</str>
 </arr>
 </doc>
 (pseudo-XML from memory, may not be accurate but illustrates the point)
 Thanks.
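Until such a feature exists, one workaround is to re-filter the stored values on the client after the search. A hypothetical helper (not a Solr API; a real match test would need to mirror the analysis chain, whereas this only does a case-insensitive substring check):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Locale;

public class MatchedValuesSketch {
    // Keep only the values of a multiValued field that contain the query term.
    static List<String> matchedValues(List<String> values, String term) {
        List<String> out = new ArrayList<>();
        String needle = term.toLowerCase(Locale.ROOT);
        for (String v : values) {
            if (v.toLowerCase(Locale.ROOT).contains(needle)) {
                out.add(v);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> comment = Arrays.asList(
                "There's a lady who's sure",
                "all that glitters is gold",
                "and she's buying a stairway to heaven");
        System.out.println(matchedValues(comment, "gold"));
        // prints [all that glitters is gold]
    }
}
```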




[jira] [Comment Edited] (SOLR-3955) Return only matched multiValued field

2013-03-01 Thread Sandeep Tucknat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590645#comment-13590645
 ] 

Sandeep Tucknat edited comment on SOLR-3955 at 3/1/13 3:43 PM:
---

This is especially important in a spatial search since there's an important 
business case of finding the branches/locations for an entity within a spatial 
filtering query. While the multi-valued spatial field implementation provides 
for filtering and scoring, it does not return this information to the client at 
the moment.


PS : I am relatively new to Solr and SE in general but have years of Java 
coding and debugging experience. I'd love to help resolve this if someone can 
point me in the right direction and something more than 'hook it up to the 
debugger and start looking' would be appreciated.

  was (Author: mathakuna):
This is especially important in a spatial search since there's an important 
business case of finding the branches/locations for an entity within a spatial 
filtering query. While the multi-valued spatial field implementation provides 
for filtering and scoring, it does not return this information to the client at 
the moment.
  
 Return only matched multiValued field
 -

 Key: SOLR-3955
 URL: https://issues.apache.org/jira/browse/SOLR-3955
 Project: Solr
  Issue Type: Improvement
  Components: search
Affects Versions: 4.0
Reporter: Dotan Cohen
  Labels: features

 Assuming a multivalued, stored and indexed field named comment. When 
 performing a search, it would be very helpful if there were a way to return 
 only the values of comment which contain the match. For example:
 When searching for "gold", instead of getting this result:
 <doc>
 <arr name="comment">
 <str>There's a lady who's sure</str>
 <str>all that glitters is gold</str>
 <str>and she's buying a stairway to heaven</str>
 </arr>
 </doc>
 I would prefer to get this result:
 <doc>
 <arr name="comment">
 <str>all that glitters is gold</str>
 </arr>
 </doc>
 (pseudo-XML from memory, may not be accurate but illustrates the point)
 Thanks.




[jira] [Commented] (SOLR-4511) Repeater doesn't return correct index version to slaves

2013-03-01 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590647#comment-13590647
 ] 

Mark Miller commented on SOLR-4511:
---

Thanks Raul! Look forward to hearing your results.

 Repeater doesn't return correct index version to slaves
 ---

 Key: SOLR-4511
 URL: https://issues.apache.org/jira/browse/SOLR-4511
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Affects Versions: 4.1
Reporter: Raúl Grande
Assignee: Mark Miller
 Fix For: 4.2, 5.0

 Attachments: o8uzad.jpg, SOLR-4511.patch


 Related to SOLR-4471. I have a master-repeater-2slaves architecture. The 
 replication between master and repeater is working fine but slaves aren't 
 able to replicate because their master (repeater node) is returning an old 
 index version, but in the admin UI the version that the repeater has is correct.
 When I do http://localhost:17045/solr/replication?command=indexversion the 
 response is: <long name="generation">29037</long> when the version should be 
 29042.
 If I restart the repeater node this URL returns the correct index version, 
 but after a while it fails again.




[jira] [Commented] (SOLR-4521) Consider not using 'force' replications in SolrCloud recovery.

2013-03-01 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590664#comment-13590664
 ] 

Mark Miller commented on SOLR-4521:
---

So I'm going to flip this switch. It's looking good to me, and if there is 
still a problem it digs up, that's got to be a replication bug we want to find.

 Consider not using 'force' replications in SolrCloud recovery.
 --

 Key: SOLR-4521
 URL: https://issues.apache.org/jira/browse/SOLR-4521
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 4.2, 5.0


 Now that replication has some better tests and bugs fixed, we may be able to 
 stop forcing a replication on every replication call and let the snap pull 
 determine if one is actually needed. This never worked quite right in the 
 past, so I got around it by forcing a replication on recovery whether it was 
 needed or not - the peer sync phase made this not the biggest deal. However, 
 there are cases where it would still be useful - see SOLR-4519.




[jira] [Commented] (SOLR-2155) Geospatial search using geohash prefixes

2013-03-01 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590665#comment-13590665
 ] 

David Smiley commented on SOLR-2155:


Sujan, Sandeep,
The filter doesn't ultimately know which location matched, just that the document 
(business) did.  At the time you display the search results, which is only the top-X 
(20?  100?) you could then figure out which addresses matched and which is 
closest.  Since this is only done on the limited number of documents you're 
displaying, it should scale fine. If your docs many many locations then ideally 
Solr would have a mechanism to filter out the locations outside the filter from 
the multi-value so that you needn't do this yourself client-side.  That 
optimization is on my TODO list.

For Solr 3 use SOLR-2155 (see the banner at the top of this JIRA issue) and for 
Solr 4, see the location_rpt field in the default schema to get started.

 Geospatial search using geohash prefixes
 

 Key: SOLR-2155
 URL: https://issues.apache.org/jira/browse/SOLR-2155
 Project: Solr
  Issue Type: Improvement
Reporter: David Smiley
Assignee: David Smiley
 Attachments: GeoHashPrefixFilter.patch, GeoHashPrefixFilter.patch, 
 GeoHashPrefixFilter.patch, Solr2155-1.0.2-project.zip, 
 Solr2155-1.0.3-project.zip, Solr2155-1.0.4-project.zip, 
 Solr2155-for-1.0.2-3.x-port.patch, 
 SOLR-2155_GeoHashPrefixFilter_with_sorting_no_poly.patch, SOLR.2155.p3.patch, 
 SOLR.2155.p3tests.patch


 {panel:title=NOTICE} The status of this issue is a plugin for Solr 3.x 
 located here: https://github.com/dsmiley/SOLR-2155.  Look at the introductory 
 readme and download the plugin .jar file.  Lucene 4's new spatial module is 
 largely based on this code.  The Solr 4 glue for it should come very soon but 
 as of this writing it's hosted temporarily at https://github.com/spatial4j.  
 For more information on using SOLR-2155 with Solr 3, see 
 http://wiki.apache.org/solr/SpatialSearch#SOLR-2155  This JIRA issue is 
 closed because it won't be committed in its current form.
 {panel}
 There currently isn't a solution in Solr for doing geospatial filtering on 
 documents that have a variable number of points.  This scenario occurs when 
 there is location extraction (i.e. via a gazetteer) occurring on free text.  
 None, one, or many geospatial locations might be extracted from any given 
 document and users want to limit their search results to those occurring in a 
 user-specified area.
 I've implemented this by furthering the GeoHash based work in Lucene/Solr 
 with a geohash prefix based filter.  A geohash refers to a lat-lon box on the 
 earth.  Each successive character added further subdivides the box into a 4x8 
 (or 8x4 depending on the even/odd length of the geohash) grid.  The first 
 step in this scheme is figuring out which geohash grid squares cover the 
 user's search query.  I've added various extra methods to GeoHashUtils (and 
 added tests) to assist in this purpose.  The next step is an actual Lucene 
 Filter, GeoHashPrefixFilter, that uses these geohash prefixes in 
 TermsEnum.seek() to skip to relevant grid squares in the index.  Once a 
 matching geohash grid is found, the points therein are compared against the 
 user's query to see if it matches.  I created an abstraction GeoShape 
 extended by subclasses named PointDistance... and CartesianBox to support 
 different queried shapes so that the filter need not care about these details.
 This work was presented at LuceneRevolution in Boston on October 8th.
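 For readers unfamiliar with geohashes, here is a minimal sketch of the encoding 
 the description relies on (standard geohash base-32, not the GeoHashUtils code 
 from the patch): each extra character narrows the lat-lon box, so a shared 
 prefix means a shared containing grid cell.

```python
def geohash(lat, lon, precision=12):
    """Standard geohash encoding: alternately bisect the longitude and
    latitude ranges, emitting one base-32 character per 5 bits."""
    base32 = "0123456789bcdefghjkmnpqrstuvwxyz"
    lat_rng, lon_rng = [-90.0, 90.0], [-180.0, 180.0]
    chars, ch, bit, even = [], 0, 0, True
    while len(chars) < precision:
        rng, val = (lon_rng, lon) if even else (lat_rng, lat)
        mid = (rng[0] + rng[1]) / 2
        if val >= mid:
            ch = (ch << 1) | 1  # point is in the upper half of the range
            rng[0] = mid
        else:
            ch <<= 1            # point is in the lower half
            rng[1] = mid
        even = not even
        bit += 1
        if bit == 5:
            chars.append(base32[ch])
            ch, bit = 0, 0
    return "".join(chars)
```

 By construction, a shorter hash of the same point is always a prefix of a 
 longer one, which is what lets a prefix filter treat prefixes as containing 
 cells.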

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0-ea-b78) - Build # 4506 - Failure!

2013-03-01 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/4506/
Java: 32bit/jdk1.8.0-ea-b78 -server -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 6029 lines...]
[junit4:junit4] ERROR: JVM J0 ended with an exception, command line: 
/var/lib/jenkins/tools/java/32bit/jdk1.8.0-ea-b78/jre/bin/java -server 
-XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/heapdumps 
-Dtests.prefix=tests -Dtests.seed=A499629747AF4A5F -Xmx512M -Dtests.iters= 
-Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random 
-Dtests.postingsformat=random -Dtests.docvaluesformat=random 
-Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random 
-Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=4.2 
-Dtests.cleanthreads=perMethod 
-Djava.util.logging.config.file=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.slow=true 
-Dtests.asserts.gracious=false -Dtests.multiplier=3 -DtempDir=. 
-Djava.io.tmpdir=. 
-Djunit4.tempDir=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/analysis/uima/test/temp
 
-Dclover.db.dir=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/clover/db
 -Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Djava.security.policy=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/tools/junit4/tests.policy
 -Dlucene.version=4.2-SNAPSHOT -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Dfile.encoding=UTF-8 -classpath 
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/analysis/uima/classes/test:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/test-framework/classes/java:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/codecs/classes/java:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/analysis/common/lucene-analyzers-common-4.2-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/analysis/uima/src/test-files:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/analysis/uima/lib/Tagger-2.3.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/analysis/uima/lib/WhitespaceTokenizer-2.3.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/analysis/uima/lib/uimaj-core-2.3.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/core/classes/java:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/test-framework/lib/junit-4.10.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/test-framework/lib/randomizedtesting-runner-2.0.8.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/analysis/uima/classes/java:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-launcher.jar:/var/lib/jenkins/.ant/lib/ivy-2.3.0.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jai.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-swing.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-oro.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jmf.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-xalan2.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-javamail.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-resolver.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-testutil.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallat
ion/ANT_1.8.2/lib/ant-commons-logging.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-log4j.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-junit.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jsch.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-commons-net.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-bsf.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jdepend.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-netrexx.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-regexp.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-junit4.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-bcel.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-antlr.jar:/var/lib/jenkins/tools/java/32bit/jdk1.8.0-ea-b78/lib/tools.jar:/var/lib/jenkins/.ivy2/cache/com.carrotsearch.randomizedtesting/junit4-ant/jars/junit4-ant-2.0.8.jar
 -ea:org.apache.lucene... -ea:org.apache.solr... 

[jira] [Commented] (SOLR-2155) Geospatial search using geohash prefixes

2013-03-01 Thread Sandeep Tucknat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590679#comment-13590679
 ] 

Sandeep Tucknat commented on SOLR-2155:
---

First of all, thanks for the prompt response! It feels good to see you are 
supporting the approach we took in the interim :) I was just thinking that in 
order to do the ranking, the filter has to go through all the values of the 
field and it shouldn't be hard to persist this information and return to the 
client. We'll be waiting for that optimization to come through! Many thanks! 
let me know if I can help in any way (3 weeks in spatial or solr).

 Geospatial search using geohash prefixes
 

 Key: SOLR-2155
 URL: https://issues.apache.org/jira/browse/SOLR-2155
 Project: Solr
  Issue Type: Improvement
Reporter: David Smiley
Assignee: David Smiley
 Attachments: GeoHashPrefixFilter.patch, GeoHashPrefixFilter.patch, 
 GeoHashPrefixFilter.patch, Solr2155-1.0.2-project.zip, 
 Solr2155-1.0.3-project.zip, Solr2155-1.0.4-project.zip, 
 Solr2155-for-1.0.2-3.x-port.patch, 
 SOLR-2155_GeoHashPrefixFilter_with_sorting_no_poly.patch, SOLR.2155.p3.patch, 
 SOLR.2155.p3tests.patch


 {panel:title=NOTICE} The status of this issue is a plugin for Solr 3.x 
 located here: https://github.com/dsmiley/SOLR-2155.  Look at the introductory 
 readme and download the plugin .jar file.  Lucene 4's new spatial module is 
 largely based on this code.  The Solr 4 glue for it should come very soon but 
 as of this writing it's hosted temporarily at https://github.com/spatial4j.  
 For more information on using SOLR-2155 with Solr 3, see 
 http://wiki.apache.org/solr/SpatialSearch#SOLR-2155  This JIRA issue is 
 closed because it won't be committed in its current form.
 {panel}
 There currently isn't a solution in Solr for doing geospatial filtering on 
 documents that have a variable number of points.  This scenario occurs when 
 there is location extraction (i.e. via a gazetteer) occurring on free text.  
 None, one, or many geospatial locations might be extracted from any given 
 document and users want to limit their search results to those occurring in a 
 user-specified area.
 I've implemented this by furthering the GeoHash based work in Lucene/Solr 
 with a geohash prefix based filter.  A geohash refers to a lat-lon box on the 
 earth.  Each successive character added further subdivides the box into a 4x8 
 (or 8x4 depending on the even/odd length of the geohash) grid.  The first 
 step in this scheme is figuring out which geohash grid squares cover the 
 user's search query.  I've added various extra methods to GeoHashUtils (and 
 added tests) to assist in this purpose.  The next step is an actual Lucene 
 Filter, GeoHashPrefixFilter, that uses these geohash prefixes in 
 TermsEnum.seek() to skip to relevant grid squares in the index.  Once a 
 matching geohash grid is found, the points therein are compared against the 
 user's query to see if it matches.  I created an abstraction GeoShape 
 extended by subclasses named PointDistance... and CartesianBox to support 
 different queried shapes so that the filter need not care about these details.
 This work was presented at LuceneRevolution in Boston on October 8th.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: [JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0-ea-b78) - Build # 4506 - Failure!

2013-03-01 Thread Uwe Schindler
I had to kill this JVM with kill -9. It did not even respond to a stack trace 
request with kill -3.

Maybe it’s a new JDK 8 bug (a new self-corruption, as Robert calls it).

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Policeman Jenkins Server [mailto:jenk...@thetaphi.de]
 Sent: Friday, March 01, 2013 5:26 PM
 To: dev@lucene.apache.org; u...@thetaphi.de; sim...@apache.org;
 mikemcc...@apache.org
 Subject: [JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0-ea-b78) - Build #
 4506 - Failure!
 
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/4506/
 Java: 32bit/jdk1.8.0-ea-b78 -server -XX:+UseG1GC
 
 All tests passed
 
 Build Log:
 [...truncated 6029 lines...]
 [junit4:junit4] ERROR: JVM J0 ended with an exception, command line:
  [...truncated...]
 

[jira] [Commented] (SOLR-2155) Geospatial search using geohash prefixes

2013-03-01 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590706#comment-13590706
 ] 

David Smiley commented on SOLR-2155:


bq. I was just thinking that in order to do the ranking, the filter has to go 
through all the values of the field and it shouldn't be hard to persist this 
information and return to the client

It doesn't; this is a common misconception!  It does *not* calculate the 
distance between every matching indexed point and the center point of a circle 
query shape.  If it did, it wouldn't be so fast :-)  If you want an 
implementation like that, look at LatLonType, which is a brute-force algorithm 
and hence not as scalable, and which doesn't support multi-value either.  To 
help explain how this can possibly be: there are large grid cells that fit 
entirely within the query shape, and for those cells the index knows which 
documents they contain, so the filter simply matches those documents without 
knowing or calculating more precisely where the underlying points actually are. 
So it's not that the filter code has all this information and simply isn't 
exposing it to you.  I need to drive this point home at my next conference 
presentation at Lucene/Solr Revolution 2013 in May (San Diego, CA).
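A toy model of that cell logic (illustrative only; the real GeoHashPrefixFilter 
walks a geohash trie via TermsEnum.seek(), not a flat dict of cells): cells 
entirely inside the query circle accept all their documents with no distance 
math at all, and only boundary cells fall back to per-point checks.

```python
import math

def _dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def classify(cell, center, r):
    """cell = (x0, y0, x1, y1). Classify it against the circle (center, r)."""
    x0, y0, x1, y1 = cell
    corners = [(x0, y0), (x0, y1), (x1, y0), (x1, y1)]
    if all(_dist(c, center) <= r for c in corners):
        return "WITHIN"          # whole cell inside the query shape
    nearest = (min(max(center[0], x0), x1), min(max(center[1], y0), y1))
    if _dist(nearest, center) > r:
        return "DISJOINT"        # cell can be skipped entirely
    return "INTERSECTS"          # boundary cell: must verify each point

def filter_docs(docs_by_cell, center, r):
    """docs_by_cell maps a cell to [(doc_id, point), ...]."""
    matched = set()
    for cell, docs in docs_by_cell.items():
        kind = classify(cell, center, r)
        if kind == "WITHIN":
            # Accept every doc in the cell -- no per-point distance computed,
            # which is why the filter cannot report exact distances for free.
            matched.update(doc for doc, _pt in docs)
        elif kind == "INTERSECTS":
            matched.update(doc for doc, pt in docs if _dist(pt, center) <= r)
    return matched
```

The speed comes from the WITHIN branch: the more of the query shape that is 
covered by fully contained cells, the fewer per-point distance computations 
remain.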

 Geospatial search using geohash prefixes
 

 Key: SOLR-2155
 URL: https://issues.apache.org/jira/browse/SOLR-2155
 Project: Solr
  Issue Type: Improvement
Reporter: David Smiley
Assignee: David Smiley
 Attachments: GeoHashPrefixFilter.patch, GeoHashPrefixFilter.patch, 
 GeoHashPrefixFilter.patch, Solr2155-1.0.2-project.zip, 
 Solr2155-1.0.3-project.zip, Solr2155-1.0.4-project.zip, 
 Solr2155-for-1.0.2-3.x-port.patch, 
 SOLR-2155_GeoHashPrefixFilter_with_sorting_no_poly.patch, SOLR.2155.p3.patch, 
 SOLR.2155.p3tests.patch


 {panel:title=NOTICE} The status of this issue is a plugin for Solr 3.x 
 located here: https://github.com/dsmiley/SOLR-2155.  Look at the introductory 
 readme and download the plugin .jar file.  Lucene 4's new spatial module is 
 largely based on this code.  The Solr 4 glue for it should come very soon but 
 as of this writing it's hosted temporarily at https://github.com/spatial4j.  
 For more information on using SOLR-2155 with Solr 3, see 
 http://wiki.apache.org/solr/SpatialSearch#SOLR-2155  This JIRA issue is 
 closed because it won't be committed in its current form.
 {panel}
 There currently isn't a solution in Solr for doing geospatial filtering on 
 documents that have a variable number of points.  This scenario occurs when 
 there is location extraction (i.e. via a gazetteer) occurring on free text.  
 None, one, or many geospatial locations might be extracted from any given 
 document and users want to limit their search results to those occurring in a 
 user-specified area.
 I've implemented this by furthering the GeoHash based work in Lucene/Solr 
 with a geohash prefix based filter.  A geohash refers to a lat-lon box on the 
 earth.  Each successive character added further subdivides the box into a 4x8 
 (or 8x4 depending on the even/odd length of the geohash) grid.  The first 
 step in this scheme is figuring out which geohash grid squares cover the 
 user's search query.  I've added various extra methods to GeoHashUtils (and 
 added tests) to assist in this purpose.  The next step is an actual Lucene 
 Filter, GeoHashPrefixFilter, that uses these geohash prefixes in 
 TermsEnum.seek() to skip to relevant grid squares in the index.  Once a 
 matching geohash grid is found, the points therein are compared against the 
 user's query to see if it matches.  I created an abstraction GeoShape 
 extended by subclasses named PointDistance... and CartesianBox to support 
 different queried shapes so that the filter need not care about these details.
 This work was presented at LuceneRevolution in Boston on October 8th.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4505) Deadlock around SolrCoreState update lock.

2013-03-01 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590741#comment-13590741
 ] 

Commit Tag Bot commented on SOLR-4505:
--

[trunk commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revision&revision=1451653

SOLR-4505: Possible deadlock around SolrCoreState update lock.


 Deadlock around SolrCoreState update lock.
 --

 Key: SOLR-4505
 URL: https://issues.apache.org/jira/browse/SOLR-4505
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Erick Erickson
Priority: Minor
 Fix For: 4.2, 5.0

 Attachments: newstack.txt, newstack.txt, newstack.txt, 
 SOLR-4505.patch, SOLR-4505.patch, SOLR-4505.patch, SOLR-4505.patch


 Erick found a deadlock with his core stress tool - see 
 http://markmail.org/message/aq5hghbqia2uimgl

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4505) Deadlock around SolrCoreState update lock.

2013-03-01 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590752#comment-13590752
 ] 

Commit Tag Bot commented on SOLR-4505:
--

[branch_4x commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revision&revision=1451654

SOLR-4505: Possible deadlock around SolrCoreState update lock.


 Deadlock around SolrCoreState update lock.
 --

 Key: SOLR-4505
 URL: https://issues.apache.org/jira/browse/SOLR-4505
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Erick Erickson
Priority: Minor
 Fix For: 4.2, 5.0

 Attachments: newstack.txt, newstack.txt, newstack.txt, 
 SOLR-4505.patch, SOLR-4505.patch, SOLR-4505.patch, SOLR-4505.patch


 Erick found a deadlock with his core stress tool - see 
 http://markmail.org/message/aq5hghbqia2uimgl

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-4522) SimpleFacets.getListedTermCounts wont work for docvalues-only fields (or other custom fieldtypes)

2013-03-01 Thread Robert Muir (JIRA)
Robert Muir created SOLR-4522:
-

 Summary: SimpleFacets.getListedTermCounts wont work for 
docvalues-only fields (or other custom fieldtypes)
 Key: SOLR-4522
 URL: https://issues.apache.org/jira/browse/SOLR-4522
 Project: Solr
  Issue Type: Bug
  Components: search
Reporter: Robert Muir


I don't have a test yet, but this makes a bare TermQuery (used for facet 
refinement).

SimpleFacets.rangeCount is correct (it invokes FieldType.getRangeQuery...), but 
I'm concerned about the semantics here for terms.

Because FieldType.getFieldQuery is really more for an unstructured query, it 
shouldn't call this method. 

It's probably better if FieldType has an explicit method to make a structured 
term query (with getFieldQuery forwarding to that by default), but I don't want 
to make the FieldType API confusing either...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4505) Deadlock around SolrCoreState update lock.

2013-03-01 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590754#comment-13590754
 ] 

Mark Miller commented on SOLR-4505:
---

I've committed this - I'm juggling too many local issues and I want these 
things to bake as long as they possibly can before the 4.2 release.

 Deadlock around SolrCoreState update lock.
 --

 Key: SOLR-4505
 URL: https://issues.apache.org/jira/browse/SOLR-4505
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Erick Erickson
Priority: Minor
 Fix For: 4.2, 5.0

 Attachments: newstack.txt, newstack.txt, newstack.txt, 
 SOLR-4505.patch, SOLR-4505.patch, SOLR-4505.patch, SOLR-4505.patch


 Erick found a deadlock with his core stress tool - see 
 http://markmail.org/message/aq5hghbqia2uimgl

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-4505) Deadlock around SolrCoreState update lock.

2013-03-01 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reassigned SOLR-4505:
-

Assignee: Mark Miller  (was: Erick Erickson)

 Deadlock around SolrCoreState update lock.
 --

 Key: SOLR-4505
 URL: https://issues.apache.org/jira/browse/SOLR-4505
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.2, 5.0

 Attachments: newstack.txt, newstack.txt, newstack.txt, 
 SOLR-4505.patch, SOLR-4505.patch, SOLR-4505.patch, SOLR-4505.patch


 Erick found a deadlock with his core stress tool - see 
 http://markmail.org/message/aq5hghbqia2uimgl

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4511) Repeater doesn't return correct index version to slaves

2013-03-01 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590758#comment-13590758
 ] 

Mark Miller commented on SOLR-4511:
---

I've committed this and merged back to 4x - unfortunately, I used the wrong 
commit msg for the 4.x merge, so it won't be tagged in JIRA correctly.

Let me know how it works, Raúl.

 Repeater doesn't return correct index version to slaves
 ---

 Key: SOLR-4511
 URL: https://issues.apache.org/jira/browse/SOLR-4511
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Affects Versions: 4.1
Reporter: Raúl Grande
Assignee: Mark Miller
 Fix For: 4.2, 5.0

 Attachments: o8uzad.jpg, SOLR-4511.patch


 Related to SOLR-4471. I have a master-repeater-2slaves architecture. The 
 replication between master and repeater is working fine but slaves aren't 
 able to replicate because their master (repeater node) is returning an old 
 index version, but in admin UI the version that repeater have is correct.
 When I do http://localhost:17045/solr/replication?command=indexversion 
 response is: <long name="generation">29037</long> when the version should be 
 29042
 If I restart the repeater node this URL returns the correct index version, 
 but after a while it fails again.
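 To monitor for the staleness described above, one could poll the replication 
 handler and parse the <long> values it returns. A minimal parsing sketch (the 
 XML below is an illustrative fragment modeled on the snippet in this report; 
 the indexversion number is made up, and the real response also includes a 
 responseHeader):

```python
import xml.etree.ElementTree as ET

# Illustrative fragment shaped like the handler's reply quoted above; the
# generation value is the one from this report, the indexversion is invented.
RESPONSE = (
    '<response>'
    '<long name="indexversion">1362063600000</long>'
    '<long name="generation">29037</long>'
    '</response>'
)

def parse_replication_status(xml_text):
    """Map each <long name="..."> element to its integer value."""
    root = ET.fromstring(xml_text)
    return {el.get("name"): int(el.text) for el in root.findall("long")}
```

 Comparing the parsed generation against the one shown in the admin UI would 
 confirm whether the repeater is serving a stale commit point.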

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4505) Deadlock around SolrCoreState update lock.

2013-03-01 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590760#comment-13590760
 ] 

Commit Tag Bot commented on SOLR-4505:
--

[branch_4x commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revision&revision=1451657

SOLR-4505: CHANGES entry


 Deadlock around SolrCoreState update lock.
 --

 Key: SOLR-4505
 URL: https://issues.apache.org/jira/browse/SOLR-4505
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.2, 5.0

 Attachments: newstack.txt, newstack.txt, newstack.txt, 
 SOLR-4505.patch, SOLR-4505.patch, SOLR-4505.patch, SOLR-4505.patch


 Erick found a deadlock with his core stress tool - see 
 http://markmail.org/message/aq5hghbqia2uimgl

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4511) Repeater doesn't return correct index version to slaves

2013-03-01 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590761#comment-13590761
 ] 

Commit Tag Bot commented on SOLR-4511:
--

[trunk commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revision&revision=1451659

SOLR-4511: When a new index is replicated into place, we need to update the 
most recent replicatable index point without doing a commit. This is important 
for repeater use cases, as well as when nodes may switch master/slave roles.


 Repeater doesn't return correct index version to slaves
 ---

 Key: SOLR-4511
 URL: https://issues.apache.org/jira/browse/SOLR-4511
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Affects Versions: 4.1
Reporter: Raúl Grande
Assignee: Mark Miller
 Fix For: 4.2, 5.0

 Attachments: o8uzad.jpg, SOLR-4511.patch


 Related to SOLR-4471. I have a master-repeater-2slaves architecture. The 
 replication between master and repeater is working fine but slaves aren't 
 able to replicate because their master (repeater node) is returning an old 
 index version, but in admin UI the version that repeater have is correct.
 When I do http://localhost:17045/solr/replication?command=indexversion 
 response is: <long name="generation">29037</long> when the version should be 
 29042
 If I restart the repeater node this URL returns the correct index version, 
 but after a while it fails again.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4505) Deadlock around SolrCoreState update lock.

2013-03-01 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590762#comment-13590762
 ] 

Commit Tag Bot commented on SOLR-4505:
--

[trunk commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revision&revision=1451656

SOLR-4505: CHANGES entry


 Deadlock around SolrCoreState update lock.
 --

 Key: SOLR-4505
 URL: https://issues.apache.org/jira/browse/SOLR-4505
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.2, 5.0

 Attachments: newstack.txt, newstack.txt, newstack.txt, 
 SOLR-4505.patch, SOLR-4505.patch, SOLR-4505.patch, SOLR-4505.patch


 Erick found a deadlock with his core stress tool - see 
 http://markmail.org/message/aq5hghbqia2uimgl




[jira] [Updated] (SOLR-4522) SimpleFacets.getListedTermCounts wont work for docvalues-only fields (or other custom fieldtypes)

2013-03-01 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated SOLR-4522:
--

Attachment: SOLR-4522.patch

Here's my idea I guess... no test yet though.

 SimpleFacets.getListedTermCounts wont work for docvalues-only fields (or 
 other custom fieldtypes)
 -

 Key: SOLR-4522
 URL: https://issues.apache.org/jira/browse/SOLR-4522
 Project: Solr
  Issue Type: Bug
  Components: search
Reporter: Robert Muir
 Attachments: SOLR-4522.patch


 Don't have a test yet: but this makes bare TermQuery (used for facet 
 refinement).
 SimpleFacets.rangeCount is correct (it invokes FieldType.getRangeQuery...), 
 but I'm concerned about the semantics here for terms.
 Because FieldType.getFieldQuery is really more for an unstructured query, it 
 shouldnt call this method. 
 Its probably better if FieldType has an explicit method to make a structured 
 term query (with getFieldQuery forwarding to that by default), but I dont 
 want to make FieldType api confusing either...




[jira] [Commented] (SOLR-4521) Consider not using 'force' replications in SolrCloud recovery.

2013-03-01 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590781#comment-13590781
 ] 

Commit Tag Bot commented on SOLR-4521:
--

[branch_4x commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revision&revision=1451663

SOLR-4521: Stop using the 'force' option for recovery replication. This will 
keep some less common unnecessary replications from happening.


 Consider not using 'force' replications in SolrCloud recovery.
 --

 Key: SOLR-4521
 URL: https://issues.apache.org/jira/browse/SOLR-4521
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 4.2, 5.0


 Now that replication has some better tests and bugs fixed, we may be able to 
 stop forcing a replication on every replication call and let the snap pull 
 determine if one is actually needed. This never worked quite right in the 
 past, so I got around it by forcing a replication on recovery whether it was 
 needed or not - the peer sync phase made this not the biggest deal. However, 
 there are cases where it would still be useful - see SOLR-4519.




[jira] [Commented] (SOLR-4521) Consider not using 'force' replications in SolrCloud recovery.

2013-03-01 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590782#comment-13590782
 ] 

Commit Tag Bot commented on SOLR-4521:
--

[branch_4x commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revision&revision=1451661

SOLR-4521: Stop using the 'force' option for recovery replication. This will 
keep some less common unnecessary replications from happening.


 Consider not using 'force' replications in SolrCloud recovery.
 --

 Key: SOLR-4521
 URL: https://issues.apache.org/jira/browse/SOLR-4521
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 4.2, 5.0


 Now that replication has some better tests and bugs fixed, we may be able to 
 stop forcing a replication on every replication call and let the snap pull 
 determine if one is actually needed. This never worked quite right in the 
 past, so I got around it by forcing a replication on recovery whether it was 
 needed or not - the peer sync phase made this not the biggest deal. However, 
 there are cases where it would still be useful - see SOLR-4519.




[jira] [Commented] (SOLR-4521) Consider not using 'force' replications in SolrCloud recovery.

2013-03-01 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590783#comment-13590783
 ] 

Commit Tag Bot commented on SOLR-4521:
--

[trunk commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revision&revision=1451660

SOLR-4521: Stop using the 'force' option for recovery replication. This will 
keep some less common unnecessary replications from happening.


 Consider not using 'force' replications in SolrCloud recovery.
 --

 Key: SOLR-4521
 URL: https://issues.apache.org/jira/browse/SOLR-4521
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 4.2, 5.0


 Now that replication has some better tests and bugs fixed, we may be able to 
 stop forcing a replication on every replication call and let the snap pull 
 determine if one is actually needed. This never worked quite right in the 
 past, so I got around it by forcing a replication on recovery whether it was 
 needed or not - the peer sync phase made this not the biggest deal. However, 
 there are cases where it would still be useful - see SOLR-4519.




[jira] [Updated] (SOLR-4196) Untangle XML-specific nature of Config and Container classes

2013-03-01 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-4196:
-

Attachment: SOLR-4196.patch

Final version. I plan to commit this today or tomorrow, let it bake for a bit 
and merge into 4x unless there are objections. The fix for SOLR-4505 took care 
of the deadlocks I was seeing in the tests...

 Untangle XML-specific nature of Config and Container classes
 

 Key: SOLR-4196
 URL: https://issues.apache.org/jira/browse/SOLR-4196
 Project: Solr
  Issue Type: Improvement
  Components: Schema and Analysis
Reporter: Erick Erickson
Assignee: Erick Erickson
Priority: Minor
 Fix For: 4.2, 5.0

 Attachments: SOLR-4196.patch, SOLR-4196.patch, SOLR-4196.patch, 
 SOLR-4196.patch, SOLR-4196.patch, SOLR-4196.patch, SOLR-4196.patch, 
 SOLR-4196.patch, SOLR-4196.patch, SOLR-4196.patch, SOLR-4196.patch, 
 SOLR-4196.patch, SOLR-4196.patch, SOLR-4196.patch, StressTest.zip, 
 StressTest.zip, StressTest.zip, StressTest.zip


 sub-task for SOLR-4083. If we're going to try to obsolete solr.xml, we need 
 to pull all of the specific XML processing out of Config and Container. 
 Currently, we refer to xpaths all over the place. This JIRA is about 
 providing a thunking layer to isolate the XML-esque nature of solr.xml and 
 allow a simple properties file to be used instead which will lead, 
 eventually, to solr.xml going away.




[jira] [Commented] (SOLR-2155) Geospatial search using geohash prefixes

2013-03-01 Thread Sandeep Tucknat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590810#comment-13590810
 ] 

Sandeep Tucknat commented on SOLR-2155:
---

Once again, thanks for the prompt response AND the information! It makes sense: 
you have a forward index of grid cells to documents, so no geo comparisons are 
required at run time. I won't be able to attend the conference but will 
definitely look forward to your presentation!

 Geospatial search using geohash prefixes
 

 Key: SOLR-2155
 URL: https://issues.apache.org/jira/browse/SOLR-2155
 Project: Solr
  Issue Type: Improvement
Reporter: David Smiley
Assignee: David Smiley
 Attachments: GeoHashPrefixFilter.patch, GeoHashPrefixFilter.patch, 
 GeoHashPrefixFilter.patch, Solr2155-1.0.2-project.zip, 
 Solr2155-1.0.3-project.zip, Solr2155-1.0.4-project.zip, 
 Solr2155-for-1.0.2-3.x-port.patch, 
 SOLR-2155_GeoHashPrefixFilter_with_sorting_no_poly.patch, SOLR.2155.p3.patch, 
 SOLR.2155.p3tests.patch


 {panel:title=NOTICE} The status of this issue is a plugin for Solr 3.x 
 located here: https://github.com/dsmiley/SOLR-2155.  Look at the introductory 
 readme and download the plugin .jar file.  Lucene 4's new spatial module is 
 largely based on this code.  The Solr 4 glue for it should come very soon but 
 as of this writing it's hosted temporarily at https://github.com/spatial4j.  
 For more information on using SOLR-2155 with Solr 3, see 
 http://wiki.apache.org/solr/SpatialSearch#SOLR-2155  This JIRA issue is 
 closed because it won't be committed in its current form.
 {panel}
 There currently isn't a solution in Solr for doing geospatial filtering on 
 documents that have a variable number of points.  This scenario occurs when 
 there is location extraction (i.e. via a gazetteer) occurring on free text.  
 None, one, or many geospatial locations might be extracted from any given 
 document and users want to limit their search results to those occurring in a 
 user-specified area.
 I've implemented this by furthering the GeoHash based work in Lucene/Solr 
 with a geohash prefix based filter.  A geohash refers to a lat-lon box on the 
 earth.  Each successive character added further subdivides the box into a 4x8 
 (or 8x4 depending on the even/odd length of the geohash) grid.  The first 
 step in this scheme is figuring out which geohash grid squares cover the 
 user's search query.  I've added various extra methods to GeoHashUtils (and 
 added tests) to assist in this purpose.  The next step is an actual Lucene 
 Filter, GeoHashPrefixFilter, that uses these geohash prefixes in 
 TermsEnum.seek() to skip to relevant grid squares in the index.  Once a 
 matching geohash grid is found, the points therein are compared against the 
 user's query to see if it matches.  I created an abstraction GeoShape 
 extended by subclasses named PointDistance... and CartesianBox to support 
 different queried shapes so that the filter need not care about these details.
 This work was presented at LuceneRevolution in Boston on October 8th.
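[Editor's note: the successive-subdivision scheme described above can be sketched in a few lines. This is an illustrative stand-alone encoder, not the GeoHashUtils code from the patch; it shows the key property the prefix filter relies on: each extra character narrows the lat-lon box, so a shorter geohash of a point is always a prefix of a longer one for the same point:

```python
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"  # standard geohash alphabet

def geohash_encode(lat, lon, precision=11):
    """Encode a lat/lon point as a geohash by repeated halving of the box."""
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    out, ch, bit, even = [], 0, 0, True  # bits alternate: lon first, then lat
    while len(out) < precision:
        if even:  # refine longitude
            mid = (lon_lo + lon_hi) / 2
            if lon >= mid:
                ch, lon_lo = (ch << 1) | 1, mid
            else:
                ch, lon_hi = ch << 1, mid
        else:     # refine latitude
            mid = (lat_lo + lat_hi) / 2
            if lat >= mid:
                ch, lat_lo = (ch << 1) | 1, mid
            else:
                ch, lat_hi = ch << 1, mid
        even = not even
        bit += 1
        if bit == 5:  # every 5 bits become one base32 character
            out.append(BASE32[ch])
            bit, ch = 0, 0
    return "".join(out)
```

The prefix property is what lets the filter seek through the terms dictionary by grid square instead of comparing every indexed point.]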




Positions in EdgeNgramTokenFilter

2013-03-01 Thread Walter Underwood
I'm fixing position increment in EdgeNgramTokenFilter to act like synonyms, 
with each ngram at the same position as the source token. Currently, the 
position is incremented for each output token, which breaks phrase searching 
with edge ngrams.

I could not find a current Jira issue for this. Is there one?

We are still on 3.3, but I'll submit a patch for 4.x.

Thanks to whoever converted EdgeNgramTokenFilter to use TokenStream.

wunder
--
Walter Underwood
wun...@wunderwood.org
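[Editor's note: to make the intent concrete, here is a small illustrative sketch, not the actual Lucene filter, of the two position-increment policies. The proposed synonym-style behavior emits every gram of a token at the same position (increment 0 after the first), while the current behavior advances the position for each gram:

```python
def edge_ngram_positions(token, min_gram=2, max_gram=4, synonym_style=True):
    """Return (gram, position_increment) pairs for the leading edge of `token`.

    synonym_style=True  -> grams after the first get increment 0 (same position),
    synonym_style=False -> every gram advances the position (current behavior).
    """
    grams = [token[:n] for n in range(min_gram, min(max_gram, len(token)) + 1)]
    out = []
    for i, gram in enumerate(grams):
        inc = 1 if (i == 0 or not synonym_style) else 0
        out.append((gram, inc))
    return out
```

With the synonym-style increments, a phrase query over the original tokens still matches, because the grams do not push later tokens to higher positions.]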





[jira] [Commented] (SOLR-2166) termvector component has strange syntax

2013-03-01 Thread Walter Underwood (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590822#comment-13590822
 ] 

Walter Underwood commented on SOLR-2166:


JSON does not require that keys are unique, but the ECMA spec says to only keep 
the last value.

Because different JSON parsers interpret duplicate keys differently, this seems 
like something to avoid.

For more details, see: 
http://www.tbray.org/ongoing/When/201x/2013/02/21/JSON-Lesson
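[Editor's note: the keep-the-last-value behavior, and a way to reject duplicates outright, can be demonstrated with any common parser; Python's json module happens to follow the same rule the ECMA spec describes. The `reject_duplicates` hook is an illustrative helper:

```python
import json

doc = '{"count": 1, "count": 2}'

# Default parsing silently keeps the last value for a duplicated key.
assert json.loads(doc) == {"count": 2}

def reject_duplicates(pairs):
    """object_pairs_hook that raises instead of silently dropping values."""
    seen = {}
    for key, value in pairs:
        if key in seen:
            raise ValueError("duplicate key: %r" % key)
        seen[key] = value
    return seen

# Opting in to strict behavior turns the ambiguity into an error:
# json.loads(doc, object_pairs_hook=reject_duplicates)  -> raises ValueError
```

Since different parsers disagree, a response format that relies on duplicate keys is exactly the kind of thing this issue argues against.]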

 termvector component has strange syntax
 ---

 Key: SOLR-2166
 URL: https://issues.apache.org/jira/browse/SOLR-2166
 Project: Solr
  Issue Type: Improvement
Reporter: Yonik Seeley
 Attachments: SOLR-2166.diff, workaround-managled-SOLR-2166.diff


 The termvector  response format could really be improved.




[jira] [Commented] (SOLR-4505) Deadlock around SolrCoreState update lock.

2013-03-01 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590825#comment-13590825
 ] 

Erick Erickson commented on SOLR-4505:
--

You beat me to it. My stress test ran all night last night without any 
problems, so we're good here as far as I can see.

 Deadlock around SolrCoreState update lock.
 --

 Key: SOLR-4505
 URL: https://issues.apache.org/jira/browse/SOLR-4505
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.2, 5.0

 Attachments: newstack.txt, newstack.txt, newstack.txt, 
 SOLR-4505.patch, SOLR-4505.patch, SOLR-4505.patch, SOLR-4505.patch


 Erick found a deadlock with his core stress tool - see 
 http://markmail.org/message/aq5hghbqia2uimgl




Re: Positions in EdgeNgramTokenFilter

2013-03-01 Thread Robert Muir
Walter, sounds very interesting. Maybe just use this issue:
https://issues.apache.org/jira/browse/LUCENE-3907 ?

On Fri, Mar 1, 2013 at 10:41 AM, Walter Underwood wun...@wunderwood.org wrote:
 I'm fixing position increment in EdgeNgramTokenFilter to act like synonyms,
 with each ngram at the same position as the source token. Currently, the
 position is incremented for each output token, which breaks phrase searching
 with edge ngrams.

 I could not find a current Jira issue for this. Is there one?

 We are still on 3.3, but I'll submit a patch for 4.x.

 Thanks to whoever converted EdgeNgramTokenFilter to use TokenStream.

 wunder
 --
 Walter Underwood
 wun...@wunderwood.org







[jira] [Commented] (SOLR-2728) DIH status: Total Documents Processed field disappears

2013-03-01 Thread Prachi Rath (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590832#comment-13590832
 ] 

Prachi Rath commented on SOLR-2728:
---

Hi, we were planning to use the Total Documents Processed count to show the user 
the progress of a full import, and then I stumbled upon this defect.
Is there an alternate way to get the Total Documents Processed at a given time 
during a full import?

thanks


 DIH status: Total Documents Processed field disappears
 

 Key: SOLR-2728
 URL: https://issues.apache.org/jira/browse/SOLR-2728
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler
Affects Versions: 3.2
 Environment: Linux idxst0-a 2.6.18-238.12.1.el5.centos.plusxen #1 SMP 
 Wed Jun 1 11:57:54 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
 java version 1.6.0_26
 Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
 Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode)
Reporter: Shawn Heisey
Assignee: James Dyer
Priority: Minor
 Fix For: 4.2


 As soon as the external data source is finished, the Total Documents 
 Processed field disappears from the /dataimport status response.  It only 
 returns once indexing, committing, and optimizing are complete and status 
 changes to idle.




Re: Positions in EdgeNgramTokenFilter

2013-03-01 Thread Walter Underwood
That is a pretty broad bug, but this fix is somewhere in "improve ngrams". 
Maybe a specific bug linked to that one?

Incrementing positions might be the right thing for pure ngrams. 

wunder

On Mar 1, 2013, at 11:02 AM, Robert Muir wrote:

 Walter, sounds very interesting. Maybe just use this issue:
 https://issues.apache.org/jira/browse/LUCENE-3907 ?
 
 On Fri, Mar 1, 2013 at 10:41 AM, Walter Underwood wun...@wunderwood.org 
 wrote:
 I'm fixing position increment in EdgeNgramTokenFilter to act like synonyms,
 with each ngram at the same position as the source token. Currently, the
 position is incremented for each output token, which breaks phrase searching
 with edge ngrams.
 
 I could not find a current Jira issue for this. Is there one?
 
 We are still on 3.3, but I'll submit a patch for 4.x.
 
 Thanks to whoever converted EdgeNgramTokenFilter to use TokenStream.
 
 wunder
 --
 Walter Underwood
 wun...@wunderwood.org
 







Re: Positions in EdgeNgramTokenFilter

2013-03-01 Thread Robert Muir
sure, you could just make a new issue and link it to that one if you
like. thanks for looking at this!

On Fri, Mar 1, 2013 at 2:15 PM, Walter Underwood wun...@wunderwood.org wrote:
 That is a pretty broad bug, but this fix is somewhere in improve ngrams. 
 Maybe a specific bug linked to that one?

 Incrementing positions might be the right thing for pure ngrams.

 wunder

 On Mar 1, 2013, at 11:02 AM, Robert Muir wrote:

 Walter, sounds very interesting. Maybe just use this issue:
 https://issues.apache.org/jira/browse/LUCENE-3907 ?

 On Fri, Mar 1, 2013 at 10:41 AM, Walter Underwood wun...@wunderwood.org 
 wrote:
 I'm fixing position increment in EdgeNgramTokenFilter to act like synonyms,
 with each ngram at the same position as the source token. Currently, the
 position is incremented for each output token, which breaks phrase searching
 with edge ngrams.

 I could not find a current Jira issue for this. Is there one?

 We are still on 3.3, but I'll submit a patch for 4.x.

 Thanks to whoever converted EdgeNgramTokenFilter to use TokenStream.

 wunder
 --
 Walter Underwood
 wun...@wunderwood.org








[jira] [Commented] (SOLR-4514) Need tests for OpenExchangeRatesOrgProvider

2013-03-01 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590856#comment-13590856
 ] 

Commit Tag Bot commented on SOLR-4514:
--

[trunk commit] Chris M. Hostetter
http://svn.apache.org/viewvc?view=revision&revision=1451691

SOLR-4514: better tests for CurrencyField using OpenExchangeRatesOrgProvider


 Need tests for OpenExchangeRatesOrgProvider
 ---

 Key: SOLR-4514
 URL: https://issues.apache.org/jira/browse/SOLR-4514
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man
 Attachments: SOLR-4514.patch, SOLR-4514.patch


 the schema.xml used by CurrencyFieldTest declares a fieldType named 
 openexchangeratesorg_currency using OpenExchangeRatesOrgProvider pointed at 
 a local copy of open-exchange-rates.json, but that field type is completely 
 unused, so nothing about this provider's behavior is ever really tested.
 We should change the test such that all of the functionality tested against 
 the amount field is also tested against some new field using this OER 
 provider based fieldType, where the copied data (and static exchange rates) 
 are identical.
 (easiest way would probably be to introduce a test variable for the field 
 name and let a new subclass override it)




[jira] [Commented] (SOLR-4514) Need tests for OpenExchangeRatesOrgProvider

2013-03-01 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590894#comment-13590894
 ] 

Commit Tag Bot commented on SOLR-4514:
--

[branch_4x commit] Chris M. Hostetter
http://svn.apache.org/viewvc?view=revision&revision=1451699

SOLR-4514: better tests for CurrencyField using OpenExchangeRatesOrgProvider 
(merge r1451691)


 Need tests for OpenExchangeRatesOrgProvider
 ---

 Key: SOLR-4514
 URL: https://issues.apache.org/jira/browse/SOLR-4514
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man
 Attachments: SOLR-4514.patch, SOLR-4514.patch


 the schema.xml used by CurrencyFieldTest declares a fieldType named 
 openexchangeratesorg_currency using OpenExchangeRatesOrgProvider pointed at 
 a local copy of open-exchange-rates.json, but that field type is completely 
 unused, so nothing about this provider's behavior is ever really tested.
 We should change the test such that all of the functionality tested against 
 the amount field is also tested against some new field using this OER 
 provider based fieldType, where the copied data (and static exchange rates) 
 are identical.
 (easiest way would probably be to introduce a test variable for the field 
 name and let a new subclass override it)




[jira] [Commented] (SOLR-4373) In multicore, lib directives in solrconfig.xml cause conflict and clobber directives from earlier cores

2013-03-01 Thread Rene Nederhand (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590908#comment-13590908
 ] 

Rene Nederhand commented on SOLR-4373:
--

I'm still experiencing the same problems (rev 1450937): Libraries cannot be 
found. 

Moreover, the workaround of adding {{coreLoadThreads=1}} to solr.xml does not 
work in SolrCloud. I get: "SolrCloud requires a value of at least 2 in solr.xml 
for coreLoadThreads."

One additional observation: The first time I create a core it fails, but 
the second time it succeeds. Does anyone know why this happens? FYI: I am 
attaching the test script I am using to upload and link the config and create 
the cores.

 In multicore, lib directives in solrconfig.xml cause conflict and clobber 
 directives from earlier cores
 ---

 Key: SOLR-4373
 URL: https://issues.apache.org/jira/browse/SOLR-4373
 Project: Solr
  Issue Type: Bug
  Components: multicore
Affects Versions: 4.1
Reporter: Alexandre Rafalovitch
Priority: Blocker
  Labels: lib, multicore
 Fix For: 4.2, 5.0, 4.1.1

 Attachments: multicore-bug.zip


 Having lib directives in the solrconfig.xml files of multiple cores can cause 
 problems when using multi-threaded core initialization -- which is the 
 default starting with Solr 4.1.
 The problem manifests itself as init errors in the logs related to not being 
 able to find classes located in plugin jars, even though earlier log messages 
 indicated that those jars had been added to the classpath.
 One work around is to set {{coreLoadThreads=1}} in your solr.xml file -- 
 forcing single threaded core initialization.  For example...
 {code}
 <?xml version="1.0" encoding="utf-8" ?>
 <solr coreLoadThreads="1">
   <cores adminPath="/admin/cores">
     <core name="core1" instanceDir="core1" />
     <core name="core2" instanceDir="core2" />
   </cores>
 </solr>
 {code}
 (Similar problems may occur if multiple cores are initialized concurrently 
 using the /admin/cores handler)




[jira] [Updated] (SOLR-4373) In multicore, lib directives in solrconfig.xml cause conflict and clobber directives from earlier cores

2013-03-01 Thread Rene Nederhand (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rene Nederhand updated SOLR-4373:
-

Attachment: testscript.sh

Test script to create a collection, upload/link a config and create two cores.

 In multicore, lib directives in solrconfig.xml cause conflict and clobber 
 directives from earlier cores
 ---

 Key: SOLR-4373
 URL: https://issues.apache.org/jira/browse/SOLR-4373
 Project: Solr
  Issue Type: Bug
  Components: multicore
Affects Versions: 4.1
Reporter: Alexandre Rafalovitch
Priority: Blocker
  Labels: lib, multicore
 Fix For: 4.2, 5.0, 4.1.1

 Attachments: multicore-bug.zip, testscript.sh


 Having lib directives in the solrconfig.xml files of multiple cores can cause 
 problems when using multi-threaded core initialization -- which is the 
 default starting with Solr 4.1.
 The problem manifests itself as init errors in the logs related to not being 
 able to find classes located in plugin jars, even though earlier log messages 
 indicated that those jars had been added to the classpath.
 One work around is to set {{coreLoadThreads=1}} in your solr.xml file -- 
 forcing single threaded core initialization.  For example...
 {code}
 <?xml version="1.0" encoding="utf-8" ?>
 <solr coreLoadThreads="1">
   <cores adminPath="/admin/cores">
     <core name="core1" instanceDir="core1" />
     <core name="core2" instanceDir="core2" />
   </cores>
 </solr>
 {code}
 (Similar problems may occur if multiple cores are initialized concurrently 
 using the /admin/cores handler)




[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #787: POMs out of sync

2013-03-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/787/

2 tests failed.
REGRESSION:  org.apache.solr.cloud.BasicDistributedZk2Test.testDistribSearch

Error Message:
No registered leader was found, collection:collection1 slice:shard1

Stack Trace:
org.apache.solr.common.SolrException: No registered leader was found, 
collection:collection1 slice:shard1
at 
__randomizedtesting.SeedInfo.seed([3C41428A43FC06B6:BDA7CC9234A3668A]:0)
at 
org.apache.solr.common.cloud.ZkStateReader.getLeaderRetry(ZkStateReader.java:430)
at 
org.apache.solr.cloud.BasicDistributedZk2Test.brindDownShardIndexSomeDocsAndRecover(BasicDistributedZk2Test.java:295)
at 
org.apache.solr.cloud.BasicDistributedZk2Test.doTest(BasicDistributedZk2Test.java:116)


REGRESSION:  org.apache.solr.cloud.SyncSliceTest.testDistribSearch

Error Message:
Test Setup Failure: shard1 should have just been set up to be inconsistent - 
but it's still consistent. Leader:http://127.0.0.1:51031/collection1skip 
list:[CloudJettyRunner [url=http://127.0.0.1:41150/collection1], 
CloudJettyRunner [url=http://127.0.0.1:23180/collection1]]

Stack Trace:
java.lang.AssertionError: Test Setup Failure: shard1 should have just been set 
up to be inconsistent - but it's still consistent. 
Leader:http://127.0.0.1:51031/collection1skip list:[CloudJettyRunner 
[url=http://127.0.0.1:41150/collection1], CloudJettyRunner 
[url=http://127.0.0.1:23180/collection1]]
at 
__randomizedtesting.SeedInfo.seed([46C828C8A28BD48A:C72EA6D0D5D4B4B6]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at org.apache.solr.cloud.SyncSliceTest.doTest(SyncSliceTest.java:212)




Build Log:
[...truncated 22620 lines...]




[jira] [Commented] (SOLR-4373) In multicore, lib directives in solrconfig.xml cause conflict and clobber directives from earlier cores

2013-03-01 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13591071#comment-13591071
 ] 

Hoss Man commented on SOLR-4373:


bq. I'm still experiencing the same problems (rev 1450937): Libraries cannot be 
found. 
...
bq. The first time I create a core it fails, however the second time it 
succeeds.

On the same instance? Or do you mean you get an error creating shard1 on port 
8080, but when you create shard2 on 8081 that works?

That sounds like it is probably a distinct, semi-related bug.

can you please file a new jira, and include: both the svn rev _and_ branch you 
are using (trunk vs 4x), the details of the configs you are using, the details 
of the errors you get (full logs would be nice so we can see the core startup 
info), and when/where you get those errors in the process of your script (ie: 
is it when creating the collection? linking the config to the collection? 
creating the cores?)

 In multicore, lib directives in solrconfig.xml cause conflict and clobber 
 directives from earlier cores
 ---

 Key: SOLR-4373
 URL: https://issues.apache.org/jira/browse/SOLR-4373
 Project: Solr
  Issue Type: Bug
  Components: multicore
Affects Versions: 4.1
Reporter: Alexandre Rafalovitch
Priority: Blocker
  Labels: lib, multicore
 Fix For: 4.2, 5.0, 4.1.1

 Attachments: multicore-bug.zip, testscript.sh


 Having lib directives in the solrconfig.xml files of multiple cores can cause 
 problems when using multi-threaded core initialization -- which is the 
 default starting with Solr 4.1.
 The problem manifests itself as init errors in the logs related to not being 
 able to find classes located in plugin jars, even though earlier log messages 
 indicated that those jars had been added to the classpath.
 One work around is to set {{coreLoadThreads=1}} in your solr.xml file -- 
 forcing single threaded core initialization.  For example...
 {code}
 <?xml version="1.0" encoding="utf-8" ?>
 <solr coreLoadThreads="1">
   <cores adminPath="/admin/cores">
     <core name="core1" instanceDir="core1" />
     <core name="core2" instanceDir="core2" />
   </cores>
 </solr>
 {code}
 (Similar problems may occur if multiple cores are initialized concurrently 
 using the /admin/cores handler)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4458) accespt uppercase ASC and DESC as sort orders

2013-03-01 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13591080#comment-13591080
 ] 

Commit Tag Bot commented on SOLR-4458:
--

[trunk commit] Chris M. Hostetter
http://svn.apache.org/viewvc?view=revisionrevision=1451765

SOLR-4458: Sort directions (asc, desc) are now case insensitive


 accespt uppercase ASC and DESC as sort orders
 -

 Key: SOLR-4458
 URL: https://issues.apache.org/jira/browse/SOLR-4458
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man
Priority: Minor
 Attachments: SOLR-4458.patch, SOLR-4458.patch, SOLR-4458.patch


 at least one user has gotten confused by doing a search like this...
 http://localhost:8983/solr/shop/select/?indent=on&facet=true&sort=prijs%20ASC&start=0&rows=18&fl=id,prijs,prijseenheid,artikelgroep&q=*:*&facet.field=artikelgroep&facet.mincount=1
 and getting this error...
 {noformat}
 Can't determine Sort Order: 'prijs ASC', pos=5
 {noformat}
 ... i can't think of any reason why it would be bad to accept both uppercase 
 and lowercase versions of asc and desc
 http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201302.mbox/%3c0a03b892a1f8e14c8d9e8dcb8320052804f30...@ex2010-mail1.wizard.pvt%3E

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-4458) accespt uppercase ASC and DESC as sort orders

2013-03-01 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-4458.


   Resolution: Fixed
Fix Version/s: 5.0
   4.2
 Assignee: Shawn Heisey

Shawn: thanks for following through with the tests!

Committed revision 1451765.
Committed revision 1451775.


 accespt uppercase ASC and DESC as sort orders
 -

 Key: SOLR-4458
 URL: https://issues.apache.org/jira/browse/SOLR-4458
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man
Assignee: Shawn Heisey
Priority: Minor
 Fix For: 4.2, 5.0

 Attachments: SOLR-4458.patch, SOLR-4458.patch, SOLR-4458.patch






[jira] [Commented] (SOLR-4458) accespt uppercase ASC and DESC as sort orders

2013-03-01 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13591106#comment-13591106
 ] 

Commit Tag Bot commented on SOLR-4458:
--

[branch_4x commit] Chris M. Hostetter
http://svn.apache.org/viewvc?view=revisionrevision=1451775

SOLR-4458: Sort directions (asc, desc) are now case insensitive (merge r1451765)


 accespt uppercase ASC and DESC as sort orders
 -

 Key: SOLR-4458
 URL: https://issues.apache.org/jira/browse/SOLR-4458
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man
Assignee: Shawn Heisey
Priority: Minor
 Fix For: 4.2, 5.0

 Attachments: SOLR-4458.patch, SOLR-4458.patch, SOLR-4458.patch






[jira] [Updated] (SOLR-4509) Disable Stale Check - Distributed Search (Performance)

2013-03-01 Thread Ryan Zezeski (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Zezeski updated SOLR-4509:
---

Attachment: baremetal-stale-nostale-throughput.svg
baremetal-stale-nostale-throughput.dat
baremetal-stale-nostale-med-latency.svg
baremetal-stale-nostale-med-latency.dat

I have some new results from a different cluster.  The short story is
that I still see improvement from removing the stale check, just not
as dramatic as on my SmartOS cluster.  Throughput improved by 108-120%
and there was a 0-5ms delta in latency.

What I take from this is that the benefits of removing the stale check
will vary depending on the # of nodes, hardware, query and load.  In
theory removing the stale check should never hurt, as removing blocking
syscalls should only help.  But I totally understand being cautious
about a change like this.  Personally I'd like to see at least
one other person confirm a non-negligible difference before I bother to
make this patch more acceptable.  Best to let this ticket stir a while,
I suppose.
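For context, the stale check in question is roughly this pattern: before reusing a pooled connection, probe the socket with a short blocking read to see whether the peer already closed it. That probe is the per-request blocking syscall the patch removes. A self-contained sketch (not HttpClient's actual code; the names are made up):

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

// Conceptual sketch of a connection "stale check". Illustrative only.
public class StaleCheck {
    static boolean isStale(Socket s) {
        try {
            int oldTimeout = s.getSoTimeout();
            try {
                s.setSoTimeout(1);          // ~1ms probe read
                InputStream in = s.getInputStream();
                return in.read() == -1;     // EOF => peer closed the connection
            } finally {
                s.setSoTimeout(oldTimeout); // restore the real timeout
            }
        } catch (SocketTimeoutException e) {
            return false;                   // no data and no EOF => still alive
        } catch (IOException e) {
            return true;                    // any I/O error => treat as stale
        }
    }

    // Open a loopback connection, check it, close the server side, check again.
    static boolean[] demo() {
        try (ServerSocket ss = new ServerSocket(0);
             Socket client = new Socket("127.0.0.1", ss.getLocalPort());
             Socket serverSide = ss.accept()) {
            boolean before = isStale(client);
            serverSide.close();
            Thread.sleep(100);              // let the FIN arrive
            boolean after = isStale(client);
            return new boolean[] { before, after };
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        boolean[] r = demo();
        System.out.println("stale before close: " + r[0]);
        System.out.println("stale after close:  " + r[1]);
    }
}
```

In Apache HttpClient 4.x itself, this probe can be turned off via the connection parameters (e.g. `HttpConnectionParams.setStaleCheckingEnabled(params, false)`); the exact mechanism depends on the HttpClient version.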

## Cluster Specs

All nodes are running on baremetalcloud, so this time they are truly
different physical machines with no virtualization involved.

* 8 nodes/shards
* 1 x 2.66GHz Woodcrest E5150 (2 cores)
* 2GB DDR2-667
* 73GB SAS 10k RPM
* Ubuntu 12.04
* Oracle JDK: Java HotSpot(TM) 64-Bit Server VM (build 23.7-b01, mixed mode)
* 512MB max heap
* Example schema

## Bench Runner

* 1 node
* 2 x 2.66GHz Woodcrest E5150
* 8GB DDR2-667
* Using Basho Bench as load gen

## Queries


All queries hit all shards.  All queries were single-term queries
except for alpha, which is a conjunction.  The numbers listed are the
number of documents matching each term query.

* alpha: 100K, 100K, 0
* lima: 1
* mike: 10
* november: 100
* oscar: 1K
* papa: 10K
* quebec: 100K

Attached is the aggregate data (.dat) and corresponding plots (.svg)
of that data.  The data was aggregated from raw data collected by
Basho Bench (and this raw data is actually the aggregate of all events
at 10s intervals).  E.g. the median latency is actually the mean of
the median latencies calculated against all events in a given 10s
period.  What I'm saying is, it's a rollup of a rollup, so while there
are 2 decimals of precision those numbers are not actually that
precise.  But this should be good for ballpark figures (if you're a
stats geek please let me know if I'm committing a sin here).

There is a big delta in latency for the mike benchmark but I'm
chalking that up to an anomaly for the time being.


 Disable Stale Check - Distributed Search (Performance)
 --

 Key: SOLR-4509
 URL: https://issues.apache.org/jira/browse/SOLR-4509
 Project: Solr
  Issue Type: Improvement
  Components: search
 Environment: 5 node SmartOS cluster (all nodes living in same global 
 zone - i.e. same physical machine)
Reporter: Ryan Zezeski
Priority: Minor
 Attachments: baremetal-stale-nostale-med-latency.dat, 
 baremetal-stale-nostale-med-latency.svg, 
 baremetal-stale-nostale-throughput.dat, 
 baremetal-stale-nostale-throughput.svg, IsStaleTime.java, SOLR-4509.patch


 By disabling the Apache HTTP Client stale check I've witnessed a 2-4x 
 increase in throughput and a latency reduction of over 100ms.  This patch was made in 
 the context of a project I'm leading, called Yokozuna, which relies on 
 distributed search.
 Here's the patch on Yokozuna: https://github.com/rzezeski/yokozuna/pull/26
 Here's a write-up I did on my findings: 
 http://www.zinascii.com/2013/solr-distributed-search-and-the-stale-check.html
 I'm happy to answer any questions or make changes to the patch to make it 
 acceptable.




[jira] [Created] (LUCENE-4810) Positions are incremented for each ngram in EdgeNGramTokenFilter

2013-03-01 Thread Walter Underwood (JIRA)
Walter Underwood created LUCENE-4810:


 Summary: Positions are incremented for each ngram in 
EdgeNGramTokenFilter
 Key: LUCENE-4810
 URL: https://issues.apache.org/jira/browse/LUCENE-4810
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/analysis
Reporter: Walter Underwood


Edge ngrams should be like synonyms, with all the ngrams generated from a token 
having the same position as that original token. The current code increments 
position.

For the text "molecular biology", the query "mol bio" should match as a phrase 
in neighboring positions. It does not.

You can see this in the Analysis page in the admin UI.




[jira] [Updated] (LUCENE-4810) Positions are incremented for each ngram in EdgeNGramTokenFilter

2013-03-01 Thread Walter Underwood (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Underwood updated LUCENE-4810:
-

Attachment: LUCENE-4810.diff

Patch based on the 4.x source. The filenames are a bit odd because I was 
developing on 3.3.0.

 Positions are incremented for each ngram in EdgeNGramTokenFilter
 

 Key: LUCENE-4810
 URL: https://issues.apache.org/jira/browse/LUCENE-4810
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/analysis
Reporter: Walter Underwood
 Attachments: LUCENE-4810.diff






[jira] [Commented] (SOLR-4196) Untangle XML-specific nature of Config and Container classes

2013-03-01 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13591172#comment-13591172
 ] 

Erick Erickson commented on SOLR-4196:
--

Checked in, trunk r: 1451797

I want this to bake for a few days, then I'll merge it into 4x. Probably early 
next week unless things blow up.

 Untangle XML-specific nature of Config and Container classes
 

 Key: SOLR-4196
 URL: https://issues.apache.org/jira/browse/SOLR-4196
 Project: Solr
  Issue Type: Improvement
  Components: Schema and Analysis
Reporter: Erick Erickson
Assignee: Erick Erickson
Priority: Minor
 Fix For: 4.2, 5.0

 Attachments: SOLR-4196.patch, SOLR-4196.patch, SOLR-4196.patch, 
 SOLR-4196.patch, SOLR-4196.patch, SOLR-4196.patch, SOLR-4196.patch, 
 SOLR-4196.patch, SOLR-4196.patch, SOLR-4196.patch, SOLR-4196.patch, 
 SOLR-4196.patch, SOLR-4196.patch, SOLR-4196.patch, StressTest.zip, 
 StressTest.zip, StressTest.zip, StressTest.zip


 sub-task for SOLR-4083. If we're going to try to obsolete solr.xml, we need 
 to pull all of the specific XML processing out of Config and Container. 
 Currently, we refer to xpaths all over the place. This JIRA is about 
 providing a thunking layer to isolate the XML-esque nature of solr.xml and 
 allow a simple properties file to be used instead which will lead, 
 eventually, to solr.xml going away.




[jira] [Commented] (SOLR-4196) Untangle XML-specific nature of Config and Container classes

2013-03-01 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13591174#comment-13591174
 ] 

Commit Tag Bot commented on SOLR-4196:
--

[trunk commit] Erick Erickson
http://svn.apache.org/viewvc?view=revisionrevision=1451797

SOLR-4196, steps toward making solr.xml obsolete


 Untangle XML-specific nature of Config and Container classes
 

 Key: SOLR-4196
 URL: https://issues.apache.org/jira/browse/SOLR-4196
 Project: Solr
  Issue Type: Improvement
  Components: Schema and Analysis
Reporter: Erick Erickson
Assignee: Erick Erickson
Priority: Minor
 Fix For: 4.2, 5.0

 Attachments: SOLR-4196.patch, SOLR-4196.patch, SOLR-4196.patch, 
 SOLR-4196.patch, SOLR-4196.patch, SOLR-4196.patch, SOLR-4196.patch, 
 SOLR-4196.patch, SOLR-4196.patch, SOLR-4196.patch, SOLR-4196.patch, 
 SOLR-4196.patch, SOLR-4196.patch, SOLR-4196.patch, StressTest.zip, 
 StressTest.zip, StressTest.zip, StressTest.zip






[jira] [Commented] (LUCENE-4810) Positions are incremented for each ngram in EdgeNGramTokenFilter

2013-03-01 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13591198#comment-13591198
 ] 

Robert Muir commented on LUCENE-4810:
-

At a glance I think I like the idea myself. I don't like tokenfilters that 
'retokenize' by changing up the positions; I think it causes all kinds of havoc.

Instead with this patch, it simplifies what this filter is doing conceptually: 
for each word, all of its prefixes are added as synonyms.


 Positions are incremented for each ngram in EdgeNGramTokenFilter
 

 Key: LUCENE-4810
 URL: https://issues.apache.org/jira/browse/LUCENE-4810
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/analysis
Reporter: Walter Underwood
 Attachments: LUCENE-4810.diff






[jira] [Updated] (SOLR-4515) OpenExchangeRatesOrgProvider needs to require a ratesFileLocation

2013-03-01 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-4515:
---

Attachment: SOLR-4515.patch

 OpenExchangeRatesOrgProvider needs to require a ratesFileLocation
 -

 Key: SOLR-4515
 URL: https://issues.apache.org/jira/browse/SOLR-4515
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
 Attachments: SOLR-4515.patch


 OpenExchangeRatesOrgProvider currently defaults the value of the 
 ratesFileLocation init param to "http://openexchangerates.org/latest.json" 
 -- but that URL currently 301 redirects to a page with the following 
 information...
 {panel}
 Notice: App ID Required
 As per public notices beginning June 2012, an App ID is required to access 
 the Open Exchange Rates API. It's free for personal use, a bargain for your 
 business. You can sign up here » 
 {panel}
 ...so we should update the code to require users to be explicit about their 
 URL (including APP_ID) or point to a local file.  As things stand right now, 
 anyone who configures this provider w/o explicitly setting ratesFileLocation 
 gets a clean startup, but anytime they attempt to use the fieldtype to do a 
 conversion they get an error that "No available conversion rate from USD to 
 USD. Available rates are []"




[jira] [Commented] (SOLR-4515) OpenExchangeRatesOrgProvider needs to require a ratesFileLocation

2013-03-01 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13591206#comment-13591206
 ] 

Commit Tag Bot commented on SOLR-4515:
--

[trunk commit] Chris M. Hostetter
http://svn.apache.org/viewvc?view=revisionrevision=1451818

SOLR-4515: CurrencyField's OpenExchangeRatesOrgProvider now requires a 
ratesFileLocation init param, since the previous global default no longer works
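After this change, configurations must spell the location out. A sketch of what an explicit config might look like (the app_id value is a placeholder you would obtain from openexchangerates.org; a local file path also works, and exact attribute names should be checked against the CurrencyField docs for your version):

```xml
<fieldType name="currency" class="solr.CurrencyField"
           providerClass="solr.OpenExchangeRatesOrgProvider"
           ratesFileLocation="http://openexchangerates.org/api/latest.json?app_id=YOUR_APP_ID"/>
```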


 OpenExchangeRatesOrgProvider needs to require a ratesFileLocation
 -

 Key: SOLR-4515
 URL: https://issues.apache.org/jira/browse/SOLR-4515
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
 Attachments: SOLR-4515.patch






[jira] [Resolved] (SOLR-4515) OpenExchangeRatesOrgProvider needs to require a ratesFileLocation

2013-03-01 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-4515.


   Resolution: Fixed
Fix Version/s: 5.0
   4.2
 Assignee: Hoss Man

Committed revision 1451818.
Committed revision 1451821.


 OpenExchangeRatesOrgProvider needs to require a ratesFileLocation
 -

 Key: SOLR-4515
 URL: https://issues.apache.org/jira/browse/SOLR-4515
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Hoss Man
 Fix For: 4.2, 5.0

 Attachments: SOLR-4515.patch






[jira] [Commented] (SOLR-4511) Repeater doesn't return correct index version to slaves

2013-03-01 Thread Aditya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13591213#comment-13591213
 ] 

Aditya commented on SOLR-4511:
--

I had a similar issue. I checked with a trunk build deployed on my dev 
environment and it now replicates properly, even with the incremental index.

 Repeater doesn't return correct index version to slaves
 ---

 Key: SOLR-4511
 URL: https://issues.apache.org/jira/browse/SOLR-4511
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Affects Versions: 4.1
Reporter: Raúl Grande
Assignee: Mark Miller
 Fix For: 4.2, 5.0

 Attachments: o8uzad.jpg, SOLR-4511.patch


 Related to SOLR-4471. I have a master-repeater-2slaves architecture. The 
 replication between master and repeater is working fine but slaves aren't 
 able to replicate because their master (repeater node) is returning an old 
 index version, but in admin UI the version that repeater have is correct.
 When I do http://localhost:17045/solr/replication?command=indexversion the 
 response is: <long name="generation">29037</long> when the version should be 
 29042.
 If I restart the repeater node this URL returns the correct index version, 
 but after a while it fails again.




[jira] [Commented] (SOLR-4515) OpenExchangeRatesOrgProvider needs to require a ratesFileLocation

2013-03-01 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13591222#comment-13591222
 ] 

Commit Tag Bot commented on SOLR-4515:
--

[branch_4x commit] Chris M. Hostetter
http://svn.apache.org/viewvc?view=revisionrevision=1451821

SOLR-4515: CurrencyField's OpenExchangeRatesOrgProvider now requires a 
ratesFileLocation init param, since the previous global default no longer works 
(merge r1451818)


 OpenExchangeRatesOrgProvider needs to require a ratesFileLocation
 -

 Key: SOLR-4515
 URL: https://issues.apache.org/jira/browse/SOLR-4515
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Hoss Man
 Fix For: 4.2, 5.0

 Attachments: SOLR-4515.patch






[jira] [Assigned] (SOLR-4329) Have DocumentBuilder give value collections to the FieldType

2013-03-01 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley reassigned SOLR-4329:
--

Assignee: David Smiley

 Have DocumentBuilder give value collections to the FieldType
 

 Key: SOLR-4329
 URL: https://issues.apache.org/jira/browse/SOLR-4329
 Project: Solr
  Issue Type: New Feature
  Components: update
Reporter: David Smiley
Assignee: David Smiley
 Attachments: DocumentBuilder.java


 I'd like to write a multi-value-configured FieldType that can return a 
 DocValue Field from its createFields().  Since DocValues holds a single value 
 per document for a field, you can only have one.  However 
 FieldType.createFields() is invoked by the DocumentBuilder once for each 
 value being indexed.
 FYI the reason I'm asking for this is for a multi-valued spatial field to 
 store its points in DocValues.




[jira] [Updated] (SOLR-4329) Have DocumentBuilder give value collections to the FieldType

2013-03-01 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-4329:
---

Attachment: DocumentBuilder.java

DocumentBuilder, at least today, is used entirely via 
DocumentBuilder.toDocument(...) (a static method) -- yet there are a bunch of 
old instance methods, fields, and a constructor that are unused and thus only 
complicate what is going on.  The attached replacement for review removes all 
the old cruft, and refactors the code to use field instance state. I found that 
handy because I added a new optional FieldType interface DocumentBuilderAware 
with one method addFieldsToDocument(DocumentBuilder, boost) that takes the 
DocumentBuilder and can then access various state via getters.

The main use-case why I want this extension point is to add a single DocValues 
field to the Lucene document that has a byte array representation of each value 
the field will get (it's multi-valued).

Another use-case is when the field value is pre-parsed from an URP or never was 
a string (EmbeddedSolrServer or perhaps DIH even), yet the value happens to 
implement Collection (e.g. a Spatial4j ShapeCollection).  This ShapeCollection 
needs to be seen by the FieldType yet the current DocumentBuilder will instead 
separately hand the FieldType each component shape value.

There are undoubtedly other creative use-cases (certainly non-spatial).  With 
access to the entire SolrInputDocument, a FieldType might want to look at 
another particular field to include it in some way.  Yet I can understand that 
tying FieldType to DocumentBuilder could be seen as a less than clean backwards 
dependency and so this is an optional interface.

(the attached is the entire file because it was modified so heavily that a 
patch would be unintelligible)

Comments please!
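A rough sketch of the extension point described above (only the DocumentBuilderAware / addFieldsToDocument names come from the comment; the surrounding types are stand-ins so the sketch is self-contained):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Stand-in for Solr's DocumentBuilder, just enough to show the idea.
class DocumentBuilder {
    final Map<String, List<Object>> fieldValues = new HashMap<>();

    List<Object> getValues(String field) {
        return fieldValues.getOrDefault(field, Collections.emptyList());
    }
}

// The optional interface: one callback with access to all values,
// instead of one createFields() call per value.
interface DocumentBuilderAware {
    void addFieldsToDocument(DocumentBuilder builder, float boost);
}

public class Demo implements DocumentBuilderAware {
    int seen = -1;

    public void addFieldsToDocument(DocumentBuilder b, float boost) {
        // e.g. pack every point of a multi-valued spatial field into a
        // single DocValues byte[] -- only possible with all values at once
        seen = b.getValues("geo").size();
    }

    static int run() {
        DocumentBuilder b = new DocumentBuilder();
        b.fieldValues.put("geo", Arrays.asList((Object) "1,2", "3,4"));
        Demo d = new Demo();
        d.addFieldsToDocument(b, 1.0f);
        return d.seen;
    }

    public static void main(String[] args) {
        System.out.println("one callback saw " + run() + " values");
    }
}
```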

 Have DocumentBuilder give value collections to the FieldType
 

 Key: SOLR-4329
 URL: https://issues.apache.org/jira/browse/SOLR-4329
 Project: Solr
  Issue Type: New Feature
  Components: update
Reporter: David Smiley
 Attachments: DocumentBuilder.java


 I'd like to write a multi-value-configured FieldType that can return a 
 DocValue Field from its createFields().  Since DocValues holds a single value 
 per document for a field, you can only have one.  However 
 FieldType.createFields() is invoked by the DocumentBuilder once per each 
 value being indexed.
 FYI the reason I'm asking for this is for a multi-valued spatial field to 
 store its points in DocValues.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4329) Have DocumentBuilder give value collections to the FieldType

2013-03-01 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13591295#comment-13591295
 ] 

David Smiley commented on SOLR-4329:


One more thing... as part of looking over DocumentBuilder with a fine-toothed 
comb, I noticed that schema.getCopyFieldsList(ifield.getName()) was being 
called for *every* field value. This method internally has to do a bunch of 
pattern matching to build up a list.  Quite wasteful.  I simply moved this out 
of the loop.  Another optimization that could be added is internal to that 
method implementation -- return Collections.EMPTY_LIST if there are no copy 
fields, instead of constructing an ArrayList.
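The hoisting described above can be sketched as follows; StubSchema is a stand-in for Solr's IndexSchema, with a lookup counter added purely so the saving is visible:

```java
import java.util.Collections;
import java.util.List;

// Stand-in for Solr's IndexSchema; the counter is illustrative only.
class StubSchema {
    int lookups = 0;
    List<String> getCopyFieldsList(String fieldName) {
        lookups++;                       // the real method does pattern matching here
        return Collections.emptyList();  // the suggested empty-list optimization
    }
}

class CopyFieldDemo {
    // Returns how many schema lookups indexing one field's values cost.
    static int indexField(StubSchema schema, String field, List<String> values) {
        // Hoisted out of the loop: one lookup per field, not one per value.
        List<String> copyFields = schema.getCopyFieldsList(field);
        for (String value : values) {
            // ... add value to the field and to each entry of copyFields ...
        }
        return schema.lookups;
    }
}
```

With the call inside the loop the counter would equal the number of values; hoisted, it stays at one regardless of how many values the field has.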

 Have DocumentBuilder give value collections to the FieldType
 

 Key: SOLR-4329
 URL: https://issues.apache.org/jira/browse/SOLR-4329
 Project: Solr
  Issue Type: New Feature
  Components: update
Reporter: David Smiley
Assignee: David Smiley
 Attachments: DocumentBuilder.java






[jira] [Updated] (SOLR-4329) Have DocumentBuilder give value collections to the FieldType

2013-03-01 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-4329:
---

Fix Version/s: 4.2

 Have DocumentBuilder give value collections to the FieldType
 

 Key: SOLR-4329
 URL: https://issues.apache.org/jira/browse/SOLR-4329
 Project: Solr
  Issue Type: New Feature
  Components: update
Reporter: David Smiley
Assignee: David Smiley
 Fix For: 4.2

 Attachments: DocumentBuilder.java






Re: Positions in EdgeNgramTokenFilter

2013-03-01 Thread Otis Gospodnetic
Wunder, you may be thinking of LUCENE-1224 from a few years ago?

http://search-lucene.com/?q=ngram&fc_project=Lucene&fc_type=issue

Otis
--
Solr & ElasticSearch Support
http://sematext.com/





On Fri, Mar 1, 2013 at 1:41 PM, Walter Underwood wun...@wunderwood.org wrote:

 I'm fixing position increment in EdgeNgramTokenFilter to act like
 synonyms, with each ngram at the same position as the source token.
 Currently, the position is incremented for each output token, which breaks
 phrase searching with edge ngrams.

 I could not find a current Jira issue for this. Is there one?

 We are still on 3.3, but I'll submit a patch for 4.x.

 Thanks to whoever converted EdgeNgramTokenFilter to use TokenStream.

 wunder
 --
 Walter Underwood
 wun...@wunderwood.org






Re: Positions in EdgeNgramTokenFilter

2013-03-01 Thread Walter Underwood
I can see an argument for NGramTokenFilter incrementing position for each 
ngram, because they really are an ordered scan across the text. Pure ngrams are 
a different text representation than words. That could be an option on the 
token filter.

LUCENE-1224 is mostly concerned with ngram searching of CJK. That is a useful 
thing -- we had it in Ultraseek. It might be less important now that we have 
better tokenizers for those languages. I'd probably address that with a 
dedicated bigram tokenizer for CJK.

This bug is about EdgeNGramTokenFilter, which is a way of precomputing trailing 
(or leading) wildcards. Each shorter sequence is a partial synonym for the 
original token.

If the text is "molecular biology", I'd like the query "mol bio" to match that 
as a phrase, with the edge-ngrams in sequential positions. The current behavior 
puts each edge-ngram in a new position, so "bio" could be 8 positions after 
"mol" (with mingram=1 and maxgram=1024). That won't match with a tight phrase 
slop, even though the original tokens are next to each other.
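The intended position assignment can be sketched without Lucene at all: give a position increment of 1 only to the first gram of each source token and 0 to the rest, so grams of adjacent tokens stay in adjacent positions. In a real TokenFilter this would be done via PositionIncrementAttribute; the plain-int version below is only an illustration:

```java
import java.util.ArrayList;
import java.util.List;

class EdgeNgramPositions {
    // Returns "ngram@position" entries for the given tokens, with every
    // edge-ngram of a token sharing that token's position (synonym-style).
    static List<String> edgeNgrams(List<String> tokens, int minGram, int maxGram) {
        List<String> out = new ArrayList<>();
        int position = 0;
        for (String token : tokens) {
            position++;  // posIncrement=1 for the first gram of each token only
            int max = Math.min(maxGram, token.length());
            for (int len = minGram; len <= max; len++) {
                // posIncrement=0 for every further gram of the same token
                out.add(token.substring(0, len) + "@" + position);
            }
        }
        return out;
    }
}
```

For the tokens "molecular biology" this puts "bio" at position 2, directly after "mol" at position 1, so "mol bio" matches as a tight phrase.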

Personally, I need this to make phrase autocomplete really fast.

wunder

On Mar 1, 2013, at 8:46 PM, Otis Gospodnetic wrote:

 Wunder, you may be thinking of LUCENE-1224 from a few years ago?
 
 http://search-lucene.com/?q=ngram&fc_project=Lucene&fc_type=issue
 
 Otis
 --
 Solr & ElasticSearch Support
 http://sematext.com/
 
 
 
 
 
 On Fri, Mar 1, 2013 at 1:41 PM, Walter Underwood wun...@wunderwood.org 
 wrote:
 I'm fixing position increment in EdgeNgramTokenFilter to act like synonyms, 
 with each ngram at the same position as the source token. Currently, the 
 position is incremented for each output token, which breaks phrase searching 
 with edge ngrams.
 
 I could not find a current Jira issue for this. Is there one?
 
 We are still on 3.3, but I'll submit a patch for 4.x.
 
 Thanks to whoever converted EdgeNgramTokenFilter to use TokenStream.
 
 wunder
 --
 Walter Underwood
 wun...@wunderwood.org
 
 
 
 

--
Walter Underwood
wun...@wunderwood.org





Re: [JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 280 - Failure!

2013-03-01 Thread Chris Hostetter
: ...but i didn't get any errors when i did ant precommit, and i'm not 
: seeing any actual error reported here -- what does "Tidy was unable to 
: process file ... 1 returned" mean as far as the actual problem?

correction: i can reproduce, and i found the problem -- looks like i 
accidentally hit the delete key while i was in my emacs buffer (after 
running precommit but before actually committing).

still confused as to why the html lint check doesn't print out the actual 
problem found with the html? Didn't it use to do that?

-Hoss




[jira] [Commented] (SOLR-4515) OpenExchangeRatesOrgProvider needs to require a ratesFileLocation

2013-03-01 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13591329#comment-13591329
 ] 

Commit Tag Bot commented on SOLR-4515:
--

[trunk commit] Chris M. Hostetter
http://svn.apache.org/viewvc?view=revision&revision=1451838

SOLR-4515: typo


 OpenExchangeRatesOrgProvider needs to require a ratesFileLocation
 -

 Key: SOLR-4515
 URL: https://issues.apache.org/jira/browse/SOLR-4515
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Hoss Man
 Fix For: 4.2, 5.0

 Attachments: SOLR-4515.patch


 OpenExchangeRatesOrgProvider currently defaults the value of the 
 ratesFileLocation init param to "http://openexchangerates.org/latest.json" 
 -- but that URL currently 301 redirects to a page with the following 
 information...
 {panel}
 Notice: App ID Required
 As per public notices beginning June 2012, an App ID is required to access 
 the Open Exchange Rates API. It's free for personal use, a bargain for your 
 business. You can sign up here » 
 {panel}
 ...so we should update the code to require users to be explicit about their 
 URL (including APP_ID) or point to a local file.  As things stand right now, 
 anyone who configures this provider w/o explicitly setting ratesFileLocation 
 gets a clean startup, but anytime they attempt to use the fieldtype to do a 
 conversion they get an error that "No available conversion rate from USD to 
 USD. Available rates are []"
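Assuming the fix lands as described, an explicit configuration would look roughly like this; the attribute names follow the existing currency fieldtype setup (providerClass, ratesFileLocation), and YOUR_APP_ID plus the exact endpoint path are placeholders, not values from this issue:

```xml
<!-- schema.xml sketch only; YOUR_APP_ID is a placeholder -->
<fieldType name="currency" class="solr.CurrencyField"
           providerClass="solr.OpenExchangeRatesOrgProvider"
           ratesFileLocation="http://openexchangerates.org/api/latest.json?app_id=YOUR_APP_ID"/>
```

Pointing ratesFileLocation at a local JSON file would also satisfy the proposed requirement without any network dependency.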




[jira] [Commented] (SOLR-4515) OpenExchangeRatesOrgProvider needs to require a ratesFileLocation

2013-03-01 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13591330#comment-13591330
 ] 

Commit Tag Bot commented on SOLR-4515:
--

[branch_4x commit] Chris M. Hostetter
http://svn.apache.org/viewvc?view=revision&revision=1451839

SOLR-4515: typo (merge r1451838)


 OpenExchangeRatesOrgProvider needs to require a ratesFileLocation
 -

 Key: SOLR-4515
 URL: https://issues.apache.org/jira/browse/SOLR-4515
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Hoss Man
 Fix For: 4.2, 5.0

 Attachments: SOLR-4515.patch






[jira] [Commented] (LUCENE-1206) Ability to store Reader / InputStream fields

2013-03-01 Thread Trejkaz (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-1206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13591331#comment-13591331
 ] 

Trejkaz commented on LUCENE-1206:
-

I think this would still be useful. The workaround of using a separate database 
to store larger text and binary stuff has never really sat terribly well with 
me.

 Ability to store Reader / InputStream fields
 

 Key: LUCENE-1206
 URL: https://issues.apache.org/jira/browse/LUCENE-1206
 Project: Lucene - Core
  Issue Type: New Feature
  Components: core/index
Reporter: Trejkaz

 In some situations we would like to store the whole text, but the whole text 
 won't always fit in memory so we can't create a String.  Likewise for storing 
 binary, it would sometimes be better if we didn't have to read into a byte[] 
 up-front (even when it doesn't use much memory, it increases the number of 
 copies made and adds burden to GC.)
 FieldsWriter currently writes the length at the start of the chunks though, 
 so I don't know whether it would be possible to seek back and write the 
 length after writing the data.
 It would also be useful to use this in conjunction with compression, both for 
 Reader and InputStream types.  And when retrieving the field, it should be 
 possible to create a Reader without reading the entire String into memory 
 up-front.
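The "seek back and write the length after writing the data" idea can be sketched with plain java.io; this is not Lucene's FieldsWriter, just an illustration that a length-prefixed chunk can be written without buffering the whole value in memory:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.RandomAccessFile;

class SeekBackWriter {
    // Writes [length][data], streaming the data and backfilling the length.
    static long writeChunk(RandomAccessFile out, InputStream value) throws IOException {
        long lengthPos = out.getFilePointer();
        out.writeLong(0L);                       // placeholder for the length
        long written = 0;
        byte[] buf = new byte[8192];
        for (int n; (n = value.read(buf)) != -1; ) {
            out.write(buf, 0, n);                // stream; never hold the whole value
            written += n;
        }
        long end = out.getFilePointer();
        out.seek(lengthPos);
        out.writeLong(written);                  // backfill the real length
        out.seek(end);
        return written;
    }
}
```

This works on a seekable file but not on a purely sequential output stream, which is presumably why FieldsWriter's up-front lengths make streaming awkward.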




[jira] [Closed] (LUCENE-1245) MultiFieldQueryParser is not friendly for overriding getFieldQuery(String,String,int)

2013-03-01 Thread Trejkaz (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-1245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trejkaz closed LUCENE-1245.
---

Resolution: Not A Problem

I guess this resolution is correct. We have stopped using the basic query 
parser and now use the flexible one precisely because of this sort of issue, so 
the issue itself is no longer an issue for us.

 MultiFieldQueryParser is not friendly for overriding 
 getFieldQuery(String,String,int)
 -

 Key: LUCENE-1245
 URL: https://issues.apache.org/jira/browse/LUCENE-1245
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/queryparser
Affects Versions: 2.3.2
Reporter: Trejkaz
 Attachments: multifield.patch


 LUCENE-1213 fixed an issue in MultiFieldQueryParser where the slop parameter 
 wasn't being properly applied.  Problem is, the fix which eventually got 
 committed is calling super.getFieldQuery(String,String), bypassing any 
 possibility of customising the query behaviour.
 This should be relatively simple to fix by modifying 
 getFieldQuery(String,String,int) to, if field is null, recursively call 
 getFieldQuery(String,String,int) instead of setting the slop itself.  This 
 gives subclasses which override either getFieldQuery method a chance to do 
 something different.
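The suggested dispatch change can be illustrated without Lucene: the stub parser below recurses through the overridable three-arg method when field is null, so a subclass override is honored for each concrete field. Class names and the string "queries" are stand-ins, not Lucene's API:

```java
import java.util.ArrayList;
import java.util.List;

class StubMultiFieldParser {
    final String[] fields;
    StubMultiFieldParser(String[] fields) { this.fields = fields; }

    List<String> getFieldQuery(String field, String text, int slop) {
        if (field == null) {
            List<String> clauses = new ArrayList<>();
            for (String f : fields) {
                // virtual call, not super.getFieldQuery(...): subclass
                // overrides apply per concrete field
                clauses.addAll(getFieldQuery(f, text, slop));
            }
            return clauses;
        }
        return List.of(field + ":\"" + text + "\"~" + slop);
    }
}

class CustomParser extends StubMultiFieldParser {
    CustomParser(String[] fields) { super(fields); }
    @Override
    List<String> getFieldQuery(String field, String text, int slop) {
        if ("title".equals(field)) return List.of("title:boosted(" + text + ")");
        return super.getFieldQuery(field, text, slop);
    }
}
```

With the committed super-call instead of the virtual call, CustomParser's title handling would be silently bypassed, which is exactly the complaint in this issue.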




[jira] [Commented] (SOLR-4515) OpenExchangeRatesOrgProvider needs to require a ratesFileLocation

2013-03-01 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13591339#comment-13591339
 ] 

Commit Tag Bot commented on SOLR-4515:
--

[trunk commit] Chris M. Hostetter
http://svn.apache.org/viewvc?view=revision&revision=1451841

SOLR-4515: more typos that documentation-lint aparently didn't catch


 OpenExchangeRatesOrgProvider needs to require a ratesFileLocation
 -

 Key: SOLR-4515
 URL: https://issues.apache.org/jira/browse/SOLR-4515
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Hoss Man
 Fix For: 4.2, 5.0

 Attachments: SOLR-4515.patch






[jira] [Commented] (SOLR-4515) OpenExchangeRatesOrgProvider needs to require a ratesFileLocation

2013-03-01 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13591338#comment-13591338
 ] 

Commit Tag Bot commented on SOLR-4515:
--

[branch_4x commit] Chris M. Hostetter
http://svn.apache.org/viewvc?view=revision&revision=1451842

SOLR-4515: more typos that documentation-lint aparently didn't catch (merge 
r1451841)


 OpenExchangeRatesOrgProvider needs to require a ratesFileLocation
 -

 Key: SOLR-4515
 URL: https://issues.apache.org/jira/browse/SOLR-4515
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Hoss Man
 Fix For: 4.2, 5.0

 Attachments: SOLR-4515.patch


