[JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4107 - Failure

2013-07-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4107/

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.cloud.BasicDistributedZkTest:
   1) Thread[id=3014, name=recoveryCmdExecutor-1359-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest]
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
        at java.net.Socket.connect(Socket.java:579)
        at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
        at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
        at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
        at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
        at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
        at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:722)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.cloud.BasicDistributedZkTest:
   1) Thread[id=3014, name=recoveryCmdExecutor-1359-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest]
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
        at java.net.Socket.connect(Socket.java:579)
        at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
        at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
        at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
        at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
        at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
        at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:722)
        at __randomizedtesting.SeedInfo.seed([9A9C17CFC0E0CE9E]:0)
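A thread-leak failure like this usually means a suite-scoped ExecutorService was never shut down before the suite ended. Below is a minimal, hypothetical sketch of the teardown pattern that keeps the leak checker quiet; the class and field names are illustrative, not Solr's actual recovery code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class RecoveryExecutorTeardown {
    // Hypothetical pool, standing in for the leaked recoveryCmdExecutor above.
    static final ExecutorService recoveryCmdExecutor = Executors.newFixedThreadPool(1);

    // Suite-teardown pattern: request shutdown, interrupt workers, and wait
    // a bounded time for them to finish before the leak checker runs.
    static boolean shutdownAndAwait() throws InterruptedException {
        recoveryCmdExecutor.shutdownNow();
        return recoveryCmdExecutor.awaitTermination(10, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        recoveryCmdExecutor.submit(() -> { /* no-op recovery task */ });
        System.out.println("terminated=" + shutdownAndAwait());
    }
}
```

Note that a worker blocked in a native socketConnect, as in the trace above, may not respond to interruption at all; in that case the underlying socket typically has to be closed (or its connect timeout shortened) as well.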


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
There are still zombie threads that couldn't be terminated:
   1) Thread[id=3014, name=recoveryCmdExecutor-1359-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest]
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:39

[JENKINS] Lucene-Solr-4.x-MacOSX (64bit/jdk1.6.0) - Build # 603 - Failure!

2013-07-03 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-MacOSX/603/
Java: 64bit/jdk1.6.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 9000 lines...]
   [junit4] ERROR: JVM J0 ended with an exception, command line: 
/System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home/bin/java 
-XX:-UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/heapdumps
 -Dtests.prefix=tests -Dtests.seed=BFFE7DD1047918A0 -Xmx512M -Dtests.iters= 
-Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random 
-Dtests.postingsformat=random -Dtests.docvaluesformat=random 
-Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random 
-Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=4.4 
-Dtests.cleanthreads=perClass 
-Djava.util.logging.config.file=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.slow=true 
-Dtests.asserts.gracious=false -Dtests.multiplier=1 -DtempDir=. 
-Djava.io.tmpdir=. 
-Djunit4.tempDir=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/build/solr-core/test/temp
 
-Dclover.db.dir=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/clover/db
 -Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Djava.security.policy=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/tools/junit4/tests.policy
 -Dlucene.version=4.4-SNAPSHOT -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Dfile.encoding=UTF-8 -classpath 
/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/build/solr-core/classes/test:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/build/solr-test-framework/classes/java:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/test-framework/lib/commons-collections-3.2.1.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/test-framework/lib/hadoop-common-2.0.5-alpha-tests.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/test-framework/lib/hadoop-hdfs-2.0.5-alpha-tests.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/test-framework/lib/jersey-core-1.16.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/test-framework/lib/jetty-6.1.26.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/test-framework/lib/jetty-util-6.1.26.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/test-framework/lib/junit4-ant-2.0.10.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/build/solr-core/test-files:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/test-framework/classes/java:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/codecs/classes/java:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/build/solr-solrj/classes/java:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/build/solr-core/classes/java:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/analysis/common/lucene-analyzers-common-4.4-SNAPSHOT.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/analysis/kuromoji/lucene-analyzers-kuromoji-4.4-SNAPSHOT.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/analysis/phonetic/lucene-analyzers-phonetic-4.4-SNAPSHOT.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/codecs/lucene-codecs-4.4-SNAPSHOT.jar:/Users/jenki
ns/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/highlighter/lucene-highlighter-4.4-SNAPSHOT.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/memory/lucene-memory-4.4-SNAPSHOT.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/misc/lucene-misc-4.4-SNAPSHOT.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/spatial/lucene-spatial-4.4-SNAPSHOT.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/suggest/lucene-suggest-4.4-SNAPSHOT.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/grouping/lucene-grouping-4.4-SNAPSHOT.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/queries/lucene-queries-4.4-SNAPSHOT.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/queryparser/lucene-queryparser-4.4-SNAPSHOT.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/core/lib/cglib-nodep-2.2.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/core/lib/commons-cli-1.2.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/core/lib/commons-codec-1.7.jar:/Users/jenkins/jenk

Re: [JENKINS] Lucene-Solr-4.x-MacOSX (64bit/jdk1.6.0) - Build # 603 - Failure!

2013-07-03 Thread Dawid Weiss
Ooops -- this seems like an OOM (permgen space exhausted):

   [junit4] >>> JVM J0: stdout (verbatim) 
   [junit4] java.lang.OutOfMemoryError: PermGen space
   [junit4] Dumping heap to
/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/heapdumps/java_pid2363.hprof
...
   [junit4] Heap dump file created [49668499 bytes in 0.935 secs]
   [junit4] <<< JVM J0: EOF 
   [junit4] JVM J0: stderr was not empty, see:
/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/build/solr-core/test/temp/junit4-J0-20130703_065329_394.syserr
   [junit4] >>> JVM J0: stderr (verbatim) 
   [junit4] 2013-07-03 07:05:21.819 java[2363:18433] Unable to load realm info from SCDynamicStore
   [junit4] WARN: Unhandled exception in event serialization. -> java.lang.OutOfMemoryError: PermGen spacejava.lang.OutOfMemoryError: PermGen space
   [junit4] java.lang.OutOfMemoryError: PermGen space
   [junit4] at __randomizedtesting.SeedInfo.seed([BFFE7DD1047918A0]:0)
   [junit4] <<< JVM J0: EOF 

Dawid
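For reference, a PermGen OOM like this is normally addressed by raising the permanent-generation cap on the forked test JVMs. The flag below is purely illustrative (the 192m value is a guess, and the build's real flags are visible in the quoted command line); it also only applies to JDK 6/7, since PermGen was removed in Java 8:

```shell
# Illustrative only: add a PermGen cap next to the existing heap settings.
java -Xmx512M -XX:MaxPermSize=192m -XX:+HeapDumpOnOutOfMemoryError -version
```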

On Wed, Jul 3, 2013 at 9:17 AM, Policeman Jenkins Server
 wrote:
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-MacOSX/603/
> Java: 64bit/jdk1.6.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC
>
> All tests passed
>
> Build Log:
> [...truncated 9000 lines...]

[jira] [Commented] (LUCENE-4258) Incremental Field Updates through Stacked Segments

2013-07-03 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698719#comment-13698719
 ] 

Shai Erera commented on LUCENE-4258:


I think we're looking at no less than 4 months, and that assumes performance 
shows no big concerns. Otherwise, the game is open again. And getting it into a 
release .. that could take a while.

Depending how adventurous you are, you can compile the branch and run with it 
for a while :). That will surely help us pinpoint issues sooner.

> Incremental Field Updates through Stacked Segments
> --
>
> Key: LUCENE-4258
> URL: https://issues.apache.org/jira/browse/LUCENE-4258
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index
>Reporter: Sivan Yogev
> Fix For: 4.4
>
> Attachments: IncrementalFieldUpdates.odp, 
> LUCENE-4258-API-changes.patch, LUCENE-4258.branch.1.patch, 
> LUCENE-4258.branch.2.patch, LUCENE-4258.branch3.patch, 
> LUCENE-4258.branch.4.patch, LUCENE-4258.branch.5.patch, 
> LUCENE-4258.branch.6.patch, LUCENE-4258.branch.6.patch, 
> LUCENE-4258.r1410593.patch, LUCENE-4258.r1412262.patch, 
> LUCENE-4258.r1416438.patch, LUCENE-4258.r1416617.patch, 
> LUCENE-4258.r1422495.patch, LUCENE-4258.r1423010.patch
>
>   Original Estimate: 2,520h
>  Remaining Estimate: 2,520h
>
> Shai and I would like to start working on the proposal to Incremental Field 
> Updates outlined here (http://markmail.org/message/zhrdxxpfk6qvdaex).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: [JENKINS] Lucene-Solr-4.x-MacOSX (64bit/jdk1.6.0) - Build # 603 - Failure!

2013-07-03 Thread Uwe Schindler
We had another one hanging yesterday on Linux. I killed it; it looks like the 
same issue.

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


> -Original Message-
> From: dawid.we...@gmail.com [mailto:dawid.we...@gmail.com] On Behalf
> Of Dawid Weiss
> Sent: Wednesday, July 03, 2013 9:20 AM
> To: dev@lucene.apache.org
> Subject: Re: [JENKINS] Lucene-Solr-4.x-MacOSX (64bit/jdk1.6.0) - Build # 603
> - Failure!
> 
> Ooops -- this seems like an OOM (permgen space exhausted):
> 
>[junit4] >>> JVM J0: stdout (verbatim) 
>[junit4] java.lang.OutOfMemoryError: PermGen space
>[junit4] Dumping heap to
> /Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-
> MacOSX/heapdumps/java_pid2363.hprof
> ...
>[junit4] Heap dump file created [49668499 bytes in 0.935 secs]
>[junit4] <<< JVM J0: EOF 
>[junit4] JVM J0: stderr was not empty, see:
> /Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-
> MacOSX/solr/build/solr-core/test/temp/junit4-J0-
> 20130703_065329_394.syserr
>[junit4] >>> JVM J0: stderr (verbatim) 
>[junit4] 2013-07-03 07:05:21.819 java[2363:18433] Unable to load realm info
> from SCDynamicStore
>[junit4] WARN: Unhandled exception in event serialization. ->
> java.lang.OutOfMemoryError: PermGen spacejava.lang.OutOfMemoryError:
> PermGen space
>[junit4] java.lang.OutOfMemoryError: PermGen space
>[junit4]   at
> __randomizedtesting.SeedInfo.seed([BFFE7DD1047918A0]:0)
>[junit4] <<< JVM J0: EOF 
> 
> Dawid
> 
> On Wed, Jul 3, 2013 at 9:17 AM, Policeman Jenkins Server
>  wrote:
> > Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-MacOSX/603/
> > Java: 64bit/jdk1.6.0 -XX:-UseCompressedOops -
> XX:+UseConcMarkSweepGC
> >
> > All tests passed
> >
> > Build Log:
> > [...truncated 9000 lines...]

Re: svn commit: r1499074 - /lucene/dev/trunk/solr/core/src/test/org/apache/solr/core/TestCoreDiscovery.java

2013-07-03 Thread Alan Woodward
Thanks both!

Alan Woodward
www.flax.co.uk


On 3 Jul 2013, at 02:47, Erick Erickson wrote:

> Cool! thanks!
> 
> 
> On Tue, Jul 2, 2013 at 6:12 PM, Steve Rowe  wrote:
> After svn up'ing trunk, TestCoreDiscovery.testDuplicateNames (and all other 
> tests in that suite) now pass for me on Windows. - Steve
> 
> On Jul 2, 2013, at 3:44 PM, er...@apache.org wrote:
> 
> > Author: erick
> > Date: Tue Jul  2 19:44:46 2013
> > New Revision: 1499074
> >
> > URL: http://svn.apache.org/r1499074
> > Log:
> > Fixing another file separator issue, test only
> >
> > Modified:
> >
> > lucene/dev/trunk/solr/core/src/test/org/apache/solr/core/TestCoreDiscovery.java
> >
> > Modified: 
> > lucene/dev/trunk/solr/core/src/test/org/apache/solr/core/TestCoreDiscovery.java
> > URL: 
> > http://svn.apache.org/viewvc/lucene/dev/trunk/solr/core/src/test/org/apache/solr/core/TestCoreDiscovery.java?rev=1499074&r1=1499073&r2=1499074&view=diff
> > ==
> > --- 
> > lucene/dev/trunk/solr/core/src/test/org/apache/solr/core/TestCoreDiscovery.java
> >  (original)
> > +++ 
> > lucene/dev/trunk/solr/core/src/test/org/apache/solr/core/TestCoreDiscovery.java
> >  Tue Jul  2 19:44:46 2013
> > @@ -184,8 +184,10 @@ public class TestCoreDiscovery extends S
> >   String message = cause.getMessage();
> >   assertTrue("Should have seen an exception because two cores had the 
> > same name",
> >   message.indexOf("Core core1 defined more than once") != -1);
> > -  assertTrue("/core1 should have been mentioned in the message", 
> > message.indexOf("/core1") != -1);
> > -  assertTrue("/core2 should have been mentioned in the message", 
> > message.indexOf("/core2") != -1);
> > +  assertTrue(File.separator + "core1 should have been mentioned in the 
> > message: " + message,
> > +  message.indexOf(File.separator + "core1") != -1);
> > +  assertTrue(File.separator + "core2 should have been mentioned in the 
> > message:" + message,
> > +  message.indexOf(File.separator + "core2") != -1);
> > } finally {
> >   if (cc != null) {
> > cc.shutdown();
> >
> >
> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 
> 
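The fix in the commit above matters because path assertions that hard-code '/' fail on Windows, where File.separator is '\'. A minimal illustration of the pattern (the core names and message text are just examples, not the real Solr output):

```java
import java.io.File;

public class SeparatorCheck {
    // Build the expected fragment the way the fixed test does:
    // with File.separator instead of a hard-coded '/'.
    static String expectedFragment(String coreName) {
        return File.separator + coreName;
    }

    public static void main(String[] args) {
        // Hypothetical error message, built with the platform separator
        // the way a message containing real paths would be.
        String message = "Core core1 defined more than once: "
                + File.separator + "core1 vs " + File.separator + "core2";
        // Both checks pass on Unix ('/') and Windows ('\') alike,
        // because both sides use the same platform separator.
        System.out.println(message.indexOf(expectedFragment("core1")) != -1);
        System.out.println(message.indexOf(expectedFragment("core2")) != -1);
    }
}
```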



[JENKINS] Lucene-Solr-SmokeRelease-4.x - Build # 88 - Failure

2013-07-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-4.x/88/

No tests ran.

Build Log:
[...truncated 33853 lines...]
prepare-release-no-sign:
[mkdir] Created dir: /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/lucene/build/fakeRelease
 [copy] Copying 416 files to /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/lucene/build/fakeRelease/lucene
 [copy] Copying 194 files to /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/lucene/build/fakeRelease/solr
 [exec] JAVA6_HOME is /home/hudson/tools/java/latest1.6
 [exec] JAVA7_HOME is /home/hudson/tools/java/latest1.7
 [exec] NOTE: output encoding is US-ASCII
 [exec] 
 [exec] Load release URL "file:/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/lucene/build/fakeRelease/"...
 [exec] 
 [exec] Test Lucene...
 [exec]   test basics...
 [exec]   get KEYS
 [exec] 0.1 MB in 0.01 sec (7.9 MB/sec)
 [exec]   check changes HTML...
 [exec]   download lucene-4.4.0-src.tgz...
 [exec] 26.9 MB in 0.04 sec (615.7 MB/sec)
 [exec] verify md5/sha1 digests
 [exec]   download lucene-4.4.0.tgz...
 [exec] 50.5 MB in 0.15 sec (347.5 MB/sec)
 [exec] verify md5/sha1 digests
 [exec]   download lucene-4.4.0.zip...
 [exec] 60.5 MB in 0.43 sec (142.1 MB/sec)
 [exec] verify md5/sha1 digests
 [exec]   unpack lucene-4.4.0.tgz...
 [exec] verify JAR/WAR metadata...
 [exec] test demo with 1.6...
 [exec]   got 5638 hits for query "lucene"
 [exec] test demo with 1.7...
 [exec]   got 5638 hits for query "lucene"
 [exec] check Lucene's javadoc JAR
 [exec]   unpack lucene-4.4.0.zip...
 [exec] verify JAR/WAR metadata...
 [exec] test demo with 1.6...
 [exec]   got 5638 hits for query "lucene"
 [exec] test demo with 1.7...
 [exec]   got 5638 hits for query "lucene"
 [exec] check Lucene's javadoc JAR
 [exec]   unpack lucene-4.4.0-src.tgz...
 [exec] make sure no JARs/WARs in src dist...
 [exec] run "ant validate"
 [exec] run tests w/ Java 6 and testArgs='-Dtests.jettyConnector=Socket'...
 [exec] test demo with 1.6...
 [exec]   got 223 hits for query "lucene"
 [exec] generate javadocs w/ Java 6...
 [exec] run tests w/ Java 7 and testArgs='-Dtests.jettyConnector=Socket'...
 [exec] test demo with 1.7...
 [exec]   got 223 hits for query "lucene"
 [exec] generate javadocs w/ Java 7...
 [exec] 
 [exec] Crawl/parse...
 [exec] 
 [exec] Verify...
 [exec] 
 [exec] Test Solr...
 [exec]   test basics...
 [exec]   get KEYS
 [exec] 0.1 MB in 0.02 sec (4.6 MB/sec)
 [exec]   check changes HTML...
 [exec]   download solr-4.4.0-src.tgz...
 [exec] 30.8 MB in 0.11 sec (273.9 MB/sec)
 [exec] verify md5/sha1 digests
 [exec]   download solr-4.4.0.tgz...
 [exec] 131.5 MB in 0.68 sec (194.7 MB/sec)
 [exec] verify md5/sha1 digests
 [exec]   download solr-4.4.0.zip...
 [exec] 136.4 MB in 0.86 sec (159.4 MB/sec)
 [exec] verify md5/sha1 digests
 [exec]   unpack solr-4.4.0.tgz...
 [exec] verify JAR/WAR metadata...
 [exec] Traceback (most recent call last):
 [exec]   File "/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/dev-tools/scripts/smokeTestRelease.py", line 1450, in 
 [exec] main()
 [exec]   File "/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/dev-tools/scripts/smokeTestRelease.py", line 1394, in main
 [exec] smokeTest(baseURL, svnRevision, version, tmpDir, isSigned, testArgs)
 [exec]   File "/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/dev-tools/scripts/smokeTestRelease.py", line 1438, in smokeTest
 [exec] unpackAndVerify('solr', tmpDir, artifact, svnRevision, version, testArgs)
 [exec]   File "/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/dev-tools/scripts/smokeTestRelease.py", line 607, in unpackAndVerify
 [exec] verifyUnpacked(project, artifact, unpackPath, svnRevision, version, testArgs)
 [exec]   File "/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/dev-tools/scripts/smokeTestRelease.py", line 753, in verifyUnpacked
 [exec] checkAllJARs(os.getcwd(), project, svnRevision, version)
 [exec]   File "/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/dev-tools/scripts/smokeTestRelease.py", line 281, in checkAllJARs
 [exec] noJavaPackageClasses('JAR file "%s"' % fullPath, fullPath)
 [exec]   File "/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/dev-tools/scripts/smokeTestRelease.py", line 178, in noJavaPackageClasses
 [exec] raise RuntimeError('%s contains sheisty class "%
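The smoke tester died inside noJavaPackageClasses, which rejects any release JAR that bundles compiled classes under the JDK's own java/javax package namespaces. Here is the same check sketched in Java; the behavior is inferred from the traceback, not taken from the actual Python script:

```java
import java.io.IOException;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class NoJavaPackageClasses {
    // True if a JAR entry would trip the smoke tester's check: a compiled
    // class living in the java/ or javax/ package namespace.
    static boolean isSheisty(String entryName) {
        return entryName.endsWith(".class")
                && (entryName.startsWith("java/") || entryName.startsWith("javax/"));
    }

    public static void main(String[] args) throws IOException {
        if (args.length == 0) {
            return; // usage: java NoJavaPackageClasses some.jar
        }
        try (JarFile jar = new JarFile(args[0])) {
            for (Enumeration<JarEntry> e = jar.entries(); e.hasMoreElements();) {
                String name = e.nextElement().getName();
                if (isSheisty(name)) {
                    System.out.println("sheisty class: " + name);
                }
            }
        }
    }
}
```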

[jira] [Created] (SOLR-4992) Solr queries don't propagate Java OutOfMemoryError back to the JVM

2013-07-03 Thread Daniel Collins (JIRA)
Daniel Collins created SOLR-4992:


 Summary: Solr queries don't propagate Java OutOfMemoryError back 
to the JVM
 Key: SOLR-4992
 URL: https://issues.apache.org/jira/browse/SOLR-4992
 Project: Solr
  Issue Type: Bug
  Components: search, SolrCloud, update
Affects Versions: 4.3.1
Reporter: Daniel Collins


Solr (specifically SolrDispatchFilter.doFilter(), but there might be other 
places) handles generic java.lang.Throwable errors, but that "hides" 
OutOfMemoryError scenarios.

IndexWriter does this too, but it has a specific exclusion for OOM scenarios 
and handles them explicitly (it stops committing and just logs to the 
transaction log).

{noformat}

Example Stack trace:
2013-06-26 19:31:33,801 [qtp632640515-62] ERROR
solr.servlet.SolrDispatchFilter Q:22 -
null:java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap
space
at org.apache.solr.servlet.SolrDispatchFilter.sendError(SolrDispatchFilter.java:670)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:380)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:155)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1423)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:450)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:138)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:564)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:213)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1083)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:379)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:175)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1017)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:136)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:258)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:109)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.eclipse.jetty.server.Server.handle(Server.java:445)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:260)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:225)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.run(AbstractConnection.java:358)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:596)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:527)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.OutOfMemoryError: Java heap space

{noformat}
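The usual remedy for the problem this issue describes is to handle ordinary exceptions but rethrow JVM Errors, so an OutOfMemoryError still reaches the JVM's error machinery (heap dumps, supervisor restarts, and so on). This is a hedged sketch of the pattern, not the actual SolrDispatchFilter code:

```java
public class ErrorPropagation {
    // Hypothetical request handler, standing in for doFilter().
    static String handle(Runnable work) {
        try {
            work.run();
            return "ok";
        } catch (Throwable t) {
            // Never swallow JVM Errors: rethrow so OOM handling
            // (heap dumps, process supervisors) can react.
            if (t instanceof Error) {
                throw (Error) t;
            }
            return "handled: " + t.getClass().getSimpleName();
        }
    }

    public static void main(String[] args) {
        System.out.println(handle(() -> { throw new RuntimeException("boom"); }));
        try {
            handle(() -> { throw new OutOfMemoryError("simulated"); });
        } catch (OutOfMemoryError expected) {
            System.out.println("OOM propagated");
        }
    }
}
```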

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[ANN] Lucene/SOLR hackday in Cambridge, UK

2013-07-03 Thread Alan Woodward
Hi all,

Flax is running a Lucene/SOLR hack day here in Cambridge on Friday, 26th July, 
with committer and LucidWorks co-founder Grant Ingersoll.  We'll provide the 
venue, some food and the internet - you provide enthusiasm and great ideas for 
hacking!

Details here: 
http://www.meetup.com/Enterprise-Search-Cambridge-UK/events/127351142/.  Places 
are limited, so please book early!

Looking forward to seeing you there,

Alan Woodward
www.flax.co.uk




[jira] [Updated] (SOLR-4815) DIH: Let "commit" be checked by default

2013-07-03 Thread Stefan Matheis (steffkes) (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Matheis (steffkes) updated SOLR-4815:


Attachment: SOLR-4815.patch

> DIH: Let "commit" be checked by default
> ---
>
> Key: SOLR-4815
> URL: https://issues.apache.org/jira/browse/SOLR-4815
> Project: Solr
>  Issue Type: Improvement
>  Components: web gui
>Reporter: Jan Høydahl
>Priority: Trivial
>  Labels: dataimportHandler
> Fix For: 4.4
>
> Attachments: SOLR-4815.patch
>
>
> The new DIH GUI should have "commit" checked by default.
> According to http://wiki.apache.org/solr/DataImportHandler#Commands the REST 
> API has commit=true by default, so it makes sense that the GUI has the same.




[jira] [Comment Edited] (SOLR-4221) Custom sharding

2013-07-03 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698142#comment-13698142
 ] 

Noble Paul edited comment on SOLR-4221 at 7/3/13 8:19 AM:
--

bq. Another use case is to have multiple collections, each with a different 
number of shards and a different number of replicas.

This is why I feel we should not automatically create cores in nodes that come 
up. The best thing to do is:

Any new node will simply not participate in any collection (unless a shard has 
fewer nodes than replicationFactor).

There should be an explicit ASSIGN_NODE command to add/remove nodes to/from a 
shard.
It should be possible to do an ASSIGN_NODE without specifying a node name, in 
which case the overseer would look for free nodes in the cluster and add one to 
the specified shard.

When a new shard is created by a CREATESHARD or a SPLITSHARD command, these 
nodes could automatically be taken up.


> Custom sharding
> ---
>
> Key: SOLR-4221
> URL: https://issues.apache.org/jira/browse/SOLR-4221
> Project: Solr
>  Issue Type: New Feature
>Reporter: Yonik Seeley
>Assignee: Noble Paul
> Attachments: SOLR-4221.patch
>
>
> Features to let users control everything about sharding/routing.




[jira] [Commented] (SOLR-4815) DIH: Let "commit" be checked by default

2013-07-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698740#comment-13698740
 ] 

ASF subversion and git services commented on SOLR-4815:
---

Commit 1499252 from steff...@apache.org
[ https://svn.apache.org/r1499252 ]

SOLR-4815: Admin-UI - DIH: Let "commit" be checked by default

> DIH: Let "commit" be checked by default
> ---
>
> Key: SOLR-4815
> URL: https://issues.apache.org/jira/browse/SOLR-4815
> Project: Solr
>  Issue Type: Improvement
>  Components: web gui
>Reporter: Jan Høydahl
>Priority: Trivial
>  Labels: dataimportHandler
> Fix For: 4.4
>
> Attachments: SOLR-4815.patch
>
>
> The new DIH GUI should have "commit" checked by default.
> According to http://wiki.apache.org/solr/DataImportHandler#Commands the REST 
> API has commit=true by default, so it makes sense that the GUI has the same.




[jira] [Commented] (SOLR-4815) Admin-UI - DIH: Let "commit" be checked by default

2013-07-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698741#comment-13698741
 ] 

ASF subversion and git services commented on SOLR-4815:
---

Commit 1499254 from steff...@apache.org
[ https://svn.apache.org/r1499254 ]

SOLR-4815: Admin-UI - DIH: Let "commit" be checked by default (merge r1499252)

> Admin-UI - DIH: Let "commit" be checked by default
> --
>
> Key: SOLR-4815
> URL: https://issues.apache.org/jira/browse/SOLR-4815
> Project: Solr
>  Issue Type: Improvement
>  Components: web gui
>Reporter: Jan Høydahl
>Assignee: Stefan Matheis (steffkes)
>Priority: Trivial
>  Labels: dataimportHandler
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-4815.patch
>
>
> The new DIH GUI should have "commit" checked by default.
> According to http://wiki.apache.org/solr/DataImportHandler#Commands the REST 
> API has commit=true by default, so it makes sense that the GUI has the same.




[jira] [Resolved] (SOLR-4815) Admin-UI - DIH: Let "commit" be checked by default

2013-07-03 Thread Stefan Matheis (steffkes) (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Matheis (steffkes) resolved SOLR-4815.
-

Resolution: Implemented

Indeed [~janhoy], thanks for the hint :)

> Admin-UI - DIH: Let "commit" be checked by default
> --
>
> Key: SOLR-4815
> URL: https://issues.apache.org/jira/browse/SOLR-4815
> Project: Solr
>  Issue Type: Improvement
>  Components: web gui
>Reporter: Jan Høydahl
>Assignee: Stefan Matheis (steffkes)
>Priority: Trivial
>  Labels: dataimportHandler
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-4815.patch
>
>
> The new DIH GUI should have "commit" checked by default.
> According to http://wiki.apache.org/solr/DataImportHandler#Commands the REST 
> API has commit=true by default, so it makes sense that the GUI has the same.




[jira] [Updated] (SOLR-4815) Admin-UI - DIH: Let "commit" be checked by default

2013-07-03 Thread Stefan Matheis (steffkes) (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Matheis (steffkes) updated SOLR-4815:


Fix Version/s: 5.0
 Assignee: Stefan Matheis (steffkes)
  Summary: Admin-UI - DIH: Let "commit" be checked by default  (was: 
DIH: Let "commit" be checked by default)

> Admin-UI - DIH: Let "commit" be checked by default
> --
>
> Key: SOLR-4815
> URL: https://issues.apache.org/jira/browse/SOLR-4815
> Project: Solr
>  Issue Type: Improvement
>  Components: web gui
>Reporter: Jan Høydahl
>Assignee: Stefan Matheis (steffkes)
>Priority: Trivial
>  Labels: dataimportHandler
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-4815.patch
>
>
> The new DIH GUI should have "commit" checked by default.
> According to http://wiki.apache.org/solr/DataImportHandler#Commands the REST 
> API has commit=true by default, so it makes sense that the GUI has the same.




[jira] [Commented] (LUCENE-5084) EliasFanoDocIdSet

2013-07-03 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698767#comment-13698767
 ] 

Adrien Grand commented on LUCENE-5084:
--

bq. We could, but in which class? For example, in CachingWrapperFilter it might 
be good to save memory, so it could be there.

This new doc id set might be used for other use-cases in the future, so maybe 
we should have this method on the EliasFanoDocIdSet class?

bq. Also, would the expected size be the only thing to check for? When decoding 
speed is also important, other DocIdSets might be preferable.

Sure, this is something we need to give users control over. For filter caches, it 
is already possible to override CachingWrapperFilter.docIdSetToCache to decide 
whether speed or memory usage is more important. The decision can even depend 
on the cardinality of the set to cache, or on its implementation. So we just 
need to provide users with good defaults, I think?

I haven't run performance benchmarks on this set implementation yet, but if it 
is faster than the DocIdSet iterators of our default postings format, then 
they are not going to be a bottleneck, and I think it makes sense to use the 
implementation that saves the most memory. If they are slower or not fast 
enough, then maybe other implementations such as Kamikaze's p-for-delta-based 
doc ID sets (LUCENE-2750) would make more sense as a default.

bq. Can PackedInts.getMutable also be used in a codec?

The PackedInts API can return readers that read directly from an IndexInput, if 
that is the question, but if we want to be able to store the high and low bits 
contiguously then they are not going to be a good fit.

bq. I considered a decoder that returns ints but that would require a lot more 
casting in the decoder.

OK. I just wanted to have your opinion on this; we can keep everything as a 
long.

bq. I'll open another issue for broadword bit selection later.

Sounds good! I think backwards iteration and efficient skipping should be done 
in separate issues as well; even without them, this new doc ID set would be a 
very nice addition.
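For readers following along, the encoding being discussed can be illustrated with a small, self-contained toy implementation. This is an independent sketch of the Elias-Fano idea only, not the LUCENE-5084 code: the class and method names are invented, and a real implementation would add skipping, backwards iteration, and packed storage via the PackedInts API.

```java
import java.util.Arrays;

/**
 * Toy Elias-Fano encoder/decoder for a sorted sequence of non-negative ints.
 * Each value x is split into l low bits (stored packed) and a high part
 * x >>> l, stored in unary: bit (x >>> l) + i is set for the i-th value.
 * Illustrative only; names are invented and this is not Lucene code.
 */
public class EliasFanoSketch {
    final int size;
    final int numLowBits;   // l: number of explicitly stored low bits per value
    final long[] lowBits;   // packed low bits, numLowBits per value
    final long[] highBits;  // unary-coded high parts

    EliasFanoSketch(int[] sorted) {
        size = sorted.length;
        long upper = size == 0 ? 1 : sorted[size - 1] + 1L;
        // l = floor(log2(upperBound / n)), the classic space-optimal choice
        numLowBits = 63 - Long.numberOfLeadingZeros(Math.max(1, upper / Math.max(1, size)));
        lowBits = new long[(int) (((long) size * numLowBits + 63) / 64)];
        long highLen = size + (upper >>> numLowBits) + 1;
        highBits = new long[(int) ((highLen + 63) / 64)];
        long lowMask = (1L << numLowBits) - 1;
        for (int i = 0; i < size; i++) {
            if (numLowBits > 0) {
                long low = sorted[i] & lowMask;
                long bitPos = (long) i * numLowBits;
                lowBits[(int) (bitPos >>> 6)] |= low << (bitPos & 63);
                if ((bitPos & 63) + numLowBits > 64) {  // value straddles two words
                    lowBits[(int) (bitPos >>> 6) + 1] |= low >>> (64 - (bitPos & 63));
                }
            }
            long unaryPos = ((long) sorted[i] >>> numLowBits) + i;
            highBits[(int) (unaryPos >>> 6)] |= 1L << (unaryPos & 63);
        }
    }

    /** Sequentially decodes all values back (no skipping in this sketch). */
    int[] decodeAll() {
        int[] out = new int[size];
        long lowMask = (1L << numLowBits) - 1;
        int i = 0;
        for (long p = 0; p < (long) highBits.length * 64 && i < size; p++) {
            if (((highBits[(int) (p >>> 6)] >>> (p & 63)) & 1L) != 0) {
                long high = p - i;  // the i-th set bit at position p means high part p - i
                long low = 0;
                if (numLowBits > 0) {
                    long bitPos = (long) i * numLowBits;
                    low = lowBits[(int) (bitPos >>> 6)] >>> (bitPos & 63);
                    if ((bitPos & 63) + numLowBits > 64) {
                        low |= lowBits[(int) (bitPos >>> 6) + 1] << (64 - (bitPos & 63));
                    }
                    low &= lowMask;
                }
                out[i++] = (int) ((high << numLowBits) | low);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int[] docs = {1, 4, 7, 18, 24, 26, 30, 31};  // n = 8, upper bound 32 -> l = 2
        EliasFanoSketch ef = new EliasFanoSketch(docs);
        System.out.println("l = " + ef.numLowBits);
        System.out.println(Arrays.toString(ef.decodeAll()));
    }
}
```

The memory win comes from the unary-coded high parts taking at most 2 bits per value on average, which is why a method reporting the expected encoded size (as discussed above) could let callers decide whether this set is worth using.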

> EliasFanoDocIdSet
> -
>
> Key: LUCENE-5084
> URL: https://issues.apache.org/jira/browse/LUCENE-5084
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Paul Elschot
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 5.0
>
> Attachments: LUCENE-5084.patch
>
>
> DocIdSet in Elias-Fano encoding




[jira] [Updated] (SOLR-4986) Upgrade to Tika 1.4

2013-07-03 Thread Markus Jelsma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Markus Jelsma updated SOLR-4986:


Attachment: SOLR-4986-trunk.patch

Patch for trunk, including dependencies, an upgrade of the dev-tools Maven 
template, and added checksums in the licenses.

> Upgrade to Tika 1.4
> ---
>
> Key: SOLR-4986
> URL: https://issues.apache.org/jira/browse/SOLR-4986
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - Solr Cell (Tika extraction)
>Reporter: Jan Høydahl
>Priority: Minor
> Attachments: SOLR-4986-trunk.patch
>
>
> Just released http://www.apache.org/dist/tika/CHANGES-1.4.txt




Folder solr/example/hdfs should not be there

2013-07-03 Thread Jan Høydahl
After running "ant example", a folder "hdfs" sneaks into the source tree at 
solr/example/hdfs.
I have not checked why, but it certainly does not belong there.

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com






[jira] [Created] (LUCENE-5089) Update morfologik (polish stemmer) to 1.6.0

2013-07-03 Thread Dawid Weiss (JIRA)
Dawid Weiss created LUCENE-5089:
---

 Summary: Update morfologik (polish stemmer) to 1.6.0
 Key: LUCENE-5089
 URL: https://issues.apache.org/jira/browse/LUCENE-5089
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Trivial
 Fix For: 5.0







[jira] [Assigned] (SOLR-4986) Upgrade to Tika 1.4

2013-07-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-4986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl reassigned SOLR-4986:
-

Assignee: Jan Høydahl

> Upgrade to Tika 1.4
> ---
>
> Key: SOLR-4986
> URL: https://issues.apache.org/jira/browse/SOLR-4986
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - Solr Cell (Tika extraction)
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Minor
> Attachments: SOLR-4986-trunk.patch
>
>
> Just released http://www.apache.org/dist/tika/CHANGES-1.4.txt




[jira] [Updated] (SOLR-4986) Upgrade to Tika 1.4

2013-07-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-4986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-4986:
--

Fix Version/s: 4.4
               5.0

> Upgrade to Tika 1.4
> ---
>
> Key: SOLR-4986
> URL: https://issues.apache.org/jira/browse/SOLR-4986
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - Solr Cell (Tika extraction)
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Minor
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-4986-trunk.patch
>
>
> Just released http://www.apache.org/dist/tika/CHANGES-1.4.txt




[jira] [Commented] (LUCENE-5085) MorfologikFilter shouldn't stem words marked as keyword

2013-07-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698798#comment-13698798
 ] 

ASF subversion and git services commented on LUCENE-5085:
-

Commit 1499312 from [~dawidweiss]
[ https://svn.apache.org/r1499312 ]

LUCENE-5085: MorfologikFilter will no longer stem words marked as keywords.

> MorfologikFilter shouldn't stem words marked as keyword
> --
>
> Key: LUCENE-5085
> URL: https://issues.apache.org/jira/browse/LUCENE-5085
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 4.2.1
>Reporter: Grzegorz Sobczyk
>Assignee: Dawid Weiss
>Priority: Minor
> Fix For: 5.0, 4.4
>
>
> I added "agd" as a keyword using solr.KeywordMarkerFilterFactory.
> I would like to be able to add synonyms after solr.MorfologikFilterFactory:
>  agd => lodówka, zamrażarka, chłodziarka, piekarnik, etc.
> It's not possible right now. All words (even keywords) are treated the same way.




[jira] [Commented] (LUCENE-5085) MorfologikFilter shouldn't stem words marked as keyword

2013-07-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698799#comment-13698799
 ] 

ASF subversion and git services commented on LUCENE-5085:
-

Commit 1499313 from [~dawidweiss]
[ https://svn.apache.org/r1499313 ]

LUCENE-5085: MorfologikFilter will no longer stem words marked as keywords.

> MorfologikFilter shouldn't stem words marked as keyword
> --
>
> Key: LUCENE-5085
> URL: https://issues.apache.org/jira/browse/LUCENE-5085
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 4.2.1
>Reporter: Grzegorz Sobczyk
>Assignee: Dawid Weiss
>Priority: Minor
> Fix For: 5.0, 4.4
>
>
> I added "agd" as a keyword using solr.KeywordMarkerFilterFactory.
> I would like to be able to add synonyms after solr.MorfologikFilterFactory:
>  agd => lodówka, zamrażarka, chłodziarka, piekarnik, etc.
> It's not possible right now. All words (even keywords) are treated the same way.




[jira] [Resolved] (LUCENE-5085) MorfologikFilter shouldn't stem words marked as keyword

2013-07-03 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss resolved LUCENE-5085.
-

Resolution: Fixed

> MorfologikFilter shouldn't stem words marked as keyword
> --
>
> Key: LUCENE-5085
> URL: https://issues.apache.org/jira/browse/LUCENE-5085
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 4.2.1
>Reporter: Grzegorz Sobczyk
>Assignee: Dawid Weiss
>Priority: Minor
> Fix For: 5.0, 4.4
>
>
> I added "agd" as a keyword using solr.KeywordMarkerFilterFactory.
> I would like to be able to add synonyms after solr.MorfologikFilterFactory:
>  agd => lodówka, zamrażarka, chłodziarka, piekarnik, etc.
> It's not possible right now. All words (even keywords) are treated the same way.




[jira] [Commented] (LUCENE-5085) MorfologikFilter shouldn't stem words marked as keyword

2013-07-03 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698801#comment-13698801
 ] 

Dawid Weiss commented on LUCENE-5085:
-

The filter is now sensitive to the keyword marker. Let me know if this works for 
your scenario (with synonyms in the chain, etc.).
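For anyone trying this out, the intended ordering can be sketched as a schema.xml field type along these lines. This is a hypothetical configuration for illustration; the field type name and the resource file names (protwords.txt, synonyms.txt) are made up. The point is that the keyword marker must come before the Morfologik filter, so that protected terms reach the synonym filter unstemmed:

{noformat}
<fieldType name="text_pl" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- protwords.txt (hypothetical) lists terms to protect from stemming, e.g. "agd" -->
    <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
    <!-- with this fix, keyword-marked terms pass through unstemmed -->
    <filter class="solr.MorfologikFilterFactory"/>
    <!-- synonyms.txt (hypothetical) can then expand the unstemmed "agd" -->
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"/>
  </analyzer>
</fieldType>
{noformat}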

> MorfologikFilter shoudn't stem words marked as keyword
> --
>
> Key: LUCENE-5085
> URL: https://issues.apache.org/jira/browse/LUCENE-5085
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 4.2.1
>Reporter: Grzegorz Sobczyk
>Assignee: Dawid Weiss
>Priority: Minor
> Fix For: 5.0, 4.4
>
>
> I added "agd" as keyword using solr.KeywordMarkerFilterFactory
> I would be able to add synonyms after solr.MorfologikFilterFactory:
>  agd => lodówka, zamrażarka, chłodziarka, piekarnik, etc.
> It's not possible right now. All words (even keywords) are threated same way.




[JENKINS] Lucene-Solr-4.x-Linux (32bit/ibm-j9-jdk6) - Build # 6334 - Failure!

2013-07-03 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6334/
Java: 32bit/ibm-j9-jdk6 
-Xjit:exclude={org/apache/lucene/util/fst/FST.pack(IIF)Lorg/apache/lucene/util/fst/FST;}

1 tests failed.
REGRESSION:  org.apache.solr.cloud.BasicDistributedZk2Test.testDistribSearch

Error Message:
Server at http://127.0.0.1:46721/g/ht/onenodecollectioncore returned non ok 
status:404, message:Can not find: /g/ht/onenodecollectioncore/update

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Server at 
http://127.0.0.1:46721/g/ht/onenodecollectioncore returned non ok status:404, 
message:Can not find: /g/ht/onenodecollectioncore/update
at __randomizedtesting.SeedInfo.seed([F229A6EB98508E88:73CF28F3EF0FEEB4]:0)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:385)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:117)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:116)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:102)
at org.apache.solr.cloud.BasicDistributedZk2Test.testNodeWithoutCollectionForwarding(BasicDistributedZk2Test.java:196)
at org.apache.solr.cloud.BasicDistributedZk2Test.doTest(BasicDistributedZk2Test.java:88)
at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:835)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
at java.lang.reflect.Method.invoke(Method.java:611)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carro

[jira] [Commented] (SOLR-4681) Add spellcheck to default /select handler

2013-07-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-4681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698805#comment-13698805
 ] 

Jan Høydahl commented on SOLR-4681:
---

bq. However, in Solr we DO expose the spellcheck box under /select, and it does 
not work, so I really think spellcheck should be in the default.

If this is the main concern, a better solution is letting the GUI 
disable/grey out the spellcheck checkbox if the selected handler is not 
configured with the spellcheck component.

I don't know whether it is easy to get at such info through clean REST APIs. We 
cannot rely on a component named "spellcheck", since it may be named anything 
in a custom solrconfig. You could fire a dummy query and look at the response, 
but that is not very clean.

> Add spellcheck to default /select handler
> -
>
> Key: SOLR-4681
> URL: https://issues.apache.org/jira/browse/SOLR-4681
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.2
>Reporter: Mark Bennett
> Attachments: SOLR-4681-with-default-select.patch
>
>
> In SOLR-4680 I put a patch to fix spellcheck for the /spell handler.
> This bug/patch does that and also adds spellcheck to the default /select 
> launch.  I'm putting it as a separate bug because I suspect some people may 
> have stronger feelings about adding a component to the default that everybody 
> uses.
> However, in Solr we DO expose the spellcheck box under /select, and it does 
> not work, so I really think spellcheck should be in the default.




[jira] [Updated] (SOLR-4986) Upgrade to Tika 1.4

2013-07-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-4986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-4986:
--

Attachment: SOLR-4986.patch

New patch fixing CHANGES.TXT and a wrong sha1 file. Passes precommit.

> Upgrade to Tika 1.4
> ---
>
> Key: SOLR-4986
> URL: https://issues.apache.org/jira/browse/SOLR-4986
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - Solr Cell (Tika extraction)
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Minor
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-4986.patch, SOLR-4986-trunk.patch
>
>
> Just released http://www.apache.org/dist/tika/CHANGES-1.4.txt




[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0-ea-b94) - Build # 6408 - Failure!

2013-07-03 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/6408/
Java: 32bit/jdk1.8.0-ea-b94 -client -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.TestZkChroot

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.cloud.TestZkChroot: 1) 
Thread[id=5650, name=IPC Parameter Sending Thread #3, state=TIMED_WAITING, 
group=TGRP-TestRecoveryHdfs] at sun.misc.Unsafe.park(Native Method) 
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)   
  at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:360)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:939) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1069)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:724)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.TestZkChroot: 
   1) Thread[id=5650, name=IPC Parameter Sending Thread #3, 
state=TIMED_WAITING, group=TGRP-TestRecoveryHdfs]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458)
at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:360)
at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:939)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1069)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
at __randomizedtesting.SeedInfo.seed([9D5C960BAEFB5EE3]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.TestZkChroot

Error Message:
There are still zombie threads that couldn't be terminated:
   1) Thread[id=5650, name=IPC Parameter Sending Thread #3, state=TIMED_WAITING, group=TGRP-TestRecoveryHdfs]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:360)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:939)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1069)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:724)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie threads that couldn't be terminated:
   1) Thread[id=5650, name=IPC Parameter Sending Thread #3, state=TIMED_WAITING, group=TGRP-TestRecoveryHdfs]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:360)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:939)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1069)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:724)
        at __randomizedtesting.SeedInfo.seed([9D5C960BAEFB5EE3]:0)




Build Log:
[...truncated 11212 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:386: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:366: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:39: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build.xml:190: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/common-build.xml:443: 
The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trun
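Both failures above leak HDFS "IPC" client threads (note group=TGRP-TestRecoveryHdfs even though the failing suite is TestZkChroot, i.e. the threads outlived an earlier HDFS test). A common mitigation with randomizedtesting is a ThreadFilter that ignores known leaky thread names. The class below is a minimal, illustrative name-matching sketch (the class name and prefix list are assumptions, not the project's actual filter):

```java
import java.util.Arrays;
import java.util.List;

/**
 * Sketch of a name-based check for known leaked HDFS client threads,
 * in the spirit of randomizedtesting's ThreadFilter. Illustrative only.
 */
public class KnownHdfsLeaks {
    // Thread-name prefixes seen in the leak reports above.
    private static final List<String> PREFIXES = Arrays.asList(
            "IPC Parameter Sending Thread", // HDFS RPC parameter-sender pool
            "IPC Client");                  // cached HDFS client connections

    public static boolean isKnownHdfsLeak(String threadName) {
        for (String p : PREFIXES) {
            if (threadName.startsWith(p)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(isKnownHdfsLeak("IPC Parameter Sending Thread #3")); // true
        System.out.println(isKnownHdfsLeak("main"));                            // false
    }
}
```

In practice such a predicate would be wrapped in a ThreadFilter implementation and registered on the test class, so the runner reports genuine leaks but tolerates these known HDFS offenders.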

[jira] [Commented] (SOLR-4986) Upgrade to Tika 1.4

2013-07-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698869#comment-13698869
 ] 

ASF subversion and git services commented on SOLR-4986:
---

Commit 1499338 from jan...@apache.org
[ https://svn.apache.org/r1499338 ]

SOLR-4986: Upgrade to Tika 1.4

> Upgrade to Tika 1.4
> ---
>
> Key: SOLR-4986
> URL: https://issues.apache.org/jira/browse/SOLR-4986
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - Solr Cell (Tika extraction)
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Minor
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-4986.patch, SOLR-4986-trunk.patch
>
>
> Just released http://www.apache.org/dist/tika/CHANGES-1.4.txt

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.x-Linux (64bit/jdk1.8.0-ea-b94) - Build # 6335 - Still Failing!

2013-07-03 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6335/
Java: 64bit/jdk1.8.0-ea-b94 -XX:+UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.RecoveryZkTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.cloud.RecoveryZkTest:
   1) Thread[id=4489, name=java.util.concurrent.ThreadPoolExecutor$Worker@740887b3[State = -1, empty queue], state=WAITING, group=TGRP-TestRecoveryHdfs]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2038)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:439)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1070)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:724)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.cloud.RecoveryZkTest:
   1) Thread[id=4489, name=java.util.concurrent.ThreadPoolExecutor$Worker@740887b3[State = -1, empty queue], state=WAITING, group=TGRP-TestRecoveryHdfs]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2038)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:439)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1070)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:724)
        at __randomizedtesting.SeedInfo.seed([56391CCAD040FC84]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.RecoveryZkTest

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=4489, 
name=java.util.concurrent.ThreadPoolExecutor$Worker@740887b3[State = -1, empty 
queue], state=WAITING, group=TGRP-TestRecoveryHdfs] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2038)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:439) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1070)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:724)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=4489, 
name=java.util.concurrent.ThreadPoolExecutor$Worker@740887b3[State = -1, empty 
queue], state=WAITING, group=TGRP-TestRecoveryHdfs]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2038)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:439)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1070)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
at __randomizedtesting.SeedInfo.seed([56391CCAD040FC84]:0)




Build Log:
[...truncated 19361 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:392: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:372: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:39: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/build.xml:190: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/common-build.xml:443: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/common-build.xml:1249: 
The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.

[jira] [Commented] (SOLR-4986) Upgrade to Tika 1.4

2013-07-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698879#comment-13698879
 ] 

ASF subversion and git services commented on SOLR-4986:
---

Commit 1499349 from jan...@apache.org
[ https://svn.apache.org/r1499349 ]

SOLR-4986: Upgrade to Tika 1.4 (merge from trunk)

> Upgrade to Tika 1.4
> ---
>
> Key: SOLR-4986
> URL: https://issues.apache.org/jira/browse/SOLR-4986
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - Solr Cell (Tika extraction)
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Minor
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-4986.patch, SOLR-4986-trunk.patch
>
>
> Just released http://www.apache.org/dist/tika/CHANGES-1.4.txt




[jira] [Resolved] (SOLR-4986) Upgrade to Tika 1.4

2013-07-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-4986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-4986.
---

Resolution: Fixed

Thanks Markus

> Upgrade to Tika 1.4
> ---
>
> Key: SOLR-4986
> URL: https://issues.apache.org/jira/browse/SOLR-4986
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - Solr Cell (Tika extraction)
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Minor
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-4986.patch, SOLR-4986-trunk.patch
>
>
> Just released http://www.apache.org/dist/tika/CHANGES-1.4.txt




[jira] [Commented] (LUCENE-5089) Update morfologik (polish stemmer) to 1.6.0

2013-07-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698885#comment-13698885
 ] 

ASF subversion and git services commented on LUCENE-5089:
-

Commit 1499352 from [~dawidweiss]
[ https://svn.apache.org/r1499352 ]

LUCENE-5089: Update to Morfologik 1.6.0. MorfologikAnalyzer and 
MorfologikFilter no longer support multiple "dictionaries" as there is only one 
dictionary available.

> Update morfologik (polish stemmer) to 1.6.0
> ---
>
> Key: LUCENE-5089
> URL: https://issues.apache.org/jira/browse/LUCENE-5089
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: 5.0
>
>





[jira] [Resolved] (LUCENE-5089) Update morfologik (polish stemmer) to 1.6.0

2013-07-03 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss resolved LUCENE-5089.
-

Resolution: Fixed

> Update morfologik (polish stemmer) to 1.6.0
> ---
>
> Key: LUCENE-5089
> URL: https://issues.apache.org/jira/browse/LUCENE-5089
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: 5.0
>
>





[jira] [Updated] (LUCENE-5089) Update morfologik (polish stemmer) to 1.6.0

2013-07-03 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-5089:


Issue Type: Improvement  (was: Bug)

> Update morfologik (polish stemmer) to 1.6.0
> ---
>
> Key: LUCENE-5089
> URL: https://issues.apache.org/jira/browse/LUCENE-5089
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: 5.0
>
>





[jira] [Commented] (SOLR-1163) Solr Explorer - A generic GWT client for Solr

2013-07-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-1163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698889#comment-13698889
 ] 

Jan Høydahl commented on SOLR-1163:
---

Long time since any activity. For those interested, we have put up a recent 
version of the explorer on GitHub:
https://github.com/cominvent/solr-explorer

> Solr Explorer - A generic GWT client for Solr
> -
>
> Key: SOLR-1163
> URL: https://issues.apache.org/jira/browse/SOLR-1163
> Project: Solr
>  Issue Type: New Feature
>  Components: web gui
>Affects Versions: 1.3
>Reporter: Uri Boness
> Attachments: graphics.zip, SOLR-1163.zip, SOLR-1163.zip, 
> solr-explorer.patch, solr-explorer.patch
>
>
> The attached patch is a generic GWT client for Solr. It is currently 
> standalone, meaning that once built, one can open the generated HTML file in 
> a browser and communicate with any deployed Solr. It is configured with its 
> own configuration file, where one can configure the Solr instance/core to 
> connect to. Since it's currently standalone and completely client-side based, 
> it uses JSON with padding (cross-site scripting) to connect to remote Solr 
> servers. Some of the supported features:
> - Simple query search
> - Sorting - one can dynamically define new sort criteria
> - Search results are rendered very much like Google search results. It is 
> also possible to view all stored field values for every hit.
> - Custom hit rendering - it is possible to show thumbnails (images) per hit 
> and also customize a view for a hit based on HTML templates
> - Faceting - one can dynamically define field and query facets via the UI. It 
> is also possible to pre-configure these facets in the configuration file.
> - Highlighting - you can dynamically configure highlighting. It can also be 
> pre-configured in the configuration file
> - Spellchecking - you can dynamically configure spell checking. Can also be 
> done in the configuration file. Supports collation. It is also possible to 
> send "build" and "reload" commands.
> - Data import handler - if used, it is possible to send a "full-import" and 
> "status" command ("delta-import" is not implemented yet, but it's easy to add)
> - Console - for development time, there's a small console which can help to 
> better understand what's going on behind the scenes. One can use it to:
> ** view the client logs
> ** browse the Solr schema
> ** view a breakdown of the current search context
> ** view a breakdown of the query URL that is sent to Solr
> ** view the raw JSON response returned from Solr
> This client is actually a platform that can be greatly extended for more 
> things. The goal is to have a client where the explorer part is just one view 
> of it. Other future views include: Monitoring, Administration, Query Builder, 
> DataImportHandler configuration, and more...
> To get a better view of what's currently possible, we've set up a public 
> version of this client at: http://search.jteam.nl/explorer. This client is 
> configured with one Solr instance where crawled YouTube movies were indexed. 
> You can also check out a screencast for this deployed client: 
> http://search.jteam.nl/help
> The patch creates a new folder in the contrib directory. Since the patch 
> doesn't contain binaries, an additional zip file is provided that needs to be 
> extracted to add all the required graphics. This module is Maven2-based and is 
> configured in such a way that all GWT-related tools/libraries are 
> automatically downloaded when the module is compiled. One of the artifacts 
> of the build is a war file which can be deployed in any servlet container.
> NOTE: this client works best on WebKit-based browsers (for performance 
> reasons) but also works on Firefox and IE 7+. That said, it should be taken 
> into account that it is still under development.




[jira] [Assigned] (LUCENE-5013) ScandinavianInterintelligableASCIIFoldingFilter

2013-07-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/LUCENE-5013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl reassigned LUCENE-5013:
---

Assignee: Jan Høydahl

> ScandinavianInterintelligableASCIIFoldingFilter
> ---
>
> Key: LUCENE-5013
> URL: https://issues.apache.org/jira/browse/LUCENE-5013
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Affects Versions: 4.3
>Reporter: Karl Wettin
>Assignee: Jan Høydahl
>Priority: Trivial
> Attachments: LUCENE-5013-2.txt, LUCENE-5013-3.txt, LUCENE-5013-4.txt, 
> LUCENE-5013-5.txt, LUCENE-5013-6.txt, LUCENE-5013.patch, LUCENE-5013.txt
>
>
> This filter is an augmentation of the output from ASCIIFoldingFilter:
> it discriminates against the double vowels aa, ae, ao, oe and oo, leaving 
> just the first one.
> blåbærsyltetøj == blåbärsyltetöj == blaabaarsyltetoej == blabarsyltetoj
> räksmörgås == ræksmørgås == ræksmörgaos == raeksmoergaas == raksmorgas
> Caveats:
> Since this filters on top of ASCIIFoldingFilter, äöåøæ have already been 
> folded down to aoaoae by the time this filter sees them, which causes effects 
> such as:
> bøen -> boen -> bon
> åene -> aene -> ane
> I find this to be a trivial problem compared to not finding anything at all.
> Background:
> Swedish åäö are in fact the same letters as Norwegian and Danish åæø and thus 
> interchangeable when used between these languages. They are, however, folded 
> differently when people type them on a keyboard lacking these characters, and 
> ASCIIFoldingFilter handles ä and æ differently.
> When a Swedish person lacks umlauted characters on the keyboard they 
> consistently type a, a, o instead of å, ä, ö. Foreigners also tend to use a, 
> a, o.
> In Norway people tend to type aa, ae and oe instead of å, æ and ø. Some use 
> a, a, o. I've also seen oo, ao, etc., and permutations. Not sure about 
> Denmark, but the pattern is probably the same.
> This filter solves that problem, but might also cause new ones.
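The double-vowel reduction described above can be sketched as a plain string transform. This is illustrative only; the real implementation is a Lucene TokenFilter operating on ASCIIFoldingFilter output, and the class below is a hypothetical standalone sketch:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

/** Sketch of the double-vowel reduction described in the issue (not the actual Lucene filter). */
public class DoubleVowelFold {
    // Pairs that are collapsed to their first letter.
    private static final Set<String> PAIRS =
            new HashSet<>(Arrays.asList("aa", "ae", "ao", "oe", "oo"));

    public static String fold(String s) {
        StringBuilder out = new StringBuilder(s.length());
        for (int i = 0; i < s.length(); i++) {
            out.append(s.charAt(i));
            if (i + 1 < s.length() && PAIRS.contains(s.substring(i, i + 2))) {
                i++; // drop the second vowel of the pair
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // The differently ASCII-folded variants from the issue collapse to the same key:
        System.out.println(fold("blaabaarsyltetoej")); // blabarsyltetoj
        System.out.println(fold("raeksmoergaas"));     // raksmorgas
    }
}
```

The point of the reduction is visible in the examples: "aa" (Norwegian-style folding) and the single "a" (Swedish-style folding) end up identical, so the two spelling conventions match each other at search time.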




[jira] [Updated] (LUCENE-5013) ScandinavianFoldingFilterFactory and ScandinavianNormalizationFilterFactory

2013-07-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/LUCENE-5013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated LUCENE-5013:


Summary: ScandinavianFoldingFilterFactory and 
ScandinavianNormalizationFilterFactory  (was: 
ScandinavianInterintelligableASCIIFoldingFilter)

> ScandinavianFoldingFilterFactory and ScandinavianNormalizationFilterFactory
> ---
>
> Key: LUCENE-5013
> URL: https://issues.apache.org/jira/browse/LUCENE-5013
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Affects Versions: 4.3
>Reporter: Karl Wettin
>Assignee: Jan Høydahl
>Priority: Trivial
> Attachments: LUCENE-5013-2.txt, LUCENE-5013-3.txt, LUCENE-5013-4.txt, 
> LUCENE-5013-5.txt, LUCENE-5013-6.txt, LUCENE-5013.patch, LUCENE-5013.txt
>
>
> This filter is an augmentation of the output from ASCIIFoldingFilter:
> it discriminates against the double vowels aa, ae, ao, oe and oo, leaving 
> just the first one.
> blåbærsyltetøj == blåbärsyltetöj == blaabaarsyltetoej == blabarsyltetoj
> räksmörgås == ræksmørgås == ræksmörgaos == raeksmoergaas == raksmorgas
> Caveats:
> Since this filters on top of ASCIIFoldingFilter, äöåøæ have already been 
> folded down to aoaoae by the time this filter sees them, which causes effects 
> such as:
> bøen -> boen -> bon
> åene -> aene -> ane
> I find this to be a trivial problem compared to not finding anything at all.
> Background:
> Swedish åäö are in fact the same letters as Norwegian and Danish åæø and thus 
> interchangeable when used between these languages. They are, however, folded 
> differently when people type them on a keyboard lacking these characters, and 
> ASCIIFoldingFilter handles ä and æ differently.
> When a Swedish person lacks umlauted characters on the keyboard they 
> consistently type a, a, o instead of å, ä, ö. Foreigners also tend to use a, 
> a, o.
> In Norway people tend to type aa, ae and oe instead of å, æ and ø. Some use 
> a, a, o. I've also seen oo, ao, etc., and permutations. Not sure about 
> Denmark, but the pattern is probably the same.
> This filter solves that problem, but might also cause new ones.




[jira] [Updated] (LUCENE-5013) ScandinavianFoldingFilterFactory and ScandinavianNormalizationFilterFactory

2013-07-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/LUCENE-5013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated LUCENE-5013:


Fix Version/s: 4.4
   5.0

> ScandinavianFoldingFilterFactory and ScandinavianNormalizationFilterFactory
> ---
>
> Key: LUCENE-5013
> URL: https://issues.apache.org/jira/browse/LUCENE-5013
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Affects Versions: 4.3
>Reporter: Karl Wettin
>Assignee: Jan Høydahl
>Priority: Trivial
> Fix For: 5.0, 4.4
>
> Attachments: LUCENE-5013-2.txt, LUCENE-5013-3.txt, LUCENE-5013-4.txt, 
> LUCENE-5013-5.txt, LUCENE-5013-6.txt, LUCENE-5013.patch, LUCENE-5013.txt
>
>
> This filter is an augmentation of the output from ASCIIFoldingFilter:
> it discriminates against the double vowels aa, ae, ao, oe and oo, leaving 
> just the first one.
> blåbærsyltetøj == blåbärsyltetöj == blaabaarsyltetoej == blabarsyltetoj
> räksmörgås == ræksmørgås == ræksmörgaos == raeksmoergaas == raksmorgas
> Caveats:
> Since this filters on top of ASCIIFoldingFilter, äöåøæ have already been 
> folded down to aoaoae by the time this filter sees them, which causes effects 
> such as:
> bøen -> boen -> bon
> åene -> aene -> ane
> I find this to be a trivial problem compared to not finding anything at all.
> Background:
> Swedish åäö are in fact the same letters as Norwegian and Danish åæø and thus 
> interchangeable when used between these languages. They are, however, folded 
> differently when people type them on a keyboard lacking these characters, and 
> ASCIIFoldingFilter handles ä and æ differently.
> When a Swedish person lacks umlauted characters on the keyboard they 
> consistently type a, a, o instead of å, ä, ö. Foreigners also tend to use a, 
> a, o.
> In Norway people tend to type aa, ae and oe instead of å, æ and ø. Some use 
> a, a, o. I've also seen oo, ao, etc., and permutations. Not sure about 
> Denmark, but the pattern is probably the same.
> This filter solves that problem, but might also cause new ones.




[jira] [Created] (SOLR-4993) Document the new Scandinavian and Norwegian token filters

2013-07-03 Thread JIRA
Jan Høydahl created SOLR-4993:
-

 Summary: Document the new Scandinavian and Norwegian token filters
 Key: SOLR-4993
 URL: https://issues.apache.org/jira/browse/SOLR-4993
 Project: Solr
  Issue Type: Task
  Components: documentation
Affects Versions: 4.4
Reporter: Jan Høydahl
Assignee: Jan Høydahl
Priority: Trivial


Need to document in Wiki and Confluence the following:

* Improved NorwegianLightStemFilter, NorwegianMinimalStemFilter, 
variant=nb,nn,no
* New ScandinavianFoldingFilterFactory and 
ScandinavianNormalizationFilterFactory




[jira] [Created] (SOLR-4994) Add text_nn and text_nb and improve the defaults

2013-07-03 Thread JIRA
Jan Høydahl created SOLR-4994:
-

 Summary: Add text_nn and text_nb and improve the defaults
 Key: SOLR-4994
 URL: https://issues.apache.org/jira/browse/SOLR-4994
 Project: Solr
  Issue Type: Improvement
  Components: Schema and Analysis
Affects Versions: 4.4
Reporter: Jan Høydahl
Assignee: Jan Høydahl
Priority: Trivial


h2. New field types
Create field types for the variants nn and nb, using specific stopwords and 
stemmer variants

The old text_no should then probably switch to using variant=no to be 
consistent, even if this is a backwards-incompatible change - but it's only an 
example schema, so people should take care. Perhaps mention it at the top of 
CHANGES.txt.

h2. Add normalization/folding
For all the Norwegian field types, consider adding 
ScandinavianNormalizationFilterFactory to normalize ae->æ, ä->æ etc. This is a 
light normalization which would be beneficial to most if not all users. 

Alternatively, add a commented example.
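Such a commented example might look like the following schema fragment (a sketch; the field type name and the surrounding analyzer chain are illustrative, and only the filter factory names come from these issues):

```xml
<!-- Norwegian Bokmål. Uncomment the normalization filter to fold
     ae -> æ, ä -> æ, etc. before stemming (light, broadly useful). -->
<fieldType name="text_nb" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- <filter class="solr.ScandinavianNormalizationFilterFactory"/> -->
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_no.txt"/>
    <filter class="solr.NorwegianLightStemFilterFactory" variant="nb"/>
  </analyzer>
</fieldType>
```

A text_nn variant would differ only in the stemmer's variant attribute (and possibly its stopword list).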




[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0-ea-b94) - Build # 6409 - Still Failing!

2013-07-03 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/6409/
Java: 64bit/jdk1.8.0-ea-b94 -XX:-UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeySafeLeaderTest

Error Message:
2 threads leaked from SUITE scope at org.apache.solr.cloud.ChaosMonkeySafeLeaderTest:
   1) Thread[id=3451, name=IPC Client (1992368839) connection to localhost.localdomain/127.0.0.1:60617 from jenkins, state=TIMED_WAITING, group=TGRP-TestRecoveryHdfs]
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:799)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:843)
   2) Thread[id=3452, name=IPC Parameter Sending Thread #5, state=TIMED_WAITING, group=TGRP-HdfsDirectoryTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:360)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:939)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1069)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:724)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE 
scope at org.apache.solr.cloud.ChaosMonkeySafeLeaderTest: 
   1) Thread[id=3451, name=IPC Client (1992368839) connection to 
localhost.localdomain/127.0.0.1:60617 from jenkins, state=TIMED_WAITING, 
group=TGRP-TestRecoveryHdfs]
at java.lang.Object.wait(Native Method)
at org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:799)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:843)
   2) Thread[id=3452, name=IPC Parameter Sending Thread #5, 
state=TIMED_WAITING, group=TGRP-HdfsDirectoryTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458)
at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:360)
at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:939)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1069)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
at __randomizedtesting.SeedInfo.seed([1F1773714B67D70]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeySafeLeaderTest

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=3452, name=IPC Parameter Sending Thread #5, state=TIMED_WAITING, 
group=TGRP-HdfsDirectoryTest] at sun.misc.Unsafe.park(Native Method)
 at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)  
   at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:360)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:939) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1069)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:724)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie threads that couldn't be terminated:
   1) Thread[id=3452, name=IPC Parameter Sending Thread #5, state=TIMED_WAITING, group=TGRP-HdfsDirectoryTest]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458)
at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:360)
at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:939)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1069)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)

[jira] [Updated] (LUCENE-5059) Report -Dtests.method properly when Repeat annotation is used (strip augmentations)

2013-07-03 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-5059:


Summary: Report -Dtests.method properly when Repeat annotation is used 
(strip augmentations)  (was: Check the issue with Repeat annotation and 
-Dtests.method argument)

> Report -Dtests.method properly when Repeat annotation is used (strip 
> augmentations)
> ---
>
> Key: LUCENE-5059
> URL: https://issues.apache.org/jira/browse/LUCENE-5059
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5059) Report -Dtests.method properly when Repeat annotation is used (strip augmentations)

2013-07-03 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-5059:


Fix Version/s: 4.4
   5.0

> Report -Dtests.method properly when Repeat annotation is used (strip 
> augmentations)
> ---
>
> Key: LUCENE-5059
> URL: https://issues.apache.org/jira/browse/LUCENE-5059
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
> Fix For: 5.0, 4.4
>
>





[jira] [Commented] (LUCENE-5059) Report -Dtests.method properly when Repeat annotation is used (strip augmentations)

2013-07-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698913#comment-13698913
 ] 

ASF subversion and git services commented on LUCENE-5059:
-

Commit 1499376 from [~dawidweiss]
[ https://svn.apache.org/r1499376 ]

LUCENE-5059: Report -Dtests.method properly when Repeat annotation is used 
(strip augmentations).

> Report -Dtests.method properly when Repeat annotation is used (strip 
> augmentations)
> ---
>
> Key: LUCENE-5059
> URL: https://issues.apache.org/jira/browse/LUCENE-5059
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
> Fix For: 5.0, 4.4
>
>





[jira] [Resolved] (LUCENE-5059) Report -Dtests.method properly when Repeat annotation is used (strip augmentations)

2013-07-03 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss resolved LUCENE-5059.
-

Resolution: Fixed

> Report -Dtests.method properly when Repeat annotation is used (strip 
> augmentations)
> ---
>
> Key: LUCENE-5059
> URL: https://issues.apache.org/jira/browse/LUCENE-5059
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
> Fix For: 5.0, 4.4
>
>





[jira] [Commented] (LUCENE-5059) Report -Dtests.method properly when Repeat annotation is used (strip augmentations)

2013-07-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698914#comment-13698914
 ] 

ASF subversion and git services commented on LUCENE-5059:
-

Commit 1499378 from [~dawidweiss]
[ https://svn.apache.org/r1499378 ]

LUCENE-5059: Report -Dtests.method properly when Repeat annotation is used 
(strip augmentations).

> Report -Dtests.method properly when Repeat annotation is used (strip 
> augmentations)
> ---
>
> Key: LUCENE-5059
> URL: https://issues.apache.org/jira/browse/LUCENE-5059
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
> Fix For: 5.0, 4.4
>
>





[jira] [Commented] (LUCENE-5086) RamUsageEstimator causes AWT classes to be loaded by calling ManagementFactory#getPlatformMBeanServer

2013-07-03 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698926#comment-13698926
 ] 

Dawid Weiss commented on LUCENE-5086:
-

This would normally be handled by the test framework as a thread leak, but it's 
explicitly excluded in the default randomizedtesting filters:
{code}
  // Explicit check for MacOSX AWT-AppKit
  if (t.getName().equals("AWT-AppKit")) {
return true;
  }
{code}

I believe ManagementFactory#getPlatformMBeanServer was not the only call that 
could have started that AWT daemon; very odd.

> RamUsageEstimator causes AWT classes to be loaded by calling 
> ManagementFactory#getPlatformMBeanServer
> -
>
> Key: LUCENE-5086
> URL: https://issues.apache.org/jira/browse/LUCENE-5086
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Shay Banon
>
> Yea, that type of day and that type of title :).
> Since the last update of Java 6 on OS X, I started to see an annoying icon 
> pop up in the dock whenever running elasticsearch. By default, all of our 
> scripts add the headless AWT flag, so people will probably not encounter it, 
> but it was strange that I saw it now when I didn't before.
> I started to dig around and saw that when RamUsageEstimator was being 
> loaded, it was causing AWT classes to be loaded. Further investigation 
> showed that, for some reason, calling 
> ManagementFactory#getPlatformMBeanServer with the new Java version now 
> causes AWT classes to be loaded (at least on the Mac; I haven't tested other 
> platforms yet).
> There are several ways to try and solve it, for example by identifying the 
> bug in the JVM itself, but I think there should be a fix in Lucene itself, 
> specifically since there is no need to call #getPlatformMBeanServer to get 
> the hotspot diagnostics bean (it's a heavy call...).
> Here is a simple snippet that gets the hotspot MXBean without using the 
> #getPlatformMBeanServer method, thereby avoiding loading all those nasty AWT 
> classes:
> {code}
> Object getHotSpotMXBean() {
>     try {
>         // Java 6
>         Class sunMF = Class.forName("sun.management.ManagementFactory");
>         return sunMF.getMethod("getDiagnosticMXBean").invoke(null);
>     } catch (Throwable t) {
>         // ignore
>     }
>     // potentially Java 7
>     try {
>         return ManagementFactory.class
>             .getMethod("getPlatformMXBean", Class.class)
>             .invoke(null, Class.forName("com.sun.management.HotSpotDiagnosticMXBean"));
>     } catch (Throwable t) {
>         // ignore
>     }
>     return null;
> }
> {code}




[jira] [Assigned] (LUCENE-5086) RamUsageEstimator causes AWT classes to be loaded by calling ManagementFactory#getPlatformMBeanServer

2013-07-03 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss reassigned LUCENE-5086:
---

Assignee: Dawid Weiss

> RamUsageEstimator causes AWT classes to be loaded by calling 
> ManagementFactory#getPlatformMBeanServer
> -
>
> Key: LUCENE-5086
> URL: https://issues.apache.org/jira/browse/LUCENE-5086
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Shay Banon
>Assignee: Dawid Weiss
>




[jira] [Commented] (LUCENE-5013) ScandinavianFoldingFilterFactory and ScandinavianNormalizationFilterFactory

2013-07-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698929#comment-13698929
 ] 

ASF subversion and git services commented on LUCENE-5013:
-

Commit 1499382 from jan...@apache.org
[ https://svn.apache.org/r1499382 ]

LUCENE-5013: ScandinavianFoldingFilterFactory and 
ScandinavianNormalizationFilterFactory

> ScandinavianFoldingFilterFactory and ScandinavianNormalizationFilterFactory
> ---
>
> Key: LUCENE-5013
> URL: https://issues.apache.org/jira/browse/LUCENE-5013
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Affects Versions: 4.3
>Reporter: Karl Wettin
>Assignee: Jan Høydahl
>Priority: Trivial
> Fix For: 5.0, 4.4
>
> Attachments: LUCENE-5013-2.txt, LUCENE-5013-3.txt, LUCENE-5013-4.txt, 
> LUCENE-5013-5.txt, LUCENE-5013-6.txt, LUCENE-5013.patch, LUCENE-5013.txt
>
>
> This filter is an augmentation of the output from ASCIIFoldingFilter:
> it discriminates against the double vowels aa, ae, ao, oe and oo, leaving 
> just the first one.
> blåbærsyltetøj == blåbärsyltetöj == blaabaarsyltetoej == blabarsyltetoj
> räksmörgås == ræksmørgås == ræksmörgaos == raeksmoergaas == raksmorgas
> Caveats:
> Since this filters on top of ASCIIFoldingFilter, where äöåøæ have already 
> been folded down to aoaoae by the time this filter sees them, it will cause 
> effects such as:
> bøen -> boen -> bon
> åene -> aene -> ane
> I find this a trivial problem compared to not finding anything at all.
> Background:
> Swedish åäö are in fact the same letters as Norwegian and Danish åæø, and 
> thus interchangeable when used between these languages. They are, however, 
> folded differently when people type them on a keyboard lacking these 
> characters, and ASCIIFoldingFilter handles ä and æ differently.
> When Swedish people lack umlauted characters on the keyboard they 
> consistently type a, a, o instead of å, ä, ö. Foreigners also tend to use 
> a, a, o.
> In Norway people tend to type aa, ae and oe instead of å, æ and ø. Some use 
> a, a, o. I've also seen oo, ao, etc., and permutations. Not sure about 
> Denmark, but the pattern is probably the same.
> This filter solves that problem, but might also cause new ones.
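
The double-vowel folding described above can be sketched as a small standalone program. Note this is a simplified approximation for illustration only, not the actual Lucene ScandinavianFoldingFilter (which operates on a TokenStream character by character); the class name and the `fold` helper are hypothetical:

```java
// Simplified sketch of the double-vowel folding idea: first approximate the
// ASCII-folding step by expanding the national characters into the digraph
// forms people type on ASCII keyboards, then collapse the double vowels
// aa, ae, ao, oe and oo down to their first letter.
public class ScandinavianFoldSketch {

    static String fold(String s) {
        String folded = s
            .replace("å", "aa").replace("æ", "ae").replace("ä", "ae")
            .replace("ø", "oe").replace("ö", "oe");
        // Collapse double vowels, keeping just the first letter.
        return folded
            .replace("aa", "a").replace("ae", "a").replace("ao", "a")
            .replace("oe", "o").replace("oo", "o");
    }

    public static void main(String[] args) {
        // Different spellings of the same word collapse to one form:
        System.out.println(fold("blåbærsyltetøj"));     // blabarsyltetoj
        System.out.println(fold("blaabaarsyltetoej"));  // blabarsyltetoj
        System.out.println(fold("räksmörgås"));         // raksmorgas
    }
}
```

A single left-to-right replace pass does not iterate to a fixed point, so degenerate inputs like the caveat examples may fold slightly differently here than in the real filter; the point is only that variant spellings converge on a common searchable form.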




[jira] [Commented] (LUCENE-5086) RamUsageEstimator causes AWT classes to be loaded by calling ManagementFactory#getPlatformMBeanServer

2013-07-03 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698932#comment-13698932
 ] 

Dawid Weiss commented on LUCENE-5086:
-

Shay, can you say which version of Java (macosx) is causing this to happen? 
I'll try to reproduce tonight on my mac and see what the possible workarounds 
are (yours included).

> RamUsageEstimator causes AWT classes to be loaded by calling 
> ManagementFactory#getPlatformMBeanServer
> -
>
> Key: LUCENE-5086
> URL: https://issues.apache.org/jira/browse/LUCENE-5086
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Shay Banon
>Assignee: Dawid Weiss
>




[jira] [Commented] (LUCENE-5013) ScandinavianFoldingFilterFactory and ScandinavianNormalizationFilterFactory

2013-07-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/LUCENE-5013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698945#comment-13698945
 ] 

Jan Høydahl commented on LUCENE-5013:
-

Oops, added at wrong root path, will fix

> ScandinavianFoldingFilterFactory and ScandinavianNormalizationFilterFactory
> ---
>
> Key: LUCENE-5013
> URL: https://issues.apache.org/jira/browse/LUCENE-5013
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Affects Versions: 4.3
>Reporter: Karl Wettin
>Assignee: Jan Høydahl
>Priority: Trivial
> Fix For: 5.0, 4.4
>
> Attachments: LUCENE-5013-2.txt, LUCENE-5013-3.txt, LUCENE-5013-4.txt, 
> LUCENE-5013-5.txt, LUCENE-5013-6.txt, LUCENE-5013.patch, LUCENE-5013.txt
>
>




[jira] [Commented] (LUCENE-5030) FuzzySuggester has to operate FSTs of Unicode-letters, not UTF-8, to work correctly for 1-byte (like English) and multi-byte (non-Latin) letters

2013-07-03 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698951#comment-13698951
 ] 

Michael McCandless commented on LUCENE-5030:


Maybe we should rename UNICODE_AWARE to FUZZY_UNICODE_AWARE?  (Because 
AnalyzingSuggester itself is already unicode aware... so this flag only impacts 
FuzzySuggester.)

> FuzzySuggester has to operate FSTs of Unicode-letters, not UTF-8, to work 
> correctly for 1-byte (like English) and multi-byte (non-Latin) letters
> 
>
> Key: LUCENE-5030
> URL: https://issues.apache.org/jira/browse/LUCENE-5030
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.3
>Reporter: Artem Lukanin
>Assignee: Michael McCandless
> Fix For: 5.0, 4.4
>
> Attachments: benchmark-INFO_SEP.txt, benchmark-old.txt, 
> benchmark-wo_convertion.txt, LUCENE-5030.patch, LUCENE-5030.patch, 
> nonlatin_fuzzySuggester1.patch, nonlatin_fuzzySuggester2.patch, 
> nonlatin_fuzzySuggester3.patch, nonlatin_fuzzySuggester4.patch, 
> nonlatin_fuzzySuggester_combo1.patch, nonlatin_fuzzySuggester_combo2.patch, 
> nonlatin_fuzzySuggester_combo.patch, nonlatin_fuzzySuggester.patch, 
> nonlatin_fuzzySuggester.patch, nonlatin_fuzzySuggester.patch, 
> run-suggest-benchmark.patch
>
>
> There is a limitation in the current FuzzySuggester implementation: it 
> computes edits in UTF-8 space instead of Unicode character (code point) 
> space. 
> This should be fixable: we'd need to fix TokenStreamToAutomaton to work in 
> Unicode character space, then fix FuzzySuggester to do the same steps that 
> FuzzyQuery does: do the LevN expansion in Unicode character space, then 
> convert that automaton to UTF-8, then intersect with the suggest FST.
> See the discussion here: 
> http://lucene.472066.n3.nabble.com/minFuzzyLength-in-FuzzySuggester-behaves-differently-for-English-and-Russian-td4067018.html#none
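
To make the difference between the two edit spaces concrete, here is a standalone sketch (hypothetical class and helper names, stdlib only; not FuzzySuggester code): the same one-letter substitution that costs one edit over Unicode code points costs two over UTF-8 bytes, because each Cyrillic letter is encoded as two bytes, which is why an edit budget computed in byte space penalizes non-Latin input.

```java
import java.nio.charset.StandardCharsets;

// Demonstrates why edit distance must be computed over code points, not
// UTF-8 bytes, for multi-byte scripts to get the same fuzzy budget as ASCII.
public class EditDistanceSpaces {

    // Plain dynamic-programming Levenshtein distance over int sequences.
    static int levenshtein(int[] a, int[] b) {
        int[][] d = new int[a.length + 1][b.length + 1];
        for (int i = 0; i <= a.length; i++) d[i][0] = i;
        for (int j = 0; j <= b.length; j++) d[0][j] = j;
        for (int i = 1; i <= a.length; i++) {
            for (int j = 1; j <= b.length; j++) {
                int sub = d[i - 1][j - 1] + (a[i - 1] == b[j - 1] ? 0 : 1);
                d[i][j] = Math.min(sub, Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1));
            }
        }
        return d[a.length][b.length];
    }

    static int[] utf8Bytes(String s) {
        byte[] raw = s.getBytes(StandardCharsets.UTF_8);
        int[] out = new int[raw.length];
        for (int i = 0; i < raw.length; i++) out[i] = raw[i];
        return out;
    }

    // Good enough for BMP-only strings like the examples below.
    static int[] chars(String s) {
        int[] out = new int[s.length()];
        for (int i = 0; i < s.length(); i++) out[i] = s.charAt(i);
        return out;
    }

    public static void main(String[] args) {
        String a = "мама", b = "мамы"; // differ in one Cyrillic letter
        // One edit in character (code point) space...
        System.out.println(levenshtein(chars(a), chars(b)));         // 1
        // ...but two edits in UTF-8 byte space, since each Cyrillic
        // letter occupies two bytes.
        System.out.println(levenshtein(utf8Bytes(a), utf8Bytes(b))); // 2
    }
}
```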




[jira] [Commented] (LUCENE-5013) ScandinavianFoldingFilterFactory and ScandinavianNormalizationFilterFactory

2013-07-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698952#comment-13698952
 ] 

ASF subversion and git services commented on LUCENE-5013:
-

Commit 1499392 from jan...@apache.org
[ https://svn.apache.org/r1499392 ]

LUCENE-5013: Revert bad commit

> ScandinavianFoldingFilterFactory and ScandinavianNormalizationFilterFactory
> ---
>
> Key: LUCENE-5013
> URL: https://issues.apache.org/jira/browse/LUCENE-5013
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Affects Versions: 4.3
>Reporter: Karl Wettin
>Assignee: Jan Høydahl
>Priority: Trivial
> Fix For: 5.0, 4.4
>
> Attachments: LUCENE-5013-2.txt, LUCENE-5013-3.txt, LUCENE-5013-4.txt, 
> LUCENE-5013-5.txt, LUCENE-5013-6.txt, LUCENE-5013.patch, LUCENE-5013.txt
>
>




[jira] [Commented] (LUCENE-5086) RamUsageEstimator causes AWT classes to be loaded by calling ManagementFactory#getPlatformMBeanServer

2013-07-03 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698956#comment-13698956
 ] 

Uwe Schindler commented on LUCENE-5086:
---

I updated MacOSX to 1.6.0_45 a few days ago on the Jenkins Server, but no 
popups :-)

Apple's MacOSX Java also starts the AWT thread when loading the Java scripting 
framework, because there is one scripting language in the SPI list (something 
like ÄppleFooBarScriptingEngine which also initializes AWT, also ignoring 
awt.headless=true)... It's all horrible.

> RamUsageEstimator causes AWT classes to be loaded by calling 
> ManagementFactory#getPlatformMBeanServer
> -
>
> Key: LUCENE-5086
> URL: https://issues.apache.org/jira/browse/LUCENE-5086
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Shay Banon
>Assignee: Dawid Weiss
>




[jira] [Commented] (LUCENE-5014) ANTLR Lucene query parser

2013-07-03 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698970#comment-13698970
 ] 

Erik Hatcher commented on LUCENE-5014:
--

Roman - I see "SOLR" mentions in the patch, but this is purely at the Lucene 
module level in this patch, right?  At the least those mentions should be 
removed, but does anything else need adjusting?  Is there Solr QParserPlugin 
stuff you're contributing as well?

> ANTLR Lucene query parser
> -
>
> Key: LUCENE-5014
> URL: https://issues.apache.org/jira/browse/LUCENE-5014
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/queryparser, modules/queryparser
>Affects Versions: 4.3
> Environment: all
>Reporter: Roman Chyla
>  Labels: antlr, query, queryparser
> Attachments: LUCENE-5014.txt, LUCENE-5014.txt, LUCENE-5014.txt, 
> LUCENE-5014.txt
>
>
> I would like to propose a new way of building query parsers for Lucene.
> Currently, most Lucene parsers are hard to extend because they are either 
> written in Java (i.e. the SOLR query parser, or edismax) or the parsing logic 
> is 'married' to the query building logic (i.e. the standard Lucene parser, 
> generated by JavaCC) - which makes any extension really hard.
> Few years back, Lucene got the contrib/modern query parser (later renamed to 
> 'flexible'), yet that parser didn't become a star (it must be very confusing 
> for many users). However, that parsing framework is very powerful! And it is 
> a real pity that there aren't more parsers already using it - because it 
> allows us to add/extend/change almost any aspect of the query parsing. 
> So, if we combine ANTLR + queryparser.flexible, we can get very powerful 
> framework for building almost any query language one can think of. And I hope 
> this extension can become useful.
> The details:
>  - every new query syntax is written in EBNF, it lives in separate files (and 
> can be tested/developed independently - using 'gunit')
>  - ANTLR parser generates parsing code (and it can generate parsers in 
> several languages, the main target is Java, but it can also do Python - which 
> may be interesting for pylucene)
>  - the parser generates AST (abstract syntax tree) which is consumed by a  
> 'pipeline' of processors, users can easily modify this pipeline to add a 
> desired functionality
>  - the new parser contains a few (very important) debugging functions; it can 
> print results of every stage of the build, generate AST's as graphical 
> charts; ant targets help to build/test/debug grammars
>  - I've tried to reuse the existing queryparser.flexible components as much 
> as possible, only adding new processors when necessary
> Assumptions about the grammar:
>  - every grammar must have one top parse rule called 'mainQ'
>  - parsers must generate AST (Abstract Syntax Tree)
> The structure of the AST is left open, there are components which make 
> assumptions about the shape of the AST (ie. that MODIFIER is parent of a a 
> FIELD) however users are free to choose/write different processors with 
> different assumptions about the AST shape.
> More documentation on how to use the parser can be seen here:
> http://29min.wordpress.com/category/antlrqueryparser/
> The parser was created more than a year ago and is used in production 
> (http://labs.adsabs.harvard.edu/adsabs/). Different dialects of query 
> languages (with proximity operators, functions, special logic, etc.) can be 
> seen here: 
> https://github.com/romanchyla/montysolr/tree/master/contrib/adsabs
> https://github.com/romanchyla/montysolr/tree/master/contrib/invenio
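The "pipeline of processors" idea described above can be sketched roughly as follows. The Node type and the example processor here are hypothetical stand-ins for illustration, not classes from queryparser.flexible or the attached patch:

```java
import java.util.List;
import java.util.function.UnaryOperator;

// Toy model of an AST pipeline: the parser emits a tree, and a
// user-configurable list of processors transforms it in turn.
public final class AstPipelineSketch {
    static final class Node {
        final String label;
        final List<Node> children;
        Node(String label, List<Node> children) {
            this.label = label;
            this.children = children;
        }
    }

    static Node runPipeline(Node ast, List<UnaryOperator<Node>> processors) {
        for (UnaryOperator<Node> p : processors) {
            ast = p.apply(ast);  // each processor may rewrite the whole tree
        }
        return ast;
    }

    // Example processor: rewrite the root label, leaving children untouched.
    static UnaryOperator<Node> relabelRoot(String newLabel) {
        return n -> new Node(newLabel, n.children);
    }
}
```

Swapping, adding, or removing entries in the processor list is what makes this style of parser easy to extend without touching the grammar.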

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5013) ScandinavianFoldingFilterFactory and ScandinavianNormalizationFilterFactory

2013-07-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698979#comment-13698979
 ] 

ASF subversion and git services commented on LUCENE-5013:
-

Commit 1499409 from jan...@apache.org
[ https://svn.apache.org/r1499409 ]

LUCENE-5013: ScandinavianFoldingFilterFactory and 
ScandinavianNormalizationFilterFactory

> ScandinavianFoldingFilterFactory and ScandinavianNormalizationFilterFactory
> ---
>
> Key: LUCENE-5013
> URL: https://issues.apache.org/jira/browse/LUCENE-5013
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Affects Versions: 4.3
>Reporter: Karl Wettin
>Assignee: Jan Høydahl
>Priority: Trivial
> Fix For: 5.0, 4.4
>
> Attachments: LUCENE-5013-2.txt, LUCENE-5013-3.txt, LUCENE-5013-4.txt, 
> LUCENE-5013-5.txt, LUCENE-5013-6.txt, LUCENE-5013.patch, LUCENE-5013.txt
>
>
> This filter is an augmentation of output from ASCIIFoldingFilter: it 
> discriminates against the double vowels aa, ae, ao, oe and oo, leaving just 
> the first one.
> blåbærsyltetøj == blåbärsyltetöj == blaabaarsyltetoej == blabarsyltetoj
> räksmörgås == ræksmørgås == ræksmörgaos == raeksmoergaas == raksmorgas
> Caveats:
> Since this filters on top of ASCIIFoldingFilter, äöåøæ has already been 
> folded down to aoaoae by the time it reaches this filter, which causes 
> effects such as:
> bøen -> boen -> bon
> åene -> aene -> ane
> I find this a trivial problem compared to not finding anything at all.
> Background:
> Swedish åäö are in fact the same letters as Norwegian and Danish åæø and thus 
> interchangeable when used between these languages. They are, however, folded 
> differently when people type them on a keyboard lacking these characters, and 
> ASCIIFoldingFilter handles ä and æ differently.
> When Swedish people lack umlauted characters on the keyboard, they 
> consistently type a, a, o instead of å, ä, ö. Foreigners also tend to use a, 
> a, o.
> In Norway people tend to type aa, ae and oe instead of å, æ and ø. Some use 
> a, a, o. I've also seen oo, ao, etc., and permutations. Not sure about 
> Denmark, but the pattern is probably the same.
> This filter solves that problem, but might also cause new ones.
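The folding described above can be illustrated with a toy string transform. This is a plain-string sketch only, not the actual TokenFilter implementation, and it assumes the input has already been ASCII-folded:

```java
// Toy sketch: collapse the transliterated double vowels aa, ae, ao, oe, oo
// to their first letter. Replacement order matters: aa is collapsed first so
// that e.g. "gaas" becomes "gas" rather than matching "ae" later.
public final class ScandinavianFoldSketch {
    static String fold(String s) {
        return s.replace("aa", "a")
                .replace("ae", "a")
                .replace("ao", "a")
                .replace("oe", "o")
                .replace("oo", "o");
    }
}
```

Applied to the examples in the description, "blaabaarsyltetoej" collapses to "blabarsyltetoj" and "raeksmoergaas" to "raksmorgas", matching the equivalence chains above.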




[jira] [Commented] (LUCENE-5014) ANTLR Lucene query parser

2013-07-03 Thread Roman Chyla (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698981#comment-13698981
 ] 

Roman Chyla commented on LUCENE-5014:
-

Hi Erik, I'll add a Solr qparser plugin too. Thanks for reminding me. 

> ANTLR Lucene query parser
> -
>
> Key: LUCENE-5014
> URL: https://issues.apache.org/jira/browse/LUCENE-5014
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/queryparser, modules/queryparser
>Affects Versions: 4.3
> Environment: all
>Reporter: Roman Chyla
>  Labels: antlr, query, queryparser
> Attachments: LUCENE-5014.txt, LUCENE-5014.txt, LUCENE-5014.txt, 
> LUCENE-5014.txt
>
>
> I would like to propose a new way of building query parsers for Lucene. 
> Currently, most Lucene parsers are hard to extend because they are either 
> written in Java (e.g. the Solr query parser, or edismax) or the parsing logic 
> is 'married' to the query-building logic (e.g. the standard Lucene parser, 
> generated by JavaCC) - which makes any extension really hard.
> A few years back, Lucene got the contrib/modern query parser (later renamed 
> 'flexible'), yet that parser never really caught on (it is probably confusing 
> for many users). However, the parsing framework itself is very powerful, and 
> it is a real pity that more parsers don't already use it, because it lets us 
> add, extend, or change almost any aspect of query parsing. 
> So, if we combine ANTLR with queryparser.flexible, we get a very powerful 
> framework for building almost any query language one can think of, and I hope 
> this extension can become useful.
> The details:
>  - every new query syntax is written in EBNF, it lives in separate files (and 
> can be tested/developed independently - using 'gunit')
>  - ANTLR parser generates parsing code (and it can generate parsers in 
> several languages, the main target is Java, but it can also do Python - which 
> may be interesting for pylucene)
>  - the parser generates an AST (abstract syntax tree) which is consumed by a 
> 'pipeline' of processors; users can easily modify this pipeline to add 
> desired functionality
>  - the new parser contains a few (very important) debugging functions; it can 
> print the results of every stage of the build and generate ASTs as graphical 
> charts; ant targets help to build/test/debug grammars
>  - I've tried to reuse the existing queryparser.flexible components as much 
> as possible, only adding new processors when necessary
> Assumptions about the grammar:
>  - every grammar must have one top parse rule called 'mainQ'
>  - parsers must generate AST (Abstract Syntax Tree)
> The structure of the AST is left open. There are components which make 
> assumptions about the shape of the AST (e.g. that a MODIFIER is the parent of 
> a FIELD); however, users are free to choose or write different processors 
> with different assumptions about the AST shape.
> More documentation on how to use the parser can be seen here:
> http://29min.wordpress.com/category/antlrqueryparser/
> The parser was created more than a year ago and is used in production 
> (http://labs.adsabs.harvard.edu/adsabs/). Different dialects of query 
> languages (with proximity operators, functions, special logic, etc.) can be 
> seen here: 
> https://github.com/romanchyla/montysolr/tree/master/contrib/adsabs
> https://github.com/romanchyla/montysolr/tree/master/contrib/invenio




[jira] [Assigned] (SOLR-4992) Solr queries don't propagate Java OutOfMemoryError back to the JVM

2013-07-03 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reassigned SOLR-4992:
-

Assignee: Mark Miller

> Solr queries don't propagate Java OutOfMemoryError back to the JVM
> --
>
> Key: SOLR-4992
> URL: https://issues.apache.org/jira/browse/SOLR-4992
> Project: Solr
>  Issue Type: Bug
>  Components: search, SolrCloud, update
>Affects Versions: 4.3.1
>Reporter: Daniel Collins
>Assignee: Mark Miller
>
> Solr (specifically SolrDispatchFilter.doFilter(), but there might be other 
> places) handles generic java.lang.Throwable errors, but that "hides" 
> OutOfMemoryError scenarios.
> IndexWriter does this too, but it has a specific exclusion for OOM scenarios 
> and handles them explicitly (it stops committing and just logs to the 
> transaction log).
> {noformat}
> Example Stack trace:
> 2013-06-26 19:31:33,801 [qtp632640515-62] ERROR
> solr.servlet.SolrDispatchFilter Q:22 -
> null:java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap
> space
> at 
> org.apache.solr.servlet.SolrDispatchFilter.sendError(SolrDispatchFilter.java:670)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:380)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:155)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1423)
> at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:450)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:138)
> at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:564)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:213)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1083)
> at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:379)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:175)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1017)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:136)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:258)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:109)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:445)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:260)
> at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:225)
> at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.run(AbstractConnection.java:358)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:596)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:527)
> at java.lang.Thread.run(Thread.java:722)
> Caused by: java.lang.OutOfMemoryError: Java heap space
> {noformat}
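A hedged sketch of the kind of fix this issue asks for: catch Throwable for logging, but rethrow Errors (such as OutOfMemoryError) so they still reach the JVM and any -XX:OnOutOfMemoryError hook can fire. The names below are illustrative, not Solr's actual SolrDispatchFilter code:

```java
// Sketch: never swallow Errors when catching Throwable at a request boundary.
public final class ErrorPropagationSketch {
    interface Request { void process(); }

    static void handle(Request req) {
        try {
            req.process();
        } catch (Error e) {
            throw e;  // let OutOfMemoryError (and other Errors) propagate
        } catch (Throwable t) {
            // ordinary exceptions can be logged and turned into a 500 response
            System.err.println("request failed: " + t);
        }
    }
}
```

With this pattern, a RuntimeException is handled locally, while an OutOfMemoryError escapes to the container and the JVM.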




[jira] [Updated] (SOLR-4992) Solr queries don't propagate Java OutOfMemoryError back to the JVM

2013-07-03 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-4992:
--

Fix Version/s: 4.4
   5.0

> Solr queries don't propagate Java OutOfMemoryError back to the JVM
> --
>
> Key: SOLR-4992
> URL: https://issues.apache.org/jira/browse/SOLR-4992
> Project: Solr
>  Issue Type: Bug
>  Components: search, SolrCloud, update
>Affects Versions: 4.3.1
>Reporter: Daniel Collins
>Assignee: Mark Miller
> Fix For: 5.0, 4.4
>
>
> Solr (specifically SolrDispatchFilter.doFilter(), but there might be other 
> places) handles generic java.lang.Throwable errors, but that "hides" 
> OutOfMemoryError scenarios.
> IndexWriter does this too, but it has a specific exclusion for OOM scenarios 
> and handles them explicitly (it stops committing and just logs to the 
> transaction log).
> {noformat}
> Example Stack trace:
> 2013-06-26 19:31:33,801 [qtp632640515-62] ERROR
> solr.servlet.SolrDispatchFilter Q:22 -
> null:java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap
> space
> at 
> org.apache.solr.servlet.SolrDispatchFilter.sendError(SolrDispatchFilter.java:670)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:380)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:155)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1423)
> at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:450)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:138)
> at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:564)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:213)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1083)
> at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:379)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:175)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1017)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:136)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:258)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:109)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:445)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:260)
> at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:225)
> at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.run(AbstractConnection.java:358)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:596)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:527)
> at java.lang.Thread.run(Thread.java:722)
> Caused by: java.lang.OutOfMemoryError: Java heap space
> {noformat}




[jira] [Commented] (LUCENE-5014) ANTLR Lucene query parser

2013-07-03 Thread Roman Chyla (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698985#comment-13698985
 ] 

Roman Chyla commented on LUCENE-5014:
-

Will it be OK to include the Solr parts in this ticket? Besides the JIRA 
name, that seems the best option to me.

> ANTLR Lucene query parser
> -
>
> Key: LUCENE-5014
> URL: https://issues.apache.org/jira/browse/LUCENE-5014
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/queryparser, modules/queryparser
>Affects Versions: 4.3
> Environment: all
>Reporter: Roman Chyla
>  Labels: antlr, query, queryparser
> Attachments: LUCENE-5014.txt, LUCENE-5014.txt, LUCENE-5014.txt, 
> LUCENE-5014.txt
>
>
> I would like to propose a new way of building query parsers for Lucene. 
> Currently, most Lucene parsers are hard to extend because they are either 
> written in Java (e.g. the Solr query parser, or edismax) or the parsing logic 
> is 'married' to the query-building logic (e.g. the standard Lucene parser, 
> generated by JavaCC) - which makes any extension really hard.
> A few years back, Lucene got the contrib/modern query parser (later renamed 
> 'flexible'), yet that parser never really caught on (it is probably confusing 
> for many users). However, the parsing framework itself is very powerful, and 
> it is a real pity that more parsers don't already use it, because it lets us 
> add, extend, or change almost any aspect of query parsing. 
> So, if we combine ANTLR with queryparser.flexible, we get a very powerful 
> framework for building almost any query language one can think of, and I hope 
> this extension can become useful.
> The details:
>  - every new query syntax is written in EBNF, it lives in separate files (and 
> can be tested/developed independently - using 'gunit')
>  - ANTLR parser generates parsing code (and it can generate parsers in 
> several languages, the main target is Java, but it can also do Python - which 
> may be interesting for pylucene)
>  - the parser generates an AST (abstract syntax tree) which is consumed by a 
> 'pipeline' of processors; users can easily modify this pipeline to add 
> desired functionality
>  - the new parser contains a few (very important) debugging functions; it can 
> print the results of every stage of the build and generate ASTs as graphical 
> charts; ant targets help to build/test/debug grammars
>  - I've tried to reuse the existing queryparser.flexible components as much 
> as possible, only adding new processors when necessary
> Assumptions about the grammar:
>  - every grammar must have one top parse rule called 'mainQ'
>  - parsers must generate AST (Abstract Syntax Tree)
> The structure of the AST is left open. There are components which make 
> assumptions about the shape of the AST (e.g. that a MODIFIER is the parent of 
> a FIELD); however, users are free to choose or write different processors 
> with different assumptions about the AST shape.
> More documentation on how to use the parser can be seen here:
> http://29min.wordpress.com/category/antlrqueryparser/
> The parser was created more than a year ago and is used in production 
> (http://labs.adsabs.harvard.edu/adsabs/). Different dialects of query 
> languages (with proximity operators, functions, special logic, etc.) can be 
> seen here: 
> https://github.com/romanchyla/montysolr/tree/master/contrib/adsabs
> https://github.com/romanchyla/montysolr/tree/master/contrib/invenio




[jira] [Commented] (LUCENE-5014) ANTLR Lucene query parser

2013-07-03 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13699005#comment-13699005
 ] 

Erik Hatcher commented on LUCENE-5014:
--

bq. will it be OK to include the solr parts in this ticket?

Seems the best way to do it to me as well.  It's probably not more than a few 
lines of code as a thin shim factory.

> ANTLR Lucene query parser
> -
>
> Key: LUCENE-5014
> URL: https://issues.apache.org/jira/browse/LUCENE-5014
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/queryparser, modules/queryparser
>Affects Versions: 4.3
> Environment: all
>Reporter: Roman Chyla
>  Labels: antlr, query, queryparser
> Attachments: LUCENE-5014.txt, LUCENE-5014.txt, LUCENE-5014.txt, 
> LUCENE-5014.txt
>
>
> I would like to propose a new way of building query parsers for Lucene. 
> Currently, most Lucene parsers are hard to extend because they are either 
> written in Java (e.g. the Solr query parser, or edismax) or the parsing logic 
> is 'married' to the query-building logic (e.g. the standard Lucene parser, 
> generated by JavaCC) - which makes any extension really hard.
> A few years back, Lucene got the contrib/modern query parser (later renamed 
> 'flexible'), yet that parser never really caught on (it is probably confusing 
> for many users). However, the parsing framework itself is very powerful, and 
> it is a real pity that more parsers don't already use it, because it lets us 
> add, extend, or change almost any aspect of query parsing. 
> So, if we combine ANTLR with queryparser.flexible, we get a very powerful 
> framework for building almost any query language one can think of, and I hope 
> this extension can become useful.
> The details:
>  - every new query syntax is written in EBNF, it lives in separate files (and 
> can be tested/developed independently - using 'gunit')
>  - ANTLR parser generates parsing code (and it can generate parsers in 
> several languages, the main target is Java, but it can also do Python - which 
> may be interesting for pylucene)
>  - the parser generates an AST (abstract syntax tree) which is consumed by a 
> 'pipeline' of processors; users can easily modify this pipeline to add 
> desired functionality
>  - the new parser contains a few (very important) debugging functions; it can 
> print the results of every stage of the build and generate ASTs as graphical 
> charts; ant targets help to build/test/debug grammars
>  - I've tried to reuse the existing queryparser.flexible components as much 
> as possible, only adding new processors when necessary
> Assumptions about the grammar:
>  - every grammar must have one top parse rule called 'mainQ'
>  - parsers must generate AST (Abstract Syntax Tree)
> The structure of the AST is left open. There are components which make 
> assumptions about the shape of the AST (e.g. that a MODIFIER is the parent of 
> a FIELD); however, users are free to choose or write different processors 
> with different assumptions about the AST shape.
> More documentation on how to use the parser can be seen here:
> http://29min.wordpress.com/category/antlrqueryparser/
> The parser was created more than a year ago and is used in production 
> (http://labs.adsabs.harvard.edu/adsabs/). Different dialects of query 
> languages (with proximity operators, functions, special logic, etc.) can be 
> seen here: 
> https://github.com/romanchyla/montysolr/tree/master/contrib/adsabs
> https://github.com/romanchyla/montysolr/tree/master/contrib/invenio




[jira] [Updated] (LUCENE-5086) RamUsageEstimator causes AWT classes to be loaded by calling ManagementFactory#getPlatformMBeanServer

2013-07-03 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5086:
--

Issue Type: Improvement  (was: Bug)

> RamUsageEstimator causes AWT classes to be loaded by calling 
> ManagementFactory#getPlatformMBeanServer
> -
>
> Key: LUCENE-5086
> URL: https://issues.apache.org/jira/browse/LUCENE-5086
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Shay Banon
>Assignee: Dawid Weiss
>
> Yea, that type of day and that type of title :).
> Since the last update of Java 6 on OS X, I started to see an annoying icon 
> pop up in the dock whenever running elasticsearch. By default, all of our 
> scripts add the headless AWT flag, so people will probably not encounter it, 
> but it was strange to see it when I hadn't before.
> I started to dig around and saw that when RamUsageEstimator was being 
> loaded, it was causing AWT classes to be loaded. Further investigation showed 
> that, for some reason, calling ManagementFactory#getPlatformMBeanServer with 
> the new Java version now causes AWT classes to be loaded (at least on the 
> Mac; I haven't tested other platforms yet). 
> There are several ways to try to solve it - for example, by identifying the 
> bug in the JVM itself - but I think there should be a fix in Lucene itself, 
> specifically since there is no need to call #getPlatformMBeanServer to get 
> the HotSpot diagnostics bean (it's a heavy call...).
> Here is a simple method that gets the HotSpot MXBean without using the 
> #getPlatformMBeanServer method, thus not causing it to load all those nasty 
> AWT classes:
> {code}
> Object getHotSpotMXBean() {
>     try {
>         // Java 6
>         Class<?> sunMF = Class.forName("sun.management.ManagementFactory");
>         return sunMF.getMethod("getDiagnosticMXBean").invoke(null);
>     } catch (Throwable t) {
>         // ignore
>     }
>     // potentially Java 7
>     try {
>         return ManagementFactory.class.getMethod("getPlatformMXBean", Class.class)
>             .invoke(null, Class.forName("com.sun.management.HotSpotDiagnosticMXBean"));
>     } catch (Throwable t) {
>         // ignore
>     }
>     return null;
> }
> {code}




[jira] [Commented] (LUCENE-5086) RamUsageEstimator causes AWT classes to be loaded by calling ManagementFactory#getPlatformMBeanServer

2013-07-03 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13699014#comment-13699014
 ] 

Uwe Schindler commented on LUCENE-5086:
---

We can use the above code fragment to load the MXBean, provided it falls back 
to the official code when the direct instantiation does not work. And it 
should never, ever catch Throwable: please specify the exact exceptions and 
handle them accordingly. On trunk you can use multi-catch, since Lucene trunk 
is Java 7.

> RamUsageEstimator causes AWT classes to be loaded by calling 
> ManagementFactory#getPlatformMBeanServer
> -
>
> Key: LUCENE-5086
> URL: https://issues.apache.org/jira/browse/LUCENE-5086
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Shay Banon
>Assignee: Dawid Weiss
>
> Yea, that type of day and that type of title :).
> Since the last update of Java 6 on OS X, I started to see an annoying icon 
> pop up in the dock whenever running elasticsearch. By default, all of our 
> scripts add the headless AWT flag, so people will probably not encounter it, 
> but it was strange to see it when I hadn't before.
> I started to dig around and saw that when RamUsageEstimator was being 
> loaded, it was causing AWT classes to be loaded. Further investigation showed 
> that, for some reason, calling ManagementFactory#getPlatformMBeanServer with 
> the new Java version now causes AWT classes to be loaded (at least on the 
> Mac; I haven't tested other platforms yet). 
> There are several ways to try to solve it - for example, by identifying the 
> bug in the JVM itself - but I think there should be a fix in Lucene itself, 
> specifically since there is no need to call #getPlatformMBeanServer to get 
> the HotSpot diagnostics bean (it's a heavy call...).
> Here is a simple method that gets the HotSpot MXBean without using the 
> #getPlatformMBeanServer method, thus not causing it to load all those nasty 
> AWT classes:
> {code}
> Object getHotSpotMXBean() {
>     try {
>         // Java 6
>         Class<?> sunMF = Class.forName("sun.management.ManagementFactory");
>         return sunMF.getMethod("getDiagnosticMXBean").invoke(null);
>     } catch (Throwable t) {
>         // ignore
>     }
>     // potentially Java 7
>     try {
>         return ManagementFactory.class.getMethod("getPlatformMXBean", Class.class)
>             .invoke(null, Class.forName("com.sun.management.HotSpotDiagnosticMXBean"));
>     } catch (Throwable t) {
>         // ignore
>     }
>     return null;
> }
> {code}




[jira] [Updated] (SOLR-4991) Register QParserPlugin's as SolrInfoMBean's, allowing them to be visible externally like other plugins

2013-07-03 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-4991:
---

Attachment: SOLR-4991.patch

Here's an updated patch that adds numRequests to QParserPlugin, returned in 
the mbean stats (this is congruent with the way highlighting components work, 
so I figured why not qparsers too), and increments numRequests in every 
current QParserPlugin.

Also added an icon for the plugin UI.

> Register QParserPlugin's as SolrInfoMBean's, allowing them to be visible 
> externally like other plugins
> --
>
> Key: SOLR-4991
> URL: https://issues.apache.org/jira/browse/SOLR-4991
> Project: Solr
>  Issue Type: Improvement
>  Components: query parsers
>Reporter: Erik Hatcher
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-4991.patch, SOLR-4991.patch
>
>
> QParserPlugins currently cannot be seen externally as official plugins*. 
> Let's register them as SolrInfoMBeans so they can be seen remotely.
> * Yes, solrconfig.xml itself could be retrieved and parsed, but many other 
> similar plugins are available as MBeans, and query parsers should be too.




[jira] [Commented] (SOLR-4991) Register QParserPlugin's as SolrInfoMBean's, allowing them to be visible externally like other plugins

2013-07-03 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13699033#comment-13699033
 ] 

Erik Hatcher commented on SOLR-4991:


Any objections? With or without the numRequests addition? This should all be 
backwards compatible, but if anyone sees a problem, let me know (custom query 
parsers will report numRequests=0 until they upgrade Solr and add 
numRequests++ in their createParser() method).
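The pattern described above can be illustrated with a short, self-contained sketch. The types here are simplified stand-ins, not Solr's real QParserPlugin or SolrInfoMBean APIs:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch: bump a request counter in createParser() and expose it as a stat.
public class CountingParserPluginSketch {
    private final AtomicLong numRequests = new AtomicLong();

    public Object createParser(String qstr) {
        numRequests.incrementAndGet();  // the one-line addition custom plugins need
        return qstr;                    // stand-in for the real QParser
    }

    // what the mbean stats endpoint would report
    public long getNumRequests() {
        return numRequests.get();
    }
}
```

Using AtomicLong keeps the counter correct under concurrent requests without any locking in the hot path.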

> Register QParserPlugin's as SolrInfoMBean's, allowing them to be visible 
> externally like other plugins
> --
>
> Key: SOLR-4991
> URL: https://issues.apache.org/jira/browse/SOLR-4991
> Project: Solr
>  Issue Type: Improvement
>  Components: query parsers
>Reporter: Erik Hatcher
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-4991.patch, SOLR-4991.patch
>
>
> QParserPlugins currently cannot be seen externally as official plugins*.  
> Let's register them as SolrInfoMBeans so they can be seen remotely.
> * Yes, solrconfig.xml itself could be retrieved and parsed, but many other 
> similar plugins are available as MBeans and so should be query parsers.




[jira] [Commented] (SOLR-4943) Add a new info admin handler.

2013-07-03 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13699041#comment-13699041
 ] 

Mark Miller commented on SOLR-4943:
---

bq. Take a look at this

That's fine with me. I'll incorporate it into my patch.

> Add a new info admin handler.
> -
>
> Key: SOLR-4943
> URL: https://issues.apache.org/jira/browse/SOLR-4943
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-4943-2.patch, SOLR-4943-3__hoss_variant.patch, 
> SOLR-4943-3.patch, SOLR-4943.patch
>
>
> Currently, you have to specify a core to get system information for a variety 
> of request handlers - properties, logging, thread dump, system, etc.
> These should be available at a system location and not core specific location.




[jira] [Commented] (SOLR-4943) Add a new info admin handler.

2013-07-03 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13699050#comment-13699050
 ] 

Mark Miller commented on SOLR-4943:
---

Hmm, on a quick look, it looks like you're missing the reason things are as 
they are: you cannot get the core container from the request core in this case, 
because there is no core. Your patch will just cause NPEs when you hit the 
system-wide URLs.

> Add a new info admin handler.
> -
>
> Key: SOLR-4943
> URL: https://issues.apache.org/jira/browse/SOLR-4943
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-4943-2.patch, SOLR-4943-3__hoss_variant.patch, 
> SOLR-4943-3.patch, SOLR-4943.patch
>
>
> Currently, you have to specify a core to get system information for a variety 
> of request handlers - properties, logging, thread dump, system, etc.
> These should be available at a system location and not core specific location.




PropagateServer Implementation for Solr

2013-07-03 Thread Furkan KAMACI
Hi;

I will open two issues in Jira, and I want to write down my thoughts here.
Currently, Solr clients interact with only one Solr node per request. I think
there should be an implementation that propagates requests to multiple Solr
nodes. For example, when Solr is run as SolrCloud, a LukeRequest should be
sent to one node of each shard. The first patch will be about implementing a
PropagateServer for Solr.

The second issue is related to the first one. Let's assume that you are using
Solr as SolrCloud with more than one shard, say 20 docs on shard_1 and 15
docs on shard_2. When you make a LukeRequest through CloudSolrServer, it uses
LBHttpSolrServer internally and sends the request to just one Solr node (via
HttpSolrServer) in round-robin fashion. So the first request may return 20
docs, and the same request sent again may return 15 docs. Using a
PropagateServer inside CloudSolrServer will fix that bug.

I will make initial patches for them and will change/add code after getting
feedback from the community (i.e. the first patch will not make
multi-threaded requests in PropagateServer; I just want to get the
community's feedback)
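The fan-out idea above can be sketched in plain Java. This is only an
illustration of the propagate-and-merge behavior, not SolrJ code; the class
and method names (PropagateSketch, mergedDocCount) are hypothetical.

```java
import java.util.*;

/**
 * Hypothetical sketch (not the actual SolrJ API): send the request to one
 * node per shard and merge the per-shard results, instead of letting a
 * round-robin load balancer pick a single node for the whole cluster.
 */
public class PropagateSketch {

    /** shard name -> doc counts reported by that shard's replicas */
    public static int mergedDocCount(Map<String, List<Integer>> shards) {
        int total = 0;
        for (List<Integer> replicaCounts : shards.values()) {
            // Query exactly one node per shard; any replica of a shard
            // should report the same count once the shard is in sync.
            total += replicaCounts.get(0);
        }
        return total;
    }

    public static void main(String[] args) {
        Map<String, List<Integer>> cluster = new LinkedHashMap<>();
        cluster.put("shard_1", Arrays.asList(20, 20));
        cluster.put("shard_2", Arrays.asList(15, 15));
        // A round-robin request to a single node would report 20 or 15
        // depending on which node answered; fanning out and merging
        // always yields the cluster-wide 35.
        System.out.println(mergedDocCount(cluster)); // prints 35
    }
}
```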

Thanks;
Furkan KAMACI


[jira] [Updated] (SOLR-4816) Add document routing to CloudSolrServer

2013-07-03 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-4816:
-

Attachment: SOLR-4816.patch

Added tests for UpdateRequestExt document routing and deleteById routing.

Added test for UpdateRequest deleteById routing

> Add document routing to CloudSolrServer
> ---
>
> Key: SOLR-4816
> URL: https://issues.apache.org/jira/browse/SOLR-4816
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.3
>Reporter: Joel Bernstein
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
> SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
> SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
> SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
> SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
> SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
> SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816-sriesenberg.patch
>
>
> This issue adds the following enhancements to CloudSolrServer's update logic:
> 1) Document routing: Updates are routed directly to the correct shard leader 
> eliminating document routing at the server.
> 2) Optional parallel update execution: Updates for each shard are executed in 
> a separate thread so parallel indexing can occur across the cluster.
> These enhancements should allow for near linear scalability on indexing 
> throughput.
> Usage:
> CloudSolrServer cloudClient = new CloudSolrServer(zkAddress);
> cloudClient.setParallelUpdates(true); 
> SolrInputDocument doc1 = new SolrInputDocument();
> doc1.addField(id, "0");
> doc1.addField("a_t", "hello1");
> SolrInputDocument doc2 = new SolrInputDocument();
> doc2.addField(id, "2");
> doc2.addField("a_t", "hello2");
> UpdateRequest request = new UpdateRequest();
> request.add(doc1);
> request.add(doc2);
> request.setAction(AbstractUpdateRequest.ACTION.OPTIMIZE, false, false);
> NamedList response = cloudClient.request(request); // Returns a backwards 
> compatible condensed response.
> //To get more detailed response down cast to RouteResponse:
> CloudSolrServer.RouteResponse rr = (CloudSolrServer.RouteResponse)response;




[jira] [Created] (SOLR-4995) Implementing a Server Capable of Propagating Requests

2013-07-03 Thread Furkan KAMACI (JIRA)
Furkan KAMACI created SOLR-4995:
---

 Summary: Implementing a Server Capable of Propagating Requests
 Key: SOLR-4995
 URL: https://issues.apache.org/jira/browse/SOLR-4995
 Project: Solr
  Issue Type: New Feature
Reporter: Furkan KAMACI


Currently, Solr clients interact with only one Solr node per request. There 
should be an implementation that propagates requests to multiple Solr nodes. 
For example, when Solr is run as SolrCloud, a LukeRequest should be sent to 
one node of each shard.




[jira] [Created] (SOLR-4996) CloudSolrServer Does Not Respect Propagate Requests

2013-07-03 Thread Furkan KAMACI (JIRA)
Furkan KAMACI created SOLR-4996:
---

 Summary: CloudSolrServer Does Not Respect Propagate Requests
 Key: SOLR-4996
 URL: https://issues.apache.org/jira/browse/SOLR-4996
 Project: Solr
  Issue Type: Bug
Reporter: Furkan KAMACI


When you make a request such as a LukeRequest through CloudSolrServer, it uses 
LBHttpSolrServer internally and sends the request to just one Solr node (via 
HttpSolrServer) in round-robin fashion. So you may get different results for 
the same request at different times even though nothing has changed. Using a 
PropagateServer inside CloudSolrServer will fix that bug.




[jira] [Resolved] (LUCENE-5013) ScandinavianFoldingFilterFactory and ScandinavianNormalizationFilterFactory

2013-07-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/LUCENE-5013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved LUCENE-5013.
-

Resolution: Fixed

> ScandinavianFoldingFilterFactory and ScandinavianNormalizationFilterFactory
> ---
>
> Key: LUCENE-5013
> URL: https://issues.apache.org/jira/browse/LUCENE-5013
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Affects Versions: 4.3
>Reporter: Karl Wettin
>Assignee: Jan Høydahl
>Priority: Trivial
> Fix For: 5.0, 4.4
>
> Attachments: LUCENE-5013-2.txt, LUCENE-5013-3.txt, LUCENE-5013-4.txt, 
> LUCENE-5013-5.txt, LUCENE-5013-6.txt, LUCENE-5013.patch, LUCENE-5013.txt
>
>
> This filter is an augmentation of the output from ASCIIFoldingFilter:
> it discriminates against the double vowels aa, ae, ao, oe and oo, leaving 
> just the first one.
> blåbærsyltetøj == blåbärsyltetöj == blaabaarsyltetoej == blabarsyltetoj
> räksmörgås == ræksmørgås == ræksmörgaos == raeksmoergaas == raksmorgas
> Caveats:
> Since this filters on top of ASCIIFoldingFilter, äöåøæ has already been 
> folded down to aoaoae by the time this filter runs, which causes effects 
> such as:
> bøen -> boen -> bon
> åene -> aene -> ane
> I find this to be a trivial problem compared to not finding anything at all.
> Background:
> Swedish åäö are in fact the same letters as Norwegian and Danish åæø and 
> thus interchangeable when used between these languages. They are, however, 
> folded differently when people type them on a keyboard lacking these 
> characters, and ASCIIFoldingFilter handles ä and æ differently.
> When Swedish speakers lack umlauted characters on their keyboard, they 
> consistently type a, a, o instead of å, ä, ö. Foreigners also tend to use 
> a, a, o.
> In Norway people tend to type aa, ae and oe instead of å, æ and ø. Some use 
> a, a, o. I've also seen oo, ao, etc., and permutations. Not sure about 
> Denmark, but the pattern is probably the same.
> This filter solves that problem, but might also cause new ones.




[jira] [Commented] (LUCENE-5013) ScandinavianFoldingFilterFactory and ScandinavianNormalizationFilterFactory

2013-07-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13699062#comment-13699062
 ] 

ASF subversion and git services commented on LUCENE-5013:
-

Commit 1499437 from jan...@apache.org
[ https://svn.apache.org/r1499437 ]

LUCENE-5013: ScandinavianFoldingFilterFactory and 
ScandinavianNormalizationFilterFactory (backport)

> ScandinavianFoldingFilterFactory and ScandinavianNormalizationFilterFactory
> ---
>
> Key: LUCENE-5013
> URL: https://issues.apache.org/jira/browse/LUCENE-5013
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Affects Versions: 4.3
>Reporter: Karl Wettin
>Assignee: Jan Høydahl
>Priority: Trivial
> Fix For: 5.0, 4.4
>
> Attachments: LUCENE-5013-2.txt, LUCENE-5013-3.txt, LUCENE-5013-4.txt, 
> LUCENE-5013-5.txt, LUCENE-5013-6.txt, LUCENE-5013.patch, LUCENE-5013.txt
>
>
> This filter is an augmentation of the output from ASCIIFoldingFilter:
> it discriminates against the double vowels aa, ae, ao, oe and oo, leaving 
> just the first one.
> blåbærsyltetøj == blåbärsyltetöj == blaabaarsyltetoej == blabarsyltetoj
> räksmörgås == ræksmørgås == ræksmörgaos == raeksmoergaas == raksmorgas
> Caveats:
> Since this filters on top of ASCIIFoldingFilter, äöåøæ has already been 
> folded down to aoaoae by the time this filter runs, which causes effects 
> such as:
> bøen -> boen -> bon
> åene -> aene -> ane
> I find this to be a trivial problem compared to not finding anything at all.
> Background:
> Swedish åäö are in fact the same letters as Norwegian and Danish åæø and 
> thus interchangeable when used between these languages. They are, however, 
> folded differently when people type them on a keyboard lacking these 
> characters, and ASCIIFoldingFilter handles ä and æ differently.
> When Swedish speakers lack umlauted characters on their keyboard, they 
> consistently type a, a, o instead of å, ä, ö. Foreigners also tend to use 
> a, a, o.
> In Norway people tend to type aa, ae and oe instead of å, æ and ø. Some use 
> a, a, o. I've also seen oo, ao, etc., and permutations. Not sure about 
> Denmark, but the pattern is probably the same.
> This filter solves that problem, but might also cause new ones.




[jira] [Updated] (SOLR-4995) Implementing a Server Capable of Propagating Requests

2013-07-03 Thread Furkan KAMACI (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Furkan KAMACI updated SOLR-4995:


Attachment: SOLR-4995.patch

> Implementing a Server Capable of Propagating Requests
> -
>
> Key: SOLR-4995
> URL: https://issues.apache.org/jira/browse/SOLR-4995
> Project: Solr
>  Issue Type: New Feature
>Reporter: Furkan KAMACI
> Attachments: SOLR-4995.patch
>
>
> Currently, Solr clients interact with only one Solr node per request. There 
> should be an implementation that propagates requests to multiple Solr nodes. 
> For example, when Solr is run as SolrCloud, a LukeRequest should be sent to 
> one node of each shard.




[jira] [Updated] (SOLR-4693) Create a collections API to delete/cleanup a Slice

2013-07-03 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-4693:


Attachment: SOLR-4693.patch

* I changed Overseer action and methods to deleteShard instead of deleteSlice, 
sliceCmd to shardCmd and DeleteSliceTest to DeleteShardTest (and related 
changes inside the test as well). I know it is confusing but we already have 
createshard and updateshardstate in Overseer and I didn't want to add more 
inconsistency.
* In CollectionHandler.handleDeleteShardAction, I removed the name == null 
check because we used params.required().get which ensures that name can never 
be null. The createcollection api also makes the same mistake :)
* The Overseer.deleteSlice/deleteShard was not atomic so I changed the 
following:
{code}
Map newSlices = coll.getSlicesMap();
{code}
to
{code}
Map newSlices = new LinkedHashMap(coll.getSlicesMap());
{code}
* Added a wait loop to OCP.deleteShard like the one in delete collection
* Fixed shardCount and sliceCount in DeleteShardTest constructor
* Added a break to the wait loop inside DeleteShardTest.setSliceAsInactive and 
added a forced cluster-state update. Also, an exception in the if (!transition) 
branch is created but never thrown.
* Removed a redundant assert in DeleteShardTest.doTest because it can never be 
triggered (an exception is thrown from the wait loop in setSliceAsInactive):
{code}
assertEquals("Shard1 is not inactive yet.", Slice.INACTIVE, slice1.getState());
{code}
* Added a connection and read timeout to HttpSolrServer created in 
DeleteShardTest.deleteShard
* I also took this opportunity to remove the timeout hack that I had added to 
CollectionHandler for splitshard and had forgotten to remove.
* Moved zkStateReader.updateClusterState(true) inside the wait loop of 
DeleteShardTest.confirmShardDeletion
* Removed extra assert in DeleteShardTest.confirmShardDeletion for shard2 to be 
not null.

I'll commit this tomorrow if there are no objections.
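The defensive-copy change in the second code snippet above can be illustrated
with a minimal, self-contained sketch. The map of slice names to states and
the method name deleteSlice are stand-ins, not the real Overseer structures.

```java
import java.util.*;

/**
 * Minimal sketch of the defensive-copy fix: mutate a private copy of the
 * slices map rather than the shared live view, so concurrent readers never
 * observe a half-applied delete. Names are illustrative, not Overseer code.
 */
public class SliceCopySketch {

    /** Returns a new map with the slice removed; the input map is untouched. */
    public static Map<String, String> deleteSlice(Map<String, String> liveSlices,
                                                  String sliceName) {
        // Copy first: other threads reading liveSlices see a consistent view.
        Map<String, String> newSlices = new LinkedHashMap<>(liveSlices);
        newSlices.remove(sliceName);
        return newSlices;
    }

    public static void main(String[] args) {
        Map<String, String> live = new LinkedHashMap<>();
        live.put("shard1", "active");
        live.put("shard2", "active");
        Map<String, String> updated = deleteSlice(live, "shard1");
        // The live view still holds both slices; only the copy changed.
        System.out.println(live.size() + " " + updated.size()); // prints 2 1
    }
}
```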

> Create a collections API to delete/cleanup a Slice
> --
>
> Key: SOLR-4693
> URL: https://issues.apache.org/jira/browse/SOLR-4693
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Anshum Gupta
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-4693.patch, SOLR-4693.patch, SOLR-4693.patch, 
> SOLR-4693.patch
>
>
> Have a collections API that cleans up a given shard.
> Among other places, this would be useful post the shard split call to manage 
> the parent/original slice.




[jira] [Updated] (SOLR-4996) CloudSolrServer Does Not Respect Propagate Requests

2013-07-03 Thread Furkan KAMACI (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Furkan KAMACI updated SOLR-4996:


Attachment: SOLR-4996.patch

> CloudSolrServer Does Not Respect Propagate Requests
> ---
>
> Key: SOLR-4996
> URL: https://issues.apache.org/jira/browse/SOLR-4996
> Project: Solr
>  Issue Type: Bug
>Reporter: Furkan KAMACI
> Attachments: SOLR-4996.patch
>
>
> When you make a request such as a LukeRequest through CloudSolrServer, it 
> uses LBHttpSolrServer internally and sends the request to just one Solr node 
> (via HttpSolrServer) in round-robin fashion. So you may get different results 
> for the same request at different times even though nothing has changed. 
> Using a PropagateServer inside CloudSolrServer will fix that bug.




[jira] [Commented] (SOLR-4995) Implementing a Server Capable of Propagating Requests

2013-07-03 Thread Furkan KAMACI (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13699069#comment-13699069
 ] 

Furkan KAMACI commented on SOLR-4995:
-

An initial patch is attached. Multi-threaded requests and other improvements 
will come after getting feedback from the community.

> Implementing a Server Capable of Propagating Requests
> -
>
> Key: SOLR-4995
> URL: https://issues.apache.org/jira/browse/SOLR-4995
> Project: Solr
>  Issue Type: New Feature
>Reporter: Furkan KAMACI
> Attachments: SOLR-4995.patch
>
>
> Currently, Solr clients interact with only one Solr node per request. There 
> should be an implementation that propagates requests to multiple Solr nodes. 
> For example, when Solr is run as SolrCloud, a LukeRequest should be sent to 
> one node of each shard.




[jira] [Commented] (SOLR-4996) CloudSolrServer Does Not Respect Propagate Requests

2013-07-03 Thread Furkan KAMACI (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13699070#comment-13699070
 ] 

Furkan KAMACI commented on SOLR-4996:
-

An initial patch is attached.

> CloudSolrServer Does Not Respect Propagate Requests
> ---
>
> Key: SOLR-4996
> URL: https://issues.apache.org/jira/browse/SOLR-4996
> Project: Solr
>  Issue Type: Bug
>Reporter: Furkan KAMACI
> Attachments: SOLR-4996.patch
>
>
> When you make a request such as a LukeRequest through CloudSolrServer, it 
> uses LBHttpSolrServer internally and sends the request to just one Solr node 
> (via HttpSolrServer) in round-robin fashion. So you may get different results 
> for the same request at different times even though nothing has changed. 
> Using a PropagateServer inside CloudSolrServer will fix that bug.




[jira] [Updated] (SOLR-4816) Add document routing to CloudSolrServer

2013-07-03 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-4816:
-

Attachment: (was: SOLR-4816.patch)

> Add document routing to CloudSolrServer
> ---
>
> Key: SOLR-4816
> URL: https://issues.apache.org/jira/browse/SOLR-4816
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.3
>Reporter: Joel Bernstein
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
> SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
> SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
> SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
> SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
> SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
> SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816-sriesenberg.patch
>
>
> This issue adds the following enhancements to CloudSolrServer's update logic:
> 1) Document routing: Updates are routed directly to the correct shard leader 
> eliminating document routing at the server.
> 2) Optional parallel update execution: Updates for each shard are executed in 
> a separate thread so parallel indexing can occur across the cluster.
> These enhancements should allow for near linear scalability on indexing 
> throughput.
> Usage:
> CloudSolrServer cloudClient = new CloudSolrServer(zkAddress);
> cloudClient.setParallelUpdates(true); 
> SolrInputDocument doc1 = new SolrInputDocument();
> doc1.addField(id, "0");
> doc1.addField("a_t", "hello1");
> SolrInputDocument doc2 = new SolrInputDocument();
> doc2.addField(id, "2");
> doc2.addField("a_t", "hello2");
> UpdateRequest request = new UpdateRequest();
> request.add(doc1);
> request.add(doc2);
> request.setAction(AbstractUpdateRequest.ACTION.OPTIMIZE, false, false);
> NamedList response = cloudClient.request(request); // Returns a backwards 
> compatible condensed response.
> //To get more detailed response down cast to RouteResponse:
> CloudSolrServer.RouteResponse rr = (CloudSolrServer.RouteResponse)response;




[jira] [Updated] (SOLR-4816) Add document routing to CloudSolrServer

2013-07-03 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-4816:
-

Attachment: SOLR-4816.patch

> Add document routing to CloudSolrServer
> ---
>
> Key: SOLR-4816
> URL: https://issues.apache.org/jira/browse/SOLR-4816
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.3
>Reporter: Joel Bernstein
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
> SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
> SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
> SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
> SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
> SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
> SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816-sriesenberg.patch
>
>
> This issue adds the following enhancements to CloudSolrServer's update logic:
> 1) Document routing: Updates are routed directly to the correct shard leader 
> eliminating document routing at the server.
> 2) Optional parallel update execution: Updates for each shard are executed in 
> a separate thread so parallel indexing can occur across the cluster.
> These enhancements should allow for near linear scalability on indexing 
> throughput.
> Usage:
> CloudSolrServer cloudClient = new CloudSolrServer(zkAddress);
> cloudClient.setParallelUpdates(true); 
> SolrInputDocument doc1 = new SolrInputDocument();
> doc1.addField(id, "0");
> doc1.addField("a_t", "hello1");
> SolrInputDocument doc2 = new SolrInputDocument();
> doc2.addField(id, "2");
> doc2.addField("a_t", "hello2");
> UpdateRequest request = new UpdateRequest();
> request.add(doc1);
> request.add(doc2);
> request.setAction(AbstractUpdateRequest.ACTION.OPTIMIZE, false, false);
> NamedList response = cloudClient.request(request); // Returns a backwards 
> compatible condensed response.
> //To get more detailed response down cast to RouteResponse:
> CloudSolrServer.RouteResponse rr = (CloudSolrServer.RouteResponse)response;




[jira] [Commented] (LUCENE-5030) FuzzySuggester has to operate FSTs of Unicode-letters, not UTF-8, to work correctly for 1-byte (like English) and multi-byte (non-Latin) letters

2013-07-03 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13699074#comment-13699074
 ] 

Michael McCandless commented on LUCENE-5030:


Hmm also "ant precommit" is failing ...

> FuzzySuggester has to operate FSTs of Unicode-letters, not UTF-8, to work 
> correctly for 1-byte (like English) and multi-byte (non-Latin) letters
> 
>
> Key: LUCENE-5030
> URL: https://issues.apache.org/jira/browse/LUCENE-5030
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.3
>Reporter: Artem Lukanin
>Assignee: Michael McCandless
> Fix For: 5.0, 4.4
>
> Attachments: benchmark-INFO_SEP.txt, benchmark-old.txt, 
> benchmark-wo_convertion.txt, LUCENE-5030.patch, LUCENE-5030.patch, 
> nonlatin_fuzzySuggester1.patch, nonlatin_fuzzySuggester2.patch, 
> nonlatin_fuzzySuggester3.patch, nonlatin_fuzzySuggester4.patch, 
> nonlatin_fuzzySuggester_combo1.patch, nonlatin_fuzzySuggester_combo2.patch, 
> nonlatin_fuzzySuggester_combo.patch, nonlatin_fuzzySuggester.patch, 
> nonlatin_fuzzySuggester.patch, nonlatin_fuzzySuggester.patch, 
> run-suggest-benchmark.patch
>
>
> There is a limitation in the current FuzzySuggester implementation: it 
> computes edits in UTF-8 space instead of Unicode character (code point) 
> space. 
> This should be fixable: we'd need to fix TokenStreamToAutomaton to work in 
> Unicode character space, then fix FuzzySuggester to do the same steps that 
> FuzzyQuery does: do the LevN expansion in Unicode character space, then 
> convert that automaton to UTF-8, then intersect with the suggest FST.
> See the discussion here: 
> http://lucene.472066.n3.nabble.com/minFuzzyLength-in-FuzzySuggester-behaves-differently-for-English-and-Russian-td4067018.html#none
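The byte-space vs code-point-space distinction the issue describes can be
demonstrated with a generic Levenshtein distance. This is only an
illustration, not the FuzzySuggester/LevN implementation; the class name
EditDistanceSketch is hypothetical.

```java
import java.nio.charset.StandardCharsets;

/**
 * Sketch of why the issue matters: edit distance computed over UTF-8 bytes
 * differs from distance over Unicode code points for multi-byte (non-Latin)
 * text, since one character substitution can change two or more bytes.
 */
public class EditDistanceSketch {

    // Standard dynamic-programming Levenshtein distance over int sequences.
    static int levenshtein(int[] a, int[] b) {
        int[][] d = new int[a.length + 1][b.length + 1];
        for (int i = 0; i <= a.length; i++) d[i][0] = i;
        for (int j = 0; j <= b.length; j++) d[0][j] = j;
        for (int i = 1; i <= a.length; i++)
            for (int j = 1; j <= b.length; j++) {
                int cost = (a[i - 1] == b[j - 1]) ? 0 : 1;
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                                   d[i - 1][j - 1] + cost);
            }
        return d[a.length][b.length];
    }

    static int[] bytesOf(String s) {
        byte[] raw = s.getBytes(StandardCharsets.UTF_8);
        int[] out = new int[raw.length];
        for (int i = 0; i < raw.length; i++) out[i] = raw[i] & 0xFF;
        return out;
    }

    public static int byteDistance(String a, String b) {
        return levenshtein(bytesOf(a), bytesOf(b));
    }

    public static int codePointDistance(String a, String b) {
        return levenshtein(a.codePoints().toArray(), b.codePoints().toArray());
    }

    public static void main(String[] args) {
        // One substituted Cyrillic letter: 1 edit in code-point space,
        // but 2 edits in UTF-8 byte space (both bytes of the char change).
        System.out.println(codePointDistance("мяч", "мач")); // prints 1
        System.out.println(byteDistance("мяч", "мач"));      // prints 2
    }
}
```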




[jira] [Created] (SOLR-4997) The splitshard api doesn't call commit on new sub shards

2013-07-03 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-4997:
---

 Summary: The splitshard api doesn't call commit on new sub shards
 Key: SOLR-4997
 URL: https://issues.apache.org/jira/browse/SOLR-4997
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.3.1, 4.3
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: 4.4


The splitshard api doesn't call commit on new sub shards but it happily sets 
them to active state which means on a successful split, the documents are not 
visible to searchers unless an explicit commit is called on the cluster.

The coreadmin split api will still not call commit on targetCores. That is by 
design and we're not going to change that.




[jira] [Created] (SOLR-4998) Make the use of Slice and Shard consistent across the code and document base

2013-07-03 Thread Anshum Gupta (JIRA)
Anshum Gupta created SOLR-4998:
--

 Summary: Make the use of Slice and Shard consistent across the 
code and document base
 Key: SOLR-4998
 URL: https://issues.apache.org/jira/browse/SOLR-4998
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3.1
Reporter: Anshum Gupta


The interchangeable use of Slice and Shard is pretty confusing at times. We 
should define each term separately and then use the appropriate one consistently.





[jira] [Created] (SOLR-4999) Make the collections API consistent by using 'collection' instead of 'name'

2013-07-03 Thread Anshum Gupta (JIRA)
Anshum Gupta created SOLR-4999:
--

 Summary: Make the collections API consistent by using 'collection' 
instead of 'name'
 Key: SOLR-4999
 URL: https://issues.apache.org/jira/browse/SOLR-4999
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3.1
Reporter: Anshum Gupta


The collections APIs are currently split between using the 'name' and the 
'collection' parameter. We should add support for 'collection' to all APIs, 
while maintaining 'name' (where it already exists) until 5.0.





[jira] [Commented] (SOLR-4693) Create a collections API to delete/cleanup a Slice

2013-07-03 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13699081#comment-13699081
 ] 

Anshum Gupta commented on SOLR-4693:


I'll just go through the patch tonight. Also, here are some related JIRAs I 
created:
Make the use of Slice and Shard consistent across the code and document base 
[https://issues.apache.org/jira/browse/SOLR-4998]


> Create a collections API to delete/cleanup a Slice
> --
>
> Key: SOLR-4693
> URL: https://issues.apache.org/jira/browse/SOLR-4693
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Anshum Gupta
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-4693.patch, SOLR-4693.patch, SOLR-4693.patch, 
> SOLR-4693.patch
>
>
> Have a collections API that cleans up a given shard.
> Among other places, this would be useful post the shard split call to manage 
> the parent/original slice.




[jira] [Updated] (SOLR-4998) Make the use of Slice and Shard consistent across the code and document base

2013-07-03 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-4998:
---

Affects Version/s: 4.3

> Make the use of Slice and Shard consistent across the code and document base
> 
>
> Key: SOLR-4998
> URL: https://issues.apache.org/jira/browse/SOLR-4998
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.3, 4.3.1
>Reporter: Anshum Gupta
>
> The interchangeable use of Slice and Shard is pretty confusing at times. We 
> should define each separately and use the apt term whenever we do so.




[jira] [Updated] (SOLR-4943) Add a new info admin handler.

2013-07-03 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-4943:
--

Attachment: SOLR-4943.patch

New patch with some testing. Takes out the UI changes - @steffkes has said he 
will flesh out that part of the patch and commit it after I put in the back end 
changes.

> Add a new info admin handler.
> -
>
> Key: SOLR-4943
> URL: https://issues.apache.org/jira/browse/SOLR-4943
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-4943-2.patch, SOLR-4943-3__hoss_variant.patch, 
> SOLR-4943-3.patch, SOLR-4943.patch, SOLR-4943.patch
>
>
> Currently, you have to specify a core to get system information for a variety 
> of request handlers - properties, logging, thread dump, system, etc.
> These should be available at a system location and not core specific location.




[jira] [Commented] (SOLR-4998) Make the use of Slice and Shard consistent across the code and document base

2013-07-03 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13699089#comment-13699089
 ] 

Anshum Gupta commented on SOLR-4998:


Here's my take on what I think these terms should refer to. Any feedback or 
suggestions would be great:
Slice: The higher level, logical representation.
Shard: The entity representing any physical index belonging to a Slice.

Collection has Slices
Slices have Shards

I'll take this up.
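A minimal Java sketch of the containment proposed above, using hypothetical class names purely for illustration (these are not the actual SolrCloud types):

```java
import java.util.List;

// Hypothetical model of the terminology proposed above -- not real SolrCloud
// classes, just an illustration of the containment: Collection > Slice > Shard.
public class SliceShardSketch {
    // Shard: a physical index (e.g. one replica's core) belonging to a Slice.
    record Shard(String nodeName, String coreName) {}

    // Slice: the higher-level, logical partition of the collection.
    record Slice(String name, List<Shard> shards) {}

    // Collection: the whole logical index, made up of Slices.
    record Collection(String name, List<Slice> slices) {}

    public static void main(String[] args) {
        Shard s1 = new Shard("node1", "core1");
        Shard s2 = new Shard("node2", "core2");
        Slice slice = new Slice("slice1", List.of(s1, s2));
        Collection coll = new Collection("collection1", List.of(slice));
        System.out.println(coll.name() + " has " + coll.slices().size()
                + " slice with " + slice.shards().size() + " shards");
    }
}
```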

> Make the use of Slice and Shard consistent across the code and document base
> 
>
> Key: SOLR-4998
> URL: https://issues.apache.org/jira/browse/SOLR-4998
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.3, 4.3.1
>Reporter: Anshum Gupta
>
> The interchangeable use of Slice and Shard is pretty confusing at times. We 
> should define each separately and use the apt term whenever we do so.




[jira] [Commented] (SOLR-4997) The splitshard api doesn't call commit on new sub shards

2013-07-03 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13699090#comment-13699090
 ] 

Shalin Shekhar Mangar commented on SOLR-4997:
-

Hmm, we don't know the name of the update handler inside the Overseer 
Collection Processor so we can do one of the following:
# Assume it is /update
# Pass in the update handler name in the OverseerCollectionProcessor constructor
# Change CoreAdmin request apply updates action to accept a commit param

> The splitshard api doesn't call commit on new sub shards
> 
>
> Key: SOLR-4997
> URL: https://issues.apache.org/jira/browse/SOLR-4997
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.3, 4.3.1
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 4.4
>
>
> The splitshard api doesn't call commit on new sub shards but it happily sets 
> them to active state which means on a successful split, the documents are not 
> visible to searchers unless an explicit commit is called on the cluster.
> The coreadmin split api will still not call commit on targetCores. That is by 
> design and we're not going to change that.




[jira] [Commented] (LUCENE-5013) ScandinavianFoldingFilterFactory and ScandinavianNormalizationFilterFactory

2013-07-03 Thread Karl Wettin (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13699093#comment-13699093
 ] 

Karl Wettin commented on LUCENE-5013:
-

Takk Jan! <3

> ScandinavianFoldingFilterFactory and ScandinavianNormalizationFilterFactory
> ---
>
> Key: LUCENE-5013
> URL: https://issues.apache.org/jira/browse/LUCENE-5013
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Affects Versions: 4.3
>Reporter: Karl Wettin
>Assignee: Jan Høydahl
>Priority: Trivial
> Fix For: 5.0, 4.4
>
> Attachments: LUCENE-5013-2.txt, LUCENE-5013-3.txt, LUCENE-5013-4.txt, 
> LUCENE-5013-5.txt, LUCENE-5013-6.txt, LUCENE-5013.patch, LUCENE-5013.txt
>
>
> This filter is an augmentation of the output from ASCIIFoldingFilter;
> it discriminates against the double vowels aa, ae, ao, oe and oo, leaving just 
> the first one.
> blåbærsyltetøj == blåbärsyltetöj == blaabaarsyltetoej == blabarsyltetoj
> räksmörgås == ræksmørgås == ræksmörgaos == raeksmoergaas == raksmorgas
> Caveats:
> Since this filters on top of ASCIIFoldingFilter, äöåøæ have already been 
> folded down to a, o, a, o, ae by the time this filter runs, which causes 
> effects such as:
> bøen -> boen -> bon
> åene -> aene -> ane
> I find this to be a trivial problem compared to not finding anything at all.
> Background:
> Swedish åäö are in fact the same letters as Norwegian and Danish åæø, and thus 
> interchangeable when used between these languages. They are however folded 
> differently when people type them on a keyboard lacking these characters, and 
> ASCIIFoldingFilter handles ä and æ differently.
> When a Swedish person is lacking umlauted characters on the keyboard they 
> consistently type a, a, o instead of å, ä, ö. Foreigners also tend to use a, 
> a, o.
> In Norway people tend to type aa, ae and oe instead of å, æ and ø. Some use 
> a, a, o. I've also seen oo, ao, etc. And permutations. Not sure about Denmark 
> but the pattern is probably the same.
> This filter solves that problem, but might also cause new ones.
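As a rough illustration, the two folding stages described above can be sketched as follows. This is a toy string-based approximation; the real Lucene filters operate on token streams and cover many more character mappings:

```java
public class ScandinavianFoldSketch {
    // Toy approximation of ASCIIFoldingFilter plus the proposed double-vowel
    // folding. Assumption: only the handful of letters from the examples above
    // are handled here; the real filters cover far more.
    static String fold(String s) {
        // Stage 1: fold the Scandinavian letters the way ASCIIFoldingFilter does
        // for these characters (å/ä -> a, æ -> ae, ö/ø -> o).
        String t = s.toLowerCase()
                .replace("å", "a").replace("ä", "a").replace("æ", "ae")
                .replace("ö", "o").replace("ø", "o");
        // Stage 2: discriminate against the double vowels aa, ae, ao, oe, oo,
        // leaving just the first letter of each pair.
        return t.replaceAll("aa|ae|ao", "a").replaceAll("oe|oo", "o");
    }

    public static void main(String[] args) {
        // The different spellings collapse to the same folded form.
        System.out.println(fold("blåbærsyltetøj"));    // blabarsyltetoj
        System.out.println(fold("blaabaarsyltetoej")); // blabarsyltetoj
        System.out.println(fold("räksmörgås"));        // raksmorgas
        System.out.println(fold("bøen"));              // bon (the caveat above)
    }
}
```

All four spellings of "blueberry jam" from the description converge, and the bøen/åene caveat is visible as well.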




[jira] [Commented] (SOLR-4998) Make the use of Slice and Shard consistent across the code and document base

2013-07-03 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13699094#comment-13699094
 ] 

Mark Miller commented on SOLR-4998:
---

Good luck :)

> Make the use of Slice and Shard consistent across the code and document base
> 
>
> Key: SOLR-4998
> URL: https://issues.apache.org/jira/browse/SOLR-4998
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.3, 4.3.1
>Reporter: Anshum Gupta
>
> The interchangeable use of Slice and Shard is pretty confusing at times. We 
> should define each separately and use the apt term whenever we do so.




[jira] [Commented] (SOLR-4997) The splitshard api doesn't call commit on new sub shards

2013-07-03 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13699099#comment-13699099
 ] 

Mark Miller commented on SOLR-4997:
---

1. -1


Something along 2 or 3 lines seems way preferable.

> The splitshard api doesn't call commit on new sub shards
> 
>
> Key: SOLR-4997
> URL: https://issues.apache.org/jira/browse/SOLR-4997
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.3, 4.3.1
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 4.4
>
>
> The splitshard api doesn't call commit on new sub shards but it happily sets 
> them to active state which means on a successful split, the documents are not 
> visible to searchers unless an explicit commit is called on the cluster.
> The coreadmin split api will still not call commit on targetCores. That is by 
> design and we're not going to change that.




[jira] [Comment Edited] (SOLR-4997) The splitshard api doesn't call commit on new sub shards

2013-07-03 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13699099#comment-13699099
 ] 

Mark Miller edited comment on SOLR-4997 at 7/3/13 3:49 PM:
---

1. -1


Something along 2 or 3 lines seems way preferable.

  was (Author: markrmil...@gmail.com):
1. -1


Some along 2 or 3 lines seems way preferable.
  
> The splitshard api doesn't call commit on new sub shards
> 
>
> Key: SOLR-4997
> URL: https://issues.apache.org/jira/browse/SOLR-4997
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.3, 4.3.1
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 4.4
>
>
> The splitshard api doesn't call commit on new sub shards but it happily sets 
> them to active state which means on a successful split, the documents are not 
> visible to searchers unless an explicit commit is called on the cluster.
> The coreadmin split api will still not call commit on targetCores. That is by 
> design and we're not going to change that.




[jira] [Commented] (SOLR-4997) The splitshard api doesn't call commit on new sub shards

2013-07-03 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13699101#comment-13699101
 ] 

Shalin Shekhar Mangar commented on SOLR-4997:
-

Agreed. I am going to use 3 just because it is something I control. 
Configurable handlers for such basic things are a pain.

> The splitshard api doesn't call commit on new sub shards
> 
>
> Key: SOLR-4997
> URL: https://issues.apache.org/jira/browse/SOLR-4997
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.3, 4.3.1
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 4.4
>
>
> The splitshard api doesn't call commit on new sub shards but it happily sets 
> them to active state which means on a successful split, the documents are not 
> visible to searchers unless an explicit commit is called on the cluster.
> The coreadmin split api will still not call commit on targetCores. That is by 
> design and we're not going to change that.




[JENKINS-MAVEN] Lucene-Solr-Maven-4.x #376: POMs out of sync

2013-07-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-4.x/376/

No tests ran.

Build Log:
[...truncated 11787 lines...]




[jira] [Commented] (SOLR-4788) Multiple Entities DIH delta import: dataimporter.[entityName].last_index_time is empty

2013-07-03 Thread Bill Bell (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13699108#comment-13699108
 ] 

Bill Bell commented on SOLR-4788:
-

We are also running into this issue. Not sure how it happens yet though.

> Multiple Entities DIH delta import: dataimporter.[entityName].last_index_time 
> is empty
> --
>
> Key: SOLR-4788
> URL: https://issues.apache.org/jira/browse/SOLR-4788
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.2, 4.3
> Environment: solr-spec
> 4.2.1.2013.03.26.08.26.55
> solr-impl
> 4.2.1 1461071 - mark - 2013-03-26 08:26:55
> lucene-spec
> 4.2.1
> lucene-impl
> 4.2.1 1461071 - mark - 2013-03-26 08:23:34
> OR
> solr-spec
> 4.3.0
> solr-impl
> 4.3.0 1477023 - simonw - 2013-04-29 15:10:12
> lucene-spec
> 4.3.0
> lucene-impl
> 4.3.0 1477023 - simonw - 2013-04-29 14:55:14
>Reporter: chakming wong
>Assignee: Shalin Shekhar Mangar
> Attachments: entitytest.patch, entitytest.patch, entitytest.patch, 
> entitytest.patch, entitytest.patch
>
>
> {code:title=conf/dataimport.properties|borderStyle=solid}
> entity1.last_index_time=2013-05-06 03\:02\:06
> last_index_time=2013-05-06 03\:05\:22
> entity2.last_index_time=2013-05-06 03\:03\:14
> entity3.last_index_time=2013-05-06 03\:05\:22
> {code}
> {code:title=conf/solrconfig.xml|borderStyle=solid}
> <?xml version="1.0" encoding="UTF-8" ?>
> ...
> <requestHandler name="/dataimport"
> class="org.apache.solr.handler.dataimport.DataImportHandler">
>   <lst name="defaults">
>     <str name="config">dihconfig.xml</str>
>   </lst>
> </requestHandler>
> ...
> {code}
> {code:title=conf/dihconfig.xml|borderStyle=solid}
> <?xml version="1.0" encoding="UTF-8" ?>
> <dataConfig>
>   <dataSource type="JdbcDataSource" driver="com.mysql.jdbc.Driver"
>       url="jdbc:mysql://*:*/*"
>       user="*" password="*"/>
>   <document>
>     <entity name="entity1"
>         query="SELECT * FROM table_a"
>         deltaQuery="SELECT table_a_id FROM table_b WHERE 
> last_modified > '${dataimporter.entity1.last_index_time}'"
>         deltaImportQuery="SELECT * FROM table_a WHERE id = 
> '${dataimporter.entity1.id}'"
>         transformer="TemplateTransformer">
>       <field ... />
>       ...
>     </entity>
>     <entity name="entity2" ...>
>       ...
>     </entity>
>     <entity name="entity3" ...>
>       ...
>     </entity>
>   </document>
> </dataConfig>
> {code}
> In the above setup, *dataimporter.entity1.last_index_time* resolves to an 
> *empty string*, which causes the SQL query to fail with an error.
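To illustrate the failure mode, here is a minimal stand-in for the variable resolution (a hypothetical helper, not actual DataImportHandler code): when the entity-scoped key is missing or empty, the substitution produces a comparison against an empty string.

```java
import java.util.Map;

public class DeltaQuerySketch {
    // Hypothetical stand-in for DIH variable resolution: look up the
    // entity-scoped last_index_time as stored in dataimport.properties.
    static String deltaQuery(Map<String, String> props, String entity) {
        String ts = props.getOrDefault(entity + ".last_index_time", "");
        // An empty value is substituted as-is, yielding "... > ''".
        return "SELECT table_a_id FROM table_b WHERE last_modified > '" + ts + "'";
    }

    public static void main(String[] args) {
        Map<String, String> props =
                Map.of("entity1.last_index_time", "2013-05-06 03:02:06");
        System.out.println(deltaQuery(props, "entity1")); // usable query
        System.out.println(deltaQuery(props, "entity2")); // comparison vs '' -- the bug
    }
}
```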




Re: [JENKINS-MAVEN] Lucene-Solr-Maven-4.x #376: POMs out of sync

2013-07-03 Thread Robert Muir
this happens because the POMs are really out of sync :)

I think maven depends on asm-4.1 but ant does not?

On Wed, Jul 3, 2013 at 8:57 AM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> Build: https://builds.apache.org/job/Lucene-Solr-Maven-4.x/376/
>
> No tests ran.
>
> Build Log:
> [...truncated 11787 lines...]
>
>
>
>
>


Re: Folder solr/example/hdfs should not be there

2013-07-03 Thread Robert Muir
if its supposed to be there, it should at least be under svn:ignore!

'ant example' currently dirties up the source tree

On Wed, Jul 3, 2013 at 3:03 AM, Jan Høydahl  wrote:

> After running "ant example" a folder "hdfs" sneaks into the source tree at
> solr/example/hdfs
> Have not checked why, but it certainly does not belong there
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
>
>
>

