[jira] [Commented] (LUCENE-6883) Getting exception _t.si (No such file or directory)

2015-11-24 Thread Tejas Jethva (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15026300#comment-15026300
 ] 

Tejas Jethva commented on LUCENE-6883:
--

Thanks Michael,

Will try upgrading it to the latest version.

> Getting exception _t.si (No such file or directory)
> ---
>
> Key: LUCENE-6883
> URL: https://issues.apache.org/jira/browse/LUCENE-6883
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.2
>Reporter: Tejas Jethva
>
> We are getting the following exception when we try to update the cache. 
> These are the two scenarios in which we get this error:
> scenario 1:
> 2015-11-03 06:45:18,213 [main] ERROR java.io.FileNotFoundException: 
> /app/cache/index-persecurity/PERSECURITY_INDEX-QCH/_mb.si (No such file or 
> directory)
>   at java.io.RandomAccessFile.open(Native Method)
>   at java.io.RandomAccessFile.<init>(RandomAccessFile.java:241)
>   at 
> org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:193)
>   at 
> org.apache.lucene.codecs.lucene40.Lucene40SegmentInfoReader.read(Lucene40SegmentInfoReader.java:50)
>   at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:301)
>   at 
> org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:56)
>   at 
> org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:783)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:52)
>   at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:65)
>   .
>   
> scenario 2:
> java.io.FileNotFoundException: 
> /app/1.0.5_loadtest/index-persecurity/PERSECURITY_INDEX-ITQ/_t.si (No such 
> file or directory)
>   at java.io.RandomAccessFile.open(Native Method)
>   at java.io.RandomAccessFile.<init>(RandomAccessFile.java:241)
>   at 
> org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:193)
>   at 
> org.apache.lucene.codecs.lucene40.Lucene40SegmentInfoReader.read(Lucene40SegmentInfoReader.java:50)
>   at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:301)
>   at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:347)
>   at 
> org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:783)
>   at 
> org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:630)
>   at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:343)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.isCurrent(StandardDirectoryReader.java:326)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenNoWriter(StandardDirectoryReader.java:284)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:247)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:235)
>   at 
> org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:169)
>   ..
>   
>   
> What might be the possible reasons for this?  






[jira] [Commented] (SOLR-8335) HdfsLockFactory does not allow core to come up after a node was killed

2015-11-24 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15026212#comment-15026212
 ] 

Varun Thacker commented on SOLR-8335:
-

Hi Mark,

bq.  a file in Hdfs won't just go away because of a crash.

Sure, it won't go away. But one difference I am seeing in 4.10 is the naming 
of the lock: a lock with a new name gets created every time you start Solr 
after killing it abruptly (kill -9 in my test).

Steps I followed on 4.10.4:

1. Start Solr using {{java -jar start.jar 
-Dsolr.directoryFactory=HdfsDirectoryFactory -Dsolr.lock.type=hdfs 
-Dsolr.data.dir=hdfs://localhost:9000/solr410 
-Dsolr.updatelog=hdfs://localhost:9000/solr410}}

2. Output from hdfs
{code}
Found 2 items
-rw-r--r--   3 varun supergroup  0 2015-11-25 10:28 
/solr410/index/HdfsDirectory@52959724 
lockFactory=org.apache.solr.store.hdfs.hdfslockfact...@9d59d3f-write.lock
-rwxr-xr-x   1 varun supergroup 53 2015-11-25 10:28 
/solr410/index/segments_1
{code}

3. Kill Solr and start again

4. Output from hdfs
{code}
Found 3 items
-rw-r--r--   3 varun supergroup  0 2015-11-25 10:29 
/solr410/index/HdfsDirectory@46ad6bd3 
lockFactory=org.apache.solr.store.hdfs.hdfslockfact...@4b44b5f6-write.lock
-rw-r--r--   3 varun supergroup  0 2015-11-25 10:28 
/solr410/index/HdfsDirectory@52959724 
lockFactory=org.apache.solr.store.hdfs.hdfslockfact...@9d59d3f-write.lock
-rwxr-xr-x   1 varun supergroup 53 2015-11-25 10:28 
/solr410/index/segments_1
{code}

Every time I repeat the kill + start, a new lock file gets created. Hence it 
works on 4.10.4 but never worked in 5.x, since the lock file in 5.x is always 
called {{write.lock}}; that is what made me believe it was a bug in 5.x.
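
A minimal sketch of the naming difference, assuming the 4.x behavior of 
prefixing lock file names with a per-Directory lock ID (an illustration based 
on the HDFS listings above, not code from either release):
{code}
// Sketch only: in 4.x the lock file name carried a per-Directory prefix, so
// each restart produced a fresh name such as
// "HdfsDirectory@52959724 lockFactory=...@9d59d3f-write.lock".
class LockNameSketch {
  static String lockName(Object directory, String base) {
    return directory.toString() + "-" + base; // prefix differs per Directory instance
  }
  // In 5.x the name is the constant "write.lock", so a stale file left by a
  // killed node collides with the restarted node's lock attempt.
}
{code}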

Am I doing anything differently in the tests here?

> HdfsLockFactory does not allow core to come up after a node was killed
> --
>
> Key: SOLR-8335
> URL: https://issues.apache.org/jira/browse/SOLR-8335
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.0, 5.1, 5.2, 5.2.1, 5.3, 5.3.1
>Reporter: Varun Thacker
>
> When using HdfsLockFactory, if a node gets killed instead of shut down 
> gracefully, the write.lock file remains in HDFS. The next time you start the 
> node, the core doesn't load because of a LockObtainFailedException.
> I was able to reproduce this in all 5.x versions of Solr. The problem wasn't 
> there when I tested 4.10.4.
> Steps to reproduce this on 5.x:
> 1. Create directory in HDFS : {{bin/hdfs dfs -mkdir /solr}}
> 2. Start Solr: {{bin/solr start -Dsolr.directoryFactory=HdfsDirectoryFactory 
> -Dsolr.lock.type=hdfs -Dsolr.data.dir=hdfs://localhost:9000/solr 
> -Dsolr.updatelog=hdfs://localhost:9000/solr}}
> 3. Create core: {{./bin/solr create -c test -n data_driven}}
> 4. Kill solr
> 5. The lock file is there in HDFS and is called {{write.lock}}
> 6. Start Solr again and you get a stack trace like this:
> {code}
> 2015-11-23 13:28:04.287 ERROR (coreLoadExecutor-6-thread-1) [   x:test] 
> o.a.s.c.CoreContainer Error creating core [test]: Index locked for write for 
> core 'test'. Solr now longer supports forceful unlocking via 
> 'unlockOnStartup'. Please verify locks manually!
> org.apache.solr.common.SolrException: Index locked for write for core 'test'. 
> Solr now longer supports forceful unlocking via 'unlockOnStartup'. Please 
> verify locks manually!
> at org.apache.solr.core.SolrCore.<init>(SolrCore.java:820)
> at org.apache.solr.core.SolrCore.<init>(SolrCore.java:659)
> at org.apache.solr.core.CoreContainer.create(CoreContainer.java:723)
> at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:443)
> at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:434)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:210)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.lucene.store.LockObtainFailedException: Index locked 
> for write for core 'test'. Solr now longer supports forceful unlocking via 
> 'unlockOnStartup'. Please verify locks manually!
> at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:528)
> at org.apache.solr.core.SolrCore.<init>(SolrCore.java:761)
> ... 9 more
> 2015-11-23 13:28:04.289 ERROR (coreContainerWorkExecutor-2-thread-1) [   ] 
> o.a.s.c.CoreContainer Error waiting for SolrCore to be created
> java.util.concurrent.ExecutionException: 
> org.apache.solr.common.SolrException: Unable to create core [test]
> at java.util.concurre

[jira] [Comment Edited] (SOLR-8339) SolrDocument and SolrInputDocument should have a common interface

2015-11-24 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15026180#comment-15026180
 ] 

Ishan Chattopadhyaya edited comment on SOLR-8339 at 11/25/15 4:54 AM:
--

Is there a historical reason for not having a common interface/abstract class 
for SolrDocument and SolrInputDocument, other than a Map?
Since adding this right now might break backcompat, does it make sense to do 
this for 6.0?
Right now, the motivation for doing this is SOLR-8220, but it is not strictly 
needed.


was (Author: ichattopadhyaya):
Is there a historic reason for not having a common interface/abstract class for 
SolrDocument and SolrInputDocument?
Since having this right now might break backcompat, does it make sense to do 
this for 6.0?
Right now, the motivation for doing this is SOLR-8220, but not strictly needed.

> SolrDocument and SolrInputDocument should have a common interface
> -
>
> Key: SOLR-8339
> URL: https://issues.apache.org/jira/browse/SOLR-8339
> Project: Solr
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
>
> Currently, both share a Map interface (SOLR-928). However, there are many 
> common methods like createField(), setField() etc. that should perhaps go 
> into an interface/abstract class.
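
A purely illustrative sketch of what such a shared abstraction could look like 
(the interface name and method set are assumptions drawn from methods both 
classes already expose, not from any patch):
{code}
import java.util.Collection;

// Hypothetical sketch of a common abstraction for SolrDocument and
// SolrInputDocument; only methods both classes already share are listed.
public interface CommonSolrDocument {
  void setField(String name, Object value);
  void addField(String name, Object value);
  Object getFieldValue(String name);
  Collection<String> getFieldNames();
  boolean containsKey(Object key); // Map-style access both classes expose
}
{code}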






[jira] [Updated] (SOLR-8339) SolrDocument and SolrInputDocument should have a common interface

2015-11-24 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-8339:
---
Description: Currently, both share a Map interface (SOLR-928). However, 
there are many common methods like createField(), setField() etc. that should 
perhaps go into an interface/abstract class.

> SolrDocument and SolrInputDocument should have a common interface
> -
>
> Key: SOLR-8339
> URL: https://issues.apache.org/jira/browse/SOLR-8339
> Project: Solr
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
>
> Currently, both share a Map interface (SOLR-928). However, there are many 
> common methods like createField(), setField() etc. that should perhaps go 
> into an interface/abstract class.






[jira] [Commented] (SOLR-8339) SolrDocument and SolrInputDocument should have a common interface

2015-11-24 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15026180#comment-15026180
 ] 

Ishan Chattopadhyaya commented on SOLR-8339:


Is there a historical reason for not having a common interface/abstract class 
for SolrDocument and SolrInputDocument?
Since adding this right now might break backcompat, does it make sense to do 
this for 6.0?
Right now, the motivation for doing this is SOLR-8220, but it is not strictly 
needed.

> SolrDocument and SolrInputDocument should have a common interface
> -
>
> Key: SOLR-8339
> URL: https://issues.apache.org/jira/browse/SOLR-8339
> Project: Solr
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
>







[JENKINS-EA] Lucene-Solr-5.x-Linux (32bit/jdk1.9.0-ea-b90) - Build # 14737 - Failure!

2015-11-24 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14737/
Java: 32bit/jdk1.9.0-ea-b90 -server -XX:+UseG1GC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SaslZkACLProviderTest

Error Message:
5 threads leaked from SUITE scope at org.apache.solr.cloud.SaslZkACLProviderTest:
   1) Thread[id=1212, name=changePwdReplayCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
        at java.lang.Thread.run(Thread.java:747)
   2) Thread[id=1209, name=apacheds, state=WAITING, group=TGRP-SaslZkACLProviderTest]
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:516)
        at java.util.TimerThread.mainLoop(Timer.java:526)
        at java.util.TimerThread.run(Timer.java:505)
   3) Thread[id=1211, name=kdcReplayCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
        at java.lang.Thread.run(Thread.java:747)
   4) Thread[id=1213, name=ou=system.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
        at java.lang.Thread.run(Thread.java:747)
   5) Thread[id=1210, name=groupCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
        at java.lang.Thread.run(Thread.java:747)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.SaslZkACLProviderTest: 
   1) Thread[id=1212, name=changePwdReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.

[jira] [Comment Edited] (SOLR-8337) Add ReduceOperation Interface

2015-11-24 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025697#comment-15025697
 ] 

Joel Bernstein edited comment on SOLR-8337 at 11/25/15 2:57 AM:


First crack at adding the ReduceOperation to the ReducerStream.

I'll create a GroupOperation that will emit a single Tuple with a list of all 
the Tuples in a group.

{code}
reduce(  search(collection1, 
  q="*:*",
  qt="/export", 
  fl="id,a_s,a_i,a_f", 
  sort="a_s asc, a_f asc"),
by="a_s",
group(sort="a_f asc"))
{code}  


was (Author: joel.bernstein):
First crack at adding the ReduceOperation to the ReducerStream.

I'll create a GroupOperation that will emit a single Tuple with a list of all 
the Tuples in a group.

{code}
reduce(  
 search(collection1, 
 q="*:*",
 qt="/export", 
 fl="id,a_s,a_i,a_f", 
 sort="a_s asc, a_f asc"),
  by="a_s",
  group(sort="a_f asc"))
{code}  

> Add ReduceOperation Interface
> -
>
> Key: SOLR-8337
> URL: https://issues.apache.org/jira/browse/SOLR-8337
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
> Attachments: SOLR-8337.patch, SOLR-8337.patch
>
>
> This is a very simple ticket to create a new interface that extends 
> StreamOperation. The interface will be called ReduceOperation.
> In the near future the ReducerStream will be changed to accept a 
> ReduceOperation. This will allow users to pass in a specific reduce 
> algorithm to the ReducerStream, making the ReducerStream much more powerful.
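
A minimal sketch of what the interface could look like (the reduce() method is 
an assumption about the contract; the authoritative definition is in the 
attached patch):
{code}
import org.apache.solr.client.solrj.io.Tuple;
import org.apache.solr.client.solrj.io.ops.StreamOperation;

// Sketch only: ReduceOperation as a sub-interface of StreamOperation, letting
// the ReducerStream require a reduce-capable operation at the type level.
public interface ReduceOperation extends StreamOperation {
  Tuple reduce(); // assumed contract: emit the reduced Tuple for the current group
}
{code}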






[jira] [Comment Edited] (SOLR-8337) Add ReduceOperation Interface

2015-11-24 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025697#comment-15025697
 ] 

Joel Bernstein edited comment on SOLR-8337 at 11/25/15 2:56 AM:


First crack at adding the ReduceOperation to the ReducerStream.

I'll create a GroupOperation that will emit a single Tuple with a list of all 
the Tuples in a group.

{code}
reduce(  
 search(collection1, 
 q="*:*",
 qt="/export", 
 fl="id,a_s,a_i,a_f", 
 sort="a_s asc, a_f asc"),
  by="a_s",
  group(sort="a_f asc"))
{code}  


was (Author: joel.bernstein):
First crack at adding the ReduceOperation to the ReducerStream.

I'll create a GroupOperation that will emit a single Tuple with a list of all 
the Tuples in a group.

{code}
reduce(  
 search(collection1, 
 q="*:*",
 qt="/export", 
 fl="id,a_s,a_i,a_f", 
 sort="a_s asc, a_f asc"),
  by="a_s",
  group())
{code}  

> Add ReduceOperation Interface
> -
>
> Key: SOLR-8337
> URL: https://issues.apache.org/jira/browse/SOLR-8337
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
> Attachments: SOLR-8337.patch, SOLR-8337.patch
>
>
> This is a very simple ticket to create a new interface that extends 
> StreamOperation. The interface will be called ReduceOperation.
> In the near future the ReducerStream will be changed to accept a 
> ReduceOperation. This will allow users to pass in a specific reduce 
> algorithm to the ReducerStream, making the ReducerStream much more powerful.






[jira] [Commented] (SOLR-8337) Add ReduceOperation Interface

2015-11-24 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15026066#comment-15026066
 ] 

Joel Bernstein commented on SOLR-8337:
--

Sounds good. I'll make that change in the next iteration.

> Add ReduceOperation Interface
> -
>
> Key: SOLR-8337
> URL: https://issues.apache.org/jira/browse/SOLR-8337
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
> Attachments: SOLR-8337.patch, SOLR-8337.patch
>
>
> This is a very simple ticket to create a new interface that extends 
> StreamOperation. The interface will be called ReduceOperation.
> In the near future the ReducerStream will be changed to accept a 
> ReduceOperation. This will allow users to pass in a specific reduce 
> algorithm to the ReducerStream, making the ReducerStream much more powerful.






[jira] [Commented] (SOLR-5129) If zookeeper is down, SolrCloud nodes will not start correctly, even if zookeeper is started later

2015-11-24 Thread Frank Kelly (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15026031#comment-15026031
 ] 

Frank Kelly commented on SOLR-5129:
---

My 2 cents: an enterprise-class service should be "self-healing" once the 
underlying problem (ZooKeeper state) is resolved.

> If zookeeper is down, SolrCloud nodes will not start correctly, even if 
> zookeeper is started later
> --
>
> Key: SOLR-5129
> URL: https://issues.apache.org/jira/browse/SOLR-5129
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.4
>Reporter: Shawn Heisey
>Priority: Minor
> Fix For: 4.9, Trunk
>
>
> Summary of report from user on mailing list:
> If zookeeper is down or doesn't have quorum when you start Solr nodes, they 
> will not function correctly, even if you later start zookeeper.  While 
> zookeeper is down, the log shows connection failures as expected.  When 
> zookeeper comes back, the log shows:
> INFO  - 2013-08-09 15:48:41.528; 
> org.apache.solr.common.cloud.ConnectionManager; Client->ZooKeeper status 
> change trigger but we are already closed
> At that point, Solr (admin UI and all other functions) does not work, and 
> won't work until it is restarted.






[jira] [Commented] (SOLR-8337) Add ReduceOperation Interface

2015-11-24 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15026025#comment-15026025
 ] 

Dennis Gove commented on SOLR-8337:
---

I might change the operationExpressions line to get operands of type 
ReduceOperation.class. This would ensure that only expressions adhering to the 
ReduceOperation interface are returned. That said, from a user perspective it 
might be nice to be told you provided a StreamOperation when a ReduceOperation 
is expected.
{code}
List<StreamExpression> operationExpressions = 
    factory.getExpressionOperandsRepresentingTypes(expression, ReduceOperation.class);
{code}


> Add ReduceOperation Interface
> -
>
> Key: SOLR-8337
> URL: https://issues.apache.org/jira/browse/SOLR-8337
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
> Attachments: SOLR-8337.patch, SOLR-8337.patch
>
>
> This is a very simple ticket to create a new interface that extends 
> StreamOperation. The interface will be called ReduceOperation.
> In the near future the ReducerStream will be changed to accept a 
> ReduceOperation. This will allow users to pass in a specific reduce 
> algorithm to the ReducerStream, making the ReducerStream much more powerful.






[jira] [Updated] (SOLR-8340) HighlightComponent throws a NullPointerException when the ResponseBuilder attribute named 'onePassDistributedQuery' is 'true' and 'rows' is greater than zero

2015-11-24 Thread zengjie (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zengjie updated SOLR-8340:
--
Attachment: solr.patch

I fixed this issue by checking whether sdoc is a null object, but I have not 
tested this patch; it is just for reference.
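
A hedged sketch of the guard such a patch would presumably add (rb.resultIds 
comes from the issue description; positionInResponse and the exact placement 
are assumptions, not the attached patch):
{code}
// Sketch of the proposed null check: skip highlight entries whose id is
// absent from ResponseBuilder.resultIds.
ShardDoc sdoc = rb.resultIds.get(id);
if (sdoc == null) {
  continue; // doc falls outside the start..rows window; ignore its highlights
}
int idx = sdoc.positionInResponse;
{code}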

> HighlightComponent throws a NullPointerException when the ResponseBuilder 
> attribute named 'onePassDistributedQuery' is 'true' and 'rows' is greater 
> than zero
> --
>
> Key: SOLR-8340
> URL: https://issues.apache.org/jira/browse/SOLR-8340
> Project: Solr
>  Issue Type: Bug
>  Components: highlighter
>Affects Versions: 5.3.1
>Reporter: zengjie
>Priority: Critical
>  Labels: highlighting
> Attachments: solr.patch
>
>
> When the attribute 'onePassDistributedQuery' is 'true', QueryComponent will 
> not send a ShardRequest to retrieve field values; the highlight values have 
> already been returned by the shards in createMainQuery.
> See the code below:
> {code}
> private void handleRegularResponses(ResponseBuilder rb, ShardRequest sreq) {
>   if ((sreq.purpose & ShardRequest.PURPOSE_GET_TOP_IDS) != 0) {
>     // merge all ids and scores; ResponseBuilder.resultIds stores only the
>     // ids between start and rows
>     mergeIds(rb, sreq);
>   }
>   if ((sreq.purpose & ShardRequest.PURPOSE_GET_TERM_STATS) != 0) {
>     updateStats(rb, sreq);
>   }
>   if ((sreq.purpose & ShardRequest.PURPOSE_GET_FIELDS) != 0) {
>     // when ResponseBuilder.onePassDistributedQuery is true, highlight values
>     // were retrieved at the same time, but they are not truncated by 'start'
>     // and 'rows'; the shards return the top N (N = start + rows)
>     returnFields(rb, sreq);
>   }
> }
> {code}






[jira] [Created] (SOLR-8340) HighlightComponent throws a NullPointerException when the ResponseBuilder attribute named 'onePassDistributedQuery' is 'true' and 'rows' is greater than zero

2015-11-24 Thread zengjie (JIRA)
zengjie created SOLR-8340:
-

 Summary: HighlightComponent throws a NullPointerException when the 
ResponseBuilder attribute named 'onePassDistributedQuery' is 'true' and 'rows' 
is greater than zero
 Key: SOLR-8340
 URL: https://issues.apache.org/jira/browse/SOLR-8340
 Project: Solr
  Issue Type: Bug
  Components: highlighter
Affects Versions: 5.3.1
Reporter: zengjie
Priority: Critical


When the attribute 'onePassDistributedQuery' is 'true', QueryComponent will not 
send a ShardRequest to retrieve field values; the highlight values have already 
been returned by the shards in createMainQuery.
See the code below:

{code}
private void handleRegularResponses(ResponseBuilder rb, ShardRequest sreq) {
  if ((sreq.purpose & ShardRequest.PURPOSE_GET_TOP_IDS) != 0) {
    // merge all ids and scores; ResponseBuilder.resultIds stores only the ids
    // between start and rows
    mergeIds(rb, sreq);
  }

  if ((sreq.purpose & ShardRequest.PURPOSE_GET_TERM_STATS) != 0) {
    updateStats(rb, sreq);
  }

  if ((sreq.purpose & ShardRequest.PURPOSE_GET_FIELDS) != 0) {
    // when ResponseBuilder.onePassDistributedQuery is true, highlight values
    // were retrieved at the same time, but they are not truncated by 'start'
    // and 'rows'; the shards return the top N (N = start + rows)
    returnFields(rb, sreq);
  }
}
{code}







[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 1027 - Still Failing

2015-11-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/1027/

1 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=11073, name=Thread-6280, 
state=RUNNABLE, group=TGRP-FullSolrCloudDistribCmdsTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=11073, name=Thread-6280, state=RUNNABLE, 
group=TGRP-FullSolrCloudDistribCmdsTest]
Caused by: java.lang.RuntimeException: 
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:43455/_pxg/h/collection1
at __randomizedtesting.SeedInfo.seed([59DA3974711B98F8]:0)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest$1IndexThread.run(FullSolrCloudDistribCmdsTest.java:645)
Caused by: org.apache.solr.client.solrj.SolrServerException: Timeout occured 
while waiting response from server at: http://127.0.0.1:43455/_pxg/h/collection1
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:584)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:240)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:229)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:167)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest$1IndexThread.run(FullSolrCloudDistribCmdsTest.java:643)
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
at 
org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
at 
org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
at 
org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:685)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:487)
at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:479)
... 5 more




Build Log:
[...truncated 10686 lines...]
   [junit4] Suite: org.apache.solr.cloud.FullSolrCloudDistribCmdsTest
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/build/solr-core/test/J2/temp/solr.cloud.FullSolrCloudDistribCmdsTest_59DA3974711B98F8-001/init-core-data-001
   [junit4]   2> 1356205 INFO  
(SUITE-FullSolrCloudDistribCmdsTest-seed#[59DA3974711B98F8]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /_pxg/h
   [junit4]   2> 1356211 INFO  
(TEST-FullSolrCloudDistribCmdsTest.test-seed#[59DA3974711B98F8]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 1356211 INFO  (Thread-6112) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1356211 INFO  (Thread-6112) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 1356311 INFO  
(TEST-FullSolrCloudDistribCmdsTest.test-seed#[59DA3974711B98F8]) [] 
o.a.s.c.ZkTestServer start zk server on port:54391
   [junit4]   2> 1356312 INFO  
(TEST-FullSolrCloudDistribCmdsTest.test-seed#[59DA3974711B98F8]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2>

Re: Optimisations proposal for FacetsConfig

2015-11-24 Thread Sanne Grinovero
Thanks Erick!

It's done; it was as trivial as deleting a single word:
 https://issues.apache.org/jira/browse/LUCENE-6909

Sanne

On 24 November 2015 at 18:10, Erick Erickson  wrote:
> Sanne:
>
> Sure, please open a JIRA and add a patch. You'll need to create a user
> ID on the JIRA system, but that's a "self-serve" option.
>
> Best,
> Erick
>
> On Mon, Nov 23, 2015 at 8:21 AM, Sanne Grinovero
>  wrote:
>> Hello all,
>> I was looking into the source code for
>> org.apache.lucene.facet.FacetsConfig as it's being highlighted as a
>> hotspot of allocations during a performance analysis session.
>>
>> Our code was allocating a new instance of FacetsConfig for each
>> Document being built; there are several maps being allocated by such
>> an instance, both as instance fields and on the hot path of method
>> "#build(Document doc)".
>>
>> My understanding from reading the code is that it's designed to be
>> multi-threaded, probably to reuse one instance for a single index?
>>
>> That would resolve my issue with allocations at instance level, and
>> probably also the maps being allocated within the build method as the
>> JVM seems to be smart enough to skip those; at least that's my
>> impression with a quick experiment.
>>
>> However, reusing this single instance across all threads would become a
>> contention point as all getters to read the field configurations are
>> synchronized.
>> Since the maps being read are actually safe ConcurrentMap instances, I
>> see no reason for the "synchronized", so really it just boils down to
>> a trivial patch to remove those on the reader methods.
>>
>> May I open a JIRA and propose a patch for that?
>>
>> As a second step, I'd also like to see if the build method could be
>> short-circuited for a quick return: in case there are no faceted
>> fields would be great to just return with the input document right
>> away.
>>
>> Thanks,
>> Sanne



[jira] [Updated] (LUCENE-6909) Improve concurrency for FacetsConfig

2015-11-24 Thread Sanne Grinovero (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanne Grinovero updated LUCENE-6909:

Attachment: 0001-LUCENE-6909-Allow-efficient-concurrent-usage-of-a-Fa.patch

Trivial patch.

The synchronization isn't needed on `getDimConfig` because it's reading from a 
ConcurrentMap.

Synchronization is still needed on setters, but that's not a performance 
concern, as the usage pattern is supposedly to configure the fields once and 
then mostly read from the reused instance.
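
For illustration, a minimal sketch of the change, under the assumption that the 
getter reads from a ConcurrentMap (an excerpt-style illustration, not the 
attached patch):
{code}
// Sketch (fragment): a reader backed by a ConcurrentMap needs no synchronized
// keyword; DimConfig and DEFAULT_DIM_CONFIG refer to FacetsConfig's own types.
private final java.util.concurrent.ConcurrentMap<String, DimConfig> fieldTypes =
    new java.util.concurrent.ConcurrentHashMap<>();

// before: public synchronized DimConfig getDimConfig(String dimName) { ... }
public DimConfig getDimConfig(String dimName) {
  DimConfig dc = fieldTypes.get(dimName); // ConcurrentHashMap.get is thread-safe
  return dc != null ? dc : DEFAULT_DIM_CONFIG;
}
{code}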

> Improve concurrency for FacetsConfig
> 
>
> Key: LUCENE-6909
> URL: https://issues.apache.org/jira/browse/LUCENE-6909
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/other
>Affects Versions: 5.3
>Reporter: Sanne Grinovero
>Priority: Trivial
> Attachments: 
> 0001-LUCENE-6909-Allow-efficient-concurrent-usage-of-a-Fa.patch
>
>
> The design of {{org.apache.lucene.facet.FacetsConfig}} encourages reuse of a 
> single instance across multiple threads, yet the current synchronization 
> model is too strict as it doesn't allow for concurrent read operations.
> I'll attach a trivial patch which removes the contention point.






[jira] [Created] (LUCENE-6909) Improve concurrency for FacetsConfig

2015-11-24 Thread Sanne Grinovero (JIRA)
Sanne Grinovero created LUCENE-6909:
---

 Summary: Improve concurrency for FacetsConfig
 Key: LUCENE-6909
 URL: https://issues.apache.org/jira/browse/LUCENE-6909
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/other
Affects Versions: 5.3
Reporter: Sanne Grinovero
Priority: Trivial


The design of {{org.apache.lucene.facet.FacetsConfig}} encourages reuse of a 
single instance across multiple threads, yet the current synchronization model 
is too strict as it doesn't allow for concurrent read operations.

I'll attach a trivial patch which removes the contention point.







[jira] [Comment Edited] (SOLR-8220) Read field from docValues for non stored fields

2015-11-24 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025770#comment-15025770
 ] 

Ishan Chattopadhyaya edited comment on SOLR-8220 at 11/25/15 12:08 AM:
---

I was thinking of doing exactly that! SOLR-8339


was (Author: ichattopadhyaya):
I was thinking of doing exactly that! I'll raise another jira for it.

> Read field from docValues for non stored fields
> ---
>
> Key: SOLR-8220
> URL: https://issues.apache.org/jira/browse/SOLR-8220
> Project: Solr
>  Issue Type: Improvement
>Reporter: Keith Laban
> Attachments: SOLR-8220-ishan.patch, SOLR-8220-ishan.patch, 
> SOLR-8220-ishan.patch, SOLR-8220-ishan.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch
>
>
> Many times a value will be both stored="true" and docValues="true" which 
> requires redundant data to be stored on disk. Since reading from docValues is 
> both efficient and a common practice (facets, analytics, streaming, etc), 
> reading values from docValues when a stored version of the field does not 
> exist would be a valuable disk usage optimization.
> The only caveat with this that I can see would be for multiValued fields as 
> they would always be returned sorted in the docValues approach. I believe 
> this is a fair compromise.
> I've done a rough implementation for this as a field transform, but I think 
> it should live closer to where stored fields are loaded in the 
> SolrIndexSearcher.
> Two open questions/observations:
> 1) There doesn't seem to be a standard way to read values for docValues, 
> facets, analytics, streaming, etc, all seem to be doing their own ways, 
> perhaps some of this logic should be centralized.
> 2) What will the API behavior be? (Below is my proposed implementation)
> Parameters for fl:
> - fl="docValueField"
>   -- return field from docValue if the field is not stored and in docValues, 
> if the field is stored return it from stored fields
> - fl="*"
>   -- return only stored fields
> - fl="+"
>-- return stored fields and docValue fields
> 2a - would be easiest implementation and might be sufficient for a first 
> pass. 2b - is current behavior






[jira] [Created] (SOLR-8339) SolrDocument and SolrInputDocument should have a common interface

2015-11-24 Thread Ishan Chattopadhyaya (JIRA)
Ishan Chattopadhyaya created SOLR-8339:
--

 Summary: SolrDocument and SolrInputDocument should have a common 
interface
 Key: SOLR-8339
 URL: https://issues.apache.org/jira/browse/SOLR-8339
 Project: Solr
  Issue Type: Bug
Reporter: Ishan Chattopadhyaya









[jira] [Commented] (SOLR-8220) Read field from docValues for non stored fields

2015-11-24 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025770#comment-15025770
 ] 

Ishan Chattopadhyaya commented on SOLR-8220:


I was thinking of doing exactly that! I'll raise another jira for it.

> Read field from docValues for non stored fields
> ---
>
> Key: SOLR-8220
> URL: https://issues.apache.org/jira/browse/SOLR-8220
> Project: Solr
>  Issue Type: Improvement
>Reporter: Keith Laban
> Attachments: SOLR-8220-ishan.patch, SOLR-8220-ishan.patch, 
> SOLR-8220-ishan.patch, SOLR-8220-ishan.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch
>
>
> Many times a value will be both stored="true" and docValues="true" which 
> requires redundant data to be stored on disk. Since reading from docValues is 
> both efficient and a common practice (facets, analytics, streaming, etc), 
> reading values from docValues when a stored version of the field does not 
> exist would be a valuable disk usage optimization.
> The only caveat with this that I can see would be for multiValued fields as 
> they would always be returned sorted in the docValues approach. I believe 
> this is a fair compromise.
> I've done a rough implementation for this as a field transform, but I think 
> it should live closer to where stored fields are loaded in the 
> SolrIndexSearcher.
> Two open questions/observations:
> 1) There doesn't seem to be a standard way to read values for docValues, 
> facets, analytics, streaming, etc, all seem to be doing their own ways, 
> perhaps some of this logic should be centralized.
> 2) What will the API behavior be? (Below is my proposed implementation)
> Parameters for fl:
> - fl="docValueField"
>   -- return field from docValue if the field is not stored and in docValues, 
> if the field is stored return it from stored fields
> - fl="*"
>   -- return only stored fields
> - fl="+"
>-- return stored fields and docValue fields
> 2a - would be easiest implementation and might be sufficient for a first 
> pass. 2b - is current behavior






[jira] [Comment Edited] (SOLR-8337) Add ReduceOperation Interface

2015-11-24 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025697#comment-15025697
 ] 

Joel Bernstein edited comment on SOLR-8337 at 11/24/15 11:28 PM:
-

First crack at adding the ReduceOperation to the ReducerStream.

I'll create a GroupOperation that will emit a single Tuple with a list of all 
the Tuples in a group.

{code}
reduce(  
 search(collection1, 
 q="*:*",
 qt="/export", 
 fl="id,a_s,a_i,a_f", 
 sort="a_s asc, a_f asc"),
  by="a_s",
  group())
{code}  


was (Author: joel.bernstein):
First crack at adding the ReduceOperation to the ReducerStream.

I'll create a GroupOperation that will emit a single Tuple with a list of all 
the Tuples in a group.

  

> Add ReduceOperation Interface
> -
>
> Key: SOLR-8337
> URL: https://issues.apache.org/jira/browse/SOLR-8337
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
> Attachments: SOLR-8337.patch, SOLR-8337.patch
>
>
> This is a very simple ticket to create a new interface that extends 
> StreamOperation. The interface will be called ReduceOperation.
> In the near future the ReducerStream will be changed to accept a 
> ReduceOperation. This will allow users to pass in a specific reduce 
> algorithm to the ReducerStream, making the ReducerStream much more powerful.






[jira] [Updated] (SOLR-8337) Add ReduceOperation Interface

2015-11-24 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8337:
-
Attachment: SOLR-8337.patch

First crack at adding the ReduceOperation to the ReducerStream.

I'll create a GroupOperation that will emit a single Tuple with a list of all 
the Tuples in a group.

  

> Add ReduceOperation Interface
> -
>
> Key: SOLR-8337
> URL: https://issues.apache.org/jira/browse/SOLR-8337
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
> Attachments: SOLR-8337.patch, SOLR-8337.patch
>
>
> This is a very simple ticket to create a new interface that extends 
> StreamOperation. The interface will be called ReduceOperation.
> In the near future the ReducerStream will be changed to accept a 
> ReduceOperation. This will allow users to pass in a specific reduce 
> algorithm to the ReducerStream, making the ReducerStream much more powerful.






[jira] [Commented] (SOLR-8220) Read field from docValues for non stored fields

2015-11-24 Thread Keith Laban (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025686#comment-15025686
 ] 

Keith Laban commented on SOLR-8220:
---

I'm working on a separate patch which fixes EnumField; it also adds support for 
"*_dv"-type queries. I'll take a look at merging your change in too. Do you 
think it would be worth adding an interface for SolrDocument and 
SolrInputDocument to implement which includes {{containsKey}} and {{addField}}?

> Read field from docValues for non stored fields
> ---
>
> Key: SOLR-8220
> URL: https://issues.apache.org/jira/browse/SOLR-8220
> Project: Solr
>  Issue Type: Improvement
>Reporter: Keith Laban
> Attachments: SOLR-8220-ishan.patch, SOLR-8220-ishan.patch, 
> SOLR-8220-ishan.patch, SOLR-8220-ishan.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch
>
>
> Many times a value will be both stored="true" and docValues="true" which 
> requires redundant data to be stored on disk. Since reading from docValues is 
> both efficient and a common practice (facets, analytics, streaming, etc), 
> reading values from docValues when a stored version of the field does not 
> exist would be a valuable disk usage optimization.
> The only caveat with this that I can see would be for multiValued fields as 
> they would always be returned sorted in the docValues approach. I believe 
> this is a fair compromise.
> I've done a rough implementation for this as a field transform, but I think 
> it should live closer to where stored fields are loaded in the 
> SolrIndexSearcher.
> Two open questions/observations:
> 1) There doesn't seem to be a standard way to read values for docValues, 
> facets, analytics, streaming, etc, all seem to be doing their own ways, 
> perhaps some of this logic should be centralized.
> 2) What will the API behavior be? (Below is my proposed implementation)
> Parameters for fl:
> - fl="docValueField"
>   -- return field from docValue if the field is not stored and in docValues, 
> if the field is stored return it from stored fields
> - fl="*"
>   -- return only stored fields
> - fl="+"
>-- return stored fields and docValue fields
> 2a - would be easiest implementation and might be sufficient for a first 
> pass. 2b - is current behavior






[jira] [Commented] (SOLR-8220) Read field from docValues for non stored fields

2015-11-24 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025652#comment-15025652
 ] 

Ishan Chattopadhyaya commented on SOLR-8220:


Just noticed: EnumFieldTest.testEnumSort() fails with the last patch (and the 
one before it) when severity_dv is chosen as the enum field. I think we're 
handling the enum docValues fields incorrectly.

> Read field from docValues for non stored fields
> ---
>
> Key: SOLR-8220
> URL: https://issues.apache.org/jira/browse/SOLR-8220
> Project: Solr
>  Issue Type: Improvement
>Reporter: Keith Laban
> Attachments: SOLR-8220-ishan.patch, SOLR-8220-ishan.patch, 
> SOLR-8220-ishan.patch, SOLR-8220-ishan.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch
>
>
> Many times a value will be both stored="true" and docValues="true" which 
> requires redundant data to be stored on disk. Since reading from docValues is 
> both efficient and a common practice (facets, analytics, streaming, etc), 
> reading values from docValues when a stored version of the field does not 
> exist would be a valuable disk usage optimization.
> The only caveat with this that I can see would be for multiValued fields as 
> they would always be returned sorted in the docValues approach. I believe 
> this is a fair compromise.
> I've done a rough implementation for this as a field transform, but I think 
> it should live closer to where stored fields are loaded in the 
> SolrIndexSearcher.
> Two open questions/observations:
> 1) There doesn't seem to be a standard way to read values for docValues, 
> facets, analytics, streaming, etc, all seem to be doing their own ways, 
> perhaps some of this logic should be centralized.
> 2) What will the API behavior be? (Below is my proposed implementation)
> Parameters for fl:
> - fl="docValueField"
>   -- return field from docValue if the field is not stored and in docValues, 
> if the field is stored return it from stored fields
> - fl="*"
>   -- return only stored fields
> - fl="+"
>-- return stored fields and docValue fields
> 2a - would be easiest implementation and might be sufficient for a first 
> pass. 2b - is current behavior






[jira] [Commented] (SOLR-7949) There is an XSS issue in the plugins/stats page of the Admin Web UI.

2015-11-24 Thread Miriam Celi (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025642#comment-15025642
 ] 

Miriam Celi commented on SOLR-7949:
---

I wasn't sure if 5.3.0 was one of the affected versions, since the Details 
section at the top of the record only lists 4.9, 4.10.4, and 5.2.1 as affected 
versions. Perhaps Affected Versions should be set to "all versions prior to 
5.3.1" in order to avoid confusion?


> There is an XSS issue in the plugins/stats page of the Admin Web UI.
> ---
>
> Key: SOLR-7949
> URL: https://issues.apache.org/jira/browse/SOLR-7949
> Project: Solr
>  Issue Type: Bug
>  Components: web gui
>Affects Versions: 4.9, 4.10.4, 5.2.1
>Reporter: davidchiu
>Assignee: Jan Høydahl
> Fix For: 5.4, 5.3.1, Trunk
>
>
> Open the Solr Admin Web UI, select a core (such as collection1), then click 
> "Plugins/stats" and type a URL like 
> {{http://127.0.0.1:8983/solr/#/collection1/plugins/cache?entry=score=<img 
> src=1 onerror=alert(1);>}} into the browser address bar; you will get an 
> alert box with "1".
> I changed the following code to resolve this problem:
> The Original code:
>   for( var i = 0; i < entry_count; i++ )
>   {
> $( 'a[data-bean="' + entries[i] + '"]', frame_element )
>   .parent().addClass( 'expanded' );
>   }
> The Changed code:
>   for( var i = 0; i < entry_count; i++ )
>   {
> $( 'a[data-bean="' + entries[i].esc() + '"]', frame_element )
>   .parent().addClass( 'expanded' );
>   }






[jira] [Commented] (SOLR-8333) fix public methods that take/return private/package-private arguments/results

2015-11-24 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025607#comment-15025607
 ] 

Hoss Man commented on SOLR-8333:


bq. ... Making CacheEntry public is the simple fix, but since the class should 
only be used internally and never by users, I wonder if this is better: ...

Shawn: I think any API improvements/refactoring/class-moving should be tracked 
in a dedicated issue, where the questions of backcompat and code structure can 
be considered appropriately, and decisions can be made about whether those 
changes are trunk only or 5.x, etc...

For this issue I really think we should focus on the minimum viable changes 
that can be made to the existing APIs in terms of class-level visibility in 
order for the APIs to not be broken in 5.x - ideally in such a way that we 
don't break any existing user plugins that might be using these classes...

* Erik already fixed the issue with SimplePostTool -> GlobFileFilter by making 
the public API refer to an appropriate public abstraction/interface rather than 
the concrete impl.
* for HLL -> ISchemaVersion we should make ISchemaVersion public
** it was public in the original java-hll project; I'm not sure why Dawid 
removed that when importing
* for ConcurrentLRUCache & ConcurrentLFUCache I think we should go ahead and 
make the respective static inner CacheEntry classes public for now.

any concerns with these solutions to address the immediate problems?

(I have yet to find any automated tools that might make it easy to fail the 
build if any other such API problems exist)
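
On the CacheEntry point above, a hedged sketch of that minimal visibility 
change (the class shape is a simplified illustration, not the actual Solr 
source):
{code}
// Sketch only: widen the static inner class that public methods return, so
// callers outside the package can actually use the results.
public class ConcurrentLRUCacheSketch<K, V> {
  public static class CacheEntry<K, V> { // was package-private
    public final K key;
    public final V value;
    public CacheEntry(K key, V value) { this.key = key; this.value = value; }
  }
}
{code}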

> fix public methods that take/return private/package-private arguments/results
> -
>
> Key: SOLR-8333
> URL: https://issues.apache.org/jira/browse/SOLR-8333
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
> Attachments: SOLR-8333-ConcurrentLFUCache-protected.patch, 
> SOLR-8333.patch
>
>
> background info: 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201511.mbox/%3Calpine.DEB.2.11.1511231128450.24330@tray%3E
> A commit that added a package to solrj which already existed in solr-core 
> caused the javadoc link checker to uncover at least 4 instances of private or 
> package-private classes being necessary to use public APIs.
> we should fix these instances and any other instances of APIs with similar 
> problems that we can find.






[jira] [Commented] (SOLR-8338) in OverseerTest replace strings such as "collection1" and "state"

2015-11-24 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025597#comment-15025597
 ] 

Ishan Chattopadhyaya commented on SOLR-8338:


+1, LGTM. 
Though, just a thought, should we use SolrTestCaseJ4.DEFAULT_TEST_CORENAME for 
"collection1"?

> in OverseerTest replace strings such as "collection1" and "state"
> -
>
> Key: SOLR-8338
> URL: https://issues.apache.org/jira/browse/SOLR-8338
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8338.patch
>
>
> replace with variable or enum equivalent.






[jira] [Updated] (SOLR-8220) Read field from docValues for non stored fields

2015-11-24 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-8220:
---
Attachment: SOLR-8220.patch

Updating the patch with the following changes:
# Since this decorate method will also be used from the RealTimeGetComponent, 
where it will have to decorate a SolrInputDocument, I've changed the method to 
handle both a SolrDocument and a SolrInputDocument, depending on what is passed 
in.
# BasicFunctionalityTest was failing for me since it depended on a field 
"test_s_dv" that didn't exist in the schema. I've added that field to 
schema.xml.
# Added javadoc for the decorate method.

Keith, please review. If you think these changes make sense, I'll base 
SOLR-8276 on this one. As you mentioned, the schema version check is still a 
TODO.
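
For context, a hedged sketch of the dual-type handling in point 1; the actual 
method name and signature in the patch may differ:

{code}
// hedged sketch only -- SolrDocument and SolrInputDocument share the
// SolrDocumentBase supertype, so a single method can decorate either
static void decorateDocValueField(SolrDocumentBase<?, ?> doc, String field, Object value) {
  if (!doc.containsKey(field)) {
    doc.addField(field, value); // fill from docValues only when no stored value is present
  }
}
{code}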

> Read field from docValues for non stored fields
> ---
>
> Key: SOLR-8220
> URL: https://issues.apache.org/jira/browse/SOLR-8220
> Project: Solr
>  Issue Type: Improvement
>Reporter: Keith Laban
> Attachments: SOLR-8220-ishan.patch, SOLR-8220-ishan.patch, 
> SOLR-8220-ishan.patch, SOLR-8220-ishan.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch
>
>
> Many times a value will be both stored="true" and docValues="true" which 
> requires redundant data to be stored on disk. Since reading from docValues is 
> both efficient and a common practice (facets, analytics, streaming, etc), 
> reading values from docValues when a stored version of the field does not 
> exist would be a valuable disk usage optimization.
> The only caveat with this that I can see would be for multiValued fields as 
> they would always be returned sorted in the docValues approach. I believe 
> this is a fair compromise.
> I've done a rough implementation for this as a field transform, but I think 
> it should live closer to where stored fields are loaded in the 
> SolrIndexSearcher.
> Two open questions/observations:
> 1) There doesn't seem to be a standard way to read values from docValues; 
> facets, analytics, streaming, etc. all seem to do it their own way, so 
> perhaps some of this logic should be centralized.
> 2) What will the API behavior be? (Below is my proposed implementation)
> Parameters for fl:
> - fl="docValueField"
>   -- return field from docValue if the field is not stored and in docValues, 
> if the field is stored return it from stored fields
> - fl="*"
>   -- return only stored fields
> - fl="+"
>-- return stored fields and docValue fields
> 2a would be the easiest implementation and might be sufficient for a first 
> pass; 2b is the current behavior.
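
For illustration, the proposed fl semantics expressed through SolrJ ("price" is 
a made-up field with stored="false" docValues="true"):

{code}
// hedged illustration of the proposal above, not current behavior
SolrQuery q = new SolrQuery("*:*");
q.setFields("price"); // proposal: served from docValues since the field isn't stored
// q.setFields("*");  // proposal: stored fields only (current behavior)
// q.setFields("+");  // proposal: stored fields plus docValues-only fields
{code}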



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6900) Grouping sortWithinGroup should use Sort.RELEVANCE to indicate that, not null

2015-11-24 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-6900:
-
Attachment: LUCENE_6900.patch

Thanks for the review Christine!  I updated the patch with those simple 
changes.  _Also, I backed out unrelated improvements to needScore; I'll file a 
separate issue_.

Absent further feedback, I'll commit this tomorrow around now.
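
For illustration, the convention change under discussion as it would look at a 
call site (a sketch, not the patch itself):

{code}
// sketch: request relevance sort within each group with the explicit constant,
// matching how groupSort is already handled, instead of passing null
groupingSearch.setSortWithinGroup(Sort.RELEVANCE);
{code}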

> Grouping sortWithinGroup should use Sort.RELEVANCE to indicate that, not null
> -
>
> Key: LUCENE-6900
> URL: https://issues.apache.org/jira/browse/LUCENE-6900
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/grouping
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Attachments: LUCENE_6900.patch, LUCENE_6900.patch
>
>
> In AbstractSecondPassGroupingCollector, {{withinGroupSort}} uses a value of 
> null to indicate a relevance sort.  I think it's nicer to use Sort.RELEVANCE 
> for this -- after all it's how the {{groupSort}} variable is handled.  This 
> choice is also seen in GroupingSearch; likely some other collaborators too.
> [~martijn.v.groningen] is there some wisdom in the current choice that 
> escapes me?  If not I'll post a patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5209) last replica removal cascades to remove shard from clusterstate

2015-11-24 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025453#comment-15025453
 ] 

Christine Poerschke commented on SOLR-5209:
---

I am in the process of updating/rebasing the patch for this (SOLR-5209) ticket. 
SOLR-8338 is a step towards that: it just replaces magic strings, so that the 
actual test changes required for SOLR-5209 will be simpler and clearer.

> last replica removal cascades to remove shard from clusterstate
> ---
>
> Key: SOLR-5209
> URL: https://issues.apache.org/jira/browse/SOLR-5209
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.4
>Reporter: Christine Poerschke
>Assignee: Mark Miller
>Priority: Blocker
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-5209.patch
>
>
> The problem we saw was that unloading of an only replica of a shard deleted 
> that shard's info from the clusterstate. Once it was gone then there was no 
> easy way to re-create the shard (other than dropping and re-creating the 
> whole collection's state).
> This seems like a bug?
> Overseer.java around line 600 has a comment and commented out code:
> // TODO TODO TODO!!! if there are no replicas left for the slice, and the 
> slice has no hash range, remove it
> // if (newReplicas.size() == 0 && slice.getRange() == null) {
> // if there are no replicas left for the slice remove it
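
A hedged reading of the disabled guard quoted above (names taken from the 
snippet; this is not the committed fix):

{code}
// sketch of the guarded removal the TODO proposes: only a slice that owns no
// hash range should disappear when its last replica goes away
if (newReplicas.size() == 0 && slice.getRange() == null) {
  slices.remove(slice.getName()); // "slices" is assumed from surrounding Overseer code
}
{code}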



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8338) in OverseerTest replace strings such as "collection1" and "state"

2015-11-24 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-8338:
--
Attachment: SOLR-8338.patch

> in OverseerTest replace strings such as "collection1" and "state"
> -
>
> Key: SOLR-8338
> URL: https://issues.apache.org/jira/browse/SOLR-8338
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8338.patch
>
>
> replace with variable or enum equivalent.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7949) There is an XSS issue in the plugins/stats page of the Admin Web UI.

2015-11-24 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025439#comment-15025439
 ] 

Upayavira commented on SOLR-7949:
-

[~mceli] from the fix version, it looks like it was resolved in 5.3.1, so yes, 
the issue is present in 5.3.0.

> There is an XSS issue in the plugins/stats page of the Admin Web UI.
> ---
>
> Key: SOLR-7949
> URL: https://issues.apache.org/jira/browse/SOLR-7949
> Project: Solr
>  Issue Type: Bug
>  Components: web gui
>Affects Versions: 4.9, 4.10.4, 5.2.1
>Reporter: davidchiu
>Assignee: Jan Høydahl
> Fix For: 5.4, 5.3.1, Trunk
>
>
> Open the Solr Admin Web UI, select a core (such as collection1), click 
> "Plugins/stats", and type a URL like 
> "http://127.0.0.1:8983/solr/#/collection1/plugins/cache?entry=score=<img 
> src=1 onerror=alert(1);>" into the browser address bar; you will get an alert box with 
> "1".
> I changed the following code to resolve this problem:
> The original code:
>   for( var i = 0; i < entry_count; i++ )
>   {
> $( 'a[data-bean="' + entries[i] + '"]', frame_element )
>   .parent().addClass( 'expanded' );
>   }
> The Changed code:
>   for( var i = 0; i < entry_count; i++ )
>   {
> $( 'a[data-bean="' + entries[i].esc() + '"]', frame_element )
>   .parent().addClass( 'expanded' );
>   }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8338) in OverseerTest replace strings such as "collection1" and "state"

2015-11-24 Thread Christine Poerschke (JIRA)
Christine Poerschke created SOLR-8338:
-

 Summary: in OverseerTest replace strings such as "collection1" and 
"state"
 Key: SOLR-8338
 URL: https://issues.apache.org/jira/browse/SOLR-8338
 Project: Solr
  Issue Type: Test
Reporter: Christine Poerschke
Assignee: Christine Poerschke
Priority: Minor


replace with variable or enum equivalent.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8337) Add ReduceOperation Interface

2015-11-24 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025291#comment-15025291
 ] 

Joel Bernstein edited comment on SOLR-8337 at 11/24/15 8:27 PM:


Currently the ReducerStream is referred to by the *group* Streaming Expression 
function. Now that we will be passing in a ReduceOperation it makes sense to 
call this function *reduce*. For example the syntax would be:
{code}
reduce(  
 search(collection1, 
 q="*:*",
 qt="/export", 
 fl="id,a_s,a_i,a_f", 
 sort="a_s asc, a_f asc"),
  by="a_s",
  operation(...))
{code}


was (Author: joel.bernstein):
Currently the ReduceStream is referred to by the *group* Streaming Expression 
function. Now that we will be passing in a ReduceOperation it makes sense to 
call this function *reduce*. For example the syntax would be:
{code}
reduce(  
 search(collection1, 
 q="*:*",
 qt="/export", 
 fl="id,a_s,a_i,a_f", 
 sort="a_s asc, a_f asc"),
  by="a_s",
  operation(...))
{code}

> Add ReduceOperation Interface
> -
>
> Key: SOLR-8337
> URL: https://issues.apache.org/jira/browse/SOLR-8337
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
> Attachments: SOLR-8337.patch
>
>
> This is a very simple ticket to create a new interface that extends 
> StreamOperation. The interface will be called ReduceOperation.
> In the near future the ReducerStream will be changed to accept a 
> ReduceOperation. This will allow users to pass in the specific reduce 
> algorithm to the ReducerStream, making the ReducerStream much more powerful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8337) Add ReduceOperation Interface

2015-11-24 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025241#comment-15025241
 ] 

Joel Bernstein commented on SOLR-8337:
--

Patch adds a single *reduce()* method that returns a single Tuple, which is the 
final reduction.

The *operate(Tuple)* method will be called for each Tuple that is read by the 
*ReducerStream*.

The reduce() method will be called each time the group-by key changes. This 
gives the ReduceOperation a chance to finish the reduce algorithm and 
return a single Tuple. The ReduceOperation will also clear its internal memory 
after each call to reduce(), to prepare for the next Tuple grouping.
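
A rough sketch of the interface as described, assuming the existing 
StreamOperation and Tuple types (the committed patch may differ):

{code}
// hedged sketch, not the committed interface
public interface ReduceOperation extends StreamOperation {
  // operate(Tuple), inherited from StreamOperation, is fed each tuple read;
  // reduce() fires when the group-by key changes: it returns the finished
  // reduction for the previous group and clears internal state for the next.
  Tuple reduce();
}
{code}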

> Add ReduceOperation Interface
> -
>
> Key: SOLR-8337
> URL: https://issues.apache.org/jira/browse/SOLR-8337
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
> Attachments: SOLR-8337.patch
>
>
> This is a very simple ticket to create a new interface that extends 
> StreamOperation. The interface will be called ReduceOperation.
> In the near future the ReducerStream will be changed to accept a 
> ReduceOperation. This will allow users to pass in the specific reduce 
> algorithm to the ReducerStream, making the ReducerStream much more powerful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7928) Improve CheckIndex to work against HdfsDirectory

2015-11-24 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025235#comment-15025235
 ] 

Mike Drob commented on SOLR-7928:
-

LGTM!

> Improve CheckIndex to work against HdfsDirectory
> 
>
> Key: SOLR-7928
> URL: https://issues.apache.org/jira/browse/SOLR-7928
> Project: Solr
>  Issue Type: New Feature
>  Components: hdfs
>Reporter: Mike Drob
>Assignee: Gregory Chanan
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-7928.patch, SOLR-7928.patch, SOLR-7928.patch, 
> SOLR-7928.patch, SOLR-7928.patch, SOLR-7928.patch, SOLR-7928.patch, 
> SOLR-7928.patch
>
>
> CheckIndex is very useful for testing an index for corruption. However, it 
> can only work with an index on an FSDirectory, meaning that if you need to 
> check an HDFS index, you have to download it to local disk (and it can be 
> very large).
> We should have a way to natively check an index on HDFS for corruption.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8337) Add ReduceOperation Interface

2015-11-24 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8337:
-
Attachment: SOLR-8337.patch

> Add ReduceOperation Interface
> -
>
> Key: SOLR-8337
> URL: https://issues.apache.org/jira/browse/SOLR-8337
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
> Attachments: SOLR-8337.patch
>
>
> This is a very simple ticket to create a new interface that extends 
> StreamOperation. The interface will be called ReduceOperation.
> In the near future the ReducerStream will be changed to accept a 
> ReduceOperation. This will allow users to pass in the specific reduce 
> algorithm to the ReducerStream, making the ReducerStream much more powerful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8337) Add ReduceOperation Interface

2015-11-24 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-8337:


 Summary: Add ReduceOperation Interface
 Key: SOLR-8337
 URL: https://issues.apache.org/jira/browse/SOLR-8337
 Project: Solr
  Issue Type: Bug
Reporter: Joel Bernstein


This is a very simple ticket to create a new interface that extends 
StreamOperation. The interface will be called ReduceOperation.

In the near future the ReducerStream will be changed to accept a 
ReduceOperation. This will allow users to pass in the specific reduce algorithm 
to the ReducerStream, making the ReducerStream much more powerful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7928) Improve CheckIndex to work against HdfsDirectory

2015-11-24 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated SOLR-7928:
-
Attachment: SOLR-7928.patch

Here's a patch that passes precommit.  Added an explicit 0-arg constructor 
(there are other places in the code that do that for javadoc).  Also added a 
package-info.html.

Let me know if this looks good to you [~mdrob] and I'll commit it.
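
For illustration, the kind of constructor addition mentioned (the class name 
here is hypothetical):

{code}
// hypothetical class name; an explicit no-arg constructor added only so
// javadoc has something to attach to (implicit constructors can't carry docs)
public class CheckHdfsIndexTool {
  /** Creates a new tool instance. */
  public CheckHdfsIndexTool() {
  }
}
{code}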

> Improve CheckIndex to work against HdfsDirectory
> 
>
> Key: SOLR-7928
> URL: https://issues.apache.org/jira/browse/SOLR-7928
> Project: Solr
>  Issue Type: New Feature
>  Components: hdfs
>Reporter: Mike Drob
>Assignee: Gregory Chanan
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-7928.patch, SOLR-7928.patch, SOLR-7928.patch, 
> SOLR-7928.patch, SOLR-7928.patch, SOLR-7928.patch, SOLR-7928.patch, 
> SOLR-7928.patch
>
>
> CheckIndex is very useful for testing an index for corruption. However, it 
> can only work with an index on an FSDirectory, meaning that if you need to 
> check an Hdfs Index, then you have to download it to local disk (which can be 
> very large).
> We should have a way to natively check index on hdfs for corruption.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7197) Can Solr EntityProcessor implement cursors

2015-11-24 Thread Raveendra Yerraguntl (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025104#comment-15025104
 ] 

Raveendra Yerraguntl commented on SOLR-7197:


Will be doing a benchmark test soon, before and after the fix, on a couple of 
million records. 

> Can Solr EntityProcessor implement cursors
> --
>
> Key: SOLR-7197
> URL: https://issues.apache.org/jira/browse/SOLR-7197
> Project: Solr
>  Issue Type: Wish
>  Components: contrib - DataImportHandler
>Affects Versions: 5.0
> Environment: Prod
>Reporter: Raveendra Yerraguntl
> Attachments: DIH_SEP_CURSOR_SOLR_7197.patch, lucene_ant_test_op
>
>
> package org.apache.solr.handler.dataimport;
> class SolrEntityProcessor
>  protected SolrDocumentList doQuery(int start) {
>  
>  SolrQuery solrQuery = new SolrQuery(queryString);
> solrQuery.setRows(rows);
> solrQuery.setStart(start);
> if (fields != null) {
>   for (String field : fields) {
> solrQuery.addField(field);
>   }
> }
> solrQuery.setRequestHandler(requestHandler);
> solrQuery.setFilterQueries(filterQueries);
> solrQuery.setTimeAllowed(timeout * 1000);
> 
> QueryResponse response = null;
> try {
>   response = solrClient.query(solrQuery);
> } catch (SolrServerException e) {
>   if (ABORT.equals(onError)) {
> wrapAndThrow(SEVERE, e);
>   } else if (SKIP.equals(onError)) {
> wrapAndThrow(DataImportHandlerException.SKIP_ROW, e);
>   }
> }
> ---
> If the doQuery variant can be implemented with a cursor, it would help with 
> any heavy lifting (bulk processing) in the entity processor. That really helps.
> If permitted I can contribute the fix. Currently I am using 4.10, see the 
> performance issues, and am planning a workaround. If the cursor is available 
> it really helps. 
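
A hedged sketch of what a cursor-based doQuery loop could look like, reusing 
the names from the snippet above and the public cursorMark API (this is not the 
attached patch):

{code}
// hedged sketch: deep paging with cursorMark instead of start offsets;
// note that setStart() must not be combined with a cursor
solrQuery.setRows(rows);
solrQuery.setSort(SolrQuery.SortClause.asc("id")); // cursors need a total order on the uniqueKey
String cursorMark = CursorMarkParams.CURSOR_MARK_START;
while (true) {
  solrQuery.set(CursorMarkParams.CURSOR_MARK_PARAM, cursorMark);
  QueryResponse response = solrClient.query(solrQuery);
  // ... process response.getResults() ...
  String next = response.getNextCursorMark();
  if (cursorMark.equals(next)) {
    break; // cursor did not advance: no more results
  }
  cursorMark = next;
}
{code}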



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8263) Tlog replication could interfere with the replay of buffered updates

2015-11-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025091#comment-15025091
 ] 

ASF subversion and git services commented on SOLR-8263:
---

Commit 1716233 from [~erickoerickson] in branch 'dev/trunk'
[ https://svn.apache.org/r1716233 ]

SOLR-8263: Reverting commit, missed latest patch from Renaud

> Tlog replication could interfere with the replay of buffered updates
> 
>
> Key: SOLR-8263
> URL: https://issues.apache.org/jira/browse/SOLR-8263
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Renaud Delbru
>Assignee: Erick Erickson
> Attachments: SOLR-8263-trunk-1.patch, SOLR-8263-trunk-2.patch, 
> SOLR-8263-trunk-3.patch
>
>
> The current implementation of the tlog replication might interfere with the 
> replay of the buffered updates. The current tlog replication works as follows:
> 1) Fetch the tlog files from the master
> 2) Reset the update log before switching the tlog directory
> 3) Switch the tlog directory and re-initialise the update log with the new 
> directory.
> Currently there is no logic to keep "buffered updates" while resetting and 
> reinitializing the update log.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8299) ConfigSet DELETE should not allow deletion of a configset that's currently being used

2015-11-24 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025077#comment-15025077
 ] 

Anshum Gupta commented on SOLR-8299:


[~gerlowskija] All the user-specific information should be in the reference 
guide. Other than that, it's just javadocs.
In this particular case, this should go into the ref guide.

> ConfigSet DELETE should not allow deletion of a configset that's currently 
> being used
> ---
>
> Key: SOLR-8299
> URL: https://issues.apache.org/jira/browse/SOLR-8299
> Project: Solr
>  Issue Type: Bug
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-8299.patch, SOLR-8299.patch
>
>
> The ConfigSet DELETE API currently doesn't check if the configuration 
> directory being deleted is being used by an active Collection. We should add 
> a check for the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8299) ConfigSet DELETE should not allow deletion of a configset that's currently being used

2015-11-24 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-8299:
---
Fix Version/s: Trunk
   5.4

> ConfigSet DELETE should not allow deletion of a configset that's currently 
> being used
> ---
>
> Key: SOLR-8299
> URL: https://issues.apache.org/jira/browse/SOLR-8299
> Project: Solr
>  Issue Type: Bug
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-8299.patch, SOLR-8299.patch
>
>
> The ConfigSet DELETE API currently doesn't check if the configuration 
> directory being deleted is being used by an active Collection. We should add 
> a check for the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8299) ConfigSet DELETE should not allow deletion of a configset that's currently being used

2015-11-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025074#comment-15025074
 ] 

ASF subversion and git services commented on SOLR-8299:
---

Commit 1716230 from [~anshumg] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1716230 ]

SOLR-8299: ConfigSet DELETE operation no longer allows deletion of config sets 
that are currently in use by other collections (merge from trunk)

> ConfigSet DELETE should not allow deletion of a configset that's currently 
> being used
> ---
>
> Key: SOLR-8299
> URL: https://issues.apache.org/jira/browse/SOLR-8299
> Project: Solr
>  Issue Type: Bug
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-8299.patch, SOLR-8299.patch
>
>
> The ConfigSet DELETE API currently doesn't check if the configuration 
> directory being deleted is being used by an active Collection. We should add 
> a check for the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7197) Can Solr EntityProcessor implement cursors

2015-11-24 Thread Raveendra Yerraguntl (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raveendra Yerraguntl updated SOLR-7197:
---
Attachment: lucene_ant_test_op
DIH_SEP_CURSOR_SOLR_7197.patch

Attaching the patch file. The 'ant test' run was successful for this test, 
though it failed for the replication test. Attached the redirected output from 
ant test.

> Can Solr EntityProcessor implement cursors
> --
>
> Key: SOLR-7197
> URL: https://issues.apache.org/jira/browse/SOLR-7197
> Project: Solr
>  Issue Type: Wish
>  Components: contrib - DataImportHandler
>Affects Versions: 5.0
> Environment: Prod
>Reporter: Raveendra Yerraguntl
> Attachments: DIH_SEP_CURSOR_SOLR_7197.patch, lucene_ant_test_op
>
>
> package org.apache.solr.handler.dataimport;
> class SolrEntityProcessor
>  protected SolrDocumentList doQuery(int start) {
>  
>  SolrQuery solrQuery = new SolrQuery(queryString);
> solrQuery.setRows(rows);
> solrQuery.setStart(start);
> if (fields != null) {
>   for (String field : fields) {
> solrQuery.addField(field);
>   }
> }
> solrQuery.setRequestHandler(requestHandler);
> solrQuery.setFilterQueries(filterQueries);
> solrQuery.setTimeAllowed(timeout * 1000);
> 
> QueryResponse response = null;
> try {
>   response = solrClient.query(solrQuery);
> } catch (SolrServerException e) {
>   if (ABORT.equals(onError)) {
> wrapAndThrow(SEVERE, e);
>   } else if (SKIP.equals(onError)) {
> wrapAndThrow(DataImportHandlerException.SKIP_ROW, e);
>   }
> }
> ---
> If the doQuery variant can be implemented with a cursor, it would help with 
> any heavy lifting (bulk processing) in the entity processor. That really helps.
> If permitted I can contribute the fix. Currently I am using 4.10, see the 
> performance issues, and am planning a workaround. If the cursor is available 
> it really helps. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8263) Tlog replication could interfere with the replay of buffered updates

2015-11-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025048#comment-15025048
 ] 

ASF subversion and git services commented on SOLR-8263:
---

Commit 1716229 from [~erickoerickson] in branch 'dev/trunk'
[ https://svn.apache.org/r1716229 ]

SOLR-8263: Tlog replication could interfere with the replay of buffered updates

> Tlog replication could interfere with the replay of buffered updates
> 
>
> Key: SOLR-8263
> URL: https://issues.apache.org/jira/browse/SOLR-8263
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Renaud Delbru
>Assignee: Erick Erickson
> Attachments: SOLR-8263-trunk-1.patch, SOLR-8263-trunk-2.patch, 
> SOLR-8263-trunk-3.patch
>
>
> The current implementation of the tlog replication might interfere with the 
> replay of the buffered updates. The current tlog replication works as follows:
> 1) Fetch the tlog files from the master
> 2) Reset the update log before switching the tlog directory
> 3) Switch the tlog directory and re-initialise the update log with the new 
> directory.
> Currently there is no logic to keep "buffered updates" while resetting and 
> reinitializing the update log.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Optimisations proposal for FacetsConfig

2015-11-24 Thread Erick Erickson
Sanne:

Sure, please open a JIRA and add a patch. You'll need to create a user
ID on the JIRA system, but that's a "self-serve" option.

Best,
Erick

On Mon, Nov 23, 2015 at 8:21 AM, Sanne Grinovero
 wrote:
> Hello all,
> I was looking into the source code for
> org.apache.lucene.facet.FacetsConfig as it's being highlighted as a
> hotspot of allocations during a performance analysis session.
>
> Our code was allocating a new instance of FacetsConfig for each
> Document being built; there are several maps being allocated by such
> an instance, both as instance fields and on the hot path of method
> "#build(Document doc)".
>
> My understanding from reading the code is that it's designed to be
> multi-threaded, probably to reuse one instance for a single index?
>
> That would resolve my issue with allocations at instance level, and
> probably also the maps being allocated within the build method as the
> JVM seems to be smart enough to skip those; at least that's my
> impression with a quick experiment.
>
> However, reusing this single instance across all threads would become a
> contention point, as all the getters that read the field configurations are
> synchronized.
> Since the maps being read are actually safe ConcurrentMap instances, I
> see no reason for the "synchronized", so really it just boils down to
> a trivial patch to remove those on the reader methods.
>
> May I open a JIRA and propose a patch for that?
>
> As a second step, I'd also like to see if the build method could be
> short-circuited for a quick return: in case there are no faceted
> fields would be great to just return with the input document right
> away.
>
> Thanks,
> Sanne
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
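
For reference, a minimal sketch of the change Sanne proposes, assuming the 
field-configuration map is a ConcurrentHashMap (names approximated from 
FacetsConfig; the real code may differ):

{code}
// hedged sketch: dropping "synchronized" from a reader method; the map is a
// ConcurrentHashMap, so the monitor adds contention without adding safety
private final ConcurrentHashMap<String, DimConfig> fieldTypes = new ConcurrentHashMap<>();

public DimConfig getDimConfig(String dimName) { // was: public synchronized DimConfig ...
  DimConfig config = fieldTypes.get(dimName);
  return config != null ? config : DEFAULT_DIM_CONFIG;
}
{code}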

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: CDCR is is being developed for SolrCloud not for Master/Slave?

2015-11-24 Thread Erick Erickson
I can also confirm that CDCR is currently in use in a production
environment. Beyond that I cannot comment. Whether this will _ever_
be in 5.x is an open question at this point; it will probably be 6.0
only, as it "fits" better being significant new functionality. Much
depends on how long it'll take to release 6.0.

I expect to close SOLR-6273 today, and commit and close SOLR-8263 on
trunk/6.0 today as well, barring unforeseen issues.

As far as 5x is concerned, I'll supply an "uber-patch" for SOLR-6273
but _not_ commit any changes to the 5x code line. Similarly, I'll make
sure that SOLR-8263 applies to the (patched with SOLR-6273_5x) 5x code
line but _not_ commit it either. I'll compile/test (beast) the
combined patches. These are "as is" for those who really want to jump
on this functionality, and to preserve the port in case we want to
apply this to 5x in the future.



On Mon, Nov 23, 2015 at 9:16 AM, Susheel Kumar  wrote:
> Thanks, Shalin for confirming. Good to know it is already being used and
> looking forward for getting it released soon.
>
> On Mon, Nov 23, 2015 at 12:08 PM, Shalin Shekhar Mangar
>  wrote:
>>
>> Hi Susheel,
>>
>> No, CDCR is a SolrCloud only feature. I've heard that the CDCR patches
>> are in production already at a large company but I don't have the
>> details.
>>
>> On Mon, Nov 23, 2015 at 10:01 PM, Susheel Kumar 
>> wrote:
>> > Thanks, Upayavira for confirming.  Do you know if CDCR is also going to
>> > work
>> > for Master/Slave old architecture if any of the folks are having that in
>> > production?
>> >
>> > On Sun, Nov 22, 2015 at 2:59 PM, Upayavira  wrote:
>> >>
>> >>
>> >>
>> >>
>> >> On Sun, Nov 22, 2015, at 07:51 PM, Susheel Kumar wrote:
>> >>
>> >> Hello,
>> >>
>> >> One of our architect in team mentioned that CDCR is being developed for
>> >> Master/Slave and can't be used for SolrCloud.  I did look the patches
>> >> (Zookeeper being used..)  and it doesn't seems to be the case.
>> >>
>> >> Can someone from dev community confirm that CDCR being developed is for
>> >> SolrCloud and not for Master/Slave architecture ?
>> >>
>> >> Thanks,
>> >> Susheel
>> >>
>> >>
>> >> You are correct - CDCR will be for allowing multiple SolrCloud farms to
>> >> work together.
>> >>
>> >> Upayavira
>> >
>> >
>>
>>
>>
>> --
>> Regards,
>> Shalin Shekhar Mangar.
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8299) ConfigSet DELETE should not allow deletion of a configset that's currently being used

2015-11-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024980#comment-15024980
 ] 

ASF subversion and git services commented on SOLR-8299:
---

Commit 1716223 from [~anshumg] in branch 'dev/trunk'
[ https://svn.apache.org/r1716223 ]

SOLR-8299: ConfigSet DELETE operation no longer allows deletion of config sets 
that are currently in use by other collections

> ConfigSet DELETE should not allow deletion of a configset that's currently 
> being used
> ---
>
> Key: SOLR-8299
> URL: https://issues.apache.org/jira/browse/SOLR-8299
> Project: Solr
>  Issue Type: Bug
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Attachments: SOLR-8299.patch, SOLR-8299.patch
>
>
> The ConfigSet DELETE API currently doesn't check if the configuration 
> directory being deleted is being used by an active Collection. We should add 
> a check for the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 660 - Failure

2015-11-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/660/

1 tests failed.
FAILED:  
org.apache.solr.update.SoftAutoCommitTest.testSoftAndHardCommitMaxTimeMixedAdds

Error Message:
soft530 after hard529 but no hard530: 16841805907278260 !<= 16841805895778190

Stack Trace:
java.lang.AssertionError: soft530 after hard529 but no hard530: 
16841805907278260 !<= 16841805895778190
at 
__randomizedtesting.SeedInfo.seed([30410231102B8B3B:6195FBB1A158BB9C]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.update.SoftAutoCommitTest.testSoftAndHardCommitMaxTimeMixedAdds(SoftAutoCommitTest.java:168)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 9439 lines...]
   [junit4] Suite: org.apache.solr.update.SoftAutoCommitTest
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-trunk-Java8/solr/build/solr

[jira] [Commented] (SOLR-7949) There is an XSS issue in the plugins/stats page of the Admin Web UI.

2015-11-24 Thread Miriam Celi (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024787#comment-15024787
 ] 

Miriam Celi commented on SOLR-7949:
---

Does this issue also affect version 5.3.0?

> There is an XSS issue in the plugins/stats page of the Admin Web UI.
> ---
>
> Key: SOLR-7949
> URL: https://issues.apache.org/jira/browse/SOLR-7949
> Project: Solr
>  Issue Type: Bug
>  Components: web gui
>Affects Versions: 4.9, 4.10.4, 5.2.1
>Reporter: davidchiu
>Assignee: Jan Høydahl
> Fix For: 5.4, 5.3.1, Trunk
>
>
> Open the Solr Admin Web UI, select a core (such as collection1), click 
> "Plugins/stats", and type a URL like 
> "http://127.0.0.1:8983/solr/#/collection1/plugins/cache?entry=score=<img 
> src=1 onerror=alert(1);>" into the browser address bar; you will get an alert box with 
> "1".
> I changed the following code to resolve this problem:
> The original code:
>   for( var i = 0; i < entry_count; i++ )
>   {
> $( 'a[data-bean="' + entries[i] + '"]', frame_element )
>   .parent().addClass( 'expanded' );
>   }
> The Changed code:
>   for( var i = 0; i < entry_count; i++ )
>   {
> $( 'a[data-bean="' + entries[i].esc() + '"]', frame_element )
>   .parent().addClass( 'expanded' );
>   }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-8190) Implement Closeable on TupleStream

2015-11-24 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-8190:


Assignee: Joel Bernstein

> Implement Closeable on TupleStream
> --
>
> Key: SOLR-8190
> URL: https://issues.apache.org/jira/browse/SOLR-8190
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
>Assignee: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-8190.patch
>
>
> Implementing Closeable on TupleStream provides the ability to use 
> try-with-resources 
> (https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html)
>  in tests and in practice. This prevents TupleStreams from being left open 
> when there is an error in the tests.
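
For illustration, the pattern this enables (a hedged sketch; buildStream() is a 
placeholder for however the stream is constructed):

{code}
// hedged sketch: try-with-resources over a TupleStream
try (TupleStream stream = buildStream()) { // buildStream() is a placeholder
  stream.open();
  for (Tuple tuple = stream.read(); !tuple.EOF; tuple = stream.read()) {
    // ... process each tuple ...
  }
} // stream.close() runs even if open() or read() throws
{code}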



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-8179) SQL JDBC - DriverImpl loadParams doesn't support keys with no values in the connection string

2015-11-24 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein closed SOLR-8179.

Resolution: Fixed

> SQL JDBC - DriverImpl loadParams doesn't support keys with no values in the 
> connection string
> -
>
> Key: SOLR-8179
> URL: https://issues.apache.org/jira/browse/SOLR-8179
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
>Assignee: Joel Bernstein
> Attachments: SOLR-8179.patch, SOLR-8179.patch
>
>
> DBVisualizer and SquirrelSQL cause an exception when connecting over JDBC 
> with no username/password in the connection string.
> {code}
> DriverManager.getConnection("jdbc:solr://" + zkHost + 
> "?collection=collection1&username=&password=");
> {code}
> {code}
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
>   at 
> org.apache.solr.client.solrj.io.sql.DriverImpl.loadParams(DriverImpl.java:141)
>   ... 46 more
> {code}
> The loadParams method doesn't support keys with no values.
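
A hedged sketch of the kind of parsing fix this implies (not the committed 
patch):

{code}
// hedged sketch: tolerate "key=" pairs with no value when parsing the query string
Properties params = new Properties();
for (String pair : queryString.split("&")) {
  // without a limit, "password=".split("=") yields a 1-element array and
  // keyValue[1] throws the ArrayIndexOutOfBoundsException shown above;
  // limit 2 preserves the empty value instead
  String[] keyValue = pair.split("=", 2);
  params.put(keyValue[0], keyValue.length == 2 ? keyValue[1] : "");
}
{code}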



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8179) SQL JDBC - DriverImpl loadParams doesn't support keys with no values in the connection string

2015-11-24 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024779#comment-15024779
 ] 

Joel Bernstein commented on SOLR-8179:
--

Thanks Kevin, this is a big improvement!

> SQL JDBC - DriverImpl loadParams doesn't support keys with no values in the 
> connection string
> -
>
> Key: SOLR-8179
> URL: https://issues.apache.org/jira/browse/SOLR-8179
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
>Assignee: Joel Bernstein
> Attachments: SOLR-8179.patch, SOLR-8179.patch
>
>
> DBVisualizer and SquirrelSQL cause an exception when connecting over JDBC 
> with no username/password in the connection string.
> {code}
> DriverManager.getConnection("jdbc:solr://" + zkHost + 
> "?collection=collection1&username=&password=");
> {code}
> {code}
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
>   at 
> org.apache.solr.client.solrj.io.sql.DriverImpl.loadParams(DriverImpl.java:141)
>   ... 46 more
> {code}
> The loadParams method doesn't support keys with no values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8179) SQL JDBC - DriverImpl loadParams doesn't support keys with no values in the connection string

2015-11-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024778#comment-15024778
 ] 

ASF subversion and git services commented on SOLR-8179:
---

Commit 1716198 from [~joel.bernstein] in branch 'dev/trunk'
[ https://svn.apache.org/r1716198 ]

SOLR-8179: SQL JDBC - DriverImpl loadParams doesn't support keys with no values 
in the connection string

> SQL JDBC - DriverImpl loadParams doesn't support keys with no values in the 
> connection string
> -
>
> Key: SOLR-8179
> URL: https://issues.apache.org/jira/browse/SOLR-8179
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
>Assignee: Joel Bernstein
> Attachments: SOLR-8179.patch, SOLR-8179.patch
>
>
> DBVisualizer and SquirrelSQL cause an exception when connecting over JDBC 
> with no username/password in the connection string.
> {code}
> DriverManager.getConnection("jdbc:solr://" + zkHost + 
> "?collection=collection1&username=&password=");
> {code}
> {code}
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
>   at 
> org.apache.solr.client.solrj.io.sql.DriverImpl.loadParams(DriverImpl.java:141)
>   ... 46 more
> {code}
> The loadParams method doesn't support keys with no values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8263) Tlog replication could interfere with the replay of buffered updates

2015-11-24 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024770#comment-15024770
 ] 

Shalin Shekhar Mangar commented on SOLR-8263:
-

+1 LGTM

Thanks Renaud.

> Tlog replication could interfere with the replay of buffered updates
> 
>
> Key: SOLR-8263
> URL: https://issues.apache.org/jira/browse/SOLR-8263
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Renaud Delbru
>Assignee: Erick Erickson
> Attachments: SOLR-8263-trunk-1.patch, SOLR-8263-trunk-2.patch, 
> SOLR-8263-trunk-3.patch
>
>
> The current implementation of the tlog replication might interfere with the 
> replay of the buffered updates. The current tlog replication works as follows:
> 1) Fetch the tlog files from the master
> 2) Reset the update log before switching the tlog directory
> 3) Switch the tlog directory and re-initialise the update log with the new 
> directory.
> Currently there is no logic to keep "buffered updates" while resetting and 
> reinitializing the update log.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8263) Tlog replication could interfere with the replay of buffered updates

2015-11-24 Thread Renaud Delbru (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024736#comment-15024736
 ] 

Renaud Delbru edited comment on SOLR-8263 at 11/24/15 4:07 PM:
---

[~shalinmangar] [~erickerickson] A new patch including the dedup logic for the 
buffered updates. I have launched a few runs without any issue. The changes are 
minimal, but it might be good to beast it one last time?


was (Author: rendel):
[~shalinmangar] [~erickerickson] A new patch including the dedup logic for the 
buffered updates. I have launched a few run without any issue. The change is 
minimal, but it might be good to beast it a last time ?

> Tlog replication could interfere with the replay of buffered updates
> 
>
> Key: SOLR-8263
> URL: https://issues.apache.org/jira/browse/SOLR-8263
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Renaud Delbru
>Assignee: Erick Erickson
> Attachments: SOLR-8263-trunk-1.patch, SOLR-8263-trunk-2.patch, 
> SOLR-8263-trunk-3.patch
>
>
> The current implementation of the tlog replication might interfere with the 
> replay of the buffered updates. The current tlog replication works as follows:
> 1) Fetch the tlog files from the master
> 2) Reset the update log before switching the tlog directory
> 3) Switch the tlog directory and re-initialise the update log with the new 
> directory.
> Currently there is no logic to keep "buffered updates" while resetting and 
> reinitializing the update log.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8263) Tlog replication could interfere with the replay of buffered updates

2015-11-24 Thread Renaud Delbru (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Renaud Delbru updated SOLR-8263:

Attachment: SOLR-8263-trunk-3.patch

[~shalinmangar] [~erickerickson] A new patch including the dedup logic for the 
buffered updates. I have launched a few runs without any issue. The changes are 
minimal, but it might be good to beast it one last time?

> Tlog replication could interfere with the replay of buffered updates
> 
>
> Key: SOLR-8263
> URL: https://issues.apache.org/jira/browse/SOLR-8263
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Renaud Delbru
>Assignee: Erick Erickson
> Attachments: SOLR-8263-trunk-1.patch, SOLR-8263-trunk-2.patch, 
> SOLR-8263-trunk-3.patch
>
>
> The current implementation of the tlog replication might interfere with the 
> replay of the buffered updates. The current tlog replication works as follows:
> 1) Fetch the tlog files from the master
> 2) Reset the update log before switching the tlog directory
> 3) Switch the tlog directory and re-initialise the update log with the new 
> directory.
> Currently there is no logic to keep "buffered updates" while resetting and 
> reinitializing the update log.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6901) Optimize 1D dimensional value indexing

2015-11-24 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-6901.

Resolution: Fixed

> Optimize 1D dimensional value indexing
> --
>
> Key: LUCENE-6901
> URL: https://issues.apache.org/jira/browse/LUCENE-6901
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: Trunk
>
> Attachments: LUCENE-6901-timsort.patch, LUCENE-6901.patch, 
> LUCENE-6901.patch
>
>
> Dimensional values give a smaller index, and faster search times, for 
> indexing ordered byte[] values across one or more dimensions, vs our existing 
> approaches, but the indexing time is substantially slower.
> Since the 1D case is so important/common (numeric fields, range query) I 
> think it's worth optimizing its indexing time.  It should also be possible to 
> optimize the N > 1 dimensions case too, but it's more complex ... we can 
> postpone that.
> So for the 1D case, I changed the merge method to do a merge sort (like 
> postings) of the already sorted segments dimensional values, instead of 
> simply re-indexing all values from the incoming segments, and this was a big 
> speedup.
> I also changed from {{InPlaceMergeSorter}} to {{IntroSorter}} (this is what 
> postings use, and it's faster but still safe) and this was another good 
> speedup, which should also help the > 1D cases.
> Finally, I added a {{BKDReader.verify}} method (currently it's dark: NOT 
> called) that walks the index and then checks that every value in each leaf 
> block does in fact fall within what the index expected/claimed.  This is 
> useful for finding bugs!  Maybe we can cleanly fold it into {{CheckIndex}} 
> somehow later.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6901) Optimize 1D dimensional value indexing

2015-11-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024732#comment-15024732
 ] 

ASF subversion and git services commented on LUCENE-6901:
-

Commit 1716189 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1716189 ]

LUCENE-6901: speed up dimensional values indexing and merging

> Optimize 1D dimensional value indexing
> --
>
> Key: LUCENE-6901
> URL: https://issues.apache.org/jira/browse/LUCENE-6901
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: Trunk
>
> Attachments: LUCENE-6901-timsort.patch, LUCENE-6901.patch, 
> LUCENE-6901.patch
>
>
> Dimensional values give a smaller index, and faster search times, for 
> indexing ordered byte[] values across one or more dimensions, vs our existing 
> approaches, but the indexing time is substantially slower.
> Since the 1D case is so important/common (numeric fields, range query) I 
> think it's worth optimizing its indexing time.  It should also be possible to 
> optimize the N > 1 dimensions case too, but it's more complex ... we can 
> postpone that.
> So for the 1D case, I changed the merge method to do a merge sort (like 
> postings) of the already-sorted segments' dimensional values, instead of 
> simply re-indexing all values from the incoming segments, and this was a big 
> speedup.
> I also changed from {{InPlaceMergeSorter}} to {{IntroSorter}} (this is what 
> postings use, and it's faster but still safe) and this was another good 
> speedup, which should also help the > 1D cases.
> Finally, I added a {{BKDReader.verify}} method (currently it's dark: NOT 
> called) that walks the index and then checks that every value in each leaf 
> block does in fact fall within what the index expected/claimed.  This is 
> useful for finding bugs!  Maybe we can cleanly fold it into {{CheckIndex}} 
> somehow later.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8307) XXE Vulnerability

2015-11-24 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024669#comment-15024669
 ] 

Erik Hatcher edited comment on SOLR-8307 at 11/24/15 3:33 PM:
--

Thanks [~thetaphi].  Documented in CHANGES and committed:
* branch_5x: r1716161 https://svn.apache.org/r1716161
* trunk: r1716160 https://svn.apache.org/r1716160


was (Author: ehatcher):
Thanks [~thetaphi].  Documented in CHANGES and committed.

> XXE Vulnerability
> -
>
> Key: SOLR-8307
> URL: https://issues.apache.org/jira/browse/SOLR-8307
> Project: Solr
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 5.3
>Reporter: Adam Johnson
>Assignee: Erik Hatcher
>Priority: Blocker
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-8307.patch, SOLR-8307.patch
>
>
> Use the drop-down in the left menu to select a core. Use the “Watch Changes” 
> feature under the “Plugins / Stats” option. When submitting the changes, XML 
> is passed in the “stream.body” parameter and is vulnerable to XXE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8033) useless if branch (commented out log.debug in HdfsTransactionLog constructor)

2015-11-24 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024682#comment-15024682
 ] 

Erik Hatcher commented on SOLR-8033:


Oops, wrong JIRA mentioned in my commit message; sorry about that.  It 
was meant to be SOLR-8307.

> useless if branch (commented out log.debug in HdfsTransactionLog constructor)
> -
>
> Key: SOLR-8033
> URL: https://issues.apache.org/jira/browse/SOLR-8033
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 5.0, 5.1
>Reporter: songwanging
>Assignee: Christine Poerschke
>Priority: Minor
>
> In method HdfsTransactionLog() of class HdfsTransactionLog 
> (solr\core\src\java\org\apache\solr\update\HdfsTransactionLog.java)
> The if branch presented in the following code snippet performs no actions; we 
> should either add code to handle this case or delete the if branch outright.
> HdfsTransactionLog(FileSystem fs, Path tlogFile, Collection<String> 
> globalStrings, boolean openExisting) {
>   ...
> try {
>   if (debug) {
> //log.debug("New TransactionLog file=" + tlogFile + ", exists=" + 
> tlogFile.exists() + ", size=" + tlogFile.length() + ", openExisting=" + 
> openExisting);
>   }
> ...
> }
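
One possible resolution, as a sketch only (assuming an SLF4J logger and the 
Hadoop FileSystem API; not a committed fix): guard on the logger's own level 
check and log real values instead of the stale commented-out call:

{code}
// 'log' is assumed to be an SLF4J Logger; 'fs' is the Hadoop FileSystem
// passed to the constructor. tlogFile is a Path, so existence/size checks
// go through fs rather than the old tlogFile.exists()/length() calls.
if (log.isDebugEnabled()) {
  boolean exists = fs.exists(tlogFile);
  long size = exists ? fs.getFileStatus(tlogFile).getLen() : 0L;
  log.debug("New TransactionLog file={}, exists={}, size={}, openExisting={}",
      tlogFile, exists, size, openExisting);
}
{code}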



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8263) Tlog replication could interfere with the replay of buffered updates

2015-11-24 Thread Renaud Delbru (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024675#comment-15024675
 ] 

Renaud Delbru commented on SOLR-8263:
-

[~shalinmangar] Yes, you understood the sequence correctly. To be more precise, 
here is how it works:
1) The tlog files of the leader are downloaded into a temporary directory.
2) After the files have been downloaded properly, a write lock is acquired by 
the IndexFetcher. The original tlog directory is renamed as a backup directory, 
and the temporary directory is renamed as the active tlog directory.
3) The update log is reset with the new active tlog directory. During this 
reset, the recovery info is used to read the backup buffered tlog file, and 
every buffered operation is copied to the new buffered tlog.
4) The write lock is released, and the recovery operation will continue and 
apply the buffered updates.

Indeed, the buffered tlog can contain operations that duplicate those in the 
replica's update log. During the recovery operation, the replica might receive 
from the leader some operations that will be buffered, but they might also be 
present in one of the tlogs downloaded from the leader. Apart from the disk 
space usage of these duplicate operations and the additional network transfer, 
there is no harm, as these duplicate operations will be ignored by the peer 
cluster. We could improve the tlog recovery operation to de-duplicate the 
buffered tlog while copying the buffered updates: check the version of the 
latest operation in the downloaded tlogs, and skip operations from the buffered 
tlog whose version is lower than that latest known version. It should be a 
relatively small patch. I can try to work on that in the next few days and 
submit something, if that's fine with you and [~erickerickson]?
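
As a rough illustration of the de-duplication idea (hypothetical names; the 
real patch would work against the update log and tlog APIs), the copy step 
could skip any buffered operation whose version is at or below the newest 
version found in the downloaded tlogs:

{code}
import java.util.ArrayList;
import java.util.List;

// Illustrative only: Operation and getVersion() are stand-ins, not Solr classes.
interface Operation {
  long getVersion();
}

class BufferedTlogCopy {
  static List<Operation> copySkippingDuplicates(List<Operation> buffered,
                                                long latestDownloadedVersion) {
    List<Operation> copied = new ArrayList<>();
    for (Operation op : buffered) {
      // Operations at or below the latest downloaded version are already in
      // the tlogs fetched from the leader; copying them would only duplicate.
      if (op.getVersion() > latestDownloadedVersion) {
        copied.add(op);
      }
    }
    return copied;
  }
}
{code}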



> Tlog replication could interfere with the replay of buffered updates
> 
>
> Key: SOLR-8263
> URL: https://issues.apache.org/jira/browse/SOLR-8263
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Renaud Delbru
>Assignee: Erick Erickson
> Attachments: SOLR-8263-trunk-1.patch, SOLR-8263-trunk-2.patch
>
>
> The current implementation of the tlog replication might interfere with the 
> replay of the buffered updates. The current tlog replication works as follow:
> 1) Fetch the the tlog files from the master
> 2) reset the update log before switching the tlog directory
> 3) switch the tlog directory and re-initialise the update log with the new 
> directory.
> Currently there is no logic to keep "buffered updates" while resetting and 
> reinitializing the update log.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8333) fix public methods that take/return private/package-private arguments/results

2015-11-24 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024674#comment-15024674
 ] 

Erik Hatcher commented on SOLR-8333:


SOLR-8307 has been resolved (by moving EmptyEntityResolver to a non-overlapping 
package), and thus these errors are no longer reported by documentation-lint.

> fix public methods that take/return private/package-private arguments/results
> -
>
> Key: SOLR-8333
> URL: https://issues.apache.org/jira/browse/SOLR-8333
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
> Attachments: SOLR-8333-ConcurrentLFUCache-protected.patch, 
> SOLR-8333.patch
>
>
> background info: 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201511.mbox/%3Calpine.DEB.2.11.1511231128450.24330@tray%3E
> A commit that added a package to solrj which already existed in solr-core 
> caused the javadoc link checker to uncover at least 4 instances of private or 
> package-private classes being necessary to use public APIs.
> we should fix these instances and any other instances of APIs with similar 
> problems that we can find.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-8307) XXE Vulnerability

2015-11-24 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher resolved SOLR-8307.

Resolution: Fixed

> XXE Vulnerability
> -
>
> Key: SOLR-8307
> URL: https://issues.apache.org/jira/browse/SOLR-8307
> Project: Solr
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 5.3
>Reporter: Adam Johnson
>Assignee: Erik Hatcher
>Priority: Blocker
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-8307.patch, SOLR-8307.patch
>
>
> Use the drop-down in the left menu to select a core. Use the “Watch Changes” 
> feature under the “Plugins / Stats” option. When submitting the changes, XML 
> is passed in the “stream.body” parameter and is vulnerable to XXE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8033) useless if branch (commented out log.debug in HdfsTransactionLog constructor)

2015-11-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024668#comment-15024668
 ] 

ASF subversion and git services commented on SOLR-8033:
---

Commit 1716161 from [~ehatcher] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1716161 ]

SOLR-8033: document the move of EmptyEntityResolver

> useless if branch (commented out log.debug in HdfsTransactionLog constructor)
> -
>
> Key: SOLR-8033
> URL: https://issues.apache.org/jira/browse/SOLR-8033
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 5.0, 5.1
>Reporter: songwanging
>Assignee: Christine Poerschke
>Priority: Minor
>
> In method HdfsTransactionLog() of class HdfsTransactionLog 
> (solr\core\src\java\org\apache\solr\update\HdfsTransactionLog.java)
> The if branch presented in the following code snippet performs no actions; we 
> should either add code to handle this case or delete the if branch outright.
> HdfsTransactionLog(FileSystem fs, Path tlogFile, Collection<String> 
> globalStrings, boolean openExisting) {
>   ...
> try {
>   if (debug) {
> //log.debug("New TransactionLog file=" + tlogFile + ", exists=" + 
> tlogFile.exists() + ", size=" + tlogFile.length() + ", openExisting=" + 
> openExisting);
>   }
> ...
> }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8307) XXE Vulnerability

2015-11-24 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024669#comment-15024669
 ] 

Erik Hatcher commented on SOLR-8307:


Thanks [~thetaphi].  Documented in CHANGES and committed.

> XXE Vulnerability
> -
>
> Key: SOLR-8307
> URL: https://issues.apache.org/jira/browse/SOLR-8307
> Project: Solr
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 5.3
>Reporter: Adam Johnson
>Assignee: Erik Hatcher
>Priority: Blocker
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-8307.patch, SOLR-8307.patch
>
>
> Use the drop-down in the left menu to select a core. Use the “Watch Changes” 
> feature under the “Plugins / Stats” option. When submitting the changes, XML 
> is passed in the “stream.body” parameter and is vulnerable to XXE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8033) useless if branch (commented out log.debug in HdfsTransactionLog constructor)

2015-11-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024667#comment-15024667
 ] 

ASF subversion and git services commented on SOLR-8033:
---

Commit 1716160 from [~ehatcher] in branch 'dev/trunk'
[ https://svn.apache.org/r1716160 ]

SOLR-8033: document the move of EmptyEntityResolver

> useless if branch (commented out log.debug in HdfsTransactionLog constructor)
> -
>
> Key: SOLR-8033
> URL: https://issues.apache.org/jira/browse/SOLR-8033
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 5.0, 5.1
>Reporter: songwanging
>Assignee: Christine Poerschke
>Priority: Minor
>
> In method HdfsTransactionLog() of class HdfsTransactionLog 
> (solr\core\src\java\org\apache\solr\update\HdfsTransactionLog.java)
> The if branch presented in the following code snippet performs no actions; we 
> should either add code to handle this case or delete the if branch outright.
> HdfsTransactionLog(FileSystem fs, Path tlogFile, Collection<String> 
> globalStrings, boolean openExisting) {
>   ...
> try {
>   if (debug) {
> //log.debug("New TransactionLog file=" + tlogFile + ", exists=" + 
> tlogFile.exists() + ", size=" + tlogFile.length() + ", openExisting=" + 
> openExisting);
>   }
> ...
> }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7912) Add support for boost and exclude the queried document id in MoreLikeThis QParser

2015-11-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024666#comment-15024666
 ] 

ASF subversion and git services commented on SOLR-7912:
---

Commit 1716159 from [~anshumg] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1716159 ]

SOLR-7912: Removing complicated query assertions from the MLTQParser cloud test 
as it was getting to be more of a hack. Only test for doc ordering and query 
assertion in simple cases. (merge from trunk)

> Add support for boost and exclude the queried document id in MoreLikeThis 
> QParser
> -
>
> Key: SOLR-7912
> URL: https://issues.apache.org/jira/browse/SOLR-7912
> Project: Solr
>  Issue Type: Improvement
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-7912-test-fix.patch, SOLR-7912.patch, 
> SOLR-7912.patch, SOLR-7912.patch, SOLR-7912.patch, SOLR-7912.patch
>
>
> Continuing from SOLR-7639. We need to support boost, and also exclude input 
> document from returned doc list.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7912) Add support for boost and exclude the queried document id in MoreLikeThis QParser

2015-11-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024663#comment-15024663
 ] 

ASF subversion and git services commented on SOLR-7912:
---

Commit 1716156 from [~anshumg] in branch 'dev/trunk'
[ https://svn.apache.org/r1716156 ]

SOLR-7912: Removing complicated query assertions from the MLTQParser cloud test 
as it was getting to be more of a hack. Only test for doc ordering and query 
assertion in simple cases.

> Add support for boost and exclude the queried document id in MoreLikeThis 
> QParser
> -
>
> Key: SOLR-7912
> URL: https://issues.apache.org/jira/browse/SOLR-7912
> Project: Solr
>  Issue Type: Improvement
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-7912-test-fix.patch, SOLR-7912.patch, 
> SOLR-7912.patch, SOLR-7912.patch, SOLR-7912.patch, SOLR-7912.patch
>
>
> Continuing from SOLR-7639. We need to support boost, and also exclude input 
> document from returned doc list.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8307) XXE Vulnerability

2015-11-24 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024649#comment-15024649
 ] 

Uwe Schindler commented on SOLR-8307:
-

I am fine with that. I don't think we need backwards compatibility.

> XXE Vulnerability
> -
>
> Key: SOLR-8307
> URL: https://issues.apache.org/jira/browse/SOLR-8307
> Project: Solr
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 5.3
>Reporter: Adam Johnson
>Assignee: Erik Hatcher
>Priority: Blocker
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-8307.patch, SOLR-8307.patch
>
>
> Use the drop-down in the left menu to select a core. Use the “Watch Changes” 
> feature under the “Plugins / Stats” option. When submitting the changes, XML 
> is passed in the “stream.body” parameter and is vulnerable to XXE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: lucene-java wiki access

2015-11-24 Thread Erick Erickson
done

On Tue, Nov 24, 2015 at 4:51 AM, Upayavira  wrote:
> Can someone please grant me access to the lucene-java wiki? My username
> should be 'Upayavira'.
>
> Thanks!
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b90) - Build # 15022 - Failure!

2015-11-24 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15022/
Java: 64bit/jdk1.9.0-ea-b90 -XX:-UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SaslZkACLProviderTest

Error Message:
5 threads leaked from SUITE scope at 
org.apache.solr.cloud.SaslZkACLProviderTest: 1) Thread[id=8435, 
name=kdcReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)2) Thread[id=8436, 
name=ou=system.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)3) Thread[id=8437, 
name=changePwdReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)4) Thread[id=8434, 
name=apacheds, state=WAITING, group=TGRP-SaslZkACLProviderTest] at 
java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:516) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)5) Thread[id=8438, 
name=groupCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
 at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.SaslZkACLProviderTest: 
   1) Thread[id=8435, name=kdcReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledTh

[jira] [Commented] (SOLR-8335) HdfsLockFactory does not allow core to come up after a node was killed

2015-11-24 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024615#comment-15024615
 ] 

Mark Miller commented on SOLR-8335:
---

Would not have worked any better on 4.10 - a file in HDFS won't just go away 
because of a crash. Unlock on startup is no real solution - that's saying "make 
the lock factory not really lock", which is even more dangerous on a shared fs 
where more than one Solr instance can easily point at the same index.

Manual removal is the best option without a different lock factory available.
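
For reference, a manual removal with the HDFS CLI would look something like 
this (the exact path is illustrative and depends on solr.data.dir and the core 
name):

{noformat}
bin/hdfs dfs -rm /solr/test/data/index/write.lock
{noformat}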

> HdfsLockFactory does not allow core to come up after a node was killed
> --
>
> Key: SOLR-8335
> URL: https://issues.apache.org/jira/browse/SOLR-8335
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.0, 5.1, 5.2, 5.2.1, 5.3, 5.3.1
>Reporter: Varun Thacker
>
> When using HdfsLockFactory if a node gets killed instead of a graceful 
> shutdown the write.lock file remains in HDFS . The next time you start the 
> node the core doesn't load up because of LockObtainFailedException .
> I was able to reproduce this in all 5.x versions of Solr . The problem wasn't 
> there when I tested it in 4.10.4
> Steps to reproduce this on 5.x
> 1. Create directory in HDFS : {{bin/hdfs dfs -mkdir /solr}}
> 2. Start Solr: {{bin/solr start -Dsolr.directoryFactory=HdfsDirectoryFactory 
> -Dsolr.lock.type=hdfs -Dsolr.data.dir=hdfs://localhost:9000/solr 
> -Dsolr.updatelog=hdfs://localhost:9000/solr}}
> 3. Create core: {{./bin/solr create -c test -n data_driven}}
> 4. Kill solr
> 5. The lock file is there in HDFS and is called {{write.lock}}
> 6. Start Solr again and you get a stack trace like this:
> {code}
> 2015-11-23 13:28:04.287 ERROR (coreLoadExecutor-6-thread-1) [   x:test] 
> o.a.s.c.CoreContainer Error creating core [test]: Index locked for write for 
> core 'test'. Solr now longer supports forceful unlocking via 
> 'unlockOnStartup'. Please verify locks manually!
> org.apache.solr.common.SolrException: Index locked for write for core 'test'. 
> Solr now longer supports forceful unlocking via 'unlockOnStartup'. Please 
> verify locks manually!
> at org.apache.solr.core.SolrCore.(SolrCore.java:820)
> at org.apache.solr.core.SolrCore.(SolrCore.java:659)
> at org.apache.solr.core.CoreContainer.create(CoreContainer.java:723)
> at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:443)
> at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:434)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:210)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.lucene.store.LockObtainFailedException: Index locked 
> for write for core 'test'. Solr now longer supports forceful unlocking via 
> 'unlockOnStartup'. Please verify locks manually!
> at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:528)
> at org.apache.solr.core.SolrCore.(SolrCore.java:761)
> ... 9 more
> 2015-11-23 13:28:04.289 ERROR (coreContainerWorkExecutor-2-thread-1) [   ] 
> o.a.s.c.CoreContainer Error waiting for SolrCore to be created
> java.util.concurrent.ExecutionException: 
> org.apache.solr.common.SolrException: Unable to create core [test]
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at org.apache.solr.core.CoreContainer$2.run(CoreContainer.java:472)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:210)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.solr.common.SolrException: Unable to create core [test]
> at org.apache.solr.core.CoreContainer.create(CoreContainer.java:737)
> at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:443)
> at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:434)
> ... 5 more
> Caused by: org.apache.solr.common.SolrException: Index locked for write for 
> core 'test'. Solr now longer supports forceful unlocking via 
> 'unlockOnStartup'. Please verify locks manually!
> 

[jira] [Commented] (SOLR-8307) XXE Vulnerability

2015-11-24 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024608#comment-15024608
 ] 

Erik Hatcher commented on SOLR-8307:


[~thetaphi] - what do you think about the public EmptyEntityResolver moving to 
another package? Do you think we should create a back-compatible deprecated 
one in the same place? I can't imagine it is being used externally. I'll 
re-open this issue and document the change at least, and add a copy of it back 
if desired.

> XXE Vulnerability
> -
>
> Key: SOLR-8307
> URL: https://issues.apache.org/jira/browse/SOLR-8307
> Project: Solr
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 5.3
>Reporter: Adam Johnson
>Assignee: Erik Hatcher
>Priority: Blocker
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-8307.patch, SOLR-8307.patch
>
>
> Use the drop-down in the left menu to select a core. Use the “Watch Changes” 
> feature under the “Plugins / Stats” option. When submitting the changes, XML 
> is passed in the “stream.body” parameter and is vulnerable to XXE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (SOLR-8307) XXE Vulnerability

2015-11-24 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher reopened SOLR-8307:


> XXE Vulnerability
> -
>
> Key: SOLR-8307
> URL: https://issues.apache.org/jira/browse/SOLR-8307
> Project: Solr
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 5.3
>Reporter: Adam Johnson
>Assignee: Erik Hatcher
>Priority: Blocker
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-8307.patch, SOLR-8307.patch
>
>
> Use the drop-down in the left menu to select a core. Use the “Watch Changes” 
> feature under the “Plugins / Stats” option. When submitting the changes, XML 
> is passed in the “stream.body” parameter and is vulnerable to XXE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-8307) XXE Vulnerability

2015-11-24 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher resolved SOLR-8307.

Resolution: Fixed

> XXE Vulnerability
> -
>
> Key: SOLR-8307
> URL: https://issues.apache.org/jira/browse/SOLR-8307
> Project: Solr
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 5.3
>Reporter: Adam Johnson
>Assignee: Erik Hatcher
>Priority: Blocker
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-8307.patch, SOLR-8307.patch
>
>
> Use the drop-down in the left menu to select a core. Use the “Watch Changes” 
> feature under the “Plugins / Stats” option. When submitting the changes, XML 
> is passed in the “stream.body” parameter and is vulnerable to XXE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Fix heliosearch link for CDCR high level design Solr-6273

2015-11-24 Thread Shalin Shekhar Mangar
Fixed, thanks!

On Tue, Nov 24, 2015 at 7:18 PM, Susheel Kumar  wrote:
> Hi Shalin,
>
> Can we fix/replace the heliosearch link in the SOLR-6273 jira description so
> it refers to the high-level design for CDCR? Currently it seems to be
> broken.
>
> Thanks,
> Susheel
>
>  http://heliosearch.org/solr-cross-data-center-replication/
>
>
> On Mon, Nov 23, 2015 at 12:16 PM, Susheel Kumar 
> wrote:
>>
>> Thanks, Shalin, for confirming. Good to know it is already being used, and
>> looking forward to getting it released soon.
>>
>> On Mon, Nov 23, 2015 at 12:08 PM, Shalin Shekhar Mangar
>>  wrote:
>>>
>>> Hi Susheel,
>>>
>>> No, CDCR is a SolrCloud-only feature. I've heard that the CDCR patches
>>> are in production already at a large company but I don't have the
>>> details.
>>>
>>> On Mon, Nov 23, 2015 at 10:01 PM, Susheel Kumar 
>>> wrote:
>>> > Thanks, Upayavira, for confirming.  Do you know if CDCR is also going to
>>> > work for the old Master/Slave architecture, in case any of the folks are
>>> > running that in production?
>>> >
>>> > On Sun, Nov 22, 2015 at 2:59 PM, Upayavira  wrote:
>>> >>
>>> >>
>>> >>
>>> >>
>>> >> On Sun, Nov 22, 2015, at 07:51 PM, Susheel Kumar wrote:
>>> >>
>>> >> Hello,
>>> >>
>>> >> One of the architects on our team mentioned that CDCR is being developed
>>> >> for Master/Slave and can't be used for SolrCloud.  I did look at the
>>> >> patches (Zookeeper being used..) and it doesn't seem to be the case.
>>> >>
>>> >> Can someone from dev community confirm that CDCR being developed is
>>> >> for
>>> >> SolrCloud and not for Master/Slave architecture ?
>>> >>
>>> >> Thanks,
>>> >> Susheel
>>> >>
>>> >>
>>> >> You are correct - CDCR will be for allowing multiple SolrCloud farms
>>> >> to work together.
>>> >>
>>> >> Upayavira
>>> >
>>> >
>>>
>>>
>>>
>>> --
>>> Regards,
>>> Shalin Shekhar Mangar.
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>>
>



-- 
Regards,
Shalin Shekhar Mangar.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6273) Cross Data Center Replication

2015-11-24 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-6273:

Description: 
This is the master issue for Cross Data Center Replication (CDCR)
described at a high level here: 
http://yonik.com/solr-cross-data-center-replication/

  was:
This is the master issue for Cross Data Center Replication (CDCR)
described at a high level here: 
http://heliosearch.org/solr-cross-data-center-replication/


> Cross Data Center Replication
> -
>
> Key: SOLR-6273
> URL: https://issues.apache.org/jira/browse/SOLR-6273
> Project: Solr
>  Issue Type: New Feature
>Reporter: Yonik Seeley
>Assignee: Erick Erickson
> Attachments: SOLR-6273-trunk-testfix1.patch, 
> SOLR-6273-trunk-testfix2.patch, SOLR-6273-trunk-testfix3.patch, 
> SOLR-6273-trunk-testfix6.patch, SOLR-6273-trunk-testfix7.patch, 
> SOLR-6273-trunk.patch, SOLR-6273-trunk.patch, SOLR-6273.patch, 
> SOLR-6273.patch, SOLR-6273.patch, SOLR-6273.patch, forShalin.patch
>
>
> This is the master issue for Cross Data Center Replication (CDCR)
> described at a high level here: 
> http://yonik.com/solr-cross-data-center-replication/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Fix heliosearch link for CDCR high level design Solr-6273

2015-11-24 Thread Susheel Kumar
Hi Shalin,

Can we fix/replace the heliosearch link in the SOLR-6273 jira description
so it refers to the high-level design for CDCR? Currently it seems to
be broken.

Thanks,
Susheel

 http://heliosearch.org/solr-cross-data-center-replication/


On Mon, Nov 23, 2015 at 12:16 PM, Susheel Kumar 
wrote:

> Thanks, Shalin, for confirming. Good to know it is already being used, and
> looking forward to getting it released soon.
>
> On Mon, Nov 23, 2015 at 12:08 PM, Shalin Shekhar Mangar <
> shalinman...@gmail.com> wrote:
>
>> Hi Susheel,
>>
>> No, CDCR is a SolrCloud-only feature. I've heard that the CDCR patches
>> are in production already at a large company but I don't have the
>> details.
>>
>> On Mon, Nov 23, 2015 at 10:01 PM, Susheel Kumar 
>> wrote:
>> > Thanks, Upayavira, for confirming.  Do you know if CDCR is also going to
>> > work for the old Master/Slave architecture, in case any of the folks are
>> > running that in production?
>> >
>> > On Sun, Nov 22, 2015 at 2:59 PM, Upayavira  wrote:
>> >>
>> >>
>> >>
>> >>
>> >> On Sun, Nov 22, 2015, at 07:51 PM, Susheel Kumar wrote:
>> >>
>> >> Hello,
>> >>
>> >> One of the architects on our team mentioned that CDCR is being developed
>> >> for Master/Slave and can't be used for SolrCloud.  I did look at the
>> >> patches (Zookeeper being used..) and it doesn't seem to be the case.
>> >>
>> >> Can someone from dev community confirm that CDCR being developed is for
>> >> SolrCloud and not for Master/Slave architecture ?
>> >>
>> >> Thanks,
>> >> Susheel
>> >>
>> >>
>> >> You are correct - CDCR will be for allowing multiple SolrCloud farms to
>> >> work together.
>> >>
>> >> Upayavira
>> >
>> >
>>
>>
>>
>> --
>> Regards,
>> Shalin Shekhar Mangar.
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>>
>


[jira] [Comment Edited] (SOLR-8299) ConfigSet DELETE should not allow deletion of a configset that's currently being used

2015-11-24 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024531#comment-15024531
 ] 

Jason Gerlowski edited comment on SOLR-8299 at 11/24/15 1:41 PM:
-

It's got my +1, fwiw.

Though out of curiosity, are there any sort of docs that should be updated for 
this? (Still getting used to where Solr keeps tabs on various things.  So 
that's not a leading question; I actually don't know.)


was (Author: gerlowskija):
It's got my +1, fwiw.

Though out of curiosity, is there any sort of docs that should be updated for 
this. (Still getting used to where Solr keeps track on various things.  So 
that's not a leading question; I actually don't know.)

> ConfigSet DELETE should not allow deletion of a configset that's currently 
> being used
> ---
>
> Key: SOLR-8299
> URL: https://issues.apache.org/jira/browse/SOLR-8299
> Project: Solr
>  Issue Type: Bug
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Attachments: SOLR-8299.patch, SOLR-8299.patch
>
>
> The ConfigSet DELETE API currently doesn't check if the configuration 
> directory being deleted is being used by an active Collection. We should add 
> a check for the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8299) ConfigSet DELETE should not allow deletion of a configset that's currently being used

2015-11-24 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024531#comment-15024531
 ] 

Jason Gerlowski commented on SOLR-8299:
---

It's got my +1, fwiw.

Though out of curiosity, are there any sort of docs that should be updated for 
this? (Still getting used to where Solr keeps track of various things.  So 
that's not a leading question; I actually don't know.)

> ConfigSet DELETE should not allow deletion of a configset that's currently 
> being used
> ---
>
> Key: SOLR-8299
> URL: https://issues.apache.org/jira/browse/SOLR-8299
> Project: Solr
>  Issue Type: Bug
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Attachments: SOLR-8299.patch, SOLR-8299.patch
>
>
> The ConfigSet DELETE API currently doesn't check if the configuration 
> directory being deleted is being used by an active Collection. We should add 
> a check for the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



lucene-java wiki access

2015-11-24 Thread Upayavira
Can someone please grant me access to the lucene-java wiki? My username
should be 'Upayavira'.

Thanks!

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8335) HdfsLockFactory does not allow core to come up after a node was killed

2015-11-24 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024421#comment-15024421
 ] 

Varun Thacker commented on SOLR-8335:
-

Interesting. Do you know why it worked in 4.10.4?

Also, since you are saying this is a known limitation, should we not get rid of 
'unlockOnStartup' in SOLR-6737? I know Lucene has removed it, but does it make 
sense for Solr to keep it and do something with the parameter?

For example, say a user is on HDFS and uses autoAddReplicas. If a process gets 
killed, then unless someone manually removes the lock, autoAddReplicas will 
fail, right?

You mention on SOLR-6737 that it's a really bad idea to have unlockOnStartup. 
That's true, especially after the read-only replicas feature comes in, I guess. 
But for current users, it's either manually removing the lock files or using no 
lock factory at all?

> HdfsLockFactory does not allow core to come up after a node was killed
> --
>
> Key: SOLR-8335
> URL: https://issues.apache.org/jira/browse/SOLR-8335
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.0, 5.1, 5.2, 5.2.1, 5.3, 5.3.1
>Reporter: Varun Thacker
>
> When using HdfsLockFactory if a node gets killed instead of a graceful 
> shutdown the write.lock file remains in HDFS . The next time you start the 
> node the core doesn't load up because of LockObtainFailedException .
> I was able to reproduce this in all 5.x versions of Solr . The problem wasn't 
> there when I tested it in 4.10.4
> Steps to reproduce this on 5.x
> 1. Create directory in HDFS : {{bin/hdfs dfs -mkdir /solr}}
> 2. Start Solr: {{bin/solr start -Dsolr.directoryFactory=HdfsDirectoryFactory 
> -Dsolr.lock.type=hdfs -Dsolr.data.dir=hdfs://localhost:9000/solr 
> -Dsolr.updatelog=hdfs://localhost:9000/solr}}
> 3. Create core: {{./bin/solr create -c test -n data_driven}}
> 4. Kill solr
> 5. The lock file is there in HDFS and is called {{write.lock}}
> 6. Start Solr again and you get a stack trace like this:
> {code}
> 2015-11-23 13:28:04.287 ERROR (coreLoadExecutor-6-thread-1) [   x:test] 
> o.a.s.c.CoreContainer Error creating core [test]: Index locked for write for 
> core 'test'. Solr now longer supports forceful unlocking via 
> 'unlockOnStartup'. Please verify locks manually!
> org.apache.solr.common.SolrException: Index locked for write for core 'test'. 
> Solr now longer supports forceful unlocking via 'unlockOnStartup'. Please 
> verify locks manually!
> at org.apache.solr.core.SolrCore.(SolrCore.java:820)
> at org.apache.solr.core.SolrCore.(SolrCore.java:659)
> at org.apache.solr.core.CoreContainer.create(CoreContainer.java:723)
> at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:443)
> at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:434)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:210)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.lucene.store.LockObtainFailedException: Index locked 
> for write for core 'test'. Solr now longer supports forceful unlocking via 
> 'unlockOnStartup'. Please verify locks manually!
> at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:528)
> at org.apache.solr.core.SolrCore.(SolrCore.java:761)
> ... 9 more
> 2015-11-23 13:28:04.289 ERROR (coreContainerWorkExecutor-2-thread-1) [   ] 
> o.a.s.c.CoreContainer Error waiting for SolrCore to be created
> java.util.concurrent.ExecutionException: 
> org.apache.solr.common.SolrException: Unable to create core [test]
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at org.apache.solr.core.CoreContainer$2.run(CoreContainer.java:472)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:210)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.solr.common.SolrException: Unable to create core [test]
> at org.apache.solr.core.CoreContainer.create(CoreContainer.java:737)
> at org.apache.solr.core.CoreCon

[jira] [Commented] (SOLR-8335) HdfsLockFactory does not allow core to come up after a node was killed

2015-11-24 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024364#comment-15024364
 ] 

Mark Miller commented on SOLR-8335:
---

Not really a bug; same as with simple local fs locks. You have to use a 
native lock factory to avoid this. For HDFS, we have an issue to look at trying 
to use ZK in an alternate factory, though it's not an easy issue. 
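
As a sketch of why ZK is attractive here (illustrative code, not the proposed 
factory): a lock held as a ZooKeeper ephemeral node disappears automatically 
when the owning session dies, so a killed node cannot leave a stale write.lock 
behind the way an HDFS file can.

{code}
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ZkEphemeralLock {
  // Try to take the lock by creating an ephemeral znode at lockPath.
  public static boolean tryLock(ZooKeeper zk, String lockPath)
      throws KeeperException, InterruptedException {
    try {
      zk.create(lockPath, new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
      return true;   // we own the lock until our session ends
    } catch (KeeperException.NodeExistsException e) {
      return false;  // another live session already holds it
    }
  }
}
{code}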

> HdfsLockFactory does not allow core to come up after a node was killed
> --
>
> Key: SOLR-8335
> URL: https://issues.apache.org/jira/browse/SOLR-8335
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.0, 5.1, 5.2, 5.2.1, 5.3, 5.3.1
>Reporter: Varun Thacker
>
> When using HdfsLockFactory if a node gets killed instead of a graceful 
> shutdown the write.lock file remains in HDFS . The next time you start the 
> node the core doesn't load up because of LockObtainFailedException .
> I was able to reproduce this in all 5.x versions of Solr . The problem wasn't 
> there when I tested it in 4.10.4
> Steps to reproduce this on 5.x
> 1. Create directory in HDFS : {{bin/hdfs dfs -mkdir /solr}}
> 2. Start Solr: {{bin/solr start -Dsolr.directoryFactory=HdfsDirectoryFactory 
> -Dsolr.lock.type=hdfs -Dsolr.data.dir=hdfs://localhost:9000/solr 
> -Dsolr.updatelog=hdfs://localhost:9000/solr}}
> 3. Create core: {{./bin/solr create -c test -n data_driven}}
> 4. Kill solr
> 5. The lock file is there in HDFS and is called {{write.lock}}
> 6. Start Solr again and you get a stack trace like this:
> {code}
> 2015-11-23 13:28:04.287 ERROR (coreLoadExecutor-6-thread-1) [   x:test] 
> o.a.s.c.CoreContainer Error creating core [test]: Index locked for write for 
> core 'test'. Solr now longer supports forceful unlocking via 
> 'unlockOnStartup'. Please verify locks manually!
> org.apache.solr.common.SolrException: Index locked for write for core 'test'. 
> Solr now longer supports forceful unlocking via 'unlockOnStartup'. Please 
> verify locks manually!
> at org.apache.solr.core.SolrCore.(SolrCore.java:820)
> at org.apache.solr.core.SolrCore.(SolrCore.java:659)
> at org.apache.solr.core.CoreContainer.create(CoreContainer.java:723)
> at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:443)
> at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:434)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:210)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.lucene.store.LockObtainFailedException: Index locked 
> for write for core 'test'. Solr now longer supports forceful unlocking via 
> 'unlockOnStartup'. Please verify locks manually!
> at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:528)
> at org.apache.solr.core.SolrCore.(SolrCore.java:761)
> ... 9 more
> 2015-11-23 13:28:04.289 ERROR (coreContainerWorkExecutor-2-thread-1) [   ] 
> o.a.s.c.CoreContainer Error waiting for SolrCore to be created
> java.util.concurrent.ExecutionException: 
> org.apache.solr.common.SolrException: Unable to create core [test]
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at org.apache.solr.core.CoreContainer$2.run(CoreContainer.java:472)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:210)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.solr.common.SolrException: Unable to create core [test]
> at org.apache.solr.core.CoreContainer.create(CoreContainer.java:737)
> at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:443)
> at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:434)
> ... 5 more
> Caused by: org.apache.solr.common.SolrException: Index locked for write for 
> core 'test'. Solr now longer supports forceful unlocking via 
> 'unlockOnStartup'. Please verify locks manually!
> at org.apache.solr.core.SolrCore.(SolrCore.java:820)
> at org.apache.solr.core.SolrCore.(SolrCore.java:659)
> at org.apache.so

[JENKINS] Lucene-Solr-5.x-Solaris (multiarch/jdk1.7.0) - Build # 207 - Still Failing!

2015-11-24 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Solaris/207/
Java: multiarch/jdk1.7.0 -d32 -client -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.search.mlt.CloudMLTQParserTest.test

Error Message:
arrays first differed at element [0]; expected:<...lt:bmw lowerfilt:usa[]) 
-id:26)/no_coord> but was:<...lt:bmw lowerfilt:usa[ lowerfilt:328i]) 
-id:26)/no_coord>

Stack Trace:
arrays first differed at element [0]; expected:<...lt:bmw lowerfilt:usa[]) 
-id:26)/no_coord> but was:<...lt:bmw lowerfilt:usa[ lowerfilt:328i]) 
-id:26)/no_coord>
at 
__randomizedtesting.SeedInfo.seed([BD2AB13677BB5E3C:357E8EECD94733C4]:0)
at 
org.junit.internal.ComparisonCriteria.arrayEquals(ComparisonCriteria.java:52)
at org.junit.Assert.internalArrayEquals(Assert.java:416)
at org.junit.Assert.assertArrayEquals(Assert.java:168)
at org.junit.Assert.assertArrayEquals(Assert.java:185)
at 
org.apache.solr.search.mlt.CloudMLTQParserTest.test(CloudMLTQParserTest.java:175)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFail

[jira] [Updated] (SOLR-8145) bin/solr script oom_killer arg incorrect

2015-11-24 Thread Jurian Broertjes (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jurian Broertjes updated SOLR-8145:
---
Attachment: SOLR-8145.patch

Updated patch with proper svn diff instead of just diff

> bin/solr script oom_killer arg incorrect
> 
>
> Key: SOLR-8145
> URL: https://issues.apache.org/jira/browse/SOLR-8145
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 5.2.1
>Reporter: Nate Dire
>Priority: Minor
> Attachments: SOLR-8145.patch, SOLR-8145.patch, SOLR-8145.patch
>
>
> I noticed the oom_killer script wasn't working in our 5.2 deployment.
> In the {{bin/solr}} script, the {{OnOutOfMemoryError}} option is being passed 
> as an arg to the jar rather than to the JVM.  I moved it ahead of {{-jar}} 
> and verified it shows up in the JVM args in the UI.
> {noformat}
># run Solr in the background
> nohup "$JAVA" "${SOLR_START_OPTS[@]}" $SOLR_ADDL_ARGS -jar start.jar \
> "-XX:OnOutOfMemoryError=$SOLR_TIP/bin/oom_solr.sh $SOLR_PORT 
> $SOLR_LOGS_DIR" "${SOLR_JETTY_CONFIG[@]}" \
> {noformat}
> Also, I'm not sure what the {{SOLR_PORT}} and {{SOLR_LOGS_DIR}} args are 
> doing--they don't appear to be positional arguments to the jar.
> Attaching a patch against 5.2.
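
For reference, a sketch of the corrected ordering (same variables as the 
script above; trailing redirection omitted):

{noformat}
# run Solr in the background
nohup "$JAVA" "${SOLR_START_OPTS[@]}" $SOLR_ADDL_ARGS \
    "-XX:OnOutOfMemoryError=$SOLR_TIP/bin/oom_solr.sh $SOLR_PORT $SOLR_LOGS_DIR" \
    -jar start.jar "${SOLR_JETTY_CONFIG[@]}"
{noformat}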



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 862 - Still Failing

2015-11-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/862/

4 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=30579, 
name=zkCallback-1344-thread-11, state=RUNNABLE, 
group=TGRP-CollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=30579, name=zkCallback-1344-thread-11, 
state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CollectionsAPIDistributedZkTest

Error Message:
144 threads leaked from SUITE scope at org.apache.solr.cloud.CollectionsAPIDistributedZkTest:
   1) Thread[id=26498, name=searcherExecutor-4915-thread-1, state=WAITING, group=TGRP-CollectionsAPIDistributedZkTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
   2) Thread[id=30080, name=searcherExecutor-7983-thread-1, state=WAITING, group=TGRP-CollectionsAPIDistributedZkTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
   3) Thread[id=30457, name=searcherExecutor-8170-thread-1, state=WAITING, group=TGRP-CollectionsAPIDistributedZkTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
   4) Thread[id=26353, name=searcherExecutor-4863-thread-1, state=WAITING, group=TGRP-CollectionsAPIDistributedZkTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
   5) Thread[id=28453, name=qtp1872157135-28453, state=TIMED_WAITING, group=TGRP-CollectionsAPIDistributedZkTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:389)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:531)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.access$700(QueuedThreadPool.java:47)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:590)
        at java.lang.Thread.run(Thread.java:745)
   6) Thread[id=26326, name=Thread-19482, state=WAITING, group=TGRP-CollectionsAPIDistributedZkTest]
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:502)
        at 

[jira] [Commented] (LUCENE-6905) GeoPointDistanceQuery using wrapped lon for dateline crossing query

2015-11-24 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024287#comment-15024287
 ] 

Michael McCandless commented on LUCENE-6905:


Thank you for separating out the patches [~nknize].

I don't like that we are changing the error tolerance from 0.5% to 7% when USGS 
says it's supposed to be 0.5%.  Is that intentional?  I mean, is the error in 
these queries really supposed to be so high?

Can we move the lon unwrapping up into the {{GeoPointDistanceQuery.rewrite}} 
method, and add {{centerLon}} as a parameter to {{GeoPointDistanceQueryImpl}}, 
since rewrite "knows" when it's making queries that have a boundary on the date 
line? In fact, it knows which sub-query is "to the left" and which is "to the 
right", so maybe we just inline the logic inside rewrite and remove the public 
{{GeoUtils.unwrapLon}} method?
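
Roughly the shape I have in mind (an illustrative sketch only, with a made-up 
{{newSubQuery}} helper standing in for the {{GeoPointDistanceQueryImpl}} 
constructor; not the actual patch):

{code}
// In GeoPointDistanceQuery.rewrite (sketch): the two SHOULD clauses already
// know which side of the dateline they cover, so rewrite can unwrap the
// center longitude itself and pass it down, keeping GeoUtils.unwrapLon
// out of the public API.
if (maxLon < minLon) { // MBR crosses the dateline
  BooleanQuery.Builder bqb = new BooleanQuery.Builder();
  // western range [-180, maxLon]: if the center sits on the eastern side,
  // shift it by -360 so point-to-center distances stay correct
  double westCenterLon = centerLon > 0 ? centerLon - 360.0 : centerLon;
  bqb.add(newSubQuery(-180.0, maxLon, westCenterLon), BooleanClause.Occur.SHOULD);
  // eastern range [minLon, 180]: symmetric +360 shift when the center is western
  double eastCenterLon = centerLon < 0 ? centerLon + 360.0 : centerLon;
  bqb.add(newSubQuery(minLon, 180.0, eastCenterLon), BooleanClause.Occur.SHOULD);
  return bqb.build();
}
return newSubQuery(minLon, maxLon, centerLon);
{code}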


> GeoPointDistanceQuery using wrapped lon for dateline crossing query
> ---
>
> Key: LUCENE-6905
> URL: https://issues.apache.org/jira/browse/LUCENE-6905
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Nicholas Knize
> Fix For: Trunk, 6.0, 5.4
>
> Attachments: LUCENE-6905.patch, LUCENE-6905.patch, LUCENE-6905.patch
>
>
> GeoPointDistanceQuery handles dateline crossing by splitting the Minimum 
> Bounding Rectangle (MBR) into east/west ranges and rewriting to a Boolean 
> SHOULD. Post-filtering is accomplished by calculating the distance from the 
> center point to the candidate point field. Unfortunately, the center point is 
> wrapped such that calculating the closest point on the "circle" from an 
> eastern point to a western MBR produces incorrect results, causing false 
> negatives in the range creation. This was caught by a jenkins failure and 
> reproduced in 2 places: {{GeoPointDistanceTermsEnum}} and {{TestGeoRelations}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8145) bin/solr script oom_killer arg incorrect

2015-11-24 Thread Jurian Broertjes (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jurian Broertjes updated SOLR-8145:
---
Attachment: SOLR-8145.patch

SOLR_PORT and SOLR_LOGS_DIR are arguments for the oom_solr.sh script and are 
required for proper OOM handling. I've updated your patch and verified that 
it's working now.

> bin/solr script oom_killer arg incorrect
> 
>
> Key: SOLR-8145
> URL: https://issues.apache.org/jira/browse/SOLR-8145
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 5.2.1
>Reporter: Nate Dire
>Priority: Minor
> Attachments: SOLR-8145.patch, SOLR-8145.patch
>
>
> I noticed the oom_killer script wasn't working in our 5.2 deployment.
> In the {{bin/solr}} script, the {{OnOutOfMemoryError}} option is being passed 
> as an arg to the jar rather than to the JVM.  I moved it ahead of {{-jar}} 
> and verified it shows up in the JVM args in the UI.
> {noformat}
># run Solr in the background
> nohup "$JAVA" "${SOLR_START_OPTS[@]}" $SOLR_ADDL_ARGS -jar start.jar \
>     "-XX:OnOutOfMemoryError=$SOLR_TIP/bin/oom_solr.sh $SOLR_PORT $SOLR_LOGS_DIR" "${SOLR_JETTY_CONFIG[@]}" \
> {noformat}
> Also, I'm not sure what the {{SOLR_PORT}} and {{SOLR_LOGS_DIR}} args are 
> doing--they don't appear to be positional arguments to the jar.
> Attaching a patch against 5.2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-trunk-Linux (32bit/jdk1.9.0-ea-b90) - Build # 15020 - Failure!

2015-11-24 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15020/
Java: 32bit/jdk1.9.0-ea-b90 -server -XX:+UseSerialGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SaslZkACLProviderTest

Error Message:
5 threads leaked from SUITE scope at org.apache.solr.cloud.SaslZkACLProviderTest:
   1) Thread[id=1326, name=changePwdReplayCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
        at java.lang.Thread.run(Thread.java:747)
   2) Thread[id=1323, name=apacheds, state=WAITING, group=TGRP-SaslZkACLProviderTest]
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:516)
        at java.util.TimerThread.mainLoop(Timer.java:526)
        at java.util.TimerThread.run(Timer.java:505)
   3) Thread[id=1327, name=kdcReplayCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
        at java.lang.Thread.run(Thread.java:747)
   4) Thread[id=1324, name=ou=system.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
        at java.lang.Thread.run(Thread.java:747)
   5) Thread[id=1325, name=groupCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
        at java.lang.Thread.run(Thread.java:747)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.SaslZkACLProviderTest: 
   1) Thread[id=1326, name=changePwdReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExe

[jira] [Updated] (SOLR-8336) CoreDescriptor instance directory should be a Path, not a String

2015-11-24 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated SOLR-8336:

Attachment: SOLR-8336.patch

Patch.

This also moves all core-creation logic out of CoreAdminHandler and into 
CoreContainer, so that CAH now just translates query parameters into POJOs.

One thing we might want to consider in another issue is removing (in 6.0) the 
ability to specify arbitrary instance directories for cores.  This can already 
break core discovery, and isn't really necessary with configsets and arbitrary 
data directories.
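
Rough shape of the Path change (names approximate, not the exact patch):

{code}
// CoreDescriptor keeps a java.nio.file.Path instead of a String, so callers
// get resolve()/normalize() instead of ad-hoc string concatenation:
private final Path instanceDir;

public Path getInstanceDir() {
  return instanceDir;
}

// e.g. locating core.properties no longer needs new File(dir, name):
Path coreProps = coreDescriptor.getInstanceDir().resolve("core.properties");
{code}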

> CoreDescriptor instance directory should be a Path, not a String
> 
>
> Key: SOLR-8336
> URL: https://issues.apache.org/jira/browse/SOLR-8336
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-8336.patch
>
>
> Next step in SOLR-8282



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8336) CoreDescriptor instance directory should be a Path, not a String

2015-11-24 Thread Alan Woodward (JIRA)
Alan Woodward created SOLR-8336:
---

 Summary: CoreDescriptor instance directory should be a Path, not a 
String
 Key: SOLR-8336
 URL: https://issues.apache.org/jira/browse/SOLR-8336
 Project: Solr
  Issue Type: Improvement
Reporter: Alan Woodward
Assignee: Alan Woodward


Next step in SOLR-8282



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6900) Grouping sortWithinGroup should use Sort.RELEVANCE to indicate that, not null

2015-11-24 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024205#comment-15024205
 ] 

Christine Poerschke commented on LUCENE-6900:
-

Patch looks good to me. Two minor comments (see the sketch below for the first):
* in the {{AbstractSecondPassGroupingCollector}} constructor you replaced a 
{{size() == 0}} check with {{isEmpty()}}; perhaps the wording of the exception on 
the following line could also be changed to say "empty" rather than "size==0"
* in {{BlockGroupingCollector}}, around line 235, there's an existing TODO 
comment mentioning null as meaning "by relevance"
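
For the first point, something along these lines is what I mean (sketch only; 
the real constructor arguments and message will differ):

{code}
// Sketch: with Sort.RELEVANCE as the explicit sentinel, null can be rejected
// up front, and the empty check reads naturally with isEmpty().
if (groups == null || groups.isEmpty()) {
  throw new IllegalArgumentException("no groups to collect (groups is empty)");
}
// java.util.Objects
this.withinGroupSort = Objects.requireNonNull(withinGroupSort,
    "withinGroupSort must not be null; use Sort.RELEVANCE for relevance order");
{code}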



> Grouping sortWithinGroup should use Sort.RELEVANCE to indicate that, not null
> -
>
> Key: LUCENE-6900
> URL: https://issues.apache.org/jira/browse/LUCENE-6900
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/grouping
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Attachments: LUCENE_6900.patch
>
>
> In AbstractSecondPassGroupingCollector, {{withinGroupSort}} uses a value of 
> null to indicate a relevance sort.  I think it's nicer to use Sort.RELEVANCE 
> for this -- after all it's how the {{groupSort}} variable is handled.  This 
> choice is also seen in GroupingSearch; likely some other collaborators too.
> [~martijn.v.groningen] is there some wisdom in the current choice that 
> escapes me?  If not I'll post a patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8326) Adding read restriction to BasicAuth + RuleBased authorization causes issue with replication

2015-11-24 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-8326:
-
Attachment: SOLR-8326.patch

If there is an error in {{PKIAuthenticationFilter}}, the request does not really 
have to fail; it can proceed as if it were unauthenticated.
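
i.e., something along these lines (a sketch of the intent, not the actual 
patch; {{decipherHeader}} and {{wrapWithPrincipal}} are made-up helper names):

{code}
// Sketch for PKIAuthenticationFilter: a malformed/expired PKI header should
// not abort the request. Drop the principal, log, and let the request
// continue unauthenticated so the authorization rules decide what an
// anonymous request may do.
Principal principal = null;
try {
  principal = decipherHeader(request); // made-up helper: validates the PKI header
} catch (Exception e) {
  log.warn("Could not validate PKI header, continuing as unauthenticated", e);
}
filterChain.doFilter(
    principal == null ? request : wrapWithPrincipal(request, principal), // made-up wrapper
    response);
{code}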

> Adding read restriction to BasicAuth + RuleBased authorization causes issue 
> with replication
> 
>
> Key: SOLR-8326
> URL: https://issues.apache.org/jira/browse/SOLR-8326
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.3, 5.3.1
>Reporter: Anshum Gupta
>Assignee: Noble Paul
>Priority: Blocker
> Fix For: 5.4
>
> Attachments: SOLR-8326.patch
>
>
> This was reported on the mailing list:
> https://www.mail-archive.com/solr-user@lucene.apache.org/msg115921.html
> I tested it out as follows to confirm that adding a 'read' rule causes 
> replication to break. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7912) Add support for boost and exclude the queried document id in MoreLikeThis QParser

2015-11-24 Thread Jens Wille (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024083#comment-15024083
 ] 

Jens Wille commented on SOLR-7912:
--

Thank you, Anshum.

As for the test: Under what circumstances can {{parsedquery}} be a string 
instead of an array? I wasn't able to make that part fail.

> Add support for boost and exclude the queried document id in MoreLikeThis 
> QParser
> -
>
> Key: SOLR-7912
> URL: https://issues.apache.org/jira/browse/SOLR-7912
> Project: Solr
>  Issue Type: Improvement
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-7912-test-fix.patch, SOLR-7912.patch, 
> SOLR-7912.patch, SOLR-7912.patch, SOLR-7912.patch, SOLR-7912.patch
>
>
> Continuing from SOLR-7639. We need to support boost, and also exclude input 
> document from returned doc list.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7867) implicit sharded, facet grouping problem with multivalued string field starting with digits

2015-11-24 Thread Gürkan Vural (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024046#comment-15024046
 ] 

Gürkan Vural commented on SOLR-7867:


Any updates on this bug?

> implicit sharded, facet grouping problem with multivalued string field 
> starting with digits
> ---
>
> Key: SOLR-7867
> URL: https://issues.apache.org/jira/browse/SOLR-7867
> Project: Solr
>  Issue Type: Bug
>  Components: faceting, SolrCloud
>Affects Versions: 5.2
> Environment: 3.13.0-48-generic #80-Ubuntu SMP x86_64 GNU/Linux
> java version "1.7.0_80"
> Java(TM) SE Runtime Environment (build 1.7.0_80-b15)
> Java HotSpot(TM) 64-Bit Server VM (build 24.80-b11, mixed mode)
>Reporter: Umut Erogul
>  Labels: docValues, facet, group, sharding
> Attachments: DocValuesException.PNG, ErrorReadingDocValues.PNG
>
>
> related parts @ schema.xml:
> {code}
> <field name="keyword_ss" ... docValues="true" multiValued="true"/>
> <field name="author_s" ... docValues="true"/>
> {code}
> every document has valid author_s and keyword_ss fields;
> we can run successful facet group queries on a single-node, single-collection 
> solr-4.9.0 server
> {code}
> q: *:* fq: keyword_ss:3m
> facet=true&facet.field=keyword_ss&group=true&group.field=author_s&group.facet=true
> {code}
> when querying on a solr-5.2.0 server in an implicit sharded environment with:
> {code}
> <field ... required="true"/>
> {code}
> with example shard names affinity1, affinity2, affinity3, affinity4
> the same query on the same documents gets:
> {code}
> ERROR - 2015-08-04 08:15:15.222; [document affinity3 core_node32 
> document_affinity3_replica2] org.apache.solr.common.SolrException; 
> org.apache.solr.common.SolrException: Exception during facet.field: keyword_ss
> at org.apache.solr.request.SimpleFacets$3.call(SimpleFacets.java:632)
> at org.apache.solr.request.SimpleFacets$3.call(SimpleFacets.java:617)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> org.apache.solr.request.SimpleFacets$2.execute(SimpleFacets.java:571)
> at 
> org.apache.solr.request.SimpleFacets.getFacetFieldCounts(SimpleFacets.java:642)
> ...
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.ArrayIndexOutOfBoundsException
> at 
> org.apache.lucene.codecs.lucene50.Lucene50DocValuesProducer$CompressedBinaryDocValues$CompressedBinaryTermsEnum.readTerm(Lucene50DocValuesProducer.java:1008)
> at 
> org.apache.lucene.codecs.lucene50.Lucene50DocValuesProducer$CompressedBinaryDocValues$CompressedBinaryTermsEnum.next(Lucene50DocValuesProducer.java:1026)
> at 
> org.apache.lucene.search.grouping.term.TermGroupFacetCollector$MV$SegmentResult.nextTerm(TermGroupFacetCollector.java:373)
> at 
> org.apache.lucene.search.grouping.AbstractGroupFacetCollector.mergeSegmentResults(AbstractGroupFacetCollector.java:91)
> at 
> org.apache.solr.request.SimpleFacets.getGroupedCounts(SimpleFacets.java:541)
> at 
> org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:463)
> at 
> org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:386)
> at org.apache.solr.request.SimpleFacets$3.call(SimpleFacets.java:626)
> ... 33 more
> {code}
> all the problematic queries are caused by strings starting with digits 
> ("3m", "8 saniye", "2 broke girls", "1v1y");
> there are some strings for which the query does work ("24", "90+", "45 dakika")
> we do not observe the problem when querying with 
> -keyword_ss:(0-9)*
> updating the problematic documents (a small subset of keyword_ss:(0-9)*) 
> fixes the query, 
> but we cannot find an easy way to identify the problematic documents
> there are around 400m docs, separated across 28 shards; 
> -keyword_ss:(0-9)* matches 97% of documents



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org