Re: SIGSEGV indexing jcc-compiled module in IDEA PyCharm

2013-05-28 Thread Andi Vajda


On Tue, 28 May 2013, Barry Wark wrote:


Hi all,

This is an edge case, I realize, but thought I'd throw it out there in case
anyone has come across a solution.

I'm using IDEA's PyCharm IDE (v2.7). The project virtualenv contains a
jcc-compiled module (which the project uses). PyCharm's indexer crashes


Which project ? yours or PyCharm-the-project ?
What version of JCC was this module compiled with ?


when indexing this module with the crash report below. When running the
project's unit tests in PyCharm, this jcc-compiled module is imported (and
functions) without issue. PyCharm is a Java app, and I'm sure it's doing
some Java-Python bridging as well


If PyCharm is a Java app, what kind of python bridging is it doing ? And how 
does that involve JCC ? I'm assuming that if PyCharm is a Java module, its 
indexing would be implemented in Java too ?



, so it's possible there's a conflict that
is the root of this crash. If so, I'll gladly file this as a PyCharm issue,
but thought I'd run this by the JCC gurus in case they recognize what's
going on. I've never seen the PyCharm indexer crash before on modules that
don't use jcc.


What version(s) of JCC are involved here ?

Andi..



Thanks,
Barry


The Crash Log:

Process: python [85552]
Path:/Users/USER/*/python
Identifier:  python
Version: 60.3
Code Type:   X86-64 (Native)
Parent Process:  pycharm [85481]
User ID: 501

Date/Time:   2013-05-28 13:35:21.728 -0400
OS Version:  Mac OS X 10.8.3 (12D78)
Report Version:  10
Sleep/Wake UUID: 45FF7EDE-FE31-4248-B03D-1EE79118E02F

Interval Since Last Report:  36408 sec
Crashes Since Last Report:   3
Per-App Crashes Since Last Report:   3
Anonymous UUID:  5340B35B-8410-1D7A-63C1-2E05A95E2522

Crashed Thread:  0  Dispatch queue: com.apple.main-thread

Exception Type:  EXC_BAD_ACCESS (SIGSEGV)
Exception Codes: KERN_INVALID_ADDRESS at 0x544857bc

VM Regions Near 0x544857bc:
--
   __TEXT 000104484000-000104485000 [4K]
r-x/rwx SM=COW  /Users/USER/*

Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0   _ovation_api.so   0x00010509e6c5 JArray<unsigned
char>::JArray(long) + 37
1   _ovation_api.so   0x000105095250 _jclass*
initializeClass<unsigned char>(bool) + 42
2   _ovation_api.so   0x0001050a290e
JCCEnv::getClass(_jclass* (*)(bool)) const + 18
3   _ovation_api.so   0x0001050a6dd9
t_descriptor___get__(t_descriptor*, _object*, _object*) + 66
4   org.python.python 0x0001044a13b1 0x10448f000 + 74673
5   org.python.python 0x0001044a85a9 PyEval_EvalFrameEx +
9244
6   org.python.python 0x0001044a6147 PyEval_EvalCodeEx +
1934
7   org.python.python 0x0001044ac8df 0x10448f000 + 121055
8   org.python.python 0x0001044a863a PyEval_EvalFrameEx +
9389
9   org.python.python 0x0001044ac869 0x10448f000 + 120937
10  org.python.python 0x0001044a863a PyEval_EvalFrameEx +
9389
11  org.python.python 0x0001044ac869 0x10448f000 + 120937
12  org.python.python 0x0001044a863a PyEval_EvalFrameEx +
9389
13  org.python.python 0x0001044a6147 PyEval_EvalCodeEx +
1934
14  org.python.python 0x0001044ac8df 0x10448f000 + 121055
15  org.python.python 0x0001044a863a PyEval_EvalFrameEx +
9389
16  org.python.python 0x0001044a6147 PyEval_EvalCodeEx +
1934
17  org.python.python 0x0001044a59b3 PyEval_EvalCode + 54
18  org.python.python 0x0001044e1c70 0x10448f000 + 339056
19  org.python.python 0x0001044e1d3c PyRun_FileExFlags + 165
20  org.python.python 0x0001044e1726
PyRun_SimpleFileExFlags + 410
21  org.python.python 0x000104505e27 Py_Main + 2715
22  libdyld.dylib 0x7fff907047e1 start + 1

Thread 0 crashed with X86 Thread State (64-bit):
 rax: 0x7fff78dee180  rbx: 0x7fff5b77a768  rcx: 0x54485244
rdx: 0x0001053c55b0
 rdi: 0x7fff78dee180  rsi: 0x  rbp: 0x7fff5b77a750
rsp: 0x7fff5b77a740
  r8: 0x7fff5b77a838   r9: 0x7fff5b77a828  r10: 0x0002
r11: 0x0003
 r12: 0x00010465c320  r13: 0x000104bd37a0  r14: 0x
r15: 0x7fea32eb2880
 rip: 0x00010509e6c5  rfl: 0x00010246  cr2: 0x544857bc
Logical CPU: 1

Binary Images:
  0x104484000 -0x104484fff +python (60.3)
A3CE5618-7FE0-3307-B2C1-DE2661C936B2 /Users/USER/*/python
  0x10448f000 -0x10459cfff  org.python.python (2.7.2 - 2.7.2)
E7F3EED1-E55D-32AF-9649-77C814693F6A
/System/Library/Frameworks/Python.framework/Versions/2.7/Python
  0x104b1f000 -0x104b22fff +strop.so (60.4)
282D8F1C-D709-339B-86E2-CE318F0E28E6 /Users/USER/*/strop.so
  0x104b56000 -0x104b59fff 

Re: SIGSEGV indexing jcc-compiled module in IDEA PyCharm

2013-05-28 Thread Barry Wark
On Tue, May 28, 2013 at 11:09 PM, Andi Vajda va...@apache.org wrote:


 On Tue, 28 May 2013, Barry Wark wrote:

  Hi all,

 This is an edge case, I realize, but thought I'd throw it out there in
 case
 anyone has come across a solution.

 I'm using IDEA's PyCharm IDE (v2.7). The project virtualenv contains a
 jcc-compiled module (which the project uses). PyCharm's indexer crashes


 Which project ? yours or PyCharm-the-project ?
 What version of JCC was this module compiled with ?


The project is mine, a python wrapper around Physion's Ovation API. We're
using JCC 2.16. PyCharm is IDEA's Python IDE (
http://www.jetbrains.com/pycharm/).




  when indexing this module with the crash report below. When running the
 project's unit tests in PyCharm, this jcc-compiled module is imported (and
 functions) without issue. PyCharm is a Java app, and I'm sure it's doing
 some Java-Python bridging as well


 If PyCharm is a Java app, what kind of python bridging is it doing ? And
 how does that involve JCC ? I'm assuming that if PyCharm is a Java module,
 its indexing would be implemented in Java too ?


I don't really know how PyCharm handles Java/Python bridging. PyCharm is
built on IDEA's (Java) IDE framework, and it works with Python code. It's
purely speculation on my part that PyCharm's Java/Python bridging (if any)
is involved here.




  , so it's possible there's a conflict that
 is the root of this crash. If so, I'll gladly file this as a PyCharm
 issue,
 but thought I'd run this by the JCC gurus in case they recognize what's
 going on. I've never seen the PyCharm indexer crash before on modules that
 don't use jcc.


 What version(s) of JCC are involved here ?


2.16 on OS X 10.8, Python 2.7.

Thanks,
Barry





[jira] [Comment Edited] (LUCENE-5015) Unexpected performance difference between SamplingAccumulator and StandardFacetAccumulator

2013-05-28 Thread Rob Audenaerde (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13668134#comment-13668134
 ] 

Rob Audenaerde edited comment on LUCENE-5015 at 5/28/13 7:09 AM:
-

Hi all, thanks for all the progress. 

I will try to build a Lucene with the latest patch and give it a go, but I'm 
not (yet) very familiar with git, so it might take a while.. :)

(do I take the 4.3 release version? or is there a 4.3 development branch where 
the patch has to be applied?)

  was (Author: robau):
Hi all, thanks for all the progress. 

I will try to build a Lucene with the latests patch and give it a go, but I'm 
not (yet) very familiar with git, so it might take a while.. :)
  
 Unexpected performance difference between SamplingAccumulator and 
 StandardFacetAccumulator
 --

 Key: LUCENE-5015
 URL: https://issues.apache.org/jira/browse/LUCENE-5015
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/facet
Affects Versions: 4.3
Reporter: Rob Audenaerde
Assignee: Shai Erera
Priority: Minor
 Attachments: LUCENE-5015.patch, LUCENE-5015.patch, LUCENE-5015.patch, 
 LUCENE-5015.patch, LUCENE-5015.patch


 I have an unexpected performance difference between the SamplingAccumulator 
 and the StandardFacetAccumulator. 
 The case is an index with about 5M documents and each document containing 
 about 10 fields. I created a facet on each of those fields. When searching to 
 retrieve facet-counts (using 1 CountFacetRequest), the SamplingAccumulator is 
 about twice as fast as the StandardFacetAccumulator. This is expected and a 
 nice speed-up. 
 However, when I use more CountFacetRequests to retrieve facet-counts for more 
 than one field, the speed of the SamplingAccumulator decreases, to the point 
 where the StandardFacetAccumulator is faster. 
 {noformat} 
 FacetRequests  SamplingStandard
  1   391 ms 1100 ms
  2   531 ms 1095 ms 
  3   948 ms 1108 ms
  4  1400 ms 1110 ms
  5  1901 ms 1102 ms
 {noformat} 
 Is this behaviour normal? I did not expect it, as the SamplingAccumulator 
 needs to do less work? 
 Some code to show what I do:
 {code}
   searcher.search( facetsQuery, facetsCollector );
   final List<FacetResult> collectedFacets = 
 facetsCollector.getFacetResults();
 {code}
 {code}
 final FacetSearchParams facetSearchParams = new FacetSearchParams( 
 facetRequests );
 FacetsCollector facetsCollector;
 if ( isSampled )
 {
   facetsCollector =
   FacetsCollector.create( new SamplingAccumulator( new 
 RandomSampler(), facetSearchParams, searcher.getIndexReader(), taxo ) );
 }
 else
 {
   facetsCollector = FacetsCollector.create( FacetsAccumulator.create( 
 facetSearchParams, searcher.getIndexReader(), taxo ) );
 }
 {code}
   

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5015) Unexpected performance difference between SamplingAccumulator and StandardFacetAccumulator

2013-05-28 Thread Rob Audenaerde (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13668134#comment-13668134
 ] 

Rob Audenaerde commented on LUCENE-5015:


Hi all, thanks for all the progress. 

I will try to build a Lucene with the latest patch and give it a go, but I'm 
not (yet) very familiar with git, so it might take a while.. :)




[jira] [Comment Edited] (LUCENE-5015) Unexpected performance difference between SamplingAccumulator and StandardFacetAccumulator

2013-05-28 Thread Rob Audenaerde (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13668134#comment-13668134
 ] 

Rob Audenaerde edited comment on LUCENE-5015 at 5/28/13 7:11 AM:
-

Hi all, thanks for all the progress. 

I will try to build a Lucene with the latest patch and give it a go.. :)

(do I take the 4.3 release version? or is there a 4.3 development branch where 
the patch has to be applied?)

  was (Author: robau):
Hi all, thanks for all the progress. 

I will try to build a Lucene with the latests patch and give it a go, but I'm 
not (yet) very familiar with git, so it might take a while.. :)

(do I take the 4.3 release version? or is there a 4.3 development branch where 
the patch has to be applied?)
  



[jira] [Commented] (LUCENE-5015) Unexpected performance difference between SamplingAccumulator and StandardFacetAccumulator

2013-05-28 Thread Rob Audenaerde (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13668143#comment-13668143
 ] 

Rob Audenaerde commented on LUCENE-5015:


I took the revision number that was in the patch file and checked that out:
 
 svn checkout http://svn.apache.org/repos/asf/lucene/dev/trunk@1486401 .

After installing Ivy, I'm now building Lucene myself for the first time.




[jira] [Commented] (LUCENE-5015) Unexpected performance difference between SamplingAccumulator and StandardFacetAccumulator

2013-05-28 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13668151#comment-13668151
 ] 

Shai Erera commented on LUCENE-5015:


Rob, you don't need to build Lucene to try what Gilad suggested; just modify 
your search code to disable complements. The problem is that if complements 
indeed kick in, and from the setup you describe it seems that they do because 
you search with MADQ (a MatchAllDocsQuery), then sampling isn't done at all, 
yet the accumulator still corrects the counts.

After you try it, we can tell whether the performance overhead is indeed caused 
by complements or by correcting the counts. In either case, I think it will 
be good to open up the SampleFixer.
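For reference, disabling complements on the accumulator can be sketched roughly as below. This is an assumption-laden sketch, not Lucene's confirmed API: the method and constant names (setComplementThreshold, DISABLE_COMPLEMENT) should be checked against the 4.3 facet module before use.

{code}
// Sketch only: assumes StandardFacetsAccumulator exposes setComplementThreshold
// and a DISABLE_COMPLEMENT constant in your Lucene version.
SamplingAccumulator acc = new SamplingAccumulator(
    new RandomSampler(), facetSearchParams, searcher.getIndexReader(), taxo );
acc.setComplementThreshold( StandardFacetsAccumulator.DISABLE_COMPLEMENT );
FacetsCollector facetsCollector = FacetsCollector.create( acc );
{code}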




[jira] [Created] (SOLR-4865) Improve UIMA UpdateRequestProcessor logging

2013-05-28 Thread Tommaso Teofili (JIRA)
Tommaso Teofili created SOLR-4865:
-

 Summary: Improve UIMA UpdateRequestProcessor logging
 Key: SOLR-4865
 URL: https://issues.apache.org/jira/browse/SOLR-4865
 Project: Solr
  Issue Type: Improvement
  Components: contrib - UIMA
Reporter: Tommaso Teofili
Priority: Trivial
 Fix For: 5.0, 4.4


UIMA UpdateRequestProcessor is sometimes too verbose; it would also be better to 
use _log.method(String,Object[])_ instead of _log.method(String)_ with a 
dynamically built String.
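For illustration only (Solr's actual logger differs; this self-contained sketch uses the JDK's java.util.logging and a hypothetical UimaLogDemo class), the parameterized form defers message formatting until the log level is known to be enabled, while the eager form pays for String concatenation on every call:

```java
import java.text.MessageFormat;
import java.util.logging.Level;
import java.util.logging.Logger;

public class UimaLogDemo {
    private static final Logger log = Logger.getLogger(UimaLogDemo.class.getName());

    // Parameterized: the template is only formatted if FINE is enabled.
    static void logParameterized(String field, int count) {
        log.log(Level.FINE, "Updated field {0} with {1} annotations",
                new Object[] { field, count });
    }

    // Eager: the String is concatenated even when FINE is disabled.
    static void logEager(String field, int count) {
        log.fine("Updated field " + field + " with " + count + " annotations");
    }

    // What the parameterized call renders once the level is enabled.
    static String render(String template, Object... args) {
        return MessageFormat.format(template, args);
    }

    public static void main(String[] args) {
        logParameterized("title", 3);
        logEager("title", 3);
        System.out.println(render("Updated field {0} with {1} annotations", "title", 3));
    }
}
```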




[jira] [Assigned] (SOLR-4865) Improve UIMA UpdateRequestProcessor logging

2013-05-28 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili reassigned SOLR-4865:
-

Assignee: Tommaso Teofili




[jira] [Resolved] (LUCENE-5018) Never update offsets in CompoundWordTokenFilterBase

2013-05-28 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-5018.
--

Resolution: Fixed

I just committed the patch on trunk and branch_4x.

 Never update offsets in CompoundWordTokenFilterBase
 ---

 Key: LUCENE-5018
 URL: https://issues.apache.org/jira/browse/LUCENE-5018
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand
 Attachments: LUCENE-5018.patch


 CompoundWordTokenFilterBase and its children 
 DictionaryCompoundWordTokenFilter and HyphenationCompoundWordTokenFilter 
 update offsets. This can make OffsetAttributeImpl trip an exception when 
 chained with other filters that group tokens together such as ShingleFilter, 
 see http://www.gossamer-threads.com/lists/lucene/java-dev/196376?page=last.




[jira] [Updated] (LUCENE-5018) Never update offsets in CompoundWordTokenFilterBase

2013-05-28 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-5018:
-

Fix Version/s: 4.4
   5.0




[jira] [Resolved] (SOLR-4865) Improve UIMA UpdateRequestProcessor logging

2013-05-28 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili resolved SOLR-4865.
---

Resolution: Fixed




[jira] [Created] (SOLR-4866) FieldCache insanity with field used as facet and group

2013-05-28 Thread Sannier Elodie (JIRA)
Sannier Elodie created SOLR-4866:


 Summary: FieldCache insanity with field used as facet and group
 Key: SOLR-4866
 URL: https://issues.apache.org/jira/browse/SOLR-4866
 Project: Solr
  Issue Type: Bug
Reporter: Sannier Elodie
Priority: Minor


I am using the Lucene FieldCache with SolrCloud 4.2.1 and I have insane 
instances for a field used as facet and group field.

Schema fieldType and field declarations for my
merchantid field:
<fieldType name="int" class="solr.TrieIntField" precisionStep="0" 
sortMissingLast="true" omitNorms="true" positionIncrementGap="0"/>

<field name="merchantid" type="int" indexed="true" stored="true" 
required="true"/>

The mbean stats output shows the field cache insanity after executing queries 
like:
/select?q=*:*&facet=true&facet.field=merchantid
/select?q=*:*&group=true&group.field=merchantid
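For reference, the two requests above, with parameters properly separated by `&` and values URL-encoded, can be assembled as in this self-contained sketch (the SelectUrlDemo class name and the host/port are placeholders, not taken from this report):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.util.LinkedHashMap;
import java.util.Map;

public class SelectUrlDemo {
    /** Joins parameters with '&' and URL-encodes each value. */
    static String buildSelect(String base, Map<String, String> params) {
        StringBuilder sb = new StringBuilder(base).append("/select?");
        boolean first = true;
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (!first) sb.append('&');
            first = false;
            try {
                sb.append(e.getKey()).append('=')
                  .append(URLEncoder.encode(e.getValue(), "UTF-8"));
            } catch (UnsupportedEncodingException ex) {
                throw new AssertionError(ex); // UTF-8 is always supported
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> p = new LinkedHashMap<>();
        p.put("q", "*:*");
        p.put("facet", "true");
        p.put("facet.field", "merchantid");
        // http://localhost:8983/solr is a placeholder host, not from the report
        System.out.println(buildSelect("http://localhost:8983/solr", p));
    }
}
```

Note that the `:` in `q=*:*` is encoded as `%3A` on the wire.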

<int name="insanity_count">25</int>
<str name="insanity#0">VALUEMISMATCH: Multiple distinct value objects for 
SegmentCoreReader(owner=_1z1(4.2.1):C3916)+merchantid
'SegmentCoreReader(owner=_1z1(4.2.1):C3916)'='merchantid',class 
org.apache.lucene.index.SortedDocValues,0.5=org.apache.lucene.search.FieldCacheImpl$SortedDocValuesImpl#1517585400

'SegmentCoreReader(owner=_1z1(4.2.1):C3916)'='merchantid',int,org.apache.lucene.search.FieldCache.NUMERIC_UTILS_INT_PARSER=org.apache.lucene.search.FieldCacheImpl$IntsFromArray#781169939

'SegmentCoreReader(owner=_1z1(4.2.1):C3916)'='merchantid',int,null=org.apache.lucene.search.FieldCacheImpl$IntsFromArray#781169939
</str>
...

see http://markmail.org/thread/7gctyh6vn3eq5jso




[jira] [Commented] (SOLR-4470) Support for basic http auth in internal solr requests

2013-05-28 Thread Sindre Fiskaa (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13668209#comment-13668209
 ] 

Sindre Fiskaa commented on SOLR-4470:
-

Hi
We started using Solr about half a year ago. Solr has given our application new 
functionality that we could only dream of before, so this has been very good. 
The problem is that our customers demand high uptime. SolrCloud fits perfectly 
and provides redundancy and failover. But our customers also have high security 
requirements. We have sensitive information and we protect our Solr nodes with 
at minimum basic authentication + SSL. We rely on this patch to take Solr out 
to our customers, so we hope this patch gets released.

 Support for basic http auth in internal solr requests
 -

 Key: SOLR-4470
 URL: https://issues.apache.org/jira/browse/SOLR-4470
 Project: Solr
  Issue Type: New Feature
  Components: clients - java, multicore, replication (java), SolrCloud
Affects Versions: 4.0
Reporter: Per Steffensen
Assignee: Jan Høydahl
  Labels: authentication, https, solrclient, solrcloud, ssl
 Fix For: 4.4

 Attachments: SOLR-4470_branch_4x_r1452629.patch, 
 SOLR-4470_branch_4x_r1452629.patch, SOLR-4470_branch_4x_r145.patch, 
 SOLR-4470.patch


 We want to protect any HTTP resource (url). We want to require credentials no 
 matter what kind of HTTP request you make to a Solr node.
 It can be achieved fairly easily as described on 
 http://wiki.apache.org/solr/SolrSecurity. The problem is that Solr nodes 
 also make internal requests to other Solr nodes, and for those to work 
 credentials need to be provided as well.
 Ideally we would like to forward credentials from a particular request to 
 all the internal sub-requests it triggers, e.g. for search and update 
 requests.
 But there are also internal requests
 * that are only indirectly/asynchronously triggered by outside requests (e.g. 
 shard creation/deletion/etc. based on calls to the Collection API)
 * that do not in any way relate to an outside super-request (e.g. 
 replica syncing)
 We would like to aim at a solution where the original credentials are 
 forwarded when a request directly/synchronously triggers a sub-request, with a 
 fallback to configured internal credentials for the 
 asynchronous/non-rooted requests.
 In our solution we would aim at only supporting basic http auth, but we would 
 like to build a framework around it, so that not too much refactoring is 
 needed if you later want to add support for other kinds of auth (e.g. digest).
 We will work on a solution but created this JIRA issue early in order to get 
 input/comments from the community as early as possible.
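As a sketch of the mechanics involved, the Authorization header that an internal sub-request would have to carry (whether forwarded from the outside request or built from configured fallback credentials) can be constructed with the JDK alone; the user/password values here are hypothetical:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthSketch {
    // Builds the value of the "Authorization" header for HTTP basic auth.
    static String basicAuthHeader(String user, String password) {
        String token = Base64.getEncoder()
                .encodeToString((user + ":" + password).getBytes(StandardCharsets.UTF_8));
        return "Basic " + token;
    }

    public static void main(String[] args) {
        // An internal sub-request would set this header before calling another node.
        System.out.println(basicAuthHeader("solr", "secret"));
    }
}
```

Forwarding then amounts to copying this header from the incoming request onto each synchronous sub-request; the fallback case builds it from configuration instead.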




[jira] [Created] (LUCENE-5021) NextDoc safety for bulk collecting

2013-05-28 Thread Alexis Torres Paderewski (JIRA)
Alexis Torres Paderewski created LUCENE-5021:


 Summary: NextDoc safety for bulk collecting
 Key: LUCENE-5021
 URL: https://issues.apache.org/jira/browse/LUCENE-5021
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index, core/other
Affects Versions: 3.6.2
 Environment: Any with custom filters
Reporter: Alexis Torres Paderewski


Hello,

I want to apply an ACL check once, as a PostFilter, and I need to batch this 
call since per-document round trips would severely hurt performance.

I tried to just stack them up in the DelegatingCollector using this collect:

@Override
public void collect(int doc) throws IOException {
    while ((doc = scorer.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) {
        docs.put(getDocumentId(doc), doc);
    }
    batchCollect();
}

Sometimes it works, when the Scorer calling me is safe, like ConstantScorer, 
but when DisjunctionSumScorer calls me, it either trips an assert on 
NO_MORE_DOCS if assertions are enabled, or it throws an NPE.

Should we copy the DisjunctionMaxScorer mechanism to protect nextDoc on an 
exhausted iterator, either by using the current doc or by checking the number 
of subScorers?




[jira] [Updated] (LUCENE-5021) NextDoc safety for bulk collecting

2013-05-28 Thread Alexis Torres Paderewski (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexis Torres Paderewski updated LUCENE-5021:
-

Description: 
Hello,

I would like to apply an ACL check once, as a PostFilter, and I therefore need 
to batch this call since per-document round trips would severely hurt 
performance.

I tried to just stack them up in the DelegatingCollector using this collect:

@Override
public void collect(int doc) throws IOException {
    while ((doc = scorer.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) {
        docs.put(getDocumentId(doc), doc);
    }
    batchCollect();
}

Depending on the Scorer, it may or may not work. It works when the Scorer is 
safe, that is, when it handles the case in which the scorer is exhausted and 
nextDoc is called once again after exhaustion.
This is the case for e.g. DisjunctionMaxScorer and ConstantScorer:

if (numScorers == 0) return doc = NO_MORE_DOCS;

On the other hand, when using the DisjunctionSumScorer, it either asserts on 
NO_MORE_DOCS, or it throws an NPE.

Shouldn't we copy the DisjunctionMaxScorer mechanism to protect nextDoc on an 
exhausted iterator, either by using the current doc or by checking the number 
of subScorers?


  was:
Hello,

I want to apply ACL once as a PostFilter and I would need to bulk this call 
since round trips would decrease severely performances.

I tryed to just stack them on the DelegatingCollector using this collect :

@Override
public void collect(int doc) throws IOException {
while ((doc = scorer.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) {
docs.put(getDocumentId(doc), doc);
}

batchCollect();
}

Sometime it works, when Scorer calling me are Safe like the ConstantScorer, 
but when DisjunctionSumScorer calls me, it either assert on NO_MORE_DOCS if 
assert are enabled, or it throw a NPE.

Should we copy DisjunctionMaxScorer mechanism to protect nextDoc of an exausted 
iterator using either current doc or checking numbers of subScorers ?


 NextDoc safety for bulk collecting
 --

 Key: LUCENE-5021
 URL: https://issues.apache.org/jira/browse/LUCENE-5021
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index, core/other
Affects Versions: 3.6.2
 Environment: Any with custom filters
Reporter: Alexis Torres Paderewski
  Labels: NPE, Null-Safety, Scorer

 Hello,
 I would like to apply an ACL check once, as a PostFilter, and I therefore need 
 to batch this call since per-document round trips would severely hurt 
 performance.
 I tried to just stack them up in the DelegatingCollector using this collect:
 @Override
 public void collect(int doc) throws IOException {
     while ((doc = scorer.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) {
         docs.put(getDocumentId(doc), doc);
     }
     batchCollect();
 }
 Depending on the Scorer, it may or may not work. It works when the Scorer is 
 safe, that is, when it handles the case in which the scorer is exhausted and 
 nextDoc is called once again after exhaustion.
 This is the case for e.g. DisjunctionMaxScorer and ConstantScorer:
 if (numScorers == 0) return doc = NO_MORE_DOCS;
 On the other hand, when using the DisjunctionSumScorer, it either asserts on 
 NO_MORE_DOCS, or it throws an NPE.
 Shouldn't we copy the DisjunctionMaxScorer mechanism to protect nextDoc on an 
 exhausted iterator, either by using the current doc or by checking the number 
 of subScorers?




[jira] [Updated] (SOLR-4785) New MaxScoreQParserPlugin

2013-05-28 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-4785:
--

Attachment: SOLR-4785-boostfix.patch

This additional patch fixes a bug with top-level BooleanQuery boost being lost. 
Also adds some boost tests.

Will commit soon.

 New MaxScoreQParserPlugin
 -

 Key: SOLR-4785
 URL: https://issues.apache.org/jira/browse/SOLR-4785
 Project: Solr
  Issue Type: New Feature
  Components: query parsers
Reporter: Jan Høydahl
Assignee: Jan Høydahl
Priority: Minor
 Fix For: 5.0, 4.4

 Attachments: 
 SOLR-4785-Add-tests-for-maxscore-to-QueryEqualityTest.patch, 
 SOLR-4785-boostfix.patch, SOLR-4785.patch, SOLR-4785.patch


 A customer wants to contribute this component back.
 It is a QParser which behaves exactly like the lucene parser (it extends it), 
 but returns the max score from the clauses, i.e. max(c1,c2,c3..) instead of 
 the default, which is sum(c1,c2,c3...). It does this by wrapping all SHOULD 
 clauses in a DisjunctionMaxQuery with tie=1.0. Any MUST or PROHIBITED clauses 
 are passed through as-is. Non-boolean queries, e.g. NumericRange, fall 
 through to the lucene parser.
 To use, add to solrconfig.xml:
 {code:xml}
   <queryParser name="maxscore" class="solr.MaxScoreQParserPlugin"/>
 {code}
 Then use it in a query:
 {noformat}
 q=A AND B AND {!maxscore v=$max}&max=C OR (D AND E)
 {noformat}
 This will return the score A+B+max(C, D+E).
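As a self-contained illustration of that final formula (plain Java, hypothetical clause scores, no Solr dependency), compare the default sum-of-clauses score with the maxscore combination A+B+max(C, D+E):

```java
public class MaxScoreSketch {
    public static void main(String[] args) {
        // Hypothetical clause scores; values chosen to be exact in binary floating point.
        double a = 1.0, b = 2.0, c = 0.5, d = 0.25, e = 0.5;

        double standard = a + b + c + d + e;           // default: sum of all clauses
        double maxscore = a + b + Math.max(c, d + e);  // MUST clauses summed, SHOULD group maxed

        System.out.println(standard + " " + maxscore); // prints 4.25 3.75
    }
}
```

The point of the parser is exactly this difference: a document matching many SHOULD clauses no longer accumulates score from all of them, only from the best one.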




[jira] [Commented] (LUCENE-5021) NextDoc safety for bulk collecting

2013-05-28 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13668230#comment-13668230
 ] 

Michael McCandless commented on LUCENE-5021:


In general you're not allowed to call nextDoc after it has returned 
NO_MORE_DOCS: the results are undefined.
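A caller-side pattern consistent with that contract is to latch NO_MORE_DOCS in a wrapper, so the underlying iterator is never advanced again after exhaustion. A minimal sketch with a stand-in iterator type (hypothetical names, plain Java, no Lucene dependency):

```java
public class SafeIteratorSketch {
    static final int NO_MORE_DOCS = Integer.MAX_VALUE;

    /** Minimal stand-in for a doc-id iterator; real code would delegate to a Scorer. */
    interface DocIdIterator { int nextDoc(); }

    /** Returns a wrapper that latches at NO_MORE_DOCS instead of delegating again. */
    static DocIdIterator latching(DocIdIterator in) {
        return new DocIdIterator() {
            private int doc = -1;
            public int nextDoc() {
                if (doc == NO_MORE_DOCS) return NO_MORE_DOCS;  // never advance past exhaustion
                return doc = in.nextDoc();
            }
        };
    }

    public static void main(String[] args) {
        int[] docs = {3, 7, 12};
        int[] pos = {0};
        DocIdIterator raw = () -> pos[0] < docs.length ? docs[pos[0]++] : NO_MORE_DOCS;
        DocIdIterator safe = latching(raw);
        int collected = 0;
        while (safe.nextDoc() != NO_MORE_DOCS) collected++;
        // Calling again after exhaustion stays safe:
        System.out.println(collected + " " + (safe.nextDoc() == NO_MORE_DOCS)); // prints 3 true
    }
}
```

With such a wrapper, the bulk-collecting loop from the issue cannot trip the undefined case, whatever the delegating scorer does internally.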

 NextDoc safety for bulk collecting
 --

 Key: LUCENE-5021
 URL: https://issues.apache.org/jira/browse/LUCENE-5021
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index, core/other
Affects Versions: 3.6.2
 Environment: Any with custom filters
Reporter: Alexis Torres Paderewski
  Labels: NPE, Null-Safety, Scorer

 Hello,
 I would like to apply an ACL check once, as a PostFilter, and I therefore need 
 to batch this call since per-document round trips would severely hurt 
 performance.
 I tried to just stack them up in the DelegatingCollector using this collect:
 @Override
 public void collect(int doc) throws IOException {
     while ((doc = scorer.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) {
         docs.put(getDocumentId(doc), doc);
     }
     batchCollect();
 }
 Depending on the Scorer, it may or may not work. It works when the Scorer is 
 safe, that is, when it handles the case in which the scorer is exhausted and 
 nextDoc is called once again after exhaustion.
 This is the case for e.g. DisjunctionMaxScorer and ConstantScorer:
 if (numScorers == 0) return doc = NO_MORE_DOCS;
 On the other hand, when using the DisjunctionSumScorer, it either asserts on 
 NO_MORE_DOCS, or it throws an NPE.
 Shouldn't we copy the DisjunctionMaxScorer mechanism to protect nextDoc on an 
 exhausted iterator, either by using the current doc or by checking the number 
 of subScorers?




[jira] [Commented] (LUCENE-5020) Make DrillSidewaysResult ctor public

2013-05-28 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13668240#comment-13668240
 ] 

Michael McCandless commented on LUCENE-5020:


+1, looks great!

 Make DrillSidewaysResult ctor public
 

 Key: LUCENE-5020
 URL: https://issues.apache.org/jira/browse/LUCENE-5020
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/facet
Reporter: Shai Erera
Assignee: Shai Erera
Priority: Minor
 Attachments: LUCENE-5020.patch


 DrillSidewaysResult has a package-private ctor which prevents an app from 
 instantiating it. I found that it's sometimes useful, e.g. for doing some 
 post-processing on the returned TopDocs or List<FacetResult>. Since you 
 cannot return two values from a method, it would be convenient if a method 
 could return a new 'processed' DSR.
 I would also like to make the hits member final.




[jira] [Resolved] (LUCENE-5020) Make DrillSidewaysResult ctor public

2013-05-28 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera resolved LUCENE-5020.


   Resolution: Fixed
Fix Version/s: 4.4
   5.0

Thanks Mike. Committed to trunk and 4x.

 Make DrillSidewaysResult ctor public
 

 Key: LUCENE-5020
 URL: https://issues.apache.org/jira/browse/LUCENE-5020
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/facet
Reporter: Shai Erera
Assignee: Shai Erera
Priority: Minor
 Fix For: 5.0, 4.4

 Attachments: LUCENE-5020.patch


 DrillSidewaysResult has a package-private ctor which prevents an app from 
 instantiating it. I found that it's sometimes useful, e.g. for doing some 
 post-processing on the returned TopDocs or List<FacetResult>. Since you 
 cannot return two values from a method, it would be convenient if a method 
 could return a new 'processed' DSR.
 I would also like to make the hits member final.




[jira] [Commented] (LUCENE-5015) Unexpected performance difference between SamplingAccumulator and StandardFacetAccumulator

2013-05-28 Thread Rob Audenaerde (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13668267#comment-13668267
 ] 

Rob Audenaerde commented on LUCENE-5015:


Hi Shai,

I will check tomorrow. Just to be sure, this is what I will do:

- Lucene 4.3 release
- FacetsAccumulator with and without complements
- With the 'default' TakmiSampleFixer
- With a NOOP empty Sampler implementation that I will return by overriding the 
'getSampleFixer' method in the Sampler that I will provide.
- MADQ with 1..5 selected facets
- some other query that will return about 50% of the documents, also with 1..5 
facets

I currently have a nice 15M document set, I will use that as a basis. 

 Unexpected performance difference between SamplingAccumulator and 
 StandardFacetAccumulator
 --

 Key: LUCENE-5015
 URL: https://issues.apache.org/jira/browse/LUCENE-5015
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/facet
Affects Versions: 4.3
Reporter: Rob Audenaerde
Assignee: Shai Erera
Priority: Minor
 Attachments: LUCENE-5015.patch, LUCENE-5015.patch, LUCENE-5015.patch, 
 LUCENE-5015.patch, LUCENE-5015.patch


 I have an unexpected performance difference between the SamplingAccumulator 
 and the StandardFacetAccumulator. 
 The case is an index with about 5M documents and each document containing 
 about 10 fields. I created a facet on each of those fields. When searching to 
 retrieve facet-counts (using 1 CountFacetRequest), the SamplingAccumulator is 
 about twice as fast as the StandardFacetAccumulator. This is expected and a 
 nice speed-up. 
 However, when I use more CountFacetRequests to retrieve facet-counts for more 
 than one field, the speed of the SamplingAccumulator decreases, to the point 
 where the StandardFacetAccumulator is faster. 
 {noformat} 
 FacetRequests  SamplingStandard
  1   391 ms 1100 ms
  2   531 ms 1095 ms 
  3   948 ms 1108 ms
  4  1400 ms 1110 ms
  5  1901 ms 1102 ms
 {noformat} 
 Is this behaviour normal? I did not expect it, as the SamplingAccumulator 
 needs to do less work. 
 Some code to show what I do:
 {code}
   searcher.search( facetsQuery, facetsCollector );
   final List<FacetResult> collectedFacets = 
 facetsCollector.getFacetResults();
 {code}
 {code}
 final FacetSearchParams facetSearchParams = new FacetSearchParams( 
 facetRequests );
 FacetsCollector facetsCollector;
 if ( isSampled )
 {
   facetsCollector =
   FacetsCollector.create( new SamplingAccumulator( new 
 RandomSampler(), facetSearchParams, searcher.getIndexReader(), taxo ) );
 }
 else
 {
   facetsCollector = FacetsCollector.create( FacetsAccumulator.create( 
 facetSearchParams, searcher.getIndexReader(), taxo ) );
 }
 {code}
   




[jira] [Comment Edited] (LUCENE-5015) Unexpected performance difference between SamplingAccumulator and StandardFacetAccumulator

2013-05-28 Thread Rob Audenaerde (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13668267#comment-13668267
 ] 

Rob Audenaerde edited comment on LUCENE-5015 at 5/28/13 12:11 PM:
--

Hi Shai,

I will check tomorrow. Just to be sure, this is what I will do:

- Lucene 4.3 release
- FacetsAccumulator with and without complements
-- With the 'default TakmiSampleFixer
-- With a NOOP empty Sampler implementation that I will return by overriding 
the 'getSampleFixer' method in the Sampler that I will provide.
- MADQ with 1..5 selected facets
- some other query that will return about 50% of the documents, also with 1..5 
facets

I currently have a nice 15M document set, I will use that as a basis. 

  was (Author: robau):
Hi Shai,

I will check tomorrow. Just to be sure, this is what I will do:

- Lucene 4.3 release
- FacetsAccumulator with and without complements
- With the 'default TakmiSampleFixer
- With a NOOP empty Sampler implementation that I will return by overriding the 
'getSampleFixer' method in the Sampler that I will provide.
- MADQ with 1..5 selected facets
- some other query that will return about 50% of the documents, also with 1..5 
facets

I currently have a nice 15M document set, I will use that as a basis. 
  
 Unexpected performance difference between SamplingAccumulator and 
 StandardFacetAccumulator
 --

 Key: LUCENE-5015
 URL: https://issues.apache.org/jira/browse/LUCENE-5015
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/facet
Affects Versions: 4.3
Reporter: Rob Audenaerde
Assignee: Shai Erera
Priority: Minor
 Attachments: LUCENE-5015.patch, LUCENE-5015.patch, LUCENE-5015.patch, 
 LUCENE-5015.patch, LUCENE-5015.patch


 I have an unexpected performance difference between the SamplingAccumulator 
 and the StandardFacetAccumulator. 
 The case is an index with about 5M documents and each document containing 
 about 10 fields. I created a facet on each of those fields. When searching to 
 retrieve facet-counts (using 1 CountFacetRequest), the SamplingAccumulator is 
 about twice as fast as the StandardFacetAccumulator. This is expected and a 
 nice speed-up. 
 However, when I use more CountFacetRequests to retrieve facet-counts for more 
 than one field, the speed of the SamplingAccumulator decreases, to the point 
 where the StandardFacetAccumulator is faster. 
 {noformat} 
 FacetRequests  SamplingStandard
  1   391 ms 1100 ms
  2   531 ms 1095 ms 
  3   948 ms 1108 ms
  4  1400 ms 1110 ms
  5  1901 ms 1102 ms
 {noformat} 
 Is this behaviour normal? I did not expect it, as the SamplingAccumulator 
 needs to do less work. 
 Some code to show what I do:
 {code}
   searcher.search( facetsQuery, facetsCollector );
   final List<FacetResult> collectedFacets = 
 facetsCollector.getFacetResults();
 {code}
 {code}
 final FacetSearchParams facetSearchParams = new FacetSearchParams( 
 facetRequests );
 FacetsCollector facetsCollector;
 if ( isSampled )
 {
   facetsCollector =
   FacetsCollector.create( new SamplingAccumulator( new 
 RandomSampler(), facetSearchParams, searcher.getIndexReader(), taxo ) );
 }
 else
 {
   facetsCollector = FacetsCollector.create( FacetsAccumulator.create( 
 facetSearchParams, searcher.getIndexReader(), taxo ) );
 }
 {code}
   




[jira] [Comment Edited] (LUCENE-5015) Unexpected performance difference between SamplingAccumulator and StandardFacetAccumulator

2013-05-28 Thread Rob Audenaerde (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13668267#comment-13668267
 ] 

Rob Audenaerde edited comment on LUCENE-5015 at 5/28/13 12:11 PM:
--

Hi Shai,

I will check tomorrow. Just to be sure, this is what I will do:

- Lucene 4.3 release
- FacetsAccumulator with and without complements
-- With the 'default' TakmiSampleFixer
-- With a NOOP empty Sampler implementation that I will return by overriding 
the 'getSampleFixer' method in the Sampler that I will provide.
- MADQ with 1..5 selected facets
- some other query that will return about 50% of the documents, also with 1..5 
facets

I currently have a nice 15M document set, I will use that as a basis. 

  was (Author: robau):
Hi Shai,

I will check tomorrow. Just to be sure, this is what I will do:

- Lucene 4.3 release
- FacetsAccumulator with and without complements
-- With the 'default TakmiSampleFixer
-- With a NOOP empty Sampler implementation that I will return by overriding 
the 'getSampleFixer' method in the Sampler that I will provide.
- MADQ with 1..5 selected facets
- some other query that will return about 50% of the documents, also with 1..5 
facets

I currently have a nice 15M document set, I will use that as a basis. 
  
 Unexpected performance difference between SamplingAccumulator and 
 StandardFacetAccumulator
 --

 Key: LUCENE-5015
 URL: https://issues.apache.org/jira/browse/LUCENE-5015
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/facet
Affects Versions: 4.3
Reporter: Rob Audenaerde
Assignee: Shai Erera
Priority: Minor
 Attachments: LUCENE-5015.patch, LUCENE-5015.patch, LUCENE-5015.patch, 
 LUCENE-5015.patch, LUCENE-5015.patch


 I have an unexpected performance difference between the SamplingAccumulator 
 and the StandardFacetAccumulator. 
 The case is an index with about 5M documents and each document containing 
 about 10 fields. I created a facet on each of those fields. When searching to 
 retrieve facet-counts (using 1 CountFacetRequest), the SamplingAccumulator is 
 about twice as fast as the StandardFacetAccumulator. This is expected and a 
 nice speed-up. 
 However, when I use more CountFacetRequests to retrieve facet-counts for more 
 than one field, the speed of the SamplingAccumulator decreases, to the point 
 where the StandardFacetAccumulator is faster. 
 {noformat} 
 FacetRequests  SamplingStandard
  1   391 ms 1100 ms
  2   531 ms 1095 ms 
  3   948 ms 1108 ms
  4  1400 ms 1110 ms
  5  1901 ms 1102 ms
 {noformat} 
 Is this behaviour normal? I did not expect it, as the SamplingAccumulator 
 needs to do less work. 
 Some code to show what I do:
 {code}
   searcher.search( facetsQuery, facetsCollector );
   final List<FacetResult> collectedFacets = 
 facetsCollector.getFacetResults();
 {code}
 {code}
 final FacetSearchParams facetSearchParams = new FacetSearchParams( 
 facetRequests );
 FacetsCollector facetsCollector;
 if ( isSampled )
 {
   facetsCollector =
   FacetsCollector.create( new SamplingAccumulator( new 
 RandomSampler(), facetSearchParams, searcher.getIndexReader(), taxo ) );
 }
 else
 {
   facetsCollector = FacetsCollector.create( FacetsAccumulator.create( 
 facetSearchParams, searcher.getIndexReader(), taxo ) );
 }
 {code}
   




[jira] [Resolved] (SOLR-4785) New MaxScoreQParserPlugin

2013-05-28 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-4785.
---

Resolution: Fixed

Committed fix to trunk (1486898) and 4x (1486901)

 New MaxScoreQParserPlugin
 -

 Key: SOLR-4785
 URL: https://issues.apache.org/jira/browse/SOLR-4785
 Project: Solr
  Issue Type: New Feature
  Components: query parsers
Reporter: Jan Høydahl
Assignee: Jan Høydahl
Priority: Minor
 Fix For: 5.0, 4.4

 Attachments: 
 SOLR-4785-Add-tests-for-maxscore-to-QueryEqualityTest.patch, 
 SOLR-4785-boostfix.patch, SOLR-4785.patch, SOLR-4785.patch


 A customer wants to contribute this component back.
 It is a QParser which behaves exactly like the lucene parser (it extends it), 
 but returns the max score from the clauses, i.e. max(c1,c2,c3..) instead of 
 the default, which is sum(c1,c2,c3...). It does this by wrapping all SHOULD 
 clauses in a DisjunctionMaxQuery with tie=1.0. Any MUST or PROHIBITED clauses 
 are passed through as-is. Non-boolean queries, e.g. NumericRange, fall 
 through to the lucene parser.
 To use, add to solrconfig.xml:
 {code:xml}
   <queryParser name="maxscore" class="solr.MaxScoreQParserPlugin"/>
 {code}
 Then use it in a query:
 {noformat}
 q=A AND B AND {!maxscore v=$max}&max=C OR (D AND E)
 {noformat}
 This will return the score A+B+max(C, D+E).




[jira] [Commented] (LUCENE-5015) Unexpected performance difference between SamplingAccumulator and StandardFacetAccumulator

2013-05-28 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13668270#comment-13668270
 ] 

Shai Erera commented on LUCENE-5015:


Yes, that sounds good. If you don't want to apply the patch so that you can use 
the Noop fixer, that's fine too. I think the main goal is to see whether the 
complements that kicked in were getting in the way.

 Unexpected performance difference between SamplingAccumulator and 
 StandardFacetAccumulator
 --

 Key: LUCENE-5015
 URL: https://issues.apache.org/jira/browse/LUCENE-5015
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/facet
Affects Versions: 4.3
Reporter: Rob Audenaerde
Assignee: Shai Erera
Priority: Minor
 Attachments: LUCENE-5015.patch, LUCENE-5015.patch, LUCENE-5015.patch, 
 LUCENE-5015.patch, LUCENE-5015.patch


 I have an unexpected performance difference between the SamplingAccumulator 
 and the StandardFacetAccumulator. 
 The case is an index with about 5M documents and each document containing 
 about 10 fields. I created a facet on each of those fields. When searching to 
 retrieve facet-counts (using 1 CountFacetRequest), the SamplingAccumulator is 
 about twice as fast as the StandardFacetAccumulator. This is expected and a 
 nice speed-up. 
 However, when I use more CountFacetRequests to retrieve facet-counts for more 
 than one field, the speed of the SamplingAccumulator decreases, to the point 
 where the StandardFacetAccumulator is faster. 
 {noformat} 
 FacetRequests  SamplingStandard
  1   391 ms 1100 ms
  2   531 ms 1095 ms 
  3   948 ms 1108 ms
  4  1400 ms 1110 ms
  5  1901 ms 1102 ms
 {noformat} 
 Is this behaviour normal? I did not expect it, as the SamplingAccumulator 
 needs to do less work. 
 Some code to show what I do:
 {code}
   searcher.search( facetsQuery, facetsCollector );
   final List<FacetResult> collectedFacets = 
 facetsCollector.getFacetResults();
 {code}
 {code}
 final FacetSearchParams facetSearchParams = new FacetSearchParams( 
 facetRequests );
 FacetsCollector facetsCollector;
 if ( isSampled )
 {
   facetsCollector =
   FacetsCollector.create( new SamplingAccumulator( new 
 RandomSampler(), facetSearchParams, searcher.getIndexReader(), taxo ) );
 }
 else
 {
   facetsCollector = FacetsCollector.create( FacetsAccumulator.create( 
 facetSearchParams, searcher.getIndexReader(), taxo ) );
 }
 {code}
   




[jira] [Commented] (LUCENE-5021) NextDoc safety for bulk collecting

2013-05-28 Thread Alexis Torres Paderewski (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13668276#comment-13668276
 ] 

Alexis Torres Paderewski commented on LUCENE-5021:
--

I understand that note on DocIdSetIterator/Scorer. But if DelegatingCollector 
had an event for, let's say, the last document, I would be able to buffer the 
docs until that last document and run my ACL check against the other system 
without messing with nextDoc in DelegatingCollector. A way around this would be 
to add such an event.
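The event being asked for can be sketched without Lucene: a collector that buffers doc ids and exposes a finish() hook fired after the last document, so the ACL check runs as one batch (all names here hypothetical, not an existing Solr API):

```java
import java.util.ArrayList;
import java.util.List;

public class FinishEventSketch {
    static class BufferingCollector {
        private final List<Integer> buffered = new ArrayList<>();
        private int batches = 0;

        /** Called once per matching document; just buffers, never drives the scorer. */
        void collect(int doc) { buffered.add(doc); }

        /** The missing hook: fired by the caller after the last document. */
        void finish() {
            batches++;        // e.g. one bulk ACL round trip here
            buffered.clear();
        }

        int batches() { return batches; }
    }

    public static void main(String[] args) {
        BufferingCollector c = new BufferingCollector();
        for (int doc : new int[] {3, 7, 12}) c.collect(doc);
        c.finish();  // the search loop signals "no more documents"
        System.out.println(c.batches()); // prints 1
    }
}
```

This keeps the nextDoc contract intact because the collector never calls the scorer itself; it only reacts to collect() and to the end-of-stream signal.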

 NextDoc safety for bulk collecting
 --

 Key: LUCENE-5021
 URL: https://issues.apache.org/jira/browse/LUCENE-5021
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index, core/other
Affects Versions: 3.6.2
 Environment: Any with custom filters
Reporter: Alexis Torres Paderewski
  Labels: NPE, Null-Safety, Scorer

 Hello,
 I would like to apply ACL checks once as a PostFilter, and I therefore need to 
 batch this call, since round trips would severely decrease performance.
 I tried to just stack the documents on the DelegatingCollector using this collect:
 {code}
 @Override
 public void collect(int doc) throws IOException {
 while ((doc = scorer.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) {
 docs.put(getDocumentId(doc), doc);
 }
 batchCollect();
 }
 {code}
 Depending on the Scorer, this may or may not work. It works when the Scorer is 
 safe, that is, when it handles the case in which the scorer is exhausted and is 
 called once again after exhaustion.
 This is the case of, e.g., DisjunctionMaxScorer and ConstantScorer:
 {code}
 if (numScorers == 0) return doc = NO_MORE_DOCS;
 {code}
 On the other hand, when using the DisjunctionSumScorer, it either asserts on 
 NO_MORE_DOCS, or it throws an NPE.
 Shouldn't we copy the DisjunctionMaxScorer mechanism to protect nextDoc of an 
 exhausted iterator, either using the current doc or checking the number of 
 subScorers?
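The guard being asked for can be sketched outside Lucene. This is a hypothetical wrapper (SafeDocIdIterator and its doc-id source are made up for illustration, not actual Lucene classes): once the underlying iterator is exhausted, every further nextDoc() call keeps returning NO_MORE_DOCS, mimicking the DisjunctionMaxScorer-style guard instead of the NPE the DisjunctionSumScorer path can throw.

```java
import java.util.Iterator;

// Toy exhaustion-safe doc-id iterator (illustration only, not Lucene code).
class SafeDocIdIterator {
    static final int NO_MORE_DOCS = Integer.MAX_VALUE;
    private final Iterator<Integer> docs;
    private int doc = -1;

    SafeDocIdIterator(Iterator<Integer> docs) { this.docs = docs; }

    int nextDoc() {
        // Guard: once exhausted, stay on NO_MORE_DOCS instead of failing
        // when called again after exhaustion.
        if (doc == NO_MORE_DOCS) return doc;
        doc = docs.hasNext() ? docs.next() : NO_MORE_DOCS;
        return doc;
    }
}
```

With this guard, a bulk-collecting loop like the one in the issue can safely drain the iterator and even call nextDoc() once more after the loop without any special-casing.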




[jira] [Updated] (LUCENE-5015) Unexpected performance difference between SamplingAccumulator and StandardFacetAccumulator

2013-05-28 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera updated LUCENE-5015:
---

Attachment: LUCENE-5015.patch

Patch adds CHANGES entry as well as makes SampleFixer and TakmiSampleFixer 
public. I think this is ready but let's wait for Rob's input.

 Unexpected performance difference between SamplingAccumulator and 
 StandardFacetAccumulator
 --

 Key: LUCENE-5015
 URL: https://issues.apache.org/jira/browse/LUCENE-5015
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/facet
Affects Versions: 4.3
Reporter: Rob Audenaerde
Assignee: Shai Erera
Priority: Minor
 Attachments: LUCENE-5015.patch, LUCENE-5015.patch, LUCENE-5015.patch, 
 LUCENE-5015.patch, LUCENE-5015.patch, LUCENE-5015.patch


 I have an unexpected performance difference between the SamplingAccumulator 
 and the StandardFacetAccumulator. 
 The case is an index with about 5M documents and each document containing 
 about 10 fields. I created a facet on each of those fields. When searching to 
 retrieve facet-counts (using 1 CountFacetRequest), the SamplingAccumulator is 
 about twice as fast as the StandardFacetAccumulator. This is expected and a 
 nice speed-up. 
 However, when I use more CountFacetRequests to retrieve facet-counts for more 
 than one field, the speed of the SamplingAccumulator decreases, to the point 
 where the StandardFacetAccumulator is faster. 
 {noformat} 
 FacetRequests  SamplingStandard
  1   391 ms 1100 ms
  2   531 ms 1095 ms 
  3   948 ms 1108 ms
  4  1400 ms 1110 ms
  5  1901 ms 1102 ms
 {noformat} 
 Is this behaviour normal? I did not expect it, as the SamplingAccumulator 
 should need to do less work. 
 Some code to show what I do:
 {code}
   searcher.search( facetsQuery, facetsCollector );
 final List<FacetResult> collectedFacets = 
 facetsCollector.getFacetResults();
 {code}
 {code}
 final FacetSearchParams facetSearchParams = new FacetSearchParams( 
 facetRequests );
 FacetsCollector facetsCollector;
 if ( isSampled )
 {
   facetsCollector =
   FacetsCollector.create( new SamplingAccumulator( new 
 RandomSampler(), facetSearchParams, searcher.getIndexReader(), taxo ) );
 }
 else
 {
   facetsCollector = FacetsCollector.create( FacetsAccumulator.create( 
 facetSearchParams, searcher.getIndexReader(), taxo ) );
 }
 {code}
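As a sanity check on the numbers in the {noformat} table above (plain arithmetic on the quoted timings, not Lucene code): the Sampling column grows by several hundred ms per additional FacetRequest while the Standard column stays flat around 1100 ms, which matches the reported crossover between 3 and 4 requests and suggests a per-request cost in the sampling path.

```java
// Incremental cost per extra FacetRequest, from the Sampling timings above.
class SamplingTimings {
    static int[] samplingDeltas() {
        int[] sampling = {391, 531, 948, 1400, 1901};   // ms for 1..5 requests
        int[] deltas = new int[sampling.length - 1];
        for (int i = 1; i < sampling.length; i++) {
            // cost added by the i-th extra request
            deltas[i - 1] = sampling[i] - sampling[i - 1];
        }
        return deltas;  // {140, 417, 452, 501}: a growing per-request cost
    }
}
```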
   




[jira] [Commented] (SOLR-4744) Version conflict error during shard split test

2013-05-28 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13668310#comment-13668310
 ] 

Yonik Seeley commented on SOLR-4744:


Hmmm, can we get away with fixing this with a much less invasive change?
What if we just send to sub-shards like we send to other replicas, and return a 
failure if the sub-shard fails (don't worry about trying to change the logic to 
send to a sub-shard before adding locally).  The shard is going to become 
inactive anyway, so it shouldn't matter if we accidentally add a document 
locally that goes on to be rejected by the sub-shard, right?



 Version conflict error during shard split test
 --

 Key: SOLR-4744
 URL: https://issues.apache.org/jira/browse/SOLR-4744
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 4.4

 Attachments: SOLR-4744.patch


 ShardSplitTest fails sometimes with the following error:
 {code}
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.861; 
 org.apache.solr.cloud.Overseer$ClusterStateUpdater; Update shard state 
 invoked for collection: collection1
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.861; 
 org.apache.solr.cloud.Overseer$ClusterStateUpdater; Update shard state shard1 
 to inactive
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.861; 
 org.apache.solr.cloud.Overseer$ClusterStateUpdater; Update shard state 
 shard1_0 to active
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.861; 
 org.apache.solr.cloud.Overseer$ClusterStateUpdater; Update shard state 
 shard1_1 to active
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.873; 
 org.apache.solr.update.processor.LogUpdateProcessor; [collection1] webapp= 
 path=/update params={wt=javabin&version=2} {add=[169 (1432319507166134272)]} 
 0 2
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.877; 
 org.apache.solr.common.cloud.ZkStateReader$2; A cluster state change: 
 WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 5)
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.877; 
 org.apache.solr.common.cloud.ZkStateReader$2; A cluster state change: 
 WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 5)
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.877; 
 org.apache.solr.common.cloud.ZkStateReader$2; A cluster state change: 
 WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 5)
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.877; 
 org.apache.solr.common.cloud.ZkStateReader$2; A cluster state change: 
 WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 5)
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.877; 
 org.apache.solr.common.cloud.ZkStateReader$2; A cluster state change: 
 WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 5)
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.877; 
 org.apache.solr.common.cloud.ZkStateReader$2; A cluster state change: 
 WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 5)
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.884; 
 org.apache.solr.update.processor.LogUpdateProcessor; 
 [collection1_shard1_1_replica1] webapp= path=/update 
 params={distrib.from=http://127.0.0.1:41028/collection1/update&update.distrib=FROMLEADER&wt=javabin&distrib.from.parent=shard1&version=2}
  {} 0 1
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.885; 
 org.apache.solr.update.processor.LogUpdateProcessor; [collection1] webapp= 
 path=/update 
 params={distrib.from=http://127.0.0.1:41028/collection1/update&update.distrib=FROMLEADER&wt=javabin&distrib.from.parent=shard1&version=2}
  {add=[169 (1432319507173474304)]} 0 2
 [junit4:junit4]   1 ERROR - 2013-04-14 19:05:26.885; 
 org.apache.solr.common.SolrException; shard update error StdNode: 
 http://127.0.0.1:41028/collection1_shard1_1_replica1/:org.apache.solr.common.SolrException:
  version conflict for 169 expected=1432319507173474304 actual=-1
 [junit4:junit4]   1  at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:404)
 [junit4:junit4]   1  at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:181)
 [junit4:junit4]   1  at 
 org.apache.solr.update.SolrCmdDistributor$1.call(SolrCmdDistributor.java:332)
 [junit4:junit4]   1  at 
 org.apache.solr.update.SolrCmdDistributor$1.call(SolrCmdDistributor.java:306)
 [junit4:junit4]   1  at 
 

Re: ReleaseTodo update

2013-05-28 Thread Jan Høydahl
Found another missing piece in the release instructions: updating the Solr WIKI page 
after a release. Added this section:

https://wiki.apache.org/lucene-java/ReleaseTodo#Update_WIKI

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

On May 23, 2013, at 00:42, Jack Krupansky j...@basetechnology.com wrote:

 Here’s a simple example of the problem: A user links to the current doc for 
 the PositionFilter. That link should continue to work for years to come – if 
 it is a versioned link. But if it links to a non-versioned doc page, and in 
 4.4 PositionFilter is removed, then suddenly the user’s link breaks. Or, 
 ditto for a link from one of the Solr wiki pages.
 
 -- Jack Krupansky
  
 From: Jan Høydahl
 Sent: Wednesday, May 22, 2013 6:14 PM
 To: dev@lucene.apache.org
 Subject: Re: ReleaseTodo update
  
 There was extensive discussion about this previously on the mailing list.
 
 I found this, where in fact you are the one proposing the redirect to latest 
 doc: http://search-lucene.com/m/HPcIC1Nx9NE1
  
 If broken links are our main concern, perhaps we should deploy a BOT checking 
 our CMS/Wiki nightly for broken links and email the dev list if found?
  
 --
 Jan Høydahl, search solution architect
 Cominvent AS - www.cominvent.com
  
 On May 22, 2013, at 23:07, Robert Muir rcm...@gmail.com wrote:
 
 I would rather remove the redirect. If we want to have a redirect to a 
 specific index.html, that's way different than a rewrite rule that is ripe for 
 abuse.
 
 There was extensive discussion about this previously on the mailing list.
 
 On Wed, May 22, 2013 at 2:03 PM, Jan Høydahl jan@cominvent.com wrote:
 Sure, broken links may be, and probably already have been, an issue.
  
 Anyway, the goal of this doc change was to have consistency between the real 
 world and the release todo, not to get stale redirects to some random 
 version.
  
 You can always open a JIRA to propose changing all of this.
  
 --
 Jan Høydahl, search solution architect
 Cominvent AS - www.cominvent.com
  
 On May 22, 2013, at 19:24, Robert Muir rcm...@gmail.com wrote:
 
 There were a lot of discussions about these links before. 
 
 I don't think we should have such links. They just encourage broken links 
 from the wiki etc. 
 
 i don't want to see links like 
 http://lucene.apache.org/core/api/core/org/apache/lucene/index/FieldInvertState.html
  going to 
 http://lucene.apache.org/core/4_3_0/core/org/apache/lucene/index/FieldInvertState.html.
 
 
 These will only break as APIs evolve. It's better to have the version 
 explicit.
 
 On Wed, May 22, 2013 at 6:39 AM, Jan Høydahl jan@cominvent.com wrote:
 WIKI updated, please review: 
 https://wiki.apache.org/lucene-java/ReleaseTodo#Update_redirect_to_latest_Javadoc
 
 Also added redirects to the Lucene javadocs so that 
 http://lucene.apache.org/core/api/ ... now works. If that was avoided on 
 purpose for some reason, let me know and I'll remove it again.
 
 --
 Jan Høydahl, search solution architect
 Cominvent AS - www.cominvent.com
 
 On May 13, 2013, at 21:20, Steve Rowe sar...@gmail.com wrote:
 
  Good catch, Jan - feel free to add this to the ReleaseTodo wiki page 
  yourself. - Steve
 
  On May 12, 2013, at 7:18 PM, Jan Høydahl jan@cominvent.com wrote:
 
  Hi,
 
  I discovered that the doc redirect still redirects to 4_1_0 javadocs.
  I changed .htaccess so it now points to 4_3_0
  https://svn.apache.org/repos/asf/lucene/cms/trunk/content/.htaccess
 
  The Release TODO should mention updating this link - 
  http://wiki.apache.org/lucene-java/ReleaseTodo
 
  --
  Jan Høydahl, search solution architect
  Cominvent AS - www.cominvent.com
 
 
 
 
 
  
 
 
  



[jira] [Commented] (SOLR-4744) Version conflict error during shard split test

2013-05-28 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13668335#comment-13668335
 ] 

Shalin Shekhar Mangar commented on SOLR-4744:
-

bq. What if we just send to sub-shards like we send to other replicas, and 
return a failure if the sub-shard fails (don't worry about trying to change the 
logic to send to a sub-shard before adding locally). The shard is going to 
become inactive anyway, so it shouldn't matter if we accidentally add a 
document locally that goes on to be rejected by the sub-shard, right?

What happens with partial updates in that case? Suppose an increment operation 
is requested which succeeds locally but is not propagated to the sub shard. If 
the client retries, the index will have wrong values.
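The retry hazard described here can be shown with a toy model (hypothetical code, not Solr's actual update path): a partial "increment" update that is applied locally but rejected by the sub-shard, then retried wholesale by the client, ends up applied twice because the local copy already took effect.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of a non-idempotent partial update plus a client retry.
class PartialUpdateRetry {
    static final Map<String, Integer> index = new HashMap<>();

    static void increment(String id, int by) throws Exception {
        index.merge(id, by, Integer::sum);  // succeeds on the local (old) shard
        throw new Exception("sub-shard rejected the forwarded update");
    }

    static int demo() {
        index.put("doc1", 10);
        for (int attempt = 0; attempt < 2; attempt++) {  // original try + retry
            try {
                increment("doc1", 5);
            } catch (Exception rejected) {
                // the client only sees a failure, so it retries the whole op
            }
        }
        return index.get("doc1");  // 20: +5 was applied twice, not once
    }
}
```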


[jira] [Commented] (SOLR-4744) Version conflict error during shard split test

2013-05-28 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13668344#comment-13668344
 ] 

Yonik Seeley commented on SOLR-4744:


bq. What happens with partial updates in that case? Suppose an increment 
operation is requested which succeeds locally but is not propagated to the sub 
shard.

If we're talking about failures due to the sub-shard already being active when 
it receives an update from the old shard that thinks it's still the leader, then 
I think we're fine.  This isn't a new failure mode, but just another way that 
the old shard can be out of date.  For example, once a normal update is 
received by the new shard, the old shard will be out of date anyway.

bq. If the client retries, the index will have wrong values.

If the client retries to the same old shard that is no longer the leader, then 
the update will fail again because the sub-shard will reject it again?  We 
could perhaps return an error code suggesting that the client is using stale 
cluster state (i.e. re-read before trying the update again).



Infinispan JGroups migrating to Apache License

2013-05-28 Thread Sanne Grinovero
Hello all,
as some of you already know the Infinispan project includes several
integration points with the Apache Lucene project, including a
Directory implementation, but so far we had a separate community
because of the license incompatibility.

I'm very happy to announce now that both Infinispan and its dependency
JGroups are going to move to the Apache License, as you can see from
the following blogposts:

   
http://infinispan.blogspot.co.uk/2013/05/infinispan-to-adopt-apache-software.html

   
http://belaban.blogspot.ch/2013/05/jgroups-to-investigate-adopting-apache.html

I hope this will benefit both projects and allow more people to use both.

# What's Infinispan?

It's an in-memory Key/Value store geared to fast data rather than very
large data, with Dynamo-inspired consistent hashing to combine the
reliability and resources of multiple machines.
It does not support eventual consistency, but it supports transactions, including XA.
When data gets too large to be handled in the JVM heap, it can swap over
to different storage engines, e.g. Cassandra, HBase, MongoDB, JDBC,
cloud storage, ...

[there is much more but for the sake of brevity I expect this to be
most useful to Lucene developers]

# What's the state of this Infinispan / Lucene Directory?

Basically it stores the segments in the distributed cache, so it
provides a quick storage engine, real-time replication without NFS
trouble, and optionally integration with transactions.

This is working quite well, and - depending on your needs and
configuration options - it might be faster than FSDirectory or
RAMDirectory. In all fairness, it's not easy to beat the efficiency
of FSDirectory in memory-mapping mode: it may be faster in some
cases, more or less significantly, but I think the real difference
is in the scalability options and the flexibility in architectures.
It is generally faster than the RAMDirectory, especially under contention.

Support for Lucene 4 was just added recently, so while I think it
would be great to have custom Codecs for it, that isn't done yet: for
now it just stores the byte[] chunks of the segments.

This is not a replacement for Solr or ElasticSearch: it provides just
a storage component; it does not solve - among others - the problem of
distributed writers. It is used by Hibernate Search.

Regards,
Sanne
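The "stores the byte[] chunks of the segments" idea can be sketched as follows. This is a toy illustration only: ChunkedStore and its key scheme are made up for the example, and the real Infinispan Directory of course works against the distributed cache and Lucene's Directory API, with a configurable chunk size, rather than a plain HashMap.

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch: a "segment file" stored as fixed-size byte[] chunks in a
// key/value map, keyed by fileName|chunkIndex.
class ChunkedStore {
    static final int CHUNK = 4;  // tiny chunk size, just for the demo
    private final Map<String, byte[]> cache = new HashMap<>();

    void write(String file, byte[] data) {
        for (int i = 0; i * CHUNK < data.length; i++) {
            int from = i * CHUNK, to = Math.min(from + CHUNK, data.length);
            byte[] chunk = new byte[to - from];
            System.arraycopy(data, from, chunk, 0, to - from);
            cache.put(file + "|" + i, chunk);  // each chunk is one cache entry
        }
    }

    byte[] read(String file, int length) {
        byte[] out = new byte[length];
        for (int i = 0; i * CHUNK < length; i++) {
            byte[] chunk = cache.get(file + "|" + i);
            System.arraycopy(chunk, 0, out, i * CHUNK, chunk.length);
        }
        return out;
    }
}
```

In a distributed cache, each chunk entry can then be replicated or consistently hashed across nodes independently, which is what gives the Directory its scalability.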




[jira] [Updated] (LUCENE-5021) NextDoc NPE safety when bulk collecting

2013-05-28 Thread Alexis Torres Paderewski (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexis Torres Paderewski updated LUCENE-5021:
-

Summary: NextDoc NPE safety when bulk collecting  (was: NextDoc safety for 
bulk collecting)





[jira] [Commented] (LUCENE-5021) NextDoc NPE safety when bulk collecting

2013-05-28 Thread Alexis Torres Paderewski (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13668350#comment-13668350
 ] 

Alexis Torres Paderewski commented on LUCENE-5021:
--

After checking lucene-core, it seems the only candidate for the ticket is the 
DisjunctionSumScorer.





[jira] [Commented] (SOLR-4744) Version conflict error during shard split test

2013-05-28 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13668351#comment-13668351
 ] 

Shalin Shekhar Mangar commented on SOLR-4744:
-

bq. If we're talking about failures due to the sub-shard already being active 
when it receives an update from the old shard who thinks it's still the leader, 
then I think we're fine.

Yes, that's true. I was thinking of the general failure scenario but perhaps we 
can ignore it because both parent and sub shard leaders are on the same JVM?

 Version conflict error during shard split test
 --

 Key: SOLR-4744
 URL: https://issues.apache.org/jira/browse/SOLR-4744
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 4.4

 Attachments: SOLR-4744.patch


 ShardSplitTest fails sometimes with the following error:
 {code}
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.861; 
 org.apache.solr.cloud.Overseer$ClusterStateUpdater; Update shard state 
 invoked for collection: collection1
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.861; 
 org.apache.solr.cloud.Overseer$ClusterStateUpdater; Update shard state shard1 
 to inactive
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.861; 
 org.apache.solr.cloud.Overseer$ClusterStateUpdater; Update shard state 
 shard1_0 to active
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.861; 
 org.apache.solr.cloud.Overseer$ClusterStateUpdater; Update shard state 
 shard1_1 to active
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.873; 
 org.apache.solr.update.processor.LogUpdateProcessor; [collection1] webapp= 
 path=/update params={wt=javabinversion=2} {add=[169 (1432319507166134272)]} 
 0 2
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.877; 
 org.apache.solr.common.cloud.ZkStateReader$2; A cluster state change: 
 WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 5)
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.877; 
 org.apache.solr.common.cloud.ZkStateReader$2; A cluster state change: 
 WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 5)
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.877; 
 org.apache.solr.common.cloud.ZkStateReader$2; A cluster state change: 
 WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 5)
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.877; 
 org.apache.solr.common.cloud.ZkStateReader$2; A cluster state change: 
 WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 5)
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.877; 
 org.apache.solr.common.cloud.ZkStateReader$2; A cluster state change: 
 WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 5)
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.877; 
 org.apache.solr.common.cloud.ZkStateReader$2; A cluster state change: 
 WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 5)
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.884; 
 org.apache.solr.update.processor.LogUpdateProcessor; 
 [collection1_shard1_1_replica1] webapp= path=/update 
 params={distrib.from=http://127.0.0.1:41028/collection1/&update.distrib=FROMLEADER&wt=javabin&distrib.from.parent=shard1&version=2}
  {} 0 1
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.885; 
 org.apache.solr.update.processor.LogUpdateProcessor; [collection1] webapp= 
 path=/update 
 params={distrib.from=http://127.0.0.1:41028/collection1/&update.distrib=FROMLEADER&wt=javabin&distrib.from.parent=shard1&version=2}
  {add=[169 (1432319507173474304)]} 0 2
 [junit4:junit4]   1 ERROR - 2013-04-14 19:05:26.885; 
 org.apache.solr.common.SolrException; shard update error StdNode: 
 http://127.0.0.1:41028/collection1_shard1_1_replica1/:org.apache.solr.common.SolrException:
  version conflict for 169 expected=1432319507173474304 actual=-1
 [junit4:junit4]   1  at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:404)
 [junit4:junit4]   1  at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:181)
 [junit4:junit4]   1  at 
 org.apache.solr.update.SolrCmdDistributor$1.call(SolrCmdDistributor.java:332)
 [junit4:junit4]   1  at 
 org.apache.solr.update.SolrCmdDistributor$1.call(SolrCmdDistributor.java:306)
 [junit4:junit4]   1  at 
 java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
 [junit4:junit4]   1  

[jira] [Updated] (SOLR-4693) Create a collections API to delete/cleanup a Slice

2013-05-28 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-4693:
---

Attachment: SOLR-4693.patch

Patch with the test.
As of now the API allows for the deletion of:
1. INACTIVE Slices
2. Slices which have no Range (Custom Hashing).

We should clean up the shard/slice naming confusion and stick to one term for all of 
SolrCloud.

 Create a collections API to delete/cleanup a Slice
 --

 Key: SOLR-4693
 URL: https://issues.apache.org/jira/browse/SOLR-4693
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Anshum Gupta
Assignee: Shalin Shekhar Mangar
 Attachments: SOLR-4693.patch, SOLR-4693.patch


 Have a collections API that cleans up a given shard.
 Among other places, this would be useful after a shard split call, to manage 
 the parent/original slice.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4816) Add directUpdate capability to CloudSolrServer

2013-05-28 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-4816:
-

Attachment: SOLR-4816.patch

OK, I've created the ConcurrentUpdateCloudSolrServer class and reverted the 
CloudSolrServer.

I think trying to get this functionality into CloudSolrServer without any 
back-compatibility issues was going to be a blocker.

I'll update the ticket name and description to reflect this.

The initial ConcurrentUpdateCloudSolrServer in this patch seems to run fine. 
More tests are needed though.

 Add directUpdate capability to CloudSolrServer
 

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816-sriesenberg.patch


 This issue changes CloudSolrServer so it can optionally route update requests 
 to the correct shard. This would be a nice feature to have to eliminate the 
 document routing overhead on the Solr servers.




[jira] [Comment Edited] (SOLR-4816) Add directUpdate capability to CloudSolrServer

2013-05-28 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13668361#comment-13668361
 ] 

Joel Bernstein edited comment on SOLR-4816 at 5/28/13 3:18 PM:
---

OK, I've created the ConcurrentUpdateCloudSolrServer class and reverted the 
CloudSolrServer.

I think trying to get this functionality into CloudSolrServer and not have any 
back compatibility issues was going to be a blocker.

I'll update the ticket name and description to reflect this.

The initial ConcurrentUpdateCloudSolrServer in this patch seems to run fine. 
More tests are needed though.

  was (Author: joel.bernstein):
OK, I've created the ConcurrentUpdateCloudSolrServer class and reverted the 
CloudSolrServer.

I think trying to get this functionality into CloudSolrServer and not have any 
back compatibility issues was going to be blocker.

I'll update the ticket name and description to reflect this.

The initial ConcurrentUpdateCloudSolrServer in this patch seems to run fine. 
More tests are needed though.
  
 Add directUpdate capability to CloudSolrServer
 

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816-sriesenberg.patch


 This issue changes CloudSolrServer so it can optionally route update requests 
 to the correct shard. This would be a nice feature to have to eliminate the 
 document routing overhead on the Solr servers.




[jira] [Updated] (SOLR-4816) ConcurrentUpdateCloudSolrServer

2013-05-28 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-4816:
-

Summary: ConcurrentUpdateCloudSolrServer  (was: Add directUpdate 
capability to CloudSolrServer)

 ConcurrentUpdateCloudSolrServer
 ---

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816-sriesenberg.patch


 This issue changes CloudSolrServer so it can optionally route update requests 
 to the correct shard. This would be a nice feature to have to eliminate the 
 document routing overhead on the Solr servers.




[jira] [Commented] (LUCENE-5021) NextDoc NPE safety when bulk collecting

2013-05-28 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13668370#comment-13668370
 ] 

Michael McCandless commented on LUCENE-5021:


If you are buffering up the docIDs and sending them in-batch somewhere else, 
then you'll need to also buffer up the score/freq/etc. and send those too.  
Really, you cannot call the scorer APIs after NO_MORE_DOCS was returned.
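
The point above can be sketched with a toy scorer using only the JDK: buffer the doc id together with its score while the iterator is still positioned on a live document, and never touch the scorer again once NO_MORE_DOCS has been returned. SimpleScorer and drain are illustrative stand-ins, not Lucene's actual Scorer API.

```java
import java.util.ArrayList;
import java.util.List;

public class BufferedCollect {
    static final int NO_MORE_DOCS = Integer.MAX_VALUE;

    // Stand-in for a Scorer: returns doc ids in order, then NO_MORE_DOCS.
    interface SimpleScorer {
        int nextDoc();
        float score(); // only valid while positioned on a real doc
    }

    // Buffer (docId, score) pairs; score() is read BEFORE advancing past the end.
    static List<float[]> drain(SimpleScorer scorer) {
        List<float[]> buffered = new ArrayList<>();
        int doc;
        while ((doc = scorer.nextDoc()) != NO_MORE_DOCS) {
            buffered.add(new float[] { doc, scorer.score() });
        }
        return buffered; // batch-process later without touching the scorer
    }

    public static void main(String[] args) {
        final int[] docs = { 3, 7, 12 };
        SimpleScorer s = new SimpleScorer() {
            int i = 0;
            int current = -1;
            public int nextDoc() {
                current = (i < docs.length) ? docs[i++] : NO_MORE_DOCS;
                return current;
            }
            public float score() { return current * 2.0f; }
        };
        List<float[]> out = drain(s);
        System.out.println(out.size());          // 3
        System.out.println((int) out.get(2)[0]); // 12
    }
}
```

Once drain returns, the buffered pairs can be shipped elsewhere in a batch; calling score() after exhaustion is exactly what fails with unsafe scorers.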

 NextDoc NPE safety when bulk collecting
 ---

 Key: LUCENE-5021
 URL: https://issues.apache.org/jira/browse/LUCENE-5021
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index, core/other
Affects Versions: 3.6.2
 Environment: Any with custom filters
Reporter: Alexis Torres Paderewski
  Labels: NPE, Null-Safety, Scorer

 Hello,
 I would like to apply ACLs once as a PostFilter, and I therefore need to batch 
 this call, since round trips would severely decrease performance.
 I tried to just stack the docs on the DelegatingCollector using this collect:
 @Override
 public void collect(int doc) throws IOException {
   while ((doc = scorer.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) {
     docs.put(getDocumentId(doc), doc);
   }
   batchCollect();
 }
 Depending on the Scorer, it may or may not work. It works when the Scorer is 
 safe, that is, when it handles the case in which the scorer is exhausted and 
 is called once again after exhaustion.
 This is the case for some scorers (e.g. DisjunctionMaxScorer, ConstantScorer):
 if (numScorers == 0) return doc = NO_MORE_DOCS; 
 On the other hand, when using the DisjunctionSumScorer, it either asserts on 
 NO_MORE_DOCS or throws an NPE.
 Shouldn't we copy the DisjunctionMaxScorer mechanism to protect nextDoc on an 
 exhausted iterator, either using the current doc or checking the number of 
 subScorers?




[jira] [Updated] (SOLR-4816) ConcurrentUpdateCloudSolrServer

2013-05-28 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-4816:
-

Description: 
This issue adds a new Solr Cloud client called the 
ConcurrentUpdateCloudSolrServer. This cloud client implements document routing 
in the client so that document routing overhead is eliminated on the Solr 
servers. Documents are batched up for each shard and then each batch is sent in 
its own thread. 

With this client, Solr Cloud indexing throughput should scale linearly with 
cluster size.

Sample usage:

ConcurrentUpdateCloudSolrServer client = new 
ConcurrentUpdateCloudSolrServer(zkHostAddress);
UpdateRequest request = new UpdateRequest();
SolrInputDocument doc = new SolrInputDocument();
doc.addField("id", 2);
doc.addField("manu", "BMW");
request.add(doc);
NamedList response = client.request(request);
NamedList exceptions = response.get("exceptions"); // any exceptions from the shards
NamedList responses = response.get("responses"); // the responses from shards without exception
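
The batch-per-shard, thread-per-batch idea described above can be sketched with plain JDK classes. The route function here is a hypothetical hash router (Solr's real compositeId router hashes differently), and "sending" a batch is reduced to a print; nothing below is SolrJ API.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ShardRouter {
    // Illustrative hash routing: map a document id to one of numShards buckets.
    static int route(String id, int numShards) {
        return Math.floorMod(id.hashCode(), numShards);
    }

    // Group document ids into one batch per shard.
    static Map<Integer, List<String>> batch(List<String> ids, int numShards) {
        Map<Integer, List<String>> batches = new HashMap<>();
        for (String id : ids) {
            batches.computeIfAbsent(route(id, numShards), k -> new ArrayList<>()).add(id);
        }
        return batches;
    }

    public static void main(String[] args) throws Exception {
        List<String> ids = Arrays.asList("1", "2", "3", "4", "5", "6");
        Map<Integer, List<String>> batches = batch(ids, 2);
        // Each batch is "sent" on its own thread, as the description above sketches.
        ExecutorService pool = Executors.newFixedThreadPool(batches.size());
        for (Map.Entry<Integer, List<String>> e : batches.entrySet()) {
            pool.submit(() ->
                System.out.println("shard " + e.getKey() + " <- " + e.getValue().size() + " docs"));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Because routing happens in the client, the receiving server no longer has to forward each document to its correct shard, which is the overhead the issue aims to eliminate.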





  was:This issue changes CloudSolrServer so it can optionally route update 
requests to the correct shard. This would be a nice feature to have to 
eliminate the document routing overhead on the Solr servers.


 ConcurrentUpdateCloudSolrServer
 ---

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816-sriesenberg.patch


 This issue adds a new Solr Cloud client called the 
 ConcurrentUpdateCloudSolrServer. This cloud client implements document 
 routing in the client so that document routing overhead is eliminated on the 
 Solr servers. Documents are batched up for each shard and then each batch is 
 sent in its own thread. 
 With this client, Solr Cloud indexing throughput should scale linearly with 
 cluster size.
 Sample usage:
 ConcurrentUpdateCloudSolrServer client = new 
 ConcurrentUpdateCloudSolrServer(zkHostAddress);
 UpdateRequest request = new UpdateRequest();
 SolrInputDocument doc = new SolrInputDocument();
 doc.addField("id", 2);
 doc.addField("manu", "BMW");
 request.add(doc);
 NamedList response = client.request(request);
 NamedList exceptions = response.get("exceptions"); // any exceptions from the shards
 NamedList responses = response.get("responses"); // the responses from shards without exception
 




[jira] [Updated] (SOLR-4816) ConcurrentUpdateCloudSolrServer

2013-05-28 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-4816:
-

Description: 
This issue adds a new Solr Cloud client called the 
ConcurrentUpdateCloudSolrServer. This cloud client implements document routing 
in the client so that document routing overhead is eliminated on the Solr 
servers. Documents are batched up for each shard and then each batch is sent in 
its own thread. 

With this client, Solr Cloud indexing throughput should scale linearly with 
cluster size.

Sample usage:

ConcurrentUpdateCloudSolrServer client = new 
ConcurrentUpdateCloudSolrServer(zkHostAddress);
UpdateRequest request = new UpdateRequest();
SolrInputDocument doc = new SolrInputDocument();
doc.addField("id", 2);
doc.addField("manu", "BMW");
request.add(doc);
NamedList response = client.request(request);
NamedList exceptions = response.get("exceptions"); // any exceptions from the shards
NamedList responses = response.get("responses"); // the responses from shards without exception





  was:
This issue adds a new Solr Cloud client called the 
ConcurrentUpdateCloudSolrServer. This cloud client implements document routing 
in the client so that document routing overhead is eliminated on the Solr 
servers. Documents are batched up for each shard and then each batch is sent in 
it's own thread. 

With this client Solr Cloud indexing throughput should scale linearly with 
cluster size.

Sample usage:

ConcurrentUpdateCloudServer client = new 
ConcurrentUpdateCloudSolrServer(zkHostAddress);
UpdateRequest request = new UpdateRequest();
SolrInputDocument doc = new SolrInputDocument();
doc.addField(id, 2);
doc.addField(manu,BMW);
request.add(doc);
NamedList response = client.request(request);
NamedList exceptions = response.get(exceptions); // contains any exceptions 
from the shards
NamedList responses = response.get(responses); // contains the responses from 
shards without exception.






 ConcurrentUpdateCloudSolrServer
 ---

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816-sriesenberg.patch


 This issue adds a new Solr Cloud client called the 
 ConcurrentUpdateCloudSolrServer. This cloud client implements document 
 routing in the client so that document routing overhead is eliminated on the 
 Solr servers. Documents are batched up for each shard and then each batch is 
 sent in its own thread. 
 With this client, Solr Cloud indexing throughput should scale linearly with 
 cluster size.
 Sample usage:
 ConcurrentUpdateCloudSolrServer client = new 
 ConcurrentUpdateCloudSolrServer(zkHostAddress);
 UpdateRequest request = new UpdateRequest();
 SolrInputDocument doc = new SolrInputDocument();
 doc.addField("id", 2);
 doc.addField("manu", "BMW");
 request.add(doc);
 NamedList response = client.request(request);
 NamedList exceptions = response.get("exceptions"); // any exceptions from the shards
 NamedList responses = response.get("responses"); // the responses from shards without exception
 




[jira] [Updated] (SOLR-4816) ConcurrentUpdateCloudSolrServer

2013-05-28 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-4816:
-

Description: 
This issue adds a new Solr Cloud client called the 
ConcurrentUpdateCloudSolrServer. This Solr Cloud client implements document 
routing in the client so that document routing overhead is eliminated on the 
Solr servers. Documents are batched up for each shard and then each batch is 
sent in its own thread. 

With this client, Solr Cloud indexing throughput should scale linearly with 
cluster size.

Sample usage:

ConcurrentUpdateCloudSolrServer client = new 
ConcurrentUpdateCloudSolrServer(zkHostAddress);
UpdateRequest request = new UpdateRequest();
SolrInputDocument doc = new SolrInputDocument();
doc.addField("id", 2);
doc.addField("manu", "BMW");
request.add(doc);
NamedList response = client.request(request);
NamedList exceptions = response.get("exceptions"); // any exceptions from the shards
NamedList responses = response.get("responses"); // the responses from shards without exception





  was:
This issue adds a new Solr Cloud client called the 
ConcurrentUpdateCloudSolrServer. This cloud client implements document routing 
in the client so that document routing overhead is eliminated on the Solr 
servers. Documents are batched up for each shard and then each batch is sent in 
it's own thread. 

With this client, Solr Cloud indexing throughput should scale linearly with 
cluster size.

Sample usage:

ConcurrentUpdateCloudServer client = new 
ConcurrentUpdateCloudSolrServer(zkHostAddress);
UpdateRequest request = new UpdateRequest();
SolrInputDocument doc = new SolrInputDocument();
doc.addField(id, 2);
doc.addField(manu,BMW);
request.add(doc);
NamedList response = client.request(request);
NamedList exceptions = response.get(exceptions); // contains any exceptions 
from the shards
NamedList responses = response.get(responses); // contains the responses from 
shards without exception.






 ConcurrentUpdateCloudSolrServer
 ---

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816-sriesenberg.patch


 This issue adds a new Solr Cloud client called the 
 ConcurrentUpdateCloudSolrServer. This Solr Cloud client implements document 
 routing in the client so that document routing overhead is eliminated on the 
 Solr servers. Documents are batched up for each shard and then each batch is 
 sent in its own thread. 
 With this client, Solr Cloud indexing throughput should scale linearly with 
 cluster size.
 Sample usage:
 ConcurrentUpdateCloudSolrServer client = new 
 ConcurrentUpdateCloudSolrServer(zkHostAddress);
 UpdateRequest request = new UpdateRequest();
 SolrInputDocument doc = new SolrInputDocument();
 doc.addField("id", 2);
 doc.addField("manu", "BMW");
 request.add(doc);
 NamedList response = client.request(request);
 NamedList exceptions = response.get("exceptions"); // any exceptions from the shards
 NamedList responses = response.get("responses"); // the responses from shards without exception
 




[jira] [Updated] (SOLR-4744) Version conflict error during shard split test

2013-05-28 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-4744:


Fix Version/s: 4.3.1

 Version conflict error during shard split test
 --

 Key: SOLR-4744
 URL: https://issues.apache.org/jira/browse/SOLR-4744
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 4.4, 4.3.1

 Attachments: SOLR-4744.patch


 ShardSplitTest fails sometimes with the following error:
 {code}
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.861; 
 org.apache.solr.cloud.Overseer$ClusterStateUpdater; Update shard state 
 invoked for collection: collection1
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.861; 
 org.apache.solr.cloud.Overseer$ClusterStateUpdater; Update shard state shard1 
 to inactive
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.861; 
 org.apache.solr.cloud.Overseer$ClusterStateUpdater; Update shard state 
 shard1_0 to active
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.861; 
 org.apache.solr.cloud.Overseer$ClusterStateUpdater; Update shard state 
 shard1_1 to active
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.873; 
 org.apache.solr.update.processor.LogUpdateProcessor; [collection1] webapp= 
  path=/update params={wt=javabin&version=2} {add=[169 (1432319507166134272)]} 
 0 2
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.877; 
 org.apache.solr.common.cloud.ZkStateReader$2; A cluster state change: 
 WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 5)
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.877; 
 org.apache.solr.common.cloud.ZkStateReader$2; A cluster state change: 
 WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 5)
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.877; 
 org.apache.solr.common.cloud.ZkStateReader$2; A cluster state change: 
 WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 5)
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.877; 
 org.apache.solr.common.cloud.ZkStateReader$2; A cluster state change: 
 WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 5)
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.877; 
 org.apache.solr.common.cloud.ZkStateReader$2; A cluster state change: 
 WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 5)
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.877; 
 org.apache.solr.common.cloud.ZkStateReader$2; A cluster state change: 
 WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 5)
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.884; 
 org.apache.solr.update.processor.LogUpdateProcessor; 
 [collection1_shard1_1_replica1] webapp= path=/update 
  params={distrib.from=http://127.0.0.1:41028/collection1/&update.distrib=FROMLEADER&wt=javabin&distrib.from.parent=shard1&version=2}
  {} 0 1
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.885; 
 org.apache.solr.update.processor.LogUpdateProcessor; [collection1] webapp= 
 path=/update 
  params={distrib.from=http://127.0.0.1:41028/collection1/&update.distrib=FROMLEADER&wt=javabin&distrib.from.parent=shard1&version=2}
  {add=[169 (1432319507173474304)]} 0 2
 [junit4:junit4]   1 ERROR - 2013-04-14 19:05:26.885; 
 org.apache.solr.common.SolrException; shard update error StdNode: 
 http://127.0.0.1:41028/collection1_shard1_1_replica1/:org.apache.solr.common.SolrException:
  version conflict for 169 expected=1432319507173474304 actual=-1
 [junit4:junit4]   1  at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:404)
 [junit4:junit4]   1  at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:181)
 [junit4:junit4]   1  at 
 org.apache.solr.update.SolrCmdDistributor$1.call(SolrCmdDistributor.java:332)
 [junit4:junit4]   1  at 
 org.apache.solr.update.SolrCmdDistributor$1.call(SolrCmdDistributor.java:306)
 [junit4:junit4]   1  at 
 java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
 [junit4:junit4]   1  at 
 java.util.concurrent.FutureTask.run(FutureTask.java:166)
 [junit4:junit4]   1  at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 [junit4:junit4]   1  at 
 java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
 [junit4:junit4]   1  at 
 java.util.concurrent.FutureTask.run(FutureTask.java:166)
 [junit4:junit4]   1  at 
 

[jira] [Updated] (SOLR-4858) updateLog + core reload + deleteByQuery = leaked directory

2013-05-28 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-4858:


Fix Version/s: 4.3.1

 updateLog + core reload + deleteByQuery = leaked directory
 --

 Key: SOLR-4858
 URL: https://issues.apache.org/jira/browse/SOLR-4858
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.2.1
Reporter: Hoss Man
 Fix For: 4.3.1

 Attachments: SOLR-4858.patch


 I haven't been able to make sense of this yet, but trying to track down 
 another bug led me to discover that the following combination leads to 
 problems...
 * updateLog enabled
 * do a core reload
 * do a delete by query \*:\*
 ...leave out any one of the three, and everything works fine.




[jira] [Updated] (SOLR-4816) ConcurrentUpdateCloudSolrServer

2013-05-28 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-4816:
-

Description: 
This issue adds a new Solr Cloud client called the 
ConcurrentUpdateCloudSolrServer. This Solr Cloud client implements document 
routing in the client so that document routing overhead is eliminated on the 
Solr servers. Documents are batched up for each shard and then each batch is 
sent in its own thread. 

With this client, Solr Cloud indexing throughput should scale linearly with 
cluster size.

This client also has robust failover built in because the actual requests are 
made using the LBHttpSolrServer. The list of URLs used for the request to each 
shard begins with the leader and is followed by that shard's replicas, so the 
leader is tried first and, if it fails, the replicas are tried.

Sample usage:

ConcurrentUpdateCloudSolrServer client = new 
ConcurrentUpdateCloudSolrServer(zkHostAddress);
UpdateRequest request = new UpdateRequest();
SolrInputDocument doc = new SolrInputDocument();
doc.addField("id", 2);
doc.addField("manu", "BMW");
request.add(doc);
NamedList response = client.request(request);
NamedList exceptions = response.get("exceptions"); // any exceptions from the shards
NamedList responses = response.get("responses"); // the responses from shards without exception
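
The leader-first ordering described in the failover paragraph above can be sketched in a few lines; urlsForShard and the host names are illustrative stand-ins, not the SolrJ LBHttpSolrServer API.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class FailoverOrder {
    // Build the URL list a load-balancing client would try in order:
    // the shard leader first, then that shard's replicas as fallbacks.
    static List<String> urlsForShard(String leader, List<String> replicas) {
        List<String> urls = new ArrayList<>();
        urls.add(leader);       // tried first
        urls.addAll(replicas);  // tried in order if the leader fails
        return urls;
    }

    public static void main(String[] args) {
        List<String> urls = urlsForShard(
            "http://host1:8983/solr/shard1",
            Arrays.asList("http://host2:8983/solr/shard1",
                          "http://host3:8983/solr/shard1"));
        System.out.println(urls.get(0)); // leader comes first
    }
}
```

A round-robin client fed this list degrades gracefully: a dead leader simply shifts traffic to the replicas without any change on the caller's side.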





  was:
This issue adds a new Solr Cloud client called the 
ConcurrentUpdateCloudSolrServer. This Solr Cloud client implements document 
routing in the client so that document routing overhead is eliminated on the 
Solr servers. Documents are batched up for each shard and then each batch is 
sent in it's own thread. 

With this client, Solr Cloud indexing throughput should scale linearly with 
cluster size.

Sample usage:

ConcurrentUpdateCloudServer client = new 
ConcurrentUpdateCloudSolrServer(zkHostAddress);
UpdateRequest request = new UpdateRequest();
SolrInputDocument doc = new SolrInputDocument();
doc.addField(id, 2);
doc.addField(manu,BMW);
request.add(doc);
NamedList response = client.request(request);
NamedList exceptions = response.get(exceptions); // contains any exceptions 
from the shards
NamedList responses = response.get(responses); // contains the responses from 
shards without exception.






 ConcurrentUpdateCloudSolrServer
 ---

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816-sriesenberg.patch


 This issue adds a new Solr Cloud client called the 
 ConcurrentUpdateCloudSolrServer. This Solr Cloud client implements document 
 routing in the client so that document routing overhead is eliminated on the 
 Solr servers. Documents are batched up for each shard and then each batch is 
 sent in it's own thread. 
 With this client, Solr Cloud indexing throughput should scale linearly with 
 cluster size.
 This client also has robust failover built-in because the actual requests are 
 made using the LBHttpSolrServer. The list of urls used for the request to 
 each shard begins with the leader and is followed by that shard's replicas. 
 So the leader will be tried first and if it fails it will try the replicas.
 Sample usage:
 ConcurrentUpdateCloudSolrServer client = new 
 ConcurrentUpdateCloudSolrServer(zkHostAddress);
 UpdateRequest request = new UpdateRequest();
 SolrInputDocument doc = new SolrInputDocument();
 doc.addField("id", 2);
 doc.addField("manu", "BMW");
 request.add(doc);
 NamedList response = client.request(request);
 NamedList exceptions = response.get("exceptions"); // contains any exceptions 
 from the shards
 NamedList responses = response.get("responses"); // contains the responses 
 from shards without exception.
 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 515 - Still Failing!

2013-05-28 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/515/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 9215 lines...]
[junit4:junit4] ERROR: JVM J0 ended with an exception, command line: 
/Library/Java/JavaVirtualMachines/jdk1.7.0_21.jdk/Contents/Home/jre/bin/java 
-XX:-UseCompressedOops -XX:+UseParallelGC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/heapdumps
 -Dtests.prefix=tests -Dtests.seed=52D426A5AEE165F1 -Xmx512M -Dtests.iters= 
-Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random 
-Dtests.postingsformat=random -Dtests.docvaluesformat=random 
-Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random 
-Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=5.0 
-Dtests.cleanthreads=perClass 
-Djava.util.logging.config.file=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.slow=true 
-Dtests.asserts.gracious=false -Dtests.multiplier=1 -DtempDir=. 
-Djava.io.tmpdir=. 
-Djunit4.tempDir=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test/temp
 
-Dclover.db.dir=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/clover/db
 -Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Djava.security.policy=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/lucene/tools/junit4/tests.policy
 -Dlucene.version=5.0-SNAPSHOT -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -classpath 

[jira] [Commented] (SOLR-4744) Version conflict error during shard split test

2013-05-28 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13668426#comment-13668426
 ] 

Yonik Seeley commented on SOLR-4744:


bq. Yes, that's true. I was thinking of the general failure scenario but 
perhaps we can ignore it because both parent and sub shard leaders are on the 
same JVM?

Yeah, I think that's best for now.  If it actually becomes an issue (which 
should be really rare), we could just cancel the split and maybe retry it from 
the start.

 Version conflict error during shard split test
 --

 Key: SOLR-4744
 URL: https://issues.apache.org/jira/browse/SOLR-4744
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 4.4, 4.3.1

 Attachments: SOLR-4744.patch


 ShardSplitTest fails sometimes with the following error:
 {code}
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.861; 
 org.apache.solr.cloud.Overseer$ClusterStateUpdater; Update shard state 
 invoked for collection: collection1
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.861; 
 org.apache.solr.cloud.Overseer$ClusterStateUpdater; Update shard state shard1 
 to inactive
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.861; 
 org.apache.solr.cloud.Overseer$ClusterStateUpdater; Update shard state 
 shard1_0 to active
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.861; 
 org.apache.solr.cloud.Overseer$ClusterStateUpdater; Update shard state 
 shard1_1 to active
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.873; 
 org.apache.solr.update.processor.LogUpdateProcessor; [collection1] webapp= 
 path=/update params={wt=javabin&version=2} {add=[169 (1432319507166134272)]} 
 0 2
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.877; 
 org.apache.solr.common.cloud.ZkStateReader$2; A cluster state change: 
 WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 5)
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.877; 
 org.apache.solr.common.cloud.ZkStateReader$2; A cluster state change: 
 WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 5)
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.877; 
 org.apache.solr.common.cloud.ZkStateReader$2; A cluster state change: 
 WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 5)
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.877; 
 org.apache.solr.common.cloud.ZkStateReader$2; A cluster state change: 
 WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 5)
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.877; 
 org.apache.solr.common.cloud.ZkStateReader$2; A cluster state change: 
 WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 5)
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.877; 
 org.apache.solr.common.cloud.ZkStateReader$2; A cluster state change: 
 WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 5)
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.884; 
 org.apache.solr.update.processor.LogUpdateProcessor; 
 [collection1_shard1_1_replica1] webapp= path=/update 
 params={distrib.from=http://127.0.0.1:41028/collection1/&update.distrib=FROMLEADER&wt=javabin&distrib.from.parent=shard1&version=2}
  {} 0 1
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.885; 
 org.apache.solr.update.processor.LogUpdateProcessor; [collection1] webapp= 
 path=/update 
 params={distrib.from=http://127.0.0.1:41028/collection1/&update.distrib=FROMLEADER&wt=javabin&distrib.from.parent=shard1&version=2}
  {add=[169 (1432319507173474304)]} 0 2
 [junit4:junit4]   1 ERROR - 2013-04-14 19:05:26.885; 
 org.apache.solr.common.SolrException; shard update error StdNode: 
 http://127.0.0.1:41028/collection1_shard1_1_replica1/:org.apache.solr.common.SolrException:
  version conflict for 169 expected=1432319507173474304 actual=-1
 [junit4:junit4]   1  at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:404)
 [junit4:junit4]   1  at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:181)
 [junit4:junit4]   1  at 
 org.apache.solr.update.SolrCmdDistributor$1.call(SolrCmdDistributor.java:332)
 [junit4:junit4]   1  at 
 org.apache.solr.update.SolrCmdDistributor$1.call(SolrCmdDistributor.java:306)
 [junit4:junit4]   1  at 
 java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
 [junit4:junit4]   1  at 
 

[jira] [Updated] (SOLR-4816) ConcurrentUpdateCloudSolrServer

2013-05-28 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-4816:
-

Attachment: SOLR-4816.patch

Formatted according to the official rules, removed extra imports, but no other 
changes.

 ConcurrentUpdateCloudSolrServer
 ---

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816-sriesenberg.patch


 This issue adds a new Solr Cloud client called the 
 ConcurrentUpdateCloudSolrServer. This Solr Cloud client implements document 
 routing in the client so that document routing overhead is eliminated on the 
 Solr servers. Documents are batched up for each shard and then each batch is 
 sent in its own thread. 
 With this client, Solr Cloud indexing throughput should scale linearly with 
 cluster size.
 This client also has robust failover built-in because the actual requests are 
 made using the LBHttpSolrServer. The list of urls used for the request to 
 each shard begins with the leader and is followed by that shard's replicas. 
 So the leader will be tried first and if it fails it will try the replicas.
 Sample usage:
 ConcurrentUpdateCloudSolrServer client = new 
 ConcurrentUpdateCloudSolrServer(zkHostAddress);
 UpdateRequest request = new UpdateRequest();
 SolrInputDocument doc = new SolrInputDocument();
 doc.addField("id", 2);
 doc.addField("manu", "BMW");
 request.add(doc);
 NamedList response = client.request(request);
 NamedList exceptions = response.get("exceptions"); // contains any exceptions 
 from the shards
 NamedList responses = response.get("responses"); // contains the responses 
 from shards without exception.
 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4859) MinFieldValueUpdateProcessorFactory and MaxFieldValueUpdateProcessorFactory don't do numeric comparison for numeric fields

2013-05-28 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13668491#comment-13668491
 ] 

Hoss Man commented on SOLR-4859:


The processors are deliberately agnostic to the fieldType associated with the 
field name of the input values because there is no requirement that the field 
name be in the schema at all -- it might be a purely virtual field that 
exists only during input/processing of the document, and the resulting values 
may later be used in some other field.

the crux of the problem here seems to simply be in the JsonLoader forcing 
everything to be a string.  (If, for example, you use SolrJ to send Numeric 
objects instead, that should work fine).
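The effect Hoss describes is easy to see in plain Java: the same values yield a different maximum depending on whether they arrive as strings or as numbers. This standalone snippet (not Solr code) uses the sizes from the test scenario quoted in this thread.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class MaxComparisonDemo {
    public static void main(String[] args) {
        // As the JSON loader currently delivers them: strings.
        List<String> asStrings = Arrays.asList("200", "999", "101", "199", "1000");
        // As a client could deliver them via SolrJ: integers.
        List<Integer> asInts = Arrays.asList(200, 999, 101, 199, 1000);

        // Lexicographic comparison: "999" sorts after "1000" because '9' > '1'.
        System.out.println(Collections.max(asStrings)); // prints 999
        // Numeric comparison gives the answer the reporter expects.
        System.out.println(Collections.max(asInts));    // prints 1000
    }
}
```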

bq. It would be nice to at least have a comparison override: 
compareNumeric=true.

that type of option shouldn't exist on these processors -- they do one thing, 
and one thing only.  As an alternative solution (for people who want to configure 
processor chains that act on numeric values even if the client sends strings) 
there should be things like ParseIntegerUpdateProcessors

FYI: i had a conversation with [~sar...@syr.edu] recently where we talked about 
some of these ideas ... i think he's planning on working on this soon actually.




 MinFieldValueUpdateProcessorFactory and MaxFieldValueUpdateProcessorFactory 
 don't do numeric comparison for numeric fields
 --

 Key: SOLR-4859
 URL: https://issues.apache.org/jira/browse/SOLR-4859
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 4.3
Reporter: Jack Krupansky

 MinFieldValueUpdateProcessorFactory and MaxFieldValueUpdateProcessorFactory 
 are advertised as supporting numeric comparisons, but this doesn't work - 
 only string comparison is available - and doesn't seem possible, although the 
 unit tests show it is possible at the unit test level.
 The problem is that numeric processing is dependent on the SolrInputDocument 
 containing a list of numeric values, but at least with both the current XML 
 and JSON loaders, only string values can be loaded.
 Test scenario.
 1. Use Solr 4.3 example.
 2. Add following update processor chain to solrconfig:
 {code}
   <updateRequestProcessorChain name="max-only-num">
     <processor class="solr.MaxFieldValueUpdateProcessorFactory">
       <str name="fieldName">sizes_i</str>
     </processor>
     <processor class="solr.LogUpdateProcessorFactory" />
     <processor class="solr.RunUpdateProcessorFactory" />
   </updateRequestProcessorChain>
 {code}
 3. Perform this update request:
 {code}
   curl "http://localhost:8983/solr/update?commit=true&update.chain=max-only-num" \
     -H 'Content-type:application/json' -d '
     [{"id": "doc-1",
       "title_s": "Hello World",
       "sizes_i": [200, 999, 101, 199, 1000]}]'
 {code}
 Note that the values are JSON integer values.
 4. Perform this query:
 {code}
 curl "http://localhost:8983/solr/select/?q=*:*&indent=true&wt=json"
 {code}
 Shows this result:
 {code}
   {"response":{"numFound":1,"start":0,"docs":[
     {
       "id":"doc-1",
       "title_s":"Hello World",
       "sizes_i":999,
       "_version_":1436094187405574144}]
   }}
 {code}
 sizes_i should be 1000, not 999.
 Alternative update tests:
 {code}
   curl "http://localhost:8983/solr/update?commit=true&update.chain=max-only-num" \
     -H 'Content-type:application/json' -d '
     [{"id": "doc-1",
       "title_s": "Hello World",
       "sizes_i": 200,
       "sizes_i": 999,
       "sizes_i": 101,
       "sizes_i": 199,
       "sizes_i": 1000}]'
 {code}
 and
 {code}
   curl "http://localhost:8983/solr/update?commit=true&update.chain=max-only-num" \
     -H 'Content-type:application/xml' -d '
     <add>
       <doc>
         <field name="id">doc-1</field>
         <field name="title_s">Hello World</field>
         <field name="sizes_i">42</field>
         <field name="sizes_i">128</field>
         <field name="sizes_i">-3</field>
       </doc>
     </add>'
 {code}
 In XML, of course, there is no way for the input values to be anything other 
 than strings (text.)
 The JSON loader does parse the values with their type, but immediately 
 converts the values to strings:
 {code}
 private Object parseSingleFieldValue(int ev) throws IOException {
   switch (ev) {
     case JSONParser.STRING:
       return parser.getString();
     case JSONParser.LONG:
     case JSONParser.NUMBER:
     case JSONParser.BIGNUMBER:
       return parser.getNumberChars().toString();
     case JSONParser.BOOLEAN:
       // for legacy reasons, single values are expected to be strings
       return Boolean.toString(parser.getBoolean());
     case JSONParser.NULL:
       parser.getNull();
       return null;
     case JSONParser.ARRAY_START:
       return parseArrayFieldValue(ev);
     default:
       throw new 

[jira] [Commented] (SOLR-4744) Version conflict error during shard split test

2013-05-28 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13668522#comment-13668522
 ] 

Shalin Shekhar Mangar commented on SOLR-4744:
-

Okay, I'll put up a patch.

 Version conflict error during shard split test
 --

 Key: SOLR-4744
 URL: https://issues.apache.org/jira/browse/SOLR-4744
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 4.4, 4.3.1

 Attachments: SOLR-4744.patch


 ShardSplitTest fails sometimes with the following error:
 {code}
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.861; 
 org.apache.solr.cloud.Overseer$ClusterStateUpdater; Update shard state 
 invoked for collection: collection1
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.861; 
 org.apache.solr.cloud.Overseer$ClusterStateUpdater; Update shard state shard1 
 to inactive
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.861; 
 org.apache.solr.cloud.Overseer$ClusterStateUpdater; Update shard state 
 shard1_0 to active
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.861; 
 org.apache.solr.cloud.Overseer$ClusterStateUpdater; Update shard state 
 shard1_1 to active
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.873; 
 org.apache.solr.update.processor.LogUpdateProcessor; [collection1] webapp= 
 path=/update params={wt=javabin&version=2} {add=[169 (1432319507166134272)]} 
 0 2
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.877; 
 org.apache.solr.common.cloud.ZkStateReader$2; A cluster state change: 
 WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 5)
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.877; 
 org.apache.solr.common.cloud.ZkStateReader$2; A cluster state change: 
 WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 5)
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.877; 
 org.apache.solr.common.cloud.ZkStateReader$2; A cluster state change: 
 WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 5)
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.877; 
 org.apache.solr.common.cloud.ZkStateReader$2; A cluster state change: 
 WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 5)
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.877; 
 org.apache.solr.common.cloud.ZkStateReader$2; A cluster state change: 
 WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 5)
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.877; 
 org.apache.solr.common.cloud.ZkStateReader$2; A cluster state change: 
 WatchedEvent state:SyncConnected type:NodeDataChanged 
 path:/clusterstate.json, has occurred - updating... (live nodes size: 5)
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.884; 
 org.apache.solr.update.processor.LogUpdateProcessor; 
 [collection1_shard1_1_replica1] webapp= path=/update 
 params={distrib.from=http://127.0.0.1:41028/collection1/&update.distrib=FROMLEADER&wt=javabin&distrib.from.parent=shard1&version=2}
  {} 0 1
 [junit4:junit4]   1 INFO  - 2013-04-14 19:05:26.885; 
 org.apache.solr.update.processor.LogUpdateProcessor; [collection1] webapp= 
 path=/update 
 params={distrib.from=http://127.0.0.1:41028/collection1/&update.distrib=FROMLEADER&wt=javabin&distrib.from.parent=shard1&version=2}
  {add=[169 (1432319507173474304)]} 0 2
 [junit4:junit4]   1 ERROR - 2013-04-14 19:05:26.885; 
 org.apache.solr.common.SolrException; shard update error StdNode: 
 http://127.0.0.1:41028/collection1_shard1_1_replica1/:org.apache.solr.common.SolrException:
  version conflict for 169 expected=1432319507173474304 actual=-1
 [junit4:junit4]   1  at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:404)
 [junit4:junit4]   1  at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:181)
 [junit4:junit4]   1  at 
 org.apache.solr.update.SolrCmdDistributor$1.call(SolrCmdDistributor.java:332)
 [junit4:junit4]   1  at 
 org.apache.solr.update.SolrCmdDistributor$1.call(SolrCmdDistributor.java:306)
 [junit4:junit4]   1  at 
 java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
 [junit4:junit4]   1  at 
 java.util.concurrent.FutureTask.run(FutureTask.java:166)
 [junit4:junit4]   1  at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 [junit4:junit4]   1  at 
 java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
 [junit4:junit4]   1  at 
 

[jira] [Commented] (SOLR-4859) MinFieldValueUpdateProcessorFactory and MaxFieldValueUpdateProcessorFactory don't do numeric comparison for numeric fields

2013-05-28 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13668545#comment-13668545
 ] 

Steve Rowe commented on SOLR-4859:
--

bq. FYI: i had a conversation with Steve Rowe recently where we talked about some 
of these ideas ... i think he's planning on working on this soon actually.

Yes, as part of SOLR-3250, I plan on fixing {{JsonLoader}} to not force 
everything to be a {{String}}, and adding date/numeric/boolean field update 
processor factories that will convert {{String}} values to the field's type 
({{typeClass}} in field update processor selector lingo), if the field 
matches a schema field.  (Hoss mentioned {{String}}->{{Foo}} parsing field 
update processor factories as useful possibilities in the first comment on 
SOLR-2802.)
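Conceptually, the parsing step Steve describes would turn String field values into typed values before processors like Min/Max run. The sketch below is a hypothetical, Solr-free illustration of that idea; the class and method names are invented and do not reflect the eventual SOLR-3250 API.

```java
import java.util.List;
import java.util.stream.Collectors;

public class ParseIntSketch {
    // Hypothetical "parse integers" step: convert String values to Integer
    // when every value parses, otherwise pass the input through unchanged.
    static List<Object> parseInts(List<Object> values) {
        try {
            return values.stream()
                    .map(v -> (Object) Integer.valueOf(v.toString()))
                    .collect(Collectors.toList());
        } catch (NumberFormatException e) {
            return values; // not all numeric: leave them for string-typed fields
        }
    }

    public static void main(String[] args) {
        System.out.println(parseInts(List.of("200", "999", "1000"))); // [200, 999, 1000]
        System.out.println(parseInts(List.of("abc", "1")));           // [abc, 1]
    }
}
```

After such a step, a downstream max-value processor would see Integers and compare them numerically rather than lexicographically.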

 MinFieldValueUpdateProcessorFactory and MaxFieldValueUpdateProcessorFactory 
 don't do numeric comparison for numeric fields
 --

 Key: SOLR-4859
 URL: https://issues.apache.org/jira/browse/SOLR-4859
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 4.3
Reporter: Jack Krupansky

 MinFieldValueUpdateProcessorFactory and MaxFieldValueUpdateProcessorFactory 
 are advertised as supporting numeric comparisons, but this doesn't work - 
 only string comparison is available - and doesn't seem possible, although the 
 unit tests show it is possible at the unit test level.
 The problem is that numeric processing is dependent on the SolrInputDocument 
 containing a list of numeric values, but at least with both the current XML 
 and JSON loaders, only string values can be loaded.
 Test scenario.
 1. Use Solr 4.3 example.
 2. Add following update processor chain to solrconfig:
 {code}
   <updateRequestProcessorChain name="max-only-num">
     <processor class="solr.MaxFieldValueUpdateProcessorFactory">
       <str name="fieldName">sizes_i</str>
     </processor>
     <processor class="solr.LogUpdateProcessorFactory" />
     <processor class="solr.RunUpdateProcessorFactory" />
   </updateRequestProcessorChain>
 {code}
 3. Perform this update request:
 {code}
   curl "http://localhost:8983/solr/update?commit=true&update.chain=max-only-num" \
     -H 'Content-type:application/json' -d '
     [{"id": "doc-1",
       "title_s": "Hello World",
       "sizes_i": [200, 999, 101, 199, 1000]}]'
 {code}
 Note that the values are JSON integer values.
 4. Perform this query:
 {code}
 curl "http://localhost:8983/solr/select/?q=*:*&indent=true&wt=json"
 {code}
 Shows this result:
 {code}
   {"response":{"numFound":1,"start":0,"docs":[
     {
       "id":"doc-1",
       "title_s":"Hello World",
       "sizes_i":999,
       "_version_":1436094187405574144}]
   }}
 {code}
 sizes_i should be 1000, not 999.
 Alternative update tests:
 {code}
   curl "http://localhost:8983/solr/update?commit=true&update.chain=max-only-num" \
     -H 'Content-type:application/json' -d '
     [{"id": "doc-1",
       "title_s": "Hello World",
       "sizes_i": 200,
       "sizes_i": 999,
       "sizes_i": 101,
       "sizes_i": 199,
       "sizes_i": 1000}]'
 {code}
 and
 {code}
   curl "http://localhost:8983/solr/update?commit=true&update.chain=max-only-num" \
     -H 'Content-type:application/xml' -d '
     <add>
       <doc>
         <field name="id">doc-1</field>
         <field name="title_s">Hello World</field>
         <field name="sizes_i">42</field>
         <field name="sizes_i">128</field>
         <field name="sizes_i">-3</field>
       </doc>
     </add>'
 {code}
 In XML, of course, there is no way for the input values to be anything other 
 than strings (text.)
 The JSON loader does parse the values with their type, but immediately 
 converts the values to strings:
 {code}
 private Object parseSingleFieldValue(int ev) throws IOException {
   switch (ev) {
     case JSONParser.STRING:
       return parser.getString();
     case JSONParser.LONG:
     case JSONParser.NUMBER:
     case JSONParser.BIGNUMBER:
       return parser.getNumberChars().toString();
     case JSONParser.BOOLEAN:
       // for legacy reasons, single values are expected to be strings
       return Boolean.toString(parser.getBoolean());
     case JSONParser.NULL:
       parser.getNull();
       return null;
     case JSONParser.ARRAY_START:
       return parseArrayFieldValue(ev);
     default:
       throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
           "Error parsing JSON field value. Unexpected " + JSONParser.getEventString(ev));
   }
 }
 private List<Object> parseArrayFieldValue(int ev) throws IOException {
   assert ev == JSONParser.ARRAY_START;
   
   ArrayList lst = new ArrayList(2);
   for (;;) {
     ev = parser.nextEvent();
     if (ev == JSONParser.ARRAY_END) {
       return lst;
     }
     Object val = 

[jira] [Created] (LUCENE-5022) Add FacetResult.mergeHierarchies

2013-05-28 Thread Shai Erera (JIRA)
Shai Erera created LUCENE-5022:
--

 Summary: Add FacetResult.mergeHierarchies
 Key: LUCENE-5022
 URL: https://issues.apache.org/jira/browse/LUCENE-5022
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/facet
Reporter: Shai Erera
Assignee: Shai Erera
Priority: Minor


When you DrillSideways on a hierarchical dimension, and especially when you OR 
multiple drill-downs together, you get several FacetResults back, one for each 
category you drill down on. So for example, if you want to drill-down on 
Date/2010 OR Date/2011/May, the FacetRequests that you need to create (to get 
the sideways effect) are: Date/, Date/2010, Date/2011 and Date/2011/May. Date/ 
is because you want to get sideways counts as an alternative to Date/2010, and 
Date/2011 in order to get months count as an alternative to Date/2011/May.

That results in 4 FacetResult objects. Having a utility which merges all 
FacetResults of the same dimension into a single hierarchical one will be very 
useful for e.g. apps that want to display the hierarchy. I'm thinking of 
FacetResult.mergeHierarchies which takes a List<FacetResult> and returns the 
merged ones, one FacetResult per dimension.
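The proposed merge can be illustrated with a small, Lucene-free sketch: treat each FacetResult as a category path and fold all paths that share a root (the dimension) into one nested structure. Counts and the real FacetResult classes are omitted; the names here are invented for illustration only.

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class MergeHierarchiesSketch {
    // Fold category paths ("Date/2011/May") into one nested map per root
    // dimension; a real merge would also carry each node's count.
    @SuppressWarnings("unchecked")
    static Map<String, Object> merge(List<String> paths) {
        Map<String, Object> roots = new LinkedHashMap<>();
        for (String path : paths) {
            Map<String, Object> node = roots;
            for (String part : path.split("/")) {
                node = (Map<String, Object>)
                        node.computeIfAbsent(part, k -> new LinkedHashMap<String, Object>());
            }
        }
        return roots;
    }

    public static void main(String[] args) {
        // The four FacetRequests from the Date/2010 OR Date/2011/May example
        // collapse into a single hierarchy under the Date dimension.
        Map<String, Object> merged = merge(Arrays.asList(
                "Date", "Date/2010", "Date/2011", "Date/2011/May"));
        System.out.println(merged.keySet()); // prints [Date]
    }
}
```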

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4866) FieldCache insanity with field used as facet and group

2013-05-28 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13668580#comment-13668580
 ] 

Hoss Man commented on SOLR-4866:


Sannier, i was unable to reproduce the problem you described using 4.2.1 (or 
4.3, or the current trunk).

Steps i tried to reproduce problem...

1) modified 4.2.1 example such that the int fieldType and popularity field 
matched your merchantid exactly...

{noformat}
-<fieldType name="int" class="solr.TrieIntField" precisionStep="0" 
positionIncrementGap="0"/>
+<fieldType name="int" class="solr.TrieIntField" precisionStep="0" 
positionIncrementGap="0" sortMissingLast="true"/>
{noformat}

2) started up solr, indexed the example data, and confirmed empty fieldCaches...

{noformat}
hossman@frisbee:~/lucene/lucene-4.2.1_tag/solr/example/exampledocs$ java -jar 
post.jar *.xml
...
hossman@frisbee:~/lucene/lucene-4.2.1_tag/solr/example/exampledocs$ curl -sS 
"http://localhost:8983/solr/admin/mbeans?stats=true&key=fieldCache&wt=json&indent=true" \
 | grep '"entries_count"\|"insanity_count"'
  "entries_count":0,
  "insanity_count":0}}},
{noformat}

3) used both grouping and faceting on the popularity field, then checked the 
fieldcache insanity count..

{noformat}
hossman@frisbee:~/lucene/lucene-4.2.1_tag/solr/example/exampledocs$ curl -sS 
http://localhost:8983/solr/select?q=*:*facet=truefacet.field=popularity;  
/dev/null
hossman@frisbee:~/lucene/lucene-4.2.1_tag/solr/example/exampledocs$ curl -sS 
http://localhost:8983/solr/select?q=*:*group=truegroup.field=popularity;  
/dev/null
hossman@frisbee:~/lucene/lucene-4.2.1_tag/solr/example/exampledocs$ curl -sS 
http://localhost:8983/solr/admin/mbeans?stats=truekey=fieldCachewt=jsonindent=true;
 | grep entries_count\|insanity_count
  entries_count:4,
  insanity_count:0}}},
{noformat}

4) re-indexed a few docs to ensure i'd get multiple segments (in case that was 
neccessary to reproduce) and checked the insanity count again...

{noformat}
hossman@frisbee:~/lucene/lucene-4.2.1_tag/solr/example/exampledocs$ java -jar 
post.jar sd500.xml 
...
hossman@frisbee:~/lucene/lucene-4.2.1_tag/solr/example/exampledocs$ curl -sS 
http://localhost:8983/solr/select?q=*:*group=truegroup.field=popularity;  
/dev/null
hossman@frisbee:~/lucene/lucene-4.2.1_tag/solr/example/exampledocs$ curl -sS 
http://localhost:8983/solr/select?q=*:*facet=truefacet.field=popularity;  
/dev/null
hossman@frisbee:~/lucene/lucene-4.2.1_tag/solr/example/exampledocs$ curl -sS 
http://localhost:8983/solr/admin/mbeans?stats=truekey=fieldCachewt=jsonindent=true;
 | grep entries_count\|insanity_count
  entries_count:8,
  insanity_count:0}}},
{noformat}

...still no sign of insanity

So i suspect there must be something else going on in your setup? are you sure 
you don't have any other types of queries that might be using the field cache 
in an inconsistent way?

(FWIW: even though i couldn't reproduce the insanity inconsistency you 
described, i'm still confused/suspicious about the number of field cache entries 
created for each segment when accessing a single field ... so i'm going to try 
to dig into that a little bit to verify there is no related bug there -- it might 
be expected given the way FieldCache and DocValues have evolved, but i'd like 
to verify that)

 FieldCache insanity with field used as facet and group
 --

 Key: SOLR-4866
 URL: https://issues.apache.org/jira/browse/SOLR-4866
 Project: Solr
  Issue Type: Bug
Reporter: Sannier Elodie
Priority: Minor

 I am using the Lucene FieldCache with SolrCloud 4.2.1 and I have insane 
 instances for a field used as facet and group field.
 schema fieldType / field declaration for my
 merchantid field :
 <fieldType name="int" class="solr.TrieIntField" precisionStep="0" 
 sortMissingLast="true" omitNorms="true" positionIncrementGap="0"/>
 <field name="merchantid" type="int" indexed="true" stored="true" 
 required="true"/>
 The mbean stats output shows the field cache insanity after executing queries 
 like :
 /select?q=*:*&facet=true&facet.field=merchantid
 /select?q=*:*&group=true&group.field=merchantid
 <int name="insanity_count">25</int>
 <str name="insanity#0">VALUEMISMATCH: Multiple distinct value objects for 
 SegmentCoreReader(owner=_1z1(4.2.1):C3916)+merchantid
   'SegmentCoreReader(owner=_1z1(4.2.1):C3916)'='merchantid',class 
 org.apache.lucene.index.SortedDocValues,0.5=org.apache.lucene.search.FieldCacheImpl$SortedDocValuesImpl#1517585400
   
 'SegmentCoreReader(owner=_1z1(4.2.1):C3916)'='merchantid',int,org.apache.lucene.search.FieldCache.NUMERIC_UTILS_INT_PARSER=org.apache.lucene.search.FieldCacheImpl$IntsFromArray#781169939
   
 'SegmentCoreReader(owner=_1z1(4.2.1):C3916)'='merchantid',int,null=org.apache.lucene.search.FieldCacheImpl$IntsFromArray#781169939
 </str>
 ...
 see 

[jira] [Updated] (LUCENE-5022) Add FacetResult.mergeHierarchies

2013-05-28 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera updated LUCENE-5022:
---

Attachment: LUCENE-5022.patch

Patch with the new mergeHierarchies method + test. I created it using git, so 
hopefully it applies ok (I tried patch --dry-run and it didn't complain).

 Add FacetResult.mergeHierarchies
 

 Key: LUCENE-5022
 URL: https://issues.apache.org/jira/browse/LUCENE-5022
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/facet
Reporter: Shai Erera
Assignee: Shai Erera
Priority: Minor
 Attachments: LUCENE-5022.patch


 When you DrillSideways on a hierarchical dimension, and especially when you 
 OR multiple drill-downs together, you get several FacetResults back, one for 
 each category you drill down on. So for example, if you want to drill-down on 
 Date/2010 OR Date/2011/May, the FacetRequests that you need to create (to get 
 the sideways effect) are: Date/, Date/2010, Date/2011 and Date/2011/May. 
 Date/ is because you want to get sideways counts as an alternative to 
 Date/2010, and Date/2011 in order to get months count as an alternative to 
 Date/2011/May.
 That results in 4 FacetResult objects. Having a utility which merges all 
 FacetResults of the same dimension into a single hierarchical one will be 
 very useful for e.g. apps that want to display the hierarchy. I'm thinking of 
 FacetResult.mergeHierarchies which takes a List<FacetResult> and returns the 
 merged ones, one FacetResult per dimension.
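As an illustration of the first step such a utility would take (plain strings and hypothetical names, not the actual facet API), drill-down paths can be bucketed by their top-level dimension:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class MergeByDimension {
    // Group drill-down paths such as "Date/2010" and "Date/2011/May" by their
    // top-level dimension ("Date"): the first step a mergeHierarchies-style
    // utility would take before stitching child results into one tree.
    static Map<String, List<String>> groupByDimension(List<String> paths) {
        Map<String, List<String>> byDim = new LinkedHashMap<>();
        for (String path : paths) {
            String dim = path.split("/", 2)[0];
            byDim.computeIfAbsent(dim, k -> new ArrayList<>()).add(path);
        }
        return byDim;
    }

    public static void main(String[] args) {
        List<String> requests = Arrays.asList(
            "Date/2010", "Date/2011", "Date/2011/May", "Author/Shai");
        System.out.println(groupByDimension(requests));
        // {Date=[Date/2010, Date/2011, Date/2011/May], Author=[Author/Shai]}
    }
}
```

The actual merge would then, per dimension, recursively attach each child FacetResult under its parent node.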

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4859) MinFieldValueUpdateProcessorFactory and MaxFieldValueUpdateProcessorFactory don't do numeric comparison for numeric fields

2013-05-28 Thread Jack Krupansky (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13668596#comment-13668596
 ] 

Jack Krupansky commented on SOLR-4859:
--

I would note that the Javadoc for MinFieldValueUpdateProcessorFactory says:

{quote}
In the example configuration below, if a document contains multiple integer 
values (ie: 64, 128, 1024) in the field smallestFileSize then only the smallest 
value (ie: 64) will be kept in that field.

{code}
  <processor class="solr.MinFieldValueUpdateProcessorFactory">
    <str name="fieldName">smallestFileSize</str>
  </processor>
{code}
{quote}

Even if the JSON loader is fixed, we still have the XML loader. So, I think the 
Javadoc needs to heavily caveat that example claim. CSV loading would also have 
this issue. SolrCell as well.

Also, should fixing this issue be 100% gated on completion of SOLR-3250? If so, 
at least update the Javadoc to indicate that min/max for integer field values 
is currently not supported, or indicate that it is supported only by Solr 4.4 
and later.


 MinFieldValueUpdateProcessorFactory and MaxFieldValueUpdateProcessorFactory 
 don't do numeric comparison for numeric fields
 --

 Key: SOLR-4859
 URL: https://issues.apache.org/jira/browse/SOLR-4859
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 4.3
Reporter: Jack Krupansky

 MinFieldValueUpdateProcessorFactory and MaxFieldValueUpdateProcessorFactory 
 are advertised as supporting numeric comparisons, but this doesn't work: only 
 string comparison is available, and numeric comparison doesn't seem possible, 
 although the unit tests show it working at the unit-test level.
 The problem is that numeric processing is dependent on the SolrInputDocument 
 containing a list of numeric values, but at least with both the current XML 
 and JSON loaders, only string values can be loaded.
 Test scenario.
 1. Use Solr 4.3 example.
 2. Add following update processor chain to solrconfig:
 {code}
   <updateRequestProcessorChain name="max-only-num">
     <processor class="solr.MaxFieldValueUpdateProcessorFactory">
       <str name="fieldName">sizes_i</str>
     </processor>
     <processor class="solr.LogUpdateProcessorFactory" />
     <processor class="solr.RunUpdateProcessorFactory" />
   </updateRequestProcessorChain>
 {code}
 3. Perform this update request:
 {code}
   curl "http://localhost:8983/solr/update?commit=true&update.chain=max-only-num" \
     -H 'Content-type:application/json' -d '
     [{"id": "doc-1",
       "title_s": "Hello World",
       "sizes_i": [200, 999, 101, 199, 1000]}]'
 {code}
 Note that the values are JSON integer values.
 4. Perform this query:
 {code}
 curl "http://localhost:8983/solr/select/?q=*:*&indent=true&wt=json"
 {code}
 Shows this result:
 {code}
   "response":{"numFound":1,"start":0,"docs":[
     {
       "id":"doc-1",
       "title_s":"Hello World",
       "sizes_i":999,
       "_version_":1436094187405574144}]
   }}
 {code}
 sizes_i should be 1000, not 999.
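The 999-vs-1000 result is exactly what lexicographic comparison produces; a JDK-only sketch (class name hypothetical, no Solr involved) of the difference:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class StringMaxDemo {
    public static void main(String[] args) {
        // The loaders hand the processor strings, so max() falls back to
        // String.compareTo, and "999" sorts after "1000" ('9' > '1').
        List<String> sizes = Arrays.asList("200", "999", "101", "199", "1000");
        System.out.println(Collections.max(sizes)); // prints 999

        // Parsed as ints first, the max is what the user expects.
        int max = sizes.stream().mapToInt(Integer::parseInt).max().getAsInt();
        System.out.println(max); // prints 1000
    }
}
```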
 Alternative update tests:
 {code}
   curl "http://localhost:8983/solr/update?commit=true&update.chain=max-only-num" \
     -H 'Content-type:application/json' -d '
     [{"id": "doc-1",
       "title_s": "Hello World",
       "sizes_i": 200,
       "sizes_i": 999,
       "sizes_i": 101,
       "sizes_i": 199,
       "sizes_i": 1000}]'
 {code}
 and
 {code}
   curl "http://localhost:8983/solr/update?commit=true&update.chain=max-only-num" \
     -H 'Content-type:application/xml' -d '
     <add>
       <doc>
         <field name="id">doc-1</field>
         <field name="title_s">Hello World</field>
         <field name="sizes_i">42</field>
         <field name="sizes_i">128</field>
         <field name="sizes_i">-3</field>
       </doc>
     </add>'
 {code}
 In XML, of course, there is no way for the input values to be anything other 
 than strings (text.)
 The JSON loader does parse the values with their type, but immediately 
 converts the values to strings:
 {code}
  private Object parseSingleFieldValue(int ev) throws IOException {
    switch (ev) {
      case JSONParser.STRING:
        return parser.getString();
      case JSONParser.LONG:
      case JSONParser.NUMBER:
      case JSONParser.BIGNUMBER:
        return parser.getNumberChars().toString();
      case JSONParser.BOOLEAN:
        // for legacy reasons, single values are expected to be strings
        return Boolean.toString(parser.getBoolean());
      case JSONParser.NULL:
        parser.getNull();
        return null;
      case JSONParser.ARRAY_START:
        return parseArrayFieldValue(ev);
      default:
        throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
            "Error parsing JSON field value. Unexpected " + JSONParser.getEventString(ev));
    }
  }
 private List<Object> parseArrayFieldValue(int ev) throws IOException {
  

[jira] [Commented] (SOLR-4470) Support for basic http auth in internal solr requests

2013-05-28 Thread Ryan McKinley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13668606#comment-13668606
 ] 

Ryan McKinley commented on SOLR-4470:
-

just skimming the patch... it looks like it adds:
{code}
METHOD.GET, SEARCH_CREDENTIALS
{code}

to every request.

What about using a SolrServer implementation that adds whatever security it 
needs?  

HttpSolrServer has invariantParams with a comment:
{code:java}
  /**
   * Default value: null / empty.
   * <p/>
   * Parameters that are added to every request regardless. This may be a place
   * to add something like an authentication token.
   */
  protected ModifiableSolrParams invariantParams;
{code}

this *may* offer a client-only option.
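Independent of SolrJ, the invariantParams contract is just a parameter merge in which the invariants win; a minimal sketch with hypothetical names:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class InvariantParamsSketch {
    // Merge per-request parameters with invariant ones. Invariants are applied
    // last so an individual request cannot override them, which is the point
    // of using them for something like an authentication token.
    static Map<String, String> withInvariants(Map<String, String> request,
                                              Map<String, String> invariants) {
        Map<String, String> merged = new LinkedHashMap<>(request);
        merged.putAll(invariants);
        return merged;
    }

    public static void main(String[] args) {
        Map<String, String> req = new LinkedHashMap<>();
        req.put("q", "*:*");
        req.put("authToken", "attempted-override");
        Map<String, String> inv = new LinkedHashMap<>();
        inv.put("authToken", "secret");
        System.out.println(withInvariants(req, inv));
        // {q=*:*, authToken=secret}
    }
}
```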





 Support for basic http auth in internal solr requests
 -

 Key: SOLR-4470
 URL: https://issues.apache.org/jira/browse/SOLR-4470
 Project: Solr
  Issue Type: New Feature
  Components: clients - java, multicore, replication (java), SolrCloud
Affects Versions: 4.0
Reporter: Per Steffensen
Assignee: Jan Høydahl
  Labels: authentication, https, solrclient, solrcloud, ssl
 Fix For: 4.4

 Attachments: SOLR-4470_branch_4x_r1452629.patch, 
 SOLR-4470_branch_4x_r1452629.patch, SOLR-4470_branch_4x_r145.patch, 
 SOLR-4470.patch


 We want to protect any HTTP-resource (url). We want to require credentials no 
 matter what kind of HTTP-request you make to a Solr-node.
 It can fairly easily be achieved as described on 
 http://wiki.apache.org/solr/SolrSecurity. The problem is that Solr-nodes 
 also make internal requests to other Solr-nodes, and for it to work, 
 credentials need to be provided there as well.
 Ideally we would like to forward credentials from a particular request to 
 all the internal sub-requests it triggers, e.g. for search and update 
 requests.
 But there are also internal requests
 * that are only indirectly/asynchronously triggered by outside requests (e.g. 
 shard creation/deletion/etc based on calls to the Collection API)
 * that do not in any way have relation to an outside super-request (e.g. 
 replica synching stuff)
 We would like to aim at a solution where original credentials are 
 forwarded when a request directly/synchronously triggers a subrequest, with a 
 fallback to configured internal credentials for the 
 asynchronous/non-rooted requests.
 In our solution we would aim at only supporting basic HTTP auth, but we would 
 like to build a framework around it, so that not too much refactoring is 
 needed if you later want to add support for other kinds of auth (e.g. digest).
 We will work on a solution, but are creating this JIRA issue early in order to 
 get input/comments from the community as early as possible.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-4867) Admin UI - setting loglevel on root throws RangeError

2013-05-28 Thread Stefan Matheis (steffkes) (JIRA)
Stefan Matheis (steffkes) created SOLR-4867:
---

 Summary: Admin UI - setting loglevel on root throws RangeError
 Key: SOLR-4867
 URL: https://issues.apache.org/jira/browse/SOLR-4867
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.3
Reporter: Stefan Matheis (steffkes)
Assignee: Stefan Matheis (steffkes)
 Fix For: 5.0, 4.4


[~dboychuck] reported on #solr that the page for Logging/Level only shows a 
spinning wheel, without ever finishing.

bq. Uncaught RangeError: Maximum call stack size exceeded

was the error message, which led me to the suspicion that the JavaScript goes 
crazy while building up the loglevel tree.

the attached patch includes an additional check for an empty logger name, which 
does not fix the root of the problem (I will open another issue for that one) but 
at least avoids the confusion caused by the page not loading at all.
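Transposed to Java for illustration (the actual code is the admin UI's JavaScript; all names here are hypothetical), the empty-name guard amounts to stopping the parent walk at the root:

```java
public class LoggerTreeGuard {
    // Parent of "a.b.c" is "a.b"; parent of "a" is "" (the root). Without a
    // guard, asking for the parent of "" yields "" again, and a naive
    // recursive tree builder never terminates (the UI's RangeError).
    static String parent(String name) {
        int dot = name.lastIndexOf('.');
        return dot < 0 ? "" : name.substring(0, dot);
    }

    static int depth(String name) {
        if (name.isEmpty()) return 0; // the guard: stop at the root
        return 1 + depth(parent(name));
    }

    public static void main(String[] args) {
        System.out.println(depth("org.apache.solr")); // prints 3
    }
}
```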

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4867) Admin UI - setting loglevel on root throws RangeError

2013-05-28 Thread Stefan Matheis (steffkes) (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Matheis (steffkes) updated SOLR-4867:


Attachment: SOLR-4867.patch

 Admin UI - setting loglevel on root throws RangeError
 -

 Key: SOLR-4867
 URL: https://issues.apache.org/jira/browse/SOLR-4867
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.3
Reporter: Stefan Matheis (steffkes)
Assignee: Stefan Matheis (steffkes)
 Fix For: 5.0, 4.4

 Attachments: SOLR-4867.patch


 [~dboychuck] reported on #solr that the page for Logging/Level only shows a 
 spinning wheel, without ever finishing.
 bq. Uncaught RangeError: Maximum call stack size exceeded
 was the error message, which led me to the suspicion that the JavaScript 
 goes crazy while building up the loglevel tree.
 the attached patch includes an additional check for an empty logger name, 
 which does not fix the root of the problem (I will open another issue for that 
 one) but at least avoids the confusion caused by the page not loading at all.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5017) SpatialOpRecursivePrefixTreeTest is failing

2013-05-28 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-5017:
-

Attachment: LUCENE-5017_SpatialOpRecursivePrefixTreeTest_bug.patch

The problem is a bug in my test, relating to a shape-pair of adjacent shapes 
when testing for Contains.  I fixed this bug.

I also found out I could make the test's Repeat annotation refer to a constant 
for the iteration count, so I did that to make it easy to dial up the testing 
temporarily.

I'll commit this shortly.

 SpatialOpRecursivePrefixTreeTest is failing
 ---

 Key: LUCENE-5017
 URL: https://issues.apache.org/jira/browse/LUCENE-5017
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/spatial
Reporter: Michael McCandless
Assignee: David Smiley
 Fix For: 5.0, 4.4

 Attachments: LUCENE-5017_SpatialOpRecursivePrefixTreeTest_bug.patch


 This has been failing lately on trunk (e.g. on rev 1486339):
 {noformat}
 ant test  -Dtestcase=SpatialOpRecursivePrefixTreeTest 
 -Dtestmethod=testContains -Dtests.seed=456022665217DADF:2C2A2816BD2BA1C5 
 -Dtests.slow=true -Dtests.locale=nl_BE -Dtests.timezone=Poland 
 -Dtests.file.encoding=ISO-8859-1
 {noformat}
 Not sure what's up ...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4867) Admin UI - setting loglevel on root throws RangeError

2013-05-28 Thread Stefan Matheis (steffkes) (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13668639#comment-13668639
 ] 

Stefan Matheis (steffkes) commented on SOLR-4867:
-

committed to
trunk (r1487096)
branch_4x (r1487097)

[~shalinmangar] I'm not sure about the timeline for 4.3.1, but perhaps we can 
backport this one?

 Admin UI - setting loglevel on root throws RangeError
 -

 Key: SOLR-4867
 URL: https://issues.apache.org/jira/browse/SOLR-4867
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.3
Reporter: Stefan Matheis (steffkes)
Assignee: Stefan Matheis (steffkes)
 Fix For: 5.0, 4.4

 Attachments: SOLR-4867.patch


 [~dboychuck] reported on #solr that the page for Logging/Level only shows a 
 spinning wheel, without ever finishing.
 bq. Uncaught RangeError: Maximum call stack size exceeded
 was the error message, which led me to the suspicion that the JavaScript 
 goes crazy while building up the loglevel tree.
 the attached patch includes an additional check for an empty logger name, 
 which does not fix the root of the problem (I will open another issue for that 
 one) but at least avoids the confusion caused by the page not loading at all.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5022) Add FacetResult.mergeHierarchies

2013-05-28 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13668642#comment-13668642
 ] 

Michael McCandless commented on LUCENE-5022:


This looks great Shai!  Can it be used for non-hierarchical dims as well?  E.g. 
if I have one FacetRequest for top 10 under User/ and then a second 
FacetRequest for User/Bob (a leaf ... hmm can one make a FacetRequest like 
that?), will it merge them?

Or, what would it do if I (oddly) had one FacetRequest asking for top 10 under 
User/ and another FacetRequest asking for top 20 under User/?

 Add FacetResult.mergeHierarchies
 

 Key: LUCENE-5022
 URL: https://issues.apache.org/jira/browse/LUCENE-5022
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/facet
Reporter: Shai Erera
Assignee: Shai Erera
Priority: Minor
 Attachments: LUCENE-5022.patch


 When you DrillSideways on a hierarchical dimension, and especially when you 
 OR multiple drill-downs together, you get several FacetResults back, one for 
 each category you drill down on. So for example, if you want to drill-down on 
 Date/2010 OR Date/2011/May, the FacetRequests that you need to create (to get 
 the sideways effect) are: Date/, Date/2010, Date/2011 and Date/2011/May. 
 Date/ is because you want to get sideways counts as an alternative to 
 Date/2010, and Date/2011 in order to get months count as an alternative to 
 Date/2011/May.
 That results in 4 FacetResult objects. Having a utility which merges all 
 FacetResults of the same dimension into a single hierarchical one will be 
 very useful for e.g. apps that want to display the hierarchy. I'm thinking of 
 FacetResult.mergeHierarchies which takes a List<FacetResult> and returns the 
 merged ones, one FacetResult per dimension.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-4868) Log4J watcher - setting the root logger adds a logger for the empty string

2013-05-28 Thread Shawn Heisey (JIRA)
Shawn Heisey created SOLR-4868:
--

 Summary: Log4J watcher - setting the root logger adds a logger for 
the empty string
 Key: SOLR-4868
 URL: https://issues.apache.org/jira/browse/SOLR-4868
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.3
Reporter: Shawn Heisey
Assignee: Shawn Heisey
 Fix For: 5.0, 4.3.1


Root cause for SOLR-4867.  Trying to set the root logging category results in 
the addition of a category that's the empty string.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4868) Log4J watcher - setting the root logger adds a logger for the empty string

2013-05-28 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-4868:
---

Attachment: SOLR-4868.patch

Patch to fix the issue.  [~shalinmangar], the CHANGES.txt entry is under 4.3.1, 
so I'd like to know whether it's OK to backport.  If not, I'll move that to 4.4.

 Log4J watcher - setting the root logger adds a logger for the empty string
 --

 Key: SOLR-4868
 URL: https://issues.apache.org/jira/browse/SOLR-4868
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.3
Reporter: Shawn Heisey
Assignee: Shawn Heisey
 Fix For: 5.0, 4.3.1

 Attachments: SOLR-4868.patch


 Root cause for SOLR-4867.  Trying to set the root logging category results in 
 the addition of a category that's the empty string.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5017) SpatialOpRecursivePrefixTreeTest is failing

2013-05-28 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved LUCENE-5017.
--

Resolution: Fixed

 SpatialOpRecursivePrefixTreeTest is failing
 ---

 Key: LUCENE-5017
 URL: https://issues.apache.org/jira/browse/LUCENE-5017
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/spatial
Reporter: Michael McCandless
Assignee: David Smiley
 Fix For: 5.0, 4.4

 Attachments: LUCENE-5017_SpatialOpRecursivePrefixTreeTest_bug.patch


 This has been failing lately on trunk (e.g. on rev 1486339):
 {noformat}
 ant test  -Dtestcase=SpatialOpRecursivePrefixTreeTest 
 -Dtestmethod=testContains -Dtests.seed=456022665217DADF:2C2A2816BD2BA1C5 
 -Dtests.slow=true -Dtests.locale=nl_BE -Dtests.timezone=Poland 
 -Dtests.file.encoding=ISO-8859-1
 {noformat}
 Not sure what's up ...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-4867) Admin UI - setting loglevel on root throws RangeError

2013-05-28 Thread Stefan Matheis (steffkes) (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Matheis (steffkes) resolved SOLR-4867.
-

Resolution: Fixed

 Admin UI - setting loglevel on root throws RangeError
 -

 Key: SOLR-4867
 URL: https://issues.apache.org/jira/browse/SOLR-4867
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.3
Reporter: Stefan Matheis (steffkes)
Assignee: Stefan Matheis (steffkes)
 Fix For: 5.0, 4.4

 Attachments: SOLR-4867.patch


 [~dboychuck] reported on #solr that the page for Logging/Level only shows a 
 spinning wheel, without ever finishing.
 bq. Uncaught RangeError: Maximum call stack size exceeded
 was the error message, which led me to the suspicion that the JavaScript 
 goes crazy while building up the loglevel tree.
 the attached patch includes an additional check for an empty logger name, 
 which does not fix the root of the problem (I will open another issue for that 
 one) but at least avoids the confusion caused by the page not loading at all.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4868) Log4J watcher - setting the root logger adds a logger for the empty string

2013-05-28 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13668657#comment-13668657
 ] 

Shawn Heisey commented on SOLR-4868:


Something related, not sure whether it's worth fixing: If you change the 
logging level to WARN (or below) on something low level like org (or the root, 
after this patch), you don't get anything in the log about changing the level - 
because the log entry about the change is INFO.


 Log4J watcher - setting the root logger adds a logger for the empty string
 --

 Key: SOLR-4868
 URL: https://issues.apache.org/jira/browse/SOLR-4868
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.3
Reporter: Shawn Heisey
Assignee: Shawn Heisey
 Fix For: 5.0, 4.3.1

 Attachments: SOLR-4868.patch


 Root cause for SOLR-4867.  Trying to set the root logging category results in 
 the addition of a category that's the empty string.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-4869) Allow easy ways to escape from UI overlays

2013-05-28 Thread Shawn Heisey (JIRA)
Shawn Heisey created SOLR-4869:
--

 Summary: Allow easy ways to escape from UI overlays
 Key: SOLR-4869
 URL: https://issues.apache.org/jira/browse/SOLR-4869
 Project: Solr
  Issue Type: Improvement
  Components: web gui
Affects Versions: 4.3
Reporter: Shawn Heisey
Priority: Minor


For certain UI overlays, I'd like to be able to escape the overlay easily.  One 
example of a place where this would be useful is the page for setting log 
levels.  Once you've clicked on a logging category, common escape actions don't 
work, like hitting the esc key and clicking outside the overlay.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4816) ConcurrentUpdateCloudSolrServer

2013-05-28 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13668722#comment-13668722
 ] 

Mark Miller commented on SOLR-4816:
---

bq. The exception handling seems to be backwards compatible.

By the sound of it, you changed the runtime behavior in an incompat way - with 
a single thread you have very tight control and knowledge of what docs got 
accepted in and what docs failed and the exception for every fail. It's not so 
easy to get that same back compat behavior with multiple threads and by the 
description, it sounds like a break to me.

I think a multi-threaded version likely cannot be back compat easily, and so it 
calls for a new class similar to the non-cloud servers.

I'll look at the code when I get a chance though.



 ConcurrentUpdateCloudSolrServer
 ---

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816-sriesenberg.patch


 This issue adds a new Solr Cloud client called the 
 ConcurrentUpdateCloudSolrServer. This Solr Cloud client implements document 
 routing in the client so that document routing overhead is eliminated on the 
 Solr servers. Documents are batched up for each shard and then each batch is 
 sent in its own thread. 
 With this client, Solr Cloud indexing throughput should scale linearly with 
 cluster size.
 This client also has robust failover built-in because the actual requests are 
 made using the LBHttpSolrServer. The list of urls used for the request to 
 each shard begins with the leader and is followed by that shard's replicas. 
 So the leader will be tried first and if it fails it will try the replicas.
 Sample usage:
 ConcurrentUpdateCloudSolrServer client = new 
 ConcurrentUpdateCloudSolrServer(zkHostAddress);
 UpdateRequest request = new UpdateRequest();
 SolrInputDocument doc = new SolrInputDocument();
 doc.addField("id", "2");
 doc.addField("manu", "BMW");
 request.add(doc);
 NamedList response = client.request(request);
 // contains any exceptions from the shards
 NamedList exceptions = response.get("exceptions");
 // contains the responses from shards without exceptions
 NamedList responses = response.get("responses");
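The client-side routing described above (hash each document to a shard, batch per shard, send each batch on its own thread) can be sketched without SolrJ; this is an illustration with hypothetical names and a simple modulo in place of SolrCloud's real hash-range routing:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ClientSideRoutingSketch {
    // Bucket document ids into per-shard batches, the way a client-side router
    // takes routing work off the Solr servers. Real SolrCloud routing hashes
    // the id against the collection's hash ranges; Math.floorMod stands in
    // for that here.
    static Map<Integer, List<String>> routeToShards(List<String> ids, int numShards) {
        Map<Integer, List<String>> batches = new HashMap<>();
        for (String id : ids) {
            int shard = Math.floorMod(id.hashCode(), numShards);
            batches.computeIfAbsent(shard, k -> new ArrayList<>()).add(id);
        }
        return batches;
    }

    public static void main(String[] args) {
        Map<Integer, List<String>> batches =
            routeToShards(Arrays.asList("doc-1", "doc-2", "doc-3", "doc-4"), 2);
        // Each batch would then go to its shard leader on its own thread.
        System.out.println(batches);
    }
}
```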
 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4859) MinFieldValueUpdateProcessorFactory and MaxFieldValueUpdateProcessorFactory don't do numeric comparison for numeric fields

2013-05-28 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13668728#comment-13668728
 ] 

Steve Rowe commented on SOLR-4859:
--

bq. Also, should fixing this issue be 100% gated on completion of SOLR-3250? 

No, definitely not, I'm going to make separate issues.

 MinFieldValueUpdateProcessorFactory and MaxFieldValueUpdateProcessorFactory 
 don't do numeric comparison for numeric fields
 --

 Key: SOLR-4859
 URL: https://issues.apache.org/jira/browse/SOLR-4859
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 4.3
Reporter: Jack Krupansky

 MinFieldValueUpdateProcessorFactory and MaxFieldValueUpdateProcessorFactory 
 are advertised as supporting numeric comparisons, but this doesn't work: only 
 string comparison is available, and numeric comparison doesn't seem possible, 
 although the unit tests show it working at the unit-test level.
 The problem is that numeric processing is dependent on the SolrInputDocument 
 containing a list of numeric values, but at least with both the current XML 
 and JSON loaders, only string values can be loaded.
 Test scenario.
 1. Use Solr 4.3 example.
 2. Add following update processor chain to solrconfig:
 {code}
   <updateRequestProcessorChain name="max-only-num">
     <processor class="solr.MaxFieldValueUpdateProcessorFactory">
       <str name="fieldName">sizes_i</str>
     </processor>
     <processor class="solr.LogUpdateProcessorFactory" />
     <processor class="solr.RunUpdateProcessorFactory" />
   </updateRequestProcessorChain>
 {code}
 3. Perform this update request:
 {code}
   curl "http://localhost:8983/solr/update?commit=true&update.chain=max-only-num" \
     -H 'Content-type:application/json' -d '
     [{"id": "doc-1",
       "title_s": "Hello World",
       "sizes_i": [200, 999, 101, 199, 1000]}]'
 {code}
 Note that the values are JSON integer values.
 4. Perform this query:
 {code}
 curl "http://localhost:8983/solr/select/?q=*:*&indent=true&wt=json"
 {code}
 Shows this result:
 {code}
   "response":{"numFound":1,"start":0,"docs":[
     {
       "id":"doc-1",
       "title_s":"Hello World",
       "sizes_i":999,
       "_version_":1436094187405574144}]
   }}
 {code}
 sizes_i should be 1000, not 999.
 Alternative update tests:
 {code}
   curl "http://localhost:8983/solr/update?commit=true&update.chain=max-only-num" \
     -H 'Content-type:application/json' -d '
     [{"id": "doc-1",
       "title_s": "Hello World",
       "sizes_i": 200,
       "sizes_i": 999,
       "sizes_i": 101,
       "sizes_i": 199,
       "sizes_i": 1000}]'
 {code}
 and
 {code}
   curl "http://localhost:8983/solr/update?commit=true&update.chain=max-only-num" \
     -H 'Content-type:application/xml' -d '
     <add>
       <doc>
         <field name="id">doc-1</field>
         <field name="title_s">Hello World</field>
         <field name="sizes_i">42</field>
         <field name="sizes_i">128</field>
         <field name="sizes_i">-3</field>
       </doc>
     </add>'
 {code}
 In XML, of course, there is no way for the input values to be anything other 
 than strings (text.)
 The JSON loader does parse the values with their type, but immediately 
 converts the values to strings:
 {code}
  private Object parseSingleFieldValue(int ev) throws IOException {
    switch (ev) {
      case JSONParser.STRING:
        return parser.getString();
      case JSONParser.LONG:
      case JSONParser.NUMBER:
      case JSONParser.BIGNUMBER:
        return parser.getNumberChars().toString();
      case JSONParser.BOOLEAN:
        // for legacy reasons, single values are expected to be strings
        return Boolean.toString(parser.getBoolean());
      case JSONParser.NULL:
        parser.getNull();
        return null;
      case JSONParser.ARRAY_START:
        return parseArrayFieldValue(ev);
      default:
        throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
            "Error parsing JSON field value. Unexpected " + JSONParser.getEventString(ev));
    }
  }
 private List<Object> parseArrayFieldValue(int ev) throws IOException {
   assert ev == JSONParser.ARRAY_START;
   
   ArrayList lst = new ArrayList(2);
   for (;;) {
 ev = parser.nextEvent();
 if (ev == JSONParser.ARRAY_END) {
   return lst;
 }
 Object val = parseSingleFieldValue(ev);
 lst.add(val);
   }
 }
   }
 {code}
 Originally, I had hoped/expected that the schema type of the field would 
 determine the type of min/max comparison - integer for a *_i field in my case.
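A standalone sketch of that hoped-for behavior, converting raw values according to a schema-style type suffix before comparing. The "_i" handling below is purely illustrative; it is not Solr's actual schema lookup:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.List;

public class TypedMaxDemo {
    // Illustrative helper: pick the max after converting values to the type
    // implied by the field name suffix ("_i" -> integer), instead of
    // comparing the raw strings lexicographically.
    static Object typedMax(String fieldName, Collection<String> raw) {
        if (fieldName.endsWith("_i")) {
            List<Integer> nums = new ArrayList<>();
            for (String s : raw) {
                nums.add(Integer.valueOf(s));
            }
            return Collections.max(nums);
        }
        return Collections.max(new ArrayList<>(raw)); // string-order fallback
    }

    public static void main(String[] args) {
        // With type-aware conversion, 1000 wins as expected.
        System.out.println(typedMax("sizes_i", Arrays.asList("200", "999", "1000")));
    }
}
```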
 The comparison logic for min:
 {code}
public final class MinFieldValueUpdateProcessorFactory extends 
    FieldValueSubsetUpdateProcessorFactory {
  @Override
  @SuppressWarnings("unchecked")
  public Collection pickSubset(Collection values) {
    Collection result = 
[jira] [Commented] (SOLR-4868) Log4J watcher - setting the root logger adds a logger for the empty string

2013-05-28 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13668731#comment-13668731
 ] 

Hoss Man commented on SOLR-4868:


+1 - patch looks clean and correct to me.

As discussed with shawn on IRC, JulWatcher has similar code, but that seems to 
be correct - most likely this bug stemmed from the JulWatcher code being 
copied/modified to create the Log4jWatcher
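The copy/modify theory fits, since the two frameworks treat the empty name differently: in java.util.logging the empty string actually names the root logger, while Log4j reaches its root logger through Logger.getRootLogger(). A quick JUL-only check (illustrative, not the Solr patch itself):

```java
import java.util.logging.Logger;

public class RootLoggerDemo {
    public static void main(String[] args) {
        // In java.util.logging, Logger.getLogger("") returns the root logger,
        // which has no parent -- so JulWatcher's handling of "" is correct.
        Logger julRoot = Logger.getLogger("");
        System.out.println(julRoot.getParent() == null);  // true

        // Log4j's root logger, by contrast, is obtained via
        // Logger.getRootLogger(); treating "" as an ordinary category name
        // there creates a new, empty-named logger instead.
    }
}
```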


 Log4J watcher - setting the root logger adds a logger for the empty string
 --

 Key: SOLR-4868
 URL: https://issues.apache.org/jira/browse/SOLR-4868
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.3
Reporter: Shawn Heisey
Assignee: Shawn Heisey
 Fix For: 5.0, 4.3.1

 Attachments: SOLR-4868.patch


 Root cause for SOLR-4867.  Trying to set the root logging category results in 
 the addition of a category that's the empty string.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4816) ConcurrentUpdateCloudSolrServer

2013-05-28 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13668747#comment-13668747
 ] 

Mark Miller commented on SOLR-4816:
---

bq. The way Exceptions are handled in this patch is that each thread 

With a new impl, we should consider how we want to do this carefully. 

If we just go this route, it's really not much better than the state the concurrent 
solrserver is in. This could be a good time to introduce better handling for 
concurrent solrservers - error detection and responses - you really still want 
to know exactly what happened with your updates, and it's currently very 
difficult to determine that. It's an improvement we have to get to, and it's 
probably going to be a back compat headache - perhaps we start by introducing 
something here and eventually this would become the standard client you want to 
use.

 ConcurrentUpdateCloudSolrServer
 ---

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816-sriesenberg.patch


 This issue adds a new Solr Cloud client called the 
 ConcurrentUpdateCloudSolrServer. This Solr Cloud client implements document 
 routing in the client so that document routing overhead is eliminated on the 
 Solr servers. Documents are batched up for each shard and then each batch is 
 sent in its own thread. 
 With this client, Solr Cloud indexing throughput should scale linearly with 
 cluster size.
 This client also has robust failover built-in because the actual requests are 
 made using the LBHttpSolrServer. The list of URLs used for the request to 
 each shard begins with the leader and is followed by that shard's replicas. 
 So the leader will be tried first and if it fails it will try the replicas.
 Sample usage:
 ConcurrentUpdateCloudSolrServer client = new 
 ConcurrentUpdateCloudSolrServer(zkHostAddress);
 UpdateRequest request = new UpdateRequest();
 SolrInputDocument doc = new SolrInputDocument();
 doc.addField("id", 2);
 doc.addField("manu", "BMW");
 request.add(doc);
 NamedList response = client.request(request);
 NamedList exceptions = response.get("exceptions"); // contains any exceptions 
 from the shards
 NamedList responses = response.get("responses"); // contains the responses 
 from shards without exception.
 




[jira] [Commented] (SOLR-4816) ConcurrentUpdateCloudSolrServer

2013-05-28 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13668761#comment-13668761
 ] 

Mark Miller commented on SOLR-4816:
---

I guess some of that presupposes that since this thing is already multi-threaded 
for adding to multiple servers concurrently, we would eventually also want to 
make it even more concurrent.

 ConcurrentUpdateCloudSolrServer
 ---

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816-sriesenberg.patch


 This issue adds a new Solr Cloud client called the 
 ConcurrentUpdateCloudSolrServer. This Solr Cloud client implements document 
 routing in the client so that document routing overhead is eliminated on the 
 Solr servers. Documents are batched up for each shard and then each batch is 
 sent in its own thread. 
 With this client, Solr Cloud indexing throughput should scale linearly with 
 cluster size.
 This client also has robust failover built-in because the actual requests are 
 made using the LBHttpSolrServer. The list of URLs used for the request to 
 each shard begins with the leader and is followed by that shard's replicas. 
 So the leader will be tried first and if it fails it will try the replicas.
 Sample usage:
 ConcurrentUpdateCloudSolrServer client = new 
 ConcurrentUpdateCloudSolrServer(zkHostAddress);
 UpdateRequest request = new UpdateRequest();
 SolrInputDocument doc = new SolrInputDocument();
 doc.addField("id", 2);
 doc.addField("manu", "BMW");
 request.add(doc);
 NamedList response = client.request(request);
 NamedList exceptions = response.get("exceptions"); // contains any exceptions 
 from the shards
 NamedList responses = response.get("responses"); // contains the responses 
 from shards without exception.
 




[jira] [Updated] (SOLR-4863) SolrDynamicMBean still uses sourceId in dynamic stats

2013-05-28 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-4863:
---

Attachment: SOLR-4863.patch

patch with fix and test that would have caught this bug before (so we don't 
inadvertently make a similar mistake in the future)

 SolrDynamicMBean still uses sourceId in dynamic stats
 -

 Key: SOLR-4863
 URL: https://issues.apache.org/jira/browse/SOLR-4863
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.0
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: 4.4

 Attachments: SOLR-4863.patch


 As noted in solr-user:
 http://www.mail-archive.com/solr-user@lucene.apache.org/msg82650.html
 SOLR-3329 removed the sourceId from SolrInfoMBean but it wasn't removed from 
 the dynamic stats. This leads to exceptions on access.




[jira] [Commented] (SOLR-4228) SolrPing - add methods for enable/disable

2013-05-28 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13668907#comment-13668907
 ] 

Shawn Heisey commented on SOLR-4228:


Committed to trunk, r1487189.  Double-checking branch_4x before committing the 
backport.

 SolrPing - add methods for enable/disable
 -

 Key: SOLR-4228
 URL: https://issues.apache.org/jira/browse/SOLR-4228
 Project: Solr
  Issue Type: Improvement
  Components: clients - java
Affects Versions: 4.0
Reporter: Shawn Heisey
 Fix For: 4.4

 Attachments: SOLR-4228.patch, SOLR-4228.patch, SOLR-4228.patch, 
 SOLR-4228.patch, SOLR-4228.patch, SOLR-4228.patch, SOLR-4228.patch, 
 SOLR-4228.patch


 The new PingRequestHandler in Solr 4.0 takes over what actions.jsp used to do 
 in older versions.  Create methods in the SolrPing request object to access 
 this capability.




[JENKINS] Lucene-Solr-4.x-Linux (64bit/jdk1.8.0-ea-b89) - Build # 5815 - Failure!

2013-05-28 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/5815/
Java: 64bit/jdk1.8.0-ea-b89 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.lucene.spatial.prefix.SpatialOpRecursivePrefixTreeTest.testContains 
{#0 seed=[5DDC55E1BA8E5BCE:3FC7DE545C4AF81A]}

Error Message:
Shouldn't match I #1:ShapePair(Rect(minX=16.0,maxX=122.0,minY=-86.0,maxY=17.0) 
, Rect(minX=175.0,maxX=190.0,minY=-112.0,maxY=110.0)) 
Q:Rect(minX=0.0,maxX=256.0,minY=-128.0,maxY=128.0)

Stack Trace:
java.lang.AssertionError: Shouldn't match I 
#1:ShapePair(Rect(minX=16.0,maxX=122.0,minY=-86.0,maxY=17.0) , 
Rect(minX=175.0,maxX=190.0,minY=-112.0,maxY=110.0)) 
Q:Rect(minX=0.0,maxX=256.0,minY=-128.0,maxY=128.0)
at 
__randomizedtesting.SeedInfo.seed([5DDC55E1BA8E5BCE:3FC7DE545C4AF81A]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.lucene.spatial.prefix.SpatialOpRecursivePrefixTreeTest.fail(SpatialOpRecursivePrefixTreeTest.java:287)
at 
org.apache.lucene.spatial.prefix.SpatialOpRecursivePrefixTreeTest.doTest(SpatialOpRecursivePrefixTreeTest.java:273)
at 
org.apache.lucene.spatial.prefix.SpatialOpRecursivePrefixTreeTest.testContains(SpatialOpRecursivePrefixTreeTest.java:101)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:491)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-4816) ConcurrentUpdateCloudSolrServer

2013-05-28 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13668927#comment-13668927
 ] 

Joel Bernstein commented on SOLR-4816:
--

This implementation returns all the exceptions that occur as part of the 
response. You can see each exception and see which server it came from. 

It also returns the routes that were used so you can see which docs were 
routed to which server. Very useful for testing.



 ConcurrentUpdateCloudSolrServer
 ---

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816-sriesenberg.patch


 This issue adds a new Solr Cloud client called the 
 ConcurrentUpdateCloudSolrServer. This Solr Cloud client implements document 
 routing in the client so that document routing overhead is eliminated on the 
 Solr servers. Documents are batched up for each shard and then each batch is 
 sent in its own thread. 
 With this client, Solr Cloud indexing throughput should scale linearly with 
 cluster size.
 This client also has robust failover built-in because the actual requests are 
 made using the LBHttpSolrServer. The list of URLs used for the request to 
 each shard begins with the leader and is followed by that shard's replicas. 
 So the leader will be tried first and if it fails it will try the replicas.
 Sample usage:
 ConcurrentUpdateCloudSolrServer client = new 
 ConcurrentUpdateCloudSolrServer(zkHostAddress);
 UpdateRequest request = new UpdateRequest();
 SolrInputDocument doc = new SolrInputDocument();
 doc.addField("id", 2);
 doc.addField("manu", "BMW");
 request.add(doc);
 NamedList response = client.request(request);
 NamedList exceptions = response.get("exceptions"); // contains any exceptions 
 from the shards
 NamedList responses = response.get("responses"); // contains the responses 
 from shards without exception.
 




[jira] [Comment Edited] (SOLR-4816) ConcurrentUpdateCloudSolrServer

2013-05-28 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13668927#comment-13668927
 ] 

Joel Bernstein edited comment on SOLR-4816 at 5/29/13 2:17 AM:
---

This implementation returns all the exceptions that occur as part of the 
response. You can see each exception and see which server it came from. 

It also returns the routes that were used so you can see which docs were routed 
to which server. Very useful for testing.



  was (Author: joel.bernstein):
This implementation returns all the exceptions that occur as part of the 
response. You can see each exception and see which server it came from. 

It also returns the routes that were used so you can see which docs where 
routed to which server. Very useful for testing.


  
 ConcurrentUpdateCloudSolrServer
 ---

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816-sriesenberg.patch


 This issue adds a new Solr Cloud client called the 
 ConcurrentUpdateCloudSolrServer. This Solr Cloud client implements document 
 routing in the client so that document routing overhead is eliminated on the 
 Solr servers. Documents are batched up for each shard and then each batch is 
 sent in its own thread. 
 With this client, Solr Cloud indexing throughput should scale linearly with 
 cluster size.
 This client also has robust failover built-in because the actual requests are 
 made using the LBHttpSolrServer. The list of URLs used for the request to 
 each shard begins with the leader and is followed by that shard's replicas. 
 So the leader will be tried first and if it fails it will try the replicas.
 Sample usage:
 ConcurrentUpdateCloudSolrServer client = new 
 ConcurrentUpdateCloudSolrServer(zkHostAddress);
 UpdateRequest request = new UpdateRequest();
 SolrInputDocument doc = new SolrInputDocument();
 doc.addField("id", 2);
 doc.addField("manu", "BMW");
 request.add(doc);
 NamedList response = client.request(request);
 NamedList exceptions = response.get("exceptions"); // contains any exceptions 
 from the shards
 NamedList responses = response.get("responses"); // contains the responses 
 from shards without exception.
 




[jira] [Commented] (SOLR-4867) Admin UI - setting loglevel on root throws RangeError

2013-05-28 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13668977#comment-13668977
 ] 

Shalin Shekhar Mangar commented on SOLR-4867:
-

We have enough time. Go ahead and back port it.

 Admin UI - setting loglevel on root throws RangeError
 -

 Key: SOLR-4867
 URL: https://issues.apache.org/jira/browse/SOLR-4867
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.3
Reporter: Stefan Matheis (steffkes)
Assignee: Stefan Matheis (steffkes)
 Fix For: 5.0, 4.4

 Attachments: SOLR-4867.patch


 [~dboychuck] reported on #solr that the page for Logging/Level only shows a 
 spinning wheel without ever finishing.
 bq. Uncaught RangeError: Maximum call stack size exceeded
 was the error message, which led me to suspect that the javascript 
 goes crazy while building up the loglevel tree.
 The attached patch includes an additional check for an empty logger name, 
 which does not fix the root of the problem (will open another issue for that 
 one) but at least avoids the confusion of the page not loading at all.




[jira] [Commented] (SOLR-4228) SolrPing - add methods for enable/disable

2013-05-28 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13668985#comment-13668985
 ] 

Shawn Heisey commented on SOLR-4228:


For the 4x backport, I am getting an unrelated consistent test failure in 
solr/core.  The SolrJ tests pass, including the added test.  Precommit passes.  
Committed, r1487229.

 SolrPing - add methods for enable/disable
 -

 Key: SOLR-4228
 URL: https://issues.apache.org/jira/browse/SOLR-4228
 Project: Solr
  Issue Type: Improvement
  Components: clients - java
Affects Versions: 4.0
Reporter: Shawn Heisey
 Fix For: 4.4

 Attachments: SOLR-4228.patch, SOLR-4228.patch, SOLR-4228.patch, 
 SOLR-4228.patch, SOLR-4228.patch, SOLR-4228.patch, SOLR-4228.patch, 
 SOLR-4228.patch


 The new PingRequestHandler in Solr 4.0 takes over what actions.jsp used to do 
 in older versions.  Create methods in the SolrPing request object to access 
 this capability.




[jira] [Resolved] (SOLR-4228) SolrPing - add methods for enable/disable

2013-05-28 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey resolved SOLR-4228.


   Resolution: Fixed
Fix Version/s: 5.0
 Assignee: Shawn Heisey

 SolrPing - add methods for enable/disable
 -

 Key: SOLR-4228
 URL: https://issues.apache.org/jira/browse/SOLR-4228
 Project: Solr
  Issue Type: Improvement
  Components: clients - java
Affects Versions: 4.0
Reporter: Shawn Heisey
Assignee: Shawn Heisey
 Fix For: 5.0, 4.4

 Attachments: SOLR-4228.patch, SOLR-4228.patch, SOLR-4228.patch, 
 SOLR-4228.patch, SOLR-4228.patch, SOLR-4228.patch, SOLR-4228.patch, 
 SOLR-4228.patch


 The new PingRequestHandler in Solr 4.0 takes over what actions.jsp used to do 
 in older versions.  Create methods in the SolrPing request object to access 
 this capability.




[jira] [Resolved] (SOLR-4863) SolrDynamicMBean still uses sourceId in dynamic stats

2013-05-28 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-4863.
-

Resolution: Fixed

Committed r1487236 on trunk and r1487237 on branch_4x.

Thanks Hoss!

 SolrDynamicMBean still uses sourceId in dynamic stats
 -

 Key: SOLR-4863
 URL: https://issues.apache.org/jira/browse/SOLR-4863
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.0
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: 4.4

 Attachments: SOLR-4863.patch


 As noted in solr-user:
 http://www.mail-archive.com/solr-user@lucene.apache.org/msg82650.html
 SOLR-3329 removed the sourceId from SolrInfoMBean but it wasn't removed from 
 the dynamic stats. This leads to exceptions on access.
