RE: JCC for Java -> C++ and initializeClass

2013-09-02 Thread Toivo Henningsson
 -Original Message-
 From: Andi Vajda [mailto:va...@apache.org]
 Sent: den 2 september 2013 10:00
 To: Toivo Henningsson
 Cc: pylucene-dev@lucene.apache.org
 Subject: Re: JCC for Java -> C++ and initializeClass
  My only question now is: Do you plan to keep the other three cases (calling
 static/nonstatic methods, access to instance fields) safe when it comes to
 initializeClass?
  Then I can write my code under that assumption.

 Yes, this has been quite stable and I don't anticipate any changes there.

Great, thanks!
 / Toivo

Toivo Henningsson, PhD
Software Engineer
Simulation & Optimization R&D

Phone direct: +46 46 286 22 11
Email: toivo.hennings...@modelon.com



Modelon AB
Ideon Science Park
SE-223 70 Lund, Sweden
Phone: +46 46 286 2200
Fax: +46 46 286 2201
Web: http://www.modelon.com



[jira] [Commented] (SOLR-5201) UIMAUpdateRequestProcessor should reuse the AnalysisEngine

2013-09-02 Thread Tommaso Teofili (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13755939#comment-13755939
 ] 

Tommaso Teofili commented on SOLR-5201:
---

Thanks [~hossman] for your hints and [~johtani] for your patches.

The first option, reusing an _AnalysisEngine_ within each 
_UpdateRequestProcessor_ instance across the docs of a batch update request (first 
patch), is surely the easiest solution, but the performance improvement 
depends on the number of docs that are sent together in each update request.
The second option sounds nice, but I wonder if it would cause a problem with 
multiple configurations (2 update chains with 2 different configurations of 
_UIMAUpdateRequestProcessorFactory_). I'll do some testing on this scenario 
using John's patch so that we can decide which design is better to support.
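
For illustration, a minimal sketch of what the first option amounts to: parse the descriptor once and keep the resulting _AnalysisEngine_ around for every doc of a batch update request. The holder class and the descriptor path are made up for this example; only the UIMAFramework/XMLInputSource calls are standard UIMA API, and this is not the attached patch.

{noformat}
// Hypothetical per-processor holder (not Solr's UIMAUpdateRequestProcessor code):
// the AnalysisEngine is built lazily from its XML descriptor and then reused.
import org.apache.uima.UIMAFramework;
import org.apache.uima.analysis_engine.AnalysisEngine;
import org.apache.uima.resource.ResourceSpecifier;
import org.apache.uima.util.XMLInputSource;

class ReusableAnalysisEngineHolder {
  private final String descriptorPath; // e.g. "/path/to/AggregateAE.xml" (placeholder)
  private AnalysisEngine ae;           // created once, reused for every doc in the batch

  ReusableAnalysisEngineHolder(String descriptorPath) {
    this.descriptorPath = descriptorPath;
  }

  synchronized AnalysisEngine getAnalysisEngine() throws Exception {
    if (ae == null) {
      // Parse the descriptor and instantiate the engine only on first use.
      XMLInputSource in = new XMLInputSource(descriptorPath);
      ResourceSpecifier spec = UIMAFramework.getXMLParser().parseResourceSpecifier(in);
      ae = UIMAFramework.produceAnalysisEngine(spec);
    }
    return ae;
  }
}
{noformat}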

 UIMAUpdateRequestProcessor should reuse the AnalysisEngine
 --

 Key: SOLR-5201
 URL: https://issues.apache.org/jira/browse/SOLR-5201
 Project: Solr
  Issue Type: Improvement
  Components: contrib - UIMA
Affects Versions: 4.4
Reporter: Tommaso Teofili
Assignee: Tommaso Teofili
 Fix For: 4.5, 5.0

 Attachments: SOLR-5201-ae-cache-every-request_branch_4x.patch, 
 SOLR-5201-ae-cache-only-single-request_branch_4x.patch


 As reported in http://markmail.org/thread/2psiyl4ukaejl4fx 
 UIMAUpdateRequestProcessor instantiates an AnalysisEngine for each request 
 which is bad for performance therefore it'd be nice if such AEs could be 
 reused whenever that's possible.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5197) SolrCloud: 500 error with combination of debug and group in distributed search

2013-09-02 Thread Sannier Elodie (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13755949#comment-13755949
 ] 

Sannier Elodie commented on SOLR-5197:
--

Group works fine without the debug parameter.



 SolrCloud: 500 error with combination of debug and group in distributed search
 --

 Key: SOLR-5197
 URL: https://issues.apache.org/jira/browse/SOLR-5197
 Project: Solr
  Issue Type: Bug
Reporter: Sannier Elodie
Priority: Minor

 With SolrCloud 4.4.0 with two shards, when grouping on a field
 and using the debug parameter in distributed mode, there is a 500 error.
 http://localhost:8983/solr/select?q=*:*&group=true&group.field=popularity&debug=true
 (idem with debug=timing, query or results)
 <response>
 <lst name="responseHeader">
 <int name="status">500</int>
 <int name="QTime">109</int>
 <lst name="params">
 <str name="q">*:*</str>
 <str name="group.field">popularity</str>
 <str name="debug">true</str>
 <str name="group">true</str>
 </lst>
 </lst>
 <lst name="error">
 <str name="msg">
 Server at http://10.76.76.157:8983/solr/collection1 returned non ok 
 status:500, message:Server Error
 </str>
 <str name="trace">
 org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Server 
 at http://10.76.76.157:8983/solr/collection1 returned non ok status:500, 
 message:Server Error at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:385)
  at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
  at 
 org.apache.solr.handler.component.HttpShardHandler$1.call(HttpShardHandler.java:156)
  at 
 org.apache.solr.handler.component.HttpShardHandler$1.call(HttpShardHandler.java:119)
  at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at 
 java.util.concurrent.FutureTask.run(FutureTask.java:166) at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at 
 java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at 
 java.util.concurrent.FutureTask.run(FutureTask.java:166) at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
  at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  at java.lang.Thread.run(Thread.java:679)
 </str>
 <int name="code">500</int>
 </lst>
 </response>
 see http://markmail.org/thread/gauat2zdkxm6ldjx

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2368) Improve extended dismax (edismax) parser

2013-09-02 Thread Devendra Wangikar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13755976#comment-13755976
 ] 

Devendra Wangikar commented on SOLR-2368:
-

1. Does the edismax parser support the && and || operators?
2. Does edismax support 'not' as the NOT operator?

Thanks in advance
Devendra W


 Improve extended dismax (edismax) parser
 

 Key: SOLR-2368
 URL: https://issues.apache.org/jira/browse/SOLR-2368
 Project: Solr
  Issue Type: Improvement
  Components: query parsers
Reporter: Yonik Seeley
  Labels: QueryParser

 This is a mother issue to track further improvements for eDismax parser.
 The goal is to be able to deprecate and remove the old dismax once edismax 
 satisfies all usecases of dismax.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5168) ByteSliceReader assert trips with 32-bit oracle 1.7.0_25 + G1GC

2013-09-02 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13755991#comment-13755991
 ] 

Dawid Weiss commented on LUCENE-5168:
-

There is definitely a bug in the VM lurking somewhere. I've traced the 
problematic loop and it's a mess -- readVInt gets inlined twice and updates to 
upto don't propagate, resulting in general havoc later on and an assertion failure. 
More details here:

http://mail.openjdk.java.net/pipermail/hotspot-dev/2013-September/010692.html

I don't know of any sensible workaround other than -XX:-DoEscapeAnalysis (which 
helps but slows things down considerably).



 ByteSliceReader assert trips with 32-bit oracle 1.7.0_25 + G1GC
 ---

 Key: LUCENE-5168
 URL: https://issues.apache.org/jira/browse/LUCENE-5168
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: java8-windows-4x-3075-console.txt, log.0025, log.0042, 
 log.0078, log.0086, log.0100


 This assertion trips (sometimes from different tests), if you run the 
 highlighting tests on branch_4x with r1512807.
 It reproduces about half the time, always only with 32bit + G1GC (other 
 combinations do not seem to trip it, i didnt try looping or anything really 
 though).
 {noformat}
 rmuir@beast:~/workspace/branch_4x$ svn up -r 1512807
 rmuir@beast:~/workspace/branch_4x$ ant clean
 rmuir@beast:~/workspace/branch_4x$ rm -rf .caches #this is important,
 otherwise master seed does not work!
 rmuir@beast:~/workspace/branch_4x/lucene/highlighter$ ant test
 -Dtests.jvms=2 -Dtests.seed=EBBFA6F4E80A7365 -Dargs="-server
 -XX:+UseG1GC"
 {noformat}
 Originally showed up like this:
 {noformat}
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6874/
 Java: 32bit/jdk1.7.0_25 -server -XX:+UseG1GC
 1 tests failed.
 REGRESSION:  
 org.apache.lucene.search.postingshighlight.TestPostingsHighlighter.testUserFailedToIndexOffsets
 Error Message:
 Stack Trace:
 java.lang.AssertionError
 at 
 __randomizedtesting.SeedInfo.seed([EBBFA6F4E80A7365:1FBF811885F2D611]:0)
 at 
 org.apache.lucene.index.ByteSliceReader.readByte(ByteSliceReader.java:73)
 at org.apache.lucene.store.DataInput.readVInt(DataInput.java:108)
 at 
 org.apache.lucene.index.FreqProxTermsWriterPerField.flush(FreqProxTermsWriterPerField.java:453)
 at 
 org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:85)
 at org.apache.lucene.index.TermsHash.flush(TermsHash.java:116)
 at org.apache.lucene.index.DocInverter.flush(DocInverter.java:53)
 at 
 org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:81)
 at 
 org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:501)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-5168) ByteSliceReader assert trips with 32-bit oracle 1.7.0_25 + G1GC

2013-09-02 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13755991#comment-13755991
 ] 

Dawid Weiss edited comment on LUCENE-5168 at 9/2/13 9:43 AM:
-

There is definitely a bug in the VM lurking somewhere. I've traced the 
problematic loop and it's a mess -- readVInt gets inlined twice and updates to 
upto don't propagate, resulting in general havoc later on and an assertion failure. 
More details here:

http://mail.openjdk.java.net/pipermail/hotspot-dev/2013-September/010692.html

I don't know of any sensible workaround other than -XX:-DoEscapeAnalysis (which 
helps but slows things down considerably). And I don't see the connection to G1GC, but 
I have very little idea of the internal C2/opto workings in HotSpot -- fingers 
crossed for the HotSpot folks to find out where the core of the problem is.



  was (Author: dweiss):
There is definitely a bug in the VM lurking somewhere. I've traced the 
problematic loop and it's a mess -- readVInt gets inlined twice and updates to 
upto don't propagate, resulting in general havoc later on and an assertion. 
More details here.

http://mail.openjdk.java.net/pipermail/hotspot-dev/2013-September/010692.html

I don't know of any sensible workaround other than -XX:-DoEscapeAnalysis (which 
helps but slows things down considerably).


  
 ByteSliceReader assert trips with 32-bit oracle 1.7.0_25 + G1GC
 ---

 Key: LUCENE-5168
 URL: https://issues.apache.org/jira/browse/LUCENE-5168
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: java8-windows-4x-3075-console.txt, log.0025, log.0042, 
 log.0078, log.0086, log.0100


 This assertion trips (sometimes from different tests), if you run the 
 highlighting tests on branch_4x with r1512807.
 It reproduces about half the time, always only with 32bit + G1GC (other 
 combinations do not seem to trip it, i didnt try looping or anything really 
 though).
 {noformat}
 rmuir@beast:~/workspace/branch_4x$ svn up -r 1512807
 rmuir@beast:~/workspace/branch_4x$ ant clean
 rmuir@beast:~/workspace/branch_4x$ rm -rf .caches #this is important,
 otherwise master seed does not work!
 rmuir@beast:~/workspace/branch_4x/lucene/highlighter$ ant test
 -Dtests.jvms=2 -Dtests.seed=EBBFA6F4E80A7365 -Dargs="-server
 -XX:+UseG1GC"
 {noformat}
 Originally showed up like this:
 {noformat}
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6874/
 Java: 32bit/jdk1.7.0_25 -server -XX:+UseG1GC
 1 tests failed.
 REGRESSION:  
 org.apache.lucene.search.postingshighlight.TestPostingsHighlighter.testUserFailedToIndexOffsets
 Error Message:
 Stack Trace:
 java.lang.AssertionError
 at 
 __randomizedtesting.SeedInfo.seed([EBBFA6F4E80A7365:1FBF811885F2D611]:0)
 at 
 org.apache.lucene.index.ByteSliceReader.readByte(ByteSliceReader.java:73)
 at org.apache.lucene.store.DataInput.readVInt(DataInput.java:108)
 at 
 org.apache.lucene.index.FreqProxTermsWriterPerField.flush(FreqProxTermsWriterPerField.java:453)
 at 
 org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:85)
 at org.apache.lucene.index.TermsHash.flush(TermsHash.java:116)
 at org.apache.lucene.index.DocInverter.flush(DocInverter.java:53)
 at 
 org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:81)
 at 
 org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:501)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2548) Multithreaded faceting

2013-09-02 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13756089#comment-13756089
 ] 

Erick Erickson commented on SOLR-2548:
--

See Gun's comments about UnInvertedField serializing the facet counts due to 
the placement of the 'new' relative to the synchronized block; it's at the very end of the 
class.

Do people think that the chance of uninverting the same field more than once 
and throwing away copies 2...N is frequent enough to guard against with a Future (or 
whatever)? It seems like this is an expensive enough operation that the 
complexity is reasonable.
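
For illustration, a generic sketch of the Future-based guard being discussed: concurrent callers asking for the same field share a single uninversion instead of each computing a copy and throwing away all but one. This is a plain java.util.concurrent memoizer, not Solr's actual UnInvertedField or cache code.

{noformat}
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;

// "Compute once, share the result" guard keyed by field name.
class UninvertGuard<V> {
  private final ConcurrentMap<String, Future<V>> cache =
      new ConcurrentHashMap<String, Future<V>>();

  V getOrCompute(String field, Callable<V> uninverter)
      throws InterruptedException, ExecutionException {
    Future<V> f = cache.get(field);
    if (f == null) {
      FutureTask<V> task = new FutureTask<V>(uninverter);
      f = cache.putIfAbsent(field, task); // only one thread wins the slot
      if (f == null) {
        f = task;
        task.run();                       // the winner does the expensive uninvert
      }
    }
    return f.get();                       // everyone else blocks on the same result
  }
}
{noformat}

The pattern itself is cheap once the map is in place; whether it's worth it is exactly the frequency question raised above.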

 Multithreaded faceting
 --

 Key: SOLR-2548
 URL: https://issues.apache.org/jira/browse/SOLR-2548
 Project: Solr
  Issue Type: Improvement
  Components: search
Affects Versions: 3.1
Reporter: Janne Majaranta
Priority: Minor
  Labels: facet
 Attachments: SOLR-2548_4.2.1.patch, SOLR-2548_for_31x.patch, 
 SOLR-2548.patch


 Add multithreading support for faceting.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-2548) Multithreaded faceting

2013-09-02 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reassigned SOLR-2548:


Assignee: Erick Erickson

 Multithreaded faceting
 --

 Key: SOLR-2548
 URL: https://issues.apache.org/jira/browse/SOLR-2548
 Project: Solr
  Issue Type: Improvement
  Components: search
Affects Versions: 3.1
Reporter: Janne Majaranta
Assignee: Erick Erickson
Priority: Minor
  Labels: facet
 Attachments: SOLR-2548_4.2.1.patch, SOLR-2548_for_31x.patch, 
 SOLR-2548.patch


 Add multithreading support for faceting.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-5208) Support for the setting of core.properties key/values at create-time on Collections API

2013-09-02 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reassigned SOLR-5208:


Assignee: Erick Erickson

 Support for the setting of core.properties key/values at create-time on 
 Collections API
 ---

 Key: SOLR-5208
 URL: https://issues.apache.org/jira/browse/SOLR-5208
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Affects Versions: 4.4
Reporter: Tim Vaillancourt
Assignee: Erick Erickson

 As discussed on e-mail thread Sharing SolrCloud collection configs 
 w/overrides 
 (http://search-lucene.com/m/MUWXu1DIsqY1&subj=Sharing+SolrCloud+collection+configs+w+overrides),
  Erick brought up a neat solution using HTTP params at create-time for the 
 Collection API.
 Essentially, this request is for a functionality that allows the setting of 
 variables (core.properties) on Collections API CREATE command.
 Erick's idea:
 Maybe it's as simple as allowing more params for creation like
 collection.coreName where each param of the form collection.blah=blort
 gets an entry in the properties file blah=blort?...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5208) Support for the setting of core.properties key/values at create-time on Collections API

2013-09-02 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13756095#comment-13756095
 ] 

Erick Erickson commented on SOLR-5208:
--

Hey, man! I'm in training to be a manager, I just suggest stuff rather than, 
you know, actually do anything <G>...

I can drive it through, but it'll get done faster if some kind person puts 
together a patch...

I guess one question is whether overriding collection.blah is a good idea 
or whether it should be a different name like collectionProperty.blah... I have 
no real preference either way.


 Support for the setting of core.properties key/values at create-time on 
 Collections API
 ---

 Key: SOLR-5208
 URL: https://issues.apache.org/jira/browse/SOLR-5208
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Affects Versions: 4.4
Reporter: Tim Vaillancourt

 As discussed on e-mail thread Sharing SolrCloud collection configs 
 w/overrides 
 (http://search-lucene.com/m/MUWXu1DIsqY1&subj=Sharing+SolrCloud+collection+configs+w+overrides),
  Erick brought up a neat solution using HTTP params at create-time for the 
 Collection API.
 Essentially, this request is for a functionality that allows the setting of 
 variables (core.properties) on Collections API CREATE command.
 Erick's idea:
 Maybe it's as simple as allowing more params for creation like
 collection.coreName where each param of the form collection.blah=blort
 gets an entry in the properties file blah=blort?...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5208) Support for the setting of core.properties key/values at create-time on Collections API

2013-09-02 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13756119#comment-13756119
 ] 

Mark Miller commented on SOLR-5208:
---

The CoreAdmin api already supports this with the property.* param naming scheme 
on create calls - allowing this for the collections api is probably as simple 
as propagating any property.* params from the collections api call to the 
solrcore api subcalls. That seems like the best way to deal with this use case 
- the core properties should be persisted already, so very simple to add I 
think.
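
As a rough sketch of the propagation described above (illustrative only -- the class and method below are hypothetical, though SolrParams/ModifiableSolrParams are real SolrJ types):

{noformat}
import java.util.Iterator;

import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.common.params.SolrParams;

final class PropertyParamPropagation {
  static final String PREFIX = "property.";

  // Copy any property.* params from a Collections API CREATE call onto the
  // per-core CoreAdmin CREATE sub-request, so e.g. property.blah=blort
  // ends up as blah=blort in that core's core.properties.
  static void copyCoreProperties(SolrParams collectionCreateParams,
                                 ModifiableSolrParams coreCreateParams) {
    Iterator<String> names = collectionCreateParams.getParameterNamesIterator();
    while (names.hasNext()) {
      String name = names.next();
      if (name.startsWith(PREFIX)) {
        coreCreateParams.set(name, collectionCreateParams.getParams(name));
      }
    }
  }
}
{noformat}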

 Support for the setting of core.properties key/values at create-time on 
 Collections API
 ---

 Key: SOLR-5208
 URL: https://issues.apache.org/jira/browse/SOLR-5208
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Affects Versions: 4.4
Reporter: Tim Vaillancourt
Assignee: Erick Erickson

 As discussed on e-mail thread Sharing SolrCloud collection configs 
 w/overrides 
 (http://search-lucene.com/m/MUWXu1DIsqY1&subj=Sharing+SolrCloud+collection+configs+w+overrides),
  Erick brought up a neat solution using HTTP params at create-time for the 
 Collection API.
 Essentially, this request is for a functionality that allows the setting of 
 variables (core.properties) on Collections API CREATE command.
 Erick's idea:
 Maybe it's as simple as allowing more params for creation like
 collection.coreName where each param of the form collection.blah=blort
 gets an entry in the properties file blah=blort?...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: solr performance testing

2013-09-02 Thread Mikhail Khludnev
On 01.09.2013 at 21:32, Kranti Parisa kranti.par...@gmail.com
wrote:

 Mikhail,



 When you say benchmarking what are the key attributes that you are
looking at?

Hello Kranti,

Think about benchmarking different faceting algorithms. I don't want to
bother with external requests and the servlet layer, and I really like the experiment plan
provided by luceneutil.


 Thanks & Regards,
 Kranti K Parisa
 http://www.linkedin.com/in/krantiparisa



 On Sat, Aug 31, 2013 at 3:44 PM, Erick Erickson erickerick...@gmail.com
wrote:

 Solrmeter can load the system up much more heavily than a few calls a
minute as I remember, although I'm not sure how up-to-date it is at this
point.

 Erick


 On Sat, Aug 31, 2013 at 3:27 PM, Mikhail Khludnev 
mkhlud...@griddynamics.com wrote:

 Hello Kranti,

 Definitely not Solrmeter; last time I saw it, it provided only a few calls
per minute. Hence it's an analytical/monitoring tool like NewRelic or
Sematext. I'm rather looking for a low-level benchmarking tool. I see Lucene
has luceneutil/ and lucene/benchmark and I wonder which of them is easier
to adapt for testing Solr.


 On Sat, Aug 31, 2013 at 11:21 AM, Kranti Parisa kranti.par...@gmail.com
wrote:

 you can try
 https://code.google.com/p/solrmeter/

 or you can also run JMeter tests.

 or try the free trials offered by NewRelic or Sematext; they both have
extensive stats for Solr instances.

 Thanks & Regards,
 Kranti K Parisa
 http://www.linkedin.com/in/krantiparisa



 On Thu, Aug 29, 2013 at 6:32 AM, Mikhail Khludnev 
mkhlud...@griddynamics.com wrote:

 Hello,

 afaik http://code.google.com/a/apache-extras.org/p/luceneutil/ is
used for testing Lucene performance. What about Solr? Is it also supported,
or is there a separate well-known facility?

 Thanks in advance

 --
 Sincerely yours
 Mikhail Khludnev
 Principal Engineer,
 Grid Dynamics






 --
 Sincerely yours
 Mikhail Khludnev
 Principal Engineer,
 Grid Dynamics






[jira] [Created] (SOLR-5209) cores/action=UNLOAD of last replica removes shard from clusterstate

2013-09-02 Thread Christine Poerschke (JIRA)
Christine Poerschke created SOLR-5209:
-

 Summary: cores/action=UNLOAD of last replica removes shard from 
clusterstate
 Key: SOLR-5209
 URL: https://issues.apache.org/jira/browse/SOLR-5209
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.4
Reporter: Christine Poerschke


The problem we saw was that unloading the only replica of a shard deleted 
that shard's info from the clusterstate. Once it was gone, there was no 
easy way to re-create the shard (other than dropping and re-creating the whole 
collection's state).

This seems like a bug?

Overseer.java around line 600 has a comment and commented out code:
// TODO TODO TODO!!! if there are no replicas left for the slice, and the slice 
has no hash range, remove it
// if (newReplicas.size() == 0 && slice.getRange() == null) {
// if there are no replicas left for the slice remove it
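
For illustration, the check hinted at by the commented-out code might look like the sketch below: drop a slice only when it has neither replicas nor a hash range, so an empty slice that still owns a hash range survives and can be re-populated. This is a hypothetical sketch against the Solr 4.x Slice/Replica types, not the actual Overseer code or the attached patch.

{noformat}
import java.util.LinkedHashMap;
import java.util.Map;

import org.apache.solr.common.cloud.Replica;
import org.apache.solr.common.cloud.Slice;

final class SliceRemovalSketch {
  // Returns the slices that should remain after a replica was removed.
  static Map<String, Slice> pruneEmptySlices(Map<String, Slice> slices) {
    Map<String, Slice> remaining = new LinkedHashMap<String, Slice>();
    for (Map.Entry<String, Slice> entry : slices.entrySet()) {
      Slice slice = entry.getValue();
      Map<String, Replica> replicas = slice.getReplicasMap();
      // Drop the slice only if it has no replicas AND no hash range;
      // a hash-range slice with zero replicas is kept.
      if (replicas.isEmpty() && slice.getRange() == null) {
        continue;
      }
      remaining.put(entry.getKey(), slice);
    }
    return remaining;
  }
}
{noformat}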


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5209) cores/action=UNLOAD of last replica removes shard from clusterstate

2013-09-02 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-5209:
--

Attachment: SOLR-5209.patch

 cores/action=UNLOAD of last replica removes shard from clusterstate
 ---

 Key: SOLR-5209
 URL: https://issues.apache.org/jira/browse/SOLR-5209
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.4
Reporter: Christine Poerschke
 Attachments: SOLR-5209.patch


 The problem we saw was that unloading of an only replica of a shard deleted 
 that shard's info from the clusterstate. Once it was gone then there was no 
 easy way to re-create the shard (other than dropping and re-creating the 
 whole collection's state).
 This seems like a bug?
 Overseer.java around line 600 has a comment and commented out code:
 // TODO TODO TODO!!! if there are no replicas left for the slice, and the 
 slice has no hash range, remove it
 // if (newReplicas.size() == 0 && slice.getRange() == null) {
 // if there are no replicas left for the slice remove it

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5209) cores/action=UNLOAD of last replica removes shard from clusterstate

2013-09-02 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13756189#comment-13756189
 ] 

Christine Poerschke commented on SOLR-5209:
---

+original clusterstate (extract)+
{noformat}
shards : {
  shard1:{ range:..., replicas:{ { core:collection1_shard1 } } },
  shard2:{ range:..., replicas:{ { core:collection1_shard2 } } }
}
{noformat}

+actual clusterstate after UNLOAD of collection1_shard1+
{noformat}
shards : {
  shard2:{ range:..., replicas:{ { core:collection1_shard2 } } }
}
{noformat}

+expected clusterstate after UNLOAD of collection1_shard1+
{noformat}
shards : {
  shard1:{ range:..., replicas:{} },
  shard2:{ range:..., replicas:{ { core:collection1_shard2 } } }
}
{noformat}


 cores/action=UNLOAD of last replica removes shard from clusterstate
 ---

 Key: SOLR-5209
 URL: https://issues.apache.org/jira/browse/SOLR-5209
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.4
Reporter: Christine Poerschke
 Attachments: SOLR-5209.patch


 The problem we saw was that unloading of an only replica of a shard deleted 
 that shard's info from the clusterstate. Once it was gone then there was no 
 easy way to re-create the shard (other than dropping and re-creating the 
 whole collection's state).
 This seems like a bug?
 Overseer.java around line 600 has a comment and commented out code:
 // TODO TODO TODO!!! if there are no replicas left for the slice, and the 
 slice has no hash range, remove it
 // if (newReplicas.size() == 0 && slice.getRange() == null) {
 // if there are no replicas left for the slice remove it

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5189) Numeric DocValues Updates

2013-09-02 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera updated LUCENE-5189:
---

Attachment: LUCENE-5189.patch

Patch adds testStressMultiThreading and testUpdateOldSegments to ensure that 
updating segments written with a format older than Lucene45 is not supported.
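
For context, a hedged usage sketch of the feature being built here: updating a per-document numeric DocValues value in place, without re-indexing the document. The updateNumericDocValue call is the API this issue is introducing (signature approximate); the rest is standard Lucene 4.x indexing code with placeholder field names.

{noformat}
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.NumericDocValuesField;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

public class NumericDVUpdateSketch {
  public static void main(String[] args) throws Exception {
    Directory dir = new RAMDirectory();
    IndexWriterConfig cfg =
        new IndexWriterConfig(Version.LUCENE_45, new StandardAnalyzer(Version.LUCENE_45));
    IndexWriter writer = new IndexWriter(dir, cfg);

    Document doc = new Document();
    doc.add(new StringField("id", "doc-1", Field.Store.YES));
    doc.add(new NumericDocValuesField("popularity", 1L));
    writer.addDocument(doc);
    writer.commit();

    // Update only the DocValues value for the matching document(s);
    // stored fields and postings are left untouched.
    writer.updateNumericDocValue(new Term("id", "doc-1"), "popularity", 42L);
    writer.commit();

    writer.close();
    dir.close();
  }
}
{noformat}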

 Numeric DocValues Updates
 -

 Key: LUCENE-5189
 URL: https://issues.apache.org/jira/browse/LUCENE-5189
 Project: Lucene - Core
  Issue Type: New Feature
  Components: core/index
Reporter: Shai Erera
Assignee: Shai Erera
 Attachments: LUCENE-5189.patch, LUCENE-5189.patch, LUCENE-5189.patch, 
 LUCENE-5189.patch, LUCENE-5189.patch


 In LUCENE-4258 we started to work on incremental field updates, however the 
 amount of changes are immense and hard to follow/consume. The reason is that 
 we targeted postings, stored fields, DV etc., all from the get go.
 I'd like to start afresh here, with numeric-dv-field updates only. There are 
 a couple of reasons to that:
 * NumericDV fields should be easier to update, if e.g. we write all the 
 values of all the documents in a segment for the updated field (similar to 
 how livedocs work, and previously norms).
 * It's a fairly contained issue, attempting to handle just one data type to 
 update, yet requires many changes to core code which will also be useful for 
 updating other data types.
 * It has value in and on itself, and we don't need to allow updating all the 
 data types in Lucene at once ... we can do that gradually.
 I have some working patch already which I'll upload next, explaining the 
 changes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-5210) amend example's schema.xml and solrconfig.xml for blockjoin support

2013-09-02 Thread Mikhail Khludnev (JIRA)
Mikhail Khludnev created SOLR-5210:
--

 Summary: amend example's schema.xml and solrconfig.xml for 
blockjoin support
 Key: SOLR-5210
 URL: https://issues.apache.org/jira/browse/SOLR-5210
 Project: Solr
  Issue Type: Sub-task
Reporter: Mikhail Khludnev


I suppose it makes sense to apply 
https://svn.apache.org/viewvc/lucene/dev/trunk/solr/core/src/test-files/solr/collection1/conf/solrconfig.xml?r1=1513290&r2=1513289&pathrev=1513290
 and 
https://svn.apache.org/viewvc/lucene/dev/trunk/solr/core/src/test-files/solr/collection1/conf/schema.xml?r1=1513290&r2=1513289&pathrev=1513290
 to the example's config too, to provide an out-of-the-box block join experience. 
WDYT?
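
For illustration, once the example config carries the block-join pieces, an out-of-the-box parent query could be issued from SolrJ roughly like this (field names and the URL are placeholders; the {!parent} syntax is the block join parser referenced by the parent issue):

{noformat}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class BlockJoinQuerySketch {
  public static void main(String[] args) throws Exception {
    SolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1");
    // Return parent documents whose children match the child clause.
    SolrQuery q = new SolrQuery("{!parent which=\"content_type:parent\"}child_text:lucene");
    QueryResponse rsp = solr.query(q);
    System.out.println("parents found: " + rsp.getResults().getNumFound());
  }
}
{noformat}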

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5210) amend example's schema.xml and solrconfig.xml for blockjoin support

2013-09-02 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13756255#comment-13756255
 ] 

Markus Jelsma commented on SOLR-5210:
-

Yes! I haven't really followed block join support and could only get to know it by 
reading the unit tests and their config files. But if it is integrated into the main 
example, it would save me and everyone else some time :)

 amend example's schema.xml and solrconfig.xml for blockjoin support
 ---

 Key: SOLR-5210
 URL: https://issues.apache.org/jira/browse/SOLR-5210
 Project: Solr
  Issue Type: Sub-task
Reporter: Mikhail Khludnev
 Fix For: 4.5, 5.0


 I suppose it makes sense to apply 
 https://svn.apache.org/viewvc/lucene/dev/trunk/solr/core/src/test-files/solr/collection1/conf/solrconfig.xml?r1=1513290&r2=1513289&pathrev=1513290
  and 
 https://svn.apache.org/viewvc/lucene/dev/trunk/solr/core/src/test-files/solr/collection1/conf/schema.xml?r1=1513290&r2=1513289&pathrev=1513290
  to the example's config too, to provide an out-of-the-box block join experience. 
 WDYT?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5197) Add a method to SegmentReader to get the current index heap memory size

2013-09-02 Thread Areek Zillur (JIRA)
Areek Zillur created LUCENE-5197:


 Summary: Add a method to SegmentReader to get the current index 
heap memory size
 Key: LUCENE-5197
 URL: https://issues.apache.org/jira/browse/LUCENE-5197
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/codecs
Reporter: Areek Zillur


It would be useful to at least estimate the index heap size being used by 
Lucene. Ideally a method exposing this information at the SegmentReader level.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5197) Add a method to SegmentReader to get the current index heap memory size

2013-09-02 Thread Areek Zillur (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Areek Zillur updated LUCENE-5197:
-

Attachment: LUCENE-5197.patch

Initial patch that only takes into account classes in the codecs package 
to estimate the index heap memory usage for a SegmentReader.
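
As a rough illustration of the kind of number the patch aims to expose properly, one can already approximate per-segment heap usage from the outside with RamUsageEstimator; this deep object-graph estimate is slow and coarse and is not the per-codec accounting the patch implements.

{noformat}
import java.io.File;

import org.apache.lucene.index.AtomicReaderContext;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.RamUsageEstimator;

public class SegmentHeapEstimateSketch {
  public static void main(String[] args) throws Exception {
    Directory dir = FSDirectory.open(new File(args[0]));
    DirectoryReader reader = DirectoryReader.open(dir);
    long total = 0;
    for (AtomicReaderContext ctx : reader.leaves()) {
      // Estimate the heap reachable from each per-segment reader.
      long bytes = RamUsageEstimator.sizeOf(ctx.reader());
      System.out.println(ctx.reader() + " ~ " + RamUsageEstimator.humanReadableUnits(bytes));
      total += bytes;
    }
    System.out.println("total ~ " + RamUsageEstimator.humanReadableUnits(total));
    reader.close();
    dir.close();
  }
}
{noformat}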

 Add a method to SegmentReader to get the current index heap memory size
 ---

 Key: LUCENE-5197
 URL: https://issues.apache.org/jira/browse/LUCENE-5197
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/codecs
Reporter: Areek Zillur
 Attachments: LUCENE-5197.patch


 It would be useful to at least estimate the index heap size being used by 
 Lucene. Ideally a method exposing this information at the SegmentReader level.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5197) Add a method to SegmentReader to get the current index heap memory size

2013-09-02 Thread Areek Zillur (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Areek Zillur updated LUCENE-5197:
-

Component/s: core/index

 Add a method to SegmentReader to get the current index heap memory size
 ---

 Key: LUCENE-5197
 URL: https://issues.apache.org/jira/browse/LUCENE-5197
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/codecs, core/index
Reporter: Areek Zillur
 Attachments: LUCENE-5197.patch


 It would be useful to at least estimate the index heap size being used by 
 Lucene. Ideally a method exposing this information at the SegmentReader level.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5178) doc values should expose missing values (or allow configurable defaults)

2013-09-02 Thread Han Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13756313#comment-13756313
 ] 

Han Jiang commented on LUCENE-5178:
---

During testing I somehow hit a failure:

{noformat}
   [junit4] FAILURE 0.27s | TestRangeAccumulator.testMissingValues 
   [junit4] Throwable #1: org.junit.ComparisonFailure: expected:...(0)
   [junit4]   less than 10 ([8)
   [junit4]   less than or equal to 10 (]8)
   [junit4]   over 90 (8)
   [junit4]   9... but was:...(0)
   [junit4]   less than 10 ([28)
   [junit4]   less than or equal to 10 (2]8)
   [junit4]   over 90 (8)
   [junit4]   9...
   [junit4]at 
__randomizedtesting.SeedInfo.seed([815B6AA86D05329C:EBC638EE498F066D]:0)
   [junit4]at 
org.apache.lucene.facet.range.TestRangeAccumulator.testMissingValues(TestRangeAccumulator.java:670)
   [junit4]at java.lang.Thread.run(Thread.java:722)
{noformat}

Seed:
{noformat}
ant test  -Dtestcase=TestRangeAccumulator -Dtests.method=testMissingValues 
-Dtests.seed=815B6AA86D05329C -Dtests.slow=true -Dtests.postingsformat=Lucene41 
-Dtests.locale=ca -Dtests.timezone=Australia/Currie -Dtests.file.encoding=UTF-8
{noformat}

 doc values should expose missing values (or allow configurable defaults)
 

 Key: LUCENE-5178
 URL: https://issues.apache.org/jira/browse/LUCENE-5178
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Yonik Seeley
 Fix For: 5.0, 4.5

 Attachments: LUCENE-5178.patch, LUCENE-5178_reintegrate.patch


 DocValues should somehow allow a configurable default per-field.
 Possible implementations include setting it on the field in the document or 
 registration of an IndexWriter callback.
 If we don't make the default configurable, then another option is to have 
 DocValues fields keep track of whether a value was indexed for that document 
 or not.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-3069) Lucene should have an entirely memory resident term dictionary

2013-09-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13756320#comment-13756320
 ] 

ASF subversion and git services commented on LUCENE-3069:
-

Commit 1519542 from [~billy] in branch 'dev/branches/lucene3069'
[ https://svn.apache.org/r1519542 ]

LUCENE-3069: update javadocs, fix impersonator bug

 Lucene should have an entirely memory resident term dictionary
 --

 Key: LUCENE-3069
 URL: https://issues.apache.org/jira/browse/LUCENE-3069
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index, core/search
Affects Versions: 4.0-ALPHA
Reporter: Simon Willnauer
Assignee: Han Jiang
  Labels: gsoc2013
 Fix For: 5.0, 4.5

 Attachments: df-ttf-estimate.txt, example.png, LUCENE-3069.patch, 
 LUCENE-3069.patch, LUCENE-3069.patch, LUCENE-3069.patch, LUCENE-3069.patch, 
 LUCENE-3069.patch, LUCENE-3069.patch, LUCENE-3069.patch, LUCENE-3069.patch, 
 LUCENE-3069.patch, LUCENE-3069.patch, LUCENE-3069.patch


 FST based TermDictionary has been a great improvement yet it still uses a 
 delta codec file for scanning to terms. Some environments have enough memory 
 available to keep the entire FST based term dict in memory. We should add a 
 TermDictionary implementation that encodes all needed information for each 
 term into the FST (custom fst.Output) and builds a FST from the entire term 
 not just the delta.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5198) Strengthen the function of Min should match, making it select BooleanClause as Occur.MUST according to the weight of query

2013-09-02 Thread HeXin (JIRA)
HeXin created LUCENE-5198:
-

 Summary: Strengthen the function of Min should match, making it 
select BooleanClause as Occur.MUST according to the weight of query
 Key: LUCENE-5198
 URL: https://issues.apache.org/jira/browse/LUCENE-5198
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Affects Versions: 4.4
Reporter: HeXin
Priority: Trivial


In some cases, we want the min-should-match (mm) logic to select a BooleanClause as 
Occur.MUST according to the weight of its query.

Only if the weight is larger than the threshold can a clause be selected as Occur.MUST. 
The threshold would be configurable, defaulting to the minimum integer.

Any comments are welcome.
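
For illustration, a hypothetical sketch of the proposal using plain Lucene 4.x BooleanQuery APIs: promote optional clauses whose boost reaches the threshold to MUST and leave the rest to the usual mm handling. This is not an existing Lucene feature.

{noformat}
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.TermQuery;

public class WeightedMustSketch {
  // Rebuild the query, turning SHOULD clauses with boost >= threshold into MUST.
  static BooleanQuery promoteByWeight(BooleanQuery in, float threshold) {
    BooleanQuery out = new BooleanQuery();
    out.setMinimumNumberShouldMatch(in.getMinimumNumberShouldMatch());
    for (BooleanClause clause : in.clauses()) {
      Occur occur = clause.getOccur();
      if (occur == Occur.SHOULD && clause.getQuery().getBoost() >= threshold) {
        occur = Occur.MUST;
      }
      out.add(clause.getQuery(), occur);
    }
    return out;
  }

  public static void main(String[] args) {
    BooleanQuery bq = new BooleanQuery();
    TermQuery important = new TermQuery(new Term("body", "lucene"));
    important.setBoost(5.0f);
    bq.add(important, Occur.SHOULD);
    bq.add(new TermQuery(new Term("body", "search")), Occur.SHOULD);
    System.out.println(promoteByWeight(bq, 2.0f));
  }
}
{noformat}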

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5203) Strengthen the function of Min should match, making it select BooleanClause as Occur.MUST according to the weight of query

2013-09-02 Thread HeXin (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13756333#comment-13756333
 ] 

HeXin commented on SOLR-5203:
-

Sorry for the mistake. I think it should be classified as a Lucene feature, 
so I created the Lucene JIRA https://issues.apache.org/jira/browse/LUCENE-5198 
and am closing this one.


 Strengthen the function of Min should match, making it select BooleanClause 
 as Occur.MUST according to the weight of query
 --

 Key: SOLR-5203
 URL: https://issues.apache.org/jira/browse/SOLR-5203
 Project: Solr
  Issue Type: Improvement
  Components: search
Affects Versions: 4.4
Reporter: HeXin
Priority: Minor
 Fix For: 4.5, 5.0


 In some case, we want the value of mm to select BooleanClause as Occur.MUST 
 can according to the weight of query. 
 Only if the weight larger than the threshold, it can be selected as 
 Occur.MUST. The threshold can be configurable, equaling the minimum integer 
 by default. 
 Any comments is welcomed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-5203) Strengthen the function of Min should match, making it select BooleanClause as Occur.MUST according to the weight of query

2013-09-02 Thread HeXin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HeXin closed SOLR-5203.
---

Resolution: Unresolved

 Strengthen the function of Min should match, making it select BooleanClause 
 as Occur.MUST according to the weight of query
 --

 Key: SOLR-5203
 URL: https://issues.apache.org/jira/browse/SOLR-5203
 Project: Solr
  Issue Type: Improvement
  Components: search
Affects Versions: 4.4
Reporter: HeXin
Priority: Minor
 Fix For: 4.5, 5.0


 In some case, we want the value of mm to select BooleanClause as Occur.MUST 
 can according to the weight of query. 
 Only if the weight larger than the threshold, it can be selected as 
 Occur.MUST. The threshold can be configurable, equaling the minimum integer 
 by default. 
 Any comments is welcomed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0-ea-b102) - Build # 3220 - Failure!

2013-09-02 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/3220/
Java: 32bit/jdk1.8.0-ea-b102 -server -XX:+UseG1GC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.search.similarities.TestDFRSimilarityFactory

Error Message:
Suite timeout exceeded (= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (= 720 msec).
at __randomizedtesting.SeedInfo.seed([499FBACF20EAC99E]:0)




Build Log:
[...truncated 9815 lines...]
   [junit4] Suite: org.apache.solr.search.similarities.TestDFRSimilarityFactory
   [junit4]   2 361669 T1481 oas.SolrTestCaseJ4.initCore initCore
   [junit4]   2 Creating dataDir: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\.\solrtest-TestDFRSimilarityFactory-1378174853108
   [junit4]   2 361672 T1481 oasc.SolrResourceLoader.init new 
SolrResourceLoader for directory: 
'C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test-files\solr\collection1\'
   [junit4]   2 361675 T1481 oasc.SolrResourceLoader.replaceClassLoader Adding 
'file:/C:/Users/JenkinsSlave/workspace/Lucene-Solr-trunk-Windows/solr/build/solr-core/test-files/solr/collection1/lib/classes/'
 to classloader
   [junit4]   2 361677 T1481 oasc.SolrResourceLoader.replaceClassLoader Adding 
'file:/C:/Users/JenkinsSlave/workspace/Lucene-Solr-trunk-Windows/solr/build/solr-core/test-files/solr/collection1/lib/README'
 to classloader
   [junit4]   2 361725 T1481 oasc.SolrConfig.init Using Lucene MatchVersion: 
LUCENE_50
   [junit4]   2 361730 T1481 oasc.SolrConfig.init Loaded SolrConfig: 
solrconfig-basic.xml
   [junit4]   2 361731 T1481 oass.IndexSchema.readSchema Reading Solr Schema 
from schema-dfr.xml
   [junit4]   2 361734 T1481 oass.IndexSchema.readSchema [null] Schema 
name=test
   [junit4]   2 361753 T1481 oass.IndexSchema.readSchema default search field 
in schema is text
   [junit4]   2 361753 T1481 oass.IndexSchema.readSchema unique key field: id
   [junit4]   2 361754 T1481 oasc.SolrResourceLoader.locateSolrHome JNDI not 
configured for solr (NoInitialContextEx)
   [junit4]   2 361754 T1481 oasc.SolrResourceLoader.locateSolrHome using 
system property solr.solr.home: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test-files\solr
   [junit4]   2 361754 T1481 oasc.SolrResourceLoader.init new 
SolrResourceLoader for directory: 
'C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test-files\solr\'
   [junit4]   2 361771 T1481 oasc.SolrResourceLoader.locateSolrHome JNDI not 
configured for solr (NoInitialContextEx)
   [junit4]   2 361771 T1481 oasc.SolrResourceLoader.locateSolrHome using 
system property solr.solr.home: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test-files\solr
   [junit4]   2 361772 T1481 oasc.SolrResourceLoader.init new 
SolrResourceLoader for deduced Solr Home: 
'C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test-files\solr\'
   [junit4]   2 361795 T1481 oasc.CoreContainer.init New CoreContainer 
13672666
   [junit4]   2 361795 T1481 oasc.CoreContainer.load Loading cores into 
CoreContainer 
[instanceDir=C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test-files\solr\]
   [junit4]   2 361796 T1481 oashc.HttpShardHandlerFactory.getParameter 
Setting socketTimeout to: 0
   [junit4]   2 361796 T1481 oashc.HttpShardHandlerFactory.getParameter 
Setting urlScheme to: http://
   [junit4]   2 361796 T1481 oashc.HttpShardHandlerFactory.getParameter 
Setting connTimeout to: 0
   [junit4]   2 361797 T1481 oashc.HttpShardHandlerFactory.getParameter 
Setting maxConnectionsPerHost to: 20
   [junit4]   2 361797 T1481 oashc.HttpShardHandlerFactory.getParameter 
Setting corePoolSize to: 0
   [junit4]   2 361797 T1481 oashc.HttpShardHandlerFactory.getParameter 
Setting maximumPoolSize to: 2147483647
   [junit4]   2 361797 T1481 oashc.HttpShardHandlerFactory.getParameter 
Setting maxThreadIdleTime to: 5
   [junit4]   2 361797 T1481 oashc.HttpShardHandlerFactory.getParameter 
Setting sizeOfQueue to: -1
   [junit4]   2 361797 T1481 oashc.HttpShardHandlerFactory.getParameter 
Setting fairnessPolicy to: false
   [junit4]   2 361798 T1481 oascsi.HttpClientUtil.createClient Creating new 
http client, 
config:maxConnectionsPerHost=20maxConnections=1socketTimeout=0connTimeout=0retry=false
   [junit4]   2 361806 T1481 oasl.LogWatcher.createWatcher SLF4J impl is 
org.slf4j.impl.Log4jLoggerFactory
   [junit4]   2 361806 T1481 oasl.LogWatcher.newRegisteredLogWatcher 
Registering Log Listener [Log4j (org.slf4j.impl.Log4jLoggerFactory)]
   [junit4]   2 361812 T1482 oasc.CoreContainer.create Creating SolrCore 
'collection1' using instanceDir: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test-files\solr\collection1
   [junit4]   2 361812 T1482 oasc.SolrResourceLoader.init new 
SolrResourceLoader for directory: