JCC build fails on Mac OS X Mavericks

2014-03-11 Thread Peter Ganong
Hi,

I'm having trouble installing JCC. I don't know if this is the right forum,
but I figured I would post here, since this page:
http://lucene.apache.org/pylucene/mailing-lists.html
said this was the list to post build issues to.

I'm trying to install JCC as a first step to installing PyLucene. I've
installed JDK 1.7, Python 2.7, Python 3.3, XCode, XCode Command Line Tools
and setuptools 3.1. I've pasted the error I'm getting below.

Thanks,

Peter

Macintosh-58:jcc ganong$ python setup.py build

found JAVAHOME =
/Library/Java/JavaVirtualMachines/jdk1.7.0_51.jdk/Contents/Home

found JAVAFRAMEWORKS = /System/Library/Frameworks/JavaVM.framework

Loading source files for package org.apache.jcc...

Constructing Javadoc information...

Standard Doclet version 1.7.0_51

Building tree for all the packages and classes...

Generating javadoc/org/apache/jcc/PythonException.html...

Generating javadoc/org/apache/jcc/PythonVM.html...

Generating javadoc/org/apache/jcc/package-frame.html...

Generating javadoc/org/apache/jcc/package-summary.html...

Generating javadoc/org/apache/jcc/package-tree.html...

Generating javadoc/constant-values.html...

Generating javadoc/serialized-form.html...

Building index for all the packages and classes...

Generating javadoc/overview-tree.html...

Generating javadoc/index-all.html...

Generating javadoc/deprecated-list.html...

Building index for all classes...

Generating javadoc/allclasses-frame.html...

Generating javadoc/allclasses-noframe.html...

Generating javadoc/index.html...

Generating javadoc/help-doc.html...

running build

running build_py

writing /Users/ganong/Downloads/pylucene-4.6.1-1/jcc/jcc/config.py

copying jcc/config.py -> build/lib.macosx-10.9-intel-2.7/jcc

copying jcc/classes/org/apache/jcc/PythonVM.class ->
build/lib.macosx-10.9-intel-2.7/jcc/classes/org/apache/jcc

copying jcc/classes/org/apache/jcc/PythonException.class ->
build/lib.macosx-10.9-intel-2.7/jcc/classes/org/apache/jcc

running build_ext

building 'jcc' extension

cc -fno-strict-aliasing -fno-common -dynamic -arch x86_64 -arch i386 -g -Os
-pipe -fno-common -fno-strict-aliasing -fwrapv -mno-fused-madd
-DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes
-Wshorten-64-to-32 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes
-DENABLE_DTRACE -dynamiclib -D_jcc_lib -DJCC_VER=2.19
-I/Library/Java/JavaVirtualMachines/jdk1.7.0_51.jdk/Contents/Home/include
-I/Library/Java/JavaVirtualMachines/jdk1.7.0_51.jdk/Contents/Home/include/darwin
-I_jcc -Ijcc/sources
-I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7
-c jcc/sources/jcc.cpp -o
build/temp.macosx-10.9-intel-2.7/jcc/sources/jcc.o -DPYTHON
-fno-strict-aliasing -Wno-write-strings

clang: error: unknown argument: '-mno-fused-madd'
[-Wunused-command-line-argument-hard-error-in-future]

clang: note: this will be a hard error (cannot be downgraded to a warning)
in the future

error: command 'cc' failed with exit status 1
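
For reference, the failure comes from Xcode 5.1's clang, which treats the GCC-only
flag -mno-fused-madd (passed along by the system Python's distutils) as a hard
error. A commonly reported workaround (not an official fix) is to relax that error
before re-running the build, for example:

# Unofficial workaround: downgrade clang's unknown-argument error back to a warning
export ARCHFLAGS="-Wno-error=unused-command-line-argument-hard-error-in-future"
# Alternatively, have clang silently ignore flags it does not recognize:
#   export CFLAGS=-Qunused-arguments CPPFLAGS=-Qunused-arguments
python setup.py build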

-- 
Peter Ganong
PhD Candidate in Economics at Harvard
scholar.harvard.edu/ganong/


[jira] [Updated] (LUCENE-5518) minor hunspell optimizations

2014-03-11 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-5518:


Attachment: LUCENE-5518.patch

ok now 3 times faster.

the condition check is moved before applyAffix, and the 
StringBuilder/String/utf8ToString stuff is removed (strips are deduplicated 
into a giant char[]).

Other things to speed this up are more complicated: I don't think this makes it 
too much worse right now.

 minor hunspell optimizations
 

 Key: LUCENE-5518
 URL: https://issues.apache.org/jira/browse/LUCENE-5518
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/analysis
Reporter: Robert Muir
 Attachments: LUCENE-5518.patch, LUCENE-5518.patch


 After benchmarking indexing speed on SOLR-3245, I ran a profiler and a couple 
 things stood out.
 There are other things I want to improve too, but these almost double the 
 speed for many dictionaries.
 * Hunspell supports two-stage affix stripping, but the vast majority of 
 dictionaries don't have any affixes that support it. So we just add a boolean 
 (Dictionary.twoStageAffix) that is false until we see one.
 We use java.util.regex.Pattern for condition checks. This is slow; I 
 switched to o.a.l.automaton and it's much faster, and it uses slightly less RAM 
 too.
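
To make the regex-to-automaton point concrete, here is a minimal sketch of an
automaton-based condition check (illustrative only; the class and method names
are not from the patch):

import org.apache.lucene.util.automaton.CharacterRunAutomaton;
import org.apache.lucene.util.automaton.RegExp;

// Hedged sketch: compile the affix condition once at dictionary-load time; each
// check is then a cheap DFA walk instead of a java.util.regex.Pattern match.
final class AffixCondition {
  private final CharacterRunAutomaton condition;

  AffixCondition(String conditionPattern) {
    this.condition = new CharacterRunAutomaton(new RegExp(conditionPattern).toAutomaton());
  }

  boolean accepts(String stem) {
    return condition.run(stem);
  }
}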



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2894) Implement distributed pivot faceting

2014-03-11 Thread Elran Dvir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930026#comment-13930026
 ] 

Elran Dvir commented on SOLR-2894:
--

No.
It doesn't happen when I use facet.limit=-1 instead of the 
f.fieldname.facet.limit syntax.

Thanks.
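
For readers following the thread, the two parameter forms being compared look
roughly like this (field and pivot names are placeholders):

  facet=true&facet.pivot=cat,state&facet.limit=-1          (global limit: no problem)
  facet=true&facet.pivot=cat,state&f.cat.facet.limit=-1    (per-field limit: shows the problem)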

 Implement distributed pivot faceting
 

 Key: SOLR-2894
 URL: https://issues.apache.org/jira/browse/SOLR-2894
 Project: Solr
  Issue Type: Improvement
Reporter: Erik Hatcher
 Fix For: 4.7

 Attachments: SOLR-2894-reworked.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 dateToObject.patch


 Following up on SOLR-792, pivot faceting currently only supports 
 undistributed mode.  Distributed pivot faceting needs to be implemented.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-5847) The Admin GUI doesn't allow to abort a running dataimport

2014-03-11 Thread Paco Garcia (JIRA)
Paco Garcia created SOLR-5847:
-

 Summary: The Admin GUI doesn't allow to abort a running dataimport
 Key: SOLR-5847
 URL: https://issues.apache.org/jira/browse/SOLR-5847
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler, web gui
Affects Versions: 4.7
Reporter: Paco Garcia
Priority: Minor


With the changes introduced in the 4.7.0 release by SOLR-5517 (Return HTTP error on 
POST requests with no Content-Type), the jQuery invocation that aborts a running 
dataimport fails with HTTP error code 415.

The POST request should have some content in the body.

See comments in SOLR-5517
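
A minimal command-line illustration of the failure and a workaround (core name
and port are the stock defaults; this is a sketch, not the eventual UI fix):

# POST with an empty body and no Content-Type: rejected with 415 on Solr 4.7
curl -X POST 'http://localhost:8983/solr/collection1/dataimport?command=abort'

# Sending any form body makes curl add a Content-Type header, so the abort is accepted
curl -X POST --data 'command=abort' 'http://localhost:8983/solr/collection1/dataimport'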



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5517) Return HTTP error on POST requests with no Content-Type

2014-03-11 Thread Paco Garcia (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930065#comment-13930065
 ] 

Paco Garcia commented on SOLR-5517:
---

OK SOLR-5847

 Return HTTP error on POST requests with no Content-Type
 ---

 Key: SOLR-5517
 URL: https://issues.apache.org/jira/browse/SOLR-5517
 Project: Solr
  Issue Type: Improvement
Reporter: Ryan Ernst
Assignee: Ryan Ernst
 Fix For: 4.7, 5.0

 Attachments: SOLR-5517.patch, SOLR-5517.patch, SOLR-5517.patch, 
 SOLR-5517.patch, SOLR-5517.patch


 While the HTTP spec states requests without a content-type should be treated 
 as application/octet-stream, the HTML spec says instead that POST requests 
 without a content-type should be treated as a form 
 (http://www.w3.org/MarkUp/html-spec/html-spec_8.html#SEC8.2.1).  It would be 
 nice to allow large search requests from HTML forms, and not have to rely on 
 the browser to set the content type (since the spec says it doesn't have to).



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5476) Facet sampling

2014-03-11 Thread Rob Audenaerde (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930078#comment-13930078
 ] 

Rob Audenaerde commented on LUCENE-5476:


Thanks again for the good points. 

I currently have an {amortizeFacetCounts} that uses the {IndexSearcher} to 
retrieve a reader and a {FacetsConfig} to determine the {Term} for the 
{docFreq}. I'm not really sure how this will work for hierarchies though. I 
will also add the {{totalHits}} as an upper bound, great idea Gilad.

I also removed the exact assert and switched to an atLeast. Will add a patch soon.
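
A rough sketch of the amortization step being described (names and signature are
illustrative, not from the patch):

import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;

class AmortizeSketch {
  // Hedged sketch: scale a sampled count back up by the sampling rate, then cap
  // it by the docFreq of the dimension's drill-down term, an index-level upper
  // bound on how many documents can carry that facet value.
  static int amortize(int sampledCount, double samplingRate,
                      IndexReader reader, Term drillDownTerm) throws IOException {
    int scaled = (int) (sampledCount / samplingRate);
    return Math.min(scaled, reader.docFreq(drillDownTerm));
  }
}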



 Facet sampling
 --

 Key: LUCENE-5476
 URL: https://issues.apache.org/jira/browse/LUCENE-5476
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Rob Audenaerde
 Attachments: LUCENE-5476.patch, LUCENE-5476.patch, LUCENE-5476.patch, 
 LUCENE-5476.patch, LUCENE-5476.patch, LUCENE-5476.patch, LUCENE-5476.patch, 
 SamplingComparison_SamplingFacetsCollector.java, SamplingFacetsCollector.java


 With LUCENE-5339 facet sampling disappeared. 
 When trying to display facet counts on large datasets (10M documents) 
 counting facets is rather expensive, as all the hits are collected and 
 processed. 
 Sampling greatly reduced this and thus provided a nice speedup. Could it be 
 brought back?



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-5476) Facet sampling

2014-03-11 Thread Rob Audenaerde (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930078#comment-13930078
 ] 

Rob Audenaerde edited comment on LUCENE-5476 at 3/11/14 8:10 AM:
-

Thanks again for the good points. 

I currently have an {{amortizeFacetCounts}} that uses the {{IndexSearcher}} to 
retrieve a reader and a {{FacetsConfig}} to determine the {{Term}} for the 
{{docFreq}}. I'm not really sure how this will work for hierarchies though. 

Using {{totalHits}} as an upper bound is not really necessary, I think; the 
sampling rate is determined by the total number of hits and the sample size, so 
reversing this can never yield numbers greater than {{totalHits}}. 

I also removed the exact assert and switched to an {{atLeast}}. Will add a patch 
soon.




was (Author: robau):
Thanks again for the good points. 

I currently have an {amortizeFacetCounts} that uses the {IndexSearcher} to 
retrieve a reader an a {FacetsConfig} to determine the {Term} for the 
{docFreq}. I'm not really sure how this will work for hierarchies though. I 
will also add the {{totalHits}} as upper bound, great idea Gilad.

I alse removed the exact assert and switch to an atLeast. Will add a patch soon.



 Facet sampling
 --

 Key: LUCENE-5476
 URL: https://issues.apache.org/jira/browse/LUCENE-5476
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Rob Audenaerde
 Attachments: LUCENE-5476.patch, LUCENE-5476.patch, LUCENE-5476.patch, 
 LUCENE-5476.patch, LUCENE-5476.patch, LUCENE-5476.patch, LUCENE-5476.patch, 
 SamplingComparison_SamplingFacetsCollector.java, SamplingFacetsCollector.java


 With LUCENE-5339 facet sampling disappeared. 
 When trying to display facet counts on large datasets (10M documents) 
 counting facets is rather expensive, as all the hits are collected and 
 processed. 
 Sampling greatly reduced this and thus provided a nice speedup. Could it be 
 brought back?



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5476) Facet sampling

2014-03-11 Thread Rob Audenaerde (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rob Audenaerde updated LUCENE-5476:
---

Attachment: LUCENE-5476.patch

 Facet sampling
 --

 Key: LUCENE-5476
 URL: https://issues.apache.org/jira/browse/LUCENE-5476
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Rob Audenaerde
 Attachments: LUCENE-5476.patch, LUCENE-5476.patch, LUCENE-5476.patch, 
 LUCENE-5476.patch, LUCENE-5476.patch, LUCENE-5476.patch, LUCENE-5476.patch, 
 LUCENE-5476.patch, SamplingComparison_SamplingFacetsCollector.java, 
 SamplingFacetsCollector.java


 With LUCENE-5339 facet sampling disappeared. 
 When trying to display facet counts on large datasets (10M documents) 
 counting facets is rather expensive, as all the hits are collected and 
 processed. 
 Sampling greatly reduced this and thus provided a nice speedup. Could it be 
 brought back?



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5502) equals method of TermsFilter might equate two different filters

2014-03-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930087#comment-13930087
 ] 

ASF subversion and git services commented on LUCENE-5502:
-

Commit 1576223 from [~jpountz] in branch 'dev/trunk'
[ https://svn.apache.org/r1576223 ]

LUCENE-5502: Fix TermsFilter.equals.

 equals method of TermsFilter might equate two different filters
 ---

 Key: LUCENE-5502
 URL: https://issues.apache.org/jira/browse/LUCENE-5502
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/query/scoring
Affects Versions: 4.7
Reporter: Igor Motov
 Attachments: LUCENE-5502.patch, LUCENE-5502.patch, LUCENE-5502.patch


 If two terms filters have 1) the same number of terms, 2) use the same field 
 in all these terms and 3) term values that happen to have the same hash codes, 
 these two filters are considered equal as long as the first term is the 
 same in both filters.
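
A self-contained sketch of the stricter comparison the report calls for
(illustrative only; the field layout is an assumption and this is not the
committed patch):

import java.util.Arrays;

// Hedged sketch: equality must compare the actual term bytes, not just the term
// count, the field and the hash codes.
final class TermsKey {
  final String field;
  final byte[] termBytes; // all term values, concatenated in sorted order
  final int[] offsets;    // start offset of each term within termBytes

  TermsKey(String field, byte[] termBytes, int[] offsets) {
    this.field = field;
    this.termBytes = termBytes;
    this.offsets = offsets;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof TermsKey)) return false;
    TermsKey other = (TermsKey) o;
    return field.equals(other.field)
        && Arrays.equals(offsets, other.offsets)
        && Arrays.equals(termBytes, other.termBytes); // byte-level comparison
  }

  @Override
  public int hashCode() {
    return 31 * field.hashCode() + Arrays.hashCode(termBytes);
  }
}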



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5502) equals method of TermsFilter might equate two different filters

2014-03-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930088#comment-13930088
 ] 

ASF subversion and git services commented on LUCENE-5502:
-

Commit 1576227 from [~jpountz] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1576227 ]

LUCENE-5502: Fix TermsFilter.equals.

 equals method of TermsFilter might equate two different filters
 ---

 Key: LUCENE-5502
 URL: https://issues.apache.org/jira/browse/LUCENE-5502
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/query/scoring
Affects Versions: 4.7
Reporter: Igor Motov
 Attachments: LUCENE-5502.patch, LUCENE-5502.patch, LUCENE-5502.patch


 If two terms filters have 1) the same number of terms, 2) use the same field 
 in all these terms and 3) term values that happen to have the same hash codes, 
 these two filters are considered equal as long as the first term is the 
 same in both filters.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5502) equals method of TermsFilter might equate two different filters

2014-03-11 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-5502.
--

   Resolution: Fixed
Fix Version/s: 4.8
 Assignee: Adrien Grand

Committed, thanks Igor!

 equals method of TermsFilter might equate two different filters
 ---

 Key: LUCENE-5502
 URL: https://issues.apache.org/jira/browse/LUCENE-5502
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/query/scoring
Affects Versions: 4.7
Reporter: Igor Motov
Assignee: Adrien Grand
 Fix For: 4.8

 Attachments: LUCENE-5502.patch, LUCENE-5502.patch, LUCENE-5502.patch


 If two terms filters have 1) the same number of terms, 2) use the same field 
 in all these terms and 3) term values that happen to have the same hash codes, 
 these two filters are considered equal as long as the first term is the 
 same in both filters.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5476) Facet sampling

2014-03-11 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930111#comment-13930111
 ] 

Shai Erera commented on LUCENE-5476:


* Javadocs:
** From the class javadocs: _Note: the original set of hits will be available 
as documents..._
I think we should just write the original set of hits can be retrieved from 
getOriginal.. - I don't want anyone to be confused with the wording will be 
available as documents.
** Can you make NOT_CALCULATED static?
** Typo: samplingRato
** randomSeed: I think it should say if 0... not if null.

* getMatchingDocs -- can you move the totalHits calculation to 
{{getTotalHits()}}? And then call it only {{if (sampledDocs==null)}}?

* needsSampling -- I know it was suggested to make it protected for inheritance 
purposes, but as it looks now, all members are private so I don't see how one 
can extend the class only to override this method (e.g. they won't even have 
access to sampleSize). Maybe we should keep it private and when someone asks to 
extend, we know better what needs to be protected? For instance, I think it's 
also important that we allow overriding createSampledDocList, but for now let's 
keep it simple.

* I think that we need to document somewhere (maybe in class javadocs) that the 
returned sampled docs may include empty MatchingDocs instances (i.e. when no 
docs were sampled from a segment). Just so that we don't surprise anyone with 
empty instances. If people work w/ MatchingDocs as they should, by obtaining an 
iterator, it shouldn't be a problem, but better document it explicitly.
** Perhaps we should also say something about the returned 
MatchingDocs.totalHits, which are the original totalHits and not the sampled 
set size?

* About carryOver:
** Doesn't it handle the TODO at the beginning of createSample?
** Why does it need to always include the first document of the first segment? 
Rather, you could initialize it to -1 and, if {{carryOver == -1}}, set it to a 
random index within that bin? Seems more random to me (see the rough sketch 
after this list).

* amortizedFacetCounts:
** It's a matter of style so I don't mind if you keep this like you wrote, but 
I would write it as {{if (!needsSampling() || res == null)}} and then the rest 
of the method isn't indented. Your call.
** I think it's better to allocate {{childPath}} so that the first element is 
already the dimension. See what FacetsConfig.pathToString currently does. 
Currently it results in re-allocating the String[] for every label. Then you 
can call the pathToString variant which takes the array and the length.
*** Separately, would be good if FacetsConfig had an appendElement(Appendable, 
int idx) to append a specific element to the appendable. Then you could use a 
StringBuilder once, and skip the encoding done for the entire path except the 
last element.
** Perhaps we should cap this {{(int) (res.value.doubleValue() / 
this.samplingRate)}} by e.g. the number of non-deleted documents?

* About test:
** This comment is wrong: _//there will be 5000 hits._
** Why do you initialize your own Random? It's impossible to debug the test 
like that. You should use {{random()}} instead.
** This comment is wrong? _//because a randomindexer is used, the results will 
vary each time._ -- it's not because of the random indexer, but because of random 
sampling, no?
** Would you mind formatting the test code? I see empty lines, curly braces 
that are sometimes open in the next line etc. I can do it too before I commit 
it.

* I see that createSample still has the scores array bug -- it returns the 
original scores array, irrespective of the sampled docs. Before you fix it, can 
you please add a testcase which covers scores and fails?

I think we're very close!
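
A rough, framework-free sketch of the binned sampling discussed in the carryOver
point above (illustrative only; the real collector works per segment on
FacetsCollector.MatchingDocs):

import java.util.Arrays;
import java.util.Random;

class SamplingSketch {
  // Hedged sketch: keep roughly one hit per bin of size 1/samplingRate, starting
  // at a random offset inside the first bin instead of always taking the first doc.
  static int[] sampleHits(int[] hitDocs, double samplingRate, Random random) {
    int binSize = (int) (1.0 / samplingRate);
    int[] sampled = new int[hitDocs.length / binSize + 1];
    int count = 0;
    int next = random.nextInt(binSize); // random start within the first bin
    for (int i = 0; i < hitDocs.length; i++) {
      if (i == next) {
        sampled[count++] = hitDocs[i];
        next += binSize;
      }
    }
    return Arrays.copyOf(sampled, count);
  }
}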

 Facet sampling
 --

 Key: LUCENE-5476
 URL: https://issues.apache.org/jira/browse/LUCENE-5476
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Rob Audenaerde
 Attachments: LUCENE-5476.patch, LUCENE-5476.patch, LUCENE-5476.patch, 
 LUCENE-5476.patch, LUCENE-5476.patch, LUCENE-5476.patch, LUCENE-5476.patch, 
 LUCENE-5476.patch, SamplingComparison_SamplingFacetsCollector.java, 
 SamplingFacetsCollector.java


 With LUCENE-5339 facet sampling disappeared. 
 When trying to display facet counts on large datasets (10M documents) 
 counting facets is rather expensive, as all the hits are collected and 
 processed. 
 Sampling greatly reduced this and thus provided a nice speedup. Could it be 
 brought back?



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5476) Facet sampling

2014-03-11 Thread Rob Audenaerde (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930118#comment-13930118
 ] 

Rob Audenaerde commented on LUCENE-5476:


Thanks Shai, I really appreciate all the feedback for you guys on this issue. 

I will try to fix the loose ends soon.

 Facet sampling
 --

 Key: LUCENE-5476
 URL: https://issues.apache.org/jira/browse/LUCENE-5476
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Rob Audenaerde
 Attachments: LUCENE-5476.patch, LUCENE-5476.patch, LUCENE-5476.patch, 
 LUCENE-5476.patch, LUCENE-5476.patch, LUCENE-5476.patch, LUCENE-5476.patch, 
 LUCENE-5476.patch, SamplingComparison_SamplingFacetsCollector.java, 
 SamplingFacetsCollector.java


 With LUCENE-5339 facet sampling disappeared. 
 When trying to display facet counts on large datasets (10M documents) 
 counting facets is rather expensive, as all the hits are collected and 
 processed. 
 Sampling greatly reduced this and thus provided a nice speedup. Could it be 
 brought back?



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5476) Facet sampling

2014-03-11 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930121#comment-13930121
 ] 

Shai Erera commented on LUCENE-5476:


bq. Thanks Shai, I really appreciate all the feedback for you guys on this 
issue. 

No worries, it should be me thanking you for doing all this work!

 Facet sampling
 --

 Key: LUCENE-5476
 URL: https://issues.apache.org/jira/browse/LUCENE-5476
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Rob Audenaerde
 Attachments: LUCENE-5476.patch, LUCENE-5476.patch, LUCENE-5476.patch, 
 LUCENE-5476.patch, LUCENE-5476.patch, LUCENE-5476.patch, LUCENE-5476.patch, 
 LUCENE-5476.patch, SamplingComparison_SamplingFacetsCollector.java, 
 SamplingFacetsCollector.java


 With LUCENE-5339 facet sampling disappeared. 
 When trying to display facet counts on large datasets (10M documents) 
 counting facets is rather expensive, as all the hits are collected and 
 processed. 
 Sampling greatly reduced this and thus provided a nice speedup. Could it be 
 brought back?



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-5476) Facet sampling

2014-03-11 Thread Rob Audenaerde (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930118#comment-13930118
 ] 

Rob Audenaerde edited comment on LUCENE-5476 at 3/11/14 9:39 AM:
-

Thanks Shai, I really appreciate all the feedback from you guys on this issue. 

I will try to fix the loose ends soon.


was (Author: robau):
Thanks Shai, I really appreciate all the feedback for you guys on this issue. 

I will try to fix the loose ends soon.

 Facet sampling
 --

 Key: LUCENE-5476
 URL: https://issues.apache.org/jira/browse/LUCENE-5476
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Rob Audenaerde
 Attachments: LUCENE-5476.patch, LUCENE-5476.patch, LUCENE-5476.patch, 
 LUCENE-5476.patch, LUCENE-5476.patch, LUCENE-5476.patch, LUCENE-5476.patch, 
 LUCENE-5476.patch, SamplingComparison_SamplingFacetsCollector.java, 
 SamplingFacetsCollector.java


 With LUCENE-5339 facet sampling disappeared. 
 When trying to display facet counts on large datasets (10M documents) 
 counting facets is rather expensive, as all the hits are collected and 
 processed. 
 Sampling greatly reduced this and thus provided a nice speedup. Could it be 
 brought back?



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Solr4.7 DataImport 500 Error please help

2014-03-11 Thread steben
HTTP Status 500 - {msg=SolrCore 'collection1' is not available due to init
failure: severeErrors,trace=org.apache.solr.common.SolrException: SolrCore
'collection1' is not available due to init failure: severeErrors at
org.apache.solr.core.CoreContainer.getCore(CoreContainer.java:827) at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:317)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:217)
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
at
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:225)
at
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:169)
at
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168)
at
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:927)
at
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
at
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
at
org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:999)
at
org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:565)
at
org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:307)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at
java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at
java.lang.Thread.run(Unknown Source) Caused by:
org.apache.solr.common.SolrException: severeErrors at
org.apache.solr.core.SolrCore.init(SolrCore.java:844) at
org.apache.solr.core.SolrCore.init(SolrCore.java:630) at
org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:562)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:597) at
org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:258) at
org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:250) at
java.util.concurrent.FutureTask.run(Unknown Source) at
java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) at
java.util.concurrent.FutureTask.run(Unknown Source) ... 3 more Caused by:
java.lang.NoSuchFieldError: severeErrors at
org.apache.solr.handler.dataimport.DataImportHandler.inform(DataImportHandler.java:121)
at
org.apache.solr.core.SolrResourceLoader.inform(SolrResourceLoader.java:631)
at org.apache.solr.core.SolrCore.init(SolrCore.java:835) ... 11 more
,code=500}

type Status report

message {msg=SolrCore 'collection1' is not available due to init failure:
severeErrors,trace=org.apache.solr.common.SolrException: SolrCore
'collection1' is not available due to init failure: severeErrors at
org.apache.solr.core.CoreContainer.getCore(CoreContainer.java:827) at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:317)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:217)
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
at
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:225)
at
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:169)
at
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168)
at
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:927)
at
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
at
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
at
org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:999)
at
org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:565)
at
org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:307)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at
java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at
java.lang.Thread.run(Unknown Source) Caused by:
org.apache.solr.common.SolrException: severeErrors at
org.apache.solr.core.SolrCore.init(SolrCore.java:844) at
org.apache.solr.core.SolrCore.init(SolrCore.java:630) at
org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:562)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:597) at
org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:258) at
org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:250) at
java.util.concurrent.FutureTask.run(Unknown Source) at
java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) at
java.util.concurrent.FutureTask.run(Unknown Source) ... 3 more Caused by:

Re: [VOTE] Move to Java 7 in Lucene/Solr 4.8, use Java 8 in trunk (once officially released)

2014-03-11 Thread Grant Ingersoll

On Mar 8, 2014, at 11:17 AM, Uwe Schindler u...@thetaphi.de wrote:

 [.] Move Lucene/Solr 4.8 (means branch_4x) to Java 7 and backport all Java 
 7-related issues (FileChannel improvements, diamond operator,...).

-0 -- Seems a little odd that we would force an upgrade on a minor version, 
which is not usually seen as best practice in development.  A lot of users will 
have to completely re-certify their whole infrastructure when changing to a new 
JVM, which is a significant undertaking and thus I don't think this decision 
should be made casually.  For instance, Dawid and I found a fairly nasty little 
bug where J6 swallows all exceptions in the thread completion service when the 
blocking queue is full, whereas J7 throws the exception.   Some people might be 
in for a rude awakening on tracking that one down if they are using 
ThreadCompletionService.

From what I see of our users in production, about 25% are on Java6 still, 75% 
are on either 7 or 8, with most of them being on 7.  I think Typesafe had an 
interesting survey recently on Java version adoption, which might be worth 
examining as well.

That being said, it's never easy to get people to go forward, so sometimes you 
just need to push them forward.  

 [.] Move Lucene/Solr trunk to Java 8 and allow closures in source code. This 
 would make some APIs much nicer. Our infrastructure mostly supports this, 
 only ECJ Javadoc linting is not yet possible, but forbidden-apis supports 
 Java 8 with all its crazy new stuff.


+1.  Better to do it before it is ever released and there is no time like the 
present.  Heck, by the time 5 is released, J9 will probably be out and we can 
have this debate all over again!

FWIW, this thread takes me back to the repeated debates on moving from JDK 1.4 
to 1.5.  What a horrible bikeshed that was.  Go read the archives if you like.

-Grant


Grant Ingersoll | @gsingers
http://www.lucidworks.com







ant idea on fresh checkout of 4.7

2014-03-11 Thread Grant Ingersoll
Hi,

I did a fresh checkout of 4.7 from SVN and ran ant idea at the top level, and I 
get [1].  I presume I am missing the CDH Ivy repo somewhere.  Anyone have the 
bits that need to be added handy?



[1]
 :: problems summary ::
[ivy:retrieve]  WARNINGS
[ivy:retrieve]  module not found: 
org.kitesdk#kite-morphlines-saxon;0.11.0
[ivy:retrieve]   local: tried
[ivy:retrieve]
/pathUsers/grantingersoll/.ivy2/local/org.kitesdk/kite-morphlines-saxon/0.11.0/ivys/ivy.xml
[ivy:retrieve]-- artifact 
org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
[ivy:retrieve]
/path/.ivy2/local/org.kitesdk/kite-morphlines-saxon/0.11.0/jars/kite-morphlines-saxon.jar
[ivy:retrieve]   shared: tried
[ivy:retrieve]
/path/.ivy2/shared/org.kitesdk/kite-morphlines-saxon/0.11.0/ivys/ivy.xml
[ivy:retrieve]-- artifact 
org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
[ivy:retrieve]
/path/.ivy2/shared/org.kitesdk/kite-morphlines-saxon/0.11.0/jars/kite-morphlines-saxon.jar
[ivy:retrieve]   public: tried
[ivy:retrieve]
http://repo1.maven.org/maven2/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.pom
[ivy:retrieve]-- artifact 
org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
[ivy:retrieve]
http://repo1.maven.org/maven2/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.jar
[ivy:retrieve]   cloudera: tried
[ivy:retrieve]
https://repository.cloudera.com/artifactory/repo/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.pom
[ivy:retrieve]-- artifact 
org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
[ivy:retrieve]
https://repository.cloudera.com/artifactory/repo/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.jar
[ivy:retrieve]   releases.cloudera.com: tried
[ivy:retrieve]
https://repository.cloudera.com/content/repositories/releases/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.pom
[ivy:retrieve]-- artifact 
org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
[ivy:retrieve]
https://repository.cloudera.com/content/repositories/releases/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.jar
[ivy:retrieve]   sonatype-releases: tried
[ivy:retrieve]
http://oss.sonatype.org/content/repositories/releases/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.pom
[ivy:retrieve]-- artifact 
org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
[ivy:retrieve]
http://oss.sonatype.org/content/repositories/releases/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.jar
[ivy:retrieve]   maven.restlet.org: tried
[ivy:retrieve]
http://maven.restlet.org/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.pom
[ivy:retrieve]-- artifact 
org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
[ivy:retrieve]
http://maven.restlet.org/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.jar
[ivy:retrieve]   svnkit-releases: tried
[ivy:retrieve]
http://maven.tmatesoft.com/content/repositories/releases/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.pom
[ivy:retrieve]-- artifact 
org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
[ivy:retrieve]
http://maven.tmatesoft.com/content/repositories/releases/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.jar
[ivy:retrieve]   working-chinese-mirror: tried
[ivy:retrieve]
http://uk.maven.org/maven2/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.pom
[ivy:retrieve]-- artifact 
org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
[ivy:retrieve]
http://uk.maven.org/maven2/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.jar
[ivy:retrieve]  module not found: 
org.kitesdk#kite-morphlines-hadoop-sequencefile;0.11.0
[ivy:retrieve]   local: tried
[ivy:retrieve]
/path/.ivy2/local/org.kitesdk/kite-morphlines-hadoop-sequencefile/0.11.0/ivys/ivy.xml
[ivy:retrieve]-- artifact 
org.kitesdk#kite-morphlines-hadoop-sequencefile;0.11.0!kite-morphlines-hadoop-sequencefile.jar:
[ivy:retrieve]
/path/.ivy2/local/org.kitesdk/kite-morphlines-hadoop-sequencefile/0.11.0/jars/kite-morphlines-hadoop-sequencefile.jar
[ivy:retrieve]   shared: tried
[ivy:retrieve]
/path/.ivy2/shared/org.kitesdk/kite-morphlines-hadoop-sequencefile/0.11.0/ivys/ivy.xml
[ivy:retrieve]-- artifact 
org.kitesdk#kite-morphlines-hadoop-sequencefile;0.11.0!kite-morphlines-hadoop-sequencefile.jar:
[ivy:retrieve]
/path/.ivy2/shared/org.kitesdk/kite-morphlines-hadoop-sequencefile/0.11.0/jars/kite-morphlines-hadoop-sequencefile.jar
[ivy:retrieve]   public: tried
[ivy:retrieve]

[jira] [Commented] (LUCENE-5515) Improve TopDocs#merge for pagination

2014-03-11 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930192#comment-13930192
 ] 

Michael McCandless commented on LUCENE-5515:


+1, thanks Martijn!

 Improve TopDocs#merge for pagination
 

 Key: LUCENE-5515
 URL: https://issues.apache.org/jira/browse/LUCENE-5515
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Martijn van Groningen
Assignee: Martijn van Groningen
Priority: Minor
 Fix For: 4.8

 Attachments: LUCENE-5515.patch, LUCENE-5515.patch


 If TopDocs#merge takes from and size into account, it can be optimized to 
 create a hits ScoreDoc array of length size instead of from+size, which is 
 what happens now.
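
To make the optimization concrete, below is a naive sketch of paged merging that
allocates a result array of length size rather than from+size (illustrative
only; it is not the Lucene implementation and it ignores tie-breaking by shard
index and doc id):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import org.apache.lucene.search.ScoreDoc;

class MergePageSketch {
  // Hedged sketch: when serving the page [from, from+size), the returned array
  // holds at most size entries; the first from merged hits are dropped.
  static ScoreDoc[] mergePage(List<ScoreDoc[]> shardHits, int from, int size) {
    List<ScoreDoc> all = new ArrayList<ScoreDoc>();
    for (ScoreDoc[] hits : shardHits) {
      all.addAll(Arrays.asList(hits));
    }
    Collections.sort(all, new Comparator<ScoreDoc>() {
      @Override
      public int compare(ScoreDoc a, ScoreDoc b) {
        return Float.compare(b.score, a.score); // descending by score
      }
    });
    int start = Math.min(from, all.size());
    int end = Math.min(from + size, all.size());
    return all.subList(start, end).toArray(new ScoreDoc[end - start]);
  }
}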



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: svn commit: r1576096 - in /lucene/dev/branches/lucene5487/lucene: core/src/java/org/apache/lucene/search/ core/src/test/org/apache/lucene/search/ facet/src/java/org/apache/lucene/facet/ facet/src/

2014-03-11 Thread Michael McCandless
OK, I'll try to consolidate the N FakeScorers that happen in one package.

I REALLY don't want to make these things public.

They are popping up like mushrooms!

Mike McCandless

http://blog.mikemccandless.com


On Mon, Mar 10, 2014 at 6:09 PM, Uwe Schindler u...@thetaphi.de wrote:
 Hi Mike,

 Would it not be better to have only one FakeScorer implementation in some 
 pkg-private class? This is too much code duplication for me!

 Uwe

 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de


 -Original Message-
 From: mikemcc...@apache.org [mailto:mikemcc...@apache.org]
 Sent: Monday, March 10, 2014 10:42 PM
 To: comm...@lucene.apache.org
 Subject: svn commit: r1576096 - in
 /lucene/dev/branches/lucene5487/lucene:
 core/src/java/org/apache/lucene/search/
 core/src/test/org/apache/lucene/search/
 facet/src/java/org/apache/lucene/facet/
 facet/src/java/org/apache/lucene/facet/taxonomy/ grouping/src/java...

 Author: mikemccand
 Date: Mon Mar 10 21:41:44 2014
 New Revision: 1576096

 URL: http://svn.apache.org/r1576096
 Log:
 LUCENE-5487: throw OUE from FakeScorer.getWeight

 Modified:

 lucene/dev/branches/lucene5487/lucene/core/src/java/org/apache/lucene/
 search/BooleanScorer.java

 lucene/dev/branches/lucene5487/lucene/core/src/java/org/apache/lucene/
 search/IndexSearcher.java

 lucene/dev/branches/lucene5487/lucene/core/src/test/org/apache/lucene/
 search/TestBooleanScorer.java

 lucene/dev/branches/lucene5487/lucene/facet/src/java/org/apache/lucene
 /facet/DrillSidewaysScorer.java

 lucene/dev/branches/lucene5487/lucene/facet/src/java/org/apache/lucene
 /facet/taxonomy/TaxonomyFacetSumValueSource.java

 lucene/dev/branches/lucene5487/lucene/grouping/src/java/org/apache/luc
 ene/search/grouping/BlockGroupingCollector.java

 lucene/dev/branches/lucene5487/lucene/join/src/java/org/apache/lucene/
 search/join/TermsIncludingScoreQuery.java

 lucene/dev/branches/lucene5487/lucene/join/src/java/org/apache/lucene/
 search/join/ToParentBlockJoinCollector.java

 Modified:
 lucene/dev/branches/lucene5487/lucene/core/src/java/org/apache/lucene/
 search/BooleanScorer.java
 URL:
 http://svn.apache.org/viewvc/lucene/dev/branches/lucene5487/lucene/core/src/java/org/apache/lucene/search/BooleanScorer.java?rev=1576096&r1=1576095&r2=1576096&view=diff
 ==
 
 --- lucene/dev/branches/lucene5487/lucene/core/src/java/org/apache/lucene/search/BooleanScorer.java (original)
 +++ lucene/dev/branches/lucene5487/lucene/core/src/java/org/apache/lucene/search/BooleanScorer.java Mon Mar 10 21:41:44 2014
 @@ -153,6 +153,11 @@ final class BooleanScorer extends BulkSc
  public long cost() {
throw new UnsupportedOperationException();
  }
 +
 +@Override
 +public Weight getWeight() {
 +  throw new UnsupportedOperationException();
 +}
}

static final class Bucket {

 Modified:
 lucene/dev/branches/lucene5487/lucene/core/src/java/org/apache/lucene/
 search/IndexSearcher.java
 URL:
 http://svn.apache.org/viewvc/lucene/dev/branches/lucene5487/lucene/core/src/java/org/apache/lucene/search/IndexSearcher.java?rev=1576096&r1=1576095&r2=1576096&view=diff
 ==
 
 --- lucene/dev/branches/lucene5487/lucene/core/src/java/org/apache/lucene/search/IndexSearcher.java (original)
 +++ lucene/dev/branches/lucene5487/lucene/core/src/java/org/apache/lucene/search/IndexSearcher.java Mon Mar 10 21:41:44 2014
 @@ -805,6 +805,11 @@ public class IndexSearcher {
public long cost() {
  return 1;
}
 +
 +  @Override
 +  public Weight getWeight() {
 +throw new UnsupportedOperationException();
 +  }
  }

  private final FakeScorer fakeScorer = new FakeScorer();

 Modified:
 lucene/dev/branches/lucene5487/lucene/core/src/test/org/apache/lucene/
 search/TestBooleanScorer.java
 URL:
 http://svn.apache.org/viewvc/lucene/dev/branches/lucene5487/lucene/core/src/test/org/apache/lucene/search/TestBooleanScorer.java?rev=1576096&r1=1576095&r2=1576096&view=diff
 ==
 
 --- lucene/dev/branches/lucene5487/lucene/core/src/test/org/apache/lucene/search/TestBooleanScorer.java (original)
 +++ lucene/dev/branches/lucene5487/lucene/core/src/test/org/apache/lucene/search/TestBooleanScorer.java Mon Mar 10 21:41:44 2014
 @@ -240,6 +240,11 @@ public class TestBooleanScorer extends L
  public long cost() {
throw new UnsupportedOperationException();
  }
 +
 +@Override
 +public Weight getWeight() {
 +  throw new UnsupportedOperationException();
 +}
}

/** Throws UOE if Weight.scorer is called */

 Modified:
 lucene/dev/branches/lucene5487/lucene/facet/src/java/org/apache/lucene
 /facet/DrillSidewaysScorer.java
 URL:
 

[jira] [Updated] (SOLR-5827) Add boosting functionality to MoreLikeThisHandler

2014-03-11 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili updated SOLR-5827:
--

Attachment: SOLR-5827.1.patch

Attached a new patch (based on trunk) which slightly improves the first one 
from Upayavira by moving the parameter to MoreLikeThisParams, using this 
feature in MoreLikeThisHandlerTest, and introducing a new constructor instead 
of changing the current one (which gets deprecated).

 Add boosting functionality to MoreLikeThisHandler
 -

 Key: SOLR-5827
 URL: https://issues.apache.org/jira/browse/SOLR-5827
 Project: Solr
  Issue Type: Improvement
  Components: MoreLikeThis
Reporter: Upayavira
Assignee: Tommaso Teofili
 Fix For: 4.8

 Attachments: SOLR-5827.1.patch, SOLR-5827.patch, SOLR-5827.patch


 The MoreLikeThisHandler facilitates the creation of a very simple yet 
 powerful recommendation engine. 
 It is possible to constrain the result set using filter queries. However, it 
 isn't possible to influence the scoring using function queries. Adding 
 function query boosting would allow for including such things as recency in 
 the relevancy calculations.
 Unfortunately, the boost= parameter is already in use, meaning we cannot 
 replicate the edismax boost/bf for additive/multiplicative boostings.
 My patch only touches the MoreLikeThisHandler, so the only really contentious 
 thing is to decide the parameters to configure it.
 I have a prototype working, and will upload a patch shortly. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5516) Forward information that trigger a merge to MergeScheduler

2014-03-11 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930201#comment-13930201
 ] 

Michael McCandless commented on LUCENE-5516:


+1

Maybe just fix the naming so it's either foundNewMerges or newMergesFound?  I 
think I see both in IndexWriter.java.

 Forward information that trigger a merge to MergeScheduler
 --

 Key: LUCENE-5516
 URL: https://issues.apache.org/jira/browse/LUCENE-5516
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Affects Versions: 4.7
Reporter: Simon Willnauer
Assignee: Simon Willnauer
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5516.patch


 Today we pass information about the merge trigger to the merge policy. Yet, 
 no matter if the MP finds a merge or not, we call the MergeScheduler, which runs 
 & blocks even if we didn't find a merge. In some cases we don't even want 
 this to happen, but inside the MergeScheduler we have no way to opt out 
 since we don't know what triggered the merge. We should forward the info we 
 have to the MergeScheduler as well.
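
A sketch of the kind of API change being described (an assumption for
illustration, not the attached patch):

import java.io.IOException;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.MergeTrigger;

// Hedged sketch: forward the trigger and whether the policy actually found
// merges, so a scheduler can opt out early instead of blocking when there is
// nothing to do.
interface TriggerAwareMergeScheduler {
  void merge(IndexWriter writer, MergeTrigger trigger, boolean newMergesFound) throws IOException;
}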



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5837) Add missing equals implementation for SolrDocument, SolrInputDocument and SolrInputField.

2014-03-11 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930203#comment-13930203
 ] 

Noble Paul commented on SOLR-5837:
--

The equals() and hashCode() will fail only when you are streaming a document. 
Otherwise it is perfectly OK to use them.

 Add missing equals implementation for SolrDocument, SolrInputDocument and 
 SolrInputField.
 -

 Key: SOLR-5837
 URL: https://issues.apache.org/jira/browse/SOLR-5837
 Project: Solr
  Issue Type: Improvement
Reporter: Varun Thacker
Assignee: Mark Miller
 Fix For: 4.8, 5.0

 Attachments: SOLR-5837.patch, SOLR-5837.patch


 While working on SOLR-5265 I tried comparing objects of SolrDocument, 
 SolrInputDocument and SolrInputField. These classes did not override the 
 equals implementation. 
 This issue will add equals and hashCode overrides to the 3 classes.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5487) Can we separate top scorer from sub scorer?

2014-03-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930207#comment-13930207
 ] 

ASF subversion and git services commented on LUCENE-5487:
-

Commit 1576274 from [~mikemccand] in branch 'dev/branches/lucene5487'
[ https://svn.apache.org/r1576274 ]

LUCENE-5487: consolidate the FakeScorers within one package

 Can we separate top scorer from sub scorer?
 ---

 Key: LUCENE-5487
 URL: https://issues.apache.org/jira/browse/LUCENE-5487
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5487.patch, LUCENE-5487.patch, LUCENE-5487.patch


 This is just an exploratory patch ... still many nocommits, but I
 think it may be promising.
 I find the two booleans we pass to Weight.scorer confusing, because
 they really only apply to whoever will call score(Collector) (just
 IndexSearcher and BooleanScorer).
 The params are pointless for the vast majority of scorers, because
 very, very few query scorers really need to change how top-scoring is
 done, and those scorers can *only* score top-level (throw UOE 
 from nextDoc/advance).  It seems like these two types of scorers
 should be separately typed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5487) Can we separate top scorer from sub scorer?

2014-03-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930206#comment-13930206
 ] 

ASF subversion and git services commented on LUCENE-5487:
-

Commit 1576273 from [~mikemccand] in branch 'dev/branches/lucene5487'
[ https://svn.apache.org/r1576273 ]

LUCENE-5487: consolidate the FakeScorers within one package

 Can we separate top scorer from sub scorer?
 ---

 Key: LUCENE-5487
 URL: https://issues.apache.org/jira/browse/LUCENE-5487
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5487.patch, LUCENE-5487.patch, LUCENE-5487.patch


 This is just an exploratory patch ... still many nocommits, but I
 think it may be promising.
 I find the two booleans we pass to Weight.scorer confusing, because
 they really only apply to whoever will call score(Collector) (just
 IndexSearcher and BooleanScorer).
 The params are pointless for the vast majority of scorers, because
 very, very few query scorers really need to change how top-scoring is
 done, and those scorers can *only* score top-level (throw UOE 
 from nextDoc/advance).  It seems like these two types of scorers
 should be separately typed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Morphlines JAR deps?

2014-03-11 Thread Dawid Weiss
Uhm, I get this on trunk:

::
::  UNRESOLVED DEPENDENCIES ::
::
:: org.kitesdk#kite-morphlines-saxon;0.11.0: not found
:: org.kitesdk#kite-morphlines-hadoop-sequencefile;0.11.0: not found
::

Help, anybody? :)

D.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Morphlines JAR deps?

2014-03-11 Thread Grant Ingersoll
It's there for 4.7 as well.

On Mar 11, 2014, at 6:57 AM, Dawid Weiss dawid.we...@gmail.com wrote:

 Uhm, I get this on trunk:
 
 ::
 ::  UNRESOLVED DEPENDENCIES ::
 ::
 :: org.kitesdk#kite-morphlines-saxon;0.11.0: not found
 :: org.kitesdk#kite-morphlines-hadoop-sequencefile;0.11.0: not found
 ::
 
 Help, anybody? :)
 
 D.
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org
 


Grant Ingersoll | @gsingers
http://www.lucidworks.com







[jira] [Commented] (LUCENE-5236) Use broadword bit selection in EliasFanoDecoder

2014-03-11 Thread Sebastiano Vigna (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930217#comment-13930217
 ] 

Sebastiano Vigna commented on LUCENE-5236:
--

Sorry guys, just happened to read this thread. My 2€¢:

- Rank9/Select9 are full ranking structures, not rank/select-in-a-word 
algorithms.
- I'm using Long.bitCount(), Long.numberOfTrailingZeros() etc. everywhere 
since Jan 2013. Intrinsification is a bet, but it pays, and using them will 
convince IBM and others to intrinsify. If only Oracle would add 
Long.clearLowestBitSet() ...
- The alternative algorithm for sideways addition (a.k.a. popcount) should be 
called broadword or Wilkes–Wheeler–Gill.
- Selection gets better and better every day. 
http://vigna.di.unimi.it/select.php tries to keep track of the improvement. The 
current code in it.unimi.dsi.bits.Fast is the best I'm aware of.

Ciao!
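
For readers who want a concrete starting point, here is a minimal in-word select
built from the JDK methods mentioned above (illustrative only, not the
EliasFanoDecoder code):

class SelectSketch {
  // Hedged sketch: return the index of the (rank+1)-th set bit of x (rank is
  // 0-based), or 64 if x has no such bit: clear the lowest set bit rank times,
  // then take the position of the remaining lowest set bit.
  static int selectBit(long x, int rank) {
    for (int i = 0; i < rank; i++) {
      x &= x - 1; // clear the lowest set bit
    }
    return x == 0 ? 64 : Long.numberOfTrailingZeros(x);
  }
}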

 Use broadword bit selection in EliasFanoDecoder
 ---

 Key: LUCENE-5236
 URL: https://issues.apache.org/jira/browse/LUCENE-5236
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Paul Elschot
Assignee: Adrien Grand
Priority: Minor
 Fix For: 4.6

 Attachments: LUCENE-5236.patch, LUCENE-5236.patch, LUCENE-5236.patch, 
 LUCENE-5236.patch, TestDocIdSetBenchmark.java


 Try and speed up decoding



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Solr4.7 DataImport 500 Error please help

2014-03-11 Thread Erick Erickson
Look at your Solr log, you should have
something more worthwhile there. I suspect
you have a typo in, say, your solrconfig.xml
file or schema.xml.

This will probably be 'catalina.out' on Tomcat,
though I'm guessing there because you've told
us nothing about how you got that error. Please
review:

http://wiki.apache.org/solr/UsingMailingLists

Best,
Erick

On Tue, Mar 11, 2014 at 5:34 AM, steben 513441...@qq.com wrote:
 HTTP Status 500 - {msg=SolrCore 'collection1' is not available due to init
 failure: severeErrors,trace=org.apache.solr.common.SolrException: SolrCore
 'collection1' is not available due to init failure: severeErrors at
 org.apache.solr.core.CoreContainer.getCore(CoreContainer.java:827) at
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:317)
 at
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:217)
 at
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
 at
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
 at
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:225)
 at
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:169)
 at
 org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168)
 at
 org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98)
 at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:927)
 at
 org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
 at
 org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
 at
 org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:999)
 at
 org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:565)
 at
 org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:307)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at
 java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at
 java.lang.Thread.run(Unknown Source) Caused by:
 org.apache.solr.common.SolrException: severeErrors at
 org.apache.solr.core.SolrCore.init(SolrCore.java:844) at
 org.apache.solr.core.SolrCore.init(SolrCore.java:630) at
 org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:562)
 at org.apache.solr.core.CoreContainer.create(CoreContainer.java:597) at
 org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:258) at
 org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:250) at
 java.util.concurrent.FutureTask.run(Unknown Source) at
 java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) at
 java.util.concurrent.FutureTask.run(Unknown Source) ... 3 more Caused by:
 java.lang.NoSuchFieldError: severeErrors at
 org.apache.solr.handler.dataimport.DataImportHandler.inform(DataImportHandler.java:121)
 at
 org.apache.solr.core.SolrResourceLoader.inform(SolrResourceLoader.java:631)
 at org.apache.solr.core.SolrCore.init(SolrCore.java:835) ... 11 more
 ,code=500}

 type Status report

 message {msg=SolrCore 'collection1' is not available due to init failure:
 severeErrors,trace=org.apache.solr.common.SolrException: SolrCore
 'collection1' is not available due to init failure: severeErrors at
 org.apache.solr.core.CoreContainer.getCore(CoreContainer.java:827) at
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:317)
 at
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:217)
 at
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
 at
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
 at
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:225)
 at
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:169)
 at
 org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168)
 at
 org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98)
 at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:927)
 at
 org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
 at
 org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
 at
 org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:999)
 at
 org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:565)
 at
 org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:307)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at
 java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at
 java.lang.Thread.run(Unknown Source) Caused by:
 org.apache.solr.common.SolrException: severeErrors at
 org.apache.solr.core.SolrCore.init(SolrCore.java:844) at
 

[jira] [Commented] (SOLR-5501) Ability to work with cold replicas

2014-03-11 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930247#comment-13930247
 ] 

Noble Paul commented on SOLR-5501:
--


The question is are you going to use a node exclusively for cold replicas? That 
means it will never be used for queries or as leader ?

or do you want to do it on a per replica basis 

say I can use an ADDREPLICA (SOLR-5130) call with an extra parameter (say 
hotreplica=false)

or I should be able to use a MODIFYCOLLECTION ( SOLR-5132 ) to make an existing 
replica to hotreplica=false 


 Ability to work with cold replicas
 --

 Key: SOLR-5501
 URL: https://issues.apache.org/jira/browse/SOLR-5501
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.5.1
Reporter: Manuel Lenormand
  Labels: performance
 Fix For: 4.7


 Following this conversation from the mailing list:
 http://lucene.472066.n3.nabble.com/Proposal-for-new-feature-cold-replicas-brainstorming-td4097501.html
 Should give the ability to use replicas mainly as backup cores and not for 
 handling high qps rate. 
 This way you would avoid using the caching resources (Solr and OS) that are used 
 when routing a query to a replica. 
 With many replicas it is harder to hit the Solr cache (the same query may hit 
 another replica), and having many replicas on the same instance would cause 
 needless competition for the OS memory used for caching.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5205) [PATCH] SpanQueryParser with recursion, analysis and syntax very similar to classic QueryParser

2014-03-11 Thread Nikhil Chhaochharia (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930253#comment-13930253
 ] 

Nikhil Chhaochharia commented on LUCENE-5205:
-

Looks good - we will be testing this over the next few days and will report 
back if we find any issues.

With StopFilter removed, the index size increased by 20% and there was no 
appreciable increase in the indexing time.
With StopFilter replaced by a SynonymFilter (all stopwords as synonyms), the 
index size almost doubled and the indexing time more than tripled. We will 
probably not be going forward with this option. (I had mistakenly mentioned the 
stats for an index with the StopFilter removed in my earlier comment)

 [PATCH] SpanQueryParser with recursion, analysis and syntax very similar to 
 classic QueryParser
 ---

 Key: LUCENE-5205
 URL: https://issues.apache.org/jira/browse/LUCENE-5205
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/queryparser
Reporter: Tim Allison
  Labels: patch
 Fix For: 4.7

 Attachments: LUCENE-5205-cleanup-tests.patch, 
 LUCENE-5205-date-pkg-prvt.patch, LUCENE-5205.patch.gz, LUCENE-5205.patch.gz, 
 LUCENE-5205_dateTestReInitPkgPrvt.patch, 
 LUCENE-5205_improve_stop_word_handling.patch, 
 LUCENE-5205_smallTestMods.patch, LUCENE_5205.patch, 
 SpanQueryParser_v1.patch.gz, patch.txt


 This parser extends QueryParserBase and includes functionality from:
 * Classic QueryParser: most of its syntax
 * SurroundQueryParser: recursive parsing for near and not clauses.
 * ComplexPhraseQueryParser: can handle near queries that include multiterms 
 (wildcard, fuzzy, regex, prefix),
 * AnalyzingQueryParser: has an option to analyze multiterms.
 At a high level, there's a first pass BooleanQuery/field parser and then a 
 span query parser handles all terminal nodes and phrases.
 Same as classic syntax:
 * term: test 
 * fuzzy: roam~0.8, roam~2
 * wildcard: te?t, test*, t*st
 * regex: /\[mb\]oat/
 * phrase: jakarta apache
 * phrase with slop: jakarta apache~3
 * default or clause: jakarta apache
 * grouping or clause: (jakarta apache)
 * boolean and +/-: (lucene OR apache) NOT jakarta; +lucene +apache -jakarta
 * multiple fields: title:lucene author:hatcher
  
 Main additions in SpanQueryParser syntax vs. classic syntax:
 * Can require in order for phrases with slop with the \~ operator: 
 jakarta apache\~3
 * Can specify not near: fever bieber!\~3,10 ::
 find fever but not if bieber appears within 3 words before or 10 
 words after it.
 * Fully recursive phrasal queries with \[ and \]; as in: \[\[jakarta 
 apache\]~3 lucene\]\~4 :: 
 find jakarta within 3 words of apache, and that hit has to be within 
 four words before lucene
 * Can also use \[\] for single level phrasal queries instead of  as in: 
 \[jakarta apache\]
 * Can use or grouping clauses in phrasal queries: apache (lucene solr)\~3 
 :: find apache and then either lucene or solr within three words.
 * Can use multiterms in phrasal queries: jakarta\~1 ap*che\~2
 * Did I mention full recursion: \[\[jakarta\~1 ap*che\]\~2 (solr~ 
 /l\[ou\]\+\[cs\]\[en\]\+/)]\~10 :: Find something like jakarta within two 
 words of ap*che and that hit has to be within ten words of something like 
 solr or that lucene regex.
 * Can require at least x number of hits at boolean level: apache AND (lucene 
 solr tika)~2
 * Can use negative only query: -jakarta :: Find all docs that don't contain 
 jakarta
 Can use an edit distance > 2 for fuzzy queries via SlowFuzzyQuery (beware of 
 potential performance issues!).
 Trivial additions:
 * Can specify prefix length in fuzzy queries: jakarta~1,2 (edit distance =1, 
 prefix =2)
 * Can specify Optimal String Alignment (OSA) vs Levenshtein for distance 
 =2: jakarta~1 (OSA) vs jakarta~1 (Levenshtein)
 This parser can be very useful for concordance tasks (see also LUCENE-5317 
 and LUCENE-5318) and for analytical search.  
 Until LUCENE-2878 is closed, this might have a use for fans of SpanQuery.
 Most of the documentation is in the javadoc for SpanQueryParser.
 Any and all feedback is welcome.  Thank you.
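
To make the syntax above concrete, a purely hypothetical usage sketch follows; the 
constructor and the parse() call are assumptions based on the QueryParserBase 
conventions mentioned above, not the exact API of the attached patch.

{code}
// Hypothetical sketch only: the constructor signature is an assumption.
Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_47);
SpanQueryParser parser = new SpanQueryParser("body", analyzer);

// Recursive phrasal query from the examples above: jakarta within 3 words of
// apache, and that hit within 4 words of lucene.
Query q = parser.parse("[[jakarta apache]~3 lucene]~4");
{code}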



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Morphlines JAR deps?

2014-03-11 Thread Dawid Weiss
I filed this:
https://issues.apache.org/jira/browse/SOLR-5848

If you clear your local maven cache it will start failing for you
too... seems like one of the repositories
we're referencing is mutable (which is a bad thing (tm) in the maven world).

Dawid



On Tue, Mar 11, 2014 at 12:04 PM, Grant Ingersoll gsing...@apache.org wrote:
 It's there for 4.7 as well.

 On Mar 11, 2014, at 6:57 AM, Dawid Weiss dawid.we...@gmail.com wrote:

 Uhm, I get this on trunk:

 ::
 ::  UNRESOLVED DEPENDENCIES ::
 ::
 :: org.kitesdk#kite-morphlines-saxon;0.11.0: not found
 :: org.kitesdk#kite-morphlines-hadoop-sequencefile;0.11.0: not found
 ::

 Help, anybody? :)

 D.

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org


 
 Grant Ingersoll | @gsingers
 http://www.lucidworks.com






-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-5848) Morphlines is not resolving

2014-03-11 Thread Dawid Weiss (JIRA)
Dawid Weiss created SOLR-5848:
-

 Summary: Morphlines is not resolving
 Key: SOLR-5848
 URL: https://issues.apache.org/jira/browse/SOLR-5848
 Project: Solr
  Issue Type: Bug
Reporter: Dawid Weiss
Assignee: Mark Miller
Priority: Critical


This version of morphlines does not resolve for me and Grant.
{code}
::
::  UNRESOLVED DEPENDENCIES ::
::
:: org.kitesdk#kite-morphlines-saxon;0.11.0: not found
:: org.kitesdk#kite-morphlines-hadoop-sequencefile;0.11.0: not found
{code}

Has this been deleted from Cloudera's repositories or something? This would be 
pretty bad -- maven repos should be immutable...



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5475) add required attribute bugUrl to @BadApple

2014-03-11 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930263#comment-13930263
 ] 

Dawid Weiss commented on LUCENE-5475:
-

{code}
ant test-core -Dtests.class=*.TestGroupFiltering
{code}

results in:

{code}
   [junit4] JUnit4 says hi! Master seed: 2D863EA18DC5B931
   [junit4] Your default console's encoding may not display certain unicode 
glyphs: windows-1252
   [junit4] Executing 1 suite with 1 JVM.
   [junit4]
   [junit4] Started J0 PID(1364@dweiss-desktop).
   [junit4] Suite: org.apache.lucene.util.junitcompat.TestGroupFiltering
   [junit4] IGNOR/A 0.04s | TestGroupFiltering.testBar
   [junit4] Assumption #1: 'bar' test group is disabled (@Bar())
   [junit4] IGNOR/A 0.00s | TestGroupFiltering.testJira
   [junit4] Assumption #1: 'jira' test group is disabled (@Jira(bug=JIRA 
bug reference))
   [junit4] IGNOR/A 0.00s | TestGroupFiltering.testFoo
   [junit4] Assumption #1: 'foo' test group is disabled (@Foo())
   [junit4] IGNOR/A 0.00s | TestGroupFiltering.testFooBar
   [junit4] Assumption #1: 'foo' test group is disabled (@Foo())
   [junit4] Completed in 0.11s, 4 tests, 4 skipped
{code}

 add required attribute bugUrl to @BadApple
 --

 Key: LUCENE-5475
 URL: https://issues.apache.org/jira/browse/LUCENE-5475
 Project: Lucene - Core
  Issue Type: Bug
  Components: general/test
Reporter: Robert Muir
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5475.patch


 This makes it impossible to tag a test as a badapple without a pointer to a 
 JIRA issue.
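
For illustration, a usage sketch of what a tagged test looks like once the 
attribute is required; the JIRA URL below is just a placeholder.

{code}
// Sketch: with a required bugUrl, a @BadApple test must point at its JIRA issue.
@BadApple(bugUrl = "https://issues.apache.org/jira/browse/LUCENE-XXXX") // placeholder URL
public void testOccasionallyFailingThing() throws Exception {
  // ...
}
{code}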



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request: Removal of Scorer.weight

2014-03-11 Thread mkhludnev
Github user mkhludnev commented on the pull request:

https://github.com/apache/lucene-solr/pull/40#issuecomment-37287237
  
Terry, 
So far, the cleanup in the Boolean* classes seems good, but I have to mention that 
a bunch of my custom queries need to distinguish scorers by obtaining the Query 
through the Weight. I feel like Scorer.getWeight is a useful thing in general.
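
To illustrate the pattern I mean, a minimal sketch against the 4.x API (not any 
particular one of my queries) is below:

{code}
// Sketch of the usage pattern in question (Lucene 4.x Collector/Scorer API):
// a collector that looks at which Query produced the scorer it was handed.
class QueryAwareCollector extends Collector {
  private Query currentQuery;

  @Override
  public void setScorer(Scorer scorer) {
    // This is the chain that disappears if Scorer.weight goes away.
    currentQuery = scorer.getWeight().getQuery();
  }

  @Override
  public void collect(int doc) {
    // branch on currentQuery (e.g. instanceof checks) for per-query logic
  }

  @Override
  public void setNextReader(AtomicReaderContext context) { }

  @Override
  public boolean acceptsDocsOutOfOrder() { return true; }
}
{code}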
Thanks


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5844) Backward Compatibility Has Broken For deleteById() at Solrj

2014-03-11 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930266#comment-13930266
 ] 

Noble Paul commented on SOLR-5844:
--

It would also help if you paste the error that you see. 

 Backward Compatibility Has Broken For deleteById() at Solrj
 ---

 Key: SOLR-5844
 URL: https://issues.apache.org/jira/browse/SOLR-5844
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.6, 4.6.1, 4.7
Reporter: Furkan KAMACI
Assignee: Noble Paul
 Fix For: 4.8


 I have started up a SolrCloud of 4.5.1 
 * When I use deleteById method of CloudSolrServer via 4.5.1 Solrj it works.
 * When I use deleteById method of CloudSolrServer via 4.6.0 Solrj it does not 
 work and does not throw error.
 * When I use deleteById method of CloudSolrServer via 4.6.1 Solrj it does not 
 work and does not throw error.
 * When I use deleteById method of CloudSolrServer via 4.7.0 Solrj it does not 
 work and does not throw error.
 So it seems that backward compatibility has been broken since 4.6.0.
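
For reference, the call pattern in question looks roughly like the sketch below; 
the ZooKeeper address, collection name and document id are placeholders.

{code}
// Sketch of the SolrJ 4.x delete-by-id call being discussed (placeholder values).
CloudSolrServer server = new CloudSolrServer("zkhost:2181");
server.setDefaultCollection("collection1");
server.deleteById("doc-42");
server.commit();
server.shutdown();
{code}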



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5475) add required attribute bugUrl to @BadApple

2014-03-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930270#comment-13930270
 ] 

ASF subversion and git services commented on LUCENE-5475:
-

Commit 1576292 from [~dawidweiss] in branch 'dev/trunk'
[ https://svn.apache.org/r1576292 ]

LUCENE-5475: upgraded randomized testing to 2.1.1. This will print full 
annotations on assumption-ignored tests. It also includes more fancy test 
filtering.

 add required attribute bugUrl to @BadApple
 --

 Key: LUCENE-5475
 URL: https://issues.apache.org/jira/browse/LUCENE-5475
 Project: Lucene - Core
  Issue Type: Bug
  Components: general/test
Reporter: Robert Muir
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5475.patch


 This makes it impossible to tag a test as a badapple without a pointer to a 
 JIRA issue.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5475) add required attribute bugUrl to @BadApple

2014-03-11 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930273#comment-13930273
 ] 

Dawid Weiss commented on LUCENE-5475:
-

RR 2.1.1 also includes boolean conditions on test groups. It's kind of fun, try 
it:
{code}
ant test-core -Dtests.filter=@foo and @bar
ant test-core -Dtests.filter=@foo and not @bar
ant test-core -Dtests.filter=@foo or @bar
ant test-core -Dtests.filter=default and not(@nightly or @slow)
{code}



 add required attribute bugUrl to @BadApple
 --

 Key: LUCENE-5475
 URL: https://issues.apache.org/jira/browse/LUCENE-5475
 Project: Lucene - Core
  Issue Type: Bug
  Components: general/test
Reporter: Robert Muir
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5475.patch


 This makes it impossible to tag a test as a badapple without a pointer to a 
 JIRA issue.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Advanced boolean test group filtering.

2014-03-11 Thread Dawid Weiss
Not that I want to make the test framework any more complex than it
is, but since this functionality is already there I will take a moment
to explain it.

Every annotation marked with @TestGroup has a name (lowercased name
of the class by default) and a system property that controls its
enabled/disabled state. By default the system property becomes the
name of the annotation prefixed with tests., so for:

  @TestGroup(enabled = false)
  public @interface AwaitsFix {
    /** Point to JIRA entry. */
    public String bugUrl();
  }

the name is 'awaitsfix' and the system property would be tests.awaitsfix.

Up until now you could enable or disable tests annotated with each
test group by setting the corresponding property to on or off. A test
was executed if all of its test groups were enabled or
assumption-ignored if any of them was disabled.
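
For example, a (made-up) test carrying two groups:

  @AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/LUCENE-XXXX")
  @Nightly
  public void testSomethingSlowAndBroken() throws Exception {
    // ...
  }

would run only if both tests.awaitsfix and tests.nightly resolved to enabled
(the bug URL above is just a placeholder).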

From now on (trunk) you can also pick all tests annotated with a
boolean combination of test groups:

ant test-core -Dtests.filter=@foo and @bar
ant test-core -Dtests.filter=@foo and not @bar
ant test-core -Dtests.filter=@foo or @bar

Such a filter ignores system properties for each group; if you still
want to take the default state of each test into account (resulting
from resolving system properties) you can use the 'default' keyword.
For example:

ant test-core -Dtests.filter=default and not(@nightly or @slow)

will run all the tests that would run normally (taking into account
your own exclusion rules, for example tests.slow=false) but also
exclude any @Nightly or @Slow tests.

You get the idea :)

Dawid

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request: Removal of Scorer.weight

2014-03-11 Thread rmuir
Github user rmuir commented on the pull request:

https://github.com/apache/lucene-solr/pull/40#issuecomment-37290855
  
No matter what happens here, we should at least do the coord cleanup.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-5849) write.lock is not removed by LogReplayer

2014-03-11 Thread pavan patel (JIRA)
pavan patel created SOLR-5849:
-

 Summary: write.lock is not removed by LogReplayer
 Key: SOLR-5849
 URL: https://issues.apache.org/jira/browse/SOLR-5849
 Project: Solr
  Issue Type: Bug
 Environment: Windows 7, Tomcat 7.0.52, Solr 4.3.0, jdk1.7.0_51 
Reporter: pavan patel


In my application I am using EmbeddedSolrServer inside Tomcat. I have the 
configuration below for my core:

<lockType>simple</lockType>
<unlockOnStartup>true</unlockOnStartup>

<updateLog>
  <str name="dir">${solr.ulog.dir:}</str>
</updateLog>

<autoCommit>
  <maxTime>15000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>

<autoSoftCommit>
  <maxTime>1000</maxTime>
</autoSoftCommit>

The issue I am facing is that when I restart Tomcat and there is uncommitted 
data in the tlog, I get the exception below:

org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: 
SimpleFSLock@F:\SHASTAMR1\Install\solr\conf\alerts\data\index\write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:84)
at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:644)
at 
org.apache.solr.update.SolrIndexWriter.init(SolrIndexWriter.java:77)
at 
org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:64)
at 
org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:197)
at 
org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:110)
at 
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:148)
at 
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:504)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:640)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:396)
at 
org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:100)
at 
org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:246)
at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:173)
at 
org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:92)
at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1816)
at 
org.apache.solr.client.solrj.embedded.EmbeddedSolrServer.request(EmbeddedSolrServer.java:150)
at 
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:117)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:68)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:54)

After the restart I am not able to index anything into Solr. I debugged the code 
and found that LogReplayer creates the SolrIndexWriter on the core during 
startup, and that creates the write.lock file. Once all the leftover tlogs are 
indexed, the write.lock remains there; it is not deleted. So when my 
application tries to add a document, the SolrIndexWriter is not able to obtain 
the lock because write.lock already exists.

This seems to be a bug in Solr 4.3.0, because I believe the SolrIndexWriter 
created during log replay is not closed, and that leaves write.lock behind in 
the data directory.
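
For context, write.lock is only released when the IndexWriter that acquired it is 
closed; a generic Lucene 4.x sketch of that open/close discipline follows (this is 
not the Solr code itself, and the path is a placeholder):

{code}
// Generic Lucene 4.x sketch, not Solr's code: the write.lock created when the
// writer is opened is only removed when the writer is closed, so a writer opened
// for replay must be closed (or reused) afterwards.
Directory dir = FSDirectory.open(new File("/placeholder/data/index"));
IndexWriter writer = new IndexWriter(dir,
    new IndexWriterConfig(Version.LUCENE_43, new StandardAnalyzer(Version.LUCENE_43)));
try {
  // ... replay tlog entries into the writer ...
} finally {
  writer.close(); // releases write.lock
}
{code}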





--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5849) write.lock is not removed by LogReplayer

2014-03-11 Thread pavan patel (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

pavan patel updated SOLR-5849:
--

Description: 
In my application I am using EmbeddedSolrServer inside Tomcat. I have the 
configuration below for my core:

<lockType>simple</lockType>
<unlockOnStartup>true</unlockOnStartup>

<updateLog>
  <str name="dir">${solr.ulog.dir:}</str>
</updateLog>

<autoCommit>
  <maxTime>15000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>

<autoSoftCommit>
  <maxTime>1000</maxTime>
</autoSoftCommit>

The issue I am facing is that when I restart Tomcat and there is uncommitted 
data in the tlog, I get the exception below:

org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: 
SimpleFSLock@F:\SHASTAMR1\Install\solr\conf\alerts\data\index\write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:84)
at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:644)
at 
org.apache.solr.update.SolrIndexWriter.init(SolrIndexWriter.java:77)
at 
org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:64)
at 
org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:197)
at 
org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:110)
at 
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:148)
at 
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:504)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:640)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:396)
at 
org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:100)
at 
org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:246)
at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:173)
at 
org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:92)
at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1816)
at 
org.apache.solr.client.solrj.embedded.EmbeddedSolrServer.request(EmbeddedSolrServer.java:150)
at 
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:117)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:68)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:54)

After the restart I am not able to index anything into Solr. I debugged the code 
and found that LogReplayer creates the SolrIndexWriter on the core during 
startup, and that creates the write.lock file. Once all the leftover tlogs are 
indexed, the write.lock remains there; it is not deleted. So when my 
application tries to add a document, the SolrIndexWriter is not able to obtain 
the lock because write.lock already exists.

This seems to be a bug in Solr 4.3.0, because I believe the SolrIndexWriter 
created during log replay is not closed, and that leaves write.lock behind in 
the data directory.



  was:
I my application I am using SolrEmbeddedServer inside tomcat. I have below 
configuration for my core:-

<lockType>simple</lockType>
<unlockOnStartup>true</unlockOnStartup>

<updateLog>
  <str name="dir">${solr.ulog.dir:}</str>
</updateLog>

<autoCommit>
  <maxTime>15000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>

<autoSoftCommit>
  <maxTime>1000</maxTime>
</autoSoftCommit>

The issue I am facing is when I restart tocmat and in case there is any 
uncommitted data  in tlog, then I am getting below exception:-

org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: 
SimpleFSLock@F:\SHASTAMR1\Install\solr\conf\alerts\data\index\write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:84)
at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:644)
at 
org.apache.solr.update.SolrIndexWriter.init(SolrIndexWriter.java:77)
at 
org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:64)
at 
org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:197)
at 
org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:110)
at 
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:148)
at 

[jira] [Updated] (SOLR-5849) write.lock is not removed by LogReplayer

2014-03-11 Thread pavan patel (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

pavan patel updated SOLR-5849:
--

Description: 
In my application I am using EmbeddedSolrServer inside Tomcat. I have the 
configuration below for my core:

<lockType>simple</lockType>
<unlockOnStartup>true</unlockOnStartup>

<updateLog>
  <str name="dir">${solr.ulog.dir:}</str>
</updateLog>

<autoCommit>
  <maxTime>15000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>

<autoSoftCommit>
  <maxTime>1000</maxTime>
</autoSoftCommit>

The issue I am facing is that when I restart Tomcat and there is uncommitted 
data in the tlog, I get the exception below:

org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: 
SimpleFSLock@F:\mydir\Install\solr\conf\alerts\data\index\write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:84)
at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:644)
at 
org.apache.solr.update.SolrIndexWriter.init(SolrIndexWriter.java:77)
at 
org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:64)
at 
org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:197)
at 
org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:110)
at 
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:148)
at 
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:504)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:640)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:396)
at 
org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:100)
at 
org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:246)
at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:173)
at 
org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:92)
at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1816)
at 
org.apache.solr.client.solrj.embedded.EmbeddedSolrServer.request(EmbeddedSolrServer.java:150)
at 
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:117)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:68)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:54)

After the restart I am not able to index anything into Solr. I debugged the code 
and found that LogReplayer creates the SolrIndexWriter on the core during 
startup, and that creates the write.lock file. Once all the leftover tlogs are 
indexed, the write.lock remains there; it is not deleted. So when my 
application tries to add a document, the SolrIndexWriter is not able to obtain 
the lock because write.lock already exists.

This seems to be a bug in Solr 4.3.0, because I believe the SolrIndexWriter 
created during log replay is not closed, and that leaves write.lock behind in 
the data directory.



  was:
In my application I am using SolrEmbeddedServer inside tomcat. I have below 
configuration for my core:-

<lockType>simple</lockType>
<unlockOnStartup>true</unlockOnStartup>

<updateLog>
  <str name="dir">${solr.ulog.dir:}</str>
</updateLog>

<autoCommit>
  <maxTime>15000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>

<autoSoftCommit>
  <maxTime>1000</maxTime>
</autoSoftCommit>

The issue I am facing is when I restart tocmat and in case there is any 
uncommitted data  in tlog, then I am getting below exception:-

org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: 
SimpleFSLock@F:\SHASTAMR1\Install\solr\conf\alerts\data\index\write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:84)
at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:644)
at 
org.apache.solr.update.SolrIndexWriter.init(SolrIndexWriter.java:77)
at 
org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:64)
at 
org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:197)
at 
org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:110)
at 
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:148)
at 

[jira] [Updated] (SOLR-5849) write.lock is not removed by LogReplayer

2014-03-11 Thread pavan patel (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

pavan patel updated SOLR-5849:
--

Description: 
In my application I am using EmbeddedSolrServer inside Tomcat. I have the 
configuration below for my core:

<lockType>simple</lockType>
<unlockOnStartup>true</unlockOnStartup>

<updateLog>
  <str name="dir">${solr.ulog.dir:}</str>
</updateLog>

<autoCommit>
  <maxTime>15000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>

<autoSoftCommit>
  <maxTime>1000</maxTime>
</autoSoftCommit>

The issue I am facing is that when I restart Tomcat and there is uncommitted 
data in the tlog, I get the exception below:

org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: 
SimpleFSLock@F:\mydir\Install\solr\conf\alerts\data\index\write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:84)
at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:644)
at 
org.apache.solr.update.SolrIndexWriter.init(SolrIndexWriter.java:77)
at 
org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:64)
at 
org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:197)
at 
org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:110)
at 
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:148)
at 
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:504)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:640)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:396)
at 
org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:100)
at 
org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:246)
at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:173)
at 
org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:92)
at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1816)
at 
org.apache.solr.client.solrj.embedded.EmbeddedSolrServer.request(EmbeddedSolrServer.java:150)
at 
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:117)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:68)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:54)

After the restart I am not able to index anything into Solr. I debugged the code 
and found that LogReplayer creates the SolrIndexWriter on the core during 
startup, and that creates the write.lock file. Once all the leftover tlogs are 
indexed, the write.lock remains there; it is not deleted. So when my 
application tries to add a document, the SolrIndexWriter is not able to obtain 
the lock because write.lock already exists.

This seems to be a bug in Solr 4.3.0, because I believe the SolrIndexWriter 
created during log replay is not closed, and that is causing the write.lock 
leftover in the data directory.



  was:
In my application I am using SolrEmbeddedServer inside tomcat. I have below 
configuration for my core:-

<lockType>simple</lockType>
<unlockOnStartup>true</unlockOnStartup>

<updateLog>
  <str name="dir">${solr.ulog.dir:}</str>
</updateLog>

<autoCommit>
  <maxTime>15000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>

<autoSoftCommit>
  <maxTime>1000</maxTime>
</autoSoftCommit>

The issue I am facing is when I restart tocmat and in case there is any 
uncommitted data  in tlog, then I am getting below exception:-

org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: 
SimpleFSLock@F:\mydir\Install\solr\conf\alerts\data\index\write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:84)
at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:644)
at 
org.apache.solr.update.SolrIndexWriter.init(SolrIndexWriter.java:77)
at 
org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:64)
at 
org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:197)
at 
org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:110)
at 
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:148)
at 

[jira] [Commented] (LUCENE-5475) add required attribute bugUrl to @BadApple

2014-03-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930312#comment-13930312
 ] 

ASF subversion and git services commented on LUCENE-5475:
-

Commit 1576327 from [~dawidweiss] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1576327 ]

LUCENE-5475: upgraded randomized testing to 2.1.1. This will print full 
annotations on assumption-ignored tests. It also includes more fancy test 
filtering.

 add required attribute bugUrl to @BadApple
 --

 Key: LUCENE-5475
 URL: https://issues.apache.org/jira/browse/LUCENE-5475
 Project: Lucene - Core
  Issue Type: Bug
  Components: general/test
Reporter: Robert Muir
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5475.patch


 This makes it impossible to tag a test as a badapple without a pointer to a 
 JIRA issue.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5831) Scale score PostFilter

2014-03-11 Thread Peter Keegan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930313#comment-13930313
 ] 

Peter Keegan commented on SOLR-5831:


 How is this performing compared to using the scale() function?
No comparison. I'm running Solr on a 4-vCPU EC2 instance and tested with 
SolrMeter.
On a production index (1.6 million docs) and production queries at a leisurely 
rate of 10 QPS:

1. scale() with function query:
Median response time: 3000 ms
Ave response time: 8000 ms
Load average: double digits 

2. PostFilter with maxscalehits=0 (rows=50):
Median response time: 18 ms
Ave response time: 108 ms
Load average:  1

3. PostFilter with maxscalehits=1:
Median response time: 21 ms
Ave response time: 120 ms
Load average: 1 

4. PostFiilter with maxscalehits=-1 (scale all hits)
Worse than #1. Most queries timed-out. 
This is not surprising since the PriorityQueue size is often huge from high hit 
counts, and all hits are delegated.

Regarding the QueryResultCache, are there any suggestions on how to determine 
its size in the context of the PostFilter?

Thanks,
Peter


 Scale score PostFilter
 --

 Key: SOLR-5831
 URL: https://issues.apache.org/jira/browse/SOLR-5831
 Project: Solr
  Issue Type: Improvement
  Components: search
Affects Versions: 4.7
Reporter: Peter Keegan
Priority: Minor
 Attachments: SOLR-5831.patch


 The ScaleScoreQParserPlugin is a PostFilter that performs score scaling.
 This is an alternative to using a function query wrapping a scale() wrapping 
 a query(). For example:
 select?qq={!edismax v='news' qf='title^2 body'}&scaledQ=scale(product(query($qq),1),0,1)&q={!func}sum(product(0.75,$scaledQ),product(0.25,field(myfield)))&fq={!query v=$qq}
 The problem with this query is that it has to scale every hit. Usually, only 
 the returned hits need to be scaled,
 but there may be use cases where the number of hits to be scaled is greater 
 than the returned hit count,
 but less than or equal to the total hit count.
 Sample syntax:
 fq={!scalescore+l=0.0 u=1.0 maxscalehits=1 
 func=sum(product(sscore(),0.75),product(field(myfield),0.25))}
 l=0.0 u=1.0   //Scale scores to values between 0-1, inclusive 
 maxscalehits=1//The maximum number of result scores to scale (-1 = 
 all hits, 0 = results 'page' size)
 func=...  //Apply the composite function to each hit. The 
 scaled score value is accessed by the 'score()' value source
 All parameters are optional. The defaults are:
 l=0.0 u=1.0
 maxscalehits=0 (result window size)
 func=(null)
  
 Note: this patch is not complete, as it contains no test cases and may not 
 conform 
 to all the guidelines in http://wiki.apache.org/solr/HowToContribute. 
  
 I would appreciate any feedback on the usability and implementation.
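
For clarity, the scaling itself is the usual linear min/max mapping; a standalone 
sketch (not the patch code) is below.

{code}
// Standalone sketch of min/max score scaling (not the patch code):
// map a raw score from [min, max] into [l, u].
static float scaleScore(float score, float min, float max, float l, float u) {
  if (max == min) {
    return l; // degenerate case: all collected scores are equal
  }
  return l + (score - min) * (u - l) / (max - min);
}
{code}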



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: ant idea on fresh checkout of 4.7

2014-03-11 Thread Steve Rowe
0.11.0 is the morphlines version specified for trunk, branch_4x, and the 
lucene_solr_4_7 branch, e.g. from trunk ivy-versions.properties:

org.kitesdk.kite-morphlines.version = 0.11.0

AFAICT, version 0.11.0 of those two artifacts are not available from the 
cloudera repositories - I can see 0.10.0, 0.10.0-* (several), and 0.12.0, but 
no 0.11.0: 

   
http://repository.cloudera.com/cloudera/libs-release-local/org/kitesdk/kite-morphlines-hadoop-sequencefile/
   
http://repository.cloudera.com/cloudera/libs-release-local/org/kitesdk/kite-morphlines-saxon/

   
https://repository.cloudera.com/cloudera/repo/org/kitesdk/kite-morphlines-hadoop-sequencefile/
   
https://repository.cloudera.com/cloudera/repo/org/kitesdk/kite-morphlines-saxon/

Mark Miller, do you know what’s going on?

I made a tarball of all the 0.11.0 files under my ~/.ivy2/cache/org.kitesdk/ 
directory and put them here:


http://people.apache.org/~sarowe/solr-dependencies-org.kitesdk-0.11.0.tar.bz2

Steve

On Mar 11, 2014, at 6:17 AM, Grant Ingersoll gsing...@apache.org wrote:

 Hi,
 
 I did a fresh checkout of 4.7 from SVN and ran ant idea at the top level and 
 I get [1].  I presume I am missing the CDH Ivy repo somewhere.  Any one have 
 the bits that need to be added handy?
 
 
 
 [1]
 :: problems summary ::
 [ivy:retrieve]  WARNINGS
 [ivy:retrieve]module not found: 
 org.kitesdk#kite-morphlines-saxon;0.11.0
 [ivy:retrieve] local: tried
 [ivy:retrieve]  
 /pathUsers/grantingersoll/.ivy2/local/org.kitesdk/kite-morphlines-saxon/0.11.0/ivys/ivy.xml
 [ivy:retrieve]  -- artifact 
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
 [ivy:retrieve]  
 /path/.ivy2/local/org.kitesdk/kite-morphlines-saxon/0.11.0/jars/kite-morphlines-saxon.jar
 [ivy:retrieve] shared: tried
 [ivy:retrieve]  
 /path/.ivy2/shared/org.kitesdk/kite-morphlines-saxon/0.11.0/ivys/ivy.xml
 [ivy:retrieve]  -- artifact 
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
 [ivy:retrieve]  
 /path/.ivy2/shared/org.kitesdk/kite-morphlines-saxon/0.11.0/jars/kite-morphlines-saxon.jar
 [ivy:retrieve] public: tried
 [ivy:retrieve]  
 http://repo1.maven.org/maven2/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.pom
 [ivy:retrieve]  -- artifact 
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
 [ivy:retrieve]  
 http://repo1.maven.org/maven2/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.jar
 [ivy:retrieve] cloudera: tried
 [ivy:retrieve]  
 https://repository.cloudera.com/artifactory/repo/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.pom
 [ivy:retrieve]  -- artifact 
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
 [ivy:retrieve]  
 https://repository.cloudera.com/artifactory/repo/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.jar
 [ivy:retrieve] releases.cloudera.com: tried
 [ivy:retrieve]  
 https://repository.cloudera.com/content/repositories/releases/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.pom
 [ivy:retrieve]  -- artifact 
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
 [ivy:retrieve]  
 https://repository.cloudera.com/content/repositories/releases/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.jar
 [ivy:retrieve] sonatype-releases: tried
 [ivy:retrieve]  
 http://oss.sonatype.org/content/repositories/releases/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.pom
 [ivy:retrieve]  -- artifact 
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
 [ivy:retrieve]  
 http://oss.sonatype.org/content/repositories/releases/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.jar
 [ivy:retrieve] maven.restlet.org: tried
 [ivy:retrieve]  
 http://maven.restlet.org/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.pom
 [ivy:retrieve]  -- artifact 
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
 [ivy:retrieve]  
 http://maven.restlet.org/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.jar
 [ivy:retrieve] svnkit-releases: tried
 [ivy:retrieve]  
 http://maven.tmatesoft.com/content/repositories/releases/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.pom
 [ivy:retrieve]  -- artifact 
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
 [ivy:retrieve]  
 http://maven.tmatesoft.com/content/repositories/releases/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.jar
 [ivy:retrieve] working-chinese-mirror: tried
 [ivy:retrieve]  
 

[jira] [Updated] (SOLR-5473) Make one state.json per collection

2014-03-11 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-5473:
-

Attachment: SOLR-5473-74.patch

The last patch missed the testcase

 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5473) Make one state.json per collection

2014-03-11 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-5473:
-

Attachment: (was: SOLR-5473-74.patch)

 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-5768) Add a distrib.singlePass parameter to make EXECUTE_QUERY phase fetch all fields and skip GET_FIELDS

2014-03-11 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar reassigned SOLR-5768:
---

Assignee: Shalin Shekhar Mangar

 Add a distrib.singlePass parameter to make EXECUTE_QUERY phase fetch all 
 fields and skip GET_FIELDS
 ---

 Key: SOLR-5768
 URL: https://issues.apache.org/jira/browse/SOLR-5768
 Project: Solr
  Issue Type: Improvement
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 4.8, 5.0

 Attachments: SOLR-5768.diff


 Suggested by Yonik on solr-user:
 http://www.mail-archive.com/solr-user@lucene.apache.org/msg95045.html
 {quote}
 Although it seems like it should be relatively simple to make it work
 with other fields as well, by passing down the complete fl requested
 if some optional parameter is set (distrib.singlePass?)
 {quote}
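
If it lands under that name, setting it from SolrJ would look something like the 
sketch below; distrib.singlePass is only a proposed parameter here, not something 
in a released Solr.

{code}
// Sketch only: "distrib.singlePass" is the parameter name proposed in this issue.
SolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");
SolrQuery query = new SolrQuery("ipod");
query.setFields("id", "name", "price");
query.set("distrib.singlePass", true);
QueryResponse rsp = server.query(query);
{code}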



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: ant idea on fresh checkout of 4.7

2014-03-11 Thread Mark Miller
I'm west coast this week - still sleeping for an hour or so :) I'll get to the 
bottom of it when I get to the office. 

I assume technical difficulties but I have no clue. 

- Mark

 On Mar 11, 2014, at 6:16 AM, Steve Rowe sar...@gmail.com wrote:
 
 0.11.0 is the morphlines version specified for trunk, branch_4x, and the 
 lucene_solr_4_7 branch, e.g. from trunk ivy-versions.properties:
 
org.kitesdk.kite-morphlines.version = 0.11.0
 
 AFAICT, version 0.11.0 of those two artifacts are not available from the 
 cloudera repositories - I can see 0.10.0, 0.10.0-* (several), and 0.12.0, but 
 no 0.11.0: 
 
   
 http://repository.cloudera.com/cloudera/libs-release-local/org/kitesdk/kite-morphlines-hadoop-sequencefile/
   
 http://repository.cloudera.com/cloudera/libs-release-local/org/kitesdk/kite-morphlines-saxon/
 
   
 https://repository.cloudera.com/cloudera/repo/org/kitesdk/kite-morphlines-hadoop-sequencefile/
   
 https://repository.cloudera.com/cloudera/repo/org/kitesdk/kite-morphlines-saxon/
 
 Mark Miller, do you know what’s going on?
 
 I made a tarball of all the 0.11.0 files under my ~/.ivy2/cache/org.kitesdk/ 
 directory and put them here:
 

 http://people.apache.org/~sarowe/solr-dependencies-org.kitesdk-0.11.0.tar.bz2
 
 Steve
 
 On Mar 11, 2014, at 6:17 AM, Grant Ingersoll gsing...@apache.org wrote:
 
 Hi,
 
 I did a fresh checkout of 4.7 from SVN and ran ant idea at the top level and 
 I get [1].  I presume I am missing the CDH Ivy repo somewhere.  Any one have 
 the bits that need to be added handy?
 
 
 
 [1]
 :: problems summary ::
 [ivy:retrieve]  WARNINGS
 [ivy:retrieve]module not found: 
 org.kitesdk#kite-morphlines-saxon;0.11.0
 [ivy:retrieve] local: tried
 [ivy:retrieve]  
 /pathUsers/grantingersoll/.ivy2/local/org.kitesdk/kite-morphlines-saxon/0.11.0/ivys/ivy.xml
 [ivy:retrieve]  -- artifact 
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
 [ivy:retrieve]  
 /path/.ivy2/local/org.kitesdk/kite-morphlines-saxon/0.11.0/jars/kite-morphlines-saxon.jar
 [ivy:retrieve] shared: tried
 [ivy:retrieve]  
 /path/.ivy2/shared/org.kitesdk/kite-morphlines-saxon/0.11.0/ivys/ivy.xml
 [ivy:retrieve]  -- artifact 
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
 [ivy:retrieve]  
 /path/.ivy2/shared/org.kitesdk/kite-morphlines-saxon/0.11.0/jars/kite-morphlines-saxon.jar
 [ivy:retrieve] public: tried
 [ivy:retrieve]  
 http://repo1.maven.org/maven2/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.pom
 [ivy:retrieve]  -- artifact 
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
 [ivy:retrieve]  
 http://repo1.maven.org/maven2/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.jar
 [ivy:retrieve] cloudera: tried
 [ivy:retrieve]  
 https://repository.cloudera.com/artifactory/repo/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.pom
 [ivy:retrieve]  -- artifact 
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
 [ivy:retrieve]  
 https://repository.cloudera.com/artifactory/repo/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.jar
 [ivy:retrieve] releases.cloudera.com: tried
 [ivy:retrieve]  
 https://repository.cloudera.com/content/repositories/releases/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.pom
 [ivy:retrieve]  -- artifact 
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
 [ivy:retrieve]  
 https://repository.cloudera.com/content/repositories/releases/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.jar
 [ivy:retrieve] sonatype-releases: tried
 [ivy:retrieve]  
 http://oss.sonatype.org/content/repositories/releases/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.pom
 [ivy:retrieve]  -- artifact 
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
 [ivy:retrieve]  
 http://oss.sonatype.org/content/repositories/releases/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.jar
 [ivy:retrieve] maven.restlet.org: tried
 [ivy:retrieve]  
 http://maven.restlet.org/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.pom
 [ivy:retrieve]  -- artifact 
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
 [ivy:retrieve]  
 http://maven.restlet.org/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.jar
 [ivy:retrieve] svnkit-releases: tried
 [ivy:retrieve]  
 http://maven.tmatesoft.com/content/repositories/releases/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.pom
 [ivy:retrieve]  -- artifact 
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
 [ivy:retrieve]  
 

[jira] [Commented] (SOLR-5848) Morphlines is not resolving

2014-03-11 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930327#comment-13930327
 ] 

Steve Rowe commented on SOLR-5848:
--

AFAICT, version 0.11.0 of those two artifacts are not available from the 
cloudera repositories - I can see 0.10.0, 0.10.0-* (several), and 0.12.0, but 
no 0.11.0: 

  
http://repository.cloudera.com/cloudera/libs-release-local/org/kitesdk/kite-morphlines-hadoop-sequencefile/
  
http://repository.cloudera.com/cloudera/libs-release-local/org/kitesdk/kite-morphlines-saxon/

  
https://repository.cloudera.com/cloudera/repo/org/kitesdk/kite-morphlines-hadoop-sequencefile/
  
https://repository.cloudera.com/cloudera/repo/org/kitesdk/kite-morphlines-saxon/

I made a tarball of all the 0.11.0 files under my ~/.ivy2/cache/org.kitesdk/ 
directory and put them here:

   http://people.apache.org/~sarowe/solr-dependencies-org.kitesdk-0.11.0.tar.bz2

 Morphlines is not resolving
 ---

 Key: SOLR-5848
 URL: https://issues.apache.org/jira/browse/SOLR-5848
 Project: Solr
  Issue Type: Bug
Reporter: Dawid Weiss
Assignee: Mark Miller
Priority: Critical

 This version of morphlines does not resolve for me and Grant.
 {code}
 ::
 ::  UNRESOLVED DEPENDENCIES ::
 ::
 :: org.kitesdk#kite-morphlines-saxon;0.11.0: not found
 :: org.kitesdk#kite-morphlines-hadoop-sequencefile;0.11.0: not found
 {code}
 Has this been deleted from Cloudera's repositories or something? This would 
 be pretty bad -- maven repos should be immutable...



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5818) distrib search with custom comparator does not quite work correctly

2014-03-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930331#comment-13930331
 ] 

ASF subversion and git services commented on SOLR-5818:
---

Commit 1576344 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1576344 ]

SOLR-5818: Add lucene expressions as a dependency on solr-core-tests idea file

 distrib search with custom comparator does not quite work correctly
 ---

 Key: SOLR-5818
 URL: https://issues.apache.org/jira/browse/SOLR-5818
 Project: Solr
  Issue Type: Bug
Reporter: Ryan Ernst
Assignee: Ryan Ernst
 Fix For: 4.8, 5.0

 Attachments: SOLR-5818.patch


 In QueryComponent.doFieldSortValues, a scorer is never set on a custom 
 comparator.  We just need to add a fake scorer that can pass through the 
 score from the DocList.
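
(For illustration only: a rough sketch of what such a pass-through scorer could 
look like against the Lucene 4.x Scorer API. This is not the attached patch, and 
the wiring inside QueryComponent.doFieldSortValues is omitted.)

{code}
import java.io.IOException;
import org.apache.lucene.search.Scorer;

/** Sketch: a scorer that simply reports a pre-computed score and doc id,
 *  so FieldComparator.setScorer() has something to call. */
class FakeScorer extends Scorer {
  float score;  // score taken from the DocList iterator
  int docid;    // current doc id

  FakeScorer() {
    super(null); // no Weight needed for a pass-through scorer
  }

  @Override
  public float score() throws IOException {
    return score;
  }

  @Override
  public int freq() throws IOException {
    throw new UnsupportedOperationException();
  }

  @Override
  public int docID() {
    return docid;
  }

  @Override
  public int nextDoc() throws IOException {
    throw new UnsupportedOperationException();
  }

  @Override
  public int advance(int target) throws IOException {
    throw new UnsupportedOperationException();
  }

  @Override
  public long cost() {
    return 1;
  }
}
{code}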



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5848) Morphlines is not resolving

2014-03-11 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930330#comment-13930330
 ] 

Robert Muir commented on SOLR-5848:
---

Steve can we commit that to the default ivy-settings.xml for now?

We need to unbreak trunk.

 Morphlines is not resolving
 ---

 Key: SOLR-5848
 URL: https://issues.apache.org/jira/browse/SOLR-5848
 Project: Solr
  Issue Type: Bug
Reporter: Dawid Weiss
Assignee: Mark Miller
Priority: Critical

 This version of morphlines does not resolve for me and Grant.
 {code}
 ::
 ::  UNRESOLVED DEPENDENCIES ::
 ::
 :: org.kitesdk#kite-morphlines-saxon;0.11.0: not found
 :: org.kitesdk#kite-morphlines-hadoop-sequencefile;0.11.0: not found
 {code}
 Has this been deleted from Cloudera's repositories or something? This would 
 be pretty bad -- maven repos should be immutable...



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: ant idea on fresh checkout of 4.7

2014-03-11 Thread Dawid Weiss
 still sleeping for an hour or so :)

LOL. Good for you! :)
http://blog.sleep-coding.com/wp-content/uploads/2013/05/sleep-coding-banner.jpg

Dawid

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5818) distrib search with custom comparator does not quite work correctly

2014-03-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930332#comment-13930332
 ] 

ASF subversion and git services commented on SOLR-5818:
---

Commit 1576346 from sha...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1576346 ]

SOLR-5818: Add lucene expressions as a dependency on solr-core-tests idea file

 distrib search with custom comparator does not quite work correctly
 ---

 Key: SOLR-5818
 URL: https://issues.apache.org/jira/browse/SOLR-5818
 Project: Solr
  Issue Type: Bug
Reporter: Ryan Ernst
Assignee: Ryan Ernst
 Fix For: 4.8, 5.0

 Attachments: SOLR-5818.patch


 In QueryComponent.doFieldSortValues, a scorer is never set on a custom 
 comparator.  We just need to add a fake scorer that can pass through the 
 score from the DocList.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-5850) Race condition in ConcurrentUpdateSolrServer

2014-03-11 Thread Devansh Dhutia (JIRA)
Devansh Dhutia created SOLR-5850:


 Summary: Race condition in ConcurrentUpdateSolrServer
 Key: SOLR-5850
 URL: https://issues.apache.org/jira/browse/SOLR-5850
 Project: Solr
  Issue Type: Bug
  Components: search, SolrCloud, update
Affects Versions: 4.6
Reporter: Devansh Dhutia
Priority: Critical


Possibly related to SOLR-2308, we are seeing a Queue Full error message when 
issuing thousands of writes to the Solr Cloud. 

Each Update has 200 documents, and a commit is issued after 2000 documents have 
been added. 

The writes are spread out to all the servers in the cloud (2 in this case) and 
following is the stack trace from Solr: 

{code:xml}
<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">500</int><int name="QTime">101</int></lst><lst name="error"><str name="msg">Queue full</str><str name="trace">java.lang.IllegalStateException: Queue full
at java.util.AbstractQueue.add(Unknown Source)
at 
org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner$1.writeTo(ConcurrentUpdateSolrServer.java:181)
at org.apache.http.entity.EntityTemplate.writeTo(EntityTemplate.java:72)
at 
org.apache.http.entity.HttpEntityWrapper.writeTo(HttpEntityWrapper.java:98)
at 
org.apache.http.impl.client.EntityEnclosingRequestWrapper$EntityWrapper.writeTo(EntityEnclosingRequestWrapper.java:108)
at 
org.apache.http.impl.entity.EntitySerializer.serialize(EntitySerializer.java:122)
at 
org.apache.http.impl.AbstractHttpClientConnection.sendRequestEntity(AbstractHttpClientConnection.java:271)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.sendRequestEntity(ManagedClientConnectionImpl.java:197)
at 
org.apache.http.protocol.HttpRequestExecutor.doSendRequest(HttpRequestExecutor.java:257)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:715)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:520)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
at 
org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner.run(ConcurrentUpdateSolrServer.java:232)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
</str><int name="code">500</int></lst>
</response>
{code}
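
For context, a minimal SolrJ sketch of the indexing pattern described above; the 
URL, queue size, and thread count are made-up values, since the reporter's client 
configuration is not shown:

{code}
import java.util.ArrayList;
import java.util.List;
import org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class BulkIndexSketch {
  public static void main(String[] args) throws Exception {
    // Queue size (100) and thread count (4) are illustrative guesses.
    ConcurrentUpdateSolrServer server =
        new ConcurrentUpdateSolrServer("http://localhost:8983/solr/collection1", 100, 4);
    List<SolrInputDocument> batch = new ArrayList<SolrInputDocument>();
    for (int i = 1; i <= 10000; i++) {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", Integer.toString(i));
      batch.add(doc);
      if (batch.size() == 200) {   // each update carries 200 documents
        server.add(batch);
        batch.clear();
      }
      if (i % 2000 == 0) {         // commit after every 2000 documents
        server.commit();
      }
    }
    server.blockUntilFinished();
    server.shutdown();
  }
}
{code}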



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5512) Remove redundant typing (diamond operator) in trunk

2014-03-11 Thread Furkan KAMACI (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Furkan KAMACI updated LUCENE-5512:
--

Attachment: LUCENE-5512.patch

I've finished removing redundant typing for trunk. 1221 files were affected by 
my patch. I've done it for both the Lucene and Solr modules. 

I didn't use any automated tools for it, and I've changed even the commented-out 
code to avoid confusion.

After I finished removing the redundant typing I reviewed all my changes across 
the 1221 files and everything seems OK. The code compiles successfully and all 
tests pass.

My suggestion is that if the vote ends in favor of Java 7 support, we should 
apply this patch to trunk as soon as possible, because it includes many changes 
and makes the Lucene/Solr code much more readable. On the other hand, I don't 
think it will cause conflicts (at least any real conflicts) for people who are 
developing code right now.

All in all, I had the chance to check nearly all classes of the project, and it 
was really useful for me. I noted some issues about the project and some good 
tips for myself while going through all of the Lucene/Solr code.

[~rcmuir] you can check the code and apply the patch if the vote passes.
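
For readers who have not followed the Java 7 discussion, a tiny self-contained 
illustration of the kind of change the patch makes throughout the codebase (class 
and variable names are made up):

{code}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DiamondExample {
  public static void main(String[] args) {
    // Before (pre-Java 7 style): type arguments repeated on the right-hand side.
    Map<String, List<Integer>> before = new HashMap<String, List<Integer>>();

    // After (Java 7 diamond operator): the compiler infers the type arguments.
    Map<String, List<Integer>> after = new HashMap<>();

    List<Integer> ids = new ArrayList<>();
    ids.add(42);
    before.put("postings", ids);
    after.put("postings", ids);
    System.out.println(before.equals(after)); // true: behavior is unchanged
  }
}
{code}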

 Remove redundant typing (diamond operator) in trunk
 ---

 Key: LUCENE-5512
 URL: https://issues.apache.org/jira/browse/LUCENE-5512
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-5512.patch, LUCENE-5512.patch, LUCENE-5512.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5850) Race condition in ConcurrentUpdateSolrServer

2014-03-11 Thread Devansh Dhutia (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devansh Dhutia updated SOLR-5850:
-

Component/s: clients - java

 Race condition in ConcurrentUpdateSolrServer
 

 Key: SOLR-5850
 URL: https://issues.apache.org/jira/browse/SOLR-5850
 Project: Solr
  Issue Type: Bug
  Components: clients - java, search, SolrCloud, update
Affects Versions: 4.6
Reporter: Devansh Dhutia
Priority: Critical
  Labels: 500, cloud, error, update

 Possibly related to SOLR-2308, we are seeing a Queue Full error message when 
 issuing thousands of writes to the Solr Cloud. 
 Each Update has 200 documents, and a commit is issued after 2000 documents 
 have been added. 
 The writes are spread out to all the servers in the cloud (2 in this case) 
 and following is the stack trace from Solr: 
 {code:xml}
 <?xml version="1.0" encoding="UTF-8"?>
 <response>
 <lst name="responseHeader"><int name="status">500</int><int name="QTime">101</int></lst><lst name="error"><str name="msg">Queue full</str><str name="trace">java.lang.IllegalStateException: Queue full
 at java.util.AbstractQueue.add(Unknown Source)
 at 
 org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner$1.writeTo(ConcurrentUpdateSolrServer.java:181)
 at 
 org.apache.http.entity.EntityTemplate.writeTo(EntityTemplate.java:72)
 at 
 org.apache.http.entity.HttpEntityWrapper.writeTo(HttpEntityWrapper.java:98)
 at 
 org.apache.http.impl.client.EntityEnclosingRequestWrapper$EntityWrapper.writeTo(EntityEnclosingRequestWrapper.java:108)
 at 
 org.apache.http.impl.entity.EntitySerializer.serialize(EntitySerializer.java:122)
 at 
 org.apache.http.impl.AbstractHttpClientConnection.sendRequestEntity(AbstractHttpClientConnection.java:271)
 at 
 org.apache.http.impl.conn.ManagedClientConnectionImpl.sendRequestEntity(ManagedClientConnectionImpl.java:197)
 at 
 org.apache.http.protocol.HttpRequestExecutor.doSendRequest(HttpRequestExecutor.java:257)
 at 
 org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
 at 
 org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:715)
 at 
 org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:520)
 at 
 org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
 at 
 org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
 at 
 org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
 at 
 org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner.run(ConcurrentUpdateSolrServer.java:232)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)
 </str><int name="code">500</int></lst>
 </response>
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5850) Race condition in ConcurrentUpdateSolrServer

2014-03-11 Thread Devansh Dhutia (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devansh Dhutia updated SOLR-5850:
-

Description: 
Possibly related to SOLR-2308, we are seeing a Queue Full error message when 
issuing writes to Solr Cloud

Each Update has 200 documents, and a commit is issued after 2000 documents have 
been added. 

The writes are spread out to all the servers in the cloud (2 in this case) and 
following is the stack trace from Solr: 

{code:xml}
<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">500</int><int name="QTime">101</int></lst><lst name="error"><str name="msg">Queue full</str><str name="trace">java.lang.IllegalStateException: Queue full
at java.util.AbstractQueue.add(Unknown Source)
at 
org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner$1.writeTo(ConcurrentUpdateSolrServer.java:181)
at org.apache.http.entity.EntityTemplate.writeTo(EntityTemplate.java:72)
at 
org.apache.http.entity.HttpEntityWrapper.writeTo(HttpEntityWrapper.java:98)
at 
org.apache.http.impl.client.EntityEnclosingRequestWrapper$EntityWrapper.writeTo(EntityEnclosingRequestWrapper.java:108)
at 
org.apache.http.impl.entity.EntitySerializer.serialize(EntitySerializer.java:122)
at 
org.apache.http.impl.AbstractHttpClientConnection.sendRequestEntity(AbstractHttpClientConnection.java:271)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.sendRequestEntity(ManagedClientConnectionImpl.java:197)
at 
org.apache.http.protocol.HttpRequestExecutor.doSendRequest(HttpRequestExecutor.java:257)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:715)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:520)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
at 
org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner.run(ConcurrentUpdateSolrServer.java:232)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
</str><int name="code">500</int></lst>
</response>
{code}

  was:
Possibly related to SOLR-2308, we are seeing a Queue Full error message when 
issuing thousands of writes to the Solr Cloud. 

Each Update has 200 documents, and a commit is issued after 2000 documents have 
been added. 

The writes are spread out to all the servers in the cloud (2 in this case) and 
following is the stack trace from Solr: 

{code:xml}
<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">500</int><int name="QTime">101</int></lst><lst name="error"><str name="msg">Queue full</str><str name="trace">java.lang.IllegalStateException: Queue full
at java.util.AbstractQueue.add(Unknown Source)
at 
org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner$1.writeTo(ConcurrentUpdateSolrServer.java:181)
at org.apache.http.entity.EntityTemplate.writeTo(EntityTemplate.java:72)
at 
org.apache.http.entity.HttpEntityWrapper.writeTo(HttpEntityWrapper.java:98)
at 
org.apache.http.impl.client.EntityEnclosingRequestWrapper$EntityWrapper.writeTo(EntityEnclosingRequestWrapper.java:108)
at 
org.apache.http.impl.entity.EntitySerializer.serialize(EntitySerializer.java:122)
at 
org.apache.http.impl.AbstractHttpClientConnection.sendRequestEntity(AbstractHttpClientConnection.java:271)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.sendRequestEntity(ManagedClientConnectionImpl.java:197)
at 
org.apache.http.protocol.HttpRequestExecutor.doSendRequest(HttpRequestExecutor.java:257)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:715)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:520)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
at 
org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner.run(ConcurrentUpdateSolrServer.java:232)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at 

[jira] [Commented] (SOLR-5848) Morphlines is not resolving

2014-03-11 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930379#comment-13930379
 ] 

Steve Rowe commented on SOLR-5848:
--

bq. Steve can we commit that to the default ivy-settings.xml for now?

I don't know how to make ivy pull from a remote Ivy cache, so I instead put up 
the v0.11.0 {{org/kitesdk/}} section of my {{~/.m2/repository/}} here: 
http://people.apache.org/~sarowe/.m2repo/ and modified 
{{lucene/ivy-settings.xml}} to pull from there.  After I deleted my 
{{~/.ivy2/cache/}}, {{ant resolve}} succeeded for me, so it appears to work.

bq. We need to unbreak trunk.

I've committed the {{lucene/ivy-settings.xml}} change to trunk.

I agree, but why not branch_4x too?  It's also affected.

 Morphlines is not resolving
 ---

 Key: SOLR-5848
 URL: https://issues.apache.org/jira/browse/SOLR-5848
 Project: Solr
  Issue Type: Bug
Reporter: Dawid Weiss
Assignee: Mark Miller
Priority: Critical

 This version of morphlines does not resolve for me and Grant.
 {code}
 ::
 ::  UNRESOLVED DEPENDENCIES ::
 ::
 :: org.kitesdk#kite-morphlines-saxon;0.11.0: not found
 :: org.kitesdk#kite-morphlines-hadoop-sequencefile;0.11.0: not found
 {code}
 Has this been deleted from Cloudera's repositories or something? This would 
 be pretty bad -- maven repos should be immutable...



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5848) Morphlines is not resolving

2014-03-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930378#comment-13930378
 ] 

ASF subversion and git services commented on SOLR-5848:
---

Commit 1576360 from [~steve_rowe] in branch 'dev/trunk'
[ https://svn.apache.org/r1576360 ]

SOLR-5848: add reference to temporary morphlines 0.11.0 download site to 
lucene/ivy-settings.xml to allow 'ant resolve' to work

 Morphlines is not resolving
 ---

 Key: SOLR-5848
 URL: https://issues.apache.org/jira/browse/SOLR-5848
 Project: Solr
  Issue Type: Bug
Reporter: Dawid Weiss
Assignee: Mark Miller
Priority: Critical

 This version of morphlines does not resolve for me and Grant.
 {code}
 ::
 ::  UNRESOLVED DEPENDENCIES ::
 ::
 :: org.kitesdk#kite-morphlines-saxon;0.11.0: not found
 :: org.kitesdk#kite-morphlines-hadoop-sequencefile;0.11.0: not found
 {code}
 Has this been deleted from Cloudera's repositories or something? This would 
 be pretty bad -- maven repos should be immutable...



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) - Build # 9756 - Failure!

2014-03-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/9756/
Java: 32bit/jdk1.7.0_51 -server -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 28271 lines...]
check-licenses:
 [echo] License check under: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr
 [licenses] MISSING sha1 checksum file for: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/test-framework/lib/junit4-ant-2.1.1.jar
 [licenses] EXPECTED sha1 checksum file : 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/junit4-ant-2.1.1.jar.sha1

[...truncated 3 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:467: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:70: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build.xml:254: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/custom-tasks.xml:62:
 License check failed. Check the logs.

Total time: 52 minutes 39 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 32bit/jdk1.7.0_51 -server -XX:+UseSerialGC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Comment Edited] (SOLR-5848) Morphlines is not resolving

2014-03-11 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930379#comment-13930379
 ] 

Steve Rowe edited comment on SOLR-5848 at 3/11/14 2:27 PM:
---

bq. Steve can we commit that to the default ivy-settings.xml for now?

I don't know how to make ivy pull from a remote Ivy cache, so I instead put up 
the v0.11.0 {{org/kitesdk/}} section of my {{\~/.m2/repository/}} here: 
http://people.apache.org/~sarowe/.m2repo/ and modified 
{{lucene/ivy-settings.xml}} to pull from there.  After I deleted my 
{{\~/.ivy2/cache/}}, {{ant resolve}} succeeded for me, so it appears to work.

bq. We need to unbreak trunk.

I've committed the {{lucene/ivy-settings.xml}} change to trunk.

I agree, but why not branch_4x too?  It's also affected.


was (Author: steve_rowe):
bq. Steve can we commit that to the default ivy-settings.xml for now?

I don't know how to make ivy pull from a remote Ivy cache, so I instead put up 
the v0.11.0 {{org/kitesdk/}} section of my {{~/.m2/repository/}} here: 
http://people.apache.org/~sarowe/.m2repo/ and modified 
{{lucene/ivy-settings.xml}} to pull from there.  After I deleted my 
{{~/.ivy2/cache/}}, {{ant resolve}} succeeded for me, so it appears to work.

bq. We need to unbreak trunk.

I've committed the {{lucene/ivy-settings.xml}} change to trunk.

I agree, but why not branch_4x too?  It's also affected.

 Morphlines is not resolving
 ---

 Key: SOLR-5848
 URL: https://issues.apache.org/jira/browse/SOLR-5848
 Project: Solr
  Issue Type: Bug
Reporter: Dawid Weiss
Assignee: Mark Miller
Priority: Critical

 This version of morphlines does not resolve for me and Grant.
 {code}
 ::
 ::  UNRESOLVED DEPENDENCIES ::
 ::
 :: org.kitesdk#kite-morphlines-saxon;0.11.0: not found
 :: org.kitesdk#kite-morphlines-hadoop-sequencefile;0.11.0: not found
 {code}
 Has this been deleted from Cloudera's repositories or something? This would 
 be pretty bad -- maven repos should be immutable...



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5848) Morphlines is not resolving

2014-03-11 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930399#comment-13930399
 ] 

Robert Muir commented on SOLR-5848:
---

Thank you! Yes, I just figured trunk was most important, so that anyone trying to 
check out the source code for the first time is able to get it working.

 Morphlines is not resolving
 ---

 Key: SOLR-5848
 URL: https://issues.apache.org/jira/browse/SOLR-5848
 Project: Solr
  Issue Type: Bug
Reporter: Dawid Weiss
Assignee: Mark Miller
Priority: Critical

 This version of morphlines does not resolve for me and Grant.
 {code}
 ::
 ::  UNRESOLVED DEPENDENCIES ::
 ::
 :: org.kitesdk#kite-morphlines-saxon;0.11.0: not found
 :: org.kitesdk#kite-morphlines-hadoop-sequencefile;0.11.0: not found
 {code}
 Has this been deleted from Cloudera's repositories or something? This would 
 be pretty bad -- maven repos should be immutable...



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5205) [PATCH] SpanQueryParser with recursion, analysis and syntax very similar to classic QueryParser

2014-03-11 Thread Ahmet Arslan (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930402#comment-13930402
 ] 

Ahmet Arslan commented on LUCENE-5205:
--

Hi [~nikhil500], instead of treating all stopwords as synonyms, what do you think 
about reducing all stop words to the same token?

the => ImpossibleToken
a => ImpossibleToken
for => ImpossibleToken
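
(A minimal sketch of that idea as a Lucene 4.x TokenFilter; the class name, 
replacement token, and stop-word list are made up for illustration.)

{code}
import java.io.IOException;
import java.util.Arrays;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.util.CharArraySet;
import org.apache.lucene.util.Version;

// Sketch: rewrite every stop word to one constant term instead of removing it.
public final class StopWordToImpossibleTokenFilter extends TokenFilter {
  private static final CharArraySet STOP_WORDS =
      new CharArraySet(Version.LUCENE_47, Arrays.asList("the", "a", "for"), true);
  private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);

  public StopWordToImpossibleTokenFilter(TokenStream input) {
    super(input);
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (!input.incrementToken()) {
      return false;
    }
    if (STOP_WORDS.contains(termAtt.buffer(), 0, termAtt.length())) {
      termAtt.setEmpty().append("ImpossibleToken");
    }
    return true;
  }
}
{code}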

 [PATCH] SpanQueryParser with recursion, analysis and syntax very similar to 
 classic QueryParser
 ---

 Key: LUCENE-5205
 URL: https://issues.apache.org/jira/browse/LUCENE-5205
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/queryparser
Reporter: Tim Allison
  Labels: patch
 Fix For: 4.7

 Attachments: LUCENE-5205-cleanup-tests.patch, 
 LUCENE-5205-date-pkg-prvt.patch, LUCENE-5205.patch.gz, LUCENE-5205.patch.gz, 
 LUCENE-5205_dateTestReInitPkgPrvt.patch, 
 LUCENE-5205_improve_stop_word_handling.patch, 
 LUCENE-5205_smallTestMods.patch, LUCENE_5205.patch, 
 SpanQueryParser_v1.patch.gz, patch.txt


 This parser extends QueryParserBase and includes functionality from:
 * Classic QueryParser: most of its syntax
 * SurroundQueryParser: recursive parsing for near and not clauses.
 * ComplexPhraseQueryParser: can handle near queries that include multiterms 
 (wildcard, fuzzy, regex, prefix),
 * AnalyzingQueryParser: has an option to analyze multiterms.
 At a high level, there's a first pass BooleanQuery/field parser and then a 
 span query parser handles all terminal nodes and phrases.
 Same as classic syntax:
 * term: test 
 * fuzzy: roam~0.8, roam~2
 * wildcard: te?t, test*, t*st
 * regex: /\[mb\]oat/
 * phrase: jakarta apache
 * phrase with slop: jakarta apache~3
 * default or clause: jakarta apache
 * grouping or clause: (jakarta apache)
 * boolean and +/-: (lucene OR apache) NOT jakarta; +lucene +apache -jakarta
 * multiple fields: title:lucene author:hatcher
  
 Main additions in SpanQueryParser syntax vs. classic syntax:
 * Can require in order for phrases with slop with the \~ operator: 
 jakarta apache\~3
 * Can specify not near: fever bieber!\~3,10 ::
 find fever but not if bieber appears within 3 words before or 10 
 words after it.
 * Fully recursive phrasal queries with \[ and \]; as in: \[\[jakarta 
 apache\]~3 lucene\]\~4 :: 
 find jakarta within 3 words of apache, and that hit has to be within 
 four words before lucene
 * Can also use \[\] for single level phrasal queries instead of  as in: 
 \[jakarta apache\]
 * Can use or grouping clauses in phrasal queries: apache (lucene solr)\~3 
 :: find apache and then either lucene or solr within three words.
 * Can use multiterms in phrasal queries: jakarta\~1 ap*che\~2
 * Did I mention full recursion: \[\[jakarta\~1 ap*che\]\~2 (solr~ 
 /l\[ou\]\+\[cs\]\[en\]\+/)]\~10 :: Find something like jakarta within two 
 words of ap*che and that hit has to be within ten words of something like 
 solr or that lucene regex.
 * Can require at least x number of hits at boolean level: apache AND (lucene 
 solr tika)~2
 * Can use negative only query: -jakarta :: Find all docs that don't contain 
 jakarta
 * Can use an edit distance > 2 for fuzzy query via SlowFuzzyQuery (beware of 
 potential performance issues!).
 Trivial additions:
 * Can specify prefix length in fuzzy queries: jakarta~1,2 (edit distance =1, 
 prefix =2)
 * Can specify Optimal String Alignment (OSA) vs Levenshtein for distance 
 >=2: (jakarta~1 (OSA) vs jakarta~1(Levenshtein)
 This parser can be very useful for concordance tasks (see also LUCENE-5317 
 and LUCENE-5318) and for analytical search.  
 Until LUCENE-2878 is closed, this might have a use for fans of SpanQuery.
 Most of the documentation is in the javadoc for SpanQueryParser.
 Any and all feedback is welcome.  Thank you.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-1632) Distributed IDF

2014-03-11 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930406#comment-13930406
 ] 

Markus Jelsma commented on SOLR-1632:
-

No, but I think this happened when the QueryCommand code
{code}
public StatsSource getStatsSource() { return statsSource; }
public QueryCommand setStatsSource(StatsSource dfSource) {
  this.statsSource = dfSource;
  return this;
}
{code}

got removed.

 Distributed IDF
 ---

 Key: SOLR-1632
 URL: https://issues.apache.org/jira/browse/SOLR-1632
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 1.5
Reporter: Andrzej Bialecki 
Assignee: Mark Miller
 Fix For: 4.7, 5.0

 Attachments: 3x_SOLR-1632_doesntwork.patch, SOLR-1632.patch, 
 SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, 
 SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, 
 SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, distrib-2.patch, 
 distrib.patch


 Distributed IDF is a valuable enhancement for distributed search across 
 non-uniform shards. This issue tracks the proposed implementation of an API 
 to support this functionality in Solr.
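
To see why non-uniform shards matter here, a tiny worked example using the 
classic Lucene idf formula, 1 + ln(numDocs / (docFreq + 1)); the shard sizes and 
document frequencies are made up:

{code}
public class ShardIdfExample {
  // Classic Lucene (DefaultSimilarity) idf: 1 + ln(numDocs / (docFreq + 1)).
  static double idf(long docFreq, long numDocs) {
    return 1.0 + Math.log(numDocs / (double) (docFreq + 1));
  }

  public static void main(String[] args) {
    // Made-up numbers: the term is common on shard A but rare on shard B.
    System.out.println("shard A local idf: " + idf(900, 10000));   // ~3.4
    System.out.println("shard B local idf: " + idf(10, 1000000));  // ~12.4
    // With distributed IDF, both shards would score with the same global value.
    System.out.println("global idf:        " + idf(910, 1010000)); // ~8.0
  }
}
{code}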



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: ant idea on fresh checkout of 4.7

2014-03-11 Thread Simon Willnauer
Watch out, Grant is back ;) Welcome! :)

simon

On Tue, Mar 11, 2014 at 11:17 AM, Grant Ingersoll gsing...@apache.org wrote:
 Hi,

 I did a fresh checkout of 4.7 from SVN and ran ant idea at the top level and 
 I get [1].  I presume I am missing the CDH Ivy repo somewhere.  Any one have 
 the bits that need to be added handy?



 [1]
  :: problems summary ::
 [ivy:retrieve]  WARNINGS
 [ivy:retrieve]  module not found: 
 org.kitesdk#kite-morphlines-saxon;0.11.0
 [ivy:retrieve]   local: tried
 [ivy:retrieve]
 /pathUsers/grantingersoll/.ivy2/local/org.kitesdk/kite-morphlines-saxon/0.11.0/ivys/ivy.xml
 [ivy:retrieve]-- artifact 
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
 [ivy:retrieve]
 /path/.ivy2/local/org.kitesdk/kite-morphlines-saxon/0.11.0/jars/kite-morphlines-saxon.jar
 [ivy:retrieve]   shared: tried
 [ivy:retrieve]
 /path/.ivy2/shared/org.kitesdk/kite-morphlines-saxon/0.11.0/ivys/ivy.xml
 [ivy:retrieve]-- artifact 
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
 [ivy:retrieve]
 /path/.ivy2/shared/org.kitesdk/kite-morphlines-saxon/0.11.0/jars/kite-morphlines-saxon.jar
 [ivy:retrieve]   public: tried
 [ivy:retrieve]
 http://repo1.maven.org/maven2/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.pom
 [ivy:retrieve]-- artifact 
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
 [ivy:retrieve]
 http://repo1.maven.org/maven2/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.jar
 [ivy:retrieve]   cloudera: tried
 [ivy:retrieve]
 https://repository.cloudera.com/artifactory/repo/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.pom
 [ivy:retrieve]-- artifact 
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
 [ivy:retrieve]
 https://repository.cloudera.com/artifactory/repo/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.jar
 [ivy:retrieve]   releases.cloudera.com: tried
 [ivy:retrieve]
 https://repository.cloudera.com/content/repositories/releases/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.pom
 [ivy:retrieve]-- artifact 
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
 [ivy:retrieve]
 https://repository.cloudera.com/content/repositories/releases/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.jar
 [ivy:retrieve]   sonatype-releases: tried
 [ivy:retrieve]
 http://oss.sonatype.org/content/repositories/releases/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.pom
 [ivy:retrieve]-- artifact 
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
 [ivy:retrieve]
 http://oss.sonatype.org/content/repositories/releases/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.jar
 [ivy:retrieve]   maven.restlet.org: tried
 [ivy:retrieve]
 http://maven.restlet.org/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.pom
 [ivy:retrieve]-- artifact 
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
 [ivy:retrieve]
 http://maven.restlet.org/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.jar
 [ivy:retrieve]   svnkit-releases: tried
 [ivy:retrieve]
 http://maven.tmatesoft.com/content/repositories/releases/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.pom
 [ivy:retrieve]-- artifact 
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
 [ivy:retrieve]
 http://maven.tmatesoft.com/content/repositories/releases/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.jar
 [ivy:retrieve]   working-chinese-mirror: tried
 [ivy:retrieve]
 http://uk.maven.org/maven2/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.pom
 [ivy:retrieve]-- artifact 
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
 [ivy:retrieve]
 http://uk.maven.org/maven2/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.jar
 [ivy:retrieve]  module not found: 
 org.kitesdk#kite-morphlines-hadoop-sequencefile;0.11.0
 [ivy:retrieve]   local: tried
 [ivy:retrieve]
 /path/.ivy2/local/org.kitesdk/kite-morphlines-hadoop-sequencefile/0.11.0/ivys/ivy.xml
 [ivy:retrieve]-- artifact 
 org.kitesdk#kite-morphlines-hadoop-sequencefile;0.11.0!kite-morphlines-hadoop-sequencefile.jar:
 [ivy:retrieve]
 /path/.ivy2/local/org.kitesdk/kite-morphlines-hadoop-sequencefile/0.11.0/jars/kite-morphlines-hadoop-sequencefile.jar
 [ivy:retrieve]   shared: tried
 [ivy:retrieve]
 /path/.ivy2/shared/org.kitesdk/kite-morphlines-hadoop-sequencefile/0.11.0/ivys/ivy.xml
 [ivy:retrieve]-- artifact 
 org.kitesdk#kite-morphlines-hadoop-sequencefile;0.11.0!kite-morphlines-hadoop-sequencefile.jar:
 [ivy:retrieve]
 

[JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4644 - Failure

2014-03-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4644/

All tests passed

Build Log:
[...truncated 28381 lines...]
check-licenses:
 [echo] License check under: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr
 [licenses] MISSING sha1 checksum file for: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/test-framework/lib/junit4-ant-2.1.1.jar
 [licenses] EXPECTED sha1 checksum file : 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/junit4-ant-2.1.1.jar.sha1

[...truncated 3 lines...]
BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/build.xml:467:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/build.xml:70:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/build.xml:254:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/lucene/tools/custom-tasks.xml:62:
 License check failed. Check the logs.

Total time: 105 minutes 38 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

Re: ant idea on fresh checkout of 4.7

2014-03-11 Thread Michael Della Bitta
Cloudera just had to rebuild their Maven repo, maybe that's it?

http://community.cloudera.com/t5/CDH-Manual-Installation/Cloudera-repo-seems-down-https-repository-cloudera-com/m-p/7232

Michael Della Bitta

Applications Developer

o: +1 646 532 3062

appinions inc.

The Science of Influence Marketing

18 East 41st Street

New York, NY 10017

t: @appinions https://twitter.com/Appinions | g+:
plus.google.com/appinions https://plus.google.com/u/0/b/112002776285509593336/112002776285509593336/posts
w: appinions.com http://www.appinions.com/


On Tue, Mar 11, 2014 at 10:43 AM, Simon Willnauer simon.willna...@gmail.com
 wrote:

 watch out Grant is back ;) Welcome! :)

 simon

 On Tue, Mar 11, 2014 at 11:17 AM, Grant Ingersoll gsing...@apache.org
 wrote:
  Hi,
 
  I did a fresh checkout of 4.7 from SVN and ran ant idea at the top level
 and I get [1].  I presume I am missing the CDH Ivy repo somewhere.  Any one
 have the bits that need to be added handy?
 
 
 
  [1]
   :: problems summary ::
  [ivy:retrieve]  WARNINGS
  [ivy:retrieve]  module not found:
 org.kitesdk#kite-morphlines-saxon;0.11.0
  [ivy:retrieve]   local: tried
  [ivy:retrieve]
  
 /pathUsers/grantingersoll/.ivy2/local/org.kitesdk/kite-morphlines-saxon/0.11.0/ivys/ivy.xml
  [ivy:retrieve]-- artifact
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
  [ivy:retrieve]
  
 /path/.ivy2/local/org.kitesdk/kite-morphlines-saxon/0.11.0/jars/kite-morphlines-saxon.jar
  [ivy:retrieve]   shared: tried
  [ivy:retrieve]
  /path/.ivy2/shared/org.kitesdk/kite-morphlines-saxon/0.11.0/ivys/ivy.xml
  [ivy:retrieve]-- artifact
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
  [ivy:retrieve]
  
 /path/.ivy2/shared/org.kitesdk/kite-morphlines-saxon/0.11.0/jars/kite-morphlines-saxon.jar
  [ivy:retrieve]   public: tried
  [ivy:retrieve]
 http://repo1.maven.org/maven2/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.pom
  [ivy:retrieve]-- artifact
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
  [ivy:retrieve]
 http://repo1.maven.org/maven2/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.jar
  [ivy:retrieve]   cloudera: tried
  [ivy:retrieve]
 https://repository.cloudera.com/artifactory/repo/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.pom
  [ivy:retrieve]-- artifact
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
  [ivy:retrieve]
 https://repository.cloudera.com/artifactory/repo/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.jar
  [ivy:retrieve]   releases.cloudera.com: tried
  [ivy:retrieve]
 https://repository.cloudera.com/content/repositories/releases/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.pom
  [ivy:retrieve]-- artifact
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
  [ivy:retrieve]
 https://repository.cloudera.com/content/repositories/releases/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.jar
  [ivy:retrieve]   sonatype-releases: tried
  [ivy:retrieve]
 http://oss.sonatype.org/content/repositories/releases/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.pom
  [ivy:retrieve]-- artifact
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
  [ivy:retrieve]
 http://oss.sonatype.org/content/repositories/releases/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.jar
  [ivy:retrieve]   maven.restlet.org: tried
  [ivy:retrieve]
 http://maven.restlet.org/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.pom
  [ivy:retrieve]-- artifact
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
  [ivy:retrieve]
 http://maven.restlet.org/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.jar
  [ivy:retrieve]   svnkit-releases: tried
  [ivy:retrieve]
 http://maven.tmatesoft.com/content/repositories/releases/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.pom
  [ivy:retrieve]-- artifact
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
  [ivy:retrieve]
 http://maven.tmatesoft.com/content/repositories/releases/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.jar
  [ivy:retrieve]   working-chinese-mirror: tried
  [ivy:retrieve]
 http://uk.maven.org/maven2/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.pom
  [ivy:retrieve]-- artifact
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
  [ivy:retrieve]
 http://uk.maven.org/maven2/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.jar
  [ivy:retrieve]  module not found:
 org.kitesdk#kite-morphlines-hadoop-sequencefile;0.11.0
  [ivy:retrieve]   local: tried
  [ivy:retrieve]
  
 /path/.ivy2/local/org.kitesdk/kite-morphlines-hadoop-sequencefile/0.11.0/ivys/ivy.xml
  

[jira] [Commented] (SOLR-5501) Ability to work with cold replicas

2014-03-11 Thread Manuel Lenormand (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930414#comment-13930414
 ] 

Manuel Lenormand commented on SOLR-5501:


Cold-replica nodes should not be permanent. They should be configurable 
according to performance / cloud architecture, i.e. on a per-replica basis.

I want to set hot/cold roles on existing replicas, but we could also support this 
setting on the ADDREPLICA call. Either way, I'd favor having MODIFYCOLLECTION 
support it. Do we want to explicitly configure all replicas as hot_replicas from 
now on, or will hot be implicit unless a replica is configured as a cold_replica?

If the hot/cold replica feature were integrated into SOLR-5132, the only change 
left for this Jira would be in HttpShardHandlerFactory.makeURLList:239, which 
currently shuffles the replicas instead of favoring the hot ones.
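
To illustrate the ordering change described above, a rough sketch in plain Java. 
How a replica gets marked hot or cold (the hotUrls set below) is exactly the open 
design question, so this is illustration only, not a proposed patch:

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

/** Sketch: put "hot" replica URLs first instead of shuffling all of them. */
public class PreferHotReplicas {
  public static List<String> order(List<String> replicaUrls, Set<String> hotUrls) {
    List<String> hot = new ArrayList<String>();
    List<String> cold = new ArrayList<String>();
    for (String url : replicaUrls) {
      (hotUrls.contains(url) ? hot : cold).add(url);
    }
    hot.addAll(cold); // cold replicas are only tried after the hot ones
    return hot;
  }
}
{code}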

 Ability to work with cold replicas
 --

 Key: SOLR-5501
 URL: https://issues.apache.org/jira/browse/SOLR-5501
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.5.1
Reporter: Manuel Lenormand
  Labels: performance
 Fix For: 4.7


 Following this conversation from the mailing list:
 http://lucene.472066.n3.nabble.com/Proposal-for-new-feature-cold-replicas-brainstorming-td4097501.html
 Should give the ability to use replicas mainly as backup cores and not for 
 handling high qps rate. 
 This way you would avoid using the caching ressources (solr and OS) used when 
 routing a query to a replica. 
 With many replicas it's harder hitting the solr cache (same query may hit 
 another replica) and having many replicas on the same instance would cause a 
 useless competition on the OS memory for caching.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5516) Forward information that trigger a merge to MergeScheduler

2014-03-11 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-5516:


Attachment: LUCENE-5516.patch

Here is an updated patch fixing the naming. Thanks for looking at it, Mike.

 Forward information that trigger a merge to MergeScheduler
 --

 Key: LUCENE-5516
 URL: https://issues.apache.org/jira/browse/LUCENE-5516
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Affects Versions: 4.7
Reporter: Simon Willnauer
Assignee: Simon Willnauer
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5516.patch, LUCENE-5516.patch


 Today we pass information about the merge trigger to the merge policy. Yet, 
 no matter whether the MP finds a merge or not, we call the MergeScheduler, 
 which runs & blocks even if we didn't find a merge. In some cases we don't even 
 want this to happen, but inside the MergeScheduler we have no way to opt out, 
 since we don't know what triggered the merge. We should forward the info we 
 have to the MergeScheduler as well.
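
(Editorial aside: a rough sketch of how a scheduler could use such forwarded 
information. The trigger-aware merge(...) signature below is an assumption based 
on this description, not necessarily the method added by the attached patch.)

{code}
import java.io.IOException;
import org.apache.lucene.index.ConcurrentMergeScheduler;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.MergeTrigger;

// Sketch only: assumes a merge(...) overload that carries the trigger and
// whether the MergePolicy actually found merges (an assumption, see above).
public class TriggerAwareMergeScheduler extends ConcurrentMergeScheduler {
  public void merge(IndexWriter writer, MergeTrigger trigger, boolean newMergesFound)
      throws IOException {
    if (!newMergesFound && trigger == MergeTrigger.SEGMENT_FLUSH) {
      return; // opt out: nothing to merge, so don't run/block on a flush-triggered call
    }
    super.merge(writer); // otherwise keep the existing behavior
  }
}
{code}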



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 1401 - Still Failing!

2014-03-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1401/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 
-XX:-UseSuperWord

All tests passed

Build Log:
[...truncated 16966 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/contrib/solr-map-reduce/test/temp/junit4-J0-20140311_142241_030.syserr
   [junit4]  JVM J0: stderr (verbatim) 
   [junit4] 2014-03-11 14:22:51.433 java[249:6403] Unable to load realm info 
from SCDynamicStore
   [junit4]  JVM J0: EOF 

[...truncated 11173 lines...]
check-licenses:
 [echo] License check under: 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr
 [licenses] MISSING sha1 checksum file for: 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/test-framework/lib/junit4-ant-2.1.1.jar
 [licenses] EXPECTED sha1 checksum file : 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/junit4-ant-2.1.1.jar.sha1

[...truncated 3 lines...]
BUILD FAILED
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/build.xml:467: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/build.xml:70: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build.xml:254: The 
following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/tools/custom-tasks.xml:62:
 License check failed. Check the logs.

Total time: 102 minutes 30 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops 
-XX:+UseConcMarkSweepGC -XX:-UseSuperWord
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-4.x-Linux (64bit/jdk1.7.0_51) - Build # 9649 - Failure!

2014-03-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/9649/
Java: 64bit/jdk1.7.0_51 -XX:-UseCompressedOops -XX:+UseG1GC -XX:-UseSuperWord

1 tests failed.
REGRESSION:  
org.apache.solr.client.solrj.impl.CloudSolrServerTest.testDistribSearch

Error Message:
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 
127.0.0.1:41861 within 45000 ms

Stack Trace:
org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: 
Could not connect to ZooKeeper 127.0.0.1:41861 within 45000 ms
at 
__randomizedtesting.SeedInfo.seed([D0B1731F5A5A8913:5157FD072D05E92F]:0)
at 
org.apache.solr.common.cloud.SolrZkClient.init(SolrZkClient.java:150)
at 
org.apache.solr.common.cloud.SolrZkClient.init(SolrZkClient.java:101)
at 
org.apache.solr.common.cloud.SolrZkClient.init(SolrZkClient.java:91)
at 
org.apache.solr.cloud.AbstractZkTestCase.buildZooKeeper(AbstractZkTestCase.java:89)
at 
org.apache.solr.cloud.AbstractZkTestCase.buildZooKeeper(AbstractZkTestCase.java:83)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.setUp(AbstractDistribZkTestBase.java:70)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.setUp(AbstractFullDistribZkTestBase.java:200)
at 
org.apache.solr.client.solrj.impl.CloudSolrServerTest.setUp(CloudSolrServerTest.java:78)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1617)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:860)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:876)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:359)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:783)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:443)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:835)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:771)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:782)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 

Re: ant idea on fresh checkout of 4.7

2014-03-11 Thread Dmitry Kan
Hi Steve!
Thanks for sharing the ivy2 org.kitesdk cache piece; it compiles fine (phew,
only a 5 minute 16 second build). Looking forward to the Cloudera maven
repo resolution.
Dmitry


On 11 March 2014 15:16, Steve Rowe sar...@gmail.com wrote:

 0.11.0 is the morphlines version specified for trunk, branch_4x, and the
 lucene_solr_4_7 branch, e.g. from trunk ivy-versions.properties:

 org.kitesdk.kite-morphlines.version = 0.11.0

 AFAICT, version 0.11.0 of those two artifacts are not available from the
 cloudera repositories - I can see 0.10.0, 0.10.0-* (several), and 0.12.0,
 but no 0.11.0:


 http://repository.cloudera.com/cloudera/libs-release-local/org/kitesdk/kite-morphlines-hadoop-sequencefile/
 

 http://repository.cloudera.com/cloudera/libs-release-local/org/kitesdk/kite-morphlines-saxon/
 


 https://repository.cloudera.com/cloudera/repo/org/kitesdk/kite-morphlines-hadoop-sequencefile/
 

 https://repository.cloudera.com/cloudera/repo/org/kitesdk/kite-morphlines-saxon/
 

 Mark Miller, do you know what's going on?

 I made a tarball of all the 0.11.0 files under my
 ~/.ivy2/cache/org.kitesdk/ directory and put them here:

 
 http://people.apache.org/~sarowe/solr-dependencies-org.kitesdk-0.11.0.tar.bz2
 

 Steve

 On Mar 11, 2014, at 6:17 AM, Grant Ingersoll gsing...@apache.org wrote:

  Hi,
 
  I did a fresh checkout of 4.7 from SVN and ran ant idea at the top level
 and I get [1].  I presume I am missing the CDH Ivy repo somewhere.  Any one
 have the bits that need to be added handy?
 
 
 
  [1]
  :: problems summary ::
  [ivy:retrieve]  WARNINGS
  [ivy:retrieve]module not found:
 org.kitesdk#kite-morphlines-saxon;0.11.0
  [ivy:retrieve] local: tried
  [ivy:retrieve]
  
 /pathUsers/grantingersoll/.ivy2/local/org.kitesdk/kite-morphlines-saxon/0.11.0/ivys/ivy.xml
  [ivy:retrieve]  -- artifact
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
  [ivy:retrieve]
  
 /path/.ivy2/local/org.kitesdk/kite-morphlines-saxon/0.11.0/jars/kite-morphlines-saxon.jar
  [ivy:retrieve] shared: tried
  [ivy:retrieve]
  /path/.ivy2/shared/org.kitesdk/kite-morphlines-saxon/0.11.0/ivys/ivy.xml
  [ivy:retrieve]  -- artifact
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
  [ivy:retrieve]
  
 /path/.ivy2/shared/org.kitesdk/kite-morphlines-saxon/0.11.0/jars/kite-morphlines-saxon.jar
  [ivy:retrieve] public: tried
  [ivy:retrieve]
 http://repo1.maven.org/maven2/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.pom
  [ivy:retrieve]  -- artifact
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
  [ivy:retrieve]
 http://repo1.maven.org/maven2/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.jar
  [ivy:retrieve] cloudera: tried
  [ivy:retrieve]
 https://repository.cloudera.com/artifactory/repo/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.pom
  [ivy:retrieve]  -- artifact
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
  [ivy:retrieve]
 https://repository.cloudera.com/artifactory/repo/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.jar
  [ivy:retrieve] releases.cloudera.com: tried
  [ivy:retrieve]
 https://repository.cloudera.com/content/repositories/releases/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.pom
  [ivy:retrieve]  -- artifact
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
  [ivy:retrieve]
 https://repository.cloudera.com/content/repositories/releases/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.jar
  [ivy:retrieve] sonatype-releases: tried
  [ivy:retrieve]
 http://oss.sonatype.org/content/repositories/releases/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.pom
  [ivy:retrieve]  -- artifact
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
  [ivy:retrieve]
 http://oss.sonatype.org/content/repositories/releases/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.jar
  [ivy:retrieve] maven.restlet.org: tried
  [ivy:retrieve]
 http://maven.restlet.org/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.pom
  [ivy:retrieve]  -- artifact
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
  [ivy:retrieve]
 http://maven.restlet.org/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.jar
  [ivy:retrieve] svnkit-releases: tried
  [ivy:retrieve]
 http://maven.tmatesoft.com/content/repositories/releases/org/kitesdk/kite-morphlines-saxon/0.11.0/kite-morphlines-saxon-0.11.0.pom
  [ivy:retrieve]  -- artifact
 org.kitesdk#kite-morphlines-saxon;0.11.0!kite-morphlines-saxon.jar:
  [ivy:retrieve]
 

[jira] [Commented] (LUCENE-5516) Forward information that trigger a merge to MergeScheduler

2014-03-11 Thread Shay Banon (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930457#comment-13930457
 ] 

Shay Banon commented on LUCENE-5516:


+1, this looks great! Exactly the info we would love to have to better control 
merges.

 Forward information that trigger a merge to MergeScheduler
 --

 Key: LUCENE-5516
 URL: https://issues.apache.org/jira/browse/LUCENE-5516
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Affects Versions: 4.7
Reporter: Simon Willnauer
Assignee: Simon Willnauer
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5516.patch, LUCENE-5516.patch


 Today we pass information about the merge trigger to the merge policy. Yet, 
 no matter whether the MP finds a merge or not, we call the MergeScheduler, which 
 runs & blocks even if we didn't find a merge. In some cases we don't even want 
 this to happen, but inside the MergeScheduler we have no way to opt out, 
 since we don't know what triggered the merge. We should forward the info we 
 have to the MergeScheduler as well.
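A minimal sketch of what this would allow a scheduler to do, assuming the trigger and a "new merges found" flag are forwarded to it (the class and signature below are illustrative, not the exact API in the attached patch):

{code:java}
import java.io.IOException;

import org.apache.lucene.index.ConcurrentMergeScheduler;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.MergeTrigger;

// Illustrative only: a scheduler that opts out when it was invoked for a
// segment flush and the merge policy selected nothing to merge.
public class TriggerAwareMergeScheduler extends ConcurrentMergeScheduler {
  @Override
  public void merge(IndexWriter writer, MergeTrigger trigger, boolean newMergesFound)
      throws IOException {
    if (!newMergesFound && trigger == MergeTrigger.SEGMENT_FLUSH) {
      return; // nothing was selected, no need to run or block here
    }
    super.merge(writer, trigger, newMergesFound);
  }
}
{code}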



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 1401 - Still Failing!

2014-03-11 Thread Robert Muir
I committed a fix.

On Tue, Mar 11, 2014 at 11:05 AM, Policeman Jenkins Server
jenk...@thetaphi.de wrote:
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1401/
 Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 
 -XX:-UseSuperWord

 All tests passed

 Build Log:
 [...truncated 16966 lines...]
[junit4] JVM J0: stderr was not empty, see: 
 /Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/contrib/solr-map-reduce/test/temp/junit4-J0-20140311_142241_030.syserr
[junit4]  JVM J0: stderr (verbatim) 
[junit4] 2014-03-11 14:22:51.433 java[249:6403] Unable to load realm info 
 from SCDynamicStore
[junit4]  JVM J0: EOF 

 [...truncated 11173 lines...]
 check-licenses:
  [echo] License check under: 
 /Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr
  [licenses] MISSING sha1 checksum file for: 
 /Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/test-framework/lib/junit4-ant-2.1.1.jar
  [licenses] EXPECTED sha1 checksum file : 
 /Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/junit4-ant-2.1.1.jar.sha1

 [...truncated 3 lines...]
 BUILD FAILED
 /Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/build.xml:467: The 
 following error occurred while executing this line:
 /Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/build.xml:70: The following 
 error occurred while executing this line:
 /Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build.xml:254: The 
 following error occurred while executing this line:
 /Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/tools/custom-tasks.xml:62:
  License check failed. Check the logs.

 Total time: 102 minutes 30 seconds
 Build step 'Invoke Ant' marked build as failure
 Description set: Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops 
 -XX:+UseConcMarkSweepGC -XX:-UseSuperWord
 Archiving artifacts
 Recording test results
 Email was triggered for: Failure
 Sending email for trigger: Failure




 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: svn commit: r1576375 - in /lucene/dev/trunk/solr/licenses: junit4-ant-2.0.13.jar.sha1 junit4-ant-2.1.1.jar.sha1 randomizedtesting-runner-2.0.13.jar.sha1 randomizedtesting-runner-2.1.1.jar.sha1

2014-03-11 Thread Dawid Weiss
Damn, always forget about these. Thanks Robert.
On Mar 11, 2014 4:11 PM, rm...@apache.org wrote:

 Author: rmuir
 Date: Tue Mar 11 15:10:39 2014
 New Revision: 1576375

 URL: http://svn.apache.org/r1576375
 Log:
 fix jar checksums

 Added:
 lucene/dev/trunk/solr/licenses/junit4-ant-2.1.1.jar.sha1   (with props)
 lucene/dev/trunk/solr/licenses/randomizedtesting-runner-2.1.1.jar.sha1
   (with props)
 Removed:
 lucene/dev/trunk/solr/licenses/junit4-ant-2.0.13.jar.sha1
 lucene/dev/trunk/solr/licenses/randomizedtesting-runner-2.0.13.jar.sha1

 Added: lucene/dev/trunk/solr/licenses/junit4-ant-2.1.1.jar.sha1
 URL:
 http://svn.apache.org/viewvc/lucene/dev/trunk/solr/licenses/junit4-ant-2.1.1.jar.sha1?rev=1576375&view=auto

 ==
 --- lucene/dev/trunk/solr/licenses/junit4-ant-2.1.1.jar.sha1 (added)
 +++ lucene/dev/trunk/solr/licenses/junit4-ant-2.1.1.jar.sha1 Tue Mar 11
 15:10:39 2014
 @@ -0,0 +1 @@
 +a8a7371e11a8b3a4a3eeea81ad3cedafe3e3550e

 Added:
 lucene/dev/trunk/solr/licenses/randomizedtesting-runner-2.1.1.jar.sha1
 URL:
 http://svn.apache.org/viewvc/lucene/dev/trunk/solr/licenses/randomizedtesting-runner-2.1.1.jar.sha1?rev=1576375&view=auto

 ==
 --- lucene/dev/trunk/solr/licenses/randomizedtesting-runner-2.1.1.jar.sha1
 (added)
 +++ lucene/dev/trunk/solr/licenses/randomizedtesting-runner-2.1.1.jar.sha1
 Tue Mar 11 15:10:39 2014
 @@ -0,0 +1 @@
 +5908c4e714dab40ccc892993a21537c7c0d6210c





[jira] [Commented] (LUCENE-5517) stricter parsing for hunspell parseFlag()

2014-03-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930486#comment-13930486
 ] 

ASF subversion and git services commented on LUCENE-5517:
-

Commit 1576389 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1576389 ]

LUCENE-5517: stricter parsing for hunspell parseFlag

 stricter parsing for hunspell parseFlag()
 -

 Key: LUCENE-5517
 URL: https://issues.apache.org/jira/browse/LUCENE-5517
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/analysis
Reporter: Robert Muir
 Attachments: LUCENE-5517.patch


 I was trying to debug why a hunspell dictionary (an updated version fixes the 
 bug!) used so much RAM, and the reason is the dictionary was buggy and didn't 
 have FLAG NUM (so each digit was treated as its own flag, leading to chaos).
 In many situations in the hunspell file (e.g. an affix rule), the flag should 
 only be a single one. But today we don't detect this, we just take the first 
 one.
 We should throw an exception here: in most cases hunspell itself does this 
 for the impacted dictionaries. In these cases the dictionary is buggy, and in 
 some cases you do in fact get an error from the hunspell command line. We should 
 throw an exception instead of emitting chaos...
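Purely as an illustration of the stricter behaviour proposed here (this is not the attached patch), a parse helper for fields that must carry exactly one flag might look like:

{code:java}
// Illustrative sketch only: reject multi-character flag fields instead of
// silently taking the first character, so broken dictionaries fail fast at load time.
final class StrictFlagParsing {
  static char parseSingleFlag(String rawFlags, int lineNumber) {
    if (rawFlags.length() != 1) {
      throw new IllegalArgumentException(
          "expected exactly one flag on line " + lineNumber
          + " but got '" + rawFlags + "' (is the dictionary missing a FLAG declaration?)");
    }
    return rawFlags.charAt(0);
  }
}
{code}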



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5517) stricter parsing for hunspell parseFlag()

2014-03-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930489#comment-13930489
 ] 

ASF subversion and git services commented on LUCENE-5517:
-

Commit 1576394 from [~rcmuir] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1576394 ]

LUCENE-5517: stricter parsing for hunspell parseFlag

 stricter parsing for hunspell parseFlag()
 -

 Key: LUCENE-5517
 URL: https://issues.apache.org/jira/browse/LUCENE-5517
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/analysis
Reporter: Robert Muir
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5517.patch


 I was trying to debug why a hunspell dictionary (an updated version fixes the 
 bug!) used so much RAM, and the reason is the dictionary was buggy and didn't 
 have FLAG NUM (so each digit was treated as its own flag, leading to chaos).
 In many situations in the hunspell file (e.g. an affix rule), the flag should 
 only be a single one. But today we don't detect this, we just take the first 
 one.
 We should throw an exception here: in most cases hunspell itself does this 
 for the impacted dictionaries. In these cases the dictionary is buggy, and in 
 some cases you do in fact get an error from the hunspell command line. We should 
 throw an exception instead of emitting chaos...



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5517) stricter parsing for hunspell parseFlag()

2014-03-11 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-5517.
-

   Resolution: Fixed
Fix Version/s: 5.0
   4.8

 stricter parsing for hunspell parseFlag()
 -

 Key: LUCENE-5517
 URL: https://issues.apache.org/jira/browse/LUCENE-5517
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/analysis
Reporter: Robert Muir
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5517.patch


 I was trying to debug why a hunspell dictionary (an updated version fixes the 
 bug!) used so much RAM, and the reason is the dictionary was buggy and didn't 
 have FLAG NUM (so each digit was treated as its own flag, leading to chaos).
 In many situations in the hunspell file (e.g. an affix rule), the flag should 
 only be a single one. But today we don't detect this, we just take the first 
 one.
 We should throw an exception here: in most cases hunspell itself does this 
 for the impacted dictionaries. In these cases the dictionary is buggy, and in 
 some cases you do in fact get an error from the hunspell command line. We should 
 throw an exception instead of emitting chaos...



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5501) Ability to work with cold replicas

2014-03-11 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930494#comment-13930494
 ] 

Noble Paul commented on SOLR-5501:
--

Yes. I would like it to be possible from ADDREPLICA as well as MODIFYCOLLECTION.

By default all replicas are hot. We do not wish to change the default behavior 
anyway.

 Ability to work with cold replicas
 --

 Key: SOLR-5501
 URL: https://issues.apache.org/jira/browse/SOLR-5501
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.5.1
Reporter: Manuel Lenormand
  Labels: performance
 Fix For: 4.7


 Following this conversation from the mailing list:
 http://lucene.472066.n3.nabble.com/Proposal-for-new-feature-cold-replicas-brainstorming-td4097501.html
 This should give the ability to use replicas mainly as backup cores and not for 
 handling a high qps rate. 
 This way you would avoid using the caching resources (Solr and OS) used when 
 routing a query to a replica. 
 With many replicas it's harder to hit the Solr cache (the same query may hit 
 another replica), and having many replicas on the same instance would cause 
 useless competition for the OS memory used for caching.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5837) Add missing equals implementation for SolrDocument, SolrInputDocument and SolrInputField.

2014-03-11 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930509#comment-13930509
 ] 

Mark Miller commented on SOLR-5837:
---

Exactly - you point out why they should not be used other than for tests.

We don't want code that works except when it screws you.

 Add missing equals implementation for SolrDocument, SolrInputDocument and 
 SolrInputField.
 -

 Key: SOLR-5837
 URL: https://issues.apache.org/jira/browse/SOLR-5837
 Project: Solr
  Issue Type: Improvement
Reporter: Varun Thacker
Assignee: Mark Miller
 Fix For: 4.8, 5.0

 Attachments: SOLR-5837.patch, SOLR-5837.patch


 While working on SOLR-5265 I tried comparing objects of SolrDocument, 
 SolrInputDocument and SolrInputField. These classes did not override the 
 equals implementation. 
 This issue will add overridden equals and hashCode methods to the 3 classes.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5848) Morphlines is not resolving

2014-03-11 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930526#comment-13930526
 ] 

Mark Miller commented on SOLR-5848:
---

Hey guys, I'm getting this looked at.

 Morphlines is not resolving
 ---

 Key: SOLR-5848
 URL: https://issues.apache.org/jira/browse/SOLR-5848
 Project: Solr
  Issue Type: Bug
Reporter: Dawid Weiss
Assignee: Mark Miller
Priority: Critical

 This version of morphlines does not resolve for me and Grant.
 {code}
 ::
 ::  UNRESOLVED DEPENDENCIES ::
 ::
 :: org.kitesdk#kite-morphlines-saxon;0.11.0: not found
 :: org.kitesdk#kite-morphlines-hadoop-sequencefile;0.11.0: not found
 {code}
 Has this been deleted from Cloudera's repositories or something? This would 
 be pretty bad -- maven repos should be immutable...



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-4.x-Java6 - Build # 2334 - Failure

2014-03-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java6/2334/

All tests passed

Build Log:
[...truncated 27817 lines...]
check-licenses:
 [echo] License check under: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java6/solr
 [licenses] MISSING sha1 checksum file for: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java6/solr/test-framework/lib/junit4-ant-2.1.1.jar
 [licenses] EXPECTED sha1 checksum file : 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java6/solr/licenses/junit4-ant-2.1.1.jar.sha1

[...truncated 3 lines...]
BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java6/build.xml:473:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java6/build.xml:70:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java6/solr/build.xml:254:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java6/lucene/tools/custom-tasks.xml:62:
 License check failed. Check the logs.

Total time: 97 minutes 55 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

Re: [VOTE] Move to Java 7 in Lucene/Solr 4.8, use Java 8 in trunk (once officially released)

2014-03-11 Thread Mike Murphy
On Tue, Mar 11, 2014 at 6:11 AM, Grant Ingersoll gsing...@apache.org wrote:

 On Mar 8, 2014, at 11:17 AM, Uwe Schindler u...@thetaphi.de wrote:

 [.] Move Lucene/Solr 4.8 (means branch_4x) to Java 7 and backport all Java 
 7-related issues (FileChannel improvements, diamond operator,...).


 -0 -- Seems a little odd that we would force an upgrade on a minor version, 
 which is not usually seen as best practice in development.

I agree.  I also do not see it making a difference to potential developers.
What are the benefits to the project?  A developer will not make their
decision to get involved in Lucene/Solr based on branch4x being Java 6
vs Java 7.
If it causes some users to not upgrade, that's also a bad thing for the project.

-Mike

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5848) Morphlines is not resolving

2014-03-11 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930584#comment-13930584
 ] 

Mark Miller commented on SOLR-5848:
---

This should be resolved shortly.

Also, FYI, there is ongoing work to push Kite to Maven Central.

 Morphlines is not resolving
 ---

 Key: SOLR-5848
 URL: https://issues.apache.org/jira/browse/SOLR-5848
 Project: Solr
  Issue Type: Bug
Reporter: Dawid Weiss
Assignee: Mark Miller
Priority: Critical

 This version of morphlines does not resolve for me and Grant.
 {code}
 ::
 ::  UNRESOLVED DEPENDENCIES ::
 ::
 :: org.kitesdk#kite-morphlines-saxon;0.11.0: not found
 :: org.kitesdk#kite-morphlines-hadoop-sequencefile;0.11.0: not found
 {code}
 Has this been deleted from Cloudera's repositories or something? This would 
 be pretty bad -- maven repos should be immutable...



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5848) Morphlines is not resolving

2014-03-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930592#comment-13930592
 ] 

ASF subversion and git services commented on SOLR-5848:
---

Commit 1576429 from [~steve_rowe] in branch 'dev/trunk'
[ https://svn.apache.org/r1576429 ]

SOLR-5848: remove reference to temporary morphlines 0.11.0 download site from 
lucene/ivy-settings.xml - 'ant resolve' now works against the cloudera repos

 Morphlines is not resolving
 ---

 Key: SOLR-5848
 URL: https://issues.apache.org/jira/browse/SOLR-5848
 Project: Solr
  Issue Type: Bug
Reporter: Dawid Weiss
Assignee: Mark Miller
Priority: Critical

 This version of morphlines does not resolve for me and Grant.
 {code}
 ::
 ::  UNRESOLVED DEPENDENCIES ::
 ::
 :: org.kitesdk#kite-morphlines-saxon;0.11.0: not found
 :: org.kitesdk#kite-morphlines-hadoop-sequencefile;0.11.0: not found
 {code}
 Has this been deleted from Cloudera's repositories or something? This would 
 be pretty bad -- maven repos should be immutable...



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5848) Morphlines is not resolving

2014-03-11 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930596#comment-13930596
 ] 

Steve Rowe commented on SOLR-5848:
--

bq. This should be resolved shortly.

Thanks, Mark, I successfully ran {{ant resolve}} after removing the 
{{org.kitesdk/}} dir from {{~/.ivy2/cache/}} without the temporary 
people.apache.org repository, so I reverted my commit enabling the temporary 
repository.

 Morphlines is not resolving
 ---

 Key: SOLR-5848
 URL: https://issues.apache.org/jira/browse/SOLR-5848
 Project: Solr
  Issue Type: Bug
Reporter: Dawid Weiss
Assignee: Mark Miller
Priority: Critical

 This version of morphlines does not resolve for me and Grant.
 {code}
 ::
 ::  UNRESOLVED DEPENDENCIES ::
 ::
 :: org.kitesdk#kite-morphlines-saxon;0.11.0: not found
 :: org.kitesdk#kite-morphlines-hadoop-sequencefile;0.11.0: not found
 {code}
 Has this been deleted from Cloudera's repositories or something? This would 
 be pretty bad -- maven repos should be immutable...



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5848) Morphlines is not resolving

2014-03-11 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930599#comment-13930599
 ] 

Steve Rowe commented on SOLR-5848:
--

bq. 
http://people.apache.org/~sarowe/solr-dependencies-org.kitesdk-0.11.0.tar.bz2
bq. http://people.apache.org/~sarowe/.m2repo/ 

I'll leave those up for another 24 hours or so, then remove them.

 Morphlines is not resolving
 ---

 Key: SOLR-5848
 URL: https://issues.apache.org/jira/browse/SOLR-5848
 Project: Solr
  Issue Type: Bug
Reporter: Dawid Weiss
Assignee: Mark Miller
Priority: Critical

 This version of morphlines does not resolve for me and Grant.
 {code}
 ::
 ::  UNRESOLVED DEPENDENCIES ::
 ::
 :: org.kitesdk#kite-morphlines-saxon;0.11.0: not found
 :: org.kitesdk#kite-morphlines-hadoop-sequencefile;0.11.0: not found
 {code}
 Has this been deleted from Cloudera's repositories or something? This would 
 be pretty bad -- maven repos should be immutable...



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5419) Solr Admin UI Query Result Does Nothing at Error

2014-03-11 Thread Furkan KAMACI (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930624#comment-13930624
 ] 

Furkan KAMACI commented on SOLR-5419:
-

I've changed Fix Version/s and modified Affects Version/s.

 Solr Admin UI Query Result Does Nothing at Error
 

 Key: SOLR-5419
 URL: https://issues.apache.org/jira/browse/SOLR-5419
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.5.1, 4.6, 4.6.1, 4.7
Reporter: Furkan KAMACI
Priority: Minor
 Fix For: 4.8

 Attachments: SOLR-5419.patch


 When you make a query in Solr via the Solr Admin Page and an error occurs, the page 
 writes Loading... and does nothing. 
 I.e. if you enter an invalid request handler on the Query page, even though the response is 
 404 Not Found, Loading... is still shown.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5419) Solr Admin UI Query Result Does Nothing at Error

2014-03-11 Thread Furkan KAMACI (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Furkan KAMACI updated SOLR-5419:


Fix Version/s: (was: 4.7)
   4.8

 Solr Admin UI Query Result Does Nothing at Error
 

 Key: SOLR-5419
 URL: https://issues.apache.org/jira/browse/SOLR-5419
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.5.1, 4.6, 4.6.1, 4.7
Reporter: Furkan KAMACI
Priority: Minor
 Fix For: 4.8

 Attachments: SOLR-5419.patch


 When you make a query in Solr via the Solr Admin Page and an error occurs, the page 
 writes Loading... and does nothing. 
 I.e. if you enter an invalid request handler on the Query page, even though the response is 
 404 Not Found, Loading... is still shown.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5419) Solr Admin UI Query Result Does Nothing at Error

2014-03-11 Thread Furkan KAMACI (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Furkan KAMACI updated SOLR-5419:


Affects Version/s: 4.7
   4.6
   4.6.1

 Solr Admin UI Query Result Does Nothing at Error
 

 Key: SOLR-5419
 URL: https://issues.apache.org/jira/browse/SOLR-5419
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.5.1, 4.6, 4.6.1, 4.7
Reporter: Furkan KAMACI
Priority: Minor
 Fix For: 4.8

 Attachments: SOLR-5419.patch


 When you make a query in Solr via the Solr Admin Page and an error occurs, the page 
 writes Loading... and does nothing. 
 I.e. if you enter an invalid request handler on the Query page, even though the response is 
 404 Not Found, Loading... is still shown.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Solr4.7 DataImport 500 Error please help

2014-03-11 Thread Pradeep Pujari
It looks to me like solr_home is not properly defined. 



 From: steben 513441...@qq.com
To: dev@lucene.apache.org 
Sent: Tuesday, March 11, 2014 2:34 AM
Subject: Solr4.7 DataImport  500 Error please help
 

HTTP Status 500 - {msg=SolrCore 'collection1' is not available due to init
failure: severeErrors,trace=org.apache.solr.common.SolrException: SolrCore
'collection1' is not available due to init failure: severeErrors at
org.apache.solr.core.CoreContainer.getCore(CoreContainer.java:827) at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:317)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:217)
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
at
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:225)
at
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:169)
at
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168)
at
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:927)
at
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
at
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
at
org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:999)
at
org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:565)
at
org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:307)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at
java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at
java.lang.Thread.run(Unknown Source) Caused by:
org.apache.solr.common.SolrException: severeErrors at
org.apache.solr.core.SolrCore.<init>(SolrCore.java:844) at
org.apache.solr.core.SolrCore.<init>(SolrCore.java:630) at
org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:562)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:597) at
org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:258) at
org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:250) at
java.util.concurrent.FutureTask.run(Unknown Source) at
java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) at
java.util.concurrent.FutureTask.run(Unknown Source) ... 3 more Caused by:
java.lang.NoSuchFieldError: severeErrors at
org.apache.solr.handler.dataimport.DataImportHandler.inform(DataImportHandler.java:121)
at
org.apache.solr.core.SolrResourceLoader.inform(SolrResourceLoader.java:631)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:835) ... 11 more
,code=500}


[jira] [Commented] (SOLR-5032) Implement tool and/or API for moving a replica to a specific node

2014-03-11 Thread Furkan KAMACI (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930724#comment-13930724
 ] 

Furkan KAMACI commented on SOLR-5032:
-

[~otis] Could you explain the workflow here? I can work on this issue.

 Implement tool and/or API for moving a replica to a specific node
 -

 Key: SOLR-5032
 URL: https://issues.apache.org/jira/browse/SOLR-5032
 Project: Solr
  Issue Type: New Feature
Reporter: Otis Gospodnetic
Priority: Minor

 See http://search-lucene.com/m/Sri8gFljGw



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5848) Morphlines is not resolving

2014-03-11 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930731#comment-13930731
 ] 

Mark Miller commented on SOLR-5848:
---

Thanks Steve.

Here is the CDK issue around moving Kite to Maven Central if anyone is 
interested in following:

https://issues.cloudera.org/browse/CDK-281

 Morphlines is not resolving
 ---

 Key: SOLR-5848
 URL: https://issues.apache.org/jira/browse/SOLR-5848
 Project: Solr
  Issue Type: Bug
Reporter: Dawid Weiss
Assignee: Mark Miller
Priority: Critical
 Fix For: 4.8, 5.0


 This version of morphlines does not resolve for me and Grant.
 {code}
 ::
 ::  UNRESOLVED DEPENDENCIES ::
 ::
 :: org.kitesdk#kite-morphlines-saxon;0.11.0: not found
 :: org.kitesdk#kite-morphlines-hadoop-sequencefile;0.11.0: not found
 {code}
 Has this been deleted from Cloudera's repositories or something? This would 
 be pretty bad -- maven repos should be immutable...



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4904) Send internal doc ids and index version in distributed faceting to make queries more compact

2014-03-11 Thread Furkan KAMACI (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930732#comment-13930732
 ] 

Furkan KAMACI commented on SOLR-4904:
-

[~dmitry_key] Is this issue still valid? I can work on this issue.

 Send internal doc ids and index version in distributed faceting to make 
 queries more compact
 

 Key: SOLR-4904
 URL: https://issues.apache.org/jira/browse/SOLR-4904
 Project: Solr
  Issue Type: Improvement
  Components: search
Affects Versions: 3.4, 4.3
Reporter: Dmitry Kan

 This was suggested by [~ab] at bbuzz conf 2013. It makes a lot of sense and 
 works nicely with fixing the root cause of issue SOLR-4903.
 Basically, QueryComponent could send internal Lucene ids along with the index 
 version number so that in subsequent queries to other Solr components, like 
 FacetComponent, the internal ids would be sent. The index version is required 
 to ensure we deal with the same index.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-5848) Morphlines is not resolving

2014-03-11 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-5848.
---

   Resolution: Fixed
Fix Version/s: 5.0
   4.8

 Morphlines is not resolving
 ---

 Key: SOLR-5848
 URL: https://issues.apache.org/jira/browse/SOLR-5848
 Project: Solr
  Issue Type: Bug
Reporter: Dawid Weiss
Assignee: Mark Miller
Priority: Critical
 Fix For: 4.8, 5.0


 This version of morphlines does not resolve for me and Grant.
 {code}
 ::
 ::  UNRESOLVED DEPENDENCIES ::
 ::
 :: org.kitesdk#kite-morphlines-saxon;0.11.0: not found
 :: org.kitesdk#kite-morphlines-hadoop-sequencefile;0.11.0: not found
 {code}
 Has this been deleted from Cloudera's repositories or something? This would 
 be pretty bad -- maven repos should be immutable...



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5850) Race condition in ConcurrentUpdateSolrServer

2014-03-11 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930751#comment-13930751
 ] 

Mark Miller commented on SOLR-5850:
---

Interesting - have not seen this before. I'll try and take a look before long.

 Race condition in ConcurrentUpdateSolrServer
 

 Key: SOLR-5850
 URL: https://issues.apache.org/jira/browse/SOLR-5850
 Project: Solr
  Issue Type: Bug
  Components: clients - java, search, SolrCloud, update
Affects Versions: 4.6
Reporter: Devansh Dhutia
Priority: Critical
  Labels: 500, cloud, error, update

 Possibly related to SOLR-2308, we are seeing a Queue full error message when 
 issuing writes to Solr Cloud.
 Each update has 200 documents, and a commit is issued after 2000 documents 
 have been added. 
 The writes are spread out to all the servers in the cloud (2 in this case), 
 and the following is the stack trace from Solr: 
 {code:xml}
 <?xml version="1.0" encoding="UTF-8"?>
 <response>
 <lst name="responseHeader"><int name="status">500</int><int name="QTime">101</int></lst>
 <lst name="error"><str name="msg">Queue full</str><str name="trace">java.lang.IllegalStateException: Queue full
 at java.util.AbstractQueue.add(Unknown Source)
 at 
 org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner$1.writeTo(ConcurrentUpdateSolrServer.java:181)
 at 
 org.apache.http.entity.EntityTemplate.writeTo(EntityTemplate.java:72)
 at 
 org.apache.http.entity.HttpEntityWrapper.writeTo(HttpEntityWrapper.java:98)
 at 
 org.apache.http.impl.client.EntityEnclosingRequestWrapper$EntityWrapper.writeTo(EntityEnclosingRequestWrapper.java:108)
 at 
 org.apache.http.impl.entity.EntitySerializer.serialize(EntitySerializer.java:122)
 at 
 org.apache.http.impl.AbstractHttpClientConnection.sendRequestEntity(AbstractHttpClientConnection.java:271)
 at 
 org.apache.http.impl.conn.ManagedClientConnectionImpl.sendRequestEntity(ManagedClientConnectionImpl.java:197)
 at 
 org.apache.http.protocol.HttpRequestExecutor.doSendRequest(HttpRequestExecutor.java:257)
 at 
 org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
 at 
 org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:715)
 at 
 org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:520)
 at 
 org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
 at 
 org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
 at 
 org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
 at 
 org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner.run(ConcurrentUpdateSolrServer.java:232)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)
 </str><int name="code">500</int></lst>
 </response>
 {code}
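For context, a minimal sketch of the kind of client setup described above (the URL, queue size, thread count and ids are made up for the example, not the reporter's values); the bounded internal queue sized by the queueSize constructor argument appears to be what throws the IllegalStateException in the trace:

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

import org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer;
import org.apache.solr.common.SolrInputDocument;

// Illustrative client setup only.
public class BulkIndexer {
  public static void main(String[] args) throws Exception {
    ConcurrentUpdateSolrServer server = new ConcurrentUpdateSolrServer(
        "http://localhost:8983/solr/collection1",
        10,  // queueSize: bounds the internal queue of pending update requests
        4);  // threadCount: runner threads draining that queue
    List<SolrInputDocument> batch = new ArrayList<SolrInputDocument>();
    for (int i = 0; i < 2000; i++) {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", UUID.randomUUID().toString());
      batch.add(doc);
      if (batch.size() == 200) {  // the report describes updates of 200 docs each
        server.add(batch);
        batch = new ArrayList<SolrInputDocument>();
      }
    }
    server.commit();              // and a commit after 2000 docs
    server.shutdown();
  }
}
{code}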



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (SOLR-5837) Add missing equals implementation for SolrDocument, SolrInputDocument and SolrInputField.

2014-03-11 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reopened SOLR-5837:
---


I'm going to reopen this. It still bugs me - I think perhaps we should do 
this with test code instead. It's what I was originally thinking - even the 
javadoc warnings don't make me feel warm and fuzzy about this. I was chatting 
with Robert about it this morning, and that confirmed my feeling that adding 
hashCode/equals to these classes just for tests is the wrong move.
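A minimal sketch of the test-code alternative, i.e. comparing documents field by field inside the tests rather than giving the classes an equals() of their own (names below are illustrative and not part of any patch on this issue):

{code:java}
import java.util.Collection;

import org.apache.solr.common.SolrInputDocument;
import org.apache.solr.common.SolrInputField;

// Illustrative test helper only.
final class TestDocAssertions {
  static boolean sameFields(SolrInputDocument expected, SolrInputDocument actual) {
    Collection<String> names = expected.getFieldNames();
    if (names.size() != actual.getFieldNames().size()) {
      return false;
    }
    for (String name : names) {
      SolrInputField e = expected.getField(name);
      SolrInputField a = actual.getField(name);
      if (a == null || !e.getValues().equals(a.getValues())) {
        return false;
      }
    }
    return true;
  }
}
{code}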

 Add missing equals implementation for SolrDocument, SolrInputDocument and 
 SolrInputField.
 -

 Key: SOLR-5837
 URL: https://issues.apache.org/jira/browse/SOLR-5837
 Project: Solr
  Issue Type: Improvement
Reporter: Varun Thacker
Assignee: Mark Miller
 Fix For: 4.8, 5.0

 Attachments: SOLR-5837.patch, SOLR-5837.patch


 While working on SOLR-5265 I tried comparing objects of SolrDocument, 
 SolrInputDocument and SolrInputField. These classes did not override the 
 equals implementation. 
 This issue will add overridden equals and hashCode methods to the 3 classes.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


