[jira] [Commented] (SOLR-7730) speed-up faceting on doc values fields

2015-12-15 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15058463#comment-15058463
 ] 

Steve Rowe commented on SOLR-7730:
--

bq. attaching SOLR-7730-changes.patch, which moves it from 5.3 to 5.4 Optimizations. 
Steve Rowe, should I commit it to trunk and 5x?

+1, LGTM.

In addition to trunk and 5x, I think you should also commit it to the 
lucene_solr_5_4 branch, in case there is a 5.4.1 release.

> speed-up faceting on doc values fields
> --
>
> Key: SOLR-7730
> URL: https://issues.apache.org/jira/browse/SOLR-7730
> Project: Solr
>  Issue Type: Improvement
>  Components: faceting
>Affects Versions: 5.2.1
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>  Labels: patch
> Fix For: 5.4
>
> Attachments: LUCENE-7730.patch, LUCENE-7730.patch, 
> SOLR-7730-changes.patch, SOLR-7730.patch
>
>
> Every time we count facets on DocValues fields in Solr over a many-segment 
> index, we see an unnecessary hotspot:
> {code}
> 
> at 
> org.apache.lucene.index.MultiFields.getMergedFieldInfos(MultiFields.java:248)
> at 
> org.apache.lucene.index.SlowCompositeReaderWrapper.getFieldInfos(SlowCompositeReaderWrapper.java:239)
> at 
> org.apache.lucene.index.SlowCompositeReaderWrapper.getSortedSetDocValues(SlowCompositeReaderWrapper.java:176)
> at 
> org.apache.solr.request.DocValuesFacets.getCounts(DocValuesFacets.java:72)
> at 
> org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:460) 
> {code}
> The reason is SlowCompositeReaderWrapper.getSortedSetDocValues() (line 136) and 
> SlowCompositeReaderWrapper.getSortedDocValues() (line 174): before returning 
> composite doc values, SCRW merges the per-segment field infos, which is 
> expensive, yet once the FieldInfo is merged it checks *only* the doc-values 
> type in it. That type check can be done far more cheaply on a per-segment 
> basis. This patch yields a performance gain for anyone counting DV facets in Solr.
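The per-segment idea can be illustrated with a small self-contained sketch (plain Java collections, not Lucene's actual API — class and method names here are invented for illustration): rather than merging every segment's field infos and then reading a single attribute from the merged result, ask each segment directly for the one field's doc-values type.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified model of the optimization: each segment exposes its own
// field -> doc-values-type map.
public class PerSegmentDvCheck {

    // Expensive path: merge every segment's field infos into one map,
    // then read a single attribute from the merged result.
    static String dvTypeViaMerge(List<Map<String, String>> segments, String field) {
        Map<String, String> merged = new HashMap<>();
        for (Map<String, String> seg : segments) {
            merged.putAll(seg); // O(total fields) work on every call
        }
        return merged.get(field);
    }

    // Cheap path: consult each segment directly for the one field we care about.
    static String dvTypePerSegment(List<Map<String, String>> segments, String field) {
        for (Map<String, String> seg : segments) {
            String type = seg.get(field);
            if (type != null) {
                return type; // first segment carrying the field answers the question
            }
        }
        return null;
    }

    public static void main(String[] args) {
        List<Map<String, String>> segments = List.of(
            Map.of("title", "SORTED"),
            Map.of("category", "SORTED_SET"),
            Map.of("price", "NUMERIC"));
        System.out.println(dvTypeViaMerge(segments, "category"));   // SORTED_SET
        System.out.println(dvTypePerSegment(segments, "category")); // SORTED_SET
    }
}
```

The merge path does work proportional to the total field count on every call regardless of which field is asked for; the per-segment path touches only one key per segment, which is essentially the saving the patch describes.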



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr git mirror will soon turn off

2015-12-15 Thread Mike Drob
3 is typically solved by adding a .gitignore or .gitkeep file in what would
be an empty directory, if the directory itself is important.


On Tue, Dec 15, 2015 at 12:21 PM, Dawid Weiss  wrote:

>
> Oh, just for completeness -- moving to git is not just about the version
> management, it's also:
>
> 1) all the scripts that currently do validations, etc.
> 2) what to do with svn:* properties
> 3) what to do with empty folders (not available in git).
>
> I don't volunteer to solve these :)
>
> Dawid
>
>
> On Tue, Dec 15, 2015 at 7:09 PM, Dawid Weiss 
> wrote:
>
>>
>> Ok, give me some time and I'll see what I can achieve. Now that I
>> actually wrote an SVN dump parser (validator and serializer) things are
>> under much better control...
>>
>> I'll try to achieve the following:
>>
>> 1) selectively drop unnecessary stuff from history (cms/, javadocs/, JARs
>> and perhaps other binaries),
>> 2) *preserve* the history of all core sources. So svn log IndexWriter has to
>> go all the way back to when Doug was young and pretty. Oops, he's
>> still pretty of course.
>> 3) provide a way to link git history with svn revisions. I would,
>> ideally, include an "imported from svn:rev XXX" note in the commit log message.
>> 4) annotate release tags and branches. I don't care much about interim
>> branches -- they are not important to me (please speak up if you think
>> otherwise).
>>
>> Dawid
>>
>> On Tue, Dec 15, 2015 at 7:03 PM, Robert Muir  wrote:
>>
>>> If Dawid is volunteering to sort out this mess, +1 to let him make it
>>> a move to git. I don't care if we disagree about JARs, I trust he will
>>> do a good job and that is more important.
>>>
>>> On Tue, Dec 15, 2015 at 12:44 PM, Dawid Weiss 
>>> wrote:
>>> >
>>> > It's not true that nobody is working on this. I have been working on
>>> the SVN
>>> > dump in the meantime. You would not believe how incredibly complex the
>>> > process of processing that (remote) dump is. Let me highlight a few key
>>> > issues:
>>> >
>>> > 1) There is no "one" Lucene SVN repository that can be transferred to
>>> git.
>>> > The history is a mess. Trunk, branches, tags -- all change paths at
>>> various
>>> > points in history. Entire projects are copied from *outside* the
>>> official
>>> > Lucene ASF path (when Solr, Nutch or Tika moved from the incubator, for
>>> > example).
>>> >
>>> > 2) The history of commits to Lucene's subpath of the SVN is ~50k
>>> commits.
>>> > ASF's commit history in which those 50k commits live is 1.8 *million*
>>> > commits. I think the git-svn sync crashes due to the sheer number of
>>> (empty)
>>> > commits in between actual changes.
>>> >
>>> > 3) There are a few commits that are gigantic. I mentioned Grant's 1.2G
>>> > patch, for example, but there are others (the second larger is
>>> 190megs, the
>>> > third is 136 megs).
>>> >
>>> > 4) The size of JARs is really not an issue. The entire SVN repo I
>>> mirrored
>>> > locally (including empty interim commits to cater for svn:mergeinfos)
>>> is 4G.
>>> > If you strip the stuff like javadocs and side projects (Nutch, Tika,
>>> Mahout)
>>> > then I bet the entire history can fit in 1G total. Of course stripping
>>> JARs
>>> > is also doable.
>>> >
>>> > 5) There is lots of junk at the main SVN path so you can't just
>>> version the
>>> > top-level folder. If you wanted to checkout /asf/lucene then the size
>>> of the
>>> > resulting folder is enormous -- I terminated the checkout after I
>>> reached
>>> > over 20 gigs. Well, technically you *could* do it, it'd preserve
>>> perfect
>>> > history, but I wouldn't want to git co a past version that checks out
>>> all
>>> > the tags, branches, etc. This has to be mapped in a sensible way.
>>> >
>>> > What I think is that all the above makes (straightforward) conversion
>>> to git
>>> > problematic. Especially moving paths are a problem -- how to mark tags/
>>> > branches, where the main line of development is, etc. This conversion
>>> would
>>> > have to be guided and hand-tuned to make sense. This effort would only
>>> pay
>>> > for itself if we move to git, otherwise I don't see the benefit. Paul's
>>> > script is fine for keeping short-term history.
>>> >
>>> > Dawid
>>> >
>>> > P.S. Either the SVN repo at Apache is broken or the SVN is broken,
>>> which
>>> > makes processing SVN history even more fun. This dump indicates Tika
>>> being
>>> > moved from the incubator to Lucene:
>>> >
>>> > svnrdump dump -r 712381 --incremental
>>> https://svn.apache.org/repos/asf/ >
>>> > out
>>> >
>>> > But when you dump just Lucene's subpath, the output is broken (last
>>> > changeset in the file is an invalid changeset, it carries no target):
>>> >
>>> > svnrdump dump -r 712381 --incremental
>>> > https://svn.apache.org/repos/asf/lucene > out
>>> >
>>> >
>>> >
>>> > On Tue, Dec 15, 2015 at 6:04 PM, Yonik Seeley 
>>> wrote:
>>> >>
>>> >> If we move to git, stripping 

Re: Lucene/Solr git mirror will soon turn off

2015-12-15 Thread Dawid Weiss
It's not true that nobody is working on this. I have been working on the
SVN dump in the meantime. You would not believe how incredibly complex
processing that (remote) dump is. Let me highlight a few key issues:

1) There is no "one" Lucene SVN repository that can be transferred to git.
The history is a mess. Trunk, branches, tags -- all change paths at various
points in history. Entire projects are copied from *outside* the official
Lucene ASF path (when Solr, Nutch or Tika moved from the incubator, for
example).

2) The history of commits to Lucene's subpath of the SVN is ~50k commits.
ASF's commit history in which those 50k commits live is 1.8 *million*
commits. I think the git-svn sync crashes due to the sheer number of
(empty) commits in between actual changes.

3) There are a few commits that are gigantic. I mentioned Grant's 1.2G
patch, for example, but there are others (the second largest is 190 MB, the
third is 136 MB).

4) The size of JARs is really not an issue. The entire SVN repo I mirrored
locally (including empty interim commits to cater for svn:mergeinfos) is
4G. If you strip the stuff like javadocs and side projects (Nutch, Tika,
Mahout) then I bet the entire history can fit in 1G total. Of course
stripping JARs is also doable.

5) There is lots of junk at the main SVN path so you can't just version the
top-level folder. If you wanted to checkout /asf/lucene then the size of
the resulting folder is enormous -- I terminated the checkout after I
reached over 20 gigs. Well, technically you *could* do it, it'd preserve
perfect history, but I wouldn't want to git co a past version that checks
out all the tags, branches, etc. This has to be mapped in a sensible way.

What I think is that all the above makes (straightforward) conversion to
git problematic. Especially moving paths are a problem -- how to mark tags/
branches, where the main line of development is, etc. This conversion would
have to be guided and hand-tuned to make sense. This effort would only pay
for itself if we move to git, otherwise I don't see the benefit. Paul's
script is fine for keeping short-term history.

Dawid

P.S. Either the SVN repo at Apache is broken or the SVN is broken, which
makes processing SVN history even more fun. This dump indicates Tika being
moved from the incubator to Lucene:

svnrdump dump -r 712381 --incremental https://svn.apache.org/repos/asf/ >
out

But when you dump just Lucene's subpath, the output is broken (last
changeset in the file is an invalid changeset, it carries no target):

svnrdump dump -r 712381 --incremental
https://svn.apache.org/repos/asf/lucene > out



On Tue, Dec 15, 2015 at 6:04 PM, Yonik Seeley  wrote:

> If we move to git, stripping out jars seems to be an independent decision?
> Can you even strip out jars and preserve history (i.e. not change
> hashes and invalidate everyone's forks/clones)?
> I did run across this:
>
> http://stackoverflow.com/questions/17470780/is-it-possible-to-slim-a-git-repository-without-rewriting-history
>
> -Yonik
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>
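Yonik's question has a direct answer rooted in git's design: object IDs are content-addressed SHA-1 digests, so stripping a jar blob rewrites every descendant commit ID and invalidates existing clones and forks. A small standalone sketch (plain Java, not a git tool) reproduces git's blob hashing to illustrate why:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Git object IDs are SHA-1 over "blob <size>\0<bytes>"; tree and commit
// objects hash those IDs in turn. Change one blob and every downstream
// commit hash changes with it.
public class GitBlobHash {

    static String blobId(byte[] content) throws Exception {
        byte[] header = ("blob " + content.length + "\0").getBytes(StandardCharsets.UTF_8);
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        sha1.update(header);
        sha1.update(content);
        StringBuilder hex = new StringBuilder();
        for (byte b : sha1.digest()) {
            hex.append(String.format("%02x", b)); // unsigned hex per byte
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        // Matches `echo "hello" | git hash-object --stdin`
        // -> ce013625030ba8dba906f756967f9e9ca394464a
        System.out.println(blobId("hello\n".getBytes(StandardCharsets.UTF_8)));
        // Any change to the content yields an entirely different ID.
        System.out.println(blobId("hello!\n".getBytes(StandardCharsets.UTF_8)));
    }
}
```

This is why the StackOverflow thread concludes that slimming a repo without rewriting history is essentially impossible: the hashes *are* the history.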


[jira] [Updated] (SOLR-8393) Component for Solr resource usage planning

2015-12-15 Thread Steve Molloy (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Molloy updated SOLR-8393:
---
Attachment: SOLR-8393.patch

Fix disappearing collections when using the collection param (callers should not 
be able to modify clusterState's getCollections result...)

> Component for Solr resource usage planning
> --
>
> Key: SOLR-8393
> URL: https://issues.apache.org/jira/browse/SOLR-8393
> Project: Solr
>  Issue Type: Improvement
>Reporter: Steve Molloy
> Attachments: SOLR-8393.patch, SOLR-8393.patch, SOLR-8393.patch, 
> SOLR-8393.patch
>
>
> One question that keeps coming back is: how much disk and RAM do I need to run 
> Solr? The most common response is that it highly depends on your data. While 
> true, that answer leaves users frustrated when trying to plan their deployments. 
> The idea I'm proposing is to create a new component that attempts to 
> extrapolate the resources needed in the future from the resources currently 
> used. Given a parameter for the target number of documents, current resource 
> figures are scaled by the ratio of target to current document count.
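The ratio-based extrapolation described above amounts to linear scaling. A minimal standalone sketch (illustrative names, not the component's actual API):

```java
// Simplified model of ratio-based resource planning: scale current disk
// and heap usage by targetDocs / currentDocs. Names are illustrative.
public class ResourceEstimate {

    static long extrapolate(long currentUsageBytes, long currentDocs, long targetDocs) {
        if (currentDocs <= 0) {
            throw new IllegalArgumentException("need a non-empty index to extrapolate");
        }
        // Linear scaling: assumes usage grows proportionally with doc count.
        return Math.round(currentUsageBytes * ((double) targetDocs / currentDocs));
    }

    public static void main(String[] args) {
        long diskBytes = 10L * 1024 * 1024 * 1024; // 10 GB index today
        long heapBytes = 2L * 1024 * 1024 * 1024;  // 2 GB heap today
        long currentDocs = 5_000_000;
        long targetDocs = 20_000_000;              // planning for 4x growth

        System.out.println("projected disk bytes: " + extrapolate(diskBytes, currentDocs, targetDocs));
        System.out.println("projected heap bytes: " + extrapolate(heapBytes, currentDocs, targetDocs));
    }
}
```

Linear scaling is only a rough first approximation (index structures such as the term dictionary do not grow strictly linearly with document count), which fits the issue's stated goal of planning guidance rather than exact prediction.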






Re: Lucene/Solr git mirror will soon turn off

2015-12-15 Thread Jack Krupansky
And if nobody steps up and "solves" the current technical issue, will that
simply accelerate the (desired) shift to using git as the main repo for
future Lucene/Solr development? Would there be any downside to that outcome?

Is there any formal Apache policy for new projects as to whether they can
use git exclusively? Any examples of Apache projects that moved from svn to
git?

+1 for moving to git (with full non-jar history) if, after all of this time
and hand-wringing, "all the King's horses and all the King's men couldn't
put git-svn back together again". I'd rather see Lucene/Solr committers
focused on new feature development than on doing Infra's job, and if
Infra can't do it easily, why not shift to a solution that has far less
downside and baggage and a brighter future?

-- Jack Krupansky

On Tue, Dec 15, 2015 at 11:32 AM, Mark Miller  wrote:

> Anyone willing to lead this discussion to some kind of better resolution?
> Did that whole back and forth help with any ideas on the best path forward?
> I know it's a complicated issue, git / svn, the light side, the dark side,
> but doesn't GitHub also depend on this mirroring? It's going to be super
> annoying when I can no longer pull from a relatively up to date git remote.
>
> Who has boiled down the correct path?
>
> - Mark
>
> On Wed, Dec 9, 2015 at 6:07 AM Dawid Weiss  wrote:
>
>> FYI.
>>
>> - All of Lucene's SVN, incremental deltas, uncompressed: 5.0G
>> - the above, tar.bz2: 1.2G
>>
>> Sadly, I didn't succeed at recreating a local SVN repo from those
>> incremental dumps. svnadmin load fails with a cryptic error related to
>> the fact that revision number of node-copy operations refer to
>> original SVN numbers and they're apparently renumbered on import.
>> svnadmin isn't smart enough to somehow keep a reference of those
>> original numbers and svndumpfilter can't work with incremental dump
>> files... A seemingly trivial task of splitting a repo on a clean
>> boundary seems incredibly hard with SVN...
>>
>> If anybody wishes to play with the dump files, here they are:
>> http://goo.gl/m6q3J8
>>
>> Dawid
>>
>> On Tue, Dec 8, 2015 at 10:49 PM, Upayavira  wrote:
>> > You can't avoid having the history in SVN. The ASF has one large repo,
>> and
>> > won't be deleting that repo, so the history will survive in perpetuity,
>> > regardless of what we do now.
>> >
>> > Upayavira
>> >
>> > On Tue, Dec 8, 2015, at 09:24 PM, Doug Turnbull wrote:
>> >
>> > It seems you'd want to preserve that history in a frozen/archived
>> Apache Svn
>> > repo for Lucene. Then make the new git repo slimmer before switching.
>> Folks
>> > that want very old versions or doing research can at least go through
>> the
>> > original SVN repo.
>> >
>> > On Tuesday, December 8, 2015, Dawid Weiss 
>> wrote:
>> >
>> > One more thing, perhaps of importance, the raw Lucene repo contains
>> > all the history of projects that then turned top-level (Nutch,
>> > Mahout). These could also be dropped (or ignored) when converting to
>> > git. If we agree JARs are not relevant, why should projects not
>> > directly related to Lucene/ Solr be?
>> >
>> > Dawid
>> >
>> > On Tue, Dec 8, 2015 at 10:05 PM, Dawid Weiss 
>> wrote:
>> >>> Don’t know how much we have of historic jars in our history.
>> >>
>> >> I actually do know. Or will know. In about ~10 hours. I wrote a script
>> >> that does the following:
>> >>
>> >> 1) git log all revisions touching
>> https://svn.apache.org/repos/asf/lucene
>> >> 2) grep revision numbers
>> >> 3) use svnrdump to get every single commit (revision) above, in
>> >> incremental mode.
>> >>
>> >> This will allow me to:
>> >>
>> >> 1) recreate only Lucene/ Solr SVN, locally.
>> >> 2) measure the size of SVN repo.
>> >> 3) measure the size of any conversion to git (even if it's one-by-one
>> >> checkout, then-sync with git).
>> >>
>> >> From what I see up until now size should not be an issue at all. Even
>> >> with all binary blobs so far the SVN incremental dumps measure ~3.7G
>> >> (and I'm about 75% done). There is one interesting super-large commit,
>> >> this one:
>> >>
>> >> svn log -r1240618 https://svn.apache.org/repos/asf/lucene
>> >>
>> 
>> >> r1240618 | gsingers | 2012-02-04 22:45:17 +0100 (Sat, 04 Feb 2012) | 1
>> >> line
>> >>
>> >> LUCENE-2748: bring in old Lucene docs
>> >>
>> >> This commit diff weighs... wait for it... 1.3G! I didn't check what
>> >> it actually was.
>> >>
>> >> Will keep you posted.
>> >>
>> >> D.
>> >
>> > -
>> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> > For additional commands, e-mail: dev-h...@lucene.apache.org
>> >
>> >
>> >
>> >
>> > --
>> > Doug Turnbull | Search Relevance Consultant | OpenSource Connections,
>> LLC |
>> > 240.476.9983
>> > 

Re: Lucene/Solr git mirror will soon turn off

2015-12-15 Thread Dawid Weiss
I know that, but I meant historical checkouts -- and if you add fake files
you're altering history :)

D.

On Tue, Dec 15, 2015 at 7:24 PM, Mike Drob  wrote:

> 3 is typically solved by adding a .gitignore or .gitkeep file in what
> would be an empty directory, if the directory itself is important.
>
>
> On Tue, Dec 15, 2015 at 12:21 PM, Dawid Weiss 
> wrote:
>
>>
>> Oh, just for completeness -- moving to git is not just about the version
>> management, it's also:
>>
>> 1) all the scripts that currently do validations, etc.
>> 2) what to do with svn:* properties
>> 3) what to do with empty folders (not available in git).
>>
>> I don't volunteer to solve these :)
>>
>> Dawid
>>
>>
>> On Tue, Dec 15, 2015 at 7:09 PM, Dawid Weiss 
>> wrote:
>>
>>>
>>> Ok, give me some time and I'll see what I can achieve. Now that I
>>> actually wrote an SVN dump parser (validator and serializer) things are
>>> under much better control...
>>>
>>> I'll try to achieve the following:
>>>
>>> 1) selectively drop unnecessary stuff from history (cms/, javadocs/,
>>> JARs and perhaps other binaries),
>>> 2) *preserve* the history of all core sources. So svn log IndexWriter has to
>>> go all the way back to when Doug was young and pretty. Oops, he's
>>> still pretty of course.
>>> 3) provide a way to link git history with svn revisions. I would,
>>> ideally, include an "imported from svn:rev XXX" note in the commit log message.
>>> 4) annotate release tags and branches. I don't care much about interim
>>> branches -- they are not important to me (please speak up if you think
>>> otherwise).
>>>
>>> Dawid
>>>
>>> On Tue, Dec 15, 2015 at 7:03 PM, Robert Muir  wrote:
>>>
 If Dawid is volunteering to sort out this mess, +1 to let him make it
 a move to git. I don't care if we disagree about JARs, I trust he will
 do a good job and that is more important.

 On Tue, Dec 15, 2015 at 12:44 PM, Dawid Weiss 
 wrote:
 >
 > It's not true that nobody is working on this. I have been working on
 the SVN
 > dump in the meantime. You would not believe how incredibly complex the
 > process of processing that (remote) dump is. Let me highlight a few
 key
 > issues:
 >
 > 1) There is no "one" Lucene SVN repository that can be transferred to
 git.
 > The history is a mess. Trunk, branches, tags -- all change paths at
 various
 > points in history. Entire projects are copied from *outside* the
 official
 > Lucene ASF path (when Solr, Nutch or Tika moved from the incubator,
 for
 > example).
 >
 > 2) The history of commits to Lucene's subpath of the SVN is ~50k
 commits.
 > ASF's commit history in which those 50k commits live is 1.8 *million*
 > commits. I think the git-svn sync crashes due to the sheer number of
 (empty)
 > commits in between actual changes.
 >
 > 3) There are a few commits that are gigantic. I mentioned Grant's 1.2G
 > patch, for example, but there are others (the second larger is
 190megs, the
 > third is 136 megs).
 >
 > 4) The size of JARs is really not an issue. The entire SVN repo I
 mirrored
 > locally (including empty interim commits to cater for svn:mergeinfos)
 is 4G.
 > If you strip the stuff like javadocs and side projects (Nutch, Tika,
 Mahout)
 > then I bet the entire history can fit in 1G total. Of course
 stripping JARs
 > is also doable.
 >
 > 5) There is lots of junk at the main SVN path so you can't just
 version the
 > top-level folder. If you wanted to checkout /asf/lucene then the size
 of the
 > resulting folder is enormous -- I terminated the checkout after I
 reached
 > over 20 gigs. Well, technically you *could* do it, it'd preserve
 perfect
 > history, but I wouldn't want to git co a past version that checks out
 all
 > the tags, branches, etc. This has to be mapped in a sensible way.
 >
 > What I think is that all the above makes (straightforward) conversion
 to git
 > problematic. Especially moving paths are a problem -- how to mark
 tags/
 > branches, where the main line of development is, etc. This conversion
 would
 > have to be guided and hand-tuned to make sense. This effort would
 only pay
 > for itself if we move to git, otherwise I don't see the benefit.
 Paul's
 > script is fine for keeping short-term history.
 >
 > Dawid
 >
 > P.S. Either the SVN repo at Apache is broken or the SVN is broken,
 which
 > makes processing SVN history even more fun. This dump indicates Tika
 being
 > moved from the incubator to Lucene:
 >
 > svnrdump dump -r 712381 --incremental
 https://svn.apache.org/repos/asf/ >
 > out
 >
 > But when you dump just Lucene's subpath, the output is broken (last

Re: Lucene/Solr git mirror will soon turn off

2015-12-15 Thread Doug Turnbull
I thought the general consensus at minimum was to investigate a git mirror
that stripped some artifacts out (jars etc) to lighten up the work of the
process. If at some point the project switched to git, such a mirror might
be a suitable git repo for the project with archived older versions in SVN.

I think probably what is lacking is a volunteer to figure it all out.

-Doug

On Tue, Dec 15, 2015 at 11:32 AM, Mark Miller  wrote:

> Anyone willing to lead this discussion to some kind of better resolution?
> Did that whole back and forth help with any ideas on the best path forward?
> I know it's a complicated issue, git / svn, the light side, the dark side,
> but doesn't GitHub also depend on this mirroring? It's going to be super
> annoying when I can no longer pull from a relatively up to date git remote.
>
> Who has boiled down the correct path?
>
> - Mark
>
> On Wed, Dec 9, 2015 at 6:07 AM Dawid Weiss  wrote:
>
>> FYI.
>>
>> - All of Lucene's SVN, incremental deltas, uncompressed: 5.0G
>> - the above, tar.bz2: 1.2G
>>
>> Sadly, I didn't succeed at recreating a local SVN repo from those
>> incremental dumps. svnadmin load fails with a cryptic error related to
>> the fact that revision number of node-copy operations refer to
>> original SVN numbers and they're apparently renumbered on import.
>> svnadmin isn't smart enough to somehow keep a reference of those
>> original numbers and svndumpfilter can't work with incremental dump
>> files... A seemingly trivial task of splitting a repo on a clean
>> boundary seems incredibly hard with SVN...
>>
>> If anybody wishes to play with the dump files, here they are:
>> http://goo.gl/m6q3J8
>>
>> Dawid
>>
>> On Tue, Dec 8, 2015 at 10:49 PM, Upayavira  wrote:
>> > You can't avoid having the history in SVN. The ASF has one large repo,
>> and
>> > won't be deleting that repo, so the history will survive in perpetuity,
>> > regardless of what we do now.
>> >
>> > Upayavira
>> >
>> > On Tue, Dec 8, 2015, at 09:24 PM, Doug Turnbull wrote:
>> >
>> > It seems you'd want to preserve that history in a frozen/archived
>> Apache Svn
>> > repo for Lucene. Then make the new git repo slimmer before switching.
>> Folks
>> > that want very old versions or doing research can at least go through
>> the
>> > original SVN repo.
>> >
>> > On Tuesday, December 8, 2015, Dawid Weiss 
>> wrote:
>> >
>> > One more thing, perhaps of importance, the raw Lucene repo contains
>> > all the history of projects that then turned top-level (Nutch,
>> > Mahout). These could also be dropped (or ignored) when converting to
>> > git. If we agree JARs are not relevant, why should projects not
>> > directly related to Lucene/ Solr be?
>> >
>> > Dawid
>> >
>> > On Tue, Dec 8, 2015 at 10:05 PM, Dawid Weiss 
>> wrote:
>> >>> Don’t know how much we have of historic jars in our history.
>> >>
>> >> I actually do know. Or will know. In about ~10 hours. I wrote a script
>> >> that does the following:
>> >>
>> >> 1) git log all revisions touching
>> https://svn.apache.org/repos/asf/lucene
>> >> 2) grep revision numbers
>> >> 3) use svnrdump to get every single commit (revision) above, in
>> >> incremental mode.
>> >>
>> >> This will allow me to:
>> >>
>> >> 1) recreate only Lucene/ Solr SVN, locally.
>> >> 2) measure the size of SVN repo.
>> >> 3) measure the size of any conversion to git (even if it's one-by-one
>> >> checkout, then-sync with git).
>> >>
>> >> From what I see up until now size should not be an issue at all. Even
>> >> with all binary blobs so far the SVN incremental dumps measure ~3.7G
>> >> (and I'm about 75% done). There is one interesting super-large commit,
>> >> this one:
>> >>
>> >> svn log -r1240618 https://svn.apache.org/repos/asf/lucene
>> >>
>> 
>> >> r1240618 | gsingers | 2012-02-04 22:45:17 +0100 (Sat, 04 Feb 2012) | 1
>> >> line
>> >>
>> >> LUCENE-2748: bring in old Lucene docs
>> >>
>> >> This commit diff weighs... wait for it... 1.3G! I didn't check what
>> >> it actually was.
>> >>
>> >> Will keep you posted.
>> >>
>> >> D.
>> >
>> > -
>> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> > For additional commands, e-mail: dev-h...@lucene.apache.org
>> >
>> >
>> >
>> >
>> > --
>> > Doug Turnbull | Search Relevance Consultant | OpenSource Connections,
>> LLC |
>> > 240.476.9983
>> > Author:Relevant Search
>> > This e-mail and all contents, including attachments, is considered to be
>> > Company Confidential unless explicitly stated otherwise, regardless of
>> > whether attachments are marked as such.
>> >
>> >
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>> --
> - 

Re: Lucene/Solr git mirror will soon turn off

2015-12-15 Thread Mark Miller
Anyone willing to lead this discussion to some kind of better resolution?
Did that whole back and forth help with any ideas on the best path forward?
I know it's a complicated issue, git / svn, the light side, the dark side,
but doesn't GitHub also depend on this mirroring? It's going to be super
annoying when I can no longer pull from a relatively up to date git remote.

Who has boiled down the correct path?

- Mark

On Wed, Dec 9, 2015 at 6:07 AM Dawid Weiss  wrote:

> FYI.
>
> - All of Lucene's SVN, incremental deltas, uncompressed: 5.0G
> - the above, tar.bz2: 1.2G
>
> Sadly, I didn't succeed at recreating a local SVN repo from those
> incremental dumps. svnadmin load fails with a cryptic error related to
> the fact that revision number of node-copy operations refer to
> original SVN numbers and they're apparently renumbered on import.
> svnadmin isn't smart enough to somehow keep a reference of those
> original numbers and svndumpfilter can't work with incremental dump
> files... A seemingly trivial task of splitting a repo on a clean
> boundary seems incredibly hard with SVN...
>
> If anybody wishes to play with the dump files, here they are:
> http://goo.gl/m6q3J8
>
> Dawid
>
> On Tue, Dec 8, 2015 at 10:49 PM, Upayavira  wrote:
> > You can't avoid having the history in SVN. The ASF has one large repo,
> and
> > won't be deleting that repo, so the history will survive in perpetuity,
> > regardless of what we do now.
> >
> > Upayavira
> >
> > On Tue, Dec 8, 2015, at 09:24 PM, Doug Turnbull wrote:
> >
> > It seems you'd want to preserve that history in a frozen/archived Apache
> Svn
> > repo for Lucene. Then make the new git repo slimmer before switching.
> Folks
> > that want very old versions or doing research can at least go through the
> > original SVN repo.
> >
> > On Tuesday, December 8, 2015, Dawid Weiss  wrote:
> >
> > One more thing, perhaps of importance, the raw Lucene repo contains
> > all the history of projects that then turned top-level (Nutch,
> > Mahout). These could also be dropped (or ignored) when converting to
> > git. If we agree JARs are not relevant, why should projects not
> > directly related to Lucene/ Solr be?
> >
> > Dawid
> >
> > On Tue, Dec 8, 2015 at 10:05 PM, Dawid Weiss 
> wrote:
> >>> Don’t know how much we have of historic jars in our history.
> >>
> >> I actually do know. Or will know. In about ~10 hours. I wrote a script
> >> that does the following:
> >>
> >> 1) git log all revisions touching
> https://svn.apache.org/repos/asf/lucene
> >> 2) grep revision numbers
> >> 3) use svnrdump to get every single commit (revision) above, in
> >> incremental mode.
> >>
> >> This will allow me to:
> >>
> >> 1) recreate only Lucene/ Solr SVN, locally.
> >> 2) measure the size of SVN repo.
> >> 3) measure the size of any conversion to git (even if it's one-by-one
> >> checkout, then-sync with git).
> >>
> >> From what I see up until now size should not be an issue at all. Even
> >> with all binary blobs so far the SVN incremental dumps measure ~3.7G
> >> (and I'm about 75% done). There is one interesting super-large commit,
> >> this one:
> >>
> >> svn log -r1240618 https://svn.apache.org/repos/asf/lucene
> >> 
> >> r1240618 | gsingers | 2012-02-04 22:45:17 +0100 (Sat, 04 Feb 2012) | 1
> >> line
> >>
> >> LUCENE-2748: bring in old Lucene docs
> >>
> >> This commit diff weighs... wait for it... 1.3G! I didn't check what
> >> it actually was.
> >>
> >> Will keep you posted.
> >>
> >> D.
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > For additional commands, e-mail: dev-h...@lucene.apache.org
> >
> >
> >
> >
> > --
> > Doug Turnbull | Search Relevance Consultant | OpenSource Connections,
> LLC |
> > 240.476.9983
> > Author:Relevant Search
> > This e-mail and all contents, including attachments, is considered to be
> > Company Confidential unless explicitly stated otherwise, regardless of
> > whether attachments are marked as such.
> >
> >
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
> --
- Mark
about.me/markrmiller


[jira] [Commented] (SOLR-8388) TestSolrQueryResponse (factor out, then extend)

2015-12-15 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15058305#comment-15058305
 ] 

Christine Poerschke commented on SOLR-8388:
---

Thanks Steve. Looking into it.

> TestSolrQueryResponse (factor out, then extend)
> ---
>
> Key: SOLR-8388
> URL: https://issues.apache.org/jira/browse/SOLR-8388
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8388-part1of2.patch, SOLR-8388-part2of2.patch
>
>
> factor out 
> {{solr/core/src/test/org/apache/solr/response/TestSolrQueryResponse.java}} 
> from {{solr/core/src/test/org/apache/solr/servlet/ResponseHeaderTest.java}} 
> and then extend it






Re: Lucene/Solr git mirror will soon turn off

2015-12-15 Thread Mark Miller
I don't think you will get a volunteer until someone sums up the discussion
with a proposal that someone is not going to veto or something. We can't
expect everyone to read the same tea leaves and come to the same
conclusion.

Perhaps a stripped down mirror is the consensus. I'd rather we had some
agreement on what we were going to do though, rather than an agreement to
investigate. If we think stripping down is technically feasible, and no
one is going to violently disagree still, then let's decide to do that.

- Mark



On Tue, Dec 15, 2015 at 11:39 AM Doug Turnbull <
dturnb...@opensourceconnections.com> wrote:

> I thought the general consensus at minimum was to investigate a git mirror
> that stripped some artifacts out (jars etc) to lighten up the work of the
> process. If at some point the project switched to git, such a mirror might
> be a suitable git repo for the project with archived older versions in SVN.
>
> I think probably what is lacking is a volunteer to figure it all out.
>
>
> -Doug
>
> On Tue, Dec 15, 2015 at 11:32 AM, Mark Miller 
> wrote:
>
>> Anyone willing to lead this discussion to some kind of better resolution?
>> Did that whole back and forth help with any ideas on the best path forward?
>> I know it's a complicated issue, git / svn, the light side, the dark side,
>> but doesn't GitHub also depend on this mirroring? It's going to be super
>> annoying when I can no longer pull from a relatively up to date git remote.
>>
>> Who has boiled down the correct path?
>>
>> - Mark
>>
>> On Wed, Dec 9, 2015 at 6:07 AM Dawid Weiss  wrote:
>>
>>> FYI.
>>>
>>> - All of Lucene's SVN, incremental deltas, uncompressed: 5.0G
>>> - the above, tar.bz2: 1.2G
>>>
>>> Sadly, I didn't succeed at recreating a local SVN repo from those
>>> incremental dumps. svnadmin load fails with a cryptic error related to
>>> the fact that revision number of node-copy operations refer to
>>> original SVN numbers and they're apparently renumbered on import.
>>> svnadmin isn't smart enough to somehow keep a reference of those
>>> original numbers and svndumpfilter can't work with incremental dump
>>> files... A seemingly trivial task of splitting a repo on a clean
>>> boundary seems incredibly hard with SVN...
>>>
>>> If anybody wishes to play with the dump files, here they are:
>>> http://goo.gl/m6q3J8
>>>
>>> Dawid
>>>
>>> On Tue, Dec 8, 2015 at 10:49 PM, Upayavira  wrote:
>>> > You can't avoid having the history in SVN. The ASF has one large repo,
>>> and
>>> > won't be deleting that repo, so the history will survive in perpetuity,
>>> > regardless of what we do now.
>>> >
>>> > Upayavira
>>> >
>>> > On Tue, Dec 8, 2015, at 09:24 PM, Doug Turnbull wrote:
>>> >
>>> > It seems you'd want to preserve that history in a frozen/archived
>>> Apache Svn
>>> > repo for Lucene. Then make the new git repo slimmer before switching.
>>> Folks
>>> > that want very old versions or doing research can at least go through
>>> the
>>> > original SVN repo.
>>> >
>>> > On Tuesday, December 8, 2015, Dawid Weiss 
>>> wrote:
>>> >
>>> > One more thing, perhaps of importance, the raw Lucene repo contains
>>> > all the history of projects that then turned top-level (Nutch,
>>> > Mahout). These could also be dropped (or ignored) when converting to
>>> > git. If we agree JARs are not relevant, why should projects not
>>> > directly related to Lucene/ Solr be?
>>> >
>>> > Dawid
>>> >
>>> > On Tue, Dec 8, 2015 at 10:05 PM, Dawid Weiss 
>>> wrote:
>>> >>> Don’t know how much we have of historic jars in our history.
>>> >>
>>> >> I actually do know. Or will know. In about ~10 hours. I wrote a script
>>> >> that does the following:
>>> >>
>>> >> 1) git log all revisions touching
>>> https://svn.apache.org/repos/asf/lucene
>>> >> 2) grep revision numbers
>>> >> 3) use svnrdump to get every single commit (revision) above, in
>>> >> incremental mode.
>>> >>
>>> >> This will allow me to:
>>> >>
>>> >> 1) recreate only Lucene/ Solr SVN, locally.
>>> >> 2) measure the size of SVN repo.
>>> >> 3) measure the size of any conversion to git (even if it's one-by-one
>>> >> checkout, then-sync with git).
>>> >>
>>> >> From what I see up until now size should not be an issue at all. Even
>>> >> with all binary blobs so far the SVN incremental dumps measure ~3.7G
>>> >> (and I'm about 75% done). There is one interesting super-large commit,
>>> >> this one:
>>> >>
>>> >> svn log -r1240618 https://svn.apache.org/repos/asf/lucene
>>> >>
>>> 
>>> >> r1240618 | gsingers | 2012-02-04 22:45:17 +0100 (Sat, 04 Feb 2012) | 1
>>> >> line
>>> >>
>>> >> LUCENE-2748: bring in old Lucene docs
>>> >>
>>> >> This commit diff weights... wait for it... 1.3G! I didn't check what
>>> >> it actually was.
>>> >>
>>> >> Will keep you posted.
>>> >>
>>> >> D.
>>> >
>>> > 

[jira] [Reopened] (SOLR-8388) TestSolrQueryResponse (factor out, then extend)

2015-12-15 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke reopened SOLR-8388:
---

> TestSolrQueryResponse (factor out, then extend)
> ---
>
> Key: SOLR-8388
> URL: https://issues.apache.org/jira/browse/SOLR-8388
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8388-part1of2.patch, SOLR-8388-part2of2.patch
>
>
> factor out 
> {{solr/core/src/test/org/apache/solr/response/TestSolrQueryResponse.java}} 
> from {{solr/core/src/test/org/apache/solr/servlet/ResponseHeaderTest.java}} 
> and then extend it






Re: Lucene/Solr git mirror will soon turn off

2015-12-15 Thread Yonik Seeley
If we move to git, stripping out jars seems to be an independent decision?
Can you even strip out jars and preserve history (i.e. not change
hashes and invalidate everyone's forks/clones)?
I did run across this:
http://stackoverflow.com/questions/17470780/is-it-possible-to-slim-a-git-repository-without-rewriting-history
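For illustration (not part of the original thread): a minimal local experiment with git's built-in filter-branch shows why the answer is essentially no. Removing blobs rewrites every downstream commit, so the hashes change and existing clones/forks are invalidated. The repo, file names, and commands below are purely illustrative.

```shell
set -e
export FILTER_BRANCH_SQUELCH_WARNING=1
work=$(mktemp -d) && cd "$work"
git init -q repo && cd repo
git config user.email demo@example.com
git config user.name demo

# One commit containing both a binary jar and a source file.
echo binary > lib.jar
echo source > Foo.java
git add . && git commit -qm "add jar and source"
old=$(git rev-parse HEAD)

# Strip every .jar from the entire history.
git filter-branch -f --index-filter \
  'git rm --cached -q --ignore-unmatch "*.jar"' -- --all >/dev/null 2>&1

new=$(git rev-parse HEAD)
git ls-tree -r HEAD --name-only           # the jar is gone from history
[ "$old" != "$new" ] && echo "history rewritten: hashes differ"
```

The same applies to any blob-stripping tool: content-addressed hashing means a slimmed mirror is necessarily a new history, not an in-place edit of the old one.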

-Yonik




Re: Lucene/Solr git mirror will soon turn off

2015-12-15 Thread Dawid Weiss
Ok, give me some time and I'll see what I can achieve. Now that I actually
wrote an SVN dump parser (validator and serializer) things are under much
better control...

I'll try to achieve the following:

1) selectively drop unnecessary stuff from history (cms/, javadocs/, JARs
and perhaps other binaries),
2) *preserve* history of all core sources. So svn log IndexWriter has to go
all the way back to when Doug was young and pretty. Oops, he's still
pretty of course.
3) provide a way to link git history with svn revisions. I would, ideally,
include a "imported from svn:rev XXX" in the commit log message.
4) annotate release tags and branches. I don't care much about interim
branches -- they are not important to me (please speak up if you think
otherwise).
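Point 3 is cheap to support afterwards: if every imported commit message carries a marker line, an svn revision can be mapped back to its git commit with a plain log search. A small runnable sketch (the marker format "imported from svn:rev XXX" is taken from the email above; the repo and commit contents are illustrative):

```shell
set -e
work=$(mktemp -d) && cd "$work"
git init -q repo && cd repo
git config user.email demo@example.com
git config user.name demo

# Two "imported" commits, each tagged with its source svn revision
# in the message body.
echo a > a.txt && git add a.txt
git commit -qm "LUCENE-2748: bring in old Lucene docs

imported from svn:rev 1240618"
echo b > b.txt && git add b.txt
git commit -qm "another change

imported from svn:rev 1240619"

# Map an svn revision back to its git commit.
git log --grep='imported from svn:rev 1240618' --format=%s
# prints: LUCENE-2748: bring in old Lucene docs
```

Since the marker lives in the commit message, it survives clones and mirrors without any extra metadata store.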

Dawid

On Tue, Dec 15, 2015 at 7:03 PM, Robert Muir  wrote:

> If Dawid is volunteering to sort out this mess, +1 to let him make it
> a move to git. I don't care if we disagree about JARs, I trust he will
> do a good job and that is more important.
>
> On Tue, Dec 15, 2015 at 12:44 PM, Dawid Weiss 
> wrote:
> >
> > It's not true that nobody is working on this. I have been working on the
> SVN
> > dump in the meantime. You would not believe how incredibly complex
> > processing that (remote) dump is. Let me highlight a few key
> > issues:
> >
> > 1) There is no "one" Lucene SVN repository that can be transferred to
> git.
> > The history is a mess. Trunk, branches, tags -- all change paths at
> various
> > points in history. Entire projects are copied from *outside* the official
> > Lucene ASF path (when Solr, Nutch or Tika moved from the incubator, for
> > example).
> >
> > 2) The history of commits to Lucene's subpath of the SVN is ~50k commits.
> > ASF's commit history in which those 50k commits live is 1.8 *million*
> > commits. I think the git-svn sync crashes due to the sheer number of
> (empty)
> > commits in between actual changes.
> >
> > 3) There are a few commits that are gigantic. I mentioned Grant's 1.2G
> > patch, for example, but there are others (the second largest is 190 megs,
> > the third is 136 megs).
> >
> > 4) The size of JARs is really not an issue. The entire SVN repo I
> mirrored
> > locally (including empty interim commits to cater for svn:mergeinfos) is
> 4G.
> > If you strip the stuff like javadocs and side projects (Nutch, Tika,
> Mahout)
> > then I bet the entire history can fit in 1G total. Of course stripping
> JARs
> > is also doable.
> >
> > 5) There is lots of junk at the main SVN path so you can't just version
> the
> > top-level folder. If you wanted to checkout /asf/lucene then the size of
> the
> > resulting folder is enormous -- I terminated the checkout after I reached
> > over 20 gigs. Well, technically you *could* do it, it'd preserve perfect
> > history, but I wouldn't want to git co a past version that checks out all
> > the tags, branches, etc. This has to be mapped in a sensible way.
> >
> > What I think is that all the above makes (straightforward) conversion to
> git
> > problematic. Especially moving paths are a problem -- how to mark tags/
> > branches, where the main line of development is, etc. This conversion
> would
> > have to be guided and hand-tuned to make sense. This effort would only
> pay
> > for itself if we move to git, otherwise I don't see the benefit. Paul's
> > script is fine for keeping short-term history.
> >
> > Dawid
> >
> > P.S. Either the SVN repo at Apache is broken or SVN itself is broken, which
> > makes processing SVN history even more fun. This dump indicates Tika
> being
> > moved from the incubator to Lucene:
> >
> > svnrdump dump -r 712381 --incremental https://svn.apache.org/repos/asf/
> >
> > out
> >
> > But when you dump just Lucene's subpath, the output is broken (last
> > changeset in the file is an invalid changeset, it carries no target):
> >
> > svnrdump dump -r 712381 --incremental
> > https://svn.apache.org/repos/asf/lucene > out
> >
> >
> >
> > On Tue, Dec 15, 2015 at 6:04 PM, Yonik Seeley  wrote:
> >>
> >> If we move to git, stripping out jars seems to be an independent
> decision?
> >> Can you even strip out jars and preserve history (i.e. not change
> >> hashes and invalidate everyone's forks/clones)?
> >> I did run across this:
> >>
> >>
> http://stackoverflow.com/questions/17470780/is-it-possible-to-slim-a-git-repository-without-rewriting-history
> >>
> >> -Yonik
> >>
> >> -
> >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> >> For additional commands, e-mail: dev-h...@lucene.apache.org
> >>
> >
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Commented] (SOLR-8388) TestSolrQueryResponse (factor out, then extend)

2015-12-15 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15058262#comment-15058262
 ] 

Steve Rowe commented on SOLR-8388:
--

My Jenkins found a reproducible ReturnFieldsTest.testToString failure (Linux, 
Oracle Java7, branch_5x):

{noformat}
  [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=ReturnFieldsTest 
-Dtests.method=testToString -Dtests.seed=4E6AE8A4D715B23B -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
-Dtests.locale=sk -Dtests.timezone=Europe/Brussels -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII
   [junit4] FAILURE 0.04s | ReturnFieldsTest.testToString <<<
   [junit4]> Throwable #1: org.junit.ComparisonFailure: 
expected:<...s=(globs=[],fields=[[score, test, id],okFieldNames=[null, score, 
test], id],reqFieldNames=...> but was:<...s=(globs=[],fields=[[test, score, 
id],okFieldNames=[null, test, score], id],reqFieldNames=...>
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([4E6AE8A4D715B23B:9F4B5FA5E80FDF93]:0)
   [junit4]>at 
org.apache.solr.search.ReturnFieldsTest.testToString(ReturnFieldsTest.java:109)
{noformat}

> TestSolrQueryResponse (factor out, then extend)
> ---
>
> Key: SOLR-8388
> URL: https://issues.apache.org/jira/browse/SOLR-8388
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8388-part1of2.patch, SOLR-8388-part2of2.patch
>
>
> factor out 
> {{solr/core/src/test/org/apache/solr/response/TestSolrQueryResponse.java}} 
> from {{solr/core/src/test/org/apache/solr/servlet/ResponseHeaderTest.java}} 
> and then extend it






Re: Lucene/Solr git mirror will soon turn off

2015-12-15 Thread Scott Blum
Let's just move to git. It's almost 2016. I suspect many contributors are
probably primarily working off the github mirror anyway.  Is there any
great argument for delaying?
On Dec 15, 2015 11:51 AM, "Mark Miller"  wrote:

> I don't think you will get a volunteer until someone sums up the
> discussion with a proposal that someone is not going to veto or something.
> We can't expect everyone to read the same tea leaves and come to the same
> conclusion.
>
> Perhaps a stripped down mirror is the consensus. I'd rather we had some
> agreement on what we were going to do though, rather than an agreement to
> investigate. If we think stripping down is technically feasible, and no
> one is going to violently disagree still, then let's decide to do that.
>
> - Mark
>
>
>
> On Tue, Dec 15, 2015 at 11:39 AM Doug Turnbull <
> dturnb...@opensourceconnections.com> wrote:
>
>> I thought the general consensus at minimum was to investigate a git
>> mirror that stripped some artifacts out (jars etc) to lighten up the work
>> of the process. If at some point the project switched to git, such a mirror
>> might be a suitable git repo for the project with archived older versions
>> in SVN.
>>
>> I think probably what is lacking is a volunteer to figure it all out.
>>
>>
>> -Doug
>>
>> On Tue, Dec 15, 2015 at 11:32 AM, Mark Miller 
>> wrote:
>>
>>> Anyone willing to lead this discussion to some kind of better
>>> resolution? Did that whole back and forth help with any ideas on the best
>>> path forward? I know it's a complicated issue, git / svn, the light side,
>>> the dark side, but doesn't GitHub also depend on this mirroring? It's going
>>> to be super annoying when I can no longer pull from a relatively up to date
>>> git remote.
>>>
>>> Who has boiled down the correct path?
>>>
>>> - Mark
>>>
>>> On Wed, Dec 9, 2015 at 6:07 AM Dawid Weiss 
>>> wrote:
>>>
 FYI.

 - All of Lucene's SVN, incremental deltas, uncompressed: 5.0G
 - the above, tar.bz2: 1.2G

 Sadly, I didn't succeed at recreating a local SVN repo from those
 incremental dumps. svnadmin load fails with a cryptic error related to
 the fact that revision number of node-copy operations refer to
 original SVN numbers and they're apparently renumbered on import.
 svnadmin isn't smart enough to somehow keep a reference of those
 original numbers and svndumpfilter can't work with incremental dump
 files... A seemingly trivial task of splitting a repo on a clean
 boundary seems incredibly hard with SVN...

 If anybody wishes to play with the dump files, here they are:
 http://goo.gl/m6q3J8

 Dawid

 On Tue, Dec 8, 2015 at 10:49 PM, Upayavira  wrote:
 > You can't avoid having the history in SVN. The ASF has one large
 repo, and
 > won't be deleting that repo, so the history will survive in
 perpetuity,
 > regardless of what we do now.
 >
 > Upayavira
 >
 > On Tue, Dec 8, 2015, at 09:24 PM, Doug Turnbull wrote:
 >
 > It seems you'd want to preserve that history in a frozen/archived
 Apache Svn
 > repo for Lucene. Then make the new git repo slimmer before switching.
 Folks
 > that want very old versions or doing research can at least go through
 the
 > original SVN repo.
 >
 > On Tuesday, December 8, 2015, Dawid Weiss 
 wrote:
 >
 > One more thing, perhaps of importance, the raw Lucene repo contains
 > all the history of projects that then turned top-level (Nutch,
 > Mahout). These could also be dropped (or ignored) when converting to
 > git. If we agree JARs are not relevant, why should projects not
 > directly related to Lucene/ Solr be?
 >
 > Dawid
 >
 > On Tue, Dec 8, 2015 at 10:05 PM, Dawid Weiss 
 wrote:
 >>> Don’t know how much we have of historic jars in our history.
 >>
 >> I actually do know. Or will know. In about ~10 hours. I wrote a
 script
 >> that does the following:
 >>
 >> 1) git log all revisions touching
 https://svn.apache.org/repos/asf/lucene
 >> 2) grep revision numbers
 >> 3) use svnrdump to get every single commit (revision) above, in
 >> incremental mode.
 >>
 >> This will allow me to:
 >>
 >> 1) recreate only Lucene/ Solr SVN, locally.
 >> 2) measure the size of SVN repo.
 >> 3) measure the size of any conversion to git (even if it's one-by-one
 >> checkout, then-sync with git).
 >>
 >> From what I see up until now size should not be an issue at all. Even
 >> with all binary blobs so far the SVN incremental dumps measure ~3.7G
 >> (and I'm about 75% done). There is one interesting super-large
 commit,
 >> this one:
 >>
 >> svn log -r1240618 https://svn.apache.org/repos/asf/lucene
 >>
 

[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk-9-ea+95) - Build # 15210 - Failure!

2015-12-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15210/
Java: 64bit/jdk-9-ea+95 -XX:+UseCompressedOops -XX:+UseG1GC -XX:-CompactStrings

1 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:43234/collMinRf_1x3_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Timeout 
occured while waiting response from server at: 
http://127.0.0.1:43234/collMinRf_1x3_shard1_replica1
at 
__randomizedtesting.SeedInfo.seed([4883AA101CB37E2A:C0D795CAB24F13D2]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:635)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:982)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:871)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:807)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:609)
at 
org.apache.solr.cloud.HttpPartitionTest.testMinRf(HttpPartitionTest.java:194)
at 
org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:104)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:520)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[JENKINS] Lucene-Solr-5.x-Solaris (multiarch/jdk1.7.0) - Build # 259 - Failure!

2015-12-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Solaris/259/
Java: multiarch/jdk1.7.0 -d32 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.search.ReturnFieldsTest.testToString

Error Message:
expected:<...s=(globs=[],fields=[[score, test, id],okFieldNames=[null, score, 
test, id]],reqFieldNames=[id,...> but was:<...s=(globs=[],fields=[[id, score, 
test],okFieldNames=[null, id, score, test]],reqFieldNames=[id,...>

Stack Trace:
org.junit.ComparisonFailure: expected:<...s=(globs=[],fields=[[score, test, 
id],okFieldNames=[null, score, test, id]],reqFieldNames=[id,...> but 
was:<...s=(globs=[],fields=[[id, score, test],okFieldNames=[null, id, score, 
test]],reqFieldNames=[id,...>
at 
__randomizedtesting.SeedInfo.seed([EC8AEC777DC963ED:3DAB5B7642D30E45]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.search.ReturnFieldsTest.testToString(ReturnFieldsTest.java:109)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.7.0_80) - Build # 14913 - Failure!

2015-12-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14913/
Java: 32bit/jdk1.7.0_80 -client -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.search.ReturnFieldsTest.testToString

Error Message:
expected:<...s=(globs=[],fields=[[score, test, id],okFieldNames=[null, score, 
test], id],reqFieldNames=...> but was:<...s=(globs=[],fields=[[test, score, 
id],okFieldNames=[null, test, score], id],reqFieldNames=...>

Stack Trace:
org.junit.ComparisonFailure: expected:<...s=(globs=[],fields=[[score, test, 
id],okFieldNames=[null, score, test], id],reqFieldNames=...> but 
was:<...s=(globs=[],fields=[[test, score, id],okFieldNames=[null, test, score], 
id],reqFieldNames=...>
at 
__randomizedtesting.SeedInfo.seed([80FDE798363497B6:51DC5099092EFA1E]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.search.ReturnFieldsTest.testToString(ReturnFieldsTest.java:109)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Updated] (SOLR-8372) Canceled recovery can lead to data loss

2015-12-15 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-8372:
---
Attachment: SOLR-8372.patch

Here's a patch that allows bufferUpdates() to be called more than once, and 
removes the call to dropBufferedUpdates() from RecoveryStrategy.

Previously, if bufferUpdates() was called in a state!=ACTIVE, we simply 
returned w/o changing the state.  This is now logged at least.

This has an additional side effect of having buffered versions in our log that 
were never applied to the index.  This seems OK though... better not to lose 
updates in general.

> Canceled recovery can lead to data loss
> ---
>
> Key: SOLR-8372
> URL: https://issues.apache.org/jira/browse/SOLR-8372
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
> Attachments: SOLR-8372.patch
>
>
> A recovery via index replication tells the update log to start buffering 
> updates.  If that recovery is canceled for whatever reason by the replica, 
> the RecoveryStrategy calls ulog.dropBufferedUpdates() which stops buffering 
> and places the UpdateLog back in active mode.  If updates come from the 
> leader after this point (and before RecoveryStrategy retries recovery), 
> the updates will be processed as normal and added to the transaction log. If 
> the server is bounced, those last updates to the transaction log look normal 
> (no FLAG_GAP) and can be used to determine who is more up to date. 
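The data-loss mechanism described above can be reduced to a toy model: a replica's freshness is judged from the newest versions in its transaction log, so updates appended while the replica was actually behind must stay distinguishable from normal ones. The sketch below is an illustrative analogue only; Entry, latestSafeVersion, and the boolean flag are invented for this example and are not Solr's actual UpdateLog or FLAG_GAP implementation.

```java
import java.util.List;

public class TlogGapSketch {
    // A tiny model of a transaction log entry: an update version, optionally
    // flagged as received while the replica had not finished recovering.
    record Entry(long version, boolean bufferedDuringRecovery) {}

    // "Who is more up to date" is judged from the newest trustworthy version;
    // entries appended during an incomplete recovery must not count.
    static long latestSafeVersion(List<Entry> tlog) {
        return tlog.stream()
                .filter(e -> !e.bufferedDuringRecovery())
                .mapToLong(Entry::version)
                .max()
                .orElse(-1);
    }

    public static void main(String[] args) {
        // Canceled-recovery bug: versions 8 and 9 arrive after buffering was
        // dropped, so they look like normal entries despite missing docs.
        List<Entry> broken = List.of(new Entry(8, false), new Entry(9, false));

        // With buffering kept on, the same entries stay marked and the
        // replica does not wrongly appear up to date.
        List<Entry> patched = List.of(new Entry(8, true), new Entry(9, true));

        System.out.println(latestSafeVersion(broken));  // 9  -> wrongly trusted
        System.out.println(latestSafeVersion(patched)); // -1 -> correctly not trusted
    }
}
```

The point is only the distinction the patch preserves: unapplied buffered updates may sit in the log, but they must not make the log's tail look authoritative.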



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7730) speed-up faceting on doc values fields

2015-12-15 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-7730:
---
Attachment: SOLR-7730-changes.patch

attaching [^SOLR-7730-changes.patch], which moves the entry from the 5.3 to the 
5.4 Optimizations section. 
[~steve_rowe] Should I commit it to trunk and 5x? 

> speed-up faceting on doc values fields
> --
>
> Key: SOLR-7730
> URL: https://issues.apache.org/jira/browse/SOLR-7730
> Project: Solr
>  Issue Type: Improvement
>  Components: faceting
>Affects Versions: 5.2.1
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>  Labels: patch
> Fix For: 5.4
>
> Attachments: LUCENE-7730.patch, LUCENE-7730.patch, 
> SOLR-7730-changes.patch, SOLR-7730.patch
>
>
> Every time we count facets on DocValues fields in Solr on a many-segment 
> index, we see this unnecessary hotspot:
> {code}
> 
> at 
> org.apache.lucene.index.MultiFields.getMergedFieldInfos(MultiFields.java:248)
> at 
> org.apache.lucene.index.SlowCompositeReaderWrapper.getFieldInfos(SlowCompositeReaderWrapper.java:239)
> at 
> org.apache.lucene.index.SlowCompositeReaderWrapper.getSortedSetDocValues(SlowCompositeReaderWrapper.java:176)
> at 
> org.apache.solr.request.DocValuesFacets.getCounts(DocValuesFacets.java:72)
> at 
> org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:460) 
> {code}
> The reason is SlowCompositeReaderWrapper.getSortedSetDocValues() line 136 and 
> SlowCompositeReaderWrapper.getSortedDocValues() line 174: 
> before returning composite doc values, SCRW merges the segment field infos, 
> which is expensive, yet it then checks *only* the doc-values type in the 
> merged result. This type check can be done much more cheaply on a 
> per-segment basis. 
> This patch yields a performance gain for those who count DV facets in Solr.
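The shortcut the description argues for can be illustrated with plain Java collections: merging all per-segment field metadata just to read one field's doc-values type does work proportional to every field in every segment, while asking the segments directly does not. This is a simplified analogue with invented names (SEGMENTS, mergedTypeOf, perSegmentTypeOf), not Lucene's actual FieldInfos or SlowCompositeReaderWrapper code.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PerSegmentDvCheck {
    // Per-segment metadata (field name -> doc-values type) standing in for
    // Lucene's per-leaf FieldInfos. Purely illustrative data.
    static final List<Map<String, String>> SEGMENTS = Arrays.asList(
            Map.of("title", "SORTED_SET", "id", "NONE"),
            Map.of("title", "SORTED_SET"),
            Map.of("title", "SORTED_SET", "price", "NUMERIC"));

    // Slow path (what merging field infos effectively does): build a union of
    // every field in every segment, then read a single entry of the result.
    static String mergedTypeOf(String field) {
        Map<String, String> merged = new HashMap<>();
        for (Map<String, String> seg : SEGMENTS) {
            merged.putAll(seg); // touches all fields of all segments
        }
        return merged.get(field);
    }

    // Fast path: consult each segment directly for just this field and stop
    // at the first answer; no merged structure is ever built.
    static String perSegmentTypeOf(String field) {
        for (Map<String, String> seg : SEGMENTS) {
            String type = seg.get(field);
            if (type != null) {
                return type;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(mergedTypeOf("title"));     // SORTED_SET
        System.out.println(perSegmentTypeOf("title")); // SORTED_SET, no merge
    }
}
```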






Re: Lucene/Solr git mirror will soon turn off

2015-12-15 Thread Dawid Weiss
Oh, just for completeness -- moving to git is not just about the version
management, it's also:

1) all the scripts that currently do validations, etc.
2) what to do with svn:* properties
3) what to do with empty folders (not available in git).

I don't volunteer to solve these :)

Dawid


On Tue, Dec 15, 2015 at 7:09 PM, Dawid Weiss  wrote:

>
> Ok, give me some time and I'll see what I can achieve. Now that I actually
> wrote an SVN dump parser (validator and serializer) things are under much
> better control...
>
> I'll try to achieve the following:
>
> 1) selectively drop unnecessary stuff from history (cms/, javadocs/, JARs
> and perhaps other binaries),
> 2) *preserve* history of all core sources. So svn log IndexWriter has to
> go all the way back to when Doug was young and pretty. Oops, he's
> still pretty of course.
> 3) provide a way to link git history with svn revisions. I would, ideally,
> include an "imported from svn:rev XXX" note in the commit log message.
> 4) annotate release tags and branches. I don't care much about interim
> branches -- they are not important to me (please speak up if you think
> otherwise).
>
> Dawid
>
> On Tue, Dec 15, 2015 at 7:03 PM, Robert Muir  wrote:
>
>> If Dawid is volunteering to sort out this mess, +1 to let him make it
>> a move to git. I don't care if we disagree about JARs, I trust he will
>> do a good job and that is more important.
>>
>> On Tue, Dec 15, 2015 at 12:44 PM, Dawid Weiss 
>> wrote:
>> >
>> > It's not true that nobody is working on this. I have been working on
>> the SVN
>> > dump in the meantime. You would not believe how incredibly complex
>> > processing that (remote) dump is. Let me highlight a few key
>> > issues:
>> >
>> > 1) There is no "one" Lucene SVN repository that can be transferred to
>> git.
>> > The history is a mess. Trunk, branches, tags -- all change paths at
>> various
>> > points in history. Entire projects are copied from *outside* the
>> official
>> > Lucene ASF path (when Solr, Nutch or Tika moved from the incubator, for
>> > example).
>> >
>> > 2) The history of commits to Lucene's subpath of the SVN is ~50k
>> commits.
>> > ASF's commit history in which those 50k commits live is 1.8 *million*
>> > commits. I think the git-svn sync crashes due to the sheer number of
>> (empty)
>> > commits in between actual changes.
>> >
>> > 3) There are a few commits that are gigantic. I mentioned Grant's 1.2G
>> > patch, for example, but there are others (the second largest is 190 MB, 
>> > the third is 136 MB).
>> >
>> > 4) The size of JARs is really not an issue. The entire SVN repo I
>> mirrored
>> > locally (including empty interim commits to cater for svn:mergeinfos)
>> is 4G.
>> > If you strip the stuff like javadocs and side projects (Nutch, Tika,
>> Mahout)
>> > then I bet the entire history can fit in 1G total. Of course stripping
>> JARs
>> > is also doable.
>> >
>> > 5) There is lots of junk at the main SVN path so you can't just version
>> the
>> > top-level folder. If you wanted to checkout /asf/lucene then the size
>> of the
>> > resulting folder is enormous -- I terminated the checkout after I
>> reached
>> > over 20 gigs. Well, technically you *could* do it, it'd preserve perfect
>> > history, but I wouldn't want to git co a past version that checks out
>> all
>> > the tags, branches, etc. This has to be mapped in a sensible way.
>> >
>> > What I think is that all the above makes (straightforward) conversion
>> to git
>> > problematic. Especially moving paths are a problem -- how to mark tags/
>> > branches, where the main line of development is, etc. This conversion
>> would
>> > have to be guided and hand-tuned to make sense. This effort would only
>> pay
>> > for itself if we move to git, otherwise I don't see the benefit. Paul's
>> > script is fine for keeping short-term history.
>> >
>> > Dawid
>> >
>> > P.S. Either the SVN repo at Apache is broken or the SVN is broken, which
>> > makes processing SVN history even more fun. This dump indicates Tika
>> being
>> > moved from the incubator to Lucene:
>> >
>> > svnrdump dump -r 712381 --incremental https://svn.apache.org/repos/asf/
>> >
>> > out
>> >
>> > But when you dump just Lucene's subpath, the output is broken (last
>> > changeset in the file is an invalid changeset, it carries no target):
>> >
>> > svnrdump dump -r 712381 --incremental
>> > https://svn.apache.org/repos/asf/lucene > out
>> >
>> >
>> >
>> > On Tue, Dec 15, 2015 at 6:04 PM, Yonik Seeley 
>> wrote:
>> >>
>> >> If we move to git, stripping out jars seems to be an independent
>> decision?
>> >> Can you even strip out jars and preserve history (i.e. not change
>> >> hashes and invalidate everyone's forks/clones)?
>> >> I did run across this:
>> >>
>> >>
>> http://stackoverflow.com/questions/17470780/is-it-possible-to-slim-a-git-repository-without-rewriting-history
>> >>
>> >> -Yonik
>> >>
>> >> 

[jira] [Reopened] (SOLR-7730) speed-up faceting on doc values fields

2015-12-15 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev reopened SOLR-7730:


> speed-up faceting on doc values fields
> --
>
> Key: SOLR-7730
> URL: https://issues.apache.org/jira/browse/SOLR-7730
> Project: Solr
>  Issue Type: Improvement
>  Components: faceting
>Affects Versions: 5.2.1
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>  Labels: patch
> Fix For: 5.4
>
> Attachments: LUCENE-7730.patch, LUCENE-7730.patch, SOLR-7730.patch
>
>
> Every time we count facets on DocValues fields in Solr on a many-segment 
> index, we see this unnecessary hotspot:
> {code}
> 
> at 
> org.apache.lucene.index.MultiFields.getMergedFieldInfos(MultiFields.java:248)
> at 
> org.apache.lucene.index.SlowCompositeReaderWrapper.getFieldInfos(SlowCompositeReaderWrapper.java:239)
> at 
> org.apache.lucene.index.SlowCompositeReaderWrapper.getSortedSetDocValues(SlowCompositeReaderWrapper.java:176)
> at 
> org.apache.solr.request.DocValuesFacets.getCounts(DocValuesFacets.java:72)
> at 
> org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:460) 
> {code}
> The reason is SlowCompositeReaderWrapper.getSortedSetDocValues() line 136 and 
> SlowCompositeReaderWrapper.getSortedDocValues() line 174: 
> before returning composite doc values, SCRW merges the segment field infos, 
> which is expensive, yet it then checks *only* the doc-values type in the 
> merged result. This type check can be done much more cheaply on a 
> per-segment basis. 
> This patch yields a performance gain for those who count DV facets in Solr.






Re: Lucene/Solr git mirror will soon turn off

2015-12-15 Thread Mark Miller
Let's just make some JIRA issues. I'm not worried about volunteers for any
of it yet, just a direction we agree upon. Once we know where we are going,
we generally don't have a big volunteer problem. We haven't heard from Uwe
yet, but it really does seem like moving to Git makes the most sense.

I'm certainly willing to spend some free time on this.

- Mark

On Tue, Dec 15, 2015 at 1:22 PM Dawid Weiss  wrote:

>
> Oh, just for completeness -- moving to git is not just about the version
> management, it's also:
>
> 1) all the scripts that currently do validations, etc.
> 2) what to do with svn:* properties
> 3) what to do with empty folders (not available in git).
>
> I don't volunteer to solve these :)
>
> Dawid
>
>
> On Tue, Dec 15, 2015 at 7:09 PM, Dawid Weiss 
> wrote:
>
>>
>> Ok, give me some time and I'll see what I can achieve. Now that I
>> actually wrote an SVN dump parser (validator and serializer) things are
>> under much better control...
>>
>> I'll try to achieve the following:
>>
>> 1) selectively drop unnecessary stuff from history (cms/, javadocs/, JARs
>> and perhaps other binaries),
>> 2) *preserve* history of all core sources. So svn log IndexWriter has to
>> go all the way back to when Doug was young and pretty. Oops, he's
>> still pretty of course.
>> 3) provide a way to link git history with svn revisions. I would,
>> ideally, include an "imported from svn:rev XXX" note in the commit log message.
>> 4) annotate release tags and branches. I don't care much about interim
>> branches -- they are not important to me (please speak up if you think
>> otherwise).
>>
>> Dawid
>>
>> On Tue, Dec 15, 2015 at 7:03 PM, Robert Muir  wrote:
>>
>>> If Dawid is volunteering to sort out this mess, +1 to let him make it
>>> a move to git. I don't care if we disagree about JARs, I trust he will
>>> do a good job and that is more important.
>>>
>>> On Tue, Dec 15, 2015 at 12:44 PM, Dawid Weiss 
>>> wrote:
>>> >
>>> > It's not true that nobody is working on this. I have been working on
>>> the SVN
>>> > dump in the meantime. You would not believe how incredibly complex
>>> > processing that (remote) dump is. Let me highlight a few key
>>> > issues:
>>> >
>>> > 1) There is no "one" Lucene SVN repository that can be transferred to
>>> git.
>>> > The history is a mess. Trunk, branches, tags -- all change paths at
>>> various
>>> > points in history. Entire projects are copied from *outside* the
>>> official
>>> > Lucene ASF path (when Solr, Nutch or Tika moved from the incubator, for
>>> > example).
>>> >
>>> > 2) The history of commits to Lucene's subpath of the SVN is ~50k
>>> commits.
>>> > ASF's commit history in which those 50k commits live is 1.8 *million*
>>> > commits. I think the git-svn sync crashes due to the sheer number of
>>> (empty)
>>> > commits in between actual changes.
>>> >
>>> > 3) There are a few commits that are gigantic. I mentioned Grant's 1.2G
>>> > patch, for example, but there are others (the second largest is 190 MB, 
>>> > the third is 136 MB).
>>> >
>>> > 4) The size of JARs is really not an issue. The entire SVN repo I
>>> mirrored
>>> > locally (including empty interim commits to cater for svn:mergeinfos)
>>> is 4G.
>>> > If you strip the stuff like javadocs and side projects (Nutch, Tika,
>>> Mahout)
>>> > then I bet the entire history can fit in 1G total. Of course stripping
>>> JARs
>>> > is also doable.
>>> >
>>> > 5) There is lots of junk at the main SVN path so you can't just
>>> version the
>>> > top-level folder. If you wanted to checkout /asf/lucene then the size
>>> of the
>>> > resulting folder is enormous -- I terminated the checkout after I
>>> reached
>>> > over 20 gigs. Well, technically you *could* do it, it'd preserve
>>> perfect
>>> > history, but I wouldn't want to git co a past version that checks out
>>> all
>>> > the tags, branches, etc. This has to be mapped in a sensible way.
>>> >
>>> > What I think is that all the above makes (straightforward) conversion
>>> to git
>>> > problematic. Especially moving paths are a problem -- how to mark tags/
>>> > branches, where the main line of development is, etc. This conversion
>>> would
>>> > have to be guided and hand-tuned to make sense. This effort would only
>>> pay
>>> > for itself if we move to git, otherwise I don't see the benefit. Paul's
>>> > script is fine for keeping short-term history.
>>> >
>>> > Dawid
>>> >
>>> > P.S. Either the SVN repo at Apache is broken or the SVN is broken,
>>> which
>>> > makes processing SVN history even more fun. This dump indicates Tika
>>> being
>>> > moved from the incubator to Lucene:
>>> >
>>> > svnrdump dump -r 712381 --incremental
>>> https://svn.apache.org/repos/asf/ >
>>> > out
>>> >
>>> > But when you dump just Lucene's subpath, the output is broken (last
>>> > changeset in the file is an invalid changeset, it carries no target):
>>> >
>>> > svnrdump dump -r 

[jira] [Commented] (SOLR-3229) TermVectorComponent does not return terms in distributed search

2015-12-15 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15058606#comment-15058606
 ] 

Hoss Man commented on SOLR-3229:


I don't remember this issue at all, and w/o digging into its history or 
looking at the commits, I'm going to reply simply to this sentence...

bq. UniqueKey should be required for distributed-search to get TV info back. 

I have no objection to this.  If that's not how it works now, then I'm 
surprised. If I'm responsible for the code/decision in question, then my 
suspicion is that it's simply because this issue predated SolrCloud and most of 
the other current "rules" regarding "distributed search" -- back when it was a 
query-time-only concept and people manually partitioned their shards.  It 
certainly pre-dates distrib.singlePass.

Open or link a new JIRA with whatever changes you think make sense to the 
existing functionality.

> TermVectorComponent does not return terms in distributed search
> ---
>
> Key: SOLR-3229
> URL: https://issues.apache.org/jira/browse/SOLR-3229
> Project: Solr
>  Issue Type: Bug
>  Components: SearchComponents - other
>Affects Versions: 4.0-ALPHA
> Environment: Ubuntu 11.10, openjdk-6
>Reporter: Hang Xie
>Assignee: Hoss Man
>  Labels: patch
> Fix For: 4.0, Trunk
>
> Attachments: SOLR-3229.patch, TermVectorComponent.patch
>
>
> TermVectorComponent does not return terms in distributed search: 
> distributedProcess() incorrectly uses the Solr unique key to make subrequests, 
> while process() expects Lucene document ids. Also, parameters are transferred 
> in a different format, so distributed search returns no results.






[jira] [Created] (SOLR-8420) Date statistics: sumOfSquares overflows long

2015-12-15 Thread Tom Hill (JIRA)
Tom Hill created SOLR-8420:
--

 Summary: Date statistics: sumOfSquares overflows long
 Key: SOLR-8420
 URL: https://issues.apache.org/jira/browse/SOLR-8420
 Project: Solr
  Issue Type: Bug
  Components: SearchComponents - other
Affects Versions: 5.4
Reporter: Tom Hill
Priority: Minor


The values for Dates are large enough that squaring them overflows a "long" 
field. This should be converted to a double. 

In StatsValuesFactory.java, line 755, DateStatsValues#updateTypeSpecificStats, add a 
cast to double: 

sumOfSquares += ( (double)value * value * count);
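The overflow is easy to demonstrate: a Date in December 2015 is roughly 1.45e12 epoch milliseconds, and its square (about 2.1e24) far exceeds Long.MAX_VALUE (about 9.2e18), so the pure-long product silently wraps. Below is a minimal sketch of the bug and of the proposed cast; the class and method names are illustrative, not Solr's.

```java
public class DateSumOfSquares {
    // Buggy accumulation: pure long arithmetic silently wraps around
    // for values as large as epoch milliseconds.
    static long sumOfSquaresLong(long value, long count) {
        return value * value * count;
    }

    // The fix proposed in the issue: casting one operand to double promotes
    // the whole expression, trading bit-exactness for a sane magnitude.
    static double sumOfSquaresDouble(long value, long count) {
        return (double) value * value * count;
    }

    public static void main(String[] args) {
        long millis = 1_450_000_000_000L; // mid-December 2015 as epoch millis
        System.out.println(sumOfSquaresLong(millis, 1));   // wrapped, meaningless
        System.out.println(sumOfSquaresDouble(millis, 1)); // ~2.1025E24
    }
}
```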






[jira] [Created] (SOLR-8419) TermVectorComponent distributed-search issues

2015-12-15 Thread David Smiley (JIRA)
David Smiley created SOLR-8419:
--

 Summary: TermVectorComponent distributed-search issues
 Key: SOLR-8419
 URL: https://issues.apache.org/jira/browse/SOLR-8419
 Project: Solr
  Issue Type: Improvement
  Components: SearchComponents - other
Reporter: David Smiley
Assignee: David Smiley
 Fix For: 5.5


TermVectorComponent has supported distributed search since SOLR-3229 added it.  
Unlike most other components, this one tries to support schemas without a 
UniqueKey.  However, its logic for doing so was made faulty by the introduction 
of distrib.singlePass, and furthermore this part wasn't tested in any way.  In 
this issue I want to remove this component's support for schemas lacking a 
UniqueKey (only for distributed search).  






[jira] [Issue Comment Deleted] (SOLR-8420) Date statistics: sumOfSquares overflows long

2015-12-15 Thread Tom Hill (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Hill updated SOLR-8420:
---
Comment: was deleted

(was: One line fix, plus tests.)

> Date statistics: sumOfSquares overflows long
> 
>
> Key: SOLR-8420
> URL: https://issues.apache.org/jira/browse/SOLR-8420
> Project: Solr
>  Issue Type: Bug
>  Components: SearchComponents - other
>Affects Versions: 5.4
>Reporter: Tom Hill
>Priority: Minor
> Attachments: 0001-Fix-overflow-in-date-statistics.patch
>
>
> The values for Dates are large enough that squaring them overflows a "long" 
> field. This should be converted to a double. 
> In StatsValuesFactory.java, line 755, DateStatsValues#updateTypeSpecificStats, 
> add a cast to double: 
> sumOfSquares += ( (double)value * value * count);






Re: Lucene/Solr git mirror will soon turn off

2015-12-15 Thread Robert Muir
If Dawid is volunteering to sort out this mess, +1 to let him make it
a move to git. I don't care if we disagree about JARs, I trust he will
do a good job and that is more important.

On Tue, Dec 15, 2015 at 12:44 PM, Dawid Weiss  wrote:
>
> It's not true that nobody is working on this. I have been working on the SVN
> dump in the meantime. You would not believe how incredibly complex
> processing that (remote) dump is. Let me highlight a few key
> issues:
>
> 1) There is no "one" Lucene SVN repository that can be transferred to git.
> The history is a mess. Trunk, branches, tags -- all change paths at various
> points in history. Entire projects are copied from *outside* the official
> Lucene ASF path (when Solr, Nutch or Tika moved from the incubator, for
> example).
>
> 2) The history of commits to Lucene's subpath of the SVN is ~50k commits.
> ASF's commit history in which those 50k commits live is 1.8 *million*
> commits. I think the git-svn sync crashes due to the sheer number of (empty)
> commits in between actual changes.
>
> 3) There are a few commits that are gigantic. I mentioned Grant's 1.2G
> patch, for example, but there are others (the second largest is 190 MB, the
> third is 136 MB).
>
> 4) The size of JARs is really not an issue. The entire SVN repo I mirrored
> locally (including empty interim commits to cater for svn:mergeinfos) is 4G.
> If you strip the stuff like javadocs and side projects (Nutch, Tika, Mahout)
> then I bet the entire history can fit in 1G total. Of course stripping JARs
> is also doable.
>
> 5) There is lots of junk at the main SVN path so you can't just version the
> top-level folder. If you wanted to checkout /asf/lucene then the size of the
> resulting folder is enormous -- I terminated the checkout after I reached
> over 20 gigs. Well, technically you *could* do it, it'd preserve perfect
> history, but I wouldn't want to git co a past version that checks out all
> the tags, branches, etc. This has to be mapped in a sensible way.
>
> What I think is that all the above makes (straightforward) conversion to git
> problematic. Especially moving paths are a problem -- how to mark tags/
> branches, where the main line of development is, etc. This conversion would
> have to be guided and hand-tuned to make sense. This effort would only pay
> for itself if we move to git, otherwise I don't see the benefit. Paul's
> script is fine for keeping short-term history.
>
> Dawid
>
> P.S. Either the SVN repo at Apache is broken or the SVN is broken, which
> makes processing SVN history even more fun. This dump indicates Tika being
> moved from the incubator to Lucene:
>
> svnrdump dump -r 712381 --incremental https://svn.apache.org/repos/asf/ >
> out
>
> But when you dump just Lucene's subpath, the output is broken (last
> changeset in the file is an invalid changeset, it carries no target):
>
> svnrdump dump -r 712381 --incremental
> https://svn.apache.org/repos/asf/lucene > out
>
>
>
> On Tue, Dec 15, 2015 at 6:04 PM, Yonik Seeley  wrote:
>>
>> If we move to git, stripping out jars seems to be an independent decision?
>> Can you even strip out jars and preserve history (i.e. not change
>> hashes and invalidate everyone's forks/clones)?
>> I did run across this:
>>
>> http://stackoverflow.com/questions/17470780/is-it-possible-to-slim-a-git-repository-without-rewriting-history
>>
>> -Yonik
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>




[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_66) - Build # 5474 - Failure!

2015-12-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5474/
Java: 32bit/jdk1.8.0_66 -client -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:
Could not get expected value  'X val changed' for path 'x' full output: {   
"responseHeader":{ "status":0, "QTime":0},   "params":{"wt":"json"},   
"context":{ "webapp":"/mmdq/u", "path":"/test1", 
"httpMethod":"GET"},   
"class":"org.apache.solr.core.BlobStoreTestRequestHandler",   "x":"X val"}

Stack Trace:
java.lang.AssertionError: Could not get expected value  'X val changed' for 
path 'x' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "params":{"wt":"json"},
  "context":{
"webapp":"/mmdq/u",
"path":"/test1",
"httpMethod":"GET"},
  "class":"org.apache.solr.core.BlobStoreTestRequestHandler",
  "x":"X val"}
at 
__randomizedtesting.SeedInfo.seed([95DDF873C6DFF776:4D90D524310252D6]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:458)
at 
org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:257)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (SOLR-8415) Provide command to switch between non/secure mode in ZK

2015-12-15 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated SOLR-8415:

Attachment: SOLR-8415.patch

Attaching a new patch that includes some tests for converting both ways between 
secure and non-secure nodes.

Docs should go on the wiki somewhere. I'll write them up as soon as somebody 
gives me a nudge to help find a good home for them.

> Provide command to switch between non/secure mode in ZK
> ---
>
> Key: SOLR-8415
> URL: https://issues.apache.org/jira/browse/SOLR-8415
> Project: Solr
>  Issue Type: Improvement
>  Components: security, SolrCloud
>Reporter: Mike Drob
> Fix For: Trunk
>
> Attachments: SOLR-8415.patch, SOLR-8415.patch
>
>
> We have the ability to run both with and without ZK ACLs, but we don't have a 
> great way to switch between the two modes. The most common use case, I imagine, 
> would be upgrading from an old version that did not support this to a new 
> version that does, and wanting to protect all of the existing content in ZK, 
> but it is conceivable that a user might want to remove ACLs as well.






[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 3838 - Failure

2015-12-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/3838/

1 tests failed.
FAILED:  org.apache.solr.search.ReturnFieldsTest.testToString

Error Message:
expected:<...s=(globs=[],fields=[[score, test, id],okFieldNames=[null, score, 
test, id]],reqFieldNames=[id,...> but was:<...s=(globs=[],fields=[[id, score, 
test],okFieldNames=[null, id, score, test]],reqFieldNames=[id,...>
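Both strings in the message contain the same field names; only the iteration order differs (insertion order [score, test, id] versus sorted order [id, score, test]). That pattern is typical when a toString() walks a collection whose ordering is not insertion order. The snippet below only illustrates that general mechanism; whether Solr's ReturnFields actually uses a sorted or hash-ordered set here is an assumption, and the class name is invented.

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.TreeSet;

public class FieldOrderToString {
    public static void main(String[] args) {
        String[] fields = {"score", "test", "id"};

        // Insertion order is preserved, matching what the test author typed.
        Set<String> linked = new LinkedHashSet<>(Arrays.asList(fields));

        // A TreeSet imposes sorted order, so toString() no longer matches
        // the insertion order an assertion on the string would expect.
        Set<String> sorted = new TreeSet<>(Arrays.asList(fields));

        System.out.println(linked); // [score, test, id]
        System.out.println(sorted); // [id, score, test]
    }
}
```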

Stack Trace:
org.junit.ComparisonFailure: expected:<...s=(globs=[],fields=[[score, test, 
id],okFieldNames=[null, score, test, id]],reqFieldNames=[id,...> but 
was:<...s=(globs=[],fields=[[id, score, test],okFieldNames=[null, id, score, 
test]],reqFieldNames=[id,...>
at 
__randomizedtesting.SeedInfo.seed([6B722657F6CE495C:BA539156C9D424F4]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.search.ReturnFieldsTest.testToString(ReturnFieldsTest.java:109)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)

[jira] [Commented] (SOLR-3229) TermVectorComponent does not return terms in distributed search

2015-12-15 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15058557#comment-15058557
 ] 

David Smiley commented on SOLR-3229:


[~hossman] HighlightComponent, DebugComponent, and TermVectorComponent have a 
very similar bit of code in their finishStage() method. HighlightComponent & 
DebugComponent's versions were recently found to be buggy -- SOLR-8060 and 
SOLR-8059.  The Highlight side was recently fixed and I'm about to do the same 
for the Debug side.  But I'd like to refactor out some common lines of code 
between all 3 to ease maintenance.  However, the TV side has an odd bit where, 
if it can't look up the shard doc by its unique key, it adds it to the 
response anyway (~line 458).  _I would rather we remove this; I don't think 
it's something we should support.  UniqueKey should be required for 
distributed search to get TV info back_.  The code that's here now incorrectly 
assumes that if it was unable to look up the key in the resultIds, it's 
because the schema has no uniqueKey.  But another reason could be that it's a 
distrib.singlePass distributed search (related to the 2 bugs I'm looking at in 
the Highlight & Debug components).  Do you support my recommendation?

> TermVectorComponent does not return terms in distributed search
> ---
>
> Key: SOLR-3229
> URL: https://issues.apache.org/jira/browse/SOLR-3229
> Project: Solr
>  Issue Type: Bug
>  Components: SearchComponents - other
>Affects Versions: 4.0-ALPHA
> Environment: Ubuntu 11.10, openjdk-6
>Reporter: Hang Xie
>Assignee: Hoss Man
>  Labels: patch
> Fix For: 4.0, Trunk
>
> Attachments: SOLR-3229.patch, TermVectorComponent.patch
>
>
> TermVectorComponent does not return terms in distributed search: 
> distributedProcess() incorrectly uses the Solr unique key to do subrequests, 
> while process() expects Lucene document ids. Also, parameters are transferred 
> in a different format, so distributed search returns no results.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8410) the "read" permission must include all 'read' paths

2015-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15058549#comment-15058549
 ] 

ASF subversion and git services commented on SOLR-8410:
---

Commit 1720223 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1720223 ]

SOLR-8410: Add all read paths to 'read' permission in 
RuleBasedAuthorizationPlugin

> the "read" permission must include all 'read' paths
> ---
>
> Key: SOLR-8410
> URL: https://issues.apache.org/jira/browse/SOLR-8410
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-8410.patch
>
>
> In {{RuleBasedAuthorizationPlugin}} the "read" permission should also include the 
> following paths
> * /browse
> * /export
> * /spell
> * /suggest
> * /tvrh
> * /terms
> * /clustering 
> * /elevate
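
For context, "read" is one of the predefined permission names understood by 
RuleBasedAuthorizationPlugin. A minimal security.json wiring that permission to a 
role might look like the sketch below; the role name "reader" and user "alice" are 
hypothetical examples for illustration, not shipped defaults:

```json
{
  "authentication": {
    "class": "solr.BasicAuthPlugin"
  },
  "authorization": {
    "class": "solr.RuleBasedAuthorizationPlugin",
    "permissions": [
      {"name": "read", "role": "reader"}
    ],
    "user-role": {"alice": ["reader"]}
  }
}
```

With this change, a user holding the "reader" role would be authorized for the 
additional read-only paths listed above (/browse, /export, etc.) without listing 
them one by one.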



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-15 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15058575#comment-15058575
 ] 

Dawid Weiss commented on LUCENE-6933:
-

Thanks Mark. I don't think I'll use automated scripts; I'll most likely put 
together something that translates the raw history revision-by-revision 
(cleaning up the local SVN dump first). It can take a long time, since it's a 
one-time conversion. I realize it's mind-bending, but let's see if it works. 
I'll need some time to work through it; these are huge files.

> Create a (cleaned up) SVN history in git
> 
>
> Key: LUCENE-6933
> URL: https://issues.apache.org/jira/browse/LUCENE-6933
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>
> Goals:
> - selectively drop unnecessary stuff from history (cms/, javadocs/, JARs and 
> perhaps other binaries),
> - *preserve* history of all core sources. So svn log IndexWriter has to go 
> all the way back to when Doug was young and pretty. Oops, he's still 
> pretty of course.
> - provide a way to link git history with svn revisions. I would, ideally, 
> include a "imported from svn:rev XXX" in the commit log message.
> - annotate release tags and branches. I don't care much about interim 
> branches -- they are not important to me (please speak up if you think 
> otherwise).
> Non goals
> - no need to preserve "exact" history from SVN (the project may skip JARs, 
> etc.). Ability to build ancient versions is not an issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-15 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15058546#comment-15058546
 ] 

Mark Miller commented on LUCENE-6933:
-

For some reference, here is a wiki page documenting Maven's migration to git: 
https://cwiki.apache.org/confluence/display/MAVEN/Git+Migration

Here is one of the infra JIRAs: 
https://issues.apache.org/jira/browse/INFRA-5266 Migrate Maven subprojects to 
git (surefire,scm,wagon)

Not all of it is directly applicable to us, but it's a way into the INFRA 
tickets from a past migration.

> Create a (cleaned up) SVN history in git
> 
>
> Key: LUCENE-6933
> URL: https://issues.apache.org/jira/browse/LUCENE-6933
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>
> Goals:
> - selectively drop unnecessary stuff from history (cms/, javadocs/, JARs and 
> perhaps other binaries),
> - *preserve* history of all core sources. So svn log IndexWriter has to go 
> all the way back to when Doug was young and pretty. Oops, he's still 
> pretty of course.
> - provide a way to link git history with svn revisions. I would, ideally, 
> include a "imported from svn:rev XXX" in the commit log message.
> - annotate release tags and branches. I don't care much about interim 
> branches -- they are not important to me (please speak up if you think 
> otherwise).
> Non goals
> - no need to preserve "exact" history from SVN (the project may skip JARs, 
> etc.). Ability to build ancient versions is not an issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-15 Thread Dawid Weiss (JIRA)
Dawid Weiss created LUCENE-6933:
---

 Summary: Create a (cleaned up) SVN history in git
 Key: LUCENE-6933
 URL: https://issues.apache.org/jira/browse/LUCENE-6933
 Project: Lucene - Core
  Issue Type: Task
Reporter: Dawid Weiss
Assignee: Dawid Weiss


Goals:
- selectively drop unnecessary stuff from history (cms/, javadocs/, JARs and 
perhaps other binaries),
- *preserve* history of all core sources. So svn log IndexWriter has to go all 
the way back to when Doug was young and pretty. Oops, he's still pretty of 
course.
- provide a way to link git history with svn revisions. I would, ideally, 
include a "imported from svn:rev XXX" in the commit log message.
- annotate release tags and branches. I don't care much about interim branches 
-- they are not important to me (please speak up if you think otherwise).

Non goals
- no need to preserve "exact" history from SVN (the project may skip JARs, 
etc.). Ability to build ancient versions is not an issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8410) the "read" permission must include all 'read' paths

2015-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15058567#comment-15058567
 ] 

ASF subversion and git services commented on SOLR-8410:
---

Commit 1720226 from [~noble.paul] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1720226 ]

SOLR-8410: Add all read paths to 'read' permission in 
RuleBasedAuthorizationPlugin

> the "read" permission must include all 'read' paths
> ---
>
> Key: SOLR-8410
> URL: https://issues.apache.org/jira/browse/SOLR-8410
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-8410.patch
>
>
> In {{RuleBasedAuthorizationPlugin}} the "read" permission should also include the 
> following paths
> * /browse
> * /export
> * /spell
> * /suggest
> * /tvrh
> * /terms
> * /clustering 
> * /elevate



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8420) Date statistics: sumOfSquares overflows long

2015-12-15 Thread Tom Hill (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Hill updated SOLR-8420:
---
Attachment: 0001-Fix-overflow-in-date-statistics.patch

One line fix, plus tests.

> Date statistics: sumOfSquares overflows long
> 
>
> Key: SOLR-8420
> URL: https://issues.apache.org/jira/browse/SOLR-8420
> Project: Solr
>  Issue Type: Bug
>  Components: SearchComponents - other
>Affects Versions: 5.4
>Reporter: Tom Hill
>Priority: Minor
> Attachments: 0001-Fix-overflow-in-date-statistics.patch
>
>
> The values for dates are large enough that squaring them overflows a "long" 
> field. This should be converted to a double. 
> In StatsValuesFactory.java, line 755, DateStatsValues#updateTypeSpecificStats: 
> add a cast to double: 
> sumOfSquares += ((double)value * value * count);
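
The overflow is easy to demonstrate: dates are stored as milliseconds since the 
epoch, around 1.45e12 for late 2015, and squaring one already exceeds 
Long.MAX_VALUE (~9.22e18). A minimal sketch of the before/after arithmetic 
(hypothetical class and method names, not the Solr code):

```java
public class DateSumOfSquaresDemo {
    // Before the fix: all-long arithmetic wraps around for typical date values.
    public static long sumOfSquaresLong(long value, int count) {
        return value * value * count; // value*value already overflows a long
    }

    // After the fix: promote to double before multiplying, as in the patch.
    public static double sumOfSquaresDouble(long value, int count) {
        return (double) value * value * count;
    }

    public static void main(String[] args) {
        long dec2015 = 1450137600000L; // 2015-12-15T00:00:00Z in ms
        System.out.println(sumOfSquaresLong(dec2015, 1));   // wrapped-around garbage
        System.out.println(sumOfSquaresDouble(dec2015, 1)); // roughly 2.1e24
    }
}
```

The double version trades exactness for range, which is fine here: sumOfSquares 
feeds variance/stddev, where a double is what the math ultimately needs anyway.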



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8388) TestSolrQueryResponse (factor out, then extend)

2015-12-15 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-8388:
--
Attachment: SOLR-8388-part3of2.patch

fix the {{ReturnFieldsTest.testToString}} test added by part2of2 (the 
stringified fields include sets and the test incorrectly assumed a particular 
ordering for the sets' values)
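
The underlying gotcha: set equality in Java ignores element order, but toString() 
walks iteration order, so asserting on a set's stringified form bakes in one 
particular ordering. An illustrative sketch, assuming an insertion-ordered set 
such as LinkedHashSet (which matches the failure pattern in the Jenkins reports 
above; this is not the actual test code):

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;

public class SetOrderDemo {
    // Build an insertion-ordered set from the given elements.
    public static Set<String> orderedSet(String... elems) {
        return new LinkedHashSet<>(Arrays.asList(elems));
    }

    public static void main(String[] args) {
        Set<String> a = orderedSet("score", "test", "id");
        Set<String> b = orderedSet("id", "score", "test");

        // Set equality ignores order: these contain the same fields.
        System.out.println(a.equals(b)); // true

        // toString() reflects insertion order, so the strings differ even
        // though the sets are equal -- a string assertEquals is brittle.
        System.out.println(a); // [score, test, id]
        System.out.println(b); // [id, score, test]
    }
}
```

Comparing the parsed sets (or sorting before stringifying) makes such a test 
independent of the order in which the fields happened to be added.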

> TestSolrQueryResponse (factor out, then extend)
> ---
>
> Key: SOLR-8388
> URL: https://issues.apache.org/jira/browse/SOLR-8388
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8388-part1of2.patch, SOLR-8388-part2of2.patch, 
> SOLR-8388-part3of2.patch
>
>
> factor out 
> {{solr/core/src/test/org/apache/solr/response/TestSolrQueryResponse.java}} 
> from {{solr/core/src/test/org/apache/solr/servlet/ResponseHeaderTest.java}} 
> and then extend it



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Windows (64bit/jdk1.7.0_80) - Build # 5344 - Failure!

2015-12-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/5344/
Java: 64bit/jdk1.7.0_80 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.search.ReturnFieldsTest.testToString

Error Message:
expected:<...s=(globs=[],fields=[[score, test, id],okFieldNames=[null, score, 
test, id]],reqFieldNames=[id,...> but was:<...s=(globs=[],fields=[[id, test, 
score],okFieldNames=[null, id, test, score]],reqFieldNames=[id,...>

Stack Trace:
org.junit.ComparisonFailure: expected:<...s=(globs=[],fields=[[score, test, 
id],okFieldNames=[null, score, test, id]],reqFieldNames=[id,...> but 
was:<...s=(globs=[],fields=[[id, test, score],okFieldNames=[null, id, test, 
score]],reqFieldNames=[id,...>
at 
__randomizedtesting.SeedInfo.seed([C9C9D225B44E3C25:18E865248B54518D]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.search.ReturnFieldsTest.testToString(ReturnFieldsTest.java:109)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_66) - Build # 14909 - Still Failing!

2015-12-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14909/
Java: 64bit/jdk1.8.0_66 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CollectionsAPIDistributedZkTest

Error Message:
5 threads leaked from SUITE scope at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest: 1) Thread[id=8383, 
name=zkCallback-1507-thread-2, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)2) Thread[id=8109, 
name=TEST-CollectionsAPIDistributedZkTest.test-seed#[3A02597BD6D43DB3]-EventThread,
 state=WAITING, group=TGRP-CollectionsAPIDistributedZkTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
3) Thread[id=8110, name=zkCallback-1507-thread-1, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)4) Thread[id=8384, 
name=zkCallback-1507-thread-3, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)5) Thread[id=8108, 
name=TEST-CollectionsAPIDistributedZkTest.test-seed#[3A02597BD6D43DB3]-SendThread(127.0.0.1:41712),
 state=TIMED_WAITING, group=TGRP-CollectionsAPIDistributedZkTest] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:994)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.CollectionsAPIDistributedZkTest: 
   1) Thread[id=8383, name=zkCallback-1507-thread-2, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
   2) Thread[id=8109, 
name=TEST-CollectionsAPIDistributedZkTest.test-seed#[3A02597BD6D43DB3]-EventThread,
 

[jira] [Resolved] (SOLR-8414) AbstractDistribZkTestBase.verifyReplicaStatus can throw NullPointerException

2015-12-15 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved SOLR-8414.
---
   Resolution: Fixed
Fix Version/s: Trunk
   5.5

> AbstractDistribZkTestBase.verifyReplicaStatus can throw NullPointerException
> 
>
> Key: SOLR-8414
> URL: https://issues.apache.org/jira/browse/SOLR-8414
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8414.patch
>
>
> patch against trunk to follow



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8131) Make ManagedIndexSchemaFactory as the default in Solr

2015-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15057547#comment-15057547
 ] 

ASF subversion and git services commented on SOLR-8131:
---

Commit 1720083 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1720083 ]

SOLR-8131: Use SolrResourceLoader to instantiate ManagedIndexSchemaFactory when 
no schema factory is specified in solrconfig.xml

> Make ManagedIndexSchemaFactory as the default in Solr
> -
>
> Key: SOLR-8131
> URL: https://issues.apache.org/jira/browse/SOLR-8131
> Project: Solr
>  Issue Type: Wish
>  Components: Data-driven Schema, Schema and Analysis
>Reporter: Shalin Shekhar Mangar
>Assignee: Varun Thacker
>  Labels: difficulty-easy, impact-high
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8131-schemaless-fix.patch, 
> SOLR-8131-schemaless-fix.patch, SOLR-8131.patch, SOLR-8131.patch, 
> SOLR-8131.patch, SOLR-8131.patch, SOLR-8131.patch, SOLR-8131_5x.patch
>
>
> The techproducts and other examples shipped with Solr all use the 
> ClassicIndexSchemaFactory which disables all Schema APIs which need to modify 
> schema. It'd be nice to be able to support both read/write schema APIs 
> without needing to enable data-driven or schema-less mode.
> I propose to change all 5.x examples to explicitly use 
> ManagedIndexSchemaFactory and to enable ManagedIndexSchemaFactory by default 
> in trunk (6.x).
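
For reference, the schema factory is selected in solrconfig.xml roughly as in the 
sketch below (standard configuration elements; the managed-schema options shown 
reflect common usage rather than the exact shipped example configs):

```xml
<!-- Keep the classic, immutable schema.xml behaviour (disables write Schema APIs) -->
<schemaFactory class="ClassicIndexSchemaFactory"/>

<!-- Or opt in to the managed schema, editable via the Schema API -->
<schemaFactory class="ManagedIndexSchemaFactory">
  <bool name="mutable">true</bool>
  <str name="managedSchemaResourceName">managed-schema</str>
</schemaFactory>
```

The proposal above amounts to making the second form the implicit default when no 
schemaFactory element is present, while read/write Schema API support remains 
independent of data-driven (schemaless) field guessing.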



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk-9-ea+95) - Build # 15207 - Failure!

2015-12-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15207/
Java: 64bit/jdk-9-ea+95 -XX:-UseCompressedOops -XX:+UseParallelGC 
-XX:-CompactStrings

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CollectionsAPIDistributedZkTest

Error Message:
5 threads leaked from SUITE scope at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest: 1) Thread[id=7250, 
name=zkCallback-1198-thread-3, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:461)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1082)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)2) Thread[id=6957, 
name=TEST-CollectionsAPIDistributedZkTest.test-seed#[655981353866AE9E]-SendThread(127.0.0.1:56879),
 state=TIMED_WAITING, group=TGRP-CollectionsAPIDistributedZkTest] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:101)
 at 
org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:940)
 at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1003)
3) Thread[id=7249, name=zkCallback-1198-thread-2, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:461)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1082)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)4) Thread[id=6958, 
name=TEST-CollectionsAPIDistributedZkTest.test-seed#[655981353866AE9E]-EventThread,
 state=WAITING, group=TGRP-CollectionsAPIDistributedZkTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:178) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2061)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
5) Thread[id=6959, name=zkCallback-1198-thread-1, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:461)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1082)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.CollectionsAPIDistributedZkTest: 
   1) Thread[id=7250, name=zkCallback-1198-thread-3, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest]
at jdk.internal.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:461)
at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1082)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)

Re: [RESULT] [VOTE] Release Lucene/Solr 5.4.0-RC1

2015-12-15 Thread Upayavira
Ahh, that's a Lucene one. I left the Solr one alone at 5.3.1, as it
stated "this tutorial was prepared with 5.3.1" which seemed consistent
within itself.

Still need someone to look at the back compat issue in trunk.

Upayavira

On Tue, Dec 15, 2015, at 08:26 AM, Anshum Gupta wrote:
> Hi Upayavira,
>
> Guess you missed updating the following page: 
> http://lucene.apache.org/core/quickstart.html. I'll fix it.
>
> On Fri, Dec 11, 2015 at 9:33 AM, Anshum Gupta
>  wrote:
>> Thank you Upayavira!
>>
>> This has been among the smoothest releases in a long time.
>>
>> On Thu, Dec 10, 2015 at 2:29 PM, Upayavira  wrote:
>>> This vote has passed, with 11 +1 votes. I shall continue with the
>>> remaining steps to publish the artifacts.
>>>
>>> Thank you all!
>>>
>>> Upayavira
>>>
>>> On Wed, Dec 9, 2015, at 07:44 PM, Yonik Seeley wrote:
>>> > +1
>>> >
>>> > -Yonik
>>> >
>>> > On Sat, Dec 5, 2015 at 5:58 AM, Upayavira  wrote:
>>> > > Please vote for the RC1 release candidate for Lucene/Solr 5.4.0
>>> > >
>>> > > The artifacts can be downloaded from:
>>> > > https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.4.0-RC1-rev178046
>>> > >
>>> > > You can run the smoke tester directly with this command:
>>> > > python3 -u dev-tools/scripts/smokeTestRelease.py
>>> > > https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.4.0-RC1-rev178046
>>> > >
>>> > > I will let this vote run until midnight (GMT) on Wednesday 9
>>> > > December.
>>> > >
>>> > > Please cast your votes! (and let me know, politely :-) if I missed
>>> > > anything)
>>> > >
>>> > > Upayavira
>>> > >
>>> > > -
>>> > > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> > > For additional commands, e-mail: dev-h...@lucene.apache.org
>>> > >
>>> >
>>> > -
>>> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> > For additional commands, e-mail: dev-h...@lucene.apache.org
>>> >
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>>
>>
>>
>> --
>> Anshum Gupta
>>
>
>
>
> --
> Anshum Gupta


Re: [RESULT] [VOTE] Release Lucene/Solr 5.4.0-RC1

2015-12-15 Thread Anshum Gupta
Hi Upayavira,

Guess you missed updating the following page to
http://lucene.apache.org/core/quickstart.html. I'll fix it.

On Fri, Dec 11, 2015 at 9:33 AM, Anshum Gupta 
wrote:

> Thank you Upayavira!
>
> This has been among the smoothest releases in a long time.
>
> On Thu, Dec 10, 2015 at 2:29 PM, Upayavira  wrote:
>
>> This vote has passed, with 11 +1 votes. I shall continue with the
>> remaining steps to publish the artifacts.
>>
>> Thank you all!
>>
>> Upayavira
>>
>> On Wed, Dec 9, 2015, at 07:44 PM, Yonik Seeley wrote:
>> > +1
>> >
>> > -Yonik
>> >
>> > On Sat, Dec 5, 2015 at 5:58 AM, Upayavira  wrote:
>> > > Please vote for the RC1 release candidate for Lucene/Solr 5.4.0
>> > >
>> > > The artifacts can be downloaded from:
>> > >
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.4.0-RC1-rev178046
>> > >
>> > > You can run the smoke tester directly with this command:
>> > > python3 -u dev-tools/scripts/smokeTestRelease.py
>> > >
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.4.0-RC1-rev178046
>> > >
>> > > I will let this vote run until midnight (GMT) on Wednesday 9 December.
>> > >
>> > > Please cast your votes! (and let me know, politely :-) if I missed
>> > > anything)
>> > >
>> > > Upayavira
>> > >
>> > > -
>> > > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> > > For additional commands, e-mail: dev-h...@lucene.apache.org
>> > >
>> >
>> > -
>> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> > For additional commands, e-mail: dev-h...@lucene.apache.org
>> >
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>>
>
>
> --
> Anshum Gupta
>



-- 
Anshum Gupta


[JENKINS-EA] Lucene-Solr-5.x-Linux (64bit/jdk-9-ea+95) - Build # 14910 - Still Failing!

2015-12-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14910/
Java: 64bit/jdk-9-ea+95 -XX:-UseCompressedOops -XX:+UseSerialGC 
-XX:-CompactStrings

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SaslZkACLProviderTest

Error Message:
5 threads leaked from SUITE scope at org.apache.solr.cloud.SaslZkACLProviderTest:
   1) Thread[id=11593, name=kdcReplayCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
        at jdk.internal.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
        at java.lang.Thread.run(Thread.java:747)
   2) Thread[id=11592, name=apacheds, state=WAITING, group=TGRP-SaslZkACLProviderTest]
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:516)
        at java.util.TimerThread.mainLoop(Timer.java:526)
        at java.util.TimerThread.run(Timer.java:505)
   3) Thread[id=11596, name=groupCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
        at jdk.internal.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
        at java.lang.Thread.run(Thread.java:747)
   4) Thread[id=11594, name=ou=system.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
        at jdk.internal.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
        at java.lang.Thread.run(Thread.java:747)
   5) Thread[id=11595, name=changePwdReplayCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
        at jdk.internal.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
        at java.lang.Thread.run(Thread.java:747)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE scope at org.apache.solr.cloud.SaslZkACLProviderTest: 
   1) Thread[id=11593, name=kdcReplayCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
        at jdk.internal.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
        at 

[jira] [Resolved] (SOLR-8131) Make ManagedIndexSchemaFactory as the default in Solr

2015-12-15 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-8131.
-
Resolution: Fixed

> Make ManagedIndexSchemaFactory as the default in Solr
> -
>
> Key: SOLR-8131
> URL: https://issues.apache.org/jira/browse/SOLR-8131
> Project: Solr
>  Issue Type: Wish
>  Components: Data-driven Schema, Schema and Analysis
>Reporter: Shalin Shekhar Mangar
>Assignee: Varun Thacker
>  Labels: difficulty-easy, impact-high
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8131-schemaless-fix.patch, 
> SOLR-8131-schemaless-fix.patch, SOLR-8131.patch, SOLR-8131.patch, 
> SOLR-8131.patch, SOLR-8131.patch, SOLR-8131.patch, SOLR-8131_5x.patch
>
>
> The techproducts and other examples shipped with Solr all use the 
> ClassicIndexSchemaFactory which disables all Schema APIs which need to modify 
> schema. It'd be nice to be able to support both read/write schema APIs 
> without needing to enable data-driven or schema-less mode.
> I propose to change all 5.x examples to explicitly use 
> ManagedIndexSchemaFactory and to enable ManagedIndexSchemaFactory by default 
> in trunk (6.x).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8388) TestSolrQueryResponse (factor out, then extend)

2015-12-15 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-8388:
--
Attachment: SOLR-8388-part2of2.patch

additions:
* TestSolrQueryResponse.testName
* TestSolrQueryResponse.testValues
* TestSolrQueryResponse.testReturnFields
* TestSolrQueryResponse.testException
* TestSolrQueryResponse.testHttpCaching

also:
* SolrReturnFields.toString method
* ReturnFieldsTest.testToString test



> TestSolrQueryResponse (factor out, then extend)
> ---
>
> Key: SOLR-8388
> URL: https://issues.apache.org/jira/browse/SOLR-8388
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8388-part1of2.patch, SOLR-8388-part2of2.patch
>
>
> factor out 
> {{solr/core/src/test/org/apache/solr/response/TestSolrQueryResponse.java}} 
> from {{solr/core/src/test/org/apache/solr/servlet/ResponseHeaderTest.java}} 
> and then extend it






[jira] [Updated] (SOLR-8208) DocTransformer executes sub-queries

2015-12-15 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-8208:
---
Attachment: SOLR-8208.patch

Initial patch.
I changed the API a little to make it easier to parse.
{code}
[subquery f=fromField t=toField v=value start=0 rows=10]
{code}
I'm still working on the sort and fl params. Am I on the right track?


> DocTransformer executes sub-queries
> ---
>
> Key: SOLR-8208
> URL: https://issues.apache.org/jira/browse/SOLR-8208
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mikhail Khludnev
>  Labels: features, newbie
> Attachments: SOLR-8208.patch
>
>
> The initial idea was to return "from" side of query time join via 
> doctransformer. I suppose it isn't  query-time join specific, thus let to 
> specify any query and parameters for them, let's call them sub-query. But it 
> might be problematic to escape subquery parameters, including local ones, 
> e.g. what if subquery needs to specify own doctransformer in =\[..\] ?
> I suppose we can allow to specify subquery parameter prefix:
> {code}
> ..=id,[subquery paramPrefix=subq1. 
> fromIndex=othercore],score,..={!term f=child_id 
> v=$subq1.row.id}=3=price&..
> {code}   
> * {{paramPrefix=subq1.}} shifts parameters for subquery: {{subq1.q}} turns to 
> {{q}} for subquery, {{subq1.rows}} to {{rows}}
> * {{fromIndex=othercore}} optional param allows to run subquery on other 
> core, like it works on query time join
> * the itchiest one is to reference to document field from subquery 
> parameters, here I propose to use local param {{v}} and param deference 
> {{v=$param}} thus every document field implicitly introduces parameter for 
> subquery $\{paramPrefix\}row.$\{fieldName\}, thus above subquery is 
> q=child_id:, presumably we can drop "row." in the middle 
> (reducing to v=$subq1.id), until someone deal with {{rows}}, {{sort}} fields. 
> * \[subquery\], or \[query\], or ? 
> Caveat: it should be a way slow; it handles only search result page, not 
> entire result set. 






Re: trunk backwards-compatibility update problem

2015-12-15 Thread Adrien Grand
Lucene54Codec was not defined in
lucene/backward-codecs/src/resources/META-INF/services/org.apache.lucene.codecs.Codec
on trunk. This should work now.
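For readers unfamiliar with the mechanism: Codec.forName resolves codec names through an SPI loader that only sees implementations listed in META-INF/services files, so a codec class that exists in the source tree but is missing from the services file is invisible at runtime, which is exactly the "SPI class ... does not exist" error below. A minimal Python sketch of that lookup behavior (a toy registry, not Lucene's actual NamedSPILoader):

```python
class SPILookupError(Exception):
    pass

class NamedSPILoader:
    """Toy stand-in for an SPI loader: it can only resolve names that
    were registered, mimicking the lines of a META-INF/services file."""
    def __init__(self, service_lines):
        self.registry = {name: name + "Codec" for name in service_lines}

    def lookup(self, name):
        if name not in self.registry:
            raise SPILookupError(
                "An SPI class with name '%s' does not exist. The current "
                "classpath supports: %s" % (name, sorted(self.registry)))
        return self.registry[name]

# trunk's backward-codecs services list before the fix: Lucene54 missing
broken = NamedSPILoader(["Lucene50", "Lucene53", "Lucene60"])
try:
    broken.lookup("Lucene54")
except SPILookupError as e:
    print(e)

# after adding the Lucene54 line to the services file, the lookup succeeds
loader = NamedSPILoader(["Lucene50", "Lucene53", "Lucene54", "Lucene60"])
print(loader.lookup("Lucene54"))
```

This is why the class can be present on disk (or even on the classpath) and still fail Codec.forName: registration, not existence, drives the lookup.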

On Mon, Dec 14, 2015, at 20:55, Upayavira  wrote:

> Running "python3 -u dev-tools/scripts/addBackcompatIndexes.py 5.4.0" on
> lucene_5x worked fine, however on trunk it gave the below error.
>
> I notice there is a
> lucene/core/src/java/org/apache/lucene/codecs/lucene54/Lucene54Codec.java
> in lucene_5x but not in trunk.
>
> Any ideas?
>
> Upayavira
>
> [junit4]   2> NOTE: reproduce with: ant test
> -Dtestcase=TestBackwardsCompatibility
> -Dtests.method=testNextIntoWrongField -Dtests.seed=29AA9C3E704E7F75
> -Dtests.slow=true -Dtests.locale=sr__#Latn -Dtests.timezone=Africa/Accra
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
>[junit4] ERROR   0.16s |
>TestBackwardsCompatibility.testNextIntoWrongField <<<
>[junit4]> Throwable #1: java.lang.IllegalArgumentException: Could
>not load codec 'Lucene54'.  Did you forget to add
>lucene-backward-codecs.jar?
>[junit4]>   at
>__randomizedtesting.SeedInfo.seed([29AA9C3E704E7F75:6F2D499D140AC549]:0)
>[junit4]>   at
>org.apache.lucene.index.SegmentInfos.readCodec(SegmentInfos.java:421)
>[junit4]>   at
>org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:340)
>[junit4]>   at
>
>  
> org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:53)
>[junit4]>   at
>
>  
> org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:50)
>[junit4]>   at
>
>  
> org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:671)
>[junit4]>   at
>
>  
> org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:73)
>[junit4]>   at
>org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:63)
>[junit4]>   at
>
>  
> org.apache.lucene.index.TestBackwardsCompatibility.testNextIntoWrongField(TestBackwardsCompatibility.java:1012)
>[junit4]>   at java.lang.Thread.run(Thread.java:745)
>[junit4]> Caused by: java.lang.IllegalArgumentException: An SPI
>class of type org.apache.lucene.codecs.Codec with name 'Lucene54'
>does not exist.  You need to add the corresponding JAR file
>supporting this SPI to your classpath.  The current classpath
>supports the following names: [Asserting, CheapBastard,
>FastCompressingStoredFields,
>FastDecompressionCompressingStoredFields,
>HighCompressionCompressingStoredFields, DummyCompressingStoredFields,
>SimpleText, Lucene60, Lucene50, Lucene53]
>[junit4]>   at
>org.apache.lucene.util.NamedSPILoader.lookup(NamedSPILoader.java:116)
>[junit4]>   at
>org.apache.lucene.codecs.Codec.forName(Codec.java:116)
>[junit4]>   at
>org.apache.lucene.index.SegmentInfos.readCodec(SegmentInfos.java:409)
>[junit4]>   ... 43 more
>[junit4]   2> NOTE: reproduce with: ant test
>-Dtestcase=TestBackwardsCompatibility
>-Dtests.method=testAddOldIndexesReader -Dtests.seed=29AA9C3E704E7F75
>-Dtests.slow=true -Dtests.locale=sr__#Latn
>-Dtests.timezone=Africa/Accra -Dtests.asserts=true
>-Dtests.file.encoding=US-ASCII
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Commented] (SOLR-8208) DocTransformer executes sub-queries

2015-12-15 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15057857#comment-15057857
 ] 

Upayavira commented on SOLR-8208:
-

This is useful stuff. Much needed. I assume this would work as well on block 
joins and alongside pseudo joins, which I think I'm seeing above?

Traditionally, in a local params query parser, the parameter v refers to the 
actual query string, so: {code}q={!lucene v=$qq}&qq=field:(my search) {code} 
would be a valid syntax. I would suggest using n= (for name) or tag= for the 
field name of the newly created field to avoid association with this v= syntax.
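For illustration, the parameter dereferencing in that example works roughly like the following sketch (a simplified model, not Solr's actual local-params parser):

```python
def resolve_local_param(value, request_params):
    """Resolve a $name reference against the request params, as
    {!lucene v=$qq} does; literal values pass through unchanged."""
    if value.startswith("$"):
        return request_params.get(value[1:], "")
    return value

params = {"q": "{!lucene v=$qq}", "qq": "field:(my search)"}
print(resolve_local_param("$qq", params))       # field:(my search)
print(resolve_local_param("field:x", params))   # field:x
```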

Is a lookup based upon the ID of a field in the current document sufficient? I 
suspect it is.

Do you also support fromIndex - that is, executing the query against another 
core or collection? *That* would be the killer feature.

As to the fq={!tag=join}{!join blah} syntax, if you had [subquery fq=join], 
you wouldn't actually execute the join query; you would just locate the query 
object and extract its key parameters, saving the user from having to enter 
them multiple times. Having both options would be super cool.

> DocTransformer executes sub-queries
> ---
>
> Key: SOLR-8208
> URL: https://issues.apache.org/jira/browse/SOLR-8208
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mikhail Khludnev
>  Labels: features, newbie
> Attachments: SOLR-8208.patch
>
>
> The initial idea was to return "from" side of query time join via 
> doctransformer. I suppose it isn't  query-time join specific, thus let to 
> specify any query and parameters for them, let's call them sub-query. But it 
> might be problematic to escape subquery parameters, including local ones, 
> e.g. what if subquery needs to specify own doctransformer in =\[..\] ?
> I suppose we can allow to specify subquery parameter prefix:
> {code}
> ..=id,[subquery paramPrefix=subq1. 
> fromIndex=othercore],score,..={!term f=child_id 
> v=$subq1.row.id}=3=price&..
> {code}   
> * {{paramPrefix=subq1.}} shifts parameters for subquery: {{subq1.q}} turns to 
> {{q}} for subquery, {{subq1.rows}} to {{rows}}
> * {{fromIndex=othercore}} optional param allows to run subquery on other 
> core, like it works on query time join
> * the itchiest one is to reference to document field from subquery 
> parameters, here I propose to use local param {{v}} and param deference 
> {{v=$param}} thus every document field implicitly introduces parameter for 
> subquery $\{paramPrefix\}row.$\{fieldName\}, thus above subquery is 
> q=child_id:, presumably we can drop "row." in the middle 
> (reducing to v=$subq1.id), until someone deal with {{rows}}, {{sort}} fields. 
> * \[subquery\], or \[query\], or ? 
> Caveat: it should be a way slow; it handles only search result page, not 
> entire result set. 






[jira] [Comment Edited] (SOLR-8208) DocTransformer executes sub-queries

2015-12-15 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15057812#comment-15057812
 ] 

Cao Manh Dat edited comment on SOLR-8208 at 12/15/15 10:33 AM:
---

Initial patch.
I changed the API a little to make it easier to parse.
{code}
[subquery f=fromField t=toField v=value start=0 rows=10]
{code}

The result so far.
Input
{code}
doc("id", "4","name_s", "dave", "title_s", "MTS", "dept_ss_dv","Support", 
"dept_ss_dv","Engineering"))

doc("id","10", "dept_id_s", "Engineering", "text_t","These guys develop stuff", 
"salary_i_dv", "1000")
doc("id","13", "dept_id_s", "Support", "text_t","These guys help 
customers","salary_i_dv", "800")
{code}
Output
{code}
{
  "id": "4",
  "name_s_dv": "dave",
  "title_s_dv": "MTS",
  "dept_ss_dv": [
"Support",
"Engineering"
  ],
  "depts": [
{
  "id": "10",
  "dept_id_s_dv": "Engineering",
  "text_t": "These guys develop stuff",
  "salary_i_dv": 1000
},
{
  "id": "13",
  "dept_id_s_dv": "Support",
  "text_t": "These guys help customers",
  "salary_i_dv": 800
}
  ]
}
{code}

I'm still working on the sort and fl params. Am I on the right track?


was (Author: caomanhdat):
Initial patch,
I change the API a little bit to make it easier to parse.
{code}
[subquery f=fromField t=toField v=value start=0 rows=10]
{code}
Managing to work on sort and fl params. Am i on right track?


> DocTransformer executes sub-queries
> ---
>
> Key: SOLR-8208
> URL: https://issues.apache.org/jira/browse/SOLR-8208
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mikhail Khludnev
>  Labels: features, newbie
> Attachments: SOLR-8208.patch
>
>
> The initial idea was to return "from" side of query time join via 
> doctransformer. I suppose it isn't  query-time join specific, thus let to 
> specify any query and parameters for them, let's call them sub-query. But it 
> might be problematic to escape subquery parameters, including local ones, 
> e.g. what if subquery needs to specify own doctransformer in =\[..\] ?
> I suppose we can allow to specify subquery parameter prefix:
> {code}
> ..=id,[subquery paramPrefix=subq1. 
> fromIndex=othercore],score,..={!term f=child_id 
> v=$subq1.row.id}=3=price&..
> {code}   
> * {{paramPrefix=subq1.}} shifts parameters for subquery: {{subq1.q}} turns to 
> {{q}} for subquery, {{subq1.rows}} to {{rows}}
> * {{fromIndex=othercore}} optional param allows to run subquery on other 
> core, like it works on query time join
> * the itchiest one is to reference to document field from subquery 
> parameters, here I propose to use local param {{v}} and param deference 
> {{v=$param}} thus every document field implicitly introduces parameter for 
> subquery $\{paramPrefix\}row.$\{fieldName\}, thus above subquery is 
> q=child_id:, presumably we can drop "row." in the middle 
> (reducing to v=$subq1.id), until someone deal with {{rows}}, {{sort}} fields. 
> * \[subquery\], or \[query\], or ? 
> Caveat: it should be a way slow; it handles only search result page, not 
> entire result set. 






[jira] [Commented] (LUCENE-6930) Decouple GeoPointField from NumericType

2015-12-15 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15057865#comment-15057865
 ] 

Michael McCandless commented on LUCENE-6930:


+1, {{LegacyNumericType}} is now deprecated in trunk (to be removed in 7.0), so 
we should migrate away from it ...

But maybe we should take this further: once we get all dimensional-values-based 
geo queries working well in trunk (e.g. at least {{DimensionalDistanceQuery}} 
and {{DimensionalDistanceRangeQuery}} are still missing?), should we deprecate 
the postings-based geo queries as well?

> Decouple GeoPointField from NumericType
> ---
>
> Key: LUCENE-6930
> URL: https://issues.apache.org/jira/browse/LUCENE-6930
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Nicholas Knize
>
> {{GeoPointField}} currently relies on {{NumericTokenStream}} to create prefix 
> terms for a GeoPoint using the precision step defined in {{GeoPointField}}. 
> At search time {{GeoPointTermsEnum}} recurses to a max precision that is 
> computed by the Query parameters. This max precision is never the full 
> precision, so creating and indexing the full precision terms is useless and 
> wasteful (it was always a side effect of just using indexing logic from the 
> Numeric type). 
> Furthermore, since the numerical logic always stored high precision terms 
> first, the recursion in {{GeoPointTermsEnum}} required transient memory for 
> storing ranges. By moving the trie logic to its own {{GeoPointTokenStream}} 
> and reversing the term order (such that lower resolution terms are first), 
> the GeoPointTermsEnum can naturally traverse, enabling on-demand creation of 
> PrefixTerms. This will be done in a separate issue.






[jira] [Created] (LUCENE-6931) Cutover BBoxStrategy to dimensional values

2015-12-15 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-6931:
--

 Summary: Cutover BBoxStrategy to dimensional values
 Key: LUCENE-6931
 URL: https://issues.apache.org/jira/browse/LUCENE-6931
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Reporter: Michael McCandless


Spinoff from LUCENE-6917...

{{BBoxStrategy}} uses {{LegacyNumericType}} but could probably cutover to 
dimensional values instead?

But this is a major change: it would require re-indexing on upgrade... 






[jira] [Comment Edited] (SOLR-8208) DocTransformer executes sub-queries

2015-12-15 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15057812#comment-15057812
 ] 

Cao Manh Dat edited comment on SOLR-8208 at 12/15/15 10:34 AM:
---

Initial patch.
I changed the API a little to make it easier to parse.
{code}
[subquery f=fromField t=toField v=value start=0 rows=10]
{code}

The result so far.
Input
{code}
doc("id", "4","name_s", "dave", "title_s", "MTS", "dept_ss_dv","Support", 
"dept_ss_dv","Engineering"))

doc("id","10", "dept_id_s", "Engineering", "text_t","These guys develop stuff", 
"salary_i_dv", "1000")
doc("id","13", "dept_id_s", "Support", "text_t","These guys help 
customers","salary_i_dv", "800")
{code}

Query
{code}
q=name_s:dave&fl=*,[subquery f=dept_ss_dv t=dept_id_s v=depts]
{code}

Output
{code}
{
  "id": "4",
  "name_s_dv": "dave",
  "title_s_dv": "MTS",
  "dept_ss_dv": [
"Support",
"Engineering"
  ],
  "depts": [
{
  "id": "10",
  "dept_id_s_dv": "Engineering",
  "text_t": "These guys develop stuff",
  "salary_i_dv": 1000
},
{
  "id": "13",
  "dept_id_s_dv": "Support",
  "text_t": "These guys help customers",
  "salary_i_dv": 800
}
  ]
}
{code}

I'm still working on the sort and fl params. Am I on the right track?
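The Input/Query/Output triple above can be sanity-checked with a tiny simulation of the proposed [subquery f=... t=... v=...] contract: for each parent document, find the indexed docs whose t field matches one of the parent's f values and attach them under the v key. (This is an illustration only, using the example's field names; it is not the patch's actual code.)

```python
# the two "dept" docs from the example, as they come back (stored/_dv fields)
depts_index = [
    {"id": "10", "dept_id_s_dv": "Engineering",
     "text_t": "These guys develop stuff", "salary_i_dv": 1000},
    {"id": "13", "dept_id_s_dv": "Support",
     "text_t": "These guys help customers", "salary_i_dv": 800},
]

def subquery_transform(doc, f, t, v, index):
    """Attach, under key v, every indexed doc whose field t matches one
    of the parent document's values in field f (in index order)."""
    wanted = set(doc.get(f, []))
    doc[v] = [d for d in index if d[t] in wanted]
    return doc

parent = {"id": "4", "name_s_dv": "dave", "title_s_dv": "MTS",
          "dept_ss_dv": ["Support", "Engineering"]}
result = subquery_transform(parent, f="dept_ss_dv", t="dept_id_s_dv",
                            v="depts", index=depts_index)
print(result["depts"])
```

Applied to the parent doc id=4, this produces the "depts" array shown in the Output block above.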


was (Author: caomanhdat):
Initial patch,
I change the API a little bit to make it easier to parse.
{code}
[subquery f=fromField t=toField v=value start=0 rows=10]
{code}

The result so far.
Input
{code}
doc("id", "4","name_s", "dave", "title_s", "MTS", "dept_ss_dv","Support", 
"dept_ss_dv","Engineering"))

doc("id","10", "dept_id_s", "Engineering", "text_t","These guys develop stuff", 
"salary_i_dv", "1000")
doc("id","13", "dept_id_s", "Support", "text_t","These guys help 
customers","salary_i_dv", "800")
{code}
Output
{code}
{
  "id": "4",
  "name_s_dv": "dave",
  "title_s_dv": "MTS",
  "dept_ss_dv": [
"Support",
"Engineering"
  ],
  "depts": [
{
  "id": "10",
  "dept_id_s_dv": "Engineering",
  "text_t": "These guys develop stuff",
  "salary_i_dv": 1000
},
{
  "id": "13",
  "dept_id_s_dv": "Support",
  "text_t": "These guys help customers",
  "salary_i_dv": 800
}
  ]
}
{code}

Managing to work on sort and fl params. Am i on right track?

> DocTransformer executes sub-queries
> ---
>
> Key: SOLR-8208
> URL: https://issues.apache.org/jira/browse/SOLR-8208
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mikhail Khludnev
>  Labels: features, newbie
> Attachments: SOLR-8208.patch
>
>
> The initial idea was to return "from" side of query time join via 
> doctransformer. I suppose it isn't  query-time join specific, thus let to 
> specify any query and parameters for them, let's call them sub-query. But it 
> might be problematic to escape subquery parameters, including local ones, 
> e.g. what if subquery needs to specify own doctransformer in =\[..\] ?
> I suppose we can allow to specify subquery parameter prefix:
> {code}
> ..=id,[subquery paramPrefix=subq1. 
> fromIndex=othercore],score,..={!term f=child_id 
> v=$subq1.row.id}=3=price&..
> {code}   
> * {{paramPrefix=subq1.}} shifts parameters for subquery: {{subq1.q}} turns to 
> {{q}} for subquery, {{subq1.rows}} to {{rows}}
> * {{fromIndex=othercore}} optional param allows to run subquery on other 
> core, like it works on query time join
> * the itchiest one is to reference to document field from subquery 
> parameters, here I propose to use local param {{v}} and param deference 
> {{v=$param}} thus every document field implicitly introduces parameter for 
> subquery $\{paramPrefix\}row.$\{fieldName\}, thus above subquery is 
> q=child_id:, presumably we can drop "row." in the middle 
> (reducing to v=$subq1.id), until someone deal with {{rows}}, {{sort}} fields. 
> * \[subquery\], or \[query\], or ? 
> Caveat: it should be a way slow; it handles only search result page, not 
> entire result set. 






[jira] [Commented] (SOLR-8409) Complex q param in Streaming Expression results in a bad query

2015-12-15 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15057921#comment-15057921
 ] 

Dennis Gove commented on SOLR-8409:
---

Interestingly, if I leave the q param out entirely, I don't see any raised 
exception. The same holds if I leave out a field to filter on. I've confirmed 
that solrconfig-streaming.xml doesn't include default q or df settings, so I'd 
expect to see an exception in both of these cases.
{code}
search(collection1, fl="id,a_s,a_i,a_f", sort="a_f asc, a_i asc")
search(collection1, fl="id,a_s,a_i,a_f", sort="a_f asc, a_i asc", q="foo")
{code}
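The symptom is consistent with the expression's named parameters being split without respect for quoting, so a q value containing spaces loses everything after the first one. A toy demonstration of the difference between naive and quote-aware splitting (a simplified expression string, without the inner escaped quotes; this is not the actual streaming-expression parser):

```python
import shlex

expr_args = ('fl="id,first" sort="first asc" '
             'q="presentTitles:chief officer AND age:[36 TO *]"')

# naive whitespace split truncates the q value at its first space
naive_q = [t for t in expr_args.split() if t.startswith("q=")][0][2:]
print(naive_q)  # only '"presentTitles:chief' survives

# quote-aware splitting keeps each name=value pair whole
proper = dict(tok.split("=", 1) for tok in shlex.split(expr_args))
print(proper["q"])  # the full query string, spaces intact
```

With the naive split, the parser is left with a q fragment and stray bare tokens, which would plausibly surface downstream as "no field name specified in query and no default specified via 'df' param".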

> Complex q param in Streaming Expression results in a bad query
> --
>
> Key: SOLR-8409
> URL: https://issues.apache.org/jira/browse/SOLR-8409
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Dennis Gove
>Priority: Minor
>  Labels: streaming, streaming_api
> Attachments: SOLR-8409.patch
>
>
> When providing an expression like 
> {code}
> stream=search(people, fl="id,first", sort="first asc", 
> q="presentTitles:\"chief executive officer\" AND age:[36 TO *]")
> {code}
> the following error is seen.
> {code}
> no field name specified in query and no default specified via 'df' param
> {code}
> I believe the issue is related to the \" (escaped quotes) and the spaces in 
> the q field. If I remove the spaces then the query returns results as 
> expected (though I've yet to validate if those results are accurate).
> This requires some investigation to get down to the root cause. I would like 
> to fix it before Solr 6 is cut.






[jira] [Resolved] (SOLR-7730) speed-up faceting on doc values fields

2015-12-15 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev resolved SOLR-7730.

Resolution: Fixed

> speed-up faceting on doc values fields
> --
>
> Key: SOLR-7730
> URL: https://issues.apache.org/jira/browse/SOLR-7730
> Project: Solr
>  Issue Type: Improvement
>  Components: faceting
>Affects Versions: 5.2.1
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>  Labels: patch
> Fix For: 5.4
>
> Attachments: LUCENE-7730.patch, LUCENE-7730.patch, 
> SOLR-7730-changes.patch, SOLR-7730.patch
>
>
> every time we count facets on DocValues fields in Solr on a multi-segment 
> index, we see this unnecessary hotspot:
> {code}
> 
> at 
> org.apache.lucene.index.MultiFields.getMergedFieldInfos(MultiFields.java:248)
> at 
> org.apache.lucene.index.SlowCompositeReaderWrapper.getFieldInfos(SlowCompositeReaderWrapper.java:239)
> at 
> org.apache.lucene.index.SlowCompositeReaderWrapper.getSortedSetDocValues(SlowCompositeReaderWrapper.java:176)
> at 
> org.apache.solr.request.DocValuesFacets.getCounts(DocValuesFacets.java:72)
> at 
> org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:460) 
> {code}
> the reason is SlowCompositeReaderWrapper.getSortedSetDocValues() line 136 and 
> SlowCompositeReaderWrapper.getSortedDocValues() line 174: before returning 
> composite doc values, the wrapper merges the per-segment field infos, which 
> is expensive, yet once the FieldInfo is merged it checks *only* the docvalues 
> type in it. That type check can be done much more cheaply on a per-segment 
> basis.
> This patch gets some performance gain for those who count DV facets in Solr.
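The per-segment idea above can be sketched with a minimal, self-contained 
simulation (the class and method names below are hypothetical, not the actual 
Lucene API): rather than building a merged view of every segment's field 
metadata just to read one docvalues type, each segment is asked directly.

```java
import java.util.Arrays;
import java.util.List;

// Minimal simulation of the optimization described above: instead of merging
// all per-segment field infos just to read one docvalues type (the expensive
// path through MultiFields.getMergedFieldInfos), check the type segment by
// segment. Names here are hypothetical, not the real Lucene API.
public class PerSegmentDvCheck {

    // Expensive path: build a merged view across all segments, then inspect
    // only the docvalues type of the result.
    static String mergedDvType(List<String> segmentDvTypes) {
        // Simulates getMergedFieldInfos(): touches every segment's metadata
        // up front even though only the docvalues type is needed afterwards.
        String merged = "NONE";
        for (String t : segmentDvTypes) {
            if (!t.equals("NONE")) merged = t;
        }
        return merged;
    }

    // Cheap path: ask each segment directly and stop at the first answer.
    static String perSegmentDvType(List<String> segmentDvTypes) {
        for (String t : segmentDvTypes) {
            if (!t.equals("NONE")) return t; // no merged FieldInfos needed
        }
        return "NONE";
    }

    public static void main(String[] args) {
        List<String> segments = Arrays.asList("NONE", "SORTED_SET", "SORTED_SET");
        System.out.println(mergedDvType(segments));     // SORTED_SET
        System.out.println(perSegmentDvType(segments)); // SORTED_SET
    }
}
```

The merged path pays for metadata of every field in every segment up front; 
the per-segment path touches only what the type check needs, which is the gist 
of the reported gain.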



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 2944 - Failure!

2015-12-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2944/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.handler.PingRequestHandlerTest.testPingInClusterWithNoHealthCheck

Error Message:
No live SolrServers available to handle this 
request:[http://127.0.0.1:52435/solr/testSolrCloudCollection]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:52435/solr/testSolrCloudCollection]
at 
__randomizedtesting.SeedInfo.seed([5F9367AFC2610A47:B140D97F478D30A3]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:352)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:871)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:807)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at 
org.apache.solr.handler.PingRequestHandlerTest.testPingInClusterWithNoHealthCheck(PingRequestHandlerTest.java:200)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 

[jira] [Commented] (SOLR-7730) speed-up faceting on doc values fields

2015-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15058739#comment-15058739
 ] 

ASF subversion and git services commented on SOLR-7730:
---

Commit 1720239 from m...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1720239 ]

SOLR-7730: mention in 5.4.0's Optimizations

> speed-up faceting on doc values fields
> --
>
> Key: SOLR-7730
> URL: https://issues.apache.org/jira/browse/SOLR-7730
> Project: Solr
>  Issue Type: Improvement
>  Components: faceting
>Affects Versions: 5.2.1
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>  Labels: patch
> Fix For: 5.4
>
> Attachments: LUCENE-7730.patch, LUCENE-7730.patch, 
> SOLR-7730-changes.patch, SOLR-7730.patch
>
>
> every time we count facets on DocValues fields in Solr on a multi-segment 
> index, we see this unnecessary hotspot:
> {code}
> 
> at 
> org.apache.lucene.index.MultiFields.getMergedFieldInfos(MultiFields.java:248)
> at 
> org.apache.lucene.index.SlowCompositeReaderWrapper.getFieldInfos(SlowCompositeReaderWrapper.java:239)
> at 
> org.apache.lucene.index.SlowCompositeReaderWrapper.getSortedSetDocValues(SlowCompositeReaderWrapper.java:176)
> at 
> org.apache.solr.request.DocValuesFacets.getCounts(DocValuesFacets.java:72)
> at 
> org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:460) 
> {code}
> the reason is SlowCompositeReaderWrapper.getSortedSetDocValues() line 136 and 
> SlowCompositeReaderWrapper.getSortedDocValues() line 174: before returning 
> composite doc values, the wrapper merges the per-segment field infos, which 
> is expensive, yet once the FieldInfo is merged it checks *only* the docvalues 
> type in it. That type check can be done much more cheaply on a per-segment 
> basis.
> This patch gets some performance gain for those who count DV facets in Solr.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6926) Take matchCost into account for MUST_NOT clauses

2015-12-15 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15058762#comment-15058762
 ] 

Paul Elschot commented on LUCENE-6926:
--

I tried implementing this NOT wrapper, but it is not feasible: the nextDoc() 
implementation would have to do a linear scan as long as the wrapped iterator 
provides consecutive docs. So this might be nice in theory, but it will not 
perform well.

That means I can't easily improve on the latest patch; it looks good, and core 
tests pass here.
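The linear-scan argument can be made concrete with a toy model (a hypothetical 
sketch, not Lucene's DocIdSetIterator API): a NOT wrapper has to probe 
candidate doc ids one by one, and while the wrapped iterator matches 
consecutive docs, every probe is rejected.

```java
import java.util.HashSet;
import java.util.Set;

// Toy illustration of why a NOT wrapper over a doc-id iterator performs
// poorly: nextDoc() must probe doc ids one at a time, and as long as the
// wrapped (excluded) iterator matches consecutive docs, every probe is
// rejected, giving a linear scan. Hypothetical sketch, not Lucene's API.
public class NotIteratorSketch {

    // Counts how many candidate docs a NOT iterator must probe, starting at
    // doc 0, before finding the first doc absent from the excluded set.
    static int probesUntilFirstMatch(int[] excludedDocs, int maxDoc) {
        Set<Integer> excluded = new HashSet<>();
        for (int d : excludedDocs) excluded.add(d);
        int probes = 0;
        for (int doc = 0; doc < maxDoc; doc++) {
            probes++;
            if (!excluded.contains(doc)) return probes; // first accepted doc
        }
        return probes;
    }

    public static void main(String[] args) {
        // Wrapped iterator matches docs 0..9999 consecutively: 10001 probes
        // before the NOT side finds its first hit at doc 10000.
        int[] dense = new int[10000];
        for (int i = 0; i < dense.length; i++) dense[i] = i;
        System.out.println(probesUntilFirstMatch(dense, 20000)); // 10001
    }
}
```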

> Take matchCost into account for MUST_NOT clauses
> 
>
> Key: LUCENE-6926
> URL: https://issues.apache.org/jira/browse/LUCENE-6926
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-6926.patch, LUCENE-6926.patch
>
>
> ReqExclScorer potentially has two TwoPhaseIterators to check: the one for the 
> positive clause and the one for the negative clause. It should leverage the 
> match cost API to check the least costly one first.
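The proposed ordering can be illustrated with a small sketch (the TwoPhase 
type below is a stand-in, not Lucene's TwoPhaseIterator): verify whichever 
clause is cheaper first, so the costly check runs only when the cheap one 
hasn't already decided the outcome.

```java
import java.util.function.BooleanSupplier;

// Sketch of the leverage-the-match-cost idea: the required clause must match
// AND the excluded clause must not match; confirm the check with the lower
// matchCost first. Hypothetical types, not Lucene's TwoPhaseIterator API.
public class MatchCostOrdering {
    static class TwoPhase {
        final float matchCost;
        final BooleanSupplier matches;
        TwoPhase(float matchCost, BooleanSupplier matches) {
            this.matchCost = matchCost;
            this.matches = matches;
        }
    }

    static boolean reqExclMatches(TwoPhase req, TwoPhase excl) {
        if (req.matchCost <= excl.matchCost) {
            // Positive clause is cheaper: confirm it first.
            return req.matches.getAsBoolean() && !excl.matches.getAsBoolean();
        } else {
            // Excluded clause is cheaper: confirm the exclusion first.
            if (excl.matches.getAsBoolean()) return false;
            return req.matches.getAsBoolean();
        }
    }

    public static void main(String[] args) {
        TwoPhase cheapExcl = new TwoPhase(1f, () -> true);
        TwoPhase costlyReq = new TwoPhase(100f, () -> {
            throw new AssertionError("should not be evaluated");
        });
        // The cheap exclusion matches, so the costly positive check is skipped.
        System.out.println(reqExclMatches(costlyReq, cheapExcl)); // false
    }
}
```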



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8395) query-time join (with scoring) for numeric fields

2015-12-15 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-8395:
---
Attachment: SOLR-8395.patch

It nearly shocked me. The first path, with multivalued fields ("uid_ls_dv", 
"rel_from_ls_dv"), works out of the box even without LUCENE-5868!
The answer is in 
[TrieField.createFields()|https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/java/org/apache/solr/schema/TrieField.java#L725]:
 for multivalued docvalues numerics Solr creates SortedSetDVs encoded as 
numbers, and that works fine as-is. See also SOLR-7878. 
Thus, the only way to break the test is to use single-valued docvalues fields. 
That's what I did in [^SOLR-8395.patch]. Now it fails:
{code}
java.lang.IllegalStateException: unexpected docvalues type NUMERIC for field 
'rel_to_l_dv' (expected one of [SORTED, SORTED_SET]). Use UninvertingReader or 
index with docvalues.
..
at org.apache.lucene.index.DocValues.checkField(DocValues.java:208)
at org.apache.lucene.index.DocValues.getSortedSet(DocValues.java:306)
at 
org.apache.lucene.search.join.DocValuesTermsCollector.lambda$1(DocValuesTermsCollector.java:59)
at ..
at 
org.apache.lucene.search.join.JoinUtil.createJoinQuery(JoinUtil.java:146)
..
org.apache.solr.search.join.TestScoreJoinQPNoScore.testJoinNumeric(TestScoreJoinQPNoScore.java:71)
{code}

If you are going to work on it, please make sure both ints and longs are 
covered. I see one more trick in TrieField.createFields().

> query-time join (with scoring) for numeric fields
> -
>
> Key: SOLR-8395
> URL: https://issues.apache.org/jira/browse/SOLR-8395
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: Mikhail Khludnev
>Priority: Minor
>  Labels: easytest, features, newbie, starter
> Fix For: 5.5
>
> Attachments: SOLR-8395.patch, SOLR-8395.patch
>
>
> since LUCENE-5868 we have an opportunity to improve SOLR-6234 to make it 
> join int and long fields. I suppose it's worth adding a "simple" test to the 
> Solr NoScore suite. Alongside that, we can set the _multipleValues_ 
> parameter based on the _fromField_ cardinality declared in the schema.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8388) TestSolrQueryResponse (factor out, then extend)

2015-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15058828#comment-15058828
 ] 

ASF subversion and git services commented on SOLR-8388:
---

Commit 1720253 from [~cpoerschke] in branch 'dev/trunk'
[ https://svn.apache.org/r1720253 ]

SOLR-8388: ReturnFieldsTest.testToString() fix (don't assume ordering within 
sets' values)

> TestSolrQueryResponse (factor out, then extend)
> ---
>
> Key: SOLR-8388
> URL: https://issues.apache.org/jira/browse/SOLR-8388
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8388-part1of2.patch, SOLR-8388-part2of2.patch, 
> SOLR-8388-part3of2.patch
>
>
> factor out 
> {{solr/core/src/test/org/apache/solr/response/TestSolrQueryResponse.java}} 
> from {{solr/core/src/test/org/apache/solr/servlet/ResponseHeaderTest.java}} 
> and then extend it



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8388) TestSolrQueryResponse (factor out, then extend)

2015-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15058884#comment-15058884
 ] 

ASF subversion and git services commented on SOLR-8388:
---

Commit 1720257 from [~cpoerschke] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1720257 ]

SOLR-8388: ReturnFieldsTest.testToString() fix (don't assume ordering within 
sets' values) (merge in revision 1720253 from trunk)

> TestSolrQueryResponse (factor out, then extend)
> ---
>
> Key: SOLR-8388
> URL: https://issues.apache.org/jira/browse/SOLR-8388
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8388-part1of2.patch, SOLR-8388-part2of2.patch, 
> SOLR-8388-part3of2.patch
>
>
> factor out 
> {{solr/core/src/test/org/apache/solr/response/TestSolrQueryResponse.java}} 
> from {{solr/core/src/test/org/apache/solr/servlet/ResponseHeaderTest.java}} 
> and then extend it



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.7.0_80) - Build # 14915 - Failure!

2015-12-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14915/
Java: 64bit/jdk1.7.0_80 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.search.ReturnFieldsTest.testToString

Error Message:
expected:<...s=[],fields=[score, [test, id],okFieldNames=[null, score, test, 
id]],reqFieldNames=[id,...> but was:<...s=[],fields=[score, [id, 
test],okFieldNames=[null, score, id, test]],reqFieldNames=[id,...>

Stack Trace:
org.junit.ComparisonFailure: expected:<...s=[],fields=[score, [test, 
id],okFieldNames=[null, score, test, id]],reqFieldNames=[id,...> but 
was:<...s=[],fields=[score, [id, test],okFieldNames=[null, score, id, 
test]],reqFieldNames=[id,...>
at 
__randomizedtesting.SeedInfo.seed([B7F092E8551B055A:66D125E96A0168F2]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.search.ReturnFieldsTest.testToString(ReturnFieldsTest.java:109)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Comment Edited] (SOLR-8374) Issue with _text_ field in schema file

2015-12-15 Thread Romit Singhai (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15058928#comment-15058928
 ] 

Romit Singhai edited comment on SOLR-8374 at 12/15/15 10:07 PM:


Hi Varun,

This information will be useful for people using Solr 5.2.1 on HDP2.3.2 as the 
comments in the schema.xml file are confusing.


was (Author: romits):
Hi Varun,

This information will be useful for people using Solr 5.2.1 on HDP2.3.2 as the 
comments the schema.xml file are confusing.

> Issue with _text_ field in schema file
> --
>
> Key: SOLR-8374
> URL: https://issues.apache.org/jira/browse/SOLR-8374
> Project: Solr
>  Issue Type: Bug
>  Components: Hadoop Integration
>Affects Versions: 5.2.1
>Reporter: Romit Singhai
>Priority: Critical
>  Labels: patch
>
> In the data_driven_schema_configs, the warning says that the _text_ field 
> can be removed if not needed. The Hadoop indexer fails to index data because 
> the ping command could not find the collection required for indexing.
> The ping command for the collection needs to be fixed (making _text_ 
> optional), as _text_ adds significantly to the index size even if not needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8374) Issue with _text_ field in schema file

2015-12-15 Thread Romit Singhai (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15058928#comment-15058928
 ] 

Romit Singhai commented on SOLR-8374:
-

Hi Varun,

This information will be useful for people using Solr 5.2.1 on HDP2.3.2 as the 
comments in the schema.xml file are confusing.

> Issue with _text_ field in schema file
> --
>
> Key: SOLR-8374
> URL: https://issues.apache.org/jira/browse/SOLR-8374
> Project: Solr
>  Issue Type: Bug
>  Components: Hadoop Integration
>Affects Versions: 5.2.1
>Reporter: Romit Singhai
>Priority: Critical
>  Labels: patch
>
> In the data_driven_schema_configs, the warning says that the _text_ field 
> can be removed if not needed. The Hadoop indexer fails to index data because 
> the ping command could not find the collection required for indexing.
> The ping command for the collection needs to be fixed (making _text_ 
> optional), as _text_ adds significantly to the index size even if not needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8415) Provide command to switch between non/secure mode in ZK

2015-12-15 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15058733#comment-15058733
 ] 

Mark Miller commented on SOLR-8415:
---

bq. Docs should go on the wiki somewhere.

I'd start looking around 
https://cwiki.apache.org/confluence/display/solr/ZooKeeper+Access+Control

> Provide command to switch between non/secure mode in ZK
> ---
>
> Key: SOLR-8415
> URL: https://issues.apache.org/jira/browse/SOLR-8415
> Project: Solr
>  Issue Type: Improvement
>  Components: security, SolrCloud
>Reporter: Mike Drob
> Fix For: Trunk
>
> Attachments: SOLR-8415.patch, SOLR-8415.patch
>
>
> We have the ability to run both with and without zk acls, but we don't have a 
> great way to switch between the two modes. Most common use case, I imagine, 
> would be upgrading from an old version that did not support this to a new 
> version that does, and wanting to protect all of the existing content in ZK, 
> but it is conceivable that a user might want to remove ACLs as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8395) query-time join (with scoring) for numeric fields

2015-12-15 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-8395:
---
Summary: query-time join (with scoring) for numeric fields  (was: 
query-time join for numeric fields)

> query-time join (with scoring) for numeric fields
> -
>
> Key: SOLR-8395
> URL: https://issues.apache.org/jira/browse/SOLR-8395
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: Mikhail Khludnev
>Priority: Minor
>  Labels: easytest, features, newbie, starter
> Fix For: 5.5
>
> Attachments: SOLR-8395.patch
>
>
> since LUCENE-5868 we have an opportunity to improve SOLR-6234 to make it 
> join int and long fields. I suppose it's worth adding a "simple" test to the 
> Solr NoScore suite. Alongside that, we can set the _multipleValues_ 
> parameter based on the _fromField_ cardinality declared in the schema.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8372) Canceled recovery can lead to data loss

2015-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15058808#comment-15058808
 ] 

ASF subversion and git services commented on SOLR-8372:
---

Commit 1720250 from [~yo...@apache.org] in branch 'dev/trunk'
[ https://svn.apache.org/r1720250 ]

SOLR-8372: continue buffering if recovery is canceled/failed

> Canceled recovery can lead to data loss
> ---
>
> Key: SOLR-8372
> URL: https://issues.apache.org/jira/browse/SOLR-8372
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
> Attachments: SOLR-8372.patch
>
>
> A recovery via index replication tells the update log to start buffering 
> updates.  If that recovery is canceled for whatever reason by the replica, 
> the RecoveryStrategy calls ulog.dropBufferedUpdates(), which stops buffering 
> and places the UpdateLog back in active mode.  If updates come from the 
> leader after this point (and before RecoveryStrategy retries recovery), 
> the update will be processed as normal and added to the transaction log. If 
> the server is bounced, those last updates to the transaction log look normal 
> (no FLAG_GAP) and can be used to determine who is more up to date.
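The failure mode and the eventual fix can be reduced to a tiny state sketch 
(the enum and methods below are hypothetical, not Solr's UpdateLog API): the 
old path reactivated the log on a canceled recovery, while the fix is to keep 
buffering until a recovery actually succeeds.

```java
// Minimal state sketch of the bug described above. Old behavior: canceling a
// recovery dropped the buffered updates and returned the update log to ACTIVE,
// so later updates from the leader were logged as if the replica were current.
// Fixed behavior (per the "continue buffering" commit message): stay in
// BUFFERING until recovery succeeds. Hypothetical names, not Solr's UpdateLog.
public class UlogStateSketch {
    enum State { ACTIVE, BUFFERING }

    State state = State.ACTIVE;

    void startRecovery()       { state = State.BUFFERING; }
    void cancelRecoveryOld()   { state = State.ACTIVE; }    // buggy: log looks current
    void cancelRecoveryFixed() { state = State.BUFFERING; } // leader updates stay buffered

    // Returns the log state after a recovery is started and then canceled.
    static State afterCanceledRecovery(boolean fixed) {
        UlogStateSketch ulog = new UlogStateSketch();
        ulog.startRecovery();
        if (fixed) ulog.cancelRecoveryFixed(); else ulog.cancelRecoveryOld();
        return ulog.state;
    }

    public static void main(String[] args) {
        System.out.println(afterCanceledRecovery(false)); // ACTIVE (data-loss risk)
        System.out.println(afterCanceledRecovery(true));  // BUFFERING
    }
}
```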



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-8372) Canceled recovery can lead to data loss

2015-12-15 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-8372.

   Resolution: Fixed
Fix Version/s: Trunk
   5.5

> Canceled recovery can lead to data loss
> ---
>
> Key: SOLR-8372
> URL: https://issues.apache.org/jira/browse/SOLR-8372
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8372.patch
>
>
> A recovery via index replication tells the update log to start buffering 
> updates.  If that recovery is canceled for whatever reason by the replica, 
> the RecoveryStrategy calls ulog.dropBufferedUpdates(), which stops buffering 
> and places the UpdateLog back in active mode.  If updates come from the 
> leader after this point (and before RecoveryStrategy retries recovery), 
> the update will be processed as normal and added to the transaction log. If 
> the server is bounced, those last updates to the transaction log look normal 
> (no FLAG_GAP) and can be used to determine who is more up to date.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7730) speed-up faceting on doc values fields

2015-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15058798#comment-15058798
 ] 

ASF subversion and git services commented on SOLR-7730:
---

Commit 1720248 from m...@apache.org in branch 'dev/branches/lucene_solr_5_4'
[ https://svn.apache.org/r1720248 ]

SOLR-7730: mention in 5.4.0's Optimizations

> speed-up faceting on doc values fields
> --
>
> Key: SOLR-7730
> URL: https://issues.apache.org/jira/browse/SOLR-7730
> Project: Solr
>  Issue Type: Improvement
>  Components: faceting
>Affects Versions: 5.2.1
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>  Labels: patch
> Fix For: 5.4
>
> Attachments: LUCENE-7730.patch, LUCENE-7730.patch, 
> SOLR-7730-changes.patch, SOLR-7730.patch
>
>
> every time we count facets on DocValues fields in Solr on many segments index 
> we see the unnecessary hotspot:
> {code}
> 
> at 
> org.apache.lucene.index.MultiFields.getMergedFieldInfos(MultiFields.java:248)
> at 
> org.apache.lucene.index.SlowCompositeReaderWrapper.getFieldInfos(SlowCompositeReaderWrapper.java:239)
> at 
> org.apache.lucene.index.SlowCompositeReaderWrapper.getSortedSetDocValues(SlowCompositeReaderWrapper.java:176)
> at 
> org.apache.solr.request.DocValuesFacets.getCounts(DocValuesFacets.java:72)
> at 
> org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:460) 
> {code}
> the reason is SlowCompositeReaderWrapper.getSortedSetDocValues() Line 136 and 
> SlowCompositeReaderWrapper.getSortedDocValues() Line 174
> before return composite doc values, SCWR merges segment field infos, which is 
> expensive, but after fieldinfo is merged, it checks *only* docvalue type in 
> it. This dv type check can be done much easier in per segment basis. 
> This patch gets some performance gain for those who count DV facets in Solr.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8372) Canceled recovery can lead to data loss

2015-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15058814#comment-15058814
 ] 

ASF subversion and git services commented on SOLR-8372:
---

Commit 1720251 from [~yo...@apache.org] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1720251 ]

SOLR-8372: continue buffering if recovery is canceled/failed

> Canceled recovery can lead to data loss
> ---
>
> Key: SOLR-8372
> URL: https://issues.apache.org/jira/browse/SOLR-8372
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
> Attachments: SOLR-8372.patch
>
>
> A recovery via index replication tells the update log to start buffering 
> updates.  If that recovery is canceled for whatever reason by the replica, 
> the RecoveryStrategy calls ulog.dropBufferedUpdates(), which stops buffering 
> and places the UpdateLog back in active mode.  If updates come from the 
> leader after this point (and before RecoveryStrategy retries recovery), 
> the update will be processed as normal and added to the transaction log. If 
> the server is bounced, those last updates to the transaction log look normal 
> (no FLAG_GAP) and can be used to determine who is more up to date.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 700 - Failure

2015-12-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/700/

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
ERROR: SolrIndexSearcher opens=51 closes=50

Stack Trace:
java.lang.AssertionError: ERROR: SolrIndexSearcher opens=51 closes=50
at __randomizedtesting.SeedInfo.seed([3EDFC1D20AC92D22]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.SolrTestCaseJ4.endTrackingSearchers(SolrTestCaseJ4.java:453)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:225)
at sun.reflect.GeneratedMethodAccessor25.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 1) 
Thread[id=8586, name=searcherExecutor-3600-thread-1, state=WAITING, 
group=TGRP-TestLazyCores] at sun.misc.Unsafe.park(Native Method)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.core.TestLazyCores: 
   1) Thread[id=8586, name=searcherExecutor-3600-thread-1, state=WAITING, 
group=TGRP-TestLazyCores]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([3EDFC1D20AC92D22]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=8586, name=searcherExecutor-3600-thread-1, state=WAITING, 

[jira] [Commented] (SOLR-7996) Evaluate moving SolrIndexSearcher creation logic to a factory

2015-12-15 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15059047#comment-15059047
 ] 

Tomás Fernández Löbbe commented on SOLR-7996:
-

[~jej2003] (in reply to [this 
email|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201512.mbox/%3CCAL3VrCereW7L7xjDab7B8nkd0Xw9-HfPH_BoX=cn-_byb9r...@mail.gmail.com%3E]),
 some time ago I worked on a SolrSearcherFactory as part of SOLR-5621 (a more 
ambitious Jira than this), the idea now is slightly different, but maybe it 
helps, at least a similar thing is what I had in mind when I created the Jira 
(also, making the factory configurable).
Maybe we should also move the "wrapReader" method from SolrIndexSearcher to the 
factory?

> Evaluate moving SolrIndexSearcher creation logic to a factory
> -
>
> Key: SOLR-7996
> URL: https://issues.apache.org/jira/browse/SOLR-7996
> Project: Solr
>  Issue Type: Improvement
>Reporter: Tomás Fernández Löbbe
>
> Moving this logic away from SolrCore is already a win, plus it should make it 
> easier to unit test and extend for advanced use cases.
> See discussion here: http://search-lucene.com/m/eHNlWNCtoeLwQp 






[jira] [Commented] (SOLR-8421) improve error message when zkHost with multiple hosts and redundant chroot specified

2015-12-15 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15059132#comment-15059132
 ] 

Hoss Man commented on SOLR-8421:


At a minimum, after parsing the zkhosts but before any zk connections are 
attempted at all, we could heuristically look at the chroot we've parsed and 
log a WARN if it looks like it mistakenly contains other host:port pairs.
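A minimal sketch of such a heuristic (hypothetical helper and class names, not actual Solr code): split the connect string at the first slash, as ZooKeeper does, and warn if the resulting chroot appears to embed host:port pairs.

```java
// Hypothetical sketch of the proposed heuristic; class and method names
// are illustrative, not actual Solr code.
public class ZkHostChrootCheck {

    /** ZooKeeper treats everything from the first '/' as the chroot. */
    static String parseChroot(String zkHost) {
        int slash = zkHost.indexOf('/');
        return slash < 0 ? null : zkHost.substring(slash);
    }

    /** Heuristic: a legitimate chroot should not look like it embeds host:port pairs. */
    static boolean looksSuspicious(String chroot) {
        return chroot != null && chroot.matches(".*,[^/,]+:\\d+.*");
    }

    public static void main(String[] args) {
        String zkHost = "localhost:2181/test,localhost:2182/test";
        String chroot = parseChroot(zkHost);
        System.out.println("chroot = " + chroot);
        if (looksSuspicious(chroot)) {
            System.out.println("WARN: chroot '" + chroot
                + "' appears to contain host:port pairs; specify the chroot"
                + " once, after the full host list");
        }
    }
}
```

Running this against the connect string from the bug report flags "/test,localhost:2182/test" as suspicious, while a plain "/test" chroot passes.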

> improve error message when zkHost with multiple hosts and redundant chroot 
> specified
> 
>
> Key: SOLR-8421
> URL: https://issues.apache.org/jira/browse/SOLR-8421
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 5.4
> Environment: Solr Cloud
>Reporter: Dmitry Myaskovskiy
>
> If a user mistakenly tries to specify the chroot on every zk host:port  in 
> the zkhosts string, the error they get is confusing.
> we should try to improve the error/logging to make it more evident what the 
> problem is
> {panel:title=initial bug report from user}
> I'm trying to run Solr Cloud with following command:
> {code}
> ./bin/solr -f -c -z localhost:2181/test,localhost:2182/test
> {code}
> And getting error:
> {code}
> 749  ERROR (main) [   ] o.a.s.c.SolrCore 
> null:org.apache.solr.common.cloud.ZooKeeperException: A chroot was specified 
> in ZkHost but the znode doesn't exist. localhost:2181/test,localhost:2182/test
> {code}
> Node "/test" exists in zookeeper. And both:
> {code}
> ./bin/solr -f -c -z localhost:2181,localhost:2182
> {code}
> {code}
> ./bin/solr -f -c -z localhost:2181/test
> {code}
> works fine.
> But I cannot get it to work with multiple nodes and chroot specified.
> {panel}






[jira] [Commented] (SOLR-8421) improve error message when zkHost with multiple hosts and redundant chroot specified

2015-12-15 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-8421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15059131#comment-15059131
 ] 

Tomás Fernández Löbbe commented on SOLR-8421:
-

I thought about it. I believe the confusing error occurs because everything 
from the first slash onward is being treated as the chroot 
("/test,localhost:2182/test")

> improve error message when zkHost with multiple hosts and redundant chroot 
> specified
> 
>
> Key: SOLR-8421
> URL: https://issues.apache.org/jira/browse/SOLR-8421
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 5.4
> Environment: Solr Cloud
>Reporter: Dmitry Myaskovskiy
>
> If a user mistakenly tries to specify the chroot on every zk host:port  in 
> the zkhosts string, the error they get is confusing.
> we should try to improve the error/logging to make it more evident what the 
> problem is
> {panel:title=initial bug report from user}
> I'm trying to run Solr Cloud with following command:
> {code}
> ./bin/solr -f -c -z localhost:2181/test,localhost:2182/test
> {code}
> And getting error:
> {code}
> 749  ERROR (main) [   ] o.a.s.c.SolrCore 
> null:org.apache.solr.common.cloud.ZooKeeperException: A chroot was specified 
> in ZkHost but the znode doesn't exist. localhost:2181/test,localhost:2182/test
> {code}
> Node "/test" exists in zookeeper. And both:
> {code}
> ./bin/solr -f -c -z localhost:2181,localhost:2182
> {code}
> {code}
> ./bin/solr -f -c -z localhost:2181/test
> {code}
> works fine.
> But I cannot get it to work with multiple nodes and chroot specified.
> {panel}






[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 2889 - Failure!

2015-12-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2889/
Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=9021, name=Thread-3402, 
state=RUNNABLE, group=TGRP-FullSolrCloudDistribCmdsTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=9021, name=Thread-3402, state=RUNNABLE, 
group=TGRP-FullSolrCloudDistribCmdsTest]
Caused by: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: 
IOException occured when talking to server at: 
http://127.0.0.1:59676/collection2_shard4_replica1
at __randomizedtesting.SeedInfo.seed([A900DE3E4B8F9D2D]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:635)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:982)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:871)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:807)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:167)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest$1IndexThread.run(FullSolrCloudDistribCmdsTest.java:642)
Caused by: org.apache.solr.client.solrj.SolrServerException: IOException 
occured when talking to server at: 
http://127.0.0.1:59676/collection2_shard4_replica1
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:589)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:372)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:325)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient$2.call(CloudSolrClient.java:608)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient$2.call(CloudSolrClient.java:605)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:232)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.http.NoHttpResponseException: 127.0.0.1:59676 failed to 
respond
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:143)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
at 
org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
at 
org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:685)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:487)
at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:480)
... 11 more


FAILED:  org.apache.solr.search.ReturnFieldsTest.testToString

Error Message:
expected:<...s=(globs=[],fields=[[score, test, id],okFieldNames=[null, score, 
test, id]],reqFieldNames=[id,...> but was:<...s=(globs=[],fields=[[id, test, 
score],okFieldNames=[null, id, test, score]],reqFieldNames=[id,...>

Stack Trace:
org.junit.ComparisonFailure: expected:<...s=(globs=[],fields=[[score, test, 
id],okFieldNames=[null, score, test, id]],reqFieldNames=[id,...> but 
was:<...s=(globs=[],fields=[[id, test, score],okFieldNames=[null, id, test, 

[jira] [Closed] (SOLR-8421) zkHost with chroot and multiple hosts not working

2015-12-15 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-8421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe closed SOLR-8421.
---
Resolution: Not A Problem

The chroot needs to be added only once, after the list of hosts, not after each 
host. See 
https://cwiki.apache.org/confluence/display/solr/Taking+Solr+to+Production#TakingSolrtoProduction-ZooKeeperchroot
Please use the users list for this kind of question.

> zkHost with chroot and multiple hosts not working
> -
>
> Key: SOLR-8421
> URL: https://issues.apache.org/jira/browse/SOLR-8421
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.4
> Environment: Solr Cloud
>Reporter: Dmitry Myaskovskiy
>
> I'm trying to run Solr Cloud with following command:
> {code}
> ./bin/solr -f -c -z localhost:2181/test,localhost:2182/test
> {code}
> And getting error:
> {code}
> 749  ERROR (main) [   ] o.a.s.c.SolrCore 
> null:org.apache.solr.common.cloud.ZooKeeperException: A chroot was specified 
> in ZkHost but the znode doesn't exist. localhost:2181/test,localhost:2182/test
> {code}
> Node "/test" exists in zookeeper. And both:
> {code}
> ./bin/solr -f -c -z localhost:2181,localhost:2182
> {code}
> {code}
> ./bin/solr -f -c -z localhost:2181/test
> {code}
> works fine.
> But I cannot get it to work with multiple nodes and chroot specified.






[jira] [Updated] (SOLR-8421) improve error message when zkHost with multiple hosts and redundant chroot specified

2015-12-15 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-8421:
---
Description: 
If a user mistakenly tries to specify the chroot on every zk host:port  in the 
zkhosts string, the error they get is confusing.

we should try to improve the error/logging to make it more evident what the 
problem is

{panel:title=initial bug report from user}
I'm trying to run Solr Cloud with following command:

{code}
./bin/solr -f -c -z localhost:2181/test,localhost:2182/test
{code}

And getting error:

{code}
749  ERROR (main) [   ] o.a.s.c.SolrCore 
null:org.apache.solr.common.cloud.ZooKeeperException: A chroot was specified in 
ZkHost but the znode doesn't exist. localhost:2181/test,localhost:2182/test
{code}

Node "/test" exists in zookeeper. And both:

{code}
./bin/solr -f -c -z localhost:2181,localhost:2182
{code}

{code}
./bin/solr -f -c -z localhost:2181/test
{code}

works fine.

But I cannot get it to work with multiple nodes and chroot specified.
{panel}

  was:
I'm trying to run Solr Cloud with following command:

{code}
./bin/solr -f -c -z localhost:2181/test,localhost:2182/test
{code}

And getting error:

{code}
749  ERROR (main) [   ] o.a.s.c.SolrCore 
null:org.apache.solr.common.cloud.ZooKeeperException: A chroot was specified in 
ZkHost but the znode doesn't exist. localhost:2181/test,localhost:2182/test
{code}

Node "/test" exists in zookeeper. And both:

{code}
./bin/solr -f -c -z localhost:2181,localhost:2182
{code}

{code}
./bin/solr -f -c -z localhost:2181/test
{code}

works fine.

But I cannot get it to work with multiple nodes and chroot specified.

 Issue Type: Improvement  (was: Bug)
Summary: improve error message when zkHost with multiple hosts and 
redundant chroot specified  (was: zkHost with chroot and multiple hosts not 
working)

> improve error message when zkHost with multiple hosts and redundant chroot 
> specified
> 
>
> Key: SOLR-8421
> URL: https://issues.apache.org/jira/browse/SOLR-8421
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 5.4
> Environment: Solr Cloud
>Reporter: Dmitry Myaskovskiy
>
> If a user mistakenly tries to specify the chroot on every zk host:port  in 
> the zkhosts string, the error they get is confusing.
> we should try to improve the error/logging to make it more evident what the 
> problem is
> {panel:title=initial bug report from user}
> I'm trying to run Solr Cloud with following command:
> {code}
> ./bin/solr -f -c -z localhost:2181/test,localhost:2182/test
> {code}
> And getting error:
> {code}
> 749  ERROR (main) [   ] o.a.s.c.SolrCore 
> null:org.apache.solr.common.cloud.ZooKeeperException: A chroot was specified 
> in ZkHost but the znode doesn't exist. localhost:2181/test,localhost:2182/test
> {code}
> Node "/test" exists in zookeeper. And both:
> {code}
> ./bin/solr -f -c -z localhost:2181,localhost:2182
> {code}
> {code}
> ./bin/solr -f -c -z localhost:2181/test
> {code}
> works fine.
> But I cannot get it to work with multiple nodes and chroot specified.
> {panel}






[jira] [Commented] (SOLR-8421) improve error message when zkHost with multiple hosts and redundant chroot specified

2015-12-15 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15059140#comment-15059140
 ] 

Hoss Man commented on SOLR-8421:


Tomás: sure, and maybe (in a weird situation) that's a totally valid and 
intended chroot, but having some better logging about which chroot is used, and 
maybe a warning if the chroot looks suspicious, would help.

(Ideally we could connect w/o the chroot ourselves first, and log a very 
explicit warning if that path doesn't exist -- noting exactly what path is 
being attempted -- but I'm not sure how nicely that type of approach plays with 
the various ZK security models that I know people are, or have been, working on 
supporting.)

> improve error message when zkHost with multiple hosts and redundant chroot 
> specified
> 
>
> Key: SOLR-8421
> URL: https://issues.apache.org/jira/browse/SOLR-8421
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 5.4
> Environment: Solr Cloud
>Reporter: Dmitry Myaskovskiy
>
> If a user mistakenly tries to specify the chroot on every zk host:port  in 
> the zkhosts string, the error they get is confusing.
> we should try to improve the error/logging to make it more evident what the 
> problem is
> {panel:title=initial bug report from user}
> I'm trying to run Solr Cloud with following command:
> {code}
> ./bin/solr -f -c -z localhost:2181/test,localhost:2182/test
> {code}
> And getting error:
> {code}
> 749  ERROR (main) [   ] o.a.s.c.SolrCore 
> null:org.apache.solr.common.cloud.ZooKeeperException: A chroot was specified 
> in ZkHost but the znode doesn't exist. localhost:2181/test,localhost:2182/test
> {code}
> Node "/test" exists in zookeeper. And both:
> {code}
> ./bin/solr -f -c -z localhost:2181,localhost:2182
> {code}
> {code}
> ./bin/solr -f -c -z localhost:2181/test
> {code}
> works fine.
> But I cannot get it work with multiple nodes and chroot specified.
> {panel}






[jira] [Created] (SOLR-8422) Basic Authentication plugin is not working correctly in solrcloud

2015-12-15 Thread Nirmala Venkatraman (JIRA)
Nirmala Venkatraman created SOLR-8422:
-

 Summary: Basic Authentication plugin is not working correctly in 
solrcloud
 Key: SOLR-8422
 URL: https://issues.apache.org/jira/browse/SOLR-8422
 Project: Solr
  Issue Type: Bug
  Components: Authentication
Affects Versions: 5.3.1
 Environment: Solrcloud
Reporter: Nirmala Venkatraman


I am seeing a problem with basic auth on Solr 5.3.1. We have a 5-node SolrCloud 
with basic auth configured on sgdsolar1/2/3/4/7, listening on port 8984. We 
have 64 collections, each having 2 replicas distributed across the 5 servers in 
the cloud. A sample screenshot of the collection/shard locations is shown 
below:

Step 1 - Our Solr indexing tool sends a request to any one of the Solr 
servers in the SolrCloud, and the request lands on a server which doesn't 
have the collection.
Here is the request sent by the indexing tool to sgdsolar1, which includes the 
correct BasicAuth credentials:

Step 2 - Now sgdsolar1 routes the request to sgdsolar2, which has 
collection1, but no basic auth header is being passed.

As a result, sgdsolar2 throws a 401 error back to the source server sgdsolar1, 
and all the way back to the Solr indexing tool:
9.32.182.53 - - [15/Dec/2015:00:45:18 +] "GET 
/solr/collection1/get?_route_=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!08D9EACCA5AE663400257EB6005A5CFF,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!9057B828F841C41F00257EB6005A7421,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!F3FB9305A00A0E1200257EB6005AAA99,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!E9815A6F3CBC3D0E00257EB6005ACA02,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!FEB43AC9F648AFC500257EB6005AE4EB,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!4CF37E73A18F9D9F00257E590016CBD9,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!61D5457FEA1EBE5C00257E5900188729,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!6B0D89B9A7EEBC4600257E590019CEDA,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!360B9B52D9C6DFE400257EB2007FCD8B,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!D86D4CED01F66AF300257EB2008305A4=unid,sequence,folderunid=xml=10
 HTTP/1.1" 401 366

2015-12-15 00:45:18.112 INFO  (qtp1214753695-56) [c:collection1 s:shard1 
r:core_node1 x:collection1_shard1_replica1] 
o.a.s.s.RuleBasedAuthorizationPlugin request has come without principal. failed 
permission 
org.apache.solr.security.RuleBasedAuthorizationPlugin$Permission@5ebe8fca
2015-12-15 00:45:18.113 INFO  (qtp1214753695-56) [c:collection1 s:shard1 
r:core_node1 x:collection1_shard1_replica1] o.a.s.s.HttpSolrCall USER_REQUIRED 
auth header null context : userPrincipal: [null] type: [READ], collections: 
[collection1,], Path: [/get] path : /get params 
:fl=unid,sequence,folderunid=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!08D9EACCA5AE663400257EB6005A5CFF,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!9057B828F841C41F00257EB6005A7421,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!F3FB9305A00A0E1200257EB6005AAA99,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!E9815A6F3CBC3D0E00257EB6005ACA02,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!FEB43AC9F648AFC500257EB6005AE4EB,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!4CF37E73A18F9D9F00257E590016CBD9,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!61D5457FEA1EBE5C00257E5900188729,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!6B0D89B9A7EEBC4600257E590019CEDA,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!360B9B52D9C6DFE400257EB2007FCD8B,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!D86D4CED01F66AF300257EB2008305A4=10=xml&_route_=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!

Step 3 - In another SolrCloud, if the indexing tool sends the Solr get request 
to the server that has collection1, I see that basic authentication works 
as expected.

I double-checked and see that both the sgdsolar1 and sgdsolar2 servers have the 
patched solr-core and solr-solrj jar files under the solr-webapp folder, 
provided via earlier patches that Anshum/Noble worked on:
SOLR-8167 - fixes the POST issue
SOLR-8326 - fixes PKIAuthenticationPlugin
SOLR-8355






Re: JSON "fields" vs defaults

2015-12-15 Thread Jack Krupansky
Yonik? The doc is weak in this area. In fact, I see a comment on it from
Cassandra directed to you to verify the JSON-to-parameter mapping. It would
be nice to have a clear statement of the semantics of the JSON "fields"
parameter and how it may or may not interact with the Solr fl parameter.

-- Jack Krupansky

On Thu, Dec 10, 2015 at 3:55 PM, Ryan Josal  wrote:

> I didn't see a Jira open on this, so I wanted to see if it's expected. If
> you pass "fields":[...] in a Solr JSON API request, it does not override
> the default in the handler config.  I had fl=* as a default, so I
> saw "fields" have no effect, while "params":{"fl":...} worked as expected.
> After stepping through the debugger I noticed it was just appending
> "fields" at the end of everything else (including after solr config
> appends, if it makes a difference).
>
> If this is not expected I will create a Jira and maybe have time to
> provide a patch.
>
> Ryan
>


[jira] [Updated] (SOLR-8190) Implement Closeable on TupleStream

2015-12-15 Thread Jason Gerlowski (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gerlowski updated SOLR-8190:
--
Attachment: SOLR-8190.patch

It would be nice to get this rolling again.  To keep it up to date, I've 
updated the patch to apply cleanly against trunk.

Tests still fail due to the NPE addressed in the (unresolved) SOLR-8191.

> Implement Closeable on TupleStream
> --
>
> Key: SOLR-8190
> URL: https://issues.apache.org/jira/browse/SOLR-8190
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
>Assignee: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-8190.patch, SOLR-8190.patch
>
>
> Implementing Closeable on TupleStream provides the ability to use 
> try-with-resources 
> (https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html)
>  in tests and in practice. This prevents TupleStreams from being left open 
> when there is an error in the tests.
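The pattern this enables can be sketched with a minimal stand-in stream (DummyStream below is illustrative, not the real TupleStream API):

```java
// Minimal stand-in showing why implementing Closeable matters:
// try-with-resources guarantees close() runs even if the body throws.
// DummyStream is illustrative, not SolrJ code.
import java.io.Closeable;
import java.io.IOException;

public class TupleStreamExample {
    static class DummyStream implements Closeable {
        boolean open;
        void open() { open = true; }
        String read() { return open ? "tuple" : null; }
        @Override public void close() { open = false; }
    }

    public static void main(String[] args) throws IOException {
        // The stream is closed automatically when the block exits,
        // normally or via an exception.
        try (DummyStream stream = new DummyStream()) {
            stream.open();
            System.out.println(stream.read());
        } // close() invoked here
    }
}
```

Without Closeable, every test would need an explicit finally block to avoid leaking an open stream on failure.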






[jira] [Commented] (SOLR-8191) CloudSolrStream close method NullPointerException

2015-12-15 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15059213#comment-15059213
 ] 

Jason Gerlowski commented on SOLR-8191:
---

The current patch still applies cleanly.  Fixing this NPE might not be hugely 
important, but this bug is blocking SOLR-8190, which would be a nice 
improvement IMO. (Not a huge deal, but still a nice little tidbit).

Looking at {{CloudSolrStream}} a little closer though, it seems odd to perform 
a null-check in {{close()}} but not in any of the other places where 
{{cloudSolrClient}} is used.  For instance, check out the protected 
{{constructStreams()}} method, which is invoked on each call to {{open()}}.

Those are just my observations at a glance.  I'm not very familiar with the 
SolrJ code, so maybe this isn't actually an issue.  Just wanted to mention it.  
I'm going to tinker around with this more tonight to see if I can learn more.
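A null-safe close along these lines might look like the sketch below. The field layout and names are stand-ins for CloudSolrStream's cloudSolrClient and solrStreams fields, not the actual SolrJ code.

```java
// Illustrative null-safe close, mirroring the guard the patch discusses;
// names are stand-ins for CloudSolrStream internals, not SolrJ code.
import java.io.Closeable;
import java.io.IOException;
import java.util.List;

public class NullSafeClose {
    /**
     * Close a client and a list of sub-streams, tolerating fields that
     * are still null because open() was never called.
     */
    static void close(Closeable client, List<? extends Closeable> streams) throws IOException {
        if (streams != null) {
            for (Closeable s : streams) {
                if (s != null) {
                    s.close();
                }
            }
        }
        if (client != null) {
            client.close();
        }
    }

    public static void main(String[] args) throws IOException {
        // Safe even when nothing was ever opened and everything is null.
        close(null, null);
        System.out.println("closed without NPE");
    }
}
```

This is exactly the situation try-with-resources hits in SOLR-8190: close() is invoked on a stream whose open() threw or was never reached.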

> CloudSolrStream close method NullPointerException
> -
>
> Key: SOLR-8191
> URL: https://issues.apache.org/jira/browse/SOLR-8191
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8191.patch
>
>
> CloudSolrStream doesn't check if cloudSolrClient or solrStreams is null 
> yielding a NullPointerException in those cases when close() is called on it.






[jira] [Created] (SOLR-8421) zkHost with chroot and multiple hosts not working

2015-12-15 Thread Dmitry Myaskovskiy (JIRA)
Dmitry Myaskovskiy created SOLR-8421:


 Summary: zkHost with chroot and multiple hosts not working
 Key: SOLR-8421
 URL: https://issues.apache.org/jira/browse/SOLR-8421
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.4
 Environment: Solr Cloud
Reporter: Dmitry Myaskovskiy


I'm trying to run Solr Cloud with following command:

{code}
./bin/solr -f -c -z localhost:2181/test,localhost:2182/test
{code}

And getting error:

{code}
749  ERROR (main) [   ] o.a.s.c.SolrCore 
null:org.apache.solr.common.cloud.ZooKeeperException: A chroot was specified in 
ZkHost but the znode doesn't exist. localhost:2181/test,localhost:2182/test
{code}

Node "/test" exists in zookeeper. And both:

{code}
./bin/solr -f -c -z localhost:2181,localhost:2182
{code}

{code}
./bin/solr -f -c -z localhost:2181/test
{code}

works fine.

But I cannot get it to work with multiple nodes and chroot specified.






[jira] [Reopened] (SOLR-8421) zkHost with chroot and multiple hosts not working

2015-12-15 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man reopened SOLR-8421:


Why don't we re-purpose this issue to try to make the error/logging clearer 
about what's happening in these cases?

> zkHost with chroot and multiple hosts not working
> -
>
> Key: SOLR-8421
> URL: https://issues.apache.org/jira/browse/SOLR-8421
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.4
> Environment: Solr Cloud
>Reporter: Dmitry Myaskovskiy
>
> I'm trying to run Solr Cloud with following command:
> {code}
> ./bin/solr -f -c -z localhost:2181/test,localhost:2182/test
> {code}
> And getting error:
> {code}
> 749  ERROR (main) [   ] o.a.s.c.SolrCore 
> null:org.apache.solr.common.cloud.ZooKeeperException: A chroot was specified 
> in ZkHost but the znode doesn't exist. localhost:2181/test,localhost:2182/test
> {code}
> The node "/test" exists in ZooKeeper, and both:
> {code}
> ./bin/solr -f -c -z localhost:2181,localhost:2182
> {code}
> {code}
> ./bin/solr -f -c -z localhost:2181/test
> {code}
> work fine.
> But I cannot get it to work with multiple hosts and a chroot specified.






[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_66) - Build # 15215 - Still Failing!

2015-12-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15215/
Java: 32bit/jdk1.8.0_66 -client -XX:+UseParallelGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestMiniSolrCloudClusterSSL

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([C2A9D8A16D754120]:0)




Build Log:
[...truncated 11024 lines...]
   [junit4] Suite: org.apache.solr.cloud.TestMiniSolrCloudClusterSSL
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.TestMiniSolrCloudClusterSSL_C2A9D8A16D754120-001/init-core-data-001
   [junit4]   2> 274168 INFO  
(SUITE-TestMiniSolrCloudClusterSSL-seed#[C2A9D8A16D754120]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false)
   [junit4]   2> 274170 INFO  
(SUITE-TestMiniSolrCloudClusterSSL-seed#[C2A9D8A16D754120]-worker) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 274170 INFO  (Thread-776) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 274170 INFO  (Thread-776) [] o.a.s.c.ZkTestServer Starting 
server
   [junit4]   2> 274270 INFO  
(SUITE-TestMiniSolrCloudClusterSSL-seed#[C2A9D8A16D754120]-worker) [] 
o.a.s.c.ZkTestServer start zk server on port:59148
   [junit4]   2> 274270 INFO  
(SUITE-TestMiniSolrCloudClusterSSL-seed#[C2A9D8A16D754120]-worker) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 274270 INFO  
(SUITE-TestMiniSolrCloudClusterSSL-seed#[C2A9D8A16D754120]-worker) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 274273 INFO  (zkCallback-183-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@111416 name:ZooKeeperConnection 
Watcher:127.0.0.1:59148 got event WatchedEvent state:SyncConnected type:None 
path:null path:null type:None
   [junit4]   2> 274273 INFO  
(SUITE-TestMiniSolrCloudClusterSSL-seed#[C2A9D8A16D754120]-worker) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 274273 INFO  
(SUITE-TestMiniSolrCloudClusterSSL-seed#[C2A9D8A16D754120]-worker) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2> 274273 INFO  
(SUITE-TestMiniSolrCloudClusterSSL-seed#[C2A9D8A16D754120]-worker) [] 
o.a.s.c.c.SolrZkClient makePath: /solr/solr.xml
   [junit4]   2> 274277 INFO  
(SUITE-TestMiniSolrCloudClusterSSL-seed#[C2A9D8A16D754120]-worker) [] 
o.a.s.c.c.SolrZkClient makePath: /solr/clusterprops.json
   [junit4]   2> 274292 INFO  (jetty-launcher-182-thread-1) [] 
o.e.j.s.Server jetty-9.3.6.v20151106
   [junit4]   2> 274292 INFO  (jetty-launcher-182-thread-2) [] 
o.e.j.s.Server jetty-9.3.6.v20151106
   [junit4]   2> 274292 INFO  (jetty-launcher-182-thread-4) [] 
o.e.j.s.Server jetty-9.3.6.v20151106
   [junit4]   2> 274292 INFO  (jetty-launcher-182-thread-5) [] 
o.e.j.s.Server jetty-9.3.6.v20151106
   [junit4]   2> 274292 INFO  (jetty-launcher-182-thread-3) [] 
o.e.j.s.Server jetty-9.3.6.v20151106
   [junit4]   2> 274295 INFO  (jetty-launcher-182-thread-1) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@5f0f78{/solr,null,AVAILABLE}
   [junit4]   2> 274295 INFO  (jetty-launcher-182-thread-2) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@ee8c81{/solr,null,AVAILABLE}
   [junit4]   2> 274295 INFO  (jetty-launcher-182-thread-3) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@19ef8f4{/solr,null,AVAILABLE}
   [junit4]   2> 274295 INFO  (jetty-launcher-182-thread-5) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@101d8d2{/solr,null,AVAILABLE}
   [junit4]   2> 274295 INFO  (jetty-launcher-182-thread-4) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@fb41ff{/solr,null,AVAILABLE}
   [junit4]   2> 274302 INFO  (jetty-launcher-182-thread-3) [] 
o.e.j.u.s.SslContextFactory x509=X509@f0c41(solrtest,h=[],w=[]) for 
SslContextFactory@d5c651(file:///home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/server/etc/test/solrtest.keystore,file:///home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/server/etc/test/solrtest.keystore)
   [junit4]   2> 274302 INFO  (jetty-launcher-182-thread-5) [] 
o.e.j.u.s.SslContextFactory x509=X509@19cdbe7(solrtest,h=[],w=[]) for 
SslContextFactory@962662(file:///home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/server/etc/test/solrtest.keystore,file:///home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/server/etc/test/solrtest.keystore)
   [junit4]   2> 274302 INFO  (jetty-launcher-182-thread-4) [] 
o.e.j.u.s.SslContextFactory x509=X509@228d94(solrtest,h=[],w=[]) for 

[jira] [Updated] (SOLR-8419) TermVectorComponent distributed-search issues

2015-12-15 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-8419:
---
Issue Type: Bug  (was: Improvement)

> TermVectorComponent distributed-search issues
> -
>
> Key: SOLR-8419
> URL: https://issues.apache.org/jira/browse/SOLR-8419
> Project: Solr
>  Issue Type: Bug
>  Components: SearchComponents - other
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 5.5
>
>
> TermVectorComponent has supported distributed-search since SOLR-3229 added it.  
> Unlike most other components, this one tries to support schemas without a 
> UniqueKey.  However, its logic for attempting to do this was made faulty with 
> the introduction of distrib.singlePass, and furthermore this part wasn't 
> tested in any way.  In this issue I want to remove this component's support for 
> schemas lacking a UniqueKey (only for distributed-search).  






[jira] [Updated] (SOLR-8419) TermVectorComponent distributed-search issues

2015-12-15 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-8419:
---
Attachment: SOLR_8419.patch

The attached patch:
* Fixes the invalid/confusing response when there's a distributed single-pass 
situation.
* Removes {{uniqueKeyFieldName}} as a key in the TV response NamedList.  Strictly 
I didn't have to do this, but it seemed totally out of place; 
HighlightComponent & DebugComponent don't do this.
* Adds a test that fails without these changes -- the distrib.singlePass case.

These changes also allow for an eventual refactoring of the common code in 
finishStage (the loop filling {{arr}}).  That is the part affected by a 
distrib.singlePass bug in 3 search components.  I won't do that refactoring 
here though; I'll do it in BNGS-8059.

Assuming tests pass, I'll commit this in a couple of days.

> TermVectorComponent distributed-search issues
> -
>
> Key: SOLR-8419
> URL: https://issues.apache.org/jira/browse/SOLR-8419
> Project: Solr
>  Issue Type: Bug
>  Components: SearchComponents - other
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 5.5
>
> Attachments: SOLR_8419.patch
>
>
> TermVectorComponent has supported distributed-search since SOLR-3229 added it.  
> Unlike most other components, this one tries to support schemas without a 
> UniqueKey.  However, its logic for attempting to do this was made faulty with 
> the introduction of distrib.singlePass, and furthermore this part wasn't 
> tested in any way.  In this issue I want to remove this component's support for 
> schemas lacking a UniqueKey (only for distributed-search).  






[JENKINS] Lucene-Solr-SmokeRelease-5.x - Build # 408 - Failure

2015-12-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.x/408/

No tests ran.

Build Log:
[...truncated 53104 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist
 [copy] Copying 461 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.7 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.7
   [smoker] Java 1.8 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (10.2 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-5.5.0-src.tgz...
   [smoker] 28.7 MB in 0.04 sec (766.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.5.0.tgz...
   [smoker] 66.2 MB in 0.09 sec (753.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.5.0.zip...
   [smoker] 76.6 MB in 0.10 sec (786.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-5.5.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 6170 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] test demo with 1.8...
   [smoker]   got 6170 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.5.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 6170 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] test demo with 1.8...
   [smoker]   got 6170 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.5.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 7 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.7...
   [smoker]   got 214 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] generate javadocs w/ Java 7...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 214 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (22.0 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-5.5.0-src.tgz...
   [smoker] 37.4 MB in 0.46 sec (80.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-5.5.0.tgz...
   [smoker] 130.1 MB in 2.08 sec (62.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-5.5.0.zip...
   [smoker] 137.9 MB in 2.39 sec (57.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-5.5.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-5.5.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.5.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.5.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 7 ...
   [smoker] test solr example w/ Java 7...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.5.0-java7/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 

[JENKINS] Lucene-Solr-trunk-Solaris (64bit/jdk1.8.0) - Build # 261 - Still Failing!

2015-12-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Solaris/261/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

4 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestCoreDiscovery

Error Message:
ERROR: SolrIndexSearcher opens=27 closes=26

Stack Trace:
java.lang.AssertionError: ERROR: SolrIndexSearcher opens=27 closes=26
at __randomizedtesting.SeedInfo.seed([45F43F257D825341]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.SolrTestCaseJ4.endTrackingSearchers(SolrTestCaseJ4.java:453)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:225)
at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestCoreDiscovery

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.core.TestCoreDiscovery: 
1) Thread[id=14668, name=searcherExecutor-6412-thread-1, state=WAITING, 
group=TGRP-TestCoreDiscovery] at sun.misc.Unsafe.park(Native Method)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)   
  at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.core.TestCoreDiscovery: 
   1) Thread[id=14668, name=searcherExecutor-6412-thread-1, state=WAITING, 
group=TGRP-TestCoreDiscovery]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([45F43F257D825341]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestCoreDiscovery

Error Message:
There are still zombie threads that couldn't be terminated:

[JENKINS] Lucene-Solr-5.x-Solaris (64bit/jdk1.8.0) - Build # 260 - Still Failing!

2015-12-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Solaris/260/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.OverseerTest.testExternalClusterStateChangeBehavior

Error Message:
Illegal state, was: down expected:active clusterState:live 
nodes:[]collections:{c1=DocCollection(c1)={   "shards":{"shard1":{   
"parent":null,   "range":null,   "state":"active",   
"replicas":{"core_node1":{   "base_url":"http://127.0.0.1/solr",
   "node_name":"node1",   "core":"core1",   "roles":"", 
  "state":"down",   "router":{"name":"implicit"}}, 
test=LazyCollectionRef(test)}

Stack Trace:
java.lang.AssertionError: Illegal state, was: down expected:active 
clusterState:live nodes:[]collections:{c1=DocCollection(c1)={
  "shards":{"shard1":{
  "parent":null,
  "range":null,
  "state":"active",
  "replicas":{"core_node1":{
  "base_url":"http://127.0.0.1/solr",
  "node_name":"node1",
  "core":"core1",
  "roles":"",
  "state":"down",
  "router":{"name":"implicit"}}, test=LazyCollectionRef(test)}
at 
__randomizedtesting.SeedInfo.seed([F137852DF4029A5B:992986C11692C015]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.verifyReplicaStatus(AbstractDistribZkTestBase.java:237)
at 
org.apache.solr.cloud.OverseerTest.testExternalClusterStateChangeBehavior(OverseerTest.java:1262)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-8409) Complex q param in Streaming Expression results in a bad query

2015-12-15 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15057983#comment-15057983
 ] 

Dennis Gove commented on SOLR-8409:
---

I take that back. The file schema-streaming.xml contains the default query field
{code}
text
{code}

If I comment out that setting then I am able to replicate the failure described 
in this ticket - finally. I will create a couple of valid tests replicating the 
issue and will commit the fix as soon as I can.

> Complex q param in Streaming Expression results in a bad query
> --
>
> Key: SOLR-8409
> URL: https://issues.apache.org/jira/browse/SOLR-8409
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Dennis Gove
>Priority: Minor
>  Labels: streaming, streaming_api
> Attachments: SOLR-8409.patch
>
>
> When providing an expression like 
> {code}
> stream=search(people, fl="id,first", sort="first asc", 
> q="presentTitles:\"chief executive officer\" AND age:[36 TO *]")
> {code}
> the following error is seen.
> {code}
> no field name specified in query and no default specified via 'df' param
> {code}
> I believe the issue is related to the \" (escaped quotes) and the spaces in 
> the q field. If I remove the spaces then the query returns results as 
> expected (though I've yet to validate if those results are accurate).
> This requires some investigation to get down to the root cause. I would like 
> to fix it before Solr 6 is cut.






[jira] [Commented] (SOLR-8388) TestSolrQueryResponse (factor out, then extend)

2015-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15058073#comment-15058073
 ] 

ASF subversion and git services commented on SOLR-8388:
---

Commit 1720160 from [~cpoerschke] in branch 'dev/trunk'
[ https://svn.apache.org/r1720160 ]

SOLR-8388: more TestSolrQueryResponse.java tests; add SolrReturnFields.toString 
method, ReturnFieldsTest.testToString test;

> TestSolrQueryResponse (factor out, then extend)
> ---
>
> Key: SOLR-8388
> URL: https://issues.apache.org/jira/browse/SOLR-8388
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8388-part1of2.patch, SOLR-8388-part2of2.patch
>
>
> factor out 
> {{solr/core/src/test/org/apache/solr/response/TestSolrQueryResponse.java}} 
> from {{solr/core/src/test/org/apache/solr/servlet/ResponseHeaderTest.java}} 
> and then extend it






[jira] [Comment Edited] (LUCENE-6908) TestGeoUtils.testGeoRelations is buggy with irregular rectangles

2015-12-15 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15058081#comment-15058081
 ] 

Steve Rowe edited comment on LUCENE-6908 at 12/15/15 2:00 PM:
--

My Jenkins found a reproducible failure in TestGeoUtils.testGeoRelations:

{noformat}
   [junit4] Suite: org.apache.lucene.util.TestGeoUtils
   [junit4]   1> doc=1921 matched but should not on iteration 229
   [junit4]   1>   lon=87.14082470163703 lat=-89.39206877723336 
distanceMeters=205662.45440744862 vs radiusMeters=203580.37384777897
   [junit4]   1> doc=2077 matched but should not on iteration 229
   [junit4]   1>   lon=63.26208980754018 lat=-89.36728684231639 
distanceMeters=204170.67218267516 vs radiusMeters=203580.37384777897
   [junit4]   2> NOTE: download the large Jenkins line-docs file by running 
'ant get-jenkins-line-docs' in the lucene directory.
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestGeoUtils 
-Dtests.method=testGeoRelations -Dtests.seed=4513B1942DE0E2D3 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
-Dtests.locale=lv -Dtests.timezone=America/St_Vincent -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] FAILURE 2.24s | TestGeoUtils.testGeoRelations <<<
   [junit4]> Throwable #1: java.lang.AssertionError: 2 incorrect hits (see 
above)
{noformat}


was (Author: steve_rowe):
My 

> TestGeoUtils.testGeoRelations is buggy with irregular rectangles
> 
>
> Key: LUCENE-6908
> URL: https://issues.apache.org/jira/browse/LUCENE-6908
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Nicholas Knize
> Attachments: LUCENE-6908.patch, LUCENE-6908.patch, LUCENE-6908.patch, 
> LUCENE-6908.patch
>
>
> The {{.testGeoRelations}} method doesn't exactly test the behavior of 
> GeoPoint*Query as its using the BKD split technique (instead of quad cell 
> division) to divide the space on each pass. For "large" distance queries this 
> can create a lot of irregular rectangles producing large radial distortion 
> error when using the cartesian approximation methods provided by 
> {{GeoUtils}}. This issue improves the accuracy of GeoUtils cartesian 
> approximation methods on irregular rectangles without having to cut over to 
> an expensive oblate geometry approach.






[jira] [Commented] (LUCENE-6908) TestGeoUtils.testGeoRelations is buggy with irregular rectangles

2015-12-15 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15058086#comment-15058086
 ] 

Steve Rowe commented on LUCENE-6908:


See also Policeman Jenkins failures at 
http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14886/ and 
http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15198/

> TestGeoUtils.testGeoRelations is buggy with irregular rectangles
> 
>
> Key: LUCENE-6908
> URL: https://issues.apache.org/jira/browse/LUCENE-6908
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Nicholas Knize
> Attachments: LUCENE-6908.patch, LUCENE-6908.patch, LUCENE-6908.patch, 
> LUCENE-6908.patch
>
>
> The {{.testGeoRelations}} method doesn't exactly test the behavior of 
> GeoPoint*Query as its using the BKD split technique (instead of quad cell 
> division) to divide the space on each pass. For "large" distance queries this 
> can create a lot of irregular rectangles producing large radial distortion 
> error when using the cartesian approximation methods provided by 
> {{GeoUtils}}. This issue improves the accuracy of GeoUtils cartesian 
> approximation methods on irregular rectangles without having to cut over to 
> an expensive oblate geometry approach.






[jira] [Commented] (LUCENE-6908) TestGeoUtils.testGeoRelations is buggy with irregular rectangles

2015-12-15 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15058081#comment-15058081
 ] 

Steve Rowe commented on LUCENE-6908:


My 

> TestGeoUtils.testGeoRelations is buggy with irregular rectangles
> 
>
> Key: LUCENE-6908
> URL: https://issues.apache.org/jira/browse/LUCENE-6908
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Nicholas Knize
> Attachments: LUCENE-6908.patch, LUCENE-6908.patch, LUCENE-6908.patch, 
> LUCENE-6908.patch
>
>
> The {{.testGeoRelations}} method doesn't exactly test the behavior of 
> GeoPoint*Query as its using the BKD split technique (instead of quad cell 
> division) to divide the space on each pass. For "large" distance queries this 
> can create a lot of irregular rectangles producing large radial distortion 
> error when using the cartesian approximation methods provided by 
> {{GeoUtils}}. This issue improves the accuracy of GeoUtils cartesian 
> approximation methods on irregular rectangles without having to cut over to 
> an expensive oblate geometry approach.






[jira] [Commented] (SOLR-7730) speed-up faceting on doc values fields

2015-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15058747#comment-15058747
 ] 

ASF subversion and git services commented on SOLR-7730:
---

Commit 1720241 from m...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1720241 ]

SOLR-7730: mention in 5.4.0's Optimizations

> speed-up faceting on doc values fields
> --
>
> Key: SOLR-7730
> URL: https://issues.apache.org/jira/browse/SOLR-7730
> Project: Solr
>  Issue Type: Improvement
>  Components: faceting
>Affects Versions: 5.2.1
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>  Labels: patch
> Fix For: 5.4
>
> Attachments: LUCENE-7730.patch, LUCENE-7730.patch, 
> SOLR-7730-changes.patch, SOLR-7730.patch
>
>
> Every time we count facets on DocValues fields in Solr on a many-segment 
> index, we see an unnecessary hotspot:
> {code}
> 
> at 
> org.apache.lucene.index.MultiFields.getMergedFieldInfos(MultiFields.java:248)
> at 
> org.apache.lucene.index.SlowCompositeReaderWrapper.getFieldInfos(SlowCompositeReaderWrapper.java:239)
> at 
> org.apache.lucene.index.SlowCompositeReaderWrapper.getSortedSetDocValues(SlowCompositeReaderWrapper.java:176)
> at 
> org.apache.solr.request.DocValuesFacets.getCounts(DocValuesFacets.java:72)
> at 
> org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:460) 
> {code}
> The reason is SlowCompositeReaderWrapper.getSortedSetDocValues() Line 136 and 
> SlowCompositeReaderWrapper.getSortedDocValues() Line 174.
> Before returning composite doc values, SCRW merges the per-segment field 
> infos, which is expensive, but once the FieldInfo is merged it checks *only* 
> the docvalues type in it. This dv-type check can be done much more cheaply on 
> a per-segment basis. 
> This patch gets some performance gain for those who count DV facets in Solr.
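The per-segment check described in the issue might look like the sketch below. This is an illustration only, not the actual SOLR-7730 patch: it walks the reader's leaves and consults each segment's own FieldInfos, avoiding the MultiFields.getMergedFieldInfos() call that shows up in the hotspot. Class and method names are hypothetical; it assumes Lucene 5.x APIs.

```java
import org.apache.lucene.index.DocValuesType;
import org.apache.lucene.index.FieldInfo;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.LeafReaderContext;

final class DocValuesTypeCheck {
  /**
   * Resolves the docvalues type of {@code field} per segment, instead of
   * merging all FieldInfos up front just to read one attribute.
   */
  static DocValuesType perSegmentType(IndexReader reader, String field) {
    for (LeafReaderContext leaf : reader.leaves()) {
      FieldInfo fi = leaf.reader().getFieldInfos().fieldInfo(field);
      if (fi != null && fi.getDocValuesType() != DocValuesType.NONE) {
        // First segment that actually has docvalues for the field wins.
        return fi.getDocValuesType();
      }
    }
    return DocValuesType.NONE;
  }
}
```

Per-segment FieldInfos are already computed and cached by each leaf, so this lookup is cheap compared to re-merging infos across every segment on each faceting request.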






SortingLeafReader and IndexWriter.addIndexes

2015-12-15 Thread John Wang
Hi folks:

I am interested in using the SortingLeafReader to sort my index. According
to examples, calling IndexWriter.addIndexes on the wrapping
SortingLeafReader would do the trick.

In recent releases, the IndexWriter.addIndexes API only takes a
CodecReader. Is there another way to do index sorting?

Appreciate any help.

Thanks

-John
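One possible approach, sketched under the assumption that Lucene 5.x's SlowCodecReaderWrapper is available to adapt a plain LeafReader (such as SortingLeafReader) to the CodecReader that addIndexes now requires. The directory paths and sort field here are illustrative, and this is not a confirmed answer from the thread:

```java
import java.nio.file.Paths;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.index.SlowCodecReaderWrapper;
import org.apache.lucene.index.SlowCompositeReaderWrapper;
import org.apache.lucene.index.SortingLeafReader;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class SortIndexDemo {
  public static void main(String[] args) throws Exception {
    Directory src = FSDirectory.open(Paths.get("srcIndex"));     // existing index
    Directory dst = FSDirectory.open(Paths.get("sortedIndex"));  // sorted copy
    Sort sort = new Sort(new SortField("timestamp", SortField.Type.LONG));

    try (DirectoryReader reader = DirectoryReader.open(src);
         IndexWriter writer = new IndexWriter(dst,
             new IndexWriterConfig(new StandardAnalyzer()))) {
      // Flatten the composite reader, wrap it so docs come back in sorted
      // order, then adapt to the CodecReader that addIndexes expects.
      LeafReader sorted = SortingLeafReader.wrap(
          SlowCompositeReaderWrapper.wrap(reader), sort);
      writer.addIndexes(SlowCodecReaderWrapper.wrap(sorted));
    }
  }
}
```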

