[jira] [Comment Edited] (LUCENE-5914) More options for stored fields compression

2014-12-01 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14231118#comment-14231118
 ] 

Robert Muir edited comment on LUCENE-5914 at 12/2/14 7:46 AM:
--

Applying the tuning parameters proposed for the deflate case here to the trunk 
code is a trivial and safe patch, and more compelling:

||impl||size||index time||force merge time||
|trunk_HC_level3_24576_512|269,857,651|118,150|28,313|

Edited: renamed the impl to make it clear that I also bumped maxDocsPerChunk in 
this case.


was (Author: rcmuir):
Applying the tuning parameters proposed for the deflate case here to the trunk 
code is a trivial and safe patch, and more compelling:

||impl||size||index time||force merge time||
|trunk_HC_level3_24576|269,857,651|118,150|28,313|


> More options for stored fields compression
> --
>
> Key: LUCENE-5914
> URL: https://issues.apache.org/jira/browse/LUCENE-5914
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
> Fix For: 5.0
>
> Attachments: LUCENE-5914.patch, LUCENE-5914.patch, LUCENE-5914.patch, 
> LUCENE-5914.patch, LUCENE-5914.patch
>
>
> Since we added codec-level compression in Lucene 4.1 I think I got about the 
> same amount of users complaining that compression was too aggressive and that 
> compression was too light.
> I think it is due to the fact that we have users doing very different things 
> with Lucene. For example, if you have a small index that fits in the 
> filesystem cache (or close to it), then you might never pay for actual disk 
> seeks, and in such a case the fact that the current stored fields format 
> needs to over-decompress data can noticeably slow down search on cheap queries.
> On the other hand, it is more and more common to use Lucene for things like 
> log analytics, and in that case you have huge amounts of data for which you 
> don't care much about stored fields performance. However it is very 
> frustrating to notice that the data that you store takes several times less 
> space when you gzip it compared to your index although Lucene claims to 
> compress stored fields.
> For that reason, I think it would be nice to have some kind of option that 
> would allow trading speed for compression in the default codec.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org






[jira] [Commented] (LUCENE-5914) More options for stored fields compression

2014-12-01 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14231066#comment-14231066
 ] 

Robert Muir commented on LUCENE-5914:
-

I also want to propose a new way to proceed here. In my opinion this issue 
tries to do a lot at once:

* make changes to the default codec
* support high and low compression options in the default codec with backwards 
compatibility
* provide some easy way to "choose" between supported options without having to 
use FilterCodec
* new lz4 implementation
* new deflate implementation

I think it's too scary to do all at once. I would prefer we start by exposing 
the current CompressionMode.HIGH_COMPRESSION as the "high compression" option. 
At least for the one test dataset I used above (2 GB of highly compressible 
Apache server logs), this is reasonably competitive with the deflate option on 
this issue:
||impl||size||index time||force merge time||
|trunk_HC|275,262,504|143,264|49,030|

But more importantly, HighCompressingCodec has been baking in our test suite 
for years, with the scary bugs knocked out of it.
I think we should first figure out the plumbing to expose that; it's something 
we could realistically do for Lucene 5.0 and have confidence in. There is still 
plenty to do to make that option work: exposing the configuration option, 
addressing concerns about back-compat testing (we should generate back-compat 
indexes both ways), and so on. But at least there is a huge head start on 
testing and code correctness: it's baked.

For each newly proposed format (LZ4 with a shared dictionary, deflate, 
whatever), I think we should proceed one at a time: add it to the codecs/ 
package first, get it into tests, and let it bake in a similar way. That 
doesn't need to take years, but we should split these concerns.







[jira] [Commented] (LUCENE-5914) More options for stored fields compression

2014-12-01 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14231040#comment-14231040
 ] 

Robert Muir commented on LUCENE-5914:
-

I opened LUCENE-6085 for the SI.attributes, which should help with cleanup.

I ran some benchmarks on various datasets to get an idea of where this stands, 
and the results are disappointing. For geonames, the new format increases the 
size of the stored fields by 50%; for Apache HTTP server logs, it doubles the 
size. Indexing time is significantly slower for every dataset I test as well: 
there must be bugs in the LZ4 + shared-dictionary code?

||impl||size||index time||force merge time||
|trunk|372,845,278|101,745|15,976|
|patch(BEST_SPEED)|780,861,727|141,699|60,114|
|patch(BEST_COMPRESSION)|265,063,340|132,238|53,561|

To confirm it's a bug and not just the cost of additional I/O (due to less 
compression with shared dictionaries), I set the deflate level to 0 and indexed 
with the BEST_COMPRESSION layout to really jack up the size. Sure, it created a 
1.8 GB stored-fields file, but it did so in 126,093 ms with 44,377 ms of 
merging. That is faster than both of the options in the patch...
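For reference, the "deflate level" knob discussed here is the standard 
java.util.zip one. A standalone sketch (not the benchmark harness) of why level 
0 blows up the output while repetitive log-like data otherwise compresses well:

```java
import java.util.zip.Deflater;

// Generic illustration of the deflate level tradeoff; not the benchmark code.
// Level 0 emits "stored" blocks (no compression, small framing overhead),
// while higher levels shrink repetitive data dramatically.
public class DeflateLevels {

    static int compressedSize(byte[] input, int level) {
        Deflater d = new Deflater(level);
        d.setInput(input);
        d.finish();
        byte[] buf = new byte[input.length * 2 + 64];
        int len = d.deflate(buf);  // single call is enough for this small input
        d.end();
        return len;
    }

    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 200; i++) {
            sb.append("127.0.0.1 - GET /index.html 200\n"); // log-like, repetitive
        }
        byte[] data = sb.toString().getBytes();
        int stored = compressedSize(data, Deflater.NO_COMPRESSION);
        int best = compressedSize(data, Deflater.BEST_COMPRESSION);
        // level 0 is slightly LARGER than the input (block framing), while
        // the repetitive data compresses by more than an order of magnitude
        System.out.println(stored > data.length / 2 && best < data.length / 10);
    }
}
```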

Anyway, this leads to more questions:
* Do we really need a completely separate LZ4 implementation for 
shared-dictionary support? It's tough to understand, e.g., why it reimplements 
the hash table differently, and so on.
* Do we really need to share code between different stored-fields 
implementations that have different use cases and goals? I think the patch 
currently overshares here, and the additional abstractions make it hard to 
work with.
* Along with the sharing point above: we can still reuse code between formats. 
For example, the document<->byte conversion could be shared static methods. I 
would just avoid subclassing and interfaces, because I get lost in the patch 
too easily. And we need to be careful that any shared code stays simple and 
clear, because we have to assume the formats will evolve over time.
* We shouldn't wrap the deflate case with the zlib header/footer. Dropping it 
saves a little bit.
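The zlib header/footer point in the last bullet is easy to quantify with stock 
java.util.zip (a standalone sketch, not the patch's code): Deflater's nowrap 
flag emits a raw DEFLATE stream, dropping the 2-byte zlib header and 4-byte 
Adler-32 trailer from each compressed chunk:

```java
import java.util.zip.Deflater;

// Compares java.util.zip output with and without the zlib wrapper.
// nowrap=true produces a raw DEFLATE stream: same payload, minus the
// 2-byte zlib header and 4-byte Adler-32 checksum trailer.
public class ZlibWrapperOverhead {

    static int deflatedSize(byte[] input, boolean nowrap) {
        Deflater d = new Deflater(Deflater.DEFAULT_COMPRESSION, nowrap);
        d.setInput(input);
        d.finish();
        byte[] buf = new byte[input.length + 64];
        int len = d.deflate(buf);  // one call suffices for this tiny input
        d.end();
        return len;
    }

    public static void main(String[] args) {
        byte[] doc = "the quick brown fox jumps over the lazy dog".getBytes();
        int wrapped = deflatedSize(doc, false);
        int raw = deflatedSize(doc, true);
        // zlib framing costs 6 bytes per stream (header + checksum)
        System.out.println("overhead=" + (wrapped - raw));
    }
}
```

Six bytes per chunk is small, but it is pure waste when the container format 
already frames and checksums the data itself.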

About the oversharing issue: I really think the separate formats should just be 
separate formats; it will make life easier. The difference is more than just 
the compression algorithm, and we shouldn't try to structure things so that one 
can simply be swapped in for the other; I don't think that's the right 
tradeoff.

For example, with high compression it is more important to lay data out so 
that bulk merge doesn't cause re-compression, even if that causes 'temporary' 
waste along segment boundaries. This matters because compression gets very 
costly here, and for e.g. the "archiving" case bulk merge should be potent, 
since there shouldn't be many deletions: we shouldn't bear the cost of 
re-compressing over and over. This gets much, much worse if you try to use 
something "better" than gzip, too.

On the other hand, with low compression we should ensure that merging stays 
fast even in the presence of deletions. The shared-dictionary approach is one 
way; another is to have at least getMergeInstance() remember the current block 
and implement a "seek within block" optimization, which is probably simpler and 
better than what trunk does today.






[jira] [Updated] (SOLR-6741) IPv6 Field Type

2014-12-01 Thread Steve Davids (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Davids updated SOLR-6741:
---
Attachment: SOLR-6741.patch

I attached a patch for IPv4 support which allows prefix queries, range queries, 
and CIDR notation, implemented by extending TrieLongField. Hopefully this can 
serve as a good starting point. [~lnr0626] also contributed to this code.
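As a rough sketch of the encoding idea (hypothetical helper names, not code 
from the attached patch): an IPv4 address packs into a long, and a CIDR block 
becomes one contiguous [min, max] range, which is exactly the shape of query a 
trie-encoded long field answers efficiently:

```java
// Hypothetical sketch of the long-encoding idea behind an IPv4 field
// backed by TrieLongField; not code from the SOLR-6741 patch.
public class Ipv4Cidr {

    // Pack a dotted-quad IPv4 address into an unsigned 32-bit value in a long.
    static long toLong(String ip) {
        long v = 0;
        for (String octet : ip.split("\\.")) {
            v = (v << 8) | Integer.parseInt(octet);
        }
        return v;
    }

    // A CIDR block maps to one contiguous numeric range [min, max],
    // so containment becomes a plain range query like ip:[min TO max].
    static long[] cidrRange(String cidr) {
        String[] parts = cidr.split("/");
        long base = toLong(parts[0]);
        int prefix = Integer.parseInt(parts[1]);
        long hostBits = (1L << (32 - prefix)) - 1;  // mask of host bits
        long min = base & ~hostBits;
        return new long[] { min, min | hostBits };
    }

    public static void main(String[] args) {
        long[] r = cidrRange("192.168.0.0/24");
        System.out.println(r[0] + " " + r[1]);
    }
}
```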

> IPv6 Field Type
> ---
>
> Key: SOLR-6741
> URL: https://issues.apache.org/jira/browse/SOLR-6741
> Project: Solr
>  Issue Type: Improvement
>Reporter: Lloyd Ramey
> Attachments: SOLR-6741.patch
>
>
> It would be nice if Solr had a field type which could be used to index IPv6 
> data and supported efficient range queries. 






Why do we have a CoreAdminHandler#Load action?

2014-12-01 Thread Varun Thacker
Is there a reason why we have a switch case for the LOAD action in the code? 
If not, can we remove it?

-- 


Regards,
Varun Thacker
http://www.vthacker.in/


[jira] [Commented] (SOLR-4792) stop shipping a war in 5.0

2014-12-01 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230847#comment-14230847
 ] 

Yonik Seeley commented on SOLR-4792:


Sorry for being somewhat off-topic... here's what I tried:
{code}
/opt/code/lusolr5/solr$ cp -rp server s1
/opt/code/lusolr5/solr$ cd s1
/opt/code/lusolr5/solr/s1$ ../bin/solr start -e techproducts -d .
ERROR: start.jar file not found in /opt/code/lusolr5/solr/.!
Please check your -d parameter to set the correct Solr server directory.
/opt/code/lusolr5/solr/s1$ ../bin/solr start -e techproducts -d `pwd`
Waiting to see Solr listening on port 8983 [/]  
Started Solr server on port 8983 (pid=97723). Happy searching!
[...]
/opt/code/lusolr5/solr/s1$ find . -name data
/opt/code/lusolr5/solr/s1$ find .. -name data
../server/solr/techproducts/data
/opt/code/lusolr5/solr/s1$ 
{code}

So I guess my mistake was thinking that examples like "techproducts" were 
portable.

Oh, and I do like how some commands are displayed, like:
{code}
Creating new core 'techproducts' using command:
http://localhost:8983/solr/admin/cores?action=CREATE&name=techproducts&instanceDir=techproducts
{code}

> stop shipping a war in 5.0
> --
>
> Key: SOLR-4792
> URL: https://issues.apache.org/jira/browse/SOLR-4792
> Project: Solr
>  Issue Type: Task
>  Components: Build
>Reporter: Robert Muir
>Assignee: Mark Miller
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-4792.patch
>
>
> see the vote on the developer list.
> This is the first step: if we stop shipping a war then we are free to do 
> anything we want. 






[jira] [Commented] (SOLR-6806) Reduce the size of the main Solr binary download

2014-12-01 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230803#comment-14230803
 ] 

Yonik Seeley commented on SOLR-6806:


IMO, we don't currently really have offline documentation. We could cut the 
javadoc without hurting newbies at all, and I think the offline tutorial could 
go as well (why try to maintain separate online and offline versions?). A 
kitchen-sink approach can also leave one with the impression of "bloated, 
complicated, confusing".

> Reduce the size of the main Solr binary download
> 
>
> Key: SOLR-6806
> URL: https://issues.apache.org/jira/browse/SOLR-6806
> Project: Solr
>  Issue Type: Task
>  Components: Build
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>
> There has been a lot of recent discussion about how large the Solr download 
> is, and how to reduce its size.  The last release (4.10.2) weighs in at 143MB 
> for the tar and 149MB for the zip.
> Most users do not need the full download.  They may never need contrib 
> features, or they may only need one or two, with DIH being the most likely 
> choice.  They could likely get by with a download that's less than 40 MB.
> Our primary competition has a 29MB zip download for the release that's 
> current right now, and not too long ago, that was about 20MB.  I didn't look 
> very deep, but any additional features that might be available for download 
> were not immediately apparent on their website.  I'm sure they exist, but I 
> would guess that most users never need those features, so most users never 
> even see them.
> Solr, by contrast, has everything included ... a "kitchen sink" approach. 
> Once you get past the long download time and fire up the example, you're 
> presented with configs that include features you're likely to never use.
> Although this offers maximum flexibility, I think it also serves to cause 
> confusion in a new user.
> A much better option would be to create a core download that includes only a 
> minimum set of features, probably just the war, the example servlet 
> container, and an example config that only uses the functionality present in 
> the war.  We can create additional downloads that offer additional 
> functionality and configs ... DIH would be a very small addon that would 
> likely be downloaded frequently.
> SOLR-5103 describes a plugin infrastructure which would make it very easy to 
> offer a small core download and then let the user download additional 
> functionality using scripts or the UI.






[jira] [Commented] (SOLR-4792) stop shipping a war in 5.0

2014-12-01 Thread Jayson Minard (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230795#comment-14230795
 ] 

Jayson Minard commented on SOLR-4792:
-

OK, so my final understanding: the WAR is gone from Maven and gone from dist, 
but it will remain in server/web-apps.

Correct?







[jira] [Issue Comment Deleted] (SOLR-4792) stop shipping a war in 5.0

2014-12-01 Thread Jayson Minard (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayson Minard updated SOLR-4792:

Comment: was deleted

(was: Last note, because I'm probably boring people...

Keeping it in Maven does not put it into the distribution, but it means that 
Solr doesn't have to be rebuilt just to create a WAR. If it is not in the 
.tar.gz and not publicized, but still available in Maven, then others can link 
to it, document it, include it in other projects as a dependency, etc. So if it 
is in server/webapps but not in /dist, then why not Maven?)







Re: solr client sdk's/libraries for native platforms

2014-12-01 Thread david.w.smi...@gmail.com
I like the “last updated …” (rounded to the month) idea.  It may be difficult 
to maintain a separate “last checked” distinction, and it creates somewhat more 
of a burden on maintaining the list.  I think it’s useful to list old projects, 
maybe separately and marked as old.  That makes the page a better comprehensive 
resource.

Thanks for volunteering Alex!

~ David Smiley
Freelance Apache Lucene/Solr Search Consultant/Developer
http://www.linkedin.com/in/davidwsmiley

On Mon, Dec 1, 2014 at 7:35 PM, Alexandre Rafalovitch 
wrote:

> What would be the reasonable cutoff for the client library last
> update? Say if it was not updated in 2 years - should it be included
> in the list? In 3? Included with a warning?
>
> Or do we list them all and let the user sort it out? Or put a
> last-checked date on the wiki and mention rough last update against
> each library?
>
> Regards,
>Alex.
> Personal: http://www.outerthoughts.com/ and @arafalov
> Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
> Solr popularizers community: https://www.linkedin.com/groups?gid=6713853
>
>
> On 1 December 2014 at 11:03, Eric Pugh 
> wrote:
> > I think in the vein of a “do-it-tocracy”, getting the Wiki updated is a
> perfectly good first step, and then if there is a better approach,
> hopefully that occurs.… ;-)
> >
> >
> >
> >> On Dec 1, 2014, at 10:51 AM, Alexandre Rafalovitch 
> wrote:
> >>
> >> On 1 December 2014 at 10:02, david.w.smi...@gmail.com
> >>  wrote:
> >>> I meant to reply earlier...
> >>>
> >>> On Mon, Nov 24, 2014 at 11:37 AM, Alexandre Rafalovitch <
> arafa...@gmail.com>
> >>> wrote:
> 
>  They are super-stale
> >>>
> >>>
> >>> Yup but it’s a wiki so feel free to freshen it up.  I’ll be doing that
> in a
> >>> bit.  It may also be helpful if these particular pages got more
> >>> prominence/visibility by being linked from the ref guide and/or the
> website.
> >>
> >> On the TODO list. If you are planning to update the client list, maybe
> >> we should coordinate, so we don't step on each other's toes. I am
> >> planning to do more than a minor tweak.
> >>
>  and there is no easy mechanism for people to
>  announce their additions. I am not even sure the announcements are
>  welcome on the user mailing list.
> >>>
> >>>
> >>> IMO the mailing list is an excellent place to announce new Solr
> integrations
> >>> in the ecosystem out there.  People announce various things on the
> list from
> >>> time to time.
> >> I haven't even announced solr-start.com on the list, wasn't sure
> >> whether it's appropriate. So, maybe it's ok, but I suspect that's not
> >> visible.
> >>
>  It comes down to the funnel/workflow. At the moment, the workflow
>  makes it _hard_ to maintain those pages. CMM level 1 kind of hard.
> >>> Can you recommend a fix or alternative?
> >>
> >> I thought that's what my previous emails were about?!? Setup a
> >> 'client-maintainer' mailing list seeded with SolrJ people, update the
> >> Wiki, make it more prominent. Organize a TodoMVC equivalent for Solr
> >> clients (with prizes?). Ensure it is a topic (with mentor) for
> >> Google's Summer of Code. Have somebody from core Solr to keep at least
> >> one eye on the client communities' mailing lists.
> >>
> >> I started doing that as an individual, but the traction was not there.
> >> It needs at least a couple of people to push in the same direction.
> >>
> >> Regards,
> >>   Alex.
> >>
> >>
> >
> > -
> > Eric Pugh | Principal | OpenSource Connections, LLC | 434.466.1467 |
> http://www.opensourceconnections.com | My Free/Busy
> > Co-Author: Apache Solr 3 Enterprise Search Server


[jira] [Commented] (SOLR-6806) Reduce the size of the main Solr binary download

2014-12-01 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230784#comment-14230784
 ] 

Hoss Man commented on SOLR-6806:


My broad-strokes opinion on being concerned with release sizes is simple: 

* Folks who care a lot about the number of bytes they have to download should 
be encouraged to use the source releases and compile themselves
** as much as possible, we should make it dead simple to "build" Solr from 
source for these people with poor net connections who are concerned about 
saving bytes.
* the binary releases should strive to be as dead simple to use as possible, 
since their target user is a novice who doesn't (yet) know/understand what they 
need/want.
** if that means they are kind of big because they include all contribs, that 
is (in my opinion) more new-user-friendly than having users discover, after 
downloading & installing Solr, that they have to go download and install 27 
other micro-plugins in order to get it to do what they want.

as Doug once wisely pointed out...

bq. The reason is that you don't optimize the out-of-box experience for 
advanced users: it's okay to frustrate them a bit, they're going to find what 
they need in the download.  The more important thing is not to confuse newbies. 
 A single download with documentation and examples is what newbies need.  While 
you might be somewhat annoyed by the extra baggage, how much harder does it 
really make your life?







[jira] [Resolved] (LUCENE-5961) FunctionValues.exist(int) isn't returning false in cases where it should for many "math" based value sources

2014-12-01 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-5961.
-
Resolution: Fixed

remove 4.10.3, too risky

> FunctionValues.exist(int) isn't returning false in cases where it should for 
> many "math" based value sources
> 
>
> Key: LUCENE-5961
> URL: https://issues.apache.org/jira/browse/LUCENE-5961
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-5961.patch, LUCENE-5961.patch, LUCENE-5961.patch
>
>
> The FunctionValues class contains an exist(int doc) method with a default 
> implementation that returns true - field based DocValues override this method 
> as appropriate, but most of the "function" based subclasses in the code 
> (typically anonymous subclasses of "FloatDocValues") don't override this 
> method when wrapping other ValueSources.
> So for example: the FunctionValues returned by 
> ProductFloatFunction.getValues() will say that a value exists for any doc, 
> even if that ProductFloatFunction wraps two FloatFieldSources that don't 
> exist for any docs






[jira] [Updated] (LUCENE-5961) FunctionValues.exist(int) isn't returning false in cases where it should for many "math" based value sources

2014-12-01 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-5961:

Fix Version/s: (was: 4.10.3)







[jira] [Commented] (LUCENE-5961) FunctionValues.exist(int) isn't returning false in cases where it should for many "math" based value sources

2014-12-01 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230773#comment-14230773
 ] 

Hoss Man commented on LUCENE-5961:
--

I'm -0 to backporting this to 4.10.x...

I'm not convinced the benefits of the "fixed" behavior outweigh the risk that 
this will cause problems for existing users who have code that depends on the 
current behavior, and will expect 4.10.3 to be a drop-in replacement w/o 
needing to modify any of their Lucene client code or Solr queries/configs.

I'd rather let this fix wait for 5.0 (or a 4.11 if there was going to be one), 
when affected users are more likely to pay attention to MIGRATE.txt and the 
Solr upgrade instructions and take the time to fix their code/configs/queries 
if they really want the existing broken behavior...

https://svn.apache.org/viewvc/lucene/dev/trunk/lucene/MIGRATE.txt?r1=1632414&r2=1632413&pathrev=1632414
https://svn.apache.org/viewvc/lucene/dev/trunk/solr/CHANGES.txt?r1=1632414&r2=1632413&pathrev=1632414

> FunctionValues.exist(int) isn't returning false in cases where it should for 
> many "math" based value sources
> 
>
> Key: LUCENE-5961
> URL: https://issues.apache.org/jira/browse/LUCENE-5961
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: 4.10.3, 5.0, Trunk
>
> Attachments: LUCENE-5961.patch, LUCENE-5961.patch, LUCENE-5961.patch
>
>
> The FunctionValues class contains an exist(int doc) method with a default 
> implementation that returns true - field based DocValues override this method 
> as appropriate, but most of the "function" based subclasses in the code 
> (typically anonymous subclasses of "FloatDocValues") don't override this 
> method when wrapping other ValueSources.
> So for example: the FunctionValues returned by 
> ProductFloatFunction.getValues() will say that a value exists for any doc, 
> even if that ProductFloatFunction wraps two FloatFieldSources that don't 
> exist for any docs






Re: solr client sdk's/libraries for native platforms

2014-12-01 Thread Alexandre Rafalovitch
What would be a reasonable cutoff for a client library's last
update? Say, if it was not updated in 2 years, should it be included
in the list? In 3? Included with a warning?

Or do we list them all and let the user sort it out? Or put a
last-checked date on the wiki and mention rough last update against
each library?

Regards,
   Alex.
Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
Solr popularizers community: https://www.linkedin.com/groups?gid=6713853


On 1 December 2014 at 11:03, Eric Pugh  wrote:
> I think in the vein of a “do-it-tocracy”, getting the Wiki updated is a 
> perfectly good first step, and then if there is a better approach, hopefully 
> that occurs… ;-)
>
>
>
>> On Dec 1, 2014, at 10:51 AM, Alexandre Rafalovitch  
>> wrote:
>>
>> On 1 December 2014 at 10:02, david.w.smi...@gmail.com
>>  wrote:
>>> I meant to reply earlier...
>>>
>>> On Mon, Nov 24, 2014 at 11:37 AM, Alexandre Rafalovitch 
>>> wrote:

 They are super-stale
>>>
>>>
>>> Yup but it’s a wiki so feel free to freshen it up.  I’ll be doing that in a
>>> bit.  It may also be helpful if these particular pages got more
>>> prominence/visibility by being linked from the ref guide and/or the website.
>>
>> On the TODO list. If you are planning to update the client list, maybe
>> we should coordinate, so we don't step on each other's toes. I am
>> planning to do more than a minor tweak.
>>
 and there is no easy mechanism for people to
 announce their additions. I am not even sure the announcements are
 welcome on the user mailing list.
>>>
>>>
>>> IMO the mailing list is an excellent place to announce new Solr integrations
>>> in the ecosystem out there.  People announce various things on the list from
>>> time to time.
>> I haven't even announced solr-start.com on the list, wasn't sure
>> whether it's appropriate. So, maybe it's ok, but I suspect that's not
>> visible.
>>
 It comes down to the funnel/workflow. At the moment, the workflow
 makes it _hard_ to maintain those pages. CMM level 1 kind of hard.
>>> Can you recommend a fix or alternative?
>>
>> I thought that's what my previous emails were about?!? Set up a
>> 'client-maintainer' mailing list seeded with SolrJ people, update the
>> Wiki, make it more prominent. Organize a TodoMVC equivalent for Solr
>> clients (with prizes?). Ensure it is a topic (with mentor) for
>> Google's Summer of Code. Have somebody from core Solr keep at least
>> one eye on the client communities' mailing lists.
>>
>> I started doing that as an individual, but the traction was not there.
>> It needs at least a couple of people to push in the same direction.
>>
>> Regards,
>>   Alex.
>>
>>
>
> -
> Eric Pugh | Principal | OpenSource Connections, LLC | 434.466.1467 | 
> http://www.opensourceconnections.com | My Free/Busy
> Co-Author: Apache Solr 3 Enterprise Search Server
> This e-mail and all contents, including attachments, is considered to be 
> Company Confidential unless explicitly stated otherwise, regardless of 
> whether attachments are marked as such.
>




Re: svn commit: r1641902 - in /lucene/dev/trunk/lucene: core/src/test/org/apache/lucene/index/TestIndexWriter.java test-framework/src/java/org/apache/lucene/mockfile/FilterFileSystem.java test-framewo

2014-12-01 Thread Robert Muir
-1.

Our test framework doesn't need to support this. The problem here is
the test; it's no good and doesn't work on Windows. An assume is the
correct answer for a shitty test.

On Mon, Dec 1, 2014 at 5:46 PM, Chris Hostetter
 wrote:
>
> :  assumeFalse("this test can't run on Windows", Constants.WINDOWS);
> :
> :  MockDirectoryWrapper dir = newMockDirectory();
> : +if (TestUtil.isWindowsFS(dir)) {
> : +  dir.close();
> : +  assumeFalse("this test can't run on Windows", true);
> : +}
>
> this specific assume msg seems like a bad idea.
>
> ie: a new dev, who doesn't know about the FS mocking behavior of
> the test cases, who tries to run Lucene tests on a Mac and sees a
> test skipped with the message "this test can't run on Windows", is going
> to be confused as hell.
>
> I also have to wonder: rather than just a straight assumeFalse, wouldn't
> it be better in this case to just unwrap the mock "windowsfs" and just
> explicitly use the "real" fs for this particular test? (in the interest of
> maximizing test coverage)
>
>
> -Hoss
> http://www.lucidworks.com/
>
>




[jira] [Updated] (LUCENE-5914) More options for stored fields compression

2014-12-01 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-5914:
-
Attachment: LUCENE-5914.patch

Here is a new patch that iterates on Robert's:
 - improved compression for numerics:
   - floats and doubles representing small integers take 1 byte
   - other positive floats and doubles take 4 / 8 bytes
   - other (negative) floats and doubles take 5 / 9 bytes
   - doubles that are actually cast floats take 5 bytes
   - longs are compressed if they represent a timestamp (2 bits encode whether 
the number is a multiple of a second, hour, or day, or is uncompressed)
 - cleaned up the checkFooter calls in the reader
 - slightly better encoding of the offsets with the BEST_SPEED option by using 
monotonic encoding: this allows just slurping a sequence of bytes and then 
decoding a single value, instead of having to decode lengths and sum them up in 
order to get offsets (the BEST_COMPRESSION option still does this, however)
 - fixed some javadoc errors
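The timestamp trick for longs can be sketched as follows. This is a hedged sketch of the idea as described, not the actual patch code; the exact header-bit layout and on-disk encoding in the patch may differ.

```java
// Hedged sketch of the timestamp compression described above, not the
// actual patch: two header bits record whether the long is a multiple of
// a day, hour, or second (storing the much smaller quotient), or is left
// uncompressed when it is not timestamp-like.
final class TimestampSketch {
  static final long SECOND = 1000L;
  static final long HOUR = 60 * 60 * SECOND;
  static final long DAY = 24 * HOUR;

  /** Returns {header in 0..3, value to store}. */
  static long[] encode(long v) {
    if (v % DAY == 0)    return new long[] {3, v / DAY};
    if (v % HOUR == 0)   return new long[] {2, v / HOUR};
    if (v % SECOND == 0) return new long[] {1, v / SECOND};
    return new long[] {0, v}; // not timestamp-like: store as-is
  }

  static long decode(long header, long stored) {
    switch ((int) header) {
      case 3:  return stored * DAY;
      case 2:  return stored * HOUR;
      case 1:  return stored * SECOND;
      default: return stored;
    }
  }
}
```

A millisecond timestamp that falls exactly on a day boundary collapses to a tiny quotient (days since epoch), which then compresses far better than the raw long.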

> More options for stored fields compression
> --
>
> Key: LUCENE-5914
> URL: https://issues.apache.org/jira/browse/LUCENE-5914
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
> Fix For: 5.0
>
> Attachments: LUCENE-5914.patch, LUCENE-5914.patch, LUCENE-5914.patch, 
> LUCENE-5914.patch, LUCENE-5914.patch
>
>
> Since we added codec-level compression in Lucene 4.1 I think I got about the 
> same amount of users complaining that compression was too aggressive and that 
> compression was too light.
> I think it is due to the fact that we have users that are doing very 
> different things with Lucene. For example if you have a small index that fits 
> in the filesystem cache (or is close to), then you might never pay for actual 
> disk seeks and in such a case the fact that the current stored fields format 
> needs to over-decompress data can sensibly slow search down on cheap queries.
> On the other hand, it is more and more common to use Lucene for things like 
> log analytics, and in that case you have huge amounts of data for which you 
> don't care much about stored fields performance. However it is very 
> frustrating to notice that the data that you store takes several times less 
> space when you gzip it compared to your index although Lucene claims to 
> compress stored fields.
> For that reason, I think it would be nice to have some kind of options that 
> would allow to trade speed for compression in the default codec.






[jira] [Commented] (SOLR-6741) IPv6 Field Type

2014-12-01 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230695#comment-14230695
 ] 

Hoss Man commented on SOLR-6741:


bq. It would be nice if Solr had a field type which could be used to index IPv6 
data and supported efficient range queries. 

Better still would be a field type that understood how to parse & index both 
IPv4 and IPv6 and could likewise handle range & "prefix" (ie: 
{{ip:169.229.136.*}}) queries of both.

But I think doing the IPv6 part (correctly) is kind of blocked by LUCENE-5596 
and/or LUCENE-5879.

We could do something IPv4-specific in the meantime by wrapping TrieLongField 
-- but personally I'd rather have one single well-done "IPField" class that 
deals with bytes under the hood and handles IPv4 string parsing as a slightly 
special case, storing them internally as IPv4-mapped IPv6 addrs:
https://en.wikipedia.org/wiki/IPv6#IPv4-mapped_IPv6_addresses
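The IPv4-mapped idea can be sketched like this: normalize every address to 16 bytes, embedding IPv4 as ::ffff:a.b.c.d, so one byte-oriented field type covers both families with a single comparable representation. The class and method names here are illustrative, not an actual Solr field type; only java.net.InetAddress is real API.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Sketch of storing IPv4 addresses as IPv4-mapped IPv6 bytes
// (::ffff:a.b.c.d). Illustrative stand-in, not a real Solr field type.
final class IpBytesSketch {
  /** Normalize any address literal to 16 bytes, mapping IPv4 into ::ffff:0:0/96. */
  static byte[] toIpv6Bytes(String addr) throws UnknownHostException {
    byte[] raw = InetAddress.getByName(addr).getAddress();
    if (raw.length == 16) {
      return raw; // already IPv6
    }
    byte[] mapped = new byte[16];            // bytes 0-9 stay zero
    mapped[10] = (byte) 0xff;                // the ::ffff: prefix
    mapped[11] = (byte) 0xff;
    System.arraycopy(raw, 0, mapped, 12, 4); // IPv4 octets in the low 32 bits
    return mapped;
  }
}
```

Because both families end up in one fixed-width, unsigned-comparable byte form, range and prefix queries could be implemented once over the 16-byte representation.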


> IPv6 Field Type
> ---
>
> Key: SOLR-6741
> URL: https://issues.apache.org/jira/browse/SOLR-6741
> Project: Solr
>  Issue Type: Improvement
>Reporter: Lloyd Ramey
>
> It would be nice if Solr had a field type which could be used to index IPv6 
> data and supported efficient range queries. 






[jira] [Commented] (SOLR-6767) Improve user experience when starting Solr in standalone mode using scripts

2014-12-01 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230647#comment-14230647
 ] 

Timothy Potter commented on SOLR-6767:
--

I committed a fix that affects this. Now, when you do:

{code}
bin/solr -e techproducts
{code}

The script does the following when creating the techproducts core:
{code}
mkdir -p server/solr/techproducts
cp -r server/solr/configsets/sample_techproducts_configs/conf 
server/solr/techproducts/conf
{code}

This leaves the {{configsets/sample_techproducts_configs}} untouched.

I believe adding some UI support for configsets is a must-have for the 5.0 
release. Ideally, the user will be able to go to the Admin UI and create a new 
core by only providing a name and selecting a configset from a drop-down.


> Improve user experience when starting Solr in standalone mode using scripts
> ---
>
> Key: SOLR-6767
> URL: https://issues.apache.org/jira/browse/SOLR-6767
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools, web gui
>Reporter: Anshum Gupta
>
> As of now, starting Solr in standalone mode using './solr start' starts up 
> Solr without any core. Trying to create a core from coreadmin UI doesn't work 
> and errors out (when using defaults).
> bq. Error CREATEing SolrCore 'new_core': Unable to create core \[new_core\] 
> Caused by: Can't find resource 'solrconfig.xml' in classpath or 
> '/lucene-solr/solr/server/solr/new_core/conf'
> The only way to get it to work would be to use the /server/ 
> directory to be the instance directory and then, the core creation would 
> create unwanted  directories in there. The only way to clean that up being, 
> {code}
> > rm -rf .. ; svn up # (if it's a repo check out). 
> {code}






Re: [JENKINS] Solr-Artifacts-5.x - Build # 674 - Failure

2014-12-01 Thread Michael McCandless
I committed a fix ... sorry for the noise!

Mike McCandless

http://blog.mikemccandless.com


On Mon, Dec 1, 2014 at 5:47 PM, Apache Jenkins Server
 wrote:
> Build: https://builds.apache.org/job/Solr-Artifacts-5.x/674/
>
> No tests ran.
>
> [...build log trimmed...]




[jira] [Commented] (LUCENE-6084) Add reasonable IndexOutput.toString

2014-12-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230642#comment-14230642
 ] 

ASF subversion and git services commented on LUCENE-6084:
-

Commit 1642785 from [~mikemccand] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1642785 ]

LUCENE-6084: what Hoss said

> Add reasonable IndexOutput.toString
> ---
>
> Key: LUCENE-6084
> URL: https://issues.apache.org/jira/browse/LUCENE-6084
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6084.patch, LUCENE-6084.patch
>
>
> In LUCENE-3539 we fixed IndexInput.toString to always include the 
> resourceDescription.
> I think we should do the same for IndexOutput?
> I don't think Lucene currently uses/relies on IndexOutput.toString, but e.g. 
> at least Elasticsearch does, and likely others, so I think it can only help 
> if you can see which path is open by this IndexOutput.






[jira] [Commented] (LUCENE-6084) Add reasonable IndexOutput.toString

2014-12-01 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230643#comment-14230643
 ] 

Michael McCandless commented on LUCENE-6084:


Thanks Hoss, I committed your version!

> Add reasonable IndexOutput.toString
> ---
>
> Key: LUCENE-6084
> URL: https://issues.apache.org/jira/browse/LUCENE-6084
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6084.patch, LUCENE-6084.patch
>
>
> In LUCENE-3539 we fixed IndexInput.toString to always include the 
> resourceDescription.
> I think we should do the same for IndexOutput?
> I don't think Lucene currently uses/relies on IndexOutput.toString, but e.g. 
> at least Elasticsearch does, and likely others, so I think it can only help 
> if you can see which path is open by this IndexOutput.






[jira] [Commented] (LUCENE-6084) Add reasonable IndexOutput.toString

2014-12-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230638#comment-14230638
 ] 

ASF subversion and git services commented on LUCENE-6084:
-

Commit 1642783 from [~mikemccand] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1642783 ]

LUCENE-6084: add reasonable IndexOutput.toString

> Add reasonable IndexOutput.toString
> ---
>
> Key: LUCENE-6084
> URL: https://issues.apache.org/jira/browse/LUCENE-6084
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6084.patch, LUCENE-6084.patch
>
>
> In LUCENE-3539 we fixed IndexInput.toString to always include the 
> resourceDescription.
> I think we should do the same for IndexOutput?
> I don't think Lucene currently uses/relies on IndexOutput.toString, but e.g. 
> at least Elasticsearch does, and likely others, so I think it can only help 
> if you can see which path is open by this IndexOutput.






[jira] [Resolved] (SOLR-6694) Auto detect JAVA_HOME in bin\start.cmd

2014-12-01 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter resolved SOLR-6694.
--
   Resolution: Fixed
Fix Version/s: 5.0
   4.10.3

> Auto detect JAVA_HOME in bin\start.cmd
> --
>
> Key: SOLR-6694
> URL: https://issues.apache.org/jira/browse/SOLR-6694
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 4.10.2
> Environment: Windows
>Reporter: Jan Høydahl
>Assignee: Timothy Potter
> Fix For: 4.10.3, 5.0
>
>
> The start script requires JAVA_HOME to be set.
> The Java installer on Windows does not set JAVA_HOME, so it is an obstacle 
> for new users who want to test. What the installer does is set some 
> registry values, and we can detect those to find a JAVA_HOME to use. It will 
> give a better user experience.






[jira] [Commented] (SOLR-6694) Auto detect JAVA_HOME in bin\start.cmd

2014-12-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230636#comment-14230636
 ] 

ASF subversion and git services commented on SOLR-6694:
---

Commit 1642781 from [~thelabdude] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1642781 ]

SOLR-6694: fix misplaced percent found when fixing this issue

> Auto detect JAVA_HOME in bin\start.cmd
> --
>
> Key: SOLR-6694
> URL: https://issues.apache.org/jira/browse/SOLR-6694
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 4.10.2
> Environment: Windows
>Reporter: Jan Høydahl
>Assignee: Timothy Potter
> Fix For: 4.10.3, 5.0
>
>
> The start script requires JAVA_HOME to be set.
> The Java installer on Windows does not set JAVA_HOME, so it is an obstacle 
> for new users who want to test. What the installer does is set some 
> registry values, and we can detect those to find a JAVA_HOME to use. It will 
> give a better user experience.






[jira] [Commented] (SOLR-6694) Auto detect JAVA_HOME in bin\start.cmd

2014-12-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230634#comment-14230634
 ] 

ASF subversion and git services commented on SOLR-6694:
---

Commit 1642780 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1642780 ]

SOLR-6694: fix misplaced percent found when fixing this issue

> Auto detect JAVA_HOME in bin\start.cmd
> --
>
> Key: SOLR-6694
> URL: https://issues.apache.org/jira/browse/SOLR-6694
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 4.10.2
> Environment: Windows
>Reporter: Jan Høydahl
>Assignee: Timothy Potter
>
> The start script requires JAVA_HOME to be set.
> The Java installer on Windows does not set JAVA_HOME, so it is an obstacle 
> for new users who want to test. What the installer does is set some 
> registry values, and we can detect those to find a JAVA_HOME to use. It will 
> give a better user experience.






[jira] [Commented] (LUCENE-6084) Add reasonable IndexOutput.toString

2014-12-01 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230633#comment-14230633
 ] 

Hoss Man commented on LUCENE-6084:
--

I was about to commit this (didn't see you on IRC) but I'll defer to you as the 
expert...

{noformat}
Index: 
lucene/backward-codecs/src/java/org/apache/lucene/codecs/lucene40/Lucene40CompoundWriter.java
===
--- 
lucene/backward-codecs/src/java/org/apache/lucene/codecs/lucene40/Lucene40CompoundWriter.java
   (revision 1642776)
+++ 
lucene/backward-codecs/src/java/org/apache/lucene/codecs/lucene40/Lucene40CompoundWriter.java
   (working copy)
@@ -306,7 +306,7 @@
 
 DirectCFSIndexOutput(IndexOutput delegate, FileEntry entry,
 boolean isSeparate) {
-  super();
+  
super("DirectCFSIndexOutput("+delegate.toString()+",entry=\""+entry.toString()+"\",isSeparate=\""+isSeparate+")");
   this.delegate = delegate;
   this.entry = entry;
   entry.offset = offset = delegate.getFilePointer();
{noformat}
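The shape of the fix can be modeled in miniature (illustrative stand-in classes, not the real org.apache.lucene.store.IndexOutput): the base class now requires a resource description at construction time and reports it from toString(), so a delegating subclass must build one and pass it up.

```java
// Simplified model of the change: the base output class takes a resource
// description in its constructor and returns it from toString(). A bare
// super() call no longer compiles -- which is exactly the build break
// quoted above. Names here are stand-ins, not Lucene's actual classes.
class ModelIndexOutput {
  private final String resourceDescription;

  ModelIndexOutput(String resourceDescription) {
    this.resourceDescription = resourceDescription;
  }

  @Override
  public String toString() {
    return resourceDescription;
  }
}

class ModelDirectCFSIndexOutput extends ModelIndexOutput {
  private final ModelIndexOutput delegate;

  ModelDirectCFSIndexOutput(ModelIndexOutput delegate, String entryName,
                            boolean isSeparate) {
    // Describe this output in terms of what it wraps, as in the patch above.
    super("DirectCFSIndexOutput(" + delegate + ",entry=\"" + entryName
        + "\",isSeparate=" + isSeparate + ")");
    this.delegate = delegate;
  }
}
```

The payoff is that any open output can now say which file it wraps when it appears in logs or exceptions.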

> Add reasonable IndexOutput.toString
> ---
>
> Key: LUCENE-6084
> URL: https://issues.apache.org/jira/browse/LUCENE-6084
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6084.patch, LUCENE-6084.patch
>
>
> In LUCENE-3539 we fixed IndexInput.toString to always include the 
> resourceDescription.
> I think we should do the same for IndexOutput?
> I don't think Lucene currently uses/relies on IndexOutput.toString, but e.g. 
> at least Elasticsearch does, and likely others, so I think it can only help 
> if you can see which path is open by this IndexOutput.






[JENKINS] Solr-Artifacts-5.x - Build # 674 - Failure

2014-12-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Solr-Artifacts-5.x/674/

No tests ran.

Build Log:
[...truncated 10515 lines...]
[javac] Compiling 61 source files to 
/usr/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-5.x/lucene/build/backward-codecs/classes/java
[javac] 
/usr/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-5.x/lucene/backward-codecs/src/java/org/apache/lucene/codecs/lucene40/Lucene40CompoundWriter.java:309:
 error: constructor IndexOutput in class IndexOutput cannot be applied to given 
types;
[javac]   super();
[javac]   ^
[javac]   required: String
[javac]   found: no arguments
[javac]   reason: actual and formal argument lists differ in length
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] 1 error

BUILD FAILED
/usr/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-5.x/solr/build.xml:448:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-5.x/solr/common-build.xml:407:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-5.x/lucene/module-build.xml:439:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-5.x/lucene/common-build.xml:514:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-5.x/lucene/common-build.xml:1875:
 Compile failed; see the compiler error output for details.

Total time: 1 minute 8 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Sending artifact delta relative to Solr-Artifacts-5.x #673
Archived 3 artifacts
Archive block size is 32768
Received 0 blocks and 35923651 bytes
Compression is 0.0%
Took 30 sec
Publishing Javadoc
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Commented] (SOLR-6694) Auto detect JAVA_HOME in bin\start.cmd

2014-12-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230622#comment-14230622
 ] 

ASF subversion and git services commented on SOLR-6694:
---

Commit 1642777 from [~thelabdude] in branch 'dev/branches/lucene_solr_4_10'
[ https://svn.apache.org/r1642777 ]

SOLR-6694: auto-detect JAVA_HOME using the Windows registry

> Auto detect JAVA_HOME in bin\start.cmd
> --
>
> Key: SOLR-6694
> URL: https://issues.apache.org/jira/browse/SOLR-6694
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 4.10.2
> Environment: Windows
>Reporter: Jan Høydahl
>Assignee: Timothy Potter
>
> The start script requires JAVA_HOME to be set.
> The Java installer on Windows does not set JAVA_HOME, so it is an obstacle 
> for new users who want to test. What the installer does is set some 
> registry values, and we can detect those to find a JAVA_HOME to use. It will 
> give a better user experience.






[jira] [Commented] (LUCENE-6084) Add reasonable IndexOutput.toString

2014-12-01 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230624#comment-14230624
 ] 

Michael McCandless commented on LUCENE-6084:


Argh, you're right!  Sorry :(  I'll fix.

> Add reasonable IndexOutput.toString
> ---
>
> Key: LUCENE-6084
> URL: https://issues.apache.org/jira/browse/LUCENE-6084
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6084.patch, LUCENE-6084.patch
>
>
> In LUCENE-3539 we fixed IndexInput.toString to always include the 
> resourceDescription.
> I think we should do the same for IndexOutput?
> I don't think Lucene currently uses/relies on IndexOutput.toString, but e.g. 
> at least Elasticsearch does, and likely others, so I think it can only help 
> if you can see which path is open by this IndexOutput.






[jira] [Commented] (SOLR-4509) Disable HttpClient stale check for performance and fewer spurious connection errors.

2014-12-01 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230619#comment-14230619
 ] 

Shawn Heisey commented on SOLR-4509:


We have SOLR-5604 to address the deprecated HttpClient methods that we 
currently use.  I already attempted it once ... it's not going to be a trivial 
change, and my knowledge of HttpClient is too limited to be useful.  Thankfully 
those methods will stick around until HC 5.0 comes out, so there's not 
currently a pressing need.

If the method of dealing with stale checks that is being developed here is 
superior to HC 4.4, then we might want to ask that [~olegk] consider it for HC 
itself.  I haven't looked at either solution, and I doubt that I would 
understand it even if I did look.


> Disable HttpClient stale check for performance and fewer spurious connection 
> errors.
> 
>
> Key: SOLR-4509
> URL: https://issues.apache.org/jira/browse/SOLR-4509
> Project: Solr
>  Issue Type: Improvement
>  Components: search
> Environment: 5 node SmartOS cluster (all nodes living in same global 
> zone - i.e. same physical machine)
>Reporter: Ryan Zezeski
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: IsStaleTime.java, SOLR-4509-4_4_0.patch, 
> SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, 
> SOLR-4509.patch, baremetal-stale-nostale-med-latency.dat, 
> baremetal-stale-nostale-med-latency.svg, 
> baremetal-stale-nostale-throughput.dat, baremetal-stale-nostale-throughput.svg
>
>
> By disabling the Apache HTTP Client stale check I've witnessed a 2-4x 
> increase in throughput and reduction of over 100ms.  This patch was made in 
> the context of a project I'm leading, called Yokozuna, which relies on 
> distributed search.
> Here's the patch on Yokozuna: https://github.com/rzezeski/yokozuna/pull/26
> Here's a write-up I did on my findings: 
> http://www.zinascii.com/2013/solr-distributed-search-and-the-stale-check.html
> I'm happy to answer any questions or make changes to the patch to make it 
> acceptable.
> ReviewBoard: https://reviews.apache.org/r/28393/






[jira] [Reopened] (LUCENE-6084) Add reasonable IndexOutput.toString

2014-12-01 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man reopened LUCENE-6084:
--

I think this broke compilation in backward-codecs on 5.x?

> Add reasonable IndexOutput.toString
> ---
>
> Key: LUCENE-6084
> URL: https://issues.apache.org/jira/browse/LUCENE-6084
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6084.patch, LUCENE-6084.patch
>
>
> In LUCENE-3539 we fixed IndexInput.toString to always include the 
> resourceDescription.
> I think we should do the same for IndexOutput?
> I don't think Lucene currently uses/relies on IndexOutput.toString, but e.g. 
> at least Elasticsearch does, and likely others, so I think it can only help 
> if you can see which path is open by this IndexOutput.






[jira] [Resolved] (SOLR-6795) distrib.singlePass returns score even though not asked for

2014-12-01 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-6795.
-
   Resolution: Fixed
Fix Version/s: Trunk
   5.0
   4.10.3

I've backported this to 4.10.3 as well. Thanks Per!

> distrib.singlePass returns score even though not asked for
> --
>
> Key: SOLR-6795
> URL: https://issues.apache.org/jira/browse/SOLR-6795
> Project: Solr
>  Issue Type: Bug
>  Components: multicore, search
>Affects Versions: 5.0
>Reporter: Per Steffensen
>Assignee: Shalin Shekhar Mangar
>  Labels: distributed_search, search
> Fix For: 4.10.3, 5.0, Trunk
>
> Attachments: SOLR-6795.patch, fix.patch, 
> test_that_reveals_the_problem.patch
>
>
> If I pass distrib.singlePass in a request and do not ask for score back (fl 
> does not include score) it will return the score back anyway.
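The expected behavior can be pinned down with a small sketch (the helper below is hypothetical, not Solr's actual code): whether `score` comes back should be driven purely by the `fl` field list, in the single-pass path as much as the two-pass one.

```java
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the expected behavior: strip score from a
// returned document unless the field list (fl) asked for it.
public class ScorePruner {
    public static Map<String, Object> prune(Map<String, Object> doc, Set<String> fl) {
        if (!fl.contains("score")) {
            doc.remove("score"); // the distrib.singlePass path must do this too
        }
        return doc;
    }
}
```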






[jira] [Commented] (SOLR-6795) distrib.singlePass returns score even though not asked for

2014-12-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230611#comment-14230611
 ] 

ASF subversion and git services commented on SOLR-6795:
---

Commit 1642776 from sha...@apache.org in branch 'dev/branches/lucene_solr_4_10'
[ https://svn.apache.org/r1642776 ]

SOLR-6795: distrib.singlePass returns score even though not asked for

> distrib.singlePass returns score even though not asked for
> --
>
> Key: SOLR-6795
> URL: https://issues.apache.org/jira/browse/SOLR-6795
> Project: Solr
>  Issue Type: Bug
>  Components: multicore, search
>Affects Versions: 5.0
>Reporter: Per Steffensen
>Assignee: Shalin Shekhar Mangar
>  Labels: distributed_search, search
> Attachments: SOLR-6795.patch, fix.patch, 
> test_that_reveals_the_problem.patch
>
>
> If I pass distrib.singlePass in a request and do not ask for score back (fl 
> does not include score) it will return the score back anyway.






[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 1925 - Still Failing!

2014-12-01 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1925/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseParallelGC (asserts: true)

1 tests failed.
FAILED:  org.apache.solr.handler.TestReplicationHandler.doTestDetails

Error Message:
java.io.IOException: MockDirectoryWrapper: file "replication.properties" is 
still open for writing

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: 
java.io.IOException: MockDirectoryWrapper: file "replication.properties" is 
still open for writing
at 
__randomizedtesting.SeedInfo.seed([8AC9644324F9D2CD:F0944736B2A1FB45]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:569)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.handler.TestReplicationHandler.getDetails(TestReplicationHandler.java:218)
at 
org.apache.solr.handler.TestReplicationHandler.doTestDetails(TestReplicationHandler.java:294)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apac

Re: svn commit: r1641902 - in /lucene/dev/trunk/lucene: core/src/test/org/apache/lucene/index/TestIndexWriter.java test-framework/src/java/org/apache/lucene/mockfile/FilterFileSystem.java test-framewo

2014-12-01 Thread Chris Hostetter

:  assumeFalse("this test can't run on Windows", Constants.WINDOWS);
:  
:  MockDirectoryWrapper dir = newMockDirectory();
: +if (TestUtil.isWindowsFS(dir)) {
: +  dir.close();
: +  assumeFalse("this test can't run on Windows", true);
: +}

this specific assume msg seems like a bad idea.

ie: a new dev, who doesn't know about the FS mocking behavior of 
the test cases, who tries to run lucene tests on a mac and sees a 
test skipped with the message "this test can't run on Windows", is going 
to be confused as hell.

I also have to wonder: rather than just a straight assumeFalse, wouldn't 
it be better in this case to just unwrap the mock "windowsfs" and 
explicitly use the "real" fs for this particular test? (in the interest of 
maximizing test coverage)


-Hoss
http://www.lucidworks.com/




[jira] [Commented] (SOLR-6795) distrib.singlePass returns score even though not asked for

2014-12-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230607#comment-14230607
 ] 

ASF subversion and git services commented on SOLR-6795:
---

Commit 1642775 from sha...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1642775 ]

SOLR-6795: distrib.singlePass returns score even though not asked for

> distrib.singlePass returns score even though not asked for
> --
>
> Key: SOLR-6795
> URL: https://issues.apache.org/jira/browse/SOLR-6795
> Project: Solr
>  Issue Type: Bug
>  Components: multicore, search
>Affects Versions: 5.0
>Reporter: Per Steffensen
>Assignee: Shalin Shekhar Mangar
>  Labels: distributed_search, search
> Attachments: SOLR-6795.patch, fix.patch, 
> test_that_reveals_the_problem.patch
>
>
> If I pass distrib.singlePass in a request and do not ask for score back (fl 
> does not include score) it will return the score back anyway.






[jira] [Commented] (SOLR-6795) distrib.singlePass returns score even though not asked for

2014-12-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230604#comment-14230604
 ] 

ASF subversion and git services commented on SOLR-6795:
---

Commit 1642774 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1642774 ]

SOLR-6795: distrib.singlePass returns score even though not asked for

> distrib.singlePass returns score even though not asked for
> --
>
> Key: SOLR-6795
> URL: https://issues.apache.org/jira/browse/SOLR-6795
> Project: Solr
>  Issue Type: Bug
>  Components: multicore, search
>Affects Versions: 5.0
>Reporter: Per Steffensen
>Assignee: Shalin Shekhar Mangar
>  Labels: distributed_search, search
> Attachments: SOLR-6795.patch, fix.patch, 
> test_that_reveals_the_problem.patch
>
>
> If I pass distrib.singlePass in a request and do not ask for score back (fl 
> does not include score) it will return the score back anyway.






[JENKINS] Lucene-Artifacts-5.x - Build # 698 - Failure

2014-12-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Artifacts-5.x/698/

No tests ran.

Build Log:
[...truncated 1308 lines...]
[javac] Compiling 61 source files to 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Artifacts-5.x/lucene/build/backward-codecs/classes/java
[javac] 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Artifacts-5.x/lucene/backward-codecs/src/java/org/apache/lucene/codecs/lucene40/Lucene40CompoundWriter.java:309:
 error: constructor IndexOutput in class IndexOutput cannot be applied to given 
types;
[javac]   super();
[javac]   ^
[javac]   required: String
[javac]   found: no arguments
[javac]   reason: actual and formal argument lists differ in length
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] 1 error

BUILD FAILED
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Artifacts-5.x/lucene/build.xml:451:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Artifacts-5.x/lucene/common-build.xml:2150:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Artifacts-5.x/lucene/module-build.xml:55:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Artifacts-5.x/lucene/common-build.xml:514:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Artifacts-5.x/lucene/common-build.xml:1875:
 Compile failed; see the compiler error output for details.

Total time: 40 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Publishing Javadoc
Email was triggered for: Failure
Sending email for trigger: Failure




[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2263 - Still Failing

2014-12-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2263/

All tests passed

Build Log:
[...truncated 3837 lines...]
[javac] Compiling 61 source files to 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/build/backward-codecs/classes/java
[javac] 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/backward-codecs/src/java/org/apache/lucene/codecs/lucene40/Lucene40CompoundWriter.java:309:
 error: constructor IndexOutput in class IndexOutput cannot be applied to given 
types;
[javac]   super();
[javac]   ^
[javac]   required: String
[javac]   found: no arguments
[javac]   reason: actual and formal argument lists differ in length
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] 1 error

BUILD FAILED
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/build.xml:529:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/build.xml:477:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/build.xml:61:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/extra-targets.xml:39:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/build.xml:456:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/common-build.xml:2150:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/module-build.xml:58:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/module-build.xml:55:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/common-build.xml:514:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/common-build.xml:1875:
 Compile failed; see the compiler error output for details.

Total time: 8 minutes 4 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Sending artifact delta relative to Lucene-Solr-Tests-5.x-Java7 #2259
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 464 bytes
Compression is 0.0%
Took 22 ms
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Commented] (SOLR-4792) stop shipping a war in 5.0

2014-12-01 Thread Jayson Minard (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230570#comment-14230570
 ] 

Jayson Minard commented on SOLR-4792:
-

Last note, because I'm probably boring people...

Keeping it in Maven does not put it into the distribution, but it does mean 
that Solr doesn't have to be rebuilt just to create a WAR.  If it's not in the 
.tar.gz and not publicized, but still available in Maven, then others can link 
to it, document it, include it in other projects as a dependency, etc.  So if 
it is in server/webapps but not in /dist, then why not Maven?

> stop shipping a war in 5.0
> --
>
> Key: SOLR-4792
> URL: https://issues.apache.org/jira/browse/SOLR-4792
> Project: Solr
>  Issue Type: Task
>  Components: Build
>Reporter: Robert Muir
>Assignee: Mark Miller
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-4792.patch
>
>
> see the vote on the developer list.
> This is the first step: if we stop shipping a war then we are free to do 
> anything we want. 






[jira] [Commented] (SOLR-4509) Disable HttpClient stale check for performance and fewer spurious connection errors.

2014-12-01 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230564#comment-14230564
 ] 

Gregory Chanan commented on SOLR-4509:
--

One other thing to consider, and I'm not sure if this applies to that release 
note, is that there are major API changes in HttpClient 4.4 that you need to 
adopt to get the new features.  In general everything is done via builders, so 
you can't change many configuration settings after creating the httpclient.  
There are APIs in HttpSolrServer and elsewhere that let you change the 
configuration, and those would have to be reworked.
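The builder style Gregory describes can be illustrated with a tiny pure-Java stand-in (the class names below are hypothetical, not HttpClient's real API): configuration is frozen at build() time, so any setting Solr wants to change afterwards has to be known up front.

```java
// Hedged stand-in for the HttpClient 4.4 builder style: once build()
// runs, the config object is immutable -- there is no setter to flip later.
public final class ClientConfig {
    private final boolean staleCheckEnabled;

    private ClientConfig(Builder b) {
        this.staleCheckEnabled = b.staleCheckEnabled;
    }

    public boolean isStaleCheckEnabled() {
        return staleCheckEnabled;
    }

    public static Builder custom() {
        return new Builder();
    }

    public static final class Builder {
        private boolean staleCheckEnabled = true; // historical default

        public Builder setStaleCheckEnabled(boolean enabled) {
            this.staleCheckEnabled = enabled;
            return this;
        }

        public ClientConfig build() {
            return new ClientConfig(this); // immutable from here on
        }
    }
}
```

This is why mutable-configuration APIs in HttpSolrServer would need reworking: there is nothing to mutate after construction.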

> Disable HttpClient stale check for performance and fewer spurious connection 
> errors.
> 
>
> Key: SOLR-4509
> URL: https://issues.apache.org/jira/browse/SOLR-4509
> Project: Solr
>  Issue Type: Improvement
>  Components: search
> Environment: 5 node SmartOS cluster (all nodes living in same global 
> zone - i.e. same physical machine)
>Reporter: Ryan Zezeski
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: IsStaleTime.java, SOLR-4509-4_4_0.patch, 
> SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, 
> SOLR-4509.patch, baremetal-stale-nostale-med-latency.dat, 
> baremetal-stale-nostale-med-latency.svg, 
> baremetal-stale-nostale-throughput.dat, baremetal-stale-nostale-throughput.svg
>
>
> By disabling the Apache HTTP Client stale check I've witnessed a 2-4x 
> increase in throughput and reduction of over 100ms.  This patch was made in 
> the context of a project I'm leading, called Yokozuna, which relies on 
> distributed search.
> Here's the patch on Yokozuna: https://github.com/rzezeski/yokozuna/pull/26
> Here's a write-up I did on my findings: 
> http://www.zinascii.com/2013/solr-distributed-search-and-the-stale-check.html
> I'm happy to answer any questions or make changes to the patch to make it 
> acceptable.
> ReviewBoard: https://reviews.apache.org/r/28393/






Re: [JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 690 - Failure

2014-12-01 Thread Michael McCandless
I'll dig.

Mike McCandless

http://blog.mikemccandless.com


On Mon, Dec 1, 2014 at 5:12 PM, Apache Jenkins Server
 wrote:
> Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/690/
>
> 2 tests failed.
> REGRESSION:  
> org.apache.lucene.index.TestDemoParallelLeafReader.testRandomMultipleSchemaGensSameField
>
> Error Message:
> Test abandoned because suite timeout was reached.
>
> Stack Trace:
> java.lang.Exception: Test abandoned because suite timeout was reached.
> at __randomizedtesting.SeedInfo.seed([1A30B36FF7E830A0]:0)
>
>
> FAILED:  
> junit.framework.TestSuite.org.apache.lucene.index.TestDemoParallelLeafReader
>
> Error Message:
> Suite timeout exceeded (>= 720 msec).
>
> Stack Trace:
> java.lang.Exception: Suite timeout exceeded (>= 720 msec).
> at __randomizedtesting.SeedInfo.seed([1A30B36FF7E830A0]:0)
>
>
>
>
> Build Log:
> [...truncated 1679 lines...]
>[junit4] Suite: org.apache.lucene.index.TestDemoParallelLeafReader
>[junit4]   2> ??? 01, 2014 10:21:13 ? 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$2 evaluate
>[junit4]   2> WARNING: Suite execution timed out: 
> org.apache.lucene.index.TestDemoParallelLeafReader
>[junit4]   2>  jstack at approximately timeout time 
>[junit4]   2> 
> "TEST-TestDemoParallelLeafReader.testRandomMultipleSchemaGensSameField-seed#[1A30B36FF7E830A0]"
>  ID=22 RUNNABLE
>[junit4]   2>at sun.nio.fs.UnixNativeDispatcher.open0(Native 
> Method)
>[junit4]   2>at 
> sun.nio.fs.UnixNativeDispatcher.open(UnixNativeDispatcher.java:71)
>[junit4]   2>at 
> sun.nio.fs.UnixChannelFactory.open(UnixChannelFactory.java:258)
>[junit4]   2>at 
> sun.nio.fs.UnixChannelFactory.newFileChannel(UnixChannelFactory.java:136)
>[junit4]   2>at 
> sun.nio.fs.UnixChannelFactory.newFileChannel(UnixChannelFactory.java:149)
>[junit4]   2>at 
> sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:175)
>[junit4]   2>at 
> org.apache.lucene.mockfile.FilterFileSystemProvider.newFileChannel(FilterFileSystemProvider.java:180)
>[junit4]   2>at 
> org.apache.lucene.mockfile.DisableFsyncFS.newFileChannel(DisableFsyncFS.java:46)
>[junit4]   2>at 
> org.apache.lucene.mockfile.FilterFileSystemProvider.newFileChannel(FilterFileSystemProvider.java:180)
>[junit4]   2>at 
> org.apache.lucene.mockfile.HandleTrackingFS.newFileChannel(HandleTrackingFS.java:151)
>[junit4]   2>at 
> org.apache.lucene.mockfile.FilterFileSystemProvider.newFileChannel(FilterFileSystemProvider.java:180)
>[junit4]   2>at 
> org.apache.lucene.mockfile.HandleTrackingFS.newFileChannel(HandleTrackingFS.java:151)
>[junit4]   2>at 
> java.nio.channels.FileChannel.open(FileChannel.java:287)
>[junit4]   2>at 
> java.nio.channels.FileChannel.open(FileChannel.java:334)
>[junit4]   2>at 
> org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:205)
>[junit4]   2>at 
> org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:80)
>[junit4]   2>at 
> org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:654)
>[junit4]   2>- locked 
> org.apache.lucene.store.MockDirectoryWrapper@3f04f430
>[junit4]   2>at 
> org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:80)
>[junit4]   2>at 
> org.apache.lucene.codecs.lucene50.Lucene50CompoundFormat.write(Lucene50CompoundFormat.java:91)
>[junit4]   2>at 
> org.apache.lucene.index.IndexWriter.createCompoundFile(IndexWriter.java:4504)
>[junit4]   2>at 
> org.apache.lucene.index.DocumentsWriterPerThread.sealFlushedSegment(DocumentsWriterPerThread.java:509)
>[junit4]   2>at 
> org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:476)
>[junit4]   2>at 
> org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:514)
>[junit4]   2>at 
> org.apache.lucene.index.DocumentsWriter.postUpdate(DocumentsWriter.java:379)
>[junit4]   2>at 
> org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:478)
>[junit4]   2>at 
> org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1398)
>[junit4]   2>at 
> org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1133)
>[junit4]   2>at 
> org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1118)
>[junit4]   2>at 
> org.apache.lucene.index.TestDemoParallelLeafReader$3.reindex(TestDemoParallelLeafReader.java:823)
>[junit4]   2>at 
> org.apache.lucene.index.TestDemoParallelLeafReader$ReindexingReader.getParallelLeafReader(TestDemoParallelLeafReader.java:394)
>[junit4]   2>- locked 
> org.apache.lucene.index.TestDemoParallelLeafReader$3@1b66d903
>[

[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 690 - Failure

2014-12-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/690/

2 tests failed.
REGRESSION:  
org.apache.lucene.index.TestDemoParallelLeafReader.testRandomMultipleSchemaGensSameField

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([1A30B36FF7E830A0]:0)


FAILED:  
junit.framework.TestSuite.org.apache.lucene.index.TestDemoParallelLeafReader

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([1A30B36FF7E830A0]:0)




Build Log:
[...truncated 1679 lines...]
   [junit4] Suite: org.apache.lucene.index.TestDemoParallelLeafReader
   [junit4]   2> ??? 01, 2014 10:21:13 ? 
com.carrotsearch.randomizedtesting.ThreadLeakControl$2 evaluate
   [junit4]   2> WARNING: Suite execution timed out: 
org.apache.lucene.index.TestDemoParallelLeafReader
   [junit4]   2>  jstack at approximately timeout time 
   [junit4]   2> 
"TEST-TestDemoParallelLeafReader.testRandomMultipleSchemaGensSameField-seed#[1A30B36FF7E830A0]"
 ID=22 RUNNABLE
   [junit4]   2>at sun.nio.fs.UnixNativeDispatcher.open0(Native Method)
   [junit4]   2>at 
sun.nio.fs.UnixNativeDispatcher.open(UnixNativeDispatcher.java:71)
   [junit4]   2>at 
sun.nio.fs.UnixChannelFactory.open(UnixChannelFactory.java:258)
   [junit4]   2>at 
sun.nio.fs.UnixChannelFactory.newFileChannel(UnixChannelFactory.java:136)
   [junit4]   2>at 
sun.nio.fs.UnixChannelFactory.newFileChannel(UnixChannelFactory.java:149)
   [junit4]   2>at 
sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:175)
   [junit4]   2>at 
org.apache.lucene.mockfile.FilterFileSystemProvider.newFileChannel(FilterFileSystemProvider.java:180)
   [junit4]   2>at 
org.apache.lucene.mockfile.DisableFsyncFS.newFileChannel(DisableFsyncFS.java:46)
   [junit4]   2>at 
org.apache.lucene.mockfile.FilterFileSystemProvider.newFileChannel(FilterFileSystemProvider.java:180)
   [junit4]   2>at 
org.apache.lucene.mockfile.HandleTrackingFS.newFileChannel(HandleTrackingFS.java:151)
   [junit4]   2>at 
org.apache.lucene.mockfile.FilterFileSystemProvider.newFileChannel(FilterFileSystemProvider.java:180)
   [junit4]   2>at 
org.apache.lucene.mockfile.HandleTrackingFS.newFileChannel(HandleTrackingFS.java:151)
   [junit4]   2>at 
java.nio.channels.FileChannel.open(FileChannel.java:287)
   [junit4]   2>at 
java.nio.channels.FileChannel.open(FileChannel.java:334)
   [junit4]   2>at 
org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:205)
   [junit4]   2>at 
org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:80)
   [junit4]   2>at 
org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:654)
   [junit4]   2>- locked 
org.apache.lucene.store.MockDirectoryWrapper@3f04f430
   [junit4]   2>at 
org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:80)
   [junit4]   2>at 
org.apache.lucene.codecs.lucene50.Lucene50CompoundFormat.write(Lucene50CompoundFormat.java:91)
   [junit4]   2>at 
org.apache.lucene.index.IndexWriter.createCompoundFile(IndexWriter.java:4504)
   [junit4]   2>at 
org.apache.lucene.index.DocumentsWriterPerThread.sealFlushedSegment(DocumentsWriterPerThread.java:509)
   [junit4]   2>at 
org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:476)
   [junit4]   2>at 
org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:514)
   [junit4]   2>at 
org.apache.lucene.index.DocumentsWriter.postUpdate(DocumentsWriter.java:379)
   [junit4]   2>at 
org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:478)
   [junit4]   2>at 
org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1398)
   [junit4]   2>at 
org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1133)
   [junit4]   2>at 
org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1118)
   [junit4]   2>at 
org.apache.lucene.index.TestDemoParallelLeafReader$3.reindex(TestDemoParallelLeafReader.java:823)
   [junit4]   2>at 
org.apache.lucene.index.TestDemoParallelLeafReader$ReindexingReader.getParallelLeafReader(TestDemoParallelLeafReader.java:394)
   [junit4]   2>- locked 
org.apache.lucene.index.TestDemoParallelLeafReader$3@1b66d903
   [junit4]   2>at 
org.apache.lucene.index.TestDemoParallelLeafReader$ReindexingReader$1.warm(TestDemoParallelLeafReader.java:133)
   [junit4]   2>at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4097)
   [junit4]   2>at 
org.apache.lucene.index.IndexWriter.merge(IndexWriter.ja

[jira] [Commented] (SOLR-6693) Start script for windows fails with 32bit JRE

2014-12-01 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230544#comment-14230544
 ] 

Timothy Potter commented on SOLR-6693:
--

I've added Java version parsing to the script to determine if specific JVM 
flags should be enabled, but I think we can re-use this approach to solve this 
issue (with a little refactoring). I can do the work, but I don't have a 32-bit 
windows environment to test with.
{code}
@REM Add Java version specific flags if needed
set JAVAVER=
set JAVA_MAJOR=
set JAVA_BUILD=0

"%JAVA%" -version 2>&1 | findstr /i "version" > javavers
set /p JAVAVEROUT=<javavers
{code}

> Start script for windows fails with 32bit JRE
> -
>
> Key: SOLR-6693
> URL: https://issues.apache.org/jira/browse/SOLR-6693
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 4.10.2
> Environment: WINDOWS 8.1
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: bin\solr.cmd
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6693.patch
>
>
> *Reproduce:*
> # Install JRE8 from www.java.com (typically {{C:\Program Files 
> (x86)\Java\jre1.8.0_25}})
> # Run the command {{bin\solr start -V}}
> The result is:
> {{\Java\jre1.8.0_25\bin\java was unexpected at this time.}}
> *Reason*
> This comes from bad quoting of the {{%SOLR%}} variable. I think it's because 
> of the parentheses that it freaks out. I think the same would apply for a 
> 32-bit JDK because of the (x86) in the path, but I have not tested.
> Tip: You can remove the line {{@ECHO OFF}} at the top to see exactly which is 
> the offending line
> *Solution*
> Quoting the lines where %JAVA% is printed, e.g. instead of
> {noformat}
>   @echo Using Java: %JAVA%
> {noformat}
> then use
> {noformat}
>   @echo "Using Java: %JAVA%"
> {noformat}
> This is needed in several places.
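The batch snippet in the comment above is truncated in the archive; as a hedged POSIX-shell analogue of the same idea (capture the `java -version` line, pull out the quoted version string, split out the pieces to decide on version-specific flags), assuming the usual `java version "1.8.0_25"` output format:

```shell
#!/bin/sh
# Canned first line of `java -version` output (format assumed; a real script
# would capture it from the JVM, as the batch snippet above does)
VERSION_LINE='java version "1.8.0_25"'

# Pull the quoted version string, then split out major/minor pieces
JAVAVER=$(printf '%s\n' "$VERSION_LINE" | awk -F '"' '{print $2}')
JAVA_MAJOR=$(printf '%s' "$JAVAVER" | cut -d. -f1)   # "1" on pre-9 JVMs
JAVA_MINOR=$(printf '%s' "$JAVAVER" | cut -d. -f2)   # "8"

echo "$JAVAVER major=$JAVA_MAJOR minor=$JAVA_MINOR"
# prints: 1.8.0_25 major=1 minor=8
```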



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6694) Auto detect JAVA_HOME in bin\start.cmd

2014-12-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230534#comment-14230534
 ] 

ASF subversion and git services commented on SOLR-6694:
---

Commit 1642768 from [~thelabdude] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1642768 ]

SOLR-6694: auto-detect JAVA_HOME using the Windows registry

> Auto detect JAVA_HOME in bin\start.cmd
> --
>
> Key: SOLR-6694
> URL: https://issues.apache.org/jira/browse/SOLR-6694
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 4.10.2
> Environment: Windows
>Reporter: Jan Høydahl
>Assignee: Timothy Potter
>
> The start script requires JAVA_HOME to be set.
> The Java installer on Windows does not set JAVA_HOME, so it is an obstacle 
> for new users who want to test. What the installer does is to set some 
> registry values, and we can detect those to find a JAVA_HOME to use. It will 
> give a better user experience.






[jira] [Commented] (SOLR-6694) Auto detect JAVA_HOME in bin\start.cmd

2014-12-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230533#comment-14230533
 ] 

ASF subversion and git services commented on SOLR-6694:
---

Commit 1642767 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1642767 ]

SOLR-6694: auto-detect JAVA_HOME using the Windows registry

> Auto detect JAVA_HOME in bin\start.cmd
> --
>
> Key: SOLR-6694
> URL: https://issues.apache.org/jira/browse/SOLR-6694
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 4.10.2
> Environment: Windows
>Reporter: Jan Høydahl
>Assignee: Timothy Potter
>
> The start script requires JAVA_HOME to be set.
> The Java installer on Windows does not set JAVA_HOME, so it is an obstacle 
> for new users who want to test. What the installer does is to set some 
> registry values, and we can detect those to find a JAVA_HOME to use. It will 
> give a better user experience.






[jira] [Assigned] (SOLR-6694) Auto detect JAVA_HOME in bin\start.cmd

2014-12-01 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter reassigned SOLR-6694:


Assignee: Timothy Potter

> Auto detect JAVA_HOME in bin\start.cmd
> --
>
> Key: SOLR-6694
> URL: https://issues.apache.org/jira/browse/SOLR-6694
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 4.10.2
> Environment: Windows
>Reporter: Jan Høydahl
>Assignee: Timothy Potter
>
> The start script requires JAVA_HOME to be set.
> The Java installer on Windows does not set JAVA_HOME, so it is an obstacle 
> for new users who want to test. What the installer does is to set some 
> registry values, and we can detect those to find a JAVA_HOME to use. It will 
> give a better user experience.






[jira] [Updated] (SOLR-6780) some param values are duplicated when they override defaults, or are combined with appends values, or are an invariant that overrides a request param

2014-12-01 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-6780:
---
Description: 

The {{DefaultSolrParams}} class, which is used as the basis for the 
implementation of {{defaults}}, {{appends}} and {{invariants}} params had a bug 
in its implementation of {{getParameterNamesIterator()}} that could result in 
the same param key being returned multiple times.

In many code paths of Solr, this bug had no effects -- but in other cases, it 
resulted in code which iterated over the list of all parameters to take action 
multiple times for the (valid) key=value pairs.

There were 4 main areas where this bug had unexpected & problematic behavior 
for end users:

{panel:title=main problem areas & impacts}
* ExtractingRequestHandler
** "literal.\*" params will be duplicated if overridden by 
defaults/invariants/appends - this will result in redundant literal field=value 
params being added to the document.
** impact: multiple values in literal fields when not expected/desired
* FacetComponent
** "facet.\*" params will be duplicated if overridden by 
defaults/invariants/appends - this can result in redundant computation and 
identical facet.field, facet.query, or facet.range blocks in the response
** impact: wasted computation & increased response size
* SpellCheckComponent
** when "custom params" (ie: "spellcheck.\[dictionary name\].=") are 
used in defaults, appends, or invariants, it can cause redundant 
X= params to be used.
** when "spellcheck.collateParam.=" type params are used in defaults, 
appends, or invariants, it can cause redundant = params to exist in the 
collation verification queries.
** impact: unclear to me at first glance, probably just wasted computation & 
increased response size
* AnalyticsComponent
** "olap.\*" params will be duplicated if overridden by 
defaults/invariants/appends - this can result in redundant computation
** impact: unclear to me at first glance, probably just wasted computation & 
increased response size
{panel}

Other less serious impacts were redundant values in "echoParams" as well as 
some small amounts of wasted computation in other code paths that iterated over 
the set of params (due to a slightly larger set of param values).



{panel:title=Original bug report: "Merging request parameters with defaults 
produce duplicate entries"}


When a parameter (e.g. echoParams) is specified and overrides the default on 
the handler, it actually generates two entries for that key with the same 
value. 

Most of the time it is just a confusion and not an issue, however, some 
components will do the work twice. For example faceting component as described 
in http://search-lucene.com/m/QTPaSlFUQ1/duplicate

It may also be connected to SOLR-6369

The cause seems to be the interplay between 
*DefaultSolrParams#getParameterNamesIterator()* which just returns param names 
in sequence and *SolrParams#toNamedList()* which uses the first (override then 
default) value for each key, without deduplication.

It's easily reproducible in trunk against schemaless example with 
bq. curl 
"http://localhost:8983/solr/schemaless/select?indent=true&echoParams=all";

I've also spot checked it and it seems to be reproducible back to Solr 4.1.

{panel}

  was:
When a parameter (e.g. echoParams) is specified and overrides the default on 
the handler, it actually generates two entries for that key with the same 
value. 

Most of the time it is just a confusion and not an issue, however, some 
components will do the work twice. For example faceting component as described 
in http://search-lucene.com/m/QTPaSlFUQ1/duplicate

It may also be connected to SOLR-6369

The cause seems to be the interplay between 
*DefaultSolrParams#getParameterNamesIterator()* which just returns param names 
in sequence and *SolrParams#toNamedList()* which uses the first (override then 
default) value for each key, without deduplication.

It's easily reproducible in trunk against schemaless example with 
bq. curl 
"http://localhost:8983/solr/schemaless/select?indent=true&echoParams=all";

I've also spot checked it and it seems to be reproducible back to Solr 4.1.


Summary: some param values are duplicated when they override defaults, 
or are combined with appends values, or are an invariant that overrides a 
request param  (was: Merging request parameters with defaults produce duplicate 
entries)


This is committed to trunk & 5x.

Backport to the 4.10.x branch was clean w/o any precommit problems -- still 
running tests.  I'm going to hold off on committing the 4.10.x backport for 24 
hours to give jenkins some time to hate me.

I've also updated the summary & description of the bug to be more focused on 
the user impacts, to aid people searching in the future.
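The interplay described above can be illustrated outside Solr with a small shell sketch (hypothetical key/value data, not Solr code): concatenating override and default key lists without deduplication, then resolving every name with override-wins, emits the overridden pair twice.

```shell
#!/bin/sh
# Hypothetical stand-ins for the two param layers
overrides="echoParams=explicit"
defaults="echoParams=all facet=true"

# Buggy name iteration: override names then default names, no dedup
names=$(for kv in $overrides $defaults; do printf '%s\n' "${kv%%=*}"; done)

# Resolving every name with "override wins" duplicates the overridden pair
for n in $names; do
  for kv in $overrides $defaults; do
    [ "${kv%%=*}" = "$n" ] && { printf '%s\n' "$kv"; break; }
  done
done
# prints:
#   echoParams=explicit
#   echoParams=explicit
#   facet=true

# The fix amounts to deduplicating the name list before resolving:
printf '%s\n' $names | awk '!seen[$0]++'
```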


> some param values are duplicated when they override defaults, or are combined 
> with appends value

[jira] [Commented] (SOLR-4792) stop shipping a war in 5.0

2014-12-01 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230528#comment-14230528
 ] 

Shawn Heisey commented on SOLR-4792:


My opinion right now:  As long as the example works and it's not supremely 
painful to adapt for production, we're probably good.

My production install currently uses bits pulled from an older 4.x example -- 
jetty and a war.  Jetty in my install has been upgraded to an 8.x release 
higher than what is currently in Solr, but otherwise it's virtually identical.  
I built my own init script for CentOS 6.

I'd like an easy upgrade path beyond the 4.9.1 that I'm currently using, but 
I'm not afraid of a little work.  Sticking close to the example is important so 
that there aren't too many unusual bits to explain when I need help.


> stop shipping a war in 5.0
> --
>
> Key: SOLR-4792
> URL: https://issues.apache.org/jira/browse/SOLR-4792
> Project: Solr
>  Issue Type: Task
>  Components: Build
>Reporter: Robert Muir
>Assignee: Mark Miller
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-4792.patch
>
>
> see the vote on the developer list.
> This is the first step: if we stop shipping a war then we are free to do 
> anything we want. 






[jira] [Updated] (SOLR-6795) distrib.singlePass returns score even though not asked for

2014-12-01 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-6795:

Attachment: SOLR-6795.patch

Thanks Per. I have moved your test to DistributedQueryComponentOptimizationTest 
which is where I've added the tests for this feature.

I'll commit once the test suite passes.

> distrib.singlePass returns score even though not asked for
> --
>
> Key: SOLR-6795
> URL: https://issues.apache.org/jira/browse/SOLR-6795
> Project: Solr
>  Issue Type: Bug
>  Components: multicore, search
>Affects Versions: 5.0
>Reporter: Per Steffensen
>Assignee: Shalin Shekhar Mangar
>  Labels: distributed_search, search
> Attachments: SOLR-6795.patch, fix.patch, 
> test_that_reveals_the_problem.patch
>
>
> If I pass distrib.singlePass in a request and do not ask for score back (fl 
> does not include score) it will return the score back anyway.






[jira] [Commented] (SOLR-4792) stop shipping a war in 5.0

2014-12-01 Thread Jayson Minard (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230497#comment-14230497
 ] 

Jayson Minard commented on SOLR-4792:
-

{quote}
We have already discussed and had a vote. This issue is the result of that. The 
vote and previous discussions are available in the archives.
{quote}

Well, the result of the vote did not end up in this issue.  Only the "break 
Solr deployment for all users" part seemed to have been recorded.  And then 
acted upon.  Great.  The archives are helpful for fun reading but don't fix the 
problem.

{quote}
 5 kind of came out of nowhere, and so this seemed the easiest transition path.
{quote}

easiest for whom?

{quote}
No, it won't be in maven - there will be no official WAR support.
{quote}

translation:  "make love, not WARs"

How about, remove the WAR when something worthwhile has replaced the 
functionality that people use in the app servers in which they host the WAR.

Even ElasticSearch provides Transport Wares so that people can use WAR 
deployment, because in some places, it is just the rule of law.  And you can 
convince the laws to change, when you have something sufficiently manageable as 
an alternative.  Here, that is lacking.  People trial this side by side with 
ES, and one will look better.  

I guess a follow-on list of activities is:  someone create a solr-war-packaging 
project outside the code base so that it is "not official" (I now have 
https://github.com/bremeld/solr-in-a-war so will get on it based on current 
trunk), and maybe someone could write the JIRA issues for making the Solr runner 
into a real project.  Since that doesn't exist yet, I'll continue Solr-undertow 
to help in the meantime.  But it would be nice if it was in the same direction as 
what was intended rather than duplicate effort.


> stop shipping a war in 5.0
> --
>
> Key: SOLR-4792
> URL: https://issues.apache.org/jira/browse/SOLR-4792
> Project: Solr
>  Issue Type: Task
>  Components: Build
>Reporter: Robert Muir
>Assignee: Mark Miller
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-4792.patch
>
>
> see the vote on the developer list.
> This is the first step: if we stop shipping a war then we are free to do 
> anything we want. 






[jira] [Commented] (LUCENE-6084) Add reasonable IndexOutput.toString

2014-12-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230490#comment-14230490
 ] 

ASF subversion and git services commented on LUCENE-6084:
-

Commit 1642762 from [~mikemccand] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1642762 ]

LUCENE-6084: add reasonable IndexOutput.toString

> Add reasonable IndexOutput.toString
> ---
>
> Key: LUCENE-6084
> URL: https://issues.apache.org/jira/browse/LUCENE-6084
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6084.patch, LUCENE-6084.patch
>
>
> In LUCENE-3539 we fixed IndexInput.toString to always include the 
> resourceDescription.
> I think we should do the same for IndexOutput?
> I don't think Lucene currently uses/relies on IndexOutput.toString, but e.g. 
> at least Elasticsearch does, and likely others, so I think it can only help 
> if you can see which path is open by this IndexOutput.






[jira] [Resolved] (LUCENE-6084) Add reasonable IndexOutput.toString

2014-12-01 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-6084.

Resolution: Fixed

> Add reasonable IndexOutput.toString
> ---
>
> Key: LUCENE-6084
> URL: https://issues.apache.org/jira/browse/LUCENE-6084
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6084.patch, LUCENE-6084.patch
>
>
> In LUCENE-3539 we fixed IndexInput.toString to always include the 
> resourceDescription.
> I think we should do the same for IndexOutput?
> I don't think Lucene currently uses/relies on IndexOutput.toString, but e.g. 
> at least Elasticsearch does, and likely others, so I think it can only help 
> if you can see which path is open by this IndexOutput.






[jira] [Commented] (SOLR-6554) Speed up overseer operations for collections with stateFormat > 1

2014-12-01 Thread Ramkumar Aiyengar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230488#comment-14230488
 ] 

Ramkumar Aiyengar commented on SOLR-6554:
-

Nice, that's a pretty neat speedup :-)

+1 to making stateFormat=2 the default. The idea of dropping support for 
stateFormat=1 makes me a bit nervous though -- it should be tested in the wild 
for a while without users consciously opting into it.  Maybe schedule it 
for removal in 6 (thereby also giving users one full release to either get 
downtime or recreate indices)?

> Speed up overseer operations for collections with stateFormat > 1
> -
>
> Key: SOLR-6554
> URL: https://issues.apache.org/jira/browse/SOLR-6554
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 5.0, Trunk
>Reporter: Shalin Shekhar Mangar
> Attachments: SOLR-6554-batching-refactor.patch, 
> SOLR-6554-batching-refactor.patch, SOLR-6554-batching-refactor.patch, 
> SOLR-6554-batching-refactor.patch, SOLR-6554.patch, SOLR-6554.patch, 
> SOLR-6554.patch, SOLR-6554.patch, SOLR-6554.patch, SOLR-6554.patch, 
> SOLR-6554.patch, SOLR-6554.patch
>
>
> Right now (after SOLR-5473 was committed), a node watches a collection only 
> if stateFormat=1 or if that node hosts at least one core belonging to that 
> collection.
> This means that a node which is the overseer operates on all collections but 
> watches only a few. So any read goes directly to zookeeper which slows down 
> overseer operations.
> Let's have the overseer node watch all collections always and never remove 
> those watches (except when the collection itself is deleted).






[jira] [Commented] (LUCENE-6084) Add reasonable IndexOutput.toString

2014-12-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230485#comment-14230485
 ] 

ASF subversion and git services commented on LUCENE-6084:
-

Commit 1642761 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1642761 ]

LUCENE-6084: add reasonable IndexOutput.toString

> Add reasonable IndexOutput.toString
> ---
>
> Key: LUCENE-6084
> URL: https://issues.apache.org/jira/browse/LUCENE-6084
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6084.patch, LUCENE-6084.patch
>
>
> In LUCENE-3539 we fixed IndexInput.toString to always include the 
> resourceDescription.
> I think we should do the same for IndexOutput?
> I don't think Lucene currently uses/relies on IndexOutput.toString, but e.g. 
> at least Elasticsearch does, and likely others, so I think it can only help 
> if you can see which path is open by this IndexOutput.






[jira] [Commented] (SOLR-6780) Merging request parameters with defaults produce duplicate entries

2014-12-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230476#comment-14230476
 ] 

ASF subversion and git services commented on SOLR-6780:
---

Commit 1642760 from hoss...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1642760 ]

SOLR-6780: Fixed a bug in how default/appends/invariants params were affecting 
the set of all keys found in the request parameters, resulting in some 
key=value param pairs being duplicated. (merge r1642740)

> Merging request parameters with defaults produce duplicate entries
> --
>
> Key: SOLR-6780
> URL: https://issues.apache.org/jira/browse/SOLR-6780
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.1, 5.0, Trunk
>Reporter: Alexandre Rafalovitch
>Assignee: Hoss Man
>  Labels: parameters
> Fix For: 4.10.3, 5.0, Trunk
>
> Attachments: SOLR-6780.patch
>
>
> When a parameter (e.g. echoParams) is specified and overrides the default on 
> the handler, it actually generates two entries for that key with the same 
> value. 
> Most of the time it is just a confusion and not an issue, however, some 
> components will do the work twice. For example faceting component as 
> described in http://search-lucene.com/m/QTPaSlFUQ1/duplicate
> It may also be connected to SOLR-6369
> The cause seems to be the interplay between 
> *DefaultSolrParams#getParameterNamesIterator()* which just returns param 
> names in sequence and *SolrParams#toNamedList()* which uses the first 
> (override then default) value for each key, without deduplication.
> It's easily reproducible in trunk against schemaless example with 
> bq. curl 
> "http://localhost:8983/solr/schemaless/select?indent=true&echoParams=all";
> I've also spot checked it and it seems to be reproducible back to Solr 4.1.






[jira] [Commented] (LUCENE-6084) Add reasonable IndexOutput.toString

2014-12-01 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230432#comment-14230432
 ] 

Robert Muir commented on LUCENE-6084:
-

+1, thank you!

> Add reasonable IndexOutput.toString
> ---
>
> Key: LUCENE-6084
> URL: https://issues.apache.org/jira/browse/LUCENE-6084
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6084.patch, LUCENE-6084.patch
>
>
> In LUCENE-3539 we fixed IndexInput.toString to always include the 
> resourceDescription.
> I think we should do the same for IndexOutput?
> I don't think Lucene currently uses/relies on IndexOutput.toString, but e.g. 
> at least Elasticsearch does, and likely others, so I think it can only help 
> if you can see which path is open by this IndexOutput.






[jira] [Resolved] (SOLR-6791) Solr start script failed when last 2 digits of solr port is less than 24 and run by non-root user

2014-12-01 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter resolved SOLR-6791.
--
   Resolution: Fixed
Fix Version/s: 5.0
   4.10.3

Fixed as part of the solution for SOLR-6726 (didn't make sense to separate 
these into different commits).

> Solr start script failed when last 2 digits of solr port is less than 24 and 
> run by non-root user
> -
>
> Key: SOLR-6791
> URL: https://issues.apache.org/jira/browse/SOLR-6791
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 4.10.2
>Reporter: Chaiyasit (Sit) Manovit
>Assignee: Timothy Potter
>Priority: Minor
> Fix For: 4.10.3, 5.0
>
>
> Due to following two lines of code, it would try to use port number less than 
> 1024 which are privileged ports
> {code}
> -Dcom.sun.management.jmxremote.port=10${SOLR_PORT: -2} \
> -Dcom.sun.management.jmxremote.rmi.port=10${SOLR_PORT: -2}"
> {code}
> Maybe the prefix should be changed to 20. (Too high a number may risk 
> colliding with default port for embedded ZooKeeper.)
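The failure mode is easy to see in bash, where `${SOLR_PORT: -2}` expands to the last two characters of the port, so the `10` prefix yields a privileged port whenever those two digits are below 24:

```shell
#!/bin/bash
SOLR_PORT=8912                 # last two digits (12) are below 24
JMX_PORT=10${SOLR_PORT: -2}    # same expansion the script used
echo "$JMX_PORT"               # prints 1012
if [ "$JMX_PORT" -lt 1024 ]; then
  echo "privileged port: bind fails for non-root users"
fi
```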






[jira] [Commented] (SOLR-6726) Specifying different ports with the new bin/solr script fails to start solr instances

2014-12-01 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230417#comment-14230417
 ] 

Timothy Potter commented on SOLR-6726:
--

So now the script uses 1$SOLR_PORT for the JMX RMI port (18983 if using the 
default 8983), but you can also just set the RMI port in solr.in.sh (.cmd). I'm 
sure you can still break things if you do -p 2022, as the stop port will be 
computed as 1022, which is not allowed unless Solr is launched by root. I also 
added some more help information about port assignments used by the script.
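A bash sketch of the port arithmetic described above (the 1000-offset stop-port derivation is assumed here from the "-p 2022 gives stop port 1022" example; check bin/solr for the actual computation):

```shell
#!/bin/bash
SOLR_PORT=8983
RMI_PORT=1$SOLR_PORT                 # "1" prefixed to the port -> 18983
STOP_PORT=$((SOLR_PORT - 1000))      # assumed derivation -> 7983
echo "$RMI_PORT $STOP_PORT"          # prints: 18983 7983

SOLR_PORT=2022                       # the edge case from the comment
echo "$((SOLR_PORT - 1000))"         # prints 1022: privileged unless root
```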

> Specifying different ports with the new bin/solr script fails to start solr 
> instances
> -
>
> Key: SOLR-6726
> URL: https://issues.apache.org/jira/browse/SOLR-6726
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Timothy Potter
>Priority: Minor
> Fix For: 4.10.3, 5.0
>
>
> As I recall, I tried to specify different ports when bringing up 4 instances 
> (7200, 7300, 7400) and the startup script failed. I'll confirm this and maybe 
> propose a fix if I can reproduce. Assigning it to me so I make sure it's 
> checked.
> I'm at Lucene Revolution this week, so if anyone wants to pick this up feel 
> free.






[jira] [Resolved] (SOLR-6726) Specifying different ports with the new bin/solr script fails to start solr instances

2014-12-01 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter resolved SOLR-6726.
--
   Resolution: Fixed
Fix Version/s: 5.0
   4.10.3

> Specifying different ports with the new bin/solr script fails to start solr 
> instances
> -
>
> Key: SOLR-6726
> URL: https://issues.apache.org/jira/browse/SOLR-6726
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Timothy Potter
>Priority: Minor
> Fix For: 4.10.3, 5.0
>
>
> As I recall, I tried to specify different ports when bringing up 4 instances 
> (7200, 7300, 7400) and the startup script failed. I'll confirm this and maybe 
> propose a fix if I can reproduce. Assigning it to me so I make sure it's 
> checked.
> I'm at Lucene Revolution this week, so if anyone wants to pick this up feel 
> free.






[jira] [Commented] (SOLR-6726) Specifying different ports with the new bin/solr script fails to start solr instances

2014-12-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230397#comment-14230397
 ] 

ASF subversion and git services commented on SOLR-6726:
---

Commit 1642749 from [~thelabdude] in branch 'dev/branches/lucene_solr_4_10'
[ https://svn.apache.org/r1642749 ]

SOLR-6726: better strategy for selecting the JMX RMI port based on SOLR_PORT in 
bin/solr

> Specifying different ports with the new bin/solr script fails to start solr 
> instances
> -
>
> Key: SOLR-6726
> URL: https://issues.apache.org/jira/browse/SOLR-6726
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Timothy Potter
>Priority: Minor
>
> As I recall, I tried to specify different ports when bringing up 4 instances 
> (7200, 7300, 7400) and the startup script failed. I'll confirm this and maybe 
> propose a fix if I can reproduce. Assigning it to me so I make sure it's 
> checked.
> I'm at Lucene Revolution this week, so if anyone wants to pick this up feel 
> free.






[jira] [Commented] (SOLR-6726) Specifying different ports with the new bin/solr script fails to start solr instances

2014-12-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230352#comment-14230352
 ] 

ASF subversion and git services commented on SOLR-6726:
---

Commit 1642747 from [~thelabdude] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1642747 ]

SOLR-6726: better strategy for selecting the JMX RMI port based on SOLR_PORT in 
bin/solr







[jira] [Commented] (SOLR-6726) Specifying different ports with the new bin/solr script fails to start solr instances

2014-12-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230330#comment-14230330
 ] 

ASF subversion and git services commented on SOLR-6726:
---

Commit 1642745 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1642745 ]

SOLR-6726: better strategy for selecting the JMX RMI port based on SOLR_PORT in 
bin/solr







[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2262 - Still Failing

2014-12-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2262/

2 tests failed.
REGRESSION:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([DAB84C63D6DB210B]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeySafeLeaderTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([DAB84C63D6DB210B]:0)




Build Log:
[...truncated 10456 lines...]
   [junit4] Suite: org.apache.solr.cloud.ChaosMonkeySafeLeaderTest
   [junit4]   2> Creating dataDir: 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/build/solr-core/test/J2/temp/solr.cloud.ChaosMonkeySafeLeaderTest-DAB84C63D6DB210B-001/init-core-data-001
   [junit4]   2> 2000301 T4360 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl 
(false) and clientAuth (true)
   [junit4]   2> 2000301 T4360 
oas.BaseDistributedSearchTestCase.initHostContext Setting hostContext system 
property: /
   [junit4]   2> 2000306 T4360 oas.SolrTestCaseJ4.setUp ###Starting 
testDistribSearch
   [junit4]   2> 2000307 T4360 oasc.ZkTestServer.run STARTING ZK TEST SERVER
   [junit4]   1> client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 2000308 T4361 oasc.ZkTestServer$ZKServerMain.runFromConfig 
Starting server
   [junit4]   2> 2000407 T4360 oasc.ZkTestServer.run start zk server on 
port:21278
   [junit4]   2> 2000408 T4360 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2> 2000409 T4360 oascc.ConnectionManager.waitForConnected Waiting 
for client to connect to ZooKeeper
   [junit4]   2> 2000412 T4368 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@490d1dff 
name:ZooKeeperConnection Watcher:127.0.0.1:21278 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 2000412 T4360 oascc.ConnectionManager.waitForConnected Client 
is connected to ZooKeeper
   [junit4]   2> 2000413 T4360 oascc.SolrZkClient.createZkACLProvider Using 
default ZkACLProvider
   [junit4]   2> 2000413 T4360 oascc.SolrZkClient.makePath makePath: /solr
   [junit4]   2> 2000416 T4360 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2> 2000416 T4360 oascc.ConnectionManager.waitForConnected Waiting 
for client to connect to ZooKeeper
   [junit4]   2> 2000417 T4371 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@562f94f2 
name:ZooKeeperConnection Watcher:127.0.0.1:21278/solr got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 2000418 T4360 oascc.ConnectionManager.waitForConnected Client 
is connected to ZooKeeper
   [junit4]   2> 2000418 T4360 oascc.SolrZkClient.createZkACLProvider Using 
default ZkACLProvider
   [junit4]   2> 2000418 T4360 oascc.SolrZkClient.makePath makePath: 
/collections/collection1
   [junit4]   2> 2000419 T4360 oascc.SolrZkClient.makePath makePath: 
/collections/collection1/shards
   [junit4]   2> 2000420 T4360 oascc.SolrZkClient.makePath makePath: 
/collections/control_collection
   [junit4]   2> 2000421 T4360 oascc.SolrZkClient.makePath makePath: 
/collections/control_collection/shards
   [junit4]   2> 2000422 T4360 oasc.AbstractZkTestCase.putConfig put 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml
 to /configs/conf1/solrconfig.xml
   [junit4]   2> 2000423 T4360 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/solrconfig.xml
   [junit4]   2> 2000425 T4360 oasc.AbstractZkTestCase.putConfig put 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/test-files/solr/collection1/conf/schema15.xml
 to /configs/conf1/schema.xml
   [junit4]   2> 2000425 T4360 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/schema.xml
   [junit4]   2> 2000427 T4360 oasc.AbstractZkTestCase.putConfig put 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/test-files/solr/collection1/conf/solrconfig.snippet.randomindexconfig.xml
 to /configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2> 2000427 T4360 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2> 2000428 T4360 oasc.AbstractZkTestCase.putConfig put 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/test-files/solr/collection1/conf/stopwords.txt
 to /configs/conf1/stopwords.txt
   [junit4]   2> 2000429 T4360 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/stopwords.txt
   [junit4]   2> 2000430 T4360 oasc.AbstractZkTestCase.p

[jira] [Commented] (LUCENE-5987) Make indexwriter a mere mortal when exceptions strike

2014-12-01 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230302#comment-14230302
 ] 

Michael McCandless commented on LUCENE-5987:


I'll try to tackle this (make abort a tragedy): it's a ridiculous situation today that IW can throw an exception which has silently deleted tons of previously indexed documents, so that when you close/commit the IW they are gone.

> Make indexwriter a mere mortal when exceptions strike
> -
>
> Key: LUCENE-5987
> URL: https://issues.apache.org/jira/browse/LUCENE-5987
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Robert Muir
>Assignee: Michael McCandless
>
> IndexWriter's exception handling is overly complicated. Every method in 
> general reads like this:
> {code}
> try {
>   try {
> try { 
>  ...
>  // lock order: COMPLICATED
>  synchronized(this or that) {
>  }
>  ...
>} finally {
>  if (!success5) {
>deleter.deleteThisFileOrThat();
>  }
> ...
>   }
> }
> {code}
> Part of the problem is it acts like it's an invincible superhero, e.g. it can 
> take a disk full on merge or flush to the face and just keep on trucking, and 
> you can somehow fix the root cause and then just go about making commits on 
> the same instance.
> But we have a hard enough time ensuring exceptions don't do the wrong thing 
> (e.g. cause corruption), and I don't think we really test this crazy behavior 
> anywhere: e.g. making commits AFTER hitting disk full and so on.
> It would probably be simpler if when such things happen, IW just considered 
> them "tragic" just like OOM and rolled itself back, instead of doing all 
> kinds of really scary stuff to try to "keep itself healthy" (like the little 
> dance it plays with IFD in mergeMiddle manually deleting CFS files).
> Besides, without something like a WAL, IndexWriter isn't really fit to be a 
> superhero anyway: it can't prevent you from losing data in such situations. 
> It just doesn't have the right tools for the job.
> edit: just to be clear I am referring to abort (low level exception during 
> flush) and exceptions during merge. For simple non-aborting cases like 
> analyzer errors, of course we can deal with this. We already made great 
> progress on turning a lot of BS exceptions that would cause aborts into 
> non-aborting ones recently.
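
The "tragic event" policy proposed here can be sketched outside Lucene: an unrecoverable low-level failure is recorded, the writer rolls itself back, and later commits fail fast instead of silently losing documents. The class and method names below are invented for illustration; this is not IndexWriter's actual code.

```java
// Illustrative sketch (not Lucene's IndexWriter) of treating a low-level
// failure as "tragic": record it, roll back, refuse further commits, rather
// than doing scary work to keep the writer "healthy".
public class TragedySketch {
    private Throwable tragedy;   // first unrecoverable failure, if any
    private boolean rolledBack;

    /** Simulate a flush that may hit a low-level error such as disk full. */
    public void flush(boolean diskFull) {
        try {
            if (diskFull) {
                throw new RuntimeException("disk full during flush");
            }
            // ... write segment files ...
        } catch (RuntimeException t) {
            tragedy = t;         // record the tragedy; don't try to recover
            rollback();
            throw t;
        }
    }

    /** Once a tragedy is recorded, commits fail fast instead of silently losing docs. */
    public void commit() {
        if (tragedy != null) {
            throw new IllegalStateException("this writer hit a tragic event", tragedy);
        }
        // ... normal commit ...
    }

    private void rollback() { rolledBack = true; }

    public boolean isRolledBack() { return rolledBack; }
}
```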






[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 1968 - Failure!

2014-12-01 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1968/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC (asserts: 
true)

1 tests failed.
FAILED:  org.apache.solr.cloud.ShardSplitTest.testDistribSearch

Error Message:
Timeout occured while waiting response from server at: http://127.0.0.1:51797

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:51797
at 
__randomizedtesting.SeedInfo.seed([38E4D94C91E573FE:B9025754E6BA13C2]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:581)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.cloud.ShardSplitTest.splitShard(ShardSplitTest.java:532)
at 
org.apache.solr.cloud.ShardSplitTest.incompleteOrOverlappingCustomRangeTest(ShardSplitTest.java:151)
at org.apache.solr.cloud.ShardSplitTest.doTest(ShardSplitTest.java:103)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFail

[jira] [Assigned] (LUCENE-5987) Make indexwriter a mere mortal when exceptions strike

2014-12-01 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless reassigned LUCENE-5987:
--

Assignee: Michael McCandless







[jira] [Updated] (SOLR-6811) TestLBHttpSolrServer.testSimple stall: 2 CloserThreads waiting for same lock?

2014-12-01 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-6811:
---
Attachment: td.1.txt
td.2.txt
td.3.txt

thread dumps i took while it was stalled

> TestLBHttpSolrServer.testSimple stall: 2 CloserThreads waiting for same lock? 
> --
>
> Key: SOLR-6811
> URL: https://issues.apache.org/jira/browse/SOLR-6811
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
> Attachments: td.1.txt, td.2.txt, td.3.txt
>
>
> got a stall today in TestLBHttpSolrServer.testSimple on 5x branch
> looking at the stack dumps, it seems like there are 2 instances of 
> CloserThread wait()ing to be notified on the same "lock" Object?
> 2 things seem suspicious:
> a) what are they waiting for? what thread is expected to be notifying? 
> (because i don't see anything else running that might do the job)
> b) why are there 2 instances of CloserThread?  from a quick skim it seems 
> like there should only be one.






[jira] [Created] (SOLR-6811) TestLBHttpSolrServer.testSimple stall: 2 CloserThreads waiting for same lock?

2014-12-01 Thread Hoss Man (JIRA)
Hoss Man created SOLR-6811:
--

 Summary: TestLBHttpSolrServer.testSimple stall: 2 CloserThreads 
waiting for same lock? 
 Key: SOLR-6811
 URL: https://issues.apache.org/jira/browse/SOLR-6811
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man


got a stall today in TestLBHttpSolrServer.testSimple on 5x branch

looking at the stack dumps, it seems like there are 2 instances of CloserThread 
wait()ing to be notified on the same "lock" Object?

2 things seem suspicious:

a) what are they waiting for? what thread is expected to be notifying? (because 
i don't see anything else running that might do the job)
b) why are there 2 instances of CloserThread?  from a quick skim it seems like 
there should only be one.
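
The hazard in those stack dumps can be reduced to a minimal sketch (hypothetical code, not Solr's CloserThread): two threads wait() on one shared lock, and unless the notifier calls notifyAll() while holding the lock (and each waiter re-checks a condition to guard against spurious wakeups), one waiter can sleep forever with nothing left running to notify it.

```java
// Minimal sketch of two threads wait()ing on one shared lock: the done flag
// and notifyAll() together guarantee both waiters eventually wake, even if
// finishAll() runs before either waiter reaches wait().
public class TwoWaitersSketch {
    private final Object lock = new Object();
    private boolean done;

    private Thread newWaiter() {
        Thread t = new Thread(() -> {
            synchronized (lock) {
                while (!done) {               // re-check: guards against spurious wakeups
                    try {
                        lock.wait();
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }
        });
        t.start();
        return t;
    }

    private void finishAll() {
        synchronized (lock) {
            done = true;
            lock.notifyAll();                 // wakes *both* waiters; notify() would wake only one
        }
    }

    /** Returns true if both waiters were woken and terminated. */
    public static boolean demo() {
        TwoWaitersSketch s = new TwoWaitersSketch();
        Thread a = s.newWaiter();
        Thread b = s.newWaiter();
        s.finishAll();
        try {
            a.join(10_000);
            b.join(10_000);
        } catch (InterruptedException e) {
            return false;
        }
        return !a.isAlive() && !b.isAlive();
    }
}
```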






[jira] [Commented] (SOLR-6554) Speed up overseer operations for collections with stateFormat > 1

2014-12-01 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230266#comment-14230266
 ] 

Shalin Shekhar Mangar commented on SOLR-6554:
-

bq. What kind of confidence do we have for stateFormat = 2 now? 

I am pretty confident that it works and works well. But the client side needs a 
fix before 5.0 -- SOLR-6521.

bq. It would really be nice to drop 1 from 5x rather than deal with both for a 
full major version again.

I agree but how do we drop 1? I haven't thought through the migration process. 
It would probably need us to write the state to both places for one Solr 
release to auto-convert. Otherwise an upgrade would require down-time. We could 
definitely use stateFormat=2 as the default in 5.0 as it is already as 
performant as stateFormat=1 for the single collection use-case.

> Speed up overseer operations for collections with stateFormat > 1
> -
>
> Key: SOLR-6554
> URL: https://issues.apache.org/jira/browse/SOLR-6554
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 5.0, Trunk
>Reporter: Shalin Shekhar Mangar
> Attachments: SOLR-6554-batching-refactor.patch, 
> SOLR-6554-batching-refactor.patch, SOLR-6554-batching-refactor.patch, 
> SOLR-6554-batching-refactor.patch, SOLR-6554.patch, SOLR-6554.patch, 
> SOLR-6554.patch, SOLR-6554.patch, SOLR-6554.patch, SOLR-6554.patch, 
> SOLR-6554.patch, SOLR-6554.patch
>
>
> Right now (after SOLR-5473 was committed), a node watches a collection only 
> if stateFormat=1 or if that node hosts at least one core belonging to that 
> collection.
> This means that a node which is the overseer operates on all collections but 
> watches only a few. So any read goes directly to zookeeper which slows down 
> overseer operations.
> Let's have the overseer node watch all collections always and never remove 
> those watches (except when the collection itself is deleted).






[jira] [Updated] (SOLR-6521) CloudSolrServer should synchronize cache cluster state loading

2014-12-01 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-6521:

 Priority: Critical  (was: Major)
Fix Version/s: Trunk
   5.0

> CloudSolrServer should synchronize cache cluster state loading
> --
>
> Key: SOLR-6521
> URL: https://issues.apache.org/jira/browse/SOLR-6521
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Jessica Cheng Mallet
>Assignee: Noble Paul
>Priority: Critical
>  Labels: SolrCloud
> Fix For: 5.0, Trunk
>
>
> Under heavy load-testing with the new solrj client that caches the cluster 
> state instead of setting a watcher, I started seeing lots of zk connection 
> loss on the client-side when refreshing the CloudSolrServer 
> collectionStateCache, and this was causing crazy client-side 99.9% latency 
> (~15 sec). I swapped the cache out with guava's LoadingCache (which does 
> locking to ensure only one thread loads the content under one key while the 
> other threads that want the same key wait) and the connection loss went away 
> and the 99.9% latency also went down to just about 1 sec.
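
The fix described above swaps in Guava's LoadingCache; the property it relies on (the loader runs at most once per key, and concurrent readers of that key wait for it instead of each hitting ZooKeeper) can be sketched with only the JDK via ConcurrentHashMap.computeIfAbsent. The class below is an illustration of that guarantee, not CloudSolrServer's actual cache.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

// Sketch of a per-key, load-once cache: computeIfAbsent runs the loader at
// most once per key, and concurrent callers for that key block until the
// value is installed -- the same guarantee Guava's LoadingCache provides.
public class PerKeyLoadingCache<K, V> {
    private final ConcurrentMap<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> loader;
    public final AtomicInteger loads = new AtomicInteger(); // observable load count

    public PerKeyLoadingCache(Function<K, V> loader) {
        this.loader = loader;
    }

    public V get(K key) {
        return cache.computeIfAbsent(key, k -> {
            loads.incrementAndGet();  // stands in for the expensive ZK read
            return loader.apply(k);
        });
    }
}
```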






[jira] [Resolved] (SOLR-6610) ZkController.publishAndWaitForDownStates always times out when a new cluster is started

2014-12-01 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-6610.
-
Resolution: Fixed

> ZkController.publishAndWaitForDownStates always times out when a new cluster 
> is started
> ---
>
> Key: SOLR-6610
> URL: https://issues.apache.org/jira/browse/SOLR-6610
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Jessica Cheng Mallet
>Assignee: Noble Paul
>  Labels: solrcloud
> Fix For: 4.10.3, 5.0, Trunk
>
> Attachments: SOLR-6610.patch
>
>
> Using stateFormat=2, our solr always takes a while to start up and spits out 
> this warning line:
> {quote}
> WARN  - 2014-10-08 17:30:24.290; org.apache.solr.cloud.ZkController; Timed 
> out waiting to see all nodes published as DOWN in our cluster state.
> {quote}
> Looking at the code, this is probably because 
> ZkController.publishAndWaitForDownStates is called in ZkController.init, 
> which gets called via ZkContainer.initZookeeper in CoreContainer.load before 
> any of the stateFormat=2 collection watches are set in the 
> CoreContainer.preRegisterInZk call a few lines later.






[jira] [Commented] (SOLR-6610) ZkController.publishAndWaitForDownStates always times out when a new cluster is started

2014-12-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230221#comment-14230221
 ] 

ASF subversion and git services commented on SOLR-6610:
---

Commit 1642732 from sha...@apache.org in branch 'dev/branches/lucene_solr_4_10'
[ https://svn.apache.org/r1642732 ]

SOLR-6610: Slow startup of new clusters because 
ZkController.publishAndWaitForDownStates always times out







[jira] [Updated] (SOLR-6610) ZkController.publishAndWaitForDownStates always times out when a new cluster is started

2014-12-01 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-6610:

Summary: ZkController.publishAndWaitForDownStates always times out when a 
new cluster is started  (was: In stateFormat=2, 
ZkController.publishAndWaitForDownStates always times out)







[jira] [Resolved] (SOLR-6706) /update/json/docs throws RuntimeException if a nested structure contains a non-leaf float field

2014-12-01 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-6706.
-
Resolution: Fixed

> /update/json/docs throws RuntimeException if a nested structure contains a 
> non-leaf float field
> ---
>
> Key: SOLR-6706
> URL: https://issues.apache.org/jira/browse/SOLR-6706
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Affects Versions: 4.10.2, 5.0, Trunk
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
> Fix For: 4.10.3, 5.0, Trunk
>
> Attachments: SOLR-6706.patch
>
>
> The following JSON throws an exception:
> {code}
> {
> "a_string" : "abc",
> "a_num" : 2.0,
> "a" : {
> "b" : [
> {"id":"1", "title" : "test1"},
> {"id":"2", "title" : "test2"}
> ]
> }
> }
> {code}
> {code}
> curl 
> 'http://localhost:8983/solr/collection1/update/json/docs?split=/a/b&f=id:/a/b/id&f=title_s:/a/b/title&indent=on'
>  -H 'Content-type:application/json' -d @test2.json
> {
>   "responseHeader":{
> "status":500,
> "QTime":0},
>   "error":{
> "msg":"unexpected token 3",
> "trace":"java.lang.RuntimeException: unexpected token 3\n\tat 

[jira] [Commented] (SOLR-6706) /update/json/docs throws RuntimeException if a nested structure contains a non-leaf float field

2014-12-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230210#comment-14230210
 ] 

ASF subversion and git services commented on SOLR-6706:
---

Commit 1642730 from sha...@apache.org in branch 'dev/branches/lucene_solr_4_10'
[ https://svn.apache.org/r1642730 ]

SOLR-6706: /update/json/docs throws RuntimeException if a nested structure 
contains a non-leaf float field

> /update/json/docs throws RuntimeException if a nested structure contains a 
> non-leaf float field
> ---
>
> Key: SOLR-6706
> URL: https://issues.apache.org/jira/browse/SOLR-6706
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Affects Versions: 4.10.2, 5.0, Trunk
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
> Fix For: 4.10.3, 5.0, Trunk
>
> Attachments: SOLR-6706.patch
>
>
> The following JSON throws an exception:
> {code}
> {
> "a_string" : "abc",
> "a_num" : 2.0,
> "a" : {
> "b" : [
> {"id":"1", "title" : "test1"},
> {"id":"2", "title" : "test2"}
> ]
> }
> }
> {code}
> {code}
> curl 
> 'http://localhost:8983/solr/collection1/update/json/docs?split=/a/b&f=id:/a/b/id&f=title_s:/a/b/title&indent=on'
>  -H 'Content-type:application/json' -d @test2.json
> {
>   "responseHeader":{
> "status":500,
> "QTime":0},
>   "error":{
> "msg":"unexpected token 3",
> "trace":"java.lang.RuntimeException: unexpected token 3\n\tat 
> org.apache.solr.common.util.JsonRecordReader$Node.handleObjectStart(JsonRecordReader.java:400)\n\tat
>  
> org.apache.solr.common.util.JsonRecordReader$Node.parse(JsonRecordReader.java:281)\n\tat
>  
> org.apache.solr.common.util.JsonRecordReader$Node.access$200(JsonRecordReader.java:152)\n\tat
>  
> org.apache.solr.common.util.JsonRecordReader.streamRecords(JsonRecordReader.java:136)\n\tat
>  
> org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.handleSplitMode(JsonLoader.java:200)\n\tat
>  
> org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.processUpdate(JsonLoader.java:120)\n\tat
>  
> org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.load(JsonLoader.java:106)\n\tat
>  org.apache.solr.handler.loader.JsonLoader.load(JsonLoader.java:68)\n\tat 
> org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:99)\n\tat
>  
> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)\n\tat
>  org.apache.solr.core.SolrCore.execute(SolrCore.java:1967)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)\n\tat
>  org.eclipse.jetty.server.Server.handle(Server.java:368)\n\tat 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)\n\tat
>  
> org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)\n\tat
>  
> org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:953)\n\tat
>  
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1014)\n\tat
>  org

[jira] [Assigned] (SOLR-6653) bin/solr start script should return error code >0 when something fails

2014-12-01 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter reassigned SOLR-6653:


Assignee: Timothy Potter

> bin/solr start script should return error code >0 when something fails
> --
>
> Key: SOLR-6653
> URL: https://issues.apache.org/jira/browse/SOLR-6653
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 4.10.1
>Reporter: Jan Høydahl
>Assignee: Timothy Potter
>  Labels: bin/solr
>
> In order to be able to include {{bin/solr}} in scripts, it should be possible 
> to test the return value for success or failure. Examples:
> {noformat}
> jan:solr janhoy$ bin/solr start
> Waiting to see Solr listening on port 8983 [/]  
> Started Solr server on port 8983 (pid=47354). Happy searching!
> jan:solr janhoy$ echo $?
> 0
> jan:solr janhoy$ bin/solr start
> Solr already running on port 8983 (pid: 47354)!
> Please use the 'restart' command if you want to restart this node.
> jan:solr janhoy$ echo $?
> 0
> {noformat}
> The last command should return exit status 1:
> {noformat}
> jan:solr janhoy$ bin/solr stop -p 1234
> No process found for Solr node running on port 1234
> jan:solr janhoy$ echo $?
> 0
> {noformat}
> Same here. Probably other places too.
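The contract being asked for is easiest to see from the caller's side. A minimal Java sketch of such a caller (an assumption, not part of the patch; `sh -c "exit N"` stands in for the real `bin/solr` commands, so a POSIX `sh` is assumed):

```java
import java.io.IOException;

public class ExitCodes {
    // Runs a command and reports whether it exited with status 0.
    static boolean succeeded(String... cmd) {
        try {
            Process p = new ProcessBuilder(cmd).start();
            return p.waitFor() == 0;
        } catch (IOException | InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // Callers can only branch correctly if failures exit non-zero,
        // which is exactly what this issue asks bin/solr to guarantee.
        System.out.println(succeeded("sh", "-c", "exit 0"));  // true
        System.out.println(succeeded("sh", "-c", "exit 1"));  // false
    }
}
```

With the current script, both the "already running" and "no process found" cases would look like the `exit 0` branch to such a caller.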



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6610) In stateFormat=2, ZkController.publishAndWaitForDownStates always times out

2014-12-01 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-6610:

Fix Version/s: 4.10.3

> In stateFormat=2, ZkController.publishAndWaitForDownStates always times out
> ---
>
> Key: SOLR-6610
> URL: https://issues.apache.org/jira/browse/SOLR-6610
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Jessica Cheng Mallet
>Assignee: Noble Paul
>  Labels: solrcloud
> Fix For: 4.10.3, 5.0, Trunk
>
> Attachments: SOLR-6610.patch
>
>
> Using stateFormat=2, our solr always takes a while to start up and spits out 
> this warning line:
> {quote}
> WARN  - 2014-10-08 17:30:24.290; org.apache.solr.cloud.ZkController; Timed 
> out waiting to see all nodes published as DOWN in our cluster state.
> {quote}
> Looking at the code, this is probably because 
> ZkController.publishAndWaitForDownStates is called in ZkController.init, 
> which gets called via ZkContainer.initZookeeper in CoreContainer.load before 
> any of the stateFormat=2 collection watches are set in the 
> CoreContainer.preRegisterInZk call a few lines later.
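The ordering bug described above generalizes: waiting for an event before the watch that delivers it has been installed guarantees a timeout, no matter how long the wait. A toy Java sketch (a `CountDownLatch` standing in for the ZooKeeper collection watches; names are illustrative, not Solr's):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class WatchOrder {
    // Wrong order: wait for the state change before the "watch" that
    // delivers it exists. The wait always times out (returns false).
    static boolean waitThenWatch() {
        try {
            CountDownLatch seen = new CountDownLatch(1);
            boolean saw = seen.await(50, TimeUnit.MILLISECONDS); // nothing can fire yet
            seen.countDown();                                    // watch installed too late
            return saw;
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    // Right order: install the watch first, then wait.
    static boolean watchThenWait() {
        try {
            CountDownLatch seen = new CountDownLatch(1);
            seen.countDown();                                    // watch already delivered
            return seen.await(50, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("wait-then-watch: " + waitThenWatch());  // false
        System.out.println("watch-then-wait: " + watchThenWait());  // true
    }
}
```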






[jira] [Reopened] (SOLR-6610) In stateFormat=2, ZkController.publishAndWaitForDownStates always times out

2014-12-01 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar reopened SOLR-6610:
-

Reopening to backport to 4.10.3

> In stateFormat=2, ZkController.publishAndWaitForDownStates always times out
> ---
>
> Key: SOLR-6610
> URL: https://issues.apache.org/jira/browse/SOLR-6610
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Jessica Cheng Mallet
>Assignee: Noble Paul
>  Labels: solrcloud
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6610.patch
>
>
> Using stateFormat=2, our solr always takes a while to start up and spits out 
> this warning line:
> {quote}
> WARN  - 2014-10-08 17:30:24.290; org.apache.solr.cloud.ZkController; Timed 
> out waiting to see all nodes published as DOWN in our cluster state.
> {quote}
> Looking at the code, this is probably because 
> ZkController.publishAndWaitForDownStates is called in ZkController.init, 
> which gets called via ZkContainer.initZookeeper in CoreContainer.load before 
> any of the stateFormat=2 collection watches are set in the 
> CoreContainer.preRegisterInZk call a few lines later.






[jira] [Commented] (SOLR-6685) ConcurrentModificationException in Overseer Stats API

2014-12-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230196#comment-14230196
 ] 

ASF subversion and git services commented on SOLR-6685:
---

Commit 1642729 from sha...@apache.org in branch 'dev/branches/lucene_solr_4_10'
[ https://svn.apache.org/r1642729 ]

SOLR-6685: ConcurrentModificationException in Overseer Status API

> ConcurrentModificationException in Overseer Stats API
> -
>
> Key: SOLR-6685
> URL: https://issues.apache.org/jira/browse/SOLR-6685
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10.1
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 4.10.3, 5.0, Trunk
>
> Attachments: SOLR-6685.patch, SOLR-6685.patch
>
>
> I just found a concurrent modification exception in 
> OverseerCollectionProcessor while iterating over the overseer stats. The 
> iteration should be synchronized.
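The fix pattern is the standard one for `Collections.synchronizedMap`: any iteration over its views, including copying them, must hold the map's monitor. A hedged sketch of that pattern (not the actual Overseer code; names are illustrative):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SafeStats {
    // Snapshot the entries while holding the map's monitor, then iterate
    // the snapshot freely. Concurrent writers block only during the copy.
    static List<Map.Entry<String, Long>> snapshot(Map<String, Long> stats) {
        synchronized (stats) {   // required by the synchronizedMap contract
            return new ArrayList<>(stats.entrySet());
        }
    }

    public static void main(String[] args) {
        Map<String, Long> stats = Collections.synchronizedMap(new HashMap<>());
        stats.put("am_i_leader", 42L);
        System.out.println(snapshot(stats));
    }
}
```

Iterating `stats.entrySet()` directly, without the `synchronized` block, is what lets a concurrent update throw the ConcurrentModificationException.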






[jira] [Resolved] (SOLR-6685) ConcurrentModificationException in Overseer Stats API

2014-12-01 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-6685.
-
Resolution: Fixed

> ConcurrentModificationException in Overseer Stats API
> -
>
> Key: SOLR-6685
> URL: https://issues.apache.org/jira/browse/SOLR-6685
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10.1
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 4.10.3, 5.0, Trunk
>
> Attachments: SOLR-6685.patch, SOLR-6685.patch
>
>
> I just found a concurrent modification exception in 
> OverseerCollectionProcessor while iterating over the overseer stats. The 
> iteration should be synchronized.






[jira] [Commented] (SOLR-2927) SolrIndexSearcher's register do not match close and SolrCore's closeSearcher

2014-12-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230194#comment-14230194
 ] 

ASF subversion and git services commented on SOLR-2927:
---

Commit 1642728 from sha...@apache.org in branch 'dev/branches/lucene_solr_4_10'
[ https://svn.apache.org/r1642728 ]

SOLR-2927: Solr does not unregister all mbeans upon exception in constructor 
causing memory leaks

> SolrIndexSearcher's register do not match close and SolrCore's closeSearcher
> 
>
> Key: SOLR-2927
> URL: https://issues.apache.org/jira/browse/SOLR-2927
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.0-ALPHA
> Environment: JDK1.6/CentOS
>Reporter: tom liu
>Assignee: Shalin Shekhar Mangar
> Fix For: 4.10.3, 5.0, Trunk
>
> Attachments: SOLR-2927.patch, mbean-leak-jira.png
>
>
> # SolrIndexSearcher's register method puts the searcher's name into the 
> infoRegistry, but SolrCore's closeSearcher method removes only the 
> currentSearcher entry.
> # SolrIndexSearcher's register method also registers the cache names, but 
> SolrIndexSearcher's close does not remove them.
> So some entries are never unregistered, which can leak memory.
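The symmetric-cleanup pattern the report is asking for can be sketched as follows (hypothetical names, not the actual SolrCore/SolrIndexSearcher code): register() records every key it adds so that close() removes exactly those keys, leaving nothing behind.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class RegistryUser {
    private final Map<String, Object> infoRegistry;
    private final List<String> registeredNames = new ArrayList<>();

    RegistryUser(Map<String, Object> infoRegistry) {
        this.infoRegistry = infoRegistry;
    }

    void register(String name, Object bean) {
        infoRegistry.put(name, bean);
        registeredNames.add(name);   // remember the key for symmetric cleanup
    }

    void close() {
        // Remove exactly what register() added -- searcher and cache names
        // alike -- instead of guessing a fixed key like "currentSearcher".
        for (String name : registeredNames) {
            infoRegistry.remove(name);
        }
        registeredNames.clear();
    }
}
```

Any asymmetry between the two methods (registering under one name, removing under another, or registering extra per-cache entries that close() never touches) leaves stale registry references, which is the leak shown in the attached screenshot.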






[jira] [Resolved] (SOLR-2927) SolrIndexSearcher's register do not match close and SolrCore's closeSearcher

2014-12-01 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-2927.
-
Resolution: Fixed

> SolrIndexSearcher's register do not match close and SolrCore's closeSearcher
> 
>
> Key: SOLR-2927
> URL: https://issues.apache.org/jira/browse/SOLR-2927
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.0-ALPHA
> Environment: JDK1.6/CentOS
>Reporter: tom liu
>Assignee: Shalin Shekhar Mangar
> Fix For: 4.10.3, 5.0, Trunk
>
> Attachments: SOLR-2927.patch, mbean-leak-jira.png
>
>
> # SolrIndexSearcher's register method puts the searcher's name into the 
> infoRegistry, but SolrCore's closeSearcher method removes only the 
> currentSearcher entry.
> # SolrIndexSearcher's register method also registers the cache names, but 
> SolrIndexSearcher's close does not remove them.
> So some entries are never unregistered, which can leak memory.






[jira] [Commented] (SOLR-4792) stop shipping a war in 5.0

2014-12-01 Thread Steve Molloy (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230170#comment-14230170
 ] 

Steve Molloy commented on SOLR-4792:


bq. We have already discussed and had a vote.
I didn't want to imply it wasn't discussed or that there wasn't any valid 
reason; I already said I agree with the change, and we moved away from using 
the war directly as soon as the decision was made. Still, linking this ticket 
to what it's actually blocking would make it clear why it is needed, and why 
it's worth breaking some current integrations, for people who check Jira but 
don't read the threads on the mailing list.
Anyway, just wanted to help, as we've been ready for the change for a while on 
our side. :)

> stop shipping a war in 5.0
> --
>
> Key: SOLR-4792
> URL: https://issues.apache.org/jira/browse/SOLR-4792
> Project: Solr
>  Issue Type: Task
>  Components: Build
>Reporter: Robert Muir
>Assignee: Mark Miller
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-4792.patch
>
>
> see the vote on the developer list.
> This is the first step: if we stop shipping a war then we are free to do 
> anything we want. 






[jira] [Commented] (SOLR-6780) Merging request parameters with defaults produce duplicate entries

2014-12-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230159#comment-14230159
 ] 

ASF subversion and git services commented on SOLR-6780:
---

Commit 1642727 from hoss...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1642727 ]

SOLR-6780: Fixed a bug in how default/appends/invariants params were affecting 
the set of all keys found in the request parameters, resulting in some 
key=value param pairs being duplicated.

> Merging request parameters with defaults produce duplicate entries
> --
>
> Key: SOLR-6780
> URL: https://issues.apache.org/jira/browse/SOLR-6780
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.1, 5.0, Trunk
>Reporter: Alexandre Rafalovitch
>Assignee: Hoss Man
>  Labels: parameters
> Fix For: 4.10.3, 5.0, Trunk
>
> Attachments: SOLR-6780.patch
>
>
> When a parameter (e.g. echoParams) is specified and overrides the default on 
> the handler, it actually generates two entries for that key with the same 
> value. 
> Most of the time this is merely confusing and not an actual problem; however, 
> some components will do the work twice. For example, the faceting component, 
> as described in http://search-lucene.com/m/QTPaSlFUQ1/duplicate
> It may also be connected to SOLR-6369
> The cause seems to be the interplay between 
> *DefaultSolrParams#getParameterNamesIterator()* which just returns param 
> names in sequence and *SolrParams#toNamedList()* which uses the first 
> (override then default) value for each key, without deduplication.
> It's easily reproducible in trunk against schemaless example with 
> bq. curl 
> "http://localhost:8983/solr/schemaless/select?indent=true&echoParams=all";
> I've also spot checked it and it seems to be reproducible back to Solr 4.1.
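One way to dedupe while preserving the override-then-default precedence described above is to feed both name sequences through a LinkedHashSet, which drops repeated keys but keeps first-seen order. A sketch with illustrative names (an assumption about the shape of a fix, not the actual SolrParams patch):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class ParamMerge {
    // Union of override and default parameter names, first-seen order,
    // without duplicates -- matching the "first value wins" lookup that
    // SolrParams#toNamedList already performs per key.
    static List<String> mergedNames(List<String> overrides, List<String> defaults) {
        Set<String> merged = new LinkedHashSet<>(overrides);
        merged.addAll(defaults);   // duplicate keys from defaults are dropped
        return new ArrayList<>(merged);
    }

    public static void main(String[] args) {
        System.out.println(mergedNames(
                Arrays.asList("echoParams", "rows"),
                Arrays.asList("echoParams", "wt")));
        // [echoParams, rows, wt] -- "echoParams" appears once, override first
    }
}
```

Without the dedup, a key such as echoParams that exists in both the request and the handler defaults is emitted twice, each time resolving to the same (override) value, which is the duplication reported above.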






[jira] [Updated] (SOLR-6706) /update/json/docs throws RuntimeException if a nested structure contains a non-leaf float field

2014-12-01 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-6706:

Fix Version/s: 4.10.3

> /update/json/docs throws RuntimeException if a nested structure contains a 
> non-leaf float field
> ---
>
> Key: SOLR-6706
> URL: https://issues.apache.org/jira/browse/SOLR-6706
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Affects Versions: 4.10.2, 5.0, Trunk
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
> Fix For: 4.10.3, 5.0, Trunk
>
> Attachments: SOLR-6706.patch
>
>
> The following JSON throws an exception:
> {code}
> {
> "a_string" : "abc",
> "a_num" : 2.0,
> "a" : {
> "b" : [
> {"id":"1", "title" : "test1"},
> {"id":"2", "title" : "test2"}
> ]
> }
> }
> {code}
> {code}
> curl 
> 'http://localhost:8983/solr/collection1/update/json/docs?split=/a/b&f=id:/a/b/id&f=title_s:/a/b/title&indent=on'
>  -H 'Content-type:application/json' -d @test2.json
> {
>   "responseHeader":{
> "status":500,
> "QTime":0},
>   "error":{
> "msg":"unexpected token 3",
> "trace":"java.lang.RuntimeException: unexpected token 3\n\tat 
> org.apache.solr.common.util.JsonRecordReader$Node.handleObjectStart(JsonRecordReader.java:400)\n\tat
>  
> org.apache.solr.common.util.JsonRecordReader$Node.parse(JsonRecordReader.java:281)\n\tat
>  
> org.apache.solr.common.util.JsonRecordReader$Node.access$200(JsonRecordReader.java:152)\n\tat
>  
> org.apache.solr.common.util.JsonRecordReader.streamRecords(JsonRecordReader.java:136)\n\tat
>  
> org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.handleSplitMode(JsonLoader.java:200)\n\tat
>  
> org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.processUpdate(JsonLoader.java:120)\n\tat
>  
> org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.load(JsonLoader.java:106)\n\tat
>  org.apache.solr.handler.loader.JsonLoader.load(JsonLoader.java:68)\n\tat 
> org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:99)\n\tat
>  
> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)\n\tat
>  org.apache.solr.core.SolrCore.execute(SolrCore.java:1967)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)\n\tat
>  org.eclipse.jetty.server.Server.handle(Server.java:368)\n\tat 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)\n\tat
>  
> org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)\n\tat
>  
> org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:953)\n\tat
>  
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1014)\n\tat
>  org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:861)\n\tat 
> org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:240)\n\tat 
> org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)\n\tat
>  
> org.eclipse.jetty.server.bio

[jira] [Updated] (SOLR-6685) ConcurrentModificationException in Overseer Stats API

2014-12-01 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-6685:

Fix Version/s: 4.10.3

> ConcurrentModificationException in Overseer Stats API
> -
>
> Key: SOLR-6685
> URL: https://issues.apache.org/jira/browse/SOLR-6685
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10.1
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 4.10.3, 5.0, Trunk
>
> Attachments: SOLR-6685.patch, SOLR-6685.patch
>
>
> I just found a concurrent modification exception in 
> OverseerCollectionProcessor while iterating over the overseer stats. The 
> iteration should be synchronized.






[jira] [Reopened] (SOLR-6706) /update/json/docs throws RuntimeException if a nested structure contains a non-leaf float field

2014-12-01 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar reopened SOLR-6706:
-

Reopening to backport to 4.10.3

> /update/json/docs throws RuntimeException if a nested structure contains a 
> non-leaf float field
> ---
>
> Key: SOLR-6706
> URL: https://issues.apache.org/jira/browse/SOLR-6706
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Affects Versions: 4.10.2, 5.0, Trunk
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6706.patch
>
>
> The following JSON throws an exception:
> {code}
> {
> "a_string" : "abc",
> "a_num" : 2.0,
> "a" : {
> "b" : [
> {"id":"1", "title" : "test1"},
> {"id":"2", "title" : "test2"}
> ]
> }
> }
> {code}
> {code}
> curl 
> 'http://localhost:8983/solr/collection1/update/json/docs?split=/a/b&f=id:/a/b/id&f=title_s:/a/b/title&indent=on'
>  -H 'Content-type:application/json' -d @test2.json
> {
>   "responseHeader":{
> "status":500,
> "QTime":0},
>   "error":{
> "msg":"unexpected token 3",
> "trace":"java.lang.RuntimeException: unexpected token 3\n\tat 
> org.apache.solr.common.util.JsonRecordReader$Node.handleObjectStart(JsonRecordReader.java:400)\n\tat
>  
> org.apache.solr.common.util.JsonRecordReader$Node.parse(JsonRecordReader.java:281)\n\tat
>  
> org.apache.solr.common.util.JsonRecordReader$Node.access$200(JsonRecordReader.java:152)\n\tat
>  
> org.apache.solr.common.util.JsonRecordReader.streamRecords(JsonRecordReader.java:136)\n\tat
>  
> org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.handleSplitMode(JsonLoader.java:200)\n\tat
>  
> org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.processUpdate(JsonLoader.java:120)\n\tat
>  
> org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.load(JsonLoader.java:106)\n\tat
>  org.apache.solr.handler.loader.JsonLoader.load(JsonLoader.java:68)\n\tat 
> org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:99)\n\tat
>  
> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)\n\tat
>  org.apache.solr.core.SolrCore.execute(SolrCore.java:1967)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)\n\tat
>  org.eclipse.jetty.server.Server.handle(Server.java:368)\n\tat 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)\n\tat
>  
> org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)\n\tat
>  
> org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:953)\n\tat
>  
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1014)\n\tat
>  org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:861)\n\tat 
> org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:240)\n\tat 
> org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)\n\tat
>  
> org.eclipse.jetty.server.bi

[jira] [Updated] (SOLR-2927) SolrIndexSearcher's register do not match close and SolrCore's closeSearcher

2014-12-01 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-2927:

Fix Version/s: 4.10.3

> SolrIndexSearcher's register do not match close and SolrCore's closeSearcher
> 
>
> Key: SOLR-2927
> URL: https://issues.apache.org/jira/browse/SOLR-2927
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.0-ALPHA
> Environment: JDK1.6/CentOS
>Reporter: tom liu
>Assignee: Shalin Shekhar Mangar
> Fix For: 4.10.3, 5.0, Trunk
>
> Attachments: SOLR-2927.patch, mbean-leak-jira.png
>
>
> # SolrIndexSearcher's register method puts the searcher's name into the 
> infoRegistry, but SolrCore's closeSearcher method removes only the 
> currentSearcher entry.
> # SolrIndexSearcher's register method also registers the cache names, but 
> SolrIndexSearcher's close does not remove them.
> So some entries are never unregistered, which can leak memory.






[jira] [Reopened] (SOLR-6685) ConcurrentModificationException in Overseer Stats API

2014-12-01 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar reopened SOLR-6685:
-

Reopening to backport it to 4.10.3

> ConcurrentModificationException in Overseer Stats API
> -
>
> Key: SOLR-6685
> URL: https://issues.apache.org/jira/browse/SOLR-6685
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10.1
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6685.patch, SOLR-6685.patch
>
>
> I just found a concurrent modification exception in 
> OverseerCollectionProcessor while iterating over the overseer stats. The 
> iteration should be synchronized.
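The proposed fix amounts to guarding the iteration with the same lock used for updates. A minimal sketch, assuming a plain map guarded by its own monitor (the class is hypothetical, not Solr's actual overseer stats structure):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class SyncStats {
    // Shared stats map; all reads and writes synchronize on the map itself.
    private final Map<String, Long> stats = new LinkedHashMap<>();

    public void record(String op) {
        synchronized (stats) {
            stats.merge(op, 1L, Long::sum);
        }
    }

    // Iteration must hold the same lock: a concurrent record() during an
    // unsynchronized loop is exactly what raises ConcurrentModificationException.
    public List<String> snapshot() {
        List<String> out = new ArrayList<>();
        synchronized (stats) {
            for (Map.Entry<String, Long> e : stats.entrySet()) {
                out.add(e.getKey() + "=" + e.getValue());
            }
        }
        return out;
    }
}
```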






[jira] [Reopened] (SOLR-2927) SolrIndexSearcher's register do not match close and SolrCore's closeSearcher

2014-12-01 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar reopened SOLR-2927:
-

Reopening to backport in 4.10.3

> SolrIndexSearcher's register do not match close and SolrCore's closeSearcher
> 
>
> Key: SOLR-2927
> URL: https://issues.apache.org/jira/browse/SOLR-2927
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.0-ALPHA
> Environment: JDK1.6/CentOS
>Reporter: tom liu
>Assignee: Shalin Shekhar Mangar
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-2927.patch, mbean-leak-jira.png
>
>
> # SolrIndexSearcher's register method puts the name of the searcher into the 
> infoRegistry, but SolrCore's closeSearcher method only removes the name of 
> currentSearcher.
> # SolrIndexSearcher's register method also puts the names of the caches, but 
> SolrIndexSearcher's close does not remove them.
> So there may be a memory leak.






[jira] [Commented] (SOLR-4799) SQLEntityProcessor for zipper join

2014-12-01 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230118#comment-14230118
 ] 

Mikhail Khludnev commented on SOLR-4799:


[~noble.paul] did you catch the recent patch?

> SQLEntityProcessor for zipper join
> --
>
> Key: SOLR-4799
> URL: https://issues.apache.org/jira/browse/SOLR-4799
> Project: Solr
>  Issue Type: New Feature
>  Components: contrib - DataImportHandler
>Reporter: Mikhail Khludnev
>Priority: Minor
>  Labels: DIH, dataimportHandler, dih
> Attachments: SOLR-4799.patch, SOLR-4799.patch, SOLR-4799.patch, 
> SOLR-4799.patch, SOLR-4799.patch
>
>
> DIH is mostly considered a playground tool, and real usages end up with 
> SolrJ. I want to contribute a few improvements targeting DIH performance.
> This one provides a performant approach for joining SQL entities with minimal 
> memory, in contrast to 
> http://wiki.apache.org/solr/DataImportHandler#CachedSqlEntityProcessor
> The idea is:
> * the parent table is explicitly ordered by its PK in SQL
> * the children table is explicitly ordered by the parent_id FK in SQL
> * the children entity processor joins the ordered resultsets with a ‘zipper’ algorithm.
> Do you think it’s worth contributing to DIH?
> cc: [~goksron] [~jdyer]
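The zipper idea above can be sketched as a single-pass merge of two sorted streams. This is a hypothetical, self-contained illustration using arrays in place of JDBC resultsets (names are invented; it is not the actual DIH entity processor):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ZipperJoin {
    // Joins parents sorted by PK with children sorted by parent_id FK in one
    // forward pass, holding only the current parent's children in memory.
    public static Map<Integer, List<String>> join(int[] parentIds,
                                                  int[] childParentIds,
                                                  String[] childValues) {
        Map<Integer, List<String>> joined = new LinkedHashMap<>();
        int c = 0;
        for (int parentId : parentIds) {
            List<String> children = new ArrayList<>();
            // Skip children whose FK is smaller (would mean an orphan row).
            while (c < childParentIds.length && childParentIds[c] < parentId) c++;
            // Consume the contiguous run of children belonging to this parent.
            while (c < childParentIds.length && childParentIds[c] == parentId) {
                children.add(childValues[c]);
                c++;
            }
            joined.put(parentId, children);
        }
        return joined;
    }
}
```

Because both sides are ordered by the join key, each resultset is read exactly once, which is where the memory win over a cached-entity approach comes from.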






[jira] [Updated] (LUCENE-5873) AssertionError in ToChildBlockJoinScorer.advance

2014-12-01 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated LUCENE-5873:
-
Attachment: LUCENE-5873.patch

Here is the patch; it changes the assert to an explicit if/throw.
[~varunthacker], your test is awesome! Appreciated!
[~shalinmangar], the constants might not be ideal; I rely on your sense of beauty.
Thanks in advance.
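The assert-to-if/throw change can be illustrated in miniature. This sketch is hypothetical (method name, message, and exception type are illustrative, not the actual Lucene patch):

```java
public class AdvanceChecks {
    // A bare assert is silently skipped unless the JVM runs with -ea, so an
    // invalid filter slips through in production. An explicit check surfaces
    // the misuse unconditionally, with a message explaining what went wrong.
    public static int advanceTarget(int target, int parentDoc) {
        if (target >= parentDoc) {  // was: assert target < parentDoc;
            throw new IllegalStateException(
                "child query must only match non-parent docs, but parent docID="
                + parentDoc + " matched child target=" + target);
        }
        return target;
    }
}
```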

> AssertionError in ToChildBlockJoinScorer.advance
> 
>
> Key: LUCENE-5873
> URL: https://issues.apache.org/jira/browse/LUCENE-5873
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Varun Thacker
>Priority: Minor
> Attachments: LUCENE-5873.patch, LUCENE-5873.patch
>
>
> When using ToChildBJQ and searching via IndexSearcher.search(Query query, 
> Filter filter, int n), if we provide a filter that matches both parent and 
> child documents, we get this error: 
> {noformat}
> java.lang.AssertionError
>   at 
> __randomizedtesting.SeedInfo.seed([C346722DC1E4810C:A08F176AE828FA1D]:0)
>   at 
> org.apache.lucene.search.join.ToChildBlockJoinQuery$ToChildBlockJoinScorer.advance(ToChildBlockJoinQuery.java:286)
>   at 
> org.apache.lucene.search.FilteredQuery$LeapFrogScorer.advanceToNextCommonDoc(FilteredQuery.java:274)
>   at 
> org.apache.lucene.search.FilteredQuery$LeapFrogScorer.nextDoc(FilteredQuery.java:286)
>   at 
> org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:192)
>   at 
> org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:163)
>   at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:35)
>   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:614)
>   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:483)
>   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:440)
>   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:273)
>   at 
> org.apache.lucene.search.join.TestBlockJoinValidation.testValidationForToChildBjqWithChildFilterQuery(TestBlockJoinValidation.java:124)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
>   at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
>   at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
>   at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
>   at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
>   at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
>   at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
>   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
>   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
>   at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
>   at 
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
>   at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
>   at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsR

[jira] [Resolved] (SOLR-5641) REST API to modify request handlers

2014-12-01 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-5641.
--
   Resolution: Duplicate
Fix Version/s: SOLR-6607

> REST API to modify request handlers
> ---
>
> Key: SOLR-5641
> URL: https://issues.apache.org/jira/browse/SOLR-5641
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Willy Solaligue
> Fix For: SOLR-6607
>
> Attachments: SOLR-5641.patch, SolrConfigApiDocumentation.pdf
>
>
> There should be a REST API to allow modify request handlers.






[jira] [Updated] (SOLR-6787) API to manage blobs in Solr

2014-12-01 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6787:
-
Description: 
A special collection called .system needs to be created by the user to 
store/manage blobs. The schema/solrconfig of that collection need to be 
automatically supplied by the system so that there are no errors

APIs need to be created to manage the content of that collection

{code}
#create a new jar or add a new version of a jar
curl -X POST -H 'Content-Type: application/octet-stream' -d @mycomponent.jar 
http://localhost:8983/solr/.system/blob/mycomponent

#  GET on the end point would give a list of jars and other details
curl http://localhost:8983/solr/.system/blob 
# GET on the end point with jar name would give  details of various versions of 
the available jars
curl http://localhost:8983/solr/.system/blob/mycomponent
# GET on the end point with jar name and version with a wt=filestream to get 
the actual file
curl http://localhost:8983/solr/.system/blob/mycomponent/1?wt=filestream
{code}

  was:
A special collection called .system needs to be created by the user to 
store/manage blobs. The schema/solrconfig of that collection need to be 
automatically supplied by the system so that there are no errors

APIs need to be created to manage the content of that collection

{code}
#create a new jar or add a new version of a jar
curl -X POST -H 'Content-Type: application/octet-stream' -d @mycomponent.jar 
http://localhost:8983/solr/.system/blob/mycomponent

#  GET on the end point would give a list of jars and other details
curl http://localhost:8983/solr/.system/blob 
# GET on the end point with jar name would give  details of various versions of 
the available jars
curl http://localhost:8983/solr/.system/blob/mycomponent
{code}


> API to manage blobs in  Solr
> 
>
> Key: SOLR-6787
> URL: https://issues.apache.org/jira/browse/SOLR-6787
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> A special collection called .system needs to be created by the user to 
> store/manage blobs. The schema/solrconfig of that collection need to be 
> automatically supplied by the system so that there are no errors
> APIs need to be created to manage the content of that collection
> {code}
> #create a new jar or add a new version of a jar
> curl -X POST -H 'Content-Type: application/octet-stream' -d @mycomponent.jar 
> http://localhost:8983/solr/.system/blob/mycomponent
> #  GET on the end point would give a list of jars and other details
> curl http://localhost:8983/solr/.system/blob 
> # GET on the end point with jar name would give  details of various versions 
> of the available jars
> curl http://localhost:8983/solr/.system/blob/mycomponent
> # GET on the end point with jar name and version with a wt=filestream to get 
> the actual file
> curl http://localhost:8983/solr/.system/blob/mycomponent/1?wt=filestream
> {code}






[jira] [Commented] (SOLR-4792) stop shipping a war in 5.0

2014-12-01 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230102#comment-14230102
 ] 

Mark Miller commented on SOLR-4792:
---

No, it won't be in maven - there will be no official WAR support.

> stop shipping a war in 5.0
> --
>
> Key: SOLR-4792
> URL: https://issues.apache.org/jira/browse/SOLR-4792
> Project: Solr
>  Issue Type: Task
>  Components: Build
>Reporter: Robert Muir
>Assignee: Mark Miller
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-4792.patch
>
>
> see the vote on the developer list.
> This is the first step: if we stop shipping a war then we are free to do 
> anything we want. 






[jira] [Commented] (SOLR-4792) stop shipping a war in 5.0

2014-12-01 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230082#comment-14230082
 ] 

Alexandre Rafalovitch commented on SOLR-4792:
-

So, to be clear on the possible compromise:

Do not ship the war in 5.0 itself (as part of the download), but still make it 
available as a maven package for those who want to bundle it themselves in a 
different way (Tomcat, O/S packages, etc.). Maybe mark it deprecated somewhere 
in the WAR (if possible).

That would work the way I see the world and the messaging around Solr.

> stop shipping a war in 5.0
> --
>
> Key: SOLR-4792
> URL: https://issues.apache.org/jira/browse/SOLR-4792
> Project: Solr
>  Issue Type: Task
>  Components: Build
>Reporter: Robert Muir
>Assignee: Mark Miller
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-4792.patch
>
>
> see the vote on the developer list.
> This is the first step: if we stop shipping a war then we are free to do 
> anything we want. 






[jira] [Updated] (SOLR-3881) frequent OOM in LanguageIdentifierUpdateProcessor

2014-12-01 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-3881:
---
Attachment: SOLR-3881.patch

1. LangDetectLanguageIdentifierUpdateProcessor.detectLanguage() still uses 
concatFields(), but it shouldn't – that was the whole point about moving it to 
TikaLanguageIdentifierUpdateProcessor; instead, 
LangDetectLanguageIdentifierUpdateProcessor.detectLanguage() should loop over 
inputFields and call detector.append() (similarly to what concatFields() does).
[VZ] LangDetectLanguageIdentifierUpdateProcessor.detectLanguage() changed to 
use the old flow with a per-field limit and a max total on the detector.
Each field value is appended to the detector.

2. concatFields() and getExpectedSize() should move to 
TikaLanguageIdentifierUpdateProcessor.
[VZ] Moved to TikaLanguageIdentifierUpdateProcessor. Tests using concatFields() 
moved to TikaLanguageIdentifierUpdateProcessorFactoryTest.

3. LanguageIdentifierUpdateProcessor.getExpectedSize() still takes a 
maxAppendSize, which didn't get renamed, but that param could be removed 
entirely, since maxFieldValueChars is available as a data member.
[VZ] Argument removed.

4. There are a bunch of whitespace changes in 
LanguageIdentifierUpdateProcessorFactoryTestCase.java - it makes reviewing 
patches significantly harder when they include changes like this. Your IDE 
should have settings that make it stop doing this.
[VZ] Whitespaces removed.

5. There is still some import reordering in 
TikaLanguageIdentifierUpdateProcessor.java.
[VZ] Fixed.

One last thing:
The total chars default should be its own setting; I was thinking we could make 
it double the per-value default?
[VZ] Added a default value for maxTotalChars and changed both to 10K, like 
com.cybozu.labs.langdetect.Detector.maxLength.
Thanks for adding the total chars default, but you didn't make it double the 
field value chars default, as I suggested. Not sure if that's better - if the 
user specifies multiple fields and the first one is the only one that's used to 
determine the language because it's larger than the total char default, is that 
an issue? I was thinking that it would be better to visit at least one other 
field (hence the idea of total = 2 * per-field), but that wouldn't fully 
address the issue. What do you think?
[VZ] I think in most cases there will be only one field, but since both 
parameters are optional, we should not restrict the result if only the 
per-field limit is specified as more than 10K.
Updated the total default value to 20K.
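The per-field and total limits discussed above can be sketched as follows. This is a hypothetical illustration of the truncation logic, not the shipped concatFields() code; the 10K/20K defaults come only from this discussion:

```java
public class DetectorLimits {
    // Concatenates field values for language detection, truncating each value
    // to maxFieldValueChars and stopping once maxTotalChars is reached, which
    // bounds the buffer size and avoids the OOM in the unbounded concat.
    public static String concatLimited(String[] values, int maxFieldValueChars, int maxTotalChars) {
        StringBuilder sb = new StringBuilder();
        for (String v : values) {
            int room = maxTotalChars - sb.length();
            if (room <= 0) break;  // total budget exhausted; ignore remaining fields
            sb.append(v, 0, Math.min(v.length(), Math.min(maxFieldValueChars, room)));
            sb.append(' ');        // separator between field values
        }
        return sb.toString().trim();
    }
}
```

With, say, maxFieldValueChars=10000 and maxTotalChars=20000, at least two fields can contribute before the total cap kicks in, which is the trade-off debated above.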


> frequent OOM in LanguageIdentifierUpdateProcessor
> -
>
> Key: SOLR-3881
> URL: https://issues.apache.org/jira/browse/SOLR-3881
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Affects Versions: 4.0
> Environment: CentOS 6.x, JDK 1.6, (java -server -Xms2G -Xmx2G 
> -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=)
>Reporter: Rob Tulloh
> Fix For: 4.9, Trunk
>
> Attachments: SOLR-3881.patch, SOLR-3881.patch, SOLR-3881.patch, 
> SOLR-3881.patch, SOLR-3881.patch
>
>
> We are seeing frequent failures from Solr causing it to OOM. Here is the 
> stack trace we observe when this happens:
> {noformat}
> Caused by: java.lang.OutOfMemoryError: Java heap space
> at java.util.Arrays.copyOf(Arrays.java:2882)
> at 
> java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:100)
> at 
> java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:390)
> at java.lang.StringBuffer.append(StringBuffer.java:224)
> at 
> org.apache.solr.update.processor.LanguageIdentifierUpdateProcessor.concatFields(LanguageIdentifierUpdateProcessor.java:286)
> at 
> org.apache.solr.update.processor.LanguageIdentifierUpdateProcessor.process(LanguageIdentifierUpdateProcessor.java:189)
> at 
> org.apache.solr.update.processor.LanguageIdentifierUpdateProcessor.processAdd(LanguageIdentifierUpdateProcessor.java:171)
> at 
> org.apache.solr.handler.BinaryUpdateRequestHandler$2.update(BinaryUpdateRequestHandler.java:90)
> at 
> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:140)
> at 
> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterator(JavaBinUpdateRequestCodec.java:120)
> at 
> org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:221)
> at 
> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readNamedList(JavaBinUpdateRequestCodec.java:105)
> at 
> org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:186)
> at 
> org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:112)
> at 
> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarsh
