[jira] [Commented] (SOLR-7956) There are interrupts on shutdown in places that can cause ChannelAlreadyClosed exceptions which prevents proper closing of transaction logs.

2015-08-26 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14715413#comment-14715413
 ] 

Scott Blum commented on SOLR-7956:
--

fsyncService.shutdownNow() in IndexFetcher.cleanup() seems particularly 
dangerous, as that executor's tasks dive directly into channel code
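For anyone following along: shutdownNow() delivers a Thread.interrupt() to each in-flight task, which is exactly what NIO channel code reacts badly to. A minimal self-contained sketch (plain java.util.concurrent, made-up names, not Solr code):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class ShutdownNowDemo {
    // Returns true if the running task observed the interrupt that
    // shutdownNow() delivered to it.
    static boolean taskSawInterrupt() throws InterruptedException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        CountDownLatch started = new CountDownLatch(1);
        CountDownLatch finished = new CountDownLatch(1);
        AtomicBoolean sawInterrupt = new AtomicBoolean(false);
        pool.submit(() -> {
            started.countDown();
            try {
                Thread.sleep(60_000); // stand-in for blocking fsync/channel work
            } catch (InterruptedException e) {
                sawInterrupt.set(true); // this interrupt is what breaks NIO channels
            }
            finished.countDown();
        });
        started.await();
        pool.shutdownNow(); // interrupts the in-flight task
        finished.await(10, TimeUnit.SECONDS);
        return sawInterrupt.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("task interrupted: " + taskSawInterrupt());
    }
}
```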

 There are interrupts on shutdown in places that can cause 
 ChannelAlreadyClosed exceptions which prevents proper closing of transaction 
 logs.
 

 Key: SOLR-7956
 URL: https://issues.apache.org/jira/browse/SOLR-7956
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: Trunk, 5.4

 Attachments: SOLR-7956.patch


 Found this while beast testing HttpPartitionTest.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-7977) SOLR_HOST in solr.in.sh doesn't apply to Jetty's host property

2015-08-26 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14715395#comment-14715395
 ] 

Shawn Heisey edited comment on SOLR-7977 at 8/26/15 7:49 PM:
-

bq. I can envision situations where the host that the user wants to include in 
zookeeper is entirely different

In any sanely set up network, those two will of course always be identical ... 
but users have a way of inventing strange networking setups with address 
translation where not everything matches up the way it would in a 
straightforward network.  Also, a user may want to specify the hostname to 
SolrCloud, but still listen on all interfaces, not just the one specified by 
the hostname.
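To make the distinction concrete, a hypothetical solr.in.sh fragment (the hostnames and addresses are invented; solr.jetty.host is the property named in this issue, and SOLR_OPTS is the usual pass-through for system properties):

```shell
# Advertise this name to ZooKeeper / the rest of the SolrCloud cluster:
SOLR_HOST="solr1.example.com"

# Independently choose the bind address; omit this line (or use 0.0.0.0)
# to keep listening on all interfaces while still advertising SOLR_HOST:
SOLR_OPTS="$SOLR_OPTS -Dsolr.jetty.host=192.168.1.10"
```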


was (Author: elyograg):
bq. I can envision situations where the host that the user wants to include in 
zookeeper is entirely different

In any sanely set up network, those two will of course always be identical ... 
but users have a way of inventing strange networking setups with address 
translation where not everything matches up the way it would in a 
straightforward network.

 SOLR_HOST in solr.in.sh doesn't apply to Jetty's host property
 --

 Key: SOLR-7977
 URL: https://issues.apache.org/jira/browse/SOLR-7977
 Project: Solr
  Issue Type: Bug
  Components: security, SolrCloud
Reporter: Shalin Shekhar Mangar
  Labels: impact-medium
 Fix For: Trunk, 5.4


 [~sdavids] pointed out that the SOLR_HOST config option in solr.in.sh doesn't 
 set Jetty's host property (solr.jetty.host) so it still binds to all net 
 interfaces. Perhaps it should apply to jetty as well because the user 
 explicitly wants us to bind to specific IP?






[jira] [Commented] (SOLR-7956) There are interrupts on shutdown in places that can cause ChannelAlreadyClosed exceptions which prevents proper closing of transaction logs.

2015-08-26 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14715407#comment-14715407
 ] 

Scott Blum commented on SOLR-7956:
--

I also see a bunch of calls to ExecutorUtil.shutdownNowAndAwaitTermination(), 
which seems pretty dangerous.  It's difficult to audit exactly how those 
executors are being used, but it seems worrisome.

 There are interrupts on shutdown in places that can cause 
 ChannelAlreadyClosed exceptions which prevents proper closing of transaction 
 logs.
 

 Key: SOLR-7956
 URL: https://issues.apache.org/jira/browse/SOLR-7956
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: Trunk, 5.4

 Attachments: SOLR-7956.patch


 Found this while beast testing HttpPartitionTest.






[jira] [Commented] (SOLR-7956) There are interrupts on shutdown in places that can cause ChannelAlreadyClosed exceptions which prevents proper closing of transaction logs.

2015-08-26 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14715320#comment-14715320
 ] 

Scott Blum commented on SOLR-7956:
--

Interrupts permanently breaking index writers is our #1 production issue right 
now, so I've just started looking at all the code and JIRAs related to this 
problem.  Glad I found this one; I might try a backport to our ~5.2.1 instance.

One question: have you also audited the existing Future.cancel(true) calls?  I 
see several of these in Solr.  The ones in ReplicationHandler and CommitTracker 
seem suspect to me; not sure about HttpShardHandler (maybe that one doesn't hit 
any core stuff directly).

 There are interrupts on shutdown in places that can cause 
 ChannelAlreadyClosed exceptions which prevents proper closing of transaction 
 logs.
 

 Key: SOLR-7956
 URL: https://issues.apache.org/jira/browse/SOLR-7956
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: Trunk, 5.4

 Attachments: SOLR-7956.patch


 Found this while beast testing HttpPartitionTest.






[jira] [Created] (SOLR-7980) CorruptIndex exceptions in ChaosMonkeySafeLeaderTest

2015-08-26 Thread Yonik Seeley (JIRA)
Yonik Seeley created SOLR-7980:
--

 Summary: CorruptIndex exceptions in ChaosMonkeySafeLeaderTest
 Key: SOLR-7980
 URL: https://issues.apache.org/jira/browse/SOLR-7980
 Project: Solr
  Issue Type: Bug
Reporter: Yonik Seeley


Found during investigation of SOLR-7836.
If you loop the test long enough, you'll occasionally see CorruptIndex 
exceptions, although they won't cause the test to fail.







[jira] [Created] (SOLR-7978) Really fix the example/files update-script Java version issues

2015-08-26 Thread Erik Hatcher (JIRA)
Erik Hatcher created SOLR-7978:
--

 Summary: Really fix the example/files update-script Java version 
issues
 Key: SOLR-7978
 URL: https://issues.apache.org/jira/browse/SOLR-7978
 Project: Solr
  Issue Type: Bug
  Components: examples
Affects Versions: 5.3
Reporter: Erik Hatcher
Assignee: Erik Hatcher
 Fix For: 5.4


SOLR-7652 addressed this issue by having a Java 7 version of the script for 5x 
and a Java 8 version on trunk.  5x on Java 8 is broken, though.  I wager there's 
got to be some incantation that can make the same script work on both Java 
7 and 8.






[jira] [Updated] (SOLR-7979) Typo in CoreAdminHandler log message

2015-08-26 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated SOLR-7979:

Attachment: SOLR-7979.patch

Trivial patch.

 Typo in CoreAdminHandler log message
 

 Key: SOLR-7979
 URL: https://issues.apache.org/jira/browse/SOLR-7979
 Project: Solr
  Issue Type: Bug
Reporter: Mike Drob
Priority: Trivial
 Fix For: Trunk

 Attachments: SOLR-7979.patch


  2015-08-26 17:55:18,616 ERROR 
 org.apache.solr.handler.admin.CoreAdminHandler: Cound not find core to call 
 recovery:corename






[jira] [Commented] (SOLR-7954) ArrayIndexOutOfBoundsException from distributed HLL serialization logic when using using stats.field={!cardinality=1.0} in a distributed query

2015-08-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14715188#comment-14715188
 ] 

ASF subversion and git services commented on SOLR-7954:
---

Commit 1697977 from hoss...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1697977 ]

SOLR-7954: Fixed an integer overflow bug in the HyperLogLog code used by the 
'cardinality' option of stats.field to prevent ArrayIndexOutOfBoundsException 
in a distributed search when a large precision is selected and a large number 
of values exist in each shard (merge r1697969)

 ArrayIndexOutOfBoundsException from distributed HLL serialization logic when 
 using using stats.field={!cardinality=1.0} in a distributed query
 --

 Key: SOLR-7954
 URL: https://issues.apache.org/jira/browse/SOLR-7954
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.2.1
 Environment: SolrCloud 4 node cluster.
 Ubuntu 12.04
 OS Type 64 bit
Reporter: Modassar Ather
Assignee: Hoss Man
 Attachments: SOLR-7954.patch, SOLR-7954.patch, SOLR-7954.patch


 User reports indicate that using {{stats.field=\{!cardinality=1.0\}foo}} on a 
 field that has extremely high cardinality on a single shard (example: 150K 
 unique values) can lead to ArrayIndexOutOfBoundsException: 3 on the shard 
 during serialization of the HLL values.
 using cardinality=0.9 (or lower) doesn't produce the same symptoms, 
 suggesting the problem is specific to large log2m and regwidth values.






[jira] [Created] (SOLR-7979) Typo in CoreAdminHandler log message

2015-08-26 Thread Mike Drob (JIRA)
Mike Drob created SOLR-7979:
---

 Summary: Typo in CoreAdminHandler log message
 Key: SOLR-7979
 URL: https://issues.apache.org/jira/browse/SOLR-7979
 Project: Solr
  Issue Type: Bug
Reporter: Mike Drob
Priority: Trivial
 Fix For: Trunk
 Attachments: SOLR-7979.patch

 2015-08-26 17:55:18,616 ERROR org.apache.solr.handler.admin.CoreAdminHandler: 
Cound not find core to call recovery:corename






[jira] [Commented] (SOLR-7956) There are interrupts on shutdown in places that can cause ChannelAlreadyClosed exceptions which prevents proper closing of transaction logs.

2015-08-26 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14715371#comment-14715371
 ] 

Yonik Seeley commented on SOLR-7956:


bq. The ones in ReplicationHandler and CommitTracker seem suspect to me

I think you're right... the one in CommitTracker does look suspect, esp since 
we now share the IndexWriter across UpdateHandlers on a core reload.

 There are interrupts on shutdown in places that can cause 
 ChannelAlreadyClosed exceptions which prevents proper closing of transaction 
 logs.
 

 Key: SOLR-7956
 URL: https://issues.apache.org/jira/browse/SOLR-7956
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: Trunk, 5.4

 Attachments: SOLR-7956.patch


 Found this while beast testing HttpPartitionTest.






[jira] [Commented] (SOLR-7956) There are interrupts on shutdown in places that can cause ChannelAlreadyClosed exceptions which prevents proper closing of transaction logs.

2015-08-26 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14715392#comment-14715392
 ] 

Scott Blum commented on SOLR-7956:
--

Supposing I make the following adjustments:

pending.cancel(true) -> pending.cancel(false)
scheduler.shutdownNow() -> scheduler.shutdown();

Then, do I need to wait for the scheduler to actually terminate before exiting 
close()?
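For what it's worth, a sketch of what such a close() could look like (plain java.util.concurrent; the pending/scheduler names just mirror the adjustments above, and the 10-second grace period is an arbitrary choice):

```java
import java.util.concurrent.Future;
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class OrderlyClose {
    private final ScheduledThreadPoolExecutor scheduler = new ScheduledThreadPoolExecutor(1);
    private volatile Future<?> pending;

    OrderlyClose() {
        // let cancel(false) actually remove the task from the queue,
        // so shutdown() can terminate promptly
        scheduler.setRemoveOnCancelPolicy(true);
    }

    void close() throws InterruptedException {
        Future<?> p = pending;
        if (p != null) {
            p.cancel(false);      // do not interrupt if it is already running
        }
        scheduler.shutdown();     // stop accepting work; running tasks finish
        // waiting here matters: otherwise close() can return while a task
        // is still touching the transaction log / channel
        if (!scheduler.awaitTermination(10, TimeUnit.SECONDS)) {
            scheduler.shutdownNow(); // interrupt only as a last resort
        }
    }

    boolean closedCleanly() throws InterruptedException {
        pending = scheduler.schedule(() -> { }, 1, TimeUnit.HOURS);
        close();
        return scheduler.isTerminated();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(new OrderlyClose().closedCleanly());
    }
}
```

awaitTermination() is the piece that guarantees close() does not return while a task is still mid-write.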

 There are interrupts on shutdown in places that can cause 
 ChannelAlreadyClosed exceptions which prevents proper closing of transaction 
 logs.
 

 Key: SOLR-7956
 URL: https://issues.apache.org/jira/browse/SOLR-7956
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: Trunk, 5.4

 Attachments: SOLR-7956.patch


 Found this while beast testing HttpPartitionTest.






[jira] [Commented] (SOLR-7977) SOLR_HOST in solr.in.sh doesn't apply to Jetty's host property

2015-08-26 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14715395#comment-14715395
 ] 

Shawn Heisey commented on SOLR-7977:


bq. I can envision situations where the host that the user wants to include in 
zookeeper is entirely different

In any sanely set up network, those two will of course always be identical ... 
but users have a way of inventing strange networking setups with address 
translation where not everything matches up the way it would in a 
straightforward network.

 SOLR_HOST in solr.in.sh doesn't apply to Jetty's host property
 --

 Key: SOLR-7977
 URL: https://issues.apache.org/jira/browse/SOLR-7977
 Project: Solr
  Issue Type: Bug
  Components: security, SolrCloud
Reporter: Shalin Shekhar Mangar
  Labels: impact-medium
 Fix For: Trunk, 5.4


 [~sdavids] pointed out that the SOLR_HOST config option in solr.in.sh doesn't 
 set Jetty's host property (solr.jetty.host) so it still binds to all net 
 interfaces. Perhaps it should apply to jetty as well because the user 
 explicitly wants us to bind to specific IP?






[jira] [Commented] (LUCENE-4638) If IndexWriter is interrupted on close and is using a channel (mmap/nio), it can throw a ClosedByInterruptException and prevent you from opening a new IndexWriter in t

2015-08-26 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14715302#comment-14715302
 ] 

Scott Blum commented on LUCENE-4638:


Any update on this?  In our mostly-stock 5.2.1 Solr deployment, we are hitting 
a point where we get cores into a permanently wedged state all the time, and 
there seems to be no fix except to restart the entire node (JVM).  The 
IndexWriter gets into a broken state with ClosedByInterrupt, and it never gets 
out of it, and no new IndexWriter (maybe also no new searchers) can be created. 
 This is one of our biggest operational issues right now.
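The wedge is easy to reproduce in isolation: per the java.nio specification, a thread whose interrupt status is set when it performs blocking I/O on an InterruptibleChannel gets ClosedByInterruptException, and the channel is closed for good. A standalone sketch (not Lucene code):

```java
import java.nio.ByteBuffer;
import java.nio.channels.ClosedByInterruptException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class InterruptedChannelDemo {
    // Returns true if a pending interrupt made the write fail with
    // ClosedByInterruptException AND left the channel closed.
    static boolean interruptClosesChannel() throws Exception {
        Path tmp = Files.createTempFile("demo", ".bin");
        try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.WRITE)) {
            Thread.currentThread().interrupt(); // a shutdown-time interrupt arriving
            boolean caught = false;
            try {
                ch.write(ByteBuffer.wrap(new byte[] { 1 })); // blocking I/O on an interruptible channel
            } catch (ClosedByInterruptException e) {
                caught = true; // the channel is now permanently closed
            } finally {
                Thread.interrupted(); // clear the flag so cleanup can proceed
            }
            return caught && !ch.isOpen();
        } finally {
            Files.deleteIfExists(tmp);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(interruptClosesChannel());
    }
}
```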

 If IndexWriter is interrupted on close and is using a channel (mmap/nio), it 
 can throw a ClosedByInterruptException and prevent you from opening a new 
 IndexWriter in the same process if you are using Native locks.
 --

 Key: LUCENE-4638
 URL: https://issues.apache.org/jira/browse/LUCENE-4638
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Mark Miller
 Fix For: 4.9, Trunk

 Attachments: LUCENE-4638.patch


 The ClosedByInterruptException will prevent the index from being unlocked in 
 close. If you try and close again, the call will hang. If you are using 
 native locks and try to open a new IndexWriter, it will fail to get the lock. 
 If you try IW#forceUnlock, it won't work because the not fully closed IW will 
 still have the lock.
 ideas:
 * On ClosedByInterruptException, IW should continue trying to close what it 
 can and unlock the index? Generally I have seen the exception trigger in 
 commitInternal.
 * We should add a non static forceUnlock to IW that lets you remove the lock 
 and start a new IW?
 * We should make the lock protected so IW subclasses could unlock the index 
 in advanced use cases?
 * others?






[jira] [Commented] (LUCENE-6759) Integrate lat/long BKD and spatial 3d, part 2

2015-08-26 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14715397#comment-14715397
 ] 

Karl Wright commented on LUCENE-6759:
-

ok, I'm finally done travelling for the moment.  [~mikemccand], where do things 
stand?  I notice that my latest patch didn't get committed, FWIW.  Also, do you 
want me to implement your idea of requiring both isWithin checks to pass before 
the assert triggers?


 Integrate lat/long BKD and spatial 3d, part 2
 -

 Key: LUCENE-6759
 URL: https://issues.apache.org/jira/browse/LUCENE-6759
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
 Attachments: LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch


 This is just a continuation of LUCENE-6699, which became too big.






[jira] [Commented] (SOLR-7966) Solr Admin pages should set X-Frame-Options to DENY

2015-08-26 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14712571#comment-14712571
 ] 

Uwe Schindler commented on SOLR-7966:
-

New patch looks good (have not tried). I did not know this test :-)

 Solr Admin pages should set X-Frame-Options to DENY
 ---

 Key: SOLR-7966
 URL: https://issues.apache.org/jira/browse/SOLR-7966
 Project: Solr
  Issue Type: Bug
Reporter: Yonik Seeley
Priority: Trivial
 Attachments: SOLR-7966.patch, SOLR-7966.patch


 Security scan software reported that Solr's admin interface is vulnerable to 
 clickjacking, which is fixable with the X-Frame-Options HTTP header.
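For reference, the behavior being asked for can be sketched with the JDK's built-in HTTP server (illustrative only; this is not Solr's actual Jetty setup):

```java
import com.sun.net.httpserver.HttpServer;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class FrameOptionsDemo {
    // Starts a throwaway server that sets the header, fetches "/", and
    // returns the X-Frame-Options value the client saw.
    static String fetchFrameOptions() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", exchange -> {
            // the header that stops the page from being embedded in a frame
            exchange.getResponseHeaders().set("X-Frame-Options", "DENY");
            byte[] body = "ok".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        try {
            URL url = new URL("http://127.0.0.1:" + server.getAddress().getPort() + "/");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            try {
                return conn.getHeaderField("X-Frame-Options");
            } finally {
                conn.disconnect();
            }
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(fetchFrameOptions()); // DENY
    }
}
```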






Re: Should we EOL support for 32 bit systems?

2015-08-26 Thread Dawid Weiss
Plus, it is fun.  A bit like finding these:

https://goo.gl/CQZPYh

Dawid

On Tue, Aug 25, 2015 at 11:37 PM, Jan Høydahl jan@cominvent.com wrote:
 Another reason to keep 32bit support is that people who already have 
 installed Java JRE from their browser tend to have the 32bit version (even if 
 on 64bit OS). So they will be able to test-run Solr without re-installing 
 Java.

 --
 Jan Høydahl, search solution architect
 Cominvent AS - www.cominvent.com

 25. aug. 2015 kl. 18.47 skrev Erick Erickson erickerick...@gmail.com:

 Fair enough, just wanted to be sure we weren't making extra work for 
 ourselves.
 Well, actually extra work for you since you seem to be the one who interacts
 with the compiler folks the most ;).

 On Tue, Aug 25, 2015 at 9:43 AM, Uwe Schindler u...@thetaphi.de wrote:
 Hi,

 From a Java point of view, there is no real reason not to support 32 
 bit versions. The bugs with JDK 9 are just bugs that could also have 
 happened with 64 bit versions. It is just easier to trigger this bug with 
 32 bits, but I am almost sure the underlying bug also affects 64 bits.

 So why should we no longer support all platforms Java supports? Bitness 
 does not matter for our code. Should we then also no longer support Sparc, 
 PowerPC, or ARM platforms?
 -1 to add arbitrary restrictions on our runtime environment. If we want 
 this, we should disallow all platforms we don't test on and of course also 
 all processor variants we don't test on! :-)

 Uwe

 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de


 -Original Message-
 From: Erick Erickson [mailto:erickerick...@gmail.com]
 Sent: Tuesday, August 25, 2015 6:23 PM
 To: dev@lucene.apache.org
 Subject: Should we EOL support for 32 bit systems?

 I have no real skin in this game, but I thought it worth asking after Uwe's
 recent e-mail about disabling 32bits with -server tests.

 I guess it boils down to "who is using 32-bit versions?". Are we spending
 time/energy supporting a configuration that is not useful to enough people
 to merit the effort?

 I'm perfectly content if the response is "That's a really stupid question 
 to ask, of course we must continue to support 32-bit OSs."
 Although some evidence would be nice ;)

 It's just that nobody I work with is running 32 bit OS's. Whether that's 
 just my
 limited exposure to people running small systems is certainly a valid
 question.

 If we _do_ decide to drop support for 32 bit systems, what's the right
 version? 6.0?

 Random thoughts on a slow Tuesday morning

 Erick













[jira] [Commented] (SOLR-7977) SOLR_HOST in solr.in.sh doesn't apply to Jetty's host property

2015-08-26 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14715243#comment-14715243
 ] 

Shawn Heisey commented on SOLR-7977:


I don't think that property was initially intended to be applicable to Jetty.  
It's how the hostname gets overridden for SolrCloud.

I can envision situations where the host that the user wants to include in 
zookeeper is entirely different from the host they want to use for network 
binding.


 SOLR_HOST in solr.in.sh doesn't apply to Jetty's host property
 --

 Key: SOLR-7977
 URL: https://issues.apache.org/jira/browse/SOLR-7977
 Project: Solr
  Issue Type: Bug
  Components: security, SolrCloud
Reporter: Shalin Shekhar Mangar
  Labels: impact-medium
 Fix For: Trunk, 5.4


 [~sdavids] pointed out that the SOLR_HOST config option in solr.in.sh doesn't 
 set Jetty's host property (solr.jetty.host) so it still binds to all net 
 interfaces. Perhaps it should apply to jetty as well because the user 
 explicitly wants us to bind to specific IP?






[jira] [Updated] (SOLR-7888) Make Lucene's AnalyzingInfixSuggester.lookup() method that takes a BooleanQuery filter parameter available in Solr

2015-08-26 Thread Arcadius Ahouansou (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arcadius Ahouansou updated SOLR-7888:
-
Attachment: SOLR-7888.patch

Updated from trunk and added {{testContextFilterIsTrimmed()}}

 Make Lucene's AnalyzingInfixSuggester.lookup() method that takes a 
 BooleanQuery filter parameter available in Solr
 --

 Key: SOLR-7888
 URL: https://issues.apache.org/jira/browse/SOLR-7888
 Project: Solr
  Issue Type: New Feature
  Components: Suggester
Affects Versions: 5.2.1
Reporter: Arcadius Ahouansou
Assignee: Jan Høydahl
 Fix For: 5.4

 Attachments: SOLR-7888.patch, SOLR-7888.patch, SOLR-7888.patch, 
 SOLR-7888.patch, SOLR-7888.patch, SOLR-7888.patch


  LUCENE-6464 has introduced a very flexible lookup method that takes as 
 parameter a BooleanQuery that is used for filtering results.
 This ticket is to expose that method to Solr.
 This would allow users to do:
 {code}
 /suggest?suggest=true&suggest.build=true&suggest.q=term&suggest.contextFilterQuery=contexts:tennis
 /suggest?suggest=true&suggest.build=true&suggest.q=term&suggest.contextFilterQuery=contexts:golf
  AND contexts:football
 {code}
 etc.
 Given that context filtering is currently only implemented by the 
 {code}AnalyzingInfixSuggester{code} and by the 
 {code}BlendedInfixSuggester{code}, this initial implementation will support 
 only these 2 lookup implementations.






[jira] [Updated] (SOLR-7971) Reduce memory allocated by JavaBinCodec to encode large strings

2015-08-26 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-7971:

Attachment: SOLR-7971-directbuffer.patch

bq. I only glanced at it, but it's probably a little too simplistic. You can't 
cut UTF16 in random places, encode it as UTF8 and get the same bytes because of 
2 char code points.

Duh, of course, I used to know that once and yet...

bq. We should really consider returning to how v1 of the protocol handled 
things since it had to do no buffering at all since it simply used 
String.length(). We just need to consider how to handle back compat of course.

Until we go there, how about this patch which has a modified UTF16toUTF8 method 
that writes to ByteBuffer directly using an intermediate scratch array?
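As an aside, the JDK's CharsetEncoder already handles the surrogate-boundary problem when encoding into a fixed-size ByteBuffer, since it carries state across calls. A self-contained sketch of that chunked approach (this is not the attached patch; names are made up):

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharsetEncoder;
import java.nio.charset.CoderResult;
import java.nio.charset.StandardCharsets;

public class ChunkedUtf8 {
    // Encode s to UTF-8 through a small scratch ByteBuffer. The encoder keeps
    // its own state between calls, so a surrogate pair that straddles a chunk
    // boundary is still encoded correctly. scratchSize must be >= 4 so the
    // longest UTF-8 sequence always fits in an empty scratch buffer.
    static byte[] encodeChunked(String s, int scratchSize) {
        CharsetEncoder enc = StandardCharsets.UTF_8.newEncoder();
        CharBuffer in = CharBuffer.wrap(s);
        ByteBuffer scratch = ByteBuffer.allocate(scratchSize);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        CoderResult cr;
        do {
            cr = enc.encode(in, scratch, true); // true: this is all the input
            scratch.flip();
            out.write(scratch.array(), 0, scratch.limit()); // drain the chunk
            scratch.clear();
        } while (cr.isOverflow());
        do { // flush any encoder-internal state (a no-op for UTF-8)
            cr = enc.flush(scratch);
            scratch.flip();
            out.write(scratch.array(), 0, scratch.limit());
            scratch.clear();
        } while (cr.isOverflow());
        return out.toByteArray();
    }

    public static void main(String[] args) {
        String s = "abc\uDBFF\uDFFFdef"; // contains a surrogate pair (U+10FFFF)
        byte[] expected = s.getBytes(StandardCharsets.UTF_8);
        System.out.println(java.util.Arrays.equals(expected, encodeChunked(s, 4))); // true
    }
}
```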

 Reduce memory allocated by JavaBinCodec to encode large strings
 ---

 Key: SOLR-7971
 URL: https://issues.apache.org/jira/browse/SOLR-7971
 Project: Solr
  Issue Type: Sub-task
  Components: Response Writers, SolrCloud
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: Trunk, 5.4

 Attachments: SOLR-7971-directbuffer.patch, 
 SOLR-7971-directbuffer.patch, SOLR-7971.patch


 As discussed in SOLR-7927, we can reduce the buffer memory allocated by 
 JavaBinCodec while writing large strings.
 https://issues.apache.org/jira/browse/SOLR-7927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14700420#comment-14700420
 {quote}
 The maximum Unicode code point (as of Unicode 8 anyway) is U+10FFFF 
 ([http://www.unicode.org/glossary/#code_point]).  This is encoded in UTF-16 
 as surrogate pair {{\uDBFF\uDFFF}}, which takes up two Java chars, and is 
 represented in UTF-8 as the 4-byte sequence {{F4 8F BF BF}}.  This is likely 
 where the mistaken 4-bytes-per-Java-char formulation came from: the maximum 
 number of UTF-8 bytes required to represent a Unicode *code point* is 4.
 The maximum Java char is {{\uFFFF}}, which is represented in UTF-8 as the 
 3-byte sequence {{EF BF BF}}.
 So I think it's safe to switch to using 3 bytes per Java char (the unit of 
 measurement returned by {{String.length()}}), like 
 {{CompressingStoredFieldsWriter.writeField()}} does.
 {quote}
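The 3-bytes-per-char bound in the quote is easy to verify directly (standalone sketch):

```java
import java.nio.charset.StandardCharsets;

public class Utf8Sizing {
    // Worst case for a single Java char (one UTF-16 code unit) is 3 UTF-8
    // bytes, so length() * 3 is a safe buffer bound.
    static int maxUtf8Len(String s) {
        return s.length() * 3;
    }

    public static void main(String[] args) {
        String bmpMax = "\uFFFF";       // U+FFFF -> EF BF BF (3 bytes, 1 char)
        String supp = "\uDBFF\uDFFF";   // U+10FFFF -> F4 8F BF BF (4 bytes, 2 chars)
        System.out.println(bmpMax.getBytes(StandardCharsets.UTF_8).length); // 3
        System.out.println(supp.getBytes(StandardCharsets.UTF_8).length);   // 4
        // 4 bytes over 2 chars is 2 bytes per char, still under the 3-byte bound
    }
}
```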






[jira] [Commented] (LUCENE-6759) Integrate lat/long BKD and spatial 3d, part 2

2015-08-26 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14714399#comment-14714399
 ] 

Karl Wright commented on LUCENE-6759:
-

[~mikemccand] Yes, that should do it.  If you have both, anyways. ;-)

 Integrate lat/long BKD and spatial 3d, part 2
 -

 Key: LUCENE-6759
 URL: https://issues.apache.org/jira/browse/LUCENE-6759
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
 Attachments: LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch


 This is just a continuation of LUCENE-6699, which became too big.






[jira] [Commented] (SOLR-7954) ArrayIndexOutOfBoundsException from distributed HLL serialization logic when using using stats.field={!cardinality=1.0} in a distributed query

2015-08-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14714473#comment-14714473
 ] 

ASF subversion and git services commented on SOLR-7954:
---

Commit 1697969 from hoss...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1697969 ]

SOLR-7954: Fixed an integer overflow bug in the HyperLogLog code used by the 
'cardinality' option of stats.field to prevent ArrayIndexOutOfBoundsException 
in a distributed search when a large precision is selected and a large number 
of values exist in each shard

 ArrayIndexOutOfBoundsException from distributed HLL serialization logic when 
 using using stats.field={!cardinality=1.0} in a distributed query
 --

 Key: SOLR-7954
 URL: https://issues.apache.org/jira/browse/SOLR-7954
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.2.1
 Environment: SolrCloud 4 node cluster.
 Ubuntu 12.04
 OS Type 64 bit
Reporter: Modassar Ather
Assignee: Hoss Man
 Attachments: SOLR-7954.patch, SOLR-7954.patch, SOLR-7954.patch


 User reports indicate that using {{stats.field=\{!cardinality=1.0\}foo}} on a 
 field that has extremely high cardinality on a single shard (example: 150K 
 unique values) can lead to ArrayIndexOutOfBoundsException: 3 on the shard 
 during serialization of the HLL values.
 using cardinality=0.9 (or lower) doesn't produce the same symptoms, 
 suggesting the problem is specific to large log2m and regwidth values.






[jira] [Commented] (LUCENE-6759) Integrate lat/long BKD and spatial 3d, part 2

2015-08-26 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14715191#comment-14715191
 ] 

David Smiley commented on LUCENE-6759:
--

bq. Maybe, we could just fix the test so that if isWithin differs between the 
quantized and unquantized x,y,z, we skip checking that hit?

Yes; this is also the approach taken in RandomSpatialOpFuzzyPrefixTreeTest.  The 
"fuzzy" in the name is there because of the issue being discussed.

 Integrate lat/long BKD and spatial 3d, part 2
 -

 Key: LUCENE-6759
 URL: https://issues.apache.org/jira/browse/LUCENE-6759
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
 Attachments: LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch


 This is just a continuation of LUCENE-6699, which became too big.






[jira] [Commented] (LUCENE-6759) Integrate lat/long BKD and spatial 3d, part 2

2015-08-26 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715623#comment-14715623
 ] 

Michael McCandless commented on LUCENE-6759:


OK I committed, beasted, hit this failure:

{noformat}
ant test  -Dtestcase=TestGeo3DPointField -Dtests.method=testRandomMedium* 
-Dtests.seed=E1D51F3E8B12E79D -Dtests.multiplier=10 -Dtests.iters=5 
-Dtests.slow=true 
-Dtests.linedocsfile=/lucenedata/hudson.enwiki.random.lines.txt.fixed 
-Dtests.locale=pl -Dtests.timezone=America/Inuvik -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[junit4:pickseed] Seed property 'tests.seed' already defined: E1D51F3E8B12E79D
   [junit4] JUnit4 says 今日は! Master seed: E1D51F3E8B12E79D
   [junit4] Executing 1 suite with 1 JVM.
   [junit4] 
   [junit4] Started J0 PID(62227@localhost).
   [junit4] Suite: org.apache.lucene.bkdtree3d.TestGeo3DPointField
   [junit4] OK  70.2s | TestGeo3DPointField.testRandomMedium {#0 
seed=[E1D51F3E8B12E79D:5C0B2896CA7784FB]}
   [junit4]   2 sie 26, 2015 4:11:15 PM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
   [junit4]   2 WARNING: Uncaught exception in thread: 
Thread[T0,5,TGRP-TestGeo3DPointField]
   [junit4]   2 java.lang.AssertionError: T0: iter=425 id=1237 docID=1237 
lat=0.005231514023315527 lon=0.0034278119211296914 expected false but got: true 
deleted?=false
   [junit4]   2   point1=[lat=0.005231514023315527, 
lon=0.0034278119211296914], iswithin=false
   [junit4]   2   point2=[X=1.0010991445151618, Y=0.003431592678386528, 
Z=0.00523734247369568], iswithin=false
   [junit4]   2   query=PointInGeo3DShapeQuery: field=point:PlanetModel: 
PlanetModel.WGS84 Shape: GeoCircle: {planetmodel=PlanetModel.WGS84, 
center=[lat=0.006204988457123483, lon=0.003379977917811208], 
radius=7.780831828380698E-4(0.04458088248672737)}
   [junit4]   2at 
__randomizedtesting.SeedInfo.seed([E1D51F3E8B12E79D]:0)
   [junit4]   2at org.junit.Assert.fail(Assert.java:93)
   [junit4]   2at 
org.apache.lucene.bkdtree3d.TestGeo3DPointField$4._run(TestGeo3DPointField.java:632)
   [junit4]   2at 
org.apache.lucene.bkdtree3d.TestGeo3DPointField$4.run(TestGeo3DPointField.java:521)
{noformat}







[jira] [Commented] (LUCENE-6759) Integrate lat/long BKD and spatial 3d, part 2

2015-08-26 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715680#comment-14715680
 ] 

Karl Wright commented on LUCENE-6759:
-

Huh. Even odder, tried the repro line and got this:

{code}
ant test  -Dtestcase=TestGeo3DPointField -Dtests.method=testRandomMedium* 
-Dtests.seed=E1D51F3E8B12E79D -Dtests.multiplier=10 -Dtests.iters=5 
-Dtests.slow=true 
-Dtests.linedocsfile=/lucenedata/hudson.enwiki.random.lines.txt.fixed 
-Dtests.locale=pl -Dtests.timezone=America/Inuvik -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

...

BUILD SUCCESSFUL
Total time: 13 minutes 14 seconds
{code}

So now I'm very puzzled, [~mikemccand].







[jira] [Commented] (LUCENE-6759) Integrate lat/long BKD and spatial 3d, part 2

2015-08-26 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715521#comment-14715521
 ] 

Michael McCandless commented on LUCENE-6759:


Ugh sorry I thought I had committed the latest patch ... I'll do that shortly.

And I'll also fix the test to skip checking a hit when the quantization changed 
the expected result...







[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715538#comment-14715538
 ] 

ASF subversion and git services commented on LUCENE-6699:
-

Commit 1698004 from [~mikemccand] in branch 'dev/branches/lucene6699'
[ https://svn.apache.org/r1698004 ]

LUCENE-6699: increase fudge factor; don't check hits if quantization changed 
the expected result

 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?






[jira] [Commented] (SOLR-7971) Reduce memory allocated by JavaBinCodec to encode large strings

2015-08-26 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715540#comment-14715540
 ] 

Yonik Seeley commented on SOLR-7971:


Making 3 copies really feels heavyweight (the current code makes a single copy 
in the case of large strings).
string -> scratch, scratch -> off-heap-buffer, off-heap-buffer -> scratch, 
scratch -> write
And this doesn't even use less system memory... just less heap memory.

I wonder how it would do against Mikhail's idea of just calculating the UTF8 
length first.
If we keep with DirectByteBuffer, we should at least eliminate one of the 
copies by encoding directly to it?
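For reference, Mikhail's length-first idea can be sketched in plain Java: one pass to compute the exact UTF-8 byte count, so a single right-sized buffer can be allocated and encoded into directly. The helper below is an illustration, not the actual patch:

```java
public class Utf8Length {
    // Count the UTF-8 bytes a String would encode to, without allocating.
    static long utf8Length(CharSequence s) {
        long bytes = 0;
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c < 0x80) {
                bytes += 1;                       // ASCII
            } else if (c < 0x800) {
                bytes += 2;                       // 2-byte sequence
            } else if (Character.isHighSurrogate(c) && i + 1 < s.length()
                       && Character.isLowSurrogate(s.charAt(i + 1))) {
                bytes += 4;                       // supplementary code point
                i++;                              // consume the low surrogate too
            } else {
                bytes += 3;                       // BMP char (unpaired surrogates
            }                                     // counted as 3, an upper bound)
        }
        return bytes;
    }

    public static void main(String[] args) {
        System.out.println(utf8Length("abc"));          // 3
        System.out.println(utf8Length("\u00e9"));       // 2
        System.out.println(utf8Length("\uD83D\uDE00")); // 4
    }
}
```

Note that Java's own UTF-8 encoder replaces an unpaired surrogate with a 1-byte '?'; counting it as 3 here only keeps the result a safe buffer-size bound.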


 Reduce memory allocated by JavaBinCodec to encode large strings
 ---

 Key: SOLR-7971
 URL: https://issues.apache.org/jira/browse/SOLR-7971
 Project: Solr
  Issue Type: Sub-task
  Components: Response Writers, SolrCloud
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: Trunk, 5.4

 Attachments: SOLR-7971-directbuffer.patch, 
 SOLR-7971-directbuffer.patch, SOLR-7971.patch


 As discussed in SOLR-7927, we can reduce the buffer memory allocated by 
 JavaBinCodec while writing large strings.
 https://issues.apache.org/jira/browse/SOLR-7927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14700420#comment-14700420
 {quote}
 The maximum Unicode code point (as of Unicode 8 anyway) is U+10FFFF 
 ([http://www.unicode.org/glossary/#code_point]).  This is encoded in UTF-16 
 as the surrogate pair {{\uDBFF\uDFFF}}, which takes up two Java chars, and is 
 represented in UTF-8 as the 4-byte sequence {{F4 8F BF BF}}.  This is likely 
 where the mistaken 4-bytes-per-Java-char formulation came from: the maximum 
 number of UTF-8 bytes required to represent a Unicode *code point* is 4.
 The maximum Java char is {{\uFFFF}}, which is represented in UTF-8 as the 
 3-byte sequence {{EF BF BF}}.
 So I think it's safe to switch to using 3 bytes per Java char (the unit of 
 measurement returned by {{String.length()}}), like 
 {{CompressingStoredFieldsWriter.writeField()}} does.
 {quote}
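The byte counts quoted above are easy to verify with stdlib Java (class name is illustrative):

```java
import java.nio.charset.StandardCharsets;

public class Utf8Bound {
    public static void main(String[] args) {
        // Largest value a single Java char can hold: encodes to 3 UTF-8 bytes.
        System.out.println("\uFFFF".getBytes(StandardCharsets.UTF_8).length);   // 3

        // Largest code point U+10FFFF: a surrogate pair (two Java chars)
        // encoding to 4 UTF-8 bytes, i.e. only 2 bytes per Java char.
        String max = new String(Character.toChars(0x10FFFF));
        System.out.println(max.length());                                       // 2
        System.out.println(max.getBytes(StandardCharsets.UTF_8).length);        // 4
    }
}
```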






[jira] [Commented] (SOLR-7950) Invalid auth scheme configuration of Http client when using Kerberos (SPNEGO)

2015-08-26 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715574#comment-14715574
 ] 

Gregory Chanan commented on SOLR-7950:
--

+1.

Patch looks good and thanks for the explanation.  It sounds like from what 
Ishan is saying there may be some more work integrating the HADOOP-12082 change 
with Solr's own Basic Auth Plugin, but that can be done in another jira.

 Invalid auth scheme configuration of Http client when using Kerberos (SPNEGO)
 -

 Key: SOLR-7950
 URL: https://issues.apache.org/jira/browse/SOLR-7950
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.3, Trunk
Reporter: Hrishikesh Gadre
Assignee: Gregory Chanan
 Attachments: solr-7950-v2.patch, solr-7950.patch


 When using kerberos authentication mechanism (SPNEGO auth scheme), the Apache 
 Http client is incorrectly configured with *all* auth schemes (e.g. Basic, 
 Digest, NTLM, Kerberos, Negotiate etc.) instead of just 'Negotiate'. 
 This issue was identified after configuring Solr with both Basic + Negotiate 
 authentication schemes simultaneously. The problem in this case is that Http 
 client is configured with Kerberos credentials and the default (and 
 incorrect) auth scheme configuration prefers Basic authentication over 
 Kerberos. Since the basic authentication credentials are missing, 
 authentication fails and, as a result, so does the Http request. (I ran into this 
 problem while creating a collection, where there is internal communication 
 between Solr servers).
 The root cause of this issue is that the AbstractHttpClient::getAuthSchemes() 
 API call prepares an AuthSchemeRegistry instance with all possible 
 authentication schemes. Hence when we register the SPNEGO auth scheme in the 
 Solr codebase, it overrides the previous configuration for SPNEGO - but doesn't 
 remove the other auth schemes from the client configuration. Please take a 
 look at the relevant code snippet.
 https://github.com/apache/lucene-solr/blob/trunk/solr/solrj/src/java/org/apache/solr/client/solrj/impl/Krb5HttpClientConfigurer.java#L80
 A trivial fix would be to prepare a new AuthSchemeRegistry instance 
 configured with just the SPNEGO mechanism and set it in the HttpClient.
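A sketch of that trivial fix against the (since-deprecated) AbstractHttpClient API might look like the following; the exact factory classes chosen here are assumptions, not necessarily what the patch uses:

```java
import org.apache.http.auth.AuthSchemeRegistry;
import org.apache.http.client.params.AuthPolicy;
import org.apache.http.impl.auth.SPNegoSchemeFactory;
import org.apache.http.impl.client.AbstractHttpClient;

public class SpnegoOnlyConfigurer {
    // Replace the client's default auth scheme registry (which knows Basic,
    // Digest, NTLM, ...) with one that registers only SPNEGO/Negotiate, so a
    // scheme with missing credentials can never be preferred over Kerberos.
    static void configureSpnegoOnly(AbstractHttpClient httpClient) {
        AuthSchemeRegistry registry = new AuthSchemeRegistry();
        registry.register(AuthPolicy.SPNEGO, new SPNegoSchemeFactory());
        httpClient.setAuthSchemes(registry);
    }
}
```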






[jira] [Updated] (SOLR-7942) fix error logging when index is locked on startup, add deprecation warning if unlockOnStartup is configured

2015-08-26 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-7942:
---
Attachment: SOLR-7942.patch

Here's what i had in mind, with the {{true // 'fail' in trunk}} changed to 
{{false // warn in 5x}} when backporting.

Anyone have concerns/feedack about the various warning/err msgs?

 fix error logging when index is locked on startup, add deprecation warning if 
 unlockOnStartup is configured
 ---

 Key: SOLR-7942
 URL: https://issues.apache.org/jira/browse/SOLR-7942
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Hoss Man
Priority: Minor
 Attachments: SOLR-7942.patch


 LUCENE-6508 removed support for unlockOnStartup, but the way the changes were 
 made is inconsistent with how other config options have been deprecated in 
 the past, and causes a confusing error message if an index is locked on 
 startup...
 * in 5x, the SolrConfig constructor should log a warning if the 
 unlockOnStartup option is specified (regardless of whether it's true or 
 false) so users know they need to clean up their configs and change their 
 expectations (even if they never get - or have not yet gotten - a lock 
 problem)
 * in SolrCore, the LockObtainFailedException that is thrown if the index is 
 locked should _not_ say {{Solr now longer supports forceful unlocking via 
 'unlockOnStartup'}}
 ** besides the no/now typo, this wording misleads users into thinking that 
 the LockObtainFailedException error is in some way related to that config 
 option -- creating an implication that using that option led them to this 
 error.  we shouldn't mention that here.






[jira] [Created] (SOLR-7981) term based ValueSourceParsers should support an option to run an analyzer for the specified field on the input

2015-08-26 Thread Hoss Man (JIRA)
Hoss Man created SOLR-7981:
--

 Summary: term based ValueSourceParsers should support an option to 
run an analyzer for the specified field on the input
 Key: SOLR-7981
 URL: https://issues.apache.org/jira/browse/SOLR-7981
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man


The following functions all take exactly 2 arguments: a field name, and a term 
value...

* idf
* termfreq
* tf
* totaltermfreq

...we should consider adding an optional third argument to indicate if an 
analyzer for the specified field should be used on the input to find the real 
Term to consider.

For example, the following might all result in equivalent numeric values for 
all docs assuming simple plural stemming and lowercasing...

{noformat}
termfreq(foo_t,'Bicycles',query) // use the query analyzer for field foo_t on 
input Bicycles
termfreq(foo_t,'Bicycles',index) // use the index analyzer for field foo_t on 
input Bicycles
termfreq(foo_t,'bicycle',none) // no analyzer used to construct Term
termfreq(foo_t,'bicycle') // legacy 2 arg syntax, same as 'none'
{noformat}

(Special error checking needed if the analyzer creates more than one term for the 
given input string)
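Sketched against Lucene's standard TokenStream consumption contract, the analysis step plus that error check might look like this (the helper name and exception choice are assumptions, not the eventual patch):

```java
import java.io.IOException;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class AnalyzedTermResolver {
  /** Run the field's analyzer on the raw input and require exactly one token. */
  static String analyzeSingleTerm(Analyzer analyzer, String field, String input)
      throws IOException {
    try (TokenStream ts = analyzer.tokenStream(field, input)) {
      CharTermAttribute termAtt = ts.addAttribute(CharTermAttribute.class);
      ts.reset();
      if (!ts.incrementToken()) {
        throw new IllegalArgumentException(
            "analysis of '" + input + "' produced no terms");
      }
      String term = termAtt.toString();
      if (ts.incrementToken()) {
        // the special error case noted above: more than one term
        throw new IllegalArgumentException(
            "analysis of '" + input + "' produced more than one term");
      }
      ts.end();
      return term;
    }
  }
}
```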







[jira] [Commented] (SOLR-7950) Invalid auth scheme configuration of Http client when using Kerberos (SPNEGO)

2015-08-26 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715499#comment-14715499
 ] 

Ishan Chattopadhyaya commented on SOLR-7950:


bq. So we may have to introduce Basic authentication scheme (in addition to 
SPNEGO) in Solr by some other way.
We already have basic auth, 
https://cwiki.apache.org/confluence/display/solr/Basic+Authentication+Plugin
However, we still don't have the two working together yet.

bq. I don't think we use the Hadoop security framework in Solr.
The hadoop-auth Kerberos authentication filters are used. Support for 
delegation tokens still isn't there.







[jira] [Commented] (SOLR-7980) CorruptIndex exceptions in ChaosMonkeySafeLeaderTest

2015-08-26 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715446#comment-14715446
 ] 

Yonik Seeley commented on SOLR-7980:


Examples:

{code}
  2 25342 WARN  (qtp1987916571-72) [n:127.0.0.1:32803_ c:collection1 s:shard2 
r:core_node1 x:collection1] o.a.s.h.ReplicationHandler Could not read checksum 
from index file: _0_SimpleText_0.pst
  2 org.apache.lucene.index.CorruptIndexException: codec footer mismatch (file 
truncated?): actual footer=808464432 vs expected footer=-1071082520 
(resource=MMapIndexInput(path=/opt/code/lusolr_trunk2/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeySafeLeaderTest_38F55D766B70D5BD-001/shard-1-001/cores/collection1/data/index/_0_SimpleText_0.pst))
  2at org.apache.lucene.codecs.CodecUtil.validateFooter(CodecUtil.java:416)
  2at 
org.apache.lucene.codecs.CodecUtil.retrieveChecksum(CodecUtil.java:401)
  2at 
org.apache.solr.handler.ReplicationHandler.getFileList(ReplicationHandler.java:558)
  2at 
org.apache.solr.handler.ReplicationHandler.handleRequestBody(ReplicationHandler.java:247)
  2at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:151)
  2at org.apache.solr.core.SolrCore.execute(SolrCore.java:2079)
  2at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:667)
  2at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
  2at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:210)
  2at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
  2at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
  2at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:106)
  2at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
  2at 
org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:83)
  2at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:364)
  2at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
  2at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
  2at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
  2at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
  2at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
  2at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
  2at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
  2at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
  2at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
  2at org.eclipse.jetty.server.Server.handle(Server.java:499)
  2at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
  2at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
  2at 
org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
  2at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
  2at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
  2at java.lang.Thread.run(Thread.java:745)
{code}

{code}
2 19962 WARN  (RecoveryThread-collection1) [n:127.0.0.1:49515_ c:collection1 
s:shard1 r:core_node2 x:collection1] o.a.s.h.IndexFetcher Could not retrieve 
checksum from file.
  2 java.lang.IllegalArgumentException: Seeking to negative position: 
MMapIndexInput(path=/opt/code/lusolr_trunk2/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeySafeLeaderTest_1142D018587DF1D8-001/shard-2-001/cores/collection1/data/index/_0.fdx)
  2at 
org.apache.lucene.store.ByteBufferIndexInput$SingleBufferImpl.seek(ByteBufferIndexInput.java:408)
  2at 
org.apache.lucene.codecs.CodecUtil.retrieveChecksum(CodecUtil.java:400)
  2at 
org.apache.solr.handler.IndexFetcher.compareFile(IndexFetcher.java:876)
  2at 
org.apache.solr.handler.IndexFetcher.downloadIndexFiles(IndexFetcher.java:839)
  2at 
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:437)
  2at 
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:265)
  2at 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:382)
  2at 
org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:162)
  2at 
org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:437)
  2at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:227)
  2 Caused by: java.lang.IllegalArgumentException
  2at java.nio.Buffer.position(Buffer.java:244)
  2at 

[jira] [Commented] (LUCENE-6759) Integrate lat/long BKD and spatial 3d, part 2

2015-08-26 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715658#comment-14715658
 ] 

Karl Wright commented on LUCENE-6759:
-

So I tried to reproduce this in geo3d-land exclusively, and coded this:

{code}
c = new GeoCircle(PlanetModel.WGS84, 0.006204988457123483, 0.003379977917811208, 7.780831828380698E-4);
p1 = new GeoPoint(PlanetModel.WGS84, 0.005231514023315527, 0.0034278119211296914);
assertTrue(!c.isWithin(p1));
xyzb = new XYZBounds();
c.getBounds(xyzb);
area = GeoAreaFactory.makeGeoArea(PlanetModel.WGS84,
    xyzb.getMinimumX(), xyzb.getMaximumX(), xyzb.getMinimumY(),
    xyzb.getMaximumY(), xyzb.getMinimumZ(), xyzb.getMaximumZ());
// Doesn't have to be true, but is...
assertTrue(!area.isWithin(p1));

{code}

The exact point in question shows up as outside of *even* the bounds computed 
for the circle.  So honestly I don't know how it wound up getting included?  
Unless, perhaps, the descent decisions were made based on the approximation?

Looking at the code to see how to delve deeper...

 Integrate lat/long BKD and spatial 3d, part 2
 -

 Key: LUCENE-6759
 URL: https://issues.apache.org/jira/browse/LUCENE-6759
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
 Attachments: LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch


 This is just a continuation of LUCENE-6699, which became too big.






[jira] [Updated] (SOLR-7844) Zookeeper session expiry during shard leader election can cause multiple leaders.

2015-08-26 Thread Mike Roberts (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Roberts updated SOLR-7844:
---
Attachment: SOLR-7844.patch

Initial attempt at a patch.

When we reconnect after a ZooKeeper session expiry we cancel any ongoing leader 
elections. The ongoing election threads check their status in a few places to 
ensure that no 'leader specific' operations are taken after the election has 
been canceled.

I don't think this is bulletproof, but it does significantly reduce the size of 
the window in which this scenario can occur.

 Zookeeper session expiry during shard leader election can cause multiple 
 leaders.
 -

 Key: SOLR-7844
 URL: https://issues.apache.org/jira/browse/SOLR-7844
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.4
Reporter: Mike Roberts
 Attachments: SOLR-7844.patch


 If the ZooKeeper session expires for a host during shard leader election, the 
 ephemeral leader_elect nodes are removed. However the threads that were 
 processing the election are still present (and could believe the host won the 
 election). They will then incorrectly create leader nodes once a new 
 ZooKeeper session is established.
 This introduces a subtle race condition that could cause two hosts to become 
 leader.
 Scenario:
 a three machine cluster, all of the machines are restarting at approximately 
 the same time.
 The first machine starts, writes a leader_elect ephemeral node, it's the only 
 candidate in the election so it wins and starts the leadership process. As it 
 knows it has peers, it begins to block waiting for the peers to arrive.
 During this period of blocking[1] the ZK connection drops and the session 
 expires.
 A new ZK session is established, and ElectionContext.cancelElection is 
 called. Then register() is called and a new set of leader_elect ephemeral 
 nodes are created.
 During the period between the ZK session expiring, and new set of 
 leader_elect nodes being created the second machine starts.
 It creates its leader_elect ephemeral nodes, as there are no other nodes it 
 wins the election and starts the leadership process. As it's still missing one 
 of its peers, it begins to block waiting for the third machine to join.
 There is now a race between machine1 & machine2, both of whom think they are 
 the leader.
 So far, this isn't too bad, because the machine that loses the race will fail 
 when it tries to create the /collection/name/leader/shard1 node (as it 
 already exists), and will rejoin the election.
 While this is happening, machine3 has started and has queued for leadership 
 behind machine2.
 If the loser of the race is machine2, when it rejoins the election it cancels 
 the current context, deleting its leader_elect ephemeral nodes.
 At this point, machine3 believes it has become leader (the watcher it has on 
 the leader_elect node fires), and it runs the LeaderElector::checkIfIAmLeader 
 method. This method DELETES the current /collection/name/leader/shard1 node, 
 then starts the leadership process (as all three machines are now running, it 
 does not block to wait).
 So, machine1 won the race with machine2 and declared its leadership and 
 created the nodes. However, machine3 has just deleted them, and recreated 
 them for itself. So machine1 and machine3 both believe they are the leader.
 I am thinking that the fix should be to cancel & close all election contexts 
 immediately on reconnect (we do cancel them, however it's run serially, which 
 has blocking issues, and just canceling does not cause the wait loop to 
 exit). That election context logic already has checks on the closed flag, so 
 they should exit if they see it has been closed.
 I'm working on a patch for this.






Re: svn commit: r1697131 - in /lucene/dev/trunk/lucene: JRE_VERSION_MIGRATION.txt test-framework/src/java/org/apache/lucene/util/TestUtil.java

2015-08-26 Thread Chris Hostetter

Uwe: I'm still concerned about this change and the way it might result in 
confusing failure messages in the future (if the whitespace definition of other 
characters changes) ... can you please explain your choice of 
Assert.assertTrue -> assert ?


On Mon, 24 Aug 2015, Chris Hostetter wrote:

: Uwe: why did you change this from Assert.assertTrue to assert ?
: 
: In the old code the test would fail every time with a clear explanation of 
: the problem -- in your new code, if assertions are randomly disabled by 
: the test framework, then the sanity check won't run and instead we'll get 
: a strange failure from whatever test called this method.
: 
: 
: 
: : 
==
: : --- 
lucene/dev/trunk/lucene/test-framework/src/java/org/apache/lucene/util/TestUtil.java
 (original)
: : +++ 
lucene/dev/trunk/lucene/test-framework/src/java/org/apache/lucene/util/TestUtil.java
 Sat Aug 22 21:33:47 2015
: : @@ -35,6 +35,7 @@ import java.util.Collections;
: :  import java.util.HashMap;
: :  import java.util.Iterator;
: :  import java.util.List;
: : +import java.util.Locale;
: :  import java.util.Map;
: :  import java.util.NoSuchElementException;
: :  import java.util.Random;
: : @@ -1188,7 +1189,7 @@ public final class TestUtil {
: :int offset = nextInt(r, 0, WHITESPACE_CHARACTERS.length-1);
: :char c = WHITESPACE_CHARACTERS[offset];
: :// sanity check
: : -  Assert.assertTrue("Not really whitespace? (@"+offset+"): " + c, 
Character.isWhitespace(c));
: : +  assert Character.isWhitespace(c) : String.format(Locale.ENGLISH, 
"Not really whitespace? WHITESPACE_CHARACTERS[%d] is '\\u%04X'", offset, (int) 
c);
: :out.append(c);
: :  }
: :  return out.toString();
: : @@ -1307,9 +1308,9 @@ public final class TestUtil {
: :  '\u001E',
: :  '\u001F',
: :  '\u0020',
: : -// '\u0085', faild sanity check?
: : +// '\u0085', failed sanity check?
: :  '\u1680',
: : -'\u180E',
: : +// '\u180E', no longer whitespace in Unicode 7.0 (Java 9)!
: :  '\u2000',
: :  '\u2001',
: :  '\u2002',
: : 
: : 
: : 
: 
: -Hoss
: http://www.lucidworks.com/
: 

-Hoss
http://www.lucidworks.com/
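A minimal pure-JDK sketch of the failure-mode difference Hoss describes: a JUnit-style Assert.assertTrue always evaluates its condition and throws on failure, while a plain Java `assert` statement is executed only when the JVM runs with -ea, so the sanity check can be silently skipped. The class and method names below are illustrative, not from TestUtil.

```java
// Sketch: an always-on sanity check vs. a skippable `assert`.
public class SanityCheckDemo {

  // Always-on check, analogous to Assert.assertTrue(msg, condition):
  // fails loudly regardless of whether JVM assertions are enabled.
  static void checkWhitespace(char c, int offset) {
    if (!Character.isWhitespace(c)) {
      throw new AssertionError(
          "Not really whitespace? offset=" + offset + " char=" + (int) c);
    }
  }

  // Convenience wrapper: true if the check passes, false if it throws.
  static boolean passes(char c, int offset) {
    try {
      checkWhitespace(c, offset);
      return true;
    } catch (AssertionError e) {
      return false;
    }
  }

  public static void main(String[] args) {
    System.out.println(passes(' ', 0));   // true: space is whitespace
    System.out.println(passes('x', 1));   // false: fails loudly with or
                                          // without -ea, unlike `assert`,
                                          // which would be skipped under -da
  }
}
```

An `assert Character.isWhitespace(c)` version of the same check compiles identically but becomes a no-op when the test framework randomly disables assertions, which is exactly the concern raised above.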

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.8.0) - Build # 2623 - Still Failing!

2015-08-26 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2623/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.lucene.TestMergeSchedulerExternal.testSubclassConcurrentMergeScheduler

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([A2D285B04961A340:2553381D4D41D944]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.lucene.TestMergeSchedulerExternal.testSubclassConcurrentMergeScheduler(TestMergeSchedulerExternal.java:116)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 1086 lines...]
   [junit4] Suite: org.apache.lucene.TestMergeSchedulerExternal
   [junit4]   2 NOTE: reproduce with: ant test  
-Dtestcase=TestMergeSchedulerExternal 
-Dtests.method=testSubclassConcurrentMergeScheduler 
-Dtests.seed=A2D285B04961A340 -Dtests.slow=true -Dtests.locale=it 
-Dtests.timezone=Pacific/Majuro -Dtests.asserts=true -Dtests.file.encoding=UTF-8
   [junit4] FAILURE 0.57s J0 | 
TestMergeSchedulerExternal.testSubclassConcurrentMergeScheduler 
   [junit4] Throwable #1: java.lang.AssertionError
   [junit4]at 

[jira] [Commented] (SOLR-7972) VelocityResponseWriter template encoding issue

2015-08-26 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14712721#comment-14712721
 ] 

Erik Hatcher commented on SOLR-7972:


[~anshumg] thanks!   I just woke up to the build errors and was fixing it when 
I noticed you beat me to it.  And thanks for removing the 5.3.1 fix version - 
if we do make such a release this can be merged over there but maybe no need 
yet.

 VelocityResponseWriter template encoding issue
 --

 Key: SOLR-7972
 URL: https://issues.apache.org/jira/browse/SOLR-7972
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.3
Reporter: Erik Hatcher
Assignee: Erik Hatcher
Priority: Critical
 Fix For: Trunk, 5.4

 Attachments: SOLR-7972.patch


 I'm not sure when this got introduced (5.0 maybe?) - the .vm templates are 
 loaded using ISO-8859-1 rather than UTF-8 as they should be. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6759) Integrate lat/long BKD and spatial 3d, part 2

2015-08-26 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14712793#comment-14712793
 ] 

Michael McCandless commented on LUCENE-6759:


OK I'll disable this assert: it is too anal.

 Integrate lat/long BKD and spatial 3d, part 2
 -

 Key: LUCENE-6759
 URL: https://issues.apache.org/jira/browse/LUCENE-6759
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
 Attachments: LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch


 This is just a continuation of LUCENE-6699, which became too big.






[jira] [Commented] (LUCENE-6570) Make BooleanQuery immutable

2015-08-26 Thread Greg Huber (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14712704#comment-14712704
 ] 

Greg Huber commented on LUCENE-6570:


Hello,

I am extending AnalyzingInfixSuggester for use with my suggester where I change 
the query to be an AND rather than an OR in the finishQuery(..) method.

ie

/**
 * Subclass can override this to tweak the Query before searching.
 */
protected Query finishQuery(Builder in, boolean allTermsRequired) {

    // Update contexts to be ANDs (MUST) rather than ORs (SHOULD)
    for (BooleanClause booleanClause : in.build().clauses()) {
        // Change the contexts to be MUST (will be the only BooleanQuery
        // and others will be TermQuery)
        if (booleanClause.getQuery() instanceof BooleanQuery) {
            BooleanQuery bq = (BooleanQuery) booleanClause.getQuery();
            for (BooleanClause bc : bq) {
                bc.setOccur(BooleanClause.Occur.MUST);
            }
            // We are done
            break;
        }
    }

    return in.build();
}

It says that BooleanClause.setOccur(..) is deprecated and will be immutable in 
6.0; how would I then be able to do this?

 Make BooleanQuery immutable
 ---

 Key: LUCENE-6570
 URL: https://issues.apache.org/jira/browse/LUCENE-6570
 Project: Lucene - Core
  Issue Type: Task
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.3, 6.0

 Attachments: LUCENE-6570.patch


 In the same spirit as LUCENE-6531 for the PhraseQuery, we should make 
 BooleanQuery immutable.
 The plan is the following:
  - create BooleanQuery.Builder with the same setters as BooleanQuery today 
 (except setBoost) and a build() method that returns a BooleanQuery
  - remove setters from BooleanQuery (except setBoost)
 I would also like to add some static utility methods for common use-cases of 
 this query, for instance:
  - static BooleanQuery disjunction(Query... queries) to create a disjunction
  - static BooleanQuery conjunction(Query... queries) to create a conjunction
  - static BooleanQuery filtered(Query query, Query... filters) to create a 
 filtered query
 Hopefully this will help keep tests not too verbose, and the latter will also 
 help with the FilteredQuery deprecation/removal.






[jira] [Comment Edited] (LUCENE-6570) Make BooleanQuery immutable

2015-08-26 Thread Greg Huber (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14712704#comment-14712704
 ] 

Greg Huber edited comment on LUCENE-6570 at 8/26/15 8:36 AM:
-

Hello,

I am extending AnalyzingInfixSuggester for use with my suggester where I change 
the query to be an AND rather than an OR in the finishQuery(..) method.

ie

/**
 * Subclass can override this to tweak the Query before searching.
 */
protected Query finishQuery(Builder in, boolean allTermsRequired) {

    // Update contexts to be ANDs (MUST) rather than ORs (SHOULD)
    for (BooleanClause booleanClause : in.build().clauses()) {
        // Change the contexts to be MUST (will be the only BooleanQuery
        // and others will be TermQuery)
        if (booleanClause.getQuery() instanceof BooleanQuery) {
            BooleanQuery bq = (BooleanQuery) booleanClause.getQuery();
            for (BooleanClause bc : bq) {
                bc.setOccur(BooleanClause.Occur.MUST);
            }
            // We are done
            break;
        }
    }

    return in.build();
}

It says that BooleanClause.setOccur(..) is deprecated and will be immutable in 
6.0; how would I then be able to do this?

Cheers Greg


was (Author: gregh99):
Hello,

I am extending AnalyzingInfixSuggester for use with my suggester where I change 
the query to be an AND rather than an OR in the finishQuery(..) method.

ie

/**
 * Subclass can override this to tweak the Query before searching.
 */
protected Query finishQuery(Builder in, boolean allTermsRequired) {

    // Update contexts to be ANDs (MUST) rather than ORs (SHOULD)
    for (BooleanClause booleanClause : in.build().clauses()) {
        // Change the contexts to be MUST (will be the only BooleanQuery
        // and others will be TermQuery)
        if (booleanClause.getQuery() instanceof BooleanQuery) {
            BooleanQuery bq = (BooleanQuery) booleanClause.getQuery();
            for (BooleanClause bc : bq) {
                bc.setOccur(BooleanClause.Occur.MUST);
            }
            // We are done
            break;
        }
    }

    return in.build();
}

It says that BooleanClause.setOccur(..) is deprecated and will be immutable in 
6.0; how would I then be able to do this?

 Make BooleanQuery immutable
 ---

 Key: LUCENE-6570
 URL: https://issues.apache.org/jira/browse/LUCENE-6570
 Project: Lucene - Core
  Issue Type: Task
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.3, 6.0

 Attachments: LUCENE-6570.patch


 In the same spirit as LUCENE-6531 for the PhraseQuery, we should make 
 BooleanQuery immutable.
 The plan is the following:
  - create BooleanQuery.Builder with the same setters as BooleanQuery today 
 (except setBoost) and a build() method that returns a BooleanQuery
  - remove setters from BooleanQuery (except setBoost)
 I would also like to add some static utility methods for common use-cases of 
 this query, for instance:
  - static BooleanQuery disjunction(Query... queries) to create a disjunction
  - static BooleanQuery conjunction(Query... queries) to create a conjunction
  - static BooleanQuery filtered(Query query, Query... filters) to create a 
 filtered query
 Hopefully this will help keep tests not too verbose, and the latter will also 
 help with the FilteredQuery deprecation/removal.






[jira] [Commented] (LUCENE-6570) Make BooleanQuery immutable

2015-08-26 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14712881#comment-14712881
 ] 

Uwe Schindler commented on LUCENE-6570:
---

The boolean clauses have to be created with MUST by from the beginning.
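With an immutable BooleanQuery, the rewrite has to happen while building rather than by mutating clauses afterwards. Here is a pure-JDK sketch of that builder pattern; the Clause/Occur types are local stand-ins that only mirror Lucene's names, not the real Lucene API.

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for the immutable-builder pattern: instead of calling
// setOccur(MUST) on the clauses of an already-built query, re-create
// every clause with the occur value you actually want.
public class RebuildWithMust {

  enum Occur { MUST, SHOULD }

  // Immutable clause: occur is fixed at construction time.
  record Clause(String term, Occur occur) {}

  // Build a new clause list with MUST, leaving the original untouched.
  static List<Clause> asConjunction(List<Clause> clauses) {
    List<Clause> rebuilt = new ArrayList<>();
    for (Clause c : clauses) {
      rebuilt.add(new Clause(c.term(), Occur.MUST));
    }
    return List.copyOf(rebuilt);
  }

  public static void main(String[] args) {
    List<Clause> original = List.of(
        new Clause("ctx:a", Occur.SHOULD),
        new Clause("ctx:b", Occur.SHOULD));
    List<Clause> must = asConjunction(original);
    System.out.println(must.get(0).occur());      // MUST
    System.out.println(original.get(0).occur());  // still SHOULD
  }
}
```

Against the real API the shape would presumably be the same: iterate the clauses of the built BooleanQuery and add each clause's query to a fresh BooleanQuery.Builder with Occur.MUST, then build().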

 Make BooleanQuery immutable
 ---

 Key: LUCENE-6570
 URL: https://issues.apache.org/jira/browse/LUCENE-6570
 Project: Lucene - Core
  Issue Type: Task
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.3, 6.0

 Attachments: LUCENE-6570.patch


 In the same spirit as LUCENE-6531 for the PhraseQuery, we should make 
 BooleanQuery immutable.
 The plan is the following:
  - create BooleanQuery.Builder with the same setters as BooleanQuery today 
 (except setBoost) and a build() method that returns a BooleanQuery
  - remove setters from BooleanQuery (except setBoost)
 I would also like to add some static utility methods for common use-cases of 
 this query, for instance:
  - static BooleanQuery disjunction(Query... queries) to create a disjunction
  - static BooleanQuery conjunction(Query... queries) to create a conjunction
  - static BooleanQuery filtered(Query query, Query... filters) to create a 
 filtered query
 Hopefully this will help keep tests not too verbose, and the latter will also 
 help with the FilteredQuery deprecation/removal.






[jira] [Comment Edited] (LUCENE-6570) Make BooleanQuery immutable

2015-08-26 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14712881#comment-14712881
 ] 

Uwe Schindler edited comment on LUCENE-6570 at 8/26/15 10:13 AM:
-

The boolean clauses have to be created with MUST from the beginning.


was (Author: thetaphi):
The boolean clauses have to be created with MUST by from the beginning.

 Make BooleanQuery immutable
 ---

 Key: LUCENE-6570
 URL: https://issues.apache.org/jira/browse/LUCENE-6570
 Project: Lucene - Core
  Issue Type: Task
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.3, 6.0

 Attachments: LUCENE-6570.patch


 In the same spirit as LUCENE-6531 for the PhraseQuery, we should make 
 BooleanQuery immutable.
 The plan is the following:
  - create BooleanQuery.Builder with the same setters as BooleanQuery today 
 (except setBoost) and a build() method that returns a BooleanQuery
  - remove setters from BooleanQuery (except setBoost)
 I would also like to add some static utility methods for common use-cases of 
 this query, for instance:
  - static BooleanQuery disjunction(Query... queries) to create a disjunction
  - static BooleanQuery conjunction(Query... queries) to create a conjunction
  - static BooleanQuery filtered(Query query, Query... filters) to create a 
 filtered query
 Hopefully this will help keep tests not too verbose, and the latter will also 
 help with the FilteredQuery deprecation/removal.






[jira] [Created] (SOLR-7975) Support payloads on primitive types

2015-08-26 Thread Jamie Johnson (JIRA)
Jamie Johnson created SOLR-7975:
---

 Summary: Support payloads on primitive types
 Key: SOLR-7975
 URL: https://issues.apache.org/jira/browse/SOLR-7975
 Project: Solr
  Issue Type: New Feature
  Components: clients - java, Server
Reporter: Jamie Johnson


Currently payloads are supported through the use of an analysis chain; this 
limits the ability to provide payloads on primitive fields like Trie, Bool, etc 
without copying these classes and adding the ability in custom code.  It would 
be great if payloads could be added to these field types in a pluggable way 
similar to what is supported for non primitive types, perhaps through extending 
the base primitive implementations.






[jira] [Commented] (LUCENE-6570) Make BooleanQuery immutable

2015-08-26 Thread Greg Huber (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14712958#comment-14712958
 ] 

Greg Huber commented on LUCENE-6570:


I guess it will work as it's public, not much else I can do! 

Cheers Greg. :-)

 Make BooleanQuery immutable
 ---

 Key: LUCENE-6570
 URL: https://issues.apache.org/jira/browse/LUCENE-6570
 Project: Lucene - Core
  Issue Type: Task
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.3, 6.0

 Attachments: LUCENE-6570.patch


 In the same spirit as LUCENE-6531 for the PhraseQuery, we should make 
 BooleanQuery immutable.
 The plan is the following:
  - create BooleanQuery.Builder with the same setters as BooleanQuery today 
 (except setBoost) and a build() method that returns a BooleanQuery
  - remove setters from BooleanQuery (except setBoost)
 I would also like to add some static utility methods for common use-cases of 
 this query, for instance:
  - static BooleanQuery disjunction(Query... queries) to create a disjunction
  - static BooleanQuery conjunction(Query... queries) to create a conjunction
  - static BooleanQuery filtered(Query query, Query... filters) to create a 
 filtered query
 Hopefully this will help keep tests not too verbose, and the latter will also 
 help with the FilteredQuery deprecation/removal.






[jira] [Created] (SOLR-7976) Jetty http and https connectors use different property names for host/port

2015-08-26 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-7976:
---

 Summary: Jetty http and https connectors use different property 
names for host/port
 Key: SOLR-7976
 URL: https://issues.apache.org/jira/browse/SOLR-7976
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.4, 5.3
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: Trunk, 5.4


Jetty http and https connectors use different property names for host/port i.e. 
jetty.host vs solr.jetty.host and jetty.port vs solr.jetty.port.






[jira] [Updated] (SOLR-7971) Reduce memory allocated by JavaBinCodec to encode large strings

2015-08-26 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-7971:

Attachment: SOLR-7971-directbuffer.patch

Here's another idea as a patch to further reduce heap requirement. In this 
patch I use a direct byte buffer to hold the encoded bytes and limit the 
intermediate on-heap buffer to 64KB only. This optimization kicks in only if 
the max bytes required by the string being serialized is greater than 64KB.

With this patch I can index the same 100MB JSON document with 1200MB of heap.

[~ysee...@gmail.com] - Thoughts?
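A pure-JDK sketch of the underlying trick in the patch: encode straight into a direct (off-heap) ByteBuffer with a CharsetEncoder instead of materializing a full-size on-heap byte[] for the encoded string. The actual patch additionally bounds the intermediate on-heap buffer at 64KB; names and sizes below are illustrative, not taken from SOLR-7971's code.

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharsetEncoder;
import java.nio.charset.CoderResult;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class BoundedEncode {

  // Encode directly into an off-heap buffer; no full-size on-heap
  // byte[] is ever allocated for the encoded form.
  static ByteBuffer encodeToDirect(String s) {
    // 3 bytes per Java char is the UTF-8 worst case (see SOLR-7927).
    ByteBuffer dest = ByteBuffer.allocateDirect(s.length() * 3);
    CharsetEncoder enc = StandardCharsets.UTF_8.newEncoder();
    CharBuffer src = CharBuffer.wrap(s);
    CoderResult r = enc.encode(src, dest, true);
    if (r.isError()) {
      throw new IllegalArgumentException("encode failed: " + r);
    }
    enc.flush(dest);
    dest.flip();
    return dest;
  }

  // Copy the encoded bytes back on-heap (only for comparison/printing).
  static byte[] drain(ByteBuffer b) {
    byte[] out = new byte[b.remaining()];
    b.duplicate().get(out);
    return out;
  }

  public static void main(String[] args) {
    String s = "h\u00E9llo \uDBFF\uDFFF";  // includes multi-byte chars
    byte[] viaDirect = drain(encodeToDirect(s));
    byte[] viaHeap = s.getBytes(StandardCharsets.UTF_8);
    System.out.println(Arrays.equals(viaDirect, viaHeap));  // true
  }
}
```

The same loop-with-CharsetEncoder approach also allows feeding the source string through a small reusable on-heap CharBuffer window, which is where the 64KB cap in the patch comes in.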

 Reduce memory allocated by JavaBinCodec to encode large strings
 ---

 Key: SOLR-7971
 URL: https://issues.apache.org/jira/browse/SOLR-7971
 Project: Solr
  Issue Type: Sub-task
  Components: Response Writers, SolrCloud
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: Trunk, 5.4

 Attachments: SOLR-7971-directbuffer.patch, SOLR-7971.patch


 As discussed in SOLR-7927, we can reduce the buffer memory allocated by 
 JavaBinCodec while writing large strings.
 https://issues.apache.org/jira/browse/SOLR-7927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14700420#comment-14700420
 {quote}
 The maximum Unicode code point (as of Unicode 8 anyway) is U+10FFFF 
 ([http://www.unicode.org/glossary/#code_point]).  This is encoded in UTF-16 
 as surrogate pair {{\uDBFF\uDFFF}}, which takes up two Java chars, and is 
 represented in UTF-8 as the 4-byte sequence {{F4 8F BF BF}}.  This is likely 
 where the mistaken 4-bytes-per-Java-char formulation came from: the maximum 
 number of UTF-8 bytes required to represent a Unicode *code point* is 4.
 The maximum Java char is {{\uFFFF}}, which is represented in UTF-8 as the 
 3-byte sequence {{EF BF BF}}.
 So I think it's safe to switch to using 3 bytes per Java char (the unit of 
 measurement returned by {{String.length()}}), like 
 {{CompressingStoredFieldsWriter.writeField()}} does.
 {quote}
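The byte-count arithmetic in the quoted comment is easy to check directly with the JDK (class name illustrative):

```java
import java.nio.charset.StandardCharsets;

// Verifies the counts quoted above: the largest single Java char needs
// 3 UTF-8 bytes, while the largest code point U+10FFFF is a surrogate
// *pair* (two Java chars) taking 4 bytes -- so the per-Java-char worst
// case is 3 bytes, not 4.
public class Utf8Bounds {

  static int utf8Bytes(String s) {
    return s.getBytes(StandardCharsets.UTF_8).length;
  }

  public static void main(String[] args) {
    System.out.println(utf8Bytes("\uFFFF"));        // 3  (EF BF BF)
    System.out.println(utf8Bytes("\uDBFF\uDFFF"));  // 4  (F4 8F BF BF)
    // 4 bytes / 2 chars = 2 bytes per char for supplementary code
    // points, so 3 * String.length() is a safe buffer upper bound.
  }
}
```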






[jira] [Created] (SOLR-7977) SOLR_HOST in solr.in.sh doesn't apply to Jetty's host property

2015-08-26 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-7977:
---

 Summary: SOLR_HOST in solr.in.sh doesn't apply to Jetty's host 
property
 Key: SOLR-7977
 URL: https://issues.apache.org/jira/browse/SOLR-7977
 Project: Solr
  Issue Type: Bug
  Components: security, SolrCloud
Reporter: Shalin Shekhar Mangar
 Fix For: Trunk, 5.4


[~sdavids] pointed out that the SOLR_HOST config option in solr.in.sh doesn't 
set Jetty's host property (solr.jetty.host) so it still binds to all net 
interfaces. Perhaps it should apply to jetty as well because the user 
explicitly wants us to bind to specific IP?






[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 776 - Still Failing

2015-08-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/776/

3 tests failed.
REGRESSION:  org.apache.solr.cloud.TestRebalanceLeaders.test

Error Message:
No live SolrServers available to handle this request:[http://127.0.0.1:57241, 
http://127.0.0.1:56384, http://127.0.0.1:40438, http://127.0.0.1:50878, 
http://127.0.0.1:41922]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:57241, http://127.0.0.1:56384, 
http://127.0.0.1:40438, http://127.0.0.1:50878, http://127.0.0.1:41922]
at 
__randomizedtesting.SeedInfo.seed([1A55CE4403CDBAD5:9201F19EAD31D72D]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:352)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1086)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:857)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:800)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.TestRebalanceLeaders.issueCommands(TestRebalanceLeaders.java:281)
at 
org.apache.solr.cloud.TestRebalanceLeaders.rebalanceLeaderTest(TestRebalanceLeaders.java:109)
at 
org.apache.solr.cloud.TestRebalanceLeaders.test(TestRebalanceLeaders.java:75)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-6570) Make BooleanQuery immutable

2015-08-26 Thread Greg Huber (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14712935#comment-14712935
 ] 

Greg Huber commented on LUCENE-6570:


btw, I have multiple contexts so call

AnalyzingInfixSuggester.suggester.lookup(term, contexts, nMax, true, true);

which will then call  AnalyzingInfixSuggester.toQuery(..) eventually, which 
adds the context with the BooleanClause.Occur.SHOULD. It's a private method, so 
is there a way to override this?

private BooleanQuery toQuery(Set<BytesRef> contextInfo) {
    if (contextInfo == null || contextInfo.isEmpty()) {
        return null;
    }

    BooleanQuery.Builder contextFilter = new BooleanQuery.Builder();
    for (BytesRef context : contextInfo) {
        addContextToQuery(contextFilter, context, BooleanClause.Occur.SHOULD);
    }
    return contextFilter.build();
}

 Make BooleanQuery immutable
 ---

 Key: LUCENE-6570
 URL: https://issues.apache.org/jira/browse/LUCENE-6570
 Project: Lucene - Core
  Issue Type: Task
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.3, 6.0

 Attachments: LUCENE-6570.patch


 In the same spirit as LUCENE-6531 for the PhraseQuery, we should make 
 BooleanQuery immutable.
 The plan is the following:
  - create BooleanQuery.Builder with the same setters as BooleanQuery today 
 (except setBoost) and a build() method that returns a BooleanQuery
  - remove setters from BooleanQuery (except setBoost)
 I would also like to add some static utility methods for common use-cases of 
 this query, for instance:
  - static BooleanQuery disjunction(Query... queries) to create a disjunction
  - static BooleanQuery conjunction(Query... queries) to create a conjunction
  - static BooleanQuery filtered(Query query, Query... filters) to create a 
 filtered query
 Hopefully this will help keep tests not too verbose, and the latter will also 
 help with the FilteredQuery deprecation/removal.






[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_60) - Build # 5202 - Failure!

2015-08-26 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5202/
Java: 64bit/jdk1.8.0_60 -XX:-UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.handler.TestReplicationHandlerBackup.doTestBackup

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([B466C333BCD6B09]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandlerBackup

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([B466C333BCD6B09]:0)




Build Log:
[...truncated 11052 lines...]
   [junit4] Suite: org.apache.solr.handler.TestReplicationHandlerBackup
   [junit4]   2 Creating dataDir: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandlerBackup_B466C333BCD6B09-001\init-core-data-001
   [junit4]   2 2667536 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[B466C333BCD6B09]) [ 
   ] o.a.s.SolrTestCaseJ4 ###Starting testBackupOnCommit
   [junit4]   2 2667537 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[B466C333BCD6B09]) [ 
   ] o.a.s.SolrTestCaseJ4 Writing core.properties file to 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandlerBackup_B466C333BCD6B09-001\solr-instance-001\collection1
   [junit4]   2 2667546 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[B466C333BCD6B09]) [ 
   ] o.e.j.s.Server jetty-9.2.13.v20150730
   [junit4]   2 2667548 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[B466C333BCD6B09]) [ 
   ] o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@52f8bdf8{/solr,null,AVAILABLE}
   [junit4]   2 2667550 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[B466C333BCD6B09]) [ 
   ] o.e.j.s.ServerConnector Started 
ServerConnector@39be83e5{HTTP/1.1}{127.0.0.1:57610}
   [junit4]   2 2667550 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[B466C333BCD6B09]) [ 
   ] o.e.j.s.Server Started @2678201ms
   [junit4]   2 2667550 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[B466C333BCD6B09]) [ 
   ] o.a.s.c.s.e.JettySolrRunner Jetty properties: 
{solr.data.dir=C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandlerBackup_B466C333BCD6B09-001\solr-instance-001\collection1\data,
 hostContext=/solr, hostPort=57610}
   [junit4]   2 2667551 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[B466C333BCD6B09]) [ 
   ] o.a.s.s.SolrDispatchFilter SolrDispatchFilter.init(): 
sun.misc.Launcher$AppClassLoader@4e0e2f2a
   [junit4]   2 2667551 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[B466C333BCD6B09]) [ 
   ] o.a.s.c.SolrResourceLoader new SolrResourceLoader for directory: 
'C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandlerBackup_B466C333BCD6B09-001\solr-instance-001\'
   [junit4]   2 2667569 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[B466C333BCD6B09]) [ 
   ] o.a.s.c.SolrXmlConfig Loading container configuration from 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandlerBackup_B466C333BCD6B09-001\solr-instance-001\solr.xml
   [junit4]   2 2667577 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[B466C333BCD6B09]) [ 
   ] o.a.s.c.CoresLocator Config-defined core root directory: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandlerBackup_B466C333BCD6B09-001\solr-instance-001\.
   [junit4]   2 2667577 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[B466C333BCD6B09]) [ 
   ] o.a.s.c.CoreContainer New CoreContainer 14928761
   [junit4]   2 2667577 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[B466C333BCD6B09]) [ 
   ] o.a.s.c.CoreContainer Loading cores into CoreContainer 
[instanceDir=C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandlerBackup_B466C333BCD6B09-001\solr-instance-001\]
   [junit4]   2 2667577 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[B466C333BCD6B09]) [ 
   ] o.a.s.c.CoreContainer loading shared library: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandlerBackup_B466C333BCD6B09-001\solr-instance-001\lib
   [junit4]   2 2667577 WARN  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[B466C333BCD6B09]) [ 
   ] o.a.s.c.SolrResourceLoader 

[jira] [Commented] (SOLR-7961) Add version command to bin/solr start script

2015-08-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14713044#comment-14713044
 ] 

ASF subversion and git services commented on SOLR-7961:
---

Commit 1697910 from jan...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1697910 ]

SOLR-7961: Add version command to bin/solr start script. Also adds -h for help 
(backport)

 Add version command to bin/solr start script
 

 Key: SOLR-7961
 URL: https://issues.apache.org/jira/browse/SOLR-7961
 Project: Solr
  Issue Type: New Feature
  Components: scripts and tools
Reporter: Jan Høydahl
Assignee: Jan Høydahl
Priority: Trivial
 Fix For: 5.4

 Attachments: SOLR-7961.patch


 It would be nice to be able to tell which version of Solr you have. You can 
 get it with the {{status}} command today, but only if Solr is already 
 running. Proposal:
 {noformat}
 $ bin/solr -version
 5.3.0
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-7961) Add version command to bin/solr start script

2015-08-26 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-7961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-7961.
---
Resolution: Fixed

 Add version command to bin/solr start script
 

 Key: SOLR-7961
 URL: https://issues.apache.org/jira/browse/SOLR-7961
 Project: Solr
  Issue Type: New Feature
  Components: scripts and tools
Reporter: Jan Høydahl
Assignee: Jan Høydahl
Priority: Trivial
 Fix For: 5.4

 Attachments: SOLR-7961.patch


 It would be nice to be able to tell which version of Solr you have. You can 
 get it with the {{status}} command today, but only if Solr is already 
 running. Proposal:
 {noformat}
 $ bin/solr -version
 5.3.0
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Solr 5.3: is SOLR-7622 actually in?

2015-08-26 Thread Erik Hatcher
Looks like it’s in, but definitely confusing in JIRA.   Noble?  Ryan?



 On Aug 26, 2015, at 8:43 AM, Alexandre Rafalovitch arafa...@gmail.com wrote:
 
 In the release notes, SOLR-7622 is listed as new, but the issue itself
 is marked unresolved and not targeted. Nor did it look finished.
 
 Just a - belated - sanity check.
 
 Regards,
Alex.
 
 Solr Analyzers, Tokenizers, Filters, URPs and even a newsletter:
 http://www.solr-start.com/
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org
 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-SmokeRelease-5.x - Build # 302 - Still Failing

2015-08-26 Thread Michael McCandless
Thanks Shalin.

Mike McCandless

http://blog.mikemccandless.com


On Wed, Aug 26, 2015 at 8:57 AM, Shalin Shekhar Mangar
shalinman...@gmail.com wrote:
 Noble hasn't completed the post install steps which add backwards
 compatibility tests for the 5.3.0 release. Since he is on vacation for
 a few days, I'll add those indexes to avoid this failure.

 On Wed, Aug 26, 2015 at 8:59 AM, Apache Jenkins Server
 jenk...@builds.apache.org wrote:
 Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.x/302/

 No tests ran.

 Build Log:
 [...truncated 52844 lines...]
 prepare-release-no-sign:
 [mkdir] Created dir: 
 /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist
  [copy] Copying 461 files to 
 /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/lucene
  [copy] Copying 245 files to 
 /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/solr
[smoker] Java 1.7 
 JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.7
[smoker] Java 1.8 
 JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
[smoker] NOTE: output encoding is UTF-8
[smoker]
[smoker] Load release URL 
 file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/...
[smoker]
[smoker] Test Lucene...
[smoker]   test basics...
[smoker]   get KEYS
[smoker] 0.1 MB in 0.01 sec (12.1 MB/sec)
[smoker]   check changes HTML...
[smoker]   download lucene-5.4.0-src.tgz...
[smoker] 28.5 MB in 0.04 sec (708.0 MB/sec)
[smoker] verify md5/sha1 digests
[smoker]   download lucene-5.4.0.tgz...
[smoker] 65.8 MB in 0.09 sec (716.3 MB/sec)
[smoker] verify md5/sha1 digests
[smoker]   download lucene-5.4.0.zip...
[smoker] 76.0 MB in 0.11 sec (690.8 MB/sec)
[smoker] verify md5/sha1 digests
[smoker]   unpack lucene-5.4.0.tgz...
[smoker] verify JAR metadata/identity/no javax.* or java.* classes...
[smoker] test demo with 1.7...
[smoker]   got 6063 hits for query lucene
[smoker] checkindex with 1.7...
[smoker] test demo with 1.8...
[smoker]   got 6063 hits for query lucene
[smoker] checkindex with 1.8...
[smoker] check Lucene's javadoc JAR
[smoker]   unpack lucene-5.4.0.zip...
[smoker] verify JAR metadata/identity/no javax.* or java.* classes...
[smoker] test demo with 1.7...
[smoker]   got 6063 hits for query lucene
[smoker] checkindex with 1.7...
[smoker] test demo with 1.8...
[smoker]   got 6063 hits for query lucene
[smoker] checkindex with 1.8...
[smoker] check Lucene's javadoc JAR
[smoker]   unpack lucene-5.4.0-src.tgz...
[smoker] make sure no JARs/WARs in src dist...
[smoker] run ant validate
[smoker] run tests w/ Java 7 and testArgs='-Dtests.slow=false'...
[smoker] test demo with 1.7...
[smoker]   got 213 hits for query lucene
[smoker] checkindex with 1.7...
[smoker] generate javadocs w/ Java 7...
[smoker]
[smoker] Crawl/parse...
[smoker]
[smoker] Verify...
[smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
[smoker] test demo with 1.8...
[smoker]   got 213 hits for query lucene
[smoker] checkindex with 1.8...
[smoker] generate javadocs w/ Java 8...
[smoker]
[smoker] Crawl/parse...
[smoker]
[smoker] Verify...
[smoker]   confirm all releases have coverage in 
 TestBackwardsCompatibility
[smoker] find all past Lucene releases...
[smoker] run TestBackwardsCompatibility..
[smoker] Releases that don't seem to be tested:
[smoker]   5.3.0
[smoker] Traceback (most recent call last):
[smoker]   File 
 /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py,
  line 1449, in module
[smoker] main()
[smoker]   File 
 /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py,
  line 1394, in main
[smoker] smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, 
 c.is_signed, ' '.join(c.test_args))
[smoker]   File 
 /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py,
  line 1432, in smokeTest
[smoker] unpackAndVerify(java, 'lucene', tmpDir, 'lucene-%s-src.tgz' 
 % version, svnRevision, version, testArgs, baseURL)
[smoker]   File 
 /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py,
  line 583, in unpackAndVerify
[smoker] verifyUnpacked(java, project, artifact, unpackPath, 
 svnRevision, version, testArgs, tmpDir, baseURL)
[smoker]   File 
 

[jira] [Commented] (LUCENE-6570) Make BooleanQuery immutable

2015-08-26 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14712947#comment-14712947
 ] 

Adrien Grand commented on LUCENE-6570:
--

Maybe you can override `addContextToQuery` to override the clause?

 Make BooleanQuery immutable
 ---

 Key: LUCENE-6570
 URL: https://issues.apache.org/jira/browse/LUCENE-6570
 Project: Lucene - Core
  Issue Type: Task
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.3, 6.0

 Attachments: LUCENE-6570.patch


 In the same spirit as LUCENE-6531 for the PhraseQuery, we should make 
 BooleanQuery immutable.
 The plan is the following:
  - create BooleanQuery.Builder with the same setters as BooleanQuery today 
 (except setBoost) and a build() method that returns a BooleanQuery
  - remove setters from BooleanQuery (except setBoost)
 I would also like to add some static utility methods for common use-cases of 
 this query, for instance:
  - static BooleanQuery disjunction(Query... queries) to create a disjunction
  - static BooleanQuery conjunction(Query... queries) to create a conjunction
  - static BooleanQuery filtered(Query query, Query... filters) to create a 
 filtered query
 Hopefully this will help keep tests not too verbose, and the latter will also 
 help with the FilteredQuery deprecation/removal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
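The builder-plus-immutable-query shape proposed above can be sketched with a self-contained toy stand-in (the names and API here are illustrative only, not the actual Lucene classes):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Toy stand-in for the proposed BooleanQuery.Builder pattern: all mutation
// happens on the Builder; the built query itself has no setters.
final class ToyBooleanQuery {
    enum Occur { MUST, SHOULD, FILTER }

    static final class Clause {
        final String query;   // stands in for a real sub-Query
        final Occur occur;
        Clause(String query, Occur occur) { this.query = query; this.occur = occur; }
    }

    static final class Builder {
        private final List<Clause> clauses = new ArrayList<>();
        Builder add(String query, Occur occur) {
            clauses.add(new Clause(query, occur));
            return this;
        }
        ToyBooleanQuery build() {            // the only way to obtain a query
            return new ToyBooleanQuery(clauses);
        }
    }

    private final List<Clause> clauses;      // immutable after construction
    private ToyBooleanQuery(List<Clause> clauses) {
        this.clauses = Collections.unmodifiableList(new ArrayList<>(clauses));
    }
    int clauseCount() { return clauses.size(); }

    // Static convenience factory in the spirit of the proposed disjunction().
    static ToyBooleanQuery disjunction(String... queries) {
        Builder b = new Builder();
        for (String q : queries) b.add(q, Occur.SHOULD);
        return b.build();
    }

    public static void main(String[] args) {
        ToyBooleanQuery q = ToyBooleanQuery.disjunction("a", "b", "c");
        System.out.println(q.clauseCount()); // prints 3
    }
}
```

The design point is that callers holding a built query can never observe it changing underneath them, which is what makes query caching safe.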



[jira] [Commented] (SOLR-7961) Add version command to bin/solr start script

2015-08-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14713032#comment-14713032
 ] 

ASF subversion and git services commented on SOLR-7961:
---

Commit 1697904 from jan...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1697904 ]

SOLR-7961: Add version command to bin/solr start script. Also adds -h for help

 Add version command to bin/solr start script
 

 Key: SOLR-7961
 URL: https://issues.apache.org/jira/browse/SOLR-7961
 Project: Solr
  Issue Type: New Feature
  Components: scripts and tools
Reporter: Jan Høydahl
Assignee: Jan Høydahl
Priority: Trivial
 Fix For: 5.4

 Attachments: SOLR-7961.patch


 It would be nice to be able to tell which version of Solr you have. You can 
 get it with the {{status}} command today, but only if Solr is already 
 running. Proposal:
 {noformat}
 $ bin/solr -version
 5.3.0
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-SmokeRelease-5.x - Build # 302 - Still Failing

2015-08-26 Thread Shalin Shekhar Mangar
Noble hasn't completed the post install steps which add backwards
compatibility tests for the 5.3.0 release. Since he is on vacation for
a few days, I'll add those indexes to avoid this failure.

On Wed, Aug 26, 2015 at 8:59 AM, Apache Jenkins Server
jenk...@builds.apache.org wrote:
 Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.x/302/

 No tests ran.

 Build Log:
 [...truncated 52844 lines...]
 prepare-release-no-sign:
 [mkdir] Created dir: 
 /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist
  [copy] Copying 461 files to 
 /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/lucene
  [copy] Copying 245 files to 
 /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/solr
[smoker] Java 1.7 
 JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.7
[smoker] Java 1.8 
 JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
[smoker] NOTE: output encoding is UTF-8
[smoker]
[smoker] Load release URL 
 file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/...
[smoker]
[smoker] Test Lucene...
[smoker]   test basics...
[smoker]   get KEYS
[smoker] 0.1 MB in 0.01 sec (12.1 MB/sec)
[smoker]   check changes HTML...
[smoker]   download lucene-5.4.0-src.tgz...
[smoker] 28.5 MB in 0.04 sec (708.0 MB/sec)
[smoker] verify md5/sha1 digests
[smoker]   download lucene-5.4.0.tgz...
[smoker] 65.8 MB in 0.09 sec (716.3 MB/sec)
[smoker] verify md5/sha1 digests
[smoker]   download lucene-5.4.0.zip...
[smoker] 76.0 MB in 0.11 sec (690.8 MB/sec)
[smoker] verify md5/sha1 digests
[smoker]   unpack lucene-5.4.0.tgz...
[smoker] verify JAR metadata/identity/no javax.* or java.* classes...
[smoker] test demo with 1.7...
[smoker]   got 6063 hits for query lucene
[smoker] checkindex with 1.7...
[smoker] test demo with 1.8...
[smoker]   got 6063 hits for query lucene
[smoker] checkindex with 1.8...
[smoker] check Lucene's javadoc JAR
[smoker]   unpack lucene-5.4.0.zip...
[smoker] verify JAR metadata/identity/no javax.* or java.* classes...
[smoker] test demo with 1.7...
[smoker]   got 6063 hits for query lucene
[smoker] checkindex with 1.7...
[smoker] test demo with 1.8...
[smoker]   got 6063 hits for query lucene
[smoker] checkindex with 1.8...
[smoker] check Lucene's javadoc JAR
[smoker]   unpack lucene-5.4.0-src.tgz...
[smoker] make sure no JARs/WARs in src dist...
[smoker] run ant validate
[smoker] run tests w/ Java 7 and testArgs='-Dtests.slow=false'...
[smoker] test demo with 1.7...
[smoker]   got 213 hits for query lucene
[smoker] checkindex with 1.7...
[smoker] generate javadocs w/ Java 7...
[smoker]
[smoker] Crawl/parse...
[smoker]
[smoker] Verify...
[smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
[smoker] test demo with 1.8...
[smoker]   got 213 hits for query lucene
[smoker] checkindex with 1.8...
[smoker] generate javadocs w/ Java 8...
[smoker]
[smoker] Crawl/parse...
[smoker]
[smoker] Verify...
[smoker]   confirm all releases have coverage in TestBackwardsCompatibility
[smoker] find all past Lucene releases...
[smoker] run TestBackwardsCompatibility..
[smoker] Releases that don't seem to be tested:
[smoker]   5.3.0
[smoker] Traceback (most recent call last):
[smoker]   File 
 /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py,
  line 1449, in module
[smoker] main()
[smoker]   File 
 /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py,
  line 1394, in main
[smoker] smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, 
 c.is_signed, ' '.join(c.test_args))
[smoker]   File 
 /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py,
  line 1432, in smokeTest
[smoker] unpackAndVerify(java, 'lucene', tmpDir, 'lucene-%s-src.tgz' % 
 version, svnRevision, version, testArgs, baseURL)
[smoker]   File 
 /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py,
  line 583, in unpackAndVerify
[smoker] verifyUnpacked(java, project, artifact, unpackPath, 
 svnRevision, version, testArgs, tmpDir, baseURL)
[smoker]   File 
 /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py,
  line 762, in verifyUnpacked
[smoker] confirmAllReleasesAreTestedForBackCompat(unpackPath)

[jira] [Resolved] (SOLR-7972) VelocityResponseWriter template encoding issue

2015-08-26 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher resolved SOLR-7972.

Resolution: Fixed

 VelocityResponseWriter template encoding issue
 --

 Key: SOLR-7972
 URL: https://issues.apache.org/jira/browse/SOLR-7972
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.3
Reporter: Erik Hatcher
Assignee: Erik Hatcher
Priority: Critical
 Fix For: Trunk, 5.4

 Attachments: SOLR-7972.patch


 I'm not sure when this got introduced (5.0 maybe?) - the .vm templates are 
 loaded using ISO-8859-1 rather than UTF-8 as it should be. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6763) Make MultiPhraseQuery immutable

2015-08-26 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-6763:


 Summary: Make MultiPhraseQuery immutable
 Key: LUCENE-6763
 URL: https://issues.apache.org/jira/browse/LUCENE-6763
 Project: Lucene - Core
  Issue Type: Task
Reporter: Adrien Grand
Priority: Minor


We should make MultiPhraseQuery immutable similarly to PhraseQuery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Solr 5.3: is SOLR-7622 actually in?

2015-08-26 Thread Alexandre Rafalovitch
In the release notes, SOLR-7622 is listed as new, but the issue itself
is marked unresolved and not targeted. Nor did it look finished.

Just a - belated - sanity check.

Regards,
Alex.

Solr Analyzers, Tokenizers, Filters, URPs and even a newsletter:
http://www.solr-start.com/

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6764) Payloads should be compressed

2015-08-26 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-6764:


 Summary: Payloads should be compressed
 Key: LUCENE-6764
 URL: https://issues.apache.org/jira/browse/LUCENE-6764
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Priority: Minor


I think we should at least try something simple, e.g. deduplication or simple 
LZ77 compression. For instance, if you use enclosing HTML tags to give 
different weights to individual terms, there may be a lot of repetition, since 
there are not that many unique HTML tags.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
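The intuition that tag-derived payloads are highly redundant can be checked with plain DEFLATE from the JDK, a rough stand-in for the LZ77-style scheme suggested above (the payload contents below are made up for illustration):

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.Deflater;

public class PayloadCompressionDemo {
    public static void main(String[] args) {
        // Simulate per-term payloads derived from a handful of enclosing HTML
        // tags: many occurrences, very few unique values.
        String[] tags = {"b", "h1", "em", "p"};
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 1000; i++) sb.append(tags[i % tags.length]).append('|');
        byte[] raw = sb.toString().getBytes(StandardCharsets.UTF_8);

        // Compress with DEFLATE (LZ77 + Huffman) at the fastest setting.
        Deflater deflater = new Deflater(Deflater.BEST_SPEED);
        deflater.setInput(raw);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        while (!deflater.finished()) out.write(buf, 0, deflater.deflate(buf));
        deflater.end();

        System.out.println(raw.length + " -> " + out.size());
        // Repetitive payload data shrinks by far more than 10x here.
        System.out.println(out.size() < raw.length / 10);
    }
}
```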



[jira] [Updated] (SOLR-7569) Create an API to force a leader election between nodes

2015-08-26 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-7569:
---
Attachment: SOLR-7569.patch

Adding a wait for recoveries to finish after the recovery operation in the test.

 Create an API to force a leader election between nodes
 --

 Key: SOLR-7569
 URL: https://issues.apache.org/jira/browse/SOLR-7569
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
  Labels: difficulty-medium, impact-high
 Attachments: SOLR-7569.patch, SOLR-7569.patch, SOLR-7569.patch, 
 SOLR-7569.patch, SOLR-7569_lir_down_state_test.patch


 There are many reasons why Solr will not elect a leader for a shard, e.g. all 
 replicas' last published state was recovery, or bugs which cause a 
 leader to be marked as 'down'. While the best solution is for them never to 
 get into this state, we need a manual way to fix it when they do. Right now 
 the workaround is a dance involving bouncing the node (since recovery paths 
 between bouncing and REQUESTRECOVERY are different), but that is difficult 
 when running a large cluster. Although such a manual API may lead to some 
 data loss, in some cases it is the only option to restore availability.
 This issue proposes to build a new collection API which can be used to force 
 replicas into recovering a leader while avoiding data loss on a best effort 
 basis.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7971) Reduce memory allocated by JavaBinCodec to encode large strings

2015-08-26 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14713398#comment-14713398
 ] 

Mikhail Khludnev commented on SOLR-7971:


Shalin, 
* Couldn't frequent allocateDirect()/clear() calls take too much time? In that 
case, isn't it worth reusing the direct buffer across writeStr() calls as a 
JavaBinCodec field?
* I gather the buffering is only necessary because we need the length of the 
encoded bytes for the start tag. Would it be a big problem to run 
ByteUtils.UTF16toUTF8() twice: a first pass that only computes the length and 
drops the content, then a second pass that writes the content?
* Just curious: how much effort would it take to extend the javabin format 
with HTTP-like chunks?  

 Reduce memory allocated by JavaBinCodec to encode large strings
 ---

 Key: SOLR-7971
 URL: https://issues.apache.org/jira/browse/SOLR-7971
 Project: Solr
  Issue Type: Sub-task
  Components: Response Writers, SolrCloud
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: Trunk, 5.4

 Attachments: SOLR-7971-directbuffer.patch, SOLR-7971.patch


 As discussed in SOLR-7927, we can reduce the buffer memory allocated by 
 JavaBinCodec while writing large strings.
 https://issues.apache.org/jira/browse/SOLR-7927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14700420#comment-14700420
 {quote}
 The maximum Unicode code point (as of Unicode 8 anyway) is U+10 
 ([http://www.unicode.org/glossary/#code_point]).  This is encoded in UTF-16 
 as surrogate pair {{\uDBFF\uDFFF}}, which takes up two Java chars, and is 
 represented in UTF-8 as the 4-byte sequence {{F4 8F BF BF}}.  This is likely 
 where the mistaken 4-bytes-per-Java-char formulation came from: the maximum 
 number of UTF-8 bytes required to represent a Unicode *code point* is 4.
 The maximum Java char is {{\u}}, which is represented in UTF-8 as the 
 3-byte sequence {{EF BF BF}}.
 So I think it's safe to switch to using 3 bytes per Java char (the unit of 
 measurement returned by {{String.length()}}), like 
 {{CompressingStoredFieldsWriter.writeField()}} does.
 {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
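The per-char byte math quoted above is easy to verify with the JDK; this is just standard UTF-16/UTF-8 behavior, no Solr code involved:

```java
import java.nio.charset.StandardCharsets;

public class Utf8BoundsDemo {
    public static void main(String[] args) {
        // Highest BMP char U+FFFF: one Java char, 3 UTF-8 bytes.
        String maxChar = "\uFFFF";
        System.out.println(maxChar.length() + " "
            + maxChar.getBytes(StandardCharsets.UTF_8).length);

        // Highest code point U+10FFFF: a surrogate pair (2 Java chars),
        // 4 UTF-8 bytes -- i.e. only 2 bytes per Java char.
        String maxCodePoint = new String(Character.toChars(0x10FFFF));
        System.out.println(maxCodePoint.length() + " "
            + maxCodePoint.getBytes(StandardCharsets.UTF_8).length);

        // So bytes-per-Java-char never exceeds 3, which is what justifies
        // sizing the encode buffer at 3 * String.length() instead of 4x.
    }
}
```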



[jira] [Commented] (SOLR-7885) Add support for loading HTTP resources

2015-08-26 Thread Aaron LaBella (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14713468#comment-14713468
 ] 

Aaron LaBella commented on SOLR-7885:
-

Can someone take a look at this patch and commit if approved?
Thanks.

 Add support for loading HTTP resources
 --

 Key: SOLR-7885
 URL: https://issues.apache.org/jira/browse/SOLR-7885
 Project: Solr
  Issue Type: Improvement
  Components: contrib - DataImportHandler, SolrJ
Affects Versions: 5.3
Reporter: Aaron LaBella
 Attachments: SOLR-7885-1.patch, SOLR-7885-2.patch


 I have a need to be able to load data import handler configuration files from 
 an HTTP server instead of the local file system.  So, I modified 
 {code}org.apache.solr.core.SolrResourceLoader{code} and some of the 
 respective dataimport files in {code}org.apache.solr.handler.dataimport{code} 
 to be able to support doing this.  
 {code}solrconfig.xml{code} now has the option to define a parameter: 
 *configRemote*, and if defined (and it's an HTTP(s) URL), it'll attempt to 
 load the resource.  If successful, it'll also persist the resource to the 
 local file system so that it is still available on a Solr server restart in 
 case the remote resource is unavailable at that time.
 Lastly, to be consistent with the pattern that already exists in 
 SolrResourceLoader, this feature is *disabled* by default, and requires the 
 setting of an additional JVM property: 
 {code}-Dsolr.allow.http.resourceloading=true{code}.
 Please review and let me know if there is anything else that needs to be done 
 in order for this patch to make the next release.  As far as I can tell, it's 
 fully tested and ready to go.
 Thanks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: JDK9 b78 Jenkins tests disabled with 32bits/-server compiler (for now)

2015-08-26 Thread Uwe Schindler
There is now an issue @ OpenJDK:

https://bugs.openjdk.java.net/browse/JDK-8134468

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Uwe Schindler [mailto:u...@thetaphi.de]
 Sent: Tuesday, August 25, 2015 2:14 PM
 To: dev@lucene.apache.org
 Cc: rory.odonn...@oracle.com; Balchandra Vaidya; Dawid Weiss
 Subject: JDK9 b78 Jenkins tests disabled with 32bits/-server compiler (for
 now)
 
 Hi,
 
 after some digging with Dawid and running tests:
 - JDK 9b78, 64bits seems fine
 - JDK 9b78, 32bits seems fine with -client compiler
 - JDK 9b78, 32bits with -server compiler is heavily broken: it fails every
 test run. You see the first test failing ASAP if you enable -Xbatch and
 tiered compilation.
 
 So I disabled the -server variant for now on Jenkins to keep failed runs low.
 I will now open a bug report.
 
 Uwe
 
 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de
 
 
 
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional
 commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14712798#comment-14712798
 ] 

ASF subversion and git services commented on LUCENE-6699:
-

Commit 1697865 from [~mikemccand] in branch 'dev/branches/lucene6699'
[ https://svn.apache.org/r1697865 ]

LUCENE-6699: comment out assert

 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
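The "stuff all 3 into a single long value with acceptable precision loss" idea mentioned above can be sketched as plain bit-packing with 21 bits per dimension. This is an illustration of the trade-off only, not the encoding the issue ultimately adopted; the ranges and error bounds below are assumptions for the demo:

```java
public class PackLatLonZDemo {
    static final int BITS = 21;                  // 3 x 21 = 63 bits fit in a long
    static final long MASK = (1L << BITS) - 1;

    // Quantize a value in [min, max] onto a 21-bit integer grid.
    static long quantize(double v, double min, double max) {
        return Math.round((v - min) / (max - min) * MASK);
    }
    static double dequantize(long q, double min, double max) {
        return min + (q / (double) MASK) * (max - min);
    }

    public static void main(String[] args) {
        double lat = 40.7128, lon = -74.0060, z = 0.5; // z normalized to [-1, 1]
        long packed = (quantize(lat, -90, 90) << (2 * BITS))
                    | (quantize(lon, -180, 180) << BITS)
                    |  quantize(z, -1, 1);

        double lat2 = dequantize((packed >>> (2 * BITS)) & MASK, -90, 90);
        double lon2 = dequantize((packed >>> BITS) & MASK, -180, 180);
        double z2   = dequantize(packed & MASK, -1, 1);

        // 21 bits over 180 degrees is ~8.6e-5 degrees per step, so the
        // round-trip error stays within roughly 10 meters on the surface.
        System.out.println(Math.abs(lat2 - lat) < 1e-4);
        System.out.println(Math.abs(lon2 - lon) < 1e-3);
        System.out.println(Math.abs(z2 - z) < 1e-5);
    }
}
```

Whether that precision loss is acceptable is exactly the question raised in the discussion; BinaryDocValues avoids it at the cost of more storage.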



[jira] [Commented] (SOLR-7888) Make Lucene's AnalyzingInfixSuggester.lookup() method that takes a BooleanQuery filter parameter available in Solr

2015-08-26 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14713078#comment-14713078
 ] 

Jan Høydahl commented on SOLR-7888:
---

I think this is close to committable. If there are no objections to moving 
{{CONTEXTS_FIELD_NAME}} to {{Lookup.java}} by tomorrow, I'll do a round of 
final reviews and manual testing before committing.

 Make Lucene's AnalyzingInfixSuggester.lookup() method that takes a 
 BooleanQuery filter parameter available in Solr
 --

 Key: SOLR-7888
 URL: https://issues.apache.org/jira/browse/SOLR-7888
 Project: Solr
  Issue Type: New Feature
  Components: Suggester
Affects Versions: 5.2.1
Reporter: Arcadius Ahouansou
Assignee: Jan Høydahl
 Fix For: 5.4

 Attachments: SOLR-7888.patch, SOLR-7888.patch, SOLR-7888.patch, 
 SOLR-7888.patch, SOLR-7888.patch


  LUCENE-6464 has introduced a very flexible lookup method that takes as 
 parameter a BooleanQuery that is used for filtering results.
 This ticket is to expose that method to Solr.
 This would allow user to do:
 {code}
 /suggest?suggest=true&suggest.build=true&suggest.q=term&suggest.contextFilterQuery=contexts:tennis
 /suggest?suggest=true&suggest.build=true&suggest.q=term&suggest.contextFilterQuery=contexts:golf AND contexts:football
 {code}
 etc
 Given that context filtering is currently only implemented by 
 {code}AnalyzingInfixSuggester{code} and 
 {code}BlendedInfixSuggester{code}, this initial implementation will support 
 only these two lookup implementations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.8.0) - Build # 2623 - Still Failing!

2015-08-26 Thread Michael McCandless
This is https://issues.apache.org/jira/browse/LUCENE-6750

I don't see why the test fails, and it won't fail on beasting for me on Linux.

It seems to always fail on OS X ...

Mike McCandless

http://blog.mikemccandless.com


On Wed, Aug 26, 2015 at 4:52 AM, Policeman Jenkins Server
jenk...@thetaphi.de wrote:
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2623/
 Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

 1 tests failed.
 FAILED:  
 org.apache.lucene.TestMergeSchedulerExternal.testSubclassConcurrentMergeScheduler

 Error Message:


 Stack Trace:
 java.lang.AssertionError
 at 
 __randomizedtesting.SeedInfo.seed([A2D285B04961A340:2553381D4D41D944]:0)
 at org.junit.Assert.fail(Assert.java:92)
 at org.junit.Assert.assertTrue(Assert.java:43)
 at org.junit.Assert.assertTrue(Assert.java:54)
 at 
 org.apache.lucene.TestMergeSchedulerExternal.testSubclassConcurrentMergeScheduler(TestMergeSchedulerExternal.java:116)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:497)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
 at 
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
 at 
 org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
 at java.lang.Thread.run(Thread.java:745)




 Build Log:
 [...truncated 1086 lines...]
[junit4] Suite: org.apache.lucene.TestMergeSchedulerExternal
[junit4]   2 NOTE: reproduce with: ant test  
 -Dtestcase=TestMergeSchedulerExternal 
 

[jira] [Commented] (SOLR-7922) JSON API facet doesnt return facet with attribute that equals to 0

2015-08-26 Thread Yaniv Hemi (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14713451#comment-14713451
 ] 

Yaniv Hemi commented on SOLR-7922:
--

Hi,
wasn't this issue included in Solr 5.3?

 JSON API facet doesnt return facet with attribute that equals to 0
 --

 Key: SOLR-7922
 URL: https://issues.apache.org/jira/browse/SOLR-7922
 Project: Solr
  Issue Type: Bug
  Components: Facet Module
Affects Versions: 5.3
Reporter: Yaniv Hemi
Assignee: Yonik Seeley
Priority: Critical
  Labels: facet, json, jsonapi
 Attachments: SOLR-7922.patch


 The regular facet returns "0",33739, but the JSON Facet API returns all values and 
 counts except the 0 bucket.
 see the example:
 {code:json}
 {
   "responseHeader":{
     "status":0,
     "QTime":9,
     "params":{
       "q":"*:*",
       "json.facet":"{\n\tfacetForMeta_i_interactionSentiment: {\t\t\ntype : terms, \n\t\tfield : Meta_i_interactionSentiment\n\t}\n}",
       "facet.field":"Meta_i_interactionSentiment",
       "indent":"true",
       "fq":["channel:TelcoDefaultChannel",
         "content_type:PARENT"],
       "rows":"0",
       "wt":"json",
       "facet":"true"}},
   "response":{"numFound":167857,"start":0,"maxScore":1.0,"docs":[]
   },
   "facet_counts":{
     "facet_queries":{},
     "facet_fields":{
       "Meta_i_interactionSentiment":[
         "-1",33743,
         "0",33739,
         "-2",33499,
         "2",33451,
         "1",33425]},
     "facet_dates":{},
     "facet_ranges":{},
     "facet_intervals":{},
     "facet_heatmaps":{}},
   "facets":{
     "count":167857,
     "facetForMeta_i_interactionSentiment":{
       "buckets":[{
           "val":-1,
           "count":33743},
         {
           "val":-2,
           "count":33499},
         {
           "val":2,
           "count":33451},
         {
           "val":1,
           "count":33425}]}}}
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.8.0) - Build # 2623 - Still Failing!

2015-08-26 Thread Dawid Weiss
It's a sign. Your license for the Holy Apple has expired...

Dawid

On Wed, Aug 26, 2015 at 3:08 PM, Michael McCandless
luc...@mikemccandless.com wrote:
 This is https://issues.apache.org/jira/browse/LUCENE-6750

 I don't see why the test fails, and it won't fail on beasting for me on Linux.

 It seems to always fail on OS X ...

 Mike McCandless

 http://blog.mikemccandless.com


 On Wed, Aug 26, 2015 at 4:52 AM, Policeman Jenkins Server
 jenk...@thetaphi.de wrote:
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2623/
 Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

 1 tests failed.
 FAILED:  
 org.apache.lucene.TestMergeSchedulerExternal.testSubclassConcurrentMergeScheduler

 Error Message:


 Stack Trace:
 java.lang.AssertionError
 at 
 __randomizedtesting.SeedInfo.seed([A2D285B04961A340:2553381D4D41D944]:0)
 at org.junit.Assert.fail(Assert.java:92)
 at org.junit.Assert.assertTrue(Assert.java:43)
 at org.junit.Assert.assertTrue(Assert.java:54)
 at 
 org.apache.lucene.TestMergeSchedulerExternal.testSubclassConcurrentMergeScheduler(TestMergeSchedulerExternal.java:116)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:497)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
 at 
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
 at 
 org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
 at java.lang.Thread.run(Thread.java:745)




 Build Log:
 [...truncated 1086 lines...]

[jira] [Commented] (SOLR-7971) Reduce memory allocated by JavaBinCodec to encode large strings

2015-08-26 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14713437#comment-14713437
 ] 

Shalin Shekhar Mangar commented on SOLR-7971:
-

bq. couldn't it turn out that frequently calling allocateDirect() & clear() takes 
too much time? In that case, isn't it worth reusing directBuffer across 
writeStr() calls as a JavaBinCodec field?

Yes, allocateDirect() can be slower and we should reuse the buffer as much as 
possible. This was just an idea as a patch. I don't intend to commit it as it 
is.

bq. I gather that buffering is necessary just because we need to calculate the 
length of the encoded bytes for the starting tag. Would it be a big problem to loop 
ByteUtils.UTF16toUTF8() twice: a first pass to calculate the length while 
dropping the content, then a second pass to actually write the content?

Hmm, interesting idea. We could also have a method calcUTF16toUTF8Length which 
avoids all the bitwise operators and just returns the required length.
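
A length-only pass like the {{calcUTF16toUTF8Length}} idea could be sketched as follows. This is hypothetical (no such method exists in {{ByteUtils}}); it simply mirrors the standard UTF-16 to UTF-8 size rules without writing any bytes.

```java
/** Hypothetical sketch: compute the UTF-8 encoded length of a UTF-16 string
 *  without writing any bytes, mirroring the standard per-char size rules. */
public class Utf8Len {
  static int calcUTF16toUTF8Length(CharSequence s) {
    int bytes = 0;
    for (int i = 0; i < s.length(); i++) {
      char c = s.charAt(i);
      if (c < 0x80) {
        bytes += 1;                      // ASCII: 1 byte
      } else if (c < 0x800) {
        bytes += 2;                      // U+0080..U+07FF: 2 bytes
      } else if (Character.isHighSurrogate(c) && i + 1 < s.length()
                 && Character.isLowSurrogate(s.charAt(i + 1))) {
        bytes += 4;                      // valid surrogate pair: 4 bytes
        i++;                             // consume the low surrogate as well
      } else {
        bytes += 3;                      // rest of the BMP (and lone surrogates): 3 bytes
      }
    }
    return bytes;
  }
}
```

Note this is also consistent with the 3-bytes-per-Java-char worst case quoted below: a surrogate pair is two Java chars but encodes to only 4 bytes.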

bq. just curious, how much effort would it take to extend the javabin format 
with HTTP-like chunks?

It should be possible. We'll need a new chunked type and an upgrade to the 
JavaBin version. Or we may be able to get away with modifying only the LogCodec 
in TransactionLog.

 Reduce memory allocated by JavaBinCodec to encode large strings
 ---

 Key: SOLR-7971
 URL: https://issues.apache.org/jira/browse/SOLR-7971
 Project: Solr
  Issue Type: Sub-task
  Components: Response Writers, SolrCloud
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: Trunk, 5.4

 Attachments: SOLR-7971-directbuffer.patch, SOLR-7971.patch


 As discussed in SOLR-7927, we can reduce the buffer memory allocated by 
 JavaBinCodec while writing large strings.
 https://issues.apache.org/jira/browse/SOLR-7927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14700420#comment-14700420
 {quote}
 The maximum Unicode code point (as of Unicode 8 anyway) is U+10FFFF 
 ([http://www.unicode.org/glossary/#code_point]).  This is encoded in UTF-16 
 as surrogate pair {{\uDBFF\uDFFF}}, which takes up two Java chars, and is 
 represented in UTF-8 as the 4-byte sequence {{F4 8F BF BF}}.  This is likely 
 where the mistaken 4-bytes-per-Java-char formulation came from: the maximum 
 number of UTF-8 bytes required to represent a Unicode *code point* is 4.
 The maximum Java char is {{\uFFFF}}, which is represented in UTF-8 as the 
 3-byte sequence {{EF BF BF}}.
 So I think it's safe to switch to using 3 bytes per Java char (the unit of 
 measurement returned by {{String.length()}}), like 
 {{CompressingStoredFieldsWriter.writeField()}} does.
 {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7971) Reduce memory allocated by JavaBinCodec to encode large strings

2015-08-26 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14713470#comment-14713470
 ] 

Yonik Seeley commented on SOLR-7971:


bq.  limit the intermediate on-heap buffer to 64KB only

I only glanced at it, but it's probably a little too simplistic.  You can't cut 
UTF-16 in random places, encode it as UTF-8, and get the same bytes, because of 
two-char (surrogate pair) code points.
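
A chunked encoder would therefore need to nudge each cut point off a surrogate pair. A minimal sketch of that boundary adjustment (the method and names are hypothetical, not part of JavaBinCodec):

```java
/** Hypothetical sketch: adjust a proposed UTF-16 chunk end so it never
 *  splits a surrogate pair; the whole pair moves into the next chunk. */
public class ChunkBoundary {
  static int safeChunkEnd(char[] buf, int start, int proposedEnd) {
    if (proposedEnd > start && proposedEnd < buf.length
        && Character.isHighSurrogate(buf[proposedEnd - 1])
        && Character.isLowSurrogate(buf[proposedEnd])) {
      return proposedEnd - 1;  // back up one char: the high surrogate stays with its low half
    }
    return proposedEnd;
  }
}
```

With this adjustment, each chunk is a valid UTF-16 sequence on its own, so encoding the chunks independently yields the same UTF-8 bytes as encoding the whole string at once.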

bq. I gather that buffering is necessary just because we need to calculate the 
length of the encoded bytes for the starting tag

Yeah, the length is really the only reason we need to buffer and copy.

We should really consider returning to how v1 of the protocol handled things: 
it needed no buffering at all because it simply used String.length().  
We just need to work out how to handle back-compat, of course.


 Reduce memory allocated by JavaBinCodec to encode large strings
 ---

 Key: SOLR-7971
 URL: https://issues.apache.org/jira/browse/SOLR-7971
 Project: Solr
  Issue Type: Sub-task
  Components: Response Writers, SolrCloud
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: Trunk, 5.4

 Attachments: SOLR-7971-directbuffer.patch, SOLR-7971.patch


 As discussed in SOLR-7927, we can reduce the buffer memory allocated by 
 JavaBinCodec while writing large strings.
 https://issues.apache.org/jira/browse/SOLR-7927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14700420#comment-14700420
 {quote}
 The maximum Unicode code point (as of Unicode 8 anyway) is U+10FFFF 
 ([http://www.unicode.org/glossary/#code_point]).  This is encoded in UTF-16 
 as surrogate pair {{\uDBFF\uDFFF}}, which takes up two Java chars, and is 
 represented in UTF-8 as the 4-byte sequence {{F4 8F BF BF}}.  This is likely 
 where the mistaken 4-bytes-per-Java-char formulation came from: the maximum 
 number of UTF-8 bytes required to represent a Unicode *code point* is 4.
 The maximum Java char is {{\uFFFF}}, which is represented in UTF-8 as the 
 3-byte sequence {{EF BF BF}}.
 So I think it's safe to switch to using 3 bytes per Java char (the unit of 
 measurement returned by {{String.length()}}), like 
 {{CompressingStoredFieldsWriter.writeField()}} does.
 {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6750) TestMergeSchedulerExternal failure

2015-08-26 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14713616#comment-14713616
 ] 

Steve Rowe commented on LUCENE-6750:


New fail on Policeman Jenkins 
[http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2623/], this time on 
branch_5x: 

{noformat}
   [junit4] Suite: org.apache.lucene.TestMergeSchedulerExternal
   [junit4]   2 NOTE: reproduce with: ant test  
-Dtestcase=TestMergeSchedulerExternal 
-Dtests.method=testSubclassConcurrentMergeScheduler 
-Dtests.seed=A2D285B04961A340 -Dtests.slow=true -Dtests.locale=it 
-Dtests.timezone=Pacific/Majuro -Dtests.asserts=true -Dtests.file.encoding=UTF-8
   [junit4] FAILURE 0.57s J0 | 
TestMergeSchedulerExternal.testSubclassConcurrentMergeScheduler 
   [junit4] Throwable #1: java.lang.AssertionError
   [junit4]at 
__randomizedtesting.SeedInfo.seed([A2D285B04961A340:2553381D4D41D944]:0)
   [junit4]at 
org.apache.lucene.TestMergeSchedulerExternal.testSubclassConcurrentMergeScheduler(TestMergeSchedulerExternal.java:116)
   [junit4]at java.lang.Thread.run(Thread.java:745)
   [junit4]   2 NOTE: test params are: codec=Asserting(Lucene53): 
{id=PostingsFormat(name=MockRandom)}, docValues:{}, sim=DefaultSimilarity, 
locale=it, timezone=Pacific/Majuro
   [junit4]   2 NOTE: Mac OS X 10.8.5 x86_64/Oracle Corporation 1.8.0_60 
(64-bit)/cpus=3,threads=1,free=368533088,total=518979584
   [junit4]   2 NOTE: All tests run in this JVM: [TestLogMergePolicy, 
TestSameTokenSamePosition, TestCustomNorms, TestTerms, Test2BPostings, 
TestSpansEnum, TestBinaryDocValuesUpdates, TestDocValuesScoring, 
TestSparseFixedBitSet, TestByteArrayDataInput, FuzzyTermOnShortTermsTest, 
TestLucene50StoredFieldsFormat, TestIndexWriterDelete, TestTopDocsCollector, 
TestIndexWriter, TestWildcard, TestSimilarityProvider, TestBytesRefArray, 
TestQueryRescorer, TestSpans, TestFilterCachingPolicy, TestNot, 
TestSpanNotQuery, TestTermRangeFilter, TestScorerPerf, TestTermsEnum, 
TestApproximationSearchEquivalence, TestMinimize, TestSizeBoundedForceMerge, 
TestReadOnlyIndex, TestConjunctions, TestLongBitSet, TestMinShouldMatch2, 
TestLucene50SegmentInfoFormat, TestCharTermAttributeImpl, 
TestRecyclingIntBlockAllocator, TestQueryBuilder, TestPhrasePrefixQuery, 
TestIndexWriterNRTIsCurrent, TestNeverDelete, TestMatchNoDocsQuery, 
TestSegmentReader, TestIntsRef, TestBytesStore, TestPackedTokenAttributeImpl, 
TestTransactions, TestSimpleExplanationsOfNonMatches, TestFieldsReader, 
TestSearch, TestLucene50DocValuesFormat, Test2BPositions, 
TestForceMergeForever, TestMultiFields, TestNorms, 
TestFrequencyTrackingRingBuffer, TestFastCompressionMode, TestNRTReaderCleanup, 
TestBooleanMinShouldMatch, TestElevationComparator, TestLRUFilterCache, 
TestPerFieldPostingsFormat2, TestSloppyPhraseQuery2, TestBytesRefHash, 
TestIndexCommit, TestWindowsMMap, 
TestLucene50StoredFieldsFormatHighCompression, TestBinaryDocument, 
TestSleepingLockWrapper, TestNRTReaderWithThreads, TestNamedSPILoader, 
TestMultiLevelSkipList, TestDirectoryReaderReopen, 
TestConcurrentMergeScheduler, TestDateSort, TestBoolean2, 
TestMultiThreadTermVectors, TestRecyclingByteBlockAllocator, 
Test2BPostingsBytes, TestCachingWrapperFilter, TestMultiPhraseEnum, 
TestDuelingCodecsAtNight, TestPrefixQuery, TestMergeSchedulerExternal]
{noformat}

 TestMergeSchedulerExternal failure
 --

 Key: LUCENE-6750
 URL: https://issues.apache.org/jira/browse/LUCENE-6750
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: Trunk, 5.4
Reporter: Steve Rowe

 Policeman Jenkins found a failure on OS X 
 [http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2649/] that I can't 
 reproduce on OS X 10.10.4 using Oracle Java 1.8.0_20, even after beasting 200 
 total suite iterations with the seed:
 {noformat}
[junit4] Suite: org.apache.lucene.TestMergeSchedulerExternal
[junit4]   2 NOTE: reproduce with: ant test  
 -Dtestcase=TestMergeSchedulerExternal 
 -Dtests.method=testSubclassConcurrentMergeScheduler 
 -Dtests.seed=3AF868F9E00E5EBA -Dtests.slow=true -Dtests.locale=ru 
 -Dtests.timezone=Europe/London -Dtests.asserts=true 
 -Dtests.file.encoding=ISO-8859-1
[junit4] FAILURE 0.37s J1 | 
 TestMergeSchedulerExternal.testSubclassConcurrentMergeScheduler 
[junit4] Throwable #1: java.lang.AssertionError
[junit4]  at 
 __randomizedtesting.SeedInfo.seed([3AF868F9E00E5EBA:BD79D554E42E24BE]:0)
[junit4]  at 
 org.apache.lucene.TestMergeSchedulerExternal.testSubclassConcurrentMergeScheduler(TestMergeSchedulerExternal.java:116)
[junit4]  at java.lang.Thread.run(Thread.java:745)
[junit4]   2 NOTE: test params are: codec=Asserting(Lucene53): 
 {id=PostingsFormat(name=Memory doPackFST= true)}, docValues:{}, 
 sim=DefaultSimilarity, locale=ru, 

[jira] [Updated] (LUCENE-6750) TestMergeSchedulerExternal failure

2015-08-26 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-6750:
---
Affects Version/s: 5.4
   Trunk

 TestMergeSchedulerExternal failure
 --

 Key: LUCENE-6750
 URL: https://issues.apache.org/jira/browse/LUCENE-6750
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: Trunk, 5.4
Reporter: Steve Rowe

 Policeman Jenkins found a failure on OS X 
 [http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2649/] that I can't 
 reproduce on OS X 10.10.4 using Oracle Java 1.8.0_20, even after beasting 200 
 total suite iterations with the seed:
 {noformat}
[junit4] Suite: org.apache.lucene.TestMergeSchedulerExternal
[junit4]   2 NOTE: reproduce with: ant test  
 -Dtestcase=TestMergeSchedulerExternal 
 -Dtests.method=testSubclassConcurrentMergeScheduler 
 -Dtests.seed=3AF868F9E00E5EBA -Dtests.slow=true -Dtests.locale=ru 
 -Dtests.timezone=Europe/London -Dtests.asserts=true 
 -Dtests.file.encoding=ISO-8859-1
[junit4] FAILURE 0.37s J1 | 
 TestMergeSchedulerExternal.testSubclassConcurrentMergeScheduler 
[junit4] Throwable #1: java.lang.AssertionError
[junit4]  at 
 __randomizedtesting.SeedInfo.seed([3AF868F9E00E5EBA:BD79D554E42E24BE]:0)
[junit4]  at 
 org.apache.lucene.TestMergeSchedulerExternal.testSubclassConcurrentMergeScheduler(TestMergeSchedulerExternal.java:116)
[junit4]  at java.lang.Thread.run(Thread.java:745)
[junit4]   2 NOTE: test params are: codec=Asserting(Lucene53): 
 {id=PostingsFormat(name=Memory doPackFST= true)}, docValues:{}, 
 sim=DefaultSimilarity, locale=ru, timezone=Europe/London
[junit4]   2 NOTE: Mac OS X 10.8.5 x86_64/Oracle Corporation 1.8.0_51 
 (64-bit)/cpus=3,threads=1,free=16232544,total=54853632
[junit4]   2 NOTE: All tests run in this JVM: [TestDateSort, 
 TestWildcardRandom, TestIndexWriterMergePolicy, TestPackedInts, 
 TestSpansAdvanced, TestBooleanOr, TestParallelReaderEmptyIndex, 
 TestFixedBitDocIdSet, TestIndexWriterDeleteByQuery, Test4GBStoredFields, 
 TestMultiThreadTermVectors, TestIndexWriterConfig, TestToken, 
 TestMergeSchedulerExternal]
[junit4] Completed [21/401] on J1 in 0.39s, 2 tests, 1 failure <<< FAILURES!
 {noformat} 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6759) Integrate lat/long BKD and spatial 3d, part 2

2015-08-26 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14713598#comment-14713598
 ] 

Michael McCandless commented on LUCENE-6759:


Oh wait, we need to do more than simply disable the assert, because the test 
will still fail, just a bit later when it verifies all hits (the assert was 
just early detection):

{noformat}
   [junit4] Suite: org.apache.lucene.bkdtree3d.TestGeo3DPointField
   [junit4]   2 Aug 26, 2015 8:48:39 AM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
   [junit4]   2 WARNING: Uncaught exception in thread: 
Thread[T0,5,TGRP-TestGeo3DPointField]
   [junit4]   2 java.lang.AssertionError: T0: iter=63 id=71226 docID=71226 
lat=-0.004763555725376775 lon=-0.0076479587074575126 expected false but got: 
true deleted?=false
   [junit4]   2   point1=[lat=-0.004763555725376775, 
lon=-0.0076479587074575126], iswithin=true
   [junit4]   2   point2=[X=1.0010781049211872, Y=-0.007656353133570567, 
Z=-0.0047688666958216885], iswithin=false
   [junit4]   2   query=PointInGeo3DShapeQuery: field=point:PlanetModel: 
PlanetModel.WGS84 Shape: GeoCircle: {planetmodel=PlanetModel.WGS84, 
center=[lat=-7.573175600018171E-4, lon=-0.001184769535031697], 
radius=0.007585721238160122(0.4346298115093282)}
   [junit4]   2at 
__randomizedtesting.SeedInfo.seed([D75138C6C25D1BCF]:0)
   [junit4]   2at org.junit.Assert.fail(Assert.java:93)
   [junit4]   2at 
org.apache.lucene.bkdtree3d.TestGeo3DPointField$4._run(TestGeo3DPointField.java:625)
   [junit4]   2at 
org.apache.lucene.bkdtree3d.TestGeo3DPointField$4.run(TestGeo3DPointField.java:521)
   [junit4]   2 
   [junit4]   2 Aug 26, 2015 8:48:40 AM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
   [junit4]   2 WARNING: Uncaught exception in thread: 
Thread[T2,5,TGRP-TestGeo3DPointField]
   [junit4]   2 java.lang.AssertionError: T2: iter=62 id=71226 docID=71226 
lat=-0.004763555725376775 lon=-0.0076479587074575126 expected false but got: 
true deleted?=false
   [junit4]   2   point1=[lat=-0.004763555725376775, 
lon=-0.0076479587074575126], iswithin=true
   [junit4]   2   point2=[X=1.0010781049211872, Y=-0.007656353133570567, 
Z=-0.0047688666958216885], iswithin=false
   [junit4]   2   query=PointInGeo3DShapeQuery: field=point:PlanetModel: 
PlanetModel.WGS84 Shape: GeoCircle: {planetmodel=PlanetModel.WGS84, 
center=[lat=-7.573175600018171E-4, lon=-0.001184769535031697], 
radius=0.007585721238160122(0.4346298115093282)}
   [junit4]   2at 
__randomizedtesting.SeedInfo.seed([D75138C6C25D1BCF]:0)
   [junit4]   2at org.junit.Assert.fail(Assert.java:93)
   [junit4]   2at 
org.apache.lucene.bkdtree3d.TestGeo3DPointField$4._run(TestGeo3DPointField.java:625)
   [junit4]   2at 
org.apache.lucene.bkdtree3d.TestGeo3DPointField$4.run(TestGeo3DPointField.java:521)
   [junit4]   2 
   [junit4]   2 NOTE: reproduce with: ant test  -Dtestcase=TestGeo3DPointField 
-Dtests.method=testRandomMedium -Dtests.seed=D75138C6C25D1BCF 
-Dtests.multiplier=10 -Dtests.slow=true -Dtests.locale=de_GR 
-Dtests.timezone=America/Managua -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII
   [junit4] ERROR   62.4s | TestGeo3DPointField.testRandomMedium 
{noformat}

So how do we fix the test correspondingly?  Right now, it (intentionally) 
quantizes the double x,y,z of the point to match what the doc-values 
pack/unpack did ...

Maybe, we could just fix the test so that if isWithin differs between the 
quantized and unquantized x,y,z, we skip checking that hit?
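
That skip-the-hit idea could be expressed as a small predicate. A hypothetical sketch, using a stand-in Shape interface rather than the real spatial3d GeoShape:

```java
/** Hypothetical sketch of the proposed test fix: only verify a hit when the
 *  shape gives the same answer for the original and the quantized point. */
public class QuantizedCheck {
  interface Shape { boolean isWithin(double x, double y, double z); }

  static boolean shouldCheckHit(Shape shape,
                                double x, double y, double z,       // unquantized point
                                double qx, double qy, double qz) {  // doc-values quantized point
    // If quantization flips isWithin (the point sits at the shape boundary),
    // skip verification of this hit instead of failing the test.
    return shape.isWithin(x, y, z) == shape.isWithin(qx, qy, qz);
  }
}
```

For points well inside or outside the shape the two answers agree and the hit is still verified; only boundary-straddling points, like the one in the failure above, would be skipped.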

 Integrate lat/long BKD and spatial 3d, part 2
 -

 Key: LUCENE-6759
 URL: https://issues.apache.org/jira/browse/LUCENE-6759
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
 Attachments: LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch


 This is just a continuation of LUCENE-6699, which became too big.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Should we EOL support for 32 bit systems?

2015-08-26 Thread Shawn Heisey
On 8/25/2015 10:23 AM, Erick Erickson wrote:
 I have no real skin in this game, but I thought it worth asking after
 Uwe's recent e-mail about disabling 32bits with -server tests.
 
 I guess it boils down to who is using 32-bit versions?. Are we
 spending time/energy supporting a configuration that is not useful to
 enough people to merit the effort?
 
 I'm perfectly content if the response is That's a really stupid
 question to ask, of course we must continue to support 32-bit OSs.
 Although some evidence would be nice ;)
 
 It's just that nobody I work with is running 32 bit OS's. Whether
 that's just my limited exposure to people running small systems is
 certainly a valid question.

As long as Oracle has 32-bit versions of Java available for easy
download from the main website, we probably need to keep testing and
supporting it.  Lucene and Solr won't scale on a 32-bit Java, but I
think there are still plenty of people who start there, and some who
actually run it on 32-bit.

That said, I think we do need to be thinking about dropping support in
the future, even though we can't really do so yet.

Intel stopped mass-producing 32-bit chips for the server market in 2005,
and stopped mass production of 32-bit chips for the consumer market in
2006.  For nearly the last decade, it has been very difficult to buy a
computer incapable of 64-bit operation.  Since Vista and Windows 7 came
on the scene, Microsoft has been pushing 64-bit client operating
systems.  Server 2008R2 and Server 2012 are only available in 64-bit
editions.  Macs have been 64-bit for a VERY long time.

I think the biggest market for 32-bit Java is browsers on Windows.
Virtually all installs of Firefox and Chrome for Windows are 32-bit, and
require the 32-bit Java.  I bet that if the major browser vendors were
to all put out 64-bit versions on their main download links, downloads
of 32-bit Java would begin to dwindle rapidly.

On Windows 10, it looks like Microsoft has finally gone 64-bit by
default with the Edge browser.  This might force the others to follow
suit.  When that happens, I think Oracle may strongly consider dropping
32-bit support from the next major Java version ... and even if they
don't do it at that time, they probably will do so on the next major
version after that.

I just went to java.com with Microsoft Edge.  It says "In Windows 10,
the Edge browser does not support plug-ins and therefore will not run
Java. Switch to a different browser (Firefox or Internet Explorer 11) to
run the Java plug-in."  They are not going to be helpful in pushing
64-bit Java. :)

Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7950) Invalid auth scheme configuration of Http client when using Kerberos (SPNEGO)

2015-08-26 Thread Hrishikesh Gadre (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hrishikesh Gadre updated SOLR-7950:
---
Attachment: solr-7950-v2.patch

[~gchanan]

bq. We don't support Basic + Negotiate now, right? So we need another solr patch to expose the underlying problem?

Yes, that is correct. I identified this issue while working on LDAP integration 
(please refer to 
[HADOOP-12082|https://issues.apache.org/jira/browse/HADOOP-12082]). I don't 
think we use the Hadoop security framework in Solr, so we may have to introduce 
the Basic authentication scheme (in addition to SPNEGO) in Solr in some other way.

bq. There's no fallback mechanism?

No. When the server supports multiple authentication schemes, the client needs 
to pick one scheme to use (based on a list of preferences). The default 
configuration prefers the BASIC scheme over SPNEGO, hence the client attempts 
to use the basic auth scheme. But since the username/password credentials are 
not configured, the authentication fails. With my patch, we explicitly 
configure the client to use SPNEGO.

bq. Or can you prefer SPNego over basic?

Yes. This can be done by adding SPNEGO before BASIC in the preference list. 
Here is how the default preferences are initialized.

http://grepcode.com/file/repo1.maven.org/maven2/org.apache.httpcomponents/httpclient/4.4.1/org/apache/http/impl/client/AbstractHttpClient.java#AbstractHttpClient.createAuthSchemeRegistry%28%29

I think when we configure HttpClientUtil with Krb5HttpClientConfigurer, we 
intend to use the Kerberos authentication mechanism on the client side. Hence I 
think configuring just one auth scheme (SPNEGO in this case) is preferable to 
SPNEGO + BASIC.

I have also updated the patch to fix a unit test failure.
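The preference-driven selection described above can be sketched in plain Java. This is a toy model for illustration only, not the HttpClient API; the class, method, and list names are invented for this example.

```java
import java.util.Arrays;
import java.util.List;

// Toy model of the negotiation described above: the client walks its
// auth-scheme preference list and uses the first scheme the server offers.
public class AuthSchemePick {
    static String pickScheme(List<String> clientPrefs, List<String> serverOffers) {
        for (String scheme : clientPrefs) {
            if (serverOffers.contains(scheme)) {
                return scheme;
            }
        }
        return null;  // no common scheme: the request cannot be authenticated
    }

    public static void main(String[] args) {
        List<String> serverOffers = Arrays.asList("Basic", "Negotiate");
        // A default-style preference list puts Basic first, so Basic is chosen
        // even though only Kerberos credentials are configured -> auth fails.
        System.out.println(pickScheme(Arrays.asList("Basic", "Negotiate"), serverOffers));
        // Registering only SPNEGO (the patch's approach) forces Negotiate.
        System.out.println(pickScheme(Arrays.asList("Negotiate"), serverOffers));
    }
}
```

The first call prints {{Basic}}, the second {{Negotiate}}; this is the mechanism by which registering only the SPNEGO scheme sidesteps the missing basic-auth credentials.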

 Invalid auth scheme configuration of Http client when using Kerberos (SPNEGO)
 -

 Key: SOLR-7950
 URL: https://issues.apache.org/jira/browse/SOLR-7950
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.3, Trunk
Reporter: Hrishikesh Gadre
Assignee: Gregory Chanan
 Attachments: solr-7950-v2.patch, solr-7950.patch


 When using kerberos authentication mechanism (SPNEGO auth scheme), the Apache 
 Http client is incorrectly configured with *all* auth schemes (e.g. Basic, 
 Digest, NTLM, Kerberos, Negotiate etc.) instead of just 'Negotiate'. 
 This issue was identified after configuring Solr with both Basic + Negotiate 
 authentication schemes simultaneously. The problem in this case is that Http 
 client is configured with Kerberos credentials and the default (and 
 incorrect) auth scheme configuration prefers Basic authentication over 
 Kerberos. Since the basic authentication credentials are missing, the 
 authentication and as a result the Http request fails. (I ran into this 
 problem while creating a collection where there is an internal communication 
 between Solr servers).
 The root cause for this issue is that, AbstractHttpClient::getAuthSchemes() 
 API call prepares an AuthSchemeRegistry instance with all possible 
 authentication schemes. Hence when we register the SPNEGO auth scheme in Solr 
 codebase, it overrides the previous configuration for SPNEGO - but doesn't 
 remove the other auth schemes from the client configuration. Please take a 
 look at relevant code snippet.
 https://github.com/apache/lucene-solr/blob/trunk/solr/solrj/src/java/org/apache/solr/client/solrj/impl/Krb5HttpClientConfigurer.java#L80
 A trivial fix would be to prepare a new AuthSchemeRegistry instance 
 configured with just the SPNEGO mechanism and set it in the HttpClient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




RE: svn commit: r1697131 - in /lucene/dev/trunk/lucene: JRE_VERSION_MIGRATION.txt test-framework/src/java/org/apache/lucene/util/TestUtil.java

2015-08-26 Thread Uwe Schindler
Hi Hoss,

sorry did not see your message!

1) The test framework never randomly disables asserts (it cannot do this, 
because asserts are enabled before the VM starts - you cannot change that at 
runtime). So to disable asserts you have to pass this explicitly to the test 
runner, and the default is asserts enabled. Currently Policeman Jenkins does 
not disable asserts, but it could do that. For normal developers running tests, 
they are always enabled (we also have a test that validates this). In addition, 
our tests should not check the JVM for correctness; the assert is just there to 
state an assumption about how whitespace should behave. So when this assert 
fails, the test is still not wrong.

2) Now the answer: I changed it because of performance. The String.format / 
String concatenation is executed for every single character, even if the assert 
does not fail. If you create a large whitespace string this takes forever (I 
tried it), because it creates a new string, formats it, and so on. A native 
Java assert only evaluates the message part if the assert actually failed.

Based on (1) and (2) this is OK. If you still want to revert to 
Assert.assertTrue like before, please use an if-condition instead:
if (not whitespace) Assert.fail(String.format(error message));
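The difference can be demonstrated with a small standalone sketch (not Lucene code; the class and counter are invented for illustration). The message operand of a native assert is only evaluated when the condition is false, while the eager style pays the formatting cost on every call:

```java
import java.util.Locale;

// Illustrative sketch: counts how often the "expensive" message string is
// actually built under the eager style vs. the native assert style.
public class AssertCostDemo {
    static int messageBuilds = 0;

    static String expensiveMessage(char c) {
        messageBuilds++;  // record that the message string was built
        return String.format(Locale.ENGLISH, "Not really whitespace? '\\u%04X'", (int) c);
    }

    // Eager style: the message is built on every call, pass or fail.
    static void eagerCheck(char c) {
        String msg = expensiveMessage(c);
        if (!Character.isWhitespace(c)) throw new AssertionError(msg);
    }

    // Native assert style: expensiveMessage(c) runs only if the check fails.
    static void lazyCheck(char c) {
        assert Character.isWhitespace(c) : expensiveMessage(c);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) eagerCheck(' ');
        System.out.println("eager message builds: " + messageBuilds);  // 1000
        messageBuilds = 0;
        for (int i = 0; i < 1000; i++) lazyCheck(' ');
        System.out.println("lazy message builds:  " + messageBuilds);  // 0
    }
}
```

For a passing check over a long generated string, the lazy form never formats a single message, which is exactly the cost Uwe measured.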

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de

 -Original Message-
 From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
 Sent: Thursday, August 27, 2015 12:15 AM
 To: u...@thetaphi.de; Lucene Dev
  Subject: Re: svn commit: r1697131 - in /lucene/dev/trunk/lucene: JRE_VERSION_MIGRATION.txt test-framework/src/java/org/apache/lucene/util/TestUtil.java
 
 
  Uwe: I'm still concerned about this change and the way it might result in
  confusing failure messages in the future (if the whitespace definition of
  other characters changes) ... can you please explain your choice of
  Assert.assertTrue -> assert?
 
 
 On Mon, 24 Aug 2015, Chris Hostetter wrote:
 
 : Uwe: why did you change this from Assert.assertTrue to assert ?
 :
  : In the old code the test would fail every time with a clear explanation of
  : the problem -- in your new code, if assertions are randomly disabled by
 : the test framework, then the sanity check won't run and instead we'll get
 : a strange failure from whatever test called this method.
 :
 :
 :
 : :
 ==
 
  : : --- lucene/dev/trunk/lucene/test-framework/src/java/org/apache/lucene/util/TestUtil.java (original)
  : : +++ lucene/dev/trunk/lucene/test-framework/src/java/org/apache/lucene/util/TestUtil.java Sat Aug 22 21:33:47 2015
 : : @@ -35,6 +35,7 @@ import java.util.Collections;
 : :  import java.util.HashMap;
 : :  import java.util.Iterator;
 : :  import java.util.List;
 : : +import java.util.Locale;
 : :  import java.util.Map;
 : :  import java.util.NoSuchElementException;
 : :  import java.util.Random;
 : : @@ -1188,7 +1189,7 @@ public final class TestUtil {
 : :int offset = nextInt(r, 0, WHITESPACE_CHARACTERS.length-1);
 : :char c = WHITESPACE_CHARACTERS[offset];
 : :// sanity check
  : : -  Assert.assertTrue("Not really whitespace? (@" + offset + "): " + c, Character.isWhitespace(c));
  : : +  assert Character.isWhitespace(c) : String.format(Locale.ENGLISH, "Not really whitespace? WHITESPACE_CHARACTERS[%d] is '\\u%04X'", offset, (int) c);
 : :out.append(c);
 : :  }
 : :  return out.toString();
 : : @@ -1307,9 +1308,9 @@ public final class TestUtil {
 : :  '\u001E',
 : :  '\u001F',
 : :  '\u0020',
 : : -// '\u0085', faild sanity check?
 : : +// '\u0085', failed sanity check?
 : :  '\u1680',
 : : -'\u180E',
 : : +// '\u180E', no longer whitespace in Unicode 7.0 (Java 9)!
 : :  '\u2000',
 : :  '\u2001',
 : :  '\u2002',
 : :
 : :
 : :
 :
 : -Hoss
 : http://www.lucidworks.com/
 :
 
 -Hoss
 http://www.lucidworks.com/
 





[jira] [Commented] (LUCENE-6759) Integrate lat/long BKD and spatial 3d, part 2

2015-08-26 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715759#comment-14715759
 ] 

Karl Wright commented on LUCENE-6759:
-

I did a repeat run, being sure to "ant clean" first, and it still passed.

Hmm.


 Integrate lat/long BKD and spatial 3d, part 2
 -

 Key: LUCENE-6759
 URL: https://issues.apache.org/jira/browse/LUCENE-6759
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
 Attachments: LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch


 This is just a continuation of LUCENE-6699, which became too big.






[jira] [Commented] (SOLR-7950) Invalid auth scheme configuration of Http client when using Kerberos (SPNEGO)

2015-08-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715776#comment-14715776
 ] 

ASF subversion and git services commented on SOLR-7950:
---

Commit 1698037 from gcha...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1698037 ]

SOLR-7950: Invalid auth scheme configuration of Http client when using Kerberos 
(SPNEGO)

 Invalid auth scheme configuration of Http client when using Kerberos (SPNEGO)
 -

 Key: SOLR-7950
 URL: https://issues.apache.org/jira/browse/SOLR-7950
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.3, Trunk
Reporter: Hrishikesh Gadre
Assignee: Gregory Chanan
 Attachments: solr-7950-v2.patch, solr-7950.patch








RE: svn commit: r1697131 - in /lucene/dev/trunk/lucene: JRE_VERSION_MIGRATION.txt test-framework/src/java/org/apache/lucene/util/TestUtil.java

2015-08-26 Thread Chris Hostetter

: 1) The test framework never randomly disables asserts (it cannot do 
: this, because asserts are enabled before the VM starts - you cannot do 

Hmm, ok -- i thought this was happening when the JVM was forked.

In any case, my underlying concern still seems valid to me: confusing 
nonsensical test failures could occur if someone or some jenkins runs with 
-Dtests.asserts=false (which we should really do in at least one jenkins 
job to ensure we're not relying on side effects of methods called in an 
assert ... wasn't that the whole point of adding tests.asserts in 
LUCENE-6019?)

: (we also have a test that validates this). In addition, our tests should 
: not check the JVM for correctness, it is just there to make an 
: assumption about how WS should behave. So when this assert fails, the 
: test is still not wrong.

i think having sanity checks about our assumptions of the JVM are useful 
-- just like we (last time i checked) have sanity checks that filesystems 
behave as we expect.

It's one thing to say we don't need a check that 2 + 2 == 4 but in cases 
like this where assumptions (about the definition of whitespace) can 
evidently change between JVM versions, it's nice to have a sanity check 
that our tests are testing the right thing (ie: fail fast because the JVM 
isn't behaving the way we expect, don't fail slow with a weird 
obscure NPE or AIOBE error because our code expected 2+2==4 and the JVM 
gave us -42 instead)


: 2) Now the answer: I changed it because of performance. The 

Ah, ok -- totally fair point.  Good call.

: Based on (1) and (2) this is OK. If you still want to revert to 
: Assert.assertTrue like before, please use an if-condition instead:
:   if (not whitespace) Assert.fail(String.format(error message));

Actually, the more I think about it, the more I want to:

a) add an explicit test that loops over this array and fails with a clear 
"your JVM is weird and doesn't consider this char whitespace, that's messed 
up" message, and this test and possibly some other code needs updating if 
these are the new Java/unicode rules

b) leave the plain assert you have in place as a fallback to provide a 
clear assertion msg in case someone runs a test method/class that uses 
this utility method but doesn't run the sanity check test mentioned in (a) 
because of -Dtestcase or -Dtest.method.

(yes, I realize only one test method currently uses it and the utility has 
already been refactored up to live in that same test class, but this is 
all about future proofing in case it ever gets refactored again)
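The explicit test proposed in (a) could look roughly like the following. This is a hedged sketch: the class name is invented, and the table here is a tiny stand-in for TestUtil's private WHITESPACE_CHARACTERS array.

```java
import java.util.Locale;

// Sketch of (a): check the whole whitespace table up front and fail fast with
// a message naming the exact offending entry, instead of failing slow inside
// whichever test happens to draw that character.
public class WhitespaceTableSanity {
    // Stand-in for TestUtil.WHITESPACE_CHARACTERS (abbreviated for the sketch).
    static final char[] WHITESPACE_CHARACTERS = { '\t', '\n', '\u0020', '\u2000' };

    // Returns null if every entry is whitespace on this JVM, else a clear message.
    static String firstNonWhitespace() {
        for (int i = 0; i < WHITESPACE_CHARACTERS.length; i++) {
            char c = WHITESPACE_CHARACTERS[i];
            if (!Character.isWhitespace(c)) {
                return String.format(Locale.ENGLISH,
                    "Your JVM no longer considers WHITESPACE_CHARACTERS[%d] = '\\u%04X' "
                    + "whitespace; the table (and code relying on it) needs updating",
                    i, (int) c);
            }
        }
        return null;
    }

    public static void main(String[] args) {
        String problem = firstNonWhitespace();
        if (problem != null) {
            throw new AssertionError(problem);
        }
        System.out.println("whitespace table OK on this JVM");
    }
}
```

A check like this fails with a self-explanatory message the moment a JVM upgrade changes the whitespace rules, which is the fail-fast behavior argued for above.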



-Hoss
http://www.lucidworks.com/




[JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 320 - Failure

2015-08-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/320/

1 tests failed.
REGRESSION:  org.apache.solr.cloud.BasicDistributedZkTest.test

Error Message:
commitWithin did not work on node: http://127.0.0.1:40118/vx_nn/fj/collection1 
expected:<68> but was:<67>

Stack Trace:
java.lang.AssertionError: commitWithin did not work on node: 
http://127.0.0.1:40118/vx_nn/fj/collection1 expected:<68> but was:<67>
at 
__randomizedtesting.SeedInfo.seed([19C9AF29BF843D27:919D90F3117850DF]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.BasicDistributedZkTest.test(BasicDistributedZkTest.java:333)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 

[jira] [Updated] (SOLR-7795) Fold Interval Faceting into Range Faceting

2015-08-26 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-7795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-7795:

Attachment: SOLR-7795.patch

Added unit test

 Fold Interval Faceting into Range Faceting
 --

 Key: SOLR-7795
 URL: https://issues.apache.org/jira/browse/SOLR-7795
 Project: Solr
  Issue Type: Task
Reporter: Tomás Fernández Löbbe
 Fix For: Trunk, 5.4

 Attachments: SOLR-7795.patch, SOLR-7795.patch, SOLR-7795.patch, 
 SOLR-7795.patch


 Now that range faceting supports a filter and a dv method, and that 
 interval faceting is supported on fields with {{docValues=false}}, I think we 
 should make it so that interval faceting is just a different way of 
 specifying ranges in range faceting, allowing users to indicate specific 
 ranges.
 I propose we use the same syntax for intervals, but under the range 
 parameter family:
 {noformat}
 facet.range=price
 f.price.facet.range.set=[0,10]
 f.price.facet.range.set=(10,100]
 {noformat}
 The counts for those ranges would come in the response also inside of the 
 range_facets section. I'm not sure if it's better to include the ranges in 
 the counts section, or in a different section (intervals?sets?buckets?). 
 I'm open to suggestions. 
 {code}
 "facet_ranges": {
   "price": {
     "counts": [
       "[0,10]", 3,
       "(10,100]", 2
     ]
   }
 }
 {code}
 or…
 {code}
 facet_ranges:{
   price:{
 intervals:[
   [0,10],3,
   (10,100],2]
}
 }
 {code}
 We should support people specifying both things on the same field.
 Once this is done, interval faceting could be deprecated, as all its 
 functionality is now possible through range queries. 






[jira] [Updated] (SOLR-7789) Introduce a ConfigSet management API

2015-08-26 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated SOLR-7789:
-
Attachment: SOLR-7789.patch

Here's a new version of the patch.  Changes:

- rebased to latest trunk
- Fixed typo pointed out by Mark
- Added comment to OverseerCollectionMessageHandler
- Renamed OverseerProcessor -> OverseerTaskProcessor to go along with the 
OverseerTaskQueue name.

I plan on committing this soon if I don't hear any objections and will file 
follow on jiras for the suggestions above.


 Introduce a ConfigSet management API
 

 Key: SOLR-7789
 URL: https://issues.apache.org/jira/browse/SOLR-7789
 Project: Solr
  Issue Type: New Feature
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Attachments: SOLR-7789.patch, SOLR-7789.patch, SOLR-7789.patch, 
 SOLR-7789.patch, SOLR-7789.patch


 SOLR-5955 describes a feature to automatically create a ConfigSet, based on 
 another one, from a collection API call (i.e. one step collection creation).  
 Discussion there yielded SOLR-7742, Immutable ConfigSet support.  To close 
 the loop, we need support for a ConfigSet management API.
 The simplest ConfigSet API could have one operation:
 create a new config set, based on an existing one, possibly modifying the 
 ConfigSet properties.  Note you need to be able to modify the ConfigSet 
 properties at creation time because otherwise Immutable could not be changed.
 Another logical operation to support is ConfigSet deletion; that may be more 
 complicated to implement than creation because you need to handle the case 
 where a collection is already using the configuration.






[jira] [Commented] (SOLR-7789) Introduce a ConfigSet management API

2015-08-26 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715775#comment-14715775
 ] 

Gregory Chanan commented on SOLR-7789:
--

Oh, and forgot to mention: I was able to reproduce the failures with the 
beasting script, and just turning down the number of iterations allowed it to 
pass with 100 iterations, 8 concurrent.

 Introduce a ConfigSet management API
 

 Key: SOLR-7789
 URL: https://issues.apache.org/jira/browse/SOLR-7789
 Project: Solr
  Issue Type: New Feature
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Attachments: SOLR-7789.patch, SOLR-7789.patch, SOLR-7789.patch, 
 SOLR-7789.patch, SOLR-7789.patch








[jira] [Commented] (SOLR-7950) Invalid auth scheme configuration of Http client when using Kerberos (SPNEGO)

2015-08-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715777#comment-14715777
 ] 

ASF subversion and git services commented on SOLR-7950:
---

Commit 1698039 from gcha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1698039 ]

SOLR-7950: Invalid auth scheme configuration of Http client when using Kerberos 
(SPNEGO)

 Invalid auth scheme configuration of Http client when using Kerberos (SPNEGO)
 -

 Key: SOLR-7950
 URL: https://issues.apache.org/jira/browse/SOLR-7950
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.3, Trunk
Reporter: Hrishikesh Gadre
Assignee: Gregory Chanan
 Fix For: Trunk, 5.4

 Attachments: solr-7950-v2.patch, solr-7950.patch








[jira] [Resolved] (SOLR-7950) Invalid auth scheme configuration of Http client when using Kerberos (SPNEGO)

2015-08-26 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan resolved SOLR-7950.
--
   Resolution: Fixed
Fix Version/s: 5.4
   Trunk

Thanks for the patch Hrishikesh, committed to Trunk and 5x.

 Invalid auth scheme configuration of Http client when using Kerberos (SPNEGO)
 -

 Key: SOLR-7950
 URL: https://issues.apache.org/jira/browse/SOLR-7950
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.3, Trunk
Reporter: Hrishikesh Gadre
Assignee: Gregory Chanan
 Fix For: Trunk, 5.4

 Attachments: solr-7950-v2.patch, solr-7950.patch








[jira] [Commented] (SOLR-7746) Ping requests stopped working with distrib=true in Solr 5.2.1

2015-08-26 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715804#comment-14715804
 ] 

Gregory Chanan commented on SOLR-7746:
--

1. You should not modify TestMiniSolrCloudCluster.  That's for testing whether 
the MiniSolrCloudCluster itself works.  Write a test that uses the 
MiniSolrCloudCluster; there should be a number of examples.  Or maybe you don't 
even need it -- can SolrPingTest satisfy what you need?

2. {code}
+  // Send distributed and non-distributed ping query
+  cloudSolrClient.setDefaultCollection(collectionName);
+  SolrPing req = new SolrPing();
+  req.setDistrib(true);
+  SolrPingResponse rsp = req.process(cloudSolrClient, collectionName);
+  assertEquals(0, rsp.getStatus()); 
+  
+  cloudSolrClient.setDefaultCollection(collectionName);
+  req = new SolrPing();
+  req.setDistrib(false);
+  rsp = req.process(cloudSolrClient, collectionName);
+  assertEquals(0, rsp.getStatus());   
{code}
Most of this code is unnecessary: you set the default collection multiple 
times, you pass the collectionName even though it's already set, and you create 
a new request when it would suffice to just set distrib.

3. {code}
+  public SolrPing setDistrib(boolean distrib) {   
+params.add("distrib", distrib ? "true" : "false");
+return this;
+  }
{code}

You shouldn't modify SolrPing just to test it.  Just extend SolrPing and 
override getParams for your distrib example.

4. Can you just have a single one of these instead of putting it in each clause?
{code}
  // Send an error or return
+  if( ex != null ) {
+throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, 
+"Ping query caused exception: " + ex.getMessage(), ex );
+  }
+} else {
{code}

 Ping requests stopped working with distrib=true in Solr 5.2.1
 -

 Key: SOLR-7746
 URL: https://issues.apache.org/jira/browse/SOLR-7746
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 5.2.1
Reporter: Alexey Serba
 Attachments: SOLR-7746.patch, SOLR-7746.patch, SOLR-7746.patch, 
 SOLR-7746.patch


 {noformat:title=steps to reproduce}
 # start 1 node SolrCloud cluster
 sh ./bin/solr -c -p 
 # create a test collection (we won’t use it, but I just want it to load 
 solr configs to Zk)
 ./bin/solr create_collection -c test -d sample_techproducts_configs -p 
 # create another test collection with 2 shards
 curl 
 'http://localhost:/solr/admin/collections?action=CREATE&name=test2&numShards=2&replicationFactor=1&maxShardsPerNode=2&collection.configName=test'
 # try distrib ping request
 curl 
 'http://localhost:/solr/test2/admin/ping?wt=json&distrib=true&indent=true'
 ...
   "error":{
 "msg":"Ping query caused exception: Error from server at 
 http://192.168.59.3:/solr/test2_shard2_replica1: Cannot execute the 
 PingRequestHandler recursively",
 ...
 {noformat}
 {noformat:title=Exception}
 2116962 [qtp599601600-13] ERROR org.apache.solr.core.SolrCore  [test2 shard2 
 core_node1 test2_shard2_replica1] – org.apache.solr.common.SolrException: 
 Cannot execute the PingRequestHandler recursively
   at 
 org.apache.solr.handler.PingRequestHandler.handlePing(PingRequestHandler.java:246)
   at 
 org.apache.solr.handler.PingRequestHandler.handleRequestBody(PingRequestHandler.java:211)
   at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
 {noformat}






[jira] [Commented] (SOLR-7746) Ping requests stopped working with distrib=true in Solr 5.2.1

2015-08-26 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14715882#comment-14715882
 ] 

Gregory Chanan commented on SOLR-7746:
--

I compared what happens in 5.1 vs your version of trunk for the commands listed 
in the description:
5.1:
{noformat}
curl 
'http://localhost:8889/solr/test2/admin/ping?wt=json&distrib=true&indent=true'
{
  "responseHeader":{
    "status":0,
    "QTime":26,
    "params":{
      "df":"text",
      "echoParams":"all",
      "indent":"true",
      "q":"solrpingquery",
      "distrib":"true",
      "wt":"json",
      "preferLocalShards":"false",
      "rows":"10"}},
  "status":"OK"}
{noformat}

Trunk:
{noformat}
curl 
'http://localhost:8885/solr/test2/admin/ping?wt=json&distrib=true&indent=true'
{
  "responseHeader":{
    "status":0,
    "QTime":28,
    "params":{
      "q":"{!lucene}*:*",
      "distrib":"true",
      "df":"text",
      "preferLocalShards":"false",
      "indent":"true",
      "echoParams":"all",
      "rows":"10",
      "wt":"json"}},
  "status":"OK"}
{noformat}

Looks good; the difference in the q parameter appears to be because the 
solrconfig.xml being used is different.







[jira] [Updated] (SOLR-7795) Fold Interval Faceting into Range Faceting

2015-08-26 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-7795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-7795:

Attachment: SOLR-7795.patch

Fix pivots use case. Added some tests with pivots + interval facets. I'll add 
some more.

 Fold Interval Faceting into Range Faceting
 --

 Key: SOLR-7795
 URL: https://issues.apache.org/jira/browse/SOLR-7795
 Project: Solr
  Issue Type: Task
Reporter: Tomás Fernández Löbbe
 Fix For: Trunk, 5.4

 Attachments: SOLR-7795.patch, SOLR-7795.patch, SOLR-7795.patch


 Now that range faceting supports a "filter" and a "dv" method, and that 
 interval faceting is supported on fields with {{docValues=false}}, I think we 
 should make interval faceting just a different way of specifying ranges in 
 range faceting, allowing users to indicate specific ranges.
 I propose we use the same syntax for intervals, but under the range 
 parameter family:
 {noformat}
 facet.range=price
 f.price.facet.range.set=[0,10]
 f.price.facet.range.set=(10,100]
 {noformat}
 The counts for those ranges would also come in the response, inside the 
 facet_ranges section. I'm not sure if it's better to include the ranges in 
 the counts section, or in a different section (intervals? sets? buckets?). 
 I'm open to suggestions. 
 {code}
 "facet_ranges":{
   "price":{
     "counts":[
       "[0,10]",3,
       "(10,100]",2]
   }
 }
 or…
 {code}
 "facet_ranges":{
   "price":{
     "intervals":[
       "[0,10]",3,
       "(10,100]",2]
   }
 }
 We should support people specifying both things on the same field.
 Once this is done, interval faceting could be deprecated, as all its 
 functionality is now possible through range faceting. 
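
 To make the bracket semantics above concrete, here is a small self-contained
 sketch of how {{[0,10]}} and {{(10,100]}} differ at the shared boundary
 (RangeSet and its methods are hypothetical names for illustration, not Solr
 code):

```java
// Illustrative only: mapping the proposed bracket notation to
// inclusive/exclusive bounds.
public class RangeSet {
    final double lo, hi;
    final boolean loInclusive, hiInclusive;

    RangeSet(double lo, boolean loInclusive, double hi, boolean hiInclusive) {
        this.lo = lo;
        this.loInclusive = loInclusive;
        this.hi = hi;
        this.hiInclusive = hiInclusive;
    }

    /** Parses "[0,10]" (both ends inclusive) or "(10,100]" (exclusive start). */
    static RangeSet parse(String spec) {
        boolean loInc = spec.charAt(0) == '[';               // '(' means exclusive
        boolean hiInc = spec.charAt(spec.length() - 1) == ']'; // ')' means exclusive
        String[] bounds = spec.substring(1, spec.length() - 1).split(",");
        return new RangeSet(Double.parseDouble(bounds[0]), loInc,
                            Double.parseDouble(bounds[1]), hiInc);
    }

    boolean contains(double v) {
        boolean aboveLo = loInclusive ? v >= lo : v > lo;
        boolean belowHi = hiInclusive ? v <= hi : v < hi;
        return aboveLo && belowHi;
    }

    public static void main(String[] args) {
        RangeSet a = RangeSet.parse("[0,10]");
        RangeSet b = RangeSet.parse("(10,100]");
        // 10 falls in [0,10] but not in (10,100], so the two sets do not overlap.
        System.out.println(a.contains(10));  // true
        System.out.println(b.contains(10));  // false
    }
}
```

 With this reading, a document with price=10 is counted once, by the first set
 only, which is presumably why the proposal pairs {{[0,10]}} with {{(10,100]}}.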






[jira] [Commented] (SOLR-7789) Introduce a ConfigSet management API

2015-08-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14716099#comment-14716099
 ] 

ASF subversion and git services commented on SOLR-7789:
---

Commit 1698079 from gcha...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1698079 ]

SOLR-7789: Introduce a ConfigSet management API

 Introduce a ConfigSet management API
 

 Key: SOLR-7789
 URL: https://issues.apache.org/jira/browse/SOLR-7789
 Project: Solr
  Issue Type: New Feature
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Attachments: SOLR-7789.patch, SOLR-7789.patch, SOLR-7789.patch, 
 SOLR-7789.patch, SOLR-7789.patch


 SOLR-5955 describes a feature to automatically create a ConfigSet, based on 
 another one, from a collection API call (i.e. one step collection creation).  
 Discussion there yielded SOLR-7742, Immutable ConfigSet support.  To close 
 the loop, we need support for a ConfigSet management API.
 The simplest ConfigSet API could have one operation:
 create a new config set, based on an existing one, possibly modifying the 
 ConfigSet properties.  Note you need to be able to modify the ConfigSet 
 properties at creation time because otherwise Immutable could not be changed.
 Another logical operation to support is ConfigSet deletion; that may be more 
 complicated to implement than creation because you need to handle the case 
 where a collection is already using the configuration.
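
 The create-with-overrides operation described above amounts to a property
 merge; a minimal sketch under that reading (class and method names here are
 hypothetical, not the actual API):

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

// Toy model of ConfigSet creation: start from the base ConfigSet's
// properties and apply creation-time overrides on top, which is why an
// "immutable" flag can be changed at creation even though it cannot be
// changed afterwards.
public class ConfigSetMerge {
    static Map<String, Object> create(Map<String, Object> baseProps,
                                      Map<String, Object> overrides) {
        Map<String, Object> merged = new LinkedHashMap<>(baseProps);
        merged.putAll(overrides);  // creation-time overrides win
        return merged;
    }

    public static void main(String[] args) {
        Map<String, Object> base = new LinkedHashMap<>();
        base.put("immutable", true);
        Map<String, Object> created =
            create(base, Collections.singletonMap("immutable", (Object) false));
        System.out.println(created);  // {immutable=false}
    }
}
```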






[jira] [Commented] (LUCENE-6762) CheckIndex cannot fix indexes that have individual segments with missing or corrupt .si files because sanity checks will fail trying to read the index initially.

2015-08-26 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14715983#comment-14715983
 ] 

Mike Drob commented on LUCENE-6762:
---

I've almost got a patch ready for this, probably will finish it up tomorrow.

 CheckIndex cannot fix indexes that have individual segments with missing or 
 corrupt .si files because sanity checks will fail trying to read the index 
 initially.
 ---

 Key: LUCENE-6762
 URL: https://issues.apache.org/jira/browse/LUCENE-6762
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Mark Miller
Priority: Minor

 Seems like we should still be able to partially recover by dropping these 
 segments with CheckIndex.






[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.8.0_60) - Build # 13735 - Failure!

2015-08-26 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/13735/
Java: 32bit/jdk1.8.0_60 -client -XX:+UseParallelGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.HttpPartitionTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [TransactionLog]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [TransactionLog]
at __randomizedtesting.SeedInfo.seed([451FCED23C6C9A08]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:236)
at sun.reflect.GeneratedMethodAccessor35.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:799)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 10427 lines...]
   [junit4] Suite: org.apache.solr.cloud.HttpPartitionTest
   [junit4]   2 Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.HttpPartitionTest_451FCED23C6C9A08-001/init-core-data-001
   [junit4]   2 717648 INFO  
(SUITE-HttpPartitionTest-seed#[451FCED23C6C9A08]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: 
/_lek/qm
   [junit4]   2 717650 INFO  
(TEST-HttpPartitionTest.test-seed#[451FCED23C6C9A08]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2 717650 INFO  (Thread-2181) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2 717651 INFO  (Thread-2181) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2 717751 INFO  
(TEST-HttpPartitionTest.test-seed#[451FCED23C6C9A08]) [] 
o.a.s.c.ZkTestServer start zk server on port:34917
   [junit4]   2 717751 INFO  
(TEST-HttpPartitionTest.test-seed#[451FCED23C6C9A08]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2 717751 INFO  
(TEST-HttpPartitionTest.test-seed#[451FCED23C6C9A08]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2 717753 INFO  (zkCallback-1484-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@2deaf2 name:ZooKeeperConnection 
Watcher:127.0.0.1:34917 got event WatchedEvent state:SyncConnected type:None 
path:null path:null type:None
   [junit4]   2 717753 INFO  
(TEST-HttpPartitionTest.test-seed#[451FCED23C6C9A08]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2 717754 INFO  
(TEST-HttpPartitionTest.test-seed#[451FCED23C6C9A08]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2 717754 INFO  
(TEST-HttpPartitionTest.test-seed#[451FCED23C6C9A08]) [] 
o.a.s.c.c.SolrZkClient makePath: /solr
   [junit4]   2 717756 INFO  

[jira] [Commented] (SOLR-7789) Introduce a ConfigSet management API

2015-08-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14715938#comment-14715938
 ] 

ASF subversion and git services commented on SOLR-7789:
---

Commit 1698043 from gcha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1698043 ]

SOLR-7789: Introduce a ConfigSet management API







[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 939 - Still Failing

2015-08-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/939/

1 tests failed.
REGRESSION:  
org.apache.lucene.codecs.lucene46.TestLucene46SegmentInfoFormat.testRandomExceptions

Error Message:
hit unexpected FileNotFoundException: file=_3g.si

Stack Trace:
java.lang.AssertionError: hit unexpected FileNotFoundException: file=_3g.si
at 
__randomizedtesting.SeedInfo.seed([1A10EDD0B2491A33:723F83142C474393]:0)
at 
org.apache.lucene.index.IndexFileDeleter.deleteFile(IndexFileDeleter.java:753)
at 
org.apache.lucene.index.IndexFileDeleter.deletePendingFiles(IndexFileDeleter.java:530)
at 
org.apache.lucene.index.IndexFileDeleter.deleteNewFiles(IndexFileDeleter.java:733)
at 
org.apache.lucene.index.IndexWriter.deleteNewFiles(IndexWriter.java:4700)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4218)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3664)
at 
org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:1929)
at 
org.apache.lucene.index.IndexWriter.doAfterSegmentFlushed(IndexWriter.java:4731)
at 
org.apache.lucene.index.DocumentsWriter$MergePendingEvent.process(DocumentsWriter.java:695)
at 
org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4757)
at 
org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4748)
at 
org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1476)
at 
org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1254)
at 
org.apache.lucene.index.BaseIndexFileFormatTestCase.testRandomExceptions(BaseIndexFileFormatTestCase.java:429)
at 
org.apache.lucene.index.BaseSegmentInfoFormatTestCase.testRandomExceptions(BaseSegmentInfoFormatTestCase.java:48)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-7789) Introduce a ConfigSet management API

2015-08-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14716088#comment-14716088
 ] 

ASF subversion and git services commented on SOLR-7789:
---

Commit 1698072 from gcha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1698072 ]

SOLR-7789: fix jira number in CHANGES.txt



