[jira] [Updated] (SOLR-7484) Refactor SolrDispatchFilter.doFilter(...) method

2015-04-30 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-7484:
---
Attachment: SOLR-7484.patch

Some more refactoring; it's still WIP, but I'll continue building on this 
tomorrow morning. I still need to move stuff into smaller methods and add documentation.

This moves things into a 3-stage process for HttpSolrCall (renamed SolrCall):
* Construct - initialize variables.
* Set context - sets the path, handler, etc. Still working on populating it 
with processed information, e.g. collection name.
* {{call()}} - this also calls {{setContext}} and then either processes the 
request or returns a RETRY/FORWARD/etc. action to the filter.
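A rough sketch of that three-stage lifecycle in plain Java; the class, enum, and handler names here are placeholders for illustration, not the contents of the attached patch:

```java
// Hypothetical sketch of the 3-stage SolrCall lifecycle described above.
// All names (SolrCall, Action, the handler strings) are illustrative only.
public class SolrCall {
    public enum Action { PROCESS, RETRY, FORWARD }

    private final String path;
    private String handler;

    // Stage 1: construct - only initialize variables, do no work yet.
    public SolrCall(String path) {
        this.path = path;
    }

    // Stage 2: set context - resolve the path to a handler (and, eventually,
    // processed information such as the collection name).
    void setContext() {
        if (path.startsWith("/admin")) {
            handler = "adminHandler";
        } else if (path.startsWith("/select")) {
            handler = "selectHandler";
        } // unknown paths leave handler null
    }

    // Stage 3: call() - sets the context, then either processes the request
    // or returns an action (RETRY/FORWARD/...) for the dispatch filter.
    public Action call() {
        setContext();
        return handler != null ? Action.PROCESS : Action.FORWARD;
    }
}
```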

 Refactor SolrDispatchFilter.doFilter(...) method
 

 Key: SOLR-7484
 URL: https://issues.apache.org/jira/browse/SOLR-7484
 Project: Solr
  Issue Type: Improvement
Reporter: Anshum Gupta
Assignee: Anshum Gupta
 Attachments: SOLR-7484.patch, SOLR-7484.patch, SOLR-7484.patch, 
 SOLR-7484.patch, SOLR-7484.patch, SOLR-7484.patch


 Currently almost everything that's done in SDF.doFilter() is sequential. We 
 should refactor it to clean up the code and make things easier to manage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 2197 - Failure!

2015-04-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2197/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.testRateLimitedReplication

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([AFE2376513F7757E:297642901C45F493]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.handler.TestReplicationHandler.testRateLimitedReplication(TestReplicationHandler.java:1320)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 10032 lines...]
   [junit4] Suite: org.apache.solr.handler.TestReplicationHandler
   [junit4]   2 Creating dataDir: 
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler
 

[jira] [Updated] (SOLR-4685) JSON response write modification to support RAW JSON

2015-04-30 Thread Bill Bell (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bell updated SOLR-4685:

Attachment: SOLR-4685.5.1.patch

Patch for 5.1

 JSON response write modification to support RAW JSON
 

 Key: SOLR-4685
 URL: https://issues.apache.org/jira/browse/SOLR-4685
 Project: Solr
  Issue Type: Improvement
Reporter: Bill Bell
Priority: Minor
 Attachments: SOLR-4685.1.patch, SOLR-4685.5.1.patch, 
 SOLR-4685.SOLR_4_5.patch


 If the field ends with _json allow the field to return raw JSON.
 For example the field,
 office_json -- string
 I already put raw JSON, already escaped, into the field. I want it to come back 
 with no double quotes and not escaped.
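The requested behavior could look roughly like the following sketch; the `_json` suffix check comes from the issue, but the method name and escaping logic are assumptions, not the actual patch:

```java
// Sketch of the requested writer behavior: a field whose name ends in
// "_json" is emitted verbatim (its value is assumed to already be valid
// JSON), while any other string field is quoted and escaped as usual.
// writeField is a hypothetical name, not Solr's actual API.
public class RawJsonSketch {
    static String writeField(String name, String value) {
        if (name.endsWith("_json")) {
            return value; // raw JSON: no surrounding quotes, no escaping
        }
        // normal path: escape backslashes and quotes, then wrap in quotes
        String escaped = value.replace("\\", "\\\\").replace("\"", "\\\"");
        return "\"" + escaped + "\"";
    }
}
```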






[jira] [Updated] (LUCENE-6045) Refator classifier APIs to work better with multi threading

2015-04-30 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili updated LUCENE-6045:

Description: 
In 
https://issues.apache.org/jira/browse/LUCENE-4345?focusedCommentId=13454729&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13454729
 [~simonw] pointed out that the current Classifier API doesn't work well in 
multi threading environments: 

bq. The interface you defined has some problems with respect to Multi-Threading 
IMO. The interface itself suggests that this class is stateful and you have to 
call methods in a certain order and at the same you need to make sure that it 
is not published for read access before training is done. I think it would be 
wise to pass in all needed objects as constructor arguments and make the 
references final so it can be shared across threads and add an interface that 
represents the trained model computed offline? In this case it doesn't really 
matter but in the future it might make sense. We can also skip the model 
interface entirely and remove the training method until we have some impls that 
really need to be trained.

I missed that at that point but I think for 6.0 (?) it would be wise to 
rearrange the API to address that properly.

  was:
In 
https://issues.apache.org/jira/browse/LUCENE-4345?focusedCommentId=13454729&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13454729
 [~simonw] pointed out that the current Classifier API doesn't work well in 
multi threading environments: 

bq. The interface you defined has some problems with respect to Multi-Threading 
IMO. The interface itself suggests that this class is stateful and you have to 
call methods in a certain order and at the same you need to make sure that it 
is not published for read access before training is done. I think it would be 
wise to pass in all needed objects as constructor arguments and make the 
references final so it can be shared across threads and add an interface that 
represents the trained model computed offline? In this case it doesn't really 
matter but in the future it might make sense. We can also skip the model 
interface entirely and remove the training method until we have some impls that 
really need to be trained.

I missed that at that point but I think for 5.0 it would be wise to rearrange 
the API to address that properly.


 Refator classifier APIs to work better with multi threading
 ---

 Key: LUCENE-6045
 URL: https://issues.apache.org/jira/browse/LUCENE-6045
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/classification
Reporter: Tommaso Teofili
Assignee: Tommaso Teofili
 Fix For: Trunk


 In 
 https://issues.apache.org/jira/browse/LUCENE-4345?focusedCommentId=13454729&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13454729
  [~simonw] pointed out that the current Classifier API doesn't work well in 
 multi threading environments: 
 bq. The interface you defined has some problems with respect to 
 Multi-Threading IMO. The interface itself suggests that this class is 
 stateful and you have to call methods in a certain order and at the same you 
 need to make sure that it is not published for read access before training is 
 done. I think it would be wise to pass in all needed objects as constructor 
 arguments and make the references final so it can be shared across threads 
 and add an interface that represents the trained model computed offline? In 
 this case it doesn't really matter but in the future it might make sense. We 
 can also skip the model interface entirely and remove the training method 
 until we have some impls that really need to be trained.
 I missed that at that point but I think for 6.0 (?) it would be wise to 
 rearrange the API to address that properly.






[jira] [Updated] (LUCENE-6045) Refator classifier APIs to work better with multi threading

2015-04-30 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili updated LUCENE-6045:

Fix Version/s: Trunk

 Refator classifier APIs to work better with multi threading
 ---

 Key: LUCENE-6045
 URL: https://issues.apache.org/jira/browse/LUCENE-6045
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/classification
Reporter: Tommaso Teofili
Assignee: Tommaso Teofili
 Fix For: Trunk


 In 
 https://issues.apache.org/jira/browse/LUCENE-4345?focusedCommentId=13454729&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13454729
  [~simonw] pointed out that the current Classifier API doesn't work well in 
 multi threading environments: 
 bq. The interface you defined has some problems with respect to 
 Multi-Threading IMO. The interface itself suggests that this class is 
 stateful and you have to call methods in a certain order and at the same you 
 need to make sure that it is not published for read access before training is 
 done. I think it would be wise to pass in all needed objects as constructor 
 arguments and make the references final so it can be shared across threads 
 and add an interface that represents the trained model computed offline? In 
 this case it doesn't really matter but in the future it might make sense. We 
 can also skip the model interface entirely and remove the training method 
 until we have some impls that really need to be trained.
 I missed that at that point but I think for 5.0 it would be wise to rearrange 
 the API to address that properly.






[jira] [Created] (SOLR-7492) Missing partials files for Solr Admin UI

2015-04-30 Thread Upayavira (JIRA)
Upayavira created SOLR-7492:
---

 Summary: Missing partials files for Solr Admin UI
 Key: SOLR-7492
 URL: https://issues.apache.org/jira/browse/SOLR-7492
 Project: Solr
  Issue Type: Bug
  Components: web gui
Reporter: Upayavira
Priority: Minor
 Fix For: 5.2


SOLR-7382 reported a partials directory that was misplaced. The files in this 
dir should actually be in solr/webapp/web/partials. I'll attach a patch.






[jira] [Resolved] (SOLR-7477) multi-select support for facet module

2015-04-30 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-7477.

Resolution: Fixed

 multi-select support for facet module
 -

 Key: SOLR-7477
 URL: https://issues.apache.org/jira/browse/SOLR-7477
 Project: Solr
  Issue Type: New Feature
  Components: Facet Module
Reporter: Yonik Seeley
 Fix For: 5.2

 Attachments: SOLR-7477.patch


 Multi-select support essentially means (at a minimum) support for excluding 
 tagged filters.






AngularJS Admin UI first pass complete

2015-04-30 Thread Upayavira
As tracked on SOLR-5507 and linked tickets, we should now have working
code for every tab in the admin UI. There are some bugs I'm aware of,
and likely some that I'm not.

My plan now is to make some kind of tarball/zip available, and ask
people on the Solr user list to try it out on their particular
installations, and to try their own particular favourite features, and
see if they find anything wrong/broken.

Once that's done, switching should just be a question of changing the
welcome-file in web.xml.

That's my plan, but I'm open to suggestions from others as to how we can
get this solidified and tested.

*Then* we can get on with things like a collections API page, an
explains viewer, etc, etc, etc.

Any suggestions/proposals as to what approach to take now?

Thx,

Upayavira




[jira] [Updated] (SOLR-7492) Missing partials files for Solr Admin UI

2015-04-30 Thread Upayavira (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Upayavira updated SOLR-7492:

Attachment: SOLR-7492.patch

Partials (templates) required for documents, files and plugins tabs on 
AngularJS admin UI.

 Missing partials files for Solr Admin UI
 

 Key: SOLR-7492
 URL: https://issues.apache.org/jira/browse/SOLR-7492
 Project: Solr
  Issue Type: Bug
  Components: web gui
Reporter: Upayavira
Priority: Minor
 Fix For: 5.2

 Attachments: SOLR-7492.patch


 SOLR-7382 reported a partials directory that was misplaced. The files in this 
 dir should actually be in solr/webapp/web/partials. I'll attach a patch.






[jira] [Commented] (SOLR-7477) multi-select support for facet module

2015-04-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14521247#comment-14521247
 ] 

ASF subversion and git services commented on SOLR-7477:
---

Commit 1676945 from [~yo...@apache.org] in branch 'dev/trunk'
[ https://svn.apache.org/r1676945 ]

SOLR-7477: more tests for excludeTags

 multi-select support for facet module
 -

 Key: SOLR-7477
 URL: https://issues.apache.org/jira/browse/SOLR-7477
 Project: Solr
  Issue Type: New Feature
  Components: Facet Module
Reporter: Yonik Seeley
 Fix For: 5.2

 Attachments: SOLR-7477.patch


 Multi-select support essentially means (at a minimum) support for excluding 
 tagged filters.






[jira] [Updated] (SOLR-7491) Add segments tab support to AngularJS Admin UI

2015-04-30 Thread Upayavira (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Upayavira updated SOLR-7491:

Attachment: SOLR-7491.patch

Patch to add segments support.

I added an autorefresh option, so that if you have regular indexing going on, 
you can see segment merging/etc as it happens.

 Add segments tab support to AngularJS Admin UI
 

 Key: SOLR-7491
 URL: https://issues.apache.org/jira/browse/SOLR-7491
 Project: Solr
  Issue Type: Improvement
  Components: web gui
Reporter: Upayavira
Priority: Minor
 Fix For: 5.2

 Attachments: SOLR-7491.patch









[jira] [Created] (SOLR-7491) Add segments tab support to AngularJS Admin UI

2015-04-30 Thread Upayavira (JIRA)
Upayavira created SOLR-7491:
---

 Summary: Add segments tab support to AngularJS Admin UI
 Key: SOLR-7491
 URL: https://issues.apache.org/jira/browse/SOLR-7491
 Project: Solr
  Issue Type: Improvement
  Components: web gui
Reporter: Upayavira
Priority: Minor
 Fix For: 5.2









[jira] [Updated] (SOLR-7361) Main Jetty thread blocked by core loading delays HTTP listener from binding if core loading is slow

2015-04-30 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-7361:
--
Attachment: SOLR-7361.patch

Here is my current progress.

By default CoreContainer still waits on load, but has a new async load option.

JettySolrServer still waits on load, but has a new async load option.

SolrDispatchFilter turns on the async load option and also returns a 503 on 
request while a core is loading (though it seems perhaps you can get a 510 
instead depending on timing due to the stateformat=2 stuff).

I think that is back compat and gives us the new behavior we want.
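The 503-while-loading idea can be sketched as a tiny gate object; the names and structure here are hypothetical, not taken from the patch:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch of the behavior described: while cores are still
// loading asynchronously, the dispatch filter answers requests with
// 503 Service Unavailable instead of making startup block.
public class LoadingGate {
    private final AtomicBoolean coresLoaded = new AtomicBoolean(false);

    // Called by the async core loader once all cores are up.
    public void markLoaded() {
        coresLoaded.set(true);
    }

    // HTTP status the filter should respond with: 503 while loading,
    // 200 (i.e. handle the request normally) afterwards.
    public int statusFor(String path) {
        return coresLoaded.get() ? 200 : 503;
    }
}
```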

 Main Jetty thread blocked by core loading delays HTTP listener from binding 
 if core loading is slow
 ---

 Key: SOLR-7361
 URL: https://issues.apache.org/jira/browse/SOLR-7361
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Timothy Potter
Assignee: Mark Miller
 Fix For: Trunk, 5.2

 Attachments: SOLR-7361.patch, SOLR-7361.patch, SOLR-7361.patch, 
 SOLR-7361.patch, SOLR-7361.patch, SOLR-7361.patch


 During server startup, the CoreContainer uses an ExecutorService to load 
 cores in multiple back-ground threads but then blocks until cores are loaded, 
 see: CoreContainer#load around line 290 on trunk (invokeAll). From the 
 JavaDoc on that method, we have:
 {quote}
 Executes the given tasks, returning a list of Futures holding their status 
 and results when all complete. Future.isDone() is true for each element of 
 the returned list.
 {quote}
 In other words, this is a blocking call.
 This delays the Jetty HTTP listener from binding and accepting requests until 
 all cores are loaded. Do we need to block the main thread?
 Also, prior to this happening, the node is registered as a live node in ZK, 
 which makes it a candidate for receiving requests from the Overseer, such as 
 to service a create collection request. The problem of course is that the 
 node listed in /live_nodes isn't accepting requests yet. So we either need to 
 unblock the main thread during server loading or maybe wait longer before we 
 register as a live node ... not sure which is the better way forward?
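The blocking behavior quoted from the invokeAll JavaDoc can be demonstrated in isolation (plain java.util.concurrent, no Solr code):

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Demonstrates why CoreContainer#load blocks: ExecutorService.invokeAll
// does not return until every submitted task has completed, and every
// returned Future reports isDone() == true.
public class InvokeAllDemo {
    public static boolean allDoneAfterInvokeAll() {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            List<Callable<Integer>> tasks = List.of(
                () -> { Thread.sleep(50); return 1; },
                () -> { Thread.sleep(100); return 2; });
            // Blocks here until both tasks finish.
            List<Future<Integer>> futures = pool.invokeAll(tasks);
            return futures.stream().allMatch(Future::isDone);
        } catch (InterruptedException e) {
            return false;
        } finally {
            pool.shutdown();
        }
    }
}
```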






[jira] [Comment Edited] (SOLR-4685) JSON response write modification to support RAW JSON

2015-04-30 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14521413#comment-14521413
 ] 

Noble Paul edited comment on SOLR-4685 at 4/30/15 12:21 PM:


I fail to see how you can claim that this is completely backward compatible. 
What if I already have a field with an _json suffix and I have coded my 
client app to read an escaped JSON string?

I totally understand the motivation and need for this. Probably we should 
add an extra request flag to turn this on, like {{json.key.suffix=_json}}, 
which means any field ending with {{_json}} will be unescaped.


was (Author: noble.paul):
I fail to see how you can claim that this is completely backward compatible. 
What if I already have a field with an _json suffix and I have coded my 
client app to read an escaped JSON string?

I totally understand the motivation and need for this. Probably we should 
add an extra request flag to turn this on, like {{raw.json.suffix=_json}}, 
which means any field ending with {{_json}} will be unescaped.

 JSON response write modification to support RAW JSON
 

 Key: SOLR-4685
 URL: https://issues.apache.org/jira/browse/SOLR-4685
 Project: Solr
  Issue Type: Improvement
Reporter: Bill Bell
Priority: Minor
 Attachments: SOLR-4685.1.patch, SOLR-4685.5.1.patch, 
 SOLR-4685.SOLR_4_5.patch


 If the field ends with _json allow the field to return raw JSON.
 For example the field,
 office_json -- string
 I already put into the field raw JSON already escaped. I want it to come with 
 no double quotes and not escaped.






[jira] [Assigned] (SOLR-4685) JSON response write modification to support RAW JSON

2015-04-30 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul reassigned SOLR-4685:


Assignee: Noble Paul

 JSON response write modification to support RAW JSON
 

 Key: SOLR-4685
 URL: https://issues.apache.org/jira/browse/SOLR-4685
 Project: Solr
  Issue Type: Improvement
Reporter: Bill Bell
Assignee: Noble Paul
Priority: Minor
 Attachments: SOLR-4685.1.patch, SOLR-4685.5.1.patch, 
 SOLR-4685.SOLR_4_5.patch


 If the field ends with _json allow the field to return raw JSON.
 For example the field,
 office_json -- string
 I already put into the field raw JSON already escaped. I want it to come with 
 no double quotes and not escaped.






[jira] [Commented] (SOLR-7477) multi-select support for facet module

2015-04-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14521483#comment-14521483
 ] 

ASF subversion and git services commented on SOLR-7477:
---

Commit 1676980 from [~yo...@apache.org] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1676980 ]

SOLR-7477: more tests for excludeTags

 multi-select support for facet module
 -

 Key: SOLR-7477
 URL: https://issues.apache.org/jira/browse/SOLR-7477
 Project: Solr
  Issue Type: New Feature
  Components: Facet Module
Reporter: Yonik Seeley
 Fix For: 5.2

 Attachments: SOLR-7477.patch


 Multi-select support essentially means (at a minimum) support for excluding 
 tagged filters.






Re: Where Search Meets Machine Learning

2015-04-30 Thread Doug Turnbull
Hi Joaquin

Very neat, thanks for sharing,

Viewing search relevance as something akin to a classification problem is
actually a driving narrative in Taming Search http://manning.com/turnbull.
We generalize the relevance problem as one of measuring the similarity
between features of content (locations of restaurants, price of a product,
the words in the body of articles, expanded synonyms in articles, etc) and
features of a query (the search terms, user usage history, any location,
etc). What makes search interesting is that unlike other classification
systems, search has built in similarity systems (largely TF*IDF).

So we actually cut the other direction from your talk. It appears that you
amend the search engine to change the underlying scoring to be based on
machine learning constructs. In our book, we work the opposite way. We
largely enable feature similarity classifications between document and
query by massaging features into terms and use the built in TF*IDF or other
relevant similarity approach.

We feel this plays to the advantages of a search engine. Search engines
already have some basic text analysis built in. They've also been heavily
optimized for most forms of text-based similarity. If you can massage text
such that your TF*IDF similarity reflects a rough proportion of text-based
features important to your users, this tends to reflect their intuitive
notions of relevance. A lot of this work involves feature selection, or what 
we term in the book feature modeling: what features should you introduce to 
your documents that can be used to generate good signals at ranking time?
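One concrete, invented example of "massaging features into terms" as described above: bucket a numeric feature into a synthetic token so that ordinary term matching, and therefore TF*IDF scoring, can express feature similarity. The bucket scheme and token format are made up for this sketch:

```java
// Toy illustration of encoding a non-text feature as an indexable term.
// A price of 17.50 becomes the token "price_10_20"; documents and queries
// that share the token then match via normal term scoring. The scheme is
// invented for illustration, not from the book or any search engine API.
public class FeatureAsTerm {
    static String priceToken(double price) {
        int lo = ((int) (price / 10)) * 10;   // floor to the 10-unit bucket
        return "price_" + lo + "_" + (lo + 10);
    }
}
```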

You can read more about our thoughts here
http://java.dzone.com/articles/solr-and-elasticsearch.

That all being said, what makes your stuff interesting is when you have
enough supervised training data over good-enough features. This can be hard
to do for a broad swath of middle-tier search applications, but
increasingly useful as scale goes up. I'd be interested to hear your
thoughts on this article
http://opensourceconnections.com/blog/2014/10/08/when-click-scoring-can-hurt-search-relevance-a-roadmap-to-better-signals-processing-in-search/
I wrote about collecting click tracking and other relevance feedback data:

Good stuff! Again, thanks for sharing,
-Doug



On Wed, Apr 29, 2015 at 6:58 PM, J. Delgado joaquin.delg...@gmail.com
wrote:

 Here is a presentation on the topic:

 http://www.slideshare.net/joaquindelgado1/where-search-meets-machine-learning04252015final

 Search can be viewed as a combination of a) A problem of constraint
 satisfaction, which is the process of finding a solution to a set of
 constraints (query) that impose conditions that the variables (fields) must
 satisfy with a resulting object (document) being a solution in the feasible
 region (result set), plus b) A scoring/ranking problem of assigning values
 to different alternatives, according to some convenient scale. This
 ultimately provides a mechanism to sort various alternatives in the result
 set in order of importance, value or preference. In particular scoring in
 search has evolved from being a document centric calculation (e.g. TF-IDF)
 proper from its information retrieval roots, to a function that is more
 context sensitive (e.g. include geo-distance ranking) or user centric (e.g.
 takes user parameters for personalization) as well as other factors that
 depend on the domain and task at hand. However, most systems that
 incorporate machine learning techniques to perform classification or
 generate scores for these specialized tasks do so as a post retrieval
 re-ranking function, outside of search! In this talk I show ways of
 incorporating advanced scoring functions, based on supervised learning and
 bid scaling models, into popular search engines such as Elastic Search and
 potentially SOLR. I'll provide practical examples of how to construct such
 ML Scoring plugins in search to generalize the application of a search
 engine as a model evaluator for supervised learning tasks. This will
 facilitate the building of systems that can do computational advertising,
 recommendations and specialized search systems, applicable to many domains.

 Code to support it (only elastic search for now):
 https://github.com/sdhu/elasticsearch-prediction

 -- J







-- 
*Doug Turnbull **| *Search Relevance Consultant | OpenSource Connections,
LLC | 240.476.9983 | http://www.opensourceconnections.com
Author: Taming Search http://manning.com/turnbull from Manning
Publications
This e-mail and all contents, including attachments, is considered to be
Company Confidential unless explicitly stated otherwise, regardless
of whether attachments are marked as such.


[jira] [Commented] (SOLR-7484) Refactor SolrDispatchFilter.doFilter(...) method

2015-04-30 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14521427#comment-14521427
 ] 

Noble Paul commented on SOLR-7484:
--

Please separate out the SOLR-7275 changes and let's stay true to the 
description of the ticket

 Refactor SolrDispatchFilter.doFilter(...) method
 

 Key: SOLR-7484
 URL: https://issues.apache.org/jira/browse/SOLR-7484
 Project: Solr
  Issue Type: Improvement
Reporter: Anshum Gupta
Assignee: Anshum Gupta
 Attachments: SOLR-7484.patch, SOLR-7484.patch, SOLR-7484.patch, 
 SOLR-7484.patch, SOLR-7484.patch, SOLR-7484.patch


 Currently almost everything that's done in SDF.doFilter() is sequential. We 
 should refactor it to clean up the code and make things easier to manage.






[jira] [Commented] (SOLR-7024) bin/solr: Improve java detection and error messages

2015-04-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14521432#comment-14521432
 ] 

David Smiley commented on SOLR-7024:


The bin/solr script on 5x will report an error to the user if Java isn't 
installed, but it will claim Java 8:
{code}
echo >&2 "A working Java 8 is required to run Solr!"
{code} 

 bin/solr: Improve java detection and error messages
 ---

 Key: SOLR-7024
 URL: https://issues.apache.org/jira/browse/SOLR-7024
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.0
 Environment: Linux bigindy5 3.10.0-123.9.2.el7.x86_64 #1 SMP Tue Oct 
 28 18:05:26 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
Reporter: Shawn Heisey
Assignee: Shawn Heisey
 Fix For: 5.0, Trunk

 Attachments: SOLR-7024.patch, SOLR-7024.patch, SOLR-7024.patch, 
 SOLR-7024.patch, SOLR-7024.patch, SOLR-7024.patch, SOLR-7024.patch, 
 SOLR-7024.patch, SOLR-7024.patch, SOLR-7024.patch


 Java detection needs a bit of an overhaul.  One example: When running the 
 shell script, if JAVA_HOME is set, but does not point to a valid java home, 
 Solr will not start, but the error message is unhelpful, especially to users 
 who actually DO have the right java version installed.






[jira] [Assigned] (SOLR-7231) Allow DIH to create single geo-field from lat/lon metadata extracted via Tika

2015-04-30 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul reassigned SOLR-7231:


Assignee: Noble Paul

 Allow DIH to create single geo-field from lat/lon metadata extracted via Tika
 -

 Key: SOLR-7231
 URL: https://issues.apache.org/jira/browse/SOLR-7231
 Project: Solr
  Issue Type: Improvement
Reporter: Tim Allison
Assignee: Noble Paul
Priority: Trivial
 Attachments: SOLR-7231.patch, test_jpeg.jpg


 Tika can extract latitude and longitude data from image (and other) files.  
 It would be handy to allow the user to choose to have DIH populate a single 
 geofield (LatLonType or RPT) from the two metadata values extracted by Tika.






[jira] [Commented] (LUCENE-6460) TermsQuery should rewrite to BooleanQuery if < 50 terms

2015-04-30 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14521456#comment-14521456
 ] 

Adrien Grand commented on LUCENE-6460:
--

+1

 TermsQuery should rewrite to BooleanQuery if < 50 terms
 ---

 Key: LUCENE-6460
 URL: https://issues.apache.org/jira/browse/LUCENE-6460
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Reporter: David Smiley
Priority: Minor

 If there aren't many terms in a TermsQuery (perhaps < 50), it should be faster 
 for TermsQuery to rewrite to a BooleanQuery so that there is 
 disjunction/skipping.  Above some number of terms, there is overhead in 
 BQ/DisjunctionScorer's PriorityQueue.






[jira] [Commented] (SOLR-4685) JSON response write modification to support RAW JSON

2015-04-30 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14521413#comment-14521413
 ] 

Noble Paul commented on SOLR-4685:
--

I fail to see how you can claim that this is completely backward compatible. 
What if I already have a field with an _json suffix and I have coded my 
client app to read the escaped JSON string?

I totally understand the motivation and need for this. Probably we should 
add an extra request flag to turn this on, like {{raw.json.suffix=_json}}, 
which means any field ending with {{_json}} will be unescaped.

 JSON response write modification to support RAW JSON
 

 Key: SOLR-4685
 URL: https://issues.apache.org/jira/browse/SOLR-4685
 Project: Solr
  Issue Type: Improvement
Reporter: Bill Bell
Priority: Minor
 Attachments: SOLR-4685.1.patch, SOLR-4685.5.1.patch, 
 SOLR-4685.SOLR_4_5.patch


 If the field ends with _json allow the field to return raw JSON.
 For example the field,
 office_json -- string
 I already put into the field raw JSON already escaped. I want it to come with 
 no double quotes and not escaped.






[jira] [Created] (LUCENE-6460) TermsQuery should rewrite to BooleanQuery if < 50 terms

2015-04-30 Thread David Smiley (JIRA)
David Smiley created LUCENE-6460:


 Summary: TermsQuery should rewrite to BooleanQuery if < 50 terms
 Key: LUCENE-6460
 URL: https://issues.apache.org/jira/browse/LUCENE-6460
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Reporter: David Smiley
Priority: Minor


If there aren't many terms in a TermsQuery (perhaps < 50), it should be faster 
for TermsQuery to rewrite to a BooleanQuery so that there is 
disjunction/skipping.  Above some number of terms, there is overhead in 
BQ/DisjunctionScorer's PriorityQueue.






[jira] [Commented] (SOLR-7377) SOLR Streaming Expressions

2015-04-30 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14521601#comment-14521601
 ] 

Noble Paul commented on SOLR-7377:
--

The patch is mostly moving files from one package to another. Is it possible to 
get a patch that just adds the new functionality and does the package change 
later?

 SOLR Streaming Expressions
 --

 Key: SOLR-7377
 URL: https://issues.apache.org/jira/browse/SOLR-7377
 Project: Solr
  Issue Type: Improvement
  Components: clients - java
Reporter: Dennis Gove
Priority: Minor
 Fix For: Trunk

 Attachments: SOLR-7377.patch, SOLR-7377.patch, SOLR-7377.patch, 
 SOLR-7377.patch


 It would be beneficial to add an expression-based interface to Streaming API 
 described in SOLR-7082. Right now that API requires streaming requests to 
 come in from clients as serialized bytecode of the streaming classes. The 
 suggestion here is to support string expressions which describe the streaming 
 operations the client wishes to perform. 
 {code:java}
 search(collection1, q="*:*", fl="id,fieldA,fieldB", sort="fieldA asc")
 {code}
 With this syntax in mind, one can now express arbitrarily complex stream 
 queries with a single string.
 {code:java}
 // merge two distinct searches together on common fields
 merge(
   search(collection1, q="id:(0 3 4)", fl="id,a_s,a_i,a_f", sort="a_f asc, a_s asc"),
   search(collection2, q="id:(1 2)", fl="id,a_s,a_i,a_f", sort="a_f asc, a_s asc"),
   on="a_f asc, a_s asc")
 // find top 20 unique records of a search
 top(
   n=20,
   unique(
     search(collection1, q="*:*", fl="id,a_s,a_i,a_f", sort="a_f desc"),
     over="a_f desc"),
   sort="a_f desc")
 {code}
 The syntax would support
 1. Configurable expression names (e.g., via solrconfig.xml one can map "unique" 
 to a class implementing a Unique stream class). This allows users to build 
 their own streams and use them as they wish.
 2. Named parameters (of both simple and expression types)
 3. Unnamed, type-matched parameters (to support requiring N streams as 
 arguments to another stream)
 4. Positional parameters
 The main goal here is to make streaming as accessible as possible and define 
 a syntax for running complex queries across large distributed systems.
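 For illustration, the nested call syntax above can be handled by a small
 recursive-descent parser. The sketch below is not the parser from the patch
 and ignores quoting and escaping entirely; it only shows how such an
 expression string maps to a tree of named operations (all class and method
 names here are invented):

```java
import java.util.ArrayList;
import java.util.List;

public class ExprSketch {
    // Minimal node: a function name plus raw or nested arguments.
    static class Node {
        final String name;
        final List<Object> args = new ArrayList<>(); // String or Node
        Node(String name) { this.name = name; }
        public String toString() { return name + args; }
    }

    // Parses name(arg1, arg2, ...) where an arg may itself be a call.
    // No quote/escape handling -- just enough to show the shape.
    static Object parse(String s) {
        s = s.trim();
        int open = s.indexOf('(');
        if (open < 0 || !s.endsWith(")")) return s;     // plain token/param
        Node node = new Node(s.substring(0, open).trim());
        String body = s.substring(open + 1, s.length() - 1);
        int depth = 0, last = 0;
        for (int i = 0; i < body.length(); i++) {
            char c = body.charAt(i);
            if (c == '(') depth++;
            else if (c == ')') depth--;
            else if (c == ',' && depth == 0) {          // top-level separator
                node.args.add(parse(body.substring(last, i)));
                last = i + 1;
            }
        }
        if (!body.trim().isEmpty()) node.args.add(parse(body.substring(last)));
        return node;
    }

    public static void main(String[] args) {
        Object tree = parse(
            "top(n=20, unique(search(collection1), over=a_f desc), sort=a_f desc)");
        // prints top[n=20, unique[search[collection1], over=a_f desc], sort=a_f desc]
        System.out.println(tree);
    }
}
```

 A real implementation would additionally resolve each name against the
 configured stream classes and type-check the named/positional parameters.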






[jira] [Commented] (SOLR-7462) ArrayIndexOutOfBoundsException in RecordingJSONParser.java

2015-04-30 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14521611#comment-14521611
 ] 

Erick Erickson commented on SOLR-7462:
--

I'm in the same boat as Shawn for the next 10 days or so, although there's a 
long, boring airplane trip looming in my future. I'll see what I can do if 
nobody gets to it first.

 ArrayIndexOutOfBoundsException in RecordingJSONParser.java
 --

 Key: SOLR-7462
 URL: https://issues.apache.org/jira/browse/SOLR-7462
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.1
Reporter: Scott Dawson
 Attachments: SOLR-7462.patch


 With Solr 5.1 I'm getting an occasional fatal exception during indexing. It's 
 an ArrayIndexOutOfBoundsException at line 61 of 
 org/apache/solr/util/RecordingJSONParser.java. Looking at the code (see 
 below), it seems obvious that the if-statement at line 60 should use a 
 greater-than sign instead of greater-than-or-equals.
   @Override
   public CharArr getStringChars() throws IOException {
     CharArr chars = super.getStringChars();
     recordStr(chars.toString());
     position = getPosition();
     // if reading a String, getStringChars does not return the closing
     // single quote or double quote, so try to capture that
     if (chars.getArray().length >= chars.getStart() + chars.size()) { // line 60
       char next = chars.getArray()[chars.getStart() + chars.size()]; // line 61
       if (next == '"' || next == '\'') {
         recordChar(next);
       }
     }
     return chars;
   }
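 To make the boundary case concrete, here is a minimal standalone sketch
 (plain Java, no Solr dependencies; the class name, helper name, and buffer
 layout are invented for illustration, not taken from RecordingJSONParser)
 showing why `>=` at line 60 reads one slot past the end when the buffer
 ends exactly at start + size:

```java
public class OffByOneDemo {
    // Mirrors the shape of the line-60 check: fetch the char that follows
    // the recorded string, i.e. array[start + size], if it exists at all.
    static char charAfter(char[] array, int start, int size) {
        // '>' is the safe comparison: array[start + size] exists only when
        // length > start + size. The reported '>=' also takes this branch
        // when length == start + size, and then indexes out of bounds.
        if (array.length > start + size) {
            return array[start + size];
        }
        return '\0'; // nothing follows the string
    }

    public static void main(String[] args) {
        char[] buf = {'"', 'a', 'b', '"'};
        // "ab" starts at 1 with size 2; buf[3] holds the closing quote.
        System.out.println(charAfter(buf, 1, 2));        // prints "
        // start=1, size=3 fills the buffer: length == start + size.
        // With '>=' this call would throw ArrayIndexOutOfBoundsException.
        System.out.println((int) charAfter(buf, 1, 3));  // prints 0
    }
}
```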







Re: JEP 248: Make G1 the Default Garbage Collector

2015-04-30 Thread Christian Thalinger

 On Apr 30, 2015, at 7:29 AM, Uwe Schindler uschind...@apache.org wrote:
 
 Hi Kirk, hi Mark,
 
  the Lucene/Solr/Elasticsearch people still recommend that their users not 
  use G1GC, although this type of application (full text search with the 
  requirement for very low response times and no pauses) is a good candidate 
  for G1GC. On the other hand, heap sizes for typical Lucene applications 
 should not be too high, because most of the processing is done on memory 
 mapped files off-heap. So heaps should be at most 1/4 of the physical RAM 
 available, because Lucene relies on the fact that the index files reside in 
 file system cache (too large heaps are contra-productive here).
 
 See also our recommendations for Apache Solr and Elasticsearch:
 http://wiki.apache.org/solr/ShawnHeisey#GC_Tuning_for_Solr
 http://www.elastic.co/guide/en/elasticsearch/guide/current/_don_8217_t_touch_these_settings.html
 
 Currently Lucene's indexing sometimes caused serious data corruption with 
 G1GC - leading to data loss, which was mainly caused by some bugs around G1GC 
 and its use of additional memory barriers and the very close interaction with 
  Hotspot, which seemed to break some optimizations. We had (only in combination 
 with G1GC during our test suites) simple assert statements *sometimes* 
 failing that should never fail unless there is a bug in the JVM.

In fact there was a bug with asserts triggering when they shouldn’t:

https://bugs.openjdk.java.net/browse/JDK-8006960 
https://bugs.openjdk.java.net/browse/JDK-8006960

 
 We are aware that Java 8u40 declared G1GC as production ready, so we are 
 still looking at failures in our extensive testing infrastructure. Indeed, I 
  have not seen any G1GC-related problems recently, but that is not necessarily 
 a sign for correctness.
 
 Uwe
 
 P.S.: It was nice to meet you last week on JAX!
 
 -
 Uwe Schindler
 uschind...@apache.org 
 ASF Member, Apache Lucene PMC / Committer
 Bremen, Germany
 http://lucene.apache.org/
 
 -Original Message-
 From: hotspot-dev [mailto:hotspot-dev-boun...@openjdk.java.net] On
 Behalf Of Kirk Pepperdine
 Sent: Wednesday, April 29, 2015 9:11 AM
 To: hotspot-...@openjdk.java.net Source Developers
 Subject: Re: JEP 248: Make G1 the Default Garbage Collector
 
 Hi all,
 
  Is the G1 ready for this? I see many people moving to G1, but I’m not sure
  that we’ve got the tunables correct. I’ve been sorting through a number of
  recent tuning engagements, and my conclusion is that I would like the
  collector to be aggressive about collecting tenured regions at the beginning
  of a JVM’s lifetime but then become less aggressive over time. The reason is
  the residual waste that I see left behind because certain regions never hit
  the threshold needed to be included in the CSET. But, on aggregate, the
  number of regions in this state does start to retain a significant amount of
  dead data. The only way to see the effects is to run regular Full GCs, which
  of course you don’t really want to do. However, the problem seems to settle
  down a wee bit over time, which is why I was thinking that being aggressive
  about what is collected in the early stages of a JVM’s life should lead to
  better packing and hence less waste.
  
  Note, I don’t really care about the memory waste, only its effect on cycle
  frequencies and pause times.
 
 Sorry but I don’t have anything formal about this as I (and I believe many
 others) are still sorting out what to make of the G1 in prod. Generally the
 overall results are good but sometimes it’s not that way up front and how to
 improve things is sometimes challenging.
 
 On a side note, the move to Tiered in 8 has also caused a bit of grief.
 Metaspace has caused a bit of grief and even parallelStream, which works,
  has come with some interesting side effects. Everyone has been so enamored
 with Lambdas (rightfully so) that the other stuff has been completely
 forgotten and some of it has surprised people. I guess I’ll be submitting a 
 talk
 for J1 on some of the field experience I’ve had with the other stuff.
 
 Regards,
 Kirk
 
 
 On Apr 28, 2015, at 11:02 PM, mark.reinh...@oracle.com wrote:
 
 New JEP Candidate: http://openjdk.java.net/jeps/248
 
 - Mark
 



[jira] [Commented] (SOLR-7462) ArrayIndexOutOfBoundsException in RecordingJSONParser.java

2015-04-30 Thread Scott Dawson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14521528#comment-14521528
 ] 

Scott Dawson commented on SOLR-7462:


Shawn, Erick - is it likely that this patch will be included in Solr 5.2?

 ArrayIndexOutOfBoundsException in RecordingJSONParser.java
 --

 Key: SOLR-7462
 URL: https://issues.apache.org/jira/browse/SOLR-7462
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.1
Reporter: Scott Dawson
 Attachments: SOLR-7462.patch


 With Solr 5.1 I'm getting an occasional fatal exception during indexing. It's 
 an ArrayIndexOutOfBoundsException at line 61 of 
 org/apache/solr/util/RecordingJSONParser.java. Looking at the code (see 
 below), it seems obvious that the if-statement at line 60 should use a 
 greater-than sign instead of greater-than-or-equals.
   @Override
   public CharArr getStringChars() throws IOException {
     CharArr chars = super.getStringChars();
     recordStr(chars.toString());
     position = getPosition();
     // if reading a String, getStringChars does not return the closing
     // single quote or double quote, so try to capture that
     if (chars.getArray().length >= chars.getStart() + chars.size()) { // line 60
       char next = chars.getArray()[chars.getStart() + chars.size()]; // line 61
       if (next == '"' || next == '\'') {
         recordChar(next);
       }
     }
     return chars;
   }






[jira] [Commented] (SOLR-7231) Allow DIH to create single geo-field from lat/lon metadata extracted via Tika

2015-04-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14521561#comment-14521561
 ] 

ASF subversion and git services commented on SOLR-7231:
---

Commit 1677004 from [~noble.paul] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1677004 ]

SOLR-7231: DIH-TikaEntityprocessor, create lat-lon field from Metadata

 Allow DIH to create single geo-field from lat/lon metadata extracted via Tika
 -

 Key: SOLR-7231
 URL: https://issues.apache.org/jira/browse/SOLR-7231
 Project: Solr
  Issue Type: Improvement
Reporter: Tim Allison
Assignee: Noble Paul
Priority: Trivial
 Attachments: SOLR-7231.patch, test_jpeg.jpg


 Tika can extract latitude and longitude data from image (and other) files.  
 It would be handy to allow the user to choose to have DIH populate a single 
 geofield (LatLonType or RPT) from the two metadata values extracted by Tika.






[jira] [Commented] (SOLR-4685) JSON response write modification to support RAW JSON

2015-04-30 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14521594#comment-14521594
 ] 

Noble Paul commented on SOLR-4685:
--

bq. If we want any better support, it feels like it should be in the form of a 
proper JSON fieldType

+1. This should be the right way to solve this.



 JSON response write modification to support RAW JSON
 

 Key: SOLR-4685
 URL: https://issues.apache.org/jira/browse/SOLR-4685
 Project: Solr
  Issue Type: Improvement
Reporter: Bill Bell
Assignee: Noble Paul
Priority: Minor
 Attachments: SOLR-4685.1.patch, SOLR-4685.5.1.patch, 
 SOLR-4685.SOLR_4_5.patch


 If the field ends with _json allow the field to return raw JSON.
 For example the field,
 office_json -- string
 I already put into the field raw JSON already escaped. I want it to come with 
 no double quotes and not escaped.






[jira] [Commented] (LUCENE-6045) Refator classifier APIs to work better with multi threading

2015-04-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14521541#comment-14521541
 ] 

ASF subversion and git services commented on LUCENE-6045:
-

Commit 1676997 from [~teofili] in branch 'dev/trunk'
[ https://svn.apache.org/r1676997 ]

LUCENE-6045 - refactor Classifier API to work better with multithreading

 Refator classifier APIs to work better with multi threading
 ---

 Key: LUCENE-6045
 URL: https://issues.apache.org/jira/browse/LUCENE-6045
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/classification
Reporter: Tommaso Teofili
Assignee: Tommaso Teofili
 Fix For: Trunk


 In 
 https://issues.apache.org/jira/browse/LUCENE-4345?focusedCommentId=13454729&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13454729
  [~simonw] pointed out that the current Classifier API doesn't work well in 
 multi threading environments: 
 bq. The interface you defined has some problems with respect to 
 Multi-Threading IMO. The interface itself suggests that this class is 
 stateful and you have to call methods in a certain order, and at the same time you 
 need to make sure that it is not published for read access before training is 
 done. I think it would be wise to pass in all needed objects as constructor 
 arguments and make the references final so it can be shared across threads 
 and add an interface that represents the trained model computed offline? In 
 this case it doesn't really matter but in the future it might make sense. We 
 can also skip the model interface entirely and remove the training method 
 until we have some impls that really need to be trained.
 I missed that at that point but I think for 6.0 (?) it would be wise to 
 rearrange the API to address that properly.






[jira] [Commented] (SOLR-4685) JSON response write modification to support RAW JSON

2015-04-30 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14521547#comment-14521547
 ] 

Yonik Seeley commented on SOLR-4685:


Seems like SOLR-7376 covers this functionality sufficiently?

If we want any better support, it feels like it should be in the form of a 
proper JSON fieldType.  That would allow:
- proper validation on ingest
- future features like structured indexing of the JSON
- optional raw-writing to the appropriate response handler automatically

Then one could make a dynamicField of *_json instead of a hacky json.key.suffix.

 JSON response write modification to support RAW JSON
 

 Key: SOLR-4685
 URL: https://issues.apache.org/jira/browse/SOLR-4685
 Project: Solr
  Issue Type: Improvement
Reporter: Bill Bell
Assignee: Noble Paul
Priority: Minor
 Attachments: SOLR-4685.1.patch, SOLR-4685.5.1.patch, 
 SOLR-4685.SOLR_4_5.patch


 If the field ends with _json allow the field to return raw JSON.
 For example the field,
 office_json -- string
 I already put into the field raw JSON already escaped. I want it to come with 
 no double quotes and not escaped.






RE: JEP 248: Make G1 the Default Garbage Collector

2015-04-30 Thread Uwe Schindler
Hi Kirk, hi Mark,

the Lucene/Solr/Elasticsearch people still recommend that their users not use 
G1GC, although this type of application (full text search with the requirement 
for very low response times and no pauses) is a good candidate for G1GC. On 
the other hand, heap sizes for typical Lucene applications should not 
be too high, because most of the processing is done on memory mapped files 
off-heap. So heaps should be at most 1/4 of the physical RAM available, because 
Lucene relies on the fact that the index files reside in file system cache (too 
large heaps are contra-productive here).
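The sizing advice above reduces to simple arithmetic; here is a minimal sketch
(the class and method names are invented for illustration, and the 1/4 figure
is the rule of thumb quoted above, not a hard limit):

```java
public class HeapRule {
    // Rule of thumb from the thread: cap the JVM heap at 1/4 of physical
    // RAM so roughly 3/4 stays available as file-system cache for Lucene's
    // memory-mapped index files.
    static long maxRecommendedHeapBytes(long physicalRamBytes) {
        return physicalRamBytes / 4;
    }

    public static void main(String[] args) {
        long ram = 64L << 30;                         // e.g. a 64 GB box
        long heap = maxRecommendedHeapBytes(ram);
        System.out.println("-Xmx" + (heap >> 30) + "g");  // prints -Xmx16g
    }
}
```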

See also our recommendations for Apache Solr and Elasticsearch:
http://wiki.apache.org/solr/ShawnHeisey#GC_Tuning_for_Solr
http://www.elastic.co/guide/en/elasticsearch/guide/current/_don_8217_t_touch_these_settings.html

Currently Lucene's indexing sometimes caused serious data corruption with G1GC 
- leading to data loss, which was mainly caused by some bugs around G1GC and 
its use of additional memory barriers and the very close interaction with 
Hotspot, which seemed to break some optimizations. We had (only in combination 
with G1GC during our test suites) simple assert statements *sometimes* 
failing that should never fail unless there is a bug in the JVM.

We are aware that Java 8u40 declared G1GC as production ready, so we are 
still looking at failures in our extensive testing infrastructure. Indeed, I 
have not seen any G1GC-related problems recently, but that is not necessarily a 
sign for correctness.

Uwe

P.S.: It was nice to meet you last week on JAX!

-
Uwe Schindler
uschind...@apache.org 
ASF Member, Apache Lucene PMC / Committer
Bremen, Germany
http://lucene.apache.org/

 -Original Message-
 From: hotspot-dev [mailto:hotspot-dev-boun...@openjdk.java.net] On
 Behalf Of Kirk Pepperdine
 Sent: Wednesday, April 29, 2015 9:11 AM
 To: hotspot-...@openjdk.java.net Source Developers
 Subject: Re: JEP 248: Make G1 the Default Garbage Collector
 
 Hi all,
 
 Is the G1 ready for this? I see many people moving to G1, but I’m not sure
 that we’ve got the tunables correct. I’ve been sorting through a number of
 recent tuning engagements, and my conclusion is that I would like the
 collector to be aggressive about collecting tenured regions at the beginning
 of a JVM’s lifetime but then become less aggressive over time. The reason is
 the residual waste that I see left behind because certain regions never hit
 the threshold needed to be included in the CSET. But, on aggregate, the
 number of regions in this state does start to retain a significant amount of
 dead data. The only way to see the effects is to run regular Full GCs, which
 of course you don’t really want to do. However, the problem seems to settle
 down a wee bit over time, which is why I was thinking that being aggressive
 about what is collected in the early stages of a JVM’s life should lead to
 better packing and hence less waste.
 
 Note, I don’t really care about the memory waste, only its effect on cycle
 frequencies and pause times.
 
 Sorry but I don’t have anything formal about this as I (and I believe many
 others) are still sorting out what to make of the G1 in prod. Generally the
 overall results are good but sometimes it’s not that way up front and how to
 improve things is sometimes challenging.
 
 On a side note, the move to Tiered in 8 has also caused a bit of grief.
 Metaspace has caused a bit of grief and even parallelStream, which works,
 has come with some interesting side effects. Everyone has been so enamored
 with Lambdas (rightfully so) that the other stuff has been completely
 forgotten and some of it has surprised people. I guess I’ll be submitting a 
 talk
 for J1 on some of the field experience I’ve had with the other stuff.
 
 Regards,
 Kirk
 
 
 On Apr 28, 2015, at 11:02 PM, mark.reinh...@oracle.com wrote:
 
  New JEP Candidate: http://openjdk.java.net/jeps/248
 
  - Mark





Re: JEP 248: Make G1 the Default Garbage Collector

2015-04-30 Thread Kirk Pepperdine
Hi Uwe,

I’m currently dealing with a customer trying to use Lucene/Solr/Elasticsearch, 
and I expected that it would be a perfect candidate for G1, but I think that 
other “off-heap” solutions might also suffer. And, as you now know, it takes a 
serious amount of digging to sort these problems out. I am certain there are 
very few dev teams with the talent on board to work through the diagnostic 
process.

Regards,
Kirk
PS, yes it was indeed nice to meet you @ JAX.

On Apr 30, 2015, at 4:29 PM, Uwe Schindler uschind...@apache.org wrote:

 Hi Kirk, hi Mark,
 
 the Lucene/Solr/Elasticsearch people still recommend that their users not 
 use G1GC, although this type of application (full text search with the 
 requirement for very low response times and no pauses) is a good candidate 
 for G1GC. On the other hand, heap sizes for typical Lucene applications 
 should not be too high, because most of the processing is done on memory 
 mapped files off-heap. So heaps should be at most 1/4 of the physical RAM 
 available, because Lucene relies on the fact that the index files reside in 
 file system cache (too large heaps are contra-productive here).
 
 See also our recommendations for Apache Solr and Elasticsearch:
 http://wiki.apache.org/solr/ShawnHeisey#GC_Tuning_for_Solr
 http://www.elastic.co/guide/en/elasticsearch/guide/current/_don_8217_t_touch_these_settings.html
 
 Currently Lucene's indexing sometimes caused serious data corruption with 
 G1GC - leading to data loss, which was mainly caused by some bugs around G1GC 
 and its use of additional memory barriers and the very close interaction with 
 Hotspot, which seemed to break some optimizations. We had (only in combination 
 with G1GC during our test suites) simple assert statements *sometimes* 
 failing that should never fail unless there is a bug in the JVM.
 
 We are aware that Java 8u40 declared G1GC as production ready, so we are 
 still looking at failures in our extensive testing infrastructure. Indeed, I 
 have not seen any G1GC-related problems recently, but that is not necessarily 
 a sign for correctness.
 
 Uwe
 
 P.S.: It was nice to meet you last week on JAX!
 
 -
 Uwe Schindler
 uschind...@apache.org 
 ASF Member, Apache Lucene PMC / Committer
 Bremen, Germany
 http://lucene.apache.org/
 
 -Original Message-
 From: hotspot-dev [mailto:hotspot-dev-boun...@openjdk.java.net] On
 Behalf Of Kirk Pepperdine
 Sent: Wednesday, April 29, 2015 9:11 AM
 To: hotspot-...@openjdk.java.net Source Developers
 Subject: Re: JEP 248: Make G1 the Default Garbage Collector
 
 Hi all,
 
 Is the G1 ready for this? I see many people moving to G1, but I’m not sure
 that we’ve got the tunables correct. I’ve been sorting through a number of
 recent tuning engagements, and my conclusion is that I would like the
 collector to be aggressive about collecting tenured regions at the beginning
 of a JVM’s lifetime but then become less aggressive over time. The reason is
 the residual waste that I see left behind because certain regions never hit
 the threshold needed to be included in the CSET. But, on aggregate, the
 number of regions in this state does start to retain a significant amount of
 dead data. The only way to see the effects is to run regular Full GCs, which
 of course you don’t really want to do. However, the problem seems to settle
 down a wee bit over time, which is why I was thinking that being aggressive
 about what is collected in the early stages of a JVM’s life should lead to
 better packing and hence less waste.
 
 Note, I don’t really care about the memory waste, only its effect on cycle
 frequencies and pause times.
 
 Sorry but I don’t have anything formal about this as I (and I believe many
 others) are still sorting out what to make of the G1 in prod. Generally the
 overall results are good but sometimes it’s not that way up front and how to
 improve things is sometimes challenging.
 
 On a side note, the move to Tiered in 8 has also caused a bit of grief.
 Metaspace has caused a bit of grief and even parallelStream, which works,
 has come with some interesting side effects. Everyone has been so enamored
 with Lambdas (rightfully so) that the other stuff has been completely
 forgotten and some of it has surprised people. I guess I’ll be submitting a 
 talk
 for J1 on some of the field experience I’ve had with the other stuff.
 
 Regards,
 Kirk
 
 
 On Apr 28, 2015, at 11:02 PM, mark.reinh...@oracle.com wrote:
 
 New JEP Candidate: http://openjdk.java.net/jeps/248
 
 - Mark
 





[jira] [Commented] (LUCENE-6045) Refator classifier APIs to work better with multi threading

2015-04-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14521544#comment-14521544
 ] 

ASF subversion and git services commented on LUCENE-6045:
-

Commit 1676998 from [~teofili] in branch 'dev/trunk'
[ https://svn.apache.org/r1676998 ]

LUCENE-6045 - removed train exceptions

 Refator classifier APIs to work better with multi threading
 ---

 Key: LUCENE-6045
 URL: https://issues.apache.org/jira/browse/LUCENE-6045
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/classification
Reporter: Tommaso Teofili
Assignee: Tommaso Teofili
 Fix For: Trunk


 In 
 https://issues.apache.org/jira/browse/LUCENE-4345?focusedCommentId=13454729&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13454729
  [~simonw] pointed out that the current Classifier API doesn't work well in 
 multi threading environments: 
 bq. The interface you defined has some problems with respect to 
 Multi-Threading IMO. The interface itself suggests that this class is 
 stateful and you have to call methods in a certain order, and at the same time you 
 need to make sure that it is not published for read access before training is 
 done. I think it would be wise to pass in all needed objects as constructor 
 arguments and make the references final so it can be shared across threads 
 and add an interface that represents the trained model computed offline? In 
 this case it doesn't really matter but in the future it might make sense. We 
 can also skip the model interface entirely and remove the training method 
 until we have some impls that really need to be trained.
 I missed that at that point but I think for 6.0 (?) it would be wise to 
 rearrange the API to address that properly.
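 As a rough illustration of Simon's suggestion (final constructor arguments plus a
 separately computed, immutable trained model), something like the following
 sketch would be thread-safe by construction; the class and method names here are
 illustrative, not the actual Lucene classification API:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

/** Immutable "trained model", computed offline and safe to share across threads. */
final class TrainedModel {
    private final Map<String, String> tokenToClass;

    TrainedModel(Map<String, String> tokenToClass) {
        // Defensive copy + unmodifiable view: state cannot change after construction.
        this.tokenToClass = Collections.unmodifiableMap(new HashMap<>(tokenToClass));
    }

    String assignClass(String token) {
        return tokenToClass.getOrDefault(token, "unknown");
    }
}

public class ImmutableClassifierSketch {
    /** Training runs before the model is published; readers only ever see final state. */
    static TrainedModel train(Map<String, String> labeledTokens) {
        return new TrainedModel(labeledTokens);
    }

    public static void main(String[] args) {
        Map<String, String> data = new HashMap<>();
        data.put("goal", "sports");
        System.out.println(train(data).assignClass("goal")); // sports
    }
}
```

 Because every reference is final and the map is never mutated after the
 constructor returns, no ordering of method calls can publish a half-trained model.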



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-7231) Allow DIH to create single geo-field from lat/lon metadata extracted via Tika

2015-04-30 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-7231.
--
   Resolution: Fixed
Fix Version/s: 5.2
   Trunk

 Allow DIH to create single geo-field from lat/lon metadata extracted via Tika
 -

 Key: SOLR-7231
 URL: https://issues.apache.org/jira/browse/SOLR-7231
 Project: Solr
  Issue Type: Improvement
Reporter: Tim Allison
Assignee: Noble Paul
Priority: Trivial
 Fix For: Trunk, 5.2

 Attachments: SOLR-7231.patch, test_jpeg.jpg


 Tika can extract latitude and longitude data from image (and other) files.  
 It would be handy to allow the user to choose to have DIH populate a single 
 geofield (LatLonType or RPT) from the two metadata values extracted by Tika.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6454) Support Member Methods in VariableContext

2015-04-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14521794#comment-14521794
 ] 

ASF subversion and git services commented on LUCENE-6454:
-

Commit 1677022 from [~rjernst] in branch 'dev/trunk'
[ https://svn.apache.org/r1677022 ]

LUCENE-6454: Added distinction between member variable and method in expression 
helper VariableContext

 Support Member Methods in VariableContext
 -

 Key: LUCENE-6454
 URL: https://issues.apache.org/jira/browse/LUCENE-6454
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Jack Conradson
 Attachments: LUCENE-6454.patch, LUCENE-6454.patch


 The Javascript compiler now supports simple member methods being processed by 
 expression Bindings.  The VariableContext should also support being able to 
 parse member methods.
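 As a purely hypothetical sketch of the distinction being added, a parser can
 treat a trailing segment ending in {{()}} as a member method rather than a
 member variable; the real VariableContext parsing may differ:

```java
public class MemberKindSketch {
    enum Kind { MEMBER_VARIABLE, MEMBER_METHOD }

    /**
     * Classify the trailing segment of a variable expression: "obj.field" is a
     * member variable, "obj.method()" a no-arg member method. Illustrative only.
     */
    static Kind kindOf(String variable) {
        int dot = variable.lastIndexOf('.');
        String last = dot >= 0 ? variable.substring(dot + 1) : variable;
        return last.endsWith("()") ? Kind.MEMBER_METHOD : Kind.MEMBER_VARIABLE;
    }

    public static void main(String[] args) {
        System.out.println(kindOf("doc.score"));      // MEMBER_VARIABLE
        System.out.println(kindOf("doc.getScore()")); // MEMBER_METHOD
    }
}
```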



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7490) Update by query feature

2015-04-30 Thread Praneeth (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14521907#comment-14521907
 ] 

Praneeth commented on SOLR-7490:


Thanks for the points. It's true that all the document fields need to be 
stored in order to implement this feature through atomic updates.

bq. could well result in a massive amount of work being done by Solr as the 
result of a single call

Would this be significantly more work for Solr than what it does for 
{{deleteByQuery}}? 

I'm not very familiar with the workings of DocValues and cannot at the moment 
comment on doing it through updatable DocValues. I'll look further into it. As 
you mentioned, optimistic locking is a primary concern here and could result in 
a lot of work for Solr.

I think now I understand some of the primary concerns and I will look into 
these areas and post back here.
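To make the optimistic-locking concern concrete, here is a minimal plain-Java
sketch of the per-document retry loop an update-by-query would need; {{Doc}} and
the in-memory store are illustrative stand-ins, not Solr classes:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Predicate;
import java.util.function.UnaryOperator;

public class UpdateByQuerySketch {
    /** Illustrative stand-in for an indexed document: a version plus one field. */
    static final class Doc {
        final long version; final String field;
        Doc(long version, String field) { this.version = version; this.field = field; }
        @Override public boolean equals(Object o) {
            return o instanceof Doc && ((Doc) o).version == version && ((Doc) o).field.equals(field);
        }
        @Override public int hashCode() { return Long.hashCode(version) * 31 + field.hashCode(); }
    }

    static final Map<String, Doc> store = new ConcurrentHashMap<>();

    /**
     * Update every doc matching the query, bumping its version. The replace()
     * call plays the role of the optimistic-locking version check: it fails,
     * and we retry, if another writer changed the doc since we read it.
     */
    static int updateByQuery(Predicate<Doc> query, UnaryOperator<String> change) {
        int updated = 0;
        for (String id : store.keySet()) {
            while (true) {
                Doc current = store.get(id);
                if (current == null || !query.test(current)) break;
                Doc next = new Doc(current.version + 1, change.apply(current.field));
                if (store.replace(id, current, next)) { updated++; break; }
                // else: concurrent modification, re-read and retry this doc
            }
        }
        return updated;
    }

    public static void main(String[] args) {
        store.put("d1", new Doc(1, "old"));
        store.put("d2", new Doc(1, "keep"));
        System.out.println(updateByQuery(d -> d.field.equals("old"), f -> "new")); // 1
    }
}
```

The retry loop is where the "massive amount of work" concern shows up: every
matching document is read, rewritten, and version-checked individually, unlike
{{deleteByQuery}}, which doesn't need to reconstruct document contents.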

 Update by query feature
 ---

 Key: SOLR-7490
 URL: https://issues.apache.org/jira/browse/SOLR-7490
 Project: Solr
  Issue Type: New Feature
Reporter: Praneeth
Priority: Minor

 An update feature similar to the {{deleteByQuery}} would be very useful. Say, 
 the user wants to update a field of all documents in the index that match a 
 given criteria. I have encountered this use case in my project and it looks 
 like it could be a useful first class solr/lucene feature. I want to check if 
 this is something we would want to support in coming releases of Solr and 
 Lucene, are there scenarios that will prevent us from doing this, 
 feasibility, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Review needed for SOLR-7121

2015-04-30 Thread S G
It's been a month since I updated my pull request with all the test-cases.

I would really appreciate it if someone could review and merge the pull
request below.

This patch:
1) Makes the nodes more resilient to crashes,
2) Improves cloud stability and
3) Prevents distributed deadlocks.

Thanks
Sachin


On Tue, Mar 31, 2015 at 4:30 PM, S G sg.online.em...@gmail.com wrote:

 Hi,

 I have opened a pull request for
 https://issues.apache.org/jira/browse/SOLR-7121
 at https://github.com/apache/lucene-solr/pull/132


 This PR allows clients to specify some threshold values beyond which the
 targeted core can declare itself unhealthy and proactively go down to
 recover.
 When the load improves, the downed cores come up automatically.
 Such behavior will help machines survive longer by not hitting their
 hardware limits.
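 The threshold behavior described above can be sketched roughly as a health
 check with hysteresis, so a core doesn't flap between up and down; the names
 and thresholds here are illustrative, not from the SOLR-7121 patch:

```java
public class CoreHealthSketch {
    private final double tripThreshold;   // load at which the core declares itself unhealthy
    private final double clearThreshold;  // lower load at which it comes back up
    private boolean healthy = true;

    CoreHealthSketch(double tripThreshold, double clearThreshold) {
        this.tripThreshold = tripThreshold;
        this.clearThreshold = clearThreshold;
    }

    /** Feed the latest load sample; returns whether the core should serve traffic. */
    boolean onSample(double load) {
        if (healthy && load >= tripThreshold) {
            healthy = false;              // proactively go down to recover
        } else if (!healthy && load <= clearThreshold) {
            healthy = true;               // load improved, come back up
        }
        return healthy;
    }

    public static void main(String[] args) {
        CoreHealthSketch core = new CoreHealthSketch(0.9, 0.5);
        System.out.println(core.onSample(0.95)); // false: tripped
        System.out.println(core.onSample(0.3));  // true: recovered
    }
}
```

 The gap between the two thresholds is the design point: without it, a core
 hovering near a single limit would oscillate between healthy and unhealthy.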

 The PR includes tests for all the ill-health cases.
 If someone can review this and help me get it committed, it would be much
 appreciated.

 Thanks
 Sachin





[jira] [Commented] (LUCENE-6454) Support Member Methods in VariableContext

2015-04-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14521798#comment-14521798
 ] 

ASF subversion and git services commented on LUCENE-6454:
-

Commit 1677023 from [~rjernst] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1677023 ]

LUCENE-6454: Added distinction between member variable and method in expression 
helper VariableContext (merged r1677022)

 Support Member Methods in VariableContext
 -

 Key: LUCENE-6454
 URL: https://issues.apache.org/jira/browse/LUCENE-6454
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Jack Conradson
 Attachments: LUCENE-6454.patch, LUCENE-6454.patch


 The Javascript compiler now supports simple member methods being processed by 
 expression Bindings.  The VariableContext should also support being able to 
 parse member methods.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7487) check-example-lucene-match-version is looking in the wrong place - luceneMatchVersion incorrect in 5.1 sample configs

2015-04-30 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14521817#comment-14521817
 ] 

Ryan Ernst commented on SOLR-7487:
--

You probably want to update {{dev-tools/scripts/addVersion.py}} as well so 
these get updated when versions are bumped?

 check-example-lucene-match-version is looking in the wrong place - 
 luceneMatchVersion incorrect in 5.1 sample configs
 -

 Key: SOLR-7487
 URL: https://issues.apache.org/jira/browse/SOLR-7487
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.1
Reporter: Hoss Man
Assignee: Timothy Potter
Priority: Blocker
 Fix For: 5.2

 Attachments: SOLR-7487.patch


 As noted by Scott Dawson on the mailing list, the luceneMatchVersion in the 
 5.1 sample configs still lists 5.0.0.
 The root cause seems to be that the check-example-lucene-match-version 
 task in solr/build.xml is looking in the wrong place -- it's still scanning 
 for instances of luceneMatchVersion in the {{example}} directory instead of 
 the {{server/solr/configset}}
 TODO:
 * fix the luceneMatchVersion value in all sample configsets on 5x
 * update the check to look in the correct directory
 * update the check to be smarter now that we have a more predictable 
 directory structure:
 ** fail if no subdirs found
 ** fail if any subdir doesn't contain conf/solrconfig.xml
 ** fail if any conf/solrconfig.xml doesn't contain a luceneMatchVersion
 ** fail if any luceneMatchVersion doesn't have the expected value
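 The stricter check in the TODO list could look roughly like the following
 plain-Java sketch; the real check is an Ant task in solr/build.xml, and the
 exact directory layout and XML matching are assumptions here:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class MatchVersionCheck {
    /** Fail on: no subdirs, missing conf/solrconfig.xml, missing or unexpected luceneMatchVersion. */
    static void check(Path configsetsDir, String expected) throws IOException {
        List<Path> subdirs;
        try (Stream<Path> s = Files.list(configsetsDir)) {
            subdirs = s.filter(Files::isDirectory).collect(Collectors.toList());
        }
        if (subdirs.isEmpty())
            throw new IllegalStateException("no configsets found under " + configsetsDir);
        for (Path dir : subdirs) {
            Path cfg = dir.resolve("conf").resolve("solrconfig.xml");
            if (!Files.exists(cfg))
                throw new IllegalStateException("missing " + cfg);
            String text = new String(Files.readAllBytes(cfg));
            if (!text.contains("<luceneMatchVersion>"))
                throw new IllegalStateException("no luceneMatchVersion in " + cfg);
            if (!text.contains("<luceneMatchVersion>" + expected + "</luceneMatchVersion>"))
                throw new IllegalStateException("unexpected luceneMatchVersion in " + cfg);
        }
    }

    public static void main(String[] args) throws IOException {
        // Demo against a throwaway directory tree.
        Path root = Files.createTempDirectory("configsets");
        Path conf = root.resolve("sample").resolve("conf");
        Files.createDirectories(conf);
        Files.write(conf.resolve("solrconfig.xml"),
                "<luceneMatchVersion>5.1.0</luceneMatchVersion>".getBytes());
        check(root, "5.1.0");
        System.out.println("check passed");
    }
}
```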



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: AngularJS Admin UI first pass complete

2015-04-30 Thread Chris Hostetter

: Once that's done, switching should just be a question of changing the
: welcome-file in web.xml.

My suggestion, along the lines of what we did with the previous UI 
change...

1) ensure that the new UI is fully functional in 5x using an alt path 
(believe this is already true?)

2) add an Upgrading note to CHANGES.txt for 5.2 directing people to 
the new experimental UI and the URL to access it, note that it 
will likely become the default in 5.3 and users are encouraged to try it 
out and file bugs if they notice any problems.

3) update trunk so that it *is* the default UI, and make the old UI 
available at some alternate path (eg /solr/old_ui).  Add an Upgrading 
note pointing out that the URLs may be slightly different, and how to 
access the old UI if they have problems

* backport trunk changes (#3) to 5x for 5.3 (or later if problems/delays 
pop up)


: *Then* we can get on with things like a collections API page, an
: explains viewer, etc, etc, etc.

+1


-Hoss
http://www.lucidworks.com/

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6454) Support Member Methods in VariableContext

2015-04-30 Thread Ryan Ernst (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Ernst resolved LUCENE-6454.

   Resolution: Fixed
Fix Version/s: 5.2
   Trunk
 Assignee: Ryan Ernst

Thanks Jack!

 Support Member Methods in VariableContext
 -

 Key: LUCENE-6454
 URL: https://issues.apache.org/jira/browse/LUCENE-6454
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Jack Conradson
Assignee: Ryan Ernst
 Fix For: Trunk, 5.2

 Attachments: LUCENE-6454.patch, LUCENE-6454.patch


 The Javascript compiler now supports simple member methods being processed by 
 expression Bindings.  The VariableContext should also support being able to 
 parse member methods.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-7487) check-example-lucene-match-version is looking in the wrong place - luceneMatchVersion incorrect in 5.1 sample configs

2015-04-30 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14521959#comment-14521959
 ] 

Hoss Man edited comment on SOLR-7487 at 4/30/15 6:13 PM:
-

build.xml changes look good to me.

I've updated the patch with my own crude attempt at the addVersion.py changes 
-- totally untested since evidently addVersion requires python 3.3 (flush on 
print?) and that doesn't seem to be available for the ubuntu version on my 
laptop.

EDIT: whoops ... didn't see you already solved that tim.


was (Author: hossman):
build.xml changes look good to me.

I've updated the patch with my own crude attempt at the addVersion.py changes 
-- totally untested since evidently addVersion requires python 3.3 (flush on 
print?) and that doesn't seem to be available for the ubuntu version on my 
laptop.

 check-example-lucene-match-version is looking in the wrong place - 
 luceneMatchVersion incorrect in 5.1 sample configs
 -

 Key: SOLR-7487
 URL: https://issues.apache.org/jira/browse/SOLR-7487
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.1
Reporter: Hoss Man
Assignee: Timothy Potter
Priority: Blocker
 Fix For: 5.2

 Attachments: SOLR-7487.patch, SOLR-7487.patch, SOLR-7487.patch


 As noted by Scott Dawson on the mailing list, the luceneMatchVersion in the 
 5.1 sample configs still lists 5.0.0.
 The root cause seems to be that the check-example-lucene-match-version 
 task in solr/build.xml is looking in the wrong place -- it's still scanning 
 for instances of luceneMatchVersion in the {{example}} directory instead of 
 the {{server/solr/configset}}
 TODO:
 * fix the luceneMatchVersion value in all sample configsets on 5x
 * update the check to look in the correct directory
 * update the check to be smarter now that we have a more predictable 
 directory structure:
 ** fail if no subdirs found
 ** fail if any subdir doesn't contain conf/solrconfig.xml
 ** fail if any conf/solrconfig.xml doesn't contain a luceneMatchVersion
 ** fail if any luceneMatchVersion doesn't have the expected value



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 833 - Failure

2015-04-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/833/

5 tests failed.
REGRESSION:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test

Error Message:
The Monkey ran for over 20 seconds and no jetties were stopped - this is worth 
investigating!

Stack Trace:
java.lang.AssertionError: The Monkey ran for over 20 seconds and no jetties 
were stopped - this is worth investigating!
at 
__randomizedtesting.SeedInfo.seed([9A42D5030E65BEE5:1216EAD9A099D31D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.apache.solr.cloud.ChaosMonkey.stopTheMonkey(ChaosMonkey.java:537)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:189)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-7231) Allow DIH to create single geo-field from lat/lon metadata extracted via Tika

2015-04-30 Thread Tim Allison (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14521814#comment-14521814
 ] 

Tim Allison commented on SOLR-7231:
---

Thank you!

 Allow DIH to create single geo-field from lat/lon metadata extracted via Tika
 -

 Key: SOLR-7231
 URL: https://issues.apache.org/jira/browse/SOLR-7231
 Project: Solr
  Issue Type: Improvement
Reporter: Tim Allison
Assignee: Noble Paul
Priority: Trivial
 Fix For: Trunk, 5.2

 Attachments: SOLR-7231.patch, test_jpeg.jpg


 Tika can extract latitude and longitude data from image (and other) files.  
 It would be handy to allow the user to choose to have DIH populate a single 
 geofield (LatLonType or RPT) from the two metadata values extracted by Tika.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7484) Refactor SolrDispatchFilter.doFilter(...) method

2015-04-30 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-7484:
---
Attachment: SOLR-7484.patch

Updated patch, without the SolrRequestContext and a few more methods extracted 
out. I've just moved the methods out with some comments around the calls and 
haven't really changed much, as I wouldn't want to make this an invasive change.
We can revisit this after the first commit, I think.

 Refactor SolrDispatchFilter.doFilter(...) method
 

 Key: SOLR-7484
 URL: https://issues.apache.org/jira/browse/SOLR-7484
 Project: Solr
  Issue Type: Improvement
Reporter: Anshum Gupta
Assignee: Anshum Gupta
 Attachments: SOLR-7484.patch, SOLR-7484.patch, SOLR-7484.patch, 
 SOLR-7484.patch, SOLR-7484.patch, SOLR-7484.patch, SOLR-7484.patch


 Currently almost everything that's done in SDF.doFilter() is sequential. We 
 should refactor it to clean up the code and make things easier to manage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 3038 - Failure

2015-04-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/3038/

2 tests failed.
REGRESSION:  org.apache.solr.cloud.MultiThreadedOCPTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=6313, 
name=parallelCoreAdminExecutor-1988-thread-15, state=RUNNABLE, 
group=TGRP-MultiThreadedOCPTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=6313, 
name=parallelCoreAdminExecutor-1988-thread-15, state=RUNNABLE, 
group=TGRP-MultiThreadedOCPTest]
at 
__randomizedtesting.SeedInfo.seed([1FD11A82D96D185B:97852558779175A3]:0)
Caused by: java.lang.AssertionError: Too many closes on SolrCore
at __randomizedtesting.SeedInfo.seed([1FD11A82D96D185B]:0)
at org.apache.solr.core.SolrCore.close(SolrCore.java:1138)
at org.apache.solr.common.util.IOUtils.closeQuietly(IOUtils.java:31)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:535)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:494)
at 
org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:598)
at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:212)
at 
org.apache.solr.handler.admin.CoreAdminHandler$ParallelCoreAdminHandlerThread.run(CoreAdminHandler.java:1219)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:148)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)


REGRESSION:  org.apache.solr.cloud.RecoveryZkTest.test

Error Message:
shard1 is not consistent.  Got 871 from 
http://127.0.0.1:47816/qrifm/t/collection1lastClient and got 240 from 
http://127.0.0.1:47826/qrifm/t/collection1

Stack Trace:
java.lang.AssertionError: shard1 is not consistent.  Got 871 from 
http://127.0.0.1:47816/qrifm/t/collection1lastClient and got 240 from 
http://127.0.0.1:47826/qrifm/t/collection1
at 
__randomizedtesting.SeedInfo.seed([1FD11A82D96D185B:97852558779175A3]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.apache.solr.cloud.RecoveryZkTest.test(RecoveryZkTest.java:123)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 

[jira] [Commented] (SOLR-7487) check-example-lucene-match-version is looking in the wrong place - luceneMatchVersion incorrect in 5.1 sample configs

2015-04-30 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14521886#comment-14521886
 ] 

Timothy Potter commented on SOLR-7487:
--

Yup, good catch (got hasty), thanks Ryan.

 check-example-lucene-match-version is looking in the wrong place - 
 luceneMatchVersion incorrect in 5.1 sample configs
 -

 Key: SOLR-7487
 URL: https://issues.apache.org/jira/browse/SOLR-7487
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.1
Reporter: Hoss Man
Assignee: Timothy Potter
Priority: Blocker
 Fix For: 5.2

 Attachments: SOLR-7487.patch


 As noted by Scott Dawson on the mailing list, the luceneMatchVersion in the 
 5.1 sample configs still lists 5.0.0.
 The root cause seems to be that the check-example-lucene-match-version 
 task in solr/build.xml is looking in the wrong place -- it's still scanning 
 for instances of luceneMatchVersion in the {{example}} directory instead of 
 the {{server/solr/configset}}
 TODO:
 * fix the luceneMatchVersion value in all sample configsets on 5x
 * update the check to look in the correct directory
 * update the check to be smarter now that we have a more predictable 
 directory structure:
 ** fail if no subdirs found
 ** fail if any subdir doesn't contain conf/solrconfig.xml
 ** fail if any conf/solrconfig.xml doesn't contain a luceneMatchVersion
 ** fail if any luceneMatchVersion doesn't have the expected value



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7493) Requests aren't distributed evenly if the collection isn't present locally

2015-04-30 Thread Jeff Wartes (JIRA)
Jeff Wartes created SOLR-7493:
-

 Summary: Requests aren't distributed evenly if the collection 
isn't present locally
 Key: SOLR-7493
 URL: https://issues.apache.org/jira/browse/SOLR-7493
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 5.0
Reporter: Jeff Wartes


I had a SolrCloud cluster where every node was behind a simple round-robin load 
balancer.
This cluster had two collections (A, B), and the slices of each were 
partitioned such that one collection (A) used two thirds of the nodes, and the 
other collection (B) used the remaining third of the nodes.

I observed that every request for collection B that the load balancer sent to a 
node with (only) slices for collection A got proxied to one *specific* node 
hosting a slice for collection B. This node started running pretty hot, for 
obvious reasons.

This meant that one specific node was handling the fan-out for slightly more 
than two-thirds of the requests against collection B.
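One likely direction for a fix is to pick a random replica when proxying instead
of always the first in the list; a minimal sketch, with illustrative names (this
is not the actual Solr proxy code):

```java
import java.util.List;
import java.util.Random;

public class RandomReplicaPick {
    /**
     * Always taking replicaUrls.get(0) concentrates all proxied traffic on one
     * node; a random index spreads it evenly in expectation.
     */
    static String pick(List<String> replicaUrls, Random rnd) {
        return replicaUrls.get(rnd.nextInt(replicaUrls.size()));
    }

    public static void main(String[] args) {
        List<String> urls = java.util.Arrays.asList("http://a", "http://b", "http://c");
        System.out.println(pick(urls, new Random()));
    }
}
```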




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7493) Requests aren't distributed evenly if the collection isn't present locally

2015-04-30 Thread Jeff Wartes (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Wartes updated SOLR-7493:
--
Labels:   (was: pat)

 Requests aren't distributed evenly if the collection isn't present locally
 --

 Key: SOLR-7493
 URL: https://issues.apache.org/jira/browse/SOLR-7493
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 5.0
Reporter: Jeff Wartes
 Attachments: SOLR-7493.patch


 I had a SolrCloud cluster where every node was behind a simple round-robin 
 load balancer.
 This cluster had two collections (A, B), and the slices of each were 
 partitioned such that one collection (A) used two thirds of the nodes, and 
 the other collection (B) used the remaining third of the nodes.
 I observed that every request for collection B that the load balancer sent to 
 a node with (only) slices for collection A got proxied to one *specific* node 
 hosting a slice for collection B. This node started running pretty hot, for 
 obvious reasons.
 This meant that one specific node was handling the fan-out for slightly more 
 than two-thirds of the requests against collection B.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: JEP 248: Make G1 the Default Garbage Collector

2015-04-30 Thread kedar mhaswade
The JEP says G1 is slated to become the default in JDK 9, which is at least a
year away, and adoption of JDK 9 will happen even more slowly. Should we
further delay making G1 the default if most of the development effort is
going into improving G1?

On Thu, Apr 30, 2015 at 8:06 AM, Christian Thalinger 
christian.thalin...@oracle.com wrote:


  On Apr 30, 2015, at 7:29 AM, Uwe Schindler uschind...@apache.org
 wrote:
 
  Hi Kirk, hi Mark,
 
  the Lucene/Solr/Elasticsearch people still recommend that their users not
 use G1GC, although this type of application (full-text search with the
 requirement for very low response times and no pauses) is a good candidate
 for G1GC. On the other hand, heap sizes for typical Lucene applications
 should not be too large, because most of the processing is done on
 memory-mapped files off-heap. So heaps should be at most 1/4 of the physical
 RAM available, because Lucene relies on the index files residing in the
 file system cache (too-large heaps are counter-productive here).
 
  See also our recommendations for Apache Solr and Elasticsearch:
  http://wiki.apache.org/solr/ShawnHeisey#GC_Tuning_for_Solr
 
 http://www.elastic.co/guide/en/elasticsearch/guide/current/_don_8217_t_touch_these_settings.html
 
  Currently Lucene's indexing sometimes caused serious data corruption with
 G1GC - leading to data loss - which was mainly caused by some bugs around
 G1GC, its use of additional memory barriers, and its very close interaction
 with HotSpot that seemed to break some optimizations. We had (only in
 combination with G1GC during our test suites) simple assert statements
 *sometimes* failing that should never fail unless there is a bug in the JVM.

 In fact there was a bug with asserts triggering when they shouldn’t:

 https://bugs.openjdk.java.net/browse/JDK-8006960 
 https://bugs.openjdk.java.net/browse/JDK-8006960

 
   We are aware that Java 8u40 declared G1GC as production ready, so we
  are still looking at failures in our extensive testing infrastructure.
  Indeed, I have not seen any G1GC-related problems recently, but that is not
  necessarily a sign of correctness.
 
  Uwe
 
  P.S.: It was nice to meet you last week on JAX!
 
  -
  Uwe Schindler
  uschind...@apache.org
  ASF Member, Apache Lucene PMC / Committer
  Bremen, Germany
  http://lucene.apache.org/
 
  -Original Message-
  From: hotspot-dev [mailto:hotspot-dev-boun...@openjdk.java.net] On
  Behalf Of Kirk Pepperdine
  Sent: Wednesday, April 29, 2015 9:11 AM
  To: hotspot-...@openjdk.java.net Source Developers
  Subject: Re: JEP 248: Make G1 the Default Garbage Collector
 
  Hi all,
 
   Is the G1 ready for this? I see many people moving to G1, but I'm not
   sure that we've got the tunables correct. I've been sorting through a
   number of recent tuning engagements, and my conclusion is that I would
   like the collector to be aggressive about collecting tenured regions at
   the beginning of a JVM's lifetime but then become less aggressive over
   time. The reason is
  the residual waste that I see left behind because certain regions never
 hit
   the threshold needed to be included in the CSET. But, in aggregate, the
   number of regions in this state does start to retain a significant
   amount of dead
   data. The only way to see the effects is to run regular Full GCs..
 which of
  course you don’t really want to do. However, the problem seems to settle
  down a wee bit over time which is why I was thinking that being
 aggressive
  about what is collected in the early stages of a JVMs life should lead
 to better
  packing and hence less waste.
 
   Note, I don't really care about the memory waste, only its effect on
   cycle frequencies and pause times.
 
  Sorry but I don’t have anything formal about this as I (and I believe
 many
  others) are still sorting out what to make of the G1 in prod. Generally
 the
  overall results are good but sometimes it’s not that way up front and
 how to
  improve things is sometimes challenging.
 
   On a side note, the move to Tiered in 8 has also caused a bit of grief.
   Metaspace has caused some grief too, and even parallelStream, which
   works, has come with some interesting side effects. Everyone has been so
 enamored
  with Lambdas (rightfully so) that the other stuff has been completely
  forgotten and some of it has surprised people. I guess I’ll be
 submitting a talk
  for J1 on some of the field experience I’ve had with the other stuff.
 
  Regards,
  Kirk
 
 
  On Apr 28, 2015, at 11:02 PM, mark.reinh...@oracle.com wrote:
 
  New JEP Candidate: http://openjdk.java.net/jeps/248
 
  - Mark
 




[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_45) - Build # 4749 - Still Failing!

2015-04-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4749/
Java: 32bit/jdk1.8.0_45 -server -XX:+UseSerialGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.DeleteShardTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.cloud.DeleteShardTest: 
1) Thread[id=175, 
name=OverseerStateUpdate-93743310794194950-127.0.0.1:53560__-n_01, 
state=TIMED_WAITING, group=Overseer state updater.] at 
java.lang.Object.wait(Native Method) at 
org.apache.solr.cloud.DistributedQueue$LatchWatcher.await(DistributedQueue.java:276)
 at 
org.apache.solr.cloud.DistributedQueue.getChildren(DistributedQueue.java:320)   
  at org.apache.solr.cloud.DistributedQueue.peek(DistributedQueue.java:594) 
at 
org.apache.solr.cloud.DistributedQueue.peek(DistributedQueue.java:572) 
at org.apache.solr.cloud.Overseer$ClusterStateUpdater.run(Overseer.java:189)
 at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.DeleteShardTest: 
   1) Thread[id=175, 
name=OverseerStateUpdate-93743310794194950-127.0.0.1:53560__-n_01, 
state=TIMED_WAITING, group=Overseer state updater.]
at java.lang.Object.wait(Native Method)
at 
org.apache.solr.cloud.DistributedQueue$LatchWatcher.await(DistributedQueue.java:276)
at 
org.apache.solr.cloud.DistributedQueue.getChildren(DistributedQueue.java:320)
at 
org.apache.solr.cloud.DistributedQueue.peek(DistributedQueue.java:594)
at 
org.apache.solr.cloud.DistributedQueue.peek(DistributedQueue.java:572)
at 
org.apache.solr.cloud.Overseer$ClusterStateUpdater.run(Overseer.java:189)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([B3694E564BAB4CC4]:0)




Build Log:
[...truncated 9232 lines...]
   [junit4] Suite: org.apache.solr.cloud.DeleteShardTest
   [junit4]   2 Creating dataDir: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.DeleteShardTest
 B3694E564BAB4CC4-001\init-core-data-001
   [junit4]   2 27898 T82 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl 
(false) and clientAuth (false)
   [junit4]   2 27898 T82 oas.BaseDistributedSearchTestCase.initHostContext 
Setting hostContext system property: /_/
   [junit4]   2 27917 T82 oasc.ZkTestServer.run STARTING ZK TEST SERVER
   [junit4]   2 27921 T83 oasc.ZkTestServer$2$1.setClientPort client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2 27923 T83 oasc.ZkTestServer$ZKServerMain.runFromConfig 
Starting server
   [junit4]   2 28122 T82 oasc.ZkTestServer.run start zk server on port:53541
   [junit4]   2 28174 T82 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2 28270 T82 oascc.ConnectionManager.waitForConnected Waiting for 
client to connect to ZooKeeper
   [junit4]   2 28337 T90 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@ed2e39 name:ZooKeeperConnection 
Watcher:127.0.0.1:53541 got event WatchedEvent state:SyncConnected type:None 
path:null path:null type:None
   [junit4]   2 28337 T82 oascc.ConnectionManager.waitForConnected Client is 
connected to ZooKeeper
   [junit4]   2 28338 T82 oascc.SolrZkClient.createZkACLProvider Using default 
ZkACLProvider
   [junit4]   2 28343 T82 oascc.SolrZkClient.makePath makePath: /solr
   [junit4]   2 28395 T82 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2 28397 T82 oascc.ConnectionManager.waitForConnected Waiting for 
client to connect to ZooKeeper
   [junit4]   2 28400 T93 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@4a49e7 name:ZooKeeperConnection 
Watcher:127.0.0.1:53541/solr got event WatchedEvent state:SyncConnected 
type:None path:null path:null type:None
   [junit4]   2 28400 T82 oascc.ConnectionManager.waitForConnected Client is 
connected to ZooKeeper
   [junit4]   2 28400 T82 oascc.SolrZkClient.createZkACLProvider Using default 
ZkACLProvider
   [junit4]   2 28404 T82 oascc.SolrZkClient.makePath makePath: 
/collections/collection1
   [junit4]   2 28409 T82 oascc.SolrZkClient.makePath makePath: 
/collections/collection1/shards
   [junit4]   2 28413 T82 oascc.SolrZkClient.makePath makePath: 
/collections/control_collection
   [junit4]   2 28416 T82 oascc.SolrZkClient.makePath makePath: 
/collections/control_collection/shards
   [junit4]   2 28420 T82 oasc.AbstractZkTestCase.putConfig put 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\core\src\test-files\solr\collection1\conf\solrconfig-tlog.xml
 to /configs/conf1/solrconfig.xml
   [junit4]   2 28421 T82 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/solrconfig.xml
   [junit4]   2 28426 T82 oasc.AbstractZkTestCase.putConfig put 

[jira] [Updated] (SOLR-7493) Requests aren't distributed evenly if the collection isn't present locally

2015-04-30 Thread Jeff Wartes (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Wartes updated SOLR-7493:
--
Attachment: SOLR-7493.patch

It looks like this happens because SolrDispatchFilter's getRemoteCoreURL 
eventually takes the first viable entry from a HashMap.values list of cores. 

HashMap.values ordering is always the same if you load the HashMap with the 
same data in the same order. So if the list from ZK is presented in the same 
order on every node, every node will use the same ordering on every request.

There might be a better solution, but this patch would randomize that ordering 
per-request. 
My environment is a bit messed up at the moment, so I haven't done much more 
than verify this compiles.
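To make the failure mode concrete: here is a toy sketch (not the actual SolrDispatchFilter code; the class and method names are hypothetical) contrasting "take the first entry of a deterministically-ordered values() view" with the per-request shuffle the patch describes.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

// Toy illustration of the SOLR-7493 behavior: if every node builds the same
// core map in the same order, values().iterator().next() picks the same
// remote core on every node; shuffling a copy per request spreads the load.
public class CoreUrlPicker {

  // Deterministic: every node that builds the same map picks the same entry.
  static String pickFirst(Map<String, String> coresByName) {
    return coresByName.values().iterator().next();
  }

  // Randomized per request: copy the values into a list and shuffle it.
  static String pickRandom(Map<String, String> coresByName, Random rnd) {
    List<String> urls = new ArrayList<>(coresByName.values());
    Collections.shuffle(urls, rnd);
    return urls.get(0);
  }

  public static void main(String[] args) {
    Map<String, String> cores = new LinkedHashMap<>();
    cores.put("shard1_replica1", "http://node1:8983/solr/b_shard1_replica1");
    cores.put("shard2_replica1", "http://node2:8983/solr/b_shard2_replica1");
    cores.put("shard3_replica1", "http://node3:8983/solr/b_shard3_replica1");

    // Without shuffling, the same core is chosen on every "request".
    System.out.println(pickFirst(cores));

    // With shuffling, any of the three cores may be chosen.
    System.out.println(pickRandom(cores, new Random()));
  }
}
```

The shuffle is O(n) per request; for small core lists that cost is negligible compared to the proxied request itself.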

 Requests aren't distributed evenly if the collection isn't present locally
 --

 Key: SOLR-7493
 URL: https://issues.apache.org/jira/browse/SOLR-7493
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 5.0
Reporter: Jeff Wartes
 Attachments: SOLR-7493.patch


 I had a SolrCloud cluster where every node is behind a simple round-robin 
 load balancer.
 This cluster had two collections (A, B), and the slices of each were 
 partitioned such that one collection (A) used two thirds of the nodes, and 
 the other collection (B) used the remaining third of the nodes.
 I observed that every request for collection B that the load balancer sent to 
 a node with (only) slices for collection A got proxied to one *specific* node 
 hosting a slice for collection B. This node started running pretty hot, for 
 obvious reasons.
 This meant that one specific node was handling the fan-out for slightly more 
 than two-thirds of the requests against collection B.






[jira] [Updated] (SOLR-7493) Requests aren't distributed evenly if the collection isn't present locally

2015-04-30 Thread Jeff Wartes (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Wartes updated SOLR-7493:
--
Labels: pat  (was: )

 Requests aren't distributed evenly if the collection isn't present locally
 --

 Key: SOLR-7493
 URL: https://issues.apache.org/jira/browse/SOLR-7493
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 5.0
Reporter: Jeff Wartes
  Labels: pat
 Attachments: SOLR-7493.patch


 I had a SolrCloud cluster where every node is behind a simple round-robin 
 load balancer.
 This cluster had two collections (A, B), and the slices of each were 
 partitioned such that one collection (A) used two thirds of the nodes, and 
 the other collection (B) used the remaining third of the nodes.
 I observed that every request for collection B that the load balancer sent to 
 a node with (only) slices for collection A got proxied to one *specific* node 
 hosting a slice for collection B. This node started running pretty hot, for 
 obvious reasons.
 This meant that one specific node was handling the fan-out for slightly more 
 than two-thirds of the requests against collection B.






[jira] [Updated] (SOLR-7487) check-example-lucene-match-version is looking in the wrong place - luceneMatchVersion incorrect in 5.1 sample configs

2015-04-30 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-7487:
-
Attachment: SOLR-7487.patch

Updated patch includes change to addVersion.py to walk the correct directory: 
solr/server/solr/configsets (was solr/example)

 check-example-lucene-match-version is looking in the wrong place - 
 luceneMatchVersion incorrect in 5.1 sample configs
 -

 Key: SOLR-7487
 URL: https://issues.apache.org/jira/browse/SOLR-7487
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.1
Reporter: Hoss Man
Assignee: Timothy Potter
Priority: Blocker
 Fix For: 5.2

 Attachments: SOLR-7487.patch, SOLR-7487.patch


 As noted by Scott Dawson on the mailing list, the luceneMatchVersion values in 
 the 5.1 sample configs all still list 5.0.0.
 The root cause is that the check-example-lucene-match-version 
 task in solr/build.xml is looking in the wrong place -- it's still scanning 
 for instances of luceneMatchVersion in the {{example}} directory instead of 
 {{server/solr/configsets}}.
 TODO:
 * fix the luceneMatchVersion value in all sample configsets on 5x
 * update the check to look in the correct directory
 * update the check to be smarter now that we have a more predictable 
 directory structure
 ** fail if no subdirs found
 ** fail if any subdir doesn't contain conf/solrconfig.xml
 ** fail if any conf/solrconfig.xml doesn't contain a luceneMatchVersion
 ** fail if any luceneMatchVersion doesn't have the expected value
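The checks in the TODO list above could look roughly like the following standalone sketch (hypothetical: the real check is an Ant task in solr/build.xml, and the path and expected version here are placeholders).

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical standalone version of the configset checks: fail if there are
// no subdirs, if any subdir lacks conf/solrconfig.xml, if any solrconfig.xml
// lacks a luceneMatchVersion, or if the version is not the expected one.
public class ConfigsetVersionCheck {

  static void check(Path configsets, String expectedVersion) throws IOException {
    boolean foundSubdir = false;
    try (DirectoryStream<Path> dirs = Files.newDirectoryStream(configsets)) {
      for (Path dir : dirs) {
        if (!Files.isDirectory(dir)) continue;
        foundSubdir = true;
        Path solrconfig = dir.resolve("conf").resolve("solrconfig.xml");
        if (!Files.exists(solrconfig)) {
          throw new IllegalStateException("missing " + solrconfig);
        }
        String xml = new String(Files.readAllBytes(solrconfig), StandardCharsets.UTF_8);
        if (!xml.contains("<luceneMatchVersion>")) {
          throw new IllegalStateException("no luceneMatchVersion in " + solrconfig);
        }
        if (!xml.contains("<luceneMatchVersion>" + expectedVersion + "</luceneMatchVersion>")) {
          throw new IllegalStateException("unexpected luceneMatchVersion in " + solrconfig);
        }
      }
    }
    if (!foundSubdir) {
      throw new IllegalStateException("no configset subdirectories under " + configsets);
    }
  }

  public static void main(String[] args) throws IOException {
    check(Paths.get(args.length > 0 ? args[0] : "server/solr/configsets"), "5.2.0");
  }
}
```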






[jira] [Updated] (SOLR-7487) check-example-lucene-match-version is looking in the wrong place - luceneMatchVersion incorrect in 5.1 sample configs

2015-04-30 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-7487:
---
Attachment: SOLR-7487.patch

build.xml changes look good to me.

I've updated the patch with my own crude attempt at the addVersion.py changes 
-- totally untested since evidently addVersion requires python 3.3 (flush on 
print?) and that doesn't seem to be available for the ubuntu version on my 
laptop.

 check-example-lucene-match-version is looking in the wrong place - 
 luceneMatchVersion incorrect in 5.1 sample configs
 -

 Key: SOLR-7487
 URL: https://issues.apache.org/jira/browse/SOLR-7487
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.1
Reporter: Hoss Man
Assignee: Timothy Potter
Priority: Blocker
 Fix For: 5.2

 Attachments: SOLR-7487.patch, SOLR-7487.patch, SOLR-7487.patch


 As noted by Scott Dawson on the mailing list, the luceneMatchVersion values in 
 the 5.1 sample configs all still list 5.0.0.
 The root cause is that the check-example-lucene-match-version 
 task in solr/build.xml is looking in the wrong place -- it's still scanning 
 for instances of luceneMatchVersion in the {{example}} directory instead of 
 {{server/solr/configsets}}.
 TODO:
 * fix the luceneMatchVersion value in all sample configsets on 5x
 * update the check to look in the correct directory
 * update the check to be smarter now that we have a more predictable 
 directory structure
 ** fail if no subdirs found
 ** fail if any subdir doesn't contain conf/solrconfig.xml
 ** fail if any conf/solrconfig.xml doesn't contain a luceneMatchVersion
 ** fail if any luceneMatchVersion doesn't have the expected value






[jira] [Commented] (SOLR-7363) Expand component throws an Exception when the results have been collapsed and grouped

2015-04-30 Thread Brandon Chapman (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14521995#comment-14521995
 ] 

Brandon Chapman commented on SOLR-7363:
---

[~joel.bernstein], did you get a chance to evaluate this? 

 Expand component throws an Exception when the results have been collapsed and 
 grouped
 -

 Key: SOLR-7363
 URL: https://issues.apache.org/jira/browse/SOLR-7363
 Project: Solr
  Issue Type: Bug
  Components: SearchComponents - other
Affects Versions: 4.10.3
Reporter: Brandon Chapman

 The expand component does not work when used on a result that has been both 
 collapsed and grouped. This is counter-intuitive as collapsing and grouping 
 work together with no issues.
 {code}
 {
   "responseHeader":{
     "status":500,
     "QTime":1198,
     "params":{
       "fl":"psid",
       "indent":"true",
       "q":"*:*",
       "expand":"true",
       "group.field":"merchant",
       "group":"true",
       "wt":"json",
       "fq":"{!collapse field=groupId}",
       "rows":"1"}},
   "grouped":{
     "merchant":{
       "matches":71652,
       "groups":[{
           "groupValue":"sears",
           "doclist":{"numFound":30672,"start":0,"docs":[
               {
                 "psid":3047500675628000}]
           }}]}},
   "error":{
     "trace":"java.lang.NullPointerException\n\tat org.apache.solr.handler.component.ExpandComponent.process(ExpandComponent.java:193)\n\tat org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)\n\tat org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)\n\tat org.apache.solr.core.SolrCore.execute(SolrCore.java:1976)\n\tat org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)\n\tat org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)\n\tat org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)\n\tat org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)\n\tat org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)\n\tat org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)\n\tat org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)\n\tat org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168)\n\tat org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)\n\tat org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:929)\n\tat org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)\n\tat org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)\n\tat org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1002)\n\tat org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:585)\n\tat org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:310)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n\tat java.lang.Thread.run(Thread.java:744)\n",
     "code":500}}
 {code}






[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_45) - Build # 12518 - Failure!

2015-04-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12518/
Java: 64bit/jdk1.8.0_45 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 45396 lines...]
-documentation-lint:
 [echo] checking for broken html...
[jtidy] Checking for broken html (such as invalid tags)...
   [delete] Deleting directory 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/jtidy_tmp
 [echo] Checking for broken links...
 [exec] 
 [exec] Crawl/parse...
 [exec] 
 [exec] Verify...
 [echo] Checking for missing docs...
 [exec] Traceback (most recent call last):
 [exec]   File 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/dev-tools/scripts/checkJavaDocs.py,
 line 384, in module
 [exec] if checkPackageSummaries(sys.argv[1], level):
 [exec]   File 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/dev-tools/scripts/checkJavaDocs.py,
 line 364, in checkPackageSummaries
 [exec] if checkClassSummaries(fullPath):
 [exec]   File 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/dev-tools/scripts/checkJavaDocs.py,
 line 225, in checkClassSummaries
 [exec] raise RuntimeError('failed to locate javadoc item in %s, line 
%d? last line: %s' % (fullPath, lineCount, line.rstrip()))
 [exec] RuntimeError: failed to locate javadoc item in 
build/docs/classification/org/apache/lucene/classification/KNearestNeighborClassifier.html,
 line 150? last line: /tr

BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:526: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:90: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build.xml:135: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build.xml:165: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/common-build.xml:2476: 
exec returned: 1

Total time: 48 minutes 59 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

Lucene/Solr Revolution 2015 - Austin Oct 13-16 - CFP ends next Week

2015-04-30 Thread Chris Hostetter


(cross posted, please confine any replies to general@lucene)

A quick reminder and/or heads-up for those who haven't heard yet: this 
year's Lucene/Solr Revolution is happening in Austin, Texas in October.  The 
CFP and Early Bird registration are currently open.  (CFP ends May 8, 
Early Bird ends May 31.)


http://lucenerevolution.org/

More details below...

- - -

Are you a developer, business practitioner, data scientist, or Solr 
enthusiast doing something interesting with Lucene/Solr? The last day to 
submit your proposal http://lucenerevolution.org/call-for-papers/ for 
Lucene/Solr Revolution 2015 is May 8. Don't miss your chance to represent 
the Solr community by speaking at this year's conference.


Last year, speakers from companies like Twitter, Airbnb, and Bloomberg 
shared how they are using Lucene and Solr to solve complex business 
problems and build mission-critical apps. If you are doing something 
innovative with Lucene/Solr and other open source tools, have best 
practices insight at any level, or just have something cool to share that 
is Solr-related, we want to hear from you!


Call for Papers is open through May 8. Submit your proposal now 
http://lucenerevolution.org/call-for-papers/.


Not submitting a talk this year but still want to attend? Save up to $500 
on conference registration packages when you register 
http://lucenerevolution.org/register/ by May 31.


Stay up to date on everything Revolution by following us on Twitter 
@lucenesolrrev https://twitter.com/lucenesolrrev or joining us on 
Facebook https://www.facebook.com/LuceneSolrRevolution.




-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: AngularJS Admin UI first pass complete

2015-04-30 Thread Chris Hostetter

: Yes, the new UI is available via http://localhost:8983/solr/index.html.
...
: As to how URLs are managed, the web.xml welcome-file currently causes
: http://localhost:8983/solr/ to point to admin.html. The new UI is
: accessible via http://localhost:8983/solr/index.html. Switch the
: welcome-file and the old UI will still be available via
: http://localhost:8983/solr/admin.html for as long as we decide to leave
: it there.

We might want to tweak those URLs slightly to be a little less confusing 
... ie old.html and new.html (at least until after 5.3) ... i mean, i 
just read the paragraph you wrote above, but w/o looking at it i've already 
forgotten if index.html is the new one or the old one.

So imagine if a 5.2 user has in their browser...

http://localhost:8983/solr/admin.html#/collection1

...and is trying to remember if this is the new one (that they should get 
used to / help test) or the fully qualified name of the old one that is 
going away.

Likewise imagine if a 5.42 user sees the same URL, and reads a note that 
the old UI is going to be removed in 6.0 and is trying to figure out if 
that affects him.


-Hoss
http://www.lucidworks.com/

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7214) JSON Facet API

2015-04-30 Thread Crawdaddy (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522177#comment-14522177
 ] 

Crawdaddy commented on SOLR-7214:
-

Yonik, I think I found a JSON faceting bug when sub-faceting a field on 
unique(another_field).  As part of the upgrade from HS to Solr 5.1, I wanted to 
A/B test my queries between the two. I set up two identical 5-shard Solr 
installs, 35M docs each - one running HS 0.09 and the other Solr 5.1.  
Issuing my facet query, I noticed that the unique counts were different between 
the two.  

This query, issued to my Solr 5.1 instance, demonstrates the inconsistency 
between native facets and JSON facets (limits set low enough to repro the 
issue):

rows=0&q=John Lennon&fq=keywords:[* TO *]&facet=true&facet.pivot=keywords,top_private_domain_s&facet.limit=10&
json.facet={
  keywords:{
    terms:{
      field:keywords,
      limit: 2,
      facet:{
        unique_domains: 'unique(top_private_domain_s)'
      }
    }
  }
}

A snippet of the results shows that the native facets return at least 10 unique 
values (there are more) for the keyword Paul McCartney:

   "facet_pivot":{
     "keywords,top_private_domain_s":[{
         "field":"keywords",
         "value":"Paul McCartney",
         "count":602,
         "pivot":[{
             "field":"top_private_domain_s",
             "value":"taringa.net",
             "count":35},
           {
             "field":"top_private_domain_s",
             "value":"dailymail.co.uk",
             "count":34},
           {
             "field":"top_private_domain_s",
             "value":"beatlesbible.com",
             "count":33},
           {
             "field":"top_private_domain_s",
             "value":"examiner.com",
             "count":22},
           {
             "field":"top_private_domain_s",
             "value":"blogspot.com",
             "count":14},
           {
             "field":"top_private_domain_s",
             "value":"musicradar.com",
             "count":13},
           {
             "field":"top_private_domain_s",
             "value":"liverpoolecho.co.uk",
             "count":11},
           {
             "field":"top_private_domain_s",
             "value":"rollingstone.com",
             "count":11},
           {
             "field":"top_private_domain_s",
             "value":"about.com",
             "count":9},
           {
             "field":"top_private_domain_s",
             "value":"answers.com",
             "count":8}]},

...

But the JSON facets say there's only 4 unique values:

 "facets":{
    "count":11859,
    "keywords":{
      "buckets":[{
          "val":"Paul McCartney",
          "count":602,
          "unique_domains":4}]}}}

The results are correct when issuing the same search in Heliosearch:

"facets":{
    "count":11859,
    "keywords":{
      "buckets":[{
          "val":"Paul McCartney",
          "count":602,
          "unique_domains":228}]}}}

In all cases the doc count (602) is the same so I know it's hitting the same 
documents.

Any advice you can offer as to whether you think this is a bug, or if the 
behavior is intentionally different between the two systems, would be much 
appreciated.  If it is a bug but you think there's a workaround, that'd be 
great to know too.
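For anyone following along, a toy illustration (not Solr's implementation) of why distributed unique() counts are easy to get wrong: the exact merged count is the size of the union of per-shard value sets, and any merge that keeps only a truncated per-shard summary, or only one shard's answer, will undercount exactly as reported above.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Toy model of merging per-shard unique() results. The domain values are
// made-up examples; the point is that the correct merged count is the size
// of the union of the per-shard sets.
public class DistributedUnique {

  static int exactMergedUnique(List<Set<String>> perShard) {
    Set<String> union = new HashSet<>();
    for (Set<String> s : perShard) union.addAll(s);
    return union.size();
  }

  public static void main(String[] args) {
    List<Set<String>> shards = Arrays.asList(
        new HashSet<>(Arrays.asList("taringa.net", "dailymail.co.uk", "beatlesbible.com")),
        new HashSet<>(Arrays.asList("dailymail.co.uk", "examiner.com")),
        new HashSet<>(Arrays.asList("blogspot.com", "examiner.com")));

    System.out.println(exactMergedUnique(shards));  // 5 distinct domains across shards
    System.out.println(shards.get(0).size());       // 3: one shard alone undercounts
  }
}
```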




 JSON Facet API
 --

 Key: SOLR-7214
 URL: https://issues.apache.org/jira/browse/SOLR-7214
 Project: Solr
  Issue Type: New Feature
Reporter: Yonik Seeley
Assignee: Yonik Seeley
 Fix For: 5.1

 Attachments: SOLR-7214.patch


 Overview is here: http://yonik.com/json-facet-api/
 The structured nature of nested sub-facets are more naturally expressed in a 
 nested structure like JSON rather than the flat structure that normal query 
 parameters provide.
 Goals:
 - First class JSON support
 - Easier programmatic construction of complex nested facet commands
 - Support a much more canonical response format that is easier for clients to 
 parse
 - First class analytics support
 - Support a cleaner way to do distributed faceting
 - Support better integration with other search features






[jira] [Commented] (SOLR-7490) Update by query feature

2015-04-30 Thread Praneeth (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522198#comment-14522198
 ] 

Praneeth commented on SOLR-7490:


Ya, but isn't an update also effectively a mark-for-delete followed by 
re-indexing the document? So I thought it wouldn't cost much on the Solr side 
to index a stream of documents. 

For a query that matches everything, Solr ends up re-importing all of its data 
from itself, which is basically like an optimize operation, where we end up 
rewriting the whole index. But it seems to make it easy to change the schema 
without having to do anything afterwards (basically change the schema and 
issue an update-by-query matching the whole index), which effectively supports 
online re-indexing of a Solr collection with a new schema.

With atomic updates, as you say, we would be exposing the freedom of updating 
a huge set of documents in one request. We would be pushing Solr too hard 
unless it is used wisely.
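The mechanics being proposed can be sketched with a toy in-memory model (not Solr code; the class and field names are made up): an update-by-query is "select the matching documents, then re-index each with the changed field", which is why a match-all query rewrites the entire index.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Toy in-memory model of the proposed feature. Each "document" is a map of
// field names to values; updateByQuery re-indexes every matching document
// with the changed field, counting how many documents were rewritten.
public class UpdateByQuery {

  final Map<String, Map<String, Object>> index = new HashMap<>();
  int reindexed = 0;  // how many documents were rewritten

  void add(String id, Map<String, Object> doc) { index.put(id, doc); }

  int updateByQuery(Predicate<Map<String, Object>> query, String field, Object value) {
    List<String> matches = new ArrayList<>();
    for (Map.Entry<String, Map<String, Object>> e : index.entrySet()) {
      if (query.test(e.getValue())) matches.add(e.getKey());
    }
    for (String id : matches) {
      Map<String, Object> doc = new HashMap<>(index.get(id));
      doc.put(field, value);  // the "atomic set"
      index.put(id, doc);     // conceptually a delete-and-reindex
      reindexed++;
    }
    return matches.size();
  }
}
```

A match-all predicate here touches every entry, mirroring the concern above that a query qualifying the whole collection rewrites the whole index.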

 Update by query feature
 ---

 Key: SOLR-7490
 URL: https://issues.apache.org/jira/browse/SOLR-7490
 Project: Solr
  Issue Type: New Feature
Reporter: Praneeth
Priority: Minor

 An update feature similar to the {{deleteByQuery}} would be very useful. Say, 
 the user wants to update a field of all documents in the index that match a 
 given criteria. I have encountered this use case in my project and it looks 
 like it could be a useful first class solr/lucene feature. I want to check if 
 this is something we would want to support in coming releases of Solr and 
 Lucene, are there scenarios that will prevent us from doing this, 
 feasibility, etc.






[jira] [Commented] (SOLR-7214) JSON Facet API

2015-04-30 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522291#comment-14522291
 ] 

Yonik Seeley commented on SOLR-7214:


Hmmm, something weird is going on.  Lots of code changed between lucene/solr 4 
and 5, so it wasn't necessarily a straightforward port.  You probably hit a bug 
I introduced.  What's the field type of keywords (single or multiValued?)  I 
assume top_private_domain_s is a standard single valued string.

Also, what happens if you add distrib=false (a non-distributed request on a 
single shard)?

 JSON Facet API
 --

 Key: SOLR-7214
 URL: https://issues.apache.org/jira/browse/SOLR-7214
 Project: Solr
  Issue Type: New Feature
Reporter: Yonik Seeley
Assignee: Yonik Seeley
 Fix For: 5.1

 Attachments: SOLR-7214.patch


 Overview is here: http://yonik.com/json-facet-api/
 The structured nature of nested sub-facets are more naturally expressed in a 
 nested structure like JSON rather than the flat structure that normal query 
 parameters provide.
 Goals:
 - First class JSON support
 - Easier programmatic construction of complex nested facet commands
 - Support a much more canonical response format that is easier for clients to 
 parse
 - First class analytics support
 - Support a cleaner way to do distributed faceting
 - Support better integration with other search features






[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 2244 - Failure!

2015-04-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2244/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 45348 lines...]
-documentation-lint:
 [echo] checking for broken html...
[jtidy] Checking for broken html (such as invalid tags)...
   [delete] Deleting directory 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/jtidy_tmp
 [echo] Checking for broken links...
 [exec] 
 [exec] Crawl/parse...
 [exec] 
 [exec] Verify...
 [echo] Checking for missing docs...
 [exec] Traceback (most recent call last):
 [exec]   File 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/dev-tools/scripts/checkJavaDocs.py,
 line 384, in <module>
 [exec] if checkPackageSummaries(sys.argv[1], level):
 [exec]   File 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/dev-tools/scripts/checkJavaDocs.py,
 line 364, in checkPackageSummaries
 [exec] if checkClassSummaries(fullPath):
 [exec]   File 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/dev-tools/scripts/checkJavaDocs.py,
 line 225, in checkClassSummaries
 [exec] raise RuntimeError('failed to locate javadoc item in %s, line 
%d? last line: %s' % (fullPath, lineCount, line.rstrip()))
 [exec] RuntimeError: failed to locate javadoc item in 
build/docs/classification/org/apache/lucene/classification/BooleanPerceptronClassifier.html,
 line 153? last line: </tr>

BUILD FAILED
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/build.xml:526: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/build.xml:90: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build.xml:135: The 
following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build.xml:165: The 
following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/common-build.xml:2476: 
exec returned: 1

Total time: 87 minutes 23 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

Re: JEP 248: Make G1 the Default Garbage Collector

2015-04-30 Thread Dawid Weiss
Hi folks,

I'm the guy who tried to get to the bottom of the G1GC byteslice
assert issue. It's not a false assert; it's a true miscompilation that
somehow misses a variable update. I know because I live-debugged it at
the assembly level... I also tried to help Vladimir Kozlov pinpoint the
issue, but without much success. The buggy scenario is really fragile,
not always reproducible, and the compiled method is huge. Life.

I haven't tried with any recent Java release. I will try to, time
permitting -- it's my idée fixe to find out what's happening.

Dawid

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7090) Cross collection join

2015-04-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14522189#comment-14522189
 ] 

David Smiley commented on SOLR-7090:


FYI I noticed we've got two cache regenerators with identical implementations: 
SolrPluginUtils.IdentityRegenerator (currently showing as unused) and 
NoOpRegenerator.  I propose we remove the former, both because it's currently 
not used and because referencing it from solrconfig.xml is awkward due to the 
inner-class reference.

 Cross collection join
 -

 Key: SOLR-7090
 URL: https://issues.apache.org/jira/browse/SOLR-7090
 Project: Solr
  Issue Type: New Feature
Reporter: Ishan Chattopadhyaya
 Fix For: Trunk, 5.2

 Attachments: SOLR-7090.patch


 Although SOLR-4905 supports joins across collections in Cloud mode, there are 
 limitations: (i) the secondary collection must be replicated to each node 
 where the primary collection has a replica, and (ii) the secondary collection 
 must be singly sharded.
 This issue explores ideas/possibilities of cross collection joins, even 
 across nodes. This will be helpful for users who wish to maintain boosts or 
 signals in a secondary, more frequently updated collection, and perform query 
 time join of these boosts/signals with results from the primary collection.






[jira] [Resolved] (SOLR-7486) HttpSolrClient.shutdown() should call close() and not vice versa

2015-04-30 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera resolved SOLR-7486.
--
Resolution: Fixed

Committed to 5x.

 HttpSolrClient.shutdown() should call close() and not vice versa
 

 Key: SOLR-7486
 URL: https://issues.apache.org/jira/browse/SOLR-7486
 Project: Solr
  Issue Type: Bug
Reporter: Shai Erera
Assignee: Shai Erera
 Fix For: 5.2

 Attachments: SOLR-7486.patch


 HttpSolrClient.shutdown() is deprecated; however, close() calls it instead of 
 the other way around. Anyone who extends HttpSolrClient needs to override 
 both methods to make sure things are closed properly.
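The delegation direction described above (all cleanup in close(), with the deprecated shutdown() delegating to it) can be sketched with a minimal hypothetical class; the names below are illustrative only, not the actual SolrJ code:

```java
// Hypothetical sketch of the deprecation-delegation pattern; not the
// actual HttpSolrClient source.
class LegacyClient implements AutoCloseable {
    private boolean closed = false;

    /** All cleanup lives in close(); subclasses override only this. */
    @Override
    public void close() {
        closed = true; // release connections, executors, etc.
    }

    /**
     * Deprecated entry point that delegates to close(), so that a
     * subclass overriding close() is shut down correctly either way.
     */
    @Deprecated
    public void shutdown() {
        close();
    }

    public boolean isClosed() {
        return closed;
    }
}
```

With this shape, a subclass only has to override close(); old callers of shutdown() keep working.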






Re: AngularJS Admin UI first pass complete

2015-04-30 Thread Upayavira


On Thu, Apr 30, 2015, at 07:25 PM, Chris Hostetter wrote:
 
 : Once that's done, switching should just be a question of changing the
 : welcome-file in web.xml.
 
 My suggestion, along the lines of what we did with the previous UI 
 change...
 
 1) ensure that the new UI is fully functional in 5x using an alt path 
 (believe this is already true?)
 
 2) add an Upgrading note to CHANGES.txt for 5.2 directing people to 
 the new experimental UI and the URL to access it, note that it 
 will likely become the default in 5.3 and users are encouraged to try it 
 out and file bugs if they notice any problems.
 
 3) update trunk so that it *is* the default UI, and make the old UI 
 available at some alternate path (eg /solr/old_ui).  Add an Upgrading 
 note pointing out that the URLs may be slightly different, and how to 
 access the old UI if they have problems
 
 * backport trunk changes (#3) to 5x for 5.3 (or later if problems/delays 
 pop up)
 
 
 : *Then* we can get on with things like a collections API page, an
 : explains viewer, etc, etc, etc.
 
 +1

Yes, the new UI is available via http://localhost:8983/solr/index.html.

Thanks, that helps. And I can add a 2a: once 5.2 is out, ask for feedback
on the user list.

As to how URLs are managed, the web.xml welcome-file currently causes
http://localhost:8983/solr/ to point to admin.html. The new UI is
accessible via http://localhost:8983/solr/index.html. Switch the
welcome-file and the old UI will still be available via
http://localhost:8983/solr/admin.html for as long as we decide to leave
it there.
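The welcome-file switch described above would look roughly like this in web.xml (a sketch only; the real deployment descriptor contains many other entries):

```xml
<!-- Sketch of the welcome-file switch; the actual web.xml has more entries. -->
<welcome-file-list>
  <!-- current default, the old UI: -->
  <!-- <welcome-file>admin.html</welcome-file> -->
  <!-- proposed default, the new AngularJS UI: -->
  <welcome-file>index.html</welcome-file>
</welcome-file-list>
```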

Otherwise, your plan looks good. I'll have until 5.2 is feature-frozen
to hunt for bugs/etc.

Upayavira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_60-ea-b12) - Build # 12345 - Failure!

2015-04-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12345/
Java: 64bit/jdk1.8.0_60-ea-b12 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.BasicDistributedZkTest.test

Error Message:
commitWithin did not work on node: http://127.0.0.1:36820/collection1 
expected:<68> but was:<67>

Stack Trace:
java.lang.AssertionError: commitWithin did not work on node: 
http://127.0.0.1:36820/collection1 expected:<68> but was:<67>
at 
__randomizedtesting.SeedInfo.seed([7A0CE66C51EB4895:F258D9B6FF17256D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.BasicDistributedZkTest.test(BasicDistributedZkTest.java:344)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 

[jira] [Updated] (SOLR-7484) Refactor SolrDispatchFilter.doFilter(...) method

2015-04-30 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-7484:
---
Attachment: SOLR-7484.patch

Fixes an NPE.

 Refactor SolrDispatchFilter.doFilter(...) method
 

 Key: SOLR-7484
 URL: https://issues.apache.org/jira/browse/SOLR-7484
 Project: Solr
  Issue Type: Improvement
Reporter: Anshum Gupta
Assignee: Anshum Gupta
 Attachments: SOLR-7484.patch, SOLR-7484.patch, SOLR-7484.patch, 
 SOLR-7484.patch, SOLR-7484.patch, SOLR-7484.patch, SOLR-7484.patch, 
 SOLR-7484.patch


 Currently almost everything that's done in SDF.doFilter() is sequential. We 
 should refactor it to clean up the code and make things easier to manage.






[jira] [Commented] (SOLR-7486) HttpSolrClient.shutdown() should call close() and not vice versa

2015-04-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14522256#comment-14522256
 ] 

ASF subversion and git services commented on SOLR-7486:
---

Commit 1677072 from [~shaie] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1677072 ]

SOLR-7486: HttpSolrClient.shutdown() should call close() and not vice versa

 HttpSolrClient.shutdown() should call close() and not vice versa
 

 Key: SOLR-7486
 URL: https://issues.apache.org/jira/browse/SOLR-7486
 Project: Solr
  Issue Type: Bug
Reporter: Shai Erera
Assignee: Shai Erera
 Fix For: 5.2

 Attachments: SOLR-7486.patch


 HttpSolrClient.shutdown() is deprecated; however, close() calls it instead of 
 the other way around. Anyone who extends HttpSolrClient needs to override 
 both methods to make sure things are closed properly.






[jira] [Commented] (SOLR-7490) Update by query feature

2015-04-30 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14522109#comment-14522109
 ] 

Erick Erickson commented on SOLR-7490:
--

bq: Would this be significantly more work for Solr than what it does for 
deleteByQuery?

Absolutely. deleteByQuery just marks each doc as deleted, which is a _much_ 
cheaper operation than re-indexing each and every one of the affected docs. 
E.g., updating with q=*:* would re-index every document in the corpus, possibly 
billions.

DocValues still wouldn't be cheap in this case, but not nearly as bad as 
arbitrary fields. And do note that DocValues are limited to non-text types 
(string, numeric, and the like). But that's most often the use case here, I think.

 Update by query feature
 ---

 Key: SOLR-7490
 URL: https://issues.apache.org/jira/browse/SOLR-7490
 Project: Solr
  Issue Type: New Feature
Reporter: Praneeth
Priority: Minor

 An update feature similar to the {{deleteByQuery}} would be very useful. Say, 
 the user wants to update a field of all documents in the index that match a 
 given criteria. I have encountered this use case in my project and it looks 
 like it could be a useful first class solr/lucene feature. I want to check if 
 this is something we would want to support in coming releases of Solr and 
 Lucene, whether there are scenarios that would prevent us from doing this, 
 its feasibility, etc.
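Erick's cost argument above can be made concrete with a toy model. The classes below are purely illustrative and have nothing to do with Solr or Lucene internals: delete-by-query only flips a per-document bit, while update-by-query must rebuild and re-index every matching document.

```java
import java.util.ArrayList;
import java.util.BitSet;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Toy cost model only -- hypothetical classes, not Solr/Lucene internals.
class TinyIndex {
    final List<Map<String, Object>> docs = new ArrayList<>();
    final BitSet deleted = new BitSet();
    int reindexOps = 0; // counts full document re-indexes

    void add(Map<String, ?> doc) {
        docs.add(new HashMap<>(doc));
    }

    // Cheap: mark matching docs deleted; stored documents are untouched.
    void deleteByQuery(Predicate<Map<String, Object>> q) {
        for (int i = 0; i < docs.size(); i++) {
            if (!deleted.get(i) && q.test(docs.get(i))) {
                deleted.set(i);
            }
        }
    }

    // Expensive: every live match is rewritten in full, which in a real
    // engine would mean re-analysis and re-indexing of the whole document.
    void updateByQuery(Predicate<Map<String, Object>> q, String field, Object value) {
        for (int i = 0; i < docs.size(); i++) {
            if (!deleted.get(i) && q.test(docs.get(i))) {
                Map<String, Object> copy = new HashMap<>(docs.get(i));
                copy.put(field, value);
                docs.set(i, copy);
                reindexOps++;
            }
        }
    }
}
```

With q matching the whole corpus, reindexOps grows with the number of live documents, which is exactly why an update-by-query over q=*:* is so much more expensive than the equivalent delete.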






[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 3039 - Still Failing

2015-04-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/3039/

1 tests failed.
FAILED:  org.apache.solr.cloud.MultiThreadedOCPTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=6606, 
name=parallelCoreAdminExecutor-2373-thread-15, state=RUNNABLE, 
group=TGRP-MultiThreadedOCPTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=6606, 
name=parallelCoreAdminExecutor-2373-thread-15, state=RUNNABLE, 
group=TGRP-MultiThreadedOCPTest]
at 
__randomizedtesting.SeedInfo.seed([421E87F42D34CC4C:CA4AB82E83C8A1B4]:0)
Caused by: java.lang.AssertionError: Too many closes on SolrCore
at __randomizedtesting.SeedInfo.seed([421E87F42D34CC4C]:0)
at org.apache.solr.core.SolrCore.close(SolrCore.java:1138)
at org.apache.solr.common.util.IOUtils.closeQuietly(IOUtils.java:31)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:535)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:494)
at 
org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:598)
at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:212)
at 
org.apache.solr.handler.admin.CoreAdminHandler$ParallelCoreAdminHandlerThread.run(CoreAdminHandler.java:1219)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:148)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 10215 lines...]
   [junit4] Suite: org.apache.solr.cloud.MultiThreadedOCPTest
   [junit4]   2 Creating dataDir: 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/build/solr-core/test/J3/temp/solr.cloud.MultiThreadedOCPTest
 421E87F42D34CC4C-001/init-core-data-001
   [junit4]   2 1384905 T6267 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl 
(false) and clientAuth (false)
   [junit4]   2 1384906 T6267 
oas.BaseDistributedSearchTestCase.initHostContext Setting hostContext system 
property: /
   [junit4]   2 1384916 T6267 oasc.ZkTestServer.run STARTING ZK TEST SERVER
   [junit4]   2 1384917 T6268 oasc.ZkTestServer$2$1.setClientPort client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2 1384917 T6268 oasc.ZkTestServer$ZKServerMain.runFromConfig 
Starting server
   [junit4]   2 1385017 T6267 oasc.ZkTestServer.run start zk server on 
port:56167
   [junit4]   2 1385018 T6267 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2 1385019 T6267 oascc.ConnectionManager.waitForConnected Waiting 
for client to connect to ZooKeeper
   [junit4]   2 1385026 T6275 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@4c5088a2 
name:ZooKeeperConnection Watcher:127.0.0.1:56167 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2 1385027 T6267 oascc.ConnectionManager.waitForConnected Client 
is connected to ZooKeeper
   [junit4]   2 1385028 T6267 oascc.SolrZkClient.createZkACLProvider Using 
default ZkACLProvider
   [junit4]   2 1385028 T6267 oascc.SolrZkClient.makePath makePath: /solr
   [junit4]   2 1385033 T6267 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2 1385035 T6267 oascc.ConnectionManager.waitForConnected Waiting 
for client to connect to ZooKeeper
   [junit4]   2 1385037 T6278 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@19b0474b 
name:ZooKeeperConnection Watcher:127.0.0.1:56167/solr got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2 1385038 T6267 oascc.ConnectionManager.waitForConnected Client 
is connected to ZooKeeper
   [junit4]   2 1385039 T6267 oascc.SolrZkClient.createZkACLProvider Using 
default ZkACLProvider
   [junit4]   2 1385039 T6267 oascc.SolrZkClient.makePath makePath: 
/collections/collection1
   [junit4]   2 1385044 T6267 oascc.SolrZkClient.makePath makePath: 
/collections/collection1/shards
   [junit4]   2 1385046 T6267 oascc.SolrZkClient.makePath makePath: 
/collections/control_collection
   [junit4]   2 1385049 T6267 oascc.SolrZkClient.makePath makePath: 
/collections/control_collection/shards
   [junit4]   2 1385052 T6267 oasc.AbstractZkTestCase.putConfig put 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml
 to /configs/conf1/solrconfig.xml
   [junit4]   2 1385054 T6267 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/solrconfig.xml
   [junit4]   2 1385061 T6267 oasc.AbstractZkTestCase.putConfig put 

[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_45) - Build # 4751 - Still Failing!

2015-04-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4751/
Java: 64bit/jdk1.8.0_45 -XX:+UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 45426 lines...]
-documentation-lint:
 [echo] checking for broken html...
[jtidy] Checking for broken html (such as invalid tags)...
   [delete] Deleting directory 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\build\jtidy_tmp
 [echo] Checking for broken links...
 [exec] 
 [exec] Crawl/parse...
 [exec] 
 [exec] Verify...
 [echo] Checking for missing docs...
 [exec] Traceback (most recent call last):
 [exec]   File 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\dev-tools/scripts/checkJavaDocs.py,
 line 384, in <module>
 [exec] if checkPackageSummaries(sys.argv[1], level):
 [exec]   File 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\dev-tools/scripts/checkJavaDocs.py,
 line 364, in checkPackageSummaries
 [exec] if checkClassSummaries(fullPath):
 [exec]   File 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\dev-tools/scripts/checkJavaDocs.py,
 line 225, in checkClassSummaries
 [exec] raise RuntimeError('failed to locate javadoc item in %s, line 
%d? last line: %s' % (fullPath, lineCount, line.rstrip()))
 [exec] RuntimeError: failed to locate javadoc item in 
build/docs/classification\org\apache\lucene\classification/BooleanPerceptronClassifier.html,
 line 153? last line: </tr>

BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:526: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:90: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\build.xml:135: 
The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\build.xml:165: 
The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\common-build.xml:2476:
 exec returned: 1

Total time: 73 minutes 41 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-7485) replace shards.info with ShardParams.SHARDS_INFO in TestTolerantSearch.java and CloudSolrClientTest.java

2015-04-30 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14522773#comment-14522773
 ] 

Shalin Shekhar Mangar commented on SOLR-7485:
-

Maybe having the string constants in some tests isn't that bad? If someone 
accidentally changes the constant's value, there will be at least one test that 
catches the back-compat break.

 replace shards.info with ShardParams.SHARDS_INFO in TestTolerantSearch.java 
 and CloudSolrClientTest.java
 --

 Key: SOLR-7485
 URL: https://issues.apache.org/jira/browse/SOLR-7485
 Project: Solr
  Issue Type: Improvement
Reporter: Christine Poerschke
Priority: Minor

 various other tests already use ShardParams.SHARDS_INFO e.g. 
 TestDistributedSearch.java






[jira] [Assigned] (SOLR-7493) Requests aren't distributed evenly if the collection isn't present locally

2015-04-30 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar reassigned SOLR-7493:
---

Assignee: Shalin Shekhar Mangar

 Requests aren't distributed evenly if the collection isn't present locally
 --

 Key: SOLR-7493
 URL: https://issues.apache.org/jira/browse/SOLR-7493
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 5.0
Reporter: Jeff Wartes
Assignee: Shalin Shekhar Mangar
 Attachments: SOLR-7493.patch


 I had a SolrCloud cluster where every node is behind a simple round-robin 
 load balancer.
 This cluster had two collections (A, B), and the slices of each were 
 partitioned such that one collection (A) used two thirds of the nodes, and 
 the other collection (B) used the remaining third of the nodes.
 I observed that every request for collection B that the load balancer sent to 
 a node with (only) slices for collection A got proxied to one *specific* node 
 hosting a slice for collection B. This node started running pretty hot, for 
 obvious reasons.
 This meant that one specific node was handling the fan-out for slightly more 
 than two-thirds of the requests against collection B.
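One hedged sketch of the kind of change this report suggests (hypothetical code, not the actual SolrDispatchFilter logic): when a node must proxy a request for a collection it does not host, pick the target randomly among the nodes hosting that collection instead of always the first match.

```java
import java.util.List;
import java.util.Random;

// Hypothetical sketch of proxy-target selection; not the actual Solr code.
// Always taking the first matching node pins all proxied traffic to one
// replica; a random pick spreads the fan-out work across the hosts.
class ProxyTargetPicker {
    private final Random random = new Random();

    /** The problematic behavior: one specific node receives everything. */
    String firstTarget(List<String> nodesHostingCollection) {
        return nodesHostingCollection.get(0);
    }

    /** The fix direction: distribute proxied requests evenly. */
    String randomTarget(List<String> nodesHostingCollection) {
        return nodesHostingCollection.get(random.nextInt(nodesHostingCollection.size()));
    }
}
```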






[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 3041 - Failure

2015-04-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/3041/

1 tests failed.
REGRESSION:  org.apache.solr.cloud.RecoveryZkTest.test

Error Message:
shard1 is not consistent.  Got 748 from 
http://127.0.0.1:36774/collection1lastClient and got 162 from 
http://127.0.0.1:36832/collection1

Stack Trace:
java.lang.AssertionError: shard1 is not consistent.  Got 748 from 
http://127.0.0.1:36774/collection1lastClient and got 162 from 
http://127.0.0.1:36832/collection1
at 
__randomizedtesting.SeedInfo.seed([4BA0979A77E77304:C3F4A840D91B1EFC]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.apache.solr.cloud.RecoveryZkTest.test(RecoveryZkTest.java:123)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-7214) JSON Facet API

2015-04-30 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14522355#comment-14522355
 ] 

Yonik Seeley commented on SOLR-7214:


Ok, thanks for narrowing the issue down!

 JSON Facet API
 --

 Key: SOLR-7214
 URL: https://issues.apache.org/jira/browse/SOLR-7214
 Project: Solr
  Issue Type: New Feature
Reporter: Yonik Seeley
Assignee: Yonik Seeley
 Fix For: 5.1

 Attachments: SOLR-7214.patch


 Overview is here: http://yonik.com/json-facet-api/
 The structured nature of nested sub-facets are more naturally expressed in a 
 nested structure like JSON rather than the flat structure that normal query 
 parameters provide.
 Goals:
 - First class JSON support
 - Easier programmatic construction of complex nested facet commands
 - Support a much more canonical response format that is easier for clients to 
 parse
 - First class analytics support
 - Support a cleaner way to do distributed faceting
 - Support better integration with other search features



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_45) - Build # 4750 - Still Failing!

2015-04-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4750/
Java: 64bit/jdk1.8.0_45 -XX:-UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 45282 lines...]
-documentation-lint:
 [echo] checking for broken html...
[jtidy] Checking for broken html (such as invalid tags)...
   [delete] Deleting directory 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\build\jtidy_tmp
 [echo] Checking for broken links...
 [exec] 
 [exec] Crawl/parse...
 [exec] 
 [exec] Verify...
 [echo] Checking for missing docs...
 [exec] Traceback (most recent call last):
 [exec]   File 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\dev-tools/scripts/checkJavaDocs.py,
 line 384, in <module>
 [exec] if checkPackageSummaries(sys.argv[1], level):
 [exec]   File 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\dev-tools/scripts/checkJavaDocs.py,
 line 364, in checkPackageSummaries
 [exec] if checkClassSummaries(fullPath):
 [exec]   File 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\dev-tools/scripts/checkJavaDocs.py,
 line 225, in checkClassSummaries
 [exec] raise RuntimeError('failed to locate javadoc item in %s, line 
%d? last line: %s' % (fullPath, lineCount, line.rstrip()))
 [exec] RuntimeError: failed to locate javadoc item in 
build/docs/classification\org\apache\lucene\classification/BooleanPerceptronClassifier.html,
 line 153? last line: </tr>

BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:526: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:90: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\build.xml:135: 
The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\build.xml:165: 
The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\common-build.xml:2476:
 exec returned: 1

Total time: 69 minutes 19 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.9.0-ea-b60) - Build # 12349 - Failure!

2015-04-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12349/
Java: 32bit/jdk1.9.0-ea-b60 -server -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.lucene.spatial.composite.CompositeStrategyTest.testOperations {#6 
seed=[B2B9C743868F80F5:2826A9642D5B4E9E]}

Error Message:
[Intersects] Should have matched I#3:Circle(Pt(x=-94.0,y=-12.0), d=45.0° 
5000.49km) Q:Circle(Pt(x=86.0,y=12.0), d=64.5° 7172.92km)

Stack Trace:
java.lang.AssertionError: [Intersects] Should have matched 
I#3:Circle(Pt(x=-94.0,y=-12.0), d=45.0° 5000.49km) Q:Circle(Pt(x=86.0,y=12.0), 
d=64.5° 7172.92km)
at 
__randomizedtesting.SeedInfo.seed([B2B9C743868F80F5:2826A9642D5B4E9E]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.lucene.spatial.prefix.RandomSpatialOpStrategyTestCase.fail(RandomSpatialOpStrategyTestCase.java:127)
at 
org.apache.lucene.spatial.prefix.RandomSpatialOpStrategyTestCase.testOperation(RandomSpatialOpStrategyTestCase.java:121)
at 
org.apache.lucene.spatial.prefix.RandomSpatialOpStrategyTestCase.testOperationRandomShapes(RandomSpatialOpStrategyTestCase.java:56)
at 
org.apache.lucene.spatial.composite.CompositeStrategyTest.testOperations(CompositeStrategyTest.java:99)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:502)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 8453 lines...]
   

[jira] [Commented] (SOLR-7494) Distributed unique() facet function is incorrect for higher cardinality fields.

2015-04-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522669#comment-14522669
 ] 

ASF subversion and git services commented on SOLR-7494:
---

Commit 1677092 from [~yo...@apache.org] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1677092 ]

SOLR-7494: fix unique() function for high cardinality fields

 Distributed unique() facet function is incorrect for higher cardinality 
 fields.
 ---

 Key: SOLR-7494
 URL: https://issues.apache.org/jira/browse/SOLR-7494
 Project: Solr
  Issue Type: Bug
  Components: Facet Module
Reporter: Yonik Seeley
 Attachments: SOLR-7494.patch


 As described in SOLR-7214, the unique function seems wildly off, but only if 
 you have enough values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7494) Distributed unique() facet function is incorrect for higher cardinality fields.

2015-04-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522666#comment-14522666
 ] 

ASF subversion and git services commented on SOLR-7494:
---

Commit 1677091 from [~yo...@apache.org] in branch 'dev/trunk'
[ https://svn.apache.org/r1677091 ]

SOLR-7494: fix unique() function for high cardinality fields

 Distributed unique() facet function is incorrect for higher cardinality 
 fields.
 ---

 Key: SOLR-7494
 URL: https://issues.apache.org/jira/browse/SOLR-7494
 Project: Solr
  Issue Type: Bug
  Components: Facet Module
Reporter: Yonik Seeley
 Attachments: SOLR-7494.patch


 As described in SOLR-7214, the unique function seems wildly off, but only if 
 you have enough values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.9.0-ea-b60) - Build # 12349 - Failure!

2015-04-30 Thread david.w.smi...@gmail.com
The root cause is that Spatial4j’s DistanceUtils.distHaversineRAD returned
NaN for a pair of antipodal points.  I filed a bug in Spatial4j:
https://github.com/spatial4j/spatial4j/issues/104
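A minimal sketch (not Spatial4j's actual code) of the haversine distance in
radians, illustrating how this kind of NaN can arise: for near-antipodal
points the intermediate term should be exactly 1, but floating-point rounding
can push it slightly above 1, so Math.asin receives an argument outside
[-1, 1] and returns NaN.

```java
// Hedged sketch of haversine distance in radians; the class and method
// names here are illustrative, not Spatial4j's API.
public class HaversineSketch {
    static double distHaversineRAD(double lat1, double lon1,
                                   double lat2, double lon2) {
        double sLat = Math.sin((lat2 - lat1) * 0.5);
        double sLon = Math.sin((lon2 - lon1) * 0.5);
        double a = sLat * sLat + Math.cos(lat1) * Math.cos(lat2) * sLon * sLon;
        // Unclamped: if rounding pushes a above 1.0, asin(sqrt(a)) is NaN.
        // A defensive fix is Math.asin(Math.min(1.0, Math.sqrt(a))).
        return 2.0 * Math.asin(Math.sqrt(a));
    }

    public static void main(String[] args) {
        // Quarter of the equator: expect approximately pi/2 radians.
        System.out.println(distHaversineRAD(0, 0, 0, Math.PI / 2));
    }
}
```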

On Thu, Apr 30, 2015 at 10:19 PM david.w.smi...@gmail.com 
david.w.smi...@gmail.com wrote:

 I’ll look into it.


 On Thu, Apr 30, 2015 at 10:17 PM Policeman Jenkins Server 
 jenk...@thetaphi.de wrote:

 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12349/
 Java: 32bit/jdk1.9.0-ea-b60 -server -XX:+UseG1GC

 1 tests failed.
 FAILED:
 org.apache.lucene.spatial.composite.CompositeStrategyTest.testOperations
 {#6 seed=[B2B9C743868F80F5:2826A9642D5B4E9E]}

 Error Message:
 [Intersects] Should have matched I#3:Circle(Pt(x=-94.0,y=-12.0), d=45.0°
 5000.49km) Q:Circle(Pt(x=86.0,y=12.0), d=64.5° 7172.92km)

 Stack Trace:
 java.lang.AssertionError: [Intersects] Should have matched
 I#3:Circle(Pt(x=-94.0,y=-12.0), d=45.0° 5000.49km)
 Q:Circle(Pt(x=86.0,y=12.0), d=64.5° 7172.92km)
 at
 __randomizedtesting.SeedInfo.seed([B2B9C743868F80F5:2826A9642D5B4E9E]:0)
 at org.junit.Assert.fail(Assert.java:93)
 at
 org.apache.lucene.spatial.prefix.RandomSpatialOpStrategyTestCase.fail(RandomSpatialOpStrategyTestCase.java:127)
 at
 org.apache.lucene.spatial.prefix.RandomSpatialOpStrategyTestCase.testOperation(RandomSpatialOpStrategyTestCase.java:121)
 at
 org.apache.lucene.spatial.prefix.RandomSpatialOpStrategyTestCase.testOperationRandomShapes(RandomSpatialOpStrategyTestCase.java:56)
 at
 org.apache.lucene.spatial.composite.CompositeStrategyTest.testOperations(CompositeStrategyTest.java:99)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:502)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
 at
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 at
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
 at
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
 at
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
 at
 com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
 at
 com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
 at
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at
 org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
 at
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at
 org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
 at
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at
 

[jira] [Commented] (SOLR-7024) bin/solr: Improve java detection and error messages

2015-04-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522727#comment-14522727
 ] 

ASF subversion and git services commented on SOLR-7024:
---

Commit 1677094 from [~elyograg] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1677094 ]

SOLR-7024: Correct Java version in error message to 7 or later on 5x.

 bin/solr: Improve java detection and error messages
 ---

 Key: SOLR-7024
 URL: https://issues.apache.org/jira/browse/SOLR-7024
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.0
 Environment: Linux bigindy5 3.10.0-123.9.2.el7.x86_64 #1 SMP Tue Oct 
 28 18:05:26 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
Reporter: Shawn Heisey
Assignee: Shawn Heisey
 Fix For: 5.0, Trunk

 Attachments: SOLR-7024.patch, SOLR-7024.patch, SOLR-7024.patch, 
 SOLR-7024.patch, SOLR-7024.patch, SOLR-7024.patch, SOLR-7024.patch, 
 SOLR-7024.patch, SOLR-7024.patch, SOLR-7024.patch


 Java detection needs a bit of an overhaul.  One example: When running the 
 shell script, if JAVA_HOME is set, but does not point to a valid java home, 
 Solr will not start, but the error message is unhelpful, especially to users 
 who actually DO have the right java version installed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7024) bin/solr: Improve java detection and error messages

2015-04-30 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522728#comment-14522728
 ] 

Shawn Heisey commented on SOLR-7024:


oops!  I committed a fix.

 bin/solr: Improve java detection and error messages
 ---

 Key: SOLR-7024
 URL: https://issues.apache.org/jira/browse/SOLR-7024
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.0
 Environment: Linux bigindy5 3.10.0-123.9.2.el7.x86_64 #1 SMP Tue Oct 
 28 18:05:26 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
Reporter: Shawn Heisey
Assignee: Shawn Heisey
 Fix For: 5.0, Trunk

 Attachments: SOLR-7024.patch, SOLR-7024.patch, SOLR-7024.patch, 
 SOLR-7024.patch, SOLR-7024.patch, SOLR-7024.patch, SOLR-7024.patch, 
 SOLR-7024.patch, SOLR-7024.patch, SOLR-7024.patch


 Java detection needs a bit of an overhaul.  One example: When running the 
 shell script, if JAVA_HOME is set, but does not point to a valid java home, 
 Solr will not start, but the error message is unhelpful, especially to users 
 who actually DO have the right java version installed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5750) Backup/Restore API for SolrCloud

2015-04-30 Thread Damien Kamerman (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522594#comment-14522594
 ] 

Damien Kamerman commented on SOLR-5750:
---

Only snapshot if the index version has changed.

 Backup/Restore API for SolrCloud
 

 Key: SOLR-5750
 URL: https://issues.apache.org/jira/browse/SOLR-5750
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Shalin Shekhar Mangar
Assignee: Varun Thacker
 Fix For: Trunk, 5.2


 We should have an easy way to do backups and restores in SolrCloud. The 
 ReplicationHandler supports a backup command which can create snapshots of 
 the index but that is too little.
 The command should be able to backup:
 # Snapshots of all indexes or indexes from the leader or the shards
 # Config set
 # Cluster state
 # Cluster properties
 # Aliases
 # Overseer work queue?
 A restore should be able to completely restore the cloud i.e. no manual steps 
 required other than bringing nodes back up or setting up a new cloud cluster.
 SOLR-5340 will be a part of this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7121) Solr nodes should go down based on configurable thresholds and not rely on resource exhaustion

2015-04-30 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-7121:
--
Attachment: SOLR-7121.patch

Here is a patch file for the pull request with a bit of cleanup and updated to 
trunk.

 Solr nodes should go down based on configurable thresholds and not rely on 
 resource exhaustion
 --

 Key: SOLR-7121
 URL: https://issues.apache.org/jira/browse/SOLR-7121
 Project: Solr
  Issue Type: New Feature
Reporter: Sachin Goyal
 Attachments: SOLR-7121.patch, SOLR-7121.patch, SOLR-7121.patch, 
 SOLR-7121.patch, SOLR-7121.patch, SOLR-7121.patch


 Currently, there is no way to control when a Solr node goes down.
 If the server is having high GC pauses or too many threads, or is just getting 
 too many queries due to some bad load-balancer, the cores in the machine keep 
 on serving until they exhaust the machine's resources and everything comes 
 to a stall.
 Such a slow-dying core can affect other cores as well by taking a long time to 
 serve their distributed queries.
 There should be a way to specify some threshold values beyond which the 
 targeted core can detect its ill-health and proactively go down to recover.
 When the load improves, the core should come up automatically.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7494) Distributed unique() facet function is incorrect for higher cardinality fields.

2015-04-30 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-7494:
---
Attachment: SOLR-7494.patch

Here's the fix.

 Distributed unique() facet function is incorrect for higher cardinality 
 fields.
 ---

 Key: SOLR-7494
 URL: https://issues.apache.org/jira/browse/SOLR-7494
 Project: Solr
  Issue Type: Bug
  Components: Facet Module
Reporter: Yonik Seeley
 Attachments: SOLR-7494.patch


 As described in SOLR-7214, the unique function seems wildly off, but only if 
 you have enough values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.9.0-ea-b60) - Build # 12349 - Failure!

2015-04-30 Thread david.w.smi...@gmail.com
I’ll look into it.

On Thu, Apr 30, 2015 at 10:17 PM Policeman Jenkins Server 
jenk...@thetaphi.de wrote:

 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12349/
 Java: 32bit/jdk1.9.0-ea-b60 -server -XX:+UseG1GC

 1 tests failed.
 FAILED:
 org.apache.lucene.spatial.composite.CompositeStrategyTest.testOperations
 {#6 seed=[B2B9C743868F80F5:2826A9642D5B4E9E]}

 Error Message:
 [Intersects] Should have matched I#3:Circle(Pt(x=-94.0,y=-12.0), d=45.0°
 5000.49km) Q:Circle(Pt(x=86.0,y=12.0), d=64.5° 7172.92km)


[jira] [Commented] (LUCENE-3449) Fix FixedBitSet.nextSetBit/prevSetBit to support the common usage pattern in every programming book

2015-04-30 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522563#comment-14522563
 ] 

Yonik Seeley commented on LUCENE-3449:
--

Well, I was just bit by this.  My fault of course, but it wouldn't have 
happened if I hadn't needed 2 checks in the loop instead of one:
1. check if nextSetBit() returned NO_MORE_DOCS
2. check that the value returned by step #1 isn't the last bit

If we just had a single sentinel bit, both checks could be combined into one.
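The two loop shapes can be sketched as follows, using java.util.BitSet for a
self-contained example (its nextSetBit returns -1 when exhausted, whereas
Lucene's FixedBitSet of this era returns DocIdSetIterator.NO_MORE_DOCS); the
method names and the always-set-last-bit convention are illustrative, not
Lucene's actual API.

```java
import java.util.BitSet;

// Sketch: iterating a bit set whose last bit is an always-set sentinel.
public class SentinelLoopSketch {

    // Two checks per step: end-of-iteration, plus "is this the sentinel?".
    static int countTwoChecks(BitSet bits, int lastBit) {
        int count = 0;
        for (int i = bits.nextSetBit(0); i >= 0; i = bits.nextSetBit(i + 1)) {
            if (i == lastBit) {
                break; // second check: the always-set last bit is not data
            }
            count++;
        }
        return count;
    }

    // With an always-set sentinel as the last bit, the two checks collapse
    // into one: the scan can never run off the end before hitting it.
    static int countWithSentinel(BitSet bits, int sentinel) {
        int count = 0;
        for (int i = bits.nextSetBit(0); i < sentinel; i = bits.nextSetBit(i + 1)) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        BitSet bits = new BitSet(8);
        bits.set(1);
        bits.set(3);
        bits.set(7); // sentinel bit, always set
        System.out.println(countTwoChecks(bits, 7));    // prints 2
        System.out.println(countWithSentinel(bits, 7)); // prints 2
    }
}
```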

 Fix FixedBitSet.nextSetBit/prevSetBit to support the common usage pattern in 
 every programming book
 ---

 Key: LUCENE-3449
 URL: https://issues.apache.org/jira/browse/LUCENE-3449
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/other
Affects Versions: 3.4, 4.0-ALPHA
Reporter: Uwe Schindler
Priority: Minor
 Attachments: LUCENE-3449.patch


 The usage pattern for nextSetBit/prevSetBit is the following:
 {code:java}
 for(int i=bs.nextSetBit(0); i>=0; i=bs.nextSetBit(i+1)) {
  // operate on index i here
 }
 {code}
 The problem is that the i+1 at the end can be bs.length(), but the code in 
 nextSetBit does not allow this (same applies to prevSetBit(0)). The above 
 usage pattern is in every programming book, so it should really be supported. 
 The check has to be done in all cases (with the current impl in the calling 
 code).
 If the check is done inside xxxSetBit() it can also be optimized to be 
 executed seldom rather than on every iteration, unlike in the ugly-looking 
 replacement that's currently needed:
 {code:java}
 for(int i=bs.nextSetBit(0); i>=0; i=(i<bs.length()-1) ? bs.nextSetBit(i+1) : 
 -1) {
  // operate on index i here
 }
 {code}
 We should change this and allow out-of-bounds indexes for those two methods 
 (they already do some checks in that direction). Enforcing this with an 
 assert is unusable on the client side.
 The test code for FixedBitSet also uses this, horrible. Please support the 
 common usage pattern for BitSets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 2245 - Still Failing!

2015-04-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2245/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 45344 lines...]
-documentation-lint:
 [echo] checking for broken html...
[jtidy] Checking for broken html (such as invalid tags)...
   [delete] Deleting directory 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/jtidy_tmp
 [echo] Checking for broken links...
 [exec] 
 [exec] Crawl/parse...
 [exec] 
 [exec] Verify...
 [echo] Checking for missing docs...
 [exec] Traceback (most recent call last):
 [exec]   File 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/dev-tools/scripts/checkJavaDocs.py,
 line 384, in <module>
 [exec] if checkPackageSummaries(sys.argv[1], level):
 [exec]   File 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/dev-tools/scripts/checkJavaDocs.py,
 line 364, in checkPackageSummaries
 [exec] if checkClassSummaries(fullPath):
 [exec]   File 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/dev-tools/scripts/checkJavaDocs.py,
 line 225, in checkClassSummaries
 [exec] raise RuntimeError('failed to locate javadoc item in %s, line 
%d? last line: %s' % (fullPath, lineCount, line.rstrip()))
 [exec] RuntimeError: failed to locate javadoc item in 
build/docs/classification/org/apache/lucene/classification/BooleanPerceptronClassifier.html,
 line 153? last line: </tr>

BUILD FAILED
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/build.xml:526: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/build.xml:90: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build.xml:135: The 
following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build.xml:165: The 
following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/common-build.xml:2476: 
exec returned: 1

Total time: 89 minutes 4 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Resolved] (SOLR-7494) Distributed unique() facet function is incorrect for higher cardinality fields.

2015-04-30 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-7494.

   Resolution: Fixed
Fix Version/s: 5.2

 Distributed unique() facet function is incorrect for higher cardinality 
 fields.
 ---

 Key: SOLR-7494
 URL: https://issues.apache.org/jira/browse/SOLR-7494
 Project: Solr
  Issue Type: Bug
  Components: Facet Module
Reporter: Yonik Seeley
 Fix For: 5.2

 Attachments: SOLR-7494.patch


 As described in SOLR-7214, the unique function seems wildly off, but only if 
 you have enough values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_60-ea-b12) - Build # 12523 - Failure!

2015-04-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12523/
Java: 32bit/jdk1.8.0_60-ea-b12 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.handler.TestSolrConfigHandlerConcurrent.test

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([B9ABA73E577178BF:31FF98E4F98D1547]:0)
at 
org.apache.solr.handler.TestSolrConfigHandlerConcurrent.test(TestSolrConfigHandlerConcurrent.java:118)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 10632 lines...]
   [junit4] Suite: org.apache.solr.handler.TestSolrConfigHandlerConcurrent
   [junit4]   2> Creating dataDir: 

[jira] [Created] (SOLR-7494) Distributed unique() facet function is incorrect for higher cardinality fields.

2015-04-30 Thread Yonik Seeley (JIRA)
Yonik Seeley created SOLR-7494:
--

 Summary: Distributed unique() facet function is incorrect for 
higher cardinality fields.
 Key: SOLR-7494
 URL: https://issues.apache.org/jira/browse/SOLR-7494
 Project: Solr
  Issue Type: Bug
  Components: Facet Module
Reporter: Yonik Seeley


As described in SOLR-7214, the unique function seems wildly off, but only if 
you have enough values.






[jira] [Commented] (SOLR-7214) JSON Facet API

2015-04-30 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14522476#comment-14522476
 ] 

Yonik Seeley commented on SOLR-7214:


OK, I don't yet know what's behind it, but I have reproduced it.  I'll open 
another issue.

 JSON Facet API
 --

 Key: SOLR-7214
 URL: https://issues.apache.org/jira/browse/SOLR-7214
 Project: Solr
  Issue Type: New Feature
Reporter: Yonik Seeley
Assignee: Yonik Seeley
 Fix For: 5.1

 Attachments: SOLR-7214.patch


 Overview is here: http://yonik.com/json-facet-api/
 The structured nature of nested sub-facets are more naturally expressed in a 
 nested structure like JSON rather than the flat structure that normal query 
 parameters provide.
 Goals:
 - First class JSON support
 - Easier programmatic construction of complex nested facet commands
 - Support a much more canonical response format that is easier for clients to 
 parse
 - First class analytics support
 - Support a cleaner way to do distributed faceting
 - Support better integration with other search features






[jira] [Commented] (SOLR-7214) JSON Facet API

2015-04-30 Thread Crawdaddy (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14522327#comment-14522327
 ] 

Crawdaddy commented on SOLR-7214:
-

keywords is multi-valued and top_private_domain_s is a standard single-valued 
string.

Looks like it is a distrib problem - the numbers do look more realistic on a 
per-shard basis.  Both HS and Solr 5.1 report the same per-shard numbers for 
Paul McCartney across my 5 shards:
70 + 76 + 90 + 78 + 48 = 362

I would assume then that 362 goes to the 228 number I saw above, once the list 
is uniq'd.
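The overcount described here (362 summed across shards versus a smaller true distinct count) is easy to reproduce outside Solr. Below is a minimal, stdlib-only Java sketch (class and method names are invented for illustration; this is not Solr's implementation) contrasting the naive sum of per-shard unique counts with the true union:

```java
import java.util.*;

// Illustrative sketch: why summing each shard's local unique() count
// overstates the global distinct count when values overlap shards.
public class DistributedUnique {

    // What a naive distributed merge would report: the sum of each
    // shard's local distinct count.
    static int sumOfShardUniques(List<Set<String>> shards) {
        int sum = 0;
        for (Set<String> shard : shards) sum += shard.size();
        return sum;
    }

    // The true distinct count: union the per-shard value sets first,
    // then take the size of the union.
    static int globalUnique(List<Set<String>> shards) {
        Set<String> union = new HashSet<>();
        for (Set<String> shard : shards) union.addAll(shard);
        return union.size();
    }

    public static void main(String[] args) {
        List<Set<String>> shards = Arrays.asList(
                new HashSet<>(Arrays.asList("a", "b", "c")),
                new HashSet<>(Arrays.asList("b", "c", "d")),
                new HashSet<>(Arrays.asList("c", "d", "e")));
        // 3 + 3 + 3 = 9 summed, but only 5 distinct values exist overall.
        System.out.println(sumOfShardUniques(shards)); // 9
        System.out.println(globalUnique(shards));      // 5
    }
}
```

The gap between the two numbers grows with the overlap between shards, which matches the per-shard totals looking "more realistic" than the merged figure.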



 JSON Facet API
 --

 Key: SOLR-7214
 URL: https://issues.apache.org/jira/browse/SOLR-7214
 Project: Solr
  Issue Type: New Feature
Reporter: Yonik Seeley
Assignee: Yonik Seeley
 Fix For: 5.1

 Attachments: SOLR-7214.patch


 Overview is here: http://yonik.com/json-facet-api/
 The structured nature of nested sub-facets are more naturally expressed in a 
 nested structure like JSON rather than the flat structure that normal query 
 parameters provide.
 Goals:
 - First class JSON support
 - Easier programmatic construction of complex nested facet commands
 - Support a much more canonical response format that is easier for clients to 
 parse
 - First class analytics support
 - Support a cleaner way to do distributed faceting
 - Support better integration with other search features






[jira] [Resolved] (LUCENE-2347) Dump WordNet to SOLR Synonym format

2015-04-30 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved LUCENE-2347.
--
   Resolution: Not A Problem
Fix Version/s: 3.4
   4.0-ALPHA

the SynonymFilterFactory has supported a "format" option which can be set to 
"wordnet" since Lucene/Solr 3.4 (via LUCENE-3233), so a tool like this isn't 
generally needed.

 Dump WordNet to SOLR Synonym format
 ---

 Key: LUCENE-2347
 URL: https://issues.apache.org/jira/browse/LUCENE-2347
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/analysis
Affects Versions: 3.0.1
Reporter: Bill Bell
 Fix For: 4.0-ALPHA, 3.4

 Attachments: Syns2Solr.java


 This enhancement allows you to dump v2 of WordNet to SOLR synonym format! Get 
 all your syns loaded easily.
 1. You can load all synonyms from http://wordnetcode.princeton.edu/2.0/ 
 WordNet V2 to SOLR by first using the Syns2Index program
 http://lucene.apache.org/java/2_2_0/api/org/apache/lucene/wordnet/Syns2Index.html
 Get WNprolog from http://wordnetcode.princeton.edu/2.0/
 2. We modified this program to work with SOLR (See attached) on 
 amidev.kaango.com in /vol/src/lucene/contrib/wordnet
 vi 
 /vol/src/lucene/contrib/wordnet/src/java/org/apache/lucene/wordnet/Syns2Solr.java
 3. Run ant
 4. java -classpath 
 /vol/src/lucene/build/contrib/wordnet/lucene-wordnet-3.1-dev.jar 
 org.apache.lucene.wordnet.Syns2Solr prolog/wn_s.pl solr > index_synonyms.txt
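 As a rough illustration of the conversion those steps describe, here is a
 hypothetical stdlib-only sketch (not the attached Syns2Solr.java; the prolog
 parsing is simplified and ignores edge cases like quoted commas) that groups
 WordNet "s(...)" facts by synset id and emits Solr's comma-separated synonym
 lines:

 {code:java}
import java.util.*;

public class WordNetToSynonyms {

    // Parses one fact like: s(100002056,1,'entity',n,1,11).
    // Returns {synsetId, word}, or null if the line is not an s(...) fact.
    static String[] parse(String line) {
        if (!line.startsWith("s(")) return null;
        String body = line.substring(2, line.lastIndexOf(')'));
        String[] parts = body.split(",");
        String word = parts[2].replace("'", "").replace('_', ' ');
        return new String[] { parts[0], word };
    }

    // Groups words by synset id; each multi-word group becomes one
    // comma-separated line in Solr synonym format.
    static List<String> toSynonymLines(List<String> facts) {
        Map<String, List<String>> bySynset = new LinkedHashMap<>();
        for (String fact : facts) {
            String[] p = parse(fact);
            if (p != null)
                bySynset.computeIfAbsent(p[0], k -> new ArrayList<>()).add(p[1]);
        }
        List<String> out = new ArrayList<>();
        for (List<String> words : bySynset.values())
            if (words.size() > 1) out.add(String.join(",", words));
        return out;
    }

    public static void main(String[] args) {
        List<String> facts = Arrays.asList(
                "s(100002056,1,'thing',n,12,0).",
                "s(100002056,2,'entity',n,1,11).");
        System.out.println(toSynonymLines(facts)); // [thing,entity]
    }
}
 {code}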






[jira] [Commented] (SOLR-7363) Expand component throws an Exception when the results have been collapsed and grouped

2015-04-30 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14522390#comment-14522390
 ] 

Joel Bernstein commented on SOLR-7363:
--

I haven't had a chance to look into this deeper. The thing that needs to be 
done is to understand why Grouping either doesn't use the DocList or is putting 
it in a different place. Other components, like Highlighting also use the 
DocList, so it might be interesting to know if/how they interact with Grouping.

I personally am pretty swamped for the near future, so I won't have too much 
time to dig deeper.

 Expand component throws an Exception when the results have been collapsed and 
 grouped
 -

 Key: SOLR-7363
 URL: https://issues.apache.org/jira/browse/SOLR-7363
 Project: Solr
  Issue Type: Bug
  Components: SearchComponents - other
Affects Versions: 4.10.3
Reporter: Brandon Chapman

 The expand component does not work when used on a result that has been both 
 collapsed and grouped. This is counter-intuitive as collapsing and grouping 
 work together with no issues.
 {code}
 {
   responseHeader:{
 status:500,
 QTime:1198,
 params:{
   fl:psid,
   indent:true,
   q:*:*,
   expand:true,
   group.field:merchant,
   group:true,
   wt:json,
   fq:{!collapse field=groupId},
   rows:1}},
   grouped:{
 merchant:{
   matches:71652,
   groups:[{
   groupValue:sears,
   doclist:{numFound:30672,start:0,docs:[
   {
 psid:3047500675628000}]
   }}]}},
   error:{
 trace:java.lang.NullPointerException\n\tat 
 org.apache.solr.handler.component.ExpandComponent.process(ExpandComponent.java:193)\n\tat
  
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)\n\tat
  
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)\n\tat
  org.apache.solr.core.SolrCore.execute(SolrCore.java:1976)\n\tat 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)\n\tat
  
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)\n\tat
  
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)\n\tat
  
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)\n\tat
  
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)\n\tat
  
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)\n\tat
  
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)\n\tat
  
 org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168)\n\tat
  
 org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)\n\tat
  
 org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:929)\n\tat
  
 org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)\n\tat
  
 org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)\n\tat
  
 org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1002)\n\tat
  
 org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:585)\n\tat
  
 org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:310)\n\tat
  
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n\tat
  
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n\tat
  java.lang.Thread.run(Thread.java:744)\n,
 code:500}}
 {code}






[jira] [Comment Edited] (SOLR-7363) Expand component throws an Exception when the results have been collapsed and grouped

2015-04-30 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14522390#comment-14522390
 ] 

Joel Bernstein edited comment on SOLR-7363 at 4/30/15 10:26 PM:


I haven't had a chance to look into this deeper. The thing that needs to be 
done is to understand why Grouping either doesn't use the DocList or is putting 
it in a different place. Other components, like Highlighting also use the 
DocList, so it might be interesting to know if/how they interact with Grouping.

I personally am pretty swamped for the near future, so I won't have too much 
time to dig deeper.


was (Author: joel.bernstein):
I haven't had a chance to look into this deeper. The thing that needs to be 
done is to understand why Grouping either doesn't use the DocList or is putting 
it in a different place. Other components, like Highlighting also use the 
DocList, so it might interesting to know if/how they interact with Grouping.

I personally am pretty swamped for the near future, so I won't have too much 
time to dig deeper.

 Expand component throws an Exception when the results have been collapsed and 
 grouped
 -

 Key: SOLR-7363
 URL: https://issues.apache.org/jira/browse/SOLR-7363
 Project: Solr
  Issue Type: Bug
  Components: SearchComponents - other
Affects Versions: 4.10.3
Reporter: Brandon Chapman

 The expand component does not work when used on a result that has been both 
 collapsed and grouped. This is counter-intuitive as collapsing and grouping 
 work together with no issues.
 {code}
 {
   responseHeader:{
 status:500,
 QTime:1198,
 params:{
   fl:psid,
   indent:true,
   q:*:*,
   expand:true,
   group.field:merchant,
   group:true,
   wt:json,
   fq:{!collapse field=groupId},
   rows:1}},
   grouped:{
 merchant:{
   matches:71652,
   groups:[{
   groupValue:sears,
   doclist:{numFound:30672,start:0,docs:[
   {
 psid:3047500675628000}]
   }}]}},
   error:{
 trace:java.lang.NullPointerException\n\tat 
 org.apache.solr.handler.component.ExpandComponent.process(ExpandComponent.java:193)\n\tat
  
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)\n\tat
  
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)\n\tat
  org.apache.solr.core.SolrCore.execute(SolrCore.java:1976)\n\tat 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)\n\tat
  
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)\n\tat
  
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)\n\tat
  
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)\n\tat
  
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)\n\tat
  
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)\n\tat
  
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)\n\tat
  
 org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168)\n\tat
  
 org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)\n\tat
  
 org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:929)\n\tat
  
 org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)\n\tat
  
 org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)\n\tat
  
 org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1002)\n\tat
  
 org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:585)\n\tat
  
 org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:310)\n\tat
  
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n\tat
  
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n\tat
  java.lang.Thread.run(Thread.java:744)\n,
 code:500}}
 {code}






[jira] [Commented] (SOLR-7377) SOLR Streaming Expressions

2015-04-30 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14522440#comment-14522440
 ] 

Dennis Gove commented on SOLR-7377:
---

I'm not totally against doing that, but I feel like the refactoring is a 
required piece of this patch. I could, however, create a new ticket with just 
the refactoring and then make this one depend on that one.

I am worried that such a ticket might look like unnecessary refactoring. 
Without the expression stuff added here I think the streaming stuff has a 
reasonable home in org.apache.solr.client.solrj.io.

That said, I certainly understand the benefit of smaller patches.

 SOLR Streaming Expressions
 --

 Key: SOLR-7377
 URL: https://issues.apache.org/jira/browse/SOLR-7377
 Project: Solr
  Issue Type: Improvement
  Components: clients - java
Reporter: Dennis Gove
Priority: Minor
 Fix For: Trunk

 Attachments: SOLR-7377.patch, SOLR-7377.patch, SOLR-7377.patch, 
 SOLR-7377.patch


 It would be beneficial to add an expression-based interface to Streaming API 
 described in SOLR-7082. Right now that API requires streaming requests to 
 come in from clients as serialized bytecode of the streaming classes. The 
 suggestion here is to support string expressions which describe the streaming 
 operations the client wishes to perform. 
 {code:java}
 search(collection1, q="*:*", fl="id,fieldA,fieldB", sort="fieldA asc")
 {code}
 With this syntax in mind, one can now express arbitrarily complex stream 
 queries with a single string.
 {code:java}
 // merge two distinct searches together on common fields
 merge(
   search(collection1, q="id:(0 3 4)", fl="id,a_s,a_i,a_f", sort="a_f asc, a_s asc"),
   search(collection2, q="id:(1 2)", fl="id,a_s,a_i,a_f", sort="a_f asc, a_s asc"),
   on="a_f asc, a_s asc")

 // find top 20 unique records of a search
 top(
   n=20,
   unique(
     search(collection1, q="*:*", fl="id,a_s,a_i,a_f", sort="a_f desc"),
     over="a_f desc"),
   sort="a_f desc")
 {code}
 The syntax would support
 1. Configurable expression names (e.g., via solrconfig.xml one can map unique 
 to a class implementing a Unique stream class). This allows users to build 
 their own streams and use them as they wish.
 2. Named parameters (of both simple and expression types)
 3. Unnamed, type-matched parameters (to support requiring N streams as 
 arguments to another stream)
 4. Positional parameters
 The main goal here is to make streaming as accessible as possible and define 
 a syntax for running complex queries across large distributed systems.
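 The nesting that makes these expressions composable can be illustrated with a
 toy splitter. This is a hedged, stdlib-only sketch (class and method names are
 invented; it is not Solr's actual expression parser) that recovers the outer
 function name and the top-level arguments of an expression, treating nested
 sub-expressions as single arguments:

 {code:java}
import java.util.*;

public class ToyStreamExpression {

    // Returns the outer function name of an expression like "top(n=20, ...)".
    static String functionName(String expr) {
        return expr.substring(0, expr.indexOf('(')).trim();
    }

    // Splits the top-level argument list on commas, ignoring commas that
    // sit inside nested parentheses so sub-expressions stay intact.
    static List<String> topLevelArgs(String expr) {
        String body = expr.substring(expr.indexOf('(') + 1, expr.lastIndexOf(')'));
        List<String> args = new ArrayList<>();
        int depth = 0, start = 0;
        for (int i = 0; i < body.length(); i++) {
            char c = body.charAt(i);
            if (c == '(') depth++;
            else if (c == ')') depth--;
            else if (c == ',' && depth == 0) {
                args.add(body.substring(start, i).trim());
                start = i + 1;
            }
        }
        args.add(body.substring(start).trim());
        return args;
    }

    public static void main(String[] args) {
        String expr = "top(n=20, unique(search(collection1, q=*:*), over=a_f desc), sort=a_f desc)";
        System.out.println(functionName(expr));         // top
        System.out.println(topLevelArgs(expr).size());  // 3
    }
}
 {code}

 A real parser would additionally distinguish named from positional parameters
 and recurse into each argument, but the depth-tracking split above is the core
 of making "arbitrarily complex stream queries" expressible as one string.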





