[jira] [Commented] (SOLR-2212) NoMergePolicy class does not load

2016-10-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15607541#comment-15607541
 ] 

ASF subversion and git services commented on SOLR-2212:
---

Commit 768c7e2648557d10f231f49a7c76eb040cbbcb0e in lucene-solr's branch 
refs/heads/master from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=768c7e2 ]

SOLR-2212: Add a factory class corresponding to Lucene's NoMergePolicy
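[Editor's note: the factory added by this commit would be wired up in solrconfig.xml; a minimal sketch, assuming the new NoMergePolicyFactory lives in org.apache.solr.index alongside the other merge-policy factories:]

```xml
<!-- Disable segment merging entirely. Sketch only: the package and
     element placement assume the conventions of Solr's other
     mergePolicyFactory implementations. -->
<indexConfig>
  <mergePolicyFactory class="org.apache.solr.index.NoMergePolicyFactory"/>
</indexConfig>
```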


> NoMergePolicy class does not load
> -
>
> Key: SOLR-2212
> URL: https://issues.apache.org/jira/browse/SOLR-2212
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 3.1, 4.0-ALPHA
>Reporter: Lance Norskog
> Attachments: SOLR-2212.patch
>
>
> Solr cannot use the Lucene NoMergePolicy class. It will not instantiate 
> correctly when loading the core.
> Other MergePolicy classes work, including the BalancedSegmentMergePolicy.
> This is in trunk and 3.x.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-1085) SolrJ client java does not support moreLikeThis querys and results

2016-10-25 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15607509#comment-15607509
 ] 

Shalin Shekhar Mangar commented on SOLR-1085:
-

Hi Dat, your patch does not have any changes to SolrQuery like the ones made by 
previous patches. Did you forget to include all files in the patch?
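[Editor's note: until SolrQuery grows dedicated MLT setters, MoreLikeThis can be requested with raw request parameters; a sketch using the standard mlt.* parameter names, with hypothetical field names and values:]

```
q=id:123&mlt=true&mlt.fl=title,description&mlt.count=5&mlt.mintf=1
```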

> SolrJ client java does not support moreLikeThis querys and results
> --
>
> Key: SOLR-1085
> URL: https://issues.apache.org/jira/browse/SOLR-1085
> Project: Solr
>  Issue Type: Improvement
>  Components: clients - java
> Environment: SolrJ java client
>Reporter: Maurice Jumelet
> Fix For: 4.9, 6.0
>
> Attachments: SOLR-1085.4.2.1.patch, SOLR-1085.patch, SOLR-1085.patch, 
> solrj-java-morelikethis.patch
>
>
> Although Solr supports MoreLikeThis queries (see 
> http://wiki.apache.org/solr/MoreLikeThis), this type of query is currently 
> not supported by the Solr Java client.






[jira] [Resolved] (SOLR-2039) Multivalued fields with dynamic names does not work properly with DIH

2016-10-25 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-2039.
-
   Resolution: Fixed
Fix Version/s: master (7.0)
   6.3

Thanks ruslan and Dat for the patches!

> Multivalued fields with dynamic names does not work properly with DIH
> -
>
> Key: SOLR-2039
> URL: https://issues.apache.org/jira/browse/SOLR-2039
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 1.4.1
> Environment: Windows XP, Default Solr 1.4 install with jetty
>Reporter: K A
>Assignee: Shalin Shekhar Mangar
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-2039.patch, SOLR-2039.patch
>
>
> Attempting to use multiValued fields using the DataImportHandler with dynamic 
> names does not seem to work as intended.
> Setting up the following in schema.xml:
> <dynamicField name="*_s" multiValued="true" />
> Then attempting to import a multiValued field through an RDBMS using child 
> records with a dynamic name. The snippet from data-config.xml:
> <field name="..." column="TEXT_VALUE" />
> results in only the FIRST record being imported. If we change the field name 
> in the above example to a constant (i.e. the above field entry becomes 
> <field name="metadata_record_s" column="TEXT_VALUE" />), the multiple records 
> are correctly imported.
> This was posted on solr-user, and others have reported the problem in the 
> mailing list archive.
> The workaround was to use a JavaScript transformer to perform the same 
> behavior. The changes in data-config.xml become:
> <entity ... transformer="script:f1"/>
> <script><![CDATA[
>   function f1(row) {
>     var name = row.get('CORE_DESC_TERM');
>     var value = row.get('TEXT_VALUE');
>     row.put(name + '_s', value);
>     return row;
>   }
> ]]></script>
> This results in a multivalued field with a dynamic name assigned.
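[Editor's note: the workaround described above is DIH's ScriptTransformer; a generic sketch of how it is wired up, with entity, query, and column names as hypothetical placeholders:]

```xml
<!-- Sketch of a DIH ScriptTransformer; entity name and SQL query
     are hypothetical placeholders, the column names follow the
     issue description above. -->
<dataConfig>
  <script><![CDATA[
    function addDynamicField(row) {
      // Build the field name at import time from another column's value
      var name = row.get('CORE_DESC_TERM');
      row.put(name + '_s', row.get('TEXT_VALUE'));
      return row;
    }
  ]]></script>
  <document>
    <entity name="record" query="SELECT CORE_DESC_TERM, TEXT_VALUE FROM ..."
            transformer="script:addDynamicField"/>
  </document>
</dataConfig>
```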






[jira] [Commented] (SOLR-2039) Multivalued fields with dynamic names does not work properly with DIH

2016-10-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15607492#comment-15607492
 ] 

ASF subversion and git services commented on SOLR-2039:
---

Commit 279647a303750408d10f6f8a6c27a066176fe49e in lucene-solr's branch 
refs/heads/branch_6x from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=279647a ]

SOLR-2039: Multivalued fields with dynamic names does not work properly with DIH

(cherry picked from commit b8d9647)


> Multivalued fields with dynamic names does not work properly with DIH
> -
>
> Key: SOLR-2039
> URL: https://issues.apache.org/jira/browse/SOLR-2039
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 1.4.1
> Environment: Windows XP, Default Solr 1.4 install with jetty
>Reporter: K A
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-2039.patch, SOLR-2039.patch
>
>






[jira] [Commented] (SOLR-2039) Multivalued fields with dynamic names does not work properly with DIH

2016-10-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15607490#comment-15607490
 ] 

ASF subversion and git services commented on SOLR-2039:
---

Commit b8d9647307c5559706aeec3aad32c2e416188979 in lucene-solr's branch 
refs/heads/master from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b8d9647 ]

SOLR-2039: Multivalued fields with dynamic names does not work properly with DIH


> Multivalued fields with dynamic names does not work properly with DIH
> -
>
> Key: SOLR-2039
> URL: https://issues.apache.org/jira/browse/SOLR-2039
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 1.4.1
> Environment: Windows XP, Default Solr 1.4 install with jetty
>Reporter: K A
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-2039.patch, SOLR-2039.patch
>
>






[jira] [Assigned] (SOLR-2039) Multivalued fields with dynamic names does not work properly with DIH

2016-10-25 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar reassigned SOLR-2039:
---

Assignee: Shalin Shekhar Mangar

> Multivalued fields with dynamic names does not work properly with DIH
> -
>
> Key: SOLR-2039
> URL: https://issues.apache.org/jira/browse/SOLR-2039
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 1.4.1
> Environment: Windows XP, Default Solr 1.4 install with jetty
>Reporter: K A
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-2039.patch, SOLR-2039.patch
>
>






[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_102) - Build # 2040 - Unstable!

2016-10-25 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2040/
Java: 64bit/jdk1.8.0_102 -XX:+UseCompressedOops -XX:+UseSerialGC

6 tests failed.
FAILED:  org.apache.solr.search.facet.TestJsonFacets.testStatsDistrib {p0=DV}

Error Message:
mismatch: 'A'!='B' @ facets/cat0/buckets/[0]/val

Stack Trace:
java.lang.RuntimeException: mismatch: 'A'!='B' @ facets/cat0/buckets/[0]/val
at 
__randomizedtesting.SeedInfo.seed([207517C5094C183C:FCB5BEB364CDBCF6]:0)
at org.apache.solr.SolrTestCaseHS.matchJSON(SolrTestCaseHS.java:161)
at org.apache.solr.SolrTestCaseHS.assertJQ(SolrTestCaseHS.java:143)
at 
org.apache.solr.SolrTestCaseHS$Client$Tester.assertJQ(SolrTestCaseHS.java:255)
at org.apache.solr.SolrTestCaseHS$Client.testJQ(SolrTestCaseHS.java:296)
at 
org.apache.solr.search.facet.TestJsonFacets.doStatsTemplated(TestJsonFacets.java:1152)
at 
org.apache.solr.search.facet.TestJsonFacets.doStats(TestJsonFacets.java:361)
at 
org.apache.solr.search.facet.TestJsonFacets.testStatsDistrib(TestJsonFacets.java:322)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   

[jira] [Commented] (SOLR-9660) in GroupingSpecification factor [group](sort|offset|limit) into [group](sortSpec)

2016-10-25 Thread Judith Silverman (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15607455#comment-15607455
 ] 

Judith Silverman commented on SOLR-9660:


Christine, I have no suggestions about the failing test but here are a couple 
of questions. Is it necessary to deprecate the GroupingSpecification accessors 
like getGroupOffset(), rather than simply modifying the definitions as you did? 
They could still be useful as wrappers.  Not that I feel strongly about it, but 
your answer could tell me something about Solr philosophy.

On a related note: now that you have added new SortSpec constructors, could you 
rewrite the old ones in terms of the new ones using the initial values of num 
and offset as the last two arguments:
   
public SortSpec(Sort sort, List<SchemaField> fields) { 
  this(sort, fields, num, offset); 
}  
   
?  I missed that in my SOLR-6203 patch but it jumped out at me now.  Similarly, 
the new weightSortSpec() function could be defined in terms of a 4-parameter 
version: weightSortSpec(SortSpec originalSortSpec, Sort nullEquivalent, int 
count, int offset) to make it more self-contained.

Thanks,
Judith
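[Editor's note: the delegation suggested above, in a stripped-down, self-contained sketch; Object and List<?> stand in for Solr's Sort and List<SchemaField> types, and the default count/offset values here are hypothetical:]

```java
import java.util.Collections;
import java.util.List;

// Stripped-down sketch of the suggested constructor delegation.
// Object and List<?> stand in for Solr's Sort and List<SchemaField>;
// the default count/offset values are hypothetical.
class SortSpecSketch {
    static final int DEFAULT_COUNT = 10;
    final Object sort;
    final List<?> fields;
    final int count;
    final int offset;

    SortSpecSketch(Object sort, List<?> fields, int count, int offset) {
        this.sort = sort;
        this.fields = fields;
        this.count = count;
        this.offset = offset;
    }

    // Old two-argument constructor rewritten in terms of the new one,
    // passing the initial values as the last two arguments.
    SortSpecSketch(Object sort, List<?> fields) {
        this(sort, fields, DEFAULT_COUNT, 0);
    }
}

public class Main {
    public static void main(String[] args) {
        SortSpecSketch s = new SortSpecSketch(null, Collections.emptyList());
        System.out.println(s.count + " " + s.offset);
    }
}
```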

> in GroupingSpecification factor [group](sort|offset|limit) into 
> [group](sortSpec)
> -
>
> Key: SOLR-9660
> URL: https://issues.apache.org/jira/browse/SOLR-9660
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-9660.patch
>
>
> This is split out and adapted from and towards the SOLR-6203 changes.






[jira] [Commented] (SOLR-6203) cast exception while searching with sort function and result grouping

2016-10-25 Thread Judith Silverman (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15607409#comment-15607409
 ] 

Judith Silverman commented on SOLR-6203:


Christine, thanks for taking an interest in SOLR-6203.  I will be happy to lend 
a pair of eyes but it will take a while for me to remind myself how the patch 
worked and to catch up with the changes to Solr since 4.10.  I do have some 
elementary questions about the patch for SOLR-9660 which I will post under that 
jira.  Thanks again.

> cast exception while searching with sort function and result grouping
> -
>
> Key: SOLR-6203
> URL: https://issues.apache.org/jira/browse/SOLR-6203
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.7, 4.8
>Reporter: Nate Dire
>Assignee: Christine Poerschke
> Attachments: README, SOLR-6203-unittest.patch, 
> SOLR-6203-unittest.patch, SOLR-6203.patch
>
>
> After upgrading from 4.5.1 to 4.7+, a schema including a {{"*"}} dynamic 
> field as text gets a cast exception when using a sort function and result 
> grouping.  
> Repro (with example config):
> # Add a {{"*"}} dynamic field as a {{TextField}}, eg:
> {noformat}
> <dynamicField name="*" type="text_general" />
> {noformat}
> # Create a sharded collection
> {noformat}
> curl 
> 'http://localhost:8983/solr/admin/collections?action=CREATE&name=test&numShards=2&replicationFactor=2'
> {noformat}
> # Add example docs (query must have some results)
> # Submit query which sorts on a function result and uses result grouping:
> {noformat}
> {
>   "responseHeader": {
> "status": 500,
> "QTime": 50,
> "params": {
>   "sort": "sqrt(popularity) desc",
>   "indent": "true",
>   "q": "*:*",
>   "_": "1403709010008",
>   "group.field": "manu",
>   "group": "true",
>   "wt": "json"
> }
>   },
>   "error": {
> "msg": "java.lang.Double cannot be cast to 
> org.apache.lucene.util.BytesRef",
> "code": 500
>   }
> }
> {noformat}
> Source exception from log:
> {noformat}
> ERROR - 2014-06-25 08:10:10.055; org.apache.solr.common.SolrException; 
> java.lang.ClassCastException: java.lang.Double cannot be cast to 
> org.apache.lucene.util.BytesRef
> at 
> org.apache.solr.schema.FieldType.marshalStringSortValue(FieldType.java:981)
> at org.apache.solr.schema.TextField.marshalSortValue(TextField.java:176)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.serializeSearchGroup(SearchGroupsResultTransformer.java:125)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.transform(SearchGroupsResultTransformer.java:65)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.transform(SearchGroupsResultTransformer.java:43)
> at 
> org.apache.solr.search.grouping.CommandHandler.processResult(CommandHandler.java:193)
> at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:340)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
>   ...
> {noformat}
> It looks like {{serializeSearchGroup}} is matching the sort expression as the 
> {{"*"}} dynamic field, which is a TextField in the repro.






[jira] [Resolved] (SOLR-5245) Create a test for SOLR-5243.

2016-10-25 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-5245.
-
   Resolution: Fixed
Fix Version/s: (was: 6.0)
   (was: 4.9)
   master (7.0)
   6.3

Thanks Dat!

> Create a test for SOLR-5243.
> 
>
> Key: SOLR-5245
> URL: https://issues.apache.org/jira/browse/SOLR-5245
> Project: Solr
>  Issue Type: Test
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Shalin Shekhar Mangar
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-5245.patch
>
>







[jira] [Commented] (SOLR-5245) Create a test for SOLR-5243.

2016-10-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15607365#comment-15607365
 ] 

ASF subversion and git services commented on SOLR-5245:
---

Commit de360d62cc4fe87d05a61b0c78a2e2077b4eb73c in lucene-solr's branch 
refs/heads/branch_6x from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=de360d6 ]

SOLR-5245: Add a test to ensure that election contexts are keyed off both 
collection name and coreNodeName so that killing a shard in one collection does 
not result in leader election in a different collection.

(cherry picked from commit 62bc90d)


> Create a test for SOLR-5243.
> 
>
> Key: SOLR-5245
> URL: https://issues.apache.org/jira/browse/SOLR-5245
> Project: Solr
>  Issue Type: Test
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Shalin Shekhar Mangar
> Fix For: 4.9, 6.0
>
> Attachments: SOLR-5245.patch
>
>







[jira] [Commented] (SOLR-5245) Create a test for SOLR-5243.

2016-10-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15607363#comment-15607363
 ] 

ASF subversion and git services commented on SOLR-5245:
---

Commit 62bc90d7d2d586fd587c7a133fff83e535892764 in lucene-solr's branch 
refs/heads/master from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=62bc90d ]

SOLR-5245: Add a test to ensure that election contexts are keyed off both 
collection name and coreNodeName so that killing a shard in one collection does 
not result in leader election in a different collection.
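[Editor's note: the fix described in the commit message amounts to widening the lookup key; a self-contained sketch, with class and key names hypothetical, of why keying contexts off the composite of collection name and coreNodeName isolates collections:]

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: election contexts keyed by collection + coreNodeName,
// so removing a core in one collection cannot evict a context belonging to
// a same-named core in a different collection.
public class ElectionKeySketch {
    public static void main(String[] args) {
        Map<String, String> contexts = new HashMap<>();
        // Two collections each have a core named "core_node1"
        contexts.put("collectionA/core_node1", "leaderElectionA");
        contexts.put("collectionB/core_node1", "leaderElectionB");

        // Killing the shard in collectionA removes only collectionA's context
        contexts.remove("collectionA/core_node1");

        System.out.println(contexts.containsKey("collectionB/core_node1"));
    }
}
```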


> Create a test for SOLR-5243.
> 
>
> Key: SOLR-5245
> URL: https://issues.apache.org/jira/browse/SOLR-5245
> Project: Solr
>  Issue Type: Test
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Shalin Shekhar Mangar
> Fix For: 4.9, 6.0
>
> Attachments: SOLR-5245.patch
>
>







[jira] [Assigned] (SOLR-5245) Create a test for SOLR-5243.

2016-10-25 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar reassigned SOLR-5245:
---

Assignee: Shalin Shekhar Mangar  (was: Mark Miller)

> Create a test for SOLR-5243.
> 
>
> Key: SOLR-5245
> URL: https://issues.apache.org/jira/browse/SOLR-5245
> Project: Solr
>  Issue Type: Test
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Shalin Shekhar Mangar
> Fix For: 4.9, 6.0
>
> Attachments: SOLR-5245.patch
>
>







[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_102) - Build # 18142 - Still Unstable!

2016-10-25 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18142/
Java: 64bit/jdk1.8.0_102 -XX:-UseCompressedOops -XX:+UseG1GC

7 tests failed.
FAILED:  org.apache.solr.security.BasicAuthStandaloneTest.testBasicAuth

Error Message:
Invalid json: Error 401 - HTTP ERROR: 401. Problem accessing 
/solr/admin/authentication. Reason: Bad credentials. Powered by Jetty:// 
9.3.8.v20160314 (http://eclipse.org/jetty)

Stack Trace:
java.lang.AssertionError: Invalid json

Error 401

HTTP ERROR: 401
Problem accessing /solr/admin/authentication. Reason:

    Bad credentials

Powered by Jetty:// 9.3.8.v20160314 (http://eclipse.org/jetty)

at 
__randomizedtesting.SeedInfo.seed([FE4836182561EE6A:4226400A81326D10]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.security.BasicAuthIntegrationTest.verifySecurityStatus(BasicAuthIntegrationTest.java:256)
at 
org.apache.solr.security.BasicAuthIntegrationTest.verifySecurityStatus(BasicAuthIntegrationTest.java:237)
at 
org.apache.solr.security.BasicAuthStandaloneTest.testBasicAuth(BasicAuthStandaloneTest.java:104)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+140) - Build # 18141 - Unstable!

2016-10-25 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18141/
Java: 32bit/jdk-9-ea+140 -server -XX:+UseConcMarkSweepGC

6 tests failed.
FAILED:  org.apache.solr.search.facet.TestJsonFacets.testStatsDistrib {p0=ENUM}

Error Message:
mismatch: 'A'!='B' @ facets/cat0/buckets/[0]/val

Stack Trace:
java.lang.RuntimeException: mismatch: 'A'!='B' @ facets/cat0/buckets/[0]/val
at 
__randomizedtesting.SeedInfo.seed([D054C871D43C78F0:C946107B9BDDC3A]:0)
at org.apache.solr.SolrTestCaseHS.matchJSON(SolrTestCaseHS.java:161)
at org.apache.solr.SolrTestCaseHS.assertJQ(SolrTestCaseHS.java:143)
at 
org.apache.solr.SolrTestCaseHS$Client$Tester.assertJQ(SolrTestCaseHS.java:255)
at org.apache.solr.SolrTestCaseHS$Client.testJQ(SolrTestCaseHS.java:296)
at 
org.apache.solr.search.facet.TestJsonFacets.doStatsTemplated(TestJsonFacets.java:1152)
at 
org.apache.solr.search.facet.TestJsonFacets.doStats(TestJsonFacets.java:361)
at 
org.apache.solr.search.facet.TestJsonFacets.testStatsDistrib(TestJsonFacets.java:322)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:535)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[JENKINS] Lucene-Solr-6.x-Windows (64bit/jdk1.8.0_102) - Build # 539 - Still Unstable!

2016-10-25 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/539/
Java: 64bit/jdk1.8.0_102 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.lucene.replicator.IndexReplicationClientTest.testConsistencyOnExceptions

Error Message:
Captured an uncaught exception in thread: Thread[id=18, 
name=ReplicationThread-index, state=RUNNABLE, 
group=TGRP-IndexReplicationClientTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=18, name=ReplicationThread-index, 
state=RUNNABLE, group=TGRP-IndexReplicationClientTest]
at 
__randomizedtesting.SeedInfo.seed([4B68EF7EC8BB0D3E:C4E608DEDAD7FEC1]:0)
Caused by: java.lang.AssertionError: handler failed too many times: -1
at __randomizedtesting.SeedInfo.seed([4B68EF7EC8BB0D3E]:0)
at 
org.apache.lucene.replicator.IndexReplicationClientTest$4.handleUpdateException(IndexReplicationClientTest.java:304)
at 
org.apache.lucene.replicator.ReplicationClient$ReplicationThread.run(ReplicationClient.java:77)




Build Log:
[...truncated 8247 lines...]
   [junit4] Suite: org.apache.lucene.replicator.IndexReplicationClientTest
   [junit4]   2> Oct 25, 2016 6:00:48 PM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
   [junit4]   2> SEVERE: Uncaught exception in thread: 
Thread[ReplicationThread-index,5,TGRP-IndexReplicationClientTest]
   [junit4]   2> java.lang.AssertionError: handler failed too many times: -1
   [junit4]   2>at 
__randomizedtesting.SeedInfo.seed([4B68EF7EC8BB0D3E]:0)
   [junit4]   2>at 
org.apache.lucene.replicator.IndexReplicationClientTest$4.handleUpdateException(IndexReplicationClientTest.java:304)
   [junit4]   2>at 
org.apache.lucene.replicator.ReplicationClient$ReplicationThread.run(ReplicationClient.java:77)
   [junit4]   2> 
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=IndexReplicationClientTest 
-Dtests.method=testConsistencyOnExceptions -Dtests.seed=4B68EF7EC8BB0D3E 
-Dtests.slow=true -Dtests.locale=zh-TW 
-Dtests.timezone=America/North_Dakota/New_Salem -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] ERROR   3.32s J1 | 
IndexReplicationClientTest.testConsistencyOnExceptions <<<
   [junit4]> Throwable #1: 
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=18, name=ReplicationThread-index, 
state=RUNNABLE, group=TGRP-IndexReplicationClientTest]
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([4B68EF7EC8BB0D3E:C4E608DEDAD7FEC1]:0)
   [junit4]> Caused by: java.lang.AssertionError: handler failed too many 
times: -1
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([4B68EF7EC8BB0D3E]:0)
   [junit4]>at 
org.apache.lucene.replicator.IndexReplicationClientTest$4.handleUpdateException(IndexReplicationClientTest.java:304)
   [junit4]>at 
org.apache.lucene.replicator.ReplicationClient$ReplicationThread.run(ReplicationClient.java:77)
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): {}, 
docValues:{}, maxPointsInLeafNode=1215, maxMBSortInHeap=7.571521884974462, 
sim=ClassicSimilarity, locale=zh-TW, timezone=America/North_Dakota/New_Salem
   [junit4]   2> NOTE: Windows 10 10.0 amd64/Oracle Corporation 1.8.0_102 
(64-bit)/cpus=3,threads=1,free=47684344,total=67108864
   [junit4]   2> NOTE: All tests run in this JVM: 
[IndexAndTaxonomyRevisionTest, SessionTokenTest, IndexReplicationClientTest]
   [junit4] Completed [5/9 (1!)] on J1 in 3.87s, 4 tests, 1 error <<< FAILURES!

[...truncated 64877 lines...]


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-9481) BasicAuthPlugin should support standalone mode

2016-10-25 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606885#comment-15606885
 ] 

Jan Høydahl commented on SOLR-9481:
---

Quick copy/paste instructions for testing:
{code}
cd solr
ant server
echo '{ "authentication": { "class": "solr.BasicAuthPlugin" }, "authorization": 
{ "class": "solr.RuleBasedAuthorizationPlugin" } }' > server/solr/security.json
bin/solr start
bin/solr create -c foo
# Add user
curl http://localhost:8983/solr/admin/authentication \
  -H 'Content-type:application/json' \
  -d '{"set-user": {"solr" : "solr"}}'
# Verify security.json
cat server/solr/security.json
# Set permissions
curl http://localhost:8983/solr/admin/authorization \
  -H 'Content-type:application/json' \
  -d '{ "set-permission": {"name":"all", "role": "admin"}, "set-user-role" : 
{"solr": ["admin"]}}' 
# Will return error
curl http://localhost:8983/solr/admin/info/system
# Will succeed
curl -u solr:solr http://localhost:8983/solr/admin/info/system
{code}

> BasicAuthPlugin should support standalone mode
> --
>
> Key: SOLR-9481
> URL: https://issues.apache.org/jira/browse/SOLR-9481
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: authentication
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9481.patch, SOLR-9481.patch
>
>
> The BasicAuthPlugin currently only supports SolrCloud, and reads users and 
> credentials from ZK /security.json
> Add support for standalone mode operation



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Commented] (SOLR-9481) BasicAuthPlugin should support standalone mode

2016-10-25 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606756#comment-15606756
 ] 

Jan Høydahl commented on SOLR-9481:
---

This is now committed to master. I'd appreciate it if someone took it for a spin 
before I backport to 6.x.
Note, however, that it will only work for a single node due to SOLR-9640, and it 
will not work with SSL, because {{urlScheme}} is currently hardcoded to be 
resolved from ZK only.

> BasicAuthPlugin should support standalone mode
> --
>
> Key: SOLR-9481
> URL: https://issues.apache.org/jira/browse/SOLR-9481
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: authentication
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9481.patch, SOLR-9481.patch
>
>
> The BasicAuthPlugin currently only supports SolrCloud, and reads users and 
> credentials from ZK /security.json
> Add support for standalone mode operation






[jira] [Commented] (SOLR-9481) BasicAuthPlugin should support standalone mode

2016-10-25 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606747#comment-15606747
 ] 

Jan Høydahl commented on SOLR-9481:
---

Turned out the problem was that {{SecurityConfHandlerLocal.SECURITY_JSON_PATH}} 
was initialized as static final, but the test changed {{solr.solr.home}} prior 
to running, so the wrong solr home was used in the test.
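
The pitfall can be reproduced in isolation. Below is a minimal one-file sketch 
(hypothetical names, not Solr's actual code): a value computed during class 
initialization captures {{solr.solr.home}} at that moment, so a test that 
changes the property afterwards still sees the stale path.
{code}
public class StaticPathDemo {
    static {
        // Whatever solr.solr.home holds when the class initializes...
        System.setProperty("solr.solr.home", "/opt/solr/home-a");
    }

    // ...is baked into this static final field, exactly once.
    static final String SECURITY_JSON_PATH =
            System.getProperty("solr.solr.home") + "/security.json";

    public static void main(String[] args) {
        // A test switching solr.solr.home later has no effect on the field.
        System.setProperty("solr.solr.home", "/tmp/test-home");
        System.out.println(SECURITY_JSON_PATH); // still the old home's path
    }
}
{code}
The fix is the usual one: compute the path per-instance (or per-request) rather 
than in a static initializer.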

> BasicAuthPlugin should support standalone mode
> --
>
> Key: SOLR-9481
> URL: https://issues.apache.org/jira/browse/SOLR-9481
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: authentication
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9481.patch, SOLR-9481.patch
>
>
> The BasicAuthPlugin currently only supports SolrCloud, and reads users and 
> credentials from ZK /security.json
> Add support for standalone mode operation






[jira] [Commented] (SOLR-9481) BasicAuthPlugin should support standalone mode

2016-10-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606742#comment-15606742
 ] 

ASF subversion and git services commented on SOLR-9481:
---

Commit d25a6181612fa00a8e5a1c1e6d889b6d21053486 in lucene-solr's branch 
refs/heads/master from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d25a618 ]

SOLR-9481: Authentication and Authorization plugins now work in standalone 
mode, including edit API


> BasicAuthPlugin should support standalone mode
> --
>
> Key: SOLR-9481
> URL: https://issues.apache.org/jira/browse/SOLR-9481
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: authentication
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9481.patch, SOLR-9481.patch
>
>
> The BasicAuthPlugin currently only supports SolrCloud, and reads users and 
> credentials from ZK /security.json
> Add support for standalone mode operation






[jira] [Commented] (SOLR-9691) Streaming expressions need to be able to use a metric computed by the facet stream as a field in other streams.

2016-10-25 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606705#comment-15606705
 ] 

Timothy Potter commented on SOLR-9691:
--

bonus play! thanks Joel, that works great

> Streaming expressions need to be able to use a metric computed by the facet 
> stream as a field in other streams.
> ---
>
> Key: SOLR-9691
> URL: https://issues.apache.org/jira/browse/SOLR-9691
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 6.2.1
>Reporter: Timothy Potter
>
> Using 6.2.1, I want to use a facet stream to do an intermediate count and 
> then sum up the counts in a rollup stream, i.e. something like:
> {code}
> rollup(
>   sort(
> hashJoin(
>   search(products,
>   q="*:*",
>   fl="product_id,model_num",
>   sort="product_id asc",
>   qt="/export",
>   partitionKeys="product_id"),
>   hashed=facet(transactions, q="*:*", buckets="product_id", 
> bucketSizeLimit=100, bucketSorts="product_id asc", count(*)),
>   on="product_id"
> ), 
> by="model_num asc"
>   ), 
>   over="model_num",
>   sum(count(*))
> )
> {code}
> Basically, I want to get a count of each product_id from the transactions 
> collection (# of transactions per product) and then join that with the 
> products table to generate a projection containing:
> {code}
> {
>   "result-set": {
> "docs": [
>   {
> "product_id": "1234",
> "count(*)": 4,
> "model_num": "X"
>   },
>   {
> "product_id": "5678",
> "count(*)": 5,
> "model_num": "Y"
>   },
>   ...
> ]
>   }
> }
> {code}
> This works, but the outer rollup doesn't recognize the count(*) as a field. I 
> get this error:
> {code}
> {
>   "result-set": {
> "docs": [
>   {
> "EXCEPTION": "Invalid expression sum(count(*)) - expected 
> sum(columnName)",
> "EOF": true
>   }
> ]
>   }
> }
> {code}






[jira] [Commented] (SOLR-9691) Streaming expressions need to be able to use a metric computed by the facet stream as a field in other streams.

2016-10-25 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606547#comment-15606547
 ] 

Joel Bernstein commented on SOLR-9691:
--

By the way, you can avoid the sort function if you sort the search by 
model_num. The hashJoin doesn't require a sort and should maintain the sort 
order of the left side of the join.
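
Written out, that suggestion would look something like the following untested 
sketch: the search now sorts by model_num so the outer sort can be dropped. 
The other parameters are carried over from the original expression, and this 
only removes the sort; the sum(count(*)) problem this ticket describes is a 
separate issue.
{code}
rollup(
  hashJoin(
    search(products,
        q="*:*",
        fl="product_id,model_num",
        sort="model_num asc",
        qt="/export",
        partitionKeys="product_id"),
    hashed=facet(transactions, q="*:*", buckets="product_id",
        bucketSizeLimit=100, bucketSorts="product_id asc", count(*)),
    on="product_id"
  ),
  over="model_num",
  sum(count(*))
)
{code}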

> Streaming expressions need to be able to use a metric computed by the facet 
> stream as a field in other streams.
> ---
>
> Key: SOLR-9691
> URL: https://issues.apache.org/jira/browse/SOLR-9691
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 6.2.1
>Reporter: Timothy Potter
>
> Using 6.2.1, I want to use a facet stream to do an intermediate count and 
> then sum up the counts in a rollup stream, i.e. something like:
> {code}
> rollup(
>   sort(
> hashJoin(
>   search(products,
>   q="*:*",
>   fl="product_id,model_num",
>   sort="product_id asc",
>   qt="/export",
>   partitionKeys="product_id"),
>   hashed=facet(transactions, q="*:*", buckets="product_id", 
> bucketSizeLimit=100, bucketSorts="product_id asc", count(*)),
>   on="product_id"
> ), 
> by="model_num asc"
>   ), 
>   over="model_num",
>   sum(count(*))
> )
> {code}
> Basically, I want to get a count of each product_id from the transactions 
> collection (# of transactions per product) and then join that with the 
> products table to generate a projection containing:
> {code}
> {
>   "result-set": {
> "docs": [
>   {
> "product_id": "1234",
> "count(*)": 4,
> "model_num": "X"
>   },
>   {
> "product_id": "5678",
> "count(*)": 5,
> "model_num": "Y"
>   },
>   ...
> ]
>   }
> }
> {code}
> This works, but the outer rollup doesn't recognize the count(*) as a field. I 
> get this error:
> {code}
> {
>   "result-set": {
> "docs": [
>   {
> "EXCEPTION": "Invalid expression sum(count(*)) - expected 
> sum(columnName)",
> "EOF": true
>   }
> ]
>   }
> }
> {code}






[jira] [Commented] (SOLR-9579) Reuse lucene FieldType in createField flow during ingestion

2016-10-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606540#comment-15606540
 ] 

ASF subversion and git services commented on SOLR-9579:
---

Commit 941c5e92ba6ff76e913746caf68e05b563983f17 in lucene-solr's branch 
refs/heads/master from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=941c5e9 ]

SOLR-9579: fix intellij compilation: add lucene core dependency to the langid 
contrib


> Reuse lucene FieldType in createField flow during ingestion
> ---
>
> Key: SOLR-9579
> URL: https://issues.apache.org/jira/browse/SOLR-9579
> Project: Solr
>  Issue Type: Improvement
>  Components: Schema and Analysis
>Affects Versions: 6.x, master (7.0)
> Environment: This has been primarily tested on Windows 8 and Windows 
> Server 2012 R2
>Reporter: John Call
>Priority: Minor
>  Labels: gc, memory, reuse
> Fix For: master (7.0)
>
> Attachments: SOLR-9579.patch, SOLR-9579.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> During ingestion createField in FieldType is being called for each field on 
> each document. For the subclasses of FieldType without their own 
> implementation of createField the lucene version of FieldType is created to 
> be stored along with the value. However the lucene FieldType object is 
> identical when created from the same SchemaField. In testing ingestion of one 
> million rows with 22 fields each, we were creating 22 million Lucene FieldType 
> objects when only 22 are needed. Solr should lazily initialize a Lucene 
> FieldType for each SchemaField and reuse it for future ingestion. Not only 
> does this relieve memory usage but also relieves significant pressure on the 
> gc.
> There are also subclasses of Solr FieldType which create separate Lucene 
> FieldType for stored fields instead of reusing the static in StoredField.
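
The lazy-initialize-and-reuse idea described above can be sketched generically 
(a hypothetical class, not the attached patch): build one immutable per-field 
object on first use and hand the same instance to every subsequent document.
{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

// Hypothetical sketch, not the SOLR-9579 patch: cache one object per
// schema field so a million-document ingest allocates per-field, not
// per-value.
public class LazyPerFieldCache<V> {
    private final Map<String, V> perField = new ConcurrentHashMap<>();
    private final Function<String, V> factory;

    public LazyPerFieldCache(Function<String, V> factory) {
        this.factory = factory;
    }

    // Returns the cached instance for the field, creating it on first use.
    public V forField(String fieldName) {
        return perField.computeIfAbsent(fieldName, factory);
    }

    public static void main(String[] args) {
        AtomicInteger built = new AtomicInteger();
        LazyPerFieldCache<String> cache = new LazyPerFieldCache<>(
                name -> { built.incrementAndGet(); return "FieldType(" + name + ")"; });
        // Simulate ingesting many documents with the same two fields.
        for (int doc = 0; doc < 1_000; doc++) {
            cache.forField("title");
            cache.forField("body");
        }
        System.out.println(built.get()); // 2: one object per field, not 2000
    }
}
{code}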






[jira] [Commented] (LUCENE-7521) Simplify PackedInts

2016-10-25 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606537#comment-15606537
 ] 

Adrien Grand commented on LUCENE-7521:
--

Option 2 sounds easier, so I can look into it if you think it is necessary.

> Simplify PackedInts
> ---
>
> Key: LUCENE-7521
> URL: https://issues.apache.org/jira/browse/LUCENE-7521
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7521.patch
>
>
> We have a lot of specialization in PackedInts about how to keep packed arrays 
> of longs in memory. However, most use-cases have slowly moved to DirectWriter 
> and DirectMonotonicWriter and most specializations we have are barely used 
> for performance-sensitive operations, so I'd like to clean this up a bit.






[jira] [Commented] (SOLR-9188) BlockUnknown property makes inter-node communication impossible

2016-10-25 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606473#comment-15606473
 ] 

Jan Høydahl commented on SOLR-9188:
---

Perhaps open a new JIRA to fix this, since this one has already been released in 6.2.1.

> BlockUnknown property makes inter-node communication impossible
> ---
>
> Key: SOLR-9188
> URL: https://issues.apache.org/jira/browse/SOLR-9188
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 6.0
>Reporter: Piotr Tempes
>Assignee: Noble Paul
>Priority: Critical
>  Labels: BasicAuth, Security
> Fix For: 6.2.1, 6.3, master (7.0)
>
> Attachments: SOLR-9188.patch, solr9188-errorlog.txt
>
>
> When I set up my SolrCloud without the blockUnknown property, it works as 
> expected. When I want to block unauthenticated requests, I get the following 
> stacktrace during startup (see attached file).






[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 470 - Still Unstable!

2016-10-25 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/470/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

6 tests failed.
FAILED:  org.apache.solr.search.facet.TestJsonFacets.testStatsDistrib {p0=DV}

Error Message:
mismatch: 'A'!='B' @ facets/cat0/buckets/[0]/val

Stack Trace:
java.lang.RuntimeException: mismatch: 'A'!='B' @ facets/cat0/buckets/[0]/val
at 
__randomizedtesting.SeedInfo.seed([A49D63747AED620C:785DCA02176CC6C6]:0)
at org.apache.solr.SolrTestCaseHS.matchJSON(SolrTestCaseHS.java:161)
at org.apache.solr.SolrTestCaseHS.assertJQ(SolrTestCaseHS.java:143)
at 
org.apache.solr.SolrTestCaseHS$Client$Tester.assertJQ(SolrTestCaseHS.java:255)
at org.apache.solr.SolrTestCaseHS$Client.testJQ(SolrTestCaseHS.java:296)
at 
org.apache.solr.search.facet.TestJsonFacets.doStatsTemplated(TestJsonFacets.java:1152)
at 
org.apache.solr.search.facet.TestJsonFacets.doStats(TestJsonFacets.java:361)
at 
org.apache.solr.search.facet.TestJsonFacets.testStatsDistrib(TestJsonFacets.java:322)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Commented] (SOLR-7604) Create a test for explicitly testing the .system collection schema & config

2016-10-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606409#comment-15606409
 ] 

ASF subversion and git services commented on SOLR-7604:
---

Commit 87d4c3efa164c6960e01c1107d35a4f1de7565af in lucene-solr's branch 
refs/heads/branch_6x from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=87d4c3e ]

SOLR-7604: add testcase to verify the schema of .system collection


> Create a test for explicitly testing the .system collection schema & config
> ---
>
> Key: SOLR-7604
> URL: https://issues.apache.org/jira/browse/SOLR-7604
> Project: Solr
>  Issue Type: Task
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Minor
> Fix For: 6.3
>
>







[jira] [Commented] (SOLR-7604) Create a test for explicitly testing the .system collection schema & config

2016-10-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606411#comment-15606411
 ] 

ASF subversion and git services commented on SOLR-7604:
---

Commit 8394ff80f3b685e36c46d5098ab602e75576eecf in lucene-solr's branch 
refs/heads/branch_6x from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8394ff8 ]

SOLR-7604: add testcase to verify the schema of .system collection


> Create a test for explicitly testing the .system collection schema & config
> ---
>
> Key: SOLR-7604
> URL: https://issues.apache.org/jira/browse/SOLR-7604
> Project: Solr
>  Issue Type: Task
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Minor
> Fix For: 6.3
>
>







[jira] [Resolved] (SOLR-7604) Create a test for explicitly testing the .system collection schema & config

2016-10-25 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-7604.
--
   Resolution: Fixed
Fix Version/s: 6.3

> Create a test for explicitly testing the .system collection schema & config
> ---
>
> Key: SOLR-7604
> URL: https://issues.apache.org/jira/browse/SOLR-7604
> Project: Solr
>  Issue Type: Task
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Minor
> Fix For: 6.3
>
>







[jira] [Commented] (SOLR-7604) Create a test for explicitly testing the .system collection schema & config

2016-10-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606405#comment-15606405
 ] 

ASF subversion and git services commented on SOLR-7604:
---

Commit 9303112981527640f24968fb811c9ff71e1ae830 in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9303112 ]

SOLR-7604: add testcase to verify the schema of .system collection


> Create a test for explicitly testing the .system collection schema & config
> ---
>
> Key: SOLR-7604
> URL: https://issues.apache.org/jira/browse/SOLR-7604
> Project: Solr
>  Issue Type: Task
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Minor
>







[jira] [Commented] (SOLR-7604) Create a test for explicitly testing the .system collection schema & config

2016-10-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606403#comment-15606403
 ] 

ASF subversion and git services commented on SOLR-7604:
---

Commit 34ad8577b6fac0e48cc1885f2fe40b0abf60bd79 in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=34ad857 ]

SOLR-7604: add testcase to verify the schema of .system collection


> Create a test for explicitly testing the .system collection schema & config
> ---
>
> Key: SOLR-7604
> URL: https://issues.apache.org/jira/browse/SOLR-7604
> Project: Solr
>  Issue Type: Task
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Minor
>







[jira] [Closed] (SOLR-7565) Adding new Manning Publications MEAP "Relevant Search" to official Solr website book section and news

2016-10-25 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-7565.
---
Resolution: Duplicate

The other issue is more recent, so this one can be closed.

> Adding new Manning Publications MEAP "Relevant Search" to official Solr 
> website book section and news
> -
>
> Key: SOLR-7565
> URL: https://issues.apache.org/jira/browse/SOLR-7565
> Project: Solr
>  Issue Type: Task
>  Components: documentation
>Reporter: Nicole Butterfield
>Priority: Minor
>  Labels: documentation, easyfix, patch
> Attachments: Turnbull-RS-HI.jpg
>
>
> Doug Turnbull, John Berryman and Manning Publications are proud to announce 
> the new MEAP Relevant Search.
> Relevant Search demystifies relevance work. It teaches you how to return 
> engaging search results to your users, helping you understand and leverage 
> the internals of Lucene-based search engines. Relevant Search walks through 
> several real-world problems using a cohesive philosophy that combines text 
> analysis, query building, and score shaping to express business ranking rules 
> to the search engine. It outlines how to guide the engineering process by 
> monitoring search user behavior and shifting the enterprise to a search-first 
> culture focused on humans, not computers. You'll see how the search engine 
> provides a deeply pluggable platform for integrating search ranking with 
> machine learning, ontologies, personalization, domain-specific expertise, and 
> other enriching sources.






[jira] [Commented] (SOLR-9657) Create a new TemplateUpdateRequestProcessorFactory

2016-10-25 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606278#comment-15606278
 ] 

Alexandre Rafalovitch commented on SOLR-9657:
-

This matches commit 9d692cde53c25230d6db2663816f313cf356535b on master that, 
for some reason, did not link up with JIRA.

> Create a new TemplateUpdateRequestProcessorFactory
> --
>
> Key: SOLR-9657
> URL: https://issues.apache.org/jira/browse/SOLR-9657
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9657.patch
>
>
> Unlike other URPs, this will operate on request parameters
> example:
> {code}
> processor=Template&Template.field=fname:${somefield}some_string${someotherfield}
> {code}
> The actual name of the class is {{TemplateUpdateProcessorFactory}} and it is 
> possible to optionally drop the {{UpdateProcessorFactory}} part.  The 
> {{Template.field}} parameter specifies a field name as well as a template. The 
> {{Template.field}} parameter is multivalued, so it is possible to add 
> multiple fields or a multivalued field with the same name.
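The placeholder substitution described above can be sketched in plain Java. This is only an illustration of the idea, not Solr's actual {{TemplateUpdateProcessorFactory}} code; the class and method names here are invented for the sketch:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: replace each ${name} placeholder in a template with the
// corresponding document field value, as the URP description implies.
public class TemplateSketch {
    private static final Pattern PLACEHOLDER = Pattern.compile("\\$\\{(.*?)\\}");

    static String apply(String template, Map<String, String> doc) {
        Matcher m = PLACEHOLDER.matcher(template);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            // Missing fields resolve to the empty string in this sketch.
            String v = doc.getOrDefault(m.group(1), "");
            m.appendReplacement(sb, Matcher.quoteReplacement(v));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> doc = Map.of("somefield", "A", "someotherfield", "B");
        // prints: Asome_stringB
        System.out.println(apply("${somefield}some_string${someotherfield}", doc));
    }
}
```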






[jira] [Commented] (SOLR-9657) Create a new TemplateUpdateRequestProcessorFactory

2016-10-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606271#comment-15606271
 ] 

ASF subversion and git services commented on SOLR-9657:
---

Commit fdb4dd3b322a517ff7f9df2ef64001120e89854c in lucene-solr's branch 
refs/heads/branch_6x from [~arafalov]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fdb4dd3 ]

SOLR-9657: Fixed Javadocs and added example


> Create a new TemplateUpdateRequestProcessorFactory
> --
>
> Key: SOLR-9657
> URL: https://issues.apache.org/jira/browse/SOLR-9657
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9657.patch
>
>
> Unlike other URPs, this will operate on request parameters
> example:
> {code}
> processor=Template&Template.field=fname:${somefield}some_string${someotherfield}
> {code}
> The actual name of the class is {{TemplateUpdateProcessorFactory}} and it is 
> possible to optionally drop the {{UpdateProcessorFactory}} part.  The 
> {{Template.field}} parameter specifies a field name as well as a template. The 
> {{Template.field}} parameter is multivalued, so it is possible to add 
> multiple fields or a multivalued field with the same name.






[jira] [Commented] (LUCENE-7522) Make the Lucene jar an OSGi bundle

2016-10-25 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606270#comment-15606270
 ] 

Uwe Schindler commented on LUCENE-7522:
---

LUCENE-1344 was closed in the wrong way. I reopened and fixed the status to 
"Won't fix". This was an issue of bulk-closing issues.

> Make the Lucene jar an OSGi bundle
> --
>
> Key: LUCENE-7522
> URL: https://issues.apache.org/jira/browse/LUCENE-7522
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/build
>Affects Versions: 6.2.1
>Reporter: Michal Hlavac
>
> Add support for OSGi. LUCENE-1344 added this feature to previous versions, 
> but now lucene jars are not OSGi bundles. There are OSGi bundles from 
> Servicemix, but I think lucene should add this feature.






[jira] [Closed] (LUCENE-1344) Make the Lucene jar an OSGi bundle

2016-10-25 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-1344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler closed LUCENE-1344.
-
   Resolution: Won't Fix
Fix Version/s: (was: 3.3)
   (was: 4.0-ALPHA)

> Make the Lucene jar an OSGi bundle
> --
>
> Key: LUCENE-1344
> URL: https://issues.apache.org/jira/browse/LUCENE-1344
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: general/build
>Reporter: Nicolas Lalevée
>Assignee: Ryan McKinley
>Priority: Minor
> Attachments: LUCENE-1344-3.0-branch.patch, LUCENE-1344-maven.patch, 
> LUCENE-1344-r679133.patch, LUCENE-1344-r690675.patch, 
> LUCENE-1344-r690691.patch, LUCENE-1344-r696747.patch, LUCENE-1344.patch, 
> LUCENE-1344.patch, LUCENE-1344.patch, LUCENE-1344.patch, LUCENE-1344.patch, 
> LUCENE-1344.patch, MANIFEST.MF.diff, lucene_trunk.patch
>
>
> In order to use Lucene in an OSGi environment, some additional headers are 
> needed in the manifest of the jar. As Lucene has no dependencies, it is pretty 
> straightforward and it will be easy to maintain, I think.






[jira] [Reopened] (LUCENE-1344) Make the Lucene jar an OSGi bundle

2016-10-25 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-1344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler reopened LUCENE-1344:
---

Reopening to change the fix status ("Fixed" is wrong)

> Make the Lucene jar an OSGi bundle
> --
>
> Key: LUCENE-1344
> URL: https://issues.apache.org/jira/browse/LUCENE-1344
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: general/build
>Reporter: Nicolas Lalevée
>Assignee: Ryan McKinley
>Priority: Minor
> Attachments: LUCENE-1344-3.0-branch.patch, LUCENE-1344-maven.patch, 
> LUCENE-1344-r679133.patch, LUCENE-1344-r690675.patch, 
> LUCENE-1344-r690691.patch, LUCENE-1344-r696747.patch, LUCENE-1344.patch, 
> LUCENE-1344.patch, LUCENE-1344.patch, LUCENE-1344.patch, LUCENE-1344.patch, 
> LUCENE-1344.patch, MANIFEST.MF.diff, lucene_trunk.patch
>
>
> In order to use Lucene in an OSGi environment, some additional headers are 
> needed in the manifest of the jar. As Lucene has no dependencies, it is pretty 
> straightforward and it will be easy to maintain, I think.






[jira] [Commented] (LUCENE-7522) Make the Lucene jar an OSGi bundle

2016-10-25 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606262#comment-15606262
 ] 

Uwe Schindler commented on LUCENE-7522:
---

In addition, the current structure of packages inside those JARs is incompatible 
with OSGi (duplicate package names). This will be an issue with the module 
system of Java 9, but we are focusing on that instead of OSGi.

> Make the Lucene jar an OSGi bundle
> --
>
> Key: LUCENE-7522
> URL: https://issues.apache.org/jira/browse/LUCENE-7522
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/build
>Affects Versions: 6.2.1
>Reporter: Michal Hlavac
>
> Add support for OSGi. LUCENE-1344 added this feature to previous versions, 
> but now lucene jars are not OSGi bundles. There are OSGi bundles from 
> Servicemix, but I think lucene should add this feature.






[jira] [Commented] (SOLR-9691) Streaming expressions need to be able to use a metric computed by the facet stream as a field in other streams.

2016-10-25 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606254#comment-15606254
 ] 

Joel Bernstein commented on SOLR-9691:
--

Yes let's keep this open.

I believe the root cause of the bug is that the SumMetric is parsing the 
count(*) as a CountMetric rather than treating it as a string. I think the 
solution is to make the double quotes work in this scenario.

> Streaming expressions need to be able to use a metric computed by the facet 
> stream as a field in other streams.
> ---
>
> Key: SOLR-9691
> URL: https://issues.apache.org/jira/browse/SOLR-9691
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 6.2.1
>Reporter: Timothy Potter
>
> Using 6.2.1, I want to use a facet stream to do an intermediate count and 
> then sum up the counts in a rollup stream, i.e. something like:
> {code}
> rollup(
>   sort(
> hashJoin(
>   search(products,
>   q="*:*",
>   fl="product_id,model_num",
>   sort="product_id asc",
>   qt="/export",
>   partitionKeys="product_id"),
>   hashed=facet(transactions, q="*:*", buckets="product_id", 
> bucketSizeLimit=100, bucketSorts="product_id asc", count(*)),
>   on="product_id"
> ), 
> by="model_num asc"
>   ), 
>   over="model_num",
>   sum(count(*))
> )
> {code}
> Basically, I want to get a count of each product_id from the transactions 
> collection (# of transactions per product) and then join that with the 
> products table to generate a projection containing:
> {code}
> {
>   "result-set": {
> "docs": [
>   {
> "product_id": "1234",
> "count(*)": 4,
> "model_num": "X"
>   },
>   {
> "product_id": "5678",
> "count(*)": 5,
> "model_num": "Y"
>   },
>   ...
> ]
>   }
> }
> {code}
> This works, but the outer rollup doesn't recognize the count(*) as a field. I 
> get this error:
> {code}
> {
>   "result-set": {
> "docs": [
>   {
> "EXCEPTION": "Invalid expression sum(count(*)) - expected 
> sum(columnName)",
> "EOF": true
>   }
> ]
>   }
> }
> {code}






[jira] [Commented] (LUCENE-7522) Make the Lucene jar an OSGi bundle

2016-10-25 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606240#comment-15606240
 ] 

Dawid Weiss commented on LUCENE-7522:
-

If you search the mailing list, I think the last consensus was that we simply 
are not enough of OSGi experts to maintain the metadata in a reasonable way, so 
instead the plan was to release plain JARs and leave OSGi (and any other 
packaging) to downstream maintainers.

> Make the Lucene jar an OSGi bundle
> --
>
> Key: LUCENE-7522
> URL: https://issues.apache.org/jira/browse/LUCENE-7522
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/build
>Affects Versions: 6.2.1
>Reporter: Michal Hlavac
>
> Add support for OSGi. LUCENE-1344 added this feature to previous versions, 
> but now lucene jars are not OSGi bundles. There are OSGi bundles from 
> Servicemix, but I think lucene should add this feature.






[jira] [Updated] (SOLR-9188) BlockUnknown property makes inter-node communication impossible

2016-10-25 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-9188:
-
Attachment: SOLR-9188.patch

> BlockUnknown property makes inter-node communication impossible
> ---
>
> Key: SOLR-9188
> URL: https://issues.apache.org/jira/browse/SOLR-9188
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 6.0
>Reporter: Piotr Tempes
>Assignee: Noble Paul
>Priority: Critical
>  Labels: BasicAuth, Security
> Fix For: 6.2.1, 6.3, master (7.0)
>
> Attachments: SOLR-9188.patch, solr9188-errorlog.txt
>
>
> When I setup my solr cloud without blockUnknown property it works as 
> expected. When I want to block non authenticated requests I get following 
> stacktrace during startup (see attached file).






[jira] [Reopened] (SOLR-9188) BlockUnknown property makes inter-node communication impossible

2016-10-25 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul reopened SOLR-9188:
--

It's still broken.

> BlockUnknown property makes inter-node communication impossible
> ---
>
> Key: SOLR-9188
> URL: https://issues.apache.org/jira/browse/SOLR-9188
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 6.0
>Reporter: Piotr Tempes
>Assignee: Noble Paul
>Priority: Critical
>  Labels: BasicAuth, Security
> Fix For: 6.2.1, 6.3, master (7.0)
>
> Attachments: solr9188-errorlog.txt
>
>
> When I setup my solr cloud without blockUnknown property it works as 
> expected. When I want to block non authenticated requests I get following 
> stacktrace during startup (see attached file).






[jira] [Commented] (SOLR-9188) BlockUnknown property makes inter-node communication impossible

2016-10-25 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606224#comment-15606224
 ] 

Noble Paul commented on SOLR-9188:
--

I figured it out. In our JUnit tests only {{getPathInfo()}} works and in a 
normal webapp only {{getServletPath()}} works. So the fix is to do both checks.
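The "do both checks" fix can be sketched like this. The helper name and the fallback order are assumptions for illustration; the actual patch operates on the servlet request object rather than raw strings:

```java
// Sketch of the dual check described above: JUnit tests populate only
// getPathInfo(), the normal webapp populates only getServletPath(),
// so take whichever value is present.
public class PathCheckSketch {
    static String effectivePath(String servletPath, String pathInfo) {
        if (servletPath != null && !servletPath.isEmpty()) {
            return servletPath;
        }
        return pathInfo == null ? "" : pathInfo;
    }

    public static void main(String[] args) {
        System.out.println(effectivePath("/select", null)); // webapp case
        System.out.println(effectivePath(null, "/select")); // JUnit case
    }
}
```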

> BlockUnknown property makes inter-node communication impossible
> ---
>
> Key: SOLR-9188
> URL: https://issues.apache.org/jira/browse/SOLR-9188
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 6.0
>Reporter: Piotr Tempes
>Assignee: Noble Paul
>Priority: Critical
>  Labels: BasicAuth, Security
> Fix For: 6.2.1, 6.3, master (7.0)
>
> Attachments: solr9188-errorlog.txt
>
>
> When I setup my solr cloud without blockUnknown property it works as 
> expected. When I want to block non authenticated requests I get following 
> stacktrace during startup (see attached file).






[jira] [Commented] (SOLR-9691) Streaming expressions need to be able to use a metric computed by the facet stream as a field in other streams.

2016-10-25 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606175#comment-15606175
 ] 

Timothy Potter commented on SOLR-9691:
--

oh snap! that worked beautifully ~ thanks guys! should we keep this open as I 
still think it should just work with {{sum(count(*))}} without having to use 
select, but maybe it's sufficient to just document this?

> Streaming expressions need to be able to use a metric computed by the facet 
> stream as a field in other streams.
> ---
>
> Key: SOLR-9691
> URL: https://issues.apache.org/jira/browse/SOLR-9691
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 6.2.1
>Reporter: Timothy Potter
>
> Using 6.2.1, I want to use a facet stream to do an intermediate count and 
> then sum up the counts in a rollup stream, i.e. something like:
> {code}
> rollup(
>   sort(
> hashJoin(
>   search(products,
>   q="*:*",
>   fl="product_id,model_num",
>   sort="product_id asc",
>   qt="/export",
>   partitionKeys="product_id"),
>   hashed=facet(transactions, q="*:*", buckets="product_id", 
> bucketSizeLimit=100, bucketSorts="product_id asc", count(*)),
>   on="product_id"
> ), 
> by="model_num asc"
>   ), 
>   over="model_num",
>   sum(count(*))
> )
> {code}
> Basically, I want to get a count of each product_id from the transactions 
> collection (# of transactions per product) and then join that with the 
> products table to generate a projection containing:
> {code}
> {
>   "result-set": {
> "docs": [
>   {
> "product_id": "1234",
> "count(*)": 4,
> "model_num": "X"
>   },
>   {
> "product_id": "5678",
> "count(*)": 5,
> "model_num": "Y"
>   },
>   ...
> ]
>   }
> }
> {code}
> This works, but the outer rollup doesn't recognize the count(*) as a field. I 
> get this error:
> {code}
> {
>   "result-set": {
> "docs": [
>   {
> "EXCEPTION": "Invalid expression sum(count(*)) - expected 
> sum(columnName)",
> "EOF": true
>   }
> ]
>   }
> }
> {code}






[jira] [Commented] (SOLR-9691) Streaming expressions need to be able to use a metric computed by the facet stream as a field in other streams.

2016-10-25 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606069#comment-15606069
 ] 

Joel Bernstein commented on SOLR-9691:
--

Just wrap the facet expression in a select function.
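Joel's suggestion can be sketched as a streaming expression. The field names and the {{txn_count}} alias are illustrative, not from the original issue; the idea is that {{select()}} renames {{count(*)}} to a plain field name that the outer {{rollup()}}'s {{sum()}} can parse:

{code}
rollup(
  select(
    facet(transactions, q="*:*", buckets="product_id",
          bucketSizeLimit=100, bucketSorts="product_id asc", count(*)),
    product_id,
    count(*) as txn_count
  ),
  over="product_id",
  sum(txn_count)
)
{code}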

> Streaming expressions need to be able to use a metric computed by the facet 
> stream as a field in other streams.
> ---
>
> Key: SOLR-9691
> URL: https://issues.apache.org/jira/browse/SOLR-9691
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 6.2.1
>Reporter: Timothy Potter
>
> Using 6.2.1, I want to use a facet stream to do an intermediate count and 
> then sum up the counts in a rollup stream, i.e. something like:
> {code}
> rollup(
>   sort(
> hashJoin(
>   search(products,
>   q="*:*",
>   fl="product_id,model_num",
>   sort="product_id asc",
>   qt="/export",
>   partitionKeys="product_id"),
>   hashed=facet(transactions, q="*:*", buckets="product_id", 
> bucketSizeLimit=100, bucketSorts="product_id asc", count(*)),
>   on="product_id"
> ), 
> by="model_num asc"
>   ), 
>   over="model_num",
>   sum(count(*))
> )
> {code}
> Basically, I want to get a count of each product_id from the transactions 
> collection (# of transactions per product) and then join that with the 
> products table to generate a projection containing:
> {code}
> {
>   "result-set": {
> "docs": [
>   {
> "product_id": "1234",
> "count(*)": 4,
> "model_num": "X"
>   },
>   {
> "product_id": "5678",
> "count(*)": 5,
> "model_num": "Y"
>   },
>   ...
> ]
>   }
> }
> {code}
> This works, but the outer rollup doesn't recognize the count(*) as a field. I 
> get this error:
> {code}
> {
>   "result-set": {
> "docs": [
>   {
> "EXCEPTION": "Invalid expression sum(count(*)) - expected 
> sum(columnName)",
> "EOF": true
>   }
> ]
>   }
> }
> {code}






[jira] [Commented] (SOLR-9691) Streaming expressions need to be able to use a metric computed by the facet stream as a field in other streams.

2016-10-25 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606062#comment-15606062
 ] 

Joel Bernstein commented on SOLR-9691:
--


As a work around you should be able to use the select() function to change the 
name of the count(*) field.

> Streaming expressions need to be able to use a metric computed by the facet 
> stream as a field in other streams.
> ---
>
> Key: SOLR-9691
> URL: https://issues.apache.org/jira/browse/SOLR-9691
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 6.2.1
>Reporter: Timothy Potter
>
> Using 6.2.1, I want to use a facet stream to do an intermediate count and 
> then sum up the counts in a rollup stream, i.e. something like:
> {code}
> rollup(
>   sort(
> hashJoin(
>   search(products,
>   q="*:*",
>   fl="product_id,model_num",
>   sort="product_id asc",
>   qt="/export",
>   partitionKeys="product_id"),
>   hashed=facet(transactions, q="*:*", buckets="product_id", 
> bucketSizeLimit=100, bucketSorts="product_id asc", count(*)),
>   on="product_id"
> ), 
> by="model_num asc"
>   ), 
>   over="model_num",
>   sum(count(*))
> )
> {code}
> Basically, I want to get a count of each product_id from the transactions 
> collection (# of transactions per product) and then join that with the 
> products table to generate a projection containing:
> {code}
> {
>   "result-set": {
> "docs": [
>   {
> "product_id": "1234",
> "count(*)": 4,
> "model_num": "X"
>   },
>   {
> "product_id": "5678",
> "count(*)": 5,
> "model_num": "Y"
>   },
>   ...
> ]
>   }
> }
> {code}
> This works, but the outer rollup doesn't recognize the count(*) as a field. I 
> get this error:
> {code}
> {
>   "result-set": {
> "docs": [
>   {
> "EXCEPTION": "Invalid expression sum(count(*)) - expected 
> sum(columnName)",
> "EOF": true
>   }
> ]
>   }
> }
> {code}






[jira] [Commented] (SOLR-9686) Adding the book "Relevant Search" to the book resource list. https://www.manning.com/books/relevant-search

2016-10-25 Thread Christopher Kaufmann (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605927#comment-15605927
 ] 

Christopher Kaufmann commented on SOLR-9686:


Dear Mr. Rafalovitch,

Here is the permanent discount code for the Solr book's list: *relsepc* (40%
off Relevant Search, all formats)

Cheers

Christopher Kaufmann
Manning Publications - Marketing
c...@manning.com




> Adding the book "Relevant Search" to the book resource list. 
> https://www.manning.com/books/relevant-search
> --
>
> Key: SOLR-9686
> URL: https://issues.apache.org/jira/browse/SOLR-9686
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
> Environment: Book resource list
>Reporter: Christopher Kaufmann
>Priority: Minor
>  Labels: documentation
>
> Relevant Search demystifies relevance work and shows you that a search engine 
> is a programmable relevance framework. You'll learn how to apply 
> Elasticsearch or Solr to your business's unique ranking problems. The book 
> demonstrates how to program relevance and how to incorporate secondary data 
> sources, taxonomies, text analytics, and personalization. By the end, you’ll 
> be able to achieve a virtuous cycle of provable, measurable relevance 
> improvements over a search product’s lifetime.
> *Here is the link to the book on our website: 
> https://www.manning.com/books/relevant-search






[jira] [Comment Edited] (SOLR-9691) Streaming expressions need to be able to use a metric computed by the facet stream as a field in other streams.

2016-10-25 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605874#comment-15605874
 ] 

Timothy Potter edited comment on SOLR-9691 at 10/25/16 5:08 PM:


yes, the Tuple has {{count(*)}} as a key in the map ... 
{code}
update: columnName="count(*)", tuple: 
org.apache.solr.client.solrj.io.Tuple@4f5219db, {product_id=1234, count(*)=4, 
model_num=X}
{code}

so adding the double quotes around the {{count(*)}} arg passed to {{sum()}} 
doesn't work because the double-quotes

sounds more like this is a bug vs. an improvement ...


was (Author: thelabdude):
yes, the Tuple has count(*) as a key in the map ... 

update: columnName="count(*)", tuple: 
org.apache.solr.client.solrj.io.Tuple@4f5219db, {product_id=1234, count(*)=4, 
model_num=X}

so adding the double quotes around the count(*) arg passed to sum() doesn't 
work because the double-quotes

sounds more like this is a bug vs. an improvement ...

> Streaming expressions need to be able to use a metric computed by the facet 
> stream as a field in other streams.
> ---
>
> Key: SOLR-9691
> URL: https://issues.apache.org/jira/browse/SOLR-9691
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 6.2.1
>Reporter: Timothy Potter
>
> Using 6.2.1, I want to use a facet stream to do an intermediate count and 
> then sum up the counts in a rollup stream, i.e. something like:
> {code}
> rollup(
>   sort(
> hashJoin(
>   search(products,
>   q="*:*",
>   fl="product_id,model_num",
>   sort="product_id asc",
>   qt="/export",
>   partitionKeys="product_id"),
>   hashed=facet(transactions, q="*:*", buckets="product_id", 
> bucketSizeLimit=100, bucketSorts="product_id asc", count(*)),
>   on="product_id"
> ), 
> by="model_num asc"
>   ), 
>   over="model_num",
>   sum(count(*))
> )
> {code}
> Basically, I want to get a count of each product_id from the transactions 
> collection (# of transactions per product) and then join that with the 
> products table to generate a projection containing:
> {code}
> {
>   "result-set": {
> "docs": [
>   {
> "product_id": "1234",
> "count(*)": 4,
> "model_num": "X"
>   },
>   {
> "product_id": "5678",
> "count(*)": 5,
> "model_num": "Y"
>   },
>   ...
> ]
>   }
> }
> {code}
> This works, but the outer rollup doesn't recognize the count(*) as a field. I 
> get this error:
> {code}
> {
>   "result-set": {
> "docs": [
>   {
> "EXCEPTION": "Invalid expression sum(count(*)) - expected 
> sum(columnName)",
> "EOF": true
>   }
> ]
>   }
> }
> {code}






[jira] [Commented] (SOLR-9691) Streaming expressions need to be able to use a metric computed by the facet stream as a field in other streams.

2016-10-25 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605874#comment-15605874
 ] 

Timothy Potter commented on SOLR-9691:
--

yes, the Tuple has count(*) as a key in the map ... 

update: columnName="count(*)", tuple: 
org.apache.solr.client.solrj.io.Tuple@4f5219db, {product_id=1234, count(*)=4, 
model_num=X}

so adding the double quotes around the count(*) arg passed to sum() doesn't 
work because the double-quotes

sounds more like this is a bug vs. an improvement ...

> Streaming expressions need to be able to use a metric computed by the facet 
> stream as a field in other streams.
> ---
>
> Key: SOLR-9691
> URL: https://issues.apache.org/jira/browse/SOLR-9691
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 6.2.1
>Reporter: Timothy Potter
>
> Using 6.2.1, I want to use a facet stream to do an intermediate count and 
> then sum up the counts in a rollup stream, i.e. something like:
> {code}
> rollup(
>   sort(
> hashJoin(
>   search(products,
>   q="*:*",
>   fl="product_id,model_num",
>   sort="product_id asc",
>   qt="/export",
>   partitionKeys="product_id"),
>   hashed=facet(transactions, q="*:*", buckets="product_id", 
> bucketSizeLimit=100, bucketSorts="product_id asc", count(*)),
>   on="product_id"
> ), 
> by="model_num asc"
>   ), 
>   over="model_num",
>   sum(count(*))
> )
> {code}
> Basically, I want to get a count of each product_id from the transactions 
> collection (# of transactions per product) and then join that with the 
> products table to generate a projection containing:
> {code}
> {
>   "result-set": {
> "docs": [
>   {
> "product_id": "1234",
> "count(*)": 4,
> "model_num": "X"
>   },
>   {
> "product_id": "5678",
> "count(*)": 5,
> "model_num": "Y"
>   },
>   ...
> ]
>   }
> }
> {code}
> This works, but the outer rollup doesn't recognize the count(*) as a field. I 
> get this error:
> {code}
> {
>   "result-set": {
> "docs": [
>   {
> "EXCEPTION": "Invalid expression sum(count(*)) - expected 
> sum(columnName)",
> "EOF": true
>   }
> ]
>   }
> }
> {code}






[jira] [Resolved] (SOLR-9536) OldBackupDirectory timestamp init bug causes NPEs from SnapShooter?

2016-10-25 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-9536.
---
   Resolution: Fixed
Fix Version/s: master (7.0)
   6.3

Thanks guys!

> OldBackupDirectory timestamp init bug causes NPEs from SnapShooter?
> ---
>
> Key: SOLR-9536
> URL: https://issues.apache.org/jira/browse/SOLR-9536
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Mark Miller
> Fix For: 6.3, master (7.0)
>
>
> On IRC, a 6.2.0 user reported getting an NPE from 
> SnapShooter.deleteOldBackups L244, with the only other frame of the 
> stacktrace being {{lambda$createSnapAsync$1}} L196 (it was a screenshot, not 
> text easily cut/paste here)
> The problematic L244 is...
> {code}
>   if (obd.getTimestamp().isPresent()) {
> {code}
> ...and I believe the root of the issue is that while {{getTimestamp()}} is 
> declared to return an {{Optional<Date>}}, there is no guarantee that the 
> {{Optional<Date>}} instance is itself non-null...
> {code}
>   private Optional<Date> timestamp;
> 
>   public OldBackupDirectory(URI basePath, String dirName) {
>     this.dirName = Preconditions.checkNotNull(dirName);
>     this.basePath = Preconditions.checkNotNull(basePath);
>     Matcher m = dirNamePattern.matcher(dirName);
>     if (m.find()) {
>       try {
>         this.timestamp = Optional.of(new SimpleDateFormat(SnapShooter.DATE_FMT, Locale.ROOT).parse(m.group(1)));
>       } catch (ParseException e) {
>         this.timestamp = Optional.empty();
>       }
>     }
>   }
> {code}
> Although I'm not 100% certain, I believe the way the user was triggering 
> this bug was by configuring classic replication with something 
> like {{commit}} -- so that usage may be 
> necessary to trigger the exception?
> Alternatively: perhaps this exception gets logged the *first* time anyone 
> tries to use any code that involves SnapShooter -- and after that a timestamp 
> file *is* created and the problem never manifests itself again?
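The fix that was committed initializes the field so it can never be null. A minimal self-contained sketch of the pattern (the class name and the "snapshot." directory-name format are assumptions for illustration, not Solr's exact code):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.Optional;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Simplified stand-in for OldBackupDirectory, showing the fix: initialize
// the Optional at declaration so that when the regex does not match (and the
// constructor body never assigns it), getTimestamp() returns an empty
// Optional instead of null, and isPresent() cannot throw an NPE.
class BackupDir {
    private static final Pattern NAME = Pattern.compile("snapshot\\.(.*)");
    private Optional<Date> timestamp = Optional.empty(); // the fix: never null

    BackupDir(String dirName) {
        Matcher m = NAME.matcher(dirName);
        if (m.find()) {
            try {
                timestamp = Optional.of(new SimpleDateFormat(
                        "yyyyMMddHHmmssSSS", Locale.ROOT).parse(m.group(1)));
            } catch (ParseException e) {
                timestamp = Optional.empty();
            }
        }
    }

    Optional<Date> getTimestamp() {
        return timestamp;
    }
}
```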






[jira] [Commented] (SOLR-9536) OldBackupDirectory timestamp init bug causes NPEs from SnapShooter?

2016-10-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605810#comment-15605810
 ] 

ASF subversion and git services commented on SOLR-9536:
---

Commit c15c8af66db5c2c84cdf95520a61f78d512c5911 in lucene-solr's branch 
refs/heads/master from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c15c8af ]

SOLR-9536: Add hossman to CHANGES.


> OldBackupDirectory timestamp init bug causes NPEs from SnapShooter?
> ---
>
> Key: SOLR-9536
> URL: https://issues.apache.org/jira/browse/SOLR-9536
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Mark Miller
>






[jira] [Commented] (SOLR-9536) OldBackupDirectory timestamp init bug causes NPEs from SnapShooter?

2016-10-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605812#comment-15605812
 ] 

ASF subversion and git services commented on SOLR-9536:
---

Commit c1d1e6098a6c4dcc2d6f031b1299545f79972794 in lucene-solr's branch 
refs/heads/branch_6x from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c1d1e60 ]

SOLR-9536: Add hossman to CHANGES.


> OldBackupDirectory timestamp init bug causes NPEs from SnapShooter?
> ---
>
> Key: SOLR-9536
> URL: https://issues.apache.org/jira/browse/SOLR-9536
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Mark Miller
>






[jira] [Assigned] (SOLR-9536) OldBackupDirectory timestamp init bug causes NPEs from SnapShooter?

2016-10-25 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reassigned SOLR-9536:
-

Assignee: Mark Miller  (was: Varun Thacker)

> OldBackupDirectory timestamp init bug causes NPEs from SnapShooter?
> ---
>
> Key: SOLR-9536
> URL: https://issues.apache.org/jira/browse/SOLR-9536
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Mark Miller
>






[jira] [Commented] (SOLR-9536) OldBackupDirectory timestamp init bug causes NPEs from SnapShooter?

2016-10-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605803#comment-15605803
 ] 

ASF subversion and git services commented on SOLR-9536:
---

Commit e152575f5ea5ea798ca989c852afb763189dee60 in lucene-solr's branch 
refs/heads/master from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e152575 ]

SOLR-9536: OldBackupDirectory timestamp field needs to be initialized to avoid 
NPE.


> OldBackupDirectory timestamp init bug causes NPEs from SnapShooter?
> ---
>
> Key: SOLR-9536
> URL: https://issues.apache.org/jira/browse/SOLR-9536
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Varun Thacker
>






[jira] [Commented] (SOLR-9536) OldBackupDirectory timestamp init bug causes NPEs from SnapShooter?

2016-10-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605805#comment-15605805
 ] 

ASF subversion and git services commented on SOLR-9536:
---

Commit 22117ddde47bc5b646ec1f0e732b51479a8e8bab in lucene-solr's branch 
refs/heads/branch_6x from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=22117dd ]

SOLR-9536: OldBackupDirectory timestamp field needs to be initialized to avoid 
NPE.

This closes #81.


> OldBackupDirectory timestamp init bug causes NPEs from SnapShooter?
> ---
>
> Key: SOLR-9536
> URL: https://issues.apache.org/jira/browse/SOLR-9536
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Varun Thacker
>






[jira] [Commented] (SOLR-9687) Values not assigned to all valid Interval Facet intervals in some cases

2016-10-25 Thread Andy Chillrud (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605783#comment-15605783
 ] 

Andy Chillrud commented on SOLR-9687:
-

Thanks Tomás. You guys are quick!

> Values not assigned to all valid Interval Facet intervals in some cases
> ---
>
> Key: SOLR-9687
> URL: https://issues.apache.org/jira/browse/SOLR-9687
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: 5.3.1
>Reporter: Andy Chillrud
>Assignee: Tomás Fernández Löbbe
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9687.patch
>
>
> Using the interval facet definitions:
> * \{!key=Positive}(0,*]
> * \{!key=Zero}\[0,0]
> * \{!key=Negative}(*,0)
> A document with the value "0" in the numeric field the intervals are being 
> applied to is not counted in the Zero interval. If I change the order of the 
> definitions to Negative, Zero, Positive, the "0" value is correctly counted 
> in the Zero interval.
> Tracing into the 5.3.1 code, the problem is in the 
> org.apache.solr.request.IntervalFacets class. When the getSortedIntervals() 
> method sorts the interval definitions for a field by their starting value, it 
> doesn't take into account the startOpen property. When two intervals have 
> equal start values, it needs to sort intervals where startOpen == false before 
> intervals where startOpen == true.
> In the accumIntervalWithValue() method it checks which intervals each 
> document value should be considered a match for. It iterates through the 
> sorted intervals and stops checking subsequent intervals when 
> LOWER_THAN_START result is returned. If the Positive interval is sorted 
> before the Zero interval it never checks a zero value against the Zero 
> interval.
> I compared the 5.3.1 version of the IntervalFacets class against the 6.2.1 
> code, and it looks like the same issue will occur in 6.2.1.
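The sort fix described above is easy to sketch. This is a minimal, self-contained model (the Interval class here is a hypothetical simplification, not Solr's internal class): sort primarily by start value, and break ties so that closed starts (startOpen == false) come before open starts, ensuring [0,0] is checked before (0,*]:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical simplified interval, not Solr's internal class.
class Interval {
    final String key;
    final double start;       // (*,0) is modeled with Double.NEGATIVE_INFINITY
    final boolean startOpen;

    Interval(String key, double start, boolean startOpen) {
        this.key = key;
        this.start = start;
        this.startOpen = startOpen;
    }
}

class IntervalSort {
    static List<Interval> sorted(List<Interval> in) {
        List<Interval> out = new ArrayList<>(in);
        out.sort((a, b) -> {
            int c = Double.compare(a.start, b.start);
            if (c != 0) return c;
            // tie-break: closed start (false) sorts before open start (true)
            return Boolean.compare(a.startOpen, b.startOpen);
        });
        return out;
    }
}
```

With this tie-break, the Zero interval [0,0] always precedes Positive (0,*], so a zero value is checked against it before iteration stops.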






[jira] [Commented] (SOLR-9441) Solr collection backup on HDFS can only be manipulated by the Solr process owner

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605745#comment-15605745
 ] 

ASF GitHub Bot commented on SOLR-9441:
--

Github user asfgit closed the pull request at:

https://github.com/apache/lucene-solr/pull/71


> Solr collection backup on HDFS can only be manipulated by the Solr process 
> owner
> 
>
> Key: SOLR-9441
> URL: https://issues.apache.org/jira/browse/SOLR-9441
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: trunk
>Reporter: Hrishikesh Gadre
>Assignee: Mark Miller
> Fix For: 6.3, master (7.0)
>
>
> When we back up a Solr collection using the HDFS backup repository, the backup 
> folder (and its files) is created with permissions 755 (i.e. only the solr 
> process owner can delete/move the backup folder). This is inconvenient from the 
> user's perspective, since the backup is essentially a full copy of the Solr 
> collection and manipulating it doesn't affect the Solr collection state 
> in any way.
> We should provide an option by which other users can manipulate 
> the backup folders.
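The general shape of the fix can be sketched with plain java.nio on a POSIX filesystem (illustrative only, not Solr's actual HDFS repository code; the class name and perms parameter are invented for the example): create the backup directory, then apply an explicit, configurable permission set rather than inheriting the restrictive process default.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

// Illustrative sketch using plain java.nio -- NOT Solr's HDFS backup code.
// The fix's shape: apply an explicit permission set to the backup directory
// after creating it, so users other than the process owner can manage it.
class PermissiveBackupDir {
    static Path create(String dir, String perms) throws IOException {
        Path path = Files.createDirectories(Paths.get(dir));
        // chmod-style call: unlike the creation mode, this is not subject
        // to the process umask
        Set<PosixFilePermission> p = PosixFilePermissions.fromString(perms);
        Files.setPosixFilePermissions(path, p);
        return path;
    }
}
```

In the HDFS case the analogous knob is the permission/umask applied when the repository creates the backup folder, which is what the linked pull request makes configurable.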






[jira] [Resolved] (SOLR-9441) Solr collection backup on HDFS can only be manipulated by the Solr process owner

2016-10-25 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-9441.
---
   Resolution: Fixed
Fix Version/s: master (7.0)
   6.3

Thanks Hrishikesh!

> Solr collection backup on HDFS can only be manipulated by the Solr process 
> owner
> 
>
> Key: SOLR-9441
> URL: https://issues.apache.org/jira/browse/SOLR-9441
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: trunk
>Reporter: Hrishikesh Gadre
>Assignee: Mark Miller
> Fix For: 6.3, master (7.0)
>
>






[GitHub] lucene-solr pull request #71: [SOLR-9441] Support configuring umask for HDFS...

2016-10-25 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/lucene-solr/pull/71


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[jira] [Commented] (SOLR-9441) Solr collection backup on HDFS can only be manipulated by the Solr process owner

2016-10-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605744#comment-15605744
 ] 

ASF subversion and git services commented on SOLR-9441:
---

Commit d961253c7c031d1a9b8227cc4949dc7211a3f98f in lucene-solr's branch 
refs/heads/branch_6x from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d961253 ]

SOLR-9441: Solr collection backup on HDFS can only be manipulated by the Solr 
process owner.

This closes #71.


> Solr collection backup on HDFS can only be manipulated by the Solr process 
> owner
> 
>
> Key: SOLR-9441
> URL: https://issues.apache.org/jira/browse/SOLR-9441
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: trunk
>Reporter: Hrishikesh Gadre
>Assignee: Mark Miller
>






[jira] [Commented] (SOLR-9441) Solr collection backup on HDFS can only be manipulated by the Solr process owner

2016-10-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605741#comment-15605741
 ] 

ASF subversion and git services commented on SOLR-9441:
---

Commit 27ba8e2e82df6b901bbc5adaa3490d5f002fd76f in lucene-solr's branch 
refs/heads/master from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=27ba8e2 ]

SOLR-9441: Solr collection backup on HDFS can only be manipulated by the Solr 
process owner.

This closes #71.


> Solr collection backup on HDFS can only be manipulated by the Solr process 
> owner
> 
>
> Key: SOLR-9441
> URL: https://issues.apache.org/jira/browse/SOLR-9441
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: trunk
>Reporter: Hrishikesh Gadre
>Assignee: Mark Miller
>






[jira] [Commented] (SOLR-9691) Streaming expressions need to be able to use a metric computed by the facet stream as a field in other streams.

2016-10-25 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605723#comment-15605723
 ] 

Joel Bernstein commented on SOLR-9691:
--

Once the metric is computed, it should be usable like any other field; it's just a 
string key pointing to a numeric value.

Can you inspect the Tuples in the SumMetric.update() function?
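For context, a stripped-down model of what a sum metric does with each tuple (hypothetical class, not Solr's SumMetric): the column is just a string key, so printing the tuple's key set inside update() shows whether "count(*)" actually arrives as a field.

```java
import java.util.HashMap;
import java.util.Map;

// Stripped-down stand-in for a sum metric, NOT Solr's SumMetric. The metric
// looks its column up in each tuple by string key; inspecting tuple.keySet()
// here is the debugging step suggested above.
class SimpleSumMetric {
    private final String column;
    private double sum = 0.0;

    SimpleSumMetric(String column) {
        this.column = column;
    }

    void update(Map<String, Object> tuple) {
        Object v = tuple.get(column);  // null if the key never arrived
        if (v instanceof Number) {
            sum += ((Number) v).doubleValue();
        }
    }

    double value() {
        return sum;
    }
}
```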

> Streaming expressions need to be able to use a metric computed by the facet 
> stream as a field in other streams.
> ---
>
> Key: SOLR-9691
> URL: https://issues.apache.org/jira/browse/SOLR-9691
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 6.2.1
>Reporter: Timothy Potter
>






[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3626 - Unstable!

2016-10-25 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3626/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

7 tests failed.
FAILED:  org.apache.solr.cloud.RecoveryZkTest.test

Error Message:
There are still nodes recoverying - waited for 120 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 120 
seconds
at 
__randomizedtesting.SeedInfo.seed([6E3F85D9190C0879:E66BBA03B7F06581]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:184)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:862)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForThingsToLevelOut(AbstractFullDistribZkTestBase.java:1418)
at org.apache.solr.cloud.RecoveryZkTest.test(RecoveryZkTest.java:105)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+140) - Build # 18138 - Still Unstable!

2016-10-25 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18138/
Java: 32bit/jdk-9-ea+140 -server -XX:+UseConcMarkSweepGC

7 tests failed.
FAILED:  org.apache.solr.cloud.RecoveryZkTest.test

Error Message:
There are still nodes recoverying - waited for 120 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 120 
seconds
at 
__randomizedtesting.SeedInfo.seed([E68B03114A64F9E7:6EDF3CCBE498941F]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:184)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:862)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForThingsToLevelOut(AbstractFullDistribZkTestBase.java:1418)
at org.apache.solr.cloud.RecoveryZkTest.test(RecoveryZkTest.java:105)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:535)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[JENKINS] Lucene-Solr-Tests-master - Build # 1451 - Unstable

2016-10-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1451/

6 tests failed.
FAILED:  org.apache.solr.search.facet.TestJsonFacets.testStatsDistrib {p0=SMART}

Error Message:
mismatch: 'A'!='B' @ facets/cat0/buckets/[0]/val

Stack Trace:
java.lang.RuntimeException: mismatch: 'A'!='B' @ facets/cat0/buckets/[0]/val
at 
__randomizedtesting.SeedInfo.seed([D4CCA2CD7F586EC4:80C0BBB12D9CA0E]:0)
at org.apache.solr.SolrTestCaseHS.matchJSON(SolrTestCaseHS.java:161)
at org.apache.solr.SolrTestCaseHS.assertJQ(SolrTestCaseHS.java:143)
at 
org.apache.solr.SolrTestCaseHS$Client$Tester.assertJQ(SolrTestCaseHS.java:255)
at org.apache.solr.SolrTestCaseHS$Client.testJQ(SolrTestCaseHS.java:296)
at 
org.apache.solr.search.facet.TestJsonFacets.doStatsTemplated(TestJsonFacets.java:1152)
at 
org.apache.solr.search.facet.TestJsonFacets.doStats(TestJsonFacets.java:361)
at 
org.apache.solr.search.facet.TestJsonFacets.testStatsDistrib(TestJsonFacets.java:322)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Assigned] (SOLR-9442) Add json.nl=arrnvp (array of NamedValuePair) style in JSONResponseWriter

2016-10-25 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke reassigned SOLR-9442:
-

Assignee: Christine Poerschke

> Add json.nl=arrnvp (array of NamedValuePair) style in JSONResponseWriter
> 
>
> Key: SOLR-9442
> URL: https://issues.apache.org/jira/browse/SOLR-9442
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Response Writers
>Reporter: Jonny Marks
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-9442.patch, SOLR-9442.patch
>
>
> The JSONResponseWriter class currently supports several styles of NamedList 
> output format, documented on the wiki at http://wiki.apache.org/solr/SolJSON 
> and in the code at 
> https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/response/JSONResponseWriter.java#L71-L76.
> For example the 'arrmap' style:
> {code}NamedList("a"=1,"b"=2,null=3) => [{"a":1},{"b":2},3]
> NamedList("a"=1,"bar"="foo",null=3.4f) => [{"a":1},{"bar":"foo"},3.4]{code}
> This patch creates a new style ‘arrnvp’ which is an array of named value 
> pairs. For example:
> {code}NamedList("a"=1,"b"=2,null=3) => 
> [{"name":"a","int":1},{"name":"b","int":2},{"int":3}]
> {code}NamedList("a"=1,"bar"="foo",null=3.4f) => 
> [{"name":"a","int":1},{"name":"bar","str":"foo"},{"float":3.4}]{code}
> This style maintains the type information of the values, similar to the xml 
> format:
> {code:xml}
>   <int name="a">1</int>
>   <str name="bar">foo</str>
>   <float>3.4</float>
> {code}
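For illustration only, the proposed arrnvp mapping can be sketched as a toy serializer (hypothetical helper names, not the actual JSONResponseWriter code):

```java
import java.util.*;

// Toy sketch of the proposed "arrnvp" NamedList JSON style: each (name, value)
// pair becomes an object carrying the pair name plus a type-tagged value.
// Illustrative only -- not Solr's JSONResponseWriter implementation.
public class ArrNvpSketch {
    static String typeTag(Object v) {
        if (v instanceof Integer) return "int";
        if (v instanceof Float)   return "float";
        return "str";
    }

    // entries: alternating name (String or null) / value pairs
    static String toArrNvp(Object... entries) {
        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i < entries.length; i += 2) {
            Object name = entries[i], value = entries[i + 1];
            if (i > 0) sb.append(",");
            sb.append("{");
            if (name != null) sb.append("\"name\":\"").append(name).append("\",");
            sb.append("\"").append(typeTag(value)).append("\":");
            if (value instanceof String) sb.append("\"").append(value).append("\"");
            else sb.append(value);
            sb.append("}");
        }
        return sb.append("]").toString();
    }

    public static void main(String[] args) {
        String a = toArrNvp("a", 1, "b", 2, null, 3);
        if (!a.equals("[{\"name\":\"a\",\"int\":1},{\"name\":\"b\",\"int\":2},{\"int\":3}]"))
            throw new AssertionError(a);
        String b = toArrNvp("a", 1, "bar", "foo", null, 3.4f);
        if (!b.equals("[{\"name\":\"a\",\"int\":1},{\"name\":\"bar\",\"str\":\"foo\"},{\"float\":3.4}]"))
            throw new AssertionError(b);
    }
}
```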



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9691) Streaming expressions need to be able to use a metric computed by the facet stream as a field in other streams.

2016-10-25 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605650#comment-15605650
 ] 

Dennis Gove commented on SOLR-9691:
---

Seems to throw on 
https://github.com/apache/lucene-solr/blob/master/solr/solrj/src/java/org/apache/solr/client/solrj/io/stream/metrics/SumMetric.java#L69

Is it at all possible for the value count(\*) to be null? I have to suspect 
that the extracted columnName within SumMetric isn't the expected "count(\*)" 
but is something else.
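A minimal illustration of that suspected failure mode (toy code, not SumMetric itself): if the extracted column name does not match the key actually present in the tuple, the map lookup returns null and unboxing throws.

```java
import java.util.*;

// Minimal illustration (not Solr code) of why a metric like sum(count(*))
// can NPE: a mismatched column name makes the map lookup return null,
// and calling doubleValue() on that null throws NullPointerException.
public class SumUpdateSketch {
    static double sum = 0;

    static void update(Map<String, Object> tuple, String columnName) {
        Number value = (Number) tuple.get(columnName); // null if key mismatched
        sum += value.doubleValue();                    // NPE here when null
    }

    public static void main(String[] args) {
        Map<String, Object> tuple = new HashMap<>();
        tuple.put("count(*)", 4L);

        update(tuple, "count(*)");      // ok: key matches the tuple
        boolean threw = false;
        try {
            update(tuple, "count");     // mismatched column name -> NPE
        } catch (NullPointerException expected) {
            threw = true;
        }
        if (!threw || sum != 4.0) throw new AssertionError();
    }
}
```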

> Streaming expressions need to be able to use a metric computed by the facet 
> stream as a field in other streams.
> ---
>
> Key: SOLR-9691
> URL: https://issues.apache.org/jira/browse/SOLR-9691
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 6.2.1
>Reporter: Timothy Potter
>
> Using 6.2.1, I want to use a facet stream to do an intermediate count and 
> then sum up the counts in a rollup stream, i.e. something like:
> {code}
> rollup(
>   sort(
> hashJoin(
>   search(products,
>   q="*:*",
>   fl="product_id,model_num",
>   sort="product_id asc",
>   qt="/export",
>   partitionKeys="product_id"),
>   hashed=facet(transactions, q="*:*", buckets="product_id", 
> bucketSizeLimit=100, bucketSorts="product_id asc", count(*)),
>   on="product_id"
> ), 
> by="model_num asc"
>   ), 
>   over="model_num",
>   sum(count(*))
> )
> {code}
> Basically, I want to get a count of each product_id from the transactions 
> collection (# of transactions per product) and then join that with the 
> products table to generate a projection containing:
> {code}
> {
>   "result-set": {
> "docs": [
>   {
> "product_id": "1234",
> "count(*)": 4,
> "model_num": "X"
>   },
>   {
> "product_id": "5678",
> "count(*)": 5,
> "model_num": "Y"
>   },
>   ...
> ]
>   }
> }
> {code}
> This works, but the outer rollup doesn't recognize the count(*) as a field. I 
> get this error:
> {code}
> {
>   "result-set": {
> "docs": [
>   {
> "EXCEPTION": "Invalid expression sum(count(*)) - expected 
> sum(columnName)",
> "EOF": true
>   }
> ]
>   }
> }
> {code}






[jira] [Commented] (LUCENE-7521) Simplify PackedInts

2016-10-25 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605644#comment-15605644
 ] 

Yonik Seeley commented on LUCENE-7521:
--

OK, so to avoid slowdowns (or more memory usage) in the FieldCache, some 
options include:
- Make PackedInts extensible and move the unwanted-by-lucene implementations to 
Solr
- Since PackedInts is so tied to the FieldCache, simply copy the whole "packed" 
package to Solr, like was done with the "uninverted" package
 

> Simplify PackedInts
> ---
>
> Key: LUCENE-7521
> URL: https://issues.apache.org/jira/browse/LUCENE-7521
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7521.patch
>
>
> We have a lot of specialization in PackedInts about how to keep packed arrays 
> of longs in memory. However, most use-cases have slowly moved to DirectWriter 
> and DirectMonotonicWriter and most specializations we have are barely used 
> for performance-sensitive operations, so I'd like to clean this up a bit.






[jira] [Commented] (SOLR-9691) Streaming expressions need to be able to use a metric computed by the facet stream as a field in other streams.

2016-10-25 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605628#comment-15605628
 ] 

Timothy Potter commented on SOLR-9691:
--

NPE results:

{code}
2016-10-25 15:35:14.737 ERROR (qtp1348949648-22) [c:transactions s:shard2 
r:core_node2 x:transactions_shard2_replica1] o.a.s.c.s.i.s.ExceptionStream 
java.lang.NullPointerException
at 
org.apache.solr.client.solrj.io.stream.metrics.SumMetric.update(SumMetric.java:69)
at 
org.apache.solr.client.solrj.io.stream.RollupStream.read(RollupStream.java:254)
at 
org.apache.solr.client.solrj.io.stream.ExceptionStream.read(ExceptionStream.java:68)
at 
org.apache.solr.handler.StreamHandler$TimerStream.read(StreamHandler.java:449)
at 
org.apache.solr.response.TextResponseWriter.writeTupleStream(TextResponseWriter.java:308)
at 
org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:168)
at 
org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:178)
at 
org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:294)
at 
org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:90)
at 
org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:55)
at 
org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
at 
org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:726)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:468)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
{code}

> Streaming expressions need to be able to use a metric computed by the facet 
> stream as a field in other streams.
> ---
>
> Key: SOLR-9691
> URL: https://issues.apache.org/jira/browse/SOLR-9691
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 6.2.1
>Reporter: Timothy Potter
>
> Using 6.2.1, I want to use a facet stream to do an intermediate count and 
> then sum up the counts in a rollup stream, i.e. something like:
> {code}
> rollup(
>   sort(
> hashJoin(
>   search(products,
>   q="*:*",
>   fl="product_id,model_num",
>   sort="product_id asc",
>   qt="/export",
>   partitionKeys="product_id"),
>   hashed=facet(transactions, q="*:*", buckets="product_id", 
> bucketSizeLimit=100, bucketSorts="product_id asc", count(*)),
>   on="product_id"
> ), 
> by="model_num asc"
>   ), 
>   over="model_num",
>   sum(count(*))
> )
> {code}
> Basically, I want to get a count of each product_id from the transactions 
> collection (# of transactions per product) and then join that with the 
> products table to generate a projection containing:
> {code}
> {
>   "result-set": {
> "docs": [
>   {
> "product_id": "1234",
> "count(*)": 4,
> "model_num": "X"
>   },
>   {
> "product_id": "5678",
> "count(*)": 5,
> "model_num": "Y"
>   },
>   ...
> ]
>   }
> }
> {code}
> This works, but the outer rollup doesn't recognize the count(*) as a field. I 
> get this error:
> {code}
> {
>   "result-set": {
> "docs": [
>   {
> "EXCEPTION": "Invalid expression sum(count(*)) - expected 
> sum(columnName)",
> "EOF": true
>   }
> ]
>   }
> }
> {code}






[jira] [Resolved] (SOLR-9687) Values not assigned to all valid Interval Facet intervals in some cases

2016-10-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe resolved SOLR-9687.
-
Resolution: Fixed

Resolving. Thanks Andy! The patch should apply cleanly to 5.3 if you need that.

> Values not assigned to all valid Interval Facet intervals in some cases
> ---
>
> Key: SOLR-9687
> URL: https://issues.apache.org/jira/browse/SOLR-9687
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: 5.3.1
>Reporter: Andy Chillrud
>Assignee: Tomás Fernández Löbbe
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9687.patch
>
>
> Using the interval facet definitions:
> * \{!key=Positive}(0,*]
> * \{!key=Zero}\[0,0]
> * \{!key=Negative}(*,0)
> A document with the value "0" in the numeric field the intervals are being 
> applied to is not counted in the Zero interval. If I change the order of the 
> definitions to Negative, Zero, Positive the "0" value is correctly counted 
> in the Zero interval.
> Tracing into the 5.3.1 code the problem is in the 
> org.apache.solr.request.IntervalFacets class. When the getSortedIntervals() 
> method sorts the interval definitions for a field by their starting value it 
> doesn't take into account the startOpen property. When two intervals have 
> equal start values it needs to sort intervals where startOpen == false before 
> intervals where startOpen == true.
> In the accumIntervalWithValue() method it checks which intervals each 
> document value should be considered a match for. It iterates through the 
> sorted intervals and stops checking subsequent intervals when 
> LOWER_THAN_START result is returned. If the Positive interval is sorted 
> before the Zero interval it never checks a zero value against the Zero 
> interval.
> I compared the 5.3.1 version of the IntervalFacets class against the 6.2.1 
> code, and it looks like the same issue will occur in 6.2.1.
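The tie-break described above can be sketched with toy types (not Solr's actual IntervalFacets code): when two intervals share a start value, a closed start must sort before an open start so a value equal to the boundary is not skipped once iteration stops at the first lower-than-start result.

```java
import java.util.*;

// Sketch of the described fix (toy types, not Solr's IntervalFacets):
// on equal start values, a closed start "[0,...]" sorts before an open
// start "(0,...)", so a document value of exactly 0 is still checked
// against the Zero interval before iteration can stop.
public class IntervalSortSketch {
    static class Interval {
        final String key; final double start; final boolean startOpen;
        Interval(String key, double start, boolean startOpen) {
            this.key = key; this.start = start; this.startOpen = startOpen;
        }
    }

    static final Comparator<Interval> BY_START =
        Comparator.comparingDouble((Interval i) -> i.start)
                  // closed start (startOpen == false) sorts first on ties
                  .thenComparing(i -> i.startOpen);

    public static void main(String[] args) {
        List<Interval> intervals = new ArrayList<>(Arrays.asList(
            new Interval("Positive", 0, true),                       // (0,*]
            new Interval("Zero", 0, false),                          // [0,0]
            new Interval("Negative", Double.NEGATIVE_INFINITY, true) // (*,0)
        ));
        intervals.sort(BY_START);
        // Zero now precedes Positive in the sorted order.
        if (!intervals.get(1).key.equals("Zero")) throw new AssertionError();
        if (!intervals.get(2).key.equals("Positive")) throw new AssertionError();
    }
}
```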






[jira] [Commented] (SOLR-8332) factor HttpShardHandler[Factory]'s url shuffling out into a ReplicaListTransformer class

2016-10-25 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605618#comment-15605618
 ] 

Christine Poerschke commented on SOLR-8332:
---

Thanks Noble for taking a look at the patch. I have made revisions and opened 
the above [#102|https://github.com/apache/lucene-solr/pull/102] pull request 
with them because in a patch it can sometimes be difficult to see the context 
of changes.

{{ReplicaFilter.filter}} versus {{ReplicaListTransformer.transform}} names.
* The _transformer_ part of the interface name was quite deliberate to try and 
be generic about what the transformation might be e.g. it could be removal 
(i.e. filtering out) of elements or it could just be reordering (e.g. via 
shuffling) of elements.
* revision made:
** javadocs added to ReplicaListTransformer.transform giving filtering and 
shuffling as example transformations

There being both {{transform}} and {{transformUrls}} methods.
* Yes, I was uneasy about that as well. Usually Replica objects need to be 
filtered or reordered but 
[HttpShardHandler|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/handler/component/HttpShardHandler.java#L306]
 can also operate on URLs passed in via the ShardsParam.SHARDS parameter.
* revisions made:
** {{transform(List)}} and {{transformUrls(List)}} combined 
into one {{transform(List choices)}} method
** javadocs added to mention about Replica vs. String choices
** ToyMatchingReplicaListTransformer class (in tests) to demonstrate how 
{{List choices}} can be used

{{List methodName(List inputs)}} versus {{void methodName(List choices)}} 
signature.
* no revisions made: making the parameter input-and-output seems unproblematic 
and saves having to allocate an output list to be returned

{{transform(List,SolrQueryRequest)}} versus {{void transform(List)}} signature.
* transform is called multiple times i.e. once for each shard but there is only 
one transformer object. The transformer's constructor may take a 
SolrQueryRequest argument (if needed) and that way request param deciphering 
happens only once per prepDistributed call.
* revisions made:
** Added ReplicaListTransformerTest class to demonstrate (with toy 
ReplicaListTransformer classes) how SolrQueryRequest params (or indeed 
SolrQueryRequest itself) could be passed to a ReplicaListTransformer upon 
construction.
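A toy rendition of the combined interface (illustrative names only; the actual signatures are in the pull request):

```java
import java.util.*;

// Toy version of the interface discussed above (names are illustrative,
// not the real Solr API): a single transform(List) method that mutates
// its argument in place, covering both reordering and filtering.
public class TransformerSketch {
    interface ReplicaListTransformer {
        void transform(List<?> choices); // may reorder or remove elements
    }

    static class ShufflingTransformer implements ReplicaListTransformer {
        private final Random random;
        ShufflingTransformer(Random random) { this.random = random; }
        public void transform(List<?> choices) {
            Collections.shuffle(choices, random);
        }
    }

    public static void main(String[] args) {
        List<String> urls = new ArrayList<>(Arrays.asList("url1", "url2", "url3"));
        new ShufflingTransformer(new Random(42)).transform(urls);
        // In-place transform: same elements, possibly new order, no new list.
        if (urls.size() != 3
                || !urls.containsAll(Arrays.asList("url1", "url2", "url3")))
            throw new AssertionError(urls.toString());
    }
}
```

The in-place signature avoids allocating a fresh output list per shard, matching the "input-and-output parameter" point above.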

> factor HttpShardHandler[Factory]'s url shuffling out into a 
> ReplicaListTransformer class
> 
>
> Key: SOLR-8332
> URL: https://issues.apache.org/jira/browse/SOLR-8332
> Project: Solr
>  Issue Type: Wish
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8332.patch, SOLR-8332.patch, SOLR-8332.patch
>
>
> Proposed patch against trunk to follow. No change in behaviour intended. This 
> would be as a step towards SOLR-6730.






[jira] [Commented] (SOLR-8332) factor HttpShardHandler[Factory]'s url shuffling out into a ReplicaListTransformer class

2016-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605611#comment-15605611
 ] 

ASF GitHub Bot commented on SOLR-8332:
--

GitHub user cpoerschke opened a pull request:

https://github.com/apache/lucene-solr/pull/102

SOLR-8332: factor HttpShardHandler[Factory]'s url shuffling out ...

... into a ReplicaListTransformer class

(switching from patch attachment to pull request for clarity and 
convenience)

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bloomberg/lucene-solr master-solr-8332

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/102.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #102


commit 55d03abfc3c8c74d24815084c58924bcc270bd0b
Author: Christine Poerschke 
Date:   2016-10-25T11:14:17Z

SOLR-8332: work-in-progress (applied Oct 18th patch to current master)

commit 87f419d38146a06bc40106f344cdef69987415c0
Author: Christine Poerschke 
Date:   2016-10-25T11:47:54Z

SOLR-8332: revisions incorporating review comments (see ticket log for 
details)




> factor HttpShardHandler[Factory]'s url shuffling out into a 
> ReplicaListTransformer class
> 
>
> Key: SOLR-8332
> URL: https://issues.apache.org/jira/browse/SOLR-8332
> Project: Solr
>  Issue Type: Wish
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8332.patch, SOLR-8332.patch, SOLR-8332.patch
>
>
> Proposed patch against trunk to follow. No change in behaviour intended. This 
> would be as a step towards SOLR-6730.






[GitHub] lucene-solr pull request #102: SOLR-8332: factor HttpShardHandler[Factory]'s...

2016-10-25 Thread cpoerschke
GitHub user cpoerschke opened a pull request:

https://github.com/apache/lucene-solr/pull/102

SOLR-8332: factor HttpShardHandler[Factory]'s url shuffling out ...

... into a ReplicaListTransformer class

(switching from patch attachment to pull request for clarity and 
convenience)

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bloomberg/lucene-solr master-solr-8332

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/102.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #102


commit 55d03abfc3c8c74d24815084c58924bcc270bd0b
Author: Christine Poerschke 
Date:   2016-10-25T11:14:17Z

SOLR-8332: work-in-progress (applied Oct 18th patch to current master)

commit 87f419d38146a06bc40106f344cdef69987415c0
Author: Christine Poerschke 
Date:   2016-10-25T11:47:54Z

SOLR-8332: revisions incorporating review comments (see ticket log for 
details)




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[jira] [Commented] (SOLR-9120) Luke NoSuchFileException

2016-10-25 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605563#comment-15605563
 ] 

Erick Erickson commented on SOLR-9120:
--

There's no "official" roadmap. Submitting a patch makes it much more likely 
it'll be addressed, though.


> Luke NoSuchFileException
> 
>
> Key: SOLR-9120
> URL: https://issues.apache.org/jira/browse/SOLR-9120
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0
>Reporter: Markus Jelsma
>
> On Solr 6.0, we frequently see the following errors popping up:
> {code}
> java.nio.file.NoSuchFileException: 
> /var/lib/solr/logs_shard2_replica1/data/index/segments_2c5
>   at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
>   at 
> sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
>   at 
> sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
>   at 
> sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
>   at java.nio.file.Files.readAttributes(Files.java:1737)
>   at java.nio.file.Files.size(Files.java:2332)
>   at org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:243)
>   at 
> org.apache.lucene.store.NRTCachingDirectory.fileLength(NRTCachingDirectory.java:131)
>   at 
> org.apache.solr.handler.admin.LukeRequestHandler.getFileLength(LukeRequestHandler.java:597)
>   at 
> org.apache.solr.handler.admin.LukeRequestHandler.getIndexInfo(LukeRequestHandler.java:585)
>   at 
> org.apache.solr.handler.admin.LukeRequestHandler.handleRequestBody(LukeRequestHandler.java:137)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2033)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:518)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
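For illustration, a defensive file-length probe of the kind the trace above suggests might help (a sketch under assumed semantics, not the actual LukeRequestHandler fix):

```java
import java.io.IOException;
import java.nio.file.*;

// Sketch (not the actual LukeRequestHandler code): index files can be
// deleted by a concurrent merge between listing the directory and reading
// sizes, so a file-length probe should tolerate NoSuchFileException
// instead of surfacing it as an error.
public class SafeFileLengthSketch {
    static long fileLengthOrMinusOne(Path file) {
        try {
            return Files.size(file);
        } catch (NoSuchFileException e) {
            return -1; // file vanished under us; report "unknown" rather than fail
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("segments", ".tmp");
        Files.write(tmp, new byte[]{1, 2, 3});
        if (fileLengthOrMinusOne(tmp) != 3) throw new AssertionError();
        Files.delete(tmp);
        if (fileLengthOrMinusOne(tmp) != -1) throw new AssertionError();
    }
}
```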






[jira] [Updated] (SOLR-9690) Date/Time DocRouter

2016-10-25 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-9690:
---
Attachment: DirectHashRouter.java

I started playing with the idea of a custom DocRouter subclassing 
HashBasedRouter in which the hash integer is parsed from the router.field 
instead of being computed by a hash function.  I'm thinking this approach could 
allow someone to use shard splitting, and it might allow more existing Solr 
infrastructure to be leveraged, for example the shard/slice hash ranges that 
pick which slice a document goes to.  The code is attached; it's really an 
unused toy at this time.
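The core of the idea, reduced to a toy (hypothetical code, not Solr's actual DocRouter API): parse the routing integer straight from the field value, then pick the slice whose range covers it.

```java
import java.time.Instant;

// Toy sketch of the DirectHashRouter idea (not Solr's actual API): the routing
// integer is parsed from the router.field value rather than computed by a hash
// function, and a slice is chosen by its covering range.
public class DirectHashRouterSketch {

    // Each slice covers an inclusive [min, max] range, analogous to the hash
    // ranges Solr assigns to slices in hash-based routing.
    public static int pickSlice(long routeValue, long[][] sliceRanges) {
        for (int i = 0; i < sliceRanges.length; i++) {
            if (routeValue >= sliceRanges[i][0] && routeValue <= sliceRanges[i][1]) {
                return i;
            }
        }
        // Mirrors HashBasedRouter.hashToSlice() failing when no slice covers
        // the document -- the open question discussed below.
        throw new IllegalStateException("No slice covers route value " + routeValue);
    }

    public static void main(String[] args) {
        // Two time-based slices split at 2016-01-01T00:00:00Z (epoch seconds).
        long split = Instant.parse("2016-01-01T00:00:00Z").getEpochSecond();
        long[][] ranges = { { Long.MIN_VALUE, split - 1 }, { split, Long.MAX_VALUE } };

        System.out.println(pickSlice(
                Instant.parse("2015-06-01T00:00:00Z").getEpochSecond(), ranges)); // prints 0
        System.out.println(pickSlice(
                Instant.parse("2016-10-25T00:00:00Z").getEpochSecond(), ranges)); // prints 1
    }
}
```

With ranges derived from epoch time like this, "routing" is just a range lookup, which is what makes shard splitting plausible.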

But I don't yet know if this approach makes sense for how Solr should do 
date/time routing as a 1st class feature.  For example, what's the implication 
of   HashBasedRouter.hashToSlice() throwing an exception if it doesn't see a 
Slice with a range covering this document?  Maybe it's okay.  Out of scope of 
this JIRA issue but a follow-on would be auto-creation of shards, perhaps 
just-in-time (when docs arrive and there is no suitable shard).  And another 
JIRA issue would be capping a shard by size such that the last shard range's 
end range integer could be lowered to whatever the time is of the last document.

[~shalinmangar] I'm especially curious what your opinion is, given that you've 
done work pertaining to the existing DocRouters.

> Date/Time DocRouter
> ---
>
> Key: SOLR-9690
> URL: https://issues.apache.org/jira/browse/SOLR-9690
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
> Attachments: DirectHashRouter.java
>
>
> This issue is for a custom Solr DocRouter that works with dates/times (or 
> perhaps any field providing an int).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7522) Make the Lucene jar an OSGi bundle

2016-10-25 Thread Michal Hlavac (JIRA)
Michal Hlavac created LUCENE-7522:
-

 Summary: Make the Lucene jar an OSGi bundle
 Key: LUCENE-7522
 URL: https://issues.apache.org/jira/browse/LUCENE-7522
 Project: Lucene - Core
  Issue Type: Bug
  Components: general/build
Affects Versions: 6.2.1
Reporter: Michal Hlavac


Add support for OSGi. LUCENE-1344 added this feature to previous versions, but 
current Lucene jars are not OSGi bundles. There are OSGi bundles from ServiceMix, 
but I think Lucene should provide this feature itself.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4164) Result Grouping fails if no hits

2016-10-25 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-4164:
---
Attachment: SOLR-4164.patch

Initial patch for this issue.
[~yo...@apache.org] Please kindly review this patch.

> Result Grouping fails if no hits
> 
>
> Key: SOLR-4164
> URL: https://issues.apache.org/jira/browse/SOLR-4164
> Project: Solr
>  Issue Type: Bug
>  Components: SearchComponents - other, SolrCloud
>Affects Versions: 4.0
>Reporter: Lance Norskog
>Assignee: Cao Manh Dat
> Attachments: SOLR-4164.patch
>
>
> In SolrCloud, found a result grouping bug in the 4.0 release.
> A distributed result grouping request under SolrCloud got this result:
> {noformat}
> Dec 10, 2012 10:32:07 PM org.apache.solr.common.SolrException log
> SEVERE: null:java.lang.IllegalArgumentException: numHits must be > 0; please 
> use TotalHitCountCollector if you just need the total hit count
> at 
> org.apache.lucene.search.TopFieldCollector.create(TopFieldCollector.java:1120)
> at 
> org.apache.lucene.search.TopFieldCollector.create(TopFieldCollector.java:1069)
> at 
> org.apache.lucene.search.grouping.AbstractSecondPassGroupingCollector.(AbstractSecondPassGroupingCollector.java:75)
> at 
> org.apache.lucene.search.grouping.term.TermSecondPassGroupingCollector.(TermSecondPassGroupingCollector.java:49)
> at 
> org.apache.solr.search.grouping.distributed.command.TopGroupsFieldCommand.create(TopGroupsFieldCommand.java:128)
> at 
> org.apache.solr.search.grouping.CommandHandler.execute(CommandHandler.java:132)
> at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:339)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:206)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7521) Simplify PackedInts

2016-10-25 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605482#comment-15605482
 ] 

Adrien Grand commented on LUCENE-7521:
--

Yes they are. 

> Simplify PackedInts
> ---
>
> Key: LUCENE-7521
> URL: https://issues.apache.org/jira/browse/LUCENE-7521
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7521.patch
>
>
> We have a lot of specialization in PackedInts about how to keep packed arrays 
> of longs in memory. However, most use-cases have slowly moved to DirectWriter 
> and DirectMonotonicWriter and most specializations we have are barely used 
> for performance-sensitive operations, so I'd like to clean this up a bit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9691) Streaming expressions need to be able to use a metric computed by the facet stream as a field in other streams.

2016-10-25 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605474#comment-15605474
 ] 

Dennis Gove commented on SOLR-9691:
---

I suspect it's seeing count(\*) and treating it as an expression representing 
CountMetric. Try quoting the field name to get around this:
{code}
sum("count(*)")
{code}
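If that is the cause, only the outer metric of the original expression should need to change (an untested sketch):

{code}
rollup(
  sort( ... same hashJoin as before ... ),
  over="model_num",
  sum("count(*)")
)
{code}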

> Streaming expressions need to be able to use a metric computed by the facet 
> stream as a field in other streams.
> ---
>
> Key: SOLR-9691
> URL: https://issues.apache.org/jira/browse/SOLR-9691
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 6.2.1
>Reporter: Timothy Potter
>
> Using 6.2.1, I want to use a facet stream to do an intermediate count and 
> then sum up the counts in a rollup stream, i.e. something like:
> {code}
> rollup(
>   sort(
> hashJoin(
>   search(products,
>   q="*:*",
>   fl="product_id,model_num",
>   sort="product_id asc",
>   qt="/export",
>   partitionKeys="product_id"),
>   hashed=facet(transactions, q="*:*", buckets="product_id", 
> bucketSizeLimit=100, bucketSorts="product_id asc", count(*)),
>   on="product_id"
> ), 
> by="model_num asc"
>   ), 
>   over="model_num",
>   sum(count(*))
> )
> {code}
> Basically, I want to get a count of each product_id from the transactions 
> collection (# of transactions per product) and then join that with the 
> products table to generate a projection containing:
> {code}
> {
>   "result-set": {
> "docs": [
>   {
> "product_id": "1234",
> "count(*)": 4,
> "model_num": "X"
>   },
>   {
> "product_id": "5678",
> "count(*)": 5,
> "model_num": "Y"
>   },
>   ...
> ]
>   }
> }
> {code}
> This works, but the outer rollup doesn't recognize the count(*) as a field. I 
> get this error:
> {code}
> {
>   "result-set": {
> "docs": [
>   {
> "EXCEPTION": "Invalid expression sum(count(*)) - expected 
> sum(columnName)",
> "EOF": true
>   }
> ]
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9691) Streaming expressions need to be able to use a metric computed by the facet stream as a field in other streams.

2016-10-25 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-9691:
-
Component/s: (was: Streaming Expressions)
 Streaming expression

> Streaming expressions need to be able to use a metric computed by the facet 
> stream as a field in other streams.
> ---
>
> Key: SOLR-9691
> URL: https://issues.apache.org/jira/browse/SOLR-9691
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 6.2.1
>Reporter: Timothy Potter
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9691) Streaming expressions need to be able to use a metric computed by the facet stream as a field in other streams.

2016-10-25 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-9691:
-
Component/s: (was: Streaming)
 Parallel SQL

> Streaming expressions need to be able to use a metric computed by the facet 
> stream as a field in other streams.
> ---
>
> Key: SOLR-9691
> URL: https://issues.apache.org/jira/browse/SOLR-9691
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 6.2.1
>Reporter: Timothy Potter
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9691) Streaming expressions need to be able to use a metric computed by the facet stream as a field in other streams.

2016-10-25 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-9691:
-
Component/s: (was: Streaming expression)
 Streaming

> Streaming expressions need to be able to use a metric computed by the facet 
> stream as a field in other streams.
> ---
>
> Key: SOLR-9691
> URL: https://issues.apache.org/jira/browse/SOLR-9691
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 6.2.1
>Reporter: Timothy Potter
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9690) Date/Time DocRouter

2016-10-25 Thread David Smiley (JIRA)
David Smiley created SOLR-9690:
--

 Summary: Date/Time DocRouter
 Key: SOLR-9690
 URL: https://issues.apache.org/jira/browse/SOLR-9690
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: David Smiley


This issue is for a custom Solr DocRouter that works with dates/times (or 
perhaps any field providing an int).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7521) Simplify PackedInts

2016-10-25 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605358#comment-15605358
 ] 

Yonik Seeley commented on LUCENE-7521:
--

Are any of these formats used by FieldCacheImpl (which was moved to Solr)?
It's hard to tell at first blush; I may have to resort to print statements...

> Simplify PackedInts
> ---
>
> Key: LUCENE-7521
> URL: https://issues.apache.org/jira/browse/LUCENE-7521
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7521.patch
>
>
> We have a lot of specialization in PackedInts about how to keep packed arrays 
> of longs in memory. However, most use-cases have slowly moved to DirectWriter 
> and DirectMonotonicWriter and most specializations we have are barely used 
> for performance-sensitive operations, so I'd like to clean this up a bit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7521) Simplify PackedInts

2016-10-25 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605337#comment-15605337
 ] 

Michael McCandless commented on LUCENE-7521:


+1!

Look at all that removed code :)

> Simplify PackedInts
> ---
>
> Key: LUCENE-7521
> URL: https://issues.apache.org/jira/browse/LUCENE-7521
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7521.patch
>
>
> We have a lot of specialization in PackedInts about how to keep packed arrays 
> of longs in memory. However, most use-cases have slowly moved to DirectWriter 
> and DirectMonotonicWriter and most specializations we have are barely used 
> for performance-sensitive operations, so I'd like to clean this up a bit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9481) BasicAuthPlugin should support standalone mode

2016-10-25 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605267#comment-15605267
 ] 

Jan Høydahl commented on SOLR-9481:
---

I'm going to commit this to master first and let it bake and receive feedback 
for some days. It is probably easier for people to take it for a spin when it's 
on master.
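For anyone wanting to take it for a spin: a minimal security.json of the kind the plugin reads, using the stock solr/SolrRocks credential hash from the ref guide. Standalone support would presumably read the same layout from the local filesystem instead of ZK:

{code}
{
  "authentication": {
    "blockUnknown": true,
    "class": "solr.BasicAuthPlugin",
    "credentials": {
      "solr": "IV0EHq1OnNrj6gvRCwvFwTrZ1+z1oBbnQdiVC3otuq0= Ndd7LKvVBAaZIF0QAVi1ekCfAJXr1GGfLtRUXhgrF8c="
    }
  }
}
{code}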

> BasicAuthPlugin should support standalone mode
> --
>
> Key: SOLR-9481
> URL: https://issues.apache.org/jira/browse/SOLR-9481
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: authentication
> Attachments: SOLR-9481.patch, SOLR-9481.patch
>
>
> The BasicAuthPlugin currently only supports SolrCloud, and reads users and 
> credentials from ZK /security.json
> Add support for standalone mode operation



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_102) - Build # 6202 - Unstable!

2016-10-25 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6202/
Java: 64bit/jdk1.8.0_102 -XX:-UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=47411, 
name=SocketProxy-Response-55796:56285, state=RUNNABLE, 
group=TGRP-HttpPartitionTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=47411, name=SocketProxy-Response-55796:56285, 
state=RUNNABLE, group=TGRP-HttpPartitionTest]
at 
__randomizedtesting.SeedInfo.seed([2B3CBAB4DD1DC2C3:A368856E73E1AF3B]:0)
Caused by: java.lang.RuntimeException: java.net.SocketException: Socket is 
closed
at __randomizedtesting.SeedInfo.seed([2B3CBAB4DD1DC2C3]:0)
at 
org.apache.solr.cloud.SocketProxy$Bridge$Pump.run(SocketProxy.java:347)
Caused by: java.net.SocketException: Socket is closed
at java.net.Socket.setSoTimeout(Socket.java:1137)
at 
org.apache.solr.cloud.SocketProxy$Bridge$Pump.run(SocketProxy.java:344)


FAILED:  
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenCancelFail

Error Message:
expected:<200> but was:<404>

Stack Trace:
java.lang.AssertionError: expected:<200> but was:<404>
at 
__randomizedtesting.SeedInfo.seed([2B3CBAB4DD1DC2C3:43838F9E0D87D02F]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.cancelDelegationToken(TestSolrCloudWithDelegationTokens.java:140)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenCancelFail(TestSolrCloudWithDelegationTokens.java:303)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Commented] (SOLR-9188) BlockUnknown property makes inter-node communication impossible

2016-10-25 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605214#comment-15605214
 ] 

Jan Høydahl commented on SOLR-9188:
---

This could perhaps be the bug I discovered in SOLR-9640?
* Fix bug in SolrDispatchFilter - path {{/admin/info/key}} should always be 
open. It required authentication since we were comparing with {{getPathInfo}} 
instead of {{getServletPath}}
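The distinction matters because the two servlet methods return different parts of the request path. For a request to {{/solr/admin/info/key}} (illustrative values; the exact split depends on how the dispatch filter is mapped):

{code}
// With the dispatch filter mapped to "/*" inside the "/solr" context:
request.getServletPath();  // "/admin/info/key" -- what the always-open check should see
request.getPathInfo();     // null here; non-null only for prefix servlet mappings
{code}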

> BlockUnknown property makes inter-node communication impossible
> ---
>
> Key: SOLR-9188
> URL: https://issues.apache.org/jira/browse/SOLR-9188
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 6.0
>Reporter: Piotr Tempes
>Assignee: Noble Paul
>Priority: Critical
>  Labels: BasicAuth, Security
> Fix For: 6.2.1, 6.3, master (7.0)
>
> Attachments: solr9188-errorlog.txt
>
>
When I set up my SolrCloud cluster without the blockUnknown property, it works 
as expected. When I want to block non-authenticated requests, I get the 
following stacktrace during startup (see attached file).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7036) Faster method for group.facet

2016-10-25 Thread Danny Teichthal (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605136#comment-15605136
 ] 

Danny Teichthal commented on SOLR-7036:
---

[~ysee...@gmail.com] I want to make sure I got this right.
You say that the group.facet functionality is exactly the same as calling JSON 
API with unique function applied on the group.field.
For current patch it means:
No need to touch the UnInverted field code; instead, we should just pass the 
calculation to the JSON API and add a unique calculation on the group.field 
from our query.
Did I get it right?

If it is true:
1. Will it have the same or better performance than current patch?
2. Will we also have transparent support in prefix facets and facet queries 
(the patch doesn't support these currently)?
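For reference, the JSON Facet API request being discussed might look like the following (field names are placeholders):

{code}
{
  "query": "*:*",
  "facet": {
    "top_values": {
      "type": "terms",
      "field": "facet_field",
      "facet": {
        "groups": "unique(group_field)"
      }
    }
  }
}
{code}

i.e. for each facet_field bucket, count the unique values of the grouping field rather than raw documents.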



> Faster method for group.facet
> -
>
> Key: SOLR-7036
> URL: https://issues.apache.org/jira/browse/SOLR-7036
> Project: Solr
>  Issue Type: Improvement
>  Components: faceting
>Affects Versions: 4.10.3
>Reporter: Jim Musil
>Assignee: Erick Erickson
> Fix For: 5.5, 6.0
>
> Attachments: SOLR-7036.patch, SOLR-7036.patch, SOLR-7036.patch, 
> SOLR-7036.patch, SOLR-7036.patch, SOLR-7036.patch, SOLR-7036_zipped.zip, 
> jstack-output.txt, performance.txt, source_for_patch.zip
>
>
> This is a patch that speeds up the performance of requests made with 
> group.facet=true. The original code that collects and counts unique facet 
> values for each group does not use the same improved field cache methods that 
> have been added for normal faceting in recent versions.
> Specifically, this approach leverages the UninvertedField class which 
> provides a much faster way to look up docs that contain a term. I've also 
> added a simple grouping map so that when a term is found for a doc, it can 
> quickly look up the group to which it belongs.
> Group faceting was very slow for our data set and when the number of docs or 
> terms was high, the latency spiked to multiple second requests. This solution 
> provides better overall performance -- from an average of 54ms to 32ms. It 
> also dropped our slowest performing queries way down -- from 6012ms to 991ms.
> I also added a few tests.
> I added an additional parameter so that you can choose to use this method or 
> the original. Add group.facet.method=fc to use the improved method or 
> group.facet.method=original which is the default if not specified.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7521) Simplify PackedInts

2016-10-25 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-7521:
-
Attachment: LUCENE-7521.patch

Here is a patch. It removes the Direct* and Packed*ThreeBlocks impls which were 
just specializations of Packed64 for given numbers of bits per value. It also 
makes sure that PackedInts.fastestFormatAndBits doesn't use the 
PACKED_SINGLE_BLOCK format anymore so that we can remove it in Lucene 8.
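For readers unfamiliar with these classes: what they all implement is fixed-width bit packing of longs into a long[] backing array, differing only in how values are laid out across 64-bit blocks. A simplified sketch of the general Packed64-style layout (not Lucene's actual code):

```java
// Simplified sketch of Packed64-style bit packing (not Lucene's actual code):
// each value occupies exactly bitsPerValue bits in a long[] backing array,
// and a value may straddle the boundary between two 64-bit blocks.
public class Packed64Sketch {
    private final long[] blocks;
    private final int bitsPerValue;
    private final long mask;

    public Packed64Sketch(int valueCount, int bitsPerValue) {
        this.bitsPerValue = bitsPerValue;
        this.mask = (1L << bitsPerValue) - 1;
        this.blocks = new long[(int) (((long) valueCount * bitsPerValue + 63) / 64)];
    }

    public void set(int index, long value) {
        long bitPos = (long) index * bitsPerValue;
        int block = (int) (bitPos >>> 6);   // which 64-bit block
        int shift = (int) (bitPos & 63);    // offset inside that block
        blocks[block] = (blocks[block] & ~(mask << shift)) | ((value & mask) << shift);
        int spill = shift + bitsPerValue - 64;  // bits spilling into the next block
        if (spill > 0) {
            long highMask = (1L << spill) - 1;
            blocks[block + 1] = (blocks[block + 1] & ~highMask)
                    | ((value & mask) >>> (bitsPerValue - spill));
        }
    }

    public long get(int index) {
        long bitPos = (long) index * bitsPerValue;
        int block = (int) (bitPos >>> 6);
        int shift = (int) (bitPos & 63);
        long value = blocks[block] >>> shift;
        int spill = shift + bitsPerValue - 64;
        if (spill > 0) {
            value |= blocks[block + 1] << (bitsPerValue - spill);
        }
        return value & mask;
    }

    public static void main(String[] args) {
        Packed64Sketch p = new Packed64Sketch(10, 21);  // 21 bits per value
        for (int i = 0; i < 10; i++) p.set(i, i * 100_000L);
        System.out.println(p.get(7));  // prints 700000
    }
}
```

The removed Direct* and *ThreeBlocks classes were specializations of this scheme for byte-aligned widths (8/16/24/32/48/64 bits) that avoid the cross-block shifting above at the cost of memory overhead.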

> Simplify PackedInts
> ---
>
> Key: LUCENE-7521
> URL: https://issues.apache.org/jira/browse/LUCENE-7521
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7521.patch
>
>
> We have a lot of specialization in PackedInts about how to keep packed arrays 
> of longs in memory. However, most use-cases have slowly moved to DirectWriter 
> and DirectMonotonicWriter and most specializations we have are barely used 
> for performance-sensitive operations, so I'd like to clean this up a bit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9628) Trie fields have unset lastDocId

2016-10-25 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta resolved SOLR-9628.

Resolution: Fixed

> Trie fields have unset lastDocId
> 
>
> Key: SOLR-9628
> URL: https://issues.apache.org/jira/browse/SOLR-9628
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Attachments: SOLR-9628.patch, SOLR-9628.patch
>
>
> LUCENE-7407 switched doc values usage to an iterator API, introducing a 
> lastDocId to track in TrieLongField, TrieIntField, and TrieDoubleField. This 
> is never set. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9559) Add ExecutorStream to execute stored Streaming Expressions

2016-10-25 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604023#comment-15604023
 ] 

Joel Bernstein edited comment on SOLR-9559 at 10/25/16 10:58 AM:
-

All interesting questions.

I thought about *exec* and *eval* but settled on executor because it really is 
a work queue for streaming expressions. It's a really powerful executor because 
it's parallel on a single node and can be parallelized across a cluster of 
worker nodes by wrapping it in the *parallel* function.

The StreamTask's job is to iterate the stream. All functionality in streaming 
expressions is achieved by iterating the stream. In order for something 
interesting to happen in this scenario you would need to use a stream decorator 
that pushes data somewhere, such as the update() function. The update function 
pushes Tuples to another SolrCloud collection. 

For example the executor could be used to train millions of machine learning 
models and store the models in a SolrCloud collection.

There are three core use cases for this:

1) As part of a scalable framework for developing Actor Model systems 
(https://en.wikipedia.org/wiki/Actor_model). The Actor Model is one of the core 
features of Scala and Erlang. The *daemon* function can be used to construct 
Actors that interact with each other through work queues and mailboxes.

2) Massively scalable stored queries and alerts. See the *topic* function for 
more details on subscribing to a query.

3) A general purpose parallel executor / work queue. 

Error handling currently just logs errors, but there is a lot we can do with 
error handling as this matures. One of the really nice things about the 
topic() function is that it persists its checkpoints in a collection. If you 
run a job that uses a topic() and it fails in the middle, you can simply start 
it back up and it picks up where it left off.
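As a rough illustration of the "parallel executor / work queue" idea described above, here is a minimal standalone Java sketch (this is not Solr's actual ExecutorStream implementation; the expression strings, pool size, and helper names are hypothetical): each stored "expression" is submitted to a fixed-size thread pool, analogous to executor(threads=N, ...) being parallel on a single node.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ExecutorSketch {
    // Run each "stored expression" on a fixed-size pool. In the real
    // ExecutorStream the work would be iterating the compiled stream;
    // here we just tag each expression to show the fan-out pattern.
    static List<String> runAll(List<String> expressions, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Future<String>> futures = new ArrayList<>();
        for (String expr : expressions) {
            futures.add(pool.submit(() -> "ran " + expr));
        }
        // Collect results in submission order.
        List<String> results = new ArrayList<>();
        for (Future<String> f : futures) {
            results.add(f.get());
        }
        pool.shutdown();
        return results;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runAll(List.of("expr-1", "expr-2", "expr-3"), 2));
    }
}
```

Wrapping such a pool in a higher-level partitioner is the single-node analogue of wrapping executor() in parallel() to spread the queue across worker nodes.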






was (Author: joel.bernstein):
All interesting questions.

I thought about *exec* and *eval* but settled on executor because it really is 
a work queue for streaming expressions. It's a really powerful executor because 
it's parallel on a single node and can be parallelized across a cluster of 
worker nodes by wrapping it in the *parallel* function.

The StreamTask's job is to iterate the stream. All functionality in streaming 
expressions is achieved by iterating the stream. In order for something 
interesting to happen in this scenario you would need to use a stream decorator 
that pushes data somewhere, such as the update() function. The update function 
pushes Tuples to another SolrCloud collection. 

For example the executor could be used to train millions of machine learning 
models and store the models in a SolrCloud collection.

There are three core use cases for this:

1) As part of a scalable framework for developing Actor Model systems 
https://en.wikipedia.org/wiki/Actor_model. This is one of the core features of 
Spark. The *daemon* function can be used to construct Actors that interact with 
each other through work queues and mail boxes.

2) Massively scalable stored queries and alerts. See the *topic* function for 
more details on subscribing to a query.

3) A general purpose parallel executor / work queue. 

Error handling currently just logs errors, but there is a lot we can do with 
error handling as this matures. One of the really nice things about the 
topic() function is that it persists its checkpoints in a collection. If you 
run a job that uses a topic() and it fails in the middle, you can simply start 
it back up and it picks up where it left off.





> Add ExecutorStream to execute stored Streaming Expressions
> --
>
> Key: SOLR-9559
> URL: https://issues.apache.org/jira/browse/SOLR-9559
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.3
>
> Attachments: SOLR-9559.patch, SOLR-9559.patch, SOLR-9559.patch, 
> SOLR-9559.patch
>
>
> The *ExecutorStream* will wrap a stream which contains Tuples with Streaming 
> Expressions to execute. By default the ExecutorStream will look for the 
> expression in the *expr_s* field in the Tuples.
> The ExecutorStream will have an internal thread pool so expressions can be 
> executed in parallel on a single worker. The ExecutorStream can also be 
> wrapped by the parallel function to partition the Streaming Expressions that 
> need to be executed across a cluster of worker nodes.
> *Sample syntax*:
> {code}
> daemon(executor(threads=10, topic(storedExpressions, fl="expr_s", ...)))
> {code}
> In the example above a *daemon* wraps an *executor* which wraps a *topic* 
> that is reading stored Streaming Expressions. 

[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 469 - Unstable!

2016-10-25 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/469/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.spelling.suggest.SuggesterWFSTTest.testRebuild

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([5246FF0F6360F13E:9635D4C57608BA4]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:813)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:780)
at 
org.apache.solr.spelling.suggest.SuggesterTest.testRebuild(SuggesterTest.java:104)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//lst[@name='spellcheck']/lst[@name='suggestions']/lst[@name='ac']/int[@name='numFound'][.='2']
xml response was: 

00


request 
was:q=ac=/suggest_wfst=true=2=xml
at 

[jira] [Commented] (SOLR-9188) BlockUnknown property makes inter-node communication impossible

2016-10-25 Thread Ewen Cluley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604973#comment-15604973
 ] 

Ewen Cluley commented on SOLR-9188:
---

I have deployed 6.2.1 and am still encountering the same (I think the same) 
issue. I am using self-signed SSL certificates but don't think that should 
have an impact.

The workaround still works where I specify adminuser:passw...@servername.com 
as the Solr host name in the solr.in.sh file.

Log:
2016-10-25 10:46:34.243 ERROR (qtp240650537-21) [c:ecm s:shard3 r:core_node2 
x:ecm_shard3_replica1] o.a.s.s.PKIAuthenticationPlugin Exception trying to get 
public key from : https://server00314.phx.abc.com:8984/solr
org.noggit.JSONParser$ParseException: JSON Parse Error: char=<,position=0 
BEFORE='<' AFTER='html>'

> BlockUnknown property makes inter-node communication impossible
> ---
>
> Key: SOLR-9188
> URL: https://issues.apache.org/jira/browse/SOLR-9188
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 6.0
>Reporter: Piotr Tempes
>Assignee: Noble Paul
>Priority: Critical
>  Labels: BasicAuth, Security
> Fix For: 6.2.1, 6.3, master (7.0)
>
> Attachments: solr9188-errorlog.txt
>
>
> When I setup my solr cloud without blockUnknown property it works as 
> expected. When I want to block non authenticated requests I get following 
> stacktrace during startup (see attached file).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7519) Optimize computing browse-only facets for taxonomy and sorted set methods

2016-10-25 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-7519.

   Resolution: Fixed
Fix Version/s: (was: 6.3)

> Optimize computing browse-only facets for taxonomy and sorted set methods
> -
>
> Key: LUCENE-7519
> URL: https://issues.apache.org/jira/browse/LUCENE-7519
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0)
>
> Attachments: LUCENE-7519.patch
>
>
> For the "browse facets" use case, where logically you run 
> {{MatchAllDocsQuery}} and then compute facet hits, we can optimize this case 
> for both {{SortedSetDocValuesFacets}} and {{FastTaxonomyFacetCounts}} so that 
> we don't use the query DISI at all and rather just pull from the doc values 
> iterator using {{nextDoc}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7519) Optimize computing browse-only facets for taxonomy and sorted set methods

2016-10-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604904#comment-15604904
 ] 

ASF subversion and git services commented on LUCENE-7519:
-

Commit 0782b09571fc5ac3e92b566f9abc047b2bd7966c in lucene-solr's branch 
refs/heads/master from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0782b09 ]

LUCENE-7519: add optimized implementations for browse-only facets


> Optimize computing browse-only facets for taxonomy and sorted set methods
> -
>
> Key: LUCENE-7519
> URL: https://issues.apache.org/jira/browse/LUCENE-7519
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.3
>
> Attachments: LUCENE-7519.patch
>
>
> For the "browse facets" use case, where logically you run 
> {{MatchAllDocsQuery}} and then compute facet hits, we can optimize this case 
> for both {{SortedSetDocValuesFacets}} and {{FastTaxonomyFacetCounts}} so that 
> we don't use the query DISI at all and rather just pull from the doc values 
> iterator using {{nextDoc}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8542) Integrate Learning to Rank into Solr

2016-10-25 Thread adeppa (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604783#comment-15604783
 ] 

adeppa commented on SOLR-8542:
--


Hi Mike,

Thanks for the information. I am not currently able to upgrade to Solr 6.x. I 
tried the patch above, but it is not working and still shows many errors. My 
current Solr version is 5.1.0. Please help me apply that patch to my current 
Solr source.



Thanks
Adeppa

> Integrate Learning to Rank into Solr
> 
>
> Key: SOLR-8542
> URL: https://issues.apache.org/jira/browse/SOLR-8542
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joshua Pantony
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8542-branch_5x.patch, SOLR-8542-trunk.patch
>
>
> This is a ticket to integrate learning to rank machine learning models into 
> Solr. Solr Learning to Rank (LTR) provides a way for you to extract features 
> directly inside Solr for use in training a machine learned model. You can 
> then deploy that model to Solr and use it to rerank your top X search 
> results. This concept was previously [presented by the authors at Lucene/Solr 
> Revolution 
> 2015|http://www.slideshare.net/lucidworks/learning-to-rank-in-solr-presented-by-michael-nilsson-diego-ceccarelli-bloomberg-lp].
> [Read through the 
> README|https://github.com/bloomberg/lucene-solr/tree/master-ltr-plugin-release/solr/contrib/ltr]
>  for a tutorial on using the plugin, in addition to how to train your own 
> external model.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7517) Explore making Scorer.score() return a double

2016-10-25 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604773#comment-15604773
 ] 

Adrien Grand commented on LUCENE-7517:
--

I started exploring making this change and would appreciate if somebody could 
cross-check these assumptions:
 - the Explanation API needs to be switched to doubles, so that it can combine 
scores from sub queries the same way as the Scorer API
 - TopDocsCollector and TopFieldCollector with SortField.Type.SCORE need to 
cast to a float _before_ comparing the bottom score and adding to the priority 
queue. Otherwise the final top docs could appear as being out of order.
 - score-based value sources still need to expose the score as a float rather 
than a double, so that sorting by a score value source yields the same result 
as sorting by score
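The float-collapse point in the second bullet can be illustrated with a tiny standalone sketch (the specific score values are hypothetical): two scores that are distinct as doubles can become equal once narrowed to float, so a collector that compares raw doubles against its float-valued bottom entry could admit documents that the final float-ranked top docs would order differently.

```java
public class FloatCastScores {
    public static void main(String[] args) {
        // Two hypothetical double scores that differ as doubles...
        double a = 1.00000001;
        double b = 1.000000005;
        // ...but both round to 1.0f, because floats near 1.0 are spaced
        // roughly 1.19e-7 apart (2^-23), far wider than this difference.
        System.out.println(a > b);                  // prints true
        System.out.println((float) a == (float) b); // prints true: a tie
    }
}
```

This is why comparing on the float-cast value before inserting into the priority queue keeps the queue's order consistent with the floats that are ultimately reported in TopDocs.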

> Explore making Scorer.score() return a double
> -
>
> Key: LUCENE-7517
> URL: https://issues.apache.org/jira/browse/LUCENE-7517
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Minor
>
> Follow-up to 
> http://search-lucene.com/m/l6pAi1BoyPJ1vr2382=Re+JENKINS+EA+Lucene+Solr+master+Linux+64bit+jdk+9+ea+140+Build+18103+Unstable+.
> We could make Scorer.score() return a double in order to lose less accuracy 
> when combining scores together, while still using floats on TopDocs and more 
> generally all parts of the code that need to store scores.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-6.x - Build # 511 - Unstable

2016-10-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/511/

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
ObjectTracker found 5 object(s) that were not released!!! 
[MockDirectoryWrapper, MockDirectoryWrapper, MDCAwareThreadPoolExecutor, 
MockDirectoryWrapper, SolrCore] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347)
  at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:333)  at 
org.apache.solr.core.SolrCore.initIndex(SolrCore.java:640)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:848)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:774)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:842)  at 
org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498)  at 
java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:745)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347)
  at 
org.apache.solr.core.SolrCore.initSnapshotMetaDataManager(SolrCore.java:435)  
at org.apache.solr.core.SolrCore.(SolrCore.java:842)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:774)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:842)  at 
org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498)  at 
java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:745)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at org.apache.solr.core.SolrCore.(SolrCore.java:797)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:774)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:842)  at 
org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498)  at 
java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:745)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347)
  at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:66)  at 
org.apache.solr.core.SolrCore.initIndex(SolrCore.java:672)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:848)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:774)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:842)  at 
org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498)  at 
java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:745)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at org.apache.solr.core.SolrCore.(SolrCore.java:938)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:774)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:842)  at 
org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498)  at 
java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at 

[jira] [Commented] (SOLR-9618) Tests hang on a forked process (deadlock inside the process)

2016-10-25 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15604592#comment-15604592
 ] 

Dawid Weiss commented on SOLR-9618:
---

Seems like after that, in a19ec194d25692.

> Tests hang on a forked process (deadlock inside the process)
> 
>
> Key: SOLR-9618
> URL: https://issues.apache.org/jira/browse/SOLR-9618
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: trace.log.bz2
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


