[jira] [Commented] (LUCENE-8466) FrozenBufferedUpdates#apply*Deletes is incorrect when index sorting is enabled

2018-08-29 Thread JIRA


[ 
https://issues.apache.org/jira/browse/LUCENE-8466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597062#comment-16597062
 ] 

Tomás Fernández Löbbe commented on LUCENE-8466:
---

bq. Maybe we should start thinking about releasing 7.5 because of this bug
+1
Do you know if this bug was always present with index sorting or if there was a 
regression?

> FrozenBufferedUpdates#apply*Deletes is incorrect when index sorting is enabled
> --
>
> Key: LUCENE-8466
> URL: https://issues.apache.org/jira/browse/LUCENE-8466
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Critical
> Fix For: 7.5, master (8.0)
>
> Attachments: LUCENE-8466.patch
>
>
> This was reported by Vish Ramachandran at 
> https://markmail.org/message/w27h7n2isb5eogos. When deleting by term or 
> query, we record the term/query that is deleted and the current max doc id. 
> Deletes are later applied on flush by FrozenBufferedUpdates#apply*Deletes. 
> Unfortunately, this doesn't work when index sorting is enabled since 
> documents are renumbered between the time that the current max doc id is 
> computed and the time that deletes are applied.
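The failure mode can be illustrated with a toy simulation (plain Python, not Lucene code; the names are invented for illustration): a delete recorded against a max doc id stops hitting the right documents once an index sort renumbers them.

```python
# Toy model of the bug described above (not Lucene code). A delete-by-term
# records the max doc id at delete time; it is later applied only to docs
# with id < max_doc_id. If index sorting renumbers docs in between, the
# wrong documents are kept or deleted.

def apply_delete(docs, term, max_doc_id):
    # Keep a doc if it was added after the delete (id >= max_doc_id)
    # or if it does not match the deleted term.
    return [d for d in docs if d["id"] >= max_doc_id or d["term"] != term]

# "old" existed before the delete; "new" was added after it.
docs = [
    {"id": 0, "name": "old", "term": "x"},
    {"id": 1, "name": "new", "term": "x"},
]

# Without index sorting the ids are stable: "old" dies, "new" survives.
correct = apply_delete(docs, "x", 1)
assert [d["name"] for d in correct] == ["new"]

# Index sorting renumbers the docs before the delete is applied,
# swapping their ids; now the delete hits the wrong document.
renumbered = [
    {"id": 0, "name": "new", "term": "x"},
    {"id": 1, "name": "old", "term": "x"},
]
wrong = apply_delete(renumbered, "x", 1)
assert [d["name"] for d in wrong] == ["old"]
```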



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12519) Support Deeply Nested Docs In Child Documents Transformer

2018-08-29 Thread mosh (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597058#comment-16597058
 ] 

mosh commented on SOLR-12519:
-

Would adding an fl to ChildDocTransformer be a part of this ticket, or is it a 
whole new one?

> Support Deeply Nested Docs In Child Documents Transformer
> -
>
> Key: SOLR-12519
> URL: https://issues.apache.org/jira/browse/SOLR-12519
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.5
>
> Attachments: SOLR-12519-fix-solrj-tests.patch, 
> SOLR-12519-no-commit.patch, SOLR-12519.patch
>
>  Time Spent: 25.5h
>  Remaining Estimate: 0h
>
> As discussed in SOLR-12298, to make use of the metadata fields in 
> SOLR-12441, there needs to be a smarter child document transformer, which 
> provides the ability to rebuild the original nested documents' structure.
>  In addition, I propose that the transformer also have the ability to 
> return only part of the original hierarchy, to prevent unnecessary block 
> join queries, e.g.
> {code}{"a": "b", "c": [ {"e": "f"}, {"e": "g"}, {"h": "i"} ]}{code}
>  In case my query is for all the children of "a:b" which contain the key 
> "e", the query will be broken into two parts:
>  1. The parent query "a:b"
>  2. The child query "e:*".
> If the only-children flag is on, the transformer will return the following 
> documents:
>  {code}[ {"e": "f"}, {"e": "g"} ]{code}
> In case the flag was not turned on (perhaps the default state), the whole 
> document hierarchy will be returned, containing only the matching children:
> {code}{"a": "b", "c": [ {"e": "f"}, {"e": "g"} ]}{code}
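The filtering described above can be sketched as a toy function (plain Python, not Solr code; the function and parameter names are illustrative only):

```python
# Toy sketch (not Solr code) of the proposed transformer behavior:
# keep only the children that match a child predicate; with
# only_children=True, return just those children.

def transform(doc, child_key, matches, only_children=False):
    children = [c for c in doc.get(child_key, []) if matches(c)]
    if only_children:
        return children
    out = {k: v for k, v in doc.items() if k != child_key}
    out[child_key] = children
    return out

doc = {"a": "b", "c": [{"e": "f"}, {"e": "g"}, {"h": "i"}]}
has_e = lambda c: "e" in c  # stands in for the child query "e:*"

# Only-children flag on: just the matching children.
assert transform(doc, "c", has_e, only_children=True) == [{"e": "f"}, {"e": "g"}]

# Flag off: whole hierarchy, with only the matching children kept.
assert transform(doc, "c", has_e) == {"a": "b", "c": [{"e": "f"}, {"e": "g"}]}
```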






[jira] [Comment Edited] (SOLR-12519) Support Deeply Nested Docs In Child Documents Transformer

2018-08-29 Thread mosh (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597058#comment-16597058
 ] 

mosh edited comment on SOLR-12519 at 8/30/18 4:45 AM:
--

Would adding an "fl" param to ChildDocTransformer be part of this ticket, or 
is it a whole new one?


was (Author: moshebla):
Would adding an fl to ChildDocTransformer be a part of this ticket, or is it a 
whole new one?

> Support Deeply Nested Docs In Child Documents Transformer
> -
>
> Key: SOLR-12519
> URL: https://issues.apache.org/jira/browse/SOLR-12519
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.5
>
> Attachments: SOLR-12519-fix-solrj-tests.patch, 
> SOLR-12519-no-commit.patch, SOLR-12519.patch
>
>  Time Spent: 25.5h
>  Remaining Estimate: 0h
>
> As discussed in SOLR-12298, to make use of the metadata fields in 
> SOLR-12441, there needs to be a smarter child document transformer, which 
> provides the ability to rebuild the original nested documents' structure.
>  In addition, I propose that the transformer also have the ability to 
> return only part of the original hierarchy, to prevent unnecessary block 
> join queries, e.g.
> {code}{"a": "b", "c": [ {"e": "f"}, {"e": "g"}, {"h": "i"} ]}{code}
>  In case my query is for all the children of "a:b" which contain the key 
> "e", the query will be broken into two parts:
>  1. The parent query "a:b"
>  2. The child query "e:*".
> If the only-children flag is on, the transformer will return the following 
> documents:
>  {code}[ {"e": "f"}, {"e": "g"} ]{code}
> In case the flag was not turned on (perhaps the default state), the whole 
> document hierarchy will be returned, containing only the matching children:
> {code}{"a": "b", "c": [ {"e": "f"}, {"e": "g"} ]}{code}






[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-11-ea+28) - Build # 2653 - Still Failing!

2018-08-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2653/
Java: 64bit/jdk-11-ea+28 -XX:+UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 14082 lines...]
   [junit4] JVM J0: stdout was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/temp/junit4-J0-20180830_034340_5362653631271706410673.sysout
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] #
   [junit4] # A fatal error has been detected by the Java Runtime Environment:
   [junit4] #
   [junit4] #  SIGSEGV (0xb) at pc=0x7fe5ea07613c, pid=9930, tid=9954
   [junit4] #
   [junit4] # JRE version: OpenJDK Runtime Environment (11.0+28) (build 11+28)
   [junit4] # Java VM: OpenJDK 64-Bit Server VM (11+28, mixed mode, tiered, 
compressed oops, parallel gc, linux-amd64)
   [junit4] # Problematic frame:
   [junit4] # V  [libjvm.so+0xd3e13c]  PhaseIdealLoop::split_up(Node*, Node*, 
Node*) [clone .part.39]+0x47c
   [junit4] #
   [junit4] # No core dump will be written. Core dumps have been disabled. To 
enable core dumping, try "ulimit -c unlimited" before starting Java again
   [junit4] #
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J0/hs_err_pid9930.log
   [junit4] #
   [junit4] # Compiler replay data is saved as:
   [junit4] # 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J0/replay_pid9930.log
   [junit4] #
   [junit4] # If you would like to submit a bug report, please visit:
   [junit4] #   http://bugreport.java.com/bugreport/crash.jsp
   [junit4] #
   [junit4] <<< JVM J0: EOF 

[...truncated 1357 lines...]
   [junit4] ERROR: JVM J0 ended with an exception, command line: 
/home/jenkins/tools/java/64bit/jdk-11-ea+28/bin/java -XX:+UseCompressedOops 
-XX:+UseParallelGC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/home/jenkins/workspace/Lucene-Solr-7.x-Linux/heapdumps -ea 
-esa --illegal-access=deny -Dtests.prefix=tests -Dtests.seed=4E74EE5CAF44C33F 
-Xmx512M -Dtests.iters= -Dtests.verbose=false -Dtests.infostream=false 
-Dtests.codec=random -Dtests.postingsformat=random 
-Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random 
-Dtests.directory=random -Dtests.linedocsfile=europarl.lines.txt.gz 
-Dtests.luceneMatchVersion=7.5.0 -Dtests.cleanthreads=perClass 
-Djava.util.logging.config.file=/home/jenkins/workspace/Lucene-Solr-7.x-Linux/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=3 -DtempDir=./temp 
-Djava.io.tmpdir=./temp 
-Djunit4.tempDir=/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/temp
 -Dcommon.dir=/home/jenkins/workspace/Lucene-Solr-7.x-Linux/lucene 
-Dclover.db.dir=/home/jenkins/workspace/Lucene-Solr-7.x-Linux/lucene/build/clover/db
 
-Djava.security.policy=/home/jenkins/workspace/Lucene-Solr-7.x-Linux/lucene/tools/junit4/solr-tests.policy
 -Dtests.LUCENE_VERSION=7.5.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Dtests.src.home=/home/jenkins/workspace/Lucene-Solr-7.x-Linux 
-Djava.security.egd=file:/dev/./urandom 
-Djunit4.childvm.cwd=/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J0
 -Djunit4.childvm.id=0 -Djunit4.childvm.count=3 -Dfile.encoding=ISO-8859-1 
-Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Dtests.filterstacks=true -Dtests.leaveTemporary=false -Dtests.badapples=false 
-classpath 

[jira] [Closed] (LUCENE-7903) Highlighting boolean queries shouldn't always highlight some clauses

2018-08-29 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-7903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley closed LUCENE-7903.


> Highlighting boolean queries shouldn't always highlight some clauses
> 
>
> Key: LUCENE-7903
> URL: https://issues.apache.org/jira/browse/LUCENE-7903
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Reporter: Damian Pawski
>Priority: Minor
>
> I am having difficulty getting the correct "highlighting" section from 
> Solr.
> My query returns the correct results; only the highlighting does not work 
> as I would expect.
> My query:
> http://solrServer/solr/solrCore/select?q=(((field1:((word1)AND(word2)))%20OR%20(field2:((word1)AND(word2)))%20OR%20(field3:((word1)AND(word2)))%20OR%20(field4:((word1)AND(word2)=field5:()=true=field1:(word1)=field1,field2,field3,field4
> If I run this query the highlighting section is correct - there is no 
> document with the phrase "word1", therefore field1 is not listed in the 
> highlighting element - correct.
> If I update my query to:
> http://solrServer/solr/solrCore/select?q=(((field1:((word1)AND(word2)))%20OR%20(field2:((word1)AND(word2)))%20OR%20(field3:((word1)AND(word2)))%20OR%20(field4:((word1)AND(word2)=field5:()=true=field1:(word1
>  OR word2)=field1,field2,field3,field4
> then I am not getting the expected results: word2 has been found in field1 
> but word1 is missing, yet Solr returned field1 in the highlighting element 
> with only "word2" highlighted.
> I have explicitly added an extra query using hl.q with the AND operator 
> (word1 AND word2), so why does Solr return field1 when only word2 has been 
> found?






[jira] [Commented] (LUCENE-7903) Highlighting boolean queries shouldn't always highlight some clauses

2018-08-29 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597041#comment-16597041
 ] 

David Smiley commented on LUCENE-7903:
--

Note this is tested in TestUnifiedHighlighter.testNestedBooleanQueryAccuracy

> Highlighting boolean queries shouldn't always highlight some clauses
> 
>
> Key: LUCENE-7903
> URL: https://issues.apache.org/jira/browse/LUCENE-7903
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Reporter: Damian Pawski
>Priority: Minor
>
> I am having difficulty getting the correct "highlighting" section from 
> Solr.
> My query returns the correct results; only the highlighting does not work 
> as I would expect.
> My query:
> http://solrServer/solr/solrCore/select?q=(((field1:((word1)AND(word2)))%20OR%20(field2:((word1)AND(word2)))%20OR%20(field3:((word1)AND(word2)))%20OR%20(field4:((word1)AND(word2)=field5:()=true=field1:(word1)=field1,field2,field3,field4
> If I run this query the highlighting section is correct - there is no 
> document with the phrase "word1", therefore field1 is not listed in the 
> highlighting element - correct.
> If I update my query to:
> http://solrServer/solr/solrCore/select?q=(((field1:((word1)AND(word2)))%20OR%20(field2:((word1)AND(word2)))%20OR%20(field3:((word1)AND(word2)))%20OR%20(field4:((word1)AND(word2)=field5:()=true=field1:(word1
>  OR word2)=field1,field2,field3,field4
> then I am not getting the expected results: word2 has been found in field1 
> but word1 is missing, yet Solr returned field1 in the highlighting element 
> with only "word2" highlighted.
> I have explicitly added an extra query using hl.q with the AND operator 
> (word1 AND word2), so why does Solr return field1 when only word2 has been 
> found?






[jira] [Created] (SOLR-12721) The shard parameter should be optional in AddReplica API

2018-08-29 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-12721:


 Summary: The shard parameter should be optional in AddReplica API
 Key: SOLR-12721
 URL: https://issues.apache.org/jira/browse/SOLR-12721
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Reporter: Shalin Shekhar Mangar
 Fix For: master (8.0), 7.5


Currently either {{shard}} or {{_route_}} must be specified. However, if a user 
only wants to maintain balance in the cluster, they should be able to call 
AddReplica with just the collection name; the API should then select the shard 
with the lowest replica count and create the new replica on the given node. 
Today, the user has to do this manually by inspecting the cluster state.
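The proposed selection rule can be sketched as a toy function (plain Python, for illustration only; the real API would consult the live cluster state):

```python
# Toy sketch of the proposed default: when no shard/_route_ is given,
# pick the shard with the fewest replicas and place the new replica there.

def pick_shard(shards):
    """shards: mapping of shard name -> list of replica names."""
    return min(shards, key=lambda name: len(shards[name]))

cluster = {
    "shard1": ["r1", "r2", "r3"],
    "shard2": ["r4"],           # under-replicated
    "shard3": ["r5", "r6"],
}

# The API would add the replica to the shard with the lowest count.
assert pick_shard(cluster) == "shard2"
```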






[jira] [Resolved] (LUCENE-7903) Highlighting boolean queries shouldn't always highlight some clauses

2018-08-29 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-7903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved LUCENE-7903.
--
Resolution: Fixed

This is fixed by LUCENE-8286 for the UnifiedHighlighter using the new Lucene 
MatchesIterator API.  

There's no Solr parameter toggle for this yet; feel free to post an issue & 
patch if you have the time.  It should be pretty easy.

> Highlighting boolean queries shouldn't always highlight some clauses
> 
>
> Key: LUCENE-7903
> URL: https://issues.apache.org/jira/browse/LUCENE-7903
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Reporter: Damian Pawski
>Priority: Minor
>
> I am having difficulty getting the correct "highlighting" section from 
> Solr.
> My query returns the correct results; only the highlighting does not work 
> as I would expect.
> My query:
> http://solrServer/solr/solrCore/select?q=(((field1:((word1)AND(word2)))%20OR%20(field2:((word1)AND(word2)))%20OR%20(field3:((word1)AND(word2)))%20OR%20(field4:((word1)AND(word2)=field5:()=true=field1:(word1)=field1,field2,field3,field4
> If I run this query the highlighting section is correct - there is no 
> document with the phrase "word1", therefore field1 is not listed in the 
> highlighting element - correct.
> If I update my query to:
> http://solrServer/solr/solrCore/select?q=(((field1:((word1)AND(word2)))%20OR%20(field2:((word1)AND(word2)))%20OR%20(field3:((word1)AND(word2)))%20OR%20(field4:((word1)AND(word2)=field5:()=true=field1:(word1
>  OR word2)=field1,field2,field3,field4
> then I am not getting the expected results: word2 has been found in field1 
> but word1 is missing, yet Solr returned field1 in the highlighting element 
> with only "word2" highlighted.
> I have explicitly added an extra query using hl.q with the AND operator 
> (word1 AND word2), so why does Solr return field1 when only word2 has been 
> found?






[jira] [Resolved] (LUCENE-8286) UnifiedHighlighter should support the new Weight.matches API for better match accuracy

2018-08-29 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved LUCENE-8286.
--
   Resolution: Fixed
 Assignee: David Smiley
Fix Version/s: 7.5

> UnifiedHighlighter should support the new Weight.matches API for better match 
> accuracy
> --
>
> Key: LUCENE-8286
> URL: https://issues.apache.org/jira/browse/LUCENE-8286
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/highlighter
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.5
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The new Weight.matches() API should allow the UnifiedHighlighter to more 
> accurately highlight some BooleanQuery patterns correctly -- see LUCENE-7903.
> In addition, this API should make the job of highlighting easier, reducing 
> the LOC and related complexities, especially the UH's PhraseHelper.  Note: 
> reducing/removing PhraseHelper is not a near-term goal since Weight.matches 
> is experimental and incomplete, and perhaps we'll discover some gaps in 
> flexibility/functionality.
> This issue should introduce a new UnifiedHighlighter.HighlightFlag enum 
> option for this method of highlighting.   Perhaps call it {{WEIGHT_MATCHES}}? 
>  Longer term it could go away and it'll be implied if you specify enum values 
> for PHRASES & MULTI_TERM_QUERY?






[jira] [Commented] (LUCENE-8286) UnifiedHighlighter should support the new Weight.matches API for better match accuracy

2018-08-29 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597034#comment-16597034
 ] 

ASF subversion and git services commented on LUCENE-8286:
-

Commit bf7d1078e4ef6c99abaf5c76eccf56ed0f09f553 in lucene-solr's branch 
refs/heads/branch_7x from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bf7d107 ]

LUCENE-8286: UnifiedHighlighter: new HighlightFlag.WEIGHT_MATCHES for 
MatchesIterator API.
Other API changes: New UHComponents, and FieldOffsetStrategy takes a LeafReader 
not IndexReader now.
Closes #409

(cherry picked from commit b19ae942f154924b9108c4e0409865128f2a07d4)


> UnifiedHighlighter should support the new Weight.matches API for better match 
> accuracy
> --
>
> Key: LUCENE-8286
> URL: https://issues.apache.org/jira/browse/LUCENE-8286
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/highlighter
>Reporter: David Smiley
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The new Weight.matches() API should allow the UnifiedHighlighter to more 
> accurately highlight some BooleanQuery patterns correctly -- see LUCENE-7903.
> In addition, this API should make the job of highlighting easier, reducing 
> the LOC and related complexities, especially the UH's PhraseHelper.  Note: 
> reducing/removing PhraseHelper is not a near-term goal since Weight.matches 
> is experimental and incomplete, and perhaps we'll discover some gaps in 
> flexibility/functionality.
> This issue should introduce a new UnifiedHighlighter.HighlightFlag enum 
> option for this method of highlighting.   Perhaps call it {{WEIGHT_MATCHES}}? 
>  Longer term it could go away and it'll be implied if you specify enum values 
> for PHRASES & MULTI_TERM_QUERY?






[jira] [Comment Edited] (SOLR-12713) make "solr.data.dir" points to multiple base path, and collection's replicas may distribute in them

2018-08-29 Thread weizhenyuan (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597032#comment-16597032
 ] 

weizhenyuan edited comment on SOLR-12713 at 8/30/18 3:56 AM:
-

[~varunthacker] I do need this feature to improve indexing throughput in some 
cases; it would be my pleasure to implement it if possible.


was (Author: tinswzy):
[~varunthacker] I do need this feature to improve indexing throughput   in same 
cases, be my pleasure to implement this if possible.

> make "solr.data.dir" points to multiple base path, and collection's replicas 
> may distribute in them
> ---
>
> Key: SOLR-12713
> URL: https://issues.apache.org/jira/browse/SOLR-12713
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Reporter: weizhenyuan
>Priority: Major
>
> As discussed on the user mailing list, it would be worthwhile to make 
> "solr.data.dir" point to multiple base paths, across which a collection's 
> replicas may be distributed.
> Currently, Solr can only point to a single common dataDir; to maximize 
> indexing throughput in some cases, Solr could introduce a new feature to 
> support multiple dataDir paths.






[jira] [Resolved] (SOLR-5032) Implement tool and/or API for moving a replica to a specific node

2018-08-29 Thread Shalin Shekhar Mangar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-5032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-5032.
-
Resolution: Duplicate

This was fixed by SOLR-10239

> Implement tool and/or API for moving a replica to a specific node
> -
>
> Key: SOLR-5032
> URL: https://issues.apache.org/jira/browse/SOLR-5032
> Project: Solr
>  Issue Type: New Feature
>Reporter: Otis Gospodnetic
>Priority: Minor
> Attachments: SOLR-5032.patch
>
>
> See http://search-lucene.com/m/Sri8gFljGw






[jira] [Commented] (SOLR-12713) make "solr.data.dir" points to multiple base path, and collection's replicas may distribute in them

2018-08-29 Thread weizhenyuan (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597032#comment-16597032
 ] 

weizhenyuan commented on SOLR-12713:


[~varunthacker] I do need this feature to improve indexing throughput in some 
cases; it would be my pleasure to implement it if possible.

> make "solr.data.dir" points to multiple base path, and collection's replicas 
> may distribute in them
> ---
>
> Key: SOLR-12713
> URL: https://issues.apache.org/jira/browse/SOLR-12713
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Reporter: weizhenyuan
>Priority: Major
>
> As discussed on the user mailing list, it would be worthwhile to make 
> "solr.data.dir" point to multiple base paths, across which a collection's 
> replicas may be distributed.
> Currently, Solr can only point to a single common dataDir; to maximize 
> indexing throughput in some cases, Solr could introduce a new feature to 
> support multiple dataDir paths.






[jira] [Commented] (SOLR-12713) make "solr.data.dir" points to multiple base path, and collection's replicas may distribute in them

2018-08-29 Thread weizhenyuan (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597028#comment-16597028
 ] 

weizhenyuan commented on SOLR-12713:



[~elyograg] That sounds workable too; I think the approach you mention could 
work automatically. 
In a simple form, when a collection is created, each shard replica would use 
one of the data paths as its dataDir property, making the overall distribution 
more balanced. 
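That balancing idea can be sketched as a toy round-robin assignment (plain Python with hypothetical names; the real implementation would live in Solr's core-creation path):

```python
# Toy sketch: cycle the replicas of a new collection across the configured
# data paths so the on-disk distribution stays balanced.
from itertools import cycle

def assign_data_dirs(replicas, data_paths):
    """Map each replica name to one of the data paths, round-robin."""
    paths = cycle(data_paths)
    return {replica: next(paths) for replica in replicas}

assignment = assign_data_dirs(
    ["shard1_r1", "shard1_r2", "shard2_r1", "shard2_r2"],
    ["/data1", "/data2"],
)
# Replicas alternate between the two base paths.
assert assignment == {
    "shard1_r1": "/data1",
    "shard1_r2": "/data2",
    "shard2_r1": "/data1",
    "shard2_r2": "/data2",
}
```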

> make "solr.data.dir" points to multiple base path, and collection's replicas 
> may distribute in them
> ---
>
> Key: SOLR-12713
> URL: https://issues.apache.org/jira/browse/SOLR-12713
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Reporter: weizhenyuan
>Priority: Major
>
> As discussion in user mail-list,It's of worthy of making  "solr.data.dir" 
> points to multiple base paths, which collection's replicas may distribute in.
> Currently, solr only could points to a single common dataDir,to maximize 
> indexing throughput   in same cases,solr maybe introduce a new feature to 
> support multiple dataDir paths.






[GitHub] lucene-solr pull request #409: LUCENE-8286: UH initial support for Weight.ma...

2018-08-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/lucene-solr/pull/409


---




[jira] [Commented] (LUCENE-8286) UnifiedHighlighter should support the new Weight.matches API for better match accuracy

2018-08-29 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597019#comment-16597019
 ] 

ASF subversion and git services commented on LUCENE-8286:
-

Commit b19ae942f154924b9108c4e0409865128f2a07d4 in lucene-solr's branch 
refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b19ae94 ]

LUCENE-8286: UnifiedHighlighter: new HighlightFlag.WEIGHT_MATCHES for 
MatchesIterator API.
Other API changes: New UHComponents, and FieldOffsetStrategy takes a LeafReader 
not IndexReader now.
Closes #409


> UnifiedHighlighter should support the new Weight.matches API for better match 
> accuracy
> --
>
> Key: LUCENE-8286
> URL: https://issues.apache.org/jira/browse/LUCENE-8286
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/highlighter
>Reporter: David Smiley
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The new Weight.matches() API should allow the UnifiedHighlighter to more 
> accurately highlight some BooleanQuery patterns correctly -- see LUCENE-7903.
> In addition, this API should make the job of highlighting easier, reducing 
> the LOC and related complexities, especially the UH's PhraseHelper.  Note: 
> reducing/removing PhraseHelper is not a near-term goal since Weight.matches 
> is experimental and incomplete, and perhaps we'll discover some gaps in 
> flexibility/functionality.
> This issue should introduce a new UnifiedHighlighter.HighlightFlag enum 
> option for this method of highlighting.   Perhaps call it {{WEIGHT_MATCHES}}? 
>  Longer term it could go away and it'll be implied if you specify enum values 
> for PHRASES & MULTI_TERM_QUERY?






[jira] [Created] (SOLR-12720) Remove autoReplicaFailoverWaitAfterExpiration in Solr 8.0

2018-08-29 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-12720:


 Summary: Remove autoReplicaFailoverWaitAfterExpiration in Solr 8.0
 Key: SOLR-12720
 URL: https://issues.apache.org/jira/browse/SOLR-12720
 Project: Solr
  Issue Type: Task
  Security Level: Public (Default Security Level. Issues are Public)
  Components: AutoScaling, SolrCloud
Reporter: Shalin Shekhar Mangar
 Fix For: master (8.0)


SOLR-12719 deprecated the autoReplicaFailoverWaitAfterExpiration property in 
solr.xml. We should remove it entirely in Solr 8.0






[jira] [Created] (SOLR-12719) Deprecate autoReplicaFailoverWaitAfterExpiration in solr.xml

2018-08-29 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-12719:


 Summary: Deprecate autoReplicaFailoverWaitAfterExpiration in 
solr.xml
 Key: SOLR-12719
 URL: https://issues.apache.org/jira/browse/SOLR-12719
 Project: Solr
  Issue Type: Task
  Security Level: Public (Default Security Level. Issues are Public)
  Components: AutoScaling, SolrCloud
Reporter: Shalin Shekhar Mangar
 Fix For: master (8.0), 7.5


The {{autoReplicaFailoverWaitAfterExpiration}} in solr.xml is used to populate 
the auto add replicas trigger's {{waitFor}} value. This was done to preserve 
back-compat in SOLR-10397. However, it has limitations as currently implemented 
e.g. SOLR-12114.

We should move it to either a cluster property or an autoscaling property, 
deprecate the current setting, and remove it in 8.0 (in another issue).






[jira] [Resolved] (SOLR-12088) Shards with dead replicas cause increased write latency

2018-08-29 Thread Shalin Shekhar Mangar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-12088.
--
   Resolution: Fixed
Fix Version/s: 7.3.1
   master (8.0)

I'm marking this as fixed by SOLR-12146. We can re-open if we see the problem 
again.

> Shards with dead replicas cause increased write latency
> ---
>
> Key: SOLR-12088
> URL: https://issues.apache.org/jira/browse/SOLR-12088
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.2
>Reporter: Jerry Bao
>Priority: Major
> Fix For: master (8.0), 7.3.1
>
>
> If a collection's shard contains dead replicas, write latency to the 
> collection is increased. For example, if a collection has 10 shards with a 
> replication factor of 3, and one of those shards contains 3 replicas and 3 
> downed replicas, write latency is increased in comparison to a shard that 
> contains only 3 replicas.
> My feeling here is that downed replicas should be completely ignored and not 
> cause issues to other alive replicas in terms of write latency.






[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.4) - Build # 2652 - Failure!

2018-08-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2652/
Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 15025 lines...]
   [junit4] JVM J0: stdout was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/temp/junit4-J0-20180830_002730_9935784049908683076987.sysout
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] #
   [junit4] # A fatal error has been detected by the Java Runtime Environment:
   [junit4] #
   [junit4] #  SIGSEGV (0xb) at pc=0x7fe94a32fdc9, pid=3218, tid=3307
   [junit4] #
   [junit4] # JRE version: Java(TM) SE Runtime Environment (9.0+11) (build 
9.0.4+11)
   [junit4] # Java VM: Java HotSpot(TM) 64-Bit Server VM (9.0.4+11, mixed mode, 
tiered, g1 gc, linux-amd64)
   [junit4] # Problematic frame:
   [junit4] # V  [libjvm.so+0xc5adc9]  PhaseIdealLoop::split_up(Node*, Node*, 
Node*) [clone .part.40]+0x619
   [junit4] #
   [junit4] # No core dump will be written. Core dumps have been disabled. To 
enable core dumping, try "ulimit -c unlimited" before starting Java again
   [junit4] #
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J0/hs_err_pid3218.log
   [junit4] #
   [junit4] # Compiler replay data is saved as:
   [junit4] # 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J0/replay_pid3218.log
   [junit4] #
   [junit4] # If you would like to submit a bug report, please visit:
   [junit4] #   http://bugreport.java.com/bugreport/crash.jsp
   [junit4] #
   [junit4] <<< JVM J0: EOF 

[...truncated 365 lines...]
   [junit4] ERROR: JVM J0 ended with an exception, command line: 
/home/jenkins/tools/java/64bit/jdk-9.0.4/bin/java -XX:-UseCompressedOops 
-XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/home/jenkins/workspace/Lucene-Solr-7.x-Linux/heapdumps -ea 
-esa --illegal-access=deny -Dtests.prefix=tests -Dtests.seed=6B5FE4FF44407AB9 
-Xmx512M -Dtests.iters= -Dtests.verbose=false -Dtests.infostream=false 
-Dtests.codec=random -Dtests.postingsformat=random 
-Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random 
-Dtests.directory=random -Dtests.linedocsfile=europarl.lines.txt.gz 
-Dtests.luceneMatchVersion=7.5.0 -Dtests.cleanthreads=perClass 
-Djava.util.logging.config.file=/home/jenkins/workspace/Lucene-Solr-7.x-Linux/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=3 -DtempDir=./temp 
-Djava.io.tmpdir=./temp 
-Djunit4.tempDir=/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/temp
 -Dcommon.dir=/home/jenkins/workspace/Lucene-Solr-7.x-Linux/lucene 
-Dclover.db.dir=/home/jenkins/workspace/Lucene-Solr-7.x-Linux/lucene/build/clover/db
 
-Djava.security.policy=/home/jenkins/workspace/Lucene-Solr-7.x-Linux/lucene/tools/junit4/solr-tests.policy
 -Dtests.LUCENE_VERSION=7.5.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Dtests.src.home=/home/jenkins/workspace/Lucene-Solr-7.x-Linux 
-Djava.security.egd=file:/dev/./urandom 
-Djunit4.childvm.cwd=/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J0
 -Djunit4.childvm.id=0 -Djunit4.childvm.count=3 
-Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Dtests.filterstacks=true -Dtests.leaveTemporary=false -Dtests.badapples=false 
-classpath 

[jira] [Updated] (SOLR-12718) StreamContext ctor should always take a SolrClientCache

2018-08-29 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12718:
-
Labels: newdev streaming  (was: )

> StreamContext ctor should always take a SolrClientCache
> ---
>
> Key: SOLR-12718
> URL: https://issues.apache.org/jira/browse/SOLR-12718
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Major
>  Labels: newdev, streaming
>
> {code:java}
> StreamExpression expression = StreamExpressionParser.parse(expr);
> TupleStream stream = new CloudSolrStream(expression, factory);
> SolrClientCache solrClientCache = new SolrClientCache();
> StreamContext streamContext = new StreamContext();
> streamContext.setSolrClientCache(solrClientCache);
> stream.setStreamContext(streamContext);
> List tuples = getTuples(stream);{code}
>  
> If we don't call {{streamContext.setSolrClientCache}} we will get an NPE. 
> It seems we should require the caller to pass a SolrClientCache into 
> StreamContext's constructor.
>  






[jira] [Created] (SOLR-12718) StreamContext ctor should always take a SolrClientCache

2018-08-29 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-12718:


 Summary: StreamContext ctor should always take a SolrClientCache
 Key: SOLR-12718
 URL: https://issues.apache.org/jira/browse/SOLR-12718
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Varun Thacker


{code:java}
StreamExpression expression = StreamExpressionParser.parse(expr);
TupleStream stream = new CloudSolrStream(expression, factory);

SolrClientCache solrClientCache = new SolrClientCache();
StreamContext streamContext = new StreamContext();
streamContext.setSolrClientCache(solrClientCache);

stream.setStreamContext(streamContext);
List tuples = getTuples(stream);{code}
 

If we don't call {{streamContext.setSolrClientCache}} we will get an NPE. It 
seems we should require the caller to pass a SolrClientCache into 
StreamContext's constructor.
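
A minimal sketch of the proposed change. The class bodies below are 
hypothetical stand-ins, not the real solrj types; the point is only that 
requiring the cache in the constructor makes the NPE impossible by 
construction:

```java
// Hypothetical stand-in for solrj's SolrClientCache, which caches
// SolrClient instances; the real body is elided here.
class SolrClientCache {
}

class StreamContext {
    private final SolrClientCache solrClientCache;

    // Proposed: take the cache as a constructor argument instead of a
    // setter, so a StreamContext can never exist without one.
    StreamContext(SolrClientCache solrClientCache) {
        if (solrClientCache == null) {
            throw new IllegalArgumentException("SolrClientCache must not be null");
        }
        this.solrClientCache = solrClientCache;
    }

    SolrClientCache getSolrClientCache() {
        return solrClientCache;
    }
}
```

Existing callers would migrate from {{new StreamContext()}} plus 
{{setSolrClientCache(...)}} to {{new StreamContext(solrClientCache)}}.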

 






[jira] [Updated] (SOLR-10981) Allow update to load gzip files

2018-08-29 Thread Andrew Lundgren (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-10981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Lundgren updated SOLR-10981:
---
Attachment: SOLR-10981.patch

> Allow update to load gzip files 
> 
>
> Key: SOLR-10981
> URL: https://issues.apache.org/jira/browse/SOLR-10981
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 6.6
>Reporter: Andrew Lundgren
>Priority: Major
>  Labels: patch
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-10981.patch, SOLR-10981.patch, SOLR-10981.patch, 
> SOLR-10981.patch, SOLR-10981.patch
>
>
> We currently import large CSV files. We store them in gzip files as they 
> compress at around 80%.
> To import them we must gunzip them and then import them. After that we no 
> longer need the decompressed files.
> This patch allows directly opening either URL, or local files that are 
> gzipped.
> For URLs, the file is treated as gzipped when the Content-Encoding header is 
> "gzip" or the URL ends in ".gz".
> For local files, a name ending in ".gz" is assumed to be gzipped.
> I have tested the patch with 4.10.4, 6.6.0, 7.0.1 and master from git.
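
A self-contained sketch of that detection heuristic (the class and method 
names are illustrative, not taken from the patch):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.GZIPInputStream;

class GzipAwareOpen {
    // Mirrors the heuristic described above: treat the source as gzipped
    // if the server says so via Content-Encoding, or if the name ends
    // in ".gz".
    static boolean looksGzipped(String name, String contentEncoding) {
        return "gzip".equalsIgnoreCase(contentEncoding) || name.endsWith(".gz");
    }

    // For local files only the name is available, so the ".gz" suffix decides.
    static InputStream open(File file) throws IOException {
        InputStream raw = new FileInputStream(file);
        return looksGzipped(file.getName(), null)
                ? new GZIPInputStream(raw)  // decompress transparently
                : raw;                      // pass plain files through
    }
}
```

Wrapping the stream this way lets the rest of the import code stay unaware of 
whether the input was compressed.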






[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk-9) - Build # 4802 - Still Unstable!

2018-08-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4802/
Java: 64bit/jdk-9 -XX:-UseCompressedOops -XX:+UseSerialGC

9 tests failed.
FAILED:  
org.apache.solr.update.processor.ParsingFieldUpdateProcessorsTest.testAsctimeLeniency

Error Message:
ERROR: [doc=1] Error adding field 'date_dt'='Friday Oct 7 13:14:15 2005' 
msg=Invalid Date String:'Friday Oct 7 13:14:15 2005'

Stack Trace:
org.apache.solr.common.SolrException: ERROR: [doc=1] Error adding field 
'date_dt'='Friday Oct 7 13:14:15 2005' msg=Invalid Date String:'Friday Oct 7 
13:14:15 2005'
at 
__randomizedtesting.SeedInfo.seed([5F558E89FDEB586E:CEB2DC932880003]:0)
at 
org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:215)
at 
org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:102)
at 
org.apache.solr.update.DirectUpdateHandler2.updateDocOrDocValues(DirectUpdateHandler2.java:962)
at 
org.apache.solr.update.DirectUpdateHandler2.doNormalUpdate(DirectUpdateHandler2.java:341)
at 
org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:288)
at 
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:235)
at 
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:67)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1003)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:653)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
at 
org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
at 
org.apache.solr.update.processor.UpdateProcessorTestBase.processAdd(UpdateProcessorTestBase.java:75)
at 
org.apache.solr.update.processor.UpdateProcessorTestBase.processAdd(UpdateProcessorTestBase.java:47)
at 
org.apache.solr.update.processor.ParsingFieldUpdateProcessorsTest.assertParsedDate(ParsingFieldUpdateProcessorsTest.java:1022)
at 
org.apache.solr.update.processor.ParsingFieldUpdateProcessorsTest.testAsctimeLeniency(ParsingFieldUpdateProcessorsTest.java:1002)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-10) - Build # 22766 - Unstable!

2018-08-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22766/
Java: 64bit/jdk-10 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

10 tests failed.
FAILED:  
org.apache.solr.update.processor.ParsingFieldUpdateProcessorsTest.testRfc1036

Error Message:
ERROR: [doc=1] Error adding field 'date_dt'='Friday, 07-Oct-05 13:14:15 GMT' 
msg=Invalid Date String:'Friday, 07-Oct-05 13:14:15 GMT'

Stack Trace:
org.apache.solr.common.SolrException: ERROR: [doc=1] Error adding field 
'date_dt'='Friday, 07-Oct-05 13:14:15 GMT' msg=Invalid Date String:'Friday, 
07-Oct-05 13:14:15 GMT'
at 
__randomizedtesting.SeedInfo.seed([DC7DAF4CF72E32F0:CCA8582725BB41FC]:0)
at 
org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:215)
at 
org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:102)
at 
org.apache.solr.update.DirectUpdateHandler2.updateDocOrDocValues(DirectUpdateHandler2.java:962)
at 
org.apache.solr.update.DirectUpdateHandler2.doNormalUpdate(DirectUpdateHandler2.java:341)
at 
org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:288)
at 
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:235)
at 
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:67)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1003)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:653)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
at 
org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
at 
org.apache.solr.update.processor.UpdateProcessorTestBase.processAdd(UpdateProcessorTestBase.java:75)
at 
org.apache.solr.update.processor.UpdateProcessorTestBase.processAdd(UpdateProcessorTestBase.java:47)
at 
org.apache.solr.update.processor.ParsingFieldUpdateProcessorsTest.assertParsedDate(ParsingFieldUpdateProcessorsTest.java:1022)
at 
org.apache.solr.update.processor.ParsingFieldUpdateProcessorsTest.testRfc1036(ParsingFieldUpdateProcessorsTest.java:988)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 

[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk1.8.0_172) - Build # 761 - Unstable!

2018-08-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/761/
Java: 64bit/jdk1.8.0_172 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.testTrigger

Error Message:
expected:<3> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<3> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([9EE966380B3F174C:FD2250BA92F06461]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.scheduledTriggerTest(ScheduledTriggerTest.java:113)
at 
org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.testTrigger(ScheduledTriggerTest.java:66)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 

[jira] [Created] (LUCENE-8472) Soft-deletes merge retention query should be rewritten

2018-08-29 Thread Nhat Nguyen (JIRA)
Nhat Nguyen created LUCENE-8472:
---

 Summary: Soft-deletes merge retention query should be rewritten
 Key: LUCENE-8472
 URL: https://issues.apache.org/jira/browse/LUCENE-8472
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Nhat Nguyen


We should rewrite the retention query before passing it to createWeight in the 
SoftDeletesRetentionMergePolicy#getScorer method.






[jira] [Closed] (LUCENE-3318) Sketch out highlighting based on term positions / position iterators

2018-08-29 Thread Uwe Schindler (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-3318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler closed LUCENE-3318.
-

> Sketch out highlighting based on term positions / position iterators
> 
>
> Key: LUCENE-3318
> URL: https://issues.apache.org/jira/browse/LUCENE-3318
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: modules/highlighter
>Affects Versions: Positions Branch
>Reporter: Simon Willnauer
>Priority: Major
> Fix For: Positions Branch
>
> Attachments: LUCENE-3318.patch, LUCENE-3318.patch, LUCENE-3318.patch
>
>
> Spin-off from LUCENE-2878. Since we already have positions on a large number 
> of queries in the branch, it is worth looking at highlighting as a real 
> consumer of the API. A prototype is already committed.






[jira] [Resolved] (LUCENE-3318) Sketch out highlighting based on term positions / position iterators

2018-08-29 Thread Uwe Schindler (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-3318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved LUCENE-3318.
---
Resolution: Won't Fix

> Sketch out highlighting based on term positions / position iterators
> 
>
> Key: LUCENE-3318
> URL: https://issues.apache.org/jira/browse/LUCENE-3318
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: modules/highlighter
>Affects Versions: Positions Branch
>Reporter: Simon Willnauer
>Priority: Major
> Fix For: Positions Branch
>
> Attachments: LUCENE-3318.patch, LUCENE-3318.patch, LUCENE-3318.patch
>
>
> Spin-off from LUCENE-2878. Since we already have positions on a large number 
> of queries in the branch, it is worth looking at highlighting as a real 
> consumer of the API. A prototype is already committed.






[jira] [Commented] (LUCENE-3318) Sketch out highlighting based on term positions / position iterators

2018-08-29 Thread Mike Sokolov (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-3318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596890#comment-16596890
 ] 

Mike Sokolov commented on LUCENE-3318:
--

[~arafalov] please feel free to resolve, as discussed on the mailing list. This 
issue was superseded by LUCENE-8229 (7 years later!)

> Sketch out highlighting based on term positions / position iterators
> 
>
> Key: LUCENE-3318
> URL: https://issues.apache.org/jira/browse/LUCENE-3318
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: modules/highlighter
>Affects Versions: Positions Branch
>Reporter: Simon Willnauer
>Priority: Major
> Fix For: Positions Branch
>
> Attachments: LUCENE-3318.patch, LUCENE-3318.patch, LUCENE-3318.patch
>
>
> Spin-off from LUCENE-2878. Since we already have positions on a large number 
> of queries in the branch, it is worth looking at highlighting as a real 
> consumer of the API. A prototype is already committed.






[jira] [Commented] (LUCENE-8267) Remove memory codecs from the codebase

2018-08-29 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596879#comment-16596879
 ] 

David Smiley commented on LUCENE-8267:
--

[~dweiss] I noticed the solr/CHANGES.txt entry you added recommended users 
switch to "Direct" instead.  I'm surprised we would recommend that 
(especially given the demise of "Memory").  Wouldn't FST50 be better?  I'd like 
to reword the CHANGES.txt to the following:
{noformat}
* LUCENE-8267: Memory codecs have been removed from the codebase 
(MemoryPostings,
  MemoryDocValues). If you used postingsFormat="Memory" switch to "FST50" as 
the next best alternative,
  or use the default.  If you used docValuesFormat="Memory" then remove it to 
get the default. (Dawid Weiss){noformat}
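
For illustration, a hypothetical schema.xml fieldType showing how a user would 
apply the suggested replacement (the field type name here is made up):

```xml
<!-- Hypothetical example: a field type that previously declared
     postingsFormat="Memory" switches to "FST50"; a docValuesFormat="Memory"
     attribute would simply be removed to fall back to the default codec. -->
<fieldType name="string_fst" class="solr.StrField"
           postingsFormat="FST50"/>
```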

> Remove memory codecs from the codebase
> --
>
> Key: LUCENE-8267
> URL: https://issues.apache.org/jira/browse/LUCENE-8267
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: LUCENE-8267.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Memory codecs (MemoryPostings*, MemoryDocValues*) are part of the random 
> selection of codecs for tests and cause occasional OOMs when a test with huge 
> data is selected. We don't use those memory codecs anywhere outside of tests; 
> it has been suggested to just remove them to avoid maintenance costs and OOMs 
> in tests. [1]
> [1] https://apache.markmail.org/thread/mj53os2ekyldsoy3






[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-10.0.1) - Build # 7493 - Still Unstable!

2018-08-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7493/
Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseG1GC

10 tests failed.
FAILED:  
org.apache.solr.update.processor.ParsingFieldUpdateProcessorsTest.testRfc1036

Error Message:
ERROR: [doc=1] Error adding field 'date_dt'='Friday, 07-Oct-05 13:14:15 GMT' 
msg=Invalid Date String:'Friday, 07-Oct-05 13:14:15 GMT'

Stack Trace:
org.apache.solr.common.SolrException: ERROR: [doc=1] Error adding field 
'date_dt'='Friday, 07-Oct-05 13:14:15 GMT' msg=Invalid Date String:'Friday, 
07-Oct-05 13:14:15 GMT'
at 
__randomizedtesting.SeedInfo.seed([FCE65FAC9AEBBB85:EC33A8C7487EC889]:0)
at 
org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:215)
at 
org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:102)
at 
org.apache.solr.update.DirectUpdateHandler2.updateDocOrDocValues(DirectUpdateHandler2.java:962)
at 
org.apache.solr.update.DirectUpdateHandler2.doNormalUpdate(DirectUpdateHandler2.java:341)
at 
org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:288)
at 
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:235)
at 
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:67)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1003)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:653)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
at 
org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
at 
org.apache.solr.update.processor.UpdateProcessorTestBase.processAdd(UpdateProcessorTestBase.java:75)
at 
org.apache.solr.update.processor.UpdateProcessorTestBase.processAdd(UpdateProcessorTestBase.java:47)
at 
org.apache.solr.update.processor.ParsingFieldUpdateProcessorsTest.assertParsedDate(ParsingFieldUpdateProcessorsTest.java:1022)
at 
org.apache.solr.update.processor.ParsingFieldUpdateProcessorsTest.testRfc1036(ParsingFieldUpdateProcessorsTest.java:988)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 

[jira] [Commented] (LUCENE-765) Index package level javadocs needs content

2018-08-29 Thread Mike Sokolov (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596792#comment-16596792
 ] 

Mike Sokolov commented on LUCENE-765:
-

OK, this patch supplies fully-qualified paths for all the \{@link} tags, and 
precommit passes for me with this.

> Index package level javadocs needs content
> --
>
> Key: LUCENE-765
> URL: https://issues.apache.org/jira/browse/LUCENE-765
> Project: Lucene - Core
>  Issue Type: Wish
>  Components: general/javadocs
>Reporter: Grant Ingersoll
>Priority: Minor
>  Labels: newdev
> Attachments: LUCENE-765.patch, LUCENE-765.patch, LUCENE-765.patch, 
> LUCENE-765.patch, LUCENE-765.patch
>
>
> The org.apache.lucene.index package level javadocs are sorely lacking.  They 
> should be updated to give a summary of the important classes, how indexing 
> works, etc.  Maybe give an overview of how the different writers coordinate.  
> Links to file formats, information on the posting algorithm, etc. would be 
> helpful.
> See the search package javadocs as a sample of the kind of info that could go 
> here.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-765) Index package level javadocs needs content

2018-08-29 Thread Mike Sokolov (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Sokolov updated LUCENE-765:

Attachment: LUCENE-765.patch







Re: Closing a JIRA issue

2018-08-29 Thread Alexandre Rafalovitch
I can see a "Resolve" button, which is a step towards closure.

So, if you do not see it, permissions may be in play. I will leave
the issue as is, to let the discrepancy be figured out.

Regards,
   Alex.

On 29 August 2018 at 15:56, Michael Sokolov  wrote:
> This old issue was still assigned to me:
> https://issues.apache.org/jira/browse/LUCENE-3318. I had worked on it seven
> years ago, but it is no longer relevant today, and I'd like to close it, but
> I don't see any UI affordance for doing that in JIRA. Am I missing
> permissions? Is the issue in some weird state? I unassigned it so at least
> it no longer shows up as "my" issue, but I would still like to see if there
> is some way to close it
>




[jira] [Updated] (LUCENE-765) Index package level javadocs needs content

2018-08-29 Thread Mike Sokolov (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Sokolov updated LUCENE-765:

Attachment: (was: LUCENE-765.patch.2)







[jira] [Updated] (LUCENE-765) Index package level javadocs needs content

2018-08-29 Thread Mike Sokolov (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Sokolov updated LUCENE-765:

Attachment: LUCENE-765.patch.2







[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-11-ea+28) - Build # 22765 - Failure!

2018-08-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22765/
Java: 64bit/jdk-11-ea+28 -XX:+UseCompressedOops -XX:+UseG1GC

5 tests failed.
FAILED:  org.apache.solr.cloud.ZkShardTermsTest.testParticipationOfReplicas

Error Message:
expected:<2> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<2> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([219C97ECB69E466A:9475D37AADCF41D7]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.cloud.ZkShardTermsTest.waitFor(ZkShardTermsTest.java:309)
at 
org.apache.solr.cloud.ZkShardTermsTest.testParticipationOfReplicas(ZkShardTermsTest.java:69)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:834)


FAILED:  
org.apache.solr.cloud.api.collections.CustomCollectionTest.testRouteFieldForImplicitRouter

Error Message:

Closing a JIRA issue

2018-08-29 Thread Michael Sokolov
This old issue was still assigned to me:
https://issues.apache.org/jira/browse/LUCENE-3318. I had worked on it seven
years ago, but it is no longer relevant today, and I'd like to close it,
but I don't see any UI affordance for doing that in JIRA. Am I missing
permissions? Is the issue in some weird state? I unassigned it so at least
it no longer shows up as "my" issue, but I would still like to see if there
is some way to close it


[jira] [Assigned] (LUCENE-3318) Sketch out highlighting based on term positions / position iterators

2018-08-29 Thread Mike Sokolov (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-3318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Sokolov reassigned LUCENE-3318:


Assignee: (was: Mike Sokolov)

> Sketch out highlighting based on term positions / position iterators
> 
>
> Key: LUCENE-3318
> URL: https://issues.apache.org/jira/browse/LUCENE-3318
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: modules/highlighter
>Affects Versions: Positions Branch
>Reporter: Simon Willnauer
>Priority: Major
> Fix For: Positions Branch
>
> Attachments: LUCENE-3318.patch, LUCENE-3318.patch, LUCENE-3318.patch
>
>
> Spinn off from LUCENE-2878. Since we have positions on a large number of 
> queries already in the branch is worth looking at highlighting as a real 
> consumer of the API. A prototype is already committed.






[jira] [Commented] (LUCENE-8469) Inline calls to the deprecated StringHelper.compare

2018-08-29 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596747#comment-16596747
 ] 

Uwe Schindler commented on LUCENE-8469:
---

I am OK with this change on master, but it may make patches harder to 
backport.

> Inline calls to the deprecated StringHelper.compare
> ---
>
> Key: LUCENE-8469
> URL: https://issues.apache.org/jira/browse/LUCENE-8469
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: 7.5
>
> Attachments: LUCENE-8469.patch, LUCENE-8469.patch
>
>
> In an attempt to limit the number of warnings during compilation I thought 
> it'd be nice to clean up our own stuff. This is a start: StringHelper.compare 
> is used throughout the code and is delegated to FutureArrays (where it 
> belongs, as the arguments are byte[], not Strings).
> This can cause other patches to not apply anymore... so we could apply this 
> to master only. If anybody has a strong feeling about it, please voice it. 
> The patch is trivial.






[jira] [Commented] (SOLR-12689) Add example of collection creation when autoscaling policy/prefs are configured

2018-08-29 Thread Cassandra Targett (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596724#comment-16596724
 ] 

Cassandra Targett commented on SOLR-12689:
--

The content is good in terms of what it covers, but:

* The link created near the top of the page ( that reads, "{{See the section 
<> for an example of how policy and 
preferences affect replica placement}}") fails because the link doesn't 
reference the correct section title. It should be {{<>}}. But I would argue that the correct 
section title is a bit long...the way you have it in the link to the section 
might really be a better title for the section (so, I'm suggesting to change 
the name of the section instead of fixing the link to it).
* I think you might want to read it over one more time before committing. There 
are several sentences that run a bit long, particularly the opening paragraph 
of the first section and the first sub-bullet. These make the meaning a bit 
harder to follow easily.
* Also, you have a single top-level bullet with two sub-bullets. If you don't 
have a 2nd top-level bullet, you can probably just get rid of the one you have, 
integrate it with the hanging sentence fragment above it, and make the 2 
sub-bullets top-level bullets.


> Add example of collection creation when autoscaling policy/prefs are 
> configured
> ---
>
> Key: SOLR-12689
> URL: https://issues.apache.org/jira/browse/SOLR-12689
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, documentation
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Major
> Attachments: SOLR-12689.patch
>
>







[jira] [Commented] (LUCENE-8469) Inline calls to the deprecated StringHelper.compare

2018-08-29 Thread Dawid Weiss (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596713#comment-16596713
 ] 

Dawid Weiss commented on LUCENE-8469:
-

Updated the patch. I don't like the duplication in arguments (same 
subexpressions), but avoiding it would require explicitly pulling out local 
variables at each and every occurrence. The compiler should handle these 
efficiently, and I wouldn't want to make a mistake somewhere in there.
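
The duplication being described can be seen in a standalone sketch of such an inlined call. This is illustrative only: the class and method names are mine, not Lucene's; `Arrays.compareUnsigned` is the JDK 9+ method that `FutureArrays` mirrors on older runtimes.

```java
import java.util.Arrays;

public class UnsignedCompare {
    // A helper taking offsets plus a shared length inlines into a call
    // with explicit from/to ranges, so each offset appears twice -- the
    // "same subexpressions" duplication mentioned above.
    public static int cmp(byte[] a, int aOff, byte[] b, int bOff, int len) {
        return Arrays.compareUnsigned(a, aOff, aOff + len, b, bOff, bOff + len);
    }

    public static void main(String[] args) {
        byte[] x = {(byte) 0x80};  // -128 signed, 128 unsigned
        byte[] y = {0x01};
        // Bytes are compared as unsigned values: 0x80 sorts after 0x01.
        System.out.println(cmp(x, 0, y, 0, 1) > 0);  // true
    }
}
```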







[jira] [Updated] (LUCENE-8469) Inline calls to the deprecated StringHelper.compare

2018-08-29 Thread Dawid Weiss (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-8469:

Attachment: LUCENE-8469.patch







[jira] [Commented] (SOLR-12584) Add basic auth credentials configuration to the Solr exporter for Prometheus/Grafana

2018-08-29 Thread CDatta (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596710#comment-16596710
 ] 

CDatta commented on SOLR-12584:
---

Sorry for the late response. 

ant precommit:

BUILD SUCCESSFUL

Total time: 10 minutes 56 seconds

ant test:

BUILD FAILED

There were test failures: 829 suites (5 ignored), 4019 tests, 1 failure, 163 
ignored (137 assumptions) [seed: 5AEECD470FD76BBC]

 

> Add basic auth credentials configuration to the Solr exporter for 
> Prometheus/Grafana  
> --
>
> Key: SOLR-12584
> URL: https://issues.apache.org/jira/browse/SOLR-12584
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication, metrics, security
>Affects Versions: 7.3, 7.4
>Reporter: Dwane Hall
>Priority: Minor
>  Labels: authentication, metrics, security
>
> The Solr exporter for Prometheus/Grafana provides a useful visual layer over 
> the solr metrics api for monitoring the state of a Solr cluster. Currently 
> this cannot be configured for use on a secure Solr cluster with the Basic 
> Authentication plugin enabled. The exporter does not provide a mechanism to 
> configure/pass through basic auth credentials when SolrJ requests information 
> from the metrics API endpoints; supporting this would be a useful addition for 
> Solr users running a secure Solr instance.






[jira] [Commented] (SOLR-12593) Remove date parsing functionality from extraction contrib

2018-08-29 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596701#comment-16596701
 ] 

David Smiley commented on SOLR-12593:
-

I committed the configSet modification stuff as part of SOLR-12591. Please 
rebase the PR, removing the portion that has been committed, and please try 
out my last question in the code review of the PR:
{quote}I was thinking... perhaps we should set _default's solrconfig.xml{quote}

> Remove date parsing functionality from extraction contrib
> -
>
> Key: SOLR-12593
> URL: https://issues.apache.org/jira/browse/SOLR-12593
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - Solr Cell (Tika extraction)
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Fix For: master (8.0)
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> The date parsing functionality in the extraction contrib is obsoleted by 
> equivalent functionality in ParseDateFieldUpdateProcessorFactory.  It should 
> be removed.  We should add documentation within this part of the ref guide on 
> how to accomplish the same (and test it).
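
For the ref guide documentation mentioned above, a hedged sketch of what such an URP configuration could look like. The chain name and patterns here are illustrative (in java.time syntax, per the migration discussed in these issues), not the committed default configSet:

```xml
<!-- Illustrative only: names and patterns are assumptions,
     not Solr's shipped _default configSet. -->
<updateRequestProcessorChain name="parse-date-sketch">
  <processor class="solr.ParseDateFieldUpdateProcessorFactory">
    <arr name="format">
      <str>EEE MMM ppd HH:mm:ss yyyy</str>       <!-- asctime() -->
      <str>EEE, dd MMM yyyy HH:mm:ss zzz</str>   <!-- RFC 1123 -->
    </arr>
  </processor>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```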






[jira] [Updated] (SOLR-12591) Ensure ParseDateFieldUpdateProcessorFactory can be used instead of ExtractionDateUtil

2018-08-29 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-12591:

   Priority: Major  (was: Minor)
Description: 
ParseDateFieldUpdateProcessorFactory should ideally be able to handle the cases 
that ExtractionDateUtil does in the "extraction" contrib module.  Tests should 
be added, ported from patches in SOLR-12561 that enhance TestExtractionDateUtil 
to similarly ensure the URP is tested.  Additionally the default configSet's 
configuration of this URP should be expanded to include the patterns parsed by 
the extract contrib.

Once this issue is complete, it should be appropriate to gut date time parsing 
out of the "extraction" contrib module – a separate issue (see SOLR-12593).

  was:
ParseDateFieldUpdateProcessorFactory should ideally be able to handle the cases 
that ExtractionDateUtil does in the "extraction" contrib module.  Tests should 
be added, ported from patches in SOLR-12561 that enhance TestExtractionDateUtil 
to similarly ensure the URP is tested.  I think in this issue, I should switch 
out Joda time for java.time as well (though leave the complete removal for 
SOLR-12586) if any changes are actually necessary – they probably will be.

Once this issue is complete, it should be appropriate to gut date time parsing 
out of the "extraction" contrib module – a separate issue. 


I committed a large portion of PR #438 for SOLR-12593 that had to do with 
modifying the default configSet to subsume the patterns handled by the 
"extraction" contrib.  That aspect seemed very distinct and seemed to fit with 
this issue well.

I made some further edits from the PR.  Instead of {{[ ]d}} for the day in 
asctime() I went with {{ppd}}, which for practical purposes is the same (tests 
pass both ways; "leniency" surely contributes to that) but is in principle 
truer to asctime()'s format.  I also made some minor renaming and comment 
tweaks in the test.  I wanted to put a comment somewhere about where these 
patterns come from, but I didn't know where to do so and forgot.
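
The {{ppd}} pattern can be exercised directly with java.time; a minimal sketch (class name and input strings are mine, not Solr code) showing that the pad modifier accepts both space-padded and two-digit days:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

public class AsctimeParse {
    // 'pp' pads the field that follows to width 2 with spaces, so "ppd"
    // matches asctime()'s space-padded day-of-month (" 3" as well as "29").
    static final DateTimeFormatter ASCTIME =
        DateTimeFormatter.ofPattern("EEE MMM ppd HH:mm:ss yyyy", Locale.ENGLISH);

    public static LocalDateTime parse(String s) {
        return LocalDateTime.parse(s, ASCTIME);
    }

    public static void main(String[] args) {
        System.out.println(parse("Sun Mar  3 08:51:23 2019")); // 2019-03-03T08:51:23
        System.out.println(parse("Wed Aug 29 14:00:00 2018")); // 2018-08-29T14:00
    }
}
```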

> Ensure ParseDateFieldUpdateProcessorFactory can be used instead of 
> ExtractionDateUtil
> -
>
> Key: SOLR-12591
> URL: https://issues.apache.org/jira/browse/SOLR-12591
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: SOLR-12591.patch
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> ParseDateFieldUpdateProcessorFactory should ideally be able to handle the 
> cases that ExtractionDateUtil does in the "extraction" contrib module.  Tests 
> should be added, ported from patches in SOLR-12561 that enhance 
> TestExtractionDateUtil to similarly ensure the URP is tested.  Additionally 
> the default configSet's configuration of this URP should be expanded to 
> include the patterns parsed by the extract contrib.
> Once this issue is complete, it should be appropriate to gut date time 
> parsing out of the "extraction" contrib module – a separate issue (see 
> SOLR-12593).






[jira] [Commented] (LUCENE-8469) Inline calls to the deprecated StringHelper.compare

2018-08-29 Thread Dawid Weiss (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596682#comment-16596682
 ] 

Dawid Weiss commented on LUCENE-8469:
-

Of course I inlined it automatically -- come on, I don't trust myself more than 
I trust software. :)

I'll correct these and apply removal of the deprecated method in master. Thanks 
Adrien.







[jira] [Commented] (SOLR-12591) Ensure ParseDateFieldUpdateProcessorFactory can be used instead of ExtractionDateUtil

2018-08-29 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596677#comment-16596677
 ] 

ASF subversion and git services commented on SOLR-12591:


Commit 18874a6e36b1930bc7437ee3f1095912b1d20a95 in lucene-solr's branch 
refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=18874a6 ]

SOLR-12591: Expand default configSet's date patterns to subsume those of 
extract contrib


> Ensure ParseDateFieldUpdateProcessorFactory can be used instead of 
> ExtractionDateUtil
> -
>
> Key: SOLR-12591
> URL: https://issues.apache.org/jira/browse/SOLR-12591
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Fix For: master (8.0)
>
> Attachments: SOLR-12591.patch
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> ParseDateFieldUpdateProcessorFactory should ideally be able to handle the 
> cases that ExtractionDateUtil does in the "extraction" contrib module.  Tests 
> should be added, ported from patches in SOLR-12561 that enhance 
> TestExtractionDateUtil to similarly ensure the URP is tested.  I think in 
> this issue, I should switch out Joda time for java.time as well (though leave 
> the complete removal for SOLR-12586) if any changes are actually necessary 
> – they probably will be.
> Once this issue is complete, it should be appropriate to gut date time 
> parsing out of the "extraction" contrib module – a separate issue. 






Re: javadoc linting on JDK10+

2018-08-29 Thread Michael Sokolov
I made this small change and it seems to work (enables javadoc linting),
not sure what other impact this might have though. Obviously the change is
just a WIP -- should support 9 as well, maybe 11?

diff --git a/lucene/common-build.xml b/lucene/common-build.xml
index ac4c504..c02e394 100644
--- a/lucene/common-build.xml
+++ b/lucene/common-build.xml
@@ -323,6 +323,8 @@
   
   
   
+  
+  
 
   

@@ -2062,7 +2064,11 @@ ${ant.project.name}.test.dependencies=${test.classpath.list}
   

   
-
+
+  
+  
+  
+
   


On Wed, Aug 29, 2018 at 1:38 PM Michael Sokolov  wrote:

> I am trying to run ant precommit (on master) and it fails for me with this
> message:
>
> -ecj-javadoc-lint-unsupported:
>
> BUILD FAILED
> /home/ANT.AMAZON.COM/sokolovm/workspace/lbench/lucene_baseline/lucene/common-build.xml:2076:
> Linting documentation with ECJ is not supported on this Java version
> (unknown).
>
> I think because I am using JDK10?
>
> I checked in build.xml and see this:
>
>   arg1="${build.java.runtime}" arg2="1.8"/> 
>
> does this indicate javadoc lint only works when building with JDK8?
>


javadoc linting on JDK10+

2018-08-29 Thread Michael Sokolov
I am trying to run ant precommit (on master) and it fails for me with this
message:

-ecj-javadoc-lint-unsupported:

BUILD FAILED
/home/ANT.AMAZON.COM/sokolovm/workspace/lbench/lucene_baseline/lucene/common-build.xml:2076:
Linting documentation with ECJ is not supported on this Java version
(unknown).

I think because I am using JDK10?

I checked in build.xml and see this:

  

does this indicate javadoc lint only works when building with JDK8?
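
The check quoted above gates ECJ linting on an exact runtime-version match, so anything other than "1.8" falls through to the "unsupported" target. A hedged sketch of a broader gate follows; the element and property names are assumptions, since the archive stripped the original XML tags from common-build.xml:

```xml
<!-- Hypothetical sketch; not the actual common-build.xml contents. -->
<condition property="ecj-javadoc-lint.supported">
  <or>
    <equals arg1="${build.java.runtime}" arg2="1.8"/>
    <equals arg1="${build.java.runtime}" arg2="9"/>
    <equals arg1="${build.java.runtime}" arg2="10"/>
    <equals arg1="${build.java.runtime}" arg2="11"/>
  </or>
</condition>
```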


[jira] [Commented] (SOLR-12519) Support Deeply Nested Docs In Child Documents Transformer

2018-08-29 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596640#comment-16596640
 ] 

ASF subversion and git services commented on SOLR-12519:


Commit bcbdeedbad507c99ce5a8d8756ebda40de0779e8 in lucene-solr's branch 
refs/heads/branch_7x from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bcbdeed ]

SOLR-12519: fix testGrandChildFilterJSON
Simplified differentiating random docs we don't care about from those we do by 
using IDs less than 0

(cherry picked from commit cae91b1eaf15d15f5cd6db792b33df5a26d6f2bc)


> Support Deeply Nested Docs In Child Documents Transformer
> -
>
> Key: SOLR-12519
> URL: https://issues.apache.org/jira/browse/SOLR-12519
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.5
>
> Attachments: SOLR-12519-fix-solrj-tests.patch, 
> SOLR-12519-no-commit.patch, SOLR-12519.patch
>
>  Time Spent: 25.5h
>  Remaining Estimate: 0h
>
> As discussed in SOLR-12298, to make use of the meta-data fields in 
> SOLR-12441, there needs to be a smarter child document transformer, which 
> provides the ability to rebuild the original nested documents' structure.
>  In addition, I propose that the transformer also have the ability to 
> bring only some of the original hierarchy, to prevent unnecessary block join 
> queries. e.g.
> {code}  {"a": "b", "c": [ {"e": "f"}, {"e": "g"} , {"h": "i"} ]} {code}
>  In case my query is for all the children of "a:b" which contain the key "e", 
> the query will be broken into two parts:
>  1. The parent query "a:b"
>  2. The child query "e:*".
> If the only-children flag is on, the transformer will return the following 
> documents:
>  {code}[ {"e": "f"}, {"e": "g"} ]{code}
> In case the flag was not turned on (perhaps the default state), the whole 
> document hierarchy will be returned, containing only the matching children:
> {code}{"a": "b", "c": [ {"e": "f"}, {"e": "g"} ]}{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12519) Support Deeply Nested Docs In Child Documents Transformer

2018-08-29 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596638#comment-16596638
 ] 

ASF subversion and git services commented on SOLR-12519:


Commit cae91b1eaf15d15f5cd6db792b33df5a26d6f2bc in lucene-solr's branch 
refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=cae91b1 ]

SOLR-12519: fix testGrandChildFilterJSON
Simplified differentiating random docs we don't care about from those we do by 
using IDs less than 0


> Support Deeply Nested Docs In Child Documents Transformer
> -
>
> Key: SOLR-12519
> URL: https://issues.apache.org/jira/browse/SOLR-12519
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.5
>
> Attachments: SOLR-12519-fix-solrj-tests.patch, 
> SOLR-12519-no-commit.patch, SOLR-12519.patch
>
>  Time Spent: 25.5h
>  Remaining Estimate: 0h
>
> As discussed in SOLR-12298, to make use of the meta-data fields in 
> SOLR-12441, there needs to be a smarter child document transformer, which 
> provides the ability to rebuild the original nested documents' structure.
>  In addition, I propose that the transformer also have the ability to 
> bring only some of the original hierarchy, to prevent unnecessary block join 
> queries. e.g.
> {code}  {"a": "b", "c": [ {"e": "f"}, {"e": "g"} , {"h": "i"} ]} {code}
>  In case my query is for all the children of "a:b" which contain the key "e", 
> the query will be broken into two parts:
>  1. The parent query "a:b"
>  2. The child query "e:*".
> If the only-children flag is on, the transformer will return the following 
> documents:
>  {code}[ {"e": "f"}, {"e": "g"} ]{code}
> In case the flag was not turned on (perhaps the default state), the whole 
> document hierarchy will be returned, containing only the matching children:
> {code}{"a": "b", "c": [ {"e": "f"}, {"e": "g"} ]}{code}






[jira] [Commented] (LUCENE-8471) Expose the number of bytes currently being flushed in IndexWriter

2018-08-29 Thread Michael McCandless (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596628#comment-16596628
 ] 

Michael McCandless commented on LUCENE-8471:


+1, but could we name it {{getFlushingBytes}} instead?  {{flushBytes}} sounds 
like it's going to write bytes to disk or something.

> Expose the number of bytes currently being flushed in IndexWriter
> -
>
> Key: LUCENE-8471
> URL: https://issues.apache.org/jira/browse/LUCENE-8471
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8471.patch
>
>
> This is already available via the DocumentWriter and flush control.  Making 
> it public on IndexWriter would allow for better memory accounting when using 
> IndexWriter#flushNextBuffer.






[jira] [Commented] (SOLR-12713) make "solr.data.dir" points to multiple base path, and collection's replicas may distribute in them

2018-08-29 Thread Varun Thacker (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596599#comment-16596599
 ] 

Varun Thacker commented on SOLR-12713:
--

A slightly safer variant would be to create a coreless collection, then add one 
replica at a time, pass =/path/to/disk1, and stripe them individually.

+1 to the Jira to have Solr support this in a better way. Is this something that 
you plan on working on? 

> make "solr.data.dir" points to multiple base path, and collection's replicas 
> may distribute in them
> ---
>
> Key: SOLR-12713
> URL: https://issues.apache.org/jira/browse/SOLR-12713
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Reporter: weizhenyuan
>Priority: Major
>
> As discussed on the user mailing list, it is worth making "solr.data.dir" 
> point to multiple base paths, across which a collection's replicas may be 
> distributed.
> Currently, Solr can only point to a single common dataDir; to maximize 
> indexing throughput in some cases, Solr may introduce a new feature to 
> support multiple dataDir paths.






[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk-9) - Build # 4801 - Unstable!

2018-08-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4801/
Java: 64bit/jdk-9 -XX:-UseCompressedOops -XX:+UseG1GC

5 tests failed.
FAILED:  
org.apache.solr.response.transform.TestChildDocTransformerHierarchy.testParentFilterJSON

Error Message:
-1

Stack Trace:
java.lang.ArrayIndexOutOfBoundsException: -1
at 
__randomizedtesting.SeedInfo.seed([B4639ABEDB591523:2B859D8465E193A0]:0)
at 
org.apache.solr.response.transform.TestChildDocTransformerHierarchy.fullNestedDocTemplate(TestChildDocTransformerHierarchy.java:350)
at 
org.apache.solr.response.transform.TestChildDocTransformerHierarchy.testParentFilterJSON(TestChildDocTransformerHierarchy.java:98)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
org.apache.solr.response.transform.TestChildDocTransformerHierarchy.testParentFilterJSON

Error Message:
-1

Stack Trace:
java.lang.ArrayIndexOutOfBoundsException: -1
at 

[jira] [Comment Edited] (LUCENE-765) Index package level javadocs needs content

2018-08-29 Thread Mike Sokolov (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596529#comment-16596529
 ] 

Mike Sokolov edited comment on LUCENE-765 at 8/29/18 4:39 PM:
--

Ah, OK sorry about that! I had run "ant javadocs" in the core module, and that 
passed OK, just with one warning about future change to html5 format, unrelated 
to my patch. I didn't realize there were additional checks in precommit – I'm 
running that now.

 

UPDATE - `ant precommit` fails its license check for me on a clean master 
checkout with this message:

 

{{check-licenses:}}

[echo] License check under: .../solr

{{BUILD FAILED}}
 {{.../build.xml:117: The following error occurred while executing this line:}}
 {{.../solr/build.xml:364: The following error occurred while executing this 
line:}}
 {{...e/lucene/tools/custom-tasks.xml:62: JAR resource does not exist: 
core/lib/avatica-core-1.9.0.jar}}

 

OK I see what happened - I moved my build directory, but absolute paths have 
been embedded as symlinks. I guess I need to clean up and start over


was (Author: sokolov):
Ah, OK sorry about that! I had run "ant javadocs" in the core module, and that 
passed OK, just with one warning about future change to html5 format, unrelated 
to my patch. I didn't realize there were additional checks in precommit – I'm 
running that now.

 

UPDATE - `ant precommit` fails its license check for me on a clean master 
checkout with this message:

 

{{check-licenses:}}

[echo] License check under: .../solr

{{BUILD FAILED}}
 {{.../build.xml:117: The following error occurred while executing this line:}}
 {{.../solr/build.xml:364: The following error occurred while executing this 
line:}}
 {{...e/lucene/tools/custom-tasks.xml:62: JAR resource does not exist: 
core/lib/avatica-core-1.9.0.jar}}

> Index package level javadocs needs content
> --
>
> Key: LUCENE-765
> URL: https://issues.apache.org/jira/browse/LUCENE-765
> Project: Lucene - Core
>  Issue Type: Wish
>  Components: general/javadocs
>Reporter: Grant Ingersoll
>Priority: Minor
>  Labels: newdev
> Attachments: LUCENE-765.patch, LUCENE-765.patch, LUCENE-765.patch, 
> LUCENE-765.patch
>
>
> The org.apache.lucene.index package level javadocs are sorely lacking.  They 
> should be updated to give a summary of the important classes, how indexing 
> works, etc.  Maybe give an overview of how the different writers coordinate.  
> Links to file formats, information on the posting algorithm, etc. would be 
> helpful.
> See the search package javadocs as a sample of the kind of info that could go 
> here.






[jira] [Commented] (LUCENE-8471) Expose the number of bytes currently being flushed in IndexWriter

2018-08-29 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596581#comment-16596581
 ] 

Adrien Grand commented on LUCENE-8471:
--

+1 Let's maybe mention in javadocs that this is a subset of what #ramBytesUsed 
returns?
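The subset relationship being proposed for the javadocs can be shown with a toy model. This is an illustrative sketch only — the class below is not Lucene's IndexWriter, and the field names are invented; it just models the invariant that the bytes currently being flushed are a subset of total indexing RAM:

```java
// Toy model of the accounting: RAM held by in-flight flushes is a
// subset of total indexing RAM, so callers driving flushNextBuffer()
// can subtract it when deciding whether to trigger more flushes.
public class FlushAccounting {
    private final long activeBytes;    // buffered docs not yet flushing
    private final long flushingBytes;  // buffers currently being flushed

    public FlushAccounting(long activeBytes, long flushingBytes) {
        this.activeBytes = activeBytes;
        this.flushingBytes = flushingBytes;
    }

    /** Total indexing RAM, analogous to IndexWriter#ramBytesUsed. */
    public long ramBytesUsed() {
        return activeBytes + flushingBytes;
    }

    /** The proposed accessor: only the bytes being flushed right now. */
    public long getFlushingBytes() {
        return flushingBytes;
    }

    public static void main(String[] args) {
        FlushAccounting acct = new FlushAccounting(64_000_000L, 16_000_000L);
        // The invariant the javadocs would document:
        // getFlushingBytes() <= ramBytesUsed()
        System.out.println(acct.ramBytesUsed());      // 80000000
        System.out.println(acct.getFlushingBytes());  // 16000000
    }
}
```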

> Expose the number of bytes currently being flushed in IndexWriter
> -
>
> Key: LUCENE-8471
> URL: https://issues.apache.org/jira/browse/LUCENE-8471
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8471.patch
>
>
> This is already available via the DocumentWriter and flush control.  Making 
> it public on IndexWriter would allow for better memory accounting when using 
> IndexWriter#flushNextBuffer.






[jira] [Commented] (SOLR-12713) make "solr.data.dir" points to multiple base path, and collection's replicas may distribute in them

2018-08-29 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596573#comment-16596573
 ] 

Shawn Heisey commented on SOLR-12713:
-

Interim workaround:

After you create a collection, and before you add data to it, find the 
"core.properties" files for each shard replica in the collection and add/change 
the dataDir property to manually point it to a different location.  Then reload 
the collection with the Collections API to make the changes effective.
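The edit step of that workaround can be sketched with java.util.Properties. The paths, core name, and helper below are illustrative, not from any Solr release; real core.properties files live under each replica's core directory, and the reload still has to be done separately via the Collections API:

```java
import java.io.IOException;
import java.io.Reader;
import java.io.Writer;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

public class SetDataDir {
    /** Load a core.properties file, set dataDir, and write it back. */
    static Properties addDataDir(Path coreProps, String dataDir) throws IOException {
        Properties p = new Properties();
        try (Reader r = Files.newBufferedReader(coreProps)) {
            p.load(r);
        }
        p.setProperty("dataDir", dataDir);
        try (Writer w = Files.newBufferedWriter(coreProps)) {
            p.store(w, null);
        }
        return p;
    }

    public static void main(String[] args) throws IOException {
        // Illustrative stand-in for a real replica's core.properties,
        // e.g. $SOLR_HOME/mycoll_shard1_replica1/core.properties.
        Path coreProps = Files.createTempFile("core", ".properties");
        Files.writeString(coreProps, "name=mycoll_shard1_replica1\n");

        addDataDir(coreProps, "/mnt/disk2/solr/mycoll_shard1_replica1/data");

        // Then reload so the change takes effect, e.g.:
        // curl 'http://localhost:8983/solr/admin/collections?action=RELOAD&name=mycoll'
        System.out.println(Files.readString(coreProps).contains("dataDir"));
    }
}
```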

> make "solr.data.dir" points to multiple base path, and collection's replicas 
> may distribute in them
> ---
>
> Key: SOLR-12713
> URL: https://issues.apache.org/jira/browse/SOLR-12713
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Reporter: weizhenyuan
>Priority: Major
>
> As discussed on the user mailing list, it is worth making "solr.data.dir" 
> point to multiple base paths, across which a collection's replicas may be 
> distributed.
> Currently, Solr can only point to a single common dataDir; to maximize 
> indexing throughput in some cases, Solr may introduce a new feature to 
> support multiple dataDir paths.






[jira] [Comment Edited] (LUCENE-765) Index package level javadocs needs content

2018-08-29 Thread Mike Sokolov (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596529#comment-16596529
 ] 

Mike Sokolov edited comment on LUCENE-765 at 8/29/18 4:28 PM:
--

Ah, OK sorry about that! I had run "ant javadocs" in the core module, and that 
passed OK, just with one warning about future change to html5 format, unrelated 
to my patch. I didn't realize there were additional checks in precommit – I'm 
running that now.

 

UPDATE - `ant precommit` fails its license check for me on a clean master 
checkout with this message:

 

{{check-licenses:}}

[echo] License check under: .../solr

{{BUILD FAILED}}
 {{.../build.xml:117: The following error occurred while executing this line:}}
 {{.../solr/build.xml:364: The following error occurred while executing this 
line:}}
 {{...e/lucene/tools/custom-tasks.xml:62: JAR resource does not exist: 
core/lib/avatica-core-1.9.0.jar}}


was (Author: sokolov):
Ah, OK sorry about that! I had run "ant javadocs" in the core module, and that 
passed OK, just with one warning about future change to html5 format, unrelated 
to my patch. I didn't realize there were additional checks in precommit – I'm 
running that now.

 

UPDATE - `ant precommit` fails its license check for me on a clean master 
checkout with this message:

 

{{check-licenses:}}
{{ [echo] License check under: .../solr}}{{BUILD FAILED}}
{{.../build.xml:117: The following error occurred while executing this line:}}
{{.../solr/build.xml:364: The following error occurred while executing this 
line:}}
{{...e/lucene/tools/custom-tasks.xml:62: JAR resource does not exist: 
core/lib/avatica-core-1.9.0.jar}}

> Index package level javadocs needs content
> --
>
> Key: LUCENE-765
> URL: https://issues.apache.org/jira/browse/LUCENE-765
> Project: Lucene - Core
>  Issue Type: Wish
>  Components: general/javadocs
>Reporter: Grant Ingersoll
>Priority: Minor
>  Labels: newdev
> Attachments: LUCENE-765.patch, LUCENE-765.patch, LUCENE-765.patch, 
> LUCENE-765.patch
>
>
> The org.apache.lucene.index package level javadocs are sorely lacking.  They 
> should be updated to give a summary of the important classes, how indexing 
> works, etc.  Maybe give an overview of how the different writers coordinate.  
> Links to file formats, information on the posting algorithm, etc. would be 
> helpful.
> See the search package javadocs as a sample of the kind of info that could go 
> here.






[jira] [Comment Edited] (LUCENE-765) Index package level javadocs needs content

2018-08-29 Thread Mike Sokolov (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596529#comment-16596529
 ] 

Mike Sokolov edited comment on LUCENE-765 at 8/29/18 4:27 PM:
--

Ah, OK sorry about that! I had run "ant javadocs" in the core module, and that 
passed OK, just with one warning about future change to html5 format, unrelated 
to my patch. I didn't realize there were additional checks in precommit – I'm 
running that now.

 

UPDATE - `ant precommit` fails its license check for me on a clean master 
checkout with this message:

 

{{check-licenses:}}
{{ [echo] License check under: .../solr}}{{BUILD FAILED}}
{{.../build.xml:117: The following error occurred while executing this line:}}
{{.../solr/build.xml:364: The following error occurred while executing this 
line:}}
{{...e/lucene/tools/custom-tasks.xml:62: JAR resource does not exist: 
core/lib/avatica-core-1.9.0.jar}}


was (Author: sokolov):
Ah, OK sorry about that! I had run "ant javadocs" in the core module, and that 
passed OK, just with one warning about future change to html5 format, unrelated 
to my patch. I didn't realize there were additional checks in precommit – I'm 
running that now.

> Index package level javadocs needs content
> --
>
> Key: LUCENE-765
> URL: https://issues.apache.org/jira/browse/LUCENE-765
> Project: Lucene - Core
>  Issue Type: Wish
>  Components: general/javadocs
>Reporter: Grant Ingersoll
>Priority: Minor
>  Labels: newdev
> Attachments: LUCENE-765.patch, LUCENE-765.patch, LUCENE-765.patch, 
> LUCENE-765.patch
>
>
> The org.apache.lucene.index package level javadocs are sorely lacking.  They 
> should be updated to give a summary of the important classes, how indexing 
> works, etc.  Maybe give an overview of how the different writers coordinate.  
> Links to file formats, information on the posting algorithm, etc. would be 
> helpful.
> See the search package javadocs as a sample of the kind of info that could go 
> here.






[jira] [Commented] (LUCENE-8470) Remove Legacy*DocValues classes

2018-08-29 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596541#comment-16596541
 ] 

Adrien Grand commented on LUCENE-8470:
--

Here is a patch. Instead of removing the legacy classes, I moved them to 
oal.codecs.memory in the lucene/codecs module as pkg-private classes since they 
are still used by the "Direct" doc-values format and migrating it to the new 
API requires quite some work. It would be nice to clean this up later.

> Remove Legacy*DocValues classes
> ---
>
> Key: LUCENE-8470
> URL: https://issues.apache.org/jira/browse/LUCENE-8470
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-8470.patch
>
>
> These classes had been added to keep supporting 6.x codecs when transitioning 
> from random-access doc values to sequential-access docvalues. We should 
> remove them.






[jira] [Updated] (LUCENE-8470) Remove Legacy*DocValues classes

2018-08-29 Thread Adrien Grand (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-8470:
-
Attachment: LUCENE-8470.patch

> Remove Legacy*DocValues classes
> ---
>
> Key: LUCENE-8470
> URL: https://issues.apache.org/jira/browse/LUCENE-8470
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-8470.patch
>
>
> These classes had been added to keep supporting 6.x codecs when transitioning 
> from random-access doc values to sequential-access docvalues. We should 
> remove them.






[jira] [Commented] (LUCENE-765) Index package level javadocs needs content

2018-08-29 Thread Mike Sokolov (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596529#comment-16596529
 ] 

Mike Sokolov commented on LUCENE-765:
-

Ah, OK sorry about that! I had run "ant javadocs" in the core module, and that 
passed OK, just with one warning about future change to html5 format, unrelated 
to my patch. I didn't realize there were additional checks in precommit – I'm 
running that now.

> Index package level javadocs needs content
> --
>
> Key: LUCENE-765
> URL: https://issues.apache.org/jira/browse/LUCENE-765
> Project: Lucene - Core
>  Issue Type: Wish
>  Components: general/javadocs
>Reporter: Grant Ingersoll
>Priority: Minor
>  Labels: newdev
> Attachments: LUCENE-765.patch, LUCENE-765.patch, LUCENE-765.patch, 
> LUCENE-765.patch
>
>
> The org.apache.lucene.index package level javadocs are sorely lacking.  They 
> should be updated to give a summary of the important classes, how indexing 
> works, etc.  Maybe give an overview of how the different writers coordinate.  
> Links to file formats, information on the posting algorithm, etc. would be 
> helpful.
> See the search package javadocs as a sample of the kind of info that could go 
> here.






[jira] [Resolved] (SOLR-12714) Facet query is not working solrcloud

2018-08-29 Thread Erick Erickson (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-12714.
---
Resolution: Invalid

It's pretty unlikely that faceting isn't working at all; I strongly suspect 
this is some other issue.

So please ask the question here: solr-u...@lucene.apache.org, see: 
http://lucene.apache.org/solr/community.html#mailing-lists-irc

When you raise the question on the user's list, include pertinent details, 
including sample data, queries and responses if possible, along with what you 
expect to see but don't.

If the consensus there is that there are code issues, we can reopen this JIRA 
or create a new one.

> Facet query is not working solrcloud 
> -
>
> Key: SOLR-12714
> URL: https://issues.apache.org/jira/browse/SOLR-12714
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module, faceting, search, SolrCloud
>Affects Versions: 6.6
>Reporter: adeppa
>Priority: Major
>
> Environment:
> SolrCloud with two nodes; each collection has two shards and a replication 
> factor of 2.
>  
> Simple faceting is not working in SolrCloud, as with the query below:
> /solr/qa-res/select?facet.field=tf_toc_search=on=id:dc65t0-insight_platform_content-1029743-1=on=*:*=json
> Exception information :
> "error":\{ "metadata":[ "error-class","org.apache.solr.common.SolrException", 
> "root-error-class","org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException"],
>  "msg":"org.apache.solr.client.solrj.SolrServerException: No live SolrServers 
> available to handle this 
> request:[http://172.22.1.56:8983/solr/qa-res_shard2_replica0, 
> http://172.22.1.56:8983/solr/qa-res_shard1_replica0, 
> http://172.22.0.231:8983/solr/qa-res_shard2_replica1];, 
> "trace":"org.apache.solr.common.SolrException: 
> org.apache.solr.client.solrj.SolrServerException: No live SolrServers 
> available to handle this 
> request:[http://172.22.1.56:8983/solr/qa-res_shard2_replica0, 
> http://172.22.1.56:8983/solr/qa-res_shard1_replica0, 
> http://172.22.0.231:8983/solr/qa-res_shard2_replica1]\n\tat 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:416)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)\n\tat
>  org.apache.solr.core.SolrCore.execute(SolrCore.java:2477)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:723)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:529)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)\n\tat
>  
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)\n\tat
>  org.eclipse.jetty.server.Server.handle(Server.java:534)\n\tat 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)\n\tat 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)\n\tat
>  
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)\n\tat
>  org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)\n\tat 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)\n\tat
>  
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)\n\tat
>  
> 

Re: [VOTE] Release PyLucene 7.4.0 (rc1)

2018-08-29 Thread Aric Coady
On Aug 28, 2018, at 11:05 AM, Andi Vajda  wrote:
> 
> 
> The PyLucene 7.4.0 (rc1) release tracking the recent release of
> Apache Lucene 7.4.0 is ready.
> 
> A release candidate is available from:
>  https://dist.apache.org/repos/dist/dev/lucene/pylucene/7.4.0-rc1/
> 
> PyLucene 7.4.0 is built with JCC 3.2 included in these release artifacts.
> 
> JCC 3.2 supports Python 3.3+ (in addition to Python 2.3+).
> PyLucene may be built with Python 2 or Python 3.
> 
> Please vote to release these artifacts as PyLucene 7.4.0.
> Anyone interested in this release can and should vote !

+1.  Release candidate builds are also available for Docker and Homebrew:
$ docker pull coady/pylucene:rc
$ brew install coady/tap/pylucene

> Thanks !
> 
> Andi..
> 
> ps: the KEYS file for PyLucene release signing is at:
> https://dist.apache.org/repos/dist/release/lucene/pylucene/KEYS
> https://dist.apache.org/repos/dist/dev/lucene/pylucene/KEYS
> 
> pps: here is my +1



[jira] [Commented] (SOLR-12662) Reproducing TestPolicy failures: NPE and NoClassDefFoundError

2018-08-29 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596474#comment-16596474
 ] 

ASF subversion and git services commented on SOLR-12662:


Commit 098f475a671c88bf09cb2e73af631fd45ee5c5ef in lucene-solr's branch 
refs/heads/master from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=098f475 ]

SOLR-12662: Eliminate possible race conditions by moving Type-by-name map 
construction to Variable.Type, accessible via Variable.Type.get(name)


> Reproducing TestPolicy failures: NPE and NoClassDefFoundError
> -
>
> Key: SOLR-12662
> URL: https://issues.apache.org/jira/browse/SOLR-12662
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, Tests
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Major
> Attachments: SOLR-12662-part2.patch, SOLR-12662.patch
>
>
> From [https://builds.apache.org/job/Lucene-Solr-Tests-7.x/773/]:
> {noformat}
>[junit4] Suite: org.apache.solr.client.solrj.cloud.autoscaling.TestPolicy
>[junit4]   2> Creating dataDir: 
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-solrj/test/J0/temp/solr.client.solrj.cloud.autoscaling.TestPolicy_D876F0AD4FD0DF80-001/init-core-data-001
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestPolicy 
> -Dtests.method=testWithCollection -Dtests.seed=D876F0AD4FD0DF80 
> -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=sr 
> -Dtests.timezone=Europe/Busingen -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] ERROR   0.09s J0 | TestPolicy.testWithCollection <<<
>[junit4]> Throwable #1: java.lang.ExceptionInInitializerError
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([D876F0AD4FD0DF80:575A9671946EA761]:0)
>[junit4]>  at 
> org.apache.solr.client.solrj.cloud.autoscaling.Variable$Type.(Variable.java:242)
>[junit4]>  at 
> org.apache.solr.client.solrj.cloud.autoscaling.Variable$Type.(Variable.java:85)
>[junit4]>  at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.getNodeValues(SolrClientNodeStateProvider.java:130)
>[junit4]>  at 
> org.apache.solr.client.solrj.cloud.autoscaling.TestPolicy.testWithCollection(TestPolicy.java:244)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
>[junit4]> Caused by: java.lang.NullPointerException
>[junit4]>  at 
> org.apache.solr.client.solrj.cloud.autoscaling.Variable$Type.values(Variable.java:84)
>[junit4]>  at 
> org.apache.solr.client.solrj.cloud.autoscaling.VariableBase.(VariableBase.java:203)
>[junit4]>  ... 43 more
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestPolicy 
> -Dtests.method=testEmptyClusterState -Dtests.seed=D876F0AD4FD0DF80 
> -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=sr 
> -Dtests.timezone=Europe/Busingen -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] ERROR   0.07s J0 | TestPolicy.testEmptyClusterState <<<
>[junit4]> Throwable #1: java.lang.NoClassDefFoundError: Could not 
> initialize class org.apache.solr.client.solrj.cloud.autoscaling.VariableBase
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([D876F0AD4FD0DF80:39224A25B6C6C014]:0)
>[junit4]>  at 
> org.apache.solr.client.solrj.cloud.autoscaling.Policy.(Policy.java:127)
>[junit4]>  at 
> org.apache.solr.client.solrj.cloud.autoscaling.AutoScalingConfig.getPolicy(AutoScalingConfig.java:353)
>[junit4]>  at 
> org.apache.solr.client.solrj.cloud.autoscaling.PolicyHelper$SessionRef.createSession(PolicyHelper.java:356)
>[junit4]>  at 
> org.apache.solr.client.solrj.cloud.autoscaling.PolicyHelper$SessionRef.get(PolicyHelper.java:321)
>[junit4]>  at 
> org.apache.solr.client.solrj.cloud.autoscaling.PolicyHelper.getSession(PolicyHelper.java:377)
>[junit4]>  at 
> org.apache.solr.client.solrj.cloud.autoscaling.PolicyHelper.getReplicaLocations(PolicyHelper.java:113)
>[junit4]>  at 
> org.apache.solr.client.solrj.cloud.autoscaling.TestPolicy.testEmptyClusterState(TestPolicy.java:2185)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestPolicy 
> -Dtests.method=testUtilizeNodeFailure -Dtests.seed=D876F0AD4FD0DF80 
> -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=sr 
> -Dtests.timezone=Europe/Busingen -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] ERROR   0.02s J0 | TestPolicy.testUtilizeNodeFailure <<<
>[junit4]> Throwable #1: java.lang.NoClassDefFoundError: Could not 
> 

[jira] [Commented] (SOLR-12662) Reproducing TestPolicy failures: NPE and NoClassDefFoundError

2018-08-29 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596473#comment-16596473
 ] 

ASF subversion and git services commented on SOLR-12662:


Commit 9fcd4929db83f8302ff4a3021247f60db4af4b8e in lucene-solr's branch 
refs/heads/branch_7x from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9fcd492 ]

SOLR-12662: Eliminate possible race conditions by moving Type-by-name map 
construction to Variable.Type, accessible via Variable.Type.get(name)


> Reproducing TestPolicy failures: NPE and NoClassDefFoundError
> -
>
> Key: SOLR-12662
> URL: https://issues.apache.org/jira/browse/SOLR-12662
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, Tests
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Major
> Attachments: SOLR-12662-part2.patch, SOLR-12662.patch
>
>
> From [https://builds.apache.org/job/Lucene-Solr-Tests-7.x/773/]:
> {noformat}
>[junit4] Suite: org.apache.solr.client.solrj.cloud.autoscaling.TestPolicy
>[junit4]   2> Creating dataDir: 
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-solrj/test/J0/temp/solr.client.solrj.cloud.autoscaling.TestPolicy_D876F0AD4FD0DF80-001/init-core-data-001
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestPolicy 
> -Dtests.method=testWithCollection -Dtests.seed=D876F0AD4FD0DF80 
> -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=sr 
> -Dtests.timezone=Europe/Busingen -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] ERROR   0.09s J0 | TestPolicy.testWithCollection <<<
>[junit4]> Throwable #1: java.lang.ExceptionInInitializerError
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([D876F0AD4FD0DF80:575A9671946EA761]:0)
>[junit4]>  at 
> org.apache.solr.client.solrj.cloud.autoscaling.Variable$Type.(Variable.java:242)
>[junit4]>  at 
> org.apache.solr.client.solrj.cloud.autoscaling.Variable$Type.(Variable.java:85)
>[junit4]>  at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.getNodeValues(SolrClientNodeStateProvider.java:130)
>[junit4]>  at 
> org.apache.solr.client.solrj.cloud.autoscaling.TestPolicy.testWithCollection(TestPolicy.java:244)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
>[junit4]> Caused by: java.lang.NullPointerException
>[junit4]>  at 
> org.apache.solr.client.solrj.cloud.autoscaling.Variable$Type.values(Variable.java:84)
>[junit4]>  at 
> org.apache.solr.client.solrj.cloud.autoscaling.VariableBase.(VariableBase.java:203)
>[junit4]>  ... 43 more
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestPolicy 
> -Dtests.method=testEmptyClusterState -Dtests.seed=D876F0AD4FD0DF80 
> -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=sr 
> -Dtests.timezone=Europe/Busingen -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] ERROR   0.07s J0 | TestPolicy.testEmptyClusterState <<<
>[junit4]> Throwable #1: java.lang.NoClassDefFoundError: Could not 
> initialize class org.apache.solr.client.solrj.cloud.autoscaling.VariableBase
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([D876F0AD4FD0DF80:39224A25B6C6C014]:0)
>[junit4]>  at 
> org.apache.solr.client.solrj.cloud.autoscaling.Policy.(Policy.java:127)
>[junit4]>  at 
> org.apache.solr.client.solrj.cloud.autoscaling.AutoScalingConfig.getPolicy(AutoScalingConfig.java:353)
>[junit4]>  at 
> org.apache.solr.client.solrj.cloud.autoscaling.PolicyHelper$SessionRef.createSession(PolicyHelper.java:356)
>[junit4]>  at 
> org.apache.solr.client.solrj.cloud.autoscaling.PolicyHelper$SessionRef.get(PolicyHelper.java:321)
>[junit4]>  at 
> org.apache.solr.client.solrj.cloud.autoscaling.PolicyHelper.getSession(PolicyHelper.java:377)
>[junit4]>  at 
> org.apache.solr.client.solrj.cloud.autoscaling.PolicyHelper.getReplicaLocations(PolicyHelper.java:113)
>[junit4]>  at 
> org.apache.solr.client.solrj.cloud.autoscaling.TestPolicy.testEmptyClusterState(TestPolicy.java:2185)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestPolicy 
> -Dtests.method=testUtilizeNodeFailure -Dtests.seed=D876F0AD4FD0DF80 
> -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=sr 
> -Dtests.timezone=Europe/Busingen -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] ERROR   0.02s J0 | TestPolicy.testUtilizeNodeFailure <<<
>[junit4]> Throwable #1: java.lang.NoClassDefFoundError: Could not 

[jira] [Updated] (LUCENE-8470) Remove Legacy*DocValues classes

2018-08-29 Thread Adrien Grand (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-8470:
-
Attachment: (was: LUCENE-8470.patch)

> Remove Legacy*DocValues classes
> ---
>
> Key: LUCENE-8470
> URL: https://issues.apache.org/jira/browse/LUCENE-8470
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
>
> These classes had been added to keep supporting 6.x codecs when transitioning 
> from random-access doc values to sequential-access docvalues. We should 
> remove them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8470) Remove Legacy*DocValues classes

2018-08-29 Thread Adrien Grand (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-8470:
-
Attachment: LUCENE-8470.patch

> Remove Legacy*DocValues classes
> ---
>
> Key: LUCENE-8470
> URL: https://issues.apache.org/jira/browse/LUCENE-8470
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-8470.patch
>
>
> These classes had been added to keep supporting 6.x codecs when transitioning 
> from random-access doc values to sequential-access docvalues. We should 
> remove them.






[jira] [Assigned] (SOLR-12519) Support Deeply Nested Docs In Child Documents Transformer

2018-08-29 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley reassigned SOLR-12519:
---

   Resolution: Fixed
 Assignee: David Smiley
Fix Version/s: 7.5

> Support Deeply Nested Docs In Child Documents Transformer
> -
>
> Key: SOLR-12519
> URL: https://issues.apache.org/jira/browse/SOLR-12519
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.5
>
> Attachments: SOLR-12519-fix-solrj-tests.patch, 
> SOLR-12519-no-commit.patch, SOLR-12519.patch
>
>  Time Spent: 25.5h
>  Remaining Estimate: 0h
>
> As discussed in SOLR-12298, to make use of the meta-data fields in 
> SOLR-12441, there needs to be a smarter child document transformer, which 
> provides the ability to rebuild the original nested documents' structure.
>  In addition, I also propose that the transformer have the ability to 
> bring only some of the original hierarchy, to prevent unnecessary block join 
> queries, e.g.
> {code}  {"a": "b", "c": [ {"e": "f"}, {"e": "g"} , {"h": "i"} ]} {code}
>  In case my query is for all the children of "a:b" which contain the key "e" 
> in them, the query will be broken into two parts:
>  1. The parent query "a:b"
>  2. The child query "e:*".
> If the only-children flag is on, the transformer will return the following 
> documents:
>  {code}[ {"e": "f"}, {"e": "g"} ]{code}
> In case the flag was not turned on (perhaps the default state), the whole 
> document hierarchy will be returned, containing only the matching children:
> {code}{"a": "b", "c": [ {"e": "f"}, {"e": "g"} ]}{code}






[jira] [Commented] (LUCENE-8471) Expose the number of bytes currently being flushed in IndexWriter

2018-08-29 Thread Simon Willnauer (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596429#comment-16596429
 ] 

Simon Willnauer commented on LUCENE-8471:
-

+1 LGTM

> Expose the number of bytes currently being flushed in IndexWriter
> -
>
> Key: LUCENE-8471
> URL: https://issues.apache.org/jira/browse/LUCENE-8471
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8471.patch
>
>
> This is already available via the DocumentWriter and flush control.  Making 
> it public on IndexWriter would allow for better memory accounting when using 
> IndexWriter#flushNextBuffer.






[GitHub] lucene-solr pull request #416: WIP: SOLR-12519

2018-08-29 Thread moshebla
Github user moshebla closed the pull request at:

https://github.com/apache/lucene-solr/pull/416


---




[GitHub] lucene-solr issue #416: WIP: SOLR-12519

2018-08-29 Thread moshebla
Github user moshebla commented on the issue:

https://github.com/apache/lucene-solr/pull/416
  
was committed in commit 5a0e7a615a9b1e7ac97c6b0f9e5604dcc1aeb03f


---




[jira] [Commented] (LUCENE-8471) Expose the number of bytes currently being flushed in IndexWriter

2018-08-29 Thread Nhat Nguyen (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596395#comment-16596395
 ] 

Nhat Nguyen commented on LUCENE-8471:
-

I think we should change only `flushBytes`, since it is the only one we expose for now.

> Expose the number of bytes currently being flushed in IndexWriter
> -
>
> Key: LUCENE-8471
> URL: https://issues.apache.org/jira/browse/LUCENE-8471
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8471.patch
>
>
> This is already available via the DocumentWriter and flush control.  Making 
> it public on IndexWriter would allow for better memory accounting when using 
> IndexWriter#flushNextBuffer.






[jira] [Commented] (LUCENE-8471) Expose the number of bytes currently being flushed in IndexWriter

2018-08-29 Thread Alan Woodward (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596389#comment-16596389
 ] 

Alan Woodward commented on LUCENE-8471:
---

Sure.  Should I do the same for activeBytes() and netBytes()?

> Expose the number of bytes currently being flushed in IndexWriter
> -
>
> Key: LUCENE-8471
> URL: https://issues.apache.org/jira/browse/LUCENE-8471
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8471.patch
>
>
> This is already available via the DocumentWriter and flush control.  Making 
> it public on IndexWriter would allow for better memory accounting when using 
> IndexWriter#flushNextBuffer.






[jira] [Commented] (SOLR-12519) Support Deeply Nested Docs In Child Documents Transformer

2018-08-29 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596383#comment-16596383
 ] 

David Smiley commented on SOLR-12519:
-

This morning I spent some time carefully indenting the document string literal 
in TestChildDocTransformerHierarchy#generateDocHierarchy and a couple of its 
nearby methods so that I could more clearly see what was going on.  I also 
enhanced testParentFilterLimitJSON to test that it does *not* return the 
"toppings", because they follow the "lonely" stuff which is returned.  (a quick 
try of increasing the limit caused the assertion to fail so I think it's 
right).  I also simplified the limit/match logic, and used ">=" which I think 
is more clearly correct than "==" since we collect more matches due to needing 
ancestors.

Please close the PR.

> Support Deeply Nested Docs In Child Documents Transformer
> -
>
> Key: SOLR-12519
> URL: https://issues.apache.org/jira/browse/SOLR-12519
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Priority: Major
> Attachments: SOLR-12519-fix-solrj-tests.patch, 
> SOLR-12519-no-commit.patch, SOLR-12519.patch
>
>  Time Spent: 25h 10m
>  Remaining Estimate: 0h
>
> As discussed in SOLR-12298, to make use of the meta-data fields in 
> SOLR-12441, there needs to be a smarter child document transformer, which 
> provides the ability to rebuild the original nested documents' structure.
>  In addition, I also propose that the transformer have the ability to 
> bring only some of the original hierarchy, to prevent unnecessary block join 
> queries, e.g.
> {code}  {"a": "b", "c": [ {"e": "f"}, {"e": "g"} , {"h": "i"} ]} {code}
>  In case my query is for all the children of "a:b" which contain the key "e" 
> in them, the query will be broken into two parts:
>  1. The parent query "a:b"
>  2. The child query "e:*".
> If the only-children flag is on, the transformer will return the following 
> documents:
>  {code}[ {"e": "f"}, {"e": "g"} ]{code}
> In case the flag was not turned on (perhaps the default state), the whole 
> document hierarchy will be returned, containing only the matching children:
> {code}{"a": "b", "c": [ {"e": "f"}, {"e": "g"} ]}{code}






[jira] [Commented] (LUCENE-8471) Expose the number of bytes currently being flushed in IndexWriter

2018-08-29 Thread Nhat Nguyen (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596380#comment-16596380
 ] 

Nhat Nguyen commented on LUCENE-8471:
-

Can we make `flushBytes` in DocumentsWriterFlushControl a volatile field and 
make its getter unsynchronized?
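The shape being suggested (serialized writes, lock-free reads) can be sketched as follows. Java's `volatile` has no direct Python equivalent, so this toy class with hypothetical names only illustrates the idea, not the DocumentsWriterFlushControl code:

```python
import threading

class FlushCounter:
    """Toy analogue of the suggestion above: writers serialize on a lock,
    while the getter reads the counter without acquiring it."""

    def __init__(self):
        self._lock = threading.Lock()
        self._flush_bytes = 0

    def add_flush_bytes(self, delta):
        with self._lock:            # writes stay mutually exclusive
            self._flush_bytes += delta

    def flush_bytes(self):
        return self._flush_bytes    # read without taking the lock

c = FlushCounter()
c.add_flush_bytes(1024)
c.add_flush_bytes(512)
print(c.flush_bytes())  # 1536
```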

> Expose the number of bytes currently being flushed in IndexWriter
> -
>
> Key: LUCENE-8471
> URL: https://issues.apache.org/jira/browse/LUCENE-8471
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8471.patch
>
>
> This is already available via the DocumentWriter and flush control.  Making 
> it public on IndexWriter would allow for better memory accounting when using 
> IndexWriter#flushNextBuffer.






[jira] [Commented] (SOLR-12519) Support Deeply Nested Docs In Child Documents Transformer

2018-08-29 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596376#comment-16596376
 ] 

ASF subversion and git services commented on SOLR-12519:


Commit 171cfc8e8e4d4e3f0061aa181c28c14e967a350f in lucene-solr's branch 
refs/heads/branch_7x from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=171cfc8 ]

SOLR-12519: child doc transformer can now produce a nested structure.
Fixed SolrDocument's confusion of field-attached child documents in addField()
Fixed AtomicUpdateDocumentMerger's confusion of field-attached child documents 
in isAtomicUpdate()

(cherry picked from commit 5a0e7a615a9b1e7ac97c6b0f9e5604dcc1aeb03f)


> Support Deeply Nested Docs In Child Documents Transformer
> -
>
> Key: SOLR-12519
> URL: https://issues.apache.org/jira/browse/SOLR-12519
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Priority: Major
> Attachments: SOLR-12519-fix-solrj-tests.patch, 
> SOLR-12519-no-commit.patch, SOLR-12519.patch
>
>  Time Spent: 25h 10m
>  Remaining Estimate: 0h
>
> As discussed in SOLR-12298, to make use of the meta-data fields in 
> SOLR-12441, there needs to be a smarter child document transformer, which 
> provides the ability to rebuild the original nested documents' structure.
>  In addition, I also propose that the transformer have the ability to 
> bring only some of the original hierarchy, to prevent unnecessary block join 
> queries, e.g.
> {code}  {"a": "b", "c": [ {"e": "f"}, {"e": "g"} , {"h": "i"} ]} {code}
>  In case my query is for all the children of "a:b" which contain the key "e" 
> in them, the query will be broken into two parts:
>  1. The parent query "a:b"
>  2. The child query "e:*".
> If the only-children flag is on, the transformer will return the following 
> documents:
>  {code}[ {"e": "f"}, {"e": "g"} ]{code}
> In case the flag was not turned on (perhaps the default state), the whole 
> document hierarchy will be returned, containing only the matching children:
> {code}{"a": "b", "c": [ {"e": "f"}, {"e": "g"} ]}{code}






[jira] [Commented] (SOLR-12519) Support Deeply Nested Docs In Child Documents Transformer

2018-08-29 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596374#comment-16596374
 ] 

ASF subversion and git services commented on SOLR-12519:


Commit 5a0e7a615a9b1e7ac97c6b0f9e5604dcc1aeb03f in lucene-solr's branch 
refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5a0e7a6 ]

SOLR-12519: child doc transformer can now produce a nested structure.
Fixed SolrDocument's confusion of field-attached child documents in addField()
Fixed AtomicUpdateDocumentMerger's confusion of field-attached child documents 
in isAtomicUpdate()


> Support Deeply Nested Docs In Child Documents Transformer
> -
>
> Key: SOLR-12519
> URL: https://issues.apache.org/jira/browse/SOLR-12519
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Priority: Major
> Attachments: SOLR-12519-fix-solrj-tests.patch, 
> SOLR-12519-no-commit.patch, SOLR-12519.patch
>
>  Time Spent: 25h 10m
>  Remaining Estimate: 0h
>
> As discussed in SOLR-12298, to make use of the meta-data fields in 
> SOLR-12441, there needs to be a smarter child document transformer, which 
> provides the ability to rebuild the original nested documents' structure.
>  In addition, I also propose that the transformer have the ability to 
> bring only some of the original hierarchy, to prevent unnecessary block join 
> queries, e.g.
> {code}  {"a": "b", "c": [ {"e": "f"}, {"e": "g"} , {"h": "i"} ]} {code}
>  In case my query is for all the children of "a:b" which contain the key "e" 
> in them, the query will be broken into two parts:
>  1. The parent query "a:b"
>  2. The child query "e:*".
> If the only-children flag is on, the transformer will return the following 
> documents:
>  {code}[ {"e": "f"}, {"e": "g"} ]{code}
> In case the flag was not turned on (perhaps the default state), the whole 
> document hierarchy will be returned, containing only the matching children:
> {code}{"a": "b", "c": [ {"e": "f"}, {"e": "g"} ]}{code}






[jira] [Commented] (LUCENE-8471) Expose the number of bytes currently being flushed in IndexWriter

2018-08-29 Thread Alan Woodward (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596365#comment-16596365
 ] 

Alan Woodward commented on LUCENE-8471:
---

Patch.  I changed tests that were reaching through DocumentWriter and 
FlushControl to get this value to use the IndexWriter method instead.

> Expose the number of bytes currently being flushed in IndexWriter
> -
>
> Key: LUCENE-8471
> URL: https://issues.apache.org/jira/browse/LUCENE-8471
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8471.patch
>
>
> This is already available via the DocumentWriter and flush control.  Making 
> it public on IndexWriter would allow for better memory accounting when using 
> IndexWriter#flushNextBuffer.






[jira] [Updated] (LUCENE-8471) Expose the number of bytes currently being flushed in IndexWriter

2018-08-29 Thread Alan Woodward (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-8471:
--
Attachment: LUCENE-8471.patch

> Expose the number of bytes currently being flushed in IndexWriter
> -
>
> Key: LUCENE-8471
> URL: https://issues.apache.org/jira/browse/LUCENE-8471
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8471.patch
>
>
> This is already available via the DocumentWriter and flush control.  Making 
> it public on IndexWriter would allow for better memory accounting when using 
> IndexWriter#flushNextBuffer.






[jira] [Created] (LUCENE-8471) Expose the number of bytes currently being flushed in IndexWriter

2018-08-29 Thread Alan Woodward (JIRA)
Alan Woodward created LUCENE-8471:
-

 Summary: Expose the number of bytes currently being flushed in 
IndexWriter
 Key: LUCENE-8471
 URL: https://issues.apache.org/jira/browse/LUCENE-8471
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Alan Woodward
 Attachments: LUCENE-8471.patch

This is already available via the DocumentWriter and flush control.  Making it 
public on IndexWriter would allow for better memory accounting when using 
IndexWriter#flushNextBuffer.






[jira] [Commented] (LUCENE-8196) Add IntervalQuery and IntervalsSource to expose minimum interval semantics across term fields

2018-08-29 Thread Alan Woodward (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596348#comment-16596348
 ] 

Alan Woodward commented on LUCENE-8196:
---

Hi [~Martin Hermann]

Thanks for the detailed feedback - this is very helpful!

1) As with Spans, one way to fix the issue with OR intervals is to change the 
precedence rules so that longer intervals sort before their prefixes.  I need 
to go re-read the paper's proof concerning the OR operator, it would be 
interesting to see if this ends up causing problems elsewhere .  Another option 
would be to add a separate IntervalsSource with this behaviour, maybe triggered 
as a parameter on {{Intervals.or()}}

2) Intervals don't really have the notion of 'slop' that Spans do, but we could 
add the idea of an 'internal slop' to ordered and unordered spans.  This would 
be measured as the space within an interval not taken up by the component 
intervals.  I think your {{("big bad" OR evil) wolf}} query can already be done 
using {{Intervals.phrase()}}?

3) Spans have the notion of a 'gap' Span, which could be usefully added here.  
This could help with avoiding minimization in your CONTAINS query.
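As a toy illustration of the minimum-interval idea behind these operators (not Lucene's algorithm), an ordered two-term operator can pair each occurrence of the second term with the nearest preceding occurrence of the first:

```python
import bisect

def ordered_intervals(first_positions, second_positions):
    """Emit candidate minimal intervals for an ordered pair of terms by
    pairing each position of the second term with the closest earlier
    position of the first.  Both inputs must be sorted; a complete
    implementation would also discard intervals containing another."""
    out = []
    for s in second_positions:
        i = bisect.bisect_left(first_positions, s) - 1
        if i >= 0:
            out.append((first_positions[i], s))
    return out

# "big" at positions 0 and 7, "wolf" at positions 3 and 9
print(ordered_intervals([0, 7], [3, 9]))  # [(0, 3), (7, 9)]
```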

> Add IntervalQuery and IntervalsSource to expose minimum interval semantics 
> across term fields
> -
>
> Key: LUCENE-8196
> URL: https://issues.apache.org/jira/browse/LUCENE-8196
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Fix For: 7.4
>
> Attachments: LUCENE-8196-debug.patch, LUCENE-8196.patch, 
> LUCENE-8196.patch, LUCENE-8196.patch, LUCENE-8196.patch, LUCENE-8196.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This ticket proposes an alternative implementation of the SpanQuery family 
> that uses minimum-interval semantics from 
> [http://vigna.di.unimi.it/ftp/papers/EfficientAlgorithmsMinimalIntervalSemantics.pdf]
>  to implement positional queries across term-based fields.  Rather than using 
> TermQueries to construct the interval operators, as in LUCENE-2878 or the 
> current Spans implementation, we instead use a new IntervalsSource object, 
> which will produce IntervalIterators over a particular segment and field.  
> These are constructed using various static helper methods, and can then be 
> passed to a new IntervalQuery which will return documents that contain one or 
> more intervals so defined.






[jira] [Created] (LUCENE-8470) Remove Legacy*DocValues classes

2018-08-29 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-8470:


 Summary: Remove Legacy*DocValues classes
 Key: LUCENE-8470
 URL: https://issues.apache.org/jira/browse/LUCENE-8470
 Project: Lucene - Core
  Issue Type: Task
Reporter: Adrien Grand


These classes had been added to keep supporting 6.x codecs when transitioning 
from random-access doc values to sequential-access docvalues. We should remove 
them.






[jira] [Commented] (LUCENE-8469) Inline calls to the deprecated StringHelper.compare

2018-08-29 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596332#comment-16596332
 ] 

Adrien Grand commented on LUCENE-8469:
--

Let's also remove StringHelper#compare on master? The patch looks like it was 
generated automatically; could you change it so that it doesn't perform sums 
with zero, e.g. {{FutureArrays.compareUnsigned(packedValue, Integer.BYTES, 
Integer.BYTES + Integer.BYTES, maxLon, 0, Integer.BYTES)}} instead of 
{{FutureArrays.compareUnsigned(packedValue, Integer.BYTES, Integer.BYTES + 
Integer.BYTES, maxLon, 0, 0 + Integer.BYTES)}}?


> Inline calls to the deprecated StringHelper.compare
> ---
>
> Key: LUCENE-8469
> URL: https://issues.apache.org/jira/browse/LUCENE-8469
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: 7.5
>
> Attachments: LUCENE-8469.patch
>
>
> In an attempt to limit the number of warnings during compilation I thought 
> it'd be nice to clean up our own stuff. This is a start: StringHelper.compare 
> is used throughout the code and is delegated to FutureArrays (where it 
> belongs, as the arguments are byte[], not Strings).
> This can cause other patches to not apply anymore... so we could apply this 
> to master only. If anybody has a strong feeling about it, please voice it. 
> The patch is trivial.






[jira] [Commented] (LUCENE-765) Index package level javadocs needs content

2018-08-29 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596326#comment-16596326
 ] 

Adrien Grand commented on LUCENE-765:
-

[~sokolov] I have precommit failures with this patch, which seem to be due to 
the names of the referenced classes not being fully qualified.

> Index package level javadocs needs content
> --
>
> Key: LUCENE-765
> URL: https://issues.apache.org/jira/browse/LUCENE-765
> Project: Lucene - Core
>  Issue Type: Wish
>  Components: general/javadocs
>Reporter: Grant Ingersoll
>Priority: Minor
>  Labels: newdev
> Attachments: LUCENE-765.patch, LUCENE-765.patch, LUCENE-765.patch, 
> LUCENE-765.patch
>
>
> The org.apache.lucene.index package level javadocs are sorely lacking.  They 
> should be updated to give a summary of the important classes, how indexing 
> works, etc.  Maybe give an overview of how the different writers coordinate.  
> Links to file formats, information on the posting algorithm, etc. would be 
> helpful.
> See the search package javadocs as a sample of the kind of info that could go 
> here.






[jira] [Commented] (SOLR-12526) Metrics History doesn't work with AuthenticationPlugin

2018-08-29 Thread Michal Hlavac (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596302#comment-16596302
 ] 

Michal Hlavac commented on SOLR-12526:
--

Thanks [~janhoy] for help and suggestions.

> Metrics History doesn't work with AuthenticationPlugin
> --
>
> Key: SOLR-12526
> URL: https://issues.apache.org/jira/browse/SOLR-12526
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication, metrics
>Affects Versions: 7.4
>Reporter: Michal Hlavac
>Priority: Critical
> Attachments: ProxyAuthPlugin.java
>
>
> Since Solr 7.4.0 there is Metrics History, which uses a SolrJ client to make 
> HTTP requests to Solr. But it doesn't work with an AuthenticationPlugin. Since 
> it is enabled by default, there are errors in the log every time 
> {{MetricsHistoryHandler}} tries to collect data.
> {code:java}
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://172.20.0.5:8983/solr: Expected mime type 
> application/octet-stream but got text/html. 
> 
> 
> Error 401 require authentication
> 
> HTTP ERROR 401
> Problem accessing /solr/admin/metrics. Reason:
>     require authentication
> 
> 
>    at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:607)
>  ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>    at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
>  ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>    at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
>  ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>    at 
> org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219) 
> ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>    at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider$ClientSnitchCtx.invoke(SolrClientNodeStateProvider.java:292)
>  ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:1
> 4]
>    at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.fetchMetrics(SolrClientNodeStateProvider.java:150)
>  [solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>    at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider$AutoScalingSnitch.getRemoteInfo(SolrClientNodeStateProvider.java:199)
>  [solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18
> 16:55:14]
>    at 
> org.apache.solr.common.cloud.rule.ImplicitSnitch.getTags(ImplicitSnitch.java:76)
>  [solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>    at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.getNodeValues(SolrClientNodeStateProvider.java:111)
>  [solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>    at 
> org.apache.solr.handler.admin.MetricsHistoryHandler.collectGlobalMetrics(MetricsHistoryHandler.java:495)
>  [solr-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:13]
>    at 
> org.apache.solr.handler.admin.MetricsHistoryHandler.collectMetrics(MetricsHistoryHandler.java:368)
>  [solr-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:13]
>    at 
> org.apache.solr.handler.admin.MetricsHistoryHandler.lambda$new$0(MetricsHistoryHandler.java:230)
>  [solr-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:13]
>    at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514) [?:?]
>    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) 
> [?:?]
>    at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
>  [?:?]
>    at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)
>  [?:?]
>    at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
>  [?:?]
>    at java.lang.Thread.run(Thread.java:844) [?:?]
> {code}






[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-10) - Build # 7492 - Unstable!

2018-08-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7492/
Java: 64bit/jdk-10 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([F46C8CD1E6D40804:7C38B30B482865FC]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:202)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1008)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:983)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.4) - Build # 22763 - Unstable!

2018-08-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22763/
Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

4 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMixedBounds

Error Message:
expected: but was:

Stack Trace:
java.lang.AssertionError: expected: but was:
at 
__randomizedtesting.SeedInfo.seed([D23D61A189A44D61:D8BEDE0CC41F463B]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMixedBounds(IndexSizeTriggerTest.java:669)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMixedBounds

Error Message:
expected: but was:

Stack Trace:
java.lang.AssertionError: expected: but was:
at 

[jira] [Created] (SOLR-12717) Support #EACH for collections so that collection/shard pairs can be uniformly distributed

2018-08-29 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-12717:


 Summary: Support #EACH for collections so that collection/shard 
pairs can be uniformly distributed
 Key: SOLR-12717
 URL: https://issues.apache.org/jira/browse/SOLR-12717
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: AutoScaling
Reporter: Shalin Shekhar Mangar
 Fix For: master (8.0), 7.5


See the third goal of the question at 
https://stackoverflow.com/questions/50839060/solr-autoscaling-add-replicas-on-new-nodes

The user wants to ensure that "Only one replica of each collection should exist 
on a node". We'd need support for collection:#EACH in a rule to support this 
use-case.
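The rule being asked for might look like the following hypothetical autoscaling policy entry; {{collection:#EACH}} is exactly the part that is not supported yet, so this is illustrative only, not working syntax:

```json
{"replica": "<2", "collection": "#EACH", "node": "#ANY"}
```

i.e. fewer than two replicas of every collection on any one node.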






[jira] [Created] (SOLR-12716) NodeLostTrigger should support deleting replicas from lost nodes

2018-08-29 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-12716:


 Summary: NodeLostTrigger should support deleting replicas from 
lost nodes
 Key: SOLR-12716
 URL: https://issues.apache.org/jira/browse/SOLR-12716
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: AutoScaling
Reporter: Shalin Shekhar Mangar
 Fix For: master (8.0), 7.5


NodeLostTrigger only moves replicas from the lost node to other nodes in the 
cluster. We should add a way to delete replicas of the lost node from the 
cluster state.






[jira] [Created] (SOLR-12715) NodeAddedTrigger should support adding replicas to new nodes

2018-08-29 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-12715:


 Summary: NodeAddedTrigger should support adding replicas to new 
nodes
 Key: SOLR-12715
 URL: https://issues.apache.org/jira/browse/SOLR-12715
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: AutoScaling
Reporter: Shalin Shekhar Mangar
 Fix For: master (8.0), 7.5


NodeAddedTrigger only moves replicas from other nodes to the newly added 
node(s). We should add support for addreplica operations via the 
preferredOperation flag, as is done in other triggers such as MetricsTrigger 
and ScheduledTrigger.

The use-case is to add replica(s) of one or more collections to a new node 
automatically.
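If the flag follows the shape used by the other triggers, the configuration might look like this hypothetical example (field names as in existing trigger configs; the addreplica value is this ticket's proposal, not released behavior):

```json
{
  "set-trigger": {
    "name": "node_added_trigger",
    "event": "nodeAdded",
    "waitFor": "5s",
    "preferredOperation": "addreplica"
  }
}
```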






[GitHub] lucene-solr pull request #442: Fixing an edge case bug when overriding a def...

2018-08-29 Thread Apmats
GitHub user Apmats opened a pull request:

https://github.com/apache/lucene-solr/pull/442

Fixing an edge case bug when overriding a default PostingsSolrHighlighter

We encountered this edge case issue. Its impact should be limited, but 
in our case (using a default PostingsSolrHighlighter) it caused a confusing 
issue.

The issue here is that, when passing the parameter hl.method=unified, we 
enter this switch, and end up in the UNIFIED case. But then we check whether 
the preconfigured highlighter is an instance of UnifiedSolrHighlighter. 
If the preconfigured highlighter is a PostingsSolrHighlighter, which for 
deprecation reasons is implemented as a UnifiedSolrHighlighter, then that 
highlighter is used instead of a unified one (and the hl.method 
parameter is effectively ignored).

In our case, the additional parameters that the PostingsSolrHighlighter 
brings with it were causing an issue with highlighting down the line.
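A stripped-down model of the check described above (class and function names are hypothetical; only the instanceof relationship mirrors the real code):

```python
class UnifiedSolrHighlighter:
    pass

class PostingsSolrHighlighter(UnifiedSolrHighlighter):
    """Deprecated shim implemented on top of the unified highlighter,
    carrying its own extra defaults."""
    pass

def pick_highlighter(configured, method):
    # Buggy: a PostingsSolrHighlighter *is* a UnifiedSolrHighlighter,
    # so hl.method=unified silently keeps the postings defaults.
    if method == "unified" and isinstance(configured, UnifiedSolrHighlighter):
        return configured
    return UnifiedSolrHighlighter()

# The override is ignored for the deprecated subclass:
assert type(pick_highlighter(PostingsSolrHighlighter(), "unified")) is PostingsSolrHighlighter
```

An exact-class check, or returning a fresh unified instance when the configured highlighter is the deprecated subclass, would honor the parameter.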



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Apmats/lucene-solr postings-highlighter-fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/442.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #442


commit bed25f19f605edd9faf8fad65e90a381b16a6afd
Author: amatsagkas 
Date:   2018-08-29T10:00:21Z

Fixing an edge case bug when overriding a default PostingSolrHighlighter 
with a unified one through a request parameter







[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-10.0.1) - Build # 2648 - Still Unstable!

2018-08-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2648/
Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.client.solrj.io.stream.MathExpressionTest.testMultiVariateNormalDistribution

Error Message:
[-3.0959758228999377, 49.653412583828356]

Stack Trace:
java.lang.AssertionError: [-3.0959758228999377, 49.653412583828356]
at 
__randomizedtesting.SeedInfo.seed([C07C1134F03FFB0F:5A879843C716B904]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.client.solrj.io.stream.MathExpressionTest.testMultiVariateNormalDistribution(MathExpressionTest.java:3114)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 16247 lines...]
   [junit4] Suite: org.apache.solr.client.solrj.io.stream.MathExpressionTest
   [junit4]   2> Creating dataDir: 

[jira] [Commented] (SOLR-11690) DIH JdbcDataSource - Problem decoding encrypted password using encryptKeyFile

2018-08-29 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-11690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596149#comment-16596149
 ] 

Jan Høydahl commented on SOLR-11690:


See new PR #441 with a suggestion for improving the docs. [~noble.paul]

> DIH JdbcDataSource - Problem decoding encrypted password using encryptKeyFile
> -
>
> Key: SOLR-11690
> URL: https://issues.apache.org/jira/browse/SOLR-11690
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - DataImportHandler
>Affects Versions: 6.6.2
>Reporter: Rajesh Arumugam
>Assignee: Jan Høydahl
>Priority: Major
>  Labels: easyfix
> Fix For: master (8.0), 7.5
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The password decryption does not work because of a bug in the 
> JdbcDataSource.java -> decryptPwd(Context context, Properties initProps) 
> method. The problem is due to bad construction of the key string when making a 
> call to CryptoKeys.decodeAES(). Because of this, CryptoKeys throws a "*Bad 
> password, algorithm, mode or padding; no salt, wrong number of iterations or 
> corrupted ciphertext.*" exception while trying to decode the password.






[GitHub] lucene-solr pull request #441: SOLR-11690: Improve documentation about DIH p...

2018-08-29 Thread janhoy
GitHub user janhoy opened a pull request:

https://github.com/apache/lucene-solr/pull/441

SOLR-11690: Improve documentation about DIH password encryption

Only refguide doc change

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cominvent/lucene-solr 
solr11960-dih-encrypt-docs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/441.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #441


commit f770dea6e731a797100992917967eb1132122351
Author: Jan Høydahl 
Date:   2018-08-29T09:50:55Z

Improve documentation about DIH password encryption







[jira] [Updated] (LUCENE-8469) Inline calls to the deprecated StringHelper.compare

2018-08-29 Thread Dawid Weiss (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-8469:

Attachment: LUCENE-8469.patch

> Inline calls to the deprecated StringHelper.compare
> ---
>
> Key: LUCENE-8469
> URL: https://issues.apache.org/jira/browse/LUCENE-8469
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: 7.5
>
> Attachments: LUCENE-8469.patch
>
>
> In an attempt to limit the number of warnings during compilation I thought 
> it'd be nice to clean up our own stuff. This is a start: StringHelper.compare 
> is used throughout the code and is delegated to FutureArrays (where it 
> belongs, as the arguments are byte[], not Strings).
> This can cause other patches to not apply anymore... so we could apply this 
> to master only. If anybody has a strong feeling about it, please voice it. 
> The patch is trivial.






[jira] [Commented] (LUCENE-8466) FrozenBufferedUpdates#apply*Deletes is incorrect when index sorting is enabled

2018-08-29 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596122#comment-16596122
 ] 

Adrien Grand commented on LUCENE-8466:
--

Maybe we should start thinking about releasing 7.5 because of this bug.

> FrozenBufferedUpdates#apply*Deletes is incorrect when index sorting is enabled
> --
>
> Key: LUCENE-8466
> URL: https://issues.apache.org/jira/browse/LUCENE-8466
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Critical
> Fix For: 7.5, master (8.0)
>
> Attachments: LUCENE-8466.patch
>
>
> This was reported by Vish Ramachandran at 
> https://markmail.org/message/w27h7n2isb5eogos. When deleting by term or 
> query, we record the term/query that is deleted and the current max doc id. 
> Deletes are later applied on flush by FrozenBufferedUpdates#apply*Deletes. 
> Unfortunately, this doesn't work when index sorting is enabled since 
> documents are renumbered between the time that the current max doc id is 
> computed and the time that deletes are applied.
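A toy model of the failure mode described above (pure illustration; the field names and sort key are made up, and only the cutoff logic mirrors Lucene's docIDUpto mechanism):

```python
def apply_delete_by_term(docs, term, doc_id_upto):
    """Drop docs matching `term` whose doc id is below the cutoff that was
    recorded when the delete was buffered (Lucene's docIDUpto)."""
    return [d for d in docs if not (d["term"] == term and d["id"] < doc_id_upto)]

docs = [{"id": 0, "term": "x"}, {"id": 1, "term": "y"}]
cutoff = 2                                   # maxDoc at delete time
docs.append({"id": 2, "term": "x"})          # indexed after the delete, must survive

# Without index sorting the recorded cutoff is correct:
assert apply_delete_by_term(docs, "x", cutoff) == [
    {"id": 1, "term": "y"}, {"id": 2, "term": "x"}]

# With index sorting, docs are renumbered on flush *before* deletes apply:
renumbered = [{"id": i, "term": d["term"]}
              for i, d in enumerate(sorted(docs, key=lambda d: d["term"]))]
# The post-delete "x" doc now sits below the stale cutoff and is lost:
assert apply_delete_by_term(renumbered, "x", cutoff) == [{"id": 2, "term": "y"}]
```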






[jira] [Created] (LUCENE-8469) Inline calls to the deprecated StringHelper.compare

2018-08-29 Thread Dawid Weiss (JIRA)
Dawid Weiss created LUCENE-8469:
---

 Summary: Inline calls to the deprecated StringHelper.compare
 Key: LUCENE-8469
 URL: https://issues.apache.org/jira/browse/LUCENE-8469
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Dawid Weiss
Assignee: Dawid Weiss
 Fix For: 7.5


In an attempt to limit the number of warnings during compilation I thought it'd 
be nice to clean up our own stuff. This is a start: StringHelper.compare is 
used throughout the code and is delegated to FutureArrays (where it belongs, as 
the arguments are byte[], not Strings).

This can cause other patches to not apply anymore... so we could apply this to 
master only. If anybody has a strong feeling about it, please voice it. The 
patch is trivial.





