[JENKINS] Lucene-Solr-trunk-Solaris (64bit/jdk1.8.0) - Build # 356 - Still Failing!

2016-01-25 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Solaris/356/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionReloadTest.testReloadedLeaderStateAfterZkSessionLoss

Error Message:
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 
127.0.0.1:50496/solr within 1 ms

Stack Trace:
org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: 
Could not connect to ZooKeeper 127.0.0.1:50496/solr within 1 ms
at 
__randomizedtesting.SeedInfo.seed([320CA9D190F65E70:C922128C361B2D25]:0)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:182)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:116)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:106)
at 
org.apache.solr.common.cloud.ZkStateReader.(ZkStateReader.java:202)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.connect(CloudSolrClient.java:467)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.initCloud(AbstractFullDistribZkTestBase.java:266)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:328)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.TimeoutException: Could not connect to 
ZooKeeper 127.0.0.1:50496/solr w

[jira] [Updated] (SOLR-8459) NPE using TermVectorComponent in combination with ExactStatsCache

2016-01-25 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-8459:
---
Attachment: SOLR-8459.patch

Updated patch:
- Execute TermVectorComponent only on ShardRequest.PURPOSE_GET_FIELDS
- Do not execute MoreLikeThisComponent on ShardRequest.PURPOSE_GET_TERM_STATS
- Randomly set up statsCache in BaseDistributedSearchTestCase
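The two execution guards above hinge on Solr's ShardRequest purpose bitmask. A minimal sketch of that check follows; the constant values here are illustrative stand-ins (the real ones live in org.apache.solr.handler.component.ShardRequest), and this is not the attached patch:

```java
// Sketch of the purpose guards described in the patch notes above.
// The bitmask constants are illustrative, not Solr's actual values.
public class PurposeGuardSketch {
    public static final int PURPOSE_GET_FIELDS = 0x8;       // fetching stored fields
    public static final int PURPOSE_GET_TERM_STATS = 0x400; // ExactStatsCache stats phase

    // Run TermVectorComponent only while stored fields are being fetched.
    public static boolean shouldRunTermVectors(int purpose) {
        return (purpose & PURPOSE_GET_FIELDS) != 0;
    }

    // Skip MoreLikeThisComponent during the term-stats phase.
    public static boolean shouldRunMoreLikeThis(int purpose) {
        return (purpose & PURPOSE_GET_TERM_STATS) == 0;
    }
}
```

Because purposes are single bits, a component can test its own phase with one mask and ignore every other phase added later.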

> NPE using TermVectorComponent in combination with ExactStatsCache
> -
>
> Key: SOLR-8459
> URL: https://issues.apache.org/jira/browse/SOLR-8459
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.3
>Reporter: Andreas Daffner
>Assignee: Varun Thacker
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8459.patch, SOLR-8459.patch, SOLR-8459.patch
>
>
> Hello,
> I am getting an NPE when using the TermVectorComponent in combination with 
> ExactStatsCache.
> I am using Solr 5.3.0 with 4 shards in total.
> I set up my solrconfig.xml as described in these 2 links:
> TermVectorComponent:
> https://cwiki.apache.org/confluence/display/solr/The+Term+Vector+Component
> ExactStatsCache:
> https://cwiki.apache.org/confluence/display/solr/Distributed+Requests#Configuring+statsCache+implementation
> My snippets from solrconfig.xml:
> {code}
> ...
>   <statsCache class="org.apache.solr.search.stats.ExactStatsCache"/>
>   <searchComponent name="tvComponent"
>       class="org.apache.solr.handler.component.TermVectorComponent"/>
>   <requestHandler name="/tvrh"
>       class="org.apache.solr.handler.component.SearchHandler">
>     <lst name="defaults">
>       <bool name="tv">true</bool>
>     </lst>
>     <arr name="last-components">
>       <str>tvComponent</str>
>     </arr>
>   </requestHandler>
> ...
> {code}
> Unfortunately a request to Solr like 
> "http://host/solr/corename/tvrh?q=site_url_id:74" ends up with this NPE:
> {code}
> 4329458 ERROR (qtp59559151-17) [c:SingleDomainSite_11 s:shard1 r:core_node1 
> x:SingleDomainSite_11_shard1_replica1] o.a.s.c.SolrCore 
> java.lang.NullPointerException
>   at 
> org.apache.solr.handler.component.TermVectorComponent.finishStage(TermVectorComponent.java:454)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:416)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2068)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:669)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:462)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:210)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at org.eclipse.jetty.server.Server.handle(Server.java:499)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> According to https://issues.apache.org/jira/browse/SOLR-7756 this bug should 
> have been fixed in Solr 5.3.0, but the NPE is still present.
> Can you please help me here?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8512) Implement minimal set of get* methods in ResultSetImpl for column indices

2016-01-25 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8512:
-
Fix Version/s: 6.0

> Implement minimal set of get* methods in ResultSetImpl for column indices
> -
>
> Key: SOLR-8512
> URL: https://issues.apache.org/jira/browse/SOLR-8512
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: 6.0
>Reporter: Kevin Risden
> Fix For: 6.0
>
> Attachments: SOLR-8512.patch, SOLR-8512.patch, SOLR-8512.patch, 
> SOLR-8512.patch, sql-preserve-order.patch
>
>
> SQL clients use the proper get* methods on the ResultSet to return items to 
> be displayed. At minimum, the following methods should be implemented for 
> column index:
> * public Object getObject
> * public String getString
> * public boolean getBoolean
> * public short getShort
> * public int getInt
> * public long getLong
> * public float getFloat
> * public double getDouble
> * public BigDecimal getBigDecimal
> * public Date getDate
> * public Time getTime
> * public Timestamp getTimestamp
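A minimal shape for the index-based getters listed above is to make getObject the primitive operation and narrow from it. This is a hedged sketch under that assumption (class and method bodies are invented for illustration), not the attached SOLR-8512 patch:

```java
import java.util.List;

// Sketch of typed, index-based column getters layered over a row of Objects.
// Names and structure are illustrative, not the actual SOLR-8512 patch.
public class RowSketch {
    private final List<Object> row; // one result row, stored 0-indexed

    public RowSketch(List<Object> row) { this.row = row; }

    // JDBC column indices are 1-based, hence the -1.
    public Object getObject(int columnIndex) { return row.get(columnIndex - 1); }

    public String getString(int columnIndex) {
        Object o = getObject(columnIndex);
        return o == null ? null : o.toString();
    }

    public boolean getBoolean(int columnIndex) {
        Object o = getObject(columnIndex);
        return o != null && (Boolean) o;
    }

    public long getLong(int columnIndex) {
        Object o = getObject(columnIndex);
        return o == null ? 0L : ((Number) o).longValue();
    }

    public double getDouble(int columnIndex) {
        Object o = getObject(columnIndex);
        return o == null ? 0.0d : ((Number) o).doubleValue();
    }
}
```

The null handling mirrors the JDBC convention that primitive getters return zero for SQL NULL.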






[jira] [Resolved] (SOLR-8512) Implement minimal set of get* methods in ResultSetImpl for column indices

2016-01-25 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-8512.
--
Resolution: Implemented

> Implement minimal set of get* methods in ResultSetImpl for column indices
> -
>
> Key: SOLR-8512
> URL: https://issues.apache.org/jira/browse/SOLR-8512
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: 6.0
>Reporter: Kevin Risden
> Attachments: SOLR-8512.patch, SOLR-8512.patch, SOLR-8512.patch, 
> SOLR-8512.patch, sql-preserve-order.patch
>
>
> SQL clients use the proper get* methods on the ResultSet to return items to 
> be displayed. At minimum, the following methods should be implemented for 
> column index:
> * public Object getObject
> * public String getString
> * public boolean getBoolean
> * public short getShort
> * public int getInt
> * public long getLong
> * public float getFloat
> * public double getDouble
> * public BigDecimal getBigDecimal
> * public Date getDate
> * public Time getTime
> * public Timestamp getTimestamp






[jira] [Updated] (SOLR-8512) Implement minimal set of get* methods in ResultSetImpl for column indices

2016-01-25 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8512:
-
Affects Version/s: (was: Trunk)
   6.0

> Implement minimal set of get* methods in ResultSetImpl for column indices
> -
>
> Key: SOLR-8512
> URL: https://issues.apache.org/jira/browse/SOLR-8512
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: 6.0
>Reporter: Kevin Risden
> Fix For: 6.0
>
> Attachments: SOLR-8512.patch, SOLR-8512.patch, SOLR-8512.patch, 
> SOLR-8512.patch, sql-preserve-order.patch
>
>
> SQL clients use the proper get* methods on the ResultSet to return items to 
> be displayed. At minimum, the following methods should be implemented for 
> column index:
> * public Object getObject
> * public String getString
> * public boolean getBoolean
> * public short getShort
> * public int getInt
> * public long getLong
> * public float getFloat
> * public double getDouble
> * public BigDecimal getBigDecimal
> * public Date getDate
> * public Time getTime
> * public Timestamp getTimestamp






[jira] [Commented] (SOLR-8512) Implement minimal set of get* methods in ResultSetImpl for column indices

2016-01-25 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15116629#comment-15116629
 ] 

Joel Bernstein commented on SOLR-8512:
--

commit: 
https://github.com/apache/lucene-solr/commit/0ff8d11367f8fe734abba9203d48be878f4ce7f2

> Implement minimal set of get* methods in ResultSetImpl for column indices
> -
>
> Key: SOLR-8512
> URL: https://issues.apache.org/jira/browse/SOLR-8512
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: 6.0
>Reporter: Kevin Risden
> Fix For: 6.0
>
> Attachments: SOLR-8512.patch, SOLR-8512.patch, SOLR-8512.patch, 
> SOLR-8512.patch, sql-preserve-order.patch
>
>
> SQL clients use the proper get* methods on the ResultSet to return items to 
> be displayed. At minimum, the following methods should be implemented for 
> column index:
> * public Object getObject
> * public String getString
> * public boolean getBoolean
> * public short getShort
> * public int getInt
> * public long getLong
> * public float getFloat
> * public double getDouble
> * public BigDecimal getBigDecimal
> * public Date getDate
> * public Time getTime
> * public Timestamp getTimestamp






[jira] [Commented] (SOLR-8512) Implement minimal set of get* methods in ResultSetImpl for column indices

2016-01-25 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15116616#comment-15116616
 ] 

Joel Bernstein commented on SOLR-8512:
--

Patch looks great. Running precommit now.

> Implement minimal set of get* methods in ResultSetImpl for column indices
> -
>
> Key: SOLR-8512
> URL: https://issues.apache.org/jira/browse/SOLR-8512
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8512.patch, SOLR-8512.patch, SOLR-8512.patch, 
> SOLR-8512.patch, sql-preserve-order.patch
>
>
> SQL clients use the proper get* methods on the ResultSet to return items to 
> be displayed. At minimum, the following methods should be implemented for 
> column index:
> * public Object getObject
> * public String getString
> * public boolean getBoolean
> * public short getShort
> * public int getInt
> * public long getLong
> * public float getFloat
> * public double getDouble
> * public BigDecimal getBigDecimal
> * public Date getDate
> * public Time getTime
> * public Timestamp getTimestamp






[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 1083 - Still Failing

2016-01-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/1083/

2 tests failed.
FAILED:  org.apache.solr.hadoop.MorphlineBasicMiniMRTest.mrRun

Error Message:
Failed on local exception: java.io.IOException: Connection reset by peer; Host 
Details : local host is: "lucene1-us-west/10.41.0.5"; destination host is: 
"lucene1-us-west.apache.org":37554; 

Stack Trace:
java.io.IOException: Failed on local exception: java.io.IOException: Connection 
reset by peer; Host Details : local host is: "lucene1-us-west/10.41.0.5"; 
destination host is: "lucene1-us-west.apache.org":37554; 
at 
__randomizedtesting.SeedInfo.seed([754181868AC1B5FD:7B1335888B5787F2]:0)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
at org.apache.hadoop.ipc.Client.call(Client.java:1472)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy45.getClusterMetrics(Unknown Source)
at 
org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getClusterMetrics(ApplicationClientProtocolPBClientImpl.java:202)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy46.getClusterMetrics(Unknown Source)
at 
org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getYarnClusterMetrics(YarnClientImpl.java:461)
at 
org.apache.hadoop.mapred.ResourceMgrDelegate.getClusterMetrics(ResourceMgrDelegate.java:151)
at 
org.apache.hadoop.mapred.YARNRunner.getClusterMetrics(YARNRunner.java:179)
at 
org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:246)
at org.apache.hadoop.mapred.JobClient$3.run(JobClient.java:719)
at org.apache.hadoop.mapred.JobClient$3.run(JobClient.java:717)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at 
org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:717)
at 
org.apache.solr.hadoop.MapReduceIndexerTool.run(MapReduceIndexerTool.java:645)
at 
org.apache.solr.hadoop.MapReduceIndexerTool.run(MapReduceIndexerTool.java:608)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at 
org.apache.solr.hadoop.MorphlineBasicMiniMRTest.mrRun(MorphlineBasicMiniMRTest.java:363)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(Random

[jira] [Commented] (SOLR-8518) Implement ResultSetMetaDataImpl getColumnType and getColumnTypeName

2016-01-25 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15116584#comment-15116584
 ] 

Joel Bernstein commented on SOLR-8518:
--

Using the approach where we look at the column types of the first real Tuple 
means that numerics will always be either a long or a double. From a practical 
standpoint this approach is very fast and it will work fine with visualization 
clients. As the implementation matures we can look at ways to cache the real 
types in the SQLHandler and return it with the meta-data tuple.

> Implement ResultSetMetaDataImpl getColumnType and getColumnTypeName
> ---
>
> Key: SOLR-8518
> URL: https://issues.apache.org/jira/browse/SOLR-8518
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8518.patch
>
>
> DBVisualizer uses getColumnType and getColumnTypeName to determine which 
> ResultSetImpl.get* method to use when displaying the data otherwise it falls 
> back to ResultSetImpl.getObject.
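One plausible way to back getColumnType and getColumnTypeName is to map the runtime class of a sample value to a java.sql.Types code. The mapping below is an assumption for illustration, not the SOLR-8518 patch:

```java
import java.sql.Types;

// Sketch: derive JDBC type info from the runtime class of a value taken from
// the first tuple. The mapping is assumed for illustration only.
public class ColumnTypeSketch {
    public static int sqlTypeFor(Object value) {
        if (value instanceof Long)    return Types.BIGINT;
        if (value instanceof Double)  return Types.DOUBLE;
        if (value instanceof Boolean) return Types.BOOLEAN;
        if (value instanceof String)  return Types.VARCHAR;
        return Types.JAVA_OBJECT; // fallback, mirroring getObject
    }

    public static String typeNameFor(Object value) {
        return value == null ? "null" : value.getClass().getSimpleName();
    }
}
```

With this mapping a client such as DBVisualizer would pick getLong for BIGINT columns and getString for VARCHAR, rather than falling back to getObject.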






[jira] [Commented] (SOLR-8502) Improve Solr JDBC Driver to support SQL Clients like DBVisualizer

2016-01-25 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15116566#comment-15116566
 ] 

Kevin Risden commented on SOLR-8502:


SOLR-8518 has a patch available as well once SOLR-8512 is done.

> Improve Solr JDBC Driver to support SQL Clients like DBVisualizer
> -
>
> Key: SOLR-8502
> URL: https://issues.apache.org/jira/browse/SOLR-8502
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
>  Labels: jdbc
> Fix For: Trunk
>
>
> Currently, when trying to connect to Solr with the JDBC driver from a SQL 
> client, the driver must implement more methods and metadata to allow 
> connections. This JIRA is designed to act as an umbrella for the JDBC changes.
> An initial pass from a few months ago is here: 
> https://github.com/risdenk/lucene-solr/tree/expand-jdbc. This needs to be 
> broken up into patches for the related sub-tasks.






[jira] [Updated] (SOLR-8518) Implement ResultSetMetaDataImpl getColumnType and getColumnTypeName

2016-01-25 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8518:
---
Attachment: SOLR-8518.patch

Implemented SOLR-8518 based on [~joel.bernstein]'s suggestion of using the 
pushback stream to grab the first tuple. This tuple is then used to determine 
type information.
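The pushback idea can be sketched with a tiny wrapper iterator: read the first tuple to learn the column types, then push it back so the consumer still sees the full stream. PushBackSketch below is a stand-in for illustration, not Solr's actual PushBackStream:

```java
import java.util.Iterator;

// Sketch of the pushback pattern: peek at the first element for metadata,
// then push it back so iteration still starts at element one.
// This class is a stand-in, not Solr's PushBackStream.
public class PushBackSketch<T> implements Iterator<T> {
    private final Iterator<T> inner;
    private T pushedBack; // at most one element can be pushed back

    public PushBackSketch(Iterator<T> inner) { this.inner = inner; }

    public void pushBack(T element) { pushedBack = element; }

    @Override
    public boolean hasNext() { return pushedBack != null || inner.hasNext(); }

    @Override
    public T next() {
        if (pushedBack != null) {
            T t = pushedBack;
            pushedBack = null;
            return t;
        }
        return inner.next();
    }
}
```

A ResultSetMetaData implementation would call next() once, inspect the classes of the tuple's values, then pushBack the tuple before handing the stream to the ResultSet.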

> Implement ResultSetMetaDataImpl getColumnType and getColumnTypeName
> ---
>
> Key: SOLR-8518
> URL: https://issues.apache.org/jira/browse/SOLR-8518
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8518.patch
>
>
> DBVisualizer uses getColumnType and getColumnTypeName to determine which 
> ResultSetImpl.get* method to use when displaying the data otherwise it falls 
> back to ResultSetImpl.getObject.






[jira] [Updated] (SOLR-8518) Implement ResultSetMetaDataImpl getColumnType and getColumnTypeName

2016-01-25 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8518:
---
Flags: Patch

> Implement ResultSetMetaDataImpl getColumnType and getColumnTypeName
> ---
>
> Key: SOLR-8518
> URL: https://issues.apache.org/jira/browse/SOLR-8518
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8518.patch
>
>
> DBVisualizer uses getColumnType and getColumnTypeName to determine which 
> ResultSetImpl.get* method to use when displaying the data otherwise it falls 
> back to ResultSetImpl.getObject.






[jira] [Updated] (SOLR-8512) Implement minimal set of get* methods in ResultSetImpl for column indices

2016-01-25 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8512:
---
Attachment: SOLR-8512.patch

Patch that takes into account changes from SOLR-8517

> Implement minimal set of get* methods in ResultSetImpl for column indices
> -
>
> Key: SOLR-8512
> URL: https://issues.apache.org/jira/browse/SOLR-8512
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8512.patch, SOLR-8512.patch, SOLR-8512.patch, 
> SOLR-8512.patch, sql-preserve-order.patch
>
>
> SQL clients use the proper get* methods on the ResultSet to return items to 
> be displayed. At minimum, the following methods should be implemented for 
> column index:
> * public Object getObject
> * public String getString
> * public boolean getBoolean
> * public short getShort
> * public int getInt
> * public long getLong
> * public float getFloat
> * public double getDouble
> * public BigDecimal getBigDecimal
> * public Date getDate
> * public Time getTime
> * public Timestamp getTimestamp






[jira] [Updated] (SOLR-8517) Implement minimal set of get* methods in ResultSetImpl for column names

2016-01-25 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8517:
-
Affects Version/s: (was: Trunk)
   6.0

> Implement minimal set of get* methods in ResultSetImpl for column names
> ---
>
> Key: SOLR-8517
> URL: https://issues.apache.org/jira/browse/SOLR-8517
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: 6.0
>Reporter: Kevin Risden
> Fix For: 6.0
>
> Attachments: SOLR-8517.patch, SOLR-8517.patch, SOLR-8517.patch, 
> SOLR-8517.patch
>
>
> This is related to the ResultSetImpl for column indices but requires that 
> more metadata be passed back from the SQL handler in relation to column names. 
> The SQL handler already knows about the column names and order, but they 
> aren't passed back to the client. SQL clients use the column names for 
> display, so this must be implemented for DBVisualizer to work properly.






[jira] [Updated] (SOLR-8517) Implement minimal set of get* methods in ResultSetImpl for column names

2016-01-25 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8517:
-
Fix Version/s: 6.0

> Implement minimal set of get* methods in ResultSetImpl for column names
> ---
>
> Key: SOLR-8517
> URL: https://issues.apache.org/jira/browse/SOLR-8517
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: 6.0
>Reporter: Kevin Risden
> Fix For: 6.0
>
> Attachments: SOLR-8517.patch, SOLR-8517.patch, SOLR-8517.patch, 
> SOLR-8517.patch
>
>
> This is related to the ResultSetImpl for column indices but requires that 
> more metadata be passed back from the SQL handler in relation to column names. 
> The SQL handler already knows about the column names and order, but they 
> aren't passed back to the client. SQL clients use the column names for 
> display, so this must be implemented for DBVisualizer to work properly.






[jira] [Resolved] (SOLR-8517) Implement minimal set of get* methods in ResultSetImpl for column names

2016-01-25 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-8517.
--
Resolution: Implemented

> Implement minimal set of get* methods in ResultSetImpl for column names
> ---
>
> Key: SOLR-8517
> URL: https://issues.apache.org/jira/browse/SOLR-8517
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: 6.0
>Reporter: Kevin Risden
> Attachments: SOLR-8517.patch, SOLR-8517.patch, SOLR-8517.patch, 
> SOLR-8517.patch
>
>
> This is related to the ResultSetImpl for column indices but requires that 
> more metadata be passed back from the SQL handler in relation to column names. 
> The SQL handler already knows about the column names and order, but they 
> aren't passed back to the client. SQL clients use the column names for 
> display, so this must be implemented for DBVisualizer to work properly.






[jira] [Commented] (SOLR-8517) Implement minimal set of get* methods in ResultSetImpl for column names

2016-01-25 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15116533#comment-15116533
 ] 

Joel Bernstein commented on SOLR-8517:
--

Commit: 
https://github.com/apache/lucene-solr/commit/ce0069a75126ee9d9f2b82aaf380317562bf5f50

> Implement minimal set of get* methods in ResultSetImpl for column names
> ---
>
> Key: SOLR-8517
> URL: https://issues.apache.org/jira/browse/SOLR-8517
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8517.patch, SOLR-8517.patch, SOLR-8517.patch, 
> SOLR-8517.patch
>
>
> This is related to the ResultSetImpl for column indices, but requires that 
> more metadata be passed back from the SQL handler in relation to column names. 
> The SQL handler already knows the column names and their order, but they 
> aren't passed back to the client. SQL clients use the column names for 
> display, so this must be implemented for DBVisualizer to work properly.






[jira] [Commented] (SOLR-8517) Implement minimal set of get* methods in ResultSetImpl for column names

2016-01-25 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15116530#comment-15116530
 ] 

Joel Bernstein commented on SOLR-8517:
--

Working with this patch now. I made some changes to how numerics are handled. I 
believe all numerics will be converted to longs or doubles during the JSON 
de-serialization. To support SQL properly we need to support getInt, getFloat, 
getShort, and getByte along with getLong and getDouble. So I changed the numeric 
handling to cast to a Number and return the corresponding primitive type. If we 
want to add safeguards against truncation and loss of precision, we can come 
back and do that in another ticket.
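The casting approach described above can be sketched as follows; the class and method names are illustrative, not the actual SolrJ ResultSetImpl code:

```java
// Hedged sketch of the numeric handling described above: JSON
// de-serialization yields Long or Double, so each getter casts the
// stored value to Number and converts to the requested primitive.
// (Names are illustrative, not the actual SolrJ implementation.)
public class NumericGetterSketch {
    static int getInt(Object value)       { return ((Number) value).intValue(); }
    static long getLong(Object value)     { return ((Number) value).longValue(); }
    static short getShort(Object value)   { return ((Number) value).shortValue(); }
    static byte getByte(Object value)     { return ((Number) value).byteValue(); }
    static float getFloat(Object value)   { return ((Number) value).floatValue(); }
    static double getDouble(Object value) { return ((Number) value).doubleValue(); }

    public static void main(String[] args) {
        Object i = Long.valueOf(42L);    // integers arrive as Long
        Object d = Double.valueOf(3.5);  // floating point arrives as Double
        System.out.println(getInt(i));   // 42
        System.out.println(getFloat(d)); // 3.5
        // intValue() on a too-large Long silently truncates -- the
        // safeguard deferred to a follow-up ticket.
        System.out.println(getInt(Long.valueOf(1L << 33))); // prints 0
    }
}
```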

> Implement minimal set of get* methods in ResultSetImpl for column names
> ---
>
> Key: SOLR-8517
> URL: https://issues.apache.org/jira/browse/SOLR-8517
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8517.patch, SOLR-8517.patch, SOLR-8517.patch, 
> SOLR-8517.patch
>
>
> This is related to the ResultSetImpl for column indices, but requires that 
> more metadata be passed back from the SQL handler in relation to column names. 
> The SQL handler already knows the column names and their order, but they 
> aren't passed back to the client. SQL clients use the column names for 
> display, so this must be implemented for DBVisualizer to work properly.






[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 3038 - Failure!

2016-01-25 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/3038/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.lucene.queries.CommonTermsQueryTest.testMinShouldMatch

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([EE4A9FB7F0E4AA4E:CA42975297E1E860]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.lucene.queries.CommonTermsQueryTest.testMinShouldMatch(CommonTermsQueryTest.java:268)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.lucene.queries.CommonTermsQueryTest.testExtend

Error Message:
expected:<[2]> but was:<[0]>

Stack Trace:
org.junit.ComparisonFailure: expected:<[2]> but was:<[0]>
at 
__randomizedtesting.SeedInfo.seed([EE4A9FB7F0E4AA4E:B080DE8E3F545DD2]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.lucene.queries.CommonTermsQueryTest.testExtend(CommonTermsQueryTest.java:392)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 

Re: Merge vs Rebase

2016-01-25 Thread Erick Erickson
P.S. With attribution, re: the Hadoop committer guide.

On Mon, Jan 25, 2016 at 4:49 PM, Erick Erickson  wrote:
> OK, I'm _really tempted_ to just steal the Hadoop guide verbatim as a
> start and we can refine as necessary.
>
> I'll put big banners about "PRELIMINARY" in it, and add bits about
> this being recommended, not required.
>
> Thoughts?
>
> On Mon, Jan 25, 2016 at 4:36 PM, Uwe Schindler  wrote:
>> Hi,
>>
>>> Another chore we do is on adding new files is
>>> svn propset svn:eol-style native 
>>>
>>> do we have an equivalent for that in git?
>>
>> Per-file properties like eol-style or MIME-type don't exist. Git has a 
>> built-in set of file extensions it treats as text files and handles the 
>> newlines automagically. If we want to configure that, we can commit a 
>> ".gitattributes" file in the root directory of the repository: 
>> https://help.github.com/articles/dealing-with-line-endings/
>>
>> I would like to add such a ".gitattributes" file in any case to set some sane 
>> defaults.
>>
>> The ANT checker in "precommit" now also checks your GIT working copy and 
>> fails like before, although it no longer looks at mime types or eol-style.
>>
>> Uwe
>>
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>




Re: Merge vs Rebase

2016-01-25 Thread Erick Erickson
OK, I'm _really tempted_ to just steal the Hadoop guide verbatim as a
start and we can refine as necessary.

I'll put big banners about "PRELIMINARY" in it, and add bits about
this being recommended, not required.

Thoughts?

On Mon, Jan 25, 2016 at 4:36 PM, Uwe Schindler  wrote:
> Hi,
>
>> Another chore we do is on adding new files is
>> svn propset svn:eol-style native 
>>
>> do we have an equivalent for that in git?
>
> Per-file properties like eol-style or MIME-type don't exist. Git has a built-in 
> set of file extensions it treats as text files and handles the newlines 
> automagically. If we want to configure that, we can commit a ".gitattributes" 
> file in the root directory of the repository: 
> https://help.github.com/articles/dealing-with-line-endings/
>
> I would like to add such a ".gitattributes" file in any case to set some sane 
> defaults.
>
> The ANT checker in "precommit" now also checks your GIT working copy and 
> fails like before, although it no longer looks at mime types or eol-style.
>
> Uwe
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>




RE: Merge vs Rebase

2016-01-25 Thread Uwe Schindler
Hi,

> Another chore we do is on adding new files is
> svn propset svn:eol-style native 
> 
> do we have an equivalent for that in git?

Per-file properties like eol-style or MIME-type don't exist. Git has a built-in 
set of file extensions it treats as text files and handles the newlines 
automagically. If we want to configure that, we can commit a ".gitattributes" 
file in the root directory of the repository: 
https://help.github.com/articles/dealing-with-line-endings/

I would like to add such a ".gitattributes" file in any case to set some sane defaults.

The ANT checker in "precommit" now also checks your GIT working copy and fails 
like before, although it no longer looks at mime types or eol-style.
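For reference, the kind of file GitHub's line-endings article describes is a committed `.gitattributes`; a minimal sketch (the patterns below are illustrative, not a proposed project file) might look like:

```
# Normalize line endings for files Git auto-detects as text.
* text=auto

# Force known source types to be treated as text.
*.java text
*.xml  text

# Never touch binary artifacts.
*.png binary
*.jar binary
```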

Uwe





RE: Merge vs Rebase

2016-01-25 Thread Uwe Schindler
Hi,

> > The example of the conflict between my commit and Mike’s is just a
> “normal usecase”.

It was meant as a conflict in workflow (two people committing at almost the same 
time). And that happens quite often.

Uwe





[jira] [Comment Edited] (SOLR-8522) ImplicitSnitch to support IPv4 fragment and host name fragment tags

2016-01-25 Thread Arcadius Ahouansou (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15116328#comment-15116328
 ] 

Arcadius Ahouansou edited comment on SOLR-8522 at 1/26/16 12:08 AM:


Hello again [~noble.paul]
Thank you very much for taking the time to have a look.

- I have changed the node format from {{http://host:port/context/collection}} 
... to {{host:port_context}}
- To further simplify things, I have also removed all tests related to 
SOLR-8523 . Tests will be added later to SOLR-8523


I need one clarification:

In the original {{ImplicitSnitch.java}}, we have:
{code}
Pattern hostAndPortPattern = Pattern.compile("(?:https?://)?([^:]+):(\\d+)")
{code}
Is that regex accurate given that node names do not contain any {{http}} or  
{{https}} in the format specified above?

Thank you very much


was (Author: arcadius):
Hello again [~noble.paul]
Thank you very much for taking the time to have a look.

- I have changed the node format from {{http://host:port/context/collection}} 
... to {{host:port_context}}
- To further simplify things, I have also removed all tests related to 
SOLR-8523 . Tests will be added later to SOLR-8523


I need one clarification:

In the original {{ImplicitSnitch.java}}, we have:
{code}
Pattern hostAndPortPattern = Pattern.compile("(?:https?://)?([^:]+):(\\d+)")
{code}
Is that regex accurate given that node names do not contain any {{http}} or  
{{https}} in the format specified above?

Thank you very much

> ImplicitSnitch to support IPv4 fragment and host name fragment tags
> 
>
> Key: SOLR-8522
> URL: https://issues.apache.org/jira/browse/SOLR-8522
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 5.4
>Reporter: Arcadius Ahouansou
>Assignee: Noble Paul
>Priority: Minor
> Attachments: SOLR-8522.patch, SOLR-8522.patch, SOLR-8522.patch
>
>
> This is a description from [~noble.paul]'s comment on SOLR-8146
> h3. IPv4 fragment tags
> Let's assume a Solr node's IPv4 address is {{192.93.255.255}}.
> This is about enhancing the current {{ImplicitSnitch}} to support IP-based 
> tags like:
> - {{hostfrag_1 = 255}}
> - {{hostfrag_2 = 255}}
> - {{hostfrag_3 = 93}}
> - {{hostfrag_4 = 192}}
> Note that IPv6 support will be implemented in a separate ticket.
> h3. Host name fragment tags
> Let's assume a Solr node's host name is {{serv1.dc1.country1.apache.org}}.
> This is about enhancing the current {{ImplicitSnitch}} to support tags like:
> - {{hostfrag_1 = org}}
> - {{hostfrag_2 = apache}}
> - {{hostfrag_3 = country1}}
> - {{hostfrag_4 = dc1}}
> - {{hostfrag_5 = serv1}}
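The fragment numbering above (rightmost component first) can be sketched as follows; the splitting logic is illustrative, not the actual ImplicitSnitch code:

```java
// Hedged sketch of the fragment-tag derivation described in the ticket;
// tag names follow the ticket, the implementation is illustrative.
import java.util.LinkedHashMap;
import java.util.Map;

public class HostFragSketch {
    // Split a dotted IP or host name into hostfrag_N tags,
    // numbered from the rightmost component.
    static Map<String, String> fragTags(String host) {
        String[] parts = host.split("\\.");
        Map<String, String> tags = new LinkedHashMap<>();
        for (int i = 0; i < parts.length; i++) {
            tags.put("hostfrag_" + (i + 1), parts[parts.length - 1 - i]);
        }
        return tags;
    }

    public static void main(String[] args) {
        System.out.println(fragTags("192.93.255.255"));
        // {hostfrag_1=255, hostfrag_2=255, hostfrag_3=93, hostfrag_4=192}
        System.out.println(fragTags("serv1.dc1.country1.apache.org"));
        // {hostfrag_1=org, hostfrag_2=apache, hostfrag_3=country1, hostfrag_4=dc1, hostfrag_5=serv1}
    }
}
```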






[jira] [Commented] (SOLR-8522) ImplicitSnitch to support IPv4 fragment and host name fragment tags

2016-01-25 Thread Arcadius Ahouansou (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15116328#comment-15116328
 ] 

Arcadius Ahouansou commented on SOLR-8522:
--

Hello again [~noble.paul]
Thank you very much for taking the time to have a look.

- I have changed the node format from {{http://host:port/context/collection}} 
... to {{host:port_context}}
- To further simplify things, I have also removed all tests related to 
SOLR-8523 . Tests will be added later to SOLR-8523


I need one clarification:

In the original {{ImplicitSnitch.java}}, we have:
{code}
Pattern hostAndPortPattern = Pattern.compile("(?:https?://)?([^:]+):(\\d+)")
{code}
Is that regex accurate given that node names do not contain any {{http}} or  
{{https}} in the format specified above?

Thank you very much
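As a quick, hedged check of that question (the node name below is hypothetical): because the {{(?:https?://)?}} scheme prefix is optional and non-capturing, the pattern still extracts host and port from scheme-less node names:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class HostPortRegexCheck {
    // Same pattern quoted from ImplicitSnitch.java above; the scheme
    // prefix is optional, so scheme-less node names still match.
    static final Pattern HOST_AND_PORT =
        Pattern.compile("(?:https?://)?([^:]+):(\\d+)");

    public static void main(String[] args) {
        // Hypothetical node name in the host:port_context format.
        Matcher m = HOST_AND_PORT.matcher("serv1.dc1:8983_solr");
        if (m.find()) {
            System.out.println("host=" + m.group(1)); // host=serv1.dc1
            System.out.println("port=" + m.group(2)); // port=8983
        }
    }
}
```

So the optional prefix appears to be harmless for the new format, merely tolerant of the old one.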

> ImplicitSnitch to support IPv4 fragment and host name fragment tags
> 
>
> Key: SOLR-8522
> URL: https://issues.apache.org/jira/browse/SOLR-8522
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 5.4
>Reporter: Arcadius Ahouansou
>Assignee: Noble Paul
>Priority: Minor
> Attachments: SOLR-8522.patch, SOLR-8522.patch, SOLR-8522.patch
>
>
> This is a description from [~noble.paul]'s comment on SOLR-8146
> h3. IPv4 fragment tags
> Let's assume a Solr node's IPv4 address is {{192.93.255.255}}.
> This is about enhancing the current {{ImplicitSnitch}} to support IP-based 
> tags like:
> - {{hostfrag_1 = 255}}
> - {{hostfrag_2 = 255}}
> - {{hostfrag_3 = 93}}
> - {{hostfrag_4 = 192}}
> Note that IPv6 support will be implemented in a separate ticket.
> h3. Host name fragment tags
> Let's assume a Solr node's host name is {{serv1.dc1.country1.apache.org}}.
> This is about enhancing the current {{ImplicitSnitch}} to support tags like:
> - {{hostfrag_1 = org}}
> - {{hostfrag_2 = apache}}
> - {{hostfrag_3 = country1}}
> - {{hostfrag_4 = dc1}}
> - {{hostfrag_5 = serv1}}






[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_66) - Build # 5570 - Failure!

2016-01-25 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5570/
Java: 64bit/jdk1.8.0_66 -XX:+UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.lucene.queries.CommonTermsQueryTest.testMinShouldMatch

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([70F9F2FD6C992CDE:54F1FA180B9C6EF0]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.lucene.queries.CommonTermsQueryTest.testMinShouldMatch(CommonTermsQueryTest.java:268)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.lucene.queries.CommonTermsQueryTest.testExtend

Error Message:
expected:<[2]> but was:<[0]>

Stack Trace:
org.junit.ComparisonFailure: expected:<[2]> but was:<[0]>
at 
__randomizedtesting.SeedInfo.seed([70F9F2FD6C992CDE:2E33B3C4A329DB42]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.lucene.queries.CommonTermsQueryTest.testExtend(CommonTermsQueryTest.java:392)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   

[jira] [Updated] (SOLR-8522) ImplicitSnitch to support IPv4 fragment and host name fragment tags

2016-01-25 Thread Arcadius Ahouansou (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arcadius Ahouansou updated SOLR-8522:
-
Attachment: SOLR-8522.patch

> ImplicitSnitch to support IPv4 fragment and host name fragment tags
> 
>
> Key: SOLR-8522
> URL: https://issues.apache.org/jira/browse/SOLR-8522
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 5.4
>Reporter: Arcadius Ahouansou
>Assignee: Noble Paul
>Priority: Minor
> Attachments: SOLR-8522.patch, SOLR-8522.patch, SOLR-8522.patch
>
>
> This is a description from [~noble.paul]'s comment on SOLR-8146
> h3. IPv4 fragment tags
> Let's assume a Solr node's IPv4 address is {{192.93.255.255}}.
> This is about enhancing the current {{ImplicitSnitch}} to support IP-based 
> tags like:
> - {{hostfrag_1 = 255}}
> - {{hostfrag_2 = 255}}
> - {{hostfrag_3 = 93}}
> - {{hostfrag_4 = 192}}
> Note that IPv6 support will be implemented in a separate ticket.
> h3. Host name fragment tags
> Let's assume a Solr node's host name is {{serv1.dc1.country1.apache.org}}.
> This is about enhancing the current {{ImplicitSnitch}} to support tags like:
> - {{hostfrag_1 = org}}
> - {{hostfrag_2 = apache}}
> - {{hostfrag_3 = country1}}
> - {{hostfrag_4 = dc1}}
> - {{hostfrag_5 = serv1}}






Re: Shard splitting blocks the overseer queue for duration of split

2016-01-25 Thread Scott Blum
Looking deeper, it's entirely possible my experience is out of date; we're
running a Solr ~5.2.1 installation, and I'm 100% sure that in 5.2.1 a split
shard command completely blocks overseer.  Even OVERSEERSTATUS times out
while a split shard is happening.

Perhaps this was fixed as part of SOLR-7855?  I don't grok all the new
code, but it looks like as of 5.4 there's some support for overseer doing
more things concurrently.



On Mon, Jan 25, 2016 at 4:57 PM, Anshum Gupta 
wrote:

> Hi Scott,
>
> Shard splitting shouldn't block unrelated tasks. Here's the current
> definition of 'unrelated': anything that involves a different collection.
> Right now, the Overseer only processes one collection specific task at a
> time, however, you should certainly be able to split shards from other
> collections. It's a bug if it doesn't work that way.
>
> There is logic to check for mutual exclusion so that race conditions don't
> come back to bite us e.g. if I send in add replica, shard split, delete
> replica, AND/OR delete shard request for the same collection, we might run
> into issues.
>
>
> On Mon, Jan 25, 2016 at 1:02 PM, Scott Blum  wrote:
>
>> Hi dev,
>>
>> I searched around on this but couldn't find any related JIRA tickets or
>> work, although perhaps I missed it.
>>
>> We've run into a major scaling problem in the shard splitting operation.
>> The entire shard split is a single operation in the overseer, and it blocks any
>> other queue items from executing while the shard split happens.  Shard
>> splits can take on the order of many minutes to complete; during this time
>> no other overseer ops (including status updates) can occur.  Additionally,
>> this means you can only run a single shard split operation at a time
>> across an entire deployment.
>>
>> Is anyone already working on this?  If not, I'm planning on working on it
>> myself, because we have to solve this scaling issue one way or another.
>> I'd love to get guidance from someone knowledgeable, both to make it more
>> solid, and also hopefully so it could be upstreamed.
>>
>> Thanks!
>> Scott
>>
>>
>
>
> --
> Anshum Gupta
>


[JENKINS-EA] Lucene-Solr-5.x-Linux (64bit/jdk-9-ea+95) - Build # 15351 - Still Failing!

2016-01-25 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/15351/
Java: 64bit/jdk-9-ea+95 -XX:+UseCompressedOops -XX:+UseSerialGC 
-XX:-CompactStrings

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SaslZkACLProviderTest

Error Message:
5 threads leaked from SUITE scope at 
org.apache.solr.cloud.SaslZkACLProviderTest: 1) Thread[id=9571, 
name=changePwdReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)2) Thread[id=9570, 
name=apacheds, state=WAITING, group=TGRP-SaslZkACLProviderTest] at 
java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:516) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)3) Thread[id=9574, 
name=groupCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
 at jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)4) Thread[id=9573, 
name=ou=system.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest] 
at jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)5) Thread[id=9572, 
name=kdcReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.SaslZkACLProviderTest: 
   1) Thread[id=9571, name=changePwdReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest]
at jdk.internal.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
at 
java.util.concurre

[jira] [Commented] (LUCENE-6956) TestBKDTree.testRandomMedium() failure: some hits were wrong

2016-01-25 Thread Nicholas Knize (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15116246#comment-15116246
 ] 

Nicholas Knize commented on LUCENE-6956:


bq. Hmm, I hit this failure with the patch after some beasting

Dug deeper, and this error revealed a 2x accuracy issue with large exotic 
rectangles created by the BKD split approach (different from the pole-crossing 
issue). I added back the distance restriction and opened LUCENE-6994 to 
address the accuracy issue.

> TestBKDTree.testRandomMedium() failure: some hits were wrong
> 
>
> Key: LUCENE-6956
> URL: https://issues.apache.org/jira/browse/LUCENE-6956
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Steve Rowe
>Assignee: Michael McCandless
> Attachments: LUCENE-6956.patch, LUCENE-6956.patch, LUCENE-6956.patch, 
> LUCENE-6956.patch
>
>
> My Jenkins found a reproducible seed for a failure of 
> {{TestBKDTree.testRandomMedium()}} on branch_5x with Java8:
> {noformat}
>   [junit4] Suite: org.apache.lucene.bkdtree.TestBKDTree
>[junit4]   1> T1: id=29784 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29528
>[junit4]   1>   lat=86.88086835667491 lon=-8.821268286556005
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=29801 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29545
>[junit4]   1>   lat=86.88149104826152 lon=-9.34366637840867
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=29961 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29705
>[junit4]   1>   lat=86.8706679996103 lon=-9.38328042626381
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=30015 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29759
>[junit4]   1>   lat=86.84762765653431 lon=-9.44802425801754
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=30017 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29761
>[junit4]   1>   lat=86.8753323610872 lon=-9.091365560889244
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=30042 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29786
>[junit4]   1>   lat=86.85837233439088 lon=-9.127480667084455
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=30061 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29805
>[junit4]   1>   lat=86.85876209288836 lon=-9.408821929246187
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=30077 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29821
>[junit4]   1>   lat=86.84681385755539 lon=-8.837449550628662
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=30185 should match but did not
>

[jira] [Updated] (LUCENE-6956) TestBKDTree.testRandomMedium() failure: some hits were wrong

2016-01-25 Thread Nicholas Knize (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Knize updated LUCENE-6956:
---
Attachment: LUCENE-6956.patch

Updated patch to address comments.  Note: the distance size restriction was 
added back, and a new issue, LUCENE-6994, was opened to investigate the 
distance accuracy issues.

> TestBKDTree.testRandomMedium() failure: some hits were wrong
> 
>
> Key: LUCENE-6956
> URL: https://issues.apache.org/jira/browse/LUCENE-6956
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Steve Rowe
>Assignee: Michael McCandless
> Attachments: LUCENE-6956.patch, LUCENE-6956.patch, LUCENE-6956.patch, 
> LUCENE-6956.patch
>
>
> My Jenkins found a reproducible seed for a failure of 
> {{TestBKDTree.testRandomMedium()}} on branch_5x with Java8:
> {noformat}
>   [junit4] Suite: org.apache.lucene.bkdtree.TestBKDTree
>[junit4]   1> T1: id=29784 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29528
>[junit4]   1>   lat=86.88086835667491 lon=-8.821268286556005
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=29801 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29545
>[junit4]   1>   lat=86.88149104826152 lon=-9.34366637840867
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=29961 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29705
>[junit4]   1>   lat=86.8706679996103 lon=-9.38328042626381
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=30015 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29759
>[junit4]   1>   lat=86.84762765653431 lon=-9.44802425801754
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=30017 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29761
>[junit4]   1>   lat=86.8753323610872 lon=-9.091365560889244
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=30042 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29786
>[junit4]   1>   lat=86.85837233439088 lon=-9.127480667084455
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=30061 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29805
>[junit4]   1>   lat=86.85876209288836 lon=-9.408821929246187
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=30077 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29821
>[junit4]   1>   lat=86.84681385755539 lon=-8.837449550628662
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=30185 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.75223124

[jira] [Created] (LUCENE-6994) GeoUtils Distance accuracy degrades with irregular rectangles

2016-01-25 Thread Nicholas Knize (JIRA)
Nicholas Knize created LUCENE-6994:
--

 Summary: GeoUtils Distance accuracy degrades with irregular 
rectangles
 Key: LUCENE-6994
 URL: https://issues.apache.org/jira/browse/LUCENE-6994
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Nicholas Knize


This is a continuation of LUCENE-6908, which validates the USGS 0.5% accuracy 
requirement for the Sinnott haversine distance calculation. That issue also 
introduced a space segmentation approach for BKD distance queries near the 
poles. In LUCENE-6956 a restriction on distance size was initially removed so 
that BKD distance queries could be randomly tested at any range. This revealed 
an issue where the distance error nearly doubles for exotic rectangles created 
by BKD's split algorithm. This issue will investigate the potential distance 
error caused by the segmentation approach introduced in the second part of 
LUCENE-6908.
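The Sinnott haversine formulation referenced above can be sketched as follows. 
This is an illustrative plain-Java version, not Lucene's GeoUtils code, and 
the mean earth radius constant used here is an assumption:

```java
public class HaversinSketch {
    // Mean earth radius in meters (assumed value; Lucene's constant may differ).
    static final double EARTH_MEAN_RADIUS = 6_371_008.7714;

    /** Sinnott haversine great-circle distance between two points, in meters. */
    static double haversin(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double sinHalfLat = Math.sin(dLat / 2);
        double sinHalfLon = Math.sin(dLon / 2);
        // a = sin^2(dLat/2) + cos(lat1) * cos(lat2) * sin^2(dLon/2)
        double a = sinHalfLat * sinHalfLat
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * sinHalfLon * sinHalfLon;
        // Sinnott's form clamps sqrt(a) to 1 and uses asin; it stays
        // numerically stable for the small distances that dominate in practice.
        double c = 2 * Math.asin(Math.min(1.0, Math.sqrt(a)));
        return EARTH_MEAN_RADIUS * c;
    }

    public static void main(String[] args) {
        // One degree of longitude along the equator is roughly 111.2 km.
        System.out.println(haversin(0, 0, 0, 1));
    }
}
```

Like any spherical-model distance, its error versus the geodetic ground truth 
grows with the shape and extent of the region being measured, which is the 
behavior this issue tracks for BKD's split-produced rectangles.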



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8595) Use BinaryRequestWriter by default in HttpSolrClient and ConcurrentUpdateSolrClient

2016-01-25 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15116182#comment-15116182
 ] 

David Smiley commented on SOLR-8595:


+1.

Side note (separate issue): it'd be nice if there were a SolrServer one-liner 
to toggle all communication (request & response) between XML and binary, 
without having to know which classes do the parsing/writing, as that's really 
an implementation/internal detail. It's nice to make such a flag configurable 
in the client's software to ease upgrades.

> Use BinaryRequestWriter by default in HttpSolrClient and 
> ConcurrentUpdateSolrClient
> ---
>
> Key: SOLR-8595
> URL: https://issues.apache.org/jira/browse/SOLR-8595
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8595.patch
>
>
> Use BinaryRequestWriter by default in HttpSolrClient and 
> ConcurrentUpdateSolrClient. They both use the XML-based update format right now.






[JENKINS] Lucene-Solr-trunk-Solaris (64bit/jdk1.8.0) - Build # 355 - Still Failing!

2016-01-25 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Solaris/355/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.update.AutoCommitTest.testMaxDocs

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([1A548009804C33A8:A3D556D6ACA63722]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:755)
at 
org.apache.solr.update.AutoCommitTest.testMaxDocs(AutoCommitTest.java:199)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: 

00


request was:q=id:14&qt=standard&start=0&rows=20&version=2.2
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:748)
... 40 more




Build Log:
[...truncated 9236 lines...]
   [junit4] Suite: org.apache.solr.update.A

Re: Shard splitting blocks the overseer queue for duration of split

2016-01-25 Thread Anshum Gupta
Hi Scott,

Shard splitting shouldn't block unrelated tasks. Here's the current
definition of 'unrelated': anything that involves a different collection.
Right now the Overseer only processes one collection-specific task at a
time; however, you should certainly be able to split shards from other
collections. It's a bug if it doesn't work that way.

There is logic to check for mutual exclusion so that race conditions don't
come back to bite us, e.g. if I send in add-replica, shard-split,
delete-replica, and/or delete-shard requests for the same collection, we
might run into issues.
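The exclusion policy described above could be sketched roughly like this. All
names here are hypothetical illustrations, not the actual Overseer code: tasks
take a lock keyed by collection name, so operations on different collections
may run concurrently while same-collection operations serialize.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of per-collection task exclusion: tasks for
// unrelated collections run concurrently; tasks touching the same
// collection serialize on that collection's lock.
public class CollectionTaskRunner {
    private final Map<String, ReentrantLock> locks = new ConcurrentHashMap<>();

    public void runTask(String collection, Runnable task) {
        ReentrantLock lock =
                locks.computeIfAbsent(collection, c -> new ReentrantLock());
        lock.lock(); // blocks only tasks for the SAME collection
        try {
            task.run();
        } finally {
            lock.unlock();
        }
    }
}
```

Under such a scheme a long-running shard split on one collection would hold
only that collection's lock, leaving splits (and status updates) on other
collections free to proceed.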


On Mon, Jan 25, 2016 at 1:02 PM, Scott Blum  wrote:

> Hi dev,
>
> I searched around on this but couldn't find any related JIRA tickets or
> work, although perhaps I missed it.
>
> We've run into a major scaling problem in the shard splitting operation.
> The entire shard split is a single operation in overseer, and blocks any
> other queue items from executing while the shard split happens.  Shard
> splits can take on the order of many minutes to complete, during this time
> no other overseer ops (including status updates) can occur.  Additionally,
> this means you can only run a single shard split operation at a time,
> across an entire deployment.
>
> Is anyone already working on this?  If not, I'm planning on working on it
> myself, because we have to solve this scaling issue one way or another.
> I'd love to get guidance from someone knowledgeable, both to make it more
> solid, and also hopefully so it could be upstreamed.
>
> Thanks!
> Scott
>
>


-- 
Anshum Gupta


Re: Merge vs Rebase

2016-01-25 Thread Shai Erera
Thanks Dawid. I'll try, for sport, to not amend commits for a while and see
how that works out for me :). I'll admit that I already ran into needing to
revert a local change because it didn't work, but I didn't have the
"history" to revert to ...

I don't mind typing 'git push origin HEAD:master'. I prefer commands that
are explicit. Anyway, I usually do Ctrl+R (in bash) and type the command,
which is quickly found in the history. Since it's always the same, I don't
really type it all the time.

Shai

On Mon, Jan 25, 2016 at 11:07 PM Dawid Weiss  wrote:

> > I'll admit Dawid that I obviously still have a lot to learn about working
> > with Git, but what I wrote above reflects my workflow on another project,
> > where we also use Gerrit for doing the code reviews.
>
> I'm not a git expert either, but I've grown to like it for its
> surprising simplicity. If you grasp the idea of a commit being a
> "patch with a parent" then everything becomes really logical. Looking
> up options and switches to git commands can be frustrating, but that's
> the "implementation" layer, not the "understanding of what's
> happening" layer.
>
> > I was referring to our current review style -- I'll upload a patch to
> JIRA
> > and get reviews. Yes, for my own history I could commit as many as I want
> > locally, then squash what I want, drop the ones that I don't etc. At the
> > moment I'm used to amending the commit, but I know of a coworker of mine
> who
> > works like you propose -- many commits and squashing in the end.
>
> Your workflow is up to you, Shai. I think amending the same commit
> over and over is in contrast to what you use a version tracking system
> for -- you do *want* those changes layered one on top of another. If
> not for anything else, then for just the possibility that you may want
> to browse through them and see what you've changed when (I do this a
> lot).
>
> > I thought that if I'm in branch 'feature', I cannot do 'git push' since
> > there's no remote tracking branch called 'feature'.
>
> This is where git does not excel. A "git push" for one person may be
> different than for another person -- this depends... on the
> configuration.
>
> The push by default tries to send all the "matching" branches -- the
> ones you have locally that are tracking something on the remote end
> with the same name. They have been fiddling with the defaults though,
> so I'm not sure if this is still the case. Grep for "push.default"
> here:
>
> https://git-scm.com/docs/git-config
>
> > So just so I'm clear, the sequence of commands you're proposing is:
> >
> > git checkout master
> > git merge --squash feature
> # review patch here
> > git push (update origin/master)
> >
> > git checkout branch_5x
> > git cherry-pick master (or the commit hash)
> > git push (update origin/branch_5x)
>
> Yes, that's one of my favorites. It consolidates all the "working"
> state of a feature into one final diff (patch) which you can review
> (see above) before pushing. I have pushes set to  "matching" so for me
> it's one final push after I'm done with all the branches.
>
> > Can you make it even shorter? :)
>
> Yes, you can use what Yonik suggested -- just work on the master
> branch directly and rebase before you commit. For tiny things this
> works just fine.
>
> Dawid
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[JENKINS-EA] Lucene-Solr-5.x-Linux (64bit/jdk-9-ea+95) - Build # 15350 - Failure!

2016-01-25 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/15350/
Java: 64bit/jdk-9-ea+95 -XX:-UseCompressedOops -XX:+UseSerialGC 
-XX:-CompactStrings

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SaslZkACLProviderTest

Error Message:
5 threads leaked from SUITE scope at 
org.apache.solr.cloud.SaslZkACLProviderTest: 1) Thread[id=3084, 
name=apacheds, state=WAITING, group=TGRP-SaslZkACLProviderTest] at 
java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:516) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)2) Thread[id=3086, 
name=groupCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
 at jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)3) Thread[id=3088, 
name=ou=system.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest] 
at jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)4) Thread[id=3087, 
name=changePwdReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)5) Thread[id=3085, 
name=kdcReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.SaslZkACLProviderTest: 
   1) Thread[id=3084, name=apacheds, state=WAITING, 
group=TGRP-SaslZkACLProviderTest]
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:516)
at java.util.TimerThread.mainLoop(Timer.java:526)
at java.util.TimerThread.run(Timer.java:505)
   2) Thread[id=3086, name=groupCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest]
at jdk

Re: Merge vs Rebase

2016-01-25 Thread Dawid Weiss
> I'll admit Dawid that I obviously still have a lot to learn about working
> with Git, but what I wrote above reflects my workflow on another project,
> where we also use Gerrit for doing the code reviews.

I'm not a git expert either, but I've grown to like it for its
surprising simplicity. If you grasp the idea of a commit being a
"patch with a parent" then everything becomes really logical. Looking
up options and switches to git commands can be frustrating, but that's
the "implementation" layer, not the "understanding of what's
happening" layer.

> I was referring to our current review style -- I'll upload a patch to JIRA
> and get reviews. Yes, for my own history I could commit as many as I want
> locally, then squash what I want, drop the ones that I don't etc. At the
> moment I'm used to amending the commit, but I know of a coworker of mine who
> works like you propose -- many commits and squashing in the end.

Your workflow is up to you, Shai. I think amending the same commit
over and over is in contrast to what you use a version tracking system
for -- you do *want* those changes layered one on top of another. If
not for anything else, then for just the possibility that you may want
to browse through them and see what you've changed when (I do this a
lot).

> I thought that if I'm in branch 'feature', I cannot do 'git push' since
> there's no remote tracking branch called 'feature'.

This is where git does not excel. A "git push" for one person may be
different than for another person -- this depends... on the
configuration.

The push by default tries to send all the "matching" branches -- the
ones you have locally that are tracking something on the remote end
with the same name. They have been fiddling with the defaults though,
so I'm not sure if this is still the case. Grep for "push.default"
here:

https://git-scm.com/docs/git-config

> So just so I'm clear, the sequence of commands you're proposing is:
>
> git checkout master
> git merge --squash feature
# review patch here
> git push (update origin/master)
>
> git checkout branch_5x
> git cherry-pick master (or the commit hash)
> git push (update origin/branch_5x)

Yes, that's one of my favorites. It consolidates all the "working"
state of a feature into one final diff (patch) which you can review
(see above) before pushing. I have pushes set to  "matching" so for me
it's one final push after I'm done with all the branches.

> Can you make it even shorter? :)

Yes, you can use what Yonik suggested -- just work on the master
branch directly and rebase before you commit. For tiny things this
works just fine.

Dawid




Shard splitting blocks the overseer queue for duration of split

2016-01-25 Thread Scott Blum
Hi dev,

I searched around on this but couldn't find any related JIRA tickets or
work, although perhaps I missed it.

We've run into a major scaling problem in the shard splitting operation.
The entire shard split is a single operation in the overseer, and it blocks
any other queue items from executing while the split happens.  Shard
splits can take on the order of many minutes to complete; during this time
no other overseer ops (including status updates) can occur.  Additionally,
this means you can only run a single shard split operation at a time
across an entire deployment.

Is anyone already working on this?  If not, I'm planning on working on it
myself, because we have to solve this scaling issue one way or another.
I'd love to get guidance from someone knowledgeable, both to make it more
solid, and also hopefully so it could be upstreamed.

Thanks!
Scott


Re: Merge vs Rebase

2016-01-25 Thread Yonik Seeley
On Mon, Jan 25, 2016 at 2:47 PM, Shai Erera  wrote:
> The 'merged' commit, in this case, seems redundant to me as it doesn't add
> any useful information about them. I believe this case isn't a good example
> of when to merge. Just my thoughts...

+1

For back-porting the majority of issues from trunk to 5x, cherry-pick
should definitely be the default we point to.
"merge" is about merging *all* changes from a branch, so it won't even
work unless one has a separate feature branch for the change being
back-ported.

-Yonik




[jira] [Comment Edited] (LUCENE-6930) Decouple GeoPointField from NumericType

2016-01-25 Thread Nicholas Knize (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15116002#comment-15116002
 ] 

Nicholas Knize edited comment on LUCENE-6930 at 1/25/16 8:50 PM:
-

Updated patch to include the following:

* incorporate review feedback
* override {{GeoPointPrefixTermsEnum.accept}} to "seek" to the  "floor" range 
of the candidate term. This boosts query performance by eliminating superfluous 
range visits.
* fixed bug in {{GeoEncodingUtils.geoCodedToPrefixCodedBytes}} and 
{{.getPrefixCodedLongShift}} that was ignoring the {{BytesRef.offset}} variable

I'm going to open up another query performance improvement issue that switches 
from comparing BytesRefs to directly comparing the long encoded range values. 
This will instead convert candidate terms to their encoded range values and 
eliminate the need for constantly converting ranges to BytesRefs for 
comparisons.

NOTE: Beast testing this may result in some accuracy failures that are being 
fixed separately by LUCENE-6956


was (Author: nknize):
Updated patch to include the following:

* incorporate review feedback
* override {{GeoPointPrefixTermsEnum.accept}} to "seek" to the  "floor" range 
of the candidate term. This boosts query performance by eliminating superfluous 
range visits.
* fixed bug in {{GeoEncodingUtils.geoCodedToPrefixCodedBytes}} and 
{{.getPrefixCodedLongShift}} that was ignoring the {{BytesRef.offset}} variable

I'm going to open up another query performance improvement issue that switches 
from comparing BytesRefs to directly comparing the long encoded range values. 
This will instead convert candidate terms to their encoded range values and 
eliminate the need for constantly converting ranges to BytesRefs for 
comparisons.

> Decouple GeoPointField from NumericType
> ---
>
> Key: LUCENE-6930
> URL: https://issues.apache.org/jira/browse/LUCENE-6930
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Nicholas Knize
> Attachments: LUCENE-6930.patch, LUCENE-6930.patch, LUCENE-6930.patch, 
> LUCENE-6930.patch
>
>
> {{GeoPointField}} currently relies on {{NumericTokenStream}} to create prefix 
> terms for a GeoPoint using the precision step defined in {{GeoPointField}}. 
> At search time {{GeoPointTermsEnum}} recurses to a max precision that is 
> computed by the Query parameters. This max precision is never the full 
> precision, so creating and indexing the full precision terms is useless and 
> wasteful (it was always a side effect of just using indexing logic from the 
> Numeric type). 
> Furthermore, since the numerical logic always stored high precision terms 
> first, the recursion in {{GeoPointTermsEnum}} required transient memory for 
> storing ranges. By moving the trie logic to its own {{GeoPointTokenStream}} 
> and reversing the term order (such that lower resolution terms are first), 
> the GeoPointTermsEnum can naturally traverse, enabling on-demand creation of 
> PrefixTerms. This will be done in a separate issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6930) Decouple GeoPointField from NumericType

2016-01-25 Thread Nicholas Knize (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Knize updated LUCENE-6930:
---
Attachment: LUCENE-6930.patch

Updated patch to include the following:

* incorporate review feedback
* override {{GeoPointPrefixTermsEnum.accept}} to "seek" to the "floor" range 
of the candidate term. This boosts query performance by eliminating superfluous 
range visits.
* fixed bug in {{GeoEncodingUtils.geoCodedToPrefixCodedBytes}} and 
{{.getPrefixCodedLongShift}} that was ignoring the {{BytesRef.offset}} variable

I'm going to open up another query performance improvement issue that switches 
from comparing BytesRefs to directly comparing the long encoded range values. 
This will instead convert candidate terms to their encoded range values and 
eliminate the need for constantly converting ranges to BytesRefs for 
comparisons.
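The {{BytesRef.offset}} bug fixed above is a classic pitfall worth illustrating: code that indexes into the backing array from position 0 instead of from {{offset}} works only by accident, whenever the ref happens to start at the beginning of the array. A minimal, self-contained sketch (the slice class below is a stand-in for illustration, not Lucene's actual {{BytesRef}}):

```java
public class OffsetBug {
    // Stand-in for a BytesRef-style view over a shared byte array.
    static class ByteSlice {
        final byte[] bytes;
        final int offset, length;
        ByteSlice(byte[] bytes, int offset, int length) {
            this.bytes = bytes; this.offset = offset; this.length = length;
        }
    }

    // Buggy: ignores offset and reads from the start of the backing array.
    static byte firstByteBuggy(ByteSlice s) { return s.bytes[0]; }

    // Fixed: honors offset, reading the first byte of the logical slice.
    static byte firstByte(ByteSlice s) { return s.bytes[s.offset]; }

    public static void main(String[] args) {
        byte[] backing = {10, 20, 30, 40};
        ByteSlice slice = new ByteSlice(backing, 2, 2); // logical view {30, 40}
        System.out.println(firstByteBuggy(slice)); // prints 10 -- wrong
        System.out.println(firstByte(slice));      // prints 30 -- correct
    }
}
```

Any slice that starts at offset 0 masks the bug entirely, which is why such errors tend to surface only under randomized testing.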

> Decouple GeoPointField from NumericType
> ---
>
> Key: LUCENE-6930
> URL: https://issues.apache.org/jira/browse/LUCENE-6930
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Nicholas Knize
> Attachments: LUCENE-6930.patch, LUCENE-6930.patch, LUCENE-6930.patch, 
> LUCENE-6930.patch
>
>
> {{GeoPointField}} currently relies on {{NumericTokenStream}} to create prefix 
> terms for a GeoPoint using the precision step defined in {{GeoPointField}}. 
> At search time {{GeoPointTermsEnum}} recurses to a max precision that is 
> computed by the Query parameters. This max precision is never the full 
> precision, so creating and indexing the full precision terms is useless and 
> wasteful (it was always a side effect of just using indexing logic from the 
> Numeric type). 
> Furthermore, since the numerical logic always stored high precision terms 
> first, the recursion in {{GeoPointTermsEnum}} required transient memory for 
> storing ranges. By moving the trie logic to its own {{GeoPointTokenStream}} 
> and reversing the term order (such that lower resolution terms are first), 
> the GeoPointTermsEnum can naturally traverse, enabling on-demand creation of 
> PrefixTerms. This will be done in a separate issue.






[jira] [Commented] (LUCENE-6956) TestBKDTree.testRandomMedium() failure: some hits were wrong

2016-01-25 Thread Nicholas Knize (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15115991#comment-15115991
 ] 

Nicholas Knize commented on LUCENE-6956:


Oh, sorry for not addressing the {{1e-7}} question. With the 32-bit 
quantization, encoding error can exceed {{1e-7}} but not {{1e-6}}. So it's not 
that it's weakening the test; it's that once a point is indexed (quantized from 
two doubles to two 32-bit ints) the location can be affected by that much. I'm 
not exactly sure where 1e-7 came from. I can reduce it from 1e-6 but, in all 
honesty, I haven't worked out the exact maximum error. That could be a fun and 
useful exercise; I just haven't had the time.
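To make the quantization point concrete, here is a small, self-contained sketch of a floor-based 32-bit longitude encoding. The scale constant is an assumption for illustration, not the exact Lucene constant; it shows that a single-coordinate round trip moves a point by at most one quantization step (360 / 2^32 ≈ 8.4e-8 degrees), while derived quantities such as distances between two quantized points can accumulate more error:

```java
import java.util.Random;

public class QuantizationError {
    // Assumed scale for illustration: 32 bits spread over 360 degrees.
    static final double LON_SCALE = (0x1L << 32) / 360.0;

    // Floor-based quantization of a longitude in [-180, 180) to a signed int.
    static int encodeLon(double lon) {
        return (int) Math.floor(lon * LON_SCALE);
    }

    static double decodeLon(int encoded) {
        return encoded / LON_SCALE;
    }

    public static void main(String[] args) {
        Random r = new Random(42);
        double maxErr = 0;
        for (int i = 0; i < 1_000_000; i++) {
            double lon = r.nextDouble() * 360.0 - 180.0;
            maxErr = Math.max(maxErr, Math.abs(lon - decodeLon(encodeLon(lon))));
        }
        // One quantization step is 360 / 2^32 ~= 8.4e-8 degrees, so the
        // per-coordinate round-trip error stays below 1e-7; errors from
        // combining two quantized coordinates can exceed that.
        System.out.println("max round-trip error: " + maxErr);
    }
}
```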

bq. Hmm I hit this failure with the patch after some beasting:

Looks like the TestGeoUtils distance test doesn't have any boundary checks, so 
this is related to above. I thought I added a boundary check (e.g., return null 
boolean)? Maybe it was stepped on? I can add it back.

bq. Can we rename GeoRelationUtils.pointInRect to .pointInRectPrecise since 
it's now comparing doubles directly?

Absolutely!

> TestBKDTree.testRandomMedium() failure: some hits were wrong
> 
>
> Key: LUCENE-6956
> URL: https://issues.apache.org/jira/browse/LUCENE-6956
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Steve Rowe
>Assignee: Michael McCandless
> Attachments: LUCENE-6956.patch, LUCENE-6956.patch, LUCENE-6956.patch
>
>
> My Jenkins found a reproducible seed for a failure of 
> {{TestBKDTree.testRandomMedium()}} on branch_5x with Java8:
> {noformat}
>   [junit4] Suite: org.apache.lucene.bkdtree.TestBKDTree
>[junit4]   1> T1: id=29784 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29528
>[junit4]   1>   lat=86.88086835667491 lon=-8.821268286556005
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=29801 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29545
>[junit4]   1>   lat=86.88149104826152 lon=-9.34366637840867
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=29961 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29705
>[junit4]   1>   lat=86.8706679996103 lon=-9.38328042626381
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=30015 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29759
>[junit4]   1>   lat=86.84762765653431 lon=-9.44802425801754
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=30017 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29761
>[junit4]   1>   lat=86.8753323610872 lon=-9.091365560889244
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=30042 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29786
>[junit4]   1>   lat=86.85837233439088 lon=-9.127480667084455
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=30061 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29805
>[junit4]   1>   lat=86.85876209288836 lo

[jira] [Updated] (SOLR-8595) Use BinaryRequestWriter by default in HttpSolrClient and ConcurrentUpdateSolrClient

2016-01-25 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-8595:

Attachment: SOLR-8595.patch

Trivial patch. ConcurrentUpdateSolrClient uses HttpSolrClient, so changing the 
default in HttpSolrClient is sufficient.

> Use BinaryRequestWriter by default in HttpSolrClient and 
> ConcurrentUpdateSolrClient
> ---
>
> Key: SOLR-8595
> URL: https://issues.apache.org/jira/browse/SOLR-8595
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8595.patch
>
>
> Use BinaryRequestWriter by default in HttpSolrClient and 
> ConcurrentUpdateSolrClient. They both use the XML-based update format right now.






[jira] [Created] (SOLR-8595) Use BinaryRequestWriter by default in HttpSolrClient and ConcurrentUpdateSolrClient

2016-01-25 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-8595:
---

 Summary: Use BinaryRequestWriter by default in HttpSolrClient and 
ConcurrentUpdateSolrClient
 Key: SOLR-8595
 URL: https://issues.apache.org/jira/browse/SOLR-8595
 Project: Solr
  Issue Type: Improvement
  Components: SolrJ
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: 5.5, Trunk


Use BinaryRequestWriter by default in HttpSolrClient and 
ConcurrentUpdateSolrClient. They both use the XML-based update format right now.






Re: Merge vs Rebase

2016-01-25 Thread Mark Miller
On Mon, Jan 25, 2016 at 3:11 PM Dawid Weiss  wrote:

> > commit 51396b8cf63fd7c77d005d10803ae953acfe659f
> > Merge: 5319ca6 6df206a
> >
> > The 'merged' commit, in this case, seems redundant to me as it doesn't
> add
> > any useful information about them.
>
> It tells you exactly which two parent commits this merge actually
> joins. In terms of patches -- this tells you which two different
> "lines" of patches this commit consolidates. This isn't unimportant
> information.
>
>
Depends on the change, who you are, and what you care about.

It's a discussion that's already been had 10 times on the internet. We
won't break any new ground here.

Most of the other Apache projects go for linear history and rebase (e.g., see
the Hadoop projects: https://wiki.apache.org/hadoop/HowToCommit). Other
projects don't. It's simply a religious decision.

- Mark
-- 
- Mark
about.me/markrmiller


Re: Merge vs Rebase

2016-01-25 Thread Shai Erera
As a person looking at the history, I don't care what this particular
commit merged. I only care about the two commits before it. They're the
ones with content, they're the ones I'll want to 'git show' ... most likely.

Anyway, we don't have to agree on this, but I do suggest that we come up w/
a standard. Otherwise, we'll see a mess. I personally prefer rebase, but if
the project decided to do merge, I'll adopt that and apply it. I'm not
going to argue much because aside from aesthetics and cleanup of 'git log',
I don't feel knowledgeable enough to argue against one or the other.

Shai

On Mon, Jan 25, 2016 at 10:11 PM Dawid Weiss  wrote:

> > commit 51396b8cf63fd7c77d005d10803ae953acfe659f
> > Merge: 5319ca6 6df206a
> >
> > The 'merged' commit, in this case, seems redundant to me as it doesn't
> add
> > any useful information about them.
>
> It tells you exactly which two parent commits this merge actually
> joins. In terms of patches -- this tells you which two different
> "lines" of patches this commit consolidates. This isn't unimportant
> information.
>
> Think of Solr and Lucene, for example -- the merge commit that glues
> the two projects together looks exactly like the one above: it
> connects two different lines of (thousands) of patches that in the
> result form Lucene-Solr.
>
>
>


[jira] [Commented] (LUCENE-6956) TestBKDTree.testRandomMedium() failure: some hits were wrong

2016-01-25 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15115897#comment-15115897
 ] 

Michael McCandless commented on LUCENE-6956:


Thanks [~nknize], I like seeing further fixing of previous test castrations 
like this :) :

{noformat}
   @Override
   protected boolean forceSmall() {
-    // TODO: GeoUtils are potentially slow if we use small=false with heavy testing
-    return true;
+    return false;
   }
{noformat}

But what about my worries about the 10x increase in the tolerance for
test failures?  I'd rather not weaken tests by increasing the allowed
"fuzz" unless it's really necessary ... it's 1e-7 now but the patch
changes it to 1e-6 for poly and rect tests.

Can we rename {{GeoRelationUtils.pointInRect}} to
{{.pointInRectPrecise}} since it's now comparing doubles directly?

Hmm I hit this failure with the patch after some beasting:

{noformat}
[junit4:pickseed] Seed property 'tests.seed' already defined: CDA594C3EF930919
   [junit4]  says hallo! Master seed: CDA594C3EF930919
   [junit4] Executing 1 suite with 1 JVM.
   [junit4] 
   [junit4] Started J0 PID(71413@localhost).
   [junit4] Suite: org.apache.lucene.util.TestGeoUtils
   [junit4]   1> doc=983 matched but should not on iteration 49
   [junit4]   1>   lon=139.60821881890297 lat=69.31676804088056 distanceMeters=4715192.068461553 vs radiusMeters=4698375.33421177
   [junit4]   2> NOTE: reproduce with: ant test -Dtestcase=TestGeoUtils -Dtests.method=testGeoRelations -Dtests.seed=CDA594C3EF930919 -Dtests.slow=true -Dtests.linedocsfile=/lucenedata/hudson.enwiki.random.lines.txt.fixed -Dtests.locale=tr-TR -Dtests.timezone=America/Matamoros -Dtests.asserts=true -Dtests.file.encoding=UTF-8
   [junit4] FAILURE 0.43s | TestGeoUtils.testGeoRelations <<<
   [junit4]    > Throwable #1: java.lang.AssertionError: 1 incorrect hits (see above)
   [junit4]    >    at __randomizedtesting.SeedInfo.seed([CDA594C3EF930919:F8680769B647FA7]:0)
   [junit4]    >    at org.apache.lucene.util.TestGeoUtils.testGeoRelations(TestGeoUtils.java:532)
   [junit4]    >    at java.lang.Thread.run(Thread.java:745)
   [junit4]   2> NOTE: test params are: codec=Lucene60, sim=RandomSimilarity(queryNorm=true,coord=no): {}, locale=tr-TR, timezone=America/Matamoros
   [junit4]   2> NOTE: Linux 3.19.0-21-generic amd64/Oracle Corporation 1.8.0_65 (64-bit)/cpus=72,threads=1,free=428760240,total=514850816
   [junit4]   2> NOTE: All tests run in this JVM: [TestGeoUtils]
   [junit4] Completed [1/1 (1!)] in 0.83s, 1 test, 1 failure <<< FAILURES!
{noformat}

Re: Merge vs Rebase

2016-01-25 Thread Dawid Weiss
> commit 51396b8cf63fd7c77d005d10803ae953acfe659f
> Merge: 5319ca6 6df206a
>
> The 'merged' commit, in this case, seems redundant to me as it doesn't add
> any useful information about them.

It tells you exactly which two parent commits this merge actually
joins. In terms of patches -- this tells you which two different
"lines" of patches this commit consolidates. This isn't unimportant
information.

Think of Solr and Lucene, for example -- the merge commit that glues
the two projects together looks exactly like the one above: it
connects two different lines of (thousands) of patches that in the
result form Lucene-Solr.




Re: Merge vs Rebase

2016-01-25 Thread Shai Erera
I'll admit Dawid that I obviously still have a lot to learn about working
with Git, but what I wrote above reflects my workflow on another project,
where we also use Gerrit for doing the code reviews.

> How do you get reviews? Did you push feature-branch to origin or did you
just create a patch from it?

I was referring to our current review style -- I'll upload a patch to JIRA
and get reviews. Yes, for my own history I could commit as many as I want
locally, then squash what I want, drop the ones that I don't etc. At the
moment I'm used to amending the commit, but I know of a coworker of mine
who works like you propose -- many commits and squashing in the end. He's
worked w/ Git for years and you obviously have too, so I can only conclude
that your way probably has benefits, which I haven't yet discovered.

> You don't have to rebase anything.

You're right, and I've realized it after I sent it. Again, this is just a
workflow I've developed, where my local master is always up-to-date with
origin, and then I rebase / branch off it instead of origin/master. But I
could also totally rebase my local feature-branch over origin/master.

> git push will push all remote tracking branches (or current branch),
> so you could simply do git push. Shorter.

I thought that if I'm in branch 'feature', I cannot do 'git push' since
there's no remote tracking branch called 'feature'. To just use 'git push',
won't I need to first merge 'feature' into local master and then use 'git
push'? Could be I'm wrong, but I'm almost sure I couldn't just 'git push'
from feature, without specifying that I want HEAD (or feature) to be pushed
to origin/master.

> A less verbose way to do it would be to merge --squash your feature
> into master and then cherry pick a single commit from master to
> branch_5x.

So just so I'm clear, the sequence of commands you're proposing is:

git checkout master
git merge --squash feature
git push (update origin/master)

git checkout branch_5x
git cherry-pick master (or the commit hash)
git push (update origin/branch_5x)

I used cherry-pick in my previous commit, only from a separate local
branch. Reason is I always prefer to keep master (and in this case
branch_5x too) clean, but the (much shorter!) sequence above would work too.

Is this what you meant?
Can you make it even shorter? :)

Shai


On Mon, Jan 25, 2016 at 9:47 PM Shai Erera  wrote:

> I don't think anyone here says "no merges at all", but rather I feel the
> direction is "rebase whenever possible, merge when you must or it makes
> sense". I realize that the last part is fuzzy and open, and maybe that's
> why Dawid (I think?) suggested that we don't change much yet, but rather
> let this new Git roll in, let everyone feel it and experience it, and then
> perhaps a month-two from now we can discuss how we want to have commits
> done in the project.
>
> About history, when I look at 'git log' in branch_5x, it looks like that:
>
> -
> commit 51396b8cf63fd7c77d005d10803ae953acfe659f
> Merge: *5319ca6*
> *6df206a *Author: Michael McCandless 
> Date: Sun Jan 24 16:51:39 2016 -0500
>   merged
>
> commit *5319ca6*11a7dabc07d23a63555ff2df39596d00e
> Author: Michael McCandless 
> Date: Sun Jan 24 16:48:51 2016 -0500
>revert this est change until we can fix geo API border cases
> (LUCENE-6956)
>
> commit *6df206a*51dee447e9f4625d864ffd80778bdf8ff
> Author: Uwe Schindler 
> Date: Sun Jan 24 22:05:38 2016 +0100
>LUCENE-6938: Add WC checks back, now based on JGit
> --
>
> The 'merged' commit, in this case, seems redundant to me as it doesn't add
> any useful information about them. I believe this case isn't an example one
> for a merge. Just my thoughts...
>
> Shai
>
>
> On Mon, Jan 25, 2016 at 9:30 PM Michael McCandless <
> luc...@mikemccandless.com> wrote:
>
>> I am very much a git newbie, but:
>>
>> > The example of the conflict between my commit and Mike’s is just a
>> “normal usecase”.
>>
>> It did not happen in this case, but what if there were tricky
>> conflicts to resolve?  And I went and did that and then committed the
>> resulting merge?  I would want this information ("Mike did a hairy
>> conflict ridden merge using Emacs while drinking too much beer") to be
>> preserved because it is in fact meaningful, e.g. to retroactively
>> understand how bugs were introduced, as one example.
>>
>> If I understand it right, a git pull --rebase would also introduce
>> conflicts, which I would have (secretly) resolved and then committed
>> as if I had gone and spontaneously developed that patch suddenly?
>>
>> I think it's odd to insist on "beauty" for our source control history
>> when in fact the reality is quite messy.  This is like people who
>> insist on decorating the inside of their home as if they live in a
>> museum when in reality they have four crazy kids running around.
>>
>> Mike McCandless
>>
>> http://blog.mikemccandless.com
>>
>> On Mon, Jan 25, 2016 at 3:34 AM, Uwe Schindler  wrote:
>> > Hi,
>

Re: Merge vs Rebase

2016-01-25 Thread Mark Miller
Yup, it's committers like you and Joel that I'm most worried about simple
guidelines for. Those that are deep into Git on this project are likely to
do whatever they want anyway. I'm not looking to waste my time fighting
them. But for those that don't want to walk those waters, that don't have a
religious bent about having the history of every patch in SVN and hundreds
of one off branches, let's set up a simple work flow for them that helps
keep things sane.

Mark

On Mon, Jan 25, 2016 at 2:42 PM Erick Erickson 
wrote:

> Gotta echo Joel, I just want to get through my first commit here.
> Yonik's outline is along the lines of what I'm looking for. For
> newcomers (including newcomers to Git like me), this discussion is a
> little analogous to someone asking "how do I write to a file in Java"
> and having the discussion dive into the pros and cons of the zillion
> different things you can do with writing data to some disk somewhere;
> options for binary, text, character set, buffered, flushed, fsync'd,
> and on and on. Each and every one of the options discussed has a very
> good use case, but is hard to absorb all at once.
>
> The "How to Contribute" page needs to be updated; I'll be happy to
> volunteer to create/edit a "Git commit process". I can bring the
> newbie's ignorant perspective to bear on it, doing all the dumb things
> that newbies do. Trust me on this, I can misinterpret even the
> simplest of instructions.
>
> And do note that it's perfectly OK IMO to have multiple ways of doing
> things, so this more along the lines of recommendations than
> requirements. I'm not interested in "the one true way". I _am_
> interested in "how do I keep from messing this up completely".
>
> I'm envisioning a few sections here.
> >Your first Git commit
> >> steps for method 1
> >>> backporting to 5x
> >> steps for method 2
> >>> backporting to 5x
>
> > Advanced issues
> >> advanced issue 1
> >> advanced issue 2
>
> Of course we have to reach some consensus on what acceptable  "method
> for newbies" are, which is much of what this discussion is about.
>
> Erick
>
> On Mon, Jan 25, 2016 at 10:07 AM, Shai Erera  wrote:
> > I used cherry-pick, as to me it's the most similar thing to the svn merge
> > workflow we had. But as Mark said, there are other ways too, and IMO we
> > should get to an agreed upon way as a project, and not individuals.
> >
> > For instance, IIRC, branch_5x now has 2 commits: the first is the one
> from
> > master, and the second is named 'merge'. Seeing that in 'svn log', IMO,
> > doesn't contribute to a better understanding of the history.
> >
> > Shai
> >
> >
> > On Mon, Jan 25, 2016, 19:56 Mark Miller  wrote:
> >>
> >> Although, again, before someone gets angry, some groups choose to merge
> in
> >> this case as well instead. There are lots of legit choices. Projects
> tend to
> >> come to consensus on how they operate with these things. There is no
> >> 'correct' choice, just opinions and what the project coalesces around.
> >>
> >>
> >> - Mark
> >>
> >> On Mon, Jan 25, 2016 at 12:45 PM Mark Miller 
> >> wrote:
> >>>
> >>> >> The next step is merging to branch_5x
> >>> >> How do you recommend I do it?
> >>>
> >>> Generally, people use 'cherry-pick' for this.
> >>>
> >>> - Mark
> >>>
> >>> On Mon, Jan 25, 2016 at 12:39 PM Noble Paul 
> wrote:
> 
>  The most common usecase is
>  Do development on trunk(master)
> 
>  git commit to master
>  git push
> 
> 
>  The next step is merging to branch_5x
>  How do you recommend I do it?
> 
>  Another chore we do is on adding new files is
>  svn propset svn:eol-style native 
> 
>  do we have an equivalent for that in git?
> 
> 
>  On Mon, Jan 25, 2016 at 10:20 PM, Yonik Seeley 
>  wrote:
>  >>  git push origin HEAD:master (this is the equivalent of svn commit)
>  >>  (b) Is there a less verbose way to do it,
>  >
>  > I'm no git expert either, but it seems like the simplest example of
>  > applying and committing a patch could be much simpler by having good
>  > defaults and not using a separate branch.
>  >
>  > 1) update your repo (note my .gitconfig makes this use rebasing)
>  > $ git pull
>  >
>  > 2) apply patch / changes, run tests, etc
>  >
>  > 3) commit locally
>  > $ git add  # add the changed files.. use "git add -u" for adding all
>  > modified files
>  > $ git commit -m "my commit message"
>  >
>  > 4) push to remote
>  > $ git push
>  >
>  > -Yonik
>  >
>  >  my .gitconfig --
>  > [user]
>  >   name = yonik
>  >   email = yo...@apache.org
>  >
>  > [color]
>  >   diff = auto
>  >   status = auto
>  >   branch = auto
>  >
>  > [alias]
>  >   br = branch
>  >   co = checkout
>  >   l = log --pretty=oneline
>  >   hist = log --pretty=format:\"%h %ad | %s%d [%an]\" --graph
>  > --date=short
>  

[jira] [Updated] (SOLR-8594) Impossible Cast: equals() method in ConstDoubleSource always returns false

2016-01-25 Thread Marc Breslow (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marc Breslow updated SOLR-8594:
---
Attachment: SOLR-8594-fix-impossible-cast.patch

> Impossible Cast: equals() method in ConstDoubleSource always returns false
> --
>
> Key: SOLR-8594
> URL: https://issues.apache.org/jira/browse/SOLR-8594
> Project: Solr
>  Issue Type: Bug
>Reporter: Marc Breslow
> Attachments: SOLR-8594-fix-impossible-cast.patch
>
>
> The equals() method in 
> org.apache.solr.analytics.util.valuesource.ConstDoubleSource is written as
> {code:java}
>   public boolean equals(Object o) {
> if (!(o instanceof ConstValueSource)) return false;
> ConstDoubleSource other = (ConstDoubleSource)o;
> return  this.constant == other.constant;
>   }
> {code}
> There is no common ancestor for ConstValueSource so the first conditional 
> will always return false. Attaching a patch to fix.






[jira] [Created] (SOLR-8594) Impossible Cast: equals() method in ConstDoubleSource always returns false

2016-01-25 Thread Marc Breslow (JIRA)
Marc Breslow created SOLR-8594:
--

 Summary: Impossible Cast: equals() method in ConstDoubleSource 
always returns false
 Key: SOLR-8594
 URL: https://issues.apache.org/jira/browse/SOLR-8594
 Project: Solr
  Issue Type: Bug
Reporter: Marc Breslow


The equals() method in 
org.apache.solr.analytics.util.valuesource.ConstDoubleSource is written as
{code:java}
  public boolean equals(Object o) {
if (!(o instanceof ConstValueSource)) return false;
ConstDoubleSource other = (ConstDoubleSource)o;
return  this.constant == other.constant;
  }
{code}

There is no common ancestor for ConstValueSource so the first conditional will 
always return false. Attaching a patch to fix.
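The attached patch isn't reproduced here, but the likely fix is simply to make the {{instanceof}} check name {{ConstDoubleSource}} itself: testing against the unrelated {{ConstValueSource}} means the guard never passes for a compatible type, so the comparison can never succeed. A self-contained sketch with a minimal stand-in class (not the actual Solr one):

```java
public class EqualsFix {
    // Minimal stand-in for Solr's ConstDoubleSource, for illustration only.
    static class ConstDoubleSource {
        final double constant;
        ConstDoubleSource(double constant) { this.constant = constant; }

        @Override
        public boolean equals(Object o) {
            // was: if (!(o instanceof ConstValueSource)) return false;
            // which made the cast below unreachable with a compatible type
            if (!(o instanceof ConstDoubleSource)) return false;
            ConstDoubleSource other = (ConstDoubleSource) o;
            return this.constant == other.constant;
        }

        @Override
        public int hashCode() { return Double.hashCode(constant); }
    }

    public static void main(String[] args) {
        System.out.println(new ConstDoubleSource(1.5).equals(new ConstDoubleSource(1.5))); // prints true
        System.out.println(new ConstDoubleSource(1.5).equals(new ConstDoubleSource(2.5))); // prints false
    }
}
```

As always with {{equals}}, {{hashCode}} must stay consistent with it, so the sketch overrides both.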






Re: Merge vs Rebase

2016-01-25 Thread Shai Erera
I don't think anyone here says "no merges at all", but rather I feel the
direction is "rebase whenever possible, merge when you must or it makes
sense". I realize that the last part is fuzzy and open, and maybe that's
why Dawid (I think?) suggested that we don't change much yet, but rather
let this new Git roll in, let everyone feel it and experience it, and then
perhaps a month-two from now we can discuss how we want to have commits
done in the project.

About history, when I look at 'git log' in branch_5x, it looks like that:

-
commit 51396b8cf63fd7c77d005d10803ae953acfe659f
Merge: *5319ca6*
*6df206a *Author: Michael McCandless 
Date: Sun Jan 24 16:51:39 2016 -0500
  merged

commit *5319ca6*11a7dabc07d23a63555ff2df39596d00e
Author: Michael McCandless 
Date: Sun Jan 24 16:48:51 2016 -0500
   revert this est change until we can fix geo API border cases
(LUCENE-6956)

commit *6df206a*51dee447e9f4625d864ffd80778bdf8ff
Author: Uwe Schindler 
Date: Sun Jan 24 22:05:38 2016 +0100
   LUCENE-6938: Add WC checks back, now based on JGit
--

The 'merged' commit, in this case, seems redundant to me as it doesn't add
any useful information about them. I believe this case isn't an example one
for a merge. Just my thoughts...

Shai


On Mon, Jan 25, 2016 at 9:30 PM Michael McCandless <
luc...@mikemccandless.com> wrote:

> I am very much a git newbie, but:
>
> > The example of the conflict between my commit and Mike’s is just a
> “normal usecase”.
>
> It did not happen in this case, but what if there were tricky
> conflicts to resolve?  And I went and did that and then committed the
> resulting merge?  I would want this information ("Mike did a hairy
> conflict ridden merge using Emacs while drinking too much beer") to be
> preserved because it is in fact meaningful, e.g. to retroactively
> understand how bugs were introduced, as one example.
>
> If I understand it right, a git pull --rebase would also introduce
> conflicts, which I would have (secretly) resolved and then committed
> as if I had gone and spontaneously developed that patch suddenly?
>
> I think it's odd to insist on "beauty" for our source control history
> when in fact the reality is quite messy.  This is like people who
> insist on decorating the inside of their home as if they live in a
> museum when in reality they have four crazy kids running around.
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
> On Mon, Jan 25, 2016 at 3:34 AM, Uwe Schindler  wrote:
> > Hi,
> >
> >
> >
> > I am fine with both. The example of the conflict between my commit and
> > Mike's is just a "normal usecase". To me it looks correct how it is shown
> > in history. At least it shows reality: 2 people were about to commit the
> > same. This happened with SVN many times, too, but you are right: it was
> > solved by SVN through an additional update (a rebase) and then trying the
> > commit again. I am fine with both variants. But if we decide to only do
> > one variant, I'd prefer to have some "howto chart" of what you need to do
> > to set up your working copy correctly (all commands for configuring the
> > @apache.org username, pull settings, ...) that are local to the repository.
> > Maybe add a shell/windows.cmd script to devtools! I don't want to change
> > those settings globally, so please don't use the magic --global setting
> > in the example. If we have a script, we can do that per WC:
> >
> > -  Fetch repo from git-wip-us
> >
> > -  Run script
> >
> >
> >
> > About merge: When we get pull requests from 3rd parties, we should
> > definitely not rebase. With merging that in (in the same way GitHub
> > is doing it), we preserve attribution to the original committer. We should
> > really keep that! That is, to me, the only good reason to use Git!
> >
> >
> >
> > I am fine with rebasing our own stuff and making it as slight as possible,
> > but for stuff from 3rd-party people, we should really preserve what they
> > did! So I will always use the command in the GitHub pull request mail,
> > apply that to my working copy, and push.
> >
> >
> >
> > Uwe
> >
> >
> >
> > -
> >
> > Uwe Schindler
> >
> > H.-H.-Meier-Allee 63, D-28213 Bremen
> >
> > http://www.thetaphi.de
> >
> > eMail: u...@thetaphi.de
> >
> >
> >
> > From: Shai Erera [mailto:ser...@gmail.com]
> > Sent: Monday, January 25, 2016 8:50 AM
> > To: dev@lucene.apache.org
> > Subject: Re: Merge vs Rebase
> >
> >
> >
> > I agree David. I'm sure there are valid use cases for merging commits,
> but I
> > always prefer rebasing. This has been our way with Apache SVN anyway, so
> why
> > change it? I fell like merging only adds unnecessary lines to 'git log',
> > where you see "Merge commits (1, 7)" but this doesn't add much
> information
> > to whoever looks at it.
> >
> > What does it matter if this merge commit is from previous master and
> > feature-commit? Why do we need one additional commit per change?
> >
> > I'm not a Git expert, but I know (think.

Re: Merge vs Rebase

2016-01-25 Thread Erick Erickson
Gotta echo Joel, I just want to get through my first commit here.
Yonik's outline is along the lines of what I'm looking for. For
newcomers (including newcomers to Git like me), this discussion is a
little analogous to someone asking "how do I write to a file in Java"
and having the discussion dive into the pros and cons of the zillion
different things you can do with writing data to some disk somewhere;
options for binary, text, character set, buffered, flushed, fsync'd,
and on and on. Each and every one of the options discussed has a very
good use case, but is hard to absorb all at once.

The "How to Contribute" page needs to be updated; I'll be happy to
volunteer to create/edit a "Git commit process". I can bring the
newbie's ignorant perspective to bear on it, doing all the dumb things
that newbies do. Trust me on this, I can misinterpret even the
simplest of instructions.

And do note that it's perfectly OK IMO to have multiple ways of doing
things, so this is more along the lines of recommendations than
requirements. I'm not interested in "the one true way". I _am_
interested in "how do I keep from messing this up completely".

I'm envisioning a few sections here.
>Your first Git commit
>> steps for method 1
>>> backporting to 5x
>> steps for method 2
>>> backporting to 5x

> Advanced issues
>> advanced issue 1
>> advanced issue 2

Of course we have to reach some consensus on what acceptable "methods
for newbies" are, which is much of what this discussion is about.

Erick
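A minimal, sandboxed sketch of the kind of "first Git commit" recipe Erick is asking for. Everything here is illustrative: the bare repository stands in for the shared ASF remote, and the file name, identity, and issue key are placeholders.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
# Throwaway bare repository standing in for the shared remote.
git -c init.defaultBranch=master init --bare -q origin.git
git -c init.defaultBranch=master clone -q origin.git wc
cd wc
git config user.name newbie
git config user.email newbie@example.org
# Seed the remote with an initial commit so there is something to pull.
echo v1 > CHANGES.txt
git add CHANGES.txt
git commit -qm "seed"
git push -q origin master

# --- the recipe itself ---
git pull --rebase -q origin master   # 1) update; --rebase keeps history linear
echo v2 >> CHANGES.txt               # 2) apply the patch, run tests, etc.
git add -u                           # 3) stage every modified tracked file
git commit -qm "SOLR-NNNN: description of the change"
git push -q origin master            # 4) publish -- the 'svn commit' equivalent
```

Against the real repository, only the four numbered steps apply; the sandbox setup exists so the sequence can be run end to end.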

On Mon, Jan 25, 2016 at 10:07 AM, Shai Erera  wrote:
> I used cherry-pick, as to me it's the most similar thing to the svn merge
> workflow we had. But as Mark said, there are other ways too, and IMO we
> should get to an agreed upon way as a project, and not individuals.
>
> For instance, IIRC, branch_5x now has 2 commits: the first is the one from
> master, and the second is named 'merge'. Seeing that in 'svn log', IMO,
> doesn't contribute to a better understanding of the history.
>
> Shai
>
>
> On Mon, Jan 25, 2016, 19:56 Mark Miller  wrote:
>>
>> Although, again, before someone gets angry, some groups choose to merge in
>> this case as well instead. There are lots of legit choices. Projects tend to
>> come to consensus on how they operate with these things. There is no
>> 'correct' choice, just opinions and what the project coalesces around.
>>
>>
>> - Mark
>>
>> On Mon, Jan 25, 2016 at 12:45 PM Mark Miller 
>> wrote:
>>>
>>> >> The next step is merging to branch_5x
>>> >> How do you recommend I do it?
>>>
>>> Generally, people use 'cherry-pick' for this.
>>>
>>> - Mark
>>>
>>> On Mon, Jan 25, 2016 at 12:39 PM Noble Paul  wrote:

 The most common usecase is
 Do development on trunk(master)

 git commit to master
 git push


 The next step is merging to branch_5x
 How do you recommend I do it?

 Another chore we do is on adding new files is
 svn propset svn:eol-style native 

 do we have an equivalent for that in git?


 On Mon, Jan 25, 2016 at 10:20 PM, Yonik Seeley 
 wrote:
 >>  git push origin HEAD:master (this is the equivalent of svn commit)
 >>  (b) Is there a less verbose way to do it,
 >
 > I'm no git expert either, but it seems like the simplest example of
 > applying and committing a patch could be much simpler by having good
 > defaults and not using a separate branch.
 >
 > 1) update your repo (note my .gitconfig makes this use rebasing)
 > $ git pull
 >
 > 2) apply patch / changes, run tests, etc
 >
 > 3) commit locally
 > $ git add  # add the changed files.. use "git add -u" for adding all
 > modified files
 > $ git commit -m "my commit message"
 >
 > 4) push to remote
 > $ git push
 >
 > -Yonik
 >
 >  my .gitconfig --
 > [user]
 >   name = yonik
 >   email = yo...@apache.org
 >
 > [color]
 >   diff = auto
 >   status = auto
 >   branch = auto
 >
 > [alias]
 >   br = branch
 >   co = checkout
 >   l = log --pretty=oneline
 >   hist = log --pretty=format:\"%h %ad | %s%d [%an]\" --graph
 > --date=short
 >
 > [branch]
 > autosetuprebase = always
 >
 > [push]
 > default = tracking
 >
 > -
 > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 > For additional commands, e-mail: dev-h...@lucene.apache.org
 >



 --
 -
 Noble Paul

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org

>>> --
>>> - Mark
>>> about.me/markrmiller
>>
>> --
>> - Mark
>> about.me/markrmiller
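Yonik's global ~/.gitconfig settings above can also be applied per repository (plain `git config`, with no `--global`), and Noble's `svn:eol-style` question maps roughly to a committed `.gitattributes` file. A sketch, with placeholder identity values:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git -c init.defaultBranch=master init -q .
# Per-repository settings -- plain `git config`, no --global,
# so nothing outside this working copy is touched.
git config user.name "yourid"                 # placeholder identity
git config user.email "yourid@apache.org"     # placeholder identity
git config branch.autosetuprebase always      # new branches pull with --rebase
git config push.default tracking              # push only the tracked branch
# Rough equivalent of `svn propset svn:eol-style native`: a committed
# .gitattributes file, so the policy travels with the repository.
printf '* text=auto\n*.bat text eol=crlf\n' > .gitattributes
git add .gitattributes
git commit -qm "line-ending policy"
```

Because `.gitattributes` is versioned, the line-ending policy applies to every clone automatically, unlike per-user config.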

-

Re: Merge vs Rebase

2016-01-25 Thread Dawid Weiss
> git checkout -b feature-branch
> git commit -a -m "message"
> (Get reviews)
> git commit --amend (addressing review comments)

How do you get reviews? Did you push feature-branch to origin or did
you just create a patch from it? If you did push then it doesn't make
sense to amend. If you didn't push then I still don't see any point in
amending -- you'd just  apply any reviews you see fit and squash for
the merge with the mainline. Why bother amending commits that never
make it to the mainline? I see only downsides -- you lose the history
of what you fixed in response to those reviews, for example.

> I do not intend to stack additional commits on top of the initial commit,
> and then in the end squash them or whatever. I treat it all as one piece of
> work and commit, and I don't think our history should reflect every comment
> someone makes on an issue.

Why not? I admit I don't understand. A merge squash at the end of your
work is essentially combining all the intermediate commits into a
single patch -- what's the difference to having one commit amended
multiple times?

> In analogy, today in JIRA we always update a full
> .patch file, and eventually commit it, irrespective of the number of
> iterations that issue has gone through.

Issues very often have a history of patches, not just the most recent one.

> Again, I think we're sync, but wanted to clarify what I meant.

I don't think we are, but at the same time I don't think it matters.
It's what I said -- how you work on your own branches is your thing.

> git checkout master && git pull --rebase (this is nearly identical to the

You don't have to rebase anything. This pull will always succeed and
will fast-forward to the current origin/master, unless you work
directly on your local master branch (which you don't, concluding from
your log of commands).

> git push origin HEAD:master (this is the equivalent of svn commit)

git push will push all remote tracking branches (or current branch),
so you could simply do git push. Shorter.

> (a) Is this (one of) the correct way to do it?

Yes.

> (b) Is there a less verbose way to do it, aside from rebasing feature on
> origin/branch_5x?

You're committing multiple commits on top of master (by rebasing).
Like I mentioned before, this is just one way to do it (in my opinion
it's inferior to a no-fast-forward merge in that it obscures which
commits formed a single "feature").

A less verbose way to do it would be to merge --squash your feature
into master and then cherry pick a single commit from master to
branch_5x.

D.

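Dawid's `merge --squash` plus cherry-pick suggestion, run end to end in a disposable sandbox. The bare repository stands in for origin, and all file names, branch seeds, and the issue key are illustrative:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git -c init.defaultBranch=master init --bare -q origin.git
git -c init.defaultBranch=master clone -q origin.git wc
cd wc
git config user.name dev
git config user.email dev@example.org
# Seed master and a stand-in release branch (branch_5x).
echo base > file.txt
git add file.txt
git commit -qm "initial"
git branch branch_5x
git push -q origin master branch_5x
# A feature branch with two intermediate commits.
git checkout -q -b feature
echo step1 >> file.txt
git commit -qam "feature: step 1"
echo step2 >> file.txt
git commit -qam "feature: step 2"
# Squash the whole feature into one commit on master...
git checkout -q master
git merge --squash -q feature
git commit -qm "SOLR-NNNN: the feature as a single commit"
# ...then cherry-pick that single commit onto branch_5x.
git checkout -q branch_5x
git cherry-pick master
git push -q origin master branch_5x
```

The intermediate "step 1"/"step 2" commits stay on the feature branch; master and branch_5x each receive exactly one commit for the change.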



Re: Merge vs Rebase

2016-01-25 Thread Michael McCandless
On Mon, Jan 25, 2016 at 2:49 AM, Shai Erera  wrote:

This has been our way with Apache SVN anyway, so why change it?
>

I don't consider this a valid argument ;)  Git is a new tool, it opens up
new features.  We shouldn't lock ourselves into "the SVN way" just because
"that is what we have always done".

Do you still restrict yourself to what "vi" was able to do while using
Eclipse? ;)

Mike McCandless

http://blog.mikemccandless.com


Re: Merge vs Rebase

2016-01-25 Thread Michael McCandless
I am very much a git newbie, but:

> The example of the conflict between my commit and Mike’s is just a “normal 
> usecase”.

It did not happen in this case, but what if there were tricky
conflicts to resolve?  And I went and did that and then committed the
resulting merge?  I would want this information ("Mike did a hairy
conflict ridden merge using Emacs while drinking too much beer") to be
preserved because it is in fact meaningful, e.g. to retroactively
understand how bugs were introduced, as one example.

If I understand it right, a git pull --rebase would also introduce
conflicts, which I would have (secretly) resolved and then committed
as if I had gone and spontaneously developed that patch suddenly?

I think it's odd to insist on "beauty" for our source control history
when in fact the reality is quite messy.  This is like people who
insist on decorating the inside of their home as if they live in a
museum when in reality they have four crazy kids running around.

Mike McCandless

http://blog.mikemccandless.com

On Mon, Jan 25, 2016 at 3:34 AM, Uwe Schindler  wrote:
> Hi,
>
>
>
> I am fine with both. The example of the conflict between my commit and
> Mike’s is just a “normal usecase”. To me it looks correct how it is shown in
> history. At least it shows reality: 2 people were about to commit the same.
> This happened with SVN many times, too, but you are right it was solved by
> SVN through additional update (a rebase) and then try commit again. I am
> fine with both variants. But if we decide to only do one variant, I’d prefer
> to have some “howto chart” what you need to do to setup your working copy
> correctly (all commands for configuring @apache.org username, pull
> settings,…) that are local to the repository. Maybe add a shell/windows.cmd
> script to devtools! I don’t want to change those settings globally, so please
> don’t use the magic --global setting in the example. If we have a script, we
> can do that per WC:
>
> -  Fetch repo from git-wip-us
>
> -  Run script
>
>
>
> About merge: When we get pull requests from 3rd parties, we should
> definitely not rebase. With merging that in (in the same way GitHub
> is doing it), we preserve attribution to the original committer. We should
> really keep that! That is to me the only good reason to use Git!
>
>
>
> I am fine with rebasing our own stuff and making it as slight as possible, but
> for stuff from 3rd party people, we should really preserve what they did! So
> I will always use the command in the github pull request mail and apply that
> to my working copy and push.
>
>
>
> Uwe
>
>
>
> -
>
> Uwe Schindler
>
> H.-H.-Meier-Allee 63, D-28213 Bremen
>
> http://www.thetaphi.de
>
> eMail: u...@thetaphi.de
>
>
>
> From: Shai Erera [mailto:ser...@gmail.com]
> Sent: Monday, January 25, 2016 8:50 AM
> To: dev@lucene.apache.org
> Subject: Re: Merge vs Rebase
>
>
>
> I agree David. I'm sure there are valid use cases for merging commits, but I
> always prefer rebasing. This has been our way with Apache SVN anyway, so why
> change it? I feel like merging only adds unnecessary lines to 'git log',
> where you see "Merge commits (1, 7)" but this doesn't add much information
> to whoever looks at it.
>
> What does it matter if this merge commit is from previous master and
> feature-commit? Why do we need one additional commit per change?
>
> I'm not a Git expert, but I know (think...) that if you merge C1 and C2, and
> C2 is a parent of C1, Git doesn't do a merge commit. Someone probably can
> confirm that.
>
> FWIW, I plan to continue working the 'SVN' way by doing the following:
>
> git checkout master
>
> git pull --rebase (update to latest commit/rev)
>
> git checkout -b feature
>
> git commit -a -m "feature message"
>
> git commit --amend (applying review feedback)
>
> git fetch origin master:master (a'la 'svn up' we used to do)
> git rebase master (now my feature commit is right on top of master's latest
> commit / rev)
>
> git push origin HEAD:master
>
> This will keep the history linear and flat, which is what we currently
> have w/ SVN.
>
>
>
> As for merging this commit now to branch_5x. I'll admit I don't have
> experience working with Git w/ multiple active (feature) branches, so I'm
> not sure if rebasing branch_5x on my commit is what we want (cause it will
> drag with it all of trunk's history, as far as I understand). I might try to
> cherry-pick that commit only and apply it to branch_5x, which is, again - AFAIU
> - what we used to do in SVN.
>
> However, as I said, I'm not a Git expert, so if anyone thinks I should adopt
> a different workflow, especially for the branch_5x changes, I'd be happy to
> learn.
>
> Shai
>
>
>
> On Mon, Jan 25, 2016 at 8:13 AM David Smiley 
> wrote:
>
> I suspect my picture didn’t make it so I’m trying again:
>
>
>
> Or if that didn’t work, I put it on dropbox:
>
> https://www.dropbox.com/s/p3q9ycxytxfqssz/lucene-merge-commit-pic.png?dl=0
>
>
>
> ~ David
>
>
>
> On Jan 25, 2016, at 1
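Shai's SVN-style sequence above, run end to end in a throwaway sandbox. The bare repository stands in for the ASF remote; the identity, file name, and issue key are placeholders:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git -c init.defaultBranch=master init --bare -q origin.git
git -c init.defaultBranch=master clone -q origin.git wc
cd wc
git config user.name shai-example
git config user.email dev@example.org
echo base > f.txt
git add f.txt
git commit -qm "seed"
git push -q origin master
# --- the flow itself ---
git checkout -q -b feature
echo feature >> f.txt
git commit -qam "SOLR-NNNN: feature"
echo review-fix >> f.txt
git commit -qa --amend --no-edit     # fold review feedback into the one commit
git fetch -q origin master:master    # a'la 'svn up' for the local master
git rebase -q master                 # feature now sits on master's tip
git push -q origin HEAD:master       # linear, one commit per change
```

The result is the flat history the flow aims for: the remote master gains a single commit containing both the feature and the review fix.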

[jira] [Updated] (SOLR-8512) Implement minimal set of get* methods in ResultSetImpl for column indices

2016-01-25 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8512:
---
Attachment: SOLR-8512.patch

Fixed merge issues on top of SOLR-8519 and SOLR-8517

> Implement minimal set of get* methods in ResultSetImpl for column indices
> -
>
> Key: SOLR-8512
> URL: https://issues.apache.org/jira/browse/SOLR-8512
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8512.patch, SOLR-8512.patch, SOLR-8512.patch, 
> sql-preserve-order.patch
>
>
> SQL clients use the proper get* methods on the ResultSet to return items to 
> be displayed. At minimum, the following methods should be implemented for 
> column index:
> * public Object getObject
> * public String getString
> * public boolean getBoolean
> * public short getShort
> * public int getInt
> * public long getLong
> * public float getFloat
> * public double getDouble
> * public BigDecimal getBigDecimal
> * public Date getDate
> * public Time getTime
> * public Timestamp getTimestamp



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Updated] (SOLR-8517) Implement minimal set of get* methods in ResultSetImpl for column names

2016-01-25 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8517:
---
Attachment: SOLR-8517.patch

Fixed merge issues on top of changes from SOLR-8519

> Implement minimal set of get* methods in ResultSetImpl for column names
> ---
>
> Key: SOLR-8517
> URL: https://issues.apache.org/jira/browse/SOLR-8517
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8517.patch, SOLR-8517.patch, SOLR-8517.patch, 
> SOLR-8517.patch
>
>
> This is related to the ResultSetImpl for column indices but requires that
> more metadata be passed back from the SQL handler in relation to column names.
> The SQL handler already knows about the column names and order, but they
> aren't passed back to the client. SQL clients use the column names for
> display, so this must be implemented for DBVisualizer to work properly.






[jira] [Resolved] (SOLR-8519) Implement ResultSetMetaDataImpl.getColumnCount()

2016-01-25 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-8519.
--
Resolution: Implemented

> Implement ResultSetMetaDataImpl.getColumnCount()
> 
>
> Key: SOLR-8519
> URL: https://issues.apache.org/jira/browse/SOLR-8519
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8519.patch, SOLR-8519.patch, SOLR-8519.patch, 
> SOLR-8519.patch
>
>
> DBVisualizer uses getColumnCount to determine how many columns to try to 
> display from the result.






Re: [JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 919 - Still Failing

2016-01-25 Thread Michael McCandless
I'll disable Direct and Memory for this test.

Mike McCandless

http://blog.mikemccandless.com


On Mon, Jan 25, 2016 at 6:12 AM, Apache Jenkins Server
 wrote:
> Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/919/
>
> 1 tests failed.
> FAILED:  org.apache.lucene.search.TestGeoPointQuery.testRandomBig
>
> Error Message:
> CheckIndex failed
>
> Stack Trace:
> java.lang.RuntimeException: CheckIndex failed
> at 
> __randomizedtesting.SeedInfo.seed([D35E8C12334C2B8D:5409F19DA215570D]:0)
> at org.apache.lucene.util.TestUtil.checkIndex(TestUtil.java:281)
> at org.apache.lucene.util.TestUtil.checkIndex(TestUtil.java:263)
> at 
> org.apache.lucene.store.BaseDirectoryWrapper.close(BaseDirectoryWrapper.java:46)
> at org.apache.lucene.util.IOUtils.close(IOUtils.java:89)
> at org.apache.lucene.util.IOUtils.close(IOUtils.java:76)
> at 
> org.apache.lucene.util.BaseGeoPointTestCase.verify(BaseGeoPointTestCase.java:771)
> at 
> org.apache.lucene.util.BaseGeoPointTestCase.doTestRandom(BaseGeoPointTestCase.java:411)
> at 
> org.apache.lucene.util.BaseGeoPointTestCase.testRandomBig(BaseGeoPointTestCase.java:340)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
> at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
> at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
> at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
> at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
> at 
> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:3

[jira] [Commented] (SOLR-8519) Implement ResultSetMetaDataImpl.getColumnCount()

2016-01-25 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15115765#comment-15115765
 ] 

Joel Bernstein commented on SOLR-8519:
--

Committed: 
https://github.com/apache/lucene-solr/commit/c99698b6dd4754b0742409feae90c833e2cfa60a

> Implement ResultSetMetaDataImpl.getColumnCount()
> 
>
> Key: SOLR-8519
> URL: https://issues.apache.org/jira/browse/SOLR-8519
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8519.patch, SOLR-8519.patch, SOLR-8519.patch, 
> SOLR-8519.patch
>
>
> DBVisualizer uses getColumnCount to determine how many columns to try to 
> display from the result.






[jira] [Commented] (SOLR-8519) Implement ResultSetMetaDataImpl.getColumnCount()

2016-01-25 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15115717#comment-15115717
 ] 

Joel Bernstein commented on SOLR-8519:
--

Actually the patches are working out well. Let's keep going with them.

> Implement ResultSetMetaDataImpl.getColumnCount()
> 
>
> Key: SOLR-8519
> URL: https://issues.apache.org/jira/browse/SOLR-8519
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8519.patch, SOLR-8519.patch, SOLR-8519.patch, 
> SOLR-8519.patch
>
>
> DBVisualizer uses getColumnCount to determine how many columns to try to 
> display from the result.






[jira] [Commented] (SOLR-8517) Implement minimal set of get* methods in ResultSetImpl for column names

2016-01-25 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15115692#comment-15115692
 ] 

Kevin Risden commented on SOLR-8517:


Created PR for this: https://github.com/apache/lucene-solr/pull/3

> Implement minimal set of get* methods in ResultSetImpl for column names
> ---
>
> Key: SOLR-8517
> URL: https://issues.apache.org/jira/browse/SOLR-8517
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8517.patch, SOLR-8517.patch, SOLR-8517.patch
>
>
> This is related to the ResultSetImpl for column indices but requires that
> more metadata be passed back from the SQL handler in relation to column names.
> The SQL handler already knows about the column names and order, but they
> aren't passed back to the client. SQL clients use the column names for
> display, so this must be implemented for DBVisualizer to work properly.






[jira] [Commented] (SOLR-8519) Implement ResultSetMetaDataImpl.getColumnCount()

2016-01-25 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15115684#comment-15115684
 ] 

Kevin Risden commented on SOLR-8519:


{quote}
As we start implementing the column type methods I think it makes sense to read 
the first data Tuple and interpret the types. We can wrap the internal 
SolrStream in a pushback stream and push the first Tuple back after reading it.
{quote}

Yea I was thinking about types and this seems reasonable to me. I didn't 
realize Pushback was an option.

PS - My comment about PR above just crossed paths with your comment. Didn't 
realize you were already looking at this. I can put up PRs for the other JIRAs 
moving forward if that helps.

> Implement ResultSetMetaDataImpl.getColumnCount()
> 
>
> Key: SOLR-8519
> URL: https://issues.apache.org/jira/browse/SOLR-8519
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8519.patch, SOLR-8519.patch, SOLR-8519.patch, 
> SOLR-8519.patch
>
>
> DBVisualizer uses getColumnCount to determine how many columns to try to 
> display from the result.






[jira] [Commented] (SOLR-8519) Implement ResultSetMetaDataImpl.getColumnCount()

2016-01-25 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15115681#comment-15115681
 ] 

Kevin Risden commented on SOLR-8519:


[~joel.bernstein] - I opened a PR for this which has the same changes as the 
latest patch. https://github.com/apache/lucene-solr/pull/2 Let me know if PRs 
are easier and I can do them instead of patches moving forward.

> Implement ResultSetMetaDataImpl.getColumnCount()
> 
>
> Key: SOLR-8519
> URL: https://issues.apache.org/jira/browse/SOLR-8519
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8519.patch, SOLR-8519.patch, SOLR-8519.patch, 
> SOLR-8519.patch
>
>
> DBVisualizer uses getColumnCount to determine how many columns to try to 
> display from the result.






[jira] [Commented] (SOLR-8519) Implement ResultSetMetaDataImpl.getColumnCount()

2016-01-25 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15115680#comment-15115680
 ] 

Joel Bernstein commented on SOLR-8519:
--

The latest patch looks great! I don't plan on making any changes to it. Running 
precommit now.

As we start implementing the column type methods I think it makes sense to read
the first data Tuple and interpret the types. We can wrap the internal
SolrStream in a pushback stream and push the first Tuple back after reading it.

> Implement ResultSetMetaDataImpl.getColumnCount()
> 
>
> Key: SOLR-8519
> URL: https://issues.apache.org/jira/browse/SOLR-8519
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8519.patch, SOLR-8519.patch, SOLR-8519.patch, 
> SOLR-8519.patch
>
>
> DBVisualizer uses getColumnCount to determine how many columns to try to 
> display from the result.






Re: Merge vs Rebase

2016-01-25 Thread Shai Erera
I used cherry-pick, as to me it's the most similar thing to the svn merge
workflow we had. But as Mark said, there are other ways too, and IMO we
should get to an agreed upon way as a project, and not individuals.

For instance, IIRC, branch_5x now has 2 commits: the first is the one from
master, and the second is named 'merge'. Seeing that in 'svn log', IMO,
doesn't contribute to a better understanding of the history.

Shai

On Mon, Jan 25, 2016, 19:56 Mark Miller  wrote:

> Although, again, before someone gets angry, some groups choose to merge in

Re: Merge vs Rebase

2016-01-25 Thread Mark Miller
Although, again, before someone gets angry, some groups choose to merge in
this case instead. There are lots of legit choices. Projects tend
to come to consensus on how they operate with these things. There is no
'correct' choice, just opinions and what the project coalesces around.


- Mark

On Mon, Jan 25, 2016 at 12:45 PM Mark Miller  wrote:

> >> The next step is merging to branch_5x
> >> How do you recommend I do it?
>
> Generally, people use 'cherry-pick' for this.
>
> - Mark
>
> On Mon, Jan 25, 2016 at 12:39 PM Noble Paul  wrote:
>
>> The most common use case is
>> Do development on trunk (master)
>>
>> git commit to master
>> git push
>>
>>
>> The next step is merging to branch_5x
>> How do you recommend I do it?
>>
>> Another chore we do on adding new files is
>> svn propset svn:eol-style native 
>>
>> do we have an equivalent for that in git?
>>
>>
>> On Mon, Jan 25, 2016 at 10:20 PM, Yonik Seeley  wrote:
>> >>  git push origin HEAD:master (this is the equivalent of svn commit)
>> >>  (b) Is there a less verbose way to do it,
>> >
>> > I'm no git expert either, but it seems like the simplest example of
>> > applying and committing a patch could be much simpler by having good
>> > defaults and not using a separate branch.
>> >
>> > 1) update your repo (note my .gitconfig makes this use rebasing)
>> > $ git pull
>> >
>> > 2) apply patch / changes, run tests, etc
>> >
>> > 3) commit locally
>> > $ git add  # add the changed files.. use "git add -u" for adding all
>> > modified files
>> > $ git commit -m "my commit message"
>> >
>> > 4) push to remote
>> > $ git push
>> >
>> > -Yonik
>> >
>> >  my .gitconfig --
>> > [user]
>> >   name = yonik
>> >   email = yo...@apache.org
>> >
>> > [color]
>> >   diff = auto
>> >   status = auto
>> >   branch = auto
>> >
>> > [alias]
>> >   br = branch
>> >   co = checkout
>> >   l = log --pretty=oneline
>> >   hist = log --pretty=format:\"%h %ad | %s%d [%an]\" --graph
>> --date=short
>> >
>> > [branch]
>> > autosetuprebase = always
>> >
>> > [push]
>> > default = tracking
>> >
>> > -
>> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> > For additional commands, e-mail: dev-h...@lucene.apache.org
>> >
>>
>>
>>
>> --
>> -
>> Noble Paul
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>> --
> - Mark
> about.me/markrmiller
>
-- 
- Mark
about.me/markrmiller


Re: Merge vs Rebase

2016-01-25 Thread Mark Miller
>> The next step is merging to branch_5x
>> How do you recommend I do it?

Generally, people use 'cherry-pick' for this.

- Mark
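
[Editorial illustration: a minimal sketch of the cherry-pick backport described above, run in a throwaway repository. The issue number and file names are made up; this is not a project-mandated procedure.]

```shell
# Sketch: make a fix on the main branch, then backport it to branch_5x
# by cherry-picking the commit. All names below are illustrative.
set -e
cd "$(mktemp -d)"
git init -q
git -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "base"
git branch branch_5x                   # release branch forked from the base
echo "fix" > fix.txt
git add fix.txt
git -c user.name=dev -c user.email=dev@example.com \
    commit -q -m "LUCENE-NNNN: some fix"
fix_sha=$(git rev-parse HEAD)          # the commit to backport
git checkout -q branch_5x
git -c user.name=dev -c user.email=dev@example.com \
    cherry-pick "$fix_sha"             # apply the same change to branch_5x
git log --oneline -1                   # branch_5x now carries the fix
```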

On Mon, Jan 25, 2016 at 12:39 PM Noble Paul  wrote:

> The most common use case is
> Do development on trunk (master)
>
> git commit to master
> git push
>
>
> The next step is merging to branch_5x
> How do you recommend I do it?
>
> Another chore we do on adding new files is
> svn propset svn:eol-style native 
>
> do we have an equivalent for that in git?
>
>
> On Mon, Jan 25, 2016 at 10:20 PM, Yonik Seeley  wrote:
> >>  git push origin HEAD:master (this is the equivalent of svn commit)
> >>  (b) Is there a less verbose way to do it,
> >
> > I'm no git expert either, but it seems like the simplest example of
> > applying and committing a patch could be much simpler by having good
> > defaults and not using a separate branch.
> >
> > 1) update your repo (note my .gitconfig makes this use rebasing)
> > $ git pull
> >
> > 2) apply patch / changes, run tests, etc
> >
> > 3) commit locally
> > $ git add  # add the changed files.. use "git add -u" for adding all
> > modified files
> > $ git commit -m "my commit message"
> >
> > 4) push to remote
> > $ git push
> >
> > -Yonik
> >
> >  my .gitconfig --
> > [user]
> >   name = yonik
> >   email = yo...@apache.org
> >
> > [color]
> >   diff = auto
> >   status = auto
> >   branch = auto
> >
> > [alias]
> >   br = branch
> >   co = checkout
> >   l = log --pretty=oneline
> >   hist = log --pretty=format:\"%h %ad | %s%d [%an]\" --graph --date=short
> >
> > [branch]
> > autosetuprebase = always
> >
> > [push]
> > default = tracking
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > For additional commands, e-mail: dev-h...@lucene.apache.org
> >
>
>
>
> --
> -
> Noble Paul
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
> --
- Mark
about.me/markrmiller


Re: Merge vs Rebase

2016-01-25 Thread Noble Paul
The most common use case is
Do development on trunk (master)

git commit to master
git push


The next step is merging to branch_5x
How do you recommend I do it?

Another chore we do on adding new files is
svn propset svn:eol-style native 

do we have an equivalent for that in git?
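
[Editorial note, hedged: git has no per-file properties set at add time. The rough equivalent of `svn propset svn:eol-style native` is a `.gitattributes` file committed once at the repository root, along the lines of:]

```
# .gitattributes -- git then normalizes line endings for matching text files
* text=auto
```

With this in place, new files are covered automatically instead of needing a per-file command.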


On Mon, Jan 25, 2016 at 10:20 PM, Yonik Seeley  wrote:
>>  git push origin HEAD:master (this is the equivalent of svn commit)
>>  (b) Is there a less verbose way to do it,
>
> I'm no git expert either, but it seems like the simplest example of
> applying and committing a patch could be much simpler by having good
> defaults and not using a separate branch.
>
> 1) update your repo (note my .gitconfig makes this use rebasing)
> $ git pull
>
> 2) apply patch / changes, run tests, etc
>
> 3) commit locally
> $ git add  # add the changed files.. use "git add -u" for adding all
> modified files
> $ git commit -m "my commit message"
>
> 4) push to remote
> $ git push
>
> -Yonik
>
>  my .gitconfig --
> [user]
>   name = yonik
>   email = yo...@apache.org
>
> [color]
>   diff = auto
>   status = auto
>   branch = auto
>
> [alias]
>   br = branch
>   co = checkout
>   l = log --pretty=oneline
>   hist = log --pretty=format:\"%h %ad | %s%d [%an]\" --graph --date=short
>
> [branch]
> autosetuprebase = always
>
> [push]
> default = tracking
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>



-- 
-
Noble Paul




[jira] [Comment Edited] (LUCENE-6985) Create some (simple) guides on how to use git to perform common dev tasks

2016-01-25 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15114007#comment-15114007
 ] 

Paul Elschot edited comment on LUCENE-6985 at 1/25/16 4:52 PM:
---

I have some long-running branches against trunk that I needed to move onto 
master.
These branches regularly get trunk merged into them, so rebasing will not work.

This command sequence takes a local branch brn from trunk to master.
Except for the merge and the cherry-pick, these steps can be easily done in 
gitk, normally with a right click:

git checkout brn
git merge trunk # produce a merge commit, possibly resolve any conflicts.

git tag brn.20160123 # tag the merge commit
git branch -D brn # delete the branch

git checkout master
git branch brn # recreate the branch starting on master
git checkout brn 
git cherry-pick brn.20160123 -m 2 # add a commit with the diff to trunk as 
merged above




was (Author: paul.elsc...@xs4all.nl):
I have some long running branches against trunk that I needed to move onto 
master.
These branches regularly get trunk merged into them, so rebasing will not work.

This command sequence takes a local branch brn from trunk to master.
Except for the merge and the cherry-pick, these steps can be easily done in 
gitk, normally with a right click:

git checkout brn
git merge trunk # produce a merge commit, possibly resolve any conflicts.

git tag brn.20160123 # tag the merge commit
git branch -D prefilltokenstream # delete the branch

git checkout master
git branch brn # recreate the branch starting on master
git checkout brn 
git cherry-pick brn.20160123 -m 2 # add a commit with the diff to trunk as 
merged above



> Create some (simple) guides on how to use git to perform common dev tasks
> -
>
> Key: LUCENE-6985
> URL: https://issues.apache.org/jira/browse/LUCENE-6985
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>
> Some simple guides that demonstrate basic git principles and routine tasks 
> (below). The guides are here:
> https://github.com/dweiss/lucene-git-guides
> Core concepts
> 1. how to clone and setup lucene/solr git repo for local work
> 2. basic git concepts: branches, remote branches, references, staging area.
> Simple tasks:
> 1. Checkout branch X, create local branch Y, modify something, create a diff 
> for Jira.
> 2. Checkout branch X, create local branch Y, modify something, catch-up with 
> changes on X, create a diff for Jira.
> 3. Checkout branch X, create local branch Y, modify something, catch-up with 
> changes on X, apply aggregate changes from Y on X (as a single commit).
> 4. Backport feature/commit C from branch X to Y via cherry-picking.
> More advanced:
> 1. Create a feature branch off branch X, develop the feature, then apply it 
> as a series of commits to X.
> 2. Create a feature branch off branch X, develop the feature, then apply it 
> as a series of commits to X and Y (different branch).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




Re: Merge vs Rebase

2016-01-25 Thread Yonik Seeley
>  git push origin HEAD:master (this is the equivalent of svn commit)
>  (b) Is there a less verbose way to do it,

I'm no git expert either, but it seems like the simplest example of
applying and committing a patch could be much simpler by having good
defaults and not using a separate branch.

1) update your repo (note my .gitconfig makes this use rebasing)
$ git pull

2) apply patch / changes, run tests, etc

3) commit locally
$ git add  # add the changed files.. use "git add -u" for adding all
modified files
$ git commit -m "my commit message"

4) push to remote
$ git push

-Yonik

 my .gitconfig --
[user]
  name = yonik
  email = yo...@apache.org

[color]
  diff = auto
  status = auto
  branch = auto

[alias]
  br = branch
  co = checkout
  l = log --pretty=oneline
  hist = log --pretty=format:\"%h %ad | %s%d [%an]\" --graph --date=short

[branch]
autosetuprebase = always

[push]
default = tracking




[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.7.0_80) - Build # 15348 - Failure!

2016-01-25 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/15348/
Java: 32bit/jdk1.7.0_80 -server -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.handler.component.TestDistributedStatsComponentCardinality.test

Error Message:
int_i: goodEst=13923, poorEst=13970, real=13952, 
p=q=id:[376+TO+14327]&rows=0&stats=true&stats.field={!cardinality%3D0.006786916679482613+key%3Dlow_int_i}int_i&stats.field={!cardinality%3D0.006786916679482613+key%3Dlow_int_i_prehashed_l+hllPreHashed%3Dtrue}int_i_prehashed_l&stats.field={!cardinality%3D0.5067869166794826+key%3Dhigh_int_i}int_i&stats.field={!cardinality%3D0.5067869166794826+key%3Dhigh_int_i_prehashed_l+hllPreHashed%3Dtrue}int_i_prehashed_l&stats.field={!cardinality%3D0.006786916679482613+key%3Dlow_long_l}long_l&stats.field={!cardinality%3D0.006786916679482613+key%3Dlow_long_l_prehashed_l+hllPreHashed%3Dtrue}long_l_prehashed_l&stats.field={!cardinality%3D0.5067869166794826+key%3Dhigh_long_l}long_l&stats.field={!cardinality%3D0.5067869166794826+key%3Dhigh_long_l_prehashed_l+hllPreHashed%3Dtrue}long_l_prehashed_l&stats.field={!cardinality%3D0.006786916679482613+key%3Dlow_string_s}string_s&stats.field={!cardinality%3D0.006786916679482613+key%3Dlow_string_s_prehashed_l+hllPreHashed%3Dtrue}string_s_prehashed_l&stats.field={!cardinality%3D0.5067869166794826+key%3Dhigh_string_s}string_s&stats.field={!cardinality%3D0.5067869166794826+key%3Dhigh_string_s_prehashed_l+hllPreHashed%3Dtrue}string_s_prehashed_l

Stack Trace:
java.lang.AssertionError: int_i: goodEst=13923, poorEst=13970, real=13952, 
p=q=id:[376+TO+14327]&rows=0&stats=true&stats.field={!cardinality%3D0.006786916679482613+key%3Dlow_int_i}int_i&stats.field={!cardinality%3D0.006786916679482613+key%3Dlow_int_i_prehashed_l+hllPreHashed%3Dtrue}int_i_prehashed_l&stats.field={!cardinality%3D0.5067869166794826+key%3Dhigh_int_i}int_i&stats.field={!cardinality%3D0.5067869166794826+key%3Dhigh_int_i_prehashed_l+hllPreHashed%3Dtrue}int_i_prehashed_l&stats.field={!cardinality%3D0.006786916679482613+key%3Dlow_long_l}long_l&stats.field={!cardinality%3D0.006786916679482613+key%3Dlow_long_l_prehashed_l+hllPreHashed%3Dtrue}long_l_prehashed_l&stats.field={!cardinality%3D0.5067869166794826+key%3Dhigh_long_l}long_l&stats.field={!cardinality%3D0.5067869166794826+key%3Dhigh_long_l_prehashed_l+hllPreHashed%3Dtrue}long_l_prehashed_l&stats.field={!cardinality%3D0.006786916679482613+key%3Dlow_string_s}string_s&stats.field={!cardinality%3D0.006786916679482613+key%3Dlow_string_s_prehashed_l+hllPreHashed%3Dtrue}string_s_prehashed_l&stats.field={!cardinality%3D0.5067869166794826+key%3Dhigh_string_s}string_s&stats.field={!cardinality%3D0.5067869166794826+key%3Dhigh_string_s_prehashed_l+hllPreHashed%3Dtrue}string_s_prehashed_l
at 
__randomizedtesting.SeedInfo.seed([5F56E64717F153AD:D702D99DB90D3E55]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.handler.component.TestDistributedStatsComponentCardinality.test(TestDistributedStatsComponentCardinality.java:216)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearc

[jira] [Closed] (LUCENE-6922) Improve svn to git workaround script

2016-01-25 Thread Paul Elschot (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Elschot closed LUCENE-6922.

Resolution: Fixed

After the move from svn to git there is no more need for this.

Thanks for the move, and also thanks to infra for keeping the git-svn connection 
up longer than announced.

I have reforked the github repo PaulElschot/lucene-solr from 
apache/lucene-solr. For future reference I pushed the above .svn tags into 
there, however please expect these tags to disappear again in a few months.

> Improve svn to git workaround script
> 
>
> Key: LUCENE-6922
> URL: https://issues.apache.org/jira/browse/LUCENE-6922
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: -tools
>Reporter: Paul Elschot
>Priority: Minor
> Attachments: LUCENE-6922.patch, svnBranchToGit.py, svnBranchToGit.py, 
> svnBranchToGit.py
>
>
> As the git-svn mirror for Lucene/Solr will be turned off near the end of 
> 2015, try to improve the workaround script to make it more usable.






[JENKINS-MAVEN] Lucene-Solr-Maven-5.x #1165: POMs out of sync

2016-01-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-5.x/1165/

No tests ran.

Build Log:
[...truncated 25439 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.x/build.xml:766: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.x/build.xml:299: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.x/lucene/build.xml:420: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.x/lucene/common-build.xml:2240:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.x/lucene/common-build.xml:1668:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.x/lucene/common-build.xml:579:
 Error deploying artifact 'org.apache.lucene:lucene-demo:jar': Error deploying 
artifact: Error transferring file

Total time: 29 minutes 20 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




Re: To Detect Whether Core is Available To Post

2016-01-25 Thread Shawn Heisey
On 1/25/2016 12:13 AM, Edwin Lee wrote:
> We want to detect the health of each core -- whether it is
> available to post to. We have figured out a few ways to do that:
>
>  1. Using a Luke request. -- Cost is a bit high for core loading.
>  2. We have designed a cache, adding a hook when the core is opened
> or closed to record whether the core is loaded. -- *Question: If a
> core is loaded, is there a situation where we still cannot post data
> to it?*
>  3. We try to post some meaningless data with our unique id, and delete
> that data within the same commit, like this:
>
> { "add": { "doc": { "id": "%%ID%%" } }, "delete": { "id": "%%ID%%" },
> "commit": {} }
>
> *But we are still not 100% sure whether it will mess up our normal data.*
>
> What is the best way to meet this requirement? We want to consult your
> opinions.
>

I'm assuming you're running Solr.  You did not indicate, but the JSON
format looks like Solr's format.

There may be some really quick way to detect whether an index is fully
writeable, but if there is, I do not know it.

What I think I would do is add a document with some current value in a
field besides the uniqueKey field, commit, and request that document,
making sure that the value submitted is the value received.

Deleting the document after validation would be optional, a step I
probably wouldn't bother doing.  You would need a special ID value for
the document, and to avoid problems with relevancy on your other
documents, this document should include a value in a field not used by
the rest of your documents, a value that changes every time the health
check is run.
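
[Editorial sketch of that approach. The probe ID, the field name `healthcheck_value_s`, and the injected `post`/`get` callables are all hypothetical, not Solr's API; a real client would route them to Solr's /update and /select endpoints.]

```python
import json
import uuid

PROBE_ID = "_healthcheck_probe_"  # hypothetical reserved id; no real document uses it

def core_is_writable(post, get, core="collection1"):
    """Index a probe document whose value changes on every run, commit,
    read it back, and compare values -- the check described above.
    `post(path, body)` and `get(path) -> str` are injected HTTP helpers,
    which keeps this sketch self-contained."""
    probe_value = uuid.uuid4().hex
    # healthcheck_value_s is a made-up field that ordinary documents do not
    # use, so the probe cannot skew relevancy on real fields.
    doc = {"id": PROBE_ID, "healthcheck_value_s": probe_value}
    post(f"/solr/{core}/update?commit=true", json.dumps([doc]))
    resp = json.loads(get(f"/solr/{core}/select?q=id:{PROBE_ID}&wt=json"))
    docs = resp.get("response", {}).get("docs", [])
    return bool(docs) and docs[0].get("healthcheck_value_s") == probe_value
```

Deleting the probe afterwards would be optional, as noted above, since the next run overwrites it by ID.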

FYI, this is a question better suited for the solr-user list, not the
dev list.

Thanks,
Shawn





Re: Merge vs Rebase

2016-01-25 Thread Joel Bernstein
I'm not a git expert, and I would prefer to have guidelines to follow.
Understanding the complexities here is not something I have any interest
in; I just want to merge and commit in some agreed-upon standard way.

Joel Bernstein
http://joelsolr.blogspot.com/

On Mon, Jan 25, 2016 at 10:37 AM, Shai Erera  wrote:

> Hi Dawid,
>
> Yes, I know what --amend does. I never amend the parent commit, only my
> initial commit. I think we're talking about the same thing, but just to
> clarify, this is what I mean:
>
> git checkout -b feature-branch
> git commit -a -m "message"
> (Get reviews)
> git commit --amend (addressing review comments)
>
> I do not intend to stack additional commits on top of the initial commit,
> and then in the end squash them or whatever. I treat it all as one piece of
> work and commit, and I don't think our history should reflect every comment
> someone makes on an issue. In analogy, today in JIRA we always update a
> full .patch file, and eventually commit it, irrespective of the number of
> iterations that issue has gone through.
>
> Again, I think we're in sync, but wanted to clarify what I meant.
>
> About the guidelines, I'd prefer that we use rebase, but that's not what I
> was after with the guide. Today I've made my first commit to Git (in
> Lucene!), and the way I handled trunk and branch_5x is as follows:
>
> git checkout -b feature master (i.e. trunk)
> (work, commit locally, publish ...)
> (time to push, i.e. SVN commit)
> git fetch origin
> git checkout master && git pull --rebase (this is nearly identical to the
> previous 'svn up')
> git checkout feature && git rebase master
> git push origin HEAD:master (this is the equivalent of svn commit)
>
> ('merge' to branch_5x)
> git checkout branch_5x
> git pull --rebase (pulls from origin/branch_5x)
> git checkout -b feature_5x branch_5x
> git cherry-pick feature (this is partly 'svn merge')
> git push origin HEAD:branch_5x
>
> (a) Is this (one of) the correct way to do it?
> (b) Is there a less verbose way to do it, aside from rebasing feature on
> origin/branch_5x?
>
> So, if we can have such sequence of operations written somewhere, even if
> only as a proposed way to handle it, I think it will make it easy on the
> Git newcomers among committers, since they're going to need to do this a
> lot. And certainly if there's a quicker way to do it!
>
> Shai
>
> On Mon, Jan 25, 2016 at 5:15 PM Dawid Weiss  wrote:
>
>> Hi Shai,
>>
>> > I usually do 'git commit --amend',
>>
>> When do you do this? Note that what the above command does is:
>>
>> 1) it takes the last commit on the current local branch,
>> 2) it takes its parent commit,
>> 3) it applies the diff from the last commit to the parent, permits you
>> to modify the message and commits this new stuff on top of the parent,
>> *resetting* the branch's HEAD reference to this new commit.
>>
>> If you already pushed your unamended commit to the remote, then it's
>> too late -- you'd end up with a diverged local branch. Again, the
>> simplest way to "see" this would be gitk. I'm sorry if I'm explaining
>> something obvious.
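
[Editorial sketch of steps 1-3 above in a throwaway repository; the commit messages are made up. It shows that `--amend` rewrites the branch tip to a new commit, which is why pushing first makes it dangerous.]

```shell
# Sketch: amend the last commit's message and observe that HEAD is rewritten.
set -e
cd "$(mktemp -d)"
git init -q
git -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "first draft"
old_sha=$(git rev-parse HEAD)
git -c user.name=dev -c user.email=dev@example.com \
    commit -q --amend --allow-empty -m "fixed message"
git log --oneline                             # still one commit, new message
test "$old_sha" != "$(git rev-parse HEAD)"    # the tip commit was replaced
```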
>>
>> > is beneficial. But if I upload a patch, get reviews and intend to
>> upload an
>> > updated patch, I will --amend my commit.
>>
>> I don't see any relation with commits and --amend here. As long as
>> you're working on a branch, this really doesn't matter -- you can just
>> commit your changed stuff after a review and push it to a remote
>> feature branch for others to see (or keep it local if you work on it
>> alone). Amending a commit is meant merely to fix a typo in a commit
>> message. Yes, you can play with commits in a serious way (skipping
>> commits, merging them together, changing commit  contents, etc.), but
>> this is really beyond the scope of a new git user (for those who are
>> interested -- see what an interactive rebase is -- rebase -i).
>>
>> > I also agree w/ Uwe, I think it will really help if we have a
>> > guidelines/cheatsheet that document how do we expect dev to happen in
>> the
>> > Git world.
>>
>> Even if it seems right I don't think this is realistic, given that
>> people have their own preferences and (strong) opinions. For example I
>> disagree with Mark that multiple merges and then final merge are a bad
>> thing in general. They are fairly straightforward in the commit graph
>> and they very clearly signify that somebody was working on their own
>> stuff and periodically synced up with changes on the master (or any
>> other branch). Finally, when the feature is ready, a merge to the
>> mainline happens. It's no different to what we used in SVN, actually.
>> This is a classic example of flying fish strategy -- it's been with
>> versioning systems for ages. What I think Mark is worried about is
>> merges across multiple branches, which indeed can quickly become
>> mind-boggling. That's why I suggested that, at first, everyone should
>> stick to simple cherry picks and squashed merges -- not that they're
>> "the best" way to work with git,

[jira] [Updated] (LUCENE-6993) Update TLDs to latest list

2016-01-25 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated LUCENE-6993:
--
Attachment: LUCENE-6993.patch

Attaching a patch against trunk that updates the TLD Macro file and the 
UAX29URLEmailTokenizerImpl.

When running {{ant jflex}} I had to increase the amount of heap space available 
due to the increased number of TLDs; I am not sure whether this will have a 
negative impact on the rest of the build.

The {{.an}} and {{.tp}} domains were removed from the list, and 
{{random.text.with.urls}} was updated accordingly.

> Update TLDs to latest list
> --
>
> Key: LUCENE-6993
> URL: https://issues.apache.org/jira/browse/LUCENE-6993
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Mike Drob
> Attachments: LUCENE-6993.patch
>
>
> We did this once before in LUCENE-5357, but it might be time to update the 
> list of TLDs again. Comparing our old list with a new list indicates 800+ new 
> domains, so it would be nice to include them.






[jira] [Commented] (LUCENE-6992) Add sugar methods to allow creating a MemoryIndex from a Document or set of IndexableFields

2016-01-25 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15115441#comment-15115441
 ] 

David Smiley commented on LUCENE-6992:
--

+1 Looks good.  Nice work Alan.

> Add sugar methods to allow creating a MemoryIndex from a Document or set of 
> IndexableFields
> ---
>
> Key: LUCENE-6992
> URL: https://issues.apache.org/jira/browse/LUCENE-6992
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 5.5, Trunk
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Attachments: LUCENE-6992.patch
>
>
> This came up on the mailing list a few days ago - it's not as obvious as it 
> should be how to add arbitrary IndexableFields to a MemoryIndex, and a few 
> sugar methods will make this a lot simpler.






[jira] [Comment Edited] (LUCENE-6991) WordDelimiterFilter bug

2016-01-25 Thread Pawel Rog (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15115414#comment-15115414
 ] 

Pawel Rog edited comment on LUCENE-6991 at 1/25/16 3:41 PM:


Thanks for the suggestion. When I changed the whitespace tokenizer to the 
keyword tokenizer, the test passes. Nevertheless, I think the problem remains 
in WordDelimiterFilter. Right?


was (Author: prog):
Thanks for the suggestion. When I changed whitespace tokenizer to keyword 
tokenizer the test passes.

> WordDelimiterFilter bug
> ---
>
> Key: LUCENE-6991
> URL: https://issues.apache.org/jira/browse/LUCENE-6991
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.10.4, 5.3.1
>Reporter: Pawel Rog
>Priority: Minor
>
> I was preparing an analyzer which contains WordDelimiterFilter and I realized 
> it sometimes gives results different than expected.
> I prepared a short test which shows the problem. I haven't used Lucene tests 
> for this, but that doesn't matter for showing the bug.
> {code}
> String urlIndexed = "144.214.37.14 - - [05/Jun/2013:08:39:27 +] \"GET 
> /products/key-phrase-extractor/ HTTP/1.1\"" +
> " 200 3437 http://www.google.com/url?sa=t&rct=j&q=&esrc=s&"; +
> 
> "source=web&cd=15&cad=rja&ved=0CEgQFjAEOAo&url=http%3A%2F%2Fwww.sematext.com%2Fproducts%2Fkey-"
>  +
> 
> "phrase-extractor%2F&ei=TPOuUbaWM-OKiQfGxIGYDw&usg=AFQjCNGwYAFYg_M3EZnp2eEWJzdvRrVPrg&sig2"
>  +
> "=oYitONI2EIZ0CQar7Ej8HA&bvm=bv.47380653,d.aGc\" \"Mozilla/5.0 
> (X11; Ubuntu; Linux i686; rv:20.0) " +
> "Gecko/20100101 Firefox/20.0\"";
> List tokens1 = new ArrayList();
> List tokens2 = new ArrayList();
> WhitespaceAnalyzer analyzer = new WhitespaceAnalyzer();
> TokenStream tokenStream = analyzer.tokenStream("test", urlIndexed);
> tokenStream = new WordDelimiterFilter(tokenStream,
> WordDelimiterFilter.GENERATE_WORD_PARTS |
> WordDelimiterFilter.CATENATE_WORDS |
> WordDelimiterFilter.SPLIT_ON_CASE_CHANGE,
> null);
> CharTermAttribute charAttrib = 
> tokenStream.addAttribute(CharTermAttribute.class);
> tokenStream.reset();
> while(tokenStream.incrementToken()) {
>   tokens1.add(charAttrib.toString());
>   System.out.println(charAttrib.toString());
> }
> tokenStream.end();
> tokenStream.close();
> urlIndexed = "144.214.37.14 - - [05/Jun/2013:08:39:27 +] \"GET 
> /products/key-phrase-extractor/ HTTP/1.1\"" +
> " 200 3437 \"http://www.google.com/url?sa=t&rct=j&q=&esrc=s&"; +
> 
> "source=web&cd=15&cad=rja&ved=0CEgQFjAEOAo&url=http%3A%2F%2Fwww.sematext.com%2Fproducts%2Fkey-"
>  +
> 
> "phrase-extractor%2F&ei=TPOuUbaWM-OKiQfGxIGYDw&usg=AFQjCNGwYAFYg_M3EZnp2eEWJzdvRrVPrg&sig2"
>  +
> "=oYitONI2EIZ0CQar7Ej8HA&bvm=bv.47380653,d.aGc\" \"Mozilla/5.0 (X11; 
> Ubuntu; Linux i686; rv:20.0) " +
> "Gecko/20100101 Firefox/20.0\"";
> System.out.println("\n\n\n\n");
> tokenStream = analyzer.tokenStream("test", urlIndexed);
> tokenStream = new WordDelimiterFilter(tokenStream,
> WordDelimiterFilter.GENERATE_WORD_PARTS |
> WordDelimiterFilter.CATENATE_WORDS |
> WordDelimiterFilter.SPLIT_ON_CASE_CHANGE,
> null);
> charAttrib = tokenStream.addAttribute(CharTermAttribute.class);
> tokenStream.reset();
> while(tokenStream.incrementToken()) {
>   tokens2.add(charAttrib.toString());
>   System.out.println(charAttrib.toString());
> }
> tokenStream.end();
> tokenStream.close();
> assertEquals(Joiner.on(",").join(tokens1), Joiner.on(",").join(tokens2));
> {code}






[jira] [Commented] (LUCENE-6991) WordDelimiterFilter bug

2016-01-25 Thread Pawel Rog (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15115414#comment-15115414
 ] 

Pawel Rog commented on LUCENE-6991:
---

Thanks for the suggestion. When I changed the whitespace tokenizer to the 
keyword tokenizer, the test passes.

> WordDelimiterFilter bug
> ---
>
> Key: LUCENE-6991
> URL: https://issues.apache.org/jira/browse/LUCENE-6991
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.10.4, 5.3.1
>Reporter: Pawel Rog
>Priority: Minor
>
> I was preparing an analyzer which contains WordDelimiterFilter and I realized it 
> sometimes gives results different than expected.
> I prepared a short test which shows the problem. I haven't used the Lucene test 
> framework for this, but that doesn't matter for showing the bug.
> {code}
> String urlIndexed = "144.214.37.14 - - [05/Jun/2013:08:39:27 +] \"GET /products/key-phrase-extractor/ HTTP/1.1\"" +
>     " 200 3437 http://www.google.com/url?sa=t&rct=j&q=&esrc=s&" +
>     "source=web&cd=15&cad=rja&ved=0CEgQFjAEOAo&url=http%3A%2F%2Fwww.sematext.com%2Fproducts%2Fkey-" +
>     "phrase-extractor%2F&ei=TPOuUbaWM-OKiQfGxIGYDw&usg=AFQjCNGwYAFYg_M3EZnp2eEWJzdvRrVPrg&sig2" +
>     "=oYitONI2EIZ0CQar7Ej8HA&bvm=bv.47380653,d.aGc\" \"Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:20.0) " +
>     "Gecko/20100101 Firefox/20.0\"";
> List<String> tokens1 = new ArrayList<String>();
> List<String> tokens2 = new ArrayList<String>();
> WhitespaceAnalyzer analyzer = new WhitespaceAnalyzer();
> TokenStream tokenStream = analyzer.tokenStream("test", urlIndexed);
> tokenStream = new WordDelimiterFilter(tokenStream,
>     WordDelimiterFilter.GENERATE_WORD_PARTS |
>     WordDelimiterFilter.CATENATE_WORDS |
>     WordDelimiterFilter.SPLIT_ON_CASE_CHANGE,
>     null);
> CharTermAttribute charAttrib = tokenStream.addAttribute(CharTermAttribute.class);
> tokenStream.reset();
> while (tokenStream.incrementToken()) {
>   tokens1.add(charAttrib.toString());
>   System.out.println(charAttrib.toString());
> }
> tokenStream.end();
> tokenStream.close();
> // Same log line, but with the escaped quote before the referrer URL
> urlIndexed = "144.214.37.14 - - [05/Jun/2013:08:39:27 +] \"GET /products/key-phrase-extractor/ HTTP/1.1\"" +
>     " 200 3437 \"http://www.google.com/url?sa=t&rct=j&q=&esrc=s&" +
>     "source=web&cd=15&cad=rja&ved=0CEgQFjAEOAo&url=http%3A%2F%2Fwww.sematext.com%2Fproducts%2Fkey-" +
>     "phrase-extractor%2F&ei=TPOuUbaWM-OKiQfGxIGYDw&usg=AFQjCNGwYAFYg_M3EZnp2eEWJzdvRrVPrg&sig2" +
>     "=oYitONI2EIZ0CQar7Ej8HA&bvm=bv.47380653,d.aGc\" \"Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:20.0) " +
>     "Gecko/20100101 Firefox/20.0\"";
> System.out.println("\n\n\n\n");
> tokenStream = analyzer.tokenStream("test", urlIndexed);
> tokenStream = new WordDelimiterFilter(tokenStream,
>     WordDelimiterFilter.GENERATE_WORD_PARTS |
>     WordDelimiterFilter.CATENATE_WORDS |
>     WordDelimiterFilter.SPLIT_ON_CASE_CHANGE,
>     null);
> charAttrib = tokenStream.addAttribute(CharTermAttribute.class);
> tokenStream.reset();
> while (tokenStream.incrementToken()) {
>   tokens2.add(charAttrib.toString());
>   System.out.println(charAttrib.toString());
> }
> tokenStream.end();
> tokenStream.close();
> assertEquals(Joiner.on(",").join(tokens1), Joiner.on(",").join(tokens2));
> {code}






[jira] [Created] (LUCENE-6993) Update TLDs to latest list

2016-01-25 Thread Mike Drob (JIRA)
Mike Drob created LUCENE-6993:
-

 Summary: Update TLDs to latest list
 Key: LUCENE-6993
 URL: https://issues.apache.org/jira/browse/LUCENE-6993
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/analysis
Reporter: Mike Drob


We did this once before in LUCENE-5357, but it might be time to update the list 
of TLDs again. Comparing our old list with a new list indicates 800+ new 
domains, so it would be nice to include them.






Re: Merge vs Rebase

2016-01-25 Thread Shai Erera
Hi Dawid,

Yes, I know what --amend does. I never amend the parent commit, only my
initial commit. I think we're talking about the same thing, but just to
clarify, this is what I mean:

git checkout -b feature-branch
git commit -a -m "message"
(Get reviews)
git commit --amend (addressing review comments)

I do not intend to stack additional commits on top of the initial commit,
and then in the end squash them or whatever. I treat it all as one piece of
work and commit, and I don't think our history should reflect every comment
someone makes on an issue. By analogy, today in JIRA we always update a
full .patch file, and eventually commit it, irrespective of the number of
iterations the issue has gone through.

Again, I think we're in sync, but I wanted to clarify what I meant.

About the guidelines, I'd prefer that we use rebase, but that's not what I
was after with the guide. Today I've made my first commit to Git (in
Lucene!), and the way I handled trunk and branch_5x is as follows:

git checkout -b feature master (i.e. trunk)
(work, commit locally, publish ...)
(time to push, i.e. SVN commit)
git fetch origin
git checkout master && git pull --rebase (this is nearly identical to the
previous 'svn up')
git checkout feature && git rebase master
git push origin HEAD:master (this is the equivalent of svn commit)

('merge' to branch_5x)
git checkout branch_5x
git pull --rebase (pulls from origin/branch_5x)
git checkout -b feature_5x branch_5x
git cherry-pick feature (this is partly 'svn merge')
git push origin HEAD:branch_5x

(a) Is this (one of) the correct ways to do it?
(b) Is there a less verbose way to do it, aside from rebasing feature on
origin/branch_5x?

So, if we can have such sequence of operations written somewhere, even if
only as a proposed way to handle it, I think it will make it easy on the
Git newcomers among committers, since they're going to need to do this a
lot. And certainly if there's a quicker way to do it!

Shai

On Mon, Jan 25, 2016 at 5:15 PM Dawid Weiss  wrote:

> Hi Shai,
>
> > I usually do 'git commit --amend',
>
> When do you do this? Note that what the above command does is:
>
> 1) it takes the last commit on the current local branch,
> 2) it takes its parent commit,
> 3) it applies the diff from the last commit to the parent, permits you
> to modify the message and commits this new stuff on top of the parent,
> *resetting* the branch's HEAD reference to this new commit.
>
> If you already pushed your unamended commit to the remote then it's
> too late -- you'd end up with a diverged local branch. Again, the
> simplest way to "see" this would be gitk. I'm sorry if I'm explaining
> something obvious.
>
> > is beneficial. But if I upload a patch, get reviews and intend to upload
> an
> > updated patch, I will --amend my commit.
>
> I don't see any relation between commits and --amend here. As long as
> you're working on a branch, this really doesn't matter -- you can just
> commit your changed stuff after a review and push it to a remote
> feature branch for others to see (or keep it local if you work on it
> alone). Amending a commit is meant merely to fix a typo in a commit
> message. Yes, you can play with commits in a serious way (skipping
> commits, merging them together, changing commit contents, etc.), but
> this is really beyond the scope of a new git user (for those who are
> interested -- see what an interactive rebase is -- rebase -i).
>
> > I also agree w/ Uwe, I think it will really help if we have a
> > guidelines/cheatsheet that documents how we expect dev to happen in the
> > Git world.
>
> Even if it seems right I don't think this is realistic, given that
> people have their own preferences and (strong) opinions. For example I
> disagree with Mark that multiple merges and then final merge are a bad
> thing in general. They are fairly straightforward in the commit graph
> and they very clearly signify that somebody was working on their own
> stuff and periodically synced up with changes on the master (or any
> other branch). Finally, when the feature is ready, a merge to the
> mainline happens. It's no different to what we used in SVN, actually.
> This is a classic example of flying fish strategy -- it's been with
> versioning systems for ages. What I think Mark is worried about is
> merges across multiple branches, which indeed can quickly become
> mind-boggling. That's why I suggested that, at first, everyone should
> stick to simple cherry picks and squashed merges -- not that they're
> "the best" way to work with git, it's just that they're conceptually
> simpler to understand for those who start their adventure with
> distributed revision control systems. I personally use a lot of
> partial merges and I have no problem with them at all.
>
> > What you (Dawid) put on Github is great for Git newcomers, but as
> > a community I think that having rough standards and guidelines will help,
>
> I didn't have enough time and I'm on vacation with my family this
> week. But I a

Re: Merge vs Rebase

2016-01-25 Thread Mark Miller
On Mon, Jan 25, 2016 at 10:15 AM Dawid Weiss  wrote:

>  For example I
> disagree with Mark that multiple merges and then final merge are a bad
> thing in general. They are fairly straightforward in the commit graph
> and they very clearly signify that somebody was working on their own
> stuff and periodically synced up with changes on the master (or any
> other branch).
>

It sounds like you are talking about something else. If you are
legitimately working on a large feature, what you would use an svn branch for,
merge commits can make sense. That's not the general case at all. Like I
said, it's about determining when the merge commit adds value and when it
just creates complexity with no extra value.

Other than that, you guys are just too obsessed with 'forbidden'.

We talk about getting on the same page and guidelines, and you guys talk
about 'forbidden' and 'bans'. You are just talking past us.

Do some googling. Pretty much every project faces this and either decides
to create an insane amount of merge commits or not. Both choices are taken,
but the path that is almost never taken is, everyone just does whatever the
hell they want without discussing first.


- Mark
-- 
- Mark
about.me/markrmiller


[jira] [Commented] (LUCENE-6991) WordDelimiterFilter bug

2016-01-25 Thread Jack Krupansky (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15115405#comment-15115405
 ] 

Jack Krupansky commented on LUCENE-6991:


Does seem odd and wrong.

I also notice that it is not generating terms for the single letters from the 
%-escapes: %3A, %2F.

It also seems odd that that long token of catenated word parts is not all of 
the word parts from the URL. It seems like a digit not preceded by a letter is 
causing a break, while a digit preceded by a letter prevents a break.

Since you are using the whitespace tokenizer, the WDF is only seeing one 
space-delimited term at a time. You might try your test with just the URL 
portion itself, both with and without the escaped quote, just to see if that 
affects anything.
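To try that, one quick way to isolate just the URL portion of the log line is to split on the double-quote character and take the second quoted field. This is a hypothetical helper, not part of the reported test; the field layout is assumed from the combined log format shown above:

```java
// Hypothetical helper to pull just the referrer URL out of a
// combined-format log line, so it can be analyzed on its own.
public class UrlPortionSketch {
    // The referrer is the second quoted field, i.e. the 4th segment
    // produced when splitting the line on the double-quote character.
    static String referrerOf(String logLine) {
        String[] parts = logLine.split("\"");
        return parts.length > 3 ? parts[3] : null;
    }

    public static void main(String[] args) {
        String logLine = "144.214.37.14 - - [05/Jun/2013] \"GET /products/ HTTP/1.1\""
                + " 200 3437 \"http://www.google.com/url?sa=t&rct=j\""
                + " \"Mozilla/5.0 (X11; Ubuntu)\"";
        System.out.println(referrerOf(logLine));  // http://www.google.com/url?sa=t&rct=j
    }
}
```

The extracted referrer string could then be fed directly to the analyzer chain in place of the whole log line.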


> WordDelimiterFilter bug
> ---
>
> Key: LUCENE-6991
> URL: https://issues.apache.org/jira/browse/LUCENE-6991
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.10.4, 5.3.1
>Reporter: Pawel Rog
>Priority: Minor
>






[jira] [Updated] (SOLR-8517) Implement minimal set of get* methods in ResultSetImpl for column names

2016-01-25 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8517:
---
Attachment: SOLR-8517.patch

Fixes an NPE from calling .toString() in the Tuple#getString method, found while 
manually testing. Uses String.valueOf instead of toString().
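For illustration, here is a minimal plain-Java stand-in (an assumed field map, not the actual SolrJ Tuple class) showing why String.valueOf avoids the NPE:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the null-safety difference: String.valueOf(null) yields the
// string "null" instead of throwing a NullPointerException.
public class ValueOfSketch {
    // Hypothetical equivalent of Tuple#getString over a field map.
    static String getString(Map<String, Object> fields, String key) {
        Object value = fields.get(key);  // may be null for a missing field
        return String.valueOf(value);    // value.toString() would NPE here
    }

    public static void main(String[] args) {
        Map<String, Object> fields = new HashMap<>();
        fields.put("id", 42L);
        System.out.println(getString(fields, "id"));       // prints 42
        System.out.println(getString(fields, "missing"));  // prints null, no NPE
    }
}
```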

> Implement minimal set of get* methods in ResultSetImpl for column names
> ---
>
> Key: SOLR-8517
> URL: https://issues.apache.org/jira/browse/SOLR-8517
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8517.patch, SOLR-8517.patch, SOLR-8517.patch
>
>
> This is related to the ResultSetImpl work for column indices, but requires that 
> more metadata be passed back from the SQL handler in relation to column names. 
> The SQL handler already knows about the column names and order, but they 
> aren't passed back to the client. SQL clients use the column names for 
> display, so this must be implemented for DBVisualizer to work properly.






[jira] [Commented] (SOLR-8502) Improve Solr JDBC Driver to support SQL Clients like DBVisualizer

2016-01-25 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15115325#comment-15115325
 ] 

Kevin Risden commented on SOLR-8502:


[~joel.bernstein] Here are some more tickets that are ready for review:
* SOLR-8519
* SOLR-8517
* SOLR-8512

They have to be merged in that order based on dependencies. There might be 
slight conflicts between them, but they are easily addressed.

> Improve Solr JDBC Driver to support SQL Clients like DBVisualizer
> -
>
> Key: SOLR-8502
> URL: https://issues.apache.org/jira/browse/SOLR-8502
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
>  Labels: jdbc
> Fix For: Trunk
>
>
> Currently when trying to connect to Solr with the JDBC driver with a SQL 
> client the driver must implement more methods and metadata to allow 
> connections. This JIRA is designed to act as an umbrella for the JDBC changes.
> An initial pass from a few months ago is here: 
> https://github.com/risdenk/lucene-solr/tree/expand-jdbc. This needs to be 
> broken up and create patches for the related sub tasks.






[jira] [Updated] (SOLR-8512) Implement minimal set of get* methods in ResultSetImpl for column indices

2016-01-25 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8512:
---
Attachment: SOLR-8512.patch

Patch that builds upon metadata passing from SOLR-8519. This is much simpler 
than modifying the underlying map of the tuples.

> Implement minimal set of get* methods in ResultSetImpl for column indices
> -
>
> Key: SOLR-8512
> URL: https://issues.apache.org/jira/browse/SOLR-8512
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8512.patch, SOLR-8512.patch, 
> sql-preserve-order.patch
>
>
> SQL clients use the proper get* methods on the ResultSet to return items to 
> be displayed. At minimum, the following methods should be implemented for 
> column index:
> * public Object getObject
> * public String getString
> * public boolean getBoolean
> * public short getShort
> * public int getInt
> * public long getLong
> * public float getFloat
> * public double getDouble
> * public BigDecimal getBigDecimal
> * public Date getDate
> * public Time getTime
> * public Timestamp getTimestamp
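As a rough sketch (a hypothetical class, not the actual ResultSetImpl code), the typed getters can all delegate to a single 1-based getObject over the tuple's values:

```java
import java.util.List;

// Minimal sketch of typed get* methods delegating to getObject(columnIndex).
// Names and structure are assumptions for illustration only.
public class ResultRowSketch {
    private final List<Object> row;  // one tuple's values, in column order

    public ResultRowSketch(List<Object> row) { this.row = row; }

    // JDBC column indices are 1-based, so subtract 1 for the list lookup.
    public Object getObject(int columnIndex) { return row.get(columnIndex - 1); }

    public String getString(int columnIndex) { return String.valueOf(getObject(columnIndex)); }

    public long getLong(int columnIndex) { return ((Number) getObject(columnIndex)).longValue(); }

    public double getDouble(int columnIndex) { return ((Number) getObject(columnIndex)).doubleValue(); }

    public static void main(String[] args) {
        ResultRowSketch r = new ResultRowSketch(List.of("solr", 8983, 3.14));
        System.out.println(r.getString(1));  // solr
        System.out.println(r.getLong(2));    // 8983
    }
}
```

The real implementation would also need the SQL-standard null and wasNull() handling, but the delegation pattern keeps the many get* variants trivial.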






[jira] [Commented] (SOLR-8512) Implement minimal set of get* methods in ResultSetImpl for column indices

2016-01-25 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15115317#comment-15115317
 ] 

Kevin Risden commented on SOLR-8512:


Depends upon metadata from SOLR-8519 and implemented methods from SOLR-8517

> Implement minimal set of get* methods in ResultSetImpl for column indices
> -
>
> Key: SOLR-8512
> URL: https://issues.apache.org/jira/browse/SOLR-8512
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8512.patch, sql-preserve-order.patch
>
>
> SQL clients use the proper get* methods on the ResultSet to return items to 
> be displayed. At minimum, the following methods should be implemented for 
> column index:
> * public Object getObject
> * public String getString
> * public boolean getBoolean
> * public short getShort
> * public int getInt
> * public long getLong
> * public float getFloat
> * public double getDouble
> * public BigDecimal getBigDecimal
> * public Date getDate
> * public Time getTime
> * public Timestamp getTimestamp






Re: Merge vs Rebase

2016-01-25 Thread david.w.smi...@gmail.com
Well put Mark.  It's definitely not an either-or, it's about judicious use
of merge commits.

And, someone correct me if I'm wrong, but I think we decided to switch from
svn to git* but otherwise keep things the same.*  Wouldn't that preclude
these little merge bubbles?  This is because 'svn' does the equivalent of a
git rebase when pushing a new commit that isn't otherwise some explicit
merge.  Thus that has been our workflow, and I think we agreed not to
change the workflow.  I definitely think we should minimize what we're
changing step by step (and I think we agreed to that notion); and so the
workflow should be the same until a bit of time passes and someone proposes
a specific change.

We still haven't heard from an *advocate* of these little git merge
bubbles, and of this one in particular, to keep this decision concrete rather
than abstract.  I suspect there is no advocate.  If committers are only
"fine" with them (ambivalent), or not-fine, then I think we should not have
them.  But should we hypothetically decide to have these merge bubbles, I
think we should not do them now -- see last paragraph.

~ David

On Mon, Jan 25, 2016 at 9:15 AM Mark Miller  wrote:

> Yup, these are the nasty little merge commits that if you do every time
> make the history ridiculous.
>
> Though before our 'touchy' committers go nuts again, it's not about merge
> vs rebase, it's about proper use of merge commits. You can avoid them with
> squash merges as well, rebase is simply one option. It's really about the
> decision of when a merge commit adds value and when it doesn't. If you keep
> adding them when they add no value, it's just a useless mess.
>
> Rebase is just one way to keep sane history though. The merge command
> *can* do it too if you know what you are doing.
>
> Mark
> On Mon, Jan 25, 2016 at 4:37 AM Shai Erera  wrote:
>
>> I usually do 'git commit --amend', but I agree Dawid, especially in those
>> JIRA issues which are logically sub-divided into multiple sub-tasks, with
>> ISSUE-1.patch, ISSUE-2.patch.. that keeping all the commits in the history
>> is beneficial. But if I upload a patch, get reviews and intend to upload an
>> updated patch, I will --amend my commit.
>>
>> I also agree w/ Uwe, I think it will really help if we have a
>> guidelines/cheatsheet that documents how we expect dev to happen in the
>> Git world. What you (Dawid) put on Github is great for Git newcomers, but
>> as a community I think that having rough standards and guidelines will
>> help, especially the newcomers who may work one way just because they
>> read about it somewhere. The merge/rebase questions, especially between master and
>> branch_5x, are a good starting point for alleviating confusion and setting
>> expectations / proposing a workflow.
>>
>> > git merge origin/master   # merge any changes from remote master,
>>
>> I would do a rebase here. Is there a reason you pick merge in the example
>> - i.e. do u think it's the preferred way, or was it just an example?
>> (asking for educational reasons)
>>
>> Shai
>>
>> On Mon, Jan 25, 2016 at 10:48 AM Dawid Weiss 
>> wrote:
>>
>>> > [...] merge C1 and C2, and C2 is a parent of C1, Git doesn't do a
>>> merge commit. Someone probably can confirm that.
>>>
>>> No, there is freedom in how you do it. You can do a fast-forward merge
>>> or a regular merge, which will show even a single commit which would
>>> otherwise be linear as a diversion from history.
>>>
>>> There is no way to "script rebase" since rebases can cause conflicts
>>> and these need to be resolved. If you wish to avoid these "bubbles"
>>> then I'd suggest to:
>>>
>>> 1) *never* work on any remote-tracking branch directly, branch your
>>> feature branch and work on that, merging from remote branch until
>>> you're ready to commit.
>>>
>>> git fetch origin
>>> git checkout master -b myfeature
>>> while (!done) {
>>>   ... work on myfeature, committing to myfeature
>>>   git fetch origin
>>>   git merge origin/master   # merge any changes from remote master,
>>> resolving conflicts
>>> }
>>>
>>> # when done, either rebase myfeature on top of origin/master and do a
>>> fast-forward commit (preserves history of all intermediate commits) or
>>> squash the entire feature into a single commit.
>>>
>>> git checkout master
>>> git pull   # this will never conflict or rebase anything since you
>>> never had any own changes on master
>>> git merge --squash myfeature
>>> git commit -m "myfeature"
>>>
>>> By the way -- having those "bubbles" in history can be perceived as
>>> beneficial if you merge features that have multiple commits because
>>> then you "see" all the intermediate commits and you can revert the
>>> entire feature in one step (as opposed to multiple fast-forward
>>> commits).
>>>
>>> Dawid
>>>
>>>
>>>
>>> Dawid
>>>
>>>
>>> On Mon, Jan 25, 2016 at 9:34 AM, Uwe Schindler  wrote:
>>> > Hi,
>>> >
>>> >
>>> >
>>> > I am fine with both. The example of the conflict between my commit and
>>> > Mike’s is just a “normal useca

[jira] [Updated] (SOLR-8517) Implement minimal set of get* methods in ResultSetImpl for column names

2016-01-25 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8517:
---
Attachment: SOLR-8517.patch

Added some tests for getObject.

> Implement minimal set of get* methods in ResultSetImpl for column names
> ---
>
> Key: SOLR-8517
> URL: https://issues.apache.org/jira/browse/SOLR-8517
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8517.patch, SOLR-8517.patch
>
>
> This is related to the ResultSetImpl work for column indices, but requires that 
> more metadata be passed back from the SQL handler in relation to column names. 
> The SQL handler already knows about the column names and order, but they 
> aren't passed back to the client. SQL clients use the column names for 
> display, so this must be implemented for DBVisualizer to work properly.






[jira] [Commented] (SOLR-8519) Implement ResultSetMetaDataImpl.getColumnCount()

2016-01-25 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15115289#comment-15115289
 ] 

Joel Bernstein commented on SOLR-8519:
--

Ok, I'll take a look at this today.

> Implement ResultSetMetaDataImpl.getColumnCount()
> 
>
> Key: SOLR-8519
> URL: https://issues.apache.org/jira/browse/SOLR-8519
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8519.patch, SOLR-8519.patch, SOLR-8519.patch, 
> SOLR-8519.patch
>
>
> DBVisualizer uses getColumnCount to determine how many columns to try to 
> display from the result.






[jira] [Updated] (SOLR-8590) example/files improvements

2016-01-25 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-8590:
---
Description: 
There are several example/files improvements/fixes that are warranted:

* Fix e-mail and URL field names ({{_ss}} and {{_ss}}, with angle 
brackets in field names), also add display of these fields in /browse results 
rendering
* Improve quality of extracted phrases
* Extract, facet, and display acronyms
* Add sorting controls, possibly all or some of these: last modified date, 
created date, relevancy, and title
* Add grouping by doc_type perhaps
* fix debug mode - currently does not update the parsed query debug output 
(this is probably a bug in data driven /browse as well)

  was:
There are several example/files improvements/fixes that are warranted:

* Fix e-mail and URL field names ({{_ss}} and {{_ss}}, with angle 
brackets in field names), also add display of these fields in /browse results 
rendering
* Improve quality of extracted phrases
* Extract, facet, and display acronyms
* Add sorting controls, possibly all or some of these: last modified date, 
created date, relevancy, and title
* Add grouping by doc_type perhaps


> example/files improvements
> --
>
> Key: SOLR-8590
> URL: https://issues.apache.org/jira/browse/SOLR-8590
> Project: Solr
>  Issue Type: Bug
>  Components: examples
>Reporter: Erik Hatcher
>Assignee: Erik Hatcher
>Priority: Minor
> Fix For: 6.0
>
>
> There are several example/files improvements/fixes that are warranted:
> * Fix e-mail and URL field names ({{_ss}} and {{_ss}}, with angle 
> brackets in field names), also add display of these fields in /browse results 
> rendering
> * Improve quality of extracted phrases
> * Extract, facet, and display acronyms
> * Add sorting controls, possibly all or some of these: last modified date, 
> created date, relevancy, and title
> * Add grouping by doc_type perhaps
> * fix debug mode - currently does not update the parsed query debug output 
> (this is probably a bug in data driven /browse as well)






[jira] [Commented] (SOLR-8251) MatchAllDocsQuery is much slower in solr5.3.1 compare to solr4.7

2016-01-25 Thread Stephan Lagraulet (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15115284#comment-15115284
 ] 

Stephan Lagraulet commented on SOLR-8251:
-

I saw other deadlocks when the cluster crashed:
{code}
Frozen threads found (potential deadlock)
 
It seems that the following threads have not changed their stack for more than 
10 seconds.
These threads are possibly (but not necessarily!) in a deadlock or hung.
 
Thread-722 <--- Frozen for at least 33 sec
org.apache.solr.update.DefaultSolrCoreState.doRecovery(CoreContainer, 
CoreDescriptor) DefaultSolrCoreState.java:262
org.apache.solr.handler.admin.CoreAdminHandler$1.run() CoreAdminHandler.java:822



Thread-723 <--- Frozen for at least 33 sec
org.apache.solr.update.DefaultSolrCoreState.doRecovery(CoreContainer, 
CoreDescriptor) DefaultSolrCoreState.java:262
org.apache.solr.handler.admin.CoreAdminHandler$1.run() CoreAdminHandler.java:822

{code}

> MatchAllDocsQuery is much slower in solr5.3.1 compare to solr4.7
> 
>
> Key: SOLR-8251
> URL: https://issues.apache.org/jira/browse/SOLR-8251
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 5.3, 5.3.1
>Reporter: wei shen
> Fix For: 5.5
>
> Attachments: Solr 4 vs Solr 5.png
>
>
> I am trying to upgrade our production solr instance from 4.7 to 5.3.1. 
> Unfortunately, when I do load testing I find that MatchAllDocsQuery is much 
> slower in solr 5.3.1 compared to 4.7. (solr 5.3.1 is faster in load tests with 
> queries other than MatchAllDocsQuery). I asked solr-user and discussed with 
> Yonik Seeley. He confirmed that he can see the problem too when comparing solr 
> 5.3.1 and 4.10.
> here is the query I use:
> {code}
> q={!cache=false}*:*&fq=+categoryIdsPath:1001&fl=id&start=0&rows=2&debug=true
> {code}
> for me the query is consistently about 60-70% slower on solr5 than solr4.
> Yonik mentioned in his email "For me, 5.3.1
> is about 5x slower than 4.10 for this particular query."






[jira] [Commented] (SOLR-8251) MatchAllDocsQuery is much slower in solr5.3.1 compare to solr4.7

2016-01-25 Thread Stephan Lagraulet (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15115281#comment-15115281
 ] 

Stephan Lagraulet commented on SOLR-8251:
-

When Solr 5 is under heavy load, YourKit reports some deadlocks; I am not sure 
whether they are real deadlocks or an artifact of the monitoring overhead:
Frozen threads found (potential deadlock)
{code}
It seems that the following threads have not changed their stack for more than 
10 seconds.
These threads are possibly (but not necessarily!) in a deadlock or hung.
 
qtp1983747920-155 <--- Frozen for at least 2m 18 sec
org.apache.lucene.uninverting.FieldCacheImpl$Cache.get(LeafReader, 
FieldCacheImpl$CacheKey, boolean) FieldCacheImpl.java:187
org.apache.lucene.uninverting.FieldCacheImpl.getDocTermOrds(LeafReader, String, 
BytesRef) FieldCacheImpl.java:933
org.apache.lucene.uninverting.UninvertingReader.getSortedSetDocValues(String) 
UninvertingReader.java:275
org.apache.lucene.index.FilterLeafReader.getSortedSetDocValues(String) 
FilterLeafReader.java:454
org.apache.lucene.index.DocValues.getSortedSet(LeafReader, String) 
DocValues.java:302
org.apache.lucene.search.SortedSetSortField$1.getSortedDocValues(LeafReaderContext,
 String) SortedSetSortField.java:125
org.apache.lucene.search.FieldComparator$TermOrdValComparator.getLeafComparator(LeafReaderContext)
 FieldComparator.java:767
org.apache.lucene.search.FieldValueHitQueue.getComparators(LeafReaderContext) 
FieldValueHitQueue.java:183
org.apache.lucene.search.TopFieldCollector$SimpleFieldCollector.getLeafCollector(LeafReaderContext)
 TopFieldCollector.java:164
org.apache.lucene.search.MultiCollector.getLeafCollector(LeafReaderContext) 
MultiCollector.java:121
org.apache.lucene.search.IndexSearcher.search(List, Weight, Collector) 
IndexSearcher.java:812
org.apache.lucene.search.IndexSearcher.search(Query, Collector) 
IndexSearcher.java:535
org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher$QueryResult,
 Query, Collector, SolrIndexSearcher$QueryCommand, DelegatingCollector) 
SolrIndexSearcher.java:202
org.apache.solr.search.SolrIndexSearcher.getDocListAndSetNC(SolrIndexSearcher$QueryResult,
 SolrIndexSearcher$QueryCommand) SolrIndexSearcher.java:1768
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher$QueryResult,
 SolrIndexSearcher$QueryCommand) SolrIndexSearcher.java:1487
org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher$QueryResult, 
SolrIndexSearcher$QueryCommand) SolrIndexSearcher.java:557
org.apache.solr.handler.component.QueryComponent.process(ResponseBuilder) 
QueryComponent.java:525
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SolrQueryRequest,
 SolrQueryResponse) SearchHandler.java:273
org.apache.solr.handler.RequestHandlerBase.handleRequest(SolrQueryRequest, 
SolrQueryResponse) RequestHandlerBase.java:156
org.apache.solr.core.SolrCore.execute(SolrRequestHandler, SolrQueryRequest, 
SolrQueryResponse) SolrCore.java:2073
org.apache.solr.servlet.HttpSolrCall.execute(SolrQueryResponse) 
HttpSolrCall.java:658
org.apache.solr.servlet.HttpSolrCall.call() HttpSolrCall.java:457
org.apache.solr.servlet.SolrDispatchFilter.doFilter(ServletRequest, 
ServletResponse, FilterChain, boolean) SolrDispatchFilter.java:222
org.apache.solr.servlet.SolrDispatchFilter.doFilter(ServletRequest, 
ServletResponse, FilterChain) SolrDispatchFilter.java:181
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletRequest, 
ServletResponse) ServletHandler.java:1652
org.eclipse.jetty.servlet.ServletHandler.doHandle(String, Request, 
HttpServletRequest, HttpServletResponse) ServletHandler.java:585
org.eclipse.jetty.server.handler.ScopedHandler.handle(String, Request, 
HttpServletRequest, HttpServletResponse) ScopedHandler.java:143
org.eclipse.jetty.security.SecurityHandler.handle(String, Request, 
HttpServletRequest, HttpServletResponse) SecurityHandler.java:577
org.eclipse.jetty.server.session.SessionHandler.doHandle(String, Request, 
HttpServletRequest, HttpServletResponse) SessionHandler.java:223
org.eclipse.jetty.server.handler.ContextHandler.doHandle(String, Request, 
HttpServletRequest, HttpServletResponse) ContextHandler.java:1127
org.eclipse.jetty.servlet.ServletHandler.doScope(String, Request, 
HttpServletRequest, HttpServletResponse) ServletHandler.java:515
org.eclipse.jetty.server.session.SessionHandler.doScope(String, Request, 
HttpServletRequest, HttpServletResponse) SessionHandler.java:185
org.eclipse.jetty.server.handler.ContextHandler.doScope(String, Request, 
HttpServletRequest, HttpServletResponse) ContextHandler.java:1061
org.eclipse.jetty.server.handler.ScopedHandler.handle(String, Request, 
HttpServletRequest, HttpServletResponse) ScopedHandler.java:141
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(String, 
Request, HttpServletRequest, HttpServletResponse) 
ContextHandlerCollection.java:215
org.eclipse.jet

[jira] [Updated] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-01-25 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8593:
-
Fix Version/s: Trunk

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
> Fix For: Trunk
>
>
> The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work, though, will require an optimizer. This is where 
> Apache Calcite comes into play. It has a battle-tested, cost-based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.






[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-01-25 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15115253#comment-15115253
 ] 

Joel Bernstein commented on SOLR-8593:
--

Linking to the JDBC driver work. The JDBC driver and the Calcite integration 
are connected because both are going to require hooks into the SQL Catalog, so 
work on the JDBC driver will inform the upcoming work on the Calcite 
integration.

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>
> The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work, though, will require an optimizer. This is where 
> Apache Calcite comes into play. It has a battle-tested, cost-based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.






[jira] [Commented] (SOLR-8587) Add segments file information to core admin status

2016-01-25 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15115246#comment-15115246
 ] 

Shai Erera commented on SOLR-8587:
--

Oh :). But the plan is still that if the commit message starts w/ 
{{LUCENE/SOLR-1234}}, it will be linked to the issue, right?

> Add segments file information to core admin status
> --
>
> Key: SOLR-8587
> URL: https://issues.apache.org/jira/browse/SOLR-8587
> Project: Solr
>  Issue Type: Improvement
>Reporter: Shai Erera
>Assignee: Shai Erera
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8587.patch, SOLR-8587.patch
>
>
> Having the index's segments file name returned by CoreAdminHandler STATUS can 
> be useful. The info I'm thinking about is the segments file name and its 
> size. If you record that from time to time, then in a crisis, when you need 
> to restore the index and may not be sure which copy to restore, this 
> tiny piece of info can be very useful: the segments_N file records the 
> commit point, so comparing what your core reported with what you see at hand 
> can help you make a safer decision.
> I also think it's useful info in general, probably even more so than 
> 'version', and it doesn't add much complexity to the handler or the response.
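A sketch of how such info might be recorded once exposed. The response shape and the field names (`segmentsFile`, `segmentsFileSizeInBytes`) are assumptions based on the description above, not the committed API:

```python
def record_commit_point(status_response, core_name):
    """Pull the segments-file info for one core out of a (hypothetical)
    CoreAdminHandler STATUS response, e.g. for periodic logging so a
    restored index copy can later be matched to a known commit point."""
    index = status_response["status"][core_name]["index"]
    return {
        "core": core_name,
        # Field names below are assumed; the patch may expose different keys.
        "segmentsFile": index["segmentsFile"],
        "segmentsFileSizeInBytes": index["segmentsFileSizeInBytes"],
    }

# Hypothetical STATUS response fragment for illustration only:
sample = {
    "status": {
        "core1": {
            "index": {
                "numDocs": 1234,
                "segmentsFile": "segments_5",
                "segmentsFileSizeInBytes": 247,
            }
        }
    }
}
info = record_commit_point(sample, "core1")
```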





