[jira] [Assigned] (KUDU-2687) ITClient retries are broken

2019-03-20 Thread Adar Dembo (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adar Dembo reassigned KUDU-2687:


Assignee: Adar Dembo

> ITClient retries are broken
> ---
>
> Key: KUDU-2687
> URL: https://issues.apache.org/jira/browse/KUDU-2687
> Project: Kudu
>  Issue Type: Bug
>  Components: java, test
>Affects Versions: 1.9.0
>Reporter: Adar Dembo
>Assignee: Adar Dembo
>Priority: Major
> Attachments: TEST-org.apache.kudu.client.ITClient.xml
>
>
> I thought I fixed this with [this 
> commit|https://github.com/apache/kudu/commit/0b80b0abae99c56db20e96249981a48886c56d33]
>  but apparently there's still some other issue at play.
> From the most recent failure:
> {noformat}
> 23:58:18.980 [ERROR - Thread-18] (ITClient.java:153) Got error while 
> inserting row 0
> org.apache.kudu.client.NonRecoverableException: 
>   at 
> org.apache.kudu.client.KuduException.transformException(KuduException.java:132)
>   at org.apache.kudu.client.KuduSession.apply(KuduSession.java:93)
>   at org.apache.kudu.client.ITClient$WriterThread.run(ITClient.java:264)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
>   at org.apache.kudu.client.Connection.enqueueMessage(Connection.java:546)
>   at org.apache.kudu.client.RpcProxy.sendRpc(RpcProxy.java:126)
>   at 
> org.apache.kudu.client.AsyncKuduClient.sendRpcToTablet(AsyncKuduClient.java:1229)
>   at 
> org.apache.kudu.client.AsyncKuduClient$RetryRpcCB.call(AsyncKuduClient.java:1278)
>   at 
> org.apache.kudu.client.AsyncKuduClient$RetryRpcCB.call(AsyncKuduClient.java:1268)
>   at com.stumbleupon.async.Deferred.doCall(Deferred.java:1280)
>   at com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1259)
>   at com.stumbleupon.async.Deferred.callback(Deferred.java:1002)
>   at org.apache.kudu.client.KuduRpc.handleCallback(KuduRpc.java:247)
>   at org.apache.kudu.client.KuduRpc.callback(KuduRpc.java:294)
>   at org.apache.kudu.client.RpcProxy.responseReceived(RpcProxy.java:269)
>   at org.apache.kudu.client.RpcProxy.access$000(RpcProxy.java:59)
>   at org.apache.kudu.client.RpcProxy$1.call(RpcProxy.java:131)
>   at org.apache.kudu.client.RpcProxy$1.call(RpcProxy.java:127)
>   at 
> org.apache.kudu.client.Connection.messageReceived(Connection.java:391)
>   at 
> org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
>   at org.apache.kudu.client.Connection.handleUpstream(Connection.java:243)
>   at 
> org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
>   at 
> org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
>   at 
> org.jboss.netty.handler.timeout.ReadTimeoutHandler.messageReceived(ReadTimeoutHandler.java:184)
>   at 
> org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
>   at 
> org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
>   at 
> org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
>   at 
> org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
>   at 
> org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:70)
>   at 
> org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
>   at 
> org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
>   at 
> org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
>   at 
> org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
>   at 
> org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
>   at 
> org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
>   at 
> org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
>   at 
> org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
>   at 
> org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
>   at 
> org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
>   at 
> org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
>   at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
>   at 
> org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
> {noformat}

[jira] [Updated] (KUDU-2687) ITClient retries are broken

2019-03-20 Thread Adar Dembo (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adar Dembo updated KUDU-2687:
-
Code Review: https://gerrit.cloudera.org/c/12820/

> ITClient retries are broken
> ---
>
> Key: KUDU-2687
> URL: https://issues.apache.org/jira/browse/KUDU-2687
> Project: Kudu
>  Issue Type: Bug
>  Components: java, test
>Affects Versions: 1.9.0
>Reporter: Adar Dembo
>Assignee: Adar Dembo
>Priority: Major
> Attachments: TEST-org.apache.kudu.client.ITClient.xml
>
>
> I thought I fixed this with [this 
> commit|https://github.com/apache/kudu/commit/0b80b0abae99c56db20e96249981a48886c56d33]
>  but apparently there's still some other issue at play.
> From the most recent failure:
> {noformat}
> 23:58:18.980 [ERROR - Thread-18] (ITClient.java:153) Got error while 
> inserting row 0
> org.apache.kudu.client.NonRecoverableException: 
>   at 
> org.apache.kudu.client.KuduException.transformException(KuduException.java:132)
>   at org.apache.kudu.client.KuduSession.apply(KuduSession.java:93)
>   at org.apache.kudu.client.ITClient$WriterThread.run(ITClient.java:264)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
>   at org.apache.kudu.client.Connection.enqueueMessage(Connection.java:546)
>   at org.apache.kudu.client.RpcProxy.sendRpc(RpcProxy.java:126)
>   at 
> org.apache.kudu.client.AsyncKuduClient.sendRpcToTablet(AsyncKuduClient.java:1229)
>   at 
> org.apache.kudu.client.AsyncKuduClient$RetryRpcCB.call(AsyncKuduClient.java:1278)
>   at 
> org.apache.kudu.client.AsyncKuduClient$RetryRpcCB.call(AsyncKuduClient.java:1268)
>   at com.stumbleupon.async.Deferred.doCall(Deferred.java:1280)
>   at com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1259)
>   at com.stumbleupon.async.Deferred.callback(Deferred.java:1002)
>   at org.apache.kudu.client.KuduRpc.handleCallback(KuduRpc.java:247)
>   at org.apache.kudu.client.KuduRpc.callback(KuduRpc.java:294)
>   at org.apache.kudu.client.RpcProxy.responseReceived(RpcProxy.java:269)
>   at org.apache.kudu.client.RpcProxy.access$000(RpcProxy.java:59)
>   at org.apache.kudu.client.RpcProxy$1.call(RpcProxy.java:131)
>   at org.apache.kudu.client.RpcProxy$1.call(RpcProxy.java:127)
>   at 
> org.apache.kudu.client.Connection.messageReceived(Connection.java:391)
>   at 
> org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
>   at org.apache.kudu.client.Connection.handleUpstream(Connection.java:243)
>   at 
> org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
>   at 
> org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
>   at 
> org.jboss.netty.handler.timeout.ReadTimeoutHandler.messageReceived(ReadTimeoutHandler.java:184)
>   at 
> org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
>   at 
> org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
>   at 
> org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
>   at 
> org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
>   at 
> org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:70)
>   at 
> org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
>   at 
> org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
>   at 
> org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
>   at 
> org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
>   at 
> org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
>   at 
> org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
>   at 
> org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
>   at 
> org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
>   at 
> org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
>   at 
> org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
>   at 
> org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
>   at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
>   at 
> org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
> {noformat}

[jira] [Resolved] (KUDU-2734) RemoteKsckTest.TestClusterWithLocation is flaky

2019-03-20 Thread Adar Dembo (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adar Dembo resolved KUDU-2734.
--
   Resolution: Fixed
Fix Version/s: 1.10.0

We're going to speculatively say this is fixed since Will fixed KUDU-2748.

> RemoteKsckTest.TestClusterWithLocation is flaky
> ---
>
> Key: KUDU-2734
> URL: https://issues.apache.org/jira/browse/KUDU-2734
> Project: Kudu
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 1.9.0
>Reporter: Mike Percy
>Assignee: Will Berkeley
>Priority: Major
> Fix For: 1.10.0
>
>
> RemoteKsckTest.TestClusterWithLocation is flaky
> Alexey took a look at it and here is the analysis:
> In essence, due to the slowness of TSAN builds, connection negotiation from the 
> kudu CLI to one of the master servers timed out, so one of the test's 
> preconditions wasn't met.  The error output by the test was:
> {code:java}
> /data/somelongdirectorytoavoidrpathissues/src/kudu/src/kudu/tools/ksck_remote-test.cc:523:
>  Failure
> Failed                                                                        
>   
> Bad status: Network error: failed to gather info from all masters: 1 of 3 had 
> errors
> {code}
> The corresponding error in the master's log was:
> {code:java}
> W0221 12:38:27.119146 31380 negotiation.cc:313] Failed RPC negotiation. 
> Trace:  
> 0221 12:38:23.949428 (+     0us) reactor.cc:583] Submitting negotiation task 
> for client connection to 127.25.42.190:51799
> 0221 12:38:25.362220 (+1412792us) negotiation.cc:98] Waiting for socket to 
> connect
> 0221 12:38:25.363489 (+  1269us) client_negotiation.cc:167] Beginning 
> negotiation
> 0221 12:38:25.369976 (+  6487us) client_negotiation.cc:244] Sending NEGOTIATE 
> NegotiatePB request
> 0221 12:38:25.431582 (+ 61606us) client_negotiation.cc:261] Received 
> NEGOTIATE NegotiatePB response
> 0221 12:38:25.431610 (+    28us) client_negotiation.cc:355] Received 
> NEGOTIATE response from server
> 0221 12:38:25.432659 (+  1049us) client_negotiation.cc:182] Negotiated 
> authn=SASL
> 0221 12:38:27.051125 (+1618466us) client_negotiation.cc:483] Received 
> TLS_HANDSHAKE response from server
> 0221 12:38:27.062085 (+ 10960us) client_negotiation.cc:471] Sending 
> TLS_HANDSHAKE message to server
> 0221 12:38:27.062132 (+    47us) client_negotiation.cc:244] Sending 
> TLS_HANDSHAKE NegotiatePB request
> 0221 12:38:27.064391 (+  2259us) negotiation.cc:304] Negotiation complete: 
> Timed out: Client connection negotiation failed: client connection to 
> 127.25.42.190:51799: BlockingWrite timed out
> {code}
> We are seeing this on the flaky test dashboard for both TSAN and ASAN builds.
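Summing the (+ ...us) deltas in the trace shows why negotiation timed out: the whole exchange took just over 3 seconds, which would exceed the negotiation timeout if it is the usual 3 s default (an assumption here; the trace does not state the configured value). A quick sketch of the arithmetic, with the deltas copied from the trace:

```python
# Per-step deltas (microseconds) copied from the negotiation trace above.
deltas_us = [
    1412792,  # waiting for socket to connect
    1269,     # beginning negotiation
    6487,     # sending NEGOTIATE request
    61606,    # NEGOTIATE response received
    28,       # NEGOTIATE response handed to client
    1049,     # authn=SASL negotiated
    1618466,  # TLS_HANDSHAKE response received (the big stall)
    10960,    # sending TLS_HANDSHAKE message
    47,       # sending TLS_HANDSHAKE request
    2259,     # negotiation aborted: BlockingWrite timed out
]

total_s = sum(deltas_us) / 1e6
print(f"total negotiation time: {total_s:.3f} s")  # ~3.115 s
```

Two stalls (the 1.41 s connect wait and the 1.62 s TLS handshake wait) account for nearly all of it, consistent with the TSAN-slowness explanation.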



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KUDU-2748) Leader master erroneously tries to tablet copy to a follower master due to race at startup

2019-03-20 Thread Adar Dembo (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adar Dembo resolved KUDU-2748.
--
   Resolution: Fixed
Fix Version/s: 1.10.0

Will fixed this in commit 28c706722.

> Leader master erroneously tries to tablet copy to a follower master due to 
> race at startup
> --
>
> Key: KUDU-2748
> URL: https://issues.apache.org/jira/browse/KUDU-2748
> Project: Kudu
>  Issue Type: Bug
>Affects Versions: 1.9.0
>Reporter: Will Berkeley
>Assignee: Will Berkeley
>Priority: Major
> Fix For: 1.10.0
>
>
> I was investigating KUDU-2734 and ran into a weird situation. The test runs 
> with 3 masters and changes the value of a flag on the masters. To effect the 
> change, it restarts the masters. Suppose the masters are labelled A, B, and 
> C. Somewhat rarely (e.g. 8% of the time when run in TSAN with 8 stress 
> threads), the following happens:
> 1. A and B are restarted successfully. They form a quorum and elect a leader 
> (say A).
> 2. C is in the process of restarting. The ConsensusService is registered and 
> C is accepting RPCs.
> 3. A sends C an UpdateConsensus RPC. However, C is still in the process of 
> starting and has not yet initialized the systable, so when C receives the 
> UpdateConsensus call it responds with TABLET_NOT_FOUND, even though the proper 
> response would be SERVICE_UNAVAILABLE.
> 4. A interprets TABLET_NOT_FOUND to mean that C needs to be copied to, and it 
> tries forever to tablet copy to C. The copies never start because tablet copy 
> is not implemented for masters.
> 5. C finishes its startup but does not receive UpdateConsensus from A, because 
> A is sending StartTabletCopy requests instead. C endlessly calls pre-elections.
> This effectively means the cluster is running with two masters until there is 
> a leadership change. This caused the flakiness of 
> RemoteKsckTest.TestClusterWithLocation, because C never recognizes the 
> leadership of A, so the ksck master consensus checks fail.
> A regular tablet on a tablet server is not vulnerable to this. It's specific 
> to how the master starts up.
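The failure mode in steps 3 and 4 can be sketched in a few lines. This is a simulation of the decision logic described above, not Kudu's actual code; the function and return names are made up for illustration:

```python
# Hypothetical sketch of a leader's reaction to a follower's consensus error,
# simulating the race described above (not actual Kudu code).

def leader_reaction(peer_error):
    """How the leader interprets an UpdateConsensus error from a follower."""
    if peer_error == "TABLET_NOT_FOUND":
        # Leader assumes the replica is missing and must be copied --
        # wrong for a master that is merely still starting up.
        return "start_tablet_copy"
    # SERVICE_UNAVAILABLE (the correct answer while the systable is not yet
    # initialized) just makes the leader retry UpdateConsensus later.
    return "retry_update_consensus"

# The buggy behavior: C answers TABLET_NOT_FOUND while starting up, so A
# tries forever to tablet copy (which is unimplemented for masters).
print(leader_reaction("TABLET_NOT_FOUND"))
# The fix: answer SERVICE_UNAVAILABLE until startup completes.
print(leader_reaction("SERVICE_UNAVAILABLE"))
```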





[jira] [Commented] (KUDU-2390) ITClient fails with "Row count unexpectedly decreased"

2019-03-20 Thread Adar Dembo (JIRA)


[ 
https://issues.apache.org/jira/browse/KUDU-2390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797715#comment-16797715
 ] 

Adar Dembo commented on KUDU-2390:
--

This sometimes manifests as a failure in the randomGet() function in ITClient:
{noformat}
15:58:33.042 [ERROR - scanner-test-thread] (ITClient.java:128) Random get got 0 
or many rows 0 for key 97305
{noformat}

Might be a different issue, but seems quite similar.

> ITClient fails with "Row count unexpectedly decreased"
> --
>
> Key: KUDU-2390
> URL: https://issues.apache.org/jira/browse/KUDU-2390
> Project: Kudu
>  Issue Type: Bug
>  Components: java, test
>Affects Versions: 1.7.0, 1.8.0
>Reporter: Todd Lipcon
>Priority: Critical
> Attachments: Stdout.txt.gz, TEST-org.apache.kudu.client.ITClient.xml, 
> TEST-org.apache.kudu.client.ITClient.xml.gz, 
> TEST-org.apache.kudu.client.ITClient.xml.xz
>
>
> On master, hit the following failure of ITClient:
> {code}
> 20:05:05.407 [DEBUG - New I/O worker #17] (AsyncKuduScanner.java:934) 
> AsyncKuduScanner$Response(scannerId = "6ddf5d0da48241aea4b9eb51645716cc", 
> data = RowResultIterator for 27600 rows, more = true, responseScanTimestamp = 
> 6234957022375723008) for scanner
> 20:05:05.407 [DEBUG - New I/O worker #17] (AsyncKuduScanner.java:447) Scanner 
> "6ddf5d0da48241aea4b9eb51645716cc" opened on 
> d78cb5506f6e4e17bd54fdaf1819a8a2@[729d64003e7740cabb650f8f6aea4af6(127.1.76.194:60468),7a2e5f9b2be9497fadc30b81a6a50b24(127.1.76.19
> 20:05:05.409 [DEBUG - New I/O worker #17] (AsyncKuduScanner.java:934) 
> AsyncKuduScanner$Response(scannerId = "", data = RowResultIterator for 7314 
> rows, more = false) for scanner 
> KuduScanner(table=org.apache.kudu.client.ITClient-1522206255318, tablet=d78c
> 20:05:05.409 [INFO - Thread-4] (ITClient.java:397) New row count 90114
> 20:05:05.414 [DEBUG - New I/O worker #17] (AsyncKuduScanner.java:934) 
> AsyncKuduScanner$Response(scannerId = "c230614ad13e40478254b785995d1d7c", 
> data = RowResultIterator for 27600 rows, more = true, responseScanTimestamp = 
> 6234957022413987840) for scanner
> 20:05:05.414 [DEBUG - New I/O worker #17] (AsyncKuduScanner.java:447) Scanner 
> "c230614ad13e40478254b785995d1d7c" opened on 
> d78cb5506f6e4e17bd54fdaf1819a8a2@[729d64003e7740cabb650f8f6aea4af6(127.1.76.194:60468),7a2e5f9b2be9497fadc30b81a6a50b24(127.1.76.19
> 20:05:05.419 [DEBUG - New I/O worker #17] (AsyncKuduScanner.java:934) 
> AsyncKuduScanner$Response(scannerId = "", data = RowResultIterator for 27600 
> rows, more = true) for scanner 
> KuduScanner(table=org.apache.kudu.client.ITClient-1522206255318, tablet=d78c
> 20:05:05.420 [DEBUG - New I/O worker #17] (AsyncKuduScanner.java:934) 
> AsyncKuduScanner$Response(scannerId = "", data = RowResultIterator for 7342 
> rows, more = false) for scanner 
> KuduScanner(table=org.apache.kudu.client.ITClient-1522206255318, tablet=d78c
> 20:05:05.421 [ERROR - Thread-4] (ITClient.java:134) Row count unexpectedly 
> decreased from 90114to 62542
> {code}





[jira] [Commented] (KUDU-2423) rpc-test flaky in TestRpc.TestServerShutsDown (ENOTCONN errno)

2019-03-20 Thread Adar Dembo (JIRA)


[ 
https://issues.apache.org/jira/browse/KUDU-2423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797689#comment-16797689
 ] 

Adar Dembo commented on KUDU-2423:
--

Saw this again:

{noformat}
[ RUN  ] TestRpc.TestServerShutsDown
I0320 07:06:18.357066   223 rpc-test-base.h:597] Bound to: 0.0.0.0:42187
I0320 07:06:18.357116   223 rpc-test.cc:1003] Connecting to 0.0.0.0:42187
W0320 07:06:18.358333   232 negotiation.cc:313] Failed RPC negotiation. Trace:
0320 07:06:18.357731 (+ 0us) reactor.cc:583] Submitting negotiation task 
for client connection to 0.0.0.0:42187
0320 07:06:18.357988 (+   257us) negotiation.cc:98] Waiting for socket to 
connect
0320 07:06:18.357997 (+ 9us) client_negotiation.cc:167] Beginning 
negotiation
0320 07:06:18.358053 (+56us) client_negotiation.cc:244] Sending NEGOTIATE 
NegotiatePB request
0320 07:06:18.358266 (+   213us) negotiation.cc:304] Negotiation complete: 
Network error: Client connection negotiation failed: client connection to 
0.0.0.0:42187: BlockingRecv error: recv error from 0.0.0.0:0: Transport 
endpoint is not connected (error 107)
Metrics: 
{"client-negotiator.queue_time_us":202,"thread_start_us":42,"threads_started":1}
/data/somelongdirectorytoavoidrpathissues/src/kudu/src/kudu/rpc/rpc-test.cc:1074:
 Failure
Value of: s.posix_code() == EPIPE || s.posix_code() == ECONNRESET || 
s.posix_code() == ESHUTDOWN || s.posix_code() == ECONNREFUSED || s.posix_code() 
== EINVAL
  Actual: false
Expected: true
Unexpected status: Network error: Client connection negotiation failed: client 
connection to 0.0.0.0:42187: BlockingRecv error: recv error from 0.0.0.0:0: 
Transport endpoint is not connected (error 107)
I0320 07:06:18.358696   223 test_util.cc:135] 
---
I0320 07:06:18.358717   223 test_util.cc:136] Had fatal failures, leaving test 
files at 
/tmp/dist-test-taskJxf2XY/test-tmp/rpc-test.0.TestRpc.TestServerShutsDown.1553065575283406-223
[  FAILED  ] TestRpc.TestServerShutsDown (2 ms)
{noformat} 

> rpc-test flaky in TestRpc.TestServerShutsDown (ENOTCONN errno)
> --
>
> Key: KUDU-2423
> URL: https://issues.apache.org/jira/browse/KUDU-2423
> Project: Kudu
>  Issue Type: Bug
>  Components: rpc, test
>Affects Versions: 1.8.0
>Reporter: Todd Lipcon
>Priority: Major
>
> Seeing this flaky on the dashboard. errno seems to be getting set to ENOTCONN 
> instead of one of the expected options:
> {code}
> 0430 03:18:29.804763 (+   637us) negotiation.cc:304] Negotiation complete: 
> Network error: Client connection negotiation failed: client connection to 
> 0.0.0.0:35773: BlockingRecv error: recv error from 0.0.0.0:0: Transport 
> endpoint is not connected (error 107)
> Metrics: 
> {"client-negotiator.queue_time_us":382,"thread_start_us":187,"threads_started":1}
> /data/somelongdirectorytoavoidrpathissues/src/kudu/src/kudu/rpc/rpc-test.cc:989:
>  Failure
> Value of: s.posix_code() == EPIPE || s.posix_code() == ECONNRESET || 
> s.posix_code() == ESHUTDOWN || s.posix_code() == ECONNREFUSED || 
> s.posix_code() == EINVAL
> {code}
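The assertion checks errno against a fixed whitelist, and ENOTCONN (107 on Linux) is simply not in it. A minimal Python sketch of the same check using the stdlib errno module (constant values are the common Linux ones; presumably the fix would be to add ENOTCONN to the accepted set, though that is a guess, not something the ticket states):

```python
import errno

# Errnos the test currently accepts for a connection torn down mid-negotiation.
accepted = {errno.EPIPE, errno.ECONNRESET, errno.ESHUTDOWN,
            errno.ECONNREFUSED, errno.EINVAL}

observed = errno.ENOTCONN  # what the flaky runs actually see (107 on Linux)

print(observed in accepted)                       # False -> the test fails
print(observed in (accepted | {errno.ENOTCONN}))  # True once whitelisted
```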





[jira] [Commented] (KUDU-1711) Add support for storing column comments in ColumnSchema

2019-03-20 Thread Grant Henke (JIRA)


[ 
https://issues.apache.org/jira/browse/KUDU-1711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797437#comment-16797437
 ] 

Grant Henke commented on KUDU-1711:
---

I don't think anyone is. I've assigned the JIRA to you.

> Add support for storing column comments in ColumnSchema
> ---
>
> Key: KUDU-1711
> URL: https://issues.apache.org/jira/browse/KUDU-1711
> Project: Kudu
>  Issue Type: Improvement
>  Components: impala
>Affects Versions: 1.0.1
>Reporter: Dimitris Tsirogiannis
>Assignee: HeLifu
>Priority: Minor
>
> Currently, there is no way to persist column comments for Kudu tables unless 
> we store them in HMS. We should be able to store column comments in Kudu 
> through the ColumnSchema class. 
> Example of using column comments in a CREATE TABLE statement:
> {code}
> impala>create table foo (a int primary key comment 'this is column a') 
> distribute by hash (a) into 4 buckets stored as kudu;
> {code}





[jira] [Assigned] (KUDU-1711) Add support for storing column comments in ColumnSchema

2019-03-20 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-1711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke reassigned KUDU-1711:
-

Assignee: HeLifu

> Add support for storing column comments in ColumnSchema
> ---
>
> Key: KUDU-1711
> URL: https://issues.apache.org/jira/browse/KUDU-1711
> Project: Kudu
>  Issue Type: Improvement
>  Components: impala
>Affects Versions: 1.0.1
>Reporter: Dimitris Tsirogiannis
>Assignee: HeLifu
>Priority: Minor
>
> Currently, there is no way to persist column comments for Kudu tables unless 
> we store them in HMS. We should be able to store column comments in Kudu 
> through the ColumnSchema class. 
> Example of using column comments in a CREATE TABLE statement:
> {code}
> impala>create table foo (a int primary key comment 'this is column a') 
> distribute by hash (a) into 4 buckets stored as kudu;
> {code}





[jira] [Commented] (KUDU-2750) Add create timestamp to every table

2019-03-20 Thread ZhangYao (JIRA)


[ 
https://issues.apache.org/jira/browse/KUDU-2750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797069#comment-16797069
 ] 

ZhangYao commented on KUDU-2750:


When we add the creation time, we should also support SQL predicates on it. 
Then we can easily list recently created tables.

> Add create timestamp to every table
> ---
>
> Key: KUDU-2750
> URL: https://issues.apache.org/jira/browse/KUDU-2750
> Project: Kudu
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 1.9.1
>Reporter: HeLifu
>Priority: Major
>
> There seems to be no place to look at the creation time of a table, thus it 
> is difficult to get the latest created tables, especially in a cluster which 
> has accumulated lots of tables.





[jira] [Commented] (KUDU-2750) Add create timestamp to every table

2019-03-20 Thread Xu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/KUDU-2750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797065#comment-16797065
 ] 

Xu Yao commented on KUDU-2750:
--

I think the same applies to the table's last alteration time.

> Add create timestamp to every table
> ---
>
> Key: KUDU-2750
> URL: https://issues.apache.org/jira/browse/KUDU-2750
> Project: Kudu
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 1.9.1
>Reporter: HeLifu
>Priority: Major
>
> There seems to be no place to look at the creation time of a table, thus it 
> is difficult to get the latest created tables, especially in a cluster which 
> has accumulated lots of tables.





[jira] [Created] (KUDU-2750) Add create timestamp to every table

2019-03-20 Thread HeLifu (JIRA)
HeLifu created KUDU-2750:


 Summary: Add create timestamp to every table
 Key: KUDU-2750
 URL: https://issues.apache.org/jira/browse/KUDU-2750
 Project: Kudu
  Issue Type: Improvement
  Components: master
Affects Versions: 1.9.1
Reporter: HeLifu


There seems to be no place to look at the creation time of a table, thus it is 
difficult to get the latest created tables, especially in a cluster which has 
accumulated lots of tables.





[jira] [Commented] (KUDU-1711) Add support for storing column comments in ColumnSchema

2019-03-20 Thread HeLifu (JIRA)


[ 
https://issues.apache.org/jira/browse/KUDU-1711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796942#comment-16796942
 ] 

HeLifu commented on KUDU-1711:
--

Is anyone already developing this feature? If not, I'd like to try.

> Add support for storing column comments in ColumnSchema
> ---
>
> Key: KUDU-1711
> URL: https://issues.apache.org/jira/browse/KUDU-1711
> Project: Kudu
>  Issue Type: Improvement
>  Components: impala
>Affects Versions: 1.0.1
>Reporter: Dimitris Tsirogiannis
>Priority: Minor
>
> Currently, there is no way to persist column comments for Kudu tables unless 
> we store them in HMS. We should be able to store column comments in Kudu 
> through the ColumnSchema class. 
> Example of using column comments in a CREATE TABLE statement:
> {code}
> impala>create table foo (a int primary key comment 'this is column a') 
> distribute by hash (a) into 4 buckets stored as kudu;
> {code}


