[jira] [Commented] (KUDU-1563) Add support for INSERT IGNORE

2019-04-18 Thread Grant Henke (JIRA)


[ 
https://issues.apache.org/jira/browse/KUDU-1563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821303#comment-16821303
 ] 

Grant Henke commented on KUDU-1563:
---

This would be a useful optimization for full restores (via Spark). Right now we 
use UPSERT in case a Spark task needs to be retried, but when a Spark task 
fails, that means we UPSERT again all the rows that previously succeeded. 
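
For reference, here is a minimal Java sketch of today's client-side workaround 
that this issue would promote to a first-class server-side op. The master 
address, table name, and column are made up for illustration:

{code:java}
import org.apache.kudu.client.*;

// Minimal sketch, assuming a hypothetical table "my_table" with an INT32
// "key" column and a master at "master1:7051". With duplicate-row filtering
// enabled, a retried batch that re-inserts already-written rows won't fail.
public class InsertIgnoreWorkaround {
  public static void main(String[] args) throws KuduException {
    KuduClient client = new KuduClient.KuduClientBuilder("master1:7051").build();
    try {
      KuduTable table = client.openTable("my_table");
      KuduSession session = client.newSession();
      // Duplicate row key errors are filtered on the client side; KUDU-1563
      // proposes handling this on the server instead.
      session.setIgnoreAllDuplicateRows(true);
      Insert insert = table.newInsert();
      insert.getRow().addInt("key", 1);
      session.apply(insert);
      session.flush();
    } finally {
      client.close();
    }
  }
}
{code}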

> Add support for INSERT IGNORE
> -
>
> Key: KUDU-1563
> URL: https://issues.apache.org/jira/browse/KUDU-1563
> Project: Kudu
>  Issue Type: New Feature
>Reporter: Dan Burkert
>Assignee: Brock Noland
>Priority: Major
>  Labels: backup, newbie
>
> The Java client currently has an [option to ignore duplicate row key errors| 
> https://kudu.apache.org/apidocs/org/kududb/client/AsyncKuduSession.html#setIgnoreAllDuplicateRows-boolean-],
>  which is implemented by filtering the errors on the client side.  If we are 
> going to continue to support this feature (and the consensus seems to be that 
> we probably should), we should promote it to a first class operation type 
> that is handled on the server side. This would bring a modest performance 
> improvement since fewer errors are returned, and it would allow INSERT IGNORE 
> ops to be mixed in the same batch as other INSERT, DELETE, UPSERT, etc. ops.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KUDU-1563) Add support for INSERT IGNORE

2019-04-18 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-1563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KUDU-1563:
--
Labels: backup newbie  (was: newbie)

> Add support for INSERT IGNORE
> -
>
> Key: KUDU-1563
> URL: https://issues.apache.org/jira/browse/KUDU-1563
> Project: Kudu
>  Issue Type: New Feature
>Reporter: Dan Burkert
>Assignee: Brock Noland
>Priority: Major
>  Labels: backup, newbie
>
> The Java client currently has an [option to ignore duplicate row key errors| 
> https://kudu.apache.org/apidocs/org/kududb/client/AsyncKuduSession.html#setIgnoreAllDuplicateRows-boolean-],
>  which is implemented by filtering the errors on the client side.  If we are 
> going to continue to support this feature (and the consensus seems to be that 
> we probably should), we should promote it to a first class operation type 
> that is handled on the server side. This would bring a modest performance 
> improvement since fewer errors are returned, and it would allow INSERT IGNORE 
> ops to be mixed in the same batch as other INSERT, DELETE, UPSERT, etc. ops.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KUDU-2777) Upgrade LZ4 to 1.9.0+

2019-04-17 Thread Grant Henke (JIRA)
Grant Henke created KUDU-2777:
-

 Summary: Upgrade LZ4 to 1.9.0+
 Key: KUDU-2777
 URL: https://issues.apache.org/jira/browse/KUDU-2777
 Project: Kudu
  Issue Type: Improvement
Affects Versions: 1.9.0
Reporter: Grant Henke


The most recent [release|https://github.com/lz4/lz4/releases/tag/v1.9.0] of LZ4 
(1.9.0) has significant decompression performance improvements. It also 
contains Todd's patch for clang.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KUDU-2356) Idle WALs can consume significant memory

2019-04-10 Thread Grant Henke (JIRA)


[ 
https://issues.apache.org/jira/browse/KUDU-2356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16814619#comment-16814619
 ] 

Grant Henke commented on KUDU-2356:
---

Here is the original change that got reverted: 
https://gerrit.cloudera.org/#/c/9747/

> Idle WALs can consume significant memory
> 
>
> Key: KUDU-2356
> URL: https://issues.apache.org/jira/browse/KUDU-2356
> Project: Kudu
>  Issue Type: Improvement
>  Components: log, tserver
>Affects Versions: 1.7.0
>Reporter: Todd Lipcon
>Priority: Major
> Attachments: heap.svg
>
>
> I grabbed a heap sample of a tserver which has been running a write workload 
> for a little while and found that 750MB of memory is used by faststring 
> allocations inside WritableLogSegment::WriteEntryBatch. It seems like this is 
> the 'compress_buf_' member. This buffer always resizes up during a log write 
> but never shrinks back down, even when the WAL is idle. We should consider 
> clearing the buffer after each append, or perhaps after a short timeout like 
> 100ms after a WAL becomes idle.
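
The proposal in the description is easy to sketch. Below is an 
illustrative-only Java analogue of the shrink-on-idle idea; the real code is 
the C++ faststring 'compress_buf_', so the class, names, and use of the 100ms 
threshold here are assumptions for illustration:

{code:java}
import java.io.ByteArrayOutputStream;

// Illustrative-only analogue of the proposed fix: a reusable compression
// buffer whose backing storage is released once the WAL has been idle for a
// short period, instead of holding peak-size memory forever.
public class IdleShrinkingBuffer {
  private static final long IDLE_SHRINK_MS = 100; // threshold from the proposal
  private ByteArrayOutputStream buf = new ByteArrayOutputStream();
  private long lastAppendMs = System.currentTimeMillis();

  public synchronized void append(byte[] entry) {
    buf.write(entry, 0, entry.length); // grows as needed, like compress_buf_
    lastAppendMs = System.currentTimeMillis();
  }

  // Called periodically, e.g. from a maintenance timer: drop the backing
  // array if the log has been idle longer than the threshold.
  public synchronized void maybeShrink() {
    if (System.currentTimeMillis() - lastAppendMs > IDLE_SHRINK_MS) {
      buf = new ByteArrayOutputStream();
    }
  }
}
{code}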



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KUDU-1948) Client-side configuration of cluster details

2019-04-07 Thread Grant Henke (JIRA)


[ 
https://issues.apache.org/jira/browse/KUDU-1948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16811697#comment-16811697
 ] 

Grant Henke commented on KUDU-1948:
---

I am on board with everything proposed. I think I am okay with a default client 
config path too, assuming it can be overridden. 
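
To make the description below concrete, such a file might look something like 
this; the format and every key in it are hypothetical, purely to illustrate 
the options being discussed:

{noformat}
# Hypothetical client config file -- no such format exists yet.
master_addresses = master1:7051,master2:7051,master3:7051
require_ssl = true
require_authentication = true
kerberos_principal = kudu-service/_HOST@EXAMPLE.COM
{noformat}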

> Client-side configuration of cluster details
> 
>
> Key: KUDU-1948
> URL: https://issues.apache.org/jira/browse/KUDU-1948
> Project: Kudu
>  Issue Type: New Feature
>  Components: client, security
>Affects Versions: 1.3.0
>Reporter: Todd Lipcon
>Assignee: Grant Henke
>Priority: Major
>
> In the beginning, Kudu clients were configured with only the address of the 
> single Kudu master. This was nice and simple, and there was no need for a 
> client "configuration file".
> Then, we added multi-masters, and the client API had to take a list of master 
> addresses. This wasn't awful, but started to be a bit aggravating when trying 
> to use tools on a multi-master cluster (who wants to type out three long 
> hostnames in a 'ksck' command line every time?).
> Now with security, we have a couple more bits of configuration for the 
> client. Namely:
> - "require SSL" and "require authentication" booleans -- necessary to prevent 
> MITM downgrade attacks
> - custom Kerberos principal -- if the server wants to use a principal other 
> than 'kudu/<host>@REALM' then the client needs to know to expect it and fetch 
> the appropriate service ticket. (Note this isn't yet supported but would like 
> to be!)
> In the future, there are other items that might be best specified as part of 
> a client configuration as well (e.g. CA cert for BYO PKI, wire compression 
> options, etc).
> For the above use cases it would be nicer to allow the various options to be 
> specified in a configuration file rather than adding specific APIs for all 
> options.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KUDU-1711) Add support for storing column comments in ColumnSchema

2019-04-05 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-1711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke resolved KUDU-1711.
---
   Resolution: Fixed
Fix Version/s: 1.10.0

This was resolved with the following commits: 
* https://github.com/apache/kudu/commit/62d1dec5d5a5c31647d3a92ff203e103198a92d1
* https://github.com/apache/kudu/commit/e2870925f9a07da43658447248a4e64acde398b4

> Add support for storing column comments in ColumnSchema
> ---
>
> Key: KUDU-1711
> URL: https://issues.apache.org/jira/browse/KUDU-1711
> Project: Kudu
>  Issue Type: Improvement
>  Components: impala
>Affects Versions: 1.0.1
>Reporter: Dimitris Tsirogiannis
>Assignee: HeLifu
>Priority: Minor
> Fix For: 1.10.0
>
>
> Currently, there is no way to persist column comments for Kudu tables unless 
> we store them in HMS. We should be able to store column comments in Kudu 
> through the ColumnSchema class. 
> Example of using column comments in a CREATE TABLE statement:
> {code}
> impala>create table foo (a int primary key comment 'this is column a') 
> distribute by hash (a) into 4 buckets stored as kudu;
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KUDU-2755) Protoc 3.7.0 links against libatomic and can break the Java build

2019-04-01 Thread Grant Henke (JIRA)
Grant Henke created KUDU-2755:
-

 Summary: Protoc 3.7.0 links against libatomic and can break the 
Java build
 Key: KUDU-2755
 URL: https://issues.apache.org/jira/browse/KUDU-2755
 Project: Kudu
  Issue Type: Improvement
  Components: java
Affects Versions: 1.10.0
Reporter: Grant Henke
Assignee: Grant Henke


The protobuf version was recently upgraded to 3.7.0, but builds in some 
environments can be broken due to libatomic linking. This is fixed in protoc 
3.7.1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KUDU-2754) Keep a maximum number of old log files

2019-03-29 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke reassigned KUDU-2754:
-

Assignee: Grant Henke

> Keep a maximum number of old log files
> --
>
> Key: KUDU-2754
> URL: https://issues.apache.org/jira/browse/KUDU-2754
> Project: Kudu
>  Issue Type: Improvement
>Reporter: Grant Henke
>Assignee: Grant Henke
>Priority: Major
>
> Kudu generates various log files (INFO, WARNING, ERROR, diagnostic, 
> minidumps, etc.). To prevent running out of logging space, it would be nice 
> if a user could configure the maximum number of each log file type to keep.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KUDU-2754) Keep a maximum number of old log files

2019-03-29 Thread Grant Henke (JIRA)


[ 
https://issues.apache.org/jira/browse/KUDU-2754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16805152#comment-16805152
 ] 

Grant Henke commented on KUDU-2754:
---

It looks like this feature already exists, but configuring the maximum number 
is "experimental". This jira can just track changing `max_log_files` to stable.

https://github.com/apache/kudu/blob/master/src/kudu/util/logging.cc#L69
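
For anyone looking for the existing knob, it is a server gflag; because it is 
still tagged experimental, changing it requires unlocking, e.g. (values 
illustrative):

{noformat}
kudu-tserver --unlock_experimental_flags --max_log_files=10 ...
{noformat}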

> Keep a maximum number of old log files
> --
>
> Key: KUDU-2754
> URL: https://issues.apache.org/jira/browse/KUDU-2754
> Project: Kudu
>  Issue Type: Improvement
>Reporter: Grant Henke
>Priority: Major
>
> Kudu generates various log files (INFO, WARNING, ERROR, diagnostic, 
> minidumps, etc.). To prevent running out of logging space, it would be nice 
> if a user could configure the maximum number of each log file type to keep.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KUDU-2754) Keep a maximum number of old log files

2019-03-29 Thread Grant Henke (JIRA)
Grant Henke created KUDU-2754:
-

 Summary: Keep a maximum number of old log files
 Key: KUDU-2754
 URL: https://issues.apache.org/jira/browse/KUDU-2754
 Project: Kudu
  Issue Type: Improvement
Reporter: Grant Henke


Kudu generates various log files (INFO, WARNING, ERROR, diagnostic, minidumps, 
etc.). To prevent running out of logging space, it would be nice if a user 
could configure the maximum number of each log file type to keep.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KUDU-1395) Scanner KeepAlive requests can get starved on an overloaded server

2019-03-29 Thread Grant Henke (JIRA)


[ 
https://issues.apache.org/jira/browse/KUDU-1395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16805109#comment-16805109
 ] 

Grant Henke commented on KUDU-1395:
---

FWIW the Java client retries keepAlive requests (KUDU-2710)
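
For context, a long scan is expected to call keepAlive periodically while the 
client processes batches. A minimal Java sketch; the master address and table 
name are assumptions:

{code:java}
import org.apache.kudu.client.*;

// Minimal sketch of the keep-alive pattern whose RPCs this issue is about.
// "master1:7051" and "my_table" are hypothetical.
public class KeepAliveExample {
  public static void main(String[] args) throws KuduException {
    KuduClient client = new KuduClient.KuduClientBuilder("master1:7051").build();
    KuduScanner scanner =
        client.newScannerBuilder(client.openTable("my_table")).build();
    while (scanner.hasMoreRows()) {
      RowResultIterator rows = scanner.nextRows();
      // Keep the server-side scanner alive while downstream processing is
      // slow. On an overloaded tserver this RPC can be rejected, and before
      // KUDU-2710 the Java client did not retry it.
      scanner.keepAlive();
      while (rows.hasNext()) {
        rows.next(); // consume the row
      }
    }
    client.close();
  }
}
{code}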

> Scanner KeepAlive requests can get starved on an overloaded server
> --
>
> Key: KUDU-1395
> URL: https://issues.apache.org/jira/browse/KUDU-1395
> Project: Kudu
>  Issue Type: Bug
>  Components: impala, rpc, tserver
>Affects Versions: 0.8.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Major
>  Labels: backup
>
> As of 0.8.0, the RPC system schedules RPCs on an earliest-deadline-first 
> basis, rejecting those with later deadlines. This works well for RPCs which 
> are retried on SERVER_TOO_BUSY errors, since the retries maintain the 
> original deadline and thus get higher and higher priority as they get closer 
> to timing out.
> We don't, however, do any retries on scanner KeepAlive RPCs. So, if a 
> keepalive RPC arrives at a heavily overloaded tserver, it will likely get 
> rejected, and won't retry. This means that Impala queries or other long scans 
> that rely on KeepAlives will likely fail on overloaded clusters since the 
> KeepAlive never gets through.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KUDU-1395) Scanner KeepAlive requests can get starved on an overloaded server

2019-03-29 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-1395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KUDU-1395:
--
Labels: backup  (was: )

> Scanner KeepAlive requests can get starved on an overloaded server
> --
>
> Key: KUDU-1395
> URL: https://issues.apache.org/jira/browse/KUDU-1395
> Project: Kudu
>  Issue Type: Bug
>  Components: impala, rpc, tserver
>Affects Versions: 0.8.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Major
>  Labels: backup
>
> As of 0.8.0, the RPC system schedules RPCs on an earliest-deadline-first 
> basis, rejecting those with later deadlines. This works well for RPCs which 
> are retried on SERVER_TOO_BUSY errors, since the retries maintain the 
> original deadline and thus get higher and higher priority as they get closer 
> to timing out.
> We don't, however, do any retries on scanner KeepAlive RPCs. So, if a 
> keepalive RPC arrives at a heavily overloaded tserver, it will likely get 
> rejected, and won't retry. This means that Impala queries or other long scans 
> that rely on KeepAlives will likely fail on overloaded clusters since the 
> KeepAlive never gets through.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KUDU-1711) Add support for storing column comments in ColumnSchema

2019-03-20 Thread Grant Henke (JIRA)


[ 
https://issues.apache.org/jira/browse/KUDU-1711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797437#comment-16797437
 ] 

Grant Henke commented on KUDU-1711:
---

I don't think anyone is. I have assigned you to the jira.

> Add support for storing column comments in ColumnSchema
> ---
>
> Key: KUDU-1711
> URL: https://issues.apache.org/jira/browse/KUDU-1711
> Project: Kudu
>  Issue Type: Improvement
>  Components: impala
>Affects Versions: 1.0.1
>Reporter: Dimitris Tsirogiannis
>Assignee: HeLifu
>Priority: Minor
>
> Currently, there is no way to persist column comments for Kudu tables unless 
> we store them in HMS. We should be able to store column comments in Kudu 
> through the ColumnSchema class. 
> Example of using column comments in a CREATE TABLE statement:
> {code}
> impala>create table foo (a int primary key comment 'this is column a') 
> distribute by hash (a) into 4 buckets stored as kudu;
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KUDU-1711) Add support for storing column comments in ColumnSchema

2019-03-20 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-1711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke reassigned KUDU-1711:
-

Assignee: HeLifu

> Add support for storing column comments in ColumnSchema
> ---
>
> Key: KUDU-1711
> URL: https://issues.apache.org/jira/browse/KUDU-1711
> Project: Kudu
>  Issue Type: Improvement
>  Components: impala
>Affects Versions: 1.0.1
>Reporter: Dimitris Tsirogiannis
>Assignee: HeLifu
>Priority: Minor
>
> Currently, there is no way to persist column comments for Kudu tables unless 
> we store them in HMS. We should be able to store column comments in Kudu 
> through the ColumnSchema class. 
> Example of using column comments in a CREATE TABLE statement:
> {code}
> impala>create table foo (a int primary key comment 'this is column a') 
> distribute by hash (a) into 4 buckets stored as kudu;
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KUDU-2672) Spark write to kudu, too many machines write to one tserver.

2019-03-18 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke reassigned KUDU-2672:
-

Assignee: Grant Henke

> Spark write to kudu, too many machines write to one tserver.
> 
>
> Key: KUDU-2672
> URL: https://issues.apache.org/jira/browse/KUDU-2672
> Project: Kudu
>  Issue Type: Improvement
>  Components: java, spark
>Affects Versions: 1.8.0
>Reporter: yangz
>Assignee: Grant Henke
>Priority: Major
>  Labels: backup, performance
> Fix For: 1.10.0
>
>
> For the Spark use case, we sometimes use Spark to write data to Kudu, for 
> example to import a Hive table into a Kudu table.
> There are two problems in the current implementation:
>  # It uses FlushMode.AUTO_FLUSH_BACKGROUND, which is not efficient for error 
> processing. When an error such as a timeout happens, it always flushes all 
> the data in the task and then fails the task, retrying at the task level.
>  # For the write path, Spark splits data into partitions with its default 
> hash partitioner, and that hashing does not always match the tablet 
> distribution. For example, a big Hive table of 500G yields 2000 tasks, but 
> we only have 20 tserver machines, so up to 2000 machines may write at the 
> same time to 20 tservers. This is bad for performance in two ways. First, 
> the tserver takes a per-row primary key lock, so there are many lock waits; 
> in the worst case write operations keep timing out. Second, many machines 
> write data to the tservers at the same time with no throttling in the code.
> So we suggest two things:
>  # Change the flush mode to MANUAL_FLUSH and process errors at the row 
> level first, falling back to the task level.
>  # Add an optional repartition step in Spark. We can repartition the data 
> by the tablet distribution so that only one machine writes to each tserver, 
> eliminating the lock contention.
> We have used this feature for some time and it has solved problems when 
> writing big tables to Kudu via Spark. I hope this feature will be useful 
> for the community that uses Spark with Kudu heavily.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KUDU-2672) Spark write to kudu, too many machines write to one tserver.

2019-03-18 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke resolved KUDU-2672.
---
   Resolution: Fixed
Fix Version/s: 1.10.0

> Spark write to kudu, too many machines write to one tserver.
> 
>
> Key: KUDU-2672
> URL: https://issues.apache.org/jira/browse/KUDU-2672
> Project: Kudu
>  Issue Type: Improvement
>  Components: java, spark
>Affects Versions: 1.8.0
>Reporter: yangz
>Priority: Major
>  Labels: backup, performance
> Fix For: 1.10.0
>
>
> For the Spark use case, we sometimes use Spark to write data to Kudu, for 
> example to import a Hive table into a Kudu table.
> There are two problems in the current implementation:
>  # It uses FlushMode.AUTO_FLUSH_BACKGROUND, which is not efficient for error 
> processing. When an error such as a timeout happens, it always flushes all 
> the data in the task and then fails the task, retrying at the task level.
>  # For the write path, Spark splits data into partitions with its default 
> hash partitioner, and that hashing does not always match the tablet 
> distribution. For example, a big Hive table of 500G yields 2000 tasks, but 
> we only have 20 tserver machines, so up to 2000 machines may write at the 
> same time to 20 tservers. This is bad for performance in two ways. First, 
> the tserver takes a per-row primary key lock, so there are many lock waits; 
> in the worst case write operations keep timing out. Second, many machines 
> write data to the tservers at the same time with no throttling in the code.
> So we suggest two things:
>  # Change the flush mode to MANUAL_FLUSH and process errors at the row 
> level first, falling back to the task level.
>  # Add an optional repartition step in Spark. We can repartition the data 
> by the tablet distribution so that only one machine writes to each tserver, 
> eliminating the lock contention.
> We have used this feature for some time and it has solved problems when 
> writing big tables to Kudu via Spark. I hope this feature will be useful 
> for the community that uses Spark with Kudu heavily.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KUDU-2672) Spark write to kudu, too many machines write to one tserver.

2019-03-18 Thread Grant Henke (JIRA)


[ 
https://issues.apache.org/jira/browse/KUDU-2672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795443#comment-16795443
 ] 

Grant Henke commented on KUDU-2672:
---

The main part of this jira has been resolved via 
[d9be1f|https://github.com/apache/kudu/commit/d9be1f6623c068524e9bd65a89e25146d9b70dd5].

I suggest opening another jira to track the work described in #1 to adjust row 
flushing. 
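
For reference, a rough Java-client sketch of the row-level error handling 
described in #1 (the Spark integration is Scala, but the session semantics are 
the same; the table, column, and row count are made up):

{code:java}
import java.util.List;
import org.apache.kudu.client.*;

// Rough sketch of MANUAL_FLUSH with per-row error handling. "my_table" and
// the "key" column are hypothetical.
public class ManualFlushErrorHandling {
  public static void main(String[] args) throws KuduException {
    KuduClient client = new KuduClient.KuduClientBuilder("master1:7051").build();
    KuduTable table = client.openTable("my_table");
    KuduSession session = client.newSession();
    session.setFlushMode(SessionConfiguration.FlushMode.MANUAL_FLUSH);
    for (int i = 0; i < 100; i++) {
      Insert insert = table.newInsert();
      insert.getRow().addInt("key", i);
      session.apply(insert);
    }
    // With MANUAL_FLUSH, flush() returns one response per applied op, so a
    // failed row can be handled or retried individually instead of failing
    // the whole task.
    List<OperationResponse> responses = session.flush();
    for (OperationResponse resp : responses) {
      if (resp.hasRowError()) {
        System.err.println("Row failed: " + resp.getRowError());
      }
    }
    client.close();
  }
}
{code}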

> Spark write to kudu, too many machines write to one tserver.
> 
>
> Key: KUDU-2672
> URL: https://issues.apache.org/jira/browse/KUDU-2672
> Project: Kudu
>  Issue Type: Improvement
>  Components: java, spark
>Affects Versions: 1.8.0
>Reporter: yangz
>Priority: Major
>  Labels: backup, performance
>
> For the Spark use case, we sometimes use Spark to write data to Kudu, for 
> example to import a Hive table into a Kudu table.
> There are two problems in the current implementation:
>  # It uses FlushMode.AUTO_FLUSH_BACKGROUND, which is not efficient for error 
> processing. When an error such as a timeout happens, it always flushes all 
> the data in the task and then fails the task, retrying at the task level.
>  # For the write path, Spark splits data into partitions with its default 
> hash partitioner, and that hashing does not always match the tablet 
> distribution. For example, a big Hive table of 500G yields 2000 tasks, but 
> we only have 20 tserver machines, so up to 2000 machines may write at the 
> same time to 20 tservers. This is bad for performance in two ways. First, 
> the tserver takes a per-row primary key lock, so there are many lock waits; 
> in the worst case write operations keep timing out. Second, many machines 
> write data to the tservers at the same time with no throttling in the code.
> So we suggest two things:
>  # Change the flush mode to MANUAL_FLUSH and process errors at the row 
> level first, falling back to the task level.
>  # Add an optional repartition step in Spark. We can repartition the data 
> by the tablet distribution so that only one machine writes to each tserver, 
> eliminating the lock contention.
> We have used this feature for some time and it has solved problems when 
> writing big tables to Kudu via Spark. I hope this feature will be useful 
> for the community that uses Spark with Kudu heavily.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KUDU-2674) Add Java KuduPartitioner API

2019-03-18 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke resolved KUDU-2674.
---
   Resolution: Fixed
Fix Version/s: 1.10.0

Resolved via 
[3db5c2|https://github.com/apache/kudu/commit/3db5c2151fb99c9ca834d6651a893610bc6e4ccd].

> Add Java KuduPartitioner API
> 
>
> Key: KUDU-2674
> URL: https://issues.apache.org/jira/browse/KUDU-2674
> Project: Kudu
>  Issue Type: Improvement
>Reporter: Grant Henke
>Assignee: Grant Henke
>Priority: Major
>  Labels: backup
> Fix For: 1.10.0
>
>
> We should port the client side KuduPartitioner implementation from KUDU-1713 
> ([https://gerrit.cloudera.org/#/c/5775/]) to the Java client. 
> This would allow Spark and other Java integrations to repartition and 
> pre-sort the data before writing to Kudu. 
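
To make the use case concrete, here is a hedged sketch of using the 
partitioner from Java; the builder and method names follow the commit above as 
I recall them, and the table and column are assumptions:

{code:java}
import org.apache.kudu.client.*;

// Hedged sketch: map a row to its tablet's partition index so an engine like
// Spark can repartition and pre-sort before writing. "my_table" and "key"
// are hypothetical; see the linked commit for the authoritative API.
public class PartitionerExample {
  public static void main(String[] args) throws KuduException {
    KuduClient client = new KuduClient.KuduClientBuilder("master1:7051").build();
    KuduTable table = client.openTable("my_table");
    KuduPartitioner partitioner =
        new KuduPartitioner.KuduPartitionerBuilder(table).build();
    PartialRow row = table.getSchema().newPartialRow();
    row.addInt("key", 42);
    // Rows with the same index belong to the same tablet, so grouping by
    // this index lets a single writer handle each tablet's rows.
    int index = partitioner.partitionRow(row);
    System.out.println("row -> partition " + index
        + " of " + partitioner.numPartitions());
    client.close();
  }
}
{code}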



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KUDU-2746) 1.9.0 download link no longer goes to correct directory

2019-03-15 Thread Grant Henke (JIRA)


[ 
https://issues.apache.org/jira/browse/KUDU-2746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793766#comment-16793766
 ] 

Grant Henke commented on KUDU-2746:
---

It looks like this is true for 1.8.0 too.

> 1.9.0 download link no longer goes to correct directory
> ---
>
> Key: KUDU-2746
> URL: https://issues.apache.org/jira/browse/KUDU-2746
> Project: Kudu
>  Issue Type: Bug
>  Components: website
>Reporter: Sebb
>Priority: Major
>
> The 1.9.0 source download link is:
> http://www.apache.org/closer.cgi?filename=kudu/1.9.0/apache-kudu-1.9.0.tar.gz
> However, this takes one to the top-level Apache directory of the mirror, not 
> the specific directory or file. The user then has to navigate down two 
> directory levels.
> The following URL seems to work:
> http://www.apache.org/closer.cgi?path=kudu/1.9.0/apache-kudu-1.9.0.tar.gz



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KUDU-2747) Update building from source documentation

2019-03-15 Thread Grant Henke (JIRA)
Grant Henke created KUDU-2747:
-

 Summary: Update building from source documentation
 Key: KUDU-2747
 URL: https://issues.apache.org/jira/browse/KUDU-2747
 Project: Kudu
  Issue Type: Improvement
  Components: documentation
Affects Versions: 1.9.0
Reporter: Grant Henke


The building from source documentation doesn't work for all supported OS 
versions. Based on what we learned when developing the Docker build support, 
we should update the docs. 

We could either point the user to the [bootstrap 
script|https://github.com/apache/kudu/blob/master/docker/bootstrap-dev-env.sh] 
itself, or use it to update the docs manually. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KUDU-2737) Allow KuduContext row errors to be handled

2019-03-06 Thread Grant Henke (JIRA)
Grant Henke created KUDU-2737:
-

 Summary: Allow KuduContext row errors to be handled
 Key: KUDU-2737
 URL: https://issues.apache.org/jira/browse/KUDU-2737
 Project: Kudu
  Issue Type: Improvement
  Components: spark
Affects Versions: 1.9.0
Reporter: Grant Henke


Currently when writing to Kudu via Spark the writeRows API detects all row 
errors and throws a RuntimeException with some of the sample errors included in 
the message: 
https://github.com/apache/kudu/blob/master/java/kudu-spark/src/main/scala/org/apache/kudu/spark/kudu/KuduContext.scala#L344

We should optionally return these row errors and allow users to handle them, or 
potentially take an error handler function to allow custom error handling logic 
to be passed through. 
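
Under the hood the row errors come from the session. A minimal sketch of the 
mechanism the KuduContext would need to expose; the table and column names are 
assumptions:

{code:java}
import org.apache.kudu.client.*;

// Minimal sketch of the session-level error collection that writeRows uses
// internally and that this issue proposes surfacing. "my_table" is
// hypothetical.
public class PendingErrorsExample {
  public static void main(String[] args) throws KuduException {
    KuduClient client = new KuduClient.KuduClientBuilder("master1:7051").build();
    KuduTable table = client.openTable("my_table");
    KuduSession session = client.newSession();
    session.setFlushMode(SessionConfiguration.FlushMode.AUTO_FLUSH_BACKGROUND);
    Insert insert = table.newInsert();
    insert.getRow().addInt("key", 1);
    session.apply(insert);
    session.flush();
    // Instead of throwing a RuntimeException containing a sample of these,
    // the proposal is to return them (or feed an error-handler callback).
    RowErrorsAndOverflowStatus pending = session.getPendingErrors();
    for (RowError error : pending.getRowErrors()) {
      System.err.println(error);
    }
    client.close();
  }
}
{code}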



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KUDU-2722) Ability to mark a partition or table as read only

2019-03-01 Thread Grant Henke (JIRA)
Grant Henke created KUDU-2722:
-

 Summary: Ability to mark a partition or table as read only
 Key: KUDU-2722
 URL: https://issues.apache.org/jira/browse/KUDU-2722
 Project: Kudu
  Issue Type: Improvement
Affects Versions: 1.8.0
Reporter: Grant Henke


It could be useful to prevent data from being mutated in a table or partition. 
For example, this would allow users to lock older range partitions against 
inserts/updates/deletes, ensuring any queries/reports running on that data 
always show the same results.

There might also be optimization (resource/storage) opportunities we could 
pursue server side once a table is marked as read only. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KUDU-2715) Get all tests passing on macOS

2019-02-27 Thread Grant Henke (JIRA)


[ 
https://issues.apache.org/jira/browse/KUDU-2715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779672#comment-16779672
 ] 

Grant Henke commented on KUDU-2715:
---

I did a bit of research on running macOS Docker containers in the past. I am 
not sure of the feasibility, but I found a few links that may be a good 
starting point: 
* https://github.com/Cleafy/sxkdvm
* https://github.com/kholia/OSX-KVM


> Get all tests passing on macOS
> --
>
> Key: KUDU-2715
> URL: https://issues.apache.org/jira/browse/KUDU-2715
> Project: Kudu
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1.9.0
>Reporter: Adar Dembo
>Priority: Major
>
> It seems that there are always a handful of tests that don't pass when run on 
> macOS, though precisely which set depends on the underlying version of macOS. 
> This taxes the release vote process, wherein macOS-based Kudu developers are 
> forced to figure out whether the test failures they're seeing are "known 
> issues" or indicative of problems with the release. Not to mention the 
> day-to-day process of developing on macOS, where you never quite know whether 
> your local work regressed a test, or whether that test was broken all along.
> In the past we looked into macOS CI builds and found the situation to be 
> fairly bleak. Hopefully things have improved since then, but if not, I think 
> we should still get the tests passing uniformly (disabling those which make 
> no sense) and work in an ad hoc fashion towards keeping them that way.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KUDU-2710) Retries of scanner keep alive requests are broken in the Java client

2019-02-27 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KUDU-2710:
--
   Resolution: Fixed
Fix Version/s: 1.8.1
   Status: Resolved  (was: In Review)

Resolved via 
[6302811|https://github.com/apache/kudu/commit/6302811eb73efdfd2a3da84c25f5d6589302dee1].

> Retries of scanner keep alive requests are broken in the Java client
> 
>
> Key: KUDU-2710
> URL: https://issues.apache.org/jira/browse/KUDU-2710
> Project: Kudu
>  Issue Type: Bug
>Affects Versions: 1.9.0
>Reporter: Will Berkeley
>Assignee: Grant Henke
>Priority: Critical
> Fix For: 1.9.0, 1.8.1
>
>
> KuduRpc implements a default `partitionKey` method:
> {noformat}
> /**
>* Returns the partition key this RPC is for, or {@code null} if the RPC is
>* not tablet specific.
>* 
>* DO NOT MODIFY THE CONTENTS OF THE RETURNED ARRAY.
>*/
>   byte[] partitionKey() {
> return null;
>   }
> {noformat}
> Subclasses override this method to indicate the start key of the tablet they 
> should be sent to, and the Java client uses this, in part, to select which 
> tserver to send retries to. The default implementation returns {{null}}, 
> which is a special value that is only valid as a partition key for the master 
> table. The keep alive RPC does not override this method, so it uses the 
> default implementation.
> When {{KuduScanner#keepAlive}} is called, the initial keep alive RPC does not 
> use {{partitionKey}}, so it works OK. However, retries go through standard 
> retry logic, which calls {{delayedSendRpcToTablet}}, which calls 
> {{sendRpcToTablet}} after a delay and on a timer thread. In 
> {{sendRpcToTablet}} we call {{getTableLocationEntry}} with a null 
> {{partitionkey}}, because the RPC never set one. That results in 
> {{cache.get(partitionKey)}} throwing an exception (usually) because there are 
> multiple entries in the cache for the table, but the {{null}} partition key 
> makes the lookup act like it is looking up the master table, so the invariant 
> check for the master table {{Preconditions.checkState(entries.size() <= 1)}} 
> fails.
> As a workaround, users can set {{keepAlivePeriodMs}} on {{KuduReadOptions}} 
> to something very large like {{Long.MAX_VALUE}}; or, if using the default 
> source, pass the {{kudu.keepAlivePeriodMs}} spark config with a very large 
> value. Note that there also has to be something causing keep alive requests 
> to fail and retry, and this is relatively rare (in my experience).
> To fix, we'll need to make sure that keep alive RPCs act like scan RPCs, and 
> are always retried on the same server as the one currently open for scanning 
> (or no op if there is no such server).
> Also, it's not wise to keep the default implementation in KuduRpc-- 
> subclasses ought to have to make an explicit choice about the default 
> partition key, which is a proxy for which tablet they will go to.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KUDU-2714) Support target Mac OS version in the mini-cluster jar build

2019-02-27 Thread Grant Henke (JIRA)
Grant Henke created KUDU-2714:
-

 Summary: Support target Mac OS version in the mini-cluster jar 
build
 Key: KUDU-2714
 URL: https://issues.apache.org/jira/browse/KUDU-2714
 Project: Kudu
  Issue Type: Improvement
  Components: mini-cluster, build
Affects Versions: 1.9.0
Reporter: Grant Henke


When building the mini-cluster jars we currently need to build on the oldest 
macOS version we want to support. Instead, it would be nice to support building 
on newer macOS versions while targeting an older one. An example of how to do 
this can be seen here: 

https://www.cocoawithlove.com/2009/09/building-for-earlier-os-versions-in.html
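
With the CMake-based build, one way to do this (version number illustrative) 
would be:

{noformat}
# Build on a newer macOS while targeting 10.10; equivalently, export
# MACOSX_DEPLOYMENT_TARGET=10.10 before running cmake.
cmake -DCMAKE_OSX_DEPLOYMENT_TARGET=10.10 ..
{noformat}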



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KUDU-2710) Retries of scanner keep alive requests are broken in the Java client

2019-02-26 Thread Grant Henke (JIRA)


[ 
https://issues.apache.org/jira/browse/KUDU-2710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778841#comment-16778841
 ] 

Grant Henke commented on KUDU-2710:
---

FYI here are some code examples for workarounds:


{code:scala}
val rdd = kuduContext.kuduRDD(
   ss.sparkContext,
   tableName,
   options = KuduReadOptions(keepAlivePeriodMs = Long.MaxValue)
)
{code}

{code:scala}
sqlContext.read.option("kudu.keepAlivePeriodMs", 
Long.MaxValue).format("kudu").load
{code}


> Retries of scanner keep alive requests are broken in the Java client
> 
>
> Key: KUDU-2710
> URL: https://issues.apache.org/jira/browse/KUDU-2710
> Project: Kudu
>  Issue Type: Bug
>Affects Versions: 1.9.0
>Reporter: Will Berkeley
>Assignee: Grant Henke
>Priority: Critical
> Fix For: 1.9.0
>
>
> KuduRpc implements a default `partitionKey` method:
> {noformat}
> /**
>* Returns the partition key this RPC is for, or {@code null} if the RPC is
>* not tablet specific.
>* 
>* DO NOT MODIFY THE CONTENTS OF THE RETURNED ARRAY.
>*/
>   byte[] partitionKey() {
> return null;
>   }
> {noformat}
> Subclasses override this method to indicate the start key of the tablet they 
> should be sent to, and the Java client uses this, in part, to select which 
> tserver to send retries to. The default implementation returns {{null}}, 
> which is a special value that is only valid as a partition key for the master 
> table. The keep alive RPC does not override this method, so it uses the 
> default implementation.
> When {{KuduScanner#keepAlive}} is called, the initial keep alive RPC does not 
> use {{partitionKey}}, so it works OK. However, retries go through standard 
> retry logic, which calls {{delayedSendRpcToTablet}}, which calls 
> {{sendRpcToTablet}} after a delay and on a timer thread. In 
> {{sendRpcToTablet}} we call {{getTableLocationEntry}} with a null 
> {{partitionkey}}, because the RPC never set one. That results in 
> {{cache.get(partitionKey)}} throwing an exception (usually) because there are 
> multiple entries in the cache for the table, but the {{null}} partition key 
> makes the lookup act like it is looking up the master table, so the invariant 
> check for the master table {{Preconditions.checkState(entries.size() <= 1)}} 
> fails.
> As a workaround, users can set {{keepAlivePeriodMs}} on {{KuduReadOptions}} 
> to something very large like {{Long.MAX_VALUE}}; or, if using the default 
> source, pass the {{kudu.keepAlivePeriodMs}} spark config with a very large 
> value. Note that there also has to be something causing keep alive requests 
> to fail and retry, and this is relatively rare (in my experience).
> To fix, we'll need to make sure that keep alive RPCs act like scan RPCs, and 
> are always retried on the same server as the one currently open for scanning 
> (or no op if there is no such server).
> Also, it's not wise to keep the default implementation in KuduRpc-- 
> subclasses ought to have to make an explicit choice about the default 
> partition key, which is a proxy for which tablet they will go to.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KUDU-2710) Retries of scanner keep alive requests are broken in the Java client

2019-02-25 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KUDU-2710:
--
Status: In Review  (was: In Progress)

> Retries of scanner keep alive requests are broken in the Java client
> 
>
> Key: KUDU-2710
> URL: https://issues.apache.org/jira/browse/KUDU-2710
> Project: Kudu
>  Issue Type: Bug
>Affects Versions: 1.9.0
>Reporter: Will Berkeley
>Assignee: Grant Henke
>Priority: Critical
> Fix For: 1.9.0
>
>
> KuduRpc implements a default `partitionKey` method:
> {noformat}
> /**
>* Returns the partition key this RPC is for, or {@code null} if the RPC is
>* not tablet specific.
>* 
>* DO NOT MODIFY THE CONTENTS OF THE RETURNED ARRAY.
>*/
>   byte[] partitionKey() {
> return null;
>   }
> {noformat}
> Subclasses override this method to indicate the start key of the tablet they 
> should be sent to, and the Java client uses this, in part, to select which 
> tserver to send retries to. The default implementation returns {{null}}, 
> which is a special value that is only valid as a partition key for the master 
> table. The keep alive RPC does not override this method, so it uses the 
> default implementation.
> When {{KuduScanner#keepAlive}} is called, the initial keep alive RPC does not 
> use {{partitionKey}}, so it works OK. However, retries go through standard 
> retry logic, which calls {{delayedSendRpcToTablet}}, which calls 
> {{sendRpcToTablet}} after a delay and on a timer thread. In 
> {{sendRpcToTablet}} we call {{getTableLocationEntry}} with a null 
> {{partitionkey}}, because the RPC never set one. That results in 
> {{cache.get(partitionKey)}} throwing an exception (usually) because there are 
> multiple entries in the cache for the table, but the {{null}} partition key 
> makes the lookup act like it is looking up the master table, so the invariant 
> check for the master table {{Preconditions.checkState(entries.size() <= 1)}} 
> fails.
> As a workaround, users can set {{keepAlivePeriodMs}} on {{KuduReadOptions}} 
> to something very large like {{Long.MAX_VALUE}}; or, if using the default 
> source, pass the {{kudu.keepAlivePeriodMs}} spark config with a very large 
> value. Note that there also has to be something causing keep alive requests 
> to fail and retry, and this is relatively rare (in my experience).
> To fix, we'll need to make sure that keep alive RPCs act like scan RPCs, and 
> are always retried on the same server as the one currently open for scanning 
> (or no op if there is no such server).
> Also, it's not wise to keep the default implementation in KuduRpc-- 
> subclasses ought to have to make an explicit choice about the default 
> partition key, which is a proxy for which tablet they will go to.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KUDU-2710) Retries of scanner keep alive requests are broken in the Java client

2019-02-25 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KUDU-2710:
--
Fix Version/s: 1.9.0

> Retries of scanner keep alive requests are broken in the Java client
> 
>
> Key: KUDU-2710
> URL: https://issues.apache.org/jira/browse/KUDU-2710
> Project: Kudu
>  Issue Type: Bug
>Affects Versions: 1.9.0
>Reporter: Will Berkeley
>Assignee: Grant Henke
>Priority: Critical
> Fix For: 1.9.0
>
>
> KuduRpc implements a default `partitionKey` method:
> {noformat}
> /**
>* Returns the partition key this RPC is for, or {@code null} if the RPC is
>* not tablet specific.
>* 
>* DO NOT MODIFY THE CONTENTS OF THE RETURNED ARRAY.
>*/
>   byte[] partitionKey() {
> return null;
>   }
> {noformat}
> Subclasses override this method to indicate the start key of the tablet they 
> should be sent to, and the Java client uses this, in part, to select which 
> tserver to send retries to. The default implementation returns {{null}}, 
> which is a special value that is only valid as a partition key for the master 
> table. The keep alive RPC does not override this method, so it uses the 
> default implementation.
> When {{KuduScanner#keepAlive}} is called, the initial keep alive RPC does not 
> use {{partitionKey}}, so it works OK. However, retries go through standard 
> retry logic, which calls {{delayedSendRpcToTablet}}, which calls 
> {{sendRpcToTablet}} after a delay and on a timer thread. In 
> {{sendRpcToTablet}} we call {{getTableLocationEntry}} with a null 
> {{partitionkey}}, because the RPC never set one. That results in 
> {{cache.get(partitionKey)}} throwing an exception (usually) because there are 
> multiple entries in the cache for the table, but the {{null}} partition key 
> makes the lookup act like it is looking up the master table, so the invariant 
> check for the master table {{Preconditions.checkState(entries.size() <= 1)}} 
> fails.
> As a workaround, users can set {{keepAlivePeriodMs}} on {{KuduReadOptions}} 
> to something very large like {{Long.MAX_VALUE}}; or, if using the default 
> source, pass the {{kudu.keepAlivePeriodMs}} spark config with a very large 
> value. Note that there also has to be something causing keep alive requests 
> to fail and retry, and this is relatively rare (in my experience).
> To fix, we'll need to make sure that keep alive RPCs act like scan RPCs, and 
> are always retried on the same server as the one currently open for scanning 
> (or no op if there is no such server).
> Also, it's not wise to keep the default implementation in KuduRpc-- 
> subclasses ought to have to make an explicit choice about the default 
> partition key, which is a proxy for which tablet they will go to.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KUDU-2710) Retries of scanner keep alive requests are broken in the Java client

2019-02-25 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KUDU-2710:
--
Priority: Critical  (was: Major)

> Retries of scanner keep alive requests are broken in the Java client
> 
>
> Key: KUDU-2710
> URL: https://issues.apache.org/jira/browse/KUDU-2710
> Project: Kudu
>  Issue Type: Bug
>Affects Versions: 1.9.0
>Reporter: Will Berkeley
>Assignee: Grant Henke
>Priority: Critical
>
> KuduRpc implements a default `partitionKey` method:
> {noformat}
> /**
>* Returns the partition key this RPC is for, or {@code null} if the RPC is
>* not tablet specific.
>* 
>* DO NOT MODIFY THE CONTENTS OF THE RETURNED ARRAY.
>*/
>   byte[] partitionKey() {
> return null;
>   }
> {noformat}
> Subclasses override this method to indicate the start key of the tablet they 
> should be sent to, and the Java client uses this, in part, to select which 
> tserver to send retries to. The default implementation returns {{null}}, 
> which is a special value that is only valid as a partition key for the master 
> table. The keep alive RPC does not override this method, so it uses the 
> default implementation.
> When {{KuduScanner#keepAlive}} is called, the initial keep alive RPC does not 
> use {{partitionKey}}, so it works OK. However, retries go through standard 
> retry logic, which calls {{delayedSendRpcToTablet}}, which calls 
> {{sendRpcToTablet}} after a delay and on a timer thread. In 
> {{sendRpcToTablet}} we call {{getTableLocationEntry}} with a null 
> {{partitionkey}}, because the RPC never set one. That results in 
> {{cache.get(partitionKey)}} throwing an exception (usually) because there are 
> multiple entries in the cache for the table, but the {{null}} partition key 
> makes the lookup act like it is looking up the master table, so the invariant 
> check for the master table {{Preconditions.checkState(entries.size() <= 1)}} 
> fails.
> As a workaround, users can set {{keepAlivePeriodMs}} on {{KuduReadOptions}} 
> to something very large like {{Long.MAX_VALUE}}; or, if using the default 
> source, pass the {{kudu.keepAlivePeriodMs}} spark config with a very large 
> value. Note that there also has to be something causing keep alive requests 
> to fail and retry, and this is relatively rare (in my experience).
> To fix, we'll need to make sure that keep alive RPCs act like scan RPCs, and 
> are always retried on the same server as the one currently open for scanning 
> (or no op if there is no such server).
> Also, it's not wise to keep the default implementation in KuduRpc-- 
> subclasses ought to have to make an explicit choice about the default 
> partition key, which is a proxy for which tablet they will go to.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KUDU-2710) Retries of scanner keep alive requests are broken in the Java client

2019-02-25 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke reassigned KUDU-2710:
-

Assignee: Grant Henke

> Retries of scanner keep alive requests are broken in the Java client
> 
>
> Key: KUDU-2710
> URL: https://issues.apache.org/jira/browse/KUDU-2710
> Project: Kudu
>  Issue Type: Bug
>Affects Versions: 1.9.0
>Reporter: Will Berkeley
>Assignee: Grant Henke
>Priority: Major
>
> KuduRpc implements a default `partitionKey` method:
> {noformat}
> /**
>* Returns the partition key this RPC is for, or {@code null} if the RPC is
>* not tablet specific.
>* 
>* DO NOT MODIFY THE CONTENTS OF THE RETURNED ARRAY.
>*/
>   byte[] partitionKey() {
> return null;
>   }
> {noformat}
> Subclasses override this method to indicate the start key of the tablet they 
> should be sent to, and the Java client uses this, in part, to select which 
> tserver to send retries to. The default implementation returns {{null}}, 
> which is a special value that is only valid as a partition key for the master 
> table. The keep alive RPC does not override this method, so it uses the 
> default implementation.
> When {{KuduScanner#keepAlive}} is called, the initial keep alive RPC does not 
> use {{partitionKey}}, so it works OK. However, retries go through standard 
> retry logic, which calls {{delayedSendRpcToTablet}}, which calls 
> {{sendRpcToTablet}} after a delay and on a timer thread. In 
> {{sendRpcToTablet}} we call {{getTableLocationEntry}} with a null 
> {{partitionkey}}, because the RPC never set one. That results in 
> {{cache.get(partitionKey)}} throwing an exception (usually) because there are 
> multiple entries in the cache for the table, but the {{null}} partition key 
> makes the lookup act like it is looking up the master table, so the invariant 
> check for the master table {{Preconditions.checkState(entries.size() <= 1)}} 
> fails.
> As a workaround, users can set {{keepAlivePeriodMs}} on {{KuduReadOptions}} 
> to something very large like {{Long.MAX_VALUE}}; or, if using the default 
> source, pass the {{kudu.keepAlivePeriodMs}} spark config with a very large 
> value. Note that there also has to be something causing keep alive requests 
> to fail and retry, and this is relatively rare (in my experience).
> To fix, we'll need to make sure that keep alive RPCs act like scan RPCs, and 
> are always retried on the same server as the one currently open for scanning 
> (or no op if there is no such server).
> Also, it's not wise to keep the default implementation in KuduRpc-- 
> subclasses ought to have to make an explicit choice about the default 
> partition key, which is a proxy for which tablet they will go to.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KUDU-2676) Restore: Support creating tables with greater than the maximum allowed number of partitions

2019-02-08 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke reassigned KUDU-2676:
-

Assignee: Will Berkeley

> Restore: Support creating tables with greater than the maximum allowed number 
> of partitions
> ---
>
> Key: KUDU-2676
> URL: https://issues.apache.org/jira/browse/KUDU-2676
> Project: Kudu
>  Issue Type: Bug
>Affects Versions: 1.8.0
>Reporter: Grant Henke
>Assignee: Will Berkeley
>Priority: Major
>  Labels: backup
>
> Currently it is possible to back up a table that has more partitions than are 
> allowed at create time. 
> This results in the restore job failing with the following exception:
> {noformat}
> 19/01/24 08:17:14 INFO backup.KuduRestore$: Restoring from path: 
> hdfs:///user/ghenke/kudu-backup-tests/20190124-080741
> Exception in thread "main" org.apache.kudu.client.NonRecoverableException: 
> the requested number of tablet replicas is over the maximum permitted at 
> creation time (
> 450), additional tablets may be added by adding range partitions to the table 
> post-creation
> at 
> org.apache.kudu.client.KuduException.transformException(KuduException.java:110)
> at 
> org.apache.kudu.client.KuduClient.joinAndHandleException(KuduClient.java:365)
> at org.apache.kudu.client.KuduClient.createTable(KuduClient.java:109)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KUDU-2676) Restore: Support creating tables with greater than the maximum allowed number of partitions

2019-02-08 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke resolved KUDU-2676.
---
   Resolution: Fixed
Fix Version/s: 1.10.0

Resolved via 
[ce60d64|https://github.com/apache/kudu/commit/ce60d6408f5ac0dd4f9f53ce2ab9a9ce76aad211].

> Restore: Support creating tables with greater than the maximum allowed number 
> of partitions
> ---
>
> Key: KUDU-2676
> URL: https://issues.apache.org/jira/browse/KUDU-2676
> Project: Kudu
>  Issue Type: Bug
>Affects Versions: 1.8.0
>Reporter: Grant Henke
>Assignee: Will Berkeley
>Priority: Major
>  Labels: backup
> Fix For: 1.10.0
>
>
> Currently it is possible to back up a table that has more partitions than are 
> allowed at create time. 
> This results in the restore job failing with the following exception:
> {noformat}
> 19/01/24 08:17:14 INFO backup.KuduRestore$: Restoring from path: 
> hdfs:///user/ghenke/kudu-backup-tests/20190124-080741
> Exception in thread "main" org.apache.kudu.client.NonRecoverableException: 
> the requested number of tablet replicas is over the maximum permitted at 
> creation time (
> 450), additional tablets may be added by adding range partitions to the table 
> post-creation
> at 
> org.apache.kudu.client.KuduException.transformException(KuduException.java:110)
> at 
> org.apache.kudu.client.KuduClient.joinAndHandleException(KuduClient.java:365)
> at org.apache.kudu.client.KuduClient.createTable(KuduClient.java:109)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KUDU-2689) Make PartialRow.add methods fluent

2019-02-06 Thread Grant Henke (JIRA)
Grant Henke created KUDU-2689:
-

 Summary: Make PartialRow.add methods fluent
 Key: KUDU-2689
 URL: https://issues.apache.org/jira/browse/KUDU-2689
 Project: Kudu
  Issue Type: Improvement
  Components: client
Affects Versions: 1.8.0
Reporter: Grant Henke


Today when populating a partial row the user needs to specify each value on a 
new line:

{code:java}
PartialRow row = schema.newPartialRow();
row.addInt("col1", 1);
row.addString("col2", "hello");
row.addBoolean("col3", false);
{code}

By adjusting all of the add methods to return `this` a user could build the row 
fluently:

{code:java}
PartialRow row = schema.newPartialRow()
   .addInt("col1", 1)
   .addString("col2", "hello")
   .addBoolean("col3", false);
{code}




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KUDU-1990) Support Hadoop 3 when available

2019-02-05 Thread Grant Henke (JIRA)


[ 
https://issues.apache.org/jira/browse/KUDU-1990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16761307#comment-16761307
 ] 

Grant Henke commented on KUDU-1990:
---

We upgraded the Hadoop dependency to 3.x in 
[133ef7e|https://github.com/apache/kudu/commit/133ef7e9dc469927a5b5e6ecec2cb24af91719ac#diff-c8766e11ebe92a5ae9ba91b23e73ab03].

> Support Hadoop 3 when available
> ---
>
> Key: KUDU-1990
> URL: https://issues.apache.org/jira/browse/KUDU-1990
> Project: Kudu
>  Issue Type: Task
>  Components: client
>Reporter: Jean-Daniel Cryans
>Priority: Minor
>
> The Hadoop project is in the process of releasing 3.0.0, and according to 
> this page it should happen in August: 
> https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+3.0.0+release
> The only exposure we really have is in the kudu-mapreduce Java module, we 
> should make sure that we can support the new version when it comes out.
> Would be interesting to try building against an alpha release.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KUDU-1990) Support Hadoop 3 when available

2019-02-05 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-1990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke resolved KUDU-1990.
---
   Resolution: Fixed
 Assignee: Grant Henke
Fix Version/s: 1.8.0

> Support Hadoop 3 when available
> ---
>
> Key: KUDU-1990
> URL: https://issues.apache.org/jira/browse/KUDU-1990
> Project: Kudu
>  Issue Type: Task
>  Components: client
>Reporter: Jean-Daniel Cryans
>Assignee: Grant Henke
>Priority: Minor
> Fix For: 1.8.0
>
>
> The Hadoop project is in the process of releasing 3.0.0, and according to 
> this page it should happen in August: 
> https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+3.0.0+release
> The only exposure we really have is in the kudu-mapreduce Java module, we 
> should make sure that we can support the new version when it comes out.
> Would be interesting to try building against an alpha release.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KUDU-2676) [Backup] Support restoring tables over the maximum allowed replicas

2019-01-28 Thread Grant Henke (JIRA)


[ 
https://issues.apache.org/jira/browse/KUDU-2676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16754543#comment-16754543
 ] 

Grant Henke commented on KUDU-2676:
---

I am not sure exactly what the limiting factor is behind the current replica 
limit. Perhaps finding a way to create a table with more replicas is also an 
option. 

It seems that if a client can write code that sequentially creates range 
partitions to avoid creating too many replicas at once, the server could do 
the same when a range partitioned table is created. 
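
A hedged Java sketch of that sequential approach, creating the table with one 
initial range and adding the rest via alter; the schema, table name, and 
bounds are all made up:

{code:java}
import java.util.Arrays;
import org.apache.kudu.ColumnSchema;
import org.apache.kudu.Schema;
import org.apache.kudu.Type;
import org.apache.kudu.client.*;

// Hedged sketch: create a table with a single range partition, then add the
// remaining ranges one at a time so no single step exceeds the create-time
// replica limit. All names and bounds are hypothetical.
public class SequentialRangeCreate {
  public static void main(String[] args) throws KuduException {
    KuduClient client = new KuduClient.KuduClientBuilder("master1:7051").build();
    Schema schema = new Schema(Arrays.asList(
        new ColumnSchema.ColumnSchemaBuilder("key", Type.INT64).key(true).build()));
    CreateTableOptions create = new CreateTableOptions()
        .setRangePartitionColumns(Arrays.asList("key"));
    PartialRow lower = schema.newPartialRow();
    lower.addLong("key", 0);
    PartialRow upper = schema.newPartialRow();
    upper.addLong("key", 1000);
    create.addRangePartition(lower, upper); // first range at create time
    client.createTable("restored_table", schema, create);
    // Add the remaining ranges one by one, staying under the cap.
    for (long start = 1000; start < 10000; start += 1000) {
      PartialRow lo = schema.newPartialRow();
      lo.addLong("key", start);
      PartialRow hi = schema.newPartialRow();
      hi.addLong("key", start + 1000);
      client.alterTable("restored_table",
          new AlterTableOptions().addRangePartition(lo, hi));
    }
    client.close();
  }
}
{code}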

> [Backup] Support restoring tables over the maximum allowed replicas
> ---
>
> Key: KUDU-2676
> URL: https://issues.apache.org/jira/browse/KUDU-2676
> Project: Kudu
>  Issue Type: Bug
>Affects Versions: 1.8.0
>Reporter: Grant Henke
>Priority: Major
>  Labels: backup
>
> Currently it is possible to back up a table that has more partitions than are 
> allowed at create time. 
> This results in the restore job failing with the following exception:
> {noformat}
> 19/01/24 08:17:14 INFO backup.KuduRestore$: Restoring from path: 
> hdfs:///user/ghenke/kudu-backup-tests/20190124-080741
> Exception in thread "main" org.apache.kudu.client.NonRecoverableException: 
> the requested number of tablet replicas is over the maximum permitted at 
> creation time (
> 450), additional tablets may be added by adding range partitions to the table 
> post-creation
> at 
> org.apache.kudu.client.KuduException.transformException(KuduException.java:110)
> at 
> org.apache.kudu.client.KuduClient.joinAndHandleException(KuduClient.java:365)
> at org.apache.kudu.client.KuduClient.createTable(KuduClient.java:109)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KUDU-2670) Splitting more tasks for spark job, and add more concurrent for scan operation

2019-01-28 Thread Grant Henke (JIRA)


[ 
https://issues.apache.org/jira/browse/KUDU-2670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16754376#comment-16754376
 ] 

Grant Henke commented on KUDU-2670:
---

[~yangz] If you have an older, un-rebased WIP patch, I would be interested to 
see it. This work would be very beneficial to the backup jobs that we are 
working on now. I am happy to help get this work into Kudu any way I can.



> Splitting more tasks for spark job, and add more concurrent for scan operation
> --
>
> Key: KUDU-2670
> URL: https://issues.apache.org/jira/browse/KUDU-2670
> Project: Kudu
>  Issue Type: Improvement
>  Components: java, spark
>Affects Versions: 1.8.0
>Reporter: yangz
>Priority: Major
>  Labels: backup, performance
>
> Refer to the KUDU-2437 Split a tablet into primary key ranges by size.
> We need a Java client implementation to support splitting a tablet scan 
> into primary key ranges.
> We suggest two new implementations for the Java client:
>  # A ConcurrentKuduScanner that runs multiple scanners at the same time. 
> This is useful in one case: we scan for a single row, but the predicate 
> doesn't contain the primary key, so we send many scan requests and only one 
> row comes back. Sending that many scan requests one by one is slow, so we 
> need a concurrent way. In our tests on a 10G tablet this saves a lot of 
> time on one machine.
>  # A way to split work into more Spark tasks. To do so, we fetch scan tokens 
> in two steps: first we ask the tserver for key ranges, then with those 
> ranges we build more scan tokens. For our usage a tablet is 10G, but we 
> split each task to process only 1G of data, so we get better performance.
> These features have run well for us for half a year. We hope they will be 
> useful for the community.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KUDU-1868) Java client mishandles socket read timeouts for scans

2019-01-28 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-1868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KUDU-1868:
--
Labels: backup  (was: )

> Java client mishandles socket read timeouts for scans
> -
>
> Key: KUDU-1868
> URL: https://issues.apache.org/jira/browse/KUDU-1868
> Project: Kudu
>  Issue Type: Bug
>  Components: client
>Affects Versions: 1.2.0
>Reporter: Jean-Daniel Cryans
>Assignee: Will Berkeley
>Priority: Major
>  Labels: backup
>
> Scan calls from the Java client that take more than the socket read timeout 
> get retried (unless the operation timeout has expired) instead of being 
> killed. Users will see this:
> {code}
> org.apache.kudu.client.NonRecoverableException: Invalid call sequence ID in 
> scan request
> {code}
> Note that the right behavior here would still end up killing the scanner, so 
> this is really a problem the user has to deal with! It's usually caused by 
> slow IO combined with very selective scans.
> Workaround: set defaultSocketReadTimeoutMs higher, ideally equal to 
> defaultOperationTimeoutMs (the defaults are 10 and 30 seconds respectively). 
> But really the user should investigate why the individual scans are so slow.
> One potentially easy fix for this is to handle retries differently for 
> scanners so that the user gets a nicer exception. A harder fix is to handle 
> socket read timeouts completely differently; basically it should be per-RPC 
> and not per TabletClient like it is right now.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KUDU-2676) [Backup] Support restoring tables over the maximum allowed replicas

2019-01-28 Thread Grant Henke (JIRA)
Grant Henke created KUDU-2676:
-

 Summary: [Backup] Support restoring tables over the maximum 
allowed replicas
 Key: KUDU-2676
 URL: https://issues.apache.org/jira/browse/KUDU-2676
 Project: Kudu
  Issue Type: Bug
Affects Versions: 1.8.0
Reporter: Grant Henke


Currently it is possible to back up a table that has more partitions than are 
allowed at create time. 

This results in the restore job failing with the following exception:

{noformat}
19/01/24 08:17:14 INFO backup.KuduRestore$: Restoring from path: 
hdfs:///user/ghenke/kudu-backup-tests/20190124-080741
Exception in thread "main" org.apache.kudu.client.NonRecoverableException: the 
requested number of tablet replicas is over the maximum permitted at creation 
time (
450), additional tablets may be added by adding range partitions to the table 
post-creation
at 
org.apache.kudu.client.KuduException.transformException(KuduException.java:110)
at 
org.apache.kudu.client.KuduClient.joinAndHandleException(KuduClient.java:365)
at org.apache.kudu.client.KuduClient.createTable(KuduClient.java:109)

{noformat}





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KUDU-1220) Improve bulk loads from multiple sequential writers

2019-01-28 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-1220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KUDU-1220:
--
Component/s: backup

> Improve bulk loads from multiple sequential writers
> ---
>
> Key: KUDU-1220
> URL: https://issues.apache.org/jira/browse/KUDU-1220
> Project: Kudu
>  Issue Type: Improvement
>  Components: backup, perf
>Affects Versions: Public beta
>Reporter: Jean-Daniel Cryans
>Assignee: Todd Lipcon
>Priority: Major
> Attachments: orderkeys.py, write-pattern.png
>
>
> We ran some experiments loading lineitem at scale factor 15k. The 10 nodes 
> cluster (1 master, 9 TS) is equipped with Intel P3700 SSDs, one per TS, 
> dedicated for the WALs. The table is hash-partitioned and set to have 10 
> tablets per TS.
> Our findings :
> - Reading the bloom filters puts a lot of contention on the block cache. This 
> isn't new, see KUDU-613, but it's now coming up when writing because the SSDs 
> are just really fast.
> - Kudu performs best when data is inserted in order, but with hash 
> partitioning we end up multiple clients writing simultaneously in different 
> key ranges in each tablet. This becomes a worst case scenario, we have to 
> compact (optimize) the row sets over and over again to put them in order. 
> Even if we were to delay this to the end of the bulk load, we're still taking 
> a hit because we have to look at more and more bloom filters to check if a 
> row currently exists or not.
> - In the case of an initial bulk load, we know we're not trying to overwrite 
> rows or update them, so all those checks are unnecessary.
> Some ideas for improvements:
> - Obviously, we need a better block cache.
> - When flushing, we could detect those disjoint set of rows and make sure 
> that maps to row sets that don't cover the gaps. For example, if the MRS has 
> a,b,c,x,y,z then flushing would give us two row sets eg a,b,c and x,y,z 
> instead of one. The danger here is generating too many row sets.
> - When reading, to have the row set interval tree be smart enough to not send 
> readers into the row set gaps. Again with the same example, let's say we're 
> looking for "m", normally we'd see a row set that's a-z so we'd have to check 
> its bloom filter, but if we could detect that it's actually a-c then x-z then 
> we'd save a check.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KUDU-2672) Spark write to kudu, too many machines write to one tserver.

2019-01-28 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KUDU-2672:
--
Labels: backup performance  (was: performance)

> Spark write to kudu, too many machines write to one tserver.
> 
>
> Key: KUDU-2672
> URL: https://issues.apache.org/jira/browse/KUDU-2672
> Project: Kudu
>  Issue Type: Improvement
>  Components: java, spark
>Affects Versions: 1.8.0
>Reporter: yangz
>Priority: Major
>  Labels: backup, performance
>
> For the Spark use case: we sometimes use Spark to write data to Kudu, such 
> as importing a Hive table into a Kudu table.
> There are two problems with the current implementation:
>  # It uses FlushMode.AUTO_FLUSH_BACKGROUND, which is not efficient for error 
> processing. When an error such as a timeout happens, it still flushes all 
> the data in the task, then fails the task and retries at the task level. 
>  # For the write path, Spark uses its default hash partitioning to split the 
> data, and that hashing does not always match the tablet distribution. For a 
> big Hive table of 500G, Spark may create 2000 tasks while we only have 20 
> tserver machines, so as many as 2000 executors write at the same time to 20 
> tservers. This hurts performance in two ways. First, the tserver takes a row 
> lock per primary key, so there is a lot of lock waiting; in the worst case 
> writes always time out. Second, so many machines write data at the same time 
> to each tserver, with no throttling anywhere in the code.
> So we suggest two things:
>  # Change the flush mode to MANUAL_FLUSH and process errors at the row level 
> first, and only then at the task level (a sketch follows below).
>  # Add an optional repartition step in Spark that repartitions the data by 
> the tablet distribution, so only one machine writes to each tserver and the 
> lock contention goes away.
> We have used this feature for some time and it solves real problems when 
> writing big tables from Spark. I hope it will be useful for the community, 
> which uses Spark with Kudu a lot. 
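
As a rough sketch of suggestion #1 above, manual flushing with row-level error 
handling in the Java client might look like this (the table and column names 
are hypothetical):

{code:java}
import org.apache.kudu.client.Insert;
import org.apache.kudu.client.KuduClient;
import org.apache.kudu.client.KuduSession;
import org.apache.kudu.client.KuduTable;
import org.apache.kudu.client.OperationResponse;
import org.apache.kudu.client.SessionConfiguration;

public class ManualFlushWrite {
  public static void main(String[] args) throws Exception {
    try (KuduClient client =
             new KuduClient.KuduClientBuilder("master-host:7051").build()) {
      KuduTable table = client.openTable("my_table");
      KuduSession session = client.newSession();
      session.setFlushMode(SessionConfiguration.FlushMode.MANUAL_FLUSH);

      for (long key = 0; key < 500; key++) {
        Insert insert = table.newInsert();
        insert.getRow().addLong("key", key);
        session.apply(insert);  // buffered locally; nothing is sent yet
      }

      // Flush explicitly and inspect each row's result, so an error such as
      // a timeout can be handled per row before failing the whole task.
      for (OperationResponse resp : session.flush()) {
        if (resp.hasRowError()) {
          System.err.println("Row failed: " + resp.getRowError());
        }
      }
      session.close();
    }
  }
}
{code}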



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KUDU-1868) Java client mishandles socket read timeouts for scans

2019-01-25 Thread Grant Henke (JIRA)


[ 
https://issues.apache.org/jira/browse/KUDU-1868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16752595#comment-16752595
 ] 

Grant Henke commented on KUDU-1868:
---

Would it be reasonable to simply adjust the default to match the configured 
operation timeout unless explicitly set?
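
In the meantime, a caller can line the two timeouts up manually through the 
client builder, which is what a changed default would automate (sketch; the 
values are arbitrary):

{code:java}
import org.apache.kudu.client.KuduClient;

// Make the socket read timeout match the operation timeout so a slow scan
// RPC is not retried (and the scanner killed) before the operation expires.
KuduClient client = new KuduClient.KuduClientBuilder("master-host:7051")
    .defaultOperationTimeoutMs(60000)
    .defaultSocketReadTimeoutMs(60000)
    .build();
{code}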

> Java client mishandles socket read timeouts for scans
> -
>
> Key: KUDU-1868
> URL: https://issues.apache.org/jira/browse/KUDU-1868
> Project: Kudu
>  Issue Type: Bug
>  Components: client
>Affects Versions: 1.2.0
>Reporter: Jean-Daniel Cryans
>Assignee: Will Berkeley
>Priority: Major
>
> Scan calls from the Java client that take more than the socket read timeout 
> get retried (unless the operation timeout has expired) instead of being 
> killed. Users will see this:
> {code}
> org.apache.kudu.client.NonRecoverableException: Invalid call sequence ID in 
> scan request
> {code}
> Note that the right behavior here would still end up killing the scanner, so 
> this is really a problem the user has to deal with! It's usually caused by 
> slow IO combined with very selective scans.
> Workaround: set defaultSocketReadTimeoutMs higher, ideally equal to 
> defaultOperationTimeoutMs (the defaults are 10 and 30 seconds respectively). 
> But really the user should investigate why the individual scans are so slow.
> One potentially easy fix for this is to handle retries differently for 
> scanners so that the user gets a nicer exception. A harder fix is to handle 
> socket read timeouts completely differently; basically it should be per-RPC 
> and not per TabletClient like it is right now.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KUDU-2674) Add Java KuduPartitioner API

2019-01-25 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KUDU-2674:
--
Description: 
We should port the client side KuduPartitioner implementation from KUDU-1713 
([https://gerrit.cloudera.org/#/c/5775/]) to the Java client. 

This would allow Spark and other Java integrations to repartition and pre-sort 
the data before writing to Kudu. 

  was:
We should port the client side KuduPartitioner implementation from KUDU-1713 
([https://gerrit.cloudera.org/#/c/5775/)] to the Java client. 

This would allow Spark and other Java integrations to repartition and pre-sort 
the data before writing to Kudu. 


> Add Java KuduPartitioner API
> 
>
> Key: KUDU-2674
> URL: https://issues.apache.org/jira/browse/KUDU-2674
> Project: Kudu
>  Issue Type: Improvement
>Reporter: Grant Henke
>Assignee: Grant Henke
>Priority: Major
>
> We should port the client side KuduPartitioner implementation from KUDU-1713 
> ([https://gerrit.cloudera.org/#/c/5775/]) to the Java client. 
> This would allow Spark and other Java integrations to repartition and 
> pre-sort the data before writing to Kudu. 
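
If the port mirrors the C++ API, usage might look roughly like this sketch 
(the class and method names are assumptions until the patch lands; the table 
and column are hypothetical):

{code:java}
import org.apache.kudu.client.KuduClient;
import org.apache.kudu.client.KuduPartitioner;
import org.apache.kudu.client.KuduTable;
import org.apache.kudu.client.PartialRow;

try (KuduClient client =
         new KuduClient.KuduClientBuilder("master-host:7051").build()) {
  KuduTable table = client.openTable("my_table");
  KuduPartitioner partitioner =
      new KuduPartitioner.KuduPartitionerBuilder(table).build();

  PartialRow row = table.getSchema().newPartialRow();
  row.addLong("key", 42L);

  // Spark could repartition a DataFrame by this index so that each task
  // writes rows belonging to a single tablet, pre-sorted by primary key.
  int partitionIndex = partitioner.partitionRow(row);
  System.out.println(partitionIndex + " of " + partitioner.numPartitions());
}
{code}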



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KUDU-2674) Add Java KuduPartitioner API

2019-01-25 Thread Grant Henke (JIRA)
Grant Henke created KUDU-2674:
-

 Summary: Add Java KuduPartitioner API
 Key: KUDU-2674
 URL: https://issues.apache.org/jira/browse/KUDU-2674
 Project: Kudu
  Issue Type: Improvement
Reporter: Grant Henke
Assignee: Grant Henke


We should port the client side KuduPartitioner implementation from KUDU-1713 
([https://gerrit.cloudera.org/#/c/5775/)] to the Java client. 

This would allow Spark and other Java integrations to repartition and pre-sort 
the data before writing to Kudu. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KUDU-2671) Change hash number for range partitioning

2019-01-24 Thread Grant Henke (JIRA)


[ 
https://issues.apache.org/jira/browse/KUDU-2671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751272#comment-16751272
 ] 

Grant Henke commented on KUDU-2671:
---

[~yangz] I removed the fix version because we cannot say it is fixed until the 
patch is comitted. 

> Change hash number for range partitioning
> -
>
> Key: KUDU-2671
> URL: https://issues.apache.org/jira/browse/KUDU-2671
> Project: Kudu
>  Issue Type: Improvement
>  Components: client, java, master, server
>Affects Versions: 1.8.0
>Reporter: yangz
>Priority: Major
>  Labels: feature
> Fix For: 1.8.0
>
> Attachments: 屏幕快照 2019-01-24 下午12.03.41.png
>
>
> For our usage, the Kudu schema design isn't flexible enough.
> We create our tables with day-range partitions such as dt='20181112', like a 
> Hive table. But our data size changes a lot from day to day: one day it is 
> 50G, another day it is 500G. That makes it hard to choose the hash schema. If 
> the hash number is too big it is wasteful in most cases, but if it is too 
> small there is a performance problem when the amount of data is large.
>  
> So we suggest a solution: change the hash number based on a table's 
> historical data. For example:
>  # we create the schema with an estimated value;
>  # we collect the data size for each day range;
>  # we create each new day-range partition using the collected day size.
> We have used this feature for half a year and it works well. We hope it will 
> be useful for the community. Maybe the solution isn't complete; please help 
> us make it better.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KUDU-2671) Change hash number for range partitioning

2019-01-24 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KUDU-2671:
--
Fix Version/s: (was: 1.8.0)

> Change hash number for range partitioning
> -
>
> Key: KUDU-2671
> URL: https://issues.apache.org/jira/browse/KUDU-2671
> Project: Kudu
>  Issue Type: Improvement
>  Components: client, java, master, server
>Affects Versions: 1.8.0
>Reporter: yangz
>Priority: Major
>  Labels: feature
> Attachments: 屏幕快照 2019-01-24 下午12.03.41.png
>
>
> For our usage, the Kudu schema design isn't flexible enough.
> We create our tables with day-range partitions such as dt='20181112', like a 
> Hive table. But our data size changes a lot from day to day: one day it is 
> 50G, another day it is 500G. That makes it hard to choose the hash schema. If 
> the hash number is too big it is wasteful in most cases, but if it is too 
> small there is a performance problem when the amount of data is large.
>  
> So we suggest a solution: change the hash number based on a table's 
> historical data. For example:
>  # we create the schema with an estimated value;
>  # we collect the data size for each day range;
>  # we create each new day-range partition using the collected day size.
> We have used this feature for half a year and it works well. We hope it will 
> be useful for the community. Maybe the solution isn't complete; please help 
> us make it better.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KUDU-2671) Change hash number for range partitioning

2019-01-24 Thread Grant Henke (JIRA)


[ 
https://issues.apache.org/jira/browse/KUDU-2671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751272#comment-16751272
 ] 

Grant Henke edited comment on KUDU-2671 at 1/24/19 4:16 PM:


[~yangz] I removed the fix version because we cannot say it is fixed until the 
patch is committed. 


was (Author: granthenke):
[~yangz] I removed the fix version because we cannot say it is fixed until the 
patch is comitted. 

> Change hash number for range partitioning
> -
>
> Key: KUDU-2671
> URL: https://issues.apache.org/jira/browse/KUDU-2671
> Project: Kudu
>  Issue Type: Improvement
>  Components: client, java, master, server
>Affects Versions: 1.8.0
>Reporter: yangz
>Priority: Major
>  Labels: feature
> Fix For: 1.8.0
>
> Attachments: 屏幕快照 2019-01-24 下午12.03.41.png
>
>
> For our usage, the Kudu schema design isn't flexible enough.
> We create our tables with day-range partitions such as dt='20181112', like a 
> Hive table. But our data size changes a lot from day to day: one day it is 
> 50G, another day it is 500G. That makes it hard to choose the hash schema. If 
> the hash number is too big it is wasteful in most cases, but if it is too 
> small there is a performance problem when the amount of data is large.
>  
> So we suggest a solution: change the hash number based on a table's 
> historical data. For example:
>  # we create the schema with an estimated value;
>  # we collect the data size for each day range;
>  # we create each new day-range partition using the collected day size.
> We have used this feature for half a year and it works well. We hope it will 
> be useful for the community. Maybe the solution isn't complete; please help 
> us make it better.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KUDU-2671) Change hash number for range partitioning

2019-01-24 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KUDU-2671:
--
Fix Version/s: (was: 1.8.0)

> Change hash number for range partitioning
> -
>
> Key: KUDU-2671
> URL: https://issues.apache.org/jira/browse/KUDU-2671
> Project: Kudu
>  Issue Type: Improvement
>  Components: client, java, master, server
>Affects Versions: 1.8.0
>Reporter: yangz
>Priority: Major
> Attachments: 屏幕快照 2019-01-24 下午12.03.41.png
>
>
> For our usage, the Kudu schema design isn't flexible enough.
> We create our tables with day-range partitions such as dt='20181112', like a 
> Hive table. But our data size changes a lot from day to day: one day it is 
> 50G, another day it is 500G. That makes it hard to choose the hash schema. If 
> the hash number is too big it is wasteful in most cases, but if it is too 
> small there is a performance problem when the amount of data is large.
>  
> So we suggest a solution: change the hash number based on a table's 
> historical data. For example:
>  # we create the schema with an estimated value;
>  # we collect the data size for each day range;
>  # we create each new day-range partition using the collected day size.
> We have used this feature for half a year and it works well. We hope it will 
> be useful for the community. Maybe the solution isn't complete; please help 
> us make it better.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KUDU-2670) Splitting more tasks for spark job, and add more concurrent for scan operation

2019-01-23 Thread Grant Henke (JIRA)


[ 
https://issues.apache.org/jira/browse/KUDU-2670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16750692#comment-16750692
 ] 

Grant Henke commented on KUDU-2670:
---

I think the first step to implement this is to expose the work in KUDU-2437 
via client APIs. That could be done in its own patch as part of this jira. 

I then think #2 you listed is the most widely beneficial and should be 
straightforward to implement once the client APIs exist. 

I am not sure I fully understand the approach for #1 above. I understand you 
want to look up a single row without the key. However, I am not sure sending a 
ton of concurrent requests to Kudu is a good idea. It could result in the rpc 
queue filling up with a spike of new requests. That said, I am not sure I have 
a better answer off the top of my head. I will think about this though. 

 

> Splitting more tasks for spark job, and add more concurrent for scan operation
> --
>
> Key: KUDU-2670
> URL: https://issues.apache.org/jira/browse/KUDU-2670
> Project: Kudu
>  Issue Type: Improvement
>  Components: java, spark
>Affects Versions: 1.8.0
>Reporter: yangz
>Priority: Major
>  Labels: performance
>
> Refer to the KUDU-2437 Split a tablet into primary key ranges by size.
> We need a Java client implementation to support splitting a tablet scan 
> into primary key ranges.
> We suggest two new implementations for the Java client:
>  # A ConcurrentKuduScanner that runs multiple scanners at the same time. 
> This is useful in one case: we scan for a single row, but the predicate 
> doesn't contain the primary key, so we send many scan requests and only one 
> row comes back. Sending that many scan requests one by one is slow, so we 
> need a concurrent way. In our tests on a 10G tablet this saves a lot of 
> time on one machine.
>  # A way to split work into more Spark tasks. To do so, we fetch scan tokens 
> in two steps: first we ask the tserver for key ranges, then with those 
> ranges we build more scan tokens. For our usage a tablet is 10G, but we 
> split each task to process only 1G of data, so we get better performance.
> These features have run well for us for half a year. We hope they will be 
> useful for the community.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KUDU-2670) Splitting more tasks for spark job, and add more concurrent for scan operation

2019-01-23 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KUDU-2670:
--
Fix Version/s: (was: 1.8.0)

> Splitting more tasks for spark job, and add more concurrent for scan operation
> --
>
> Key: KUDU-2670
> URL: https://issues.apache.org/jira/browse/KUDU-2670
> Project: Kudu
>  Issue Type: Improvement
>  Components: java, spark
>Affects Versions: 1.8.0
>Reporter: yangz
>Priority: Major
>  Labels: performance
>
> Refer to the KUDU-2437 Split a tablet into primary key ranges by size.
> We need a Java client implementation to support splitting a tablet scan 
> into primary key ranges.
> We suggest two new implementations for the Java client:
>  # A ConcurrentKuduScanner that runs multiple scanners at the same time. 
> This is useful in one case: we scan for a single row, but the predicate 
> doesn't contain the primary key, so we send many scan requests and only one 
> row comes back. Sending that many scan requests one by one is slow, so we 
> need a concurrent way. In our tests on a 10G tablet this saves a lot of 
> time on one machine.
>  # A way to split work into more Spark tasks. To do so, we fetch scan tokens 
> in two steps: first we ask the tserver for key ranges, then with those 
> ranges we build more scan tokens. For our usage a tablet is 10G, but we 
> split each task to process only 1G of data, so we get better performance.
> These features have run well for us for half a year. We hope they will be 
> useful for the community.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KUDU-2669) Automate/Standardize the release process

2019-01-23 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KUDU-2669:
--
Target Version/s: 1.9.0

> Automate/Standardize the release process
> 
>
> Key: KUDU-2669
> URL: https://issues.apache.org/jira/browse/KUDU-2669
> Project: Kudu
>  Issue Type: Improvement
>Reporter: Grant Henke
>Priority: Major
>
> We recently saw an issue where the docs generated by a release were wrong 
> because we released on a mac and that resulted in different effective 
> defaults.
> In this case it was code like this that caused the issue: 
> {code:java}
> #ifndef __APPLE__
> static constexpr bool kDefaultSystemAuthToLocal = true;
> #else
> // macOS's Heimdal library has a no-op implementation of
> // krb5_aname_to_localname, so instead we just use the simple
> // implementation.
> static constexpr bool kDefaultSystemAuthToLocal = false;
> {code}
> Additionally the release process is fairly manual. We should leverage the 
> docker work to standardize a release environment and automated process to 
> ensure a consistent reproducible release. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KUDU-2669) Automate/Standardize the release process

2019-01-23 Thread Grant Henke (JIRA)
Grant Henke created KUDU-2669:
-

 Summary: Automate/Standardize the release process
 Key: KUDU-2669
 URL: https://issues.apache.org/jira/browse/KUDU-2669
 Project: Kudu
  Issue Type: Improvement
Reporter: Grant Henke


We recently saw an issue where the docs generated by a release were wrong 
because we released on a mac and that resulted in different effective defaults.

In this case it was code like this that caused the issue: 
{code:java}
#ifndef __APPLE__
static constexpr bool kDefaultSystemAuthToLocal = true;
#else
// macOS's Heimdal library has a no-op implementation of
// krb5_aname_to_localname, so instead we just use the simple
// implementation.
static constexpr bool kDefaultSystemAuthToLocal = false;
{code}
Additionally the release process is fairly manual. We should leverage the 
docker work to standardize a release environment and automated process to 
ensure a consistent reproducible release. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KUDU-2666) kudu spark intergration taskRead Locality Level is RACK_LOCAL

2019-01-23 Thread Grant Henke (JIRA)


[ 
https://issues.apache.org/jira/browse/KUDU-2666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16750240#comment-16750240
 ] 

Grant Henke commented on KUDU-2666:
---

I looked at recent Spark runs I have done and have seen NODE_LOCAL tasks. 

Are you sure that your Spark executor nodes are the same machines as your Kudu 
nodes in the cluster? 

Adjusting the `spark.locality.wait` configuration could also help improve 
locality: http://spark.apache.org/docs/latest/configuration.html#scheduling
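
For example, raising the wait gives the scheduler more time to place a task on 
the node hosting a tablet replica before it falls back to RACK_LOCAL (sketch; 
the value is workload-dependent):

{code:java}
import org.apache.spark.SparkConf;

SparkConf conf = new SparkConf()
    .set("spark.locality.wait", "10s");  // default is 3s
{code}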

> kudu spark intergration taskRead Locality Level is RACK_LOCAL
> -
>
> Key: KUDU-2666
> URL: https://issues.apache.org/jira/browse/KUDU-2666
> Project: Kudu
>  Issue Type: Improvement
>  Components: spark
>Affects Versions: 1.8.0
>Reporter: wkhapy123
>Priority: Major
> Attachments: 1.png, 2.png
>
>
> Spark version: 2.3.0
> My Kudu cluster has 3 nodes, with 3 replicas per tablet.
> When I use a SparkContext to read a Kudu table, the task locality level is 
> RACK_LOCAL. How can it be made NODE_LOCAL?
> The query looks like this:
> spark.sqlContext.sql(s"select * from tablea where event_day>=1546185600 and 
> tenant_id=1 and channel_id='15850513729' limit 1 ").collect



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KUDU-2166) Kudu Python package needs refresh from 1.2.0

2019-01-21 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke resolved KUDU-2166.
---
   Resolution: Fixed
Fix Version/s: 1.7.0

Bindings have been published since 1.7.0: 
https://pypi.org/project/kudu-python/#history

> Kudu Python package needs refresh from 1.2.0
> 
>
> Key: KUDU-2166
> URL: https://issues.apache.org/jira/browse/KUDU-2166
> Project: Kudu
>  Issue Type: Bug
>  Components: client
>Affects Versions: 1.4.0
>Reporter: Mladen Kovacevic
>Assignee: Grant Henke
>Priority: Major
> Fix For: 1.7.0
>
>
> PyPI kudu-python 1.2.0 package needs a refresh.
> The encodings are out of date with that specific client package (namely 
> bitshuffle and dict are missing). Most likely we're missing at least this 
> commit for "KUDU-1691 - [python] Updated Column Encoding Types" included in 
> 1.3.0 or later.
> The instructions also say that Cython is only required when installing from 
> source.
> When I ran with pip3, I would get:
> {code:none}
> $ sudo /usr/local/bin/pip3 install kudu-python
> Collecting kudu-python
>   Using cached kudu-python-1.2.0.tar.gz
> Complete output from command python setup.py egg_info:
> Traceback (most recent call last):
>   File "", line 1, in 
>   File "/tmp/pip-build-d6gtx9d6/kudu-python/setup.py", line 21, in 
> 
> from Cython.Distutils import build_ext
> ImportError: No module named 'Cython'
> {code}
> So it would seem that even installing via pip requires Cython.
> Finally, Python 3.6 is out now, and perhaps this package should be enabled to 
> work with 3.6 as well (I think current max is 3.5)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KUDU-2332) Error when importing kudu package in Python Anaconda distribution

2019-01-21 Thread Grant Henke (JIRA)


[ 
https://issues.apache.org/jira/browse/KUDU-2332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16748298#comment-16748298
 ] 

Grant Henke commented on KUDU-2332:
---

Was this ever resolved? There is a newer version that could be tried.

> Error when importing kudu package in Python Anaconda distribution
> -
>
> Key: KUDU-2332
> URL: https://issues.apache.org/jira/browse/KUDU-2332
> Project: Kudu
>  Issue Type: Bug
>  Components: python
>Affects Versions: 1.2.0
> Environment: ProductName:Mac OS X
> ProductVersion:10.13.3
> BuildVersion:17D102
> Anaconda Python distribution
>Reporter: Michał Sznajder
>Priority: Minor
> Attachments: error.png
>
>
> I tried to install Kudu on my local machine:
> ProductName:    Mac OS X
> ProductVersion:    10.13.3
> BuildVersion:    17D102
> I followed all the steps to build Kudu 1.6 from source:
> 1. brew install autoconf automake cmake libtool pkg-config pstree
> 2. git clone https://github.com/apache/incubator-kudu kudu
> 3. cd kudu
> 4. PKG_CONFIG_PATH=/usr/local/Cellar/openssl/1.0.2n/lib/pkgconfig 
> thirdparty/build-if-necessary.sh
> 5. mkdir -p build/release
> 6. cd build/release
> 7. PKG_CONFIG_PATH=/usr/local/Cellar/openssl/1.0.2n/lib/pkgconfig 
> ../../thirdparty/installed/common/bin/cmake \
>   -DCMAKE_BUILD_TYPE=release \
>   -DOPENSSL_ROOT_DIR=/usr/local/opt/openssl \
>   ../..
> 8. make -j4
> 9. sudo make install
> This resulted with following libraries installed:
>  
> /usr/local/include/kudu
> /usr/local/include/kudu/util/kudu_export.h
> /usr/local/lib/libkudu_client.0.1.0.dylib
> /usr/local/lib/libkudu_client.dylib
> /usr/local/lib/libkudu_client.0.dylib
> /usr/local/share/kuduClient
> /usr/local/share/kuduClient/cmake/kuduClientTargets.cmake
> /usr/local/share/kuduClient/cmake/kuduClientTargets-release.cmake
> /usr/local/share/kuduClient/cmake/kuduClientConfig.cmake
> /usr/local/share/doc/kuduClient
>  
> Then I followed the steps to install the kudu-python package using pip:
> 1. clean pip cache to make sure it is clean
> 2. pip install -v kudu-python
> Then after calling:
> import kudu
> I got error like in the attachment "error.png".
> As first line of this screen states it was Anaconda Python distribution.
> After removing Anaconda Python and installing Python using Homebrew and again 
> following above steps - all worked.
> My conclusion: there is some kind of issue happening between Anaconda Python 
> and kudu-python package.
> Some more details are also in 
> [https://getkudu.slack.com|https://getkudu.slack.com/] slack channel on 
> #kudu-general channel.
> I am reachable there under msznajder nickname.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KUDU-2166) Kudu Python package needs refresh from 1.2.0

2019-01-21 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke reassigned KUDU-2166:
-

Assignee: Grant Henke

> Kudu Python package needs refresh from 1.2.0
> 
>
> Key: KUDU-2166
> URL: https://issues.apache.org/jira/browse/KUDU-2166
> Project: Kudu
>  Issue Type: Bug
>  Components: client
>Affects Versions: 1.4.0
>Reporter: Mladen Kovacevic
>Assignee: Grant Henke
>Priority: Major
>
> PyPI kudu-python 1.2.0 package needs a refresh.
> The encodings are out of date with that specific client package (namely 
> bitshuffle and dict are missing). Most likely we're missing at least this 
> commit for "KUDU-1691 - [python] Updated Column Encoding Types" included in 
> 1.3.0 or later.
> The instructions also say that Cython is only required when installing from 
> source.
> When I ran with pip3, I would get:
> {code:none}
> $ sudo /usr/local/bin/pip3 install kudu-python
> Collecting kudu-python
>   Using cached kudu-python-1.2.0.tar.gz
> Complete output from command python setup.py egg_info:
> Traceback (most recent call last):
>   File "", line 1, in 
>   File "/tmp/pip-build-d6gtx9d6/kudu-python/setup.py", line 21, in 
> 
> from Cython.Distutils import build_ext
> ImportError: No module named 'Cython'
> {code}
> So it would seem that even installing via pip requires Cython.
> Finally, Python 3.6 is out now, and perhaps this package should be enabled to 
> work with 3.6 as well (I think current max is 3.5)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KUDU-2640) Add a KuduSink for Spark Structured Streaming

2019-01-17 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke resolved KUDU-2640.
---
   Resolution: Fixed
Fix Version/s: 1.9.0

This was committed in 
[969c46e|https://github.com/apache/kudu/commit/969c46e0dc59c04537282ddc5c2d377f09b8fc17].

> Add a KuduSink for Spark Structured Streaming
> -
>
> Key: KUDU-2640
> URL: https://issues.apache.org/jira/browse/KUDU-2640
> Project: Kudu
>  Issue Type: New Feature
>Affects Versions: 1.8.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>Priority: Major
> Fix For: 1.9.0
>
>
> Today writing to Kudu from spark takes some clever usage of the KuduContext. 
> This Jira tracks adding a fully configurable KuduSink so that direct usage of 
> the KuduContext is not required. 
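
With the sink committed, a structured streaming write might look roughly like 
this (a sketch assuming the sink registers under the "kudu" format name and 
reuses the kudu-spark option keys; the table name and source are placeholders):

{code:java}
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

SparkSession spark = SparkSession.builder().appName("kudu-sink").getOrCreate();
Dataset<Row> events = spark.readStream().format("rate").load();

// Stream straight into Kudu without using KuduContext directly.
events.writeStream()
    .format("kudu")
    .option("kudu.master", "master-host:7051")
    .option("kudu.table", "my_table")
    .option("checkpointLocation", "/tmp/kudu-sink-checkpoint")
    .start();
{code}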



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KUDU-2646) kudu restart the tablets stats from INITIALIZED change to RUNNING cost a few days

2018-12-18 Thread Grant Henke (JIRA)


[ 
https://issues.apache.org/jira/browse/KUDU-2646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16724669#comment-16724669
 ] 

Grant Henke commented on KUDU-2646:
---

Without any details it is hard to assess this problem. Generally the best way 
to get help is to first report the issue you are seeing, with as much context 
and information as possible, on the user mailing list. Any bugs or issues 
identified there can result in a jira. 

I will close this Jira for now and we can re-open it if needed.

 

> kudu restart the tablets stats from INITIALIZED change to RUNNING cost a few 
> days
> -
>
> Key: KUDU-2646
> URL: https://issues.apache.org/jira/browse/KUDU-2646
> Project: Kudu
>  Issue Type: Bug
>Reporter: qinzl_1
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KUDU-2646) kudu restart the tablets stats from INITIALIZED change to RUNNING cost a few days

2018-12-18 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke resolved KUDU-2646.
---
   Resolution: Invalid
Fix Version/s: n/a

> kudu restart the tablets stats from INITIALIZED change to RUNNING cost a few 
> days
> -
>
> Key: KUDU-2646
> URL: https://issues.apache.org/jira/browse/KUDU-2646
> Project: Kudu
>  Issue Type: Bug
>Reporter: qinzl_1
>Priority: Major
> Fix For: n/a
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KUDU-2640) Add a KuduSink for Spark Structured Streaming

2018-12-13 Thread Grant Henke (JIRA)
Grant Henke created KUDU-2640:
-

 Summary: Add a KuduSink for Spark Structured Streaming
 Key: KUDU-2640
 URL: https://issues.apache.org/jira/browse/KUDU-2640
 Project: Kudu
  Issue Type: New Feature
Affects Versions: 1.8.0
Reporter: Grant Henke
Assignee: Grant Henke


Today writing to Kudu from spark takes some clever usage of the KuduContext. 
This Jira tracks adding a fully configurable KuduSink so that direct usage of 
the KuduContext is not required. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KUDU-2637) Add a note about leadership imbalance in the faq

2018-12-12 Thread Grant Henke (JIRA)
Grant Henke created KUDU-2637:
-

 Summary: Add a note about leadership imbalance in the faq
 Key: KUDU-2637
 URL: https://issues.apache.org/jira/browse/KUDU-2637
 Project: Kudu
  Issue Type: Improvement
  Components: documentation
Reporter: Grant Henke


There have been a few questions on leadership imbalance and whether or not it 
is important to monitor and fix. We should update the FAQ section to address 
this. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KUDU-1686) Add API to split a scan token into smaller scans

2018-12-10 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-1686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke resolved KUDU-1686.
---
   Resolution: Fixed
 Assignee: Grant Henke
Fix Version/s: n/a

> Add API to split a scan token into smaller scans
> 
>
> Key: KUDU-1686
> URL: https://issues.apache.org/jira/browse/KUDU-1686
> Project: Kudu
>  Issue Type: Improvement
>  Components: client, perf, tserver
>Affects Versions: 1.0.0
>Reporter: Todd Lipcon
>Assignee: Grant Henke
>Priority: Major
> Fix For: n/a
>
>
> Quoting from the scan token design doc:
> {quote}
> Eventually, the scan token API should allow applications to further split 
> scan tokens so that inter-tablet parallelism can be achieved. Splitting 
> tokens may be achieved by assigning the child tokens non-overlapping sections 
> of the primary key range. Even without the token splitting feature built in 
> to the API, applications can simulate the effect by building multiple sets of 
> scan tokens using non-overlapping sets of primary key bounds. However, it is 
> likely that in the future Kudu will be able to choose a more optimal primary 
> key split point than the application, perhaps through an internal tablet 
> statistics API. Additionally, having the API built in to the Kudu client 
> further decreases the effort required to write high performance integrations 
> for Kudu.
> {quote}
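
The simulation mentioned in the quote can already be written against the Java 
client by bounding two token sets on a split key (sketch; the table and its 
INT64 key column "key" are hypothetical):

{code:java}
import java.util.List;

import org.apache.kudu.client.KuduClient;
import org.apache.kudu.client.KuduScanToken;
import org.apache.kudu.client.KuduTable;
import org.apache.kudu.client.PartialRow;

try (KuduClient client =
         new KuduClient.KuduClientBuilder("master-host:7051").build()) {
  KuduTable table = client.openTable("my_table");

  PartialRow split = table.getSchema().newPartialRow();
  split.addLong("key", 500000L);

  // Two token sets over non-overlapping primary key ranges; each set can be
  // handed to a different worker for parallel scanning of the same tablets.
  List<KuduScanToken> lowerHalf = client.newScanTokenBuilder(table)
      .exclusiveUpperBound(split)
      .build();
  List<KuduScanToken> upperHalf = client.newScanTokenBuilder(table)
      .lowerBound(split)
      .build();
}
{code}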



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KUDU-1563) Add support for INSERT IGNORE

2018-12-07 Thread Grant Henke (JIRA)


[ 
https://issues.apache.org/jira/browse/KUDU-1563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16713014#comment-16713014
 ] 

Grant Henke commented on KUDU-1563:
---

I am reading through and catching up on this. I think it would definitely be a 
nice feature to have. 

It looks like [~danburkert] also mentioned both the operation level setting and 
the session level setting: 

bq. I think I'm in favor of merging the current patch which introduces an 
INSERT IGNORE operation to ignore constraint violations of type 1 on the server 
side. Additionally, we should strongly consider adding a session-specific 
options to selectively ignore each type of constraint individually. So for 
example, the client could use the INSERT IGNORE operation type if they want to 
selectively ignore some instances of duplicate primary-key constraints, or it 
could call KuduSession::ignoreDuplicatePrimaryKeyViolations to ignore all of 
them for the entire session.

I agree that the intuitive place to define the expected behavior would be on 
the operation. I am not sure there is a big benefit to having both, but having 
it be session-based only seems to reduce the flexibility of what a client can 
do.
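
As a sketch of the operation-level shape (everything here is hypothetical; no 
such API exists in the client today):

{code:java}
// Hypothetical: an operation type that tells the server to ignore
// duplicate-key errors for this row only. "client", "session", and the
// table/column names are assumed to exist.
KuduTable table = client.openTable("my_table");
Operation op = table.newInsertIgnore();  // hypothetical factory method
op.getRow().addLong("key", 42L);
session.apply(op);  // could be batched with normal INSERT/UPSERT/DELETE ops
{code}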

 





> Add support for INSERT IGNORE
> -
>
> Key: KUDU-1563
> URL: https://issues.apache.org/jira/browse/KUDU-1563
> Project: Kudu
>  Issue Type: New Feature
>Reporter: Dan Burkert
>Assignee: Brock Noland
>Priority: Major
>  Labels: newbie
>
> The Java client currently has an [option to ignore duplicate row key errors| 
> https://kudu.apache.org/apidocs/org/kududb/client/AsyncKuduSession.html#setIgnoreAllDuplicateRows-boolean-],
>  which is implemented by filtering the errors on the client side.  If we are 
> going to continue to support this feature (and the consensus seems to be that 
> we probably should), we should promote it to a first class operation type 
> that is handled on the server side.  This would have a modest perf. 
> improvement since fewer errors are returned, and it would allow INSERT IGNORE 
> ops to be mixed in the same batch as other INSERT, DELETE, UPSERT, etc. ops.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KUDU-2437) Split a tablet into primary key ranges by size

2018-11-14 Thread Grant Henke (JIRA)


[ 
https://issues.apache.org/jira/browse/KUDU-2437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686980#comment-16686980
 ] 

Grant Henke commented on KUDU-2437:
---

Hi [~oclarms],

I see the server side support has been contributed. Are you interested in 
contributing Java client and Spark support? 

Thanks,
Grant

> Split a tablet into primary key ranges by size
> --
>
> Key: KUDU-2437
> URL: https://issues.apache.org/jira/browse/KUDU-2437
> Project: Kudu
>  Issue Type: Improvement
>  Components: client, tablet
>Reporter: Xu Yao
>Assignee: Xu Yao
>Priority: Major
> Fix For: 1.8.0
>
>
> When reading data from a Kudu table using Spark, if there is a large amount 
> of data in a tablet, reading the data takes a long time. The reason is that 
> KuduRDD generates one scan token per tablet, so a single Spark task has to 
> process all the data in that tablet. 
> We think the TabletServer should provide an RPC interface that can split a 
> tablet into multiple primary key ranges by size. The kudu-client can then 
> choose whether to perform a parallel scan as appropriate.
> RPC interface:
> {code:java}
> // A split key range request. Split tablet to key ranges, the request
> // doesn't change layout of tablet.
> message SplitKeyRangeRequestPB {
>  required bytes tablet_id = 1;
>  // Encoded primary key to begin scanning at (inclusive).
>  optional bytes start_primary_key = 2 [(kudu.REDACT) = true];
>  // Encoded primary key to stop scanning at (exclusive).
>  optional bytes stop_primary_key = 3 [(kudu.REDACT) = true];
>  // Number of bytes to try to return in each chunk. This is a hint.
>  // The tablet server may return chunks larger or smaller than this value.
>  optional uint64 target_chunk_size_bytes = 4;
>  // The columns to consider when chunking.
>  // If specified, then the size estimate used for 'target_chunk_size_bytes'
>  // should only include these columns. This can be used if a query will
>  // only scan a certain subset of the columns.
>  repeated ColumnSchemaPB columns = 5;
> }
> // The primary key range of a Kudu tablet.
> message KeyRangePB {
>  // Encoded primary key to begin scanning at (inclusive).
>  optional bytes start_primary_key = 1 [(kudu.REDACT) = true];
>  // Encoded primary key to stop scanning at (exclusive).
>  optional bytes stop_primary_key = 2 [(kudu.REDACT) = true];
>  // Number of bytes in chunk.
>  required uint64 size_bytes_estimates = 3;
> }
> message SplitKeyRangeResponsePB {
>  // The error, if an error occurred with this request.
>  optional TabletServerErrorPB error = 1;
>  repeated KeyRangePB ranges = 2;
> }
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KUDU-2584) Flaky testSimpleBackupAndRestore

2018-11-13 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke resolved KUDU-2584.
---
   Resolution: Fixed
Fix Version/s: 1.9.0

> Flaky testSimpleBackupAndRestore
> 
>
> Key: KUDU-2584
> URL: https://issues.apache.org/jira/browse/KUDU-2584
> Project: Kudu
>  Issue Type: Bug
>  Components: backup
>Reporter: Mike Percy
>Assignee: Grant Henke
>Priority: Major
> Fix For: 1.9.0
>
> Attachments: TEST-org.apache.kudu.backup.TestKuduBackup.xml
>
>
> testSimpleBackupAndRestore is flaky and tends to fail with the following 
> error:
> {code:java}
> 04:48:06.604 [ERROR - Test worker] (RetryRule.java:72) 
> testRandomBackupAndRestore(org.apache.kudu.backup.TestKuduBackup): failed run 
> 1 
> java.lang.AssertionError: expected:<111> but was:<110> 
> at org.junit.Assert.fail(Assert.java:88) 
> at org.junit.Assert.failNotEquals(Assert.java:834) 
> at org.junit.Assert.assertEquals(Assert.java:645) 
> at org.junit.Assert.assertEquals(Assert.java:631) 
> at 
> org.apache.kudu.backup.TestKuduBackup.testRandomBackupAndRestore(TestKuduBackup.scala:99)
>  
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  
> at java.lang.reflect.Method.invoke(Method.java:483) 
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.apache.kudu.junit.RetryRule$RetryStatement.evaluate(RetryRule.java:68) 
> at org.junit.rules.RunRules.evaluate(RunRules.java:20) 
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) 
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>  
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>  
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) 
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) 
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) 
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) 
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) 
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363) 
> at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:106)
>  
> at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>  
> at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>  
> at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:66)
>  
> at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>  
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  
> at java.lang.reflect.Method.invoke(Method.java:483) 
> at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>  
> at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>  
> at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>  
> at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>  
> at com.sun.proxy.$Proxy2.processTestClass(Unknown Source) 
> at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:117)
>  
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  
> at java.lang.reflect.Method.invoke(Method.java:483) 
> at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>  
> at 
> 

[jira] [Commented] (KUDU-2584) Flaky testSimpleBackupAndRestore

2018-11-13 Thread Grant Henke (JIRA)


[ 
https://issues.apache.org/jira/browse/KUDU-2584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16685296#comment-16685296
 ] 

Grant Henke commented on KUDU-2584:
---

Resolved via 
[aa20ef0|https://github.com/apache/kudu/commit/aa20ef0576cd9e2cf4a035ecdf6dbd746d94c586].

> Flaky testSimpleBackupAndRestore
> 
>
> Key: KUDU-2584
> URL: https://issues.apache.org/jira/browse/KUDU-2584
> Project: Kudu
>  Issue Type: Bug
>  Components: backup
>Reporter: Mike Percy
>Assignee: Grant Henke
>Priority: Major
> Attachments: TEST-org.apache.kudu.backup.TestKuduBackup.xml
>
>
> testSimpleBackupAndRestore is flaky and tends to fail with the following 
> error:
> {code:java}
> 04:48:06.604 [ERROR - Test worker] (RetryRule.java:72) 
> testRandomBackupAndRestore(org.apache.kudu.backup.TestKuduBackup): failed run 
> 1 
> java.lang.AssertionError: expected:<111> but was:<110> 
> at org.junit.Assert.fail(Assert.java:88) 
> at org.junit.Assert.failNotEquals(Assert.java:834) 
> at org.junit.Assert.assertEquals(Assert.java:645) 
> at org.junit.Assert.assertEquals(Assert.java:631) 
> at 
> org.apache.kudu.backup.TestKuduBackup.testRandomBackupAndRestore(TestKuduBackup.scala:99)
>  
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  
> at java.lang.reflect.Method.invoke(Method.java:483) 
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.apache.kudu.junit.RetryRule$RetryStatement.evaluate(RetryRule.java:68) 
> at org.junit.rules.RunRules.evaluate(RunRules.java:20) 
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) 
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>  
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>  
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) 
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) 
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) 
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) 
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) 
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363) 
> at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:106)
>  
> at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>  
> at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>  
> at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:66)
>  
> at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>  
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  
> at java.lang.reflect.Method.invoke(Method.java:483) 
> at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>  
> at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>  
> at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>  
> at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>  
> at com.sun.proxy.$Proxy2.processTestClass(Unknown Source) 
> at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:117)
>  
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  
> at java.lang.reflect.Method.invoke(Method.java:483) 
> at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>  
> at 
> 

[jira] [Resolved] (KUDU-2599) Timeout in DefaultSourceTest.testSocketReadTimeoutPropagation

2018-11-13 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke resolved KUDU-2599.
---
   Resolution: Fixed
Fix Version/s: 1.9.0

Resolved via 
[aa20ef0|https://github.com/apache/kudu/commit/aa20ef0576cd9e2cf4a035ecdf6dbd746d94c586].

> Timeout in DefaultSourceTest.testSocketReadTimeoutPropagation
> -
>
> Key: KUDU-2599
> URL: https://issues.apache.org/jira/browse/KUDU-2599
> Project: Kudu
>  Issue Type: Bug
>  Components: spark
>Affects Versions: 1.8.0
>Reporter: Will Berkeley
>Assignee: Adar Dembo
>Priority: Major
> Fix For: 1.9.0
>
> Attachments: DefaultSourceTestFailure-snippet.txt, 
> TEST-org.apache.kudu.spark.kudu.DefaultSourceTest.xml
>
>
> Log attached
> Here is the relevant stack trace:
> {code:java}
>  classname="org.apache.kudu.spark.kudu.DefaultSourceTest" time="19.927">
>  type="java.security.PrivilegedActionException">java.security.PrivilegedActionException:
>  org.apache.kudu.client.NoLeaderFoundException: Master config 
> (127.23.212.125:33436,127.23.212.126:35133,127.23.212.124:41615) has no 
> leader. Exceptions received: org.apache.kudu.client.RecoverableException: 
> [peer master-127.23.212.125:33436(127.23.212.125:33436)] encountered a read 
> timeout; closing the channel,org.apache.kudu.client.RecoverableException: 
> [peer master-127.23.212.126:35133(127.23.212.126:35133)] encountered a read 
> timeout; closing the channel,org.apache.kudu.client.RecoverableException: 
> [peer master-127.23.212.124:41615(127.23.212.124:41615)] encountered a read 
> timeout; closing the channel
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:360)
> at org.apache.kudu.spark.kudu.KuduContext.init(KuduContext.scala:122)
> at 
> org.apache.kudu.spark.kudu.KuduRelation.init(DefaultSource.scala:212)
> at 
> org.apache.kudu.spark.kudu.DefaultSource.createRelation(DefaultSource.scala:101)
> at 
> org.apache.kudu.spark.kudu.DefaultSource.createRelation(DefaultSource.scala:76)
> at 
> org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:341)
> at 
> org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:239)
> at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:227)
> at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:164)
> at 
> org.apache.kudu.spark.kudu.package$KuduDataFrameReader.kudu(package.scala:30)
> at 
> org.apache.kudu.spark.kudu.DefaultSourceTest.testSocketReadTimeoutPropagation(DefaultSourceTest.scala:924)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:483)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> at org.apache.kudu.junit.RetryRule$RetryStatement.evaluate(RetryRule.java:72)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:106)
> at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
> at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
> at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:66)
> at 
> 

[jira] [Commented] (KUDU-2599) Timeout in DefaultSourceTest.testSocketReadTimeoutPropagation

2018-11-08 Thread Grant Henke (JIRA)


[ 
https://issues.apache.org/jira/browse/KUDU-2599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16680369#comment-16680369
 ] 

Grant Henke commented on KUDU-2599:
---

I modified the tests to output VLOG level 1 messages and inline the failing 
exception. Attached is a snippet of the failing log. It's not immediately clear 
why the following exception is periodically occurring:
{code:java}
java.security.PrivilegedActionException: 
org.apache.kudu.client.NoLeaderFoundException: Master config 
(127.14.104.60:38345,127.14.104.61:42699,127.14.104.62:45145) has no 
leader.
20:04:13.043{code}
[^DefaultSourceTestFailure-snippet.txt]

> Timeout in DefaultSourceTest.testSocketReadTimeoutPropagation
> -
>
> Key: KUDU-2599
> URL: https://issues.apache.org/jira/browse/KUDU-2599
> Project: Kudu
>  Issue Type: Bug
>  Components: spark
>Affects Versions: 1.8.0
>Reporter: Will Berkeley
>Priority: Major
> Attachments: DefaultSourceTestFailure-snippet.txt, 
> TEST-org.apache.kudu.spark.kudu.DefaultSourceTest.xml
>
>
> Log attached
> Here is the relevant stack trace:
> {code:java}
> <testcase classname="org.apache.kudu.spark.kudu.DefaultSourceTest" time="19.927">
> <error type="java.security.PrivilegedActionException">java.security.PrivilegedActionException:
>  org.apache.kudu.client.NoLeaderFoundException: Master config 
> (127.23.212.125:33436,127.23.212.126:35133,127.23.212.124:41615) has no 
> leader. Exceptions received: org.apache.kudu.client.RecoverableException: 
> [peer master-127.23.212.125:33436(127.23.212.125:33436)] encountered a read 
> timeout; closing the channel,org.apache.kudu.client.RecoverableException: 
> [peer master-127.23.212.126:35133(127.23.212.126:35133)] encountered a read 
> timeout; closing the channel,org.apache.kudu.client.RecoverableException: 
> [peer master-127.23.212.124:41615(127.23.212.124:41615)] encountered a read 
> timeout; closing the channel
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:360)
> at org.apache.kudu.spark.kudu.KuduContext.init(KuduContext.scala:122)
> at 
> org.apache.kudu.spark.kudu.KuduRelation.init(DefaultSource.scala:212)
> at 
> org.apache.kudu.spark.kudu.DefaultSource.createRelation(DefaultSource.scala:101)
> at 
> org.apache.kudu.spark.kudu.DefaultSource.createRelation(DefaultSource.scala:76)
> at 
> org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:341)
> at 
> org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:239)
> at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:227)
> at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:164)
> at 
> org.apache.kudu.spark.kudu.package$KuduDataFrameReader.kudu(package.scala:30)
> at 
> org.apache.kudu.spark.kudu.DefaultSourceTest.testSocketReadTimeoutPropagation(DefaultSourceTest.scala:924)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:483)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> at org.apache.kudu.junit.RetryRule$RetryStatement.evaluate(RetryRule.java:72)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:106)
> at 
> 

[jira] [Updated] (KUDU-2599) Timeout in DefaultSourceTest.testSocketReadTimeoutPropagation

2018-11-08 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KUDU-2599:
--
Attachment: DefaultSourceTestFailure-snippet.txt

> Timeout in DefaultSourceTest.testSocketReadTimeoutPropagation
> -
>
> Key: KUDU-2599
> URL: https://issues.apache.org/jira/browse/KUDU-2599
> Project: Kudu
>  Issue Type: Bug
>  Components: spark
>Affects Versions: 1.8.0
>Reporter: Will Berkeley
>Priority: Major
> Attachments: DefaultSourceTestFailure-snippet.txt, 
> TEST-org.apache.kudu.spark.kudu.DefaultSourceTest.xml
>
>
> Log attached
> Here is the relevant stack trace:
> {code:java}
> <testcase classname="org.apache.kudu.spark.kudu.DefaultSourceTest" time="19.927">
> <error type="java.security.PrivilegedActionException">java.security.PrivilegedActionException:
>  org.apache.kudu.client.NoLeaderFoundException: Master config 
> (127.23.212.125:33436,127.23.212.126:35133,127.23.212.124:41615) has no 
> leader. Exceptions received: org.apache.kudu.client.RecoverableException: 
> [peer master-127.23.212.125:33436(127.23.212.125:33436)] encountered a read 
> timeout; closing the channel,org.apache.kudu.client.RecoverableException: 
> [peer master-127.23.212.126:35133(127.23.212.126:35133)] encountered a read 
> timeout; closing the channel,org.apache.kudu.client.RecoverableException: 
> [peer master-127.23.212.124:41615(127.23.212.124:41615)] encountered a read 
> timeout; closing the channel
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:360)
> at org.apache.kudu.spark.kudu.KuduContext.init(KuduContext.scala:122)
> at 
> org.apache.kudu.spark.kudu.KuduRelation.init(DefaultSource.scala:212)
> at 
> org.apache.kudu.spark.kudu.DefaultSource.createRelation(DefaultSource.scala:101)
> at 
> org.apache.kudu.spark.kudu.DefaultSource.createRelation(DefaultSource.scala:76)
> at 
> org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:341)
> at 
> org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:239)
> at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:227)
> at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:164)
> at 
> org.apache.kudu.spark.kudu.package$KuduDataFrameReader.kudu(package.scala:30)
> at 
> org.apache.kudu.spark.kudu.DefaultSourceTest.testSocketReadTimeoutPropagation(DefaultSourceTest.scala:924)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:483)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> at org.apache.kudu.junit.RetryRule$RetryStatement.evaluate(RetryRule.java:72)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:106)
> at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
> at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
> at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:66)
> at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 

[jira] [Commented] (KUDU-2402) Kudu Gerrit Sign-in link broken with Gerrit New UI

2018-11-08 Thread Grant Henke (JIRA)


[ 
https://issues.apache.org/jira/browse/KUDU-2402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16680270#comment-16680270
 ] 

Grant Henke commented on KUDU-2402:
---

Awesome! Thanks for fixing this tricky bug, Mike.

> Kudu Gerrit Sign-in link broken with Gerrit New UI
> --
>
> Key: KUDU-2402
> URL: https://issues.apache.org/jira/browse/KUDU-2402
> Project: Kudu
>  Issue Type: Bug
>  Components: project-infra
>Reporter: Mike Percy
>Assignee: Mike Percy
>Priority: Major
> Fix For: n/a
>
>
> Not sure if we need to upgrade the gerrit github plugin or what. The Sign In 
> link is broken after switching to the New UI in Gerrit. The URL I get is: 
> [https://gerrit.cloudera.org/login/%2Fq%2Fstatus%3Aopen] and that leads to a 
> 404 error.
> Sign-in seems to work fine after switching back to the "Old UI" in Gerrit.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KUDU-2356) Idle WALs can consume significant memory

2018-11-08 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KUDU-2356:
--
Fix Version/s: (was: 1.8.0)

> Idle WALs can consume significant memory
> 
>
> Key: KUDU-2356
> URL: https://issues.apache.org/jira/browse/KUDU-2356
> Project: Kudu
>  Issue Type: Improvement
>  Components: log, tserver
>Affects Versions: 1.7.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Major
> Attachments: heap.svg
>
>
> I grabbed a heap sample of a tserver which has been running a write workload 
> for a little while and found that 750MB of memory is used by faststring 
> allocations inside WritableLogSegment::WriteEntryBatch. It seems like this is 
> the 'compress_buf_' member. This buffer always resizes up during a log write 
> but never shrinks back down, even when the WAL is idle. We should consider 
> clearing the buffer after each append, or perhaps after a short timeout like 
> 100ms after a WAL becomes idle.
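
As a purely illustrative sketch of the proposed fix (written in Scala for 
readability; the real code is the C++ faststring 'compress_buf_' member, and all 
names below are hypothetical), the idea is a reusable buffer that grows to fit 
the largest batch seen but is dropped once the writer has been idle for a while:

{code:scala}
// Hypothetical illustration of the fix proposed above, not Kudu's actual code.
class ReusableWriteBuffer(idleTimeoutMs: Long) {
  private var buf: Array[Byte] = Array.emptyByteArray
  private var lastAppendMs: Long = 0L

  // Grows (but never shrinks) to satisfy the requested capacity.
  def ensureCapacity(n: Int): Array[Byte] = {
    if (buf.length < n) buf = new Array[Byte](n)
    lastAppendMs = System.currentTimeMillis()
    buf
  }

  // Called periodically: once the writer has been idle longer than the
  // timeout, drop the backing array so an idle WAL stops pinning memory
  // proportional to its largest batch.
  def maybeRelease(): Unit = {
    if (buf.length > 0 &&
        System.currentTimeMillis() - lastAppendMs > idleTimeoutMs) {
      buf = Array.emptyByteArray
    }
  }
}
{code}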



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KUDU-2356) Idle WALs can consume significant memory

2018-11-08 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KUDU-2356:
--
Target Version/s:   (was: 1.8.0)

> Idle WALs can consume significant memory
> 
>
> Key: KUDU-2356
> URL: https://issues.apache.org/jira/browse/KUDU-2356
> Project: Kudu
>  Issue Type: Improvement
>  Components: log, tserver
>Affects Versions: 1.7.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Major
> Attachments: heap.svg
>
>
> I grabbed a heap sample of a tserver which has been running a write workload 
> for a little while and found that 750MB of memory is used by faststring 
> allocations inside WritableLogSegment::WriteEntryBatch. It seems like this is 
> the 'compress_buf_' member. This buffer always resizes up during a log write 
> but never shrinks back down, even when the WAL is idle. We should consider 
> clearing the buffer after each append, or perhaps after a short timeout like 
> 100ms after a WAL becomes idle.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KUDU-2418) ksck should be able to auto-repair single replica tablets (with data loss)

2018-11-08 Thread Grant Henke (JIRA)


[ 
https://issues.apache.org/jira/browse/KUDU-2418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16679266#comment-16679266
 ] 

Grant Henke commented on KUDU-2418:
---

Was this done in 1.8.0? I suspect a bad fix version. Will remove it. Feel free 
to correct it though.

> ksck should be able to auto-repair single replica tablets (with data loss)
> --
>
> Key: KUDU-2418
> URL: https://issues.apache.org/jira/browse/KUDU-2418
> Project: Kudu
>  Issue Type: New Feature
>  Components: ksck
>Affects Versions: 1.7.0
>Reporter: Adar Dembo
>Priority: Major
> Fix For: 1.8.0
>
>
> There's an established Kudu workflow for manually "repairing" a tablet that 
> has only one working replica, using the unsafe_config_change CLI tool. I used 
> quotes around repairing because while it brings the tablet back to a healthy 
> state as far as Kudu is concerned, the tablet may have suffered data loss. In 
> some circumstances, however, that's something users are willing to accept.
> The problem is when this happens writ large, to an entire cluster. For 
> example, suppose a three node cluster hosting 1000 tablets loses two nodes. 
> It should be possible to automate this repair process so that users needn't 
> script it themselves.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KUDU-2616) DenseNodeTest Flake "Timed out waiting for Table Creation"

2018-10-29 Thread Grant Henke (JIRA)
Grant Henke created KUDU-2616:
-

 Summary: DenseNodeTest Flake "Timed out waiting for Table Creation"
 Key: KUDU-2616
 URL: https://issues.apache.org/jira/browse/KUDU-2616
 Project: Kudu
  Issue Type: Bug
Reporter: Grant Henke


I saw an apparently flaky failure of the DenseNodeTest:
{code:java}
test_workload.cc:283] Timed out: Timed out waiting for Table Creation
@ 0x7efeb39afc37 gsignal at ??:0
@ 0x7efeb39b3028 abort at ??:0
@ 0x7efecbf760b6 kudu::TestWorkload::Setup() at ??:0
@   0x545e44 kudu::DenseNodeTest_RunTest_Test::TestBody() at 
/data/somelongdirectorytoavoidrpathissues/src/kudu/src/kudu/integration-tests/dense_node-itest.cc:172
@ 0x7efecb7dab15 main at ??:0
@ 0x7efeb399af45 __libc_start_main at ??:0
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KUDU-2599) Timeout in DefaultSourceTest.testSocketReadTimeoutPropagation

2018-10-29 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KUDU-2599:
--
Description: 
Log attached

Here is the relevant stack trace:
{code:java}

java.security.PrivilegedActionException:
 org.apache.kudu.client.NoLeaderFoundException: Master config 
(127.23.212.125:33436,127.23.212.126:35133,127.23.212.124:41615) has no leader. 
Exceptions received: org.apache.kudu.client.RecoverableException: [peer 
master-127.23.212.125:33436(127.23.212.125:33436)] encountered a read timeout; 
closing the channel,org.apache.kudu.client.RecoverableException: [peer 
master-127.23.212.126:35133(127.23.212.126:35133)] encountered a read timeout; 
closing the channel,org.apache.kudu.client.RecoverableException: [peer 
master-127.23.212.124:41615(127.23.212.124:41615)] encountered a read timeout; 
closing the channel
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at org.apache.kudu.spark.kudu.KuduContext.init(KuduContext.scala:122)
at org.apache.kudu.spark.kudu.KuduRelation.init(DefaultSource.scala:212)
at 
org.apache.kudu.spark.kudu.DefaultSource.createRelation(DefaultSource.scala:101)
at 
org.apache.kudu.spark.kudu.DefaultSource.createRelation(DefaultSource.scala:76)
at 
org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:341)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:239)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:227)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:164)
at org.apache.kudu.spark.kudu.package$KuduDataFrameReader.kudu(package.scala:30)
at 
org.apache.kudu.spark.kudu.DefaultSourceTest.testSocketReadTimeoutPropagation(DefaultSourceTest.scala:924)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.apache.kudu.junit.RetryRule$RetryStatement.evaluate(RetryRule.java:72)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:106)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
at 
org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:66)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:117)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 

[jira] [Commented] (KUDU-2599) Timeout in DefaultSourceTest.testSocketReadTimeoutPropagation

2018-10-29 Thread Grant Henke (JIRA)


[ 
https://issues.apache.org/jira/browse/KUDU-2599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667206#comment-16667206
 ] 

Grant Henke commented on KUDU-2599:
---

It seems there are a few DefaultSourceTest cases that can be susceptible to 
this.

> Timeout in DefaultSourceTest.testSocketReadTimeoutPropagation
> -
>
> Key: KUDU-2599
> URL: https://issues.apache.org/jira/browse/KUDU-2599
> Project: Kudu
>  Issue Type: Bug
>  Components: spark
>Affects Versions: 1.8.0
>Reporter: Will Berkeley
>Priority: Major
> Attachments: TEST-org.apache.kudu.spark.kudu.DefaultSourceTest.xml
>
>
> Log attached



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KUDU-1404) Please delete old releases from mirroring system

2018-10-28 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-1404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke reassigned KUDU-1404:
-

Assignee: Grant Henke  (was: Jean-Daniel Cryans)

> Please delete old releases from mirroring system
> 
>
> Key: KUDU-1404
> URL: https://issues.apache.org/jira/browse/KUDU-1404
> Project: Kudu
>  Issue Type: Bug
> Environment: http://www.apache.org/dist/incubator/kudu/
>Reporter: Sebb
>Assignee: Grant Henke
>Priority: Major
> Fix For: n/a
>
>
> To reduce the load on the ASF mirrors, projects are required to delete old 
> releases [1]
> Please can you remove all non-current releases?
> Thanks!
> Note: you can still reference superseded versions from the download page, but 
> the links should be adjusted to point to the archive server.
> [1] http://www.apache.org/dev/release.html#when-to-archive



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KUDU-1404) Please delete old releases from mirroring system

2018-10-28 Thread Grant Henke (JIRA)


[ 
https://issues.apache.org/jira/browse/KUDU-1404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1664#comment-1664
 ] 

Grant Henke commented on KUDU-1404:
---

[~s...@apache.org] We just released 1.8.0, adding another version to the 
release directory. I suspect that is what triggered this request.

Our release guide mentions removing old releases in step #12 here: 
[https://github.com/apache/kudu/blob/master/RELEASING.adoc#release]

Our release directory contains 6 releases that are currently active: 

 
{noformat}
1.3.1, 1.4.0, 1.5.0, 1.6.0, 1.7.1, 1.8.0
{noformat}
The documentation states that we should keep one copy of each active branch, and 
we branch for each minor version. There are definitely people using the 1.3 and 
1.4 releases, but it is debatable whether we would backport fixes and cut new 
releases on the 1.3 and 1.4 branches, so perhaps we could remove those soon. 

That said, is there something that triggered this cleanup request? Is there a 
maximum number of "active development branches" we should be staying under? Is 
6 too many? 

> Please delete old releases from mirroring system
> 
>
> Key: KUDU-1404
> URL: https://issues.apache.org/jira/browse/KUDU-1404
> Project: Kudu
>  Issue Type: Bug
> Environment: http://www.apache.org/dist/incubator/kudu/
>Reporter: Sebb
>Assignee: Jean-Daniel Cryans
>Priority: Major
> Fix For: n/a
>
>
> To reduce the load on the ASF mirrors, projects are required to delete old 
> releases [1]
> Please can you remove all non-current releases?
> Thanks!
> Note: you can still reference superseded versions from the download page, but 
> the links should be adjusted to point to the archive server.
> [1] http://www.apache.org/dev/release.html#when-to-archive



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KUDU-2504) Add Kudu version number to header of docs pages

2018-10-26 Thread Grant Henke (JIRA)


[ 
https://issues.apache.org/jira/browse/KUDU-2504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16665227#comment-16665227
 ] 

Grant Henke commented on KUDU-2504:
---

Hi [~iramos]. Can you please follow the contribution guide 
[here|http://kudu.apache.org/docs/contributing.html] to submit the change to 
gerrit for review?



> Add Kudu version number to header of docs pages
> ---
>
> Key: KUDU-2504
> URL: https://issues.apache.org/jira/browse/KUDU-2504
> Project: Kudu
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 1.7.1
>Reporter: Mike Percy
>Assignee: Isaac Ramos
>Priority: Minor
>  Labels: newbie
>
> It is currently not easy to tell which version of the docs you are looking at 
> when you are on the "unversioned" section of the Kudu docs @ 
> [http://kudu.apache.org/docs/] – we should add a header or a little strip to 
> the top of each page that says something like "you are looking at version 
> 1.7.1 of the docs" or "you are looking at docs for version 1.8.0-SNAPSHOT 
> generated from Git commit eee82d90a54108f2d7e18e84ec0bbd391fcc129a"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KUDU-2504) Add Kudu version number to header of docs pages

2018-10-24 Thread Grant Henke (JIRA)


[ 
https://issues.apache.org/jira/browse/KUDU-2504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16663002#comment-16663002
 ] 

Grant Henke commented on KUDU-2504:
---

Great! I added you as a contributor and assigned the Jira to you [~iramos].

> Add Kudu version number to header of docs pages
> ---
>
> Key: KUDU-2504
> URL: https://issues.apache.org/jira/browse/KUDU-2504
> Project: Kudu
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 1.7.1
>Reporter: Mike Percy
>Assignee: Isaac Ramos
>Priority: Minor
>  Labels: newbie
>
> It is currently not easy to tell which version of the docs you are looking at 
> when you are on the "unversioned" section of the Kudu docs @ 
> [http://kudu.apache.org/docs/] – we should add a header or a little strip to 
> the top of each page that says something like "you are looking at version 
> 1.7.1 of the docs" or "you are looking at docs for version 1.8.0-SNAPSHOT 
> generated from Git commit eee82d90a54108f2d7e18e84ec0bbd391fcc129a"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KUDU-2504) Add Kudu version number to header of docs pages

2018-10-24 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke reassigned KUDU-2504:
-

Assignee: Isaac Ramos

> Add Kudu version number to header of docs pages
> ---
>
> Key: KUDU-2504
> URL: https://issues.apache.org/jira/browse/KUDU-2504
> Project: Kudu
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 1.7.1
>Reporter: Mike Percy
>Assignee: Isaac Ramos
>Priority: Minor
>  Labels: newbie
>
> It is currently not easy to tell which version of the docs you are looking at 
> when you are on the "unversioned" section of the Kudu docs @ 
> [http://kudu.apache.org/docs/] – we should add a header or a little strip to 
> the top of each page that says something like "you are looking at version 
> 1.7.1 of the docs" or "you are looking at docs for version 1.8.0-SNAPSHOT 
> generated from Git commit eee82d90a54108f2d7e18e84ec0bbd391fcc129a"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KUDU-2419) Add dist-test support for the Java build

2018-10-16 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke resolved KUDU-2419.
---
  Resolution: Fixed
Assignee: Grant Henke
   Fix Version/s: 1.8.0
Target Version/s: 1.8.0

This was fixed with a few commits, primarily 
[f5117d2|https://github.com/apache/kudu/commit/f5117d294a32ea9e4035bb4f8e9fb376dea68646].

> Add dist-test support for the Java build
> 
>
> Key: KUDU-2419
> URL: https://issues.apache.org/jira/browse/KUDU-2419
> Project: Kudu
>  Issue Type: Improvement
>Affects Versions: n/a
>Reporter: Grant Henke
>Assignee: Grant Henke
>Priority: Major
> Fix For: 1.8.0
>
>
> Adding Java/Gradle support to dist-test will allow for much faster pre-commit 
> checks and expose flaky tests in the same place as our C++ tests. 
> Currently the flaky failures are hidden in the logs and we have no idea how 
> common they are. 
> As an anecdotal point of reference, Gradle does not retry flaky tests and I 
> have had a hard time getting a clean pre-commit build.
> See WIP patches here:
> https://gerrit.cloudera.org/#/c/7579/
> https://gerrit.cloudera.org/#/c/9932/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KUDU-2223) Failed to add distributed masters: Unable to start Master at index 0

2018-10-16 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke resolved KUDU-2223.
---
   Resolution: Fixed
 Assignee: Grant Henke
Fix Version/s: NA

This should have been fixed with the latest series of testing improvements. Let 
me know if not.

> Failed to add distributed masters: Unable to start Master at index 0
> 
>
> Key: KUDU-2223
> URL: https://issues.apache.org/jira/browse/KUDU-2223
> Project: Kudu
>  Issue Type: Bug
>  Components: build, java
>Affects Versions: 1.6.0
>Reporter: Nacho García Fernández
>Assignee: Grant Henke
>Priority: Major
> Fix For: NA
>
>
> After successfully building Kudu on macOS, I try to run the mvn verify 
> command in the java submodule, but I get the following exception:
> {code:java}
> [INFO] ---
> [INFO]  T E S T S
> [INFO] ---
> [INFO] Running org.apache.kudu.client.TestAlterTable
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.42 
> s <<< FAILURE! - in org.apache.kudu.client.TestAlterTable
> [ERROR] org.apache.kudu.client.TestAlterTable  Time elapsed: 0.42 s  <<< 
> ERROR!
> org.apache.kudu.client.NonRecoverableException: Failed to add distributed 
> masters: Unable to start Master at index 0: 
> /Users/0xNacho/dev/github/kudu/build/latest/bin/kudu-master: process exited 
> on signal 6
> [INFO] Running org.apache.kudu.client.TestAsyncKuduClient
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.644 
> s <<< FAILURE! - in org.apache.kudu.client.TestAsyncKuduClient
> [ERROR] org.apache.kudu.client.TestAsyncKuduClient  Time elapsed: 0.644 s  
> <<< ERROR!
> org.apache.kudu.client.NonRecoverableException: Failed to add distributed 
> masters: Unable to start Master at index 0: 
> /Users/0xNacho/dev/github/kudu/build/latest/bin/kudu-master: process exited 
> on signal 6
>   at 
> org.apache.kudu.client.TestAsyncKuduClient.setUpBeforeClass(TestAsyncKuduClient.java:45)
> [INFO] Running org.apache.kudu.client.TestAsyncKuduSession
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.478 
> s <<< FAILURE! - in org.apache.kudu.client.TestAsyncKuduSession
> [ERROR] org.apache.kudu.client.TestAsyncKuduSession  Time elapsed: 0.478 s  
> <<< ERROR!
> org.apache.kudu.client.NonRecoverableException: Failed to add distributed 
> masters: Unable to start Master at index 0: 
> /Users/0xNacho/dev/github/kudu/build/latest/bin/kudu-master: process exited 
> on signal 6
>   at 
> org.apache.kudu.client.TestAsyncKuduSession.setUpBeforeClass(TestAsyncKuduSession.java:59)
> [INFO] Running org.apache.kudu.client.TestAuthnTokenReacquire
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.601 
> s <<< FAILURE! - in org.apache.kudu.client.TestAuthnTokenReacquire
> [ERROR] org.apache.kudu.client.TestAuthnTokenReacquire  Time elapsed: 0.601 s 
>  <<< ERROR!
> org.apache.kudu.client.NonRecoverableException: Failed to add distributed 
> masters: Unable to start Master at index 0: 
> /Users/0xNacho/dev/github/kudu/build/latest/bin/kudu-master: process exited 
> on signal 6
>   at 
> org.apache.kudu.client.TestAuthnTokenReacquire.setUpBeforeClass(TestAuthnTokenReacquire.java:55)
> [INFO] Running org.apache.kudu.client.TestAuthnTokenReacquireOpen
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.865 
> s <<< FAILURE! - in org.apache.kudu.client.TestAuthnTokenReacquireOpen
> [ERROR] org.apache.kudu.client.TestAuthnTokenReacquireOpen  Time elapsed: 
> 0.865 s  <<< ERROR!
> org.apache.kudu.client.NonRecoverableException: Failed to start a single 
> Master: /Users/0xNacho/dev/github/kudu/build/latest/bin/kudu-master: process 
> exited on signal 6
>   at 
> org.apache.kudu.client.TestAuthnTokenReacquireOpen.setUpBeforeClass(TestAuthnTokenReacquireOpen.java:58)
> [INFO] Running org.apache.kudu.client.TestBitSet
> [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.061 
> s - in org.apache.kudu.client.TestBitSet
> [INFO] Running org.apache.kudu.client.TestBytes
> [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.077 
> s - in org.apache.kudu.client.TestBytes
> [INFO] Running org.apache.kudu.client.TestClientFailoverSupport
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.399 
> s <<< FAILURE! - in org.apache.kudu.client.TestClientFailoverSupport
> [ERROR] org.apache.kudu.client.TestClientFailoverSupport  Time elapsed: 0.399 
> s  <<< ERROR!
> org.apache.kudu.client.NonRecoverableException: Failed to add distributed 
> masters: Unable to start Master at index 0: 
> 

[jira] [Assigned] (KUDU-2527) Add Describe Table Tool

2018-10-15 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke reassigned KUDU-2527:
-

Assignee: Will Berkeley  (was: Grant Henke)

> Add Describe Table Tool
> ---
>
> Key: KUDU-2527
> URL: https://issues.apache.org/jira/browse/KUDU-2527
> Project: Kudu
>  Issue Type: Improvement
>Reporter: Grant Henke
>Assignee: Will Berkeley
>Priority: Major
>
> Add a tool to describe a table on the CLI, showing information similar to the 
> table web UI. Perhaps include a verbosity flag or an option to choose which 
> "columns" of information to include. 
> Example: 
> {code}
> kudu table describe   ...
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KUDU-2602) testRandomBackupAndRestore is flaky

2018-10-11 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke reassigned KUDU-2602:
-

Assignee: Grant Henke

> testRandomBackupAndRestore is flaky
> ---
>
> Key: KUDU-2602
> URL: https://issues.apache.org/jira/browse/KUDU-2602
> Project: Kudu
>  Issue Type: Bug
>Reporter: Hao Hao
>Assignee: Grant Henke
>Priority: Major
> Fix For: NA
>
> Attachments: TEST-org.apache.kudu.backup.TestKuduBackup.xml
>
>
> Saw the following failure with testRandomBackupAndRestore:
> {noformat}
> java.lang.AssertionError: 
> expected:21 but was:20
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:834)
> at org.junit.Assert.assertEquals(Assert.java:645)
> at org.junit.Assert.assertEquals(Assert.java:631)
> at 
> org.apache.kudu.backup.TestKuduBackup.testRandomBackupAndRestore(TestKuduBackup.scala:99)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:483)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> at org.apache.kudu.junit.RetryRule$RetryStatement.evaluate(RetryRule.java:72)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:106)
> at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
> at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
> at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:66)
> at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:483)
> at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
> at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
> at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
> at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
> at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
> at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:117)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:483)
> at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
> at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
> at 
> org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:155)
> at 
> 

[jira] [Resolved] (KUDU-2602) testRandomBackupAndRestore is flaky

2018-10-11 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke resolved KUDU-2602.
---
   Resolution: Duplicate
Fix Version/s: NA

> testRandomBackupAndRestore is flaky
> ---
>
> Key: KUDU-2602
> URL: https://issues.apache.org/jira/browse/KUDU-2602
> Project: Kudu
>  Issue Type: Bug
>Reporter: Hao Hao
>Priority: Major
> Fix For: NA
>
> Attachments: TEST-org.apache.kudu.backup.TestKuduBackup.xml
>
>
> Saw the following failure with testRandomBackupAndRestore:
> {noformat}
> java.lang.AssertionError: 
> expected:21 but was:20
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:834)
> at org.junit.Assert.assertEquals(Assert.java:645)
> at org.junit.Assert.assertEquals(Assert.java:631)
> at 
> org.apache.kudu.backup.TestKuduBackup.testRandomBackupAndRestore(TestKuduBackup.scala:99)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:483)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> at org.apache.kudu.junit.RetryRule$RetryStatement.evaluate(RetryRule.java:72)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:106)
> at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
> at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
> at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:66)
> at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:483)
> at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
> at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
> at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
> at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
> at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
> at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:117)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:483)
> at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
> at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
> at 
> org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:155)
> at 
> 

[jira] [Created] (KUDU-2606) Use Spark's built in Avro support

2018-10-11 Thread Grant Henke (JIRA)
Grant Henke created KUDU-2606:
-

 Summary: Use Spark's built in Avro support
 Key: KUDU-2606
 URL: https://issues.apache.org/jira/browse/KUDU-2606
 Project: Kudu
  Issue Type: New Feature
Reporter: Grant Henke
Assignee: Grant Henke


In Spark 2.4, spark-avro becomes part of Spark itself. We should upgrade to 
Spark 2.4 when it is released and use the built-in spark-avro module.
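
For reference, a minimal sketch of what this looks like with the built-in 
module (paths below are placeholders, and the spark-avro artifact still has to 
be on the application classpath, e.g. via --packages). In Spark 2.4 the 
built-in Avro source is selected with the short format name "avro" rather than 
the old external "com.databricks.spark.avro" package:

{code:scala}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("avro-example").getOrCreate()

// Read and write Avro via the "avro" format name built into Spark 2.4.
val df = spark.read.format("avro").load("/tmp/input.avro")  // placeholder path
df.write.format("avro").save("/tmp/output")                 // placeholder path
{code}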



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KUDU-2563) Spark integration should use the scanner keep-alive API

2018-10-08 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke resolved KUDU-2563.
---
   Resolution: Fixed
Fix Version/s: 1.8.0

Resolved via 
[cf1b1f4|https://github.com/apache/kudu/commit/cf1b1f42cbcc3ee67477ddc44cd0ff5070f1caac].

> Spark integration should use the scanner keep-alive API
> ---
>
> Key: KUDU-2563
> URL: https://issues.apache.org/jira/browse/KUDU-2563
> Project: Kudu
>  Issue Type: Improvement
>  Components: client, spark
>Affects Versions: 1.7.1
>Reporter: Mike Percy
>Assignee: Grant Henke
>Priority: Major
> Fix For: 1.8.0
>
>
> The Spark integration should implement the scanner keep-alive API like the 
> Impala scanner does in order to avoid errors related to scanners timing out.
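
A rough sketch of the shape of this, assuming the keepAlive() method the Java 
client's KuduScanner gained in KUDU-2095 (processRow below is a placeholder, 
and this is not necessarily how the committed patch is structured):

{code:scala}
import org.apache.kudu.client.{KuduScanner, RowResult}

// Sketch: ping the server between row batches so that slow downstream
// processing does not let the server-side scanner expire.
def scanWithKeepAlive(scanner: KuduScanner)(processRow: RowResult => Unit): Unit = {
  while (scanner.hasMoreRows) {
    val rows = scanner.nextRows()
    while (rows.hasNext) {
      processRow(rows.next())
    }
    // Reset the server-side scanner TTL before fetching the next batch.
    scanner.keepAlive()
  }
  scanner.close()
}
{code}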



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KUDU-2186) Gradle shading of kudu-client test JAR is broken

2018-10-02 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke resolved KUDU-2186.
---
   Resolution: Fixed
Fix Version/s: 1.8.0

This was resolved via 
[fd1ffd0|https://github.com/apache/kudu/commit/fd1ffd0fb65e138f1f015a55aa96ae870c1d51cd].

> Gradle shading of kudu-client test JAR is broken
> 
>
> Key: KUDU-2186
> URL: https://issues.apache.org/jira/browse/KUDU-2186
> Project: Kudu
>  Issue Type: Bug
>  Components: java
>Affects Versions: 1.5.0
>Reporter: Adar Dembo
>Assignee: Grant Henke
>Priority: Major
> Fix For: 1.8.0
>
>
> I don't understand the details, but I know that if MiniKuduCluster calls a 
> kudu-client ProtobufHelper method, the kudu-client-tools tests fail to run in 
> Gradle. This isn't an issue with Maven.
> I've worked around this by duplicating the one ProtobufHelper method used by 
> MiniKuduCluster into that class.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KUDU-2539) Supporting Spark Streaming DataFrame in KuduContext

2018-09-26 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke resolved KUDU-2539.
---
   Resolution: Fixed
Fix Version/s: 1.8.0

This is resolved via 
[8020cbf|https://github.com/apache/kudu/commit/8020cbf2760483c46ed0766dfdebe3c12d0107f1].

> Supporting Spark Streaming DataFrame in KuduContext
> ---
>
> Key: KUDU-2539
> URL: https://issues.apache.org/jira/browse/KUDU-2539
> Project: Kudu
>  Issue Type: Improvement
>  Components: spark
>Affects Versions: 1.8.0
>Reporter: Attila Zsolt Piros
>Assignee: Attila Zsolt Piros
>Priority: Minor
> Fix For: 1.8.0
>
>
> Currently KuduContext does not support Spark streaming DataFrames. The problem 
> comes from a foreachPartition call, which (like foreach) is an unsupported 
> operation on streaming DataFrames: 
> [unsupported operations in 
> streaming|https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#unsupported-operations]
> I have created a small example app with a custom Kudu sink which can be used 
> for testing:
> [kudu custom sink and example 
> app|https://github.com/attilapiros/kudu_custom_sink]
> A patch fixing this issue is also ready for kudu-spark, so a gerrit review 
> with the solution can be expected soon.  
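
For reference, one possible workaround shape on Spark 2.4+ (separate from the 
patch mentioned above): foreachBatch surfaces each micro-batch as a plain batch 
DataFrame, which KuduContext already accepts. The master address, table name, 
and rate source below are placeholders:

{code:scala}
import org.apache.kudu.spark.kudu.KuduContext
import org.apache.spark.sql.{DataFrame, SparkSession}

val spark = SparkSession.builder().appName("kudu-stream-sketch").getOrCreate()
val kuduContext =
  new KuduContext("kudu-master:7051", spark.sparkContext)  // placeholder master

// Any streaming DataFrame works here; the rate source is just for illustration.
val streamingDf: DataFrame = spark.readStream.format("rate").load()

// Write each micro-batch to Kudu by treating it as an ordinary DataFrame.
val query = streamingDf.writeStream
  .foreachBatch { (batch: DataFrame, batchId: Long) =>
    kuduContext.upsertRows(batch, "my_table")  // placeholder table name
  }
  .start()
query.awaitTermination()
{code}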



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KUDU-2584) Flaky testSimpleBackupAndRestore

2018-09-19 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke reassigned KUDU-2584:
-

Assignee: Grant Henke

> Flaky testSimpleBackupAndRestore
> 
>
> Key: KUDU-2584
> URL: https://issues.apache.org/jira/browse/KUDU-2584
> Project: Kudu
>  Issue Type: Bug
>  Components: backup
>Reporter: Mike Percy
>Assignee: Grant Henke
>Priority: Major
>
> testSimpleBackupAndRestore is flaky and tends to fail with the following 
> error:
> {code:java}
> 04:48:06.604 [ERROR - Test worker] (RetryRule.java:72) 
> testRandomBackupAndRestore(org.apache.kudu.backup.TestKuduBackup): failed run 
> 1 
> java.lang.AssertionError: expected:<111> but was:<110> 
> at org.junit.Assert.fail(Assert.java:88) 
> at org.junit.Assert.failNotEquals(Assert.java:834) 
> at org.junit.Assert.assertEquals(Assert.java:645) 
> at org.junit.Assert.assertEquals(Assert.java:631) 
> at 
> org.apache.kudu.backup.TestKuduBackup.testRandomBackupAndRestore(TestKuduBackup.scala:99)
>  
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  
> at java.lang.reflect.Method.invoke(Method.java:483) 
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.apache.kudu.junit.RetryRule$RetryStatement.evaluate(RetryRule.java:68) 
> at org.junit.rules.RunRules.evaluate(RunRules.java:20) 
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) 
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>  
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>  
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) 
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) 
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) 
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) 
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) 
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363) 
> at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:106)
>  
> at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>  
> at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>  
> at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:66)
>  
> at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>  
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  
> at java.lang.reflect.Method.invoke(Method.java:483) 
> at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>  
> at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>  
> at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>  
> at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>  
> at com.sun.proxy.$Proxy2.processTestClass(Unknown Source) 
> at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:117)
>  
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  
> at java.lang.reflect.Method.invoke(Method.java:483) 
> at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>  
> at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>  
> at 
> 

[jira] [Updated] (KUDU-2563) Spark integration should use the scanner keep-alive API

2018-09-15 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KUDU-2563:
--
Summary: Spark integration should use the scanner keep-alive API  (was: 
Spark integration should implement scanner keep-alive API)

> Spark integration should use the scanner keep-alive API
> ---
>
> Key: KUDU-2563
> URL: https://issues.apache.org/jira/browse/KUDU-2563
> Project: Kudu
>  Issue Type: Improvement
>  Components: client, spark
>Affects Versions: 1.7.1
>Reporter: Mike Percy
>Assignee: Grant Henke
>Priority: Major
>
> The Spark integration should implement the scanner keep-alive API like the 
> Impala scanner does in order to avoid errors related to scanners timing out.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KUDU-2095) Add scanner keepAlive method to the java client

2018-09-15 Thread Grant Henke (JIRA)


[ 
https://issues.apache.org/jira/browse/KUDU-2095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16616533#comment-16616533
 ] 

Grant Henke commented on KUDU-2095:
---

This is resolved via 
[42db87b|https://github.com/apache/kudu/commit/42db87b0b128c573b96e39615e7fa41227fea368].
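
For reference, a minimal usage sketch on the Java side (the master address, table name, and per-row work are placeholders; see the 1.8.0 javadoc for the exact signatures):

{code}
import org.apache.kudu.client.*;

public class KeepAliveExample {
  public static void main(String[] args) throws KuduException {
    // "master-host:7051" and "my_table" are illustrative placeholders.
    KuduClient client = new KuduClient.KuduClientBuilder("master-host:7051").build();
    try {
      KuduScanner scanner = client.newScannerBuilder(client.openTable("my_table")).build();
      try {
        while (scanner.hasMoreRows()) {
          RowResultIterator batch = scanner.nextRows();
          while (batch.hasNext()) {
            RowResult row = batch.next();
            // ... potentially slow per-row processing happens here ...
          }
          // Reset the server-side scanner TTL between batches so a slow
          // consumer does not let the scanner expire mid-scan.
          scanner.keepAlive();
        }
      } finally {
        scanner.close();
      }
    } finally {
      client.shutdown();
    }
  }
}
{code}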

> Add scanner keepAlive method to the java client
> ---
>
> Key: KUDU-2095
> URL: https://issues.apache.org/jira/browse/KUDU-2095
> Project: Kudu
>  Issue Type: Bug
>Affects Versions: 1.3.1
>Reporter: zhangguangqiang
>Assignee: Grant Henke
>Priority: Major
>
> When I use the Kudu Java client, I need to keep my scanner alive in my use
> case, but I cannot find such a method in the Java client. On the other hand,
> I found kudu::client::KuduScanner::KeepAlive in the C++ client. This is very
> necessary for my usage; will you implement it in the Java client? Thank you!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KUDU-2095) Add scanner keepAlive method to the java client

2018-09-15 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke resolved KUDU-2095.
---
      Resolution: Fixed
   Fix Version/s: 1.8.0
Target Version/s: 1.8.0

> Add scanner keepAlive method to the java client
> ---
>
> Key: KUDU-2095
> URL: https://issues.apache.org/jira/browse/KUDU-2095
> Project: Kudu
>  Issue Type: Bug
>Affects Versions: 1.3.1
>Reporter: zhangguangqiang
>Assignee: Grant Henke
>Priority: Major
> Fix For: 1.8.0
>
>
> When I use the Kudu Java client, I need to keep my scanner alive in my use
> case, but I cannot find such a method in the Java client. On the other hand,
> I found kudu::client::KuduScanner::KeepAlive in the C++ client. This is very
> necessary for my usage; will you implement it in the Java client? Thank you!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KUDU-2563) Spark integration should implement scanner keep-alive API

2018-09-15 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KUDU-2563:
--
Target Version/s: 1.8.0

> Spark integration should implement scanner keep-alive API
> -
>
> Key: KUDU-2563
> URL: https://issues.apache.org/jira/browse/KUDU-2563
> Project: Kudu
>  Issue Type: Improvement
>  Components: client, spark
>Affects Versions: 1.7.1
>Reporter: Mike Percy
>Assignee: Grant Henke
>Priority: Major
>
> The Spark integration should implement the scanner keep-alive API like the 
> Impala scanner does in order to avoid errors related to scanners timing out.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KUDU-2095) Add scanner keepAlive method to the java client

2018-09-15 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KUDU-2095:
--
Summary: Add scanner keepAlive method to the java client  (was: scanner 
keepAlive method is necessary in java client)

> Add scanner keepAlive method to the java client
> ---
>
> Key: KUDU-2095
> URL: https://issues.apache.org/jira/browse/KUDU-2095
> Project: Kudu
>  Issue Type: Bug
>Affects Versions: 1.3.1
>Reporter: zhangguangqiang
>Assignee: Grant Henke
>Priority: Major
>
> When I use the Kudu Java client, I need to keep my scanner alive in my use
> case, but I cannot find such a method in the Java client. On the other hand,
> I found kudu::client::KuduScanner::KeepAlive in the C++ client. This is very
> necessary for my usage; will you implement it in the Java client? Thank you!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KUDU-2500) Kudu Spark InterfaceStability class not found

2018-09-11 Thread Grant Henke (JIRA)


[ 
https://issues.apache.org/jira/browse/KUDU-2500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16611332#comment-16611332
 ] 

Grant Henke commented on KUDU-2500:
---

A workaround was committed in 
[daee55|https://github.com/apache/kudu/commit/daee55].
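
For background on why the optional dependency is normally harmless, here is a small demonstration of the point made in the StackOverflow answer linked below (the class used is just an example): with the Yetus jar absent from the runtime classpath, standard reflection silently skips the unresolvable annotation instead of throwing.

{code}
import java.lang.annotation.Annotation;

public class MissingAnnotationDemo {
  public static void main(String[] args) throws Exception {
    // Any Yetus-annotated class works here; KuduClient is just an example.
    Class<?> klass = Class.forName("org.apache.kudu.client.KuduClient");
    // If the @InterfaceAudience/@InterfaceStability classes cannot be loaded,
    // getAnnotations() simply omits them; no ClassNotFoundException is raised.
    for (Annotation a : klass.getAnnotations()) {
      System.out.println(a.annotationType().getName());
    }
  }
}
{code}

The stacktrace in the description shows that Spark's reflective path evidently resolves the annotation types eagerly, which is what fails.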

> Kudu Spark InterfaceStability class not found
> -
>
> Key: KUDU-2500
> URL: https://issues.apache.org/jira/browse/KUDU-2500
> Project: Kudu
>  Issue Type: Bug
>Affects Versions: 1.7.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>Priority: Major
>
> We recently marked the Yetus annotation library as optional because the 
> annotations are not used at runtime and therefore should not be needed. Here 
> is a good summary of why the annotations are not required at runtime: 
> https://stackoverflow.com/questions/3567413/why-doesnt-a-missing-annotation-cause-a-classnotfoundexception-at-runtime/3568041#3568041
> However, for some reason Spark is requiring the annotation when performing 
> some reflection. See the sample stacktrace below:
> {code}
> Driver stacktrace:
>   at 
> org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1602)
>   at 
> org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1590)
>   at 
> org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1589)
>   at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>   at 
> org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1589)
>   at 
> org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
>   at 
> org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
>   at scala.Option.foreach(Option.scala:257)
>   at 
> org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
>   at 
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1823)
>   at 
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1772)
>   at 
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1761)
>   at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
>   at 
> org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
>   at org.apache.spark.SparkContext.runJob(SparkContext.scala:2034)
>   at org.apache.spark.SparkContext.runJob(SparkContext.scala:2055)
>   at org.apache.spark.SparkContext.runJob(SparkContext.scala:2074)
>   at org.apache.spark.SparkContext.runJob(SparkContext.scala:2099)
>   at 
> org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:929)
>   at 
> org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:927)
>   at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>   at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>   at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
>   at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:927)
>   at 
> org.apache.spark.sql.Dataset$$anonfun$foreachPartition$1.apply$mcV$sp(Dataset.scala:2675)
>   at 
> org.apache.spark.sql.Dataset$$anonfun$foreachPartition$1.apply(Dataset.scala:2675)
>   at 
> org.apache.spark.sql.Dataset$$anonfun$foreachPartition$1.apply(Dataset.scala:2675)
>   at 
> org.apache.spark.sql.Dataset$$anonfun$withNewRDDExecutionId$1.apply(Dataset.scala:3239)
>   at 
> org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
>   at 
> org.apache.spark.sql.Dataset.withNewRDDExecutionId(Dataset.scala:3235)
>   at org.apache.spark.sql.Dataset.foreachPartition(Dataset.scala:2674)
>   at 
> org.apache.kudu.spark.kudu.KuduContext.writeRows(KuduContext.scala:276)
>   at 
> org.apache.kudu.spark.kudu.KuduContext.insertRows(KuduContext.scala:206)
>   at 
> org.apache.kudu.backup.KuduRestore$$anonfun$run$1.apply(KuduRestore.scala:65)
>   at 
> org.apache.kudu.backup.KuduRestore$$anonfun$run$1.apply(KuduRestore.scala:44)
>   at scala.collection.immutable.List.foreach(List.scala:392)
>   at org.apache.kudu.backup.KuduRestore$.run(KuduRestore.scala:44)
>   at 
> org.apache.kudu.backup.TestKuduBackup.backupAndRestore(TestKuduBackup.scala:310)
>   at 
> org.apache.kudu.backup.TestKuduBackup$$anonfun$2.apply$mcV$sp(TestKuduBackup.scala:83)
>   at 
> org.apache.kudu.backup.TestKuduBackup$$anonfun$2.apply(TestKuduBackup.scala:76)
>   at 
> 

[jira] [Assigned] (KUDU-2095) scanner keepAlive method is necessary in java client

2018-09-11 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke reassigned KUDU-2095:
-

Assignee: Grant Henke  (was: Tony Foerster)

> scanner keepAlive method is necessary in java client
> 
>
> Key: KUDU-2095
> URL: https://issues.apache.org/jira/browse/KUDU-2095
> Project: Kudu
>  Issue Type: Bug
>Affects Versions: 1.3.1
>Reporter: zhangguangqiang
>Assignee: Grant Henke
>Priority: Major
>
> When I use the Kudu Java client, I need to keep my scanner alive in my use
> case, but I cannot find such a method in the Java client. On the other hand,
> I found kudu::client::KuduScanner::KeepAlive in the C++ client. This is very
> necessary for my usage; will you implement it in the Java client? Thank you!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KUDU-2563) Spark integration should implement scanner keep-alive API

2018-09-11 Thread Grant Henke (JIRA)


[ 
https://issues.apache.org/jira/browse/KUDU-2563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16610912#comment-16610912
 ] 

Grant Henke commented on KUDU-2563:
---

That's right. That work is tracked by KUDU-2095. There is an older Gerrit patch 
[here|https://gerrit.cloudera.org/#/c/7749/] from [~afoerster]. I will sync 
with him on that work. 
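
In the meantime, the shape of the change is roughly the following (a sketch only; the 15-second period and how it plugs into the Spark RDD iterator are assumptions, not the final patch):

{code}
import org.apache.kudu.client.KuduException;
import org.apache.kudu.client.KuduScanner;
import org.apache.kudu.client.RowResultIterator;

/** Sketch: call keepAlive() periodically while a possibly slow task drains a scanner. */
final class KeepAliveScanLoop {
  // Assumed period; it only needs to stay comfortably below the scanner TTL.
  private static final long KEEP_ALIVE_PERIOD_MS = 15_000;

  static void drain(KuduScanner scanner) throws KuduException {
    long lastKeepAlive = System.currentTimeMillis();
    while (scanner.hasMoreRows()) {
      RowResultIterator batch = scanner.nextRows();
      while (batch.hasNext()) {
        batch.next(); // hand the row off to the Spark task here
        long now = System.currentTimeMillis();
        if (now - lastKeepAlive > KEEP_ALIVE_PERIOD_MS) {
          scanner.keepAlive(); // tell the tserver the scanner is still in use
          lastKeepAlive = now;
        }
      }
    }
  }
}
{code}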

> Spark integration should implement scanner keep-alive API
> -
>
> Key: KUDU-2563
> URL: https://issues.apache.org/jira/browse/KUDU-2563
> Project: Kudu
>  Issue Type: Improvement
>  Components: client, spark
>Affects Versions: 1.7.1
>Reporter: Mike Percy
>Assignee: Grant Henke
>Priority: Major
>
> The Spark integration should implement the scanner keep-alive API like the 
> Impala scanner does in order to avoid errors related to scanners timing out.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KUDU-2348) Java client doesn't pick a random replica when no replica is local

2018-09-10 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/KUDU-2348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke reassigned KUDU-2348:
-

Assignee: Grant Henke

> Java client doesn't pick a random replica when no replica is local
> --
>
> Key: KUDU-2348
> URL: https://issues.apache.org/jira/browse/KUDU-2348
> Project: Kudu
>  Issue Type: Improvement
>  Components: client, java
>Affects Versions: 1.7.0
>Reporter: Todd Lipcon
>Assignee: Grant Henke
>Priority: Major
>  Labels: newbie
>
> In RemoteTablet we have this comment:
> {code}
>* Get the information on the closest server. If none is closer than the 
> others,
>* return the information on a randomly picked server.
> {code}
> However, it appears that the "random" replica is deterministic: it is always
> the last replica in hashmap iteration order, which is likely to be the same
> order across all clients in the cluster. That would concentrate load on one
> replica rather than spreading it, even when the clients intend to spread the
> load.
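
An illustrative fix (not the actual RemoteTablet code) is to pick uniformly at random instead of relying on iteration order:

{code}
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

final class ReplicaPicker {
  /** Returns a uniformly random element so load spreads across all replicas. */
  static <T> T pickRandom(List<T> replicas) {
    if (replicas.isEmpty()) {
      throw new IllegalArgumentException("no replicas to choose from");
    }
    return replicas.get(ThreadLocalRandom.current().nextInt(replicas.size()));
  }
}
{code}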



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

