[jira] [Work started] (HBASE-19521) HBase mob compaction need to check hfile version

2017-12-14 Thread Qilin Cao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-19521 started by Qilin Cao.
-
> HBase mob compaction need to check hfile version
> 
>
> Key: HBASE-19521
> URL: https://issues.apache.org/jira/browse/HBASE-19521
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction, mob
>Affects Versions: 2.0.0, 3.0.0
>Reporter: Qilin Cao
>Assignee: Qilin Cao
>Priority: Critical
>
> When the HBase master configuration does not set hfile.format.version to 3 
> and the user runs a mob compaction anyway, the compactor writes a V2 ref 
> hfile. As a result, the user cannot scan the correct cell value, since the 
> mob cell ref tags are not written. So it is necessary to check the hfile 
> version before running mob compaction.
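A minimal sketch of the kind of guard the description asks for, assuming a 
hypothetical helper class and call site (only the hfile.format.version key and 
the "tags need version 3" requirement come from the issue itself):

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;

public final class MobCompactionGuard {
  private MobCompactionGuard() {}

  /** Fails fast if the configured hfile version cannot carry mob ref tags. */
  public static void checkHFileVersionForMob(Configuration conf) throws IOException {
    // 3 is the first hfile format version that supports cell tags.
    int version = conf.getInt("hfile.format.version", 3);
    if (version < 3) {
      throw new IOException("Mob compaction requires hfile.format.version >= 3, "
          + "but the configured version is " + version
          + "; the mob cell ref tags would be dropped.");
    }
  }
}
{code}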



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19358) Improve the stability of splitting log when do fail over

2017-12-14 Thread Jingyun Tian (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingyun Tian updated HBASE-19358:
-
Description: 
The way we split the log now is shown in the following figure:
!https://issues.apache.org/jira/secure/attachment/12902234/split-logic-old.jpg!
The problem is that the OutputSink writes the recovered edits while splitting 
the log, which means it creates one WriterAndPath for each region and retains 
it until the end. If the cluster is small and the number of regions per 
regionserver is large, it creates too many HDFS streams at the same time, and 
the split is prone to failure since each datanode needs to handle too many 
streams.

Thus I came up with a new way to split the log.
!https://issues.apache.org/jira/secure/attachment/12902235/split-logic-new.jpg!
We try to cache all the recovered edits, but if they exceed the MaxHeapUsage, 
we pick the largest EntryBuffer and write it to a file (closing the writer 
when finished). Then, after we have read all entries into memory, we start a 
writeAndCloseThreadPool, which starts a certain number of threads to write all 
buffers to files. Thus it will not create more HDFS streams than the 
*_hbase.regionserver.hlog.splitlog.writer.threads_* we set.
The biggest benefit is that we can control the number of streams we create 
during log splitting: it will not exceed *_hbase.regionserver.wal.max.splitters * 
hbase.regionserver.hlog.splitlog.writer.threads_*, whereas before it was 
*_hbase.regionserver.wal.max.splitters * the number of regions the hlog 
contains_*.
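A rough sketch of the buffering scheme described above, under stated 
assumptions: the class below is a simplified stand-in for the EntryBuffer map 
and writer pool (it is not the attached patch), and the pool size plays the 
role of hbase.regionserver.hlog.splitlog.writer.threads.

{code:java}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

class BoundedSplitWriterSketch {
  private final Map<String, List<byte[]>> buffers = new HashMap<>();
  private long heapUsage = 0;
  private final long maxHeapUsage;
  private final int writerThreads; // hbase.regionserver.hlog.splitlog.writer.threads

  BoundedSplitWriterSketch(long maxHeapUsage, int writerThreads) {
    this.maxHeapUsage = maxHeapUsage;
    this.writerThreads = writerThreads;
  }

  /** Cache one recovered edit; if over budget, flush the largest region buffer. */
  void append(String region, byte[] edit) {
    buffers.computeIfAbsent(region, r -> new ArrayList<>()).add(edit);
    heapUsage += edit.length;
    if (heapUsage > maxHeapUsage) {
      String largest = buffers.entrySet().stream()
          .max((a, b) -> Long.compare(size(a.getValue()), size(b.getValue())))
          .get().getKey();
      flush(largest, buffers.remove(largest)); // open, write, close one stream
    }
  }

  /** After the whole WAL is read, write the remaining buffers with a bounded pool. */
  void finish() throws InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(writerThreads);
    buffers.forEach((region, edits) -> pool.submit(() -> flush(region, edits)));
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.HOURS);
  }

  private long size(List<byte[]> edits) {
    return edits.stream().mapToLong(e -> e.length).sum();
  }

  private void flush(String region, List<byte[]> edits) {
    // Placeholder: a real implementation would open the recovered.edits file for
    // this region, write the edits, and close the stream right away.
    heapUsage -= size(edits);
  }
}
{code}

In this shape only one overflow stream is open while reading, plus at most 
writerThreads streams at the end, which is where the per-splitter bound quoted 
above comes from.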


  was:
The way we split the log now is shown in the following figure:
!https://issues.apache.org/jira/secure/attachment/12902234/split-logic-old.jpg!
The problem is that the OutputSink writes the recovered edits while splitting 
the log, which means it creates one WriterAndPath for each region and retains 
it until the end. If the cluster is small and the number of regions per 
regionserver is large, it creates too many HDFS streams at the same time, and 
the split is prone to failure since each datanode needs to handle too many 
streams.

Thus I came up with a new way to split the log.
!https://issues.apache.org/jira/secure/attachment/12902235/split-logic-new.jpg!
We try to cache all the recovered edits, but if they exceed the MaxHeapUsage, 
we pick the largest EntryBuffer and write it to a file (closing the writer 
when finished). Then, after we have read all entries into memory, we start a 
writeAndCloseThreadPool, which starts a certain number of threads to write all 
buffers to files. Thus it will not create more HDFS streams than the 
hbase.regionserver.hlog.splitlog.writer.threads we set.
The biggest benefit is that we can control the number of streams we create 
during log splitting: it will not exceed *_hbase.regionserver.wal.max.splitters * 
hbase.regionserver.hlog.splitlog.writer.threads_*, whereas before it was 
*_hbase.regionserver.wal.max.splitters * the number of regions the hlog 
contains_*.



> Improve the stability of splitting log when do fail over
> 
>
> Key: HBASE-19358
> URL: https://issues.apache.org/jira/browse/HBASE-19358
> Project: HBase
>  Issue Type: Improvement
>  Components: MTTR
>Affects Versions: 0.98.24
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
> Attachments: HBASE-19358.patch, split-1-log.png, split-logic-new.jpg, 
> split-logic-old.jpg, split-table.png, split_test_result.png
>
>
> The way we split the log now is shown in the following figure:
> !https://issues.apache.org/jira/secure/attachment/12902234/split-logic-old.jpg!
> The problem is that the OutputSink writes the recovered edits while splitting 
> the log, which means it creates one WriterAndPath for each region and retains 
> it until the end. If the cluster is small and the number of regions per 
> regionserver is large, it creates too many HDFS streams at the same time, and 
> the split is prone to failure since each datanode needs to handle too many 
> streams.
> Thus I came up with a new way to split the log.
> !https://issues.apache.org/jira/secure/attachment/12902235/split-logic-new.jpg!
> We try to cache all the recovered edits, but if they exceed the MaxHeapUsage, 
> we pick the largest EntryBuffer and write it to a file (closing the writer 
> when finished). Then, after we have read all entries into memory, we start a 
> writeAndCloseThreadPool, which starts a certain number of threads to write 
> all buffers to files. Thus it will not create more HDFS streams than the 
> *_hbase.regionserver.hlog.splitlog.writer.threads_* we set.
> The biggest benefit is that we can control the number of streams we create 
> during log splitting: it will not exceed *_hbase.regionserver.wal.max.splitters * 
> hbase.regionserver.hlog.splitlog.writer.threads_*, whereas before it was 
> *_hbase.regionserver.wal.max.splitters * the number of regions the hlog 
> contains_*.

[jira] [Updated] (HBASE-19358) Improve the stability of splitting log when do fail over

2017-12-14 Thread Jingyun Tian (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingyun Tian updated HBASE-19358:
-
Description: 
The way we split the log now is shown in the following figure:
!https://issues.apache.org/jira/secure/attachment/12902234/split-logic-old.jpg!
The problem is that the OutputSink writes the recovered edits while splitting 
the log, which means it creates one WriterAndPath for each region and retains 
it until the end. If the cluster is small and the number of regions per 
regionserver is large, it creates too many HDFS streams at the same time, and 
the split is prone to failure since each datanode needs to handle too many 
streams.

Thus I came up with a new way to split the log.
!https://issues.apache.org/jira/secure/attachment/12902235/split-logic-new.jpg!
We try to cache all the recovered edits, but if they exceed the MaxHeapUsage, 
we pick the largest EntryBuffer and write it to a file (closing the writer 
when finished). Then, after we have read all entries into memory, we start a 
writeAndCloseThreadPool, which starts a certain number of threads to write all 
buffers to files. Thus it will not create more HDFS streams than the 
hbase.regionserver.hlog.splitlog.writer.threads we set.
The biggest benefit is that we can control the number of streams we create 
during log splitting: it will not exceed *_hbase.regionserver.wal.max.splitters * 
hbase.regionserver.hlog.splitlog.writer.threads_*, whereas before it was 
*_hbase.regionserver.wal.max.splitters * the number of regions the hlog 
contains_*.


  was:
The way we split the log now is shown in the following figure:
!https://issues.apache.org/jira/secure/attachment/12902234/split-logic-old.jpg!
The problem is that the OutputSink writes the recovered edits while splitting 
the log, which means it creates one WriterAndPath for each region and retains 
it until the end. If the cluster is small and the number of regions per 
regionserver is large, it creates too many HDFS streams at the same time, and 
the split is prone to failure since each datanode needs to handle too many 
streams.

Thus I came up with a new way to split the log.
!http://example.com/image.png!
We try to cache all the recovered edits, but if they exceed the MaxHeapUsage, 
we pick the largest EntryBuffer and write it to a file (closing the writer 
when finished). Then, after we have read all entries into memory, we start a 
writeAndCloseThreadPool, which starts a certain number of threads to write all 
buffers to files. Thus it will not create more HDFS streams than the 
hbase.regionserver.hlog.splitlog.writer.threads we set.
The biggest benefit is that we can control the number of streams we create 
during log splitting: it will not exceed *_hbase.regionserver.wal.max.splitters * 
hbase.regionserver.hlog.splitlog.writer.threads_*, whereas before it was 
*_hbase.regionserver.wal.max.splitters * the number of regions the hlog 
contains_*.



> Improve the stability of splitting log when do fail over
> 
>
> Key: HBASE-19358
> URL: https://issues.apache.org/jira/browse/HBASE-19358
> Project: HBase
>  Issue Type: Improvement
>  Components: MTTR
>Affects Versions: 0.98.24
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
> Attachments: HBASE-19358.patch, split-1-log.png, split-logic-new.jpg, 
> split-logic-old.jpg, split-table.png, split_test_result.png
>
>
> The way we split the log now is shown in the following figure:
> !https://issues.apache.org/jira/secure/attachment/12902234/split-logic-old.jpg!
> The problem is that the OutputSink writes the recovered edits while splitting 
> the log, which means it creates one WriterAndPath for each region and retains 
> it until the end. If the cluster is small and the number of regions per 
> regionserver is large, it creates too many HDFS streams at the same time, and 
> the split is prone to failure since each datanode needs to handle too many 
> streams.
> Thus I came up with a new way to split the log.
> !https://issues.apache.org/jira/secure/attachment/12902235/split-logic-new.jpg!
> We try to cache all the recovered edits, but if they exceed the MaxHeapUsage, 
> we pick the largest EntryBuffer and write it to a file (closing the writer 
> when finished). Then, after we have read all entries into memory, we start a 
> writeAndCloseThreadPool, which starts a certain number of threads to write 
> all buffers to files. Thus it will not create more HDFS streams than the 
> hbase.regionserver.hlog.splitlog.writer.threads we set.
> The biggest benefit is that we can control the number of streams we create 
> during log splitting: it will not exceed *_hbase.regionserver.wal.max.splitters * 
> hbase.regionserver.hlog.splitlog.writer.threads_*, whereas before it was 
> *_hbase.regionserver.wal.max.splitters * the number of regions the hlog 
> contains_*.



--
This message was sent by Atlassian JIRA

[jira] [Updated] (HBASE-19358) Improve the stability of splitting log when do fail over

2017-12-14 Thread Jingyun Tian (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingyun Tian updated HBASE-19358:
-
Description: 
The way we split the log now is shown in the following figure:
!https://issues.apache.org/jira/secure/attachment/12902234/split-logic-old.jpg!
The problem is that the OutputSink writes the recovered edits while splitting 
the log, which means it creates one WriterAndPath for each region and retains 
it until the end. If the cluster is small and the number of regions per 
regionserver is large, it creates too many HDFS streams at the same time, and 
the split is prone to failure since each datanode needs to handle too many 
streams.

Thus I came up with a new way to split the log.
!http://example.com/image.png!
We try to cache all the recovered edits, but if they exceed the MaxHeapUsage, 
we pick the largest EntryBuffer and write it to a file (closing the writer 
when finished). Then, after we have read all entries into memory, we start a 
writeAndCloseThreadPool, which starts a certain number of threads to write all 
buffers to files. Thus it will not create more HDFS streams than the 
hbase.regionserver.hlog.splitlog.writer.threads we set.
The biggest benefit is that we can control the number of streams we create 
during log splitting: it will not exceed *_hbase.regionserver.wal.max.splitters * 
hbase.regionserver.hlog.splitlog.writer.threads_*, whereas before it was 
*_hbase.regionserver.wal.max.splitters * the number of regions the hlog 
contains_*.


  was:
The way we split the log now is shown in the following figure:

The problem is that the OutputSink writes the recovered edits while splitting 
the log, which means it creates one WriterAndPath for each region and retains 
it until the end. If the cluster is small and the number of regions per 
regionserver is large, it creates too many HDFS streams at the same time, and 
the split is prone to failure since each datanode needs to handle too many 
streams.

Thus I came up with a new way to split the log.

We try to cache all the recovered edits, but if they exceed the MaxHeapUsage, 
we pick the largest EntryBuffer and write it to a file (closing the writer 
when finished). Then, after we have read all entries into memory, we start a 
writeAndCloseThreadPool, which starts a certain number of threads to write all 
buffers to files. Thus it will not create more HDFS streams than the 
hbase.regionserver.hlog.splitlog.writer.threads we set.
The biggest benefit is that we can control the number of streams we create 
during log splitting: it will not exceed *_hbase.regionserver.wal.max.splitters * 
hbase.regionserver.hlog.splitlog.writer.threads_*, whereas before it was 
*_hbase.regionserver.wal.max.splitters * the number of regions the hlog 
contains_*.



> Improve the stability of splitting log when do fail over
> 
>
> Key: HBASE-19358
> URL: https://issues.apache.org/jira/browse/HBASE-19358
> Project: HBase
>  Issue Type: Improvement
>  Components: MTTR
>Affects Versions: 0.98.24
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
> Attachments: HBASE-19358.patch, split-1-log.png, split-logic-new.jpg, 
> split-logic-old.jpg, split-table.png, split_test_result.png
>
>
> The way we split the log now is shown in the following figure:
> !https://issues.apache.org/jira/secure/attachment/12902234/split-logic-old.jpg!
> The problem is that the OutputSink writes the recovered edits while splitting 
> the log, which means it creates one WriterAndPath for each region and retains 
> it until the end. If the cluster is small and the number of regions per 
> regionserver is large, it creates too many HDFS streams at the same time, and 
> the split is prone to failure since each datanode needs to handle too many 
> streams.
> Thus I came up with a new way to split the log.
> !http://example.com/image.png!
> We try to cache all the recovered edits, but if they exceed the MaxHeapUsage, 
> we pick the largest EntryBuffer and write it to a file (closing the writer 
> when finished). Then, after we have read all entries into memory, we start a 
> writeAndCloseThreadPool, which starts a certain number of threads to write 
> all buffers to files. Thus it will not create more HDFS streams than the 
> hbase.regionserver.hlog.splitlog.writer.threads we set.
> The biggest benefit is that we can control the number of streams we create 
> during log splitting: it will not exceed *_hbase.regionserver.wal.max.splitters * 
> hbase.regionserver.hlog.splitlog.writer.threads_*, whereas before it was 
> *_hbase.regionserver.wal.max.splitters * the number of regions the hlog 
> contains_*.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19358) Improve the stability of splitting log when do fail over

2017-12-14 Thread Jingyun Tian (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingyun Tian updated HBASE-19358:
-
Attachment: split-logic-new.jpg
split-logic-old.jpg

> Improve the stability of splitting log when do fail over
> 
>
> Key: HBASE-19358
> URL: https://issues.apache.org/jira/browse/HBASE-19358
> Project: HBase
>  Issue Type: Improvement
>  Components: MTTR
>Affects Versions: 0.98.24
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
> Attachments: HBASE-19358.patch, split-1-log.png, split-logic-new.jpg, 
> split-logic-old.jpg, split-table.png, split_test_result.png
>
>
> The way we split the log now is shown in the following figure:
> The problem is that the OutputSink writes the recovered edits while splitting 
> the log, which means it creates one WriterAndPath for each region and retains 
> it until the end. If the cluster is small and the number of regions per 
> regionserver is large, it creates too many HDFS streams at the same time, and 
> the split is prone to failure since each datanode needs to handle too many 
> streams.
> Thus I came up with a new way to split the log.
> We try to cache all the recovered edits, but if they exceed the MaxHeapUsage, 
> we pick the largest EntryBuffer and write it to a file (closing the writer 
> when finished). Then, after we have read all entries into memory, we start a 
> writeAndCloseThreadPool, which starts a certain number of threads to write 
> all buffers to files. Thus it will not create more HDFS streams than the 
> hbase.regionserver.hlog.splitlog.writer.threads we set.
> The biggest benefit is that we can control the number of streams we create 
> during log splitting: it will not exceed *_hbase.regionserver.wal.max.splitters * 
> hbase.regionserver.hlog.splitlog.writer.threads_*, whereas before it was 
> *_hbase.regionserver.wal.max.splitters * the number of regions the hlog 
> contains_*.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19358) Improve the stability of splitting log when do fail over

2017-12-14 Thread Jingyun Tian (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingyun Tian updated HBASE-19358:
-
Attachment: HBASE-19358.patch

> Improve the stability of splitting log when do fail over
> 
>
> Key: HBASE-19358
> URL: https://issues.apache.org/jira/browse/HBASE-19358
> Project: HBase
>  Issue Type: Improvement
>  Components: MTTR
>Affects Versions: 0.98.24
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
> Attachments: HBASE-19358.patch, split-1-log.png, split-table.png, 
> split_test_result.png
>
>
> The way we split the log now is shown in the following figure:
> The problem is that the OutputSink writes the recovered edits while splitting 
> the log, which means it creates one WriterAndPath for each region and retains 
> it until the end. If the cluster is small and the number of regions per 
> regionserver is large, it creates too many HDFS streams at the same time, and 
> the split is prone to failure since each datanode needs to handle too many 
> streams.
> Thus I came up with a new way to split the log.
> We try to cache all the recovered edits, but if they exceed the MaxHeapUsage, 
> we pick the largest EntryBuffer and write it to a file (closing the writer 
> when finished). Then, after we have read all entries into memory, we start a 
> writeAndCloseThreadPool, which starts a certain number of threads to write 
> all buffers to files. Thus it will not create more HDFS streams than the 
> hbase.regionserver.hlog.splitlog.writer.threads we set.
> The biggest benefit is that we can control the number of streams we create 
> during log splitting: it will not exceed *_hbase.regionserver.wal.max.splitters * 
> hbase.regionserver.hlog.splitlog.writer.threads_*, whereas before it was 
> *_hbase.regionserver.wal.max.splitters * the number of regions the hlog 
> contains_*.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19521) HBase mob compaction need to check hfile version

2017-12-14 Thread Qilin Cao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qilin Cao updated HBASE-19521:
--
Description: When the HBase master configuration does not set 
hfile.format.version to 3 and the user runs a mob compaction anyway, the 
compactor writes a V2 ref hfile. As a result, the user cannot scan the correct 
cell value, since the mob cell ref tags are not written. So it is necessary to 
check the hfile version before running mob compaction.  (was: When the HBase 
master configuration does not set hfile.format.version to 3 and the user runs 
a mob compaction anyway, the compactor writes a V2 hfile. As a result, the 
user cannot scan the correct cell value, since the mob cell ref tags are not 
written. So it is necessary to check the hfile version before running mob 
compaction.)

> HBase mob compaction need to check hfile version
> 
>
> Key: HBASE-19521
> URL: https://issues.apache.org/jira/browse/HBASE-19521
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction, mob
>Affects Versions: 2.0.0, 3.0.0
>Reporter: Qilin Cao
>Assignee: Qilin Cao
>Priority: Critical
>
> When the HBase master configuration does not set hfile.format.version to 3 
> and the user runs a mob compaction anyway, the compactor writes a V2 ref 
> hfile. As a result, the user cannot scan the correct cell value, since the 
> mob cell ref tags are not written. So it is necessary to check the hfile 
> version before running mob compaction.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19358) Improve the stability of splitting log when do fail over

2017-12-14 Thread Jingyun Tian (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingyun Tian updated HBASE-19358:
-
Description: 
The way we split the log now is shown in the following figure:

The problem is that the OutputSink writes the recovered edits while splitting 
the log, which means it creates one WriterAndPath for each region and retains 
it until the end. If the cluster is small and the number of regions per 
regionserver is large, it creates too many HDFS streams at the same time, and 
the split is prone to failure since each datanode needs to handle too many 
streams.

Thus I came up with a new way to split the log.

We try to cache all the recovered edits, but if they exceed the MaxHeapUsage, 
we pick the largest EntryBuffer and write it to a file (closing the writer 
when finished). Then, after we have read all entries into memory, we start a 
writeAndCloseThreadPool, which starts a certain number of threads to write all 
buffers to files. Thus it will not create more HDFS streams than the 
hbase.regionserver.hlog.splitlog.writer.threads we set.
The biggest benefit is that we can control the number of streams we create 
during log splitting: it will not exceed *_hbase.regionserver.wal.max.splitters * 
hbase.regionserver.hlog.splitlog.writer.threads_*, whereas before it was 
*_hbase.regionserver.wal.max.splitters * the number of regions the hlog 
contains_*.


  was:
The way we split the log now is shown in the following figure:
!https://issues.apache.org/jira/secure/attachment/12899558/previousLogic.jpg!
The problem is that the OutputSink writes the recovered edits while splitting 
the log, which means it creates one WriterAndPath for each region. If the 
cluster is small and the number of regions per regionserver is large, it 
creates too many HDFS streams at the same time, and the split is prone to 
failure since each datanode needs to handle too many streams.

Thus I came up with a new way to split the log.
!https://issues.apache.org/jira/secure/attachment/12899557/newLogic.jpg!
We cache the recovered edits until they exceed the memory limit we set or we 
reach the end; then a thread pool does the rest: it writes them to files and 
moves them to the destination.

The biggest benefit is that we can control the number of streams we create 
during log splitting: it will not exceed *_hbase.regionserver.wal.max.splitters * 
hbase.regionserver.hlog.splitlog.writer.threads_*, whereas before it was 
*_hbase.regionserver.wal.max.splitters * the number of regions the hlog 
contains_*.



> Improve the stability of splitting log when do fail over
> 
>
> Key: HBASE-19358
> URL: https://issues.apache.org/jira/browse/HBASE-19358
> Project: HBase
>  Issue Type: Improvement
>  Components: MTTR
>Affects Versions: 0.98.24
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
> Attachments: split-1-log.png, split-table.png, split_test_result.png
>
>
> The way we split the log now is shown in the following figure:
> The problem is that the OutputSink writes the recovered edits while splitting 
> the log, which means it creates one WriterAndPath for each region and retains 
> it until the end. If the cluster is small and the number of regions per 
> regionserver is large, it creates too many HDFS streams at the same time, and 
> the split is prone to failure since each datanode needs to handle too many 
> streams.
> Thus I came up with a new way to split the log.
> We try to cache all the recovered edits, but if they exceed the MaxHeapUsage, 
> we pick the largest EntryBuffer and write it to a file (closing the writer 
> when finished). Then, after we have read all entries into memory, we start a 
> writeAndCloseThreadPool, which starts a certain number of threads to write 
> all buffers to files. Thus it will not create more HDFS streams than the 
> hbase.regionserver.hlog.splitlog.writer.threads we set.
> The biggest benefit is that we can control the number of streams we create 
> during log splitting: it will not exceed *_hbase.regionserver.wal.max.splitters * 
> hbase.regionserver.hlog.splitlog.writer.threads_*, whereas before it was 
> *_hbase.regionserver.wal.max.splitters * the number of regions the hlog 
> contains_*.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19521) HBase mob compaction need to check hfile version

2017-12-14 Thread Qilin Cao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qilin Cao updated HBASE-19521:
--
Priority: Critical  (was: Minor)

> HBase mob compaction need to check hfile version
> 
>
> Key: HBASE-19521
> URL: https://issues.apache.org/jira/browse/HBASE-19521
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction, mob
>Affects Versions: 2.0.0, 3.0.0
>Reporter: Qilin Cao
>Assignee: Qilin Cao
>Priority: Critical
>
> When the HBase master configuration does not set hfile.format.version to 3 
> and the user runs a mob compaction anyway, the compactor writes a V2 hfile. 
> As a result, the user cannot scan the correct cell value, since the mob cell 
> ref tags are not written. So it is necessary to check the hfile version 
> before running mob compaction.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19521) HBase mob compaction need to check hfile version

2017-12-14 Thread Qilin Cao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qilin Cao updated HBASE-19521:
--
Component/s: mob

> HBase mob compaction need to check hfile version
> 
>
> Key: HBASE-19521
> URL: https://issues.apache.org/jira/browse/HBASE-19521
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction, mob
>Affects Versions: 2.0.0, 3.0.0
>Reporter: Qilin Cao
>Priority: Minor
>
> When the HBase master configuration does not set hfile.format.version to 3 
> and the user runs a mob compaction anyway, the compactor writes a V2 hfile. 
> As a result, the user cannot scan the correct cell value, since the mob cell 
> ref tags are not written. So it is necessary to check the hfile version 
> before running mob compaction.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HBASE-19521) HBase mob compaction need to check hfile version

2017-12-14 Thread Qilin Cao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qilin Cao reassigned HBASE-19521:
-

Assignee: Qilin Cao

> HBase mob compaction need to check hfile version
> 
>
> Key: HBASE-19521
> URL: https://issues.apache.org/jira/browse/HBASE-19521
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction, mob
>Affects Versions: 2.0.0, 3.0.0
>Reporter: Qilin Cao
>Assignee: Qilin Cao
>Priority: Minor
>
> When the HBase master configuration does not set hfile.format.version to 3 
> and the user runs a mob compaction anyway, the compactor writes a V2 hfile. 
> As a result, the user cannot scan the correct cell value, since the mob cell 
> ref tags are not written. So it is necessary to check the hfile version 
> before running mob compaction.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19521) HBase mob compaction need to check hfile version

2017-12-14 Thread Qilin Cao (JIRA)
Qilin Cao created HBASE-19521:
-

 Summary: HBase mob compaction need to check hfile version
 Key: HBASE-19521
 URL: https://issues.apache.org/jira/browse/HBASE-19521
 Project: HBase
  Issue Type: Improvement
  Components: Compaction
Affects Versions: 2.0.0, 3.0.0
Reporter: Qilin Cao
Priority: Minor


When the HBase master configuration does not set hfile.format.version to 3 and 
the user runs a mob compaction anyway, the compactor writes a V2 hfile. As a 
result, the user cannot scan the correct cell value, since the mob cell ref 
tags are not written. So it is necessary to check the hfile version before 
running mob compaction.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19505) Disable ByteBufferPool by default at HM

2017-12-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292148#comment-16292148
 ] 

Hadoop QA commented on HBASE-19505:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
8s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
35s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m  
7s{color} | {color:red} hbase-server: The patch generated 2 new + 182 unchanged 
- 2 fixed = 184 total (was 184) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
34s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
20m 10s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0-beta1. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}109m 
51s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}149m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19505 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12902211/HBASE-19505_V3.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 702d6d2d5a83 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / deba43b156 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10468/artifact/patchprocess/diff-checkstyle-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10468/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10468/console |
| Powered by | Apache Yetus 0.6.0   http://yetus.apache.org |


This message was automatically generated.



> Disable ByteBufferPool by 

[jira] [Updated] (HBASE-19358) Improve the stability of splitting log when do fail over

2017-12-14 Thread Jingyun Tian (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingyun Tian updated HBASE-19358:
-
Attachment: (was: previousLogic.jpg)

> Improve the stability of splitting log when do fail over
> 
>
> Key: HBASE-19358
> URL: https://issues.apache.org/jira/browse/HBASE-19358
> Project: HBase
>  Issue Type: Improvement
>  Components: MTTR
>Affects Versions: 0.98.24
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
> Attachments: split-1-log.png, split-table.png, split_test_result.png
>
>
> The way we split the log now is shown in the following figure:
> !https://issues.apache.org/jira/secure/attachment/12899558/previousLogic.jpg!
> The problem is that the OutputSink writes the recovered edits while splitting 
> the log, which means it creates one WriterAndPath for each region. If the 
> cluster is small and the number of regions per regionserver is large, it 
> creates too many HDFS streams at the same time, and the split is prone to 
> failure since each datanode needs to handle too many streams.
> Thus I came up with a new way to split the log.
> !https://issues.apache.org/jira/secure/attachment/12899557/newLogic.jpg!
> We cache the recovered edits until they exceed the memory limit we set or we 
> reach the end; then a thread pool does the rest: it writes them to files and 
> moves them to the destination.
> The biggest benefit is that we can control the number of streams we create 
> during log splitting: it will not exceed *_hbase.regionserver.wal.max.splitters * 
> hbase.regionserver.hlog.splitlog.writer.threads_*, whereas before it was 
> *_hbase.regionserver.wal.max.splitters * the number of regions the hlog 
> contains_*.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19358) Improve the stability of splitting log when do fail over

2017-12-14 Thread Jingyun Tian (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingyun Tian updated HBASE-19358:
-
Attachment: (was: newLogic.jpg)

> Improve the stability of splitting log when do fail over
> 
>
> Key: HBASE-19358
> URL: https://issues.apache.org/jira/browse/HBASE-19358
> Project: HBase
>  Issue Type: Improvement
>  Components: MTTR
>Affects Versions: 0.98.24
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
> Attachments: split-1-log.png, split-table.png, split_test_result.png
>
>
> The way we split the log now is shown in the following figure:
> !https://issues.apache.org/jira/secure/attachment/12899558/previousLogic.jpg!
> The problem is that the OutputSink writes the recovered edits while splitting 
> the log, which means it creates one WriterAndPath for each region. If the 
> cluster is small and the number of regions per regionserver is large, it 
> creates too many HDFS streams at the same time, and the split is prone to 
> failure since each datanode needs to handle too many streams.
> Thus I came up with a new way to split the log.
> !https://issues.apache.org/jira/secure/attachment/12899557/newLogic.jpg!
> We cache the recovered edits until they exceed the memory limit we set or we 
> reach the end; then a thread pool does the rest: it writes them to files and 
> moves them to the destination.
> The biggest benefit is that we can control the number of streams we create 
> during log splitting: it will not exceed *_hbase.regionserver.wal.max.splitters * 
> hbase.regionserver.hlog.splitlog.writer.threads_*, whereas before it was 
> *_hbase.regionserver.wal.max.splitters * the number of regions the hlog 
> contains_*.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18300) Implement a Multi TieredBucketCache

2017-12-14 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292133#comment-16292133
 ] 

Anoop Sam John commented on HBASE-18300:


As said in the HBASE-19357 comments, when we do this we should look at always 
putting the system tables' data blocks in the off-heap BC (when we have a file 
mode + off-heap mode tiered BC).

> Implement a Multi TieredBucketCache
> ---
>
> Key: HBASE-18300
> URL: https://issues.apache.org/jira/browse/HBASE-18300
> Project: HBase
>  Issue Type: New Feature
>  Components: BucketCache
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.1.0
>
>
> We did an internal brainstorming session to study the feasibility of this. 
> Some of our recent tests on SSDs like Optane show that they are vastly faster 
> at random reads and can act as effective caches.
> In the current state we have a single tier of Bucket cache, and the bucket 
> cache can either be off-heap or configured to work in file mode (the file 
> mode can have multiple files backing it).
> So this model restricts us to using either the memory or the file, but not 
> both.
> With the advent of faster devices like Optane SSDs and NVMe-based devices, it 
> is better that we try to utilize all those devices for the bucket cache, so 
> that we can avoid the impact of the slower devices where the actual data 
> resides on the HDFS datanodes.
> Combined with this, we can allow the user to configure the caching layer per 
> family/table so that one can effectively make use of the caching tiers.
> Can upload a design doc here. Before that, would like to know the suggestions 
> here. Thoughts!!!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-19357) Bucket cache no longer L2 for LRU cache

2017-12-14 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292025#comment-16292025
 ] 

Anoop Sam John edited comment on HBASE-19357 at 12/15/17 7:07 AM:
--

Will add to RN.
bq.perhaps system tables are offheap whereas everything else is file-backed?
There is no tiered BC at all; it is either off-heap or file. This can be 
useful. There is a JIRA, and Ram was working on a prototype. We will come back 
to that next year. Will add a comment in that issue. So when/if we do it, we 
can think of always adding the system table blocks to the off-heap BC. cc 
[~ram_krish]

The issue is this one: HBASE-18300


was (Author: anoop.hbase):
Will add to RN.
bq.perhaps system tables are offheap whereas everything else is file-backed?
There is no tiered BC at all; it is either off-heap or file. This can be 
useful. There is a JIRA, and Ram was working on a prototype. We will come back 
to that next year. Will add a comment in that issue. So when/if we do it, we 
can think of always adding the system table blocks to the off-heap BC. cc 
[~ram_krish]

> Bucket cache no longer L2 for LRU cache
> ---
>
> Key: HBASE-19357
> URL: https://issues.apache.org/jira/browse/HBASE-19357
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19357.patch, HBASE-19357.patch, 
> HBASE-19357_V2.patch, HBASE-19357_V3.patch, HBASE-19357_V3.patch
>
>
> When the Bucket cache is used, by default we don't configure it as an L2 
> cache alone. The default setting is combined mode ON, where data blocks go to 
> the Bucket cache and index/bloom blocks go to the LRU cache. But there is a 
> way to turn this off and make the LRU cache L1 and the Bucket cache a victim 
> handler for L1, so that it is just an L2.
> After the off-heap read path optimization, the Bucket cache is no longer 
> slower than L1. We have test results on data sizes from 12 GB. The Alibaba 
> use case was also with 12 GB, and they observed a ~30% QPS improvement over 
> the LRU cache.
> This issue is to remove the option for combined mode = false. So when the 
> Bucket cache is in use, data blocks will go to it only, and the LRU cache 
> will get only index/meta/bloom blocks. The Bucket cache will no longer be 
> configured as a victim handler for the LRU cache.
> Note: only when an external cache is in use does the L1/L2 distinction apply. 
> The LRU cache will be L1 and the external cache acts as its L2. That makes 
> full sense.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19357) Bucket cache no longer L2 for LRU cache

2017-12-14 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-19357:
---
Release Note: 
Removed the cacheDataInL1 option for HCD.
BucketCache is no longer the L2 for the on-heap LRU cache. When the BC is used, 
data blocks are kept strictly in the BC, whereas index/bloom blocks are kept in 
the on-heap LRU (L1) cache.
The config 'hbase.bucketcache.combinedcache.enabled' is removed. There is no 
way to set combined mode = false, i.e. to make the BC a victim handler for the 
LRU cache.
One more noticeable change appears when one uses BucketCache in file mode: the 
system tables' data blocks (including the META table) will be cached in the 
BucketCache files only. A plain-scan test over the META files alone reveals 
that the throughput of file-mode BC is only about half. But for META entries we 
have the RegionLocation cache on client-side connections, so this should not be 
a big concern in real cluster usage. We will check this further and probably 
fix it when we do the tiered BucketCache.
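For illustration only (not part of the release note), a file-mode BucketCache 
setup after this change needs just the ioengine and size settings; the path and 
size below are arbitrary examples:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class BucketCacheConfigExample {
  public static Configuration fileModeBucketCache() {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.bucketcache.ioengine", "file:/mnt/ssd1/bucketcache"); // or "offheap"
    conf.setInt("hbase.bucketcache.size", 8192); // cache size in MB
    // 'hbase.bucketcache.combinedcache.enabled' no longer exists: data blocks go
    // to the BucketCache, index/bloom blocks stay in the on-heap LRU cache.
    return conf;
  }
}
{code}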

  was:
Removed the cacheDataInL1 option for HCD.
BucketCache is no longer the L2 for the on-heap LRU cache. When the BC is used, 
data blocks are kept strictly in the BC, whereas index/bloom blocks are kept in 
the on-heap LRU (L1) cache.
The config 'hbase.bucketcache.combinedcache.enabled' is removed. There is no 
way to set combined mode = false, i.e. to make the BC a victim handler for the 
LRU cache.


> Bucket cache no longer L2 for LRU cache
> ---
>
> Key: HBASE-19357
> URL: https://issues.apache.org/jira/browse/HBASE-19357
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19357.patch, HBASE-19357.patch, 
> HBASE-19357_V2.patch, HBASE-19357_V3.patch, HBASE-19357_V3.patch
>
>
> When the Bucket cache is used, by default we don't configure it as an L2 
> cache alone. The default setting is combined mode ON, where data blocks go to 
> the Bucket cache and index/bloom blocks go to the LRU cache. But there is a 
> way to turn this off and make the LRU cache L1 and the Bucket cache a victim 
> handler for L1, so that it is just an L2.
> After the off-heap read path optimization, the Bucket cache is no longer 
> slower than L1. We have test results on data sizes from 12 GB. The Alibaba 
> use case was also with 12 GB, and they observed a ~30% QPS improvement over 
> the LRU cache.
> This issue is to remove the option for combined mode = false. So when the 
> Bucket cache is in use, data blocks will go to it only, and the LRU cache 
> will get only index/meta/bloom blocks. The Bucket cache will no longer be 
> configured as a victim handler for the LRU cache.
> Note: only when an external cache is in use does the L1/L2 distinction apply. 
> The LRU cache will be L1 and the external cache acts as its L2. That makes 
> full sense.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-15536) Make AsyncFSWAL as our default WAL

2017-12-14 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292128#comment-16292128
 ] 

Duo Zhang commented on HBASE-15536:
---

{quote}
[ERROR] testThreeRSAbort(org.apache.hadoop.hbase.master.TestDLSAsyncFSWAL)  
Time elapsed: 20.627 s  <<< ERROR!
org.apache.hadoop.hbase.TableNotFoundException: Region of 
'hbase:namespace,,1513320505933.451650152885a3b41d0b1110deca513c.' is expected 
in the table of 'testThreeRSAbort', but hbase:meta says it is in the table of 
'hbase:namespace'. hbase:meta might be damaged.
{quote}

The error message itself is an error I think... Let me dig...

> Make AsyncFSWAL as our default WAL
> --
>
> Key: HBASE-15536
> URL: https://issues.apache.org/jira/browse/HBASE-15536
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0-beta-1
>
> Attachments: 15536.addendum2.enable.asyncfswal.by.default.2.txt, 
> 15536.addendum2.enable.asyncfswal.by.default.3.txt, 
> 15536.addendum2.enable.asyncfswal.by.default.txt, 
> 15536.addendum2.enable.asyncfswal.by.default.txt, 
> 15536.addendum2.enable.asyncfswal.by.default.txt, 15536.minor.addendum.patch, 
> HBASE-15536-v1.patch, HBASE-15536-v2.patch, HBASE-15536-v3.patch, 
> HBASE-15536-v4.patch, HBASE-15536-v5.patch, HBASE-15536.patch, 
> latesttrunk_asyncWAL_50threads_10cols.jfr, 
> latesttrunk_defaultWAL_50threads_10cols.jfr
>
>
> As it should be predicated on passing basic cluster ITBLL



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-15536) Make AsyncFSWAL as our default WAL

2017-12-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292124#comment-16292124
 ] 

Hadoop QA commented on HBASE-15536:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
3s{color} | {color:blue} The patch file was not named according to hbase's 
naming conventions. Please see 
https://yetus.apache.org/documentation/0.6.0/precommit-patchnames for 
instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
26s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
27s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
39s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
19m 54s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0-beta1. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 15s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}147m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-15536 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12902208/15536.addendum2.enable.asyncfswal.by.default.txt
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 94b1fc1e630c 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 
15:49:21 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / deba43b156 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10466/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results 

[jira] [Commented] (HBASE-19112) Suspect methods on Cell to be deprecated

2017-12-14 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292110#comment-16292110
 ] 

Chia-Ping Tsai commented on HBASE-19112:


bq. Seems like one small change and its possible this whole Cell thing is done 
(new issue does fine-tuning!).
The change is small, but more discussion may be required. If we reach consensus 
that CP users should not add custom cells, the change (wrapping put/delete) can 
be addressed in this issue.

bq.  May be we should try wrapping that Cell then 
Yep. We can wrap the custom cell into an ExtendedCell. Most methods in 
ExtendedCell are default methods, so wrapping the cell should also be a small 
change. :)

> Suspect methods on Cell to be deprecated
> 
>
> Key: HBASE-19112
> URL: https://issues.apache.org/jira/browse/HBASE-19112
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Josh Elser
>Assignee: ramkrishna.s.vasudevan
>Priority: Blocker
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19112_branch-2.patch, 
> HBASE-19112_branch-2_1.patch, HBASE-19112_master.patch, 
> HBASE-19112_master_1.patch, HBASE-19112_master_1.patch, 
> HBASE-19112_master_2.patch, HBASE-19112_master_3.patch
>
>
> [~chia7712] suggested on the [mailing 
> list|https://lists.apache.org/thread.html/e6de9af26d9b888a358ba48bf74655ccd893573087c032c0fcf01585@%3Cdev.hbase.apache.org%3E]
>  that we have some methods on Cell which should be deprecated for removal:
> * {{#getType()}}
> * {{#getTimestamp()}}
> * {{#getTag()}}
> * {{#getSequenceId()}}
> Let's make a pass over these (and maybe the rest) to make sure that there 
> aren't others which are either implementation details or methods returning 
> now-private-marked classes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19507) Get or Scan Mob by rowkey return error value when run compact_mob or major_compact_mob after change MOB_THRESHOLD bigger

2017-12-14 Thread WangYuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292108#comment-16292108
 ] 

WangYuan commented on HBASE-19507:
--

[~jingcheng.du] Yes, thank you!
[~huaxiang] Adding an empty mobfile is a good idea, thank you!


> Get or Scan Mob by rowkey return error value when run compact_mob or 
> major_compact_mob after change MOB_THRESHOLD bigger
> 
>
> Key: HBASE-19507
> URL: https://issues.apache.org/jira/browse/HBASE-19507
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: WangYuan
>Assignee: huaxiang sun
>
> 1、
> create   'abc',{NAME => 'cf', MOB_THRESHOLD => '10', IS_MOB => 'true'}
> put 'abc','1','cf:a','1'
> put 'abc','2','cf:a','2'
> put 'abc','3','cf:a','3'
> put 'abc','4','cf:a','y1'
> put 'abc','5','cf:a','y2'
> put 'abc','6','cf:a','y3'
>   
> hbase(main):011:0> scan 'abc'
> ROW   COLUMN+CELL
>  1    column=cf:a, timestamp=1513171753098, value=1
>  2    column=cf:a, timestamp=1513171753208, value=2
>  3    column=cf:a, timestamp=1513171753246, value=3
>  4    column=cf:a, timestamp=1513171753273, value=y1
>  5    column=cf:a, timestamp=1513171753301, value=y2
>  6    column=cf:a, timestamp=1513171754282, value=y3
> hbase(main):012:0> flush 'abc'
> hbase(main):012:0> major_compact 'abc'
> hbase(main):012:0> major_compact_mob 'abc'
> 2、
> [See Hfile]:
> hbase org.apache.hadoop.hbase.io.hfile.HFile -f 
> /hbase/data/default/abc/a31b3146cba0d4569a7bf44e70e299c9/cf/22a432ba5c2c4802bedd947b99626f10
>  -p
> K: 1/cf:a/1513172294864/Put/vlen=5/seqid=4 V: 1
> K: 2/cf:a/1513172294892/Put/vlen=5/seqid=5 V: 2
> K: 3/cf:a/1513172294914/Put/vlen=5/seqid=6 V: 3
> K: 4/cf:a/1513172294954/Put/vlen=76/seqid=7 V: 
> \x00\x00\x00\x0Ed41d8cd98f00b204e9800998ecf8427e20171213ce022548c4c3498e864fda289b81e711
>  T[0]:  T[1]: abc
> K: 5/cf:a/1513172294982/Put/vlen=76/seqid=8 V: 
> \x00\x00\x00\x0Ed41d8cd98f00b204e9800998ecf8427e20171213ce022548c4c3498e864fda289b81e711
>  T[0]:  T[1]: abc
> K: 6/cf:a/1513172296455/Put/vlen=76/seqid=9 V: 
> \x00\x00\x00\x0Ed41d8cd98f00b204e9800998ecf8427e20171213ce022548c4c3498e864fda289b81e711
>  T[0]:  T[1]: abc
> Scanned kv count -> 6
> [See Mobfile]:
> hbase org.apache.hadoop.hbase.io.hfile.HFile -f 
> /hbase/mobdir/data/default/abc/07aab825b62dd9111831839cc9039df9/cf/d41d8cd98f00b204e9800998ecf8427e20171213bd8cfaf146684d4096ebf7994f050e96
>  -p
> K: 4/cf:a/1513172924196/Put/vlen=14/seqid=7 V: y1
> K: 5/cf:a/1513172924214/Put/vlen=14/seqid=8 V: y2
> K: 6/cf:a/1513172925768/Put/vlen=14/seqid=9 V: y3
> 3、
> alter 'abc',{NAME => 'cf', MOB_THRESHOLD => '10240' }
> put 
> 

[jira] [Created] (HBASE-19520) Add UTs for the new lock type PEER

2017-12-14 Thread Duo Zhang (JIRA)
Duo Zhang created HBASE-19520:
-

 Summary: Add UTs for the new lock type PEER
 Key: HBASE-19520
 URL: https://issues.apache.org/jira/browse/HBASE-19520
 Project: HBase
  Issue Type: Sub-task
  Components: proc-v2
Reporter: Duo Zhang






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19216) Implement a general framework to execute remote procedure on RS

2017-12-14 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292105#comment-16292105
 ] 

Duo Zhang commented on HBASE-19216:
---

Will push to feature branch HBASE-19397 if no other big problem so that 
[~openinx] can land his work.

> Implement a general framework to execute remote procedure on RS
> ---
>
> Key: HBASE-19216
> URL: https://issues.apache.org/jira/browse/HBASE-19216
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2, Replication
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Attachments: HBASE-19216-v1.patch, HBASE-19216-v2.patch, 
> HBASE-19216.patch, HBASE-19216.patch, HBASE-19216.patch
>
>
> When building the basic framework for HBASE-19064, I found that the 
> enable/disable peer is built upon the watcher of zk.
> The problem of using watcher is that, you do not know the exact time when all 
> RSes in the cluster have done the change, it is a 'eventually done'. 
> And for synchronous replication, when changing the state of a replication 
> peer, we need to know the exact time as we can only enable read/write after 
> that time. So I think we'd better use procedure to do this. Change the flag 
> on zk, and then execute a procedure on all RSes to reload the flag from zk.
> Another benefit is that, after the change, zk will be mainly used as a 
> storage, so it will be easy to implement another replication peer storage to 
> replace zk so that we can reduce the dependency on zk.
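
For reference, a minimal sketch of the procedure-based approach described above. 
All class and method names here are illustrative assumptions, not the actual 
HBASE-19216 API; the point is that the master gets a definite completion signal 
from every RS instead of an 'eventually done' zk watcher:

{code:title=RefreshPeerSketch.java}
// Illustrative sketch only -- names are assumptions, not the HBASE-19216 classes.
import java.util.List;
import java.util.concurrent.CountDownLatch;

public class RefreshPeerSketch {

  interface RegionServerEndpoint {
    void reloadPeerStateFromStorage(String peerId); // e.g. re-read the flag persisted in zk
  }

  /** Runs the refresh on every RS and returns only when all of them have finished. */
  static void refreshPeerOnAllRegionServers(String peerId, List<RegionServerEndpoint> servers)
      throws InterruptedException {
    CountDownLatch done = new CountDownLatch(servers.size());
    for (RegionServerEndpoint rs : servers) {
      new Thread(() -> {
        rs.reloadPeerStateFromStorage(peerId);
        done.countDown(); // the master learns the exact moment this RS finished
      }).start();
    }
    done.await(); // a definite "all RSes have reloaded" point, unlike a zk watcher
  }
}
{code}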



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19216) Implement a general framework to execute remote procedure on RS

2017-12-14 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-19216:
--
Attachment: HBASE-19216-v2.patch

Address the comments on rb.

> Implement a general framework to execute remote procedure on RS
> ---
>
> Key: HBASE-19216
> URL: https://issues.apache.org/jira/browse/HBASE-19216
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2, Replication
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Attachments: HBASE-19216-v1.patch, HBASE-19216-v2.patch, 
> HBASE-19216.patch, HBASE-19216.patch, HBASE-19216.patch
>
>
> When building the basic framework for HBASE-19064, I found that the 
> enable/disable peer is built upon the watcher of zk.
> The problem of using watcher is that, you do not know the exact time when all 
> RSes in the cluster have done the change, it is a 'eventually done'. 
> And for synchronous replication, when changing the state of a replication 
> peer, we need to know the exact time as we can only enable read/write after 
> that time. So I think we'd better use procedure to do this. Change the flag 
> on zk, and then execute a procedure on all RSes to reload the flag from zk.
> Another benefit is that, after the change, zk will be mainly used as a 
> storage, so it will be easy to implement another replication peer storage to 
> replace zk so that we can reduce the dependency on zk.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19457) Debugging flaky TestTruncateTableProcedure#testRecoveryAndDoubleExecutionPreserveSplits

2017-12-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292097#comment-16292097
 ] 

stack commented on HBASE-19457:
---

Good one Appy. There are pieces that still need paving over. Looks like you 
found one (I'm currently working on another).

When we truncate, we delete the table and its regions from hbase:meta or do we 
just edit state? (Looks like we delete the regions... good).

Dang. Why is this Truncate Table not calling DeleteTable then CreateTable as 
subprocedures? Why is it dup'ing procedure body?

If a crash puts us into a whack state such that on resumption we do the wrong 
thing, then the Procedure is not written properly. 

What is wrong about when it goes to assign? Is it that we have not finished 
editing/adding all regions to hbase:meta?

I've been working on Master startup. It reads meta and if it finds regions in 
OPEN state, it will reassign them trying to retain their old locations. It will 
also assign regions that are OFFLINE which thinking about it now is NOT what we 
want.

Who is doing the assign of regions with empty state?

(Can talk tomorrow boss)

> Debugging flaky 
> TestTruncateTableProcedure#testRecoveryAndDoubleExecutionPreserveSplits
> ---
>
> Key: HBASE-19457
> URL: https://issues.apache.org/jira/browse/HBASE-19457
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-19457.master.001.patch, patch1, test-output.txt
>
>
> Trying to explain the bug in a more general way where understanding of 
> ProcedureV2 is not required.
> Truncating table operation:
> # delete region states from meta
> # delete table state from meta
> # add new regions to meta with state null
> # crash
> recovery: TableStateManager treats a table with null state as ENABLED. AM 
> treats regions with null state as offline. Combined result: AM starts 
> assigning the new regions from the incomplete truncate operation.
> Fix: mark the table as disabled instead of deleting its state (see the sketch 
> after this description).
> 
> *patch1*
> Just added some logging to help with debugging:
> - 60s was too little time; increased the timeout
> - Added some useful log statements
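
For reference, a minimal sketch of the fix ordering described above. The helpers 
below are stubs and the names are assumptions, not the actual HBASE-19457 patch; 
the point is that the table state is never left null, so a crash at any step 
leaves the table DISABLED and the half-created regions are not assigned on 
recovery:

{code:title=TruncateOrderingSketch.java}
// Illustrative ordering only -- stub helpers, not the actual HBASE-19457 patch.
public class TruncateOrderingSketch {
  enum TableState { ENABLED, DISABLED }

  private TableState state = TableState.ENABLED;

  void truncateTable(String tableName) {
    setTableState(tableName, TableState.DISABLED); // instead of deleting the table state
    deleteRegionsFromMeta(tableName);              // drop the old regions
    addNewRegionsToMeta(tableName);                // a crash here is now safe: table is DISABLED
    setTableState(tableName, TableState.ENABLED);  // flip back only as the very last step
  }

  // Stubs standing in for the real meta/table-state updates.
  private void setTableState(String table, TableState s) { this.state = s; }
  private void deleteRegionsFromMeta(String table) { /* meta delete elided */ }
  private void addNewRegionsToMeta(String table) { /* meta insert elided */ }
}
{code}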



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19519) HBase creates snapshot, even after throwing snapshot creation exception.

2017-12-14 Thread Amit Kabra (JIRA)
Amit Kabra created HBASE-19519:
--

 Summary: HBase creates snapshot, even after throwing snapshot 
creation exception.
 Key: HBASE-19519
 URL: https://issues.apache.org/jira/browse/HBASE-19519
 Project: HBase
  Issue Type: Bug
Reporter: Amit Kabra


Example client-side flow:
for i = 0 --> 3 retries:
a) create snapshot
b) on exception, catch it and delete the snapshot
c) sleep 1 min * i

Now #a throws a creation exception if snapshot creation takes more than 5 
minutes. The cluster can get busy, snapshot creation can take time, and then 
this can happen:
i = 0 --> #a takes time and throws a snapshot creation exception --> #b does 
nothing since the snapshot isn't created yet --> during #c the snapshot gets 
created anyway
i = 1 --> #a fails with a snapshot-already-exists exception.

Issue: since a creation-failed exception was thrown, HBase should delete the 
snapshot and not create it, because the client can't know whether it is a valid 
or a corrupted snapshot.
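
A minimal sketch of the client-side flow above, using the standard Admin API 
(the snapshot and table names are placeholders; the retry/backoff numbers just 
mirror the description):

{code:title=SnapshotRetrySketch.java}
// Sketch of the retry loop described in this issue; not a recommended pattern,
// it just reproduces the client behaviour that triggers the problem.
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class SnapshotRetrySketch {
  public static void main(String[] args) throws Exception {
    String snapshotName = "my_snapshot";                       // placeholder
    TableName table = TableName.valueOf("my_table");           // placeholder
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      for (int i = 0; i < 3; i++) {
        try {
          admin.snapshot(snapshotName, table);                 // (a) create snapshot
          return;                                              // success
        } catch (Exception e) {
          // (b) creation "failed", but the snapshot may still materialize server-side
          // after this point, which is exactly the problem described above.
          try {
            admin.deleteSnapshot(snapshotName);
          } catch (Exception ignored) {
            // nothing to delete if creation truly failed
          }
        }
        Thread.sleep(60_000L * i);                             // (c) back off before retrying
      }
    }
  }
}
{code}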




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19112) Suspect methods on Cell to be deprecated

2017-12-14 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292095#comment-16292095
 ] 

Anoop Sam John commented on HBASE-19112:


I read the concern about exposing setTs etc. to CPs. But the issue is that we 
allow the CP hooks to add Cells; I believe Phoenix even uses this (for 
indexing). As long as all such cells are created using the Builder, we are 
good; the issue only comes up if they are not. We might expose only up to 
RawCell for CPs, and there is no setTs contract there, so a custom impl may 
have NO such implementation and the server cannot handle that situation. Oh, 
maybe it can. Maybe we should try wrapping that Cell then. Just saying; we can 
discuss in another issue.

> Suspect methods on Cell to be deprecated
> 
>
> Key: HBASE-19112
> URL: https://issues.apache.org/jira/browse/HBASE-19112
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Josh Elser
>Assignee: ramkrishna.s.vasudevan
>Priority: Blocker
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19112_branch-2.patch, 
> HBASE-19112_branch-2_1.patch, HBASE-19112_master.patch, 
> HBASE-19112_master_1.patch, HBASE-19112_master_1.patch, 
> HBASE-19112_master_2.patch, HBASE-19112_master_3.patch
>
>
> [~chia7712] suggested on the [mailing 
> list|https://lists.apache.org/thread.html/e6de9af26d9b888a358ba48bf74655ccd893573087c032c0fcf01585@%3Cdev.hbase.apache.org%3E]
>  that we have some methods on Cell which should be deprecated for removal:
> * {{#getType()}}
> * {{#getTimestamp()}}
> * {{#getTag()}}
> * {{#getSequenceId()}}
> Let's make a pass over these (and maybe the rest) to make sure that there 
> aren't others which are either implementation details or methods returning 
> now-private-marked classes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-15536) Make AsyncFSWAL as our default WAL

2017-12-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292094#comment-16292094
 ] 

stack commented on HBASE-15536:
---

TestZKSecretWatcher is flakey.

There must be a failing test in TestHRegion inherited by the other two 
superclasses?

> Make AsyncFSWAL as our default WAL
> --
>
> Key: HBASE-15536
> URL: https://issues.apache.org/jira/browse/HBASE-15536
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0-beta-1
>
> Attachments: 15536.addendum2.enable.asyncfswal.by.default.2.txt, 
> 15536.addendum2.enable.asyncfswal.by.default.3.txt, 
> 15536.addendum2.enable.asyncfswal.by.default.txt, 
> 15536.addendum2.enable.asyncfswal.by.default.txt, 
> 15536.addendum2.enable.asyncfswal.by.default.txt, 15536.minor.addendum.patch, 
> HBASE-15536-v1.patch, HBASE-15536-v2.patch, HBASE-15536-v3.patch, 
> HBASE-15536-v4.patch, HBASE-15536-v5.patch, HBASE-15536.patch, 
> latesttrunk_asyncWAL_50threads_10cols.jfr, 
> latesttrunk_defaultWAL_50threads_10cols.jfr
>
>
> As it should be predicated on passing basic cluster ITBLL



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19112) Suspect methods on Cell to be deprecated

2017-12-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292091#comment-16292091
 ] 

stack commented on HBASE-19112:
---

Really, in another issue? Seems like one small change and it's possible this 
whole Cell thing is done (a new issue does the fine-tuning!). Thanks 
[~ram_krish] [~anoop.hbase] and [~chia7712]

> Suspect methods on Cell to be deprecated
> 
>
> Key: HBASE-19112
> URL: https://issues.apache.org/jira/browse/HBASE-19112
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Josh Elser
>Assignee: ramkrishna.s.vasudevan
>Priority: Blocker
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19112_branch-2.patch, 
> HBASE-19112_branch-2_1.patch, HBASE-19112_master.patch, 
> HBASE-19112_master_1.patch, HBASE-19112_master_1.patch, 
> HBASE-19112_master_2.patch, HBASE-19112_master_3.patch
>
>
> [~chia7712] suggested on the [mailing 
> list|https://lists.apache.org/thread.html/e6de9af26d9b888a358ba48bf74655ccd893573087c032c0fcf01585@%3Cdev.hbase.apache.org%3E]
>  that we have some methods on Cell which should be deprecated for removal:
> * {{#getType()}}
> * {{#getTimestamp()}}
> * {{#getTag()}}
> * {{#getSequenceId()}}
> Let's make a pass over these (and maybe the rest) to make sure that there 
> aren't others which are either implementation details or methods returning 
> now-private-marked classes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19112) Suspect methods on Cell to be deprecated

2017-12-14 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292088#comment-16292088
 ] 

Anoop Sam John commented on HBASE-19112:


Ya, please, in another issue.. This one already has so much discussion that it 
is difficult to recollect it all.. That Put-related thing (whether a comment 
alone or some other way) can all be done as part of another issue, please.

> Suspect methods on Cell to be deprecated
> 
>
> Key: HBASE-19112
> URL: https://issues.apache.org/jira/browse/HBASE-19112
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Josh Elser
>Assignee: ramkrishna.s.vasudevan
>Priority: Blocker
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19112_branch-2.patch, 
> HBASE-19112_branch-2_1.patch, HBASE-19112_master.patch, 
> HBASE-19112_master_1.patch, HBASE-19112_master_1.patch, 
> HBASE-19112_master_2.patch, HBASE-19112_master_3.patch
>
>
> [~chia7712] suggested on the [mailing 
> list|https://lists.apache.org/thread.html/e6de9af26d9b888a358ba48bf74655ccd893573087c032c0fcf01585@%3Cdev.hbase.apache.org%3E]
>  that we have some methods on Cell which should be deprecated for removal:
> * {{#getType()}}
> * {{#getTimestamp()}}
> * {{#getTag()}}
> * {{#getSequenceId()}}
> Let's make a pass over these (and maybe the rest) to make sure that there 
> aren't others which are either implementation details or methods returning 
> now-private-marked classes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19112) Suspect methods on Cell to be deprecated

2017-12-14 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292089#comment-16292089
 ] 

Chia-Ping Tsai commented on HBASE-19112:


bq. Shall we do it in a follow on?
yep. We need to discuss this change in another jira. Not sure whether any user 
falls victim to this change.  :(  However, they can use CellBuilder instead so 
the change should be ok I think.

> Suspect methods on Cell to be deprecated
> 
>
> Key: HBASE-19112
> URL: https://issues.apache.org/jira/browse/HBASE-19112
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Josh Elser
>Assignee: ramkrishna.s.vasudevan
>Priority: Blocker
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19112_branch-2.patch, 
> HBASE-19112_branch-2_1.patch, HBASE-19112_master.patch, 
> HBASE-19112_master_1.patch, HBASE-19112_master_1.patch, 
> HBASE-19112_master_2.patch, HBASE-19112_master_3.patch
>
>
> [~chia7712] suggested on the [mailing 
> list|https://lists.apache.org/thread.html/e6de9af26d9b888a358ba48bf74655ccd893573087c032c0fcf01585@%3Cdev.hbase.apache.org%3E]
>  that we have some methods on Cell which should be deprecated for removal:
> * {{#getType()}}
> * {{#getTimestamp()}}
> * {{#getTag()}}
> * {{#getSequenceId()}}
> Let's make a pass over these (and maybe the rest) to make sure that there 
> aren't others which are either implementation details or methods returning 
> now-private-marked classes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19505) Disable ByteBufferPool by default at HM

2017-12-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292087#comment-16292087
 ] 

stack commented on HBASE-19505:
---

+1 if hadoopqa likes it. Thanks for making the change.

> Disable ByteBufferPool by default at HM
> ---
>
> Key: HBASE-19505
> URL: https://issues.apache.org/jira/browse/HBASE-19505
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19505.patch, HBASE-19505_V2.patch, 
> HBASE-19505_V3.patch
>
>
> The main usage of the pool is while accepting bigger sized requests ie. 
> Mutation requests. HM do not have any regions by default.  So we can make 
> this pool OFF in HM side. Still add a config to turn this ON.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19112) Suspect methods on Cell to be deprecated

2017-12-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292086#comment-16292086
 ] 

stack commented on HBASE-19112:
---

[~ram_krish] I think agreement above has it that RawCell now has getTypeByte 
also.

> Suspect methods on Cell to be deprecated
> 
>
> Key: HBASE-19112
> URL: https://issues.apache.org/jira/browse/HBASE-19112
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Josh Elser
>Assignee: ramkrishna.s.vasudevan
>Priority: Blocker
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19112_branch-2.patch, 
> HBASE-19112_branch-2_1.patch, HBASE-19112_master.patch, 
> HBASE-19112_master_1.patch, HBASE-19112_master_1.patch, 
> HBASE-19112_master_2.patch, HBASE-19112_master_3.patch
>
>
> [~chia7712] suggested on the [mailing 
> list|https://lists.apache.org/thread.html/e6de9af26d9b888a358ba48bf74655ccd893573087c032c0fcf01585@%3Cdev.hbase.apache.org%3E]
>  that we have some methods on Cell which should be deprecated for removal:
> * {{#getType()}}
> * {{#getTimestamp()}}
> * {{#getTag()}}
> * {{#getSequenceId()}}
> Let's make a pass over these (and maybe the rest) to make sure that there 
> aren't others which are either implementation details or methods returning 
> now-private-marked classes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19515) Region server left in online servers list forever if it went down after registering to master and before creating ephemeral node

2017-12-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292085#comment-16292085
 ] 

stack commented on HBASE-19515:
---

HBASE-18946 just made the failure more likely.  HBASE-9593 is in branch-1 as a 
'fix' but it is wrong. branch-2 undoes the HBASE-9593 fix but the HBASE-9593 
issue still exists (hence this issue).

> Region server left in online servers list forever if it went down after 
> registering to master and before creating ephemeral node
> 
>
> Key: HBASE-19515
> URL: https://issues.apache.org/jira/browse/HBASE-19515
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Reporter: stack
>Priority: Critical
> Fix For: 2.0.0
>
>
> This one is interesting. It was supposedly fixed long time ago back in 
> HBASE-9593 (The issue has same subject as this one) but there was a problem 
> w/ the fix reported later, post-commit, long after the issue was closed. The 
> 'fix' was registering ephemeral node in ZK BEFORE reporting in to the Master 
> for the first time. The problem w/ this approach is that the Master tells the 
> RS what name it should use reporting in. If we register in ZK before we talk 
> to the Master, the name in ZK and the one the RS ends up using could deviate.
> In hbase2, we do the right thing registering the ephemeral node after we 
> report to the Master. So, the issue reported in HBASE-9593, that a RS that 
> dies between reporting to master and registering up in ZK, stays registered 
> at the Master for ever is back; we'll keep trying to assign it regions. Its a 
> real problem.
> That hbase2 has this issue has been suppressed up until now. The test that 
> was written for HBASE-9593, TestRSKilledWhenInitializing, is a good test but 
> a little sloppy. It puts up two RSs aborting one only after registering at 
> the Master before posting to ZK. That leaves one healthy server up. It is 
> hosting hbase:meta. This is enough for the test to bluster through. The only 
> assign it does is namespace table. It goes to the hbase:meta server. If the 
> test created a new table and did roundrobin, it'd fail.
> After HBASE-18946, where we do round robin on table create -- a desirable 
> attribute -- via the balancer so all is kosher, the test 
> TestRSKilledWhenInitializing now starts to fail because we chose the hobbled 
> server most of the time.
> So, this issue is about fixing the original issue properly for hbase2. We 
> don't have a timeout on assign in AMv2, not yet, that might be the fix, or 
> perhaps a double report before we online a server with the second report 
> coming in after ZK goes up (or we stop doing ephemeral nodes for RS up in ZK 
> and just rely on heartbeats).
> Making this a critical issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19112) Suspect methods on Cell to be deprecated

2017-12-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292083#comment-16292083
 ] 

stack commented on HBASE-19112:
---

Looks like we have agreement (smile).

> Suspect methods on Cell to be deprecated
> 
>
> Key: HBASE-19112
> URL: https://issues.apache.org/jira/browse/HBASE-19112
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Josh Elser
>Assignee: ramkrishna.s.vasudevan
>Priority: Blocker
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19112_branch-2.patch, 
> HBASE-19112_branch-2_1.patch, HBASE-19112_master.patch, 
> HBASE-19112_master_1.patch, HBASE-19112_master_1.patch, 
> HBASE-19112_master_2.patch, HBASE-19112_master_3.patch
>
>
> [~chia7712] suggested on the [mailing 
> list|https://lists.apache.org/thread.html/e6de9af26d9b888a358ba48bf74655ccd893573087c032c0fcf01585@%3Cdev.hbase.apache.org%3E]
>  that we have some methods on Cell which should be deprecated for removal:
> * {{#getType()}}
> * {{#getTimestamp()}}
> * {{#getTag()}}
> * {{#getSequenceId()}}
> Let's make a pass over these (and maybe the rest) to make sure that there 
> aren't others which are either implementation details or methods returning 
> now-private-marked classes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19112) Suspect methods on Cell to be deprecated

2017-12-14 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292082#comment-16292082
 ] 

ramkrishna.s.vasudevan commented on HBASE-19112:


bq.Agreed. We can add comment on Put#add(Cell). Or pass a wrap put which do the 
type check in add(Cell) to cp user?
Oops. I am just seeing this. I did not do any change to Put#add(). Shall we do 
it in a follow on?

> Suspect methods on Cell to be deprecated
> 
>
> Key: HBASE-19112
> URL: https://issues.apache.org/jira/browse/HBASE-19112
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Josh Elser
>Assignee: ramkrishna.s.vasudevan
>Priority: Blocker
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19112_branch-2.patch, 
> HBASE-19112_branch-2_1.patch, HBASE-19112_master.patch, 
> HBASE-19112_master_1.patch, HBASE-19112_master_1.patch, 
> HBASE-19112_master_2.patch, HBASE-19112_master_3.patch
>
>
> [~chia7712] suggested on the [mailing 
> list|https://lists.apache.org/thread.html/e6de9af26d9b888a358ba48bf74655ccd893573087c032c0fcf01585@%3Cdev.hbase.apache.org%3E]
>  that we have some methods on Cell which should be deprecated for removal:
> * {{#getType()}}
> * {{#getTimestamp()}}
> * {{#getTag()}}
> * {{#getSequenceId()}}
> Let's make a pass over these (and maybe the rest) to make sure that there 
> aren't others which are either implementation details or methods returning 
> now-private-marked classes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19112) Suspect methods on Cell to be deprecated

2017-12-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292080#comment-16292080
 ] 

stack commented on HBASE-19112:
---

getTypeByte in CP exposed RawCell would be fine.

SequenceId is internals though. CPs can't play w/ this. Timestamp also?

If CPs could edit timestamp and sequenceid, it would only be in the write-path 
and before the Cell goes into the WAL. Edits after this point will just cause 
havoc inside the server.

> Suspect methods on Cell to be deprecated
> 
>
> Key: HBASE-19112
> URL: https://issues.apache.org/jira/browse/HBASE-19112
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Josh Elser
>Assignee: ramkrishna.s.vasudevan
>Priority: Blocker
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19112_branch-2.patch, 
> HBASE-19112_branch-2_1.patch, HBASE-19112_master.patch, 
> HBASE-19112_master_1.patch, HBASE-19112_master_1.patch, 
> HBASE-19112_master_2.patch, HBASE-19112_master_3.patch
>
>
> [~chia7712] suggested on the [mailing 
> list|https://lists.apache.org/thread.html/e6de9af26d9b888a358ba48bf74655ccd893573087c032c0fcf01585@%3Cdev.hbase.apache.org%3E]
>  that we have some methods on Cell which should be deprecated for removal:
> * {{#getType()}}
> * {{#getTimestamp()}}
> * {{#getTag()}}
> * {{#getSequenceId()}}
> Let's make a pass over these (and maybe the rest) to make sure that there 
> aren't others which are either implementation details or methods returning 
> now-private-marked classes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19112) Suspect methods on Cell to be deprecated

2017-12-14 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-19112:
---
Attachment: HBASE-19112_master_3.patch

Big rebase was needed. Trying QA. 
Addresses all the RB comments. Will upload to RB also for one more look at this.

> Suspect methods on Cell to be deprecated
> 
>
> Key: HBASE-19112
> URL: https://issues.apache.org/jira/browse/HBASE-19112
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Josh Elser
>Assignee: ramkrishna.s.vasudevan
>Priority: Blocker
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19112_branch-2.patch, 
> HBASE-19112_branch-2_1.patch, HBASE-19112_master.patch, 
> HBASE-19112_master_1.patch, HBASE-19112_master_1.patch, 
> HBASE-19112_master_2.patch, HBASE-19112_master_3.patch
>
>
> [~chia7712] suggested on the [mailing 
> list|https://lists.apache.org/thread.html/e6de9af26d9b888a358ba48bf74655ccd893573087c032c0fcf01585@%3Cdev.hbase.apache.org%3E]
>  that we have some methods on Cell which should be deprecated for removal:
> * {{#getType()}}
> * {{#getTimestamp()}}
> * {{#getTag()}}
> * {{#getSequenceId()}}
> Let's make a pass over these (and maybe the rest) to make sure that there 
> aren't others which are either implementation details or methods returning 
> now-private-marked classes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19112) Suspect methods on Cell to be deprecated

2017-12-14 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-19112:
---
Status: Patch Available  (was: Open)

> Suspect methods on Cell to be deprecated
> 
>
> Key: HBASE-19112
> URL: https://issues.apache.org/jira/browse/HBASE-19112
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Josh Elser
>Assignee: ramkrishna.s.vasudevan
>Priority: Blocker
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19112_branch-2.patch, 
> HBASE-19112_branch-2_1.patch, HBASE-19112_master.patch, 
> HBASE-19112_master_1.patch, HBASE-19112_master_1.patch, 
> HBASE-19112_master_2.patch, HBASE-19112_master_3.patch
>
>
> [~chia7712] suggested on the [mailing 
> list|https://lists.apache.org/thread.html/e6de9af26d9b888a358ba48bf74655ccd893573087c032c0fcf01585@%3Cdev.hbase.apache.org%3E]
>  that we have some methods on Cell which should be deprecated for removal:
> * {{#getType()}}
> * {{#getTimestamp()}}
> * {{#getTag()}}
> * {{#getSequenceId()}}
> Let's make a pass over these (and maybe the rest) to make sure that there 
> aren't others which are either implementation details or methods returning 
> now-private-marked classes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19515) Region server left in online servers list forever if it went down after registering to master and before creating ephemeral node

2017-12-14 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292074#comment-16292074
 ] 

ramkrishna.s.vasudevan commented on HBASE-19515:


bq.After HBASE-18946, where we do round robin on table create – a desirable 
attribute – via the balancer so all is kosher, the test 
TestRSKilledWhenInitializing now starts to fail because we chose the hobbled 
server most of the time
Even before HBASE-18946 this was happening the same way, correct? Only the 
place where we do round robin changed? I have not dug into this the way you 
have; just asking. Your point may be right, I just want to know.

> Region server left in online servers list forever if it went down after 
> registering to master and before creating ephemeral node
> 
>
> Key: HBASE-19515
> URL: https://issues.apache.org/jira/browse/HBASE-19515
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Reporter: stack
>Priority: Critical
> Fix For: 2.0.0
>
>
> This one is interesting. It was supposedly fixed long time ago back in 
> HBASE-9593 (The issue has same subject as this one) but there was a problem 
> w/ the fix reported later, post-commit, long after the issue was closed. The 
> 'fix' was registering ephemeral node in ZK BEFORE reporting in to the Master 
> for the first time. The problem w/ this approach is that the Master tells the 
> RS what name it should use reporting in. If we register in ZK before we talk 
> to the Master, the name in ZK and the one the RS ends up using could deviate.
> In hbase2, we do the right thing registering the ephemeral node after we 
> report to the Master. So, the issue reported in HBASE-9593, that a RS that 
> dies between reporting to master and registering up in ZK, stays registered 
> at the Master for ever is back; we'll keep trying to assign it regions. Its a 
> real problem.
> That hbase2 has this issue has been suppressed up until now. The test that 
> was written for HBASE-9593, TestRSKilledWhenInitializing, is a good test but 
> a little sloppy. It puts up two RSs aborting one only after registering at 
> the Master before posting to ZK. That leaves one healthy server up. It is 
> hosting hbase:meta. This is enough for the test to bluster through. The only 
> assign it does is namespace table. It goes to the hbase:meta server. If the 
> test created a new table and did roundrobin, it'd fail.
> After HBASE-18946, where we do round robin on table create -- a desirable 
> attribute -- via the balancer so all is kosher, the test 
> TestRSKilledWhenInitializing now starts to fail because we chose the hobbled 
> server most of the time.
> So, this issue is about fixing the original issue properly for hbase2. We 
> don't have a timeout on assign in AMv2, not yet, that might be the fix, or 
> perhaps a double report before we online a server with the second report 
> coming in after ZK goes up (or we stop doing ephemeral nodes for RS up in ZK 
> and just rely on heartbeats).
> Making this a critical issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19112) Suspect methods on Cell to be deprecated

2017-12-14 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292075#comment-16292075
 ] 

Chia-Ping Tsai commented on HBASE-19112:


bq. Ya setTs and seqId things must be there. Remembering now that 
SettableTimeStamp and other interface were CP exposed for this reason. 
Moving setTs and seqId to RawCell means that a CP user can modify the cells... 
I prefer to keep {{Cell}} and {{RawCell}} as read-only interfaces, since 
changing the ts and seqId in the CP layer may cause unknown results.

bq. I was not suggesting we should allow adding custom cells in CP. 
Agreed. We can add a comment on Put#add(Cell). Or pass a wrapped Put, which 
does the type check in add(Cell), to the CP user?
{code:title=WrapPut.java}
@Override
public Put add(Cell kv) throws IOException {
  // Only cells produced by the CellBuilder (server-known impls) are accepted.
  if (kv instanceof ExtendedCell) {
    return super.add(kv);
  }
  throw new UnsupportedOperationException(
      "Only ExtendedCell can be added to Put in the CP layer");
}
{code}

> Suspect methods on Cell to be deprecated
> 
>
> Key: HBASE-19112
> URL: https://issues.apache.org/jira/browse/HBASE-19112
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Josh Elser
>Assignee: ramkrishna.s.vasudevan
>Priority: Blocker
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19112_branch-2.patch, 
> HBASE-19112_branch-2_1.patch, HBASE-19112_master.patch, 
> HBASE-19112_master_1.patch, HBASE-19112_master_1.patch, 
> HBASE-19112_master_2.patch
>
>
> [~chia7712] suggested on the [mailing 
> list|https://lists.apache.org/thread.html/e6de9af26d9b888a358ba48bf74655ccd893573087c032c0fcf01585@%3Cdev.hbase.apache.org%3E]
>  that we have some methods on Cell which should be deprecated for removal:
> * {{#getType()}}
> * {{#getTimestamp()}}
> * {{#getTag()}}
> * {{#getSequenceId()}}
> Let's make a pass over these (and maybe the rest) to make sure that there 
> aren't others which are either implementation details or methods returning 
> now-private-marked classes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19112) Suspect methods on Cell to be deprecated

2017-12-14 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-19112:
---
Status: Open  (was: Patch Available)

> Suspect methods on Cell to be deprecated
> 
>
> Key: HBASE-19112
> URL: https://issues.apache.org/jira/browse/HBASE-19112
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Josh Elser
>Assignee: ramkrishna.s.vasudevan
>Priority: Blocker
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19112_branch-2.patch, 
> HBASE-19112_branch-2_1.patch, HBASE-19112_master.patch, 
> HBASE-19112_master_1.patch, HBASE-19112_master_1.patch, 
> HBASE-19112_master_2.patch
>
>
> [~chia7712] suggested on the [mailing 
> list|https://lists.apache.org/thread.html/e6de9af26d9b888a358ba48bf74655ccd893573087c032c0fcf01585@%3Cdev.hbase.apache.org%3E]
>  that we have some methods on Cell which should be deprecated for removal:
> * {{#getType()}}
> * {{#getTimestamp()}}
> * {{#getTag()}}
> * {{#getSequenceId()}}
> Let's make a pass over these (and maybe the rest) to make sure that there 
> aren't others which are either implementation details or methods returning 
> now-private-marked classes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19515) Region server left in online servers list forever if it went down after registering to master and before creating ephemeral node

2017-12-14 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292067#comment-16292067
 ] 

Anoop Sam John commented on HBASE-19515:


A good one, Stack. Nice explanation and great debugging.. :-)

> Region server left in online servers list forever if it went down after 
> registering to master and before creating ephemeral node
> 
>
> Key: HBASE-19515
> URL: https://issues.apache.org/jira/browse/HBASE-19515
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Reporter: stack
>Priority: Critical
> Fix For: 2.0.0
>
>
> This one is interesting. It was supposedly fixed long time ago back in 
> HBASE-9593 (The issue has same subject as this one) but there was a problem 
> w/ the fix reported later, post-commit, long after the issue was closed. The 
> 'fix' was registering ephemeral node in ZK BEFORE reporting in to the Master 
> for the first time. The problem w/ this approach is that the Master tells the 
> RS what name it should use reporting in. If we register in ZK before we talk 
> to the Master, the name in ZK and the one the RS ends up using could deviate.
> In hbase2, we do the right thing registering the ephemeral node after we 
> report to the Master. So, the issue reported in HBASE-9593, that a RS that 
> dies between reporting to master and registering up in ZK, stays registered 
> at the Master for ever is back; we'll keep trying to assign it regions. Its a 
> real problem.
> That hbase2 has this issue has been suppressed up until now. The test that 
> was written for HBASE-9593, TestRSKilledWhenInitializing, is a good test but 
> a little sloppy. It puts up two RSs aborting one only after registering at 
> the Master before posting to ZK. That leaves one healthy server up. It is 
> hosting hbase:meta. This is enough for the test to bluster through. The only 
> assign it does is namespace table. It goes to the hbase:meta server. If the 
> test created a new table and did roundrobin, it'd fail.
> After HBASE-18946, where we do round robin on table create -- a desirable 
> attribute -- via the balancer so all is kosher, the test 
> TestRSKilledWhenInitializing now starts to fail because we chose the hobbled 
> server most of the time.
> So, this issue is about fixing the original issue properly for hbase2. We 
> don't have a timeout on assign in AMv2, not yet, that might be the fix, or 
> perhaps a double report before we online a server with the second report 
> coming in after ZK goes up (or we stop doing ephemeral nodes for RS up in ZK 
> and just rely on heartbeats).
> Making this a critical issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18946) Stochastic load balancer assigns replica regions to the same RS

2017-12-14 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-18946:
--
Attachment: HBASE-18946.master.012.patch

> Stochastic load balancer assigns replica regions to the same RS
> ---
>
> Key: HBASE-18946
> URL: https://issues.apache.org/jira/browse/HBASE-18946
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-3
>Reporter: ramkrishna.s.vasudevan
>Assignee: stack
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18946.master.001.patch, 
> HBASE-18946.master.002.patch, HBASE-18946.master.003.patch, 
> HBASE-18946.master.004.patch, HBASE-18946.master.005.patch, 
> HBASE-18946.master.006.patch, HBASE-18946.master.007.patch, 
> HBASE-18946.master.008.patch, HBASE-18946.master.009.patch, 
> HBASE-18946.master.010.patch, HBASE-18946.master.011.patch, 
> HBASE-18946.master.012.patch, HBASE-18946.patch, HBASE-18946.patch, 
> HBASE-18946_2.patch, HBASE-18946_2.patch, HBASE-18946_simple_7.patch, 
> HBASE-18946_simple_8.patch, TestRegionReplicasWithRestartScenarios.java
>
>
> Trying out region replica and its assignment I can see that some times the 
> default LB Stocahstic load balancer assigns replica regions to the same RS. 
> This happens when we have 3 RS checked in and we have a table with 3 
> replicas. When a RS goes down then the replicas being assigned to same RS is 
> acceptable but the case when we have enough RS to assign this behaviour is 
> undesirable and does not solve the purpose of replicas. 
> [~huaxiang] and [~enis]. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18946) Stochastic load balancer assigns replica regions to the same RS

2017-12-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292062#comment-16292062
 ] 

stack commented on HBASE-18946:
---

Failure was TestMasterFailover. It failed to write its xml because it timed out 
twice. Looking at the test, it tries to set zk node states and move meta 
regions off master, neither of which makes sense in AMv2. I refactored the 
TestMasterFailover test that does this nonsense.

I see other timeouts though it looks like most other tests just pass. Here is 
what I see in console:

TestDLSFSHLog
TestStochasticLoadBalancer
TestReplicationZKNodeCleaner
TestLogsCleaner

These all pass locally w/o issue EXCEPT TestDLSFSHLog. It looks sick, stuck. 
Digging, indeed, it's the fault of this patch. We try to keep sending state 
change messages to the master for as long as we can, but the thread is not a 
daemon, so it keeps the RS up. Ugh! Fixed.
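
A tiny illustration of that daemon-thread point (the thread name and body are 
made up, not the actual patch):

{code:title=DaemonReporterSketch.java}
// Illustrative only: a report-to-master style retry thread marked as a daemon so it
// cannot keep the RS JVM alive once the server is shutting down.
public class DaemonReporterSketch {
  public static void main(String[] args) throws InterruptedException {
    Thread reporter = new Thread(() -> {
      while (!Thread.currentThread().isInterrupted()) {
        try {
          Thread.sleep(100); // stand-in for "keep sending state change messages"
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
        }
      }
    }, "rs-report-to-master");
    reporter.setDaemon(true); // without this, the retry loop keeps the process up after shutdown
    reporter.start();
    Thread.sleep(300);
    // main exits; because the reporter is a daemon thread, the JVM exits too instead of hanging
  }
}
{code}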

> Stochastic load balancer assigns replica regions to the same RS
> ---
>
> Key: HBASE-18946
> URL: https://issues.apache.org/jira/browse/HBASE-18946
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-3
>Reporter: ramkrishna.s.vasudevan
>Assignee: stack
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18946.master.001.patch, 
> HBASE-18946.master.002.patch, HBASE-18946.master.003.patch, 
> HBASE-18946.master.004.patch, HBASE-18946.master.005.patch, 
> HBASE-18946.master.006.patch, HBASE-18946.master.007.patch, 
> HBASE-18946.master.008.patch, HBASE-18946.master.009.patch, 
> HBASE-18946.master.010.patch, HBASE-18946.master.011.patch, 
> HBASE-18946.patch, HBASE-18946.patch, HBASE-18946_2.patch, 
> HBASE-18946_2.patch, HBASE-18946_simple_7.patch, HBASE-18946_simple_8.patch, 
> TestRegionReplicasWithRestartScenarios.java
>
>
> Trying out region replica and its assignment I can see that some times the 
> default LB Stocahstic load balancer assigns replica regions to the same RS. 
> This happens when we have 3 RS checked in and we have a table with 3 
> replicas. When a RS goes down then the replicas being assigned to same RS is 
> acceptable but the case when we have enough RS to assign this behaviour is 
> undesirable and does not solve the purpose of replicas. 
> [~huaxiang] and [~enis]. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19320) document the mysterious direct memory leak in hbase

2017-12-14 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292061#comment-16292061
 ] 

Anoop Sam John commented on HBASE-19320:


And 'maxCacheSize' for NIO is 5, I believe. I forget whether there are any 
settings to control this.

> document the mysterious direct memory leak in hbase 
> 
>
> Key: HBASE-19320
> URL: https://issues.apache.org/jira/browse/HBASE-19320
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 1.2.6
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-19320-master-v001.patch, Screen Shot 2017-11-21 at 
> 4.43.36 PM.png, Screen Shot 2017-11-21 at 4.44.22 PM.png
>
>
> Recently we run into a direct memory leak case, which takes some time to 
> trace and debug. Internally discussed with our [~saint@gmail.com], we 
> thought we had some findings and want to share with the community.
> Basically, it is the issue described in 
> http://www.evanjones.ca/java-bytebuffer-leak.html and it happened to one of 
> our hbase clusters.
> Create the jira first and will fill in more details later.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19320) document the mysterious direct memory leak in hbase

2017-12-14 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292059#comment-16292059
 ] 

Anoop Sam John commented on HBASE-19320:


I see.. This is per BB size.  hmm
bq. So the max possible DBB size which can be cached in single ThreadLocal is 
maxCacheSize * maxCachedBufferSize (worst case)
And the max possible total DBB size that NIO can pool/cache can be calculated 
as
maxCacheSize * maxCachedBufferSize * max possible ThreadLocals
where max possible ThreadLocals = the number of Reader threads in the RpcServer.
This can be configured using 'hbase.ipc.server.read.threadpool.size'. The 
default is 10 (checked 2.0 only; not sure whether 1.x has a different default).
Just making myself learn the full context here and adding it for reference too 
:-)
Thanks for sharing, [~huaxiang].
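
To put rough numbers on that worst case (the 64 MB buffer size is an assumed 
value for a large request; the cache depth of 5 and the 10 reader threads are 
the figures mentioned above):

{code:title=DirectBufferWorstCase.java}
// Back-of-the-envelope arithmetic only; 64 MB is an assumed large request-buffer size.
public class DirectBufferWorstCase {
  public static void main(String[] args) {
    long maxCachedBufferSize = 64L * 1024 * 1024; // one big cached request buffer (assumption)
    int buffersCachedPerThread = 5;               // the 'maxCacheSize' figure mentioned above
    int readerThreads = 10;                       // default hbase.ipc.server.read.threadpool.size
    long worstCase = maxCachedBufferSize * buffersCachedPerThread * readerThreads;
    System.out.println(worstCase / (1024 * 1024) + " MB of direct memory pinned"); // 3200 MB
  }
}
{code}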



> document the mysterious direct memory leak in hbase 
> 
>
> Key: HBASE-19320
> URL: https://issues.apache.org/jira/browse/HBASE-19320
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 1.2.6
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-19320-master-v001.patch, Screen Shot 2017-11-21 at 
> 4.43.36 PM.png, Screen Shot 2017-11-21 at 4.44.22 PM.png
>
>
> Recently we run into a direct memory leak case, which takes some time to 
> trace and debug. Internally discussed with our [~saint@gmail.com], we 
> thought we had some findings and want to share with the community.
> Basically, it is the issue described in 
> http://www.evanjones.ca/java-bytebuffer-leak.html and it happened to one of 
> our hbase clusters.
> Create the jira first and will fill in more details later.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18775) Add a Global Read-Only property to turn off all writes for the cluster

2017-12-14 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292053#comment-16292053
 ] 

Anoop Sam John commented on HBASE-18775:


+1. Any more comments [~tedyu]?  Else will commit after the latest QA run.

> Add a Global Read-Only property to turn off all writes for the cluster
> --
>
> Key: HBASE-18775
> URL: https://issues.apache.org/jira/browse/HBASE-18775
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, regionserver
>Affects Versions: HBASE-18477
>Reporter: Zach York
>Assignee: Zach York
> Attachments: HBASE-18775.HBASE-18477.001.patch, 
> HBASE-18775.HBASE-18477.002.patch, HBASE-18775.HBASE-18477.003.patch, 
> HBASE-18775.HBASE-18477.004.patch, HBASE-18775.HBASE-18477.005.patch, 
> HBASE-18775.HBASE-18477.006.patch, HBASE-18775.HBASE-18477.007.patch, 
> HBASE-18775.HBASE-18477.008.patch, HBASE-18775.HBASE-18477.009.patch, 
> HBASE-18775.HBASE-18477.010.patch, HBASE-18775.HBASE-18477.011.patch
>
>
> As part of HBASE-18477, we need a way to turn off all modification for a 
> cluster. This patch extends the read only mode used by replication to disable 
> all data and metadata operations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HBASE-19518) Moving the bulk load hooks to SecureBulkLoadEndpoint

2017-12-14 Thread Guangxu Cheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guangxu Cheng reassigned HBASE-19518:
-

Assignee: Guangxu Cheng

> Moving the bulk load hooks to SecureBulkLoadEndpoint
> 
>
> Key: HBASE-19518
> URL: https://issues.apache.org/jira/browse/HBASE-19518
> Project: HBase
>  Issue Type: Bug
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>
> Based on the discussion in HBASE-19483, we should move the bulk load hooks 
> from AccessController to SecureBulkLoadEndpoint.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18775) Add a Global Read-Only property to turn off all writes for the cluster

2017-12-14 Thread Zach York (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292046#comment-16292046
 ] 

Zach York commented on HBASE-18775:
---

[~anoop.hbase] Thanks. I fixed the checkstyle and naming.

Thanks for the reviews!

> Add a Global Read-Only property to turn off all writes for the cluster
> --
>
> Key: HBASE-18775
> URL: https://issues.apache.org/jira/browse/HBASE-18775
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, regionserver
>Affects Versions: HBASE-18477
>Reporter: Zach York
>Assignee: Zach York
> Attachments: HBASE-18775.HBASE-18477.001.patch, 
> HBASE-18775.HBASE-18477.002.patch, HBASE-18775.HBASE-18477.003.patch, 
> HBASE-18775.HBASE-18477.004.patch, HBASE-18775.HBASE-18477.005.patch, 
> HBASE-18775.HBASE-18477.006.patch, HBASE-18775.HBASE-18477.007.patch, 
> HBASE-18775.HBASE-18477.008.patch, HBASE-18775.HBASE-18477.009.patch, 
> HBASE-18775.HBASE-18477.010.patch, HBASE-18775.HBASE-18477.011.patch
>
>
> As part of HBASE-18477, we need a way to turn off all modification for a 
> cluster. This patch extends the read only mode used by replication to disable 
> all data and metadata operations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18775) Add a Global Read-Only property to turn off all writes for the cluster

2017-12-14 Thread Zach York (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zach York updated HBASE-18775:
--
Attachment: HBASE-18775.HBASE-18477.011.patch

> Add a Global Read-Only property to turn off all writes for the cluster
> --
>
> Key: HBASE-18775
> URL: https://issues.apache.org/jira/browse/HBASE-18775
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, regionserver
>Affects Versions: HBASE-18477
>Reporter: Zach York
>Assignee: Zach York
> Attachments: HBASE-18775.HBASE-18477.001.patch, 
> HBASE-18775.HBASE-18477.002.patch, HBASE-18775.HBASE-18477.003.patch, 
> HBASE-18775.HBASE-18477.004.patch, HBASE-18775.HBASE-18477.005.patch, 
> HBASE-18775.HBASE-18477.006.patch, HBASE-18775.HBASE-18477.007.patch, 
> HBASE-18775.HBASE-18477.008.patch, HBASE-18775.HBASE-18477.009.patch, 
> HBASE-18775.HBASE-18477.010.patch, HBASE-18775.HBASE-18477.011.patch
>
>
> As part of HBASE-18477, we need a way to turn off all modification for a 
> cluster. This patch extends the read only mode used by replication to disable 
> all data and metadata operations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19518) Moving the bulk load hooks to SecureBulkLoadEndpoint

2017-12-14 Thread Guangxu Cheng (JIRA)
Guangxu Cheng created HBASE-19518:
-

 Summary: Moving the bulk load hooks to SecureBulkLoadEndpoint
 Key: HBASE-19518
 URL: https://issues.apache.org/jira/browse/HBASE-19518
 Project: HBase
  Issue Type: Bug
Reporter: Guangxu Cheng


Based on the discussion in HBASE-19483, we should move the bulk load hooks from 
AccessController to SecureBulkLoadEndpoint.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19112) Suspect methods on Cell to be deprecated

2017-12-14 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292044#comment-16292044
 ] 

Anoop Sam John commented on HBASE-19112:


I was not suggesting we should allow adding custom cells in CP. I was just 
asking whether they do, just for open thinking. Ya, the argument seems fine.
Good point by Ram on the client-side CellComparator usage. Ya, this will 
definitely be an issue if custom cells are added. So Put#add() should take a 
Cell impl created by the Builder; there we know what type we make. Then my 
question is: why have Put#add(Cell) at all? That cell has to be created using a 
fixed API in CellBuilder, which may be a better way to restrict users from 
adding custom Cells. I would say 99.9% of users will not go that route and will 
create Cells using the builder only. Just asking, to unearth anything better 
for us.
 bq. they must to add ExtendedCell since the RS need to update cell's timestamp 
and sequence id
Good point. It seems some methods in ExtendedCell have to be moved to RawCell. 
Ya, the setTs and seqId things must be there. Remembering now that 
SettableTimeStamp and other interfaces were CP exposed for this reason. Maybe 
do this as another issue?
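
For reference, a rough sketch of that builder-only path (the CellBuilder API 
was still being finalized at this point, so the exact setType signature below 
is an assumption):

{code:title=BuilderCellPutSketch.java}
// Sketch: create a Cell via CellBuilder and hand it to Put#add(Cell).
// The Cell.Type enum used in setType is an assumption against the evolving 2.0 API.
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellBuilderFactory;
import org.apache.hadoop.hbase.CellBuilderType;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class BuilderCellPutSketch {
  public static void main(String[] args) throws Exception {
    byte[] row = Bytes.toBytes("r1");
    Put put = new Put(row);
    Cell cell = CellBuilderFactory.create(CellBuilderType.DEEP_COPY)
        .setRow(row)
        .setFamily(Bytes.toBytes("cf"))
        .setQualifier(Bytes.toBytes("q"))
        .setTimestamp(HConstants.LATEST_TIMESTAMP)
        .setType(Cell.Type.Put)   // assumption: the final enum name in the 2.0 API
        .setValue(Bytes.toBytes("v"))
        .build();
    put.add(cell);                // builder-created cells are server-known impls (ExtendedCell)
  }
}
{code}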

> Suspect methods on Cell to be deprecated
> 
>
> Key: HBASE-19112
> URL: https://issues.apache.org/jira/browse/HBASE-19112
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Josh Elser
>Assignee: ramkrishna.s.vasudevan
>Priority: Blocker
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19112_branch-2.patch, 
> HBASE-19112_branch-2_1.patch, HBASE-19112_master.patch, 
> HBASE-19112_master_1.patch, HBASE-19112_master_1.patch, 
> HBASE-19112_master_2.patch
>
>
> [~chia7712] suggested on the [mailing 
> list|https://lists.apache.org/thread.html/e6de9af26d9b888a358ba48bf74655ccd893573087c032c0fcf01585@%3Cdev.hbase.apache.org%3E]
>  that we have some methods on Cell which should be deprecated for removal:
> * {{#getType()}}
> * {{#getTimestamp()}}
> * {{#getTag()}}
> * {{#getSequenceId()}}
> Let's make a pass over these (and maybe the rest) to make sure that there 
> aren't others which are either implementation details or methods returning 
> now-private-marked classes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19483) Add proper privilege check for rsgroup commands

2017-12-14 Thread Guangxu Cheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292039#comment-16292039
 ] 

Guangxu Cheng commented on HBASE-19483:
---

Thanks for the reviews. I agree to move the rs group hooks from 
AccessController to RSGroupAdminEndpoint. That may be more reasonable. I will 
refactor AccessController. A new patch will come soon. :)

bq.After doing that refactor could we move the bulk load hooks out to the 
secure bulk load endpoint?
For consistency,  I think moving the bulk load hooks to SecureBulkLoadEndpoint 
should be done.

> Add proper privilege check for rsgroup commands
> ---
>
> Key: HBASE-19483
> URL: https://issues.apache.org/jira/browse/HBASE-19483
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Guangxu Cheng
> Fix For: 1.4.1, 1.5.0, 2.0.0-beta-1
>
> Attachments: HBASE-19483.master.001.patch, 
> HBASE-19483.master.002.patch, HBASE-19483.master.003.patch
>
>
> Currently list_rsgroups command can be executed by any user.
> This is inconsistent with other list commands such as list_peers and 
> list_peer_configs.
> We should add proper privilege check for list_rsgroups command.
> privilege check should be added for get_table_rsgroup / get_server_rsgroup / 
> get_rsgroup commands.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19505) Disable ByteBufferPool by default at HM

2017-12-14 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-19505:
---
Attachment: HBASE-19505_V3.patch

> Disable ByteBufferPool by default at HM
> ---
>
> Key: HBASE-19505
> URL: https://issues.apache.org/jira/browse/HBASE-19505
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19505.patch, HBASE-19505_V2.patch, 
> HBASE-19505_V3.patch
>
>
> The main usage of the pool is while accepting bigger sized requests ie. 
> Mutation requests. HM do not have any regions by default.  So we can make 
> this pool OFF in HM side. Still add a config to turn this ON.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18838) shaded artifacts are incorrect when built against hadoop 3

2017-12-14 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-18838:
--
Attachment: HBASE-18838.v5.patch

> shaded artifacts are incorrect when built against hadoop 3
> --
>
> Key: HBASE-18838
> URL: https://issues.apache.org/jira/browse/HBASE-18838
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 2.0.0-alpha-3
>Reporter: Sean Busbey
>Assignee: Mike Drob
>Priority: Critical
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18838-WIP.v2.patch, HBASE-18838.WIP.patch, 
> HBASE-18838.v3.patch, HBASE-18838.v4.patch, HBASE-18838.v5.patch
>
>
> Building master/branch-2 against the hadoop-3 profile results in 
> check-invariants screaming about unrelocated dependencies. Will list details 
> in a comment.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18838) shaded artifacts are incorrect when built against hadoop 3

2017-12-14 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292027#comment-16292027
 ] 

Mike Drob commented on HBASE-18838:
---

The license check failure is due to:

{noformat}
01:32:07.715 HTTP request sent, awaiting response... 502 Proxy Error
01:32:07.811 2017-12-14 23:42:48 ERROR 502: Proxy Error.
01:32:07.811 
01:32:07.811 Wget error 8 in fetching excludes file from url 
https://builds.apache.org/job/HBase-Find-Flaky-Tests/lastSuccessfulBuild/artifact/excludes/.
 Ignoring and proceeding.
{noformat}
I'll file a JIRA to handle that better later.

The compile error is strange; not sure why it appeared suddenly. v5 fixes it.

> shaded artifacts are incorrect when built against hadoop 3
> --
>
> Key: HBASE-18838
> URL: https://issues.apache.org/jira/browse/HBASE-18838
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 2.0.0-alpha-3
>Reporter: Sean Busbey
>Assignee: Mike Drob
>Priority: Critical
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18838-WIP.v2.patch, HBASE-18838.WIP.patch, 
> HBASE-18838.v3.patch, HBASE-18838.v4.patch, HBASE-18838.v5.patch
>
>
> Building master/branch-2 against the hadoop-3 profile results in 
> check-invariants screaming about unrelocated dependencies. Will list details 
> in a comment.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19357) Bucket cache no longer L2 for LRU cache

2017-12-14 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292025#comment-16292025
 ] 

Anoop Sam John commented on HBASE-19357:


Will add to the RN.
bq. perhaps system tables are offheap whereas everything else is file-backed?
There is no tiered BC at all; it is either off-heap or file-backed. This could be 
useful, though. There is a JIRA and Ram was working on a prototype; we will come 
back to that next year, and I will add a comment in that issue. So when/if we do 
it, we can think of always putting the system table blocks in the off-heap BC. 
cc [~ram_krish]
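For context, a minimal configuration sketch of the two deployment shapes mentioned above (off-heap vs. file-backed BucketCache); the size and path are example values only, and imports are omitted:

{code}
// Point BucketCache at off-heap memory (example values).
Configuration conf = HBaseConfiguration.create();
conf.set("hbase.bucketcache.ioengine", "offheap");
conf.set("hbase.bucketcache.size", "4096");          // MB

// Or back it with a file instead (example path).
// conf.set("hbase.bucketcache.ioengine", "file:/mnt/ssd/bucketcache");
{code}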

> Bucket cache no longer L2 for LRU cache
> ---
>
> Key: HBASE-19357
> URL: https://issues.apache.org/jira/browse/HBASE-19357
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19357.patch, HBASE-19357.patch, 
> HBASE-19357_V2.patch, HBASE-19357_V3.patch, HBASE-19357_V3.patch
>
>
> When Bucket cache is used, by default we don't configure it as an L2 cache 
> alone. The default setting is combined mode ON, where data blocks go to the 
> Bucket cache and index/bloom blocks go to the LRU cache. But there is a way to 
> turn this off and make LRU the L1 with Bucket cache as a victim handler for L1; 
> it will then be just an L2.
> After the off-heap read path optimization, Bucket cache is no longer slower 
> than L1. We have test results on data sizes from 12 GB. The Alibaba use case 
> was also with 12 GB, and they observed a ~30% QPS improvement over the LRU cache.
> This issue is to remove the option for combined mode = false. So when Bucket 
> cache is in use, data blocks will go to it only and LRU will get only 
> index/meta/bloom blocks. Bucket cache will no longer be configured as a victim 
> handler for LRU.
> Note: when an external cache is in use, only then does the L1/L2 split apply; 
> LRU will be L1 and the external cache acts as its L2. That makes full sense.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18775) Add a Global Read-Only property to turn off all writes for the cluster

2017-12-14 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292023#comment-16292023
 ] 

Anoop Sam John commented on HBASE-18775:


+1
Please check the checkstyle warnings and correct them. Also, 'READ_ONLY_ENABLED = 
"hbase.readonly";' is the read-only config key; note that we name such constants 
as **_KEY. Please correct that as well along with the checkstyle fix, and then I 
will commit.

The test failure is not related to this patch.
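A minimal sketch of the rename being asked for; only the "hbase.readonly" key comes from the review above, while the default value and surrounding code are assumptions for illustration:

{code}
// Config-key constants are conventionally suffixed with _KEY.
public static final String READ_ONLY_ENABLED_KEY = "hbase.readonly";
// Default value below is an assumption, not taken from the patch.
public static final boolean READ_ONLY_ENABLED_DEFAULT = false;

boolean readOnly = conf.getBoolean(READ_ONLY_ENABLED_KEY, READ_ONLY_ENABLED_DEFAULT);
{code}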

> Add a Global Read-Only property to turn off all writes for the cluster
> --
>
> Key: HBASE-18775
> URL: https://issues.apache.org/jira/browse/HBASE-18775
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, regionserver
>Affects Versions: HBASE-18477
>Reporter: Zach York
>Assignee: Zach York
> Attachments: HBASE-18775.HBASE-18477.001.patch, 
> HBASE-18775.HBASE-18477.002.patch, HBASE-18775.HBASE-18477.003.patch, 
> HBASE-18775.HBASE-18477.004.patch, HBASE-18775.HBASE-18477.005.patch, 
> HBASE-18775.HBASE-18477.006.patch, HBASE-18775.HBASE-18477.007.patch, 
> HBASE-18775.HBASE-18477.008.patch, HBASE-18775.HBASE-18477.009.patch, 
> HBASE-18775.HBASE-18477.010.patch
>
>
> As part of HBASE-18477, we need a way to turn off all modification for a 
> cluster. This patch extends the read only mode used by replication to disable 
> all data and metadata operations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19496) Reusing the ByteBuffer in rpc layer corrupt the ServerLoad and RegionLoad

2017-12-14 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292021#comment-16292021
 ] 

Anoop Sam John commented on HBASE-19496:


Yep, a sub-task under this.

> Reusing the ByteBuffer in rpc layer corrupt the ServerLoad and RegionLoad
> -
>
> Key: HBASE-19496
> URL: https://issues.apache.org/jira/browse/HBASE-19496
> Project: HBase
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Blocker
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19496.wip.patch
>
>
> {{ServerLoad}} and {{RegionLoad}} store the pb object internally, but the 
> bytebuffer backing the pb object may be reused in the rpc layer. Hence, the 
> {{ServerLoad}} and {{RegionLoad}} saved by {{HMaster}} will be corrupted if the 
> backing bytebuffer is modified.
> This issue doesn't happen on branch-1:
> # the netty server was introduced in 2.0 (see HBASE-17263)
> # reusing the bytebuffer to read RPC requests was introduced in 2.0 (see 
> HBASE-15788)
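To make the failure mode concrete, a small sketch of the hazard and the defensive copy that avoids it; the cache map and method below are hypothetical (imports omitted), not the actual HMaster code:

{code}
// Hypothetical cache of per-server load bytes kept by a master-like component.
private final Map<ServerName, byte[]> serverLoadBytes = new ConcurrentHashMap<>();

void onRegionServerReport(ServerName sn, ByteBuffer reusedRpcBuffer) {
  // Holding on to the rpc buffer itself would let the next request overwrite
  // what we cached, so copy the bytes out before the buffer is recycled.
  byte[] copy = new byte[reusedRpcBuffer.remaining()];
  reusedRpcBuffer.duplicate().get(copy);
  serverLoadBytes.put(sn, copy);
}
{code}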



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18440) ITs and Actions modify immutable TableDescriptors

2017-12-14 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292020#comment-16292020
 ] 

Guanghao Zhang commented on HBASE-18440:


OK, let me give it a try.

> ITs and Actions modify immutable TableDescriptors
> -
>
> Key: HBASE-18440
> URL: https://issues.apache.org/jira/browse/HBASE-18440
> Project: HBase
>  Issue Type: Sub-task
>  Components: integration tests
>Reporter: Mike Drob
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18440.patch, HBASE-18440.v2.patch, 
> HBASE-18440.v3.patch, HBASE-18440.v3.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19505) Disable ByteBufferPool by default at HM

2017-12-14 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292019#comment-16292019
 ] 

Anoop Sam John commented on HBASE-19505:


OK, I will change the RpcServer creation from HM and RS so that the decision of 
whether to use the pool is made at those layers themselves, and just pass a 
boolean to RpcServer saying whether to enable it. No conf check at all at the 
RpcServer level. Fine, that will be cleaner.
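A minimal sketch of that shape, assuming a hypothetical config key and helper name; the point is only that the HM/RS layer decides and the rpc layer receives a plain boolean:

{code}
// Hypothetical helper: masters host no regions by default, so skip the
// ByteBufferPool there unless it is explicitly switched on.
static boolean useByteBuffPool(boolean isMaster, Configuration conf) {
  return !isMaster || conf.getBoolean("hbase.master.bytebufferpool.enabled", false);
}
// The result would then be passed into the RpcServer constructor/factory,
// with no conf lookup inside RpcServer itself.
{code}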

> Disable ByteBufferPool by default at HM
> ---
>
> Key: HBASE-19505
> URL: https://issues.apache.org/jira/browse/HBASE-19505
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19505.patch, HBASE-19505_V2.patch
>
>
> The main usage of the pool is while accepting bigger sized requests, i.e. 
> Mutation requests. HM does not host any regions by default, so we can keep 
> this pool OFF on the HM side. Still add a config to turn it ON.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-15536) Make AsyncFSWAL as our default WAL

2017-12-14 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-15536:
--
Attachment: 15536.addendum2.enable.asyncfswal.by.default.txt

Retry.

> Make AsyncFSWAL as our default WAL
> --
>
> Key: HBASE-15536
> URL: https://issues.apache.org/jira/browse/HBASE-15536
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0-beta-1
>
> Attachments: 15536.addendum2.enable.asyncfswal.by.default.2.txt, 
> 15536.addendum2.enable.asyncfswal.by.default.3.txt, 
> 15536.addendum2.enable.asyncfswal.by.default.txt, 
> 15536.addendum2.enable.asyncfswal.by.default.txt, 
> 15536.addendum2.enable.asyncfswal.by.default.txt, 15536.minor.addendum.patch, 
> HBASE-15536-v1.patch, HBASE-15536-v2.patch, HBASE-15536-v3.patch, 
> HBASE-15536-v4.patch, HBASE-15536-v5.patch, HBASE-15536.patch, 
> latesttrunk_asyncWAL_50threads_10cols.jfr, 
> latesttrunk_defaultWAL_50threads_10cols.jfr
>
>
> As it should be predicated on passing basic cluster ITBLL



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18352) Enable TestMasterOperationsForRegionReplicas#testCreateTableWithMultipleReplicas disabled by Proc-V2 AM in HBASE-14614

2017-12-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292010#comment-16292010
 ] 

Hudson commented on HBASE-18352:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4227 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/4227/])
HBASE-18352 Enable (stack: rev 1a173f820b739d78e2634d0ded4b1f43188ddd27)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterOperationsForRegionReplicas.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/MiniHBaseCluster.java
Revert "HBASE-18352 Enable (stack: rev 6ab8ce9829fde7ad95e36e3beb6a323140117765)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterOperationsForRegionReplicas.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/MiniHBaseCluster.java


> Enable 
> TestMasterOperationsForRegionReplicas#testCreateTableWithMultipleReplicas 
> disabled by Proc-V2 AM in HBASE-14614
> --
>
> Key: HBASE-18352
> URL: https://issues.apache.org/jira/browse/HBASE-18352
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0-alpha-1
>Reporter: Stephen Yuan Jiang
>Assignee: huaxiang sun
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18352.master.001.patch, HBASE-18946_1.patch
>
>
> The following replica tests were disabled by Core Proc-V2 AM in HBASE-14614:
> - Disabled parts of...testCreateTableWithMultipleReplicas in 
> TestMasterOperationsForRegionReplicas There is an issue w/ assigning more 
> replicas if number of replicas is changed on us. See '/* DISABLED! FOR 
> NOW'.
> ** NOTE We moved fixing of the below two tests out to HBASE-19268
> - Disabled testRegionReplicasOnMidClusterHighReplication in 
> TestStochasticLoadBalancer2
> - Disabled testFlushAndCompactionsInPrimary in TestRegionReplicas
> This JIRA tracks the work to enable them (or modify/remove if not applicable).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19462) Deprecate all addImmutable methods in Put

2017-12-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292011#comment-16292011
 ] 

Hudson commented on HBASE-19462:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4227 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/4227/])
HBASE-19462 Deprecate all addImmutable methods in Put (stack: rev 
70f02dbc7ceff51949d7e08438520885e9d6380f)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/ExpAsStringVisibilityLabelServiceImpl.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/favored/FavoredNodeAssignmentHelper.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/DefaultVisibilityLabelServiceImpl.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/SnapshotTestingUtils.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMultiRespectsLimits.java
* (edit) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftUtilities.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/MetaTableAccessor.java
* (edit) hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RowResource.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestFlushLifeCycleTracker.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/TableNamespaceManager.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessControlLists.java
* (edit) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
* (edit) hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestPut.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java
* (edit) 
hbase-examples/src/main/java/org/apache/hadoop/hbase/client/example/MultiThreadedClientExample.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactionLifeCycleTracker.java
* (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/client/Put.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/RegionStateStore.java


> Deprecate all addImmutable methods in Put
> -
>
> Key: HBASE-19462
> URL: https://issues.apache.org/jira/browse/HBASE-19462
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19462.v0.patch, HBASE-19462.v1.patch
>
>
> Users are able to use {{CellBuilder}} to build the cell without an array copy 
> for Put/Delete/Increment/Append, and we always do the copy if the user passes a 
> {{ByteBuffer}}. Hence, {{addImmutable}} is unnecessary.
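A minimal sketch of the CellBuilder path the description points to; the builder calls follow the 2.0 client API, but exact method signatures (setType in particular) shifted across pre-releases, so treat this as illustrative (imports omitted):

{code}
// Build the cell once, then hand it to the Put; no addImmutable needed.
void addWithBuilder(Put put, byte[] row, byte[] cf, byte[] q, byte[] value)
    throws IOException {
  Cell cell = CellBuilderFactory.create(CellBuilderType.DEEP_COPY)
      .setRow(row)
      .setFamily(cf)
      .setQualifier(q)
      .setType(Cell.Type.Put)
      .setValue(value)
      .build();
  put.add(cell);
}
{code}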



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19517) Could not create interface org.apache.hadoop.hbase.zookeeper.MetricsZooKeeperSource when running against hadoop-3

2017-12-14 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19517:
--
Priority: Minor  (was: Major)

> Could not create interface 
> org.apache.hadoop.hbase.zookeeper.MetricsZooKeeperSource when running against 
> hadoop-3
> -
>
> Key: HBASE-19517
> URL: https://issues.apache.org/jira/browse/HBASE-19517
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
>
> When running test against hadoop-3, I observe the following:
> {code}
> org.apache.hadoop.hbase.zookeeper.TestZKMulti  Time elapsed: 1.327 sec  <<< 
> ERROR!
> java.lang.RuntimeException: Could not create  interface 
> org.apache.hadoop.hbase.zookeeper.MetricsZooKeeperSource Is the hadoop 
> compatibility jar on the classpath?
>   at 
> org.apache.hadoop.hbase.zookeeper.TestZKMulti.setUpBeforeClass(TestZKMulti.java:71)
> Caused by: java.util.ServiceConfigurationError: 
> org.apache.hadoop.hbase.zookeeper.MetricsZooKeeperSource: Provider 
> org.apache.hadoop.hbase.zookeeper.MetricsZooKeeperSourceImpl could not be 
> instantiated
>   at 
> org.apache.hadoop.hbase.zookeeper.TestZKMulti.setUpBeforeClass(TestZKMulti.java:71)
> Caused by: java.lang.NoClassDefFoundError: 
> org/apache/commons/beanutils/DynaBean
>   at 
> org.apache.hadoop.hbase.zookeeper.TestZKMulti.setUpBeforeClass(TestZKMulti.java:71)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.commons.beanutils.DynaBean
>   at 
> org.apache.hadoop.hbase.zookeeper.TestZKMulti.setUpBeforeClass(TestZKMulti.java:71)
> {code}
> I used the hadoop-3.0 profile



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19507) Get or Scan Mob by rowkey return error value when run compact_mob or major_compact_mob after change MOB_THRESHOLD bigger

2017-12-14 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292004#comment-16292004
 ] 

huaxiang sun commented on HBASE-19507:
--

Yeah, there is a warning if v3 is not configured with mob. Another way is to 
create an empty mob file for each partition and do a major mob compaction; we did 
this trick before.

> Get or Scan Mob by rowkey return error value when run compact_mob or 
> major_compact_mob after change MOB_THRESHOLD bigger
> 
>
> Key: HBASE-19507
> URL: https://issues.apache.org/jira/browse/HBASE-19507
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: WangYuan
>Assignee: huaxiang sun
>
> 1、
> create   'abc',{NAME => 'cf', MOB_THRESHOLD => '10', IS_MOB => 'true'}
> put 'abc','1','cf:a','1'
> put 'abc','2','cf:a','2'
> put 'abc','3','cf:a','3'
> put 'abc','4','cf:a','y1'
> put 'abc','5','cf:a','y2'
> put 'abc','6','cf:a','y3'
>   
> hbase(main):011:0> scan 'abc'
> ROWCOLUMN+CELL
>   
>
>  1 column=cf:a, 
> timestamp=1513171753098, value=1  
>  
>  2 column=cf:a, 
> timestamp=1513171753208, value=2  
>  
>  3 column=cf:a, 
> timestamp=1513171753246, value=3  
>  
>  4 column=cf:a, 
> timestamp=1513171753273, value=y1 
>  
>  5 column=cf:a, 
> timestamp=1513171753301, value=y2 
>  
>  6 column=cf:a, 
> timestamp=1513171754282, value=y3 
>  
> hbase(main):012:0> flush 'abc'
> hbase(main):012:0> major_compact 'abc'
> hbase(main):012:0> major_compact_mob 'abc'
> 2、
> [See Hfile]:
> hbase org.apache.hadoop.hbase.io.hfile.HFile -f 
> /hbase/data/default/abc/a31b3146cba0d4569a7bf44e70e299c9/cf/22a432ba5c2c4802bedd947b99626f10
>  -p
> K: 1/cf:a/1513172294864/Put/vlen=5/seqid=4 V: 1
> K: 2/cf:a/1513172294892/Put/vlen=5/seqid=5 V: 2
> K: 3/cf:a/1513172294914/Put/vlen=5/seqid=6 V: 3
> K: 4/cf:a/1513172294954/Put/vlen=76/seqid=7 V: 
> \x00\x00\x00\x0Ed41d8cd98f00b204e9800998ecf8427e20171213ce022548c4c3498e864fda289b81e711
>  T[0]:  T[1]: abc
> K: 5/cf:a/1513172294982/Put/vlen=76/seqid=8 V: 
> \x00\x00\x00\x0Ed41d8cd98f00b204e9800998ecf8427e20171213ce022548c4c3498e864fda289b81e711
>  T[0]:  T[1]: abc
> K: 6/cf:a/1513172296455/Put/vlen=76/seqid=9 V: 
> \x00\x00\x00\x0Ed41d8cd98f00b204e9800998ecf8427e20171213ce022548c4c3498e864fda289b81e711
>  T[0]:  T[1]: abc
> Scanned kv count -> 6
> [See Mobfile]:
> hbase org.apache.hadoop.hbase.io.hfile.HFile -f 
> /hbase/mobdir/data/default/abc/07aab825b62dd9111831839cc9039df9/cf/d41d8cd98f00b204e9800998ecf8427e20171213bd8cfaf146684d4096ebf7994f050e96
>  -p
> K: 4/cf:a/1513172924196/Put/vlen=14/seqid=7 V: y1
> K: 5/cf:a/1513172924214/Put/vlen=14/seqid=8 V: y2
> K: 6/cf:a/1513172925768/Put/vlen=14/seqid=9 V: y3
> 3、
> alter 'abc',{NAME => 'cf', MOB_THRESHOLD => '10240' }
> put 
> 

[jira] [Commented] (HBASE-19507) Get or Scan Mob by rowkey return error value when run compact_mob or major_compact_mob after change MOB_THRESHOLD bigger

2017-12-14 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291999#comment-16291999
 ] 

Jingcheng Du commented on HBASE-19507:
--

Thanks [~wangyuan].
MOB requires HFileV3.
You might need to change hbase.mob.file.compaction.mergeable.threshold, re-run 
the steps you mentioned, and change the threshold back after all data are 
recovered.
Otherwise, you have to write some code to recover the data:
# Scan all the MOB files.
# Convert the read cells to ref cells and put them back into the HBase table.



> Get or Scan Mob by rowkey return error value when run compact_mob or 
> major_compact_mob after change MOB_THRESHOLD bigger
> 
>
> Key: HBASE-19507
> URL: https://issues.apache.org/jira/browse/HBASE-19507
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: WangYuan
>Assignee: huaxiang sun
>
> 1、
> create   'abc',{NAME => 'cf', MOB_THRESHOLD => '10', IS_MOB => 'true'}
> put 'abc','1','cf:a','1'
> put 'abc','2','cf:a','2'
> put 'abc','3','cf:a','3'
> put 'abc','4','cf:a','y1'
> put 'abc','5','cf:a','y2'
> put 'abc','6','cf:a','y3'
>   
> hbase(main):011:0> scan 'abc'
> ROWCOLUMN+CELL
>   
>
>  1 column=cf:a, 
> timestamp=1513171753098, value=1  
>  
>  2 column=cf:a, 
> timestamp=1513171753208, value=2  
>  
>  3 column=cf:a, 
> timestamp=1513171753246, value=3  
>  
>  4 column=cf:a, 
> timestamp=1513171753273, value=y1 
>  
>  5 column=cf:a, 
> timestamp=1513171753301, value=y2 
>  
>  6 column=cf:a, 
> timestamp=1513171754282, value=y3 
>  
> hbase(main):012:0> flush 'abc'
> hbase(main):012:0> major_compact 'abc'
> hbase(main):012:0> major_compact_mob 'abc'
> 2、
> [See Hfile]:
> hbase org.apache.hadoop.hbase.io.hfile.HFile -f 
> /hbase/data/default/abc/a31b3146cba0d4569a7bf44e70e299c9/cf/22a432ba5c2c4802bedd947b99626f10
>  -p
> K: 1/cf:a/1513172294864/Put/vlen=5/seqid=4 V: 1
> K: 2/cf:a/1513172294892/Put/vlen=5/seqid=5 V: 2
> K: 3/cf:a/1513172294914/Put/vlen=5/seqid=6 V: 3
> K: 4/cf:a/1513172294954/Put/vlen=76/seqid=7 V: 
> \x00\x00\x00\x0Ed41d8cd98f00b204e9800998ecf8427e20171213ce022548c4c3498e864fda289b81e711
>  T[0]:  T[1]: abc
> K: 5/cf:a/1513172294982/Put/vlen=76/seqid=8 V: 
> \x00\x00\x00\x0Ed41d8cd98f00b204e9800998ecf8427e20171213ce022548c4c3498e864fda289b81e711
>  T[0]:  T[1]: abc
> K: 6/cf:a/1513172296455/Put/vlen=76/seqid=9 V: 
> \x00\x00\x00\x0Ed41d8cd98f00b204e9800998ecf8427e20171213ce022548c4c3498e864fda289b81e711
>  T[0]:  T[1]: abc
> Scanned kv count -> 6
> [See Mobfile]:
> hbase org.apache.hadoop.hbase.io.hfile.HFile -f 
> /hbase/mobdir/data/default/abc/07aab825b62dd9111831839cc9039df9/cf/d41d8cd98f00b204e9800998ecf8427e20171213bd8cfaf146684d4096ebf7994f050e96
>  -p
> K: 4/cf:a/1513172924196/Put/vlen=14/seqid=7 V: y1
> K: 5/cf:a/1513172924214/Put/vlen=14/seqid=8 V: y2
> K: 6/cf:a/1513172925768/Put/vlen=14/seqid=9 V: y3
> 3、
> alter 'abc',{NAME => 'cf', MOB_THRESHOLD => '10240' }
> put 
> 

[jira] [Resolved] (HBASE-19507) Get or Scan Mob by rowkey return error value when run compact_mob or major_compact_mob after change MOB_THRESHOLD bigger

2017-12-14 Thread Jingcheng Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingcheng Du resolved HBASE-19507.
--
Resolution: Not A Bug

> Get or Scan Mob by rowkey return error value when run compact_mob or 
> major_compact_mob after change MOB_THRESHOLD bigger
> 
>
> Key: HBASE-19507
> URL: https://issues.apache.org/jira/browse/HBASE-19507
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: WangYuan
>Assignee: huaxiang sun
>
> 1、
> create   'abc',{NAME => 'cf', MOB_THRESHOLD => '10', IS_MOB => 'true'}
> put 'abc','1','cf:a','1'
> put 'abc','2','cf:a','2'
> put 'abc','3','cf:a','3'
> put 'abc','4','cf:a','y1'
> put 'abc','5','cf:a','y2'
> put 'abc','6','cf:a','y3'
>   
> hbase(main):011:0> scan 'abc'
> ROWCOLUMN+CELL
>   
>
>  1 column=cf:a, 
> timestamp=1513171753098, value=1  
>  
>  2 column=cf:a, 
> timestamp=1513171753208, value=2  
>  
>  3 column=cf:a, 
> timestamp=1513171753246, value=3  
>  
>  4 column=cf:a, 
> timestamp=1513171753273, value=y1 
>  
>  5 column=cf:a, 
> timestamp=1513171753301, value=y2 
>  
>  6 column=cf:a, 
> timestamp=1513171754282, value=y3 
>  
> hbase(main):012:0> flush 'abc'
> hbase(main):012:0> major_compact 'abc'
> hbase(main):012:0> major_compact_mob 'abc'
> 2、
> [See Hfile]:
> hbase org.apache.hadoop.hbase.io.hfile.HFile -f 
> /hbase/data/default/abc/a31b3146cba0d4569a7bf44e70e299c9/cf/22a432ba5c2c4802bedd947b99626f10
>  -p
> K: 1/cf:a/1513172294864/Put/vlen=5/seqid=4 V: 1
> K: 2/cf:a/1513172294892/Put/vlen=5/seqid=5 V: 2
> K: 3/cf:a/1513172294914/Put/vlen=5/seqid=6 V: 3
> K: 4/cf:a/1513172294954/Put/vlen=76/seqid=7 V: 
> \x00\x00\x00\x0Ed41d8cd98f00b204e9800998ecf8427e20171213ce022548c4c3498e864fda289b81e711
>  T[0]:  T[1]: abc
> K: 5/cf:a/1513172294982/Put/vlen=76/seqid=8 V: 
> \x00\x00\x00\x0Ed41d8cd98f00b204e9800998ecf8427e20171213ce022548c4c3498e864fda289b81e711
>  T[0]:  T[1]: abc
> K: 6/cf:a/1513172296455/Put/vlen=76/seqid=9 V: 
> \x00\x00\x00\x0Ed41d8cd98f00b204e9800998ecf8427e20171213ce022548c4c3498e864fda289b81e711
>  T[0]:  T[1]: abc
> Scanned kv count -> 6
> [See Mobfile]:
> hbase org.apache.hadoop.hbase.io.hfile.HFile -f 
> /hbase/mobdir/data/default/abc/07aab825b62dd9111831839cc9039df9/cf/d41d8cd98f00b204e9800998ecf8427e20171213bd8cfaf146684d4096ebf7994f050e96
>  -p
> K: 4/cf:a/1513172924196/Put/vlen=14/seqid=7 V: y1
> K: 5/cf:a/1513172924214/Put/vlen=14/seqid=8 V: y2
> K: 6/cf:a/1513172925768/Put/vlen=14/seqid=9 V: y3
> 3、
> alter 'abc',{NAME => 'cf', MOB_THRESHOLD => '10240' }
> put 
> 

[jira] [Commented] (HBASE-19513) Fix the wrapped AsyncFSOutput implementation

2017-12-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291983#comment-16291983
 ] 

Hadoop QA commented on HBASE-19513:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
38s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  6m 
10s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
10s{color} | {color:green} hbase-server: The patch generated 0 new + 1 
unchanged - 2 fixed = 1 total (was 3) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
30s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
22m 35s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0-beta1. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 94m 
38s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}138m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19513 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12902188/HBASE-19513.master.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 71a2799aa8d2 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 6ab8ce9829 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10465/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10465/console |
| Powered by | Apache Yetus 0.6.0   http://yetus.apache.org |


This message was automatically generated.



> Fix the wrapped AsyncFSOutput implementation
> 
>
> Key: HBASE-19513
> 

[jira] [Comment Edited] (HBASE-19507) Get or Scan Mob by rowkey return error value when run compact_mob or major_compact_mob after change MOB_THRESHOLD bigger

2017-12-14 Thread WangYuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291981#comment-16291981
 ] 

WangYuan edited comment on HBASE-19507 at 12/15/17 3:27 AM:


Thank you [~jingcheng.du] [~huaxiang]
I found the reason:

The hfile version needs to be changed to V3, but the old conf was V2.
The configuration value in hbase-site.xml is  

Then I changed the conf value from 2 to 3, switched (stop-start) the hmaster, ran 
major_mob, and it was OK.

But I need to do something else:
A. If there is more than 1 mobfile with the same mobDate in 1 region, it is OK 
after a mob majorcompact.
B. But if there is only 1 mobfile (or 1 mobfile with only a single mobDate) in 1 
region, it can't be recovered because majorcompact needs at least 2 files, so I 
have to:
  b1. put 1 new record into the region, then flush
  b2. modify the new and old mobfiles' mobDate to the same mobDate
  b3. majorcompact it to recover the data
  b4. delete the new record
C. Note the mobfile's size and adjust accordingly; it may not be possible to 
majorcompact mob because hbase.mob.file.compaction.mergeable.threshold is 192M.

was (Author: wangyuan):
Thank U  [~jingcheng.du]  [~huaxiang]
I found the reason:

The hfile version need modify to  V3,but the old conf is V2。
The configuration  value in hbase-site.xml is  

Then I change conf value from 2 to 3,  then switch(stop-start) hmaster , then 
major_mob ,it's OK.

But I need to do something else :
A、If more then 1 mobfile with same mobDate in 1 region ,it's OK  after mob 
majorcompact.
B、But, if only 1 mobfile(or 1 mobfile with only 1 single mobDate ) in 1 region 
,it can't recovery becaues majorcompact need 2 files at least, So I have to do :
  b1、 put 1 new record into region then flush 
  b2、modify new and old mobfile's mobDate to same mobDate
  b3、majorcompact it to recovery data.
C、notify mobfile's size and change it ,may be can't to majorcompact mob because 
hbase.mob.file.compaction.mergeable.threshold is 192M.

> Get or Scan Mob by rowkey return error value when run compact_mob or 
> major_compact_mob after change MOB_THRESHOLD bigger
> 
>
> Key: HBASE-19507
> URL: https://issues.apache.org/jira/browse/HBASE-19507
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: WangYuan
>Assignee: huaxiang sun
>
> 1、
> create   'abc',{NAME => 'cf', MOB_THRESHOLD => '10', IS_MOB => 'true'}
> put 'abc','1','cf:a','1'
> put 'abc','2','cf:a','2'
> put 'abc','3','cf:a','3'
> put 'abc','4','cf:a','y1'
> put 'abc','5','cf:a','y2'
> put 'abc','6','cf:a','y3'
>   
> hbase(main):011:0> scan 'abc'
> ROWCOLUMN+CELL
>   
>
>  1 column=cf:a, 
> timestamp=1513171753098, value=1  
>  
>  2 column=cf:a, 
> timestamp=1513171753208, value=2  
>  
>  3 column=cf:a, 
> timestamp=1513171753246, value=3  
>  
>  4 column=cf:a, 
> timestamp=1513171753273, value=y1 
>  
>  5 column=cf:a, 
> timestamp=1513171753301, value=y2 
>  
>  6 column=cf:a, 
> timestamp=1513171754282, value=y3 
>  
> hbase(main):012:0> flush 'abc'
> hbase(main):012:0> major_compact 'abc'
> hbase(main):012:0> major_compact_mob 'abc'
> 2、
> [See Hfile]:
> hbase org.apache.hadoop.hbase.io.hfile.HFile -f 
> /hbase/data/default/abc/a31b3146cba0d4569a7bf44e70e299c9/cf/22a432ba5c2c4802bedd947b99626f10
>  -p
> K: 1/cf:a/1513172294864/Put/vlen=5/seqid=4 V: 1
> K: 2/cf:a/1513172294892/Put/vlen=5/seqid=5 V: 2
> K: 3/cf:a/1513172294914/Put/vlen=5/seqid=6 V: 3
> K: 4/cf:a/1513172294954/Put/vlen=76/seqid=7 V: 
> \x00\x00\x00\x0Ed41d8cd98f00b204e9800998ecf8427e20171213ce022548c4c3498e864fda289b81e711
>  T[0]:  T[1]: abc
> K: 5/cf:a/1513172294982/Put/vlen=76/seqid=8 V: 
> \x00\x00\x00\x0Ed41d8cd98f00b204e9800998ecf8427e20171213ce022548c4c3498e864fda289b81e711
>  T[0]:  T[1]: abc
> K: 6/cf:a/1513172296455/Put/vlen=76/seqid=9 V: 
> 

[jira] [Comment Edited] (HBASE-19507) Get or Scan Mob by rowkey return error value when run compact_mob or major_compact_mob after change MOB_THRESHOLD bigger

2017-12-14 Thread WangYuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291981#comment-16291981
 ] 

WangYuan edited comment on HBASE-19507 at 12/15/17 3:26 AM:


Thank you [~jingcheng.du] [~huaxiang]
I found the reason:

The hfile version needs to be changed to V3, but the old conf was V2.
The configuration value in hbase-site.xml is  

Then I changed the conf value from 2 to 3, switched (stop-start) the hmaster, ran 
major_mob, and it was OK.

But I need to do something else:
A. If there is more than 1 mobfile with the same mobDate in 1 region, it is OK 
after a mob majorcompact.
B. But if there is only 1 mobfile (or 1 mobfile with only a single mobDate) in 1 
region, it can't be recovered because majorcompact needs at least 2 files, so I 
have to:
  b1. put 1 new record into the region, then flush
  b2. modify the new and old mobfiles' mobDate to the same mobDate
  b3. majorcompact it to recover the data
C. Note the mobfile's size and adjust accordingly; it may not be possible to 
majorcompact mob because hbase.mob.file.compaction.mergeable.threshold is 192M.


was (Author: wangyuan):
Thank U  [~jingcheng.du]
I found the reason:

The hfile version need modify to  V3,but the old conf is V2。
The configuration  value in hbase-site.xml is  

Then I change conf value from 2 to 3,  then switch(stop-start) hmaster , then 
major_mob ,it's OK.

But I need to do something else :
A、If more then 1 mobfile with same mobDate in 1 region ,it's OK  after mob 
majorcompact.
B、But, if only 1 mobfile(or 1 mobfile with only 1 single mobDate ) in 1 region 
,it can't recovery becaues majorcompact need 2 files at least, So I have to do :
  b1、 put 1 new record into region then flush 
  b2、modify new and old mobfile's mobDate to same mobDate
  b3、majorcompact it to recovery data.
C、notify mobfile's size and change it ,may be can't to majorcompact mob because 
hbase.mob.file.compaction.mergeable.threshold is 192M.

> Get or Scan Mob by rowkey return error value when run compact_mob or 
> major_compact_mob after change MOB_THRESHOLD bigger
> 
>
> Key: HBASE-19507
> URL: https://issues.apache.org/jira/browse/HBASE-19507
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: WangYuan
>Assignee: huaxiang sun
>
> 1、
> create   'abc',{NAME => 'cf', MOB_THRESHOLD => '10', IS_MOB => 'true'}
> put 'abc','1','cf:a','1'
> put 'abc','2','cf:a','2'
> put 'abc','3','cf:a','3'
> put 'abc','4','cf:a','y1'
> put 'abc','5','cf:a','y2'
> put 'abc','6','cf:a','y3'
>   
> hbase(main):011:0> scan 'abc'
> ROWCOLUMN+CELL
>   
>
>  1 column=cf:a, 
> timestamp=1513171753098, value=1  
>  
>  2 column=cf:a, 
> timestamp=1513171753208, value=2  
>  
>  3 column=cf:a, 
> timestamp=1513171753246, value=3  
>  
>  4 column=cf:a, 
> timestamp=1513171753273, value=y1 
>  
>  5 column=cf:a, 
> timestamp=1513171753301, value=y2 
>  
>  6 column=cf:a, 
> timestamp=1513171754282, value=y3 
>  
> hbase(main):012:0> flush 'abc'
> hbase(main):012:0> major_compact 'abc'
> hbase(main):012:0> major_compact_mob 'abc'
> 2、
> [See Hfile]:
> hbase org.apache.hadoop.hbase.io.hfile.HFile -f 
> /hbase/data/default/abc/a31b3146cba0d4569a7bf44e70e299c9/cf/22a432ba5c2c4802bedd947b99626f10
>  -p
> K: 1/cf:a/1513172294864/Put/vlen=5/seqid=4 V: 1
> K: 2/cf:a/1513172294892/Put/vlen=5/seqid=5 V: 2
> K: 3/cf:a/1513172294914/Put/vlen=5/seqid=6 V: 3
> K: 4/cf:a/1513172294954/Put/vlen=76/seqid=7 V: 
> \x00\x00\x00\x0Ed41d8cd98f00b204e9800998ecf8427e20171213ce022548c4c3498e864fda289b81e711
>  T[0]:  T[1]: abc
> K: 5/cf:a/1513172294982/Put/vlen=76/seqid=8 V: 
> \x00\x00\x00\x0Ed41d8cd98f00b204e9800998ecf8427e20171213ce022548c4c3498e864fda289b81e711
>  T[0]:  T[1]: abc
> K: 6/cf:a/1513172296455/Put/vlen=76/seqid=9 V: 
> \x00\x00\x00\x0Ed41d8cd98f00b204e9800998ecf8427e20171213ce022548c4c3498e864fda289b81e711
>  T[0]:  T[1]: abc
> Scanned kv 

[jira] [Commented] (HBASE-19507) Get or Scan Mob by rowkey return error value when run compact_mob or major_compact_mob after change MOB_THRESHOLD bigger

2017-12-14 Thread WangYuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291981#comment-16291981
 ] 

WangYuan commented on HBASE-19507:
--

Thank you [~jingcheng.du]
I found the reason:

The hfile version needs to be changed to V3, but the old conf was V2.
The configuration value in hbase-site.xml is  

Then I changed the conf value from 2 to 3, switched (stop-start) the hmaster, ran 
major_mob, and it was OK.

But I need to do something else:
A. If there is more than 1 mobfile with the same mobDate in 1 region, it is OK 
after a mob majorcompact.
B. But if there is only 1 mobfile (or 1 mobfile with only a single mobDate) in 1 
region, it can't be recovered because majorcompact needs at least 2 files, so I 
have to:
  b1. put 1 new record into the region, then flush
  b2. modify the new and old mobfiles' mobDate to the same mobDate
  b3. majorcompact it to recover the data
C. Note the mobfile's size and adjust accordingly; it may not be possible to 
majorcompact mob because hbase.mob.file.compaction.mergeable.threshold is 192M.
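A minimal sketch of the kind of pre-check this finding suggests before running a mob major compaction; the default value and the choice to throw are illustrative (imports omitted):

{code}
Configuration conf = HBaseConfiguration.create();
// MOB ref cells rely on HFile v3 so that cell tags are persisted.
int hfileVersion = conf.getInt("hfile.format.version", 2);
if (hfileVersion < 3) {
  throw new IllegalStateException(
      "MOB compaction requires hfile.format.version >= 3, found " + hfileVersion);
}
{code}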

> Get or Scan Mob by rowkey return error value when run compact_mob or 
> major_compact_mob after change MOB_THRESHOLD bigger
> 
>
> Key: HBASE-19507
> URL: https://issues.apache.org/jira/browse/HBASE-19507
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: WangYuan
>Assignee: huaxiang sun
>
> 1、
> create   'abc',{NAME => 'cf', MOB_THRESHOLD => '10', IS_MOB => 'true'}
> put 'abc','1','cf:a','1'
> put 'abc','2','cf:a','2'
> put 'abc','3','cf:a','3'
> put 'abc','4','cf:a','y1'
> put 'abc','5','cf:a','y2'
> put 'abc','6','cf:a','y3'
>   
> hbase(main):011:0> scan 'abc'
> ROWCOLUMN+CELL
>   
>
>  1 column=cf:a, 
> timestamp=1513171753098, value=1  
>  
>  2 column=cf:a, 
> timestamp=1513171753208, value=2  
>  
>  3 column=cf:a, 
> timestamp=1513171753246, value=3  
>  
>  4 column=cf:a, 
> timestamp=1513171753273, value=y1 
>  
>  5 column=cf:a, 
> timestamp=1513171753301, value=y2 
>  
>  6 column=cf:a, 
> timestamp=1513171754282, value=y3 
>  
> hbase(main):012:0> flush 'abc'
> hbase(main):012:0> major_compact 'abc'
> hbase(main):012:0> major_compact_mob 'abc'
> 2、
> [See Hfile]:
> hbase org.apache.hadoop.hbase.io.hfile.HFile -f 
> /hbase/data/default/abc/a31b3146cba0d4569a7bf44e70e299c9/cf/22a432ba5c2c4802bedd947b99626f10
>  -p
> K: 1/cf:a/1513172294864/Put/vlen=5/seqid=4 V: 1
> K: 2/cf:a/1513172294892/Put/vlen=5/seqid=5 V: 2
> K: 3/cf:a/1513172294914/Put/vlen=5/seqid=6 V: 3
> K: 4/cf:a/1513172294954/Put/vlen=76/seqid=7 V: 
> \x00\x00\x00\x0Ed41d8cd98f00b204e9800998ecf8427e20171213ce022548c4c3498e864fda289b81e711
>  T[0]:  T[1]: abc
> K: 5/cf:a/1513172294982/Put/vlen=76/seqid=8 V: 
> \x00\x00\x00\x0Ed41d8cd98f00b204e9800998ecf8427e20171213ce022548c4c3498e864fda289b81e711
>  T[0]:  T[1]: abc
> K: 6/cf:a/1513172296455/Put/vlen=76/seqid=9 V: 
> \x00\x00\x00\x0Ed41d8cd98f00b204e9800998ecf8427e20171213ce022548c4c3498e864fda289b81e711
>  T[0]:  T[1]: abc
> Scanned kv count -> 6
> [See Mobfile]:
> hbase org.apache.hadoop.hbase.io.hfile.HFile -f 
> /hbase/mobdir/data/default/abc/07aab825b62dd9111831839cc9039df9/cf/d41d8cd98f00b204e9800998ecf8427e20171213bd8cfaf146684d4096ebf7994f050e96
>  -p
> K: 4/cf:a/1513172924196/Put/vlen=14/seqid=7 V: y1
> K: 5/cf:a/1513172924214/Put/vlen=14/seqid=8 V: y2
> K: 6/cf:a/1513172925768/Put/vlen=14/seqid=9 V: y3
> 3、
> alter 'abc',{NAME => 'cf', MOB_THRESHOLD => '10240' }
> put 
> 

[jira] [Created] (HBASE-19517) Could not create interface org.apache.hadoop.hbase.zookeeper.MetricsZooKeeperSource when running against hadoop-3

2017-12-14 Thread Ted Yu (JIRA)
Ted Yu created HBASE-19517:
--

 Summary: Could not create interface 
org.apache.hadoop.hbase.zookeeper.MetricsZooKeeperSource when running against 
hadoop-3
 Key: HBASE-19517
 URL: https://issues.apache.org/jira/browse/HBASE-19517
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu


When running test against hadoop-3, I observe the following:
{code}
org.apache.hadoop.hbase.zookeeper.TestZKMulti  Time elapsed: 1.327 sec  <<< 
ERROR!
java.lang.RuntimeException: Could not create  interface 
org.apache.hadoop.hbase.zookeeper.MetricsZooKeeperSource Is the hadoop 
compatibility jar on the classpath?
at 
org.apache.hadoop.hbase.zookeeper.TestZKMulti.setUpBeforeClass(TestZKMulti.java:71)
Caused by: java.util.ServiceConfigurationError: 
org.apache.hadoop.hbase.zookeeper.MetricsZooKeeperSource: Provider 
org.apache.hadoop.hbase.zookeeper.MetricsZooKeeperSourceImpl could not be 
instantiated
at 
org.apache.hadoop.hbase.zookeeper.TestZKMulti.setUpBeforeClass(TestZKMulti.java:71)
Caused by: java.lang.NoClassDefFoundError: org/apache/commons/beanutils/DynaBean
at 
org.apache.hadoop.hbase.zookeeper.TestZKMulti.setUpBeforeClass(TestZKMulti.java:71)
Caused by: java.lang.ClassNotFoundException: 
org.apache.commons.beanutils.DynaBean
at 
org.apache.hadoop.hbase.zookeeper.TestZKMulti.setUpBeforeClass(TestZKMulti.java:71)
{code}
I used the hadoop-3.0 profile



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19462) Deprecate all addImmutable methods in Put

2017-12-14 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291969#comment-16291969
 ] 

Chia-Ping Tsai commented on HBASE-19462:


Thanks for the review, [~stack].

> Deprecate all addImmutable methods in Put
> -
>
> Key: HBASE-19462
> URL: https://issues.apache.org/jira/browse/HBASE-19462
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19462.v0.patch, HBASE-19462.v1.patch
>
>
> Users are able to use {{CellBuilder}} to build the cell without an array copy 
> for Put/Delete/Increment/Append, and we always do the copy if the user passes a 
> {{ByteBuffer}}. Hence, {{addImmutable}} is unnecessary.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18440) ITs and Actions modify immutable TableDescriptors

2017-12-14 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291968#comment-16291968
 ] 

Chia-Ping Tsai commented on HBASE-18440:


bq. Run IT with the patch?
Yep. Also, the patch needs a rebase.

> ITs and Actions modify immutable TableDescriptors
> -
>
> Key: HBASE-18440
> URL: https://issues.apache.org/jira/browse/HBASE-18440
> Project: HBase
>  Issue Type: Sub-task
>  Components: integration tests
>Reporter: Mike Drob
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18440.patch, HBASE-18440.v2.patch, 
> HBASE-18440.v3.patch, HBASE-18440.v3.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19112) Suspect methods on Cell to be deprecated

2017-12-14 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291967#comment-16291967
 ] 

Chia-Ping Tsai commented on HBASE-19112:


bq. This was on the client-side, right Chia? Not as a CP?
Yep, my use case is on the client side.

bq. They dont have a contract for getTypeByte()
I think they must have a contract for getTypeByte() if a cp user tries to add a 
custom cell. We have different expectations of the cell impl passed to 
{{Put#add(Cell)}}. If cp users try to add a custom cell to a Put, they must add an 
{{ExtendedCell}}, since the RS needs to update the cell's timestamp and sequence 
id. In contrast, on the client side adding a plain {{Cell}} to a Put is fine...

bq. CPs should be able to create Cells (with Tags) but custom Cells we should 
disallow
Add a comment on Put#add(Cell) saying "DON'T add your custom cell impl on the 
server side. Use CellBuilder instead"?

> Suspect methods on Cell to be deprecated
> 
>
> Key: HBASE-19112
> URL: https://issues.apache.org/jira/browse/HBASE-19112
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Josh Elser
>Assignee: ramkrishna.s.vasudevan
>Priority: Blocker
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19112_branch-2.patch, 
> HBASE-19112_branch-2_1.patch, HBASE-19112_master.patch, 
> HBASE-19112_master_1.patch, HBASE-19112_master_1.patch, 
> HBASE-19112_master_2.patch
>
>
> [~chia7712] suggested on the [mailing 
> list|https://lists.apache.org/thread.html/e6de9af26d9b888a358ba48bf74655ccd893573087c032c0fcf01585@%3Cdev.hbase.apache.org%3E]
>  that we have some methods on Cell which should be deprecated for removal:
> * {{#getType()}}
> * {{#getTimestamp()}}
> * {{#getTag()}}
> * {{#getSequenceId()}}
> Let's make a pass over these (and maybe the rest) to make sure that there 
> aren't others which are either implementation details or methods returning 
> now-private-marked classes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19516) IntegrationTestBulkLoad and IntegrationTestImportTsv run into 'java.lang.RuntimeException: DistributedHBaseCluster@1bb564e2 not an instance of MiniHBaseCluster'

2017-12-14 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-19516:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: (was: 2.0.0)
   Status: Resolved  (was: Patch Available)

Thanks for the patch, Ankit

> IntegrationTestBulkLoad and IntegrationTestImportTsv run into 
> 'java.lang.RuntimeException: DistributedHBaseCluster@1bb564e2 not an instance 
> of MiniHBaseCluster'
> 
>
> Key: HBASE-19516
> URL: https://issues.apache.org/jira/browse/HBASE-19516
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0
>Reporter: Romil Choksi
>Assignee: Ankit Singhal
> Fix For: 2.0.0-beta-1
>
> Attachments: 19516.v1.txt
>
>
> IntegrationTestBulkLoad and IntegrationTestImportTsv run into 
> 'RuntimeException: java.lang.RuntimeException: 
> org.apache.hadoop.hbase.DistributedHBaseCluster@1bb564e2 not an instance of 
> org.apache.hadoop.hbase.MiniHBaseCluster'
> {code}
>2017-12-14 22:26:00,118 ERROR [main] util.AbstractHBaseTool: Error 
> running command-line tool
>java.lang.RuntimeException: java.lang.RuntimeException: 
> org.apache.hadoop.hbase.DistributedHBaseCluster@1bb564e2 not an instance of 
> org.apache.hadoop.hbase.MiniHBaseCluster
>   at org.apache.hadoop.hbase.Waiter.waitFor(Waiter.java:219)
>   at 
> org.apache.hadoop.hbase.HBaseCommonTestingUtility.waitFor(HBaseCommonTestingUtility.java:249)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.waitUntilAllRegionsAssigned(HBaseTestingUtility.java:3255)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.waitUntilAllRegionsAssigned(HBaseTestingUtility.java:3227)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1378)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1409)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1326)
>   at 
> org.apache.hadoop.hbase.mapreduce.IntegrationTestBulkLoad.setupTable(IntegrationTestBulkLoad.java:249)
>   at 
> org.apache.hadoop.hbase.mapreduce.IntegrationTestBulkLoad.runLoad(IntegrationTestBulkLoad.java:229)
>   at 
> org.apache.hadoop.hbase.mapreduce.IntegrationTestBulkLoad.testBulkLoad(IntegrationTestBulkLoad.java:223)
>   at 
> org.apache.hadoop.hbase.mapreduce.IntegrationTestBulkLoad.runTestFromCommandLine(IntegrationTestBulkLoad.java:792)
>   at 
> org.apache.hadoop.hbase.IntegrationTestBase.doWork(IntegrationTestBase.java:155)
>   at 
> org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:154)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at 
> org.apache.hadoop.hbase.mapreduce.IntegrationTestBulkLoad.main(IntegrationTestBulkLoad.java:815)
>Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.hbase.DistributedHBaseCluster@1bb564e2 not an instance of 
> org.apache.hadoop.hbase.MiniHBaseCluster
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.getMiniHBaseCluster(HBaseTestingUtility.java:1069)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.getHBaseCluster(HBaseTestingUtility.java:2711)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility$4.evaluate(HBaseTestingUtility.java:3285)
>   at org.apache.hadoop.hbase.Waiter.waitFor(Waiter.java:191)
>   ... 14 more
> {code}
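For context, a rough sketch of the guard a test utility needs when it may be handed a distributed cluster instead of a mini cluster; getHBaseClusterInterface and the surrounding structure follow the testing utility API as I recall it, so treat the details as illustrative rather than the committed fix:

{code}
HBaseCluster cluster = util.getHBaseClusterInterface(); // util: an HBaseTestingUtility
if (cluster instanceof MiniHBaseCluster) {
  MiniHBaseCluster mini = (MiniHBaseCluster) cluster;
  // mini-cluster-only waits/assertions can safely run here
} else {
  // running against a DistributedHBaseCluster: fall back to the generic
  // HBaseCluster API instead of casting
}
{code}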



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19516) IntegrationTestBulkLoad and IntegrationTestImportTsv run into 'java.lang.RuntimeException: DistributedHBaseCluster@1bb564e2 not an instance of MiniHBaseCluster'

2017-12-14 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-19516:
---
Summary: IntegrationTestBulkLoad and IntegrationTestImportTsv run into 
'java.lang.RuntimeException: DistributedHBaseCluster@1bb564e2 not an instance 
of MiniHBaseCluster'  (was: IntegrationTestBulkLoad and 
IntegrationTestImportTsv run into 'RuntimeException: 
java.lang.RuntimeException: 
org.apache.hadoop.hbase.DistributedHBaseCluster@1bb564e2 not an instance of 
org.apache.hadoop.hbase.MiniHBaseCluster')

> IntegrationTestBulkLoad and IntegrationTestImportTsv run into 
> 'java.lang.RuntimeException: DistributedHBaseCluster@1bb564e2 not an instance 
> of MiniHBaseCluster'
> 
>
> Key: HBASE-19516
> URL: https://issues.apache.org/jira/browse/HBASE-19516
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0
>Reporter: Romil Choksi
>Assignee: Ankit Singhal
> Fix For: 2.0.0, 2.0.0-beta-1
>
> Attachments: 19516.v1.txt
>
>
> IntegrationTestBulkLoad and IntegrationTestImportTsv run into 
> 'RuntimeException: java.lang.RuntimeException: 
> org.apache.hadoop.hbase.DistributedHBaseCluster@1bb564e2 not an instance of 
> org.apache.hadoop.hbase.MiniHBaseCluster'
> {code}
>2017-12-14 22:26:00,118 ERROR [main] util.AbstractHBaseTool: Error 
> running command-line tool
>java.lang.RuntimeException: java.lang.RuntimeException: 
> org.apache.hadoop.hbase.DistributedHBaseCluster@1bb564e2 not an instance of 
> org.apache.hadoop.hbase.MiniHBaseCluster
>   at org.apache.hadoop.hbase.Waiter.waitFor(Waiter.java:219)
>   at 
> org.apache.hadoop.hbase.HBaseCommonTestingUtility.waitFor(HBaseCommonTestingUtility.java:249)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.waitUntilAllRegionsAssigned(HBaseTestingUtility.java:3255)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.waitUntilAllRegionsAssigned(HBaseTestingUtility.java:3227)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1378)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1409)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1326)
>   at 
> org.apache.hadoop.hbase.mapreduce.IntegrationTestBulkLoad.setupTable(IntegrationTestBulkLoad.java:249)
>   at 
> org.apache.hadoop.hbase.mapreduce.IntegrationTestBulkLoad.runLoad(IntegrationTestBulkLoad.java:229)
>   at 
> org.apache.hadoop.hbase.mapreduce.IntegrationTestBulkLoad.testBulkLoad(IntegrationTestBulkLoad.java:223)
>   at 
> org.apache.hadoop.hbase.mapreduce.IntegrationTestBulkLoad.runTestFromCommandLine(IntegrationTestBulkLoad.java:792)
>   at 
> org.apache.hadoop.hbase.IntegrationTestBase.doWork(IntegrationTestBase.java:155)
>   at 
> org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:154)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at 
> org.apache.hadoop.hbase.mapreduce.IntegrationTestBulkLoad.main(IntegrationTestBulkLoad.java:815)
>Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.hbase.DistributedHBaseCluster@1bb564e2 not an instance of 
> org.apache.hadoop.hbase.MiniHBaseCluster
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.getMiniHBaseCluster(HBaseTestingUtility.java:1069)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.getHBaseCluster(HBaseTestingUtility.java:2711)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility$4.evaluate(HBaseTestingUtility.java:3285)
>   at org.apache.hadoop.hbase.Waiter.waitFor(Waiter.java:191)
>   ... 14 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19112) Suspect methods on Cell to be deprecated

2017-12-14 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291948#comment-16291948
 ] 

ramkrishna.s.vasudevan commented on HBASE-19112:


IMO I agree with it, and I get your concern. So again the argument could be: if 
the custom cell is created on the client side, then once the cell reaches the 
server side we are safe from that point onwards; the catch is that if we need to 
run CellComparator on the client side, then we have an issue.
So the @Public-facing CellComparator should internally manage getType and 
getTypeByte, as you said.

> Suspect methods on Cell to be deprecated
> 
>
> Key: HBASE-19112
> URL: https://issues.apache.org/jira/browse/HBASE-19112
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Josh Elser
>Assignee: ramkrishna.s.vasudevan
>Priority: Blocker
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19112_branch-2.patch, 
> HBASE-19112_branch-2_1.patch, HBASE-19112_master.patch, 
> HBASE-19112_master_1.patch, HBASE-19112_master_1.patch, 
> HBASE-19112_master_2.patch
>
>
> [~chia7712] suggested on the [mailing 
> list|https://lists.apache.org/thread.html/e6de9af26d9b888a358ba48bf74655ccd893573087c032c0fcf01585@%3Cdev.hbase.apache.org%3E]
>  that we have some methods on Cell which should be deprecated for removal:
> * {{#getType()}}
> * {{#getTimestamp()}}
> * {{#getTag()}}
> * {{#getSequenceId()}}
> Let's make a pass over these (and maybe the rest) to make sure that there 
> aren't others which are either implementation details or methods returning 
> now-private-marked classes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19516) IntegrationTestBulkLoad and IntegrationTestImportTsv run into 'RuntimeException: java.lang.RuntimeException: org.apache.hadoop.hbase.DistributedHBaseCluster@1bb564e2 n

2017-12-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291940#comment-16291940
 ] 

Hadoop QA commented on HBASE-19516:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
1s{color} | {color:blue} The patch file was not named according to hbase's 
naming conventions. Please see 
https://yetus.apache.org/documentation/0.6.0/precommit-patchnames for 
instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
44s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
38s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
30s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
19m 39s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0-beta1. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}100m 
25s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}139m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19516 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12902180/19516.v1.txt |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 46ff45d9236b 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 
15:49:21 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 6ab8ce9829 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10464/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10464/console |
| Powered by | Apache Yetus 0.6.0   http://yetus.apache.org |



[jira] [Commented] (HBASE-19507) Get or Scan Mob by rowkey return error value when run compact_mob or major_compact_mob after change MOB_THRESHOLD bigger

2017-12-14 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291930#comment-16291930
 ] 

Jingcheng Du commented on HBASE-19507:
--

Thanks a lot Huaxiang. You mean you cannot reproduce this issue on the master 
branch or on CDH 5.7.1? Thanks!

> Get or Scan Mob by rowkey return error value when run compact_mob or 
> major_compact_mob after change MOB_THRESHOLD bigger
> 
>
> Key: HBASE-19507
> URL: https://issues.apache.org/jira/browse/HBASE-19507
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: WangYuan
>Assignee: huaxiang sun
>
> 1、
> create   'abc',{NAME => 'cf', MOB_THRESHOLD => '10', IS_MOB => 'true'}
> put 'abc','1','cf:a','1'
> put 'abc','2','cf:a','2'
> put 'abc','3','cf:a','3'
> put 'abc','4','cf:a','y1'
> put 'abc','5','cf:a','y2'
> put 'abc','6','cf:a','y3'
>   
> hbase(main):011:0> scan 'abc'
> ROW   COLUMN+CELL
>  1    column=cf:a, timestamp=1513171753098, value=1
>  2    column=cf:a, timestamp=1513171753208, value=2
>  3    column=cf:a, timestamp=1513171753246, value=3
>  4    column=cf:a, timestamp=1513171753273, value=y1
>  5    column=cf:a, timestamp=1513171753301, value=y2
>  6    column=cf:a, timestamp=1513171754282, value=y3
> hbase(main):012:0> flush 'abc'
> hbase(main):012:0> major_compact 'abc'
> hbase(main):012:0> major_compact_mob 'abc'
> 2、
> [See Hfile]:
> hbase org.apache.hadoop.hbase.io.hfile.HFile -f 
> /hbase/data/default/abc/a31b3146cba0d4569a7bf44e70e299c9/cf/22a432ba5c2c4802bedd947b99626f10
>  -p
> K: 1/cf:a/1513172294864/Put/vlen=5/seqid=4 V: 1
> K: 2/cf:a/1513172294892/Put/vlen=5/seqid=5 V: 2
> K: 3/cf:a/1513172294914/Put/vlen=5/seqid=6 V: 3
> K: 4/cf:a/1513172294954/Put/vlen=76/seqid=7 V: 
> \x00\x00\x00\x0Ed41d8cd98f00b204e9800998ecf8427e20171213ce022548c4c3498e864fda289b81e711
>  T[0]:  T[1]: abc
> K: 5/cf:a/1513172294982/Put/vlen=76/seqid=8 V: 
> \x00\x00\x00\x0Ed41d8cd98f00b204e9800998ecf8427e20171213ce022548c4c3498e864fda289b81e711
>  T[0]:  T[1]: abc
> K: 6/cf:a/1513172296455/Put/vlen=76/seqid=9 V: 
> \x00\x00\x00\x0Ed41d8cd98f00b204e9800998ecf8427e20171213ce022548c4c3498e864fda289b81e711
>  T[0]:  T[1]: abc
> Scanned kv count -> 6
> [See Mobfile]:
> hbase org.apache.hadoop.hbase.io.hfile.HFile -f 
> /hbase/mobdir/data/default/abc/07aab825b62dd9111831839cc9039df9/cf/d41d8cd98f00b204e9800998ecf8427e20171213bd8cfaf146684d4096ebf7994f050e96
>  -p
> K: 4/cf:a/1513172924196/Put/vlen=14/seqid=7 V: y1
> K: 5/cf:a/1513172924214/Put/vlen=14/seqid=8 V: y2
> K: 6/cf:a/1513172925768/Put/vlen=14/seqid=9 V: y3
> 3、
> alter 'abc',{NAME => 'cf', MOB_THRESHOLD => '10240' }
> put 
> 

[jira] [Commented] (HBASE-18946) Stochastic load balancer assigns replica regions to the same RS

2017-12-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291928#comment-16291928
 ] 

Hadoop QA commented on HBASE-18946:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
31s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
47s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
12s{color} | {color:red} hbase-server: The patch generated 9 new + 721 
unchanged - 7 fixed = 730 total (was 728) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
24s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
19m  0s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0-beta1. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
2s{color} | {color:green} hbase-procedure in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}100m 12s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}143m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-18946 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12902177/HBASE-18946.master.011.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 0678ddafa22b 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 6ab8ce9829 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| checkstyle | 

[jira] [Updated] (HBASE-19513) Fix the wrapped AsyncFSOutput implementation

2017-12-14 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-19513:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to master and branch-2.

Thanks [~stack] for reviewing.

> Fix the wrapped AsyncFSOutput implementation
> 
>
> Key: HBASE-19513
> URL: https://issues.apache.org/jira/browse/HBASE-19513
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19513.master.001.patch, HBASE-19513.patch, 
> HBASE-19513.patch
>
>
> It causes several flakey tests. Let me rewrite it with more caution...



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18838) shaded artifacts are incorrect when built against hadoop 3

2017-12-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291915#comment-16291915
 ] 

Hadoop QA commented on HBASE-18838:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
44s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  7m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 12m 
 1s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
46s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  8m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
26s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
23s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 19m 
55s{color} | {color:red} The patch causes 17 errors with Hadoop v3.0.0-beta1. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}128m 38s{color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
23s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}221m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-18838 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12902157/HBASE-18838.v4.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  shadedjars  hadoopcheck  
xml  compile  findbugs  hbaseanti  checkstyle  |
| uname | Linux eea637dd4035 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 4a1c3b4210 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| unit | 

[jira] [Commented] (HBASE-19289) CommonFSUtils$StreamLacksCapabilityException: hflush when running test against hadoop3 beta1

2017-12-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291911#comment-16291911
 ] 

Hudson commented on HBASE-19289:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4226 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/4226/])
HBASE-19289 Add flag to disable stream capability enforcement (mdrob: rev 
2c9ef8a471148ece655b881cc490b6b685d634f4)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/ProtobufLogWriter.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/AsyncFSOutputHelper.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/util/CommonFSUtils.java
* (edit) 
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.java
* (edit) hbase-procedure/src/test/resources/hbase-site.xml
* (edit) hbase-server/src/test/resources/hbase-site.xml


> CommonFSUtils$StreamLacksCapabilityException: hflush when running test 
> against hadoop3 beta1
> 
>
> Key: HBASE-19289
> URL: https://issues.apache.org/jira/browse/HBASE-19289
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Mike Drob
>Priority: Critical
> Fix For: 2.0.0-beta-1
>
> Attachments: 19289.v1.txt, 19289.v2.txt, HBASE-19289.patch, 
> HBASE-19289.v2.patch, HBASE-19289.v3.patch, HBASE-19289.v4.patch, 
> HBASE-19289.v5.patch
>
>
> As of commit d8fb10c8329b19223c91d3cda6ef149382ad4ea0 , I encountered the 
> following exception when running unit test against hadoop3 beta1:
> {code}
> testRefreshStoreFiles(org.apache.hadoop.hbase.regionserver.TestHStore)  Time 
> elapsed: 0.061 sec  <<< ERROR!
> java.io.IOException: cannot get log writer
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.initHRegion(TestHStore.java:215)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:220)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:195)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:190)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:185)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:179)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:173)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.testRefreshStoreFiles(TestHStore.java:962)
> Caused by: 
> org.apache.hadoop.hbase.util.CommonFSUtils$StreamLacksCapabilityException: 
> hflush
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.initHRegion(TestHStore.java:215)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:220)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:195)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:190)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:185)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:179)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:173)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.testRefreshStoreFiles(TestHStore.java:962)
> {code}
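
For context, a minimal sketch (assumed names; not the CommonFSUtils code) of 
the kind of capability probe behind a "StreamLacksCapabilityException: hflush": 
on Hadoop 2.9+/3.0 an output stream can be asked whether it really supports 
hflush via StreamCapabilities, and the commit above (per its title) adds a flag 
so tests can relax that enforcement:

{code}
import java.io.OutputStream;

import org.apache.hadoop.fs.StreamCapabilities;

// Sketch only. Returns true iff the stream advertises hflush support.
public final class HflushCheckSketch {
  public static boolean supportsHflush(OutputStream out) {
    // Streams from Hadoop versions without StreamCapabilities fail the
    // instanceof test, and streams that do not really support hflush (the
    // local filesystem used by tests, for example) report false -- which is
    // what surfaces as StreamLacksCapabilityException in the trace above.
    return out instanceof StreamCapabilities
        && ((StreamCapabilities) out).hasCapability("hflush");
  }
}
{code}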



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19267) Eclipse project import issues on 2.0

2017-12-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291912#comment-16291912
 ] 

Hudson commented on HBASE-19267:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4226 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/4226/])
HBASE-19267 Remove compiler-plugin mapping executions as it breaks Java8 
(elserj: rev 4a1c3b4210b27b04a09e0c4d2f005b2ba7fd884d)
* (edit) hbase-mapreduce/pom.xml
* (edit) hbase-rsgroup/pom.xml
* (edit) hbase-shell/pom.xml
* (edit) hbase-zookeeper/pom.xml
* (edit) hbase-replication/pom.xml
* (edit) hbase-common/pom.xml
* (edit) hbase-backup/pom.xml
* (edit) hbase-thrift/pom.xml
* (edit) hbase-server/pom.xml
* (edit) hbase-external-blockcache/pom.xml
* (edit) hbase-protocol/pom.xml
* (edit) hbase-client/pom.xml
* (edit) hbase-examples/pom.xml
* (edit) hbase-hadoop-compat/pom.xml
* (edit) hbase-hadoop2-compat/pom.xml
* (edit) hbase-it/pom.xml
* (edit) hbase-http/pom.xml


> Eclipse project import issues on 2.0
> 
>
> Key: HBASE-19267
> URL: https://issues.apache.org/jira/browse/HBASE-19267
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 3.0.0, 2.0.0-beta-1
>
> Attachments: HBASE-19267.001.branch-2.patch, 
> HBASE-19267.002.branch-2.patch, HBASE-19267.002.branch-2.patch, 
> HBASE-19267.002.patch
>
>
> Trying to do a fresh import of branch-2 nets some errors..
> It seems like a previous change I made to clean up errors (HBASE-13236), 
> specifically adding the maven-compiler-plugin lifecycle mapping for 
> m2eclipse, is now causing Eclipse to not compile HBase as Java8. Removing the 
> lifecycle mapping fixes this.
> I assume this only needs to happen for 2.0.
> I keep having issues with the JavaNature being ignored. Not yet sure if this 
> is a result of something we're doing wrong (or just Eclipse being Eclipse).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18440) ITs and Actions modify immutable TableDescriptors

2017-12-14 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291910#comment-16291910
 ] 

Guanghao Zhang commented on HBASE-18440:


+1 for the patch. What do we still need to do here? Run the ITs with the patch?

> ITs and Actions modify immutable TableDescriptors
> -
>
> Key: HBASE-18440
> URL: https://issues.apache.org/jira/browse/HBASE-18440
> Project: HBase
>  Issue Type: Sub-task
>  Components: integration tests
>Reporter: Mike Drob
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18440.patch, HBASE-18440.v2.patch, 
> HBASE-18440.v3.patch, HBASE-18440.v3.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19513) Fix the wrapped AsyncFSOutput implementation

2017-12-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291899#comment-16291899
 ] 

stack commented on HBASE-19513:
---

+1 Let's try it. Let me know if you want me to commit. The TestZKSecretWatcher 
failure shows up in a few hadoopqa runs currently. I can't repro it locally. 
I'm thinking it's just the slow ZK startup on the Jenkins cluster (ZK seems 
really slow to connect on Jenkins for whatever reason...)

> Fix the wrapped AsyncFSOutput implementation
> 
>
> Key: HBASE-19513
> URL: https://issues.apache.org/jira/browse/HBASE-19513
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19513.master.001.patch, HBASE-19513.patch, 
> HBASE-19513.patch
>
>
> It causes several flakey tests. Let me rewrite it with more caution...



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Issue Comment Deleted] (HBASE-19513) Fix the wrapped AsyncFSOutput implementation

2017-12-14 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19513:
--
Comment: was deleted

(was: Go for it @don zhang)

> Fix the wrapped AsyncFSOutput implementation
> 
>
> Key: HBASE-19513
> URL: https://issues.apache.org/jira/browse/HBASE-19513
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19513.master.001.patch, HBASE-19513.patch, 
> HBASE-19513.patch
>
>
> It causes several flakey tests. Let me rewrite it with more caution...



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19513) Fix the wrapped AsyncFSOutput implementation

2017-12-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291896#comment-16291896
 ] 

stack commented on HBASE-19513:
---

Go for it @don zhang

> Fix the wrapped AsyncFSOutput implementation
> 
>
> Key: HBASE-19513
> URL: https://issues.apache.org/jira/browse/HBASE-19513
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19513.master.001.patch, HBASE-19513.patch, 
> HBASE-19513.patch
>
>
> It causes several flakey tests. Let me rewrite it with more caution...



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19513) Fix the wrapped AsyncFSOutput implementation

2017-12-14 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291890#comment-16291890
 ] 

Duo Zhang commented on HBASE-19513:
---

Thanks [~stack] for the rebase. Let's commit it and run the simple patch in 
HBASE-15536 again? The TestZKSecretWatcher failure is not related, I think.

Thanks.

> Fix the wrapped AsyncFSOutput implementation
> 
>
> Key: HBASE-19513
> URL: https://issues.apache.org/jira/browse/HBASE-19513
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19513.master.001.patch, HBASE-19513.patch, 
> HBASE-19513.patch
>
>
> It causes several flakey tests. Let me rewrite it with more caution...



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18838) shaded artifacts are incorrect when built against hadoop 3

2017-12-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291870#comment-16291870
 ] 

stack commented on HBASE-18838:
---

Interesting (excludes in submodules override the parent exclude clause -- 
that's what I was seeing).

> shaded artifacts are incorrect when built against hadoop 3
> --
>
> Key: HBASE-18838
> URL: https://issues.apache.org/jira/browse/HBASE-18838
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 2.0.0-alpha-3
>Reporter: Sean Busbey
>Assignee: Mike Drob
>Priority: Critical
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18838-WIP.v2.patch, HBASE-18838.WIP.patch, 
> HBASE-18838.v3.patch, HBASE-18838.v4.patch
>
>
> Building master/branch-2 against the hadoop-3 profile results in 
> check-invariants screaming about unrelocated dependencies. will list details 
> in comment.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19513) Fix the wrapped AsyncFSOutput implementation

2017-12-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291864#comment-16291864
 ] 

stack commented on HBASE-19513:
---

.001 is a rebase, needed after 

commit 2c9ef8a471148ece655b881cc490b6b685d634f4
Author: Mike Drob 
Date:   Tue Dec 5 14:25:37 2017 -0600

HBASE-19289 Add flag to disable stream capability enforcement

> Fix the wrapped AsyncFSOutput implementation
> 
>
> Key: HBASE-19513
> URL: https://issues.apache.org/jira/browse/HBASE-19513
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19513.master.001.patch, HBASE-19513.patch, 
> HBASE-19513.patch
>
>
> It causes several flakey tests. Let me rewrite it with more caution...



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19513) Fix the wrapped AsyncFSOutput implementation

2017-12-14 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19513:
--
Attachment: HBASE-19513.master.001.patch

> Fix the wrapped AsyncFSOutput implementation
> 
>
> Key: HBASE-19513
> URL: https://issues.apache.org/jira/browse/HBASE-19513
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19513.master.001.patch, HBASE-19513.patch, 
> HBASE-19513.patch
>
>
> It causes several flakey tests. Let me rewrite it with more caution...



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19494) Create simple WALKey filter that can be plugged in on the Replication Sink

2017-12-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291857#comment-16291857
 ] 

stack commented on HBASE-19494:
---

This is follow-on from HBASE-18846.

It is awkward because, when replicating, we operate on WALEntry and WALKey 
protobufs, not POJOs. I can't pass a filter or CP the pb WALEntry or WALKeys. 
I'm asking the hbase-indexer crew if I could just pass a few attributes from 
the WALEntry as params on a simple filter.
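
A hypothetical sketch of that "few attributes as params" shape (assumed names, 
not the interface that ended up being committed): the sink hands a filter just 
the table and the WALKey write time instead of the protobuf objects:

{code}
import org.apache.hadoop.hbase.TableName;

// Hypothetical sink-side filter; name and signature are assumptions.
interface SinkWALKeyFilterSketch {
  // Return false to drop all edits carried under this WALKey.
  boolean accept(TableName table, long walKeyWriteTime);
}

// Example in the spirit of hbase-indexer's old check (see the issue
// description quoted below): drop entries written before the replication
// stream was enabled.
class BeforeEnableTimeFilter implements SinkWALKeyFilterSketch {
  private final long enabledTimeMs;

  BeforeEnableTimeFilter(long enabledTimeMs) {
    this.enabledTimeMs = enabledTimeMs;
  }

  @Override
  public boolean accept(TableName table, long walKeyWriteTime) {
    return walKeyWriteTime >= enabledTimeMs;
  }
}
{code}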

> Create simple WALKey filter that can be plugged in on the Replication Sink
> --
>
> Key: HBASE-19494
> URL: https://issues.apache.org/jira/browse/HBASE-19494
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0-beta-1
>
>
> hbase-indexer used to look at WALKeys on the sink to see if their time of 
> creation was before the time at which the replication stream was enabled.
> In the parent redo, there is no means for doing this anymore (because WALKey 
> used to be Private and because to get at the WALKey in the Sink, you had to 
> override all of the Replication which meant importing a million Private 
> objects...).
> This issue is about adding a simple filter to Replication on the sink-side 
> that just takes a WALKey (now InterfaceAudience LimitedPrivate and recently 
> made read-only).
> Assigned myself. Need to do this so hbase-indexer can move to hbase2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19516) IntegrationTestBulkLoad and IntegrationTestImportTsv run into 'RuntimeException: java.lang.RuntimeException: org.apache.hadoop.hbase.DistributedHBaseCluster@1bb564e2 n

2017-12-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291829#comment-16291829
 ] 

stack commented on HBASE-19516:
---

Go for it. Thanks.

> IntegrationTestBulkLoad and IntegrationTestImportTsv run into 
> 'RuntimeException: java.lang.RuntimeException: 
> org.apache.hadoop.hbase.DistributedHBaseCluster@1bb564e2 not an instance of 
> org.apache.hadoop.hbase.MiniHBaseCluster'
> --
>
> Key: HBASE-19516
> URL: https://issues.apache.org/jira/browse/HBASE-19516
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0
>Reporter: Romil Choksi
>Assignee: Ankit Singhal
> Fix For: 2.0.0, 2.0.0-beta-1
>
> Attachments: 19516.v1.txt
>
>
> IntegrationTestBulkLoad and IntegrationTestImportTsv run into 
> 'RuntimeException: java.lang.RuntimeException: 
> org.apache.hadoop.hbase.DistributedHBaseCluster@1bb564e2 not an instance of 
> org.apache.hadoop.hbase.MiniHBaseCluster'
> {code}
>2017-12-14 22:26:00,118 ERROR [main] util.AbstractHBaseTool: Error 
> running command-line tool
>java.lang.RuntimeException: java.lang.RuntimeException: 
> org.apache.hadoop.hbase.DistributedHBaseCluster@1bb564e2 not an instance of 
> org.apache.hadoop.hbase.MiniHBaseCluster
>   at org.apache.hadoop.hbase.Waiter.waitFor(Waiter.java:219)
>   at 
> org.apache.hadoop.hbase.HBaseCommonTestingUtility.waitFor(HBaseCommonTestingUtility.java:249)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.waitUntilAllRegionsAssigned(HBaseTestingUtility.java:3255)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.waitUntilAllRegionsAssigned(HBaseTestingUtility.java:3227)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1378)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1409)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1326)
>   at 
> org.apache.hadoop.hbase.mapreduce.IntegrationTestBulkLoad.setupTable(IntegrationTestBulkLoad.java:249)
>   at 
> org.apache.hadoop.hbase.mapreduce.IntegrationTestBulkLoad.runLoad(IntegrationTestBulkLoad.java:229)
>   at 
> org.apache.hadoop.hbase.mapreduce.IntegrationTestBulkLoad.testBulkLoad(IntegrationTestBulkLoad.java:223)
>   at 
> org.apache.hadoop.hbase.mapreduce.IntegrationTestBulkLoad.runTestFromCommandLine(IntegrationTestBulkLoad.java:792)
>   at 
> org.apache.hadoop.hbase.IntegrationTestBase.doWork(IntegrationTestBase.java:155)
>   at 
> org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:154)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at 
> org.apache.hadoop.hbase.mapreduce.IntegrationTestBulkLoad.main(IntegrationTestBulkLoad.java:815)
>Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.hbase.DistributedHBaseCluster@1bb564e2 not an instance of 
> org.apache.hadoop.hbase.MiniHBaseCluster
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.getMiniHBaseCluster(HBaseTestingUtility.java:1069)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.getHBaseCluster(HBaseTestingUtility.java:2711)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility$4.evaluate(HBaseTestingUtility.java:3285)
>   at org.apache.hadoop.hbase.Waiter.waitFor(Waiter.java:191)
>   ... 14 more
> {code}
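
As an aside, an illustrative guard (not necessarily what the attached 
19516.v1.txt does): the RuntimeException above comes from test-utility code 
that assumes the running cluster is a MiniHBaseCluster, so a caller can branch 
on the actual cluster type before taking the mini-cluster-only path:

{code}
import org.apache.hadoop.hbase.HBaseCluster;
import org.apache.hadoop.hbase.MiniHBaseCluster;

// Sketch only; shows the type check, not the actual fix.
public final class ClusterGuardSketch {
  public static boolean isMiniCluster(HBaseCluster cluster) {
    // DistributedHBaseCluster (what integration tests use against a real
    // deployment) extends HBaseCluster but is not a MiniHBaseCluster.
    return cluster instanceof MiniHBaseCluster;
  }
}
{code}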



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19505) Disable ByteBufferPool by default at HM

2017-12-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291827#comment-16291827
 ] 

stack commented on HBASE-19505:
---

Patch looks good.

TestRegionsOnMasterOptions is the test that tries each of the master 
configuration options.

The param passed to NettyRpcServer should be a boolean named enableReservoir, 
not 'noMaster'. RPC doesn't know anything about the Master, nor should it.

Could this check be done up in HMaster and the result passed to RpcServer as 
enableReservoir?

  private boolean isReservoirEnabled(Configuration conf, boolean onMaster) {
    boolean defaultEnabled = true;
    if (onMaster) {
      // RpcServer at the HM by default enables ByteBufferPool iff the HM hosts user table regions
      defaultEnabled = LoadBalancer.isTablesOnMaster(conf)
          && !LoadBalancer.isSystemTablesOnlyOnMaster(conf);
    }
    return conf.getBoolean(RESERVOIR_ENABLED_KEY, defaultEnabled);
  }

RPCServer shouldn't have balancer or master refs?

Master is a subclass of the class that defines createRpcServer:

  protected RpcServerInterface createRpcServer(Server server, Configuration conf,
      RpcSchedulerFactory rpcSchedulerFactory, InetSocketAddress bindAddress, String name)
      throws IOException {
    try {
      return RpcServerFactory.createRpcServer(server, name, getServices(),
          bindAddress, // use final bindAddress for this server.
          conf, rpcSchedulerFactory.create(conf, this, server), false);
    } catch (BindException be) {
      throw new IOException(be.getMessage() + ". To switch ports use the '"
          + HConstants.REGIONSERVER_PORT + "' configuration property.",
          be.getCause() != null ? be.getCause() : be);
    }
  }

... so just take a param and have the master figure out whether it should 
enable the reservoir, and pass in only the boolean result, so RPC doesn't have 
to know about the master or balancer, etc., or reproduce logic that we have up 
in HMaster?
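
A rough sketch of that wiring, with assumed names and signatures (not the 
committed patch): HMaster keeps the LoadBalancer knowledge and the RPC layer 
only receives a boolean default:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.master.LoadBalancer;

// Sketch only; method names and the key value are stand-ins.
public final class ReservoirWiringSketch {
  // Stand-in for the RESERVOIR_ENABLED_KEY referenced in the snippet above.
  private static final String RESERVOIR_ENABLED_KEY =
      "hbase.ipc.server.reservoir.enabled";

  // HMaster side: the master decides the default, since it knows whether it
  // carries user regions.
  public static boolean reservoirDefaultForMaster(Configuration conf) {
    return LoadBalancer.isTablesOnMaster(conf)
        && !LoadBalancer.isSystemTablesOnlyOnMaster(conf);
  }

  // RpcServer side: no master/balancer references, just the config key and
  // the caller-supplied default.
  public static boolean isReservoirEnabled(Configuration conf, boolean defaultEnabled) {
    return conf.getBoolean(RESERVOIR_ENABLED_KEY, defaultEnabled);
  }
}
{code}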

Otherwise, patch is great.



> Disable ByteBufferPool by default at HM
> ---
>
> Key: HBASE-19505
> URL: https://issues.apache.org/jira/browse/HBASE-19505
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19505.patch, HBASE-19505_V2.patch
>
>
> The main usage of the pool is while accepting bigger-sized requests, i.e. 
> Mutation requests. The HM does not have any regions by default, so we can 
> turn this pool OFF on the HM side. Still, add a config to turn this ON.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19498) Fix findbugs and error-prone warnings in hbase-client (branch-2)

2017-12-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291818#comment-16291818
 ] 

Hadoop QA commented on HBASE-19498:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
8s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 17 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
54s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  7m 
42s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} hbase-common: The patch generated 0 new + 25 
unchanged - 2 fixed = 25 total (was 27) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} hbase-client: The patch generated 0 new + 862 
unchanged - 116 fixed = 862 total (was 978) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} The patch hbase-zookeeper passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} The patch hbase-replication passed checkstyle 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 5s{color} | {color:green} The patch hbase-server passed checkstyle {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
30s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
19m 25s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0-beta1. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
20s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
44s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} hbase-zookeeper in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
13s{color} | {color:green} hbase-replication in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}101m 
56s{color} | {color:green} hbase-server in the patch 
