[jira] [Updated] (HBASE-14761) Deletes with and without visibility expression do not delete the matching mutation

2015-11-20 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-14761:
---
Attachment: HBASE-14761_0.98_addendum.patch

Addendum for 0.98 to fix the compilation issues. Sorry about breaking the 0.98 
build. I got conflicts on the source file and resolved them, but the test file 
applied without conflicts, so I missed that it also needed changes.

> Deletes with and without visibility expression do not delete the matching 
> mutation
> --
>
> Key: HBASE-14761
> URL: https://issues.apache.org/jira/browse/HBASE-14761
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1.0.0, 1.0.1, 1.1.0, 1.0.2, 1.1.2, 0.98.15
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0, 1.2.0, 1.3.0, 0.98.17
>
> Attachments: HBASE-14761.patch, HBASE-14761_0.98_addendum.patch
>
>
> This is from the user list as reported by Anoop Sharma
> {code}
>  running into an issue related to visibility expressions and delete.
> Example run from hbase shell is listed below.
> Will appreciate any help on this issue.
> thanks.
> In the example below, user running queries has ‘MANAGER’ authorization.
> *First example:*
>   add a column with visib expr ‘MANAGER’
>   delete it by passing in visibility of ‘MANAGER’
>   This works and scan doesn’t return anything.
> *Second example:*
>   add a column with visib expr ‘MANAGER’
>   delete it by not passing in any visibility.
>   This doesn’t delete the column.
>   Scan doesn’t return the row but RAW scan shows the column
>   marked as deleteColumn.
>   Now if delete is done again with visibility of ‘MANAGER’,
>   it still doesn’t delete it and scan returns the original column.
> *Example 1:*
> hbase(main):096:0> create 'HBT1', 'cf'
> hbase(main):098:0* *put 'HBT1', 'John', 'cf:a', 'CA',
> {VISIBILITY=>'MANAGER'}*
> hbase(main):099:0> *scan 'HBT1'*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446154722055,
> value=CA
> 1 row(s) in 0.0030 seconds
> hbase(main):100:0> *delete 'HBT1', 'John', 'cf:a', {VISIBILITY=>'MANAGER'}*
> 0 row(s) in 0.0030 seconds
> hbase(main):101:0> *scan 'HBT1'*
> ROW
> COLUMN+CELL
> 0 row(s) in 0.0030 seconds
> *Example 2:*
> hbase(main):010:0* *put 'HBT1', 'John', 'cf:a', 'CA',
> {VISIBILITY=>'MANAGER'}*
> 0 row(s) in 0.0040 seconds
> hbase(main):011:0> *scan 'HBT1'*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446155346473,
> value=CA
> 1 row(s) in 0.0060 seconds
> hbase(main):012:0> *delete 'HBT1', 'John', 'cf:a'*
> 0 row(s) in 0.0090 seconds
> hbase(main):013:0> *scan 'HBT1'*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446155346473,
> value=CA
> 1 row(s) in 0.0050 seconds
> hbase(main):014:0> *scan 'HBT1', {RAW => true}*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446155346519,
> type=DeleteColumn
> 1 row(s) in 0.0060 seconds
> hbase(main):015:0> *delete 'HBT1', 'John', 'cf:a', {VISIBILITY=>'MANAGER'}*
> 0 row(s) in 0.0030 seconds
> hbase(main):016:0> *scan 'HBT1'*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446155346473,
> value=CA
> 1 row(s) in 0.0040 seconds
> hbase(main):017:0> *scan 'HBT1', {RAW => true}*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446155346601,
> type=DeleteColumn
> 1 row(s) in 0.0060 seconds
> {code}
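For reference, a minimal Java client sketch of the sequence above (an illustration, not code from the patch; it assumes the 1.0+ client API, so method names such as addColumn/addColumns differ slightly on 0.98):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.security.visibility.CellVisibility;
import org.apache.hadoop.hbase.util.Bytes;

public class VisibilityDeleteExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("HBT1"))) {
      // Put a cell that is visible only to users holding the MANAGER label.
      Put put = new Put(Bytes.toBytes("John"));
      put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("a"), Bytes.toBytes("CA"));
      put.setCellVisibility(new CellVisibility("MANAGER"));
      table.put(put);

      // Delete without any visibility expression -- the second case reported above,
      // where the DeleteColumn marker does not mask the MANAGER-labelled cell.
      Delete delete = new Delete(Bytes.toBytes("John"));
      delete.addColumns(Bytes.toBytes("cf"), Bytes.toBytes("a"));
      table.delete(delete);
    }
  }
}
{code}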



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14852) Update build env

2015-11-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015409#comment-15015409
 ] 

Hadoop QA commented on HBASE-14852:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12773446/HBASE-14852-v4.patch
  against master branch at commit ea48ef86512addc3dc9bcde4b7433a3ac5881424.
  ATTACHMENT ID: 12773446

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev-support patch that doesn't require tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 4 release 
audit warnings (more than the master's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100 characters.

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16606//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16606//artifact/patchprocess/patchReleaseAuditWarnings.txt
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16606//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16606//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16606//console

This message is automatically generated.

> Update build env
> 
>
> Key: HBASE-14852
> URL: https://issues.apache.org/jira/browse/HBASE-14852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-14852-v1.patch, HBASE-14852-v3.patch, 
> HBASE-14852-v4.patch, HBASE-14852.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14840) Sink cluster reports data replication request as success though the data is not replicated

2015-11-20 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015413#comment-15015413
 ] 

Ashish Singhi commented on HBASE-14840:
---

Thanks for review, Ted and Ram.

> Sink cluster reports data replication request as success though the data is 
> not replicated
> --
>
> Key: HBASE-14840
> URL: https://issues.apache.org/jira/browse/HBASE-14840
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2, 1.0.3, 0.98.16
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-14840.patch
>
>
> *Scenario:*
> Sink cluster is down
> Create a table and enable table replication
> Put some data
> Now restart the sink cluster
> *Observation:*
> Data is not replicated to the sink cluster, but the source cluster still 
> updates the WAL log position in ZK, resulting in data loss on the sink cluster.
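A rough way to observe the reported behaviour from the client side; this is only a sketch with a hypothetical table name, column family and sink ZooKeeper quorum, and it assumes table replication between the two clusters has already been configured:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ReplicationCheck {
  public static void main(String[] args) throws Exception {
    Configuration sourceConf = HBaseConfiguration.create(); // points at the source cluster
    Configuration sinkConf = HBaseConfiguration.create();
    sinkConf.set("hbase.zookeeper.quorum", "sink-zk-host");  // placeholder sink quorum

    TableName tn = TableName.valueOf("repl_table");
    try (Connection src = ConnectionFactory.createConnection(sourceConf);
         Table srcTable = src.getTable(tn)) {
      // Edit written while the sink cluster is down.
      Put p = new Put(Bytes.toBytes("row1"));
      p.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v1"));
      srcTable.put(p);
    }

    // ... restart the sink cluster and give the replication source time to ship edits ...

    try (Connection sink = ConnectionFactory.createConnection(sinkConf);
         Table sinkTable = sink.getTable(tn)) {
      Result r = sinkTable.get(new Get(Bytes.toBytes("row1")));
      System.out.println(r.isEmpty()
          ? "edit missing on sink (the data-loss case described above)"
          : "edit replicated to sink");
    }
  }
}
{code}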



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14829) Add more checkstyles

2015-11-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015421#comment-15015421
 ] 

Hudson commented on HBASE-14829:


FAILURE: Integrated in HBase-Trunk_matrix #483 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/483/])
HBASE-14829 Add more checkstyles (appy) (stack: rev 
62aba61beae7768880d98d2afd9d8f1a9030177e)
* hbase-checkstyle/src/main/resources/hbase/checkstyle.xml
* pom.xml


> Add more checkstyles
> 
>
> Key: HBASE-14829
> URL: https://issues.apache.org/jira/browse/HBASE-14829
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0
>
> Attachments: HBASE-14829-master-v2.patch, 
> HBASE-14829-master-v2.patch, HBASE-14829-master.patch
>
>
> This jira will add following checkstyles:
> [ImportOrder|http://checkstyle.sourceforge.net/config_imports.html#ImportOrder]
>  : keep imports in sorted order
> [LeftCurly|http://checkstyle.sourceforge.net/config_blocks.html#LeftCurly] : 
> Placement of the left curly brace. Does 'eol' sound like the right setting?
> [NeedBraces|http://checkstyle.sourceforge.net/config_blocks.html#NeedBraces] 
> : braces around code blocks
> [JavadocTagContinuationIndentation|http://checkstyle.sourceforge.net/config_javadoc.html#JavadocTagContinuationIndentation]
>  : Avoid weird indentations in javadocs
> [NonEmptyAtclauseDescription|http://checkstyle.sourceforge.net/config_javadoc.html#NonEmptyAtclauseDescription]
>  : We have so many empty javadoc @ clauses. This'll take care of it.
>  
> [Indentation|http://checkstyle.sourceforge.net/config_misc.html#Indentation] 
> : Bad indentation hurts code readability. We have indentation guidelines, 
> should be fine enforcing them.
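As a quick illustration of what the block-related checks above would enforce (a made-up example, not project code): LeftCurly with 'eol' keeps the opening brace at the end of the line, and NeedBraces requires braces even around single-statement blocks.
{code}
// Illustrative only -- shows the brace style LeftCurly ('eol') and NeedBraces expect.
public class BraceStyleExample {
  public int clamp(int value, int min, int max) {   // LeftCurly: '{' stays on this line
    if (value < min) {                               // NeedBraces: one-liners get braces too
      return min;
    }
    if (value > max) {
      return max;
    }
    return value;
  }
}
{code}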



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13347) RowCounter using special filter is broken

2015-11-20 Thread Abhishek Singh Chouhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015427#comment-15015427
 ] 

Abhishek Singh Chouhan commented on HBASE-13347:


As per my understanding, the RowCounter cleanup can be pushed to existing 
versions too, whereas the deprecation would be for 2.0? Should I create 
another jira for the deprecation and use this one just for the minor cleanup?

> RowCounter using special filter is broken
> -
>
> Key: HBASE-13347
> URL: https://issues.apache.org/jira/browse/HBASE-13347
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Affects Versions: 1.0.0
>Reporter: Lars George
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.4, 1.0.4
>
> Attachments: HBASE-13347-master.patch
>
>
> The {{RowCounter}} in the {{mapreduce}} package is supposed to check if the 
> row count scan has a column selection added to it, and if so, use a different 
> filter that finds the row and counts it. But the {{qualifier.add()}} call is 
> missing in the {{for}} loop. See 
> https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/RowCounter.java#L165
> Needs fixing or row count might be wrong when using {{--range}}.
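For illustration only (not the actual RowCounter source), the intent described above amounts to something like the following: each "family:qualifier" argument should contribute its qualifier to the set handed to FirstKeyValueMatchingQualifiersFilter, and the reported bug is the missing add() call in that loop.
{code}
import java.util.Set;
import java.util.TreeSet;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.FirstKeyValueMatchingQualifiersFilter;
import org.apache.hadoop.hbase.util.Bytes;

final class RowCountScanSetup {
  /** Apply "family:qualifier" column selections to a row-count scan. */
  static void applyColumns(Scan scan, String[] columns) {
    Set<byte[]> qualifiers = new TreeSet<byte[]>(Bytes.BYTES_COMPARATOR);
    for (String columnName : columns) {
      String[] fields = columnName.split(":");
      byte[] family = Bytes.toBytes(fields[0]);
      if (fields.length > 1) {
        byte[] qualifier = Bytes.toBytes(fields[1]);
        scan.addColumn(family, qualifier);
        qualifiers.add(qualifier);   // the add() call reported as missing above
      } else {
        scan.addFamily(family);
      }
    }
    if (!qualifiers.isEmpty()) {
      scan.setFilter(new FirstKeyValueMatchingQualifiersFilter(qualifiers));
    }
  }
}
{code}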



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14761) Deletes with and without visibility expression do not delete the matching mutation

2015-11-20 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015430#comment-15015430
 ] 

ramkrishna.s.vasudevan commented on HBASE-14761:


Pushed addendum to 0.98.

> Deletes with and without visibility expression do not delete the matching 
> mutation
> --
>
> Key: HBASE-14761
> URL: https://issues.apache.org/jira/browse/HBASE-14761
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1.0.0, 1.0.1, 1.1.0, 1.0.2, 1.1.2, 0.98.15
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0, 1.2.0, 1.3.0, 0.98.17
>
> Attachments: HBASE-14761.patch, HBASE-14761_0.98_addendum.patch
>
>
> This is from the user list as reported by Anoop Sharma
> {code}
>  running into an issue related to visibility expressions and delete.
> Example run from hbase shell is listed below.
> Will appreciate any help on this issue.
> thanks.
> In the example below, user running queries has ‘MANAGER’ authorization.
> *First example:*
>   add a column with visib expr ‘MANAGER’
>   delete it by passing in visibility of ‘MANAGER’
>   This works and scan doesn’t return anything.
> *Second example:*
>   add a column with visib expr ‘MANAGER’
>   delete it by not passing in any visibility.
>   This doesn’t delete the column.
>   Scan doesn’t return the row but RAW scan shows the column
>   marked as deleteColumn.
>   Now if delete is done again with visibility of ‘MANAGER’,
>   it still doesn’t delete it and scan returns the original column.
> *Example 1:*
> hbase(main):096:0> create 'HBT1', 'cf'
> hbase(main):098:0* *put 'HBT1', 'John', 'cf:a', 'CA',
> {VISIBILITY=>'MANAGER'}*
> hbase(main):099:0> *scan 'HBT1'*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446154722055,
> value=CA
> 1 row(s) in 0.0030 seconds
> hbase(main):100:0> *delete 'HBT1', 'John', 'cf:a', {VISIBILITY=>'MANAGER'}*
> 0 row(s) in 0.0030 seconds
> hbase(main):101:0> *scan 'HBT1'*
> ROW
> COLUMN+CELL
> 0 row(s) in 0.0030 seconds
> *Example 2:*
> hbase(main):010:0* *put 'HBT1', 'John', 'cf:a', 'CA',
> {VISIBILITY=>'MANAGER'}*
> 0 row(s) in 0.0040 seconds
> hbase(main):011:0> *scan 'HBT1'*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446155346473,
> value=CA
> 1 row(s) in 0.0060 seconds
> hbase(main):012:0> *delete 'HBT1', 'John', 'cf:a'*
> 0 row(s) in 0.0090 seconds
> hbase(main):013:0> *scan 'HBT1'*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446155346473,
> value=CA
> 1 row(s) in 0.0050 seconds
> hbase(main):014:0> *scan 'HBT1', {RAW => true}*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446155346519,
> type=DeleteColumn
> 1 row(s) in 0.0060 seconds
> hbase(main):015:0> *delete 'HBT1', 'John', 'cf:a', {VISIBILITY=>'MANAGER'}*
> 0 row(s) in 0.0030 seconds
> hbase(main):016:0> *scan 'HBT1'*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446155346473,
> value=CA
> 1 row(s) in 0.0040 seconds
> hbase(main):017:0> *scan 'HBT1', {RAW => true}*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446155346601,
> type=DeleteColumn
> 1 row(s) in 0.0060 seconds
> {code}
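To complement the shell session above, a small Java sketch of the verification scan an authorized user would run (illustrative only; the table name and the MANAGER authorization simply mirror the example):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.security.visibility.Authorizations;

public class VisibilityScanCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("HBT1"))) {
      // Scan with the MANAGER authorization, mirroring the shell session above.
      Scan scan = new Scan();
      scan.setAuthorizations(new Authorizations("MANAGER"));
      try (ResultScanner scanner = table.getScanner(scan)) {
        for (Result r : scanner) {
          // Expected to print nothing once the delete correctly masks the cell.
          System.out.println(r);
        }
      }
    }
  }
}
{code}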



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14852) Update build env

2015-11-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015453#comment-15015453
 ] 

Hadoop QA commented on HBASE-14852:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12773445/HBASE-14852-v3.patch
  against master branch at commit ea48ef86512addc3dc9bcde4b7433a3ac5881424.
  ATTACHMENT ID: 12773445

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev-support patch that doesn't require tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 4 release 
audit warnings (more than the master's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100 characters.

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16605//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16605//artifact/patchprocess/patchReleaseAuditWarnings.txt
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16605//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16605//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16605//console

This message is automatically generated.

> Update build env
> 
>
> Key: HBASE-14852
> URL: https://issues.apache.org/jira/browse/HBASE-14852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-14852-v1.patch, HBASE-14852-v3.patch, 
> HBASE-14852-v4.patch, HBASE-14852.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14223) Meta WALs are not cleared if meta region was closed and RS aborts

2015-11-20 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-14223:
--
Attachment: hbase-14223_v3-branch-1.patch

> Meta WALs are not cleared if meta region was closed and RS aborts
> -
>
> Key: HBASE-14223
> URL: https://issues.apache.org/jira/browse/HBASE-14223
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.4, 1.0.4
>
> Attachments: HBASE-14223logs, hbase-14223_v0.patch, 
> hbase-14223_v1-branch-1.patch, hbase-14223_v2-branch-1.patch, 
> hbase-14223_v3-branch-1.patch, hbase-14223_v3-branch-1.patch
>
>
> When an RS opens meta, and later closes it, the WAL (FSHLog) is not closed. 
> The last WAL file just sits there in the RS WAL directory. If RS stops 
> gracefully, the WAL file for meta is deleted. Otherwise if RS aborts, WAL for 
> meta is not cleaned. It is also not split (which is correct) since master 
> determines that the RS no longer hosts meta at the time of RS abort. 
> From a cluster after running ITBLL with CM, I see a lot of {{-splitting}} 
> directories left uncleaned: 
> {code}
> [root@os-enis-dal-test-jun-4-7 cluster-os]# sudo -u hdfs hadoop fs -ls 
> /apps/hbase/data/WALs
> Found 31 items
> drwxr-xr-x   - hbase hadoop  0 2015-06-05 01:14 
> /apps/hbase/data/WALs/hregion-58203265
> drwxr-xr-x   - hbase hadoop  0 2015-06-05 07:54 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-1.openstacklocal,16020,1433489308745-splitting
> drwxr-xr-x   - hbase hadoop  0 2015-06-05 09:28 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-1.openstacklocal,16020,1433494382959-splitting
> drwxr-xr-x   - hbase hadoop  0 2015-06-05 10:01 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-1.openstacklocal,16020,1433498252205-splitting
> ...
> {code}
> The directories contain WALs from meta: 
> {code}
> [root@os-enis-dal-test-jun-4-7 cluster-os]# sudo -u hdfs hadoop fs -ls 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285-splitting
> Found 2 items
> -rw-r--r--   3 hbase hadoop 201608 2015-06-05 03:15 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285-splitting/os-enis-dal-test-jun-4-5.openstacklocal%2C16020%2C1433466904285..meta.1433470511501.meta
> -rw-r--r--   3 hbase hadoop  44420 2015-06-05 04:36 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285-splitting/os-enis-dal-test-jun-4-5.openstacklocal%2C16020%2C1433466904285..meta.1433474111645.meta
> {code}
> The RS hosted the meta region for some time: 
> {code}
> 2015-06-05 03:14:28,692 INFO  [PostOpenDeployTasks:1588230740] 
> zookeeper.MetaTableLocator: Setting hbase:meta region location in ZooKeeper 
> as os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285
> ...
> 2015-06-05 03:15:17,302 INFO  
> [RS_CLOSE_META-os-enis-dal-test-jun-4-5:16020-0] regionserver.HRegion: Closed 
> hbase:meta,,1.1588230740
> {code}
> In between, a WAL is created: 
> {code}
> 2015-06-05 03:15:11,707 INFO  
> [RS_OPEN_META-os-enis-dal-test-jun-4-5:16020-0-MetaLogRoller] wal.FSHLog: 
> Rolled WAL 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285/os-enis-dal-test-jun-4-5.openstacklocal%2C16020%2C1433466904285..meta.1433470511501.meta
>  with entries=385, filesize=196.88 KB; new WAL 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285/os-enis-dal-test-jun-4-5.openstacklocal%2C16020%2C1433466904285..meta.1433474111645.meta
> {code}
> When CM killed the region server later, the master did not see these WAL files: 
> {code}
> ./hbase-hbase-master-os-enis-dal-test-jun-4-3.log:2015-06-05 03:36:46,075 
> INFO  [MASTER_SERVER_OPERATIONS-os-enis-dal-test-jun-4-3:16000-0] 
> master.SplitLogManager: started splitting 2 logs in 
> [hdfs://os-enis-dal-test-jun-4-1.openstacklocal:8020/apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285-splitting]
>  for [os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285]
> ./hbase-hbase-master-os-enis-dal-test-jun-4-3.log:2015-06-05 03:36:47,300 
> INFO  [main-EventThread] wal.WALSplitter: Archived processed log 
> hdfs://os-enis-dal-test-jun-4-1.openstacklocal:8020/apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285-splitting/os-enis-dal-test-jun-4-5.openstacklocal%2C16020%2C1433466904285.default.1433475074436
>  to 
> hdfs://os-enis-dal-test-jun-4-1.openstacklocal:8020/apps/hbase/data/oldWALs/os-enis-dal-test-jun-4-5.openstacklocal%2C16020%2C1433466904285.default.1433475074436
> ./hbase-hbase-master-os-enis-dal-test-jun-4-3.log:2015-06-05 03:36:50,497 
> INFO  [main-EventThread] wal.WALSplitter: Archived processed log 
> hdfs://os-enis-dal-test-jun-4-1.openstacklocal:8020

[jira] [Commented] (HBASE-11393) Replication TableCfs should be a PB object rather than a string

2015-11-20 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015475#comment-15015475
 ] 

Ashish Singhi commented on HBASE-11393:
---

bq. Which splitter do you think is best?
The string-based tableCfs API was deprecated in 0.99 as part of HBASE-11367, 
hence we are good to remove it in 2.0.0. So for 2.0.0 we need not worry about 
this.

For other branches we can just assume that a table passed in the string-type 
tableCfs belongs to the default namespace and set it in 
ZooKeeperProtos.TableCF. Please cross-check the code once to see if that is how 
it works currently.
And in the ruby script let's use the non-deprecated method and update the usage 
to encourage users to pass a map of table names to CFs.

[~enis], do you have any better suggestions?

> Replication TableCfs should be a PB object rather than a string
> ---
>
> Key: HBASE-11393
> URL: https://issues.apache.org/jira/browse/HBASE-11393
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
> Fix For: 2.0.0
>
> Attachments: HBASE-11393.patch, HBASE-11393_v1.patch, 
> HBASE-11393_v10.patch, HBASE-11393_v2.patch, HBASE-11393_v3.patch, 
> HBASE-11393_v4.patch, HBASE-11393_v5.patch, HBASE-11393_v6.patch, 
> HBASE-11393_v7.patch, HBASE-11393_v8.patch, HBASE-11393_v9.patch
>
>
> We concatenate the list of tables and column families in format  
> "table1:cf1,cf2;table2:cfA,cfB" in zookeeper for table-cf to replication peer 
> mapping. 
> This results in ugly parsing code. We should make this a PB object.
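For context, a rough sketch of the kind of string parsing this format forces on readers of the znode. This helper is purely illustrative; the real code (for example ReplicationAdmin#parseTableCFsFromConfig) differs in detail:
{code}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

final class TableCfsParser {
  /** Parse "table1:cf1,cf2;table2:cfA,cfB" into a table -> column families map. */
  static Map<String, List<String>> parse(String tableCfs) {
    Map<String, List<String>> result = new HashMap<String, List<String>>();
    if (tableCfs == null || tableCfs.trim().isEmpty()) {
      return result;
    }
    for (String entry : tableCfs.split(";")) {
      String[] pair = entry.split(":", 2);
      String table = pair[0].trim();
      List<String> cfs = new ArrayList<String>();
      if (pair.length > 1 && !pair[1].trim().isEmpty()) {
        for (String cf : pair[1].split(",")) {
          cfs.add(cf.trim());
        }
      }
      result.put(table, cfs);   // an empty list means "replicate all column families"
    }
    return result;
  }
}
{code}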



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11393) Replication TableCfs should be a PB object rather than a string

2015-11-20 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015496#comment-15015496
 ] 

Heng Chen commented on HBASE-11393:
---

A backward compatibility problem happens when users upgrade their cluster: the 
old-format tableCFs string is stored in zk, and we have to read and parse it 
correctly.

> Replication TableCfs should be a PB object rather than a string
> ---
>
> Key: HBASE-11393
> URL: https://issues.apache.org/jira/browse/HBASE-11393
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
> Fix For: 2.0.0
>
> Attachments: HBASE-11393.patch, HBASE-11393_v1.patch, 
> HBASE-11393_v10.patch, HBASE-11393_v2.patch, HBASE-11393_v3.patch, 
> HBASE-11393_v4.patch, HBASE-11393_v5.patch, HBASE-11393_v6.patch, 
> HBASE-11393_v7.patch, HBASE-11393_v8.patch, HBASE-11393_v9.patch
>
>
> We concatenate the list of tables and column families in format  
> "table1:cf1,cf2;table2:cfA,cfB" in zookeeper for table-cf to replication peer 
> mapping. 
> This results in ugly parsing code. We should make this a PB object.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14703) not collect stats when call HTable.mutateRow

2015-11-20 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen updated HBASE-14703:
--
Attachment: HBASE-14702_v5.2_addendum-addendum.patch

> not collect stats when call HTable.mutateRow 
> -
>
> Key: HBASE-14703
> URL: https://issues.apache.org/jira/browse/HBASE-14703
> Project: HBase
>  Issue Type: Bug
>Reporter: Heng Chen
>Assignee: Heng Chen
> Attachments: HBASE-14702_v5.2_addendum-addendum.patch, 
> HBASE-14703-5.2-addendum.patch, HBASE-14703-async.patch, 
> HBASE-14703-start.patch, HBASE-14703-v4.1.patch, HBASE-14703-v4.patch, 
> HBASE-14703.patch, HBASE-14703_v1.patch, HBASE-14703_v2.patch, 
> HBASE-14703_v3.patch, HBASE-14703_v5.1.patch, HBASE-14703_v5.2.patch, 
> HBASE-14703_v5.patch
>
>
> In {{AsyncProcess.SingleServerRequestRunnable}}, it seems we update 
> serverStatistics twice.
> The first one is that we wrap {{RetryingCallable}} in 
> {{StatsTrackingRpcRetryingCaller}} and update serverStatistics when we 
> call {{callWithRetries}} and {{callWithoutRetries}}. The related code is below:
> {code}
>   @Override
>   public T callWithRetries(RetryingCallable callable, int callTimeout)
>   throws IOException, RuntimeException {
> T result = delegate.callWithRetries(callable, callTimeout);
> return updateStatsAndUnwrap(result, callable);
>   }
>   @Override
>   public T callWithoutRetries(RetryingCallable callable, int callTimeout)
>   throws IOException, RuntimeException {
> T result = delegate.callWithRetries(callable, callTimeout);
> return updateStatsAndUnwrap(result, callable);
>   }
> {code}
> The second one is after we get the response: in {{receiveMultiAction}}, we do 
> the update again.
> {code}
> // update the stats about the region, if its a user table. We don't want to 
> slow down
> // updates to meta tables, especially from internal updates (master, etc).
> if (AsyncProcess.this.connection.getStatisticsTracker() != null) {
>   result = ResultStatsUtil.updateStats(result,
>   AsyncProcess.this.connection.getStatisticsTracker(), server, regionName);
> }
> {code}
> It seems that {{StatsTrackingRpcRetryingCaller}} is NOT necessary,  remove it?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11393) Replication TableCfs should be a PB object rather than a string

2015-11-20 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015504#comment-15015504
 ] 

Ashish Singhi commented on HBASE-11393:
---

Yes, so in that case we can consider that the table belongs to the default 
namespace (I am assuming that this is how it works currently, correct me if I am 
wrong).
By removing the deprecated api from 2.0.0 I meant that clients from 2.0.0 
onwards no longer provide tableCfs in String format.

> Replication TableCfs should be a PB object rather than a string
> ---
>
> Key: HBASE-11393
> URL: https://issues.apache.org/jira/browse/HBASE-11393
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
> Fix For: 2.0.0
>
> Attachments: HBASE-11393.patch, HBASE-11393_v1.patch, 
> HBASE-11393_v10.patch, HBASE-11393_v2.patch, HBASE-11393_v3.patch, 
> HBASE-11393_v4.patch, HBASE-11393_v5.patch, HBASE-11393_v6.patch, 
> HBASE-11393_v7.patch, HBASE-11393_v8.patch, HBASE-11393_v9.patch
>
>
> We concatenate the list of tables and column families in format  
> "table1:cf1,cf2;table2:cfA,cfB" in zookeeper for table-cf to replication peer 
> mapping. 
> This results in ugly parsing code. We should make this a PB object.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14703) not collect stats when call HTable.mutateRow

2015-11-20 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015515#comment-15015515
 ] 

Heng Chen commented on HBASE-14703:
---

Updated a patch based on your last patch; I created the diff against your patch.
 * Removed some unused code, but did not remove RegionLoadStats from 
ResultOrException, so the PB parser does not crash when users upgrade their cluster.
 * Unified the RS multi response for mutateRow and the other calls that currently 
use the multi interface (put/puts/gets), so we can unify the different process 
paths in AP.

I tried to unify checkAndMutate and AP, but failed.
The reason is I have no idea how to get the processed flag without adding an 
option to MultiResponse.
I tried to pass results[] into AP.submit, but it couldn't pass the test case 
TestCheckAndMutate.
Maybe we should add the processed flag back into MultiResponse. wdyt?




> not collect stats when call HTable.mutateRow 
> -
>
> Key: HBASE-14703
> URL: https://issues.apache.org/jira/browse/HBASE-14703
> Project: HBase
>  Issue Type: Bug
>Reporter: Heng Chen
>Assignee: Heng Chen
> Attachments: HBASE-14702_v5.2_addendum-addendum.patch, 
> HBASE-14703-5.2-addendum.patch, HBASE-14703-async.patch, 
> HBASE-14703-start.patch, HBASE-14703-v4.1.patch, HBASE-14703-v4.patch, 
> HBASE-14703.patch, HBASE-14703_v1.patch, HBASE-14703_v2.patch, 
> HBASE-14703_v3.patch, HBASE-14703_v5.1.patch, HBASE-14703_v5.2.patch, 
> HBASE-14703_v5.patch
>
>
> In {{AsyncProcess.SingleServerRequestRunnable}}, it seems we update 
> serverStatistics twice.
> The first one is that we wrap {{RetryingCallable}} in 
> {{StatsTrackingRpcRetryingCaller}} and update serverStatistics when we 
> call {{callWithRetries}} and {{callWithoutRetries}}. The related code is below:
> {code}
>   @Override
>   public T callWithRetries(RetryingCallable callable, int callTimeout)
>   throws IOException, RuntimeException {
> T result = delegate.callWithRetries(callable, callTimeout);
> return updateStatsAndUnwrap(result, callable);
>   }
>   @Override
>   public T callWithoutRetries(RetryingCallable callable, int callTimeout)
>   throws IOException, RuntimeException {
> T result = delegate.callWithRetries(callable, callTimeout);
> return updateStatsAndUnwrap(result, callable);
>   }
> {code}
> The second one is after we get the response: in {{receiveMultiAction}}, we do 
> the update again.
> {code}
> // update the stats about the region, if its a user table. We don't want to 
> slow down
> // updates to meta tables, especially from internal updates (master, etc).
> if (AsyncProcess.this.connection.getStatisticsTracker() != null) {
>   result = ResultStatsUtil.updateStats(result,
>   AsyncProcess.this.connection.getStatisticsTracker(), server, regionName);
> }
> {code}
> It seems that {{StatsTrackingRpcRetryingCaller}} is NOT necessary,  remove it?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11393) Replication TableCfs should be a PB object rather than a string

2015-11-20 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015519#comment-15015519
 ] 

Heng Chen commented on HBASE-11393:
---

{quote}
By remove the deprecated api from 2.0.0 I meant that the client after 2.0.0 
still does not continue to provide tableCfs in String format.
{quote}
If we can, that is a good way.

> Replication TableCfs should be a PB object rather than a string
> ---
>
> Key: HBASE-11393
> URL: https://issues.apache.org/jira/browse/HBASE-11393
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
> Fix For: 2.0.0
>
> Attachments: HBASE-11393.patch, HBASE-11393_v1.patch, 
> HBASE-11393_v10.patch, HBASE-11393_v2.patch, HBASE-11393_v3.patch, 
> HBASE-11393_v4.patch, HBASE-11393_v5.patch, HBASE-11393_v6.patch, 
> HBASE-11393_v7.patch, HBASE-11393_v8.patch, HBASE-11393_v9.patch
>
>
> We concatenate the list of tables and column families in format  
> "table1:cf1,cf2;table2:cfA,cfB" in zookeeper for table-cf to replication peer 
> mapping. 
> This results in ugly parsing code. We should make this a PB object.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14703) not collect stats when call HTable.mutateRow

2015-11-20 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen updated HBASE-14703:
--
Attachment: HBASE-14703_v6.patch

> not collect stats when call HTable.mutateRow 
> -
>
> Key: HBASE-14703
> URL: https://issues.apache.org/jira/browse/HBASE-14703
> Project: HBase
>  Issue Type: Bug
>Reporter: Heng Chen
>Assignee: Heng Chen
> Attachments: HBASE-14702_v5.2_addendum-addendum.patch, 
> HBASE-14703-5.2-addendum.patch, HBASE-14703-async.patch, 
> HBASE-14703-start.patch, HBASE-14703-v4.1.patch, HBASE-14703-v4.patch, 
> HBASE-14703.patch, HBASE-14703_v1.patch, HBASE-14703_v2.patch, 
> HBASE-14703_v3.patch, HBASE-14703_v5.1.patch, HBASE-14703_v5.2.patch, 
> HBASE-14703_v5.patch, HBASE-14703_v6.patch
>
>
> In {{AsyncProcess.SingleServerRequestRunnable}}, it seems we update 
> serverStatistics twice.
> The first one is that we wrap {{RetryingCallable}} in 
> {{StatsTrackingRpcRetryingCaller}} and update serverStatistics when we 
> call {{callWithRetries}} and {{callWithoutRetries}}. The related code is below:
> {code}
>   @Override
>   public T callWithRetries(RetryingCallable callable, int callTimeout)
>   throws IOException, RuntimeException {
> T result = delegate.callWithRetries(callable, callTimeout);
> return updateStatsAndUnwrap(result, callable);
>   }
>   @Override
>   public T callWithoutRetries(RetryingCallable callable, int callTimeout)
>   throws IOException, RuntimeException {
> T result = delegate.callWithRetries(callable, callTimeout);
> return updateStatsAndUnwrap(result, callable);
>   }
> {code}
> The second one is after we get the response: in {{receiveMultiAction}}, we do 
> the update again.
> {code}
> // update the stats about the region, if its a user table. We don't want to 
> slow down
> // updates to meta tables, especially from internal updates (master, etc).
> if (AsyncProcess.this.connection.getStatisticsTracker() != null) {
>   result = ResultStatsUtil.updateStats(result,
>   AsyncProcess.this.connection.getStatisticsTracker(), server, regionName);
> }
> {code}
> It seems that {{StatsTrackingRpcRetryingCaller}} is NOT necessary,  remove it?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-14703) not collect stats when call HTable.mutateRow

2015-11-20 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015515#comment-15015515
 ] 

Heng Chen edited comment on HBASE-14703 at 11/20/15 9:59 AM:
-

Updated a patch based on your last patch; I created the diff against your patch.
And I also uploaded the real patch v6.
 * Removed some unused code, but did not remove RegionLoadStats from 
ResultOrException, so the PB parser does not crash when users upgrade their cluster.
 * Unified the RS multi response for mutateRow and the other calls that currently 
use the multi interface (put/puts/gets), so we can unify the different process 
paths in AP.

I tried to unify checkAndMutate and AP, but failed.
The reason is I have no idea how to get the processed flag without adding an 
option to MultiResponse.
I tried to pass results[] into AP.submit, but it couldn't pass the test case 
TestCheckAndMutate.
Maybe we should add the processed flag back into MultiResponse. wdyt?





was (Author: chenheng):
Update a patch based on your last patch, and i create a diff base on your patch.
 * Remove some unused code.  But not remove RegionLoadStats in 
ResultOrException for PB parser not crashed when user upgrade cluster.
 * Unify RS multi response for mutateRow and other calls  which use multi 
interface currently (put/puts/gets). So we can unify different process path in 
AP.

I try to unify checkAndMutate and AP,  but failed.  
The reason is i have no idea how to get processed flag if not add one option in 
MultiResponse.
I try to pass results[] into AP.submit,  but found it can't pass test case 
TestCheckAndMutate.  
Maybe we should add process flag back into MultiResponse.  wdyt?




> not collect stats when call HTable.mutateRow 
> -
>
> Key: HBASE-14703
> URL: https://issues.apache.org/jira/browse/HBASE-14703
> Project: HBase
>  Issue Type: Bug
>Reporter: Heng Chen
>Assignee: Heng Chen
> Attachments: HBASE-14702_v5.2_addendum-addendum.patch, 
> HBASE-14703-5.2-addendum.patch, HBASE-14703-async.patch, 
> HBASE-14703-start.patch, HBASE-14703-v4.1.patch, HBASE-14703-v4.patch, 
> HBASE-14703.patch, HBASE-14703_v1.patch, HBASE-14703_v2.patch, 
> HBASE-14703_v3.patch, HBASE-14703_v5.1.patch, HBASE-14703_v5.2.patch, 
> HBASE-14703_v5.patch, HBASE-14703_v6.patch
>
>
> In {{AsyncProcess.SingleServerRequestRunnable}}, it seems we update 
> serverStatistics twice.
> The first one is that we wrap {{RetryingCallable}} in 
> {{StatsTrackingRpcRetryingCaller}} and update serverStatistics when we 
> call {{callWithRetries}} and {{callWithoutRetries}}. The related code is below:
> {code}
>   @Override
>   public T callWithRetries(RetryingCallable callable, int callTimeout)
>   throws IOException, RuntimeException {
> T result = delegate.callWithRetries(callable, callTimeout);
> return updateStatsAndUnwrap(result, callable);
>   }
>   @Override
>   public T callWithoutRetries(RetryingCallable callable, int callTimeout)
>   throws IOException, RuntimeException {
> T result = delegate.callWithRetries(callable, callTimeout);
> return updateStatsAndUnwrap(result, callable);
>   }
> {code}
> The second one is after we get the response: in {{receiveMultiAction}}, we do 
> the update again.
> {code}
> // update the stats about the region, if its a user table. We don't want to 
> slow down
> // updates to meta tables, especially from internal updates (master, etc).
> if (AsyncProcess.this.connection.getStatisticsTracker() != null) {
>   result = ResultStatsUtil.updateStats(result,
>   AsyncProcess.this.connection.getStatisticsTracker(), server, regionName);
> }
> {code}
> It seems that {{StatsTrackingRpcRetryingCaller}} is NOT necessary,  remove it?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14840) Sink cluster reports data replication request as success though the data is not replicated

2015-11-20 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-14840:
--
Attachment: HBASE-14840-0.98.patch

> Sink cluster reports data replication request as success though the data is 
> not replicated
> --
>
> Key: HBASE-14840
> URL: https://issues.apache.org/jira/browse/HBASE-14840
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2, 1.0.3, 0.98.16
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-14840-0.98.patch, HBASE-14840.patch
>
>
> *Scenario:*
> Sink cluster is down
> Create a table and enable table replication
> Put some data
> Now restart the sink cluster
> *Observation:*
> Data is not replicated to the sink cluster, but the source cluster still 
> updates the WAL log position in ZK, resulting in data loss on the sink cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14840) Sink cluster reports data replication request as success though the data is not replicated

2015-11-20 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-14840:
--
Attachment: HBASE-14840-branch-1.patch

> Sink cluster reports data replication request as success though the data is 
> not replicated
> --
>
> Key: HBASE-14840
> URL: https://issues.apache.org/jira/browse/HBASE-14840
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2, 1.0.3, 0.98.16
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-14840-0.98.patch, HBASE-14840-branch-1.patch, 
> HBASE-14840.patch
>
>
> *Scenario:*
> Sink cluster is down
> Create a table and enable table replication
> Put some data
> Now restart the sink cluster
> *Observation:*
> Data is not replicated to the sink cluster, but the source cluster still 
> updates the WAL log position in ZK, resulting in data loss on the sink cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14840) Sink cluster reports data replication request as success though the data is not replicated

2015-11-20 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015546#comment-15015546
 ] 

Ashish Singhi commented on HBASE-14840:
---

Attached the 0.98 and branch-1 patches. The branch-1 patch will also apply to 
the 1.0.x, 1.1.x and 1.2.x versions.

> Sink cluster reports data replication request as success though the data is 
> not replicated
> --
>
> Key: HBASE-14840
> URL: https://issues.apache.org/jira/browse/HBASE-14840
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2, 1.0.3, 0.98.16
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-14840-0.98.patch, HBASE-14840-branch-1.patch, 
> HBASE-14840.patch
>
>
> *Scenario:*
> Sink cluster is down
> Create a table and enable table replication
> Put some data
> Now restart the sink cluster
> *Observation:*
> Data is not replicated to the sink cluster, but the source cluster still 
> updates the WAL log position in ZK, resulting in data loss on the sink cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14840) Sink cluster reports data replication request as success though the data is not replicated

2015-11-20 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015560#comment-15015560
 ] 

ramkrishna.s.vasudevan commented on HBASE-14840:


Pushed to all branches 0.98 and above. Thanks for the patch, [~ashish singhi].

> Sink cluster reports data replication request as success though the data is 
> not replicated
> --
>
> Key: HBASE-14840
> URL: https://issues.apache.org/jira/browse/HBASE-14840
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2, 1.0.3, 0.98.16
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-14840-0.98.patch, HBASE-14840-branch-1.patch, 
> HBASE-14840.patch
>
>
> *Scenario:*
> Sink cluster is down
> Create a table and enable table replication
> Put some data
> Now restart the sink cluster
> *Observation:*
> Data is not replicated to the sink cluster, but the source cluster still 
> updates the WAL log position in ZK, resulting in data loss on the sink cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14840) Sink cluster reports data replication request as success though the data is not replicated

2015-11-20 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-14840:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> Sink cluster reports data replication request as success though the data is 
> not replicated
> --
>
> Key: HBASE-14840
> URL: https://issues.apache.org/jira/browse/HBASE-14840
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2, 1.0.3, 0.98.16
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-14840-0.98.patch, HBASE-14840-branch-1.patch, 
> HBASE-14840.patch
>
>
> *Scenario:*
> Sink cluster is down
> Create a table and enable table replication
> Put some data
> Now restart the sink cluster
> *Observation:*
> Data is not replicated to the sink cluster, but the source cluster still 
> updates the WAL log position in ZK, resulting in data loss on the sink cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14777) Fix Inter Cluster Replication Future ordering issues

2015-11-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015562#comment-15015562
 ] 

Hudson commented on HBASE-14777:


SUCCESS: Integrated in HBase-1.2 #389 (See 
[https://builds.apache.org/job/HBase-1.2/389/])
HBASE-14777 ADDENDUM Fix failing TestReplicationEndpoint test (busbey: rev 
ca7fbeadafb6caa1b9534a225900e31acbd5cf34)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationEndpoint.java


> Fix Inter Cluster Replication Future ordering issues
> 
>
> Key: HBASE-14777
> URL: https://issues.apache.org/jira/browse/HBASE-14777
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: Bhupendra Kumar Jain
>Assignee: Ashu Pachauri
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14777-1.patch, HBASE-14777-2.patch, 
> HBASE-14777-3.patch, HBASE-14777-4.patch, HBASE-14777-5.patch, 
> HBASE-14777-6.patch, HBASE-14777-addendum.patch, HBASE-14777.patch
>
>
> Replication fails with IndexOutOfBoundsException 
> {code}
> regionserver.ReplicationSource$ReplicationSourceWorkerThread(939): 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint
>  threw unknown exception:java.lang.IndexOutOfBoundsException: Index: 1, Size: 
> 1
>   at java.util.ArrayList.rangeCheck(Unknown Source)
>   at java.util.ArrayList.remove(Unknown Source)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint.replicate(HBaseInterClusterReplicationEndpoint.java:222)
> {code}
> Its happening due to incorrect removal of entries from the replication 
> entries list. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14826) Small improvement in KVHeap seek() API

2015-11-20 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015572#comment-15015572
 ] 

Anoop Sam John commented on HBASE-14826:


How will it make a change, if any, on a seek with forward = false?

> Small improvement in KVHeap seek() API
> --
>
> Key: HBASE-14826
> URL: https://issues.apache.org/jira/browse/HBASE-14826
> Project: HBase
>  Issue Type: Improvement
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Minor
> Attachments: HBASE-14826.patch
>
>
> Currently in the seek/reseek() APIs we tend to do a lot of priority-queue 
> operations. We initially add the current scanner to the heap, then poll and 
> add the scanner back again if the seekKey is greater than the top key of that 
> scanner. Since the KVs are always going to be in increasing order, and in the 
> ideal scan flow every seek/reseek is followed by a next() call, it should be 
> OK if we start by checking the current scanner and only then poll to get 
> the next scanner, avoiding the initial PQ.add(current) call. This could 
> save some comparisons.
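A sketch of the proposed flow (an illustration with assumed helper names, not the actual KeyValueHeap code): test the scanner the heap is currently positioned on first, and only touch the priority queue when that scanner actually has to move.
{code}
import java.io.IOException;
import java.util.Comparator;
import java.util.PriorityQueue;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.regionserver.KeyValueScanner;

final class SeekSketch {
  /**
   * Seek helper: check the current scanner before the priority queue, so the common
   * "seek then next" pattern can skip one PQ add/poll pair.
   */
  static KeyValueScanner seekPreferCurrent(KeyValueScanner current,
      PriorityQueue<KeyValueScanner> heap, Cell seekKey, Comparator<Cell> comparator)
      throws IOException {
    if (current != null) {
      Cell top = current.peek();
      if (top != null && comparator.compare(top, seekKey) >= 0) {
        return current;              // already at or past the seek key; no PQ.add(current)
      }
      if (current.seek(seekKey)) {
        heap.add(current);           // re-insert only when it actually had to move
      } else {
        current.close();             // scanner exhausted
      }
    }
    return heap.poll();              // next candidate in key order
  }
}
{code}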



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13347) RowCounter using special filter is broken

2015-11-20 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015575#comment-15015575
 ] 

Anoop Sam John commented on HBASE-13347:


I feel we can do the cleanup in 2.0 and branch-1 also (no need for the patch 
release versions), and in 2.0 we can do the deprecation in this Jira.

> RowCounter using special filter is broken
> -
>
> Key: HBASE-13347
> URL: https://issues.apache.org/jira/browse/HBASE-13347
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Affects Versions: 1.0.0
>Reporter: Lars George
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.4, 1.0.4
>
> Attachments: HBASE-13347-master.patch
>
>
> The {{RowCounter}} in the {{mapreduce}} package is supposed to check if the 
> row count scan has a column selection added to it, and if so, use a different 
> filter that finds the row and counts it. But the {{qualifier.add()}} call is 
> missing in the {{for}} loop. See 
> https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/RowCounter.java#L165
> Needs fixing or row count might be wrong when using {{--range}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13082) Coarsen StoreScanner locks to RegionScanner

2015-11-20 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015577#comment-15015577
 ] 

ramkrishna.s.vasudevan commented on HBASE-13082:


Thanks for the reviews Stack.
bq. Need to update comment and rename variable else we'll stay confused.
I will update the comment to say that 'these files are not included in reads'. 
Is the name 'compactedfiles' still better?

bq.Only one thread involved here?
bq.public ImmutableCollection clearCompactedFiles() {
Yes it will be single threaded. Used only while closing the store files.
bq. Suggest you change this method to return the Collection rather than set the 
data member internally: i.e. remove the 'set' part from sortAndSetCompactedFiles. 
Do the set on return. Methods like this with 'side effects' can be tough to 
follow.
Okie.
bq.You know the size of the array to allocate here: 124 newCompactedFiles = 
Lists.newArrayList(); ...
Yes, I have done some refactoring there.
bq. DISCARDED and ACTIVE?
We will make it ACTIVE and COMPACTEDAWAY (as you suggested in another comment)?
bq. I don't follow how we were checking for references when we went to merge but 
in this patch it changes to a check for compactions:
I think you were referring to some old patch. The latest patch was _14.
bq.Fix formatting here abouts if you are doing a new patch: if 
(!SystemUtils.IS_OS_WINDOWS) {
This is not there in the latest patch.
bq.Where we explain what it does?
There is a javadoc explaining what it does.
bq. Has to be public because its in the Interface? Does it have to be:
We access the store not directly through HStore but through Store.java. So it is 
better to add it there, and anyway this is going to be common for that store.
bq.Might want to note that expectation is that access on methods like this one 
are single-threaded: clearCompactedFiles
Okie.
bq.Do you have to stop the chore in the region or store close? Before you do 
your close?
Yes good catch. Done now.
bq. void closeAndArchiveCompactedFiles(List compactedStorefiles) 
throws IOException;
In my next version I will remove this but keep the other one, void 
closeAndArchiveCompactedFiles() throws IOException;
bq.Do they have to be so specific? Can they be made more generic?
Generic in what sense?
bq. It seems like the compacted-or-not state belongs in StoreFileInfo rather than 
in StoreFile. Is this fact persisted across open/close?
We cannot have this in StoreFileInfo because we only cache the StoreFile (in 
the StoreFileManager) and not the StoreFileInfo. StoreFileInfos are created 
every time from the hfile path.
bq. Maybe 'compactedAway'?
Yes, we can have ACTIVE and COMPACTED_AWAY?

> Coarsen StoreScanner locks to RegionScanner
> ---
>
> Key: HBASE-13082
> URL: https://issues.apache.org/jira/browse/HBASE-13082
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: ramkrishna.s.vasudevan
> Attachments: 13082-test.txt, 13082-v2.txt, 13082-v3.txt, 
> 13082-v4.txt, 13082.txt, 13082.txt, HBASE-13082.pdf, HBASE-13082_1.pdf, 
> HBASE-13082_12.patch, HBASE-13082_13.patch, HBASE-13082_14.patch, 
> HBASE-13082_1_WIP.patch, HBASE-13082_2.pdf, HBASE-13082_2_WIP.patch, 
> HBASE-13082_3.patch, HBASE-13082_4.patch, HBASE-13082_9.patch, 
> HBASE-13082_9.patch, HBASE-13082_withoutpatch.jpg, HBASE-13082_withpatch.jpg, 
> LockVsSynchronized.java, gc.png, gc.png, gc.png, hits.png, next.png, next.png
>
>
> Continuing where HBASE-10015 left off.
> We can avoid locking (and memory fencing) inside StoreScanner by deferring to 
> the lock already held by the RegionScanner.
> In tests this shows quite a scan improvement and reduced CPU (the fences make 
> the cores wait for memory fetches).
> There are some drawbacks too:
> * All calls to RegionScanner need to be remain synchronized
> * Implementors of coprocessors need to be diligent in following the locking 
> contract. For example Phoenix does not lock RegionScanner.nextRaw() as 
> required in the documentation (not picking on Phoenix, this one is my fault 
> as I told them it's OK)
> * possible starving of flushes and compactions under heavy read load. 
> RegionScanner operations would keep getting the locks and the 
> flushes/compactions would not be able to finalize the set of files.
> I'll have a patch soon.
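For illustration, the caller-side locking contract this implies for coprocessors, using branch-1-era interfaces (not code from the patch; method names on other branches may differ):
{code}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.regionserver.Region;
import org.apache.hadoop.hbase.regionserver.RegionScanner;

final class NextRawContract {
  /** Read one batch from a RegionScanner while honouring the locking contract. */
  static List<Cell> readOneBatch(Region region, RegionScanner scanner) throws IOException {
    List<Cell> cells = new ArrayList<Cell>();
    region.startRegionOperation();   // keep flushes/compactions from finalizing files under us
    try {
      synchronized (scanner) {       // the RegionScanner lock that StoreScanner now defers to
        scanner.nextRaw(cells);
      }
    } finally {
      region.closeRegionOperation();
    }
    return cells;
  }
}
{code}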



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14843) TestWALProcedureStore.testLoad is flakey

2015-11-20 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen updated HBASE-14843:
--
Description: 
I see it twice recently, 
see.
https://builds.apache.org/job/PreCommit-HBASE-Build/16589//testReport/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/

https://builds.apache.org/job/PreCommit-HBASE-Build/16532/testReport/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/

Let's see what's happening.

Update.
It failed once again today, 
https://builds.apache.org/job/PreCommit-HBASE-Build/16602/testReport/junit/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/



  was:
I see it twice recently, 
see.
https://builds.apache.org/job/PreCommit-HBASE-Build/16589//testReport/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/

https://builds.apache.org/job/PreCommit-HBASE-Build/16532/testReport/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/

Let's see what's happening.


> TestWALProcedureStore.testLoad is flakey
> 
>
> Key: HBASE-14843
> URL: https://issues.apache.org/jira/browse/HBASE-14843
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Heng Chen
>
> I see it twice recently, 
> see.
> https://builds.apache.org/job/PreCommit-HBASE-Build/16589//testReport/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/
> https://builds.apache.org/job/PreCommit-HBASE-Build/16532/testReport/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/
> Let's see what's happening.
> Update.
> It failed once again today, 
> https://builds.apache.org/job/PreCommit-HBASE-Build/16602/testReport/junit/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13347) RowCounter using special filter is broken

2015-11-20 Thread Abhishek Singh Chouhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015590#comment-15015590
 ] 

Abhishek Singh Chouhan commented on HBASE-13347:


Got it. :)

> RowCounter using special filter is broken
> -
>
> Key: HBASE-13347
> URL: https://issues.apache.org/jira/browse/HBASE-13347
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Affects Versions: 1.0.0
>Reporter: Lars George
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.4, 1.0.4
>
> Attachments: HBASE-13347-master.patch
>
>
> The {{RowCounter}} in the {{mapreduce}} package is supposed to check if the 
> row count scan has a column selection added to it, and if so, use a different 
> filter that finds the row and counts it. But the {{qualifier.add()}} call is 
> missing in the {{for}} loop. See 
> https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/RowCounter.java#L165
> Needs fixing or row count might be wrong when using {{--range}}.
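
For illustration only, a hedged sketch of the kind of fix the report above 
describes - collecting the selected qualifiers while the scan is being built. 
Variable names are assumed here, not taken from the actual RowCounter source:
{code}
// Hypothetical sketch; the point is the qualifiers.add(...) call that the
// description says is missing from the for loop.
Set<byte[]> qualifiers = new TreeSet<byte[]>(Bytes.BYTES_COMPARATOR);
for (String columnName : columns) {            // columns: "family:qualifier" args
  String[] fields = columnName.split(":");
  byte[] family = Bytes.toBytes(fields[0]);
  if (fields.length == 2) {
    byte[] qualifier = Bytes.toBytes(fields[1]);
    scan.addColumn(family, qualifier);
    qualifiers.add(qualifier);                 // <-- the missing call
  } else {
    scan.addFamily(family);
  }
}
{code}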



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14840) Sink cluster reports data replication request as success though the data is not replicated

2015-11-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015594#comment-15015594
 ] 

Hudson commented on HBASE-14840:


SUCCESS: Integrated in HBase-1.3-IT #327 (See 
[https://builds.apache.org/job/HBase-1.3-IT/327/])
HBASE-14840 Sink cluster reports data replication request as success 
(ramkrishna: rev 11b213101345c9485b3abe12f3142617c90bc692)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestMasterReplication.java


> Sink cluster reports data replication request as success though the data is 
> not replicated
> --
>
> Key: HBASE-14840
> URL: https://issues.apache.org/jira/browse/HBASE-14840
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2, 1.0.3, 0.98.16
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-14840-0.98.patch, HBASE-14840-branch-1.patch, 
> HBASE-14840.patch
>
>
> *Scenario:*
> Sink cluster is down
> Create a table and enable table replication
> Put some data
> Now restart the sink cluster
> *Observation:*
> Data is not replicated in sink cluster but still source cluster updates the 
> WAL log position in ZK, resulting in data loss in sink cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14843) TestWALProcedureStore.testLoad is flakey

2015-11-20 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015598#comment-15015598
 ] 

Heng Chen commented on HBASE-14843:
---

What I can't understand is why there is no log output.
{code}
  private void verifyProcIdsOnRestart(final Set<Long> procIds) throws Exception {
    LOG.debug("expected: " + procIds);
    LoadCounter loader = new LoadCounter();
    storeRestart(loader);
    assertEquals(procIds.size(), loader.getLoadedCount());
    assertEquals(0, loader.getCorruptedCount());
  }
{code}
It should output the above debug line at least. What's the problem?

> TestWALProcedureStore.testLoad is flakey
> 
>
> Key: HBASE-14843
> URL: https://issues.apache.org/jira/browse/HBASE-14843
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Heng Chen
>
> I have seen it twice recently; see:
> https://builds.apache.org/job/PreCommit-HBASE-Build/16589//testReport/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/
> https://builds.apache.org/job/PreCommit-HBASE-Build/16532/testReport/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/
> Let's see what's happening.
> Update.
> It failed once again today, 
> https://builds.apache.org/job/PreCommit-HBASE-Build/16602/testReport/junit/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14030) HBase Backup/Restore Phase 1

2015-11-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015603#comment-15015603
 ] 

Hadoop QA commented on HBASE-14030:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12773461/HBASE-14030-v18.patch
  against master branch at commit 8dbbe96e040cdee2a94b2a0ac53462a5c8f5c233.
  ATTACHMENT ID: 12773461

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 25 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
18996 checkstyle errors (more than the master's current 18690 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16609//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16609//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16609//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16609//console

This message is automatically generated.

> HBase Backup/Restore Phase 1
> 
>
> Key: HBASE-14030
> URL: https://issues.apache.org/jira/browse/HBASE-14030
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Attachments: HBASE-14030-v0.patch, HBASE-14030-v1.patch, 
> HBASE-14030-v10.patch, HBASE-14030-v11.patch, HBASE-14030-v12.patch, 
> HBASE-14030-v13.patch, HBASE-14030-v14.patch, HBASE-14030-v15.patch, 
> HBASE-14030-v17.patch, HBASE-14030-v18.patch, HBASE-14030-v2.patch, 
> HBASE-14030-v3.patch, HBASE-14030-v4.patch, HBASE-14030-v5.patch, 
> HBASE-14030-v6.patch, HBASE-14030-v7.patch, HBASE-14030-v8.patch
>
>
> This is the umbrella ticket for Backup/Restore Phase 1. See HBASE-7912 design 
> doc for the phase description.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13347) Deprecate FirstKeyValueMatchingQualifiersFilter

2015-11-20 Thread Abhishek Singh Chouhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Singh Chouhan updated HBASE-13347:
---
 Assignee: Abhishek Singh Chouhan
Affects Version/s: (was: 1.0.0)
   2.0.0
 Priority: Minor  (was: Major)
Fix Version/s: (was: 1.0.4)
   (was: 1.1.4)
   (was: 1.2.1)
  Description: 
The {{RowCounter}} in the {{mapreduce}} package uses 
{{FirstKeyValueMatchingQualifiersFilter}} which was introduced in HBASE-6468. 
However we do not need that since we match columns in the scan before we filter.

Deprecate the filter in 2.0 and remove in 3.0.
Do cleanup of RowCounter that tries to use this filter but actually doesn't.

  was:
The {{RowCounter}} in the {{mapreduce}} package is supposed to check if the row 
count scan has a column selection added to it, and if so, use a different 
filter that finds the row and counts it. But the {{qualifier.add()}} call is 
missing in the {{for}} loop. See 
https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/RowCounter.java#L165

Needs fixing or row count might be wrong when using {{--range}}.

  Component/s: (was: mapreduce)
   Issue Type: Improvement  (was: Bug)
  Summary: Deprecate FirstKeyValueMatchingQualifiersFilter  (was: 
RowCounter using special filter is broken)

> Deprecate FirstKeyValueMatchingQualifiersFilter
> ---
>
> Key: HBASE-13347
> URL: https://issues.apache.org/jira/browse/HBASE-13347
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Lars George
>Assignee: Abhishek Singh Chouhan
>Priority: Minor
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-13347-master.patch
>
>
> The {{RowCounter}} in the {{mapreduce}} package uses 
> {{FirstKeyValueMatchingQualifiersFilter}} which was introduced in HBASE-6468. 
> However we do not need that since we match columns in the scan before we 
> filter.
> Deprecate the filter in 2.0 and remove in 3.0.
> Do cleanup of RowCounter that tries to use this filter but actually doesn't.
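
A minimal sketch of the deprecation step being described, assuming the usual 
@Deprecated annotation plus a javadoc note (wording assumed, not the committed 
patch):
{code}
/**
 * @deprecated Deprecated in 2.0, to be removed in 3.0. RowCounter no longer needs
 *             this filter because columns are matched on the Scan before filtering.
 */
@Deprecated
public class FirstKeyValueMatchingQualifiersFilter extends FirstKeyOnlyFilter {
  // existing implementation left unchanged
}
{code}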



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14777) Fix Inter Cluster Replication Future ordering issues

2015-11-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015612#comment-15015612
 ] 

Hudson commented on HBASE-14777:


SUCCESS: Integrated in HBase-1.3 #384 (See 
[https://builds.apache.org/job/HBase-1.3/384/])
HBASE-14777 ADDENDUM Fix failing TestReplicationEndpoint test (busbey: rev 
f56c605e7389da1d2dcd2925d25be1fa972cb6cd)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationEndpoint.java


> Fix Inter Cluster Replication Future ordering issues
> 
>
> Key: HBASE-14777
> URL: https://issues.apache.org/jira/browse/HBASE-14777
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: Bhupendra Kumar Jain
>Assignee: Ashu Pachauri
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14777-1.patch, HBASE-14777-2.patch, 
> HBASE-14777-3.patch, HBASE-14777-4.patch, HBASE-14777-5.patch, 
> HBASE-14777-6.patch, HBASE-14777-addendum.patch, HBASE-14777.patch
>
>
> Replication fails with IndexOutOfBoundsException 
> {code}
> regionserver.ReplicationSource$ReplicationSourceWorkerThread(939): 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint
>  threw unknown exception:java.lang.IndexOutOfBoundsException: Index: 1, Size: 
> 1
>   at java.util.ArrayList.rangeCheck(Unknown Source)
>   at java.util.ArrayList.remove(Unknown Source)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint.replicate(HBaseInterClusterReplicationEndpoint.java:222)
> {code}
> It's happening due to incorrect removal of entries from the replication 
> entries list.
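
A generic, self-contained illustration of the failure mode described above (this 
is not the actual HBaseInterClusterReplicationEndpoint code, just a minimal 
reproduction of removing list entries by stale indices):
{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class StaleIndexRemoval {
  public static void main(String[] args) {
    List<String> batches = new ArrayList<String>(Arrays.asList("batch-0", "batch-1"));
    // Indices of completed batches, recorded before any removal happened.
    List<Integer> completed = Arrays.asList(0, 1);
    for (int idx : completed) {
      // After removing index 0 the list has size 1, so removing index 1 throws
      // java.lang.IndexOutOfBoundsException: Index: 1, Size: 1 -- the same shape
      // of error as in the stack trace above.
      batches.remove(idx);
    }
  }
}
{code}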



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14840) Sink cluster reports data replication request as success though the data is not replicated

2015-11-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015611#comment-15015611
 ] 

Hadoop QA commented on HBASE-14840:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12773485/HBASE-14840-branch-1.patch
  against branch-1 branch at commit 8dbbe96e040cdee2a94b2a0ac53462a5c8f5c233.
  ATTACHMENT ID: 12773485

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.procedure2.store.wal.TestWALProcedureStore

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16611//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16611//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16611//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16611//console

This message is automatically generated.

> Sink cluster reports data replication request as success though the data is 
> not replicated
> --
>
> Key: HBASE-14840
> URL: https://issues.apache.org/jira/browse/HBASE-14840
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2, 1.0.3, 0.98.16
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-14840-0.98.patch, HBASE-14840-branch-1.patch, 
> HBASE-14840.patch
>
>
> *Scenario:*
> Sink cluster is down
> Create a table and enable table replication
> Put some data
> Now restart the sink cluster
> *Observation:*
> Data is not replicated in sink cluster but still source cluster updates the 
> WAL log position in ZK, resulting in data loss in sink cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13347) Deprecate FirstKeyValueMatchingQualifiersFilter

2015-11-20 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015615#comment-15015615
 ] 

Anoop Sam John commented on HBASE-13347:


Please attach a trunk patch and a branch-1 patch. Will commit then.

> Deprecate FirstKeyValueMatchingQualifiersFilter
> ---
>
> Key: HBASE-13347
> URL: https://issues.apache.org/jira/browse/HBASE-13347
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Lars George
>Assignee: Abhishek Singh Chouhan
>Priority: Minor
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-13347-master.patch
>
>
> The {{RowCounter}} in the {{mapreduce}} package uses 
> {{FirstKeyValueMatchingQualifiersFilter}} which was introduced in HBASE-6468. 
> However we do not need that since we match columns in the scan before we 
> filter.
> Deprecate the filter in 2.0 and remove in 3.0.
> Do cleanup of RowCounter that tries to use this filter but actually doesn't.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14826) Small improvement in KVHeap seek() API

2015-11-20 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015616#comment-15015616
 ] 

ramkrishna.s.vasudevan commented on HBASE-14826:


In the current code, do we do that anywhere except when the scanner heap is 
set/reset? And at that time we do the seek() on the contents of the KVHeap 
directly - right? During the course of a scan we only use KVHeap.reseek().
Anyway, in case that could be used even in seek, we can do the pq.add(current) 
and pq.poll() inside that 'forward' check so that even that case is handled.

> Small improvement in KVHeap seek() API
> --
>
> Key: HBASE-14826
> URL: https://issues.apache.org/jira/browse/HBASE-14826
> Project: HBase
>  Issue Type: Improvement
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Minor
> Attachments: HBASE-14826.patch
>
>
> Currently in seek/reseek() APIs we tend to do lot of priorityqueue related 
> operations. We initially add the current scanner to the heap, then poll and 
> again add the scanner back if the seekKey is greater than the topkey in that 
> scanner. Since the KVs are always going to be in increasing order and in 
> ideal scan flow every seek/reseek is followed by a next() call it should be 
> ok if we start with checking the current scanner and then do a poll to get 
> the next scanner. Just avoid the initial PQ.add(current) call. This could 
> save some comparisons. 
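
A hedged sketch of the flow the description proposes, with names and types 
assumed rather than taken from the real KeyValueHeap; the point is checking the 
scanner we are already positioned on before paying for any PriorityQueue work:
{code}
// Illustration only, not the actual KeyValueHeap.seek():
boolean seekSketch(Cell seekKey) throws IOException {
  if (current == null) {
    return false;
  }
  Cell top = current.peek();
  // Cells from a scanner only move forward, and every scanner in the heap has a
  // top at or above current's top, so if current's top is already at or past
  // seekKey there is nothing to do: no pq.add(current) and no pq.poll().
  if (top != null && comparator.compare(seekKey, top) <= 0) {
    return true;
  }
  // Otherwise fall back to the usual flow: seek the current scanner and
  // re-balance the heap to pick the new lowest scanner.
  if (current.seek(seekKey)) {
    heap.add(current);
  } else {
    current.close();
  }
  current = heap.poll();
  return current != null;
}
{code}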



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14859) Better checkstyle reporting

2015-11-20 Thread Appy (JIRA)
Appy created HBASE-14859:


 Summary: Better checkstyle reporting
 Key: HBASE-14859
 URL: https://issues.apache.org/jira/browse/HBASE-14859
 Project: HBase
  Issue Type: Improvement
Reporter: Appy
Assignee: Appy


With additional checkstyles in place, I believe "-1 checkstyle" will fire more 
often now. Trying to make hunting down exact errors easier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14826) Small improvement in KVHeap seek() API

2015-11-20 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015620#comment-15015620
 ] 

Anoop Sam John commented on HBASE-14826:


Ya, in the real scan case we do seek, but even then it is actually forward-only. 
I was just asking in general, since we do have seek-back support here as such.

> Small improvement in KVHeap seek() API
> --
>
> Key: HBASE-14826
> URL: https://issues.apache.org/jira/browse/HBASE-14826
> Project: HBase
>  Issue Type: Improvement
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Minor
> Attachments: HBASE-14826.patch
>
>
> Currently in seek/reseek() APIs we tend to do lot of priorityqueue related 
> operations. We initially add the current scanner to the heap, then poll and 
> again add the scanner back if the seekKey is greater than the topkey in that 
> scanner. Since the KVs are always going to be in increasing order and in 
> ideal scan flow every seek/reseek is followed by a next() call it should be 
> ok if we start with checking the current scanner and then do a poll to get 
> the next scanner. Just avoid the initial PQ.add(current) call. This could 
> save some comparisons. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14860) Improve BoundedByteBufferPool

2015-11-20 Thread Hiroshi Ikeda (JIRA)
Hiroshi Ikeda created HBASE-14860:
-

 Summary: Improve BoundedByteBufferPool
 Key: HBASE-14860
 URL: https://issues.apache.org/jira/browse/HBASE-14860
 Project: HBase
  Issue Type: Improvement
Reporter: Hiroshi Ikeda
Assignee: Hiroshi Ikeda
Priority: Minor


Make it non-blocking.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14777) Fix Inter Cluster Replication Future ordering issues

2015-11-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015629#comment-15015629
 ] 

Hudson commented on HBASE-14777:


SUCCESS: Integrated in HBase-Trunk_matrix #484 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/484/])
HBASE-14777 ADDENDUM Fix failing TestReplicationEndpoint test (busbey: rev 
8dbbe96e040cdee2a94b2a0ac53462a5c8f5c233)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationEndpoint.java


> Fix Inter Cluster Replication Future ordering issues
> 
>
> Key: HBASE-14777
> URL: https://issues.apache.org/jira/browse/HBASE-14777
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: Bhupendra Kumar Jain
>Assignee: Ashu Pachauri
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14777-1.patch, HBASE-14777-2.patch, 
> HBASE-14777-3.patch, HBASE-14777-4.patch, HBASE-14777-5.patch, 
> HBASE-14777-6.patch, HBASE-14777-addendum.patch, HBASE-14777.patch
>
>
> Replication fails with IndexOutOfBoundsException 
> {code}
> regionserver.ReplicationSource$ReplicationSourceWorkerThread(939): 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint
>  threw unknown exception:java.lang.IndexOutOfBoundsException: Index: 1, Size: 
> 1
>   at java.util.ArrayList.rangeCheck(Unknown Source)
>   at java.util.ArrayList.remove(Unknown Source)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint.replicate(HBaseInterClusterReplicationEndpoint.java:222)
> {code}
> It's happening due to incorrect removal of entries from the replication 
> entries list.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14860) Improve BoundedByteBufferPool

2015-11-20 Thread Hiroshi Ikeda (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hiroshi Ikeda updated HBASE-14860:
--
Attachment: HBASE-14860.patch

Added a patch.

> Improve BoundedByteBufferPool
> -
>
> Key: HBASE-14860
> URL: https://issues.apache.org/jira/browse/HBASE-14860
> Project: HBase
>  Issue Type: Improvement
>Reporter: Hiroshi Ikeda
>Assignee: Hiroshi Ikeda
>Priority: Minor
> Attachments: HBASE-14860.patch
>
>
> Make it non-blocking.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14860) Improve BoundedByteBufferPool

2015-11-20 Thread Hiroshi Ikeda (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hiroshi Ikeda updated HBASE-14860:
--
Status: Patch Available  (was: Open)

> Improve BoundedByteBufferPool
> -
>
> Key: HBASE-14860
> URL: https://issues.apache.org/jira/browse/HBASE-14860
> Project: HBase
>  Issue Type: Improvement
>Reporter: Hiroshi Ikeda
>Assignee: Hiroshi Ikeda
>Priority: Minor
> Attachments: HBASE-14860.patch
>
>
> Make it non-blocking.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14840) Sink cluster reports data replication request as success though the data is not replicated

2015-11-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015655#comment-15015655
 ] 

Hudson commented on HBASE-14840:


SUCCESS: Integrated in HBase-1.2-IT #296 (See 
[https://builds.apache.org/job/HBase-1.2-IT/296/])
HBASE-14840 Sink cluster reports data replication request as success 
(ramkrishna: rev 3e2011e56034cd46a6b4fecc693389f1d15cb962)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestMasterReplication.java


> Sink cluster reports data replication request as success though the data is 
> not replicated
> --
>
> Key: HBASE-14840
> URL: https://issues.apache.org/jira/browse/HBASE-14840
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2, 1.0.3, 0.98.16
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-14840-0.98.patch, HBASE-14840-branch-1.patch, 
> HBASE-14840.patch
>
>
> *Scenario:*
> Sink cluster is down
> Create a table and enable table replication
> Put some data
> Now restart the sink cluster
> *Observation:*
> Data is not replicated in sink cluster but still source cluster updates the 
> WAL log position in ZK, resulting in data loss in sink cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13347) Deprecate FirstKeyValueMatchingQualifiersFilter

2015-11-20 Thread Abhishek Singh Chouhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Singh Chouhan updated HBASE-13347:
---
Attachment: HBASE-13347-master-v2.patch

Patch for master that deprecates the special filter and also cleans up 
RowCounter.

> Deprecate FirstKeyValueMatchingQualifiersFilter
> ---
>
> Key: HBASE-13347
> URL: https://issues.apache.org/jira/browse/HBASE-13347
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Lars George
>Assignee: Abhishek Singh Chouhan
>Priority: Minor
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-13347-master-v2.patch, HBASE-13347-master.patch
>
>
> The {{RowCounter}} in the {{mapreduce}} package uses 
> {{FirstKeyValueMatchingQualifiersFilter}} which was introduced in HBASE-6468. 
> However we do not need that since we match columns in the scan before we 
> filter.
> Deprecate the filter in 2.0 and remove in 3.0.
> Do cleanup of RowCounter that tries to use this filter but actually doesn't.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14826) Small improvement in KVHeap seek() API

2015-11-20 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15016227#comment-15016227
 ] 

Anoop Sam John commented on HBASE-14826:


Ya, with every next call and seek call we move the internal scanner's position, 
and 'current' is the one with the lowest cell. So if we add the current back to 
the heap again and poll the heap, we will get back the current only. The change 
looks good then.

> Small improvement in KVHeap seek() API
> --
>
> Key: HBASE-14826
> URL: https://issues.apache.org/jira/browse/HBASE-14826
> Project: HBase
>  Issue Type: Improvement
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Minor
> Attachments: HBASE-14826.patch
>
>
> Currently in seek/reseek() APIs we tend to do lot of priorityqueue related 
> operations. We initially add the current scanner to the heap, then poll and 
> again add the scanner back if the seekKey is greater than the topkey in that 
> scanner. Since the KVs are always going to be in increasing order and in 
> ideal scan flow every seek/reseek is followed by a next() call it should be 
> ok if we start with checking the current scanner and then do a poll to get 
> the next scanner. Just avoid the initial PQ.add(current) call. This could 
> save some comparisons. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13347) Deprecate FirstKeyValueMatchingQualifiersFilter

2015-11-20 Thread Abhishek Singh Chouhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Singh Chouhan updated HBASE-13347:
---
Attachment: HBASE-13347-branch-1.patch

Patch for branch-1 that does the RowCounter cleanup.

> Deprecate FirstKeyValueMatchingQualifiersFilter
> ---
>
> Key: HBASE-13347
> URL: https://issues.apache.org/jira/browse/HBASE-13347
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Lars George
>Assignee: Abhishek Singh Chouhan
>Priority: Minor
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-13347-branch-1.patch, HBASE-13347-master-v2.patch, 
> HBASE-13347-master.patch
>
>
> The {{RowCounter}} in the {{mapreduce}} package uses 
> {{FirstKeyValueMatchingQualifiersFilter}} which was introduced in HBASE-6468. 
> However we do not need that since we match columns in the scan before we 
> filter.
> Deprecate the filter in 2.0 and remove in 3.0.
> Do cleanup of RowCounter that tries to use this filter but actually doesn't.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14223) Meta WALs are not cleared if meta region was closed and RS aborts

2015-11-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15016701#comment-15016701
 ] 

Hadoop QA commented on HBASE-14223:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12773478/hbase-14223_v3-branch-1.patch
  against branch-1 branch at commit 8dbbe96e040cdee2a94b2a0ac53462a5c8f5c233.
  ATTACHMENT ID: 12773478

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 19 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
3782 checkstyle errors (more than the master's current 3779 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16610//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16610//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16610//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16610//console

This message is automatically generated.

> Meta WALs are not cleared if meta region was closed and RS aborts
> -
>
> Key: HBASE-14223
> URL: https://issues.apache.org/jira/browse/HBASE-14223
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.4, 1.0.4
>
> Attachments: HBASE-14223logs, hbase-14223_v0.patch, 
> hbase-14223_v1-branch-1.patch, hbase-14223_v2-branch-1.patch, 
> hbase-14223_v3-branch-1.patch, hbase-14223_v3-branch-1.patch
>
>
> When an RS opens meta, and later closes it, the WAL(FSHlog) is not closed. 
> The last WAL file just sits there in the RS WAL directory. If RS stops 
> gracefully, the WAL file for meta is deleted. Otherwise if RS aborts, WAL for 
> meta is not cleaned. It is also not split (which is correct) since master 
> determines that the RS no longer hosts meta at the time of RS abort. 
> From a cluster after running ITBLL with CM, I see a lot of {{-splitting}} 
> directories left uncleaned: 
> {code}
> [root@os-enis-dal-test-jun-4-7 cluster-os]# sudo -u hdfs hadoop fs -ls 
> /apps/hbase/data/WALs
> Found 31 items
> drwxr-xr-x   - hbase hadoop  0 2015-06-05 01:14 
> /apps/hbase/data/WALs/hregion-58203265
> drwxr-xr-x   - hbase hadoop  0 2015-06-05 07:54 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-1.openstacklocal,16020,1433489308745-splitting
> drwxr-xr-x   - hbase hadoop  0 2015-06-05 09:28 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-1.openstacklocal,16020,1433494382959-splitting
> drwxr-xr-x   - hbase hadoop  0 2015-06-05 10:01 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-1.openstacklocal,16020,1433498252205-splitting
> ...
> {code}
> The directories contain WALs from meta: 
> {code}
> [root@os-enis-dal-test-jun-4-7 cluster-os]# sudo -u hdfs hadoop fs -ls 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285-splitting
> Found 2 items
> -rw-r--r--   3 hbase hadoop 201608 2015-06-05 03:15 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285-splitting/os-enis-dal-test-jun-4-5.openstacklocal%2C16020%2C1433466904285..meta.1433470511501.meta
> -rw-r--r--   3 hbase hadoop  44420 2015-06-05 04:36 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285-splitting/os-enis-dal-test-jun-4-5.openstacklocal%2C16020%2C1433466904285..meta.1433474111645.meta
> {code}
> The RS hosted the meta region for some time: 
> {code}
> 2015-06-05 03:14:28,692 INFO  [PostOpenDeployTasks:1588230740] 
> zookeep

[jira] [Commented] (HBASE-14840) Sink cluster reports data replication request as success though the data is not replicated

2015-11-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15017742#comment-15017742
 ] 

Hudson commented on HBASE-14840:


FAILURE: Integrated in HBase-0.98-matrix #263 (See 
[https://builds.apache.org/job/HBase-0.98-matrix/263/])
HBASE-14840 Sink cluster reports data replication request as success 
(ramkrishna: rev ec66983a9231e7e537a087f2e024a5983fbfcc36)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestMasterReplication.java


> Sink cluster reports data replication request as success though the data is 
> not replicated
> --
>
> Key: HBASE-14840
> URL: https://issues.apache.org/jira/browse/HBASE-14840
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2, 1.0.3, 0.98.16
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-14840-0.98.patch, HBASE-14840-branch-1.patch, 
> HBASE-14840.patch
>
>
> *Scenario:*
> Sink cluster is down
> Create a table and enable table replication
> Put some data
> Now restart the sink cluster
> *Observation:*
> Data is not replicated in sink cluster but still source cluster updates the 
> WAL log position in ZK, resulting in data loss in sink cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14839) [branch-1] Backport test categories so that patch backport is easier

2015-11-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15017744#comment-15017744
 ] 

Hudson commented on HBASE-14839:


FAILURE: Integrated in HBase-0.98-matrix #263 (See 
[https://builds.apache.org/job/HBase-0.98-matrix/263/])
HBASE-14839 [branch-1] Backport test categories so that patch backport (enis: 
rev af46b759be881a216639e5a43ebc843a8a680657)
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/IOTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/VerySlowRegionServerTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/MapReduceTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/ClientTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/FlakeyTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/RPCTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/MiscTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/MasterTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/CoprocessorTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/RegionServerTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/RestTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/FilterTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/SecurityTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/VerySlowMapReduceTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/ReplicationTests.java


> [branch-1] Backport test categories so that patch backport is easier
> 
>
> Key: HBASE-14839
> URL: https://issues.apache.org/jira/browse/HBASE-14839
> Project: HBase
>  Issue Type: Test
>  Components: test
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: hbase-14839-branch-1.patch
>
>
> Test categories are in master and new unit tests are sometimes marked with 
> that particular interface ( {{RPCTests.class}} ). 
> Since we don't have the specific annotation classes in branch-1, backports 
> usually fail. We can just commit those classes to all applicable branches so 
> that committing patches is less work. 
> We can also backport the full patch for running the specific tests from maven 
> as a further issue. Feel free to take it up, if you are interested. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14761) Deletes with and without visibility expression do not delete the matching mutation

2015-11-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15017743#comment-15017743
 ] 

Hudson commented on HBASE-14761:


FAILURE: Integrated in HBase-0.98-matrix #263 (See 
[https://builds.apache.org/job/HBase-0.98-matrix/263/])
HBASE-14761 - Addendum for 0.98 (ram) (ramkrishna: rev 
e7136813d328f247903f30418574d9378aaba4c7)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsWithDeletes.java


> Deletes with and without visibility expression do not delete the matching 
> mutation
> --
>
> Key: HBASE-14761
> URL: https://issues.apache.org/jira/browse/HBASE-14761
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1.0.0, 1.0.1, 1.1.0, 1.0.2, 1.1.2, 0.98.15
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0, 1.2.0, 1.3.0, 0.98.17
>
> Attachments: HBASE-14761.patch, HBASE-14761_0.98_addendum.patch
>
>
> This is from the user list as reported by Anoop Sharma
> {code}
>  running into an issue related to visibility expressions and delete.
> Example run from hbase shell is listed below.
> Will appreciate any help on this issue.
> thanks.
> In the example below, user running queries has ‘MANAGER’ authorization.
> *First example:*
>   add a column with visib expr ‘MANAGER’
>   delete it by passing in visibility of ‘MANAGER’
>   This works and scan doesn’t return anything.
> *Second example:*
>   add a column with visib expr ‘MANAGER’
>   delete it by not passing in any visibility.
>   This doesn’t delete the column.
>   Scan doesn’t return the row but RAW scan shows the column
>   marked as deleteColumn.
>   Now if delete is done again with visibility of ‘MANAGER’,
>   it still doesn’t delete it and scan returns the original column.
> *Example 1:*
> hbase(main):096:0> create 'HBT1', 'cf'
> hbase(main):098:0* *put 'HBT1', 'John', 'cf:a', 'CA',
> {VISIBILITY=>'MANAGER'}*
> hbase(main):099:0> *scan 'HBT1'*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446154722055,
> value=CA
> 1 row(s) in 0.0030 seconds
> hbase(main):100:0> *delete 'HBT1', 'John', 'cf:a', {VISIBILITY=>'MANAGER'}*
> 0 row(s) in 0.0030 seconds
> hbase(main):101:0> *scan 'HBT1'*
> ROW
> COLUMN+CELL
> 0 row(s) in 0.0030 seconds
> *Example 2:*
> hbase(main):010:0* *put 'HBT1', 'John', 'cf:a', 'CA',
> {VISIBILITY=>'MANAGER'}*
> 0 row(s) in 0.0040 seconds
> hbase(main):011:0> *scan 'HBT1'*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446155346473,
> value=CA
> 1 row(s) in 0.0060 seconds
> hbase(main):012:0> *delete 'HBT1', 'John', 'cf:a'*
> 0 row(s) in 0.0090 seconds
> hbase(main):013:0> *scan 'HBT1'*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446155346473,
> value=CA
> 1 row(s) in 0.0050 seconds
> hbase(main):014:0> *scan 'HBT1', {RAW => true}*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446155346519,
> type=DeleteColumn
> 1 row(s) in 0.0060 seconds
> hbase(main):015:0> *delete 'HBT1', 'John', 'cf:a', {VISIBILITY=>'MANAGER'}*
> 0 row(s) in 0.0030 seconds
> hbase(main):016:0> *scan 'HBT1'*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446155346473,
> value=CA
> 1 row(s) in 0.0040 seconds
> hbase(main):017:0> *scan 'HBT1', {RAW => true}*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446155346601,
> type=DeleteColumn
> 1 row(s) in 0.0060 seconds
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14861) HBASE_ZNODE_FILE on master server is overwritten by regionserver process in case of master-rs collocation

2015-11-20 Thread Samir Ahmic (JIRA)
Samir Ahmic created HBASE-14861:
---

 Summary: HBASE_ZNODE_FILE on master server is overwritten by 
regionserver process in case of master-rs collocation 
 Key: HBASE-14861
 URL: https://issues.apache.org/jira/browse/HBASE-14861
 Project: HBase
  Issue Type: Bug
  Components: Operability
Affects Versions: 2.0.0
Reporter: Samir Ahmic
Assignee: Samir Ahmic


In case of master-rs collocation, HBASE_ZNODE_FILE is overwritten by the 
regionserver process in HRegionServer#handleReportForDutyResponse(). Here is how 
it looks on the master server:
{code}
[hbase@hnode2 hbase]$ cat hbase-hbase-master.znode 
/hbase/rs/hnode2,16000,1448022074888
{code}
It contains the regionserver znode path instead of the String value of the 
master's ServerName. This affects ZNodeClearer#clear() in a way that it will not 
clear the master znode when we detect a master crash. In the end this will extend 
failover time until the master znode expires, as configured in zookeeper by the 
maxSessionTimeout parameter (40s in my case).
I have noticed this on the master branch but it can be the case in other branches 
where we are collocating master and rs.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13984) Add option to allow caller to know the heartbeat and scanner position when scanner timeout

2015-11-20 Thread He Liangliang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Liangliang updated HBASE-13984:
--
Attachment: HBASE-13984-V5.diff

Added tests for all cases in TestFromClientSide.

> Add option to allow caller to know the heartbeat and scanner position when 
> scanner timeout
> --
>
> Key: HBASE-13984
> URL: https://issues.apache.org/jira/browse/HBASE-13984
> Project: HBase
>  Issue Type: Improvement
>  Components: Scanners
>Reporter: He Liangliang
>Assignee: He Liangliang
> Attachments: HBASE-13984-V1.diff, HBASE-13984-V2.diff, 
> HBASE-13984-V3.diff, HBASE-13984-V3.patch, HBASE-13984-V4.diff, 
> HBASE-13984-V5.diff
>
>
> HBASE-13090 introduced scanner heartbeat. However, there are still some 
> limitations (see HBASE-13215). In some applications, for example, an operation 
> accesses hbase to scan table data, and there is a strict limit that the call 
> must return within a fixed interval. At the same time, the call is stateless, 
> so it must return the next position from which to continue the scan. This is a 
> typical use case for online applications.
> Based on this requirement, some improvements are proposed:
> 1. Allow the client to set a flag for whether to pass the heartbeat (a result 
> containing the scanner position) to the caller (via ResultScanner next)
> 2. Allow the client to pass a timeout to the server, which can override the 
> server side default value
> 3. When requested by the client, the server peeks the next cell and returns it 
> to the client in the heartbeat message



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13984) Add option to allow caller to know the heartbeat and scanner position when scanner timeout

2015-11-20 Thread He Liangliang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018030#comment-15018030
 ] 

He Liangliang commented on HBASE-13984:
---

There is one case in TestScannerHeartbeatMessages, but I added more in the new 
patch.

> Add option to allow caller to know the heartbeat and scanner position when 
> scanner timeout
> --
>
> Key: HBASE-13984
> URL: https://issues.apache.org/jira/browse/HBASE-13984
> Project: HBase
>  Issue Type: Improvement
>  Components: Scanners
>Reporter: He Liangliang
>Assignee: He Liangliang
> Attachments: HBASE-13984-V1.diff, HBASE-13984-V2.diff, 
> HBASE-13984-V3.diff, HBASE-13984-V3.patch, HBASE-13984-V4.diff, 
> HBASE-13984-V5.diff
>
>
> HBASE-13090 introduced scanner heartbeat. However, there are still some 
> limitations (see HBASE-13215). In some applications, for example, an operation 
> accesses hbase to scan table data, and there is a strict limit that the call 
> must return within a fixed interval. At the same time, the call is stateless, 
> so it must return the next position from which to continue the scan. This is a 
> typical use case for online applications.
> Based on this requirement, some improvements are proposed:
> 1. Allow the client to set a flag for whether to pass the heartbeat (a result 
> containing the scanner position) to the caller (via ResultScanner next)
> 2. Allow the client to pass a timeout to the server, which can override the 
> server side default value
> 3. When requested by the client, the server peeks the next cell and returns it 
> to the client in the heartbeat message



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14840) Sink cluster reports data replication request as success though the data is not replicated

2015-11-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018050#comment-15018050
 ] 

Hudson commented on HBASE-14840:


SUCCESS: Integrated in HBase-1.2 #390 (See 
[https://builds.apache.org/job/HBase-1.2/390/])
HBASE-14840 Sink cluster reports data replication request as success 
(ramkrishna: rev 3e2011e56034cd46a6b4fecc693389f1d15cb962)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestMasterReplication.java


> Sink cluster reports data replication request as success though the data is 
> not replicated
> --
>
> Key: HBASE-14840
> URL: https://issues.apache.org/jira/browse/HBASE-14840
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2, 1.0.3, 0.98.16
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-14840-0.98.patch, HBASE-14840-branch-1.patch, 
> HBASE-14840.patch
>
>
> *Scenario:*
> Sink cluster is down
> Create a table and enable table replication
> Put some data
> Now restart the sink cluster
> *Observation:*
> Data is not replicated in sink cluster but still source cluster updates the 
> WAL log position in ZK, resulting in data loss in sink cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13347) Deprecate FirstKeyValueMatchingQualifiersFilter

2015-11-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018098#comment-15018098
 ] 

Hadoop QA commented on HBASE-13347:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12773498/HBASE-13347-branch-1.patch
  against branch-1 branch at commit 86be690b0723e814a655ad0ae8a6577d7111c1f2.
  ATTACHMENT ID: 12773498

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16614//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16614//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16614//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16614//console

This message is automatically generated.

> Deprecate FirstKeyValueMatchingQualifiersFilter
> ---
>
> Key: HBASE-13347
> URL: https://issues.apache.org/jira/browse/HBASE-13347
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Lars George
>Assignee: Abhishek Singh Chouhan
>Priority: Minor
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-13347-branch-1.patch, HBASE-13347-master-v2.patch, 
> HBASE-13347-master.patch
>
>
> The {{RowCounter}} in the {{mapreduce}} package uses 
> {{FirstKeyValueMatchingQualifiersFilter}} which was introduced in HBASE-6468. 
> However we do not need that since we match columns in the scan before we 
> filter.
> Deprecate the filter in 2.0 and remove in 3.0.
> Do cleanup of RowCounter that tries to use this filter but actually doesn't.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14840) Sink cluster reports data replication request as success though the data is not replicated

2015-11-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018102#comment-15018102
 ] 

Hudson commented on HBASE-14840:


FAILURE: Integrated in HBase-1.3 #385 (See 
[https://builds.apache.org/job/HBase-1.3/385/])
HBASE-14840 Sink cluster reports data replication request as success 
(ramkrishna: rev 11b213101345c9485b3abe12f3142617c90bc692)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestMasterReplication.java


> Sink cluster reports data replication request as success though the data is 
> not replicated
> --
>
> Key: HBASE-14840
> URL: https://issues.apache.org/jira/browse/HBASE-14840
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2, 1.0.3, 0.98.16
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-14840-0.98.patch, HBASE-14840-branch-1.patch, 
> HBASE-14840.patch
>
>
> *Scenario:*
> Sink cluster is down
> Create a table and enable table replication
> Put some data
> Now restart the sink cluster
> *Observation:*
> Data is not replicated in sink cluster but still source cluster updates the 
> WAL log position in ZK, resulting in data loss in sink cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-14719) Add metric for number of MasterProcWALs

2015-11-20 Thread Vrishal Kulkarni (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrishal Kulkarni reassigned HBASE-14719:


Assignee: Vrishal Kulkarni

> Add metric for number of MasterProcWALs
> ---
>
> Key: HBASE-14719
> URL: https://issues.apache.org/jira/browse/HBASE-14719
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Vrishal Kulkarni
> Fix For: 2.0.0, 1.3.0
>
>
> Lets add monitoring to this so that we can see when it starts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14719) Add metric for number of MasterProcWALs

2015-11-20 Thread Vrishal Kulkarni (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrishal Kulkarni updated HBASE-14719:
-
Status: Patch Available  (was: Open)

HBASE-14719 Add metrics for master WAL count and WAL size.
JMX output for the master:
{code}
{
  "name" : "Hadoop:service=HBase,name=Master,sub=Server",
  "modelerType" : "Master,sub=Server",
  "tag.liveRegionServers" : 
    "192.168.0.105,62778,1448031609644;192.168.0.105,62780,1448031610088",
  "tag.deadRegionServers" : "",
  "tag.zookeeperQuorum" : "localhost:2181",
  "tag.serverName" : "192.168.0.105,62778,1448031609644",
  "tag.clusterId" : "f7a5e799-bfc4-41ad-9c99-224ae1c31508",
  "tag.isActiveMaster" : "true",
  "tag.Context" : "master",
  "tag.Hostname" : "vrishal-mbp",
  "masterActiveTime" : 1448031610075,
  "masterStartTime" : 1448031609644,
  "averageLoad" : 0.0,
  "numRegionServers" : 2,
  "numDeadRegionServers" : 0,
  "numMasterWALs" : 1,
  "masterWALSize" : 0,
  "clusterRequests" : 0
}
{code}

> Add metric for number of MasterProcWALs
> ---
>
> Key: HBASE-14719
> URL: https://issues.apache.org/jira/browse/HBASE-14719
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Vrishal Kulkarni
> Fix For: 2.0.0, 1.3.0
>
>
> Lets add monitoring to this so that we can see when it starts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14860) Improve BoundedByteBufferPool

2015-11-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018128#comment-15018128
 ] 

Hadoop QA commented on HBASE-14860:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12773494/HBASE-14860.patch
  against master branch at commit 86be690b0723e814a655ad0ae8a6577d7111c1f2.
  ATTACHMENT ID: 12773494

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16612//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16612//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16612//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16612//console

This message is automatically generated.

> Improve BoundedByteBufferPool
> -
>
> Key: HBASE-14860
> URL: https://issues.apache.org/jira/browse/HBASE-14860
> Project: HBase
>  Issue Type: Improvement
>Reporter: Hiroshi Ikeda
>Assignee: Hiroshi Ikeda
>Priority: Minor
> Attachments: HBASE-14860.patch
>
>
> Make it unblocking.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14840) Sink cluster reports data replication request as success though the data is not replicated

2015-11-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018172#comment-15018172
 ] 

Hudson commented on HBASE-14840:


FAILURE: Integrated in HBase-Trunk_matrix #485 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/485/])
HBASE-14840 Sink cluster reports data replication request as success 
(ramkrishna: rev 86be690b0723e814a655ad0ae8a6577d7111c1f2)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestMasterReplication.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java


> Sink cluster reports data replication request as success though the data is 
> not replicated
> --
>
> Key: HBASE-14840
> URL: https://issues.apache.org/jira/browse/HBASE-14840
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2, 1.0.3, 0.98.16
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-14840-0.98.patch, HBASE-14840-branch-1.patch, 
> HBASE-14840.patch
>
>
> *Scenario:*
> Sink cluster is down
> Create a table and enable table replication
> Put some data
> Now restart the sink cluster
> *Observance:*
> Data is not replicated in sink cluster but still source cluster updates the 
> WAL log position in ZK, resulting in data loss in sink cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14859) Better checkstyle reporting

2015-11-20 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018189#comment-15018189
 ] 

Sean Busbey commented on HBASE-14859:
-

We'll get this for free once Yetus has a release and we can transition over to 
it.

> Better checkstyle reporting
> ---
>
> Key: HBASE-14859
> URL: https://issues.apache.org/jira/browse/HBASE-14859
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
>
> With additional checkstyles in place, I believe "-1 checkstyle" will fire 
> more often now. Trying to make hunting down exact errors easier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14840) Sink cluster reports data replication request as success though the data is not replicated

2015-11-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018201#comment-15018201
 ] 

Hudson commented on HBASE-14840:


SUCCESS: Integrated in HBase-1.1-JDK8 #1689 (See 
[https://builds.apache.org/job/HBase-1.1-JDK8/1689/])
HBASE-14840 Sink cluster reports data replication request as success 
(ramkrishna: rev 358178771716c08efaa03f1191351814f37002a5)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestMasterReplication.java


> Sink cluster reports data replication request as success though the data is 
> not replicated
> --
>
> Key: HBASE-14840
> URL: https://issues.apache.org/jira/browse/HBASE-14840
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2, 1.0.3, 0.98.16
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-14840-0.98.patch, HBASE-14840-branch-1.patch, 
> HBASE-14840.patch
>
>
> *Scenario:*
> Sink cluster is down
> Create a table and enable table replication
> Put some data
> Now restart the sink cluster
> *Observance:*
> Data is not replicated in sink cluster but still source cluster updates the 
> WAL log position in ZK, resulting in data loss in sink cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14840) Sink cluster reports data replication request as success though the data is not replicated

2015-11-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018206#comment-15018206
 ] 

Hudson commented on HBASE-14840:


FAILURE: Integrated in HBase-1.1-JDK7 #1601 (See 
[https://builds.apache.org/job/HBase-1.1-JDK7/1601/])
HBASE-14840 Sink cluster reports data replication request as success 
(ramkrishna: rev 358178771716c08efaa03f1191351814f37002a5)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestMasterReplication.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java


> Sink cluster reports data replication request as success though the data is 
> not replicated
> --
>
> Key: HBASE-14840
> URL: https://issues.apache.org/jira/browse/HBASE-14840
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2, 1.0.3, 0.98.16
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-14840-0.98.patch, HBASE-14840-branch-1.patch, 
> HBASE-14840.patch
>
>
> *Scenario:*
> Sink cluster is down
> Create a table and enable table replication
> Put some data
> Now restart the sink cluster
> *Observance:*
> Data is not replicated in sink cluster but still source cluster updates the 
> WAL log position in ZK, resulting in data loss in sink cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13347) Deprecate FirstKeyValueMatchingQualifiersFilter

2015-11-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018225#comment-15018225
 ] 

Hadoop QA commented on HBASE-13347:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12773497/HBASE-13347-master-v2.patch
  against master branch at commit 86be690b0723e814a655ad0ae8a6577d7111c1f2.
  ATTACHMENT ID: 12773497

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.regionserver.TestWALLockup

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16613//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16613//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16613//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16613//console

This message is automatically generated.

> Deprecate FirstKeyValueMatchingQualifiersFilter
> ---
>
> Key: HBASE-13347
> URL: https://issues.apache.org/jira/browse/HBASE-13347
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Lars George
>Assignee: Abhishek Singh Chouhan
>Priority: Minor
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-13347-branch-1.patch, HBASE-13347-master-v2.patch, 
> HBASE-13347-master.patch
>
>
> The {{RowCounter}} in the {{mapreduce}} package uses 
> {{FirstKeyValueMatchingQualifiersFilter}} which was introduced in HBASE-6468. 
> However we do not need that since we match columns in the scan before we 
> filter.
> Deprecate the filter in 2.0 and remove in 3.0.
> Do cleanup of RowCounter that tries to use this filter but actually doesn't.
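
For context, a hedged sketch of the row-counting pattern the description 
alludes to: the column restriction lives on the Scan itself and the plain 
FirstKeyOnlyFilter is enough, so FirstKeyValueMatchingQualifiersFilter is not 
needed. Table and column names are placeholders, and this is not the actual 
RowCounter code.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class SimpleRowCount {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("HBT1"))) {  // placeholder table
      Scan scan = new Scan();
      // Column matching is expressed on the Scan itself...
      scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("a"));
      // ...so a plain FirstKeyOnlyFilter suffices to skip the rest of each row.
      scan.setFilter(new FirstKeyOnlyFilter());
      long rows = 0;
      try (ResultScanner scanner = table.getScanner(scan)) {
        for (Result ignored : scanner) {
          rows++;
        }
      }
      System.out.println("rows=" + rows);
    }
  }
}
{code}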



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14840) Sink cluster reports data replication request as success though the data is not replicated

2015-11-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018238#comment-15018238
 ] 

Hudson commented on HBASE-14840:


FAILURE: Integrated in HBase-1.0 #1115 (See 
[https://builds.apache.org/job/HBase-1.0/1115/])
HBASE-14840 Sink cluster reports data replication request as success 
(ramkrishna: rev cf06a060d6d1e47501c7d7791dff42a37494deff)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestMasterReplication.java


> Sink cluster reports data replication request as success though the data is 
> not replicated
> --
>
> Key: HBASE-14840
> URL: https://issues.apache.org/jira/browse/HBASE-14840
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2, 1.0.3, 0.98.16
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-14840-0.98.patch, HBASE-14840-branch-1.patch, 
> HBASE-14840.patch
>
>
> *Scenario:*
> Sink cluster is down
> Create a table and enable table replication
> Put some data
> Now restart the sink cluster
> *Observance:*
> Data is not replicated in sink cluster but still source cluster updates the 
> WAL log position in ZK, resulting in data loss in sink cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14839) [branch-1] Backport test categories so that patch backport is easier

2015-11-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018239#comment-15018239
 ] 

Hudson commented on HBASE-14839:


FAILURE: Integrated in HBase-1.0 #1115 (See 
[https://builds.apache.org/job/HBase-1.0/1115/])
HBASE-14839 [branch-1] Backport test categories so that patch backport (enis: 
rev 1d72de5948162f76ace155f743bea01f77ee31aa)
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/FilterTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/SecurityTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/CoprocessorTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/RegionServerTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/ClientTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/RPCTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/VerySlowMapReduceTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/MasterTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/IOTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/ReplicationTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/VerySlowRegionServerTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/MapReduceTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/MiscTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/RestTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/FlakeyTests.java


> [branch-1] Backport test categories so that patch backport is easier
> 
>
> Key: HBASE-14839
> URL: https://issues.apache.org/jira/browse/HBASE-14839
> Project: HBase
>  Issue Type: Test
>  Components: test
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: hbase-14839-branch-1.patch
>
>
> Test categories are in master and new unit tests are sometimes marked with 
> that particular interface ( {{RPCTests.class}} ). 
> Since we don't have the specific annotation classes in branch-1, backports 
> usually fail. We can just commit those classes to all applicable branches so 
> that committing patches is less work. 
> We can also backport the full patch for running the specific tests from maven 
> as a further issue. Feel free to take it up, if you are interested. 
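
For illustration, a hedged sketch of how one of the backported markers is 
applied; the test class name here is invented, and the marker interfaces 
themselves (RPCTests and friends) are just empty interfaces in the 
testclassification package, which is exactly what the backport adds to 
branch-1.

{code}
import org.apache.hadoop.hbase.testclassification.RPCTests;
import org.junit.Test;
import org.junit.experimental.categories.Category;

// Invented test class, for illustration only. With the empty marker interfaces
// present on branch-1, annotations like this compile there unchanged, so
// backported patches no longer need editing.
@Category(RPCTests.class)
public class TestSomeRpcFeature {

  @Test
  public void testSomething() {
    // test body omitted
  }
}
{code}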



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14719) Add metric for number of MasterProcWALs

2015-11-20 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018257#comment-15018257
 ] 

Matteo Bertozzi commented on HBASE-14719:
-

the patch doesn't show up, can you try to attach it again?

> Add metric for number of MasterProcWALs
> ---
>
> Key: HBASE-14719
> URL: https://issues.apache.org/jira/browse/HBASE-14719
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Vrishal Kulkarni
> Fix For: 2.0.0, 1.3.0
>
>
> Lets add monitoring to this so that we can see when it starts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13984) Add option to allow caller to know the heartbeat and scanner position when scanner timeout

2015-11-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018315#comment-15018315
 ] 

Hadoop QA commented on HBASE-13984:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12773509/HBASE-13984-V5.diff
  against master branch at commit 86be690b0723e814a655ad0ae8a6577d7111c1f2.
  ATTACHMENT ID: 12773509

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 20 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
18699 checkstyle errors (more than the master's current 18690 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+  "\002 \001(\004\022\024\n\014more_results\030\003 
\001(\010\022\013\n\003ttl\030\004 \001(\r" +
+  new java.lang.String[] { "Region", "Scan", "ScannerId", 
"NumberOfRows", "CloseScanner", "NextCallSeq", "ClientHandlesPartials", 
"ClientHandlesHeartbeats", "TrackScanMetrics", "Timeout", 
"HeartbeatReturnNext", });
+  new java.lang.String[] { "CellsPerResult", "ScannerId", 
"MoreResults", "Ttl", "Results", "Stale", "PartialFlagPerResult", 
"MoreResultsInRegion", "HeartbeatMessage", "ScanMetrics", "HeartbeatNext", });
+  "name\030\005 
\001(\t\"\032\n\004Type\022\t\n\005HFILE\020\001\022\007\n\003WAL\020\002\"\323"
 +

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16615//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16615//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16615//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16615//console

This message is automatically generated.

> Add option to allow caller to know the heartbeat and scanner position when 
> scanner timeout
> --
>
> Key: HBASE-13984
> URL: https://issues.apache.org/jira/browse/HBASE-13984
> Project: HBase
>  Issue Type: Improvement
>  Components: Scanners
>Reporter: He Liangliang
>Assignee: He Liangliang
> Attachments: HBASE-13984-V1.diff, HBASE-13984-V2.diff, 
> HBASE-13984-V3.diff, HBASE-13984-V3.patch, HBASE-13984-V4.diff, 
> HBASE-13984-V5.diff
>
>
> HBASE-13090 introduced scanner heartbeat. However, there are still some 
> limitations (see HBASE-13215). In some application, for example, an operation 
> access hbase to scan table data, and there is strict limit that this call 
> must return in a fixed interval. At the same time, this call is stateless, so 
> the call must return the next position to continue the scan. This is typical 
> use case for online applications.
> Based on this requirement, some improvements are proposed:
> 1. Allow client set a flag whether pass the heartbeat (a result contains the 
> scanner position) to the caller (via ResultScanner next)
> 2. Allow the client pass a timeout to the server, which can override the 
> server side default value
> 3. When requested by the client, the server peek the next cell and return to 
> the client in the heartbeat message



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14719) Add metric for number of MasterProcWALs

2015-11-20 Thread Vrishal Kulkarni (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrishal Kulkarni updated HBASE-14719:
-
Attachment: diff.txt

> Add metric for number of MasterProcWALs
> ---
>
> Key: HBASE-14719
> URL: https://issues.apache.org/jira/browse/HBASE-14719
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Vrishal Kulkarni
> Fix For: 2.0.0, 1.3.0
>
> Attachments: diff.txt
>
>
> Lets add monitoring to this so that we can see when it starts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14030) HBase Backup/Restore Phase 1

2015-11-20 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018327#comment-15018327
 ] 

Vladimir Rodionov commented on HBASE-14030:
---

Put the latest patch on review board:
https://reviews.apache.org/r/36591/

> HBase Backup/Restore Phase 1
> 
>
> Key: HBASE-14030
> URL: https://issues.apache.org/jira/browse/HBASE-14030
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Attachments: HBASE-14030-v0.patch, HBASE-14030-v1.patch, 
> HBASE-14030-v10.patch, HBASE-14030-v11.patch, HBASE-14030-v12.patch, 
> HBASE-14030-v13.patch, HBASE-14030-v14.patch, HBASE-14030-v15.patch, 
> HBASE-14030-v17.patch, HBASE-14030-v18.patch, HBASE-14030-v2.patch, 
> HBASE-14030-v3.patch, HBASE-14030-v4.patch, HBASE-14030-v5.patch, 
> HBASE-14030-v6.patch, HBASE-14030-v7.patch, HBASE-14030-v8.patch
>
>
> This is the umbrella ticket for Backup/Restore Phase 1. See HBASE-7912 design 
> doc for the phase description.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14719) Add metric for number of MasterProcWALs

2015-11-20 Thread Vrishal Kulkarni (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrishal Kulkarni updated HBASE-14719:
-
Attachment: (was: diff.txt)

> Add metric for number of MasterProcWALs
> ---
>
> Key: HBASE-14719
> URL: https://issues.apache.org/jira/browse/HBASE-14719
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Vrishal Kulkarni
> Fix For: 2.0.0, 1.3.0
>
>
> Lets add monitoring to this so that we can see when it starts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14719) Add metric for number of MasterProcWALs

2015-11-20 Thread Vrishal Kulkarni (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrishal Kulkarni updated HBASE-14719:
-
Attachment: diff.txt

> Add metric for number of MasterProcWALs
> ---
>
> Key: HBASE-14719
> URL: https://issues.apache.org/jira/browse/HBASE-14719
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Vrishal Kulkarni
> Fix For: 2.0.0, 1.3.0
>
> Attachments: diff.txt
>
>
> Lets add monitoring to this so that we can see when it starts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14852) Update build env

2015-11-20 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14852:
--
Attachment: HBASE-14852-v5.patch

Fix apache-rat check issues.

> Update build env
> 
>
> Key: HBASE-14852
> URL: https://issues.apache.org/jira/browse/HBASE-14852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-14852-v1.patch, HBASE-14852-v3.patch, 
> HBASE-14852-v4.patch, HBASE-14852-v5.patch, HBASE-14852.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14852) Update build env

2015-11-20 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14852:
--
Description: 
* Use docker to ensure that everyone has the correct versions of libs
* Use buck to build.
* Include Folly for IOBuf.

> Update build env
> 
>
> Key: HBASE-14852
> URL: https://issues.apache.org/jira/browse/HBASE-14852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-14852-v1.patch, HBASE-14852-v3.patch, 
> HBASE-14852-v4.patch, HBASE-14852-v5.patch, HBASE-14852.patch
>
>
> * Use docker to ensure that everyone has the correct versions of libs
> * Use buck to build.
> * Include Folly for IOBuf.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14719) Add metric for number of MasterProcWALs

2015-11-20 Thread Vrishal Kulkarni (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrishal Kulkarni updated HBASE-14719:
-
Attachment: HBASE-14719.patch

> Add metric for number of MasterProcWALs
> ---
>
> Key: HBASE-14719
> URL: https://issues.apache.org/jira/browse/HBASE-14719
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Vrishal Kulkarni
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14719.patch
>
>
> Lets add monitoring to this so that we can see when it starts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14719) Add metric for number of MasterProcWALs

2015-11-20 Thread Vrishal Kulkarni (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrishal Kulkarni updated HBASE-14719:
-
Attachment: (was: diff.txt)

> Add metric for number of MasterProcWALs
> ---
>
> Key: HBASE-14719
> URL: https://issues.apache.org/jira/browse/HBASE-14719
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Vrishal Kulkarni
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14719.patch
>
>
> Lets add monitoring to this so that we can see when it starts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14719) Add metric for number of MasterProcWALs

2015-11-20 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018354#comment-15018354
 ] 

Sean Busbey commented on HBASE-14719:
-

Thanks for the patch! Could you please generate it using "git format-patch" 
instead of git diff? Also, please name the file according to our contribution 
guidelines: http://hbase.apache.org/book.html#submitting.patches

> Add metric for number of MasterProcWALs
> ---
>
> Key: HBASE-14719
> URL: https://issues.apache.org/jira/browse/HBASE-14719
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Vrishal Kulkarni
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14719.patch
>
>
> Lets add monitoring to this so that we can see when it starts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14719) Add metric for number of MasterProcWALs

2015-11-20 Thread Vrishal Kulkarni (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrishal Kulkarni updated HBASE-14719:
-
Attachment: HBASE-14719.patch

Renamed patch file to include JIRA number
Patch generated using git format-patch

> Add metric for number of MasterProcWALs
> ---
>
> Key: HBASE-14719
> URL: https://issues.apache.org/jira/browse/HBASE-14719
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Vrishal Kulkarni
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14719.patch
>
>
> Lets add monitoring to this so that we can see when it starts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14719) Add metric for number of MasterProcWALs

2015-11-20 Thread Vrishal Kulkarni (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrishal Kulkarni updated HBASE-14719:
-
Attachment: (was: HBASE-14719.patch)

> Add metric for number of MasterProcWALs
> ---
>
> Key: HBASE-14719
> URL: https://issues.apache.org/jira/browse/HBASE-14719
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Vrishal Kulkarni
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14719.patch
>
>
> Lets add monitoring to this so that we can see when it starts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14840) Sink cluster reports data replication request as success though the data is not replicated

2015-11-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018374#comment-15018374
 ] 

Hudson commented on HBASE-14840:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #1136 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/1136/])
HBASE-14840 Sink cluster reports data replication request as success 
(ramkrishna: rev ec66983a9231e7e537a087f2e024a5983fbfcc36)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestMasterReplication.java


> Sink cluster reports data replication request as success though the data is 
> not replicated
> --
>
> Key: HBASE-14840
> URL: https://issues.apache.org/jira/browse/HBASE-14840
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2, 1.0.3, 0.98.16
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-14840-0.98.patch, HBASE-14840-branch-1.patch, 
> HBASE-14840.patch
>
>
> *Scenario:*
> Sink cluster is down
> Create a table and enable table replication
> Put some data
> Now restart the sink cluster
> *Observance:*
> Data is not replicated in sink cluster but still source cluster updates the 
> WAL log position in ZK, resulting in data loss in sink cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14761) Deletes with and without visibility expression do not delete the matching mutation

2015-11-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018375#comment-15018375
 ] 

Hudson commented on HBASE-14761:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #1136 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/1136/])
HBASE-14761 - Addendum for 0.98 (ram) (ramkrishna: rev 
e7136813d328f247903f30418574d9378aaba4c7)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsWithDeletes.java


> Deletes with and without visibility expression do not delete the matching 
> mutation
> --
>
> Key: HBASE-14761
> URL: https://issues.apache.org/jira/browse/HBASE-14761
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1.0.0, 1.0.1, 1.1.0, 1.0.2, 1.1.2, 0.98.15
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0, 1.2.0, 1.3.0, 0.98.17
>
> Attachments: HBASE-14761.patch, HBASE-14761_0.98_addendum.patch
>
>
> This is from the user list as reported by Anoop Sharma
> {code}
>  running into an issue related to visibility expressions and delete.
> Example run from hbase shell is listed below.
> Will appreciate any help on this issue.
> thanks.
> In the example below, user running queries has ‘MANAGER’ authorization.
> *First example:*
>   add a column with visib expr ‘MANAGER’
>   delete it by passing in visibility of ‘MANAGER’
>   This works and scan doesn’t return anything.
> *Second example:*
>   add a column with visib expr ‘MANAGER’
>   delete it by not passing in any visibility.
>   This doesn’t delete the column.
>   Scan doesn’t return the row but RAW scan shows the column
>   marked as deleteColumn.
>   Now if delete is done again with visibility of ‘MANAGER’,
>   it still doesn’t delete it and scan returns the original column.
> *Example 1:*
> hbase(main):096:0> create 'HBT1', 'cf'
> hbase(main):098:0* *put 'HBT1', 'John', 'cf:a', 'CA',
> {VISIBILITY=>'MANAGER'}*
> hbase(main):099:0> *scan 'HBT1'*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446154722055,
> value=CA
> 1 row(s) in 0.0030 seconds
> hbase(main):100:0> *delete 'HBT1', 'John', 'cf:a', {VISIBILITY=>'MANAGER'}*
> 0 row(s) in 0.0030 seconds
> hbase(main):101:0> *scan 'HBT1'*
> ROW
> COLUMN+CELL
> 0 row(s) in 0.0030 seconds
> *Example 2:*
> hbase(main):010:0* *put 'HBT1', 'John', 'cf:a', 'CA',
> {VISIBILITY=>'MANAGER'}*
> 0 row(s) in 0.0040 seconds
> hbase(main):011:0> *scan 'HBT1'*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446155346473,
> value=CA
> 1 row(s) in 0.0060 seconds
> hbase(main):012:0> *delete 'HBT1', 'John', 'cf:a'*
> 0 row(s) in 0.0090 seconds
> hbase(main):013:0> *scan 'HBT1'*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446155346473,
> value=CA
> 1 row(s) in 0.0050 seconds
> hbase(main):014:0> *scan 'HBT1', {RAW => true}*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446155346519,
> type=DeleteColumn
> 1 row(s) in 0.0060 seconds
> hbase(main):015:0> *delete 'HBT1', 'John', 'cf:a', {VISIBILITY=>'MANAGER'}*
> 0 row(s) in 0.0030 seconds
> hbase(main):016:0> *scan 'HBT1'*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446155346473,
> value=CA
> 1 row(s) in 0.0040 seconds
> hbase(main):017:0> *scan 'HBT1', {RAW => true}*
> ROW
> COLUMN+CELL
>  John column=cf:a, timestamp=1446155346601,
> type=DeleteColumn
> 1 row(s) in 0.0060 seconds
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14839) [branch-1] Backport test categories so that patch backport is easier

2015-11-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018376#comment-15018376
 ] 

Hudson commented on HBASE-14839:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #1136 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/1136/])
HBASE-14839 [branch-1] Backport test categories so that patch backport (enis: 
rev af46b759be881a216639e5a43ebc843a8a680657)
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/FlakeyTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/RestTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/RegionServerTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/MiscTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/FilterTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/SecurityTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/RPCTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/MasterTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/IOTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/CoprocessorTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/ClientTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/ReplicationTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/VerySlowMapReduceTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/VerySlowRegionServerTests.java
* 
hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/MapReduceTests.java


> [branch-1] Backport test categories so that patch backport is easier
> 
>
> Key: HBASE-14839
> URL: https://issues.apache.org/jira/browse/HBASE-14839
> Project: HBase
>  Issue Type: Test
>  Components: test
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: hbase-14839-branch-1.patch
>
>
> Test categories are in master and new unit tests are sometimes marked with 
> that particular interface ( {{RPCTests.class}} ). 
> Since we don't have the specific annotation classes in branch-1, backports 
> usually fail. We can just commit those classes to all applicable branches so 
> that committing patches is less work. 
> We can also backport the full patch for running the specific tests from maven 
> as a further issue. Feel free to take it up, if you are interested. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14719) Add metric for number of MasterProcWALs

2015-11-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018401#comment-15018401
 ] 

Hadoop QA commented on HBASE-14719:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org
  against master branch at commit 86be690b0723e814a655ad0ae8a6577d7111c1f2.
  ATTACHMENT ID: http:

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev-support patch that doesn't require tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16619//console

This message is automatically generated.

> Add metric for number of MasterProcWALs
> ---
>
> Key: HBASE-14719
> URL: https://issues.apache.org/jira/browse/HBASE-14719
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Vrishal Kulkarni
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14719.patch
>
>
> Lets add monitoring to this so that we can see when it starts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14719) Add metric for number of MasterProcWALs

2015-11-20 Thread Vrishal Kulkarni (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrishal Kulkarni updated HBASE-14719:
-
Attachment: (was: HBASE-14719.patch)

> Add metric for number of MasterProcWALs
> ---
>
> Key: HBASE-14719
> URL: https://issues.apache.org/jira/browse/HBASE-14719
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Vrishal Kulkarni
> Fix For: 2.0.0, 1.3.0
>
>
> Lets add monitoring to this so that we can see when it starts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14719) Add metric for number of MasterProcWALs

2015-11-20 Thread Vrishal Kulkarni (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrishal Kulkarni updated HBASE-14719:
-
Attachment: HBASE-14719.patch

Modified a unit test that uses MetricsAssertHelper to test master WAL metrics.

> Add metric for number of MasterProcWALs
> ---
>
> Key: HBASE-14719
> URL: https://issues.apache.org/jira/browse/HBASE-14719
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Vrishal Kulkarni
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14719.patch
>
>
> Lets add monitoring to this so that we can see when it starts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14719) Add metric for number of MasterProcWALs

2015-11-20 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018427#comment-15018427
 ] 

Sean Busbey commented on HBASE-14719:
-

Please leave patches on the ticket so that folks can follow along with changes 
over time. Please name subsequent patches to show the order, e.g. 
HBASE-14719.3.patch, HBASE-14719.4.patch, etc.

> Add metric for number of MasterProcWALs
> ---
>
> Key: HBASE-14719
> URL: https://issues.apache.org/jira/browse/HBASE-14719
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Vrishal Kulkarni
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14719.patch
>
>
> Lets add monitoring to this so that we can see when it starts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14862) Add support for reporting p90 for histogram metrics

2015-11-20 Thread Sanjeev Lakshmanan (JIRA)
Sanjeev Lakshmanan created HBASE-14862:
--

 Summary: Add support for reporting p90 for histogram metrics
 Key: HBASE-14862
 URL: https://issues.apache.org/jira/browse/HBASE-14862
 Project: HBase
  Issue Type: Improvement
  Components: metrics
Reporter: Sanjeev Lakshmanan
Assignee: Sanjeev Lakshmanan
Priority: Minor


Currently there is support for reporting p75, p95, and p99 for histogram 
metrics. This JIRA is to add support for reporting p90.
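
For reference, a generic nearest-rank percentile sketch (plain Java, not the 
HBase metrics histogram implementation) showing what a p90 read-out over a set 
of samples means:

{code}
import java.util.Arrays;

final class Percentiles {
  // Nearest-rank percentile over a finished sample set; illustrative only.
  static long percentile(long[] samples, double p) {
    if (samples.length == 0) {
      throw new IllegalArgumentException("no samples");
    }
    long[] sorted = samples.clone();
    Arrays.sort(sorted);
    int rank = (int) Math.ceil((p / 100.0) * sorted.length);
    return sorted[Math.max(0, rank - 1)];
  }

  public static void main(String[] args) {
    long[] latenciesMs = {3, 5, 7, 9, 11, 20, 40, 80, 120, 500};
    System.out.println("p90=" + percentile(latenciesMs, 90.0));  // prints p90=120
  }
}
{code}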



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14719) Add metric for number of MasterProcWALs

2015-11-20 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018439#comment-15018439
 ] 

Matteo Bertozzi commented on HBASE-14719:
-

should we have a different class for procedure metrics? there will be way more 
in the future. 
for example snapshot on master has its own subsection (see 
https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetricsSnapshot.java)

I think getWALFileSize() should be in ProcedureWALStore or at least it has to 
change to fix something around concurrency. I think we just return the pointer 
to the list that we have in the Store class, since that was supposed to be 
visible only for testing.

also, the current WAL + all the wals created from master startup will always 
return 0 as size. since we don't have that FileStatus object for them and we 
don't update it. 

> Add metric for number of MasterProcWALs
> ---
>
> Key: HBASE-14719
> URL: https://issues.apache.org/jira/browse/HBASE-14719
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Vrishal Kulkarni
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14719.patch
>
>
> Lets add monitoring to this so that we can see when it starts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HBASE-14862) Add support for reporting p90 for histogram metrics

2015-11-20 Thread Sanjeev Lakshmanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-14862 started by Sanjeev Lakshmanan.
--
> Add support for reporting p90 for histogram metrics
> ---
>
> Key: HBASE-14862
> URL: https://issues.apache.org/jira/browse/HBASE-14862
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Sanjeev Lakshmanan
>Assignee: Sanjeev Lakshmanan
>Priority: Minor
>
> Currently there is support for reporting p75, p95, and p99 for histogram 
> metrics. This JIRA is to add support for reporting p90.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14623) Implement dedicated WAL for system tables

2015-11-20 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14623:
---
Description: 
As Stephen suggested in the parent JIRA, dedicating a separate WAL to system 
tables (other than hbase:meta) should be done in a new JIRA.

This task is to implement that system WAL separation.
Below is a summary of the discussion:

With their own WAL, system tables would recover faster (faster log split, faster 
log replay). It would probably also benefit the AssignmentManager for system 
table region assignment. At this time the new AssignmentManager is not planned 
to change the WAL, so this JIRA benefits the overall system and is not specific 
to the AssignmentManager.

There are 3 strategies for implementing a system table WAL:
1. one WAL for all non-meta system tables
2. one WAL for each non-meta system table
3. one WAL for each region of a non-meta system table

Currently most system tables are single-region tables (only the ACL table may 
become big), so choices 2 and 3 are basically the same.
From an implementation point of view, choices 2 and 3 are cleaner than choice 1 
(we already have one WAL for the META table and can reuse that logic). With 
choice 2 or 3, AssignmentManager performance should not be impacted and it would 
be easier for the AssignmentManager to assign system table regions (e.g. without 
waiting for user table log splits to complete).

  was:
As Stephen suggested in parent JIRA, dedicating separate WAL for system tables 
(other than hbase:meta) should be done in new JIRA.

This task is to fulfill the system WAL separation.


> Implement dedicated WAL for system tables
> -
>
> Key: HBASE-14623
> URL: https://issues.apache.org/jira/browse/HBASE-14623
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0
>
> Attachments: 14623-v1.txt, 14623-v2.txt, 14623-v2.txt, 14623-v2.txt, 
> 14623-v2.txt
>
>
> As Stephen suggested in parent JIRA, dedicating separate WAL for system 
> tables (other than hbase:meta) should be done in new JIRA.
> This task is to fulfill the system WAL separation.
> Below is summary of discussion:
> For system table to have its own WAL, we would recover system table faster 
> (fast log split, fast log replay). It would probably benefit 
> AssignmentManager on system table region assignment. At this time, the new 
> AssignmentManager is not planned to change WAL. So the existence of this JIRA 
> is good for overall system, not specific to AssignmentManager.
> There are 3 strategies for implementing system table WAL:
> 1. one WAL for all non-meta system tables
> 2. one WAL for each non-meta system table
> 3. one WAL for each region of non-meta system table
> Currently most system tables are one region table (only ACL table may become 
> big). Choices 2 and 3 basically are the same.
> From implementation point of view, choices 2 and 3 are cleaner than choice 1 
> (as we have already had 1 WAL for META table and we can reuse the logic). 
> With choice 2 or 3, assignment manager performance should not be impacted and 
> it would be easier for assignment manager to assign system table region (eg. 
> without waiting for user table log split to complete for assigning system 
> table region).
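
Purely as an illustration of strategy 2 above ("one WAL for each non-meta system 
table"), a hypothetical grouping function; the class and method names are 
invented and this is not the actual WALProvider/WALFactory code.

{code}
import org.apache.hadoop.hbase.TableName;

public final class SystemTableWalGrouping {

  static String walGroupFor(TableName table) {
    if (table.isSystemTable() && !TableName.META_TABLE_NAME.equals(table)) {
      // Each non-meta system table (hbase:acl, hbase:namespace, ...) gets its own group.
      return "syswal." + table.getQualifierAsString();
    }
    // User tables keep sharing the regular WAL(s); meta already has its own WAL elsewhere.
    return "default";
  }

  public static void main(String[] args) {
    System.out.println(walGroupFor(TableName.valueOf("hbase:acl")));   // syswal.acl
    System.out.println(walGroupFor(TableName.META_TABLE_NAME));        // default
    System.out.println(walGroupFor(TableName.valueOf("usertable")));   // default
  }
}
{code}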



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13153) Bulk Loaded HFile Replication

2015-11-20 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-13153:
--
Attachment: HBASE-13153-v17.patch

Patch addressing [~anoop.hbase]'s comments from RB.

> Bulk Loaded HFile Replication
> -
>
> Key: HBASE-13153
> URL: https://issues.apache.org/jira/browse/HBASE-13153
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Reporter: sunhaitao
>Assignee: Ashish Singhi
> Fix For: 2.0.0
>
> Attachments: HBASE-13153-v1.patch, HBASE-13153-v10.patch, 
> HBASE-13153-v11.patch, HBASE-13153-v12.patch, HBASE-13153-v13.patch, 
> HBASE-13153-v14.patch, HBASE-13153-v15.patch, HBASE-13153-v16.patch, 
> HBASE-13153-v17.patch, HBASE-13153-v2.patch, HBASE-13153-v3.patch, 
> HBASE-13153-v4.patch, HBASE-13153-v5.patch, HBASE-13153-v6.patch, 
> HBASE-13153-v7.patch, HBASE-13153-v8.patch, HBASE-13153-v9.patch, 
> HBASE-13153.patch, HBase Bulk Load Replication-v1-1.pdf, HBase Bulk Load 
> Replication-v2.pdf, HBase Bulk Load Replication-v3.pdf, HBase Bulk Load 
> Replication.pdf, HDFS_HA_Solution.PNG
>
>
> Currently we plan to use the HBase replication feature for disaster 
> tolerance. However, we use bulk load very frequently, and because bulk load 
> bypasses the write path it does not generate WAL entries, so the data is not 
> replicated to the backup cluster. It is inappropriate to bulk load twice, on 
> both the active cluster and the backup cluster, so I suggest modifying the 
> bulk load feature so that bulk-loaded data reaches both the active cluster 
> and the backup cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14734) TestGenerateDelegationToken fails with BindAddress clash

2015-11-20 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018546#comment-15018546
 ] 

stack commented on HBASE-14734:
---

Thanks, [~apurtell]. Would it help if we picked a random port? Do you think the 
incidence of clashes would go down? It is pretty rare at the moment, but it 
happens regularly enough.

(Not sure why history is showing it having run once only)

https://builds.apache.org/view/H-L/view/HBase/job/HBase-1.3/lastCompletedBuild/jdk=latest1.7,label=Hadoop/testReport/org.apache.hadoop.hbase.security.token/TestGenerateDelegationToken/org_apache_hadoop_hbase_security_token_TestGenerateDelegationToken/history/


We see it again here:

https://builds.apache.org/view/H-L/view/HBase/job/HBase-1.3/lastCompletedBuild/jdk=latest1.7,label=Hadoop/testReport/org.apache.hadoop.hbase.security.token/TestGenerateDelegationToken/org_apache_hadoop_hbase_security_token_TestGenerateDelegationToken/

Error Message

Address already in use
Stacktrace

java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:463)
at sun.nio.ch.Net.bind(Net.java:455)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at 
org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:198)
at 
org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:51)
at 
org.apache.mina.core.polling.AbstractPollingIoAcceptor.registerHandles(AbstractPollingIoAcceptor.java:547)
at 
org.apache.mina.core.polling.AbstractPollingIoAcceptor.access$400(AbstractPollingIoAcceptor.java:68)
at 
org.apache.mina.core.polling.AbstractPollingIoAcceptor$Acceptor.run(AbstractPollingIoAcceptor.java:422)
at 
org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:64)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
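
For what it's worth, the usual random-port trick looks like the sketch below 
(plain JDK calls). It narrows the window for a clash but cannot eliminate it, 
since the probe socket is released before the server re-binds the port.

{code}
import java.io.IOException;
import java.net.ServerSocket;

final class RandomPort {
  // Ask the OS for a free ephemeral port by binding to port 0.
  static int pickFreePort() throws IOException {
    try (ServerSocket socket = new ServerSocket(0)) {
      socket.setReuseAddress(true);
      return socket.getLocalPort();
    }
  }

  public static void main(String[] args) throws IOException {
    // There is still a small race between closing the probe socket and the test
    // server binding the port, which is why clashes get rarer, not impossible.
    System.out.println("free port: " + pickFreePort());
  }
}
{code}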

> TestGenerateDelegationToken fails with BindAddress clash
> 
>
> Key: HBASE-14734
> URL: https://issues.apache.org/jira/browse/HBASE-14734
> Project: HBase
>  Issue Type: Bug
>  Components: flakey, test
>Reporter: stack
>
> From 
> https://builds.apache.org/view/H-L/view/HBase/job/HBase-1.2/330/jdk=latest1.7,label=Hadoop/testReport/junit/org.apache.hadoop.hbase.security.token/TestGenerateDelegationToken/org_apache_hadoop_hbase_security_token_TestGenerateDelegationToken/
> Error Message
> Address already in use
> Stacktrace
> java.net.BindException: Address already in use
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:444)
>   at sun.nio.ch.Net.bind(Net.java:436)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at 
> org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:198)
>   at 
> org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:51)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoAcceptor.registerHandles(AbstractPollingIoAcceptor.java:547)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoAcceptor.access$400(AbstractPollingIoAcceptor.java:68)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoAcceptor$Acceptor.run(AbstractPollingIoAcceptor.java:422)
>   at 
> org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:64)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Can this utility be made to not fail if address taken? Try another?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14734) TestGenerateDelegationToken fails with BindAddress clash

2015-11-20 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018559#comment-15018559
 ] 

Andrew Purtell commented on HBASE-14734:


I don't think we have any control here 

> TestGenerateDelegationToken fails with BindAddress clash
> 
>
> Key: HBASE-14734
> URL: https://issues.apache.org/jira/browse/HBASE-14734
> Project: HBase
>  Issue Type: Bug
>  Components: flakey, test
>Reporter: stack
>
> From 
> https://builds.apache.org/view/H-L/view/HBase/job/HBase-1.2/330/jdk=latest1.7,label=Hadoop/testReport/junit/org.apache.hadoop.hbase.security.token/TestGenerateDelegationToken/org_apache_hadoop_hbase_security_token_TestGenerateDelegationToken/
> Error Message
> Address already in use
> Stacktrace
> java.net.BindException: Address already in use
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:444)
>   at sun.nio.ch.Net.bind(Net.java:436)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at 
> org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:198)
>   at 
> org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:51)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoAcceptor.registerHandles(AbstractPollingIoAcceptor.java:547)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoAcceptor.access$400(AbstractPollingIoAcceptor.java:68)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoAcceptor$Acceptor.run(AbstractPollingIoAcceptor.java:422)
>   at 
> org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:64)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Can this utility be made to not fail if address taken? Try another?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14843) TestWALProcedureStore.testLoad is flakey

2015-11-20 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018563#comment-15018563
 ] 

stack commented on HBASE-14843:
---



Happened again here, and again with no log output. [~mbertozzi] is fixing the 
missing log output over in HBASE-14848 (Sorry Matteo, I thought this was already 
in... when I was asking about the Hadoop QA failure yesterday).

https://builds.apache.org/view/H-L/view/HBase/job/HBase-Trunk_matrix/485/jdk=latest1.8,label=Hadoop/testReport/junit/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/

> TestWALProcedureStore.testLoad is flakey
> 
>
> Key: HBASE-14843
> URL: https://issues.apache.org/jira/browse/HBASE-14843
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Heng Chen
>
> I see it twice recently, 
> see.
> https://builds.apache.org/job/PreCommit-HBASE-Build/16589//testReport/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/
> https://builds.apache.org/job/PreCommit-HBASE-Build/16532/testReport/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/
> Let's see what's happening.
> Update.
> It failed once again today, 
> https://builds.apache.org/job/PreCommit-HBASE-Build/16602/testReport/junit/org.apache.hadoop.hbase.procedure2.store.wal/TestWALProcedureStore/testLoad/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14848) some hbase-* module don't have test/resources/log4j and test logs are empty

2015-11-20 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018562#comment-15018562
 ] 

stack commented on HBASE-14848:
---

+1. Commit as a subtask of this issue? Leave this issue open for investigating 
why we now need to add the resources dir to all modules? We broke something.

> some hbase-* module don't have test/resources/log4j and test logs are empty
> ---
>
> Key: HBASE-14848
> URL: https://issues.apache.org/jira/browse/HBASE-14848
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0, 1.2.0, 1.1.2, 1.3.0
>Reporter: Matteo Bertozzi
> Attachments: hbase-procedure-resources.patch
>
>
> Some of the hbase-* sub-modules (e.g. hbase-procedure, hbase-prefix-tree, ...) 
> don't have the test/resources/log4j.properties file, which results in unit 
> tests not printing any information.
> Adding the log4j.properties file seems to work, but in the past the debug 
> output was visible even without the file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14734) TestGenerateDelegationToken fails with BindAddress clash

2015-11-20 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018574#comment-15018574
 ] 

stack commented on HBASE-14734:
---

Thanks [~apurtell]. Let me take a look (sometime)...

> TestGenerateDelegationToken fails with BindAddress clash
> 
>
> Key: HBASE-14734
> URL: https://issues.apache.org/jira/browse/HBASE-14734
> Project: HBase
>  Issue Type: Bug
>  Components: flakey, test
>Reporter: stack
>
> From 
> https://builds.apache.org/view/H-L/view/HBase/job/HBase-1.2/330/jdk=latest1.7,label=Hadoop/testReport/junit/org.apache.hadoop.hbase.security.token/TestGenerateDelegationToken/org_apache_hadoop_hbase_security_token_TestGenerateDelegationToken/
> Error Message
> Address already in use
> Stacktrace
> java.net.BindException: Address already in use
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:444)
>   at sun.nio.ch.Net.bind(Net.java:436)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at 
> org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:198)
>   at 
> org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:51)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoAcceptor.registerHandles(AbstractPollingIoAcceptor.java:547)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoAcceptor.access$400(AbstractPollingIoAcceptor.java:68)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoAcceptor$Acceptor.run(AbstractPollingIoAcceptor.java:422)
>   at 
> org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:64)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Can this utility be made to not fail if address taken? Try another?
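
One way the utility could avoid the hard failure, assuming it controls the bind 
itself: retry on the next candidate port, or bind port 0 and let the OS pick a 
free one. A minimal sketch; the class and method names are hypothetical, not 
the actual utility from the stack trace above.

{code}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

/**
 * Hypothetical helper, not the utility from the stack trace above: rather than
 * dying on "Address already in use", walk forward from a base port, or pass
 * basePort=0 to let the OS hand out a free ephemeral port on the first try.
 */
public final class BindRetry {
  private BindRetry() {}

  public static ServerSocket bindWithRetry(int basePort, int attempts) throws IOException {
    IOException last = null;
    for (int i = 0; i < attempts; i++) {
      ServerSocket socket = new ServerSocket();
      try {
        socket.bind(new InetSocketAddress(basePort == 0 ? 0 : basePort + i));
        return socket; // caller closes it (or just reads getLocalPort() and closes)
      } catch (IOException e) {
        socket.close(); // this port is taken; try the next candidate
        last = e;
      }
    }
    throw last != null ? last : new IOException("no free port found after " + attempts + " attempts");
  }
}
{code}

A common variant in test utilities is to probe with port 0, record 
getLocalPort(), close the probe and hand that port to the server being started, 
though that still leaves a small window for another process to grab it.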



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14734) TestGenerateDelegationToken fails with BindAddress clash

2015-11-20 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-14734:
--
Issue Type: Sub-task  (was: Bug)
Parent: HBASE-14420

> TestGenerateDelegationToken fails with BindAddress clash
> 
>
> Key: HBASE-14734
> URL: https://issues.apache.org/jira/browse/HBASE-14734
> Project: HBase
>  Issue Type: Sub-task
>  Components: flakey, test
>Reporter: stack
>
> From 
> https://builds.apache.org/view/H-L/view/HBase/job/HBase-1.2/330/jdk=latest1.7,label=Hadoop/testReport/junit/org.apache.hadoop.hbase.security.token/TestGenerateDelegationToken/org_apache_hadoop_hbase_security_token_TestGenerateDelegationToken/
> Error Message
> Address already in use
> Stacktrace
> java.net.BindException: Address already in use
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:444)
>   at sun.nio.ch.Net.bind(Net.java:436)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at 
> org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:198)
>   at 
> org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:51)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoAcceptor.registerHandles(AbstractPollingIoAcceptor.java:547)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoAcceptor.access$400(AbstractPollingIoAcceptor.java:68)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoAcceptor$Acceptor.run(AbstractPollingIoAcceptor.java:422)
>   at 
> org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:64)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Can this utility be made to not fail if address taken? Try another?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14719) Add metric for number of MasterProcWALs

2015-11-20 Thread Vrishal Kulkarni (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018596#comment-15018596
 ] 

Vrishal Kulkarni commented on HBASE-14719:
--

I suppose it makes sense to remove the masterWALSize metric. 

A new subsection like the following?

{code}
{
  "name" : "Hadoop:service=HBase,name=Master,sub=Procedure",
  "modelerType" : "Master,sub=AssignmentManger",
  "tag.Context" : "master",
  "tag.Hostname" : "vrishal-mbp",
  "numMasterWALs" : 1
}
{code}
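
If the bean ends up published under that name, it could be sanity-checked with 
a plain JMX client along these lines; the JMX port and the attribute name here 
are assumptions based on the proposed subsection above, not on a committed 
patch.

{code}
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ReadProcedureMetric {
  public static void main(String[] args) throws Exception {
    // Assumes the master exposes remote JMX on localhost:10101 (illustrative port).
    JMXServiceURL url =
        new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:10101/jmxrmi");
    try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
      MBeanServerConnection mbs = connector.getMBeanServerConnection();
      ObjectName bean = new ObjectName("Hadoop:service=HBase,name=Master,sub=Procedure");
      // Attribute name follows the proposed JSON above; adjust if the patch differs.
      Object value = mbs.getAttribute(bean, "numMasterWALs");
      System.out.println("numMasterWALs = " + value);
    }
  }
}
{code}

The same record typically also shows up as JSON on the master's /jmx servlet, 
which is where a snippet like the one above would usually be copied from.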


> Add metric for number of MasterProcWALs
> ---
>
> Key: HBASE-14719
> URL: https://issues.apache.org/jira/browse/HBASE-14719
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Vrishal Kulkarni
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14719.patch
>
>
> Lets add monitoring to this so that we can see when it starts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14719) Add metric for number of MasterProcWALs

2015-11-20 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018604#comment-15018604
 ] 

Matteo Bertozzi commented on HBASE-14719:
-

Sure, we can skip the wal size metric in this patch and add the wal size logic 
+ metric in another. 

I think a subsection like that is good. 
I'm not an expert in metrics, so maybe [~eclark] can provide his opinion, but 
we will have many more procedure metrics, so I think a subsection is better.

> Add metric for number of MasterProcWALs
> ---
>
> Key: HBASE-14719
> URL: https://issues.apache.org/jira/browse/HBASE-14719
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Vrishal Kulkarni
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14719.patch
>
>
> Lets add monitoring to this so that we can see when it starts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14807) TestWALLockup is flakey

2015-11-20 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018619#comment-15018619
 ] 

stack commented on HBASE-14807:
---

Just did a check on a cluster with monkeys... and it seems coherent with this 
patch in place.
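
For anyone puzzling over the TestTimedOutException in the report quoted below: 
that is JUnit's timeout machinery firing once the test has been stuck past its 
limit. A minimal, self-contained illustration under JUnit 4.12 (not 
TestWALLockup itself):

{code}
import java.util.concurrent.TimeUnit;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.Timeout;

// Illustration only: a test that blocks past its Timeout rule is reported as
// org.junit.runners.model.TestTimedOutException, the failure shape seen below.
public class TimeoutIllustrationTest {
  @Rule
  public final Timeout timeout = new Timeout(1, TimeUnit.SECONDS);

  @Test
  public void hangsUntilTimedOut() throws InterruptedException {
    Thread.sleep(10_000); // simulated lockup; JUnit interrupts and fails the test
  }
}
{code}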

> TestWALLockup is flakey
> ---
>
> Key: HBASE-14807
> URL: https://issues.apache.org/jira/browse/HBASE-14807
> Project: HBase
>  Issue Type: Bug
>  Components: flakey, test
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14807.patch, 14807.second.attempt.txt, 
> 14807.second.attempt.txt
>
>
> Fails frequently. 
> Looks like this:
> {code}
> 2015-11-12 10:38:51,812 DEBUG [Time-limited test] regionserver.HRegion(3882): 
> Found 0 recovered edits file(s) under 
> /home/jenkins/jenkins-slave/workspace/HBase-1.2/jdk/latest1.7/label/Hadoop/hbase-server/target/test-data/8b8f8f12-1819-47e3-b1f1-8ffa789438ad/data/default/testLockupWhenSyncInMiddleOfZigZagSetup/c8694b53368f3301a8d370089120388d
> 2015-11-12 10:38:51,821 DEBUG [Time-limited test] 
> regionserver.FlushLargeStoresPolicy(56): 
> hbase.hregion.percolumnfamilyflush.size.lower.bound is not specified, use 
> global config(16777216) instead
> 2015-11-12 10:38:51,880 DEBUG [Time-limited test] wal.WALSplitter(729): Wrote 
> region 
> seqId=/home/jenkins/jenkins-slave/workspace/HBase-1.2/jdk/latest1.7/label/Hadoop/hbase-server/target/test-data/8b8f8f12-1819-47e3-b1f1-8ffa789438ad/data/default/testLockupWhenSyncInMiddleOfZigZagSetup/c8694b53368f3301a8d370089120388d/recovered.edits/2.seqid
>  to file, newSeqId=2, maxSeqId=0
> 2015-11-12 10:38:51,881 INFO  [Time-limited test] regionserver.HRegion(868): 
> Onlined c8694b53368f3301a8d370089120388d; next sequenceid=2
> 2015-11-12 10:38:51,994 ERROR [sync.1] wal.FSHLog$SyncRunner(1226): Error 
> syncing, request close of WAL
> java.io.IOException: FAKE! Failed to replace a bad datanode...SYNC
>   at 
> org.apache.hadoop.hbase.regionserver.TestWALLockup$1DodgyFSLog$1.sync(TestWALLockup.java:162)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.FSHLog$SyncRunner.run(FSHLog.java:1222)
>   at java.lang.Thread.run(Thread.java:745)
> 2015-11-12 10:38:51,997 DEBUG [Thread-4] regionserver.LogRoller(139): WAL 
> roll requested
> 2015-11-12 10:38:52,019 DEBUG [flusher] 
> regionserver.FlushLargeStoresPolicy(100): Since none of the CFs were above 
> the size, flushing all.
> 2015-11-12 10:38:52,192 INFO  [Thread-4] 
> regionserver.TestWALLockup$1DodgyFSLog(129): LATCHED
> java.lang.InterruptedException: sleep interrupted
>   at java.lang.Thread.sleep(Native Method)
>   at org.apache.hadoop.hbase.util.Threads.sleep(Threads.java:146)
>   at 
> org.apache.hadoop.hbase.regionserver.TestWALLockup.testLockupWhenSyncInMiddleOfZigZagSetup(TestWALLockup.java:245)
> 2015-11-12 10:39:18,609 INFO  [main] regionserver.TestWALLockup(91): Cleaning 
> test directory: 
> /home/jenkins/jenkins-slave/workspace/HBase-1.2/jdk/latest1.7/label/Hadoop/hbase-server/target/test-data/8b8f8f12-1819-47e3-b1f1-8ffa789438ad
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> ... then times out after being locked up for 30 seconds.  Writes 50+MB of 
> logs while spinning.
> Reported as this:
> {code}
> ---
> Test set: org.apache.hadoop.hbase.regionserver.TestWALLockup
> ---
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 198.23 sec 
> <<< FAILURE! - in org.apache.hadoop.hbase.regionserver.TestWALLockup
> testLockupWhenSyncInMiddleOfZigZagSetup(org.apache.hadoop.hbase.regionserver.TestWALLockup)
>   Time elapsed: 0.049 sec  <<< ERROR!
> org.junit.runners.model.TestTimedOutException: test timed o

[jira] [Commented] (HBASE-14807) TestWALLockup is flakey

2015-11-20 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018622#comment-15018622
 ] 

stack commented on HBASE-14807:
---

Review please!


> TestWALLockup is flakey
> ---
>
> Key: HBASE-14807
> URL: https://issues.apache.org/jira/browse/HBASE-14807
> Project: HBase
>  Issue Type: Bug
>  Components: flakey, test
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14807.patch, 14807.second.attempt.txt, 
> 14807.second.attempt.txt
>
>
> Fails frequently. 
> Looks like this:
> {code}
> 2015-11-12 10:38:51,812 DEBUG [Time-limited test] regionserver.HRegion(3882): 
> Found 0 recovered edits file(s) under 
> /home/jenkins/jenkins-slave/workspace/HBase-1.2/jdk/latest1.7/label/Hadoop/hbase-server/target/test-data/8b8f8f12-1819-47e3-b1f1-8ffa789438ad/data/default/testLockupWhenSyncInMiddleOfZigZagSetup/c8694b53368f3301a8d370089120388d
> 2015-11-12 10:38:51,821 DEBUG [Time-limited test] 
> regionserver.FlushLargeStoresPolicy(56): 
> hbase.hregion.percolumnfamilyflush.size.lower.bound is not specified, use 
> global config(16777216) instead
> 2015-11-12 10:38:51,880 DEBUG [Time-limited test] wal.WALSplitter(729): Wrote 
> region 
> seqId=/home/jenkins/jenkins-slave/workspace/HBase-1.2/jdk/latest1.7/label/Hadoop/hbase-server/target/test-data/8b8f8f12-1819-47e3-b1f1-8ffa789438ad/data/default/testLockupWhenSyncInMiddleOfZigZagSetup/c8694b53368f3301a8d370089120388d/recovered.edits/2.seqid
>  to file, newSeqId=2, maxSeqId=0
> 2015-11-12 10:38:51,881 INFO  [Time-limited test] regionserver.HRegion(868): 
> Onlined c8694b53368f3301a8d370089120388d; next sequenceid=2
> 2015-11-12 10:38:51,994 ERROR [sync.1] wal.FSHLog$SyncRunner(1226): Error 
> syncing, request close of WAL
> java.io.IOException: FAKE! Failed to replace a bad datanode...SYNC
>   at 
> org.apache.hadoop.hbase.regionserver.TestWALLockup$1DodgyFSLog$1.sync(TestWALLockup.java:162)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.FSHLog$SyncRunner.run(FSHLog.java:1222)
>   at java.lang.Thread.run(Thread.java:745)
> 2015-11-12 10:38:51,997 DEBUG [Thread-4] regionserver.LogRoller(139): WAL 
> roll requested
> 2015-11-12 10:38:52,019 DEBUG [flusher] 
> regionserver.FlushLargeStoresPolicy(100): Since none of the CFs were above 
> the size, flushing all.
> 2015-11-12 10:38:52,192 INFO  [Thread-4] 
> regionserver.TestWALLockup$1DodgyFSLog(129): LATCHED
> java.lang.InterruptedException: sleep interrupted
>   at java.lang.Thread.sleep(Native Method)
>   at org.apache.hadoop.hbase.util.Threads.sleep(Threads.java:146)
>   at 
> org.apache.hadoop.hbase.regionserver.TestWALLockup.testLockupWhenSyncInMiddleOfZigZagSetup(TestWALLockup.java:245)
> 2015-11-12 10:39:18,609 INFO  [main] regionserver.TestWALLockup(91): Cleaning 
> test directory: 
> /home/jenkins/jenkins-slave/workspace/HBase-1.2/jdk/latest1.7/label/Hadoop/hbase-server/target/test-data/8b8f8f12-1819-47e3-b1f1-8ffa789438ad
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> ... then times out after being locked up for 30 seconds.  Writes 50+MB of 
> logs while spinning.
> Reported as this:
> {code}
> ---
> Test set: org.apache.hadoop.hbase.regionserver.TestWALLockup
> ---
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 198.23 sec 
> <<< FAILURE! - in org.apache.hadoop.hbase.regionserver.TestWALLockup
> testLockupWhenSyncInMiddleOfZigZagSetup(org.apache.hadoop.hbase.regionserver.TestWALLockup)
>   Time elapsed: 0.049 sec  <<< ERROR!
> org.junit.runners.model.TestTimedOutException: test timed out after 3 
> milliseconds
>   at org.apache.log4j.Category.call
