[jira] [Commented] (HBASE-15403) Performance Evaluation tool isn't working as expected

2016-05-16 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15285273#comment-15285273
 ] 

Jerry He commented on HBASE-15403:
--

The total row count is not being updated correctly. Will open another JIRA to 
address it.

> Performance Evaluation tool isn't working as expected
> -
>
> Key: HBASE-15403
> URL: https://issues.apache.org/jira/browse/HBASE-15403
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Affects Versions: 1.2.0
>Reporter: Appy
>Priority: Critical
>
> hbase pe --nomapred --rows=100 --table='t4' randomWrite 10
> # count on t4 gives 620 rows
> hbase pe --nomapred --rows=200 --table='t5' randomWrite 10
> # count on t5 gives 1257 rows
> hbase pe --nomapred --table='t6' --rows=200 randomWrite 1
> # count on t6 gives 126 rows
I was working with 1.2.0, but it's likely also affecting master.
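For what it's worth, the counts above are consistent with random-key collisions rather than lost writes: randomWrite draws keys uniformly at random from the total row range, so out of n writes the expected number of distinct rows is n * (1 - (1 - 1/n)^n), roughly 0.632n: about 632 of 1000, 1264 of 2000, and 127 of 200, close to the 620 / 1257 / 126 counted above. A small standalone sketch of that estimate (not part of the PE tool):

```java
// Expected distinct rows when n keys are drawn uniformly at random
// from a keyspace of size n (the randomWrite access pattern).
public class RandomWriteDistinct {
    static double expectedDistinct(long n) {
        return n * (1.0 - Math.pow(1.0 - 1.0 / n, n));
    }

    public static void main(String[] args) {
        System.out.println(Math.round(expectedDistinct(1000))); // ~632 (620 counted)
        System.out.println(Math.round(expectedDistinct(2000))); // ~1264 (1257 counted)
        System.out.println(Math.round(expectedDistinct(200)));  // ~127 (126 counted)
    }
}
```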



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15841) Performance Evaluation tool total rows may not be set correctly

2016-05-16 Thread Jerry He (JIRA)
Jerry He created HBASE-15841:


 Summary: Performance Evaluation tool total rows may not be set 
correctly
 Key: HBASE-15841
 URL: https://issues.apache.org/jira/browse/HBASE-15841
 Project: HBase
  Issue Type: Bug
Reporter: Jerry He
Priority: Minor


Carrying my comment over from HBASE-15403:

Recently, when I ran PerformanceEvaluation, I noticed a problem with the
number of rows.

hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable1 
randomWrite 1
hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable5 
randomWrite 5
hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable10 
randomWrite 10
hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable20 
randomWrite 20

All produced a similar number of rows, and on the file system the tables look 
similar in size as well:

hadoop fs -du -h /apps/hbase/data/data/default
786.5 M /apps/hbase/data/data/default/TestTable1
786.0 M /apps/hbase/data/data/default/TestTable10
782.0 M /apps/hbase/data/data/default/TestTable20
713.4 M /apps/hbase/data/data/default/TestTable5

HBase is 1.2.0. Looks like a regression somewhere.
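The symptom, near-identical on-disk sizes for 1, 5, 10, and 20 clients, suggests the aggregate row count was not scaling with the client count. A hypothetical sketch of the expected relationship (the per-client row count and per-row size below are assumptions for illustration, not PE internals):

```java
// If each client writes its own disjoint range of rows, total output
// should grow linearly with the client count; identical table sizes
// for 1/5/10/20 clients indicate the total row count stayed fixed.
public class ExpectedFootprint {
    static final long ROWS_PER_CLIENT = 1_048_576L; // assumed per-client default
    static final long BYTES_PER_ROW = 1_000L;       // assumed ~1 KB per row

    static long expectedBytes(int clients) {
        return clients * ROWS_PER_CLIENT * BYTES_PER_ROW;
    }

    public static void main(String[] args) {
        // A correct 10-client run should write ~10x the data of a 1-client run.
        System.out.println(expectedBytes(10) / expectedBytes(1)); // 10
    }
}
```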





[jira] [Updated] (HBASE-15841) Performance Evaluation tool total rows may not be set correctly

2016-05-16 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-15841:
-
Attachment: HBASE-15841-branch-1.patch

> Performance Evaluation tool total rows may not be set correctly
> ---
>
> Key: HBASE-15841
> URL: https://issues.apache.org/jira/browse/HBASE-15841
> Project: HBase
>  Issue Type: Bug
>Reporter: Jerry He
>Priority: Minor
> Attachments: HBASE-15841-branch-1.patch
>
>
> Carried my comment on HBASE-15403 to here:
> Recently when I ran PerformanceEvaluation, I did notice some problem with the 
> number of rows.
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable1 
> randomWrite 1
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable5 
> randomWrite 5
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable10 
> randomWrite 10
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable20 
> randomWrite 20
> All produced similar number of rows, and on the file system, they look like 
> in similar size as well:
> hadoop fs -du -h /apps/hbase/data/data/default
> 786.5 M /apps/hbase/data/data/default/TestTable1
> 786.0 M /apps/hbase/data/data/default/TestTable10
> 782.0 M /apps/hbase/data/data/default/TestTable20
> 713.4 M /apps/hbase/data/data/default/TestTable5
> HBase is 1.2.0. Looks like a regression somewhere.





[jira] [Updated] (HBASE-15841) Performance Evaluation tool total rows may not be set correctly

2016-05-16 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-15841:
-
Attachment: HBASE-15841-master.patch

The master branch is slightly different.

> Performance Evaluation tool total rows may not be set correctly
> ---
>
> Key: HBASE-15841
> URL: https://issues.apache.org/jira/browse/HBASE-15841
> Project: HBase
>  Issue Type: Bug
>Reporter: Jerry He
>Priority: Minor
> Attachments: HBASE-15841-branch-1.patch, HBASE-15841-master.patch
>
>
> Carried my comment on HBASE-15403 to here:
> Recently when I ran PerformanceEvaluation, I did notice some problem with the 
> number of rows.
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable1 
> randomWrite 1
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable5 
> randomWrite 5
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable10 
> randomWrite 10
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable20 
> randomWrite 20
> All produced similar number of rows, and on the file system, they look like 
> in similar size as well:
> hadoop fs -du -h /apps/hbase/data/data/default
> 786.5 M /apps/hbase/data/data/default/TestTable1
> 786.0 M /apps/hbase/data/data/default/TestTable10
> 782.0 M /apps/hbase/data/data/default/TestTable20
> 713.4 M /apps/hbase/data/data/default/TestTable5
> HBase is 1.2.0. Looks like a regression somewhere.





[jira] [Updated] (HBASE-15841) Performance Evaluation tool total rows may not be set correctly

2016-05-16 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-15841:
-
 Assignee: Jerry He
Fix Version/s: 1.2.2
   1.4.0
   1.3.0
   2.0.0
   Status: Patch Available  (was: Open)

> Performance Evaluation tool total rows may not be set correctly
> ---
>
> Key: HBASE-15841
> URL: https://issues.apache.org/jira/browse/HBASE-15841
> Project: HBase
>  Issue Type: Bug
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.2
>
> Attachments: HBASE-15841-branch-1.patch, HBASE-15841-master.patch
>
>
> Carried my comment on HBASE-15403 to here:
> Recently when I ran PerformanceEvaluation, I did notice some problem with the 
> number of rows.
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable1 
> randomWrite 1
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable5 
> randomWrite 5
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable10 
> randomWrite 10
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable20 
> randomWrite 20
> All produced similar number of rows, and on the file system, they look like 
> in similar size as well:
> hadoop fs -du -h /apps/hbase/data/data/default
> 786.5 M /apps/hbase/data/data/default/TestTable1
> 786.0 M /apps/hbase/data/data/default/TestTable10
> 782.0 M /apps/hbase/data/data/default/TestTable20
> 713.4 M /apps/hbase/data/data/default/TestTable5
> HBase is 1.2.0. Looks like a regression somewhere.





[jira] [Commented] (HBASE-15465) userPermission returned by getUserPermission() for the selected namespace does not have namespace set

2016-05-17 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15287037#comment-15287037
 ] 

Jerry He commented on HBASE-15465:
--

Patch looks good.
Can you test it together with HBASE-14818? Thanks.

> userPermission returned by getUserPermission() for the selected namespace 
> does not have namespace set
> -
>
> Key: HBASE-15465
> URL: https://issues.apache.org/jira/browse/HBASE-15465
> Project: HBase
>  Issue Type: Bug
>  Components: Protobufs
>Affects Versions: 1.2.0
>Reporter: li xiang
>Assignee: li xiang
>Priority: Minor
> Attachments: HBASE-15465.patch.v0
>
>
> The request sent is with type = Namespace, but the response returned contains 
> Global permissions (that is, the field of namespace is not set)
> It is in 
> hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java,
>  from line 2380, and I made some comments into it
> {code}
> /**
>* A utility used to get permissions for selected namespace.
>* 
>* It's also called by the shell, in case you want to find references.
>*
>* @param protocol the AccessControlService protocol proxy
>* @param namespace name of the namespace
>* @throws ServiceException
>*/
>   public static List<UserPermission> getUserPermissions(
>   AccessControlService.BlockingInterface protocol,
>   byte[] namespace) throws ServiceException {
> AccessControlProtos.GetUserPermissionsRequest.Builder builder =
>   AccessControlProtos.GetUserPermissionsRequest.newBuilder();
> if (namespace != null) {
>   builder.setNamespaceName(ByteStringer.wrap(namespace)); 
> }
> builder.setType(AccessControlProtos.Permission.Type.Namespace);  
> //builder is set with type = Namespace
> AccessControlProtos.GetUserPermissionsRequest request = builder.build();  
> //I printed the request, its type is Namespace, which is correct.
> AccessControlProtos.GetUserPermissionsResponse response =  
>protocol.getUserPermissions(null, request);
> /* I printed the response, it contains Global permissions, as below, not a 
> Namespace permission.
> user_permission {
>   user: "a1"
>   permission {
> type: Global
> global_permission {
>   action: READ
>   action: WRITE
>   action: ADMIN
>   action: EXEC
>   action: CREATE
> }
>   }
> }
> AccessControlProtos.GetUserPermissionsRequest has a member called type_ to 
> store the type, but AccessControlProtos.GetUserPermissionsResponse does not.
> */
>  
> List<UserPermission> perms = new 
> ArrayList<UserPermission>(response.getUserPermissionCount());
> for (AccessControlProtos.UserPermission perm: 
> response.getUserPermissionList()) {
>   perms.add(ProtobufUtil.toUserPermission(perm));  // (1)
> }
> return perms;
>   }
> {code}
> Would it be more reasonable to return user permissions with the namespace set 
> in getUserPermissions() for the selected namespace?





[jira] [Commented] (HBASE-10358) Shell changes for setting consistency per request

2016-05-17 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15287479#comment-15287479
 ] 

Jerry He commented on HBASE-10358:
--

Hi, [~enis]

Are you ok with the v3 patch? It follows your suggestion.

> Shell changes for setting consistency per request
> -
>
> Key: HBASE-10358
> URL: https://issues.apache.org/jira/browse/HBASE-10358
> Project: HBase
>  Issue Type: New Feature
>  Components: shell
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0
>
> Attachments: Screen Shot 2016-04-21 at 3.09.52 PM.png, Screen Shot 
> 2016-05-05 at 10.38.27 AM.png, shell.patch, shell_3.patch
>
>
> We can add shell support to set consistency per request. 





[jira] [Commented] (HBASE-15841) Performance Evaluation tool total rows may not be set correctly

2016-05-17 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15287607#comment-15287607
 ] 

Jerry He commented on HBASE-15841:
--

Hi, [~appy]

opts.size will be the default in that last case, and it will not actually be 
used any further. If 'size' is not specified as an input option, the total 
rows and the row size will not be capped by a 'size'.

It is already a fatal exception if both --size and --rows are specified.
{code}
if (opts.size != DEFAULT_OPTS.size &&
opts.perClientRunRows != DEFAULT_OPTS.perClientRunRows) {
  throw new IllegalArgumentException(rows + " and " + size +
" are mutually exclusive options");
}
{code}
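A self-contained illustration of that guard (the default values below are placeholders for illustration, not the tool's actual defaults):

```java
public class OptsGuard {
    static final int DEFAULT_ROWS = 1024 * 1024; // placeholder default
    static final float DEFAULT_SIZE = 1.0f;      // placeholder default, in GB

    // Reject a run where the caller overrides both --rows and --size.
    static void validate(int perClientRunRows, float size) {
        if (size != DEFAULT_SIZE && perClientRunRows != DEFAULT_ROWS) {
            throw new IllegalArgumentException(
                "--rows and --size are mutually exclusive options");
        }
    }

    public static void main(String[] args) {
        validate(DEFAULT_ROWS, 2.0f); // only --size overridden: accepted
        validate(500, DEFAULT_SIZE);  // only --rows overridden: accepted
        boolean rejected = false;
        try {
            validate(500, 2.0f);      // both overridden: rejected
        } catch (IllegalArgumentException e) {
            rejected = true;
        }
        System.out.println(rejected); // true
    }
}
```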



> Performance Evaluation tool total rows may not be set correctly
> ---
>
> Key: HBASE-15841
> URL: https://issues.apache.org/jira/browse/HBASE-15841
> Project: HBase
>  Issue Type: Bug
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.2
>
> Attachments: HBASE-15841-branch-1.patch, HBASE-15841-master.patch
>
>
> Carried my comment on HBASE-15403 to here:
> Recently when I ran PerformanceEvaluation, I did notice some problem with the 
> number of rows.
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable1 
> randomWrite 1
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable5 
> randomWrite 5
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable10 
> randomWrite 10
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable20 
> randomWrite 20
> All produced similar number of rows, and on the file system, they look like 
> in similar size as well:
> hadoop fs -du -h /apps/hbase/data/data/default
> 786.5 M /apps/hbase/data/data/default/TestTable1
> 786.0 M /apps/hbase/data/data/default/TestTable10
> 782.0 M /apps/hbase/data/data/default/TestTable20
> 713.4 M /apps/hbase/data/data/default/TestTable5
> HBase is 1.2.0. Looks like a regression somewhere.





[jira] [Commented] (HBASE-15834) Correct Bloom filter documentation in section 96.4 of Reference Guide

2016-05-17 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15287638#comment-15287638
 ] 

Jerry He commented on HBASE-15834:
--

+1

> Correct Bloom filter documentation in section 96.4 of Reference Guide
> -
>
> Key: HBASE-15834
> URL: https://issues.apache.org/jira/browse/HBASE-15834
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: li xiang
>Priority: Minor
> Attachments: HBASE-15834.patch.v0, HBASE-15834.patch.v1
>
>
> In section 96.4, the second paragraph from the bottom
> {code}
> Since HBase 0.96, row-based Bloom filters are enabled by default. (HBASE-)
> {code}





[jira] [Commented] (HBASE-15841) Performance Evaluation tool total rows may not be set correctly

2016-05-17 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15287734#comment-15287734
 ] 

Jerry He commented on HBASE-15841:
--

ok. That makes sense.
I am ok with imposing a new meaning on 'size'. It is not just an input value 
anymore; it can be derived, calculated, and stored, and may be referenced 
later as well.

> Performance Evaluation tool total rows may not be set correctly
> ---
>
> Key: HBASE-15841
> URL: https://issues.apache.org/jira/browse/HBASE-15841
> Project: HBase
>  Issue Type: Bug
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.2
>
> Attachments: HBASE-15841-branch-1.patch, HBASE-15841-master.patch
>
>
> Carried my comment on HBASE-15403 to here:
> Recently when I ran PerformanceEvaluation, I did notice some problem with the 
> number of rows.
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable1 
> randomWrite 1
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable5 
> randomWrite 5
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable10 
> randomWrite 10
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable20 
> randomWrite 20
> All produced similar number of rows, and on the file system, they look like 
> in similar size as well:
> hadoop fs -du -h /apps/hbase/data/data/default
> 786.5 M /apps/hbase/data/data/default/TestTable1
> 786.0 M /apps/hbase/data/data/default/TestTable10
> 782.0 M /apps/hbase/data/data/default/TestTable20
> 713.4 M /apps/hbase/data/data/default/TestTable5
> HBase is 1.2.0. Looks like a regression somewhere.





[jira] [Resolved] (HBASE-15834) Correct Bloom filter documentation in section 96.4 of Reference Guide

2016-05-17 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He resolved HBASE-15834.
--
   Resolution: Fixed
Fix Version/s: 2.0.0

> Correct Bloom filter documentation in section 96.4 of Reference Guide
> -
>
> Key: HBASE-15834
> URL: https://issues.apache.org/jira/browse/HBASE-15834
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: li xiang
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15834.patch.v0, HBASE-15834.patch.v1
>
>
> In section 96.4, the second paragraph from the bottom
> {code}
> Since HBase 0.96, row-based Bloom filters are enabled by default. (HBASE-)
> {code}





[jira] [Commented] (HBASE-15834) Correct Bloom filter documentation in section 96.4 of Reference Guide

2016-05-17 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15288199#comment-15288199
 ] 

Jerry He commented on HBASE-15834:
--

Will assign to 'li xiang' when his JIRA permission is fixed.

> Correct Bloom filter documentation in section 96.4 of Reference Guide
> -
>
> Key: HBASE-15834
> URL: https://issues.apache.org/jira/browse/HBASE-15834
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: li xiang
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15834.patch.v0, HBASE-15834.patch.v1
>
>
> In section 96.4, the second paragraph from the bottom
> {code}
> Since HBase 0.96, row-based Bloom filters are enabled by default. (HBASE-)
> {code}





[jira] [Updated] (HBASE-15841) Performance Evaluation tool total rows may not be set correctly

2016-05-17 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-15841:
-
Attachment: HBASE-15841-master-v2.patch
HBASE-15841-branch-1-v2.patch

v2 addressed [~appy]'s comment.

> Performance Evaluation tool total rows may not be set correctly
> ---
>
> Key: HBASE-15841
> URL: https://issues.apache.org/jira/browse/HBASE-15841
> Project: HBase
>  Issue Type: Bug
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.2
>
> Attachments: HBASE-15841-branch-1-v2.patch, 
> HBASE-15841-branch-1.patch, HBASE-15841-master-v2.patch, 
> HBASE-15841-master.patch
>
>
> Carried my comment on HBASE-15403 to here:
> Recently when I ran PerformanceEvaluation, I did notice some problem with the 
> number of rows.
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable1 
> randomWrite 1
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable5 
> randomWrite 5
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable10 
> randomWrite 10
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable20 
> randomWrite 20
> All produced similar number of rows, and on the file system, they look like 
> in similar size as well:
> hadoop fs -du -h /apps/hbase/data/data/default
> 786.5 M /apps/hbase/data/data/default/TestTable1
> 786.0 M /apps/hbase/data/data/default/TestTable10
> 782.0 M /apps/hbase/data/data/default/TestTable20
> 713.4 M /apps/hbase/data/data/default/TestTable5
> HBase is 1.2.0. Looks like a regression somewhere.





[jira] [Commented] (HBASE-15841) Performance Evaluation tool total rows may not be set correctly

2016-05-17 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15288203#comment-15288203
 ] 

Jerry He commented on HBASE-15841:
--

Tested to make sure it works as expected:

hadoop fs -du -h /apps/hbase/data/data/default | grep TestTable
7.2 G  /apps/hbase/data/data/default/TestTableNew10
3.5 G  /apps/hbase/data/data/default/TestTableNew5


> Performance Evaluation tool total rows may not be set correctly
> ---
>
> Key: HBASE-15841
> URL: https://issues.apache.org/jira/browse/HBASE-15841
> Project: HBase
>  Issue Type: Bug
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.2
>
> Attachments: HBASE-15841-branch-1-v2.patch, 
> HBASE-15841-branch-1.patch, HBASE-15841-master-v2.patch, 
> HBASE-15841-master.patch
>
>
> Carried my comment on HBASE-15403 to here:
> Recently when I ran PerformanceEvaluation, I did notice some problem with the 
> number of rows.
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable1 
> randomWrite 1
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable5 
> randomWrite 5
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable10 
> randomWrite 10
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable20 
> randomWrite 20
> All produced similar number of rows, and on the file system, they look like 
> in similar size as well:
> hadoop fs -du -h /apps/hbase/data/data/default
> 786.5 M /apps/hbase/data/data/default/TestTable1
> 786.0 M /apps/hbase/data/data/default/TestTable10
> 782.0 M /apps/hbase/data/data/default/TestTable20
> 713.4 M /apps/hbase/data/data/default/TestTable5
> HBase is 1.2.0. Looks like a regression somewhere.





[jira] [Updated] (HBASE-15834) Correct Bloom filter documentation in section 96.4 of Reference Guide

2016-05-18 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-15834:
-
Assignee: li xiang

> Correct Bloom filter documentation in section 96.4 of Reference Guide
> -
>
> Key: HBASE-15834
> URL: https://issues.apache.org/jira/browse/HBASE-15834
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: li xiang
>Assignee: li xiang
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15834.patch.v0, HBASE-15834.patch.v1
>
>
> In section 96.4, the second paragraph from the bottom
> {code}
> Since HBase 0.96, row-based Bloom filters are enabled by default. (HBASE-)
> {code}





[jira] [Updated] (HBASE-15841) Performance Evaluation tool total rows may not be set correctly

2016-05-18 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-15841:
-
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> Performance Evaluation tool total rows may not be set correctly
> ---
>
> Key: HBASE-15841
> URL: https://issues.apache.org/jira/browse/HBASE-15841
> Project: HBase
>  Issue Type: Bug
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.2
>
> Attachments: HBASE-15841-branch-1-v2.patch, 
> HBASE-15841-branch-1.patch, HBASE-15841-master-v2.patch, 
> HBASE-15841-master.patch
>
>
> Carried my comment on HBASE-15403 to here:
> Recently when I ran PerformanceEvaluation, I did notice some problem with the 
> number of rows.
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable1 
> randomWrite 1
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable5 
> randomWrite 5
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable10 
> randomWrite 10
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable20 
> randomWrite 20
> All produced similar number of rows, and on the file system, they look like 
> in similar size as well:
> hadoop fs -du -h /apps/hbase/data/data/default
> 786.5 M /apps/hbase/data/data/default/TestTable1
> 786.0 M /apps/hbase/data/data/default/TestTable10
> 782.0 M /apps/hbase/data/data/default/TestTable20
> 713.4 M /apps/hbase/data/data/default/TestTable5
> HBase is 1.2.0. Looks like a regression somewhere.





[jira] [Commented] (HBASE-15841) Performance Evaluation tool total rows may not be set correctly

2016-05-18 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15289629#comment-15289629
 ] 

Jerry He commented on HBASE-15841:
--

Thanks for the review, guys!

> Performance Evaluation tool total rows may not be set correctly
> ---
>
> Key: HBASE-15841
> URL: https://issues.apache.org/jira/browse/HBASE-15841
> Project: HBase
>  Issue Type: Bug
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.2
>
> Attachments: HBASE-15841-branch-1-v2.patch, 
> HBASE-15841-branch-1.patch, HBASE-15841-master-v2.patch, 
> HBASE-15841-master.patch
>
>
> Carried my comment on HBASE-15403 to here:
> Recently when I ran PerformanceEvaluation, I did notice some problem with the 
> number of rows.
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable1 
> randomWrite 1
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable5 
> randomWrite 5
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable10 
> randomWrite 10
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestTable20 
> randomWrite 20
> All produced similar number of rows, and on the file system, they look like 
> in similar size as well:
> hadoop fs -du -h /apps/hbase/data/data/default
> 786.5 M /apps/hbase/data/data/default/TestTable1
> 786.0 M /apps/hbase/data/data/default/TestTable10
> 782.0 M /apps/hbase/data/data/default/TestTable20
> 713.4 M /apps/hbase/data/data/default/TestTable5
> HBase is 1.2.0. Looks like a regression somewhere.





[jira] [Updated] (HBASE-14818) user_permission does not list namespace permissions

2016-05-18 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-14818:
-
Status: Patch Available  (was: Open)

> user_permission does not list namespace permissions
> ---
>
> Key: HBASE-14818
> URL: https://issues.apache.org/jira/browse/HBASE-14818
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.0.2
>Reporter: Steven Hancz
>Assignee: li xiang
>Priority: Minor
> Attachments: HBASE-14818-v0.patch, HBASE-14818-v1.patch, 
> HBASE-14818-v2.patch
>
>
> The user_permission command does not list namespace permissions:
> For example: if I create a new namespace or use an existing namespace and 
> grant a user privileges to that namespace, the command user_permission does 
> not list it. The permission is visible in the acl table.
> Example:
> hbase(main):005:0>  create_namespace 'ns3'
> 0 row(s) in 0.1640 seconds
> hbase(main):007:0> grant 'test_user','RWXAC','@ns3'
> 0 row(s) in 0.5680 seconds
> hbase(main):008:0> user_permission '.*'
> User   
> Namespace,Table,Family,Qualifier:Permission   
>  
>  sh82993   finance,finance:emp,,: [Permission: 
> actions=READ,WRITE,EXEC,CREATE,ADMIN]  
>  @hbaseglobaldba   hbase,hbase:acl,,: [Permission: 
> actions=EXEC,CREATE,ADMIN] 
>  @hbaseglobaloper  hbase,hbase:acl,,: [Permission: 
> actions=EXEC,ADMIN]
>  hdfs  hbase,hbase:acl,,: [Permission: 
> actions=READ,WRITE,CREATE,ADMIN,EXEC]  
>  sh82993   ns1,ns1:tbl1,,: [Permission: 
> actions=READ,WRITE,EXEC,CREATE,ADMIN] 
>  ns1admin  ns1,ns1:tbl2,,: [Permission: 
> actions=EXEC,CREATE,ADMIN]
>  @hbaseappltest_ns1funct   ns1,ns1:tbl2,,: [Permission: 
> actions=READ,WRITE,EXEC]  
>  ns1funct  ns1,ns1:tbl2,,: [Permission: 
> actions=READ,WRITE,EXEC,CREATE,ADMIN] 
>  hbase ns2,ns2:tbl1,,: [Permission: 
> actions=READ,WRITE,EXEC,CREATE,ADMIN] 
> 9 row(s) in 1.8090 seconds
> As you can see user test_user does not appear in the output, but we can see 
> the permission in the ACL table. 
> hbase(main):001:0>  scan 'hbase:acl'
> ROWCOLUMN+CELL
> 
>  @finance  column=l:sh82993, timestamp=105519510, 
> value=RWXCA 
>  @gcbcppdn column=l:hdfs, timestamp=1446141119602, 
> value=RWCXA
>  @hbasecolumn=l:hdfs, timestamp=1446141485136, 
> value=RWCAX
>  @ns1  column=l:@hbaseappltest_ns1admin, 
> timestamp=1447437007467, value=RWXCA 
>  @ns1  column=l:@hbaseappltest_ns1funct, 
> timestamp=1447427366835, value=RWX   
>  @ns2  column=l:@hbaseappltest_ns2admin, 
> timestamp=1446674470456, value=XCA   
>  @ns2  column=l:test_user, 
> timestamp=1447692840030, value=RWAC   
>  
>  @ns3  column=l:test_user, 
> timestamp=1447692860434, value=RWXAC  
>  
>  finance:emp   column=l:sh82993, timestamp=107723316, 
> value=RWXCA 
>  hbase:acl column=l:@hbaseglobaldba, 
> timestamp=1446590375370, value=XCA   
>  hbase:acl column=l:@hbaseglobaloper, 
> timestamp=1446590387965, value=XA   
>  hbase:acl column=l:hdfs, timestamp=1446141737213, 
> value=RWCAX
>  ns1:tbl1  column=l:sh82993, timestamp=1446674153058, 
> value=RWXCA 
>  ns1:tbl2  column=l:@hbaseappltest_ns1funct, 
> timestamp=1447183824580, value=RWX   
>  ns1:tbl2  column=l:ns1admin, 
> t

[jira] [Commented] (HBASE-14818) user_permission does not list namespace permissions

2016-05-18 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15290357#comment-15290357
 ] 

Jerry He commented on HBASE-14818:
--

+1

> user_permission does not list namespace permissions
> ---
>
> Key: HBASE-14818
> URL: https://issues.apache.org/jira/browse/HBASE-14818
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.0.2
>Reporter: Steven Hancz
>Assignee: li xiang
>Priority: Minor
> Attachments: HBASE-14818-v0.patch, HBASE-14818-v1.patch, 
> HBASE-14818-v2.patch
>
>
> The user_permission command does not list namespace permissions:
> For example: if I create a new namespace or use an existing namespace and 
> grant a user privileges to that namespace, the command user_permission does 
> not list it. The permission is visible in the acl table.
> Example:
> hbase(main):005:0>  create_namespace 'ns3'
> 0 row(s) in 0.1640 seconds
> hbase(main):007:0> grant 'test_user','RWXAC','@ns3'
> 0 row(s) in 0.5680 seconds
> hbase(main):008:0> user_permission '.*'
> User   
> Namespace,Table,Family,Qualifier:Permission   
>  
>  sh82993   finance,finance:emp,,: [Permission: 
> actions=READ,WRITE,EXEC,CREATE,ADMIN]  
>  @hbaseglobaldba   hbase,hbase:acl,,: [Permission: 
> actions=EXEC,CREATE,ADMIN] 
>  @hbaseglobaloper  hbase,hbase:acl,,: [Permission: 
> actions=EXEC,ADMIN]
>  hdfs  hbase,hbase:acl,,: [Permission: 
> actions=READ,WRITE,CREATE,ADMIN,EXEC]  
>  sh82993   ns1,ns1:tbl1,,: [Permission: 
> actions=READ,WRITE,EXEC,CREATE,ADMIN] 
>  ns1admin  ns1,ns1:tbl2,,: [Permission: 
> actions=EXEC,CREATE,ADMIN]
>  @hbaseappltest_ns1funct   ns1,ns1:tbl2,,: [Permission: 
> actions=READ,WRITE,EXEC]  
>  ns1funct  ns1,ns1:tbl2,,: [Permission: 
> actions=READ,WRITE,EXEC,CREATE,ADMIN] 
>  hbase ns2,ns2:tbl1,,: [Permission: 
> actions=READ,WRITE,EXEC,CREATE,ADMIN] 
> 9 row(s) in 1.8090 seconds
> As you can see user test_user does not appear in the output, but we can see 
> the permission in the ACL table. 
> hbase(main):001:0>  scan 'hbase:acl'
> ROWCOLUMN+CELL
> 
>  @finance  column=l:sh82993, timestamp=105519510, 
> value=RWXCA 
>  @gcbcppdn column=l:hdfs, timestamp=1446141119602, 
> value=RWCXA
>  @hbasecolumn=l:hdfs, timestamp=1446141485136, 
> value=RWCAX
>  @ns1  column=l:@hbaseappltest_ns1admin, 
> timestamp=1447437007467, value=RWXCA 
>  @ns1  column=l:@hbaseappltest_ns1funct, 
> timestamp=1447427366835, value=RWX   
>  @ns2  column=l:@hbaseappltest_ns2admin, 
> timestamp=1446674470456, value=XCA   
>  @ns2  column=l:test_user, 
> timestamp=1447692840030, value=RWAC   
>  
>  @ns3  column=l:test_user, 
> timestamp=1447692860434, value=RWXAC  
>  
>  finance:emp   column=l:sh82993, timestamp=107723316, 
> value=RWXCA 
>  hbase:acl column=l:@hbaseglobaldba, 
> timestamp=1446590375370, value=XCA   
>  hbase:acl column=l:@hbaseglobaloper, 
> timestamp=1446590387965, value=XA   
>  hbase:acl column=l:hdfs, timestamp=1446141737213, 
> value=RWCAX
>  ns1:tbl1  column=l:sh82993, timestamp=1446674153058, 
> value=RWXCA 
>  ns1:tbl2  column=l:@hbaseappltest_ns1funct, 
> timestamp=1447183824580, value=RWX   
>  ns1:tbl2  col

[jira] [Commented] (HBASE-15465) userPermission returned by getUserPermission() for the selected namespace does not have namespace set

2016-05-19 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15292135#comment-15292135
 ] 

Jerry He commented on HBASE-15465:
--

+1

> userPermission returned by getUserPermission() for the selected namespace 
> does not have namespace set
> -
>
> Key: HBASE-15465
> URL: https://issues.apache.org/jira/browse/HBASE-15465
> Project: HBase
>  Issue Type: Bug
>  Components: Protobufs
>Affects Versions: 1.2.0
>Reporter: li xiang
>Assignee: li xiang
>Priority: Minor
> Fix For: master
>
> Attachments: HBASE-15465-master-v2.patch, HBASE-15465.patch.v0, 
> HBASE-15465.patch.v1
>
>
> The request is sent with type = Namespace, but the response returned contains 
> Global permissions (that is, the namespace field is not set).
> It is in 
> hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java,
> from line 2380, and I added some comments to it:
> {code}
> /**
>* A utility used to get permissions for selected namespace.
>* 
>* It's also called by the shell, in case you want to find references.
>*
>* @param protocol the AccessControlService protocol proxy
>* @param namespace name of the namespace
>* @throws ServiceException
>*/
>   public static List<UserPermission> getUserPermissions(
>   AccessControlService.BlockingInterface protocol,
>   byte[] namespace) throws ServiceException {
> AccessControlProtos.GetUserPermissionsRequest.Builder builder =
>   AccessControlProtos.GetUserPermissionsRequest.newBuilder();
> if (namespace != null) {
>   builder.setNamespaceName(ByteStringer.wrap(namespace)); 
> }
> builder.setType(AccessControlProtos.Permission.Type.Namespace);  
> //builder is set with type = Namespace
> AccessControlProtos.GetUserPermissionsRequest request = builder.build();  
> //I printed the request, its type is Namespace, which is correct.
> AccessControlProtos.GetUserPermissionsResponse response =  
>protocol.getUserPermissions(null, request);
> /* I printed the response, it contains Global permissions, as below, not a 
> Namespace permission.
> user_permission {
>   user: "a1"
>   permission {
> type: Global
> global_permission {
>   action: READ
>   action: WRITE
>   action: ADMIN
>   action: EXEC
>   action: CREATE
> }
>   }
> }
> AccessControlProtos.GetUserPermissionsRequest has a member called type_ to 
> store the type, but AccessControlProtos.GetUserPermissionsResponse does not.
> */
>  
> List<UserPermission> perms = new 
> ArrayList<UserPermission>(response.getUserPermissionCount());
> for (AccessControlProtos.UserPermission perm: 
> response.getUserPermissionList()) {
>   perms.add(ProtobufUtil.toUserPermission(perm));  // (1)
> }
> return perms;
>   }
> {code}
> It could be more reasonable to return user permissions with the namespace set 
> in getUserPermission() for the selected namespace?
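The behavior the reporter asks for can be sketched in plain Java. The `Perm` class below is a hypothetical stand-in for the protobuf `UserPermission` types quoted above, not the actual HBase classes:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for a returned permission; the real types are the
// AccessControlProtos/UserPermission classes quoted in the issue.
public class NamespacePerms {
    static class Perm {
        final String user; final String namespace; final String actions;
        Perm(String user, String namespace, String actions) {
            this.user = user; this.namespace = namespace; this.actions = actions;
        }
    }

    // If a permission came back global (namespace == null) for a namespace
    // request, attach the namespace that was asked for.
    static List<Perm> withNamespace(List<Perm> perms, String ns) {
        List<Perm> out = new ArrayList<>();
        for (Perm p : perms) {
            out.add(p.namespace == null ? new Perm(p.user, ns, p.actions) : p);
        }
        return out;
    }

    public static void main(String[] args) {
        List<Perm> raw = new ArrayList<>();
        // Shaped like the response printed above: user "a1", no namespace set.
        raw.add(new Perm("a1", null, "RWXCA"));
        System.out.println(withNamespace(raw, "ns3").get(0).namespace);
    }
}
```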



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15465) userPermission returned by getUserPermission() for the selected namespace does not have namespace set

2016-05-19 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-15465:
-
   Resolution: Fixed
Fix Version/s: (was: master)
   1.2.2
   1.4.0
   1.3.0
   2.0.0
   Status: Resolved  (was: Patch Available)

> userPermission returned by getUserPermission() for the selected namespace 
> does not have namespace set
> -
>
> Key: HBASE-15465
> URL: https://issues.apache.org/jira/browse/HBASE-15465
> Project: HBase
>  Issue Type: Bug
>  Components: Protobufs
>Affects Versions: 1.2.0
>Reporter: li xiang
>Assignee: li xiang
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.2
>
> Attachments: HBASE-15465-master-v2.patch, HBASE-15465.patch.v0, 
> HBASE-15465.patch.v1
>
>
> The request is sent with type = Namespace, but the response returned contains 
> Global permissions (that is, the namespace field is not set).
> It is in 
> hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java,
> from line 2380, and I added some comments to it:
> {code}
> /**
>* A utility used to get permissions for selected namespace.
>* 
>* It's also called by the shell, in case you want to find references.
>*
>* @param protocol the AccessControlService protocol proxy
>* @param namespace name of the namespace
>* @throws ServiceException
>*/
>   public static List<UserPermission> getUserPermissions(
>   AccessControlService.BlockingInterface protocol,
>   byte[] namespace) throws ServiceException {
> AccessControlProtos.GetUserPermissionsRequest.Builder builder =
>   AccessControlProtos.GetUserPermissionsRequest.newBuilder();
> if (namespace != null) {
>   builder.setNamespaceName(ByteStringer.wrap(namespace)); 
> }
> builder.setType(AccessControlProtos.Permission.Type.Namespace);  
> //builder is set with type = Namespace
> AccessControlProtos.GetUserPermissionsRequest request = builder.build();  
> //I printed the request, its type is Namespace, which is correct.
> AccessControlProtos.GetUserPermissionsResponse response =  
>protocol.getUserPermissions(null, request);
> /* I printed the response, it contains Global permissions, as below, not a 
> Namespace permission.
> user_permission {
>   user: "a1"
>   permission {
> type: Global
> global_permission {
>   action: READ
>   action: WRITE
>   action: ADMIN
>   action: EXEC
>   action: CREATE
> }
>   }
> }
> AccessControlProtos.GetUserPermissionsRequest has a member called type_ to 
> store the type, but AccessControlProtos.GetUserPermissionsResponse does not.
> */
>  
> List<UserPermission> perms = new 
> ArrayList<UserPermission>(response.getUserPermissionCount());
> for (AccessControlProtos.UserPermission perm: 
> response.getUserPermissionList()) {
>   perms.add(ProtobufUtil.toUserPermission(perm));  // (1)
> }
> return perms;
>   }
> {code}
> It could be more reasonable to return user permissions with the namespace set 
> in getUserPermission() for the selected namespace?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15465) userPermission returned by getUserPermission() for the selected namespace does not have namespace set

2016-05-19 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15292645#comment-15292645
 ] 

Jerry He commented on HBASE-15465:
--

Pushed to 1.2+ branches.

> userPermission returned by getUserPermission() for the selected namespace 
> does not have namespace set
> -
>
> Key: HBASE-15465
> URL: https://issues.apache.org/jira/browse/HBASE-15465
> Project: HBase
>  Issue Type: Bug
>  Components: Protobufs
>Affects Versions: 1.2.0
>Reporter: li xiang
>Assignee: li xiang
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.2
>
> Attachments: HBASE-15465-master-v2.patch, HBASE-15465.patch.v0, 
> HBASE-15465.patch.v1
>
>
> The request is sent with type = Namespace, but the response returned contains 
> Global permissions (that is, the namespace field is not set).
> It is in 
> hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java,
> from line 2380, and I added some comments to it:
> {code}
> /**
>* A utility used to get permissions for selected namespace.
>* 
>* It's also called by the shell, in case you want to find references.
>*
>* @param protocol the AccessControlService protocol proxy
>* @param namespace name of the namespace
>* @throws ServiceException
>*/
>   public static List<UserPermission> getUserPermissions(
>   AccessControlService.BlockingInterface protocol,
>   byte[] namespace) throws ServiceException {
> AccessControlProtos.GetUserPermissionsRequest.Builder builder =
>   AccessControlProtos.GetUserPermissionsRequest.newBuilder();
> if (namespace != null) {
>   builder.setNamespaceName(ByteStringer.wrap(namespace)); 
> }
> builder.setType(AccessControlProtos.Permission.Type.Namespace);  
> //builder is set with type = Namespace
> AccessControlProtos.GetUserPermissionsRequest request = builder.build();  
> //I printed the request, its type is Namespace, which is correct.
> AccessControlProtos.GetUserPermissionsResponse response =  
>protocol.getUserPermissions(null, request);
> /* I printed the response, it contains Global permissions, as below, not a 
> Namespace permission.
> user_permission {
>   user: "a1"
>   permission {
> type: Global
> global_permission {
>   action: READ
>   action: WRITE
>   action: ADMIN
>   action: EXEC
>   action: CREATE
> }
>   }
> }
> AccessControlProtos.GetUserPermissionsRequest has a member called type_ to 
> store the type, but AccessControlProtos.GetUserPermissionsResponse does not.
> */
>  
> List<UserPermission> perms = new 
> ArrayList<UserPermission>(response.getUserPermissionCount());
> for (AccessControlProtos.UserPermission perm: 
> response.getUserPermissionList()) {
>   perms.add(ProtobufUtil.toUserPermission(perm));  // (1)
> }
> return perms;
>   }
> {code}
> It could be more reasonable to return user permissions with the namespace set 
> in getUserPermission() for the selected namespace?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14818) user_permission does not list namespace permissions

2016-05-21 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-14818:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: (was: master)
   1.4.0
   1.3.0
   2.0.0
   Status: Resolved  (was: Patch Available)

Thanks for the fix, [~water]. Thanks for the review, [~ashish singhi].

> user_permission does not list namespace permissions
> ---
>
> Key: HBASE-14818
> URL: https://issues.apache.org/jira/browse/HBASE-14818
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.2.0
>Reporter: Steven Hancz
>Assignee: li xiang
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.2
>
> Attachments: HBASE-14818-1.2-v4.patch, HBASE-14818-master-v3.patch, 
> HBASE-14818-master-v4.patch, HBASE-14818-v0.patch, HBASE-14818-v1.patch, 
> HBASE-14818-v2.patch
>
>
> The user_permission command does not list namespace permissions:
> For example: if I create a new namespace or use an existing namespace and 
> grant a user privileges to that namespace, the command user_permission does 
> not list it. The permission is visible in the acl table.
> Example:
> hbase(main):005:0>  create_namespace 'ns3'
> 0 row(s) in 0.1640 seconds
> hbase(main):007:0> grant 'test_user','RWXAC','@ns3'
> 0 row(s) in 0.5680 seconds
> hbase(main):008:0> user_permission '.*'
> User   
> Namespace,Table,Family,Qualifier:Permission   
>  
>  sh82993   finance,finance:emp,,: [Permission: 
> actions=READ,WRITE,EXEC,CREATE,ADMIN]  
>  @hbaseglobaldba   hbase,hbase:acl,,: [Permission: 
> actions=EXEC,CREATE,ADMIN] 
>  @hbaseglobaloper  hbase,hbase:acl,,: [Permission: 
> actions=EXEC,ADMIN]
>  hdfs  hbase,hbase:acl,,: [Permission: 
> actions=READ,WRITE,CREATE,ADMIN,EXEC]  
>  sh82993   ns1,ns1:tbl1,,: [Permission: 
> actions=READ,WRITE,EXEC,CREATE,ADMIN] 
>  ns1admin  ns1,ns1:tbl2,,: [Permission: 
> actions=EXEC,CREATE,ADMIN]
>  @hbaseappltest_ns1funct   ns1,ns1:tbl2,,: [Permission: 
> actions=READ,WRITE,EXEC]  
>  ns1funct  ns1,ns1:tbl2,,: [Permission: 
> actions=READ,WRITE,EXEC,CREATE,ADMIN] 
>  hbase ns2,ns2:tbl1,,: [Permission: 
> actions=READ,WRITE,EXEC,CREATE,ADMIN] 
> 9 row(s) in 1.8090 seconds
> As you can see user test_user does not appear in the output, but we can see 
> the permission in the ACL table. 
> hbase(main):001:0>  scan 'hbase:acl'
> ROWCOLUMN+CELL
> 
>  @finance  column=l:sh82993, timestamp=105519510, 
> value=RWXCA 
>  @gcbcppdn column=l:hdfs, timestamp=1446141119602, 
> value=RWCXA
>  @hbasecolumn=l:hdfs, timestamp=1446141485136, 
> value=RWCAX
>  @ns1  column=l:@hbaseappltest_ns1admin, 
> timestamp=1447437007467, value=RWXCA 
>  @ns1  column=l:@hbaseappltest_ns1funct, 
> timestamp=1447427366835, value=RWX   
>  @ns2  column=l:@hbaseappltest_ns2admin, 
> timestamp=1446674470456, value=XCA   
>  @ns2  column=l:test_user, 
> timestamp=1447692840030, value=RWAC   
>  
>  @ns3  column=l:test_user, 
> timestamp=1447692860434, value=RWXAC  
>  
>  finance:emp   column=l:sh82993, timestamp=107723316, 
> value=RWXCA 
>  hbase:acl column=l:@hbaseglobaldba, 
> timestamp=1446590375370, value=XCA   
>  hbase:acl column=l:@hbaseglobaloper, 
> timestamp=1446590387965, value=XA   
>  hbase:acl column=l:hdfs, timestamp=1446141737213, 
> value=RWCAX  

[jira] [Commented] (HBASE-15790) Force "hbase" ownership on bulkload

2016-05-21 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15295388#comment-15295388
 ] 

Jerry He commented on HBASE-15790:
--

Just thought of another question.
Does this mean 'hbase' has to be an HDFS superuser now, in order to use setOwner()?

> Force "hbase" ownership on bulkload
> ---
>
> Key: HBASE-15790
> URL: https://issues.apache.org/jira/browse/HBASE-15790
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 2.0.0, 1.2.1, 1.1.4, 0.98.19
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Attachments: HBASE-15790-v0.patch, HBASE-15790-v1.patch, 
> HBASE-15790-v2.patch
>
>
> When a user other than "hbase" bulk-loads files, in general we end up with 
> files owned by a user other than hbase. Sometimes this causes problems with 
> hbase not being able to move files around for archiving/deleting.
> A simple solution is probably to change the ownership of the files to "hbase" 
> during bulkload.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15790) Force "hbase" ownership on bulkload

2016-05-21 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15295406#comment-15295406
 ] 

Jerry He commented on HBASE-15790:
--

Hmm.  My impression is that only the HDFS superuser can chown, like root in the 
old Unix world, with some specific exceptions.
But your UT passed ...

> Force "hbase" ownership on bulkload
> ---
>
> Key: HBASE-15790
> URL: https://issues.apache.org/jira/browse/HBASE-15790
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 2.0.0, 1.2.1, 1.1.4, 0.98.19
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Attachments: HBASE-15790-v0.patch, HBASE-15790-v1.patch, 
> HBASE-15790-v2.patch
>
>
> When a user other than "hbase" bulk-loads files, in general we end up with 
> files owned by a user other than hbase. Sometimes this causes problems with 
> hbase not being able to move files around for archiving/deleting.
> A simple solution is probably to change the ownership of the files to "hbase" 
> during bulkload.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15790) Force "hbase" ownership on bulkload

2016-05-21 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15295407#comment-15295407
 ] 

Jerry He commented on HBASE-15790:
--

Ok. The UT can be explained.
The user running the build starts the HDFS minicluster, so it is already the 
HDFS superuser.

> Force "hbase" ownership on bulkload
> ---
>
> Key: HBASE-15790
> URL: https://issues.apache.org/jira/browse/HBASE-15790
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 2.0.0, 1.2.1, 1.1.4, 0.98.19
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Attachments: HBASE-15790-v0.patch, HBASE-15790-v1.patch, 
> HBASE-15790-v2.patch
>
>
> When a user other than "hbase" bulk-loads files, in general we end up with 
> files owned by a user other than hbase. Sometimes this causes problems with 
> hbase not being able to move files around for archiving/deleting.
> A simple solution is probably to change the ownership of the files to "hbase" 
> during bulkload.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15790) Force "hbase" ownership on bulkload

2016-05-26 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15302690#comment-15302690
 ] 

Jerry He commented on HBASE-15790:
--

On patch v3, checking for 777 is probably not good.
For example, the user (say 'hive') and 'hbase' are in the same group, and the 
permission is rw for the group.
This case works currently, but after v3 we would throw an exception?
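The group case can be sketched with plain POSIX permission bits; this is an illustration of the argument, not code from any of the patches:

```java
// Plain-Java model of the POSIX-style check discussed above: requiring mode 777
// is stricter than necessary. If the mover (e.g. the 'hbase' user) shares the
// file's group and the group bits grant read/write, the move already works.
public class PermCheck {
    // mode holds the octal permission bits; the flags say how the mover
    // relates to the file. Illustration only.
    static boolean canReadWrite(int mode, boolean isOwner, boolean inGroup) {
        if (isOwner) return (mode & 0600) == 0600;   // owner rw bits
        if (inGroup) return (mode & 0060) == 0060;   // group rw bits
        return (mode & 0006) == 0006;                // other rw bits
    }

    public static void main(String[] args) {
        // 0660 (owner rw, group rw): enough for an 'hbase' user in the same
        // group, even though it is not 0777.
        System.out.println(canReadWrite(0660, false, true));
        // 0600: a group member cannot read/write.
        System.out.println(canReadWrite(0600, false, true));
    }
}
```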

> Force "hbase" ownership on bulkload
> ---
>
> Key: HBASE-15790
> URL: https://issues.apache.org/jira/browse/HBASE-15790
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 2.0.0, 1.2.1, 1.1.4, 0.98.19
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Attachments: HBASE-15790-v0.patch, HBASE-15790-v1.patch, 
> HBASE-15790-v2.patch, HBASE-15790-v3.patch
>
>
> When a user other than "hbase" bulk-loads files, in general we end up with 
> files owned by a user other than hbase. Sometimes this causes problems with 
> hbase not being able to move files around for archiving/deleting.
> A simple solution is probably to change the ownership of the files to "hbase" 
> during bulkload.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15790) Force "hbase" ownership on bulkload

2016-05-26 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15303139#comment-15303139
 ] 

Jerry He commented on HBASE-15790:
--

Could you give a specific example of the permission problem from an hbase bulk 
load?

Yes. The SecureBulkLoad is the way to go. I have a JIRA to get it unified and 
made the default, HBASE-13701. I think I should start working on it ...

> Force "hbase" ownership on bulkload
> ---
>
> Key: HBASE-15790
> URL: https://issues.apache.org/jira/browse/HBASE-15790
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 2.0.0, 1.2.1, 1.1.4, 0.98.19
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Attachments: HBASE-15790-v0.patch, HBASE-15790-v1.patch, 
> HBASE-15790-v2.patch, HBASE-15790-v3.patch
>
>
> When a user other than "hbase" bulk-loads files, in general we end up with 
> files owned by a user other than hbase. Sometimes this causes problems with 
> hbase not being able to move files around for archiving/deleting.
> A simple solution is probably to change the ownership of the files to "hbase" 
> during bulkload.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15790) Force "hbase" ownership on bulkload

2016-05-26 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15303181#comment-15303181
 ] 

Jerry He commented on HBASE-15790:
--

Hmm. Then how could the files be moved into hbase by bulk load?  Would the bulk 
load fail in this case?  If it is a rename, hbase needs write permission on the 
files. If it is a copy, hbase needs read, but then the copy would change the 
owner.

> Force "hbase" ownership on bulkload
> ---
>
> Key: HBASE-15790
> URL: https://issues.apache.org/jira/browse/HBASE-15790
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 2.0.0, 1.2.1, 1.1.4, 0.98.19
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Attachments: HBASE-15790-v0.patch, HBASE-15790-v1.patch, 
> HBASE-15790-v2.patch, HBASE-15790-v3.patch
>
>
> When a user other than "hbase" bulk-loads files, in general we end up with 
> files owned by a user other than hbase. Sometimes this causes problems with 
> hbase not being able to move files around for archiving/deleting.
> A simple solution is probably to change the ownership of the files to "hbase" 
> during bulkload.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-20565) ColumnRangeFilter combined with ColumnPaginationFilter can produce incorrect result since 1.4

2018-05-10 Thread Jerry He (JIRA)
Jerry He created HBASE-20565:


 Summary: ColumnRangeFilter combined with ColumnPaginationFilter 
can produce incorrect result since 1.4
 Key: HBASE-20565
 URL: https://issues.apache.org/jira/browse/HBASE-20565
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 1.4.4
Reporter: Jerry He


When ColumnPaginationFilter is combined with ColumnRangeFilter, we may see 
incorrect result.

Here is a simple example.

One row with 10 columns c0, c1, c2, .., c9.  I have a ColumnRangeFilter for 
range c2 to c9.  Then I have a ColumnPaginationFilter with limit 5 and offset 
0.  The FilterList is FilterList(Operator.MUST_PASS_ALL, ColumnRangeFilter, 
ColumnPaginationFilter).
We expect 5 columns to be returned.  But in HBase 1.4 and after, 4 columns are 
returned.
In 1.2.x, the correct 5 columns are returned.
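The expected semantics can be modeled in plain Java (illustration only, not HBase code; the helper name is made up):

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java model of the expected MUST_PASS_ALL semantics: apply the column
// range first, then paginate over the columns that survive it.
public class FilterModel {
    // Columns within [minInclusive, maxExclusive), then page (limit, offset).
    static List<String> rangeThenPaginate(List<String> cols, String minInclusive,
                                          String maxExclusive, int limit, int offset) {
        List<String> inRange = new ArrayList<>();
        for (String c : cols) {
            if (c.compareTo(minInclusive) >= 0 && c.compareTo(maxExclusive) < 0) {
                inRange.add(c);
            }
        }
        int from = Math.min(offset, inRange.size());
        int to = Math.min(offset + limit, inRange.size());
        return new ArrayList<>(inRange.subList(from, to));
    }

    public static void main(String[] args) {
        List<String> cols = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            cols.add("c" + i);  // one row with columns c0..c9
        }
        // Range c2 (inclusive) to c9 (exclusive), limit 5, offset 0:
        // the expected result is the 5 columns c2..c6.
        System.out.println(rangeThenPaginate(cols, "c2", "c9", 5, 0));
    }
}
```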





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20565) ColumnRangeFilter combined with ColumnPaginationFilter can produce incorrect result since 1.4

2018-05-10 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-20565:
-
Attachment: test-branch-1.4.patch

> ColumnRangeFilter combined with ColumnPaginationFilter can produce incorrect 
> result since 1.4
> -
>
> Key: HBASE-20565
> URL: https://issues.apache.org/jira/browse/HBASE-20565
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 1.4.4
>Reporter: Jerry He
>Priority: Major
> Attachments: test-branch-1.4.patch
>
>
> When ColumnPaginationFilter is combined with ColumnRangeFilter, we may see 
> incorrect result.
> Here is a simple example.
> One row with 10 columns c0, c1, c2, .., c9.  I have a ColumnRangeFilter for 
> range c2 to c9.  Then I have a ColumnPaginationFilter with limit 5 and offset 
> 0.  The FilterList is FilterList(Operator.MUST_PASS_ALL, ColumnRangeFilter, 
> ColumnPaginationFilter).
> We expect 5 columns being returned.  But in HBase 1.4 and after, 4 columns 
> are returned.
> In 1.2.x, the correct 5 columns are returned.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20565) ColumnRangeFilter combined with ColumnPaginationFilter can produce incorrect result since 1.4

2018-05-10 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16471010#comment-16471010
 ] 

Jerry He commented on HBASE-20565:
--

I attached a test case that can be used to re-create the problem.  The test 
passes in branch 1.2, but fails in 1.4 and later.

> ColumnRangeFilter combined with ColumnPaginationFilter can produce incorrect 
> result since 1.4
> -
>
> Key: HBASE-20565
> URL: https://issues.apache.org/jira/browse/HBASE-20565
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 1.4.4
>Reporter: Jerry He
>Priority: Major
> Attachments: test-branch-1.4.patch
>
>
> When ColumnPaginationFilter is combined with ColumnRangeFilter, we may see 
> incorrect result.
> Here is a simple example.
> One row with 10 columns c0, c1, c2, .., c9.  I have a ColumnRangeFilter for 
> range c2 to c9.  Then I have a ColumnPaginationFilter with limit 5 and offset 
> 0.  The FilterList is FilterList(Operator.MUST_PASS_ALL, ColumnRangeFilter, 
> ColumnPaginationFilter).
> We expect 5 columns being returned.  But in HBase 1.4 and after, 4 columns 
> are returned.
> In 1.2.x, the correct 5 columns are returned.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20565) ColumnRangeFilter combined with ColumnPaginationFilter can produce incorrect result since 1.4

2018-05-10 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16471024#comment-16471024
 ] 

Jerry He commented on HBASE-20565:
--

Some research shows it is caused by HBASE-18993. 
FYI [~openinx], [~apurtell]

> ColumnRangeFilter combined with ColumnPaginationFilter can produce incorrect 
> result since 1.4
> -
>
> Key: HBASE-20565
> URL: https://issues.apache.org/jira/browse/HBASE-20565
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 1.4.4
>Reporter: Jerry He
>Priority: Major
> Attachments: test-branch-1.4.patch
>
>
> When ColumnPaginationFilter is combined with ColumnRangeFilter, we may see 
> incorrect result.
> Here is a simple example.
> One row with 10 columns c0, c1, c2, .., c9.  I have a ColumnRangeFilter for 
> range c2 to c9.  Then I have a ColumnPaginationFilter with limit 5 and offset 
> 0.  The FilterList is FilterList(Operator.MUST_PASS_ALL, ColumnRangeFilter, 
> ColumnPaginationFilter).
> We expect 5 columns being returned.  But in HBase 1.4 and after, 4 columns 
> are returned.
> In 1.2.x, the correct 5 columns are returned.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20565) ColumnRangeFilter combined with ColumnPaginationFilter can produce incorrect result since 1.4

2018-05-11 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16472774#comment-16472774
 ] 

Jerry He commented on HBASE-20565:
--

Hi [~openinx], thanks for taking a look!

I did a little more testing with the test patch.  Here is what I got:

Branch 1.4:
---
ColumnRangeFilter StringRange("0", true, "9", false), ColumnPaginationFilter(5, 0)
Result:  0, 1, 2, 3, 4

StringRange("1", true, "9", false), ColumnPaginationFilter(5, 0)
Result:  1, 2, 3, 4

StringRange("3", true, "9", false), ColumnPaginationFilter(5, 0)
Result:  3, 4, 5, 6

StringRange("0", true, "9", false), ColumnPaginationFilter(5, 1)
Result:  1, 2, 3, 4, 5

StringRange("1", true, "9", false), ColumnPaginationFilter(5, 1)
Result:  1, 2, 3, 4, 5

StringRange("3", true, "9", false), ColumnPaginationFilter(5, 1)
Result:  3, 4, 5, 6, 7

Branch 1.2:
---
ColumnRangeFilter StringRange("0", true, "9", false), ColumnPaginationFilter(5, 0)
Result:  0, 1, 2, 3, 4

StringRange("1", true, "9", false), ColumnPaginationFilter(5, 0)
Result:  1, 2, 3, 4, 5

StringRange("3", true, "9", false), ColumnPaginationFilter(5, 0)
Result:  3, 4, 5, 6, 7

StringRange("0", true, "9", false), ColumnPaginationFilter(5, 1)
Result:  1, 2, 3, 4, 5

StringRange("1", true, "9", false), ColumnPaginationFilter(5, 1)
Result:  2, 3, 4, 5, 6

StringRange("3", true, "9", false), ColumnPaginationFilter(5, 1)
Result:  4, 5, 6, 7, 8

> ColumnRangeFilter combined with ColumnPaginationFilter can produce incorrect 
> result since 1.4
> -
>
> Key: HBASE-20565
> URL: https://issues.apache.org/jira/browse/HBASE-20565
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 1.4.4
>Reporter: Jerry He
>Assignee: Zheng Hu
>Priority: Major
> Attachments: debug.diff, debug.log, test-branch-1.4.patch
>
>
> When ColumnPaginationFilter is combined with ColumnRangeFilter, we may see 
> incorrect result.
> Here is a simple example.
> One row with 10 columns c0, c1, c2, .., c9.  I have a ColumnRangeFilter for 
> range c2 to c9.  Then I have a ColumnPaginationFilter with limit 5 and offset 
> 0.  The FilterList is FilterList(Operator.MUST_PASS_ALL, ColumnRangeFilter, 
> ColumnPaginationFilter).
> We expect 5 columns being returned.  But in HBase 1.4 and after, 4 columns 
> are returned.
> In 1.2.x, the correct 5 columns are returned.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20565) ColumnRangeFilter combined with ColumnPaginationFilter can produce incorrect result since 1.4

2018-05-11 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16472775#comment-16472775
 ] 

Jerry He commented on HBASE-20565:
--

You can see the results from branch-1.2 make more sense and correctly apply 
the ColumnPaginationFilter's offset of either 0 or 1 relative to the 
ColumnRangeFilter's range.

> ColumnRangeFilter combined with ColumnPaginationFilter can produce incorrect 
> result since 1.4
> -
>
> Key: HBASE-20565
> URL: https://issues.apache.org/jira/browse/HBASE-20565
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 1.4.4
>Reporter: Jerry He
>Assignee: Zheng Hu
>Priority: Major
> Attachments: debug.diff, debug.log, test-branch-1.4.patch
>
>
> When ColumnPaginationFilter is combined with ColumnRangeFilter, we may see 
> incorrect result.
> Here is a simple example.
> One row with 10 columns c0, c1, c2, .., c9.  I have a ColumnRangeFilter for 
> range c2 to c9.  Then I have a ColumnPaginationFilter with limit 5 and offset 
> 0.  The FilterList is FilterList(Operator.MUST_PASS_ALL, ColumnRangeFilter, 
> ColumnPaginationFilter).
> We expect 5 columns being returned.  But in HBase 1.4 and after, 4 columns 
> are returned.
> In 1.2.x, the correct 5 columns are returned.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HBASE-19145) Look into hbase-2 client going to hbase-1 server

2018-08-07 Thread Jerry He (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-19145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He reassigned HBASE-19145:


Assignee: Jerry He

> Look into hbase-2 client going to hbase-1 server
> 
>
> Key: HBASE-19145
> URL: https://issues.apache.org/jira/browse/HBASE-19145
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0-beta-1
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Major
>
> From the "[DISCUSS] hbase-2.0.0 compatibility expectations" thread.
> Do we support hbase-2 client going against hbase-1 server?
> We seem to be fine mixing and matching the clients and servers within the
> hbase-1 releases.  And IIRC an hbase-1 client is ok against a 0.98 server.
> Suppose I have a product that depends on and bundles the HBase client. I
> want to upgrade the dependency to hbase-2 so that it can take
> advantage of and claim support for hbase-2.
> But does it mean that I will need to drop the claim that the new version
> of the product supports any hbase-1 backend?
> It has not been an objective. It might work doing basic Client API calls on a
> later branch-1 but will fail doing Admin functions (and figuring out if a Table
> is online).  If it is a small thing to make it
> work, let's get it in.
> Let's look into it to see what works and what not.  Have a statement at least.





[jira] [Commented] (HBASE-19145) Look into hbase-2 client going to hbase-1 server

2018-08-07 Thread Jerry He (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16572648#comment-16572648
 ] 

Jerry He commented on HBASE-19145:
--

Putting some results here with the latest HBase 2.1.0 client going to a 1.4.6 
server.

Simple put, delete, scan work ok.

But some of the Admin APIs will fail.  The main reason is that in HBase 2.x, 
the table state is kept in meta. A 2.x client will therefore always ask the 
server for the table state from meta, but a 1.x server does not have it in meta.
{code:java}
org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column family 
table does not exist in region hbase:meta,,1.1588230740 in table 'hbase:meta', 
{TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => 
'|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}, 
{NAME => 'info', BLOOMFILTER => 'NONE', VERSIONS => '3', IN_MEMORY => 'true', 
KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', 
COMPRESSION => 'NONE', CACHE_DATA_IN_L1 => 'true', MIN_VERSIONS => '0', 
BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}
at org.apache.hadoop.hbase.regionserver.HRegion.checkFamily(HRegion.java:8298)
at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7306)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2259)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36609)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2354){code}
See HBASE-12035.

Any API asking for the status of the table will fail: disable, enable, flush, 
alter, exists, clone_snapshot, etc.

This was not extensive testing.  There does not seem to be a need or 
requirement to make this combination work, so I am closing this task for now.
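
The meta incompatibility above can be sketched with a small, purely
hypothetical model (plain Python, not HBase code): a 2.x-style client looks up
table state in a 'table' column family of hbase:meta, which a 1.x meta region
(only an 'info' family, per the table descriptor in the error message) does
not have. The names below are illustrative stand-ins, not real HBase APIs.

```python
# Hypothetical model of the 2.x-client-vs-1.x-server failure -- not HBase code.
META_FAMILIES_1X = {"info"}            # families in a 1.x hbase:meta region
META_FAMILIES_2X = {"info", "table"}   # 2.x adds a 'table' family for state

class NoSuchColumnFamily(Exception):
    """Stand-in for NoSuchColumnFamilyException."""

def get_table_state(meta_families, state_store, table):
    # A 2.x client reads table state from the 'table' family in meta.
    if "table" not in meta_families:
        raise NoSuchColumnFamily("Column family table does not exist in hbase:meta")
    return state_store.get(table, "ENABLED")

# Against a 2.x server the lookup works ...
assert get_table_state(META_FAMILIES_2X, {"t1": "DISABLED"}, "t1") == "DISABLED"

# ... against a 1.x server it fails, which is why state-dependent Admin calls
# (disable, enable, flush, alter, exists, clone_snapshot) break while plain
# put/delete/scan, which never consult table state, still work.
try:
    get_table_state(META_FAMILIES_1X, {}, "t1")
    failed = False
except NoSuchColumnFamily:
    failed = True
assert failed
```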

> Look into hbase-2 client going to hbase-1 server
> 
>
> Key: HBASE-19145
> URL: https://issues.apache.org/jira/browse/HBASE-19145
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0-beta-1
>Reporter: Jerry He
>Priority: Major
>
> From the "[DISCUSS] hbase-2.0.0 compatibility expectations" thread.
> Do we support hbase-2 client going against hbase-1 server?
> We seem to be fine mixing and matching clients and servers within the
> hbase-1 releases.   And IIRC an hbase-1 client is ok against a 0.98 server.
> Suppose I have a product that depends on and bundles the HBase client. I
> want to upgrade the dependency to hbase-2 so that it can take
> advantage of and claim support for hbase-2.
> But does it mean that I will need to drop the claim that the new version
> of the product supports any hbase-1 backend?
> It has not been an objective. It might work for basic Client API calls on a
> later branch-1 but will fail for Admin functions (and figuring out if a Table
> is online).  If it was a small thing to make it
> work, let's get it in.
> Let's look into it to see what works and what does not.  Have a statement at least.





[jira] [Resolved] (HBASE-19145) Look into hbase-2 client going to hbase-1 server

2018-08-07 Thread Jerry He (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-19145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He resolved HBASE-19145.
--
Resolution: Done

> Look into hbase-2 client going to hbase-1 server
> 
>
> Key: HBASE-19145
> URL: https://issues.apache.org/jira/browse/HBASE-19145
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0-beta-1
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Major
>
> From the "[DISCUSS] hbase-2.0.0 compatibility expectations" thread.
> Do we support hbase-2 client going against hbase-1 server?
> We seem to be fine mixing and matching clients and servers within the
> hbase-1 releases.   And IIRC an hbase-1 client is ok against a 0.98 server.
> Suppose I have a product that depends on and bundles the HBase client. I
> want to upgrade the dependency to hbase-2 so that it can take
> advantage of and claim support for hbase-2.
> But does it mean that I will need to drop the claim that the new version
> of the product supports any hbase-1 backend?
> It has not been an objective. It might work for basic Client API calls on a
> later branch-1 but will fail for Admin functions (and figuring out if a Table
> is online).  If it was a small thing to make it
> work, let's get it in.
> Let's look into it to see what works and what does not.  Have a statement at least.





[jira] [Commented] (HBASE-19557) Build and release source jars for hbase-shaded-client and others

2018-03-29 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16419860#comment-16419860
 ] 

Jerry He commented on HBASE-19557:
--

[~busbey] They are empty with only meta info in the artifacts, IIRC.

> Build and release source jars for hbase-shaded-client and others
> 
>
> Key: HBASE-19557
> URL: https://issues.apache.org/jira/browse/HBASE-19557
> Project: HBase
>  Issue Type: Sub-task
>  Components: shading
>Affects Versions: 1.3.1, 1.2.6
>Reporter: Jerry He
>Priority: Major
>
> It seems that currently we don't build and release source jars for 
> hbase-shaded-client (and server or mapreduce).  IDEs on the dependent users' 
> side will complain. We should provide them.
> http://central.maven.org/maven2/org/apache/hbase/hbase-shaded-client/1.3.1/





[jira] [Commented] (HBASE-20565) ColumnRangeFilter combined with ColumnPaginationFilter can produce incorrect result since 1.4

2018-07-11 Thread Jerry He (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16541187#comment-16541187
 ] 

Jerry He commented on HBASE-20565:
--

[~openinx] Is there any good news on this one?  I think we need to fix it.  
Thanks.

> ColumnRangeFilter combined with ColumnPaginationFilter can produce incorrect 
> result since 1.4
> -
>
> Key: HBASE-20565
> URL: https://issues.apache.org/jira/browse/HBASE-20565
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 1.4.4
>Reporter: Jerry He
>Assignee: Zheng Hu
>Priority: Major
> Attachments: debug.diff, debug.log, test-branch-1.4.patch
>
>
> When ColumnPaginationFilter is combined with ColumnRangeFilter, we may see 
> incorrect result.
> Here is a simple example.
> One row with 10 columns c0, c1, c2, .., c9.  I have a ColumnRangeFilter for 
> range c2 to c9.  Then I have a ColumnPaginationFilter with limit 5 and offset 
> 0.  The FilterList is FilterList(Operator.MUST_PASS_ALL, ColumnRangeFilter, 
> ColumnPaginationFilter).
> We expect 5 columns to be returned.  But in HBase 1.4 and after, 4 columns 
> are returned.
> In 1.2.x, the correct 5 columns are returned.





[jira] [Commented] (HBASE-20565) ColumnRangeFilter combined with ColumnPaginationFilter can produce incorrect result since 1.4

2018-07-15 Thread Jerry He (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16544805#comment-16544805
 ] 

Jerry He commented on HBASE-20565:
--

[~openinx] Thanks for the update.
I can adjust the application to use ColumnRangeFilter(startColumn, endColumn) 
and ColumnPaginationFilter(limit, column-name-offset). That is ok to do, and 
easy when starting from the beginning, since the column-name-offset is the 
startColumn.  But after that we will have to know the last retrieved column 
name to set up the next round.
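
The cursor-style workaround described in the comment above can be sketched as
follows (plain Python over an in-memory column list, purely illustrative, not
HBase client code): each round scans the range starting from just past the
last retrieved column name and takes up to `limit` columns.

```python
# Illustrative sketch of paginating a column range by remembering the last
# retrieved column name and using it as the next round's start offset.
ROW = ["c%d" % i for i in range(10)]          # one row, columns c0..c9

def fetch_page(columns, start, end, offset, limit):
    # ColumnRangeFilter(start, end) intersected with a page of `limit`
    # columns beginning at `offset` (a column name, not a numeric index).
    in_range = [c for c in columns if start <= c <= end]
    return [c for c in in_range if c >= offset][:limit]

pages, offset = [], "c2"                      # first offset == startColumn
while True:
    page = fetch_page(ROW, "c2", "c9", offset, 5)
    if not page:
        break
    pages.append(page)
    # next round starts just past the last retrieved column name
    offset = page[-1] + "\x00"                # smallest string > last column

assert pages == [["c2", "c3", "c4", "c5", "c6"], ["c7", "c8", "c9"]]
```

The extra bookkeeping is exactly the drawback noted above: the caller must
carry the last column name between rounds instead of a simple numeric offset.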

> ColumnRangeFilter combined with ColumnPaginationFilter can produce incorrect 
> result since 1.4
> -
>
> Key: HBASE-20565
> URL: https://issues.apache.org/jira/browse/HBASE-20565
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 1.4.4
>Reporter: Jerry He
>Assignee: Zheng Hu
>Priority: Major
> Attachments: debug.diff, debug.log, test-branch-1.4.patch
>
>
> When ColumnPaginationFilter is combined with ColumnRangeFilter, we may see 
> incorrect result.
> Here is a simple example.
> One row with 10 columns c0, c1, c2, .., c9.  I have a ColumnRangeFilter for 
> range c2 to c9.  Then I have a ColumnPaginationFilter with limit 5 and offset 
> 0.  The FilterList is FilterList(Operator.MUST_PASS_ALL, ColumnRangeFilter, 
> ColumnPaginationFilter).
> We expect 5 columns to be returned.  But in HBase 1.4 and after, 4 columns 
> are returned.
> In 1.2.x, the correct 5 columns are returned.





[jira] [Commented] (HBASE-20565) ColumnRangeFilter combined with ColumnPaginationFilter can produce incorrect result since 1.4

2018-07-19 Thread Jerry He (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16550281#comment-16550281
 ] 

Jerry He commented on HBASE-20565:
--

{quote}place the count-related filters at the last position
{quote}
{quote}ColumnPaginationFilter is order dependence filter
{quote}
Makes sense. I would think people use it last anyway. Thanks for the fix and 
explanation.

+1
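
Why a stateful, count-based filter must come last can be illustrated with a
toy model (plain Python; this is not the actual HBase 1.4 code path, which
involves FilterList seek-hint handling and returned 4 columns, but it shows
the order dependence): if the counting filter is consulted first, it burns
its count on cells that a later filter rejects.

```python
# Toy model of MUST_PASS_ALL filter ordering -- not HBase code.
COLUMNS = ["c%d" % i for i in range(10)]      # one row, columns c0..c9

def range_filter(col):                        # models ColumnRangeFilter [c2, c9]
    return "c2" <= col <= "c9"

def make_pagination_filter(limit, offset):    # models ColumnPaginationFilter(5, 0)
    state = {"count": 0}
    def f(col):
        ok = offset <= state["count"] < offset + limit
        state["count"] += 1                   # stateful: counts every cell it sees
        return ok
    return f

def scan(filters):
    result = []
    for col in COLUMNS:
        # MUST_PASS_ALL with short-circuit: later filters are not consulted
        # for a cell an earlier filter already rejected.
        if all(f(col) for f in filters):
            result.append(col)
    return result

# Pagination last: only in-range cells reach the counter -> the expected 5.
good = scan([range_filter, make_pagination_filter(5, 0)])
assert good == ["c2", "c3", "c4", "c5", "c6"]

# Pagination first: c0 and c1 consume the count before the range filter
# rejects them -> fewer than 5 columns come back.
bad = scan([make_pagination_filter(5, 0), range_filter])
assert bad == ["c2", "c3", "c4"]
```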

> ColumnRangeFilter combined with ColumnPaginationFilter can produce incorrect 
> result since 1.4
> -
>
> Key: HBASE-20565
> URL: https://issues.apache.org/jira/browse/HBASE-20565
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 2.1.0, 1.4.4, 2.0.1
>Reporter: Jerry He
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 2.1.0, 1.5.0, 1.4.6, 2.0.2
>
> Attachments: HBASE-20565.v1.patch, HBASE-20565.v2.patch, debug.diff, 
> debug.log, test-branch-1.4.patch
>
>
> When ColumnPaginationFilter is combined with ColumnRangeFilter, we may see 
> incorrect result.
> Here is a simple example.
> One row with 10 columns c0, c1, c2, .., c9.  I have a ColumnRangeFilter for 
> range c2 to c9.  Then I have a ColumnPaginationFilter with limit 5 and offset 
> 0.  The FilterList is FilterList(Operator.MUST_PASS_ALL, ColumnRangeFilter, 
> ColumnPaginationFilter).
> We expect 5 columns to be returned.  But in HBase 1.4 and after, 4 columns 
> are returned.
> In 1.2.x, the correct 5 columns are returned.





[jira] [Created] (HBASE-21008) HBase 1.x can not read HBase2 hfiles due to TimeRangeTracker

2018-08-03 Thread Jerry He (JIRA)
Jerry He created HBASE-21008:


 Summary: HBase 1.x can not read HBase2 hfiles due to 
TimeRangeTracker
 Key: HBASE-21008
 URL: https://issues.apache.org/jira/browse/HBASE-21008
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.4.6, 2.1.0
Reporter: Jerry He


It looks like HBase 1.x still cannot open hfiles written by HBase 2.

I tested the latest HBase 1.4.6 and 2.1.0.  1.4.6 tried to read and open 
regions written by 2.1.0.

{code}
2018-07-30 16:01:31,274 ERROR [StoreFileOpenerThread-info-1] 
regionserver.StoreFile: Error reading timestamp range data from meta -- 
proceeding without
java.lang.IllegalArgumentException: Timestamp cannot be negative. 
minStamp:5783278630776778969, maxStamp:-4698050386518222402
at org.apache.hadoop.hbase.io.TimeRange.check(TimeRange.java:112)
at org.apache.hadoop.hbase.io.TimeRange.<init>(TimeRange.java:100)
at 
org.apache.hadoop.hbase.regionserver.TimeRangeTracker.toTimeRange(TimeRangeTracker.java:214)
at 
org.apache.hadoop.hbase.regionserver.TimeRangeTracker.getTimeRange(TimeRangeTracker.java:198)
at 
org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:507)
at 
org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:531)
at 
org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:521)
at 
org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:679)
at 
org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:122)
at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:538)
at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:535)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
{code}
Or:
{code}
2018-07-30 16:01:31,305 ERROR [RS_OPEN_REGION-throb1:34004-0] 
handler.OpenRegionHandler: Failed open of 
region=janusgraph,,1532630557542.b0fa15cb0bf1b0bf740997b7056c., starting to 
roll back the global memstore size.
java.io.IOException: java.io.IOException: java.io.EOFException
at 
org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1033)
at 
org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:908)
at 
org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:876)
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6995)
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6956)
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6927)
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6883)
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6834)
at 
org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:364)
at 
org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:131)
at 
org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: java.io.EOFException
at 
org.apache.hadoop.hbase.regionserver.HStore.openStoreFiles(HStore.java:564)
at 
org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:518)
at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:281)
at 
org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:5378)
at 
org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1007)
at 
org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1004)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
... 3 more
Caused by: java.io.EOFException
at java.io.DataInputStream.readFully(DataInputStream.java:197)
at java.io.DataInputStream.readLong(DataInputStream.java:416)
at 
org.apache.hadoop.hbase.regionserver.TimeRangeTracker.readFields(TimeRangeTracker.java:170)
at 
org.apache.hadoop.hbase.util.Writables.copyWritable(Writables.java:161)
at 
org.apache.hadoop.hbase.regionserver.TimeRangeTracker.getTimeRangeTracker(TimeRangeTracker.java:187)
at 
org.apache.hadoop.hbase.regionserver.TimeRangeTracker.getTimeRange(TimeRangeTracker.java:197)
at 
org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:507)
at 
org.a

[jira] [Commented] (HBASE-21008) HBase 1.x can not read HBase2 hfiles due to TimeRangeTracker

2018-08-03 Thread Jerry He (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16568513#comment-16568513
 ] 

Jerry He commented on HBASE-21008:
--

The problem seems to come from HBASE-18754, which removed the Writable 
serialization of TimeRangeTracker and added a protobuf 
HBaseProtos.TimeRangeTracker instead. HBase 1.x is not able to read the 
protobuf-serialized TimeRangeTracker in hfiles.
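
The mismatch can be sketched in a few lines (plain Python, illustrative only).
The old Writable format is two fixed-width big-endian longs (16 bytes), while
the protobuf message encodes the two timestamps as varint fields, so a 1.x
reader doing two raw readLong() calls either runs out of bytes (the
EOFException above) or decodes garbage (the negative maxStamp above). The
assumption here is that the protobuf message carries the two timestamps as
uint64 varints in fields 1 and 2.

```python
import struct

def encode_varint(n):
    # Standard protobuf base-128 varint encoding.
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        if n:
            out.append(b | 0x80)
        else:
            out.append(b)
            return bytes(out)

def encode_timerange_protobuf(min_ts, max_ts):
    # Assumed layout: field 1 (tag 0x08) and field 2 (tag 0x10), both varints.
    return b"\x08" + encode_varint(min_ts) + b"\x10" + encode_varint(max_ts)

def encode_timerange_writable(min_ts, max_ts):
    # Legacy Writable layout: two big-endian signed 64-bit longs.
    return struct.pack(">qq", min_ts, max_ts)

writable = encode_timerange_writable(1532630557542, 1532999999999)
assert len(writable) == 16                 # exactly what two readLong()s expect

pb = encode_timerange_protobuf(1532630557542, 1532999999999)
try:
    struct.unpack(">qq", pb)               # analog of TimeRangeTracker.readFields
    decoded = True
except struct.error:                       # analog of the EOFException above
    decoded = False
assert not decoded and len(pb) < 16        # varints are shorter than 16 bytes
```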

> HBase 1.x can not read HBase2 hfiles due to TimeRangeTracker
> 
>
> Key: HBASE-21008
> URL: https://issues.apache.org/jira/browse/HBASE-21008
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.1.0, 1.4.6
>Reporter: Jerry He
>Priority: Major
>
> It looks like HBase 1.x still cannot open hfiles written by HBase 2.
> I tested the latest HBase 1.4.6 and 2.1.0.  1.4.6 tried to read and open 
> regions written by 2.1.0.
> {code}
> 2018-07-30 16:01:31,274 ERROR [StoreFileOpenerThread-info-1] 
> regionserver.StoreFile: Error reading timestamp range data from meta -- 
> proceeding without
> java.lang.IllegalArgumentException: Timestamp cannot be negative. 
> minStamp:5783278630776778969, maxStamp:-4698050386518222402
> at org.apache.hadoop.hbase.io.TimeRange.check(TimeRange.java:112)
> at org.apache.hadoop.hbase.io.TimeRange.<init>(TimeRange.java:100)
> at 
> org.apache.hadoop.hbase.regionserver.TimeRangeTracker.toTimeRange(TimeRangeTracker.java:214)
> at 
> org.apache.hadoop.hbase.regionserver.TimeRangeTracker.getTimeRange(TimeRangeTracker.java:198)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:507)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:531)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:521)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:679)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:122)
> at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:538)
> at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:535)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> {code}
> Or:
> {code}
> 2018-07-30 16:01:31,305 ERROR [RS_OPEN_REGION-throb1:34004-0] 
> handler.OpenRegionHandler: Failed open of 
> region=janusgraph,,1532630557542.b0fa15cb0bf1b0bf740997b7056c., starting 
> to roll back the global memstore size.
> java.io.IOException: java.io.IOException: java.io.EOFException
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1033)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:908)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:876)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6995)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6956)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6927)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6883)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6834)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:364)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:131)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.IOException: java.io.EOFException
> at 
> org.apache.hadoop.hbase.regionserver.HStore.openStoreFiles(HStore.java:564)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:518)
> at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:281)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:5378)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1007)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1004)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> ... 3 more
> Caused by: java.io.EOFEx

[jira] [Commented] (HBASE-21008) HBase 1.x can not read HBase2 hfiles due to TimeRangeTracker

2018-08-03 Thread Jerry He (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16568532#comment-16568532
 ] 

Jerry He commented on HBASE-21008:
--

FYI [~chia7712].

> HBase 1.x can not read HBase2 hfiles due to TimeRangeTracker
> 
>
> Key: HBASE-21008
> URL: https://issues.apache.org/jira/browse/HBASE-21008
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.1.0, 1.4.6
>Reporter: Jerry He
>Priority: Major
>
> It looks like HBase 1.x still cannot open hfiles written by HBase 2.
> I tested the latest HBase 1.4.6 and 2.1.0.  1.4.6 tried to read and open 
> regions written by 2.1.0.
> {code}
> 2018-07-30 16:01:31,274 ERROR [StoreFileOpenerThread-info-1] 
> regionserver.StoreFile: Error reading timestamp range data from meta -- 
> proceeding without
> java.lang.IllegalArgumentException: Timestamp cannot be negative. 
> minStamp:5783278630776778969, maxStamp:-4698050386518222402
> at org.apache.hadoop.hbase.io.TimeRange.check(TimeRange.java:112)
> at org.apache.hadoop.hbase.io.TimeRange.<init>(TimeRange.java:100)
> at 
> org.apache.hadoop.hbase.regionserver.TimeRangeTracker.toTimeRange(TimeRangeTracker.java:214)
> at 
> org.apache.hadoop.hbase.regionserver.TimeRangeTracker.getTimeRange(TimeRangeTracker.java:198)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:507)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:531)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:521)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:679)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:122)
> at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:538)
> at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:535)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> {code}
> Or:
> {code}
> 2018-07-30 16:01:31,305 ERROR [RS_OPEN_REGION-throb1:34004-0] 
> handler.OpenRegionHandler: Failed open of 
> region=janusgraph,,1532630557542.b0fa15cb0bf1b0bf740997b7056c., starting 
> to roll back the global memstore size.
> java.io.IOException: java.io.IOException: java.io.EOFException
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1033)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:908)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:876)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6995)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6956)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6927)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6883)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6834)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:364)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:131)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.IOException: java.io.EOFException
> at 
> org.apache.hadoop.hbase.regionserver.HStore.openStoreFiles(HStore.java:564)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:518)
> at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:281)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:5378)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1007)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1004)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> ... 3 more
> Caused by: java.io.EOFException
> at java.io.DataInputStream.readFully(DataInputStream.java:197)
> at java.io.DataInputStream.readLong(DataInputStream.java:416)
> at 
> org.apache.hadoop.hbase.regionserver.TimeRa

[jira] [Commented] (HBASE-21008) HBase 1.x can not read HBase2 hfiles due to TimeRangeTracker

2018-08-03 Thread Jerry He (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16569045#comment-16569045
 ] 

Jerry He commented on HBASE-21008:
--

{quote}Perhaps we can backport a part of HBASE-18754 to all active 1.x branch 
in order to make them "can" read the hfiles generated by 2.x
{quote}
Yeah, only the 'read' part needs to be put into 1.x. This approach was 
similarly used in HBASE-16189 and HBASE-19052. However, in HBASE-19116, 
[~stack] made changes in 2.x so that a 1.x deployment no longer needs to 
upgrade to the latest 1.x to work.  What is preferred here?

> HBase 1.x can not read HBase2 hfiles due to TimeRangeTracker
> 
>
> Key: HBASE-21008
> URL: https://issues.apache.org/jira/browse/HBASE-21008
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.1.0, 1.4.6
>Reporter: Jerry He
>Priority: Major
>
> It looks like HBase 1.x still cannot open hfiles written by HBase 2.
> I tested the latest HBase 1.4.6 and 2.1.0.  1.4.6 tried to read and open 
> regions written by 2.1.0.
> {code}
> 2018-07-30 16:01:31,274 ERROR [StoreFileOpenerThread-info-1] 
> regionserver.StoreFile: Error reading timestamp range data from meta -- 
> proceeding without
> java.lang.IllegalArgumentException: Timestamp cannot be negative. 
> minStamp:5783278630776778969, maxStamp:-4698050386518222402
> at org.apache.hadoop.hbase.io.TimeRange.check(TimeRange.java:112)
> at org.apache.hadoop.hbase.io.TimeRange.<init>(TimeRange.java:100)
> at 
> org.apache.hadoop.hbase.regionserver.TimeRangeTracker.toTimeRange(TimeRangeTracker.java:214)
> at 
> org.apache.hadoop.hbase.regionserver.TimeRangeTracker.getTimeRange(TimeRangeTracker.java:198)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:507)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:531)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:521)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:679)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:122)
> at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:538)
> at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:535)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> {code}
> Or:
> {code}
> 2018-07-30 16:01:31,305 ERROR [RS_OPEN_REGION-throb1:34004-0] 
> handler.OpenRegionHandler: Failed open of 
> region=janusgraph,,1532630557542.b0fa15cb0bf1b0bf740997b7056c., starting 
> to roll back the global memstore size.
> java.io.IOException: java.io.IOException: java.io.EOFException
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1033)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:908)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:876)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6995)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6956)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6927)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6883)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6834)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:364)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:131)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.IOException: java.io.EOFException
> at 
> org.apache.hadoop.hbase.regionserver.HStore.openStoreFiles(HStore.java:564)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:518)
> at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:281)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:5378)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1007)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1004)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at

[jira] [Commented] (HBASE-21008) HBase 1.x can not read HBase2 hfiles due to TimeRangeTracker

2018-08-03 Thread Jerry He (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16569078#comment-16569078
 ] 

Jerry He commented on HBASE-21008:
--

This is good with me!

> HBase 1.x can not read HBase2 hfiles due to TimeRangeTracker
> 
>
> Key: HBASE-21008
> URL: https://issues.apache.org/jira/browse/HBASE-21008
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.1.0, 1.4.6
>Reporter: Jerry He
>Priority: Major
>
> It looks like HBase 1.x still cannot open hfiles written by HBase 2.
> I tested the latest HBase 1.4.6 and 2.1.0.  1.4.6 tried to read and open 
> regions written by 2.1.0.
> {code}
> 2018-07-30 16:01:31,274 ERROR [StoreFileOpenerThread-info-1] 
> regionserver.StoreFile: Error reading timestamp range data from meta -- 
> proceeding without
> java.lang.IllegalArgumentException: Timestamp cannot be negative. 
> minStamp:5783278630776778969, maxStamp:-4698050386518222402
> at org.apache.hadoop.hbase.io.TimeRange.check(TimeRange.java:112)
> at org.apache.hadoop.hbase.io.TimeRange.<init>(TimeRange.java:100)
> at 
> org.apache.hadoop.hbase.regionserver.TimeRangeTracker.toTimeRange(TimeRangeTracker.java:214)
> at 
> org.apache.hadoop.hbase.regionserver.TimeRangeTracker.getTimeRange(TimeRangeTracker.java:198)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:507)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:531)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:521)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:679)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:122)
> at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:538)
> at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:535)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> {code}
> Or:
> {code}
> 2018-07-30 16:01:31,305 ERROR [RS_OPEN_REGION-throb1:34004-0] 
> handler.OpenRegionHandler: Failed open of 
> region=janusgraph,,1532630557542.b0fa15cb0bf1b0bf740997b7056c., starting 
> to roll back the global memstore size.
> java.io.IOException: java.io.IOException: java.io.EOFException
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1033)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:908)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:876)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6995)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6956)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6927)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6883)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6834)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:364)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:131)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.IOException: java.io.EOFException
> at 
> org.apache.hadoop.hbase.regionserver.HStore.openStoreFiles(HStore.java:564)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:518)
> at org.apache.hadoop.hbase.regionserver.HStore.(HStore.java:281)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:5378)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1007)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1004)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> ... 3 more
> Caused by: java.io.EOFException
> at java.io.DataInputStream.readFully(DataInputStream.java:197)
> at java.io.DataInputStream.readLong(DataInputStream.java:416)
> at 
> org.apache.hadoop.hbase.regionserver.T

[jira] [Commented] (HBASE-21008) HBase 1.x can not read HBase2 hfiles due to TimeRangeTracker

2018-08-05 Thread Jerry He (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16569741#comment-16569741
 ] 

Jerry He commented on HBASE-21008:
--

I had the same question for you :) 

But go ahead. You will be faster than me. Thanks for the quick response on this 
issue!

> HBase 1.x can not read HBase2 hfiles due to TimeRangeTracker
> 
>
> Key: HBASE-21008
> URL: https://issues.apache.org/jira/browse/HBASE-21008
> Project: HBase
>  Issue Type: Bug
>  Components: compatibility, HFile
>Affects Versions: 2.1.0, 1.4.6
>Reporter: Jerry He
>Priority: Critical
>
> It looks like HBase 1.x still can not open hfiles written by HBase 2.
> I tested the latest HBase 1.4.6 and 2.1.0.  1.4.6 tried to read and open 
> regions written by 2.1.0.
> {code}
> 2018-07-30 16:01:31,274 ERROR [StoreFileOpenerThread-info-1] 
> regionserver.StoreFile: Error reading timestamp range data from meta -- 
> proceeding without
> java.lang.IllegalArgumentException: Timestamp cannot be negative. 
> minStamp:5783278630776778969, maxStamp:-4698050386518222402
> at org.apache.hadoop.hbase.io.TimeRange.check(TimeRange.java:112)
> at org.apache.hadoop.hbase.io.TimeRange.(TimeRange.java:100)
> at 
> org.apache.hadoop.hbase.regionserver.TimeRangeTracker.toTimeRange(TimeRangeTracker.java:214)
> at 
> org.apache.hadoop.hbase.regionserver.TimeRangeTracker.getTimeRange(TimeRangeTracker.java:198)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:507)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:531)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:521)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:679)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:122)
> at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:538)
> at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:535)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> {code}
> Or:
> {code}
> 2018-07-30 16:01:31,305 ERROR [RS_OPEN_REGION-throb1:34004-0] 
> handler.OpenRegionHandler: Failed open of 
> region=janusgraph,,1532630557542.b0fa15cb0bf1b0bf740997b7056c., starting 
> to roll back the global memstore size.
> java.io.IOException: java.io.IOException: java.io.EOFException
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1033)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:908)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:876)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6995)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6956)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6927)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6883)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6834)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:364)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:131)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.IOException: java.io.EOFException
> at 
> org.apache.hadoop.hbase.regionserver.HStore.openStoreFiles(HStore.java:564)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:518)
> at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:281)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:5378)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1007)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1004)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> ... 3 more
> Caused by: java.io.EOFException
> at java.io.DataInputStream.readFully(Da
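
The traces above can be illustrated with a self-contained sketch. The byte layouts below are toy stand-ins, not HBase's actual serialization formats: the point is only that a reader expecting two raw longs, fed data written with a different framing, decodes garbage timestamps.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class TimeRangeMismatch {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        // Writer side: a few header bytes precede the two timestamps
        // (standing in for a message-framed layout; not HBase's real format).
        out.write(new byte[] {0x0a, 0x12, 0x08});
        out.writeLong(1532630557542L); // real min timestamp
        out.writeLong(1532630999999L); // real max timestamp
        out.flush();

        // Reader side: assumes the old layout of exactly two raw longs.
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(buf.toByteArray()));
        long minStamp = in.readLong(); // misaligned: consumes header + part of min
        long maxStamp = in.readLong(); // remaining bytes reinterpreted as a long
        System.out.println("minStamp=" + minStamp + " maxStamp=" + maxStamp);
        // Both values are garbage; with these bytes maxStamp even comes out
        // negative, tripping TimeRange's "Timestamp cannot be negative" check.
    }
}
```

A differently framed stream can instead run out of bytes mid-read, which is the `EOFException` path in the second trace.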

[jira] [Commented] (HBASE-19120) IllegalArgumentException from ZNodeClearer when master shuts down

2017-10-30 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16225487#comment-16225487
 ] 

Jerry He commented on HBASE-19120:
--

+1

> IllegalArgumentException from ZNodeClearer when master shuts down
> -
>
> Key: HBASE-19120
> URL: https://issues.apache.org/jira/browse/HBASE-19120
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 19120.v1.txt
>
>
> Found the following in master log (build as of commit eee3b0) :
> {code}
> 2017-10-30 15:40:24,383 ERROR [main] util.ServerCommandLine: Failed to run
> java.lang.IllegalArgumentException: Path must start with / character
> at 
> org.apache.zookeeper.common.PathUtils.validatePath(PathUtils.java:51)
> at org.apache.zookeeper.ZooKeeper.delete(ZooKeeper.java:851)
> at 
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.delete(RecoverableZooKeeper.java:182)
> at 
> org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNodeFailSilent(ZKUtil.java:1266)
> at 
> org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNodeFailSilent(ZKUtil.java:1258)
> at org.apache.hadoop.hbase.ZNodeClearer.clear(ZNodeClearer.java:186)
> at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:143)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at 
> org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:127)
> at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2873)
> {code}
> Looking at ZNodeClearer, it seems the intention was to remove the znode under 
> the /rs subtree.
> However, the znode name was passed without its path.
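
A minimal standalone sketch of the apparent fix. The parent path and the `joinZNode` helper below are illustrative stand-ins for HBase's ZKUtil, and the server name is made up:

```java
public class ZNodePathSketch {
    // Stand-in for ZKUtil.joinZNode: ZooKeeper paths must be absolute,
    // so a bare znode name has to be joined to its parent subtree first.
    static String joinZNode(String parent, String name) {
        return parent + "/" + name;
    }

    public static void main(String[] args) {
        String rsZNode = "/hbase/rs";                     // assumed parent subtree
        String content = "host.example.com,16020,12345";  // bare server name, no path
        // Passing 'content' directly to ZooKeeper.delete() triggers
        // "Path must start with / character"; joining it first does not.
        String path = content.startsWith("/") ? content : joinZNode(rsZNode, content);
        System.out.println(path);
    }
}
```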



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19096) Add RowMutions batch support in AsyncTable

2017-10-30 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16225500#comment-16225500
 ] 

Jerry He commented on HBASE-19096:
--

Ping [~Apache9], [~zghaobac], or [~stack].  Can one of you give a quick review?

> Add RowMutions batch support in AsyncTable
> --
>
> Key: HBASE-19096
> URL: https://issues.apache.org/jira/browse/HBASE-19096
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Jerry He
>Assignee: Jerry He
> Fix For: 2.0.0
>
> Attachments: HBASE-19096-master.patch
>
>
> Batch support for RowMutations has been added in the Table interface, but is 
> not in AsyncTable. This JIRA will add it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HBASE-19003) Make sure all balancer actions respect decommissioned server

2017-10-30 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He reassigned HBASE-19003:


Assignee: Jerry He

> Make sure all balancer actions respect decommissioned server
> 
>
> Key: HBASE-19003
> URL: https://issues.apache.org/jira/browse/HBASE-19003
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Jerry He
>Assignee: Jerry He
> Fix For: 2.0.0-beta-1
>
>
> There have been questions raised in HBASE-10367 and other related JIRAs. We 
> want to make sure all aspects of the balancer respect the draining flag. We 
> will have a good look, and fix if any violation is found.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19003) Make sure all balancer actions respect decommissioned server

2017-10-30 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16225510#comment-16225510
 ] 

Jerry He commented on HBASE-19003:
--

It looks good overall. 
The balancer's retainAssignment, randomAssignment and roundRobinAssignment all 
take a list of servers as a parameter. 
We always call ServerManager.createDestinationServersList() to get the server 
list. This is a good list, considering only online servers and 
avoiding the draining servers.
The balancer's balanceCluster call has the draining servers removed from 
consideration to begin with.
Moreover, the assign phase checks the plan against the list obtained by 
ServerManager.createDestinationServersList(), which makes it doubly unlikely 
that a region is assigned to the wrong server.
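
The flow described above can be sketched as a toy filter. The names below are illustrative, not the actual ServerManager implementation:

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class DestinationServersSketch {
    public static void main(String[] args) {
        // Toy model of ServerManager.createDestinationServersList():
        // start from the online servers and drop the draining ones.
        List<String> online = List.of("rs1", "rs2", "rs3");
        Set<String> draining = Set.of("rs2");
        List<String> destinations = online.stream()
            .filter(s -> !draining.contains(s))
            .collect(Collectors.toList());
        System.out.println(destinations);
    }
}
```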

> Make sure all balancer actions respect decommissioned server
> 
>
> Key: HBASE-19003
> URL: https://issues.apache.org/jira/browse/HBASE-19003
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Jerry He
> Fix For: 2.0.0-beta-1
>
>
> There have been questions raised in HBASE-10367 and other related JIRAs. We 
> want to make sure all aspects of the balancer respect the draining flag. We 
> will have a good look, and fix if any violation is found.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HBASE-19003) Make sure all balancer actions respect decommissioned server

2017-10-30 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He resolved HBASE-19003.
--
Resolution: Done

> Make sure all balancer actions respect decommissioned server
> 
>
> Key: HBASE-19003
> URL: https://issues.apache.org/jira/browse/HBASE-19003
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Jerry He
>Assignee: Jerry He
> Fix For: 2.0.0-beta-1
>
>
> There have been questions raised in HBASE-10367 and other related JIRAs. We 
> want to make sure all aspects of the balancer respect the draining flag. We 
> will have a good look, and fix if any violation is found.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19145) Look into hbase-2 client going to hbase-1 server

2017-10-31 Thread Jerry He (JIRA)
Jerry He created HBASE-19145:


 Summary: Look into hbase-2 client going to hbase-1 server
 Key: HBASE-19145
 URL: https://issues.apache.org/jira/browse/HBASE-19145
 Project: HBase
  Issue Type: Task
Affects Versions: 2.0.0-beta-1
Reporter: Jerry He
Priority: Major


From the "[DISCUSS] hbase-2.0.0 compatibility expectations" thread.

Do we support hbase-2 client going against hbase-1 server?
We seem to be fine mixing and matching the clients and servers within the
hbase-1 releases.   And IIRC an hbase-1 client is ok against a 0.98 server.
Suppose I have a product that depends on and bundles the HBase client. I
want to upgrade the dependency to hbase-2 so that it can take
advantage of and claim support for hbase-2.
But does it mean that I will need to drop the claim that the new version
of the product supports any hbase-1 backend?

It has not been an objective. It might work doing basic Client API on a
later branch-1 but will fail doing Admin functions (and figuring if a Table
is online).  If it was a small thing to make it
work, let's get it in.

Let's look into it to see what works and what does not.  Have a statement at least.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19127) Set State.SPLITTING, MERGING, MERGING_NEW, SPLITTING_NEW properly in RegionStatesNode

2017-10-31 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16233629#comment-16233629
 ] 

Jerry He commented on HBASE-19127:
--

Will the change/shuffle of the state codes affect backward compatibility? For 
example, could old proc WALs no longer be read correctly?
If this is new code, it should be fine.  
Just asking.

> Set State.SPLITTING, MERGING, MERGING_NEW, SPLITTING_NEW properly in 
> RegionStatesNode
> -
>
> Key: HBASE-19127
> URL: https://issues.apache.org/jira/browse/HBASE-19127
> Project: HBase
>  Issue Type: Improvement
>Reporter: Yi Liang
>Assignee: Yi Liang
>Priority: Major
> Attachments: state.patch
>
>
> In the current code, we do not set the above states on a region node at all, but 
> we still have statements like the one below that check whether a node has these states.
> {code}
> else if (!regionNode.isInState(State.CLOSING, State.SPLITTING)) {
> 
> }
> {code}
> We need to set the above states in the correct places.
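
A minimal sketch of why the quoted guard is inert. The `State` enum and `isInState` below are simplified stand-ins for the real RegionStates code: if SPLITTING is never actually set on a node, the membership check can never observe it.

```java
import java.util.Arrays;

public class RegionStateSketch {
    enum State { OPEN, CLOSING, SPLITTING, MERGING }

    // Sketch of the varargs membership check the quoted snippet relies on.
    static boolean isInState(State current, State... expected) {
        return Arrays.asList(expected).contains(current);
    }

    public static void main(String[] args) {
        State node = State.OPEN; // SPLITTING is never set, per the report
        // The guard below therefore always takes the negated branch:
        System.out.println(!isInState(node, State.CLOSING, State.SPLITTING));
    }
}
```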



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19127) Set State.SPLITTING, MERGING, MERGING_NEW, SPLITTING_NEW properly in RegionStatesNode

2017-11-01 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-19127:
-
Issue Type: Sub-task  (was: Improvement)
Parent: HBASE-19126

> Set State.SPLITTING, MERGING, MERGING_NEW, SPLITTING_NEW properly in 
> RegionStatesNode
> -
>
> Key: HBASE-19127
> URL: https://issues.apache.org/jira/browse/HBASE-19127
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Yi Liang
>Assignee: Yi Liang
>Priority: Major
> Attachments: state.patch
>
>
> In the current code, we do not set the above states on a region node at all, but 
> we still have statements like the one below that check whether a node has these states.
> {code}
> else if (!regionNode.isInState(State.CLOSING, State.SPLITTING)) {
> 
> }
> {code}
> We need to set the above states in the correct places.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19170) [hbase-thirdparty] Change the relocation offset of shaded artifacts

2017-11-03 Thread Jerry He (JIRA)
Jerry He created HBASE-19170:


 Summary: [hbase-thirdparty] Change the relocation offset of shaded 
artifacts
 Key: HBASE-19170
 URL: https://issues.apache.org/jira/browse/HBASE-19170
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.1
Reporter: Jerry He
Priority: Critical
 Fix For: 1.0.2


On the dev@hbase list, we concluded that we need to change the relocation offset 
in hbase-thirdparty to avoid shading conflicts with the other hbase shaded 
components (hbase-shaded-client and hbase-shaded-mapreduce).
https://lists.apache.org/thread.html/1aa5d1d7f6d176df49e72096926b011cafe1315932515346d06e8342@%3Cdev.hbase.apache.org%3E
The suggestion is to use "o.a.h.hbase.thirdparty" in hbase-thirdparty to 
differentiate between "shaded" for downstream of us vs "thirdparty" for our 
internal use.
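
As an illustration only (the concrete package names below expand the "o.a.h.hbase.thirdparty" abbreviation and are an assumption, not taken from the actual hbase-thirdparty poms), a maven-shade-plugin relocation under the proposed offset would look roughly like:

```xml
<relocation>
  <pattern>com.google.protobuf</pattern>
  <shadedPattern>org.apache.hbase.thirdparty.com.google.protobuf</shadedPattern>
</relocation>
```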



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19171) Update package references to match new shaded offset in hbase-thirdparty

2017-11-03 Thread Jerry He (JIRA)
Jerry He created HBASE-19171:


 Summary: Update package references to match new shaded offset in 
hbase-thirdparty
 Key: HBASE-19171
 URL: https://issues.apache.org/jira/browse/HBASE-19171
 Project: HBase
  Issue Type: Sub-task
Reporter: Jerry He
Priority: Critical
 Fix For: 2.0.0


This has dependency on the parent task, and can only be done after a new 
hbase-thirdparty release.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19257) Tool to dump information from MasterProcWALs file

2017-11-14 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16251937#comment-16251937
 ] 

Jerry He commented on HBASE-19257:
--

HBASE-15592?

> Tool to dump information from MasterProcWALs file
> -
>
> Key: HBASE-19257
> URL: https://issues.apache.org/jira/browse/HBASE-19257
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>
> I was troubleshooting a customer case where a high number of files piled up under 
> the MasterProcWALs directory.
> Gaining insight into a (sample) file from the MasterProcWALs dir would help find 
> the root cause.
> This JIRA is to add such a tool, which reads a proc wal file and prints (selected) 
> information.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19096) Add RowMutions batch support in AsyncTable

2017-11-19 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16258597#comment-16258597
 ] 

Jerry He commented on HBASE-19096:
--

bq. We do this if (row instanceof RowMutations) { ... but if the row is 
anything else, we do nothing? Could it be something else?
All other cases are handled in the code below that.

I will add more comments.


> Add RowMutions batch support in AsyncTable
> --
>
> Key: HBASE-19096
> URL: https://issues.apache.org/jira/browse/HBASE-19096
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Jerry He
>Assignee: Jerry He
> Fix For: 2.0.0
>
> Attachments: HBASE-19096-master.patch
>
>
> Batch support for RowMutations has been added in the Table interface, but is 
> not in AsyncTable. This JIRA will add it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19096) Add RowMutions batch support in AsyncTable

2017-11-19 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16258600#comment-16258600
 ] 

Jerry He commented on HBASE-19096:
--

[~zghaobac] Thanks for chiming in.
bq. Seems we will iterate RegionAction's actions twice. One is in buildReq. 
Another one is in buildNoDataRegionAction. Can we move this to one loop?
That is a good idea. There is an old TODO there.
Probably need some refactoring around buildNoDataRegionAction.  Let me see what 
I can do.
bq. If a RegionAction's actions have a RowMutations and some put/delete, then 
there are 2 RegionAction will be added to MutliRequest?
Yes, each RowMutations will be a separate RegionAction.
bq. About buildReq, there are some code same with MultiServerCallable, can we 
do some refactor to avoid this?
Again, need some refactoring. Let me see if it is too much.
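
The single-loop idea discussed above can be sketched with toy stand-ins. The `Row`/`RowMutations` types here are minimal placeholders, not the real HBase client classes: in one pass, each RowMutations is split out (each becomes its own RegionAction) while plain mutations stay grouped.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchPartitionSketch {
    // Toy stand-ins; the real types are HBase's Row, RowMutations, Mutation.
    interface Row {}
    static class Mutation implements Row {}
    static class RowMutations implements Row {}

    public static void main(String[] args) {
        List<Row> actions = List.of(new Mutation(), new RowMutations(), new Mutation());
        List<Row> plain = new ArrayList<>();
        List<Row> rowMutations = new ArrayList<>();
        // Single pass over the actions instead of one loop in buildReq and
        // another in buildNoDataRegionAction (sketch of the idea only).
        for (Row r : actions) {
            if (r instanceof RowMutations) rowMutations.add(r);
            else plain.add(r);
        }
        System.out.println(plain.size() + " " + rowMutations.size());
    }
}
```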

> Add RowMutions batch support in AsyncTable
> --
>
> Key: HBASE-19096
> URL: https://issues.apache.org/jira/browse/HBASE-19096
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Jerry He
>Assignee: Jerry He
> Fix For: 2.0.0
>
> Attachments: HBASE-19096-master.patch
>
>
> Batch support for RowMutations has been added in the Table interface, but is 
> not in AsyncTable. This JIRA will add it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19257) Document tool to dump information from MasterProcWALs file

2017-11-21 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16261435#comment-16261435
 ] 

Jerry He commented on HBASE-19257:
--

By shortname, do you mean the shortcut in the 'hbase' command?
{noformat}
Usage: hbase [<options>] <command> [<args>]
...
Commands:
Some commands take arguments. Pass no args or -h for usage.
  shell   Run the HBase shell
  hbckRun the hbase 'fsck' tool
  snapshotCreate a new snapshot of a table
  snapshotinfoTool for dumping snapshot information
  wal Write-ahead-log analyzer
  hfile   Store file analyzer
...
  pe  Run PerformanceEvaluation
  ltt Run LoadTestTool
  version Print the version
  CLASSNAME   Run the class named CLASSNAME
{noformat}

Then 'procwal'?  To be similar to 'wal' and 'hfile'.

> Document tool to dump information from MasterProcWALs file
> --
>
> Key: HBASE-19257
> URL: https://issues.apache.org/jira/browse/HBASE-19257
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>
> I was troubleshooting a customer case where a high number of files piled up under 
> the MasterProcWALs directory.
> Gaining insight into a (sample) file from the MasterProcWALs dir would help find 
> the root cause.
> This JIRA is to document ProcedureWALPrettyPrinter, which reads a proc wal file 
> and prints (selected) information.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19096) Add RowMutions batch support in AsyncTable

2017-11-26 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-19096:
-
Attachment: HBASE-19096-master-v2.patch

> Add RowMutions batch support in AsyncTable
> --
>
> Key: HBASE-19096
> URL: https://issues.apache.org/jira/browse/HBASE-19096
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Jerry He
>Assignee: Jerry He
> Fix For: 2.0.0
>
> Attachments: HBASE-19096-master-v2.patch, HBASE-19096-master.patch
>
>
> Batch support for RowMutations has been added in the Table interface, but is 
> not in AsyncTable. This JIRA will add it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19096) Add RowMutions batch support in AsyncTable

2017-11-26 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16266285#comment-16266285
 ] 

Jerry He commented on HBASE-19096:
--

Attached v2 to address comments from [~zghaobac] and [~stack].
Pushed down the looping to RequestConverter and refactored.

> Add RowMutions batch support in AsyncTable
> --
>
> Key: HBASE-19096
> URL: https://issues.apache.org/jira/browse/HBASE-19096
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Jerry He
>Assignee: Jerry He
> Fix For: 2.0.0
>
> Attachments: HBASE-19096-master-v2.patch, HBASE-19096-master.patch
>
>
> Batch support for RowMutations has been added in the Table interface, but is 
> not in AsyncTable. This JIRA will add it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-15437) Response size calculated in RPCServer for warning tooLarge responses does NOT count CellScanner payload

2016-11-22 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-15437:
-
Attachment: HBASE-15437-v2.patch

Attached  a patch with a different approach.
[~anoop.hbase], what do you think?

> Response size calculated in RPCServer for warning tooLarge responses does NOT 
> count CellScanner payload
> ---
>
> Key: HBASE-15437
> URL: https://issues.apache.org/jira/browse/HBASE-15437
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC
>Reporter: deepankar
>Assignee: deepankar
> Attachments: HBASE-15437-v1.patch, HBASE-15437-v2.patch, 
> HBASE-15437.patch
>
>
> After HBASE-13158, where we respond back to RPCs with cells in the payload, 
> the protobuf response will just have the count of the cells to read from the 
> payload. But there is a set of features where we log a warning in RPCServer 
> whenever the response is tooLarge, and this size now does not consider the 
> sizes of the cells in the PayloadCellScanner. Code from RPCServer:
> {code}
>   long responseSize = result.getSerializedSize();
>   // log any RPC responses that are slower than the configured warn
>   // response time or larger than configured warning size
>   boolean tooSlow = (processingTime > warnResponseTime && 
> warnResponseTime > -1);
>   boolean tooLarge = (responseSize > warnResponseSize && warnResponseSize 
> > -1);
>   if (tooSlow || tooLarge) {
> // when tagging, we let TooLarge trump TooSmall to keep output simple
> // note that large responses will often also be slow.
> logResponse(new Object[]{param},
> md.getName(), md.getName() + "(" + param.getClass().getName() + 
> ")",
> (tooLarge ? "TooLarge" : "TooSlow"),
> status.getClient(), startTime, processingTime, qTime,
> responseSize);
>   }
> {code}
> Should this feature no longer be supported, or should we add a method to 
> CellScanner or a new interface which returns the serialized size (though this 
> might not include the compression codecs which might be used during the 
> response)? Any other idea how this could be fixed?
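
One possible direction can be sketched with toy stand-ins. The payload list and sizes below are illustrative, not the HBase API: count the cell-block bytes toward the warning threshold alongside the protobuf size.

```java
import java.util.List;

public class ResponseSizeSketch {
    // Toy payload: each byte[] stands in for one serialized cell.
    static long cellBlockSize(List<byte[]> cells) {
        long total = 0;
        for (byte[] c : cells) total += c.length;
        return total;
    }

    public static void main(String[] args) {
        long pbSize = 64; // analogue of result.getSerializedSize()
        List<byte[]> payload = List.of(new byte[512], new byte[1024]);
        long warnResponseSize = 1000;

        // Current check: protobuf size only -- misses 1536 bytes of cell payload.
        boolean tooLargeOld = pbSize > warnResponseSize;
        // Sketched fix: include the cell-block bytes in the accounting.
        long responseSize = pbSize + cellBlockSize(payload);
        boolean tooLargeNew = responseSize > warnResponseSize;

        System.out.println(tooLargeOld + " " + tooLargeNew + " " + responseSize);
    }
}
```

As the description notes, a size computed this way would still not reflect any compression codec applied to the cell block on the wire.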



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16010) Put draining function through Admin API

2016-11-22 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15687741#comment-15687741
 ] 

Jerry He commented on HBASE-16010:
--

[~mwarhaftig]

Could you do a rebase with the latest master?  The master has quite a bit of 
refactoring on the protobuf classes.

> Put draining function through Admin API
> ---
>
> Key: HBASE-16010
> URL: https://issues.apache.org/jira/browse/HBASE-16010
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jerry He
>Assignee: Matt Warhaftig
>Priority: Minor
> Attachments: hbase-16010-v1.patch
>
>
> Currently, there is no Admin API for the draining function. Clients have to 
> interact directly with the Zookeeper draining node to add and remove draining 
> servers.
> For example, in draining_servers.rb:
> {code}
>   zkw = org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.new(config, 
> "draining_servers", nil)
>   parentZnode = zkw.drainingZNode
>   begin
> for server in servers
>   node = ZKUtil.joinZNode(parentZnode, server)
>   ZKUtil.createAndFailSilent(zkw, node)
> end
>   ensure
> zkw.close()
>   end
> {code}
> This is not good in cases like secure clusters with protected Zookeeper nodes.
> Let's put draining function through Admin API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17167) Pass mvcc to client when scan

2016-11-23 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15691218#comment-15691218
 ] 

Jerry He commented on HBASE-17167:
--

Good point.  The mvcc/seqid should be kept in the server/hfiles long enough.  There 
were discussions on that previously as well.

> Pass mvcc to client when scan
> -
>
> Key: HBASE-17167
> URL: https://issues.apache.org/jira/browse/HBASE-17167
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, scan
>Reporter: Duo Zhang
> Fix For: 2.0.0, 1.4.0
>
>
> In the current implementation, if we use batch or allowPartial when scanning, 
> then row-level atomicity can not be guaranteed if we need to restart a scan 
> in the middle of a record due to a region move or something else.
> We can return the mvcc used to open the scanner to the client, and the client 
> could use this mvcc to restart a scan and get row-level atomicity.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17116) [PerformanceEvaluation] Add option to configure block size

2016-11-23 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15692321#comment-15692321
 ] 

Jerry He commented on HBASE-17116:
--

+1

> [PerformanceEvaluation] Add option to configure block size
> --
>
> Key: HBASE-17116
> URL: https://issues.apache.org/jira/browse/HBASE-17116
> Project: HBase
>  Issue Type: Bug
>  Components: tooling
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.2.5
>Reporter: Esteban Gutierrez
>Assignee: Yi Liang
>Priority: Trivial
> Fix For: 2.0.0
>
> Attachments: HBASE-17116-V1.patch
>
>
> Followup from HBASE-9940 to add option to configure block size for 
> PerformanceEvaluation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17174) Use shared threadpool in BufferedMutatorImpl

2016-11-24 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15694660#comment-15694660
 ] 

Jerry He commented on HBASE-17174:
--

BufferedMutator should not close an external ExecutorService pool.
The patch looks good.
Besides this fix, are the other changes mostly refactoring/cleanup?
Can you upload the patch to RB just to be careful?

> Use shared threadpool in BufferedMutatorImpl
> 
>
> Key: HBASE-17174
> URL: https://issues.apache.org/jira/browse/HBASE-17174
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17174.v0.patch, HBASE-17174.v1.patch, 
> HBASE-17174.v2.patch
>
>
> An update-heavy application, for example a loader, creates many BufferedMutators 
> for batch updates. But these BufferedMutators can't share a large threadpool 
> because the shutdown() method will be called when closing any 
> BufferedMutator. This patch adds a flag to BufferedMutatorParams to 
> prevent calling the shutdown() method in BufferedMutatorImpl#close.
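
The ownership flag described above can be sketched in a self-contained way. The `Mutator` class and its `ownsPool` field below are toy stand-ins for BufferedMutatorImpl and the proposed BufferedMutatorParams flag, not the real API:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolOwnershipSketch {
    // Illustrative only: a resource that may or may not own its thread pool.
    static class Mutator implements AutoCloseable {
        private final ExecutorService pool;
        private final boolean ownsPool; // the flag the params object would carry

        Mutator(ExecutorService pool, boolean ownsPool) {
            this.pool = pool;
            this.ownsPool = ownsPool;
        }

        @Override public void close() {
            if (ownsPool) pool.shutdown(); // leave shared pools running
        }
    }

    public static void main(String[] args) {
        ExecutorService shared = Executors.newFixedThreadPool(2);
        try (Mutator a = new Mutator(shared, false);
             Mutator b = new Mutator(shared, false)) {
            // Both mutators share the external pool.
        }
        System.out.println(shared.isShutdown()); // pool survives both closes
        shared.shutdown();
    }
}
```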



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15437) Response size calculated in RPCServer for warning tooLarge responses does NOT count CellScanner payload

2016-11-25 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15697133#comment-15697133
 ] 

Jerry He commented on HBASE-15437:
--

bq. This extra accounting is not needed.
Hmm. The accounting added above is for mutate.  Also, where is the accounting 
done in the private get method?

If I understand [~enis]'s comment correctly, we can change the signature of the call 
method in RpcServer.  Let me see what we can do.

> Response size calculated in RPCServer for warning tooLarge responses does NOT 
> count CellScanner payload
> ---
>
> Key: HBASE-15437
> URL: https://issues.apache.org/jira/browse/HBASE-15437
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC
>Reporter: deepankar
>Assignee: deepankar
> Attachments: HBASE-15437-v1.patch, HBASE-15437-v2.patch, 
> HBASE-15437.patch
>
>
> After HBASE-13158, where we respond back to RPCs with cells in the payload, 
> the protobuf response will just have the count of the cells to read from the 
> payload. But there is a set of features where we log a warning in RPCServer 
> whenever the response is tooLarge, and this size now does not consider the 
> sizes of the cells in the PayloadCellScanner. Code from RPCServer:
> {code}
>   long responseSize = result.getSerializedSize();
>   // log any RPC responses that are slower than the configured warn
>   // response time or larger than configured warning size
>   boolean tooSlow = (processingTime > warnResponseTime && 
> warnResponseTime > -1);
>   boolean tooLarge = (responseSize > warnResponseSize && warnResponseSize 
> > -1);
>   if (tooSlow || tooLarge) {
> // when tagging, we let TooLarge trump TooSmall to keep output simple
> // note that large responses will often also be slow.
> logResponse(new Object[]{param},
> md.getName(), md.getName() + "(" + param.getClass().getName() + 
> ")",
> (tooLarge ? "TooLarge" : "TooSlow"),
> status.getClient(), startTime, processingTime, qTime,
> responseSize);
>   }
> {code}
> Should this feature no longer be supported, or should we add a method to 
> CellScanner or a new interface which returns the serialized size (though this 
> might not account for the compression codecs that might be used during the 
> response)? Any other ideas how this could be fixed?
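The fix direction discussed in this thread can be sketched as follows: include the cell-block payload in the size compared against warnResponseSize. `ResponseSizeDemo` and its parameters are illustrative stand-ins, not the actual RpcServer code:

```java
// Illustrative sketch: counting only the protobuf size misses the
// CellScanner payload; adding cellBlockSize accounts for it.
public class ResponseSizeDemo {
    static boolean tooLarge(long pbSize, long cellBlockSize, long warnResponseSize) {
        long responseSize = pbSize + cellBlockSize; // pb message + cell payload
        // mirror the guard from the quoted RPCServer code: a warn threshold
        // of -1 disables the check entirely
        return warnResponseSize > -1 && responseSize > warnResponseSize;
    }

    public static void main(String[] args) {
        long warn = 1024;
        System.out.println(tooLarge(100, 0, warn));    // pb-only: under the limit
        System.out.println(tooLarge(100, 2048, warn)); // payload pushes it over
    }
}
```

With only the protobuf size counted, the second case would not be flagged even though the response actually ships over 2 KB of cells.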





[jira] [Commented] (HBASE-15437) Response size calculated in RPCServer for warning tooLarge responses does NOT count CellScanner payload

2016-11-27 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15700745#comment-15700745
 ] 

Jerry He commented on HBASE-15437:
--

Looking at it more: 
It does not seem right to pass RpcServer.Call, which is an implementation 
class, to the interface-level RpcServerInterface.
We could do it, but it's not ideal.  
Ideally, we should define an interface-level 'Call' and give it to 
RpcServerInterface.  And because of this separation of a 'Call' interface and 
the implementation class, we can hopefully remove the 
{noformat}InterfaceAudience.LimitedPrivate({HBaseInterfaceAudience.COPROC, 
HBaseInterfaceAudience.PHOENIX}){noformat} from the RpcServer.Call class.
Also, why is the entire RpcServer annotated with 
{noformat}InterfaceAudience.LimitedPrivate({HBaseInterfaceAudience.COPROC, 
HBaseInterfaceAudience.PHOENIX}){noformat}?

> Response size calculated in RPCServer for warning tooLarge responses does NOT 
> count CellScanner payload
> ---
>
> Key: HBASE-15437
> URL: https://issues.apache.org/jira/browse/HBASE-15437
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC
>Reporter: deepankar
>Assignee: deepankar
> Attachments: HBASE-15437-v1.patch, HBASE-15437-v2.patch, 
> HBASE-15437.patch
>
>
> After HBASE-13158, where we respond back to RPCs with cells in the payload, 
> the protobuf response will just have the count of the cells to read from the 
> payload. But there is a set of features where we log a warning in RPCServer 
> whenever the response is tooLarge, and this size now does not consider the 
> sizes of the cells in the PayloadCellScanner. Code from RPCServer:
> {code}
>   long responseSize = result.getSerializedSize();
>   // log any RPC responses that are slower than the configured warn
>   // response time or larger than configured warning size
>   boolean tooSlow = (processingTime > warnResponseTime && 
> warnResponseTime > -1);
>   boolean tooLarge = (responseSize > warnResponseSize && warnResponseSize 
> > -1);
>   if (tooSlow || tooLarge) {
> // when tagging, we let TooLarge trump TooSmall to keep output simple
> // note that large responses will often also be slow.
> logResponse(new Object[]{param},
> md.getName(), md.getName() + "(" + param.getClass().getName() + 
> ")",
> (tooLarge ? "TooLarge" : "TooSlow"),
> status.getClient(), startTime, processingTime, qTime,
> responseSize);
>   }
> {code}
> Should this feature no longer be supported, or should we add a method to 
> CellScanner or a new interface which returns the serialized size (though this 
> might not account for the compression codecs that might be used during the 
> response)? Any other ideas how this could be fixed?





[jira] [Updated] (HBASE-17116) [PerformanceEvaluation] Add option to configure block size

2016-11-27 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-17116:
-
   Resolution: Fixed
Fix Version/s: 1.4.0
   Status: Resolved  (was: Patch Available)

> [PerformanceEvaluation] Add option to configure block size
> --
>
> Key: HBASE-17116
> URL: https://issues.apache.org/jira/browse/HBASE-17116
> Project: HBase
>  Issue Type: Bug
>  Components: tooling
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.2.5
>Reporter: Esteban Gutierrez
>Assignee: Yi Liang
>Priority: Trivial
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17116-V1.patch
>
>
> Followup from HBASE-9940 to add option to configure block size for 
> PerformanceEvaluation.





[jira] [Commented] (HBASE-17194) Assign the new region to the idle server after splitting

2016-11-29 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15706160#comment-15706160
 ] 

Jerry He commented on HBASE-17194:
--

{noformat}
78static final BiFunction predicater
79  = (name, load) -> load.getNumberOfRegions() == 0;
{noformat}
Can you rename 'predicater' to something like 'idleServerPredicator' to be 
specific?

Also, could you rename the new getOnlineServersList() to something like 
'getOnlineServersListWithIdlePredicator' so that it won't cause confusion like 
[~stack] mentioned?

{noformat}
1096  public List getOnlineServersList(List keys,
1097BiFunction predicater) {
1098List names = new ArrayList<>();
1099if (keys != null) {
1100  names.forEach(name -> {
{noformat}

Should it be 'keys.forEach'?
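The suspected bug can be reproduced in miniature: filtering `keys` into `names` must iterate over `keys`, not the still-empty `names` list. `filter` below is a hypothetical stand-in for the reviewed method:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal reproduction of the review comment: iterating the freshly created
// (empty) target list instead of the input silently produces no results.
public class ForEachBugDemo {
    static List<String> filter(List<String> keys, boolean buggy) {
        List<String> names = new ArrayList<>();
        // buggy variant iterates `names` (empty), correct variant iterates `keys`
        List<String> source = buggy ? names : keys;
        source.forEach(name -> names.add(name));
        return names;
    }

    public static void main(String[] args) {
        List<String> keys = List.of("rs1", "rs2");
        System.out.println("buggy:   " + filter(keys, true).size());  // nothing copied
        System.out.println("correct: " + filter(keys, false).size()); // both copied
    }
}
```

With `names.forEach`, the loop body never runs, so the method would always return an empty server list.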





> Assign the new region to the idle server after splitting
> 
>
> Key: HBASE-17194
> URL: https://issues.apache.org/jira/browse/HBASE-17194
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17194.v0.patch, evaluation-v0.png
>
>
> The new regions are assigned to random servers after splitting, but there 
> are always some idle servers which aren't assigned any regions on a new 
> cluster. This is a bad start for load balancing, hence we should give 
> priority to the idle servers for assignment.





[jira] [Commented] (HBASE-17174) Use shared threadpool and AsyncProcess in BufferedMutatorImpl

2016-11-29 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15706163#comment-15706163
 ] 

Jerry He commented on HBASE-17174:
--

+1 on v5.

> Use shared threadpool and AsyncProcess in BufferedMutatorImpl
> -
>
> Key: HBASE-17174
> URL: https://issues.apache.org/jira/browse/HBASE-17174
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17174.v0.patch, HBASE-17174.v1.patch, 
> HBASE-17174.v2.patch, HBASE-17174.v3.patch, HBASE-17174.v4.patch, 
> HBASE-17174.v5.patch
>
>
> The following are the reasons for reusing the pool and AP.
> # An update-heavy application, for example a loader, creates many 
> BufferedMutators for batch updates. But these BufferedMutators can’t share a 
> large threadpool, because the shutdown() method will be called when closing 
> any BufferedMutator. This patch adds a flag to BufferedMutatorParams to 
> prevent calling the shutdown() method in BufferedMutatorImpl#close.
> # The AsyncProcess has powerful traffic control, but the control currently 
> applies to a single table, because the AP is constructed at 
> BufferedMutatorImpl's construction time. This patch changes the 
> BufferedMutatorImpl's construction to reuse the AP so that updates to 
> different tables can be throttled by the same AP.
> Additionally, there are two changes (not included in the latest patch) for #2:
> 1) The AP will be exposed to the user.
> 2) A new method will be added to Connection for instantiating an AP.
> All suggestions are welcome.





[jira] [Created] (HBASE-17221) Abstract out an interface for RpcServer.Call

2016-11-30 Thread Jerry He (JIRA)
Jerry He created HBASE-17221:


 Summary: Abstract out an interface for RpcServer.Call
 Key: HBASE-17221
 URL: https://issues.apache.org/jira/browse/HBASE-17221
 Project: HBase
  Issue Type: Improvement
Reporter: Jerry He
Assignee: Jerry He
 Fix For: 2.0.0


RpcServer.Call is a concrete class, but it is marked as:
{noformat}
@InterfaceAudience.LimitedPrivate({HBaseInterfaceAudience.COPROC, 
HBaseInterfaceAudience.PHOENIX})
{noformat}

Let's abstract an interface out of it for potential consumers that want to 
pass it around.
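The proposed refactoring can be sketched like this. `RpcCall` and its two methods here are illustrative only, not the final interface shape:

```java
// Sketch of the refactoring idea: pull the methods consumers need out of a
// concrete nested class into a small interface they can pass around.
interface RpcCall {
    long getSize();
    String getMethodName();
}

class RpcServer {
    // The concrete implementation keeps its place (and name, for backward
    // compatibility); callers now depend only on the RpcCall interface.
    static class Call implements RpcCall {
        private final long size;
        private final String method;
        Call(String method, long size) { this.method = method; this.size = size; }
        public long getSize() { return size; }
        public String getMethodName() { return method; }
    }
}

public class RpcCallDemo {
    // Accepts the interface, not the implementation class, so coprocessor or
    // Phoenix code never needs to reference RpcServer.Call directly.
    static String describe(RpcCall call) {
        return call.getMethodName() + ":" + call.getSize();
    }

    public static void main(String[] args) {
        System.out.println(describe(new RpcServer.Call("Get", 128)));
    }
}
```

Consumers written against `RpcCall` are insulated from changes to the concrete `RpcServer.Call` class, which is the point of loosening the LimitedPrivate coupling.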





[jira] [Updated] (HBASE-17221) Abstract out an interface for RpcServer.Call

2016-11-30 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-17221:
-
Attachment: HBASE-17221.patch

> Abstract out an interface for RpcServer.Call
> 
>
> Key: HBASE-17221
> URL: https://issues.apache.org/jira/browse/HBASE-17221
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jerry He
>Assignee: Jerry He
> Fix For: 2.0.0
>
> Attachments: HBASE-17221.patch
>
>
> RpcServer.Call is a concrete class, but it is marked as:
> {noformat}
> @InterfaceAudience.LimitedPrivate({HBaseInterfaceAudience.COPROC, 
> HBaseInterfaceAudience.PHOENIX})
> {noformat}
> Let's abstract out an interface out of it for potential consumers that want 
> to pass it around.





[jira] [Updated] (HBASE-17221) Abstract out an interface for RpcServer.Call

2016-11-30 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-17221:
-
Status: Patch Available  (was: Open)

Initial patch.

> Abstract out an interface for RpcServer.Call
> 
>
> Key: HBASE-17221
> URL: https://issues.apache.org/jira/browse/HBASE-17221
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jerry He
>Assignee: Jerry He
> Fix For: 2.0.0
>
> Attachments: HBASE-17221.patch
>
>
> RpcServer.Call is a concrete class, but it is marked as:
> {noformat}
> @InterfaceAudience.LimitedPrivate({HBaseInterfaceAudience.COPROC, 
> HBaseInterfaceAudience.PHOENIX})
> {noformat}
> Let's abstract out an interface out of it for potential consumers that want 
> to pass it around.





[jira] [Commented] (HBASE-15437) Response size calculated in RPCServer for warning tooLarge responses does NOT count CellScanner payload

2016-11-30 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15710314#comment-15710314
 ] 

Jerry He commented on HBASE-15437:
--

Filed HBASE-17221 to see if I can abstract out an interface for RpcServer.Call.

> Response size calculated in RPCServer for warning tooLarge responses does NOT 
> count CellScanner payload
> ---
>
> Key: HBASE-15437
> URL: https://issues.apache.org/jira/browse/HBASE-15437
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC
>Reporter: deepankar
>Assignee: deepankar
> Attachments: HBASE-15437-v1.patch, HBASE-15437-v2.patch, 
> HBASE-15437.patch
>
>
> After HBASE-13158, where we respond back to RPCs with cells in the payload, 
> the protobuf response will just have the count of the cells to read from the 
> payload. But there is a set of features where we log a warning in RPCServer 
> whenever the response is tooLarge, and this size now does not consider the 
> sizes of the cells in the PayloadCellScanner. Code from RPCServer:
> {code}
>   long responseSize = result.getSerializedSize();
>   // log any RPC responses that are slower than the configured warn
>   // response time or larger than configured warning size
>   boolean tooSlow = (processingTime > warnResponseTime && 
> warnResponseTime > -1);
>   boolean tooLarge = (responseSize > warnResponseSize && warnResponseSize 
> > -1);
>   if (tooSlow || tooLarge) {
> // when tagging, we let TooLarge trump TooSmall to keep output simple
> // note that large responses will often also be slow.
> logResponse(new Object[]{param},
> md.getName(), md.getName() + "(" + param.getClass().getName() + 
> ")",
> (tooLarge ? "TooLarge" : "TooSlow"),
> status.getClient(), startTime, processingTime, qTime,
> responseSize);
>   }
> {code}
> Should this feature no longer be supported, or should we add a method to 
> CellScanner or a new interface which returns the serialized size (though this 
> might not account for the compression codecs that might be used during the 
> response)? Any other ideas how this could be fixed?





[jira] [Commented] (HBASE-15437) Response size calculated in RPCServer for warning tooLarge responses does NOT count CellScanner payload

2016-12-01 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15712872#comment-15712872
 ] 

Jerry He commented on HBASE-15437:
--

Hi, [~anoop.hbase]

It would be cleaner, both in hierarchy and conceptually. RpcServer.Call is heavily 
tangled with RpcServer, and RpcServer implements RpcServerInterface. If we put 
RpcServer.Call in RpcServerInterface, it would be messy.

bq. Also, why the entire RpcServer is annotated with
Is it because there are these public methods in the RpcServer? For example,
{code}
public static RpcCallContext getCurrentCall()
public static boolean isInRpcCallContext()
public static User getRequestUser()
public static String getRequestUserName()
public static InetAddress getRemoteAddress()
{code}


> Response size calculated in RPCServer for warning tooLarge responses does NOT 
> count CellScanner payload
> ---
>
> Key: HBASE-15437
> URL: https://issues.apache.org/jira/browse/HBASE-15437
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC
>Reporter: deepankar
>Assignee: deepankar
> Attachments: HBASE-15437-v1.patch, HBASE-15437-v2.patch, 
> HBASE-15437.patch
>
>
> After HBASE-13158, where we respond back to RPCs with cells in the payload, 
> the protobuf response will just have the count of the cells to read from the 
> payload. But there is a set of features where we log a warning in RPCServer 
> whenever the response is tooLarge, and this size now does not consider the 
> sizes of the cells in the PayloadCellScanner. Code from RPCServer:
> {code}
>   long responseSize = result.getSerializedSize();
>   // log any RPC responses that are slower than the configured warn
>   // response time or larger than configured warning size
>   boolean tooSlow = (processingTime > warnResponseTime && 
> warnResponseTime > -1);
>   boolean tooLarge = (responseSize > warnResponseSize && warnResponseSize 
> > -1);
>   if (tooSlow || tooLarge) {
> // when tagging, we let TooLarge trump TooSmall to keep output simple
> // note that large responses will often also be slow.
> logResponse(new Object[]{param},
> md.getName(), md.getName() + "(" + param.getClass().getName() + 
> ")",
> (tooLarge ? "TooLarge" : "TooSlow"),
> status.getClient(), startTime, processingTime, qTime,
> responseSize);
>   }
> {code}
> Should this feature no longer be supported, or should we add a method to 
> CellScanner or a new interface which returns the serialized size (though this 
> might not account for the compression codecs that might be used during the 
> response)? Any other ideas how this could be fixed?





[jira] [Commented] (HBASE-17194) Assign the new region to the idle server after splitting

2016-12-01 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15712883#comment-15712883
 ] 

Jerry He commented on HBASE-17194:
--

Patch looks good. 
One small thing: what about the case servers.size() == 0?
{noformat}
-int numServers = servers == null ? 0 : servers.size();
-if (numServers == 0) {
+if (servers == null) {
{noformat}
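The review point in miniature: a null check alone misses the empty-list case, so the guard should cover both. `pickServer` is a hypothetical stand-in for the patched method:

```java
import java.util.Collections;
import java.util.List;

// Illustrative only: shows why replacing a numServers == 0 check with a
// plain null check regresses on empty lists.
public class EmptyServersDemo {
    static String pickServer(List<String> servers) {
        // servers == null was the proposed guard; an empty list must also
        // short-circuit, otherwise the index below would throw.
        if (servers == null || servers.isEmpty()) {
            return null;
        }
        return servers.get(0);
    }

    public static void main(String[] args) {
        System.out.println(pickServer(null));                    // null
        System.out.println(pickServer(Collections.emptyList())); // null, no exception
        System.out.println(pickServer(List.of("rs1")));          // rs1
    }
}
```

The original `numServers == 0` check covered both cases at once, which is why dropping it is worth flagging.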

> Assign the new region to the idle server after splitting
> 
>
> Key: HBASE-17194
> URL: https://issues.apache.org/jira/browse/HBASE-17194
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17194.v0.patch, HBASE-17194.v1.patch, 
> HBASE-17194.v2.patch, evaluation-v0.png, tests.xlsx
>
>
> The new regions are assigned to random servers after splitting, but there 
> are always some idle servers which aren't assigned any regions on a new 
> cluster. This is a bad start for load balancing, hence we should give 
> priority to the idle servers for assignment.





[jira] [Commented] (HBASE-17221) Abstract out an interface for RpcServer.Call

2016-12-01 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15713165#comment-15713165
 ] 

Jerry He commented on HBASE-17221:
--

OK.  There is an org.apache.hadoop.hbase.ipc.Call in hbase-client.  I'll pick 
another name then.

> Abstract out an interface for RpcServer.Call
> 
>
> Key: HBASE-17221
> URL: https://issues.apache.org/jira/browse/HBASE-17221
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jerry He
>Assignee: Jerry He
> Fix For: 2.0.0
>
> Attachments: HBASE-17221.patch
>
>
> RpcServer.Call is a concrete class, but it is marked as:
> {noformat}
> @InterfaceAudience.LimitedPrivate({HBaseInterfaceAudience.COPROC, 
> HBaseInterfaceAudience.PHOENIX})
> {noformat}
> Let's abstract out an interface out of it for potential consumers that want 
> to pass it around.





[jira] [Commented] (HBASE-17194) Assign the new region to the idle server after splitting

2016-12-01 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15713168#comment-15713168
 ] 

Jerry He commented on HBASE-17194:
--

+1 on v3.

> Assign the new region to the idle server after splitting
> 
>
> Key: HBASE-17194
> URL: https://issues.apache.org/jira/browse/HBASE-17194
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17194.v0.patch, HBASE-17194.v1.patch, 
> HBASE-17194.v2.patch, HBASE-17194.v3.patch, evaluation-v0.png, tests.xlsx
>
>
> The new regions are assigned to random servers after splitting, but there 
> are always some idle servers which aren't assigned any regions on a new 
> cluster. This is a bad start for load balancing, hence we should give 
> priority to the idle servers for assignment.





[jira] [Updated] (HBASE-16894) Create more than 1 split per region, generalize HBASE-12590

2016-12-02 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-16894:
-
Labels:   (was: beginner beginners)

> Create more than 1 split per region, generalize HBASE-12590
> ---
>
> Key: HBASE-16894
> URL: https://issues.apache.org/jira/browse/HBASE-16894
> Project: HBase
>  Issue Type: Improvement
>Reporter: Enis Soztutar
>Assignee: Yi Liang
>
> A common request from users is to be able to better control how many map 
> tasks are created per region. Right now, it is always 1 region = 1 input 
> split = 1 map task. Same goes for Spark since it uses the TIF. With region 
> sizes as large as 50 GBs, it is desirable to be able to create more than 1 
> split per region.
> HBASE-12590 adds a config property for MR jobs to be able to handle skew in 
> region sizes. The algorithm is roughly: 
> {code}
> If (region size >= average size*ratio) : cut the region into two MR input 
> splits
> If (average size <= region size < average size*ratio) : one region as one MR 
> input split
> If (sum of several continuous regions size < average size * ratio): combine 
> these regions into one MR input split.
> {code}
> Although we can set the data skew ratio to 0.5 or so to abuse HBASE-12590 
> into creating more than 1 split task per region, it is not ideal, and there 
> is no way to create more splits with the patch as it is. For example, we 
> cannot create more than 2 tasks per region. 
> If we want to fix this properly, we should extend the approach in 
> HBASE-12590, and make it so that the client can specify the desired num of 
> mappers, or desired split size, and the TIF generates the splits based on the 
> current region sizes very similar to the algorithm in HBASE-12590, but a more 
> generic way. This also would eliminate the hand tuning of data skew ratio.
> We also can think about the guidepost approach that Phoenix has in the stats 
> table which is used for exactly this purpose. Right now, the region can be 
> split into powers of two assuming uniform distribution within the region. 
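The HBASE-12590 rule quoted above can be sketched as follows. `SplitSketch` is an illustrative simplification under stated assumptions (sizes in GB, a large region is simply halved), not the actual TableInputFormat code:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the skew-handling rule: regions at or above average*ratio become
// two input splits, mid-sized regions map 1:1, and consecutive small regions
// are combined into one split.
public class SplitSketch {
    static List<Double> makeSplits(double[] regionSizes, double ratio) {
        double avg = 0;
        for (double s : regionSizes) avg += s;
        avg /= regionSizes.length;

        List<Double> splits = new ArrayList<>();
        double pending = 0; // accumulator for combining small regions
        for (double size : regionSizes) {
            if (size >= avg * ratio) {
                if (pending > 0) { splits.add(pending); pending = 0; }
                splits.add(size / 2); // cut the oversized region in two
                splits.add(size / 2);
            } else if (size >= avg) {
                if (pending > 0) { splits.add(pending); pending = 0; }
                splits.add(size); // one region, one split
            } else {
                pending += size; // keep combining small consecutive regions
                if (pending >= avg * ratio) { splits.add(pending); pending = 0; }
            }
        }
        if (pending > 0) splits.add(pending);
        return splits;
    }

    public static void main(String[] args) {
        // The 50 GB region is cut in two; the 10, 2, and 2 GB regions combine.
        System.out.println(makeSplits(new double[]{50, 10, 2, 2}, 3.0));
    }
}
```

Generalizing this to "N splits per region" (rather than at most 2) is exactly what the issue proposes, since the halving step above cannot be pushed further by tuning the ratio alone.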





[jira] [Commented] (HBASE-17240) ImportTsv encounters ClassNotFoundException for MasterProtos$MasterService$BlockingInterface

2016-12-02 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15715937#comment-15715937
 ] 

Jerry He commented on HBASE-17240:
--

Update the list in TableMapReduceUtil.addHBaseDependencyJars() ?

> ImportTsv encounters ClassNotFoundException for 
> MasterProtos$MasterService$BlockingInterface
> 
>
> Key: HBASE-17240
> URL: https://issues.apache.org/jira/browse/HBASE-17240
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
> Fix For: 2.0.0
>
> Attachments: 17240.v1.txt
>
>
> [~romil.choksi] reported the following problem.
> With command:
> {code}
> hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.separator=, 
> -Dimporttsv.bulk.output=output -Dimporttsv.columns=HBASE_ROW_KEY,f:count 
> wordcount word_count.csv
> {code}
> The following error showed up:
> {code}
> 2016-11-29 06:39:48,861 INFO  [main] mapreduce.Job: Task Id : 
> attempt_1479850535804_0004_m_00_2, Status : FAILED
> Error: java.lang.ClassNotFoundException: 
> org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$BlockingInterface
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:264)
> at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:225)
> at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:122)
> at 
> org.apache.hadoop.hbase.mapreduce.DefaultVisibilityExpressionResolver.init(DefaultVisibilityExpressionResolver.java:75)
> at org.apache.hadoop.hbase.mapreduce.CellCreator.(CellCreator.java:48)
> at 
> org.apache.hadoop.hbase.mapreduce.TsvImporterMapper.setup(TsvImporterMapper.java:115)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
> {code}





[jira] [Commented] (HBASE-17240) ImportTsv encounters ClassNotFoundException for MasterProtos$MasterService$BlockingInterface

2016-12-02 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15716786#comment-15716786
 ] 

Jerry He commented on HBASE-17240:
--

Oh, the shaded proto class is already in there.
{noformat}
   org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.class, // 
hbase-protocol-shaded
+  org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.class, // 
hbase-protocol-shaded
{noformat}

HBASE-17166 just added it. No need to add it again.
ImportTsv.createSubmittableJob() already calls 
TableMapReduceUtil.addDependencyJars()

Maybe an old build? 

> ImportTsv encounters ClassNotFoundException for 
> MasterProtos$MasterService$BlockingInterface
> 
>
> Key: HBASE-17240
> URL: https://issues.apache.org/jira/browse/HBASE-17240
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
> Fix For: 2.0.0
>
> Attachments: 17240.v1.txt, 17240.v2.txt
>
>
> [~romil.choksi] reported the following problem.
> With command:
> {code}
> hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.separator=, 
> -Dimporttsv.bulk.output=output -Dimporttsv.columns=HBASE_ROW_KEY,f:count 
> wordcount word_count.csv
> {code}
> The following error showed up:
> {code}
> 2016-11-29 06:39:48,861 INFO  [main] mapreduce.Job: Task Id : 
> attempt_1479850535804_0004_m_00_2, Status : FAILED
> Error: java.lang.ClassNotFoundException: 
> org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$BlockingInterface
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:264)
> at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:225)
> at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:122)
> at 
> org.apache.hadoop.hbase.mapreduce.DefaultVisibilityExpressionResolver.init(DefaultVisibilityExpressionResolver.java:75)
> at org.apache.hadoop.hbase.mapreduce.CellCreator.(CellCreator.java:48)
> at 
> org.apache.hadoop.hbase.mapreduce.TsvImporterMapper.setup(TsvImporterMapper.java:115)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
> {code}





[jira] [Updated] (HBASE-17221) Abstract out an interface for RpcServer.Call

2016-12-03 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-17221:
-
Attachment: HBASE-17221-v2.patch

v2 of the patch, more complete.

A new interface named 'RpcCall'.  Implementation class RpcServer.Call keeps the 
same name for backward compatibility.  Let me know if you have a better name or 
approach.

The interface 'RpcCall' will be passed around in CallRunner, RpcServerInterface, 
etc.

This abstraction would also be used by coprocessors/Phoenix, instead of the 
implementation class RpcServer.Call.

> Abstract out an interface for RpcServer.Call
> 
>
> Key: HBASE-17221
> URL: https://issues.apache.org/jira/browse/HBASE-17221
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jerry He
>Assignee: Jerry He
> Fix For: 2.0.0
>
> Attachments: HBASE-17221-v2.patch, HBASE-17221.patch
>
>
> RpcServer.Call is a concrete class, but it is marked as:
> {noformat}
> @InterfaceAudience.LimitedPrivate({HBaseInterfaceAudience.COPROC, 
> HBaseInterfaceAudience.PHOENIX})
> {noformat}
> Let's abstract out an interface out of it for potential consumers that want 
> to pass it around.





[jira] [Updated] (HBASE-17221) Abstract out an interface for RpcServer.Call

2016-12-03 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-17221:
-
Attachment: HBASE-17221-v3.patch

v3: rebased to the latest master so that the patch applies.

> Abstract out an interface for RpcServer.Call
> 
>
> Key: HBASE-17221
> URL: https://issues.apache.org/jira/browse/HBASE-17221
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jerry He
>Assignee: Jerry He
> Fix For: 2.0.0
>
> Attachments: HBASE-17221-v2.patch, HBASE-17221-v3.patch, 
> HBASE-17221.patch
>
>
> RpcServer.Call is a concrete class, but it is marked as:
> {noformat}
> @InterfaceAudience.LimitedPrivate({HBaseInterfaceAudience.COPROC, 
> HBaseInterfaceAudience.PHOENIX})
> {noformat}
> Let's abstract out an interface out of it for potential consumers that want 
> to pass it around.





[jira] [Commented] (HBASE-17221) Abstract out an interface for RpcServer.Call

2016-12-03 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15719089#comment-15719089
 ] 

Jerry He commented on HBASE-17221:
--

Yes, @stack.  I follow you, and had the same thinking.
I wondered if I could get rid of getHeader (the request header) from the interface, 
because we have everything in the Header already (priority, timeout, etc.). But 
there are places that take the Header as a param just to get the priority.  
Changing these places may raise backward-compatibility questions.  The
{noformat}
@InterfaceAudience.LimitedPrivate({HBaseInterfaceAudience.COPROC, 
HBaseInterfaceAudience.PHOENIX})
{noformat}
is the problem.


> Abstract out an interface for RpcServer.Call
> 
>
> Key: HBASE-17221
> URL: https://issues.apache.org/jira/browse/HBASE-17221
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jerry He
>Assignee: Jerry He
> Fix For: 2.0.0
>
> Attachments: HBASE-17221-v2.patch, HBASE-17221-v3.patch, 
> HBASE-17221.patch
>
>
> RpcServer.Call is a concrete class, but it is marked as:
> {noformat}
> @InterfaceAudience.LimitedPrivate({HBaseInterfaceAudience.COPROC, 
> HBaseInterfaceAudience.PHOENIX})
> {noformat}
> Let's abstract out an interface out of it for potential consumers that want 
> to pass it around.





[jira] [Commented] (HBASE-17221) Abstract out an interface for RpcServer.Call

2016-12-04 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15720445#comment-15720445
 ] 

Jerry He commented on HBASE-17221:
--

Let me see if the TestSimpleRpcScheduler failure is related.

> Abstract out an interface for RpcServer.Call
> 
>
> Key: HBASE-17221
> URL: https://issues.apache.org/jira/browse/HBASE-17221
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jerry He
>Assignee: Jerry He
> Fix For: 2.0.0
>
> Attachments: HBASE-17221-v2.patch, HBASE-17221-v3.patch, 
> HBASE-17221.patch
>
>
> RpcServer.Call is a concrete class, but it is marked as:
> {noformat}
> @InterfaceAudience.LimitedPrivate({HBaseInterfaceAudience.COPROC, 
> HBaseInterfaceAudience.PHOENIX})
> {noformat}
> Let's abstract an interface out of it for potential consumers that want 
> to pass it around.





[jira] [Updated] (HBASE-17221) Abstract out an interface for RpcServer.Call

2016-12-04 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-17221:
-
Attachment: HBASE-17221-v4.patch

v4 patch:
1. Added a couple of Mockito 'when' stubs in TestSimpleRpcScheduler to fix the 
failing test cases.
2. Removed two methods that don't seem strictly necessary.

> Abstract out an interface for RpcServer.Call
> 
>
> Key: HBASE-17221
> URL: https://issues.apache.org/jira/browse/HBASE-17221
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jerry He
>Assignee: Jerry He
> Fix For: 2.0.0
>
> Attachments: HBASE-17221-v2.patch, HBASE-17221-v3.patch, 
> HBASE-17221-v4.patch, HBASE-17221.patch
>
>
> RpcServer.Call is a concrete class, but it is marked as:
> {noformat}
> @InterfaceAudience.LimitedPrivate({HBaseInterfaceAudience.COPROC, 
> HBaseInterfaceAudience.PHOENIX})
> {noformat}
> Let's abstract an interface out of it for potential consumers that want 
> to pass it around.





[jira] [Updated] (HBASE-17221) Abstract out an interface for RpcServer.Call

2016-12-05 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-17221:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for the review, [~stack]

> Abstract out an interface for RpcServer.Call
> 
>
> Key: HBASE-17221
> URL: https://issues.apache.org/jira/browse/HBASE-17221
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jerry He
>Assignee: Jerry He
> Fix For: 2.0.0
>
> Attachments: HBASE-17221-v2.patch, HBASE-17221-v3.patch, 
> HBASE-17221-v4.patch, HBASE-17221.patch
>
>
> RpcServer.Call is a concrete class, but it is marked as:
> {noformat}
> @InterfaceAudience.LimitedPrivate({HBaseInterfaceAudience.COPROC, 
> HBaseInterfaceAudience.PHOENIX})
> {noformat}
> Let's abstract an interface out of it for potential consumers that want 
> to pass it around.





[jira] [Updated] (HBASE-15437) Response size calculated in RPCServer for warning tooLarge responses does NOT count CellScanner payload

2016-12-05 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-15437:
-
Attachment: HBASE-15437-v3.patch

A new patch after HBASE-17221

> Response size calculated in RPCServer for warning tooLarge responses does NOT 
> count CellScanner payload
> ---
>
> Key: HBASE-15437
> URL: https://issues.apache.org/jira/browse/HBASE-15437
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC
>Reporter: deepankar
>Assignee: deepankar
> Attachments: HBASE-15437-v1.patch, HBASE-15437-v2.patch, 
> HBASE-15437-v3.patch, HBASE-15437.patch
>
>
> After HBASE-13158, where we respond to RPCs with cells in the payload, 
> the protobuf response just carries the count of cells to read from the 
> payload. But there is a feature where we log a warning in RPCServer 
> whenever a response is tooLarge, and that size does not consider the 
> sizes of the cells in the payload CellScanner. Code from RPCServer:
> {code}
>   long responseSize = result.getSerializedSize();
>   // log any RPC responses that are slower than the configured warn
>   // response time or larger than configured warning size
>   boolean tooSlow = (processingTime > warnResponseTime && warnResponseTime > -1);
>   boolean tooLarge = (responseSize > warnResponseSize && warnResponseSize > -1);
>   if (tooSlow || tooLarge) {
>     // when tagging, we let TooLarge trump TooSmall to keep output simple
>     // note that large responses will often also be slow.
>     logResponse(new Object[]{param},
>         md.getName(), md.getName() + "(" + param.getClass().getName() + ")",
>         (tooLarge ? "TooLarge" : "TooSlow"),
>         status.getClient(), startTime, processingTime, qTime,
>         responseSize);
>   }
> {code}
> Should this feature no longer be supported, or should we add a method to 
> CellScanner, or a new interface, that returns the serialized size (though 
> that might not account for any compression codecs used during the 
> response)? Any other idea how this could be fixed?
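To make the problem concrete, here is a minimal sketch (hypothetical names and sizes, not the actual HBase patch) of why a response size based only on the protobuf message misses the cell-block payload, and how folding an estimate of that payload into the size changes the tooLarge check:

```java
// Illustrative sketch only; names and thresholds invented for this example.
final class ResponseSizeSketch {
    static final long WARN_RESPONSE_SIZE = 100; // bytes, small for the example

    /** Protobuf message size plus the estimated cell-block payload size. */
    static long totalResponseSize(long protobufSize, long cellPayloadSize) {
        return protobufSize + cellPayloadSize;
    }

    static boolean tooLarge(long responseSize) {
        return responseSize > WARN_RESPONSE_SIZE && WARN_RESPONSE_SIZE > -1;
    }

    public static void main(String[] args) {
        long protobufSize = 16;      // header-only message: just a cell count
        long cellPayloadSize = 4096; // cells shipped outside the protobuf

        // Counting only the protobuf misses the large payload entirely:
        System.out.println(tooLarge(protobufSize));                                     // false
        System.out.println(tooLarge(totalResponseSize(protobufSize, cellPayloadSize))); // true
    }
}
```

The second check fires because the payload dominates the protobuf size, which is exactly the case the warning was meant to catch.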





[jira] [Updated] (HBASE-15437) Response size calculated in RPCServer for warning tooLarge responses does NOT count CellScanner payload

2016-12-05 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-15437:
-
Status: Open  (was: Patch Available)

> Response size calculated in RPCServer for warning tooLarge responses does NOT 
> count CellScanner payload
> ---
>
> Key: HBASE-15437
> URL: https://issues.apache.org/jira/browse/HBASE-15437
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC
>Reporter: deepankar
>Assignee: deepankar
> Attachments: HBASE-15437-v1.patch, HBASE-15437-v2.patch, 
> HBASE-15437-v3.patch, HBASE-15437.patch
>
>
> After HBASE-13158, where we respond to RPCs with cells in the payload, 
> the protobuf response just carries the count of cells to read from the 
> payload. But there is a feature where we log a warning in RPCServer 
> whenever a response is tooLarge, and that size does not consider the 
> sizes of the cells in the payload CellScanner. Code from RPCServer:
> {code}
>   long responseSize = result.getSerializedSize();
>   // log any RPC responses that are slower than the configured warn
>   // response time or larger than configured warning size
>   boolean tooSlow = (processingTime > warnResponseTime && warnResponseTime > -1);
>   boolean tooLarge = (responseSize > warnResponseSize && warnResponseSize > -1);
>   if (tooSlow || tooLarge) {
>     // when tagging, we let TooLarge trump TooSmall to keep output simple
>     // note that large responses will often also be slow.
>     logResponse(new Object[]{param},
>         md.getName(), md.getName() + "(" + param.getClass().getName() + ")",
>         (tooLarge ? "TooLarge" : "TooSlow"),
>         status.getClient(), startTime, processingTime, qTime,
>         responseSize);
>   }
> {code}
> Should this feature no longer be supported, or should we add a method to 
> CellScanner, or a new interface, that returns the serialized size (though 
> that might not account for any compression codecs used during the 
> response)? Any other idea how this could be fixed?





[jira] [Updated] (HBASE-15437) Response size calculated in RPCServer for warning tooLarge responses does NOT count CellScanner payload

2016-12-05 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-15437:
-
Assignee: Jerry He  (was: deepankar)
  Status: Patch Available  (was: Open)

> Response size calculated in RPCServer for warning tooLarge responses does NOT 
> count CellScanner payload
> ---
>
> Key: HBASE-15437
> URL: https://issues.apache.org/jira/browse/HBASE-15437
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC
>Reporter: deepankar
>Assignee: Jerry He
> Attachments: HBASE-15437-v1.patch, HBASE-15437-v2.patch, 
> HBASE-15437-v3.patch, HBASE-15437.patch
>
>
> After HBASE-13158, where we respond to RPCs with cells in the payload, 
> the protobuf response just carries the count of cells to read from the 
> payload. But there is a feature where we log a warning in RPCServer 
> whenever a response is tooLarge, and that size does not consider the 
> sizes of the cells in the payload CellScanner. Code from RPCServer:
> {code}
>   long responseSize = result.getSerializedSize();
>   // log any RPC responses that are slower than the configured warn
>   // response time or larger than configured warning size
>   boolean tooSlow = (processingTime > warnResponseTime && warnResponseTime > -1);
>   boolean tooLarge = (responseSize > warnResponseSize && warnResponseSize > -1);
>   if (tooSlow || tooLarge) {
>     // when tagging, we let TooLarge trump TooSmall to keep output simple
>     // note that large responses will often also be slow.
>     logResponse(new Object[]{param},
>         md.getName(), md.getName() + "(" + param.getClass().getName() + ")",
>         (tooLarge ? "TooLarge" : "TooSlow"),
>         status.getClient(), startTime, processingTime, qTime,
>         responseSize);
>   }
> {code}
> Should this feature no longer be supported, or should we add a method to 
> CellScanner, or a new interface, that returns the serialized size (though 
> that might not account for any compression codecs used during the 
> response)? Any other idea how this could be fixed?





[jira] [Updated] (HBASE-15437) Response size calculated in RPCServer for warning tooLarge responses does NOT count CellScanner payload

2016-12-05 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-15437:
-
Attachment: HBASE-15437-v4.patch

v4 to fix the findbugs.

> Response size calculated in RPCServer for warning tooLarge responses does NOT 
> count CellScanner payload
> ---
>
> Key: HBASE-15437
> URL: https://issues.apache.org/jira/browse/HBASE-15437
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC
>Reporter: deepankar
>Assignee: Jerry He
> Attachments: HBASE-15437-v1.patch, HBASE-15437-v2.patch, 
> HBASE-15437-v3.patch, HBASE-15437-v4.patch, HBASE-15437.patch
>
>
> After HBASE-13158, where we respond to RPCs with cells in the payload, 
> the protobuf response just carries the count of cells to read from the 
> payload. But there is a feature where we log a warning in RPCServer 
> whenever a response is tooLarge, and that size does not consider the 
> sizes of the cells in the payload CellScanner. Code from RPCServer:
> {code}
>   long responseSize = result.getSerializedSize();
>   // log any RPC responses that are slower than the configured warn
>   // response time or larger than configured warning size
>   boolean tooSlow = (processingTime > warnResponseTime && warnResponseTime > -1);
>   boolean tooLarge = (responseSize > warnResponseSize && warnResponseSize > -1);
>   if (tooSlow || tooLarge) {
>     // when tagging, we let TooLarge trump TooSmall to keep output simple
>     // note that large responses will often also be slow.
>     logResponse(new Object[]{param},
>         md.getName(), md.getName() + "(" + param.getClass().getName() + ")",
>         (tooLarge ? "TooLarge" : "TooSlow"),
>         status.getClient(), startTime, processingTime, qTime,
>         responseSize);
>   }
> {code}
> Should this feature no longer be supported, or should we add a method to 
> CellScanner, or a new interface, that returns the serialized size (though 
> that might not account for any compression codecs used during the 
> response)? Any other idea how this could be fixed?





[jira] [Updated] (HBASE-15437) Response size calculated in RPCServer for warning tooLarge responses does NOT count CellScanner payload

2016-12-06 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-15437:
-
Attachment: HBASE-15437-v5.patch

> Response size calculated in RPCServer for warning tooLarge responses does NOT 
> count CellScanner payload
> ---
>
> Key: HBASE-15437
> URL: https://issues.apache.org/jira/browse/HBASE-15437
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC
>Reporter: deepankar
>Assignee: Jerry He
> Attachments: HBASE-15437-v1.patch, HBASE-15437-v2.patch, 
> HBASE-15437-v3.patch, HBASE-15437-v4.patch, HBASE-15437-v5.patch, 
> HBASE-15437.patch
>
>
> After HBASE-13158, where we respond to RPCs with cells in the payload, 
> the protobuf response just carries the count of cells to read from the 
> payload. But there is a feature where we log a warning in RPCServer 
> whenever a response is tooLarge, and that size does not consider the 
> sizes of the cells in the payload CellScanner. Code from RPCServer:
> {code}
>   long responseSize = result.getSerializedSize();
>   // log any RPC responses that are slower than the configured warn
>   // response time or larger than configured warning size
>   boolean tooSlow = (processingTime > warnResponseTime && warnResponseTime > -1);
>   boolean tooLarge = (responseSize > warnResponseSize && warnResponseSize > -1);
>   if (tooSlow || tooLarge) {
>     // when tagging, we let TooLarge trump TooSmall to keep output simple
>     // note that large responses will often also be slow.
>     logResponse(new Object[]{param},
>         md.getName(), md.getName() + "(" + param.getClass().getName() + ")",
>         (tooLarge ? "TooLarge" : "TooSlow"),
>         status.getClient(), startTime, processingTime, qTime,
>         responseSize);
>   }
> {code}
> Should this feature no longer be supported, or should we add a method to 
> CellScanner, or a new interface, that returns the serialized size (though 
> that might not account for any compression codecs used during the 
> response)? Any other idea how this could be fixed?





[jira] [Commented] (HBASE-15437) Response size calculated in RPCServer for warning tooLarge responses does NOT count CellScanner payload

2016-12-06 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15726601#comment-15726601
 ] 

Jerry He commented on HBASE-15437:
--

Hi, [~anoop.hbase]

Thanks for the review. All good points. 
v5 addressed your suggestions.

> Response size calculated in RPCServer for warning tooLarge responses does NOT 
> count CellScanner payload
> ---
>
> Key: HBASE-15437
> URL: https://issues.apache.org/jira/browse/HBASE-15437
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC
>Reporter: deepankar
>Assignee: Jerry He
> Attachments: HBASE-15437-v1.patch, HBASE-15437-v2.patch, 
> HBASE-15437-v3.patch, HBASE-15437-v4.patch, HBASE-15437-v5.patch, 
> HBASE-15437.patch
>
>
> After HBASE-13158, where we respond to RPCs with cells in the payload, 
> the protobuf response just carries the count of cells to read from the 
> payload. But there is a feature where we log a warning in RPCServer 
> whenever a response is tooLarge, and that size does not consider the 
> sizes of the cells in the payload CellScanner. Code from RPCServer:
> {code}
>   long responseSize = result.getSerializedSize();
>   // log any RPC responses that are slower than the configured warn
>   // response time or larger than configured warning size
>   boolean tooSlow = (processingTime > warnResponseTime && warnResponseTime > -1);
>   boolean tooLarge = (responseSize > warnResponseSize && warnResponseSize > -1);
>   if (tooSlow || tooLarge) {
>     // when tagging, we let TooLarge trump TooSmall to keep output simple
>     // note that large responses will often also be slow.
>     logResponse(new Object[]{param},
>         md.getName(), md.getName() + "(" + param.getClass().getName() + ")",
>         (tooLarge ? "TooLarge" : "TooSlow"),
>         status.getClient(), startTime, processingTime, qTime,
>         responseSize);
>   }
> {code}
> Should this feature no longer be supported, or should we add a method to 
> CellScanner, or a new interface, that returns the serialized size (though 
> that might not account for any compression codecs used during the 
> response)? Any other idea how this could be fixed?





[jira] [Updated] (HBASE-17221) Abstract out an interface for RpcServer.Call

2016-12-06 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-17221:
-
Hadoop Flags: Incompatible change, Reviewed
Release Note: 
Provides an interface, RpcCall, on the server side. 
RpcServer.Call is now marked as @InterfaceAudience.Private and implements the 
RpcCall interface.
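As a rough sketch of what this buys consumers (names below are invented for illustration and do not reflect the actual RpcCall interface in the patch): coprocessor- or scheduler-side code can be written against a small interface rather than the concrete server class.

```java
// Hypothetical sketch; see the HBASE-17221 patch for the real interface.
interface RpcCallSketch {
    int getPriority();
    long getSize();
}

// A concrete call, as the server would construct it.
final class CallImpl implements RpcCallSketch {
    public int getPriority() { return 100; }
    public long getSize() { return 42L; }
}

final class Consumer {
    // Depends only on the interface, not on the server's concrete Call class.
    static long describe(RpcCallSketch call) {
        return call.getPriority() + call.getSize();
    }

    public static void main(String[] args) {
        System.out.println(describe(new CallImpl())); // prints 142
    }
}
```

Because consumers hold only the interface, the server is free to evolve or replace its concrete Call class without breaking them.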

> Abstract out an interface for RpcServer.Call
> 
>
> Key: HBASE-17221
> URL: https://issues.apache.org/jira/browse/HBASE-17221
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jerry He
>Assignee: Jerry He
> Fix For: 2.0.0
>
> Attachments: HBASE-17221-v2.patch, HBASE-17221-v3.patch, 
> HBASE-17221-v4.patch, HBASE-17221.patch
>
>
> RpcServer.Call is a concrete class, but it is marked as:
> {noformat}
> @InterfaceAudience.LimitedPrivate({HBaseInterfaceAudience.COPROC, 
> HBaseInterfaceAudience.PHOENIX})
> {noformat}
> Let's abstract an interface out of it for potential consumers that want 
> to pass it around.





[jira] [Commented] (HBASE-17221) Abstract out an interface for RpcServer.Call

2016-12-06 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15726619#comment-15726619
 ] 

Jerry He commented on HBASE-17221:
--

Updated, [~anoop.hbase]

> Abstract out an interface for RpcServer.Call
> 
>
> Key: HBASE-17221
> URL: https://issues.apache.org/jira/browse/HBASE-17221
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jerry He
>Assignee: Jerry He
> Fix For: 2.0.0
>
> Attachments: HBASE-17221-v2.patch, HBASE-17221-v3.patch, 
> HBASE-17221-v4.patch, HBASE-17221.patch
>
>
> RpcServer.Call is a concrete class, but it is marked as:
> {noformat}
> @InterfaceAudience.LimitedPrivate({HBaseInterfaceAudience.COPROC, 
> HBaseInterfaceAudience.PHOENIX})
> {noformat}
> Let's abstract an interface out of it for potential consumers that want 
> to pass it around.





[jira] [Commented] (HBASE-15437) Response size calculated in RPCServer for warning tooLarge responses does NOT count CellScanner payload

2016-12-07 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15730164#comment-15730164
 ] 

Jerry He commented on HBASE-15437:
--

Hi, [~anoop.hbase]
v6 did what you suggested.
addSize() is called regardless of isClientCellBlockSupported in scan and multi, 
which is correct because the size is used for other checks.
For get and mutate, there is no other use of the size.



> Response size calculated in RPCServer for warning tooLarge responses does NOT 
> count CellScanner payload
> ---
>
> Key: HBASE-15437
> URL: https://issues.apache.org/jira/browse/HBASE-15437
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC
>Reporter: deepankar
>Assignee: Jerry He
> Attachments: HBASE-15437-v1.patch, HBASE-15437-v2.patch, 
> HBASE-15437-v3.patch, HBASE-15437-v4.patch, HBASE-15437-v5.patch, 
> HBASE-15437.patch
>
>
> After HBASE-13158, where we respond to RPCs with cells in the payload, 
> the protobuf response just carries the count of cells to read from the 
> payload. But there is a feature where we log a warning in RPCServer 
> whenever a response is tooLarge, and that size does not consider the 
> sizes of the cells in the payload CellScanner. Code from RPCServer:
> {code}
>   long responseSize = result.getSerializedSize();
>   // log any RPC responses that are slower than the configured warn
>   // response time or larger than configured warning size
>   boolean tooSlow = (processingTime > warnResponseTime && warnResponseTime > -1);
>   boolean tooLarge = (responseSize > warnResponseSize && warnResponseSize > -1);
>   if (tooSlow || tooLarge) {
>     // when tagging, we let TooLarge trump TooSmall to keep output simple
>     // note that large responses will often also be slow.
>     logResponse(new Object[]{param},
>         md.getName(), md.getName() + "(" + param.getClass().getName() + ")",
>         (tooLarge ? "TooLarge" : "TooSlow"),
>         status.getClient(), startTime, processingTime, qTime,
>         responseSize);
>   }
> {code}
> Should this feature no longer be supported, or should we add a method to 
> CellScanner, or a new interface, that returns the serialized size (though 
> that might not account for any compression codecs used during the 
> response)? Any other idea how this could be fixed?




