[jira] [Updated] (HBASE-5878) Use getVisibleLength public api from HdfsDataInputStream from Hadoop-2.

2015-08-06 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-5878:

Fix Version/s: (was: 1.0.3)
   (was: 1.2.1)
   (was: 1.1.2)

Dropping 1.0, 1.1, and 1.2 from fix targets, as I don't see this as a bug fix 
so much as a code-smell cleanup. [~enis]/[~apurtell], [~busbey], speak up if you 
think I have that wrong.

 Use getVisibleLength public api from HdfsDataInputStream from Hadoop-2.
 ---

 Key: HBASE-5878
 URL: https://issues.apache.org/jira/browse/HBASE-5878
 Project: HBase
  Issue Type: Bug
  Components: wal
Reporter: Uma Maheswara Rao G
Assignee: Ashish Singhi
 Fix For: 2.0.0, 1.3.0

 Attachments: HBASE-5878-v2.patch, HBASE-5878-v3.patch, 
 HBASE-5878-v4.patch, HBASE-5878-v5.patch, HBASE-5878-v5.patch, 
 HBASE-5878.patch


 SequenceFileLogReader: 
 Currently HBase uses the getFileLength API from the DFSInputStream class via 
 reflection. DFSInputStream is not exposed as public, so this may change in the 
 future. HDFS now exposes HdfsDataInputStream as a public API.
 We can make use of it as an else condition, for when we are not able to find 
 the getFileLength API on DFSInputStream, so that we will not have any sudden 
 surprise like the one we are facing today.
 Also, the code just logs one warn message and proceeds if any exception is 
 thrown while getting the length. I think we should re-throw the exception 
 because there is no point in continuing with data loss.
 {code}
 long adjust = 0;
 try {
   Field fIn = FilterInputStream.class.getDeclaredField("in");
   fIn.setAccessible(true);
   Object realIn = fIn.get(this.in);
   // In hadoop 0.22, DFSInputStream is a standalone class.  Before this,
   // it was an inner class of DFSClient.
   if (realIn.getClass().getName().endsWith("DFSInputStream")) {
     Method getFileLength = realIn.getClass().
       getDeclaredMethod("getFileLength", new Class<?>[] {});
     getFileLength.setAccessible(true);
     long realLength = ((Long)getFileLength.
       invoke(realIn, new Object[] {})).longValue();
     assert(realLength >= this.length);
     adjust = realLength - this.length;
   } else {
     LOG.info("Input stream class: " + realIn.getClass().getName() +
       ", not adjusting length");
   }
 } catch (Exception e) {
   SequenceFileLogReader.LOG.warn(
     "Error while trying to get accurate file length.  " +
     "Truncation / data loss may occur if RegionServers die.", e);
 }
 return adjust + super.getPos();
 {code}
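
For illustration, a minimal self-contained sketch of the approach the description suggests (not the committed patch; the class name and messages are made up, and only HdfsDataInputStream#getVisibleLength and FSDataInputStream#getWrappedStream are assumed from Hadoop): prefer the public Hadoop-2 API, keep the reflective lookup as the else branch, and re-throw instead of just logging.

{code}
import java.io.IOException;
import java.lang.reflect.Method;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.hdfs.client.HdfsDataInputStream;

// Hypothetical helper, not the committed patch.
public final class VisibleLengthSketch {

  private VisibleLengthSketch() {
  }

  /**
   * Length of the file as currently visible to readers. Prefers the public
   * Hadoop-2 API and falls back to reflection only for other stream types.
   */
  public static long getVisibleLength(FSDataInputStream in) throws IOException {
    if (in instanceof HdfsDataInputStream) {
      // Public API since Hadoop 2; no reflection needed.
      return ((HdfsDataInputStream) in).getVisibleLength();
    }
    try {
      // Else branch: keep the old reflective lookup of getFileLength().
      Object wrapped = in.getWrappedStream();
      Method getFileLength = wrapped.getClass().getDeclaredMethod("getFileLength");
      getFileLength.setAccessible(true);
      return ((Long) getFileLength.invoke(wrapped)).longValue();
    } catch (Exception e) {
      // Re-throw instead of just logging a warning: continuing risks data loss.
      throw new IOException("Unable to determine visible length of WAL file", e);
    }
  }
}
{code}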



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14191) HBase grant at specific column family level does not work for Groups

2015-08-06 Thread Tom James (JIRA)
Tom James created HBASE-14191:
-

 Summary: HBase grant at specific column family level does not work 
for Groups
 Key: HBASE-14191
 URL: https://issues.apache.org/jira/browse/HBASE-14191
 Project: HBase
  Issue Type: Bug
 Environment: Version 0.98.6-cdh5.3.3, rUnknown, Wed Apr  8 15:00:15 
PDT 2015
Reporter: Tom James


Performing a grant command for a specific column family in a table, to a 
specific group, does not produce the needed results. 
However, when a specific user is named (instead of a group name) in the grant 
command, it becomes effective.

Steps to reproduce: 
1) Using the super-user, grant a table/column-family-level permission to a group.
2) Log in as a user that is part of the above group and scan the table. It does 
not return any results.

3) Using the super-user, grant a table/column-family-level permission to a 
specific user (instead of the group).
4) Log in as that specific user and scan the table. It produces correct 
results.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14191) HBase grant at specific column family level does not work for Groups

2015-08-06 Thread Tom James (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14660282#comment-14660282
 ] 

Tom James commented on HBASE-14191:
---

Seems there is a fix for column qualifier level. HBASE-13239

 HBase grant at specific column family level does not work for Groups
 

 Key: HBASE-14191
 URL: https://issues.apache.org/jira/browse/HBASE-14191
 Project: HBase
  Issue Type: Bug
 Environment: Version 0.98.6-cdh5.3.3, rUnknown, Wed Apr  8 15:00:15 
 PDT 2015
Reporter: Tom James

 Performing a grant command for a specific column family in a table, to a 
 specific group, does not produce the needed results. 
 However, when a specific user is named (instead of a group name) in the grant 
 command, it becomes effective.
 Steps to reproduce: 
 1) Using the super-user, grant a table/column-family-level permission to a group.
 2) Log in as a user that is part of the above group and scan the table. It does 
 not return any results.
 3) Using the super-user, grant a table/column-family-level permission to a 
 specific user (instead of the group).
 4) Log in as that specific user and scan the table. It produces correct 
 results.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-5878) Use getVisibleLength public api from HdfsDataInputStream from Hadoop-2.

2015-08-06 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-5878:

Attachment: HBASE-5878-v5-0.98.patch

Patch applies mostly cleanly to 0.98; attaching here. Do you want this backported, 
[~apurtell]?

 Use getVisibleLength public api from HdfsDataInputStream from Hadoop-2.
 ---

 Key: HBASE-5878
 URL: https://issues.apache.org/jira/browse/HBASE-5878
 Project: HBase
  Issue Type: Bug
  Components: wal
Reporter: Uma Maheswara Rao G
Assignee: Ashish Singhi
 Fix For: 2.0.0, 1.3.0

 Attachments: HBASE-5878-v2.patch, HBASE-5878-v3.patch, 
 HBASE-5878-v4.patch, HBASE-5878-v5-0.98.patch, HBASE-5878-v5.patch, 
 HBASE-5878-v5.patch, HBASE-5878.patch


 SequenceFileLogReader: 
 Currently HBase uses the getFileLength API from the DFSInputStream class via 
 reflection. DFSInputStream is not exposed as public, so this may change in the 
 future. HDFS now exposes HdfsDataInputStream as a public API.
 We can make use of it as an else condition, for when we are not able to find 
 the getFileLength API on DFSInputStream, so that we will not have any sudden 
 surprise like the one we are facing today.
 Also, the code just logs one warn message and proceeds if any exception is 
 thrown while getting the length. I think we should re-throw the exception 
 because there is no point in continuing with data loss.
 {code}
 long adjust = 0;
 try {
   Field fIn = FilterInputStream.class.getDeclaredField("in");
   fIn.setAccessible(true);
   Object realIn = fIn.get(this.in);
   // In hadoop 0.22, DFSInputStream is a standalone class.  Before this,
   // it was an inner class of DFSClient.
   if (realIn.getClass().getName().endsWith("DFSInputStream")) {
     Method getFileLength = realIn.getClass().
       getDeclaredMethod("getFileLength", new Class<?>[] {});
     getFileLength.setAccessible(true);
     long realLength = ((Long)getFileLength.
       invoke(realIn, new Object[] {})).longValue();
     assert(realLength >= this.length);
     adjust = realLength - this.length;
   } else {
     LOG.info("Input stream class: " + realIn.getClass().getName() +
       ", not adjusting length");
   }
 } catch (Exception e) {
   SequenceFileLogReader.LOG.warn(
     "Error while trying to get accurate file length.  " +
     "Truncation / data loss may occur if RegionServers die.", e);
 }
 return adjust + super.getPos();
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14191) HBase grant at specific column family level does not work for Groups

2015-08-06 Thread Tom James (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14660285#comment-14660285
 ] 

Tom James commented on HBASE-14191:
---

(DEV/PAT) USER-1@r01mgt:~ $ hbase shell
15/08/06 10:47:13 INFO Configuration.deprecation: hadoop.native.lib is 
deprecated. Instead, use io.native.lib.available
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.98.6-cdh5.3.3, rUnknown, Wed Apr  8 15:00:15 PDT 2015

hbase(main):001:0> (DEV/PAT) USER-1@r01mgt:~ $
(DEV/PAT) USER-1@r01mgt:~ $ id
uid=578384069(USER-1) gid=1001(pat_users) 
groups=1001(pat_users),578001513,578051727,Removed for 
brevity,578316368(PAT_APP_SUP_RW_P-TDBFG),removed for brevity

(DEV/PAT) USER-1@r01mgt:~ $ kinit hbasea...@hadoop.com
Password for hbasea...@hadoop.com:
(DEV/PAT) USER-1@r01mgt:~ $ klist
Ticket cache: FILE:/tmp/krb5cc_578384069_mfR9h0
Default principal: hbasea...@hadoop.com

Valid starting       Expires              Service principal
08/06/15 10:48:11  08/06/15 20:48:11  krbtgt/hadoop@hadoop.com
renew until 08/07/15 10:48:11
(DEV/PAT) USER-1@r01mgt:~ $ hbase shell
15/08/06 10:48:29 INFO Configuration.deprecation: hadoop.native.lib is 
deprecated. Instead, use io.native.lib.available
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.98.6-cdh5.3.3, rUnknown, Wed Apr  8 15:00:15 PDT 2015

hbase(main):001:0> create 'DEFECT-1745_TEST', 'cf'
0 row(s) in 2.7310 seconds

= Hbase::Table - DEFECT-1745_TEST
hbase(main):002:0> user_permission 'DEFECT-1745_TEST'
User   
Table,Family,Qualifier:Permission
 hbaseadmn  DEFECT-1745_TEST,,: 
[Permission: actions=READ,WRITE,EXEC,CREATE,ADMIN]
1 row(s) in 0.8110 seconds

hbase(main):003:0> grant '@PAT_APP_SUP_RW_P-TDBFG', 'R', 'DEFECT-1745_TEST','cf'
0 row(s) in 0.4190 seconds

hbase(main):004:0> user_permission 'DEFECT-1745_TEST'
User   
Table,Family,Qualifier:Permission
 @PAT_APP_SUP_RW_P-TDBFG  DEFECT-1745_TEST,cf,: 
[Permission: actions=READ]
 hbaseadmn  DEFECT-1745_TEST,,: 
[Permission: actions=READ,WRITE,EXEC,CREATE,ADMIN]
2 row(s) in 0.2830 seconds

hbase(main):005:0> put 'DEFECT-1745_TEST', 'r1','cf:a','v'
0 row(s) in 0.1810 seconds

hbase(main):006:0> scan 'DEFECT-1745_TEST'
ROW                     COLUMN+CELL
 r1                     column=cf:a, timestamp=1438872836661, value=v
1 row(s) in 0.0410 seconds

hbase(main):007:0> (DEV/PAT) USER-1@r01mgt:~ $ kdestroy
(DEV/PAT) USER-1@r01mgt:~ $ hbase shell
15/08/06 10:54:58 INFO Configuration.deprecation: hadoop.native.lib is 
deprecated. Instead, use io.native.lib.available
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.98.6-cdh5.3.3, rUnknown, Wed Apr  8 15:00:15 PDT 2015

hbase(main):001:0> scan 'DEFECT-1745_TEST'
ROW                     COLUMN+CELL

ERROR: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
permissions (table=DEFECT-1745_TEST, action=READ)


 HBase grant at specific column family level does not work for Groups
 

 Key: HBASE-14191
 URL: https://issues.apache.org/jira/browse/HBASE-14191
 Project: HBase
  Issue Type: Bug
 Environment: Version 0.98.6-cdh5.3.3, rUnknown, Wed Apr  8 15:00:15 
 PDT 2015
Reporter: Tom James

 Performing a grant command for a specific column family in a table, to a 
 specific group, does not produce the needed results. 
 However, when a specific user is named (instead of a group name) in the grant 
 command, it becomes effective.
 Steps to reproduce: 
 1) Using the super-user, grant a table/column-family-level permission to a group.
 2) Log in as a user that is part of the above group and scan the table. It does 
 not return any results.
 3) Using the super-user, grant a table/column-family-level permission to a 
 specific user (instead of the group).
 4) Log in as that specific user and scan the table. It produces correct 
 results.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13865) Increase the default value for hbase.hregion.memstore.block.multipler from 2 to 4 (part 2)

2015-08-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14660707#comment-14660707
 ] 

Hudson commented on HBASE-13865:


SUCCESS: Integrated in HBase-1.0 #1005 (See 
[https://builds.apache.org/job/HBase-1.0/1005/])
HBASE-13865 Increase the default value for 
hbase.hregion.memstore.block.multipler from 2 to 4 (part 2) (ndimiduk: rev 
cb4b395bafd207523d7bea286aa419f4c41a3d3a)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMajorCompaction.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestClientPushback.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompaction.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMinorCompaction.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


 Increase the default value for hbase.hregion.memstore.block.multipler from 2 
 to 4 (part 2)
 --

 Key: HBASE-13865
 URL: https://issues.apache.org/jira/browse/HBASE-13865
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 2.0.0
Reporter: Vladimir Rodionov
Assignee: Gabor Liptak
Priority: Trivial
 Fix For: 2.0.0, 0.98.14, 1.1.2, 1.3.0, 1.2.1, 1.0.3

 Attachments: HBASE-13865.1.patch, HBASE-13865.2.patch, 
 HBASE-13865.2.patch


 It's 4 in the book and 2 in the current master. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14192) HBASE REST Cluster Constructor with String List

2015-08-06 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14660708#comment-14660708
 ] 

Ted Yu commented on HBASE-14192:


Mind attaching a patch?

 HBASE REST Cluster Constructor with String List
 ---

 Key: HBASE-14192
 URL: https://issues.apache.org/jira/browse/HBASE-14192
 Project: HBase
  Issue Type: Bug
  Components: REST
Affects Versions: 1.0.0, 1.0.1, 1.1.0, 0.98.13, 1.1.1, 1.1.0.1
Reporter: Rick Kellogg
Priority: Minor

 The HBase REST Cluster constructor that takes a list of hostname:port strings 
 does not set the internal list of nodes correctly.
 Existing method:
 public Cluster(List<String> nodes) {
   nodes.addAll(nodes);
 }
 Corrected method:
 public Cluster(List<String> nodes) {
   this.nodes.addAll(nodes);
 }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14150) Add BulkLoad functionality to HBase-Spark Module

2015-08-06 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-14150:

Fix Version/s: 2.0.0
   Status: Patch Available  (was: Open)

 Add BulkLoad functionality to HBase-Spark Module
 

 Key: HBASE-14150
 URL: https://issues.apache.org/jira/browse/HBASE-14150
 Project: HBase
  Issue Type: New Feature
  Components: spark
Reporter: Ted Malaska
Assignee: Ted Malaska
 Fix For: 2.0.0

 Attachments: HBASE-14150.1.patch, HBASE-14150.2.patch, 
 HBASE-14150.3.patch


 Add on to the work done in HBASE-13992 to add functionality to do a bulk load 
 from a given RDD.
 This will do the following:
 1. Figure out the number of regions, and sort and partition the data correctly 
 to be written out to HFiles.
 2. Also, unlike the MR bulk load, I would like the columns to be sorted in 
 the shuffle stage and not in the memory of the reducer.  This will allow this 
 design to support super-wide records without going out of memory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14150) Add BulkLoad functionality to HBase-Spark Module

2015-08-06 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-14150:

Status: Open  (was: Patch Available)

 Add BulkLoad functionality to HBase-Spark Module
 

 Key: HBASE-14150
 URL: https://issues.apache.org/jira/browse/HBASE-14150
 Project: HBase
  Issue Type: New Feature
  Components: spark
Reporter: Ted Malaska
Assignee: Ted Malaska
 Attachments: HBASE-14150.1.patch, HBASE-14150.2.patch, 
 HBASE-14150.3.patch


 Add on to the work done in HBASE-13992 to add functionality to do a bulk load 
 from a given RDD.
 This will do the following:
 1. Figure out the number of regions, and sort and partition the data correctly 
 to be written out to HFiles.
 2. Also, unlike the MR bulk load, I would like the columns to be sorted in 
 the shuffle stage and not in the memory of the reducer.  This will allow this 
 design to support super-wide records without going out of memory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14181) Add Spark DataFrame DataSource to HBase-Spark Module

2015-08-06 Thread Ted Malaska (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Malaska updated HBASE-14181:

Attachment: HBASE-14181.1.patch

This is NOT a final draft.  It doesn't even deserve a review.  I just wanted to 
post it to show progress.  I should have a first draft done by early next week.

I still need to do:
1. Filter push down of RowKeys
2. Filter push down of Columns
3. Code cleanup
4. More testing

But this patch does contain code that works, which was pretty cool just to see.

 Add Spark DataFrame DataSource to HBase-Spark Module
 

 Key: HBASE-14181
 URL: https://issues.apache.org/jira/browse/HBASE-14181
 Project: HBase
  Issue Type: New Feature
  Components: spark
Reporter: Ted Malaska
Assignee: Ted Malaska
Priority: Minor
 Attachments: HBASE-14181.1.patch


 Build a RelationProvider for HBase-Spark Module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14150) Add BulkLoad functionality to HBase-Spark Module

2015-08-06 Thread Ted Malaska (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Malaska updated HBASE-14150:

Attachment: HBASE-14150.3.patch

Added the following:
1. Partitioner is now in its own class file
2. There are unit tests that just test the partitioner
3. Added a unit test for multi-region bulk load
  3.1 Tested that the data got into HBase, and also that the right number 
of HFiles got created
  3.2 Made sure that the partitioning works fine for the EMPTY_START_ROW rowKey
4. Fixed some spelling
5. Added Javadoc for some function parameters that I missed


 Add BulkLoad functionality to HBase-Spark Module
 

 Key: HBASE-14150
 URL: https://issues.apache.org/jira/browse/HBASE-14150
 Project: HBase
  Issue Type: New Feature
  Components: spark
Reporter: Ted Malaska
Assignee: Ted Malaska
 Attachments: HBASE-14150.1.patch, HBASE-14150.2.patch, 
 HBASE-14150.3.patch


 Add on to the work done in HBASE-13992 to add functionality to do a bulk load 
 from a given RDD.
 This will do the following:
 1. Figure out the number of regions, and sort and partition the data correctly 
 to be written out to HFiles.
 2. Also, unlike the MR bulk load, I would like the columns to be sorted in 
 the shuffle stage and not in the memory of the reducer.  This will allow this 
 design to support super-wide records without going out of memory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14184) Fix indention and type-o in JavaHBaseContext

2015-08-06 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14660837#comment-14660837
 ] 

Sean Busbey commented on HBASE-14184:
-

lgtm. my working repo is dirty from HBASE-14085 on 0.94 ATM. once I clean that 
out I'll push this this evening unless someone else beats me to it.

 Fix indention and type-o in JavaHBaseContext
 

 Key: HBASE-14184
 URL: https://issues.apache.org/jira/browse/HBASE-14184
 Project: HBase
  Issue Type: Wish
  Components: spark
Reporter: Ted Malaska
Assignee: Ted Malaska
Priority: Minor
 Attachments: HBASE-14184.3.patch


 Looks like there is a Ddd that should be Rdd.
 Also looks like everything is one space over too much



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-5878) Use getVisibleLength public api from HdfsDataInputStream from Hadoop-2.

2015-08-06 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-5878:
-
Attachment: HBASE-5878-v6.patch

Thanks for the comments, Andrew and Nick.
Updated the master patch.

 Use getVisibleLength public api from HdfsDataInputStream from Hadoop-2.
 ---

 Key: HBASE-5878
 URL: https://issues.apache.org/jira/browse/HBASE-5878
 Project: HBase
  Issue Type: Bug
  Components: wal
Reporter: Uma Maheswara Rao G
Assignee: Ashish Singhi
 Fix For: 2.0.0, 1.3.0

 Attachments: HBASE-5878-v2.patch, HBASE-5878-v3.patch, 
 HBASE-5878-v4.patch, HBASE-5878-v5-0.98.patch, HBASE-5878-v5.patch, 
 HBASE-5878-v5.patch, HBASE-5878-v6.patch, HBASE-5878.patch


 SequenceFileLogReader: 
 Currently HBase uses the getFileLength API from the DFSInputStream class via 
 reflection. DFSInputStream is not exposed as public, so this may change in the 
 future. HDFS now exposes HdfsDataInputStream as a public API.
 We can make use of it as an else condition, for when we are not able to find 
 the getFileLength API on DFSInputStream, so that we will not have any sudden 
 surprise like the one we are facing today.
 Also, the code just logs one warn message and proceeds if any exception is 
 thrown while getting the length. I think we should re-throw the exception 
 because there is no point in continuing with data loss.
 {code}
 long adjust = 0;
 try {
   Field fIn = FilterInputStream.class.getDeclaredField("in");
   fIn.setAccessible(true);
   Object realIn = fIn.get(this.in);
   // In hadoop 0.22, DFSInputStream is a standalone class.  Before this,
   // it was an inner class of DFSClient.
   if (realIn.getClass().getName().endsWith("DFSInputStream")) {
     Method getFileLength = realIn.getClass().
       getDeclaredMethod("getFileLength", new Class<?>[] {});
     getFileLength.setAccessible(true);
     long realLength = ((Long)getFileLength.
       invoke(realIn, new Object[] {})).longValue();
     assert(realLength >= this.length);
     adjust = realLength - this.length;
   } else {
     LOG.info("Input stream class: " + realIn.getClass().getName() +
       ", not adjusting length");
   }
 } catch (Exception e) {
   SequenceFileLogReader.LOG.warn(
     "Error while trying to get accurate file length.  " +
     "Truncation / data loss may occur if RegionServers die.", e);
 }
 return adjust + super.getPos();
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13865) Increase the default value for hbase.hregion.memstore.block.multipler from 2 to 4 (part 2)

2015-08-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14660762#comment-14660762
 ] 

Hudson commented on HBASE-13865:


FAILURE: Integrated in HBase-1.1 #601 (See 
[https://builds.apache.org/job/HBase-1.1/601/])
HBASE-13865 Increase the default value for 
hbase.hregion.memstore.block.multipler from 2 to 4 (part 2) (ndimiduk: rev 
5ae95de049b8d82eaadf0c3b161303c7406c4754)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMajorCompaction.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompaction.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestClientPushback.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMinorCompaction.java


 Increase the default value for hbase.hregion.memstore.block.multipler from 2 
 to 4 (part 2)
 --

 Key: HBASE-13865
 URL: https://issues.apache.org/jira/browse/HBASE-13865
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 2.0.0
Reporter: Vladimir Rodionov
Assignee: Gabor Liptak
Priority: Trivial
 Fix For: 2.0.0, 0.98.14, 1.1.2, 1.3.0, 1.2.1, 1.0.3

 Attachments: HBASE-13865.1.patch, HBASE-13865.2.patch, 
 HBASE-13865.2.patch


 It's 4 in the book and 2 in the current master. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-5878) Use getVisibleLength public api from HdfsDataInputStream from Hadoop-2.

2015-08-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14660755#comment-14660755
 ] 

Hadoop QA commented on HBASE-5878:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12749093/HBASE-5878-v5-0.98.patch
  against 0.98 branch at commit 7a9e10dc11877420c53245c403897d746bebc077.
  ATTACHMENT ID: 12749093

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
22 warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
3879 checkstyle errors (more than the master's current 3877 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn post-site goal 
to fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.rest.TestTableResource.testTableListXML(TestTableResource.java:207)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14992//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14992//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14992//artifact/patchprocess/checkstyle-aggregate.html

Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14992//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14992//console

This message is automatically generated.

 Use getVisibleLength public api from HdfsDataInputStream from Hadoop-2.
 ---

 Key: HBASE-5878
 URL: https://issues.apache.org/jira/browse/HBASE-5878
 Project: HBase
  Issue Type: Bug
  Components: wal
Reporter: Uma Maheswara Rao G
Assignee: Ashish Singhi
 Fix For: 2.0.0, 1.3.0

 Attachments: HBASE-5878-v2.patch, HBASE-5878-v3.patch, 
 HBASE-5878-v4.patch, HBASE-5878-v5-0.98.patch, HBASE-5878-v5.patch, 
 HBASE-5878-v5.patch, HBASE-5878-v6-0.98.patch, HBASE-5878-v6.patch, 
 HBASE-5878.patch


 SequenceFileLogReader: 
 Currently HBase uses the getFileLength API from the DFSInputStream class via 
 reflection. DFSInputStream is not exposed as public, so this may change in the 
 future. HDFS now exposes HdfsDataInputStream as a public API.
 We can make use of it as an else condition, for when we are not able to find 
 the getFileLength API on DFSInputStream, so that we will not have any sudden 
 surprise like the one we are facing today.
 Also, the code just logs one warn message and proceeds if any exception is 
 thrown while getting the length. I think we should re-throw the exception 
 because there is no point in continuing with data loss.
 {code}
 long adjust = 0;
 try {
   Field fIn = FilterInputStream.class.getDeclaredField("in");
   fIn.setAccessible(true);
   Object realIn = fIn.get(this.in);
   // In hadoop 0.22, DFSInputStream is a standalone class.  Before this,
   // it was an inner class of DFSClient.
   if (realIn.getClass().getName().endsWith("DFSInputStream")) {
     Method getFileLength = realIn.getClass().
       getDeclaredMethod("getFileLength", new Class<?>[] {});
     getFileLength.setAccessible(true);
     long realLength = ((Long)getFileLength.
       invoke(realIn, new Object[] {})).longValue();
     assert(realLength >= this.length);
     adjust = realLength - this.length;
   } else {
   

[jira] [Commented] (HBASE-14082) Add replica id to JMX metrics names

2015-08-06 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14660839#comment-14660839
 ] 

Nick Dimiduk commented on HBASE-14082:
--

[~enis]'s proposal should work, and I believe we've followed this pattern in 
other places related to replicas (tacking on the replicaId when it's > 0). At 
least for OpenTSDB users, this will [effectively 
result|https://github.com/OpenTSDB/tcollector/blob/master/collectors/0/hbase_regionserver.py#L29]
 in new regions showing up of the form {{_replicaid_x}}. I don't think its 
native tools will allow for rollups by logical region, though.

[~leochen4891]'s most recent suggestion seems also fine, but it means replica 
metrics will always be aggregated by OpenTSDB (if I'm understanding this 
correctly).

Pros/cons of these scenarios [~eclark], [~clayb], [~tsuna], [~phobos182], 
[~toffer] ?

 Add replica id to JMX metrics names
 ---

 Key: HBASE-14082
 URL: https://issues.apache.org/jira/browse/HBASE-14082
 Project: HBase
  Issue Type: Improvement
  Components: metrics
Reporter: Lei Chen
Assignee: Lei Chen
 Attachments: HBASE-14082-v1.patch, HBASE-14082-v2.patch


 Today, via JMX, one cannot distinguish a primary region from a replica. A 
 possible solution is to add replica id to JMX metrics names. The benefits may 
 include, for example:
 # Knowing the latency of a read request on a replica region means the first 
 attempt to the primary region has timed out.
 # Write requests on replicas are due to the replication process, while the 
 ones on primary are from clients.
 # In case of looking for hot spots of read operations, replicas should be 
 excluded since TIMELINE reads are sent to all replicas.
 To implement, we can change the format of metrics names found at 
 {code}Hadoop-HBase-RegionServer-Regions-Attributes{code}
 from 
 {code}namespace_namespace_table_tablename_region_regionname_metric_metricname{code}
 to
 {code}namespace_namespace_table_tablename_region_regionname_replicaid_replicaid_metric_metricname{code}
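 As a concrete illustration (the table, region, and metric names here are made 
 up), a metric for replica 1 of a region would change from 
 {code}namespace_default_table_usertable_region_1588230740_metric_readRequestCount{code}
 to
 {code}namespace_default_table_usertable_region_1588230740_replicaid_1_metric_readRequestCount{code}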



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13865) Increase the default value for hbase.hregion.memstore.block.multipler from 2 to 4 (part 2)

2015-08-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14660863#comment-14660863
 ] 

Hudson commented on HBASE-13865:


FAILURE: Integrated in HBase-TRUNK #6702 (See 
[https://builds.apache.org/job/HBase-TRUNK/6702/])
HBASE-13865 Increase the default value for 
hbase.hregion.memstore.block.multipler from 2 to 4 (part 2) (ndimiduk: rev 
741783585306e03eec8074841b342ab742cf37e7)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMajorCompaction.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMinorCompaction.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompaction.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestClientPushback.java


 Increase the default value for hbase.hregion.memstore.block.multipler from 2 
 to 4 (part 2)
 --

 Key: HBASE-13865
 URL: https://issues.apache.org/jira/browse/HBASE-13865
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 2.0.0
Reporter: Vladimir Rodionov
Assignee: Gabor Liptak
Priority: Trivial
 Fix For: 2.0.0, 0.98.14, 1.1.2, 1.3.0, 1.2.1, 1.0.3

 Attachments: HBASE-13865.1.patch, HBASE-13865.2.patch, 
 HBASE-13865.2.patch


 It's 4 in the book and 2 in the current master. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-5878) Use getVisibleLength public api from HdfsDataInputStream from Hadoop-2.

2015-08-06 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-5878:
-
Attachment: HBASE-5878-v6-0.98.patch

Updated 0.98 branch patch

 Use getVisibleLength public api from HdfsDataInputStream from Hadoop-2.
 ---

 Key: HBASE-5878
 URL: https://issues.apache.org/jira/browse/HBASE-5878
 Project: HBase
  Issue Type: Bug
  Components: wal
Reporter: Uma Maheswara Rao G
Assignee: Ashish Singhi
 Fix For: 2.0.0, 1.3.0

 Attachments: HBASE-5878-v2.patch, HBASE-5878-v3.patch, 
 HBASE-5878-v4.patch, HBASE-5878-v5-0.98.patch, HBASE-5878-v5.patch, 
 HBASE-5878-v5.patch, HBASE-5878-v6-0.98.patch, HBASE-5878-v6.patch, 
 HBASE-5878.patch


 SequenceFileLogReader: 
 Currently HBase uses the getFileLength API from the DFSInputStream class via 
 reflection. DFSInputStream is not exposed as public, so this may change in the 
 future. HDFS now exposes HdfsDataInputStream as a public API.
 We can make use of it as an else condition, for when we are not able to find 
 the getFileLength API on DFSInputStream, so that we will not have any sudden 
 surprise like the one we are facing today.
 Also, the code just logs one warn message and proceeds if any exception is 
 thrown while getting the length. I think we should re-throw the exception 
 because there is no point in continuing with data loss.
 {code}
 long adjust = 0;
 try {
   Field fIn = FilterInputStream.class.getDeclaredField("in");
   fIn.setAccessible(true);
   Object realIn = fIn.get(this.in);
   // In hadoop 0.22, DFSInputStream is a standalone class.  Before this,
   // it was an inner class of DFSClient.
   if (realIn.getClass().getName().endsWith("DFSInputStream")) {
     Method getFileLength = realIn.getClass().
       getDeclaredMethod("getFileLength", new Class<?>[] {});
     getFileLength.setAccessible(true);
     long realLength = ((Long)getFileLength.
       invoke(realIn, new Object[] {})).longValue();
     assert(realLength >= this.length);
     adjust = realLength - this.length;
   } else {
     LOG.info("Input stream class: " + realIn.getClass().getName() +
       ", not adjusting length");
   }
 } catch (Exception e) {
   SequenceFileLogReader.LOG.warn(
     "Error while trying to get accurate file length.  " +
     "Truncation / data loss may occur if RegionServers die.", e);
 }
 return adjust + super.getPos();
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14191) HBase grant at specific column family level does not work for Groups

2015-08-06 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14660738#comment-14660738
 ] 

Ashish Singhi commented on HBASE-14191:
---

Sure, will look into this in the morning as per IST, if no one else beats me to it.

 HBase grant at specific column family level does not work for Groups
 

 Key: HBASE-14191
 URL: https://issues.apache.org/jira/browse/HBASE-14191
 Project: HBase
  Issue Type: Bug
 Environment: Version 0.98.6-cdh5.3.3, rUnknown, Wed Apr  8 15:00:15 
 PDT 2015
Reporter: Tom James

 Performing a grant command for a specific column family in a table, to a 
 specific group, does not produce the needed results. 
 However, when a specific user is named (instead of a group name) in the grant 
 command, it becomes effective.
 Steps to reproduce: 
 1) Using the super-user, grant a table/column-family-level permission to a group.
 2) Log in as a user that is part of the above group and scan the table. It does 
 not return any results.
 3) Using the super-user, grant a table/column-family-level permission to a 
 specific user (instead of the group).
 4) Log in as that specific user and scan the table. It produces correct 
 results.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14184) Fix indention and type-o in JavaHBaseContext

2015-08-06 Thread Ted Malaska (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14660823#comment-14660823
 ] 

Ted Malaska commented on HBASE-14184:
-

How is this ticket doing?  Is there anything else I need to do to help it get 
submitted?

 Fix indention and type-o in JavaHBaseContext
 

 Key: HBASE-14184
 URL: https://issues.apache.org/jira/browse/HBASE-14184
 Project: HBase
  Issue Type: Wish
  Components: spark
Reporter: Ted Malaska
Assignee: Ted Malaska
Priority: Minor
 Attachments: HBASE-14184.3.patch


 Looks like there is a Ddd that should be Rdd.
 Also looks like everything is one space over too much



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13965) Stochastic Load Balancer JMX Metrics

2015-08-06 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-13965:
---
   Resolution: Fixed
Fix Version/s: 1.3.0
   Status: Resolved  (was: Patch Available)

 Stochastic Load Balancer JMX Metrics
 

 Key: HBASE-13965
 URL: https://issues.apache.org/jira/browse/HBASE-13965
 Project: HBase
  Issue Type: Improvement
  Components: Balancer, metrics
Reporter: Lei Chen
Assignee: Lei Chen
 Fix For: 2.0.0, 1.3.0

 Attachments: 13965-addendum.txt, HBASE-13965-branch-1-v2.patch, 
 HBASE-13965-branch-1.patch, HBASE-13965-v10.patch, HBASE-13965-v11.patch, 
 HBASE-13965-v3.patch, HBASE-13965-v4.patch, HBASE-13965-v5.patch, 
 HBASE-13965-v6.patch, HBASE-13965-v7.patch, HBASE-13965-v8.patch, 
 HBASE-13965-v9.patch, HBASE-13965_v2.patch, HBase-13965-JConsole.png, 
 HBase-13965-v1.patch, stochasticloadbalancerclasses_v2.png


 Today’s default HBase load balancer (the Stochastic load balancer) is cost 
 function based. The cost function weights are tunable but no visibility into 
 those cost function results is directly provided.
 A driving example is a cluster we have been tuning which has skewed rack size 
 (one rack has half the nodes of the other few racks). We are tuning the 
 cluster for uniform response time from all region servers with the ability to 
 tolerate a rack failure. Balancing LocalityCost, RegionReplicaRack Cost and 
 RegionCountSkew Cost is difficult without a way to attribute each cost 
 function’s contribution to overall cost. 
 What this jira proposes is to provide visibility via JMX into each cost 
 function of the stochastic load balancer, as well as the overall cost of the 
 balancing plan.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14189) BlockCache options should consider CF Level BlockCacheEnabled setting

2015-08-06 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14659804#comment-14659804
 ] 

Heng Chen commented on HBASE-14189:
---

After reviewing the code in BlockConfig, I have a plan.

I will submit a patch as soon as possible.

 BlockCache options should consider CF Level BlockCacheEnabled setting
 -

 Key: HBASE-14189
 URL: https://issues.apache.org/jira/browse/HBASE-14189
 Project: HBase
  Issue Type: Improvement
  Components: BlockCache
Reporter: Heng Chen
Assignee: Heng Chen

 While using the BlockCache, we use {{cacheDataOnRead}} ({{cacheDataOnWrite}}) 
 to represent whether we should cache a block after reading (writing) it from 
 (to) HDFS. We should honour both the BC setting and the CF-level cache setting 
 while using the BlockCache.
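
For illustration only, a minimal sketch of the idea (a hypothetical helper, not the actual CacheConfig code; only HColumnDescriptor#isBlockCacheEnabled is assumed from the HBase API): a block read from HDFS should be cached only if both the BC-level and the CF-level settings allow it.

{code}
import org.apache.hadoop.hbase.HColumnDescriptor;

// Hypothetical helper illustrating the proposal; not the actual CacheConfig code.
public final class CacheOnReadSketch {

  private CacheOnReadSketch() {
  }

  /** Cache a block read from HDFS only if both levels of configuration allow it. */
  public static boolean shouldCacheOnRead(boolean cacheDataOnRead,
      HColumnDescriptor family) {
    // Honour the BC-level cacheDataOnRead setting and the CF-level BLOCKCACHE attribute.
    return cacheDataOnRead && family.isBlockCacheEnabled();
  }
}
{code}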



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14092) Add -noLock and -noBalanceSwitch options to hbck

2015-08-06 Thread Simon Law (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Law updated HBASE-14092:
--
Attachment: HBASE-14092-v2.patch

By default, hbck is run in a read-only checker mode. In this case, it is
sensible to let others run. By default, the balancer is left alone,
which may cause spurious errors, but cannot leave the balancer in a bad
state. It is dangerous to leave the balancer disabled by accident, so it is
only ever re-enabled after fixing; it will never be forced off because of
racing.

When hbck is run in fixer mode, it must take an exclusive lock and
disable the balancer, or all havoc will break loose.

If you want to stop hbck from running in parallel, the -exclusive flag
will create the lock file. If you want to force -disableBalancer, that
option is available too. This makes more semantic sense than -noLock and
-noSwitchBalancer, respectively.


 Add -noLock and -noBalanceSwitch options to hbck
 

 Key: HBASE-14092
 URL: https://issues.apache.org/jira/browse/HBASE-14092
 Project: HBase
  Issue Type: Bug
  Components: hbck, util
Affects Versions: 2.0.0, 1.3.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.2.0, 1.3.0

 Attachments: HBASE-14092-v1.patch, HBASE-14092-v2.patch, 
 HBASE-14092.patch


 HBCK is sometimes used as a way to check the health of the cluster. When 
 doing that it's not necessary to turn off the balancer. As such it's not 
 needed to lock other runs of hbck out.
 We should add the --no-lock and --no-balancer command line flags.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14186) Read mvcc vlong optimization

2015-08-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14659618#comment-14659618
 ] 

Hudson commented on HBASE-14186:


FAILURE: Integrated in HBase-TRUNK #6699 (See 
[https://builds.apache.org/job/HBase-TRUNK/6699/])
HBASE-14186 Read mvcc vlong optimization. (anoopsamjohn: rev 
5d2708f628d4718f6267e9da6c8cbafeda66f4fb)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderImpl.java


 Read mvcc vlong optimization
 

 Key: HBASE-14186
 URL: https://issues.apache.org/jira/browse/HBASE-14186
 Project: HBase
  Issue Type: Sub-task
  Components: Performance, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: HBASE-14186.patch


 {code}
 for (int idx = 0; idx < remaining; idx++) {
   byte b = blockBuffer.getByteAfterPosition(offsetFromPos + idx);
   i = i << 8;
   i = i | (b & 0xFF);
 }
 {code}
 The read is done as in the BIG_ENDIAN case.
 After HBASE-12600, we tend to keep the mvcc, so the byte-by-byte read looks to 
 be eating up a lot of CPU time. (In my test, HFileReaderImpl#_readMvccVersion 
 comes out on top in terms of hot methods.) We can optimize here by reading 4 or 
 2 bytes in one shot when the length of the vlong is more than 4 bytes. We will 
 in turn use UnsafeAccess methods, which handle endianness.
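
For illustration, a self-contained sketch of the optimization idea (not the committed patch, which reads from the HFile block buffer via UnsafeAccess; the class and method names below are made up): read a 4-byte chunk in one shot as a big-endian int instead of byte by byte.

{code}
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Illustrative only; the real patch works against the HFile block buffer.
public final class VlongReadSketch {

  private VlongReadSketch() {
  }

  /** Byte-by-byte read, as in the loop quoted above. */
  public static long readSlow(byte[] buf, int pos, int remaining) {
    long i = 0;
    for (int idx = 0; idx < remaining; idx++) {
      i = i << 8;
      i = i | (buf[pos + idx] & 0xFF);
    }
    return i;
  }

  /** Same value for a 4-byte chunk, read in one shot as a big-endian int. */
  public static long readFourBytes(byte[] buf, int pos) {
    return ByteBuffer.wrap(buf).order(ByteOrder.BIG_ENDIAN).getInt(pos) & 0xFFFFFFFFL;
  }

  public static void main(String[] args) {
    byte[] buf = { (byte) 0x01, (byte) 0xAB, (byte) 0xCD, (byte) 0xEF };
    // Both reads produce 0x01ABCDEF.
    System.out.println(readSlow(buf, 0, 4) == readFourBytes(buf, 0));
  }
}
{code}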



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14178) regionserver blocks because of waiting for offsetLock

2015-08-06 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen updated HBASE-14178:
--
Attachment: HBASE-14178-0.98_v8.patch

 regionserver blocks because of waiting for offsetLock
 -

 Key: HBASE-14178
 URL: https://issues.apache.org/jira/browse/HBASE-14178
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.6
Reporter: Heng Chen
Assignee: Heng Chen
 Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2

 Attachments: HBASE-14178-0.98.patch, HBASE-14178-0.98_v8.patch, 
 HBASE-14178-branch-1_v8.patch, HBASE-14178.patch, HBASE-14178_v1.patch, 
 HBASE-14178_v2.patch, HBASE-14178_v3.patch, HBASE-14178_v4.patch, 
 HBASE-14178_v5.patch, HBASE-14178_v6.patch, HBASE-14178_v7.patch, 
 HBASE-14178_v8.patch, jstack


 My regionserver blocks, and all client RPCs time out. 
 I printed the regionserver's jstack; it seems a lot of threads were blocked 
 waiting for the offsetLock. Detailed information is below.
 PS:  my table's block cache is off
 {code}
 B.DefaultRpcServer.handler=2,queue=2,port=60020 #82 daemon prio=5 os_prio=0 
 tid=0x01827000 nid=0x2cdc in Object.wait() [0x7f3831b72000]
java.lang.Thread.State: WAITING (on object monitor)
 at java.lang.Object.wait(Native Method)
 at java.lang.Object.wait(Object.java:502)
 at org.apache.hadoop.hbase.util.IdLock.getLockEntry(IdLock.java:79)
 - locked 0x000773af7c18 (a 
 org.apache.hadoop.hbase.util.IdLock$Entry)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:352)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:253)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:524)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:572)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:257)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:173)
 at 
 org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:313)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:269)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:695)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:683)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:533)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:140)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:3889)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3969)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3847)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3820)
 - locked 0x0005e5c55ad0 (a 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3807)
 at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4779)
 at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4753)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2916)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29583)
 at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027)
 at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
 at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
 at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
 at java.lang.Thread.run(Thread.java:745)
Locked ownable synchronizers:
 - 0x0005e5c55c08 (a 
 java.util.concurrent.locks.ReentrantLock$NonfairSync)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13044) Configuration option for disabling coprocessor loading

2015-08-06 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14659763#comment-14659763
 ] 

Ted Yu commented on HBASE-13044:


When hbase.coprocessor.enabled is set to false, system coprocessors such as 
MultiRowMutationEndpoint should still be loaded. Right?


 Configuration option for disabling coprocessor loading
 --

 Key: HBASE-13044
 URL: https://issues.apache.org/jira/browse/HBASE-13044
 Project: HBase
  Issue Type: Improvement
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11

 Attachments: HBASE-13044.patch, HBASE-13044.patch


 Some users would like complete assurance coprocessors cannot be loaded. Add a 
 configuration option that prevents coprocessors from ever being loaded by 
 ignoring any load directives found in the site file or table metadata. 
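
For illustration, a hedged sketch of how a deployment might opt out once such an option exists (the property name is taken from the discussion above and may not match the committed patch):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Illustrative only; verify the property name against the committed patch.
public final class DisableCoprocessorsSketch {

  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Refuse to load coprocessors named in the site file or table metadata.
    conf.setBoolean("hbase.coprocessor.enabled", false);
    System.out.println(conf.getBoolean("hbase.coprocessor.enabled", true));
  }
}
{code}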



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14178) regionserver blocks because of waiting for offsetLock

2015-08-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14659798#comment-14659798
 ] 

Hudson commented on HBASE-14178:


FAILURE: Integrated in HBase-1.3 #91 (See 
[https://builds.apache.org/job/HBase-1.3/91/])
HBASE-14178 regionserver blocks because of waiting for offsetLock (zhangduo: 
rev fe9de40e6c16b6e030a89a759fa278f0e27722aa)
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java
Revert HBASE-14178 regionserver blocks because of waiting for offsetLock 
(zhangduo: rev 4623c843c137888d606578ed1bc579272a5ab2c2)
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java
HBASE-14178 regionserver blocks because of waiting for offsetLock (zhangduo: 
rev 5c0c389b7a1b32a045e4bc1557b96a56291ab2ab)
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java


 regionserver blocks because of waiting for offsetLock
 -

 Key: HBASE-14178
 URL: https://issues.apache.org/jira/browse/HBASE-14178
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.6
Reporter: Heng Chen
Assignee: Heng Chen
 Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2

 Attachments: HBASE-14178-0.98.patch, HBASE-14178-0.98_v8.patch, 
 HBASE-14178-branch_1_v8.patch, HBASE-14178.patch, HBASE-14178_v1.patch, 
 HBASE-14178_v2.patch, HBASE-14178_v3.patch, HBASE-14178_v4.patch, 
 HBASE-14178_v5.patch, HBASE-14178_v6.patch, HBASE-14178_v7.patch, 
 HBASE-14178_v8.patch, jstack


 My regionserver blocks, and all client RPCs time out. 
 I printed the regionserver's jstack; it seems a lot of threads were blocked 
 waiting for the offsetLock. Detailed information is below.
 PS:  my table's block cache is off
 {code}
 B.DefaultRpcServer.handler=2,queue=2,port=60020 #82 daemon prio=5 os_prio=0 
 tid=0x01827000 nid=0x2cdc in Object.wait() [0x7f3831b72000]
java.lang.Thread.State: WAITING (on object monitor)
 at java.lang.Object.wait(Native Method)
 at java.lang.Object.wait(Object.java:502)
 at org.apache.hadoop.hbase.util.IdLock.getLockEntry(IdLock.java:79)
 - locked 0x000773af7c18 (a 
 org.apache.hadoop.hbase.util.IdLock$Entry)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:352)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:253)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:524)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:572)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:257)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:173)
 at 
 org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:313)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:269)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:695)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:683)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:533)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:140)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:3889)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3969)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3847)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3820)
 - locked 0x0005e5c55ad0 (a 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3807)
 at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4779)
 at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4753)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2916)
 at 
 

[jira] [Commented] (HBASE-14178) regionserver blocks because of waiting for offsetLock

2015-08-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14659802#comment-14659802
 ] 

Hadoop QA commented on HBASE-14178:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12749008/HBASE-14178_v7.patch
  against master branch at commit 5d2708f628d4718f6267e9da6c8cbafeda66f4fb.
  ATTACHMENT ID: 12749008

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.procedure2.store.TestProcedureStoreTracker.testRandLoad(TestProcedureStoreTracker.java:186)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14990//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14990//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14990//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14990//console

This message is automatically generated.

 regionserver blocks because of waiting for offsetLock
 -

 Key: HBASE-14178
 URL: https://issues.apache.org/jira/browse/HBASE-14178
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.6
Reporter: Heng Chen
Assignee: Heng Chen
 Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2

 Attachments: HBASE-14178-0.98.patch, HBASE-14178-0.98_v8.patch, 
 HBASE-14178-branch_1_v8.patch, HBASE-14178.patch, HBASE-14178_v1.patch, 
 HBASE-14178_v2.patch, HBASE-14178_v3.patch, HBASE-14178_v4.patch, 
 HBASE-14178_v5.patch, HBASE-14178_v6.patch, HBASE-14178_v7.patch, 
 HBASE-14178_v8.patch, jstack


 My regionserver blocks, and all client rpcs time out. 
 I printed the regionserver's jstack; it seems a lot of threads were blocked 
 waiting for the offsetLock, detailed information below:
 PS: my table's block cache is off
 {code}
 B.DefaultRpcServer.handler=2,queue=2,port=60020 #82 daemon prio=5 os_prio=0 
 tid=0x01827000 nid=0x2cdc in Object.wait() [0x7f3831b72000]
java.lang.Thread.State: WAITING (on object monitor)
 at java.lang.Object.wait(Native Method)
 at java.lang.Object.wait(Object.java:502)
 at org.apache.hadoop.hbase.util.IdLock.getLockEntry(IdLock.java:79)
 - locked 0x000773af7c18 (a 
 org.apache.hadoop.hbase.util.IdLock$Entry)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:352)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:253)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:524)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:572)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:257)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:173)
 at 
 org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
 at 
 

[jira] [Commented] (HBASE-14178) regionserver blocks because of waiting for offsetLock

2015-08-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14659803#comment-14659803
 ] 

Hudson commented on HBASE-14178:


FAILURE: Integrated in HBase-TRUNK #6700 (See 
[https://builds.apache.org/job/HBase-TRUNK/6700/])
HBASE-14178 regionserver blocks because of waiting for offsetLock (zhangduo: 
rev 75a6cb2be6ae95654561213a247aa7ba62505072)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderImpl.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java
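For readers following along, here is a minimal, hedged sketch of the idea behind the change, not the committed patch: only take the per-offset lock when the block may actually be put into the block cache, so with the cache off readers go straight to HDFS instead of queueing on the IdLock seen in the jstack quoted below. The class and method names are illustrative stand-ins for HFileReaderImpl#readBlock and org.apache.hadoop.hbase.util.IdLock.

{code}
import java.util.concurrent.locks.ReentrantLock;

public class ReadBlockSketch {
  private final ReentrantLock offsetLock = new ReentrantLock(); // simplified stand-in for IdLock
  private final boolean cacheEnabled;                           // is the block cache on for this read

  public ReadBlockSketch(boolean cacheEnabled) {
    this.cacheEnabled = cacheEnabled;
  }

  byte[] readBlock(long offset) {
    if (!cacheEnabled) {
      // Nothing will be cached, so there is no double-caching race to guard against:
      // skip the lock entirely and read straight from HDFS.
      return readFromHdfs(offset);
    }
    offsetLock.lock(); // cache path: make concurrent readers of the same block cooperate
    try {
      byte[] cached = getFromCache(offset);
      if (cached != null) {
        return cached;
      }
      byte[] block = readFromHdfs(offset);
      putInCache(offset, block);
      return block;
    } finally {
      offsetLock.unlock();
    }
  }

  // Stubs standing in for the real HFile/HDFS reads and BlockCache calls.
  private byte[] readFromHdfs(long offset) { return new byte[0]; }
  private byte[] getFromCache(long offset) { return null; }
  private void putInCache(long offset, byte[] block) {}
}
{code}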


 regionserver blocks because of waiting for offsetLock
 -

 Key: HBASE-14178
 URL: https://issues.apache.org/jira/browse/HBASE-14178
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.6
Reporter: Heng Chen
Assignee: Heng Chen
 Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2

 Attachments: HBASE-14178-0.98.patch, HBASE-14178-0.98_v8.patch, 
 HBASE-14178-branch_1_v8.patch, HBASE-14178.patch, HBASE-14178_v1.patch, 
 HBASE-14178_v2.patch, HBASE-14178_v3.patch, HBASE-14178_v4.patch, 
 HBASE-14178_v5.patch, HBASE-14178_v6.patch, HBASE-14178_v7.patch, 
 HBASE-14178_v8.patch, jstack


 My regionserver blocks, and all client rpcs time out. 
 I printed the regionserver's jstack; it seems a lot of threads were blocked 
 waiting for the offsetLock, detailed information below:
 PS: my table's block cache is off
 {code}
 B.DefaultRpcServer.handler=2,queue=2,port=60020 #82 daemon prio=5 os_prio=0 
 tid=0x01827000 nid=0x2cdc in Object.wait() [0x7f3831b72000]
java.lang.Thread.State: WAITING (on object monitor)
 at java.lang.Object.wait(Native Method)
 at java.lang.Object.wait(Object.java:502)
 at org.apache.hadoop.hbase.util.IdLock.getLockEntry(IdLock.java:79)
 - locked 0x000773af7c18 (a 
 org.apache.hadoop.hbase.util.IdLock$Entry)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:352)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:253)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:524)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:572)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:257)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:173)
 at 
 org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:313)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:269)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:695)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:683)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:533)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:140)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:3889)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3969)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3847)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3820)
 - locked 0x0005e5c55ad0 (a 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3807)
 at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4779)
 at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4753)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2916)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29583)
 at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027)
 at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
 at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
 at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
 at java.lang.Thread.run(Thread.java:745)
Locked ownable synchronizers:
 - 0x0005e5c55c08 (a 
 java.util.concurrent.locks.ReentrantLock$NonfairSync)
 {code}



--
This message 

[jira] [Commented] (HBASE-14188) Read path optimizations after HBASE-11425 profiling

2015-08-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14659809#comment-14659809
 ] 

Hadoop QA commented on HBASE-14188:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12749004/HBASE-14188_1.patch
  against master branch at commit 5d2708f628d4718f6267e9da6c8cbafeda66f4fb.
  ATTACHMENT ID: 12749004

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14989//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14989//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14989//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14989//console

This message is automatically generated.

 Read path optimizations after HBASE-11425 profiling
 ---

 Key: HBASE-14188
 URL: https://issues.apache.org/jira/browse/HBASE-14188
 Project: HBase
  Issue Type: Sub-task
  Components: Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Attachments: HBASE-14188.patch, HBASE-14188_1.patch, setSeqId.png


 This subtask deals with some improvements that can be done in the read path 
 (scans) after the changes for HBASE-11425 went in.
 - Avoid CellUtil.setSequenceId in the hot path.
 - Use BBUtils in the MultiByteBuff.
 - Use ByteBuff.skip() API in HFileReader rather than 
 MultiByteBuff.position() (see the sketch after this list).
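As a side note on the last bullet, the skip-versus-position difference can be illustrated with plain java.nio; this is a hedged illustration only and does not use the HBase ByteBuff/MultiByteBuff types. A relative skip states the intent directly, which lets a multi-segment buffer stay on its current segment rather than re-resolving an absolute offset.

{code}
import java.nio.ByteBuffer;

public class SkipVsPositionSketch {
  // skip()-style: express the move as "advance len bytes from wherever we are".
  static void skip(ByteBuffer buf, int len) {
    buf.position(buf.position() + len);
  }

  public static void main(String[] args) {
    ByteBuffer buf = ByteBuffer.wrap(new byte[64]);

    // position()-style: the caller computes the absolute offset itself.
    buf.position(buf.position() + 16);

    // skip()-style: relative move, no absolute offset needed by the caller.
    skip(buf, 16);

    System.out.println(buf.position()); // prints 32
  }
}
{code}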



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14178) regionserver blocks because of waiting for offsetLock

2015-08-06 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14659558#comment-14659558
 ] 

ramkrishna.s.vasudevan commented on HBASE-14178:


Reading through the comments and the current code in CacheConfig, I think
{code}
 if (cacheDataOnWrite) {
   return true;
 }
{code}
These conditions may really not be needed. As you are saying, 
isBlockCacheEnabled() would mean cacheDataOnRead is also true. (I think this 
setting is going to be implemented per family only.)
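A minimal sketch of the decision being discussed, assuming the reading above (the field names mirror the ones quoted from CacheConfig, but this class is not the actual HBase code): once the cacheDataOnWrite short-circuit is dropped, whether a DATA block read from HDFS goes into the cache comes down to a block cache being deployed at all plus the per-family cacheDataOnRead flag.

{code}
public class CacheDecisionSketch {
  private final boolean blockCacheEnabled; // RS level: is a block cache deployed at all
  private final boolean cacheDataOnRead;   // CF level: cache DATA blocks on read

  public CacheDecisionSketch(boolean blockCacheEnabled, boolean cacheDataOnRead) {
    this.blockCacheEnabled = blockCacheEnabled;
    this.cacheDataOnRead = cacheDataOnRead;
  }

  /** Should a DATA block just read from HDFS be put into the block cache? */
  public boolean shouldCacheDataOnRead() {
    // No special-casing of cacheDataOnWrite: the CF level read setting decides,
    // but only if the region server has a block cache in the first place.
    return blockCacheEnabled && cacheDataOnRead;
  }

  public static void main(String[] args) {
    System.out.println(new CacheDecisionSketch(true, false).shouldCacheDataOnRead()); // false
    System.out.println(new CacheDecisionSketch(true, true).shouldCacheDataOnRead());  // true
  }
}
{code}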

 regionserver blocks because of waiting for offsetLock
 -

 Key: HBASE-14178
 URL: https://issues.apache.org/jira/browse/HBASE-14178
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.6
Reporter: Heng Chen
Priority: Critical
 Fix For: 0.98.6

 Attachments: HBASE-14178-0.98.patch, HBASE-14178.patch, 
 HBASE-14178_v1.patch, HBASE-14178_v2.patch, HBASE-14178_v3.patch, 
 HBASE-14178_v4.patch, HBASE-14178_v5.patch, HBASE-14178_v6.patch, jstack


 My regionserver blocks, and all client rpcs time out. 
 I printed the regionserver's jstack; it seems a lot of threads were blocked 
 waiting for the offsetLock, detailed information below:
 PS: my table's block cache is off
 {code}
 B.DefaultRpcServer.handler=2,queue=2,port=60020 #82 daemon prio=5 os_prio=0 
 tid=0x01827000 nid=0x2cdc in Object.wait() [0x7f3831b72000]
java.lang.Thread.State: WAITING (on object monitor)
 at java.lang.Object.wait(Native Method)
 at java.lang.Object.wait(Object.java:502)
 at org.apache.hadoop.hbase.util.IdLock.getLockEntry(IdLock.java:79)
 - locked 0x000773af7c18 (a 
 org.apache.hadoop.hbase.util.IdLock$Entry)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:352)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:253)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:524)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:572)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:257)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:173)
 at 
 org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:313)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:269)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:695)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:683)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:533)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:140)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:3889)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3969)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3847)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3820)
 - locked 0x0005e5c55ad0 (a 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3807)
 at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4779)
 at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4753)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2916)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29583)
 at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027)
 at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
 at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
 at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
 at java.lang.Thread.run(Thread.java:745)
Locked ownable synchronizers:
 - 0x0005e5c55c08 (a 
 java.util.concurrent.locks.ReentrantLock$NonfairSync)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14169) API to refreshSuperUserGroupsConfiguration

2015-08-06 Thread Francis Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14659557#comment-14659557
 ] 

Francis Liu commented on HBASE-14169:
-

Sounds good. I've updated the patch to check for the system user. Also:

[~apurtell] and [~mbertozzi], should I follow the same semantics as the acl 
table to propagate requests (i.e. use zk as a message bus)? That seemed a bit 
clunky and overly complicated to me for this particular case. I've uploaded a 
draft which sends the rpc request to the master and the master propagates it to 
the RSes, since the master has the list of online servers on hand. I haven't 
read how ProcedureV2 plans to propagate messages, so let me know what would 
work best.

 API to refreshSuperUserGroupsConfiguration
 --

 Key: HBASE-14169
 URL: https://issues.apache.org/jira/browse/HBASE-14169
 Project: HBase
  Issue Type: New Feature
Reporter: Francis Liu
Assignee: Francis Liu
 Attachments: HBASE-14169.patch, HBASE-14169_2.patch


 For deployments that use security, user impersonation (AKA doAs()) is needed 
 for some services (i.e. Stargate, thriftserver, Oozie, etc.). Impersonation 
 definitions are defined in an xml config file and read and cached by the 
 ProxyUsers class. Calling this api will refresh the cached information, 
 eliminating the need to restart the master/regionserver whenever the 
 configuration is changed. 
 Implementation just adds another method to AccessControlService.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14085) Correct LICENSE and NOTICE files in artifacts

2015-08-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14659556#comment-14659556
 ] 

Hudson commented on HBASE-14085:


FAILURE: Integrated in HBase-0.98 #1067 (See 
[https://builds.apache.org/job/HBase-0.98/1067/])
HBASE-14085 Update LICENSE and NOTICE files. (apurtell: rev 
edc48c10f2fba231c146aeb26a37adc3540e9d60)
* hbase-thrift/pom.xml
* hbase-thrift/src/main/appended-resources/META-INF/LICENSE
* LICENSE.txt
* hbase-thrift/src/test/resources/META-INF/NOTICE
* hbase-common/src/test/resources/META-INF/NOTICE
* hbase-common/pom.xml
* hbase-assembly/pom.xml
* pom.xml
* hbase-thrift/src/main/appended-resources/META-INF/NOTICE
* hbase-annotations/pom.xml
* hbase-common/src/main/appended-resources/META-INF/NOTICE
* hbase-examples/pom.xml
* hbase-shell/pom.xml
* hbase-server/pom.xml
* hbase-testing-util/pom.xml
* hbase-resource-bundle/src/main/resources/supplemental-models.xml
* hbase-it/pom.xml
* hbase-rest/pom.xml
* hbase-assembly/src/main/resources/META-INF/LEGAL
* hbase-assembly/src/main/assembly/components.xml
* hbase-resource-bundle/src/main/resources/META-INF/LICENSE.vm
* hbase-assembly/src/main/assembly/hadoop-two-compat.xml
* hbase-resource-bundle/pom.xml
* hbase-checkstyle/pom.xml
* hbase-hadoop-compat/pom.xml
* hbase-client/pom.xml
* hbase-resource-bundle/src/main/resources/META-INF/NOTICE.vm
* hbase-thrift/src/test/resources/META-INF/LICENSE
* hbase-server/src/test/resources/META-INF/LICENSE
* hbase-server/src/test/resources/META-INF/NOTICE
* hbase-hadoop2-compat/pom.xml
* hbase-protocol/pom.xml
* NOTICE.txt
* hbase-prefix-tree/pom.xml


 Correct LICENSE and NOTICE files in artifacts
 -

 Key: HBASE-14085
 URL: https://issues.apache.org/jira/browse/HBASE-14085
 Project: HBase
  Issue Type: Task
  Components: build
Affects Versions: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Blocker
 Fix For: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2

 Attachments: HBASE-14085.1.patch, HBASE-14085.2.patch, 
 HBASE-14085.3.patch


 +Problems:
 * checked LICENSE/NOTICE on binary
 ** binary artifact LICENSE file has not been updated to include the 
 additional license terms for contained third party dependencies
 ** binary artifact NOTICE file does not include a copyright line
 ** binary artifact NOTICE file does not appear to propagate appropriate info 
 from the NOTICE files from bundled dependencies
 * checked NOTICE on source
 ** source artifact NOTICE file does not include a copyright line
 ** source NOTICE file includes notices for third party dependencies not 
 included in the artifact
 * checked NOTICE files shipped in maven jars
 ** copyright line only says 2015 when it's very likely the contents are under 
 copyright prior to this year
 * nit: NOTICE file on jars in maven say HBase - ${module} rather than 
 Apache HBase - ${module} as required 
 refs:
 http://www.apache.org/dev/licensing-howto.html#bundled-vs-non-bundled
 http://www.apache.org/dev/licensing-howto.html#binary
 http://www.apache.org/dev/licensing-howto.html#simple



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14190) Assign hbase:namespace table ahead of user region assignment

2015-08-06 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14660800#comment-14660800
 ] 

Ted Yu commented on HBASE-14190:


At start up, if the info:server column has a value for the hbase:namespace 
region, we can use retain assignment.
Otherwise we can randomly assign the hbase:namespace region.
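A self-contained sketch of that choice, assuming lastServer is whatever was read from the info:server column of the hbase:namespace row in hbase:meta; chooseServer is a hypothetical helper, not an AssignmentManager API.

{code}
import java.util.Arrays;
import java.util.List;
import java.util.Random;

public class NamespaceAssignSketch {
  /** Retain the previous location if it is still live; otherwise pick a live server at random. */
  static String chooseServer(String lastServer, List<String> liveServers) {
    if (lastServer != null && liveServers.contains(lastServer)) {
      return lastServer;                                              // retain assignment
    }
    return liveServers.get(new Random().nextInt(liveServers.size())); // random assignment
  }

  public static void main(String[] args) {
    List<String> live = Arrays.asList("rs1,16020,1438812000000", "rs2,16020,1438812000000");
    System.out.println(chooseServer("rs1,16020,1438812000000", live)); // retained
    System.out.println(chooseServer(null, live));                      // random pick
  }
}
{code}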

 Assign hbase:namespace table ahead of user region assignment
 

 Key: HBASE-14190
 URL: https://issues.apache.org/jira/browse/HBASE-14190
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu

 Currently the namespace table region is assigned like user regions.
 I spent several hours working with a customer where master couldn't finish 
 initialization.
 Even though master was restarted quite a few times, it went down with the 
 following:
 {code}
 2015-08-05 17:16:57,530 FATAL [hdpmaster1:6.activeMasterManager] 
 master.HMaster: Master server abort: loaded coprocessors are: []
 2015-08-05 17:16:57,530 FATAL [hdpmaster1:6.activeMasterManager] 
 master.HMaster: Unhandled exception. Starting shutdown.
 java.io.IOException: Timedout 30ms waiting for namespace table to be 
 assigned
   at 
 org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:104)
   at org.apache.hadoop.hbase.master.HMaster.initNamespace(HMaster.java:985)
   at 
 org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:779)
   at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:182)
   at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1646)
   at java.lang.Thread.run(Thread.java:744)
 {code}
 During previous run(s), namespace table was created, hence leaving an entry 
 in hbase:meta.
 The following if block in TableNamespaceManager#start() was skipped:
 {code}
 if (!MetaTableAccessor.tableExists(masterServices.getConnection(),
   TableName.NAMESPACE_TABLE_NAME)) {
 {code}
 TableNamespaceManager#start() spins, waiting for namespace region to be 
 assigned.
 There was an issue with the master assigning user regions.
 We tried issuing the 'assign' command from the hbase shell, which didn't work 
 because of the following check in MasterRpcServices#assignRegion():
 {code}
   master.checkInitialized();
 {code}
 This scenario can be avoided if we assign hbase:namespace table after 
 hbase:meta is assigned but before user table region assignment.
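A hedged sketch of the start-up ordering that last sentence proposes; the method names are illustrative placeholders for the corresponding HMaster steps, not the actual code.

{code}
public class MasterStartupOrderSketch {
  void finishActiveMasterInitialization() {
    assignMeta();            // hbase:meta first, as today
    assignNamespaceRegion(); // proposed: assign the hbase:namespace region next
    startNamespaceManager(); // TableNamespaceManager#start() then finds the region already assigned
    assignUserRegions();     // user table regions last
  }

  // Placeholders for the real master initialization steps.
  private void assignMeta() {}
  private void assignNamespaceRegion() {}
  private void startNamespaceManager() {}
  private void assignUserRegions() {}
}
{code}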



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14150) Add BulkLoad functionality to HBase-Spark Module

2015-08-06 Thread Ted Malaska (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Malaska updated HBASE-14150:

Attachment: HBASE-14150.4.patch

Fixed a diff issue Sean B found. No other changes.

 Add BulkLoad functionality to HBase-Spark Module
 

 Key: HBASE-14150
 URL: https://issues.apache.org/jira/browse/HBASE-14150
 Project: HBase
  Issue Type: New Feature
  Components: spark
Reporter: Ted Malaska
Assignee: Ted Malaska
 Fix For: 2.0.0

 Attachments: HBASE-14150.1.patch, HBASE-14150.2.patch, 
 HBASE-14150.3.patch, HBASE-14150.4.patch


 Add on to the work done in HBASE-13992 to add functionality to do a bulk load 
 from a given RDD.
 This will do the following:
 1. Figure out the number of regions and sort and partition the data correctly 
 to be written out to HFiles (see the sketch after this list).
 2. Also, unlike the MR bulk load, I would like the columns to be sorted in 
 the shuffle stage and not in the memory of the reducer. This will allow this 
 design to support super-wide records without going out of memory.
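A hedged, Spark-free sketch of step 1, assuming the table's region start keys are known; it is illustrative only, and the real module would use a Spark partitioner over the actual region boundaries.

{code}
import java.util.Arrays;
import java.util.TreeMap;

public class RegionPartitionSketch {
  private final TreeMap<String, Integer> startKeyToPartition = new TreeMap<>();

  public RegionPartitionSketch(String... regionStartKeys) {
    String[] keys = regionStartKeys.clone();
    Arrays.sort(keys);                      // regions ordered by start key
    for (int i = 0; i < keys.length; i++) {
      startKeyToPartition.put(keys[i], i);  // one output partition (one HFile set) per region
    }
  }

  /** Partition = the region whose start key is the greatest key <= rowKey. */
  public int partitionFor(String rowKey) {
    return startKeyToPartition.floorEntry(rowKey).getValue();
  }

  public static void main(String[] args) {
    RegionPartitionSketch p = new RegionPartitionSketch("", "row-100", "row-200");
    System.out.println(p.partitionFor("row-050")); // 0
    System.out.println(p.partitionFor("row-150")); // 1
    System.out.println(p.partitionFor("row-250")); // 2
  }
}
{code}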



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13706) CoprocessorClassLoader should not exempt Hive classes

2015-08-06 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14661347#comment-14661347
 ] 

Jerry He commented on HBASE-13706:
--

Thanks, [~apurtell].
Let me trigger the UTs again. They seem to be fine.

 CoprocessorClassLoader should not exempt Hive classes
 -

 Key: HBASE-13706
 URL: https://issues.apache.org/jira/browse/HBASE-13706
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors
Affects Versions: 2.0.0, 1.0.1, 1.1.0, 0.98.12
Reporter: Jerry He
Assignee: Jerry He
Priority: Minor
 Fix For: 2.0.0, 0.98.14, 1.0.2, 1.1.3

 Attachments: HBASE-13706-master-v2.patch, HBASE-13706.patch


 CoprocessorClassLoader is used to load classes from the coprocessor jar.
 Certain classes are exempt from being loaded by this ClassLoader, which means 
 they will be ignored in the coprocessor jar, but loaded from parent classpath 
 instead.
 One problem is that we categorically exempt org.apache.hadoop.
 But it happens that Hive packages start with org.apache.hadoop.
 There is no reason to exclude Hive classes from the CoprocessorClassLoader.
 HBase does not even include Hive jars.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13706) CoprocessorClassLoader should not exempt Hive classes

2015-08-06 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-13706:
-
Status: Open  (was: Patch Available)

 CoprocessorClassLoader should not exempt Hive classes
 -

 Key: HBASE-13706
 URL: https://issues.apache.org/jira/browse/HBASE-13706
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors
Affects Versions: 0.98.12, 1.1.0, 1.0.1, 2.0.0
Reporter: Jerry He
Assignee: Jerry He
Priority: Minor
 Fix For: 2.0.0, 0.98.14, 1.0.2, 1.1.3

 Attachments: HBASE-13706-master-v2.patch, HBASE-13706.patch


 CoprocessorClassLoader is used to load classes from the coprocessor jar.
 Certain classes are exempt from being loaded by this ClassLoader, which means 
 they will be ignored in the coprocessor jar, but loaded from parent classpath 
 instead.
 One problem is that we categorically exempt org.apache.hadoop.
 But it happens that Hive packages start with org.apache.hadoop.
 There is no reason to exclude Hive classes from the CoprocessorClassLoader.
 HBase does not even include Hive jars.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14189) BlockCache options should consider CF Level BlockCacheEnabled setting

2015-08-06 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14661240#comment-14661240
 ] 

Heng Chen commented on HBASE-14189:
---

Now the options about cache configuration are below:
{code}
  /**
   * Whether blocks should be cached on read (default is on if there is a
   * cache but this can be turned off on a per-family or per-request basis).
   * If off we will STILL cache meta blocks; i.e. INDEX and BLOOM types.
   * This cannot be disabled.
   */
  private boolean cacheDataOnRead;

  /** Whether blocks should be flagged as in-memory when being cached */
  private final boolean inMemory;

  /** Whether data blocks should be cached when new files are written */
  private boolean cacheDataOnWrite;

  /** Whether index blocks should be cached when new files are written */
  private final boolean cacheIndexesOnWrite;

  /** Whether compound bloom filter blocks should be cached on write */
  private final boolean cacheBloomsOnWrite;

  /** Whether blocks of a file should be evicted when the file is closed */
  private boolean evictOnClose;

  /** Whether data blocks should be stored in compressed and/or encrypted form 
in the cache */
  private final boolean cacheDataCompressed;

  /** Whether data blocks should be prefetched into the cache */
  private final boolean prefetchOnOpen;

  /**
   * If true and if more than one tier in this cache deploy -- e.g. 
CombinedBlockCache has an L1
   * and an L2 tier -- then cache data blocks up in the L1 tier (The meta 
blocks are likely being
   * cached up in L1 already.  At least this is the case if CombinedBlockCache).
   */
  private boolean cacheDataInL1;
{code}

I think all of these options are CF level cache settings, and they are useless 
for the whole-RS cache setting.

The whole-RS cache only cares about whether the BC is enabled or not; it has 
no say in how the BC is used.

So I think the config options like the ones below are useless, and I will 
remove them. At the same time, I will add one option to decide whether to 
enable the BC or not, for example, {{hbase.rs.blockcacheopen}}.


{code}
  public static final String CACHE_BLOCKS_ON_READ_KEY =
  "hbase.rs.cacheblockonread";

  /**
   * Configuration key to cache data blocks on write. There are separate
   * switches for bloom blocks and non-root index blocks.
   */
  public static final String CACHE_BLOCKS_ON_WRITE_KEY =
  "hbase.rs.cacheblocksonwrite";

  /**
   * Configuration key to cache leaf and intermediate-level index blocks on
   * write.
   */
  public static final String CACHE_INDEX_BLOCKS_ON_WRITE_KEY =
  "hfile.block.index.cacheonwrite";

  /**
   * Configuration key to cache compound bloom filter blocks on write.
   */
  public static final String CACHE_BLOOM_BLOCKS_ON_WRITE_KEY =
  "hfile.block.bloom.cacheonwrite";

  /**
   * Configuration key to cache data blocks in compressed and/or encrypted 
format.
   */
  public static final String CACHE_DATA_BLOCKS_COMPRESSED_KEY =
  "hbase.block.data.cachecompressed";

  /**
   * Configuration key to evict all blocks of a given file from the block cache
   * when the file is closed.
   */
  public static final String EVICT_BLOCKS_ON_CLOSE_KEY =
  "hbase.rs.evictblocksonclose";

{code}

Am I right?





 BlockCache options should consider CF Level BlockCacheEnabled setting
 -

 Key: HBASE-14189
 URL: https://issues.apache.org/jira/browse/HBASE-14189
 Project: HBase
  Issue Type: Improvement
  Components: BlockCache
Reporter: Heng Chen
Assignee: Heng Chen

 While using BlockCache, we use {{cacheDataOnRead}} ({{cacheDataOnWrite}}) to 
 represent whether we should cache a block after reading (writing) it from (to) 
 hdfs. We should honour the BC setting and the CF level cache setting while 
 using BlockCache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13706) CoprocessorClassLoader should not exempt Hive classes

2015-08-06 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-13706:
-
Fix Version/s: (was: 1.1.3)
   (was: 1.0.2)
   (was: 0.98.14)
   1.3.0
   1.2.0
   Status: Patch Available  (was: Open)

 CoprocessorClassLoader should not exempt Hive classes
 -

 Key: HBASE-13706
 URL: https://issues.apache.org/jira/browse/HBASE-13706
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors
Affects Versions: 0.98.12, 1.1.0, 1.0.1, 2.0.0
Reporter: Jerry He
Assignee: Jerry He
Priority: Minor
 Fix For: 2.0.0, 1.2.0, 1.3.0

 Attachments: HBASE-13706-master-v2.patch, HBASE-13706.patch


 CoprocessorClassLoader is used to load classes from the coprocessor jar.
 Certain classes are exempt from being loaded by this ClassLoader, which means 
 they will be ignored in the coprocessor jar, but loaded from parent classpath 
 instead.
 One problem is that we categorically exempt org.apache.hadoop.
 But it happens that Hive packages start with org.apache.hadoop.
 There is no reason to exclude Hive classes from the CoprocessorClassLoader.
 HBase does not even include Hive jars.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14190) Assign hbase:namespace table ahead of user region assignment

2015-08-06 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14190:
---
Attachment: 14190-v1.txt

Tentative patch.

Other system tables are not assigned ahead of user regions yet.

 Assign hbase:namespace table ahead of user region assignment
 

 Key: HBASE-14190
 URL: https://issues.apache.org/jira/browse/HBASE-14190
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
 Attachments: 14190-v1.txt


 Currently the namespace table region is assigned like user regions.
 I spent several hours working with a customer where master couldn't finish 
 initialization.
 Even though master was restarted quite a few times, it went down with the 
 following:
 {code}
 2015-08-05 17:16:57,530 FATAL [hdpmaster1:6.activeMasterManager] 
 master.HMaster: Master server abort: loaded coprocessors are: []
 2015-08-05 17:16:57,530 FATAL [hdpmaster1:6.activeMasterManager] 
 master.HMaster: Unhandled exception. Starting shutdown.
 java.io.IOException: Timedout 30ms waiting for namespace table to be 
 assigned
   at 
 org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:104)
   at org.apache.hadoop.hbase.master.HMaster.initNamespace(HMaster.java:985)
   at 
 org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:779)
   at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:182)
   at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1646)
   at java.lang.Thread.run(Thread.java:744)
 {code}
 During previous run(s), namespace table was created, hence leaving an entry 
 in hbase:meta.
 The following if block in TableNamespaceManager#start() was skipped:
 {code}
 if (!MetaTableAccessor.tableExists(masterServices.getConnection(),
   TableName.NAMESPACE_TABLE_NAME)) {
 {code}
 TableNamespaceManager#start() spins, waiting for namespace region to be 
 assigned.
 There was an issue with the master assigning user regions.
 We tried issuing the 'assign' command from the hbase shell, which didn't work 
 because of the following check in MasterRpcServices#assignRegion():
 {code}
   master.checkInitialized();
 {code}
 This scenario can be avoided if we assign hbase:namespace table after 
 hbase:meta is assigned but before user table region assignment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13865) Increase the default value for hbase.hregion.memstore.block.multipler from 2 to 4 (part 2)

2015-08-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14660910#comment-14660910
 ] 

Hudson commented on HBASE-13865:


FAILURE: Integrated in HBase-1.2 #94 (See 
[https://builds.apache.org/job/HBase-1.2/94/])
HBASE-13865 Increase the default value for 
hbase.hregion.memstore.block.multipler from 2 to 4 (part 2) (ndimiduk: rev 
c79bc6d746d0e4bbfa79e69df56ad59beee7b1f4)
* hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMinorCompaction.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestClientPushback.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMajorCompaction.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompaction.java


 Increase the default value for hbase.hregion.memstore.block.multipler from 2 
 to 4 (part 2)
 --

 Key: HBASE-13865
 URL: https://issues.apache.org/jira/browse/HBASE-13865
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 2.0.0
Reporter: Vladimir Rodionov
Assignee: Gabor Liptak
Priority: Trivial
 Fix For: 2.0.0, 0.98.14, 1.1.2, 1.3.0, 1.2.1, 1.0.3

 Attachments: HBASE-13865.1.patch, HBASE-13865.2.patch, 
 HBASE-13865.2.patch


 It's 4 in the book and 2 in current master. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13867) Add endpoint coprocessor guide to HBase book

2015-08-06 Thread Gaurav Bhardwaj (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14661101#comment-14661101
 ] 

Gaurav Bhardwaj commented on HBASE-13867:
-

If the 100-character style check can be relaxed then, if possible, please use 
[HBASE-13867.1.patch|https://issues.apache.org/jira/secure/attachment/12743985/HBASE-13867.1.patch],
 as it is well formatted for the lengthy URL and doesn't have line breaks. The 
sole purpose of patch 2 was to satisfy the 100-character requirement reported 
by the build tool.

If any other effort is required, please let me know. :)

 Add endpoint coprocessor guide to HBase book
 

 Key: HBASE-13867
 URL: https://issues.apache.org/jira/browse/HBASE-13867
 Project: HBase
  Issue Type: Task
  Components: Coprocessors, documentation
Reporter: Vladimir Rodionov
Assignee: Gaurav Bhardwaj
 Fix For: 2.0.0

 Attachments: HBASE-13867.1.patch, HBASE-13867.2.patch, 
 HBASE-13867.2.patch


 Endpoint coprocessors are very poorly documented.
 Coprocessor section of HBase book must be updated either with its own 
 endpoint coprocessors HOW-TO guide or, at least, with the link(s) to some 
 other guides. There is good description here:
 http://www.3pillarglobal.com/insights/hbase-coprocessors



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)



[jira] [Commented] (HBASE-14190) Assign hbase:namespace table ahead of user region assignment

2015-08-06 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14661120#comment-14661120
 ] 

Andrew Purtell commented on HBASE-14190:


When doing this we should also consider, when the security coprocessors are 
installed, deploying the ACL and labels tables before user tables. 

 Assign hbase:namespace table ahead of user region assignment
 

 Key: HBASE-14190
 URL: https://issues.apache.org/jira/browse/HBASE-14190
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu

 Currently the namespace table region is assigned like user regions.
 I spent several hours working with a customer where master couldn't finish 
 initialization.
 Even though master was restarted quite a few times, it went down with the 
 following:
 {code}
 2015-08-05 17:16:57,530 FATAL [hdpmaster1:6.activeMasterManager] 
 master.HMaster: Master server abort: loaded coprocessors are: []
 2015-08-05 17:16:57,530 FATAL [hdpmaster1:6.activeMasterManager] 
 master.HMaster: Unhandled exception. Starting shutdown.
 java.io.IOException: Timedout 30ms waiting for namespace table to be 
 assigned
   at 
 org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:104)
   at org.apache.hadoop.hbase.master.HMaster.initNamespace(HMaster.java:985)
   at 
 org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:779)
   at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:182)
   at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1646)
   at java.lang.Thread.run(Thread.java:744)
 {code}
 During previous run(s), namespace table was created, hence leaving an entry 
 in hbase:meta.
 The following if block in TableNamespaceManager#start() was skipped:
 {code}
 if (!MetaTableAccessor.tableExists(masterServices.getConnection(),
   TableName.NAMESPACE_TABLE_NAME)) {
 {code}
 TableNamespaceManager#start() spins, waiting for namespace region to be 
 assigned.
 There was an issue with the master assigning user regions.
 We tried issuing the 'assign' command from the hbase shell, which didn't work 
 because of the following check in MasterRpcServices#assignRegion():
 {code}
   master.checkInitialized();
 {code}
 This scenario can be avoided if we assign hbase:namespace table after 
 hbase:meta is assigned but before user table region assignment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13706) CoprocessorClassLoader should not exempt Hive classes

2015-08-06 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14661127#comment-14661127
 ] 

Andrew Purtell commented on HBASE-13706:


I'm glad you've taken this up [~jerryhe]. 

bq. The current way is ambiguous and un-intended? Coprocessors should share 
with the host env only via clearly defined interfaces.

Agreed. Getting there will be a process. It's been that way from the beginning.

bq. What about for the other branches? Explicit listing of the hadoop packages?

We can try that for branch-1.

We could also try it for branch-1.2 since 1.2.0 has not been released yet. It 
depends on what [~busbey] thinks. 

Would not be an appropriate change for 1.0.x or 1.1.x since it's a 
compatibility concern with other releases in those lines. 

We could talk about putting it in 0.98.
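A hedged sketch of what an explicit prefix list could look like; the prefixes below are illustrative, not the set any branch actually ships. Only the named packages are delegated to the parent class loader, so org.apache.hadoop.hive.* classes bundled in a coprocessor jar stay with the coprocessor class loader.

{code}
import java.util.Arrays;
import java.util.List;

public class ExemptPrefixSketch {
  // Illustrative explicit exemptions instead of a blanket "org.apache.hadoop." prefix.
  private static final List<String> CLASS_PREFIX_EXEMPTIONS = Arrays.asList(
      "org.apache.hadoop.hbase.",
      "org.apache.hadoop.conf.",
      "org.apache.hadoop.fs.",
      "org.apache.hadoop.io.",
      "org.apache.hadoop.util.");

  static boolean isClassExempt(String name) {
    for (String prefix : CLASS_PREFIX_EXEMPTIONS) {
      if (name.startsWith(prefix)) {
        return true;  // delegate to the parent class loader
      }
    }
    return false;     // load from the coprocessor jar (covers org.apache.hadoop.hive.*)
  }

  public static void main(String[] args) {
    System.out.println(isClassExempt("org.apache.hadoop.hbase.client.Put")); // true
    System.out.println(isClassExempt("org.apache.hadoop.hive.ql.exec.UDF")); // false
  }
}
{code}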

 CoprocessorClassLoader should not exempt Hive classes
 -

 Key: HBASE-13706
 URL: https://issues.apache.org/jira/browse/HBASE-13706
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors
Affects Versions: 2.0.0, 1.0.1, 1.1.0, 0.98.12
Reporter: Jerry He
Assignee: Jerry He
Priority: Minor
 Fix For: 2.0.0, 0.98.14, 1.0.2, 1.1.3

 Attachments: HBASE-13706-master-v2.patch, HBASE-13706.patch


 CoprocessorClassLoader is used to load classes from the coprocessor jar.
 Certain classes are exempt from being loaded by this ClassLoader, which means 
 they will be ignored in the coprocessor jar, but loaded from parent classpath 
 instead.
 One problem is that we categorically exempt org.apache.hadoop.
 But it happens that Hive packages start with org.apache.hadoop.
 There is no reason to exclude Hive classes from the CoprocessorClassLoader.
 HBase does not even include Hive jars.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13825) Use ProtobufUtil#mergeFrom and ProtobufUtil#mergeDelimitedFrom in place of builder methods of same name

2015-08-06 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14661129#comment-14661129
 ] 

Andrew Purtell commented on HBASE-13825:


Ok, I have a +1, going to commit shortly

 Use ProtobufUtil#mergeFrom and ProtobufUtil#mergeDelimitedFrom in place of 
 builder methods of same name
 ---

 Key: HBASE-13825
 URL: https://issues.apache.org/jira/browse/HBASE-13825
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0, 1.0.1
Reporter: Dev Lakhani
Assignee: Andrew Purtell
 Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0

 Attachments: HBASE-13825-0.98.patch, HBASE-13825-0.98.patch, 
 HBASE-13825-branch-1.patch, HBASE-13825-branch-1.patch, HBASE-13825.patch


 When performing a get operation on a column family with more than 64MB of 
 data, the operation fails with:
 Caused by: Portable(java.io.IOException): Call to host:port failed on local 
 exception: com.google.protobuf.InvalidProtocolBufferException: Protocol 
 message was too large.  May be malicious.  Use 
 CodedInputStream.setSizeLimit() to increase the size limit.
 at 
 org.apache.hadoop.hbase.ipc.RpcClient.wrapException(RpcClient.java:1481)
 at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1453)
 at 
 org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1653)
 at 
 org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1711)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:27308)
 at 
 org.apache.hadoop.hbase.protobuf.ProtobufUtil.get(ProtobufUtil.java:1381)
 at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:753)
 at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:751)
 at 
 org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:120)
 at org.apache.hadoop.hbase.client.HTable.get(HTable.java:756)
 at org.apache.hadoop.hbase.client.HTable.get(HTable.java:765)
 at 
 org.apache.hadoop.hbase.client.HTablePool$PooledHTable.get(HTablePool.java:395)
 This may be related to https://issues.apache.org/jira/browse/HBASE-11747 but 
 that issue is related to cluster status. 
 Scan and put operations on the same data work fine.
 Tested on a 1.0.0 cluster with both 1.0.1 and 1.0.0 clients.
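For context, a hedged sketch of the kind of merge helper the summary describes, assuming protobuf-java is on the classpath; the class and method are illustrative, not the committed ProtobufUtil code. The point is to lift the default 64MB CodedInputStream limit before merging instead of calling Builder#mergeFrom(byte[]) directly, which is what trips the exception above.

{code}
import com.google.protobuf.CodedInputStream;
import com.google.protobuf.Message;
import java.io.IOException;

public final class MergeSketch {
  private MergeSketch() {}

  /** Merge serialized bytes into the builder without the default 64MB size cap. */
  public static void mergeFrom(Message.Builder builder, byte[] b) throws IOException {
    CodedInputStream in = CodedInputStream.newInstance(b);
    in.setSizeLimit(Integer.MAX_VALUE); // allow results larger than the protobuf default
    builder.mergeFrom(in);
    in.checkLastTagWas(0);              // same end-of-message check a plain mergeFrom would do
  }
}
{code}

Usage would mirror the issue title: call MergeSketch.mergeFrom(builder, bytes) with any generated Message.Builder wherever builder.mergeFrom(bytes) is used today.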



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14095) Add license to SVGs

2015-08-06 Thread Gabor Liptak (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14661155#comment-14661155
 ] 

Gabor Liptak commented on HBASE-14095:
--

I see these SVGs:

./hbase-server/src/main/resources/hbase-webapps/static/fonts/glyphicons-halflings-regular.svg
./hbase-server/target/hbase-webapps/static/fonts/glyphicons-halflings-regular.svg
./hbase-server/target/classes/hbase-webapps/static/fonts/glyphicons-halflings-regular.svg
./hbase-thrift/src/main/resources/hbase-webapps/static/fonts/glyphicons-halflings-regular.svg
./hbase-thrift/target/hbase-webapps/static/fonts/glyphicons-halflings-regular.svg
./hbase-thrift/target/classes/hbase-webapps/static/fonts/glyphicons-halflings-regular.svg
./src/main/site/resources/images/big_h_logo.svg
./src/main/site/resources/images/hbase_logo.svg

The glyphicons likely came with Bootstrap

https://github.com/twbs/bootstrap/tree/master/fonts

None of them have a copyright comment. 

 Add license to SVGs
 ---

 Key: HBASE-14095
 URL: https://issues.apache.org/jira/browse/HBASE-14095
 Project: HBase
  Issue Type: Sub-task
  Components: build
Reporter: Sean Busbey

 we have SVGs that we exclude from checks. since they're XML we ought to be 
 able to properly add licenses to them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13865) Increase the default value for hbase.hregion.memstore.block.multipler from 2 to 4 (part 2)

2015-08-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14660940#comment-14660940
 ] 

Hudson commented on HBASE-13865:


SUCCESS: Integrated in HBase-1.2-IT #77 (See 
[https://builds.apache.org/job/HBase-1.2-IT/77/])
HBASE-13865 Increase the default value for 
hbase.hregion.memstore.block.multipler from 2 to 4 (part 2) (ndimiduk: rev 
c79bc6d746d0e4bbfa79e69df56ad59beee7b1f4)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMajorCompaction.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompaction.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMinorCompaction.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestClientPushback.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


 Increase the default value for hbase.hregion.memstore.block.multipler from 2 
 to 4 (part 2)
 --

 Key: HBASE-13865
 URL: https://issues.apache.org/jira/browse/HBASE-13865
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 2.0.0
Reporter: Vladimir Rodionov
Assignee: Gabor Liptak
Priority: Trivial
 Fix For: 2.0.0, 0.98.14, 1.1.2, 1.3.0, 1.2.1, 1.0.3

 Attachments: HBASE-13865.1.patch, HBASE-13865.2.patch, 
 HBASE-13865.2.patch


 It's 4 in the book and 2 in current master. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13706) CoprocessorClassLoader should not exempt Hive classes

2015-08-06 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14661128#comment-14661128
 ] 

Andrew Purtell commented on HBASE-13706:


+1 for the v2 patch for master if it passes unit tests

 CoprocessorClassLoader should not exempt Hive classes
 -

 Key: HBASE-13706
 URL: https://issues.apache.org/jira/browse/HBASE-13706
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors
Affects Versions: 2.0.0, 1.0.1, 1.1.0, 0.98.12
Reporter: Jerry He
Assignee: Jerry He
Priority: Minor
 Fix For: 2.0.0, 0.98.14, 1.0.2, 1.1.3

 Attachments: HBASE-13706-master-v2.patch, HBASE-13706.patch


 CoprocessorClassLoader is used to load classes from the coprocessor jar.
 Certain classes are exempt from being loaded by this ClassLoader, which means 
 they will be ignored in the coprocessor jar, but loaded from parent classpath 
 instead.
 One problem is that we categorically exempt org.apache.hadoop.
 But it happens that Hive packages start with org.apache.hadoop.
 There is no reason to exclude Hive classes from the CoprocessorClassLoader.
 HBase does not even include Hive jars.
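
A simplified illustration of why a prefix-based exemption is too broad (this is not the actual 
CoprocessorClassLoader code, just a sketch of the behaviour described above):

{code}
public class ExemptionSketch {
  // A categorical "org.apache.hadoop." exemption also matches Hive classes,
  // because Hive's packages live under org.apache.hadoop.hive.
  static boolean isExempt(String className) {
    return className.startsWith("org.apache.hadoop.");
  }

  public static void main(String[] args) {
    System.out.println(isExempt("org.apache.hadoop.hbase.client.Put"));  // true, intended
    System.out.println(isExempt("org.apache.hadoop.hive.ql.exec.UDF"));  // true, unintended:
    // the Hive class bundled in the coprocessor jar would be skipped and looked up
    // on the parent classpath, where HBase does not ship Hive jars.
  }
}
{code}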



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13865) Increase the default value for hbase.hregion.memstore.block.multipler from 2 to 4 (part 2)

2015-08-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14660949#comment-14660949
 ] 

Hudson commented on HBASE-13865:


FAILURE: Integrated in HBase-1.3-IT #74 (See 
[https://builds.apache.org/job/HBase-1.3-IT/74/])
HBASE-13865 Increase the default value for 
hbase.hregion.memstore.block.multipler from 2 to 4 (part 2) (ndimiduk: rev 
51061f08a338c2ba6f18e4f702b645fddf194aed)
* hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMinorCompaction.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMajorCompaction.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestClientPushback.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompaction.java


 Increase the default value for hbase.hregion.memstore.block.multipler from 2 
 to 4 (part 2)
 --

 Key: HBASE-13865
 URL: https://issues.apache.org/jira/browse/HBASE-13865
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 2.0.0
Reporter: Vladimir Rodionov
Assignee: Gabor Liptak
Priority: Trivial
 Fix For: 2.0.0, 0.98.14, 1.1.2, 1.3.0, 1.2.1, 1.0.3

 Attachments: HBASE-13865.1.patch, HBASE-13865.2.patch, 
 HBASE-13865.2.patch


 It's 4 in the book and 2 in current master. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13865) Increase the default value for hbase.hregion.memstore.block.multipler from 2 to 4 (part 2)

2015-08-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14660966#comment-14660966
 ] 

Hudson commented on HBASE-13865:


FAILURE: Integrated in HBase-0.98 #1069 (See 
[https://builds.apache.org/job/HBase-0.98/1069/])
HBASE-13865 Increase the default value for 
hbase.hregion.memstore.block.multipler from 2 to 4 (part 2) (ndimiduk: rev 
fde7b3fd74741237bee0e060073d9dc5c39922c5)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestClientPushback.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMinorCompaction.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMajorCompaction.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompaction.java


 Increase the default value for hbase.hregion.memstore.block.multipler from 2 
 to 4 (part 2)
 --

 Key: HBASE-13865
 URL: https://issues.apache.org/jira/browse/HBASE-13865
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 2.0.0
Reporter: Vladimir Rodionov
Assignee: Gabor Liptak
Priority: Trivial
 Fix For: 2.0.0, 0.98.14, 1.1.2, 1.3.0, 1.2.1, 1.0.3

 Attachments: HBASE-13865.1.patch, HBASE-13865.2.patch, 
 HBASE-13865.2.patch


 It's 4 in the book and 2 in current master. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13867) Add endpoint coprocessor guide to HBase book

2015-08-06 Thread Gaurav Bhardwaj (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14661107#comment-14661107
 ] 

Gaurav Bhardwaj commented on HBASE-13867:
-

I have put the comment just once, and it got replicated 7 times. Maybe a bug 
in JIRA. 

 Add endpoint coprocessor guide to HBase book
 

 Key: HBASE-13867
 URL: https://issues.apache.org/jira/browse/HBASE-13867
 Project: HBase
  Issue Type: Task
  Components: Coprocessors, documentation
Reporter: Vladimir Rodionov
Assignee: Gaurav Bhardwaj
 Fix For: 2.0.0

 Attachments: HBASE-13867.1.patch, HBASE-13867.2.patch, 
 HBASE-13867.2.patch


 Endpoint coprocessors are very poorly documented.
 The Coprocessor section of the HBase book must be updated either with its own 
 endpoint coprocessor HOW-TO guide or, at least, with link(s) to some 
 other guides. There is a good description here:
 http://www.3pillarglobal.com/insights/hbase-coprocessors



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12865) WALs may be deleted before they are replicated to peers

2015-08-06 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14661131#comment-14661131
 ] 

Andrew Purtell commented on HBASE-12865:


I will commit this soon and will fix up that nit at commit time.

 WALs may be deleted before they are replicated to peers
 ---

 Key: HBASE-12865
 URL: https://issues.apache.org/jira/browse/HBASE-12865
 Project: HBase
  Issue Type: Bug
  Components: Replication
Reporter: Liu Shaohui
Assignee: He Liangliang
Priority: Critical
 Attachments: HBASE-12865-V1.diff, HBASE-12865-V2.diff


 By design, the ReplicationLogCleaner guarantees that the WALs in the 
 replication queue can't be deleted by the HMaster. The 
 ReplicationLogCleaner gets the WAL set from ZooKeeper by scanning the 
 replication zk node. But it may get an incomplete WAL set during replication 
 failover, because the scan operation is not atomic.
 For example: there are three region servers, rs1, rs2, rs3, and peer id 10. 
 The layout of the replication ZooKeeper nodes is:
 {code}
 /hbase/replication/rs/rs1/10/wals
  /rs2/10/wals
  /rs3/10/wals
 {code}
 - t1: the ReplicationLogCleaner finished scanning the replication queue of 
 rs1, and starts to scan the queue of rs2.
 - t2: region server rs3 goes down, and rs1 takes over rs3's replication queue. 
 The new layout is
 {code}
 /hbase/replication/rs/rs1/10/wals
  /rs1/10-rs3/wals
  /rs2/10/wals
  /rs3
 {code}
 - t3: the ReplicationLogCleaner finished scanning the queue of rs2, and starts 
 to scan the node of rs3. But the queue has been moved to 
 replication/rs1/10-rs3/WALS.
 So the ReplicationLogCleaner will miss the WALs of rs3 in peer 10, and the 
 HMaster may delete these WALs before they are replicated to peer clusters.
 We encountered this problem in our cluster and I think it's a serious bug for 
 replication.
 Suggestions to fix this bug are welcome. thx~
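
A rough sketch of the non-atomic scan described above (illustrative only, using plain ZooKeeper 
calls rather than the actual ReplicationLogCleaner code); if a queue moves while iterating, as in 
the rs3 failover, its WALs are simply never visited:

{code}
import java.util.ArrayList;
import java.util.List;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;

public class QueueScanSketch {
  // Walks /hbase/replication/rs/<rs>/<queue>/<wal> one region server at a time.
  // If a queue is moved to another region server after its old parent was
  // scanned but before its new parent is scanned, its WALs are missed.
  static List<String> listQueuedWals(ZooKeeper zk, String rsRoot)
      throws KeeperException, InterruptedException {
    List<String> wals = new ArrayList<>();
    for (String rs : zk.getChildren(rsRoot, false)) {                      // e.g. rs1, rs2, rs3
      for (String queue : zk.getChildren(rsRoot + "/" + rs, false)) {      // e.g. 10, 10-rs3
        wals.addAll(zk.getChildren(rsRoot + "/" + rs + "/" + queue, false));
      }
    }
    return wals;  // not a consistent snapshot: queues can move while iterating
  }
}
{code}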



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14190) Assign hbase:namespace table ahead of user region assignment

2015-08-06 Thread Francis Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14661136#comment-14661136
 ] 

Francis Liu commented on HBASE-14190:
-

{quote}
When doing this we should also consider, when the security coprocessors are 
installed, deploying the ACL and labels tables before user tables.
{quote}

+1

It'd be good to handle the general case of assigning system (hbase:*) tables 
first. If I remember right, the original patch used to do that. 
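
A minimal sketch of that general case (illustrative only, not the original patch), assuming the 
standard {{TableName.isSystemTable()}} check: order the regions so system-table regions are 
assigned before any user regions.

{code}
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.HRegionInfo;

public class SystemFirstSketch {
  // Orders regions so every hbase:* (system) table region comes before any
  // user table region; the assignment manager would then work through this list.
  static List<HRegionInfo> systemTablesFirst(List<HRegionInfo> regions) {
    List<HRegionInfo> ordered = new ArrayList<>();
    for (HRegionInfo hri : regions) {
      if (hri.getTable().isSystemTable()) {
        ordered.add(hri);
      }
    }
    for (HRegionInfo hri : regions) {
      if (!hri.getTable().isSystemTable()) {
        ordered.add(hri);
      }
    }
    return ordered;
  }
}
{code}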

 Assign hbase:namespace table ahead of user region assignment
 

 Key: HBASE-14190
 URL: https://issues.apache.org/jira/browse/HBASE-14190
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu

 Currently the namespace table region is assigned like user regions.
 I spent several hours working with a customer where master couldn't finish 
 initialization.
 Even though master was restarted quite a few times, it went down with the 
 following:
 {code}
 2015-08-05 17:16:57,530 FATAL [hdpmaster1:6.activeMasterManager] 
 master.HMaster: Master server abort: loaded coprocessors are: []
 2015-08-05 17:16:57,530 FATAL [hdpmaster1:6.activeMasterManager] 
 master.HMaster: Unhandled exception. Starting shutdown.
 java.io.IOException: Timedout 30ms waiting for namespace table to be 
 assigned
   at 
 org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:104)
   at org.apache.hadoop.hbase.master.HMaster.initNamespace(HMaster.java:985)
   at 
 org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:779)
   at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:182)
   at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1646)
   at java.lang.Thread.run(Thread.java:744)
 {code}
 During previous run(s), namespace table was created, hence leaving an entry 
 in hbase:meta.
 The following if block in TableNamespaceManager#start() was skipped:
 {code}
 if (!MetaTableAccessor.tableExists(masterServices.getConnection(),
   TableName.NAMESPACE_TABLE_NAME)) {
 {code}
 TableNamespaceManager#start() spins, waiting for namespace region to be 
 assigned.
 There was an issue with master assigning user regions.
 We tried issuing the 'assign' command from the hbase shell, which didn't work because 
 of the following check in MasterRpcServices#assignRegion():
 {code}
   master.checkInitialized();
 {code}
 This scenario can be avoided if we assign the hbase:namespace table after 
 hbase:meta is assigned but before user table region assignment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14105) Add shell tests for Snapshot

2015-08-06 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14661168#comment-14661168
 ] 

Andrew Purtell commented on HBASE-14105:


The master patch applied cleanly to master and branch-1, TestShell passed. I 
applied the patch for branch-1.0 to branch-1.0, branch-1.1, and branch-1.2, 
TestShell passed. 

The patch for 0.98 applies but TestShell fails consistently after application. 
Mind taking a look [~ashish singhi]? 

 Add shell tests for Snapshot
 

 Key: HBASE-14105
 URL: https://issues.apache.org/jira/browse/HBASE-14105
 Project: HBase
  Issue Type: Sub-task
  Components: test
Reporter: Ashish Singhi
Assignee: Ashish Singhi
 Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0

 Attachments: HBASE-14105-0.98.patch, HBASE-14105-branch-1.0.patch, 
 HBASE-14105.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14105) Add shell tests for Snapshot

2015-08-06 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14105:
---
Status: Open  (was: Patch Available)

 Add shell tests for Snapshot
 

 Key: HBASE-14105
 URL: https://issues.apache.org/jira/browse/HBASE-14105
 Project: HBase
  Issue Type: Sub-task
  Components: test
Reporter: Ashish Singhi
Assignee: Ashish Singhi
 Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0

 Attachments: HBASE-14105-0.98.patch, HBASE-14105-branch-1.0.patch, 
 HBASE-14105.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13865) Increase the default value for hbase.hregion.memstore.block.multipler from 2 to 4 (part 2)

2015-08-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14660954#comment-14660954
 ] 

Hudson commented on HBASE-13865:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #1022 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/1022/])
HBASE-13865 Increase the default value for 
hbase.hregion.memstore.block.multipler from 2 to 4 (part 2) (ndimiduk: rev 
fde7b3fd74741237bee0e060073d9dc5c39922c5)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestClientPushback.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMinorCompaction.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompaction.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMajorCompaction.java


 Increase the default value for hbase.hregion.memstore.block.multipler from 2 
 to 4 (part 2)
 --

 Key: HBASE-13865
 URL: https://issues.apache.org/jira/browse/HBASE-13865
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 2.0.0
Reporter: Vladimir Rodionov
Assignee: Gabor Liptak
Priority: Trivial
 Fix For: 2.0.0, 0.98.14, 1.1.2, 1.3.0, 1.2.1, 1.0.3

 Attachments: HBASE-13865.1.patch, HBASE-13865.2.patch, 
 HBASE-13865.2.patch


 It's 4 in the book and 2 in current master. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-6721) RegionServer Group based Assignment

2015-08-06 Thread Francis Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14659561#comment-14659561
 ] 

Francis Liu commented on HBASE-6721:


Thanks [~apurtell], that'd be great. Let me work on one.

 RegionServer Group based Assignment
 ---

 Key: HBASE-6721
 URL: https://issues.apache.org/jira/browse/HBASE-6721
 Project: HBase
  Issue Type: New Feature
Reporter: Francis Liu
Assignee: Francis Liu
 Attachments: 6721-master-webUI.patch, HBASE-6721-DesigDoc.pdf, 
 HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, 
 HBASE-6721_10.patch, HBASE-6721_11.patch, HBASE-6721_8.patch, 
 HBASE-6721_9.patch, HBASE-6721_9.patch, HBASE-6721_94.patch, 
 HBASE-6721_94.patch, HBASE-6721_94_2.patch, HBASE-6721_94_3.patch, 
 HBASE-6721_94_3.patch, HBASE-6721_94_4.patch, HBASE-6721_94_5.patch, 
 HBASE-6721_94_6.patch, HBASE-6721_94_7.patch, HBASE-6721_trunk.patch, 
 HBASE-6721_trunk.patch, HBASE-6721_trunk.patch, HBASE-6721_trunk1.patch, 
 HBASE-6721_trunk2.patch


 In multi-tenant deployments of HBase, it is likely that a RegionServer will 
 be serving out regions from a number of different tables owned by various 
 client applications. Being able to group a subset of running RegionServers 
 and assign specific tables to them provides a client application a level of 
 isolation and resource allocation.
 The proposal essentially is to have an AssignmentManager which is aware of 
 RegionServer groups and assigns tables to region servers based on groupings. 
 Load balancing will occur on a per group basis as well. 
 This is essentially a simplification of the approach taken in HBASE-4120. See 
 attached document.
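
As a rough illustration of the idea (not the attached patch itself), group membership can be 
modeled as two maps, and the balancer would then only consider servers from the table's group:

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.TableName;

public class GroupAssignmentSketch {
  // tableGroup: which group a table belongs to; serverGroup: which group a server belongs to.
  // Candidate servers for a table's regions are restricted to its group, which is
  // what gives tenants isolation from each other's load.
  static List<ServerName> candidateServers(TableName table,
      Map<TableName, String> tableGroup, Map<ServerName, String> serverGroup,
      List<ServerName> onlineServers) {
    String group = tableGroup.getOrDefault(table, "default");
    List<ServerName> candidates = new ArrayList<>();
    for (ServerName sn : onlineServers) {
      if (group.equals(serverGroup.getOrDefault(sn, "default"))) {
        candidates.add(sn);
      }
    }
    return candidates;
  }
}
{code}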



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14178) regionserver blocks because of waiting for offsetLock

2015-08-06 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14659560#comment-14659560
 ] 

Anoop Sam John commented on HBASE-14178:


I see, I get you now. Basically we need it as 2 new methods in CacheConfig 
so as to accommodate some of the issues in today's impl. Like we discussed, 
when the CF level BC usage is explicitly disabled, we should honour that. In 
such a case, whatever the value of the cache-on-write or prefetch option, we 
should not cache those data blocks. Here in this patch, we try to stick with 
the current behaviour. Fine, we can correct them as part of another jira. 
+1
Just add a TODO with some comments, as we discussed here, in these new methods 
as well as the old methods in CacheConfig, which we need to change as part of the 
above fix.

 regionserver blocks because of waiting for offsetLock
 -

 Key: HBASE-14178
 URL: https://issues.apache.org/jira/browse/HBASE-14178
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.6
Reporter: Heng Chen
Priority: Critical
 Fix For: 0.98.6

 Attachments: HBASE-14178-0.98.patch, HBASE-14178.patch, 
 HBASE-14178_v1.patch, HBASE-14178_v2.patch, HBASE-14178_v3.patch, 
 HBASE-14178_v4.patch, HBASE-14178_v5.patch, HBASE-14178_v6.patch, jstack


 My regionserver blocks, and all client rpc timeout. 
 I print the regionserver's jstack,  it seems a lot of threads were blocked 
 for waiting offsetLock, detail infomation belows:
 PS:  my table's block cache is off
 {code}
 B.DefaultRpcServer.handler=2,queue=2,port=60020 #82 daemon prio=5 os_prio=0 
 tid=0x01827000 nid=0x2cdc in Object.wait() [0x7f3831b72000]
java.lang.Thread.State: WAITING (on object monitor)
 at java.lang.Object.wait(Native Method)
 at java.lang.Object.wait(Object.java:502)
 at org.apache.hadoop.hbase.util.IdLock.getLockEntry(IdLock.java:79)
 - locked 0x000773af7c18 (a 
 org.apache.hadoop.hbase.util.IdLock$Entry)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:352)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:253)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:524)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:572)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:257)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:173)
 at 
 org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:313)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:269)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:695)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:683)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:533)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:140)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:3889)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3969)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3847)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3820)
 - locked 0x0005e5c55ad0 (a 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3807)
 at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4779)
 at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4753)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2916)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29583)
 at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027)
 at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
 at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
 at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
 at java.lang.Thread.run(Thread.java:745)
Locked ownable 

[jira] [Commented] (HBASE-14188) Read path optimizations after HBASE-11425 profiling

2015-08-06 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14659588#comment-14659588
 ] 

ramkrishna.s.vasudevan commented on HBASE-14188:


The test failure is due to 
{code}
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:713)
at 
org.apache.hadoop.hbase.MultithreadedTestUtil$TestContext.startThreads(MultithreadedTestUtil.java:61)
at 
org.apache.hadoop.hbase.io.hfile.CacheTestUtils.hammerSingleKey(CacheTestUtils.java:200)
at 
org.apache.hadoop.hbase.io.hfile.bucket.TestBucketCache.testCacheMultiThreadedSingleKey(TestBucketCache.java:177)
{code}
Seems unrelated. It is passing locally.
bq. getNextCellStartPosition - Now it is not returning the next cell's start 
pos. It is returning the current cell's serialization size in the HFile. Pls change the 
method name accordingly. Can fix on commit.
Changed this to getCurCellSize().

 Read path optimizations after HBASE-11425 profiling
 ---

 Key: HBASE-14188
 URL: https://issues.apache.org/jira/browse/HBASE-14188
 Project: HBase
  Issue Type: Sub-task
  Components: Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Attachments: HBASE-14188.patch, setSeqId.png


 This subtask deals with some improvements that can be done in the read path 
 (scans) after the changes for HBASE-11425 went in.
 - Avoid CellUtil.setSequenceId in hot path.
 - Use BBUtils in the MultiByteBuff.
 - Use ByteBuff.skip() API in HFileReader rather than 
 MultiByteBuff.position().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14188) Read path optimizations after HBASE-11425 profiling

2015-08-06 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-14188:
---
Status: Open  (was: Patch Available)

 Read path optimizations after HBASE-11425 profiling
 ---

 Key: HBASE-14188
 URL: https://issues.apache.org/jira/browse/HBASE-14188
 Project: HBase
  Issue Type: Sub-task
  Components: Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Attachments: HBASE-14188.patch, setSeqId.png


 This subtask deals with some improvements that can be done in the read path 
 (scans) after the changes for HBASE-11425 went in.
 - Avoid CellUtil.setSequenceId in hot path.
 - Use BBUtils in the MultiByteBuff.
 - Use ByteBuff.skip() API in HFileReader rather than 
 MultiByteBuff.position().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14178) regionserver blocks because of waiting for offsetLock

2015-08-06 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14659553#comment-14659553
 ] 

Heng Chen commented on HBASE-14178:
---

{quote}
Why do we need a different condition for reading from the cache with and without 
the lock? Without the lock we read first, as an optimistic approach. If the block is not 
there by then, we do one more round of checking for a possible other 
concurrent thread doing the caching of this block. So we use the lock then. So I am 
not sure why reading from the cache has to have a different condition. 
It looks like the patch gives the impression that we read from the cache 
with the lock in order to cache that block into the BC. But that is not the case. 
Sorry if I missed some other discussion parts in the above comments.
{quote}

Thanks for your reply!

The purpose of the lock is to improve performance when another thread reads the same block. If 
we are sure the block will not be in the BC, there is no need to lock. So in 
{{shouldLockOnCacheMiss}} we check whether the block will be cached after it is 
read from HDFS.
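
A minimal sketch of that reasoning (a hypothetical helper, not the actual patch): the per-offset 
lock is only worth taking when the block read from HDFS is going to end up in the block cache, 
because only then can the waiting threads find it there.

{code}
public class CacheMissLockSketch {
  // Hypothetical helper illustrating the idea behind shouldLockOnCacheMiss:
  // only take the per-offset IdLock when the freshly read block would
  // actually be put into the block cache.
  static boolean shouldLockOnCacheMiss(boolean cacheDataOnRead,
      boolean cacheDataOnWrite, boolean prefetchOnOpen) {
    // If no setting will cache the block, concurrent readers gain nothing
    // from waiting on each other, so skip the offset lock entirely.
    return cacheDataOnRead || cacheDataOnWrite || prefetchOnOpen;
  }
}
{code}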



 regionserver blocks because of waiting for offsetLock
 -

 Key: HBASE-14178
 URL: https://issues.apache.org/jira/browse/HBASE-14178
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.6
Reporter: Heng Chen
Priority: Critical
 Fix For: 0.98.6

 Attachments: HBASE-14178-0.98.patch, HBASE-14178.patch, 
 HBASE-14178_v1.patch, HBASE-14178_v2.patch, HBASE-14178_v3.patch, 
 HBASE-14178_v4.patch, HBASE-14178_v5.patch, HBASE-14178_v6.patch, jstack


 My regionserver blocks, and all client rpc timeout. 
 I print the regionserver's jstack,  it seems a lot of threads were blocked 
 for waiting offsetLock, detail infomation belows:
 PS:  my table's block cache is off
 {code}
 B.DefaultRpcServer.handler=2,queue=2,port=60020 #82 daemon prio=5 os_prio=0 
 tid=0x01827000 nid=0x2cdc in Object.wait() [0x7f3831b72000]
java.lang.Thread.State: WAITING (on object monitor)
 at java.lang.Object.wait(Native Method)
 at java.lang.Object.wait(Object.java:502)
 at org.apache.hadoop.hbase.util.IdLock.getLockEntry(IdLock.java:79)
 - locked 0x000773af7c18 (a 
 org.apache.hadoop.hbase.util.IdLock$Entry)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:352)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:253)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:524)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:572)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:257)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:173)
 at 
 org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:313)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:269)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:695)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:683)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:533)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:140)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:3889)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3969)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3847)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3820)
 - locked 0x0005e5c55ad0 (a 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3807)
 at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4779)
 at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4753)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2916)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29583)
 at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027)
 at 

[jira] [Comment Edited] (HBASE-14178) regionserver blocks because of waiting for offsetLock

2015-08-06 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14659574#comment-14659574
 ] 

Duo Zhang edited comment on HBASE-14178 at 8/6/15 6:46 AM:
---

[~ram_krish] A bit strange, but now {{family.isBlockCacheEnabled}} only means 
{{cacheDataOnRead}} ... {{cacheDataOnWrite}} and some other configurations such 
as {{prefetchOnOpen}} are separate, which means you can set them to {{true}} 
and they will take effect even if you explicitly call 
{{family.setBlockCacheEnabled(false)}}. So theoretically, even if you disable the BC 
at family level, it is still possible that we could find the block in the BC...


was (Author: apache9):
[~ram_krish] A bit strange but now, {{family.isBlockCacheEnabled}} only means 
{{cacheDataOnRead }}... {{cacheDataOnWrite}} and some other configurations such 
as {{prefetchOnOpen}} are separated which means you can set them to {{true}} 
and will take effect even if you explicitly call 
{{family.setBlockCacheEnabled(false)}}. So theoretically even if you disable BC 
at family level, it is still possible that we could find the block in BC...

 regionserver blocks because of waiting for offsetLock
 -

 Key: HBASE-14178
 URL: https://issues.apache.org/jira/browse/HBASE-14178
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.6
Reporter: Heng Chen
Priority: Critical
 Fix For: 0.98.6

 Attachments: HBASE-14178-0.98.patch, HBASE-14178.patch, 
 HBASE-14178_v1.patch, HBASE-14178_v2.patch, HBASE-14178_v3.patch, 
 HBASE-14178_v4.patch, HBASE-14178_v5.patch, HBASE-14178_v6.patch, jstack


 My regionserver blocks, and all client rpc timeout. 
 I print the regionserver's jstack,  it seems a lot of threads were blocked 
 for waiting offsetLock, detail infomation belows:
 PS:  my table's block cache is off
 {code}
 B.DefaultRpcServer.handler=2,queue=2,port=60020 #82 daemon prio=5 os_prio=0 
 tid=0x01827000 nid=0x2cdc in Object.wait() [0x7f3831b72000]
java.lang.Thread.State: WAITING (on object monitor)
 at java.lang.Object.wait(Native Method)
 at java.lang.Object.wait(Object.java:502)
 at org.apache.hadoop.hbase.util.IdLock.getLockEntry(IdLock.java:79)
 - locked 0x000773af7c18 (a 
 org.apache.hadoop.hbase.util.IdLock$Entry)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:352)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:253)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:524)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:572)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:257)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:173)
 at 
 org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:313)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:269)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:695)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:683)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:533)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:140)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:3889)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3969)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3847)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3820)
 - locked 0x0005e5c55ad0 (a 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3807)
 at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4779)
 at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4753)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2916)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29583)
 at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027)
 

[jira] [Commented] (HBASE-14178) regionserver blocks because of waiting for offsetLock

2015-08-06 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14659574#comment-14659574
 ] 

Duo Zhang commented on HBASE-14178:
---

[~ram_krish] A bit strange, but now {{family.isBlockCacheEnabled}} only means 
{{cacheDataOnRead}} ... {{cacheDataOnWrite}} and some other configurations such 
as {{prefetchOnOpen}} are separate, which means you can set them to {{true}} 
and they will take effect even if you explicitly call 
{{family.setBlockCacheEnabled(false)}}. So theoretically, even if you disable the BC 
at family level, it is still possible that we could find the block in the BC...
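
An illustration of that semantics via the column family descriptor API (method names as in 
branch-1, stated here as an assumption rather than verified against every branch):

{code}
import org.apache.hadoop.hbase.HColumnDescriptor;

public class FamilyCacheSemanticsSketch {
  static HColumnDescriptor example() {
    HColumnDescriptor family = new HColumnDescriptor("cf");
    family.setBlockCacheEnabled(false);    // in effect only turns off cacheDataOnRead
    family.setCacheDataOnWrite(true);      // blocks written by flush/compaction can still be cached
    family.setPrefetchBlocksOnOpen(true);  // opening a store file can still prefetch blocks into the BC
    // So even with the family-level block cache "disabled", a later read can
    // still find the block in the BC, as described above.
    return family;
  }
}
{code}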

 regionserver blocks because of waiting for offsetLock
 -

 Key: HBASE-14178
 URL: https://issues.apache.org/jira/browse/HBASE-14178
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.6
Reporter: Heng Chen
Priority: Critical
 Fix For: 0.98.6

 Attachments: HBASE-14178-0.98.patch, HBASE-14178.patch, 
 HBASE-14178_v1.patch, HBASE-14178_v2.patch, HBASE-14178_v3.patch, 
 HBASE-14178_v4.patch, HBASE-14178_v5.patch, HBASE-14178_v6.patch, jstack


 My regionserver blocks, and all client rpc timeout. 
 I print the regionserver's jstack,  it seems a lot of threads were blocked 
 for waiting offsetLock, detail infomation belows:
 PS:  my table's block cache is off
 {code}
 B.DefaultRpcServer.handler=2,queue=2,port=60020 #82 daemon prio=5 os_prio=0 
 tid=0x01827000 nid=0x2cdc in Object.wait() [0x7f3831b72000]
java.lang.Thread.State: WAITING (on object monitor)
 at java.lang.Object.wait(Native Method)
 at java.lang.Object.wait(Object.java:502)
 at org.apache.hadoop.hbase.util.IdLock.getLockEntry(IdLock.java:79)
 - locked 0x000773af7c18 (a 
 org.apache.hadoop.hbase.util.IdLock$Entry)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:352)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:253)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:524)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:572)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:257)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:173)
 at 
 org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:313)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:269)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:695)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:683)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:533)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:140)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:3889)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3969)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3847)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3820)
 - locked 0x0005e5c55ad0 (a 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3807)
 at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4779)
 at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4753)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2916)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29583)
 at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027)
 at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
 at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
 at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
 at java.lang.Thread.run(Thread.java:745)
Locked ownable synchronizers:
 - 0x0005e5c55c08 (a 
 java.util.concurrent.locks.ReentrantLock$NonfairSync)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14188) Read path optimizations after HBASE-11425 profiling

2015-08-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14659579#comment-14659579
 ] 

Hadoop QA commented on HBASE-14188:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12748997/HBASE-14188.patch
  against master branch at commit 5d2708f628d4718f6267e9da6c8cbafeda66f4fb.
  ATTACHMENT ID: 12748997

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.io.hfile.bucket.TestBucketCache

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14988//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14988//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14988//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14988//console

This message is automatically generated.

 Read path optimizations after HBASE-11425 profiling
 ---

 Key: HBASE-14188
 URL: https://issues.apache.org/jira/browse/HBASE-14188
 Project: HBase
  Issue Type: Sub-task
  Components: Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Attachments: HBASE-14188.patch, setSeqId.png


 This subtask deals with some improvements that can be done in the read path 
 (scans) after the changes for HBASE-11425 went in.
 - Avoid CellUtil.setSequenceId in hot path.
 - Use BBUtils in the MultiByteBuff.
 - Use ByteBuff.skip() API in HFileReader rather than 
 MultiByteBuff.position().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14178) regionserver blocks because of waiting for offsetLock

2015-08-06 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14659576#comment-14659576
 ] 

ramkrishna.s.vasudevan commented on HBASE-14178:


Okay, agreed on a TODO. +1 for adding a comment regarding this.

 regionserver blocks because of waiting for offsetLock
 -

 Key: HBASE-14178
 URL: https://issues.apache.org/jira/browse/HBASE-14178
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.6
Reporter: Heng Chen
Priority: Critical
 Fix For: 0.98.6

 Attachments: HBASE-14178-0.98.patch, HBASE-14178.patch, 
 HBASE-14178_v1.patch, HBASE-14178_v2.patch, HBASE-14178_v3.patch, 
 HBASE-14178_v4.patch, HBASE-14178_v5.patch, HBASE-14178_v6.patch, jstack


 My regionserver blocks, and all client rpc timeout. 
 I print the regionserver's jstack,  it seems a lot of threads were blocked 
 for waiting offsetLock, detail infomation belows:
 PS:  my table's block cache is off
 {code}
 B.DefaultRpcServer.handler=2,queue=2,port=60020 #82 daemon prio=5 os_prio=0 
 tid=0x01827000 nid=0x2cdc in Object.wait() [0x7f3831b72000]
java.lang.Thread.State: WAITING (on object monitor)
 at java.lang.Object.wait(Native Method)
 at java.lang.Object.wait(Object.java:502)
 at org.apache.hadoop.hbase.util.IdLock.getLockEntry(IdLock.java:79)
 - locked 0x000773af7c18 (a 
 org.apache.hadoop.hbase.util.IdLock$Entry)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:352)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:253)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:524)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:572)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:257)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:173)
 at 
 org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:313)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:269)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:695)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:683)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:533)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:140)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:3889)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3969)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3847)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3820)
 - locked 0x0005e5c55ad0 (a 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3807)
 at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4779)
 at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4753)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2916)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29583)
 at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027)
 at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
 at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
 at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
 at java.lang.Thread.run(Thread.java:745)
Locked ownable synchronizers:
 - 0x0005e5c55c08 (a 
 java.util.concurrent.locks.ReentrantLock$NonfairSync)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14188) Read path optimizations after HBASE-11425 profiling

2015-08-06 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-14188:
---
Attachment: HBASE-14188_2.patch

This is the patch I will commit. It adds the filterRowKey implementation in 
FilterAllFilter, without which a copy of the row part always 
happens.

 Read path optimizations after HBASE-11425 profiling
 ---

 Key: HBASE-14188
 URL: https://issues.apache.org/jira/browse/HBASE-14188
 Project: HBase
  Issue Type: Sub-task
  Components: Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Attachments: HBASE-14188.patch, HBASE-14188_1.patch, 
 HBASE-14188_2.patch, setSeqId.png


 This subtask deals with some improvements that can be done in the read path 
 (scans) after the changes for HBASE-11425 went in.
 - Avoid CellUtil.setSequenceId in hot path.
 - Use BBUtils in the MultiByteBuff.
 - Use ByteBuff.skip() API in HFileReader rather than 
 MultiByteBuff.position().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13044) Configuration option for disabling coprocessor loading

2015-08-06 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14659900#comment-14659900
 ] 

Lars Hofhansl commented on HBASE-13044:
---

That would be the idea. The only dangerous ones are the user-provided ones on 
HDFS.

 Configuration option for disabling coprocessor loading
 --

 Key: HBASE-13044
 URL: https://issues.apache.org/jira/browse/HBASE-13044
 Project: HBase
  Issue Type: Improvement
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11

 Attachments: HBASE-13044.patch, HBASE-13044.patch


 Some users would like complete assurance that coprocessors cannot be loaded. Add a 
 configuration option that prevents coprocessors from ever being loaded by 
 ignoring any load directives found in the site file or table metadata. 
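
A minimal sketch of how such a switch might be flipped programmatically; the property names below 
are an assumption based on this issue's summary, not confirmed from the committed patch.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class DisableCoprocessorsSketch {
  static Configuration withoutCoprocessors() {
    Configuration conf = HBaseConfiguration.create();
    // Assumed property names: turn off all coprocessor loading, or only the
    // user/table-level coprocessors (the user-provided jars on HDFS).
    conf.setBoolean("hbase.coprocessor.enabled", false);
    conf.setBoolean("hbase.coprocessor.user.enabled", false);
    return conf;
  }
}
{code}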



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-12988) [Replication]Parallel apply edits across regions

2015-08-06 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14659903#comment-14659903
 ] 

Lars Hofhansl edited comment on HBASE-12988 at 8/6/15 12:11 PM:


I think we can commit this as a stop gap measure. It lays the foundation for a 
multithreaded replication source (or at least supports the idea).

Any objections?


was (Author: lhofhansl):
I think we can commit this as a stop gap measure. It lays the foundation for a 
multithreaded replication source (or at least supports the idea).

 [Replication]Parallel apply edits across regions
 

 Key: HBASE-12988
 URL: https://issues.apache.org/jira/browse/HBASE-12988
 Project: HBase
  Issue Type: Improvement
  Components: Replication
Reporter: hongyu bi
Assignee: Lars Hofhansl
 Attachments: 12988-v2.txt, 12988-v3.txt, 12988-v4.txt, 12988-v5.txt, 
 12988.txt, HBASE-12988-0.98.patch, ParallelReplication-v2.txt


 We can apply edits to the slave cluster in parallel at table level to speed up 
 replication.
 Update: per the conversation below, it's better to apply edits at row level in 
 parallel.
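
A rough sketch of applying edits in parallel at row level (illustrative only, not the attached 
patch): group the WAL entries by a hash of the row so that edits for the same row stay ordered, 
then replay each group on its own thread.

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelApplySketch {
  interface Edit { byte[] row(); void apply(); }

  // Edits for the same row land in the same bucket, preserving per-row order,
  // while different buckets are applied concurrently on the sink cluster.
  static void applyInParallel(List<Edit> edits, int threads) throws Exception {
    List<List<Edit>> buckets = new ArrayList<>();
    for (int i = 0; i < threads; i++) buckets.add(new ArrayList<>());
    for (Edit e : edits) {
      int b = (java.util.Arrays.hashCode(e.row()) & Integer.MAX_VALUE) % threads;
      buckets.get(b).add(e);
    }
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    List<Future<?>> futures = new ArrayList<>();
    for (List<Edit> bucket : buckets) {
      futures.add(pool.submit(() -> bucket.forEach(Edit::apply)));
    }
    for (Future<?> f : futures) f.get();  // wait for every bucket before acking the batch
    pool.shutdown();
  }
}
{code}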



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13865) Increase the default value for hbase.hregion.memstore.block.multipler from 2 to 4 (part 2)

2015-08-06 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-13865:

Release Note: Increase default hbase.hregion.memstore.block.multiplier from 
2 to 4 in the code to match the default value in the config files.  (was: 
Increase hbase.hregion.memstore.block.multiplier from 2 to 4)
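
For reference, a minimal sketch of setting the value explicitly on a client or site configuration 
(the property name is the one from the release note; setting it in hbase-site.xml works the same 
way):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class MemstoreMultiplierSketch {
  static Configuration withMultiplier() {
    Configuration conf = HBaseConfiguration.create();
    // Writes to a region are blocked once its memstore reaches
    // multiplier * hbase.hregion.memstore.flush.size; 4 is the new default.
    conf.setInt("hbase.hregion.memstore.block.multiplier", 4);
    return conf;
  }
}
{code}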

 Increase the default value for hbase.hregion.memstore.block.multipler from 2 
 to 4 (part 2)
 --

 Key: HBASE-13865
 URL: https://issues.apache.org/jira/browse/HBASE-13865
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 2.0.0
Reporter: Vladimir Rodionov
Assignee: Gabor Liptak
Priority: Trivial
 Fix For: 2.0.0, 0.98.14, 1.3.0, 1.2.1, 1.0.3, 1.1.3

 Attachments: HBASE-13865.1.patch, HBASE-13865.2.patch, 
 HBASE-13865.2.patch


 It's 4 in the book and 2 in current master. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14188) Read path optimizations after HBASE-11425 profiling

2015-08-06 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14659921#comment-14659921
 ] 

Lars Hofhansl commented on HBASE-14188:
---

Postmortem +1 :)

 Read path optimizations after HBASE-11425 profiling
 ---

 Key: HBASE-14188
 URL: https://issues.apache.org/jira/browse/HBASE-14188
 Project: HBase
  Issue Type: Sub-task
  Components: Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-14188.patch, HBASE-14188_1.patch, 
 HBASE-14188_2.patch, setSeqId.png


 This subtask deals with some improvements that can be done in the read path 
 (scans) after the changes for HBASE-11425 went in.
 - Avoid CellUtil.setSequenceId in hot path.
 - Use BBUtils in the MultiByteBuff.
 - Use ByteBuff.skip() API in HFileReader rather than 
 MultiByteBuff.position().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14178) regionserver blocks because of waiting for offsetLock

2015-08-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14659940#comment-14659940
 ] 

Hudson commented on HBASE-14178:


FAILURE: Integrated in HBase-1.2 #93 (See 
[https://builds.apache.org/job/HBase-1.2/93/])
HBASE-14178 regionserver blocks because of waiting for offsetLock (zhangduo: 
rev 922c3ba554eeb13c2390cdd1140b26006bb8a7e9)
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java
Revert HBASE-14178 regionserver blocks because of waiting for offsetLock 
(zhangduo: rev a4092444e6eba39e7523c118e80b3fb726485984)
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java
HBASE-14178 regionserver blocks because of waiting for offsetLock (zhangduo: 
rev 7a45596b40e9a6a011a7854d684fc69013b83e73)
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java


 regionserver blocks because of waiting for offsetLock
 -

 Key: HBASE-14178
 URL: https://issues.apache.org/jira/browse/HBASE-14178
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.6
Reporter: Heng Chen
Assignee: Heng Chen
 Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2

 Attachments: HBASE-14178-0.98.patch, HBASE-14178-0.98_v8.patch, 
 HBASE-14178-branch_1_v8.patch, HBASE-14178.patch, HBASE-14178_v1.patch, 
 HBASE-14178_v2.patch, HBASE-14178_v3.patch, HBASE-14178_v4.patch, 
 HBASE-14178_v5.patch, HBASE-14178_v6.patch, HBASE-14178_v7.patch, 
 HBASE-14178_v8.patch, jstack


 My regionserver blocks, and all client rpc timeout. 
 I print the regionserver's jstack,  it seems a lot of threads were blocked 
 for waiting offsetLock, detail infomation belows:
 PS:  my table's block cache is off
 {code}
 B.DefaultRpcServer.handler=2,queue=2,port=60020 #82 daemon prio=5 os_prio=0 
 tid=0x01827000 nid=0x2cdc in Object.wait() [0x7f3831b72000]
java.lang.Thread.State: WAITING (on object monitor)
 at java.lang.Object.wait(Native Method)
 at java.lang.Object.wait(Object.java:502)
 at org.apache.hadoop.hbase.util.IdLock.getLockEntry(IdLock.java:79)
 - locked 0x000773af7c18 (a 
 org.apache.hadoop.hbase.util.IdLock$Entry)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:352)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:253)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:524)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:572)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:257)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:173)
 at 
 org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:313)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:269)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:695)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:683)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:533)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:140)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:3889)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3969)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3847)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3820)
 - locked 0x0005e5c55ad0 (a 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3807)
 at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4779)
 at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4753)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2916)
 at 
 

[jira] [Commented] (HBASE-12988) [Replication]Parallel apply edits across regions

2015-08-06 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14659903#comment-14659903
 ] 

Lars Hofhansl commented on HBASE-12988:
---

I think we can commit this as a stop gap measure. It lays the foundation for a 
multithreaded replication source (or at least supports the idea).

 [Replication]Parallel apply edits across regions
 

 Key: HBASE-12988
 URL: https://issues.apache.org/jira/browse/HBASE-12988
 Project: HBase
  Issue Type: Improvement
  Components: Replication
Reporter: hongyu bi
Assignee: Lars Hofhansl
 Attachments: 12988-v2.txt, 12988-v3.txt, 12988-v4.txt, 12988-v5.txt, 
 12988.txt, HBASE-12988-0.98.patch, ParallelReplication-v2.txt


 We can apply edits to the slave cluster in parallel at the table level to speed up 
 replication.
 Update: per the conversation below, it's better to apply edits at the row level in 
 parallel.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14178) regionserver blocks because of waiting for offsetLock

2015-08-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14659831#comment-14659831
 ] 

Hudson commented on HBASE-14178:


FAILURE: Integrated in HBase-1.0 #1002 (See 
[https://builds.apache.org/job/HBase-1.0/1002/])
HBASE-14178 regionserver blocks because of waiting for offsetLock (zhangduo: 
rev 949092004379c7f2e8f08896c90b59d3a9272fbb)
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java
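
For readers following along, a minimal sketch of the idea (not the committed patch; BlockSource, cacheEnabled and the byte[] block type are illustration-only assumptions): take the per-offset IdLock only when a block cache is actually in play, so that with caching off, readers go straight to the filesystem instead of piling up on the lock as in the jstack below.

{code}
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.hadoop.hbase.util.IdLock;

public class OffsetLockSketch {
  interface BlockSource {
    byte[] readFromFs(long offset) throws IOException;
  }

  private final IdLock offsetLock = new IdLock();
  private final Map<Long, byte[]> blockCache = new ConcurrentHashMap<Long, byte[]>();
  private final boolean cacheEnabled;
  private final BlockSource source;

  OffsetLockSketch(boolean cacheEnabled, BlockSource source) {
    this.cacheEnabled = cacheEnabled;
    this.source = source;
  }

  byte[] readBlock(long offset) throws IOException {
    if (!cacheEnabled) {
      // Nothing to protect: skip the lock entirely and read from the filesystem.
      return source.readFromFs(offset);
    }
    IdLock.Entry lockEntry = offsetLock.getLockEntry(offset);
    try {
      byte[] cached = blockCache.get(offset);
      if (cached != null) {
        return cached;                 // only one thread per offset fills the cache
      }
      byte[] block = source.readFromFs(offset);
      blockCache.put(offset, block);
      return block;
    } finally {
      offsetLock.releaseLockEntry(lockEntry);
    }
  }
}
{code}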


 regionserver blocks because of waiting for offsetLock
 -

 Key: HBASE-14178
 URL: https://issues.apache.org/jira/browse/HBASE-14178
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.6
Reporter: Heng Chen
Assignee: Heng Chen
 Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2

 Attachments: HBASE-14178-0.98.patch, HBASE-14178-0.98_v8.patch, 
 HBASE-14178-branch_1_v8.patch, HBASE-14178.patch, HBASE-14178_v1.patch, 
 HBASE-14178_v2.patch, HBASE-14178_v3.patch, HBASE-14178_v4.patch, 
 HBASE-14178_v5.patch, HBASE-14178_v6.patch, HBASE-14178_v7.patch, 
 HBASE-14178_v8.patch, jstack


 My regionserver blocks, and all client RPCs time out. 
 I printed the regionserver's jstack; it seems a lot of threads were blocked 
 waiting for offsetLock. Detailed information follows:
 PS: my table's block cache is off
 {code}
 "B.DefaultRpcServer.handler=2,queue=2,port=60020" #82 daemon prio=5 os_prio=0 
 tid=0x01827000 nid=0x2cdc in Object.wait() [0x7f3831b72000]
java.lang.Thread.State: WAITING (on object monitor)
 at java.lang.Object.wait(Native Method)
 at java.lang.Object.wait(Object.java:502)
 at org.apache.hadoop.hbase.util.IdLock.getLockEntry(IdLock.java:79)
 - locked 0x000773af7c18 (a 
 org.apache.hadoop.hbase.util.IdLock$Entry)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:352)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:253)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:524)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:572)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:257)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:173)
 at 
 org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:313)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:269)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:695)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:683)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:533)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:140)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:3889)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3969)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3847)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3820)
 - locked 0x0005e5c55ad0 (a 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3807)
 at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4779)
 at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4753)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2916)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29583)
 at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027)
 at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
 at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
 at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
 at java.lang.Thread.run(Thread.java:745)
Locked ownable synchronizers:
 - 0x0005e5c55c08 (a 
 java.util.concurrent.locks.ReentrantLock$NonfairSync)
 {code}



--
This message was sent 

[jira] [Updated] (HBASE-14188) Read path optimizations after HBASE-11425 profiling

2015-08-06 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-14188:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master. Thanks for the review Anoop.

 Read path optimizations after HBASE-11425 profiling
 ---

 Key: HBASE-14188
 URL: https://issues.apache.org/jira/browse/HBASE-14188
 Project: HBase
  Issue Type: Sub-task
  Components: Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-14188.patch, HBASE-14188_1.patch, 
 HBASE-14188_2.patch, setSeqId.png


 This subtask deals with some improvements that can be made in the read path 
 (scans) after the changes for HBASE-11425 went in.
 - Avoid CellUtil.setSequenceId in the hot path.
 - Use BBUtils in the MultiByteBuff.
 - Use the ByteBuff.skip() API in HFileReader rather than 
 MultiByteBuff.position() (see the sketch below).
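
A hedged illustration of the last point (plain Java, not the HBase MultiByteBuff implementation): on a buffer backed by several segments, an absolute position(pos) has to search for the segment containing pos, while a relative skip(len) usually just advances within the current segment.

{code}
import java.nio.ByteBuffer;

public class MultiSegmentBufferSketch {
  private final ByteBuffer[] segments;
  private int curSegment = 0;

  MultiSegmentBufferSketch(ByteBuffer... segments) {
    this.segments = segments;
  }

  /** Absolute positioning: walk the segments to locate the one holding pos. */
  void position(int pos) {
    int offset = 0;
    for (int i = 0; i < segments.length; i++) {
      int limit = segments[i].limit();
      if (pos < offset + limit) {
        curSegment = i;
        segments[i].position(pos - offset);
        return;
      }
      offset += limit;
    }
    throw new IllegalArgumentException("position " + pos + " out of range");
  }

  /** Relative skip: usually stays in the current segment, no search needed. */
  void skip(int len) {
    while (len > 0) {
      ByteBuffer cur = segments[curSegment];
      int remaining = cur.remaining();
      if (len <= remaining) {
        cur.position(cur.position() + len);
        return;
      }
      cur.position(cur.limit());
      len -= remaining;
      curSegment++;
    }
  }
}
{code}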



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14178) regionserver blocks because of waiting for offsetLock

2015-08-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14659872#comment-14659872
 ] 

Hadoop QA commented on HBASE-14178:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12749022/HBASE-14178-branch-1_v8.patch
  against branch-1 branch at commit 5d2708f628d4718f6267e9da6c8cbafeda66f4fb.
  ATTACHMENT ID: 12749022

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14991//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14991//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14991//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14991//console

This message is automatically generated.

 regionserver blocks because of waiting for offsetLock
 -

 Key: HBASE-14178
 URL: https://issues.apache.org/jira/browse/HBASE-14178
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.6
Reporter: Heng Chen
Assignee: Heng Chen
 Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2

 Attachments: HBASE-14178-0.98.patch, HBASE-14178-0.98_v8.patch, 
 HBASE-14178-branch_1_v8.patch, HBASE-14178.patch, HBASE-14178_v1.patch, 
 HBASE-14178_v2.patch, HBASE-14178_v3.patch, HBASE-14178_v4.patch, 
 HBASE-14178_v5.patch, HBASE-14178_v6.patch, HBASE-14178_v7.patch, 
 HBASE-14178_v8.patch, jstack


 My regionserver blocks, and all client RPCs time out. 
 I printed the regionserver's jstack; it seems a lot of threads were blocked 
 waiting for offsetLock. Detailed information follows:
 PS: my table's block cache is off
 {code}
 "B.DefaultRpcServer.handler=2,queue=2,port=60020" #82 daemon prio=5 os_prio=0 
 tid=0x01827000 nid=0x2cdc in Object.wait() [0x7f3831b72000]
java.lang.Thread.State: WAITING (on object monitor)
 at java.lang.Object.wait(Native Method)
 at java.lang.Object.wait(Object.java:502)
 at org.apache.hadoop.hbase.util.IdLock.getLockEntry(IdLock.java:79)
 - locked 0x000773af7c18 (a 
 org.apache.hadoop.hbase.util.IdLock$Entry)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:352)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:253)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:524)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:572)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:257)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:173)
 at 
 org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:313)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:269)
 at 
 

[jira] [Commented] (HBASE-14178) regionserver blocks because of waiting for offsetLock

2015-08-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14659861#comment-14659861
 ] 

Hudson commented on HBASE-14178:


SUCCESS: Integrated in HBase-1.1 #599 (See 
[https://builds.apache.org/job/HBase-1.1/599/])
HBASE-14178 regionserver blocks because of waiting for offsetLock (zhangduo: 
rev ade125a4ce7825735ee99da4999ec290509d92c8)
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java


 regionserver blocks because of waiting for offsetLock
 -

 Key: HBASE-14178
 URL: https://issues.apache.org/jira/browse/HBASE-14178
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.6
Reporter: Heng Chen
Assignee: Heng Chen
 Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2

 Attachments: HBASE-14178-0.98.patch, HBASE-14178-0.98_v8.patch, 
 HBASE-14178-branch_1_v8.patch, HBASE-14178.patch, HBASE-14178_v1.patch, 
 HBASE-14178_v2.patch, HBASE-14178_v3.patch, HBASE-14178_v4.patch, 
 HBASE-14178_v5.patch, HBASE-14178_v6.patch, HBASE-14178_v7.patch, 
 HBASE-14178_v8.patch, jstack


 My regionserver blocks, and all client RPCs time out. 
 I printed the regionserver's jstack; it seems a lot of threads were blocked 
 waiting for offsetLock. Detailed information follows:
 PS: my table's block cache is off
 {code}
 "B.DefaultRpcServer.handler=2,queue=2,port=60020" #82 daemon prio=5 os_prio=0 
 tid=0x01827000 nid=0x2cdc in Object.wait() [0x7f3831b72000]
java.lang.Thread.State: WAITING (on object monitor)
 at java.lang.Object.wait(Native Method)
 at java.lang.Object.wait(Object.java:502)
 at org.apache.hadoop.hbase.util.IdLock.getLockEntry(IdLock.java:79)
 - locked 0x000773af7c18 (a 
 org.apache.hadoop.hbase.util.IdLock$Entry)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:352)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:253)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:524)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:572)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:257)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:173)
 at 
 org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:313)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:269)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:695)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:683)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:533)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:140)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:3889)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3969)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3847)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3820)
 - locked 0x0005e5c55ad0 (a 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3807)
 at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4779)
 at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4753)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2916)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29583)
 at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027)
 at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
 at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
 at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
 at java.lang.Thread.run(Thread.java:745)
Locked ownable synchronizers:
 - 0x0005e5c55c08 (a 
 java.util.concurrent.locks.ReentrantLock$NonfairSync)
 {code}



--
This message was sent by 

[jira] [Commented] (HBASE-13865) Increase the default value for hbase.hregion.memstore.block.multipler from 2 to 4 (part 2)

2015-08-06 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14659909#comment-14659909
 ] 

Nicolas Liochon commented on HBASE-13865:
-

Hey Nick :-)

If I'm not mistaken (I'm always confused by the various config files...), the 
patch should not change the behavior for most common deployments, because the 
value is set to 4 in hbase-default.xml (and for the users who explicitly set it to 
2, the XML config takes precedence, so it won't change for them either).

So:
- The patch is a good cleanup imho
- It's safe as it does not change the behavior.

+1

I updated the release notes.
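
For context, a minimal sketch of how the multiplier takes effect (the property names are the real ones; the surrounding code is illustration only, not the HRegion source): updates to a region are blocked once its memstore exceeds roughly the flush size times the multiplier, so the in-code defaults only matter when neither hbase-default.xml nor hbase-site.xml sets a value.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class BlockingMemstoreSizeSketch {
  public static void main(String[] args) {
    // HBaseConfiguration.create() loads hbase-default.xml and hbase-site.xml,
    // so an explicit value there wins over the fallback passed below.
    Configuration conf = HBaseConfiguration.create();
    long flushSize = conf.getLong("hbase.hregion.memstore.flush.size", 128L * 1024 * 1024);
    long multiplier = conf.getLong("hbase.hregion.memstore.block.multiplier", 4);
    System.out.println("Updates blocked above ~" + (flushSize * multiplier)
        + " bytes per region");
  }
}
{code}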

 Increase the default value for hbase.hregion.memstore.block.multipler from 2 
 to 4 (part 2)
 --

 Key: HBASE-13865
 URL: https://issues.apache.org/jira/browse/HBASE-13865
 Project: HBase
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.0.0
Reporter: Vladimir Rodionov
Assignee: Gabor Liptak
Priority: Trivial
 Fix For: 2.0.0, 0.98.14, 1.3.0, 1.2.1, 1.0.3, 1.1.3

 Attachments: HBASE-13865.1.patch, HBASE-13865.2.patch, 
 HBASE-13865.2.patch


 It's 4 in the book and 2 in current master. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13865) Increase the default value for hbase.hregion.memstore.block.multipler from 2 to 4 (part 2)

2015-08-06 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-13865:

Component/s: (was: documentation)
 regionserver

 Increase the default value for hbase.hregion.memstore.block.multipler from 2 
 to 4 (part 2)
 --

 Key: HBASE-13865
 URL: https://issues.apache.org/jira/browse/HBASE-13865
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 2.0.0
Reporter: Vladimir Rodionov
Assignee: Gabor Liptak
Priority: Trivial
 Fix For: 2.0.0, 0.98.14, 1.3.0, 1.2.1, 1.0.3, 1.1.3

 Attachments: HBASE-13865.1.patch, HBASE-13865.2.patch, 
 HBASE-13865.2.patch


 It's 4 in the book and 2 in current master. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13867) Add endpoint coprocessor guide to HBase book

2015-08-06 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14660327#comment-14660327
 ] 

Nick Dimiduk commented on HBASE-13867:
--

The 100-character style guide is followed less strictly in the book documents, 
I'd say. As you say, some URLs are excessively long and cannot be split. 
Please make a best effort to follow the 100-character line length guideline.

 Add endpoint coprocessor guide to HBase book
 

 Key: HBASE-13867
 URL: https://issues.apache.org/jira/browse/HBASE-13867
 Project: HBase
  Issue Type: Task
  Components: Coprocessors, documentation
Reporter: Vladimir Rodionov
Assignee: Gaurav Bhardwaj
 Fix For: 2.0.0

 Attachments: HBASE-13867.1.patch, HBASE-13867.2.patch, 
 HBASE-13867.2.patch


 Endpoint coprocessors are very poorly documented.
 The coprocessor section of the HBase book must be updated either with its own 
 endpoint coprocessor HOW-TO guide or, at least, with link(s) to some 
 other guides. There is a good description here:
 http://www.3pillarglobal.com/insights/hbase-coprocessors



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14021) Quota table has a wrong description on the UI

2015-08-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14660324#comment-14660324
 ] 

Hudson commented on HBASE-14021:


SUCCESS: Integrated in HBase-1.1 #600 (See 
[https://builds.apache.org/job/HBase-1.1/600/])
HBASE-14021 Quota table has a wrong description on the UI (Ashish Singhi) 
(tedyu: rev 93dbf8baae46e4cb2470956e36e383f8ed0e308a)
* 
hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/master/MasterStatusTmpl.jamon


 Quota table has a wrong description on the UI
 -

 Key: HBASE-14021
 URL: https://issues.apache.org/jira/browse/HBASE-14021
 Project: HBase
  Issue Type: Bug
  Components: UI
Affects Versions: 1.1.0
Reporter: Ashish Singhi
Assignee: Ashish Singhi
Priority: Minor
 Fix For: 2.0.0, 1.1.2, 1.3.0, 1.2.1

 Attachments: HBASE-14021.patch, HBASE-14021.patch, 
 HBASE-14021branch-1.1.patch, error.png, fix.png


 !error.png!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14021) Quota table has a wrong description on the UI

2015-08-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14660348#comment-14660348
 ] 

Hudson commented on HBASE-14021:


SUCCESS: Integrated in HBase-1.0 #1004 (See 
[https://builds.apache.org/job/HBase-1.0/1004/])
HBASE-14021 Revert - there is no quota support in branch-1.0 (tedyu: rev 
821ce6a1bb4242ed0a026893cbdb12f881844999)
* 
hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/master/MasterStatusTmpl.jamon


 Quota table has a wrong description on the UI
 -

 Key: HBASE-14021
 URL: https://issues.apache.org/jira/browse/HBASE-14021
 Project: HBase
  Issue Type: Bug
  Components: UI
Affects Versions: 1.1.0
Reporter: Ashish Singhi
Assignee: Ashish Singhi
Priority: Minor
 Fix For: 2.0.0, 1.1.2, 1.3.0, 1.2.1

 Attachments: HBASE-14021.patch, HBASE-14021.patch, 
 HBASE-14021branch-1.1.patch, error.png, fix.png


 !error.png!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-5878) Use getVisibleLength public api from HdfsDataInputStream from Hadoop-2.

2015-08-06 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14660301#comment-14660301
 ] 

Andrew Purtell commented on HBASE-5878:
---

That's fine by me [~ndimiduk].

What about taking in a modified patch that just rethrows the exception? 
{code}
-      } catch(Exception e) {
-        SequenceFileLogReader.LOG.warn(
-          "Error while trying to get accurate file length. " +
-          "Truncation / data loss may occur if RegionServers die.", e);
{code}
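
As a rough sketch of the direction this issue proposes (not the committed patch; the class and method here are illustration-only): prefer the public HdfsDataInputStream.getVisibleLength() over reflective access to DFSInputStream, and fail loudly rather than warn and continue when no accurate length is available.

{code}
import java.io.IOException;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.hdfs.client.HdfsDataInputStream;

public class VisibleLengthSketch {
  /** Returns the length visible to readers, or throws instead of risking data loss. */
  static long visibleLength(FSDataInputStream in) throws IOException {
    if (in instanceof HdfsDataInputStream) {
      return ((HdfsDataInputStream) in).getVisibleLength();   // public Hadoop-2 API
    }
    throw new IOException("Cannot determine visible length for "
        + in.getClass().getName());
  }
}
{code}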

 Use getVisibleLength public api from HdfsDataInputStream from Hadoop-2.
 ---

 Key: HBASE-5878
 URL: https://issues.apache.org/jira/browse/HBASE-5878
 Project: HBase
  Issue Type: Bug
  Components: wal
Reporter: Uma Maheswara Rao G
Assignee: Ashish Singhi
 Fix For: 2.0.0, 1.3.0

 Attachments: HBASE-5878-v2.patch, HBASE-5878-v3.patch, 
 HBASE-5878-v4.patch, HBASE-5878-v5-0.98.patch, HBASE-5878-v5.patch, 
 HBASE-5878-v5.patch, HBASE-5878.patch


 SequenceFileLogReader: 
 Currently HBase uses the getFileLength API from the DFSInputStream class via 
 reflection. DFSInputStream is not exposed as public, so this may change in 
 the future. HDFS now exposes HdfsDataInputStream as a public API.
 We can make use of it as an else condition, when we are not able to find the 
 getFileLength API on DFSInputStream, so that we will not have any sudden 
 surprise like the one we are facing today.
 Also, it currently just logs one warn message and proceeds if an exception is 
 thrown while getting the length. I think we can re-throw the exception 
 because there is no point in continuing with data loss.
 {code}
 long adjust = 0;
 try {
   Field fIn = FilterInputStream.class.getDeclaredField("in");
   fIn.setAccessible(true);
   Object realIn = fIn.get(this.in);
   // In hadoop 0.22, DFSInputStream is a standalone class.  Before this,
   // it was an inner class of DFSClient.
   if (realIn.getClass().getName().endsWith("DFSInputStream")) {
     Method getFileLength = realIn.getClass().
       getDeclaredMethod("getFileLength", new Class<?>[] {});
     getFileLength.setAccessible(true);
     long realLength = ((Long)getFileLength.
       invoke(realIn, new Object[] {})).longValue();
     assert(realLength >= this.length);
     adjust = realLength - this.length;
   } else {
     LOG.info("Input stream class: " + realIn.getClass().getName() +
       ", not adjusting length");
   }
 } catch(Exception e) {
   SequenceFileLogReader.LOG.warn(
     "Error while trying to get accurate file length. " +
     "Truncation / data loss may occur if RegionServers die.", e);
 }
 return adjust + super.getPos();
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14169) API to refreshSuperUserGroupsConfiguration

2015-08-06 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14660383#comment-14660383
 ] 

Matteo Bertozzi commented on HBASE-14169:
-

+1. You have to remove the ServerName and ClusterStatus imports in 
AccessControlClient to make checkstyle happy, but it's ok for me. 
On failure we end up with some servers on the old conf and some on the new one, 
and the user must re-execute the operation, but that's what proc-v2 will solve. 
So ok; for now I'm just noting it.
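
For readers unfamiliar with the feature, a minimal sketch of what the refresh amounts to on the server side (class and method names here are assumptions, not the patch): reload the configuration and refresh the ProxyUsers cache so new doAs/impersonation rules take effect without a restart.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.security.authorize.ProxyUsers;

public class ImpersonationRefresherSketch {
  /** Re-reads proxy-user (doAs) definitions and replaces the cached ones. */
  public static void refresh() {
    Configuration conf = HBaseConfiguration.create();      // picks up the edited XML
    ProxyUsers.refreshSuperUserGroupsConfiguration(conf);   // Hadoop API that reloads the cache
  }
}
{code}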

 API to refreshSuperUserGroupsConfiguration
 --

 Key: HBASE-14169
 URL: https://issues.apache.org/jira/browse/HBASE-14169
 Project: HBase
  Issue Type: New Feature
Reporter: Francis Liu
Assignee: Francis Liu
 Attachments: HBASE-14169.patch, HBASE-14169_2.patch


 For deployments that use security, user impersonation (AKA doAs()) is needed 
 for some services (e.g. Stargate, thriftserver, Oozie, etc.). Impersonation 
 definitions are defined in an XML config file and are read and cached by the 
 ProxyUsers class. Calling this API will refresh the cached information, 
 eliminating the need to restart the master/regionserver whenever the 
 configuration is changed. 
 The implementation just adds another method to AccessControlService.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13865) Increase the default value for hbase.hregion.memstore.block.multipler from 2 to 4 (part 2)

2015-08-06 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-13865:
-
Fix Version/s: (was: 1.1.3)
   1.1.2

 Increase the default value for hbase.hregion.memstore.block.multipler from 2 
 to 4 (part 2)
 --

 Key: HBASE-13865
 URL: https://issues.apache.org/jira/browse/HBASE-13865
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 2.0.0
Reporter: Vladimir Rodionov
Assignee: Gabor Liptak
Priority: Trivial
 Fix For: 2.0.0, 0.98.14, 1.1.2, 1.3.0, 1.2.1, 1.0.3

 Attachments: HBASE-13865.1.patch, HBASE-13865.2.patch, 
 HBASE-13865.2.patch


 It's 4 in the book and 2 in current master. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14191) HBase grant at specific column family level does not work for Groups

2015-08-06 Thread Tom James (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom James updated HBASE-14191:
--
Attachment: HBase Auth issue for user group.txt

Attaching the steps to reproduce.

 HBase grant at specific column family level does not work for Groups
 

 Key: HBASE-14191
 URL: https://issues.apache.org/jira/browse/HBASE-14191
 Project: HBase
  Issue Type: Bug
 Environment: Version 0.98.6-cdh5.3.3, rUnknown, Wed Apr  8 15:00:15 
 PDT 2015
Reporter: Tom James
 Attachments: HBase Auth issue for user group.txt


 Performing a grant command on a specific column family in a table for a 
 specific group does not produce the needed results. 
 However, when a specific user is mentioned (instead of a group name) in the 
 grant command, it becomes effective.
 Steps to reproduce: 
 1) Using the super-user, grant a table/column family level permission to a group.
 2) Log in as a user (part of the above group) and scan the table. It does 
 not return any results.
 3) Using the super-user, grant a table/column family level permission to a 
 specific user (instead of the group). 
 4) Log in as that specific user and scan the table. It produces correct 
 results.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13044) Configuration option for disabling coprocessor loading

2015-08-06 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14660311#comment-14660311
 ] 

Andrew Purtell commented on HBASE-13044:


bq. How should the above combination be interpreted ?

It doesn't make sense. Coprocessors should be disabled except for table 
coprocessors? And then the complaint is system coprocessors do not load? 

If I recall correctly, the setting for hbase.coprocessor.enabled will override 
the setting for hbase.coprocessor.user.enabled. If you want to globally disable 
coprocessors, use only hbase.coprocessor.enabled=false. If you want to disable 
table coprocessors but allow system coprocessors, use only 
hbase.coprocessor.user.enabled=false.

I'm sure we'd take up a doc patch that describes these settings in detail 
somewhere in the online manual.
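
Until such a doc patch lands, a minimal sketch of the precedence described above (illustration only, not the CoprocessorHost source):

{code}
import org.apache.hadoop.conf.Configuration;

public class CoprocessorLoadPolicySketch {
  static boolean shouldLoad(Configuration conf, boolean isTableCoprocessor) {
    boolean all = conf.getBoolean("hbase.coprocessor.enabled", true);
    boolean user = conf.getBoolean("hbase.coprocessor.user.enabled", true);
    if (!all) {
      return false;                       // global switch off: nothing loads at all
    }
    return !isTableCoprocessor || user;   // table coprocessors also need the user switch
  }
}
{code}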



 Configuration option for disabling coprocessor loading
 --

 Key: HBASE-13044
 URL: https://issues.apache.org/jira/browse/HBASE-13044
 Project: HBase
  Issue Type: Improvement
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11

 Attachments: HBASE-13044.patch, HBASE-13044.patch


 Some users would like complete assurance coprocessors cannot be loaded. Add a 
 configuration option that prevents coprocessors from ever being loaded by 
 ignoring any load directives found in the site file or table metadata. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13865) Increase the default value for hbase.hregion.memstore.block.multipler from 2 to 4 (part 2)

2015-08-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14660662#comment-14660662
 ] 

Hudson commented on HBASE-13865:


FAILURE: Integrated in HBase-1.3 #92 (See 
[https://builds.apache.org/job/HBase-1.3/92/])
HBASE-13865 Increase the default value for 
hbase.hregion.memstore.block.multipler from 2 to 4 (part 2) (ndimiduk: rev 
51061f08a338c2ba6f18e4f702b645fddf194aed)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompaction.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMinorCompaction.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMajorCompaction.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestClientPushback.java


 Increase the default value for hbase.hregion.memstore.block.multipler from 2 
 to 4 (part 2)
 --

 Key: HBASE-13865
 URL: https://issues.apache.org/jira/browse/HBASE-13865
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 2.0.0
Reporter: Vladimir Rodionov
Assignee: Gabor Liptak
Priority: Trivial
 Fix For: 2.0.0, 0.98.14, 1.1.2, 1.3.0, 1.2.1, 1.0.3

 Attachments: HBASE-13865.1.patch, HBASE-13865.2.patch, 
 HBASE-13865.2.patch


 It's 4 in the book and 2 in current master. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14191) HBase grant at specific column family level does not work for Groups

2015-08-06 Thread Tom James (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom James updated HBASE-14191:
--
Attachment: (was: HBase Auth issue for user group.txt)

 HBase grant at specific column family level does not work for Groups
 

 Key: HBASE-14191
 URL: https://issues.apache.org/jira/browse/HBASE-14191
 Project: HBase
  Issue Type: Bug
 Environment: Version 0.98.6-cdh5.3.3, rUnknown, Wed Apr  8 15:00:15 
 PDT 2015
Reporter: Tom James

 Performing a grant command on a specific column family in a table for a 
 specific group does not produce the needed results. 
 However, when a specific user is mentioned (instead of a group name) in the 
 grant command, it becomes effective.
 Steps to reproduce: 
 1) Using the super-user, grant a table/column family level permission to a group.
 2) Log in as a user (part of the above group) and scan the table. It does 
 not return any results.
 3) Using the super-user, grant a table/column family level permission to a 
 specific user (instead of the group). 
 4) Log in as that specific user and scan the table. It produces correct 
 results.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14191) HBase grant at specific column family level does not work for Groups

2015-08-06 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14660490#comment-14660490
 ] 

Ashish Singhi commented on HBASE-14191:
---

[~aczire], can you check this with a 0.98.12+ version?
I think this issue doesn't exist any more because we have a unit test for it in 
TestAccessController2#testPostGrantAndRevokeScanAction and it is passing.
I also added one more action there, per the defect description, as in the code 
below, and it is passing.
{code}
AccessTestAction scanFamilyActionForGroupWithFamilyLevelAccess_14191 = new 
AccessTestAction() {
  @Override
  public Void run() throws Exception {
try (Connection connection = ConnectionFactory.createConnection(conf);
Table table = connection.getTable(tableName);) {
  Scan s1 = new Scan();
  s1.addFamily(TEST_FAMILY);
  try (ResultScanner scanner1 = table.getScanner(s1);) {
  }
}
return null;
  }
};
{code}
{code}
grantOnTable(TEST_UTIL, TESTGROUP_1_NAME, tableName, TEST_FAMILY, null,
  Permission.Action.READ);
verifyAllowed(TESTGROUP1_USER1, 
scanFamilyActionForGroupWithFamilyLevelAccess_14191);
{code}

 HBase grant at specific column family level does not work for Groups
 

 Key: HBASE-14191
 URL: https://issues.apache.org/jira/browse/HBASE-14191
 Project: HBase
  Issue Type: Bug
 Environment: Version 0.98.6-cdh5.3.3, rUnknown, Wed Apr  8 15:00:15 
 PDT 2015
Reporter: Tom James

 Performing a grant command on a specific column family in a table for a 
 specific group does not produce the needed results. 
 However, when a specific user is mentioned (instead of a group name) in the 
 grant command, it becomes effective.
 Steps to reproduce: 
 1) Using the super-user, grant a table/column family level permission to a group.
 2) Log in as a user (part of the above group) and scan the table. It does 
 not return any results.
 3) Using the super-user, grant a table/column family level permission to a 
 specific user (instead of the group). 
 4) Log in as that specific user and scan the table. It produces correct 
 results.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-5878) Use getVisibleLength public api from HdfsDataInputStream from Hadoop-2.

2015-08-06 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14660582#comment-14660582
 ] 

Andrew Purtell commented on HBASE-5878:
---

bq. I can wrap the exception in a new IOE and throw that, is this ok for all 
the fix versions
That could work. What do you think [~ndimiduk] ? Continuing after likely data 
loss seems like a bug. 

I missed this before:
bq. Patch applies mostly cleanly to 0.98, attaching here. You want this back 
ported Andrew Purtell?

Sure, we can take this change there
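
A minimal sketch of the wrap-in-an-IOE variant quoted above (the realLength() helper is a stand-in for whatever length lookup fails, not actual code from the patch):

{code}
import java.io.IOException;

public class RethrowSketch {
  static long lengthAdjustment(long knownLength) throws IOException {
    try {
      return realLength() - knownLength;
    } catch (Exception e) {
      // Fail loudly instead of warning and continuing with possible data loss.
      throw new IOException("Error while trying to get accurate file length. "
          + "Truncation / data loss may occur if RegionServers die.", e);
    }
  }

  private static long realLength() throws Exception {
    throw new UnsupportedOperationException("stand-in for the reflective/HDFS length lookup");
  }
}
{code}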

 Use getVisibleLength public api from HdfsDataInputStream from Hadoop-2.
 ---

 Key: HBASE-5878
 URL: https://issues.apache.org/jira/browse/HBASE-5878
 Project: HBase
  Issue Type: Bug
  Components: wal
Reporter: Uma Maheswara Rao G
Assignee: Ashish Singhi
 Fix For: 2.0.0, 1.3.0

 Attachments: HBASE-5878-v2.patch, HBASE-5878-v3.patch, 
 HBASE-5878-v4.patch, HBASE-5878-v5-0.98.patch, HBASE-5878-v5.patch, 
 HBASE-5878-v5.patch, HBASE-5878.patch


 SequenceFileLogReader: 
 Currently HBase uses the getFileLength API from the DFSInputStream class via 
 reflection. DFSInputStream is not exposed as public, so this may change in 
 the future. HDFS now exposes HdfsDataInputStream as a public API.
 We can make use of it as an else condition, when we are not able to find the 
 getFileLength API on DFSInputStream, so that we will not have any sudden 
 surprise like the one we are facing today.
 Also, it currently just logs one warn message and proceeds if an exception is 
 thrown while getting the length. I think we can re-throw the exception 
 because there is no point in continuing with data loss.
 {code}
 long adjust = 0;
 try {
   Field fIn = FilterInputStream.class.getDeclaredField("in");
   fIn.setAccessible(true);
   Object realIn = fIn.get(this.in);
   // In hadoop 0.22, DFSInputStream is a standalone class.  Before this,
   // it was an inner class of DFSClient.
   if (realIn.getClass().getName().endsWith("DFSInputStream")) {
     Method getFileLength = realIn.getClass().
       getDeclaredMethod("getFileLength", new Class<?>[] {});
     getFileLength.setAccessible(true);
     long realLength = ((Long)getFileLength.
       invoke(realIn, new Object[] {})).longValue();
     assert(realLength >= this.length);
     adjust = realLength - this.length;
   } else {
     LOG.info("Input stream class: " + realIn.getClass().getName() +
       ", not adjusting length");
   }
 } catch(Exception e) {
   SequenceFileLogReader.LOG.warn(
     "Error while trying to get accurate file length. " +
     "Truncation / data loss may occur if RegionServers die.", e);
 }
 return adjust + super.getPos();
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-5878) Use getVisibleLength public api from HdfsDataInputStream from Hadoop-2.

2015-08-06 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14660301#comment-14660301
 ] 

Andrew Purtell edited comment on HBASE-5878 at 8/6/15 4:46 PM:
---

Dropping this < 1.3 is fine by me [~ndimiduk].

What about taking in a modified patch that just rethrows the exception? 
{code}
-      } catch(Exception e) {
-        SequenceFileLogReader.LOG.warn(
-          "Error while trying to get accurate file length. " +
-          "Truncation / data loss may occur if RegionServers die.", e);
{code}


was (Author: apurtell):
That's fine by me [~ndimiduk].

What about taking in a modified patch that just rethrows the exception? 
{code}
-      } catch(Exception e) {
-        SequenceFileLogReader.LOG.warn(
-          "Error while trying to get accurate file length. " +
-          "Truncation / data loss may occur if RegionServers die.", e);
{code}

 Use getVisibleLength public api from HdfsDataInputStream from Hadoop-2.
 ---

 Key: HBASE-5878
 URL: https://issues.apache.org/jira/browse/HBASE-5878
 Project: HBase
  Issue Type: Bug
  Components: wal
Reporter: Uma Maheswara Rao G
Assignee: Ashish Singhi
 Fix For: 2.0.0, 1.3.0

 Attachments: HBASE-5878-v2.patch, HBASE-5878-v3.patch, 
 HBASE-5878-v4.patch, HBASE-5878-v5-0.98.patch, HBASE-5878-v5.patch, 
 HBASE-5878-v5.patch, HBASE-5878.patch


 SequenceFileLogReader: 
 Currently HBase uses the getFileLength API from the DFSInputStream class via 
 reflection. DFSInputStream is not exposed as public, so this may change in 
 the future. HDFS now exposes HdfsDataInputStream as a public API.
 We can make use of it as an else condition, when we are not able to find the 
 getFileLength API on DFSInputStream, so that we will not have any sudden 
 surprise like the one we are facing today.
 Also, it currently just logs one warn message and proceeds if an exception is 
 thrown while getting the length. I think we can re-throw the exception 
 because there is no point in continuing with data loss.
 {code}
 long adjust = 0;
 try {
   Field fIn = FilterInputStream.class.getDeclaredField("in");
   fIn.setAccessible(true);
   Object realIn = fIn.get(this.in);
   // In hadoop 0.22, DFSInputStream is a standalone class.  Before this,
   // it was an inner class of DFSClient.
   if (realIn.getClass().getName().endsWith("DFSInputStream")) {
     Method getFileLength = realIn.getClass().
       getDeclaredMethod("getFileLength", new Class<?>[] {});
     getFileLength.setAccessible(true);
     long realLength = ((Long)getFileLength.
       invoke(realIn, new Object[] {})).longValue();
     assert(realLength >= this.length);
     adjust = realLength - this.length;
   } else {
     LOG.info("Input stream class: " + realIn.getClass().getName() +
       ", not adjusting length");
   }
 } catch(Exception e) {
   SequenceFileLogReader.LOG.warn(
     "Error while trying to get accurate file length. " +
     "Truncation / data loss may occur if RegionServers die.", e);
 }
 return adjust + super.getPos();
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13865) Increase the default value for hbase.hregion.memstore.block.multipler from 2 to 4 (part 2)

2015-08-06 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-13865:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed to branches 0.98+. Thanks for the patch [~gliptak].

FYI [~apurtell], [~enis], [~busbey].

 Increase the default value for hbase.hregion.memstore.block.multipler from 2 
 to 4 (part 2)
 --

 Key: HBASE-13865
 URL: https://issues.apache.org/jira/browse/HBASE-13865
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 2.0.0
Reporter: Vladimir Rodionov
Assignee: Gabor Liptak
Priority: Trivial
 Fix For: 2.0.0, 0.98.14, 1.1.2, 1.3.0, 1.2.1, 1.0.3

 Attachments: HBASE-13865.1.patch, HBASE-13865.2.patch, 
 HBASE-13865.2.patch


 It's 4 in the book and 2 in current master. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

