[jira] [Updated] (HBASE-21470) [hbase-connectors] Build shaded versions of the connectors libs

2018-11-12 Thread Adrian Muraru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-21470:
--
Description: 
For downstream users it would be helpful to generate shaded versions of the 
connectors libs, e.g. hbase-shaded-spark and hbase-shaded-kafka.
These would ease integrating these libs in Spark/Hadoop projects where 
transitive dependencies of the connectors libs conflict with the runtime ones.

  was:Add automated testing for pull requests and patch files created for the 
hbase-connectors repository. 


> [hbase-connectors] Build shaded versions of the connectors libs
> ---
>
> Key: HBASE-21470
> URL: https://issues.apache.org/jira/browse/HBASE-21470
> Project: HBase
>  Issue Type: Task
>  Components: build, hbase-connectors
>Affects Versions: connector-1.0.0
>Reporter: Adrian Muraru
>Priority: Major
>
> For downstream users it would be helpful to generate shaded versions of the 
> connectors libs, e.g. hbase-shaded-spark and hbase-shaded-kafka.
> These would ease integrating these libs in Spark/Hadoop projects where 
> transitive dependencies of the connectors libs conflict with the runtime ones.
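
A minimal sketch of what such shading could look like, assuming the Maven Shade 
Plugin and an illustrative relocation (the module layout and relocated package 
are assumptions for illustration, not decisions from this ticket):
{code:xml}
<!-- Hypothetical pom.xml fragment for an hbase-shaded-spark style module -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <relocations>
          <!-- Relocate a commonly conflicting transitive dependency -->
          <relocation>
            <pattern>com.google.protobuf</pattern>
            <shadedPattern>org.apache.hbase.thirdparty.com.google.protobuf</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
{code}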



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-21470) [hbase-connectors] Build shaded versions of the connectors libs

2018-11-12 Thread Adrian Muraru (JIRA)
Adrian Muraru created HBASE-21470:
-

 Summary: [hbase-connectors] Build shaded versions of the 
connectors libs
 Key: HBASE-21470
 URL: https://issues.apache.org/jira/browse/HBASE-21470
 Project: HBase
  Issue Type: Task
  Components: build, hbase-connectors
Affects Versions: connector-1.0.0
Reporter: Adrian Muraru


Add automated testing for pull requests and patch files created for the 
hbase-connectors repository. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20140) HRegion FileSystem should be instantiated from hbase rootDir not default

2018-10-11 Thread Adrian Muraru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-20140:
--
Attachment: (was: HBASE-20140.v01.patch)

> HRegion FileSystem should be instantiated from hbase rootDir not default
> 
>
> Key: HBASE-20140
> URL: https://issues.apache.org/jira/browse/HBASE-20140
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 3.0.0, 2.0.2, 1.4.7
>Reporter: Adrian Muraru
>Assignee: Adrian Muraru
>Priority: Minor
> Attachments: HBASE-20140.branch-1.v01.patch, 
> HBASE-20140.branch-2.v01.patch, HBASE-20140.v01.patch
>
>
> HRegion fs initialization is done based on the HDFS default {{fs.defaultFs}} 
> instead of {{hbase.rootdir}}.
> This breaks in cases where {{fs.defaultFs}} is set to a different 
> filesystem than the one set in the fully qualified {{hbase.rootdir}}.
>  
> The use case is the following: running a YARN cluster with {{fs.defaultFs}} set 
> to the local HDFS and running an MR job over HBase snapshots with 
> {{hbase.rootDir}} set to an external S3 filesystem (fully qualified). In this 
> case, when the regions are cold-loaded in map tasks, the defaultFs is used 
> instead of the actual hbase.rootDir.
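
A minimal sketch of the distinction, using the standard Hadoop FileSystem API 
(illustrative only - the {{RootDirFs}} class and the fallback path are 
assumptions, not the patch itself):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RootDirFs {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // Problematic: resolves against fs.defaultFS (e.g. the local HDFS).
    FileSystem defaultFs = FileSystem.get(conf);

    // Intended: resolve against the scheme of the fully qualified
    // hbase.rootdir (in the ticket's scenario, an s3 URI), regardless
    // of fs.defaultFS. Fallback value here is just for illustration.
    Path rootDir = new Path(conf.get("hbase.rootdir", "file:///tmp/hbase"));
    FileSystem rootFs = rootDir.getFileSystem(conf);

    System.out.println(defaultFs.getUri() + " vs " + rootFs.getUri());
  }
}
{code}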



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20140) HRegion FileSystem should be instantiated from hbase rootDir not default

2018-10-11 Thread Adrian Muraru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-20140:
--
Attachment: HBASE-20140.v01.patch

> HRegion FileSystem should be instantiated from hbase rootDir not default
> 
>
> Key: HBASE-20140
> URL: https://issues.apache.org/jira/browse/HBASE-20140
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 3.0.0, 2.0.2, 1.4.7
>Reporter: Adrian Muraru
>Assignee: Adrian Muraru
>Priority: Minor
> Attachments: HBASE-20140.branch-1.v01.patch, 
> HBASE-20140.branch-2.v01.patch, HBASE-20140.v01.patch
>
>
> HRegion fs initialization is done based on the HDFS default {{fs.defaultFs}} 
> instead of {{hbase.rootdir}}.
> This breaks in cases where {{fs.defaultFs}} is set to a different 
> filesystem than the one set in the fully qualified {{hbase.rootdir}}.
>  
> The use case is the following: running a YARN cluster with {{fs.defaultFs}} set 
> to the local HDFS and running an MR job over HBase snapshots with 
> {{hbase.rootDir}} set to an external S3 filesystem (fully qualified). In this 
> case, when the regions are cold-loaded in map tasks, the defaultFs is used 
> instead of the actual hbase.rootDir.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20140) HRegion FileSystem should be instantiated from hbase rootDir not default

2018-10-11 Thread Adrian Muraru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-20140:
--
Status: Patch Available  (was: Open)

> HRegion FileSystem should be instantiated from hbase rootDir not default
> 
>
> Key: HBASE-20140
> URL: https://issues.apache.org/jira/browse/HBASE-20140
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 1.4.7, 2.0.2, 3.0.0
>Reporter: Adrian Muraru
>Assignee: Adrian Muraru
>Priority: Minor
> Attachments: HBASE-20140.branch-1.v01.patch, 
> HBASE-20140.branch-2.v01.patch, HBASE-20140.v01.patch
>
>
> HRegion fs initialization is done based on the HDFS default {{fs.defaultFs}} 
> instead of {{hbase.rootdir}}.
> This breaks in cases where {{fs.defaultFs}} is set to a different 
> filesystem than the one set in the fully qualified {{hbase.rootdir}}.
>  
> The use case is the following: running a YARN cluster with {{fs.defaultFs}} set 
> to the local HDFS and running an MR job over HBase snapshots with 
> {{hbase.rootDir}} set to an external S3 filesystem (fully qualified). In this 
> case, when the regions are cold-loaded in map tasks, the defaultFs is used 
> instead of the actual hbase.rootDir.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20140) HRegion FileSystem should be instantiated from hbase rootDir not default

2018-10-11 Thread Adrian Muraru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-20140:
--
Status: Open  (was: Patch Available)

> HRegion FileSystem should be instantiated from hbase rootDir not default
> 
>
> Key: HBASE-20140
> URL: https://issues.apache.org/jira/browse/HBASE-20140
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 1.4.7, 2.0.2, 3.0.0
>Reporter: Adrian Muraru
>Assignee: Adrian Muraru
>Priority: Minor
> Attachments: HBASE-20140.branch-1.v01.patch, 
> HBASE-20140.branch-2.v01.patch, HBASE-20140.v01.patch
>
>
> HRegion fs initialization is done based on the HDFS default {{fs.defaultFs}} 
> instead of {{hbase.rootdir}}.
> This breaks in cases where {{fs.defaultFs}} is set to a different 
> filesystem than the one set in the fully qualified {{hbase.rootdir}}.
>  
> The use case is the following: running a YARN cluster with {{fs.defaultFs}} set 
> to the local HDFS and running an MR job over HBase snapshots with 
> {{hbase.rootDir}} set to an external S3 filesystem (fully qualified). In this 
> case, when the regions are cold-loaded in map tasks, the defaultFs is used 
> instead of the actual hbase.rootDir.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20140) HRegion FileSystem should be instantiated from hbase rootDir not default

2018-10-11 Thread Adrian Muraru (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646383#comment-16646383
 ] 

Adrian Muraru commented on HBASE-20140:
---

[~yuzhih...@gmail.com] can we revive this, please? We are seeing this as a blocker 
for our batch jobs and would like to have it in mainline.

> HRegion FileSystem should be instantiated from hbase rootDir not default
> 
>
> Key: HBASE-20140
> URL: https://issues.apache.org/jira/browse/HBASE-20140
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 3.0.0, 2.0.2, 1.4.7
>Reporter: Adrian Muraru
>Assignee: Adrian Muraru
>Priority: Minor
> Attachments: HBASE-20140.branch-1.v01.patch, 
> HBASE-20140.branch-2.v01.patch, HBASE-20140.v01.patch
>
>
> HRegion fs initialization is done based on the HDFS default {{fs.defaultFs}} 
> instead of {{hbase.rootdir}}.
> This breaks in cases where {{fs.defaultFs}} is set to a different 
> filesystem than the one set in the fully qualified {{hbase.rootdir}}.
>  
> The use case is the following: running a YARN cluster with {{fs.defaultFs}} set 
> to the local HDFS and running an MR job over HBase snapshots with 
> {{hbase.rootDir}} set to an external S3 filesystem (fully qualified). In this 
> case, when the regions are cold-loaded in map tasks, the defaultFs is used 
> instead of the actual hbase.rootDir.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20140) HRegion FileSystem should be instantiated from hbase rootDir not default

2018-10-11 Thread Adrian Muraru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-20140:
--
Affects Version/s: 2.0.2
   1.4.7

> HRegion FileSystem should be instantiated from hbase rootDir not default
> 
>
> Key: HBASE-20140
> URL: https://issues.apache.org/jira/browse/HBASE-20140
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 3.0.0, 2.0.2, 1.4.7
>Reporter: Adrian Muraru
>Assignee: Adrian Muraru
>Priority: Minor
> Attachments: HBASE-20140.branch-1.v01.patch, 
> HBASE-20140.branch-2.v01.patch, HBASE-20140.v01.patch
>
>
> HRegion fs initialization is done based on the HDFS default {{fs.defaultFs}} 
> instead of {{hbase.rootdir}}.
> This breaks in cases where {{fs.defaultFs}} is set to a different 
> filesystem than the one set in the fully qualified {{hbase.rootdir}}.
>  
> The use case is the following: running a YARN cluster with {{fs.defaultFs}} set 
> to the local HDFS and running an MR job over HBase snapshots with 
> {{hbase.rootDir}} set to an external S3 filesystem (fully qualified). In this 
> case, when the regions are cold-loaded in map tasks, the defaultFs is used 
> instead of the actual hbase.rootDir.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20140) HRegion FileSystem should be instantiated from hbase rootDir not default

2018-03-07 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-20140:
--
Attachment: HBASE-20140.branch-2.v01.patch

> HRegion FileSystem should be instantiated from hbase rootDir not default
> 
>
> Key: HBASE-20140
> URL: https://issues.apache.org/jira/browse/HBASE-20140
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 3.0.0
>Reporter: Adrian Muraru
>Assignee: Adrian Muraru
>Priority: Minor
> Attachments: HBASE-20140.branch-1.v01.patch, 
> HBASE-20140.branch-2.v01.patch, HBASE-20140.v01.patch
>
>
> HRegion fs initialization is done based on the HDFS default {{fs.defaultFs}} 
> instead of {{hbase.rootdir}}.
> This breaks in cases where {{fs.defaultFs}} is set to a different 
> filesystem than the one set in the fully qualified {{hbase.rootdir}}.
>  
> The use case is the following: running a YARN cluster with {{fs.defaultFs}} set 
> to the local HDFS and running an MR job over HBase snapshots with 
> {{hbase.rootDir}} set to an external S3 filesystem (fully qualified). In this 
> case, when the regions are cold-loaded in map tasks, the defaultFs is used 
> instead of the actual hbase.rootDir.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20140) HRegion FileSystem should be instantiated from hbase rootDir not default

2018-03-07 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-20140:
--
Attachment: HBASE-20140.branch-1.v01.patch

> HRegion FileSystem should be instantiated from hbase rootDir not default
> 
>
> Key: HBASE-20140
> URL: https://issues.apache.org/jira/browse/HBASE-20140
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 3.0.0
>Reporter: Adrian Muraru
>Assignee: Adrian Muraru
>Priority: Minor
> Attachments: HBASE-20140.branch-1.v01.patch, HBASE-20140.v01.patch
>
>
> HRegion fs initialization is done based on the HDFS default {{fs.defaultFs}} 
> instead of {{hbase.rootdir}}.
> This breaks in cases where {{fs.defaultFs}} is set to a different 
> filesystem than the one set in the fully qualified {{hbase.rootdir}}.
>  
> The use case is the following: running a YARN cluster with {{fs.defaultFs}} set 
> to the local HDFS and running an MR job over HBase snapshots with 
> {{hbase.rootDir}} set to an external S3 filesystem (fully qualified). In this 
> case, when the regions are cold-loaded in map tasks, the defaultFs is used 
> instead of the actual hbase.rootDir.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20140) HRegion FileSystem should be instantiated from hbase rootDir not default

2018-03-07 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-20140:
--
Description: 
HRegion fs initialization is done based on the HDFS default {{fs.defaultFs}} 
instead of {{hbase.rootdir}}.

This breaks in cases where {{fs.defaultFs}} is set to a different 
filesystem than the one set in the fully qualified {{hbase.rootdir}}.

The use case is the following: running a YARN cluster with {{fs.defaultFs}} set 
to the local HDFS and running an MR job over HBase snapshots with 
{{hbase.rootDir}} set to an external S3 filesystem (fully qualified). In this 
case, when the regions are cold-loaded in map tasks, the defaultFs is used 
instead of the actual hbase.rootDir.

  was:
HRegion fs initialization is done based on the HDFS default {{fs.defaultFs}} 
instead of {{hbase.rootdir}}.

This breaks in cases where {{fs.defaultFs}} is set to a different 
filesystem than the one set in the fully qualified {{hbase.rootdir}}.


> HRegion FileSystem should be instantiated from hbase rootDir not default
> 
>
> Key: HBASE-20140
> URL: https://issues.apache.org/jira/browse/HBASE-20140
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 3.0.0
>Reporter: Adrian Muraru
>Assignee: Adrian Muraru
>Priority: Minor
> Attachments: HBASE-20140.v01.patch
>
>
> HRegion fs initialization is done based on the HDFS default {{fs.defaultFs}} 
> instead of {{hbase.rootdir}}.
> This breaks in cases where {{fs.defaultFs}} is set to a different 
> filesystem than the one set in the fully qualified {{hbase.rootdir}}.
>  
> The use case is the following: running a YARN cluster with {{fs.defaultFs}} set 
> to the local HDFS and running an MR job over HBase snapshots with 
> {{hbase.rootDir}} set to an external S3 filesystem (fully qualified). In this 
> case, when the regions are cold-loaded in map tasks, the defaultFs is used 
> instead of the actual hbase.rootDir.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20140) HRegion FileSystem should be instantiated from hbase rootDir not default

2018-03-07 Thread Adrian Muraru (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16389246#comment-16389246
 ] 

Adrian Muraru commented on HBASE-20140:
---

[~yuzhih...@gmail.com]
{quote}Did you find the issue using the above config ?
{quote}
The use case is the following: running a YARN cluster with {{fs.defaultFs}} set 
to the local HDFS and running an MR job over HBase snapshots with 
{{hbase.rootDir}} set to an external S3 filesystem (fully qualified). In this 
case, when the regions are cold-loaded in map tasks, the defaultFs is used 
instead of the actual hbase.rootDir.

A bit of an edge case, but still important.
{quote}If so, is there any other place where similar change should be made ?
{quote}
There are probably other places, yes, but to be honest I limited the patch to 
HFile code to avoid extra side effects.

> HRegion FileSystem should be instantiated from hbase rootDir not default
> 
>
> Key: HBASE-20140
> URL: https://issues.apache.org/jira/browse/HBASE-20140
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 3.0.0
>Reporter: Adrian Muraru
>Assignee: Adrian Muraru
>Priority: Minor
> Attachments: HBASE-20140.v01.patch
>
>
> HRegion fs initialization is done based on the HDFS default {{fs.defaultFs}} 
> instead of {{hbase.rootdir}}.
> This breaks in cases where {{fs.defaultFs}} is set to a different 
> filesystem than the one set in the fully qualified {{hbase.rootdir}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20140) HRegion FileSystem should be instantiated from hbase rootDir not default

2018-03-06 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-20140:
--
 Assignee: Adrian Muraru
Affects Version/s: 3.0.0
   Status: Patch Available  (was: Open)

> HRegion FileSystem should be instantiated from hbase rootDir not default
> 
>
> Key: HBASE-20140
> URL: https://issues.apache.org/jira/browse/HBASE-20140
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 3.0.0
>Reporter: Adrian Muraru
>Assignee: Adrian Muraru
>Priority: Minor
> Attachments: HBASE-20140.v01.patch
>
>
> HRegion fs initialization is done based on the HDFS default {{fs.defaultFs}} 
> instead of {{hbase.rootdir}}.
> This breaks in cases where {{fs.defaultFs}} is set to a different 
> filesystem than the one set in the fully qualified {{hbase.rootdir}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20140) HRegion FileSystem should be instantiated from hbase rootDir not default

2018-03-06 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-20140:
--
Attachment: HBASE-20140.v01.patch

> HRegion FileSystem should be instantiated from hbase rootDir not default
> 
>
> Key: HBASE-20140
> URL: https://issues.apache.org/jira/browse/HBASE-20140
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 3.0.0
>Reporter: Adrian Muraru
>Priority: Minor
> Attachments: HBASE-20140.v01.patch
>
>
> HRegion fs initialization is done based on the HDFS default {{fs.defaultFs}} 
> instead of {{hbase.rootdir}}.
> This breaks in cases where {{fs.defaultFs}} is set to a different 
> filesystem than the one set in the fully qualified {{hbase.rootdir}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20140) HRegion FileSystem should be instantiated from hbase rootDir not default

2018-03-06 Thread Adrian Muraru (JIRA)
Adrian Muraru created HBASE-20140:
-

 Summary: HRegion FileSystem should be instantiated from hbase 
rootDir not default
 Key: HBASE-20140
 URL: https://issues.apache.org/jira/browse/HBASE-20140
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Reporter: Adrian Muraru


HRegion fs initialization is done based on the HDFS default {{fs.defaultFs}} 
instead of {{hbase.rootdir}}.

This breaks in cases where {{fs.defaultFs}} is set to a different 
filesystem than the one set in the fully qualified {{hbase.rootdir}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-14770) RowCounter argument input parse error

2016-01-29 Thread Adrian Muraru (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124359#comment-15124359
 ] 

Adrian Muraru commented on HBASE-14770:
---

This doesn't seem to be related to this patch

> RowCounter argument input parse error
> -
>
> Key: HBASE-14770
> URL: https://issues.apache.org/jira/browse/HBASE-14770
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Affects Versions: 2.0.0, 1.3.0, 1.2.1, 1.0.3
>Reporter: Frank Chang
>Assignee: Adrian Muraru
>Priority: Minor
> Attachments: HBASE-14770-master-2.patch, HBASE-14770-master.patch
>
>
> I tried to use the 
> https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/RowCounter.java
>  code, packaged a new jar, and then executed the following shell script:
> {code:none}
> hadoop jar test.jar  --range=row001,row002 cf:c2
> {code}
> Then I got a "NoSuchColumnFamilyException".
> It seems to be an input argument parsing problem.
> I tried to add 
> {code:java}
> continue; 
> {code}
> after #L123 to avoid the "--range=*" string being appended to the qualifier.
> That seems to solve the problem.
> --
> data in table:
> ||row||cf:c1||cf:c2||cf:c3||cf:c4||
> |row001|v1|v2| | |
> |row002| |v2|v3| |
> |row003| | |v3|v4|
> |row004|v1| | |v4|
> Exception Message:
> {code:java}
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: 
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column 
> family --range=row001,row002 does not exist in region 
> frank_rowcounttest1,,1446191360354.6c52c71a82f0fa041c467002a2bf433c. in table 
> 'frank_rowcounttest1', {NAME => 'cf', DATA_BLOCK_ENCODING => 'NONE', 
> BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', 
> VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 
> 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
> {code}
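
A minimal runnable sketch of the parsing flaw (the {{RangeArgSketch}} class and 
its handling are hypothetical stand-ins for the real RowCounter logic):
{code:java}
import java.util.ArrayList;
import java.util.List;

public class RangeArgSketch {
  public static void main(String[] args) {
    List<String> qualifiers = new ArrayList<>();
    String range = null;
    for (String arg : args) {
      if (arg.startsWith("--range=")) {
        range = arg.substring("--range=".length());
        continue; // the proposed fix: skip the qualifier handling below
      }
      // Without the continue above, "--range=row001,row002" would also
      // land here and be treated as a column qualifier, producing the
      // NoSuchColumnFamilyException shown above.
      qualifiers.add(arg); // e.g. "cf:c2"
    }
    System.out.println("range=" + range + " qualifiers=" + qualifiers);
  }
}
{code}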



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14770) RowCounter argument input parse error

2016-01-24 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-14770:
--
Affects Version/s: (was: 1.0.0)
   1.2.1
   1.3.0
   2.0.0
   1.0.3
   Status: Patch Available  (was: Open)

> RowCounter argument input parse error
> -
>
> Key: HBASE-14770
> URL: https://issues.apache.org/jira/browse/HBASE-14770
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Affects Versions: 1.0.3, 2.0.0, 1.3.0, 1.2.1
>Reporter: Frank Chang
>Assignee: Adrian Muraru
>Priority: Minor
> Attachments: HBASE-14770-master-2.patch, HBASE-14770-master.patch
>
>
> I tried to use the 
> https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/RowCounter.java
>  code, packaged a new jar, and then executed the following shell script:
> {code:none}
> hadoop jar test.jar  --range=row001,row002 cf:c2
> {code}
> Then I got a "NoSuchColumnFamilyException".
> It seems to be an input argument parsing problem.
> I tried to add 
> {code:java}
> continue; 
> {code}
> after #L123 to avoid the "--range=*" string being appended to the qualifier.
> That seems to solve the problem.
> --
> data in table:
> ||row||cf:c1||cf:c2||cf:c3||cf:c4||
> |row001|v1|v2| | |
> |row002| |v2|v3| |
> |row003| | |v3|v4|
> |row004|v1| | |v4|
> Exception Message:
> {code:java}
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: 
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column 
> family --range=row001,row002 does not exist in region 
> frank_rowcounttest1,,1446191360354.6c52c71a82f0fa041c467002a2bf433c. in table 
> 'frank_rowcounttest1', {NAME => 'cf', DATA_BLOCK_ENCODING => 'NONE', 
> BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', 
> VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 
> 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14770) RowCounter argument input parse error

2016-01-24 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-14770:
--
Attachment: HBASE-14770-master-2.patch

Added tests

> RowCounter argument input parse error
> -
>
> Key: HBASE-14770
> URL: https://issues.apache.org/jira/browse/HBASE-14770
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Affects Versions: 1.0.0
>Reporter: Frank Chang
>Assignee: Adrian Muraru
>Priority: Minor
> Attachments: HBASE-14770-master-2.patch, HBASE-14770-master.patch
>
>
> I tried to use the 
> https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/RowCounter.java
>  code, packaged a new jar, and then executed the following shell script:
> {code:none}
> hadoop jar test.jar  --range=row001,row002 cf:c2
> {code}
> Then I got a "NoSuchColumnFamilyException".
> It seems to be an input argument parsing problem.
> I tried to add 
> {code:java}
> continue; 
> {code}
> after #L123 to avoid the "--range=*" string being appended to the qualifier.
> That seems to solve the problem.
> --
> data in table:
> ||row||cf:c1||cf:c2||cf:c3||cf:c4||
> |row001|v1|v2| | |
> |row002| |v2|v3| |
> |row003| | |v3|v4|
> |row004|v1| | |v4|
> Exception Message:
> {code:java}
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: 
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column 
> family --range=row001,row002 does not exist in region 
> frank_rowcounttest1,,1446191360354.6c52c71a82f0fa041c467002a2bf433c. in table 
> 'frank_rowcounttest1', {NAME => 'cf', DATA_BLOCK_ENCODING => 'NONE', 
> BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', 
> VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 
> 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14770) RowCounter argument input parse error

2016-01-24 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-14770:
--
Status: Open  (was: Patch Available)

> RowCounter argument input parse error
> -
>
> Key: HBASE-14770
> URL: https://issues.apache.org/jira/browse/HBASE-14770
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Affects Versions: 1.0.0
>Reporter: Frank Chang
>Assignee: Adrian Muraru
>Priority: Minor
> Attachments: HBASE-14770-master.patch
>
>
> I tried to use the 
> https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/RowCounter.java
>  code, packaged a new jar, and then executed the following shell script:
> {code:none}
> hadoop jar test.jar  --range=row001,row002 cf:c2
> {code}
> Then I got a "NoSuchColumnFamilyException".
> It seems to be an input argument parsing problem.
> I tried to add 
> {code:java}
> continue; 
> {code}
> after #L123 to avoid the "--range=*" string being appended to the qualifier.
> That seems to solve the problem.
> --
> data in table:
> ||row||cf:c1||cf:c2||cf:c3||cf:c4||
> |row001|v1|v2| | |
> |row002| |v2|v3| |
> |row003| | |v3|v4|
> |row004|v1| | |v4|
> Exception Message:
> {code:java}
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: 
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column 
> family --range=row001,row002 does not exist in region 
> frank_rowcounttest1,,1446191360354.6c52c71a82f0fa041c467002a2bf433c. in table 
> 'frank_rowcounttest1', {NAME => 'cf', DATA_BLOCK_ENCODING => 'NONE', 
> BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', 
> VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 
> 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14770) RowCounter argument input parse error

2016-01-24 Thread Adrian Muraru (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15114468#comment-15114468
 ] 

Adrian Muraru commented on HBASE-14770:
---

Indeed, I'll amend the patch.


> RowCounter argument input parse error
> -
>
> Key: HBASE-14770
> URL: https://issues.apache.org/jira/browse/HBASE-14770
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Affects Versions: 1.0.0
>Reporter: Frank Chang
>Assignee: Adrian Muraru
>Priority: Minor
> Attachments: HBASE-14770-master.patch
>
>
> I tried to use the 
> https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/RowCounter.java
>  code, packaged a new jar, and then executed the following shell script:
> {code:none}
> hadoop jar test.jar  --range=row001,row002 cf:c2
> {code}
> Then I got a "NoSuchColumnFamilyException".
> It seems to be an input argument parsing problem.
> I tried to add 
> {code:java}
> continue; 
> {code}
> after #L123 to avoid the "--range=*" string being appended to the qualifier.
> That seems to solve the problem.
> --
> data in table:
> ||row||cf:c1||cf:c2||cf:c3||cf:c4||
> |row001|v1|v2| | |
> |row002| |v2|v3| |
> |row003| | |v3|v4|
> |row004|v1| | |v4|
> Exception Message:
> {code:java}
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: 
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column 
> family --range=row001,row002 does not exist in region 
> frank_rowcounttest1,,1446191360354.6c52c71a82f0fa041c467002a2bf433c. in table 
> 'frank_rowcounttest1', {NAME => 'cf', DATA_BLOCK_ENCODING => 'NONE', 
> BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', 
> VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 
> 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-14770) RowCounter argument input parse error

2016-01-24 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru reassigned HBASE-14770:
-

Assignee: Adrian Muraru  (was: Frank Chang)

> RowCounter argument input parse error
> -
>
> Key: HBASE-14770
> URL: https://issues.apache.org/jira/browse/HBASE-14770
> Project: HBase
>  Issue Type: Bug
>  Components: API
>Affects Versions: 1.0.0
>Reporter: Frank Chang
>Assignee: Adrian Muraru
>Priority: Minor
>
> I tried to use the 
> https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/RowCounter.java
>  code, packaged a new jar, and then executed the following shell script:
> {code:none}
> hadoop jar test.jar  --range=row001,row002 cf:c2
> {code}
> Then I got a "NoSuchColumnFamilyException".
> It seems to be an input argument parsing problem.
> I tried to add 
> {code:java}
> continue; 
> {code}
> after #L123 to avoid the "--range=*" string being appended to the qualifier.
> That seems to solve the problem.
> --
> data in table:
> ||row||cf:c1||cf:c2||cf:c3||cf:c4||
> |row001|v1|v2| | |
> |row002| |v2|v3| |
> |row003| | |v3|v4|
> |row004|v1| | |v4|
> Exception Message:
> {code:java}
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: 
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column 
> family --range=row001,row002 does not exist in region 
> frank_rowcounttest1,,1446191360354.6c52c71a82f0fa041c467002a2bf433c. in table 
> 'frank_rowcounttest1', {NAME => 'cf', DATA_BLOCK_ENCODING => 'NONE', 
> BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', 
> VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 
> 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14770) RowCounter argument input parse error

2016-01-24 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-14770:
--
Component/s: (was: API)
 mapreduce

> RowCounter argument input parse error
> -
>
> Key: HBASE-14770
> URL: https://issues.apache.org/jira/browse/HBASE-14770
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Affects Versions: 1.0.0
>Reporter: Frank Chang
>Assignee: Adrian Muraru
>Priority: Minor
> Attachments: HBASE-14770-master.patch
>
>
> I tried to use the 
> https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/RowCounter.java
>  code, packaged a new jar, and then executed the following shell script:
> {code:none}
> hadoop jar test.jar  --range=row001,row002 cf:c2
> {code}
> Then I got a "NoSuchColumnFamilyException".
> It seems to be an input argument parsing problem.
> I tried to add 
> {code:java}
> continue; 
> {code}
> after #L123 to avoid the "--range=*" string being appended to the qualifier.
> That seems to solve the problem.
> --
> data in table:
> ||row||cf:c1||cf:c2||cf:c3||cf:c4||
> |row001|v1|v2| | |
> |row002| |v2|v3| |
> |row003| | |v3|v4|
> |row004|v1| | |v4|
> Exception Message:
> {code:java}
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: 
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column 
> family --range=row001,row002 does not exist in region 
> frank_rowcounttest1,,1446191360354.6c52c71a82f0fa041c467002a2bf433c. in table 
> 'frank_rowcounttest1', {NAME => 'cf', DATA_BLOCK_ENCODING => 'NONE', 
> BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', 
> VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 
> 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14770) RowCounter argument input parse error

2016-01-24 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-14770:
--
Status: Patch Available  (was: In Progress)

> RowCounter argument input parse error
> -
>
> Key: HBASE-14770
> URL: https://issues.apache.org/jira/browse/HBASE-14770
> Project: HBase
>  Issue Type: Bug
>  Components: API
>Affects Versions: 1.0.0
>Reporter: Frank Chang
>Assignee: Adrian Muraru
>Priority: Minor
> Attachments: HBASE-14770-master.patch
>
>
> I tried to use the 
> https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/RowCounter.java
>  code, packaged a new jar, and then executed the following shell script:
> {code:none}
> hadoop jar test.jar  --range=row001,row002 cf:c2
> {code}
> Then I got a "NoSuchColumnFamilyException".
> It seems to be an input argument parsing problem.
> I tried to add 
> {code:java}
> continue; 
> {code}
> after #L123 to avoid the "--range=*" string being appended to the qualifier.
> That seems to solve the problem.
> --
> data in table:
> ||row||cf:c1||cf:c2||cf:c3||cf:c4||
> |row001|v1|v2| | |
> |row002| |v2|v3| |
> |row003| | |v3|v4|
> |row004|v1| | |v4|
> Exception Message:
> {code:java}
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: 
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column 
> family --range=row001,row002 does not exist in region 
> frank_rowcounttest1,,1446191360354.6c52c71a82f0fa041c467002a2bf433c. in table 
> 'frank_rowcounttest1', {NAME => 'cf', DATA_BLOCK_ENCODING => 'NONE', 
> BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', 
> VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 
> 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14770) RowCounter argument input parse error

2016-01-24 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-14770:
--
Attachment: HBASE-14770-master.patch

> RowCounter argument input parse error
> -
>
> Key: HBASE-14770
> URL: https://issues.apache.org/jira/browse/HBASE-14770
> Project: HBase
>  Issue Type: Bug
>  Components: API
>Affects Versions: 1.0.0
>Reporter: Frank Chang
>Assignee: Adrian Muraru
>Priority: Minor
> Attachments: HBASE-14770-master.patch
>
>
> I tried to use the 
> https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/RowCounter.java
>  code, packaged a new jar, and then executed the following shell script:
> {code:none}
> hadoop jar test.jar  --range=row001,row002 cf:c2
> {code}
> Then I got a "NoSuchColumnFamilyException".
> It seems to be an input argument parsing problem.
> I tried to add 
> {code:java}
> continue; 
> {code}
> after #L123 to avoid the "--range=*" string being appended to the qualifier.
> That seems to solve the problem.
> --
> data in table:
> ||row||cf:c1||cf:c2||cf:c3||cf:c4||
> |row001|v1|v2| | |
> |row002| |v2|v3| |
> |row003| | |v3|v4|
> |row004|v1| | |v4|
> Exception Message:
> {code:java}
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: 
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column 
> family --range=row001,row002 does not exist in region 
> frank_rowcounttest1,,1446191360354.6c52c71a82f0fa041c467002a2bf433c. in table 
> 'frank_rowcounttest1', {NAME => 'cf', DATA_BLOCK_ENCODING => 'NONE', 
> BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', 
> VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 
> 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HBASE-14770) RowCounter argument input parse error

2016-01-24 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-14770 started by Adrian Muraru.
-
> RowCounter argument input parse error
> -
>
> Key: HBASE-14770
> URL: https://issues.apache.org/jira/browse/HBASE-14770
> Project: HBase
>  Issue Type: Bug
>  Components: API
>Affects Versions: 1.0.0
>Reporter: Frank Chang
>Assignee: Adrian Muraru
>Priority: Minor
>
> I tried to use the 
> https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/RowCounter.java
>  code, packaged a new jar, and then executed the following shell script:
> {code:none}
> hadoop jar test.jar  --range=row001,row002 cf:c2
> {code}
> Then I got a "NoSuchColumnFamilyException".
> It seems to be an input argument parsing problem.
> I tried to add 
> {code:java}
> continue; 
> {code}
> after #L123 to avoid the "--range=*" string being appended to the qualifier.
> That seems to solve the problem.
> --
> data in table:
> ||row||cf:c1||cf:c2||cf:c3||cf:c4||
> |row001|v1|v2| | |
> |row002| |v2|v3| |
> |row003| | |v3|v4|
> |row004|v1| | |v4|
> Exception Message:
> {code:java}
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: 
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column 
> family --range=row001,row002 does not exist in region 
> frank_rowcounttest1,,1446191360354.6c52c71a82f0fa041c467002a2bf433c. in table 
> 'frank_rowcounttest1', {NAME => 'cf', DATA_BLOCK_ENCODING => 'NONE', 
> BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', 
> VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 
> 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14770) RowCounter argument input parse error

2016-01-24 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-14770:
--
Summary: RowCounter argument input parse error  (was: RowCounter argument 
input parse error.)

> RowCounter argument input parse error
> -
>
> Key: HBASE-14770
> URL: https://issues.apache.org/jira/browse/HBASE-14770
> Project: HBase
>  Issue Type: Bug
>  Components: API
>Affects Versions: 1.0.0
>Reporter: Frank Chang
>Assignee: Frank Chang
>Priority: Minor
>
> I tried to use the 
> https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/RowCounter.java
>  code, packaged a new jar, and then executed the following shell script:
> {code:none}
> hadoop jar test.jar  --range=row001,row002 cf:c2
> {code}
> Then I got a "NoSuchColumnFamilyException".
> It seems to be an input argument parsing problem.
> I tried to add 
> {code:java}
> continue; 
> {code}
> after #L123 to avoid the "--range=*" string being appended to the qualifier.
> That seems to solve the problem.
> --
> data in table:
> ||row||cf:c1||cf:c2||cf:c3||cf:c4||
> |row001|v1|v2| | |
> |row002| |v2|v3| |
> |row003| | |v3|v4|
> |row004|v1| | |v4|
> Exception Message:
> {code:java}
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: 
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column 
> family --range=row001,row002 does not exist in region 
> frank_rowcounttest1,,1446191360354.6c52c71a82f0fa041c467002a2bf433c. in table 
> 'frank_rowcounttest1', {NAME => 'cf', DATA_BLOCK_ENCODING => 'NONE', 
> BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', 
> VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 
> 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12971) Replication stuck due to large default value for replication.source.maxretriesmultiplier

2015-02-12 Thread Adrian Muraru (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14318825#comment-14318825
 ] 

Adrian Muraru commented on HBASE-12971:
---

[~lhofhansl] Makes sense!
Would it make sense to also sync the default value in {{this.maxRetriesMultiplier 
= this.conf.getInt("replication.source.maxretriesmultiplier", 10);}} 
in ReplicationSource.java#L169 and 
HBaseInterClusterReplicationEndpoint.java#L79?
{{HBaseInterClusterReplicationEndpoint}} duplicates a lot of code from 
{{ReplicationSource.java}}, so we should also factor that out - but probably in a 
different patch.


> Replication stuck due to large default value for 
> replication.source.maxretriesmultiplier
> 
>
> Key: HBASE-12971
> URL: https://issues.apache.org/jira/browse/HBASE-12971
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.0.0, 0.98.10
>Reporter: Adrian Muraru
> Fix For: 2.0.0, 1.0.1, 1.1.0, 0.94.27, 0.98.11
>
> Attachments: 12971.txt
>
>
> We are setting the default value of 300 for 
> {{replication.source.maxretriesmultiplier}}, introduced in HBASE-11964, in 
> hbase-site.
> While this value works fine for recovering from transient errors with the 
> remote ZK quorum of the peer HBase cluster, it proved to have side effects in 
> the code introduced in HBASE-11367 (pluggable replication endpoint), where the 
> default is much lower (10).
> See:
> 1. 
> https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java#L169
> 2. 
> https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HBaseInterClusterReplicationEndpoint.java#L79
> The two default values are definitely conflicting - when 
> {{replication.source.maxretriesmultiplier}} is set to 300 in hbase-site, 
> this leads to a sleep time of 300*300 seconds (25h!) when a socket timeout 
> exception is thrown.
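
Spelling out the description's arithmetic (assuming, for illustration, that the 
worst-case sleep is the configured value multiplied by the multiplier cap):
{code:java}
public class BackoffArithmetic {
  public static void main(String[] args) {
    // With the multiplier set to 300 in hbase-site, the worst case the
    // description refers to is 300 * 300 seconds.
    long worstCaseSeconds = 300L * 300L;         // 90,000 s
    System.out.println(worstCaseSeconds / 3600); // prints 25 (hours)
  }
}
{code}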



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12971) Replication stuck due to large default value for replication.source.maxretriesmultiplier

2015-02-10 Thread Adrian Muraru (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14313835#comment-14313835
 ] 

Adrian Muraru commented on HBASE-12971:
---

+1 on two configuration parameters: {{maxRetryCount}} and {{maxRetrySleepTime}}

> Replication stuck due to large default value for 
> replication.source.maxretriesmultiplier
> 
>
> Key: HBASE-12971
> URL: https://issues.apache.org/jira/browse/HBASE-12971
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.0.0, 0.98.10
>Reporter: Adrian Muraru
> Fix For: 2.0.0, 1.0.1, 1.1.0, 0.94.27, 0.98.11
>
>
> We are setting the default value of 300 for 
> {{replication.source.maxretriesmultiplier}}, introduced in HBASE-11964, in 
> hbase-site.
> While this value works fine for recovering from transient errors with the 
> remote ZK quorum of the peer HBase cluster, it proved to have side effects in 
> the code introduced in HBASE-11367 (pluggable replication endpoint), where the 
> default is much lower (10).
> See:
> 1. 
> https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java#L169
> 2. 
> https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HBaseInterClusterReplicationEndpoint.java#L79
> The two default values are definitely conflicting - when 
> {{replication.source.maxretriesmultiplier}} is set to 300 in hbase-site, 
> this leads to a sleep time of 300*300 seconds (25h!) when a socket timeout 
> exception is thrown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12971) Replication stuck due to large default value for replication.source.maxretriesmultiplier

2015-02-04 Thread Adrian Muraru (JIRA)
Adrian Muraru created HBASE-12971:
-

 Summary: Replication stuck due to large default value for 
replication.source.maxretriesmultiplier
 Key: HBASE-12971
 URL: https://issues.apache.org/jira/browse/HBASE-12971
 Project: HBase
  Issue Type: Bug
  Components: hbase
Affects Versions: 0.98.10, 1.0.0
Reporter: Adrian Muraru


We are setting the default value of 300 for 
{{replication.source.maxretriesmultiplier}}, introduced in HBASE-11964, in 
hbase-site.

While this value works fine for recovering from transient errors with the 
remote ZK quorum of the peer HBase cluster, it proved to have side effects in the 
code introduced in HBASE-11367 (pluggable replication endpoint), where the 
default is much lower (10).
See:
1. 
https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java#L169
2. 
https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HBaseInterClusterReplicationEndpoint.java#L79

The two default values are definitely conflicting - when 
{{replication.source.maxretriesmultiplier}} is set to 300 in hbase-site, 
this leads to a sleep time of 300*300 seconds (25h!) when a socket timeout 
exception is thrown.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12882) Log level for org.apache.hadoop.hbase package should be configurable

2015-01-19 Thread Adrian Muraru (JIRA)
Adrian Muraru created HBASE-12882:
-

 Summary: Log level for org.apache.hadoop.hbase package should be 
configurable 
 Key: HBASE-12882
 URL: https://issues.apache.org/jira/browse/HBASE-12882
 Project: HBase
  Issue Type: Bug
  Components: hbase
Affects Versions: 0.98.9, 0.94.26, 1.0.0
Reporter: Adrian Muraru


{{conf/log4j.properties}} hardcodes the log level for the top-level hbase package to 
DEBUG: {{log4j.logger.org.apache.hadoop.hbase=DEBUG}},
and there is no easy way to override it without modifying this file.

It would be useful to have a variable, say {{hbase.log.level}}, in this file so 
it can be passed from the site environment: {{-Dhbase.log.level=INFO}}.
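
A minimal sketch of the proposal, assuming log4j 1.x's system-property 
substitution (the variable name {{hbase.log.level}} is the one proposed above):
{code:none}
# In conf/log4j.properties: the default stays DEBUG, but it can now be
# overridden at launch time with -Dhbase.log.level=INFO
hbase.log.level=DEBUG
log4j.logger.org.apache.hadoop.hbase=${hbase.log.level}
{code}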



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12386) Replication gets stuck following a transient zookeeper error to remote peer cluster

2014-10-30 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-12386:
--
Status: Patch Available  (was: Open)

> Replication gets stuck following a transient zookeeper error to remote peer 
> cluster
> ---
>
> Key: HBASE-12386
> URL: https://issues.apache.org/jira/browse/HBASE-12386
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 0.98.7
>Reporter: Adrian Muraru
> Attachments: HBASE-12386.patch
>
>
> Following a transient ZK error, replication gets stuck and remote peers are 
> never updated.
> Source region servers continuously report the following error in the logs:
> "No replication sinks are available"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12386) Replication gets stuck following a transient zookeeper error to remote peer cluster

2014-10-30 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-12386:
--
Attachment: HBASE-12386.patch

> Replication gets stuck following a transient zookeeper error to remote peer 
> cluster
> ---
>
> Key: HBASE-12386
> URL: https://issues.apache.org/jira/browse/HBASE-12386
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 0.98.7
>Reporter: Adrian Muraru
> Attachments: HBASE-12386.patch
>
>
> Following a transient ZK error, replication gets stuck and remote peers are 
> never updated.
> Source region servers continuously report the following error in the logs:
> "No replication sinks are available"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12386) Replication gets stuck following a transient zookeeper error to remote peer cluster

2014-10-30 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-12386:
--
Status: Open  (was: Patch Available)

> Replication gets stuck following a transient zookeeper error to remote peer 
> cluster
> ---
>
> Key: HBASE-12386
> URL: https://issues.apache.org/jira/browse/HBASE-12386
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 0.98.7
>Reporter: Adrian Muraru
>
> Following a transient ZK error, replication gets stuck and remote peers are 
> never updated.
> Source region servers continuously report the following error in the logs:
> "No replication sinks are available"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12386) Replication gets stuck following a transient zookeeper error to remote peer cluster

2014-10-30 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-12386:
--
Attachment: (was: HBASE-12386.patch)

> Replication gets stuck following a transient zookeeper error to remote peer 
> cluster
> ---
>
> Key: HBASE-12386
> URL: https://issues.apache.org/jira/browse/HBASE-12386
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 0.98.7
>Reporter: Adrian Muraru
>
> Following a transient ZK error, replication gets stuck and remote peers are 
> never updated.
> Source region servers continuously report the following error in the logs:
> "No replication sinks are available"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12386) Replication gets stuck following a transient zookeeper error to remote peer cluster

2014-10-30 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-12386:
--
Status: Patch Available  (was: Open)

> Replication gets stuck following a transient zookeeper error to remote peer 
> cluster
> ---
>
> Key: HBASE-12386
> URL: https://issues.apache.org/jira/browse/HBASE-12386
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 0.98.7
>Reporter: Adrian Muraru
> Attachments: HBASE-12386.patch
>
>
> Following a transient ZK error, replication gets stuck and remote peers are 
> never updated.
> Source region servers continuously report the following error in the logs:
> "No replication sinks are available"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12386) Replication gets stuck following a transient zookeeper error to remote peer cluster

2014-10-30 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-12386:
--
Attachment: HBASE-12386.patch

> Replication gets stuck following a transient zookeeper error to remote peer 
> cluster
> ---
>
> Key: HBASE-12386
> URL: https://issues.apache.org/jira/browse/HBASE-12386
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 0.98.7
>Reporter: Adrian Muraru
> Attachments: HBASE-12386.patch
>
>
> Following a transient ZK error, replication gets stuck and remote peers are 
> never updated.
> Source region servers continuously report the following error in the logs:
> "No replication sinks are available"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12386) Replication gets stuck following a transient zookeeper error to remote peer cluster

2014-10-30 Thread Adrian Muraru (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190767#comment-14190767
 ] 

Adrian Muraru commented on HBASE-12386:
---

Looking at the code, it seems that once the remote ZK peer lookup fails, the 
refresh timestamp is updated and the returned list of RS peers is empty.

{{org.apache.hadoop.hbase.replication.regionserver.ReplicationSinkManager}} then 
does not retry the lookup on the next poll, as the following condition is not met:
{code:java}
if (endpoint.getLastRegionServerUpdate() > this.lastUpdateToPeers) {
  LOG.info("Current list of sinks is out of date, updating");
  chooseSinks();
}
{code}

A fix would be to force a refresh when the list of peers is empty:
{code:java}
if (replicationPeers.getTimestampOfLastChangeToPeer(peerClusterId) > 
this.lastUpdateToPeers
|| sinks.isEmpty()) {
  LOG.info("Current list of sinks is out of date or empty, updating");
  chooseSinks();
}
{code}

Note that this does not reproduce in 0.94, where the refresh appears to 
happen in this case.


> Replication gets stuck following a transient zookeeper error to remote peer 
> cluster
> ---
>
> Key: HBASE-12386
> URL: https://issues.apache.org/jira/browse/HBASE-12386
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 0.98.7
>Reporter: Adrian Muraru
>
> Following a transient ZK error, replication gets stuck and remote peers are 
> never updated.
> Source region servers continuously report the following error in the logs:
> "No replication sinks are available"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12386) Replication gets stuck following a transient zookeeper error to remote peer cluster

2014-10-30 Thread Adrian Muraru (JIRA)
Adrian Muraru created HBASE-12386:
-

 Summary: Replication gets stuck following a transient zookeeper 
error to remote peer cluster
 Key: HBASE-12386
 URL: https://issues.apache.org/jira/browse/HBASE-12386
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.98.7
Reporter: Adrian Muraru


Following a transient ZK error, replication gets stuck and remote peers are 
never updated.

Source region servers continuously report the following error in their logs:
"No replication sinks are available"





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11552) Read/Write requests count metric value is too short

2014-07-20 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-11552:
--

Status: Patch Available  (was: Open)

Renamed the patch file to indicate the 0.94 branch only

> Read/Write requests count metric value is too short
> ---
>
> Key: HBASE-11552
> URL: https://issues.apache.org/jira/browse/HBASE-11552
> Project: HBase
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.94.21
>Reporter: Adrian Muraru
> Fix For: 0.94.22
>
> Attachments: HBASE-11552_0.94_v1.diff
>
>
> I am using the {{readRequestsCount}} and {{writeRequestsCount}} counters to plot 
> HBase activity in OpenTSDB and noticed that they are exported as an int value 
> although the underlying counter is backed by a {{long}}.
> The metric should be a {{long}} as well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11552) Read/Write requests count metric value is too short

2014-07-20 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-11552:
--

Status: Open  (was: Patch Available)

> Read/Write requests count metric value is too short
> ---
>
> Key: HBASE-11552
> URL: https://issues.apache.org/jira/browse/HBASE-11552
> Project: HBase
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.94.21
>Reporter: Adrian Muraru
> Fix For: 0.94.22
>
> Attachments: HBASE-11552_0.94_v1.diff
>
>
> I am using the {{readRequestsCount}} and {{writeRequestsCount}} counters to plot 
> HBase activity in OpenTSDB and noticed that they are exported as an int value 
> although the underlying counter is backed by a {{long}}.
> The metric should be a {{long}} as well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11552) Read/Write requests count metric value is too short

2014-07-20 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-11552:
--

Attachment: (was: HBASE-11552_v1.diff)

> Read/Write requests count metric value is too short
> ---
>
> Key: HBASE-11552
> URL: https://issues.apache.org/jira/browse/HBASE-11552
> Project: HBase
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.94.21
>Reporter: Adrian Muraru
> Fix For: 0.94.22
>
> Attachments: HBASE-11552_0.94_v1.diff
>
>
> I am using the {{readRequestsCount}} and {{writeRequestsCount}} counters to plot 
> HBase activity in OpenTSDB and noticed that they are exported as an int value 
> although the underlying counter is backed by a {{long}}.
> The metric should be a {{long}} as well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11552) Read/Write requests count metric value is too short

2014-07-20 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-11552:
--

Attachment: HBASE-11552_0.94_v1.diff

> Read/Write requests count metric value is too short
> ---
>
> Key: HBASE-11552
> URL: https://issues.apache.org/jira/browse/HBASE-11552
> Project: HBase
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.94.21
>Reporter: Adrian Muraru
> Fix For: 0.94.22
>
> Attachments: HBASE-11552_0.94_v1.diff, HBASE-11552_v1.diff
>
>
> I am using the {{readRequestsCount}} and {{writeRequestsCount}} counters to plot 
> HBase activity in OpenTSDB and noticed that they are exported as an int value 
> although the underlying counter is backed by a {{long}}.
> The metric should be a {{long}} as well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11552) Read/Write requests count metric value is too short

2014-07-20 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-11552:
--

Status: Patch Available  (was: Open)

> Read/Write requests count metric value is too short
> ---
>
> Key: HBASE-11552
> URL: https://issues.apache.org/jira/browse/HBASE-11552
> Project: HBase
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.94.21
>Reporter: Adrian Muraru
> Fix For: 0.94.22
>
> Attachments: HBASE-11552_v1.diff
>
>
> I am using the {{readRequestsCount}} and {{writeRequestsCount}} counters to plot 
> HBase activity in OpenTSDB and noticed that they are exported as an int value 
> although the underlying counter is backed by a {{long}}.
> The metric should be a {{long}} as well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11552) Read/Write requests count metric value is too short

2014-07-20 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-11552:
--

Attachment: HBASE-11552_v1.diff

> Read/Write requests count metric value is too short
> ---
>
> Key: HBASE-11552
> URL: https://issues.apache.org/jira/browse/HBASE-11552
> Project: HBase
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.94.21
>Reporter: Adrian Muraru
> Fix For: 0.94.22
>
> Attachments: HBASE-11552_v1.diff
>
>
> I am using the {{readRequestsCount}} and {{writeRequestsCount}} counters to plot 
> HBase activity in OpenTSDB and noticed that they are exported as an int value 
> although the underlying counter is backed by a {{long}}.
> The metric should be a {{long}} as well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11552) Read/Write requests count metric value is too short

2014-07-20 Thread Adrian Muraru (JIRA)
Adrian Muraru created HBASE-11552:
-

 Summary: Read/Write requests count metric value is too short
 Key: HBASE-11552
 URL: https://issues.apache.org/jira/browse/HBASE-11552
 Project: HBase
  Issue Type: Bug
  Components: metrics
Affects Versions: 0.94.21
Reporter: Adrian Muraru


I am using the {{readRequestsCount}} and {{writeRequestsCount}} counters to plot 
HBase activity in OpenTSDB and noticed that they are exported as an int value 
although the underlying counter is backed by a {{long}}.
The metric should be a {{long}} as well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11552) Read/Write requests count metric value is too short

2014-07-20 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-11552:
--

Fix Version/s: 0.94.22

> Read/Write requests count metric value is too short
> ---
>
> Key: HBASE-11552
> URL: https://issues.apache.org/jira/browse/HBASE-11552
> Project: HBase
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.94.21
>Reporter: Adrian Muraru
> Fix For: 0.94.22
>
>
> I am using the {{readRequestsCount}} and {{writeRequestsCount}} counters to plot 
> HBase activity in OpenTSDB and noticed that they are exported as an int value 
> although the underlying counter is backed by a {{long}}.
> The metric should be a {{long}} as well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11052) Sending random data crashes thrift service

2014-06-07 Thread Adrian Muraru (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14020900#comment-14020900
 ] 

Adrian Muraru commented on HBASE-11052:
---

Patch v4 for trunk - reverting the defaults in hbase-default.xml as they were 
making tests fail

> Sending random data crashes thrift service
> --
>
> Key: HBASE-11052
> URL: https://issues.apache.org/jira/browse/HBASE-11052
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 0.98.1, 1.0.0, 0.94.18
>Reporter: Adrian Muraru
> Fix For: 1.0.0, 0.94.21, 0.98.4
>
> Attachments: HBASE-11052_0.94_v2.patch, HBASE-11052_0.94_v4.patch, 
> HBASE-11052_trunk_v1.patch, HBASE-11052_trunk_v3.patch, 
> HBASE-11052_trunk_v4.patch
>
>
> Upstream thrift library has a known issue (THRIFT-601) causing the thrift 
> server to crash with an Out-of-Memory Error when bogus requests are sent.
> This reproduces when a very large request size is sent in the request header, 
> making the thrift server allocate a large memory segment, leading to OOM.
> LoadBalancer health checks are the first "candidate" for bogus requests.
> Thrift developers admit this is a known issue with TBinaryProtocol and their 
> recommendation is to use TCompactProtocol/TFramedTransport, but this requires 
> all thrift clients to be updated (might not be feasible atm).
> So we need a fix similar to CASSANDRA-475.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11052) Sending random data crashes thrift service

2014-06-07 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-11052:
--

Release Note: 
Thrift servers should use framed/compact protocol to protect against buffer 
overflow (disabled by default, as they break old clients):
- hbase.regionserver.thrift.framed = true
- hbase.regionserver.thrift.compact = true

  was:
Thrift servers are now using framed/compact protocol to protect against buffer 
overflow:
- hbase.regionserver.thrift.framed = true
- hbase.regionserver.thrift.compact = true
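
Not part of the release note, but a minimal client-side sketch of what these 
settings imply: clients must use the matching transport/protocol. This assumes 
libthrift and the generated thrift-v1 {{Hbase}} client are on the classpath; 
host and port are placeholders (9090 is the conventional thrift port).
{code:java}
import org.apache.hadoop.hbase.thrift.generated.Hbase;
import org.apache.thrift.protocol.TCompactProtocol;
import org.apache.thrift.protocol.TProtocol;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class FramedCompactClientDemo {
  public static void main(String[] args) throws Exception {
    // Framed transport + compact protocol must match the server-side settings
    // (hbase.regionserver.thrift.framed / hbase.regionserver.thrift.compact).
    TTransport transport = new TFramedTransport(new TSocket("thrift-host", 9090));
    transport.open();
    TProtocol protocol = new TCompactProtocol(transport);
    Hbase.Client client = new Hbase.Client(protocol);
    System.out.println(client.getTableNames());
    transport.close();
  }
}
{code}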


> Sending random data crashes thrift service
> --
>
> Key: HBASE-11052
> URL: https://issues.apache.org/jira/browse/HBASE-11052
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 0.98.1, 1.0.0, 0.94.18
>Reporter: Adrian Muraru
> Fix For: 1.0.0, 0.94.21, 0.98.4
>
> Attachments: HBASE-11052_0.94_v2.patch, HBASE-11052_0.94_v4.patch, 
> HBASE-11052_trunk_v1.patch, HBASE-11052_trunk_v3.patch, 
> HBASE-11052_trunk_v4.patch
>
>
> Upstream thrift library has a known issue (THRIFT-601) causing the thrift 
> server to crash with an Out-of-Memory Error when bogus requests are sent.
> This reproduces when a very large request size is sent in the request header, 
> making the thrift server allocate a large memory segment, leading to OOM.
> LoadBalancer health checks are the first "candidate" for bogus requests.
> Thrift developers admit this is a known issue with TBinaryProtocol and their 
> recommendation is to use TCompactProtocol/TFramedTransport, but this requires 
> all thrift clients to be updated (might not be feasible atm).
> So we need a fix similar to CASSANDRA-475.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11052) Sending random data crashes thrift service

2014-06-07 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-11052:
--

Attachment: HBASE-11052_trunk_v4.patch

> Sending random data crashes thrift service
> --
>
> Key: HBASE-11052
> URL: https://issues.apache.org/jira/browse/HBASE-11052
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 0.98.1, 1.0.0, 0.94.18
>Reporter: Adrian Muraru
> Fix For: 1.0.0, 0.94.21, 0.98.4
>
> Attachments: HBASE-11052_0.94_v2.patch, HBASE-11052_0.94_v4.patch, 
> HBASE-11052_trunk_v1.patch, HBASE-11052_trunk_v3.patch, 
> HBASE-11052_trunk_v4.patch
>
>
> Upstream thrift library has a known issue (THRIFT-601) causing the thrift 
> server to crash with an Out-of-Memory Error when bogus requests are sent.
> This reproduces when a very large request size is sent in the request header, 
> making the thrift server allocate a large memory segment, leading to OOM.
> LoadBalancer health checks are the first "candidate" for bogus requests.
> Thrift developers admit this is a known issue with TBinaryProtocol and their 
> recommendation is to use TCompactProtocol/TFramedTransport, but this requires 
> all thrift clients to be updated (might not be feasible atm).
> So we need a fix similar to CASSANDRA-475.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11052) Sending random data crashes thrift service

2014-06-06 Thread Adrian Muraru (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14020442#comment-14020442
 ] 

Adrian Muraru commented on HBASE-11052:
---

Wrong patch format, resubmitted - Jenkins is not happy with {{git 
format-patch}}.

> Sending random data crashes thrift service
> --
>
> Key: HBASE-11052
> URL: https://issues.apache.org/jira/browse/HBASE-11052
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 0.98.1, 1.0.0, 0.94.18
>Reporter: Adrian Muraru
> Fix For: 1.0.0, 0.94.21, 0.98.4
>
> Attachments: HBASE-11052_0.94_v2.patch, HBASE-11052_0.94_v4.patch, 
> HBASE-11052_trunk_v1.patch, HBASE-11052_trunk_v3.patch
>
>
> Upstream thrift library has a known issue (THRIFT-601) causing the thrift 
> server to crash with an Out-of-Memory Error when bogus requests are sent.
> This reproduces when a very large request size is sent in the request header, 
> making the thrift server allocate a large memory segment, leading to OOM.
> LoadBalancer health checks are the first "candidate" for bogus requests.
> Thrift developers admit this is a known issue with TBinaryProtocol and their 
> recommendation is to use TCompactProtocol/TFramedTransport, but this requires 
> all thrift clients to be updated (might not be feasible atm).
> So we need a fix similar to CASSANDRA-475.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11052) Sending random data crashes thrift service

2014-06-06 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-11052:
--

Status: Open  (was: Patch Available)

> Sending random data crashes thrift service
> --
>
> Key: HBASE-11052
> URL: https://issues.apache.org/jira/browse/HBASE-11052
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 0.94.18, 0.98.1, 1.0.0
>Reporter: Adrian Muraru
> Fix For: 1.0.0, 0.94.21, 0.98.4
>
> Attachments: HBASE-11052_0.94_v2.patch, HBASE-11052_0.94_v4.patch, 
> HBASE-11052_trunk_v1.patch, HBASE-11052_trunk_v3.patch
>
>
> Upstream thrift library has a known issue (THRIFT-601) causing the thrift 
> server to crash with an Out-of-Memory Error when bogus requests are sent.
> This reproduces when a very large request size is sent in the request header, 
> making the thrift server allocate a large memory segment, leading to OOM.
> LoadBalancer health checks are the first "candidate" for bogus requests.
> Thrift developers admit this is a known issue with TBinaryProtocol and their 
> recommendation is to use TCompactProtocol/TFramedTransport, but this requires 
> all thrift clients to be updated (might not be feasible atm).
> So we need a fix similar to CASSANDRA-475.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11052) Sending random data crashes thrift service

2014-06-06 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-11052:
--

Status: Patch Available  (was: Open)

> Sending random data crashes thrift service
> --
>
> Key: HBASE-11052
> URL: https://issues.apache.org/jira/browse/HBASE-11052
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 0.94.18, 0.98.1, 1.0.0
>Reporter: Adrian Muraru
> Fix For: 1.0.0, 0.94.21, 0.98.4
>
> Attachments: HBASE-11052_0.94_v2.patch, HBASE-11052_0.94_v4.patch, 
> HBASE-11052_trunk_v1.patch, HBASE-11052_trunk_v3.patch
>
>
> Upstream thrift library has a known issue (THRIFT-601) causing the thrift 
> server to crash with an Out-of-Memory Error when bogus requests are sent.
> This reproduces when a very large request size is sent in the request header, 
> making the thrift server allocate a large memory segment, leading to OOM.
> LoadBalancer health checks are the first "candidate" for bogus requests.
> Thrift developers admit this is a known issue with TBinaryProtocol and their 
> recommendation is to use TCompactProtocol/TFramedTransport, but this requires 
> all thrift clients to be updated (might not be feasible atm).
> So we need a fix similar to CASSANDRA-475.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11052) Sending random data crashes thrift service

2014-06-06 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-11052:
--

Attachment: (was: HBASE-11052_trunk_v2.patch)

> Sending random data crashes thrift service
> --
>
> Key: HBASE-11052
> URL: https://issues.apache.org/jira/browse/HBASE-11052
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 0.98.1, 1.0.0, 0.94.18
>Reporter: Adrian Muraru
> Fix For: 1.0.0, 0.94.21, 0.98.4
>
> Attachments: HBASE-11052_0.94_v2.patch, HBASE-11052_0.94_v4.patch, 
> HBASE-11052_trunk_v1.patch, HBASE-11052_trunk_v3.patch
>
>
> Upstream thrift library has a known issue (THRIFT-601) causing the thrift 
> server to crash with an Out-of-Memory Error when bogus requests are sent.
> This reproduces when a very large request size is sent in the request header, 
> making the thrift server allocate a large memory segment, leading to OOM.
> LoadBalancer health checks are the first "candidate" for bogus requests.
> Thrift developers admit this is a known issue with TBinaryProtocol and their 
> recommendation is to use TCompactProtocol/TFramedTransport, but this requires 
> all thrift clients to be updated (might not be feasible atm).
> So we need a fix similar to CASSANDRA-475.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11052) Sending random data crashes thrift service

2014-06-06 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-11052:
--

Attachment: HBASE-11052_trunk_v3.patch

> Sending random data crashes thrift service
> --
>
> Key: HBASE-11052
> URL: https://issues.apache.org/jira/browse/HBASE-11052
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 0.98.1, 1.0.0, 0.94.18
>Reporter: Adrian Muraru
> Fix For: 1.0.0, 0.94.21, 0.98.4
>
> Attachments: HBASE-11052_0.94_v2.patch, HBASE-11052_0.94_v4.patch, 
> HBASE-11052_trunk_v1.patch, HBASE-11052_trunk_v3.patch
>
>
> Upstream thrift library has a known issue (THRIFT-601) causing the thrift 
> server to crash with an Out-of-Memory Error when bogus requests are sent.
> This reproduces when a very large request size is sent in the request header, 
> making the thrift server allocate a large memory segment, leading to OOM.
> LoadBalancer health checks are the first "candidate" for bogus requests.
> Thrift developers admit this is a known issue with TBinaryProtocol and their 
> recommendation is to use TCompactProtocol/TFramedTransport, but this requires 
> all thrift clients to be updated (might not be feasible atm).
> So we need a fix similar to CASSANDRA-475.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11052) Sending random data crashes thrift service

2014-06-06 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-11052:
--

Attachment: HBASE-11052_0.94_v4.patch

> Sending random data crashes thrift service
> --
>
> Key: HBASE-11052
> URL: https://issues.apache.org/jira/browse/HBASE-11052
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 0.98.1, 1.0.0, 0.94.18
>Reporter: Adrian Muraru
> Fix For: 1.0.0, 0.94.21, 0.98.4
>
> Attachments: HBASE-11052_0.94_v2.patch, HBASE-11052_0.94_v4.patch, 
> HBASE-11052_trunk_v1.patch, HBASE-11052_trunk_v2.patch
>
>
> Upstream thrift library has a known issue (THRIFT-601) causing the thrift 
> server to crash with an Out-of-Memory Error when bogus requests are sent.
> This reproduces when a very large request size is sent in the request header, 
> making the thrift server allocate a large memory segment, leading to OOM.
> LoadBalancer health checks are the first "candidate" for bogus requests.
> Thrift developers admit this is a known issue with TBinaryProtocol and their 
> recommendation is to use TCompactProtocol/TFramedTransport, but this requires 
> all thrift clients to be updated (might not be feasible atm).
> So we need a fix similar to CASSANDRA-475.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11052) Sending random data crashes thrift service

2014-06-06 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-11052:
--

Attachment: (was: HBASE-11052_0.94_v3.patch)

> Sending random data crashes thrift service
> --
>
> Key: HBASE-11052
> URL: https://issues.apache.org/jira/browse/HBASE-11052
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 0.98.1, 1.0.0, 0.94.18
>Reporter: Adrian Muraru
> Fix For: 1.0.0, 0.94.21, 0.98.4
>
> Attachments: HBASE-11052_0.94_v2.patch, HBASE-11052_0.94_v4.patch, 
> HBASE-11052_trunk_v1.patch, HBASE-11052_trunk_v2.patch
>
>
> Upstream thrift library has a known issue (THRIFT-601) causing the thrift 
> server to crash with an Out-of-Memory Error when bogus requests are sent.
> This reproduces when a very large request size is sent in the request header, 
> making the thrift server allocate a large memory segment, leading to OOM.
> LoadBalancer health checks are the first "candidate" for bogus requests.
> Thrift developers admit this is a known issue with TBinaryProtocol and their 
> recommendation is to use TCompactProtocol/TFramedTransport, but this requires 
> all thrift clients to be updated (might not be feasible atm).
> So we need a fix similar to CASSANDRA-475.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11052) Sending random data crashes thrift service

2014-06-06 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-11052:
--

Attachment: (was: HBASE-11052_0.94_v3.patch)

> Sending random data crashes thrift service
> --
>
> Key: HBASE-11052
> URL: https://issues.apache.org/jira/browse/HBASE-11052
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 0.98.1, 1.0.0, 0.94.18
>Reporter: Adrian Muraru
> Fix For: 1.0.0, 0.94.21, 0.98.4
>
> Attachments: HBASE-11052_0.94_v2.patch, HBASE-11052_0.94_v3.patch, 
> HBASE-11052_trunk_v1.patch, HBASE-11052_trunk_v2.patch
>
>
> Upstream thrift library has a known issue (THRIFT-601) causing the thrift 
> server to crash with an Out-of-Memory Error when bogus requests are sent.
> This reproduces when a very large request size is sent in the request header, 
> making the thrift server allocate a large memory segment, leading to OOM.
> LoadBalancer health checks are the first "candidate" for bogus requests.
> Thrift developers admit this is a known issue with TBinaryProtocol and their 
> recommendation is to use TCompactProtocol/TFramedTransport, but this requires 
> all thrift clients to be updated (might not be feasible atm).
> So we need a fix similar to CASSANDRA-475.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11052) Sending random data crashes thrift service

2014-06-06 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-11052:
--

Attachment: HBASE-11052_0.94_v3.patch

> Sending random data crashes thrift service
> --
>
> Key: HBASE-11052
> URL: https://issues.apache.org/jira/browse/HBASE-11052
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 0.98.1, 1.0.0, 0.94.18
>Reporter: Adrian Muraru
> Fix For: 1.0.0, 0.94.21, 0.98.4
>
> Attachments: HBASE-11052_0.94_v2.patch, HBASE-11052_0.94_v3.patch, 
> HBASE-11052_trunk_v1.patch, HBASE-11052_trunk_v2.patch
>
>
> Upstream thrift library has a known issue (THRIFT-601) causing the thrift 
> server to crash with an Out-of-Memory Error when bogus requests are sent.
> This reproduces when a very large request size is sent in the request header, 
> making the thrift server allocate a large memory segment, leading to OOM.
> LoadBalancer health checks are the first "candidate" for bogus requests.
> Thrift developers admit this is a known issue with TBinaryProtocol and their 
> recommendation is to use TCompactProtocol/TFramedTransport, but this requires 
> all thrift clients to be updated (might not be feasible atm).
> So we need a fix similar to CASSANDRA-475.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11052) Sending random data crashes thrift service

2014-06-06 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-11052:
--

Fix Version/s: 0.98.4
   0.94.21
   1.0.0
   Status: Patch Available  (was: Open)

> Sending random data crashes thrift service
> --
>
> Key: HBASE-11052
> URL: https://issues.apache.org/jira/browse/HBASE-11052
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 0.94.18, 0.98.1, 1.0.0
>Reporter: Adrian Muraru
> Fix For: 1.0.0, 0.94.21, 0.98.4
>
> Attachments: HBASE-11052_0.94_v2.patch, HBASE-11052_0.94_v3.patch, 
> HBASE-11052_trunk_v1.patch, HBASE-11052_trunk_v2.patch
>
>
> Upstream thrift library has a known issue (THRIFT-601) causing the thrift 
> server to crash with an Out-of-Memory Error when bogus requests are sent.
> This reproduces when a very large request size is sent in the request header, 
> making the thrift server allocate a large memory segment, leading to OOM.
> LoadBalancer health checks are the first "candidate" for bogus requests.
> Thrift developers admit this is a known issue with TBinaryProtocol and their 
> recommendation is to use TCompactProtocol/TFramedTransport, but this requires 
> all thrift clients to be updated (might not be feasible atm).
> So we need a fix similar to CASSANDRA-475.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11052) Sending random data crashes thrift service

2014-06-06 Thread Adrian Muraru (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14020418#comment-14020418
 ] 

Adrian Muraru commented on HBASE-11052:
---

Patches updated based on [~apurtell]'s review

> Sending random data crashes thrift service
> --
>
> Key: HBASE-11052
> URL: https://issues.apache.org/jira/browse/HBASE-11052
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 0.98.1, 1.0.0, 0.94.18
>Reporter: Adrian Muraru
> Attachments: HBASE-11052_0.94_v2.patch, HBASE-11052_0.94_v3.patch, 
> HBASE-11052_trunk_v1.patch, HBASE-11052_trunk_v2.patch
>
>
> Upstream thrift library has a known issue (THRIFT-601) causing the thrift 
> server to crash with an Out-of-Memory Error when bogus requests are sent.
> This reproduces when a very large request size is sent in the request header, 
> making the thrift server allocate a large memory segment, leading to OOM.
> LoadBalancer health checks are the first "candidate" for bogus requests.
> Thrift developers admit this is a known issue with TBinaryProtocol and their 
> recommendation is to use TCompactProtocol/TFramedTransport, but this requires 
> all thrift clients to be updated (might not be feasible atm).
> So we need a fix similar to CASSANDRA-475.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11052) Sending random data crashes thrift service

2014-06-06 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-11052:
--

Attachment: HBASE-11052_0.94_v3.patch

> Sending random data crashes thrift service
> --
>
> Key: HBASE-11052
> URL: https://issues.apache.org/jira/browse/HBASE-11052
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 0.98.1, 1.0.0, 0.94.18
>Reporter: Adrian Muraru
> Attachments: HBASE-11052_0.94_v2.patch, HBASE-11052_0.94_v3.patch, 
> HBASE-11052_trunk_v1.patch, HBASE-11052_trunk_v2.patch
>
>
> Upstream thrift library has a known issue (THRIFT-601) causing the thrift 
> server to crash with an Out-of-Memory Error when bogus requests are sent.
> This reproduces when a very large request size is sent in the request header, 
> making the thrift server allocate a large memory segment, leading to OOM.
> LoadBalancer health checks are the first "candidate" for bogus requests.
> Thrift developers admit this is a known issue with TBinaryProtocol and their 
> recommendation is to use TCompactProtocol/TFramedTransport, but this requires 
> all thrift clients to be updated (might not be feasible atm).
> So we need a fix similar to CASSANDRA-475.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11052) Sending random data crashes thrift service

2014-06-06 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-11052:
--

Attachment: HBASE-11052_trunk_v2.patch

> Sending random data crashes thrift service
> --
>
> Key: HBASE-11052
> URL: https://issues.apache.org/jira/browse/HBASE-11052
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 0.98.1, 1.0.0, 0.94.18
>Reporter: Adrian Muraru
> Attachments: HBASE-11052_0.94_v2.patch, HBASE-11052_trunk_v1.patch, 
> HBASE-11052_trunk_v2.patch
>
>
> Upstream thrift library has a known issue (THRIFT-601) causing the thrift 
> server to crash with an Out-of-Memory Error when bogus requests are sent.
> This reproduces when a very large request size is sent in the request header, 
> making the thrift server allocate a large memory segment, leading to OOM.
> LoadBalancer health checks are the first "candidate" for bogus requests.
> Thrift developers admit this is a known issue with TBinaryProtocol and their 
> recommendation is to use TCompactProtocol/TFramedTransport, but this requires 
> all thrift clients to be updated (might not be feasible atm).
> So we need a fix similar to CASSANDRA-475.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11052) Sending random data crashes thrift service

2014-06-06 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-11052:
--

Release Note: 
Thrift servers are now using framed/compact protocol to protect against buffer 
overflow:
- hbase.regionserver.thrift.framed = true
- hbase.regionserver.thrift.compact = true

> Sending random data crashes thrift service
> --
>
> Key: HBASE-11052
> URL: https://issues.apache.org/jira/browse/HBASE-11052
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 0.98.1, 1.0.0, 0.94.18
>Reporter: Adrian Muraru
> Attachments: HBASE-11052_0.94_v2.patch, HBASE-11052_trunk_v1.patch
>
>
> Upstream thrift library has a known issue (THRIFT-601) causing the thrift 
> server to crash with an Out-of-Memory Error when bogus requests are sent.
> This reproduces when a very large request size is sent in the request header, 
> making the thrift server allocate a large memory segment, leading to OOM.
> LoadBalancer health checks are the first "candidate" for bogus requests.
> Thrift developers admit this is a known issue with TBinaryProtocol and their 
> recommendation is to use TCompactProtocol/TFramedTransport, but this requires 
> all thrift clients to be updated (might not be feasible atm).
> So we need a fix similar to CASSANDRA-475.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11052) Sending random data crashes thrift service

2014-04-23 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-11052:
--

Attachment: HBASE-11052_trunk_v1.patch

Trunk patch attached - both the thrift v1 and v2 servers now use 
compact-protocol/framed-transport by default.
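
For reference, a minimal sketch of the equivalent explicit configuration (the 
same keys can be set in hbase-site.xml); this mirrors the defaults the patch 
changes rather than reproducing any code from it:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ThriftServerConfigSketch {
  public static void main(String[] args) {
    // The two settings being flipped on by default for the thrift servers:
    Configuration conf = HBaseConfiguration.create();
    conf.setBoolean("hbase.regionserver.thrift.framed", true);
    conf.setBoolean("hbase.regionserver.thrift.compact", true);
    System.out.println(conf.get("hbase.regionserver.thrift.framed"));
  }
}
{code}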

> Sending random data crashes thrift service
> --
>
> Key: HBASE-11052
> URL: https://issues.apache.org/jira/browse/HBASE-11052
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 0.98.1, 1.0.0, 0.94.18
>Reporter: Adrian Muraru
> Attachments: HBASE-11052_0.94_v2.patch, HBASE-11052_trunk_v1.patch
>
>
> Upstream thrift library has a known issue (THRIFT-601) causing the thrift 
> server to crash with an Out-of-Memory Error when bogus requests are sent.
> This reproduces when a very large request size is sent in the request header, 
> making the thrift server allocate a large memory segment, leading to OOM.
> LoadBalancer health checks are the first "candidate" for bogus requests.
> Thrift developers admit this is a known issue with TBinaryProtocol and their 
> recommendation is to use TCompactProtocol/TFramedTransport, but this requires 
> all thrift clients to be updated (might not be feasible atm).
> So we need a fix similar to CASSANDRA-475.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11052) Sending random data crashes thrift service

2014-04-23 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-11052:
--

Attachment: (was: HBASE-11052.v1.patch)

> Sending random data crashes thrift service
> --
>
> Key: HBASE-11052
> URL: https://issues.apache.org/jira/browse/HBASE-11052
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 0.98.1, 1.0.0, 0.94.18
>Reporter: Adrian Muraru
> Attachments: HBASE-11052_0.94_v2.patch
>
>
> Upstream thrift library has a known issue (THRIFT-601) causing the thrift 
> server to crash with an Out-of-Memory Error when bogus requests are sent.
> This reproduces when a very large request size is sent in the request header, 
> making the thrift server allocate a large memory segment, leading to OOM.
> LoadBalancer health checks are the first "candidate" for bogus requests.
> Thrift developers admit this is a known issue with TBinaryProtocol and their 
> recommendation is to use TCompactProtocol/TFramedTransport, but this requires 
> all thrift clients to be updated (might not be feasible atm).
> So we need a fix similar to CASSANDRA-475.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11052) Sending random data crashes thrift service

2014-04-23 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-11052:
--

Attachment: HBASE-11052_0.94_v2.patch

v2 for the 0.94 branch - HThrift2 now uses 
compact-protocol/framed-transport by default.

> Sending random data crashes thrift service
> --
>
> Key: HBASE-11052
> URL: https://issues.apache.org/jira/browse/HBASE-11052
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 0.98.1, 1.0.0, 0.94.18
>Reporter: Adrian Muraru
> Attachments: HBASE-11052_0.94_v2.patch
>
>
> Upstream thrift library has a known issue (THRIFT-601) causing the thrift 
> server to crash with an Out-of-Memory Error when bogus requests are sent.
> This reproduces when a very large request size is sent in the request header, 
> making the thrift server allocate a large memory segment, leading to OOM.
> LoadBalancer health checks are the first "candidate" for bogus requests.
> Thrift developers admit this is a known issue with TBinaryProtocol and their 
> recommendation is to use TCompactProtocol/TFramedTransport, but this requires 
> all thrift clients to be updated (might not be feasible atm).
> So we need a fix similar to CASSANDRA-475.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11052) Sending random data crashes thrift service

2014-04-23 Thread Adrian Muraru (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13978941#comment-13978941
 ] 

Adrian Muraru commented on HBASE-11052:
---

Note that thrift 0.9.0 removes (weirdly!) the message limit support in 
TBinaryProtocol (THRIFT-820), so from 0.95 onwards we should consider 
configuring the HBase thrift server to use compact/framed transport by default 
to avoid the OOM.
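
For reference, a sketch of the CASSANDRA-475-style guard this implies for 
pre-0.9 libthrift, where {{TBinaryProtocol.Factory}} still accepted a 
read-length cap (the exact constructor signature varies by thrift version, so 
treat this as an assumption, not a drop-in fix):
{code:java}
import org.apache.thrift.protocol.TBinaryProtocol;

public class ReadLengthGuardSketch {
  public static void main(String[] args) {
    // Cap how many bytes a single deserialized string/binary may claim; a
    // request header advertising more than this makes the protocol throw
    // instead of allocating a huge buffer. 16MB is an illustrative value.
    int maxReadLength = 16 * 1024 * 1024;
    TBinaryProtocol.Factory protocolFactory =
        new TBinaryProtocol.Factory(false, true, maxReadLength);
    System.out.println("factory created: " + protocolFactory);
  }
}
{code}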


> Sending random data crashes thrift service
> --
>
> Key: HBASE-11052
> URL: https://issues.apache.org/jira/browse/HBASE-11052
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 0.98.1, 1.0.0, 0.94.18
>Reporter: Adrian Muraru
> Attachments: HBASE-11052.v1.patch
>
>
> Upstream thrift library has a known issue (THRIFT-601) causing the thrift 
> server to crash with an Out-of-Memory Error when bogus requests are sent.
> This reproduces when a very large request size is sent in the request header, 
> making the thrift server allocate a large memory segment, leading to OOM.
> LoadBalancer health checks are the first "candidate" for bogus requests.
> Thrift developers admit this is a known issue with TBinaryProtocol and their 
> recommendation is to use TCompactProtocol/TFramedTransport, but this requires 
> all thrift clients to be updated (might not be feasible atm).
> So we need a fix similar to CASSANDRA-475.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11052) Sending random data crashes thrift service

2014-04-23 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-11052:
--

Attachment: HBASE-11052.v1.patch

Attached a patch for the 0.94 branch

> Sending random data crashes thrift service
> --
>
> Key: HBASE-11052
> URL: https://issues.apache.org/jira/browse/HBASE-11052
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 0.98.1, 1.0.0, 0.94.18
>Reporter: Adrian Muraru
> Attachments: HBASE-11052.v1.patch
>
>
> Upstream thrift library has a known issue (THRIFT-601) causing the thrift 
> server to crash with an Out-of-Memory Error when bogus requests are sent.
> This reproduces when a very large request size is sent in the request header, 
> making the thrift server allocate a large memory segment, leading to OOM.
> LoadBalancer health checks are the first "candidate" for bogus requests.
> Thrift developers admit this is a known issue with TBinaryProtocol and their 
> recommendation is to use TCompactProtocol/TFramedTransport, but this requires 
> all thrift clients to be updated (might not be feasible atm).
> So we need a fix similar to CASSANDRA-475.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11052) Sending random data crashes thrift service

2014-04-23 Thread Adrian Muraru (JIRA)
Adrian Muraru created HBASE-11052:
-

 Summary: Sending random data crashes thrift service
 Key: HBASE-11052
 URL: https://issues.apache.org/jira/browse/HBASE-11052
 Project: HBase
  Issue Type: Bug
  Components: Thrift
Affects Versions: 0.94.18, 0.98.1, 1.0.0
Reporter: Adrian Muraru


Upstream thrift library has a known issue (THRIFT-601) causing the thrift server 
to crash with an Out-of-Memory Error when bogus requests are sent.

This reproduces when a very large request size is sent in the request header, 
making the thrift server allocate a large memory segment, leading to OOM.

LoadBalancer health checks are the first "candidate" for bogus requests.
Thrift developers admit this is a known issue with TBinaryProtocol and their 
recommendation is to use TCompactProtocol/TFramedTransport, but this requires 
all thrift clients to be updated (might not be feasible atm).

So we need a fix similar to CASSANDRA-475.




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10524) Correct wrong handling and add proper handling for swallowed InterruptedException thrown by Thread.sleep in regionserver

2014-02-19 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-10524:
--

Description: 
A sub-task of HBASE-10497

# correct wrong handling of InterruptedException where 
Thread.currentThread.interrupt() is called within while loops
# add proper handling for swallowed InterruptedException



  was:
A sub-task of HBASE-10497

# correct wrong handling of InterruptedException where 
Thread.currentThread.interrupt() is called within while loops
# add proper handling for swallowed InterruptedException


> Correct wrong handling and add proper handling for swallowed 
> InterruptedException thrown by Thread.sleep in regionserver
> 
>
> Key: HBASE-10524
> URL: https://issues.apache.org/jira/browse/HBASE-10524
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Reporter: Feng Honghua
>Assignee: Feng Honghua
> Attachments: HBASE-10524-trunk_v1.patch, HBASE-10524-trunk_v2.patch, 
> split.patch
>
>
> A sub-task of HBASE-10497
> # correct wrong handling of InterruptedException where 
> Thread.currentThread.interrupt() is called within while loops
> # add proper handling for swallowed InterruptedException



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-7404) Bucket Cache:A solution about CMS,Heap Fragment and Big Cache on HBASE

2013-09-29 Thread Adrian Muraru (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13781345#comment-13781345
 ] 

Adrian Muraru commented on HBASE-7404:
--

+1 on integrating this in the 0.94 branch, thanks Lars.
One comment regarding the configs: reading through the patch, I find the usage 
of the "hbase.offheapcache.percentage" config misleading.

Say I want to use the bucket cache as a secondary, off-heap block cache:
{code}
if (offHeapCacheSize <= 0) {
  // hbase.bucketcache.ioengine = "file://ramdisk/hbase"
  // hbase.bucketcache.combinedcache.enabled = false
  ... init BucketCache ...
}
{code}

The {{hbase.offheapcache.percentage}} name is misleading with the addition of 
this block cache.
One suggestion would be to rename it to something like 
{{hbase.slab_offheapcache}}.


> Bucket Cache:A solution about CMS,Heap Fragment and Big Cache on HBASE
> --
>
> Key: HBASE-7404
> URL: https://issues.apache.org/jira/browse/HBASE-7404
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 0.94.3
>Reporter: chunhui shen
>Assignee: chunhui shen
> Fix For: 0.95.0
>
> Attachments: 7404-0.94-fixed-lines.txt, 7404-trunk-v10.patch, 
> 7404-trunk-v11.patch, 7404-trunk-v12.patch, 7404-trunk-v13.patch, 
> 7404-trunk-v13.txt, 7404-trunk-v14.patch, BucketCache.pdf, 
> hbase-7404-94v2.patch, HBASE-7404-backport-0.94.patch, 
> hbase-7404-trunkv2.patch, hbase-7404-trunkv9.patch, Introduction of Bucket 
> Cache.pdf
>
>
> First, thanks @neil from Fusion-IO for sharing the source code.
> Usage:
> 1. Use bucket cache as the main memory cache, configured as follows:
> – "hbase.bucketcache.ioengine" "heap"
> – "hbase.bucketcache.size" 0.4 (size for bucket cache; 0.4 is a percentage of 
> max heap size)
> 2. Use bucket cache as a secondary cache, configured as follows:
> – "hbase.bucketcache.ioengine" "file:/disk1/hbase/cache.data" (the file path 
> where the block data is stored)
> – "hbase.bucketcache.size" 1024 (size for bucket cache; unit is MB, so 1024 
> means 1GB)
> – "hbase.bucketcache.combinedcache.enabled" false (default value being true)
> See more configurations in org.apache.hadoop.hbase.io.hfile.CacheConfig and 
> org.apache.hadoop.hbase.io.hfile.bucket.BucketCache
> What's Bucket Cache? 
> It can greatly decrease CMS pauses and heap fragmentation caused by GC
> It supports a large cache space for high read performance by using high-speed 
> disks like Fusion-io
> 1. An implementation of block cache like LruBlockCache
> 2. Self-manages blocks' storage positions through the Bucket Allocator
> 3. The cached blocks can be stored in memory or in the file system
> 4. Bucket Cache can be used as the main block cache (see CombinedBlockCache), 
> combined with LruBlockCache to decrease CMS and fragmentation caused by GC
> 5. BucketCache can also be used as a secondary cache (e.g. using Fusion-io to 
> store blocks) to enlarge the cache space
> How about SlabCache?
> We studied and tested SlabCache first, but the result was bad, because:
> 1. SlabCache uses SingleSizeCache, whose memory utilization is low because of 
> the variety of block sizes, especially when using DataBlockEncoding
> 2. SlabCache is used in DoubleBlockCache; a block is cached both in SlabCache 
> and LruBlockCache, and is put into LruBlockCache again on a SlabCache hit, so 
> CMS and heap fragmentation don't get any better
> 3. Direct (off-heap) performance is not as good as heap, and may cause OOM, so 
> we recommend using the "heap" engine
> See more in the attachment and in the patch
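
A minimal programmatic sketch of the two setups above, using the same keys and 
the illustrative values from the description (the keys can equally be set in 
hbase-site.xml):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class BucketCacheConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();

    // Setup 1: bucket cache as the main (combined) cache, 40% of max heap.
    conf.set("hbase.bucketcache.ioengine", "heap");
    conf.setFloat("hbase.bucketcache.size", 0.4f);

    // Setup 2: bucket cache as a secondary, file-backed 1GB cache.
    // conf.set("hbase.bucketcache.ioengine", "file:/disk1/hbase/cache.data");
    // conf.setFloat("hbase.bucketcache.size", 1024f);
    // conf.setBoolean("hbase.bucketcache.combinedcache.enabled", false);

    System.out.println(conf.get("hbase.bucketcache.ioengine"));
  }
}
{code}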



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-7205) Coprocessor classloader is replicated for all regions in the HRegionServer

2012-12-13 Thread Adrian Muraru (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13530904#comment-13530904
 ] 

Adrian Muraru commented on HBASE-7205:
--

I'm 100% sure it works - but I wouldn't ever swear by a piece of software :D

> Coprocessor classloader is replicated for all regions in the HRegionServer
> --
>
> Key: HBASE-7205
> URL: https://issues.apache.org/jira/browse/HBASE-7205
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Affects Versions: 0.92.2, 0.94.2
>Reporter: Adrian Muraru
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.96.0, 0.94.4
>
> Attachments: 7205-0.94.txt, 7205-v10.txt, 7205-v1.txt, 7205-v3.txt, 
> 7205-v4.txt, 7205-v5.txt, 7205-v6.txt, 7205-v7.txt, 7205-v8.txt, 7205-v9.txt, 
> HBASE-7205_v2.patch
>
>
> HBASE-6308 introduced a new custom CoprocessorClassLoader to load the 
> coprocessor classes and a new instance of this CL is created for each single 
> HRegion opened. This leads to OOME-PermGen when the number of regions go 
> above hundres / region server. 
> Having the table coprocessor jailed in a separate classloader is good however 
> we should create only one for all regions of a table in each HRS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7205) Coprocessor classloader is replicated for all regions in the HRegionServer

2012-12-12 Thread Adrian Muraru (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13530088#comment-13530088
 ] 

Adrian Muraru commented on HBASE-7205:
--

Lars, you're right: apparently there is one thread keeping a strong reference 
to our custom classloader. The thing is that this seems to be a JUnit thread; 
when I test manually with standalone HBase by enabling/disabling a multi-region 
table, I can see these instances GC'ed. 
Not 100% sure, but I suspect JUnit does some sort of classloading accounting - 
for reporting purposes or so - and keeps these references.

> Coprocessor classloader is replicated for all regions in the HRegionServer
> --
>
> Key: HBASE-7205
> URL: https://issues.apache.org/jira/browse/HBASE-7205
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Affects Versions: 0.92.2, 0.94.2
>Reporter: Adrian Muraru
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.96.0, 0.94.4
>
> Attachments: 7205-0.94.txt, 7205-v10.txt, 7205-v1.txt, 7205-v3.txt, 
> 7205-v4.txt, 7205-v5.txt, 7205-v6.txt, 7205-v7.txt, 7205-v8.txt, 7205-v9.txt, 
> HBASE-7205_v2.patch
>
>
> HBASE-6308 introduced a new custom CoprocessorClassLoader to load the 
> coprocessor classes and a new instance of this CL is created for each single 
> HRegion opened. This leads to OOME-PermGen when the number of regions go 
> above hundres / region server. 
> Having the table coprocessor jailed in a separate classloader is good however 
> we should create only one for all regions of a table in each HRS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7205) Coprocessor classloader is replicated for all regions in the HRegionServer

2012-12-10 Thread Adrian Muraru (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13528274#comment-13528274
 ] 

Adrian Muraru commented on HBASE-7205:
--

[~te...@apache.org] 

http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/java/util/concurrent/CopyOnWriteArrayList.java#956
See - the iterator is a COWIterator, documented in the javadoc:
{quote}Traversal via iterators is fast and cannot encounter interference from 
other threads. Iterators rely on unchanging snapshots of the array at the time 
the iterators were constructed.{quote}

Compare that to:
http://grepcode.com/file/repo1.maven.org/maven2/org.apache.hbase/hbase/0.94.1/org/apache/hadoop/hbase/util/SortedCopyOnWriteSet.java#77
which returns a TreeSet iterator, known not to be thread-safe in the Java 
collections.

@Stack - your eagle eye - is this what you were referring to when you asked 
about the safeness of the iterator?
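
A standalone sketch of the difference (plain JDK collections, not HBase code):
{code:java}
import java.util.Iterator;
import java.util.TreeSet;
import java.util.concurrent.CopyOnWriteArrayList;

public class IteratorSafetyDemo {
  public static void main(String[] args) {
    // COWIterator: a snapshot of the backing array, immune to later writes.
    CopyOnWriteArrayList<String> cow = new CopyOnWriteArrayList<String>();
    cow.add("a"); cow.add("b");
    Iterator<String> snapshot = cow.iterator();
    cow.add("c");                           // mutation after iterator creation
    while (snapshot.hasNext()) {
      System.out.println(snapshot.next());  // prints a, b - no exception
    }

    // Plain TreeSet iterator: fail-fast, not a snapshot.
    TreeSet<String> tree = new TreeSet<String>();
    tree.add("a"); tree.add("b");
    Iterator<String> it = tree.iterator();
    tree.add("c");                          // structural modification
    it.next();                              // throws ConcurrentModificationException
  }
}
{code}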

> Coprocessor classloader is replicated for all regions in the HRegionServer
> --
>
> Key: HBASE-7205
> URL: https://issues.apache.org/jira/browse/HBASE-7205
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Affects Versions: 0.92.2, 0.94.2
>Reporter: Adrian Muraru
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.96.0, 0.94.4
>
> Attachments: 7205-v10.txt, 7205-v1.txt, 7205-v3.txt, 7205-v4.txt, 
> 7205-v5.txt, 7205-v6.txt, 7205-v7.txt, 7205-v8.txt, 7205-v9.txt, 
> HBASE-7205_v2.patch
>
>
> HBASE-6308 introduced a new custom CoprocessorClassLoader to load the 
> coprocessor classes and a new instance of this CL is created for each single 
> HRegion opened. This leads to OOME-PermGen when the number of regions go 
> above hundres / region server. 
> Having the table coprocessor jailed in a separate classloader is good however 
> we should create only one for all regions of a table in each HRS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7205) Coprocessor classloader is replicated for all regions in the HRegionServer

2012-12-10 Thread Adrian Muraru (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13528267#comment-13528267
 ] 

Adrian Muraru commented on HBASE-7205:
--

[~saint@gmail.com] 
bq. It looks like a ClassNotFoundException throws an IOE if a null path passed 
and otherwise, we go to load from filesystem if not in cache which would 
seem to address @Andrew Purtell concern. Is that so?
The discussions around ClassNotFoundException were related to 
{{RegionCoprocessorHost#loadTableCoprocessors}}, which catches any Exception 
thrown by {{CPH.load}}, logs it, and continues, ignoring the faulty coprocessor.

bq. Why we do this? setContextClassLoader on currentThread? Is it in case we 
have stale cl? One that was just replaced in cache?
This is something I borrowed from Jetty's similar sandboxed classloader. 
The reason is that a custom CP class implementation might do some gymnastics 
and explicitly retrieve the caller thread's context classloader to load other 
classes, thus escaping the custom loader.
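
A minimal sketch of that set-and-restore pattern ({{cpClassLoader}} and 
{{runCoprocessor}} are hypothetical stand-ins, not names from the patch):
{code:java}
public class ContextClassLoaderSketch {
  static void invokeSandboxed(ClassLoader cpClassLoader, Runnable runCoprocessor) {
    ClassLoader previous = Thread.currentThread().getContextClassLoader();
    Thread.currentThread().setContextClassLoader(cpClassLoader);
    try {
      // Coprocessor code that resolves classes via the thread context
      // classloader now goes through the sandboxed loader.
      runCoprocessor.run();
    } finally {
      // Always restore, so a stale loader never leaks to unrelated code.
      Thread.currentThread().setContextClassLoader(previous);
    }
  }
}
{code}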

> Coprocessor classloader is replicated for all regions in the HRegionServer
> --
>
> Key: HBASE-7205
> URL: https://issues.apache.org/jira/browse/HBASE-7205
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Affects Versions: 0.92.2, 0.94.2
>Reporter: Adrian Muraru
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.96.0, 0.94.4
>
> Attachments: 7205-v10.txt, 7205-v1.txt, 7205-v3.txt, 7205-v4.txt, 
> 7205-v5.txt, 7205-v6.txt, 7205-v7.txt, 7205-v8.txt, 7205-v9.txt, 
> HBASE-7205_v2.patch
>
>
> HBASE-6308 introduced a new custom CoprocessorClassLoader to load the 
> coprocessor classes and a new instance of this CL is created for each single 
> HRegion opened. This leads to OOME-PermGen when the number of regions go 
> above hundres / region server. 
> Having the table coprocessor jailed in a separate classloader is good however 
> we should create only one for all regions of a table in each HRS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7205) Coprocessor classloader is replicated for all regions in the HRegionServer

2012-12-10 Thread Adrian Muraru (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13528250#comment-13528250
 ] 

Adrian Muraru commented on HBASE-7205:
--

{quote} Is this iteration safe?
coprocessors is backed by SortedCopyOnWriteSet. So the iteration is safe.
{quote}
Is it, Ted? That was my guess initially when I saw that HBase provides a 
"CopyOnWrite" tree-set. Thread-safety is a guarantee for all 
java.util.concurrent CopyOnWrite collections: they provide snapshot-based 
iterators (no ConcurrentModificationException thrown). But that's not the case 
for SortedCopyOnWriteSet in HBase - it just returns the TreeSet iterator, which 
is *not thread-safe*! 


> Coprocessor classloader is replicated for all regions in the HRegionServer
> --
>
> Key: HBASE-7205
> URL: https://issues.apache.org/jira/browse/HBASE-7205
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Affects Versions: 0.92.2, 0.94.2
>Reporter: Adrian Muraru
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.96.0, 0.94.4
>
> Attachments: 7205-v10.txt, 7205-v1.txt, 7205-v3.txt, 7205-v4.txt, 
> 7205-v5.txt, 7205-v6.txt, 7205-v7.txt, 7205-v8.txt, 7205-v9.txt, 
> HBASE-7205_v2.patch
>
>
> HBASE-6308 introduced a new custom CoprocessorClassLoader to load the 
> coprocessor classes and a new instance of this CL is created for each single 
> HRegion opened. This leads to OOME-PermGen when the number of regions go 
> above hundres / region server. 
> Having the table coprocessor jailed in a separate classloader is good however 
> we should create only one for all regions of a table in each HRS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7205) Coprocessor classloader is replicated for all regions in the HRegionServer

2012-12-10 Thread Adrian Muraru (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13528100#comment-13528100
 ] 

Adrian Muraru commented on HBASE-7205:
--

I'm not sure what the convention in HBase is for this kind of method (again, I 
hate "forTesting" suffixes :D and I usually avoid adding this kind of code), 
but in this case getExternalClassLoaders() could be used from other places as 
well, like a JMX query or even metrics.


> Coprocessor classloader is replicated for all regions in the HRegionServer
> --
>
> Key: HBASE-7205
> URL: https://issues.apache.org/jira/browse/HBASE-7205
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Affects Versions: 0.92.2, 0.94.2
>Reporter: Adrian Muraru
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.96.0, 0.94.4
>
> Attachments: 7205-v1.txt, 7205-v3.txt, 7205-v4.txt, 7205-v5.txt, 
> 7205-v6.txt, 7205-v7.txt, 7205-v8.txt, 7205-v9.txt, HBASE-7205_v2.patch
>
>
> HBASE-6308 introduced a new custom CoprocessorClassLoader to load the 
> coprocessor classes and a new instance of this CL is created for each single 
> HRegion opened. This leads to OOME-PermGen when the number of regions goes 
> above hundreds per region server. 
> Having the table coprocessor jailed in a separate classloader is good; however, 
> we should create only one for all regions of a table in each HRS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7205) Coprocessor classloader is replicated for all regions in the HRegionServer

2012-12-09 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-7205:
-

Attachment: 7205-v9.txt

Just realised that we don't need the {{activeCoprocessorClassLoaders}} set of 
strong references at all. As long as there is an open HRegion using a coprocessor 
loaded from an external jar, there is a GC path to the classloader keeping it 
in memory.
See this trace:
http://img.ly/images/6350023/full

The v9 patch removes this attribute and adds a new method to query the active 
list of classloaders from the CPH.
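
The weak-value caching idea can be sketched standalone (names are illustrative; 
the real code lives in CoprocessorHost):
{code:java}
import java.util.concurrent.ConcurrentMap;
import com.google.common.collect.MapMaker;

public class WeakValueCacheSketch {
  // Values are held via WeakReference: once no open region keeps a strong
  // reference to a classloader, the GC may collect it and the entry
  // silently disappears from the map.
  static final ConcurrentMap<String, ClassLoader> CACHE =
      new MapMaker().weakValues().makeMap();

  public static void main(String[] args) throws Exception {
    ClassLoader cl = new ClassLoader() {}; // stand-in for CoprocessorClassLoader
    CACHE.put("hdfs:///cp/my-coprocessor.jar", cl);
    System.out.println(CACHE.get("hdfs:///cp/my-coprocessor.jar")); // non-null
    cl = null;                             // drop the only strong reference
    System.gc();                           // a hint; collection timing is up to the JVM
    Thread.sleep(100);
    // Likely null now that nothing (no "region") references the loader.
    System.out.println(CACHE.get("hdfs:///cp/my-coprocessor.jar"));
  }
}
{code}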


> Coprocessor classloader is replicated for all regions in the HRegionServer
> --
>
> Key: HBASE-7205
> URL: https://issues.apache.org/jira/browse/HBASE-7205
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Affects Versions: 0.92.2, 0.94.2
>Reporter: Adrian Muraru
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.96.0, 0.94.4
>
> Attachments: 7205-v1.txt, 7205-v3.txt, 7205-v4.txt, 7205-v5.txt, 
> 7205-v6.txt, 7205-v7.txt, 7205-v8.txt, 7205-v9.txt, HBASE-7205_v2.patch
>
>
> HBASE-6308 introduced a new custom CoprocessorClassLoader to load the 
> coprocessor classes and a new instance of this CL is created for each single 
> HRegion opened. This leads to OOME-PermGen when the number of regions goes 
> above hundreds per region server. 
> Having the table coprocessor jailed in a separate classloader is good; however, 
> we should create only one for all regions of a table in each HRS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7309) Metrics refresh-task is not canceled when regions are closed, leaking HRegion objects

2012-12-09 Thread Adrian Muraru (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13527452#comment-13527452
 ] 

Adrian Muraru commented on HBASE-7309:
--

Lars, you're right, it's trunk-only - updated the "Affects Version" field

> Metrics refresh-task is not canceled when regions are closed, leaking HRegion 
> objects
> -
>
> Key: HBASE-7309
> URL: https://issues.apache.org/jira/browse/HBASE-7309
> Project: HBase
>  Issue Type: Bug
>  Components: metrics, regionserver
>Affects Versions: 0.96.0
>Reporter: Adrian Muraru
>Priority: Critical
> Attachments: HBASE-7309_v1.patch, HBASE-7309_v2.patch
>
>
> While investigating HBASE-7205 by repeatedly enabling and disabling one table 
> having 100 regions I noticed that closed HRegion objects are kept forever in 
> memory. 
> The memory analyzer tool indicates a reference to HRegion object in metrics 
> refresh-task ({{MetricsRegionWrapperImpl.HRegionMetricsWrapperRunnable}}) 
> that prevents the HRegion object from being collected.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7309) Metrics refresh-task is not canceled when regions are closed, leaking HRegion objects

2012-12-09 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-7309:
-

Affects Version/s: (was: 0.94.3)
   (was: 0.92.2)
   0.96.0

> Metrics refresh-task is not canceled when regions are closed, leaking HRegion 
> objects
> -
>
> Key: HBASE-7309
> URL: https://issues.apache.org/jira/browse/HBASE-7309
> Project: HBase
>  Issue Type: Bug
>  Components: metrics, regionserver
>Affects Versions: 0.96.0
>Reporter: Adrian Muraru
>Priority: Critical
> Attachments: HBASE-7309_v1.patch, HBASE-7309_v2.patch
>
>
> While investigating HBASE-7205 by repeatedly enabling and disabling one table 
> having 100 regions I noticed that closed HRegion objects are kept forever in 
> memory. 
> The memory analyzer tool indicates a reference to HRegion object in metrics 
> refresh-task ({{MetricsRegionWrapperImpl.HRegionMetricsWrapperRunnable}}) 
> that prevents the HRegion object from being collected.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7205) Coprocessor classloader is replicated for all regions in the HRegionServer

2012-12-09 Thread Adrian Muraru (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13527447#comment-13527447
 ] 

Adrian Muraru commented on HBASE-7205:
--

So you think we should have a separate JIRA for CP ClassNotFound handling ?

> Coprocessor classloader is replicated for all regions in the HRegionServer
> --
>
> Key: HBASE-7205
> URL: https://issues.apache.org/jira/browse/HBASE-7205
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Affects Versions: 0.92.2, 0.94.2
>Reporter: Adrian Muraru
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.96.0, 0.94.4
>
> Attachments: 7205-v1.txt, 7205-v3.txt, 7205-v4.txt, 7205-v5.txt, 
> 7205-v6.txt, 7205-v7.txt, 7205-v8.txt, HBASE-7205_v2.patch
>
>
> HBASE-6308 introduced a new custom CoprocessorClassLoader to load the 
> coprocessor classes and a new instance of this CL is created for each single 
> HRegion opened. This leads to OOME-PermGen when the number of regions goes 
> above hundreds per region server. 
> Having the table coprocessor jailed in a separate classloader is good; however, 
> we should create only one for all regions of a table in each HRS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7309) Metrics refresh-task is not canceled when regions are closed, leaking HRegion objects

2012-12-08 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-7309:
-

Attachment: HBASE-7309_v2.patch

TestHeapSize was failing due to the new region-wrapper reference added in the 
v1 patch. Incremented HRegion.FIXED_OVERHEAD accordingly.

> Metrics refresh-task is not canceled when regions are closed, leaking HRegion 
> objects
> -
>
> Key: HBASE-7309
> URL: https://issues.apache.org/jira/browse/HBASE-7309
> Project: HBase
>  Issue Type: Bug
>  Components: metrics, regionserver
>Affects Versions: 0.92.2, 0.94.3
>Reporter: Adrian Muraru
>Priority: Critical
> Attachments: HBASE-7309_v1.patch, HBASE-7309_v2.patch
>
>
> While investigating HBASE-7205 by repeatedly enabling and disabling one table 
> having 100 regions I noticed that closed HRegion objects are kept forever in 
> memory. 
> The memory analyzer tool indicates a reference to HRegion object in metrics 
> refresh-task ({{MetricsRegionWrapperImpl.HRegionMetricsWrapperRunnable}}) 
> that prevents the HRegion object from being collected.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7309) Metrics refresh-task is not canceled when regions are closed, leaking HRegion objects

2012-12-08 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-7309:
-

Attachment: HBASE-7309_v1.patch

Attached a patch that cancels the refresh-task when the region is closed.
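
The shape of the leak and of the fix, as a minimal hypothetical sketch (field 
and class names are made up, loosely modelled on MetricsRegionWrapperImpl):
{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class MetricsWrapperSketch {
  private final ScheduledExecutorService executor =
      Executors.newSingleThreadScheduledExecutor();
  private final ScheduledFuture<?> refreshTask;

  MetricsWrapperSketch() {
    // The Runnable captures the wrapper (and transitively the region), so the
    // executor's queue pins the whole object graph while the task is scheduled.
    refreshTask = executor.scheduleWithFixedDelay(new Runnable() {
      public void run() {
        refresh();
      }
    }, 0, 45, TimeUnit.SECONDS);
  }

  void refresh() { /* recompute cached metrics from the region */ }

  void close() {
    // Without this cancel, the task (and the closed region) lives forever.
    refreshTask.cancel(true);
  }
}
{code}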

> Metrics refresh-task is not canceled when regions are closed, leaking HRegion 
> objects
> -
>
> Key: HBASE-7309
> URL: https://issues.apache.org/jira/browse/HBASE-7309
> Project: HBase
>  Issue Type: Bug
>  Components: metrics, regionserver
>Affects Versions: 0.92.2, 0.94.3
>Reporter: Adrian Muraru
>Priority: Critical
> Attachments: HBASE-7309_v1.patch
>
>
> While investigating HBASE-7205 by repeatedly enabling and disabling one table 
> having 100 regions I noticed that closed HRegion objects are kept forever in 
> memory. 
> The memory analyzer tool indicates a reference to HRegion object in metrics 
> refresh-task ({{MetricsRegionWrapperImpl.HRegionMetricsWrapperRunnable}}) 
> that prevents the HRegion object from being collected.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-7309) Metrics refresh-task is not canceled when regions are closed, leaking HRegion objects

2012-12-08 Thread Adrian Muraru (JIRA)
Adrian Muraru created HBASE-7309:


 Summary: Metrics refresh-task is not canceled when regions are 
closed, leaking HRegion objects
 Key: HBASE-7309
 URL: https://issues.apache.org/jira/browse/HBASE-7309
 Project: HBase
  Issue Type: Bug
  Components: metrics, regionserver
Affects Versions: 0.94.3, 0.92.2
Reporter: Adrian Muraru
Priority: Critical


While investigating HBASE-7205 by repeatedly enabling and disabling one table 
having 100 regions I noticed that closed HRegion objects are kept forever in 
memory. 
The memory analyzer tool indicates a reference to HRegion object in metrics 
refresh-task ({{MetricsRegionWrapperImpl.HRegionMetricsWrapperRunnable}}) that 
prevents the HRegion object from being collected.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7205) Coprocessor classloader is replicated for all regions in the HRegionServer

2012-12-08 Thread Adrian Muraru (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13527317#comment-13527317
 ] 

Adrian Muraru commented on HBASE-7205:
--

Right, classloading in Java is tough, I agree.
Regarding your comments:
{quote}
The reason why the above assertion failed was that the classloader for 
jarFileOnHDFS2 was removed from classLoadersCache in the middle of the test 
because of attempt of loading cpNameInvalid class.
{quote}
True, *cpNameInvalid* fails to load (no such class in the cp jar), however the 
same jar (i.e. its associated classloader) manages to successfully load *another 
coprocessor: cpName2*.
Take a closer look at how the *test* table is created in 
{{TestClassLoading#testClassLoadingFromHDFS}}:
{code:java}
htd.addFamily(new HColumnDescriptor("test"));
// without configuration values
htd.setValue("COPROCESSOR$1", jarFileOnHDFS1.toString() + "|" + cpName1
    + "|" + Coprocessor.PRIORITY_USER);
// with configuration values
htd.setValue("COPROCESSOR$2", jarFileOnHDFS2.toString() + "|" + cpName2
    + "|" + Coprocessor.PRIORITY_USER + "|k1=v1,k2=v2,k3=v3");
// invalid class name (should fail to load this class)
htd.setValue("COPROCESSOR$3", jarFileOnHDFS2.toString() + "|" + cpNameInvalid
    + "|" + Coprocessor.PRIORITY_USER);
{code}
See, the same jar file {{jarFileOnHDFS2}} is used to load two different 
coprocessor classes (one is successfully loaded, the other is not). 
What should we do in this case? 
My take is to keep the classloader in the cache and allow other regions to 
re-use it.
That's the reason I removed classloaderCache.remove(), and I strongly favour it.

Now, the fundamental question: 
*Should we silently ignore failures in CP loading (except for the warning in 
the log)?*
I think we should be more restrictive: propagate the failures upstream to the 
table handler and fail to bring the HRegion online in this case.
What do you think?


{quote}
I think the above assertion places extra limit on how CoprocessorHost.load() 
handles ClassNotFoundException. It assumes that the classloader corresponding 
to attempt of loading invalid classname (more strictly, classname and jar file 
mismatch) would be retained in cache.
{quote}
No, that's not true: the assertion checks that *all region active 
classloaders (i.e. those that managed to successfully load at least one 
CP) are cached*.


> Coprocessor classloader is replicated for all regions in the HRegionServer
> --
>
> Key: HBASE-7205
> URL: https://issues.apache.org/jira/browse/HBASE-7205
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Affects Versions: 0.92.2, 0.94.2
>Reporter: Adrian Muraru
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.96.0, 0.94.4
>
> Attachments: 7205-v1.txt, 7205-v3.txt, 7205-v4.txt, 7205-v5.txt, 
> 7205-v6.txt, 7205-v7.txt, 7205-v8.txt, HBASE-7205_v2.patch
>
>
> HBASE-6308 introduced a new custom CoprocessorClassLoader to load the 
> coprocessor classes and a new instance of this CL is created for each single 
> HRegion opened. This leads to OOME-PermGen when the number of regions goes 
> above hundreds per region server. 
> Having the table coprocessor jailed in a separate classloader is good; however, 
> we should create only one for all regions of a table in each HRS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7205) Coprocessor classloader is replicated for all regions in the HRegionServer

2012-12-08 Thread Adrian Muraru (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13527256#comment-13527256
 ] 

Adrian Muraru commented on HBASE-7205:
--

bq. {code:java}java.lang.AssertionError: Some CP classloaders for region 
TestClassLoading,,1354995818580.7fbabc669828f0c6435df9b6c0a57709. are not 
cached{code}
That's exactly why cache.remove() should not be called.
See case 2 above:
{quote} 2. Same jar packages multiple CP classes, possibly set on multiple 
tables:
 First CP class loading will cache the jar classloader.
 If one of the coprocessor configs then wrongly refers to an invalid classname 
in this jar, the classloader shouldn't be evicted - it bravely loaded other CP 
classes and should stay in cache.
{quote}
This works fine with v7, so I propose to stick with that version at this stage.

> Coprocessor classloader is replicated for all regions in the HRegionServer
> --
>
> Key: HBASE-7205
> URL: https://issues.apache.org/jira/browse/HBASE-7205
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Affects Versions: 0.92.2, 0.94.2
>Reporter: Adrian Muraru
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.96.0, 0.94.4
>
> Attachments: 7205-v1.txt, 7205-v3.txt, 7205-v4.txt, 7205-v5.txt, 
> 7205-v6.txt, 7205-v7.txt, 7205-v8.txt, HBASE-7205_v2.patch
>
>
> HBASE-6308 introduced a new custom CoprocessorClassLoader to load the 
> coprocessor classes and a new instance of this CL is created for each single 
> HRegion opened. This leads to OOME-PermGen when the number of regions goes 
> above hundreds per region server. 
> Having the table coprocessor jailed in a separate classloader is good; however, 
> we should create only one for all regions of a table in each HRS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7205) Coprocessor classloader is replicated for all regions in the HRegionServer

2012-12-08 Thread Adrian Muraru (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13527197#comment-13527197
 ] 

Adrian Muraru commented on HBASE-7205:
--

Ted, I don't agree with the v8 patch:
{code:java}
+  cl = new CoprocessorClassLoader(paths, this.getClass().getClassLoader());
+
   try {
 implClass = cl.loadClass(className);
{code}
Imagine 3 concurrent RS_OPEN_REGION threads calling #load in parallel (this is 
actually what happens when a pre-split table is created/enabled).
All three threads would create separate instances of CoprocessorClassLoader 
(for the same jar), and in turn implClass would be loaded from different 
classloaders.
Avoiding that is the whole mission of this patch - and #putIfAbsent before 
#loadClass does the trick.


P.S.
I'll try to catch this case in TestClassLoading
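
A minimal sketch of the putIfAbsent pattern being argued for (illustrative 
names only, not the actual CoprocessorHost code):
{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ClassLoaderCacheSketch {
  static final ConcurrentMap<String, ClassLoader> CACHE =
      new ConcurrentHashMap<String, ClassLoader>();

  static Class<?> load(String jarPath, String className, ClassLoader parent)
      throws ClassNotFoundException {
    // Every racing thread may build a candidate loader...
    ClassLoader fresh = new ClassLoader(parent) {}; // stand-in for CoprocessorClassLoader
    // ...but putIfAbsent guarantees exactly one winner per jar path.
    ClassLoader prev = CACHE.putIfAbsent(jarPath, fresh);
    ClassLoader cl = (prev != null) ? prev : fresh;
    // All threads now resolve the class through the same loader instance.
    return cl.loadClass(className);
  }
}
{code}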

> Coprocessor classloader is replicated for all regions in the HRegionServer
> --
>
> Key: HBASE-7205
> URL: https://issues.apache.org/jira/browse/HBASE-7205
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Affects Versions: 0.92.2, 0.94.2
>Reporter: Adrian Muraru
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.96.0, 0.94.4
>
> Attachments: 7205-v1.txt, 7205-v3.txt, 7205-v4.txt, 7205-v5.txt, 
> 7205-v6.txt, 7205-v7.txt, 7205-v8.txt, HBASE-7205_v2.patch
>
>
> HBASE-6308 introduced a new custom CoprocessorClassLoader to load the 
> coprocessor classes and a new instance of this CL is created for each single 
> HRegion opened. This leads to OOME-PermGen when the number of regions goes 
> above hundreds per region server. 
> Having the table coprocessor jailed in a separate classloader is good; however, 
> we should create only one for all regions of a table in each HRS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7205) Coprocessor classloader is replicated for all regions in the HRegionServer

2012-12-08 Thread Adrian Muraru (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13527140#comment-13527140
 ] 

Adrian Muraru commented on HBASE-7205:
--

Right, two cases here:
1. Classloader for jarX added to the cache, but CPH fails to load the CP 
classname from this cl (i.e. no strong ref kept to the cached cl).
Valid in the current patch; the cl will eventually be GCed by the JVM.

2. Same jar packages multiple CP classes, possibly set on multiple tables:
First CP class loading will cache the jar classloader.
If one of the coprocessor configs then wrongly refers to an invalid classname 
in this jar, the classloader shouldn't be evicted - it bravely loaded other CP 
classes and should stay in cache.

These cases are covered in the v7 patch.

On 08.12.2012, at 07:41, "Ted Yu (JIRA)" <mailto:j...@apache.org> wrote:

Ted Yu commented on HBASE-7205 (Coprocessor classloader is replicated for all 
regions in the HRegionServer):




cache.remove is not needed, as the classloader will be GC'ed if unused anyway

Is it possible that the classloader referenced by the cache eclipses another 
classloader which would load the className class correctly?





This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


> Coprocessor classloader is replicated for all regions in the HRegionServer
> --
>
> Key: HBASE-7205
> URL: https://issues.apache.org/jira/browse/HBASE-7205
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Affects Versions: 0.92.2, 0.94.2
>Reporter: Adrian Muraru
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.96.0, 0.94.4
>
> Attachments: 7205-v1.txt, 7205-v3.txt, 7205-v4.txt, 7205-v5.txt, 
> 7205-v6.txt, 7205-v7.txt, HBASE-7205_v2.patch
>
>
> HBASE-6308 introduced a new custom CoprocessorClassLoader to load the 
> coprocessor classes and a new instance of this CL is created for each single 
> HRegion opened. This leads to OOME-PermGen when the number of regions goes 
> above hundreds per region server. 
> Having the table coprocessor jailed in a separate classloader is good; however, 
> we should create only one for all regions of a table in each HRS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7205) Coprocessor classloader is replicated for all regions in the HRegionServer

2012-12-07 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-7205:
-

Attachment: 7205-v7.txt

Great, we're almost there :)

A few more things in patch v7:

 1. An invalid CP classname would evict the entire classloader from cache (this 
is not good, as the same classloader might be used to load other valid CP 
classes) 
{code:java}
   try {
 implClass = cl.loadClass(className);
+// cache cp classloader as a weak value, will be GC'ed when no 
reference left
+classLoadersCache.put (path, cl);
   } catch (ClassNotFoundException e) {
+classLoadersCache.remove(path);
 throw new IOException(e);
   }
{code}
 - cache.put is not needed, as the entry has already been added
 - cache.remove is not needed, as the classloader will be GC'ed if unused anyway

 2. TestClassLoading#testClassLoadingFromHDFS now checks for invalid CP 
classnames and double-checks that jar classloaders are actually cached


> Coprocessor classloader is replicated for all regions in the HRegionServer
> --
>
> Key: HBASE-7205
> URL: https://issues.apache.org/jira/browse/HBASE-7205
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Affects Versions: 0.92.2, 0.94.2
>Reporter: Adrian Muraru
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.96.0, 0.94.4
>
> Attachments: 7205-v1.txt, 7205-v3.txt, 7205-v4.txt, 7205-v5.txt, 
> 7205-v6.txt, 7205-v7.txt, HBASE-7205_v2.patch
>
>
> HBASE-6308 introduced a new custom CoprocessorClassLoader to load the 
> coprocessor classes and a new instance of this CL is created for each single 
> HRegion opened. This leads to OOME-PermGen when the number of regions goes 
> above hundreds per region server. 
> Having the table coprocessor jailed in a separate classloader is good; however, 
> we should create only one for all regions of a table in each HRS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7205) Coprocessor classloader is replicated for all regions in the HRegionServer

2012-12-07 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-7205:
-

Attachment: 7205-v5.txt

Ted, patch v5 also checks in TestClassLoading#testClassLoadingFromHDFS that all 
HRegion CPHs are actually re-using cached classloaders. That is, all 4 regions 
should load the CPs from two classloaders only (two jars).

This check actually spotted an issue in the activeClassLoader registration - we 
did it only when the classloader is first created, whereas it should happen 
whenever the classloader is successfully used to load a CP.
Patch v5 also contains a fix for this.

As for 
1. static void clearCacheForTesting();
2. static int getClassloaderCountForTesting(Path path);

Do you think we need them? I wouldn't add them; I'd use the 
CoprocessorHost.classLoadersCache reference instead (TestClassLoading is in the 
same package as CPH, so it can access package-level attributes). Personally, I 
am not a big fan of adding methods used solely in tests :)


> Coprocessor classloader is replicated for all regions in the HRegionServer
> --
>
> Key: HBASE-7205
> URL: https://issues.apache.org/jira/browse/HBASE-7205
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Affects Versions: 0.92.2, 0.94.2
>Reporter: Adrian Muraru
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.96.0, 0.94.4
>
> Attachments: 7205-v1.txt, 7205-v3.txt, 7205-v4.txt, 7205-v5.txt, 
> HBASE-7205_v2.patch
>
>
> HBASE-6308 introduced a new custom CoprocessorClassLoader to load the 
> coprocessor classes and a new instance of this CL is created for each single 
> HRegion opened. This leads to OOME-PermGen when the number of regions goes 
> above hundreds per region server. 
> Having the table coprocessor jailed in a separate classloader is good; however, 
> we should create only one for all regions of a table in each HRS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7205) Coprocessor classloader is replicated for all regions in the HRegionServer

2012-12-05 Thread Adrian Muraru (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13510755#comment-13510755
 ] 

Adrian Muraru commented on HBASE-7205:
--

[~ted_yu]
bq. I only found one reference to classLoaders, shown above. I wonder what 
purpose classLoaders would serve.
The instance attribute "classLoaders" keeps (strong) references to the 
(possibly multiple) region coproc classloaders.

classLoadersCache, on the other hand, is the global (static) cache shared by 
all regions in the RS.
The idea is that the classloader instances (values in classLoadersCache, keyed 
by jar path) are WeakReferences, so that they are GC-eligible once no region is 
using them (what keeps them in the cache is the above "active classLoaders").

Agreed, the naming is a bit misleading in my patch; it should be:
{code:java}
+  protected Set<ClassLoader> activeCoprocessorClassLoaders = new 
HashSet<ClassLoader>();
+  static ConcurrentMap<Path, ClassLoader> classLoadersCache = new MapMaker()
{code}

> Coprocessor classloader is replicated for all regions in the HRegionServer
> --
>
> Key: HBASE-7205
> URL: https://issues.apache.org/jira/browse/HBASE-7205
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Affects Versions: 0.92.2, 0.94.2
>Reporter: Adrian Muraru
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.96.0, 0.94.4
>
> Attachments: 7205-v1.txt, HBASE-7205_v2.patch
>
>
> HBASE-6308 introduced a new custom CoprocessorClassLoader to load the 
> coprocessor classes and a new instance of this CL is created for each single 
> HRegion opened. This leads to OOME-PermGen when the number of regions goes 
> above hundreds per region server. 
> Having the table coprocessor jailed in a separate classloader is good; however, 
> we should create only one for all regions of a table in each HRS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6580) New HTable pool, based on HBase(byte[], HConnection, ExecutorService) constructor

2012-12-05 Thread Adrian Muraru (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13510741#comment-13510741
 ] 

Adrian Muraru commented on HBASE-6580:
--

Renaming the issue title to reflect the planned changes would be good as well: 
something like "Remove HTablePool and replace it with lightweight HTable" would 
grab the necessary attention, I guess :)

> New HTable pool, based on HBase(byte[], HConnection, ExecutorService) 
> constructor
> -
>
> Key: HBASE-6580
> URL: https://issues.apache.org/jira/browse/HBASE-6580
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.92.2, 0.94.2
>Reporter: Lars Hofhansl
>Priority: Minor
> Attachments: HBASE-6580_v1.patch, HBASE-6580_v2.patch
>
>
> Here I propose a very simple TablePool.
> It could be called LightHTablePool (or something - if you have a better name).
> Internally it would maintain an HConnection and an Executor service and each 
> invocation of getTable(...) would create a new HTable and close() would just 
> close it.
> In testing I find this more light weight than HTablePool and easier to 
> monitor in terms of resources used.
> It would hardly be more than a few dozen lines of code.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7205) Coprocessor classloader is replicated for all regions in the HRegionServer

2012-12-04 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-7205:
-

Attachment: HBASE-7205_v2.patch

Adding v2 for this patch. We need to merge the tests though once discussed.

> Coprocessor classloader is replicated for all regions in the HRegionServer
> --
>
> Key: HBASE-7205
> URL: https://issues.apache.org/jira/browse/HBASE-7205
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Affects Versions: 0.92.2, 0.94.2
>Reporter: Adrian Muraru
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.96.0, 0.94.4
>
> Attachments: 7205-v1.txt, HBASE-7205_v2.patch
>
>
> HBASE-6308 introduced a new custom CoprocessorClassLoader to load the 
> coprocessor classes and a new instance of this CL is created for each single 
> HRegion opened. This leads to OOME-PermGen when the number of regions goes 
> above hundreds per region server. 
> Having the table coprocessor jailed in a separate classloader is good; however, 
> we should create only one for all regions of a table in each HRS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7205) Coprocessor classloader is replicated for all regions in the HRegionServer

2012-12-04 Thread Adrian Muraru (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13510166#comment-13510166
 ] 

Adrian Muraru commented on HBASE-7205:
--

[~ted_yu] This is really similar to my WIP patch - unfortunately I did not 
find enough time to clean it up until now.
A couple of comments on your patch:
{noformat}101 +  if (clsLoaderCache.containsKey(path)) {{noformat}
1. CPH.load() is executed concurrently from multiple RS threads, so 
clsLoaderCache needs to be synchronized (e.g. a ConcurrentHashMap).


2. We need a way to drop cached classloaders that are not being used by any 
online Region.






> Coprocessor classloader is replicated for all regions in the HRegionServer
> --
>
> Key: HBASE-7205
> URL: https://issues.apache.org/jira/browse/HBASE-7205
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Affects Versions: 0.92.2, 0.94.2
>Reporter: Adrian Muraru
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.96.0, 0.94.4
>
> Attachments: 7205-v1.txt
>
>
> HBASE-6308 introduced a new custom CoprocessorClassLoader to load the 
> coprocessor classes and a new instance of this CL is created for each single 
> HRegion opened. This leads to OOME-PermGen when the number of regions goes 
> above hundreds per region server. 
> Having the table coprocessor jailed in a separate classloader is good; however, 
> we should create only one for all regions of a table in each HRS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6580) New HTable pool, based on HBase(byte[], HConnection, ExecutorService) constructor

2012-12-02 Thread Adrian Muraru (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13508252#comment-13508252
 ] 

Adrian Muraru commented on HBASE-6580:
--

Lars, you're right, having a way to pass an already fine-tuned executor is 
useful in such cases; building one from configuration params might not be 
enough. Thinking of this, I'm more in favour of adding an executor param to 
#getTable, in order to allow mixed workloads to share the HConnection placement 
info:

ExecutorService lowPriority = Executors.newFixedThreadPool(2);   // sizes illustrative
ExecutorService highPriority = Executors.newFixedThreadPool(8);

HConnection conn = HConnectionManager.createConnection(conf);
HTableInterface t1 = conn.getTable(table1, lowPriority);
HTableInterface t2 = conn.getTable(table2, highPriority);



+1 removing HTablePool
+1 removing managed HConnections - is this doable? Do we use it elsewhere?
+1 keep only lightweight HTable ctors

> New HTable pool, based on HBase(byte[], HConnection, ExecutorService) 
> constructor
> -
>
> Key: HBASE-6580
> URL: https://issues.apache.org/jira/browse/HBASE-6580
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.92.2, 0.94.2
>Reporter: Lars Hofhansl
>Priority: Minor
> Attachments: HBASE-6580_v1.patch, HBASE-6580_v2.patch
>
>
> Here I propose a very simple TablePool.
> It could be called LightHTablePool (or something - if you have a better name).
> Internally it would maintain an HConnection and an Executor service and each 
> invocation of getTable(...) would create a new HTable and close() would just 
> close it.
> In testing I find this more light weight than HTablePool and easier to 
> monitor in terms of resources used.
> It would hardly be more than a few dozen lines of code.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6580) New HTable pool, based on HBase(byte[], HConnection, ExecutorService) constructor

2012-12-01 Thread Adrian Muraru (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13508113#comment-13508113
 ] 

Adrian Muraru commented on HBASE-6580:
--

[~lhofhansl] 
bq. Should allow passing an ExecutorService to 
HConnectionManager.createConnection(...). In fact I would require that now, and 
have that as the only option to setup the ExecutorService.

Managing the shared ExecutorService internally would be more usable for the 
user, in my opinion - no need to create or shut it down. I look at this 
executor as an internal detail used to execute *some* table operations in 
parallel.

bq. getTable must fail if this is a "managed", i.e. not created by 
createConnection (check the managed flag for that). Otherwise the HTable and 
the HConnection will get very confused.

Can you elaborate? Why is it not advisable to use a "managed" connection when 
creating an HTable?
Today I can do:
HTable t = new HTable(tableName, HConnectionManager.getConnection(conf), pool)
and have an HTable using a "managed" connection.

> New HTable pool, based on HBase(byte[], HConnection, ExecutorService) 
> constructor
> -
>
> Key: HBASE-6580
> URL: https://issues.apache.org/jira/browse/HBASE-6580
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.92.2, 0.94.2
>Reporter: Lars Hofhansl
>Priority: Minor
> Attachments: HBASE-6580_v1.patch, HBASE-6580_v2.patch
>
>
> Here I propose a very simple TablePool.
> It could be called LightHTablePool (or something - if you have a better name).
> Internally it would maintain an HConnection and an Executor service and each 
> invocation of getTable(...) would create a new HTable and close() would just 
> close it.
> In testing I find this more light weight than HTablePool and easier to 
> monitor in terms of resources used.
> It would hardly be more than a few dozen lines of code.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6580) New HTable pool, based on HBase(byte[], HConnection, ExecutorService) constructor

2012-12-01 Thread Adrian Muraru (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13508112#comment-13508112
 ] 

Adrian Muraru commented on HBASE-6580:
--

[~te...@apache.org] Thanks for having a look
bq. Why use double checked locking ? Connection would be used to create (at 
least) one table, right ?
We want to have *multiple* HTable instances sharing the *same* HConnection 
(this) and the same *ExecutorService*, so I ensure only one executor is ever 
instantiated.
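
The double-checked locking idiom in question, as a minimal hypothetical sketch 
(field and method names are illustrative, not the actual patch code):
{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SharedExecutorSketch {
  // volatile is what makes double-checked locking safe under the JMM
  private volatile ExecutorService batchPool;

  ExecutorService getBatchPool() {
    ExecutorService pool = batchPool;
    if (pool == null) {                    // first check, lock-free fast path
      synchronized (this) {
        pool = batchPool;
        if (pool == null) {                // second check, under the lock
          pool = Executors.newCachedThreadPool();
          batchPool = pool;                // publish exactly one executor
        }
      }
    }
    return pool;
  }
}
{code}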

> New HTable pool, based on HBase(byte[], HConnection, ExecutorService) 
> constructor
> -
>
> Key: HBASE-6580
> URL: https://issues.apache.org/jira/browse/HBASE-6580
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.92.2, 0.94.2
>Reporter: Lars Hofhansl
>Priority: Minor
> Attachments: HBASE-6580_v1.patch, HBASE-6580_v2.patch
>
>
> Here I propose a very simple TablePool.
> It could be called LightHTablePool (or something - if you have a better name).
> Internally it would maintain an HConnection and an Executor service and each 
> invocation of getTable(...) would create a new HTable and close() would just 
> close it.
> In testing I find this more light weight than HTablePool and easier to 
> monitor in terms of resources used.
> It would hardly be more than a few dozen lines of code.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6580) New HTable pool, based on HBase(byte[], HConnection, ExecutorService) constructor

2012-11-30 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-6580:
-

Attachment: HBASE-6580_v2.patch

Added HConnection#getTable()

> New HTable pool, based on HBase(byte[], HConnection, ExecutorService) 
> constructor
> -
>
> Key: HBASE-6580
> URL: https://issues.apache.org/jira/browse/HBASE-6580
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.92.2, 0.94.2
>Reporter: Lars Hofhansl
>Priority: Minor
> Attachments: HBASE-6580_v1.patch, HBASE-6580_v2.patch
>
>
> Here I propose a very simple TablePool.
> It could be called LightHTablePool (or something - if you have a better name).
> Internally it would maintain an HConnection and an Executor service and each 
> invocation of getTable(...) would create a new HTable and close() would just 
> close it.
> In testing I find this more light weight than HTablePool and easier to 
> monitor in terms of resources used.
> It would hardly be more than a few dozen lines of code.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6580) New HTable pool, based on HBase(byte[], HConnection, ExecutorService) constructor

2012-11-30 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-6580:
-

Attachment: HBASE_v2.patch

Added HConnection#getTable()

> New HTable pool, based on HBase(byte[], HConnection, ExecutorService) 
> constructor
> -
>
> Key: HBASE-6580
> URL: https://issues.apache.org/jira/browse/HBASE-6580
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.92.2, 0.94.2
>Reporter: Lars Hofhansl
>Priority: Minor
> Attachments: HBASE-6580_v1.patch
>
>
> Here I propose a very simple TablePool.
> It could be called LightHTablePool (or something - if you have a better name).
> Internally it would maintain an HConnection and an Executor service and each 
> invocation of getTable(...) would create a new HTable and close() would just 
> close it.
> In testing I find this more light weight than HTablePool and easier to 
> monitor in terms of resources used.
> It would hardly be more than a few dozen lines of code.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6580) New HTable pool, based on HBase(byte[], HConnection, ExecutorService) constructor

2012-11-30 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated HBASE-6580:
-

Attachment: (was: HBASE_v2.patch)

> New HTable pool, based on HBase(byte[], HConnection, ExecutorService) 
> constructor
> -
>
> Key: HBASE-6580
> URL: https://issues.apache.org/jira/browse/HBASE-6580
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.92.2, 0.94.2
>Reporter: Lars Hofhansl
>Priority: Minor
> Attachments: HBASE-6580_v1.patch
>
>
> Here I propose a very simple TablePool.
> It could be called LightHTablePool (or something - if you have a better name).
> Internally it would maintain an HConnection and an Executor service and each 
> invocation of getTable(...) would create a new HTable and close() would just 
> close it.
> In testing I find this more light weight than HTablePool and easier to 
> monitor in terms of resources used.
> It would hardly be more than a few dozen lines of code.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

