[jira] [Commented] (HBASE-25568) Upgrade Thrift jar to fix CVE-2020-13949

2021-03-02 Thread Chao Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17293604#comment-17293604
 ] 

Chao Wang commented on HBASE-25568:
---

Thanks Pankaj, +1.

> Upgrade Thrift jar to fix CVE-2020-13949
> 
>
> Key: HBASE-25568
> URL: https://issues.apache.org/jira/browse/HBASE-25568
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>Priority: Critical
>
> There is a potential DoS when processing untrusted Thrift payloads; see
>   https://seclists.org/oss-sec/2021/q1/140
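
For downstream projects that pull in libthrift transitively and want an interim mitigation before picking up the upgraded HBase release, one option is to pin a patched Thrift version via Maven dependency management. This is only a hypothetical pom.xml sketch; the coordinates and the 0.14.1 version are assumptions here, not taken from the HBASE-25568 patch itself:

{code:xml}
<!-- Hypothetical downstream pom.xml override, not the HBASE-25568 change itself:
     force a libthrift release that is not affected by CVE-2020-13949. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.thrift</groupId>
      <artifactId>libthrift</artifactId>
      <version>0.14.1</version> <!-- assumed patched version; verify against the CVE advisory -->
    </dependency>
  </dependencies>
</dependencyManagement>
{code}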



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-25011) Throwable is swallowed if the catch block throws another exception

2020-09-11 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-25011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated HBASE-25011:
--
Description: 
If this.close() throws an exception inside the catch block, the subsequent "throw t" is never reached, so the original error is swallowed and we never see the actual failure. The figure below shows the relevant code on the HBase master branch.

!image-2020-09-11-09-45-06-952.png!
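
To illustrate the failure mode, here is a small, self-contained sketch. The method names and messages are hypothetical (the real code catches Throwable around region initialization, per the screenshot above), but the shape of the bug is the same: when close() itself throws inside the catch block, "throw t" is never reached and the original error is lost. One possible fix is to attach the close() failure as a suppressed exception so both errors stay visible.

{code:java}
// Hypothetical simplification of the pattern described above, not the actual HRegion code.
public class SwallowedThrowableDemo {

  // Stand-in for this.close(); in the reported case it fails with
  // "The new max sequence id ... is less than the old max sequence id ...".
  static void close() throws Exception {
    throw new Exception("close() failed");
  }

  // Broken pattern: if close() throws, the original error 't' is swallowed.
  static void openBroken() throws Exception {
    try {
      throw new Exception("actual error while opening the region");
    } catch (Exception t) {
      close();   // throws, so the next line never runs
      throw t;   // the actual error is lost
    }
  }

  // Possible fix: keep the original error and attach the close() failure to it.
  static void openFixed() throws Exception {
    try {
      throw new Exception("actual error while opening the region");
    } catch (Exception t) {
      try {
        close();
      } catch (Exception onClose) {
        t.addSuppressed(onClose);  // both errors now appear in the stack trace
      }
      throw t;
    }
  }

  public static void main(String[] args) {
    try { openBroken(); } catch (Exception e) { e.printStackTrace(); } // only shows "close() failed"
    try { openFixed(); }  catch (Exception e) { e.printStackTrace(); } // shows the actual error plus a suppressed close() failure
  }
}
{code}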

This is the log from my production environment; the actual error is never printed:

2020-09-10 16:38:17,249 | INFO  | RS_OPEN_REGION-regionserve | Open aes_table,,1599722342956.c05438c8c2e3ec250e8fcbf35b49694d. | org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler.process(AssignRegionHandler.java:135)
2020-09-10 16:38:17,249 | INFO  | RS_OPEN_REGION-regionserve | Open aes_table,,1599722342956.c05438c8c2e3ec250e8fcbf35b49694d. | org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler.process(AssignRegionHandler.java:135)
2020-09-10 16:38:17,254 | INFO  | RS_OPEN_REGION-regionserver | System coprocessor org.apache.hadoop.hbase.hindex.server.regionserver.HIndexRegionCoprocessor loaded, priority=536870911. | org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadSystemCoprocessors(CoprocessorHost.java:165)
2020-09-10 16:38:17,254 | INFO  | RS_OPEN_REGION-regionserver | System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider loaded, priority=536870912. | org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadSystemCoprocessors(CoprocessorHost.java:165)
2020-09-10 16:38:17,254 | INFO  | RS_OPEN_REGION-regionserver | System coprocessor com.huawei.hadoop.hbase.backup.services.RecoveryCoprocessor loaded, priority=536870913. | org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadSystemCoprocessors(CoprocessorHost.java:165)
2020-09-10 16:38:17,254 | WARN  | RS_OPEN_REGION-regionserver | HbaseUserUtilsImpl.initialize: Unexpected: initialization called more than once! | org.apache.ranger.authorization.hbase.HbaseUserUtilsImpl.initiailize(HbaseUserUtilsImpl.java:48)
2020-09-10 16:38:17,254 | INFO  | RS_OPEN_REGION-regionserver | System coprocessor org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor loaded, priority=536870914. | org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadSystemCoprocessors(CoprocessorHost.java:165)
2020-09-10 16:38:17,254 | WARN  | RS_OPEN_REGION-regionserver | SecureBulkLoadEndpoint is deprecated. It will be removed in future releases. | org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.start(SecureBulkLoadEndpoint.java:75)
2020-09-10 16:38:17,255 | WARN  | RS_OPEN_REGION-regionserver | Secure bulk load has been integrated into HBase core. | org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.start(SecureBulkLoadEndpoint.java:76)
2020-09-10 16:38:17,255 | INFO  | RS_OPEN_REGION-regionserver | System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint loaded, priority=536870915. | org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadSystemCoprocessors(CoprocessorHost.java:165)
2020-09-10 16:38:17,255 | INFO  | RS_OPEN_REGION-regionserver | System coprocessor org.apache.hadoop.hbase.security.access.ReadOnlyClusterEnabler loaded, priority=536870916. | org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadSystemCoprocessors(CoprocessorHost.java:165)
2020-09-10 16:38:17,255 | INFO  | RS_OPEN_REGION-regionserver | System coprocessor org.apache.hadoop.hbase.coprocessor.MetaTableMetrics loaded, priority=536870917. | org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadSystemCoprocessors(CoprocessorHost.java:165)
2020-09-10 16:38:17,255 | INFO  | RS_OPEN_REGION-regionserver | Unable to get remote Address | org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor.getRemoteAddress(RangerAuthorizationCoprocessor.java:210)
2020-09-10 16:38:17,255 | INFO  | RS_OPEN_REGION-regionserver | Waiting for flushes and compactions to finish for the region aes_table,,1599722342956.c05438c8c2e3ec250e8fcbf35b49694d. | org.apache.hadoop.hbase.regionserver.HRegion.waitForFlushesAndCompactions(HRegion.java:1812)
2020-09-10 16:38:17,255 | INFO  | RS_OPEN_REGION-regionserver | Total wait time for flushes and compaction for the region aes_table,,1599722342956.c05438c8c2e3ec250e8fcbf35b49694d. is: 0ms | org.apache.hadoop.hbase.regionserver.HRegion.waitForFlushesAndCompactions(HRegion.java:1848)
2020-09-10 16:38:17,256 | INFO  | RS_OPEN_REGION-regionserver | Closing region aes_table,,1599722342956.c05438c8c2e3ec250e8fcbf35b49694d. | org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1676)
2020-09-10 16:38:17,337 | WARN  | RS_OPEN_REGION-regionserver | Failed to open region aes_table,,1599722342956.c05438c8c2e3ec250e8fcbf35b49694d., will report to master | org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler.cleanUpAndReportFailure(AssignRegionHandler.java:89)
java.io.IOException: The new max sequence id 1 is less than the old max sequence

[jira] [Created] (HBASE-25011) Throwable is swallowed if the catch block throws another exception

2020-09-10 Thread Chao Wang (Jira)
Chao Wang created HBASE-25011:
-

 Summary: Throwable is swallowed if the catch block throws another exception
 Key: HBASE-25011
 URL: https://issues.apache.org/jira/browse/HBASE-25011
 Project: HBase
  Issue Type: Bug
  Components: Region Assignment
Affects Versions: 2.2.3
Reporter: Chao Wang
Assignee: Chao Wang
 Attachments: image-2020-09-11-09-45-06-952.png

If this.close() throws an exception inside the catch block, the subsequent "throw t" is never reached, so the original error is swallowed and we never see the actual failure. The figure below shows the relevant code on the HBase master branch.

!image-2020-09-11-09-45-06-952.png!

This is the log from my production environment; the actual error is never printed:

2020-09-10 16:38:17,255 | INFO  | RS_OPEN_REGION-regionserver/node-group-1wKzN0003:16020-11 | Waiting for flushes and compactions to finish for the region aes_table,,1599722342956.c05438c8c2e3ec250e8fcbf35b49694d. | org.apache.hadoop.hbase.regionserver.HRegion.waitForFlushesAndCompactions(HRegion.java:1812)
2020-09-10 16:38:17,255 | INFO  | RS_OPEN_REGION-regionserver/node-group-1wKzN0003:16020-11 | Waiting for flushes and compactions to finish for the region aes_table,,1599722342956.c05438c8c2e3ec250e8fcbf35b49694d. | org.apache.hadoop.hbase.regionserver.HRegion.waitForFlushesAndCompactions(HRegion.java:1812)
2020-09-10 16:38:17,255 | INFO  | RS_OPEN_REGION-regionserver/node-group-1wKzN0003:16020-11 | Total wait time for flushes and compaction for the region aes_table,,1599722342956.c05438c8c2e3ec250e8fcbf35b49694d. is: 0ms | org.apache.hadoop.hbase.regionserver.HRegion.waitForFlushesAndCompactions(HRegion.java:1848)
2020-09-10 16:38:17,256 | INFO  | RS_OPEN_REGION-regionserver/node-group-1wKzN0003:16020-11 | Closing region aes_table,,1599722342956.c05438c8c2e3ec250e8fcbf35b49694d. | org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1676)
2020-09-10 16:38:17,337 | WARN  | RS_OPEN_REGION-regionserver/node-group-1wKzN0003:16020-11 | Failed to open region aes_table,,1599722342956.c05438c8c2e3ec250e8fcbf35b49694d., will report to master | org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler.cleanUpAndReportFailure(AssignRegionHandler.java:89)
java.io.IOException: The new max sequence id 1 is less than the old max sequence id 10
  at org.apache.hadoop.hbase.wal.WALSplitUtil.writeRegionSequenceIdFile(WALSplitUtil.java:413)
  at org.apache.hadoop.hbase.regionserver.HRegion.writeRegionCloseMarker(HRegion.java:1241)
  at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1781)
  at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1594)
  at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1540)
  at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7484)
  at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7429)
  at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7401)
  at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7359)
  at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7310)
  at org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler.process(AssignRegionHandler.java:145)
  at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23340) hmaster /hbase/replication/rs session expiry (hbase replication defaults to true, we don't use it) prevents logcleaner from cleaning oldWALs, which results in old

2020-06-11 Thread Chao Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17133525#comment-17133525
 ] 

Chao Wang commented on HBASE-23340:
---

Yes, I can.

> hmaster /hbase/replication/rs session expiry (hbase replication defaults to 
> true, we don't use it) prevents logcleaner from cleaning oldWALs, which 
> results in oldWALs too large (more than 2TB)
> -
>
> Key: HBASE-23340
> URL: https://issues.apache.org/jira/browse/HBASE-23340
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Reporter: jackylau
>Assignee: jackylau
>Priority: Major
> Fix For: master
>
> Attachments: Snipaste_2019-11-21_10-39-25.png, 
> Snipaste_2019-11-21_14-10-36.png
>
>
> An hmaster /hbase/replication/rs session expiry (hbase replication defaults 
> to true, although we don't use it) causes the LogCleaner to stop cleaning 
> oldWALs, which results in oldWALs growing too large (more than 2TB).
> !Snipaste_2019-11-21_10-39-25.png!
>  
> !Snipaste_2019-11-21_14-10-36.png!
>  
> We could solve it in one of the following ways:
> 1) Increase the session timeout (but I do not think this is a good idea, 
> because we do not know what value would be suitable).
> 2) Disable HBase replication. This is not a good idea either, since some of 
> our users rely on this feature.
> 3) Add a retry count; for example, once the expiry has already happened three 
> times, stop the ReplicationLogCleaner and SnapShotCleaner.
> Those are all my ideas. I do not know whether they are suitable; if they are, 
> could I commit a PR?
> Does anyone have a good idea?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23340) hmaster /hbase/replication/rs session expiry (hbase replication defaults to true, we don't use it) prevents logcleaner from cleaning oldWALs, which results in old

2020-06-05 Thread Chao Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17126433#comment-17126433
 ] 

Chao Wang commented on HBASE-23340:
---

Hello, I solved this issue in my environment. I would like to contribute the fix to the community.

> hmaster /hbase/replication/rs session expiry (hbase replication defaults to 
> true, we don't use it) prevents logcleaner from cleaning oldWALs, which 
> results in oldWALs too large (more than 2TB)
> -
>
> Key: HBASE-23340
> URL: https://issues.apache.org/jira/browse/HBASE-23340
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Reporter: jackylau
>Assignee: jackylau
>Priority: Major
> Fix For: master
>
> Attachments: Snipaste_2019-11-21_10-39-25.png, 
> Snipaste_2019-11-21_14-10-36.png
>
>
> An hmaster /hbase/replication/rs session expiry (hbase replication defaults 
> to true, although we don't use it) causes the LogCleaner to stop cleaning 
> oldWALs, which results in oldWALs growing too large (more than 2TB).
> !Snipaste_2019-11-21_10-39-25.png!
>  
> !Snipaste_2019-11-21_14-10-36.png!
>  
> We could solve it in one of the following ways:
> 1) Increase the session timeout (but I do not think this is a good idea, 
> because we do not know what value would be suitable).
> 2) Disable HBase replication. This is not a good idea either, since some of 
> our users rely on this feature.
> 3) Add a retry count; for example, once the expiry has already happened three 
> times, stop the ReplicationLogCleaner and SnapShotCleaner.
> Those are all my ideas. I do not know whether they are suitable; if they are, 
> could I commit a PR?
> Does anyone have a good idea?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23340) hmaster /hbase/replication/rs session expiry (hbase replication defaults to true, we don't use it) prevents logcleaner from cleaning oldWALs, which results in old

2020-05-31 Thread Chao Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17120694#comment-17120694
 ] 

Chao Wang commented on HBASE-23340:
---

Thank you for this issue. It seems my 1.3.1 code has already merged HBASE-15234, but this issue still exists.

> hmaster /hbase/replication/rs session expiry (hbase replication defaults to 
> true, we don't use it) prevents logcleaner from cleaning oldWALs, which 
> results in oldWALs too large (more than 2TB)
> -
>
> Key: HBASE-23340
> URL: https://issues.apache.org/jira/browse/HBASE-23340
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Reporter: jackylau
>Assignee: jackylau
>Priority: Major
> Fix For: master
>
> Attachments: Snipaste_2019-11-21_10-39-25.png, 
> Snipaste_2019-11-21_14-10-36.png
>
>
> An hmaster /hbase/replication/rs session expiry (hbase replication defaults 
> to true, although we don't use it) causes the LogCleaner to stop cleaning 
> oldWALs, which results in oldWALs growing too large (more than 2TB).
> !Snipaste_2019-11-21_10-39-25.png!
>  
> !Snipaste_2019-11-21_14-10-36.png!
>  
> We could solve it in one of the following ways:
> 1) Increase the session timeout (but I do not think this is a good idea, 
> because we do not know what value would be suitable).
> 2) Disable HBase replication. This is not a good idea either, since some of 
> our users rely on this feature.
> 3) Add a retry count; for example, once the expiry has already happened three 
> times, stop the ReplicationLogCleaner and SnapShotCleaner.
> Those are all my ideas. I do not know whether they are suitable; if they are, 
> could I commit a PR?
> Does anyone have a good idea?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23340) hmaster /hbase/replication/rs session expiry (hbase replication defaults to true, we don't use it) prevents logcleaner from cleaning oldWALs, which results in old

2020-05-28 Thread Chao Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17119233#comment-17119233
 ] 

Chao Wang commented on HBASE-23340:
---

Hello, I hit this issue in my environment as well. I think you could add a retry for the ZooKeeper client: not a simple retry of the failed call, but closing the client whose session has expired and creating a new ZooKeeper client before retrying. A rough sketch of that idea is below.
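
This is only a hypothetical illustration built on the plain ZooKeeper client API; the class name, connect string, and timeout are assumptions, not the actual ReplicationLogCleaner code:

{code:java}
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;

// Sketch of the retry idea: when the session has expired, the old client can
// never recover, so close it and create a fresh ZooKeeper client before retrying.
public class ZkReconnectSketch {

  private static final String CONNECT_STRING = "localhost:2181"; // assumption
  private static final int SESSION_TIMEOUT_MS = 30000;           // assumption

  private ZooKeeper zk;

  public ZkReconnectSketch() throws Exception {
    this.zk = new ZooKeeper(CONNECT_STRING, SESSION_TIMEOUT_MS, event -> { });
  }

  // Read a znode, rebuilding the client up to maxRetries times on session expiry.
  public byte[] readWithRetry(String path, int maxRetries) throws Exception {
    for (int attempt = 0; ; attempt++) {
      try {
        return zk.getData(path, false, null);
      } catch (KeeperException.SessionExpiredException e) {
        if (attempt >= maxRetries) {
          throw e; // give up after the configured number of retries
        }
        zk.close(); // discard the expired client
        zk = new ZooKeeper(CONNECT_STRING, SESSION_TIMEOUT_MS, event -> { });
      }
    }
  }
}
{code}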

> hmaster /hbase/replication/rs session expiry (hbase replication defaults to 
> true, we don't use it) prevents logcleaner from cleaning oldWALs, which 
> results in oldWALs too large (more than 2TB)
> -
>
> Key: HBASE-23340
> URL: https://issues.apache.org/jira/browse/HBASE-23340
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Reporter: jackylau
>Assignee: jackylau
>Priority: Major
> Fix For: master
>
> Attachments: Snipaste_2019-11-21_10-39-25.png, 
> Snipaste_2019-11-21_14-10-36.png
>
>
> An hmaster /hbase/replication/rs session expiry (hbase replication defaults 
> to true, although we don't use it) causes the LogCleaner to stop cleaning 
> oldWALs, which results in oldWALs growing too large (more than 2TB).
> !Snipaste_2019-11-21_10-39-25.png!
>  
> !Snipaste_2019-11-21_14-10-36.png!
>  
> We could solve it in one of the following ways:
> 1) Increase the session timeout (but I do not think this is a good idea, 
> because we do not know what value would be suitable).
> 2) Disable HBase replication. This is not a good idea either, since some of 
> our users rely on this feature.
> 3) Add a retry count; for example, once the expiry has already happened three 
> times, stop the ReplicationLogCleaner and SnapShotCleaner.
> Those are all my ideas. I do not know whether they are suitable; if they are, 
> could I commit a PR?
> Does anyone have a good idea?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)