[jira] [Created] (HBASE-23656) [MERGETOOL]JDHBASE Support Merge region by pattern

2020-01-07 Thread zhengsicheng (Jira)
zhengsicheng created HBASE-23656:


 Summary: [MERGETOOL]JDHBASE Support Merge region by pattern
 Key: HBASE-23656
 URL: https://issues.apache.org/jira/browse/HBASE-23656
 Project: HBase
  Issue Type: New Feature
  Components: master
Reporter: zhengsicheng
 Fix For: 3.0.0


Usage: bin/hbase onlinemerge [--tableName=] [--startRegion=] [--stopRegion=] 
[--maxRegionSize=] [--maxRegionCreateTime=] [--numMaxMergePlans=] 
[--targetRegionCount=] [--printExecutionPlan=] [--configMergePauseTime=]

Options:
--h or --help print help
--tableName table name; must not be null
--startRegion start region
--stopRegion stop region
--maxRegionSize max region size, in GB
--maxRegionCreateTime max region create time, format yyyy/MM/dd HH:mm:ss
--numMaxMergePlans max number of merge plans
--targetRegionCount target region count
--configMergePauseTime merge pause time, in milliseconds
--printExecutionPlan default is true (only print the execution plans); set to 
false to execute the merges

Examples:
bin/hbase onlinemerge --tableName=test:test1 
--startRegion=test:test1,,1576835912332.01d0d6c2b41e204104524d9aec6074fb. 
--stopRegion=test:test1,,1573044786980.0c9b5bd93f3b19eb9bd1a1011ddff66f.
 --maxRegionSize=0 --maxRegionCreateTime=yyyy/MM/dd HH:mm:ss 
--numMaxMergePlans=2 --targetRegionCount=4 --printExecutionPlan=false
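The options above suggest a planner that pairs adjacent regions until a target region count is reached, subject to a size cap. The following is an illustrative Java sketch of that idea only: the class and method names (MergePlanSketch, planMerges) and the pairing heuristic are invented for this example, not taken from the patch.

```java
import java.util.ArrayList;
import java.util.List;

public class MergePlanSketch {
    /**
     * Pair adjacent regions whose combined size stays under maxSizeGb,
     * stopping once the projected region count reaches targetCount.
     * Returns a list of index pairs (i, i+1) to merge.
     */
    public static List<int[]> planMerges(long[] regionSizesGb, long maxSizeGb, int targetCount) {
        List<int[]> plans = new ArrayList<>();
        int projected = regionSizesGb.length;
        for (int i = 0; i + 1 < regionSizesGb.length && projected > targetCount; i++) {
            if (regionSizesGb[i] + regionSizesGb[i + 1] <= maxSizeGb) {
                plans.add(new int[] { i, i + 1 });
                projected--;
                i++; // skip the partner: a region joins at most one plan
            }
        }
        return plans;
    }

    public static void main(String[] args) {
        // 6 regions, two of them empty; aim for 4 regions with a 10 GB cap.
        long[] sizes = { 0, 3, 0, 5, 8, 2 };
        List<int[]> plans = planMerges(sizes, 10, 4);
        System.out.println(plans.size()); // two merge pairs: (0,1) and (2,3)
    }
}
```

With --printExecutionPlan=true the real tool would only report such pairs; with false it would submit them as merge requests.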



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-23656) [MERGETOOL]JDHBASE Support Merge region by pattern

2020-01-07 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-23656:
-
Description: 
Design Objectives:
 # Merge empty regions
 # Neaten regions
 # Merge expired regions

Usage: bin/hbase onlinemerge [--tableName=] [--startRegion=] [--stopRegion=] 
[--maxRegionSize=] [--maxRegionCreateTime=] [--numMaxMergePlans=] 
[--targetRegionCount=] [--printExecutionPlan=] [--configMergePauseTime=]

Options:
 --h or --help print help
 --tableName table name; must not be null
 --startRegion start region
 --stopRegion stop region
 --maxRegionSize max region size, in GB
 --maxRegionCreateTime max region create time, format yyyy/MM/dd HH:mm:ss
 --numMaxMergePlans max number of merge plans
 --targetRegionCount target region count
 --configMergePauseTime merge pause time, in milliseconds
 --printExecutionPlan default is true (only print the execution plans); set to 
false to execute the merges

Examples:
 bin/hbase onlinemerge --tableName=test:test1 
--startRegion=test:test1,,1576835912332.01d0d6c2b41e204104524d9aec6074fb. 
--stopRegion=test:test1,,1573044786980.0c9b5bd93f3b19eb9bd1a1011ddff66f.
 --maxRegionSize=0 --maxRegionCreateTime=yyyy/MM/dd HH:mm:ss 
--numMaxMergePlans=2 --targetRegionCount=4 --printExecutionPlan=false


> [MERGETOOL]JDHBASE Support Merge region by pattern
> --
>
> Key: HBASE-23656
> URL: https://issues.apache.org/jira/browse/HBASE-23656
> Project: HBase
>  Issue Type: New Feature
>  Components: master
>Reporter: zhengsicheng
>Priority: Minor
> Fix For: 3.0.0
>
>


[jira] [Updated] (HBASE-23656) [MERGETOOL] HBASE Support Merge region by pattern

2020-01-07 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-23656:
-
Summary: [MERGETOOL] HBASE Support Merge region by pattern  (was: 
[MERGETOOL]JDHBASE Support Merge region by pattern)



[jira] [Updated] (HBASE-23656) [MERGETOOL] HBASE Support Merge region by pattern

2020-01-08 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-23656:
-
Attachment: HBASE-23656.master.v1.patch



[jira] [Updated] (HBASE-23656) [MERGETOOL] HBASE Support Merge region by pattern

2020-01-08 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-23656:
-
Attachment: (was: HBASE-23656.master.v1.patch)



[jira] [Updated] (HBASE-23656) [MERGETOOL] HBASE Support Merge region by pattern

2020-01-08 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-23656:
-
Attachment: HBASE-23656.master.v1.patch
Status: Patch Available  (was: Open)



[jira] [Updated] (HBASE-23656) [MERGETOOL] HBASE Support Merge region by pattern

2020-01-08 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-23656:
-
Status: Open  (was: Patch Available)



[jira] [Updated] (HBASE-23656) [MERGETOOL] HBASE Support Merge region by pattern

2020-01-08 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-23656:
-
Attachment: (was: HBASE-23656.master.v1.patch)



[jira] [Updated] (HBASE-23656) [MERGETOOL] HBASE Support Merge region by pattern

2020-01-08 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-23656:
-
Attachment: HBASE-23656.master.v1.patch
Status: Patch Available  (was: Open)

> [MERGETOOL] HBASE Support Merge region by pattern
> -
>
> Key: HBASE-23656
> URL: https://issues.apache.org/jira/browse/HBASE-23656
> Project: HBase
>  Issue Type: New Feature
>  Components: master
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HBASE-23656.master.v1.patch
>
>


[jira] [Updated] (HBASE-23656) [MERGETOOL] HBASE Support Merge region by pattern

2020-01-10 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-23656:
-
Attachment: HBASE-23656.branch-1.4.v1.patch



[jira] [Updated] (HBASE-23656) [MERGETOOL] HBASE Support Merge region by pattern

2020-01-10 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-23656:
-
Fix Version/s: 1.4.13



[jira] [Commented] (HBASE-23656) [MERGETOOL] HBASE Support Merge region by pattern

2020-01-13 Thread zhengsicheng (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17014121#comment-17014121
 ] 

zhengsicheng commented on HBASE-23656:
--

[~zhangduo] Thank you for reviewing the code.



[jira] [Commented] (HBASE-25634) The client frequently exceeds the quota, which causes the meta table scan to be too high

2021-07-09 Thread zhengsicheng (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17377959#comment-17377959
 ] 

zhengsicheng commented on HBASE-25634:
--

[https://github.com/apache/hbase/pull/3030/files] [~stack] Thank you!

> The client frequently exceeds the quota, which causes the meta table scan to 
> be too high
> 
>
> Key: HBASE-25634
> URL: https://issues.apache.org/jira/browse/HBASE-25634
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.3.4
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
> Attachments: image-2021-03-05-12-00-33-522.png, 
> image-2021-03-05-12-01-08-769.png
>
>
>  When the client scan operation, the server frequently returns 
> RpcThrottlingException, which will cause the meta table request to become 
> high.
>  
>  
> /hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerImpl.java
> {code:java}
> @Override
> public T callWithRetries(RetryingCallable<T> callable, int callTimeout)
>     throws IOException, RuntimeException {
>   List<RetriesExhaustedException.ThrowableWithExtraContext> exceptions = new ArrayList<>();
>   tracker.start();
>   context.clear();
>   for (int tries = 0;; tries++) {
>     long expectedSleep;
>     try {
>       // bad cache entries are cleared in the call to RetryingCallable#throwable() in catch block
>       // callable.prepare() forces a reload of the server location
>       callable.prepare(tries != 0);
>       interceptor.intercept(context.prepare(callable, tries));
>       return callable.call(getTimeout(callTimeout));
>     } catch (PreemptiveFastFailException e) {
>       throw e;
>     } catch (Throwable t) {
>       ExceptionUtil.rethrowIfInterrupt(t);
>       Throwable cause = t.getCause();
>       if (cause instanceof DoNotRetryIOException) {
>         // Fail fast
>         throw (DoNotRetryIOException) cause;
>       }
>       // translateException throws exception when should not retry: i.e. when request is bad.
>       interceptor.handleFailure(context, t);
>       t = translateException(t);
> {code}
>  
>  
>  
> !image-2021-03-05-12-00-33-522.png!
> !image-2021-03-05-12-01-08-769.png!
>  
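The report's point is that a throttling rejection does not mean the cached region location is stale, so the retry path should back off without invalidating the location and re-scanning meta. The sketch below illustrates that idea only; ThrottlingException is an invented stand-in, and whether this matches the actual change in PR 3030 is not confirmed here.

```java
public class RetryPolicySketch {
    /** Stand-in for a quota/throttling rejection; not HBase's real class. */
    static class ThrottlingException extends RuntimeException {}

    /** Only non-throttling failures should invalidate the cached region location. */
    public static boolean shouldClearLocationCache(Throwable t) {
        return !(t instanceof ThrottlingException);
    }

    /** Capped exponential backoff, in milliseconds, for retried calls. */
    public static long backoffMs(int tries, long baseMs, long capMs) {
        return Math.min(capMs, baseMs << Math.min(tries, 10));
    }

    public static void main(String[] args) {
        // A throttled call keeps its location cache and just waits longer.
        System.out.println(shouldClearLocationCache(new ThrottlingException()));
        System.out.println(backoffMs(3, 100, 10_000));
    }
}
```

Keeping the location cached on throttling is what prevents each rejected scan from turning into an extra meta lookup.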





[jira] [Created] (HBASE-26156) tablelist show pages in masterPage

2021-07-30 Thread zhengsicheng (Jira)
zhengsicheng created HBASE-26156:


 Summary: tablelist show pages in masterPage
 Key: HBASE-26156
 URL: https://issues.apache.org/jira/browse/HBASE-26156
 Project: HBase
  Issue Type: Improvement
Reporter: zhengsicheng
Assignee: zhengsicheng








[jira] [Updated] (HBASE-26156) tablelist show pages in masterPage

2021-07-30 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-26156:
-
Description: 
When a cluster has many tables, the table list makes the master page too long, 
so it needs to be displayed in pages.
 !screenshot-1.png|thumbnail! 
Table list supports paging and retrieval:
 !screenshot-2.png|thumbnail! 
 !screenshot-3.png|thumbnail!
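The paging being proposed is a simple slice over the full table list; a minimal illustrative sketch (class and method names are invented, not the actual master UI code):

```java
import java.util.Arrays;
import java.util.List;

public class TableListPaging {
    /** Return the pageIndex-th page (0-based) of at most pageSize entries. */
    public static <T> List<T> page(List<T> all, int pageSize, int pageIndex) {
        int from = Math.min(pageIndex * pageSize, all.size());
        int to = Math.min(from + pageSize, all.size());
        return all.subList(from, to);
    }

    public static void main(String[] args) {
        List<String> tables = Arrays.asList("t1", "t2", "t3", "t4", "t5");
        System.out.println(page(tables, 2, 2)); // last, partial page: [t5]
    }
}
```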

> tablelist show pages in masterPage
> --
>
> Key: HBASE-26156
> URL: https://issues.apache.org/jira/browse/HBASE-26156
> Project: HBase
>  Issue Type: Improvement
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor


[jira] [Updated] (HBASE-26156) tablelist show pages in masterPage

2021-07-30 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-26156:
-
Description: 
When a cluster has many tables, the table list makes the master page too long, 
so it needs to be displayed in pages.

 Table list supports paging and retrieval:





[jira] [Updated] (HBASE-26156) tablelist show pages in masterPage

2021-07-30 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-26156:
-
Attachment: image-2021-07-30-20-27-56-468.png



[jira] [Updated] (HBASE-26156) tablelist show pages in masterPage

2021-07-30 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-26156:
-
Description: 
When a cluster has many tables, the table list makes the master page too long, 
so it needs to be displayed in pages.

!image-2021-07-30-20-26-56-124.png!

Table list supports paging and retrieval:

!image-2021-07-30-20-27-56-468.png!

 



[jira] [Updated] (HBASE-26156) tablelist show pages in masterPage

2021-07-30 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-26156:
-
Description: 
When a cluster has many tables, the table list makes the master page too long, 
so it needs to be displayed in pages.

!image-2021-07-30-20-26-56-124.png!

Table list supports paging and retrieval:

!image-2021-07-30-20-27-56-468.png!

!image-2021-07-30-20-29-30-347.png!

 

  was:
When the amount of data in the table is large, the display length of the master 
page is too long, and it needs to be displayed in pages.

!image-2021-07-30-20-26-56-124.png!

Table list supports paging and retrieval:

!image-2021-07-30-20-27-56-468.png!

 


> tablelist show pages in masterPage
> --
>
> Key: HBASE-26156
> URL: https://issues.apache.org/jira/browse/HBASE-26156
> Project: HBase
>  Issue Type: Improvement
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
> Attachments: image-2021-07-30-20-27-56-468.png, 
> image-2021-07-30-20-29-30-347.png
>
>
> When the amount of data in the table is large, the display length of the 
> master page is too long, and it needs to be displayed in pages.
> !image-2021-07-30-20-26-56-124.png!
> Table list supports paging and retrieval:
> !image-2021-07-30-20-27-56-468.png!
> !image-2021-07-30-20-29-30-347.png!
>  





[jira] [Updated] (HBASE-26156) tablelist show pages in masterPage

2021-07-30 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-26156:
-
Description: 
When the amount of data in the table is large, the display length of the master 
page is too long, and it needs to be displayed in pages.

 

Table list supports paging and retrieval:

!image-2021-07-30-20-27-56-468.png!

!image-2021-07-30-20-29-30-347.png!

 

  was:
When the amount of data in the table is large, the display length of the master 
page is too long, and it needs to be displayed in pages.

!image-2021-07-30-20-26-56-124.png!

Table list supports paging and retrieval:

!image-2021-07-30-20-27-56-468.png!

!image-2021-07-30-20-29-30-347.png!

 


> tablelist show pages in masterPage
> --
>
> Key: HBASE-26156
> URL: https://issues.apache.org/jira/browse/HBASE-26156
> Project: HBase
>  Issue Type: Improvement
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
> Attachments: image-2021-07-30-20-27-56-468.png, 
> image-2021-07-30-20-29-30-347.png
>
>
> When the amount of data in the table is large, the display length of the 
> master page is too long, and it needs to be displayed in pages.
>  
> Table list supports paging and retrieval:
> !image-2021-07-30-20-27-56-468.png!
> !image-2021-07-30-20-29-30-347.png!
>  





[jira] [Updated] (HBASE-26156) tablelist show pages in masterPage

2021-07-30 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-26156:
-
Description: 
When the number of tables is large, the table list on the Master page grows too 
long, so it needs to be displayed in pages.

 

Table list supports paging and retrieval:

!image-2021-07-30-20-27-56-468.png!

!image-2021-07-30-20-29-30-347.png!

 

  was:
When the amount of data in the table is large, the display length of the master 
page is too long, and it needs to be displayed in pages.

 

Table list supports paging and retrieval:

!image-2021-07-30-20-27-56-468.png!

!image-2021-07-30-20-29-30-347.png!

 


> tablelist show pages in masterPage
> --
>
> Key: HBASE-26156
> URL: https://issues.apache.org/jira/browse/HBASE-26156
> Project: HBase
>  Issue Type: Improvement
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
> Attachments: image-2021-07-30-20-27-56-468.png, 
> image-2021-07-30-20-29-30-347.png
>
>
> When the number of tables is large, the display length of the master page is 
> too long, and it needs to be displayed in pages.
>  
> Table list supports paging and retrieval:
> !image-2021-07-30-20-27-56-468.png!
> !image-2021-07-30-20-29-30-347.png!
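> The paging logic behind such a table-list view can be sketched as below. This is a generic, HBase-independent illustration; the `TableListPager` class and `page` method are hypothetical names, not the actual Master UI code.

```java
import java.util.Collections;
import java.util.List;

// Hypothetical helper showing only the paging logic: given the full table
// list, return the entries belonging to one page.
public final class TableListPager {
    private TableListPager() {}

    /** Returns the 1-based page {@code page} of {@code names}, {@code pageSize} entries per page. */
    public static List<String> page(List<String> names, int page, int pageSize) {
        if (page < 1 || pageSize < 1) {
            return Collections.emptyList();
        }
        int from = (page - 1) * pageSize;
        if (from >= names.size()) {
            return Collections.emptyList();
        }
        // subList is a view; pages past the end were already handled above.
        return names.subList(from, Math.min(from + pageSize, names.size()));
    }
}
```

> A retrieval (search) box then only needs to filter `names` before paging, so both features compose on the same slice step.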
>  





[jira] [Updated] (HBASE-26156) tablelist show pages in masterPage

2021-07-30 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-26156:
-
Attachment: image-2021-07-30-20-29-30-347.png

> tablelist show pages in masterPage
> --
>
> Key: HBASE-26156
> URL: https://issues.apache.org/jira/browse/HBASE-26156
> Project: HBase
>  Issue Type: Improvement
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
> Attachments: image-2021-07-30-20-27-56-468.png, 
> image-2021-07-30-20-29-30-347.png
>
>
> When the amount of data in the table is large, the display length of the 
> master page is too long, and it needs to be displayed in pages.
> !image-2021-07-30-20-26-56-124.png!
> Table list supports paging and retrieval:
> !image-2021-07-30-20-27-56-468.png!
>  





[jira] [Created] (HBASE-26208) Supports revoke @ns single permission

2021-08-19 Thread zhengsicheng (Jira)
zhengsicheng created HBASE-26208:


 Summary: Supports revoke  @ns single permission
 Key: HBASE-26208
 URL: https://issues.apache.org/jira/browse/HBASE-26208
 Project: HBase
  Issue Type: Improvement
Reporter: zhengsicheng
Assignee: zhengsicheng








[jira] [Updated] (HBASE-26208) Supports revoke @ns single permission

2021-08-19 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-26208:
-
Affects Version/s: 2.3.4

> Supports revoke  @ns single permission
> --
>
> Key: HBASE-26208
> URL: https://issues.apache.org/jira/browse/HBASE-26208
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.3.4
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
>






[jira] [Updated] (HBASE-26208) Supports revoke @ns single permission

2021-08-19 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-26208:
-
Description: HBASE-2.3.4 

> Supports revoke  @ns single permission
> --
>
> Key: HBASE-26208
> URL: https://issues.apache.org/jira/browse/HBASE-26208
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.3.4
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
>
> HBASE-2.3.4 





[jira] [Updated] (HBASE-26208) Supports revoke @ns single permission

2021-08-19 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-26208:
-
Description: Not supports revoke @ns single permission :revoke 'bobsmith', 
'@ns1', 'C'  (was: HBASE-2.3.4 )

> Supports revoke  @ns single permission
> --
>
> Key: HBASE-26208
> URL: https://issues.apache.org/jira/browse/HBASE-26208
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.3.4
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
>
> Not supports revoke @ns single permission :revoke 'bobsmith', '@ns1', 'C'





[jira] [Updated] (HBASE-26208) Supports revoke @ns single permission

2021-08-19 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-26208:
-
Description: 
Not supports revoke @ns single permission :revoke 'bobsmith', '@ns1', 'C'

Supports revoke @ns single permission :revoke 'bobsmith', '@ns1', '\{C}'

 

!image-2021-08-19-18-09-47-215.png!

  was:Not supports revoke @ns single permission :revoke 'bobsmith', '@ns1', 'C'


> Supports revoke  @ns single permission
> --
>
> Key: HBASE-26208
> URL: https://issues.apache.org/jira/browse/HBASE-26208
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.3.4
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
>
> Not supports revoke @ns single permission :revoke 'bobsmith', '@ns1', 'C'
> Supports revoke @ns single permission :revoke 'bobsmith', '@ns1', '\{C}'
>  
> !image-2021-08-19-18-09-47-215.png!





[jira] [Updated] (HBASE-26208) Supports revoke @ns single permission

2021-08-19 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-26208:
-
Attachment: 1.jpeg

> Supports revoke  @ns single permission
> --
>
> Key: HBASE-26208
> URL: https://issues.apache.org/jira/browse/HBASE-26208
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.3.4
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
> Attachments: 1.jpeg
>
>
> Not supports revoke @ns single permission :revoke 'bobsmith', '@ns1', 'C'
> Supports revoke @ns single permission :revoke 'bobsmith', '@ns1', '\{C}'
>  
> !image-2021-08-19-18-09-47-215.png!





[jira] [Updated] (HBASE-26208) Supports revoke @ns single permission

2021-08-19 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-26208:
-
Description: 
Revoking a single @ns permission is not supported: revoke 'bobsmith', '@ns1', 'C'

Revoking a single @ns permission is supported with: revoke 'bobsmith', '@ns1', '\{C}'

!1.jpeg!

  was:
Not supports revoke @ns single permission :revoke 'bobsmith', '@ns1', 'C'

Supports revoke @ns single permission :revoke 'bobsmith', '@ns1', '\{C}'

 

!image-2021-08-19-18-09-47-215.png!


> Supports revoke  @ns single permission
> --
>
> Key: HBASE-26208
> URL: https://issues.apache.org/jira/browse/HBASE-26208
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.3.4
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
> Attachments: 1.jpeg
>
>
> Not supports revoke @ns single permission :revoke 'bobsmith', '@ns1', 'C'
> Supports revoke @ns single permission :revoke 'bobsmith', '@ns1', '\{C}'
> !1.jpeg!





[jira] [Commented] (HBASE-26208) Supports revoke @ns single permission

2021-08-19 Thread zhengsicheng (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17401589#comment-17401589
 ] 

zhengsicheng commented on HBASE-26208:
--

[~zhangduo] please review the code, thank you!

> Supports revoke  @ns single permission
> --
>
> Key: HBASE-26208
> URL: https://issues.apache.org/jira/browse/HBASE-26208
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.3.4
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
> Attachments: 1.jpeg
>
>
> Not supports revoke @ns single permission :revoke 'bobsmith', '@ns1', 'C'
> Supports revoke @ns single permission :revoke 'bobsmith', '@ns1', '\{C}'
> !1.jpeg!
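> Conceptually, revoking a single action means subtracting it from the user's stored action set instead of deleting the whole grant. A minimal, HBase-independent sketch of that idea; the `NamespaceAcl` class, its `Action` enum, and `revokeOne` are illustrative names, not the actual AccessController code:

```java
import java.util.EnumSet;

// Illustrative model of single-permission revoke: drop one action from the
// granted set; the grant disappears only when no actions remain.
public final class NamespaceAcl {
    public enum Action { READ, WRITE, EXEC, CREATE, ADMIN }

    /** Returns the actions left after revoking {@code revoked}; an empty set means drop the grant. */
    public static EnumSet<Action> revokeOne(EnumSet<Action> granted, Action revoked) {
        EnumSet<Action> remaining = EnumSet.copyOf(granted);
        remaining.remove(revoked);
        return remaining;
    }
}
```

> Under this model, revoking 'C' from a user holding \{R, C\} on @ns1 leaves \{R\} in place, rather than clearing every namespace permission.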





[jira] [Commented] (HBASE-26208) Supports revoke @ns single permission

2021-08-23 Thread zhengsicheng (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17403037#comment-17403037
 ] 

zhengsicheng commented on HBASE-26208:
--

[~stack] please review the code, thank you!

> Supports revoke  @ns single permission
> --
>
> Key: HBASE-26208
> URL: https://issues.apache.org/jira/browse/HBASE-26208
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.3.4
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
> Attachments: 1.jpeg
>
>
> Not supports revoke @ns single permission :revoke 'bobsmith', '@ns1', 'C'
> Supports revoke @ns single permission :revoke 'bobsmith', '@ns1', '\{C}'
> !1.jpeg!





[jira] [Updated] (HBASE-21678) Port HBASE-20636 (Introduce two bloom filter type ROWPREFIX and ROWPREFIX_DELIMITED) to branch-1

2020-09-21 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-21678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-21678:
-
Attachment: image-2020-09-21-19-37-31-970.png

> Port HBASE-20636 (Introduce two bloom filter type ROWPREFIX and 
> ROWPREFIX_DELIMITED) to branch-1
> 
>
> Key: HBASE-21678
> URL: https://issues.apache.org/jira/browse/HBASE-21678
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Kyle Purtell
>Priority: Major
> Attachments: HBASE-21678-branch-1.patch, HBASE-21678-branch-1.patch, 
> HBASE-21678-branch-1.patch, image-2020-09-21-19-37-31-970.png
>
>






[jira] [Updated] (HBASE-21678) Port HBASE-20636 (Introduce two bloom filter type ROWPREFIX and ROWPREFIX_DELIMITED) to branch-1

2020-09-21 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-21678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-21678:
-
Attachment: (was: image-2020-09-21-19-37-31-970.png)

> Port HBASE-20636 (Introduce two bloom filter type ROWPREFIX and 
> ROWPREFIX_DELIMITED) to branch-1
> 
>
> Key: HBASE-21678
> URL: https://issues.apache.org/jira/browse/HBASE-21678
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Kyle Purtell
>Priority: Major
> Attachments: HBASE-21678-branch-1.patch, HBASE-21678-branch-1.patch, 
> HBASE-21678-branch-1.patch
>
>






[jira] [Commented] (HBASE-21678) Port HBASE-20636 (Introduce two bloom filter type ROWPREFIX and ROWPREFIX_DELIMITED) to branch-1

2020-09-21 Thread zhengsicheng (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-21678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17199347#comment-17199347
 ] 

zhengsicheng commented on HBASE-21678:
--

branch-1:
{code:java}
// org.apache.hadoop.hbase.regionserver.StoreFileScanner
@Override
public boolean shouldUseScanner(Scan scan, SortedSet<byte[]> columns, long oldestUnexpiredTS) {
  return reader.passesTimerangeFilter(scan, oldestUnexpiredTS)
      && reader.passesKeyRangeFilter(scan)
      && reader.passesBloomFilter(scan, columns);
}

// org.apache.hadoop.hbase.regionserver.TestRowPrefixBloomFilter
HStore store = mock(HStore.class);
boolean exists = scanner.shouldUseScanner(scan, store, Long.MIN_VALUE);
{code}
The test failed: the branch-1 signature takes a SortedSet of columns as the second argument, while the ported test calls it with an HStore, so the test does not match the branch-1 API.

> Port HBASE-20636 (Introduce two bloom filter type ROWPREFIX and 
> ROWPREFIX_DELIMITED) to branch-1
> 
>
> Key: HBASE-21678
> URL: https://issues.apache.org/jira/browse/HBASE-21678
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Kyle Purtell
>Priority: Major
> Attachments: HBASE-21678-branch-1.patch, HBASE-21678-branch-1.patch, 
> HBASE-21678-branch-1.patch
>
>






[jira] [Commented] (HBASE-21678) Port HBASE-20636 (Introduce two bloom filter type ROWPREFIX and ROWPREFIX_DELIMITED) to branch-1

2020-09-22 Thread zhengsicheng (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-21678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17199901#comment-17199901
 ] 

zhengsicheng commented on HBASE-21678:
--

[~apurtell]

> Port HBASE-20636 (Introduce two bloom filter type ROWPREFIX and 
> ROWPREFIX_DELIMITED) to branch-1
> 
>
> Key: HBASE-21678
> URL: https://issues.apache.org/jira/browse/HBASE-21678
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Kyle Purtell
>Priority: Major
> Attachments: HBASE-21678-branch-1.patch, HBASE-21678-branch-1.patch, 
> HBASE-21678-branch-1.patch
>
>






[jira] [Comment Edited] (HBASE-21678) Port HBASE-20636 (Introduce two bloom filter type ROWPREFIX and ROWPREFIX_DELIMITED) to branch-1

2020-09-22 Thread zhengsicheng (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-21678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17199901#comment-17199901
 ] 

zhengsicheng edited comment on HBASE-21678 at 9/22/20, 7:36 AM:


[~apurtell] test failed


was (Author: zhengsicheng):
[~apurtell]

> Port HBASE-20636 (Introduce two bloom filter type ROWPREFIX and 
> ROWPREFIX_DELIMITED) to branch-1
> 
>
> Key: HBASE-21678
> URL: https://issues.apache.org/jira/browse/HBASE-21678
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Kyle Purtell
>Priority: Major
> Attachments: HBASE-21678-branch-1.patch, HBASE-21678-branch-1.patch, 
> HBASE-21678-branch-1.patch
>
>






[jira] [Created] (HBASE-25634) The client frequently exceeds the quota, which causes the meta table scan to be too high

2021-03-04 Thread zhengsicheng (Jira)
zhengsicheng created HBASE-25634:


 Summary: The client frequently exceeds the quota, which causes the 
meta table scan to be too high
 Key: HBASE-25634
 URL: https://issues.apache.org/jira/browse/HBASE-25634
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.3.4
Reporter: zhengsicheng
Assignee: zhengsicheng
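
When a quota throttle is hit, a client that honors the server-suggested wait interval, capped to something sane, avoids hot-looping on retries and repeatedly re-scanning meta. A generic sketch of that backoff decision; `ThrottleBackoff` and `sleepMillis` are illustrative names, not the actual RpcRetryingCaller implementation:

```java
// Illustrative backoff: honor the throttle's suggested wait, but cap it so a
// huge interval (e.g. "wait 999hrs" in the throttling message) cannot stall
// the client, and fall back to bounded exponential backoff when no hint is given.
public final class ThrottleBackoff {
    /** Milliseconds to sleep before retry {@code attempt}, never exceeding {@code capMs}. */
    public static long sleepMillis(long suggestedWaitMs, int attempt, long capMs) {
        long base = suggestedWaitMs > 0
            ? suggestedWaitMs
            : 100L * (1L << Math.min(attempt, 10)); // 100ms, 200ms, 400ms, ...
        return Math.min(base, capMs);
    }
}
```

With a 60-second cap, the "wait 999hrs" hint from the logs becomes a one-minute pause instead of an effectively unbounded retry window.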








[jira] [Updated] (HBASE-25634) The client frequently exceeds the quota, which causes the meta table scan to be too high

2021-03-04 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-25634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-25634:
-
Attachment: image-2021-03-05-12-00-33-522.png

> The client frequently exceeds the quota, which causes the meta table scan to 
> be too high
> 
>
> Key: HBASE-25634
> URL: https://issues.apache.org/jira/browse/HBASE-25634
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.3.4
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
> Attachments: image-2021-03-05-12-00-33-522.png
>
>






[jira] [Updated] (HBASE-25634) The client frequently exceeds the quota, which causes the meta table scan to be too high

2021-03-04 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-25634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-25634:
-
Attachment: image-2021-03-05-12-01-08-769.png

> The client frequently exceeds the quota, which causes the meta table scan to 
> be too high
> 
>
> Key: HBASE-25634
> URL: https://issues.apache.org/jira/browse/HBASE-25634
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.3.4
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
> Attachments: image-2021-03-05-12-00-33-522.png, 
> image-2021-03-05-12-01-08-769.png
>
>






[jira] [Updated] (HBASE-25634) The client frequently exceeds the quota, which causes the meta table scan to be too high

2021-03-04 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-25634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-25634:
-
Description: 
2021-03-03 10:35:07,372 [hconnection-0x12a53d4 clusterId: 
824ef0d2-aeca-4880-b8a7-507a4253d976-shared--pool1-t269] WARN 
[org.apache.hadoop.hbase.client.RpcRetryingCaller] - Client Excess Quota is 
Retrying, tries=9, retries=63, started=28243ms ago, cancelled=false, 
msg=org.apache.hadoop.hbase.quotas.RpcThrottlingException: 
org.apache.hadoop.hbase.quotas.RpcThrottlingException: ClientTest:CPUTEST2 - 
request size limit exceeded - wait 999hrs, 0sec in 
LF-HBASE-QINGLONG1.hadoop.jd.local,16020,1614698288842
 
2021-03-03 10:35:07,372 [hconnection-0x12a53d4 clusterId: 
824ef0d2-aeca-4880-b8a7-507a4253d976-shared--pool1-t267] WARN 
[org.apache.hadoop.hbase.client.RpcRetryingCaller] - Client Excess Quota is 
Retrying, tries=9, retries=63, started=28322ms ago, cancelled=false, 
msg=org.apache.hadoop.hbase.quotas.RpcThrottlingException: 
org.apache.hadoop.hbase.quotas.RpcThrottlingException: ClientTest:CPUTEST2 - 
request size limit exceeded - wait 999hrs, 0sec in 
LF-HBASE-QINGLONG1.hadoop.jd.local,16020,1614698288842
 
2021-03-03 10:35:07,380 [Thread-1-SendThread(172.20.161.121:2181)] DEBUG 
[org.apache.zookeeper.ClientCnxn] - Reading reply sessionid:0x1756ec5a1bf2157, 
packet:: clientPath:null serverPath:null finished:false header:: 181570,4 
replyHeader:: 181570,38683795827,0 request:: 
'/hbase_qinglong/table/ClientTest:CPUTEST2,F response:: 
#000146d61737465723a3136303030fffd1c16ffa1fffcffcb64ff8e5042554680,s\{38683609136,38683795820,1613703475740,1614738735797,7,0,0,0,31,0,38683609136}
 
2021-03-03 10:35:07,382 [hconnection-0x12a53d4 clusterId: 
824ef0d2-aeca-4880-b8a7-507a4253d976-shared--pool1-t271] WARN 
[org.apache.hadoop.hbase.client.RpcRetryingCaller] - Client Excess Quota is 
Retrying, tries=9, retries=63, started=28241ms ago, cancelled=false, 
msg=org.apache.hadoop.hbase.quotas.RpcThrottlingException: 
org.apache.hadoop.hbase.quotas.RpcThrottlingException: ClientTest:CPUTEST2 - 
request size limit exceeded - wait 999hrs, 0sec in 
LF-HBASE-QINGLONG1.hadoop.jd.local,16020,1614698288842
 
2021-03-03 10:35:07,389 [Thread-1-SendThread(172.20.161.121:2181)] DEBUG 
[org.apache.zookeeper.ClientCnxn] - Reading reply sessionid:0x1756ec5a1bf2157, 
packet:: clientPath:null serverPath:null finished:false header:: 181571,4 
replyHeader:: 181571,38683795827,0 request:: 
'/hbase_qinglong/table/ClientTest:CPUTEST2,F response:: 
#000146d61737465723a3136303030fffd1c16ffa1fffcffcb64ff8e5042554680,s\{38683609136,38683795820,1613703475740,1614738735797,7,0,0,0,31,0,38683609136}
 
2021-03-03 10:35:07,390 [hconnection-0x12a53d4 clusterId: 
824ef0d2-aeca-4880-b8a7-507a4253d976-shared--pool1-t268] WARN 
[org.apache.hadoop.hbase.client.RpcRetryingCaller] - Client Excess Quota is 
Retrying, tries=9, retries=63, started=28324ms ago, cancelled=false, 
msg=org.apache.hadoop.hbase.quotas.RpcThrottlingException: 
org.apache.hadoop.hbase.quotas.RpcThrottlingException: ClientTest:CPUTEST2 - 
request size limit exceeded - wait 999hrs, 0sec in 
LF-HBASE-QINGLONG1.hadoop.jd.local,16020,1614698288842
 
2021-03-03 10:35:07,429 [Thread-1-SendThread(172.20.161.121:2181)] DEBUG 
[org.apache.zookeeper.ClientCnxn] - Reading reply sessionid:0x1756ec5a1bf2157, 
packet:: clientPath:null serverPath:null finished:false header:: 181572,4 
replyHeader::
 181572,38683795827,0 request:: '/hbase_qinglong/table/ClientTest:CPUTEST2,F 
response:: 
#000146d61737465723a3136303030fffd1c16ffa1fffcffcb64ff8e5042554680,s{38683609136,38683795820,1613703475740,1614738735797,7
,0,0,0,31,0,38683609136} 
2021-03-03 10:35:07,431 [hconnection-0x12a53d4 clusterId: 
824ef0d2-aeca-4880-b8a7-507a4253d976-shared--pool1-t272] WARN 
[org.apache.hadoop.hbase.client.RpcRetryingCaller] - Client Excess Quota is 
Retrying, tries=9, retries=63, started=2825
3ms ago, cancelled=false, 
msg=org.apache.hadoop.hbase.quotas.RpcThrottlingException: 
org.apache.hadoop.hbase.quotas.RpcThrottlingException: ClientTest:CPUTEST2 - 
request size limit exceeded - wait 999hrs, 0sec in LF-HBASE-QINGLONG1-172-20-16
1-121.hadoop.jd.local,16020,1614698288842
 
2021-03-03 10:35:07,436 [Thread-1-SendThread(172.20.161.121:2181)] DEBUG 
[org.apache.zookeeper.ClientCnxn] - Reading reply sessionid:0x1756ec5a1bf2157, 
packet:: clientPath:null serverPath:null finished:false header:: 181573,4 
replyHeader::
 181573,38683795827,0 request:: '/hbase_qinglong/table/ClientTest:CPUTEST2,F 
response:: 
#000146d61737465723a3136303030fffd1c16ffa1fffcffcb64ff8e5042554680,s{38683609136,38683795820,1613703475740,1614738735797,7
,0,0,0,31,0,38683609136} 
2021-03-03 10:35:07,437 [hconnection-0x12a53d4 clusterId: 
824ef0d2-aeca-4880-b8a7-507a4253d976-shared--pool1-t270] WARN 
[org.apache.hadoop.hbase.

[jira] [Commented] (HBASE-25634) The client frequently exceeds the quota, which causes the meta table scan to be too high

2021-03-04 Thread zhengsicheng (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17295790#comment-17295790
 ] 

zhengsicheng commented on HBASE-25634:
--

[~sakthi] please review the code, thank you!

> The client frequently exceeds the quota, which causes the meta table scan to 
> be too high
> 
>
> Key: HBASE-25634
> URL: https://issues.apache.org/jira/browse/HBASE-25634
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.3.4
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
> Attachments: image-2021-03-05-12-00-33-522.png, 
> image-2021-03-05-12-01-08-769.png
>
>
> 2021-03-03 10:35:07,372 [hconnection-0x12a53d4 clusterId: 
> 824ef0d2-aeca-4880-b8a7-507a4253d976-shared--pool1-t269] WARN 
> [org.apache.hadoop.hbase.client.RpcRetryingCaller] - Client Excess Quota is 
> Retrying, tries=9, retries=63, started=28243ms ago, cancelled=false, 
> msg=org.apache.hadoop.hbase.quotas.RpcThrottlingException: 
> org.apache.hadoop.hbase.quotas.RpcThrottlingException: ClientTest:CPUTEST2 - 
> request size limit exceeded - wait 999hrs, 0sec in 
> LF-HBASE-QINGLONG1.hadoop.jd.local,16020,1614698288842
>  
> 2021-03-03 10:35:07,372 [hconnection-0x12a53d4 clusterId: 
> 824ef0d2-aeca-4880-b8a7-507a4253d976-shared--pool1-t267] WARN 
> [org.apache.hadoop.hbase.client.RpcRetryingCaller] - Client Excess Quota is 
> Retrying, tries=9, retries=63, started=28322ms ago, cancelled=false, 
> msg=org.apache.hadoop.hbase.quotas.RpcThrottlingException: 
> org.apache.hadoop.hbase.quotas.RpcThrottlingException: ClientTest:CPUTEST2 - 
> request size limit exceeded - wait 999hrs, 0sec in 
> LF-HBASE-QINGLONG1.hadoop.jd.local,16020,1614698288842
>  
> 2021-03-03 10:35:07,380 [Thread-1-SendThread(172.20.161.121:2181)] DEBUG 
> [org.apache.zookeeper.ClientCnxn] - Reading reply 
> sessionid:0x1756ec5a1bf2157, packet:: clientPath:null serverPath:null 
> finished:false header:: 181570,4 replyHeader:: 181570,38683795827,0 request:: 
> '/hbase_qinglong/table/ClientTest:CPUTEST2,F response:: 
> #000146d61737465723a3136303030fffd1c16ffa1fffcffcb64ff8e5042554680,s\{38683609136,38683795820,1613703475740,1614738735797,7,0,0,0,31,0,38683609136}
>  
> 2021-03-03 10:35:07,382 [hconnection-0x12a53d4 clusterId: 
> 824ef0d2-aeca-4880-b8a7-507a4253d976-shared--pool1-t271] WARN 
> [org.apache.hadoop.hbase.client.RpcRetryingCaller] - Client Excess Quota is 
> Retrying, tries=9, retries=63, started=28241ms ago, cancelled=false, 
> msg=org.apache.hadoop.hbase.quotas.RpcThrottlingException: 
> org.apache.hadoop.hbase.quotas.RpcThrottlingException: ClientTest:CPUTEST2 - 
> request size limit exceeded - wait 999hrs, 0sec in 
> LF-HBASE-QINGLONG1.hadoop.jd.local,16020,1614698288842
>  
> 2021-03-03 10:35:07,389 [Thread-1-SendThread(172.20.161.121:2181)] DEBUG 
> [org.apache.zookeeper.ClientCnxn] - Reading reply 
> sessionid:0x1756ec5a1bf2157, packet:: clientPath:null serverPath:null 
> finished:false header:: 181571,4 replyHeader:: 181571,38683795827,0 request:: 
> '/hbase_qinglong/table/ClientTest:CPUTEST2,F response:: 
> #000146d61737465723a3136303030fffd1c16ffa1fffcffcb64ff8e5042554680,s\{38683609136,38683795820,1613703475740,1614738735797,7,0,0,0,31,0,38683609136}
>  
> 2021-03-03 10:35:07,390 [hconnection-0x12a53d4 clusterId: 
> 824ef0d2-aeca-4880-b8a7-507a4253d976-shared--pool1-t268] WARN 
> [org.apache.hadoop.hbase.client.RpcRetryingCaller] - Client Excess Quota is 
> Retrying, tries=9, retries=63, started=28324ms ago, cancelled=false, 
> msg=org.apache.hadoop.hbase.quotas.RpcThrottlingException: 
> org.apache.hadoop.hbase.quotas.RpcThrottlingException: ClientTest:CPUTEST2 - 
> request size limit exceeded - wait 999hrs, 0sec in 
> LF-HBASE-QINGLONG1.hadoop.jd.local,16020,1614698288842
>  
> 2021-03-03 10:35:07,429 [Thread-1-SendThread(172.20.161.121:2181)] DEBUG 
> [org.apache.zookeeper.ClientCnxn] - Reading reply 
> sessionid:0x1756ec5a1bf2157, packet:: clientPath:null serverPath:null 
> finished:false header:: 181572,4 replyHeader::
>  181572,38683795827,0 request:: '/hbase_qinglong/table/ClientTest:CPUTEST2,F 
> response:: 
> #000146d61737465723a3136303030fffd1c16ffa1fffcffcb64ff8e5042554680,s{38683609136,38683795820,1613703475740,1614738735797,7
> ,0,0,0,31,0,38683609136} 
> 2021-03-03 10:35:07,431 [hconnection-0x12a53d4 clusterId: 
> 824ef0d2-aeca-4880-b8a7-507a4253d976-shared--pool1-t272] WARN 
> [org.apache.hadoop.hbase.client.RpcRetryingCaller] - Client Excess Quota is 
> Retrying, tries=9, retries=63, started=2825
> 3ms ago, cancelled=false, 
> msg=org.apache.hadoop.hbase.quotas.RpcThrottlingException: 
> org.apache.hadoop.hbase.quotas.RpcThrottlingException: ClientTest:CPUTEST2 

[jira] [Updated] (HBASE-25634) The client frequently exceeds the quota, which causes the meta table scan to be too high

2021-03-04 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-25634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-25634:
-
Description: 
 

2021-03-03 10:35:07,372 [hconnection-0x12a53d4 clusterId: 
824ef0d2-aeca-4880-b8a7-507a4253d976-shared--pool1-t269] WARN 
[org.apache.hadoop.hbase.client.RpcRetryingCaller] - Client Excess Quota is 
Retrying, tries=9, retries=63, started=28243ms ago, cancelled=false, 
msg=org.apache.hadoop.hbase.quotas.RpcThrottlingException: 
org.apache.hadoop.hbase.quotas.RpcThrottlingException: ClientTest:CPUTEST2 - 
request size limit exceeded - wait 999hrs, 0sec in 
LF-HBASE-QINGLONG1.hadoop.jd.local,16020,1614698288842

2021-03-03 10:35:07,372 [hconnection-0x12a53d4 clusterId: 
824ef0d2-aeca-4880-b8a7-507a4253d976-shared--pool1-t267] WARN 
[org.apache.hadoop.hbase.client.RpcRetryingCaller] - Client Excess Quota is 
Retrying, tries=9, retries=63, started=28322ms ago, cancelled=false, 
msg=org.apache.hadoop.hbase.quotas.RpcThrottlingException: 
org.apache.hadoop.hbase.quotas.RpcThrottlingException: ClientTest:CPUTEST2 - 

[jira] [Updated] (HBASE-25634) The client frequently exceeds the quota, which causes the meta table scan to be too high

2021-03-04 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-25634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-25634:
-
Description: 
 When the client performs a scan operation, the server frequently returns 
RpcThrottlingException, which causes the request load on the meta table to become high.

 

2021-03-03 10:35:07,372 [hconnection-0x12a53d4 clusterId: 
824ef0d2-aeca-4880-b8a7-507a4253d976-shared--pool1-t269] WARN 
[org.apache.hadoop.hbase.client.RpcRetryingCaller] - Client Excess Quota is 
Retrying, tries=9, retries=63, started=28243ms ago, cancelled=false, 
msg=org.apache.hadoop.hbase.quotas.RpcThrottlingException: 
org.apache.hadoop.hbase.quotas.RpcThrottlingException: ClientTest:CPUTEST2 - 
request size limit exceeded - wait 999hrs, 0sec in 
LF-HBASE-QINGLONG1.hadoop.jd.local,16020,1614698288842

2021-03-03 10:35:07,372 [hconnection-0x12a53d4 clusterId: 
824ef0d2-aeca-4880-b8a7-507a4253d976-shared--pool1-t267] WARN 
[org.apache.hadoop.hbase.client.RpcRetryingCaller] - Client Excess Quota is 
Retrying, tries=9, retries=63, started=28322ms ago, cancelled=false, 
msg=org.apache.hadoop.hbase.quotas.RpcThrottlingException: 
org.apache.hadoop.hbase.quotas.RpcThrottlingException: ClientTest:CPUTEST2 - 
request size limit exceeded - wait 999hrs, 0sec in 
LF-HBASE-QINGLONG1.hadoop.jd.local,16020,1614698288842

2021-03-03 10:35:07,380 [Thread-1-SendThread(172.20.161.121:2181)] DEBUG 
[org.apache.zookeeper.ClientCnxn] - Reading reply sessionid:0x1756ec5a1bf2157, 
packet:: clientPath:null serverPath:null finished:false header:: 181570,4 
replyHeader:: 181570,38683795827,0 request:: 
'/hbase_qinglong/table/ClientTest:CPUTEST2,F response:: 
#000146d61737465723a3136303030fffd1c16ffa1fffcffcb64ff8e5042554680,s\{38683609136,38683795820,1613703475740,1614738735797,7,0,0,0,31,0,38683609136}
 
 2021-03-03 10:35:07,382 [hconnection-0x12a53d4 clusterId: 
824ef0d2-aeca-4880-b8a7-507a4253d976-shared--pool1-t271] WARN 
[org.apache.hadoop.hbase.client.RpcRetryingCaller] - Client Excess Quota is 
Retrying, tries=9, retries=63, started=28241ms ago, cancelled=false, 
msg=org.apache.hadoop.hbase.quotas.RpcThrottlingException: 
org.apache.hadoop.hbase.quotas.RpcThrottlingException: ClientTest:CPUTEST2 - 
request size limit exceeded - wait 999hrs, 0sec in 
LF-HBASE-QINGLONG1.hadoop.jd.local,16020,1614698288842

2021-03-03 10:35:07,389 [Thread-1-SendThread(172.20.161.121:2181)] DEBUG 
[org.apache.zookeeper.ClientCnxn] - Reading reply sessionid:0x1756ec5a1bf2157, 
packet:: clientPath:null serverPath:null finished:false header:: 181571,4 
replyHeader:: 181571,38683795827,0 request:: 
'/hbase_qinglong/table/ClientTest:CPUTEST2,F response:: 
#000146d61737465723a3136303030fffd1c16ffa1fffcffcb64ff8e5042554680,s\{38683609136,38683795820,1613703475740,1614738735797,7,0,0,0,31,0,38683609136}
 
 2021-03-03 10:35:07,390 [hconnection-0x12a53d4 clusterId: 
824ef0d2-aeca-4880-b8a7-507a4253d976-shared--pool1-t268] WARN 
[org.apache.hadoop.hbase.client.RpcRetryingCaller] - Client Excess Quota is 
Retrying, tries=9, retries=63, started=28324ms ago, cancelled=false, 
msg=org.apache.hadoop.hbase.quotas.RpcThrottlingException: 
org.apache.hadoop.hbase.quotas.RpcThrottlingException: ClientTest:CPUTEST2 - 
request size limit exceeded - wait 999hrs, 0sec in 
LF-HBASE-QINGLONG1.hadoop.jd.local,16020,1614698288842

2021-03-03 10:35:07,429 [Thread-1-SendThread(172.20.161.121:2181)] DEBUG 
[org.apache.zookeeper.ClientCnxn] - Reading reply sessionid:0x1756ec5a1bf2157, 
packet:: clientPath:null serverPath:null finished:false header:: 181572,4 
replyHeader:: 181572,38683795827,0 request:: 
'/hbase_qinglong/table/ClientTest:CPUTEST2,F response:: 
#000146d61737465723a3136303030fffd1c16ffa1fffcffcb64ff8e5042554680,s\{38683609136,38683795820,1613703475740,1614738735797,7,0,0,0,31,0,38683609136}
 2021-03-03 10:35:07,431 [hconnection-0x12a53d4 clusterId: 
824ef0d2-aeca-4880-b8a7-507a4253d976-shared--pool1-t272] WARN 
[org.apache.hadoop.hbase.client.RpcRetryingCaller] - Client Excess Quota is 
Retrying, tries=9, retries=63, started=28253ms ago, cancelled=false, 
msg=org.apache.hadoop.hbase.quotas.RpcThrottlingException: 
org.apache.hadoop.hbase.quotas.RpcThrottlingException: ClientTest:CPUTEST2 - 
request size limit exceeded - wait 999hrs, 0sec in 
LF-HBASE-QINGLONG1-172-20-161-121.hadoop.jd.local,16020,1614698288842
 
 2021-03-03 10:35:07,436 [Thread-1-SendThread(172.20.161.121:2181)] DEBUG 
[org.apache.zookeeper.ClientCnxn] - Reading reply sessionid:0x1756ec5a1bf2157, 
packet:: clientPath:null serverPath:null finished:false header:: 181573,4 
replyHeader:: 181573,38683795827,0 request:: 
'/hbase_qinglong/table/ClientTest:CPUTEST2,F response:: 
#000146d61737465723a3136303030fffd1c16ffa1fffcffcb64ff8e5042554680,s\{38683609136,38683795820,1613703475740,1614738735797,7,0,0,0,31,0,38683609136}

[jira] [Updated] (HBASE-25634) The client frequently exceeds the quota, which causes the meta table scan to be too high

2021-03-04 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-25634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-25634:
-
Description: 
 When the client performs a scan operation, the server frequently returns 
RpcThrottlingException, which causes the request load on the meta table to become high.

 
/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerImpl.java
{code:java}
@Override
public T callWithRetries(RetryingCallable<T> callable, int callTimeout)
    throws IOException, RuntimeException {
  List<RetriesExhaustedException.ThrowableWithExtraContext> exceptions = new ArrayList<>();
  tracker.start();
  context.clear();
  for (int tries = 0;; tries++) {
    long expectedSleep;
    try {
      // bad cache entries are cleared in the call to RetryingCallable#throwable() in catch block
      Throwable t = null;
      if (exceptions != null && !exceptions.isEmpty()) {
        t = exceptions.get(exceptions.size() - 1).throwable;
      }
      if (!(t instanceof RpcThrottlingException)) {
        callable.prepare(tries != 0);
      }
      interceptor.intercept(context.prepare(callable, tries));
      return callable.call(getTimeout(callTimeout));
    } catch (PreemptiveFastFailException e) {
      throw e;
    } catch (Throwable t) {
      ExceptionUtil.rethrowIfInterrupt(t);
      Throwable cause = t.getCause();
      if (cause instanceof DoNotRetryIOException) {
        // Fail fast
        throw (DoNotRetryIOException) cause;
      }
      // translateException throws exception when should not retry: i.e. when request is bad
      interceptor.handleFailure(context, t);
      t = translateException(t);

{code}
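The intent of the change above can be illustrated in isolation: when the most recent failure was a throttling exception, the retry skips callable.prepare(), so the cached region location is reused instead of being re-resolved via a meta-table lookup. The sketch below is a minimal stand-alone illustration of that policy; ThrottleAwareRetryDemo, prepareIfNeeded, and the local RpcThrottlingException class are invented for this example and are not the real HBase API.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class ThrottleAwareRetryDemo {

  // Stand-in for org.apache.hadoop.hbase.quotas.RpcThrottlingException
  // (illustrative only, not the real class).
  static class RpcThrottlingException extends IOException {
  }

  static int metaLookups = 0; // counts simulated region-location lookups

  // Mirrors the idea of the patch: only refresh the region location
  // (a meta-table read) when the last failure was NOT a throttling error,
  // because throttling says nothing about the cached location being stale.
  static void prepareIfNeeded(List<Throwable> failures, boolean reload) {
    Throwable last = failures.isEmpty() ? null : failures.get(failures.size() - 1);
    if (!(last instanceof RpcThrottlingException) && reload) {
      metaLookups++; // simulated callable.prepare(true)
    }
  }

  public static void main(String[] args) {
    List<Throwable> failures = new ArrayList<>();
    // Three consecutive throttled retries: no meta lookups are triggered.
    for (int tries = 1; tries <= 3; tries++) {
      failures.add(new RpcThrottlingException());
      prepareIfNeeded(failures, tries != 0);
    }
    System.out.println(metaLookups);
    // A genuine connectivity failure still refreshes the location.
    failures.add(new IOException("connection reset"));
    prepareIfNeeded(failures, true);
    System.out.println(metaLookups);
  }
}
```

Under this policy a burst of throttled retries generates zero additional meta-table reads, while a genuine connection failure still refreshes the cached location.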
 

 

 

!image-2021-03-05-12-00-33-522.png!

!image-2021-03-05-12-01-08-769.png!

 

  was:
 When the client performs a scan operation, the server frequently returns 
RpcThrottlingException, which causes the request load on the meta table to become high.

 


[jira] [Updated] (HBASE-25634) The client frequently exceeds the quota, which causes the meta table scan to be too high

2021-03-04 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-25634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-25634:
-
Description: 
 When the client performs a scan operation, the server frequently returns 
RpcThrottlingException, which causes the request load on the meta table to become high.

 
/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerImpl.java
{code:java}
@Override
public T callWithRetries(RetryingCallable<T> callable, int callTimeout)
    throws IOException, RuntimeException {
  List<RetriesExhaustedException.ThrowableWithExtraContext> exceptions = new ArrayList<>();
  tracker.start();
  context.clear();
  for (int tries = 0;; tries++) {
    long expectedSleep;
    try {
      // bad cache entries are cleared in the call to RetryingCallable#throwable() in catch block
      // callable.prepare() forces a reload of the server location
      callable.prepare(tries != 0);
      interceptor.intercept(context.prepare(callable, tries));
      return callable.call(getTimeout(callTimeout));
    } catch (PreemptiveFastFailException e) {
      throw e;
    } catch (Throwable t) {
      ExceptionUtil.rethrowIfInterrupt(t);
      Throwable cause = t.getCause();
      if (cause instanceof DoNotRetryIOException) {
        // Fail fast
        throw (DoNotRetryIOException) cause;
      }
      // translateException throws exception when should not retry: i.e. when request is bad
      interceptor.handleFailure(context, t);
      t = translateException(t);

{code}
 

 

 

!image-2021-03-05-12-00-33-522.png!

!image-2021-03-05-12-01-08-769.png!

 

  was:
 When the client performs a scan operation, the server frequently returns 
RpcThrottlingException, which causes the request load on the meta table to become high.

 
/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerImpl.java
{code:java}
@Override
public T callWithRetries(RetryingCallable<T> callable, int callTimeout)
    throws IOException, RuntimeException {
  List<RetriesExhaustedException.ThrowableWithExtraContext> exceptions = new ArrayList<>();
  tracker.start();
  context.clear();
  for (int tries = 0;; tries++) {
    long expectedSleep;
    try {
      // bad cache entries are cleared in the call to RetryingCallable#throwable() in catch block
      Throwable t = null;
      if (exceptions != null && !exceptions.isEmpty()) {
        t = exceptions.get(exceptions.size() - 1).throwable;
      }
      if (!(t instanceof RpcThrottlingException)) {
        callable.prepare(tries != 0);
      }
      interceptor.intercept(context.prepare(callable, tries));
      return callable.call(getTimeout(callTimeout));
    } catch (PreemptiveFastFailException e) {
      throw e;
    } catch (Throwable t) {
      ExceptionUtil.rethrowIfInterrupt(t);
      Throwable cause = t.getCause();
      if (cause instanceof DoNotRetryIOException) {
        // Fail fast
        throw (DoNotRetryIOException) cause;
      }
      // translateException throws exception when should not retry: i.e. when request is bad
      interceptor.handleFailure(context, t);
      t = translateException(t);

{code}
 

 

 

!image-2021-03-05-12-00-33-522.png!

!image-2021-03-05-12-01-08-769.png!

 


> The client frequently exceeds the quota, which causes the meta table scan to 
> be too high
> 
>
> Key: HBASE-25634
> URL: https://issues.apache.org/jira/browse/HBASE-25634
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.3.4
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
> Attachments: image-2021-03-05-12-00-33-522.png, 
> image-2021-03-05-12-01-08-769.png
>
>
>  When the client performs a scan operation, the server frequently returns 
> RpcThrottlingException, which causes the request load on the meta table to 
> become high.
>  
> /hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerImpl.java
> {code:java}
> // code placeholder
> @Override
> public T callWithRetries(RetryingCallable<T> callable, int callTimeout)
>     throws IOException, RuntimeException {
>   List<RetriesExhaustedException.ThrowableWithExtraContext> exceptions = new ArrayList<>();
>   tracker.start();
>   context.clear();
>   for (int tries = 0;; tries++) {
> long expectedSleep;
> try {
>   // bad cache entries are cleared in the call to 
> RetryingCallable#throwable() in catch block
>   // callable.prepare() reload force reload of server location
>   callable.prepare(tries != 0);
>   interceptor.intercept(context.prepare(callable, tries));
>   return callable.call(getTimeout(callTimeout));
> } catch (PreemptiveFastFailException e) {
>   throw e;
> } catch (Throwable t) {
>   ExceptionUtil.rethrowIfInterrupt(t);
>   Throwable cause = t.getCause();
>   if (cause instanceof DoNotRetryIOException) {
> // Fail fast
> throw (DoNotRetryIOException) cause;
>   }
>   // translateE


[jira] [Comment Edited] (HBASE-25634) The client frequently exceeds the quota, which causes the meta table scan to be too high

2021-03-04 Thread zhengsicheng (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17295790#comment-17295790
 ] 

zhengsicheng edited comment on HBASE-25634 at 3/5/21, 7:52 AM:
---

[~sakthi] Please review the code, thank you!


was (Author: zhengsicheng):
[~sakthi] review code thank you1

> The client frequently exceeds the quota, which causes the meta table scan to 
> be too high
> 
>
> Key: HBASE-25634
> URL: https://issues.apache.org/jira/browse/HBASE-25634
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.3.4
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
> Attachments: image-2021-03-05-12-00-33-522.png, 
> image-2021-03-05-12-01-08-769.png
>
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-25634) The client frequently exceeds the quota, which causes the meta table scan to be too high

2021-03-08 Thread zhengsicheng (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17297864#comment-17297864
 ] 

zhengsicheng commented on HBASE-25634:
--

[~zhangduo] Please review the code, thank you!

> The client frequently exceeds the quota, which causes the meta table scan to 
> be too high
> 
>
> Key: HBASE-25634
> URL: https://issues.apache.org/jira/browse/HBASE-25634
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.3.4
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
> Attachments: image-2021-03-05-12-00-33-522.png, 
> image-2021-03-05-12-01-08-769.png
>
>





[jira] [Commented] (HBASE-25634) The client frequently exceeds the quota, which causes the meta table scan to be too high

2021-03-29 Thread zhengsicheng (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17310529#comment-17310529
 ] 

zhengsicheng commented on HBASE-25634:
--

[~stack] Please review the code, thank you!

> The client frequently exceeds the quota, which causes the meta table scan to 
> be too high
> 
>
> Key: HBASE-25634
> URL: https://issues.apache.org/jira/browse/HBASE-25634
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.3.4
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
> Attachments: image-2021-03-05-12-00-33-522.png, 
> image-2021-03-05-12-01-08-769.png
>
>





[jira] [Updated] (HBASE-26208) Supports revoke @ns specified permission

2021-10-24 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-26208:
-
Summary: Supports revoke  @ns specified permission  (was: Supports revoke  
@ns single permission)

> Supports revoke  @ns specified permission
> -
>
> Key: HBASE-26208
> URL: https://issues.apache.org/jira/browse/HBASE-26208
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Affects Versions: 2.3.4
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
> Attachments: 1.jpeg
>
>
> Revoking a single @ns permission is not supported: revoke 'bobsmith', '@ns1', 'C'
> Revoking a specified @ns permission is supported: revoke 'bobsmith', '@ns1', '\{C}'
> !1.jpeg!





[jira] [Commented] (HBASE-20503) [AsyncFSWAL] Failed to get sync result after 300000 ms for txid=160912, WAL system stuck?

2021-12-10 Thread zhengsicheng (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-20503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17457179#comment-17457179
 ] 

zhengsicheng commented on HBASE-20503:
--

[~tr0k] Hi, has your problem been solved? We experienced the same issue on HBase 2.3.4 and Hadoop 2.7.1.

> [AsyncFSWAL] Failed to get sync result after 300000 ms for txid=160912, WAL 
> system stuck?
> -
>
> Key: HBASE-20503
> URL: https://issues.apache.org/jira/browse/HBASE-20503
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Reporter: Michael Stack
>Priority: Major
> Attachments: 
> 0001-HBASE-20503-AsyncFSWAL-Failed-to-get-sync-result-aft.patch, 
> 0001-HBASE-20503-AsyncFSWAL-Failed-to-get-sync-result-aft.patch
>
>
> Scale test. Startup w/ 30k regions over ~250nodes. This RS is trying to 
> furiously open regions assigned by Master. It is importantly carrying 
> hbase:meta. Twenty minutes in, meta goes dead after an exception up out 
> AsyncFSWAL. Process had been restarted so I couldn't get a  thread dump. 
[jira] [Assigned] (HBASE-20503) [AsyncFSWAL] Failed to get sync result after 300000 ms for txid=160912, WAL system stuck?

2021-12-10 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-20503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng reassigned HBASE-20503:


Assignee: zhengsicheng

> [AsyncFSWAL] Failed to get sync result after 300000 ms for txid=160912, WAL 
> system stuck?
> -
>
> Key: HBASE-20503
> URL: https://issues.apache.org/jira/browse/HBASE-20503
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Reporter: Michael Stack
>Assignee: zhengsicheng
>Priority: Major
> Attachments: 
> 0001-HBASE-20503-AsyncFSWAL-Failed-to-get-sync-result-aft.patch, 
> 0001-HBASE-20503-AsyncFSWAL-Failed-to-get-sync-result-aft.patch
>
>
> Scale test. Startup w/ 30k regions over ~250 nodes. This RS is furiously 
> opening regions assigned by the Master, and it is importantly carrying 
> hbase:meta. Twenty minutes in, meta goes dead after an exception up out of 
> AsyncFSWAL. The process had been restarted, so I couldn't get a thread dump. 
> Suspicious is that we archive a WAL and then get a FNFE because we try to 
> access the WAL in its old location. [~Apache9] mind taking a look? Does this 
> FNFE during rolling kill the WAL sub-system? Thanks.
> DFS is complaining on file open for a few files, getting blocks from remote 
> dead DNs: e.g. {{2018-04-25 10:05:21,506 WARN 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory: I/O error constructing 
> remote block reader.
> java.net.ConnectException: Connection refused}}
> AsyncFSWAL is complaining: "AbstractFSWAL: Slow sync cost: 103 ms".
> About ten minutes in, we get this:
> {code}
> 2018-04-25 10:15:16,532 WARN 
> org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL: sync failed
> java.io.IOException: stream already broken
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput.flush0(FanOutOneBlockAsyncDFSOutput.java:424)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput.flush(FanOutOneBlockAsyncDFSOutput.java:513)
>   
>   
>   
>   at 
> org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.sync(AsyncProtobufLogWriter.java:134)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.sync(AsyncFSWAL.java:364)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.consume(AsyncFSWAL.java:547)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> 2018-04-25 10:15:16,680 INFO 
> org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL: Rolled WAL 
> /hbase/WALs/vc0205.halxg.cloudera.com,22101,1524675808073/vc0205.halxg.cloudera.com%2C22101%2C1524675808073.meta.1524676253923.meta
>  with entries=10819, filesize=7.57 MB; new WAL 
> /hbase/WALs/vc0205.halxg.cloudera.com,22101,1524675808073/vc0205.halxg.cloudera.com%2C22101%2C1524675808073.meta.1524676516535.meta
> 2018-04-25 10:15:16,680 INFO 
> org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL: Archiving 
> hdfs://ns1/hbase/WALs/vc0205.halxg.cloudera.com,22101,1524675808073/vc0205.halxg.cloudera.com%2C22101%2C1524675808073.meta.1524675848653.meta
>  to 
> hdfs://ns1/hbase/oldWALs/vc0205.halxg.cloudera.com%2C22101%2C1524675808073.meta.1524675848653.meta
> 2018-04-25 10:15:16,686 WARN 
> org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter: Failed to 
> write trailer, non-fatal, continuing...
> java.io.IOException: stream already broken
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput.flush0(FanOutOneBlockAsyncDFSOutput.java:424)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput.flush(FanOutOneBlockAsyncDFSOutput.java:513)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.lambda$writeWALTrailerAndMagic$3(AsyncProtobufLogWriter.java:210)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.write(AsyncProtobufLogWriter.java:166)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.writeWALTrailerAndMagic(AsyncProtobufLogWriter.java:201)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.writeWALTrailer(AbstractProtobufLogWriter.java:233)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.close(AsyncProtobufLogWriter.java:143)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.lambda$executeClose$8(AsyncFSWAL.java:742)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> 

[jira] [Assigned] (HBASE-20503) [AsyncFSWAL] Failed to get sync result after 300000 ms for txid=160912, WAL system stuck?

2021-12-10 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-20503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng reassigned HBASE-20503:


Assignee: (was: zhengsicheng)

> [AsyncFSWAL] Failed to get sync result after 300000 ms for txid=160912, WAL 
> system stuck?
> -
>
> Key: HBASE-20503
> URL: https://issues.apache.org/jira/browse/HBASE-20503
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Reporter: Michael Stack
>Priority: Major
> Attachments: 
> 0001-HBASE-20503-AsyncFSWAL-Failed-to-get-sync-result-aft.patch, 
> 0001-HBASE-20503-AsyncFSWAL-Failed-to-get-sync-result-aft.patch
>
>

[jira] [Commented] (HBASE-20503) [AsyncFSWAL] Failed to get sync result after 300000 ms for txid=160912, WAL system stuck?

2021-12-21 Thread zhengsicheng (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-20503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17463575#comment-17463575
 ] 

zhengsicheng commented on HBASE-20503:
--

[~Xiaolin Ha]  [~tr0k] Thank you for your answer

> [AsyncFSWAL] Failed to get sync result after 300000 ms for txid=160912, WAL 
> system stuck?
> -
>
> Key: HBASE-20503
> URL: https://issues.apache.org/jira/browse/HBASE-20503
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Reporter: Michael Stack
>Priority: Major
> Attachments: 
> 0001-HBASE-20503-AsyncFSWAL-Failed-to-get-sync-result-aft.patch, 
> 0001-HBASE-20503-AsyncFSWAL-Failed-to-get-sync-result-aft.patch
>
>

[jira] [Assigned] (HBASE-27387) MetricsSource lastShippedTimeStamps ConcurrentModificationException cause RegionServer crash

2023-03-28 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng reassigned HBASE-27387:


Assignee: (was: zhengsicheng)

>  MetricsSource lastShippedTimeStamps ConcurrentModificationException cause 
> RegionServer crash
> -
>
> Key: HBASE-27387
> URL: https://issues.apache.org/jira/browse/HBASE-27387
> Project: HBase
>  Issue Type: Bug
>Reporter: zhengsicheng
>Priority: Minor
>
> 2022-09-20 14:14:40,332 ERROR [regionserver/hostname1:16020] 
> regionserver.HRegionServer: * ABORTING region server 
> hostname1,16020,1663147531495: Unhandled: null *
>  8587 java.util.ConcurrentModificationException
>  8588     at java.util.HashMap$HashIterator.nextNode(HashMap.java:1442)
>  8589     at java.util.HashMap$ValueIterator.next(HashMap.java:1471)
>  8590     at 
> org.apache.hadoop.hbase.replication.regionserver.MetricsSource.getTimestampOfLastShippedOp(MetricsSource.java:321)
>  8591     at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationLoad.buildReplicationLoad(ReplicationLoad.java:80)
>  8592     at 
> org.apache.hadoop.hbase.replication.regionserver.Replication.buildReplicationLoad(Replication.java:264)
>  8593     at 
> org.apache.hadoop.hbase.replication.regionserver.Replication.refreshAndGetReplicationLoad(Replication.java:253)
>  8594     at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.buildServerLoad(HRegionServer.java:1436)
>  8595     at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:1243)
>  8596     at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1065)
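The crash above comes from iterating a plain HashMap of per-WAL-group timestamps while another thread mutates it. The usual remedy for this class of bug is to back the metric with a ConcurrentHashMap, whose iterators are weakly consistent and never throw ConcurrentModificationException. The sketch below only illustrates that pattern; the field and method names mirror the stack trace but it is not the actual HBase patch:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class LastShippedTimestamps {
    // ConcurrentHashMap instead of HashMap: concurrent put() during
    // iteration is safe, so the metrics reporter thread cannot crash the RS.
    private final Map<String, Long> lastShippedTimeStamps = new ConcurrentHashMap<>();

    public void setTimestamp(String walGroup, long ts) {
        lastShippedTimeStamps.put(walGroup, ts);
    }

    // Safe to call concurrently with setTimestamp(): iteration over a
    // ConcurrentHashMap view is weakly consistent and does not fail fast.
    public long getTimestampOfLastShippedOp() {
        long max = 0;
        for (long ts : lastShippedTimeStamps.values()) {
            max = Math.max(max, ts);
        }
        return max;
    }
}
```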



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HBASE-27763) Recover WAL encounter KeeperErrorCode = NoNode cause RegionServer crash

2023-04-13 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng reassigned HBASE-27763:


Assignee: zhengsicheng

> Recover WAL encounter  KeeperErrorCode = NoNode cause RegionServer crash
> 
>
> Key: HBASE-27763
> URL: https://issues.apache.org/jira/browse/HBASE-27763
> Project: HBase
>  Issue Type: Bug
>Reporter: guoxiaojiao
>Assignee: zhengsicheng
>Priority: Major
>






[jira] [Assigned] (HBASE-27763) Recover WAL encounter KeeperErrorCode = NoNode cause RegionServer crash

2023-04-13 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng reassigned HBASE-27763:


Assignee: (was: zhengsicheng)

> Recover WAL encounter  KeeperErrorCode = NoNode cause RegionServer crash
> 
>
> Key: HBASE-27763
> URL: https://issues.apache.org/jira/browse/HBASE-27763
> Project: HBase
>  Issue Type: Bug
>Reporter: guoxiaojiao
>Priority: Major
>






[jira] [Created] (HBASE-27878) balance_rsgroup NullPointerException

2023-05-23 Thread zhengsicheng (Jira)
zhengsicheng created HBASE-27878:


 Summary: balance_rsgroup NullPointerException
 Key: HBASE-27878
 URL: https://issues.apache.org/jira/browse/HBASE-27878
 Project: HBase
  Issue Type: Bug
Reporter: zhengsicheng
Assignee: zhengsicheng


hbase(main):001:0> balance_rsgroup 'default'

ERROR: java.io.IOException: Cannot invoke 
"org.apache.hadoop.hbase.ServerName.getAddress()" because "currentHostServer" 
is null
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:466)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
        at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338)
        at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318)
Caused by: java.lang.NullPointerException: Cannot invoke 
"org.apache.hadoop.hbase.ServerName.getAddress()" because "currentHostServer" 
is null
        at 
org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer.correctAssignments(RSGroupBasedLoadBalancer.java:320)
        at 
org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer.balanceCluster(RSGroupBasedLoadBalancer.java:126)
        at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:461)
        at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:301)
        at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:14948)
        at 
org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:921)
        at 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:394)
        ... 3 more

For usage try 'help "balance_rsgroup"'
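The NullPointerException above fires when correctAssignments() dereferences a region's current host server that is no longer online. A minimal sketch of the kind of null guard that avoids it, using stub types rather than the real HBase classes (names are illustrative, not the actual fix):

```java
import java.util.ArrayList;
import java.util.List;

public class CorrectAssignmentsSketch {
    // Stub standing in for org.apache.hadoop.hbase.ServerName.
    static class ServerName {
        private final String address;
        ServerName(String address) { this.address = address; }
        String getAddress() { return address; }
    }

    /**
     * Returns the addresses backing misplaced regions. The null check is the
     * essential part: a region whose RegionServer just died has no current
     * host, and calling getAddress() on null reproduces the NPE above.
     */
    static List<String> findMisplaced(List<ServerName> currentHosts, List<String> groupAddresses) {
        List<String> misplaced = new ArrayList<>();
        for (ServerName host : currentHosts) {
            if (host == null) {
                misplaced.add("<unknown>");   // treat as unassigned; do not dereference
            } else if (!groupAddresses.contains(host.getAddress())) {
                misplaced.add(host.getAddress());
            }
        }
        return misplaced;
    }
}
```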





[jira] [Commented] (HBASE-27309) Add major compact table or region operation on master web table page

2022-11-09 Thread zhengsicheng (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17631109#comment-17631109
 ] 

zhengsicheng commented on HBASE-27309:
--

[~zhangduo] [PR to branch-2|https://github.com/apache/hbase/pull/4870/files] 
is available

> Add major compact table or region operation on master web table page
> 
>
> Key: HBASE-27309
> URL: https://issues.apache.org/jira/browse/HBASE-27309
> Project: HBase
>  Issue Type: Sub-task
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
> Attachments: image-2022-09-22-02-32-36-619.png
>
>
> Add major compact table or region operation on master web table page
> !image-2022-09-22-02-32-36-619.png!





[jira] [Created] (HBASE-27523) Add BulkLoad bandwidth throttling

2022-12-08 Thread zhengsicheng (Jira)
zhengsicheng created HBASE-27523:


 Summary: Add BulkLoad bandwidth throttling
 Key: HBASE-27523
 URL: https://issues.apache.org/jira/browse/HBASE-27523
 Project: HBase
  Issue Type: Task
Reporter: zhengsicheng
Assignee: zhengsicheng








[jira] [Work started] (HBASE-27307) Add Move table to Target Group on master web

2022-12-21 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-27307 started by zhengsicheng.

> Add Move table to Target Group on master web
> 
>
> Key: HBASE-27307
> URL: https://issues.apache.org/jira/browse/HBASE-27307
> Project: HBase
>  Issue Type: Sub-task
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
>






[jira] [Commented] (HBASE-27307) Add Move table to Target Group on master web

2022-12-21 Thread zhengsicheng (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17651123#comment-17651123
 ] 

zhengsicheng commented on HBASE-27307:
--

[~zhangduo]  PR is available

> Add Move table to Target Group on master web
> 
>
> Key: HBASE-27307
> URL: https://issues.apache.org/jira/browse/HBASE-27307
> Project: HBase
>  Issue Type: Sub-task
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
>






[jira] [Work started] (HBASE-27523) Add BulkLoad bandwidth throttling

2023-01-15 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-27523 started by zhengsicheng.

> Add BulkLoad bandwidth throttling
> -
>
> Key: HBASE-27523
> URL: https://issues.apache.org/jira/browse/HBASE-27523
> Project: HBase
>  Issue Type: Task
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
>






[jira] [Created] (HBASE-27569) balance_rsgroup NullPointerException

2023-01-16 Thread zhengsicheng (Jira)
zhengsicheng created HBASE-27569:


 Summary: balance_rsgroup NullPointerException
 Key: HBASE-27569
 URL: https://issues.apache.org/jira/browse/HBASE-27569
 Project: HBase
  Issue Type: Bug
  Components: Client
Reporter: zhengsicheng
Assignee: zhengsicheng


hbase(main):001:0> balance_rsgroup 'default'

ERROR: java.io.IOException
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:469)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
        at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338)
        at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318)
Caused by: java.lang.NullPointerException





[jira] [Updated] (HBASE-27569) balance_rsgroup NullPointerException

2023-01-16 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-27569:
-
Description: 
hbase shell

hbase(main):001:0> balance_rsgroup 'default'

ERROR: java.io.IOException
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:469)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
        at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338)
        at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318)
Caused by: java.lang.NullPointerException

 

hbase server log

2023-01-16 18:53:51,418 INFO  
[RpcServer.default.RWQ.Fifo.read.handler=315,queue=85,port=16000] 
rsgroup.RSGroupAdminEndpoint: Client=x balance rsgroup, group=default
2023-01-16 18:53:51,421 ERROR 
[RpcServer.default.RWQ.Fifo.read.handler=315,queue=85,port=16000] 
ipc.RpcServer: Unexpected throwable object
java.lang.NullPointerException

  was:
hbase(main):001:0> balance_rsgroup 'default'

ERROR: java.io.IOException
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:469)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
        at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338)
        at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318)
Caused by: java.lang.NullPointerException


> balance_rsgroup NullPointerException
> 
>
> Key: HBASE-27569
> URL: https://issues.apache.org/jira/browse/HBASE-27569
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
>
> hbase shell
> hbase(main):001:0> balance_rsgroup 'default'
> ERROR: java.io.IOException
>         at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:469)
>         at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
>         at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338)
>         at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318)
> Caused by: java.lang.NullPointerException
>  
> hbase server log
> 2023-01-16 18:53:51,418 INFO  
> [RpcServer.default.RWQ.Fifo.read.handler=315,queue=85,port=16000] 
> rsgroup.RSGroupAdminEndpoint: Client=x balance rsgroup, group=default
> 2023-01-16 18:53:51,421 ERROR 
> [RpcServer.default.RWQ.Fifo.read.handler=315,queue=85,port=16000] 
> ipc.RpcServer: Unexpected throwable object
> java.lang.NullPointerException





[jira] [Updated] (HBASE-27523) Add BulkLoad bandwidth throttling

2023-02-01 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-27523:
-
Attachment: image-2023-02-02-11-08-58-479.png

> Add BulkLoad bandwidth throttling
> -
>
> Key: HBASE-27523
> URL: https://issues.apache.org/jira/browse/HBASE-27523
> Project: HBase
>  Issue Type: Task
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
> Attachments: image-2023-02-02-11-08-58-479.png
>
>






[jira] [Updated] (HBASE-27523) Add BulkLoad bandwidth throttling

2023-02-01 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-27523:
-
Description: !image-2023-02-02-11-08-58-479.png!

> Add BulkLoad bandwidth throttling
> -
>
> Key: HBASE-27523
> URL: https://issues.apache.org/jira/browse/HBASE-27523
> Project: HBase
>  Issue Type: Task
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
> Attachments: image-2023-02-02-11-08-58-479.png
>
>
> !image-2023-02-02-11-08-58-479.png!





[jira] [Updated] (HBASE-27523) Add BulkLoad bandwidth throttling

2023-02-01 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-27523:
-
Description: 
#  Background:
During bulk load HFile copy, the server performance is affected to prevent the 
bandwidth of the server from being full. We can design a throttler to limit the 
speed of copy and thus limit the bandwidth used by copy
 #  Function:
 2.1 Dynamically Updating Traffic Limiting Configurations
 2.2 Node copy HFile Traffic Limiting
 2.3 Synchronous copy HFile of the Active and Standby Clusters is unlimited
 #  Mode of Use:
hbase.regionserver.bulkload.node.bandwidth is greater than 0 open unit Byte/sec
Update the limit value with update_all_config or update_config 'node'
 #  Effect:

// Begin 5M/sec

  hbase.regionserver.bulkload.node.bandwidth
  5242880
  hbase-site.xml


// After update_all_config to 10M/sec

  hbase.regionserver.bulkload.node.bandwidth
  5242880
  hbase-site.xml


!image-2023-02-02-11-08-58-479.png!
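The throttling described above amounts to simple rate accounting: track the bytes copied in the current window and sleep whenever the copy runs ahead of the configured bytes/sec. A minimal sketch under that assumption; the BulkLoadThrottler class and its methods are hypothetical, not the patch's actual implementation:

```java
public class BulkLoadThrottler {
    private final long bytesPerSec;   // e.g. hbase.regionserver.bulkload.node.bandwidth
    private long windowStartMs;
    private long bytesInWindow;

    public BulkLoadThrottler(long bytesPerSec) {
        this.bytesPerSec = bytesPerSec;
        this.windowStartMs = System.currentTimeMillis();
    }

    /** Throttling is enabled only when the configured bandwidth is > 0. */
    public boolean isEnabled() {
        return bytesPerSec > 0;
    }

    /** Account for 'len' copied bytes; sleep if we are ahead of the allowed rate. */
    public void throttle(long len) throws InterruptedException {
        if (!isEnabled()) {
            return;                    // 0 or negative => unlimited, as described above
        }
        bytesInWindow += len;
        long elapsedMs = System.currentTimeMillis() - windowStartMs;
        // How long the window *should* have taken at the configured rate.
        long expectedMs = bytesInWindow * 1000L / bytesPerSec;
        if (expectedMs > elapsedMs) {
            Thread.sleep(expectedMs - elapsedMs);
        }
        // Reset the accounting window roughly once per second.
        if (System.currentTimeMillis() - windowStartMs >= 1000) {
            windowStartMs = System.currentTimeMillis();
            bytesInWindow = 0;
        }
    }
}
```

Copy loops would call throttle(bytesRead) after each chunk; a dynamic config update would simply swap in a throttler built with the new bandwidth value.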

  was:!image-2023-02-02-11-08-58-479.png!


> Add BulkLoad bandwidth throttling
> -
>
> Key: HBASE-27523
> URL: https://issues.apache.org/jira/browse/HBASE-27523
> Project: HBase
>  Issue Type: Task
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
> Attachments: image-2023-02-02-11-08-58-479.png
>
>





[jira] [Updated] (HBASE-27523) Add BulkLoad bandwidth throttling

2023-02-01 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-27523:
-
Attachment: (was: image-2023-02-02-11-08-58-479.png)

> Add BulkLoad bandwidth throttling
> -
>
> Key: HBASE-27523
> URL: https://issues.apache.org/jira/browse/HBASE-27523
> Project: HBase
>  Issue Type: Task
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
> Attachments: image-2023-02-02-13-10-06-619.png
>
>





[jira] [Updated] (HBASE-27523) Add BulkLoad bandwidth throttling

2023-02-01 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-27523:
-
Description: 
# Background:
During a bulk load, copying HFiles can saturate the server's network
bandwidth and degrade overall performance. We can add a throttler that
limits the copy speed and therefore the bandwidth the copy consumes.
 # Function:
 2.1 Dynamically update the throttling configuration
 2.2 Throttle per-node HFile copies
 2.3 Synchronous HFile copies between the active and standby clusters are not throttled
 # Mode of Use:
Set hbase.regionserver.bulkload.node.bandwidth to a value greater than 0 to enable throttling (unit: bytes/sec).
Update the limit at runtime with update_all_config or update_config 'node'
 # Effect:

// Begin at 5 MB/sec
<property>
  <name>hbase.regionserver.bulkload.node.bandwidth</name>
  <value>5242880</value>
  <source>hbase-site.xml</source>
</property>

// After update_all_config, 10 MB/sec
<property>
  <name>hbase.regionserver.bulkload.node.bandwidth</name>
  <value>10485760</value>
  <source>hbase-site.xml</source>
</property>

!image-2023-02-02-13-10-06-619.png!

  was:
# Background:
During a bulk load, copying HFiles can saturate the server's network
bandwidth and degrade overall performance. We can add a throttler that
limits the copy speed and therefore the bandwidth the copy consumes.
 # Function:
 2.1 Dynamically update the throttling configuration
 2.2 Throttle per-node HFile copies
 2.3 Synchronous HFile copies between the active and standby clusters are not throttled
 # Mode of Use:
Set hbase.regionserver.bulkload.node.bandwidth to a value greater than 0 to enable throttling (unit: bytes/sec).
Update the limit at runtime with update_all_config or update_config 'node'
 # Effect:

// Begin at 5 MB/sec
<property>
  <name>hbase.regionserver.bulkload.node.bandwidth</name>
  <value>5242880</value>
  <source>hbase-site.xml</source>
</property>

// After update_all_config, 10 MB/sec
<property>
  <name>hbase.regionserver.bulkload.node.bandwidth</name>
  <value>10485760</value>
  <source>hbase-site.xml</source>
</property>

!image-2023-02-02-11-08-58-479.png!


> Add BulkLoad bandwidth throttling
> -
>
> Key: HBASE-27523
> URL: https://issues.apache.org/jira/browse/HBASE-27523
> Project: HBase
>  Issue Type: Task
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
> Attachments: image-2023-02-02-13-10-06-619.png
>
>
> # Background:
> During a bulk load, copying HFiles can saturate the server's network
> bandwidth and degrade overall performance. We can add a throttler that
> limits the copy speed and therefore the bandwidth the copy consumes.
> # Function:
> 2.1 Dynamically update the throttling configuration
> 2.2 Throttle per-node HFile copies
> 2.3 Synchronous HFile copies between the active and standby clusters are not throttled
> # Mode of Use:
> Set hbase.regionserver.bulkload.node.bandwidth to a value greater than 0
> to enable throttling (unit: bytes/sec).
> Update the limit at runtime with update_all_config or update_config 'node'
> # Effect:
> // Begin at 5 MB/sec
> <property>
>   <name>hbase.regionserver.bulkload.node.bandwidth</name>
>   <value>5242880</value>
>   <source>hbase-site.xml</source>
> </property>
> // After update_all_config, 10 MB/sec
> <property>
>   <name>hbase.regionserver.bulkload.node.bandwidth</name>
>   <value>10485760</value>
>   <source>hbase-site.xml</source>
> </property>
> !image-2023-02-02-13-10-06-619.png!





[jira] [Updated] (HBASE-27523) Add BulkLoad bandwidth throttling

2023-02-01 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-27523:
-
Attachment: image-2023-02-02-13-10-06-619.png

> Add BulkLoad bandwidth throttling
> -
>
> Key: HBASE-27523
> URL: https://issues.apache.org/jira/browse/HBASE-27523
> Project: HBase
>  Issue Type: Task
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
> Attachments: image-2023-02-02-13-10-06-619.png
>
>
> # Background:
> During a bulk load, copying HFiles can saturate the server's network
> bandwidth and degrade overall performance. We can add a throttler that
> limits the copy speed and therefore the bandwidth the copy consumes.
> # Function:
> 2.1 Dynamically update the throttling configuration
> 2.2 Throttle per-node HFile copies
> 2.3 Synchronous HFile copies between the active and standby clusters are not throttled
> # Mode of Use:
> Set hbase.regionserver.bulkload.node.bandwidth to a value greater than 0
> to enable throttling (unit: bytes/sec).
> Update the limit at runtime with update_all_config or update_config 'node'
> # Effect:
> // Begin at 5 MB/sec
> <property>
>   <name>hbase.regionserver.bulkload.node.bandwidth</name>
>   <value>5242880</value>
>   <source>hbase-site.xml</source>
> </property>
> // After update_all_config, 10 MB/sec
> <property>
>   <name>hbase.regionserver.bulkload.node.bandwidth</name>
>   <value>10485760</value>
>   <source>hbase-site.xml</source>
> </property>
> !image-2023-02-02-11-08-58-479.png!





[jira] [Updated] (HBASE-27523) Add BulkLoad bandwidth throttling

2023-02-01 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-27523:
-
Issue Type: New Feature  (was: Task)

> Add BulkLoad bandwidth throttling
> -
>
> Key: HBASE-27523
> URL: https://issues.apache.org/jira/browse/HBASE-27523
> Project: HBase
>  Issue Type: New Feature
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
> Attachments: image-2023-02-02-13-10-06-619.png
>
>
> # Background:
> During a bulk load, copying HFiles can saturate the server's network
> bandwidth and degrade overall performance. We can add a throttler that
> limits the copy speed and therefore the bandwidth the copy consumes.
> # Function:
> 2.1 Dynamically update the throttling configuration
> 2.2 Throttle per-node HFile copies
> 2.3 Synchronous HFile copies between the active and standby clusters are not throttled
> # Mode of Use:
> Set hbase.regionserver.bulkload.node.bandwidth to a value greater than 0
> to enable throttling (unit: bytes/sec).
> Update the limit at runtime with update_all_config or update_config 'node'
> # Effect:
> // Begin at 5 MB/sec
> <property>
>   <name>hbase.regionserver.bulkload.node.bandwidth</name>
>   <value>5242880</value>
>   <source>hbase-site.xml</source>
> </property>
> // After update_all_config, 10 MB/sec
> <property>
>   <name>hbase.regionserver.bulkload.node.bandwidth</name>
>   <value>10485760</value>
>   <source>hbase-site.xml</source>
> </property>
> !image-2023-02-02-13-10-06-619.png!





[jira] [Commented] (HBASE-27523) Add BulkLoad bandwidth throttling

2023-02-01 Thread zhengsicheng (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17683227#comment-17683227
 ] 

zhengsicheng commented on HBASE-27523:
--

[~zhangduo]  Description added; PR available

> Add BulkLoad bandwidth throttling
> -
>
> Key: HBASE-27523
> URL: https://issues.apache.org/jira/browse/HBASE-27523
> Project: HBase
>  Issue Type: New Feature
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
> Attachments: image-2023-02-02-13-10-06-619.png
>
>
> # Background:
> During a bulk load, copying HFiles can saturate the server's network
> bandwidth and degrade overall performance. We can add a throttler that
> limits the copy speed and therefore the bandwidth the copy consumes.
> # Function:
> 2.1 Dynamically update the throttling configuration
> 2.2 Throttle per-node HFile copies
> 2.3 Synchronous HFile copies between the active and standby clusters are not throttled
> # Mode of Use:
> Set hbase.regionserver.bulkload.node.bandwidth to a value greater than 0
> to enable throttling (unit: bytes/sec).
> Update the limit at runtime with update_all_config or update_config 'node'
> # Effect:
> // Begin at 5 MB/sec
> <property>
>   <name>hbase.regionserver.bulkload.node.bandwidth</name>
>   <value>5242880</value>
>   <source>hbase-site.xml</source>
> </property>
> // After update_all_config, 10 MB/sec
> <property>
>   <name>hbase.regionserver.bulkload.node.bandwidth</name>
>   <value>10485760</value>
>   <source>hbase-site.xml</source>
> </property>
> !image-2023-02-02-13-10-06-619.png!



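The pacing arithmetic behind hbase.regionserver.bulkload.node.bandwidth can be sketched in a few lines of Java. This is an illustration only: the class and method names below are hypothetical and are not HBase's actual throttler implementation. It shows how a per-node limit in bytes/sec translates into a wait time for the HFile copier.

```java
// Illustrative sketch only; not the real HBASE-27523 code.
// Computes how long a copier should pause so that observed throughput
// stays at or below the configured bandwidth in bytes per second.
public class BulkLoadThrottleSketch {

    /** Milliseconds to wait after moving `bytes` at `bandwidthBytesPerSec`. */
    public static long millisToWait(long bytes, long bandwidthBytesPerSec) {
        if (bandwidthBytesPerSec <= 0) {
            // Matches the "greater than 0" switch: non-positive disables throttling.
            return 0;
        }
        return bytes * 1000L / bandwidthBytesPerSec;
    }

    public static void main(String[] args) {
        // 5 MB copied at the 5 MB/s limit: pace the copy out to one second.
        System.out.println(millisToWait(5_242_880L, 5_242_880L)); // expect 1000
        // After the limit is raised to 10 MB/s, the same copy needs half the wait.
        System.out.println(millisToWait(5_242_880L, 10_485_760L)); // expect 500
        // A zero bandwidth means no throttling at all.
        System.out.println(millisToWait(5_242_880L, 0L)); // expect 0
    }
}
```

With the 5242880 value from the config block above, every 5 MB copied paces out to one second; doubling the limit via update_all_config halves the wait.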


[jira] [Updated] (HBASE-27307) Add Move table to Target Group on master web

2023-02-21 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-27307:
-
Attachment: image-2023-02-22-01-09-41-209.png

> Add Move table to Target Group on master web
> 
>
> Key: HBASE-27307
> URL: https://issues.apache.org/jira/browse/HBASE-27307
> Project: HBase
>  Issue Type: Sub-task
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
> Attachments: image-2023-02-22-01-09-41-209.png
>
>






[jira] [Updated] (HBASE-27307) Add Move table to Target Group on master web

2023-02-21 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-27307:
-
Description: !image-2023-02-22-01-09-41-209.png!

> Add Move table to Target Group on master web
> 
>
> Key: HBASE-27307
> URL: https://issues.apache.org/jira/browse/HBASE-27307
> Project: HBase
>  Issue Type: Sub-task
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
> Attachments: image-2023-02-22-01-09-41-209.png
>
>
> !image-2023-02-22-01-09-41-209.png!





[jira] [Updated] (HBASE-27307) Add Move table to Target Group on master web

2023-02-21 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-27307:
-
Description: !image-2023-02-22-01-09-41-209.png|width=2073,height=1127!  
(was: !image-2023-02-22-01-09-41-209.png!)

> Add Move table to Target Group on master web
> 
>
> Key: HBASE-27307
> URL: https://issues.apache.org/jira/browse/HBASE-27307
> Project: HBase
>  Issue Type: Sub-task
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
> Attachments: image-2023-02-22-01-09-41-209.png
>
>
> !image-2023-02-22-01-09-41-209.png|width=2073,height=1127!





[jira] [Updated] (HBASE-27307) Add Move table to Target Group on master web

2023-02-21 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-27307:
-
Attachment: (was: image-2023-02-22-01-09-41-209.png)

> Add Move table to Target Group on master web
> 
>
> Key: HBASE-27307
> URL: https://issues.apache.org/jira/browse/HBASE-27307
> Project: HBase
>  Issue Type: Sub-task
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
>
> !image-2023-02-22-01-09-41-209.png|width=2073,height=1127!





[jira] [Updated] (HBASE-27307) Add Move table to Target Group on master web

2023-02-21 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-27307:
-
Description: (was: 
!image-2023-02-22-01-09-41-209.png|width=2073,height=1127!)

> Add Move table to Target Group on master web
> 
>
> Key: HBASE-27307
> URL: https://issues.apache.org/jira/browse/HBASE-27307
> Project: HBase
>  Issue Type: Sub-task
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
>






[jira] [Updated] (HBASE-27307) Add Move table to Target Group on master web

2023-02-21 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-27307:
-
Attachment: move.png

> Add Move table to Target Group on master web
> 
>
> Key: HBASE-27307
> URL: https://issues.apache.org/jira/browse/HBASE-27307
> Project: HBase
>  Issue Type: Sub-task
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
> Attachments: move.png
>
>






[jira] [Updated] (HBASE-27307) Add Move table to Target Group on master web

2023-02-21 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-27307:
-
Description: Move table to Target Group on master web  (was: Move table to 
Target Group)

> Add Move table to Target Group on master web
> 
>
> Key: HBASE-27307
> URL: https://issues.apache.org/jira/browse/HBASE-27307
> Project: HBase
>  Issue Type: Sub-task
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
> Attachments: move.png
>
>
> Move table to Target Group on master web





[jira] [Updated] (HBASE-27307) Add Move table to Target Group on master web

2023-02-21 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-27307:
-
Description: Move table to Target Group

> Add Move table to Target Group on master web
> 
>
> Key: HBASE-27307
> URL: https://issues.apache.org/jira/browse/HBASE-27307
> Project: HBase
>  Issue Type: Sub-task
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
> Attachments: move.png
>
>
> Move table to Target Group





[jira] [Commented] (HBASE-27307) Add Move table to Target Group on master web

2023-02-22 Thread zhengsicheng (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17692525#comment-17692525
 ] 

zhengsicheng commented on HBASE-27307:
--

[~zhangduo]  PR updated

> Add Move table to Target Group on master web
> 
>
> Key: HBASE-27307
> URL: https://issues.apache.org/jira/browse/HBASE-27307
> Project: HBase
>  Issue Type: Sub-task
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
> Attachments: move.png
>
>
> Move table to Target Group on master web





[jira] [Created] (HBASE-27664) RS crash on ipc request too big

2023-02-24 Thread zhengsicheng (Jira)
zhengsicheng created HBASE-27664:


 Summary: RS crash on ipc request too big
 Key: HBASE-27664
 URL: https://issues.apache.org/jira/browse/HBASE-27664
 Project: HBase
  Issue Type: Bug
Reporter: zhengsicheng
Assignee: zhengsicheng


 2023-02-17 16:44:09,601 WARN  [RS-EventLoopGroup-1-46] ipc.NettyRpcServer: RPC 
data length of 825701220 received from client_ip1 is greater than max allowed 
268435456. Set "hbase.ipc.max.request.size" on server to override this 
limit (not recommended)
 2023-02-17 16:44:12,668 ERROR 
[RpcServer.default.RWQ.Fifo.write.handler=62,queue=62,port=16020] 
ipc.RpcServer: Unexpected throwable object
 java.lang.RuntimeException: Unknown code 98
     at org.apache.hadoop.hbase.KeyValue$Type.codeToType(KeyValue.java:276)
     at org.apache.hadoop.hbase.CellUtil.getCellKeyAsString(CellUtil.java:1340)
     at org.apache.hadoop.hbase.CellUtil.getCellKeyAsString(CellUtil.java:1318)
     at org.apache.hadoop.hbase.CellUtil.toString(CellUtil.java:1512)
     at org.apache.hadoop.hbase.ByteBufferKeyValue.toString(ByteBufferKeyValue.java:301)
     at org.apache.hadoop.hbase.client.Mutation.add(Mutation.java:786)
     at org.apache.hadoop.hbase.client.Put.add(Put.java:282)
     at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toPut(ProtobufUtil.java:656)
     at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1033)
     at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:987)
     at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:950)
     at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2949)
     at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:45265)
     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:394)
     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
     at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338)
     at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318)
 2023-02-17 16:44:19,430 WARN  [RS-EventLoopGroup-1-60] ipc.RpcServer: 
Invalid request header: , should have param set in it
 2023-02-17 16:44:19,431 WARN  [RS-EventLoopGroup-1-60] ipc.RpcServer: 
/hostname_rs1:16020 is unable to read call parameter from client client_ip1
 org.apache.hadoop.hbase.DoNotRetryIOException: Invalid request header: , 
should have param set in it
     at org.apache.hadoop.hbase.ipc.ServerRpcConnection.processRequest(ServerRpcConnection.java:654)
     at org.apache.hadoop.hbase.ipc.ServerRpcConnection.processOneRpc(ServerRpcConnection.java:448)
     at org.apache.hadoop.hbase.ipc.NettyServerRpcConnection.process(NettyServerRpcConnection.java:89)
     at org.apache.hadoop.hbase.ipc.NettyServerRpcConnection.process(NettyServerRpcConnection.java:63)
     at org.apache.hadoop.hbase.ipc.NettyRpcServerRequestDecoder.channelRead(NettyRpcServerRequestDecoder.java:62)
     at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
     at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
     at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
     at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:327)
     at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:314)
     at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:435)
     at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:279)
     at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
     at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
     at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
     at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
     at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
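The first WARN in the log comes from a size guard on incoming RPCs: the request's declared data length is compared against hbase.ipc.max.request.size (default 256 MB). The class below is a hypothetical sketch of that comparison, not HBase's actual server code, which lives in its RPC connection handling.

```java
// Hypothetical sketch of the max-request-size guard, for illustration only.
public class RequestSizeGuardSketch {
    /** Default value of hbase.ipc.max.request.size: 256 MB. */
    public static final int DEFAULT_MAX_REQUEST_SIZE = 256 * 1024 * 1024;

    /** True when a request of dataLength bytes should be rejected. */
    public static boolean tooBig(long dataLength, int maxRequestSize) {
        return dataLength > maxRequestSize;
    }

    public static void main(String[] args) {
        // The 825701220-byte request from the log exceeds 268435456, so it is rejected.
        System.out.println(tooBig(825_701_220L, DEFAULT_MAX_REQUEST_SIZE)); // expect true
        // A small request passes the guard.
        System.out.println(tooBig(1024L, DEFAULT_MAX_REQUEST_SIZE)); // expect false
    }
}
```

Note that 256 * 1024 * 1024 = 268435456, the "max allowed" figure printed in the WARN above.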

[jira] [Created] (HBASE-27248) WALPrettyPrinter add print timestamp

2022-07-27 Thread zhengsicheng (Jira)
zhengsicheng created HBASE-27248:


 Summary: WALPrettyPrinter add print timestamp
 Key: HBASE-27248
 URL: https://issues.apache.org/jira/browse/HBASE-27248
 Project: HBase
  Issue Type: Task
Reporter: zhengsicheng
Assignee: zhengsicheng








[jira] [Updated] (HBASE-27248) WALPrettyPrinter add print timestamp

2022-07-27 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-27248:
-
Description: 
hbase wal <wal file> -p should include the timestamp in what it prints

At first:

row=rowA, column=f:c1, type=Delete
    value: 
cell total size sum: 88

 

New:

row=rowA, column=f:c1, timestamp=1657851212679, type=Delete
    value: 
cell total size sum: 88

> WALPrettyPrinter add print timestamp
> 
>
> Key: HBASE-27248
> URL: https://issues.apache.org/jira/browse/HBASE-27248
> Project: HBase
>  Issue Type: Task
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
>
> hbase wal <wal file> -p should include the timestamp in what it prints
> At first:
> row=rowA, column=f:c1, type=Delete
>     value: 
> cell total size sum: 88
>  
> New:
> row=rowA, column=f:c1, timestamp=1657851212679, type=Delete
>     value: 
> cell total size sum: 88





[jira] [Created] (HBASE-27249) Remove invalid peer RegionServer crash

2022-07-27 Thread zhengsicheng (Jira)
zhengsicheng created HBASE-27249:


 Summary: Remove invalid peer RegionServer crash
 Key: HBASE-27249
 URL: https://issues.apache.org/jira/browse/HBASE-27249
 Project: HBase
  Issue Type: Bug
Reporter: zhengsicheng
Assignee: zhengsicheng








[jira] [Updated] (HBASE-27249) Remove invalid peer RegionServer crash

2022-07-27 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-27249:
-
Description: 
add_peer 'test', CLUSTER_KEY => "zookeeper-01:2181:/hbase_01"
remove_peer 'test'
The peer was added with a wrong, unresolvable cluster key; removing that peer then crashes the RegionServer.

The log information is as follows:
2022-07-18 13:26:11,016 ERROR 
[ReadOnlyZKClient-zookeeper-01:2181@0x44281bff-SendThread(zookeeper-01:2181)] 
client.StaticHostProvider: Unable to resolve address: 
zookeeper-01/:2181
java.net.UnknownHostException: zookeeper-01
at 
java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:800)
at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1507)
at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1366)
at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1300)
at 
org.apache.zookeeper.client.StaticHostProvider$1.getAllByName(StaticHostProvider.java:92)
at 
org.apache.zookeeper.client.StaticHostProvider.resolve(StaticHostProvider.java:147)
at 
org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:375)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137)
2022-07-18 13:26:11,016 WARN  
[ReadOnlyZKClient-zookeeper-01:2181@0x44281bff-SendThread(zookeeper-01:2181)] 
zookeeper.ClientCnxn: Session 0x0 for server zookeeper-01/:2181, 
unexpected error, closing socket connection and attempting reconnect
java.lang.IllegalArgumentException: Unable to canonicalize address 
zookeeper-01/:2181 because it's not resolvable
at 
org.apache.zookeeper.SaslServerPrincipal.getServerPrincipal(SaslServerPrincipal.java:71)
at 
org.apache.zookeeper.SaslServerPrincipal.getServerPrincipal(SaslServerPrincipal.java:39)
at 
org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1087)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139)
2022-07-18 13:26:11,116 WARN  [ReadOnlyZKClient-zookeeper-01:2181@0x44281bff] 
zookeeper.ReadOnlyZKClient: 0x44281bff to zookeeper-01:2181 failed for get of 
/hbase_01/hbaseid, code = CONNECTIONLOSS, retries = 48
2022-07-18 13:26:11,119 WARN  [regionserver/ip1:16020.logRoller] 
regionserver.ReplicationSource: peerId=test, WAL group 
ip1%2C16020%2C1658118295598.ip1%2C16020%2C1658118295598.regiongroup-2 queue 
size: 11 exceeds value of replication.source.log.queue.warn 2
2022-07-18 13:26:12,055 INFO  [MemStoreFlusher.1] regionserver.HRegion: 
Flushing 31bbfb9b76b6795e5d44fabd113174c0 1/2 column families, dataSize=245.67 
MB heapSize=257.48 MB; f1={dataSize=245.67 MB, heapSize=257.48 MB, 
offHeapSize=0 B}
2022-07-18 13:26:12,116 ERROR 
[ReadOnlyZKClient-zookeeper-01:2181@0x44281bff-SendThread(zookeeper-01:2181)] 
client.StaticHostProvider: Unable to resolve address: 
zookeeper-01/:2181
java.net.UnknownHostException: zookeeper-01
at 
java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:800)
at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1507)
at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1366)
at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1300)
at 
org.apache.zookeeper.client.StaticHostProvider$1.getAllByName(StaticHostProvider.java:92)
at 
org.apache.zookeeper.client.StaticHostProvider.resolve(StaticHostProvider.java:147)
at 
org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:375)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137)



2022-07-18 13:26:30,270 INFO  [RS_REFRESH_PEER-regionserver/ip1:16020-1] 
regionserver.RefreshPeerCallable: Received a peer change event, peerId=test, 
type=REMOVE_PEER
2022-07-18 13:26:30,270 INFO  [RS_REFRESH_PEER-regionserver/ip1:16020-1] 
regionserver.ReplicationSourceManager: Number of deleted recovered sources for 
test: 0
2022-07-18 13:26:30,270 INFO  [RS_REFRESH_PEER-regionserver/ip1:16020-1] 
regionserver.ReplicationSource: peerId=test, Closing source test because: 
Replication stream was removed by a user
2022-07-18 13:26:30,271 WARN  
[RS_REFRESH_PEER-regionserver/ip1:16020-0.replicationSource,test] 
client.ConnectionImplementation: Retrieve cluster id failed
java.lang.InterruptedException
at 
java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:385)
at 
java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2063)
at 
org.apache.hadoop.hbase.client.ConnectionImplementation.retrieveClusterId(ConnectionImplementation.java:583)
at 
org.apache.hadoop.hbase.client.ConnectionImplementation.<init>(ConnectionImplementation.java:316)
at 
java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native
 Method)
at 
java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeCons
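One way to avoid ending up with such a peer in the first place is to check that every quorum host in the CLUSTER_KEY resolves before calling add_peer. The helper below is hypothetical, not an HBase API; it sketches that client-side pre-check against a key of the form "host1,host2:2181:/hbase_01".

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Hypothetical pre-check, not part of HBase: reject an unresolvable quorum
// up front instead of letting the RegionServer hit UnknownHostException later.
public class ClusterKeyCheckSketch {
    /** True when every host in the cluster key's quorum list resolves. */
    public static boolean hostsResolve(String clusterKey) {
        String quorum = clusterKey.split(":")[0]; // hosts precede the port and znode
        for (String host : quorum.split(",")) {
            try {
                InetAddress.getByName(host.trim());
            } catch (UnknownHostException e) {
                return false; // the same failure the RegionServer hit at runtime
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(hostsResolve("localhost:2181:/hbase_01")); // expect true
        // The .invalid TLD is reserved and never resolves.
        System.out.println(hostsResolve("no-such-host.invalid:2181:/hbase_01")); // expect false
    }
}
```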

[jira] [Updated] (HBASE-27249) Remove invalid peer RegionServer crash

2022-07-27 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-27249:
-
Component/s: Replication

> Remove invalid peer RegionServer crash
> --
>
> Key: HBASE-27249
> URL: https://issues.apache.org/jira/browse/HBASE-27249
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Major
>
> add_peer 'test', CLUSTER_KEY => "zookeeper-01:2181:/hbase_01"
> remove_peer 'test'
> The peer was added with a wrong, unresolvable cluster key; removing that peer then crashes the RegionServer.
> The log information is as follows:
> 2022-07-18 13:26:11,016 ERROR 
> [ReadOnlyZKClient-zookeeper-01:2181@0x44281bff-SendThread(zookeeper-01:2181)] 
> client.StaticHostProvider: Unable to resolve address: 
> zookeeper-01/:2181
> java.net.UnknownHostException: zookeeper-01
> at 
> java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:800)
> at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1507)
> at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1366)
> at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1300)
> at 
> org.apache.zookeeper.client.StaticHostProvider$1.getAllByName(StaticHostProvider.java:92)
> at 
> org.apache.zookeeper.client.StaticHostProvider.resolve(StaticHostProvider.java:147)
> at 
> org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:375)
> at 
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137)
> 2022-07-18 13:26:11,016 WARN  
> [ReadOnlyZKClient-zookeeper-01:2181@0x44281bff-SendThread(zookeeper-01:2181)] 
> zookeeper.ClientCnxn: Session 0x0 for server zookeeper-01/:2181, 
> unexpected error, closing socket connection and attempting reconnect
> java.lang.IllegalArgumentException: Unable to canonicalize address 
> zookeeper-01/:2181 because it's not resolvable
> at 
> org.apache.zookeeper.SaslServerPrincipal.getServerPrincipal(SaslServerPrincipal.java:71)
> at 
> org.apache.zookeeper.SaslServerPrincipal.getServerPrincipal(SaslServerPrincipal.java:39)
> at 
> org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1087)
> at 
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139)
> 2022-07-18 13:26:11,116 WARN  [ReadOnlyZKClient-zookeeper-01:2181@0x44281bff] 
> zookeeper.ReadOnlyZKClient: 0x44281bff to zookeeper-01:2181 failed for get of 
> /hbase_01/hbaseid, code = CONNECTIONLOSS, retries = 48
> 2022-07-18 13:26:11,119 WARN  [regionserver/ip1:16020.logRoller] 
> regionserver.ReplicationSource: peerId=test, WAL group 
> ip1%2C16020%2C1658118295598.ip1%2C16020%2C1658118295598.regiongroup-2 queue 
> size: 11 exceeds value of replication.source.log.queue.warn 2
> 2022-07-18 13:26:12,055 INFO  [MemStoreFlusher.1] regionserver.HRegion: 
> Flushing 31bbfb9b76b6795e5d44fabd113174c0 1/2 column families, 
> dataSize=245.67 MB heapSize=257.48 MB; f1={dataSize=245.67 MB, 
> heapSize=257.48 MB, offHeapSize=0 B}
> 2022-07-18 13:26:12,116 ERROR 
> [ReadOnlyZKClient-zookeeper-01:2181@0x44281bff-SendThread(zookeeper-01:2181)] 
> client.StaticHostProvider: Unable to resolve address: 
> zookeeper-01/:2181
> java.net.UnknownHostException: zookeeper-01
> at 
> java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:800)
> at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1507)
> at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1366)
> at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1300)
> at 
> org.apache.zookeeper.client.StaticHostProvider$1.getAllByName(StaticHostProvider.java:92)
> at 
> org.apache.zookeeper.client.StaticHostProvider.resolve(StaticHostProvider.java:147)
> at 
> org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:375)
> at 
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137)
> 2022-07-18 13:26:30,270 INFO  [RS_REFRESH_PEER-regionserver/ip1:16020-1] 
> regionserver.RefreshPeerCallable: Received a peer change event, peerId=test, 
> type=REMOVE_PEER
> 2022-07-18 13:26:30,270 INFO  [RS_REFRESH_PEER-regionserver/ip1:16020-1] 
> regionserver.ReplicationSourceManager: Number of deleted recovered sources 
> for test: 0
> 2022-07-18 13:26:30,270 INFO  [RS_REFRESH_PEER-regionserver/ip1:16020-1] 
> regionserver.ReplicationSource: peerId=test, Closing source test because: 
> Replication stream was removed by a user
> 2022-07-18 13:26:30,271 WARN  
> [RS_REFRESH_PEER-regionserver/ip1:16020-0.replicationSource,test] 
> client.ConnectionImplementation: Retrieve cluster id failed
> java.lang.InterruptedException
> at 
> java.base/java.util.concurrent.CompletableFutu

[jira] [Updated] (HBASE-27249) Remove invalid peer RegionServer crash

2022-07-27 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-27249:
-
Component/s: (was: Replication)

> Remove invalid peer RegionServer crash
> --
>
> Key: HBASE-27249
> URL: https://issues.apache.org/jira/browse/HBASE-27249
> Project: HBase
>  Issue Type: Bug
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Major
>
> add_peer 'test', CLUSTER_KEY => "zookeeper-01:2181:/hbase_01"
> remove_peer 'test'
> The peer was added with a wrong, unresolvable cluster key; removing that peer then crashes the RegionServer.
> The log information is as follows:
> 2022-07-18 13:26:11,016 ERROR 
> [ReadOnlyZKClient-zookeeper-01:2181@0x44281bff-SendThread(zookeeper-01:2181)] 
> client.StaticHostProvider: Unable to resolve address: 
> zookeeper-01/:2181
> java.net.UnknownHostException: zookeeper-01
> at 
> java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:800)
> at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1507)
> at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1366)
> at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1300)
> at 
> org.apache.zookeeper.client.StaticHostProvider$1.getAllByName(StaticHostProvider.java:92)
> at 
> org.apache.zookeeper.client.StaticHostProvider.resolve(StaticHostProvider.java:147)
> at 
> org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:375)
> at 
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137)
> 2022-07-18 13:26:11,016 WARN  
> [ReadOnlyZKClient-zookeeper-01:2181@0x44281bff-SendThread(zookeeper-01:2181)] 
> zookeeper.ClientCnxn: Session 0x0 for server zookeeper-01/:2181, 
> unexpected error, closing socket connection and attempting reconnect
> java.lang.IllegalArgumentException: Unable to canonicalize address 
> zookeeper-01/:2181 because it's not resolvable
> at 
> org.apache.zookeeper.SaslServerPrincipal.getServerPrincipal(SaslServerPrincipal.java:71)
> at 
> org.apache.zookeeper.SaslServerPrincipal.getServerPrincipal(SaslServerPrincipal.java:39)
> at 
> org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1087)
> at 
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139)
> 2022-07-18 13:26:11,116 WARN  [ReadOnlyZKClient-zookeeper-01:2181@0x44281bff] 
> zookeeper.ReadOnlyZKClient: 0x44281bff to zookeeper-01:2181 failed for get of 
> /hbase_01/hbaseid, code = CONNECTIONLOSS, retries = 48
> 2022-07-18 13:26:11,119 WARN  [regionserver/ip1:16020.logRoller] 
> regionserver.ReplicationSource: peerId=test, WAL group 
> ip1%2C16020%2C1658118295598.ip1%2C16020%2C1658118295598.regiongroup-2 queue 
> size: 11 exceeds value of replication.source.log.queue.warn 2
> 2022-07-18 13:26:12,055 INFO  [MemStoreFlusher.1] regionserver.HRegion: 
> Flushing 31bbfb9b76b6795e5d44fabd113174c0 1/2 column families, 
> dataSize=245.67 MB heapSize=257.48 MB; f1={dataSize=245.67 MB, 
> heapSize=257.48 MB, offHeapSize=0 B}
> 2022-07-18 13:26:12,116 ERROR 
> [ReadOnlyZKClient-zookeeper-01:2181@0x44281bff-SendThread(zookeeper-01:2181)] 
> client.StaticHostProvider: Unable to resolve address: 
> zookeeper-01/:2181
> java.net.UnknownHostException: zookeeper-01
> at 
> java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:800)
> at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1507)
> at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1366)
> at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1300)
> at 
> org.apache.zookeeper.client.StaticHostProvider$1.getAllByName(StaticHostProvider.java:92)
> at 
> org.apache.zookeeper.client.StaticHostProvider.resolve(StaticHostProvider.java:147)
> at 
> org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:375)
> at 
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137)
> 2022-07-18 13:26:30,270 INFO  [RS_REFRESH_PEER-regionserver/ip1:16020-1] 
> regionserver.RefreshPeerCallable: Received a peer change event, peerId=test, 
> type=REMOVE_PEER
> 2022-07-18 13:26:30,270 INFO  [RS_REFRESH_PEER-regionserver/ip1:16020-1] 
> regionserver.ReplicationSourceManager: Number of deleted recovered sources 
> for test: 0
> 2022-07-18 13:26:30,270 INFO  [RS_REFRESH_PEER-regionserver/ip1:16020-1] 
> regionserver.ReplicationSource: peerId=test, Closing source test because: 
> Replication stream was removed by a user
> 2022-07-18 13:26:30,271 WARN  
> [RS_REFRESH_PEER-regionserver/ip1:16020-0.replicationSource,test] 
> client.ConnectionImplementation: Retrieve cluster id failed
> java.lang.InterruptedException
> at 
> java.base/java.util.concurrent.CompletableFuture.reportGet(Completable

[jira] [Commented] (HBASE-27249) Remove invalid peer RegionServer crash

2022-07-27 Thread zhengsicheng (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17572199#comment-17572199
 ] 

zhengsicheng commented on HBASE-27249:
--

[~zhangduo] When a peer cluster is added but the peer cluster's ZooKeeper is shut down or 
unreachable, the source cluster RegionServer aborts.

> Remove invalid peer RegionServer crash
> --
>
> Key: HBASE-27249
> URL: https://issues.apache.org/jira/browse/HBASE-27249
> Project: HBase
>  Issue Type: Bug
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Major
>
> add_peer 'test', CLUSTER_KEY => "zookeeper-01:2181:/hbase_01"
> remove_peer 'test'
> Found that the peer was added with a wrong configuration; removing the peer then crashed the RegionServer.
> The log information is as follows:
> 2022-07-18 13:26:11,016 ERROR 
> [ReadOnlyZKClient-zookeeper-01:2181@0x44281bff-SendThread(zookeeper-01:2181)] 
> client.StaticHostProvider: Unable to resolve address: 
> zookeeper-01/:2181
> java.net.UnknownHostException: zookeeper-01
> at 
> java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:800)
> at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1507)
> at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1366)
> at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1300)
> at 
> org.apache.zookeeper.client.StaticHostProvider$1.getAllByName(StaticHostProvider.java:92)
> at 
> org.apache.zookeeper.client.StaticHostProvider.resolve(StaticHostProvider.java:147)
> at 
> org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:375)
> at 
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137)
> 2022-07-18 13:26:11,016 WARN  
> [ReadOnlyZKClient-zookeeper-01:2181@0x44281bff-SendThread(zookeeper-01:2181)] 
> zookeeper.ClientCnxn: Session 0x0 for server zookeeper-01/:2181, 
> unexpected error, closing socket connection and attempting reconnect
> java.lang.IllegalArgumentException: Unable to canonicalize address 
> zookeeper-01/:2181 because it's not resolvable
> at 
> org.apache.zookeeper.SaslServerPrincipal.getServerPrincipal(SaslServerPrincipal.java:71)
> at 
> org.apache.zookeeper.SaslServerPrincipal.getServerPrincipal(SaslServerPrincipal.java:39)
> at 
> org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1087)
> at 
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139)
> 2022-07-18 13:26:11,116 WARN  [ReadOnlyZKClient-zookeeper-01:2181@0x44281bff] 
> zookeeper.ReadOnlyZKClient: 0x44281bff to zookeeper-01:2181 failed for get of 
> /hbase_01/hbaseid, code = CONNECTIONLOSS, retries = 48
> 2022-07-18 13:26:11,119 WARN  [regionserver/ip1:16020.logRoller] 
> regionserver.ReplicationSource: peerId=test, WAL group 
> ip1%2C16020%2C1658118295598.ip1%2C16020%2C1658118295598.regiongroup-2 queue 
> size: 11 exceeds value of replication.source.log.queue.warn 2
> 2022-07-18 13:26:12,055 INFO  [MemStoreFlusher.1] regionserver.HRegion: 
> Flushing 31bbfb9b76b6795e5d44fabd113174c0 1/2 column families, 
> dataSize=245.67 MB heapSize=257.48 MB; f1={dataSize=245.67 MB, 
> heapSize=257.48 MB, offHeapSize=0 B}
> 2022-07-18 13:26:12,116 ERROR 
> [ReadOnlyZKClient-zookeeper-01:2181@0x44281bff-SendThread(zookeeper-01:2181)] 
> client.StaticHostProvider: Unable to resolve address: 
> zookeeper-01/:2181
> java.net.UnknownHostException: zookeeper-01
> at 
> java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:800)
> at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1507)
> at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1366)
> at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1300)
> at 
> org.apache.zookeeper.client.StaticHostProvider$1.getAllByName(StaticHostProvider.java:92)
> at 
> org.apache.zookeeper.client.StaticHostProvider.resolve(StaticHostProvider.java:147)
> at 
> org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:375)
> at 
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137)
> 2022-07-18 13:26:30,270 INFO  [RS_REFRESH_PEER-regionserver/ip1:16020-1] 
> regionserver.RefreshPeerCallable: Received a peer change event, peerId=test, 
> type=REMOVE_PEER
> 2022-07-18 13:26:30,270 INFO  [RS_REFRESH_PEER-regionserver/ip1:16020-1] 
> regionserver.ReplicationSourceManager: Number of deleted recovered sources 
> for test: 0
> 2022-07-18 13:26:30,270 INFO  [RS_REFRESH_PEER-regionserver/ip1:16020-1] 
> regionserver.ReplicationSource: peerId=test, Closing source test because: 
> Replication stream was removed by a user
> 2022-07-18 13:26:30,271 WARN  
> [RS_REFRESH_PEER-regionserver/ip1:16020-0.replicationSource,test] 
> client.ConnectionImplementation: Retrieve c

[jira] [Commented] (HBASE-27144) Add special rpc handlers for bulkload operations

2022-07-27 Thread zhengsicheng (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17572207#comment-17572207
 ] 

zhengsicheng commented on HBASE-27144:
--

[~zhangduo] I will provide a PR to fix this as soon as possible.

> Add special rpc handlers for bulkload operations
> 
>
> Key: HBASE-27144
> URL: https://issues.apache.org/jira/browse/HBASE-27144
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver, rpc
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Minor
> Fix For: 2.6.0, 3.0.0-alpha-4
>
> Attachments: image-2022-06-22-11-47-26-963.png
>
>
> Bulkload will consume a lot of resources in the cluster. We try to reduce the 
> impact of bulkload on online services and do simple resource isolation for 
> bulkload.
> !image-2022-06-22-11-47-26-963.png!
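> As a rough, self-contained sketch of the isolation idea (an illustration only, not the actual patch; the class name, pool sizes, and `dispatch` API here are hypothetical), bulkload RPCs can be routed to a small dedicated handler pool so they cannot occupy the general read/write handlers:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: two fixed pools, one reserved for bulkload requests.
public class BulkloadIsolationSketch {
    static final int GENERAL_HANDLER_COUNT = 30; // handlers for normal RPCs
    static final int BULKLOAD_HANDLER_COUNT = 2; // small dedicated pool

    final ExecutorService generalPool =
        Executors.newFixedThreadPool(GENERAL_HANDLER_COUNT);
    final ExecutorService bulkloadPool =
        Executors.newFixedThreadPool(BULKLOAD_HANDLER_COUNT);

    // Route a request to the pool matching its type, so a burst of bulkload
    // calls can at most saturate BULKLOAD_HANDLER_COUNT threads.
    Future<?> dispatch(boolean isBulkload, Runnable task) {
        return (isBulkload ? bulkloadPool : generalPool).submit(task);
    }

    // Convenience wrapper: submit and wait for completion.
    boolean dispatchAndWait(boolean isBulkload, Runnable task) {
        try {
            dispatch(isBulkload, task).get(5, TimeUnit.SECONDS);
            return true;
        } catch (Exception e) {
            return false;
        }
    }
}
```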



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-27267) Delete causes timestamp to be negative

2022-08-02 Thread zhengsicheng (Jira)
zhengsicheng created HBASE-27267:


 Summary: Delete causes timestamp to be negative
 Key: HBASE-27267
 URL: https://issues.apache.org/jira/browse/HBASE-27267
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.3.4
Reporter: zhengsicheng








[jira] [Updated] (HBASE-27267) Delete causes timestamp to be negative

2022-08-02 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-27267:
-
Description: 
RegionServer log message:
2022-07-19 12:13:29,324 WARN  
[RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB]
 hbase.KeyValueUtil: Timestamp cannot be negative, ts=-4323977095312258207, 
KeyValueBytesHex=\x00\x00\x00, offset=0, length=40
2022-07-19 12:13:29,324 WARN  
[RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB]
 wal.ProtobufLogReader: Encountered a malformed edit, seeking back to last good 
position in file, from 1099261 to 1078224
java.io.EOFException: EOF  while reading 660 WAL KVs; started reading at 
1078317 and read up to 1099261
at 
org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:403)
at 
org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:97)
at 
org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:85)
at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.readNextEntryAndRecordReaderPosition(WALEntryStream.java:264)
at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.tryAdvanceEntry(WALEntryStream.java:178)
at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.hasNext(WALEntryStream.java:103)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.readWALEntries(ReplicationSourceWALReader.java:230)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:145)
Caused by: java.lang.IllegalArgumentException: Timestamp cannot be negative, 
ts=-4323977095312258207, KeyValueBytesHex=\x00\x00\x00, offset=0, length=40
at 
org.apache.hadoop.hbase.KeyValueUtil.checkKeyValueBytes(KeyValueUtil.java:612)
at org.apache.hadoop.hbase.KeyValue.(KeyValue.java:346)
at 
org.apache.hadoop.hbase.KeyValueUtil.createKeyValueFromInputStream(KeyValueUtil.java:717)
at 
org.apache.hadoop.hbase.codec.KeyValueCodecWithTags$KeyValueDecoder.parseCell(KeyValueCodecWithTags.java:81)
at org.apache.hadoop.hbase.codec.BaseDecoder.advance(BaseDecoder.java:68)
at org.apache.hadoop.hbase.wal.WALEdit.readFromCells(WALEdit.java:276)
at 
org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:387)
... 7 more

> Delete causes timestamp to be negative
> --
>
> Key: HBASE-27267
> URL: https://issues.apache.org/jira/browse/HBASE-27267
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.3.4
>Reporter: zhengsicheng
>Priority: Major
>
> RegionServer log message:
> 2022-07-19 12:13:29,324 WARN  
> [RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB]
>  hbase.KeyValueUtil: Timestamp cannot be negative, ts=-4323977095312258207, 
> KeyValueBytesHex=\x00\x00\x00, offset=0, length=40
> 2022-07-19 12:13:29,324 WARN  
> [RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB]
>  wal.ProtobufLogReader: Encountered a malformed edit, seeking back to last 
> good position in file, from 1099261 to 1078224
> java.io.EOFException: EOF  while reading 660 WAL KVs; started reading at 
> 1078317 and read up to 1099261
> at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:403)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:97)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:85)
> at 
> org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.readNextEntryAndRecordReaderPosition(WALEntryStream.java:264)
> at 
> org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.tryAdvanceEntry(WALEntryStream.java:178)
> at 
> org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.hasNext(WALEntryStream.java:103)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.readWALEntries(ReplicationSourceWALReader.java:230)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:145)
> Caused by: java.lang.IllegalArgumentException: Timestamp cannot be negative, 
> ts=-4323977095312258207, KeyValueBytesHex=\x00\x00\x00, offs

[jira] [Updated] (HBASE-27267) Delete causes timestamp to be negative

2022-08-02 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-27267:
-
Description: 
RegionServer log message:

{code:java}
2022-07-19 12:13:29,324 WARN  
[RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB]
 hbase.KeyValueUtil: Timestamp cannot be negative, ts=-4323977095312258207, 
KeyValueBytesHex=\x00\x00\x00, offset=0, length=40
2022-07-19 12:13:29,324 WARN  
[RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB]
 wal.ProtobufLogReader: Encountered a malformed edit, seeking back to last good 
position in file, from 1099261 to 1078224
java.io.EOFException: EOF  while reading 660 WAL KVs; started reading at 
1078317 and read up to 1099261
at 
org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:403)
at 
org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:97)
at 
org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:85)
at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.readNextEntryAndRecordReaderPosition(WALEntryStream.java:264)
at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.tryAdvanceEntry(WALEntryStream.java:178)
at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.hasNext(WALEntryStream.java:103)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.readWALEntries(ReplicationSourceWALReader.java:230)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:145)
Caused by: java.lang.IllegalArgumentException: Timestamp cannot be negative, 
ts=-4323977095312258207, KeyValueBytesHex=\x00\x00\x00, offset=0, length=40
at 
org.apache.hadoop.hbase.KeyValueUtil.checkKeyValueBytes(KeyValueUtil.java:612)
at org.apache.hadoop.hbase.KeyValue.(KeyValue.java:346)
at 
org.apache.hadoop.hbase.KeyValueUtil.createKeyValueFromInputStream(KeyValueUtil.java:717)
at 
org.apache.hadoop.hbase.codec.KeyValueCodecWithTags$KeyValueDecoder.parseCell(KeyValueCodecWithTags.java:81)
at org.apache.hadoop.hbase.codec.BaseDecoder.advance(BaseDecoder.java:68)
at org.apache.hadoop.hbase.wal.WALEdit.readFromCells(WALEdit.java:276)
at 
org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:387)
... 7 more
{code}


  was:
RegionServer log message:
2022-07-19 12:13:29,324 WARN  
[RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB]
 hbase.KeyValueUtil: Timestamp cannot be negative, ts=-4323977095312258207, 
KeyValueBytesHex=\x00\x00\x00, offset=0, length=40
2022-07-19 12:13:29,324 WARN  
[RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB]
 wal.ProtobufLogReader: Encountered a malformed edit, seeking back to last good 
position in file, from 1099261 to 1078224
java.io.EOFException: EOF  while reading 660 WAL KVs; started reading at 
1078317 and read up to 1099261
at 
org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:403)
at 
org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:97)
at 
org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:85)
at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.readNextEntryAndRecordReaderPosition(WALEntryStream.java:264)
at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.tryAdvanceEntry(WALEntryStream.java:178)
at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.hasNext(WALEntryStream.java:103)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.readWALEntries(ReplicationSourceWALReader.java:230)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:145)
Caused by: java.lang.IllegalArgumentException: Timestamp cannot be negative, 
ts=-4323977095312258207, KeyValueBytesHex=\x00\x00\x00, offset=0, length=40
at 
org.apache.hadoop.hbase.KeyValueUtil.checkKeyValueBytes(KeyValueUtil.java:612)
at org.apache.hadoop.hbase.KeyValue.(KeyValue.java:346)
at 
org.apache.hadoop.hbase.KeyValueUtil.createKeyValueFromInputStream(KeyValueUtil.java:717)
at 
org.apache.hadoop.hbase.codec.KeyValueCodecWithTags$KeyValueDecoder.parseCell(KeyValueCodecWithTags.java:

[jira] [Updated] (HBASE-27267) Delete causes timestamp to be negative

2022-08-02 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-27267:
-
Description: 
RegionServer log message:

{code:java}
2022-07-19 12:13:29,324 WARN  
[RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB]
 hbase.KeyValueUtil: Timestamp cannot be negative, ts=-4323977095312258207, 
KeyValueBytesHex=\x00\x00\x00, offset=0, length=40
2022-07-19 12:13:29,324 WARN  
[RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB]
 wal.ProtobufLogReader: Encountered a malformed edit, seeking back to last good 
position in file, from 1099261 to 1078224
java.io.EOFException: EOF  while reading 660 WAL KVs; started reading at 
1078317 and read up to 1099261
at 
org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:403)
at 
org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:97)
at 
org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:85)
at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.readNextEntryAndRecordReaderPosition(WALEntryStream.java:264)
at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.tryAdvanceEntry(WALEntryStream.java:178)
at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.hasNext(WALEntryStream.java:103)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.readWALEntries(ReplicationSourceWALReader.java:230)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:145)
Caused by: java.lang.IllegalArgumentException: Timestamp cannot be negative, 
ts=-4323977095312258207, KeyValueBytesHex=\x00\x00\x00, offset=0, length=40
at 
org.apache.hadoop.hbase.KeyValueUtil.checkKeyValueBytes(KeyValueUtil.java:612)
at org.apache.hadoop.hbase.KeyValue.(KeyValue.java:346)
at 
org.apache.hadoop.hbase.KeyValueUtil.createKeyValueFromInputStream(KeyValueUtil.java:717)
at 
org.apache.hadoop.hbase.codec.KeyValueCodecWithTags$KeyValueDecoder.parseCell(KeyValueCodecWithTags.java:81)
at org.apache.hadoop.hbase.codec.BaseDecoder.advance(BaseDecoder.java:68)
at org.apache.hadoop.hbase.wal.WALEdit.readFromCells(WALEdit.java:276)
at 
org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:387)
... 7 more
{code}

Debugging the WAL file shows the malformed edit was caused by this operation:
Sequence=365693989, table=tableA, region=148cedb7b8ca3145690800fd650e084d, at 
write timestamp=Sat Jul 16 00:50:01 CST 2022
2022-07-22 22:09:43,244 ERROR [main] wal.WALPrettyPrinter: Timestamp is 
negative row=rowkey1, column=d:act, timestamp=-4323977095312258207, type=Delete
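
The negative value looks like a corrupted 8-byte timestamp field in the serialized KeyValue. A minimal, self-contained sketch (not the actual HBase implementation; the class and method names here are illustrative) of how such a field decodes to a negative long and trips the same "Timestamp cannot be negative" validation:

```java
// Illustrative sketch only: mimics how a KeyValue timestamp is read as a
// big-endian long and rejected when negative, as KeyValueUtil does.
public class NegativeTimestampDemo {
    // Decode 8 big-endian bytes into a long, as a serialized timestamp is stored.
    static long decodeTimestamp(byte[] buf, int offset) {
        long ts = 0;
        for (int i = 0; i < 8; i++) {
            ts = (ts << 8) | (buf[offset + i] & 0xFFL);
        }
        return ts;
    }

    // Validation analogous to the check in KeyValueUtil.checkKeyValueBytes.
    static void checkTimestamp(long ts) {
        if (ts < 0) {
            throw new IllegalArgumentException("Timestamp cannot be negative, ts=" + ts);
        }
    }

    public static void main(String[] args) {
        // A high bit set in the first byte makes the decoded long negative,
        // so any corruption of that byte yields a huge negative timestamp.
        byte[] corrupted = {(byte) 0xC3, 0x00, 0x11, 0x22, 0x33, 0x44, 0x55, 0x66};
        long ts = decodeTimestamp(corrupted, 0);
        try {
            checkTimestamp(ts);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```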

  was:
RegionServer log message:

{code:java}
2022-07-19 12:13:29,324 WARN  
[RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB]
 hbase.KeyValueUtil: Timestamp cannot be negative, ts=-4323977095312258207, 
KeyValueBytesHex=\x00\x00\x00, offset=0, length=40
2022-07-19 12:13:29,324 WARN  
[RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB]
 wal.ProtobufLogReader: Encountered a malformed edit, seeking back to last good 
position in file, from 1099261 to 1078224
java.io.EOFException: EOF  while reading 660 WAL KVs; started reading at 
1078317 and read up to 1099261
at 
org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:403)
at 
org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:97)
at 
org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:85)
at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.readNextEntryAndRecordReaderPosition(WALEntryStream.java:264)
at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.tryAdvanceEntry(WALEntryStream.java:178)
at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.hasNext(WALEntryStream.java:103)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.readWALEntries(ReplicationSourceWALReader.java:230)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:145)
Caused by: java.lang.IllegalArgumentException: Timestamp cannot be negative, 
ts=-4323977095312258207, KeyValueBytesHex=\x00\x00\x00, offset=0, length=40
at 
org.apa

[jira] [Updated] (HBASE-27267) Delete causes timestamp to be negative

2022-08-02 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-27267:
-
Description: 
RegionServer log message:

{code:java}
2022-07-19 12:13:29,324 WARN  
[RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB]
 hbase.KeyValueUtil: Timestamp cannot be negative, ts=-4323977095312258207, 
KeyValueBytesHex=\x00\x00\x00, offset=0, length=40
2022-07-19 12:13:29,324 WARN  
[RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB]
 wal.ProtobufLogReader: Encountered a malformed edit, seeking back to last good 
position in file, from 1099261 to 1078224
java.io.EOFException: EOF  while reading 660 WAL KVs; started reading at 
1078317 and read up to 1099261
at 
org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:403)
at 
org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:97)
at 
org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:85)
at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.readNextEntryAndRecordReaderPosition(WALEntryStream.java:264)
at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.tryAdvanceEntry(WALEntryStream.java:178)
at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.hasNext(WALEntryStream.java:103)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.readWALEntries(ReplicationSourceWALReader.java:230)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:145)
Caused by: java.lang.IllegalArgumentException: Timestamp cannot be negative, 
ts=-4323977095312258207, KeyValueBytesHex=\x00\x00\x00, offset=0, length=40
at 
org.apache.hadoop.hbase.KeyValueUtil.checkKeyValueBytes(KeyValueUtil.java:612)
at org.apache.hadoop.hbase.KeyValue.(KeyValue.java:346)
at 
org.apache.hadoop.hbase.KeyValueUtil.createKeyValueFromInputStream(KeyValueUtil.java:717)
at 
org.apache.hadoop.hbase.codec.KeyValueCodecWithTags$KeyValueDecoder.parseCell(KeyValueCodecWithTags.java:81)
at org.apache.hadoop.hbase.codec.BaseDecoder.advance(BaseDecoder.java:68)
at org.apache.hadoop.hbase.wal.WALEdit.readFromCells(WALEdit.java:276)
at 
org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:387)
... 7 more
{code}
Sequence=365693989, table=tableA, region=148cedb7b8ca3145690800fd650e084d, at 
write timestamp=Sat Jul 16 00:50:01 CST 2022
2022-07-22 22:09:43,244 ERROR [main] wal.WALPrettyPrinter: Timestamp is 
negative row=rowkey1, column=d:act, timestamp=-4323977095312258207, type=Delete



  was:
RegionServer log message:

{code:java}
2022-07-19 12:13:29,324 WARN  
[RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB]
 hbase.KeyValueUtil: Timestamp cannot be negative, ts=-4323977095312258207, 
KeyValueBytesHex=\x00\x00\x00, offset=0, length=40
2022-07-19 12:13:29,324 WARN  
[RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB]
 wal.ProtobufLogReader: Encountered a malformed edit, seeking back to last good 
position in file, from 1099261 to 1078224
java.io.EOFException: EOF  while reading 660 WAL KVs; started reading at 
1078317 and read up to 1099261
at 
org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:403)
at 
org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:97)
at 
org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:85)
at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.readNextEntryAndRecordReaderPosition(WALEntryStream.java:264)
at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.tryAdvanceEntry(WALEntryStream.java:178)
at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.hasNext(WALEntryStream.java:103)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.readWALEntries(ReplicationSourceWALReader.java:230)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:145)
Caused by: java.lang.IllegalArgumentException: Timestamp cannot be negative, 
ts=-4323977095312258207, KeyValueBytesHex=\x00\x00\x00, offset=0, length=40
at 
org.apache.hadoop.hbase.KeyValueUtil.checkKeyValueBy

[jira] [Updated] (HBASE-27267) Delete causes timestamp to be negative

2022-08-02 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-27267:
-
Description: 
RegionServer log message:

{code:java}
2022-07-19 12:13:29,324 WARN  
[RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB]
 hbase.KeyValueUtil: Timestamp cannot be negative, ts=-4323977095312258207, 
KeyValueBytesHex=\x00\x00\x00, offset=0, length=40
2022-07-19 12:13:29,324 WARN  
[RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB]
 wal.ProtobufLogReader: Encountered a malformed edit, seeking back to last good 
position in file, from 1099261 to 1078224
java.io.EOFException: EOF  while reading 660 WAL KVs; started reading at 
1078317 and read up to 1099261
at 
org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:403)
at 
org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:97)
at 
org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:85)
at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.readNextEntryAndRecordReaderPosition(WALEntryStream.java:264)
at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.tryAdvanceEntry(WALEntryStream.java:178)
at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.hasNext(WALEntryStream.java:103)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.readWALEntries(ReplicationSourceWALReader.java:230)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:145)
Caused by: java.lang.IllegalArgumentException: Timestamp cannot be negative, 
ts=-4323977095312258207, KeyValueBytesHex=\x00\x00\x00, offset=0, length=40
at 
org.apache.hadoop.hbase.KeyValueUtil.checkKeyValueBytes(KeyValueUtil.java:612)
at org.apache.hadoop.hbase.KeyValue.(KeyValue.java:346)
at 
org.apache.hadoop.hbase.KeyValueUtil.createKeyValueFromInputStream(KeyValueUtil.java:717)
at 
org.apache.hadoop.hbase.codec.KeyValueCodecWithTags$KeyValueDecoder.parseCell(KeyValueCodecWithTags.java:81)
at org.apache.hadoop.hbase.codec.BaseDecoder.advance(BaseDecoder.java:68)
at org.apache.hadoop.hbase.wal.WALEdit.readFromCells(WALEdit.java:276)
at 
org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:387)
... 7 more
{code}

Debugging the WAL file shows that a delete operation is the cause:
Sequence=365693989, table=tableA, region=148cedb7b8ca3145690800fd650e084d, at 
write timestamp=Sat Jul 16 00:50:01 CST 2022
2022-07-22 22:09:43,244 ERROR [main] wal.WALPrettyPrinter: Timestamp is 
negative row=rowkey1, column=d:act, timestamp=-4323977095312258207, type=Delete
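The failing check can be illustrated with a minimal sketch (an assumption-laden simplification, not the actual HBase source): the 8-byte timestamp field of a serialized KeyValue decodes as a big-endian signed long, so garbage bytes with the high bit set surface as the "Timestamp cannot be negative" error seen above.

```java
import java.nio.ByteBuffer;

// Sketch only: simplified stand-in for HBase's KeyValueUtil timestamp
// validation, showing why a corrupt 8-byte field decodes to a negative long.
public class TimestampCheckSketch {
  // The timestamp field is a big-endian signed long inside the KeyValue bytes.
  static long decodeTimestamp(byte[] bytes, int offset) {
    return ByteBuffer.wrap(bytes, offset, 8).getLong();
  }

  static void checkTimestamp(long ts) {
    if (ts < 0) {
      throw new IllegalArgumentException("Timestamp cannot be negative, ts=" + ts);
    }
  }

  public static void main(String[] args) {
    // Encode the exact value reported in the log, then decode and validate it.
    byte[] field = ByteBuffer.allocate(8).putLong(-4323977095312258207L).array();
    long ts = decodeTimestamp(field, 0);
    try {
      checkTimestamp(ts);
    } catch (IllegalArgumentException e) {
      System.out.println(e.getMessage());
    }
  }
}
```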


[jira] [Updated] (HBASE-27267) Delete causes timestamp to be negative

2022-08-02 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-27267:
-
Description: 
RegionServer log message:

{code:java}
2022-07-19 12:13:29,324 WARN  
[RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB]
 hbase.KeyValueUtil: Timestamp cannot be negative, ts=-4323977095312258207, 
KeyValueBytesHex=\x00\x00\x00, offset=0, length=40
2022-07-19 12:13:29,324 WARN  
[RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB]
 wal.ProtobufLogReader: Encountered a malformed edit, seeking back to last good 
position in file, from 1099261 to 1078224
java.io.EOFException: EOF  while reading 660 WAL KVs; started reading at 
1078317 and read up to 1099261
at 
org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:403)
at 
org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:97)
at 
org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:85)
at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.readNextEntryAndRecordReaderPosition(WALEntryStream.java:264)
at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.tryAdvanceEntry(WALEntryStream.java:178)
at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.hasNext(WALEntryStream.java:103)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.readWALEntries(ReplicationSourceWALReader.java:230)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:145)
Caused by: java.lang.IllegalArgumentException: Timestamp cannot be negative, 
ts=-4323977095312258207, KeyValueBytesHex=\x00\x00\x00, offset=0, length=40
at 
org.apache.hadoop.hbase.KeyValueUtil.checkKeyValueBytes(KeyValueUtil.java:612)
at org.apache.hadoop.hbase.KeyValue.<init>(KeyValue.java:346)
at 
org.apache.hadoop.hbase.KeyValueUtil.createKeyValueFromInputStream(KeyValueUtil.java:717)
at 
org.apache.hadoop.hbase.codec.KeyValueCodecWithTags$KeyValueDecoder.parseCell(KeyValueCodecWithTags.java:81)
at org.apache.hadoop.hbase.codec.BaseDecoder.advance(BaseDecoder.java:68)
at org.apache.hadoop.hbase.wal.WALEdit.readFromCells(WALEdit.java:276)
at 
org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:387)
... 7 more
{code}

Debugging the WAL file shows that a delete operation is the cause:

{code:java}
Sequence=365693989, table=tableA, region=148cedb7b8ca3145690800fd650e084d, at 
write timestamp=Sat Jul 16 00:50:01 CST 2022
2022-07-22 22:09:43,244 ERROR [main] wal.WALPrettyPrinter: Timestamp is 
negative row=rowkey1, column=d:act, timestamp=-4323977095312258207, type=Delete
{code}
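For reference, output like the above can be produced with HBase's WALPrettyPrinter via the `hbase wal` command (the WAL path and region hash below are placeholders for this cluster's values):

```
# -p prints cell values; -r filters by the encoded region name.
bin/hbase wal -p -r 148cedb7b8ca3145690800fd650e084d \
  hdfs:///hbase/WALs/<server>/<wal-file>
```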



[jira] [Updated] (HBASE-27267) Delete causes timestamp to be negative

2022-08-03 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-27267:
-
Description: 
With client 1.1.6 and server 2.3.4, there is a case where a batch delete produces a negative timestamp.
RegionServer log message:

{code:java}
2022-07-19 12:13:29,324 WARN  
[RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB]
 hbase.KeyValueUtil: Timestamp cannot be negative, ts=-4323977095312258207, 
KeyValueBytesHex=\x00\x00\x00, offset=0, length=40
2022-07-19 12:13:29,324 WARN  
[RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB]
 wal.ProtobufLogReader: Encountered a malformed edit, seeking back to last good 
position in file, from 1099261 to 1078224
java.io.EOFException: EOF  while reading 660 WAL KVs; started reading at 
1078317 and read up to 1099261
at 
org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:403)
at 
org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:97)
at 
org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:85)
at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.readNextEntryAndRecordReaderPosition(WALEntryStream.java:264)
at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.tryAdvanceEntry(WALEntryStream.java:178)
at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.hasNext(WALEntryStream.java:103)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.readWALEntries(ReplicationSourceWALReader.java:230)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:145)
Caused by: java.lang.IllegalArgumentException: Timestamp cannot be negative, 
ts=-4323977095312258207, KeyValueBytesHex=\x00\x00\x00, offset=0, length=40
at 
org.apache.hadoop.hbase.KeyValueUtil.checkKeyValueBytes(KeyValueUtil.java:612)
at org.apache.hadoop.hbase.KeyValue.<init>(KeyValue.java:346)
at 
org.apache.hadoop.hbase.KeyValueUtil.createKeyValueFromInputStream(KeyValueUtil.java:717)
at 
org.apache.hadoop.hbase.codec.KeyValueCodecWithTags$KeyValueDecoder.parseCell(KeyValueCodecWithTags.java:81)
at org.apache.hadoop.hbase.codec.BaseDecoder.advance(BaseDecoder.java:68)
at org.apache.hadoop.hbase.wal.WALEdit.readFromCells(WALEdit.java:276)
at 
org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:387)
... 7 more
{code}

Debugging the WAL file shows that a delete operation is the cause:
{code:java}
Sequence=365693989, table=tableA, region=148cedb7b8ca3145690800fd650e084d, at 
write timestamp=Sat Jul 16 00:50:01 CST 2022
2022-07-22 22:09:43,244 ERROR [main] wal.WALPrettyPrinter: Timestamp is 
negative row=rowkey1, column=d:act, timestamp=-4323977095312258207, type=Delete
{code}



[jira] [Updated] (HBASE-27267) Delete causes timestamp to be negative

2022-08-03 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng updated HBASE-27267:
-
Description: 
With client 1.1.6 and server 2.3.4, there is a case where a batch delete produces a negative timestamp.
# RegionServer log message:

{code:java}
2022-07-19 12:13:29,324 WARN  
[RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB]
 hbase.KeyValueUtil: Timestamp cannot be negative, ts=-4323977095312258207, 
KeyValueBytesHex=\x00\x00\x00, offset=0, length=40
2022-07-19 12:13:29,324 WARN  
[RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB]
 wal.ProtobufLogReader: Encountered a malformed edit, seeking back to last good 
position in file, from 1099261 to 1078224
java.io.EOFException: EOF  while reading 660 WAL KVs; started reading at 
1078317 and read up to 1099261
at 
org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:403)
at 
org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:97)
at 
org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:85)
at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.readNextEntryAndRecordReaderPosition(WALEntryStream.java:264)
at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.tryAdvanceEntry(WALEntryStream.java:178)
at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.hasNext(WALEntryStream.java:103)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.readWALEntries(ReplicationSourceWALReader.java:230)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:145)
Caused by: java.lang.IllegalArgumentException: Timestamp cannot be negative, 
ts=-4323977095312258207, KeyValueBytesHex=\x00\x00\x00, offset=0, length=40
at 
org.apache.hadoop.hbase.KeyValueUtil.checkKeyValueBytes(KeyValueUtil.java:612)
at org.apache.hadoop.hbase.KeyValue.<init>(KeyValue.java:346)
at 
org.apache.hadoop.hbase.KeyValueUtil.createKeyValueFromInputStream(KeyValueUtil.java:717)
at 
org.apache.hadoop.hbase.codec.KeyValueCodecWithTags$KeyValueDecoder.parseCell(KeyValueCodecWithTags.java:81)
at org.apache.hadoop.hbase.codec.BaseDecoder.advance(BaseDecoder.java:68)
at org.apache.hadoop.hbase.wal.WALEdit.readFromCells(WALEdit.java:276)
at 
org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:387)
... 7 more
{code}

# Debugging the WAL file shows that a delete operation is the cause:
{code:java}
Sequence=365693989, table=tableA, region=148cedb7b8ca3145690800fd650e084d, at 
write timestamp=Sat Jul 16 00:50:01 CST 2022
2022-07-22 22:09:43,244 ERROR [main] wal.WALPrettyPrinter: Timestamp is 
negative row=rowkey1, column=d:act, timestamp=-4323977095312258207, type=Delete
{code}

# The user uses Spark to read/write HBase, with a batch size of 1:
{code:scala}
def dataDeleteFromHbase(rdd: RDD[(String, String)], hbase_table: String,
    hbase_instance: String, hbase_accesskey: String, accumulator: LongAccumulator,
    buffersize: String, batchsize: Int): Unit = {
  rdd.foreachPartition(iterator => {
    val partitionId = TaskContext.getPartitionId()
    val conf = HBaseConfiguration.create()
    val connection = SparkHbaseUtils.getconnection(conf)
    val table = connection.getTable(TableName.valueOf(hbase_table))
    var deleteList = new util.LinkedList[Delete]()
    var count = 0
    var batchCount = 0
    while (iterator.hasNext) {
      val element = iterator.next
      val crc32 = new CRC32()
      crc32.update(s"${element._1}_${element._2}".getBytes())
      val crcArr = convertLow4bit2SmallEndan(crc32.getValue)
      val key = concat(DigestUtils.md5(s"${element._1}_${element._2}"), crcArr)
      val delete = new Delete(key)
      deleteList.add(delete)
      count += 1
      if (count % batchsize.toInt == 0) {
        batchCount = batchCount + 1
        try {
          table.delete(deleteList)
        } catch {
          case _: RetriesExhaustedWithDetailsException => {
            LOGGER.warn(s"==partitionId: ${partitionId}===batchCount: ${batchCount}===Wait 1000 ms, retry..")
            Thread.sleep(1000)
            processDelThrottlingException(table, deleteList, partitionId, batchCount)
          }
          case _: ThrottlingException => {
            LOGGER.warn(s"==partitionId: ${partitionId}===batchCount: ${batchCount}===Wait 1000 ms, retry..")
            Thread.sleep(1000)
            processDelThrottlingException(table, delet

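The helper functions referenced in the Spark job (`convertLow4bit2SmallEndan`, `concat`) are the user's own code and are not shown in the report. A plausible sketch of their behavior, guessed from their names and call sites (hypothetical, not the actual implementation): take the low 4 bytes of the CRC32 value in little-endian order and append them to the MD5 digest of the key string to form the rowkey.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.zip.CRC32;

// Hypothetical reconstruction of the rowkey helpers used by the Spark job.
public class RowKeySketch {
  // Low 4 bytes of the CRC32 value, little-endian ("small endian" in the job).
  static byte[] lowFourBytesLittleEndian(long crc) {
    return ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN)
        .putInt((int) crc).array();
  }

  // Concatenate two byte arrays.
  static byte[] concat(byte[] a, byte[] b) {
    byte[] out = new byte[a.length + b.length];
    System.arraycopy(a, 0, out, 0, a.length);
    System.arraycopy(b, 0, out, a.length, b.length);
    return out;
  }

  public static void main(String[] args) throws Exception {
    String key = "part1_part2";
    CRC32 crc32 = new CRC32();
    crc32.update(key.getBytes(StandardCharsets.UTF_8));
    byte[] rowkey = concat(
        MessageDigest.getInstance("MD5").digest(key.getBytes(StandardCharsets.UTF_8)),
        lowFourBytesLittleEndian(crc32.getValue()));
    // 16-byte MD5 digest plus 4-byte CRC suffix.
    System.out.println(rowkey.length);
  }
}
```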