[jira] [Commented] (HBASE-15439) getMaximumAllowedTimeBetweenRuns in ScheduledChore ignores the TimeUnit

2016-03-14 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15192835#comment-15192835
 ] 

Anoop Sam John commented on HBASE-15439:


+1

> getMaximumAllowedTimeBetweenRuns in ScheduledChore ignores the TimeUnit
> ---
>
> Key: HBASE-15439
> URL: https://issues.apache.org/jira/browse/HBASE-15439
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Jingcheng Du
> Attachments: HBASE-15439.patch
>
>
> I was running the IntegrationTestIngestWithMOB test.
> I lowered the mob compaction chore interval to this value:
> {code}
> <property>
>   <name>hbase.mob.compaction.chore.period</name>
>   <value>6000</value>
> </property>
> {code}
> After a whole night, there was no indication from the master log that mob
> compaction ran.
> All I found was:
> {code}
> 2016-03-09 04:18:52,194 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 05:58:52,516 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 07:38:52,847 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 09:18:52,848 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 10:58:52,932 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 12:38:52,932 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 14:18:52,933 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 15:58:52,957 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 17:38:52,960 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> {code}
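The messages above are spaced exactly 1 hour 40 minutes (6000 seconds) apart, which matches the period value being compared in the wrong unit. A minimal sketch of the kind of fix the title implies, assuming ScheduledChore's {{period}} and {{timeUnit}} fields; the attached patch is the authoritative change:

{code}
// Sketch only: express the allowed gap between runs in milliseconds using the
// chore's TimeUnit, instead of comparing the raw period value against millis.
private double getMaximumAllowedTimeBetweenRuns() {
  // 1.5x the period, converted to milliseconds
  return 1.5 * timeUnit.toMillis(period);
}
{code}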



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15265) Implement an asynchronous FSHLog

2016-03-14 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15192838#comment-15192838
 ] 

Duo Zhang commented on HBASE-15265:
---

I mean that the {{RegionGroupingProvider}} is an algorithm that could be used 
together with any {{WALProvider}}, not only FSHLog. So I suggest adding a 
configuration option like "multiwal.algo". If it is empty, we just use the 
configured {{WALProvider}}. If it is non-empty, for example 'RegionGrouping', 
then we will use RegionGroupingProvider, and inside that provider we will store 
a {{WALProvider}} instead of an {{FSHLog}}.
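A rough sketch of the proposed selection logic; "multiwal.algo" is the option suggested above, while {{createConfiguredProvider}} and the wrapping constructor are hypothetical names used only for illustration:

{code}
// Illustrative only, not committed code.
String algo = conf.get("multiwal.algo", "");
final WALProvider provider;
if (algo.isEmpty()) {
  // No grouping algorithm configured: use the configured WALProvider as-is.
  provider = createConfiguredProvider(conf);
} else if ("RegionGrouping".equals(algo)) {
  // Wrap the configured provider: the grouping provider then holds generic
  // WALProvider instances per region group instead of concrete FSHLogs.
  provider = new RegionGroupingProvider(createConfiguredProvider(conf));
} else {
  throw new IllegalArgumentException("Unknown multiwal.algo: " + algo);
}
{code}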

What do you think? [~carp84]
Thanks.

> Implement an asynchronous FSHLog
> 
>
> Key: HBASE-15265
> URL: https://issues.apache.org/jira/browse/HBASE-15265
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-15265-v1.patch, HBASE-15265-v2.patch, 
> HBASE-15265-v3.patch, HBASE-15265-v4.patch, HBASE-15265-v5.patch, 
> HBASE-15265-v6.patch, HBASE-15265-v7.patch, HBASE-15265.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-15265) Implement an asynchronous FSHLog

2016-03-14 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15192838#comment-15192838
 ] 

Duo Zhang edited comment on HBASE-15265 at 3/14/16 7:13 AM:


I mean that the {{RegionGroupingProvider}} is an algorithm that could be used 
together with any {{WALProvider}}, not only FSHLog. So I suggest adding a 
configuration option like "multiwal.algo". If it is empty, we just use the 
configured {{WALProvider}}. If it is non-empty, for example 'RegionGrouping', 
then we will use RegionGroupingProvider, and inside that provider we will store 
a {{WALProvider}} instead of an {{FSHLog}}.

What do you think? [~carp84]
Thanks.


was (Author: apache9):
I mean that, the {{RegionGroupingProvider}} is an algorithm that could be used 
together with any {{WALProvider}}s, not only FSHLog. So I suggest to add an 
option in configuration like "multiwal.algo". If empty, we just use the 
configured {{WALProvider}}. If non-empty, for example 'RegionGrouping', then we 
will use RegionGroupingProvider, and inside the Provider, we will store 
{{WALProvider}} instead of {{FSHLog}}.

What do you think? [~carp84]
Thanks.

> Implement an asynchronous FSHLog
> 
>
> Key: HBASE-15265
> URL: https://issues.apache.org/jira/browse/HBASE-15265
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-15265-v1.patch, HBASE-15265-v2.patch, 
> HBASE-15265-v3.patch, HBASE-15265-v4.patch, HBASE-15265-v5.patch, 
> HBASE-15265-v6.patch, HBASE-15265-v7.patch, HBASE-15265.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15406) Split / merge switch left disabled after early termination of hbck

2016-03-14 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen updated HBASE-15406:
--
Attachment: HBASE-15406_v1.patch

Addressed [~stack]'s comments: moved the cleanup logic into hbck and used 
protobuf to save the data on zk.

> Split / merge switch left disabled after early termination of hbck
> --
>
> Key: HBASE-15406
> URL: https://issues.apache.org/jira/browse/HBASE-15406
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15406.patch, HBASE-15406.v1.patch, 
> HBASE-15406_v1.patch, wip.patch
>
>
> This was what I did on a cluster with a 1.4.0-SNAPSHOT built on Thursday:
> Run 'hbase hbck -disableSplitAndMerge' on gateway node of the cluster
> Terminate hbck early
> Enter hbase shell where I observed:
> {code}
> hbase(main):001:0> splitormerge_enabled 'SPLIT'
> false
> 0 row(s) in 0.3280 seconds
> hbase(main):002:0> splitormerge_enabled 'MERGE'
> false
> 0 row(s) in 0.0070 seconds
> {code}
> The expectation is that the split / merge switches should be restored to their 
> default values after hbck exits.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15406) Split / merge switch left disabled after early termination of hbck

2016-03-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15192847#comment-15192847
 ] 

Hadoop QA commented on HBASE-15406:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 0m 14s 
{color} | {color:red} Docker failed to build yetus/hbase:date2016-03-14. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12793258/HBASE-15406_v1.patch |
| JIRA Issue | HBASE-15406 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/953/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Split / merge switch left disabled after early termination of hbck
> --
>
> Key: HBASE-15406
> URL: https://issues.apache.org/jira/browse/HBASE-15406
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15406.patch, HBASE-15406.v1.patch, 
> HBASE-15406_v1.patch, wip.patch
>
>
> This was what I did on a cluster with a 1.4.0-SNAPSHOT built on Thursday:
> Run 'hbase hbck -disableSplitAndMerge' on gateway node of the cluster
> Terminate hbck early
> Enter hbase shell where I observed:
> {code}
> hbase(main):001:0> splitormerge_enabled 'SPLIT'
> false
> 0 row(s) in 0.3280 seconds
> hbase(main):002:0> splitormerge_enabled 'MERGE'
> false
> 0 row(s) in 0.0070 seconds
> {code}
> The expectation is that the split / merge switches should be restored to their 
> default values after hbck exits.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15265) Implement an asynchronous FSHLog

2016-03-14 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15192848#comment-15192848
 ] 

Duo Zhang commented on HBASE-15265:
---

{quote}
At a glance of HBASE-14949 and comments there, it seems multiwal will have some 
problem with the patch?
{quote}

I used to worry about whether the wal entries of one region could be written to 
different wal files, which might cause some problems; I'm not sure. But this can 
not happen since wal is a final field in HRegion. And in DLS mode, we will flush 
the memstore and clean up the split wal files before processing further 
requests, so it is still safe if a region moves out and then moves back. As for 
DLR, first, it does not need to split wal files, and second, aren't we going to 
remove that feature anyway?

Thanks.

> Implement an asynchronous FSHLog
> 
>
> Key: HBASE-15265
> URL: https://issues.apache.org/jira/browse/HBASE-15265
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-15265-v1.patch, HBASE-15265-v2.patch, 
> HBASE-15265-v3.patch, HBASE-15265-v4.patch, HBASE-15265-v5.patch, 
> HBASE-15265-v6.patch, HBASE-15265-v7.patch, HBASE-15265.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15389) Write out multiple files when compaction

2016-03-14 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-15389:
--
Attachment: HBASE-15389-v6.patch

Addressing the comments on RB, especially the empty-file issue for major 
compaction.

> Write out multiple files when compaction
> 
>
> Key: HBASE-15389
> URL: https://issues.apache.org/jira/browse/HBASE-15389
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction
>Affects Versions: 2.0.0, 1.3.0, 0.98.19
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0, 1.3.0, 0.98.19
>
> Attachments: HBASE-15389-uc.patch, HBASE-15389-v1.patch, 
> HBASE-15389-v2.patch, HBASE-15389-v3.patch, HBASE-15389-v4.patch, 
> HBASE-15389-v5.patch, HBASE-15389-v6.patch, HBASE-15389.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15389) Write out multiple files when compaction

2016-03-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15192859#comment-15192859
 ] 

Hadoop QA commented on HBASE-15389:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 0m 13s 
{color} | {color:red} Docker failed to build yetus/hbase:date2016-03-14. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12793262/HBASE-15389-v6.patch |
| JIRA Issue | HBASE-15389 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/954/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Write out multiple files when compaction
> 
>
> Key: HBASE-15389
> URL: https://issues.apache.org/jira/browse/HBASE-15389
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction
>Affects Versions: 2.0.0, 1.3.0, 0.98.19
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0, 1.3.0, 0.98.19
>
> Attachments: HBASE-15389-uc.patch, HBASE-15389-v1.patch, 
> HBASE-15389-v2.patch, HBASE-15389-v3.patch, HBASE-15389-v4.patch, 
> HBASE-15389-v5.patch, HBASE-15389-v6.patch, HBASE-15389.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15398) Cells loss or disorder when using family essential filter and partial scanning protocol

2016-03-14 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15192875#comment-15192875
 ] 

Phil Yang commented on HBASE-15398:
---

{quote}
The following comparator will work for user-space results only 
Collections.sort(cells, CellComparator.COMPARATOR); and will give incorrect 
results if applied to meta emissions
{quote}
I find that MetaCellComparator differs only in compareRows, and in 
Result.createCompleteResult we only sort cells within the same row. Can we use 
CellComparator alone here? Or use MetaCellComparator if region.isMeta anyway.
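Something along these lines, assuming the caller can tell whether the cells belong to a meta region (a sketch, not a patch):

{code}
// Pick the comparator by region type; within a single row the two comparators
// behave the same, per the observation above.
Comparator<Cell> comparator = isMetaRegion
    ? CellComparator.META_COMPARATOR
    : CellComparator.COMPARATOR;
Collections.sort(cells, comparator);
{code}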

> Cells loss or disorder when using family essential filter and partial 
> scanning protocol
> ---
>
> Key: HBASE-15398
> URL: https://issues.apache.org/jira/browse/HBASE-15398
> Project: HBase
>  Issue Type: Bug
>  Components: dataloss, Scanners
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Phil Yang
>Assignee: Phil Yang
>Priority: Critical
> Attachments: 15398-test.txt, HBASE-15398-v2.patch, HBASE-15398.v1.txt
>
>
> In RegionScannerImpl, we have two heaps, storeHeap and joinedHeap. If we have 
> a filter and it doesn't apply to all column families, the stores whose 
> families needn't be filtered will be in joinedHeap. We scan storeHeap first, 
> then joinedHeap, then merge the results, sort them, and return them to the 
> client. We need the sort because cells are ordered by rowkey/cf/cq/ts and a 
> smaller cf may be in the joinedHeap.
> However, after HBASE-11544 we may transfer partial results when we get 
> SIZE_LIMIT_REACHED_MID_ROW or other similar states. We may return a larger cf 
> first because it is in storeHeap and then a smaller cf because it is in 
> joinedHeap. The server won't hold all the cells of a row, and the client 
> doesn't have any sorting logic, so the order of cf in the Result the user 
> sees is wrong.
> And a more critical bug is: if we get a LIMIT_REACHED_MID_ROW on the last 
> cell of a row in storeHeap, we will break scanning in RegionScannerImpl, and 
> in populateResult we will change the state to SIZE_LIMIT_REACHED because the 
> next peeked cell is in the next row. But this is only the last cell of one 
> heap and we have two... And SIZE_LIMIT_REACHED means this Result is not 
> partial (per ScannerContext.partialResultFormed), so the client will see it, 
> merge the results, and return them to the user, losing the data of 
> joinedHeap. On the next scan we will read the next row of storeHeap, and 
> joinedHeap is forgotten and never read...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15439) getMaximumAllowedTimeBetweenRuns in ScheduledChore ignores the TimeUnit

2016-03-14 Thread Jingcheng Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingcheng Du updated HBASE-15439:
-
Status: Patch Available  (was: Open)

> getMaximumAllowedTimeBetweenRuns in ScheduledChore ignores the TimeUnit
> ---
>
> Key: HBASE-15439
> URL: https://issues.apache.org/jira/browse/HBASE-15439
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Jingcheng Du
> Attachments: HBASE-15439.patch
>
>
> I was running the IntegrationTestIngestWithMOB test.
> I lowered the mob compaction chore interval to this value:
> {code}
> <property>
>   <name>hbase.mob.compaction.chore.period</name>
>   <value>6000</value>
> </property>
> {code}
> After a whole night, there was no indication from the master log that mob
> compaction ran.
> All I found was:
> {code}
> 2016-03-09 04:18:52,194 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 05:58:52,516 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 07:38:52,847 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 09:18:52,848 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 10:58:52,932 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 12:38:52,932 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 14:18:52,933 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 15:58:52,957 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 17:38:52,960 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15439) getMaximumAllowedTimeBetweenRuns in ScheduledChore ignores the TimeUnit

2016-03-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15192942#comment-15192942
 ] 

Hadoop QA commented on HBASE-15439:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 0m 14s 
{color} | {color:red} Docker failed to build yetus/hbase:date2016-03-14. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12793257/HBASE-15439.patch |
| JIRA Issue | HBASE-15439 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/955/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> getMaximumAllowedTimeBetweenRuns in ScheduledChore ignores the TimeUnit
> ---
>
> Key: HBASE-15439
> URL: https://issues.apache.org/jira/browse/HBASE-15439
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Jingcheng Du
> Attachments: HBASE-15439.patch
>
>
> I was running the IntegrationTestIngestWithMOB test.
> I lowered the mob compaction chore interval to this value:
> {code}
> <property>
>   <name>hbase.mob.compaction.chore.period</name>
>   <value>6000</value>
> </property>
> {code}
> After a whole night, there was no indication from the master log that mob
> compaction ran.
> All I found was:
> {code}
> 2016-03-09 04:18:52,194 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 05:58:52,516 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 07:38:52,847 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 09:18:52,848 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 10:58:52,932 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 12:38:52,932 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 14:18:52,933 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 15:58:52,957 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 17:38:52,960 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15439) getMaximumAllowedTimeBetweenRuns in ScheduledChore ignores the TimeUnit

2016-03-14 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15192956#comment-15192956
 ] 

ramkrishna.s.vasudevan commented on HBASE-15439:


+1. Nice catch.

> getMaximumAllowedTimeBetweenRuns in ScheduledChore ignores the TimeUnit
> ---
>
> Key: HBASE-15439
> URL: https://issues.apache.org/jira/browse/HBASE-15439
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Jingcheng Du
> Attachments: HBASE-15439.patch
>
>
> I was running the IntegrationTestIngestWithMOB test.
> I lowered the mob compaction chore interval to this value:
> {code}
> <property>
>   <name>hbase.mob.compaction.chore.period</name>
>   <value>6000</value>
> </property>
> {code}
> After a whole night, there was no indication from the master log that mob
> compaction ran.
> All I found was:
> {code}
> 2016-03-09 04:18:52,194 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 05:58:52,516 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 07:38:52,847 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 09:18:52,848 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 10:58:52,932 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 12:38:52,932 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 14:18:52,933 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 15:58:52,957 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 17:38:52,960 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15339) Improve DateTieredCompactionPolicy

2016-03-14 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-15339:
--
Description: 
Add multi-output support.
Add archive old data support.
...

  was:
For our MiCloud service, the old data is rarely touched but we still need to 
keep it, so we want to put the data on inexpensive devices and reduce redundancy 
using EC to cut down the cost.

With the date-based tiered compaction introduced in HBASE-15181, new data and 
old data can be placed in different tiers. But the tier boundary moves as time 
passes, so it is still possible that we do compaction on an old tier, which 
breaks our block moving and EC work.

So here we want to introduce an "archive tier" to better fit our scenario. Add 
a configuration called "archive unit", for example, year. That means, if we 
find that the tier boundary is already in the previous year, then we reset the 
boundary to the start and end of that year, and if we want to do compaction in 
this tier, we just compact all files into one file. The file will never be 
changed unless we force a major compaction, so it is safe to apply EC and other 
cost-reducing approaches to the file. And we make more tiers before this tier, 
year by year. 


> Improve DateTieredCompactionPolicy
> --
>
> Key: HBASE-15339
> URL: https://issues.apache.org/jira/browse/HBASE-15339
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Duo Zhang
>
> Add multi-output support.
> Add archive old data support.
> ...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14921) Memory optimizations

2016-03-14 Thread Anastasia Braginsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15192967#comment-15192967
 ] 

Anastasia Braginsky commented on HBASE-14921:
-

Hi Guys,

Long time no speak. Meanwhile, the CellBlocksSegment based on a simple Cells 
array is ready (as in the second picture in the document). This means additional 
dereferencing and spending some additional memory on object headers. I'm also 
implementing the totally flat CellBlocks, so it would be possible to allocate it 
off-heap, and it is generally more efficient (the third picture in the updated 
document). In order to do that I need a toBytes representation of a reference to 
a Chunk. As far as I know, in pure Java you can’t get the address of an object, 
so it can be done only by defining an index per Chunk and having a 
translation/mapping from some integer index to a Chunk and backward (from a 
Chunk to its index).
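For example, the mapping could be as simple as the following sketch (purely illustrative; every name here is hypothetical, not from the code base):

{code}
// Translate between an int id (which has a trivial toBytes form) and a Chunk.
private final java.util.concurrent.atomic.AtomicInteger nextChunkId =
    new java.util.concurrent.atomic.AtomicInteger();
private final java.util.concurrent.ConcurrentMap<Integer, Chunk> idToChunk =
    new java.util.concurrent.ConcurrentHashMap<>();

int register(Chunk chunk) {
  int id = nextChunkId.incrementAndGet();
  idToChunk.put(id, chunk); // index -> Chunk
  return id;                // caller stores the id, e.g. inside the flat cell block
}

Chunk resolve(int id) {     // Chunk lookup from the stored index
  return idToChunk.get(id);
}
{code}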

Looking closer at HeapMemStoreLAB and MemStoreChunkPool, I see that a 
HeapMemStoreLAB.Chunk can be allocated both directly from the JVM heap and via 
MemStoreChunkPool, because MemStoreChunkPool is sometimes not configured or 
can’t be used. It makes much more sense for all Chunk allocations to be 
centralized and to go only through MemStoreChunkPool; that helps in managing the 
mapping described above and also in off-heaping. Can MemStoreChunkPool always 
exist? MemStoreChunkPool could pre-allocate Chunks as a pool or allocate a 
single Chunk on demand, as HeapMemStoreLAB currently does.

Then we have clear roles. HeapMemStoreLAB deals only with reference counting 
and with deciding when a Chunk can be deallocated; it gets/returns chunks 
from/to MemStoreChunkPool. MemStoreChunkPool deals only with chunk allocation, 
pre-allocation, and re-allocation.
[~stack], [~anoop.hbase], and everybody, what do you think?

Thanks,
Anastasia

> Memory optimizations
> 
>
> Key: HBASE-14921
> URL: https://issues.apache.org/jira/browse/HBASE-14921
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Eshcar Hillel
>Assignee: Anastasia Braginsky
> Attachments: CellBlocksSegmentInMemStore.pdf, 
> CellBlocksSegmentinthecontextofMemStore(1).pdf
>
>
> Memory optimizations including compressed format representation and offheap 
> allocations



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15454) Archive store files older than max age

2016-03-14 Thread Duo Zhang (JIRA)
Duo Zhang created HBASE-15454:
-

 Summary: Archive store files older than max age
 Key: HBASE-15454
 URL: https://issues.apache.org/jira/browse/HBASE-15454
 Project: HBase
  Issue Type: Sub-task
Reporter: Duo Zhang


Sometimes old data is rarely touched but we cannot remove it. So archive it 
into several big files (by year or something) and use EC to reduce the 
redundancy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14963) Remove use of Guava Stopwatch from HBase client code

2016-03-14 Thread Andrew Logvinov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15192972#comment-15192972
 ] 

Andrew Logvinov commented on HBASE-14963:
-

Can you please port it to 1.x as well? We're stuck with this version of Guava, 
and it is giving us a hard time resolving conflicts with other libraries that 
depend on a newer version.

> Remove use of Guava Stopwatch from HBase client code
> 
>
> Key: HBASE-14963
> URL: https://issues.apache.org/jira/browse/HBASE-14963
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Devaraj Das
>Assignee: Devaraj Das
>  Labels: needs_releasenote
> Fix For: 2.0.0
>
> Attachments: no-stopwatch.txt
>
>
> We ran into an issue where an application bundled its own Guava (and that 
> happened to be in the classpath first) and HBase's MetaTableLocator threw an 
> exception due to the fact that Stopwatch's constructor wasn't compatible... 
> Might be better not to depend on Stopwatch at all in MetaTableLocator since 
> the functionality is easily doable without it.
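For reference, the Stopwatch usage is straightforward to replace with plain JDK timing; a sketch of the direction the description suggests, not the attached patch:

{code}
// Measure elapsed time without Guava's Stopwatch.
long startNs = System.nanoTime();
// ... wait/retry for the meta region location ...
long elapsedMs =
    java.util.concurrent.TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startNs);
{code}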



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15265) Implement an asynchronous FSHLog

2016-03-14 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193023#comment-15193023
 ] 

Yu Li commented on HBASE-15265:
---

Got your point, and I think these are two different ways of categorizing 
WALProvider: by FS type, or by WAL number. Actually we both think the current 
provider semantics, putting filesystem and multiwal together, are ambiguous, and 
the only divergence is whether to remove filesystem (add a property to specify 
the WAL type) or multiwal (add a property to specify the wal strategy, single or 
multiple), agree?

Regarding which way to choose, my concern mainly lies in backward 
compatibility. I guess (though this may not be the truth) that currently few 
people specify "filesystem" as the provider type since it's the same as the 
default, but to use multiple wals they have to explicitly set the provider type 
to multiwal. So if we categorize WALProvider into filesystem and asyncfs, users 
of multiple wals will have to update their configuration files (not a big deal, 
but still some additional effort).

> Implement an asynchronous FSHLog
> 
>
> Key: HBASE-15265
> URL: https://issues.apache.org/jira/browse/HBASE-15265
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-15265-v1.patch, HBASE-15265-v2.patch, 
> HBASE-15265-v3.patch, HBASE-15265-v4.patch, HBASE-15265-v5.patch, 
> HBASE-15265-v6.patch, HBASE-15265-v7.patch, HBASE-15265.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15433) SnapshotManager#restoreSnapshot not update table and region count quota correctly when encountering exception

2016-03-14 Thread Jianwei Cui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193038#comment-15193038
 ] 

Jianwei Cui commented on HBASE-15433:
-

{quote}
When QEE is thrown we will still end up in updating the region quota which is 
not really required, may be we can avoid that.
{quote}
Yes, we should catch QEE first and not update the quota information in that 
situation, as you suggested above.
{quote}
Also suggest to rename currentRegionCount to tableRegionCount and 
updatedRegionCount to snapshotRegionCount for better understanding. Please add 
more comments like why are we doing this way.
{quote}
Good suggestions, will update the patch.

{quote}
If this throws exception then there will be another issue, because now the 
snapshot has been successfully restored but in the catch clause we are updating 
the table region count in namespace quota.
{quote}
Good find. Here, the {{checkAndUpdateNamespaceRegionQuota}} should succeed 
because it will reduce the region count for the table? However, if 
{{checkAndUpdateNamespaceRegionQuota}} throws an exception, there must be some 
unexpected reason, and calling {{checkAndUpdateNamespaceRegionQuota}} in the 
catch clause may also fail. Can we log an error message in the QEE catch clause 
and throw it directly? The code here could be updated as:
{code}
  int tableRegionCount = -1;
  try {
    // Table already exists. Check and update the region quota for this table's namespace.
    // The table is disabled, so the table region count won't change during restoreSnapshot.
    tableRegionCount = getRegionCountOfTable(tableName);
    int snapshotRegionCount = manifest.getRegionManifestsMap().size();

    // Update the region count before restoreSnapshot if snapshotRegionCount is larger.
    // If we updated the region count to a smaller value before restoreSnapshot and the
    // restoreSnapshot fails, we may fail to reset the region count to its original value
    // if the namespace region count quota is consumed by other tables during the
    // restoreSnapshot, such as a region split or table create under the same namespace.
    if (tableRegionCount > 0 && tableRegionCount < snapshotRegionCount) {
      checkAndUpdateNamespaceRegionQuota(snapshotRegionCount, tableName);
    }

    restoreSnapshot(snapshot, snapshotTableDesc);

    // Update the region count after restoreSnapshot succeeded if snapshotRegionCount is
    // smaller. This step should not fail because it will reduce the region count for the
    // table.
    if (tableRegionCount > 0 && tableRegionCount > snapshotRegionCount) {
      checkAndUpdateNamespaceRegionQuota(snapshotRegionCount, tableName);
    }
  } catch (QuotaExceededException e) {
    LOG.error("Exception occurred while restoring the snapshot " + snapshot.getName()
      + " as table " + tableName.getNameAsString(), e);
    // If QEE is thrown before restoreSnapshot, the quota information is not updated, and
    // we should throw the exception directly. If QEE is thrown after restoreSnapshot,
    // there must be an unexpected reason, so we also throw the exception directly.
    throw e;
  } catch (IOException e) {
    if (tableRegionCount > 0) {
      // Reset the region count for the table.
      checkAndUpdateNamespaceRegionQuota(tableRegionCount, tableName);
    }
    LOG.error("Exception occurred while restoring the snapshot " + snapshot.getName()
      + " as table " + tableName.getNameAsString(), e);
    throw e;
  }
{code}
What's your opinion about this issue? [~ashish singhi]

> SnapshotManager#restoreSnapshot not update table and region count quota 
> correctly when encountering exception
> -
>
> Key: HBASE-15433
> URL: https://issues.apache.org/jira/browse/HBASE-15433
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 2.0.0
>Reporter: Jianwei Cui
> Attachments: HBASE-15433-trunk-v1.patch, HBASE-15433-trunk-v2.patch, 
> HBASE-15433-trunk.patch
>
>
> In SnapshotManager#restoreSnapshot, the table and region quota will be 
> checked and updated as:
> {code}
>   try {
>     // Table already exist. Check and update the region quota for this table namespace
>     checkAndUpdateNamespaceRegionQuota(manifest, tableName);
>     restoreSnapshot(snapshot, snapshotTableDesc);
>   } catch (IOException e) {
>     this.master.getMasterQuotaManager().removeTableFromNamespaceQuota(tableName);
>     LOG.error("Exception occurred while restoring the snapshot " + snapshot.getName()
>       + " as table " + tableName.getNameAsString(), e);
>     throw e;
>   }

[jira] [Commented] (HBASE-15265) Implement an asynchronous FSHLog

2016-03-14 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193073#comment-15193073
 ] 

Yu Li commented on HBASE-15265:
---

I see, thanks for the clarification. We're using DLS so I'm not familiar with 
DLR, maybe [~zjushch] could share some thoughts here on DLR? Let me dig deeper 
into the changes in HBASE-14949 later and come back for discussion if any 
findings. Thanks.

> Implement an asynchronous FSHLog
> 
>
> Key: HBASE-15265
> URL: https://issues.apache.org/jira/browse/HBASE-15265
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-15265-v1.patch, HBASE-15265-v2.patch, 
> HBASE-15265-v3.patch, HBASE-15265-v4.patch, HBASE-15265-v5.patch, 
> HBASE-15265-v6.patch, HBASE-15265-v7.patch, HBASE-15265.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15433) SnapshotManager#restoreSnapshot not update table and region count quota correctly when encountering exception

2016-03-14 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193091#comment-15193091
 ] 

Ashish Singhi commented on HBASE-15433:
---

bq. Here, the checkAndUpdateNamespaceRegionQuota should succeed because it will 
reduce the region count for the table?
Not necessarily; suppose some other thread's operations increase the namespace 
region count immediately after the snapshot is restored, and there is not 
enough region quota available now.

> SnapshotManager#restoreSnapshot not update table and region count quota 
> correctly when encountering exception
> -
>
> Key: HBASE-15433
> URL: https://issues.apache.org/jira/browse/HBASE-15433
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 2.0.0
>Reporter: Jianwei Cui
> Attachments: HBASE-15433-trunk-v1.patch, HBASE-15433-trunk-v2.patch, 
> HBASE-15433-trunk.patch
>
>
> In SnapshotManager#restoreSnapshot, the table and region quota will be 
> checked and updated as:
> {code}
>   try {
>     // Table already exist. Check and update the region quota for this table namespace
>     checkAndUpdateNamespaceRegionQuota(manifest, tableName);
>     restoreSnapshot(snapshot, snapshotTableDesc);
>   } catch (IOException e) {
>     this.master.getMasterQuotaManager().removeTableFromNamespaceQuota(tableName);
>     LOG.error("Exception occurred while restoring the snapshot " + snapshot.getName()
>       + " as table " + tableName.getNameAsString(), e);
>     throw e;
>   }
> {code}
> The 'checkAndUpdateNamespaceRegionQuota' will fail if the regions in the 
> snapshot make the region count quota exceeded; then the table will be removed 
> from the quota in the 'catch' block. This will make the current table count 
> and region count decrease, so a following table creation or region split will 
> succeed even if the actual quota is exceeded.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15433) SnapshotManager#restoreSnapshot not update table and region count quota correctly when encountering exception

2016-03-14 Thread Jianwei Cui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193148#comment-15193148
 ] 

Jianwei Cui commented on HBASE-15433:
-

The table must be disabled during restoreSnapshot, so the {{tableRegionCount}} 
won't change. Assuming there won't be concurrent restoreSnapshot requests for 
the same table, the {{checkAndUpdateNamespaceRegionQuota}} after 
{{restoreSnapshot}} will be executed only when {{tableRegionCount > 
snapshotRegionCount}} is satisfied; this means we have reserved enough region 
count for the {{checkAndUpdateNamespaceRegionQuota}} from the namespace quota. 
Therefore, other threads' operations won't make the 
{{checkAndUpdateNamespaceRegionQuota}} fail if they are operating on different 
tables? However, if there are concurrent restoreSnapshot requests for the same 
table, it will cause problems, and we may need a lock to make sure the quota 
information is updated correctly, or we could move the quota check and update 
logic into the {{RestoreSnapshotHandler}} after the table lock is held?

> SnapshotManager#restoreSnapshot not update table and region count quota 
> correctly when encountering exception
> -
>
> Key: HBASE-15433
> URL: https://issues.apache.org/jira/browse/HBASE-15433
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 2.0.0
>Reporter: Jianwei Cui
> Attachments: HBASE-15433-trunk-v1.patch, HBASE-15433-trunk-v2.patch, 
> HBASE-15433-trunk.patch
>
>
> In SnapshotManager#restoreSnapshot, the table and region quota will be 
> checked and updated as:
> {code}
>   try {
>     // Table already exist. Check and update the region quota for this table namespace
>     checkAndUpdateNamespaceRegionQuota(manifest, tableName);
>     restoreSnapshot(snapshot, snapshotTableDesc);
>   } catch (IOException e) {
>     this.master.getMasterQuotaManager().removeTableFromNamespaceQuota(tableName);
>     LOG.error("Exception occurred while restoring the snapshot " + snapshot.getName()
>       + " as table " + tableName.getNameAsString(), e);
>     throw e;
>   }
> {code}
> The 'checkAndUpdateNamespaceRegionQuota' will fail if the regions in the 
> snapshot make the region count quota exceeded; then the table will be removed 
> from the quota in the 'catch' block. This will make the current table count 
> and region count decrease, so a following table creation or region split will 
> succeed even if the actual quota is exceeded.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15433) SnapshotManager#restoreSnapshot not update table and region count quota correctly when encountering exception

2016-03-14 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193177#comment-15193177
 ] 

Ashish Singhi commented on HBASE-15433:
---

I was not talking about concurrent restoreSnapshot requests; any other 
operation on the namespace can also update the quota of the namespace.

bq. checkAndUpdateNamespaceRegionQuota after restoreSnapshot will be executed 
only when tableRegionCount > snapshotRegionCount satisfied
OK. That means we have reserved enough quota beforehand.

bq. if there are concurrent restoreSnapshot requests for the same table, it 
will cause problem, and we may need lock to make sure the quota information is 
updated correctly, or we can move the quota check and update logic in the 
RestoreSnapshotHandler after table lock is held?
Not required, I think, because we have enough quota for this table in the 
cache before restoring the snapshot, and after restoring the snapshot we are 
only decrementing it, so it will work.

{code}
  } catch (QuotaExceededException e) {
    LOG.error("Exception occurred while restoring the snapshot " + snapshot.getName()
      + " as table " + tableName.getNameAsString(), e);
    // If QEE is thrown before restoreSnapshot, the quota information is not updated, and
    // we should throw the exception directly. If QEE is thrown after restoreSnapshot,
    // there must be an unexpected reason, so we also throw the exception directly.
    throw e;
{code}
We can also mention that the quota has been exceeded in the error message.

Thanks.

> SnapshotManager#restoreSnapshot not update table and region count quota 
> correctly when encountering exception
> -
>
> Key: HBASE-15433
> URL: https://issues.apache.org/jira/browse/HBASE-15433
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 2.0.0
>Reporter: Jianwei Cui
> Attachments: HBASE-15433-trunk-v1.patch, HBASE-15433-trunk-v2.patch, 
> HBASE-15433-trunk.patch
>
>
> In SnapshotManager#restoreSnapshot, the table and region quota will be 
> checked and updated as:
> {code}
>   try {
>     // Table already exist. Check and update the region quota for this table namespace
>     checkAndUpdateNamespaceRegionQuota(manifest, tableName);
>     restoreSnapshot(snapshot, snapshotTableDesc);
>   } catch (IOException e) {
>     this.master.getMasterQuotaManager().removeTableFromNamespaceQuota(tableName);
>     LOG.error("Exception occurred while restoring the snapshot " + snapshot.getName()
>       + " as table " + tableName.getNameAsString(), e);
>     throw e;
>   }
> {code}
> The 'checkAndUpdateNamespaceRegionQuota' will fail if the regions in the 
> snapshot make the region count quota exceeded; then the table will be removed 
> from the quota in the 'catch' block. This will make the current table count 
> and region count decrease, so a following table creation or region split will 
> succeed even if the actual quota is exceeded.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15439) getMaximumAllowedTimeBetweenRuns in ScheduledChore ignores the TimeUnit

2016-03-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193201#comment-15193201
 ] 

Hadoop QA commented on HBASE-15439:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
11s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
12s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
44s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 45s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 30s 
{color} | {color:green} hbase-common in the patch passed with JDK v1.8.0. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 13s 
{color} | {color:green} hbase-common in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
8s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 26s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12793257/HBASE-15439.patch |
| JIRA Issue | HBASE-15439 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh

[jira] [Updated] (HBASE-15325) ResultScanner allowing partial result will miss the rest of the row if the region is moved between two rpc requests

2016-03-14 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-15325:
--
Attachment: HBASE-15325-v7.patch

Uploaded a new patch with the conclusion we reached.
It is a little more work when we get batches of 3+5+5 and return 5+5+3 to the 
user, so I changed a lot in getResultsToAddToCache.
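For illustration, the re-chunking itself boils down to something like this (a sketch only; the real getResultsToAddToCache also has to track partial flags and row boundaries):

{code}
// Re-slice buffered cells into Results of at most 'batch' cells each, so that
// server chunks of 3+5+5 reach the user as 5+5+3 when batch = 5.
List<Result> rechunk(List<Cell> buffered, int batch) {
  List<Result> out = new ArrayList<>();
  for (int i = 0; i < buffered.size(); i += batch) {
    out.add(Result.create(buffered.subList(i, Math.min(i + batch, buffered.size()))));
  }
  return out;
}
{code}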

> ResultScanner allowing partial result will miss the rest of the row if the 
> region is moved between two rpc requests
> ---
>
> Key: HBASE-15325
> URL: https://issues.apache.org/jira/browse/HBASE-15325
> Project: HBase
>  Issue Type: Bug
>  Components: dataloss, Scanners
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Phil Yang
>Assignee: Phil Yang
>Priority: Critical
> Attachments: 15325-test.txt, HBASE-15325-v1.txt, HBASE-15325-v2.txt, 
> HBASE-15325-v3.txt, HBASE-15325-v5.txt, HBASE-15325-v6.1.txt, 
> HBASE-15325-v6.2.txt, HBASE-15325-v6.3.txt, HBASE-15325-v6.4.txt, 
> HBASE-15325-v6.5.txt, HBASE-15325-v6.txt, HBASE-15325-v7.patch
>
>
> HBASE-11544 allows a scan rpc to return part of a row to reduce the memory 
> usage of one rpc request, and the client can setAllowPartial or setBatch to 
> get several cells of a row instead of the whole row.
> However, the state of the scanner is saved on the server, and we need it to 
> get the next part if there was a partial result before. If the region moves 
> to another RS, the client will get a NotServingRegionException and open a new 
> scanner to the new RS, which will be regarded as a new scan starting from the 
> end of this row. So the remaining cells of the row of the last result will be 
> missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15325) ResultScanner allowing partial result will miss the rest of the row if the region is moved between two rpc requests

2016-03-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193248#comment-15193248
 ] 

Hadoop QA commented on HBASE-15325:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 15m 41s 
{color} | {color:red} Docker failed to build yetus/hbase:date2016-03-14. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12793305/HBASE-15325-v7.patch |
| JIRA Issue | HBASE-15325 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/957/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> ResultScanner allowing partial result will miss the rest of the row if the 
> region is moved between two rpc requests
> ---
>
> Key: HBASE-15325
> URL: https://issues.apache.org/jira/browse/HBASE-15325
> Project: HBase
>  Issue Type: Bug
>  Components: dataloss, Scanners
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Phil Yang
>Assignee: Phil Yang
>Priority: Critical
> Attachments: 15325-test.txt, HBASE-15325-v1.txt, HBASE-15325-v2.txt, 
> HBASE-15325-v3.txt, HBASE-15325-v5.txt, HBASE-15325-v6.1.txt, 
> HBASE-15325-v6.2.txt, HBASE-15325-v6.3.txt, HBASE-15325-v6.4.txt, 
> HBASE-15325-v6.5.txt, HBASE-15325-v6.txt, HBASE-15325-v7.patch
>
>
> HBASE-11544 allows a scan rpc to return part of a row to reduce the memory 
> usage of one rpc request, and the client can setAllowPartial or setBatch to 
> get several cells of a row instead of the whole row.
> However, the state of the scanner is saved on the server, and we need it to 
> get the next part if there was a partial result before. If the region moves 
> to another RS, the client will get a NotServingRegionException and open a new 
> scanner to the new RS, which will be regarded as a new scan starting from the 
> end of this row. So the remaining cells of the row of the last result will be 
> missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15325) ResultScanner allowing partial result will miss the rest of the row if the region is moved between two rpc requests

2016-03-14 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193296#comment-15193296
 ] 

Anoop Sam John commented on HBASE-15325:


bq.It is a little more work when we get batching 3+5+5 and return 5+5+3 to 
user, so I changed a lot in getResultsToAddToCache
Can we please keep that as part of another issue? Here in this issue, we fix the 
cell-miss problem, which is more serious. The correct batch sizing we can fix in 
another jira.
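
Purely as an illustration of the rebatching concern quoted above -- not the actual 
getResultsToAddToCache logic -- here is a minimal Java sketch, under the assumption 
that partial pieces of sizes 3+5+5 should be re-chunked into user-visible batches 
of 5+5+3:

{code}
import java.util.ArrayList;
import java.util.List;

public class RebatchSketch {
  // Re-chunk arbitrarily sized partial pieces into batches of at most
  // 'batch' cells; only the trailing batch of a row may be smaller.
  static <T> List<List<T>> rebatch(List<List<T>> partials, int batch) {
    List<List<T>> out = new ArrayList<>();
    List<T> current = new ArrayList<>(batch);
    for (List<T> piece : partials) {
      for (T cell : piece) {
        current.add(cell);
        if (current.size() == batch) {
          out.add(current);
          current = new ArrayList<>(batch);
        }
      }
    }
    if (!current.isEmpty()) {
      out.add(current); // e.g. the final 3 when 3+5+5 arrives and batch=5
    }
    return out;
  }
}
{code}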

> ResultScanner allowing partial result will miss the rest of the row if the 
> region is moved between two rpc requests
> ---
>
> Key: HBASE-15325
> URL: https://issues.apache.org/jira/browse/HBASE-15325
> Project: HBase
>  Issue Type: Bug
>  Components: dataloss, Scanners
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Phil Yang
>Assignee: Phil Yang
>Priority: Critical
> Attachments: 15325-test.txt, HBASE-15325-v1.txt, HBASE-15325-v2.txt, 
> HBASE-15325-v3.txt, HBASE-15325-v5.txt, HBASE-15325-v6.1.txt, 
> HBASE-15325-v6.2.txt, HBASE-15325-v6.3.txt, HBASE-15325-v6.4.txt, 
> HBASE-15325-v6.5.txt, HBASE-15325-v6.txt, HBASE-15325-v7.patch
>
>
> HBASE-11544 allows a scan RPC to return part of a row, to reduce memory usage 
> for one RPC request. The client can setAllowPartial or setBatch to get several 
> cells of a row instead of the whole row.
> However, the state of the scanner is saved on the server, and we need it to get 
> the next part if there was a partial result before. If we move the region to 
> another RS, the client will get a NotServingRegionException and open a new 
> scanner to the new RS, which will be regarded as a new scan starting from the end 
> of this row. So the remaining cells of the row from the last result will be missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15439) getMaximumAllowedTimeBetweenRuns in ScheduledChore ignores the TimeUnit

2016-03-14 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15439:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 1.4.0
   1.1.4
   1.2.1
   1.3.0
   2.0.0
   Status: Resolved  (was: Patch Available)

Thanks for the patch, Jingcheng.

Thanks for the reviews.
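
For context, the bug here is a unit mix-up: the missed-start-time threshold was 
computed from the raw period without converting through the chore's TimeUnit, so a 
period declared in a unit other than milliseconds was compared against elapsed 
milliseconds. A minimal sketch of the shape of the fix (field and method names are 
assumptions, not necessarily the exact ScheduledChore code):

{code}
import java.util.concurrent.TimeUnit;

class ChoreSketch {
  private final int period;        // e.g. 6000
  private final TimeUnit timeUnit; // e.g. TimeUnit.SECONDS

  ChoreSketch(int period, TimeUnit timeUnit) {
    this.period = period;
    this.timeUnit = timeUnit;
  }

  // Buggy shape: treats 'period' as if it were already milliseconds.
  double maximumAllowedTimeBetweenRunsBuggy() {
    return 1.5 * period;
  }

  // Fixed shape: normalize through the TimeUnit before comparing
  // against elapsed wall-clock milliseconds.
  double maximumAllowedTimeBetweenRuns() {
    return 1.5 * timeUnit.toMillis(period);
  }
}
{code}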

> getMaximumAllowedTimeBetweenRuns in ScheduledChore ignores the TimeUnit
> ---
>
> Key: HBASE-15439
> URL: https://issues.apache.org/jira/browse/HBASE-15439
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Jingcheng Du
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.4, 1.4.0
>
> Attachments: HBASE-15439.patch
>
>
> I was running the IntegrationTestIngestWithMOB test.
> I lowered the mob compaction chore interval to this value:
> {code}
> 
>   hbase.mob.compaction.chore.period
>   6000
> 
> {code}
> After a whole night, there was no indication in the master log that mob 
> compaction ran.
> All I found was:
> {code}
> 2016-03-09 04:18:52,194 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 05:58:52,516 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 07:38:52,847 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 09:18:52,848 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 10:58:52,932 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 12:38:52,932 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 14:18:52,933 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 15:58:52,957 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 17:38:52,960 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15439) getMaximumAllowedTimeBetweenRuns in ScheduledChore ignores the TimeUnit

2016-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193399#comment-15193399
 ] 

Hudson commented on HBASE-15439:


SUCCESS: Integrated in HBase-1.3-IT #551 (See 
[https://builds.apache.org/job/HBase-1.3-IT/551/])
HBASE-15439 getMaximumAllowedTimeBetweenRuns in ScheduledChore ignores (tedyu: 
rev 1cb82d91189606af0e07fd05955dde830439b509)
* hbase-common/src/main/java/org/apache/hadoop/hbase/ScheduledChore.java


> getMaximumAllowedTimeBetweenRuns in ScheduledChore ignores the TimeUnit
> ---
>
> Key: HBASE-15439
> URL: https://issues.apache.org/jira/browse/HBASE-15439
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Jingcheng Du
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.4, 1.4.0
>
> Attachments: HBASE-15439.patch
>
>
> I was running the IntegrationTestIngestWithMOB test.
> I lowered the mob compaction chore interval to this value:
> {code}
> 
>   hbase.mob.compaction.chore.period
>   6000
> 
> {code}
> After a whole night, there was no indication in the master log that mob 
> compaction ran.
> All I found was:
> {code}
> 2016-03-09 04:18:52,194 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 05:58:52,516 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 07:38:52,847 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 09:18:52,848 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 10:58:52,932 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 12:38:52,932 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 14:18:52,933 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 15:58:52,957 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 17:38:52,960 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15392) Single Cell Get reads two HFileBlocks

2016-03-14 Thread Daniel Pol (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193473#comment-15193473
 ] 

Daniel Pol commented on HBASE-15392:


I think I need some basics here on index usage. How do you implement a 'get 
row'?
In general, when using an RDBMS you look at the indexes to know where to 
get the needed info. So if you need to 'get row' for a row that has multiple cells, 
you would traverse the indexes looking for blocks that contain that row and then 
read only the relevant blocks. Looking at the index keys, you can see which 
blocks contain that row and avoid reading the extra block.
Can we do the same in HBase? That way it won't require knowing whether you're at 
the last cell in the row.


> Single Cell Get reads two HFileBlocks
> -
>
> Key: HBASE-15392
> URL: https://issues.apache.org/jira/browse/HBASE-15392
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>Assignee: stack
> Attachments: 15392-0.98-looksee.txt, 15392.wip.patch, 
> 15392v2.wip.patch, 15392v3.wip.patch, 15392v4.patch, 15392v5.patch, 
> 15392v6.patch, 15392v7.patch, HBASE-15392_suggest.patch, gc.png, gc.png, 
> io.png, no_optimize.patch, no_optimize.patch, reads.png, reads.png, 
> two_seeks.txt
>
>
> As found by Daniel "SystemTap" Pol, a simple Get results in our reading two 
> HFileBlocks, the one that contains the wanted Cell, and the block that 
> follows.
> Here is a bit of custom logging that logs a stack trace on each HFileBlock 
> read so you can see the call stack responsible:
> {code}
> 2016-03-03 22:20:30,191 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] regionserver.StoreScanner: 
> START LOOP
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] regionserver.StoreScanner: 
> QCODE SEEK_NEXT_COL
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileBlockIndex: 
> STARTED WHILE
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.CombinedBlockCache: 
> OUT OF L2
> 2016-03-03 22:20:30,192 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] bucket.BucketCache: Read 
> offset=31409152, len=2103
> 2016-03-03 22:20:30,192 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] bucket.FileIOEngine: 
> offset=31409152, length=2103
> 2016-03-03 22:20:30,193 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileReaderImpl: 
> From Cache [blockType=DATA, fileOffset=2055421, headerSize=33, 
> onDiskSizeWithoutHeader=2024, uncompressedSizeWithoutHeader=2020, 
> prevBlockOffset=2053364, isUseHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, onDiskDataSizeWithHeader=2053, 
> getOnDiskSizeWithHeader=2057, totalChecksumBytes=4, isUnpacked=true, 
> buf=[org.apache.hadoop.hbase.nio.SingleByteBuff@e19fbd54], 
> dataBeginsWith=\x00\x00\x00)\x00\x00\x01`\x00\x16user995139035672819231, 
> fileContext=[usesHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, blocksize=65536, encoding=NONE, includesMvcc=true, 
> includesTags=false, compressAlgo=NONE, compressTags=false, 
> cryptoContext=[cipher=NONE keyHash=NONE]]]
> 2016-03-03 22:20:30,193 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileReaderImpl: 
> Cache hit return [blockType=DATA, fileOffset=2055421, headerSize=33, 
> onDiskSizeWithoutHeader=2024, uncompressedSizeWithoutHeader=2020, 
> prevBlockOffset=2053364, isUseHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, onDiskDataSizeWithHeader=2053, 
> getOnDiskSizeWithHeader=2057, totalChecksumBytes=4, isUnpacked=true, 
> buf=[org.apache.hadoop.hbase.nio.SingleByteBuff@e19fbd54], 
> dataBeginsWith=\x00\x00\x00)\x00\x00\x01`\x00\x16user995139035672819231, 
> fileContext=[usesHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, blocksize=65536, encoding=NONE, includesMvcc=true, 
> includesTags=false, compressAlgo=NONE, compressTags=false, 
> cryptoContext=[cipher=NONE keyHash=NONE]]]
> java.lang.Throwable
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1515)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$CellBasedKeyBlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:324)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.seekTo(HFileReaderImpl.java:831)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.reseekTo(HFileReaderImpl.java:812)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:288)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:198)
> at 
> org.apache.hadoop.hbase.regionserver

[jira] [Updated] (HBASE-15392) Single Cell Get reads two HFileBlocks

2016-03-14 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15392:
--
Attachment: 15392v7.patch

Retry

> Single Cell Get reads two HFileBlocks
> -
>
> Key: HBASE-15392
> URL: https://issues.apache.org/jira/browse/HBASE-15392
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>Assignee: stack
> Attachments: 15392-0.98-looksee.txt, 15392.wip.patch, 
> 15392v2.wip.patch, 15392v3.wip.patch, 15392v4.patch, 15392v5.patch, 
> 15392v6.patch, 15392v7.patch, 15392v7.patch, HBASE-15392_suggest.patch, 
> gc.png, gc.png, io.png, no_optimize.patch, no_optimize.patch, reads.png, 
> reads.png, two_seeks.txt
>
>
> As found by Daniel "SystemTap" Pol, a simple Get results in our reading two 
> HFileBlocks, the one that contains the wanted Cell, and the block that 
> follows.
> Here is a bit of custom logging that logs a stack trace on each HFileBlock 
> read so you can see the call stack responsible:
> {code}
> 2016-03-03 22:20:30,191 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] regionserver.StoreScanner: 
> START LOOP
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] regionserver.StoreScanner: 
> QCODE SEEK_NEXT_COL
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileBlockIndex: 
> STARTED WHILE
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.CombinedBlockCache: 
> OUT OF L2
> 2016-03-03 22:20:30,192 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] bucket.BucketCache: Read 
> offset=31409152, len=2103
> 2016-03-03 22:20:30,192 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] bucket.FileIOEngine: 
> offset=31409152, length=2103
> 2016-03-03 22:20:30,193 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileReaderImpl: 
> From Cache [blockType=DATA, fileOffset=2055421, headerSize=33, 
> onDiskSizeWithoutHeader=2024, uncompressedSizeWithoutHeader=2020, 
> prevBlockOffset=2053364, isUseHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, onDiskDataSizeWithHeader=2053, 
> getOnDiskSizeWithHeader=2057, totalChecksumBytes=4, isUnpacked=true, 
> buf=[org.apache.hadoop.hbase.nio.SingleByteBuff@e19fbd54], 
> dataBeginsWith=\x00\x00\x00)\x00\x00\x01`\x00\x16user995139035672819231, 
> fileContext=[usesHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, blocksize=65536, encoding=NONE, includesMvcc=true, 
> includesTags=false, compressAlgo=NONE, compressTags=false, 
> cryptoContext=[cipher=NONE keyHash=NONE]]]
> 2016-03-03 22:20:30,193 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileReaderImpl: 
> Cache hit return [blockType=DATA, fileOffset=2055421, headerSize=33, 
> onDiskSizeWithoutHeader=2024, uncompressedSizeWithoutHeader=2020, 
> prevBlockOffset=2053364, isUseHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, onDiskDataSizeWithHeader=2053, 
> getOnDiskSizeWithHeader=2057, totalChecksumBytes=4, isUnpacked=true, 
> buf=[org.apache.hadoop.hbase.nio.SingleByteBuff@e19fbd54], 
> dataBeginsWith=\x00\x00\x00)\x00\x00\x01`\x00\x16user995139035672819231, 
> fileContext=[usesHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, blocksize=65536, encoding=NONE, includesMvcc=true, 
> includesTags=false, compressAlgo=NONE, compressTags=false, 
> cryptoContext=[cipher=NONE keyHash=NONE]]]
> java.lang.Throwable
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1515)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$CellBasedKeyBlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:324)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.seekTo(HFileReaderImpl.java:831)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.reseekTo(HFileReaderImpl.java:812)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:288)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:198)
> at 
> org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:54)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:321)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:279)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:806)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:795)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:624)
> a

[jira] [Commented] (HBASE-15392) Single Cell Get reads two HFileBlocks

2016-03-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193481#comment-15193481
 ] 

Hadoop QA commented on HBASE-15392:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 0m 16s 
{color} | {color:red} Docker failed to build yetus/hbase:date2016-03-14. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12793325/15392v7.patch |
| JIRA Issue | HBASE-15392 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/958/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Single Cell Get reads two HFileBlocks
> -
>
> Key: HBASE-15392
> URL: https://issues.apache.org/jira/browse/HBASE-15392
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>Assignee: stack
> Attachments: 15392-0.98-looksee.txt, 15392.wip.patch, 
> 15392v2.wip.patch, 15392v3.wip.patch, 15392v4.patch, 15392v5.patch, 
> 15392v6.patch, 15392v7.patch, 15392v7.patch, HBASE-15392_suggest.patch, 
> gc.png, gc.png, io.png, no_optimize.patch, no_optimize.patch, reads.png, 
> reads.png, two_seeks.txt
>
>
> As found by Daniel "SystemTap" Pol, a simple Get results in our reading two 
> HFileBlocks, the one that contains the wanted Cell, and the block that 
> follows.
> Here is a bit of custom logging that logs a stack trace on each HFileBlock 
> read so you can see the call stack responsible:
> {code}
> 2016-03-03 22:20:30,191 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] regionserver.StoreScanner: 
> START LOOP
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] regionserver.StoreScanner: 
> QCODE SEEK_NEXT_COL
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileBlockIndex: 
> STARTED WHILE
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.CombinedBlockCache: 
> OUT OF L2
> 2016-03-03 22:20:30,192 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] bucket.BucketCache: Read 
> offset=31409152, len=2103
> 2016-03-03 22:20:30,192 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] bucket.FileIOEngine: 
> offset=31409152, length=2103
> 2016-03-03 22:20:30,193 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileReaderImpl: 
> From Cache [blockType=DATA, fileOffset=2055421, headerSize=33, 
> onDiskSizeWithoutHeader=2024, uncompressedSizeWithoutHeader=2020, 
> prevBlockOffset=2053364, isUseHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, onDiskDataSizeWithHeader=2053, 
> getOnDiskSizeWithHeader=2057, totalChecksumBytes=4, isUnpacked=true, 
> buf=[org.apache.hadoop.hbase.nio.SingleByteBuff@e19fbd54], 
> dataBeginsWith=\x00\x00\x00)\x00\x00\x01`\x00\x16user995139035672819231, 
> fileContext=[usesHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, blocksize=65536, encoding=NONE, includesMvcc=true, 
> includesTags=false, compressAlgo=NONE, compressTags=false, 
> cryptoContext=[cipher=NONE keyHash=NONE]]]
> 2016-03-03 22:20:30,193 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileReaderImpl: 
> Cache hit return [blockType=DATA, fileOffset=2055421, headerSize=33, 
> onDiskSizeWithoutHeader=2024, uncompressedSizeWithoutHeader=2020, 
> prevBlockOffset=2053364, isUseHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, onDiskDataSizeWithHeader=2053, 
> getOnDiskSizeWithHeader=2057, totalChecksumBytes=4, isUnpacked=true, 
> buf=[org.apache.hadoop.hbase.nio.SingleByteBuff@e19fbd54], 
> dataBeginsWith=\x00\x00\x00)\x00\x00\x01`\x00\x16user995139035672819231, 
> fileContext=[usesHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, blocksize=65536, encoding=NONE, includesMvcc=true, 
> includesTags=false, compressAlgo=NONE, compressTags=false, 
> cryptoContext=[cipher=NONE keyHash=NONE]]]
> java.lang.Throwable
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1515)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$CellBasedKeyBlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:324)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.seekTo(HFileReaderImpl.java:831)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.reseekTo(HFileReaderImpl.java:812)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfte

[jira] [Commented] (HBASE-15439) getMaximumAllowedTimeBetweenRuns in ScheduledChore ignores the TimeUnit

2016-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193510#comment-15193510
 ] 

Hudson commented on HBASE-15439:


SUCCESS: Integrated in HBase-1.2-IT #463 (See 
[https://builds.apache.org/job/HBase-1.2-IT/463/])
HBASE-15439 getMaximumAllowedTimeBetweenRuns in ScheduledChore ignores (tedyu: 
rev 62774516aa1c60cad9f9ba5ec1286f0bf26a1a0c)
* hbase-common/src/main/java/org/apache/hadoop/hbase/ScheduledChore.java


> getMaximumAllowedTimeBetweenRuns in ScheduledChore ignores the TimeUnit
> ---
>
> Key: HBASE-15439
> URL: https://issues.apache.org/jira/browse/HBASE-15439
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Jingcheng Du
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.4, 1.4.0
>
> Attachments: HBASE-15439.patch
>
>
> I was running the IntegrationTestIngestWithMOB test.
> I lowered the mob compaction chore interval to this value:
> {code}
> 
>   hbase.mob.compaction.chore.period
>   6000
> 
> {code}
> After a whole night, there was no indication in the master log that mob 
> compaction ran.
> All I found was:
> {code}
> 2016-03-09 04:18:52,194 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 05:58:52,516 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 07:38:52,847 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 09:18:52,848 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 10:58:52,932 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 12:38:52,932 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 14:18:52,933 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 15:58:52,957 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 17:38:52,960 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15398) Cells loss or disorder when using family essential filter and partial scanning protocol

2016-03-14 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193512#comment-15193512
 ] 

ramkrishna.s.vasudevan commented on HBASE-15398:


bq.Or use MetaCellComparator if region.isMeta anyway.
If it is a META region, you will have to create a MetaCellComparator in any case. 
The compareRows part is different.
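
As a hedged illustration of the comparator choice being discussed (API names are 
assumptions from roughly this era of the codebase, not the final patch):

{code}
import org.apache.hadoop.hbase.CellComparator;
import org.apache.hadoop.hbase.regionserver.HRegion;

class ComparatorChoiceSketch {
  // hbase:meta row keys embed table,startkey,regionId, so their
  // compareRows differs from the plain lexicographic comparison.
  static CellComparator comparatorFor(HRegion region) {
    return region.getRegionInfo().isMetaRegion()
        ? CellComparator.META_COMPARATOR
        : CellComparator.COMPARATOR;
  }
}
{code}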

> Cells loss or disorder when using family essential filter and partial 
> scanning protocol
> ---
>
> Key: HBASE-15398
> URL: https://issues.apache.org/jira/browse/HBASE-15398
> Project: HBase
>  Issue Type: Bug
>  Components: dataloss, Scanners
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Phil Yang
>Assignee: Phil Yang
>Priority: Critical
> Attachments: 15398-test.txt, HBASE-15398-v2.patch, HBASE-15398.v1.txt
>
>
> In RegionScannerImpl, we have two heaps, storeHeap and joinedHeap. If we have 
> a filter and it doesn't apply to all CFs, the stores whose families needn't be 
> filtered will be in the joinedHeap. We scan the storeHeap first, then the 
> joinedHeap, merge the results, sort, and return to the client. We need the sort 
> because the order of Cells is rowkey/cf/cq/ts and a smaller cf may be in the joinedHeap.
> However, after HBASE-11544 we may transfer partial results when we get 
> SIZE_LIMIT_REACHED_MID_ROW or other similar states. We may return a larger cf 
> first because it is in the storeHeap and then a smaller cf because it is in the 
> joinedHeap. The server won't hold all cells of a row, and the client doesn't have 
> sorting logic. The order of CFs in the Result the user sees is wrong.
> And a more critical bug is: if we get a LIMIT_REACHED_MID_ROW on the last 
> cell of a row in the storeHeap, we will break scanning in RegionScannerImpl, and 
> in populateResult we will change the state to SIZE_LIMIT_REACHED because the next 
> peeked cell is in the next row. But this is only the last cell of one heap, and we 
> have two... And SIZE_LIMIT_REACHED means this Result is not partial (per 
> ScannerContext.partialResultFormed), so the client will see it, merge the results, 
> and return them to the user, losing the data of the joinedHeap. On the next scan we 
> will read the next row of the storeHeap, and the joinedHeap is forgotten and never read...
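
For intuition, here is a generic sketch of the merge-then-sort the description 
refers to (plain Java standing in for RegionScannerImpl; cells are modeled as 
sortable strings):

{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class TwoHeapMergeSketch {
  // storeHeap results may carry a larger cf than joinedHeap results,
  // so the combined row must be re-sorted before being returned.
  // Partial transfers break this: the two pieces can reach the client
  // separately, unmerged and out of rowkey/cf/cq/ts order.
  static List<String> mergeRow(List<String> storeHeapCells,
                               List<String> joinedHeapCells) {
    List<String> row = new ArrayList<>(storeHeapCells);
    row.addAll(joinedHeapCells);
    Collections.sort(row);
    return row;
  }
}
{code}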



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15424) Add bulk load hfile-refs for replication in ZK after the event is appended in the WAL

2016-03-14 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15424:
---
Attachment: HBASE-15424.v1.patch

Re-attach for QA run.

> Add bulk load hfile-refs for replication in ZK after the event is appended in 
> the WAL
> -
>
> Key: HBASE-15424
> URL: https://issues.apache.org/jira/browse/HBASE-15424
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.3.0
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15424.patch, HBASE-15424.v1.patch, 
> HBASE-15424.v1.patch
>
>
> Currently the hfile-refs znode used for tracking bulk-loaded data replication 
> is added first, and only then is the bulk load event appended to the WAL. This 
> may lead to an issue where the znode is added in ZK but the append to the WAL 
> fails (due to some problem in the DN); the znode will then be left in ZK as it is 
> and will not allow the hfile to get deleted from the archive directory. 
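
The reordering the summary asks for, as a hedged sketch with hypothetical 
interfaces (illustrative only, not the actual patch):

{code}
// Hypothetical interfaces, for illustration only.
interface Wal { void appendBulkLoadEvent(Object descriptor) throws Exception; }
interface HFileRefs { void add(Object descriptor) throws Exception; }

class BulkLoadReplicationOrderSketch {
  // Append to the WAL first; create the hfile-refs znode only after the
  // append succeeds, so a failed append cannot strand a znode in ZK
  // that blocks hfile deletion from the archive directory.
  static void record(Wal wal, HFileRefs refs, Object descriptor) throws Exception {
    wal.appendBulkLoadEvent(descriptor); // may fail: nothing left behind in ZK
    refs.add(descriptor);                // reached only after a successful append
  }
}
{code}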



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15424) Add bulk load hfile-refs for replication in ZK after the event is appended in the WAL

2016-03-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193542#comment-15193542
 ] 

Hadoop QA commented on HBASE-15424:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 0m 15s 
{color} | {color:red} Docker failed to build yetus/hbase:date2016-03-14. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12793331/HBASE-15424.v1.patch |
| JIRA Issue | HBASE-15424 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/961/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Add bulk load hfile-refs for replication in ZK after the event is appended in 
> the WAL
> -
>
> Key: HBASE-15424
> URL: https://issues.apache.org/jira/browse/HBASE-15424
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.3.0
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15424.patch, HBASE-15424.v1.patch, 
> HBASE-15424.v1.patch
>
>
> Currently the hfile-refs znode used for tracking bulk-loaded data replication 
> is added first, and only then is the bulk load event appended to the WAL. This 
> may lead to an issue where the znode is added in ZK but the append to the WAL 
> fails (due to some problem in the DN); the znode will then be left in ZK as it is 
> and will not allow the hfile to get deleted from the archive directory. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15181) A simple implementation of date based tiered compaction

2016-03-14 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193566#comment-15193566
 ] 

ramkrishna.s.vasudevan commented on HBASE-15181:


Thank you for the info. That was useful.

> A simple implementation of date based tiered compaction
> ---
>
> Key: HBASE-15181
> URL: https://issues.apache.org/jira/browse/HBASE-15181
> Project: HBase
>  Issue Type: New Feature
>  Components: Compaction
>Reporter: Clara Xiong
>Assignee: Clara Xiong
> Fix For: 2.0.0, 1.3.0, 0.98.18
>
> Attachments: HBASE-15181-0.98-ADD.patch, HBASE-15181-0.98.patch, 
> HBASE-15181-0.98.v4.patch, HBASE-15181-98.patch, HBASE-15181-ADD.patch, 
> HBASE-15181-branch-1.patch, HBASE-15181-master-v1.patch, 
> HBASE-15181-master-v2.patch, HBASE-15181-master-v3.patch, 
> HBASE-15181-master-v4.patch, HBASE-15181-v1.patch, HBASE-15181-v2.patch
>
>
> This is a simple implementation of date-based tiered compaction similar to 
> Cassandra's for the following benefits:
> 1. Improve date-range-based scan by structuring store files in date-based 
> tiered layout.
> 2. Reduce compaction overhead.
> 3. Improve TTL efficiency.
> Perfect fit for use cases that:
> 1. have mostly date-based data writes and scans, with a focus on the most recent 
> data, and 
> 2. never or rarely delete data.
> Out-of-order writes are handled gracefully. Time-range overlap among 
> store files is tolerated and the performance impact is minimized.
> Configuration can be set in hbase-site.xml or overridden at the per-table or 
> per-column-family level via the hbase shell.
> Design spec is at 
> https://docs.google.com/document/d/1_AmlNb2N8Us1xICsTeGDLKIqL6T-oHoRLZ323MG_uy8/edit?usp=sharing
> Results in our production is at 
> https://docs.google.com/document/d/1GqRtQZMMkTEWOijZc8UCTqhACNmdxBSjtAQSYIWsmGU/edit#



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15392) Single Cell Get reads two HFileBlocks

2016-03-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193575#comment-15193575
 ] 

stack commented on HBASE-15392:
---

[~danielpol]

* The index contains only the first key of each block.
* The only way currently to find which rows a block contains is to read it. We 
have no metadata listing the points at which rows transition. We know we are at the 
end of the current row only when we've hit a key from a different row.

For example: take two adjacent blocks, one keyed by 'A' and the following one by 
'D', and a Get for row 'B'. Using the index, we find the block whose first key is 
'A'. We scan forward till we find 'B' (if it is present at all); once inside row 
'B', we keep scanning keys until we hit one that is not of the 'B' row. In our case 
here, we may run into row 'C', or there may be no 'C' and we can't stop scanning 
till we hit 'D' in the next block.

The index does not help with end-of-row.

We could look at keeping a richer index with row transitions, or at least the last 
cell in each block, so we can avoid loading the next block in the case where 
end-of-row aligns with end-of-block.
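
As a rough sketch of the scan-forward-until-the-row-changes logic described above 
(generic Java, not the actual HFileScanner code; cells are modeled as {row, value} 
pairs):

{code}
import java.util.ArrayList;
import java.util.List;

class RowScanSketch {
  // Collect all cells of 'row' from a sorted cell stream. The row is
  // known to have ended only when a key from a *different* row shows
  // up -- and that key may live in the next block, forcing its read.
  static List<String[]> cellsOfRow(List<String[]> sortedCells, String row) {
    List<String[]> out = new ArrayList<>();
    for (String[] cell : sortedCells) { // cell[0] is the row key
      int cmp = cell[0].compareTo(row);
      if (cmp < 0) continue;            // still before the wanted row
      if (cmp > 0) break;               // first key of a later row: done
      out.add(cell);                    // inside the wanted row
    }
    return out;
  }
}
{code}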



> Single Cell Get reads two HFileBlocks
> -
>
> Key: HBASE-15392
> URL: https://issues.apache.org/jira/browse/HBASE-15392
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>Assignee: stack
> Attachments: 15392-0.98-looksee.txt, 15392.wip.patch, 
> 15392v2.wip.patch, 15392v3.wip.patch, 15392v4.patch, 15392v5.patch, 
> 15392v6.patch, 15392v7.patch, 15392v7.patch, HBASE-15392_suggest.patch, 
> gc.png, gc.png, io.png, no_optimize.patch, no_optimize.patch, reads.png, 
> reads.png, two_seeks.txt
>
>
> As found by Daniel "SystemTap" Pol, a simple Get results in our reading two 
> HFileBlocks, the one that contains the wanted Cell, and the block that 
> follows.
> Here is a bit of custom logging that logs a stack trace on each HFileBlock 
> read so you can see the call stack responsible:
> {code}
> 2016-03-03 22:20:30,191 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] regionserver.StoreScanner: 
> START LOOP
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] regionserver.StoreScanner: 
> QCODE SEEK_NEXT_COL
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileBlockIndex: 
> STARTED WHILE
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.CombinedBlockCache: 
> OUT OF L2
> 2016-03-03 22:20:30,192 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] bucket.BucketCache: Read 
> offset=31409152, len=2103
> 2016-03-03 22:20:30,192 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] bucket.FileIOEngine: 
> offset=31409152, length=2103
> 2016-03-03 22:20:30,193 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileReaderImpl: 
> From Cache [blockType=DATA, fileOffset=2055421, headerSize=33, 
> onDiskSizeWithoutHeader=2024, uncompressedSizeWithoutHeader=2020, 
> prevBlockOffset=2053364, isUseHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, onDiskDataSizeWithHeader=2053, 
> getOnDiskSizeWithHeader=2057, totalChecksumBytes=4, isUnpacked=true, 
> buf=[org.apache.hadoop.hbase.nio.SingleByteBuff@e19fbd54], 
> dataBeginsWith=\x00\x00\x00)\x00\x00\x01`\x00\x16user995139035672819231, 
> fileContext=[usesHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, blocksize=65536, encoding=NONE, includesMvcc=true, 
> includesTags=false, compressAlgo=NONE, compressTags=false, 
> cryptoContext=[cipher=NONE keyHash=NONE]]]
> 2016-03-03 22:20:30,193 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileReaderImpl: 
> Cache hit return [blockType=DATA, fileOffset=2055421, headerSize=33, 
> onDiskSizeWithoutHeader=2024, uncompressedSizeWithoutHeader=2020, 
> prevBlockOffset=2053364, isUseHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, onDiskDataSizeWithHeader=2053, 
> getOnDiskSizeWithHeader=2057, totalChecksumBytes=4, isUnpacked=true, 
> buf=[org.apache.hadoop.hbase.nio.SingleByteBuff@e19fbd54], 
> dataBeginsWith=\x00\x00\x00)\x00\x00\x01`\x00\x16user995139035672819231, 
> fileContext=[usesHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, blocksize=65536, encoding=NONE, includesMvcc=true, 
> includesTags=false, compressAlgo=NONE, compressTags=false, 
> cryptoContext=[cipher=NONE keyHash=NONE]]]
> java.lang.Throwable
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1515)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$CellBasedKeyBlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:324)
> at 
> org.apache.hadoop.hbase.io.hfile.HFil

[jira] [Commented] (HBASE-15424) Add bulk load hfile-refs for replication in ZK after the event is appended in the WAL

2016-03-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193590#comment-15193590
 ] 

Hadoop QA commented on HBASE-15424:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
43s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 
10s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
53s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 32s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 39s 
{color} | {color:red} hbase-server in the patch failed with JDK v1.8.0. {color} 
|
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 39s {color} 
| {color:red} hbase-server in the patch failed with JDK v1.8.0. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 31s 
{color} | {color:red} hbase-server in the patch failed with JDK v1.7.0_79. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 31s {color} 
| {color:red} hbase-server in the patch failed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 
30s {color} | {color:green} hbase-server: patch generated 0 new + 80 unchanged 
- 1 fixed = 80 total (was 81) {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 0m 53s 
{color} | {color:red} Patch causes 24 errors with Hadoop v2.4.0. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 1m 46s 
{color} | {color:red} Patch causes 24 errors with Hadoop v2.4.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 2m 39s 
{color} | {color:red} Patch causes 24 errors with Hadoop v2.5.0. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 3m 32s 
{color} | {color:red} Patch causes 24 errors with Hadoop v2.5.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 4m 25s 
{color} | {color:red} Patch causes 24 errors with Hadoop v2.5.2. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 5m 18s 
{color} | {color:red} Patch causes 24 errors with Hadoop v2.6.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 6m 11s 
{color} | {color:red} Patch causes 24 errors with Hadoop v2.6.2. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 7m 6s 
{color} | {color:red} Patch causes 24 errors with Hadoop v2.6.3. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 7m 59s 
{color} | {color:red} Patch causes 24 errors with Hadoop v2.7.1. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 18s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed

[jira] [Updated] (HBASE-15392) Single Cell Get reads two HFileBlocks

2016-03-14 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15392:
--
Attachment: 15392v7.patch

Some hadoopqa builds are running... others are getting this docker failure. 
Retry.

> Single Cell Get reads two HFileBlocks
> -
>
> Key: HBASE-15392
> URL: https://issues.apache.org/jira/browse/HBASE-15392
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>Assignee: stack
> Attachments: 15392-0.98-looksee.txt, 15392.wip.patch, 
> 15392v2.wip.patch, 15392v3.wip.patch, 15392v4.patch, 15392v5.patch, 
> 15392v6.patch, 15392v7.patch, 15392v7.patch, 15392v7.patch, 
> HBASE-15392_suggest.patch, gc.png, gc.png, io.png, no_optimize.patch, 
> no_optimize.patch, reads.png, reads.png, two_seeks.txt
>
>
> As found by Daniel "SystemTap" Pol, a simple Get results in our reading two 
> HFileBlocks, the one that contains the wanted Cell, and the block that 
> follows.
> Here is a bit of custom logging that logs a stack trace on each HFileBlock 
> read so you can see the call stack responsible:
> {code}
> 2016-03-03 22:20:30,191 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] regionserver.StoreScanner: 
> START LOOP
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] regionserver.StoreScanner: 
> QCODE SEEK_NEXT_COL
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileBlockIndex: 
> STARTED WHILE
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.CombinedBlockCache: 
> OUT OF L2
> 2016-03-03 22:20:30,192 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] bucket.BucketCache: Read 
> offset=31409152, len=2103
> 2016-03-03 22:20:30,192 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] bucket.FileIOEngine: 
> offset=31409152, length=2103
> 2016-03-03 22:20:30,193 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileReaderImpl: 
> From Cache [blockType=DATA, fileOffset=2055421, headerSize=33, 
> onDiskSizeWithoutHeader=2024, uncompressedSizeWithoutHeader=2020, 
> prevBlockOffset=2053364, isUseHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, onDiskDataSizeWithHeader=2053, 
> getOnDiskSizeWithHeader=2057, totalChecksumBytes=4, isUnpacked=true, 
> buf=[org.apache.hadoop.hbase.nio.SingleByteBuff@e19fbd54], 
> dataBeginsWith=\x00\x00\x00)\x00\x00\x01`\x00\x16user995139035672819231, 
> fileContext=[usesHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, blocksize=65536, encoding=NONE, includesMvcc=true, 
> includesTags=false, compressAlgo=NONE, compressTags=false, 
> cryptoContext=[cipher=NONE keyHash=NONE]]]
> 2016-03-03 22:20:30,193 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileReaderImpl: 
> Cache hit return [blockType=DATA, fileOffset=2055421, headerSize=33, 
> onDiskSizeWithoutHeader=2024, uncompressedSizeWithoutHeader=2020, 
> prevBlockOffset=2053364, isUseHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, onDiskDataSizeWithHeader=2053, 
> getOnDiskSizeWithHeader=2057, totalChecksumBytes=4, isUnpacked=true, 
> buf=[org.apache.hadoop.hbase.nio.SingleByteBuff@e19fbd54], 
> dataBeginsWith=\x00\x00\x00)\x00\x00\x01`\x00\x16user995139035672819231, 
> fileContext=[usesHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, blocksize=65536, encoding=NONE, includesMvcc=true, 
> includesTags=false, compressAlgo=NONE, compressTags=false, 
> cryptoContext=[cipher=NONE keyHash=NONE]]]
> java.lang.Throwable
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1515)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$CellBasedKeyBlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:324)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.seekTo(HFileReaderImpl.java:831)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.reseekTo(HFileReaderImpl.java:812)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:288)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:198)
> at 
> org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:54)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:321)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:279)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:806)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:795)
> at 

[jira] [Created] (HBASE-15455) Upgrade hadoop to 2.7.x

2016-03-14 Thread Krzysztof Gardo (JIRA)
Krzysztof Gardo created HBASE-15455:
---

 Summary: Upgrade hadoop to 2.7.x
 Key: HBASE-15455
 URL: https://issues.apache.org/jira/browse/HBASE-15455
 Project: HBase
  Issue Type: Task
  Components: Client
Affects Versions: 1.1.3, 1.2.0
Reporter: Krzysztof Gardo


The duplicate-finder Maven plugin discovered the following duplicate classes:
{code}
[WARNING] Found duplicate and different classes in 
[org.apache.hadoop:hadoop-yarn-api:2.5.1, 
org.apache.hadoop:hadoop-yarn-common:2.5.1]:
[WARNING]   org.apache.hadoop.yarn.factories.package-info
[WARNING]   org.apache.hadoop.yarn.factory.providers.package-info
[WARNING]   org.apache.hadoop.yarn.util.package-info
{code}
2.7.x is free from that issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15265) Implement an asynchronous FSHLog

2016-03-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193616#comment-15193616
 ] 

stack commented on HBASE-15265:
---

DLR never worked and is now deprecated and slated for removal (HBASE-15020). The 
idea IS good; the implementation is rickety. To be redone.

> Implement an asynchronous FSHLog
> 
>
> Key: HBASE-15265
> URL: https://issues.apache.org/jira/browse/HBASE-15265
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-15265-v1.patch, HBASE-15265-v2.patch, 
> HBASE-15265-v3.patch, HBASE-15265-v4.patch, HBASE-15265-v5.patch, 
> HBASE-15265-v6.patch, HBASE-15265-v7.patch, HBASE-15265.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15392) Single Cell Get reads two HFileBlocks

2016-03-14 Thread Daniel Pol (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193639#comment-15193639
 ] 

Daniel Pol commented on HBASE-15392:


Think a little bit like bloom filters. The index tells us for sure which blocks 
don't have the row we are looking for (all blocks with index key > 'search 
row'). So in your example we know for sure that the 2nd block (keyed by 'D') 
cannot contain row 'B'. Block 'A' could contain row 'B', and that's why you're 
scanning it. 
Maybe it's a matter of how scan is implemented. Maybe the scan can go only until 
a cell is different. I'm talking about adding a "length"-type limit to scan, 
kind of like a stop-after-XX-blocks argument.
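
One hedged sketch of using the index as that kind of negative filter: if the first 
key of the *next* indexed block sorts after the wanted row, the row cannot continue 
into that block, so end-of-row at end-of-block needs no extra read (illustrative 
only, not the patch):

{code}
class NextIndexKeySketch {
  // 'nextIndexedRow' is the row of the next block's first key taken
  // from the block index; null means this is the last block.
  static boolean rowMayContinueIntoNextBlock(String row, String nextIndexedRow) {
    // If the next block starts with a strictly larger row, every key in
    // it belongs to a later row, so it is safe to skip loading it.
    return nextIndexedRow != null && nextIndexedRow.compareTo(row) <= 0;
  }
}
{code}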

> Single Cell Get reads two HFileBlocks
> -
>
> Key: HBASE-15392
> URL: https://issues.apache.org/jira/browse/HBASE-15392
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>Assignee: stack
> Attachments: 15392-0.98-looksee.txt, 15392.wip.patch, 
> 15392v2.wip.patch, 15392v3.wip.patch, 15392v4.patch, 15392v5.patch, 
> 15392v6.patch, 15392v7.patch, 15392v7.patch, 15392v7.patch, 
> HBASE-15392_suggest.patch, gc.png, gc.png, io.png, no_optimize.patch, 
> no_optimize.patch, reads.png, reads.png, two_seeks.txt
>
>
> As found by Daniel "SystemTap" Pol, a simple Get results in our reading two 
> HFileBlocks, the one that contains the wanted Cell, and the block that 
> follows.
> Here is a bit of custom logging that logs a stack trace on each HFileBlock 
> read so you can see the call stack responsible:
> {code}
> 2016-03-03 22:20:30,191 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] regionserver.StoreScanner: 
> START LOOP
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] regionserver.StoreScanner: 
> QCODE SEEK_NEXT_COL
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileBlockIndex: 
> STARTED WHILE
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.CombinedBlockCache: 
> OUT OF L2
> 2016-03-03 22:20:30,192 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] bucket.BucketCache: Read 
> offset=31409152, len=2103
> 2016-03-03 22:20:30,192 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] bucket.FileIOEngine: 
> offset=31409152, length=2103
> 2016-03-03 22:20:30,193 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileReaderImpl: 
> From Cache [blockType=DATA, fileOffset=2055421, headerSize=33, 
> onDiskSizeWithoutHeader=2024, uncompressedSizeWithoutHeader=2020, 
> prevBlockOffset=2053364, isUseHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, onDiskDataSizeWithHeader=2053, 
> getOnDiskSizeWithHeader=2057, totalChecksumBytes=4, isUnpacked=true, 
> buf=[org.apache.hadoop.hbase.nio.SingleByteBuff@e19fbd54], 
> dataBeginsWith=\x00\x00\x00)\x00\x00\x01`\x00\x16user995139035672819231, 
> fileContext=[usesHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, blocksize=65536, encoding=NONE, includesMvcc=true, 
> includesTags=false, compressAlgo=NONE, compressTags=false, 
> cryptoContext=[cipher=NONE keyHash=NONE]]]
> 2016-03-03 22:20:30,193 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileReaderImpl: 
> Cache hit return [blockType=DATA, fileOffset=2055421, headerSize=33, 
> onDiskSizeWithoutHeader=2024, uncompressedSizeWithoutHeader=2020, 
> prevBlockOffset=2053364, isUseHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, onDiskDataSizeWithHeader=2053, 
> getOnDiskSizeWithHeader=2057, totalChecksumBytes=4, isUnpacked=true, 
> buf=[org.apache.hadoop.hbase.nio.SingleByteBuff@e19fbd54], 
> dataBeginsWith=\x00\x00\x00)\x00\x00\x01`\x00\x16user995139035672819231, 
> fileContext=[usesHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, blocksize=65536, encoding=NONE, includesMvcc=true, 
> includesTags=false, compressAlgo=NONE, compressTags=false, 
> cryptoContext=[cipher=NONE keyHash=NONE]]]
> java.lang.Throwable
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1515)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$CellBasedKeyBlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:324)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.seekTo(HFileReaderImpl.java:831)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.reseekTo(HFileReaderImpl.java:812)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:288)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:198)
> at 
> org.apache.hadoop.hbase.regionserver.NonLaz

[jira] [Commented] (HBASE-15392) Single Cell Get reads two HFileBlocks

2016-03-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193662#comment-15193662
 ] 

stack commented on HBASE-15392:
---

[~danielpol]

bq. Maybe it's a matter of how scan is implemented. Maybe the scan can go only 
until a cell is different.

Yeah, this is how it currently works. And the above is done a few levels above 
the loading of blocks, down in the Reader.

Say more on this: "... I'm talking about adding a "length"-type limit to scan, 
kind of like a stop-after-XX-blocks argument." What would the length be in this 
case?

> Single Cell Get reads two HFileBlocks
> -
>
> Key: HBASE-15392
> URL: https://issues.apache.org/jira/browse/HBASE-15392
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>Assignee: stack
> Attachments: 15392-0.98-looksee.txt, 15392.wip.patch, 
> 15392v2.wip.patch, 15392v3.wip.patch, 15392v4.patch, 15392v5.patch, 
> 15392v6.patch, 15392v7.patch, 15392v7.patch, 15392v7.patch, 
> HBASE-15392_suggest.patch, gc.png, gc.png, io.png, no_optimize.patch, 
> no_optimize.patch, reads.png, reads.png, two_seeks.txt
>
>
> As found by Daniel "SystemTap" Pol, a simple Get results in our reading two 
> HFileBlocks, the one that contains the wanted Cell, and the block that 
> follows.
> Here is a bit of custom logging that logs a stack trace on each HFileBlock 
> read so you can see the call stack responsible:
> {code}
> 2016-03-03 22:20:30,191 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] regionserver.StoreScanner: 
> START LOOP
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] regionserver.StoreScanner: 
> QCODE SEEK_NEXT_COL
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileBlockIndex: 
> STARTED WHILE
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.CombinedBlockCache: 
> OUT OF L2
> 2016-03-03 22:20:30,192 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] bucket.BucketCache: Read 
> offset=31409152, len=2103
> 2016-03-03 22:20:30,192 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] bucket.FileIOEngine: 
> offset=31409152, length=2103
> 2016-03-03 22:20:30,193 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileReaderImpl: 
> From Cache [blockType=DATA, fileOffset=2055421, headerSize=33, 
> onDiskSizeWithoutHeader=2024, uncompressedSizeWithoutHeader=2020, 
> prevBlockOffset=2053364, isUseHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, onDiskDataSizeWithHeader=2053, 
> getOnDiskSizeWithHeader=2057, totalChecksumBytes=4, isUnpacked=true, 
> buf=[org.apache.hadoop.hbase.nio.SingleByteBuff@e19fbd54], 
> dataBeginsWith=\x00\x00\x00)\x00\x00\x01`\x00\x16user995139035672819231, 
> fileContext=[usesHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, blocksize=65536, encoding=NONE, includesMvcc=true, 
> includesTags=false, compressAlgo=NONE, compressTags=false, 
> cryptoContext=[cipher=NONE keyHash=NONE]]]
> 2016-03-03 22:20:30,193 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileReaderImpl: 
> Cache hit return [blockType=DATA, fileOffset=2055421, headerSize=33, 
> onDiskSizeWithoutHeader=2024, uncompressedSizeWithoutHeader=2020, 
> prevBlockOffset=2053364, isUseHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, onDiskDataSizeWithHeader=2053, 
> getOnDiskSizeWithHeader=2057, totalChecksumBytes=4, isUnpacked=true, 
> buf=[org.apache.hadoop.hbase.nio.SingleByteBuff@e19fbd54], 
> dataBeginsWith=\x00\x00\x00)\x00\x00\x01`\x00\x16user995139035672819231, 
> fileContext=[usesHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, blocksize=65536, encoding=NONE, includesMvcc=true, 
> includesTags=false, compressAlgo=NONE, compressTags=false, 
> cryptoContext=[cipher=NONE keyHash=NONE]]]
> java.lang.Throwable
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1515)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$CellBasedKeyBlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:324)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.seekTo(HFileReaderImpl.java:831)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.reseekTo(HFileReaderImpl.java:812)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:288)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:198)
> at 
> org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:54)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.gene

[jira] [Updated] (HBASE-15452) Consider removing checkScanOrder from StoreScanner.next

2016-03-14 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-15452:
--
Attachment: 15452-0.98.txt

Oops. Here it is. Trivial.

> Consider removing checkScanOrder from StoreScanner.next
> ---
>
> Key: HBASE-15452
> URL: https://issues.apache.org/jira/browse/HBASE-15452
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
> Attachments: 15452-0.98.txt
>
>
> In looking at why we spend so much time in StoreScanner.next when doing a 
> simple Phoenix count\(*) query I came across checkScanOrder. Not only is this 
> a function dispatch (that the JIT would eventually inline), it also requires 
> setting the prevKV member for every Cell encountered.
> Removing that logic yields a measurable end-to-end improvement of 5-20% (in 
> 0.98).
> I will repeat this test on my work machine tomorrow.
> I think we're stable enough to remove that check anyway.
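For readers unfamiliar with the check under discussion, here is a minimal, 
self-contained sketch of the pattern (plain strings and a Comparator stand in 
for Cells and the CellComparator; an illustration only, not the HBase code):

{code}
import java.util.Comparator;

public class ScanOrderSketch {
  private String prevKV; // the per-cell field write the description calls out

  // Hypothetical stand-in for checkScanOrder: one compare plus one store
  // for every element returned by next().
  void checkScanOrder(String cur, Comparator<String> comp) {
    assert prevKV == null || comp.compare(prevKV, cur) <= 0 : "out of order";
    prevKV = cur;
  }

  public static void main(String[] args) {
    ScanOrderSketch s = new ScanOrderSketch();
    for (String kv : new String[] {"a", "b", "c"}) {
      s.checkScanOrder(kv, Comparator.naturalOrder()); // runs on every "cell"
    }
    System.out.println("order ok");
  }
}
{code}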



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15334) Add avro support for spark hbase connector

2016-03-14 Thread Zhan Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhan Zhang updated HBASE-15334:
---
Attachment: (was: HBASE-15334-4.patch)

> Add avro support for spark hbase connector
> --
>
> Key: HBASE-15334
> URL: https://issues.apache.org/jira/browse/HBASE-15334
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zhan Zhang
>Assignee: Zhan Zhang
> Attachments: HBASE-15334-1.patch, HBASE-15334-2.patch, 
> HBASE-15334-3.patch, HBASE-15334-4.patch
>
>
> Avro is a popular format for hbase storage. User may want the support 
> natively in the connector.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15334) Add avro support for spark hbase connector

2016-03-14 Thread Zhan Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhan Zhang updated HBASE-15334:
---
Attachment: HBASE-15334-4.patch

> Add avro support for spark hbase connector
> --
>
> Key: HBASE-15334
> URL: https://issues.apache.org/jira/browse/HBASE-15334
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zhan Zhang
>Assignee: Zhan Zhang
> Attachments: HBASE-15334-1.patch, HBASE-15334-2.patch, 
> HBASE-15334-3.patch, HBASE-15334-4.patch
>
>
> Avro is a popular format for hbase storage. User may want the support 
> natively in the connector.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15452) Consider removing checkScanOrder from StoreScanner.next

2016-03-14 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193690#comment-15193690
 ] 

Lars Hofhansl commented on HBASE-15452:
---

Two runs with patch applied (test from HBASE-15453)
10 runs  mean:1727.2 sigma:45.20353968440967
10 runs  mean:1769.9 sigma:16.53148511174964

Two runs without patch:
10 runs  mean:2194.8 sigma:56.31838065853811
10 runs  mean:2183.0 sigma:31.135189095298585

So this saved about 20% of runtime! Shows again that these code paths are 
extremely hot and every instruction we can save in these will be noticed!
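For context on how numbers like these get produced, a small sketch of a timing 
harness that reports mean and (population) sigma over N runs; the workload 
below is a placeholder, the actual test lives in HBASE-15453:

{code}
public class PerfStats {
  static void workload() {
    long x = 0;
    for (int i = 0; i < 50_000_000; i++) x += i; // dummy work standing in for the scanner loop
    if (x == 42) System.out.println();           // keep the JIT from eliminating the loop
  }

  public static void main(String[] args) {
    int runs = 10;
    long[] ms = new long[runs];
    for (int i = 0; i < runs; i++) {
      long start = System.nanoTime();
      workload();
      ms[i] = (System.nanoTime() - start) / 1_000_000;
    }
    double mean = 0;
    for (long m : ms) mean += m;
    mean /= runs;
    double var = 0;
    for (long m : ms) var += (m - mean) * (m - mean);
    System.out.println(runs + " runs  mean:" + mean + " sigma:" + Math.sqrt(var / runs));
  }
}
{code}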

> Consider removing checkScanOrder from StoreScanner.next
> ---
>
> Key: HBASE-15452
> URL: https://issues.apache.org/jira/browse/HBASE-15452
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
> Attachments: 15452-0.98.txt
>
>
> In looking at why we spend so much time in StoreScanner.next when doing a 
> simple Phoenix count\(*) query I came across checkScanOrder. Not only is this 
> a function dispatch (that the JIT would eventually inline), it also requires 
> setting the prevKV member for every Cell encountered.
> Removing that logic yields a measurable end-to-end improvement of 5-20% (in 
> 0.98).
> I will repeat this test on my work machine tomorrow.
> I think we're stable enough to remove that check anyway.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15453) Considering reverting HBASE-10015 - reinstance synchronized in StoreScanner

2016-03-14 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193701#comment-15193701
 ] 

Lars Hofhansl commented on HBASE-15453:
---

What's the best way to just commit the perf test? I want to easily be able to 
run it without compiling and installing HBase somewhere.

Right now it's a test that displays the runtime in its failure message. Any 
better way to do this? I suppose I could check it in but comment out the \@Test 
marker by default...?
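One common pattern for checking in a benchmark without having it run by default 
is to gate it on a system property rather than commenting out the annotation. A 
minimal sketch, assuming JUnit 4 and a hypothetical property name:

{code}
import static org.junit.Assume.assumeTrue;

import org.junit.Test;

public class StoreScannerPerfIT {
  @Test
  public void scannerThroughput() {
    // Skipped unless run with -Dhbase.perf.tests=true; reported as an
    // assumption failure, not a test failure.
    assumeTrue(Boolean.getBoolean("hbase.perf.tests"));
    long start = System.nanoTime();
    // ... the perf loop under test would go here ...
    System.out.println("runtime ms: " + (System.nanoTime() - start) / 1_000_000);
  }
}
{code}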

> Considering reverting HBASE-10015 - reinstance synchronized in StoreScanner
> ---
>
> Key: HBASE-15453
> URL: https://issues.apache.org/jira/browse/HBASE-15453
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
> Attachments: 15453-0.98.txt
>
>
> In HBASE-10015 back then I found that intrinsic locks (synchronized) in 
> StoreScanner are slower than explicit locks.
> I was surprised by this. To make sure I added a simple perf test and many 
> folks ran it on their machines. All found that explicit locks were faster.
> Now... I just ran that test again. On the latest JDK8 I find that now the 
> intrinsic locks are significantly faster:
> (OpenJDK Runtime Environment (build 1.8.0_72-b15))
> Explicit locks:
> 10 runs  mean:2223.6 sigma:72.29412147609237
> Intrinsic locks:
> 10 runs  mean:1865.3 sigma:32.63755505548784
> I confirmed the same with timing some Phoenix scans. We can save a bunch of 
> time by changing this back.
> Arrghhh... So maybe it's time to revert this now...?
> (Note that in trunk due to [~ram_krish]'s work, we do not lock in 
> StoreScanner anymore)
> I'll attach the perf test and a patch that changes lock to synchronized; if 
> some folks could run this on 0.98, that'd be great.
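For reference, the two locking styles under comparison, in miniature (a toy 
counter stands in for the scanner state; this is not the actual StoreScanner 
code):

{code}
import java.util.concurrent.locks.ReentrantLock;

public class LockStyles {
  private long n;
  private final ReentrantLock lock = new ReentrantLock();

  // Intrinsic lock: what reverting HBASE-10015 would reinstate.
  synchronized long nextIntrinsic() {
    return ++n;
  }

  // Explicit lock: what HBASE-10015 switched to.
  long nextExplicit() {
    lock.lock();
    try {
      return ++n;
    } finally {
      lock.unlock();
    }
  }

  public static void main(String[] args) {
    LockStyles s = new LockStyles();
    for (int i = 0; i < 1_000_000; i++) s.nextIntrinsic();
    for (int i = 0; i < 1_000_000; i++) s.nextExplicit();
    System.out.println(s.n); // 2000000
  }
}
{code}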



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15411) Rewrite backup with Procedure V2

2016-03-14 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15411:
---
Attachment: 15411-v11.txt

Patch v11 makes incremental backup work.

Existing tests are switched to using Admin API calls.
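As a rough illustration of the state-machine shape the description talks about 
(execute one step based on the current state, persist the state, resume from it 
after a restart), here is a schematic in plain Java; the state names are made 
up, and the real code extends HBase's Procedure V2 framework, whose signatures 
differ:

{code}
public class BackupStateMachineSketch {
  enum State { PREPARE, SNAPSHOT, COPY, FINISH, DONE }

  private State state = State.PREPARE;

  // Analogous to executeFromState(): perform one step, then advance the state.
  void step() {
    switch (state) {
      case PREPARE:  System.out.println("prepare backup");  state = State.SNAPSHOT; break;
      case SNAPSHOT: System.out.println("take snapshot");   state = State.COPY;     break;
      case COPY:     System.out.println("copy data");       state = State.FINISH;   break;
      case FINISH:   System.out.println("finalize");        state = State.DONE;     break;
      case DONE:     break;
    }
    persist(); // analogous to serializeStateData() writing into the procedure WAL
  }

  void persist() { /* write state.name() somewhere durable so a restart can resume */ }

  public static void main(String[] args) {
    BackupStateMachineSketch p = new BackupStateMachineSketch();
    while (p.state != State.DONE) p.step();
  }
}
{code}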

> Rewrite backup with Procedure V2
> 
>
> Key: HBASE-15411
> URL: https://issues.apache.org/jira/browse/HBASE-15411
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 15411-v1.txt, 15411-v11.txt, 15411-v3.txt, 15411-v5.txt, 
> 15411-v6.txt, 15411-v7.txt, 15411-v9.txt, FullTableBackupProcedure.java
>
>
> Currently full / incremental backup is driven by BackupHandler (see call() 
> method for flow).
> This issue is to rewrite the flow using Procedure V2.
> States (enum) for full / incremental backup would be introduced in 
> Backup.proto which correspond to the steps performed in BackupHandler#call().
> executeFromState() would pace the backup based on the current state.
> serializeStateData() / deserializeStateData() would be used to persist state 
> into procedure WAL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15452) Consider removing checkScanOrder from StoreScanner.next

2016-03-14 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193712#comment-15193712
 ] 

Lars Hofhansl commented on HBASE-15452:
---

This patch together with HBASE-15453 (JDK8)
10 runs  mean:1430.6 sigma:11.612062693595828

That saves over 1/3 of the runtime!!


> Consider removing checkScanOrder from StoreScanner.next
> ---
>
> Key: HBASE-15452
> URL: https://issues.apache.org/jira/browse/HBASE-15452
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
> Attachments: 15452-0.98.txt
>
>
> In looking at why we spend so much time in StoreScanner.next when doing a 
> simple Phoenix count\(*) query I came across checkScanOrder. Not only is this 
> a function dispatch (that the JIT would eventually inline), it also requires 
> setting the prevKV member for every Cell encountered.
> Removing that logic yields a measurable end-to-end improvement of 5-20% (in 
> 0.98).
> I will repeat this test on my work machine tomorrow.
> I think we're stable enough to remove that check anyway.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15392) Single Cell Get reads two HFileBlocks

2016-03-14 Thread Daniel Pol (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193739#comment-15193739
 ] 

Daniel Pol commented on HBASE-15392:


It looks to me like indexes are used to choose the first block to read, but 
they could also be used to determine the last block to read. Let's say you have 
80 blocks in the table and the row you're looking for is in blocks 11-17 based 
on index data.
Right now the "upper" layer tells the scanner to start at block 11 and go until 
it finds a different row. Maybe the "upper" layer can tell the scanner to start 
at block 11 and go at most up to block 17. The length could be anything that 
translates to block 17: either a block offset relative to the start block, an 
absolute block number, or a number of KVs calculated from the average key/value 
length. As an analogy: it's like strcmp vs. strncmp.
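A minimal sketch of that bounded variant, with arrays of strings standing in 
for blocks and rows (maxBlocks plays the role of the hypothetical "length" 
argument):

{code}
import java.util.ArrayList;
import java.util.List;

public class BoundedScanSketch {
  // Current behavior: start at a block and read until the row changes.
  // Bounded variant: additionally stop after maxBlocks blocks (the strncmp analogy).
  static List<String> scan(String[][] blocks, int startBlock, String row, int maxBlocks) {
    List<String> out = new ArrayList<>();
    int end = Math.min(blocks.length, startBlock + maxBlocks);
    for (int b = startBlock; b < end; b++) {
      for (String r : blocks[b]) {
        if (!r.equals(row)) return out; // different row: end of scan, as today
        out.add(r);
      }
    }
    return out; // block budget from the index exhausted: stop without peeking further
  }

  public static void main(String[] args) {
    String[][] blocks = {{"r1"}, {"r2", "r2"}, {"r2"}, {"r3"}};
    // Index says row "r2" lives in blocks 1..2, so pass a budget of 2 blocks.
    System.out.println(scan(blocks, 1, "r2", 2)); // [r2, r2, r2]
  }
}
{code}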


> Single Cell Get reads two HFileBlocks
> -
>
> Key: HBASE-15392
> URL: https://issues.apache.org/jira/browse/HBASE-15392
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>Assignee: stack
> Attachments: 15392-0.98-looksee.txt, 15392.wip.patch, 
> 15392v2.wip.patch, 15392v3.wip.patch, 15392v4.patch, 15392v5.patch, 
> 15392v6.patch, 15392v7.patch, 15392v7.patch, 15392v7.patch, 
> HBASE-15392_suggest.patch, gc.png, gc.png, io.png, no_optimize.patch, 
> no_optimize.patch, reads.png, reads.png, two_seeks.txt
>
>
> As found by Daniel "SystemTap" Pol, a simple Get results in our reading two 
> HFileBlocks, the one that contains the wanted Cell, and the block that 
> follows.
> Here is a bit of custom logging that logs a stack trace on each HFileBlock 
> read so you can see the call stack responsible:
> {code}
> 2016-03-03 22:20:30,191 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] regionserver.StoreScanner: 
> START LOOP
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] regionserver.StoreScanner: 
> QCODE SEEK_NEXT_COL
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileBlockIndex: 
> STARTED WHILE
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.CombinedBlockCache: 
> OUT OF L2
> 2016-03-03 22:20:30,192 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] bucket.BucketCache: Read 
> offset=31409152, len=2103
> 2016-03-03 22:20:30,192 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] bucket.FileIOEngine: 
> offset=31409152, length=2103
> 2016-03-03 22:20:30,193 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileReaderImpl: 
> From Cache [blockType=DATA, fileOffset=2055421, headerSize=33, 
> onDiskSizeWithoutHeader=2024, uncompressedSizeWithoutHeader=2020, 
> prevBlockOffset=2053364, isUseHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, onDiskDataSizeWithHeader=2053, 
> getOnDiskSizeWithHeader=2057, totalChecksumBytes=4, isUnpacked=true, 
> buf=[org.apache.hadoop.hbase.nio.SingleByteBuff@e19fbd54], 
> dataBeginsWith=\x00\x00\x00)\x00\x00\x01`\x00\x16user995139035672819231, 
> fileContext=[usesHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, blocksize=65536, encoding=NONE, includesMvcc=true, 
> includesTags=false, compressAlgo=NONE, compressTags=false, 
> cryptoContext=[cipher=NONE keyHash=NONE]]]
> 2016-03-03 22:20:30,193 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileReaderImpl: 
> Cache hit return [blockType=DATA, fileOffset=2055421, headerSize=33, 
> onDiskSizeWithoutHeader=2024, uncompressedSizeWithoutHeader=2020, 
> prevBlockOffset=2053364, isUseHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, onDiskDataSizeWithHeader=2053, 
> getOnDiskSizeWithHeader=2057, totalChecksumBytes=4, isUnpacked=true, 
> buf=[org.apache.hadoop.hbase.nio.SingleByteBuff@e19fbd54], 
> dataBeginsWith=\x00\x00\x00)\x00\x00\x01`\x00\x16user995139035672819231, 
> fileContext=[usesHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, blocksize=65536, encoding=NONE, includesMvcc=true, 
> includesTags=false, compressAlgo=NONE, compressTags=false, 
> cryptoContext=[cipher=NONE keyHash=NONE]]]
> java.lang.Throwable
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1515)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$CellBasedKeyBlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:324)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.seekTo(HFileReaderImpl.java:831)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.reseekTo(HFileReaderImpl.java:812)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:288)
>   

[jira] [Commented] (HBASE-15334) Add avro support for spark hbase connector

2016-03-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193745#comment-15193745
 ] 

Hadoop QA commented on HBASE-15334:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 13m 51s 
{color} | {color:red} Docker failed to build yetus/hbase:date2016-03-14. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12793350/HBASE-15334-4.patch |
| JIRA Issue | HBASE-15334 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/962/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Add avro support for spark hbase connector
> --
>
> Key: HBASE-15334
> URL: https://issues.apache.org/jira/browse/HBASE-15334
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zhan Zhang
>Assignee: Zhan Zhang
> Attachments: HBASE-15334-1.patch, HBASE-15334-2.patch, 
> HBASE-15334-3.patch, HBASE-15334-4.patch
>
>
> Avro is a popular format for hbase storage. User may want the support 
> natively in the connector.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15392) Single Cell Get reads two HFileBlocks

2016-03-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193765#comment-15193765
 ] 

Hadoop QA commented on HBASE-15392:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 16m 8s 
{color} | {color:red} Docker failed to build yetus/hbase:date2016-03-14. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12793340/15392v7.patch |
| JIRA Issue | HBASE-15392 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/963/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Single Cell Get reads two HFileBlocks
> -
>
> Key: HBASE-15392
> URL: https://issues.apache.org/jira/browse/HBASE-15392
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>Assignee: stack
> Attachments: 15392-0.98-looksee.txt, 15392.wip.patch, 
> 15392v2.wip.patch, 15392v3.wip.patch, 15392v4.patch, 15392v5.patch, 
> 15392v6.patch, 15392v7.patch, 15392v7.patch, 15392v7.patch, 
> HBASE-15392_suggest.patch, gc.png, gc.png, io.png, no_optimize.patch, 
> no_optimize.patch, reads.png, reads.png, two_seeks.txt
>
>
> As found by Daniel "SystemTap" Pol, a simple Get results in our reading two 
> HFileBlocks, the one that contains the wanted Cell, and the block that 
> follows.
> Here is a bit of custom logging that logs a stack trace on each HFileBlock 
> read so you can see the call stack responsible:
> {code}
> 2016-03-03 22:20:30,191 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] regionserver.StoreScanner: 
> START LOOP
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] regionserver.StoreScanner: 
> QCODE SEEK_NEXT_COL
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileBlockIndex: 
> STARTED WHILE
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.CombinedBlockCache: 
> OUT OF L2
> 2016-03-03 22:20:30,192 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] bucket.BucketCache: Read 
> offset=31409152, len=2103
> 2016-03-03 22:20:30,192 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] bucket.FileIOEngine: 
> offset=31409152, length=2103
> 2016-03-03 22:20:30,193 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileReaderImpl: 
> From Cache [blockType=DATA, fileOffset=2055421, headerSize=33, 
> onDiskSizeWithoutHeader=2024, uncompressedSizeWithoutHeader=2020, 
> prevBlockOffset=2053364, isUseHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, onDiskDataSizeWithHeader=2053, 
> getOnDiskSizeWithHeader=2057, totalChecksumBytes=4, isUnpacked=true, 
> buf=[org.apache.hadoop.hbase.nio.SingleByteBuff@e19fbd54], 
> dataBeginsWith=\x00\x00\x00)\x00\x00\x01`\x00\x16user995139035672819231, 
> fileContext=[usesHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, blocksize=65536, encoding=NONE, includesMvcc=true, 
> includesTags=false, compressAlgo=NONE, compressTags=false, 
> cryptoContext=[cipher=NONE keyHash=NONE]]]
> 2016-03-03 22:20:30,193 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileReaderImpl: 
> Cache hit return [blockType=DATA, fileOffset=2055421, headerSize=33, 
> onDiskSizeWithoutHeader=2024, uncompressedSizeWithoutHeader=2020, 
> prevBlockOffset=2053364, isUseHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, onDiskDataSizeWithHeader=2053, 
> getOnDiskSizeWithHeader=2057, totalChecksumBytes=4, isUnpacked=true, 
> buf=[org.apache.hadoop.hbase.nio.SingleByteBuff@e19fbd54], 
> dataBeginsWith=\x00\x00\x00)\x00\x00\x01`\x00\x16user995139035672819231, 
> fileContext=[usesHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, blocksize=65536, encoding=NONE, includesMvcc=true, 
> includesTags=false, compressAlgo=NONE, compressTags=false, 
> cryptoContext=[cipher=NONE keyHash=NONE]]]
> java.lang.Throwable
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1515)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$CellBasedKeyBlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:324)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.seekTo(HFileReaderImpl.java:831)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.reseekTo(HFileReaderImpl.java:812)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner

[jira] [Updated] (HBASE-15441) Fix WAL splitting when region has moved multiple times

2016-03-14 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-15441:
--
Attachment: HBASE-15441-v2.patch

> Fix WAL splitting when region has moved multiple times
> --
>
> Key: HBASE-15441
> URL: https://issues.apache.org/jira/browse/HBASE-15441
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.0, 1.2.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Blocker
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15441-v1.patch, HBASE-15441-v2.patch, 
> HBASE-15441.patch
>
>
> Currently WAL splitting is broken when a region has been opened multiple 
> times in recent minutes.
> Region open and region close write event markers to the WAL. These markers 
> should carry the region's sequence id, but currently they get 1. That means 
> that if a region has moved multiple times in the last few minutes, multiple 
> split-log workers will try to create the recovered-edits file for sequence 
> id 1. One of the workers will fail, and on failing it will delete the 
> recovered edits, causing all WAL-split attempts to fail.
> We need to:
> # make sure that close gets the correct sequence id for open.
> # Filter all region events from recovered edits.
> It appears that the close event with a sequence id of one is coming from 
> region warm up.
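A sketch of the kind of filter point 2 calls for, applied while building 
recovered edits; the entry type and marker flag are hypothetical stand-ins, not 
the actual WAL classes:

{code}
import java.util.ArrayList;
import java.util.List;

public class RecoveredEditsFilterSketch {
  // Hypothetical stand-in for a WAL entry: sequence id, marker flag, payload.
  record Entry(long seqId, boolean isRegionEventMarker, String payload) {}

  // Keep only real edits; drop open/close event markers so several markers all
  // carrying sequence id 1 can never collide in the recovered-edits files.
  static List<Entry> filterForRecoveredEdits(List<Entry> walEntries) {
    List<Entry> out = new ArrayList<>();
    for (Entry e : walEntries) {
      if (!e.isRegionEventMarker()) out.add(e);
    }
    return out;
  }

  public static void main(String[] args) {
    List<Entry> wal = List.of(
        new Entry(1, true, "open marker"),
        new Entry(5, false, "put row1"),
        new Entry(1, true, "close marker from warmup"),
        new Entry(6, false, "put row2"));
    System.out.println(filterForRecoveredEdits(wal)); // only the two puts survive
  }
}
{code}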



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15392) Single Cell Get reads two HFileBlocks

2016-03-14 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15392:
--
Attachment: 15392v7.patch

Retry. I unchecked RUN_IN_DOCKER.

> Single Cell Get reads two HFileBlocks
> -
>
> Key: HBASE-15392
> URL: https://issues.apache.org/jira/browse/HBASE-15392
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>Assignee: stack
> Attachments: 15392-0.98-looksee.txt, 15392.wip.patch, 
> 15392v2.wip.patch, 15392v3.wip.patch, 15392v4.patch, 15392v5.patch, 
> 15392v6.patch, 15392v7.patch, 15392v7.patch, 15392v7.patch, 15392v7.patch, 
> HBASE-15392_suggest.patch, gc.png, gc.png, io.png, no_optimize.patch, 
> no_optimize.patch, reads.png, reads.png, two_seeks.txt
>
>
> As found by Daniel "SystemTap" Pol, a simple Get results in our reading two 
> HFileBlocks, the one that contains the wanted Cell, and the block that 
> follows.
> Here is a bit of custom logging that logs a stack trace on each HFileBlock 
> read so you can see the call stack responsible:
> {code}
> 2016-03-03 22:20:30,191 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] regionserver.StoreScanner: 
> START LOOP
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] regionserver.StoreScanner: 
> QCODE SEEK_NEXT_COL
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileBlockIndex: 
> STARTED WHILE
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.CombinedBlockCache: 
> OUT OF L2
> 2016-03-03 22:20:30,192 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] bucket.BucketCache: Read 
> offset=31409152, len=2103
> 2016-03-03 22:20:30,192 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] bucket.FileIOEngine: 
> offset=31409152, length=2103
> 2016-03-03 22:20:30,193 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileReaderImpl: 
> From Cache [blockType=DATA, fileOffset=2055421, headerSize=33, 
> onDiskSizeWithoutHeader=2024, uncompressedSizeWithoutHeader=2020, 
> prevBlockOffset=2053364, isUseHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, onDiskDataSizeWithHeader=2053, 
> getOnDiskSizeWithHeader=2057, totalChecksumBytes=4, isUnpacked=true, 
> buf=[org.apache.hadoop.hbase.nio.SingleByteBuff@e19fbd54], 
> dataBeginsWith=\x00\x00\x00)\x00\x00\x01`\x00\x16user995139035672819231, 
> fileContext=[usesHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, blocksize=65536, encoding=NONE, includesMvcc=true, 
> includesTags=false, compressAlgo=NONE, compressTags=false, 
> cryptoContext=[cipher=NONE keyHash=NONE]]]
> 2016-03-03 22:20:30,193 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileReaderImpl: 
> Cache hit return [blockType=DATA, fileOffset=2055421, headerSize=33, 
> onDiskSizeWithoutHeader=2024, uncompressedSizeWithoutHeader=2020, 
> prevBlockOffset=2053364, isUseHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, onDiskDataSizeWithHeader=2053, 
> getOnDiskSizeWithHeader=2057, totalChecksumBytes=4, isUnpacked=true, 
> buf=[org.apache.hadoop.hbase.nio.SingleByteBuff@e19fbd54], 
> dataBeginsWith=\x00\x00\x00)\x00\x00\x01`\x00\x16user995139035672819231, 
> fileContext=[usesHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, blocksize=65536, encoding=NONE, includesMvcc=true, 
> includesTags=false, compressAlgo=NONE, compressTags=false, 
> cryptoContext=[cipher=NONE keyHash=NONE]]]
> java.lang.Throwable
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1515)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$CellBasedKeyBlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:324)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.seekTo(HFileReaderImpl.java:831)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.reseekTo(HFileReaderImpl.java:812)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:288)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:198)
> at 
> org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:54)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:321)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:279)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:806)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:795)
> at 
> org.apache.hadoop.hbase.regionse

[jira] [Updated] (HBASE-15456) CreateTableProcedure/ModifyTableProcedure needs to fail when there is no family in descriptor

2016-03-14 Thread huaxiang sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-15456:
-
Description: 
If there is only one family in the table, DeleteColumnFamilyProcedure will 
fail. 
Currently, when hbase.table.sanity.checks is set to false, hbase master logs a 
warning and CreateTableProcedure/ModifyTableProcedure will succeed. 
This behavior is not consistent with DeleteColumnFamilyProcedure's. 

Another point: before HBASE-13145, PeriodicMemstoreFlusher would run into the 
following exception. lastStoreFlushTimeMap is populated per family, so if there 
is no family in the table, there is no entry in lastStoreFlushTimeMap.

16/02/01 11:14:26 ERROR regionserver.HRegionServer$PeriodicMemstoreFlusher: 
Caught exception 
java.util.NoSuchElementException 
at 
java.util.concurrent.ConcurrentHashMap$HashIterator.nextEntry(ConcurrentHashMap.java:1354)
 
at 
java.util.concurrent.ConcurrentHashMap$ValueIterator.next(ConcurrentHashMap.java:1384)
 
at java.util.Collections.min(Collections.java:628) 
at 
org.apache.hadoop.hbase.regionserver.HRegion.getEarliestFlushTimeForAllStores(HRegion.java:1572)
 
at org.apache.hadoop.hbase.regionserver.HRegion.shouldFlush(HRegion.java:1904) 
at 
org.apache.hadoop.hbase.regionserver.HRegionServer$PeriodicMemstoreFlusher.chore(HRegionServer.java:1509)
 
at org.apache.hadoop.hbase.Chore.run(Chore.java:87) 
at java.lang.Thread.run(Thread.java:745) 


  was:
If there is only one family in the table, DeleteColumnFamilyProcedure will 
fail. 
Currently, when hbase.table.sanity.checks is set to false, hbase master logs a 
warning and CreateTableProcedure/ModifyTableProcedure will succeed. 
This behavior is not consistent with DeleteColumnFamilyProcedure's. 

Another point: before HBASE-13145, PeriodicMemstoreFlusher would run into the 
following exception if there is no family in the table. lastStoreFlushTimeMap 
is populated per family, so if there is no family, there is no entry in 
lastStoreFlushTimeMap.

16/02/01 11:14:26 ERROR regionserver.HRegionServer$PeriodicMemstoreFlusher: 
Caught exception 
java.util.NoSuchElementException 
at 
java.util.concurrent.ConcurrentHashMap$HashIterator.nextEntry(ConcurrentHashMap.java:1354)
 
at 
java.util.concurrent.ConcurrentHashMap$ValueIterator.next(ConcurrentHashMap.java:1384)
 
at java.util.Collections.min(Collections.java:628) 
at 
org.apache.hadoop.hbase.regionserver.HRegion.getEarliestFlushTimeForAllStores(HRegion.java:1572)
 
at org.apache.hadoop.hbase.regionserver.HRegion.shouldFlush(HRegion.java:1904) 
at 
org.apache.hadoop.hbase.regionserver.HRegionServer$PeriodicMemstoreFlusher.chore(HRegionServer.java:1509)
 
at org.apache.hadoop.hbase.Chore.run(Chore.java:87) 
at java.lang.Thread.run(Thread.java:745) 



> CreateTableProcedure/ModifyTableProcedure needs to fail when there is no 
> family in descriptor
> -
>
> Key: HBASE-15456
> URL: https://issues.apache.org/jira/browse/HBASE-15456
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
>
> If there is only one family in the table, DeleteColumnFamilyProcedure will 
> fail. 
> Currently, when hbase.table.sanity.checks is set to false, hbase master logs 
> a warning and CreateTableProcedure/ModifyTableProcedure will succeed. 
> This behavior is not consistent with DeleteColumnFamilyProcedure's. 
> Another point: before HBASE-13145, PeriodicMemstoreFlusher would run into the 
> following exception. lastStoreFlushTimeMap is populated per family, so if 
> there is no family in the table, there is no entry in lastStoreFlushTimeMap.
> 16/02/01 11:14:26 ERROR regionserver.HRegionServer$PeriodicMemstoreFlusher: 
> Caught exception 
> java.util.NoSuchElementException 
> at 
> java.util.concurrent.ConcurrentHashMap$HashIterator.nextEntry(ConcurrentHashMap.java:1354)
>  
> at 
> java.util.concurrent.ConcurrentHashMap$ValueIterator.next(ConcurrentHashMap.java:1384)
>  
> at java.util.Collections.min(Collections.java:628) 
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getEarliestFlushTimeForAllStores(HRegion.java:1572)
>  
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.shouldFlush(HRegion.java:1904) 
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer$PeriodicMemstoreFlusher.chore(HRegionServer.java:1509)
>  
> at org.apache.hadoop.hbase.Chore.run(Chore.java:87) 
> at java.lang.Thread.run(Thread.java:745) 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15456) CreateTableProcedure/ModifyTableProcedure needs to fail when there is no family in descriptor

2016-03-14 Thread huaxiang sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-15456:
-
Description: 
If there is only one family in the table, DeleteColumnFamilyProcedure will 
fail. 
Currently, when hbase.table.sanity.checks is set to false, hbase master logs a 
warning and CreateTableProcedure/ModifyTableProcedure will succeed. 
This behavior is not consistent with DeleteColumnFamilyProcedure's. 

Another point: before HBASE-13145, PeriodicMemstoreFlusher would run into the 
following exception. lastStoreFlushTimeMap is populated per family, so if there 
is no family in the table, there is no entry in lastStoreFlushTimeMap.

{code}
16/02/01 11:14:26 ERROR regionserver.HRegionServer$PeriodicMemstoreFlusher: 
Caught exception 
java.util.NoSuchElementException 
at 
java.util.concurrent.ConcurrentHashMap$HashIterator.nextEntry(ConcurrentHashMap.java:1354)
 
at 
java.util.concurrent.ConcurrentHashMap$ValueIterator.next(ConcurrentHashMap.java:1384)
 
at java.util.Collections.min(Collections.java:628) 
at 
org.apache.hadoop.hbase.regionserver.HRegion.getEarliestFlushTimeForAllStores(HRegion.java:1572)
 
at org.apache.hadoop.hbase.regionserver.HRegion.shouldFlush(HRegion.java:1904) 
at 
org.apache.hadoop.hbase.regionserver.HRegionServer$PeriodicMemstoreFlusher.chore(HRegionServer.java:1509)
 
at org.apache.hadoop.hbase.Chore.run(Chore.java:87) 
at java.lang.Thread.run(Thread.java:745) 
{code}

  was:
If there is only one family in the table, DeleteColumnFamilyProcedure will 
fail. 
Currently, when hbase.table.sanity.checks is set to false, hbase master logs a 
warning and CreateTableProcedure/ModifyTableProcedure will succeed. 
This behavior is not consistent with DeleteColumnFamilyProcedure's. 

Another point: before HBASE-13145, PeriodicMemstoreFlusher would run into the 
following exception. lastStoreFlushTimeMap is populated per family, so if there 
is no family in the table, there is no entry in lastStoreFlushTimeMap.

16/02/01 11:14:26 ERROR regionserver.HRegionServer$PeriodicMemstoreFlusher: 
Caught exception 
java.util.NoSuchElementException 
at 
java.util.concurrent.ConcurrentHashMap$HashIterator.nextEntry(ConcurrentHashMap.java:1354)
 
at 
java.util.concurrent.ConcurrentHashMap$ValueIterator.next(ConcurrentHashMap.java:1384)
 
at java.util.Collections.min(Collections.java:628) 
at 
org.apache.hadoop.hbase.regionserver.HRegion.getEarliestFlushTimeForAllStores(HRegion.java:1572)
 
at org.apache.hadoop.hbase.regionserver.HRegion.shouldFlush(HRegion.java:1904) 
at 
org.apache.hadoop.hbase.regionserver.HRegionServer$PeriodicMemstoreFlusher.chore(HRegionServer.java:1509)
 
at org.apache.hadoop.hbase.Chore.run(Chore.java:87) 
at java.lang.Thread.run(Thread.java:745) 



> CreateTableProcedure/ModifyTableProcedure needs to fail when there is no 
> family in descriptor
> -
>
> Key: HBASE-15456
> URL: https://issues.apache.org/jira/browse/HBASE-15456
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
>
> If there is only one family in the table, DeleteColumnFamilyProcedure will 
> fail. 
> Currently, when hbase.table.sanity.checks is set to false, hbase master logs 
> a warning and CreateTableProcedure/ModifyTableProcedure will succeed. 
> This behavior is not consistent with DeleteColumnFamilyProcedure's. 
> Another point: before HBASE-13145, PeriodicMemstoreFlusher would run into the 
> following exception. lastStoreFlushTimeMap is populated per family, so if 
> there is no family in the table, there is no entry in lastStoreFlushTimeMap.
> {code}
> 16/02/01 11:14:26 ERROR regionserver.HRegionServer$PeriodicMemstoreFlusher: 
> Caught exception 
> java.util.NoSuchElementException 
> at 
> java.util.concurrent.ConcurrentHashMap$HashIterator.nextEntry(ConcurrentHashMap.java:1354)
>  
> at 
> java.util.concurrent.ConcurrentHashMap$ValueIterator.next(ConcurrentHashMap.java:1384)
>  
> at java.util.Collections.min(Collections.java:628) 
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getEarliestFlushTimeForAllStores(HRegion.java:1572)
>  
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.shouldFlush(HRegion.java:1904) 
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer$PeriodicMemstoreFlusher.chore(HRegionServer.java:1509)
>  
> at org.apache.hadoop.hbase.Chore.run(Chore.java:87) 
> at java.lang.Thread.run(Thread.java:745) 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15456) CreateTableProcedure/ModifyTableProcedure needs to fail when there is no family in descriptor

2016-03-14 Thread huaxiang sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-15456:
-
Attachment: HBASE-15456-v001.patch

> CreateTableProcedure/ModifyTableProcedure needs to fail when there is no 
> family in descriptor
> -
>
> Key: HBASE-15456
> URL: https://issues.apache.org/jira/browse/HBASE-15456
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
> Attachments: HBASE-15456-v001.patch
>
>
> If there is only one family in the table, DeleteColumnFamilyProcedure will 
> fail. 
> Currently, when hbase.table.sanity.checks is set to false, hbase master logs 
> a warning and CreateTableProcedure/ModifyTableProcedure will succeed. 
> This behavior is not consistent with DeleteColumnFamilyProcedure's. 
> Another point: before HBASE-13145, PeriodicMemstoreFlusher would run into the 
> following exception. lastStoreFlushTimeMap is populated per family, so if 
> there is no family in the table, there is no entry in lastStoreFlushTimeMap.
> {code}
> 16/02/01 11:14:26 ERROR regionserver.HRegionServer$PeriodicMemstoreFlusher: 
> Caught exception 
> java.util.NoSuchElementException 
> at 
> java.util.concurrent.ConcurrentHashMap$HashIterator.nextEntry(ConcurrentHashMap.java:1354)
>  
> at 
> java.util.concurrent.ConcurrentHashMap$ValueIterator.next(ConcurrentHashMap.java:1384)
>  
> at java.util.Collections.min(Collections.java:628) 
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getEarliestFlushTimeForAllStores(HRegion.java:1572)
>  
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.shouldFlush(HRegion.java:1904) 
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer$PeriodicMemstoreFlusher.chore(HRegionServer.java:1509)
>  
> at org.apache.hadoop.hbase.Chore.run(Chore.java:87) 
> at java.lang.Thread.run(Thread.java:745) 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15456) CreateTableProcedure/ModifyTableProcedure needs to fail when there is no family in descriptor

2016-03-14 Thread huaxiang sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-15456:
-
Status: Patch Available  (was: Open)

First try
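A sketch of the kind of validation such a patch would add, with a hypothetical 
descriptor type; the real change operates on the table descriptor inside the 
create/modify procedures, and may not match the actual patch:

{code}
import java.util.List;

public class FamilyCheckSketch {
  // Hypothetical stand-in for a table descriptor.
  record TableDescriptorLite(String name, List<String> families) {}

  // Fail fast on create/modify instead of merely warning when
  // hbase.table.sanity.checks is disabled.
  static void checkHasColumnFamily(TableDescriptorLite htd) {
    if (htd.families().isEmpty()) {
      throw new IllegalArgumentException(
          "Table " + htd.name() + " should have at least one column family.");
    }
  }

  public static void main(String[] args) {
    checkHasColumnFamily(new TableDescriptorLite("t1", List.of("cf"))); // passes
    checkHasColumnFamily(new TableDescriptorLite("t2", List.of()));     // throws
  }
}
{code}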

> CreateTableProcedure/ModifyTableProcedure needs to fail when there is no 
> family in descriptor
> -
>
> Key: HBASE-15456
> URL: https://issues.apache.org/jira/browse/HBASE-15456
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
> Attachments: HBASE-15456-v001.patch
>
>
> If there is only one family in the table, DeleteColumnFamilyProcedure will 
> fail. 
> Currently, when hbase.table.sanity.checks is set to false, hbase master logs 
> a warning and CreateTableProcedure/ModifyTableProcedure will succeed. 
> This behavior is not consistent with DeleteColumnFamilyProcedure's. 
> Another point: before HBASE-13145, PeriodicMemstoreFlusher would run into the 
> following exception. lastStoreFlushTimeMap is populated per family, so if 
> there is no family in the table, there is no entry in lastStoreFlushTimeMap.
> {code}
> 16/02/01 11:14:26 ERROR regionserver.HRegionServer$PeriodicMemstoreFlusher: 
> Caught exception 
> java.util.NoSuchElementException 
> at 
> java.util.concurrent.ConcurrentHashMap$HashIterator.nextEntry(ConcurrentHashMap.java:1354)
>  
> at 
> java.util.concurrent.ConcurrentHashMap$ValueIterator.next(ConcurrentHashMap.java:1384)
>  
> at java.util.Collections.min(Collections.java:628) 
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getEarliestFlushTimeForAllStores(HRegion.java:1572)
>  
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.shouldFlush(HRegion.java:1904) 
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer$PeriodicMemstoreFlusher.chore(HRegionServer.java:1509)
>  
> at org.apache.hadoop.hbase.Chore.run(Chore.java:87) 
> at java.lang.Thread.run(Thread.java:745) 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15456) CreateTableProcedure/ModifyTableProcedure needs to fail when there is no family in descriptor

2016-03-14 Thread huaxiang sun (JIRA)
huaxiang sun created HBASE-15456:


 Summary: CreateTableProcedure/ModifyTableProcedure needs to fail 
when there is no family in descriptor
 Key: HBASE-15456
 URL: https://issues.apache.org/jira/browse/HBASE-15456
 Project: HBase
  Issue Type: Improvement
  Components: master
Affects Versions: 2.0.0
Reporter: huaxiang sun
Assignee: huaxiang sun
Priority: Minor


If there is only one family in the table, DeleteColumnFamilyProcedure will 
fail. 
Currently, when hbase.table.sanity.checks is set to false, hbase master logs a 
warning and CreateTableProcedure/ModifyTableProcedure will succeed. 
This behavior is not consistent with DeleteColumnFamilyProcedure's. 

Another point: before HBASE-13145, PeriodicMemstoreFlusher would run into the 
following exception if there is no family in the table. lastStoreFlushTimeMap 
is populated per family, so if there is no family, there is no entry in 
lastStoreFlushTimeMap.

16/02/01 11:14:26 ERROR regionserver.HRegionServer$PeriodicMemstoreFlusher: 
Caught exception 
java.util.NoSuchElementException 
at 
java.util.concurrent.ConcurrentHashMap$HashIterator.nextEntry(ConcurrentHashMap.java:1354)
 
at 
java.util.concurrent.ConcurrentHashMap$ValueIterator.next(ConcurrentHashMap.java:1384)
 
at java.util.Collections.min(Collections.java:628) 
at 
org.apache.hadoop.hbase.regionserver.HRegion.getEarliestFlushTimeForAllStores(HRegion.java:1572)
 
at org.apache.hadoop.hbase.regionserver.HRegion.shouldFlush(HRegion.java:1904) 
at 
org.apache.hadoop.hbase.regionserver.HRegionServer$PeriodicMemstoreFlusher.chore(HRegionServer.java:1509)
 
at org.apache.hadoop.hbase.Chore.run(Chore.java:87) 
at java.lang.Thread.run(Thread.java:745) 
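The failure mode in the stack trace above, in miniature: Collections.min over 
the values() of an empty map throws NoSuchElementException, which is exactly 
what happens when a table has no families and therefore no entries in 
lastStoreFlushTimeMap. A guard sidesteps it (the map name here is just a 
stand-in):

{code}
import java.util.Collections;
import java.util.concurrent.ConcurrentHashMap;

public class MinOnEmptyMap {
  public static void main(String[] args) {
    ConcurrentHashMap<String, Long> lastFlushTimes = new ConcurrentHashMap<>();
    // Guarded version: skip min() entirely when there are no families.
    long earliest = lastFlushTimes.isEmpty()
        ? Long.MAX_VALUE
        : Collections.min(lastFlushTimes.values());
    System.out.println("earliest=" + earliest);
    // Unguarded version reproduces the chore's failure:
    Collections.min(lastFlushTimes.values()); // throws java.util.NoSuchElementException
  }
}
{code}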




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15334) Add avro support for spark hbase connector

2016-03-14 Thread Zhan Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193784#comment-15193784
 ] 

Zhan Zhang commented on HBASE-15334:


[~jmhsieh] [~ted.m] Would you like to take a look at the patch and provide your 
comments?

> Add avro support for spark hbase connector
> --
>
> Key: HBASE-15334
> URL: https://issues.apache.org/jira/browse/HBASE-15334
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zhan Zhang
>Assignee: Zhan Zhang
> Attachments: HBASE-15334-1.patch, HBASE-15334-2.patch, 
> HBASE-15334-3.patch, HBASE-15334-4.patch
>
>
> Avro is a popular format for hbase storage. User may want the support 
> natively in the connector.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15452) Consider removing checkScanOrder from StoreScanner.next

2016-03-14 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193773#comment-15193773
 ] 

Lars Hofhansl commented on HBASE-15452:
---

Wait... We're running the tests with assertions enabled, right? So this would 
not be representative of production.
In the description I measured this end-to-end in "production" with Phoenix.


> Consider removing checkScanOrder from StoreScanner.next
> ---
>
> Key: HBASE-15452
> URL: https://issues.apache.org/jira/browse/HBASE-15452
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
> Attachments: 15452-0.98.txt
>
>
> In looking at why we spend so much time in StoreScanner.next when doing a 
> simple Phoenix count\(*) query I came across checkScanOrder. Not only is this 
> a function dispatch (that the JIT would eventually inline), it also requires 
> setting the prevKV member for every Cell encountered.
> Removing that logic yields a measurable end-to-end improvement of 5-20% (in 
> 0.98).
> I will repeat this test on my work machine tomorrow.
> I think we're stable enough to remove that check anyway.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15441) Fix WAL splitting when region has moved multiple times

2016-03-14 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193771#comment-15193771
 ] 

Elliott Clark commented on HBASE-15441:
---

Yea, it sure seems like the close isn't needed at all.
It tries to:
* clean the cache ( something that warmup shouldn't be doing anyway ).
* Flush all edits ( there should be no edits ever ).
* Disable all compactions ( there are no compactions running since this is


> Fix WAL splitting when region has moved multiple times
> --
>
> Key: HBASE-15441
> URL: https://issues.apache.org/jira/browse/HBASE-15441
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.0, 1.2.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Blocker
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15441-v1.patch, HBASE-15441-v2.patch, 
> HBASE-15441.patch
>
>
> Currently WAL splitting is broken when a region has been opened multiple 
> times in recent minutes.
> Region open and region close write event markers to the WAL. These markers 
> should carry the region's sequence id, but currently they get 1. That means 
> that if a region has moved multiple times in the last few minutes, multiple 
> split-log workers will try to create the recovered-edits file for sequence 
> id 1. One of the workers will fail, and on failing it will delete the 
> recovered edits, causing all WAL-split attempts to fail.
> We need to:
> # make sure that close gets the correct sequence id for open.
> # Filter all region events from recovered edits.
> It appears that the close event with a sequence id of one is coming from 
> region warm up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15439) getMaximumAllowedTimeBetweenRuns in ScheduledChore ignores the TimeUnit

2016-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193825#comment-15193825
 ] 

Hudson commented on HBASE-15439:


FAILURE: Integrated in HBase-1.4 #16 (See 
[https://builds.apache.org/job/HBase-1.4/16/])
HBASE-15439 getMaximumAllowedTimeBetweenRuns in ScheduledChore ignores (tedyu: 
rev 1cb82d91189606af0e07fd05955dde830439b509)
* hbase-common/src/main/java/org/apache/hadoop/hbase/ScheduledChore.java
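For context, a simplified illustration of the bug class fixed here: a 
maximum-delay computation that treats the raw period as milliseconds regardless 
of the chore's TimeUnit, versus one that converts first. This is a sketch only; 
see ScheduledChore.java for the actual code.

{code}
import java.util.concurrent.TimeUnit;

public class ChoreTimingSketch {
  static final int PERIOD = 6000;            // e.g. a 6000-second chore period
  static final TimeUnit UNIT = TimeUnit.SECONDS;

  // Buggy shape: ignores UNIT, so the allowed gap is 9000 "millis" and every
  // run appears late -- the chore only logs "missed its start time".
  static long maxAllowedBuggy() {
    return (long) (1.5 * PERIOD);
  }

  // Fixed shape: convert through the TimeUnit first.
  static long maxAllowedFixed() {
    return (long) (1.5 * UNIT.toMillis(PERIOD));
  }

  public static void main(String[] args) {
    System.out.println(maxAllowedBuggy()); // 9000
    System.out.println(maxAllowedFixed()); // 9000000
  }
}
{code}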


> getMaximumAllowedTimeBetweenRuns in ScheduledChore ignores the TimeUnit
> ---
>
> Key: HBASE-15439
> URL: https://issues.apache.org/jira/browse/HBASE-15439
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Jingcheng Du
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.4, 1.4.0
>
> Attachments: HBASE-15439.patch
>
>
> I was running the IntegrationTestIngestWithMOB test.
> I lowered the mob compaction chore interval to this value:
> {code}
> <property>
>   <name>hbase.mob.compaction.chore.period</name>
>   <value>6000</value>
> </property>
> {code}
> After a whole night, there was no indication in the master log that mob 
> compaction ran.
> All I found was:
> {code}
> 2016-03-09 04:18:52,194 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 05:58:52,516 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 07:38:52,847 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 09:18:52,848 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 10:58:52,932 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 12:38:52,932 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 14:18:52,933 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 15:58:52,957 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 17:38:52,960 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-15441) Fix WAL splitting when region has moved multiple times

2016-03-14 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193771#comment-15193771
 ] 

Elliott Clark edited comment on HBASE-15441 at 3/14/16 6:24 PM:


Yea, it sure seems like the close isn't needed at all.
It tries to:
* clean the cache ( something that warmup shouldn't be doing anyway ).
* Flush all edits ( there should be no edits ever ).
* Disable all compactions ( there are no compactions running since this is read 
only).



was (Author: eclark):
Yea, it sure seems like the close isn't needed at all.
It tries to:
* clean the cache ( something that warmup shouldn't be doing anyway ).
* Flush all edits ( there should be no edits ever ).
* Disable all compactions ( there are no compactions running since this is


> Fix WAL splitting when region has moved multiple times
> --
>
> Key: HBASE-15441
> URL: https://issues.apache.org/jira/browse/HBASE-15441
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.0, 1.2.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Blocker
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15441-v1.patch, HBASE-15441-v2.patch, 
> HBASE-15441.patch
>
>
> Currently WAL splitting is broken when a region has been opened multiple 
> times in recent minutes.
> Region open and region close write event markers to the WAL. These markers 
> should carry the region's sequence id, but currently they get 1. That means 
> that if a region has moved multiple times in the last few minutes, multiple 
> split-log workers will try to create the recovered-edits file for sequence 
> id 1. One of the workers will fail, and on failing it will delete the 
> recovered edits, causing all WAL-split attempts to fail.
> We need to:
> # make sure that close gets the correct sequence id for open.
> # Filter all region events from recovered edits.
> It appears that the close event with a sequence id of one is coming from 
> region warm up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15392) Single Cell Get reads two HFileBlocks

2016-03-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193850#comment-15193850
 ] 

stack commented on HBASE-15392:
---

I filed HBASE-15457, [~danielpol]. Let's continue this discussion over there. 
Looking down in HFileReaderImpl: when scanning, we could pass down a hint, and 
if it is a "Get" scan and the next block starts with a different row, we could 
return end-of-scan rather than load the next block. Thanks.
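A sketch of the shape of that check (names hypothetical; the real change would 
live down in HFileReaderImpl):

{code}
public class GetScanHintSketch {
  // On a Get scan, if the next block's first row differs from the row being
  // fetched, report end-of-scan instead of loading that block.
  static boolean shouldLoadNextBlock(boolean isGetScan, String currentRow,
      String nextBlockFirstRow) {
    if (isGetScan && !currentRow.equals(nextBlockFirstRow)) {
      return false; // save the seek: the Get cannot have cells in that block
    }
    return true;
  }

  public static void main(String[] args) {
    System.out.println(shouldLoadNextBlock(true, "row5", "row6"));  // false: skip the load
    System.out.println(shouldLoadNextBlock(false, "row5", "row6")); // true: full scans continue
  }
}
{code}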

> Single Cell Get reads two HFileBlocks
> -
>
> Key: HBASE-15392
> URL: https://issues.apache.org/jira/browse/HBASE-15392
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>Assignee: stack
> Attachments: 15392-0.98-looksee.txt, 15392.wip.patch, 
> 15392v2.wip.patch, 15392v3.wip.patch, 15392v4.patch, 15392v5.patch, 
> 15392v6.patch, 15392v7.patch, 15392v7.patch, 15392v7.patch, 15392v7.patch, 
> HBASE-15392_suggest.patch, gc.png, gc.png, io.png, no_optimize.patch, 
> no_optimize.patch, reads.png, reads.png, two_seeks.txt
>
>
> As found by Daniel "SystemTap" Pol, a simple Get results in our reading two 
> HFileBlocks, the one that contains the wanted Cell, and the block that 
> follows.
> Here is a bit of custom logging that logs a stack trace on each HFileBlock 
> read so you can see the call stack responsible:
> {code}
> 2016-03-03 22:20:30,191 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] regionserver.StoreScanner: 
> START LOOP
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] regionserver.StoreScanner: 
> QCODE SEEK_NEXT_COL
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileBlockIndex: 
> STARTED WHILE
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.CombinedBlockCache: 
> OUT OF L2
> 2016-03-03 22:20:30,192 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] bucket.BucketCache: Read 
> offset=31409152, len=2103
> 2016-03-03 22:20:30,192 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] bucket.FileIOEngine: 
> offset=31409152, length=2103
> 2016-03-03 22:20:30,193 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileReaderImpl: 
> From Cache [blockType=DATA, fileOffset=2055421, headerSize=33, 
> onDiskSizeWithoutHeader=2024, uncompressedSizeWithoutHeader=2020, 
> prevBlockOffset=2053364, isUseHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, onDiskDataSizeWithHeader=2053, 
> getOnDiskSizeWithHeader=2057, totalChecksumBytes=4, isUnpacked=true, 
> buf=[org.apache.hadoop.hbase.nio.SingleByteBuff@e19fbd54], 
> dataBeginsWith=\x00\x00\x00)\x00\x00\x01`\x00\x16user995139035672819231, 
> fileContext=[usesHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, blocksize=65536, encoding=NONE, includesMvcc=true, 
> includesTags=false, compressAlgo=NONE, compressTags=false, 
> cryptoContext=[cipher=NONE keyHash=NONE]]]
> 2016-03-03 22:20:30,193 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileReaderImpl: 
> Cache hit return [blockType=DATA, fileOffset=2055421, headerSize=33, 
> onDiskSizeWithoutHeader=2024, uncompressedSizeWithoutHeader=2020, 
> prevBlockOffset=2053364, isUseHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, onDiskDataSizeWithHeader=2053, 
> getOnDiskSizeWithHeader=2057, totalChecksumBytes=4, isUnpacked=true, 
> buf=[org.apache.hadoop.hbase.nio.SingleByteBuff@e19fbd54], 
> dataBeginsWith=\x00\x00\x00)\x00\x00\x01`\x00\x16user995139035672819231, 
> fileContext=[usesHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, blocksize=65536, encoding=NONE, includesMvcc=true, 
> includesTags=false, compressAlgo=NONE, compressTags=false, 
> cryptoContext=[cipher=NONE keyHash=NONE]]]
> java.lang.Throwable
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1515)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$CellBasedKeyBlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:324)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.seekTo(HFileReaderImpl.java:831)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.reseekTo(HFileReaderImpl.java:812)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:288)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:198)
> at 
> org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:54)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:321)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.r

[jira] [Created] (HBASE-15457) [performance] Save-a-seek; hint HFileReader when Scan is a "Get" Scan

2016-03-14 Thread stack (JIRA)
stack created HBASE-15457:
-

 Summary: [performance] Save-a-seek; hint HFileReader when Scan is 
a "Get" Scan
 Key: HBASE-15457
 URL: https://issues.apache.org/jira/browse/HBASE-15457
 Project: HBase
  Issue Type: Bug
Reporter: stack


Have the Scan hint the lower-level Reader when it's a 'Get' Scan. The Reader 
currently checks for EOF and for whether it is time to load the next block on 
each next invocation. Seems easy enough to return null/end-of-scan if it is a 
get-scan and the next block starts with a different row.

Prompted by @daniel pol's questions/suggestions over on HBASE-15392; see 
towards the end.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14030) HBase Backup/Restore Phase 1

2016-03-14 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14030:
---
Attachment: HBASE-14030.v40.patch

Patch v40 introduces BackupUtility (reviewed by Vlad offline) in the 
hbase-client module.

> HBase Backup/Restore Phase 1
> 
>
> Key: HBASE-14030
> URL: https://issues.apache.org/jira/browse/HBASE-14030
> Project: HBase
>  Issue Type: Umbrella
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-14030-v0.patch, HBASE-14030-v1.patch, 
> HBASE-14030-v10.patch, HBASE-14030-v11.patch, HBASE-14030-v12.patch, 
> HBASE-14030-v13.patch, HBASE-14030-v14.patch, HBASE-14030-v15.patch, 
> HBASE-14030-v17.patch, HBASE-14030-v18.patch, HBASE-14030-v2.patch, 
> HBASE-14030-v20.patch, HBASE-14030-v21.patch, HBASE-14030-v22.patch, 
> HBASE-14030-v23.patch, HBASE-14030-v24.patch, HBASE-14030-v25.patch, 
> HBASE-14030-v26.patch, HBASE-14030-v27.patch, HBASE-14030-v28.patch, 
> HBASE-14030-v3.patch, HBASE-14030-v30.patch, HBASE-14030-v35.patch, 
> HBASE-14030-v37.patch, HBASE-14030-v4.patch, HBASE-14030-v5.patch, 
> HBASE-14030-v6.patch, HBASE-14030-v7.patch, HBASE-14030-v8.patch, 
> HBASE-14030.v38.patch, HBASE-14030.v39.patch, HBASE-14030.v40.patch, 
> hbase-14030_v36.patch
>
>
> This is the umbrella ticket for Backup/Restore Phase 1. See HBASE-7912 design 
> doc for the phase description.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15439) getMaximumAllowedTimeBetweenRuns in ScheduledChore ignores the TimeUnit

2016-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193896#comment-15193896
 ] 

Hudson commented on HBASE-15439:


FAILURE: Integrated in HBase-1.3 #600 (See 
[https://builds.apache.org/job/HBase-1.3/600/])
HBASE-15439 getMaximumAllowedTimeBetweenRuns in ScheduledChore ignores (tedyu: 
rev 42b97f12ef6605b7a718943b69dd9d8687b5499d)
* hbase-common/src/main/java/org/apache/hadoop/hbase/ScheduledChore.java


> getMaximumAllowedTimeBetweenRuns in ScheduledChore ignores the TimeUnit
> ---
>
> Key: HBASE-15439
> URL: https://issues.apache.org/jira/browse/HBASE-15439
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Jingcheng Du
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.4, 1.4.0
>
> Attachments: HBASE-15439.patch
>
>
> I was running IntegrationTestIngestWithMOB test.
> I lower the mob compaction chore interval to this value:
> {code}
> <property>
>   <name>hbase.mob.compaction.chore.period</name>
>   <value>6000</value>
> </property>
> {code}
> After whole night, there was no indication from master log that mob 
> compaction ran.
> All I found was:
> {code}
> 2016-03-09 04:18:52,194 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 05:58:52,516 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 07:38:52,847 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 09:18:52,848 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 10:58:52,932 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 12:38:52,932 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 14:18:52,933 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 15:58:52,957 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 17:38:52,960 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> {code}
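
As an aside, the repro above is a unit confusion: the chore period is given in 
seconds (6000), but if the maximum-allowed-time computation treats the raw 
number as milliseconds, every run looks late. A standalone illustration of the 
unit-correct arithmetic (the real fix is in ScheduledChore.java per the commit 
above and may differ in detail):

{code}
import java.util.concurrent.TimeUnit;

public class ChoreMath {
  // Unit-correct slack threshold: normalize the period through its TimeUnit
  // before comparing against wall-clock millisecond deltas.
  static double maximumAllowedTimeBetweenRuns(long period, TimeUnit timeUnit) {
    return 1.5 * timeUnit.toMillis(period);
  }

  public static void main(String[] args) {
    // Buggy reading of a 6000-second period: 1.5 * 6000 = 9000 ms.
    // Correct reading: 1.5 * 6,000,000 ms.
    System.out.println(maximumAllowedTimeBetweenRuns(6000, TimeUnit.SECONDS));
  }
}
{code}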



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15334) Add avro support for spark hbase connector

2016-03-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193897#comment-15193897
 ] 

Hadoop QA commented on HBASE-15334:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
20s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 12s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 41s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} scaladoc {color} | {color:green} 0m 
35s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 21s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} scalac {color} | {color:green} 1m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 22s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} scalac {color} | {color:green} 1m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
31m 21s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} scaladoc {color} | {color:green} 0m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} scaladoc {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 21s 
{color} | {color:green} hbase-spark in the patch passed with JDK v1.8.0. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 22s 
{color} | {color:green} hbase-spark in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
8s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 56s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12793350/HBASE-15334-4.patch |
| JIRA Issue | HBASE-15334 |
| Optional Tests |  asflicense  javac  javadoc  unit  xml  compile  scalac  
scaladoc  |
| uname | Linux pomona.apache.org 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT 
Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 122

[jira] [Updated] (HBASE-15392) Single Cell Get reads two HFileBlocks

2016-03-14 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15392:
--
Release Note: 
When an explicit Get with one or more columns is specified, we were, at a 
minimum, overseeking: reading until we tripped over the next row, regardless, 
and only then returning. If the next row was in-block, we'd just do too much 
seeking, but if the next row was in the next block (or in the block beyond 
that), we would keep seeking and loading blocks until we found the next row 
before we'd return.

There remains one case where we will still 'overread'. It is when the row end 
aligns with the end of the block. In this case we will load the next block just 
to find that there are no more cells in the current row. See HBASE-15457.

  was:When an explicit Get with a one or more columns specified, we at a 
minimum, were overseeking reading until we tripped over the next row, 
regardless, and only then returning. If the next row was in-block, we'd just do 
too much seeking but if the next row is in the next (or in the next block 
beyond that), we will keep seeking and loading blocks until we find the next 
row before we'd return.


> Single Cell Get reads two HFileBlocks
> -
>
> Key: HBASE-15392
> URL: https://issues.apache.org/jira/browse/HBASE-15392
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>Assignee: stack
> Attachments: 15392-0.98-looksee.txt, 15392.wip.patch, 
> 15392v2.wip.patch, 15392v3.wip.patch, 15392v4.patch, 15392v5.patch, 
> 15392v6.patch, 15392v7.patch, 15392v7.patch, 15392v7.patch, 15392v7.patch, 
> HBASE-15392_suggest.patch, gc.png, gc.png, io.png, no_optimize.patch, 
> no_optimize.patch, reads.png, reads.png, two_seeks.txt
>
>
> As found by Daniel "SystemTap" Pol, a simple Get results in our reading two 
> HFileBlocks, the one that contains the wanted Cell, and the block that 
> follows.
> Here is a bit of custom logging that logs a stack trace on each HFileBlock 
> read so you can see the call stack responsible:
> {code}
> 2016-03-03 22:20:30,191 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] regionserver.StoreScanner: 
> START LOOP
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] regionserver.StoreScanner: 
> QCODE SEEK_NEXT_COL
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileBlockIndex: 
> STARTED WHILE
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.CombinedBlockCache: 
> OUT OF L2
> 2016-03-03 22:20:30,192 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] bucket.BucketCache: Read 
> offset=31409152, len=2103
> 2016-03-03 22:20:30,192 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] bucket.FileIOEngine: 
> offset=31409152, length=2103
> 2016-03-03 22:20:30,193 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileReaderImpl: 
> From Cache [blockType=DATA, fileOffset=2055421, headerSize=33, 
> onDiskSizeWithoutHeader=2024, uncompressedSizeWithoutHeader=2020, 
> prevBlockOffset=2053364, isUseHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, onDiskDataSizeWithHeader=2053, 
> getOnDiskSizeWithHeader=2057, totalChecksumBytes=4, isUnpacked=true, 
> buf=[org.apache.hadoop.hbase.nio.SingleByteBuff@e19fbd54], 
> dataBeginsWith=\x00\x00\x00)\x00\x00\x01`\x00\x16user995139035672819231, 
> fileContext=[usesHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, blocksize=65536, encoding=NONE, includesMvcc=true, 
> includesTags=false, compressAlgo=NONE, compressTags=false, 
> cryptoContext=[cipher=NONE keyHash=NONE]]]
> 2016-03-03 22:20:30,193 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileReaderImpl: 
> Cache hit return [blockType=DATA, fileOffset=2055421, headerSize=33, 
> onDiskSizeWithoutHeader=2024, uncompressedSizeWithoutHeader=2020, 
> prevBlockOffset=2053364, isUseHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, onDiskDataSizeWithHeader=2053, 
> getOnDiskSizeWithHeader=2057, totalChecksumBytes=4, isUnpacked=true, 
> buf=[org.apache.hadoop.hbase.nio.SingleByteBuff@e19fbd54], 
> dataBeginsWith=\x00\x00\x00)\x00\x00\x01`\x00\x16user995139035672819231, 
> fileContext=[usesHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, blocksize=65536, encoding=NONE, includesMvcc=true, 
> includesTags=false, compressAlgo=NONE, compressTags=false, 
> cryptoContext=[cipher=NONE keyHash=NONE]]]
> java.lang.Throwable
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1515)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$CellBasedKeyBlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:324)
> at 
> org.apache.

[jira] [Updated] (HBASE-14983) Create metrics for per block type hit/miss ratios

2016-03-14 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14983:
--
Attachment: HBASE-14983-v6.patch

Rebased

> Create metrics for per block type hit/miss ratios
> -
>
> Key: HBASE-14983
> URL: https://issues.apache.org/jira/browse/HBASE-14983
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14983-v1.patch, HBASE-14983-v2.patch, 
> HBASE-14983-v3.patch, HBASE-14983-v4.patch, HBASE-14983-v5.patch, 
> HBASE-14983-v6.patch, HBASE-14983.patch, Screen Shot 2015-12-15 at 3.33.09 
> PM.png
>
>
> Missing a root index block is worse than missing a data block. We should know 
> the difference



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-14979) Update to the newest Zookeeper release

2016-03-14 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark resolved HBASE-14979.
---
Resolution: Duplicate

> Update to the newest Zookeeper release
> --
>
> Key: HBASE-14979
> URL: https://issues.apache.org/jira/browse/HBASE-14979
> Project: HBase
>  Issue Type: Improvement
>  Components: Zookeeper
>Affects Versions: 1.2.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14979.patch
>
>
> ZOOKEEPER-706 is nice to have for anyone running replication that sometimes 
> gets stalled. We should update to the latest patch version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14979) Update to the newest Zookeeper release

2016-03-14 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193918#comment-15193918
 ] 

Elliott Clark commented on HBASE-14979:
---

HBASE-15300

> Update to the newest Zookeeper release
> --
>
> Key: HBASE-14979
> URL: https://issues.apache.org/jira/browse/HBASE-14979
> Project: HBase
>  Issue Type: Improvement
>  Components: Zookeeper
>Affects Versions: 1.2.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14979.patch
>
>
> ZOOKEEPER-706 is nice to have for anyone running replication that sometimes 
> gets stalled. We should update to the latest patch version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15439) getMaximumAllowedTimeBetweenRuns in ScheduledChore ignores the TimeUnit

2016-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193975#comment-15193975
 ] 

Hudson commented on HBASE-15439:


FAILURE: Integrated in HBase-1.2 #578 (See 
[https://builds.apache.org/job/HBase-1.2/578/])
HBASE-15439 getMaximumAllowedTimeBetweenRuns in ScheduledChore ignores (tedyu: 
rev 62774516aa1c60cad9f9ba5ec1286f0bf26a1a0c)
* hbase-common/src/main/java/org/apache/hadoop/hbase/ScheduledChore.java


> getMaximumAllowedTimeBetweenRuns in ScheduledChore ignores the TimeUnit
> ---
>
> Key: HBASE-15439
> URL: https://issues.apache.org/jira/browse/HBASE-15439
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Jingcheng Du
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.4, 1.4.0
>
> Attachments: HBASE-15439.patch
>
>
> I was running IntegrationTestIngestWithMOB test.
> I lower the mob compaction chore interval to this value:
> {code}
> <property>
>   <name>hbase.mob.compaction.chore.period</name>
>   <value>6000</value>
> </property>
> {code}
> After whole night, there was no indication from master log that mob 
> compaction ran.
> All I found was:
> {code}
> 2016-03-09 04:18:52,194 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 05:58:52,516 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 07:38:52,847 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 09:18:52,848 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 10:58:52,932 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 12:38:52,932 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 14:18:52,933 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 15:58:52,957 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 17:38:52,960 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15439) getMaximumAllowedTimeBetweenRuns in ScheduledChore ignores the TimeUnit

2016-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193985#comment-15193985
 ] 

Hudson commented on HBASE-15439:


FAILURE: Integrated in HBase-Trunk_matrix #776 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/776/])
HBASE-15439 getMaximumAllowedTimeBetweenRuns in ScheduledChore ignores (tedyu: 
rev 122e6f5793ba3b0c4d4e43fc7a75499aaf7e5ee3)
* hbase-common/src/main/java/org/apache/hadoop/hbase/ScheduledChore.java


> getMaximumAllowedTimeBetweenRuns in ScheduledChore ignores the TimeUnit
> ---
>
> Key: HBASE-15439
> URL: https://issues.apache.org/jira/browse/HBASE-15439
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Jingcheng Du
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.4, 1.4.0
>
> Attachments: HBASE-15439.patch
>
>
> I was running IntegrationTestIngestWithMOB test.
> I lower the mob compaction chore interval to this value:
> {code}
> <property>
>   <name>hbase.mob.compaction.chore.period</name>
>   <value>6000</value>
> </property>
> {code}
> After whole night, there was no indication from master log that mob 
> compaction ran.
> All I found was:
> {code}
> 2016-03-09 04:18:52,194 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 05:58:52,516 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 07:38:52,847 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 09:18:52,848 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 10:58:52,932 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 12:38:52,932 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 14:18:52,933 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 15:58:52,957 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_1] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> 2016-03-09 17:38:52,960 INFO  
> [tyu-hbase-rhel-re-2.novalocal,2,1457491115327_ChoreService_2] 
> hbase.ScheduledChore: Chore: 
> tyu-hbase-rhel-re-2.novalocal,2,1457491115327-  MobCompactionChore missed 
> its start time
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15455) Upgrade hadoop to 2.7.x

2016-03-14 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193994#comment-15193994
 ] 

Sean Busbey commented on HBASE-15455:
-

We already use a default of Hadoop 2.7.1 on master, as of HBASE-13339.

What's the impact of this conflict? There was a pretty long discussion around 
not updating the default Hadoop from 2.5.z in the branch-1 line; revisiting 
that will require something severe.

> Upgrade hadoop to 2.7.x
> ---
>
> Key: HBASE-15455
> URL: https://issues.apache.org/jira/browse/HBASE-15455
> Project: HBase
>  Issue Type: Task
>  Components: Client
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Krzysztof Gardo
>
> Duplicate finder maven plugin discovered
> {code}
> [WARNING] Found duplicate and different classes in 
> [org.apache.hadoop:hadoop-yarn-api:2.5.1, 
> org.apache.hadoop:hadoop-yarn-common:2.5.1]:
> [WARNING]   org.apache.hadoop.yarn.factories.package-info
> [WARNING]   org.apache.hadoop.yarn.factory.providers.package-info
> [WARNING]   org.apache.hadoop.yarn.util.package-info
> {code}
> 2.7.x is free from that issue.
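
For what it's worth, downstreamers who need a newer Hadoop 2 line today can 
already override the default at build time; this assumes the standard HBase 
build property for the Hadoop 2 profile:

{code}
# Build HBase against a specific Hadoop 2.x instead of the branch default:
mvn clean install -DskipTests -Dhadoop-two.version=2.7.1
{code}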



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15456) CreateTableProcedure/ModifyTableProcedure needs to fail when there is no family in descriptor

2016-03-14 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15194011#comment-15194011
 ] 

Matteo Bertozzi commented on HBASE-15456:
-

+1, looks good to me. Let's see what the QA has to say

> CreateTableProcedure/ModifyTableProcedure needs to fail when there is no 
> family in descriptor
> -
>
> Key: HBASE-15456
> URL: https://issues.apache.org/jira/browse/HBASE-15456
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
> Attachments: HBASE-15456-v001.patch
>
>
> If there is only one family in the table, DeleteColumnFamilyProcedure will 
> fail. 
> Currently, when hbase.table.sanity.checks is set to false, hbase master logs 
> a warning and CreateTableProcedure/ModifyTableProcedure will succeed. 
> This behavior is not consistent with DeleteColumnFamilyProcedure's. 
> Another point: before HBASE-13145, PeriodicMemstoreFlusher would run into the 
> following exception. lastStoreFlushTimeMap is populated per family; if 
> there is no family in the table, there is no entry in lastStoreFlushTimeMap.
> {code}
> 16/02/01 11:14:26 ERROR regionserver.HRegionServer$PeriodicMemstoreFlusher: 
> Caught exception 
> java.util.NoSuchElementException 
> at 
> java.util.concurrent.ConcurrentHashMap$HashIterator.nextEntry(ConcurrentHashMap.java:1354)
>  
> at 
> java.util.concurrent.ConcurrentHashMap$ValueIterator.next(ConcurrentHashMap.java:1384)
>  
> at java.util.Collections.min(Collections.java:628) 
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getEarliestFlushTimeForAllStores(HRegion.java:1572)
>  
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.shouldFlush(HRegion.java:1904) 
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer$PeriodicMemstoreFlusher.chore(HRegionServer.java:1509)
>  
> at org.apache.hadoop.hbase.Chore.run(Chore.java:87) 
> at java.lang.Thread.run(Thread.java:745) 
> {code}
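
For illustration, a sketch of the kind of guard being argued for, so that 
create/modify fail fast the way DeleteColumnFamilyProcedure does; the class 
and method names are made up for the example:

{code}
import org.apache.hadoop.hbase.DoNotRetryIOException;
import org.apache.hadoop.hbase.HTableDescriptor;

final class FamilyCheck {
  private FamilyCheck() {}

  /** Reject a descriptor with zero column families up front. */
  static void checkHasFamilies(HTableDescriptor htd) throws DoNotRetryIOException {
    if (htd.getColumnFamilies().length == 0) {
      throw new DoNotRetryIOException("Table " + htd.getTableName()
          + " should have at least one column family.");
    }
  }
}
{code}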



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15453) Considering reverting HBASE-10015 - reinstance synchronized in StoreScanner

2016-03-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15194017#comment-15194017
 ] 

stack commented on HBASE-15453:
---

Yeah... remove the @Test... Add a main that calls your test on this class? You 
want to test a long scan? We have a test of a 10k scan in PE. You want to run 
longer than that? The full 5M table?

> Considering reverting HBASE-10015 - reinstance synchronized in StoreScanner
> ---
>
> Key: HBASE-15453
> URL: https://issues.apache.org/jira/browse/HBASE-15453
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
> Attachments: 15453-0.98.txt
>
>
> In HBASE-10015 back then I found that intrinsic locks (synchronized) in 
> StoreScanner are slower than explicit locks.
> I was surprised by this. To make sure I added a simple perf test and many 
> folks ran it on their machines. All found that explicit locks were faster.
> Now... I just ran that test again. On the latest JDK8 I find that now the 
> intrinsic locks are significantly faster:
> (OpenJDK Runtime Environment (build 1.8.0_72-b15))
> Explicit locks:
> 10 runs  mean:2223.6 sigma:72.29412147609237
> Intrinsic locks:
> 10 runs  mean:1865.3 sigma:32.63755505548784
> I confirmed the same with timing some Phoenix scans. We can save a bunch of 
> time by changing this back 
> Arrghhh... So maybe it's time to revert this now...?
> (Note that in trunk due to [~ram_krish]'s work, we do not lock in 
> StoreScanner anymore)
> I'll attach the perf test and a patch that changes lock to synchronized, if 
> some folks could run this on 0.98, that'd be great.
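
For anyone wanting a quick local look, a minimal single-threaded 
micro-benchmark of the same question (intrinsic vs explicit lock); this is not 
the attached perf test, the iteration count is arbitrary, and the usual JIT 
warm-up caveats apply:

{code}
import java.util.concurrent.locks.ReentrantLock;

public class LockBench {
  private static final int N = 100_000_000;
  private static long counter;
  private static final Object MONITOR = new Object();
  private static final ReentrantLock LOCK = new ReentrantLock();

  public static void main(String[] args) {
    for (int run = 0; run < 10; run++) {
      long t0 = System.nanoTime();
      for (int i = 0; i < N; i++) {
        synchronized (MONITOR) { counter++; }        // intrinsic lock
      }
      long t1 = System.nanoTime();
      for (int i = 0; i < N; i++) {
        LOCK.lock();                                 // explicit lock
        try { counter++; } finally { LOCK.unlock(); }
      }
      long t2 = System.nanoTime();
      // Print counter so the loop bodies cannot be optimized away entirely.
      System.out.printf("run %d: intrinsic=%dms explicit=%dms (counter=%d)%n",
          run, (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, counter);
    }
  }
}
{code}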



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15453) [Performance] Considering reverting HBASE-10015 - reinstance synchronized in StoreScanner

2016-03-14 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15453:
--
   Assignee: Lars Hofhansl
   Priority: Critical  (was: Major)
Component/s: Performance
 Issue Type: Improvement  (was: Bug)
Summary: [Performance] Considering reverting HBASE-10015 - reinstance 
synchronized in StoreScanner  (was: Considering reverting HBASE-10015 - 
reinstance synchronized in StoreScanner)

Marking Critical because it seems like a big benefit for a simple change. I can 
try this later.

> [Performance] Considering reverting HBASE-10015 - reinstance synchronized in 
> StoreScanner
> -
>
> Key: HBASE-15453
> URL: https://issues.apache.org/jira/browse/HBASE-15453
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Attachments: 15453-0.98.txt
>
>
> In HBASE-10015 back then I found that intrinsic locks (synchronized) in 
> StoreScanner are slower than explicit locks.
> I was surprised by this. To make sure I added a simple perf test and many 
> folks ran it on their machines. All found that explicit locks were faster.
> Now... I just ran that test again. On the latest JDK8 I find that now the 
> intrinsic locks are significantly faster:
> (OpenJDK Runtime Environment (build 1.8.0_72-b15))
> Explicit locks:
> 10 runs  mean:2223.6 sigma:72.29412147609237
> Intrinsic locks:
> 10 runs  mean:1865.3 sigma:32.63755505548784
> I confirmed the same with timing some Phoenix scans. We can save a bunch of 
> time by changing this back 
> Arrghhh... So maybe it's time to revert this now...?
> (Note that in trunk due to [~ram_krish]'s work, we do not lock in 
> StoreScanner anymore)
> I'll attach the perf test and a patch that changes lock to synchronized, if 
> some folks could run this on 0.98, that'd be great.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15430) Failed taking snapshot - Manifest proto-message too large

2016-03-14 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-15430:

Attachment: HBASE-15430-v6.patch

v6 removes the extra region/files created by SnapshotMock and uses the mock htd 
convert instead of having the TableSchema protobuf manually created. 
TestingUtil.createHRegionInfo() had an extra new HTD creation that was not 
needed; a simple new HRegionInfo was enough. Also, I added files and families to 
the manifest to reproduce a real/correct manifest.

> Failed taking snapshot - Manifest proto-message too large
> -
>
> Key: HBASE-15430
> URL: https://issues.apache.org/jira/browse/HBASE-15430
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 0.98.11
>Reporter: JunHo Cho
>Assignee: JunHo Cho
>Priority: Critical
> Fix For: 0.98.18
>
> Attachments: HBASE-15430-v4.patch, HBASE-15430-v5.patch, 
> HBASE-15430-v6.patch, hbase-15430-v1.patch, hbase-15430-v2.patch, 
> hbase-15430-v3.branch.0.98.patch, hbase-15430.patch
>
>
> The default size limit of a protobuf message is 64MB, but the size of the 
> snapshot meta is over 64MB.
> Caused by: com.google.protobuf.InvalidProtocolBufferException via Failed 
> taking snapshot { ss=snapshot_xxx table=xxx type=FLUSH } due to 
> exception:Protocol message was too large.  May be malicious.  Use 
> CodedInputStream.setSizeLimit() to increase the size 
> limit.:com.google.protobuf.InvalidProtocolBufferException: Protocol message 
> was too large.  May be malicious.  Use CodedInputStream.setSizeLimit() to 
> increase the size limit.
> at 
> org.apache.hadoop.hbase.errorhandling.ForeignExceptionDispatcher.rethrowException(ForeignExceptionDispatcher.java:83)
> at 
> org.apache.hadoop.hbase.master.snapshot.TakeSnapshotHandler.rethrowExceptionIfFailed(TakeSnapshotHandler.java:307)
> at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isSnapshotDone(SnapshotManager.java:341)
> ... 10 more
> Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol 
> message was too large.  May be malicious.  Use 
> CodedInputStream.setSizeLimit() to increase the size limit.
> at 
> com.google.protobuf.InvalidProtocolBufferException.sizeLimitExceeded(InvalidProtocolBufferException.java:110)
> at 
> com.google.protobuf.CodedInputStream.refillBuffer(CodedInputStream.java:755)
> at 
> com.google.protobuf.CodedInputStream.readRawBytes(CodedInputStream.java:811)
> at 
> com.google.protobuf.CodedInputStream.readBytes(CodedInputStream.java:329)
> at 
> org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$RegionInfo.(HBaseProtos.java:3767)
> at 
> org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$RegionInfo.(HBaseProtos.java:3699)
> at 
> org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$RegionInfo$1.parsePartialFrom(HBaseProtos.java:3815)
> at 
> org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$RegionInfo$1.parsePartialFrom(HBaseProtos.java:3810)
> at 
> com.google.protobuf.CodedInputStream.readMessage(CodedInputStream.java:309)
> at 
> org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos$SnapshotRegionManifest.(SnapshotProtos.java:1152)
> at 
> org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos$SnapshotRegionManifest.(SnapshotProtos.java:1094)
> at 
> org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos$SnapshotRegionManifest$1.parsePartialFrom(SnapshotProtos.java:1201)
> at 
> org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos$SnapshotRegionManifest$1.parsePartialFrom(SnapshotProtos.java:1196)
> at 
> com.google.protobuf.CodedInputStream.readMessage(CodedInputStream.java:309)
> at 
> org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos$SnapshotDataManifest.(SnapshotProtos.java:3858)
> at 
> org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos$SnapshotDataManifest.(SnapshotProtos.java:3792)
> at 
> org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos$SnapshotDataManifest$1.parsePartialFrom(SnapshotProtos.java:3894)
> at 
> org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos$SnapshotDataManifest$1.parsePartialFrom(SnapshotProtos.java:3889)
> at 
> com.google.protobuf.AbstractParser.parsePartialFrom(AbstractParser.java:200)
> at 
> com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:217)
> at 
> com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:223)
> at 
> com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:49)
> at 
> org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos$SnapshotDataManifest.parseFrom(SnapshotProtos.java:4094)
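
A sketch of the workaround the exception message itself points at: parse the 
manifest through a CodedInputStream whose limit has been raised above the 64MB 
default. The 128MB figure and the method shape are illustrative only:

{code}
import java.io.IOException;
import java.io.InputStream;
import com.google.protobuf.CodedInputStream;
import org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos;

final class ManifestReader {
  private ManifestReader() {}

  static SnapshotProtos.SnapshotDataManifest read(InputStream in) throws IOException {
    CodedInputStream cis = CodedInputStream.newInstance(in);
    cis.setSizeLimit(128 * 1024 * 1024); // lift the 64MB default
    return SnapshotProtos.SnapshotDataManifest.parseFrom(cis);
  }
}
{code}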

[jira] [Created] (HBASE-15458) Fix bugs that Infer finds in HBase

2016-03-14 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-15458:
-

 Summary: Fix bugs that Infer finds in HBase
 Key: HBASE-15458
 URL: https://issues.apache.org/jira/browse/HBASE-15458
 Project: HBase
  Issue Type: Bug
Reporter: Elliott Clark


http://fbinfer.com/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15459) Fix infer issues in hbase-server

2016-03-14 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-15459:
-

 Summary: Fix infer issues in hbase-server
 Key: HBASE-15459
 URL: https://issues.apache.org/jira/browse/HBASE-15459
 Project: HBase
  Issue Type: Sub-task
Reporter: Elliott Clark






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15461) ref guide has bad link to user auth blog

2016-03-14 Thread Sean Busbey (JIRA)
Sean Busbey created HBASE-15461:
---

 Summary: ref guide has bad link to user auth blog
 Key: HBASE-15461
 URL: https://issues.apache.org/jira/browse/HBASE-15461
 Project: HBase
  Issue Type: Bug
  Components: website
Reporter: Sean Busbey
Assignee: Sean Busbey


The ref guide section on "Secure Client Access to Apache HBase" starts with a 
link to a blog post from Matteo, but the link is broken.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15460) Fix infer issues in hbase-common

2016-03-14 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-15460:
-

 Summary: Fix infer issues in hbase-common
 Key: HBASE-15460
 URL: https://issues.apache.org/jira/browse/HBASE-15460
 Project: HBase
  Issue Type: Sub-task
Reporter: Elliott Clark






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14845) hbase-server leaks jdk.tools dependency to mapreduce consumers

2016-03-14 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15194059#comment-15194059
 ] 

Sean Busbey commented on HBASE-14845:
-

I'm having a bit of a time getting the test-scope jdk.tools to not leak out for 
downstream folks. Probably best to bump this unless we're waiting for something 
else or the jar is ending up in our assembly.

> hbase-server leaks jdk.tools dependency to mapreduce consumers
> --
>
> Key: HBASE-14845
> URL: https://issues.apache.org/jira/browse/HBASE-14845
> Project: HBase
>  Issue Type: Bug
>  Components: build, dependencies
>Affects Versions: 2.0.0, 0.98.14, 1.2.0, 1.1.2, 1.3.0, 1.0.3
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.4, 1.0.4, 0.98.18
>
> Attachments: HBASE-14845.1.patch
>
>
> HBASE-13963 / HBASE-14844 take care of removing leaks of our dependency on 
> jdk-tools.
> Until we move the mapreduce support classes out of hbase-server 
> (HBASE-11843), we need to also avoid leaking the dependency from that module.
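
Until then, a downstream pom can exclude the leaked dependency explicitly; the 
exclusion below is illustrative, not a guaranteed fix for every build:

{code}
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-server</artifactId>
  <version>${hbase.version}</version>
  <exclusions>
    <exclusion>
      <groupId>jdk.tools</groupId>
      <artifactId>jdk.tools</artifactId>
    </exclusion>
  </exclusions>
</dependency>
{code}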



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15461) ref guide has bad link to user auth blog

2016-03-14 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-15461:

Status: Patch Available  (was: Open)

> ref guide has bad link to user auth blog
> 
>
> Key: HBASE-15461
> URL: https://issues.apache.org/jira/browse/HBASE-15461
> Project: HBase
>  Issue Type: Bug
>  Components: website
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HBASE-15461.1.patch
>
>
> The ref guide section on "Secure Client Access to Apache HBase" starts with a 
> link to a blog post from Matteo, but the link is broken.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15461) ref guide has bad link to user auth blog

2016-03-14 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-15461:

Attachment: HBASE-15461.1.patch

-01

  - updates link for destination site redesign.

> ref guide has bad link to user auth blog
> 
>
> Key: HBASE-15461
> URL: https://issues.apache.org/jira/browse/HBASE-15461
> Project: HBase
>  Issue Type: Bug
>  Components: website
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HBASE-15461.1.patch
>
>
> The ref guide section on "Secure Client Access to Apache HBase" starts with a 
> link to a blog post from Matteo, but the link is broken.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15461) ref guide has bad link to user auth blog

2016-03-14 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-15461:

Attachment: HBASE-15461.2.patch

-02
  - update additional out of date links to cloudera blogs.

> ref guide has bad link to user auth blog
> 
>
> Key: HBASE-15461
> URL: https://issues.apache.org/jira/browse/HBASE-15461
> Project: HBase
>  Issue Type: Bug
>  Components: website
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HBASE-15461.1.patch, HBASE-15461.2.patch
>
>
> The ref guide section on "Secure Client Access to Apache HBase" starts with a 
> link to a blog post from Matteo, but the link is broken.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15461) ref guide has bad links to blogs originally posted on cloudera website

2016-03-14 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-15461:

Summary: ref guide has bad links to blogs originally posted on cloudera 
website  (was: ref guide has bad link to user auth blog)

> ref guide has bad links to blogs originally posted on cloudera website
> --
>
> Key: HBASE-15461
> URL: https://issues.apache.org/jira/browse/HBASE-15461
> Project: HBase
>  Issue Type: Bug
>  Components: website
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HBASE-15461.1.patch, HBASE-15461.2.patch
>
>
> The ref guide section on "Secure Client Access to Apache HBase" starts with a 
> link to a blog post from Matteo, but the link is broken.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15462) Docker-based builds failing...

2016-03-14 Thread stack (JIRA)
stack created HBASE-15462:
-

 Summary: Docker-based builds failing...
 Key: HBASE-15462
 URL: https://issues.apache.org/jira/browse/HBASE-15462
 Project: HBase
  Issue Type: Bug
  Components: build
Reporter: stack


Our hadoopqa builds have been failing the last few days setting up the docker 
container. Failure looks like this from console:

{code}
Removing intermediate container 3ae7349600a3
Step 16 : RUN gem install rubocop --no-ri --no-rdoc
 ---> Running in 7c3bedd253b0
Invalid gemspec in 
[/var/lib/gems/1.9.1/specifications/unicode-display_width-1.0.2.gemspec]: 
Illformed requirement ["< 3.0.0, >= 1.9.3"]
Invalid gemspec in 
[/var/lib/gems/1.9.1/specifications/unicode-display_width-1.0.2.gemspec]: 
Illformed requirement ["< 3.0.0, >= 1.9.3"]
Invalid gemspec in 
[/var/lib/gems/1.9.1/specifications/unicode-display_width-1.0.2.gemspec]: 
Illformed requirement ["< 3.0.0, >= 1.9.3"]
ERROR:  Error installing rubocop:
rubocop requires unicode-display_width (>= 1.0.1, ~> 1.0)
The command '/bin/sh -c gem install rubocop --no-ri --no-rdoc' returned a 
non-zero code: 1
{code}

Looks fixable?

I changed the config on hadoopqa to not use docker builds in the meantime.
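
One possible shape for a fix, untested here: the image's old RubyGems cannot 
parse compound requirements like "< 3.0.0, >= 1.9.3", so updating RubyGems 
before installing rubocop may get past it:

{code}
# Dockerfile sketch, not the actual fix:
RUN gem update --system \
 && gem install rubocop --no-ri --no-rdoc
{code}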



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15460) Fix infer issues in hbase-common

2016-03-14 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-15460:
--
Attachment: hbase-common.infer

> Fix infer issues in hbase-common
> 
>
> Key: HBASE-15460
> URL: https://issues.apache.org/jira/browse/HBASE-15460
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Elliott Clark
> Attachments: hbase-common.infer
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-15460) Fix infer issues in hbase-common

2016-03-14 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark reassigned HBASE-15460:
-

Assignee: Elliott Clark

> Fix infer issues in hbase-common
> 
>
> Key: HBASE-15460
> URL: https://issues.apache.org/jira/browse/HBASE-15460
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: hbase-common.infer
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15459) Fix infer issues in hbase-server

2016-03-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15194088#comment-15194088
 ] 

stack commented on HBASE-15459:
---

Looks great. Good stuff.

> Fix infer issues in hbase-server
> 
>
> Key: HBASE-15459
> URL: https://issues.apache.org/jira/browse/HBASE-15459
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Elliott Clark
> Attachments: hbase-server.infer
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15459) Fix infer issues in hbase-server

2016-03-14 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-15459:
--
Attachment: hbase-server.infer

Here's what infer found while running it on hbase-server.

> Fix infer issues in hbase-server
> 
>
> Key: HBASE-15459
> URL: https://issues.apache.org/jira/browse/HBASE-15459
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Elliott Clark
> Attachments: hbase-server.infer
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15411) Rewrite backup with Procedure V2

2016-03-14 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15411:
---
Attachment: 15411-v12.txt

Patch v12 accommodates the BackupUtil refactoring over in HBASE-14030

> Rewrite backup with Procedure V2
> 
>
> Key: HBASE-15411
> URL: https://issues.apache.org/jira/browse/HBASE-15411
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 15411-v1.txt, 15411-v11.txt, 15411-v12.txt, 
> 15411-v3.txt, 15411-v5.txt, 15411-v6.txt, 15411-v7.txt, 15411-v9.txt, 
> FullTableBackupProcedure.java
>
>
> Currently full / incremental backup is driven by BackupHandler (see call() 
> method for flow).
> This issue is to rewrite the flow using Procedure V2.
> States (enum) for full / incremental backup would be introduced in 
> Backup.proto which correspond to the steps performed in BackupHandler#call().
> executeFromState() would pace the backup based on the current state.
> serializeStateData() / deserializeStateData() would be used to persist state 
> into procedure WAL.
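
For illustration, a skeleton in the shape described; the class name, state 
enum, and step names are made up (the real states would live in Backup.proto), 
and the other required overrides (rollbackState, getState, getStateId, 
getInitialState, serializeStateData, deserializeStateData) are omitted for 
brevity:

{code}
public class FullBackupProcedure
    extends StateMachineProcedure<MasterProcedureEnv, FullBackupState> {

  @Override
  protected Flow executeFromState(MasterProcedureEnv env, FullBackupState state) {
    switch (state) {
      case SNAPSHOT_TABLES:
        // ... do the work for this step ...
        setNextState(FullBackupState.EXPORT_SNAPSHOTS);
        return Flow.HAS_MORE_STATE; // state is persisted; a restart resumes here
      case EXPORT_SNAPSHOTS:
        // ... final step ...
        return Flow.NO_MORE_STATE;
      default:
        throw new UnsupportedOperationException("unhandled state=" + state);
    }
  }
}
{code}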



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15456) CreateTableProcedure/ModifyTableProcedure needs to fail when there is no family in descriptor

2016-03-14 Thread Stephen Yuan Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15194165#comment-15194165
 ] 

Stephen Yuan Jiang commented on HBASE-15456:


[~huaxiang], could you add a UT in TestCreateTableProcedure.java and 
TestModifyTableProcedure.java that uses a 0-CF table descriptor? 

> CreateTableProcedure/ModifyTableProcedure needs to fail when there is no 
> family in descriptor
> -
>
> Key: HBASE-15456
> URL: https://issues.apache.org/jira/browse/HBASE-15456
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
> Attachments: HBASE-15456-v001.patch
>
>
> If there is only one family in the table, DeleteColumnFamilyProcedure will 
> fail. 
> Currently, when hbase.table.sanity.checks is set to false, hbase master logs 
> a warning and CreateTableProcedure/ModifyTableProcedure will succeed. 
> This behavior is not consistent with DeleteColumnFamilyProcedure's. 
> Another point: before HBASE-13145, PeriodicMemstoreFlusher would run into the 
> following exception. lastStoreFlushTimeMap is populated per family; if 
> there is no family in the table, there is no entry in lastStoreFlushTimeMap.
> {code}
> 16/02/01 11:14:26 ERROR regionserver.HRegionServer$PeriodicMemstoreFlusher: 
> Caught exception 
> java.util.NoSuchElementException 
> at 
> java.util.concurrent.ConcurrentHashMap$HashIterator.nextEntry(ConcurrentHashMap.java:1354)
>  
> at 
> java.util.concurrent.ConcurrentHashMap$ValueIterator.next(ConcurrentHashMap.java:1384)
>  
> at java.util.Collections.min(Collections.java:628) 
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getEarliestFlushTimeForAllStores(HRegion.java:1572)
>  
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.shouldFlush(HRegion.java:1904) 
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer$PeriodicMemstoreFlusher.chore(HRegionServer.java:1509)
>  
> at org.apache.hadoop.hbase.Chore.run(Chore.java:87) 
> at java.lang.Thread.run(Thread.java:745) 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15463) Region normalizer should check whether split/merge is enabled

2016-03-14 Thread Ted Yu (JIRA)
Ted Yu created HBASE-15463:
--

 Summary: Region normalizer should check whether split/merge is 
enabled
 Key: HBASE-15463
 URL: https://issues.apache.org/jira/browse/HBASE-15463
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Minor


HBASE-15128 added switch for disabling split / merge.

When split / merge switch is turned off, region normalizer should not perform 
split / merge operation, respectively.
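
For illustration, a sketch of the proposed guard inside the normalizer loop; 
the master accessor name is assumed (HBASE-15128 added the SPLIT/MERGE 
MasterSwitchType switches), and this is not the attached patch:

{code}
// Skip normalization plans whose corresponding cluster switch is off.
for (NormalizationPlan plan : plans) {
  if (plan instanceof SplitNormalizationPlan
      && !master.isSplitOrMergeEnabled(MasterSwitchType.SPLIT)) {
    continue; // split switch off: don't split via the normalizer
  }
  if (plan instanceof MergeNormalizationPlan
      && !master.isSplitOrMergeEnabled(MasterSwitchType.MERGE)) {
    continue; // merge switch off: don't merge via the normalizer
  }
  plan.execute(admin);
}
{code}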



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15463) Region normalizer should check whether split/merge is enabled

2016-03-14 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15463:
---
Attachment: HBASE-15463.v1.patch

> Region normalizer should check whether split/merge is enabled
> -
>
> Key: HBASE-15463
> URL: https://issues.apache.org/jira/browse/HBASE-15463
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: HBASE-15463.v1.patch
>
>
> HBASE-15128 added switch for disabling split / merge.
> When split / merge switch is turned off, region normalizer should not perform 
> split / merge operation, respectively.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15463) Region normalizer should check whether split/merge is enabled

2016-03-14 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15463:
---
Status: Patch Available  (was: Open)

> Region normalizer should check whether split/merge is enabled
> -
>
> Key: HBASE-15463
> URL: https://issues.apache.org/jira/browse/HBASE-15463
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: HBASE-15463.v1.patch
>
>
> HBASE-15128 added switch for disabling split / merge.
> When split / merge switch is turned off, region normalizer should not perform 
> split / merge operation, respectively.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

