[jira] [Updated] (HBASE-18160) Fix incorrect logic in FilterList.filterKeyValue

2017-07-19 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-18160:
-
Issue Type: Sub-task  (was: Bug)
Parent: HBASE-18410

> Fix incorrect logic in FilterList.filterKeyValue
> -
>
> Key: HBASE-18160
> URL: https://issues.apache.org/jira/browse/HBASE-18160
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
> Attachments: HBASE-18160.branch-1.1.v1.patch, 
> HBASE-18160.branch-1.v1.patch, HBASE-18160.v1.patch, HBASE-18160.v2.patch, 
> HBASE-18160.v2.patch
>
>
> As HBASE-17678 said, there are two problems in the FilterList.filterKeyValue 
> implementation: 
> 1.  FilterList did not consider the INCLUDE_AND_SEEK_NEXT_ROW case (it seems 
> INCLUDE_AND_SEEK_NEXT_ROW is a newly added code, and the dev forgot to 
> consider FilterList). So if a user uses INCLUDE_AND_SEEK_NEXT_ROW in his own 
> Filter wrapped by a FilterList, it'll throw an 
> IllegalStateException("Received code is not valid."). 
> 2.  For a FilterList with MUST_PASS_ONE, if filter-A in the filter list returns 
> INCLUDE and filter-B in the filter list returns INCLUDE_AND_NEXT_COL, the 
> FilterList will finally return INCLUDE_AND_NEXT_COL. According to the 
> minimal step rule, that's incorrect. (A filter list with MUST_PASS_ONE chooses 
> the minimal step among the filters in the filter list. Let's call it: the 
> Minimal Step Rule; see the sketch below.)
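>
> A minimal sketch of that rule, assuming a hypothetical ranking of the return 
> codes by how far they skip (the ordering is illustrative, not the exact one 
> HBase uses):
> {code:java}
> enum ReturnCode { INCLUDE, INCLUDE_AND_NEXT_COL, INCLUDE_AND_SEEK_NEXT_ROW }
>
> // Rank each code by how far it skips ahead in the scan.
> static int step(ReturnCode rc) {
>   switch (rc) {
>     case INCLUDE:                   return 0; // examine the very next cell
>     case INCLUDE_AND_NEXT_COL:      return 1; // jump to the next column
>     case INCLUDE_AND_SEEK_NEXT_ROW: return 2; // jump to the next row
>     default: throw new AssertionError();
>   }
> }
>
> // MUST_PASS_ONE keeps the code that skips the least, so a filter that still
> // wants the upcoming cells is not starved by a more aggressive one.
> static ReturnCode mergeOr(ReturnCode a, ReturnCode b) {
>   return step(a) <= step(b) ? a : b;
> }
> {code}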



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18368) Filters with OR do not work

2017-07-19 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16092683#comment-16092683
 ] 

Guanghao Zhang commented on HBASE-18368:


bq. when rowCountFilter returns NEXT_ROW for kv1, we still pass the same row 
with a different family (kv2) to rowCountFilter
[~openinx] Even if we don't use a filter list, we still have this problem, right?

> Filters with OR do not work
> ---
>
> Key: HBASE-18368
> URL: https://issues.apache.org/jira/browse/HBASE-18368
> Project: HBase
>  Issue Type: Sub-task
>  Components: Filters
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: Peter Somogyi
>Assignee: Allan Yang
>Priority: Critical
> Attachments: HBASE-18368.branch-1.patch, 
> HBASE-18368.branch-1.v2.patch, HBASE-18368.branch-1.v3.patch, 
> HBASE-18368.patch
>
>
> Scan gives back incomplete list if multiple filters are combined with OR / 
> MUST_PASS_ONE.
> Using 2 FamilyFilters in a FilterList using MUST_PASS_ONE operator will give 
> back results for only the first Filter.
> {code:java|title=Test code}
>   @Test
>   public void testFiltersWithOr() throws Exception {
>     TableName tn = TableName.valueOf("MyTest");
>     Table table = utility.createTable(tn, new String[] { "cf1", "cf2" });
>     byte[] CF1 = Bytes.toBytes("cf1");
>     byte[] CF2 = Bytes.toBytes("cf2");
>     Put put1 = new Put(Bytes.toBytes("0"));
>     put1.addColumn(CF1, Bytes.toBytes("col_a"), Bytes.toBytes(0));
>     table.put(put1);
>     Put put2 = new Put(Bytes.toBytes("0"));
>     put2.addColumn(CF2, Bytes.toBytes("col_b"), Bytes.toBytes(0));
>     table.put(put2);
>     FamilyFilter filterCF1 =
>         new FamilyFilter(CompareFilter.CompareOp.EQUAL, new BinaryComparator(CF1));
>     FamilyFilter filterCF2 =
>         new FamilyFilter(CompareFilter.CompareOp.EQUAL, new BinaryComparator(CF2));
>     FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ONE);
>     filterList.addFilter(filterCF1);
>     filterList.addFilter(filterCF2);
>     Scan scan = new Scan();
>     scan.setFilter(filterList);
>     ResultScanner scanner = table.getScanner(scan);
>     System.out.println(filterList);
>     for (Result rr = scanner.next(); rr != null; rr = scanner.next()) {
>       System.out.println(rr);
>     }
>   }
> {code}
> {noformat:title=Output}
> FilterList OR (2/2): [FamilyFilter (EQUAL, cf1), FamilyFilter (EQUAL, cf2)]
> keyvalues={0/cf1:col_a/1499852754957/Put/vlen=4/seqid=0}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-18411) Dividing FilterList into two separate sub-classes: FilterListWithOR, FilterListWithAND

2017-07-19 Thread Zheng Hu (JIRA)
Zheng Hu created HBASE-18411:


 Summary: Dividing FilterList into two separate sub-classes: 
FilterListWithOR, FilterListWithAND
 Key: HBASE-18411
 URL: https://issues.apache.org/jira/browse/HBASE-18411
 Project: HBase
  Issue Type: Sub-task
Reporter: Zheng Hu






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-18412) [Shell] Support unset of a list of table configuration

2017-07-19 Thread Yun Zhao (JIRA)
Yun Zhao created HBASE-18412:


 Summary: [Shell] Support unset of a list of table configuration
 Key: HBASE-18412
 URL: https://issues.apache.org/jira/browse/HBASE-18412
 Project: HBase
  Issue Type: Improvement
Reporter: Yun Zhao
Assignee: Yun Zhao
Priority: Minor


Add a method 'table_conf_unset' in admin.rb to reset a table configuration to 
the system default.
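
A hypothetical invocation, assuming the new method follows the alter syntax of 
the existing table_att_unset command (the METHOD/NAME syntax here is an 
assumption, not taken from the patch):
{noformat}
hbase> alter 'my_table', METHOD => 'table_conf_unset', NAME => 'hbase.hregion.majorcompaction'
{noformat}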



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18412) [Shell] Support unset of a list of table configuration

2017-07-19 Thread Yun Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Zhao updated HBASE-18412:
-
Attachment: HBASE-18412.master.001.patch

> [Shell] Support unset of a list of table configuration
> --
>
> Key: HBASE-18412
> URL: https://issues.apache.org/jira/browse/HBASE-18412
> Project: HBase
>  Issue Type: Improvement
>Reporter: Yun Zhao
>Assignee: Yun Zhao
>Priority: Minor
> Attachments: HBASE-18412.master.001.patch
>
>
> Add a method 'table_conf_unset' in admin.rb to reset a table configuration to 
> the system default.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18412) [Shell] Support unset of a list of table configuration

2017-07-19 Thread Yun Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Zhao updated HBASE-18412:
-
Status: Patch Available  (was: Open)

> [Shell] Support unset of a list of table configuration
> --
>
> Key: HBASE-18412
> URL: https://issues.apache.org/jira/browse/HBASE-18412
> Project: HBase
>  Issue Type: Improvement
>Reporter: Yun Zhao
>Assignee: Yun Zhao
>Priority: Minor
> Attachments: HBASE-18412.master.001.patch
>
>
> Add a method 'table_conf_unset' in admin.rb to reset a table configuration to 
> the system default.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18412) [Shell] Support unset of a list of table configuration

2017-07-19 Thread Yun Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16092688#comment-16092688
 ] 

Yun Zhao commented on HBASE-18412:
--

The patch does not support column family configuration :(

> [Shell] Support unset of a list of table configuration
> --
>
> Key: HBASE-18412
> URL: https://issues.apache.org/jira/browse/HBASE-18412
> Project: HBase
>  Issue Type: Improvement
>Reporter: Yun Zhao
>Assignee: Yun Zhao
>Priority: Minor
> Attachments: HBASE-18412.master.001.patch
>
>
> Add a method 'table_conf_unset' in admin.rb to reset a table configuration to 
> the system default.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18390) Sleep too long when finding region location failed

2017-07-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16092687#comment-16092687
 ] 

Hudson commented on HBASE-18390:


FAILURE: Integrated in Jenkins build HBase-2.0 #197 (See 
[https://builds.apache.org/job/HBase-2.0/197/])
HBASE-18390 Sleep too long when finding region location failed (yangzhe1991: 
rev fc362f69cba3aaee3ddef832f9d06a1ae5293836)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionUtils.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionAdminServiceCallable.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestConnectionUtils.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionServerCallable.java


> Sleep too long when finding region location failed
> --
>
> Key: HBASE-18390
> URL: https://issues.apache.org/jira/browse/HBASE-18390
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.1, 1.2.6, 1.1.11, 2.0.0-alpha-1
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 3.0.0, 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-alpha-2, 1.1.12
>
> Attachments: HBASE-18390.v01.patch, HBASE-18390.v02.patch, 
> HBASE-18390.v03.patch
>
>
> If RegionServerCallable#prepare fails in getRegionLocation, the location 
> in this callable object is null, and we sleep before retrying. However, 
> when the location is null we sleep at least 10 seconds, so the request 
> fails directly if the operation timeout is less than 10 seconds. I think 
> there is no need to keep the MIN_WAIT_DEAD_SERVER logic; using the backoff 
> sleeping logic is OK for most cases.
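>
> A rough sketch of the intended behavior, not the actual patch: drop the 
> 10-second floor and rely on plain exponential backoff (names are made up):
> {code:java}
> // basePause would be hbase.client.pause; tries is the retry count so far.
> static long backoffMillis(long basePause, int tries) {
>   long multiplier = 1L << Math.min(tries, 30); // cap the shift to avoid overflow
>   return basePause * multiplier;               // e.g. 100ms, 200ms, 400ms, ...
> }
> {code}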



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18368) FilterList with multiple FamilyFilters concatenated by OR does not work.

2017-07-19 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-18368:
-
Summary: FilterList with multiple FamilyFilters concatenated by OR does not 
work.  (was: Filters with OR do not work)

> FilterList with multiple FamilyFilters concatenated by OR does not work.
> 
>
> Key: HBASE-18368
> URL: https://issues.apache.org/jira/browse/HBASE-18368
> Project: HBase
>  Issue Type: Sub-task
>  Components: Filters
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: Peter Somogyi
>Assignee: Allan Yang
>Priority: Critical
> Attachments: HBASE-18368.branch-1.patch, 
> HBASE-18368.branch-1.v2.patch, HBASE-18368.branch-1.v3.patch, 
> HBASE-18368.patch
>
>
> Scan gives back incomplete list if multiple filters are combined with OR / 
> MUST_PASS_ONE.
> Using 2 FamilyFilters in a FilterList using MUST_PASS_ONE operator will give 
> back results for only the first Filter.
> {code:java|title=Test code}
>   @Test
>   public void testFiltersWithOr() throws Exception {
>     TableName tn = TableName.valueOf("MyTest");
>     Table table = utility.createTable(tn, new String[] { "cf1", "cf2" });
>     byte[] CF1 = Bytes.toBytes("cf1");
>     byte[] CF2 = Bytes.toBytes("cf2");
>     Put put1 = new Put(Bytes.toBytes("0"));
>     put1.addColumn(CF1, Bytes.toBytes("col_a"), Bytes.toBytes(0));
>     table.put(put1);
>     Put put2 = new Put(Bytes.toBytes("0"));
>     put2.addColumn(CF2, Bytes.toBytes("col_b"), Bytes.toBytes(0));
>     table.put(put2);
>     FamilyFilter filterCF1 =
>         new FamilyFilter(CompareFilter.CompareOp.EQUAL, new BinaryComparator(CF1));
>     FamilyFilter filterCF2 =
>         new FamilyFilter(CompareFilter.CompareOp.EQUAL, new BinaryComparator(CF2));
>     FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ONE);
>     filterList.addFilter(filterCF1);
>     filterList.addFilter(filterCF2);
>     Scan scan = new Scan();
>     scan.setFilter(filterList);
>     ResultScanner scanner = table.getScanner(scan);
>     System.out.println(filterList);
>     for (Result rr = scanner.next(); rr != null; rr = scanner.next()) {
>       System.out.println(rr);
>     }
>   }
> {code}
> {noformat:title=Output}
> FilterList OR (2/2): [FamilyFilter (EQUAL, cf1), FamilyFilter (EQUAL, cf2)]
> keyvalues={0/cf1:col_a/1499852754957/Put/vlen=4/seqid=0}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18375) The pool chunks from ChunkCreator are deallocated while in pool because there is no reference to them

2017-07-19 Thread Anastasia Braginsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16092737#comment-16092737
 ] 

Anastasia Braginsky commented on HBASE-18375:
-

Hey Ram!

Thanks for your questions; they made me rethink the process once again. 

I have not yet presented in this JIRA one more important problem introduced in 
HBASE-18010, where flattening became a creation pattern. The pipeline version 
wasn't increased while flattening, so it allowed a simultaneous change of the 
pipeline from the snapshot and from the flattening. I wanted to add this fix in 
another JIRA, but as I see it is all related, I will add that fix here as well 
and publish a new patch soon.

bq. So this chunk that we return from weak.get() at this point of time has a 
reference.
If you are asking about the exact scenario happening in my failure, there the 
reference is removed from the strong map. But generally a pool chunk can also 
be removed from the weak map. And yes, the pool chunk still has the reference 
at this point in time (weak.get() returns a strong reference). 

bq. But when we are actually putting it to the reclaimedChunks, the chunk 
becomes null. 
In my scenario, I believe the pool chunk still has a reference there as well, 
and it is added to reclaimedChunks in its "correct" form. But even if the chunk 
is null at that point and we don't add it back to reclaimedChunks, that is 
still not the desired behavior, because we deallocate chunks from the pool and 
decrease their number...

bq. So when we do poll from reclaimedchunks we get a null ref and we just try 
to use that null ref and add it to the weakRefMap again and so we are working 
with null reference through out as per
Again, I see the chunk is polled from reclaimedChunks while it is still OK, but 
released later when it is already in the weak-reference map. It is a subtle 
scenario, but it causes a problem after all. Always keeping pool chunks in the 
strong map is much cleaner and thus safer.

bq. So in case of CellChunkMap atleast since we are sure to have a strong ref 
is it better to always return the strong ref in removeChunk() code rather than 
the weak ref (though it is available)?
Sorry, I do not see why it would be better to always return the strong ref in 
removeChunk()...? Currently removeChunk() returns a strong reference anyway.

> The pool chunks from ChunkCreator are deallocated while in pool because there 
> is no reference to them
> -
>
> Key: HBASE-18375
> URL: https://issues.apache.org/jira/browse/HBASE-18375
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0-alpha-1
>Reporter: Anastasia Braginsky
>Priority: Critical
> Fix For: 2.0.0, 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18375-V01.patch, HBASE-18375-V02.patch, 
> HBASE-18375-V03.patch
>
>
> Because the MSLAB list of chunks was changed to a list of chunk IDs, the 
> chunks returned to the pool can be deallocated by the JVM because there is no 
> reference to them. The solution is to protect pool chunks from GC via the 
> strong map of ChunkCreator introduced by HBASE-18010. Will prepare the patch 
> today.
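>
> An illustrative sketch of that protection, with made-up names (not the real 
> ChunkCreator code): a pooled chunk is always pinned by a strong map, and is 
> demoted to a weak reference only once it is handed out:
> {code:java}
> import java.lang.ref.WeakReference;
> import java.util.Map;
> import java.util.Queue;
> import java.util.concurrent.ConcurrentHashMap;
> import java.util.concurrent.ConcurrentLinkedQueue;
>
> class ChunkPoolSketch {
>   static class Chunk {
>     final int id;
>     Chunk(int id) { this.id = id; }
>   }
>
>   private final Map<Integer, Chunk> strongMap = new ConcurrentHashMap<>();
>   private final Map<Integer, WeakReference<Chunk>> weakMap = new ConcurrentHashMap<>();
>   private final Queue<Integer> reclaimedChunks = new ConcurrentLinkedQueue<>();
>
>   // Back to the pool: pin with a strong reference BEFORE dropping the weak
>   // one, so the chunk is never only weakly reachable while pooled.
>   void returnToPool(Chunk c) {
>     strongMap.put(c.id, c);
>     weakMap.remove(c.id);
>     reclaimedChunks.offer(c.id);
>   }
>
>   // Out of the pool: the MSLAB holds it strongly, so a weak reference suffices.
>   Chunk takeFromPool() {
>     Integer id = reclaimedChunks.poll();
>     if (id == null) {
>       return null;
>     }
>     Chunk c = strongMap.remove(id);
>     if (c != null) {
>       weakMap.put(id, new WeakReference<>(c));
>     }
>     return c;
>   }
> }
> {code}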



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18399) Files in a snapshot can go missing even after the snapshot is taken successfully

2017-07-19 Thread Ashu Pachauri (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16092738#comment-16092738
 ] 

Ashu Pachauri commented on HBASE-18399:
---

bq. Ya so the file is in archive. The remaining steps are irrespective of the 
store file accounting feature right?
Yes and no. Yes because the remaining steps have nothing to do with store file 
accounting. No, because the sole reason why a store file is moved to archive 
while in the middle of a snapshot operation is the lack of proper locking 
between the snapshot operation and finalizing the compaction results.

> Files in a snapshot can go missing even after the snapshot is taken 
> successfully
> 
>
> Key: HBASE-18399
> URL: https://issues.apache.org/jira/browse/HBASE-18399
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Reporter: Ashu Pachauri
> Fix For: 1.3.2
>
>
> Files go missing after the snapshot is taken (only applicable when the TTL for 
> the TimeToLiveHFileCleaner is small, like the default 5 mins):
> * SnapshotManifest#addRegion visits store_file_A, but has not yet written 
> it to the manifest.
> * store_file_A is marked as compacted away and HFileArchiver moves the 
> file to archive.
> * HFileCleaner comes in and sees store_file_A in archive. It adds the 
> file to the list of files that might need to be cleaned up.
> * HFileCleaner's SnapshotHFileCleaner plugin kicks in.
> * SnapshotFileCache#getUnreferencedFiles also says that store_file_A is 
> unreferenced and should be cleaned up (it has not yet been written to the 
> manifest).
> * SnapshotHFileCleaner is still going through the rest of the files in 
> archive.
> * The store_file_A reference is created and written to the snapshot manifest.
> * Snapshot verification runs, sees that store_file_A is present in 
> archive, and thus the verification passes.
> * Now the SnapshotHFileCleaner finishes and TimeToLiveHFileCleaner is 
> triggered. If the TTL has passed since store_file_A was moved to archive 
> (SnapshotHFileCleaner could easily take several minutes to go through the rest 
> of the files), TimeToLiveHFileCleaner also marks the file as deletable.
> * Since all cleaner plugins marked the file as deletable, store_file_A is 
> deleted.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Reopened] (HBASE-18402) Thrift2 should support DeleteFamily and DeleteFamilyVersion type

2017-07-19 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu reopened HBASE-18402:
--

Seems like ThriftUtilities#deleteFromThrift only supports 3 delete types: 
delete/deleteColumn/deleteFamily, but NOT deleteFamilyVersion. :( 

So I reopened the issue.

{code}
public static Delete deleteFromThrift(TDelete in) {
  Delete out;

  if (in.isSetColumns()) {
    out = new Delete(in.getRow());
    for (TColumn column : in.getColumns()) {
      if (column.isSetQualifier()) {
        if (column.isSetTimestamp()) {
          if (in.isSetDeleteType() &&
              in.getDeleteType().equals(TDeleteType.DELETE_COLUMNS))
            out.addColumns(column.getFamily(), column.getQualifier(),
                column.getTimestamp());
          else
            out.addColumn(column.getFamily(), column.getQualifier(),
                column.getTimestamp());
        } else {
          if (in.isSetDeleteType() &&
              in.getDeleteType().equals(TDeleteType.DELETE_COLUMNS))
            out.addColumns(column.getFamily(), column.getQualifier());
          else
            out.addColumn(column.getFamily(), column.getQualifier());
        }
      } else {
        if (column.isSetTimestamp()) {
          out.addFamily(column.getFamily(), column.getTimestamp());
        } else {
          out.addFamily(column.getFamily());
        }
      }
    }
  } else {
    if (in.isSetTimestamp()) {
      out = new Delete(in.getRow(), in.getTimestamp());
    } else {
      out = new Delete(in.getRow());
    }
  }

  if (in.isSetAttributes()) {
    addAttributes(out, in.getAttributes());
  }

  if (in.isSetDurability()) {
    out.setDurability(durabilityFromThrift(in.getDurability()));
  }

  return out;
}
{code}
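
A rough idea of what the missing handling could look like once the new types 
exist (the TDeleteType values here are assumptions; addFamily and 
addFamilyVersion are existing Delete APIs):
{code}
// Inside the per-column loop above, when no qualifier is set:
if (in.isSetDeleteType() &&
    in.getDeleteType().equals(TDeleteType.DELETE_FAMILY_VERSION) &&
    column.isSetTimestamp()) {
  // delete exactly the cells of this family at the given timestamp
  out.addFamilyVersion(column.getFamily(), column.getTimestamp());
} else if (column.isSetTimestamp()) {
  out.addFamily(column.getFamily(), column.getTimestamp());
} else {
  out.addFamily(column.getFamily());
}
{code}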

> Thrift2 should support DeleteFamily and DeleteFamilyVersion type
> -
>
> Key: HBASE-18402
> URL: https://issues.apache.org/jira/browse/HBASE-18402
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 2.0.0-alpha-1
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>
> Currently, our thrift2 only supports two delete types. Actually, there are 
> four delete types, and we should support the other two: DeleteFamily 
> and DeleteFamilyVersion. 
> {code}
> /**
>  * Specify type of delete:
>  *  - DELETE_COLUMN means exactly one version will be removed,
>  *  - DELETE_COLUMNS means previous versions will also be removed.
>  */
> enum TDeleteType {
>   DELETE_COLUMN = 0,
>   DELETE_COLUMNS = 1
> }
> {code} 
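>
> A possible extension of the enum (the numeric values of the two new types are 
> assumptions, not taken from a committed patch):
> {code}
> enum TDeleteType {
>   DELETE_COLUMN = 0,
>   DELETE_COLUMNS = 1,
>   DELETE_FAMILY = 2,
>   DELETE_FAMILY_VERSION = 3
> }
> {code}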



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18412) [Shell] Support unset of a list of table configuration

2017-07-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16092767#comment-16092767
 ] 

Hadoop QA commented on HBASE-18412:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} rubocop {color} | {color:blue}  0m  
0s{color} | {color:blue} rubocop was not available. {color} |
| {color:blue}0{color} | {color:blue} ruby-lint {color} | {color:blue}  0m  
0s{color} | {color:blue} Ruby-lint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
31m 25s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  6m 
58s{color} | {color:green} hbase-shell in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 7s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:757bf37 |
| JIRA Issue | HBASE-18412 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877931/HBASE-18412.master.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  rubocop  ruby_lint  |
| uname | Linux d3d99b2b35b4 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 6b7ebc0 |
| Default Java | 1.8.0_131 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7710/testReport/ |
| modules | C: hbase-shell U: hbase-shell |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7710/console |
| Powered by | Apache Yetus 0.4.0   http://yetus.apache.org |


This message was automatically generated.



> [Shell] Support unset of a list of table configuration
> --
>
> Key: HBASE-18412
> URL: https://issues.apache.org/jira/browse/HBASE-18412
> Project: HBase
>  Issue Type: Improvement
>Reporter: Yun Zhao
>Assignee: Yun Zhao
>Priority: Minor
> Attachments: HBASE-18412.master.001.patch
>
>
> Add a method 'table_conf_unset' in admin.rb to reset a table configuration to 
> the system default.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18308) Eliminate the findbugs warnings for hbase-server

2017-07-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16092782#comment-16092782
 ] 

Hadoop QA commented on HBASE-18308:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
24s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} branch-1 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} branch-1 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
26s{color} | {color:green} branch-1 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
13s{color} | {color:red} hbase-server in branch-1 has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} branch-1 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} branch-1 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
18m  2s{color} | {color:green} The patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
33s{color} | {color:green} hbase-server generated 0 new + 0 unchanged - 2 fixed 
= 0 total (was 2) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}111m 44s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}152m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestReplicasClient |
|   | hadoop.hbase.client.TestClientScannerRPCTimeout |
|   | hadoop.hbase.master.TestAssignmentListener |
|   | hadoop.hbase.TestZooKeeper |
|   | hadoop.hbase.regionserver.TestCompactionInDeadRegionServer |
|   | hadoop.hbase.regionserver.TestRSKilledWhenInitializing |
| Timed out junit tests | 
org.apache.hadoop.hbase.security.access.TestCellACLWithMultipleVersions |
|   | org.apache.h

[jira] [Reopened] (HBASE-18390) Sleep too long when finding region location failed

2017-07-19 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai reopened HBASE-18390:


We didn't get a response from QA for branch-1... TestClientScannerRPCTimeout 
fails in branch-1 on my machine. Would you please take a look? Thanks.

> Sleep too long when finding region location failed
> --
>
> Key: HBASE-18390
> URL: https://issues.apache.org/jira/browse/HBASE-18390
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.1, 1.2.6, 1.1.11, 2.0.0-alpha-1
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 3.0.0, 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-alpha-2, 1.1.12
>
> Attachments: HBASE-18390.v01.patch, HBASE-18390.v02.patch, 
> HBASE-18390.v03.patch
>
>
> If RegionServerCallable#prepare fails in getRegionLocation, the location 
> in this callable object is null, and we sleep before retrying. However, 
> when the location is null we sleep at least 10 seconds, so the request 
> fails directly if the operation timeout is less than 10 seconds. I think 
> there is no need to keep the MIN_WAIT_DEAD_SERVER logic; using the backoff 
> sleeping logic is OK for most cases.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18368) FilterList with multiple FamilyFilters concatenated by OR does not work.

2017-07-19 Thread Allan Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16092810#comment-16092810
 ] 

Allan Yang commented on HBASE-18368:


{quote}
The key thing is that every column family's store scanner shares one filter. 
Even if the filter returns a NEXT_ROW, we can still pass a different column 
family's cell from the same row to this filter. So if the filter has state 
about this, it will be wrong.
{quote}
Yes, this is the key issue here.
[~openinx], I don't think the UT you provided is valid; as we change a 
store scanner, we should not bypass the filter.

> FilterList with multiple FamilyFilters concatenated by OR does not work.
> 
>
> Key: HBASE-18368
> URL: https://issues.apache.org/jira/browse/HBASE-18368
> Project: HBase
>  Issue Type: Sub-task
>  Components: Filters
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: Peter Somogyi
>Assignee: Allan Yang
>Priority: Critical
> Attachments: HBASE-18368.branch-1.patch, 
> HBASE-18368.branch-1.v2.patch, HBASE-18368.branch-1.v3.patch, 
> HBASE-18368.patch
>
>
> Scan gives back incomplete list if multiple filters are combined with OR / 
> MUST_PASS_ONE.
> Using 2 FamilyFilters in a FilterList using MUST_PASS_ONE operator will give 
> back results for only the first Filter.
> {code:java|title=Test code}
>   @Test
>   public void testFiltersWithOr() throws Exception {
>     TableName tn = TableName.valueOf("MyTest");
>     Table table = utility.createTable(tn, new String[] { "cf1", "cf2" });
>     byte[] CF1 = Bytes.toBytes("cf1");
>     byte[] CF2 = Bytes.toBytes("cf2");
>     Put put1 = new Put(Bytes.toBytes("0"));
>     put1.addColumn(CF1, Bytes.toBytes("col_a"), Bytes.toBytes(0));
>     table.put(put1);
>     Put put2 = new Put(Bytes.toBytes("0"));
>     put2.addColumn(CF2, Bytes.toBytes("col_b"), Bytes.toBytes(0));
>     table.put(put2);
>     FamilyFilter filterCF1 =
>         new FamilyFilter(CompareFilter.CompareOp.EQUAL, new BinaryComparator(CF1));
>     FamilyFilter filterCF2 =
>         new FamilyFilter(CompareFilter.CompareOp.EQUAL, new BinaryComparator(CF2));
>     FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ONE);
>     filterList.addFilter(filterCF1);
>     filterList.addFilter(filterCF2);
>     Scan scan = new Scan();
>     scan.setFilter(filterList);
>     ResultScanner scanner = table.getScanner(scan);
>     System.out.println(filterList);
>     for (Result rr = scanner.next(); rr != null; rr = scanner.next()) {
>       System.out.println(rr);
>     }
>   }
> {code}
> {noformat:title=Output}
> FilterList OR (2/2): [FamilyFilter (EQUAL, cf1), FamilyFilter (EQUAL, cf2)]
> keyvalues={0/cf1:col_a/1499852754957/Put/vlen=4/seqid=0}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18375) The pool chunks from ChunkCreator are deallocated while in pool because there is no reference to them

2017-07-19 Thread Anastasia Braginsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16092819#comment-16092819
 ] 

Anastasia Braginsky commented on HBASE-18375:
-

bq. Also the change in CompactionPipeline of removing and adding the last, now 
being done at the beginning is it intentional to avoid the NPE bug?

The change in CompactionPipeline reverses the order of closing the suffix and 
disconnecting the suffix from the compaction pipeline. This problem wasn't 
seen in practice, but generally it is correct to close the suffix AFTER it has 
been disconnected from the pipeline. When the suffix is closed before it is 
disconnected from the pipeline, the following problem can happen (see the 
sketch after the list):

1. The segment is closed; assuming the reference count is zero, the segment's 
chunks are released.
2. While the segment is still in the compaction pipeline, but after the chunks 
are released, a scan appears and gets a reference to the segment.
3. The segment is disconnected from the pipeline, but the scan is accessing the 
released chunks... :(
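
An illustrative sketch of the corrected ordering, with made-up names (not the 
real CompactionPipeline code):
{code:java}
import java.util.List;

class PipelineSketch {
  interface Segment { void close(); }

  private final List<Segment> pipeline;

  PipelineSketch(List<Segment> pipeline) { this.pipeline = pipeline; }

  void swapSuffix(List<Segment> suffix) {
    synchronized (pipeline) {
      pipeline.removeAll(suffix); // 1) disconnect the suffix first
    }
    for (Segment s : suffix) {
      s.close();                  // 2) release chunks only after disconnection
    }
  }
}
{code}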

> The pool chunks from ChunkCreator are deallocated while in pool because there 
> is no reference to them
> -
>
> Key: HBASE-18375
> URL: https://issues.apache.org/jira/browse/HBASE-18375
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0-alpha-1
>Reporter: Anastasia Braginsky
>Priority: Critical
> Fix For: 2.0.0, 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18375-V01.patch, HBASE-18375-V02.patch, 
> HBASE-18375-V03.patch
>
>
> Because the MSLAB list of chunks was changed to a list of chunk IDs, the 
> chunks returned to the pool can be deallocated by the JVM because there is no 
> reference to them. The solution is to protect pool chunks from GC via the 
> strong map of ChunkCreator introduced by HBASE-18010. Will prepare the patch 
> today.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18399) Files in a snapshot can go missing even after the snapshot is taken successfully

2017-07-19 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16092823#comment-16092823
 ] 

ramkrishna.s.vasudevan commented on HBASE-18399:


bq.No, because the sole reason why a store file is moved to archive while in 
the middle of a snapshot operation is the lack of proper locking between the 
snapshot operation and finalizing the compaction results.
:)
I should have been more explicit. My point was: previously, before store file 
accounting, after a compaction was done and the files were moved to archive, 
was there any synchronization between snapshot and archival? I am not sure 
about this and hence I asked. 

> Files in a snapshot can go missing even after the snapshot is taken 
> successfully
> 
>
> Key: HBASE-18399
> URL: https://issues.apache.org/jira/browse/HBASE-18399
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Reporter: Ashu Pachauri
> Fix For: 1.3.2
>
>
> Files go missing after the snapshot is taken (only applicable when the TTL for 
> the TimeToLiveHFileCleaner is small, like the default 5 mins):
> * SnapshotManifest#addRegion visits store_file_A, but has not yet written 
> it to the manifest.
> * store_file_A is marked as compacted away and HFileArchiver moves the 
> file to archive.
> * HFileCleaner comes in and sees store_file_A in archive. It adds the 
> file to the list of files that might need to be cleaned up.
> * HFileCleaner's SnapshotHFileCleaner plugin kicks in.
> * SnapshotFileCache#getUnreferencedFiles also says that store_file_A is 
> unreferenced and should be cleaned up (it has not yet been written to the 
> manifest).
> * SnapshotHFileCleaner is still going through the rest of the files in 
> archive.
> * The store_file_A reference is created and written to the snapshot manifest.
> * Snapshot verification runs, sees that store_file_A is present in 
> archive, and thus the verification passes.
> * Now the SnapshotHFileCleaner finishes and TimeToLiveHFileCleaner is 
> triggered. If the TTL has passed since store_file_A was moved to archive 
> (SnapshotHFileCleaner could easily take several minutes to go through the rest 
> of the files), TimeToLiveHFileCleaner also marks the file as deletable.
> * Since all cleaner plugins marked the file as deletable, store_file_A is 
> deleted.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18251) Remove unnecessary traversing to the first and last keys in the CellSet

2017-07-19 Thread Toshihiro Suzuki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated HBASE-18251:
-
Attachment: HBASE-18251-v2.patch

I just attached the new patch because of the test failure.

I modified CellFlatMap to support first/lastEntry(). Because this 
modification instantiates an Entry object every time we call 
first/lastEntry(), it might be memory inefficient. 

> Remove unnecessary traversing to the first and last keys in the CellSet
> ---
>
> Key: HBASE-18251
> URL: https://issues.apache.org/jira/browse/HBASE-18251
> Project: HBase
>  Issue Type: Bug
>Reporter: Anastasia Braginsky
>Assignee: Toshihiro Suzuki
> Attachments: HBASE-18251.patch, HBASE-18251-v2.patch
>
>
> The implementation of finding the first and last keys in the CellSet is as 
> follows:
> {code}
> public Cell first() {
>   return this.delegatee.get(this.delegatee.firstKey());
> }
>
> public Cell last() {
>   return this.delegatee.get(this.delegatee.lastKey());
> }
> {code}
> Recall that we have a Cell-to-Cell mapping, therefore the methods bringing the 
> first/last key already return a Cell. Thus there is no need to waste time on 
> the get() call for the same Cell.
> Fix: return just first/lastKey(); it should be at least twice as effective 
> (see the sketch below).
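>
> A minimal sketch of the proposed fix, assuming the delegatee's keys are the 
> Cells themselves (which the Cell-to-Cell mapping implies):
> {code}
> public Cell first() {
>   return this.delegatee.firstKey();
> }
>
> public Cell last() {
>   return this.delegatee.lastKey();
> }
> {code}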



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18308) Eliminate the findbugs warnings for hbase-server

2017-07-19 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16092864#comment-16092864
 ] 

Chia-Ping Tsai commented on HBASE-18308:


TestClientScannerRPCTimeout fails because of HBASE-18390. The others pass on my 
local.

> Eliminate the findbugs warnings for hbase-server
> 
>
> Key: HBASE-18308
> URL: https://issues.apache.org/jira/browse/HBASE-18308
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 3.0.0, 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-alpha-2
>
> Attachments: HBASE-18308.branch-1.2.v0.patch, 
> HBASE-18308.branch-1.2.v0.patch, HBASE-18308.branch-1.3.v0.patch, 
> HBASE-18308.branch-1.v0.patch, HBASE-18308.branch-1.v1.patch, 
> HBASE-18308.branch-1.v2.patch, HBASE-18308.v0.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18308) Eliminate the findbugs warnings for hbase-server

2017-07-19 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-18308:
---
Status: Open  (was: Patch Available)

> Eliminate the findbugs warnings for hbase-server
> 
>
> Key: HBASE-18308
> URL: https://issues.apache.org/jira/browse/HBASE-18308
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 3.0.0, 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-alpha-2
>
> Attachments: HBASE-18308.branch-1.2.v0.patch, 
> HBASE-18308.branch-1.2.v0.patch, HBASE-18308.branch-1.3.v0.patch, 
> HBASE-18308.branch-1.v0.patch, HBASE-18308.branch-1.v1.patch, 
> HBASE-18308.branch-1.v2.patch, HBASE-18308.v0.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18308) Eliminate the findbugs warnings for hbase-server

2017-07-19 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-18308:
---
Attachment: HBASE-18308.v1.patch

v1 patch for master
# remove the unused SuppressWarnings (The preAssignMvccLock had been removed by 
HBASE-17471)

> Eliminate the findbugs warnings for hbase-server
> 
>
> Key: HBASE-18308
> URL: https://issues.apache.org/jira/browse/HBASE-18308
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 3.0.0, 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-alpha-2
>
> Attachments: HBASE-18308.branch-1.2.v0.patch, 
> HBASE-18308.branch-1.2.v0.patch, HBASE-18308.branch-1.3.v0.patch, 
> HBASE-18308.branch-1.v0.patch, HBASE-18308.branch-1.v1.patch, 
> HBASE-18308.branch-1.v2.patch, HBASE-18308.v0.patch, HBASE-18308.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18308) Eliminate the findbugs warnings for hbase-server

2017-07-19 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-18308:
---
Status: Patch Available  (was: Open)

> Eliminate the findbugs warnings for hbase-server
> 
>
> Key: HBASE-18308
> URL: https://issues.apache.org/jira/browse/HBASE-18308
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 3.0.0, 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-alpha-2
>
> Attachments: HBASE-18308.branch-1.2.v0.patch, 
> HBASE-18308.branch-1.2.v0.patch, HBASE-18308.branch-1.3.v0.patch, 
> HBASE-18308.branch-1.v0.patch, HBASE-18308.branch-1.v1.patch, 
> HBASE-18308.branch-1.v2.patch, HBASE-18308.v0.patch, HBASE-18308.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18390) Sleep too long when finding region location failed

2017-07-19 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16092870#comment-16092870
 ] 

Phil Yang commented on HBASE-18390:
---

I reset to the previous commit af359d03b5e2cc798cee8ba52d2a9fcbb1022104 and it 
also failed. So it is not broken by this issue.

> Sleep too long when finding region location failed
> --
>
> Key: HBASE-18390
> URL: https://issues.apache.org/jira/browse/HBASE-18390
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.1, 1.2.6, 1.1.11, 2.0.0-alpha-1
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 3.0.0, 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-alpha-2, 1.1.12
>
> Attachments: HBASE-18390.v01.patch, HBASE-18390.v02.patch, 
> HBASE-18390.v03.patch
>
>
> If RegionServerCallable#prepare fails in getRegionLocation, the location 
> in this callable object is null, and we sleep before retrying. However, 
> when the location is null we sleep at least 10 seconds, so the request 
> fails directly if the operation timeout is less than 10 seconds. I think 
> there is no need to keep the MIN_WAIT_DEAD_SERVER logic; using the backoff 
> sleeping logic is OK for most cases.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18390) Sleep too long when finding region location failed

2017-07-19 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16092875#comment-16092875
 ] 

Chia-Ping Tsai commented on HBASE-18390:


bq. I reset to the previous commit af359d03b5e2cc798cee8ba52d2a9fcbb1022104 and 
it also failed. So it is not broken by this issue.
Got it. Thanks for the explanations. Will close this.

> Sleep too long when finding region location failed
> --
>
> Key: HBASE-18390
> URL: https://issues.apache.org/jira/browse/HBASE-18390
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.1, 1.2.6, 1.1.11, 2.0.0-alpha-1
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 3.0.0, 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-alpha-2, 1.1.12
>
> Attachments: HBASE-18390.v01.patch, HBASE-18390.v02.patch, 
> HBASE-18390.v03.patch
>
>
> If RegionServerCallable#prepare fails in getRegionLocation, the location 
> in this callable object is null, and we sleep before retrying. However, 
> when the location is null we sleep at least 10 seconds, so the request 
> fails directly if the operation timeout is less than 10 seconds. I think 
> there is no need to keep the MIN_WAIT_DEAD_SERVER logic; using the backoff 
> sleeping logic is OK for most cases.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HBASE-18390) Sleep too long when finding region location failed

2017-07-19 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved HBASE-18390.

Resolution: Fixed

> Sleep too long when finding region location failed
> --
>
> Key: HBASE-18390
> URL: https://issues.apache.org/jira/browse/HBASE-18390
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.1, 1.2.6, 1.1.11, 2.0.0-alpha-1
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 3.0.0, 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-alpha-2, 1.1.12
>
> Attachments: HBASE-18390.v01.patch, HBASE-18390.v02.patch, 
> HBASE-18390.v03.patch
>
>
> If RegionServerCallable#prepare fails in getRegionLocation, the location 
> in this callable object is null, and we sleep before retrying. However, 
> when the location is null we sleep at least 10 seconds, so the request 
> fails directly if the operation timeout is less than 10 seconds. I think 
> there is no need to keep the MIN_WAIT_DEAD_SERVER logic; using the backoff 
> sleeping logic is OK for most cases.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-18308) Eliminate the findbugs warnings for hbase-server

2017-07-19 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16092864#comment-16092864
 ] 

Chia-Ping Tsai edited comment on HBASE-18308 at 7/19/17 9:50 AM:
-

TestClientScannerRPCTimeout fails w/o this patch. The others pass on my local.


was (Author: chia7712):
TestClientScannerRPCTimeout fails because of HBASE-18390. The others pass on my 
local.

> Eliminate the findbugs warnings for hbase-server
> 
>
> Key: HBASE-18308
> URL: https://issues.apache.org/jira/browse/HBASE-18308
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 3.0.0, 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-alpha-2
>
> Attachments: HBASE-18308.branch-1.2.v0.patch, 
> HBASE-18308.branch-1.2.v0.patch, HBASE-18308.branch-1.3.v0.patch, 
> HBASE-18308.branch-1.v0.patch, HBASE-18308.branch-1.v1.patch, 
> HBASE-18308.branch-1.v2.patch, HBASE-18308.v0.patch, HBASE-18308.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-16993) BucketCache throw java.io.IOException: Invalid HFile block magic when DATA_BLOCK_ENCODING set to DIFF

2017-07-19 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16092883#comment-16092883
 ] 

Anoop Sam John commented on HBASE-16993:


16384,32768,40960, 46000,49152,51200,65536,131072,524288
Here 46000 is the invalid entry, since it is NOT a multiple of 256 
(46000 / 256 = 179.6875). It should have been configured as 46080 
(256 x 180), so the correct config would have been:
16384,32768,40960, 46080,49152,51200,65536,131072,524288
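
A quick way to spot the invalid entry (illustrative only):
{code}
for (int size : new int[] { 16384, 32768, 40960, 46000, 49152,
                            51200, 65536, 131072, 524288 }) {
  if (size % 256 != 0) {
    System.out.println(size + " is not a multiple of 256"); // prints 46000
  }
}
{code}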

> BucketCache throw java.io.IOException: Invalid HFile block magic when 
> DATA_BLOCK_ENCODING set to DIFF
> -
>
> Key: HBASE-16993
> URL: https://issues.apache.org/jira/browse/HBASE-16993
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache, io
>Affects Versions: 1.1.3
> Environment: hbase version 1.1.3
>Reporter: liubangchen
>Assignee: liubangchen
> Fix For: 2.0.0
>
> Attachments: HBASE-16993.000.patch, HBASE-16993.001.patch, 
> HBASE-16993.master.001.patch, HBASE-16993.master.002.patch, 
> HBASE-16993.master.003.patch, HBASE-16993.master.004.patch, 
> HBASE-16993.master.005.patch
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> hbase-site.xml setting
> <property>
>   <name>hbase.bucketcache.bucket.sizes</name>
>   <value>16384,32768,40960, 46000,49152,51200,65536,131072,524288</value>
> </property>
> <property>
>   <name>hbase.bucketcache.size</name>
>   <value>16384</value>
> </property>
> <property>
>   <name>hbase.bucketcache.ioengine</name>
>   <value>offheap</value>
> </property>
> <property>
>   <name>hfile.block.cache.size</name>
>   <value>0.3</value>
> </property>
> <property>
>   <name>hfile.block.bloom.cacheonwrite</name>
>   <value>true</value>
> </property>
> <property>
>   <name>hbase.rs.cacheblocksonwrite</name>
>   <value>true</value>
> </property>
> <property>
>   <name>hfile.block.index.cacheonwrite</name>
>   <value>true</value>
> </property>
>  n_splits = 200
> create 'usertable',{NAME =>'family', COMPRESSION => 'snappy', VERSIONS => 
> 1,DATA_BLOCK_ENCODING => 'DIFF',CONFIGURATION => 
> {'hbase.hregion.memstore.block.multiplier' => 5}},{DURABILITY => 
> 'SKIP_WAL'},{SPLITS => (1..n_splits).map {|i| 
> "user#{1000+i*(-1000)/n_splits}"}}
> load data
> bin/ycsb load hbase10 -P workloads/workloada -p table=usertable -p 
> columnfamily=family -p fieldcount=10 -p fieldlength=100 -p 
> recordcount=2 -p insertorder=hashed -p insertstart=0 -p 
> clientbuffering=true -p durability=SKIP_WAL -threads 20 -s 
> run 
> bin/ycsb run hbase10 -P workloads/workloadb -p table=usertable -p 
> columnfamily=family -p fieldcount=10 -p fieldlength=100 -p 
> operationcount=2000 -p readallfields=true -p clientbuffering=true -p 
> requestdistribution=zipfian  -threads 10 -s
> log info
> 2016-11-02 20:20:20,261 ERROR 
> [RW.default.readRpcServer.handler=36,queue=21,port=6020] bucket.BucketCache: 
> Failed reading block fdcc7ed6f3b2498b9ef316cc8206c233_44819759 from bucket 
> cache
> java.io.IOException: Invalid HFile block magic: 
> \x00\x00\x00\x00\x00\x00\x00\x00
> at 
> org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:154)
> at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:167)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.<init>(HFileBlock.java:273)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:134)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:121)
> at 
> org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getBlock(BucketCache.java:427)
> at 
> org.apache.hadoop.hbase.io.hfile.CombinedBlockCache.getBlock(CombinedBlockCache.java:85)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.getCachedBlock(HFileReaderV2.java:266)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:403)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:269)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:634)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:584)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:247)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:156)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:363)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:217)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2071)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:5369)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2546)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HR

[jira] [Updated] (HBASE-16993) BucketCache throw java.io.IOException: Invalid HFile block magic when configuring hbase.bucketcache.bucket.sizes

2017-07-19 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-16993:
---
Summary: BucketCache throw java.io.IOException: Invalid HFile block magic 
when configuring hbase.bucketcache.bucket.sizes  (was: BucketCache throw 
java.io.IOException: Invalid HFile block magic when DATA_BLOCK_ENCODING set to 
DIFF)

> BucketCache throw java.io.IOException: Invalid HFile block magic when 
> configuring hbase.bucketcache.bucket.sizes
> 
>
> Key: HBASE-16993
> URL: https://issues.apache.org/jira/browse/HBASE-16993
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache, io
>Affects Versions: 1.1.3
> Environment: hbase version 1.1.3
>Reporter: liubangchen
>Assignee: liubangchen
> Fix For: 2.0.0
>
> Attachments: HBASE-16993.000.patch, HBASE-16993.001.patch, 
> HBASE-16993.master.001.patch, HBASE-16993.master.002.patch, 
> HBASE-16993.master.003.patch, HBASE-16993.master.004.patch, 
> HBASE-16993.master.005.patch
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> hbase-site.xml setting
> <property>
>   <name>hbase.bucketcache.bucket.sizes</name>
>   <value>16384,32768,40960, 46000,49152,51200,65536,131072,524288</value>
> </property>
> <property>
>   <name>hbase.bucketcache.size</name>
>   <value>16384</value>
> </property>
> <property>
>   <name>hbase.bucketcache.ioengine</name>
>   <value>offheap</value>
> </property>
> <property>
>   <name>hfile.block.cache.size</name>
>   <value>0.3</value>
> </property>
> <property>
>   <name>hfile.block.bloom.cacheonwrite</name>
>   <value>true</value>
> </property>
> <property>
>   <name>hbase.rs.cacheblocksonwrite</name>
>   <value>true</value>
> </property>
> <property>
>   <name>hfile.block.index.cacheonwrite</name>
>   <value>true</value>
> </property>
>  n_splits = 200
> create 'usertable',{NAME =>'family', COMPRESSION => 'snappy', VERSIONS => 
> 1,DATA_BLOCK_ENCODING => 'DIFF',CONFIGURATION => 
> {'hbase.hregion.memstore.block.multiplier' => 5}},{DURABILITY => 
> 'SKIP_WAL'},{SPLITS => (1..n_splits).map {|i| 
> "user#{1000+i*(-1000)/n_splits}"}}
> load data
> bin/ycsb load hbase10 -P workloads/workloada -p table=usertable -p 
> columnfamily=family -p fieldcount=10 -p fieldlength=100 -p 
> recordcount=2 -p insertorder=hashed -p insertstart=0 -p 
> clientbuffering=true -p durability=SKIP_WAL -threads 20 -s 
> run 
> bin/ycsb run hbase10 -P workloads/workloadb -p table=usertable -p 
> columnfamily=family -p fieldcount=10 -p fieldlength=100 -p 
> operationcount=2000 -p readallfields=true -p clientbuffering=true -p 
> requestdistribution=zipfian  -threads 10 -s
> log info
> 2016-11-02 20:20:20,261 ERROR 
> [RW.default.readRpcServer.handler=36,queue=21,port=6020] bucket.BucketCache: 
> Failed reading block fdcc7ed6f3b2498b9ef316cc8206c233_44819759 from bucket 
> cache
> java.io.IOException: Invalid HFile block magic: 
> \x00\x00\x00\x00\x00\x00\x00\x00
> at 
> org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:154)
> at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:167)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.<init>(HFileBlock.java:273)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:134)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:121)
> at 
> org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getBlock(BucketCache.java:427)
> at 
> org.apache.hadoop.hbase.io.hfile.CombinedBlockCache.getBlock(CombinedBlockCache.java:85)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.getCachedBlock(HFileReaderV2.java:266)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:403)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:269)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:634)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:584)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:247)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:156)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:363)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:217)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2071)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:5369)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2546)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2532)
> at 
> org.apache.hadoop.hbase.regions
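
A note on the failure mode: BucketEntry packs each block's cache offset into an
int plus a byte (the field layout is quoted under HBASE-17819 later in this
digest), which only round-trips offsets that are multiples of 256. In the
bucket.sizes list above, 46000 is not a multiple of 256, so item offsets inside
such buckets lose their low bits and subsequent reads land mid-block, returning
the zeroed magic seen in the log. A minimal sketch of the packing, assuming
256-byte alignment (illustrative names, not the exact HBase source):
{code}
class PackedOffset {
  private int offsetBase;
  private byte offset1;

  void set(long offset) {
    // Only offsets that are multiples of 256 survive this round trip. Asserts
    // are usually disabled in production, so a misaligned offset is silently
    // truncated instead of rejected.
    assert (offset & 0xFF) == 0 : "offset must be a multiple of 256";
    offset >>= 8;
    offsetBase = (int) offset;
    offset1 = (byte) (offset >> 32);
  }

  long get() {
    long o = ((long) offsetBase) & 0xFFFFFFFFL;
    o += (((long) offset1) & 0xFF) << 32;
    return o << 8; // restore the 256-byte alignment
  }
}
{code}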

[jira] [Updated] (HBASE-16993) BucketCache throw java.io.IOException: Invalid HFile block magic when configuring hbase.bucketcache.bucket.sizes

2017-07-19 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-16993:
---
Fix Version/s: (was: 2.0.0)
   2.0.0-alpha-2
   1.4.0
   3.0.0

> BucketCache throw java.io.IOException: Invalid HFile block magic when 
> configuring hbase.bucketcache.bucket.sizes
> 
>
> Key: HBASE-16993
> URL: https://issues.apache.org/jira/browse/HBASE-16993
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache, io
>Affects Versions: 1.1.3
> Environment: hbase version 1.1.3
>Reporter: liubangchen
>Assignee: liubangchen
> Fix For: 3.0.0, 1.4.0, 2.0.0-alpha-2
>
> Attachments: HBASE-16993.000.patch, HBASE-16993.001.patch, 
> HBASE-16993.master.001.patch, HBASE-16993.master.002.patch, 
> HBASE-16993.master.003.patch, HBASE-16993.master.004.patch, 
> HBASE-16993.master.005.patch, HBASE-16993_V6.patch
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> hbase-site.xml setting
> <property>
> <name>hbase.bucketcache.bucket.sizes</name>
> <value>16384,32768,40960,46000,49152,51200,65536,131072,524288</value>
> </property>
> <property>
> <name>hbase.bucketcache.size</name>
> <value>16384</value>
> </property>
> <property>
> <name>hbase.bucketcache.ioengine</name>
> <value>offheap</value>
> </property>
> <property>
> <name>hfile.block.cache.size</name>
> <value>0.3</value>
> </property>
> <property>
> <name>hfile.block.bloom.cacheonwrite</name>
> <value>true</value>
> </property>
> <property>
> <name>hbase.rs.cacheblocksonwrite</name>
> <value>true</value>
> </property>
> <property>
> <name>hfile.block.index.cacheonwrite</name>
> <value>true</value>
> </property>
> n_splits = 200
> create 'usertable',{NAME =>'family', COMPRESSION => 'snappy', VERSIONS => 
> 1,DATA_BLOCK_ENCODING => 'DIFF',CONFIGURATION => 
> {'hbase.hregion.memstore.block.multiplier' => 5}},{DURABILITY => 
> 'SKIP_WAL'},{SPLITS => (1..n_splits).map {|i| 
> "user#{1000+i*(-1000)/n_splits}"}}
> load data
> bin/ycsb load hbase10 -P workloads/workloada -p table=usertable -p 
> columnfamily=family -p fieldcount=10 -p fieldlength=100 -p 
> recordcount=2 -p insertorder=hashed -p insertstart=0 -p 
> clientbuffering=true -p durability=SKIP_WAL -threads 20 -s 
> run 
> bin/ycsb run hbase10 -P workloads/workloadb -p table=usertable -p 
> columnfamily=family -p fieldcount=10 -p fieldlength=100 -p 
> operationcount=2000 -p readallfields=true -p clientbuffering=true -p 
> requestdistribution=zipfian  -threads 10 -s
> log info
> 2016-11-02 20:20:20,261 ERROR 
> [RW.default.readRpcServer.handler=36,queue=21,port=6020] bucket.BucketCache: 
> Failed reading block fdcc7ed6f3b2498b9ef316cc8206c233_44819759 from bucket 
> cache
> java.io.IOException: Invalid HFile block magic: 
> \x00\x00\x00\x00\x00\x00\x00\x00
> at 
> org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:154)
> at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:167)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.<init>(HFileBlock.java:273)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:134)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:121)
> at 
> org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getBlock(BucketCache.java:427)
> at 
> org.apache.hadoop.hbase.io.hfile.CombinedBlockCache.getBlock(CombinedBlockCache.java:85)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.getCachedBlock(HFileReaderV2.java:266)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:403)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:269)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:634)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:584)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:247)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:156)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:363)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:217)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2071)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:5369)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2546)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2532)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2514)
> at org.apache.hadoop.h

[jira] [Updated] (HBASE-16993) BucketCache throw java.io.IOException: Invalid HFile block magic when configuring hbase.bucketcache.bucket.sizes

2017-07-19 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-16993:
---
Attachment: HBASE-16993_V6.patch

> BucketCache throw java.io.IOException: Invalid HFile block magic when 
> configuring hbase.bucketcache.bucket.sizes
> 
>
> Key: HBASE-16993
> URL: https://issues.apache.org/jira/browse/HBASE-16993
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache, io
>Affects Versions: 1.1.3
> Environment: hbase version 1.1.3
>Reporter: liubangchen
>Assignee: liubangchen
> Fix For: 3.0.0, 1.4.0, 2.0.0-alpha-2
>
> Attachments: HBASE-16993.000.patch, HBASE-16993.001.patch, 
> HBASE-16993.master.001.patch, HBASE-16993.master.002.patch, 
> HBASE-16993.master.003.patch, HBASE-16993.master.004.patch, 
> HBASE-16993.master.005.patch, HBASE-16993_V6.patch
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> hbase-site.xml setting
> <property>
> <name>hbase.bucketcache.bucket.sizes</name>
> <value>16384,32768,40960,46000,49152,51200,65536,131072,524288</value>
> </property>
> <property>
> <name>hbase.bucketcache.size</name>
> <value>16384</value>
> </property>
> <property>
> <name>hbase.bucketcache.ioengine</name>
> <value>offheap</value>
> </property>
> <property>
> <name>hfile.block.cache.size</name>
> <value>0.3</value>
> </property>
> <property>
> <name>hfile.block.bloom.cacheonwrite</name>
> <value>true</value>
> </property>
> <property>
> <name>hbase.rs.cacheblocksonwrite</name>
> <value>true</value>
> </property>
> <property>
> <name>hfile.block.index.cacheonwrite</name>
> <value>true</value>
> </property>
> n_splits = 200
> create 'usertable',{NAME =>'family', COMPRESSION => 'snappy', VERSIONS => 
> 1,DATA_BLOCK_ENCODING => 'DIFF',CONFIGURATION => 
> {'hbase.hregion.memstore.block.multiplier' => 5}},{DURABILITY => 
> 'SKIP_WAL'},{SPLITS => (1..n_splits).map {|i| 
> "user#{1000+i*(-1000)/n_splits}"}}
> load data
> bin/ycsb load hbase10 -P workloads/workloada -p table=usertable -p 
> columnfamily=family -p fieldcount=10 -p fieldlength=100 -p 
> recordcount=2 -p insertorder=hashed -p insertstart=0 -p 
> clientbuffering=true -p durability=SKIP_WAL -threads 20 -s 
> run 
> bin/ycsb run hbase10 -P workloads/workloadb -p table=usertable -p 
> columnfamily=family -p fieldcount=10 -p fieldlength=100 -p 
> operationcount=2000 -p readallfields=true -p clientbuffering=true -p 
> requestdistribution=zipfian  -threads 10 -s
> log info
> 2016-11-02 20:20:20,261 ERROR 
> [RW.default.readRpcServer.handler=36,queue=21,port=6020] bucket.BucketCache: 
> Failed reading block fdcc7ed6f3b2498b9ef316cc8206c233_44819759 from bucket 
> cache
> java.io.IOException: Invalid HFile block magic: 
> \x00\x00\x00\x00\x00\x00\x00\x00
> at 
> org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:154)
> at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:167)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.<init>(HFileBlock.java:273)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:134)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:121)
> at 
> org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getBlock(BucketCache.java:427)
> at 
> org.apache.hadoop.hbase.io.hfile.CombinedBlockCache.getBlock(CombinedBlockCache.java:85)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.getCachedBlock(HFileReaderV2.java:266)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:403)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:269)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:634)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:584)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:247)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:156)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:363)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:217)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2071)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:5369)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2546)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2532)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2514)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6558)
> at org.apache.hadoop.hb

[jira] [Updated] (HBASE-17819) Reduce the heap overhead for BucketCache

2017-07-19 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-17819:
---
Attachment: HBASE-17819_V1.patch

> Reduce the heap overhead for BucketCache
> 
>
> Key: HBASE-17819
> URL: https://issues.apache.org/jira/browse/HBASE-17819
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-17819_V1.patch
>
>
> We keep a bucket entry map in BucketCache.  Below is the math for the heapSize 
> of the key and value in this map.
> BlockCacheKey
> ---
> String hfileName  -  Ref  - 4
> long offset  - 8
> BlockType blockType  - Ref  - 4
> boolean isPrimaryReplicaBlock  - 1
> Total  =  12 (Object) + 17 = 29
> BucketEntry
> 
> int offsetBase  -  4
> int length  - 4
> byte offset1  -  1
> byte deserialiserIndex  -  1
> long accessCounter  -  8
> BlockPriority priority  - Ref  - 4
> volatile boolean markedForEvict  -  1
> AtomicInteger refCount  -  16 + 4
> long cachedTime  -  8
> Total = 12 (Object) + 51 = 63
> ConcurrentHashMap Map.Entry  -  40
> blocksByHFile ConcurrentSkipListSet Entry  -  40
> Total = 29 + 63 + 80 = 172
> For 10 million blocks we will end up with 1.6GB of heap overhead.  
> This jira aims to reduce this as much as possible.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
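
As a sanity check on the arithmetic above, a throwaway sketch that reproduces
the quoted totals (assuming a 64-bit JVM with compressed oops: 12-byte object
header, 4-byte reference):
{code}
public class BucketCacheHeapMath {
  public static void main(String[] args) {
    // BlockCacheKey: header + ref(hfileName) + long(offset)
    //                + ref(blockType) + boolean
    long key = 12 + 4 + 8 + 4 + 1;                              // 29
    // BucketEntry: header + 2 ints + 2 bytes + long + ref + boolean
    //              + AtomicInteger (16, plus 4 for its ref) + long
    long entry = 12 + 4 + 4 + 1 + 1 + 8 + 4 + 1 + (16 + 4) + 8; // 63
    long mapEntries = 40 + 40; // CHM Map.Entry + skip-list set entry
    long perBlock = key + entry + mapEntries;                   // 172
    System.out.printf("per block: %d bytes, 10M blocks: %.2f GB%n",
        perBlock, 10_000_000L * perBlock / (1024.0 * 1024 * 1024));
  }
}
{code}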


[jira] [Updated] (HBASE-17819) Reduce the heap overhead for BucketCache

2017-07-19 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-17819:
---
Status: Patch Available  (was: Open)

See what QA says

> Reduce the heap overhead for BucketCache
> 
>
> Key: HBASE-17819
> URL: https://issues.apache.org/jira/browse/HBASE-17819
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-17819_V1.patch
>
>
> We keep a bucket entry map in BucketCache.  Below is the math for the heapSize 
> of the key and value in this map.
> BlockCacheKey
> ---
> String hfileName  -  Ref  - 4
> long offset  - 8
> BlockType blockType  - Ref  - 4
> boolean isPrimaryReplicaBlock  - 1
> Total  =  12 (Object) + 17 = 29
> BucketEntry
> 
> int offsetBase  -  4
> int length  - 4
> byte offset1  -  1
> byte deserialiserIndex  -  1
> long accessCounter  -  8
> BlockPriority priority  - Ref  - 4
> volatile boolean markedForEvict  -  1
> AtomicInteger refCount  -  16 + 4
> long cachedTime  -  8
> Total = 12 (Object) + 51 = 63
> ConcurrentHashMap Map.Entry  -  40
> blocksByHFile ConcurrentSkipListSet Entry  -  40
> Total = 29 + 63 + 80 = 172
> For 10 million blocks we will end up with 1.6GB of heap overhead.  
> This jira aims to reduce this as much as possible.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17819) Reduce the heap overhead for BucketCache

2017-07-19 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16092900#comment-16092900
 ] 

Anoop Sam John commented on HBASE-17819:


V1 patch.  evictByHFile() loops over every entry for each file removal.  It is 
being called from CompactedHFilesDischarger once per file.  Maybe there is some 
optimization we can think of?

> Reduce the heap overhead for BucketCache
> 
>
> Key: HBASE-17819
> URL: https://issues.apache.org/jira/browse/HBASE-17819
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-17819_V1.patch
>
>
> We keep a bucket entry map in BucketCache.  Below is the math for the heapSize 
> of the key and value in this map.
> BlockCacheKey
> ---
> String hfileName  -  Ref  - 4
> long offset  - 8
> BlockType blockType  - Ref  - 4
> boolean isPrimaryReplicaBlock  - 1
> Total  =  12 (Object) + 17 = 29
> BucketEntry
> 
> int offsetBase  -  4
> int length  - 4
> byte offset1  -  1
> byte deserialiserIndex  -  1
> long accessCounter  -  8
> BlockPriority priority  - Ref  - 4
> volatile boolean markedForEvict  -  1
> AtomicInteger refCount  -  16 + 4
> long cachedTime  -  8
> Total = 12 (Object) + 51 = 63
> ConcurrentHashMap Map.Entry  -  40
> blocksByHFile ConcurrentSkipListSet Entry  -  40
> Total = 29 + 63 + 80 = 172
> For 10 million blocks we will end up with 1.6GB of heap overhead.  
> This jira aims to reduce this as much as possible.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17738) BucketCache startup is slow

2017-07-19 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16092906#comment-16092906
 ] 

Anoop Sam John commented on HBASE-17738:


+1
bq. testByteBufferCreaetion
Please correct the spelling on commit.

> BucketCache startup is slow
> ---
>
> Key: HBASE-17738
> URL: https://issues.apache.org/jira/browse/HBASE-17738
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-17738_2.patch, HBASE-17738_2.patch, 
> HBASE-17738_3.patch, HBASE-17738_4.patch, HBASE-17738_5_withoutUnsafe.patch, 
> HBASE-17738_6_withoutUnsafe.patch, HBASE-17738_8.patch, HBASE-17738_9.patch, 
> HBASE-17738.patch
>
>
> If you set bucketcache size at 64G say and then start hbase, it takes a long 
> time. Can we do the allocations in parallel and not inline with the server 
> startup?
> Related, prefetching on a bucketcache is slow. Speed it up.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
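
The suggestion in the description can be sketched as pushing the slab
allocations onto a worker pool instead of doing them inline at startup
(illustrative, not the committed patch; bufferCount and bufferSize stand in
for the real configuration):
{code}
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSlabAlloc {
  public static void main(String[] args) throws Exception {
    int bufferCount = 16 * 1024;       // illustrative: 64G in 4MB slabs
    int bufferSize = 4 * 1024 * 1024;
    ExecutorService pool = Executors.newFixedThreadPool(
        Runtime.getRuntime().availableProcessors());
    List<Future<ByteBuffer>> slabs = new ArrayList<>();
    for (int i = 0; i < bufferCount; i++) {
      slabs.add(pool.submit(() -> ByteBuffer.allocateDirect(bufferSize)));
    }
    for (Future<ByteBuffer> f : slabs) {
      f.get();                         // wait for every allocation to finish
    }
    pool.shutdown();
  }
}
{code}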


[jira] [Commented] (HBASE-10240) Remove 0.94->0.96 migration code

2017-07-19 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16092907#comment-16092907
 ] 

stack commented on HBASE-10240:
---

I'd suggest then that the way forward is to look for classes that implement 
Writables. We'd write instances of each Writables-implementing class, then 
apply this patch. Can we still read from the Writables after this patch has 
gone in?

> Remove 0.94->0.96 migration code
> 
>
> Key: HBASE-10240
> URL: https://issues.apache.org/jira/browse/HBASE-10240
> Project: HBase
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Peter Somogyi
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-10240.master.001.patch, 
> HBASE-10240.master.001.patch
>
>
> Remove the objects and code only needed for supporting migration to 0.96 from 
> 0.94. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-10240) Remove 0.94->0.96 migration code

2017-07-19 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16092912#comment-16092912
 ] 

stack commented on HBASE-10240:
---

Can we purge any of the remaining Writables? We were supposed to be 
Writables-free a long time ago, by 0.96 or 0.98 at the latest.

> Remove 0.94->0.96 migration code
> 
>
> Key: HBASE-10240
> URL: https://issues.apache.org/jira/browse/HBASE-10240
> Project: HBase
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Peter Somogyi
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-10240.master.001.patch, 
> HBASE-10240.master.001.patch
>
>
> Remove the objects and code only needed for supporting migration to 0.96 from 
> 0.94. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18397) StoreFile accounting issues on branch-1.3 and branch-1

2017-07-19 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16092917#comment-16092917
 ] 

Anoop Sam John commented on HBASE-18397:


Change those to be subtasks of this?

> StoreFile accounting issues on branch-1.3 and branch-1
> --
>
> Key: HBASE-18397
> URL: https://issues.apache.org/jira/browse/HBASE-18397
> Project: HBase
>  Issue Type: Umbrella
>  Components: regionserver, Scanners
>Affects Versions: 1.3.0
>Reporter: Mikhail Antonov
>Priority: Critical
> Fix For: 1.3.2
>
>
> This jira is an umbrella for a set of issues around store file accounting on 
> branch-1.3 and branch-1 (I believe).
> At this point I do believe that many / most of those issues are related to 
> the backport of HBASE-13082 done a long time ago. A number of related issues 
> were identified and fixed previously, but some are still yet to be debugged 
> and fixed. I think that this class of problems prevents us from releasing 
> 1.3.2 and moving the stable pointer to branch-1.3 at this point, so marking 
> as critical.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-17908) Upgrade guava

2017-07-19 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-17908:
--
Attachment: HBASE-17908.master.027.patch

> Upgrade guava
> -
>
> Key: HBASE-17908
> URL: https://issues.apache.org/jira/browse/HBASE-17908
> Project: HBase
>  Issue Type: Sub-task
>  Components: dependencies
>Reporter: Balazs Meszaros
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: 0001-HBASE-17908-Upgrade-guava.022.patch, 
> HBASE-17908.master.001.patch, HBASE-17908.master.002.patch, 
> HBASE-17908.master.003.patch, HBASE-17908.master.004.patch, 
> HBASE-17908.master.005.patch, HBASE-17908.master.006.patch, 
> HBASE-17908.master.007.patch, HBASE-17908.master.008.patch, 
> HBASE-17908.master.009.patch, HBASE-17908.master.010.patch, 
> HBASE-17908.master.011.patch, HBASE-17908.master.012.patch, 
> HBASE-17908.master.013.patch, HBASE-17908.master.013.patch, 
> HBASE-17908.master.014.patch, HBASE-17908.master.015.patch, 
> HBASE-17908.master.015.patch, HBASE-17908.master.016.patch, 
> HBASE-17908.master.017.patch, HBASE-17908.master.018.patch, 
> HBASE-17908.master.019.patch, HBASE-17908.master.020.patch, 
> HBASE-17908.master.021.patch, HBASE-17908.master.021.patch, 
> HBASE-17908.master.022.patch, HBASE-17908.master.023.patch, 
> HBASE-17908.master.024.patch, HBASE-17908.master.025.patch, 
> HBASE-17908.master.026.patch, HBASE-17908.master.027.patch
>
>
> Currently we are using guava 12.0.1, but the latest version is 21.0. 
> Upgrading guava is always a hassle because it is not always backward 
> compatible with itself.
> Currently I think there are two approaches:
> 1. Upgrade guava to the newest version (21.0) and shade it.
> 2. Upgrade guava to a version which does not break our builds (15.0).
> If we can update it, some dependencies should be removed: 
> commons-collections, commons-codec, ...



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17908) Upgrade guava

2017-07-19 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16092918#comment-16092918
 ] 

stack commented on HBASE-17908:
---

.027 is a rebase and fixes a reference to thirdparty 1.0.0-SNAPSHOT.

> Upgrade guava
> -
>
> Key: HBASE-17908
> URL: https://issues.apache.org/jira/browse/HBASE-17908
> Project: HBase
>  Issue Type: Sub-task
>  Components: dependencies
>Reporter: Balazs Meszaros
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: 0001-HBASE-17908-Upgrade-guava.022.patch, 
> HBASE-17908.master.001.patch, HBASE-17908.master.002.patch, 
> HBASE-17908.master.003.patch, HBASE-17908.master.004.patch, 
> HBASE-17908.master.005.patch, HBASE-17908.master.006.patch, 
> HBASE-17908.master.007.patch, HBASE-17908.master.008.patch, 
> HBASE-17908.master.009.patch, HBASE-17908.master.010.patch, 
> HBASE-17908.master.011.patch, HBASE-17908.master.012.patch, 
> HBASE-17908.master.013.patch, HBASE-17908.master.013.patch, 
> HBASE-17908.master.014.patch, HBASE-17908.master.015.patch, 
> HBASE-17908.master.015.patch, HBASE-17908.master.016.patch, 
> HBASE-17908.master.017.patch, HBASE-17908.master.018.patch, 
> HBASE-17908.master.019.patch, HBASE-17908.master.020.patch, 
> HBASE-17908.master.021.patch, HBASE-17908.master.021.patch, 
> HBASE-17908.master.022.patch, HBASE-17908.master.023.patch, 
> HBASE-17908.master.024.patch, HBASE-17908.master.025.patch, 
> HBASE-17908.master.026.patch, HBASE-17908.master.027.patch
>
>
> Currently we are using guava 12.0.1, but the latest version is 21.0. 
> Upgrading guava is always a hassle because it is not always backward 
> compatible with itself.
> Currently I think there are two approaches:
> 1. Upgrade guava to the newest version (21.0) and shade it.
> 2. Upgrade guava to a version which does not break our builds (15.0).
> If we can update it, some dependencies should be removed: 
> commons-collections, commons-codec, ...



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18390) Sleep too long when finding region location failed

2017-07-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16092962#comment-16092962
 ] 

Hudson commented on HBASE-18390:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #3399 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3399/])
HBASE-18390 Sleep too long when finding region location failed (yangzhe1991: 
rev 6b7ebc019c8c64a7e7a00461029f699b0f0e3772)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionUtils.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestConnectionUtils.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionAdminServiceCallable.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionServerCallable.java


> Sleep too long when finding region location failed
> --
>
> Key: HBASE-18390
> URL: https://issues.apache.org/jira/browse/HBASE-18390
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.1, 1.2.6, 1.1.11, 2.0.0-alpha-1
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 3.0.0, 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-alpha-2, 1.1.12
>
> Attachments: HBASE-18390.v01.patch, HBASE-18390.v02.patch, 
> HBASE-18390.v03.patch
>
>
> If RegionServerCallable#prepare fails in getRegionLocation, the location in 
> this callable object is null, and we sleep before retrying. However, when the 
> location is null we sleep at least 10 seconds, and the request fails directly 
> if the operation timeout is less than 10 seconds. I think there is no need to 
> keep the MIN_WAIT_DEAD_SERVER logic; plain backoff sleeping is fine for most 
> cases.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
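
The backoff the description argues for looks roughly like this (a sketch with
jittered exponential pauses; the base pause and cap are illustrative, not
HBase's actual RETRY_BACKOFF table):
{code}
import java.util.concurrent.ThreadLocalRandom;

public class BackoffSketch {
  // jittered exponential backoff, with no fixed 10-second floor for the
  // null-location case
  static long pauseMillis(long basePause, int tries) {
    long pause = basePause * (1L << Math.min(tries, 10)); // cap the exponent
    // jitter: anywhere between half the pause and the full pause
    return pause / 2 + ThreadLocalRandom.current().nextLong(pause / 2 + 1);
  }

  public static void main(String[] args) {
    for (int t = 0; t < 5; t++) {
      System.out.println("try " + t + ": sleep ~" + pauseMillis(100, t) + " ms");
    }
  }
}
{code}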


[jira] [Updated] (HBASE-18375) The pool chunks from ChunkCreator are deallocated while in pool because there is no reference to them

2017-07-19 Thread Anastasia Braginsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anastasia Braginsky updated HBASE-18375:

Attachment: HBASE-18375-V04.patch

> The pool chunks from ChunkCreator are deallocated while in pool because there 
> is no reference to them
> -
>
> Key: HBASE-18375
> URL: https://issues.apache.org/jira/browse/HBASE-18375
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0-alpha-1
>Reporter: Anastasia Braginsky
>Priority: Critical
> Fix For: 2.0.0, 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18375-V01.patch, HBASE-18375-V02.patch, 
> HBASE-18375-V03.patch, HBASE-18375-V04.patch
>
>
> Because the MSLAB list of chunks was changed to a list of chunk IDs, the 
> chunks returned to the pool can be deallocated by the JVM because there is no 
> reference to them. The solution is to protect pool chunks from GC via the 
> strong map of ChunkCreator introduced by HBASE-18010. Will prepare the patch 
> today.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
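
A sketch of the hazard and the proposed fix (class and field names are
illustrative): once the MSLAB holds only chunk IDs, a pooled chunk whose sole
reference is weak can be reclaimed under GC pressure, so pooled chunks need a
strong reference for as long as they sit in the pool.
{code}
import java.lang.ref.WeakReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class ChunkCreatorSketch {
  static class Chunk { /* backing buffer elided */ }

  // chunks tracked only weakly may be GC'd while their ID still sits in
  // the pool
  final Map<Integer, WeakReference<Chunk>> weakIdMap = new ConcurrentHashMap<>();
  // the fix: chunks owned by the pool are also held strongly, so the GC
  // cannot take them out from under the pool
  final Map<Integer, Chunk> strongIdMap = new ConcurrentHashMap<>();

  void trackPooledChunk(int id, Chunk c) {
    weakIdMap.put(id, new WeakReference<>(c)); // ID lookup stays cheap
    strongIdMap.put(id, c);                    // strong ref pins the chunk
  }
}
{code}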


[jira] [Updated] (HBASE-18402) Thrift2 should support DeleteFamily and DeleteFamilyVersion type

2017-07-19 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-18402:
-
Attachment: HBASE-18402.v1.patch

> Thrift2 should support  DeleteFamily and DeleteFamilyVersion type
> -
>
> Key: HBASE-18402
> URL: https://issues.apache.org/jira/browse/HBASE-18402
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 2.0.0-alpha-1
>Reporter: Zheng Hu
>Assignee: Zheng Hu
> Attachments: HBASE-18402.v1.patch
>
>
> Currently, our thrift2 only supports two delete types. Actually, there are 
> four delete types, and we should support the other two: DeleteFamily and 
> DeleteFamilyVersion. 
> {code}
> /**
>  * Specify type of delete:
>  *  - DELETE_COLUMN means exactly one version will be removed,
>  *  - DELETE_COLUMNS means previous versions will also be removed.
>  */
> enum TDeleteType {
>   DELETE_COLUMN = 0,
>   DELETE_COLUMNS = 1
> }
> {code} 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
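
One possible shape of the extended enum, with the two new values sketched in
(names and ordinals here are illustrative; the committed patch may differ):
{code}
enum TDeleteType {
  DELETE_COLUMN = 0,
  DELETE_COLUMNS = 1,
  DELETE_FAMILY = 2,
  DELETE_FAMILY_VERSION = 3
}
{code}
Keeping the two existing ordinals unchanged preserves wire compatibility for
current thrift2 clients.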


[jira] [Updated] (HBASE-18402) Thrift2 should support DeleteFamily and DeleteFamilyVersion type

2017-07-19 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-18402:
-
Status: Patch Available  (was: Reopened)

> Thrift2 should support  DeleteFamily and DeleteFamilyVersion type
> -
>
> Key: HBASE-18402
> URL: https://issues.apache.org/jira/browse/HBASE-18402
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 2.0.0-alpha-1
>Reporter: Zheng Hu
>Assignee: Zheng Hu
> Attachments: HBASE-18402.v1.patch
>
>
> Currently, our thrift2 only supports two delete types. Actually, there are 
> four delete types, and we should support the other two: DeleteFamily and 
> DeleteFamilyVersion. 
> {code}
> /**
>  * Specify type of delete:
>  *  - DELETE_COLUMN means exactly one version will be removed,
>  *  - DELETE_COLUMNS means previous versions will also be removed.
>  */
> enum TDeleteType {
>   DELETE_COLUMN = 0,
>   DELETE_COLUMNS = 1
> }
> {code} 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17819) Reduce the heap overhead for BucketCache

2017-07-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16092979#comment-16092979
 ] 

Hadoop QA commented on HBASE-17819:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
24s{color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
39s{color} | {color:red} hbase-server in master has 9 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 38s{color} 
| {color:red} hbase-server generated 3 new + 6 unchanged - 0 fixed = 9 total 
(was 6) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
32m  7s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
25s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 40m  6s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 93m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.io.hfile.TestLazyDataBlockDecompression |
|   | hadoop.hbase.regionserver.TestBlocksScanned |
|   | hadoop.hbase.coprocessor.TestCoprocessorInterface |
|   | hadoop.hbase.io.hfile.TestPrefetch |
|   | hadoop.hbase.io.hfile.TestHFileEncryption |
|   | hadoop.hbase.regionserver.TestScanner |
|   | hadoop.hbase.io.hfile.TestHFile |
|   | hadoop.hbase.io.TestHalfStoreFileReader |
|   | hadoop.hbase.regionserver.TestHStoreFile |
| Timed out junit tests | 
org.apache.hadoop.hbase.io.hfile.bucket.TestBucketCache |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.03.0-ce Server=17.03.0-ce Image:yetus/hbase:757bf37 |
| JIRA Issue | HBASE-17819 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/128779

[jira] [Commented] (HBASE-18251) Remove unnecessary traversing to the first and last keys in the CellSet

2017-07-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16092993#comment-16092993
 ] 

Hadoop QA commented on HBASE-18251:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
33s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
53s{color} | {color:red} hbase-server in master has 9 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
30m 46s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}120m  
6s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}166m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:757bf37 |
| JIRA Issue | HBASE-18251 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877950/HBASE-18251-v2.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 8c6c2a5021c8 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 6b7ebc0 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC3 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7711/artifact/patchprocess/branch-findbugs-hbase-server-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7711/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7711/console |
| Powered by | Apache Yetus 0.4.0   http://yetus.apache.org |


This message was automatically generated.



> Remove unnecessary traversing to the first and last keys in the CellSet
> ---
>
> Key

[jira] [Created] (HBASE-18413) SHORT DESCRIPTION

2017-07-19 Thread Silvano Tamburini (JIRA)
Silvano Tamburini created HBASE-18413:
-

 Summary: SHORT DESCRIPTION
 Key: HBASE-18413
 URL: https://issues.apache.org/jira/browse/HBASE-18413
 Project: HBase
  Issue Type: Bug
  Components: documentation
Reporter: Silvano Tamburini
Priority: Blocker






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18368) FilterList with multiple FamilyFilters concatenated by OR does not work.

2017-07-19 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16092995#comment-16092995
 ] 

Zheng Hu commented on HBASE-18368:
--

bq. I don't think the UT you provided is valid, as we change a storescanner, we 
should not bypass filter.
The UT is valid. I understand that NEXT_ROW is a CF-level definition, but the 
problem is that a user may not, and once a user saves some state (relative to 
NEXT_ROW) in his own filter, the bug occurs.

> FilterList with multiple FamilyFilters concatenated by OR does not work.
> 
>
> Key: HBASE-18368
> URL: https://issues.apache.org/jira/browse/HBASE-18368
> Project: HBase
>  Issue Type: Sub-task
>  Components: Filters
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: Peter Somogyi
>Assignee: Allan Yang
>Priority: Critical
> Attachments: HBASE-18368.branch-1.patch, 
> HBASE-18368.branch-1.v2.patch, HBASE-18368.branch-1.v3.patch, 
> HBASE-18368.patch
>
>
> Scan gives back incomplete list if multiple filters are combined with OR / 
> MUST_PASS_ONE.
> Using 2 FamilyFilters in a FilterList using MUST_PASS_ONE operator will give 
> back results for only the first Filter.
> {code:java|title=Test code}
>   @Test
>   public void testFiltersWithOr() throws Exception {
> TableName tn = TableName.valueOf("MyTest");
> Table table = utility.createTable(tn, new String[] {"cf1", "cf2"});
> byte[] CF1 = Bytes.toBytes("cf1");
> byte[] CF2 = Bytes.toBytes("cf2");
> Put put1 = new Put(Bytes.toBytes("0"));
> put1.addColumn(CF1, Bytes.toBytes("col_a"), Bytes.toBytes(0));
> table.put(put1);
> Put put2 = new Put(Bytes.toBytes("0"));
> put2.addColumn(CF2, Bytes.toBytes("col_b"), Bytes.toBytes(0));
> table.put(put2);
> FamilyFilter filterCF1 = new FamilyFilter(CompareFilter.CompareOp.EQUAL, 
> new BinaryComparator(CF1));
> FamilyFilter filterCF2 = new FamilyFilter(CompareFilter.CompareOp.EQUAL, 
> new BinaryComparator(CF2));
> FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ONE);
> filterList.addFilter(filterCF1);
> filterList.addFilter(filterCF2);
> Scan scan = new Scan();
> scan.setFilter(filterList);
> ResultScanner scanner = table.getScanner(scan);
> System.out.println(filterList);
> for (Result rr = scanner.next(); rr != null; rr = scanner.next()) {
>   System.out.println(rr);
> }
>   }
> {code}
> {noformat:title=Output}
> FilterList OR (2/2): [FamilyFilter (EQUAL, cf1), FamilyFilter (EQUAL, cf2)]
> keyvalues={0/cf1:col_a/1499852754957/Put/vlen=4/seqid=0}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
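
To make the state concern concrete, here is a hypothetical user filter (not
from the HBase codebase) that treats NEXT_ROW as a promise that it is done
with the current row:
{code}
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.filter.FilterBase;

// Hypothetical stateful filter: it returns NEXT_ROW and assumes it will not
// be asked about that row again. Inside a MUST_PASS_ONE FilterList, a sibling
// filter returning INCLUDE keeps the scan inside the row (e.g. in a second
// column family), so this filter is invoked again for the same row and
// over-counts.
public class RowCountingFilter extends FilterBase {
  private int rows = 0;

  @Override
  public ReturnCode filterKeyValue(Cell c) {
    rows++;                      // assumes each call starts a new row...
    return ReturnCode.NEXT_ROW;  // ...because we asked to skip the rest of it
  }

  public int getRows() {
    return rows;
  }
}
{code}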


[jira] [Updated] (HBASE-18402) Thrift2 should support DeleteFamilyVersion type

2017-07-19 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-18402:
-
Summary: Thrift2 should support  DeleteFamilyVersion type  (was: Thrift2 
should support  DeleteFamily and DeleteFamilyVersion type)

> Thrift2 should support  DeleteFamilyVersion type
> 
>
> Key: HBASE-18402
> URL: https://issues.apache.org/jira/browse/HBASE-18402
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 2.0.0-alpha-1
>Reporter: Zheng Hu
>Assignee: Zheng Hu
> Attachments: HBASE-18402.v1.patch
>
>
> Currently, our thrift2 only supports two delete types. Actually, there are 
> four delete types, and we should support the other two: DeleteFamily and 
> DeleteFamilyVersion. 
> {code}
> /**
>  * Specify type of delete:
>  *  - DELETE_COLUMN means exactly one version will be removed,
>  *  - DELETE_COLUMNS means previous versions will also be removed.
>  */
> enum TDeleteType {
>   DELETE_COLUMN = 0,
>   DELETE_COLUMNS = 1
> }
> {code} 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18308) Eliminate the findbugs warnings for hbase-server

2017-07-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16093014#comment-16093014
 ] 

Hadoop QA commented on HBASE-18308:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 
11s{color} | {color:red} hbase-server in master has 9 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
34m 22s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
21s{color} | {color:green} hbase-server generated 0 new + 0 unchanged - 9 fixed 
= 0 total (was 9) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}121m 
18s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}172m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:757bf37 |
| JIRA Issue | HBASE-18308 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877951/HBASE-18308.v1.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 468faab3a1cc 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 6b7ebc0 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC3 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7712/artifact/patchprocess/branch-findbugs-hbase-server-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7712/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7712/console |
| Powered by | Apache Yetus 0.4.0   http://yetus.apache.org |


This message was automatically generated.



> Eliminate the findbugs warnings for hbase-server
> 
>
>  

[jira] [Created] (HBASE-18414) FilterList with MUST_PASS_ALL can be optimized to choose the highest key among filters in filter list to seek

2017-07-19 Thread Zheng Hu (JIRA)
Zheng Hu created HBASE-18414:


 Summary: FilterList with MUST_PASS_ALL can be optimized to choose 
the highest key among filters in filter list to seek 
 Key: HBASE-18414
 URL: https://issues.apache.org/jira/browse/HBASE-18414
 Project: HBase
  Issue Type: Bug
Reporter: Zheng Hu






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18414) FilterList with MUST_PASS_ALL can be optimized to choose the highest key among filters in filter list to seek

2017-07-19 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-18414:
-
Issue Type: Sub-task  (was: Bug)
Parent: HBASE-18410

> FilterList with MUST_PASS_ALL can be optimized to choose the highest key 
> among filters in filter list to seek 
> --
>
> Key: HBASE-18414
> URL: https://issues.apache.org/jira/browse/HBASE-18414
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
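
The proposed optimization can be sketched as follows (the method shape follows
the public Filter API, but the combining logic is illustrative, not the
committed patch). With MUST_PASS_ALL every filter must accept a cell, so the
list can safely seek to the largest hint any member returns:
{code}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellComparator;
import org.apache.hadoop.hbase.filter.Filter;

public class MaxHintSketch {
  static Cell getNextCellHintForAll(List<Filter> filters, Cell current,
      CellComparator comparator) throws IOException {
    Cell max = null;
    for (Filter f : filters) {
      Cell hint = f.getNextCellHint(current);
      // no cell smaller than every member's hint can pass all members
      if (hint != null && (max == null || comparator.compare(hint, max) > 0)) {
        max = hint;
      }
    }
    return max; // null means no member offered a hint
  }
}
{code}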


[jira] [Updated] (HBASE-18295) The result contains the cells across different rows

2017-07-19 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-18295:
---
Issue Type: Sub-task  (was: Bug)
Parent: HBASE-18397

>  The result contains the cells across different rows
> 
>
> Key: HBASE-18295
> URL: https://issues.apache.org/jira/browse/HBASE-18295
> Project: HBase
>  Issue Type: Sub-task
>  Components: Scanners
>Affects Versions: 1.3.0, 1.3.1, 2.0.0-alpha-1
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Blocker
> Fix For: 3.0.0, 1.4.0, 1.3.2, 1.5.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18295.branch-1.3.v0.patch, 
> HBASE-18295.branch-1.3.v1.patch, HBASE-18295.branch-1.3.v2.patch, 
> HBASE-18295.branch-1.v0.patch, HBASE-18295.branch-1.v1.patch, 
> HBASE-18295.branch-1.v1.patch, HBASE-18295.branch-1.v1.patch, 
> HBASE-18295.branch-1.v2.patch, HBASE-18295.v0.patch, HBASE-18295.v1.patch, 
> HBASE-18295.v2.patch, HBASE-18295.v2.patch, HBASE-18295.v3.patch
>
>
> From the [flaky 
> dashboard|https://builds.apache.org/job/HBASE-Find-Flaky-Tests/lastSuccessfulBuild/artifact/dashboard.html]
> If we use a cell which won't be flushed to disk as the top cell to reopen 
> the scanners, the new top cell will change. If the new top cell is in a 
> different row, the matcher will reset, and then the matcher will accept the 
> new top cell...
> TestStore#testFlushBeforeCompletingScan reproduces the bug.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18397) StoreFile accounting issues on branch-1.3 and branch-1

2017-07-19 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16093025#comment-16093025
 ] 

Chia-Ping Tsai commented on HBASE-18397:


bq. Change those to be subtasks of this?
done

> StoreFile accounting issues on branch-1.3 and branch-1
> --
>
> Key: HBASE-18397
> URL: https://issues.apache.org/jira/browse/HBASE-18397
> Project: HBase
>  Issue Type: Umbrella
>  Components: regionserver, Scanners
>Affects Versions: 1.3.0
>Reporter: Mikhail Antonov
>Priority: Critical
> Fix For: 1.3.2
>
>
> This jira is an umbrella for a set of issues around store file accounting on 
> branch-1.3 and branch-1 (I believe).
> At this point I do believe that many / most of those issues are related to 
> the backport of HBASE-13082 done a long time ago. A number of related issues 
> were identified and fixed previously, but some are still yet to be debugged 
> and fixed. I think that this class of problems prevents us from releasing 
> 1.3.2 and moving the stable pointer to branch-1.3 at this point, so marking 
> as critical.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-17887) Row-level consistency is broken for read

2017-07-19 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-17887:
---
Issue Type: Sub-task  (was: Bug)
Parent: HBASE-18397

> Row-level consistency is broken for read
> 
>
> Key: HBASE-17887
> URL: https://issues.apache.org/jira/browse/HBASE-17887
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Affects Versions: 2.0.0, 1.3.0
>Reporter: Umesh Agashe
>Assignee: Chia-Ping Tsai
>Priority: Blocker
> Fix For: 2.0.0, 1.4.0, 1.3.2
>
> Attachments: HBASE-17887.branch-1.v0.patch, 
> HBASE-17887.branch-1.v1.patch, HBASE-17887.branch-1.v1.patch, 
> HBASE-17887.branch-1.v2.patch, HBASE-17887.branch-1.v2.patch, 
> HBASE-17887.branch-1.v3.patch, HBASE-17887.branch-1.v4.patch, 
> HBASE-17887.branch-1.v4.patch, HBASE-17887.branch-1.v4.patch, 
> HBASE-17887.branch-1.v5.patch, HBASE-17887.branch-1.v6.patch, 
> HBASE-17887.ut.patch, HBASE-17887.v0.patch, HBASE-17887.v1.patch, 
> HBASE-17887.v2.patch, HBASE-17887.v3.patch, HBASE-17887.v4.patch, 
> HBASE-17887.v5.patch, HBASE-17887.v5.patch
>
>
> The scanner of the latest memstore may be lost if we make quick flushes. The 
> following steps may help explain this issue.
> # put data_A (seq id = 10, active store has data_A and snapshot is empty)
> # snapshot of 1st flush (active is empty and snapshot stores data_A)
> # put data_B (seq id = 11, active store has data_B and snapshot stores data_A)
> # create user scanner (read point = 11, so it should see data_B)
> # commit of 1st flush
> #* clear snapshot (hfile_A has data_A, active store has data_B, and snapshot 
> is empty)
> #* update the reader (the user scanner receives hfile_A)
> # snapshot of 2nd flush (active is empty and snapshot stores data_B)
> # commit of 2nd flush
> #* clear snapshot (hfile_A has data_A, hfile_B has data_B, active is empty, 
> and snapshot is empty) – this is the critical piece.
> #* -update the reader- (hasn't happened)
> # user scanner updates the kv scanners (it creates a scanner for hfile_A but 
> nothing for the memstore)
> # user sees the older data_A – wrong result



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18308) Eliminate the findbugs warnings for hbase-server

2017-07-19 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16093029#comment-16093029
 ] 

Chia-Ping Tsai commented on HBASE-18308:


[~tedyu] Any other concerns? Thanks

> Eliminate the findbugs warnings for hbase-server
> 
>
> Key: HBASE-18308
> URL: https://issues.apache.org/jira/browse/HBASE-18308
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 3.0.0, 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-alpha-2
>
> Attachments: HBASE-18308.branch-1.2.v0.patch, 
> HBASE-18308.branch-1.2.v0.patch, HBASE-18308.branch-1.3.v0.patch, 
> HBASE-18308.branch-1.v0.patch, HBASE-18308.branch-1.v1.patch, 
> HBASE-18308.branch-1.v2.patch, HBASE-18308.v0.patch, HBASE-18308.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-18415) Fix client.TestClientScannerRPCTimeout

2017-07-19 Thread Chia-Ping Tsai (JIRA)
Chia-Ping Tsai created HBASE-18415:
--

 Summary: Fix client.TestClientScannerRPCTimeout
 Key: HBASE-18415
 URL: https://issues.apache.org/jira/browse/HBASE-18415
 Project: HBase
  Issue Type: Bug
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai
Priority: Minor


{quote}
Running org.apache.hadoop.hbase.client.TestClientScannerRPCTimeout
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 10.046 sec <<< 
FAILURE! - in org.apache.hadoop.hbase.client.TestClientScannerRPCTimeout
testScannerNextRPCTimesout(org.apache.hadoop.hbase.client.TestClientScannerRPCTimeout)
  Time elapsed: 4.186 sec  <<< ERROR!
org.apache.hadoop.hbase.TableExistsException: testScannerNextRPCTimesout
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at 
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
at 
org.apache.hadoop.hbase.util.ForeignExceptionUtil.toIOException(ForeignExceptionUtil.java:45)
at 
org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.convertResult(HBaseAdmin.java:4773)
at 
org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedureResult(HBaseAdmin.java:4731)
at 
org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdmin.java:4664)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:678)
at 
org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1500)
at 
org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1547)
at 
org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1438)
at 
org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1414)
at 
org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1370)
at 
org.apache.hadoop.hbase.client.TestClientScannerRPCTimeout.testScannerNextRPCTimesout(TestClientScannerRPCTimeout.java:92)
Caused by: org.apache.hadoop.ipc.RemoteException: testScannerNextRPCTimesout
at 
org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:286)
at 
org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:107)
at 
org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:59)
at 
org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:139)
at 
org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:506)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1152)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:940)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:893)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:76)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:478)
{quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-18416) FilterList with MUST_PASS_ONE can be optimized to choose the minimal step to seek rather than SKIP cell one by one

2017-07-19 Thread Zheng Hu (JIRA)
Zheng Hu created HBASE-18416:


 Summary: FilterList with MUST_PASS_ONE can be optimized to choose 
the minimal step to seek rather than SKIP cell one by one
 Key: HBASE-18416
 URL: https://issues.apache.org/jira/browse/HBASE-18416
 Project: HBase
  Issue Type: Sub-task
Reporter: Zheng Hu






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18402) Thrift2 should support DeleteFamilyVersion type

2017-07-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16093034#comment-16093034
 ] 

Hadoop QA commented on HBASE-18402:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
24s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
46m  1s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
18s{color} | {color:green} hbase-thrift in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:757bf37 |
| JIRA Issue | HBASE-18402 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877974/HBASE-18402.v1.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux a53383039430 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / f10f819 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7717/testReport/ |
| modules | C: hbase-thrift U: hbase-thrift |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7717/console |
| Powered by | Apache Yetus 0.4.0   http://yetus.apache.org |


This message was automatically generated.



> Thrift2 should support  DeleteFamilyVersion type
> 
>
> Key: HBASE-18402
> URL: https://issues.apache.org/jira/browse/HBASE-18402
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 2.0.0-alpha-1
>Reporter: Zheng Hu
>Assignee: Zheng Hu
> Attachments: HBASE-18402.v1.patch
>
>
> Currently,  our th

[jira] [Commented] (HBASE-18160) Fix incorrect logic in FilterList.filterKeyValue

2017-07-19 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16093038#comment-16093038
 ] 

Zheng Hu commented on HBASE-18160:
--

bq. I'm thinking of another optimization for filterlist with MUST_PASS_ONE. If 
all filter in the filterlist return NEXT_ROW/NEXT_COL, can we return 
NEXT_ROW/NEXT_COL?
I created HBASE-18416 for it. 
bq.  But the optimized way would be return a highest key hint of all these 
returned hints.
I created HBASE-18414 for it.
bq. In case of MUST_PASS_ALL operator also, we don't handle 
INCLUDE_AND_SEEK_NEXT_ROW.
You are right, let me fix it.
bq. A small reset one liner can avoid such a possible issue! 
Got it, let me fix it.

> Fix incorrect  logic in FilterList.filterKeyValue
> -
>
> Key: HBASE-18160
> URL: https://issues.apache.org/jira/browse/HBASE-18160
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
> Attachments: HBASE-18160.branch-1.1.v1.patch, 
> HBASE-18160.branch-1.v1.patch, HBASE-18160.v1.patch, HBASE-18160.v2.patch, 
> HBASE-18160.v2.patch
>
>
> As HBASE-17678 said, there are two problems in the FilterList.filterKeyValue 
> implementation: 
> 1. FilterList did not consider the INCLUDE_AND_SEEK_NEXT_ROW case (it seems 
> INCLUDE_AND_SEEK_NEXT_ROW is a newly added case, and the dev forgot to 
> consider FilterList), so if a user uses INCLUDE_AND_SEEK_NEXT_ROW in his own 
> Filter wrapped by a FilterList, it'll throw an 
> IllegalStateException("Received code is not valid."). 
> 2. For FilterList with MUST_PASS_ONE, if filter-A in the filter list returns 
> INCLUDE and filter-B returns INCLUDE_AND_NEXT_COL, the FilterList will 
> finally return INCLUDE_AND_NEXT_COL. According to the minimal step rule, 
> this is incorrect. (A filter list with MUST_PASS_ONE chooses the minimal 
> step among its filters. Let's call it: the Minimal Step Rule.)
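
As an aside, the rule can be made concrete with a small self-contained sketch. The enum below mirrors Filter.ReturnCode, and ranking the codes by enum order is an illustrative assumption, not the shipped FilterList logic:

{code}
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class MinimalStepRule {
  // Declared roughly from the smallest scan step to the largest.
  enum Code { INCLUDE, SKIP, INCLUDE_AND_NEXT_COL, NEXT_COL,
              INCLUDE_AND_SEEK_NEXT_ROW, NEXT_ROW }

  // MUST_PASS_ONE: the list may only advance the scan as far as its most
  // permissive (minimal-step) child filter allows.
  static Code mergeMustPassOne(List<Code> childCodes) {
    return childCodes.stream()
        .min(Comparator.comparingInt(Code::ordinal))
        .orElse(Code.INCLUDE);
  }

  public static void main(String[] args) {
    // filter-A returns INCLUDE and filter-B returns INCLUDE_AND_NEXT_COL:
    // the list must answer INCLUDE, the minimal step.
    System.out.println(mergeMustPassOne(
        Arrays.asList(Code.INCLUDE, Code.INCLUDE_AND_NEXT_COL)));
  }
}
{code}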



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18251) Remove unnecessary traversing to the first and last keys in the CellSet

2017-07-19 Thread Anastasia Braginsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16093042#comment-16093042
 ] 

Anastasia Braginsky commented on HBASE-18251:
-

Thank you, [~brfrn169]! I have looked at your latest patch and it looks good to me.

> Remove unnecessary traversing to the first and last keys in the CellSet
> ---
>
> Key: HBASE-18251
> URL: https://issues.apache.org/jira/browse/HBASE-18251
> Project: HBase
>  Issue Type: Bug
>Reporter: Anastasia Braginsky
>Assignee: Toshihiro Suzuki
> Attachments: HBASE-18251.patch, HBASE-18251-v2.patch
>
>
> The implementation of finding the first and last keys in the CellSet is as 
> follows:
> {code}
>   public Cell first() {
>     return this.delegatee.get(this.delegatee.firstKey());
>   }
>   public Cell last() {
>     return this.delegatee.get(this.delegatee.lastKey());
>   }
> {code}
> Recall that we have a Cell-to-Cell mapping, so the methods that fetch the 
> first/last key already return a Cell. Thus there is no need to waste time on 
> the get() call for the same Cell.
> Fix: just return first/lastKey() directly; it should be at least twice as 
> efficient.
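
A sketch of the proposed change (assuming, as the description implies, that delegatee is a NavigableMap keyed by the Cell itself, so firstKey()/lastKey() already return the Cell):

{code}
  public Cell first() {
    // firstKey() is already the Cell; no need to look it up again.
    return this.delegatee.firstKey();
  }

  public Cell last() {
    return this.delegatee.lastKey();
  }
{code}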



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18308) Eliminate the findbugs warnings for hbase-server

2017-07-19 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16093047#comment-16093047
 ] 

Ted Yu commented on HBASE-18308:


For LoadIncrementalHFiles.java, why is the code removed?

> Eliminate the findbugs warnings for hbase-server
> 
>
> Key: HBASE-18308
> URL: https://issues.apache.org/jira/browse/HBASE-18308
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 3.0.0, 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-alpha-2
>
> Attachments: HBASE-18308.branch-1.2.v0.patch, 
> HBASE-18308.branch-1.2.v0.patch, HBASE-18308.branch-1.3.v0.patch, 
> HBASE-18308.branch-1.v0.patch, HBASE-18308.branch-1.v1.patch, 
> HBASE-18308.branch-1.v2.patch, HBASE-18308.v0.patch, HBASE-18308.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18368) FilterList with multiple FamilyFilters concatenated by OR does not work.

2017-07-19 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16093066#comment-16093066
 ] 

Ted Yu commented on HBASE-18368:


For RowCountFilter, the correct method to override is (assuming the filter 
counts rows):
{code}
  abstract public boolean filterRowKey(Cell firstRowCell) throws IOException;
{code}
See the following javadoc for filterKeyValue():
{code}
   * If your filter returns ReturnCode.NEXT_ROW, it should return
   * ReturnCode.NEXT_ROW until {@link #reset()} is called, just in case the
   * caller calls for the next row.
{code}
If we cannot reach consensus on Allan's fix, HBASE-17678 should be reverted 
from branch-1 (assuming 1.4.0 release is around the corner).
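
To illustrate that contract, here is a hedged sketch of a row-counting filter written that way (all names are made up for illustration; this is not Allan's patch):

{code}
import java.io.IOException;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.filter.FilterBase;

public class StickyRowCountFilter extends FilterBase {
  private final long limit;
  private long seenRows = 0;
  private boolean skipCurrentRow = false;

  public StickyRowCountFilter(long limit) {
    this.limit = limit;
  }

  @Override
  public boolean filterRowKey(Cell firstRowCell) throws IOException {
    // Count once per row here, not per cell in filterKeyValue().
    seenRows++;
    skipCurrentRow = seenRows > limit;
    return skipCurrentRow;   // true = exclude the whole row
  }

  @Override
  public ReturnCode filterKeyValue(Cell c) throws IOException {
    // Sticky: answer NEXT_ROW for every cell of a filtered row, as the
    // javadoc requires, until reset() is called for the next row.
    return skipCurrentRow ? ReturnCode.NEXT_ROW : ReturnCode.INCLUDE;
  }

  @Override
  public void reset() throws IOException {
    skipCurrentRow = false;  // a new row is starting
  }
}
{code}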

> FilterList with multiple FamilyFilters concatenated by OR does not work.
> 
>
> Key: HBASE-18368
> URL: https://issues.apache.org/jira/browse/HBASE-18368
> Project: HBase
>  Issue Type: Sub-task
>  Components: Filters
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: Peter Somogyi
>Assignee: Allan Yang
>Priority: Critical
> Attachments: HBASE-18368.branch-1.patch, 
> HBASE-18368.branch-1.v2.patch, HBASE-18368.branch-1.v3.patch, 
> HBASE-18368.patch
>
>
> Scan gives back an incomplete list if multiple filters are combined with OR / 
> MUST_PASS_ONE.
> Using 2 FamilyFilters in a FilterList with the MUST_PASS_ONE operator will 
> give back results for only the first Filter.
> {code:java|title=Test code}
>   @Test
>   public void testFiltersWithOr() throws Exception {
> TableName tn = TableName.valueOf("MyTest");
> Table table = utility.createTable(tn, new String[] {"cf1", "cf2"});
> byte[] CF1 = Bytes.toBytes("cf1");
> byte[] CF2 = Bytes.toBytes("cf2");
> Put put1 = new Put(Bytes.toBytes("0"));
> put1.addColumn(CF1, Bytes.toBytes("col_a"), Bytes.toBytes(0));
> table.put(put1);
> Put put2 = new Put(Bytes.toBytes("0"));
> put2.addColumn(CF2, Bytes.toBytes("col_b"), Bytes.toBytes(0));
> table.put(put2);
> FamilyFilter filterCF1 = new FamilyFilter(CompareFilter.CompareOp.EQUAL, 
> new BinaryComparator(CF1));
> FamilyFilter filterCF2 = new FamilyFilter(CompareFilter.CompareOp.EQUAL, 
> new BinaryComparator(CF2));
> FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ONE);
> filterList.addFilter(filterCF1);
> filterList.addFilter(filterCF2);
> Scan scan = new Scan();
> scan.setFilter(filterList);
> ResultScanner scanner = table.getScanner(scan);
> System.out.println(filterList);
> for (Result rr = scanner.next(); rr != null; rr = scanner.next()) {
>   System.out.println(rr);
> }
>   }
> {code}
> {noformat:title=Output}
> FilterList OR (2/2): [FamilyFilter (EQUAL, cf1), FamilyFilter (EQUAL, cf2)]
> keyvalues={0/cf1:col_a/1499852754957/Put/vlen=4/seqid=0}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18308) Eliminate the findbugs warnings for hbase-server

2017-07-19 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16093073#comment-16093073
 ] 

Chia-Ping Tsai commented on HBASE-18308:


bq. For LoadIncrementalHFiles.java, why is the code removed ?
The code (famPaths) is never read.

> Eliminate the findbugs warnings for hbase-server
> 
>
> Key: HBASE-18308
> URL: https://issues.apache.org/jira/browse/HBASE-18308
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 3.0.0, 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-alpha-2
>
> Attachments: HBASE-18308.branch-1.2.v0.patch, 
> HBASE-18308.branch-1.2.v0.patch, HBASE-18308.branch-1.3.v0.patch, 
> HBASE-18308.branch-1.v0.patch, HBASE-18308.branch-1.v1.patch, 
> HBASE-18308.branch-1.v2.patch, HBASE-18308.v0.patch, HBASE-18308.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-16993) BucketCache throw java.io.IOException: Invalid HFile block magic when configuring hbase.bucketcache.bucket.sizes

2017-07-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16093075#comment-16093075
 ] 

Hadoop QA commented on HBASE-16993:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
31s{color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 
32s{color} | {color:red} hbase-server in master has 9 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
37m 39s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
30s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}141m 
51s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}204m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:757bf37 |
| JIRA Issue | HBASE-16993 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877954/HBASE-16993_V6.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  xml  findbugs  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 15cebc262671 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 6b7ebc0 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC3 |
| findbugs | 
https://builds.apache.org/job/PreCom

[jira] [Commented] (HBASE-18308) Eliminate the findbugs warnings for hbase-server

2017-07-19 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16093089#comment-16093089
 ] 

Ted Yu commented on HBASE-18308:


Alright.
lgtm

> Eliminate the findbugs warnings for hbase-server
> 
>
> Key: HBASE-18308
> URL: https://issues.apache.org/jira/browse/HBASE-18308
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 3.0.0, 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-alpha-2
>
> Attachments: HBASE-18308.branch-1.2.v0.patch, 
> HBASE-18308.branch-1.2.v0.patch, HBASE-18308.branch-1.3.v0.patch, 
> HBASE-18308.branch-1.v0.patch, HBASE-18308.branch-1.v1.patch, 
> HBASE-18308.branch-1.v2.patch, HBASE-18308.v0.patch, HBASE-18308.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HBASE-18413) SHORT DESCRIPTION

2017-07-19 Thread Dima Spivak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dima Spivak resolved HBASE-18413.
-
Resolution: Invalid

Hm?

> SHORT DESCRIPTION
> -
>
> Key: HBASE-18413
> URL: https://issues.apache.org/jira/browse/HBASE-18413
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Silvano Tamburini
>Priority: Blocker
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18251) Remove unnecessary traversing to the first and last keys in the CellSet

2017-07-19 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16093133#comment-16093133
 ] 

Ted Yu commented on HBASE-18251:


lgtm

> Remove unnecessary traversing to the first and last keys in the CellSet
> ---
>
> Key: HBASE-18251
> URL: https://issues.apache.org/jira/browse/HBASE-18251
> Project: HBase
>  Issue Type: Bug
>Reporter: Anastasia Braginsky
>Assignee: Toshihiro Suzuki
> Attachments: HBASE-18251.patch, HBASE-18251-v2.patch
>
>
> The implementation of finding the first and last keys in the CellSet is as 
> follows:
> {code}
>   public Cell first() {
>     return this.delegatee.get(this.delegatee.firstKey());
>   }
>   public Cell last() {
>     return this.delegatee.get(this.delegatee.lastKey());
>   }
> {code}
> Recall that we have a Cell-to-Cell mapping, so the methods that fetch the 
> first/last key already return a Cell. Thus there is no need to waste time on 
> the get() call for the same Cell.
> Fix: just return first/lastKey() directly; it should be at least twice as 
> efficient.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-16993) BucketCache throw java.io.IOException: Invalid HFile block magic when configuring hbase.bucketcache.bucket.sizes

2017-07-19 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16093137#comment-16093137
 ] 

Ted Yu commented on HBASE-16993:


Release note needs to be updated to match the patch.
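
For readers following along, here is a hedged sketch of the kind of startup validation at stake; the sizes come from the hbase-site.xml quoted below, and the 1024-byte alignment constant is purely an assumption for illustration, not necessarily what the patch enforces.

{code}
import java.util.Arrays;

public class BucketSizeCheck {
  // Parse a comma-separated hbase.bucketcache.bucket.sizes value, sort it,
  // and reject sizes that are not multiples of the assumed alignment.
  public static int[] parseBucketSizes(String csv, int alignment) {
    int[] sizes = Arrays.stream(csv.split(","))
        .map(String::trim)
        .mapToInt(Integer::parseInt)
        .sorted()
        .toArray();
    for (int s : sizes) {
      if (s % alignment != 0) {
        throw new IllegalArgumentException(
            "bucket size " + s + " is not a multiple of " + alignment);
      }
    }
    return sizes;
  }

  public static void main(String[] args) {
    // With the configuration below, 46000 trips the check (46000 % 1024 != 0),
    // which is one way misaligned sizes can surface as corrupt-looking reads.
    parseBucketSizes(
        "16384,32768,40960,46000,49152,51200,65536,131072,524288", 1024);
  }
}
{code}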

> BucketCache throw java.io.IOException: Invalid HFile block magic when 
> configuring hbase.bucketcache.bucket.sizes
> 
>
> Key: HBASE-16993
> URL: https://issues.apache.org/jira/browse/HBASE-16993
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache, io
>Affects Versions: 1.1.3
> Environment: hbase version 1.1.3
>Reporter: liubangchen
>Assignee: liubangchen
> Fix For: 3.0.0, 1.4.0, 2.0.0-alpha-2
>
> Attachments: HBASE-16993.000.patch, HBASE-16993.001.patch, 
> HBASE-16993.master.001.patch, HBASE-16993.master.002.patch, 
> HBASE-16993.master.003.patch, HBASE-16993.master.004.patch, 
> HBASE-16993.master.005.patch, HBASE-16993_V6.patch
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> hbase-site.xml setting
> <property>
>   <name>hbase.bucketcache.bucket.sizes</name>
>   <value>16384,32768,40960,46000,49152,51200,65536,131072,524288</value>
> </property>
> <property>
>   <name>hbase.bucketcache.size</name>
>   <value>16384</value>
> </property>
> <property>
>   <name>hbase.bucketcache.ioengine</name>
>   <value>offheap</value>
> </property>
> <property>
>   <name>hfile.block.cache.size</name>
>   <value>0.3</value>
> </property>
> <property>
>   <name>hfile.block.bloom.cacheonwrite</name>
>   <value>true</value>
> </property>
> <property>
>   <name>hbase.rs.cacheblocksonwrite</name>
>   <value>true</value>
> </property>
> <property>
>   <name>hfile.block.index.cacheonwrite</name>
>   <value>true</value>
> </property>
> n_splits = 200
> create 'usertable',{NAME =>'family', COMPRESSION => 'snappy', VERSIONS => 
> 1,DATA_BLOCK_ENCODING => 'DIFF',CONFIGURATION => 
> {'hbase.hregion.memstore.block.multiplier' => 5}},{DURABILITY => 
> 'SKIP_WAL'},{SPLITS => (1..n_splits).map {|i| 
> "user#{1000+i*(-1000)/n_splits}"}}
> load data
> bin/ycsb load hbase10 -P workloads/workloada -p table=usertable -p 
> columnfamily=family -p fieldcount=10 -p fieldlength=100 -p 
> recordcount=2 -p insertorder=hashed -p insertstart=0 -p 
> clientbuffering=true -p durability=SKIP_WAL -threads 20 -s 
> run 
> bin/ycsb run hbase10 -P workloads/workloadb -p table=usertable -p 
> columnfamily=family -p fieldcount=10 -p fieldlength=100 -p 
> operationcount=2000 -p readallfields=true -p clientbuffering=true -p 
> requestdistribution=zipfian  -threads 10 -s
> log info
> 2016-11-02 20:20:20,261 ERROR 
> [RW.default.readRpcServer.handler=36,queue=21,port=6020] bucket.BucketCache: 
> Failed reading block fdcc7ed6f3b2498b9ef316cc8206c233_44819759 from bucket 
> cache
> java.io.IOException: Invalid HFile block magic: 
> \x00\x00\x00\x00\x00\x00\x00\x00
> at 
> org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:154)
> at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:167)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.(HFileBlock.java:273)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:134)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:121)
> at 
> org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getBlock(BucketCache.java:427)
> at 
> org.apache.hadoop.hbase.io.hfile.CombinedBlockCache.getBlock(CombinedBlockCache.java:85)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.getCachedBlock(HFileReaderV2.java:266)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:403)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:269)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:634)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:584)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:247)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:156)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:363)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:217)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2071)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:5369)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2546)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2532)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2514)
> at org.apache.hadoop.hbase.regionserver.HRegio

[jira] [Updated] (HBASE-18160) Fix incorrect logic in FilterList.filterKeyValue

2017-07-19 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-18160:
-
Attachment: HBASE-18160.v3.patch

> Fix incorrect  logic in FilterList.filterKeyValue
> -
>
> Key: HBASE-18160
> URL: https://issues.apache.org/jira/browse/HBASE-18160
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
> Attachments: HBASE-18160.branch-1.1.v1.patch, 
> HBASE-18160.branch-1.v1.patch, HBASE-18160.v1.patch, HBASE-18160.v2.patch, 
> HBASE-18160.v2.patch, HBASE-18160.v3.patch
>
>
> As HBASE-17678 said, there are two problems in the FilterList.filterKeyValue 
> implementation: 
> 1. FilterList did not consider the INCLUDE_AND_SEEK_NEXT_ROW case (it seems 
> INCLUDE_AND_SEEK_NEXT_ROW is a newly added case, and the dev forgot to 
> consider FilterList), so if a user uses INCLUDE_AND_SEEK_NEXT_ROW in his own 
> Filter wrapped by a FilterList, it'll throw an 
> IllegalStateException("Received code is not valid."). 
> 2. For FilterList with MUST_PASS_ONE, if filter-A in the filter list returns 
> INCLUDE and filter-B returns INCLUDE_AND_NEXT_COL, the FilterList will 
> finally return INCLUDE_AND_NEXT_COL. According to the minimal step rule, 
> this is incorrect. (A filter list with MUST_PASS_ONE chooses the minimal 
> step among its filters. Let's call it: the Minimal Step Rule.)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18160) Fix incorrect logic in FilterList.filterKeyValue

2017-07-19 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16093143#comment-16093143
 ] 

Zheng Hu commented on HBASE-18160:
--

[~anoop.hbase] I uploaded patch v3 per your suggestion.

> Fix incorrect  logic in FilterList.filterKeyValue
> -
>
> Key: HBASE-18160
> URL: https://issues.apache.org/jira/browse/HBASE-18160
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
> Attachments: HBASE-18160.branch-1.1.v1.patch, 
> HBASE-18160.branch-1.v1.patch, HBASE-18160.v1.patch, HBASE-18160.v2.patch, 
> HBASE-18160.v2.patch, HBASE-18160.v3.patch
>
>
> As HBASE-17678 said, there are two problems in the FilterList.filterKeyValue 
> implementation: 
> 1. FilterList did not consider the INCLUDE_AND_SEEK_NEXT_ROW case (it seems 
> INCLUDE_AND_SEEK_NEXT_ROW is a newly added case, and the dev forgot to 
> consider FilterList), so if a user uses INCLUDE_AND_SEEK_NEXT_ROW in his own 
> Filter wrapped by a FilterList, it'll throw an 
> IllegalStateException("Received code is not valid."). 
> 2. For FilterList with MUST_PASS_ONE, if filter-A in the filter list returns 
> INCLUDE and filter-B returns INCLUDE_AND_NEXT_COL, the FilterList will 
> finally return INCLUDE_AND_NEXT_COL. According to the minimal step rule, 
> this is incorrect. (A filter list with MUST_PASS_ONE chooses the minimal 
> step among its filters. Let's call it: the Minimal Step Rule.)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18160) Fix incorrect logic in FilterList.filterKeyValue

2017-07-19 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-18160:
-
Attachment: HBASE-18160.v3.patch

> Fix incorrect  logic in FilterList.filterKeyValue
> -
>
> Key: HBASE-18160
> URL: https://issues.apache.org/jira/browse/HBASE-18160
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
> Attachments: HBASE-18160.branch-1.1.v1.patch, 
> HBASE-18160.branch-1.v1.patch, HBASE-18160.v1.patch, HBASE-18160.v2.patch, 
> HBASE-18160.v2.patch, HBASE-18160.v3.patch
>
>
> As HBASE-17678 said, there are two problems in the FilterList.filterKeyValue 
> implementation: 
> 1. FilterList did not consider the INCLUDE_AND_SEEK_NEXT_ROW case (it seems 
> INCLUDE_AND_SEEK_NEXT_ROW is a newly added case, and the dev forgot to 
> consider FilterList), so if a user uses INCLUDE_AND_SEEK_NEXT_ROW in his own 
> Filter wrapped by a FilterList, it'll throw an 
> IllegalStateException("Received code is not valid."). 
> 2. For FilterList with MUST_PASS_ONE, if filter-A in the filter list returns 
> INCLUDE and filter-B returns INCLUDE_AND_NEXT_COL, the FilterList will 
> finally return INCLUDE_AND_NEXT_COL. According to the minimal step rule, 
> this is incorrect. (A filter list with MUST_PASS_ONE chooses the minimal 
> step among its filters. Let's call it: the Minimal Step Rule.)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18160) Fix incorrect logic in FilterList.filterKeyValue

2017-07-19 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-18160:
-
Attachment: (was: HBASE-18160.v3.patch)

> Fix incorrect  logic in FilterList.filterKeyValue
> -
>
> Key: HBASE-18160
> URL: https://issues.apache.org/jira/browse/HBASE-18160
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
> Attachments: HBASE-18160.branch-1.1.v1.patch, 
> HBASE-18160.branch-1.v1.patch, HBASE-18160.v1.patch, HBASE-18160.v2.patch, 
> HBASE-18160.v2.patch, HBASE-18160.v3.patch
>
>
> As HBASE-17678 said, there are two problems in the FilterList.filterKeyValue 
> implementation: 
> 1. FilterList did not consider the INCLUDE_AND_SEEK_NEXT_ROW case (it seems 
> INCLUDE_AND_SEEK_NEXT_ROW is a newly added case, and the dev forgot to 
> consider FilterList), so if a user uses INCLUDE_AND_SEEK_NEXT_ROW in his own 
> Filter wrapped by a FilterList, it'll throw an 
> IllegalStateException("Received code is not valid."). 
> 2. For FilterList with MUST_PASS_ONE, if filter-A in the filter list returns 
> INCLUDE and filter-B returns INCLUDE_AND_NEXT_COL, the FilterList will 
> finally return INCLUDE_AND_NEXT_COL. According to the minimal step rule, 
> this is incorrect. (A filter list with MUST_PASS_ONE chooses the minimal 
> step among its filters. Let's call it: the Minimal Step Rule.)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18408) AM consumes CPU and fills up the logs really fast when there is no RS to assign

2017-07-19 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-18408:
--
Fix Version/s: 2.0.0-alpha-2

> AM consumes CPU and fills up the logs really fast when there is no RS to 
> assign
> ---
>
> Key: HBASE-18408
> URL: https://issues.apache.org/jira/browse/HBASE-18408
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Priority: Blocker
> Fix For: 2.0.0-alpha-2
>
>
> I was testing something else when I discovered that when there is no RS to 
> assign a region to (but the master is alive), the AM/LB creates GBs of logs 
> like this:
> {code}
> 2017-07-18 16:40:00,712 WARN  [AssignmentThread] balancer.BaseLoadBalancer: 
> Wanted to do round robin assignment but no servers to assign to
> 2017-07-18 16:40:00,712 WARN  [AssignmentThread] 
> assignment.AssignmentManager: unable to round-robin assignment
> org.apache.hadoop.hbase.HBaseIOException: unable to compute plans for 
> regions=1
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.acceptPlan(AssignmentManager.java:1725)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:1711)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$300(AssignmentManager.java:108)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager$2.run(AssignmentManager.java:1587)
> 2017-07-18 16:40:00,865 WARN  [AssignmentThread] balancer.BaseLoadBalancer: 
> Wanted to do round robin assignment but no servers to assign to
> 2017-07-18 16:40:00,866 WARN  [AssignmentThread] 
> assignment.AssignmentManager: unable to round-robin assignment
> org.apache.hadoop.hbase.HBaseIOException: unable to compute plans for 
> regions=1
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.acceptPlan(AssignmentManager.java:1725)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:1711)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$300(AssignmentManager.java:108)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager$2.run(AssignmentManager.java:1587)
> 2017-07-18 16:40:01,019 WARN  [AssignmentThread] balancer.BaseLoadBalancer: 
> Wanted to do round robin assignment but no servers to assign to
> 2017-07-18 16:40:01,019 WARN  [AssignmentThread] 
> assignment.AssignmentManager: unable to round-robin assignment
> org.apache.hadoop.hbase.HBaseIOException: unable to compute plans for 
> regions=1
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.acceptPlan(AssignmentManager.java:1725)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:1711)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$300(AssignmentManager.java:108)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager$2.run(AssignmentManager.java:1587)
> 2017-07-18 16:40:01,173 WARN  [AssignmentThread] balancer.BaseLoadBalancer: 
> Wanted to do round robin assignment but no servers to assign to
> 2017-07-18 16:40:01,173 WARN  [AssignmentThread] 
> assignment.AssignmentManager: unable to round-robin assignment
> org.apache.hadoop.hbase.HBaseIOException: unable to compute plans for 
> regions=1
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.acceptPlan(AssignmentManager.java:1725)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:1711)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$300(AssignmentManager.java:108)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager$2.run(AssignmentManager.java:1587)
> {code}
> Reproduction is easy: 
>  - Start a pseudo-distributed cluster
>  - Create a table 
>  - Kill the region server 
> I have also noticed that in another case we just spin the CPU at 100-200% 
> (though this is in a very old code base from master) in this cycle: 
> {code}
> "ProcedureExecutor-0" #106 daemon prio=5 os_prio=0 tid=0x7fab54851800 
> nid=0xcf1 runnable [0x7fab4e7b]
>java.lang.Thread.State: RUNNABLE
>   at java.lang.Object.hashCode(Native Method)
>   at 
> java.util.concurrent.ConcurrentHashMap.replaceNode(ConcurrentHashMap.java:1106)
>   at 
> java.util.concurrent.ConcurrentHashMap.remove(ConcurrentHashMap.java:1097)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.close(HRegion.java:6158)
>   - locked <0xc4cb62e8> (a 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl)
>   at org.apache.hadoop.hbase.regionserver.HReg
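
One plausible mitigation, sketched here purely for illustration (this is not the committed fix): rate-limit the warning and back off when the balancer reports no servers, so the assign loop neither spins the CPU nor floods the log.

{code}
// Hedged sketch: exponential backoff plus log throttling for an assign
// retry loop that currently retries immediately. All names are made up.
public class AssignRetryBackoff {
  private static final long MAX_BACKOFF_MILLIS = 30_000;
  private static final long WARN_INTERVAL_MILLIS = 60_000;

  private long backoffMillis = 100;
  private long lastWarnMillis = 0;

  /** Called each time the balancer finds no servers to assign to. */
  public void onNoServers() throws InterruptedException {
    long now = System.currentTimeMillis();
    if (now - lastWarnMillis >= WARN_INTERVAL_MILLIS) {
      System.err.println("WARN: no servers to assign to; backing off "
          + backoffMillis + " ms");
      lastWarnMillis = now;
    }
    Thread.sleep(backoffMillis);                         // stop spinning
    backoffMillis = Math.min(backoffMillis * 2, MAX_BACKOFF_MILLIS);
  }

  /** Called once a region server becomes available again. */
  public void onServersAvailable() {
    backoffMillis = 100;
  }
}
{code}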

[jira] [Updated] (HBASE-18408) AM consumes CPU and fills up the logs really fast when there is no RS to assign

2017-07-19 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-18408:
--
Priority: Blocker  (was: Major)

> AM consumes CPU and fills up the logs really fast when there is no RS to 
> assign
> ---
>
> Key: HBASE-18408
> URL: https://issues.apache.org/jira/browse/HBASE-18408
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Priority: Blocker
> Fix For: 2.0.0-alpha-2
>
>
> I was testing something else when I discovered that when there is no RS to 
> assign a region to (but the master is alive), the AM/LB creates GBs of logs 
> like this:
> {code}
> 2017-07-18 16:40:00,712 WARN  [AssignmentThread] balancer.BaseLoadBalancer: 
> Wanted to do round robin assignment but no servers to assign to
> 2017-07-18 16:40:00,712 WARN  [AssignmentThread] 
> assignment.AssignmentManager: unable to round-robin assignment
> org.apache.hadoop.hbase.HBaseIOException: unable to compute plans for 
> regions=1
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.acceptPlan(AssignmentManager.java:1725)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:1711)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$300(AssignmentManager.java:108)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager$2.run(AssignmentManager.java:1587)
> 2017-07-18 16:40:00,865 WARN  [AssignmentThread] balancer.BaseLoadBalancer: 
> Wanted to do round robin assignment but no servers to assign to
> 2017-07-18 16:40:00,866 WARN  [AssignmentThread] 
> assignment.AssignmentManager: unable to round-robin assignment
> org.apache.hadoop.hbase.HBaseIOException: unable to compute plans for 
> regions=1
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.acceptPlan(AssignmentManager.java:1725)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:1711)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$300(AssignmentManager.java:108)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager$2.run(AssignmentManager.java:1587)
> 2017-07-18 16:40:01,019 WARN  [AssignmentThread] balancer.BaseLoadBalancer: 
> Wanted to do round robin assignment but no servers to assign to
> 2017-07-18 16:40:01,019 WARN  [AssignmentThread] 
> assignment.AssignmentManager: unable to round-robin assignment
> org.apache.hadoop.hbase.HBaseIOException: unable to compute plans for 
> regions=1
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.acceptPlan(AssignmentManager.java:1725)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:1711)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$300(AssignmentManager.java:108)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager$2.run(AssignmentManager.java:1587)
> 2017-07-18 16:40:01,173 WARN  [AssignmentThread] balancer.BaseLoadBalancer: 
> Wanted to do round robin assignment but no servers to assign to
> 2017-07-18 16:40:01,173 WARN  [AssignmentThread] 
> assignment.AssignmentManager: unable to round-robin assignment
> org.apache.hadoop.hbase.HBaseIOException: unable to compute plans for 
> regions=1
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.acceptPlan(AssignmentManager.java:1725)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:1711)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$300(AssignmentManager.java:108)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager$2.run(AssignmentManager.java:1587)
> {code}
> Reproduction is easy: 
>  - Start a pseudo-distributed cluster
>  - Create a table 
>  - Kill the region server 
> I have also noticed that in another case we just spin the CPU at 100-200% 
> (though this is in a very old code base from master) in this cycle: 
> {code}
> "ProcedureExecutor-0" #106 daemon prio=5 os_prio=0 tid=0x7fab54851800 
> nid=0xcf1 runnable [0x7fab4e7b]
>java.lang.Thread.State: RUNNABLE
>   at java.lang.Object.hashCode(Native Method)
>   at 
> java.util.concurrent.ConcurrentHashMap.replaceNode(ConcurrentHashMap.java:1106)
>   at 
> java.util.concurrent.ConcurrentHashMap.remove(ConcurrentHashMap.java:1097)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.close(HRegion.java:6158)
>   - locked <0xc4cb62e8> (a 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl)
>   at org.apache.hadoop.hbase.regionserver.H

[jira] [Commented] (HBASE-18375) The pool chunks from ChunkCreator are deallocated while in pool because there is no reference to them

2017-07-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16093172#comment-16093172
 ] 

Hadoop QA commented on HBASE-18375:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
21s{color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 
21s{color} | {color:red} hbase-server in master has 9 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
36m  3s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m  
1s{color} | {color:red} hbase-server generated 1 new + 9 unchanged - 0 fixed = 
10 total (was 9) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}113m 
36s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}167m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hbase-server |
|  |  Increment of volatile field 
org.apache.hadoop.hbase.regionserver.CompactionPipeline.version in 
org.apache.hadoop.hbase.regionserver.CompactionPipeline.replaceAtIndex(int, 
ImmutableSegment)  At CompactionPipeline.java:in 
org.apache.hadoop.hbase.regionserver.CompactionPipeline.replaceAtIndex(int, 
ImmutableSegment)  At CompactionPipeline.java:[line 284] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:757bf37 |
| JIRA Issue | HBASE-18375 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877972/HBASE-18375-V04.patch 
|
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 8400b69b060d 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / f10f819 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC3 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7716/artifact/patchprocess/branch-findbugs-hbase-server-warnings.html
 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7716/artifact/patchprocess/new-findbugs-hbase-server.html
 |
|  Test Results | 
htt
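
An aside on the new FindBugs warning above: incrementing a volatile field is a read-modify-write, so concurrent callers can lose updates. A hedged sketch of the usual remedy (names are illustrative, not the actual CompactionPipeline code):

{code}
import java.util.concurrent.atomic.AtomicLong;

class VersionedPipeline {
  // A plain "private volatile long version; version++;" is racy:
  // two threads can read the same value and both write value + 1.
  private final AtomicLong version = new AtomicLong();

  void replaceAtIndex() {
    version.incrementAndGet();   // atomic read-modify-write
  }

  long version() {
    return version.get();
  }
}
{code}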

[jira] [Commented] (HBASE-16993) BucketCache throw java.io.IOException: Invalid HFile block magic when configuring hbase.bucketcache.bucket.sizes

2017-07-19 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16093221#comment-16093221
 ] 

Anoop Sam John commented on HBASE-16993:


Sure, will update. May I get a +1 please?

> BucketCache throw java.io.IOException: Invalid HFile block magic when 
> configuring hbase.bucketcache.bucket.sizes
> 
>
> Key: HBASE-16993
> URL: https://issues.apache.org/jira/browse/HBASE-16993
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache, io
>Affects Versions: 1.1.3
> Environment: hbase version 1.1.3
>Reporter: liubangchen
>Assignee: liubangchen
> Fix For: 3.0.0, 1.4.0, 2.0.0-alpha-2
>
> Attachments: HBASE-16993.000.patch, HBASE-16993.001.patch, 
> HBASE-16993.master.001.patch, HBASE-16993.master.002.patch, 
> HBASE-16993.master.003.patch, HBASE-16993.master.004.patch, 
> HBASE-16993.master.005.patch, HBASE-16993_V6.patch
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> hbase-site.xml setting
> <property>
>   <name>hbase.bucketcache.bucket.sizes</name>
>   <value>16384,32768,40960,46000,49152,51200,65536,131072,524288</value>
> </property>
> <property>
>   <name>hbase.bucketcache.size</name>
>   <value>16384</value>
> </property>
> <property>
>   <name>hbase.bucketcache.ioengine</name>
>   <value>offheap</value>
> </property>
> <property>
>   <name>hfile.block.cache.size</name>
>   <value>0.3</value>
> </property>
> <property>
>   <name>hfile.block.bloom.cacheonwrite</name>
>   <value>true</value>
> </property>
> <property>
>   <name>hbase.rs.cacheblocksonwrite</name>
>   <value>true</value>
> </property>
> <property>
>   <name>hfile.block.index.cacheonwrite</name>
>   <value>true</value>
> </property>
> n_splits = 200
> create 'usertable',{NAME =>'family', COMPRESSION => 'snappy', VERSIONS => 
> 1,DATA_BLOCK_ENCODING => 'DIFF',CONFIGURATION => 
> {'hbase.hregion.memstore.block.multiplier' => 5}},{DURABILITY => 
> 'SKIP_WAL'},{SPLITS => (1..n_splits).map {|i| 
> "user#{1000+i*(-1000)/n_splits}"}}
> load data
> bin/ycsb load hbase10 -P workloads/workloada -p table=usertable -p 
> columnfamily=family -p fieldcount=10 -p fieldlength=100 -p 
> recordcount=2 -p insertorder=hashed -p insertstart=0 -p 
> clientbuffering=true -p durability=SKIP_WAL -threads 20 -s 
> run 
> bin/ycsb run hbase10 -P workloads/workloadb -p table=usertable -p 
> columnfamily=family -p fieldcount=10 -p fieldlength=100 -p 
> operationcount=2000 -p readallfields=true -p clientbuffering=true -p 
> requestdistribution=zipfian  -threads 10 -s
> log info
> 2016-11-02 20:20:20,261 ERROR 
> [RW.default.readRpcServer.handler=36,queue=21,port=6020] bucket.BucketCache: 
> Failed reading block fdcc7ed6f3b2498b9ef316cc8206c233_44819759 from bucket 
> cache
> java.io.IOException: Invalid HFile block magic: 
> \x00\x00\x00\x00\x00\x00\x00\x00
> at 
> org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:154)
> at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:167)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.(HFileBlock.java:273)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:134)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:121)
> at 
> org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getBlock(BucketCache.java:427)
> at 
> org.apache.hadoop.hbase.io.hfile.CombinedBlockCache.getBlock(CombinedBlockCache.java:85)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.getCachedBlock(HFileReaderV2.java:266)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:403)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:269)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:634)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:584)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:247)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:156)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:363)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:217)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2071)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:5369)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2546)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2532)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2514)
> at org.apache.hadoop.hbase.regionserver.HRe

[jira] [Commented] (HBASE-16993) BucketCache throw java.io.IOException: Invalid HFile block magic when configuring hbase.bucketcache.bucket.sizes

2017-07-19 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16093224#comment-16093224
 ] 

Ted Yu commented on HBASE-16993:


+1

> BucketCache throw java.io.IOException: Invalid HFile block magic when 
> configuring hbase.bucketcache.bucket.sizes
> 
>
> Key: HBASE-16993
> URL: https://issues.apache.org/jira/browse/HBASE-16993
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache, io
>Affects Versions: 1.1.3
> Environment: hbase version 1.1.3
>Reporter: liubangchen
>Assignee: liubangchen
> Fix For: 3.0.0, 1.4.0, 2.0.0-alpha-2
>
> Attachments: HBASE-16993.000.patch, HBASE-16993.001.patch, 
> HBASE-16993.master.001.patch, HBASE-16993.master.002.patch, 
> HBASE-16993.master.003.patch, HBASE-16993.master.004.patch, 
> HBASE-16993.master.005.patch, HBASE-16993_V6.patch
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> hbase-site.xml setting
> <property>
>   <name>hbase.bucketcache.bucket.sizes</name>
>   <value>16384,32768,40960,46000,49152,51200,65536,131072,524288</value>
> </property>
> <property>
>   <name>hbase.bucketcache.size</name>
>   <value>16384</value>
> </property>
> <property>
>   <name>hbase.bucketcache.ioengine</name>
>   <value>offheap</value>
> </property>
> <property>
>   <name>hfile.block.cache.size</name>
>   <value>0.3</value>
> </property>
> <property>
>   <name>hfile.block.bloom.cacheonwrite</name>
>   <value>true</value>
> </property>
> <property>
>   <name>hbase.rs.cacheblocksonwrite</name>
>   <value>true</value>
> </property>
> <property>
>   <name>hfile.block.index.cacheonwrite</name>
>   <value>true</value>
> </property>
> n_splits = 200
> create 'usertable',{NAME =>'family', COMPRESSION => 'snappy', VERSIONS => 
> 1,DATA_BLOCK_ENCODING => 'DIFF',CONFIGURATION => 
> {'hbase.hregion.memstore.block.multiplier' => 5}},{DURABILITY => 
> 'SKIP_WAL'},{SPLITS => (1..n_splits).map {|i| 
> "user#{1000+i*(-1000)/n_splits}"}}
> load data
> bin/ycsb load hbase10 -P workloads/workloada -p table=usertable -p 
> columnfamily=family -p fieldcount=10 -p fieldlength=100 -p 
> recordcount=2 -p insertorder=hashed -p insertstart=0 -p 
> clientbuffering=true -p durability=SKIP_WAL -threads 20 -s 
> run 
> bin/ycsb run hbase10 -P workloads/workloadb -p table=usertable -p 
> columnfamily=family -p fieldcount=10 -p fieldlength=100 -p 
> operationcount=2000 -p readallfields=true -p clientbuffering=true -p 
> requestdistribution=zipfian  -threads 10 -s
> log info
> 2016-11-02 20:20:20,261 ERROR 
> [RW.default.readRpcServer.handler=36,queue=21,port=6020] bucket.BucketCache: 
> Failed reading block fdcc7ed6f3b2498b9ef316cc8206c233_44819759 from bucket 
> cache
> java.io.IOException: Invalid HFile block magic: 
> \x00\x00\x00\x00\x00\x00\x00\x00
> at 
> org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:154)
> at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:167)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.<init>(HFileBlock.java:273)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:134)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:121)
> at 
> org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getBlock(BucketCache.java:427)
> at 
> org.apache.hadoop.hbase.io.hfile.CombinedBlockCache.getBlock(CombinedBlockCache.java:85)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.getCachedBlock(HFileReaderV2.java:266)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:403)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:269)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:634)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:584)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:247)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:156)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:363)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:217)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2071)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:5369)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2546)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2532)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2514)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6558)
> at org.apache.h

[jira] [Updated] (HBASE-18412) [Shell] Support unset of configuration for a list of table

2017-07-19 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-18412:
---
Summary: [Shell] Support unset of configuration for a list of table  (was: 
[Shell] Support unset of a list of table configuration)

> [Shell] Support unset of configuration for a list of table
> --
>
> Key: HBASE-18412
> URL: https://issues.apache.org/jira/browse/HBASE-18412
> Project: HBase
>  Issue Type: Improvement
>Reporter: Yun Zhao
>Assignee: Yun Zhao
>Priority: Minor
> Attachments: HBASE-18412.master.001.patch
>
>
> Add a method 'table_conf_unset' in admin.rb to reset a table configuration to 
> the system default.
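
For anyone trying the patch, a hypothetical invocation modeled on the existing 
'table_att_unset' shell command (the exact syntax here is an assumption, not 
taken from the patch):
{code}
hbase> alter 'usertable', METHOD => 'table_conf_unset', NAME => 'hbase.hregion.majorcompaction'
{code}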



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18412) [Shell] Support unset of list of configuration for a table

2017-07-19 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-18412:
---
Summary: [Shell] Support unset of list of configuration for a table  (was: 
[Shell] Support unset of configuration for a list of table)

> [Shell] Support unset of list of configuration for a table
> --
>
> Key: HBASE-18412
> URL: https://issues.apache.org/jira/browse/HBASE-18412
> Project: HBase
>  Issue Type: Improvement
>Reporter: Yun Zhao
>Assignee: Yun Zhao
>Priority: Minor
> Attachments: HBASE-18412.master.001.patch
>
>
> Add a method 'table_conf_unset' in admin.rb to reset a table configuration to 
> the system default.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18412) [Shell] Support unset of list of configuration for a table

2017-07-19 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16093240#comment-16093240
 ] 

Ted Yu commented on HBASE-18412:


lgtm

> [Shell] Support unset of list of configuration for a table
> --
>
> Key: HBASE-18412
> URL: https://issues.apache.org/jira/browse/HBASE-18412
> Project: HBase
>  Issue Type: Improvement
>Reporter: Yun Zhao
>Assignee: Yun Zhao
>Priority: Minor
> Attachments: HBASE-18412.master.001.patch
>
>
> Add a method 'table_conf_unset' in admin.rb to reset a table configuration to 
> the system default.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-16312) update jquery version

2017-07-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16093263#comment-16093263
 ] 

Hudson commented on HBASE-16312:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #3400 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3400/])
HBASE-16312 update jquery version (stack: rev 
f10f8198afed36a40faa459fe4aa4646838832af)
* (edit) 
hbase-server/src/main/resources/hbase-webapps/regionserver/storeFile.jsp
* (edit) hbase-server/src/main/resources/hbase-webapps/master/processMaster.jsp
* (edit) hbase-server/src/main/resources/hbase-webapps/static/js/jquery.min.js
* (edit) 
hbase-server/src/main/resources/hbase-webapps/regionserver/processRS.jsp
* (edit) hbase-server/src/main/resources/hbase-webapps/master/table.jsp
* (edit) 
hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/master/MasterStatusTmpl.jamon
* (edit) hbase-server/src/main/resources/hbase-webapps/master/snapshotsStats.jsp
* (edit) hbase-server/src/main/resources/hbase-webapps/master/procedures.jsp
* (edit) hbase-server/src/main/resources/hbase-webapps/master/snapshot.jsp
* (edit) hbase-thrift/src/main/resources/hbase-webapps/static/js/jquery.min.js
* (edit) 
hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/regionserver/RSStatusTmpl.jamon
* (edit) hbase-server/src/main/resources/hbase-webapps/master/zk.jsp
* (edit) hbase-server/src/main/resources/hbase-webapps/regionserver/region.jsp
* (edit) hbase-server/src/main/resources/hbase-webapps/master/processRS.jsp
* (edit) hbase-rest/src/main/resources/hbase-webapps/rest/rest.jsp
* (edit) hbase-server/src/main/resources/hbase-webapps/master/tablesDetailed.jsp
* (edit) hbase-server/src/main/resources/hbase-webapps/thrift/thrift.jsp
* (edit) hbase-thrift/src/main/resources/hbase-webapps/thrift/thrift.jsp


> update jquery version
> -
>
> Key: HBASE-16312
> URL: https://issues.apache.org/jira/browse/HBASE-16312
> Project: HBase
>  Issue Type: Improvement
>  Components: dependencies, UI
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Sean Busbey
>Assignee: Peter Somogyi
>Priority: Critical
> Fix For: 2.0.0, 3.0.0
>
> Attachments: HBASE-16312.master.001.patch
>
>
> The jquery version we bundle for our web UI is EOM. Update to the latest 
> jquery 3.y.
> We can use the [jquery migrate 
> plugin|http://jquery.com/download/#jquery-migrate-plugin] to help update our 
> API usage.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-16993) BucketCache throw java.io.IOException: Invalid HFile block magic when configuring hbase.bucketcache.bucket.sizes

2017-07-19 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16093275#comment-16093275
 ] 

ramkrishna.s.vasudevan commented on HBASE-16993:


bq.Then only all possible block
657   // offsets in Buckets will come as multiples of 256. 
Maybe rephrase: 'Having all the configured bucket sizes be multiples of 256 
will ensure that the block offsets of the bucket entries that are calculated 
will also be multiples of 256, because in BucketEntry each offset is represented 
by '
bq. throw new IllegalArgumentException("Illegal value: " + bucketSize + " 
cofigured for '"
'configured' - Typo.
Rest looks good to me. +1.
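
To make the 256-multiple point concrete, here is a minimal sketch of the kind 
of validation being discussed (the method name and message text are illustrative 
assumptions, not the actual patch):
{code}
// Reject bucket sizes that are not multiples of 256: BucketEntry stores
// each block offset compactly, so only offsets that are multiples of 256
// can be represented exactly; non-conforming sizes corrupt later reads.
private static void checkBucketSizes(int[] bucketSizes) {
  for (int bucketSize : bucketSizes) {
    if (bucketSize % 256 != 0) {
      throw new IllegalArgumentException("Illegal value: " + bucketSize
          + " configured for bucket size; must be a multiple of 256");
    }
  }
}
{code}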

> BucketCache throw java.io.IOException: Invalid HFile block magic when 
> configuring hbase.bucketcache.bucket.sizes
> 
>
> Key: HBASE-16993
> URL: https://issues.apache.org/jira/browse/HBASE-16993
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache, io
>Affects Versions: 1.1.3
> Environment: hbase version 1.1.3
>Reporter: liubangchen
>Assignee: liubangchen
> Fix For: 3.0.0, 1.4.0, 2.0.0-alpha-2
>
> Attachments: HBASE-16993.000.patch, HBASE-16993.001.patch, 
> HBASE-16993.master.001.patch, HBASE-16993.master.002.patch, 
> HBASE-16993.master.003.patch, HBASE-16993.master.004.patch, 
> HBASE-16993.master.005.patch, HBASE-16993_V6.patch
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> hbase-site.xml setting
> <property>
> <name>hbase.bucketcache.bucket.sizes</name>
> <value>16384,32768,40960,46000,49152,51200,65536,131072,524288</value>
> </property>
> <property>
> <name>hbase.bucketcache.size</name>
> <value>16384</value>
> </property>
> <property>
> <name>hbase.bucketcache.ioengine</name>
> <value>offheap</value>
> </property>
> <property>
> <name>hfile.block.cache.size</name>
> <value>0.3</value>
> </property>
> <property>
> <name>hfile.block.bloom.cacheonwrite</name>
> <value>true</value>
> </property>
> <property>
> <name>hbase.rs.cacheblocksonwrite</name>
> <value>true</value>
> </property>
> <property>
> <name>hfile.block.index.cacheonwrite</name>
> <value>true</value>
> </property>
>  n_splits = 200
> create 'usertable',{NAME =>'family', COMPRESSION => 'snappy', VERSIONS => 
> 1,DATA_BLOCK_ENCODING => 'DIFF',CONFIGURATION => 
> {'hbase.hregion.memstore.block.multiplier' => 5}},{DURABILITY => 
> 'SKIP_WAL'},{SPLITS => (1..n_splits).map {|i| 
> "user#{1000+i*(-1000)/n_splits}"}}
> load data
> bin/ycsb load hbase10 -P workloads/workloada -p table=usertable -p 
> columnfamily=family -p fieldcount=10 -p fieldlength=100 -p 
> recordcount=2 -p insertorder=hashed -p insertstart=0 -p 
> clientbuffering=true -p durability=SKIP_WAL -threads 20 -s 
> run 
> bin/ycsb run hbase10 -P workloads/workloadb -p table=usertable -p 
> columnfamily=family -p fieldcount=10 -p fieldlength=100 -p 
> operationcount=2000 -p readallfields=true -p clientbuffering=true -p 
> requestdistribution=zipfian  -threads 10 -s
> log info
> 2016-11-02 20:20:20,261 ERROR 
> [RW.default.readRpcServer.handler=36,queue=21,port=6020] bucket.BucketCache: 
> Failed reading block fdcc7ed6f3b2498b9ef316cc8206c233_44819759 from bucket 
> cache
> java.io.IOException: Invalid HFile block magic: 
> \x00\x00\x00\x00\x00\x00\x00\x00
> at 
> org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:154)
> at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:167)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.<init>(HFileBlock.java:273)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:134)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:121)
> at 
> org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getBlock(BucketCache.java:427)
> at 
> org.apache.hadoop.hbase.io.hfile.CombinedBlockCache.getBlock(CombinedBlockCache.java:85)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.getCachedBlock(HFileReaderV2.java:266)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:403)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:269)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:634)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:584)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:247)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:156)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:363)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:217)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.getS

[jira] [Commented] (HBASE-18402) Thrift2 should support DeleteFamilyVersion type

2017-07-19 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16093287#comment-16093287
 ] 

Ted Yu commented on HBASE-18402:


For ThriftUtilities, this line (277) is not governed by in.isSetDeleteType():
{code}
  out.addColumn(column.getFamily(), column.getQualifier(), 
column.getTimestamp());
{code}
It looks like the above line is removed by the patch.

Please keep the non-delete logic.
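
A minimal sketch of the shape this review comment asks for (the guard structure 
below is an assumption based on the comment, not the actual patch):
{code}
// Keep the original non-delete path as the default; only branch on the
// delete type when the client explicitly set one.
if (in.isSetDeleteType()) {
  // type-specific handling (DELETE_COLUMN, DELETE_COLUMNS, ...) goes here
} else {
  out.addColumn(column.getFamily(), column.getQualifier(),
      column.getTimestamp());
}
{code}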

> Thrift2 should support  DeleteFamilyVersion type
> 
>
> Key: HBASE-18402
> URL: https://issues.apache.org/jira/browse/HBASE-18402
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 2.0.0-alpha-1
>Reporter: Zheng Hu
>Assignee: Zheng Hu
> Attachments: HBASE-18402.v1.patch
>
>
> Currently, our thrift2 only supports two delete types. Actually, there are 
> four delete types, and we should support the other two: DeleteFamily 
> and DeleteFamilyVersion. 
> {code}
> /**
>  * Specify type of delete:
>  *  - DELETE_COLUMN means exactly one version will be removed,
>  *  - DELETE_COLUMNS means previous versions will also be removed.
>  */
> enum TDeleteType {
>   DELETE_COLUMN = 0,
>   DELETE_COLUMNS = 1
> }
> {code} 
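
A sketch of the extended enum being proposed (the ordinal values chosen for the 
two new constants are an assumption):
{code}
enum TDeleteType {
  DELETE_COLUMN = 0,
  DELETE_COLUMNS = 1,
  DELETE_FAMILY = 2,
  DELETE_FAMILY_VERSION = 3
}
{code}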



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-17648) HBase Table-level synchronization fails between two secured(kerberized) clusters

2017-07-19 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-17648:

Priority: Critical  (was: Major)

> HBase Table-level synchronization fails between two secured(kerberized) 
> clusters
> 
>
> Key: HBASE-17648
> URL: https://issues.apache.org/jira/browse/HBASE-17648
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce, Operability, security, tooling
>Affects Versions: 2.0.0
>Reporter: Yi Liang
>Assignee: Yi Liang
>Priority: Critical
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17648-V1.patch, HBASE-17648-V2.patch
>
>
> Run the 'org.apache.hadoop.hbase.mapreduce.SyncTable' command.
> Following the steps in HBASE-15557 between two secured clusters produces the 
> errors below:
> {code}
> WARN [hconnection-0x55b5e331-metaLookup-shared--pool6-t1] 
> org.apache.hadoop.hbase.ipc.RpcClientImpl: Exception encountered while 
> connecting to the server : javax.security.sasl.SaslException: GSS initiate 
> failed [Caused by GSSException: No valid credentials provided (Mechanism 
> level: Failed to find any Kerberos tgt)]
> 2017-02-08 14:33:57,465 FATAL 
> [hconnection-0x55b5e331-metaLookup-shared--pool6-t1] 
> org.apache.hadoop.hbase.ipc.RpcClientImpl: SASL authentication failed. The 
> most likely cause is missing or invalid credentials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
>   at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
>   at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1697)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)
>   at 
> org.apache.hadoop.hbase.client.ClientSmallScanner$SmallScannerCallable.call(ClientSmallScanner.java:201)
>   at 
> org.apache.hadoop.hbase.client.ClientSmallScanner$SmallScannerCallable.call(ClientSmallScanner.java:180)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:364)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:338)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)
>   at 
> org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: GSSException: No valid credentials provided (Mechanism level: 
> Failed to find any Kerberos tgt)
>   at 
> sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
>   at 
> sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
>   at 
> sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
>   at 
> sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224)
>   at 
> sun.security.jgss.GS

[jira] [Updated] (HBASE-17648) HBase Table-level synchronization fails between two secured(kerberized) clusters

2017-07-19 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-17648:

Component/s: security

> HBase Table-level synchronization fails between two secured(kerberized) 
> clusters
> 
>
> Key: HBASE-17648
> URL: https://issues.apache.org/jira/browse/HBASE-17648
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce, Operability, security, tooling
>Affects Versions: 2.0.0
>Reporter: Yi Liang
>Assignee: Yi Liang
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17648-V1.patch, HBASE-17648-V2.patch
>
>
> Run the 'org.apache.hadoop.hbase.mapreduce.SyncTable' command.
> Following the steps in HBASE-15557 between two secured clusters produces the 
> errors below:
> {code}
> WARN [hconnection-0x55b5e331-metaLookup-shared--pool6-t1] 
> org.apache.hadoop.hbase.ipc.RpcClientImpl: Exception encountered while 
> connecting to the server : javax.security.sasl.SaslException: GSS initiate 
> failed [Caused by GSSException: No valid credentials provided (Mechanism 
> level: Failed to find any Kerberos tgt)]
> 2017-02-08 14:33:57,465 FATAL 
> [hconnection-0x55b5e331-metaLookup-shared--pool6-t1] 
> org.apache.hadoop.hbase.ipc.RpcClientImpl: SASL authentication failed. The 
> most likely cause is missing or invalid credentials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
>   at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
>   at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1697)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)
>   at 
> org.apache.hadoop.hbase.client.ClientSmallScanner$SmallScannerCallable.call(ClientSmallScanner.java:201)
>   at 
> org.apache.hadoop.hbase.client.ClientSmallScanner$SmallScannerCallable.call(ClientSmallScanner.java:180)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:364)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:338)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)
>   at 
> org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: GSSException: No valid credentials provided (Mechanism level: 
> Failed to find any Kerberos tgt)
>   at 
> sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
>   at 
> sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
>   at 
> sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
>   at 
> sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224)
>   at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.

[jira] [Updated] (HBASE-17648) HBase Table-level synchronization fails between two secured(kerberized) clusters

2017-07-19 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-17648:

Component/s: tooling
 Operability
 mapreduce

> HBase Table-level synchronization fails between two secured(kerberized) 
> clusters
> 
>
> Key: HBASE-17648
> URL: https://issues.apache.org/jira/browse/HBASE-17648
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce, Operability, security, tooling
>Affects Versions: 2.0.0
>Reporter: Yi Liang
>Assignee: Yi Liang
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17648-V1.patch, HBASE-17648-V2.patch
>
>
> Run the 'org.apache.hadoop.hbase.mapreduce.SyncTable' command.
> Following the steps in HBASE-15557 between two secured clusters produces the 
> errors below:
> {code}
> WARN [hconnection-0x55b5e331-metaLookup-shared--pool6-t1] 
> org.apache.hadoop.hbase.ipc.RpcClientImpl: Exception encountered while 
> connecting to the server : javax.security.sasl.SaslException: GSS initiate 
> failed [Caused by GSSException: No valid credentials provided (Mechanism 
> level: Failed to find any Kerberos tgt)]
> 2017-02-08 14:33:57,465 FATAL 
> [hconnection-0x55b5e331-metaLookup-shared--pool6-t1] 
> org.apache.hadoop.hbase.ipc.RpcClientImpl: SASL authentication failed. The 
> most likely cause is missing or invalid credentials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
>   at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
>   at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1697)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)
>   at 
> org.apache.hadoop.hbase.client.ClientSmallScanner$SmallScannerCallable.call(ClientSmallScanner.java:201)
>   at 
> org.apache.hadoop.hbase.client.ClientSmallScanner$SmallScannerCallable.call(ClientSmallScanner.java:180)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:364)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:338)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)
>   at 
> org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: GSSException: No valid credentials provided (Mechanism level: 
> Failed to find any Kerberos tgt)
>   at 
> sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
>   at 
> sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
>   at 
> sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
>   at 
> sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224)
>   at 
> sun.secu

[jira] [Reopened] (HBASE-17648) HBase Table-level synchronization fails between two secured(kerberized) clusters

2017-07-19 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey reopened HBASE-17648:
-

This is a critical failure for secure deployments. I'd like it backported to 
the remaining branch-1 branches. Please shout if there's a reason it ought not 
be so.

> HBase Table-level synchronization fails between two secured(kerberized) 
> clusters
> 
>
> Key: HBASE-17648
> URL: https://issues.apache.org/jira/browse/HBASE-17648
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce, Operability, security, tooling
>Affects Versions: 2.0.0
>Reporter: Yi Liang
>Assignee: Yi Liang
>Priority: Critical
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17648-V1.patch, HBASE-17648-V2.patch
>
>
> Run the 'org.apache.hadoop.hbase.mapreduce.SyncTable' command.
> Following the steps in HBASE-15557 between two secured clusters produces the 
> errors below:
> {code}
> WARN [hconnection-0x55b5e331-metaLookup-shared--pool6-t1] 
> org.apache.hadoop.hbase.ipc.RpcClientImpl: Exception encountered while 
> connecting to the server : javax.security.sasl.SaslException: GSS initiate 
> failed [Caused by GSSException: No valid credentials provided (Mechanism 
> level: Failed to find any Kerberos tgt)]
> 2017-02-08 14:33:57,465 FATAL 
> [hconnection-0x55b5e331-metaLookup-shared--pool6-t1] 
> org.apache.hadoop.hbase.ipc.RpcClientImpl: SASL authentication failed. The 
> most likely cause is missing or invalid credentials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
>   at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
>   at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1697)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)
>   at 
> org.apache.hadoop.hbase.client.ClientSmallScanner$SmallScannerCallable.call(ClientSmallScanner.java:201)
>   at 
> org.apache.hadoop.hbase.client.ClientSmallScanner$SmallScannerCallable.call(ClientSmallScanner.java:180)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:364)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:338)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)
>   at 
> org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: GSSException: No valid credentials provided (Mechanism level: 
> Failed to find any Kerberos tgt)
>   at 
> sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
>   at 
> sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
>   at 
> sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:

[jira] [Commented] (HBASE-17648) HBase Table-level synchronization fails between two secured(kerberized) clusters

2017-07-19 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16093292#comment-16093292
 ] 

Sean Busbey commented on HBASE-17648:
-

We should also make a follow-on issue about the lack of any tests of this 
change. 

> HBase Table-level synchronization fails between two secured(kerberized) 
> clusters
> 
>
> Key: HBASE-17648
> URL: https://issues.apache.org/jira/browse/HBASE-17648
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce, Operability, security, tooling
>Affects Versions: 2.0.0
>Reporter: Yi Liang
>Assignee: Yi Liang
>Priority: Critical
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17648-V1.patch, HBASE-17648-V2.patch
>
>
> Run the 'org.apache.hadoop.hbase.mapreduce.SyncTable' command.
> Following the steps in HBASE-15557 between two secured clusters produces the 
> errors below:
> {code}
> WARN [hconnection-0x55b5e331-metaLookup-shared--pool6-t1] 
> org.apache.hadoop.hbase.ipc.RpcClientImpl: Exception encountered while 
> connecting to the server : javax.security.sasl.SaslException: GSS initiate 
> failed [Caused by GSSException: No valid credentials provided (Mechanism 
> level: Failed to find any Kerberos tgt)]
> 2017-02-08 14:33:57,465 FATAL 
> [hconnection-0x55b5e331-metaLookup-shared--pool6-t1] 
> org.apache.hadoop.hbase.ipc.RpcClientImpl: SASL authentication failed. The 
> most likely cause is missing or invalid credentials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
>   at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
>   at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1697)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)
>   at 
> org.apache.hadoop.hbase.client.ClientSmallScanner$SmallScannerCallable.call(ClientSmallScanner.java:201)
>   at 
> org.apache.hadoop.hbase.client.ClientSmallScanner$SmallScannerCallable.call(ClientSmallScanner.java:180)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:364)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:338)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)
>   at 
> org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: GSSException: No valid credentials provided (Mechanism level: 
> Failed to find any Kerberos tgt)
>   at 
> sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
>   at 
> sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
>   at 
> sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
>   at 
> sun.sec

[jira] [Comment Edited] (HBASE-18402) Thrift2 should support DeleteFamilyVersion type

2017-07-19 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16093287#comment-16093287
 ] 

Ted Yu edited comment on HBASE-18402 at 7/19/17 3:45 PM:
-

lgtm


was (Author: yuzhih...@gmail.com):
For ThriftUtilities, this line (277) is not governed by in.isSetDeleteType():
{code}
  out.addColumn(column.getFamily(), column.getQualifier(), 
column.getTimestamp());
{code}
It looks like the above line is removed by the patch.

Please keep the non-delete logic.

> Thrift2 should support  DeleteFamilyVersion type
> 
>
> Key: HBASE-18402
> URL: https://issues.apache.org/jira/browse/HBASE-18402
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 2.0.0-alpha-1
>Reporter: Zheng Hu
>Assignee: Zheng Hu
> Attachments: HBASE-18402.v1.patch
>
>
> Currently, our thrift2 only supports two delete types. Actually, there are 
> four delete types, and we should support the other two: DeleteFamily 
> and DeleteFamilyVersion. 
> {code}
> /**
>  * Specify type of delete:
>  *  - DELETE_COLUMN means exactly one version will be removed,
>  *  - DELETE_COLUMNS means previous versions will also be removed.
>  */
> enum TDeleteType {
>   DELETE_COLUMN = 0,
>   DELETE_COLUMNS = 1
> }
> {code} 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-15548) SyncTable: sourceHashDir is supposed to be optional but won't work without

2017-07-19 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-15548:

Priority: Critical  (was: Minor)

> SyncTable: sourceHashDir is supposed to be optional but won't work without 
> ---
>
> Key: HBASE-15548
> URL: https://issues.apache.org/jira/browse/HBASE-15548
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce, Operability, tooling
>Affects Versions: 1.2.0
> Environment: Ubuntu 12.04
>Reporter: My Ho
>Assignee: Dave Latham
>Priority: Critical
> Fix For: 2.0.0, 0.98.19, 1.4.0
>
> Attachments: HBASE-15548.patch
>
>
> 1) The SyncTable code is contradictory. The usage text says sourcehashdir is 
> optional 
> (https://github.com/apache/hbase/blob/ad3feaa44800f10d102255a240c38ccf23a82d49/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/SyncTable.java#L687).
>  However, the command won't run if sourcehashdir is missing 
> (https://github.com/apache/hbase/blob/ad3feaa44800f10d102255a240c38ccf23a82d49/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/SyncTable.java#L83-L85).
>  
> 2) There is no documentation on how to create the required source hash.
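
For context on point 2, a sketch of the intended two-step workflow, where 
HashTable produces the hash directory that SyncTable then consumes as 
sourcehashdir (table names and paths below are illustrative assumptions):
{code}
# Step 1: compute hashes of the source table into an output directory
hbase org.apache.hadoop.hbase.mapreduce.HashTable usertable /hashes/usertable

# Step 2: pass that directory to SyncTable as sourcehashdir
hbase org.apache.hadoop.hbase.mapreduce.SyncTable \
  /hashes/usertable usertable usertable-copy
{code}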



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-15548) SyncTable: sourceHashDir is supposed to be optional but won't work without

2017-07-19 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-15548:

Component/s: (was: Replication)
 tooling
 Operability
 mapreduce

> SyncTable: sourceHashDir is supposed to be optional but won't work without 
> ---
>
> Key: HBASE-15548
> URL: https://issues.apache.org/jira/browse/HBASE-15548
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce, Operability, tooling
>Affects Versions: 1.2.0
> Environment: Ubuntu 12.04
>Reporter: My Ho
>Assignee: Dave Latham
>Priority: Critical
> Fix For: 2.0.0, 0.98.19, 1.4.0
>
> Attachments: HBASE-15548.patch
>
>
> 1) The SyncTable code is contradictory. The usage text says sourcehashdir is 
> optional 
> (https://github.com/apache/hbase/blob/ad3feaa44800f10d102255a240c38ccf23a82d49/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/SyncTable.java#L687).
>  However, the command won't run if sourcehashdir is missing 
> (https://github.com/apache/hbase/blob/ad3feaa44800f10d102255a240c38ccf23a82d49/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/SyncTable.java#L83-L85).
>  
> 2) There is no documentation on how to create the required source hash.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Reopened] (HBASE-15548) SyncTable: sourceHashDir is supposed to be optional but won't work without

2017-07-19 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey reopened HBASE-15548:
-

I'm reopening and marking this critical, since SyncTable doesn't work without 
this fix. We need it back in maintenance branches.

> SyncTable: sourceHashDir is supposed to be optional but won't work without 
> ---
>
> Key: HBASE-15548
> URL: https://issues.apache.org/jira/browse/HBASE-15548
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce, Operability, tooling
>Affects Versions: 1.2.0
> Environment: Ubuntu 12.04
>Reporter: My Ho
>Assignee: Dave Latham
>Priority: Critical
> Fix For: 2.0.0, 0.98.19, 1.4.0
>
> Attachments: HBASE-15548.patch
>
>
> 1) The SyncTable code is contradictory. The usage text says sourcehashdir is 
> optional 
> (https://github.com/apache/hbase/blob/ad3feaa44800f10d102255a240c38ccf23a82d49/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/SyncTable.java#L687).
>  However, the command won't run if sourcehashdir is missing 
> (https://github.com/apache/hbase/blob/ad3feaa44800f10d102255a240c38ccf23a82d49/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/SyncTable.java#L83-L85).
>  
> 2) There is no documentation on how to create the required source hash.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-16090) ResultScanner is not closed in SyncTable#finishRemainingHashRanges()

2017-07-19 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-16090:

Component/s: tooling
 Operability
 mapreduce

> ResultScanner is not closed in SyncTable#finishRemainingHashRanges()
> 
>
> Key: HBASE-16090
> URL: https://issues.apache.org/jira/browse/HBASE-16090
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce, Operability, tooling
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0, 1.4.0, 0.98.21
>
> Attachments: 16090.v1.txt, 16090.v1.txt
>
>
> Here is related code:
> {code}
>   ResultScanner targetScanner = targetTable.getScanner(scan);
>   for (Result row : targetScanner) {
> targetHasher.hashResult(row);  
>   }
> } // else current batch ends exactly at split end row
> {code}
> targetScanner should be closed upon leaving the if block.
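
One way to guarantee that is try-with-resources, sketched here against the 
snippet above (ResultScanner is Closeable, so it is closed on any exit path):
{code}
try (ResultScanner targetScanner = targetTable.getScanner(scan)) {
  for (Result row : targetScanner) {
    targetHasher.hashResult(row);
  }
} // scanner closed automatically, even if hashResult throws
{code}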



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Reopened] (HBASE-16090) ResultScanner is not closed in SyncTable#finishRemainingHashRanges()

2017-07-19 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey reopened HBASE-16090:
-

Pulling this back into maintenance branches.

> ResultScanner is not closed in SyncTable#finishRemainingHashRanges()
> 
>
> Key: HBASE-16090
> URL: https://issues.apache.org/jira/browse/HBASE-16090
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce, Operability, tooling
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0, 1.4.0, 0.98.21
>
> Attachments: 16090.v1.txt, 16090.v1.txt
>
>
> Here is related code:
> {code}
>   ResultScanner targetScanner = targetTable.getScanner(scan);
>   for (Result row : targetScanner) {
> targetHasher.hashResult(row);  
>   }
> } // else current batch ends exactly at split end row
> {code}
> targetScanner should be closed upon leaving the if block.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HBASE-15548) SyncTable: sourceHashDir is supposed to be optional but won't work without

2017-07-19 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey resolved HBASE-15548.
-
   Resolution: Fixed
Fix Version/s: 1.2.7
   1.3.2

pushed to 1.3 and 1.2. SyncTable was introduced in 1.2, so skipping 1.1.

> SyncTable: sourceHashDir is supposed to be optional but won't work without 
> ---
>
> Key: HBASE-15548
> URL: https://issues.apache.org/jira/browse/HBASE-15548
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce, Operability, tooling
>Affects Versions: 1.2.0
> Environment: Ubuntu 12.04
>Reporter: My Ho
>Assignee: Dave Latham
>Priority: Critical
> Fix For: 2.0.0, 1.4.0, 1.3.2, 1.2.7, 0.98.19
>
> Attachments: HBASE-15548.patch
>
>
> 1) The SyncTable code is contradictory. The usage text says sourcehashdir is 
> optional 
> (https://github.com/apache/hbase/blob/ad3feaa44800f10d102255a240c38ccf23a82d49/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/SyncTable.java#L687).
>  However, the command won't run if sourcehashdir is missing 
> (https://github.com/apache/hbase/blob/ad3feaa44800f10d102255a240c38ccf23a82d49/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/SyncTable.java#L83-L85).
>  
> 2) There is no documentation on how to create the required source hash.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HBASE-17648) HBase Table-level synchronization fails between two secured(kerberized) clusters

2017-07-19 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey resolved HBASE-17648.
-
   Resolution: Fixed
Fix Version/s: 1.2.7
   1.3.2
 Release Note: pushed to 1.3 and 1.2. SyncTable was introduced in 1.2, so 
skipping 1.1.

> HBase Table-level synchronization fails between two secured(kerberized) 
> clusters
> 
>
> Key: HBASE-17648
> URL: https://issues.apache.org/jira/browse/HBASE-17648
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce, Operability, security, tooling
>Affects Versions: 2.0.0
>Reporter: Yi Liang
>Assignee: Yi Liang
>Priority: Critical
> Fix For: 2.0.0, 1.4.0, 1.3.2, 1.2.7
>
> Attachments: HBASE-17648-V1.patch, HBASE-17648-V2.patch
>
>
> Run the 'org.apache.hadoop.hbase.mapreduce.SyncTable' command.
> Following the steps in HBASE-15557 between two secured clusters produces the 
> errors below:
> {code}
> WARN [hconnection-0x55b5e331-metaLookup-shared--pool6-t1] 
> org.apache.hadoop.hbase.ipc.RpcClientImpl: Exception encountered while 
> connecting to the server : javax.security.sasl.SaslException: GSS initiate 
> failed [Caused by GSSException: No valid credentials provided (Mechanism 
> level: Failed to find any Kerberos tgt)]
> 2017-02-08 14:33:57,465 FATAL 
> [hconnection-0x55b5e331-metaLookup-shared--pool6-t1] 
> org.apache.hadoop.hbase.ipc.RpcClientImpl: SASL authentication failed. The 
> most likely cause is missing or invalid credentials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
>   at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
>   at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1697)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)
>   at 
> org.apache.hadoop.hbase.client.ClientSmallScanner$SmallScannerCallable.call(ClientSmallScanner.java:201)
>   at 
> org.apache.hadoop.hbase.client.ClientSmallScanner$SmallScannerCallable.call(ClientSmallScanner.java:180)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:364)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:338)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)
>   at 
> org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: GSSException: No valid credentials provided (Mechanism level: 
> Failed to find any Kerberos tgt)
>   at 
> sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
>   at 
> sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
>   at 
> sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5M

[jira] [Resolved] (HBASE-16090) ResultScanner is not closed in SyncTable#finishRemainingHashRanges()

2017-07-19 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey resolved HBASE-16090.
-
   Resolution: Fixed
Fix Version/s: 1.2.7
   1.3.2
 Release Note: pushed to 1.3 and 1.2. SyncTable was introduced in 1.2, so 
skipping 1.1.

> ResultScanner is not closed in SyncTable#finishRemainingHashRanges()
> 
>
> Key: HBASE-16090
> URL: https://issues.apache.org/jira/browse/HBASE-16090
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce, Operability, tooling
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0, 1.4.0, 1.3.2, 1.2.7, 0.98.21
>
> Attachments: 16090.v1.txt, 16090.v1.txt
>
>
> Here is related code:
> {code}
>   ResultScanner targetScanner = targetTable.getScanner(scan);
>   for (Result row : targetScanner) {
> targetHasher.hashResult(row);  
>   }
> } // else current batch ends exactly at split end row
> {code}
> targetScanner should be closed upon leaving the if block.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18276) Release 1.2.7

2017-07-19 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1609#comment-1609
 ] 

Sean Busbey commented on HBASE-18276:
-

My discovery and backport of HBASE-15548 and HBASE-17648 have renewed my desire 
to get the next 1.2 release out. Expect to start no later than this weekend.

> Release 1.2.7
> -
>
> Key: HBASE-18276
> URL: https://issues.apache.org/jira/browse/HBASE-18276
> Project: HBase
>  Issue Type: Task
>  Components: community
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Fix For: 1.2.7
>
>
> About time to get rolling on 1.2.7 to keep the ~monthly cadence. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17648) HBase Table-level synchronization fails between two secured(kerberized) clusters

2017-07-19 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16093338#comment-16093338
 ] 

Sean Busbey commented on HBASE-17648:
-

bq. pushed to 1.3 and 1.2. SyncTable was introduced in 1.2, so skipping 1.1.

> HBase Table-level synchronization fails between two secured(kerberized) 
> clusters
> 
>
> Key: HBASE-17648
> URL: https://issues.apache.org/jira/browse/HBASE-17648
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce, Operability, security, tooling
>Affects Versions: 2.0.0
>Reporter: Yi Liang
>Assignee: Yi Liang
>Priority: Critical
> Fix For: 2.0.0, 1.4.0, 1.3.2, 1.2.7
>
> Attachments: HBASE-17648-V1.patch, HBASE-17648-V2.patch
>
>
> Run the 'org.apache.hadoop.hbase.mapreduce.SyncTable' command.
> Following the steps in HBASE-15557 between two secured clusters produces the 
> errors below:
> {code}
> WARN [hconnection-0x55b5e331-metaLookup-shared--pool6-t1] 
> org.apache.hadoop.hbase.ipc.RpcClientImpl: Exception encountered while 
> connecting to the server : javax.security.sasl.SaslException: GSS initiate 
> failed [Caused by GSSException: No valid credentials provided (Mechanism 
> level: Failed to find any Kerberos tgt)]
> 2017-02-08 14:33:57,465 FATAL 
> [hconnection-0x55b5e331-metaLookup-shared--pool6-t1] 
> org.apache.hadoop.hbase.ipc.RpcClientImpl: SASL authentication failed. The 
> most likely cause is missing or invalid credentials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
>   at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
>   at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1697)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)
>   at 
> org.apache.hadoop.hbase.client.ClientSmallScanner$SmallScannerCallable.call(ClientSmallScanner.java:201)
>   at 
> org.apache.hadoop.hbase.client.ClientSmallScanner$SmallScannerCallable.call(ClientSmallScanner.java:180)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:364)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:338)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)
>   at 
> org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: GSSException: No valid credentials provided (Mechanism level: 
> Failed to find any Kerberos tgt)
>   at 
> sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
>   at 
> sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
>   at 
> sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
>   at 
> 
