Re: [VOTE] First release candidate for HBase 1.2.6.1 (RC0) is available

2018-06-07 Thread Stack
+1

Checked signatures and hash.
CHANGES.txt looks good as does general layout.
Built from src bundle, started it up. Checked around UI. Looks good.
Loaded data and verified it there after restart.

S

On Sun, Jun 3, 2018 at 11:45 PM Sean Busbey  wrote:

> Hi!
>
> The first release candidate for Apache HBase 1.2.6.1 is available for
> download and testing.
>
> Artifacts are available here:
>
> https://dist.apache.org/repos/dist/dev/hbase/1.2.6.1RC0/
>
> You can find the SHA512 of the artifacts as of this email at the end.
>
> Corresponding convenience artifacts for maven use are in the staging
> repository:
>
> https://repository.apache.org/content/repositories/orgapachehbase-1218/
>
> All artifacts are signed with my code signing key, 0D80DB7C, which is
> also in the project KEYS file:
>
> http://www.apache.org/dist/hbase/KEYS
>
> These artifacts correspond to commit ref
>
> 61297b5e3acf2a77606b59e0b6f0013c9dae0fbb
>
> which has been tagged as 1.2.6.1RC0 as a convenience.
>
> Please take a few minutes to verify the release and vote on releasing it:
>
> [ ] +1 Release these artifacts as Apache HBase 1.2.6.1
> [ ] -1 Do not release this package because ...
>
> This VOTE thread will remain open for at least 72 hours.
>
> -busbey
>
> as of this email the posted artifacts have the following SHA512.
>
> hbase-1.2.6.1-src.tar.gz: 02A44970 2614D148 7B2162A5 9AC23837 FC2583BC 068BC8D8
>                           8FCB1C30 3FE38D2D 403727D5 E7103FF7 7FDF65B1 1F4DFF3D
>                           7E9945BE A5A9453F 4FE0AE0A A56C28FE
>
> hbase-1.2.6.1-bin.tar.gz: EB473744 184430BE 55E8DAF2 A6450E2F 06281960 13D473D0
>                           596779AB 2F1EEFBA 1BB76273 F1C48BCD FCAB1A33 2AFCB649
>                           B0BC3EF8 B2756540 70E7E375 F5CFC43A
>
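
A minimal sketch of checking one of the downloaded artifacts against the SHA512 posted above (the local file path is illustrative; verifying the signature against the KEYS file is a separate gpg step):

    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.security.MessageDigest;

    public class VerifySha512 {
      public static void main(String[] args) throws Exception {
        // Expected digest for hbase-1.2.6.1-src.tar.gz, copied from the listing above.
        String expected = ("02A44970 2614D148 7B2162A5 9AC23837 FC2583BC 068BC8D8 "
            + "8FCB1C30 3FE38D2D 403727D5 E7103FF7 7FDF65B1 1F4DFF3D "
            + "7E9945BE A5A9453F 4FE0AE0A A56C28FE").replace(" ", "").toLowerCase();
        MessageDigest md = MessageDigest.getInstance("SHA-512");
        try (InputStream in = Files.newInputStream(Paths.get("hbase-1.2.6.1-src.tar.gz"))) {
          byte[] buf = new byte[8192];
          for (int n; (n = in.read(buf)) != -1; ) {
            md.update(buf, 0, n);
          }
        }
        StringBuilder actual = new StringBuilder();
        for (byte b : md.digest()) {
          actual.append(String.format("%02x", b));
        }
        System.out.println(actual.toString().equals(expected) ? "SHA512 OK" : "SHA512 MISMATCH");
      }
    }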


[VOTE] Apache HBase 1.4.5 rc0

2018-06-07 Thread Josh Elser

Hi,

Please vote to approve the following as Apache HBase 1.4.5

https://dist.apache.org/repos/dist/dev/hbase/1.4.5rc0/

Per usual, there is a source release as well as a convenience binary.

This is built with JDK7 from the commit: 
https://git-wip-us.apache.org/repos/asf?p=hbase.git;a=commit;h=74596816c85f1256ec8a302efecc0144f2ea76fa 
(there is a corresponding tag "1.4.5rc0" for convenience)


hbase-1.4.5-bin.tar.gz: 7C8EFD79 CD5EAEFF 92F2E093 8AC8448C ED5717BD 4C8D2C43
                        B95F804B 003E2126 9235EFE0 ABE61302 B81B30B1 F9F4A785
                        17191950 2F436F64 19F50E53 999B5272
hbase-1.4.5-src.tar.gz: FED89273 FFA746DA D868DF79 7E46DB75 D0908419 F3D418FF
                        73068583 A6F1DCB2 61BD2389 12DCE920 F8800CAE 23631343
                        DB7601F4 F43331A4 678135E5 E5C566C4

There is also a Maven staging repository for this release:
https://repository.apache.org/content/repositories/orgapachehbase-1219

This vote will be open for at least 72 hours (2018/06/11  UTC).

- Josh (on behalf of the HBase PMC)




[jira] [Created] (HBASE-20705) Having RPC Quota on a table prevents Space quota from being recreated/removed

2018-06-07 Thread Biju Nair (JIRA)
Biju Nair created HBASE-20705:
-

 Summary: Having RPC Quota on a table prevents Space quota from being recreated/removed
 Key: HBASE-20705
 URL: https://issues.apache.org/jira/browse/HBASE-20705
 Project: HBase
  Issue Type: Bug
Reporter: Biju Nair


* Property {{hbase.quota.remove.on.table.delete}} is set to {{true}} by default
 * Create a table and set RPC and Space quota

{noformat}
hbase(main):022:0> create 't2','cf1'
Created table t2
Took 0.7420 seconds
=> Hbase::Table - t2
hbase(main):023:0> set_quota TYPE => SPACE, TABLE => 't2', LIMIT => '1G', 
POLICY => NO_WRITES
Took 0.0105 seconds
hbase(main):024:0> set_quota TYPE => THROTTLE, TABLE => 't2', LIMIT => '10M/sec'
Took 0.0186 seconds
hbase(main):025:0> list_quotas
TABLE => t2 TYPE => THROTTLE, THROTTLE_TYPE => REQUEST_SIZE, LIMIT => 10M/sec, 
SCOPE => MACHINE
TABLE => t2 TYPE => SPACE, TABLE => t2, LIMIT => 1073741824, VIOLATION_POLICY 
=> NO_WRITES{noformat}
 * Drop the table; the Space quota entry is marked {{REMOVE => true}}

{noformat}
hbase(main):026:0> disable 't2'
Took 0.4363 seconds
hbase(main):027:0> drop 't2'
Took 0.2344 seconds
hbase(main):028:0> list_quotas
TABLE => t2 TYPE => SPACE, TABLE => t2, REMOVE => true
USER => u1 TYPE => THROTTLE, THROTTLE_TYPE => REQUEST_SIZE, LIMIT => 10M/sec, 
SCOPE => MACHINE{noformat}
 * Recreate the table and set the Space quota again. The Space quota on the table is still shown as {{REMOVE => true}}

{noformat}
hbase(main):029:0> create 't2','cf1'
Created table t2
Took 0.7348 seconds
=> Hbase::Table - t2
hbase(main):031:0> set_quota TYPE => SPACE, TABLE => 't2', LIMIT => '1G', 
POLICY => NO_WRITES
Took 0.0088 seconds
hbase(main):032:0> list_quotas
OWNER QUOTAS
TABLE => t2 TYPE => THROTTLE, THROTTLE_TYPE => REQUEST_SIZE, LIMIT => 10M/sec, 
SCOPE => MACHINE
TABLE => t2 TYPE => SPACE, TABLE => t2, REMOVE => true{noformat}
 * Remove the RPC quota and drop the table; the Space quota is still not removed

{noformat}
hbase(main):033:0> set_quota TYPE => THROTTLE, TABLE => 't2', LIMIT => NONE
Took 0.0193 seconds

hbase(main):036:0> disable 't2'
Took 0.4305 seconds
hbase(main):037:0> drop 't2'
Took 0.2353 seconds
hbase(main):038:0> list_quotas
OWNER QUOTAS
TABLE => t2                               TYPE => SPACE, TABLE => t2, REMOVE => 
true{noformat}
 * Manually deleting the quota entry from {{hbase:quota}} seems to be the only way to reset it; a sketch of that workaround is below.
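
A minimal sketch of that manual workaround via the Java client, assuming the table-quota rows in {{hbase:quota}} are keyed as "t." + table name (verify the row key with a shell {{scan 'hbase:quota'}} first); deleting the row drops all remaining quota settings for the table:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class DropStaleQuotaRow {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table quota = conn.getTable(TableName.valueOf("hbase:quota"))) {
      // Assumed row-key layout: "t.<table name>" for table quotas.
      quota.delete(new Delete(Bytes.toBytes("t.t2")));
    }
  }
}
{code}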





[jira] [Created] (HBASE-20704) Sometimes compacted storefiles are not archived on region close

2018-06-07 Thread Francis Liu (JIRA)
Francis Liu created HBASE-20704:
---

 Summary: Sometimes compacted storefiles are not archived on region close
 Key: HBASE-20704
 URL: https://issues.apache.org/jira/browse/HBASE-20704
 Project: HBase
  Issue Type: Bug
  Components: Compaction
Affects Versions: 2.0.0, 1.4.0, 1.3.0, 3.0.0, 1.5.0
Reporter: Francis Liu


During region close, compacted files which have not yet been archived by the
discharger are archived as part of the region closing process. It is important
that these files are archived as a whole to ensure data consistency; otherwise a
storefile containing delete tombstones can be archived while older storefiles
containing cells that were supposed to be deleted are left unarchived.

On region close a compacted storefile is skipped from archiving if it still has
read references (i.e. open scanners). This behavior is correct when the
discharger chore runs, but on region close consistency is of course more
important, so we should add a special case that ignores any references on the
storefile and goes ahead and archives it.

Attached patch contains a unit test that reproduces the problem.





[jira] [Resolved] (HBASE-20702) Processing crash, skip ONLINE'ing empty rows

2018-06-07 Thread stack (JIRA)


 [ https://issues.apache.org/jira/browse/HBASE-20702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack resolved HBASE-20702.
---
  Resolution: Fixed
Hadoop Flags: Reviewed

Pushed to branch-2.0+. Blaming [~elserj] for review.

> Processing crash, skip ONLINE'ing empty rows
> 
>
> Key: HBASE-20702
> URL: https://issues.apache.org/jira/browse/HBASE-20702
> Project: HBase
>  Issue Type: Sub-task
>  Components: amv2
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 2.0.1
>
> Attachments: HBASE-20702.branch-2.0.001.patch
>
>
> This patch comes from the parent issue. The parent issue identifies us ONLINE'ing
> a region even though there is nothing in its row (in the parent-issue scenario,
> the region info family was deleted for a merged region's parent). We shouldn't do this.
> Committing patch from parent here in this subtask since the parent issue is 
> still under investigation.





[jira] [Created] (HBASE-20703) When the quota feature is off, the shell should give a nice message

2018-06-07 Thread Sean Busbey (JIRA)
Sean Busbey created HBASE-20703:
---

 Summary: When the quota feature is off, the shell should give a nice message
 Key: HBASE-20703
 URL: https://issues.apache.org/jira/browse/HBASE-20703
 Project: HBase
  Issue Type: Improvement
  Components: shell, Usability
Affects Versions: 2.0.0
Reporter: Sean Busbey


When the quota feature is off, the shell gives an error that requires knowledge
of our implementation details to understand:

{code}
2.2.1 :001 > list_snapshot_sizes
SNAPSHOT SIZE   



ERROR: Unknown table hbase:quota!

For usage try 'help "list_snapshot_sizes"'

Took 1.6285 seconds 

   
2.2.1 :002 > list_quota_snapshots
 TABLE USAGE LIMIT IN_VIOLATION POLICY

ERROR: Unknown table hbase:quota!

For usage try 'help "list_quota_snapshots"'

Took 0.0371 seconds
{code}

Or it just doesn't mention that quotas can't exist:

{code}
2.2.1 :003 > list_quotas
OWNER QUOTAS

0 row(s)
Took 0.0475 seconds

2.2.1 :004 > list_quota_table_sizes
TABLE SIZE

0 row(s)
Took 0.1221 seconds
{code}

set_quota gives a better pointer that the problem is that the feature is off:

{code}
2.2.1 :005 > set_quota USER => 'busbey', GLOBAL_BYPASS => true

ERROR: org.apache.hadoop.hbase.DoNotRetryIOException: 
java.lang.UnsupportedOperationException: quota support disabled
at 
org.apache.hadoop.hbase.quotas.MasterQuotaManager.checkQuotaSupport(MasterQuotaManager.java:442)
at 
org.apache.hadoop.hbase.quotas.MasterQuotaManager.setQuota(MasterQuotaManager.java:124)
at 
org.apache.hadoop.hbase.master.MasterRpcServices.setQuota(MasterRpcServices.java:1555)
at 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Caused by: java.lang.UnsupportedOperationException: quota support disabled
... 8 more

For usage try 'help "set_quota"'
{code}

Instead we should give a nice message, like the one you get if visibility labels are off:

{code}

2.2.1 :06 > list_labels

ERROR: DISABLED: Visibility labels feature is not available

For usage try 'help "list_labels"'

Took 0.0426 seconds
{code}
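
A minimal sketch of the kind of check the shell could perform before touching quotas; the class name and message text are illustrative, and it assumes the absence of the {{hbase:quota}} table is what signals that the quota feature is disabled:

{code:java}
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class QuotaFeatureCheck {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      if (!admin.tableExists(TableName.valueOf("hbase:quota"))) {
        // Mirror the visibility-labels style message instead of a raw "Unknown table" error.
        System.out.println("ERROR: DISABLED: Quota support is not enabled"
            + " (hbase.quota.enabled is false)");
      }
    }
  }
}
{code}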





[jira] [Created] (HBASE-20702) Processing crash, skip ONLINE'ing empty rows

2018-06-07 Thread stack (JIRA)
stack created HBASE-20702:
-

 Summary: Processing crash, skip ONLINE'ing empty rows
 Key: HBASE-20702
 URL: https://issues.apache.org/jira/browse/HBASE-20702
 Project: HBase
  Issue Type: Sub-task
  Components: amv2
Affects Versions: 2.0.0
Reporter: stack
Assignee: stack
 Fix For: 2.0.1


This patch comes from the parent issue. The parent issue identifies us ONLINE'ing a
region even though there is nothing in its row (in the parent-issue scenario, the
region info family was deleted for a merged region's parent). We shouldn't do this.

Committing patch from parent here in this subtask since the parent issue is 
still under investigation.





[jira] [Created] (HBASE-20701) too much logging when balancer runs

2018-06-07 Thread Monani Mihir (JIRA)
Monani Mihir created HBASE-20701:


 Summary: too much logging when balancer runs
 Key: HBASE-20701
 URL: https://issues.apache.org/jira/browse/HBASE-20701
 Project: HBase
  Issue Type: Improvement
  Components: Balancer
Reporter: Monani Mihir
Assignee: Monani Mihir








[jira] [Created] (HBASE-20700) Moving the meta region when a server crashes can cause the procedure to be stuck

2018-06-07 Thread Duo Zhang (JIRA)
Duo Zhang created HBASE-20700:
-

 Summary: Moving the meta region when a server crashes can cause the procedure to be stuck
 Key: HBASE-20700
 URL: https://issues.apache.org/jira/browse/HBASE-20700
 Project: HBase
  Issue Type: Sub-task
Reporter: Duo Zhang


As said in HBASE-20682.





[jira] [Resolved] (HBASE-20696) Shell list_peers prints a useless string

2018-06-07 Thread Sean Busbey (JIRA)


 [ https://issues.apache.org/jira/browse/HBASE-20696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Busbey resolved HBASE-20696.
-
Resolution: Not A Problem

That's IRB, the Ruby interactive shell that underlies the hbase shell.

Please see the example of configuring your local .irbrc file in our reference 
guide if you want to change it:

http://hbase.apache.org/book.html#irbrc

Please use the mailing list to discuss configuration questions.

> Shell list_peers prints a useless string
> --
>
> Key: HBASE-20696
> URL: https://issues.apache.org/jira/browse/HBASE-20696
> Project: HBase
>  Issue Type: Bug
>Reporter: Zheng Hu
>Priority: Major
>
> {code}
> hbase(main):020:0> list_peers
>  PEER_ID CLUSTER_KEY ENDPOINT_CLASSNAME REMOTE_ROOT_DIR 
> SYNC_REPLICATION_STATE STATE REPLICATE_ALL NAMESPACES TABLE_CFS BANDWIDTH 
> SERIAL
>  1 
> lg-hadoop-tst-st01.bj:10010,lg-hadoop-tst-st02.bj:10010,lg-hadoop-tst-st03.bj:10010:/hbase/test-hbase-slave
>  nil hdfs://lg-hadoop-tst-st01.bj:20100/hbase/test-hbase-slave/remoteWALs 
> ACTIVE ENABLED false  default.ycsb-test 0 false
> 1 row(s)
> Took 0.0446 seconds   
>  
> => #> It's useless .. 
> {code}
> Interested contributors are welcome to fix this bug...





[jira] [Created] (HBASE-20699) QuotaCache should cancel the QuotaRefresherChore service inside its stop()

2018-06-07 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-20699:
--

 Summary: QuotaCache should cancel the QuotaRefresherChore service inside its stop()
 Key: HBASE-20699
 URL: https://issues.apache.org/jira/browse/HBASE-20699
 Project: HBase
  Issue Type: Bug
Reporter: Nihal Jain
Assignee: Nihal Jain


*ANALYSIS*
 * {{rsQuotaManager.stop()}} is called from HRegionServer.run() in case the rs is aborted for some reason:

{code:java}
// Stop the quota manager
if (rsQuotaManager != null) {
  rsQuotaManager.stop();
}
{code}
 * Inside {{RegionServerRpcQuotaManager.stop()}}:

{code:java}
  public void stop() {
if (isQuotaEnabled()) {
  quotaCache.stop("shutdown");
}
  }
{code}
 * {{QuotaCache}} starts {{QuotaRefresherChore}} in {{QuotaCache.start()}}:

{code:java}
  public void start() throws IOException {
stopped = false;

// TODO: This will be replaced once we have the notification bus ready.
Configuration conf = rsServices.getConfiguration();
int period = conf.getInt(REFRESH_CONF_KEY, REFRESH_DEFAULT_PERIOD);
refreshChore = new QuotaRefresherChore(period, this);
rsServices.getChoreService().scheduleChore(refreshChore);
  }
{code}
 * {{QuotaCache}} should cancel {{refreshChore}} inside {{QuotaCache.stop()}}, but currently it does not (a sketch of the fix follows the snippet):

{code:java}
  @Override
  public void stop(final String why) {
stopped = true;
  }
{code}
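
A minimal sketch of the proposed change (not a committed patch), assuming {{refreshChore}} is the {{ScheduledChore}} field scheduled in {{start()}} above:

{code:java}
  @Override
  public void stop(final String why) {
    if (refreshChore != null) {
      // De-schedule the chore so it stops running (and retrying) once the cache is stopped.
      refreshChore.cancel(true);
    }
    stopped = true;
  }
{code}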

*IMPACT:*
A still-running QuotaRefresherChore may keep retrying operations and delay the rs abort.





[jira] [Created] (HBASE-20698) Master doesn't record the right server version until a newly started region server calls the regionServerReport method

2018-06-07 Thread Guanghao Zhang (JIRA)
Guanghao Zhang created HBASE-20698:
--

 Summary: Master doesn't record the right server version until a newly started region server calls the regionServerReport method
 Key: HBASE-20698
 URL: https://issues.apache.org/jira/browse/HBASE-20698
 Project: HBase
  Issue Type: Bug
  Components: proc-v2
Affects Versions: 2.0.0
Reporter: Guanghao Zhang


When a new region server starts, it calls regionServerStartup first. The master
records this server as a new online server and may dispatch a RemoteProcedure to
it. But the master only records the server version when the new region server
calls the regionServerReport method. Dispatching a new RemoteProcedure to this
new region server will fail if the recorded version is not right.

{code:java}
  @Override
  protected void remoteDispatch(final ServerName serverName,
      final Set remoteProcedures) {
    final int rsVersion = master.getAssignmentManager().getServerVersion(serverName);
    if (rsVersion >= RS_VERSION_WITH_EXEC_PROCS) {
      LOG.trace("Using procedure batch rpc execution for serverName={} version={}",
        serverName, rsVersion);
      submitTask(new ExecuteProceduresRemoteCall(serverName, remoteProcedures));
    } else {
      LOG.info(String.format(
        "Fallback to compat rpc execution for serverName=%s version=%s",
        serverName, rsVersion));
      submitTask(new CompatRemoteProcedureResolver(serverName, remoteProcedures));
    }
  }
{code}

The above code uses the version to resolve a compatibility problem, so dispatch
works correctly for old-version region servers. RefreshPeerProcedure, however,
is new since HBase 2.0, so it does not need this fallback. But because the new
region server's version is not yet recorded correctly, CompatRemoteProcedureResolver
is used for RefreshPeerProcedure too, so the RefreshPeerProcedure can't be
executed correctly.





[jira] [Created] (HBASE-20697) Can't cache all region locations of the specified table by calling table.getRegionLocator().getAllRegionLocations()

2018-06-07 Thread zhaoyuan (JIRA)
zhaoyuan created HBASE-20697:


 Summary: Can't cache all region locations of the specified table by calling table.getRegionLocator().getAllRegionLocations()
 Key: HBASE-20697
 URL: https://issues.apache.org/jira/browse/HBASE-20697
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.2.6, 1.3.1
Reporter: zhaoyuan
Assignee: zhaoyuan


When we upgrade and restart a new version of an application that reads and
writes to HBase, we get some operation timeouts. The timeouts are expected
because, after the application restarts, it holds no region location cache and
has to talk to zk and the meta regionserver to get region locations.

We want to avoid these timeouts, so we do warm-up work; as far as I understand,
the method table.getRegionLocator().getAllRegionLocations() should fetch all
region locations and cache them. However, it didn't work well. There were still
a lot of timeouts, which confused me.
{code:java}
public List getAllRegionLocations() throws IOException {
  TableName tableName = getName();
  NavigableMap locations =
      MetaScanner.allTableRegions(this.connection, tableName);
  ArrayList regions = new ArrayList<>(locations.size());
  for (Entry entry : locations.entrySet()) {
    regions.add(new HRegionLocation(entry.getKey(), entry.getValue()));
  }
  if (regions.size() > 0) {
    connection.cacheLocation(tableName, new RegionLocations(regions));
  }
  return regions;
}

// In MetaCache

public void cacheLocation(final TableName tableName, final RegionLocations locations) {
  byte[] startKey = locations.getRegionLocation().getRegionInfo().getStartKey();
  ConcurrentMap tableLocations = getTableLocations(tableName);
  RegionLocations oldLocation = tableLocations.putIfAbsent(startKey, locations);
  boolean isNewCacheEntry = (oldLocation == null);
  if (isNewCacheEntry) {
    if (LOG.isTraceEnabled()) {
      LOG.trace("Cached location: " + locations);
    }
    addToCachedServers(locations);
    return;
  }
{code}
It will collect all regions into one RegionLocations object and cache it only
under the first non-null region location. When we then call getCachedLocation():
{code:java}
public RegionLocations getCachedLocation(final TableName tableName, final byte[] row) {
  ConcurrentNavigableMap tableLocations =
      getTableLocations(tableName);

  Entry e = tableLocations.floorEntry(row);
  if (e == null) {
    if (metrics != null) metrics.incrMetaCacheMiss();
    return null;
  }
  RegionLocations possibleRegion = e.getValue();

  // make sure that the end key is greater than the row we're looking
  // for, otherwise the row actually belongs in the next region, not
  // this one. the exception case is when the endkey is
  // HConstants.EMPTY_END_ROW, signifying that the region we're
  // checking is actually the last region in the table.
  byte[] endKey = possibleRegion.getRegionLocation().getRegionInfo().getEndKey();
  if (Bytes.equals(endKey, HConstants.EMPTY_END_ROW) ||
      getRowComparator(tableName).compareRows(
          endKey, 0, endKey.length, row, 0, row.length) > 0) {
    if (metrics != null) metrics.incrMetaCacheHit();
    return possibleRegion;
  }

  // Passed all the way through, so we got nothing - complete cache miss
  if (metrics != null) metrics.incrMetaCacheMiss();
  return null;
}
{code}
It will choose that first cached location as possibleRegion, and the lookup may
then mismatch.
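
A hedged warm-up sketch for the meantime, under the assumption (suggested by the getCachedLocation() logic above) that looking up each region's start key individually caches each region on its own; the table name is illustrative:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class RegionCacheWarmup {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         RegionLocator locator = conn.getRegionLocator(TableName.valueOf("my_table"))) {
      // One lookup per region start key; reload=true forces a fresh meta lookup,
      // and each result is cached for its own region rather than as one merged entry.
      for (byte[] startKey : locator.getStartKeys()) {
        locator.getRegionLocation(startKey, true);
      }
    }
  }
}
{code}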

So did I forget something, or am I wrong somewhere? If this is indeed a bug, I
think it would not be very hard to fix.

Hope committers and the PMC review this!

 

 





[jira] [Created] (HBASE-20696) Shell list_peers prints a useless string

2018-06-07 Thread Zheng Hu (JIRA)
Zheng Hu created HBASE-20696:


 Summary: Shell list_peers prints a useless string
 Key: HBASE-20696
 URL: https://issues.apache.org/jira/browse/HBASE-20696
 Project: HBase
  Issue Type: Bug
Reporter: Zheng Hu


{code}
hbase(main):020:0> list_peers
 PEER_ID CLUSTER_KEY ENDPOINT_CLASSNAME REMOTE_ROOT_DIR SYNC_REPLICATION_STATE 
STATE REPLICATE_ALL NAMESPACES TABLE_CFS BANDWIDTH SERIAL
 1 
lg-hadoop-tst-st01.bj:10010,lg-hadoop-tst-st02.bj:10010,lg-hadoop-tst-st03.bj:10010:/hbase/test-hbase-slave
 nil hdfs://lg-hadoop-tst-st01.bj:20100/hbase/test-hbase-slave/remoteWALs 
ACTIVE ENABLED false  default.ycsb-test 0 false
1 row(s)
Took 0.0446 seconds 
   
=> #> It's useless .. 
{code}

Interested contributors are welcome to fix this bug...


