Re: [VOTE] The 1st HBase 0.98.20 release candidate (RC0) is available

2016-06-10 Thread larsh
+1
The usual:
- built from source
- built with Phoenix
- loaded a few 100m rows
- did various experiments (with and without Phoenix)
- nothing strange in the logs

  From: Andrew Purtell 
 To: "dev@hbase.apache.org"  
 Sent: Tuesday, June 7, 2016 6:26 PM
 Subject: [VOTE] The 1st HBase 0.98.20 release candidate (RC0) is available

The 1st HBase 0.98.20 release candidate (RC0) is available for download at
https://dist.apache.org/repos/dist/dev/hbase/0.98.20RC0/ and Maven
artifacts are also available in the temporary repository
https://repository.apache.org/content/repositories/orgapachehbase-1137/ .

The detailed source and binary compatibility report for this release with
respect to the previous is available for your review at
https://dist.apache.org/repos/dist/dev/hbase/0.98.20RC0/0.98.19_0.98.20RC0_compat_report.html
. There are no reported problems or warnings.

The 41 issues resolved in this release can be found at
https://s.apache.org/5f48 .

I have made the following preliminary assessments of this candidate:
- Build from the source artifact with RAT and enforcers enabled (-Prelease)
completes successfully
- Unit test suite passes (7u79)
- Loaded 1M rows with LTT (10 readers, 10 writers, 10 updaters @ 20%),
nothing unusual logged, all keys verified, reported latencies in the
ballpark
- Built and ran unit tests with head of Apache Phoenix 4.x-HBase-0.98
branch, looks good (7u79)

Signed with my ***former, now renewed*** code signing key D5365CCD. You may
need to refetch it.

    pub  4096R/D5365CCD 2013-12-19 [expires: 2018-05-20]
    uid  Andrew Purtell (CODE SIGNING KEY) <apurt...@apache.org>

Apologies for the churn on signing key.

Please try out the candidate and vote +1/0/-1. This vote will be open for
at least 72 hours. Unless there is an objection, I will try to close it
Monday, June 12, 2016 if we have sufficient votes. That is 4 working days
from now. Three +1 votes from the PMC will be required to release.


-- 
Best regards,

  - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)

Re: Branch for 1.3

2016-06-10 Thread Enis Söztutar
We should have a conclusion for HBASE-15406 (roll forward or backward)
before 1.3.0.

Enis

On Fri, Jun 10, 2016 at 5:04 PM, Ted Yu  wrote:

> bq. How many people want to stick with Hadoop 2.4 yet upgrade to HBase 1.3?
>
> Seems like the above question should be asked on user@ also.
>
> On Fri, Jun 10, 2016 at 5:00 PM, Mikhail Antonov 
> wrote:
>
> > Basically everything I was waiting to land in 1.3 is done, so it's time
> > to start rolling RCs.
> >
> > In HBASE-15344 I'm thinking about supported versions; it looks like they
> > could be the same as for 1.2.
> >
> > I'm thinking of moving Hadoop 2.4.* from Supported to Not Tested, to
> > encourage people to move and leave fewer versions to test. How many
> > people want to stick with Hadoop 2.4 yet upgrade to HBase 1.3?
> >
> > -Mikhail
> >
> > On Sun, May 15, 2016 at 11:53 PM, Sean Busbey  wrote:
> >
> > > On Sun, May 15, 2016 at 10:51 PM, Mikhail Antonov <olorinb...@gmail.com>
> > > wrote:
> > > > Thanks Sean, I indeed missed that looking at the list of issues. I'm
> > > > not familiar with Phoenix, but will try to look at the HBase side and
> > > > help reviewing patches here. I also see that HBASE-14845 is marked
> > > > critical (and has been for a long time), do you want that patch in
> > > > for 1.3 or should we bump it to 1.3.1/1.4.0?
> > > >
> > >
> > > I never managed to get it out of test scope, and I doubt it'll get
> > > done in a timely manner.
> > >
> > > Best to bump it out.
> > >
> >
> >
> >
> > --
> > Thanks,
> > Michael Antonov
> >
>


Re: Branch for 1.3

2016-06-10 Thread Ted Yu
bq. How many people want to stick with Hadoop 2.4 yet upgrade to HBase 1.3?

Seems like the above question should be asked on user@ also.

On Fri, Jun 10, 2016 at 5:00 PM, Mikhail Antonov 
wrote:

> Basically everything I was waiting to land in 1.3 is done, so it's time to
> start rolling RCs.
>
> In HBASE-15344 I'm thinking about supported versions; it looks like they
> could be the same as for 1.2.
>
> I'm thinking of moving Hadoop 2.4.* from Supported to Not Tested, to
> encourage people to move and leave fewer versions to test. How many people
> want to stick with Hadoop 2.4 yet upgrade to HBase 1.3?
>
> -Mikhail
>
> On Sun, May 15, 2016 at 11:53 PM, Sean Busbey  wrote:
>
> > On Sun, May 15, 2016 at 10:51 PM, Mikhail Antonov 
> > wrote:
> > > Thanks Sean, I indeed missed that looking at the list of issues. I'm
> > > not familiar with Phoenix, but will try to look at the HBase side and
> > > help reviewing patches here. I also see that HBASE-14845 is marked
> > > critical (and has been for a long time), do you want that patch in for
> > > 1.3 or should we bump it to 1.3.1/1.4.0?
> > >
> >
> > I never managed to get it out of test scope, and I doubt it'll get
> > done in a timely manner.
> >
> > Best to bump it out.
> >
>
>
>
> --
> Thanks,
> Michael Antonov
>


Re: Branch for 1.3

2016-06-10 Thread Mikhail Antonov
Basically everything I was waiting to land in 1.3 is done, so it's time to
start rolling RCs.

In HBASE-15344 I'm thinking about supported versions; it looks like they
could be the same as for 1.2.

I'm thinking of moving Hadoop 2.4.* from Supported to Not Tested, to
encourage people to move and leave fewer versions to test. How many people
want to stick with Hadoop 2.4 yet upgrade to HBase 1.3?

-Mikhail

On Sun, May 15, 2016 at 11:53 PM, Sean Busbey  wrote:

> On Sun, May 15, 2016 at 10:51 PM, Mikhail Antonov 
> wrote:
> > Thanks Sean, I indeed missed that looking at the list of issues. I'm not
> > familiar with Phoenix, but will try to look at the HBase side and help
> > reviewing patches here. I also see that HBASE-14845 is marked critical
> > (and has been for a long time), do you want that patch in for 1.3 or
> > should we bump it to 1.3.1/1.4.0?
> >
>
> I never managed to get it out of test scope, and I doubt it'll get
> done in a timely manner.
>
> Best to bump it out.
>



-- 
Thanks,
Michael Antonov


[jira] [Created] (HBASE-16010) Put draining function through Admin API

2016-06-10 Thread Jerry He (JIRA)
Jerry He created HBASE-16010:


 Summary: Put draining function through Admin API
 Key: HBASE-16010
 URL: https://issues.apache.org/jira/browse/HBASE-16010
 Project: HBase
  Issue Type: Improvement
Reporter: Jerry He
Priority: Minor


Currently, there is no Admin API for the draining function. Clients have to
interact directly with the ZooKeeper draining znode to add and remove draining
servers.
For example, in draining_servers.rb:
{code}
  # Connect to ZooKeeper and locate the parent znode that holds the list of
  # draining servers.
  zkw = org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.new(config,
    "draining_servers", nil)
  parentZnode = zkw.drainingZNode

  begin
    # Mark each server as draining by creating a child znode under the parent.
    for server in servers
      node = ZKUtil.joinZNode(parentZnode, server)
      ZKUtil.createAndFailSilent(zkw, node)
    end
  ensure
    zkw.close()
  end
{code}

This is not good in cases like secure clusters with protected ZooKeeper nodes.
Let's expose the draining function through the Admin API.
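
As a rough illustration only, here is a hypothetical sketch of what such Admin
methods might look like (the method names drainRegionServers and
removeDrainFromRegionServers, and their signatures, are assumptions for
discussion, not an existing API):
{code}
// Hypothetical sketch only; not an existing API.
try (Connection conn = ConnectionFactory.createConnection(conf);
     Admin admin = conn.getAdmin()) {
  // Mark servers as draining without touching ZooKeeper directly.
  admin.drainRegionServers(serversToDrain);
  // ... later, clear the draining state.
  admin.removeDrainFromRegionServers(serversToDrain);
}
{code}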



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16009) Restoring an incremental backup should not require -overwrite

2016-06-10 Thread Ted Yu (JIRA)
Ted Yu created HBASE-16009:
--

 Summary: Restoring an incremental backup should not require 
-overwrite
 Key: HBASE-16009
 URL: https://issues.apache.org/jira/browse/HBASE-16009
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu


When I tried to restore an incremental backup,

hbase restore hdfs://hbase-test-rc-rerun-6:8020/user/hbase backup_1465575766499 
t1 t2

I got:
{code}
2016-06-10 19:53:11,317 ERROR [main] impl.RestoreClientImpl: ERROR: restore 
failed with error: Existing table found in target while no "-overwrite" option 
found
java.io.IOException: Existing table found in target while no "-overwrite" 
option found
at 
org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.checkTargetTables(RestoreClientImpl.java:186)
at 
org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restore(RestoreClientImpl.java:108)
at 
org.apache.hadoop.hbase.backup.RestoreDriver.parseAndRun(RestoreDriver.java:169)
at 
org.apache.hadoop.hbase.backup.RestoreDriver.doWork(RestoreDriver.java:198)
{code}
The above check should only be performed when restoring a full backup.
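
A minimal sketch of the intended guard in checkTargetTables(), assuming an
isIncremental flag (a hypothetical name) is available for the restore request:
{code}
// Sketch only: isIncremental and targetTableExists are assumed names. The
// existence check should be skipped for incremental restores, which by
// definition target tables that already exist.
if (!isIncremental && targetTableExists && !isOverwrite) {
  throw new IOException("Existing table found in target while no "
      + "\"-overwrite\" option found");
}
{code}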



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16008) A robust way to deal with early termination of HBCK

2016-06-10 Thread Stephen Yuan Jiang (JIRA)
Stephen Yuan Jiang created HBASE-16008:
--

 Summary: A robust way to deal with early termination of HBCK
 Key: HBASE-16008
 URL: https://issues.apache.org/jira/browse/HBASE-16008
 Project: HBase
  Issue Type: Improvement
  Components: hbck
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang


When HBCK is running, we want to disable the Catalog Janitor, the Balancer, and
the Split/Merge switch.  Today, the implementation is not robust.  If HBCK is
terminated early by Control-C, the changed state is not reset to the original.

HBASE-15406 tried to solve this problem for the Split/Merge switch.  The
implementation is complicated, and it did not cover the Catalog Janitor and the
Balancer.

Another problem is that, to prevent multiple concurrent HBCK runs, we use a
file lock to indicate a running HBCK; early termination might not clean up the
file.  Sometimes we have to manually remove the file so that a future HBCK can
run.

The proposal to solve all of these problems is to use a znode to indicate that
an HBCK run is in progress.  The Catalog Janitor, the Balancer, and the
Split/Merge switch would all look for this znode before doing their operations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16007) Job's Configuration should be passed to TableMapReduceUtil#addDependencyJars() in WALPlayer

2016-06-10 Thread Ted Yu (JIRA)
Ted Yu created HBASE-16007:
--

 Summary: Job's Configuration should be passed to 
TableMapReduceUtil#addDependencyJars() in WALPlayer
 Key: HBASE-16007
 URL: https://issues.apache.org/jira/browse/HBASE-16007
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu


HBASE-15752 tried to fix a ClassNotFoundException that occurs when a custom WAL
edit Codec is involved.

However, it didn't achieve this goal due to a typo in the first parameter
passed to TableMapReduceUtil#addDependencyJars().

job.getConfiguration() should have been used.
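
A minimal sketch of the intended call in WALPlayer (the codecClass variable and
surrounding context are illustrative):
{code}
// Sketch only. The Job serializes job.getConfiguration(), so the tmpjars
// entries must be added there; the broken code passed the tool's local
// conf instead, so the custom codec never reached the job's classpath.
TableMapReduceUtil.addDependencyJars(job.getConfiguration(), codecClass);
{code}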



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] The 1st HBase 0.98.20 release candidate (RC0) is available

2016-06-10 Thread Sean Busbey
+1

- sigs and checksums match
- source matches commit 9624f3a9eb76f84656a41de0e2099c97f949e831,
currently pointed at by the 0.98.20RC0 tag.
- LICENSE/NOTICE spot check looks fine (basically the same as last time,
and no red flags that it should have changed)
- compiles to binary artifacts that spot-check correctly.

On Tue, Jun 7, 2016 at 8:26 PM, Andrew Purtell  wrote:
> The 1st HBase 0.98.20 release candidate (RC0) is available for download at
> https://dist.apache.org/repos/dist/dev/hbase/0.98.20RC0/ and Maven
> artifacts are also available in the temporary repository
> https://repository.apache.org/content/repositories/orgapachehbase-1137/ .
>
> The detailed source and binary compatibility report for this release with
> respect to the previous is available for your review at
> https://dist.apache.org/repos/dist/dev/hbase/0.98.20RC0/0.98.19_0.98.20RC0_compat_report.html
> . There are no reported problems or warnings.
>
> The 41 issues resolved in this release can be found at
> https://s.apache.org/5f48 .
>
> I have made the following preliminary assessments of this candidate:
> - Build from the source artifact with RAT and enforcers enabled (-Prelease)
> completes successfully
> - Unit test suite passes (7u79)
> - Loaded 1M rows with LTT (10 readers, 10 writers, 10 updaters @ 20%),
> nothing unusual logged, all keys verified, reported latencies in the
> ballpark
> - Built and ran unit tests with head of Apache Phoenix 4.x-HBase-0.98
> branch, looks good (7u79)
>
> Signed with my ***former, now renewed*** code signing key D5365CCD. You may
> need to refetch it.
>
> pub   4096R/D5365CCD 2013-12-19 [expires: 2018-05-20]
> uid  Andrew Purtell (CODE SIGNING KEY) <apurt...@apache.org>
>
> Apologies for the churn on signing key.
>
> Please try out the candidate and vote +1/0/-1. This vote will be open for
> at least 72 hours. Unless there is an objection, I will try to close it
> Monday, June 12, 2016 if we have sufficient votes. That is 4 working days
> from now. Three +1 votes from the PMC will be required to release.
>
>
> --
> Best regards,
>
>- Andy
>
> Problems worthy of attack prove their worth by hitting back. - Piet Hein
> (via Tom White)



-- 
busbey


[jira] [Created] (HBASE-16006) FileSystem should be obtained from specified path in WALInputFormat#getSplits()

2016-06-10 Thread Ted Yu (JIRA)
Ted Yu created HBASE-16006:
--

 Summary: FileSystem should be obtained from specified path in 
WALInputFormat#getSplits()
 Key: HBASE-16006
 URL: https://issues.apache.org/jira/browse/HBASE-16006
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu


I was trying out the restore feature and encountered the following exception:
{code}
2016-06-10 16:56:57,533 ERROR [main] impl.RestoreClientImpl: ERROR: restore 
failed with error: java.io.IOException: Can not restore from backup directory 
hdfs://hbase-test-rc-rerun-6:8020/user/hbase/backup_1465575766499/WALs (check 
Hadoop and HBase logs)
java.io.IOException: java.io.IOException: Can not restore from backup directory 
hdfs://hbase-test-rc-rerun-6:8020/user/hbase/backup_1465575766499/WALs (check 
Hadoop and HBase logs)
at 
org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restoreStage(RestoreClientImpl.java:257)
at 
org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restore(RestoreClientImpl.java:112)
at 
org.apache.hadoop.hbase.backup.RestoreDriver.parseAndRun(RestoreDriver.java:169)
at 
org.apache.hadoop.hbase.backup.RestoreDriver.doWork(RestoreDriver.java:198)
at 
org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:112)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at 
org.apache.hadoop.hbase.backup.RestoreDriver.main(RestoreDriver.java:203)
Caused by: java.io.IOException: Can not restore from backup directory 
hdfs://hbase-test-rc-rerun-6:8020/user/hbase/backup_1465575766499/WALs (check 
Hadoop and HBase logs)
at 
org.apache.hadoop.hbase.backup.mapreduce.MapReduceRestoreService.run(MapReduceRestoreService.java:92)
at 
org.apache.hadoop.hbase.backup.util.RestoreServerUtil.incrementalRestoreTable(RestoreServerUtil.java:165)
at 
org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restoreImage(RestoreClientImpl.java:293)
at 
org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restoreStage(RestoreClientImpl.java:238)
... 6 more
Caused by: java.lang.IllegalArgumentException: Wrong FS: 
hdfs://hbase-test-rc-rerun-6:8020/user/hbase/backup_1465575766499/WALs, 
expected: hdfs://hbase-test-rc-rerun-6.openstacklocal:8020
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:658)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:212)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:882)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:112)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:951)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:947)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:947)
at 
org.apache.hadoop.hbase.mapreduce.WALInputFormat.getFiles(WALInputFormat.java:266)
at 
org.apache.hadoop.hbase.mapreduce.WALInputFormat.getSplits(WALInputFormat.java:246)
at 
org.apache.hadoop.hbase.mapreduce.WALInputFormat.getSplits(WALInputFormat.java:227)
at 
org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301)
at 
org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318)
at 
org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
at org.apache.hadoop.hbase.mapreduce.WALPlayer.run(WALPlayer.java:380)
at 
org.apache.hadoop.hbase.backup.mapreduce.MapReduceRestoreService.run(MapReduceRestoreService.java:73)
... 9 more
{code}
It turned out that the refactoring from HBASE-14140 changed the code:
{code}
-FileSystem fs = inputDir.getFileSystem(conf);
-List files = getFiles(fs, inputDir, startTime, endTime);
-
-List splits = new ArrayList(files.size());
-for (FileStatus file : files) {
+FileSystem fs = FileSystem.get(conf);
{code}
We shouldn't be using the default FileSystem.
Instead, the FileSystem should be obtained from the specified path.
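
A minimal sketch of the fix, restoring the pre-HBASE-14140 pattern (variable
names follow the removed lines shown above):
{code}
// Derive the FileSystem from the input path itself, so a fully qualified URI
// such as hdfs://host:8020/... resolves against the correct filesystem rather
// than the default one from the configuration.
FileSystem fs = inputDir.getFileSystem(conf);
List<FileStatus> files = getFiles(fs, inputDir, startTime, endTime);
{code}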



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16005) Implement HFile ref's tracking (bulk loading) in ReplicationQueuesHBaseImpl and ReplicationQueuesClientHBaseImpl

2016-06-10 Thread Joseph (JIRA)
Joseph created HBASE-16005:
--

 Summary: Implement HFile ref's tracking (bulk loading) in 
ReplicationQueuesHBaseImpl and ReplicationQueuesClientHBaseImpl
 Key: HBASE-16005
 URL: https://issues.apache.org/jira/browse/HBASE-16005
 Project: HBase
  Issue Type: Sub-task
Reporter: Joseph


Currently, ReplicationQueuesHBaseImpl and ReplicationQueuesClientHBaseImpl do
not implement any of the HFile ref methods; they throw NotImplementedException.
We should implement them eventually.
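
For context, a minimal sketch of the kind of stub involved (the method name and
signature are assumptions based on the ReplicationQueues interface):
{code}
// Current state per this issue: the HFile-ref methods are unimplemented stubs.
@Override
public void addHFileRefs(String peerId, List<String> files)
    throws ReplicationException {
  throw new NotImplementedException("addHFileRefs is not implemented");
}
{code}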



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16004) Update to Netty 4.1.1

2016-06-10 Thread Jurriaan Mous (JIRA)
Jurriaan Mous created HBASE-16004:
-

 Summary: Update to Netty 4.1.1
 Key: HBASE-16004
 URL: https://issues.apache.org/jira/browse/HBASE-16004
 Project: HBase
  Issue Type: Improvement
Reporter: Jurriaan Mous
Assignee: Jurriaan Mous


Netty 4.1 is out and has received its first bug-fix release, so it seems stable
enough for HBase to migrate to.

It seems to bring significant performance improvements in Cassandra thanks to
optimizations in cleaning direct buffers (now on by default):
https://issues.apache.org/jira/plugins/servlet/mobile#issue/CASSANDRA-11818/comment/15306030
https://github.com/netty/netty/pull/5314




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Successful: HBase Generate Website

2016-06-10 Thread Apache Jenkins Server
Build status: Successful

If successful, the website and docs have been generated. If failed, skip to the 
bottom of this email.

Use the following commands to download the patch and apply it to a clean branch 
based on origin/asf-site. If you prefer to keep the hbase-site repo around 
permanently, you can skip the clone step.

  git clone https://git-wip-us.apache.org/repos/asf/hbase-site.git

  cd hbase-site
  wget -O- 
https://builds.apache.org/job/hbase_generate_website/254/artifact/website.patch.zip
 | funzip > 6da6babe4faa7b2b16775d3cd5c861e71ef4cf31.patch
  git fetch
  git checkout -b asf-site-6da6babe4faa7b2b16775d3cd5c861e71ef4cf31 
origin/asf-site
  git am --whitespace=fix 6da6babe4faa7b2b16775d3cd5c861e71ef4cf31.patch

At this point, you can preview the changes by opening index.html or any of the 
other HTML pages in your local 
asf-site-6da6babe4faa7b2b16775d3cd5c861e71ef4cf31 branch.

There are lots of spurious changes, such as timestamps and CSS styles in 
tables, so a generic git diff is not very useful. To see a list of files that 
have been added, deleted, renamed, changed type, or are otherwise interesting, 
use the following command:

  git diff --name-status --diff-filter=ADCRTXUB origin/asf-site

To see only files that had 100 or more lines changed:

  git diff --stat origin/asf-site | grep -E '[1-9][0-9]{2,}'

When you are satisfied, publish your changes to origin/asf-site using these 
commands:

  git commit --allow-empty -m "Empty commit" # to work around a current ASF 
INFRA bug
  git push origin asf-site-6da6babe4faa7b2b16775d3cd5c861e71ef4cf31:asf-site
  git checkout asf-site
  git branch -d asf-site-6da6babe4faa7b2b16775d3cd5c861e71ef4cf31

Changes take a couple of minutes to be propagated. You can verify whether they 
have been propagated by looking at the Last Published date at the bottom of 
http://hbase.apache.org/. It should match the date in the index.html on the 
asf-site branch in Git.

If failed, see https://builds.apache.org/job/hbase_generate_website/254/console