Re: Testing and CI -- Apache Jenkins Builds (WAS -> Re: Testing)

2016-09-16 Thread Apekshit Sharma
So this all started with the spaces-in-path issue, right? I think it has
gobbled up a lot of time for a lot of people.
Let's discuss our options and try to fix it for good. Here are the options
I can think of, and my opinion on each.

1. Don't use a matrix build
  A temporary fix. Not preferred, since it doesn't apply to other
branches' builds.

2. Use matrix build

  a. Use the tool environment trick
   I applied this a few days ago. It seemed to work until we discovered
the scalatest issue. While the solution looks legitimate, we can't trust
that all tools will use JAVA_HOME instead of invoking the 'java' command
directly.

  b. Use the JDK axis
  Doesn't work right now. I don't have a good sense of what it would
cost to fix.

  c. Use JDK axis with custom child workspace

https://github.com/jenkinsci/matrix-project-plugin/blob/master/src/main/resources/hudson/matrix/MatrixProject/help-childCustomWorkspace.html
  I just found this one, and it might solve things for good. I have
updated the job to use it. Let's see how it works.
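
For the record, the help page above describes a per-configuration workspace
override. A value along these lines (a config fragment, not a script;
${SHORT_COMBINATION} is the plugin's token for a short hash of the axis
combination, so the exact spelling should be checked against that help page)
would keep each child workspace path short and free of spaces and parens:

```
workspace/${SHORT_COMBINATION}
```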

What do others think?

On Fri, Sep 16, 2016 at 3:31 PM, Stack  wrote:

> The profile (or define) skipSparkTests looks like it will skip the spark
> tests. Setting skipIntegrationTests to true will skip the integration-test
> phase.
> S
>
> On Fri, Sep 16, 2016 at 1:40 PM, Dima Spivak 
> wrote:
>
> > Doesn't seem we need a matrix project for master anymore since we're just
> > doing JDK 8 now, right? Also, it looks like the hbase-spark
> > integration-test phase is what's tripping up the build. Why not just
> > temporarily disable that to unblock testing?
> >
> > On Friday, September 16, 2016, Apekshit Sharma 
> wrote:
> >
> > > So the issue is: we set JAVA_HOME to JDK 8 based on the matrix
> > > parameter and the tool environment. Since mvn uses the env variable,
> > > it compiles with JDK 8. But I suspect that scalatest isn't using the
> > > env variable; instead it might be invoking the 'java' command
> > > directly, which can be JDK 7 or 8 and can vary by machine.
> > > The build succeeds if 'java' points to JDK 8, and fails otherwise.
> > > Note that we didn't have this issue earlier, since we were using the
> > > Jenkins 'JDK' axis, which would set 'java' to the appropriate
> > > version. But that method had the spaces-in-path issue, so I had to
> > > change it.
> > >
> > >
> > > On Fri, Sep 16, 2016 at 3:46 AM, aman poonia  > > >
> > > wrote:
> > >
> > > > I am not sure if this will help, but it looks like a version
> > > > mismatch: something is expecting JDK 1.7 while we are compiling
> > > > with JDK 1.8. That means there is some library that either has to
> > > > be compiled with JDK 8 or needs to be updated to a JDK
> > > > 8-compatible version.
> > > >
> > > >
> > > > --
> > > > *With Regards:-*
> > > > *Aman Poonia*
> > > >
> > > > On Fri, Sep 16, 2016 at 2:40 AM, Apekshit Sharma  > > >
> > > > wrote:
> > > >
> > > > > And everything is back to red,
> > > > > because something is plaguing our builds again. :(
> > > > >
> > > > > If anyone knows what the problem is in this case, please reply on
> > > > > this thread; otherwise I'll try to fix it later sometime today.
> > > > >
> > > > > [INFO] --- scalatest-maven-plugin:1.0:test (integration-test) @
> > > > > hbase-spark ---
> > > > > Discovery starting.
> > > > > *** RUN ABORTED ***
> > > > >   java.lang.UnsupportedClassVersionError:
> > > > >   org/apache/hadoop/hbase/spark/example/hbasecontext/JavaHBaseDistributedScan
> > > > >   : Unsupported major.minor version 52.0
> > > > >   at java.lang.ClassLoader.defineClass1(Native Method)
> > > > >   at java.lang.ClassLoader.defineClass(ClassLoader.java:803)
> > > > >   at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
> > > > >   at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
> > > > >   at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
> > > > >   at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
> > > > >   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> > > > >   at java.security.AccessController.doPrivileged(Native Method)
> > > > >   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> > > > >   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> > > > >
> > > > >
> > > > >
> > > > > On Mon, Sep 12, 2016 at 5:01 PM, Mikhail Antonov <
> > olorinb...@gmail.com
> > > >
> > > > > wrote:
> > > > >
> > > > > > Great work indeed!
> > > > > >
> > > > > > Agreed, occasional failed runs may not be that bad, but fairly
> > > regular
> > > > > > failed runs ruin the idea of CI. Especially for released or
> > otherwise
> > > > > > supposedly stable branches.
> > > > > >
> > > > > > -Mikhail
> > > 

[jira] [Created] (HBASE-16647) hbck should do the offline reference repair before online repair

2016-09-16 Thread Jerry He (JIRA)
Jerry He created HBASE-16647:


 Summary: hbck should do the offline reference repair before online 
repair
 Key: HBASE-16647
 URL: https://issues.apache.org/jira/browse/HBASE-16647
 Project: HBase
  Issue Type: Bug
Reporter: Jerry He
Assignee: Jerry He


{noformat}
hbck
-fixReferenceFiles  Try to offline lingering reference store files

Metadata Repair shortcuts
-repairShortcut for -fixAssignments -fixMeta -fixHdfsHoles -fixHdfsOrphans 
-fixHdfsOverlaps -fixVersionFile -sidelineBigOverlaps -fixReferenceFiles 
-fixTableLocks -fixOrphanedTableZnodes
{noformat}

Bad reference files prevent the region from coming online.
When used in the shortcut combination, the reference files should be fixed 
before the other online fixes.

I have seen cases where repeated '-repair' runs did not work because bad 
reference files kept failing the regions.
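
Until the shortcut ordering is fixed, a sketch of the manual workaround
(assuming the standard `hbase hbck` invocation and the flags listed above):

```
# Offline repair first, so lingering reference files don't keep
# regions from coming online during the online repairs.
hbase hbck -fixReferenceFiles
# Then the remaining (online) fixes, e.g. via the shortcut:
hbase hbck -repair
```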



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Testing and CI -- Apache Jenkins Builds (WAS -> Re: Testing)

2016-09-16 Thread Stack
The profile (or define) skipSparkTests looks like it will skip the spark
tests. Setting skipIntegrationTests to true will skip the integration-test
phase.
S
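
For anyone trying this, the invocations would presumably look like the
following (the define names are taken from the profiles mentioned above and
should be double-checked against the hbase-spark pom):

```
# Skip the spark tests entirely:
mvn clean install -DskipSparkTests=true
# Or keep them but skip only the integration-test phase:
mvn clean install -DskipIntegrationTests=true
```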

On Fri, Sep 16, 2016 at 1:40 PM, Dima Spivak  wrote:

> Doesn't seem we need a matrix project for master anymore since we're just
> doing JDK 8 now, right? Also, it looks like the hbase-spark
> integration-test phase is what's tripping up the build. Why not just
> temporarily disable that to unblock testing?
>
> On Friday, September 16, 2016, Apekshit Sharma  wrote:
>
> > So the issue is: we set JAVA_HOME to JDK 8 based on the matrix parameter
> > and the tool environment. Since mvn uses the env variable, it compiles
> > with JDK 8. But I suspect that scalatest isn't using the env variable;
> > instead it might be invoking the 'java' command directly, which can be
> > JDK 7 or 8 and can vary by machine.
> > The build succeeds if 'java' points to JDK 8, and fails otherwise.
> > Note that we didn't have this issue earlier, since we were using the
> > Jenkins 'JDK' axis, which would set 'java' to the appropriate version.
> > But that method had the spaces-in-path issue, so I had to change it.
> >
> >
> > On Fri, Sep 16, 2016 at 3:46 AM, aman poonia  > >
> > wrote:
> >
> > > I am not sure if this will help, but it looks like a version
> > > mismatch: something is expecting JDK 1.7 while we are compiling with
> > > JDK 1.8. That means there is some library that either has to be
> > > compiled with JDK 8 or needs to be updated to a JDK 8-compatible
> > > version.
> > >
> > >
> > > --
> > > *With Regards:-*
> > > *Aman Poonia*
> > >
> > > On Fri, Sep 16, 2016 at 2:40 AM, Apekshit Sharma  > >
> > > wrote:
> > >
> > > > And everything is back to red,
> > > > because something is plaguing our builds again. :(
> > > >
> > > > If anyone knows what the problem is in this case, please reply on
> > > > this thread; otherwise I'll try to fix it later sometime today.
> > > >
> > > > [INFO] --- scalatest-maven-plugin:1.0:test (integration-test) @
> > > > hbase-spark ---
> > > > Discovery starting.
> > > > *** RUN ABORTED ***
> > > >   java.lang.UnsupportedClassVersionError:
> > > >   org/apache/hadoop/hbase/spark/example/hbasecontext/JavaHBaseDistributedScan
> > > >   : Unsupported major.minor version 52.0
> > > >   at java.lang.ClassLoader.defineClass1(Native Method)
> > > >   at java.lang.ClassLoader.defineClass(ClassLoader.java:803)
> > > >   at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
> > > >   at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
> > > >   at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
> > > >   at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
> > > >   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> > > >   at java.security.AccessController.doPrivileged(Native Method)
> > > >   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> > > >   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> > > >
> > > >
> > > >
> > > > On Mon, Sep 12, 2016 at 5:01 PM, Mikhail Antonov <
> olorinb...@gmail.com
> > >
> > > > wrote:
> > > >
> > > > > Great work indeed!
> > > > >
> > > > > Agreed, occasional failed runs may not be that bad, but fairly
> > regular
> > > > > failed runs ruin the idea of CI. Especially for released or
> otherwise
> > > > > supposedly stable branches.
> > > > >
> > > > > -Mikhail
> > > > >
> > > > > On Mon, Sep 12, 2016 at 4:53 PM, Sean Busbey  > >
> > > > wrote:
> > > > >
> > > > > > awesome work Appy!
> > > > > >
> > > > > > That's certainly good news to hear.
> > > > > >
> > > > > > On Mon, Sep 12, 2016 at 2:14 PM, Apekshit Sharma <
> > a...@cloudera.com >
> > > > > > wrote:
> > > > > > > On a separate note:
> > > > > > > Trunk had 8 green runs in the last 3 days! (
> > > > > > > https://builds.apache.org/job/HBase-Trunk_matrix/)
> > > > > > > This was due to fixing just the mass failures on trunk, with
> > > > > > > no change to the flaky infra, which led me to conclude two
> > > > > > > things:
> > > > > > > 1. The flaky infra works.
> > > > > > > 2. It relies heavily on the post-commit build's stability
> > > > > > > (which every project should strive for anyway). If the build
> > > > > > > fails catastrophically once in a while, we can just exclude
> > > > > > > that one run using a flag and everything will work, but if it
> > > > > > > happens frequently, then it won't work right.
> > > > > > >
> > > > > > > I have re-enabled the Flaky tests job (
> > > > > > > https://builds.apache.org/view/H-L/view/HBase/job/HBASE-Flaky-Tests/ )
> > > > > > > which was disabled for almost a month due to trunk being 

[jira] [Created] (HBASE-16646) Enhance LoadIncrementalHFiles to accept store file paths as input

2016-09-16 Thread Ted Yu (JIRA)
Ted Yu created HBASE-16646:
--

 Summary: Enhance LoadIncrementalHFiles to accept store file paths 
as input
 Key: HBASE-16646
 URL: https://issues.apache.org/jira/browse/HBASE-16646
 Project: HBase
  Issue Type: Improvement
Reporter: Ted Yu
Assignee: Ted Yu


Currently, LoadIncrementalHFiles takes a directory (the bulk-load output 
path) as its input parameter.

In some scenarios (e.g. incremental restore of bulk-loaded hfiles), the List 
of paths to the hfiles is already known.

LoadIncrementalHFiles could take that List as an input parameter and proceed 
with loading.
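
For context, the current directory-based invocation looks roughly like this
(paths and table name are illustrative):

```
# Today: the tool is pointed at the bulk-load *output directory*
hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles \
  /bulkload/output mytable
```

The proposal is to additionally accept an explicit list of hfile paths
instead of only a directory.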





Re: Testing and CI -- Apache Jenkins Builds (WAS -> Re: Testing)

2016-09-16 Thread Dima Spivak
Doesn't seem we need a matrix project for master anymore since we're just
doing JDK 8 now, right? Also, it looks like the hbase-spark
integration-test phase is what's tripping up the build. Why not just
temporarily disable that to unblock testing?

On Friday, September 16, 2016, Apekshit Sharma  wrote:

> So the issue is: we set JAVA_HOME to JDK 8 based on the matrix parameter
> and the tool environment. Since mvn uses the env variable, it compiles
> with JDK 8. But I suspect that scalatest isn't using the env variable;
> instead it might be invoking the 'java' command directly, which can be
> JDK 7 or 8 and can vary by machine.
> The build succeeds if 'java' points to JDK 8, and fails otherwise.
> Note that we didn't have this issue earlier, since we were using the
> Jenkins 'JDK' axis, which would set 'java' to the appropriate version.
> But that method had the spaces-in-path issue, so I had to change it.
>
>
> On Fri, Sep 16, 2016 at 3:46 AM, aman poonia  >
> wrote:
>
> > I am not sure if this will help, but it looks like a version mismatch:
> > something is expecting JDK 1.7 while we are compiling with JDK 1.8.
> > That means there is some library that either has to be compiled with
> > JDK 8 or needs to be updated to a JDK 8-compatible version.
> >
> >
> > --
> > *With Regards:-*
> > *Aman Poonia*
> >
> > On Fri, Sep 16, 2016 at 2:40 AM, Apekshit Sharma  >
> > wrote:
> >
> > > And everything is back to red,
> > > because something is plaguing our builds again. :(
> > >
> > > If anyone knows what the problem is in this case, please reply on
> > > this thread; otherwise I'll try to fix it later sometime today.
> > >
> > > [INFO] --- scalatest-maven-plugin:1.0:test (integration-test) @
> > > hbase-spark ---
> > > Discovery starting.
> > > *** RUN ABORTED ***
> > >   java.lang.UnsupportedClassVersionError:
> > >   org/apache/hadoop/hbase/spark/example/hbasecontext/JavaHBaseDistributedScan
> > >   : Unsupported major.minor version 52.0
> > >   at java.lang.ClassLoader.defineClass1(Native Method)
> > >   at java.lang.ClassLoader.defineClass(ClassLoader.java:803)
> > >   at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
> > >   at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
> > >   at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
> > >   at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
> > >   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> > >   at java.security.AccessController.doPrivileged(Native Method)
> > >   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> > >   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> > >
> > >
> > >
> > > On Mon, Sep 12, 2016 at 5:01 PM, Mikhail Antonov  >
> > > wrote:
> > >
> > > > Great work indeed!
> > > >
> > > > Agreed, occasional failed runs may not be that bad, but fairly
> regular
> > > > failed runs ruin the idea of CI. Especially for released or otherwise
> > > > supposedly stable branches.
> > > >
> > > > -Mikhail
> > > >
> > > > On Mon, Sep 12, 2016 at 4:53 PM, Sean Busbey  >
> > > wrote:
> > > >
> > > > > awesome work Appy!
> > > > >
> > > > > That's certainly good news to hear.
> > > > >
> > > > > On Mon, Sep 12, 2016 at 2:14 PM, Apekshit Sharma <
> a...@cloudera.com >
> > > > > wrote:
> > > > > > On a separate note:
> > > > > > Trunk had 8 green runs in the last 3 days! (
> > > > > > https://builds.apache.org/job/HBase-Trunk_matrix/)
> > > > > > This was due to fixing just the mass failures on trunk, with no
> > > > > > change to the flaky infra, which led me to conclude two things:
> > > > > > 1. The flaky infra works.
> > > > > > 2. It relies heavily on the post-commit build's stability (which
> > > > > > every project should strive for anyway). If the build fails
> > > > > > catastrophically once in a while, we can just exclude that one
> > > > > > run using a flag and everything will work, but if it happens
> > > > > > frequently, then it won't work right.
> > > > > >
> > > > > > I have re-enabled the Flaky tests job (
> > > > > > https://builds.apache.org/view/H-L/view/HBase/job/HBASE-Flaky-Tests/ )
> > > > > > which was disabled for almost a month due to trunk being on fire.
> > > > > > I will keep an eye on how things are going.
> > > > > >
> > > > > >
> > > > > > On Mon, Sep 12, 2016 at 2:02 PM, Apekshit Sharma <
> > a...@cloudera.com >
> > > > > wrote:
> > > > > >
> > > > > >> @Sean, Mikhail: I found the alternate solution. Using user
> defined
> > > > axis,
> > > > > >> tool environment and env variable injection.
> > > > > >> See latest diff to https://builds.apache.org/job/
> > > HBase-Trunk_matrix/
> 

Re: Testing and CI -- Apache Jenkins Builds (WAS -> Re: Testing)

2016-09-16 Thread Apekshit Sharma
So the issue is: we set JAVA_HOME to JDK 8 based on the matrix parameter and
the tool environment. Since mvn uses the env variable, it compiles with JDK 8.
But I suspect that scalatest isn't using the env variable; instead it might be
invoking the 'java' command directly, which can be JDK 7 or 8 and can vary by
machine.
The build succeeds if 'java' points to JDK 8, and fails otherwise.
Note that we didn't have this issue earlier, since we were using the Jenkins
'JDK' axis, which would set 'java' to the appropriate version. But that method
had the spaces-in-path issue, so I had to change it.
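
A sanity check along these lines could be added to the build steps to catch
the mismatch early; this is only a sketch (the helper name and example paths
are made up for illustration):

```shell
# check_java PATH_JAVA JAVA_HOME_DIR
# Reports whether the 'java' binary found on PATH is the same one that
# JAVA_HOME points at -- the mismatch suspected above.
check_java() {
  if [ "$1" = "$2/bin/java" ]; then
    echo "consistent"
  else
    echo "mismatch: PATH java is $1 but JAVA_HOME is $2"
  fi
}

check_java /usr/lib/jvm/jdk8/bin/java /usr/lib/jvm/jdk8
check_java /usr/lib/jvm/jdk7/bin/java /usr/lib/jvm/jdk8
```

In a Jenkins build step, the real check would compare "$(command -v java)"
against "$JAVA_HOME/bin/java" and fail fast on a mismatch.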


On Fri, Sep 16, 2016 at 3:46 AM, aman poonia 
wrote:

> I am not sure if this will help, but it looks like a version mismatch:
> something is expecting JDK 1.7 while we are compiling with JDK 1.8.
> That means there is some library that either has to be compiled with
> JDK 8 or needs to be updated to a JDK 8-compatible version.
>
>
> --
> *With Regards:-*
> *Aman Poonia*
>
> On Fri, Sep 16, 2016 at 2:40 AM, Apekshit Sharma 
> wrote:
>
> > And everything is back to red,
> > because something is plaguing our builds again. :(
> >
> > If anyone knows what the problem is in this case, please reply on this
> > thread; otherwise I'll try to fix it later sometime today.
> >
> > [INFO] --- scalatest-maven-plugin:1.0:test (integration-test) @
> > hbase-spark ---
> > Discovery starting.
> > *** RUN ABORTED ***
> >   java.lang.UnsupportedClassVersionError:
> >   org/apache/hadoop/hbase/spark/example/hbasecontext/JavaHBaseDistributedScan
> >   : Unsupported major.minor version 52.0
> >   at java.lang.ClassLoader.defineClass1(Native Method)
> >   at java.lang.ClassLoader.defineClass(ClassLoader.java:803)
> >   at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
> >   at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
> >   at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
> >   at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
> >   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> >   at java.security.AccessController.doPrivileged(Native Method)
> >   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> >   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> >
> >
> >
> > On Mon, Sep 12, 2016 at 5:01 PM, Mikhail Antonov 
> > wrote:
> >
> > > Great work indeed!
> > >
> > > Agreed, occasional failed runs may not be that bad, but fairly regular
> > > failed runs ruin the idea of CI. Especially for released or otherwise
> > > supposedly stable branches.
> > >
> > > -Mikhail
> > >
> > > On Mon, Sep 12, 2016 at 4:53 PM, Sean Busbey 
> > wrote:
> > >
> > > > awesome work Appy!
> > > >
> > > > That's certainly good news to hear.
> > > >
> > > > On Mon, Sep 12, 2016 at 2:14 PM, Apekshit Sharma 
> > > > wrote:
> > > > > On a separate note:
> > > > > Trunk had 8 green runs in the last 3 days! (
> > > > > https://builds.apache.org/job/HBase-Trunk_matrix/)
> > > > > This was due to fixing just the mass failures on trunk, with no
> > > > > change to the flaky infra, which led me to conclude two things:
> > > > > 1. The flaky infra works.
> > > > > 2. It relies heavily on the post-commit build's stability (which
> > > > > every project should strive for anyway). If the build fails
> > > > > catastrophically once in a while, we can just exclude that one
> > > > > run using a flag and everything will work, but if it happens
> > > > > frequently, then it won't work right.
> > > > >
> > > > > I have re-enabled the Flaky tests job (
> > > > > https://builds.apache.org/view/H-L/view/HBase/job/HBASE-Flaky-Tests/ )
> > > > > which was disabled for almost a month due to trunk being on fire.
> > > > > I will keep an eye on how things are going.
> > > > >
> > > > >
> > > > > On Mon, Sep 12, 2016 at 2:02 PM, Apekshit Sharma <
> a...@cloudera.com>
> > > > wrote:
> > > > >
> > > > >> @Sean, Mikhail: I found the alternate solution. Using user defined
> > > axis,
> > > > >> tool environment and env variable injection.
> > > > >> See latest diff to https://builds.apache.org/job/
> > HBase-Trunk_matrix/
> > > > job
> > > > >> for reference.
> > > > >>
> > > > >>
> > > > >> On Tue, Aug 30, 2016 at 7:39 PM, Mikhail Antonov <
> > > olorinb...@gmail.com>
> > > > >> wrote:
> > > > >>
> > > > >>> FYI, I did the same for branch-1.3 builds.  I've disabled
> hbase-1.3
> > > and
> > > > >>> hbase-1.3-IT jobs and instead created
> > > > >>>
> > > > >>> https://builds.apache.org/job/HBase-1.3-JDK8 and
> > > > >>> https://builds.apache.org/job/HBase-1.3-JDK7
> > > > >>>
> > > > >>> This should work for now until we figure out how to move forward.
> > > > >>>
> > > > >>> Thanks,
> > > > >>> Mikhail
> > > > >>>
> > > > >>> On Wed, Aug 17, 2016 at 1:41 PM, Sean Busbey <
> 

Re: Successful: HBase Generate Website

2016-09-16 Thread Misty Stanley-Jones
Pushed.

On Sat, Sep 17, 2016, at 12:59 AM, Apache Jenkins Server wrote:
> Build status: Successful
> 
> If successful, the website and docs have been generated. To update the
> live site, follow the instructions below. If failed, skip to the bottom
> of this email.
> 
> Use the following commands to download the patch and apply it to a clean
> branch based on origin/asf-site. If you prefer to keep the hbase-site
> repo around permanently, you can skip the clone step.
> 
>   git clone https://git-wip-us.apache.org/repos/asf/hbase-site.git
> 
>   cd hbase-site
>   wget -O-
>   
> https://builds.apache.org/job/hbase_generate_website/346/artifact/website.patch.zip
>   | funzip > 2597217ae5aa057e1931c772139ce8cc7a2b3efb.patch
>   git fetch
>   git checkout -b asf-site-2597217ae5aa057e1931c772139ce8cc7a2b3efb
>   origin/asf-site
>   git am --whitespace=fix 2597217ae5aa057e1931c772139ce8cc7a2b3efb.patch
> 
> At this point, you can preview the changes by opening index.html or any
> of the other HTML pages in your local
> asf-site-2597217ae5aa057e1931c772139ce8cc7a2b3efb branch.
> 
> There are lots of spurious changes, such as timestamps and CSS styles in
> tables, so a generic git diff is not very useful. To see a list of files
> that have been added, deleted, renamed, changed type, or are otherwise
> interesting, use the following command:
> 
>   git diff --name-status --diff-filter=ADCRTXUB origin/asf-site
> 
> To see only files that had 100 or more lines changed:
> 
>   git diff --stat origin/asf-site | grep -E '[1-9][0-9]{2,}'
> 
> When you are satisfied, publish your changes to origin/asf-site using
> these commands:
> 
>   git commit --allow-empty -m "Empty commit" # to work around a current
>   ASF INFRA bug
>   git push origin
>   asf-site-2597217ae5aa057e1931c772139ce8cc7a2b3efb:asf-site
>   git checkout asf-site
>   git branch -D asf-site-2597217ae5aa057e1931c772139ce8cc7a2b3efb
> 
> Changes take a couple of minutes to be propagated. You can verify whether
> they have been propagated by looking at the Last Published date at the
> bottom of http://hbase.apache.org/. It should match the date in the
> index.html on the asf-site branch in Git.
> 
> As a courtesy, reply-all to this email to let other committers know you
> pushed the site.
> 
> 
> 
> If failed, see
> https://builds.apache.org/job/hbase_generate_website/346/console


[jira] [Created] (HBASE-16645) Wrong range of Cells is caused by CellFlatMap#tailMap, headMap, and SubMap

2016-09-16 Thread ChiaPing Tsai (JIRA)
ChiaPing Tsai created HBASE-16645:
-

 Summary: Wrong range of Cells is caused by CellFlatMap#tailMap, 
headMap, and SubMap
 Key: HBASE-16645
 URL: https://issues.apache.org/jira/browse/HBASE-16645
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: ChiaPing Tsai
Priority: Minor
 Fix For: 2.0.0








Successful: HBase Generate Website

2016-09-16 Thread Apache Jenkins Server
Build status: Successful

If successful, the website and docs have been generated. To update the live 
site, follow the instructions below. If failed, skip to the bottom of this 
email.

Use the following commands to download the patch and apply it to a clean branch 
based on origin/asf-site. If you prefer to keep the hbase-site repo around 
permanently, you can skip the clone step.

  git clone https://git-wip-us.apache.org/repos/asf/hbase-site.git

  cd hbase-site
  wget -O- 
https://builds.apache.org/job/hbase_generate_website/346/artifact/website.patch.zip
 | funzip > 2597217ae5aa057e1931c772139ce8cc7a2b3efb.patch
  git fetch
  git checkout -b asf-site-2597217ae5aa057e1931c772139ce8cc7a2b3efb 
origin/asf-site
  git am --whitespace=fix 2597217ae5aa057e1931c772139ce8cc7a2b3efb.patch

At this point, you can preview the changes by opening index.html or any of the 
other HTML pages in your local 
asf-site-2597217ae5aa057e1931c772139ce8cc7a2b3efb branch.

There are lots of spurious changes, such as timestamps and CSS styles in 
tables, so a generic git diff is not very useful. To see a list of files that 
have been added, deleted, renamed, changed type, or are otherwise interesting, 
use the following command:

  git diff --name-status --diff-filter=ADCRTXUB origin/asf-site

To see only files that had 100 or more lines changed:

  git diff --stat origin/asf-site | grep -E '[1-9][0-9]{2,}'
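
As an aside, that regex simply keeps diffstat lines containing a run of three
or more digits, i.e. a change count of 100 or more (it would also match a
3-digit number embedded in a file name, which is usually harmless noise).
A quick illustration:

```shell
# Feed the grep two fake diffstat lines: only the 345-line change survives,
# because [1-9][0-9]{2,} matches a run of three or more digits.
printf '%s\n' ' a.html | 12 +' ' b.html | 345 ++' | grep -E '[1-9][0-9]{2,}'
```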

When you are satisfied, publish your changes to origin/asf-site using these 
commands:

  git commit --allow-empty -m "Empty commit" # to work around a current ASF 
INFRA bug
  git push origin asf-site-2597217ae5aa057e1931c772139ce8cc7a2b3efb:asf-site
  git checkout asf-site
  git branch -D asf-site-2597217ae5aa057e1931c772139ce8cc7a2b3efb

Changes take a couple of minutes to be propagated. You can verify whether they 
have been propagated by looking at the Last Published date at the bottom of 
http://hbase.apache.org/. It should match the date in the index.html on the 
asf-site branch in Git.

As a courtesy, reply-all to this email to let other committers know you pushed 
the site.



If failed, see https://builds.apache.org/job/hbase_generate_website/346/console

Re: Testing and CI -- Apache Jenkins Builds (WAS -> Re: Testing)

2016-09-16 Thread aman poonia
I am not sure if this will help, but it looks like a version mismatch:
something is expecting JDK 1.7 while we are compiling with JDK 1.8.
That means there is some library that either has to be compiled with JDK 8 or
needs to be updated to a JDK 8-compatible version.
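
For reference, the "Unsupported major.minor version 52.0" in the stack trace
pins this down: class-file major version 52 is what JDK 8 produces, so the
classes were compiled with JDK 8 but are being loaded by an older (JDK 7) JVM.
A tiny lookup makes the error self-explanatory (a sketch; the helper name is
made up):

```shell
# Map a class-file major version number to the JDK release that produces it.
jdk_for_major() {
  case "$1" in
    50) echo "JDK 6" ;;
    51) echo "JDK 7" ;;
    52) echo "JDK 8" ;;
    *)  echo "unknown major version: $1" ;;
  esac
}

# "Unsupported major.minor version 52.0" => compiled for JDK 8,
# loaded by an older JVM.
jdk_for_major 52
```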


-- 
*With Regards:-*
*Aman Poonia*

On Fri, Sep 16, 2016 at 2:40 AM, Apekshit Sharma  wrote:

> And everything is back to red,
> because something is plaguing our builds again. :(
>
> If anyone knows what the problem is in this case, please reply on this
> thread; otherwise I'll try to fix it later sometime today.
>
> [INFO] --- scalatest-maven-plugin:1.0:test (integration-test) @
> hbase-spark ---
> Discovery starting.
> *** RUN ABORTED ***
>   java.lang.UnsupportedClassVersionError:
>   org/apache/hadoop/hbase/spark/example/hbasecontext/JavaHBaseDistributedScan
>   : Unsupported major.minor version 52.0
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:803)
>   at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>   at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
>   at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>
>
>
> On Mon, Sep 12, 2016 at 5:01 PM, Mikhail Antonov 
> wrote:
>
> > Great work indeed!
> >
> > Agreed, occasional failed runs may not be that bad, but fairly regular
> > failed runs ruin the idea of CI. Especially for released or otherwise
> > supposedly stable branches.
> >
> > -Mikhail
> >
> > On Mon, Sep 12, 2016 at 4:53 PM, Sean Busbey 
> wrote:
> >
> > > awesome work Appy!
> > >
> > > That's certainly good news to hear.
> > >
> > > On Mon, Sep 12, 2016 at 2:14 PM, Apekshit Sharma 
> > > wrote:
> > > > On a separate note:
> > > > Trunk had 8 green runs in the last 3 days! (
> > > > https://builds.apache.org/job/HBase-Trunk_matrix/)
> > > > This was due to fixing just the mass failures on trunk, with no
> > > > change to the flaky infra, which led me to conclude two things:
> > > > 1. The flaky infra works.
> > > > 2. It relies heavily on the post-commit build's stability (which
> > > > every project should strive for anyway). If the build fails
> > > > catastrophically once in a while, we can just exclude that one run
> > > > using a flag and everything will work, but if it happens
> > > > frequently, then it won't work right.
> > > >
> > > > I have re-enabled the Flaky tests job (
> > > > https://builds.apache.org/view/H-L/view/HBase/job/HBASE-Flaky-Tests/ )
> > > > which was disabled for almost a month due to trunk being on fire.
> > > > I will keep an eye on how things are going.
> > > >
> > > >
> > > > On Mon, Sep 12, 2016 at 2:02 PM, Apekshit Sharma 
> > > wrote:
> > > >
> > > >> @Sean, Mikhail: I found the alternate solution. Using user defined
> > axis,
> > > >> tool environment and env variable injection.
> > > >> See latest diff to https://builds.apache.org/job/
> HBase-Trunk_matrix/
> > > job
> > > >> for reference.
> > > >>
> > > >>
> > > >> On Tue, Aug 30, 2016 at 7:39 PM, Mikhail Antonov <
> > olorinb...@gmail.com>
> > > >> wrote:
> > > >>
> > > >>> FYI, I did the same for branch-1.3 builds.  I've disabled hbase-1.3
> > and
> > > >>> hbase-1.3-IT jobs and instead created
> > > >>>
> > > >>> https://builds.apache.org/job/HBase-1.3-JDK8 and
> > > >>> https://builds.apache.org/job/HBase-1.3-JDK7
> > > >>>
> > > >>> This should work for now until we figure out how to move forward.
> > > >>>
> > > >>> Thanks,
> > > >>> Mikhail
> > > >>>
> > > >>> On Wed, Aug 17, 2016 at 1:41 PM, Sean Busbey 
> > > wrote:
> > > >>>
> > > >>> > /me smacks forehead
> > > >>> >
> > > >>> > these replacement jobs, of course, also have special characters
> in
> > > >>> > their names which then show up in the working path.
> > > >>> >
> > > >>> > renaming them to skip spaces and parens.
> > > >>> >
> > > >>> > On Wed, Aug 17, 2016 at 1:34 PM, Sean Busbey <
> > sean.bus...@gmail.com>
> > > >>> > wrote:
> > > >>> > > FYI, it looks like essentially our entire CI suite is red,
> > probably
> > > >>> due
> > > >>> > to
> > > >>> > > parts of our codebase not tolerating spaces or other special
> > > >>> characters
> > > >>> > in
> > > >>> > > the working directory.
> > > >>> > >
> > > >>> > > I've made a stop-gap non-multi-configuration set of jobs for
> > > running
> > > >>> unit
> > > >>> > > tests for the 1.2 branch against JDK 7 and JDK 8:
> > > >>> > >
> > > >>> > > 

[jira] [Created] (HBASE-16644) Errors when reading legit HFile' Trailer on branch 1.3

2016-09-16 Thread Mikhail Antonov (JIRA)
Mikhail Antonov created HBASE-16644:
---

 Summary: Errors when reading legit HFile' Trailer on branch 1.3
 Key: HBASE-16644
 URL: https://issues.apache.org/jira/browse/HBASE-16644
 Project: HBase
  Issue Type: Bug
  Components: HFile
Affects Versions: 1.3.0, 1.4.0
Reporter: Mikhail Antonov
Assignee: Mikhail Antonov
 Fix For: 1.3.0


There seems to be a regression in branch 1.3 where we can't read the HFile 
trailer (getting "CorruptHFileException: Problem reading HFile Trailer") on some 
HFiles that could be read successfully on 1.2.

I've seen this error manifesting in two ways so far.

{code}
Caused by: org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem reading HFile Trailer from file 
	at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:497)
	at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:525)
	at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1164)
	at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:259)
	at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:427)
	at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:528)
	at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:518)
	at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:652)
	at org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:117)
	at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:519)
	at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:516)
	... 6 more
Caused by: java.io.IOException: Invalid HFile block magic: \x00\x00\x04\x00\x00\x00\x00\x00
	at org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:155)
	at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:167)
	at org.apache.hadoop.hbase.io.hfile.HFileBlock.<init>(HFileBlock.java:344)
	at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1735)
	at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1558)
	at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlock(HFileBlock.java:1397)
	at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlockWithBlockType(HFileBlock.java:1405)
	at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.<init>(HFileReaderV2.java:156)
	at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:485)
{code}

and the second:

{code}
Caused by: org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem reading HFile Trailer from file 
	at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:497)
	at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:525)
	at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1164)
	at org.apache.hadoop.hbase.io.HalfStoreFileReader.<init>(HalfStoreFileReader.java:104)
	at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:256)
	at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:427)
	at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:528)
	at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:518)
	at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:652)
	at org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:117)
	at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:519)
	at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:516)
	... 6 more
Caused by: java.io.IOException: Premature EOF from inputStream (read returned -1, was trying to read 10083 necessary bytes and 24 extra bytes, successfully read 1072)
	at org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:737)
	at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1459)
	at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1712)
	at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1558)
	at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlock(HFileBlock.java:1397)
	at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlockWithBlockType(HFileBlock.java:1405)
	at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.<init>(HFileReaderV2.java:156)
	at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:485)
{code}

In my case this problem was reproducible 
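
The "Invalid HFile block magic" failure above means the first eight bytes at the expected block offset did not match any known block-header magic. As a rough, self-contained sketch of that kind of check (illustrative only; the real logic lives in org.apache.hadoop.hbase.io.hfile.BlockType.parse, and only the data-block magic "DATABLK*" is modeled here):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class BlockMagicCheck {
    // HFile data blocks start with this 8-byte magic; other block types have
    // their own magics, which this sketch omits.
    static final byte[] DATA_MAGIC = "DATABLK*".getBytes(StandardCharsets.US_ASCII);

    // Reject a buffer whose first 8 bytes are not a recognized block header.
    static void checkMagic(byte[] buf) throws IOException {
        byte[] magic = Arrays.copyOf(buf, 8);
        if (!Arrays.equals(magic, DATA_MAGIC)) {
            throw new IOException("Invalid HFile block magic: " + Arrays.toString(magic));
        }
    }

    public static void main(String[] args) {
        try {
            // Bytes like the ones in the stack trace: clearly not a block header.
            checkMagic(new byte[] {0, 0, 4, 0, 0, 0, 0, 0});
        } catch (IOException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Seeing length-like bytes (\x00\x00\x04\x00...) where the magic should be suggests the reader is positioned at the wrong offset, which is consistent with a trailer/offset regression rather than on-disk corruption.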

[jira] [Created] (HBASE-16643) Reverse scanner heap creation may not allow MSLAB closure due to improper ref counting of segments

2016-09-16 Thread ramkrishna.s.vasudevan (JIRA)
ramkrishna.s.vasudevan created HBASE-16643:
--

 Summary: Reverse scanner heap creation may not allow MSLAB closure 
due to improper ref counting of segments
 Key: HBASE-16643
 URL: https://issues.apache.org/jira/browse/HBASE-16643
 Project: HBase
  Issue Type: Bug
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Critical


In the reverse scanner case, while doing 'initBackwardHeapIfNeeded' in 
MemstoreScanner to set up the backward heap, we do:
{code}
if ((backwardHeap == null) && (forwardHeap != null)) {
forwardHeap.close();
forwardHeap = null;
// before building the heap seek for the relevant key on the scanners,
// for the heap to be built from the scanners correctly
for (KeyValueScanner scan : scanners) {
  if (toLast) {
    res |= scan.seekToLastRow();
  } else {
    res |= scan.backwardSeek(cell);
  }
}
{code}
The forwardHeap.close() call internally decrements the MSLAB ref counter for 
the current active segment and the snapshot segment. When the scan itself is 
closed we call close() again, which decrements the count a second time. The 
count can therefore go negative, and the actual MSLAB closure, which checks for 
refCount == 0, will never happen. Apart from that, if the refCount becomes 0 
after the first close and any other thread then requests to close the segment, 
we end up with a corrupted segment.
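
A minimal, self-contained sketch of the double-decrement hazard (illustrative only; RefCountedChunk is a made-up class, not HBase's actual Segment/MSLAB code):

```java
import java.util.concurrent.atomic.AtomicInteger;

class RefCountedChunk {
    private final AtomicInteger refCount = new AtomicInteger(1);
    private volatile boolean freed = false;

    // Buggy close: every call decrements unconditionally, so a second close
    // drives the count negative and the refCount == 0 free check is skipped.
    void closeUnguarded() {
        if (refCount.decrementAndGet() == 0) {
            freed = true;
        }
    }

    // Guarded close: CAS-decrement only while the count is positive, making
    // close() idempotent and keeping the count from ever going negative.
    void closeGuarded() {
        int current;
        do {
            current = refCount.get();
            if (current <= 0) {
                return; // already closed; extra close calls are no-ops
            }
        } while (!refCount.compareAndSet(current, current - 1));
        if (current == 1) {
            freed = true;
        }
    }

    int refCount() { return refCount.get(); }
    boolean isFreed() { return freed; }
}

public class RefCountDemo {
    public static void main(String[] args) {
        RefCountedChunk buggy = new RefCountedChunk();
        buggy.closeUnguarded();
        buggy.closeUnguarded(); // e.g. heap close, then scan close
        System.out.println("buggy refCount=" + buggy.refCount());   // goes to -1

        RefCountedChunk fixed = new RefCountedChunk();
        fixed.closeGuarded();
        fixed.closeGuarded();
        System.out.println("fixed refCount=" + fixed.refCount());   // stays at 0
    }
}
```

The guarded variant is one common way to make such a close idempotent; the actual fix in HBase may instead avoid the second decrement at the call site.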



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16642) Use DelayQueue instead of TimeoutBlockingQueue

2016-09-16 Thread Hiroshi Ikeda (JIRA)
Hiroshi Ikeda created HBASE-16642:
-

 Summary: Use DelayQueue instead of TimeoutBlockingQueue
 Key: HBASE-16642
 URL: https://issues.apache.org/jira/browse/HBASE-16642
 Project: HBase
  Issue Type: Improvement
Reporter: Hiroshi Ikeda
Priority: Minor


Enqueue poison entries in order to wake up and end the internal threads.
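
A rough sketch of the proposed pattern with java.util.concurrent.DelayQueue (DelayedTask and the field names are illustrative, not the procedure-framework's actual classes):

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

class DelayedTask implements Delayed {
    final String name;
    final long deadlineNanos;
    final boolean poison; // poison pills unblock take() so workers can exit

    DelayedTask(String name, long delayMillis, boolean poison) {
        this.name = name;
        this.deadlineNanos = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(delayMillis);
        this.poison = poison;
    }

    @Override
    public long getDelay(TimeUnit unit) {
        return unit.convert(deadlineNanos - System.nanoTime(), TimeUnit.NANOSECONDS);
    }

    @Override
    public int compareTo(Delayed other) {
        return Long.compare(getDelay(TimeUnit.NANOSECONDS),
                            other.getDelay(TimeUnit.NANOSECONDS));
    }
}

public class DelayQueueShutdownDemo {
    public static void main(String[] args) throws InterruptedException {
        DelayQueue<DelayedTask> queue = new DelayQueue<>();
        queue.put(new DelayedTask("timeout-check", 50, false));

        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    DelayedTask t = queue.take(); // blocks until a delay expires
                    if (t.poison) {
                        return; // poison pill: end the internal thread
                    }
                    System.out.println("expired: " + t.name);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();

        // A short-delay poison entry wakes the blocked worker and stops it.
        queue.put(new DelayedTask("poison", 60, true));
        worker.join(2000);
        System.out.println("worker alive: " + worker.isAlive());
    }
}
```

DelayQueue gives the same "block until the earliest timeout fires" behavior as a hand-rolled TimeoutBlockingQueue, and the poison entry is how the internal threads get woken and terminated cleanly.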


