namespace quota not taking effect

2016-08-23 Thread W.H
Hi guys,
  I am testing the HBase namespace quotas for maxTables and maxRegions.
Following the guide, I added the option "hbase.quota.enabled" with value
"true" to hbase-site.xml and then created the namespace:
   hbase(main):003:0> describe_namespace 'ns1'
   DESCRIPTION
  {NAME => 'ns1', maxregions => '2', maxtables => '1'}

  In the namespace definition I limited maxtables to 1, but I was able to
create 5 tables under the namespace "ns1". It seems the quota didn't take
effect.
  The HBase cluster was restarted after hbase-site.xml was modified. My HBase
version is 1.1.2.2.4.
  Any ideas? Thanks.
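
  For reference, a minimal Java sketch (not from the thread) of creating a
namespace whose limits the quota manager recognizes. Per the HBase book, the
enforced keys are the fully-qualified hbase.namespace.quota.maxtables /
hbase.namespace.quota.maxregions; the short maxtables/maxregions keys shown in
the describe output above are stored as plain namespace configuration, which
may be why the limit did not take effect:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class QuotaNamespaceExample {
      public static void main(String[] args) throws Exception {
        // Assumes hbase.quota.enabled=true is already set in hbase-site.xml.
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          NamespaceDescriptor ns = NamespaceDescriptor.create("ns1")
              // Fully-qualified keys -- these are the ones the quota manager checks.
              .addConfiguration("hbase.namespace.quota.maxtables", "1")
              .addConfiguration("hbase.namespace.quota.maxregions", "2")
              .build();
          admin.createNamespace(ns);
        }
      }
    }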



 Best wishes.
 who.cat

[jira] [Created] (HBASE-16490) Fix race condition between SnapshotManager and SnapshotCleaner

2016-08-23 Thread Heng Chen (JIRA)
Heng Chen created HBASE-16490:
-

 Summary: Fix race condition between SnapshotManager and 
SnapshotCleaner
 Key: HBASE-16490
 URL: https://issues.apache.org/jira/browse/HBASE-16490
 Project: HBase
  Issue Type: Bug
Reporter: Heng Chen
 Fix For: 2.0.0


As [~mbertozzi] commented on HBASE-16464, there may be a race condition between 
SnapshotManager and SnapshotCleaner. We should use one lock when creating a 
snapshot, and cleanup should acquire the same lock before taking action.

One option is to pass HMaster as a parameter into the Cleaner through 
{{FileCleanerDelegate.getDeletableFiles}}; suggestions are welcome.
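
A minimal sketch of the single-lock idea (class and method names are 
hypothetical, not HBase's actual API):
{code}
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical illustration: snapshot creation and the cleaner synchronize on
// one lock, so cleanup can never delete files out from under an in-flight snapshot.
public class SnapshotLocking {
  private final ReentrantLock snapshotLock = new ReentrantLock();

  public void takeSnapshot(Runnable snapshotWork) {
    snapshotLock.lock();
    try {
      snapshotWork.run();        // write snapshot manifest, reference files, etc.
    } finally {
      snapshotLock.unlock();
    }
  }

  public void cleanupExpiredSnapshots(Runnable cleanupWork) {
    snapshotLock.lock();         // cleaner acquires the same lock before acting
    try {
      cleanupWork.run();
    } finally {
      snapshotLock.unlock();
    }
  }
}
{code}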



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Multiple region servers on the same host

2016-08-23 Thread GuangYang
Hello,

We are exploring running multiple region servers on the same host. Some of the 
potential benefits:
1. Smaller region server instances, and thus potentially less contention and 
better performance?
2. We run region server groups, so we could place groups with different 
workloads on the same set of hardware.

We plan to use cgroups to isolate the region servers on the same host, mainly 
because we have had good experience with cgroups in other Hadoop projects.

Has anyone played around with running multiple RS on the same host and would 
like to share their experience?

Thanks,
Guang

[jira] [Created] (HBASE-16489) Configuration parsing

2016-08-23 Thread Sudeep Sunthankar (JIRA)
Sudeep Sunthankar created HBASE-16489:
-

 Summary: Configuration parsing
 Key: HBASE-16489
 URL: https://issues.apache.org/jira/browse/HBASE-16489
 Project: HBase
  Issue Type: Sub-task
Reporter: Sudeep Sunthankar


Reading hbase-site.xml is required to read various properties, e.g. the 
ZooKeeper quorum, client retries, etc. We can use either the Apache Xerces or 
Boost libraries.






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16488) Starting namespace and quota services in master startup asynchronously

2016-08-23 Thread Stephen Yuan Jiang (JIRA)
Stephen Yuan Jiang created HBASE-16488:
--

 Summary: Starting namespace and quota services in master startup 
asynchronously
 Key: HBASE-16488
 URL: https://issues.apache.org/jira/browse/HBASE-16488
 Project: HBase
  Issue Type: Improvement
  Components: master
Affects Versions: 1.2.2, 1.1.5, 1.0.3, 2.0.0, 1.3.0, 1.4.0
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang


From time to time, during internal IT tests and from customers, we often see 
master initialization fail because the namespace table region takes a long time 
to assign (e.g. sometimes split-log processing takes a long time or hangs; 
sometimes the RS is temporarily unavailable; sometimes there is some unknown 
assignment issue). In the past there have been proposals to improve this 
situation, e.g. HBASE-13556 / HBASE-14190 (assign system tables ahead of user 
region assignment), HBASE-13557 (special WAL handling for system tables), and 
HBASE-14623 (implement a dedicated WAL for system tables).

This JIRA proposes another way to solve the master initialization failure: the 
namespace service is used by only a handful of operations (e.g. create table, 
namespace DDL, the get-namespace API, and some RS group DDL). Only the quota 
manager depends on it, and quota management is off by default. Therefore the 
namespace service is not really needed for the master to be functional, so we 
could start it asynchronously without blocking master startup.
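
A minimal sketch of the deferred startup (types and names are hypothetical, not 
the actual patch):
{code}
import java.io.IOException;
import java.util.concurrent.CompletableFuture;

// Hypothetical illustration: kick namespace/quota initialization off the
// master-init path so startup does not block on namespace region assignment.
public class AsyncServiceStartup {
  interface Service {
    void start() throws IOException;
  }

  public static CompletableFuture<Void> startAsync(Service namespaceService,
      Service quotaManager) {
    return CompletableFuture.runAsync(() -> {
      try {
        namespaceService.start();  // may wait on namespace region assignment
        quotaManager.start();      // quota manager depends on the namespace service
      } catch (IOException e) {
        throw new RuntimeException("Deferred namespace/quota startup failed", e);
      }
    });
  }
}
{code}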

 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: check_compatibility.sh script for external consumption

2016-08-23 Thread Todd Lipcon
On Tue, Aug 23, 2016 at 2:43 PM, Andrew Wang 
wrote:

> I do prefer Python! Guessing this is it:
>
> https://github.com/apache/kudu/blob/master/build-
> support/check_compatibility.py
>
>
Yep, that's it. It has some stuff specific to Kudu's build in there, but
should be good to start from.

-Todd


> I filed YETUS-445 to get this integrated, will try my hand.
>
> Thanks,
> Andrew
>
> On Tue, Aug 23, 2016 at 9:08 AM, Todd Lipcon  wrote:
>
>> We have a version of the same in kudu as well, though ported to Python.
>> Perhaps that's a more palatable language for some :)
>>
>> Todd
>>
>> On Aug 22, 2016 2:16 PM, "Dima Spivak"  wrote:
>>
>>> +1! I'm to blame for much of check_compatibility.sh (sorry for picking
>>> Bash
>>> :-p), and I'd be happy to see it living in Apache Yetus if it means
>>> making
>>> life easy for other projects' contributors.
>>>
>>> -Dima
>>>
>>> On Mon, Aug 22, 2016 at 2:10 PM, Andrew Wang 
>>> wrote:
>>>
>>> > Hi HBase devs,
>>> >
>>> > I'm working on the Hadoop 3 release, and Sean mentioned the nifty Java
>>> ACC
>>> > tool and the check_compatibility.sh script. ACC is way nicer than
>>> JDiff,
>>> > and I'd like to use it over in Hadoop.
>>> >
>>> > There are some hardcodings and other enhancements I'd like to make
>>> though.
>>> > So, rather than having us copy-paste code back and forth, could we
>>> consider
>>> > some other form of codesharing? I believe we already both use Apache
>>> Yetus,
>>> > so that's one possible direction.
>>> >
>>> > BTW, I'm not subscribed to dev@hbase, so please keep me CC'd on this
>>> > thread.
>>> >
>>> > Thanks,
>>> > Andrew
>>> >
>>>
>>>
>>>
>>> --
>>> -Dima
>>>
>>
>


-- 
Todd Lipcon
Software Engineer, Cloudera


[jira] [Created] (HBASE-16487) Remove Class.forName("..PrefixTreeCodec") from TableMapReduceUtil addHBaseDependencyJars

2016-08-23 Thread Matteo Bertozzi (JIRA)
Matteo Bertozzi created HBASE-16487:
---

 Summary: Remove Class.forName("..PrefixTreeCodec") from 
TableMapReduceUtil addHBaseDependencyJars
 Key: HBASE-16487
 URL: https://issues.apache.org/jira/browse/HBASE-16487
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 1.2.2, 2.0.0, 1.3.0, 1.4.0
Reporter: Matteo Bertozzi


HBASE-15152 included the prefix-tree module as a dependency of 
TableMapReduceUtil, but the hardcoded string of the class name was wrong. 
HBASE-16360 fixed the hardcoded string.

But I was looking at the comment above it, and I can't figure out where the 
circular dependency is.
{code}
// PrefixTreeCodec is part of the hbase-prefix-tree module. If not included in
// MR jobs jar dependencies, MR jobs that write encoded hfiles will fail.
// We used reflection here so to prevent a circular module dependency.
// TODO - if we extract the MR into a module, make it depend on hbase-prefix-tree
{code}
From the pom.xml of the prefix-tree module I don't see hbase-server, but I can 
see the prefix-tree module in hbase-server/pom.xml. TableMapReduceUtil is in 
hbase-server, so in theory we don't have any circular dependency. We can 
probably just drop that whole try/catch block with the Class.forName() and 
simply use org.apache.hadoop.hbase.codec.prefixtree.PrefixTreeCodec as we do 
for the others.

(Or at least we should add a test covering that Class.forName(), in case we 
rename PrefixTreeCodec or its package in the future and forget to update this 
reference.)
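
A minimal sketch of the proposed simplification (a sketch, not the actual 
patch):
{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.codec.prefixtree.PrefixTreeCodec;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;

// Illustration: reference the codec with a class literal instead of reflection,
// so a rename breaks the build instead of failing silently at runtime.
public class DependencyJarsSketch {
  public static void addPrefixTreeJar(Configuration conf) throws IOException {
    TableMapReduceUtil.addDependencyJars(conf, PrefixTreeCodec.class);
  }
}
{code}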



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: check_compatibility.sh script for external consumption

2016-08-23 Thread Andrew Wang
I do prefer Python! Guessing this is it:

https://github.com/apache/kudu/blob/master/build-support/check_compatibility.py

I filed YETUS-445 to get this integrated, will try my hand.

Thanks,
Andrew

On Tue, Aug 23, 2016 at 9:08 AM, Todd Lipcon  wrote:

> We have a version of the same in kudu as well, though ported to Python.
> Perhaps that's a more palatable language for some :)
>
> Todd
>
> On Aug 22, 2016 2:16 PM, "Dima Spivak"  wrote:
>
>> +1! I'm to blame for much of check_compatibility.sh (sorry for picking
>> Bash
>> :-p), and I'd be happy to see it living in Apache Yetus if it means making
>> life easy for other projects' contributors.
>>
>> -Dima
>>
>> On Mon, Aug 22, 2016 at 2:10 PM, Andrew Wang 
>> wrote:
>>
>> > Hi HBase devs,
>> >
>> > I'm working on the Hadoop 3 release, and Sean mentioned the nifty Java
>> ACC
>> > tool and the check_compatibility.sh script. ACC is way nicer than JDiff,
>> > and I'd like to use it over in Hadoop.
>> >
>> > There are some hardcodings and other enhancements I'd like to make
>> though.
>> > So, rather than having us copy-paste code back and forth, could we
>> consider
>> > some other form of codesharing? I believe we already both use Apache
>> Yetus,
>> > so that's one possible direction.
>> >
>> > BTW, I'm not subscribed to dev@hbase, so please keep me CC'd on this
>> > thread.
>> >
>> > Thanks,
>> > Andrew
>> >
>>
>>
>>
>> --
>> -Dima
>>
>


[jira] [Created] (HBASE-16486) ACL system table creation should call CreateTableProcedure directly

2016-08-23 Thread Stephen Yuan Jiang (JIRA)
Stephen Yuan Jiang created HBASE-16486:
--

 Summary: ACL system table creation should call 
CreateTableProcedure directly 
 Key: HBASE-16486
 URL: https://issues.apache.org/jira/browse/HBASE-16486
 Project: HBase
  Issue Type: Improvement
  Components: proc-v2
Affects Versions: 2.0.0
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
Priority: Minor


All other system tables (namespace, quota, rsgroups) call CreateTableProcedure 
directly; only the ACL system table calls HMaster.createTable(), which goes 
through the preCreateTable and postCreateTable coprocessor hooks. This is 
unnecessary, and we should make ACL system table creation the same as for the 
other system tables.
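
A minimal sketch of the proposed change (signatures assumed from the proc-v2 
code of that era; a sketch, not the actual patch):
{code}
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.master.procedure.CreateTableProcedure;
import org.apache.hadoop.hbase.master.procedure.MasterProcedureEnv;
import org.apache.hadoop.hbase.procedure2.ProcedureExecutor;

// Hypothetical illustration: submit CreateTableProcedure directly, bypassing the
// preCreateTable/postCreateTable coprocessor hooks that HMaster.createTable() runs.
public class AclTableCreationSketch {
  public static long createAclTable(ProcedureExecutor<MasterProcedureEnv> executor,
      HTableDescriptor aclTable, HRegionInfo[] newRegions) {
    return executor.submitProcedure(
        new CreateTableProcedure(executor.getEnvironment(), aclTable, newRegions));
  }
}
{code}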



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16485) Procedure v2 - Add support to addChildProcedure() as last "step" in StateMachineProcedure

2016-08-23 Thread Matteo Bertozzi (JIRA)
Matteo Bertozzi created HBASE-16485:
---

 Summary: Procedure v2 - Add support to addChildProcedure() as last 
"step" in StateMachineProcedure
 Key: HBASE-16485
 URL: https://issues.apache.org/jira/browse/HBASE-16485
 Project: HBase
  Issue Type: Sub-task
  Components: proc-v2
Affects Versions: 2.0.0, 1.3.0, 1.4.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 2.0.0, 1.3.0, 1.4.0
 Attachments: HBASE-16485-v0.patch

HBASE-15371 added support for adding children to a StateMachineProcedure, but 
there is one limitation: a child cannot be added in the last "step" of the 
execution; the current code silently ignores the added child.
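
An illustrative toy of the failure mode (plain Java; not HBase's actual 
procedure code):
{code}
import java.util.ArrayDeque;
import java.util.Deque;

// Toy driver: each step may enqueue children; the naive loop only drains
// children while more steps remain, so children added by the final step
// (the one that returns false) are silently dropped.
public class LastStepChildToy {
  interface Step {
    boolean run(Deque<Runnable> children);  // true while more steps remain
  }

  static void drive(Step step) {
    Deque<Runnable> children = new ArrayDeque<>();
    boolean more = true;
    while (more) {
      more = step.run(children);
      if (more) {
        while (!children.isEmpty()) {
          children.poll().run();
        }
      }  // BUG pattern: no drain after the final step
    }
  }

  public static void main(String[] args) {
    final int[] n = {0};
    drive(ch -> {
      if (n[0]++ < 2) {
        ch.add(() -> System.out.println("child of step " + n[0]));
        return true;
      }
      ch.add(() -> System.out.println("child of last step"));  // never runs
      return false;
    });
  }
}
{code}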



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16484) Create an Interface defining MultiVersionConsistencyControl

2016-08-23 Thread John Leach (JIRA)
John Leach created HBASE-16484:
--

 Summary: Create an Interface defining 
MultiVersionConsistencyControl 
 Key: HBASE-16484
 URL: https://issues.apache.org/jira/browse/HBASE-16484
 Project: HBase
  Issue Type: Bug
Reporter: John Leach
Priority: Minor


Hopefully this will help clarify this critical component.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: IntegrationTestBigLinkedList now running on builds.apache.org

2016-08-23 Thread Dima Spivak
I've opened HBASE-16481 as an umbrella JIRA for improvements to this and
added running on more branches and collecting logs/HFiles/WALs as subtasks.
Please keep the suggestions coming!

On Tue, Aug 23, 2016 at 9:44 AM, Andrew Purtell  wrote:

> This is great.
>
> To completely retrace a rare botch we may need these persisted post-run:
> - The console log of the run
> - All daemon logs
> - All WALs
> - All HFiles
> WALs and HFiles should be organized by time from oldest to newest.
>
> All could reside in an S3 bucket.
>
>
>
> On Tue, Aug 23, 2016 at 12:26 AM, Dima Spivak 
> wrote:
>
> > Yep, that's the next improvement I plan on making. Docker has API
> endpoints
> > for copying files from a container to the host, so I can definitely use
> > that to move logs from the cluster to the Jenkins workspace if a test
> > fails.
> >
> > On Monday, August 22, 2016, Nick Dimiduk  wrote:
> >
> > > This sounds great! Is there a way to gather logs and/or data files from
> > the
> > > containers before termination? Can they be stored on Jenkins as part of
> > the
> > > job artifacts?
> > >
> > > On Monday, August 22, 2016, Ted Yu  >
> > > wrote:
> > >
> > > > Nice job, Dima.
> > > >
> > > > Is there a Jenkins job for running ITBLL for the 1.2 / 1.3 branches?
> > > >
> > > > Cheers
> > > >
> > > > On Mon, Aug 22, 2016 at 5:33 PM, Dima Spivak  > > 
> > > > > wrote:
> > > >
> > > > > Dear devs,
> > > > >
> > > > > tl;dr: We now have Jenkins jobs
> > > > >  > > > IntegrationTestBigLinkedList/>
> > > > > that can run IntegrationTestBigLinkedList with fault injection on
> > > 5-node
> > > > > Apache HBase clusters built from source.
> > > > >
> > > > > Long version:
> > > > >
> > > > > I just wanted to provide an update on some recent work we've gotten
> > > done
> > > > > since committing an Apache HBase topology for clusterdock
> > > > >  > ccf5d27d7aa238c8398d2818928a71
> > > > > f39bd749a0>
> > > > > (a Python-based framework for building and starting Docker
> > > > container-based
> > > > > clusters).
> > > > >
> > > > > Despite the existence of an awesome system test framework with
> > > > > fault-injection capabilities in the form of the hbase-it module,
> > we've
> > > > > never had an easy way to run these tests on distributed clusters
> > > > upstream.
> > > > > This has long been a big hole in our Jenkins test coverage, but
> since
> > > the
> > > > > clusterdock topology got committed, we've been making progress on
> > doing
> > > > > something about it. I'm happy to report that, starting today, we
> are
> > > now
> > > > > running IntegrationTestBigLinkedList with fault-injection on Apache
> > > > > Infrastructure
> > > > >  > > > IntegrationTestBigLinkedList/>
> > > > > .
> > > > >
> > > > > Even longer version (stop reading here if you don't care how we do
> > it):
> > > > >
> > > > > So how do we do it? Well clusterdock is designed to start up
> multiple
> > > > Docker containers on one host where each container acts like a
> > > > lightweight
> > > > > VM (so 4 containers = 4-node cluster). What's in these containers
> > (and
> > > > what
> > > > > to do when starting them) is controlled by clusterdock's "topology"
> > > > > abstraction. Our apache_hbase topology builds a Docker image from a
> > > Java
> > > > > tarball, Hadoop tarball, and an HBase version. This last part can
> be
> > > > either
> > > > > a binary tarball (for RC testing or playing around with a release)
> > or a
> > > > Git
> > > > > commit, in which case our clusterdock topology builds HBase from
> > > source.
> > > > > Once we build a cluster, we can then push the cluster images
> > (actually,
> > > > > just one Docker image) to a shared Docker registry for repeated
> use.
> > We
> > > > now
> > > > > have a matrix job that can build any branches we care about (I set
> it
> > > up
> > > > > against branch-1.2
> > > > >  > > > > Build-clusterdock-Clusters/HBASE_VERSION=branch-1.2,
> label=docker/>,
> > > > > branch-1.3
> > > > >  > > > > Build-clusterdock-Clusters/HBASE_VERSION=branch-1.3,
> label=docker/>,
> > > > > and master
> > > > >  > > > > Build-clusterdock-Clusters/HBASE_VERSION=master,label=docker/>
> > > > > to start) and do this.
> > > > >
> > > > > Once these images are built (and pushed), we can use them to start
> up
> > > an
> > > > > n-node sized cluster on one host and run tests against it. To
> begin,
> > > I've
> > > > > set up a super simple Jenkins job that starts up a 5-node cluster,
> > runs
> > > > > ITBLL (with an optional Chaos Monkey), and then exits.
> > > > >
> > > > > This work is being tracked in HBASE-15964 and there's much more
> that
> > I
> > > > want
> > > > to do (more tests, more Chaos Monkeys, more branches, more diagnostic
> > > > information collection when a test fails), but I figured I'd let you
> > > > guys know about what we have going so far. :)

[jira] [Created] (HBASE-16483) Build and run tests against more HBase branches

2016-08-23 Thread Dima Spivak (JIRA)
Dima Spivak created HBASE-16483:
---

 Summary: Build and run tests against more HBase branches
 Key: HBASE-16483
 URL: https://issues.apache.org/jira/browse/HBASE-16483
 Project: HBase
  Issue Type: Sub-task
  Components: integration tests
Reporter: Dima Spivak
Assignee: Dima Spivak


We started out with {{master}}, but getting tests going against {{branch-1.2}} 
and {{branch-1.3}} would be a good next step.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16482) Collect logs and other files when system tests fail

2016-08-23 Thread Dima Spivak (JIRA)
Dima Spivak created HBASE-16482:
---

 Summary: Collect logs and other files when system tests fail
 Key: HBASE-16482
 URL: https://issues.apache.org/jira/browse/HBASE-16482
 Project: HBase
  Issue Type: Sub-task
  Components: integration tests
Reporter: Dima Spivak
Assignee: Dima Spivak


As requested by [~apurtell], when {{hbase-it}} jobs fail:
{noformat}
To completely retrace a rare botch we may need these persisted post-run:
- The console log of the run
- All daemon logs
- All WALs
- All HFiles
WALs and HFiles should be organized by time from oldest to newest.

All could reside in an S3 bucket.
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16481) Improvements to clusterdock-based hbase-it jobs

2016-08-23 Thread Dima Spivak (JIRA)
Dima Spivak created HBASE-16481:
---

 Summary: Improvements to clusterdock-based hbase-it jobs
 Key: HBASE-16481
 URL: https://issues.apache.org/jira/browse/HBASE-16481
 Project: HBase
  Issue Type: Task
  Components: integration tests
Reporter: Dima Spivak
Assignee: Dima Spivak


Parent JIRA to track things to improve in the nascent jobs on builds.apache.org 
that run tests from {{hbase-it}} on clusterdock-based Apache HBase clusters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: IntegrationTestBigLinkedList now running on builds.apache.org

2016-08-23 Thread Andrew Purtell
This is great.

To completely retrace a rare botch we may need these persisted post-run:
- The console log of the run
- All daemon logs
- All WALs
- All HFiles
WALs and HFiles should be organized by time from oldest to newest.

All could reside in an S3 bucket.



On Tue, Aug 23, 2016 at 12:26 AM, Dima Spivak  wrote:

> Yep, that's the next improvement I plan on making. Docker has API endpoints
> for copying files from a container to the host, so I can definitely use
> that to move logs from the cluster to the Jenkins workspace if a test
> fails.
>
> On Monday, August 22, 2016, Nick Dimiduk  wrote:
>
> > This sounds great! Is there a way to gather logs and/or data files from
> the
> > containers before termination? Can they be stored on Jenkins as part of
> the
> > job artifacts?
> >
> > On Monday, August 22, 2016, Ted Yu >
> > wrote:
> >
> > > Nice job, Dima.
> > >
> > > Is there a Jenkins job for running ITBLL for the 1.2 / 1.3 branches?
> > >
> > > Cheers
> > >
> > > On Mon, Aug 22, 2016 at 5:33 PM, Dima Spivak  > 
> > > > wrote:
> > >
> > > > Dear devs,
> > > >
> > > > tl;dr: We now have Jenkins jobs
> > > >  > > IntegrationTestBigLinkedList/>
> > > > that can run IntegrationTestBigLinkedList with fault injection on
> > 5-node
> > > > Apache HBase clusters built from source.
> > > >
> > > > Long version:
> > > >
> > > > I just wanted to provide an update on some recent work we've gotten
> > done
> > > > since committing an Apache HBase topology for clusterdock
> > > >  ccf5d27d7aa238c8398d2818928a71
> > > > f39bd749a0>
> > > > (a Python-based framework for building and starting Docker
> > > container-based
> > > > clusters).
> > > >
> > > > Despite the existence of an awesome system test framework with
> > > > fault-injection capabilities in the form of the hbase-it module,
> we've
> > > > never had an easy way to run these tests on distributed clusters
> > > upstream.
> > > > This has long been a big hole in our Jenkins test coverage, but since
> > the
> > > > clusterdock topology got committed, we've been making progress on
> doing
> > > > something about it. I'm happy to report that, starting today, we are
> > now
> > > > running IntegrationTestBigLinkedList with fault-injection on Apache
> > > > Infrastructure
> > > >  > > IntegrationTestBigLinkedList/>
> > > > .
> > > >
> > > > Even longer version (stop reading here if you don't care how we do
> it):
> > > >
> > > > So how do we do it? Well clusterdock is designed to start up multiple
> > > Docker containers on one host where each container acts like a
> > > lightweight
> > > > VM (so 4 containers = 4-node cluster). What's in these containers
> (and
> > > what
> > > > to do when starting them) is controlled by clusterdock's "topology"
> > > > abstraction. Our apache_hbase topology builds a Docker image from a
> > Java
> > > > tarball, Hadoop tarball, and an HBase version. This last part can be
> > > either
> > > > a binary tarball (for RC testing or playing around with a release)
> or a
> > > Git
> > > > commit, in which case our clusterdock topology builds HBase from
> > source.
> > > > Once we build a cluster, we can then push the cluster images
> (actually,
> > > > just one Docker image) to a shared Docker registry for repeated use.
> We
> > > now
> > > > have a matrix job that can build any branches we care about (I set it
> > up
> > > > against branch-1.2
> > > >  > > > Build-clusterdock-Clusters/HBASE_VERSION=branch-1.2,label=docker/>,
> > > > branch-1.3
> > > >  > > > Build-clusterdock-Clusters/HBASE_VERSION=branch-1.3,label=docker/>,
> > > > and master
> > > >  > > > Build-clusterdock-Clusters/HBASE_VERSION=master,label=docker/>
> > > > to start) and do this.
> > > >
> > > > Once these images are built (and pushed), we can use them to start up
> > an
> > > > n-node sized cluster on one host and run tests against it. To begin,
> > I've
> > > > set up a super simple Jenkins job that starts up a 5-node cluster,
> runs
> > > > ITBLL (with an optional Chaos Monkey), and then exits.
> > > >
> > > > This work is being tracked in HBASE-15964 and there's much more that
> I
> > > want
> > > > to do (more tests, more Chaos Monkeys, more branches, more diagnostic
> > > > information collection when a test fails), but I figured I'd let you
> > guys
> > > know about what we have going so far. :)
> > > >
> > > > PS: Special thanks to Jon Hsieh for helping me get the Jenkins jobs
> > > > running.
> > > >
> > > > --
> > > > -Dima
> > > >
> > >
> >
>
>
> --
> -Dima
>



-- 
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)


Re: check_compatibility.sh script for external consumption

2016-08-23 Thread Todd Lipcon
We have a version of the same in kudu as well, though ported to Python.
Perhaps that's a more palatable language for some :)

Todd

On Aug 22, 2016 2:16 PM, "Dima Spivak"  wrote:

> +1! I'm to blame for much of check_compatibility.sh (sorry for picking Bash
> :-p), and I'd be happy to see it living in Apache Yetus if it means making
> life easy for other projects' contributors.
>
> -Dima
>
> On Mon, Aug 22, 2016 at 2:10 PM, Andrew Wang 
> wrote:
>
> > Hi HBase devs,
> >
> > I'm working on the Hadoop 3 release, and Sean mentioned the nifty Java
> ACC
> > tool and the check_compatibility.sh script. ACC is way nicer than JDiff,
> > and I'd like to use it over in Hadoop.
> >
> > There are some hardcodings and other enhancements I'd like to make
> though.
> > So, rather than having us copy-paste code back and forth, could we
> consider
> > some other form of codesharing? I believe we already both use Apache
> Yetus,
> > so that's one possible direction.
> >
> > BTW, I'm not subscribed to dev@hbase, so please keep me CC'd on this
> > thread.
> >
> > Thanks,
> > Andrew
> >
>
>
>
> --
> -Dima
>


Successful: HBase Generate Website

2016-08-23 Thread Apache Jenkins Server
Build status: Successful

If successful, the website and docs have been generated. To update the live 
site, follow the instructions below. If failed, skip to the bottom of this 
email.

Use the following commands to download the patch and apply it to a clean branch 
based on origin/asf-site. If you prefer to keep the hbase-site repo around 
permanently, you can skip the clone step.

  git clone https://git-wip-us.apache.org/repos/asf/hbase-site.git

  cd hbase-site
  wget -O- 
https://builds.apache.org/job/hbase_generate_website/321/artifact/website.patch.zip
 | funzip > 77a7394f1770249f33d07df6bac6cf16ef34140e.patch
  git fetch
  git checkout -b asf-site-77a7394f1770249f33d07df6bac6cf16ef34140e 
origin/asf-site
  git am --whitespace=fix 77a7394f1770249f33d07df6bac6cf16ef34140e.patch

At this point, you can preview the changes by opening index.html or any of the 
other HTML pages in your local 
asf-site-77a7394f1770249f33d07df6bac6cf16ef34140e branch.

There are lots of spurious changes, such as timestamps and CSS styles in 
tables, so a generic git diff is not very useful. To see a list of files that 
have been added, deleted, renamed, changed type, or are otherwise interesting, 
use the following command:

  git diff --name-status --diff-filter=ADCRTXUB origin/asf-site

To see only files that had 100 or more lines changed:

  git diff --stat origin/asf-site | grep -E '[1-9][0-9]{2,}'

When you are satisfied, publish your changes to origin/asf-site using these 
commands:

  git commit --allow-empty -m "Empty commit" # to work around a current ASF 
INFRA bug
  git push origin asf-site-77a7394f1770249f33d07df6bac6cf16ef34140e:asf-site
  git checkout asf-site
  git branch -D asf-site-77a7394f1770249f33d07df6bac6cf16ef34140e

Changes take a couple of minutes to be propagated. You can verify whether they 
have been propagated by looking at the Last Published date at the bottom of 
http://hbase.apache.org/. It should match the date in the index.html on the 
asf-site branch in Git.

As a courtesy, reply-all to this email to let other committers know you pushed 
the site.



If failed, see https://builds.apache.org/job/hbase_generate_website/321/console

Re: [DISCUSSION] Should we pull the Cygwin-oriented instructions out of the official reference materials?

2016-08-23 Thread Enis Söztutar
Sounds good.

On Mon, Aug 22, 2016 at 8:48 AM, Misty Stanley-Jones 
wrote:

> +1
>
> > On Aug 21, 2016, at 10:50 PM, Daniel Vimont 
> wrote:
> >
> > I'm putting out a "last call" for any -1 votes on this proposal, and then
> > if no objections come in, I'll open a JIRA on Wednesday (or so) and make
> > the changes...
> >
> >> On Thu, Aug 18, 2016 at 2:54 AM, Sean Busbey  wrote:
> >>
> >> +1 on yanking them.
> >>
> >> Only kind of related, but with the move to having an Ubuntu-based bash
> >> shell for Windows 10, folks interested in resurrecting the instructions
> >> should focus on that as a deployment environment.
> >>
> >> On Tue, Aug 16, 2016 at 8:52 PM, Daniel Vimont 
> >> wrote:
> >>
> >>> Hey all,
> >>>
> >>> There are recurring user-list postings[1][2] (and the occasional
> JIRA[3])
> >>> from confused individuals who are attempting to follow the (apparently
> >>> outdated and unsupported, yet apparently "official") instructions for
> >>> Cygwin-based installation of HBase in a Windows environment[4].
> >>>
> >>> Given that these instructions do appear to be outdated and
> unsupported, I
> >>> would recommend that we simply pull them from the official
> documentation.
> >>> (Yes, we could put a big banner up at the top of the Cygwin
> instructions
> >>> that announces them as severely deprecated -- "Here be monsters" -- but
> >> why
> >>> leave bogus instructions lying around?)
> >>>
> >>> Big questions:
> >>> (a) Is there anyone who wants to update the Cygwin instructions?
> >>> (b) Assuming that the answer to question (a) is "no"[5], then should we
> >>> pull the Cygwin instructions (hbase/src/main/site/asciidoc/
> cygwin.adoc)
> >>> out
> >>> of the official Reference materials?
> >>>
> >>> Thanks,
> >>>
> >>> Dan
> >>>
> >>> [1]
> >>> http://mail-archives.apache.org/mod_mbox/hbase-user/201608.
> >>> mbox/%3C0e035a2a13e041fdae95d204857c271a%40HK2PR3007MB0132.
> >>> 064d.mgd.msft.net%3E
> >>> [2]
> >>> http://mail-archives.apache.org/mod_mbox/hbase-user/201608.
> >>> mbox/%3CCABL%2BTmt2khhkgQD%2BppUCewJ9Dvhsx-WqqkGFG%2B_
> >>> TGZjpu%2Bh2iw%40mail.gmail.com%3E
> >>> [3] https://issues.apache.org/jira/browse/HBASE-16424
> >>> [4] https://hbase.apache.org/cygwin.html
> >>> [5] ... or more likely, that question (a) is simply met with the sound
> of
> >>> crickets vigorously chirping
> >>
>


[jira] [Resolved] (HBASE-16472) TableNotDisabledException

2016-08-23 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi resolved HBASE-16472.
-
Resolution: Not A Problem

> TableNotDisabledException
> -
>
> Key: HBASE-16472
> URL: https://issues.apache.org/jira/browse/HBASE-16472
> Project: HBase
>  Issue Type: Bug
>Reporter: Dhruv Singhal
>
> When I created a table in HBase and then tried running a create statement in 
> Phoenix, Phoenix returned the following error:
> procedure.ModifyTableProcedure: Error trying to modify table=t21sample 
> state=MODIFY_TABLE_PREPARE
> org.apache.hadoop.hbase.TableNotDisabledException: t21sample
> at 
> org.apache.hadoop.hbase.master.procedure.ModifyTableProcedure.prepareModify(ModifyTableProcedure.java:298)
> at 
> org.apache.hadoop.hbase.master.procedure.ModifyTableProcedure.executeFromState(ModifyTableProcedure.java:98)
> at 
> org.apache.hadoop.hbase.master.procedure.ModifyTableProcedure.executeFromState(ModifyTableProcedure.java:54)
> at 
> org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:107)
> at 
> org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:400)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:869)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:673)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:626)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$200(ProcedureExecutor.java:70)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor$1.run(ProcedureExecutor.java:413)
> The same error occurred in the case of altering an existing table.
> I somehow managed to create the table in Phoenix via the same sequence if I 
> disabled the table after creating it: Phoenix first returned a table-disabled 
> error, but after enabling the table in HBase, the table was created in 
> Phoenix. The same worked for the alter statement as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16480) Merge WALEdit and WALKey

2016-08-23 Thread Enis Soztutar (JIRA)
Enis Soztutar created HBASE-16480:
-

 Summary: Merge WALEdit and WALKey
 Key: HBASE-16480
 URL: https://issues.apache.org/jira/browse/HBASE-16480
 Project: HBase
  Issue Type: Sub-task
  Components: wal
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 2.0.0


No need for separate classes: 
{code}
// TODO: Key and WALEdit are never used separately, or in one-to-many relation,
//   for practical purposes. They need to be merged into WALEntry.
@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.REPLICATION)
public class WALKey implements SequenceId, Comparable<WALKey> {
{code}

Will reduce garbage a little and simplify code. We can get rid of WAL.Entry as 
well. 
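
A minimal sketch of the merged shape (fields illustrative only, not an actual 
patch):
{code}
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration of a merged WALEntry: the key half and the edit half
// live in one object, so one allocation per entry instead of two.
public class WALEntry implements Comparable<WALEntry> {
  private final byte[] encodedRegionName;  // formerly in WALKey
  private final long sequenceId;           // formerly in WALKey
  private final List<byte[]> cells = new ArrayList<>();  // formerly WALEdit

  public WALEntry(byte[] encodedRegionName, long sequenceId) {
    this.encodedRegionName = encodedRegionName;
    this.sequenceId = sequenceId;
  }

  public byte[] getEncodedRegionName() {
    return encodedRegionName;
  }

  public void add(byte[] cell) {
    cells.add(cell);
  }

  @Override
  public int compareTo(WALEntry other) {
    return Long.compare(sequenceId, other.sequenceId);
  }
}
{code}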



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16479) Move WALEdit from hbase.regionserver.wal package to hbase.wal package

2016-08-23 Thread Enis Soztutar (JIRA)
Enis Soztutar created HBASE-16479:
-

 Summary: Move WALEdit from hbase.regionserver.wal package to 
hbase.wal package
 Key: HBASE-16479
 URL: https://issues.apache.org/jira/browse/HBASE-16479
 Project: HBase
  Issue Type: Sub-task
  Components: wal
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 2.0.0


{{hbase.wal}} is the new home for WAL related code. WALEdit should be there. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16477) Remove Writable interface and related code from WALEdit/WALKey

2016-08-23 Thread Enis Soztutar (JIRA)
Enis Soztutar created HBASE-16477:
-

 Summary: Remove Writable interface and related code from 
WALEdit/WALKey
 Key: HBASE-16477
 URL: https://issues.apache.org/jira/browse/HBASE-16477
 Project: HBase
  Issue Type: Sub-task
  Components: wal
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 2.0.0


Writables are gone, and the SequenceFile-based WAL will be gone with the parent 
issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16478) Rename WALKey in PB to WALEdit

2016-08-23 Thread Enis Soztutar (JIRA)
Enis Soztutar created HBASE-16478:
-

 Summary: Rename WALKey in PB to WALEdit
 Key: HBASE-16478
 URL: https://issues.apache.org/jira/browse/HBASE-16478
 Project: HBase
  Issue Type: Sub-task
  Components: wal
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 2.0.0


As per title. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16476) Remove HLogKey

2016-08-23 Thread Enis Soztutar (JIRA)
Enis Soztutar created HBASE-16476:
-

 Summary: Remove HLogKey
 Key: HBASE-16476
 URL: https://issues.apache.org/jira/browse/HBASE-16476
 Project: HBase
  Issue Type: Sub-task
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 2.0.0


HLogKey has been deprecated since 1.0; we should remove it in 2.0. This is also 
a cleanup for coprocessors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16475) Remove SequenceFile based WAL

2016-08-23 Thread Enis Soztutar (JIRA)
Enis Soztutar created HBASE-16475:
-

 Summary: Remove SequenceFile based WAL
 Key: HBASE-16475
 URL: https://issues.apache.org/jira/browse/HBASE-16475
 Project: HBase
  Issue Type: Sub-task
  Components: wal
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 2.0.0


The SequenceFile-based WAL is not used in 0.96+. There is no 0.94 -> 2.0 
upgrade path (some other upgrade code has already been removed), so we can 
remove the SequenceFile-based WAL and related code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16474) Remove dfs.support.append related code and documentation

2016-08-23 Thread Enis Soztutar (JIRA)
Enis Soztutar created HBASE-16474:
-

 Summary: Remove dfs.support.append related code and documentation
 Key: HBASE-16474
 URL: https://issues.apache.org/jira/browse/HBASE-16474
 Project: HBase
  Issue Type: Sub-task
  Components: fs, regionserver, wal
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 2.0.0


{{dfs.support.append}} is not needed anymore in Hadoop 2.0+.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16473) Cleanup WALKey / WALEdit / Entry

2016-08-23 Thread Enis Soztutar (JIRA)
Enis Soztutar created HBASE-16473:
-

 Summary: Cleanup WALKey / WALEdit / Entry
 Key: HBASE-16473
 URL: https://issues.apache.org/jira/browse/HBASE-16473
 Project: HBase
  Issue Type: Umbrella
  Components: wal
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 2.0.0


Series of patches to clean up WALEdit, WALKey and related classes and 
interfaces. 

Merge WALEdit and WALKey and get rid of Entry for 2.0. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16472) TableNotDisabledException

2016-08-23 Thread Dhruv Singhal (JIRA)
Dhruv Singhal created HBASE-16472:
-

 Summary: TableNotDisabledException
 Key: HBASE-16472
 URL: https://issues.apache.org/jira/browse/HBASE-16472
 Project: HBase
  Issue Type: Bug
Reporter: Dhruv Singhal


 procedure.ModifyTableProcedure: Error trying to modify table=t21sample 
state=MODIFY_TABLE_PREPARE
org.apache.hadoop.hbase.TableNotDisabledException: t21sample
at 
org.apache.hadoop.hbase.master.procedure.ModifyTableProcedure.prepareModify(ModifyTableProcedure.java:298)
at 
org.apache.hadoop.hbase.master.procedure.ModifyTableProcedure.executeFromState(ModifyTableProcedure.java:98)
at 
org.apache.hadoop.hbase.master.procedure.ModifyTableProcedure.executeFromState(ModifyTableProcedure.java:54)
at 
org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:107)
at 
org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:400)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:869)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:673)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:626)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$200(ProcedureExecutor.java:70)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor$1.run(ProcedureExecutor.java:413)




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: IntegrationTestBigLinkedList now running on builds.apache.org

2016-08-23 Thread Ted Yu
ITBLL in master branch has not been run manually for quite some time.

I wouldn't be surprised if it doesn't pass with serverKilling monkey.

Starting from 1.2 / 1.3 may get you to a green build faster.

Cheers

On Tue, Aug 23, 2016 at 12:24 AM, Dima Spivak  wrote:

> Not yet, but I plan on adding them once I get master passing. Stay tuned!
>
> On Monday, August 22, 2016, Ted Yu  wrote:
>
> > Nice job, Dima.
> >
> > Is there a Jenkins job for running ITBLL for the 1.2 / 1.3 branches?
> >
> > Cheers
> >
> > On Mon, Aug 22, 2016 at 5:33 PM, Dima Spivak  > > wrote:
> >
> > > Dear devs,
> > >
> > > tl;dr: We now have Jenkins jobs
> > >  > IntegrationTestBigLinkedList/>
> > > that can run IntegrationTestBigLinkedList with fault injection on
> 5-node
> > > Apache HBase clusters built from source.
> > >
> > > Long version:
> > >
> > > I just wanted to provide an update on some recent work we've gotten
> done
> > > since committing an Apache HBase topology for clusterdock
> > >  > > f39bd749a0>
> > > (a Python-based framework for building and starting Docker
> > container-based
> > > clusters).
> > >
> > > Despite the existence of an awesome system test framework with
> > > fault-injection capabilities in the form of the hbase-it module, we've
> > > never had an easy way to run these tests on distributed clusters
> > upstream.
> > > This has long been a big hole in our Jenkins test coverage, but since
> the
> > > clusterdock topology got committed, we've been making progress on doing
> > > something about it. I'm happy to report that, starting today, we are
> now
> > > running IntegrationTestBigLinkedList with fault-injection on Apache
> > > Infrastructure
> > >  > IntegrationTestBigLinkedList/>
> > > .
> > >
> > > Even longer version (stop reading here if you don't care how we do it):
> > >
> > > So how do we do it? Well clusterdock is designed to start up multiple
> > > Docker containers on one host where each container acts like a
> > lightweight
> > > VM (so 4 containers = 4-node cluster). What's in these containers (and
> > what
> > > to do when starting them) is controlled by clusterdock's "topology"
> > > abstraction. Our apache_hbase topology builds a Docker image from a
> Java
> > > tarball, Hadoop tarball, and an HBase version. This last part can be
> > either
> > > a binary tarball (for RC testing or playing around with a release) or a
> > Git
> > > commit, in which case our clusterdock topology builds HBase from
> source.
> > > Once we build a cluster, we can then push the cluster images (actually,
> > > just one Docker image) to a shared Docker registry for repeated use. We
> > now
> > > have a matrix job that can build any branches we care about (I set it
> up
> > > against branch-1.2
> > >  > > Build-clusterdock-Clusters/HBASE_VERSION=branch-1.2,label=docker/>,
> > > branch-1.3
> > >  > > Build-clusterdock-Clusters/HBASE_VERSION=branch-1.3,label=docker/>,
> > > and master
> > >  > > Build-clusterdock-Clusters/HBASE_VERSION=master,label=docker/>
> > > to start) and do this.
> > >
> > > Once these images are built (and pushed), we can use them to start up
> an
> > > n-node sized cluster on one host and run tests against it. To begin,
> I've
> > > set up a super simple Jenkins job that starts up a 5-node cluster, runs
> > > ITBLL (with an optional Chaos Monkey), and then exits.
> > >
> > > This work is being tracked in HBASE-15964 and there's much more that I
> > want
> > > to do (more tests, more Chaos Monkeys, more branches, more diagnostic
> > > information collection when a test fails), but I figured I'd let you
> guys
> > > know about what we have going so far. :)
> > >
> > > PS: Special thanks to Jon Hsieh for helping me get the Jenkins jobs
> > > running.
> > >
> > > --
> > > -Dima
> > >
> >
>
>
> --
> -Dima
>


Re: IntegrationTestBigLinkedList now running on builds.apache.org

2016-08-23 Thread Dima Spivak
Yep, that's the next improvement I plan on making. Docker has API endpoints
for copying files from a container to the host, so I can definitely use
that to move logs from the cluster to the Jenkins workspace if a test fails.

On Monday, August 22, 2016, Nick Dimiduk  wrote:

> This sounds great! Is there a way to gather logs and/or data files from the
> containers before termination? Can they be stored on Jenkins as part of the
> job artifacts?
>
> On Monday, August 22, 2016, Ted Yu >
> wrote:
>
> > Nice job, Dima.
> >
> > Is there a Jenkins job for running ITBLL for the 1.2 / 1.3 branches?
> >
> > Cheers
> >
> > On Mon, Aug 22, 2016 at 5:33 PM, Dima Spivak  
> > > wrote:
> >
> > > Dear devs,
> > >
> > > tl;dr: We now have Jenkins jobs
> > >  > IntegrationTestBigLinkedList/>
> > > that can run IntegrationTestBigLinkedList with fault injection on
> 5-node
> > > Apache HBase clusters built from source.
> > >
> > > Long version:
> > >
> > > I just wanted to provide an update on some recent work we've gotten
> done
> > > since committing an Apache HBase topology for clusterdock
> > >  > > f39bd749a0>
> > > (a Python-based framework for building and starting Docker
> > container-based
> > > clusters).
> > >
> > > Despite the existence of an awesome system test framework with
> > > fault-injection capabilities in the form of the hbase-it module, we've
> > > never had an easy way to run these tests on distributed clusters
> > upstream.
> > > This has long been a big hole in our Jenkins test coverage, but since
> the
> > > clusterdock topology got committed, we've been making progress on doing
> > > something about it. I'm happy to report that, starting today, we are
> now
> > > running IntegrationTestBigLinkedList with fault-injection on Apache
> > > Infrastructure
> > >  > IntegrationTestBigLinkedList/>
> > > .
> > >
> > > Even longer version (stop reading here if you don't care how we do it):
> > >
> > > So how do we do it? Well clusterdock is designed to start up multiple
> > > Docker containers on one host where each container acts like a
> > lightweight
> > > VM (so 4 containers = 4-node cluster). What's in these containers (and
> > what
> > > to do when starting them) is controlled by clusterdock's "topology"
> > > abstraction. Our apache_hbase topology builds a Docker image from a
> Java
> > > tarball, Hadoop tarball, and an HBase version. This last part can be
> > either
> > > a binary tarball (for RC testing or playing around with a release) or a
> > Git
> > > commit, in which case our clusterdock topology builds HBase from
> source.
> > > Once we build a cluster, we can then push the cluster images (actually,
> > > just one Docker image) to a shared Docker registry for repeated use. We
> > now
> > > have a matrix job that can build any branches we care about (I set it
> up
> > > against branch-1.2
> > >  > > Build-clusterdock-Clusters/HBASE_VERSION=branch-1.2,label=docker/>,
> > > branch-1.3
> > >  > > Build-clusterdock-Clusters/HBASE_VERSION=branch-1.3,label=docker/>,
> > > and master
> > >  > > Build-clusterdock-Clusters/HBASE_VERSION=master,label=docker/>
> > > to start) and do this.
> > >
> > > Once these images are built (and pushed), we can use them to start up
> an
> > > n-node sized cluster on one host and run tests against it. To begin,
> I've
> > > set up a super simple Jenkins job that starts up a 5-node cluster, runs
> > > ITBLL (with an optional Chaos Monkey), and then exits.
> > >
> > > This work is being tracked in HBASE-15964 and there's much more that I
> > want
> > > to do (more tests, more Chaos Monkeys, more branches, more diagnostic
> > > information collection when a test fails), but I figured I'd let you
> guys
> > > know about what we have going so far. :)
> > >
> > > PS: Special thanks to Jon Hsieh for helping me get the Jenkins jobs
> > > running.
> > >
> > > --
> > > -Dima
> > >
> >
>


-- 
-Dima


Re: IntegrationTestBigLinkedList now running on builds.apache.org

2016-08-23 Thread Dima Spivak
Not yet, but I plan on adding them once I get master passing. Stay tuned!

On Monday, August 22, 2016, Ted Yu  wrote:

> Nice job, Dima.
>
> Is there a Jenkins job for running ITBLL for the 1.2 / 1.3 branches?
>
> Cheers
>
> On Mon, Aug 22, 2016 at 5:33 PM, Dima Spivak  > wrote:
>
> > Dear devs,
> >
> > tl;dr: We now have Jenkins jobs
> >  IntegrationTestBigLinkedList/>
> > that can run IntegrationTestBigLinkedList with fault injection on 5-node
> > Apache HBase clusters built from source.
> >
> > Long version:
> >
> > I just wanted to provide an update on some recent work we've gotten done
> > since committing an Apache HBase topology for clusterdock
> >  > f39bd749a0>
> > (a Python-based framework for building and starting Docker
> container-based
> > clusters).
> >
> > Despite the existence of an awesome system test framework with
> > fault-injection capabilities in the form of the hbase-it module, we've
> > never had an easy way to run these tests on distributed clusters
> upstream.
> > This has long been a big hole in our Jenkins test coverage, but since the
> > clusterdock topology got committed, we've been making progress on doing
> > something about it. I'm happy to report that, starting today, we are now
> > running IntegrationTestBigLinkedList with fault-injection on Apache
> > Infrastructure
> >  IntegrationTestBigLinkedList/>
> > .
> >
> > Even longer version (stop reading here if you don't care how we do it):
> >
> > So how do we do it? Well clusterdock is designed to start up multiple
> > Docker containers on one host where each container acts like a
> lightweight
> > VM (so 4 containers = 4-node cluster). What's in these containers (and
> what
> > to do when starting them) is controlled by clusterdock's "topology"
> > abstraction. Our apache_hbase topology builds a Docker image from a Java
> > tarball, Hadoop tarball, and an HBase version. This last part can be
> either
> > a binary tarball (for RC testing or playing around with a release) or a
> Git
> > commit, in which case our clusterdock topology builds HBase from source.
> > Once we build a cluster, we can then push the cluster images (actually,
> > just one Docker image) to a shared Docker registry for repeated use. We
> now
> > have a matrix job that can build any branches we care about (I set it up
> > against branch-1.2
> >  > Build-clusterdock-Clusters/HBASE_VERSION=branch-1.2,label=docker/>,
> > branch-1.3
> >  > Build-clusterdock-Clusters/HBASE_VERSION=branch-1.3,label=docker/>,
> > and master
> >  > Build-clusterdock-Clusters/HBASE_VERSION=master,label=docker/>
> > to start) and do this.
> >
> > Once these images are built (and pushed), we can use them to start up an
> > n-node sized cluster on one host and run tests against it. To begin, I've
> > set up a super simple Jenkins job that starts up a 5-node cluster, runs
> > ITBLL (with an optional Chaos Monkey), and then exits.
> >
> > This work is being tracked in HBASE-15964 and there's much more that I
> want
> > to do (more tests, more Chaos Monkeys, more branches, more diagnostic
> > information collection when a test fails), but I figured I'd let you guys
> > know about what we have going so far. :)
> >
> > PS: Special thanks to Jon Hsieh for helping me get the Jenkins jobs
> > running.
> >
> > --
> > -Dima
> >
>


-- 
-Dima