Re: [DISCUSSION] Dropping support for Hadoop 1.0 in 0.98

2014-10-31 Thread Konstantin Boudnik
Absolutely makes sense! It will make a lot of things easier, really. The
infamous need for classifiers will finally go away!

On Fri, Oct 31, 2014 at 10:56AM, Andrew Purtell wrote:
> Hadoop 1.0 is an ancient, and I believe dead, version that certainly nobody
> should use today. We have a chronic problem on the 0.98 branch with changes
> tested only on Hadoop 2 that are later found to break builds against Hadoop 1,
> since only 0.94 and 0.98 still support Hadoop 1.x. See
> https://issues.apache.org/jira/browse/HBASE-12397 as an example. This issue
> also illustrates that dropping support for 1.0 while retaining support for
> 1.1 and later versions of Hadoop 1.x can reduce cross-version compatibility
> complexity for at least the API involved in that issue, and certainly
> others. This in my opinion is a good thing.
> 
> 
> -- 
> Best regards,
> 
>- Andy
> 
> Problems worthy of attack prove their worth by hitting back. - Piet Hein
> (via Tom White)


Re: HBase shell compatibility needs [Was: Ruby scripting in HBase?]

2014-08-28 Thread Konstantin Boudnik
Sorry Sean - I've missed the thread somehow. So, yes - let's combine the two:
they are of course very closely related.

Cos

On Thu, Aug 28, 2014 at 11:23PM, Sean Busbey wrote:
> Hey Cos,
> 
> A couple of weeks ago I started a thread on what kind of compatibility
> needs have to be met when updating the shell, which is the vast majority of
> our jruby.
> 
> http://s.apache.org/tFV
> 
> Can we combine discussions into one place over there?
> 
> Ideally, I'd like to replace the jruby (at least for user facing
> operational needs) with a pure-java implementation that can cut down on
> size while getting faster. Unfortunately I got caught up in some work and
> the thread went quiet. I'm going to try to address Stack's concerns
> tomorrow.
> 
> My hope is that we can have language appropriate client bindings for folks
> who'd like to build their own tooling in a scripting language.
> 
> 
> On Thu, Aug 28, 2014 at 7:34 PM, Konstantin Boudnik  wrote:
> 
> > On Thu, Aug 28, 2014 at 07:51PM, Jean-Marc Spaggiari wrote:
> > > Was more thinking about the first. Having a way to automate groovy shell
> > > commands testing in the build.
> >
> > I am pretty sure there is. After all, Gradle (a build DSL based on Groovy
> > as the underlying language) has unit testing in it ;)
> >
> > The issue with testing of these scripts, as far as I can see, is that they
> > might need some sort of mocking involved to work around the fact that these
> > commands expect a running HBase cluster. Makes sense?
> >
> > As for the 'pain vs pain' comment: I am not really sure why. Groovy is
> > really
> > just Java with the added benefits of real lambdas, dynamic bindings, etc.
> >
> > Cos
> >
> > > On 2014-08-28 19:45, "Mikhail Antonov"  wrote:
> > >
> > > > JM - do you mean writing unit or integration tests for groovy commands
> > > > themselves, or to be able to write HBase tests in Groovy? If the latter
> > > > one, then I'd think HBase tests may benefit a lot in conciseness if
> > > > written in Groovy.
> > > >
> > > > -Mikhail
> > > >
> > > > 2014-08-28 16:39 GMT-07:00, Jean-Marc Spaggiari <
> > jean-m...@spaggiari.org>:
> > > > > Are we not just going to replace one pain with another pain?
> > > > >
> > > > > Can we build test suites for Groovy? I mean, not just use Groovy to
> > build
> > > > > tests, but build a test script which will test Groovy itself? I think it's
> > one of
> > > > > the main issues today with the JRuby shell.
> > > > >
> > > > > I prefer Groovy over JRuby but not sure if the move is really worth it.
> > > > >
> > > > >
> > > > > 2014-08-28 19:06 GMT-04:00 Konstantin Boudnik :
> > > > >
> > > > >> Guys,
> > > > >>
> > > > >> I've been looking into some service scripting around HBase lifecycle
> > > > >> management, etc. and couldn't help but wonder why those were
> > written in
> > > > >> Ruby
> > > > >> of all JVM languages? Historical legacy aside, it seems that current
> > > > >> HBase
> > > > >> is
> > > > >> still using JRuby 1.6.5 vs the latest at 1.9+ or perhaps even later.
> > > > >>
> > > > >> At any rate, I was wondering if replacing Ruby with a more Java-like
> > > > >> scripting
> > > > >> extension (if the scripting-2-Java API bridge is what is indeed
> > desired)
> > > > >> would be
> > > > >> of any interest here? An obvious choice would be Groovy
> > > > >> (http://groovy.codehaus.org/). One of the main reasons behind my
> > > > proposal
> > > > >> is
> > > > >> stack simplification: Bigtop is very actively using Groovy as a
> > > > scripting
> > > > >> language of choice to do builds, develop smoke tests, etc. So, it is
> > > > >> already
> > > > >> there and guaranteed to be installed as a part of any Bigtop-derived
> > > > >> Hadoop
> > > > >> distro. There are other benefits, where, if desired, one can just
> > write
> > > > >> Java
> > > > >> code inside of a Groovy script, without a need to learn yet another
> > > > >> language
> > > > >> like Ruby.
> > > > >>
> > > > >> This is perhaps not of an immediate priority for the community, but
> > if
> > > > >> there's
> > > > >> enough interest, I can give it an initial shot to demo what I am
> > > > >> really
> > > > >> talking about.
> > > > >>
> > > > >> Thoughts?
> > > > >> --
> > > > >> Regards,
> > > > >>   Cos
> > > > >>
> > > > >>
> > > > >
> > > >
> > > >
> > > > --
> > > > Thanks,
> > > > Michael Antonov
> > > >
> >
> 
> 
> 
> -- 
> Sean




Re: Ruby scripting in HBase?

2014-08-28 Thread Konstantin Boudnik
On Thu, Aug 28, 2014 at 07:51PM, Jean-Marc Spaggiari wrote:
> Was more thinking about the first. Having a way to automate groovy shell
> commands testing in the build.

I am pretty sure there is. After all, Gradle (a build DSL based on Groovy
as the underlying language) has unit testing in it ;)

The issue with testing of these scripts, as far as I can see, is that they
might need some sort of mocking involved to work around the fact that these
commands expect a running HBase cluster. Makes sense?
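
To make that concrete, below is a minimal sketch of what such a test could look
like - purely my assumption, using plain JUnit plus the existing
HBaseTestingUtility mini-cluster instead of real mocks, and driving the client
API directly where the (hypothetical) Groovy command would eventually sit:

import static org.junit.Assert.assertTrue;

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.util.Bytes;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

public class ShellCommandSmokeTest {
  private static final HBaseTestingUtility UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUp() throws Exception {
    // Stands in for the "running HBase cluster" the commands expect.
    UTIL.startMiniCluster();
  }

  @AfterClass
  public static void tearDown() throws Exception {
    UTIL.shutdownMiniCluster();
  }

  @Test
  public void createTableIsVisibleToAdmin() throws Exception {
    // A real test would invoke the (hypothetical) Groovy command here; the
    // sketch just drives the same client API such a command would end up using.
    UTIL.createTable(Bytes.toBytes("t1"), Bytes.toBytes("cf")).close();
    assertTrue(UTIL.getHBaseAdmin().tableExists("t1"));
  }
}

That would at least let the build catch regressions in the commands without
anybody provisioning a real cluster.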

As for the 'pain vs pain' comment: I am not really sure why. Groovy is really
just Java with the added benefits of real lambdas, dynamic bindings, etc.

Cos

> On 2014-08-28 19:45, "Mikhail Antonov"  wrote:
> 
> > JM - do you mean writing unit or integration tests for groovy commands
> > themselves, or to be able to write HBase tests in Groovy? If the latter
> > one, then I'd think HBase tests may benefit a lot in conciseness if
> > written in Groovy.
> >
> > -Mikhail
> >
> > 2014-08-28 16:39 GMT-07:00, Jean-Marc Spaggiari :
> > > Are we not just going to replace one pain with another pain?
> > >
> > > Can we build test suites for Groovy? I mean, not just use Groovy to build
> > > tests, but build a test script which will test Groovy itself? I think it's one of
> > > the main issues today with the JRuby shell.
> > >
> > > I prefer Groovy over JRuby but not sure if the move is really worth it.
> > >
> > >
> > > 2014-08-28 19:06 GMT-04:00 Konstantin Boudnik :
> > >
> > >> Guys,
> > >>
> > >> I've been looking into some service scripting around HBase lifecycle
> > >> management, etc. and couldn't help but wonder why those were written in
> > >> Ruby
> > >> of all JVM languages? Historical legacy aside, it seems that current
> > >> HBase
> > >> is
> > >> still using JRuby 1.6.5 vs the latest at 1.9+ or perhaps even later.
> > >>
> > >> At any rate, I was wondering if replacing Ruby with a more Java-like
> > >> scripting
> > >> extension (if the scripting-2-Java API bridge is what is indeed desired)
> > >> would be
> > >> of any interest here? An obvious choice would be Groovy
> > >> (http://groovy.codehaus.org/). One of the main reasons behind my
> > proposal
> > >> is
> > >> stack simplification: Bigtop is very actively using Groovy as a
> > scripting
> > >> language of choice to do builds, develop smoke tests, etc. So, it is
> > >> already
> > >> there and guaranteed to be installed as a part of any Bigtop-derived
> > >> Hadoop
> > >> distro. There are other benefits, where, if desired, one can just write
> > >> Java
> > >> code inside of a Groovy script, without a need to learn yet another
> > >> language
> > >> like Ruby.
> > >>
> > >> This is perhaps not of an immediate priority for the community, but if
> > >> there's
> > >> enough interest, I can give it an initial shot to demo what I am
> > >> really
> > >> talking about.
> > >>
> > >> Thoughts?
> > >> --
> > >> Regards,
> > >>   Cos
> > >>
> > >>
> > >
> >
> >
> > --
> > Thanks,
> > Michael Antonov
> >


Ruby scripting in HBase?

2014-08-28 Thread Konstantin Boudnik
Guys,

I've been looking into some service scripting around HBase lifecycle
management, etc. and couldn't help but wonder why those were written in Ruby
of all JVM languages? Historical legacy aside, it seems that current HBase is
still using JRuby 1.6.5 vs the latest at 1.9+ or perhaps even later.

At any rate, I was wondering if replacing Ruby with a more Java-like scripting
extension (if the scripting-2-Java API bridge is what is indeed desired) would be
of any interest here? An obvious choice would be Groovy
(http://groovy.codehaus.org/). One of the main reasons behind my proposal is
stack simplification: Bigtop is very actively using Groovy as a scripting
language of choice to do builds, develop smoke tests, etc. So, it is already
there and guaranteed to be installed as a part of any Bigtop-derived Hadoop
distro. There are other benefits as well: if desired, one can just write Java
code inside a Groovy script, without needing to learn yet another language
like Ruby.
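
Purely as an illustration of that last point (a made-up sketch, not code from
HBase; the table, row and column names are arbitrary), here is plain Java that,
give or take the class wrapper, could be pasted into a .groovy script and run
against the stock 0.98 client API:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class ShellStyleOps {
  public static void main(String[] args) throws Exception {
    // Picks up whatever hbase-site.xml is on the classpath.
    Configuration conf = HBaseConfiguration.create();

    // Roughly what `list` does in the shell today.
    HBaseAdmin admin = new HBaseAdmin(conf);
    HTableDescriptor[] tables = admin.listTables();
    for (int i = 0; i < tables.length; i++) {
      System.out.println(tables[i].getNameAsString());
    }
    admin.close();

    // Roughly what `put 't1', 'r1', 'cf:q', 'v'` does in the shell today.
    HTable table = new HTable(conf, "t1");
    Put put = new Put(Bytes.toBytes("r1"));
    put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
    table.put(put);
    table.close();
  }
}

Nothing in it requires learning Ruby-isms first, which is really the point.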

This is perhaps not of an immediate priority for the community, but if there's
enough interest, I can give it an initial shot to demo what I am really
talking about.

Thoughts?
-- 
Regards,
  Cos



Re: Dropping support for JDK6 in Apache Hadoop

2014-08-19 Thread Konstantin Boudnik
While it sounds like a topic for a different discussion (or list@), do you
think it would be too crazy to skip JDK7 completely and just go to JDK8
directly? I think downstream components like HBase, Spark and so on would be a
huge driving factor to make the same change in Hadoop and other parts of the
ecosystem.
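
Just to illustrate what is at stake language-wise (a contrived example, not
taken from Hadoop or HBase code): stopping at JDK7 buys things like
try-with-resources and the NIO.2 file API, while going straight to JDK8 adds
lambdas and streams on top:

import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;

public class JdkLevelSketch {

  // JDK7-specific: try-with-resources plus the NIO.2 file API.
  static long countLines(String path) throws IOException {
    try (BufferedReader reader =
             Files.newBufferedReader(Paths.get(path), StandardCharsets.UTF_8)) {
      long lines = 0;
      while (reader.readLine() != null) {
        lines++;
      }
      return lines;
    }
  }

  // JDK8-specific: lambdas and streams over ordinary collections.
  static long countLongNames(List<String> names) {
    return names.stream().filter(n -> n.length() > 8).count();
  }

  public static void main(String[] args) {
    System.out.println(countLongNames(Arrays.asList("hbase", "zookeeper", "coordination")));
  }
}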

Thoughts?
  Cos
   
On Tue, Aug 19, 2014 at 11:23AM, Devaraj Das wrote:
> I think we are good on HBase side, at least on the 1.x+ branches. We
> are dropping support for 1.6 from HBase-1.0 onwards.
> 
> On Tue, Aug 19, 2014 at 10:52 AM, Arun C Murthy  wrote:
> > [Apologies for the wide distribution.]
> >
> > Dear HBase/Hive/Pig/Oozie communities,
> >
> >  We, over at Hadoop are considering dropping support for JDK6 this year.
> >
> >  As you may be aware, we just released hadoop-2.5.0 and are now considering 
> > making the next release i.e. hadoop-2.6.0 the *last* release of Apache 
> > Hadoop which supports JDK6. This means, from hadoop-2.7.0 onwards we will 
> > not support JDK6 anymore and we *may* start relying on JDK7-specific apis.
> >
> >  Now, the above is a proposal and we do not want to pull the trigger 
> > without talking to projects downstream - hence the request for your feedback.
> >
> >  Please feel free to forward this to other communities you might deem to be 
> > at risk from this too.
> >
> > thanks,
> > Arun
> >
> >
> > --
> > CONFIDENTIALITY NOTICE
> > NOTICE: This message is intended for the use of the individual or entity to
> > which it is addressed and may contain information that is confidential,
> > privileged and exempt from disclosure under applicable law. If the reader
> > of this message is not the intended recipient, you are hereby notified that
> > any printing, copying, dissemination, distribution, disclosure or
> > forwarding of this communication is strictly prohibited. If you have
> > received this communication in error, please contact the sender immediately
> > and delete it from your system. Thank You.
> 


Phone bridge info [Re: Meetup invitation: Consensus based replication in Hadoop]

2014-07-14 Thread Konstantin Boudnik
To enable remote people to join the conversation, we'll keep open the
following GotoMeeting session. You can join either from your computer or by
using the dial-in number. 

 =
Please join my meeting, Jul 15, 2014 at 12:00 PM PDT.
https://www4.gotomeeting.com/join/357791919

2. Use your microphone and speakers (VoIP) - a headset is recommended. Or,
call in using your telephone.

Dial +1 (805) 309-0014
Access Code: 357-791-919
Audio PIN: Shown after joining the meeting

Meeting ID: 357-791-919
 =

BTW, we have one or two spots left, so if you didn't make your reservation -
now is the time.

See you all tomorrow.
  Cos

On Fri, Jul 11, 2014 at 04:33PM, Konstantin Boudnik wrote:
> One more update: it seems that for people in SF, who oftentimes might not even
> have a car, getting to San Ramon can represent a certain difficulty.
> 
> So we'll do a shuttle pickup from West Dublin BART station if there's at least
> a few people who want to use the option.
> 
> Please respond directly to me if you're interested before end of Sunday, 13th.
> Cheers,
>   Cos
> 
> On Tue, Jul 08, 2014 at 12:23PM, Konstantin Boudnik wrote:
> > All,
> > 
> > Re-sending this announcement in case it fell through over the long weekend
> > when people were away. We still have seats left, so register soon.
> > 
> > Regards,
> >   Cos
> > 
> > On Wed, Jul 02, 2014 at 06:37PM, Konstantin Boudnik wrote:
> > > We'd like to invite you to the 
> > > Consensus based replication in Hadoop: A deep dive
> > > event that we are happy to hold in our San Ramon office on July 15th at 
> > > noon.
> > > We'd like to accommodate as many people as possible, but I think we are 
> > > physically
> > > limited to 30 (+/- a few), so please RSVP to this Eventbrite invitation:
> > > 
> > > https://www.eventbrite.co.uk/e/consensus-based-replication-in-hadoop-a-deep-dive-tickets-12158236613
> > > 
> > > We'll provide pizza and beverages (feel free to express your special 
> > > dietary
> > > requirements if any).
> > > 
> > > See you soon!
> > > With regards,
> > >   Cos
> > > 
> > > On Wed, Jun 18, 2014 at 08:45PM, Konstantin Boudnik wrote:
> > > > Guys,
> > > > 
> > > > In the last a couple of weeks, we had a very good and productive 
> > > > initial round
> > > > of discussions on the JIRAs. I think it is worthy to keep the momentum 
> > > > going
> > > > and have a more detailed conversation. For that, we'd like to host a 
> > > > Hadoop
> > > > developers meetup to get into the bowels of the consensus-based 
> > > > coordination
> > > > implementation for HDFS. The proposed venue is our office in San Ramon, 
> > > > CA.
> > > > 
> > > > Considering that it is already a mid week and the following one looks 
> > > > short
> > > > because of the holidays, how would the week of July 7th looks for yall?
> > > > Tuesday or Thursday look pretty good on our end.
> > > > 
> > > > Please chime in on your preference either here or reach out directly to 
> > > > me.
> > > > Once I have a few RSVPs I will setup an event on Eventbrite or similar.
> > > > 
> > > > Looking forward to your input. Regards,
> > > >   Cos
> > > > 
> > > > On Thu, May 29, 2014 at 02:09PM, Konstantin Shvachko wrote:
> > > > > Hello hadoop developers,
> > > > > 
> > > > > I just opened two jiras proposing to introduce ConsensusNode into 
> > > > > HDFS and
> > > > > a Coordination Engine into Hadoop Common. The latter should benefit 
> > > > > HDFS
> > > > > and  HBase as well as potentially other projects. See HDFS-6469 and
> > > > > HADOOP-10641 for details.
> > > > > The effort is based on the system we built at Wandisco with my 
> > > > > colleagues,
> > > > > who are glad to contribute it to Apache, as quite a few people in the
> > > > > community expressed interest in these ideas and their potential 
> > > > > applications.
> > > > > 
> > > > > We should probably keep technical discussions in the jiras. Here on 
> > > > > the dev
> > > > > list I wanted to touch-base on any logistic issues / questions.
> > > > > - First of all, any ideas and help are very much welcome.
> > > > > - We would like to set up a meetup to discuss this if people are
> > > > > interested. Hadoop Summit next week may be a potential time-place to 
> > > > > meet.
> > > > > Not sure in what form. If not, we can organize one in our San Ramon 
> > > > > office
> > > > > later on.
> > > > > - The effort may take a few months depending on the contributors 
> > > > > schedules.
> > > > > Would it make sense to open a branch for the ConsensusNode work?
> > > > > - APIs and the implementation of the Coordination Engine should be 
> > > > > fairly
> > > > > independent, so it may be reasonable to add it directly to Hadoop 
> > > > > Common
> > > > > trunk.
> > > > > 
> > > > > Thanks,
> > > > > --Konstantin




Re: Meetup invitation: Consensus based replication in Hadoop

2014-07-11 Thread Konstantin Boudnik
One more update: it seema that for ppl in SF, who oftentimes might not even
have a car, getting to San Ramon can represent a certain difficulty.

So we'll do a shuttle pickup from West Dublin BART station if there's at least
a few people who want to use the option.

Please respond directly to me if you're interested before end of Sunday, 13th.
Cheers,
  Cos

On Tue, Jul 08, 2014 at 12:23PM, Konstantin Boudnik wrote:
> All,
> 
> Re-sending this announcement in case it fell through over the long weekend
> when people were away. We still have seats left, so register soon.
> 
> Regards,
>   Cos
> 
> On Wed, Jul 02, 2014 at 06:37PM, Konstantin Boudnik wrote:
> > We'd like to invite you to the 
> > Consensus based replication in Hadoop: A deep dive
> > event that we are happy to hold in our San Ramon office on July 15th at 
> > noon.
> > We'd like to accommodate as many people as possible, but I think we are 
> > physically
> > limited to 30 (+/- a few), so please RSVP to this Eventbrite invitation:
> > 
> > https://www.eventbrite.co.uk/e/consensus-based-replication-in-hadoop-a-deep-dive-tickets-12158236613
> > 
> > We'll provide pizza and beverages (feel free to express your special dietary
> > requirements if any).
> > 
> > See you soon!
> > With regards,
> >   Cos
> > 
> > On Wed, Jun 18, 2014 at 08:45PM, Konstantin Boudnik wrote:
> > > Guys,
> > > 
> > > In the last a couple of weeks, we had a very good and productive initial 
> > > round
> > > of discussions on the JIRAs. I think it is worthy to keep the momentum 
> > > going
> > > and have a more detailed conversation. For that, we'd like to host a 
> > > Hadoop
> > > developers meetup to get into the bowels of the consensus-based 
> > > coordination
> > > implementation for HDFS. The proposed venue is our office in San Ramon, 
> > > CA.
> > > 
> > > Considering that it is already a mid week and the following one looks 
> > > short
> > > because of the holidays, how would the week of July 7th looks for yall?
> > > Tuesday or Thursday look pretty good on our end.
> > > 
> > > Please chime in on your preference either here or reach out directly to me.
> > > Once I have a few RSVPs I will setup an event on Eventbrite or similar.
> > > 
> > > Looking forward to your input. Regards,
> > >   Cos
> > > 
> > > On Thu, May 29, 2014 at 02:09PM, Konstantin Shvachko wrote:
> > > > Hello hadoop developers,
> > > > 
> > > > I just opened two jiras proposing to introduce ConsensusNode into HDFS 
> > > > and
> > > > a Coordination Engine into Hadoop Common. The latter should benefit HDFS
> > > > and  HBase as well as potentially other projects. See HDFS-6469 and
> > > > HADOOP-10641 for details.
> > > > The effort is based on the system we built at Wandisco with my 
> > > > colleagues,
> > > > who are glad to contribute it to Apache, as quite a few people in the
> > > > community expressed interest in these ideas and their potential 
> > > > applications.
> > > > 
> > > > We should probably keep technical discussions in the jiras. Here on the 
> > > > dev
> > > > list I wanted to touch-base on any logistic issues / questions.
> > > > - First of all, any ideas and help are very much welcome.
> > > > - We would like to set up a meetup to discuss this if people are
> > > > interested. Hadoop Summit next week may be a potential time-place to 
> > > > meet.
> > > > Not sure in what form. If not, we can organize one in our San Ramon 
> > > > office
> > > > later on.
> > > > - The effort may take a few months depending on the contributors 
> > > > schedules.
> > > > Would it make sense to open a branch for the ConsensusNode work?
> > > > - APIs and the implementation of the Coordination Engine should be 
> > > > fairly
> > > > independent, so it may be reasonable to add it directly to Hadoop Common
> > > > trunk.
> > > > 
> > > > Thanks,
> > > > --Konstantin




Re: Meetup invitation: Consensus based replication in Hadoop

2014-07-08 Thread Konstantin Boudnik
All,

Re-sending this announcement in case it fell through over the long weekend
when people were away. We still have seats left, so register soon.

Regards,
  Cos

On Wed, Jul 02, 2014 at 06:37PM, Konstantin Boudnik wrote:
> We'd like to invite you to the 
> Consensus based replication in Hadoop: A deep dive
> event that we are happy to hold in our San Ramon office on July 15th at noon.
> We'd like to accommodate as many people as possible, but I think we are 
> physically
> limited to 30 (+/- a few), so please RSVP to this Eventbrite invitation:
> 
> https://www.eventbrite.co.uk/e/consensus-based-replication-in-hadoop-a-deep-dive-tickets-12158236613
> 
> We'll provide pizza and beverages (feel free to express your special dietary
> requirements if any).
> 
> See you soon!
> With regards,
>   Cos
> 
> On Wed, Jun 18, 2014 at 08:45PM, Konstantin Boudnik wrote:
> > Guys,
> > 
> > In the last a couple of weeks, we had a very good and productive initial 
> > round
> > of discussions on the JIRAs. I think it is worthy to keep the momentum going
> > and have a more detailed conversation. For that, we'd like to host a Hadoop
> > developers meetup to get into the bowels of the consensus-based coordination
> > implementation for HDFS. The proposed venue is our office in San Ramon, CA.
> > 
> > Considering that it is already a mid week and the following one looks short
> > because of the holidays, how would the week of July 7th looks for yall?
> > Tuesday or Thursday look pretty good on our end.
> > 
> > Please chime in on your preference either here or reach out directly to me.
> > Once I have a few RSVPs I will setup an event on Eventbrite or similar.
> > 
> > Looking forward to your input. Regards,
> >   Cos
> > 
> > On Thu, May 29, 2014 at 02:09PM, Konstantin Shvachko wrote:
> > > Hello hadoop developers,
> > > 
> > > I just opened two jiras proposing to introduce ConsensusNode into HDFS and
> > > a Coordination Engine into Hadoop Common. The latter should benefit HDFS
> > > and  HBase as well as potentially other projects. See HDFS-6469 and
> > > HADOOP-10641 for details.
> > > The effort is based on the system we built at Wandisco with my colleagues,
> > > who are glad to contribute it to Apache, as quite a few people in the
> > > community expressed interest in these ideas and their potential 
> > > applications.
> > > 
> > > We should probably keep technical discussions in the jiras. Here on the 
> > > dev
> > > list I wanted to touch-base on any logistic issues / questions.
> > > - First of all, any ideas and help are very much welcome.
> > > - We would like to set up a meetup to discuss this if people are
> > > interested. Hadoop Summit next week may be a potential time-place to meet.
> > > Not sure in what form. If not, we can organize one in our San Ramon office
> > > later on.
> > > - The effort may take a few months depending on the contributors 
> > > schedules.
> > > Would it make sense to open a branch for the ConsensusNode work?
> > > - APIs and the implementation of the Coordination Engine should be 
> > > fairly
> > > independent, so it may be reasonable to add it directly to Hadoop Common
> > > trunk.
> > > 
> > > Thanks,
> > > --Konstantin


Meetup invitation: Consensus based replication in Hadoop

2014-07-02 Thread Konstantin Boudnik
We'd like to invite you to the 
Consensus based replication in Hadoop: A deep dive
event that we are happy to hold in our San Ramon office on July 15th at noon.
We'd like to accommodate as many people as possible, but I think we are physically
limited to 30 (+/- a few), so please RSVP to this Eventbrite invitation:

https://www.eventbrite.co.uk/e/consensus-based-replication-in-hadoop-a-deep-dive-tickets-12158236613

We'll provide pizza and beverages (feel free to express your special dietary
requirements if any).

See you soon!
With regards,
  Cos

On Wed, Jun 18, 2014 at 08:45PM, Konstantin Boudnik wrote:
> Guys,
> 
> In the last a couple of weeks, we had a very good and productive initial round
> of discussions on the JIRAs. I think it is worthy to keep the momentum going
> and have a more detailed conversation. For that, we'd like to host a Hadoop
> developers meetup to get into the bowels of the consensus-based coordination
> implementation for HDFS. The proposed venue is our office in San Ramon, CA.
> 
> Considering that it is already a mid week and the following one looks short
> because of the holidays, how would the week of July 7th looks for yall?
> Tuesday or Thursday look pretty good on our end.
> 
> Please chime in on your preference either here or reach out directly to me.
> Once I have a few RSVPs I will setup an event on Eventbrite or similar.
> 
> Looking forward to your input. Regards,
>   Cos
> 
> On Thu, May 29, 2014 at 02:09PM, Konstantin Shvachko wrote:
> > Hello hadoop developers,
> > 
> > I just opened two jiras proposing to introduce ConsensusNode into HDFS and
> > a Coordination Engine into Hadoop Common. The latter should benefit HDFS
> > and  HBase as well as potentially other projects. See HDFS-6469 and
> > HADOOP-10641 for details.
> > The effort is based on the system we built at Wandisco with my colleagues,
> > who are glad to contribute it to Apache, as quite a few people in the
> > community expressed interest in these ideas and their potential applications.
> > 
> > We should probably keep technical discussions in the jiras. Here on the dev
> > list I wanted to touch-base on any logistic issues / questions.
> > - First of all, any ideas and help are very much welcome.
> > - We would like to set up a meetup to discuss this if people are
> > interested. Hadoop Summit next week may be a potential time-place to meet.
> > Not sure in what form. If not, we can organize one in our San Ramon office
> > later on.
> > - The effort may take a few months depending on the contributors schedules.
> > Would it make sense to open a branch for the ConsensusNode work?
> > - APIs and the implementation of the Coordination Engine should be fairly
> > independent, so it may be reasonable to add it directly to Hadoop Common
> > trunk.
> > 
> > Thanks,
> > --Konstantin


[jira] [Created] (HBASE-11456) hbase-testing-util module might use wrong dependency versions

2014-07-02 Thread Konstantin Boudnik (JIRA)
Konstantin Boudnik created HBASE-11456:
--

 Summary: hbase-testing-util module might use wrong dependency 
versions
 Key: HBASE-11456
 URL: https://issues.apache.org/jira/browse/HBASE-11456
 Project: HBase
  Issue Type: Bug
  Components: build
Affects Versions: 0.98.3, 1.0.0
Reporter: Konstantin Boudnik


HBASE-11422 has addressed a number of inconsistencies with test artifact 
dependencies. The only module that has been left out is hbase-testing-util. 

We need to figure out whether there is still a real purpose behind this module
and, if so, fix the transitive dependency issues properly.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: [NOTICE] Branching for 1.0

2014-07-01 Thread Konstantin Boudnik
Out of curiosity mostly, as I am not committing anything, I have a question
about the merging policy. Do I read it correctly that the proposed way is to
do two separate commits to the different branches, which is equivalent to
cherry-picking? If so, as was discussed elsewhere, it will lead to branches
being diverged - at least from the git standpoint.

Cos

On Mon, Jun 30, 2014 at 08:56PM, Enis Söztutar wrote:
> I've pushed the branch, named branch-1:
> 
> https://git-wip-us.apache.org/repos/asf?p=hbase.git;a=shortlog;h=refs/heads/branch-1
> 
> Please do not commit new features to branch-1 without pinging the RM (for
> 1.0 it is me). Bug fixes, and trivial commits can always go in.
> 
> That branch still has 0.99.0-SNAPSHOT as the version number, since next
> expected release from that is 0.99.0. Jenkins build for this branch is
> setup at https://builds.apache.org/view/All/job/HBase-1.0/. It builds with
> latest jdk7. I'll try to stabilize the unit tests for the first RC.
> 
> I've changed the master version as well. It now builds with 2.0.0-SNAPSHOT.
> Exciting!
> 
> Enis
> 
> 
> 
> On Mon, Jun 30, 2014 at 2:34 PM, Kevin O'dell 
> wrote:
> 
> > HURRAY!
> >
> >
> > On Mon, Jun 30, 2014 at 5:30 PM, Enis Söztutar  wrote:
> >
> > > Devs,
> > >
> > > I will be creating the branch named "branch-1" in a couple of hours (see
> > > previous threads [1],[2],[3].
> > >
> > > We have agreed to go with the branching structure that will look like
> > this:
> > >
> > > master (2.0-SNAPSHOT)
> > > |
> > > | branch-1 (1.1-SNAPSHOT)
> > > | |
> > > | | branch-1.0 (1.0.1-SNAPSHOT)
> > > | | |
> > > | | x (1.0.0)
> > > | | |
> > > | |/
> > > | x (0.99.1)
> > > | |
> > > | x (0.99.0)
> > > | |
> > > |/
> > >
> > >
> > > This structure will give us flexibility to have both multiple active 1.x
> > > releases, and have 2.0 patches to be committable. And also we can use
> > > semantic versioning for our releases from now on [4].
> > >
> > > For now, the repo will look like as below, and before 1.0.0, branch-1.0
> > > will be forked from branch-1 and the tree will more like as above.
> > >
> > >
> > > master (2.0-SNAPSHOT)
> > > |
> > > | branch-1 (0.99.0-SNAPSHOT)
> > > | |
> > > | |
> > > |/
> > >
> > >
> > > As a reminder, 0.99.0 release and any more releases in 0.99.x release
> > will
> > > be labeled as "developer releases" as a way to prepare for 1.0.0. 0.99.x
> > > will NOT have any forward / backward compatibility guarantees and not
> > > intended for production. I aim to get the 0.99.0RC0 cut by end of this
> > > week. We should only accept patches relevant to 1.0.0 release to the
> > > branch-1 from now on. On that note, it will be good to re-kindle the
> > > interest in jiras around
> > https://issues.apache.org/jira/browse/HBASE-10856
> > > .
> > > Please feel free to pick up any issue that you consider important to fix
> > > for HBase-1.0 release.
> > >
> > > Jira labels:
> > >  - I've created 2.0.0 label in jira. We now have 0.99.0, 1.0.0, and 2.0.0
> > > labels corresponding to branches in progress. If you commit anything to
> > > master, please mark the jira with 2.0.0 label.
> > >
> > > Jenkins builds:
> > >  - I'll set up a build for HBase-1.0 using JDK-1.7.
> > >
> > >
> > > Your RM, Enis
> > >
> > > [1]
> > >
> > >
> > https://mail-archives.apache.org/mod_mbox/hbase-dev/201406.mbox/%3CCADcMMgEhM1rN4AsazErDAUqXO5fcCbgcRz%2B2nDXo9q2CQL%3D7jg%40mail.gmail.com%3E
> > >
> > > [2]
> > >
> > >
> > https://mail-archives.apache.org/mod_mbox/hbase-dev/201406.mbox/%3CCAMUu0w-qWXnHRewuqo1NpSnD6Kj0aad9LcmyuLJ%3DskQ8Ut9sYw%40mail.gmail.com%3E
> > >
> > > [3]
> > >
> > >
> > https://mail-archives.apache.org/mod_mbox/hbase-dev/201406.mbox/%3CCAMUu0w-3rkvabadvkYtfjiE24yr-wKLx0%2BcXEvcFTyK1VSiDBQ%40mail.gmail.com%3E
> > >
> > > [4] http://semver.org/
> > >
> >
> >
> >
> > --
> > Kevin O'Dell
> > Systems Engineer, Cloudera
> >


Re: Planning to roll the 0.98.4 RC on 6/30

2014-07-01 Thread Konstantin Boudnik
Hi Andrew.

Thanks for keeping this rolling! I would love to have 
HBASE-11422 get in and will finish the patch tomorrow. 

Thanks!
  Cos

On Sun, Jun 29, 2014 at 11:35AM, Andrew Purtell wrote:
> A bisect with more test suite repetitions per step led to 34ae4a9
> (HBASE-11094: Distributed log replay is incompatible for rolling restarts).
> Bisecting again to see if the results are stable.
> 
> 
> On Thu, Jun 26, 2014 at 1:39 PM, Andrew Purtell  wrote:
> 
> > I'm finding that repeated runs of the unit test suite at the head of
> > branch 0.98 intermittently fail. Individual tests do not, so this is likely a
> > lagging shutdown, port/resource conflict, and/or zombie test issue. I am
> > currently bisecting commits on 0.98 branch since the last release in the
> > hope of pinning this down to a single change. Depending on how quickly that
> > can happen, the RC might happen on Monday or not. As things stand at the
> > head of the branch, I'd not +1 the RC given the release criteria I've been
> > using up to now.
> >
> >
> > On Tue, Jun 24, 2014 at 6:09 PM, Andrew Purtell 
> > wrote:
> >
> >> Planning to roll the 0.98.4 RC on Monday 6/30.
> >>
> >> I should have done it this week. Sorry, got a little sidetracked.
> >>
> >
> 
> -- 
> Best regards,
> 
>- Andy
> 
> Problems worthy of attack prove their worth by hitting back. - Piet Hein
> (via Tom White)


Re: hbase-testing-utils [Was: HBase 0.98.x dependency problems]

2014-07-01 Thread Konstantin Boudnik
Thanks Enis. I will look into this to possibly fix the deps resolution
issue. In the meantime, I will complete the patch for 0.98 and master for
everything but this module. Hopefully it can be taken into 0.98.4. Will try
to post it tomorrow.

Thanks guys,
  Cos

On Mon, Jun 30, 2014 at 03:08PM, Enis Söztutar wrote:
> Cos,
> 
> I think that module was introduced to circumvent maven not doing transient
> dependency resolution for test jars, something along those lines. It's only
> job is to list dependencies to test jars from a main (as opposed to test)
> module.
> 
> Enis
> 
> 
> On Fri, Jun 27, 2014 at 12:51 PM, Konstantin Boudnik  wrote:
> 
> > I am looking into this  hbase-testing-util module and don't see any use of
> > it,
> > at least not within the hbase itself. The module doesn't have any source
> > code
> > and doesn't provide any functionality for the Hbase itself.
> >
> > The only relevant ticket I found is HBASE-9699, which only specifies one
> > use
> > of this module, namely
> > https://github.com/elliottneilclark/hbase-downstreamer,
> > that seems to be a "Fake downstream project used figuring what is required
> > when depending on hbase client and minicluster". The explicit use of test
> > artifacts in non-test scopes is a big no-no in my opinion. I don't think
> > we
> > need to repeat the same mistake that some of Hadoop's modules are
> > making.
> >
> > I'd like someone with a better understanding of the subject to shed some
> > light on
> > the adoption of this module, and possible modifications of it. If the whole
> > purpose of it is to provide a set of jar files, then perhaps a simple
> > assembly
> > will suffice?
> >
> > Thanks in advance,
> >   Cos
> >
> > On Fri, Jun 27, 2014 at 12:27PM, Konstantin Boudnik wrote:
> > > Thanks a bunch Ted - it really opened my eyes to what the issue is (you
> > can tell
> > > I am still a noob).
> > >
> > > Cos
> > >
> > > On Fri, Jun 27, 2014 at 05:04AM, Ted Yu wrote:
> > > > Thanks for finding this issue, Cos.
> > > >
> > > > I opened HBASE-11422 and attached a patch there - there is no
> > dependency on
> > > > 2.2.0 if -Dhadoop-two.version=2.3.0 is specified.
> > > >
> > > > Cheers
> > > >
> > > >
> > > > On Thu, Jun 26, 2014 at 9:43 PM, Konstantin Boudnik 
> > wrote:
> > > >
> > > > > Weird... running
> > > > >   mvn install dependency:tree -Dhadoop-two.version=2.3.0
> > > > > keeps pulling in 2.2.0 for test artifacts. An easy way to reproduce
> > it is
> > > > > to
> > > > > wipe out clean hadoop 2.2.0 artifacts from ~/.m2 as well as 0.98.2*
> > (if
> > > > > installed) and then run the command above. Something like this can
> > be seen:
> > > > >
> > > > > [INFO] +- org.apache.hbase:hbase-shell:jar:0.98.2:compile
> > > > > [INFO] |  +- org.apache.hbase:hbase-prefix-tree:jar:0.98.2:runtime
> > > > > [INFO] |  +- com.yammer.metrics:metrics-core:jar:2.1.2:compile
> > > > > [INFO] |  +- org.jruby:jruby-complete:jar:1.6.8:compile
> > > > > [INFO] |  +- org.apache.hadoop:hadoop-client:jar:2.3.0:compile
> > > > > [INFO] |  |  +-
> > > > > org.apache.hadoop:hadoop-mapreduce-client-app:jar:2.3.0:compile
> > > > > [INFO] |  |  +- org.apache.hadoop:hadoop-yarn-api:jar:2.3.0:compile
> > > > > [INFO] |  |  \-
> > > > > org.apache.hadoop:hadoop-mapreduce-client-jobclient:jar:2.3.0:compile
> > > > > [INFO] |  +- org.apache.hadoop:hadoop-hdfs:jar:2.3.0:compile
> > > > > [INFO] |  |  \- commons-daemon:commons-daemon:jar:1.0.13:compile
> > > > > [INFO] |  \-
> > org.apache.hadoop:hadoop-hdfs:test-jar:tests:2.2.0:compile
> > > > >
> > > > > I have fixed a problem with site target, but still see this...
> > > > >
> > > > > Any ideas are very appreciated! Thanks.
> > > > >   Cos
> > > > >
> > > > >
> > > > > On Tue, Jun 17, 2014 at 08:33PM, Konstantin Boudnik wrote:
> > > > > > Thank you very much Ted - that helped beautifully! I think the
> > build
> > > > > system
> > > > > > becomes a product on its own :)
> > > > > >
> > > > > > Regards,
> > > > > >   Cos

hbase-testing-utils [Was: HBase 0.98.x dependency problems]

2014-06-27 Thread Konstantin Boudnik
I am looking into this  hbase-testing-util module and don't see any use of it,
at least not within the hbase itself. The module doesn't have any source code
and doesn't provide any functionality for the Hbase itself.

The only relevant ticket I found is HBASE-9699, which only specifies one use
of this module, namely https://github.com/elliottneilclark/hbase-downstreamer,
that seems to be a "Fake downstream project used figuring what is required
when depending on hbase client and minicluster". The explicit use of test
artifacts in non-test scopes is a big no-no in my opinion. I don't think we
need to repeat the same mistake that some of Hadoop's modules are making.

I'd like someone with a better understanding of the subject to shed some light on
the adoption of this module, and possible modifications of it. If the whole
purpose of it is to provide a set of jar files, then perhaps a simple assembly
will suffice?

Thanks in advance,
  Cos

On Fri, Jun 27, 2014 at 12:27PM, Konstantin Boudnik wrote:
> Thanks a bunch Ted - it really opened my eyes to what the issue is (you can tell
> I am still a noob).
> 
> Cos
> 
> On Fri, Jun 27, 2014 at 05:04AM, Ted Yu wrote:
> > Thanks for finding this issue, Cos.
> > 
> > I opened HBASE-11422 and attached a patch there - there is no dependency on
> > 2.2.0 if -Dhadoop-two.version=2.3.0 is specified.
> > 
> > Cheers
> > 
> > 
> > On Thu, Jun 26, 2014 at 9:43 PM, Konstantin Boudnik  wrote:
> > 
> > > Weird... running
> > >   mvn install dependency:tree -Dhadoop-two.version=2.3.0
> > > keeps pulling in 2.2.0 for test artifacts. An easy way to reproduce it is
> > > to
> > > wipe out clean hadoop 2.2.0 artifacts from ~/.m2 as well as 0.98.2* (if
> > > installed) and then run the command above. Something like this can be 
> > > seen:
> > >
> > > [INFO] +- org.apache.hbase:hbase-shell:jar:0.98.2:compile
> > > [INFO] |  +- org.apache.hbase:hbase-prefix-tree:jar:0.98.2:runtime
> > > [INFO] |  +- com.yammer.metrics:metrics-core:jar:2.1.2:compile
> > > [INFO] |  +- org.jruby:jruby-complete:jar:1.6.8:compile
> > > [INFO] |  +- org.apache.hadoop:hadoop-client:jar:2.3.0:compile
> > > [INFO] |  |  +-
> > > org.apache.hadoop:hadoop-mapreduce-client-app:jar:2.3.0:compile
> > > [INFO] |  |  +- org.apache.hadoop:hadoop-yarn-api:jar:2.3.0:compile
> > > [INFO] |  |  \-
> > > org.apache.hadoop:hadoop-mapreduce-client-jobclient:jar:2.3.0:compile
> > > [INFO] |  +- org.apache.hadoop:hadoop-hdfs:jar:2.3.0:compile
> > > [INFO] |  |  \- commons-daemon:commons-daemon:jar:1.0.13:compile
> > > [INFO] |  \- org.apache.hadoop:hadoop-hdfs:test-jar:tests:2.2.0:compile
> > >
> > > I have fixed a problem with site target, but still see this...
> > >
> > > Any ideas are very appreciated! Thanks.
> > >   Cos
> > >
> > >
> > > On Tue, Jun 17, 2014 at 08:33PM, Konstantin Boudnik wrote:
> > > > Thank you very much Ted - that helped beautifully! I think the build
> > > system
> > > > becomes a product on its own :)
> > > >
> > > > Regards,
> > > >   Cos
> > > >
> > > > On Tue, Jun 17, 2014 at 12:40PM, Ted Yu wrote:
> > > > > Have you tried the following command ?
> > > > >
> > > > > mvn dependency:tree -Dhadoop-two.version=2.3.0
> > > > >
> > > > > The output of the above has 2.3.0 as the dependency.
> > > > >
> > > > > Cheers
> > > > >
> > > > >
> > > > > On Tue, Jun 17, 2014 at 11:40 AM, Konstantin Boudnik 
> > > wrote:
> > > > >
> > > > > > Guys,
> > > > > >
> > > > > > I have noticed an interesting problem with HBase 0.98 line. I have
> > > ended
> > > > > > up with
> > > > > > a crappy Hadoop 2.2.0 artifacts in my local M2 cache - don't ask me
> > > how
> > > > > > that
> > > > > > happen ;( - which causes compilation problems in HBase. And that
> > > brought
> > > > > > this
> > > > > > whole issue into the light. Here's the essence of it:
> > > > > >
> > > > > > while running for hbase-server module
> > > > > >   % mvn dependency:tree  -Dhadoop.version=2.3.0
> > > > > > I am getting a reference to hadoop 2.2.0
> > > > > >
> > > > > > [INFO] +- org.apache.hadoop:hadoop-common:jar:2.2.0:compile
> > > > > > ..
> > > > > > [INFO] +- org.apache.hadoop:hadoop-auth:jar:2.2.0:compile
> > > > > > [INFO] +- org.apache.hadoop:hadoop-client:jar:2.2.0:compile
> > > > > >
> > > > > > This only happens for 0.98. On master the reference goes to 2.4.0
> > > > > > The problem here is that -Dhadoop.version is being seemingly
> > > ignored, which
> > > > > > might lead to binaries with dependencies inconsistent with what it
> > > was
> > > > > > built
> > > > > > against. I am not sure if this had came up before, but certainly
> > > would
> > > > > > appreciate the community's input, if any.
> > > > > >
> > > > > > Thanks a lot!
> > > > > > --
> > > > > > Take care,
> > > > > >   Cos
> > > > > >
> > > > > >
> > >


Re: HBase 0.98.x dependency problems

2014-06-27 Thread Konstantin Boudnik
Thanks a bunch Ted - it really opened my eyes to what the issue is (you can tell
I am still a noob).

Cos

On Fri, Jun 27, 2014 at 05:04AM, Ted Yu wrote:
> Thanks for finding this issue, Cos.
> 
> I opened HBASE-11422 and attached a patch there - there is no dependency on
> 2.2.0 if -Dhadoop-two.version=2.3.0 is specified.
> 
> Cheers
> 
> 
> On Thu, Jun 26, 2014 at 9:43 PM, Konstantin Boudnik  wrote:
> 
> > Weird... running
> >   mvn install dependency:tree -Dhadoop-two.version=2.3.0
> > keeps pulling in 2.2.0 for test artifacts. An easy way to reproduce it is
> > to
> > wipe out clean hadoop 2.2.0 artifacts from ~/.m2 as well as 0.98.2* (if
> > installed) and then run the command above. Something like this can be seen:
> >
> > [INFO] +- org.apache.hbase:hbase-shell:jar:0.98.2:compile
> > [INFO] |  +- org.apache.hbase:hbase-prefix-tree:jar:0.98.2:runtime
> > [INFO] |  +- com.yammer.metrics:metrics-core:jar:2.1.2:compile
> > [INFO] |  +- org.jruby:jruby-complete:jar:1.6.8:compile
> > [INFO] |  +- org.apache.hadoop:hadoop-client:jar:2.3.0:compile
> > [INFO] |  |  +-
> > org.apache.hadoop:hadoop-mapreduce-client-app:jar:2.3.0:compile
> > [INFO] |  |  +- org.apache.hadoop:hadoop-yarn-api:jar:2.3.0:compile
> > [INFO] |  |  \-
> > org.apache.hadoop:hadoop-mapreduce-client-jobclient:jar:2.3.0:compile
> > [INFO] |  +- org.apache.hadoop:hadoop-hdfs:jar:2.3.0:compile
> > [INFO] |  |  \- commons-daemon:commons-daemon:jar:1.0.13:compile
> > [INFO] |  \- org.apache.hadoop:hadoop-hdfs:test-jar:tests:2.2.0:compile
> >
> > I have fixed a problem with site target, but still see this...
> >
> > Any ideas are very appreciated! Thanks.
> >   Cos
> >
> >
> > On Tue, Jun 17, 2014 at 08:33PM, Konstantin Boudnik wrote:
> > > Thank you very much Ted - that helped beautifully! I think the build
> > system
> > > becomes a product on its own :)
> > >
> > > Regards,
> > >   Cos
> > >
> > > On Tue, Jun 17, 2014 at 12:40PM, Ted Yu wrote:
> > > > Have you tried the following command ?
> > > >
> > > > mvn dependency:tree -Dhadoop-two.version=2.3.0
> > > >
> > > > The output of the above has 2.3.0 as the dependency.
> > > >
> > > > Cheers
> > > >
> > > >
> > > > On Tue, Jun 17, 2014 at 11:40 AM, Konstantin Boudnik 
> > wrote:
> > > >
> > > > > Guys,
> > > > >
> > > > > I have noticed an interesting problem with HBase 0.98 line. I have
> > ended
> > > > > up with
> > > > > a crappy Hadoop 2.2.0 artifacts in my local M2 cache - don't ask me
> > how
> > > > > that
> > > > > happen ;( - which causes compilation problems in HBase. And that
> > brought
> > > > > this
> > > > > whole issue into the light. Here's the essence of it:
> > > > >
> > > > > while running for hbase-server module
> > > > >   % mvn dependency:tree  -Dhadoop.version=2.3.0
> > > > > I am getting a reference to hadoop 2.2.0
> > > > >
> > > > > [INFO] +- org.apache.hadoop:hadoop-common:jar:2.2.0:compile
> > > > > ..
> > > > > [INFO] +- org.apache.hadoop:hadoop-auth:jar:2.2.0:compile
> > > > > [INFO] +- org.apache.hadoop:hadoop-client:jar:2.2.0:compile
> > > > >
> > > > > This only happens for 0.98. On master the reference goes to 2.4.0
> > > > > The problem here is that -Dhadoop.version is being seemingly
> > ignored, which
> > > > > might lead to binaries with dependencies inconsistent with what it
> > was
> > > > > built
> > > > > against. I am not sure if this had came up before, but certainly
> > would
> > > > > appreciate the community's input, if any.
> > > > >
> > > > > Thanks a lot!
> > > > > --
> > > > > Take care,
> > > > >   Cos
> > > > >
> > > > >
> >


Re: HBase 0.98.x dependency problems

2014-06-26 Thread Konstantin Boudnik
Weird... running 
  mvn install dependency:tree -Dhadoop-two.version=2.3.0
keeps pulling in 2.2.0 for test artifacts. An easy way to reproduce it is to
wipe out clean hadoop 2.2.0 artifacts from ~/.m2 as well as 0.98.2* (if
installed) and then run the command above. Something like this can be seen:

[INFO] +- org.apache.hbase:hbase-shell:jar:0.98.2:compile
[INFO] |  +- org.apache.hbase:hbase-prefix-tree:jar:0.98.2:runtime
[INFO] |  +- com.yammer.metrics:metrics-core:jar:2.1.2:compile
[INFO] |  +- org.jruby:jruby-complete:jar:1.6.8:compile
[INFO] |  +- org.apache.hadoop:hadoop-client:jar:2.3.0:compile
[INFO] |  |  +- org.apache.hadoop:hadoop-mapreduce-client-app:jar:2.3.0:compile
[INFO] |  |  +- org.apache.hadoop:hadoop-yarn-api:jar:2.3.0:compile
[INFO] |  |  \- 
org.apache.hadoop:hadoop-mapreduce-client-jobclient:jar:2.3.0:compile
[INFO] |  +- org.apache.hadoop:hadoop-hdfs:jar:2.3.0:compile
[INFO] |  |  \- commons-daemon:commons-daemon:jar:1.0.13:compile
[INFO] |  \- org.apache.hadoop:hadoop-hdfs:test-jar:tests:2.2.0:compile

I have fixed a problem with site target, but still see this...

Any ideas are very appreciated! Thanks.
  Cos


On Tue, Jun 17, 2014 at 08:33PM, Konstantin Boudnik wrote:
> Thank you very much Ted - that helped beautifully! I think the build system
> becomes a product on its own :)
> 
> Regards,
>   Cos
> 
> On Tue, Jun 17, 2014 at 12:40PM, Ted Yu wrote:
> > Have you tried the following command ?
> > 
> > mvn dependency:tree -Dhadoop-two.version=2.3.0
> > 
> > The output of the above has 2.3.0 as the dependency.
> > 
> > Cheers
> > 
> > 
> > On Tue, Jun 17, 2014 at 11:40 AM, Konstantin Boudnik  
> > wrote:
> > 
> > > Guys,
> > >
> > > I have noticed an interesting problem with HBase 0.98 line. I have ended
> > > up with
> > > a crappy Hadoop 2.2.0 artifacts in my local M2 cache - don't ask me how
> > > that
> > > happen ;( - which causes compilation problems in HBase. And that brought
> > > this
> > > whole issue into the light. Here's the essence of it:
> > >
> > > while running for hbase-server module
> > >   % mvn dependency:tree  -Dhadoop.version=2.3.0
> > > I am getting a reference to hadoop 2.2.0
> > >
> > > [INFO] +- org.apache.hadoop:hadoop-common:jar:2.2.0:compile
> > > ..
> > > [INFO] +- org.apache.hadoop:hadoop-auth:jar:2.2.0:compile
> > > [INFO] +- org.apache.hadoop:hadoop-client:jar:2.2.0:compile
> > >
> > > This only happens for 0.98. On master the reference goes to 2.4.0
> > > The problem here is that -Dhadoop.version is being seemingly ignored, 
> > > which
> > > might lead to binaries with dependencies inconsistent with what it was
> > > built
> > > against. I am not sure if this had came up before, but certainly would
> > > appreciate the community's input, if any.
> > >
> > > Thanks a lot!
> > > --
> > > Take care,
> > >   Cos
> > >
> > >


Re: jdk 1.7 & trunk

2014-06-26 Thread Konstantin Boudnik
That's certainly a possibility. With JDK6U45 everything works. 
And I have stepped on HBASE-11418 ;(

On Thu, Jun 26, 2014 at 11:52AM, Andrew Purtell wrote:
> 0.98 compiles using the recent version of Java 6, 6u45. I think there was a
> compiler bug wrt type erasure introduced somewhere in the middle of that
> lineage that could still be in OpenJDK. In any case, please see
> https://issues.apache.org/jira/browse/BIGTOP-1110?focusedCommentId=14044099&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14044099
> 
> 
> 
> On Wed, Jun 25, 2014 at 2:39 PM, Andrew Purtell 
> wrote:
> 
> > Try compiling with Oracle Java 6. Same result ?
> >
> > > On Jun 25, 2014, at 12:54 PM, Konstantin Boudnik  wrote:
> > >
> > > Back in time of JDK6 GA - when I was still working in Sun's JDK team -
> > we had
> > > companies sitting on 1.4 and paying _a lot_ of money for Sun support of
> > it.
> > > So...
> > >
> > > That said, I think the move to JDK7 has pretty much happened already for
> > > HBase, because e.g. 0.98.2 can not be built with JDK6, as we see
> > >  https://issues.apache.org/jira/browse/HBASE-8479
> > > in Bigtop CI.
> > >
> > > Cos
> > >
> > >> On Wed, Jun 25, 2014 at 10:29AM, Andrew Purtell wrote:
> > >> Er, I mean no user should be running on a runtime less than 7, they are
> > all
> > >> EOL...
> > >>
> > >>
> > >> On Wed, Jun 25, 2014 at 10:28 AM, Andrew Purtell 
> > >> wrote:
> > >>
> > >>>
> > >>> On Wed, Jun 25, 2014 at 9:15 AM, Nicolas Liochon 
> > >>> wrote:
> > >>>
> > >>>> Should we be 1.7 only for trunk / 1.0?
> > >>>> This would mean using the 1.7 features.
> > >>>
> > >>> I think this is prudent. Hadoop common is having a similar discussion
> > and
> > >>> I think converging on consensus that they would be ok with their trunk
> > >>> including features only available in 7.
> > >>>
> > >>>
> > >>>> What about .98?
> > >>>
> > >>> ​I don't think this is an option, because although no user should be
> > >>> running with a 7 runtime (and in fact performance conscious users
> > should be
> > >>> looking hard at 8), vendors will still have to support customers on 6.
> > ​
> > >>>
> > >>> --
> > >>> Best regards,
> > >>>
> > >>>   - Andy
> > >>>
> > >>> Problems worthy of attack prove their worth by hitting back. - Piet
> > Hein
> > >>> (via Tom White)
> > >>
> > >>
> > >>
> > >> --
> > >> Best regards,
> > >>
> > >>   - Andy
> > >>
> > >> Problems worthy of attack prove their worth by hitting back. - Piet Hein
> > >> (via Tom White)
> >


[jira] [Created] (HBASE-11418) build target {{site}} doesn't respect hadoop-two.version property

2014-06-26 Thread Konstantin Boudnik (JIRA)
Konstantin Boudnik created HBASE-11418:
--

 Summary: build target {{site}} doesn't respect hadoop-two.version 
property
 Key: HBASE-11418
 URL: https://issues.apache.org/jira/browse/HBASE-11418
 Project: HBase
  Issue Type: Bug
  Components: build
Affects Versions: 0.98.3
Reporter: Konstantin Boudnik
Assignee: Konstantin Boudnik
 Fix For: 0.98.4


Running the build with {{mvn clean site -Dhadoop-two.version=2.3.0}}, I see that it 
pulls in Hadoop 2.2.0 dependencies (when building 0.98.x) and Hadoop 2.3.0 on 
the master. The reason for the behavior is a bug in the configuration of the 
{{userapi}} reportSet, namely:
{code}
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>2.3.0</version>
  </dependency>
{code}
(the snippet is from the master branch)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: jdk 1.7 & trunk

2014-06-25 Thread Konstantin Boudnik
Back at the time of JDK6 GA - when I was still working in Sun's JDK team - we had
companies sitting on 1.4 and paying _a lot_ of money for Sun support of it.
So...

That said, I think the move to JDK7 has pretty much happened already for
HBase, because e.g. 0.98.2 can not be built with JDK6, as we see
  https://issues.apache.org/jira/browse/HBASE-8479
in Bigtop CI.

Cos

On Wed, Jun 25, 2014 at 10:29AM, Andrew Purtell wrote:
> Er, I mean no user should be running on a runtime less than 7, they are all
> EOL...
> 
> 
> On Wed, Jun 25, 2014 at 10:28 AM, Andrew Purtell 
> wrote:
> 
> >
> > On Wed, Jun 25, 2014 at 9:15 AM, Nicolas Liochon 
> > wrote:
> >
> >> Should we be 1.7 only for trunk / 1.0?
> >> This would mean using the 1.7 features.
> >>
> >
> > I think this is prudent. Hadoop common is having a similar discussion and
> > I think converging on consensus that they would be ok with their trunk
> > including features only available in 7.
> >
> >
> >> What about .98?
> >>
> >
> > ​I don't think this is an option, because although no user should be
> > running with a 7 runtime (and in fact performance conscious users should be
> > looking hard at 8), vendors will still have to support customers on 6. ​
> >
> > --
> > Best regards,
> >
> >- Andy
> >
> > Problems worthy of attack prove their worth by hitting back. - Piet Hein
> > (via Tom White)
> >
> 
> 
> 
> -- 
> Best regards,
> 
>- Andy
> 
> Problems worthy of attack prove their worth by hitting back. - Piet Hein
> (via Tom White)


Re: Introducing ConsensusNode and a Coordination Engine

2014-06-22 Thread Konstantin Boudnik
They are actually listed in the first email on this thread. Here they are:
HADOOP-10641
HDFS-6469

Cos

On Wed, Jun 18, 2014 at 10:30PM, Sujeet Varakhedi wrote:
> Can you point me to the JIRA's
> 
> Sujeet
> 
> 
> On Wed, Jun 18, 2014 at 8:45 PM, Konstantin Boudnik  wrote:
> 
> > Guys,
> >
> > In the last a couple of weeks, we had a very good and productive initial
> > round
> > of discussions on the JIRAs. I think it is worthy to keep the momentum
> > going
> > and have a more detailed conversation. For that, we'd like to host a Hadoop
> > developers meetup to get into the bowels of the consensus-based coordination
> > implementation for HDFS. The proposed venue is our office in San Ramon, CA.
> >
> > Considering that it is already a mid week and the following one looks short
> > because of the holidays, how would the week of July 7th looks for yall?
> > Tuesday or Thursday look pretty good on our end.
> >
> > Please chime in on your preference either here or reach out directly to me.
> > Once I have a few RSVPs I will setup an event on Eventbrite or similar.
> >
> > Looking forward to your input. Regards,
> >   Cos
> >
> > On Thu, May 29, 2014 at 02:09PM, Konstantin Shvachko wrote:
> > > Hello hadoop developers,
> > >
> > > I just opened two jiras proposing to introduce ConsensusNode into HDFS
> > and
> > > a Coordination Engine into Hadoop Common. The latter should benefit HDFS
> > > and  HBase as well as potentially other projects. See HDFS-6469 and
> > > HADOOP-10641 for details.
> > > The effort is based on the system we built at Wandisco with my
> > colleagues,
> > > who are glad to contribute it to Apache, as quite a few people in the
> > > community expressed interest in these ideas and their potential
> > applications.
> > >
> > > We should probably keep technical discussions in the jiras. Here on the
> > dev
> > > list I wanted to touch-base on any logistic issues / questions.
> > > - First of all, any ideas and help are very much welcome.
> > > - We would like to set up a meetup to discuss this if people are
> > > interested. Hadoop Summit next week may be a potential time-place to
> > meet.
> > > Not sure in what form. If not, we can organize one in our San Ramon
> > office
> > > later on.
> > > - The effort may take a few months depending on the contributors
> > schedules.
> > > Would it make sense to open a branch for the ConsensusNode work?
> > > - APIs and the implementation of the Coordination Engine should be a
> > fairly
> > > independent, so it may be reasonable to add it directly to Hadoop Common
> > > trunk.
> > >
> > > Thanks,
> > > --Konstantin
> >


Re: Introducing ConsensusNode and a Coordination Engine

2014-06-18 Thread Konstantin Boudnik
Guys,

In the last couple of weeks we had a very good and productive initial round
of discussions on the JIRAs. I think it is worth keeping the momentum going
and having a more detailed conversation. For that, we'd like to host a Hadoop
developers meetup to get into the bowels of the consensus-based coordination
implementation for HDFS. The proposed venue is our office in San Ramon, CA.

Considering that it is already mid-week and the following one looks short
because of the holidays, how would the week of July 7th look for y'all?
Tuesday or Thursday look pretty good on our end.

Please chime in with your preference either here or reach out directly to me.
Once I have a few RSVPs I will set up an event on Eventbrite or similar.

Looking forward to your input. Regards,
  Cos

On Thu, May 29, 2014 at 02:09PM, Konstantin Shvachko wrote:
> Hello hadoop developers,
> 
> I just opened two jiras proposing to introduce ConsensusNode into HDFS and
> a Coordination Engine into Hadoop Common. The latter should benefit HDFS
> and  HBase as well as potentially other projects. See HDFS-6469 and
> HADOOP-10641 for details.
> The effort is based on the system we built at Wandisco with my colleagues,
> who are glad to contribute it to Apache, as quite a few people in the
> community expressed interest in this ideas and their potential applications.
> 
> We should probably keep technical discussions in the jiras. Here on the dev
> list I wanted to touch-base on any logistic issues / questions.
> - First of all, any ideas and help are very much welcome.
> - We would like to set up a meetup to discuss this if people are
> interested. Hadoop Summit next week may be a potential time-place to meet.
> Not sure in what form. If not, we can organize one in our San Ramon office
> later on.
> - The effort may take a few months depending on the contributors schedules.
> Would it make sense to open a branch for the ConsensusNode work?
> - APIs and the implementation of the Coordination Engine should be a fairly
> independent, so it may be reasonable to add it directly to Hadoop Common
> trunk.
> 
> Thanks,
> --Konstantin


Re: HBase 0.98.x dependency problems

2014-06-17 Thread Konstantin Boudnik
Thank you very much Ted - that helped beautifully! I think the build system
is becoming a product of its own :)
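
For anyone else tripping over this: per Ted's tip below, 0.98 keys its Hadoop 2
dependency off the hadoop-two.version property rather than hadoop.version, so
the sanity check looks roughly like this (a sketch - the grep just trims the
noise):

  % mvn dependency:tree -Dhadoop-two.version=2.3.0 | grep hadoop-common
  [INFO] +- org.apache.hadoop:hadoop-common:jar:2.3.0:compile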

Regards,
  Cos

On Tue, Jun 17, 2014 at 12:40PM, Ted Yu wrote:
> Have you tried the following command ?
> 
> mvn dependency:tree -Dhadoop-two.version=2.3.0
> 
> The output of the above has 2.3.0 as the dependency.
> 
> Cheers
> 
> 
> On Tue, Jun 17, 2014 at 11:40 AM, Konstantin Boudnik  wrote:
> 
> > Guys,
> >
> > I have noticed an interesting problem with HBase 0.98 line. I have ended
> > up with
> > a crappy Hadoop 2.2.0 artifacts in my local M2 cache - don't ask me how
> > that
> > happen ;( - which causes compilation problems in HBase. And that brought
> > this
> > whole issue into the light. Here's the essence of it:
> >
> > while running for hbase-server module
> >   % mvn dependency:tree  -Dhadoop.version=2.3.0
> > I am getting a reference to hadoop 2.2.0
> >
> > [INFO] +- org.apache.hadoop:hadoop-common:jar:2.2.0:compile
> > ..
> > [INFO] +- org.apache.hadoop:hadoop-auth:jar:2.2.0:compile
> > [INFO] +- org.apache.hadoop:hadoop-client:jar:2.2.0:compile
> >
> > This only happens for 0.98. On master the reference goes to 2.4.0
> > The problem here is that -Dhadoop.version is being seemingly ignored, which
> > might lead to binaries with dependencies inconsistent with what it was
> > built
> > against. I am not sure if this had came up before, but certainly would
> > appreciate the community's input, if any.
> >
> > Thanks a lot!
> > --
> > Take care,
> >   Cos
> >
> >


HBase 0.98.x dependency problems

2014-06-17 Thread Konstantin Boudnik
Guys,

I have noticed an interesting problem with the HBase 0.98 line. I have ended up
with crappy Hadoop 2.2.0 artifacts in my local M2 cache - don't ask me how that
happened ;( - which causes compilation problems in HBase. And that brought this
whole issue to light. Here's the essence of it:

While running the following for the hbase-server module
  % mvn dependency:tree  -Dhadoop.version=2.3.0
I am getting a reference to Hadoop 2.2.0:

[INFO] +- org.apache.hadoop:hadoop-common:jar:2.2.0:compile
..
[INFO] +- org.apache.hadoop:hadoop-auth:jar:2.2.0:compile
[INFO] +- org.apache.hadoop:hadoop-client:jar:2.2.0:compile

This only happens for 0.98; on master the reference goes to 2.4.0.
The problem here is that -Dhadoop.version is seemingly being ignored, which
might lead to binaries with dependencies inconsistent with what they were built
against. I am not sure if this has come up before, but I would certainly
appreciate the community's input, if any.
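
In case it is merely local-cache corruption, the quickest way I know to rule
that out is to drop the suspect artifacts and let Maven re-resolve them (paths
below are from my box, adjust as needed):

  % rm -rf ~/.m2/repository/org/apache/hadoop/hadoop-common/2.2.0
  % mvn clean install -DskipTests

The -Dhadoop.version question above stands either way.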

Thanks a lot!
-- 
Take care,
  Cos



Re: Thinking of branching for 1.0 by June 23

2014-06-11 Thread Konstantin Boudnik
I would be happy to provide an overview of a variation of this model that I've
been using across different projects and companies. Please let me know if one
is needed.

Thanks,
  Cos

On Tue, Jun 10, 2014 at 06:24PM, Enis Söztutar wrote:
> I think we can accept the patches for the zk abstraction. Let me link the
> parent issues together.
> 
> For the git workflow, it would be good to hear from somebody with
> experience on different models (esp with the accumulo model).
> 
> Enis
> 
> 
> On Tue, Jun 10, 2014 at 4:41 PM, Ted Yu  wrote:
> 
> > I like Cos' idea.
> >
> > Cheers
> >
> >
> > On Tue, Jun 10, 2014 at 4:34 PM, Konstantin Boudnik 
> > wrote:
> >
> > > I am +1 on b) because it will naturally allow for a continuation of 1.x
> > > development.
> > >
> > > In all honesty, I found that notorious git branching model works
> > > perfectly for such situations. One thing to mention: unlike
> > > http://nvie.com/posts/a-successful-git-branching-model/ it'll force a
> > > significant number of cherry-picking from the master (and SHA1 changes on
> > > such
> > > commits).
> > > Perhaps it might be a good time to reconsider what has been working ok
> > for
> > > Hadoop on SVN and look into something that's more natural for Git
> > > branching?
> > >
> > > Cos
> > >
> > > On Tue, Jun 10, 2014 at 07:16PM, Jean-Marc Spaggiari wrote:
> > > > For people voting, can you please put small comments regarding why you
> > > > prefer a solution versus the other one? Just for knowledge sharing...
> > > >
> > > > Thanks,
> > > >
> > > > JM
> > > >
> > > >
> > > > 2014-06-10 19:05 GMT-04:00 Mikhail Antonov :
> > > >
> > > > > I think jiras on ZK abstraction can still get committed (I'll make
> > > sure to
> > > > > have all non-trivial patches posted on RB for discussion to make sure
> > > we
> > > > > don't accidentally introduce any instability).
> > > > >
> > > > > On jiras.
> > > > >
> > > > > Under HBASE-10909:
> > > > >  -  HBASE-11069 (region merge transaction) is close to completion,
> > just
> > > > > needs rebasing/merging, so we should have the new patch soon
> > > > >  -  HBASE-11072 (WAL splitting) - there's progress going on here, I
> > > think
> > > > > we're going to have patch up for reviews pretty soon.
> > > > >  -  HBASE-11073 (abstract Zk Watcher and listeners) - should have
> > first
> > > > > patch up for review in a week or two
> > > > >
> > > > > Besides that, we should have HBASE-4495 (get rid of CatalogTracker)
> > > too.
> > > > >
> > > > > Further steps on abstraction (involving changing/simplifying the way
> > we
> > > > > keep state in ZK) require coordination engine (as described in
> > > consensus
> > > > > design doc), which has been proposed in hadoop-common (for the time
> > > being I
> > > > > guess we can add this engine directly to HBase to speedup
> > > development?).
> > > > >
> > > > > Mikhail
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > 2014-06-10 15:46 GMT-07:00 Stack :
> > > > >
> > > > > > +1 on option b)
> > > > > >
> > > > > > On Tue, Jun 10, 2014 at 3:28 PM, Konstantin Boudnik <
> > c...@apache.org>
> > > > > > wrote:
> > > > > >
> > > > > > > +1 on the #2.
> > > > > > >
> > > > > > > One question though: do you envision that the work around
> > > coordinated
> > > > > > > replication won't be able to go into branch-1 anymore?
> > > > > > >
> > > > > >
> > > > > > Its not done and it is far along with Mikhail making good progress.
> > > I'd
> > > > > be
> > > > > > up for keeping up reviews and commit (if thats OK w/ you Mr. RM).
> > > > > >
> > > > > > How much you think could make 1.0 Cos/Mikhail?  Which issues.
> > > > > >
> > > > > > St.Ack
> > > > > >
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Thanks,
> > > > > Michael Antonov
> > > > >
> > >
> >


Re: Anyone to have a quick look at 11313?

2014-06-10 Thread Konstantin Boudnik
Sorry, missed the context ;)

On Tue, Jun 10, 2014 at 08:27PM, Jean-Marc Spaggiari wrote:
> Hi Konstantin, I'm talking about this one:
> https://issues.apache.org/jira/browse/HBASE-11313
> 
> Thanks,
> 
> JM
> 
> 
> 2014-06-10 20:14 GMT-04:00 Konstantin Boudnik :
> 
> > Seems to be a private one? Unless I am not in the correct group or
> > something...
> >
> > On Tue, Jun 10, 2014 at 07:58PM, Jean-Marc Spaggiari wrote:
> > > Pretty trivial patch.
> > >
> > > Thanks,
> > >
> > > JM
> >


Re: Anyone to have a quick look at 11313?

2014-06-10 Thread Konstantin Boudnik
Seems to be a private one? Unless I am not in the correct group or
something...

On Tue, Jun 10, 2014 at 07:58PM, Jean-Marc Spaggiari wrote:
> Pretty trivial patch.
> 
> Thanks,
> 
> JM


Re: Thinking of branching for 1.0 by June 23

2014-06-10 Thread Konstantin Boudnik
I am +1 on b) because it will naturally allow for a continuation of 1.x
development.

In all honesty, I have found that the notorious git branching model works
perfectly for such situations. One thing to mention: unlike
http://nvie.com/posts/a-successful-git-branching-model/ it'll force a
significant amount of cherry-picking from master (and SHA1 changes on such
commits).
Perhaps it might be a good time to reconsider what has been working OK for
Hadoop on SVN and look into something that's more natural for Git branching?
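
By "cherry-picking" I mean the usual commit-to-master-first flow, roughly as
below - just a sketch, the JIRA id and branch name are placeholders:

  % git checkout master
  % git commit -am "HBASE-XXXXX fix something"
  % git checkout <release-branch>                 # e.g. branch-1, or whatever we settle on
  % git cherry-pick <sha1-of-the-master-commit>   # re-applied as a new commit, hence a new SHA1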

Cos

On Tue, Jun 10, 2014 at 07:16PM, Jean-Marc Spaggiari wrote:
> For people voting, can you please put small comments regarding why you
> prefer a solution versus the other one? Just for knowledge sharing...
> 
> Thanks,
> 
> JM
> 
> 
> 2014-06-10 19:05 GMT-04:00 Mikhail Antonov :
> 
> > I think jiras on ZK abstraction can still get committed (I'll make sure to
> > have all non-trivial patches posted on RB for discussion to make sure we
> > don't accidentally introduce any instability).
> >
> > On jiras.
> >
> > Under HBASE-10909:
> >  -  HBASE-11069 (region merge transaction) is close to completion, just
> > needs rebasing/merging, so we should have the new patch soon
> >  -  HBASE-11072 (WAL splitting) - there's progress going on here, I think
> > we're going to have patch up for reviews pretty soon.
> >  -  HBASE-11073 (abstract Zk Watcher and listeners) - should have first
> > patch up for review in a week or two
> >
> > Besides that, we should have HBASE-4495 (get rid of CatalogTracker) too.
> >
> > Further steps on abstraction (involving changing/simplifying the way we
> > keep state in ZK) require coordination engine (as described in consensus
> > design doc), which has been proposed in hadoop-common (for the time being I
> > guess we can add this engine directly to HBase to speedup development?).
> >
> > Mikhail
> >
> >
> >
> >
> > 2014-06-10 15:46 GMT-07:00 Stack :
> >
> > > +1 on option b)
> > >
> > > On Tue, Jun 10, 2014 at 3:28 PM, Konstantin Boudnik 
> > > wrote:
> > >
> > > > +1 on the #2.
> > > >
> > > > One question though: do you envision that the work around coordinated
> > > > replication won't be able to go into branch-1 anymore?
> > > >
> > >
> > > Its not done and it is far along with Mikhail making good progress. I'd
> > be
> > > up for keeping up reviews and commit (if thats OK w/ you Mr. RM).
> > >
> > > How much you think could make 1.0 Cos/Mikhail?  Which issues.
> > >
> > > St.Ack
> > >
> >
> >
> >
> > --
> > Thanks,
> > Michael Antonov
> >


Re: Thinking of branching for 1.0 by June 23

2014-06-10 Thread Konstantin Boudnik
+1 on the #2.

One question though: do you envision that the work around coordinated
replication won't be able to go into branch-1 anymore?
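
(For concreteness, my reading of option b) below in plain git terms - purely a
sketch of the mechanics, names as in your mail:

  % git checkout -b branch-1 master       # all 1.x work and the 0.99.x/1.0.0 RCs come off of this
  % git checkout -b branch-1.0 branch-1   # later, once 1.0.0 is being stabilized
  # master itself then moves on as the 2.0-SNAPSHOT line
)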

Thanks,
  Cos

On Tue, Jun 10, 2014 at 02:07PM, Enis Söztutar wrote:
> Hi,
> 
> As per previous threads, I am planning to branch out 1.0 at Jun 23, Mon. It
> will include the changes committed as of that date. HBASE-10070 merge
> should be completed as well. We can have the meta-based assignment as well.
> Branching now, versus after we do 0.99.0 release gives us the chance to
> stabilize the branch for the release.
> 
> I am aiming for having a 0.99.0RC0 out by end of June, do one or more
> 0.99.x releases in July and having 1.0.0RC0 by Aug or so.
> 
> I think for branching we can do two approaches:
> 
> a) create a branch named "1.0". Change the master branch version to be
> 1.1-SNAPSHOT. This implies that master branch cannot have any breaking
> changes until we branch for 2.0. If we do not need it, this will be
> simpler.
> 
> Something like this:
> master (1.1-SNAPSHOT)
> |
> | 1.0 (1.0-SNAPSHOT)
> | |
> | x (1.0.0)
> | |
> | x (0.99.1)
> | |
> | x (0.99.0)
> | |
> |/
> 
> b) create a branch named "branch-1". Change the master branch version to be
> 2.0-SNAPSHOT. This implies that all patches that are intended for 1.x
> series will go to branch-1, and all 1.x releases will be branched off of
> branch-1. Also branch-1.0 will be branched from branch-1.
> 
> Something like this:
> 
> master (2.0-SNAPSHOT)
> |
> | branch-1 (1.1-SNAPSHOT)
> | |
> | | branch-1.0 (1.0.1-SNAPSHOT)
> | | |
> | | x (1.0.0)
> | | |
> | |/
> | x (0.99.1)
> | |
> | x (0.99.0)
> | |
> |/
> 
> In both of these plans, 0.99.xRCs will come out of the branch for 1.
> 
> After we branched, we will only accept patches relevant to 1.0 release. In
> that respect, it won't be a feature freeze, but we will selectively
> backport features that we think would be needed for 1.0. Main candidates
> are the subtasks / linked issues in
> https://issues.apache.org/jira/browse/HBASE-10856. Some of the issues there
> still lacks some love, so I am not sure whether they will be done in time.
> If not, we do not have any choice but to kick them out by the time 1.0
> comes. Everything still goes to master first.
> 
> Let me know what you guys think. Any alternate proposals? Which one we
> should do? I am more in favor of b) which will be the most flexible route
> to having true semantic versioning for the cost of added branch complexity.
> a) should be fine as well if we are fine with lazy branching for 2.0.
> 
> Enis


Re: DISCUSS: We need a mascot, a totem HBASE-4920

2014-06-08 Thread Konstantin Boudnik
Either page 2 or 4. The latter looks cute and smart ;)
  Cos

On Fri, Jun 06, 2014 at 11:24AM, Stack wrote:
> Here is a new round:
> https://issues.apache.org/jira/secure/attachment/12648671/Apache_HBase_Orca_Logo_round5.pdf
> 
> The pacific northwest hockey orca is missing.  Scream if ye want it.
> 
> Stick comments up in the issue HBASE-4920
> 
> St.Ack
> 
> 
> On Wed, May 21, 2014 at 9:31 AM, Stack  wrote:
> 
> > Thanks JMS.  Took the comments here and up in the issue back to our
> > graphics hackers.  They are running another iteration.
> > St.Ack
> >
> >
> > On Thu, May 15, 2014 at 8:12 AM, Jean-Marc Spaggiari <
> > jean-m...@spaggiari.org> wrote:
> >
> >> My 2 ¢... Teeth mean aggressiveness. For me it gives a negative sense.  I
> >> tend to prefer something like
> >>
> >> http://4.bp.blogspot.com/_HBjr0PcdZW4/TIge5Uok9eI/A8Q/0AdMNtuhRaY/s400/pm006-orca7.png
> >> .
> >>
> >> JM
> >>
> >>
> >> 2014-05-13 20:57 GMT-04:00 Stack :
> >>
> >> > Any more comments here?  I'd like to go back to our team w/ some
> >> feedback
> >> >  (Enis just commented up on the issue that he is not mad about the evil
> >> > grin -- I can ask them to do something about this)
> >> >
> >> > St.Ack
> >> >
> >> >
> >> > On Wed, May 7, 2014 at 12:04 PM, Stack  wrote:
> >> >
> >> > > The design team at my place of work put together a few variations on
> >> an
> >> > > Orca with an evil grin.  I posted their combos up on
> >> > > https://issues.apache.org/jira/browse/HBASE-4920 (see the three most
> >> > > recent).  They can work on cleanup -- e.g. making the details look
> >> good
> >> > at
> >> > > a small scale and do up the various logo/image combinations -- but are
> >> > you
> >> > > lot good w/ this general direction?
> >> > >
> >> > > Enis used the following in his 1.0 slides during the release managers
> >> > > session at hbasecon and it looked alright to me (maybe it was the big
> >> > > screen that did it) but it needs work to make it look good at a small
> >> > > scale:
> >> > >
> >> >
> >> http://depositphotos.com/2900573/stock-illustration-Killer-whale-tattoo.html(or
> >> > >
> >> >
> >> http://4.bp.blogspot.com/_HBjr0PcdZW4/TIge5Uok9eI/A8Q/0AdMNtuhRaY/s400/pm006-orca7.png
> >> > )
> >> > > Our designers could clean up this one too?
> >> > >
> >> > > Nkeywal voted up this one
> >> > >
> >> >
> >> https://issues.apache.org/jira/secure/attachment/12511412/HBase%20Orca%20Logo.jpgand
> >> I can ask our designers look at this as well.
> >> > >
> >> > > Feedback appreciated.
> >> > >
> >> > > St.Ack
> >> > > P.S. If you can't tell, I'm trying to avoid an absolute vote run over
> >> > some
> >> > > selection of rough images.  It will not be a comparison of apples to
> >> > apples
> >> > > since the images are unfinished and most are without a 'setting'.
> >> >  Rather,
> >> > > I'm trying to narrow the options and then have us give feedback to a
> >> > couple
> >> > > of ready and willing professionals who can interpret our concerns in a
> >> > > language they are expert in (and in which we are not ... IANAGD).
> >> > >
> >> > >
> >> > >
> >> > >
> >> > > On Wed, Mar 5, 2014 at 11:22 AM, Stack  wrote:
> >> > >
> >> > >> On Wed, Mar 5, 2014 at 2:18 AM, Nicolas Liochon  >> > >wrote:
> >> > >>
> >> > >>> Yes, let's decide first on the animal itself (well we're done it
> >> > seems),
> >> > >>> and use another discussion thread for the picture.
> >> > >>>
> >> > >>>
> >> > >> Agreed.
> >> > >>
> >> > >> We've decided on the mascot (Hurray!).  Now for the representation.
> >> >  Will
> >> > >> do in another thread.  I thought we could skirt this issue but you
> >> > reviving
> >> > >> an image I'd passed over, Jimmy's concern w/ the suggested one and my
> >> > >> difficulty fitting the image as is against our current logo as well
> >> as
> >> > an
> >> > >> offline conversation with our other Nick makes it plain we are going
> >> to
> >> > >> deal head-on with how it is represented.
> >> > >>
> >> > >> I'll be back with some thing for folks to vote on (will rope in a
> >> few of
> >> > >> the interested parties composing a set).
> >> > >>
> >> > >> St.Ack
> >> > >>
> >> > >
> >> > >
> >> >
> >>
> >
> >


Re: Amending commit message

2014-06-02 Thread Konstantin Boudnik
Also keep in mind that amending changes the commit's SHA. So if a commit has
already been pushed to the remote, you'll need to force-push after any
amendment. And master is off-limits for that practice.
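
Roughly, the sequence is (a sketch - only force-push where that is allowed,
i.e. never on master):

  % git commit --amend                  # fix up the message of the last local commit
  % git push                            # fine if the old commit had not been pushed yet
  % git push --force origin <branch>    # required if it had already been pushed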

Cos

On Mon, Jun 02, 2014 at 06:38PM, Ted Yu wrote:
> Hi,
> In case you entered incorrect or partial commit message when you checked
> in, please refer to the following for amending the message:
> 
> http://stackoverflow.com/questions/179123/edit-an-incorrect-commit-message-in-git
> 
> FYI




Re: Introducing ConsensusNode and a Coordination Engine

2014-05-29 Thread Konstantin Boudnik
Crossposting from the hdfs-dev@ list.
Please add your feedback! The ZK-based coordination engine implementation is
going to be posted soon to the aforementioned JIRAs.

Regards,
  Cos

On May 29, 2014 2:09:37 PM PDT, Konstantin Shvachko  
wrote:
>Hello hadoop developers,
>
>I just opened two jiras proposing to introduce ConsensusNode into HDFS
>and
>a Coordination Engine into Hadoop Common. The latter should benefit
>HDFS
>and  HBase as well as potentially other projects. See HDFS-6469 and
>HADOOP-10641 for details.
>The effort is based on the system we built at Wandisco with my
>colleagues,
>who are glad to contribute it to Apache, as quite a few people in the
>community expressed interest in this ideas and their potential
>applications.
>
>We should probably keep technical discussions in the jiras. Here on the
>dev
>list I wanted to touch-base on any logistic issues / questions.
>- First of all, any ideas and help are very much welcome.
>- We would like to set up a meetup to discuss this if people are
>interested. Hadoop Summit next week may be a potential time-place to
>meet.
>Not sure in what form. If not, we can organize one in our San Ramon
>office
>later on.
>- The effort may take a few months depending on the contributors
>schedules.
>Would it make sense to open a branch for the ConsensusNode work?
>- APIs and the implementation of the Coordination Engine should be a
>fairly
>independent, so it may be reasonable to add it directly to Hadoop
>Common
>trunk.
>
>Thanks,
>--Konstantin



Re: Timestamp resolution

2014-05-23 Thread Konstantin Boudnik
Thank you very much for the explanation - makes sense!

And I would love to put our heads together with you at the end of the 297th year
to contemplate why we thought that 1m trans/sec was sufficient ;) This is
certainly something to look forward to!
 
Cos

On Fri, May 23, 2014 at 06:16PM, lars hofhansl wrote:
> The specific discussion here was a transaction engine doing snapshot
> isolation using the HBase timestamps, but still be close to wall clock time
> as much as possible.
> In that scenario, with ms resolution you can only do 1000 transactions/sec,
> and so you need to turn the timestamp into something that is not wall clock
> time as HBase understands it (and hence TTL, etc, will no longer work, as
> well as any other tools you've written that use the HBase timestamp).
> 1m transactions/sec are good enough (for now, I envision in a few years
> we'll be sitting here wondering how we could ever think that 1m
> transaction/sec are sufficient) :)
> 
> -- Lars
> 
> 
> 
> ____
>  From: Konstantin Boudnik 
> To: dev@hbase.apache.org; lars hofhansl  
> Sent: Friday, May 23, 2014 5:58 PM
> Subject: Re: Timestamp resolution
>  
> 
> What's the purpose of nanos accuracy in the TS? I am trying to think of one,
> but I don't know much about real production use cases.
> 
> Cos
> 
> P.S. Are you saying that a real concern is how usable Hbase will be in
> nearly 300 years from now? ;) Or I misread you? 
> 
> 
> On Fri, May 23, 2014 at 05:27PM, lars hofhansl wrote:
> > We have discussed this in the past. It just came up again during an 
> > internal discussion.
> > Currently we simply store a Java timestamp (millisec since epoch), i.e. we 
> > have ms resolution.
> > 
> > We do have 8 bytes for the TS, though. Not enough to store nanosecs (that
> > would only cover 2^63/10^9/3600/24/365.24 = 292.279 years), but enough for
> > microseconds (292279 years).
> > Should we just store he TS is microseconds? We could do that right now (and
> > just keep the ms resolution for now - i.e. the us part would always be 0 for
> > now).
> > Existing data must be in ms of course, so we'd grandfather that in, but new
> > tables could store by default in us.
> > 
> > We'd need to make this configurable both the column family level and client
> > level, so clients could still opt to see data in ms.
> > 
> > Comments? Too much to bite off?
> > 
> > -- Lars
> > 


Re: Timestamp resolution

2014-05-23 Thread Konstantin Boudnik
What's the purpose of nanos accuracy in the TS? I am trying to think of one,
but I don't know much about real production use cases.

Cos

P.S. Are you saying that a real concern is how usable HBase will be nearly
300 years from now? ;) Or did I misread you?
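
Re-checking the arithmetic quoted below for a signed 64-bit timestamp - just bc,
nothing fancy:

  % echo '2^63 / 10^9 / (3600*24*365.24)' | bc -l    # nanosecond resolution: ~292.279 years
  % echo '2^63 / 10^6 / (3600*24*365.24)' | bc -l    # microsecond resolution: ~292279 years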

On Fri, May 23, 2014 at 05:27PM, lars hofhansl wrote:
> We have discussed this in the past. It just came up again during an internal 
> discussion.
> Currently we simply store a Java timestamp (millisec since epoch), i.e. we 
> have ms resolution.
> 
> We do have 8 bytes for the TS, though. Not enough to store nanosecs (that
> would only cover 2^63/10^9/3600/24/365.24 = 292.279 years), but enough for
> microseconds (292279 years).
> Should we just store he TS is microseconds? We could do that right now (and
> just keep the ms resolution for now - i.e. the us part would always be 0 for
> now).
> Existing data must be in ms of course, so we'd grandfather that in, but new
> tables could store by default in us.
> 
> We'd need to make this configurable both the column family level and client
> level, so clients could still opt to see data in ms.
> 
> Comments? Too much to bite off?
> 
> -- Lars
> 


Re: We have our first victim

2014-05-23 Thread Konstantin Boudnik
On Fri, May 23, 2014 at 05:30PM, lars hofhansl wrote:
> Just a normal push. I was confused what it actually did. A pull is fetch +
> merge, right? I think that's what happened as I saw a merge commit as the
> latest commit after I pushed.
> If a push without a rebase is blocked I'd feel much better.

Do you consider yourself unlucky just because your merge happened to be clean? :)

> 
>  From: Enis Söztutar 
> To: "dev@hbase.apache.org" ; lars hofhansl 
>  
> Sent: Friday, May 23, 2014 4:19 PM
> Subject: Re: We have our first victim
>  
> 
> 
> You should not be able to push before rebase. It should have failed because 
> it is not fast forward. Did you do a normal push the first time or force 
> push? 
> 
> Enis
> 
> 
> 
> On Fri, May 23, 2014 at 4:16 PM, lars hofhansl  wrote:
> 
> It would have prevented me from fixing the history, though, after I messed it 
> up simply by forgetting to rebase before a push.
> >I do think that anybody who does a force push needs to explain him/herself, 
> >though.
> >
> >
> >-- Lars
> >
> >
> >
> >
> > From: Enis Söztutar 
> >To: "dev@hbase.apache.org" 
> >Sent: Friday, May 23, 2014 4:13 PM
> >Subject: Re: We have our first victim
> >
> >
> >
> >Force push should be disabled by default for all branches I think.
> >
> >Enis
> >
> >
> >
> >On Fri, May 23, 2014 at 3:35 PM, Andrew Purtell  wrote:
> >
> >> Makes sense, earlier Jake was talking about how 'trunk' is a "protected
> >> branch".
> >>
> >>
> >> On Fri, May 23, 2014 at 3:22 PM, Konstantin Boudnik 
> >> wrote:
> >>
> >> > I think INFRA only disables force-pushes to the master: the rest of the
> >> > branches is a fair game ;)
> >> >
> >> > Cos
> >> >
> >> > On Fri, May 23, 2014 at 03:12PM, lars hofhansl wrote:
> >> > > I just had to fix up the history on the 0.94 branch, because I forgot
> >> to
> >> > > rebase after a pull (and I had some committed changes already).
> >> > >
> >> > > git will happily let you push stuff, even if that messes up the history
> >> > (and
> >> > > reorder commits, in my case my local change was ordered before the
> >> pulled
> >> > > changes).
> >> > > It's not wrong, since I should have rebased my changes. Just... uhm...
> >> > surprising :)
> >> > >
> >> > >
> >> > > Scary, but it worked and at least git gives me enough tools to fix
> >> > things.
> >> > >
> >> > > -- Lars
> >> > >
> >> > >
> >> > >
> >> > > 
> >> > >  From: Andrew Purtell 
> >> > > To: "dev@hbase.apache.org" 
> >> > > Sent: Friday, May 23, 2014 10:22 AM
> >> > > Subject: We have our first victim
> >> > >
> >> > >
> >> > > Ram pushed up a commit as a new branch 'trunk' instead of to 'master'.
> >> I
> >> > > will file an infra ticket to nuke the new branch 'trunk' as this is
> >> going
> >> > > to confuse people. Objections?
> >> > >
> >> > > --
> >> > > Best regards,
> >> > >
> >> > > ═  - Andy
> >> > >
> >> > > Problems worthy of attack prove their worth by hitting back. - Piet
> >> Hein
> >> > > (via Tom White)
> >> >
> >>
> >>
> >>
> >> --
> >> Best regards,
> >>
> >>    - Andy
> >>
> >> Problems worthy of attack prove their worth by hitting back. - Piet Hein
> >> (via Tom White)
> >>


Re: We have our first victim

2014-05-23 Thread Konstantin Boudnik
On Fri, May 23, 2014 at 04:19PM, Enis Söztutar wrote:
> You should not be able to push before rebase. It should have failed because
> it is not fast forward. Did you do a normal push the first time or force
> push?

Non-fast-forward commits do not prevent you from pushing if your
merges/resolved conflicts can be cleanly applied to the current HEAD.
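
E.g. what most likely happened in Lars' case (a sketch):

  % git pull origin 0.94      # fetch + merge: creates a merge commit on top of the local commits
  % git push origin 0.94      # accepted - after the merge the push is effectively fast-forward
  # the way to avoid the merge commit in the first place:
  % git pull --rebase origin 0.94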

> Enis
> 
> 
> On Fri, May 23, 2014 at 4:16 PM, lars hofhansl  wrote:
> 
> > It would have prevented me from fixing the history, though, after I messed
> > it up simply by forgetting to rebase before a push.
> > I do think that anybody who does a force push needs to explain
> > him/herself, though.
> >
> >
> > -- Lars
> >
> >
> >
> > 
> >  From: Enis Söztutar 
> > To: "dev@hbase.apache.org" 
> > Sent: Friday, May 23, 2014 4:13 PM
> > Subject: Re: We have our first victim
> >
> >
> > Force push should be disabled by default for all branches I think.
> >
> > Enis
> >
> >
> >
> > On Fri, May 23, 2014 at 3:35 PM, Andrew Purtell 
> > wrote:
> >
> > > Makes sense, earlier Jake was talking about how 'trunk' is a "protected
> > > branch".
> > >
> > >
> > > On Fri, May 23, 2014 at 3:22 PM, Konstantin Boudnik 
> > > wrote:
> > >
> > > > I think INFRA only disables force-pushes to the master: the rest of the
> > > > branches is a fair game ;)
> > > >
> > > > Cos
> > > >
> > > > On Fri, May 23, 2014 at 03:12PM, lars hofhansl wrote:
> > > > > I just had to fix up the history on the 0.94 branch, because I forgot
> > > to
> > > > > rebase after a pull (and I had some committed changes already).
> > > > >
> > > > > git will happily let you push stuff, even if that messes up the
> > history
> > > > (and
> > > > > reorder commits, in my case my local change was ordered before the
> > > pulled
> > > > > changes).
> > > > > It's not wrong, since I should have rebased my changes. Just...
> > uhm...
> > > > surprising :)
> > > > >
> > > > >
> > > > > Scary, but it worked and at least git gives me enough tools to fix
> > > > things.
> > > > >
> > > > > -- Lars
> > > > >
> > > > >
> > > > >
> > > > > 
> > > > >  From: Andrew Purtell 
> > > > > To: "dev@hbase.apache.org" 
> > > > > Sent: Friday, May 23, 2014 10:22 AM
> > > > > Subject: We have our first victim
> > > > >
> > > > >
> > > > > Ram pushed up a commit as a new branch 'trunk' instead of to
> > 'master'.
> > > I
> > > > > will file an infra ticket to nuke the new branch 'trunk' as this is
> > > going
> > > > > to confuse people. Objections?
> > > > >
> > > > > --
> > > > > Best regards,
> > > > >
> > > > >    - Andy
> > > > >
> > > > > Problems worthy of attack prove their worth by hitting back. - Piet
> > > Hein
> > > > > (via Tom White)
> > > >
> > >
> > >
> > >
> > > --
> > > Best regards,
> > >
> > >- Andy
> > >
> > > Problems worthy of attack prove their worth by hitting back. - Piet Hein
> > > (via Tom White)
> > >
> >


Re: We have our first victim

2014-05-23 Thread Konstantin Boudnik
Arguably, if we have a shared short-lived integration branch, a force push might
be desirable - with the developers' mutual consent - to compress the history of
that branch or to avoid non-fast-forward merge commits, etc.

Over time I came to the conclusion that using "administrative fascism"
measures (as in http://www.gnu.org/fun/jokes/know.your.sysadmin.html) is not
very productive. Hence INFRA's stance on preserving master (trunk) as the
ultimate history of the code that went into the releases.

Hopefully it makes sense? Thanks
  Cos

On Fri, May 23, 2014 at 04:13PM, Enis Söztutar wrote:
> Force push should be disabled by default for all branches I think.
> 
> Enis
> 
> 
> On Fri, May 23, 2014 at 3:35 PM, Andrew Purtell  wrote:
> 
> > Makes sense, earlier Jake was talking about how 'trunk' is a "protected
> > branch".
> >
> >
> > On Fri, May 23, 2014 at 3:22 PM, Konstantin Boudnik 
> > wrote:
> >
> > > I think INFRA only disables force-pushes to the master: the rest of the
> > > branches is a fair game ;)
> > >
> > > Cos
> > >
> > > On Fri, May 23, 2014 at 03:12PM, lars hofhansl wrote:
> > > > I just had to fix up the history on the 0.94 branch, because I forgot
> > to
> > > > rebase after a pull (and I had some committed changes already).
> > > >
> > > > git will happily let you push stuff, even if that messes up the history
> > > (and
> > > > reorder commits, in my case my local change was ordered before the
> > pulled
> > > > changes).
> > > > It's not wrong, since I should have rebased my changes. Just... uhm...
> > > surprising :)
> > > >
> > > >
> > > > Scary, but it worked and at least git gives me enough tools to fix
> > > things.
> > > >
> > > > -- Lars
> > > >
> > > >
> > > >
> > > > 
> > > >  From: Andrew Purtell 
> > > > To: "dev@hbase.apache.org" 
> > > > Sent: Friday, May 23, 2014 10:22 AM
> > > > Subject: We have our first victim
> > > >
> > > >
> > > > Ram pushed up a commit as a new branch 'trunk' instead of to 'master'.
> > I
> > > > will file an infra ticket to nuke the new branch 'trunk' as this is
> > going
> > > > to confuse people. Objections?
> > > >
> > > > --
> > > > Best regards,
> > > >
> > > >    - Andy
> > > >
> > > > Problems worthy of attack prove their worth by hitting back. - Piet
> > Hein
> > > > (via Tom White)
> > >
> >
> >
> >
> > --
> > Best regards,
> >
> >- Andy
> >
> > Problems worthy of attack prove their worth by hitting back. - Piet Hein
> > (via Tom White)
> >


Re: Call for Lightning Talks, Hadoop Summit HBase BoF

2014-05-23 Thread Konstantin Boudnik
Hi Nick.

Are you planning to post the agenda on the meetup page? Please let me know if
you need any help!

Cos

On Wed, May 14, 2014 at 05:38PM, Nick Dimiduk wrote:
> Just to be clear, this is not a call for vendor pitches. This is a venue
> for HBase users, operators, and developers to intermingle, share stories,
> and storm new ideas.
> 
> 
> On Tue, May 13, 2014 at 11:40 AM, Nick Dimiduk  wrote:
> 
> > Hi HBasers!
> >
> > Subash and I are organizing the HBase Birds of a Feather (BoF) session at
> > Hadoop Summit San Jose this year. We're looking for 4-5 brave souls willing
> > to standup for 15 minutes and tell the community what's working for them
> > and what isn't. Have a story about how this particular feature saved the
> > day? Great! Really wish something was implemented differently and have a
> > plan for fixing it? Step up and recruit folks to provide/review patches!
> >
> > Either way, send me a note off-list and we'll get you queued up.
> >
> > The event is on Thursday, June 5, 3:30p - 5:00p at the San Jose Convention
> > Center, room 230C. RSPV at the meetup page [0]. Please note that this event
> > is NOT exclusive to conference attendees. Come, come, one and all!
> >
> > See you at the convention center!
> > Nick & Subash
> >
> > [0]:
> > http://www.meetup.com/Hadoop-Summit-Community-San-Jose/events/179081342/
> >


Re: We have our first victim

2014-05-23 Thread Konstantin Boudnik
I think INFRA only disables force-pushes to master: the rest of the
branches are fair game ;)

Cos

On Fri, May 23, 2014 at 03:12PM, lars hofhansl wrote:
> I just had to fix up the history on the 0.94 branch, because I forgot to
> rebase after a pull (and I had some committed changes already).
> 
> git will happily let you push stuff, even if that messes up the history (and
> reorder commits, in my case my local change was ordered before the pulled
> changes).
> It's not wrong, since I should have rebased my changes. Just... uhm... 
> surprising :)
> 
> 
> Scary, but it worked and at least git gives me enough tools to fix things.
> 
> -- Lars
> 
> 
> 
> 
>  From: Andrew Purtell 
> To: "dev@hbase.apache.org"  
> Sent: Friday, May 23, 2014 10:22 AM
> Subject: We have our first victim
>  
> 
> Ram pushed up a commit as a new branch 'trunk' instead of to 'master'. I
> will file an infra ticket to nuke the new branch 'trunk' as this is going
> to confuse people. Objections?
> 
> -- 
> Best regards,
> 
>    - Andy
> 
> Problems worthy of attack prove their worth by hitting back. - Piet Hein
> (via Tom White)


Re: [DISCUSSION] Avoiding merge commits

2014-05-23 Thread Konstantin Boudnik
+1

In the interest of full disclosure, the only good excuse for a non-fast-forward
merge commit is when you are merging a significant chunk of code back
into master, e.g. with feature development, hot fixes, releases, etc. In
such a case having an extra piece of historical reference might be helpful
(see http://nvie.com/posts/a-successful-git-branching-model/ as a pretty good
outline).
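
In practice something like this (a sketch; 'feature-x' is a made-up branch name):

  % git checkout master
  % git merge --no-ff feature-x    # deliberate merge commit marking where a big feature landed
  # for routine updates, per Andrew's note below, rebase so no accidental merge commits appear:
  % git fetch upstream && git rebase upstream/master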

Cos

On Fri, May 23, 2014 at 10:38AM, Andrew Purtell wrote:
> I recommend we do not push merge commits upstream. I suppose it is easy
> enough to filter them out when looking at history but there is no need to
> be merging upstream branches into your local tracking branch when you can
> rebase instead. In this way we can avoid polluting the history in the
> master repository with unnecessary merge commit entries. (And maybe some
> devs will be merging upstream into tracking branches or merging commits
> from local feature branches several times per day, and these will all
> accumulate...)
> 
> When updating your local tracking branch from upstream, use git fetch
> upstream && git rebase upstream/branch instead of 'git merge'.
> 
> When developing features on a local branch it's possible to do a squash
> commit from the feature branch to the tracking branch using 'git rebase'
> instead of 'git merge', then a push of the single squashed commit from the
> tracking branch to the upstream branch.
> 
> If these workflow choices are acceptable by consensus we can update the
> 'how to commit' document with an illustration of the workflow with example
> commands.
> 
> 
> -- 
> Best regards,
> 
>- Andy
> 
> Problems worthy of attack prove their worth by hitting back. - Piet Hein
> (via Tom White)


[jira] [Created] (HBASE-11244) WAL split needs to be abstracted from ZK

2014-05-23 Thread Konstantin Boudnik (JIRA)
Konstantin Boudnik created HBASE-11244:
--

 Summary: WAL split needs to be abstracted from ZK
 Key: HBASE-11244
 URL: https://issues.apache.org/jira/browse/HBASE-11244
 Project: HBase
  Issue Type: Sub-task
  Components: Consensus, master, wal, Zookeeper
Reporter: Konstantin Boudnik






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11243) Abstract HMaster administrative operations from ZK

2014-05-23 Thread Konstantin Boudnik (JIRA)
Konstantin Boudnik created HBASE-11243:
--

 Summary: Abstract HMaster administrative operations from ZK
 Key: HBASE-11243
 URL: https://issues.apache.org/jira/browse/HBASE-11243
 Project: HBase
  Issue Type: Sub-task
  Components: Consensus, master
Reporter: Konstantin Boudnik






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11242) Client should be aware about multiple HMaster IP addresses

2014-05-23 Thread Konstantin Boudnik (JIRA)
Konstantin Boudnik created HBASE-11242:
--

 Summary: Client should be aware about multiple HMaster IP addresses
 Key: HBASE-11242
 URL: https://issues.apache.org/jira/browse/HBASE-11242
 Project: HBase
  Issue Type: Sub-task
  Components: Client, Consensus
Reporter: Konstantin Boudnik


The HBase client should be able to seamlessly fail over from a faulty active
master to another one.
In the case of a single active HMaster, this functionality will have to work
as-is, without changes in the semantics of the client.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11241) Support for multiple active HMaster replicas

2014-05-23 Thread Konstantin Boudnik (JIRA)
Konstantin Boudnik created HBASE-11241:
--

 Summary: Support for multiple active HMaster replicas
 Key: HBASE-11241
 URL: https://issues.apache.org/jira/browse/HBASE-11241
 Project: HBase
  Issue Type: Umbrella
  Components: Consensus
Affects Versions: 0.98.2
Reporter: Konstantin Boudnik


In light of the discussions and refactoring progress around consensus-based
replication (see HBASE-10909), the concept of multiple active replicas can be
applied to the HMaster entity.

In order to achieve this, a number of steps expressed in the subtasks of this
ticket will have to be completed, similarly to HBASE-10909.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: [ANNOUNCE] Apache Phoenix has graduated as a top level project

2014-05-22 Thread Konstantin Boudnik
Great! Congratulations! 

On May 22, 2014 2:46:16 PM PDT, James Taylor  wrote:
>I'm pleased to announce that Apache Phoenix has graduated from the
>incubator to become a top level project. Thanks so much for all your
>help
>and support - we couldn't have done it without the fantastic HBase
>community! We're looking forward to continued collaboration.
>Regards,
>The Apache Phoenix team



Re: VOTE: Move to GIT

2014-05-19 Thread Konstantin Boudnik
+1 (non-binding, apparently).

On Mon, May 19, 2014 at 08:32AM, Stack wrote:
> Following up on the old DISCUSSION thread on moving to git [1], lets vote
> (INFRA needs to see a vote).
> 
> Move to GIT?
> 
> [  ] +1 on yes, lets move to GIT
> [  ] 0 on don't care
> [  ] -1 on stay w/ current SVN setup.
> 
> I'm +1.
> 
> St.Ack
> P.S. The Mighty Talat Uyarer, who volunteered to run the migration a while
> ago is back with a vengeance and up for the job.
> 
> 1. http://search-hadoop.com/m/WmXbV15hqqq


Re: Fw: [VOTE] The 1st HBase 0.98.2 release candidate (RC0) is available

2014-05-13 Thread Konstantin Boudnik


On May 12, 2014 5:19:35 PM PDT, lars hofhansl  wrote:
>Resending... Looks like first attempt did not go through (still Apache
>email issues?).

+1 

>- Forwarded Message -
>From: lars hofhansl 
>To: "dev@hbase.apache.org"  
>Cc: "u...@hbase.apache.org"  
>Sent: Monday, May 12, 2014 11:01 AM
>Subject: Re: [VOTE] The 1st HBase 0.98.2 release candidate (RC0) is
>available
> 
>
>
>Getting folks to test releases is like pulling teeths, or herding cats,
>or getting three kids to agree which song from Frozen is the best :)
>
>All jokes aside, this *is* a problem. With the goal of more frequent
>releases release verification needs to be a smooth process.
>I do not know how to fix it, but if we all could just remember to test
>a release (and a simple verification does not take much time) that'd
>help.
>
>I take some of the blame. And I'll weasel an excuse: HBaseCon. :)
>
>Please reconsider Andy. It's not always fun, but it's important service
>to the community.
>
>
>-- Lars
>
>
>
>
> From: Andrew Purtell 
>To: "dev@hbase.apache.org"  
>Cc: "u...@hbase.apache.org"  
>Sent: Sunday, May 11, 2014 7:54 PM
>Subject: Re: [VOTE] The 1st HBase 0.98.2 release candidate (RC0) is
>available
> 
>
>Actually I do resign as RM for 0.98 effective
> immediately.
>
>I've not seen the community before disinterested in a release
>sufficiently that not even simple package verification is time worth
>spending. 
>
>
>> On May 12, 2014, at 10:47 AM, Andrew Purtell
> wrote:
>> 
>> Too late I already put out the artifacts. I thought the release vote
>is lazy consensus. If you would like me to resign as RM for 0.98 I
>will. 
>> 
>>> On May 12, 2014, at 9:17 AM, Todd Lipcon  wrote:
>>> 
>>> Hey Andrew,
>>> 
>>>
> Sorry for the late email here, but -- I believe releases need at least
>>> three +1 votes from PMC members[1]. Perhaps people were a bit busy
>last
>>> week due to HBaseCon and we should extend the voting on this release
>>> candidate for another week?
>>> 
>>> [1] http://www.apache.org/foundation/voting.html
>>> 
>>> -Todd
>>> 
>>> 
 On Fri, May 9, 2014 at 12:14 AM, Andrew Purtell
> wrote:
 
 With one +1 and no 0 or -1 votes, this vote has passed. I have sent
>the
 artifacts onward for mirroring and will announce in ~24 hours.
 
 
 On Wed, May 7, 2014 at 10:04 AM, Andrew Purtell
>
 wrote:
 
> +1
> 
> Unit test suite passes 100% 25 times out of 25 runs on Java 6u43,
>7u45,
> and 8u0.
> 
> Cluster testing looks good with LoadTestTool, and YCSB.
> 
> An informal performance test on a small cluster comparing 0.98.0
>and
> 0.98.2 indicates no serious perf regressions.
> See email to dev@ titled
> "Comparing the performance of 0.98.2 RC0 and 0.98.0 using YCSB".
> 
> 
> On Wed, Apr 30, 2014 at 8:50 PM, Andrew Purtell
> wrote:
> 
>> The 1st HBase 0.98.2 release candidate (RC0) is available for
>download
 at
>> http://people.apache.org/~apurtell/0.98.2RC0/ and Maven artifacts
>are
>> also available in the temporary repository
>>
>https://repository.apache.org/content/repositories/orgapachehbase-1020.
>> 
>> Signed with my code signing key D5365CCD.
>> 
>> The issues resolved in this release can be found here:

>https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12310753&version=12326505
>> 
>> 
>> Please try out the candidate and vote +1/-1 by midnight Pacific
>Time
>> (00:00 -0800 GMT) on May 7 on whether or not we should release
>this as
>> 0.98.2.
>> 
>> --
>> Best regards,
>> 
>> ​    - Andy​
>> ​
>> 
>> Problems worthy of attack prove their worth by hitting back. -
>Piet Hein
>> (via Tom White)
> 
> 
> 
> --
> Best regards,
> 
>  - Andy
> 
> Problems worthy of attack prove their worth by hitting back. -
>Piet Hein
> (via Tom White)
 
 
 
 --
 Best regards,
 
  - Andy
 
 Problems worthy of attack prove their worth by hitting back. - Piet
>Hein
 (via Tom White)
>>> 
>>> 
>>> 
>>> -- 
>>> Todd Lipcon
>>> Software Engineer, Cloudera



Re: [VOTE] The 1st HBase 0.98.2 release candidate (RC0) is available

2014-05-12 Thread Konstantin Boudnik
I have hit a couple of issues within Bigtop and now need to pull it back over
the event horizon ;) In particular, BIGTOP-1302.

I hope to have it done and then tested by the end of tomorrow. Thanks
  Cos

On Tue, May 06, 2014 at 07:04PM, Andrew Purtell wrote:
> +1
> 
> Unit test suite passes 100% 25 times out of 25 runs on Java 6u43, 7u45, and
> 8u0.
> 
> Cluster testing looks good with LoadTestTool, and YCSB.
> 
> An informal performance test on a small cluster comparing 0.98.0 and 0.98.2
> indicates no serious perf regressions. See email to dev@ titled "Comparing
> the performance of 0.98.2 RC0 and 0.98.0 using YCSB".
> 
> 
> On Wed, Apr 30, 2014 at 8:50 PM, Andrew Purtell  wrote:
> 
> > The 1st HBase 0.98.2 release candidate (RC0) is available for download at
> > http://people.apache.org/~apurtell/0.98.2RC0/ and Maven artifacts are
> > also available in the temporary repository
> > https://repository.apache.org/content/repositories/orgapachehbase-1020 .
> >
> > Signed with my code signing key D5365CCD.
> >
> > The issues resolved in this release can be found here:
> > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12310753&version=12326505
> >
> >
> > Please try out the candidate and vote +1/-1 by midnight Pacific Time
> > (00:00 -0800 GMT) on May 7 on whether or not we should release this as
> > 0.98.2.
> >
> > --
> > Best regards,
> >
> >  ​- Andy​
> > ​
> >
> > Problems worthy of attack prove their worth by hitting back. - Piet Hein
> > (via Tom White)
> >
> 
> 
> 
> -- 
> Best regards,
> 
>- Andy
> 
> Problems worthy of attack prove their worth by hitting back. - Piet Hein
> (via Tom White)




Re: Comparing the performance of 0.98.2 RC0 and 0.98.0 using YCSB

2014-05-11 Thread Konstantin Boudnik
I think the point wasn't benchmarking but merely making sure there are no
regressions.

On Wed, May 07, 2014 at 02:16PM, Vladimir Rodionov wrote:
> *7x EC2 c3.8xlarge: 1 master, 5 slaves, 1 test client*
> 
> Andrew, I think these numbers are far from maximum you can get from this
> set up. Why only 1 test client?
> 
> -Vladimir Rodionov
> 
> 
> On Tue, May 6, 2014 at 6:58 PM, Andrew Purtell  wrote:
> 
> > Comparing the relative performance of 0.98.2 RC0 and 0.98.0 on Hadoop 2.2.0
> > using YCSB.
> >
> > The hardware used is different than for the previous report comparing
> > 0.98.1 to 0.98.0. However the results are very similar, both in terms of
> > 0.98.2 RC0 numbers with respect to those measured for 0.98.0, and the
> > workload specific deltas observed when testing 0.98.1.
> >
> > *Hardware and Versions*
> >
> > Hadoop 2.2.0
> > HBase 0.98.2-hadoop2 RC0
> >
> > 7x EC2 c3.8xlarge: 1 master, 5 slaves, 1 test client
> >
> > 32 cores
> >
> > 60 GB RAM
> >
> > 2 x 320 GB directly attached SSD
> >
> > NameNode: 4 GB heap
> >
> > DataNode: 1 GB heap
> >
> > Master: 1 GB heap
> >
> > RegionServer: 8 GB heap, 24 GB bucket cache offheap engine
> >
> >
> > *Methodology*
> >
> >
> > Setup:
> >
> >  0. Start cluster
> >  1. shell: create "seed", { NAME=>"u", COMPRESSION=>"snappy" }
> >  2. YCSB:  Preload 100 million rows into table "seed"
> >   3. shell: flush "seed" ; compact "seed"
> >  4. Wait for compaction to complete
> >  5. shell: create_snapshot "seed", "seed_snap"
> >   6. shell: disable "seed"
> >
> >
> >  For each test:
> >
> >  7. shell: clone_snapshot "seed_snap", "test"
> >   8. YCSB:  Run test -p operationcount=1000 -threads 32 -target
> > 5 (clamp at ~10k ops/server/sec)
> >   9. shell: disable "test"
> > 10. shell: drop "test"
> >
> >
> > *Workload A*
> >
> >  *0.98.0*
> >
> >  [OVERALL], RunTime(ms), 2097825
> > [OVERALL], Throughput(ops/sec), 4767
> > [UPDATE], Operations, 4999049
> > [UPDATE], AverageLatency(us), 1.107036384
> > [UPDATE], MinLatency(us), 0
> > [UPDATE], MaxLatency(us), 97865
> > [UPDATE], 95thPercentileLatency(ms), 0
> > [UPDATE], 99thPercentileLatency(ms), 0
> > [READ], Operations, 5000952
> > [READ], AverageLatency(us), 413.9172277
> > [READ], MinLatency(us), 295
> > [READ], MaxLatency(us), 927729
> > [READ], 95thPercentileLatency(ms), 0
> > [READ], 99thPercentileLatency(ms), 0
> >
> >
> >  *0.98.2*
> >
> > ​[OVERALL], RunTime(ms), 2082682
> > [OVERALL], Throughput(ops/sec), 4802
> > [UPDATE], Operations, 5001208
> > [UPDATE], AverageLatency(us), 1.227632714
> > [UPDATE], MinLatency(us), 0
> > [UPDATE], MaxLatency(us), 720423
> > [UPDATE], 95thPercentileLatency(ms), 0
> > [UPDATE], 99thPercentileLatency(ms), 0
> > [READ], Operations, 4998792.667
> > [READ], AverageLatency(us), 411.0522393
> > [READ], MinLatency(us), 288
> > [READ], MaxLatency(us), 977500
> > [READ], 95thPercentileLatency(ms), 0
> > [READ], 99thPercentileLatency(ms), 0
> >
> > ​
> >
> > ​*Workload B*
> >
> >  *0.98.0*
> >
> > [OVERALL], RunTime(ms), 3678408
> > [OVERALL], Throughput(ops/sec), 2719
> > [UPDATE], Operations, 500239
> > [UPDATE], AverageLatency(us), 2.218397098
> > [UPDATE], MinLatency(us), 0
> > [UPDATE], MaxLatency(us), 101523
> > [UPDATE], 95thPercentileLatency(ms), 0
> > [UPDATE], 99thPercentileLatency(ms), 0
> > [READ], Operations, 9499762.333
> > [READ], AverageLatency(us), 384.8231468
> > [READ], MinLatency(us), 283
> > [READ], MaxLatency(us), 922395
> > [READ], 95thPercentileLatency(ms), 0
> > [READ], 99thPercentileLatency(ms), 0
> >
> >  *0.98.2*
> >
> > ​[OVERALL], RunTime(ms), 3643856
> > [OVERALL], Throughput(ops/sec), 2744
> > [UPDATE], Operations, 499256
> > [UPDATE], AverageLatency(us), 2.561636579
> > [UPDATE], MinLatency(us), 0
> > [UPDATE], MaxLatency(us), 713811
> > [UPDATE], 95thPercentileLatency(ms), 0
> > [UPDATE], 99thPercentileLatency(ms), 0
> > [READ], Operations, 9500745
> > [READ], AverageLatency(us), 381.1349225
> > [READ], MinLatency(us), 284
> > [READ], MaxLatency(us), 921680
> > [READ], 95thPercentileLatency(ms), 0
> > [READ], 99thPercentileLatency(ms), 0
> >
> > ​
> > *Workload C*
> >
> >  *0.98.0*
> >
> > [OVERALL], RunTime(ms), 3258845
> > [OVERALL], Throughput(ops/sec), 3069
> > [READ], Operations, 1000
> > [READ], AverageLatency(us), 323.7287128
> > [READ], MinLatency(us), 276
> > [READ], MaxLatency(us), 928472
> > [READ], 95thPercentileLatency(ms), 0
> > [READ], 99thPercentileLatency(ms), 0
> >
> >  *0.98.2*
> >
> >  [OVERALL], RunTime(ms), 3288822
> > [OVERALL], Throughput(ops/sec), 3041
> > [READ], Operations, 1000
> > [READ], AverageLatency(us), 326.6214268
> > [READ], MinLatency(us), 284
> > [READ], MaxLatency(us), 924632
> > [READ], 95thPercentileLatency(ms), 0
> > [READ], 99thPercentileLatency(ms), 0
> >
> > ​*Workload D*
> >
> >  *0.98.0*
> >
> >  [OVERALL], RunTime(ms), 3707601
> > [OVERALL], Throughput(ops/sec), 2700
> > [INSERT], Operations, 500774
> > [INSERT], 

Re: [VOTE] The 1st HBase 0.98.2 release candidate (RC0) is available

2014-05-10 Thread Konstantin Boudnik
Evidently my vote email didn't go through, perhaps because of the email outage
over the last 3-4 days.

For what it worth - +1 from me.
Thanks Andrew!
  Cos

On Fri, May 09, 2014 at 08:14AM, Andrew Purtell wrote:
> With one +1 and no 0 or -1 votes, this vote has passed. I have sent the
> artifacts onward for mirroring and will announce in ~24 hours.
> 
> 
> On Wed, May 7, 2014 at 10:04 AM, Andrew Purtell  wrote:
> 
> > +1
> >
> > Unit test suite passes 100% 25 times out of 25 runs on Java 6u43, 7u45,
> > and 8u0.
> >
> > Cluster testing looks good with LoadTestTool, and YCSB.
> >
> > An informal performance test on a small cluster comparing 0.98.0 and
> > 0.98.2 indicates no serious perf regressions. See email to dev@ titled
> > "Comparing the performance of 0.98.2 RC0 and 0.98.0 using YCSB".
> >
> >
> > On Wed, Apr 30, 2014 at 8:50 PM, Andrew Purtell wrote:
> >
> >> The 1st HBase 0.98.2 release candidate (RC0) is available for download at
> >> http://people.apache.org/~apurtell/0.98.2RC0/ and Maven artifacts are
> >> also available in the temporary repository
> >> https://repository.apache.org/content/repositories/orgapachehbase-1020 .
> >>
> >> Signed with my code signing key D5365CCD.
> >>
> >> The issues resolved in this release can be found here:
> >> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12310753&version=12326505
> >>
> >>
> >> Please try out the candidate and vote +1/-1 by midnight Pacific Time
> >> (00:00 -0800 GMT) on May 7 on whether or not we should release this as
> >> 0.98.2.
> >>
> >> --
> >> Best regards,
> >>
> >>  ​- Andy​
> >> ​
> >>
> >> Problems worthy of attack prove their worth by hitting back. - Piet Hein
> >> (via Tom White)
> >>
> >
> >
> >
> > --
> > Best regards,
> >
> >- Andy
> >
> > Problems worthy of attack prove their worth by hitting back. - Piet Hein
> > (via Tom White)
> >
> 
> 
> 
> -- 
> Best regards,
> 
>- Andy
> 
> Problems worthy of attack prove their worth by hitting back. - Piet Hein
> (via Tom White)




Re: [VOTE] The 1st HBase 0.98.2 release candidate (RC0) is available

2014-05-02 Thread Konstantin Boudnik
Unleashing the bada$$ ;)

On Fri, May 02, 2014 at 03:24PM, Andrew Purtell wrote:
> Do your worst! :-)
> 
> On Friday, May 2, 2014, Konstantin Boudnik  wrote:
> 
> > 0.98.2 (RC0) is getting built smoothly in Bigtop 0.8.0!
> >
> > Not to do some testing with it ;)
> >
> > Thanks guys!
> >   Cos
> >
> > On Wed, Apr 30, 2014 at 08:50PM, Andrew Purtell wrote:
> > > The 1st HBase 0.98.2 release candidate (RC0) is available for download at
> > > http://people.apache.org/~apurtell/0.98.2RC0/ and Maven artifacts are
> > also
> > > available in the temporary repository
> > > https://repository.apache.org/content/repositories/orgapachehbase-1020 .
> > >
> > > Signed with my code signing key D5365CCD.
> > >
> > > The issues resolved in this release can be found here:
> > >
> > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12310753&version=12326505
> > >
> > >
> > > Please try out the candidate and vote +1/-1 by midnight Pacific Time
> > (00:00
> > > -0800 GMT) on May 7 on whether or not we should release this as 0.98.2.
> > >
> > > --
> > > Best regards,
> > >
> > > ​- Andy​
> > > ​
> > >
> > > Problems worthy of attack prove their worth by hitting back. - Piet Hein
> > > (via Tom White)
> >
> 
> 
> -- 
> Best regards,
> 
>- Andy
> 
> Problems worthy of attack prove their worth by hitting back. - Piet Hein
> (via Tom White)


Re: [VOTE] The 1st HBase 0.98.2 release candidate (RC0) is available

2014-05-02 Thread Konstantin Boudnik
0.98.2 (RC0) is getting built smoothly in Bigtop 0.8.0! 

Now to do some testing with it ;)

Thanks guys!
  Cos

On Wed, Apr 30, 2014 at 08:50PM, Andrew Purtell wrote:
> The 1st HBase 0.98.2 release candidate (RC0) is available for download at
> http://people.apache.org/~apurtell/0.98.2RC0/ and Maven artifacts are also
> available in the temporary repository
> https://repository.apache.org/content/repositories/orgapachehbase-1020 .
> 
> Signed with my code signing key D5365CCD.
> 
> The issues resolved in this release can be found here:
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12310753&version=12326505
> 
> 
> Please try out the candidate and vote +1/-1 by midnight Pacific Time (00:00
> -0800 GMT) on May 7 on whether or not we should release this as 0.98.2.
> 
> -- 
> Best regards,
> 
> ​- Andy​
> ​
> 
> Problems worthy of attack prove their worth by hitting back. - Piet Hein
> (via Tom White)


[jira] [Created] (HBASE-11108) Split ZKTable into interface and implementation

2014-05-01 Thread Konstantin Boudnik (JIRA)
Konstantin Boudnik created HBASE-11108:
--

 Summary: Split ZKTable into interface and implementation
 Key: HBASE-11108
 URL: https://issues.apache.org/jira/browse/HBASE-11108
 Project: HBase
  Issue Type: Sub-task
  Components: Zookeeper
Affects Versions: 0.98.2
Reporter: Konstantin Boudnik


In HBASE-11071 we are trying to split admin handlers away from ZK. However, a 
ZKTable instance is being used in multiple places, hence it would be beneficial 
to hide its implementation behind a well-defined interface.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: 0.98.1 has stopped building [Was: What happened to tar.gz assembly in 0.98?]

2014-04-29 Thread Konstantin Boudnik
+1 on what Stack said - 'site' stands apart from the other targets.

In the midst of this conversation I have tried to tie it to the default install
phase, but it basically has two consequences:
  - it doesn't work: riding install's coat-tails doesn't do much good, because the
lifecycle steps bound to it aren't getting called. Hence, no compilation, etc.
will be happening
  - it increases the time of the build (potentially)

I would ask a slightly different question: why does site need the installed
artifacts in the first place?

And if you guys feel like we need to dig into this issue - please let me know and
I will be happy to play with it: I have a bit of knowledge in Maven. However, a
~2K-line build file scares the hell out of me ;)

Regards,
  Cos
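
In practice the two-run workaround for the Bigtop packaging case ends up looking
roughly like this (a sketch; the flags mirror the release command quoted elsewhere
in this thread):

  # first pass: build and install the module jars into ~/.m2 so that
  # javadoc/site can resolve them later
  mvn clean install -DskipTests
  # second pass: site and the assembly now run against the installed jars
  mvn site assembly:single -DskipTests -Prelease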

On Tue, Apr 29, 2014 at 10:35PM, Stack wrote:
> On Tue, Apr 29, 2014 at 6:45 PM, Andrew Purtell  wrote:
> 
> > We have a 'release' Maven profile. Right now it just runs Apache RAT. I
> > wonder if some kind of hairy Maven-foo can reattach site into the right
> > place if this profile is enabled. RAT is pretty quick, and Javadoc is going
> > to dominate build time anyhow, so should be ok for Bigtop packaging. The
> > question is if it's possible. My Maven is weak. Anyone have any idea?
> >
> >
> >
> Seems like site is a lifecycle of its own apart from the maven 'default'
> lifecycle:
> https://maven.apache.org/guides/introduction/introduction-to-the-lifecycle.html
> Getting the site lifecycle to run inside a goal of the 'default'
> lifecycle
> could be tough (I'm no expert).  We make use of the pre-site building
> docbook.
> St.Ack
> 
> 
> 
> 
> > On Tue, Apr 29, 2014 at 4:37 PM, Konstantin Boudnik 
> > wrote:
> >
> > > On Tue, Apr 29, 2014 at 04:27PM, Stack wrote:
> > > > On Tue, Apr 29, 2014 at 4:18 PM, Konstantin Boudnik 
> > > wrote:
> > > >
> > > > > Do you guys think it'd make sense to find site:run to install phase?
> > > >
> > > > It wouldn't fly.  You'd piss off everyone as they wait on javadoc and
> > doc
> > > > targets every time they make small change.
> > >
> > > Yeah, you right. it also won't fly for another reason - the install won't
> > > engage other needed steps such as compile, test-compile, etc. It seems
> > that
> > > for the purpose of Bigtop packaging there's no other way but to really
> > > execute
> > > two septate mvn process one after another.
> > >
> > > Thanks,
> > >   Cos
> > >
> > > > Site is intentionally broken off an explicit goal unhooked from maven
> > > > lifecycle for this reason.  Ditto assembly for similar but also more
> > > > convoluted reasons.
> > > >
> > > > St.Ack
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > > This way
> > > > > - in theory at least - site will always be executing install first?
> > It
> > > > > might
> > > > > be too small of an issue though which might be totally workarounded
> > > with
> > > > > two
> > > > > sequential maven runs.
> > > > >
> > > >
> > > >
> > > >
> > > > > e, Apr 29, 2014 at 02:48PM, Andrew Purtell wrote:
> > > > > > > Ah, and if I read to the end ( sorry - sometimes don't do that
> > when
> > > > > annoyed
> > > > > > > - unrelated to this :-) ), then indeed you did clean ~/.m2 and
> > then
> > > > > > > attempted a list of targets including site.
> > > > > > >
> > > > > > > Install jars to the local Maven cache before invoking javadoc or
> > > site
> > > > > > > targets.
> > > > > > >
> > > > > > >
> > > > > > > On Tue, Apr 29, 2014 at 2:44 PM, Andrew Purtell <
> > > apurt...@apache.org>
> > > > > wrote:
> > > > > > >
> > > > > > > > Is this because we frob the Maven versions after rolling the
> > > source
> > > > > > > > tarball? See https://hbase.apache.org/book/releasing.html
> > > > > > > >
> > > > > > > > Do 'mvn -DskipTests clean install' first, then something that
> > > pulls
> > > > > in
> > > > > > > > javadoc or site targets and you should be fine. My gues

Re: Rolling the 0.98.2 RC0 tomorrow

2014-04-29 Thread Konstantin Boudnik
Would be happy to give it a spin with upcoming Bigtop 0.8.0! Thanks much,
mate!

On Tue, Apr 29, 2014 at 06:26PM, Andrew Purtell wrote:
> 
> 
> -- 
> Best regards,
> 
>- Andy
> 
> Problems worthy of attack prove their worth by hitting back. - Piet Hein
> (via Tom White)


Re: 0.98.1 has stopped building [Was: What happened to tar.gz assembly in 0.98?]

2014-04-29 Thread Konstantin Boudnik
On Tue, Apr 29, 2014 at 04:27PM, Stack wrote:
> On Tue, Apr 29, 2014 at 4:18 PM, Konstantin Boudnik  wrote:
> 
> > Do you guys think it'd make sense to find site:run to install phase?
> 
> It wouldn't fly.  You'd piss off everyone as they wait on javadoc and doc
> targets every time they make small change.

Yeah, you're right. It also won't fly for another reason - the install won't
engage other needed steps such as compile, test-compile, etc. It seems that
for the purpose of Bigtop packaging there's no other way but to really execute
two separate mvn processes one after another.

Thanks,
  Cos

> Site is intentionally broken off an explicit goal unhooked from maven
> lifecycle for this reason.  Ditto assembly for similar but also more
> convoluted reasons.
> 
> St.Ack
> 
> 
> 
> 
> 
> > This way
> > - in theory at least - site will always be executing install first? It
> > might
> > be too small of an issue though which might be totally workarounded with
> > two
> > sequential maven runs.
> >
> 
> 
> 
> > e, Apr 29, 2014 at 02:48PM, Andrew Purtell wrote:
> > > > Ah, and if I read to the end ( sorry - sometimes don't do that when
> > annoyed
> > > > - unrelated to this :-) ), then indeed you did clean ~/.m2 and then
> > > > attempted a list of targets including site.
> > > >
> > > > Install jars to the local Maven cache before invoking javadoc or site
> > > > targets.
> > > >
> > > >
> > > > On Tue, Apr 29, 2014 at 2:44 PM, Andrew Purtell 
> > wrote:
> > > >
> > > > > Is this because we frob the Maven versions after rolling the source
> > > > > tarball? See https://hbase.apache.org/book/releasing.html
> > > > >
> > > > > Do 'mvn -DskipTests clean install' first, then something that pulls
> > in
> > > > > javadoc or site targets and you should be fine. My guess is you did
> > that at
> > > > > one point, then moved to a different box or somehow wiped out local
> > 0.98.1
> > > > > artifacts in your ~/.m2.
> > > > >
> > > > >
> > > > > On Tue, Apr 29, 2014 at 2:38 PM, Konstantin Boudnik  > >wrote:
> > > > >
> > > > >> This is a bit weird, but since last night I can't build 0.98.1
> > anymore
> > > > >> because
> > > > >> of the following error:
> > > > >>
> > > > >> [ERROR] Failed to execute goal
> > > > >> org.apache.maven.plugins:maven-site-plugin:3.3:site (default-site)
> > on
> > > > >> project
> > > > >> hbase: failed to get report for
> > > > >> org.apache.maven.plugins:maven-javadoc-plugin:
> > > > >> Failed to execute goal on project hbase-server: Could not resolve
> > > > >> dependencies
> > > > >> for project org.apache.hbase:hbase-server:jar:0.98.1: The following
> > > > >> artifacts
> > > > >> could not be resolved: org.apache.hbase:hbase-common:jar:0.98.1,
> > > > >> org.apache.hbase:hbase-protocol:jar:0.98.1,
> > > > >> org.apache.hbase:hbase-client:jar:0.98.1,
> > > > >> org.apache.hbase:hbase-prefix-tree:jar:0.98.1,
> > > > >> org.apache.hbase:hbase-common:jar:tests:0.98.1,
> > > > >> org.apache.hbase:hbase-hadoop-compat:jar:0.98.1,
> > > > >> org.apache.hbase:hbase-hadoop-compat:jar:tests:0.98.1,
> > > > >> org.apache.hbase:hbase-hadoop2-compat:jar:0.98.1,
> > > > >> org.apache.hbase:hbase-hadoop2-compat:jar:tests:0.98.1: Could
> > not find
> > > > >> artifact org.apache.hbase:hbase-common:jar:0.98.1 in apache
> > release
> > > > >> (https://repository.apache.org/content/repositories/releases/)
> > ->
> > > > >> [Help 1]
> > > > >>
> > > > >> Naturally, such artifacts aren't available in the aforementioned
> > repo,
> > > > >> because
> > > > >> only *-hadoop1 and *-hadoop2 versions are there. I am using the
> > same maven
> > > > >> command as before. But even the standard release command
> > > > >> mvn  clean install -DskipTests site assembly:single -Prelease
> > > > >>
> > > > >> doesn't work anymore. I am building with clean ~/.m2, i

Re: 0.98.1 has stopped building [Was: What happened to tar.gz assembly in 0.98?]

2014-04-29 Thread Konstantin Boudnik
Do you guys think it'd make sense to bind site:run to the install phase? This way
- in theory at least - site would always execute install first? It might
be too small of an issue though, which might be totally worked around with two
sequential maven runs.

Cos

On Tue, Apr 29, 2014 at 02:54PM, Konstantin Boudnik wrote:
> Thank you guys - this is the most responsive community I've seen around! 
> 
> On Tue, Apr 29, 2014 at 02:48PM, Andrew Purtell wrote:
> > Ah, and if I read to the end ( sorry - sometimes don't do that when annoyed
> > - unrelated to this :-) ), then indeed you did clean ~/.m2 and then
> > attempted a list of targets including site.
> > 
> > Install jars to the local Maven cache before invoking javadoc or site
> > targets.
> > 
> > 
> > On Tue, Apr 29, 2014 at 2:44 PM, Andrew Purtell  wrote:
> > 
> > > Is this because we frob the Maven versions after rolling the source
> > > tarball? See https://hbase.apache.org/book/releasing.html
> > >
> > > Do 'mvn -DskipTests clean install' first, then something that pulls in
> > > javadoc or site targets and you should be fine. My guess is you did that 
> > > at
> > > one point, then moved to a different box or somehow wiped out local 0.98.1
> > > artifacts in your ~/.m2.
> > >
> > >
> > > On Tue, Apr 29, 2014 at 2:38 PM, Konstantin Boudnik 
> > > wrote:
> > >
> > >> This is a bit weird, but since last night I can't build 0.98.1 anymore
> > >> because
> > >> of the following error:
> > >>
> > >> [ERROR] Failed to execute goal
> > >> org.apache.maven.plugins:maven-site-plugin:3.3:site (default-site) on
> > >> project
> > >> hbase: failed to get report for
> > >> org.apache.maven.plugins:maven-javadoc-plugin:
> > >> Failed to execute goal on project hbase-server: Could not resolve
> > >> dependencies
> > >> for project org.apache.hbase:hbase-server:jar:0.98.1: The following
> > >> artifacts
> > >> could not be resolved: org.apache.hbase:hbase-common:jar:0.98.1,
> > >> org.apache.hbase:hbase-protocol:jar:0.98.1,
> > >> org.apache.hbase:hbase-client:jar:0.98.1,
> > >> org.apache.hbase:hbase-prefix-tree:jar:0.98.1,
> > >> org.apache.hbase:hbase-common:jar:tests:0.98.1,
> > >> org.apache.hbase:hbase-hadoop-compat:jar:0.98.1,
> > >> org.apache.hbase:hbase-hadoop-compat:jar:tests:0.98.1,
> > >> org.apache.hbase:hbase-hadoop2-compat:jar:0.98.1,
> > >> org.apache.hbase:hbase-hadoop2-compat:jar:tests:0.98.1: Could not 
> > >> find
> > >> artifact org.apache.hbase:hbase-common:jar:0.98.1 in apache release
> > >> (https://repository.apache.org/content/repositories/releases/) ->
> > >> [Help 1]
> > >>
> > >> Naturally, such artifacts aren't available in the aforementioned repo,
> > >> because
> > >> only *-hadoop1 and *-hadoop2 versions are there. I am using the same 
> > >> maven
> > >> command as before. But even the standard release command
> > >> mvn  clean install -DskipTests site assembly:single -Prelease
> > >>
> > >> doesn't work anymore. I am building with clean ~/.m2, if it makes any
> > >> difference. Anyone here has a similar experience?
> > >>
> > >> Thanks in advance,
> > >>   Cos
> > >>
> > >> On Tue, Apr 22, 2014 at 07:26PM, Konstantin Boudnik wrote:
> > >> > Right, thanks! Also, it moved + plus set of artifacts got changed. No
> > >> matter -
> > >> > I got the packaging working again, so once HBase has 0.98.2 out of the
> > >> door it
> > >> > will be right there in Bigtop 0.8.0. Appreciate the help, guys!
> > >> >
> > >> > Cos
> > >> >
> > >> > On Tue, Apr 22, 2014 at 04:20PM, Ted Yu wrote:
> > >> > > Please use assembly:single
> > >> > >
> > >> > > See http://hbase.apache.org/book.html#maven.release
> > >> > >
> > >> > > Cheers
> > >> > >
> > >> > >
> > >> > > On Tue, Apr 22, 2014 at 4:17 PM, Konstantin Boudnik 
> > >> wrote:
> > >> > >
> > >> > > > Guys,
> > >> > > >
> > >> > > > can anyone point me to the right direction about the tar.gz binary
> > >> > > > assembly in
> > >> > > > 0.98? When we were building bigtop releases out of 0.94.x we were
> > >> expecting
> > >> > > > target/hbase*tar.gz to be present.
> > >> > > >
> > >> > > > It seems the things have changes somewhat 'cause not
> > >> assembly:assembly nor
> > >> > > > package targets create the tarballs anymore. Am I doing something
> > >> wrong?
> > >> > > > Sorry
> > >> > > > if it has been answered elsewhere...
> > >> > > >
> > >> > > > --
> > >> > > > Regards,
> > >> > > >   Cos
> > >>
> > >
> > 
> > -- 
> > Best regards,
> > 
> >- Andy
> > 
> > Problems worthy of attack prove their worth by hitting back. - Piet Hein
> > (via Tom White)


Re: 0.98.1 has stopped building [Was: What happened to tar.gz assembly in 0.98?]

2014-04-29 Thread Konstantin Boudnik
Thank you guys - this is the most responsive community I've seen around! 

On Tue, Apr 29, 2014 at 02:48PM, Andrew Purtell wrote:
> Ah, and if I read to the end ( sorry - sometimes don't do that when annoyed
> - unrelated to this :-) ), then indeed you did clean ~/.m2 and then
> attempted a list of targets including site.
> 
> Install jars to the local Maven cache before invoking javadoc or site
> targets.
> 
> 
> On Tue, Apr 29, 2014 at 2:44 PM, Andrew Purtell  wrote:
> 
> > Is this because we frob the Maven versions after rolling the source
> > tarball? See https://hbase.apache.org/book/releasing.html
> >
> > Do 'mvn -DskipTests clean install' first, then something that pulls in
> > javadoc or site targets and you should be fine. My guess is you did that at
> > one point, then moved to a different box or somehow wiped out local 0.98.1
> > artifacts in your ~/.m2.
> >
> >
> > On Tue, Apr 29, 2014 at 2:38 PM, Konstantin Boudnik wrote:
> >
> >> This is a bit weird, but since last night I can't build 0.98.1 anymore
> >> because
> >> of the following error:
> >>
> >> [ERROR] Failed to execute goal
> >> org.apache.maven.plugins:maven-site-plugin:3.3:site (default-site) on
> >> project
> >> hbase: failed to get report for
> >> org.apache.maven.plugins:maven-javadoc-plugin:
> >> Failed to execute goal on project hbase-server: Could not resolve
> >> dependencies
> >> for project org.apache.hbase:hbase-server:jar:0.98.1: The following
> >> artifacts
> >> could not be resolved: org.apache.hbase:hbase-common:jar:0.98.1,
> >> org.apache.hbase:hbase-protocol:jar:0.98.1,
> >> org.apache.hbase:hbase-client:jar:0.98.1,
> >> org.apache.hbase:hbase-prefix-tree:jar:0.98.1,
> >> org.apache.hbase:hbase-common:jar:tests:0.98.1,
> >> org.apache.hbase:hbase-hadoop-compat:jar:0.98.1,
> >> org.apache.hbase:hbase-hadoop-compat:jar:tests:0.98.1,
> >> org.apache.hbase:hbase-hadoop2-compat:jar:0.98.1,
> >> org.apache.hbase:hbase-hadoop2-compat:jar:tests:0.98.1: Could not find
> >> artifact org.apache.hbase:hbase-common:jar:0.98.1 in apache release
> >> (https://repository.apache.org/content/repositories/releases/) ->
> >> [Help 1]
> >>
> >> Naturally, such artifacts aren't available in the aforementioned repo,
> >> because
> >> only *-hadoop1 and *-hadoop2 versions are there. I am using the same maven
> >> command as before. But even the standard release command
> >> mvn  clean install -DskipTests site assembly:single -Prelease
> >>
> >> doesn't work anymore. I am building with clean ~/.m2, if it makes any
> >> difference. Anyone here has a similar experience?
> >>
> >> Thanks in advance,
> >>   Cos
> >>
> >> On Tue, Apr 22, 2014 at 07:26PM, Konstantin Boudnik wrote:
> >> > Right, thanks! Also, it moved + plus set of artifacts got changed. No
> >> matter -
> >> > I got the packaging working again, so once HBase has 0.98.2 out of the
> >> door it
> >> > will be right there in Bigtop 0.8.0. Appreciate the help, guys!
> >> >
> >> > Cos
> >> >
> >> > On Tue, Apr 22, 2014 at 04:20PM, Ted Yu wrote:
> >> > > Please use assembly:single
> >> > >
> >> > > See http://hbase.apache.org/book.html#maven.release
> >> > >
> >> > > Cheers
> >> > >
> >> > >
> >> > > On Tue, Apr 22, 2014 at 4:17 PM, Konstantin Boudnik 
> >> wrote:
> >> > >
> >> > > > Guys,
> >> > > >
> >> > > > can anyone point me to the right direction about the tar.gz binary
> >> > > > assembly in
> >> > > > 0.98? When we were building bigtop releases out of 0.94.x we were
> >> expecting
> >> > > > target/hbase*tar.gz to be present.
> >> > > >
> >> > > > It seems the things have changes somewhat 'cause not
> >> assembly:assembly nor
> >> > > > package targets create the tarballs anymore. Am I doing something
> >> wrong?
> >> > > > Sorry
> >> > > > if it has been answered elsewhere...
> >> > > >
> >> > > > --
> >> > > > Regards,
> >> > > >   Cos
> >>
> >
> 
> -- 
> Best regards,
> 
>- Andy
> 
> Problems worthy of attack prove their worth by hitting back. - Piet Hein
> (via Tom White)


0.98.1 has stopped building [Was: What happened to tar.gz assembly in 0.98?]

2014-04-29 Thread Konstantin Boudnik
This is a bit weird, but since last night I can't build 0.98.1 anymore because
of the following error:

[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-site-plugin:3.3:site (default-site) on project
hbase: failed to get report for org.apache.maven.plugins:maven-javadoc-plugin:
Failed to execute goal on project hbase-server: Could not resolve dependencies
for project org.apache.hbase:hbase-server:jar:0.98.1: The following artifacts
could not be resolved: org.apache.hbase:hbase-common:jar:0.98.1,
org.apache.hbase:hbase-protocol:jar:0.98.1,
org.apache.hbase:hbase-client:jar:0.98.1,
org.apache.hbase:hbase-prefix-tree:jar:0.98.1,
org.apache.hbase:hbase-common:jar:tests:0.98.1,
org.apache.hbase:hbase-hadoop-compat:jar:0.98.1,
org.apache.hbase:hbase-hadoop-compat:jar:tests:0.98.1,
org.apache.hbase:hbase-hadoop2-compat:jar:0.98.1,
org.apache.hbase:hbase-hadoop2-compat:jar:tests:0.98.1: Could not find
artifact org.apache.hbase:hbase-common:jar:0.98.1 in apache release
(https://repository.apache.org/content/repositories/releases/) -> [Help 1]

Naturally, such artifacts aren't available in the aforementioned repo, because
only *-hadoop1 and *-hadoop2 versions are there. I am using the same maven
command as before. But even the standard release command 
mvn  clean install -DskipTests site assembly:single -Prelease

doesn't work anymore. I am building with clean ~/.m2, if it makes any
difference. Anyone here has a similar experience?

Thanks in advance,
  Cos

On Tue, Apr 22, 2014 at 07:26PM, Konstantin Boudnik wrote:
> Right, thanks! Also, it moved + plus set of artifacts got changed. No matter -
> I got the packaging working again, so once HBase has 0.98.2 out of the door it
> will be right there in Bigtop 0.8.0. Appreciate the help, guys!
> 
> Cos
> 
> On Tue, Apr 22, 2014 at 04:20PM, Ted Yu wrote:
> > Please use assembly:single
> > 
> > See http://hbase.apache.org/book.html#maven.release
> > 
> > Cheers
> > 
> > 
> > On Tue, Apr 22, 2014 at 4:17 PM, Konstantin Boudnik  wrote:
> > 
> > > Guys,
> > >
> > > can anyone point me to the right direction about the tar.gz binary
> > > assembly in
> > > 0.98? When we were building bigtop releases out of 0.94.x we were 
> > > expecting
> > > target/hbase*tar.gz to be present.
> > >
> > > It seems the things have changes somewhat 'cause not assembly:assembly nor
> > > package targets create the tarballs anymore. Am I doing something wrong?
> > > Sorry
> > > if it has been answered elsewhere...
> > >
> > > --
> > > Regards,
> > >   Cos
> > >
> > >


Re: What happened to tar.gz assembly in 0.98?

2014-04-22 Thread Konstantin Boudnik
Right, thanks! Also, it moved, plus the set of artifacts got changed. No matter -
I got the packaging working again, so once HBase has 0.98.2 out of the door it
will be right there in Bigtop 0.8.0. Appreciate the help, guys!

Cos
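
For anyone landing here later, the sequence that ended up working is roughly (a
sketch; the exact output location under hbase-assembly/target is my assumption,
not something spelled out in this thread):

  # install the module jars first, then build the binary assembly
  mvn clean install -DskipTests
  mvn package assembly:single -DskipTests
  # the binary tarball should land somewhere like:
  #   hbase-assembly/target/hbase-*-bin.tar.gz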

On Tue, Apr 22, 2014 at 04:20PM, Ted Yu wrote:
> Please use assembly:single
> 
> See http://hbase.apache.org/book.html#maven.release
> 
> Cheers
> 
> 
> On Tue, Apr 22, 2014 at 4:17 PM, Konstantin Boudnik  wrote:
> 
> > Guys,
> >
> > can anyone point me to the right direction about the tar.gz binary
> > assembly in
> > 0.98? When we were building bigtop releases out of 0.94.x we were expecting
> > target/hbase*tar.gz to be present.
> >
> > It seems the things have changes somewhat 'cause not assembly:assembly nor
> > package targets create the tarballs anymore. Am I doing something wrong?
> > Sorry
> > if it has been answered elsewhere...
> >
> > --
> > Regards,
> >   Cos
> >
> >


What happened to tar.gz assembly in 0.98?

2014-04-22 Thread Konstantin Boudnik
Guys,

can anyone point me in the right direction about the tar.gz binary assembly in
0.98? When we were building bigtop releases out of 0.94.x we were expecting
target/hbase*tar.gz to be present. 

It seems things have changed somewhat 'cause neither assembly:assembly nor
package targets create the tarballs anymore. Am I doing something wrong? Sorry
if it has been answered elsewhere...

-- 
Regards,
  Cos



Re: Hbase 0.98.1 (+HBASE-10488) for Bigtop 0.8.0

2014-04-18 Thread Konstantin Boudnik
Thanks Enis - appreciate the input! Please let us (dev@bigtop) know if we can
help with testing, etc.

Cos

On Fri, Apr 18, 2014 at 05:48PM, Enis Söztutar wrote:
> Andrew is trying to follow monthly release cycles with 0.98.x, so I would
> expect 0.98.2 soon.
> 
> I've raised the question in the jira.
> 
> Enis
> 
> 
> On Fri, Apr 18, 2014 at 12:35 AM, Konstantin Boudnik  wrote:
> 
> > Hey guys.
> >
> > Is there any chance that 0.98.2 (?) will be released any time soon? If so,
> > would it be possible to have
> >   https://issues.apache.org/jira/browse/HBASE-10488
> > backported? We are trying to have Bigtop 0.8.0 released next months and
> > this
> > one is certainly a blocker for us.
> >
> > Any thoughts?
> > --
> > Thanks in advance,
> >   Cos
> >
> >


Hbase 0.98.1 (+HBASE-10488) for Bigtop 0.8.0

2014-04-17 Thread Konstantin Boudnik
Hey guys.

Is there any chance that 0.98.2 (?) will be released any time soon? If so,
would it be possible to have
  https://issues.apache.org/jira/browse/HBASE-10488
backported? We are trying to have Bigtop 0.8.0 released next month and this
one is certainly a blocker for us.

Any thoughts?
-- 
Thanks in advance,
  Cos



Bringing consensus based strong consistency into HBase

2014-04-07 Thread Konstantin Boudnik
Guys,

As some of you might have noticed, there is a number of new JIRAs opened
recently that aim at abstracting and separating ZK out of HBase guts
and making it an implementation detail, rather than the center of attention
for some parts of HBase.

I would like to send around a short document written by Mikhail Antonov that
is trying to clarify a couple of points about this whole effort.
People are asking me about this off-line, so I decided to send this around so
we can have a lively and wider discussion about this initiative and the
development.

Here's the JIRA and the link to the PDF document.
  https://issues.apache.org/jira/browse/HBASE-10866
  https://issues.apache.org/jira/secure/attachment/12637957/HBaseConsensus.pdf

The umbrella JIRA for this effort is here
  https://issues.apache.org/jira/browse/HBASE-10909

-- 
Regards,
  Cos



Re: Change HBase version string from 0.95.0-SNAPSHOT to 0.95.0-hadoop1-SNAPSHOT and 0.95.0-hadoop2-SNAPSHOT?

2013-03-29 Thread Konstantin Boudnik
I also would rather abstain from the classifiers approach, despite this
particular use case being their original intention (see a good discussion here
http://is.gd/PH1FPU). I found them particularly painful in an Ivy environment,
but it might be "somebody else's problem" ;)

Maybe version strings aren't that bad an idea after all? All other things
being equal, of course :/

Cos

On Fri, Mar 29, 2013 at 03:59PM, Enis Söztutar wrote:
> I remember trying the classifiers approach and not being able to get it to
> work, but I don't remember the specifics. If we can get it working, then I
> would vote for the classifiers option as long as it also works with test
> jars and maven deploy.
> I think it was in this issue:
> https://issues.apache.org/jira/browse/HBASE-6929
> 
> I can't see the classifier in the published pom:
> http://repo1.maven.org/maven2/org/apache/avro/avro/1.7.3/avro-1.7.3.pom
> 
> Enis
> 
> 
> On Fri, Mar 29, 2013 at 1:48 PM, Stack  wrote:
> 
> > On Fri, Mar 29, 2013 at 1:29 PM, Wing Yew Poon 
> > wrote:
> >
> > > I am far from being a maven expert and I also may not understand your
> > > requirements. However, as far as the avro-mapred jar goes, I am able
> > > to get the one I want in my projects by using something like
> > >
> > > <dependency>
> > >   <groupId>org.apache.avro</groupId>
> > >   <artifactId>avro-mapred</artifactId>
> > >   <version>1.7.3</version>
> > >   <classifier>hadoop2</classifier>
> > > </dependency>
> > >
> > > If all your users need is the ability to get the correct version
> > > (hadoop1 vs hadoop2) of hbase jars for their projects, I imagine doing
> > > it the same way as avro-mapred should work. But please consult your
> > > maven experts.
> > >
> > >
> > This could work.
> >
> > I would go through everywhere we produce a jar and add in the envClassifer
> > doo-hickey.  Would have to run build twice to get both sets of artifacts.
> >
> > @Bigtoppers Would doing the above be enough for your purposes?
> >
> > St.Ack
> >
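
A sketch of what "run build twice" could look like from the packaging side,
assuming a profile-driven classifier along the lines discussed above (the
hadoop.profile property name is my assumption, not something settled in this
thread):

  # one pass per Hadoop line, each attaching its own classifier to the jars
  mvn clean install -DskipTests                       # hadoop1 artifacts
  mvn clean install -DskipTests -Dhadoop.profile=2.0  # hadoop2 artifacts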


signature.asc
Description: Digital signature


[jira] [Resolved] (HBASE-7973) API changes between 0.92.1 and 0.94.2 breaking downstream API

2013-03-01 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik resolved HBASE-7973.
---

Resolution: Invalid

Apparently, there's a way to work around this change in the client code. I am 
closing this as invalid. The solution has been found in BIGTOP-858

> API changes between 0.92.1 and 0.94.2 breaking downstream API
> -
>
> Key: HBASE-7973
> URL: https://issues.apache.org/jira/browse/HBASE-7973
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.92.1
>    Reporter: Konstantin Boudnik
>
> {{HFile.getWriterFactory(Configuration)}} has migrated to 
> {{HFile.getWriterFactoryNoCache(Configuration)}} and 
> {{HFile.WriterFactory.createWriter(...)}} is now protected in 
> {{HFile.WriterFactory}}
> As the result, Bigtop integration tests won't compile.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-7973) API changes between 0.92.1 and 0.94.2 breaking downstream API changes

2013-03-01 Thread Konstantin Boudnik (JIRA)
Konstantin Boudnik created HBASE-7973:
-

 Summary: API changes between 0.92.1 and 0.94.2 breaking downstream 
API changes
 Key: HBASE-7973
 URL: https://issues.apache.org/jira/browse/HBASE-7973
 Project: HBase
  Issue Type: Bug
  Components: HFile
Affects Versions: 0.92.1
Reporter: Konstantin Boudnik


{{HFile.getWriterFactory(Configuration)}} has migrated to 
{{HFile.getWriterFactoryNoCache(Configuration)}} and 
{{HFile.WriterFactory.createWriter(...)}} is now protected in 
{{HFile.WriterFactory}}

As the result, Bigtop integration tests won't compile.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: Publishing jars for hbase compiled against hadoop 0.23.x/hadoop 2.0.x

2012-05-16 Thread Konstantin Boudnik
See my comments inlined...

On Wed, May 16, 2012 at 04:06PM, Andrew Purtell wrote:
> [cc bigtop-dev]
> 
> On Wed, May 16, 2012 at 3:22 PM, Jesse Yates  wrote:
> > +1 on a small number of supported versions with different classifiers that
> > only span a limited api skew to avoid a mountain of reflection. Along with
> > that, support for the builds via jenkins testing.
> 
> and
> 
> >> I think HBase should consider having a single blessed set of
> >> dependencies and only one build for a given release,
> >
> > This would be really nice, but seems a bit unreasonable given that we are
> > the "hadoop database" (if not in name, at least by connotation). I think
> > limiting our support to the latest X versions (2-3?) is reasonable given
> > consistent APIs
> 
> I was talking release mechanics not source/compilation/testing level
> support. Hence the suggestion for multiple Jenkins projects for the
> dependency versions we care about. That care could be scoped like you
> suggest.
> 
> I like what Bigtop espouses: carefully constructed snapshots of the
> world, well tested in total. Seems easier to manage then laying out
> various planes from increasingly higher dimensional spaces. If they
> get traction we can act as a responsible upstream project. As for our
> official release, we'd have maybe two, I'll grant you that, Hadoop 1
> and Hadoop 2.
> 
> X=2 will be a challenge. It's not just the Hadoop version that could
> change, but the versions of all of its dependencies, SLF4J, Guava,
> JUnit, protobuf, etc. etc. etc.; and that could happen at any time on
> point releases. If we are supporting the whole series of 1.x and 2.x
> releases, then that could be a real pain. Guava is a good example, it
> was a bit painful for us to move from 9 to 11 but not so for core as
> far as I know.

One of the by-design advantages of the stack-assembly-validation automation
approach (which BigTop incidentally took ;) is that it provides relatively
no-effort creation of stack updates when a single dependency or multiple
dependencies get changed. Yes, it requires a certain upfront time investment to
make the first base stack definition. From there it should be pretty much downhill.

We have applied a similar approach to the creation of x86 Solaris-based stacks
for Sun Microsystems' rack-mount servers; it was a hoot and saved us a
tremendous amount of money back then (not that it helped Sun in the long run).

>  - we should be very careful in picking which new versions
> > we support and when. A lot of the pain with the hadoop distributions has
> > been then wildly shifting apis, making a lot of work painful for handling
> > different versions (distcp/backup situations come to mind here, among other
> > things.
> 
> We also have test dependencies on interfaces that are LimitedPrivate
> at best. It's a source of friction.
> 
> > +1 on the idea of having classifiers for the different versions we actually
> > release as proper artifacts, and should be completely reasonable to enable
> > via profiles. I'd have to double check as to _how_ people would specify
> > that classifier/version of hbase from the maven repo, but it seems entirely
> > possible (my worry here is about the collison with the -tests and -sources
> > classifiers, which are standard mvn conventions for different builds).
> > Otherwise, with maven it is very reasonable to have people hosting profiles
> > for versions that they want to support - generally, this means just another
> > settings.xml file that includes another profile that people can activate on
> > their own, when they want to build against their own version.
> 
> This was a question I had, maybe you know. What happens if you want to
> build something like ---tests or
> -source? Would that work? Otherwise we'd have to add a suffix using
> property substitutions in profiles, right?

*-tests artifacts in maven are somewhat special animals and can't be depended
upon in the common sense. This was actually the reason that BigTop chose to
make/use regular binary jar artifacts and use a name designator for their
test-related nature.

With regards,
  Cos

> Best regards,
> 
> - Andy
> 
> Problems worthy of attack prove their worth by hitting back. - Piet
> Hein (via Tom White)


signature.asc
Description: Digital signature


Re: proper pace for JIRA integration

2011-11-01 Thread Konstantin Boudnik
On Tue, Nov 01, 2011 at 02:57PM, Andrew Purtell wrote:
> Roman,
> 
> > Personally, I think it is extremely unfair to refer to .22 as
> > DoA/ignored. Unless, of course, such statement can be backed up with facts.
> 
> 
> Yeah, DoA is harsh; I meant more like "abandoned at release". Similar to 
> 0.21.
> 
> Well that is my question, really. Is it?
> 
> We've heard that CDH4 is going to start from something a lot closer to
> 0.23 than 0.22, that Hortonworks is committed to 0.23. It seems 0.23 is the
> future, and an RC may be happening as early as the end of this year, i.e. in
> the next month or so.

Hadoop contributions are coming from many places, so I don't see why it is
so important to know how close a particular distro will be to this or that ASF
release. Just the other day we had this long thread about contributions on
Hadoop general@ - it might be interesting to re-read it.

> Given recent history and the above described commitments, I think there is
> confusion about where/if 0.22 fits in. People will work on what inspires
> them, but it seems the center of gravity has already moved beyond 0.22. Is
> that a fair statement?

Perhaps I am missing the point of this discussion, but according to Hadoop
bylaws (available to anyone from https://hadoop.apache.org/bylaws.html):

Product Release

When a release of one of the project's products is ready, a vote is required
to accept the release as an official release of the project.

Lazy Majority of active PMC members



What does anything in this thread have to do with the quote above? Hadoop is
released when it is ready, and a release is made official by a lazy majority vote.

If downstream projects decide not to participate in supporting a release - well,
this is sad, but this happens. And this means that a particular release will
have a somewhat smaller stack available to our users. 

--
  Take care,
Konstantin (Cos) Boudnik
2CAC 8312 4870 D885 8616  6115 220F 6980 1F27 E622

Disclaimer: Opinions expressed in this email are those of the author, and do
not necessarily represent the views of any company the author might be
affiliated with at the moment of writing.

> > The facts that I have are such that there will be a reasonably large
> > deployment of Hadoop 0.22 and HBase at EBay makes me believe that such
> > a combination should be of interest to HBase community.
> 
> 
> Then I'm sure the eBay guys will have that interest. :-)
> 
> Best regards,
> 
> 
> - Andy
> 
> Problems worthy of attack prove their worth by hitting back. - Piet Hein (via 
> Tom White)
> 
> 
> ----- Original Message -
> > From: Roman Shaposhnik 
> > To: dev@hbase.apache.org; Andrew Purtell ; Konstantin 
> > Shvachko 
> > Cc: Konstantin Boudnik 
> > Sent: Tuesday, November 1, 2011 2:16 PM
> > Subject: Re: proper pace for JIRA integration
> > 
> > On Tue, Nov 1, 2011 at 1:00 PM, Andrew Purtell  
> > wrote:
> >>> There will be some deployment of .22 in big shops as far as I know.
> >> 
> >>  Who are these "big shops"?
> > 
> > EBay is a good example here I'm CCing Konstantin if you want to find
> > out more details.
> > 
> >>  AFAIK, compared to 0.20.x or 0.23, 0.22 has a number of regressions.
> > 
> > I'm NOT sure I agree as far as 0.23 goes. The state of 0.23 is an
> > early alpha. There's
> > lots of work that still need to go into it before it graduates from an
> > alpha stage. As such
> > it is premature to talk about its quality. Now, the question of how
> > long it takes .23
> > to get to the same point of HDFS stability that .22 has -- is an open
> > one. And I'd
> > rather hear what Konstantin has to say about it.
> > 
> >>  We need to assess how healthy 0.22 is.
> > 
> > It is pretty healthy. If anybody is looking for a stable and up-to-date
> > HDFS feature set -- it is the one I'd recommend taking a look at. It is
> > assumed that MR is slower in .22 compared to 20.205, but frankly,
> > I haven't seen the numbers yet, so I can't speculate.
> > 
> > I've run a reasonable # of integration tests on that combo and I liked
> > the results.
> > 
> >>  How much time/attention should the HBase community pay to what might be a 
> > DoA/ignored release?
> >>  Or is that in fact the case (that it is DoA...)?
> > 
> > Personally, I think it is extremely unfair to refer to .22 as
> > DoA/ignored. Unless,
> > of course, such statement can be backed up with facts.
> > 
> > The facts that I have are such that there will be a reasonably large
> > deployment of
> > Hadoop 0.22 and HBase at EBay makes me believe that such a combination
> > should be of interest to HBase community.
> > 
> > Thanks,
> > Roman.
> >