Re: Interesting bug report

2016-01-26 Thread John Vines
That sounds like great follow-on work (clients register ephemerally so the
master can tell clients to disconnect, etc.), but I think just having a
client that can get a better read on the state of the system is a
phenomenal starting point.
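For illustration, a rough sketch of what that ephemeral registration might look
like, using a plain ZooKeeper client rather than Accumulo's API (the
/accumulo/clients path and the class itself are hypothetical; waiting for the
session to connect and error handling are omitted):

  import java.util.concurrent.CountDownLatch;
  import org.apache.zookeeper.CreateMode;
  import org.apache.zookeeper.Watcher.Event.EventType;
  import org.apache.zookeeper.ZooDefs;
  import org.apache.zookeeper.ZooKeeper;

  public class EphemeralClientRegistration {
    public static void main(String[] args) throws Exception {
      CountDownLatch disconnectRequested = new CountDownLatch(1);
      // Default watcher: if the master deletes our registration node, treat it
      // as a request to disconnect. Session expiration has the same effect.
      ZooKeeper zk = new ZooKeeper("zkhost:2181", 30_000, event -> {
        if (event.getType() == EventType.NodeDeleted) {
          disconnectRequested.countDown();
        }
      });
      // Hypothetical per-instance registration path; parent nodes must already exist.
      String me = zk.create("/accumulo/clients/client-", new byte[0],
          ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
      zk.exists(me, true); // arm a watch on our own node so we see a deletion
      disconnectRequested.await();
      zk.close();
    }
  }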

On Tue, Jan 26, 2016 at 11:52 AM Keith Turner  wrote:

> On Mon, Jan 25, 2016 at 10:59 AM, John Vines  wrote:
>
> > Of course, it's when I hit send that I realize that we could mitigate by
> > making the client aware of the master state, and if the system is shut
> down
> >
>
> That's a good idea.  We should consider the use case where someone wants to shut
> Accumulo down and bring it back up immediately.  We could allow an admin to
> decide what they want clients to do when they shutdown Accumulo (clients
> die, wait, anything else?).  This could be accomplished with supplemental
> information in ZK or other goal states.
>
>
> > (which was the case for that ticket), then it can fail quickly with a
> > descriptive message.
> >
> > On Mon, Jan 25, 2016 at 10:58 AM John Vines  wrote:
> >
> > > While we want to be fault tolerant, there's a point where we want to
> > > eventually fail. I know we have a couple never ending retry loops that
> > need
> > > to be addressed (https://issues.apache.org/jira/browse/ACCUMULO-1268),
> > > but I'm unsure if queries suffer from this problem.
> > >
> > > Unfortunately, fault tolerance is a bit at odds with instant
> notification
> > > of system issues, since some of the fault tolerance is temporally
> > oriented.
> > > And that ticket lacks context on whether it never fails out vs. fails out
> > > eventually (but takes too long for the user).
> > >
> > >
> > > On Sun, Jan 24, 2016 at 7:46 PM Christopher 
> wrote:
> > >
> > >> I saw this bug report:
> > >> https://bugzilla.redhat.com/show_bug.cgi?id=1300987
> > >>
> > >> As far as I can tell, they are reporting normal, expected, and desired
> > >> behavior of Accumulo as a bug. But, is there something we can do
> > upstream
> > >> to enable fast failures in the case of Accumulo not running to support
> > >> their use case?
> > >>
> > >> Personally, I don't see how we can reliably detect within the client
> > that
> > >> the cluster is down or up, vs. a normal temporary server
> > outage/migration,
> > >> since there is no single point of authority for Accumulo to
> > >> determine its overall operating status if ZooKeeper is running and no
> > >> other
> > >> servers are. Am I wrong?
> > >>
> > >
> >
>


Re: Interesting bug report

2016-01-25 Thread John Vines
Of course, it's when I hit send that I realize that we could mitigate by
making the client aware of the master state, and if the system is shut down
(which was the case for that ticket), then it can fail quickly with a
descriptive message.
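As a rough sketch of that fail-fast check (the lock path below is illustrative
and leaves out the instance id; only the plain ZooKeeper calls are real API):

  import java.util.List;
  import org.apache.zookeeper.ZooKeeper;

  public class MasterStateCheck {
    // Returns false if nothing is holding a master lock, which is a strong hint
    // that the system was shut down rather than a server briefly migrating.
    // (A missing lock node throws KeeperException.NoNodeException, which callers
    // could treat the same way.)
    static boolean masterAppearsUp(ZooKeeper zk) throws Exception {
      List<String> lockCandidates = zk.getChildren("/accumulo/masters/lock", false);
      return !lockCandidates.isEmpty();
    }

    static void failFastIfShutDown(ZooKeeper zk) throws Exception {
      if (!masterAppearsUp(zk)) {
        throw new IllegalStateException(
            "Accumulo appears to be shut down (no master lock in ZooKeeper)");
      }
    }
  }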

On Mon, Jan 25, 2016 at 10:58 AM John Vines  wrote:

> While we want to be fault tolerant, there's a point where we want to
> eventually fail. I know we have a couple never ending retry loops that need
> to be addressed (https://issues.apache.org/jira/browse/ACCUMULO-1268),
> but I'm unsure if queries suffer from this problem.
>
> Unfortunately, fault tolerance is a bit at odds with instant notification
> of system issues, since some of the fault tolerance is temporally oriented.
> And that ticket lacks context on whether it never fails out vs. fails out
> eventually (but takes too long for the user).
>
>
> On Sun, Jan 24, 2016 at 7:46 PM Christopher  wrote:
>
>> I saw this bug report:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1300987
>>
>> As far as I can tell, they are reporting normal, expected, and desired
>> behavior of Accumulo as a bug. But, is there something we can do upstream
>> to enable fast failures in the case of Accumulo not running to support
>> their use case?
>>
>> Personally, I don't see how we can reliably detect within the client that
>> the cluster is down or up, vs. a normal temporary server outage/migration,
>> since there is no single point of authority for Accumulo to
>> determine its overall operating status if ZooKeeper is running and no
>> other
>> servers are. Am I wrong?
>>
>


Re: Interesting bug report

2016-01-25 Thread John Vines
While we want to be fault tolerant, there's a point where we want to
eventually fail. I know we have a couple of never-ending retry loops that need
to be addressed (https://issues.apache.org/jira/browse/ACCUMULO-1268), but
I'm unsure if queries suffer from this problem.

Unfortunately, fault tolerance is a bit at odds with instant notification
of system issues, since some of the fault tolerance is temporally oriented.
And that ticket lacks context on whether it never fails out vs. fails out
eventually (but takes too long for the user).
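A sketch of what "eventually fail" could look like, as opposed to an unbounded
loop (doQuery and the exception type here are placeholders, not Accumulo API):

  import java.io.IOException;
  import java.util.concurrent.TimeUnit;

  public class BoundedRetry {
    static String queryWithDeadline() throws IOException, InterruptedException {
      long deadline = System.currentTimeMillis() + TimeUnit.MINUTES.toMillis(10);
      long backoffMs = 250;
      while (true) {
        try {
          return doQuery(); // placeholder for the operation being retried
        } catch (IOException transientFailure) {
          if (System.currentTimeMillis() >= deadline) {
            // give up with a descriptive error instead of retrying forever
            throw new IOException("giving up after retrying for 10 minutes", transientFailure);
          }
          Thread.sleep(backoffMs);
          backoffMs = Math.min(backoffMs * 2, 30_000); // exponential backoff, capped
        }
      }
    }

    static String doQuery() throws IOException { // placeholder
      throw new IOException("simulated transient failure");
    }
  }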

On Sun, Jan 24, 2016 at 7:46 PM Christopher  wrote:

> I saw this bug report: https://bugzilla.redhat.com/show_bug.cgi?id=1300987
>
> As far as I can tell, they are reporting normal, expected, and desired
> behavior of Accumulo as a bug. But, is there something we can do upstream
> to enable fast failures in the case of Accumulo not running to support
> their use case?
>
> Personally, I don't see how we can reliably detect within the client that
> the cluster is down or up, vs. a normal temporary server outage/migration,
> since there is no single point of authority for Accumulo to
> determine its overall operating status if ZooKeeper is running and no other
> servers are. Am I wrong?
>


Re: [DISCUSS] Enable PreCommit build

2016-01-08 Thread John Vines
+1

On Fri, Jan 8, 2016 at 12:58 PM Keith Turner  wrote:

> +1
>
> On Fri, Jan 8, 2016 at 12:24 PM, Josh Elser  wrote:
>
> > Hi,
> >
> > Per the other thread "Yetus Accumulo 'Personality'" [1], I'd like to see
> > what people think about turning this on by default.
> >
> > I've been talking to Sean in chat today, who suggested that we
> > get our own JIRA acct instead of the "Hadoop QA" user. Aside from that,
> I'm
> > pretty happy with this.
> >
> > There is likely further tweaking we can do (e.g. multi-JDK builds, try the
> > sunny-day ITs). One big concern is the presence of a -1/+1 in a CTR
> > community. We would need some docs to be clear that the PreCommit comment
> > is a tool for vetting contributions, not a bar that must be satisfied
> prior
> > to commit (this is a simple website update).
> >
> > Anywho -- if you have opinions, please let them be heard now. If there
> > isn't any argument against, I'll move ahead with this in time.
> >
> >
> > [1]
> >
> http://mail-archives.apache.org/mod_mbox/accumulo-dev/201601.mbox/%3c568b5bfc.2080...@gmail.com%3E
> >
>


Re: [VOTE] Accumulo 1.5.4-rc2

2015-09-18 Thread John Vines
+1, license and NOTICE issues resolved (agreed that ACCUMULO-4003 isn't a
blocker)

On Fri, Sep 18, 2015 at 1:40 PM Sean Busbey  wrote:

> +1
>
> * checked sigs
> * checked hashes
> * verified src tarball matches RC tag
> * verified all LICENSE and NOTICE files (hit ACCUMULO-4003, but it's a
> minor issue IMHO)
>
>
>
> On Tue, Sep 15, 2015 at 4:45 PM, Josh Elser  wrote:
> > Accumulo Developers,
> >
> > Please consider the following candidate for Accumulo 1.5.4.
> >
> > Git Commit:
> > 151db23e7d95cf77c08023ee18b7e524f78286fc
> > Branch:
> > 1.5.4-rc2
> >
> > If this vote passes, a gpg-signed tag will be created using:
> > git tag -f -m 'Apache Accumulo 1.5.4' -s 1.5.4
> > 151db23e7d95cf77c08023ee18b7e524f78286fc
> >
> > Staging repo:
> >
> https://repository.apache.org/content/repositories/orgapacheaccumulo-1041
> > Source (official release artifact):
> >
> https://repository.apache.org/content/repositories/orgapacheaccumulo-1041/org/apache/accumulo/accumulo/1.5.4/accumulo-1.5.4-src.tar.gz
> > Binary:
> >
> https://repository.apache.org/content/repositories/orgapacheaccumulo-1041/org/apache/accumulo/accumulo/1.5.4/accumulo-1.5.4-bin.tar.gz
> > (Append ".sha1", ".md5", or ".asc" to download the signature/hash for a
> > given artifact.)
> >
> > All artifacts were built and staged with:
> > mvn release:prepare && mvn release:perform
> >
> > Signing keys are available at https://www.apache.org/dist/accumulo/KEYS
> > (Expected fingerprint: ABC8914C675FAD3FA74F39B2D146D62CAB471AE9)
> >
> > Release notes (in progress) can be found at
> > https://accumulo.apache.org/release_notes/1.5.4
> >
> > Please vote one of:
> > [ ] +1 - I have verified and accept...
> > [ ] +0 - I have reservations, but not strong enough to vote against...
> > [ ] -1 - Because..., I do not accept...
> > ... these artifacts as the 1.5.4 release of Apache Accumulo.
> >
> > This vote will end on Fri Sep 18 22:00:00 UTC 2015
> > (Fri Sep 18 18:00:00 EDT 2015 / Fri Sep 18 15:00:00 PDT 2015)
> >
> > Thanks!
> >
> > P.S. Hint: download the whole staging repo with
> > wget -erobots=off -r -l inf -np -nH \
> >
> >
> https://repository.apache.org/content/repositories/orgapacheaccumulo-1041/
> > # note the trailing slash is needed
>
>
>
> --
> Sean
>


Re: [VOTE] Accumulo 1.5.4-rc1

2015-09-10 Thread John Vines
-1

I'm with Sean on this one. Ignoring now-known licensing issues because we
hadn't handled them in the past is not a valid excuse.

On Thu, Sep 10, 2015 at 2:27 PM Sean Busbey  wrote:

> As members of the PMC, we're required to verify that all releases we approve
> meet ASF licensing policy[1], so I don't consider the issues "minor".
>
> Mistakenly violating policy in the past is a different kind of problem than
> moving forward to knowingly violate it.
>
> In particular, not all of the bundled works have copyrights that are
> covered under a donation to the Foundation. If we distribute e.g. the
> accumulo-core binary jar in its current state the foundation will be
> committing willful copyright infringement. The binary tarball (and I'd
> imagine the rpm/deb files) have similar problems because we'd be violating
> the terms of the included works' respective licenses.
>
>
> [1]:
> http://www.apache.org/dev/release.html#what-must-every-release-contain
>
> On Thu, Sep 10, 2015 at 11:51 AM, Billie Rinaldi  >
> wrote:
>
> > Agreed.
> >
> > On Thu, Sep 10, 2015 at 9:47 AM, Christopher 
> wrote:
> >
> > > I think the license issues are relatively small compared to the
> bugfixes,
> > > especially since we're really trying to close out 1.5.x development.
> So,
> > > given the options, I'd prefer to pass RC1, and make the license fixes
> in
> > > 1.6.x and later, as applicable.
> > >
> > > On Thu, Sep 10, 2015 at 12:28 PM Josh Elser  wrote:
> > >
> > > > Thanks again for taking the time to inspect things so thoroughly,
> Sean.
> > > >
> > > > Others who have already voted, I'd ask for your opinion on whether we
> > > > should sink this release (instead of me blindly going by majority
> > rule).
> > > >
> > > > Personally, I'm presently of the opinion that, given the severity of
> > the
> > > > bug(s) fixed in this release already, RC1 should pass. Considering
> that
> > > > we've been making releases like this for quite some time w/o issue
> and
> > > > 1.5 is all but dead, let's push this release out, (again) table 1.5
> and
> > > > then make these improvements to 1.6 before we cut an RC there
> > when
> > > > we have time to thoroughly vet the changes (instead of the 11th hour
> of
> > > > a vote).
> > > >
> > > > If there's a need for lengthy discussion, let's break this off the
> VOTE
> > > > thread (I leave this message here for visibility).
> > > >
> > > > - Josh
> > > >
> > > > Sean Busbey wrote:
> > > > > -1
> > > > >
> > > > > * signatures check out
> > > > > * checksums match
> > > > > * licensing errors noted in ACCUMULO-3988
> > > > >
> > > > > On Sat, Sep 5, 2015 at 4:27 PM, Josh Elser
> > wrote:
> > > > >
> > > > >> Accumulo Developers,
> > > > >>
> > > > >> Please consider the following candidate for Accumulo 1.5.4.
> > > > >>
> > > > >> Git Commit:
> > > > >>  12a1041dcbb7f3b10543c305f27ece4b0d65ab9c
> > > > >> Branch:
> > > > >>  1.5.4-rc1
> > > > >>
> > > > >> If this vote passes, a gpg-signed tag will be created using:
> > > > >>  git tag -f -m 'Apache Accumulo 1.5.4' -s 1.5.4
> > > > >> 12a1041dcbb7f3b10543c305f27ece4b0d65ab9c
> > > > >>
> > > > >> Staging repo:
> > > > >>
> > > >
> > >
> >
> https://repository.apache.org/content/repositories/orgapacheaccumulo-1039
> > > > >> Source (official release artifact):
> > > > >>
> > > >
> > >
> >
> https://repository.apache.org/content/repositories/orgapacheaccumulo-1039/org/apache/accumulo/accumulo/1.5.4/accumulo-1.5.4-src.tar.gz
> > > > >> Binary:
> > > > >>
> > > >
> > >
> >
> https://repository.apache.org/content/repositories/orgapacheaccumulo-1039/org/apache/accumulo/accumulo/1.5.4/accumulo-1.5.4-bin.tar.gz
> > > > >> (Append ".sha1", ".md5", or ".asc" to download the signature/hash
> > for
> > > a
> > > > >> given artifact.)
> > > > >>
> > > > >> All artifacts were built and staged with:
> > > > >>  mvn release:prepare&&  mvn release:perform
> > > > >>
> > > > >> Signing keys are available at
> > > https://www.apache.org/dist/accumulo/KEYS
> > > > >> (Expected fingerprint: ABC8914C675FAD3FA74F39B2D146D62CAB471AE9)
> > > > >>
> > > > >> Release notes (in progress) can be found at
> > > > >> https://accumulo.apache.org/release_notes/1.5.4
> > > > >>
> > > > >> Please vote one of:
> > > > >> [ ] +1 - I have verified and accept...
> > > > >> [ ] +0 - I have reservations, but not strong enough to vote
> > against...
> > > > >> [ ] -1 - Because..., I do not accept...
> > > > >> ... these artifacts as the 1.5.4 release of Apache Accumulo.
> > > > >>
> > > > >> This vote will end on Thurs Sep  10 23:00:00 UTC 2015
> > > > >> (Thurs Sep  10 20:00:00 EDT 2015 / Thurs Sep  10 17:00:00 PDT
> 2015)
> > > > >>
> > > > >> Thanks!
> > > > >>
> > > > >> P.S. Hint: download the whole staging repo with
> > > > >>  wget -erobots=off -r -l inf -np -nH \
> > > > >>
> > > > >>
> > > >
> > >
> >
> https://repository.apache.org/content/repositories/orgapacheaccumulo-1039/
> > > > >>  # note the trailing slash is needed
> > > > >>
> > > > >
>

Re: Separate "Performance Tests" execution documentation

2015-07-07 Thread John Vines
README in the top of the testing module?

On Tue, Jul 7, 2015 at 12:18 PM Josh Elser  wrote:

> I committed https://issues.apache.org/jira/browse/ACCUMULO-3929 yesterday.
>
> Had a thought today that it might be beneficial to write down how it
> works somewhere, but I don't know where would be best? Any suggestions?
>


Re: Start scripts and address/hostname

2015-06-26 Thread John Vines
This may be tangential, but I heard the scripts for Hadoop 3 had a massive
rewrite. Perhaps they can be consulted for desired behavior?

On Fri, Jun 26, 2015 at 2:03 PM Eric Newton  wrote:

> Our ops people use the "start-here.sh" scripts to bring services back up
> after failures.  That's a great convenience: they don't have to remember
> which hosts are supposed to run which service.
>
> In sympathy with your hostname troubles: the inconsistent use of hostname
> determination causes those tservers started with start-all.sh and
> start-here.sh to have different hostnames (shortname and fqdn,
> respectively). This has something to do with how our DNS is set-up (or
> hardcoded) because I cannot reproduce the effect in my development
> environment.
>
> As a consequence of this, the quoting hell of ssh, and the limitations of
> writing code in Bash, I'm avoiding The Scripts as much as possible.  I am
> happy you are taking this on.
>
> -Eric
>
> On Thu, Jun 25, 2015 at 1:24 PM, Josh Elser  wrote:
>
> > I've been on a tear within our scripts in the last day. I've been moving
> > towards getting an accumulo-daemon.sh with some reasonable start, stop,
> etc
> > semantics (ala Hadoop). This can also be done without affecting the
> > existing start-server.sh, start-here.sh, etc scripts.
> >
> > This hypothetical accumulo-daemon.sh script is a close feel to what an
> > init.d script would do. It alters the state of a server process on the
> > local node. One thing I'm struggling to wrangle is the current ability
> the
> > scripts/configs provide to control the interface that the server
> processes
> > bind to.
> >
> > For example, 127.0.0.1 in the `slaves` file will result in a TabletServer
> > that processes external to the local node cannot talk to. I know there
> are
> > likely fringe cases (multiple NICs, bonded interfaces) which I don't fully
> > understand well enough to ensure proper support.
> >
> > Is anyone an expert here and could give some advice about the kinds of
> > configuration that the scripts should provide to let users run Accumulo
> > how they want to? I would like to move away from having to pass the
> > hostname/IP to scripts locally (e.g. `accumulo-daemon.sh start tserver`
> > would start a tserver locally), but I don't want to break an existing
> > deployment.
> >
> > - Josh
> >
>


Re: 1.7 branch creation

2015-04-15 Thread John Vines
I would attach cat pictures, but the email alias strips them off.

Aside from that, sounds good.

On Wed, Apr 15, 2015 at 3:22 PM Josh Elser  wrote:

> End of week? Any objections/complaints/opinions/cat-pictures?
>
> I'm thinking our branches would then look like
>
> 1.5 -> 1.5.3-SNAPSHOT
> 1.6 -> 1.6.3-SNAPSHOT
> 1.7 -> 1.7.0-SNAPSHOT
> master -> 1.8.0-SNAPSHOT
>
> I think we still want to hold off on 2.0.0 because it's too amorphous
> right now and w/o any concrete code ready to go into such a branch.
>
> - Josh
>


Re: Redundant code in ZKAuthorizor::initializeSecurity ?

2015-02-19 Thread John Vines
Yup, they look like they don't belong there.

On Thu, Feb 19, 2015 at 1:06 PM, Srikanth Viswanathan 
wrote:

> There appears to be redundant code in the ZKAuthorizor's
> 'initializeSecurity' method. It's initializing a bunch of permissions
> that are unnecessary in the authorizor. The code seems to have been
> replicated from ZKPermHandler. I'd be willing to submit a patch if
> someone could confirm that I'm not missing something.
>
> Code in question:
>
>
> https://github.com/apache/accumulo/blob/master/server/base/src/main/java/org/apache/accumulo/server/security/handler/ZKAuthorizor.java
>
> (lines 87-94 inclusive)
>
> Thanks.
>
> Srikanth
>


Re: Fw: accumulo clusters

2015-01-13 Thread John Vines
ZooKeeper should always be run in odd quantities. For such a small
cluster, though, you may just want to dual-purpose your workers as zk nodes.
I really can't say more without knowing more about the underlying hardware.

On Tue, Jan 13, 2015 at 4:08 AM, panqing...@163.com 
wrote:

> HI
>
> I want to set up an Accumulo cluster; how should I build it? I have already
> built Hadoop with 1 master, 2 slaves, and 2 ZooKeeper nodes.
>
>
> panqing...@163.com
>


Re: accumulo master is down

2015-01-13 Thread John Vines
Another option is to change the default port in your accumulo-site.xml file
(master.port.client) to a port that isn't being hit by that scan.
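For example, a snippet like the following in accumulo-site.xml would move the
master's client service off the default port (the value here is just an
example; pick something not covered by the scan):

  <property>
    <name>master.port.client</name>
    <value>10999</value>
  </property>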

On Tue, Jan 13, 2015 at 10:41 AM, John Vines  wrote:

> This is an indicator that something is hitting that port with a different
> protocol. Make sure you don't have anything doing port scanning,
> particularly for the default ports (monitor is ). Be aware that some
> pre-packaged Hadoop releases do have monitoring software that may be doing
> this.
>
> On Tue, Jan 13, 2015 at 3:36 AM, panqing...@163.com 
> wrote:
>
>> HI
>> there is an error
>>
>> 2015-01-13 16:20:25,930 [util.CustomNonBlockingServer$CustomFrameBuffer] 
>> ERROR: Read a frame size of 1195725856, which is bigger than the maximum 
>> allowable buffer size for ALL connections.
>>
>> 2015-01-13 16:20:25,948 [util.CustomNonBlockingServer$CustomFrameBuffer] 
>> ERROR: Read a frame size of 1330664521, which is bigger than the maximum 
>> allowable buffer size for ALL connections.
>>
>> 2015-01-13 16:20:25,951 [util.CustomNonBlockingServer$CustomFrameBuffer] 
>> ERROR: Read a frame size of 1195725856, which is bigger than the maximum 
>> allowable buffer size for ALL connections.
>>
>>
>> can you help me ?
>> --
>> panqing...@163.com
>>
>
>


Re: accumulo master is down

2015-01-13 Thread John Vines
This is an indicator that something is hitting that port with a different
protocol. Make sure you don't have anything doing port scanning,
particularly for the default ports (monitor is ). Be aware that some
pre-packaged Hadoop releases do have monitoring software that may be doing
this.

On Tue, Jan 13, 2015 at 3:36 AM, panqing...@163.com 
wrote:

> HI
> there is an error
>
> 2015-01-13 16:20:25,930 [util.CustomNonBlockingServer$CustomFrameBuffer] 
> ERROR: Read a frame size of 1195725856, which is bigger than the maximum 
> allowable buffer size for ALL connections.
>
> 2015-01-13 16:20:25,948 [util.CustomNonBlockingServer$CustomFrameBuffer] 
> ERROR: Read a frame size of 1330664521, which is bigger than the maximum 
> allowable buffer size for ALL connections.
>
> 2015-01-13 16:20:25,951 [util.CustomNonBlockingServer$CustomFrameBuffer] 
> ERROR: Read a frame size of 1195725856, which is bigger than the maximum 
> allowable buffer size for ALL connections.
>
>
> can you help me ?
> --
> panqing...@163.com
>


Re: [VOTE][LAZY] Format all supported branches

2015-01-07 Thread John Vines
+1

On Wed, Jan 7, 2015 at 3:12 PM, Christopher  wrote:

> To make it easier to apply some minimal checkstyle rules for ACCUMULO-3451,
> I'm announcing my intentions to do a full, one-time, auto-format and
> organize imports on all our supported branches (1.5, 1.6, and master) to
> bring us up to some degree of compliance with our agreed-upon formatting
> standards.
>
> Benefits:
> To have additional checks, in particular against javadoc problems and other
> common trivial warnings in the build.
> To ensure less divergence from our agreed-upon formatting standards.
> Formatting first makes it much less tedious and easier on me to add these
> checks to the build.
>
> Issues I've considered:
> I will deal with all the merge conflicts.
> I will ignore generated thrift code.
> Conflicts with new code in people's branches should be minimal (and easily
> resolved by formatting according to our standards).
> Regarding concerns about history tracking, in general, each format change
> is small, but they are numerous. So, the impact on tracking history should
> be very minimal (you'll see things like a brace moved to the same line as
> the else statement it is associated with... stuff that won't generally
> affect your ability to debug).
> I'll also do a "format only" commit, separately from any substantive
> changes regarding the rule changes, so the mass formatting change will
> happen in one place, and it will also be easy to revert, if absolutely
> necessary.
>
> I'll give this 24 hours (it can be reverted if somebody objects after
> that).
>
> --
> Christopher L Tubbs II
> http://gravatar.com/ctubbsii
>


Re: Checkstyle Notes

2014-12-23 Thread John Vines
I share sentiments with Josh. I'm all for a more universally supported
standard. Going back down to 100 characters feels limiting and I'm not a
fan, but I'm not going to let that stop me.

On Tue, Dec 23, 2014 at 1:36 PM, Josh Elser  wrote:
>
> Thanks for taking the time to do this. I know it can be rather monotonous.
>
> I've read over the Google Java style guide before and liked (just about)
> everything I read. The line limit of 80-100 chars *feels* limiting to me
> (but I've also been told by others my senior that you very much get used to
> that limitation and grow to like it).
>
> Our existing style recommendation is just that (not a requirement), so
> tooling in Maven would be important to me for however we might move.
> Something that will tell me when I did something dumb or knuckle-headed
> will help to remind me.
>
> Christopher wrote:
>
>> Devs,
>>
>> So, I spent some time yesterday investigating the application of a
>> CheckStyle configuration to apply to our builds.
>>
>> I started with the CheckStyle implementation[1] of the Google Style
>> Guide[2], and a fully formatted Accumulo code base (according to our
>> current Eclipse formatter standards[3]). I then configured the
>> maven-checkstyle-plugin[4][5] to run during a build and built repeatedly,
>> whittled down some of the strictness of the Google style and fixed some of
>> our errors, to get at a minimal checkstyle configuration that might be
>> able
>> to add some additional checks during our build. I found at least one
>> bug[6]
>> doing this and evaluating the resulting warnings, and documented some of
>> the common problems.
>>
>> Now, I'm not necessarily arguing for switching to adopting the Google
>> Style
>> Guide for our standards today (we can consider voting on that in a new
>> thread). However, I think it might be beneficial to adopt the Google Style
>> Guide, especially because that project has formatters for several standard
>> IDEs, and because the latest version of CheckStyle has a ruleset which
>> conforms to it, which enables the Maven tooling. So, if anybody is
>> interested in us doing that, I would certainly get behind it (and could
>> help make it happen: reformatting, merging, and applying build tooling).
>> One thing I like about the Google Style Guide, in particular, is that it
>> doesn't just provide tools to enforce it, it also documents the standard
>> with words and reasoning, so we have something to consult, even if you're
>> not using an IDE or any of the tools.
>>
>> As a result of this, I am going to add something to start enforcing checks
>> for some of the javadoc problems. We can add more checks later, or we can
>> adopt the Google Style.
>>
>> In the meantime, here's a list of the most common style problems I
>> observed
>> (note: these aren't necessarily "bad", just non-conforming):
>>
>> Line length violations (we're currently using 160, but Google Style Guide
>> allows 80 or 100, with 100 enforced in the tools by default; many of our
>> lines exceed the 160).
>> Escaped unicode in string literals (especially when they represent a
>> printable character which should just be inserted directly, instead of
>> encoded).
>> Use of lower-case L for long literals (prefer 10L over 10l).
>> Bad import order, use of wildcards.
>> Missing empty line between license header and package declaration.
>> Extra whitespace around generic parameters.
>> Bad package/class/method/parameter/local variable naming conventions
>> (like
>> single character variables outside of loops, or use of
>> underscore/non-standard camelCase).
>> Missing optional braces for blocks (see a representatively bad example at
>> CompactionInfo:lines 74-82).
>> Keywords out of order from JLS recommendations (e.g. static public vs.
>> public static).
>> Overloaded methods aren't grouped together.
>> Multiple statements per line, multiple variable declarations per line.
>> Array brackets should be on type (String[] names;), not variable name
>> (String names[];).
>> Switch statements sometimes omit default case or fall through.
>> Distance between local variable declaration and the first time it is used
>> is sometimes too long (many lines).
>> All uppercase abbreviations in names aren't avoided (LockID should be
>> LockId).
>> Operators should start the next line, when wrapping (like concatenation of
>> two long string literals).
>> Empty catch blocks (catch blocks should throw an AssertionError if it's
>> supposed to be impossible to reach, or a comment about why it's safe to
>> ignore).
>> Some files have more than one top-level class. They should be in their own
>> file.
>> Braces not on same line as statement (like "} else" with "{" on next
>> line).
>>
>> Missing javadocs on public methods.
>> Missing paragraph markers (<p>) in javadoc paragraphs.
>> Missing javadoc sentence descriptions.
>> Javadoc tags out of order from standards.
>> Missing HTML close tags in javadocs (an opening tag with no matching closing tag).
>> Use of < and > in javadoc instead of &lt; and &gt; or {@co
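To make a few of the items above concrete, here's a small illustrative snippet
(not taken from the codebase), showing a non-conforming form next to the
preferred one:

  public class StyleExamples {
    static final long AVOID_LITERAL = 10l;  // lower-case L long literal
    static final long PREFER_LITERAL = 10L;

    static public void avoidKeywordOrder() {}  // keywords out of JLS order
    public static void preferKeywordOrder() {}

    void arrays() {
      String avoid[] = new String[0];  // brackets on the variable name
      String[] prefer = new String[0];
    }
  }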

Re: Review Request 27198: ACCUMULO-3236 introducing cloneInto feature

2014-12-23 Thread John Vines


> On Dec. 22, 2014, 11:42 p.m., Christopher Tubbs wrote:
> > I'm curious about the expected behavior for when the source data contains 
> > deletes. It seems that those deletes will be merged into the destination, 
> > clobbering any non-deleted data in the destination. Clobbering (or at 
> > least, some sensible merge, based on timestamp) is probably expected 
> > behavior for non-deleted data from the source, but deleted data is tricky, 
> > because the user has no insight into the current state of deletes in the 
> > source. They can "cloneInto" in one instant and have deletes clobber their 
> > destination, and then do it again without any clobbering (because the 
> > source has since compacted).
> > 
> > To prevent this behavior, it seems you'd need to either 1) force a 
> > compaction, or 2) add metadata to the value of the file entry instructing 
> > the tserver to ignore deleted data (delete markers and any data it is 
> > hiding) coming from that particular file, before merging with the other 
> > files, similar to how we re-write timestamps when reading data from 
> > bulk-imported files.
> > 
> > This unexpected behavior can occur even if all other parts of the cloneInto 
> > feature were working perfectly. It gets even more complicated and 
> > unexpected when we begin considering that the user's view of the data is 
> > different than what the underlying data actually is in other ways, as the 
> > result of iterators. It may not be obvious that they are injecting the 
> > underlying data, and not their view of it, into another table. How do you 
> > propose to address that? With additional permissions to ensure not just 
> > everybody can perform this action? With a check to verify that the same 
> > iterators exist on the destination table that existed on the source table?
> 
> John Vines wrote:
> I see your concerns as no different than those for a standard bulk import. And 
> these concerns are currently handled by the user doing the bulk import needing 
> to be aware of the data they're calling the command on.
> 
> Christopher Tubbs wrote:
> I agree that they aren't much different than bulk import. However, this 
> is not a "bulkImportFrom" feature, it is a "cloneInto" feature, and it's 
> *very* different behavior than clone. With bulk import, also, we have a 
> special permission, that can be reserved to those who understand its 
> particulars. It's also the case that typical bulk import use cases involve 
> writing data that will not have deletes, or will not come from a source with 
> a different view than the underlying files.
> 
> I'm just concerned about user expectations, and how we address them.
> 
> John Vines wrote:
> What if we require the source table to be compacted down to 1 file per 
> tablet to ease concerns about deletes?
> 
> And I totally agree with the permissions. Would you be okay with 
> requiring bulk import on the destination, or would you rather it have its own 
> permission?
> 
> John Vines wrote:
> Actually, compacting down to 1 file is expensive and may not be required 
> if there are no deletes in play. I think this is best left to users of the 
> system to quiesce to their needs.
> 
> Christopher Tubbs wrote:
> Yeah, compacting would be expensive. It's fine to leave it to the user... 
> but we do need to communicate the expected behavior and pitfalls very 
> thoroughly, and perhaps lock the feature down with a special permission, like 
> bulk import has, to prevent non-permitted users from performing this 
> function, who may not understand the pitfalls.
> 
> John Vines wrote:
> I see a few ways to achieve this, curious what your opinion is-
> For the destination table I see 2 options-
> 1. Just utilize bulk import because it has known pitfalls that need to be 
> accomodated
> 2. Introduce new permission for this
> 
> And then for the source table, there's a few more options-
> 1. READ
> 2. if we introduce a new permission for the destination table, use that 
> same permission
> 3. A new permission for cloning FROM
> 
> Christopher Tubbs wrote:
> For the destination table:
> 
> If we utilize the bulk import permission, I'd say we'd definitely want to 
> change the name of this operation to reflect that it's more of a "bulk 
> import" variation than a "clone" variation, at least from the user's 
> perspective (even though the implementation details are more like clone).
> 
> For the source table:
> 
> I d

Re: Review Request 27198: ACCUMULO-3236 introducing cloneInto feature

2014-12-23 Thread John Vines


> On Dec. 22, 2014, 11:42 p.m., Christopher Tubbs wrote:
> > I'm curious about the expected behavior for when the source data contains 
> > deletes. It seems that those deletes will be merged into the destination, 
> > clobbering any non-deleted data in the destination. Clobbering (or at 
> > least, some sensible merge, based on timestamp) is probably expected 
> > behavior for non-deleted data from the source, but deleted data is tricky, 
> > because the user has no insight into the current state of deletes in the 
> > source. They can "cloneInto" in one instant and have deletes clobber their 
> > destination, and then do it again without any clobbering (because the 
> > source has since compacted).
> > 
> > To prevent this behavior, it seems you'd need to either 1) force a 
> > compaction, or 2) add metadata to the value of the file entry instructing 
> > the tserver to ignore deleted data (delete markers and any data it is 
> > hiding) coming from that particular file, before merging with the other 
> > files, similar to how we re-write timestamps when reading data from 
> > bulk-imported files.
> > 
> > This unexpected behavior can occur even if all other parts of the cloneInto 
> > feature were working perfectly. It gets even more complicated and 
> > unexpected when we begin considering that the user's view of the data is 
> > different than what the underlying data actually is in other ways, as the 
> > result of iterators. It may not be obvious that they are injecting the 
> > underlying data, and not their view of it, into another table. How do you 
> > propose to address that? With additional permissions to ensure not just 
> > everybody can perform this action? With a check to verify that the same 
> > iterators exist on the destination table that existed on the source table?
> 
> John Vines wrote:
> I see your concerns as no different than those for a standard bulk import. And 
> these concerns are currently handled by the user doing the bulk import needing 
> to be aware of the data they're calling the command on.
> 
> Christopher Tubbs wrote:
> I agree that they aren't much different than bulk import. However, this 
> is not a "bulkImportFrom" feature, it is a "cloneInto" feature, and it's 
> *very* different behavior than clone. With bulk import, also, we have a 
> special permission, that can be reserved to those who understand its 
> particulars. It's also the case that typical bulk import use cases involve 
> writing data that will not have deletes, or will not come from a source with 
> a different view than the underlying files.
> 
> I'm just concerned about user expectations, and how we address them.
> 
> John Vines wrote:
> What if we require the source table to be compacted down to 1 file per 
> tablet to ease concerns about deletes?
> 
> And I totally agree with the permissions. Would you be okay with 
> requiring bulk import on the destination, or would you rather it have its own 
> permission?
> 
> John Vines wrote:
> Actually, compacting down to 1 file is expensive and may not be required 
> if there are no deletes in play. I think this is best left to users of the 
> system to quiesce to their needs.
> 
> Christopher Tubbs wrote:
> Yeah, compacting would be expensive. It's fine to leave it to the user... 
> but we do need to communicate the expected behavior and pitfalls very 
> thoroughly, and perhaps lock the feature down with a special permission, like 
> bulk import has, to prevent non-permitted users from performing this 
> function, who may not understand the pitfalls.

I see a few ways to achieve this, curious what your opinion is-
For the destination table I see 2 options-
1. Just utilize bulk import because it has known pitfalls that need to be 
accomodated
2. Introduce new permission for this

And then for the source table, there's a few more options-
1. READ
2. if we introduce a new permission for the destination table, use that same 
permission
3. A new permission for cloning FROM


- John


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27198/#review65843
---


On Dec. 22, 2014, 7:59 p.m., John Vines wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27198/
> ---
> 
> (Updated Dec. 22, 2014, 7:59 p.m.)
> 
> 
> Review request for accumu

Re: Review Request 27198: ACCUMULO-3236 introducing cloneInto feature

2014-12-22 Thread John Vines


> On Dec. 22, 2014, 11:42 p.m., Christopher Tubbs wrote:
> > I'm curious about the expected behavior for when the source data contains 
> > deletes. It seems that those deletes will be merged into the destination, 
> > clobbering any non-deleted data in the destination. Clobbering (or at 
> > least, some sensible merge, based on timestamp) is probably expected 
> > behavior for non-deleted data from the source, but deleted data is tricky, 
> > because the user has no insight into the current state of deletes in the 
> > source. They can "cloneInto" in one instant and have deletes clobber their 
> > destination, and then do it again without any clobbering (because the 
> > source has since compacted).
> > 
> > To prevent this behavior, it seems you'd need to either 1) force a 
> > compaction, or 2) add metadata to the value of the file entry instructing 
> > the tserver to ignore deleted data (delete markers and any data it is 
> > hiding) coming from that particular file, before merging with the other 
> > files, similar to how we re-write timestamps when reading data from 
> > bulk-imported files.
> > 
> > This unexpected behavior can occur even if all other parts of the cloneInto 
> > feature were working perfectly. It gets even more complicated and 
> > unexpected when we begin considering that the user's view of the data is 
> > different than what the underlying data actually is in other ways, as the 
> > result of iterators. It may not be obvious that they are injecting the 
> > underlying data, and not their view of it, into another table. How do you 
> > propose to address that? With additional permissions to ensure not just 
> > everybody can perform this action? With a check to verify that the same 
> > iterators exist on the destination table that existed on the source table?
> 
> John Vines wrote:
> I see your concerns as no different than those for a standard bulk import. And 
> these concerns are currently handled by the user doing the bulk import needing 
> to be aware of the data they're calling the command on.
> 
> Christopher Tubbs wrote:
> I agree that they aren't much different than bulk import. However, this 
> is not a "bulkImportFrom" feature, it is a "cloneInto" feature, and it's 
> *very* different behavior than clone. With bulk import, also, we have a 
> special permission, that can be reserved to those who understand its 
> particulars. It's also the case that typical bulk import use cases involve 
> writing data that will not have deletes, or will not come from a source with 
> a different view than the underlying files.
> 
> I'm just concerned about user expectations, and how we address them.
> 
> John Vines wrote:
> What if we require the source table to be compacted down to 1 file per 
> tablet to ease concerns about deletes?
> 
> And I totally agree with the permissions. Would you be okay with 
> requiring bulk import on the destination, or would you rather it have its own 
> permission?

Actually, compacting down to 1 file is expensive and may not be required if 
there are no deletes in play. I think this is best left to users of the system 
to quiesce to their needs.


- John


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27198/#review65843
---


On Dec. 22, 2014, 7:59 p.m., John Vines wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27198/
> ---
> 
> (Updated Dec. 22, 2014, 7:59 p.m.)
> 
> 
> Review request for accumulo.
> 
> 
> Bugs: ACCUMULO-3236
> https://issues.apache.org/jira/browse/ACCUMULO-3236
> 
> 
> Repository: accumulo
> 
> 
> Description
> ---
> 
> Includes all code to support feature, including thrift changes
> Includes minor code cleanup to TableLocator and items in the Bulk path to 
> remove signature items that are unused (arguments & exceptions)
> Includes renaming of some bulk import functions to clarify their purpose 
> (because they're now multi-purpose)
> 
> Patch is based on 1.6, but we can choose to make it target only 1.7 if we 
> choose (this conversation should be taken up on jira, not in RB)
> 
> 
> Diffs
> -
> 
>   
> core/src/main/java/org/apache/accumulo/core/client/admin/TableOperations.java 
> 97f538d 
>   
> core/src/main/java/org/apache/accumulo/

Re: Review Request 27198: ACCUMULO-3236 introducing cloneInto feature

2014-12-22 Thread John Vines


> On Dec. 22, 2014, 11:42 p.m., Christopher Tubbs wrote:
> > I'm curious about the expected behavior for when the source data contains 
> > deletes. It seems that those deletes will be merged into the destination, 
> > clobbering any non-deleted data in the destination. Clobbering (or at 
> > least, some sensible merge, based on timestamp) is probably expected 
> > behavior for non-deleted data from the source, but deleted data is tricky, 
> > because the user has no insight into the current state of deletes in the 
> > source. They can "cloneInto" in one instant and have deletes clobber their 
> > destination, and then do it again without any clobbering (because the 
> > source has since compacted).
> > 
> > To prevent this behavior, it seems you'd need to either 1) force a 
> > compaction, or 2) add metadata to the value of the file entry instructing 
> > the tserver to ignore deleted data (delete markers and any data it is 
> > hiding) coming from that particular file, before merging with the other 
> > files, similar to how we re-write timestamps when reading data from 
> > bulk-imported files.
> > 
> > This unexpected behavior can occur even if all other parts of the cloneInto 
> > feature were working perfectly. It gets even more complicated and 
> > unexpected when we begin considering that the user's view of the data is 
> > different than what the underlying data actually is in other ways, as the 
> > result of iterators. It may not be obvious that they are injecting the 
> > underlying data, and not their view of it, into another table. How do you 
> > propose to address that? With additional permissions to ensure not just 
> > everybody can perform this action? With a check to verify that the same 
> > iterators exist on the destination table that existed on the source table?
> 
> John Vines wrote:
> I see your concerns as no different than those for a standard bulk import. And 
> these concerns are currently handled by the user doing the bulk import needing 
> to be aware of the data they're calling the command on.
> 
> Christopher Tubbs wrote:
> I agree that they aren't much different than bulk import. However, this 
> is not a "bulkImportFrom" feature, it is a "cloneInto" feature, and it's 
> *very* different behavior than clone. With bulk import, also, we have a 
> special permission, that can be reserved to those who understand its 
> particulars. It's also the case that typical bulk import use cases involve 
> writing data that will not have deletes, or will not come from a source with 
> a different view than the underlying files.
> 
> I'm just concerned about user expectations, and how we address them.

What if we require the source table to be compacted down to 1 file per tablet 
to ease concerns about deletes?

And I totally agree with the permissions. Would you be okay with requiring bulk 
import on the destination, or would you rather it have its own permission?


- John


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27198/#review65843
---


On Dec. 22, 2014, 7:59 p.m., John Vines wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27198/
> ---
> 
> (Updated Dec. 22, 2014, 7:59 p.m.)
> 
> 
> Review request for accumulo.
> 
> 
> Bugs: ACCUMULO-3236
> https://issues.apache.org/jira/browse/ACCUMULO-3236
> 
> 
> Repository: accumulo
> 
> 
> Description
> ---
> 
> Includes all code to support feature, including thrift changes
> Includes minor code cleanup to TableLocator and items in the Bulk path to 
> remove signature items that are unused (arguments & exceptions)
> Includes renaming of some bulk import functions to clarify their purpose 
> (because they're now multi-purpose)
> 
> Patch is based on 1.6, but we can choose to make it target only 1.7 if we 
> choose (this conversation should be taken up on jira, not in RB)
> 
> 
> Diffs
> -
> 
>   
> core/src/main/java/org/apache/accumulo/core/client/admin/TableOperations.java 
> 97f538d 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/RootTabletLocator.java
>  97d476b 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/TableOperationsImpl.java
>  07df1bd 
>   core/src/main/java/org/apache/accumulo/core/client/impl/TabletLocator.java 
> e396d82 
> 

Re: Review Request 27198: ACCUMULO-3236 introducing cloneInto feature

2014-12-22 Thread John Vines


> On Oct. 30, 2014, 4:06 p.m., kturner wrote:
> > server/master/src/main/java/org/apache/accumulo/master/tableOps/CloneIntoTable.java,
> >  line 84
> > <https://reviews.apache.org/r/27198/diff/2/?file=741888#file741888line84>
> >
> > Are the tables offline?  If not, then the splits could change during 
> > this operation.
> 
> John Vines wrote:
> They are not, but this isn't an issue. The locks prevent a merge and the 
> bulk import code we're utilizing handles tablets splitting mid-operation.
> 
> kturner wrote:
> What about the following situation?  Seems like this has the possibility 
> to reintroduce deleted data (data deleted before clone into starts).
> 
> 1. Src table has splits {B, C, R}
> 2. Insert Row=N col=cf1,cq1 val=4 into Src tablet (C,R]
> 3. Insert Row=I col=cf1,cq1 val=6 into Src tablet (C,R]
> 4. Tablet Src (C,R] flushes creating file F8
> 5. Delete Row=I col=cf1,cq1 in Src tablet (C,R]
> 6. Tablet Src (C,R] flushes creating file F9
> 7. Clone into operation starts (Dest has splits {B, C, R})
> 8. Clone into checks that Src is subset of Dest
> 9. Src tablet (C,R] splits into (C,K] and (K,R].  Tablet (C,K] has files 
> F9,F8.  Tablet (K,R] has file F8.
> 10. Src tablet (C,K] does full compaction dropping delete marker for Row I
> 11. CloneInto imports only file F8 into Dest tablet (C,R]

Sounds like this is easily handled by doing split validation around the files: 
if the splits change while we're building the file->extent mapping, then we throw 
it out (or just the offending splits) and try again. Or offline the source table, 
which should work for my application.
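For illustration, a rough client-side sketch of that validation (buildFileMapping
below is a hypothetical helper; TableOperations.listSplits is the only Accumulo
API used):

  import java.util.Map;
  import java.util.SortedSet;
  import java.util.TreeSet;
  import org.apache.accumulo.core.client.Connector;
  import org.apache.hadoop.io.Text;

  public class SplitStableMapping {
    // Retry building the file->extent mapping if the source table's splits
    // change underneath us.
    static Map<String,String> mapFilesWithSplitCheck(Connector conn, String sourceTable)
        throws Exception {
      while (true) {
        SortedSet<Text> before = new TreeSet<>(conn.tableOperations().listSplits(sourceTable));
        Map<String,String> mapping = buildFileMapping(conn, sourceTable); // hypothetical helper
        SortedSet<Text> after = new TreeSet<>(conn.tableOperations().listSplits(sourceTable));
        if (before.equals(after)) {
          return mapping; // splits were stable while the mapping was built
        }
        // otherwise, throw the mapping out and try again
      }
    }

    // Placeholder for the real mapping logic (file -> destination extent).
    static Map<String,String> buildFileMapping(Connector conn, String sourceTable)
        throws Exception {
      throw new UnsupportedOperationException("hypothetical");
    }
  }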


- John


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27198/#review59201
---


On Dec. 22, 2014, 7:59 p.m., John Vines wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27198/
> ---
> 
> (Updated Dec. 22, 2014, 7:59 p.m.)
> 
> 
> Review request for accumulo.
> 
> 
> Bugs: ACCUMULO-3236
> https://issues.apache.org/jira/browse/ACCUMULO-3236
> 
> 
> Repository: accumulo
> 
> 
> Description
> ---
> 
> Includes all code to support feature, including thrift changes
> Includes minor code cleanup to TableLocator and items in the Bulk path to 
> remove signature items that are unused (arguments & exceptions)
> Includes renaming of some bulk import functions to clarify their purpose 
> (because they're now multi-purpose)
> 
> Patch is based on 1.6, but we can choose to make it target only 1.7 if we 
> choose (this conversation should be taken up on jira, not in RB)
> 
> 
> Diffs
> -
> 
>   
> core/src/main/java/org/apache/accumulo/core/client/admin/TableOperations.java 
> 97f538d 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/RootTabletLocator.java
>  97d476b 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/TableOperationsImpl.java
>  07df1bd 
>   core/src/main/java/org/apache/accumulo/core/client/impl/TabletLocator.java 
> e396d82 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/TabletLocatorImpl.java
>  c550f15 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/TimeoutTabletLocator.java
>  bcbe561 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/thrift/TableOperation.java
>  7716823 
>   
> core/src/main/java/org/apache/accumulo/core/client/mock/MockTableOperationsImpl.java
>  de19137 
>   
> core/src/main/java/org/apache/accumulo/core/client/mock/impl/MockTabletLocator.java
>  35f160f 
>   
> core/src/main/java/org/apache/accumulo/core/master/thrift/FateOperation.java 
> f65f552 
>   
> core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TabletClientService.java
>  2ba7674 
>   core/src/main/thrift/client.thrift 38a8076 
>   core/src/main/thrift/master.thrift 38e9227 
>   core/src/main/thrift/tabletserver.thrift 25e0b10 
>   
> core/src/test/java/org/apache/accumulo/core/client/admin/TableOperationsHelperTest.java
>  1d91574 
>   
> core/src/test/java/org/apache/accumulo/core/client/impl/TableOperationsHelperTest.java
>  02838ed 
>   
> server/base/src/main/java/org/apache/accumulo/server/client/BulkImporter.java 
> 4cc13a9 
>   
> server/base/src/main/java/org/apache/accumulo/server/client/ClientServiceHandler.java
>  fe17a62 
>   
> server/base/src/main/java/org/apache/accumulo/ser

Re: Review Request 27198: ACCUMULO-3236 introducing cloneInto feature

2014-12-22 Thread John Vines


> On Dec. 22, 2014, 11:42 p.m., Christopher Tubbs wrote:
> > I'm curious about the expected behavior for when the source data contains 
> > deletes. It seems that those deletes will be merged into the destination, 
> > clobbering any non-deleted data in the destination. Clobbering (or at 
> > least, some sensible merge, based on timestamp) is probably expected 
> > behavior for non-deleted data from the source, but deleted data is tricky, 
> > because the user has no insight into the current state of deletes in the 
> > source. They can "cloneInto" in one instant and have deletes clobber their 
> > destination, and then do it again without any clobbering (because the 
> > source has since compacted).
> > 
> > To prevent this behavior, it seems you'd need to either 1) force a 
> > compaction, or 2) add metadata to the value of the file entry instructing 
> > the tserver to ignore deleted data (delete markers and any data it is 
> > hiding) coming from that particular file, before merging with the other 
> > files, similar to how we re-write timestamps when reading data from 
> > bulk-imported files.
> > 
> > This unexpected behavior can occur even if all other parts of the cloneInto 
> > feature were working perfectly. It gets even more complicated and 
> > unexpected when we begin considering that the user's view of the data is 
> > different than what the underlying data actually is in other ways, as the 
> > result of iterators. It may not be obvious that they are injecting the 
> > underlying data, and not their view of it, into another table. How do you 
> > propose to address that? With additional permissions to ensure not just 
> > everybody can perform this action? With a check to verify that the same 
> > iterators exist on the destination table that existed on the source table?

I see your concerns as no different than those for a standard bulk import. And these 
concerns are currently handled by the user doing the bulk import needing to be 
aware of the data they're calling the command on.


- John


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27198/#review65843
---


On Dec. 22, 2014, 7:59 p.m., John Vines wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27198/
> ---
> 
> (Updated Dec. 22, 2014, 7:59 p.m.)
> 
> 
> Review request for accumulo.
> 
> 
> Bugs: ACCUMULO-3236
> https://issues.apache.org/jira/browse/ACCUMULO-3236
> 
> 
> Repository: accumulo
> 
> 
> Description
> ---
> 
> Includes all code to support feature, including thrift changes
> Includes minor code cleanup to TableLocator and items in the Bulk path to 
> remove signature items that are unused (arguments & exceptions)
> Includes renaming of some bulk import functions to clarify their purpose 
> (because they're now multi-purpose)
> 
> Patch is based on 1.6, but we can choose to make it target only 1.7 if we 
> choose (this conversation should be taken up on jira, not in RB)
> 
> 
> Diffs
> -
> 
>   
> core/src/main/java/org/apache/accumulo/core/client/admin/TableOperations.java 
> 97f538d 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/RootTabletLocator.java
>  97d476b 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/TableOperationsImpl.java
>  07df1bd 
>   core/src/main/java/org/apache/accumulo/core/client/impl/TabletLocator.java 
> e396d82 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/TabletLocatorImpl.java
>  c550f15 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/TimeoutTabletLocator.java
>  bcbe561 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/thrift/TableOperation.java
>  7716823 
>   
> core/src/main/java/org/apache/accumulo/core/client/mock/MockTableOperationsImpl.java
>  de19137 
>   
> core/src/main/java/org/apache/accumulo/core/client/mock/impl/MockTabletLocator.java
>  35f160f 
>   
> core/src/main/java/org/apache/accumulo/core/master/thrift/FateOperation.java 
> f65f552 
>   
> core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TabletClientService.java
>  2ba7674 
>   core/src/main/thrift/client.thrift 38a8076 
>   core/src/main/thrift/master.thrift 38e9227 
>   core/src/main/thrift/tabletserver.thrift 25e0b10 
>   
> core/src/test/java/org/apache/accumulo/core/client/admin/Table

Re: Review Request 27198: ACCUMULO-3236 introducing cloneInto feature

2014-12-22 Thread John Vines

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27198/#review65830
---


Don't bother with this latest patch (December 22); I made some oversights in it

- John Vines


On Dec. 22, 2014, 7:59 p.m., John Vines wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27198/
> ---
> 
> (Updated Dec. 22, 2014, 7:59 p.m.)
> 
> 
> Review request for accumulo.
> 
> 
> Bugs: ACCUMULO-3236
> https://issues.apache.org/jira/browse/ACCUMULO-3236
> 
> 
> Repository: accumulo
> 
> 
> Description
> ---
> 
> Includes all code to support feature, including thrift changes
> Includes minor code cleanup to TableLocator and items in the Bulk path to 
> remove signature items that are unused (arguments & exceptions)
> Includes renaming of some bulk import functions to clarify their purpose 
> (because they're now multi-purpose)
> 
> Patch is based on 1.6, but we can choose to make it target only 1.7 if we 
> choose (this conversation should be taken up on jira, not in RB)
> 
> 
> Diffs
> -
> 
>   
> core/src/main/java/org/apache/accumulo/core/client/admin/TableOperations.java 
> 97f538d 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/RootTabletLocator.java
>  97d476b 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/TableOperationsImpl.java
>  07df1bd 
>   core/src/main/java/org/apache/accumulo/core/client/impl/TabletLocator.java 
> e396d82 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/TabletLocatorImpl.java
>  c550f15 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/TimeoutTabletLocator.java
>  bcbe561 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/thrift/TableOperation.java
>  7716823 
>   
> core/src/main/java/org/apache/accumulo/core/client/mock/MockTableOperationsImpl.java
>  de19137 
>   
> core/src/main/java/org/apache/accumulo/core/client/mock/impl/MockTabletLocator.java
>  35f160f 
>   
> core/src/main/java/org/apache/accumulo/core/master/thrift/FateOperation.java 
> f65f552 
>   
> core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TabletClientService.java
>  2ba7674 
>   core/src/main/thrift/client.thrift 38a8076 
>   core/src/main/thrift/master.thrift 38e9227 
>   core/src/main/thrift/tabletserver.thrift 25e0b10 
>   
> core/src/test/java/org/apache/accumulo/core/client/admin/TableOperationsHelperTest.java
>  1d91574 
>   
> core/src/test/java/org/apache/accumulo/core/client/impl/TableOperationsHelperTest.java
>  02838ed 
>   
> server/base/src/main/java/org/apache/accumulo/server/client/BulkImporter.java 
> 4cc13a9 
>   
> server/base/src/main/java/org/apache/accumulo/server/client/ClientServiceHandler.java
>  fe17a62 
>   
> server/base/src/main/java/org/apache/accumulo/server/security/SecurityOperation.java
>  258080c 
>   
> server/base/src/test/java/org/apache/accumulo/server/client/BulkImporterTest.java
>  3680341 
>   
> server/master/src/main/java/org/apache/accumulo/master/FateServiceHandler.java
>  5818da3 
>   
> server/master/src/main/java/org/apache/accumulo/master/tableOps/CloneIntoTable.java
>  PRE-CREATION 
>   server/tserver/src/main/java/org/apache/accumulo/tserver/Tablet.java 
> 9a07a4a 
>   server/tserver/src/main/java/org/apache/accumulo/tserver/TabletServer.java 
> 3f594cc 
>   
> test/src/main/java/org/apache/accumulo/test/performance/thrift/NullTserver.java
>  0591b19 
>   test/src/test/java/org/apache/accumulo/test/functional/CloneIntoIT.java 
> PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/27198/diff/
> 
> 
> Testing
> ---
> 
> Includes CloneIntoIT, which exercises all permutations of the flags. Existing
> BulkIT still functions as intended, validating that no functionality was lost
> in refactoring existing code for multi-purposing.
> 
> 
> Thanks,
> 
> John Vines
> 
>



Re: Review Request 27198: ACCUMULO-3236 introducing cloneInto feature

2014-12-22 Thread John Vines

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27198/
---

(Updated Dec. 22, 2014, 7:59 p.m.)


Review request for accumulo.


Changes
---

Minor revision from actually running tests, forgot master didn't have slf4j 
support


Bugs: ACCUMULO-3236
https://issues.apache.org/jira/browse/ACCUMULO-3236


Repository: accumulo


Description
---

Includes all code to support the feature, including thrift changes
Includes minor code cleanup to TabletLocator and items in the bulk path to
remove signature items that are unused (arguments & exceptions)
Includes renaming of some bulk import functions to clarify their purpose
(because they're now multi-purpose)

Patch is based on 1.6, but we can make it target only 1.7 if we choose (this
conversation should be taken up on jira, not in RB)


Diffs (updated)
-

  core/src/main/java/org/apache/accumulo/core/client/admin/TableOperations.java 
97f538d 
  
core/src/main/java/org/apache/accumulo/core/client/impl/RootTabletLocator.java 
97d476b 
  
core/src/main/java/org/apache/accumulo/core/client/impl/TableOperationsImpl.java
 07df1bd 
  core/src/main/java/org/apache/accumulo/core/client/impl/TabletLocator.java 
e396d82 
  
core/src/main/java/org/apache/accumulo/core/client/impl/TabletLocatorImpl.java 
c550f15 
  
core/src/main/java/org/apache/accumulo/core/client/impl/TimeoutTabletLocator.java
 bcbe561 
  
core/src/main/java/org/apache/accumulo/core/client/impl/thrift/TableOperation.java
 7716823 
  
core/src/main/java/org/apache/accumulo/core/client/mock/MockTableOperationsImpl.java
 de19137 
  
core/src/main/java/org/apache/accumulo/core/client/mock/impl/MockTabletLocator.java
 35f160f 
  core/src/main/java/org/apache/accumulo/core/master/thrift/FateOperation.java 
f65f552 
  
core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TabletClientService.java
 2ba7674 
  core/src/main/thrift/client.thrift 38a8076 
  core/src/main/thrift/master.thrift 38e9227 
  core/src/main/thrift/tabletserver.thrift 25e0b10 
  
core/src/test/java/org/apache/accumulo/core/client/admin/TableOperationsHelperTest.java
 1d91574 
  
core/src/test/java/org/apache/accumulo/core/client/impl/TableOperationsHelperTest.java
 02838ed 
  server/base/src/main/java/org/apache/accumulo/server/client/BulkImporter.java 
4cc13a9 
  
server/base/src/main/java/org/apache/accumulo/server/client/ClientServiceHandler.java
 fe17a62 
  
server/base/src/main/java/org/apache/accumulo/server/security/SecurityOperation.java
 258080c 
  
server/base/src/test/java/org/apache/accumulo/server/client/BulkImporterTest.java
 3680341 
  
server/master/src/main/java/org/apache/accumulo/master/FateServiceHandler.java 
5818da3 
  
server/master/src/main/java/org/apache/accumulo/master/tableOps/CloneIntoTable.java
 PRE-CREATION 
  server/tserver/src/main/java/org/apache/accumulo/tserver/Tablet.java 9a07a4a 
  server/tserver/src/main/java/org/apache/accumulo/tserver/TabletServer.java 
3f594cc 
  
test/src/main/java/org/apache/accumulo/test/performance/thrift/NullTserver.java 
0591b19 
  test/src/test/java/org/apache/accumulo/test/functional/CloneIntoIT.java 
PRE-CREATION 

Diff: https://reviews.apache.org/r/27198/diff/


Testing
---

Includes CloneIntoIT, which exercises all permutations of the flags. Existing
BulkIT still functions as intended, validating that no functionality was lost
in refactoring existing code for multi-purposing.


Thanks,

John Vines



Re: Review Request 27198: ACCUMULO-3236 introducing cloneInto feature

2014-12-22 Thread John Vines

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27198/
---

(Updated Dec. 22, 2014, 7:56 p.m.)


Review request for accumulo.


Changes
---

Updated the diff to bring it up to a newer version of the 1.6 branch and to
address the majority of Keith's concerns


Bugs: ACCUMULO-3236
https://issues.apache.org/jira/browse/ACCUMULO-3236


Repository: accumulo


Description
---

Includes all code to support the feature, including thrift changes
Includes minor code cleanup to TabletLocator and items in the bulk path to
remove signature items that are unused (arguments & exceptions)
Includes renaming of some bulk import functions to clarify their purpose
(because they're now multi-purpose)

Patch is based on 1.6, but we can make it target only 1.7 if we choose (this
conversation should be taken up on jira, not in RB)


Diffs (updated)
-

  core/src/main/java/org/apache/accumulo/core/client/admin/TableOperations.java 
97f538d 
  
core/src/main/java/org/apache/accumulo/core/client/impl/RootTabletLocator.java 
97d476b 
  
core/src/main/java/org/apache/accumulo/core/client/impl/TableOperationsImpl.java
 07df1bd 
  core/src/main/java/org/apache/accumulo/core/client/impl/TabletLocator.java 
e396d82 
  
core/src/main/java/org/apache/accumulo/core/client/impl/TabletLocatorImpl.java 
c550f15 
  
core/src/main/java/org/apache/accumulo/core/client/impl/TimeoutTabletLocator.java
 bcbe561 
  
core/src/main/java/org/apache/accumulo/core/client/impl/thrift/TableOperation.java
 7716823 
  
core/src/main/java/org/apache/accumulo/core/client/mock/MockTableOperationsImpl.java
 de19137 
  
core/src/main/java/org/apache/accumulo/core/client/mock/impl/MockTabletLocator.java
 35f160f 
  core/src/main/java/org/apache/accumulo/core/master/thrift/FateOperation.java 
f65f552 
  
core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TabletClientService.java
 2ba7674 
  core/src/main/thrift/client.thrift 38a8076 
  core/src/main/thrift/master.thrift 38e9227 
  core/src/main/thrift/tabletserver.thrift 25e0b10 
  
core/src/test/java/org/apache/accumulo/core/client/admin/TableOperationsHelperTest.java
 1d91574 
  
core/src/test/java/org/apache/accumulo/core/client/impl/TableOperationsHelperTest.java
 02838ed 
  server/base/src/main/java/org/apache/accumulo/server/client/BulkImporter.java 
4cc13a9 
  
server/base/src/main/java/org/apache/accumulo/server/client/ClientServiceHandler.java
 fe17a62 
  
server/base/src/main/java/org/apache/accumulo/server/security/SecurityOperation.java
 258080c 
  
server/base/src/test/java/org/apache/accumulo/server/client/BulkImporterTest.java
 3680341 
  
server/master/src/main/java/org/apache/accumulo/master/FateServiceHandler.java 
5818da3 
  
server/master/src/main/java/org/apache/accumulo/master/tableOps/CloneIntoTable.java
 PRE-CREATION 
  server/tserver/src/main/java/org/apache/accumulo/tserver/Tablet.java 9a07a4a 
  server/tserver/src/main/java/org/apache/accumulo/tserver/TabletServer.java 
3f594cc 
  
test/src/main/java/org/apache/accumulo/test/performance/thrift/NullTserver.java 
0591b19 
  test/src/test/java/org/apache/accumulo/test/functional/CloneIntoIT.java 
PRE-CREATION 

Diff: https://reviews.apache.org/r/27198/diff/


Testing
---

Includes CloneIntoIT, which exercises all permutations of the flags. Existing
BulkIT still functions as intended, validating that no functionality was lost
in refactoring existing code for multi-purposing.


Thanks,

John Vines



Re: Review Request 27198: ACCUMULO-3236 introducing cloneInto feature

2014-12-22 Thread John Vines


> On Oct. 30, 2014, 4:06 p.m., kturner wrote:
> > server/master/src/main/java/org/apache/accumulo/master/tableOps/CloneIntoTable.java,
> >  line 93
> > <https://reviews.apache.org/r/27198/diff/2/?file=741888#file741888line93>
> >
> > This is user unfriendly.  We could possibly add the needed splits to 
> > the dest table.

I would be more content keeping these (clone-into & split) as two separate
operations. This makes the implications for the calling user more explicit
and, in the longer term, I see this operation being updated to not require
splits to line up, which would be a backward-incompatible change in behavior
if we make that adjustment.
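
For reference, here is a minimal sketch (my own illustration, not code from
the patch) of how a caller could copy the source table's split points onto the
destination so the splits line up before the proposed operation runs.
listSplits() and addSplits() are existing TableOperations methods; the
cloneInto() call in the comment is hypothetical.

```
import java.util.TreeSet;

import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.admin.TableOperations;
import org.apache.hadoop.io.Text;

public class LineUpSplitsSketch {
  // Copy every split point of src onto dest so both tables' tablet
  // boundaries match before a clone-into style operation is invoked.
  static void lineUpSplits(Connector conn, String src, String dest) throws Exception {
    TableOperations ops = conn.tableOperations();
    TreeSet<Text> splits = new TreeSet<Text>(ops.listSplits(src));
    ops.addSplits(dest, splits);
    // ops.cloneInto(src, dest, ...);  // hypothetical: the operation under review
  }
}
```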


> On Oct. 30, 2014, 4:06 p.m., kturner wrote:
> > server/master/src/main/java/org/apache/accumulo/master/tableOps/CloneIntoTable.java,
> >  line 134
> > <https://reviews.apache.org/r/27198/diff/2/?file=741888#file741888line134>
> >
> > AFAICT nothing is done w/ the logical time of the src table?  Should 
> > take the max logical time for a src tablet and dest tablet and set that as 
> > the dest tablets logical time.  Also logical times should be of same type.

Having issues with this one, as there seems to be no mechanism in FATE (or the
client API in general) to get a table's time type. Do you know of a way to do
this in FATE?


> On Oct. 30, 2014, 4:06 p.m., kturner wrote:
> > server/master/src/main/java/org/apache/accumulo/master/tableOps/CloneIntoTable.java,
> >  line 84
> > <https://reviews.apache.org/r/27198/diff/2/?file=741888#file741888line84>
> >
> > Are the tables offline?  If not, then the splits could change during 
> > this operation.

They are not, but this isn't an issue. The locks prevent a merge and the bulk 
import code we're utilizing handles tablets splitting mid-operation.


On Oct. 30, 2014, 4:06 p.m., John Vines wrote:
> > I have only reviewed the FATE ops so far.  How will this work w/ 
> > replication?
> > 
> > I am thinking another possible approach may be to make a clone operation 
> > that accepts multiple input tables and creates a new table.  The reason I 
> > am thinking about this is that it avoids having to deal w/ issues related 
> > to the dest table changing while the clone is happening.  Something like
> > 
> > clone([tableA, tableB], tableC)
> > 
> > However, this is still tricky.  The existing clone handles cloning of an
> > online table that may be splitting. It makes multiple passes over the src
> > table metadata entries, updating the dest until it stabilizes.  In order to
> > avoid this for multiple tables, we could move from a cloneInto that supports
> > multiple online inputs to adding a merge that supports multiple offline
> > tables as input.
> > 
> > ```
> > clone(tableA, tmpA)
> > offline(tmpA)
> > clone(tableB, tmpB)
> > offline(tmpB)
> > mergeTables([tmpA, tmpB], tmpC)
> > ```
> > 
> > After this tmpA and tmpB would be deleted and tmpC would have the files and
> > splits of both.  tmpC would also have the correct logical time. However, one
> > thing I am not sure about is per-table props.  When clone(tableA,
> > tableB) is done, it will create tableB w/ tableA's per table props.
> 
> John Vines wrote:
> Please refer to the JIRA about this feature, which explains why I need
> this feature not to go into a new table
> 
> kturner wrote:
> ok.  I still assert that this feature will be tricky to get correct :)  
> But I think it can be done.
> 
> One concurrency issue that I am not sure is handled in this patch is the 
> following.
> 
>  1. read files for src table (includes fileA)
>  2. fileA is dereferenced by src table
>  3. fileA is garbage collected
>  4. a reference to fileA is written to dest table
>  
> This is one case the current clone table code handles.

Added bulk-load-style import-in-progress markers to block deletion; now
tracking the extent->file mapping via the file.
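
For concreteness, a rough sketch (my own, not from this patch) of the
clone-then-offline sequence quoted above, using the existing public
TableOperations clone() and offline() calls. The instance name, ZooKeeper
quorum, and credentials are assumptions, and mergeTables() does not exist
today; it stands in for the piece the proposal would add.

```
import java.util.Collections;
import java.util.Map;
import java.util.Set;

import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.client.admin.TableOperations;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;

public class CloneOfflineSketch {
  public static void main(String[] args) throws Exception {
    // Assumed instance name, ZooKeeper quorum, and credentials.
    Connector conn = new ZooKeeperInstance("instance", "zkhost:2181")
        .getConnector("root", new PasswordToken("secret"));
    TableOperations ops = conn.tableOperations();

    Map<String, String> noProps = Collections.emptyMap();
    Set<String> noExcludes = Collections.emptySet();

    // Clone each source table to a temporary table (flush=true so in-memory
    // data is included), then take the clone offline so its split points and
    // file references stop changing.
    ops.clone("tableA", "tmpA", true, noProps, noExcludes);
    ops.offline("tmpA");
    ops.clone("tableB", "tmpB", true, noProps, noExcludes);
    ops.offline("tmpB");

    // Hypothetical final step: merge the two offline clones into tmpC. No
    // such operation exists today.
    // mergeTables(ops, Arrays.asList("tmpA", "tmpB"), "tmpC");
  }
}
```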


- John


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27198/#review59201
---


On Oct. 29, 2014, 8:52 p.m., John Vines wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27198/
> ---
> 
> (Updated Oct. 29, 2014, 8:52 p.m.)
> 
> 
> Review request for accumulo.
> 
> 
> Bugs: ACCUMULO-3236

Re: Client on host can't connect to Accumulo in Docker instance

2014-12-15 Thread John Vines
The IP that the processes inside Docker resolve to needs to be resolvable from
outside Docker, which, IIRC, is not something easily achieved in Docker.

On Mon, Dec 15, 2014 at 6:54 PM, David Medinets 
wrote:
>
> Can I use the accumulo shell with the MiniAccumuloCluster? That's an
> interesting idea. I did not think it feasible.
>
> On Mon, Dec 15, 2014 at 6:48 PM, Christopher  wrote:
> > Have you tried the Accumulo shell with --debug?
> >
> >
> > --
> > Christopher L Tubbs II
> > http://gravatar.com/ctubbsii
> >
> > On Mon, Dec 15, 2014 at 6:45 PM, David Medinets <
> david.medin...@gmail.com>
> > wrote:
> >>
> >> I just remembered that Zookeeper can respond to "ruok" so I did try
> >> that. It worked:
> >>
> >> $ echo "ruok" | netcat -q 2 localhost 2; echo ""
> >> imok
> >>
> >> On Mon, Dec 15, 2014 at 6:18 PM, David Medinets
> >>  wrote:
> >> > The MAC running in Docker is starting and seems to run fine. A client
> >> > program inside the Docker instance can create tables and add splits.
> >> > However, a client running on the host, just hangs when connecting. I'm
> >> > not seeing any errors in the log files.
> >> >
> >> > I can connect to the Monitor running inside the Docker instance so the
> >> > ports should all be open.
> >> >
> >> > Any recommendations where I should look to resolve this?
> >>
>


Re: Official Guidance to users regarding removal of functions

2014-12-11 Thread John Vines
More likely we'd have a fully backwards compatible API for each major
version. SemVer allows for it and I think that grants us enough room for
growth while still securing things for future releases.

On Thu, Dec 11, 2014 at 2:36 PM, Adam Fuchs  wrote:

> Awesome -- ACCUMULO-2589 gets us at least halfway there. Given this,
> what would be the challenges in having and maintaining one API project
> for each major version ever released?
>
> Adam
>
> On Thu, Dec 11, 2014 at 2:24 PM, Josh Elser  wrote:
> > Adam Fuchs wrote:
> >>
> >> Has anybody looked into separating the public API a bit more from the
> >> core? It seems to me that a large number of the deprecation removal
> >> issues are related to people failing to read section 9 of the README.
> >> It would be great if we built an API jar that people could build
> >> against, but didn't leak internal classes. Maybe this is something we
> >> can shoot for in the 2.0 release?
> >
> >
> > Yup, this is already in the works by Christopher as a part of
> ACCUMULO-2589.
> >
> >
> >> Taking that a step further, it would be great if we released a 1.x API
> >> compatible client jar for every 2.x or later release. Does anybody
> >> have a feel for the maintenance costs of such a thing? Certainly
> >> changes to configuration options and metadata table structures will
> >> prove challenging. Given that we don't have a history of removing
> >> functionality, this ought to at least be feasible.
> >>
> >> Thoughts?
> >>
> >> Adam
> >>
> >>
> >> On Thu, Dec 11, 2014 at 1:54 PM, Jeremy Kepner
> wrote:
> >>>
> >>> So the simple solution is to deprecate often, but remove almost never.
> >>> It is very rare that leaving a deprecated API in place actually has a
> >>> negative impact.
> >>> The code gets a little less clean, but that's fine as long as things
> are
> >>> clearly labeled as deprecated.
> >>> In fact, seeing the way something used to be done can often be an
> >>> inspiration for something new.
> >>> If the past is deleted, then that knowledge is lost.
> >>>
> >>> I am not saying deleting can never happen, I am just saying that when it
> >>> does, it is because there is absolutely no choice.  Deletion to "clean up
> >>> the code" shouldn't be a valid reason for deletion.
> >>>
> >>> On Thu, Dec 11, 2014 at 12:58:13PM -0500, Christopher wrote:
> >>>>
> >>>> I don't know that it'd be "cold comfort". We can continue to support
> 1.x
> >>>> for some time, if we choose.
> >>>>
> >>>>
> >>>> --
> >>>> Christopher L Tubbs II
> >>>> http://gravatar.com/ctubbsii
> >>>>
> >>>> On Thu, Dec 11, 2014 at 12:53 PM, Billie Rinaldi
> >>>> wrote:
> >>>>
> >>>>> Actually, I wasn't suggesting anything.  I was providing elaboration
> on
> >>>>> what John was referring to.  I imagine that stronger API guarantees
> >>>>> will be
> >>>>> cold comfort in the face of a 1.0 ->  2.0 upgrade.  However, if we
> had
> >>>>> been
> >>>>> using semver all along, there would have been much less pain for
> users
> >>>>> in
> >>>>> the 1.x series.  Also, adopting semver would mean that going from 1.6
> >>>>> to a
> >>>>> hypothetical 1.7 would not suffer from the same upgrade issues.  I
> >>>>> doubt
> >>>>> that we could retroactively mitigate the differences in minor
> versions,
> >>>>> though, so going from 1.3/1.4/1.5 to 1.7 would still be hard.
> >>>>>
> >>>>> On Thu, Dec 11, 2014 at 9:11 AM, Mike Drob
> wrote:
> >>>>>
> >>>>>> Billie,
> >>>>>>
> >>>>>> Not to be glib, but it reads like your suggestion to Jeremy for when
> >>>>>> we
> >>>>>> have a 2.0.0 release (assuming semver passes) is to take option (2)
> >>>>>> Don't
> >>>>>> upgrade Accumulo.
> >>>>>>
> >>>>>> Please correct my misunderstanding.
> >>>>>>
> >>>>>> Mike
> >>>>>>
> >>>>>>

Re: Official Guidance to users regarding removal of functions

2014-12-11 Thread John Vines
Wouldn't this be resolved with our SemVer switch?

On Thu, Dec 11, 2014 at 11:36 AM, Kepner, Jeremy - 0553 - MITLL <
kep...@ll.mit.edu> wrote:

> When we remove functions, do we have any official guidance to our users
> who may have built applications that use those functions?
>
> Right now, the official position is that the Accumulo developers can
> remove them based on a consensus vote. However, this provides no guidance to
> users as to what they are supposed to do. As it stands, our guidance is that
> they have the following choices:
>
> (0) Diligently watch the Accumulo e-mail list and aggressively weigh in on
> any vote to remove functions that may impact them.
>
> (1) Find someone to modify the original source code of their applications,
> build it, and *re-verify* the application. I emphasise the re-verify
> because that is usually the most costly part of the process that often
> won't get approved by management.
>
> (2) Don't upgrade Accumulo.
>


Re: [VOTE] adoption of semver

2014-12-09 Thread John Vines
Adam, the vote isn't for 2.0.0, it's for all future releases. Can you
please clarify your vote?

On Tue, Dec 9, 2014 at 3:40 PM, Adam Fuchs  wrote:

> +1 for semver version 2.0.0
>
> Assuming this passes, let's make sure we grab a copy for posterity and
> post it on our site.
>
> Adam
>
> On Tue, Dec 9, 2014 at 1:23 PM, Billie Rinaldi  wrote:
> > I would like to call a vote on adopting semantic versioning (
> > http://semver.org/) for future releases.
> >
> > This vote is subject to majority approval and will remain open for 72
> hours.
> >
> > +1: Adopt semantic versioning for all future releases
> > +0: Don't care
> > -1: Do not adopt semantic versioning because ...
> >
> > Here is my +1.
> >
> > Billie
>


Re: [VOTE] adoption of semver

2014-12-09 Thread John Vines
+1

On Tue, Dec 9, 2014 at 1:30 PM, Keith Turner  wrote:

> +1
>
> On Tue, Dec 9, 2014 at 1:23 PM, Billie Rinaldi  wrote:
>
> > I would like to call a vote on adopting semantic versioning (
> > http://semver.org/) for future releases.
> >
> > This vote is subject to majority approval and will remain open for 72
> > hours.
> >
> > +1: Adopt semantic versioning for all future releases
> > +0: Don't care
> > -1: Do not adopt semantic versioning because ...
> >
> > Here is my +1.
> >
> > Billie
> >
>


Re: [DISCUSS] Semantic Versioning

2014-12-08 Thread John Vines
Just to make sure I'm understanding this before we get into another
vote-thread kerfuffle: if we adopt semver in 1.7.0, include a new client API
in 1.7.0, and deprecate the old API in 1.7.0, then semver would allow (but not
require) removing the deprecated API in 2.0.0, correct?
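
As a concrete illustration (an assumed example interface, not project code),
that lifecycle looks like this at the source level: the old method is
deprecated in 1.7.0 alongside its replacement, and semver then allows, but
does not require, dropping the deprecated method in 2.0.0.

```
// Hypothetical client interface used only to illustrate the deprecation lifecycle.
public interface ExampleClientApi {

  /**
   * Old entry point.
   *
   * @deprecated since 1.7.0; use {@link #newOperation(String)} instead. Under
   *             semver this method may be removed no earlier than 2.0.0.
   */
  @Deprecated
  void oldOperation(String arg);

  /** Replacement introduced in 1.7.0. */
  void newOperation(String arg);
}
```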

On Mon, Dec 8, 2014 at 6:21 PM, Christopher  wrote:

> Short Summary:
>
> I see 6 informal +1s (including my own) for adopting Semver, and no -1s.
> Other points differ.
>
> Longer Summary:
>
> Including additional strictness for deprecation documented in a major
> release does not have significant consensus and, in hindsight, probably
> doesn't really add much value. Semver does not bind us to a particular
> release cycle for major/minor/bugfix, only what we call it when we make
> certain changes. The basic Semver rules are sufficient.
>
> Including additional strictness for forward compatibility isn't necessary.
> Semver requires a minor version bump if new features are added to the API.
> So, this is redundant and not needed.
>
> Including the wire version is tough without a test framework, and maybe
> unnecessary, since the main concern about compatibility seems to be with
> applications needing to be modified to function with a newer client
> library, which contains the RPC code. If we ensure compatibility at the
> API, then users simply need to drop in the appropriate client jars for wire
> compatibility. This is probably sufficient.
>
> There seems to be some confusion about when and where these rules are
> applied. However, I believe we can go ahead and start adopting these rules
> from here on, without any issues. This doesn't hurt users, and only *adds*
> to the stability of the API, which we've already been striving for. It also
> doesn't bind us to a particular release cycle or deprecation duration. It
> only helps us determine what minimum version we should call something, when
> we do release. Upon adoption, the "master" branch version can be computed
> from the rules. If that computation requires a bump higher than what we are
> comfortable with, we can always ensure a greater level of compatibility
> than what currently exists, in order to avoid that bump, if we so choose.
> Adoption of these rules should help inform such discussions.
>
> Now, to be clear, it may be the case that the 1.5 and 1.6 maintenance
> branches already have introduced additional APIs that under Semver would
> have required a minor version bump. I'm not suggesting that we revert those
> changes, but by adopting the Semver, we can agree to avoid doing that from
> here on. Since 1.7 already adds additional features, by adopting Semver, we
> simply agree that the master branch should be called 2.0 if it is not
> backwards-compatible with 1.6.x, and 1.7.0 if it is. Adopting these rules
> helps inform that decision, but does not make that decision for us. Either
> way, that decision would be independent of adopting Semver today for all
> future releases. Incidentally, this answers the question of whether 2.0 can
> introduce "breaking" (removal of deprecations) changes, but it does not say
> that we must stop support for 1.x or release 2.0 on any particular
> timeline.
>
> Action:
>
> In the absence of further discussion, I think we should call a majority
> vote (tomorrow) to adopt Semver, so we can immediately start communicating
> better versioning semantics, and we can make progress with a concrete
> decision to help with release planning. The specific wording of the
> proposition I would suggest (please propose amendments here if you think it
> is unclear) would be:
>
> "Vote to adopt Semantic Versioning 2.0.0 (as described at
> https://semver.org)
> from this point forward, for all future releases, with the public API
> documented in the README."
>
> --
> Christopher L Tubbs II
> http://gravatar.com/ctubbsii
>


Re: [DISCUSS] Semantic Versioning

2014-12-06 Thread John Vines
I think there's an issue with this course of discussion, because we're
discussing issues of our current 1.x release style while also discussing
semver, and the two are incongruent with one another. Perhaps we need to
segregate adopting semver for 2.0.0 (which is what I assumed) vs. adopting
semver for our next release vs. adopting semver for some release after the
next but before 2.0.0?

On Sat, Dec 6, 2014 at 1:16 PM,  wrote:

> " This basically represents a goal to not to add new APIs without bumping
> the minor release."
>
>  I didn't think that with semver you could change the API in a patch
> release. An API change, if backwards compatible, requires a new MINOR
> release. Am I reading 6, 7, and 8 in the specification incorrectly? I might
> need an example.
>
> -Original Message-
> From: Christopher [mailto:ctubb...@apache.org]
> Sent: Saturday, December 06, 2014 12:53 PM
> To: Accumulo Dev List
> Subject: Re: [DISCUSS] Semantic Versioning
>
> On Sat, Dec 6, 2014 at 9:55 AM,  wrote:
>
> > [+1 ]: adopt semver 2.0.0 (http://semver.org)
> > [0 ]: adopt additional strictness to require documenting deprecation
> > for at least 1 major release before possible to consider in the next
> > major release
> > [* ]: adopt additional strictness to ensure forward compatibility
> > between bugfix releases [ +1]: start operating under whatever rules we
> > adopt as of the master branch
> > [** ]: keep the master branch named 1.7.0 [ +1***]: define scope of
> > these versioning compatibility rules to  be our current definition of
> > "public API" and the wire version
> >
> > * I'm confused by this. A change in 1.6.1 is forward compatible until
> > it's not. If a patch is applied to 1.6.2 that is not backwards
> > compatible, then that version is not 1.6.2, it's 1.7.0.
> >
>
> This basically represents a goal not to add new APIs without bumping
> the minor release. That way, code written against 1.6.2 would run against a
> 1.6.1 instance.
>
>
> > ** if we vote to start operating under these rules, then the version
> > should calculated when development is done.
> >
>
> It can be, yes... but we can also ensure we don't have any removals that
> would force the calculation to bump higher than we want it to be. Keeping
> the master branch 1.7 means that we fix the calculation.
>
>
> > *** where is the current definition documented?
> >
> >
> The README
>
>
> > -Original Message-
> > From: Christopher [mailto:ctubb...@apache.org]
> > Sent: Friday, December 05, 2014 1:46 PM
> > To: Accumulo Dev List
> > Subject: Re: [DISCUSS] Semantic Versioning
> >
> > It would be helpful to this thread, if we can get some informal votes
> > on the following propositions:
> >
> > [ ]: adopt semver 2.0.0 (http://semver.org) [ ]: adopt additional
> > strictness to require documenting deprecation for at least 1 major
> > release before possible to consider in the next major release [ ]:
> > adopt additional strictness to ensure forward compatibility between
> bugfix releases [ ]:
> > start operating under whatever rules we adopt as of the master branch [
> ]:
> > keep the master branch named 1.7.0 [ ]: define scope of these
> > versioning compatibility rules to  be our current definition of
> > "public API" and the wire version
> >
> > I'm going to assume it's a given that if any exceptional situations
> > arise, we'll handle those through further discussions/voting, as
> appropriate.
> >
> >
> > --
> > Christopher L Tubbs II
> > http://gravatar.com/ctubbsii
> >
> > On Thu, Dec 4, 2014 at 2:09 PM, Josh Elser  wrote:
> >
> > > Christopher wrote:
> > >
> > >> On Wed, Dec 3, 2014 at 1:41 PM,  wrote:
> > >>
> > >>  >  +1 to semver
> > >>> >  +1 to 1 major release before removing deprecated items
> > >>> >  +1 to forward compatibility between bugfix releases
> > >>> >
> > >>> >  What's the version # for the master branch if these rules are
> > applied?
> > >>> >
> > >>> >
> > >>>
> > >> Well, I'd say 1.7 still, since it is consistent with our existing
> > >> rules for determining a "major" release today, *and* it matches the
> > >> semver definition of a "minor" release, because it doesn't break
> > >> backwards-compatibility from 1.6 (with one tiny exception of dropping
> > >> Instance.getConfiguration()... because it was an exceptional situation
> > >> discussed in previous threads; if people are uncomfortable with that
> > >> exception, I can return it to the API, if it helps achieve consensus
> > >> here).
> > >>
> > >>
> > > Sounds right to me.
> > >
> > > When we actually have code to land in Apache for 2.0.0, I figured
> > > we'd break 1.7.X off to branch named "1.7" and master would become
> 2.0.0.
> > > We can have some feature branch in Apache off to the side to make
> > > sure
> > > 2.0.0 development can happen in a shared environment before making
> > > the above switch.
> > >
> >
> >
>
>


Re: [DISCUSS] Semantic Versioning

2014-12-05 Thread John Vines
Let me try again, for clarity

[+1]: adopt semver 2.0.0 (http://semver.org)
[*]: adopt additional strictness to require documenting deprecation for at
least 1 major release before possible to consider in the next major release
[+1]: adopt additional strictness to ensure forward compatibility between
bugfix releases
[**]: start operating under whatever rules we adopt as of the master branch
[***]: keep the master branch named 1.7.0
[+1]: define scope of these versioning compatibility rules to be our
current definition of "public API" and the wire version

* = Okay adopting this for the release following the release of the new API
** = This is dependent on the "keep the master branch named 1.7.0" item, so
I'm afraid to align myself to a vote against something in flux
*** = don't care (0) but it affects my other votes

On Fri, Dec 5, 2014 at 1:53 PM, Christopher  wrote:

> Does X mean "+1" here? And, are the ones you omitted undecided, or "-1".
>
>
> --
> Christopher L Tubbs II
> http://gravatar.com/ctubbsii
>
> On Fri, Dec 5, 2014 at 1:51 PM, John Vines  wrote:
>
>> [X]: adopt semver 2.0.0 (http://semver.org)
>> [ ]: adopt additional strictness to require documenting deprecation for at
>> least 1 major release before possible to consider in the next major
>> release
>> [X]: adopt additional strictness to ensure forward compatibility between
>> bugfix releases
>> [ ]: start operating under whatever rules we adopt as of the master branch
>> [ ]: keep the master branch named 1.7.0
>> [X]: define scope of these versioning compatibility rules to  be our
>> current definition of "public API" and the wire version
>>
>>
>> On Fri, Dec 5, 2014 at 1:46 PM, Christopher  wrote:
>>
>> > It would be helpful to this thread, if we can get some informal votes on
>> > the following propositions:
>> >
>> > [ ]: adopt semver 2.0.0 (http://semver.org)
>> > [ ]: adopt additional strictness to require documenting deprecation for
>> at
>> > least 1 major release before possible to consider in the next major
>> release
>> > [ ]: adopt additional strictness to ensure forward compatibility between
>> > bugfix releases
>> > [ ]: start operating under whatever rules we adopt as of the master
>> branch
>> > [ ]: keep the master branch named 1.7.0
>> > [ ]: define scope of these versioning compatibility rules to  be our
>> > current definition of "public API" and the wire version
>> >
>> > I'm going to assume it's a given that if any exceptional situations
>> arise,
>> > we'll handle those through further discussions/voting, as appropriate.
>> >
>> >
>> > --
>> > Christopher L Tubbs II
>> > http://gravatar.com/ctubbsii
>> >
>> > On Thu, Dec 4, 2014 at 2:09 PM, Josh Elser 
>> wrote:
>> >
>> > > Christopher wrote:
>> > >
>> > >> On Wed, Dec 3, 2014 at 1:41 PM,  wrote:
>> > >>
>> > >>  >  +1 to semver
>> > >>> >  +1 to 1 major release before removing deprecated items
>> > >>> >  +1 to forward compatibility between bugfix releases
>> > >>> >
>> > >>> >  What's the version # for the master branch if these rules are
>> > applied?
>> > >>> >
>> > >>> >
>> > >>>
>> > >> Well, I'd say 1.7 still, since it is consistent with our existing rules
>> > >> for determining a "major" release today, *and* it matches the semver
>> > >> definition of a "minor" release, because it doesn't break
>> > >> backwards-compatibility from 1.6 (with one tiny exception of dropping
>> > >> Instance.getConfiguration()... because it was an exceptional situation
>> > >> discussed in previous threads; if people are uncomfortable with that
>> > >> exception, I can return it to the API, if it helps achieve consensus
>> > >> here).
>> > >>
>> > >>
>> > > Sounds right to me.
>> > >
>> > > When we actually have code to land in Apache for 2.0.0, I figured we'd
>> > > break 1.7.X off to branch named "1.7" and master would become 2.0.0.
>> We
>> > can
>> > > have some feature branch in Apache off to the side to make sure 2.0.0
>> > > development can happen in a shared environment before making the above
>> > > switch.
>> > >
>> 

Re: [DISCUSS] Semantic Versioning

2014-12-05 Thread John Vines
[X]: adopt semver 2.0.0 (http://semver.org)
[ ]: adopt additional strictness to require documenting deprecation for at
least 1 major release before possible to consider in the next major release
[X]: adopt additional strictness to ensure forward compatibility between
bugfix releases
[ ]: start operating under whatever rules we adopt as of the master branch
[ ]: keep the master branch named 1.7.0
[X]: define scope of these versioning compatibility rules to  be our
current definition of "public API" and the wire version


On Fri, Dec 5, 2014 at 1:46 PM, Christopher  wrote:

> It would be helpful to this thread, if we can get some informal votes on
> the following propositions:
>
> [ ]: adopt semver 2.0.0 (http://semver.org)
> [ ]: adopt additional strictness to require documenting deprecation for at
> least 1 major release before possible to consider in the next major release
> [ ]: adopt additional strictness to ensure forward compatibility between
> bugfix releases
> [ ]: start operating under whatever rules we adopt as of the master branch
> [ ]: keep the master branch named 1.7.0
> [ ]: define scope of these versioning compatibility rules to  be our
> current definition of "public API" and the wire version
>
> I'm going to assume it's a given that if any exceptional situations arise,
> we'll handle those through further discussions/voting, as appropriate.
>
>
> --
> Christopher L Tubbs II
> http://gravatar.com/ctubbsii
>
> On Thu, Dec 4, 2014 at 2:09 PM, Josh Elser  wrote:
>
> > Christopher wrote:
> >
> >> On Wed, Dec 3, 2014 at 1:41 PM,  wrote:
> >>
> >>  >  +1 to semver
> >>> >  +1 to 1 major release before removing deprecated items
> >>> >  +1 to forward compatibility between bugfix releases
> >>> >
> >>> >  What's the version # for the master branch if these rules are
> applied?
> >>> >
> >>> >
> >>>
> >> Well, I'd say 1.7 still, since it is consistent with our existing rules
> >> for determining a "major" release today, *and* it matches the semver
> >> definition of a "minor" release, because it doesn't break
> >> backwards-compatibility from 1.6 (with one tiny exception of dropping
> >> Instance.getConfiguration()... because it was an exceptional situation
> >> discussed in previous threads; if people are uncomfortable with that
> >> exception, I can return it to the API, if it helps achieve consensus
> >> here).
> >>
> >>
> > Sounds right to me.
> >
> > When we actually have code to land in Apache for 2.0.0, I figured we'd
> > break 1.7.X off to branch named "1.7" and master would become 2.0.0. We
> can
> > have some feature branch in Apache off to the side to make sure 2.0.0
> > development can happen in a shared environment before making the above
> > switch.
> >
>


Re: [VOTE] API release policy for 1.7/2.0

2014-12-04 Thread John Vines
I would be okay with a 1.8.0 FINAL (or 1.9, or 1.10, or whatever the last
number of the 1.x line ends up being) that exists solely for transitioning. It
sounds a bit like we're gaming it, but if that makes people more comfortable
removing backwards support in 2.0.0, then I'm okay with it.

On Thu, Dec 4, 2014 at 5:12 PM, Keith Turner  wrote:

>
>
> On Thu, Dec 4, 2014 at 4:36 PM, Keith Turner  wrote:
>
>>
>>
>> On Thu, Dec 4, 2014 at 4:00 PM, John Vines  wrote:
>>
>>> Yes, I'm advocating for the freedom to drop undeprecated APIs in 2.0.0.
>>> This is not something I encourage but I think this is something we should
>>> have in our pocket just in case.
>>>
>>
>> What do you think about the following?
>>
>>   * API must be deprecated in a 1.x release before it can be dropped in
>> 2.0.0
>>   * Introduce new API in 1.8 w/ old API in deprecated form
>>   * In 2.0.0 we drop old API
>>
>> This way we do not have the long lived support tail for the old API that
>> I think is one of your concerns.  However this is still a nice bridge
>> release for users (one of my concerns).
>>
>
> To expand on this: regarding John's point about bugs when supporting the old
> and new API, it would only get worse over time.  The longer the old
> API is around not getting any attention from developers, the more likely it
> will start to break in subtle and unexpected ways.
>
>
>>
>>
>>>
>>> On Thu, Dec 4, 2014 at 3:56 PM, Keith Turner  wrote:
>>>
>>> > On Thu, Dec 4, 2014 at 12:59 PM, John Vines  wrote:
>>> >
>>> > > On Thu, Dec 4, 2014 at 12:39 PM, Keith Turner 
>>> wrote:
>>> > >
>>> > > > On Thu, Dec 4, 2014 at 12:17 PM, John Vines 
>>> wrote:
>>> > > >
>>> > > > > On Thu, Dec 4, 2014 at 12:11 PM, Josh Elser <
>>> josh.el...@gmail.com>
>>> > > > wrote:
>>> > > > >
>>> > > > > > John Vines wrote:
>>> > > > > >
>>> > > > > >> On Thu, Dec 4, 2014 at 11:52 AM, Keith Turner<
>>> ke...@deenlo.com>
>>> > > > wrote:
>>> > > > > >>
>>> > > > > >>  On Thu, Dec 4, 2014 at 11:40 AM, Josh Elser<
>>> josh.el...@gmail.com
>>> > >
>>> > > > > >>> wrote:
>>> > > > > >>>
>>> > > > > >>>  John Vines wrote:
>>> > > > > >>>>
>>> > > > > >>>>  Though I feel the biggest reasoning is our switch to
>>> semantic
>>> > > > > >>>>>
>>> > > > > >>>>>> versioning. And from semver.org,
>>> > > > > >>>>>>
>>> > > > > >>>>>>>   >
>>> > > > > >>>>>>>
>>> > > > > >>>>>>>>   >
>>> > > > > >>>>>>>>   >   1. MAJOR version when you make incompatible
>>> API
>>> > > > changes
>>> > > > > >>>>>>>>   >
>>> > > > > >>>>>>>>   >
>>> > > > > >>>>>>>>   >
>>> > > > > >>>>>>>>
>>> > > > > >>>>>>>   Right and dropping deprecated APIs is an incompatible
>>> > change.
>>> > > > Do
>>> > > > > >>>>>>> you
>>> > > > > >>>>>>>
>>> > > > > >>>>>> think
>>> > > > > >>>>>>
>>> > > > > >>>>>>>   the following two rules are reasonable?
>>> > > > > >>>>>>>
>>> > > > > >>>>>>> * When API is deprecated, must offer replacement if
>>> > > feasible.
>>> > > > > >>>>>>> * Can only drop deprecated method when MAJOR version
>>> is
>>> > > > > >>>>>>>
>>> > > > > >>>>>> incremented
>>> > > > > >>>
>>> > > > > >>>> (there
>>> > > > > >>>>>
>>> &

Re: [VOTE] API release policy for 1.7/2.0

2014-12-04 Thread John Vines
Yes, I'm advocating for the freedom to drop undeprecated APIs in 2.0.0.
This is not something I encourage but I think this is something we should
have in our pocket just in case.

On Thu, Dec 4, 2014 at 3:56 PM, Keith Turner  wrote:

> On Thu, Dec 4, 2014 at 12:59 PM, John Vines  wrote:
>
> > On Thu, Dec 4, 2014 at 12:39 PM, Keith Turner  wrote:
> >
> > > On Thu, Dec 4, 2014 at 12:17 PM, John Vines  wrote:
> > >
> > > > On Thu, Dec 4, 2014 at 12:11 PM, Josh Elser 
> > > wrote:
> > > >
> > > > > John Vines wrote:
> > > > >
> > > > >> On Thu, Dec 4, 2014 at 11:52 AM, Keith Turner
> > > wrote:
> > > > >>
> > > > >>  On Thu, Dec 4, 2014 at 11:40 AM, Josh Elser >
> > > > >>> wrote:
> > > > >>>
> > > > >>>  John Vines wrote:
> > > > >>>>
> > > > >>>>  Though I feel the biggest reasoning is our switch to semantic
> > > > >>>>>
> > > > >>>>>> versioning. And from semver.org,
> > > > >>>>>>
> > > > >>>>>>>   >
> > > > >>>>>>>
> > > > >>>>>>>>   >
> > > > >>>>>>>>   >   1. MAJOR version when you make incompatible API
> > > changes
> > > > >>>>>>>>   >
> > > > >>>>>>>>   >
> > > > >>>>>>>>   >
> > > > >>>>>>>>
> > > > >>>>>>>   Right and dropping deprecated APIs is an incompatible
> change.
> > > Do
> > > > >>>>>>> you
> > > > >>>>>>>
> > > > >>>>>> think
> > > > >>>>>>
> > > > >>>>>>>   the following two rules are reasonable?
> > > > >>>>>>>
> > > > >>>>>>> * When API is deprecated, must offer replacement if
> > feasible.
> > > > >>>>>>> * Can only drop deprecated method when MAJOR version is
> > > > >>>>>>>
> > > > >>>>>> incremented
> > > > >>>
> > > > >>>> (there
> > > > >>>>>
> > > > >>>>>are other proposed constraints on dropping deprecated
> methods)
> > > > >>>>>>>
> > > > >>>>>>>   If we follow the above, then we can not deprecate current
> API
> > > > >>>>>>> before
> > > > >>>>>>>   introducing new API (because the replacement would not
> exist
> > > > >>>>>>>   concurrently).  Also we can not drop the current API in
> 2.0.0
> > > if
> > > > >>>>>>> its
> > > > >>>>>>>
> > > > >>>>>> not
> > > > >>>>>>
> > > > >>>>>>>   deprecated.
> > > > >>>>>>>
> > > > >>>>>> It is totally a reasonable statement for after 2.0.0. But for
> > > 2.0.0
> > > > I
> > > > >>>>> am
> > > > >>>>> not okay making this guarantee because I would rather sacrifice
> > > > >>>>> backward
> > > > >>>>> compatibility for an API that isn't plagued by shortcomings of
> > the
> > > > old
> > > > >>>>>
> > > > >>>> API
> > > > >>>
> > > > >>>> Again, this is the fear/concern of impacting the new API due to
> > > > >>>>
> > > > >>> supporting
> > > > >>>
> > > > >>>> of the old which *may or may not even happen*.
> > > > >>>>
> > > > >>>>  Good point, we could adopt these rules now and never create a
> new
> > > > >>> API.  I
> > > > >>> think we would be better off adopting this now regardless of
> wether
> > > not
> > > > >>> we
> > > > >>> introduce a new API in the future.
> > > > >>>
> > &

Re: [VOTE] API release policy for 1.7/2.0

2014-12-04 Thread John Vines
I have never said we shouldn't strive for it. I am saying that having a good
2.0 API is more important than that. And by putting in requirements about how
1.x APIs appear in 2.x, I feel we are leaving ourselves unable to enforce my
preferred ordering of priorities.

On Thu, Dec 4, 2014 at 1:32 PM, Christopher  wrote:

> On Thu, Dec 4, 2014 at 11:30 AM, John Vines  wrote:
>
>> Sent from my phone, please pardon the typos and brevity.
>> On Dec 4, 2014 11:20 AM, "Keith Turner"  wrote:
>> >
>> > On Wed, Dec 3, 2014 at 6:48 PM, John Vines  wrote:
>> >
>> > > It's hard to track this down-
>> > > http://www.mail-archive.com/dev@accumulo.apache.org/msg07336.html has
>> > > Busbey mentioning that 2.0 was breaking, which no one reacted to,
>> implying
>> > > this was known
>> > > http://www.mail-archive.com/dev%40accumulo.apache.org/msg08344.html
>> has
>> > > Mike Drob stating this "In general, I'm inclined to leave as much in
>> as
>> > > possible, and then if we
>> > >
>> > > must remove things then do so in 2.0.0. I know that our compatibility
>> > > statement only promises one minor version, but that doesn't mean we
>> have to
>> > > be strict at every opportunity." which promotes this idea.
>> > >
>> > >
>> > > Christopher has a response to that which also corroborates the
>> agreement.
>> > >
>> > >
>> > >
>> > > Though I feel the biggest reasoning is our switch to semantic
>> versioning. And from semver.org,
>> > >
>> > >
>> > >1. MAJOR version when you make incompatible API changes
>> > >
>> > >
>> > >
>> > Right and dropping deprecated APIs is an incompatible change. Do you
>> think
>> > the following two rules are reasonable?
>> >
>> >  * When API is deprecated, must offer replacement if feasible.
>> >  * Can only drop deprecated method when MAJOR version is incremented
>> (there
>> > are other proposed constraints on dropping deprecated methods)
>> >
>> > If we follow the above, then we can not deprecate current API before
>> > introducing new API (because the replacement would not exist
>> > concurrently).  Also we can not drop the current API in 2.0.0 if its not
>> > deprecated.
>>
>> It is totally a reasonable statement for after 2.0.0. But for 2.0.0 I am
>> not okay making this guarantee because I would rather sacrifice backward
>> compatibility for an API that isn't plagued by shortcomings of the old API
>>
>> [snip]
>
> My position is that I think we can offer this guarantee, just as we've
> been doing with the most recent releases. At the very least, this is
> something that I'm willing to strive for, and discuss if we actually run
> into something that prevents us from (or overly burdens us) doing so. Until
> that point actually happens, I think backwards compatibility with 2.0 is
> something we should strive for.
>
> --
> Christopher L Tubbs II
> http://gravatar.com/ctubbsii
>
>


Re: [VOTE] API release policy for 1.7/2.0

2014-12-04 Thread John Vines
Yes, you have identified the issues I perceive.

A proxy is one acceptable solution, yes. But given the statements people have
made about wanting to keep things around like the 1.x BatchWriter API, I
question how acceptable running proxies will be, and what the cost would be of
implementing and maintaining a full proxy for the subset of features we
require for compatibility.

I wish I had a better solution than punting until we have 2.0's client API
finalized, but I see little recourse for any API guarantees we make for
2.0.0 until we have that API in hand.

On Thu, Dec 4, 2014 at 12:29 PM, Sean Busbey  wrote:

> On Thu, Dec 4, 2014 at 11:11 AM, Josh Elser  wrote:
>
> > John Vines wrote:
> >
> >> On Thu, Dec 4, 2014 at 11:52 AM, Keith Turner  wrote:
> >>
> >>  On Thu, Dec 4, 2014 at 11:40 AM, Josh Elser
> >>> wrote:
> >>>
> >>>  John Vines wrote:
> >>>>
> >>>>  Though I feel the biggest reasoning is our switch to semantic
> >>>>>
> >>>>>> versioning. And from semver.org,
> >>>>>>
> >>>>>>>   >
> >>>>>>>
> >>>>>>>>   >
> >>>>>>>>   >   1. MAJOR version when you make incompatible API changes
> >>>>>>>>   >
> >>>>>>>>   >
> >>>>>>>>   >
> >>>>>>>>
> >>>>>>>   Right and dropping deprecated APIs is an incompatible change. Do
> >>>>>>> you
> >>>>>>>
> >>>>>> think
> >>>>>>
> >>>>>>>   the following two rules are reasonable?
> >>>>>>>
> >>>>>>> * When API is deprecated, must offer replacement if feasible.
> >>>>>>> * Can only drop deprecated method when MAJOR version is
> >>>>>>>
> >>>>>> incremented
> >>>
> >>>> (there
> >>>>>
> >>>>>are other proposed constraints on dropping deprecated methods)
> >>>>>>>
> >>>>>>>   If we follow the above, then we can not deprecate current API
> >>>>>>> before
> >>>>>>>   introducing new API (because the replacement would not exist
> >>>>>>>   concurrently).  Also we can not drop the current API in 2.0.0 if
> >>>>>>> its
> >>>>>>>
> >>>>>> not
> >>>>>>
> >>>>>>>   deprecated.
> >>>>>>>
> >>>>>> It is totally a reasonable statement for after 2.0.0. But for 2.0.0
> I
> >>>>> am
> >>>>> not okay making this guarantee because I would rather sacrifice
> >>>>> backward
> >>>>> compatibility for an API that isn't plagued by shortcomings of the
> old
> >>>>>
> >>>> API
> >>>
> >>>> Again, this is the fear/concern of impacting the new API due to
> >>>>
> >>> supporting
> >>>
> >>>> of the old which *may or may not even happen*.
> >>>>
> >>>>  Good point, we could adopt these rules now and never create a new
> >>> API.  I
> >>> think we would be better off adopting this now regardless of wether not
> >>> we
> >>> introduce a new API in the future.
> >>>
> >>> Also, if we do eventually create an API.  How is it user unfriendly to
> >>> have
> >>> the old API around in deprecated form?  The deprecation markings
> clearly
> >>> communicate that someone writing new code should not use the old API.
> >>> However it still allows existing code that users invested time into
> >>> writing
> >>> to run w/o issue against 2.0.0.
> >>>
> >>>
> >> I feel like I'm repeating myself. My concern is that the implementation
> >> details of maintaining the 1.x API in deprecated form will have a
> negative
> >> impact on the 2.0 API due to implementation details.
> >>
> >
> > Sorry, Keith, you misinterpreted what I meant -- let me try to restate. I
> > am assuming that a new API will happen.
> >
> > What is only a possibility is that the old API implementation would
> > negatively affect the new API. John's concern is a hypothetical one that
> > isn&

Re: [VOTE] API release policy for 1.7/2.0

2014-12-04 Thread John Vines
On Thu, Dec 4, 2014 at 12:39 PM, Keith Turner  wrote:

> On Thu, Dec 4, 2014 at 12:17 PM, John Vines  wrote:
>
> > On Thu, Dec 4, 2014 at 12:11 PM, Josh Elser 
> wrote:
> >
> > > John Vines wrote:
> > >
> > >> On Thu, Dec 4, 2014 at 11:52 AM, Keith Turner
> wrote:
> > >>
> > >>  On Thu, Dec 4, 2014 at 11:40 AM, Josh Elser
> > >>> wrote:
> > >>>
> > >>>  John Vines wrote:
> > >>>>
> > >>>>  Though I feel the biggest reasoning is our switch to semantic
> > >>>>>
> > >>>>>> versioning. And from semver.org,
> > >>>>>>
> > >>>>>>>   >
> > >>>>>>>
> > >>>>>>>>   >
> > >>>>>>>>   >   1. MAJOR version when you make incompatible API
> changes
> > >>>>>>>>   >
> > >>>>>>>>   >
> > >>>>>>>>   >
> > >>>>>>>>
> > >>>>>>>   Right and dropping deprecated APIs is an incompatible change.
> Do
> > >>>>>>> you
> > >>>>>>>
> > >>>>>> think
> > >>>>>>
> > >>>>>>>   the following two rules are reasonable?
> > >>>>>>>
> > >>>>>>> * When API is deprecated, must offer replacement if feasible.
> > >>>>>>> * Can only drop deprecated method when MAJOR version is
> > >>>>>>>
> > >>>>>> incremented
> > >>>
> > >>>> (there
> > >>>>>
> > >>>>>are other proposed constraints on dropping deprecated methods)
> > >>>>>>>
> > >>>>>>>   If we follow the above, then we can not deprecate current API
> > >>>>>>> before
> > >>>>>>>   introducing new API (because the replacement would not exist
> > >>>>>>>   concurrently).  Also we can not drop the current API in 2.0.0
> if
> > >>>>>>> its
> > >>>>>>>
> > >>>>>> not
> > >>>>>>
> > >>>>>>>   deprecated.
> > >>>>>>>
> > >>>>>> It is totally a reasonable statement for after 2.0.0. But for
> 2.0.0
> > I
> > >>>>> am
> > >>>>> not okay making this guarantee because I would rather sacrifice
> > >>>>> backward
> > >>>>> compatibility for an API that isn't plagued by shortcomings of the
> > old
> > >>>>>
> > >>>> API
> > >>>
> > >>>> Again, this is the fear/concern of impacting the new API due to
> > >>>>
> > >>> supporting
> > >>>
> > >>>> of the old which *may or may not even happen*.
> > >>>>
> > >>>>  Good point, we could adopt these rules now and never create a new
> > >>> API.  I
> > >>> think we would be better off adopting this now regardless of wether
> not
> > >>> we
> > >>> introduce a new API in the future.
> > >>>
> > >>> Also, if we do eventually create an API.  How is it user unfriendly
> to
> > >>> have
> > >>> the old API around in deprecated form?  The deprecation markings
> > clearly
> > >>> communicate that someone writing new code should not use the old API.
> > >>> However it still allows existing code that users invested time into
> > >>> writing
> > >>> to run w/o issue against 2.0.0.
> > >>>
> > >>>
> > >> I feel like I'm repeating myself. My concern is that the
> implementation
> > >> details of maintaining the 1.x API in deprecated form will have a
> > negative
> > >> impact on the 2.0 API due to implementation details.
> > >>
> > >
> > > Sorry, Keith, you misinterpreted what I meant -- let me try to
> restate. I
> > > am assuming that a new API will happen.
> > >
> > > What is only a possibility is that the old API implementation would
> > > negatively affect the new API. John's concern is a hypothetical one
> that
> > > isn't based on any *actual* implementation details. He's assuming that
> we
> > > will hit some sort of roadblock which we would be unable to resolve in
> a
> > > desirable way (a way that would not negatively impact 2.0 API).
> > >
> > > What I'm saying is that we should address those issues if and when we
> get
> > > there. When we have context to a concrete problem, we can make a
> decision
> > > there about how to proceed. Meanwhile, we act under best-intentions to
> > keep
> > > the 1.0 APIs around.
> > >
> > > Do you get what I'm suggesting, John?
> > >
> > >
> > I'm totally okay with this. But that means no requirements about APIs from
> > 1.x to 2.0. I'd be comfortable with changing the verbiage to something less
> > strict that encourages an effort to support deprecated APIs so long as they
> > don't influence the 2.0 APIs.
> >
>
> One thing to consider is that the proposal has language for making
> exceptions, a majority vote. What are your thoughts on that language?
>

That's great that they're adjustable. I'm not going to agree now to language
that I currently disagree with, especially language that may be difficult to
amend. Not everyone seems to understand my concerns and the level of impact
they have, and that makes me question the ability to get a vote through to
retract that portion of the language should the need arise.


Re: [VOTE] API release policy for 1.7/2.0

2014-12-04 Thread John Vines
On Thu, Dec 4, 2014 at 12:11 PM, Josh Elser  wrote:

> John Vines wrote:
>
>> On Thu, Dec 4, 2014 at 11:52 AM, Keith Turner  wrote:
>>
>>  On Thu, Dec 4, 2014 at 11:40 AM, Josh Elser
>>> wrote:
>>>
>>>  John Vines wrote:
>>>>
>>>>  Though I feel the biggest reasoning is our switch to semantic
>>>>>
>>>>>> versioning. And from semver.org,
>>>>>>
>>>>>>>   >
>>>>>>>
>>>>>>>>   >
>>>>>>>>   >   1. MAJOR version when you make incompatible API changes
>>>>>>>>   >
>>>>>>>>   >
>>>>>>>>   >
>>>>>>>>
>>>>>>>   Right and dropping deprecated APIs is an incompatible change. Do
>>>>>>> you
>>>>>>>
>>>>>> think
>>>>>>
>>>>>>>   the following two rules are reasonable?
>>>>>>>
>>>>>>> * When API is deprecated, must offer replacement if feasible.
>>>>>>> * Can only drop deprecated method when MAJOR version is
>>>>>>>
>>>>>> incremented
>>>
>>>> (there
>>>>>
>>>>>are other proposed constraints on dropping deprecated methods)
>>>>>>>
>>>>>>>   If we follow the above, then we can not deprecate current API
>>>>>>> before
>>>>>>>   introducing new API (because the replacement would not exist
>>>>>>>   concurrently).  Also we can not drop the current API in 2.0.0 if
>>>>>>> its
>>>>>>>
>>>>>> not
>>>>>>
>>>>>>>   deprecated.
>>>>>>>
>>>>>> It is totally a reasonable statement for after 2.0.0. But for 2.0.0 I
>>>>> am
>>>>> not okay making this guarantee because I would rather sacrifice
>>>>> backward
>>>>> compatibility for an API that isn't plagued by shortcomings of the old
>>>>>
>>>> API
>>>
>>>> Again, this is the fear/concern of impacting the new API due to
>>>>
>>> supporting
>>>
>>>> of the old which *may or may not even happen*.
>>>>
>>>>  Good point, we could adopt these rules now and never create a new
>>> API.  I
>>> think we would be better off adopting this now regardless of whether or not
>>> we introduce a new API in the future.
>>>
>>> Also, if we do eventually create a new API, how is it user unfriendly to
>>> have
>>> the old API around in deprecated form?  The deprecation markings clearly
>>> communicate that someone writing new code should not use the old API.
>>> However it still allows existing code that users invested time into
>>> writing
>>> to run w/o issue against 2.0.0.
>>>
>>>
>> I feel like I'm repeating myself. My concern is that the implementation
>> details of maintaining the 1.x API in deprecated form will have a negative
>> impact on the 2.0 API.
>>
>
> Sorry, Keith, you misinterpreted what I meant -- let me try to restate. I
> am assuming that a new API will happen.
>
> What is only a possibility is that the old API implementation would
> negatively affect the new API. John's concern is a hypothetical one that
> isn't based on any *actual* implementation details. He's assuming that we
> will hit some sort of roadblock which we would be unable to resolve in a
> desirable way (a way that would not negatively impact 2.0 API).
>
> What I'm saying is that we should address those issues if and when we get
> there. When we have context to a concrete problem, we can make a decision
> there about how to proceed. Meanwhile, we act under best-intentions to keep
> the 1.0 APIs around.
>
> Do you get what I'm suggesting, John?
>
>
I'm totally okay with this. But that means no requirements about APIs from
1.x to 2.0. I'd be comfortable with changing the verbiage to something that
lessens the encouragement to support deprecated APIs, so long as they
don't influence 2.0 APIs.
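
To make the deprecation-markings point above concrete, here is a minimal
sketch of the pattern, using invented names rather than Accumulo's real
client classes: the old 1.x-style method stays in place and delegates to its
replacement, so code users already wrote keeps compiling and running, while
the @Deprecated marking and javadoc steer new code to the new signature.

import java.util.concurrent.TimeUnit;

public class DeprecationSketch {

    private long timeoutMillis = Long.MAX_VALUE;

    /**
     * Old 1.x-style setter, kept so existing user code still compiles and runs.
     *
     * @deprecated use {@link #setTimeout(long, TimeUnit)} instead; under
     *             semver this overload can only be dropped in a MAJOR release.
     */
    @Deprecated
    public void setTimeOut(int seconds) {
        // Delegate to the replacement so the old call keeps identical behavior.
        setTimeout(seconds, TimeUnit.SECONDS);
    }

    /** Replacement API, offered concurrently with the deprecated form. */
    public void setTimeout(long value, TimeUnit unit) {
        this.timeoutMillis = unit.toMillis(value);
    }

    public static void main(String[] args) {
        DeprecationSketch s = new DeprecationSketch();
        s.setTimeOut(30);                    // old call still works; compiler only warns
        s.setTimeout(30, TimeUnit.SECONDS);  // what new code should use
        System.out.println(s.timeoutMillis); // prints 30000
    }
}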


Re: [VOTE] API release policy for 1.7/2.0

2014-12-04 Thread John Vines
On Thu, Dec 4, 2014 at 11:52 AM, Keith Turner  wrote:

> On Thu, Dec 4, 2014 at 11:40 AM, Josh Elser  wrote:
>
> > John Vines wrote:
> >
> >> Though I feel the biggest reasoning is our switch to semantic
> >>>>
> >>> versioning. And from semver.org,
> >>
> >>> >  >
> >>>> >  >
> >>>> >  >  1. MAJOR version when you make incompatible API changes
> >>>> >  >
> >>>> >  >
> >>>> >  >
> >>>>
> >>> >  Right and dropping deprecated APIs is an incompatible change. Do you
> >>> think
> >>> >  the following two rules are reasonable?
> >>> >
> >>> >* When API is deprecated, must offer replacement if feasible.
> >>> >* Can only drop deprecated method when MAJOR version is
> incremented
> >>>
> >> (there
> >>
> >>> >  are other proposed constraints on dropping deprecated methods)
> >>> >
> >>> >  If we follow the above, then we can not deprecate current API before
> >>> >  introducing new API (because the replacement would not exist
> >>> >  concurrently).  Also we can not drop the current API in 2.0.0 if it's
> >>> >  not deprecated.
> >>>
> >>
> >> It is totally a reasonable statement for after 2.0.0. But for 2.0.0 I am
> >> not okay making this guarantee because I would rather sacrifice backward
> >> compatibility for an API that isn't plagued by shortcomings of the old
> API
> >>
> >
> > Again, this is the fear/concern of impacting the new API due to
> supporting
> > of the old which *may or may not even happen*.
> >
>
> Good point, we could adopt these rules now and never create a new API.  I
> think we would be better off adopting this now regardless of whether or not we
> introduce a new API in the future.
>
> Also, if we do eventually create a new API, how is it user unfriendly to have
> the old API around in deprecated form?  The deprecation markings clearly
> communicate that someone writing new code should not use the old API.
> However it still allows existing code that users invested time into writing
> to run w/o issue against 2.0.0.
>

I feel like I'm repeating myself. My concern is that the implementation
details of maintaining the 1.x API in deprecated form will have a negative
impact on the 2.0 API.


Re: [VOTE] API release policy for 1.7/2.0

2014-12-04 Thread John Vines
On Thu, Dec 4, 2014 at 11:34 AM, Keith Turner  wrote:

> On Thu, Dec 4, 2014 at 11:30 AM, John Vines  wrote:
>
> > Sent from my phone, please pardon the typos and brevity.
> > On Dec 4, 2014 11:20 AM, "Keith Turner"  wrote:
> > >
> > > On Wed, Dec 3, 2014 at 6:48 PM, John Vines  wrote:
> > >
> > > > It's hard to track this down-
> > > > http://www.mail-archive.com/dev@accumulo.apache.org/msg07336.html
> has
> > > > Busbey mentioning that 2.0 was breaking, which no one reacted to,
> > implying
> > > > this was known
> > > > http://www.mail-archive.com/dev%40accumulo.apache.org/msg08344.html
> > has
> > > > Mike Drob stating this "In general, I'm inclined to leave as much in
> as
> > > > possible, and then if we
> > > >
> > > > must remove things then do so in 2.0.0. I know that our compatibility
> > > > statement only promises one minor version, but that doesn't mean we
> > have to
> > > > be strict at every opportunity." which promotes this idea.
> > > >
> > > >
> > > > Christopher has a response to that which also corroborates the
> > agreement.
> > > >
> > > >
> > > >
> > > > Though I feel the biggest reasoning is our switch to semantic
> > versioning. And from semver.org,
> > > >
> > > >
> > > >1. MAJOR version when you make incompatible API changes
> > > >
> > > >
> > > >
> > > Right and dropping deprecated APIs is an incompatible change. Do you
> > think
> > > the following two rules are reasonable?
> > >
> > >  * When API is deprecated, must offer replacement if feasible.
> > >  * Can only drop deprecated method when MAJOR version is incremented
> > (there
> > > are other proposed constraints on dropping deprecated methods)
> > >
> > > If we follow the above, then we can not deprecate current API before
> > > introducing new API (because the replacement would not exist
> > > concurrently).  Also we can not drop the current API in 2.0.0 if it's not
> > > deprecated.
> >
> > It is totally a reasonable statement for after 2.0.0. But for 2.0.0 I am
> > not okay making this guarantee because I would rather sacrifice backward
> > compatibility for an API that isn't plagued by shortcomings of the old
> API
> >
>
> I feel like the logic behind this is as follows
>
>   * Our API is user unfriendly in some ways, therefore let's create a new
> API that's more user friendly...
>   * When we introduce the new user-friendly API, let's do something that's
> really user unfriendly and completely drop the old API w/o warning (because
> many users are not following these discussions)
>
>
I'm concerned about the new API not being as user friendly as it could be
because of the old API. And I would rather be extremely user unfriendly once,
by removing the old API, than accept the long-lasting user unfriendliness of a
plagued API.


>
> >
> > >
> > >
> > > > Which is exactly what we're talking about.
> > > >
> > > >
> > > > On Wed, Dec 3, 2014 at 6:01 PM, Keith Turner 
> wrote:
> > > >
> > > >>
> > > >>
> > > >> On Tue, Dec 2, 2014 at 3:07 PM, John Vines 
> wrote:
> > > >>
> > > >>> -1 I do not like the idea of committing to 1.7.0-1.9.9... API
> > additions
> > > >>> for
> > > >>> the 2.0 API. We have already come to the consensus that 2.0 will
> > break
> > > >>> the
> > > >>> 1.x API which provides a lot of breathing room and freedom from old
> > > >>>
> > > >>
> > > >> Can you point me to where this consensus was reached?
> > > >>
> > > >>
> > > >>> decisions. This causes this issue to come roaring back and an even
> > larger
> > > >>> amount of scrutiny to be required for all 1.7.0-1.9.9... API
> changes.
> > I
> > > >>> would go so far as to say an undefinable amount of scrutiny since
> we
> > > >>> still
> > > >>> don't have solid foundation of a 2.0 API. We cannot judge API items
> > for
> > > >>> how
> > > >>> well they belong in an API that does not exist yet.
> > > >>>
> > > >>> Tangential- I w

Re: [VOTE] API release policy for 1.7/2.0

2014-12-04 Thread John Vines
Sent from my phone, please pardon the typos and brevity.
On Dec 4, 2014 11:20 AM, "Keith Turner"  wrote:
>
> On Wed, Dec 3, 2014 at 6:48 PM, John Vines  wrote:
>
> > It's hard to track this down-
> > http://www.mail-archive.com/dev@accumulo.apache.org/msg07336.html has
> > Busbey mentioning that 2.0 was breaking, which no one reacted to,
implying
> > this was known
> > http://www.mail-archive.com/dev%40accumulo.apache.org/msg08344.html has
> > Mike Drob stating this "In general, I'm inclined to leave as much in as
> > possible, and then if we
> >
> > must remove things then do so in 2.0.0. I know that our compatibility
> > statement only promises one minor version, but that doesn't mean we
have to
> > be strict at every opportunity." which promotes this idea.
> >
> >
> > Christopher has a response to that which also corroborates the
agreement.
> >
> >
> >
> > Though I feel the biggest reasoning is our switch to semantic
versioning. And from semver.org,
> >
> >
> >1. MAJOR version when you make incompatible API changes
> >
> >
> >
> Right and dropping deprecated APIs is an incompatible change. Do you think
> the following two rules are reasonable?
>
>  * When API is deprecated, must offer replacement if feasible.
>  * Can only drop deprecated method when MAJOR version is incremented
(there
> are other proposed constraints on dropping deprecated methods)
>
> If we follow the above, then we can not deprecate current API before
> introducing new API (because the replacement would not exist
> concurrently).  Also we can not drop the current API in 2.0.0 if it's not
> deprecated.

It is totally a reasonable statement for after 2.0.0. But for 2.0.0 I am
not okay with making this guarantee, because I would rather sacrifice backward
compatibility for an API that isn't plagued by the shortcomings of the old API.

>
>
> > Which is exactly what we're talking about.
> >
> >
> > On Wed, Dec 3, 2014 at 6:01 PM, Keith Turner  wrote:
> >
> >>
> >>
> >> On Tue, Dec 2, 2014 at 3:07 PM, John Vines  wrote:
> >>
> >>> -1 I do not like the idea of committing to 1.7.0-1.9.9... API
additions
> >>> for
> >>> the 2.0 API. We have already come to the consensus that 2.0 will break
> >>> the
> >>> 1.x API which provides a lot of breathing room and freedom from old
> >>>
> >>
> >> Can you point me to where this consensus was reached?
> >>
> >>
> >>> decisions. This causes this issue to come roaring back and an even
larger
> >>> amount of scrutiny to be required for all 1.7.0-1.9.9... API changes.
I
> >>> would go so far as to say an undefinable amount of scrutiny since we
> >>> still
> >>> don't have solid foundation of a 2.0 API. We cannot judge API items
for
> >>> how
> >>> well they belong in an API that does not exist yet.
> >>>
> >>> Tangential- I would like to see a clause about all current API items
will
> >>> not be removed (still could be deprecated) until 2.0.0, as I feel this
> >>> may
> >>> ease some concerns about API alteration in 1.7+.
> >>>
> >>> On Tue, Dec 2, 2014 at 3:01 PM, Christopher 
wrote:
> >>>
> >>> > Following the conversation on the [VOTE] thread for ACCUMULO-3176,
it
> >>> seems
> >>> > we require explicit API guidelines at least for 1.7.0 and later
> >>> until
> >>> > 2.0.0.
> >>> >
> >>> > I hereby propose we adopt the following guidelines for future
releases
> >>> (if
> >>> > we produce any such releases) until 2.0.0:
> >>> >
> >>> > API additions are permitted in "major" 1.x releases (1.7, 1.8, 1.9,
> >>> 1.10,
> >>> > etc.).
> >>> > API should be forwards and backwards compatible within a 1.x release
> >>> (no
> >>> > new additions to the API in a "bugfix" release; e.g. 1.7.1).
> >>> > New API in 1.7.0 and later 1.x releases will not be removed in 2.0
> >>> (though
> >>> > they may be deprecated in 2.0 and subject to removal in 3.0).
> >>> > Existing API in 1.7.0 will be preserved through 2.0, and should
only be
> >>> > subject to removal if it was already deprecated prior to 1.7.0
(though
> >>> they
> >>> > may be deprecated in 2.0 and subject to removal in 3.0).
> >>> >
> >>> > The purpose of these guidelines is to ensure the ability to add
> >>> additional
> >>> > functionality and evolve API naturally, while minimizing API
> >>> disruptions to
> >>> > the user base, in the interim before 2.0.0 when we can formally
adopt
> >>> an
> >>> > API/versioning policy.
> >>> >
> >>> > Exceptions to these guidelines should be subject to a majority vote,
> >>> on a
> >>> > case-by-case basis.
> >>> >
> >>> > Because these relate to release planning, this vote will be subject
to
> >>> > majority vote, in accordance with our bylaws pertaining to release
> >>> planning
> >>> > and voting, and will be open for 3 days, concluding at 2000 on 5 Dec
> >>> 2014
> >>> > UTC.
> >>> >
> >>> > --
> >>> > Christopher L Tubbs II
> >>> > http://gravatar.com/ctubbsii
> >>> >
> >>>
> >>
> >>
> >


Re: [VOTE] API release policy for 1.7/2.0

2014-12-04 Thread John Vines
On Thu, Dec 4, 2014 at 8:15 AM, Sean Busbey  wrote:

> On Dec 4, 2014 6:55 AM, "Josh Elser"  wrote:
> >
> > (I was still confused so I just chatted with John on the subject of his
> -1)
> >
> > He was under the impression that it would not be feasible to leave the
> existing 1.X APIs in place with the creation of the 2.0 APIs, whereas I
> had assumed that this wouldn't be an issue.
> >
> > He brought up the issue of how we plan to handle exceptions in the new
> API, which would very likely include changes to the Thrift APIs as well.
> If this is the case, we'd now have to support the 1.X API (while it existed
> as deprecated) as well as the new 2.0 API. This would likely affect how we
> actually want the 2.0 API to operate.
> >
> > This all kind of boils down to confusion over whether or not there is any
> compatibility between 1.x and 2.0. If 2.0 is a clean break from 1.x, this
> thread is pointless. Otherwise, we risk not getting the APIs we really
> want.
> >
> > Does this help clarify the concern?
> >
>
> One way to address that kind of concern would be to only support the 1.x
> APIs via an optional different end point.
>
> We obviously don't have enough information at this point to evaluate how
> much such a separation would take to implement nor how maintainable it
> would be.
>
> But there at least seems to be a way to work through that issue if it comes
> up.
>

I hope so. But until we have a new API fully implemented that we're content
with, I don't think we should make any guarantees about compatibility
of the 1.x API, just in case we end up hitting an insurmountable issue.
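
One purely hypothetical shape Sean's "optional different end point" idea
could take on the client side, just to make the concern concrete (every name
below is invented for illustration, none of it is real Accumulo code): the
1.x surface lives in a thin, optional adapter that delegates to a 2.0-style
client. Whether such a separation would actually be maintainable is exactly
the open question.

public final class LegacyClientAdapter {

    /** Hypothetical stand-ins for a 2.0-style client; not real Accumulo types. */
    public interface NewClient {
        Scanner newScanner(String table);
    }

    public interface Scanner extends AutoCloseable {
        @Override
        void close();
    }

    private final NewClient delegate;

    public LegacyClientAdapter(NewClient delegate) {
        this.delegate = delegate;
    }

    /** A 1.x-shaped call kept for old code; it simply adapts onto the new client. */
    public Scanner createScanner(String table) {
        return delegate.newScanner(table);
    }
}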


Re: [VOTE] API release policy for 1.7/2.0

2014-12-04 Thread John Vines
On Wed, Dec 3, 2014 at 4:06 PM, Josh Elser  wrote:

> Can we bring the discussion back around? I feel like we have two separate
> things going on here.
>
> 1) Can we avoid further churn in the public API for [1.7.0,2.0.0] by
> avoiding any removal or additional deprecation.
>

Do you mean [1.7.0,2.0.0)?


>
> 2) In 2.0.0, what are we actually going to remove that is already
> deprecated
>
> re #1, I don't think we have consensus, but I think that is a moderate
> middle ground. Some wouldn't mind normal deprecation cycles for normal
> releases between now and 2.0.0; others have argued that we should not alter
> the public API at all before 2.0.0. I think we should try to focus the
> conversation here and come up with a compromise.
>
> re #2, I think we can re-visit what (if anything) is candidate for
> deletion when 2.0.0 happens. I don't think that's directly necessary to
> answer in order to come to a conclusion on #1.
>
> Correct me if I have misspoken.
>
>
> Christopher wrote:
>
>> Sorry, another way to put this (more succinctly) is that I have removed
>> *all* deprecated APIs prior to 1.7 with the exception of the
>> instance.dfs.{uri,dir} configuration properties in my local 2.0 branch.
>> After some hindsight, essentially what I was trying to propose is that we
>> treat 1.7 as a "minor" release, and any subsequent 1.x releases as normal
>> "minor" or "patch" releases, according to definitions of those for Semver.
>>
>> For the record, Semver also doesn't address what *should be* removed, only
>> what *can be*. If anybody wants to keep something around longer, I don't
>> consider that blocking these minimal guidelines. If we end up adopting
>> Semver, and apply the constraints to 1.7 moving forward, these proposed
>> guidelines are moot.
>>
>>
>>
>> --
>> Christopher L Tubbs II
>> http://gravatar.com/ctubbsii
>>
>> On Wed, Dec 3, 2014 at 12:38 PM, Christopher  wrote:
>>
>>  On Wed, Dec 3, 2014 at 10:10 AM, Keith Turner  wrote:
>>>
>>>  On Tue, Dec 2, 2014 at 3:01 PM, Christopher
 wrote:

  Following the conversation on the [VOTE] thread for ACCUMULO-3176, it
>
 seems

> we require an explicit API guidelines at least for 1.7.0 and later
> until
> 2.0.0.
>
> I hereby propose we adopt the following guidelines for future releases
>
 (if

> we produce any such releases) until 2.0.0:
>
> API additions are permitted in "major" 1.x releases (1.7, 1.8, 1.9,
>
 1.10,

> etc.).
> API should be forwards and backwards compatible within a 1.x release
> (no
> new additions to the API in a "bugfix" release; e.g. 1.7.1).
> New API in 1.7.0 and later 1.x releases will not be removed in 2.0
>
 (though

> they may be deprecated in 2.0 and subject to removal in 3.0).
> Existing API in 1.7.0 will be preserved through 2.0, and should only be
> subject to removal if it was already deprecated prior to 1.7.0 (though
>
 they

> may be deprecated in 2.0 and subject to removal in 3.0).
>
>  This stmt can lead to disagreement later over what deprecated methods
 are
 removed in 2.0.  We could explicitly list which deprecated methods will
 be
 removed as part of this vote.  Alternatively, we could add a clause
 saying
 there will be a vote prior to 2.0 over which methods are removed.  If we
 decide now, then we could add something to 1.7.0 javadoc stating the
 method
 will go away in 2.0.


  These are intended to be minimal guidelines, not a comprehensive list
>>> of
>>> what should be removed... only guidelines to ensure we don't remove
>>> something in a breaking way. I'm fine with disagreeing with what can be
>>> removed later... so long as we're agreed on certain minimal things which
>>> cannot be removed, to ensure a smooth transition.
>>>
>>> However, for the record, the comprehensive list of things I expect to
>>> remove in 2.0, all of which were deprecated in 1.6.0 or prior:
>>>
>>> Constants.NO_AUTHS (deprecated since 1.6.0)
>>> ScannerOptions.{set,get}TimeOut(...) (deprecated since 1.5.0)
>>> Connector.create[MultiTable]Batch{Deleter,Scanner}(...) without
>>> BatchWriterConfig (deprecated since 1.5.0)
>>> Instance.getConnector(...) that doesn't take an AuthorizationToken
>>> (deprecated since 1.5.0)
>>> MutationsRejectedException constructor (deprecated since 1.6.0)
>>> MutationsRejectedException.getAuthorizationFailures() (deprecated since
>>> 1.5.0)
>>> some ZooKeeperInstance constructors replaced with ClientConfiguration
>>> (deprecated since 1.6.0)
>>> some SecurityOperations methods (deprecated since 1.5.0)
>>> TableOperations.getSplits() (deprecated since 1.5.0)
>>> non-range TableOperations.flush() (deprecated since 1.4)
>>> Constraint.getAuthorizations() (deprecated since 1.5.0)
>>> static KeyExtent.getkeyExtentsForRange() (deprecated and unused utility
>>> method)
>>> Value constructor with copy param (deprecated and unused)
>>> Ag
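
For a couple of the entries in that list, the user-facing migration is
roughly the sketch below. This is a hedged example written from my reading
of the 1.5/1.6-era client javadoc, not a definitive guide; the exact
signatures should be checked against the release you build with.

import org.apache.accumulo.core.client.BatchWriter;
import org.apache.accumulo.core.client.BatchWriterConfig;
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.TableNotFoundException;
import org.apache.accumulo.core.security.Authorizations;

public class MigrationSketch {

    // Old: Constants.NO_AUTHS (deprecated since 1.6.0).  New: Authorizations.EMPTY.
    static Authorizations emptyAuths() {
        return Authorizations.EMPTY;
    }

    // Old (deprecated since 1.5.0): createBatchWriter(table, maxMemory, maxLatency, maxWriteThreads).
    // New: pass a BatchWriterConfig instead of the bare numbers.
    static BatchWriter openWriter(Connector conn, String table) throws TableNotFoundException {
        BatchWriterConfig cfg = new BatchWriterConfig()
                .setMaxMemory(10 * 1024 * 1024)  // bytes
                .setMaxWriteThreads(4);
        return conn.createBatchWriter(table, cfg);
    }
}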

Re: [VOTE] API release policy for 1.7/2.0

2014-12-03 Thread John Vines
On Wed, Dec 3, 2014 at 7:02 PM, Christopher  wrote:

> On Wed, Dec 3, 2014 at 6:48 PM, John Vines  wrote:
>
>> It's hard to track this down-
>> http://www.mail-archive.com/dev@accumulo.apache.org/msg07336.html has
>> Busbey mentioning that 2.0 was breaking, which no one reacted to, implying
>> this was known
>>
>
> In that context, I had assumed he meant the dropping of deprecated APIs
> from previous releases.
>
>
>> http://www.mail-archive.com/dev%40accumulo.apache.org/msg08344.html has
>> Mike Drob stating this "In general, I'm inclined to leave as much in as
>> possible, and then if we
>>
>> must remove things then do so in 2.0.0. I know that our compatibility
>> statement only promises one minor version, but that doesn't mean we have
>> to
>> be strict at every opportunity." which promotes this idea.
>>
>>
> I believe we were talking about dropping deprecated stuff only here also.
>
>
>>
>> Christopher has a response to that which also corroborates the agreement.
>>
>>
>>
>> Though I feel the biggest reasoning is our switch to semantic
>> versioning. And from semver.org,
>>
>>
>>1. MAJOR version when you make incompatible API changes
>>
>>
>> Which is exactly what we're talking about.
>>
>>
> Yes, but it is unclear *which* things to drop. From our current practice,
> it's clear that we would only drop stuff that has been deprecated for at
> least one minor version (same with Semver). The proposed guidelines here
> strengthen that
> (along the same lines as my DISCUSS thread on adopting Semver) such that we
> make fewer such drops.
>
> However, we're still talking about deprecated APIs only.
>

I don't read it that way


>
>
>>
>> On Wed, Dec 3, 2014 at 6:01 PM, Keith Turner  wrote:
>>
>> >
>> >
>> > On Tue, Dec 2, 2014 at 3:07 PM, John Vines  wrote:
>> >
>> >> -1 I do not like the idea of committing to 1.7.0-1.9.9... API additions
>> >> for
>> >> the 2.0 API. We have already come to the consensus that 2.0 will break
>> the
>> >> 1.x API which provides a lot of breathing room and freedom from old
>> >>
>> >
>> > Can you point me to where this consensus was reached?
>> >
>> >
>> >> decisions. This causes this issue to come roaring back and an even
>> larger
>> >> amount of scrutiny to be required for all 1.7.0-1.9.9... API changes. I
>> >> would go so far as to say an undefinable amount of scrutiny since we
>> still
>> >> don't have solid foundation of a 2.0 API. We cannot judge API items for
>> >> how
>> >> well they belong in an API that does not exist yet.
>> >>
>> >> Tangential- I would like to see a clause about all current API items
>> will
>> >> not be removed (still could be deprecated) until 2.0.0, as I feel this
>> may
>> >> ease some concerns about API alteration in 1.7+.
>> >>
>> >> On Tue, Dec 2, 2014 at 3:01 PM, Christopher 
>> wrote:
>> >>
>> >> > Following the conversation on the [VOTE] thread for ACCUMULO-3176, it
>> >> seems
>> >> > we require explicit API guidelines at least for 1.7.0 and later
>> until
>> >> > 2.0.0.
>> >> >
>> >> > I hereby propose we adopt the following guidelines for future
>> releases
>> >> (if
>> >> > we produce any such releases) until 2.0.0:
>> >> >
>> >> > API additions are permitted in "major" 1.x releases (1.7, 1.8, 1.9,
>> >> 1.10,
>> >> > etc.).
>> >> > API should be forwards and backwards compatible within a 1.x release
>> (no
>> >> > new additions to the API in a "bugfix" release; e.g. 1.7.1).
>> >> > New API in 1.7.0 and later 1.x releases will not be removed in 2.0
>> >> (though
>> >> > they may be deprecated in 2.0 and subject to removal in 3.0).
>> >> > Existing API in 1.7.0 will be preserved through 2.0, and should only
>> be
>> >> > subject to removal if it was already deprecated prior to 1.7.0
>> (though
>> >> they
>> >> > may be deprecated in 2.0 and subject to removal in 3.0).
>> >> >
>> >> > The purpose of these guidelines is to ensure the ability to add
>> >> additional
>> >> > functionality and evolve API naturally, while minimizing API
>> >> disruptions to
>> >> > the user base, in the interim before 2.0.0 when we can formally
>> adopt an
>> >> > API/versioning policy.
>> >> >
>> >> > Exceptions to these guidelines should be subject to a majority vote,
>> on
>> >> a
>> >> > case-by-case basis.
>> >> >
>> >> > Because these relate to release planning, this vote will be subject
>> to
>> >> > majority vote, in accordance with our bylaws pertaining to release
>> >> planning
>> >> > and voting, and will be open for 3 days, concluding at 2000 on 5 Dec
>> >> 2014
>> >> > UTC.
>> >> >
>> >> > --
>> >> > Christopher L Tubbs II
>> >> > http://gravatar.com/ctubbsii
>> >> >
>> >>
>> >
>> >
>>
>
>


Re: [VOTE] API release policy for 1.7/2.0

2014-12-03 Thread John Vines
I already cited sources for it in my previous response.

On Wed, Dec 3, 2014 at 6:57 PM, Christopher  wrote:

> On Wed, Dec 3, 2014 at 5:28 PM, John Vines  wrote:
>
>> I stand by my -1. This vote would guarantee a level of API compatibility
>> that I don't think we should be held to.
>>
>>
> So, this degree of compatibility is our *current* practice between major
> versions, and this is the default assumption I've been operating under.
> These proposed guidelines do not presume to add that requirement... they
> only assumes our current practice of API support between major releases
> (something I feel we've consistently been getting better at since 1.5).
>
> Whether we should discard this current practice for 2.0 seems like a
> separate conversation (though it may be a prerequisite for this vote), and
> I'm not sure where the idea that developers would be on board with this
> came from (a misunderstanding in a previous thread, perhaps?). Personally,
> I would vote against discarding the current practice, because I think it's
> a bad idea to introduce breaking changes without deprecating first, and
> giving users some opportunity to migrate at their own pace when they
> upgrade.
>
> This (mis?)understanding seems to be at the heart of a lot of the dialogue
> on this list in the past few weeks. If you could direct me to some previous
> thread which established consensus on the decision to discard the 1.x APIs
> without deprecated support, I think it would help.
>
>
>> On Wed, Dec 3, 2014 at 5:15 PM, Christopher  wrote:
>>
>>> Does this information affect your vote?
>>>
>>>
>>> --
>>> Christopher L Tubbs II
>>> http://gravatar.com/ctubbsii
>>>
>>> On Tue, Dec 2, 2014 at 6:16 PM, Christopher  wrote:
>>>
>>>> On Tue, Dec 2, 2014 at 5:18 PM, John Vines  wrote:
>>>>
>>>>> On Tue, Dec 2, 2014 at 3:14 PM, Christopher 
>>>>> wrote:
>>>>>
>>>>> > On Tue, Dec 2, 2014 at 3:07 PM, John Vines  wrote:
>>>>> >
>>>>> > > -1 I do not like the idea of committing to 1.7.0-1.9.9... API
>>>>> additions
>>>>> > for
>>>>> > > the 2.0 API. We have already come to the consensus that 2.0 will
>>>>> break
>>>>> > the
>>>>> > > 1.x API which provides a lot of breathing room and freedom from old
>>>>> > > decisions. This causes this issue to come roaring back and an even
>>>>> larger
>>>>> > > amount of scrutiny to be required for all 1.7.0-1.9.9... API
>>>>> changes. I
>>>>> > > would go so far as to say an undefinable amount of scrutiny since
>>>>> we
>>>>> > still
>>>>> > > don't have solid foundation of a 2.0 API. We cannot judge API
>>>>> items for
>>>>> > how
>>>>> > > well they belong in an API that does not exist yet.
>>>>> > >
>>>>> > >
>>>>> > Honestly, I don't expect us to have any major 1.x releases after
>>>>> 1.7.x.
>>>>> > These guidelines would just add some minor protection, making 1.x a
>>>>> bit
>>>>> > more stable in the transition to 2.0 if we ever do have such
>>>>> releases. I'd
>>>>> > hate for a user to seamlessly migrate to 2.0 from 1.7, but not be
>>>>> able to
>>>>> > seamlessly migrate from a 1.8 to 2.0, because 1.8 dropped some 1.7
>>>>> API.
>>>>> >
>>>>>
>>>>> This doesn't make any sense. I've been under the impression that there
>>>>> will
>>>>> not be a seamless migration to 2.0 from any release. I thought 2.0 was
>>>>> supposed to be a clean start of an API in order to prevent old method
>>>>> signatures from making a better, cleaner API. And with that, it means
>>>>> that
>>>>> migrating from 1.7 shouldn't be any different from 1.8. I expect
>>>>> there to
>>>>> be no necessity for any api in any version of 1.x to exist in 2.0,
>>>>> including those introduced in 1.999.0 if that's what it takes. Your
>>>>> statement specifies differently and that either means my basis for 2.0's
>>>>> API is false or you're now introducing a new requirement to it.
>>>>>
>>>>>

Re: [VOTE] API release policy for 1.7/2.0

2014-12-03 Thread John Vines
It's hard to track this down-
http://www.mail-archive.com/dev@accumulo.apache.org/msg07336.html has
Busbey mentioning that 2.0 was breaking, which no one reacted to, implying
this was known
http://www.mail-archive.com/dev%40accumulo.apache.org/msg08344.html has
Mike Drob stating this "In general, I'm inclined to leave as much in as
possible, and then if we

must remove things then do so in 2.0.0. I know that our compatibility
statement only promises one minor version, but that doesn't mean we have to
be strict at every opportunity." which promotes this idea.


Christopher has a response to that which also corroborates the agreement.



Though I feel the biggest reasoning is our switch to semantic
versioning. And from semver.org,


   1. MAJOR version when you make incompatible API changes


Which is exactly what we're talking about.


On Wed, Dec 3, 2014 at 6:01 PM, Keith Turner  wrote:

>
>
> On Tue, Dec 2, 2014 at 3:07 PM, John Vines  wrote:
>
>> -1 I do not like the idea of committing to 1.7.0-1.9.9... API additions
>> for
>> the 2.0 API. We have already come to the consensus that 2.0 will break the
>> 1.x API which provides a lot of breathing room and freedom from old
>>
>
> Can you point me to where this consensus was reached?
>
>
>> decisions. This causes this issue to come roaring back and an even larger
>> amount of scrutiny to be required for all 1.7.0-1.9.9... API changes. I
>> would go so far as to say an undefinable amount of scrutiny since we still
>> don't have solid foundation of a 2.0 API. We cannot judge API items for
>> how
>> well they belong in an API that does not exist yet.
>>
>> Tangential- I would like to see a clause about all current API items will
>> not be removed (still could be deprecated) until 2.0.0, as I feel this may
>> ease some concerns about API alteration in 1.7+.
>>
>> On Tue, Dec 2, 2014 at 3:01 PM, Christopher  wrote:
>>
>> > Following the conversation on the [VOTE] thread for ACCUMULO-3176, it
>> seems
>> > we require explicit API guidelines at least for 1.7.0 and later until
>> > 2.0.0.
>> >
>> > I hereby propose we adopt the following guidelines for future releases
>> (if
>> > we produce any such releases) until 2.0.0:
>> >
>> > API additions are permitted in "major" 1.x releases (1.7, 1.8, 1.9,
>> 1.10,
>> > etc.).
>> > API should be forwards and backwards compatible within a 1.x release (no
>> > new additions to the API in a "bugfix" release; e.g. 1.7.1).
>> > New API in 1.7.0 and later 1.x releases will not be removed in 2.0
>> (though
>> > they may be deprecated in 2.0 and subject to removal in 3.0).
>> > Existing API in 1.7.0 will be preserved through 2.0, and should only be
>> > subject to removal if it was already deprecated prior to 1.7.0 (though
>> they
>> > may be deprecated in 2.0 and subject to removal in 3.0).
>> >
>> > The purpose of these guidelines is to ensure the ability to add
>> additional
>> > functionality and evolve API naturally, while minimizing API
>> disruptions to
>> > the user base, in the interim before 2.0.0 when we can formally adopt an
>> > API/versioning policy.
>> >
>> > Exceptions to these guidelines should be subject to a majority vote, on
>> a
>> > case-by-case basis.
>> >
>> > Because these relate to release planning, this vote will be subject to
>> > majority vote, in accordance with our bylaws pertaining to release
>> planning
>> > and voting, and will be open for 3 days, concluding at 2000 on 5 Dec
>> 2014
>> > UTC.
>> >
>> > --
>> > Christopher L Tubbs II
>> > http://gravatar.com/ctubbsii
>> >
>>
>
>


Re: [VOTE] API release policy for 1.7/2.0

2014-12-03 Thread John Vines
I believe I've explained it in detail. For 2.0 we have not had any sort of
hard requirement for API compatibility, and the language in this vote
changes that. My original response explains this in more detail.


On Wed, Dec 3, 2014 at 5:33 PM, Keith Turner  wrote:

>
>
> On Wed, Dec 3, 2014 at 5:28 PM, John Vines  wrote:
>
>> I stand by my -1. This vote would guarantee a level of API compatibility
>> that I don't think we should be held to.
>>
>
> Can you give some some specific reasons for your -1?
>
>
>>
>> On Wed, Dec 3, 2014 at 5:15 PM, Christopher  wrote:
>>
>> > Does this information affect your vote?
>> >
>> >
>> > --
>> > Christopher L Tubbs II
>> > http://gravatar.com/ctubbsii
>> >
>> > On Tue, Dec 2, 2014 at 6:16 PM, Christopher 
>> wrote:
>> >
>> >> On Tue, Dec 2, 2014 at 5:18 PM, John Vines  wrote:
>> >>
>> >>> On Tue, Dec 2, 2014 at 3:14 PM, Christopher 
>> wrote:
>> >>>
>> >>> > On Tue, Dec 2, 2014 at 3:07 PM, John Vines 
>> wrote:
>> >>> >
>> >>> > > -1 I do not like the idea of committing to 1.7.0-1.9.9... API
>> >>> additions
>> >>> > for
>> >>> > > the 2.0 API. We have already come to the consensus that 2.0 will
>> >>> break
>> >>> > the
>> >>> > > 1.x API which provides a lot of breathing room and freedom from
>> old
>> >>> > > decisions. This causes this issue to come roaring back and an even
>> >>> larger
>> >>> > > amount of scrutiny to be required for all 1.7.0-1.9.9... API
>> >>> changes. I
>> >>> > > would go so far as to say an undefinable amount of scrutiny since
>> we
>> >>> > still
>> >>> > > don't have solid foundation of a 2.0 API. We cannot judge API
>> items
>> >>> for
>> >>> > how
>> >>> > > well they belong in an API that does not exist yet.
>> >>> > >
>> >>> > >
>> >>> > Honestly, I don't expect us to have any major 1.x releases after
>> 1.7.x.
>> >>> > These guidelines would just add some minor protection, making 1.x a
>> bit
>> >>> > more stable in the transition to 2.0 if we ever do have such
>> releases.
>> >>> I'd
>> >>> > hate for a user to seamlessly migrate to 2.0 from 1.7, but not be
>> able
>> >>> to
>> >>> > seamlessly migrate from a 1.8 to 2.0, because 1.8 dropped some 1.7
>> API.
>> >>> >
>> >>>
>> >>> This doesn't make any sense. I've been under the impression that there
>> >>> will
>> >>> not be a seamless migration to 2.0 from any release. I thought 2.0 was
>> >>> supposed to be a clean start of an API in order to prevent old method
>> >>> signatures from making a better, cleaner API. And with that, it means
>> >>> that
>> >>> migrating from 1.7 shouldn't be any different from 1.8. I expect
>> there
>> >>> to
>> >>> be no necessity for any api in any version of 1.x to exist in 2.0,
>> >>> including those introduced in 1.999.0 if that's what it takes. Your
>> >>> statement specifies differently and that either means my basis for 2.0's
>> >>> API is false or you're now introducing a new requirement to it.
>> >>>
>> >>>
>> >>>
>> >> We're not just going to drop the 1.x API. The core jar will still
>> exist,
>> >> and contain all the old APIs (at least, that was my understanding). We
>> >> weren't going to throw out the window our normal practice of
>> deprecating
>> >> APIs (I certainly had no intentions to do so). My understanding would
>> be
>> >> that we would deprecate the old 1.x APIs in 2.0, and remove them in
>> 3.0.
>> >>
>> >> I've not even considered this as a "new requirement" for the new client
>> >> API... it's just the way we do things in this community (deprecate
>> first,
>> >> remove later). The only difference would be that the version numbers
>> would
>> >> actually mean something in terms of guarantees about when we remove
>> those
>> &

Re: [VOTE] API release policy for 1.7/2.0

2014-12-03 Thread John Vines
I stand by my -1. This vote would guarantee a level of API compatibility
that I don't think we should be held to.

On Wed, Dec 3, 2014 at 5:15 PM, Christopher  wrote:

> Does this information affect your vote?
>
>
> --
> Christopher L Tubbs II
> http://gravatar.com/ctubbsii
>
> On Tue, Dec 2, 2014 at 6:16 PM, Christopher  wrote:
>
>> On Tue, Dec 2, 2014 at 5:18 PM, John Vines  wrote:
>>
>>> On Tue, Dec 2, 2014 at 3:14 PM, Christopher  wrote:
>>>
>>> > On Tue, Dec 2, 2014 at 3:07 PM, John Vines  wrote:
>>> >
>>> > > -1 I do not like the idea of committing to 1.7.0-1.9.9... API
>>> additions
>>> > for
>>> > > the 2.0 API. We have already come to the consensus that 2.0 will
>>> break
>>> > the
>>> > > 1.x API which provides a lot of breathing room and freedom from old
>>> > > decisions. This causes this issue to come roaring back and an even
>>> larger
>>> > > amount of scrutiny to be required for all 1.7.0-1.9.9... API
>>> changes. I
>>> > > would go so far as to say an undefinable amount of scrutiny since we
>>> > still
>>> > > don't have solid foundation of a 2.0 API. We cannot judge API items
>>> for
>>> > how
>>> > > well they belong in an API that does not exist yet.
>>> > >
>>> > >
>>> > Honestly, I don't expect us to have any major 1.x releases after 1.7.x.
>>> > These guidelines would just add some minor protection, making 1.x a bit
>>> > more stable in the transition to 2.0 if we ever do have such releases.
>>> I'd
>>> > hate for a user to seamlessly migrate to 2.0 from 1.7, but not be able
>>> to
>>> > seamlessly migrate from a 1.8 to 2.0, because 1.8 dropped some 1.7 API.
>>> >
>>>
>>> This doesn't make any sense. I've been under the impression that there
>>> will
>>> not be a seamless migration to 2.0 from any release. I thought 2.0 was
>>> supposed to be a clean start of an API in order to prevent old method
>>> signatures from making a better, cleaner API. And with that, it means
>>> that
>>> migrating from 1.7 shouldn't make any different from 1.8. I expect there
>>> to
>>> be no necessity for any api in any version of 1.x to exist in 2.0,
>>> including those introduced in 1.999.0 if that's what it takes. Your
>>> statement specifies differently and that either means my bases for 2.0's
>>> API is false or your now introducing a new requirement to it.
>>>
>>>
>>>
>> We're not just going to drop the 1.x API. The core jar will still exist,
>> and contain all the old APIs (at least, that was my understanding). We
>> weren't going to throw out the window our normal practice of deprecating
>> APIs (I certainly had no intentions to do so). My understanding would be
>> that we would deprecate the old 1.x APIs in 2.0, and remove them in 3.0.
>>
>> I've not even considered this as a "new requirement" for the new client
>> API... it's just the way we do things in this community (deprecate first,
>> remove later). The only difference would be that the version numbers would
>> actually mean something in terms of guarantees about when we remove those
>> deprecated methods. This is what I've consistently expressed in the
>> previous thread regarding ACCUMULO-3176.
>>
>>
>>
>>> >
>>> >
>>> > > Tangential- I would like to see a clause about all current API items
>>> will
>>> > > not be removed (still could be deprecated) until 2.0.0, as I feel
>>> this
>>> > may
>>> > > ease some concerns about API alteration in 1.7+.
>>> > >
>>> > >
>>> > I believe I expressed that above, and only excluded things that were
>>> > deprecated prior to 1.7 (such as aggregators, which I expect to drop in
>>> > 2.0).
>>> >
>>> >
>>> > > On Tue, Dec 2, 2014 at 3:01 PM, Christopher 
>>> wrote:
>>> > >
>>> > > > Following the conversation on the [VOTE] thread for ACCUMULO-3176,
>>> it
>>> > > seems
>>> > > > we require an explicit API guidelines at least for 1.7.0 and later
>>> > until
>>> > > > 2.0.0.
>>> > > >
>>> > > > I hereby propose we adopt the following guidelines 

Re: [VOTE] API release policy for 1.7/2.0

2014-12-03 Thread John Vines
Accidentally sent to just Shawn before

Sent from my phone, please pardon the typos and brevity.
-- Forwarded message --
From: "John Vines" 
Date: Dec 3, 2014 10:01 AM
Subject: Re: [VOTE] API release policy for 1.7/2.0
To: "Sean Busbey" 
Cc:

Sent from my phone, please pardon the typos and brevity.
On Dec 3, 2014 9:49 AM, "Sean Busbey"  wrote:
>
> -1 also, ATM. I'd like to see us freeze APIs between now and the 2.0
release.
>
> Downstream users have to plan when they invest effort in migrating
Accumulo versions. We've already signaled that 2.0 will be the start of a
new API with long-lived compatibility promises. (We should keep signaling
this.) That makes it a promising place to make a jump (in some cases, from
1.4 I'm sure).
>
> I would like to avoid, however possible, leaning those users towards
ignoring releases between now and 2.0. For those who are back on 1.4 or 1.5
we can't really do too much. For those on 1.6 we can make it so there is
relatively little risk in moving forward.
>
> API additions matter here because when a system integrator makes an
application on top of Accumulo they often start at the latest version they
can find. Later, they may have a client with a regulatory requirement to
use an earlier version. Porting backwards is just as hard as porting
forwards in our code base.
>

I have an issue with this line of reasoning. Not allowing new APIs for that
reason sounds like a poor justification for not adding them. If we had no talk
of doing a 2.0, a request like that would be considered utterly unreasonable.
And it's also a bit invalid given that we have already added other new APIs.

> I'd also like to see the "no removing of deprecated" language
strengthened to remove the exception for things deprecated prior to 1.7.
>
> Yes, this will severely constrain what we can do prior to 2.0. But I
think doing otherwise will just encourage us to keep squeezing in "just one
more" major pre-2.0 release to get some additional client facing feature
out the door.
>

If this is a concern, then we should put effort into defining a roadmap for
features for 1.7 and 2.0. Enforcing this via API alteration limitations
doesn't seem to be the right way to do it.

> If we have some downstream users with different compatibility needs and
with particular operational needs for features that are delayed to 2.0
because of this decision, it should be straight forward for them to
backport the things they need and run their own packaging. Plenty of folks
who don't need the legal indemnification that the ASF provides do this for
a wide variety of projects.
>

Let's not go into a discussion about telling people to fork the code and do
their own thing, shall we?

>
> On Tue, Dec 2, 2014 at 2:07 PM, John Vines  wrote:
>>
>> -1 I do not like the idea of committing to 1.7.0-1.9.9... API additions
for
>> the 2.0 API. We have already come to the consensus that 2.0 will break
the
>> 1.x API which provides a lot of breathing room and freedom from old
>> decisions. This causes this issue to come roaring back and an even larger
>> amount of scrutiny to be required for all 1.7.0-1.9.9... API changes. I
>> would go so far as to say an undefinable amount of scrutiny since we
still
>> don't have solid foundation of a 2.0 API. We cannot judge API items for
how
>> well they belong in an API that does not exist yet.
>>
>> Tangential- I would like to see a clause about all current API items will
>> not be removed (still could be deprecated) until 2.0.0, as I feel this
may
>> ease some concerns about API alteration in 1.7+.
>>
>> On Tue, Dec 2, 2014 at 3:01 PM, Christopher  wrote:
>>
>> > Following the conversation on the [VOTE] thread for ACCUMULO-3176, it
seems
>> > we require explicit API guidelines at least for 1.7.0 and later
until
>> > 2.0.0.
>> >
>> > I hereby propose we adopt the following guidelines for future releases
(if
>> > we produce any such releases) until 2.0.0:
>> >
>> > API additions are permitted in "major" 1.x releases (1.7, 1.8, 1.9,
1.10,
>> > etc.).
>> > API should be forwards and backwards compatible within a 1.x release
(no
>> > new additions to the API in a "bugfix" release; e.g. 1.7.1).
>> > New API in 1.7.0 and later 1.x releases will not be removed in 2.0
(though
>> > they may be deprecated in 2.0 and subject to removal in 3.0).
>> > Existing API in 1.7.0 will be preserved through 2.0, and should only be
>> > subject to removal if it was already deprecated prior to 1.7.0 (though
they
>> > may be deprecated in 2.0 and subject to removal in 3.0).
>> >
>> > The purpose 

Re: [VOTE] API release policy for 1.7/2.0

2014-12-02 Thread John Vines
On Tue, Dec 2, 2014 at 3:14 PM, Christopher  wrote:

> On Tue, Dec 2, 2014 at 3:07 PM, John Vines  wrote:
>
> > -1 I do not like the idea of committing to 1.7.0-1.9.9... API additions
> for
> > the 2.0 API. We have already come to the consensus that 2.0 will break
> the
> > 1.x API which provides a lot of breathing room and freedom from old
> > decisions. This causes this issue to come roaring back and an even larger
> > amount of scrutiny to be required for all 1.7.0-1.9.9... API changes. I
> > would go so far as to say an undefinable amount of scrutiny since we
> still
> > don't have solid foundation of a 2.0 API. We cannot judge API items for
> how
> > well they belong in an API that does not exist yet.
> >
> >
> Honestly, I don't expect us to have any major 1.x releases after 1.7.x.
> These guidelines would just add some minor protection, making 1.x a bit
> more stable in the transition to 2.0 if we ever do have such releases. I'd
> hate for a user to seamlessly migrate to 2.0 from 1.7, but not be able to
> seamlessly migrate from a 1.8 to 2.0, because 1.8 dropped some 1.7 API.
>

This doesn't make any sense. I've been under the impression that there will
not be a seamless migration to 2.0 from any release. I thought 2.0 was
supposed to be a clean start of an API, in order to prevent old method
signatures from holding back a better, cleaner API. And with that, it means that
migrating from 1.7 shouldn't be any different from migrating from 1.8. I expect
there to be no necessity for any API in any version of 1.x to exist in 2.0,
including those introduced in 1.999.0 if that's what it takes. Your
statement specifies differently, and that either means my basis for 2.0's
API is false or you're now introducing a new requirement to it.


>
>
> > Tangential- I would like to see a clause about all current API items will
> > not be removed (still could be deprecated) until 2.0.0, as I feel this
> may
> > ease some concerns about API alteration in 1.7+.
> >
> >
> I believe I expressed that above, and only excluded things that were
> deprecated prior to 1.7 (such as aggregators, which I expect to drop in
> 2.0).
>
>
> > On Tue, Dec 2, 2014 at 3:01 PM, Christopher  wrote:
> >
> > > Following the conversation on the [VOTE] thread for ACCUMULO-3176, it
> > seems
> > > we require explicit API guidelines at least for 1.7.0 and later
> until
> > > 2.0.0.
> > >
> > > I hereby propose we adopt the following guidelines for future releases
> > (if
> > > we produce any such releases) until 2.0.0:
> > >
> > > API additions are permitted in "major" 1.x releases (1.7, 1.8, 1.9,
> 1.10,
> > > etc.).
> > > API should be forwards and backwards compatible within a 1.x release
> (no
> > > new additions to the API in a "bugfix" release; e.g. 1.7.1).
> > > New API in 1.7.0 and later 1.x releases will not be removed in 2.0
> > (though
> > > they may be deprecated in 2.0 and subject to removal in 3.0).
> > > Existing API in 1.7.0 will be preserved through 2.0, and should only be
> > > subject to removal if it was already deprecated prior to 1.7.0 (though
> > they
> > > may be deprecated in 2.0 and subject to removal in 3.0).
> > >
> > > The purpose of these guidelines is to ensure the ability to add
> > additional
> > > functionality and evolve API naturally, while minimizing API
> disruptions
> > to
> > > the user base, in the interim before 2.0.0 when we can formally adopt
> an
> > > API/versioning policy.
> > >
> > > Exceptions to these guidelines should be subject to a majority vote,
> on a
> > > case-by-case basis.
> > >
> > > Because these relate to release planning, this vote will be subject to
> > > majority vote, in accordance with our bylaws pertaining to release
> > planning
> > > and voting, and will be open for 3 days, concluding at 2000 on 5 Dec
> 2014
> > > UTC.
> > >
> > > --
> > > Christopher L Tubbs II
> > > http://gravatar.com/ctubbsii
> > >
> >
>


Re: [VOTE] API release policy for 1.7/2.0

2014-12-02 Thread John Vines
-1 I do not like the idea of committing to 1.7.0-1.9.9... API additions for
the 2.0 API. We have already come to the consensus that 2.0 will break the
1.x API, which provides a lot of breathing room and freedom from old
decisions. This causes this issue to come roaring back and requires an even
larger amount of scrutiny for all 1.7.0-1.9.9... API changes. I would go so
far as to say an undefinable amount of scrutiny, since we still don't have a
solid foundation for a 2.0 API. We cannot judge API items for how well they
belong in an API that does not exist yet.

Tangential- I would like to see a clause stating that all current API items
will not be removed (though they could still be deprecated) until 2.0.0, as I
feel this may ease some concerns about API alteration in 1.7+.

On Tue, Dec 2, 2014 at 3:01 PM, Christopher  wrote:

> Following the conversation on the [VOTE] thread for ACCUMULO-3176, it seems
> we require explicit API guidelines at least for 1.7.0 and later until
> 2.0.0.
>
> I hereby propose we adopt the following guidelines for future releases (if
> we produce any such releases) until 2.0.0:
>
> API additions are permitted in "major" 1.x releases (1.7, 1.8, 1.9, 1.10,
> etc.).
> API should be forwards and backwards compatible within a 1.x release (no
> new additions to the API in a "bugfix" release; e.g. 1.7.1).
> New API in 1.7.0 and later 1.x releases will not be removed in 2.0 (though
> they may be deprecated in 2.0 and subject to removal in 3.0).
> Existing API in 1.7.0 will be preserved through 2.0, and should only be
> subject to removal if it was already deprecated prior to 1.7.0 (though they
> may be deprecated in 2.0 and subject to removal in 3.0).
>
> The purpose of these guidelines is to ensure the ability to add additional
> functionality and evolve API naturally, while minimizing API disruptions to
> the user base, in the interim before 2.0.0 when we can formally adopt an
> API/versioning policy.
>
> Exceptions to these guidelines should be subject to a majority vote, on a
> case-by-case basis.
>
> Because these relate to release planning, this vote will be subject to
> majority vote, in accordance with our bylaws pertaining to release planning
> and voting, and will be open for 3 days, concluding at 2000 on 5 Dec 2014
> UTC.
>
> --
> Christopher L Tubbs II
> http://gravatar.com/ctubbsii
>


Re: [VOTE] ACCUMULO-3176

2014-12-01 Thread John Vines
I was having issues with Apache's mail forwarding.

I would have been +1. I don't consider adding a new API to be breaking it. It
would be nice to have the root synchronization of config updates settled,
but that was outside the scope of the ticket.

On Mon, Dec 1, 2014, 3:55 PM Corey Nolet  wrote:

> +1 in case it wasn't inferred from my previous comments. As Josh stated,
> I'm still confused how the veto still holds technical justification- the
> changes being made aren't removing methods from the public API.
>
> On Mon, Dec 1, 2014 at 3:42 PM, Josh Elser  wrote:
>
> > I still don't understand what could even be changed to help you retract
> > your veto.
> >
> > A number of people here have made suggestions about altering the changes
> > to the public API WRT to the major version. I think Brian was the most
> > recent, but I recall asking the same question on the original JIRA issue
> > too.
> >
> >
> > Sean Busbey wrote:
> >
> >> I'm not sure what questions weren't previously answered in my
> >> explanations,
> >> could you please restate whichever ones you want clarification on?
> >>
> >> The vote is closed and only has 2 binding +1s. That means it fails under
> >> consensus rules regardless of my veto, so the issue seems moot.
> >>
> >> On Mon, Dec 1, 2014 at 1:59 PM, Christopher
> wrote:
> >>
> >>  So, it's been 5 days since last activity here, and there are still some
> >>> questions/requests for response left unanswered regarding the veto. I'd
> >>> really like a response to these questions so we can put this issue to
> >>> rest.
> >>>
> >>>
> >>> --
> >>> Christopher L Tubbs II
> >>> http://gravatar.com/ctubbsii
> >>>
> >>> On Wed, Nov 26, 2014 at 1:21 PM, Christopher
> >>> wrote:
> >>>
> >>>  On Wed, Nov 26, 2014 at 11:57 AM, Sean Busbey
> 
> >>> wrote:
> >>>
>  Responses to a few things below.
> >
> >
> > On Tue, Nov 25, 2014 at 2:56 PM, Brian Loss
> >
>  wrote:
> >>>
>  Aren’t API-breaking changes allowed in 1.7? If this change is ok for
> >>
> > 2.0,
> >
> >> then what is the technical reason why it is ok for version 2.0 but
> >>
> > vetoed
> >
> >> for version 1.7?
> >>
> >>  On Nov 25, 2014, at 3:48 PM, Sean Busbey
> >>>
> >> wrote:
> >>>
> 
> >>> How about if we push this change in the API out to the client
> >>>
> >> reworking
> >
> >> in
> >>
> >>> 2.0? Everything will break there anyways so users will already have
> >>>
> >> to
> >>>
>  deal
> >>
> >>> with the change.
> >>>
> >> As I previously mentioned, API breaking changes are allowed on major
> > revisions. Currently, 1.7 is a major revision (and I have
> consistently
> > argued for it to remain classified as such). That doesn't mean we
> > shouldn't
> > consider the cost to end users of making said changes.
> >
> > There is no way to know that there won't be a 1.8 or later version
> > after
> > 1.7 and before 2.0. We already have consensus to do a sweeping
> overhaul
> >
>  of
> >>>
>  the API for that later release and have had that consensus for quite
> >
>  some
> >>>
>  time. Since users will already have to deal with that breakage in 2.0
> I
> > don't see this improvement as worth making them deal with changes
> prior
> >
>  to
> >>>
>  that.
> >
> >
> >  So, are you arguing for no more API additions until 2.0? Because,
>  that's
>  what it sounds like. As is, your general objection to the API seems to
>  be
>  independent of this change, but reflective of an overall policy for
> API
>  additions. Please address why your argument applies to this specific
>  change, and wouldn't to other API additions. Otherwise, this seems to
> be
> 
> >>> a
> >>>
>  case of special pleading.
> 
>  Please address the fact that there is no breakage here, and we can
>  ensure
>  that there won't be any more removal (except in exceptional
> 
> >>> circumstances)
> >>>
>  of deprecated APIs until 2.0 to ease changes. (I actually think that
> 
> >>> would
> >>>
>  be a very reasonable policy to adopt today.) In addition, I fully
> expect
>  that 2.0 will be fully compatible with 1.7, and will also not
> introduce
> 
> >>> any
> >>>
>  breakage except removal of things already deprecated in 1.7. If we
> make
>  this change without marking the previous createTable methods as
> 
> >>> deprecated,
> >>>
>  this new API addition AND the previous createTable API will still be
>  available in 2.0 (as deprecated), and will not be removed until 3.0.
> 
>  You have also previously argued for more intermediate releases between
>  major releases. Please explain how you see omitting this API addition
> is
>  compatible with that goal. Please also explain why, if you consider
> 1.7
> 
> >>> to
> >>>
>  be a major (expected) release, 

Re: Review Request 27198: ACCUMULO-3236 introducing cloneInto feature

2014-10-30 Thread John Vines


> On Oct. 30, 2014, 4:06 p.m., kturner wrote:
> > server/master/src/main/java/org/apache/accumulo/master/tableOps/CloneIntoTable.java,
> >  line 72
> > <https://reviews.apache.org/r/27198/diff/2/?file=741888#file741888line72>
> >
> > reserveTable() tries to reserve the table.  If it succeeds it returns
> > 0; otherwise it returns 100.  It should be called in isReady(), and
> > isReady() should return what it returns.
> 
> John Vines wrote:
> CloneTable does it in the call space, as a heads up then
> 
> kturner wrote:
> Below is the src reservation in isReady()
> 
> 
> https://github.com/apache/accumulo/blob/1.6.1/server/master/src/main/java/org/apache/accumulo/master/tableOps/CloneTable.java#L247
> 
> Below is the dest reservation in isReady()
> 
> 
> https://github.com/apache/accumulo/blob/1.6.1/server/master/src/main/java/org/apache/accumulo/master/tableOps/CloneTable.java#L149
> 
> Maybe you are thinking about the lock below in call?  
> 
> 
> https://github.com/apache/accumulo/blob/1.6.1/server/master/src/main/java/org/apache/accumulo/master/tableOps/CloneTable.java#L255
> 
> This is not a table lock.  This is a lock in the master that's grabbed when 
> a table id is allocated.  It's not held for the lifetime of the fate operation and 
> should be quick.

Perhaps. I might have also been thinking of the unlocks in call() or the lock 
in a ready() call in BulkImport. Either way, I will fix it.


- John


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27198/#review59201
---
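
For readers following along, here is a purely illustrative sketch of the reservation pattern discussed above. The class names (ReservationsSketch, CloneIntoStepSketch) are made up and this is not the actual Utils/MasterRepo code from the links in the thread; it only shows the shape of "reserve in isReady() and return what the reserve call returns": 0 means the reservation is held and call() can run, a non-zero value is a delay before FATE retries.

```java
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only: a toy reservation table keyed by table id and FATE transaction id.
class ReservationsSketch {
  private static final ConcurrentHashMap<String, Long> reserved = new ConcurrentHashMap<>();

  /** Returns 0 if tableId is now reserved for tid (re-entrant for the same tid), 100 ms otherwise. */
  static long reserveTable(String tableId, long tid) {
    Long owner = reserved.putIfAbsent(tableId, tid);
    return (owner == null || owner == tid) ? 0 : 100;
  }

  static void release(String tableId, long tid) {
    reserved.remove(tableId, tid);
  }
}

// Illustrative only: the shape of a FATE step that takes both table reservations in isReady().
class CloneIntoStepSketch {
  private final String srcTableId;
  private final String destTableId;

  CloneIntoStepSketch(String srcTableId, String destTableId) {
    this.srcTableId = srcTableId;
    this.destTableId = destTableId;
  }

  /** 0 means both reservations are held; any other value is a delay before a retry. */
  long isReady(long tid) {
    return ReservationsSketch.reserveTable(srcTableId, tid)
        + ReservationsSketch.reserveTable(destTableId, tid);
  }

  void call(long tid) {
    try {
      // ... do the clone-into work here; both tables are already reserved ...
    } finally {
      ReservationsSketch.release(srcTableId, tid);
      ReservationsSketch.release(destTableId, tid);
    }
  }
}
```

Because the reservation attempt lives in isReady(), call() never blocks waiting for another operation to release the table; FATE simply reschedules the step after the returned delay.
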


On Oct. 29, 2014, 8:52 p.m., John Vines wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27198/
> ---
> 
> (Updated Oct. 29, 2014, 8:52 p.m.)
> 
> 
> Review request for accumulo.
> 
> 
> Bugs: ACCUMULO-3236
> https://issues.apache.org/jira/browse/ACCUMULO-3236
> 
> 
> Repository: accumulo
> 
> 
> Description
> ---
> 
> Includes all code to support feature, including thrift changes
> Includes minor code cleanup to TableLocator and items in the Bulk path to 
> remove signature items that are unused (arguments & exceptions)
> Includes renaming of some bulk import functions to clarify their purpose 
> (because they're now multi-purpose)
> 
> Patch is based on 1.6, but we can choose to make it target only 1.7 if we 
> choose (this conversation should be taken up on jira, not in RB)
> 
> 
> Diffs
> -
> 
>   
> core/src/main/java/org/apache/accumulo/core/client/admin/TableOperations.java 
> 97f538d 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/RootTabletLocator.java
>  97d476b 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/TableOperationsImpl.java
>  2792bcc 
>   core/src/main/java/org/apache/accumulo/core/client/impl/TabletLocator.java 
> e396d82 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/TabletLocatorImpl.java
>  c550f15 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/TimeoutTabletLocator.java
>  bcbe561 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/thrift/TableOperation.java
>  7716823 
>   
> core/src/main/java/org/apache/accumulo/core/client/mock/MockTableOperationsImpl.java
>  de19137 
>   
> core/src/main/java/org/apache/accumulo/core/client/mock/impl/MockTabletLocator.java
>  35f160f 
>   
> core/src/main/java/org/apache/accumulo/core/master/thrift/FateOperation.java 
> f65f552 
>   
> core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TabletClientService.java
>  2ba7674 
>   core/src/main/thrift/client.thrift 38a8076 
>   core/src/main/thrift/master.thrift 38e9227 
>   core/src/main/thrift/tabletserver.thrift 25e0b10 
>   
> core/src/test/java/org/apache/accumulo/core/client/admin/TableOperationsHelperTest.java
>  1d91574 
>   
> core/src/test/java/org/apache/accumulo/core/client/impl/TableOperationsHelperTest.java
>  02838ed 
>   
> server/base/src/main/java/org/apache/accumulo/server/client/BulkImporter.java 
> 27ab078 
>   
> server/base/src/main/java/org/apache/accumulo/server/client/ClientServiceHandler.java
>  ebea064 
>   
> server/base/src/main/java/org/apache/accumulo/server/security/SecurityOperation.java
>  d0e6aea 
>   
> server/base/src/test/java/org/apache/accumulo/server/client/BulkImporterTest.java
>  3680341 
>   
> server/maste

Re: Review Request 27198: ACCUMULO-3236 introducing cloneInto feature

2014-10-30 Thread John Vines


On Oct. 30, 2014, 4:06 p.m., John Vines wrote:
> > I have only reviewed the FATE ops so far.  How will this work w/ 
> > replication?
> > 
> > I am thinking another possible approach may be to make a clone operation 
> > that accepts multiple input tables and creates a new table.  The reason I 
> > am thinking about this is that it avoids having to deal w/ issues related 
> > to the dest table changing while the clone is happening.  Something like
> > 
> > clone([tableA, tableB], tableC)
> > 
> > However, this is still tricky.   The existing clone handles cloning of an 
> > online table that may be splitting. It makes multiple passes over the src 
> > table metadata entries updating the dest until it stabilizes.  In order to 
> > avoid this for multiple tables, we could move from a cloneInto that supports 
> > multiple online inputs to adding a merge that supports multiple offline 
> > tables as input.
> > 
> > ```
> > clone(tableA, tmpA)
> > offline(tmpA)
> > clone(tableB, tmpB)
> > offline(tmpB)
> > mergeTables([tmpA, tmpB], tmpC)
> > ```
> > 
> > After this tmpA and tmpB would be deleted and tmpC would have the files and 
> > splits of both.  tmpC would also have the correct logical time. However one 
> > thing I am not sure about is per-table props.  When clone(tableA, 
> > tableB) is done, it will create tableB w/ tableA's per table props.

Please refer to the JIRA about this feature, which explains why I need this 
feature to not be going into a new table


- John


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27198/#review59201
---


On Oct. 29, 2014, 8:52 p.m., John Vines wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27198/
> ---
> 
> (Updated Oct. 29, 2014, 8:52 p.m.)
> 
> 
> Review request for accumulo.
> 
> 
> Bugs: ACCUMULO-3236
> https://issues.apache.org/jira/browse/ACCUMULO-3236
> 
> 
> Repository: accumulo
> 
> 
> Description
> ---
> 
> Includes all code to support feature, including thrift changes
> Includes minor code cleanup to TableLocator and items in the Bulk path to 
> remove signature items that are unused (arguments & exceptions)
> Includes renaming of some bulk import functions to clarify their purpose 
> (because they're now multi-purpose)
> 
> Patch is based on 1.6, but we can choose to make it target only 1.7 if we 
> choose (this conversation should be taken up on jira, not in RB)
> 
> 
> Diffs
> -
> 
>   
> core/src/main/java/org/apache/accumulo/core/client/admin/TableOperations.java 
> 97f538d 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/RootTabletLocator.java
>  97d476b 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/TableOperationsImpl.java
>  2792bcc 
>   core/src/main/java/org/apache/accumulo/core/client/impl/TabletLocator.java 
> e396d82 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/TabletLocatorImpl.java
>  c550f15 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/TimeoutTabletLocator.java
>  bcbe561 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/thrift/TableOperation.java
>  7716823 
>   
> core/src/main/java/org/apache/accumulo/core/client/mock/MockTableOperationsImpl.java
>  de19137 
>   
> core/src/main/java/org/apache/accumulo/core/client/mock/impl/MockTabletLocator.java
>  35f160f 
>   
> core/src/main/java/org/apache/accumulo/core/master/thrift/FateOperation.java 
> f65f552 
>   
> core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TabletClientService.java
>  2ba7674 
>   core/src/main/thrift/client.thrift 38a8076 
>   core/src/main/thrift/master.thrift 38e9227 
>   core/src/main/thrift/tabletserver.thrift 25e0b10 
>   
> core/src/test/java/org/apache/accumulo/core/client/admin/TableOperationsHelperTest.java
>  1d91574 
>   
> core/src/test/java/org/apache/accumulo/core/client/impl/TableOperationsHelperTest.java
>  02838ed 
>   
> server/base/src/main/java/org/apache/accumulo/server/client/BulkImporter.java 
> 27ab078 
>   
> server/base/src/main/java/org/apache/accumulo/server/client/ClientServiceHandler.java
>  ebea064 
>   
> server/base/src/main/java/org/apache/accumulo/server/security/SecurityOperation.java
>  d0e6aea 
>   
> server/base/src/test/java/org/apache/accumulo/s

Re: Review Request 27198: ACCUMULO-3236 introducing cloneInto feature

2014-10-30 Thread John Vines


> On Oct. 30, 2014, 4:06 p.m., kturner wrote:
> > server/master/src/main/java/org/apache/accumulo/master/tableOps/CloneIntoTable.java,
> >  line 72
> > <https://reviews.apache.org/r/27198/diff/2/?file=741888#file741888line72>
> >
> > reserveTable tries to reserve the table.  If it succeeds it returns 
> > 0, otherwise it returns 100.  It should be called in isReady(), and 
> > isReady() should return what it returns.

CloneTable does it in the call space, as a heads up then


- John


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27198/#review59201
---


On Oct. 29, 2014, 8:52 p.m., John Vines wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27198/
> ---
> 
> (Updated Oct. 29, 2014, 8:52 p.m.)
> 
> 
> Review request for accumulo.
> 
> 
> Bugs: ACCUMULO-3236
> https://issues.apache.org/jira/browse/ACCUMULO-3236
> 
> 
> Repository: accumulo
> 
> 
> Description
> ---
> 
> Includes all code to support feature, including thrift changes
> Includes minor code cleanup to TableLocator and items in the Bulk path to 
> remove signature items that are unused (arguments & exceptions)
> Includes renaming of some bulk import functions to clarify their purpose 
> (because they're now multi-purpose)
> 
> Patch is based on 1.6, but we can choose to make it target only 1.7 if we 
> choose (this conversation should be taken up on jira, not in RB)
> 
> 
> Diffs
> -
> 
>   
> core/src/main/java/org/apache/accumulo/core/client/admin/TableOperations.java 
> 97f538d 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/RootTabletLocator.java
>  97d476b 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/TableOperationsImpl.java
>  2792bcc 
>   core/src/main/java/org/apache/accumulo/core/client/impl/TabletLocator.java 
> e396d82 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/TabletLocatorImpl.java
>  c550f15 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/TimeoutTabletLocator.java
>  bcbe561 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/thrift/TableOperation.java
>  7716823 
>   
> core/src/main/java/org/apache/accumulo/core/client/mock/MockTableOperationsImpl.java
>  de19137 
>   
> core/src/main/java/org/apache/accumulo/core/client/mock/impl/MockTabletLocator.java
>  35f160f 
>   
> core/src/main/java/org/apache/accumulo/core/master/thrift/FateOperation.java 
> f65f552 
>   
> core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TabletClientService.java
>  2ba7674 
>   core/src/main/thrift/client.thrift 38a8076 
>   core/src/main/thrift/master.thrift 38e9227 
>   core/src/main/thrift/tabletserver.thrift 25e0b10 
>   
> core/src/test/java/org/apache/accumulo/core/client/admin/TableOperationsHelperTest.java
>  1d91574 
>   
> core/src/test/java/org/apache/accumulo/core/client/impl/TableOperationsHelperTest.java
>  02838ed 
>   
> server/base/src/main/java/org/apache/accumulo/server/client/BulkImporter.java 
> 27ab078 
>   
> server/base/src/main/java/org/apache/accumulo/server/client/ClientServiceHandler.java
>  ebea064 
>   
> server/base/src/main/java/org/apache/accumulo/server/security/SecurityOperation.java
>  d0e6aea 
>   
> server/base/src/test/java/org/apache/accumulo/server/client/BulkImporterTest.java
>  3680341 
>   
> server/master/src/main/java/org/apache/accumulo/master/FateServiceHandler.java
>  5818da3 
>   
> server/master/src/main/java/org/apache/accumulo/master/tableOps/CloneIntoTable.java
>  PRE-CREATION 
>   server/tserver/src/main/java/org/apache/accumulo/tserver/Tablet.java 
> 0778f5b 
>   server/tserver/src/main/java/org/apache/accumulo/tserver/TabletServer.java 
> 03fe069 
>   
> test/src/main/java/org/apache/accumulo/test/performance/thrift/NullTserver.java
>  0591b19 
>   test/src/test/java/org/apache/accumulo/test/functional/CloneIntoIT.java 
> PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/27198/diff/
> 
> 
> Testing
> ---
> 
> Includes CloneIntoIT, which exercises all permutations of the flags. Existing 
> BulkIT still functions as intended for validation of no feature loss in 
> refactoring existing code for multi-purposing.
> 
> 
> Thanks,
> 
> John Vines
> 
>



Re: Review Request 27198: ACCUMULO-3236 introducing cloneInto feature

2014-10-29 Thread John Vines


> On Oct. 28, 2014, 1:48 p.m., kturner wrote:
> > Whan I look at the diffs on RB, some of the files error out and no diff is 
> > shown.  Does anyone know whats happening?
> 
> Sean Busbey wrote:
> the patch doesn't apply cleanly. I can't get it to apply on a local 1.6 
> either, even if I step back to commit 87fbb4b.
> 
> John, could you do an updated diff based on current 1.6 (presuming that 
> is still your target version)?

Newest version was done with git diff instead of git format-patch. Hopefully 
this works better


- John


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27198/#review58795
---


On Oct. 29, 2014, 8:52 p.m., John Vines wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27198/
> ---
> 
> (Updated Oct. 29, 2014, 8:52 p.m.)
> 
> 
> Review request for accumulo.
> 
> 
> Bugs: ACCUMULO-3236
> https://issues.apache.org/jira/browse/ACCUMULO-3236
> 
> 
> Repository: accumulo
> 
> 
> Description
> ---
> 
> Includes all code to support feature, including thrift changes
> Includes minor code cleanup to TableLocator and items in the Bulk path to 
> remove signature items that are unused (arguments & exceptions)
> Includes renaming of some bulk import functions to clarify their purpose 
> (because they're now multi-purpose)
> 
> Patch is based on 1.6, but we can choose to make it target only 1.7 if we 
> choose (this conversation should be taken up on jira, not in RB)
> 
> 
> Diffs
> -
> 
>   
> core/src/main/java/org/apache/accumulo/core/client/admin/TableOperations.java 
> 97f538d 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/RootTabletLocator.java
>  97d476b 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/TableOperationsImpl.java
>  2792bcc 
>   core/src/main/java/org/apache/accumulo/core/client/impl/TabletLocator.java 
> e396d82 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/TabletLocatorImpl.java
>  c550f15 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/TimeoutTabletLocator.java
>  bcbe561 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/thrift/TableOperation.java
>  7716823 
>   
> core/src/main/java/org/apache/accumulo/core/client/mock/MockTableOperationsImpl.java
>  de19137 
>   
> core/src/main/java/org/apache/accumulo/core/client/mock/impl/MockTabletLocator.java
>  35f160f 
>   
> core/src/main/java/org/apache/accumulo/core/master/thrift/FateOperation.java 
> f65f552 
>   
> core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TabletClientService.java
>  2ba7674 
>   core/src/main/thrift/client.thrift 38a8076 
>   core/src/main/thrift/master.thrift 38e9227 
>   core/src/main/thrift/tabletserver.thrift 25e0b10 
>   
> core/src/test/java/org/apache/accumulo/core/client/admin/TableOperationsHelperTest.java
>  1d91574 
>   
> core/src/test/java/org/apache/accumulo/core/client/impl/TableOperationsHelperTest.java
>  02838ed 
>   
> server/base/src/main/java/org/apache/accumulo/server/client/BulkImporter.java 
> 27ab078 
>   
> server/base/src/main/java/org/apache/accumulo/server/client/ClientServiceHandler.java
>  ebea064 
>   
> server/base/src/main/java/org/apache/accumulo/server/security/SecurityOperation.java
>  d0e6aea 
>   
> server/base/src/test/java/org/apache/accumulo/server/client/BulkImporterTest.java
>  3680341 
>   
> server/master/src/main/java/org/apache/accumulo/master/FateServiceHandler.java
>  5818da3 
>   
> server/master/src/main/java/org/apache/accumulo/master/tableOps/CloneIntoTable.java
>  PRE-CREATION 
>   server/tserver/src/main/java/org/apache/accumulo/tserver/Tablet.java 
> 0778f5b 
>   server/tserver/src/main/java/org/apache/accumulo/tserver/TabletServer.java 
> 03fe069 
>   
> test/src/main/java/org/apache/accumulo/test/performance/thrift/NullTserver.java
>  0591b19 
>   test/src/test/java/org/apache/accumulo/test/functional/CloneIntoIT.java 
> PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/27198/diff/
> 
> 
> Testing
> ---
> 
> Includes CloneIntoIT, which exercises all permutations of the flags. Existing 
> BulkIT still functions as intended for validation of no feature loss in 
> refactoring existing code for multi-purposing.
> 
> 
> Thanks,
> 
> John Vines
> 
>



Re: Review Request 27198: ACCUMULO-3236 introducing cloneInto feature

2014-10-29 Thread John Vines

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27198/
---

(Updated Oct. 29, 2014, 8:52 p.m.)


Review request for accumulo.


Changes
---

Applying first round of feedback changes


Bugs: ACCUMULO-3236
https://issues.apache.org/jira/browse/ACCUMULO-3236


Repository: accumulo


Description
---

Includes all code to support feature, including thrift changes
Includes minor code cleanup to TableLocator and items in the Bulk path to 
remove signature items that are unused (arguments & exceptions)
Includes renaming of some bulk import functions to clarify their purpose 
(because they're now multi-purpose)

Patch is based on 1.6, but we can choose to make it target only 1.7 if we 
choose (this conversation should be taken up on jira, not in RB)


Diffs (updated)
-

  core/src/main/java/org/apache/accumulo/core/client/admin/TableOperations.java 
97f538d 
  
core/src/main/java/org/apache/accumulo/core/client/impl/RootTabletLocator.java 
97d476b 
  
core/src/main/java/org/apache/accumulo/core/client/impl/TableOperationsImpl.java
 2792bcc 
  core/src/main/java/org/apache/accumulo/core/client/impl/TabletLocator.java 
e396d82 
  
core/src/main/java/org/apache/accumulo/core/client/impl/TabletLocatorImpl.java 
c550f15 
  
core/src/main/java/org/apache/accumulo/core/client/impl/TimeoutTabletLocator.java
 bcbe561 
  
core/src/main/java/org/apache/accumulo/core/client/impl/thrift/TableOperation.java
 7716823 
  
core/src/main/java/org/apache/accumulo/core/client/mock/MockTableOperationsImpl.java
 de19137 
  
core/src/main/java/org/apache/accumulo/core/client/mock/impl/MockTabletLocator.java
 35f160f 
  core/src/main/java/org/apache/accumulo/core/master/thrift/FateOperation.java 
f65f552 
  
core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TabletClientService.java
 2ba7674 
  core/src/main/thrift/client.thrift 38a8076 
  core/src/main/thrift/master.thrift 38e9227 
  core/src/main/thrift/tabletserver.thrift 25e0b10 
  
core/src/test/java/org/apache/accumulo/core/client/admin/TableOperationsHelperTest.java
 1d91574 
  
core/src/test/java/org/apache/accumulo/core/client/impl/TableOperationsHelperTest.java
 02838ed 
  server/base/src/main/java/org/apache/accumulo/server/client/BulkImporter.java 
27ab078 
  
server/base/src/main/java/org/apache/accumulo/server/client/ClientServiceHandler.java
 ebea064 
  
server/base/src/main/java/org/apache/accumulo/server/security/SecurityOperation.java
 d0e6aea 
  
server/base/src/test/java/org/apache/accumulo/server/client/BulkImporterTest.java
 3680341 
  
server/master/src/main/java/org/apache/accumulo/master/FateServiceHandler.java 
5818da3 
  
server/master/src/main/java/org/apache/accumulo/master/tableOps/CloneIntoTable.java
 PRE-CREATION 
  server/tserver/src/main/java/org/apache/accumulo/tserver/Tablet.java 0778f5b 
  server/tserver/src/main/java/org/apache/accumulo/tserver/TabletServer.java 
03fe069 
  
test/src/main/java/org/apache/accumulo/test/performance/thrift/NullTserver.java 
0591b19 
  test/src/test/java/org/apache/accumulo/test/functional/CloneIntoIT.java 
PRE-CREATION 

Diff: https://reviews.apache.org/r/27198/diff/


Testing
---

Includes CloneIntoIT, which exercises all permutations of the flags. Existing 
BulkIT still functions as intended for validation of no feature loss in 
refactoring existing code for multi-purposing.


Thanks,

John Vines



Re: Review Request 27198: ACCUMULO-3236 introducing cloneInto feature

2014-10-29 Thread John Vines


> On Oct. 27, 2014, 8:29 p.m., Josh Elser wrote:
> > core/src/main/java/org/apache/accumulo/core/client/admin/TableOperations.java,
> >  line 346
> > <https://reviews.apache.org/r/27198/diff/1/?file=733392#file733392line346>
> >
> > Maybe it would be clearer to actually say that no data is actually 
> > copied but the existing files will be referenced by the target table. It 
> > means the same thing that you said, but is a bit more straightforward IMO.

Changed


> On Oct. 27, 2014, 8:29 p.m., Josh Elser wrote:
> > core/src/main/java/org/apache/accumulo/core/client/impl/TableOperationsImpl.java,
> >  line 724
> > <https://reviews.apache.org/r/27198/diff/1/?file=733394#file733394line724>
> >
> > Boolean.toString(boolean) instead of creating a new String for the 
> > cast, please.

Went ahead and fixed that in importDirectory too (where I got that code from)
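
For the curious, this is the kind of change being asked for, shown in isolation. The option map and the "setTime" key are placeholders, and the "before" line is a guess at the allocation being flagged, since the original expression isn't shown in the review.

```java
import java.util.HashMap;
import java.util.Map;

class BooleanOptionSketch {
  static Map<String, String> buildOpts(boolean setTime) {
    Map<String, String> opts = new HashMap<>();
    // Before (the flagged pattern, guessed): an extra String allocation just to
    // stringify a boolean, e.g. new String(Boolean.toString(setTime)) or setTime + ""
    // After: Boolean.toString returns the "true"/"false" constants with no extra copy.
    opts.put("setTime", Boolean.toString(setTime));
    return opts;
  }
}
```
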


> On Oct. 27, 2014, 8:29 p.m., Josh Elser wrote:
> > core/src/main/java/org/apache/accumulo/core/client/mock/MockTableOperationsImpl.java,
> >  line 452
> > <https://reviews.apache.org/r/27198/diff/1/?file=733399#file733399line452>
> >
> > nit: whitespace

Reformatted file


> On Oct. 27, 2014, 8:29 p.m., Josh Elser wrote:
> > core/src/main/thrift/tabletserver.thrift, line 180
> > <https://reviews.apache.org/r/27198/diff/1/?file=733405#file733405line180>
> >
> > I don't like removing the old bulkImport method. We should try to keep 
> > the server APIs as stable as we can. You can keep the old bulkImport method 
> > around and just call the new addFiles method in the implementation. This 
> > will help us stay closer to compatibility across versions.
> 
> John Vines wrote:
> This isn't public api, this is only used for master->tserver.
> 
> Josh Elser wrote:
> Ah, I didn't notice that the first time around, thanks for pointing it 
> out. However, even though it is internal RPC, I still think that the 
> BulkImport code shouldn't have to change to support this new feature. You can 
> still keep the master->tserver RPC for bulk import the same and call the new 
> `addFiles(..., false)`. This keeps the RPC methods that the server-side bulk 
> import code calls the same while still letting you implement this new feature.
> 
> John Vines wrote:
> You are right in that I could do a separate method for this. But I feel 
> that doing so further enables an issue I noticed while working on this: refactoring 
> things while not renaming/cleaning them up to match their actual purpose. 
> This method was called bulkImport, but all it specifically did was assign 
> files. It did no moving of files, etc., it just validated they were in the 
> right path. So I changed the name to reflect its actual behavior to better 
> indicate what it was doing. I just added a flag to make an ingrained 
> side-effect less ingrained.
> 
> This really comes across as a case supporting copying a method, which I 
> have fundamental issues with. Maybe I'm not really seeing the value in 
> strictly maintaining a server-only API since we're not supporting wire 
> compatibility yet.
> 
> Josh Elser wrote:
> You hit exactly the reason I'm getting hung up on it. I'd like to start 
> focusing on wire compatibility more. While we don't have hard/fast rules yet, 
> I'd like to only make such changes when there is no other option. It 
> would be a shame if this was the only change that kept a 1.6 master from 
> talking to a 1.7 tserver. Obviously, I doubt this is actually true, but I'd 
> rather not find out the hard way.
> 
> Sean Busbey wrote:
> Please do not break wire compatibility. I'd like to start testing rolling 
> upgrades and doing so will preclude it.

Added back the removed API, marked as deprecated so we can phase it out in a 
release where we're comfortable breaking wire compatibility.
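
A minimal sketch of that compromise follows, with made-up names: the real thrift-generated handler interface, return types, and the exact addFiles parameters are not shown in this review, so treat everything here as an assumption about shape only. The old RPC stays on the wire, marked deprecated, and simply delegates to the renamed method with the flag pinned to the old bulk-import behavior.

```java
import java.util.List;

class TabletClientHandlerSketch {
  /** Renamed, multi-purpose entry point: validates paths and assigns the files to tablets.
      The boolean gates the extra behavior added for cloneInto (its exact meaning is not
      spelled out in this thread). */
  void addFiles(long tid, String tableId, List<String> files, boolean newBehavior) {
    // ... validation and assignment work ...
  }

  /** Old RPC kept for wire compatibility with older masters; remove it only in a
      release where breaking the wire is acceptable. */
  @Deprecated
  void bulkImport(long tid, String tableId, List<String> files) {
    addFiles(tid, tableId, files, false);
  }
}
```
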


> On Oct. 27, 2014, 8:29 p.m., Josh Elser wrote:
> > server/master/src/main/java/org/apache/accumulo/master/tableOps/CloneIntoTable.java,
> >  line 162
> > <https://reviews.apache.org/r/27198/diff/1/?file=733413#file733413line162>
> >
> > Won't this fail if you try cloneInto with an empty table as the source?
> 
> John Vines wrote:
> The table may be empty, but there will be the default tablet. I'll be 
> sure to add tests for this in the IT though to verify.
> 
> Josh Elser wrote:
> Thanks, I know there will be the default tablet, but you wouldn't have a 
> file column right away. I didn't look deep enough to figure out if that would 
> actually be problematic.
> 
> John Vines w

Re: Review Request 27198: ACCUMULO-3236 introducing cloneInto feature

2014-10-27 Thread John Vines


> On Oct. 27, 2014, 8:29 p.m., Josh Elser wrote:
> > core/src/main/thrift/tabletserver.thrift, line 180
> > <https://reviews.apache.org/r/27198/diff/1/?file=733405#file733405line180>
> >
> > I don't like removing the old bulkImport method. We should try to keep 
> > the server APIs as stable as we can. You can keep the old bulkImport method 
> > around and just call the new addFiles method in the implementation. This 
> > will help us stay closer to compatibility across versions.
> 
> John Vines wrote:
> This isn't public api, this is only used for master->tserver.
> 
> Josh Elser wrote:
> Ah, I didn't notice that the first time around, thanks for pointing it 
> out. However, even though it is internal RPC, I still think that the 
> BulkImport code shouldn't have to change to support this new feature. You can 
> still keep the master->tserver RPC for bulk import the same and call the new 
> `addFiles(..., false)`. This keeps the RPC methods that the server-side bulk 
> import code calls the same while still letting you implement this new feature.

You are right in that I could do a separate method for this. But I feel that 
doing so further enables an issue I noticed while working on this: refactoring things 
while not renaming/cleaning them up to match their actual purpose. This 
method was called bulkImport, but all it specifically did was assign files. It 
did no moving of files, etc., it just validated they were in the right path. So 
I changed the name to reflect its actual behavior to better indicate what it 
was doing. I just added a flag to make an ingrained side-effect less ingrained.

This really comes across as a case supporting copying a method, which I have 
fundamental issues with. Maybe I'm not really seeing the value in strictly 
maintaining a server-only API since we're not supporting wire compatibility yet.


> On Oct. 27, 2014, 8:29 p.m., Josh Elser wrote:
> > server/master/src/main/java/org/apache/accumulo/master/FateServiceHandler.java,
> >  line 235
> > <https://reviews.apache.org/r/27198/diff/1/?file=733412#file733412line235>
> >
> > We check that the source isn't the root table, but that the dest isn't 
> > in the system namespace. Don't we want both tables to just not be in the 
> > system namespace?
> 
> John Vines wrote:
> While I don't understand a use case for wanting metadata tablets to be 
> referenced in a standard table, I see no reason to prevent it.
> 
> Josh Elser wrote:
> So, root table is only precluded as the source table because it's backed 
> by ZK and not HDFS files?

I precluded it because that's what clone table was doing. We can see about 
opening up to that though.


> On Oct. 27, 2014, 8:29 p.m., Josh Elser wrote:
> > server/master/src/main/java/org/apache/accumulo/master/tableOps/CloneIntoTable.java,
> >  line 162
> > <https://reviews.apache.org/r/27198/diff/1/?file=733413#file733413line162>
> >
> > Won't this fail if you try cloneInto with an empty table as the source?
> 
> John Vines wrote:
> The table may be empty, but there will be the default tablet. I'll be 
> sure to add tests for this in the IT though to verify.
> 
> Josh Elser wrote:
> Thanks, I know there will be the default tablet, but you wouldn't have a 
> file column right away. I didn't look deep enough to figure out if that would 
> actually be problematic.

the key extent is determined solely by the existence of the prev row column, which any 
default tablet will have.

Having no files just generates an empty map of files to import, so it's 
effectively a noop.


- John


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27198/#review58675
---


On Oct. 25, 2014, 7:31 p.m., John Vines wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27198/
> ---
> 
> (Updated Oct. 25, 2014, 7:31 p.m.)
> 
> 
> Review request for accumulo.
> 
> 
> Bugs: ACCUMULO-3236
> https://issues.apache.org/jira/browse/ACCUMULO-3236
> 
> 
> Repository: accumulo
> 
> 
> Description
> ---
> 
> Includes all code to support feature, including thrift changes
> Includes minor code cleanup to TableLocator and items in the Bulk path to 
> remove signature items that are unused (arguments & exceptions)
> Includes renaming of some bulk import functions to clarify their purpose 

Re: Review Request 27198: ACCUMULO-3236 introducing cloneInto feature

2014-10-27 Thread John Vines


> On Oct. 27, 2014, 8:29 p.m., Josh Elser wrote:
> > core/src/main/thrift/tabletserver.thrift, line 180
> > <https://reviews.apache.org/r/27198/diff/1/?file=733405#file733405line180>
> >
> > I don't like removing the old bulkImport method. We should try to keep 
> > the server APIs as stable as we can. You can keep the old bulkImport method 
> > around and just call the new addFiles method in the implementation. This 
> > will help us stay closer to compatibility across versions.

This isn't public api, this is only used for master->tserver.


> On Oct. 27, 2014, 8:29 p.m., Josh Elser wrote:
> > server/master/src/main/java/org/apache/accumulo/master/FateServiceHandler.java,
> >  line 235
> > <https://reviews.apache.org/r/27198/diff/1/?file=733412#file733412line235>
> >
> > We check that the source isn't the root table, but that the dest isn't 
> > in the system namespace. Don't we want both tables to just not be in the 
> > system namespace?

While I don't understand a use case for wanting metadata tablets to be 
referenced in a standard table, I see no reason to prevent it.


> On Oct. 27, 2014, 8:29 p.m., Josh Elser wrote:
> > server/master/src/main/java/org/apache/accumulo/master/tableOps/CloneIntoTable.java,
> >  line 162
> > <https://reviews.apache.org/r/27198/diff/1/?file=733413#file733413line162>
> >
> > Won't this fail if you try cloneInto with an empty table as the source?

The table may be empty, but there will be the default tablet. I'll be sure to 
add tests for this in the IT though to verify.


> On Oct. 27, 2014, 8:29 p.m., Josh Elser wrote:
> > test/src/test/java/org/apache/accumulo/test/functional/CloneIntoIT.java, 
> > line 42
> > <https://reviews.apache.org/r/27198/diff/1/?file=733417#file733417line42>
> >
> > Some more tests here that enumerate the basic edge cases would be good: 
> > srcTable missing, destTable missing, read perms denied on source, write 
> > perms denied on dest.
> > 
> > Breaking up testCloneInto into a few test cases instead of one big one 
> > would be much easier when trying to debug things later.

That's fine. I originally had one case and let it creep up a bit.


> On Oct. 27, 2014, 8:29 p.m., Josh Elser wrote:
> > test/src/test/java/org/apache/accumulo/test/functional/CloneIntoIT.java, 
> > line 45
> > <https://reviews.apache.org/r/27198/diff/1/?file=733417#file733417line45>
> >
> > Thank you for adding this :)

I just copied the CloneIT... or BulkIT which had it.


- John


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27198/#review58675
---


On Oct. 25, 2014, 7:31 p.m., John Vines wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27198/
> ---
> 
> (Updated Oct. 25, 2014, 7:31 p.m.)
> 
> 
> Review request for accumulo.
> 
> 
> Bugs: ACCUMULO-3236
> https://issues.apache.org/jira/browse/ACCUMULO-3236
> 
> 
> Repository: accumulo
> 
> 
> Description
> ---
> 
> Includes all code to support feature, including thrift changes
> Includes minor code cleanup to TableLocator and items in the Bulk path to 
> remove signature items that are unused (arguments & exceptions)
> Includes renaming of some bulk import functions to clarify their purpose 
> (because they're now multi-purpose)
> 
> Patch is based on 1.6, but we can choose to make it target only 1.7 if we 
> choose (this conversation should be taken up on jira, not in RB)
> 
> 
> Diffs
> -
> 
>   
> core/src/main/java/org/apache/accumulo/core/client/admin/TableOperations.java 
> 97f538d 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/RootTabletLocator.java
>  97d476b 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/TableOperationsImpl.java
>  2792bcc 
>   core/src/main/java/org/apache/accumulo/core/client/impl/TabletLocator.java 
> e396d82 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/TabletLocatorImpl.java
>  c550f15 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/TimeoutTabletLocator.java
>  bcbe561 
>   
> core/src/main/java/org/apache/accumulo/core/client/impl/thrift/TableOperation.java
>  7716823 
>   
> core/src/main/java/org/apache/accumulo/core/client/mock/MockTableOperationsImpl.java
>  de19137 
>  

Review Request 27198: ACCUMULO-3236 introducing cloneInto feature

2014-10-25 Thread John Vines

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27198/
---

Review request for accumulo.


Bugs: ACCUMULO-3236
https://issues.apache.org/jira/browse/ACCUMULO-3236


Repository: accumulo


Description
---

Includes all code to support feature, including thrift changes
Includes minor code cleanup to TableLocator and items in the Bulk path to 
remove signature items that are unused (arguments & exceptions)
Includes renaming of some bulk import functions to clarify their purpose 
(because they're now multi-purpose)

Patch is based on 1.6, but we can choose to make it target only 1.7 if we 
choose (this conversation should be taken up on jira, not in RB)


Diffs
-

  core/src/main/java/org/apache/accumulo/core/client/admin/TableOperations.java 
97f538d 
  
core/src/main/java/org/apache/accumulo/core/client/impl/RootTabletLocator.java 
97d476b 
  
core/src/main/java/org/apache/accumulo/core/client/impl/TableOperationsImpl.java
 2792bcc 
  core/src/main/java/org/apache/accumulo/core/client/impl/TabletLocator.java 
e396d82 
  
core/src/main/java/org/apache/accumulo/core/client/impl/TabletLocatorImpl.java 
c550f15 
  
core/src/main/java/org/apache/accumulo/core/client/impl/TimeoutTabletLocator.java
 bcbe561 
  
core/src/main/java/org/apache/accumulo/core/client/impl/thrift/TableOperation.java
 7716823 
  
core/src/main/java/org/apache/accumulo/core/client/mock/MockTableOperationsImpl.java
 de19137 
  
core/src/main/java/org/apache/accumulo/core/client/mock/impl/MockTabletLocator.java
 35f160f 
  core/src/main/java/org/apache/accumulo/core/master/thrift/FateOperation.java 
f65f552 
  
core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TabletClientService.java
 2ba7674 
  core/src/main/thrift/client.thrift 38a8076 
  core/src/main/thrift/master.thrift 38e9227 
  core/src/main/thrift/tabletserver.thrift 25e0b10 
  
core/src/test/java/org/apache/accumulo/core/client/admin/TableOperationsHelperTest.java
 1d91574 
  
core/src/test/java/org/apache/accumulo/core/client/impl/TableOperationsHelperTest.java
 02838ed 
  server/base/src/main/java/org/apache/accumulo/server/client/BulkImporter.java 
27ab078 
  
server/base/src/main/java/org/apache/accumulo/server/client/ClientServiceHandler.java
 ebea064 
  
server/base/src/main/java/org/apache/accumulo/server/security/SecurityOperation.java
 d0e6aea 
  
server/base/src/test/java/org/apache/accumulo/server/client/BulkImporterTest.java
 3680341 
  
server/master/src/main/java/org/apache/accumulo/master/FateServiceHandler.java 
5818da3 
  
server/master/src/main/java/org/apache/accumulo/master/tableOps/CloneIntoTable.java
 PRE-CREATION 
  server/tserver/src/main/java/org/apache/accumulo/tserver/Tablet.java 0778f5b 
  server/tserver/src/main/java/org/apache/accumulo/tserver/TabletServer.java 
03fe069 
  
test/src/main/java/org/apache/accumulo/test/performance/thrift/NullTserver.java 
0591b19 
  test/src/test/java/org/apache/accumulo/test/functional/CloneIntoIT.java 
PRE-CREATION 

Diff: https://reviews.apache.org/r/27198/diff/


Testing
---

Includes CloneIntoIT, which exercises all permutations of the flags. Existing 
BulkIT still functions as intended for validation of no feature loss in 
refactoring existing code for multi-purposing.


Thanks,

John Vines



Re: Reasoning behind Key(Text row) using Long.MAX_VALUE

2014-10-24 Thread John Vines
Makes me think the Range(Text row) constructor should be row, true, row,
false

On Fri, Oct 24, 2014 at 10:53 AM, Andrew Wells 
wrote:

> It may be necessary to change either the implementation of Key::new(Text row),
> or change the way Range::exact(Text row) matches
>
> Trace on Key::new(Text row)
> line: 102
> line: 75
>
>
> Trace on Range exact(Text row)
> line 656
> line 82
> line 123
>
> This causes Range exact(Text row) to never match
>
>
>
> --
> *Andrew George Wells*
> *Software Engineer*
> *awe...@clearedgeit.com *
>


Re: Review Request 23397: ACCUMULO-2984 Support Running MAC against a standard instance

2014-10-21 Thread John Vines

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23397/
---

(Updated Oct. 21, 2014, 8:11 p.m.)


Review request for accumulo.


Changes
---

Fixing another Configuration plumbing issue and adding some info statements


Bugs: ACCUMULO-2984
https://issues.apache.org/jira/browse/ACCUMULO-2984


Repository: accumulo


Description
---

Adds a change to SiteConfiguration to allow external setting of the xml 
configuration file.
Adds a single method to MiniAccumuloConfig which allows a user to point to 
accumulo-site.xml and HADOOP_CONF_DIR to use for pulling out requisite instance 
information
Clusters configurations into those required to run inside a MAC-sized footprint 
and those which are for arbitrary naming schemes for MAC
Provides flagging to prevent unnecessary folder creation
Provides flagging to prevent running zookeeper and initializing


Diffs (updated)
-

  core/src/main/java/org/apache/accumulo/core/volume/VolumeConfiguration.java 
c901768 
  core/src/main/java/org/apache/accumulo/core/zookeeper/ZooUtil.java d536f42 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloConfig.java
 be80f85 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloRunner.java
 e261faa 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java
 5d8501e 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloConfigImpl.java
 e9ad045 
  
server/base/src/main/java/org/apache/accumulo/server/conf/ServerConfiguration.java
 50dec57 
  
server/base/src/main/java/org/apache/accumulo/server/fs/VolumeManagerImpl.java 
8ddeb4f 
  server/base/src/main/java/org/apache/accumulo/server/util/AccumuloStatus.java 
PRE-CREATION 
  test/src/test/java/org/apache/accumulo/test/ExistingMacIT.java PRE-CREATION 

Diff: https://reviews.apache.org/r/23397/diff/


Testing
---

Ran the following test code-
public class TestMACWithRealInstance {
  public static void main(String args[]) throws IOException, AccumuloException, 
AccumuloSecurityException, TableExistsException, InterruptedException {
MiniAccumuloConfig macConfig = new MiniAccumuloConfig(new File("/tmp/mac"), 
"secret");
macConfig.setNumTservers(2);
macConfig.setMemory(ServerType.TABLET_SERVER, 2, MemoryUnit.GIGABYTE);
macConfig.useExistingInstance(new 
File("/usr/lib/accumulo/conf/accumulo-site.xml"), new 
File("/usr/lib/hadoop/conf"));
MiniAccumuloCluster mac = new MiniAccumuloCluster(macConfig);
mac.start();
System.out.println("Started");
mac.getConnector("root", "secret").tableOperations().create("macCreated");
System.out.println("Stopping");
mac.stop();
System.out.println("Stopped");
  }
}
Which runs fine, except stopping issues which seem to be related to 
ACCUMULO-2985

After running this, I validated that the table was created in the real accumulo 
instance via zkCli


Thanks,

John Vines



Re: Review Request 23397: ACCUMULO-2984 Support Running MAC against a standard instance

2014-10-21 Thread John Vines

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23397/
---

(Updated Oct. 21, 2014, 7:15 p.m.)


Review request for accumulo.


Changes
---

Addressing latest batch of Keith reported errors


Bugs: ACCUMULO-2984
https://issues.apache.org/jira/browse/ACCUMULO-2984


Repository: accumulo


Description
---

Adds a change to SiteConfiguration to allow external setting of the xml 
configuration file.
Adds a single method to MiniAccumuloConfig which allows a user to point to 
accumulo-site.xml and HADOOP_CONF_DIR to use for pulling out requisite instance 
information
Clusters configurations into those required to run inside a MAC-sized footprint 
and those which are for arbitrary naming schemes for MAC
Provides flagging to prevent unnecessary folder creation
Provides flagging to prevent running zookeeper and initializing


Diffs (updated)
-

  core/src/main/java/org/apache/accumulo/core/zookeeper/ZooUtil.java d536f42 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloConfig.java
 be80f85 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloRunner.java
 e261faa 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java
 5d8501e 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloConfigImpl.java
 e9ad045 
  
server/base/src/main/java/org/apache/accumulo/server/conf/ServerConfiguration.java
 50dec57 
  
server/base/src/main/java/org/apache/accumulo/server/fs/VolumeManagerImpl.java 
8ddeb4f 
  server/base/src/main/java/org/apache/accumulo/server/util/AccumuloStatus.java 
PRE-CREATION 
  test/src/test/java/org/apache/accumulo/test/ExistingMacIT.java PRE-CREATION 

Diff: https://reviews.apache.org/r/23397/diff/


Testing
---

Ran the following test code-
public class TestMACWithRealInstance {
  public static void main(String args[]) throws IOException, AccumuloException, 
AccumuloSecurityException, TableExistsException, InterruptedException {
MiniAccumuloConfig macConfig = new MiniAccumuloConfig(new File("/tmp/mac"), 
"secret");
macConfig.setNumTservers(2);
macConfig.setMemory(ServerType.TABLET_SERVER, 2, MemoryUnit.GIGABYTE);
macConfig.useExistingInstance(new 
File("/usr/lib/accumulo/conf/accumulo-site.xml"), new 
File("/usr/lib/hadoop/conf"));
MiniAccumuloCluster mac = new MiniAccumuloCluster(macConfig);
mac.start();
System.out.println("Started");
mac.getConnector("root", "secret").tableOperations().create("macCreated");
System.out.println("Stopping");
mac.stop();
System.out.println("Stopped");
  }
}
Which runs fine, except stopping issues which seem to be related to 
ACCUMULO-2985

After running this, I validated that the table was created in the real accumulo 
instance via zkCli


Thanks,

John Vines



Re: Review Request 23397: ACCUMULO-2984 Support Running MAC against a standard instance

2014-10-21 Thread John Vines

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23397/
---

(Updated Oct. 21, 2014, 6:35 p.m.)


Review request for accumulo.


Changes
---

Fully revised version based on Keith's suggestions which no longer does any sort 
of manipulation of static utilities.


Bugs: ACCUMULO-2984
https://issues.apache.org/jira/browse/ACCUMULO-2984


Repository: accumulo


Description
---

Adds a change to SiteConfiguration to allow external setting of the xml 
configuration file.
Adds a single method to MiniAccumuloConfig which allows a user to point to 
accumulo-site.xml and HADOOP_CONF_DIR to use for pulling out requisite instance 
information
Clusters configurations into those required to run inside a MAC-sized footprint 
and those which are for arbitrary naming schemes for MAC
Provides flagging to prevent unnecessary folder creation
Provides flagging to prevent running zookeeper and initializing


Diffs (updated)
-

  core/src/main/java/org/apache/accumulo/core/zookeeper/ZooUtil.java d536f42 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloConfig.java
 be80f85 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloRunner.java
 e261faa 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java
 5d8501e 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloConfigImpl.java
 e9ad045 
  
server/base/src/main/java/org/apache/accumulo/server/conf/ServerConfiguration.java
 50dec57 
  
server/base/src/main/java/org/apache/accumulo/server/fs/VolumeManagerImpl.java 
8ddeb4f 
  server/base/src/main/java/org/apache/accumulo/server/util/AccumuloStatus.java 
PRE-CREATION 
  test/src/test/java/org/apache/accumulo/test/ExistingMacIT.java PRE-CREATION 

Diff: https://reviews.apache.org/r/23397/diff/


Testing
---

Ran the following test code-
public class TestMACWithRealInstance {
  public static void main(String args[]) throws IOException, AccumuloException, 
AccumuloSecurityException, TableExistsException, InterruptedException {
MiniAccumuloConfig macConfig = new MiniAccumuloConfig(new File("/tmp/mac"), 
"secret");
macConfig.setNumTservers(2);
macConfig.setMemory(ServerType.TABLET_SERVER, 2, MemoryUnit.GIGABYTE);
macConfig.useExistingInstance(new 
File("/usr/lib/accumulo/conf/accumulo-site.xml"), new 
File("/usr/lib/hadoop/conf"));
MiniAccumuloCluster mac = new MiniAccumuloCluster(macConfig);
mac.start();
System.out.println("Started");
mac.getConnector("root", "secret").tableOperations().create("macCreated");
System.out.println("Stopping");
mac.stop();
System.out.println("Stopped");
  }
}
Which runs fine, except stopping issues which seem to be related to 
ACCUMULO-2985

After running this, I validated that the table was created in the real accumulo 
instance via zkCli


Thanks,

John Vines



Re: Review Request 23397: ACCUMULO-2984 Support Running MAC against a standard instance

2014-10-20 Thread John Vines


> On Oct. 20, 2014, 9:02 p.m., kturner wrote:
> > minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java,
> >  line 471
> > <https://reviews.apache.org/r/23397/diff/7/?file=726322#file726322line471>
> >
> > In the IT I wrote, this code is causing problems.  The following 
> > situation is occurring.
> > 
> >  * Create MAC to connect to Accumulo instance 1
> >  * Stop ZK1 for Accumulo instance 1
> >  * Create MAC to connect to Accumulo instance 2 (Accumulo instance 2 
> > uses ZK2)
> > 
> > I think the code gets stuck trying to connect to ZK1, when trying to 
> > connect to Accumulo instance 2
> 
> John Vines wrote:
> So you're expecting this to work with a new zookeeper? Standard Accumulo 
> doesn't work with a new ZK instance, why would you expect the MAC equivalent 
> to?
> 
> kturner wrote:
> It's just what my IT did.  Although I think it makes sense to fix this 
> (could be a follow on issue).  Unless this is fixed, this new functionality 
> can only be used to run one Accumulo instance per Java process (or 
> classloader).  It may also cache instance name??? not sure.   It would be 
> nice to use this functionality to run existing Accumulo instance A and then 
> run existing Accumulo instance B from the same process.
> 
> kturner wrote:
> > Standard Accumulo doesn't work with a new ZK instance, why would you 
> expect the MAC equivalent to?
> 
> To clarify, my IT created ZK1 and Accumulo instance 1 in testA.  Then 
> testB created ZK2 and instance 2 in the same process.  When I tried to run 
> this second instance it hung.  I think it was trying to connect to the old 
> ZK.   So I am not using a new ZK with an existing Accumulo instance.
> 
> kturner wrote:
> I outlined the situation incorrectly.  Should have said the following.
> 
> 
>  * Create MAC1 to run Accumulo instance 1
>  * Stop MAC1
>  * Stop ZK1 for Accumulo instance 1
>  * Create MAC2 to run Accumulo instance 2 (Accumulo instance 2 uses ZK2)

The short explanation here is that this is using singleton items for parsing 
some information. Specifically, this is HdfsZooInstance, ZooReaderWriter, etc. 
I have resolved these issues in my latest patch, but I'm not sure how welcoming 
people would be to adding methods for clearing out singleton instances.


- John


-------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23397/#review57431
---
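
A generic sketch of the "method for clearing out a singleton" idea mentioned above. The class below is made up and is not the actual HdfsZooInstance or ZooReaderWriter code: it only illustrates the pattern of keeping the lazily cached instance for production callers while giving tests an explicit reset so a second test in the same JVM can point at a different cluster.

```java
final class CachedInstanceSketch {
  private static volatile CachedInstanceSketch instance;

  private final String instanceName;

  private CachedInstanceSketch(String instanceName) {
    this.instanceName = instanceName;
  }

  /** Lazily creates and caches the singleton; later calls ignore the argument,
      which is exactly the problem the back-to-back ITs run into. */
  static CachedInstanceSketch getInstance(String instanceName) {
    CachedInstanceSketch local = instance;
    if (local == null) {
      synchronized (CachedInstanceSketch.class) {
        local = instance;
        if (local == null) {
          local = new CachedInstanceSketch(instanceName);
          instance = local;
        }
      }
    }
    return local;
  }

  /** Test-only escape hatch: forget the cached instance so the next getInstance() rebuilds it. */
  static void reset() {
    synchronized (CachedInstanceSketch.class) {
      instance = null;
    }
  }

  String getInstanceName() {
    return instanceName;
  }
}
```
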


On Oct. 21, 2014, 12:32 a.m., John Vines wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/23397/
> ---
> 
> (Updated Oct. 21, 2014, 12:32 a.m.)
> 
> 
> Review request for accumulo.
> 
> 
> Bugs: ACCUMULO-2984
> https://issues.apache.org/jira/browse/ACCUMULO-2984
> 
> 
> Repository: accumulo
> 
> 
> Description
> ---
> 
> Adds a change to SiteConfiguration to allow external setting of the xml 
> configuration file.
> Adds a single method to MiniAccumuloConfig which allows a user to point to 
> accumulo-site.xml and HADOOP_CONF_DIR to use for pulling out requisite 
> instance information
> Clusters configurations into those required to run inside a MAC-sized 
> footprint and those which are for arbitrary naming schemes for MAC
> Provides flagging to prevent unnecessary folder creation
> Provides flagging to prevent running zookeeper and initializing
> 
> 
> Diffs
> -
> 
>   core/src/main/java/org/apache/accumulo/core/conf/SiteConfiguration.java 
> 9b65e7d 
>   
> minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloConfig.java
>  be80f85 
>   
> minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java
>  5d8501e 
>   
> minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloConfigImpl.java
>  e9ad045 
>   
> server/base/src/main/java/org/apache/accumulo/server/client/HdfsZooInstance.java
>  3508164 
>   
> server/base/src/main/java/org/apache/accumulo/server/conf/ServerConfiguration.java
>  50dec57 
>   
> server/base/src/main/java/org/apache/accumulo/server/util/AccumuloStatus.java 
> PRE-CREATION 
>   
> server/base/src/main/java/org/apache/accumulo/server/zookeeper/ZooReaderWriter.java
>  435591d 
>   test/src/test/java/org/apache/accumulo/test/ExistingMacIT.java PRE-CREATION 
> 
> Diff:

Re: Review Request 23397: ACCUMULO-2984 Support Running MAC against a standard instance

2014-10-20 Thread John Vines

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23397/
---

(Updated Oct. 21, 2014, 12:32 a.m.)


Review request for accumulo.


Changes
---

Rolling in Keith's tests and fixing them by adding methods for clearing out 
singletons


Bugs: ACCUMULO-2984
https://issues.apache.org/jira/browse/ACCUMULO-2984


Repository: accumulo


Description
---

Adds a change to SiteConfiguration to allow external setting of the xml 
configuration file.
Adds a single method to MiniAccumuloConfig which allows a user to point to 
accumulo-site.xml and HADOOP_CONF_DIR to use for pulling out requisite instance 
information
Clusters configurations into those required to run inside a MAC-sized footprint 
and those which are for arbitrary naming schemes for MAC
Provides flagging to prevent unnecessary folder creation
Provides flagging to prevent running zookeeper and initializing


Diffs (updated)
-

  core/src/main/java/org/apache/accumulo/core/conf/SiteConfiguration.java 
9b65e7d 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloConfig.java
 be80f85 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java
 5d8501e 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloConfigImpl.java
 e9ad045 
  
server/base/src/main/java/org/apache/accumulo/server/client/HdfsZooInstance.java
 3508164 
  
server/base/src/main/java/org/apache/accumulo/server/conf/ServerConfiguration.java
 50dec57 
  server/base/src/main/java/org/apache/accumulo/server/util/AccumuloStatus.java 
PRE-CREATION 
  
server/base/src/main/java/org/apache/accumulo/server/zookeeper/ZooReaderWriter.java
 435591d 
  test/src/test/java/org/apache/accumulo/test/ExistingMacIT.java PRE-CREATION 

Diff: https://reviews.apache.org/r/23397/diff/


Testing
---

Ran the following test code-
public class TestMACWithRealInstance {
  public static void main(String args[]) throws IOException, AccumuloException, 
AccumuloSecurityException, TableExistsException, InterruptedException {
MiniAccumuloConfig macConfig = new MiniAccumuloConfig(new File("/tmp/mac"), 
"secret");
macConfig.setNumTservers(2);
macConfig.setMemory(ServerType.TABLET_SERVER, 2, MemoryUnit.GIGABYTE);
macConfig.useExistingInstance(new 
File("/usr/lib/accumulo/conf/accumulo-site.xml"), new 
File("/usr/lib/hadoop/conf"));
MiniAccumuloCluster mac = new MiniAccumuloCluster(macConfig);
mac.start();
System.out.println("Started");
mac.getConnector("root", "secret").tableOperations().create("macCreated");
System.out.println("Stopping");
mac.stop();
System.out.println("Stopped");
  }
}
Which runs fine, except stopping issues which seem to be related to 
ACCUMULO-2985

After running this, I validated that the table was created in the real accumulo 
instance via zkCli


Thanks,

John Vines



Re: Review Request 23397: ACCUMULO-2984 Support Running MAC against a standard instance

2014-10-20 Thread John Vines


> On Oct. 20, 2014, 9:02 p.m., kturner wrote:
> > minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java,
> >  line 471
> > <https://reviews.apache.org/r/23397/diff/7/?file=726322#file726322line471>
> >
> > In the IT I wrote, this code is causing problems.  The following 
> > situation is occurring.
> > 
> >  * Create MAC to connect to Accumulo instance 1
> >  * Stop ZK1 for Accumulo instance 1
> >  * Create MAC to connect to Accumulo instance 2 (Accumulo instance 2 
> > uses ZK2)
> > 
> > I think the code gets stuck trying to connect to ZK1, when trying to 
> > connect to Accumulo instance 2

So you're expecting this to work with a new zookeeper? Standard Accumulo 
doesn't work with a new ZK instance, why would you expect the MAC equivalent to?


- John


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23397/#review57431
---


On Oct. 20, 2014, 8:09 p.m., John Vines wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/23397/
> ---
> 
> (Updated Oct. 20, 2014, 8:09 p.m.)
> 
> 
> Review request for accumulo.
> 
> 
> Bugs: ACCUMULO-2984
> https://issues.apache.org/jira/browse/ACCUMULO-2984
> 
> 
> Repository: accumulo
> 
> 
> Description
> ---
> 
> Adds a change to SiteConfiguration to allow external setting of the xml 
> configuration file.
> Adds a single method to MiniAccumuloConfig which allows a user to point to 
> accumulo-site.xml and HADOOP_CONF_DIR to use for pulling out requisite 
> instance information
> Clusters configurations into those required to run inside a MAC-sized 
> footprint and those which are for arbitrary naming schemes for MAC
> Provides flagging to prevent unnecessary folder creation
> Provides flagging to prevent running zookeeper and initializing
> 
> 
> Diffs
> -
> 
>   core/src/main/java/org/apache/accumulo/core/conf/SiteConfiguration.java 
> 9b65e7d 
>   
> minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloConfig.java
>  be80f85 
>   
> minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java
>  5d8501e 
>   
> minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloConfigImpl.java
>  e9ad045 
>   
> server/base/src/main/java/org/apache/accumulo/server/util/AccumuloStatus.java 
> PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/23397/diff/
> 
> 
> Testing
> ---
> 
> Ran the following test code-
> public class TestMACWithRealInstance {
>   public static void main(String args[]) throws IOException, 
> AccumuloException, AccumuloSecurityException, TableExistsException, 
> InterruptedException {
> MiniAccumuloConfig macConfig = new MiniAccumuloConfig(new 
> File("/tmp/mac"), "secret");
> macConfig.setNumTservers(2);
> macConfig.setMemory(ServerType.TABLET_SERVER, 2, MemoryUnit.GIGABYTE);
> macConfig.useExistingInstance(new 
> File("/usr/lib/accumulo/conf/accumulo-site.xml"), new 
> File("/usr/lib/hadoop/conf"));
> MiniAccumuloCluster mac = new MiniAccumuloCluster(macConfig);
> mac.start();
> System.out.println("Started");
> mac.getConnector("root", "secret").tableOperations().create("macCreated");
> System.out.println("Stopping");
> mac.stop();
> System.out.println("Stopped");
>   }
> }
> Which runs fine, except for stopping issues which seem to be related to 
> ACCUMULO-2985
> 
> After running this, I validated that the table was created in the real 
> accumulo instance via zkCli
> 
> 
> Thanks,
> 
> John Vines
> 
>



Re: Review Request 23397: ACCUMULO-2984 Support Running MAC against a standard instance

2014-10-20 Thread John Vines

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23397/
---

(Updated Oct. 20, 2014, 8:09 p.m.)


Review request for accumulo.


Changes
---

Addressing NPE concerns


Bugs: ACCUMULO-2984
https://issues.apache.org/jira/browse/ACCUMULO-2984


Repository: accumulo


Description
---

Adds a change to SiteConfiguration to allow external setting of the xml 
configuration file.
Adds a single method to MiniAccumuloConfig which allows a user to point to 
accumulo-site.xml and HADOOP_CONF_DIR to use for pulling out requisite instance 
information
Clusters configurations into those required to run inside a MAC-sized footprint 
and those which are for arbitrary naming schemes for MAC
Provides flagging to prevent unnecessary folder creation
Provides flagging to prevent running zookeeper and initializing


Diffs (updated)
-

  core/src/main/java/org/apache/accumulo/core/conf/SiteConfiguration.java 
9b65e7d 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloConfig.java
 be80f85 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java
 5d8501e 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloConfigImpl.java
 e9ad045 
  server/base/src/main/java/org/apache/accumulo/server/util/AccumuloStatus.java 
PRE-CREATION 

Diff: https://reviews.apache.org/r/23397/diff/


Testing
---

Ran the following test code-
public class TestMACWithRealInstance {
  public static void main(String args[]) throws IOException, AccumuloException, 
AccumuloSecurityException, TableExistsException, InterruptedException {
MiniAccumuloConfig macConfig = new MiniAccumuloConfig(new File("/tmp/mac"), 
"secret");
macConfig.setNumTservers(2);
macConfig.setMemory(ServerType.TABLET_SERVER, 2, MemoryUnit.GIGABYTE);
macConfig.useExistingInstance(new 
File("/usr/lib/accumulo/conf/accumulo-site.xml"), new 
File("/usr/lib/hadoop/conf"));
MiniAccumuloCluster mac = new MiniAccumuloCluster(macConfig);
mac.start();
System.out.println("Started");
mac.getConnector("root", "secret").tableOperations().create("macCreated");
System.out.println("Stopping");
mac.stop();
System.out.println("Stopped");
  }
}
Which runs fine, except for stopping issues which seem to be related to 
ACCUMULO-2985

After running this, I validated that the table was created in the real accumulo 
instance via zkCli


Thanks,

John Vines



Re: Review Request 23397: ACCUMULO-2984 Support Running MAC against a standard instance

2014-10-20 Thread John Vines

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23397/
---

(Updated Oct. 20, 2014, 7:56 p.m.)


Review request for accumulo.


Bugs: ACCUMULO-2984
https://issues.apache.org/jira/browse/ACCUMULO-2984


Repository: accumulo


Description
---

Adds a change to SiteConfiguration to allow external setting of the xml 
configuration file.
Adds a single method to MiniAccumuloConfig which allows a user to point to 
accumulo-site.xml and HADOOP_CONF_DIR to use for pulling out requisite instance 
information
Clusters configurations into those required to run inside a MAC-sized footprint 
and those which are for arbitrary naming schemes for MAC
Provides flagging to prevent unnecessary folder creation
Provides flagging to prevent running zookeeper and initializing


Diffs
-

  core/src/main/java/org/apache/accumulo/core/conf/SiteConfiguration.java 
9b65e7d 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloConfig.java
 be80f85 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java
 5d8501e 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloConfigImpl.java
 e9ad045 
  server/base/src/main/java/org/apache/accumulo/server/util/AccumuloStatus.java 
PRE-CREATION 

Diff: https://reviews.apache.org/r/23397/diff/


Testing
---

Ran the following test code-
public class TestMACWithRealInstance {
  public static void main(String args[]) throws IOException, AccumuloException, 
AccumuloSecurityException, TableExistsException, InterruptedException {
MiniAccumuloConfig macConfig = new MiniAccumuloConfig(new File("/tmp/mac"), 
"secret");
macConfig.setNumTservers(2);
macConfig.setMemory(ServerType.TABLET_SERVER, 2, MemoryUnit.GIGABYTE);
macConfig.useExistingInstance(new 
File("/usr/lib/accumulo/conf/accumulo-site.xml"), new 
File("/usr/lib/hadoop/conf"));
MiniAccumuloCluster mac = new MiniAccumuloCluster(macConfig);
mac.start();
System.out.println("Started");
mac.getConnector("root", "secret").tableOperations().create("macCreated");
System.out.println("Stopping");
mac.stop();
System.out.println("Stopped");
  }
}
Which runs fine, except for stopping issues which seem to be related to 
ACCUMULO-2985

After running this, I validated that the table was created in the real accumulo 
instance via zkCli


Thanks,

John Vines



Re: Review Request 23397: ACCUMULO-2984 Support Running MAC against a standard instance

2014-10-20 Thread John Vines


> On July 15, 2014, 6:58 p.m., kturner wrote:
> > minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloConfig.java,
> >  line 267
> > <https://reviews.apache.org/r/23397/diff/4/?file=628112#file628112line267>
> >
> > A test to prevent regressions would be really nice.  Might be able to 
> > start a mini instance, stop it, use internal exec methods to start a 
> > zookeeper, and then point another mac to the old mini instance.
> 
> John Vines wrote:
> I'm gonna be blunt - this is really really hard and I think it can be 
> punted to a ticket for future work
> 
> kturner wrote:
> I was going to try writing a test and see how it went; however, the patch 
> does not apply to the 1.6 branch for me.  Below is the md5 of the patch I tried.
> 
> md5sum 0001-ACCUMULO-2984-support-running-MAC-against-a-real-acc.patch
> 68738fd6c45ce47a27ac21ae7d69f5f9  
> 0001-ACCUMULO-2984-support-running-MAC-against-a-real-acc.patch
> 
> John Vines wrote:
> That's an old version. I uploaded an updated version last week
>     b2de90dfebf8ade591bd71b2e1ab36fe  
> 0001-ACCUMULO-2984-adding-ability-to-run-MAC-against-a-pe.patch
> 
> John Vines wrote:
> Uploaded again with one that applies cleanly with git am. RB uses the patch 
> command, so it's failing here.
> 
> kturner wrote:
> I downloaded the patch and it applied, however I am seeing a different 
> md5.
> f8ff459e213d255f9a36698f8a6d7376  
> 0001-ACCUMULO-2984-adding-ability-to-run-MAC-against-a-pe.patch

That matches the new uploaded version


- John


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23397/#review47792
---


On Oct. 20, 2014, 4:51 p.m., John Vines wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/23397/
> ---
> 
> (Updated Oct. 20, 2014, 4:51 p.m.)
> 
> 
> Review request for accumulo.
> 
> 
> Bugs: ACCUMULO-2984
> https://issues.apache.org/jira/browse/ACCUMULO-2984
> 
> 
> Repository: accumulo
> 
> 
> Description
> ---
> 
> Adds a change to SiteConfiguration to allow external setting of the xml 
> configuration file.
> Adds a single method to MiniAccumuloConfig which allows a user to point to 
> accumulo-site.xml and HADOOP_CONF_DIR to use for pulling out requisite 
> instance information
> Clusters configurations into those required to run inside a MAC-sized 
> footprint and those which are for arbitrary naming schemes for MAC
> Provides flagging to prevent unnecessary folder creation
> Provides flagging to prevent running zookeeper and initializing
> 
> 
> Diffs
> -
> 
>   core/src/main/java/org/apache/accumulo/core/conf/SiteConfiguration.java 
> 9b65e7d 
>   
> minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloConfig.java
>  be80f85 
>   
> minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java
>  5d8501e 
>   
> minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloConfigImpl.java
>  e9ad045 
>   
> server/base/src/main/java/org/apache/accumulo/server/util/AccumuloStatus.java 
> PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/23397/diff/
> 
> 
> Testing
> ---
> 
> Ran the following test code-
> public class TestMACWithRealInstance {
>   public static void main(String args[]) throws IOException, 
> AccumuloException, AccumuloSecurityException, TableExistsException, 
> InterruptedException {
> MiniAccumuloConfig macConfig = new MiniAccumuloConfig(new 
> File("/tmp/mac"), "secret");
> macConfig.setNumTservers(2);
> macConfig.setMemory(ServerType.TABLET_SERVER, 2, MemoryUnit.GIGABYTE);
> macConfig.useExistingInstance(new 
> File("/usr/lib/accumulo/conf/accumulo-site.xml"), new 
> File("/usr/lib/hadoop/conf"));
> MiniAccumuloCluster mac = new MiniAccumuloCluster(macConfig);
> mac.start();
> System.out.println("Started");
> mac.getConnector("root", "secret").tableOperations().create("macCreated");
> System.out.println("Stopping");
> mac.stop();
> System.out.println("Stopped");
>   }
> }
> Which runs fine, except for stopping issues which seem to be related to 
> ACCUMULO-2985
> 
> After running this, I validated that the table was created in the real 
> accumulo instance via zkCli
> 
> 
> Thanks,
> 
> John Vines
> 
>



Re: Review Request 23397: ACCUMULO-2984 Support Running MAC against a standard instance

2014-10-20 Thread John Vines


> On July 15, 2014, 6:58 p.m., kturner wrote:
> > minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloConfig.java,
> >  line 267
> > <https://reviews.apache.org/r/23397/diff/4/?file=628112#file628112line267>
> >
> > A test to prevent regressions would be really nice.  Might be able to 
> > start a mini instance, stop it, use internal exec methods to start a 
> > zookeeper, and then point another mac to the old mini instance.
> 
> John Vines wrote:
> I'm gonna be blunt - this is really really hard and I think it can be 
> punted to a ticket for future work
> 
> kturner wrote:
> I was going to try writing a test and see how it went; however, the patch 
> does not apply to the 1.6 branch for me.  Below is the md5 of the patch I tried.
> 
> md5sum 0001-ACCUMULO-2984-support-running-MAC-against-a-real-acc.patch
> 68738fd6c45ce47a27ac21ae7d69f5f9  
> 0001-ACCUMULO-2984-support-running-MAC-against-a-real-acc.patch
> 
> John Vines wrote:
> That's an old version. I uploaded an updated version last week
> b2de90dfebf8ade591bd71b2e1ab36fe  
> 0001-ACCUMULO-2984-adding-ability-to-run-MAC-against-a-pe.patch

Uploaded again with one that applies cleanly with git am. RB uses the patch 
command, so it's failing here.


- John


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23397/#review47792
---


On Oct. 20, 2014, 4:51 p.m., John Vines wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/23397/
> ---
> 
> (Updated Oct. 20, 2014, 4:51 p.m.)
> 
> 
> Review request for accumulo.
> 
> 
> Bugs: ACCUMULO-2984
> https://issues.apache.org/jira/browse/ACCUMULO-2984
> 
> 
> Repository: accumulo
> 
> 
> Description
> ---
> 
> Adds a change to SiteConfiguration to allow external setting of the xml 
> configuration file.
> Adds a single method to MiniAccumuloConfig which allows a user to point to 
> accumulo-site.xml and HADOOP_CONF_DIR to use for pulling out requisite 
> instance information
> Clusters configurations into those required to run inside a MAC-sized 
> footprint and those which are for arbitrary naming schemes for MAC
> Provides flagging to prevent unnecessary folder creation
> Provides flagging to prevent running zookeeper and initializing
> 
> 
> Diffs
> -
> 
>   core/src/main/java/org/apache/accumulo/core/conf/SiteConfiguration.java 
> 9b65e7d 
>   
> minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloConfig.java
>  be80f85 
>   
> minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java
>  5d8501e 
>   
> minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloConfigImpl.java
>  e9ad045 
>   
> server/base/src/main/java/org/apache/accumulo/server/util/AccumuloStatus.java 
> PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/23397/diff/
> 
> 
> Testing
> ---
> 
> Ran the following test code-
> public class TestMACWithRealInstance {
>   public static void main(String args[]) throws IOException, 
> AccumuloException, AccumuloSecurityException, TableExistsException, 
> InterruptedException {
> MiniAccumuloConfig macConfig = new MiniAccumuloConfig(new 
> File("/tmp/mac"), "secret");
> macConfig.setNumTservers(2);
> macConfig.setMemory(ServerType.TABLET_SERVER, 2, MemoryUnit.GIGABYTE);
> macConfig.useExistingInstance(new 
> File("/usr/lib/accumulo/conf/accumulo-site.xml"), new 
> File("/usr/lib/hadoop/conf"));
> MiniAccumuloCluster mac = new MiniAccumuloCluster(macConfig);
> mac.start();
> System.out.println("Started");
> mac.getConnector("root", "secret").tableOperations().create("macCreated");
> System.out.println("Stopping");
> mac.stop();
> System.out.println("Stopped");
>   }
> }
> Which runs fine, except for stopping issues which seem to be related to 
> ACCUMULO-2985
> 
> After running this, I validated that the table was created in the real 
> accumulo instance via zkCli
> 
> 
> Thanks,
> 
> John Vines
> 
>



Re: Review Request 23397: ACCUMULO-2984 Support Running MAC against a standard instance

2014-10-20 Thread John Vines

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23397/
---

(Updated Oct. 20, 2014, 4:51 p.m.)


Review request for accumulo.


Changes
---

Trying to get a functional patch


Bugs: ACCUMULO-2984
https://issues.apache.org/jira/browse/ACCUMULO-2984


Repository: accumulo


Description
---

Adds a change to SiteConfiguration to allow external setting of the xml 
configuration file.
Adds a single method to MiniAccumuloConfig which allows a user to point to 
accumulo-site.xml and HADOOP_CONF_DIR to use for pulling out requisite instance 
information
Clusters configurations into those required to run inside a MAC-sized footprint 
and those which are for arbitrary naming schemes for MAC
Provides flagging to prevent unnecessary folder creation
Provides flagging to prevent running zookeeper and initializing


Diffs (updated)
-

  core/src/main/java/org/apache/accumulo/core/conf/SiteConfiguration.java 
9b65e7d 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloConfig.java
 be80f85 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java
 5d8501e 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloConfigImpl.java
 e9ad045 
  server/base/src/main/java/org/apache/accumulo/server/util/AccumuloStatus.java 
PRE-CREATION 

Diff: https://reviews.apache.org/r/23397/diff/


Testing
---

Ran the following test code-
public class TestMACWithRealInstance {
  public static void main(String args[]) throws IOException, AccumuloException, 
AccumuloSecurityException, TableExistsException, InterruptedException {
MiniAccumuloConfig macConfig = new MiniAccumuloConfig(new File("/tmp/mac"), 
"secret");
macConfig.setNumTservers(2);
macConfig.setMemory(ServerType.TABLET_SERVER, 2, MemoryUnit.GIGABYTE);
macConfig.useExistingInstance(new 
File("/usr/lib/accumulo/conf/accumulo-site.xml"), new 
File("/usr/lib/hadoop/conf"));
MiniAccumuloCluster mac = new MiniAccumuloCluster(macConfig);
mac.start();
System.out.println("Started");
mac.getConnector("root", "secret").tableOperations().create("macCreated");
System.out.println("Stopping");
mac.stop();
System.out.println("Stopped");
  }
}
Which runs fine, except for stopping issues which seem to be related to 
ACCUMULO-2985

After running this, I validated that the table was created in the real accumulo 
instance via zkCli


Thanks,

John Vines



Re: Review Request 23397: ACCUMULO-2984 Support Running MAC against a standard instance

2014-10-20 Thread John Vines


> On July 15, 2014, 6:58 p.m., kturner wrote:
> > minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloConfig.java,
> >  line 267
> > <https://reviews.apache.org/r/23397/diff/4/?file=628112#file628112line267>
> >
> > A test to prevent regressions would be really nice.  Might be able to 
> > start a mini instance, stop it, use internal exec methods to start a 
> > zookeeper, and then point another mac to the old mini instance.
> 
> John Vines wrote:
> I'm gonna be blunt - this is really really hard and I think it can be 
> punted to a ticket for future work
> 
> kturner wrote:
> I was going to try writing a test and see how it went; however, the patch 
> does not apply to the 1.6 branch for me.  Below is the md5 of the patch I tried.
> 
> md5sum 0001-ACCUMULO-2984-support-running-MAC-against-a-real-acc.patch
> 68738fd6c45ce47a27ac21ae7d69f5f9  
> 0001-ACCUMULO-2984-support-running-MAC-against-a-real-acc.patch

That's an old version. I uploaded an updated version last week
b2de90dfebf8ade591bd71b2e1ab36fe  
0001-ACCUMULO-2984-adding-ability-to-run-MAC-against-a-pe.patch


- John


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23397/#review47792
---


On Oct. 15, 2014, 10:12 p.m., John Vines wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/23397/
> ---
> 
> (Updated Oct. 15, 2014, 10:12 p.m.)
> 
> 
> Review request for accumulo.
> 
> 
> Bugs: ACCUMULO-2984
> https://issues.apache.org/jira/browse/ACCUMULO-2984
> 
> 
> Repository: accumulo
> 
> 
> Description
> ---
> 
> Adds a change to SiteConfiguration to allow external setting of the xml 
> configuration file.
> Adds a single method to MiniAccumuloConfig which allows a user to point to 
> accumulo-site.xml and HADOOP_CONF_DIR to use for pulling out requisite 
> instance information
> Clusters configurations into those required to run inside a MAC-sized 
> footprint and those which are for arbitrary naming schemes for MAC
> Provides flagging to prevent unnecessary folder creation
> Provides flagging to prevent running zookeeper and initializing
> 
> 
> Diffs
> -
> 
>   core/src/main/java/org/apache/accumulo/core/conf/SiteConfiguration.java 
> 12f3ad2 
>   
> minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloConfig.java
>  be80f85 
>   
> minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java
>  07c5742 
>   
> minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloConfigImpl.java
>  4878967 
>   
> server/base/src/main/java/org/apache/accumulo/server/util/AccumuloStatus.java 
> PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/23397/diff/
> 
> 
> Testing
> ---
> 
> Ran the following test code-
> public class TestMACWithRealInstance {
>   public static void main(String args[]) throws IOException, 
> AccumuloException, AccumuloSecurityException, TableExistsException, 
> InterruptedException {
> MiniAccumuloConfig macConfig = new MiniAccumuloConfig(new 
> File("/tmp/mac"), "secret");
> macConfig.setNumTservers(2);
> macConfig.setMemory(ServerType.TABLET_SERVER, 2, MemoryUnit.GIGABYTE);
> macConfig.useExistingInstance(new 
> File("/usr/lib/accumulo/conf/accumulo-site.xml"), new 
> File("/usr/lib/hadoop/conf"));
> MiniAccumuloCluster mac = new MiniAccumuloCluster(macConfig);
> mac.start();
> System.out.println("Started");
> mac.getConnector("root", "secret").tableOperations().create("macCreated");
> System.out.println("Stopping");
> mac.stop();
> System.out.println("Stopped");
>   }
> }
> Which runs fine, except for stopping issues which seem to be related to 
> ACCUMULO-2985
> 
> After running this, I validated that the table was created in the real 
> accumulo instance via zkCli
> 
> 
> Thanks,
> 
> John Vines
> 
>



Re: Review Request 23397: ACCUMULO-2984 Support Running MAC against a standard instance

2014-10-15 Thread John Vines

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23397/
---

(Updated Oct. 15, 2014, 10:12 p.m.)


Review request for accumulo.


Bugs: ACCUMULO-2984
https://issues.apache.org/jira/browse/ACCUMULO-2984


Repository: accumulo


Description
---

Adds a change to SiteConfiguration to allow external setting of the xml 
configuration file.
Adds a single method to MiniAccumuloConfig which allows a user to point to 
accumulo-site.xml and HADOOP_CONF_DIR to use for pulling out requisite instance 
information
Clusters configurations into those required to run inside a MAC-sized footprint 
and those which are for arbitrary naming schemes for MAC
Provides flagging to prevent unnecessary folder creation
Provides flagging to prevent running zookeeper and initializing


Diffs (updated)
-

  core/src/main/java/org/apache/accumulo/core/conf/SiteConfiguration.java 
12f3ad2 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloConfig.java
 be80f85 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java
 07c5742 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloConfigImpl.java
 4878967 
  server/base/src/main/java/org/apache/accumulo/server/util/AccumuloStatus.java 
PRE-CREATION 

Diff: https://reviews.apache.org/r/23397/diff/


Testing
---

Ran the following test code-
public class TestMACWithRealInstance {
  public static void main(String args[]) throws IOException, AccumuloException, 
AccumuloSecurityException, TableExistsException, InterruptedException {
MiniAccumuloConfig macConfig = new MiniAccumuloConfig(new File("/tmp/mac"), 
"secret");
macConfig.setNumTservers(2);
macConfig.setMemory(ServerType.TABLET_SERVER, 2, MemoryUnit.GIGABYTE);
macConfig.useExistingInstance(new 
File("/usr/lib/accumulo/conf/accumulo-site.xml"), new 
File("/usr/lib/hadoop/conf"));
MiniAccumuloCluster mac = new MiniAccumuloCluster(macConfig);
mac.start();
System.out.println("Started");
mac.getConnector("root", "secret").tableOperations().create("macCreated");
System.out.println("Stopping");
mac.stop();
System.out.println("Stopped");
  }
}
Which runs fine, except for stopping issues which seem to be related to 
ACCUMULO-2985

After running this, I validated that the table was created in the real accumulo 
instance via zkCli


Thanks,

John Vines



Re: Review Request 23397: ACCUMULO-2984 Support Running MAC against a standard instance

2014-10-15 Thread John Vines

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23397/
---

(Updated Oct. 15, 2014, 10:10 p.m.)


Review request for accumulo.


Bugs: ACCUMULO-2984
https://issues.apache.org/jira/browse/ACCUMULO-2984


Repository: accumulo


Description
---

Adds a change to SiteConfiguration to allow external setting of the xml 
configuration file.
Adds a single method to MiniAccumuloConfig which allows a user to point to 
accumulo-site.xml and HADOOP_CONF_DIR to use for pulling out requisite instance 
information
Clusters configurations into those required to run inside a MAC-sized footprint 
and those which are for arbitrary naming schemes for MAC
Provides flagging to prevent unnecessary folder creation
Provides flagging to prevent running zookeeper and initializing


Diffs
-

  core/src/main/java/org/apache/accumulo/core/conf/SiteConfiguration.java 
4c7d95e 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloCluster.java
 50bb14a 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloConfig.java
 be80f85 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java
 977968e 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloConfigImpl.java
 337eda0 
  server/base/src/main/java/org/apache/accumulo/server/util/AccumuloStatus.java 
PRE-CREATION 

Diff: https://reviews.apache.org/r/23397/diff/


Testing
---

Ran the following test code-
public class TestMACWithRealInstance {
  public static void main(String args[]) throws IOException, AccumuloException, 
AccumuloSecurityException, TableExistsException, InterruptedException {
MiniAccumuloConfig macConfig = new MiniAccumuloConfig(new File("/tmp/mac"), 
"secret");
macConfig.setNumTservers(2);
macConfig.setMemory(ServerType.TABLET_SERVER, 2, MemoryUnit.GIGABYTE);
macConfig.useExistingInstance(new 
File("/usr/lib/accumulo/conf/accumulo-site.xml"), new 
File("/usr/lib/hadoop/conf"));
MiniAccumuloCluster mac = new MiniAccumuloCluster(macConfig);
mac.start();
System.out.println("Started");
mac.getConnector("root", "secret").tableOperations().create("macCreated");
System.out.println("Stopping");
mac.stop();
System.out.println("Stopped");
  }
}
Which runs fine, except for stopping issues which seem to be related to 
ACCUMULO-2985

After running this, I validated that the table was created in the real accumulo 
instance via zkCli


File Attachments


0001-ACCUMULO-2984-support-running-MAC-against-a-real-acc.patch
  
https://reviews.apache.org/media/uploaded/files/2014/10/15/1ee9c409-65de-4a44-a86e-e905b4593a8f__0001-ACCUMULO-2984-support-running-MAC-against-a-real-acc.patch
0001-ACCUMULO-2984-support-running-MAC-against-a-real-acc.patch
  
https://reviews.apache.org/media/uploaded/files/2014/10/15/d38c6150-4320-41e1-8495-b42d050ddc93__0001-ACCUMULO-2984-support-running-MAC-against-a-real-acc.patch


Thanks,

John Vines



Re: Review Request 23397: ACCUMULO-2984 Support Running MAC against a standard instance

2014-10-15 Thread John Vines

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23397/
---

(Updated Oct. 15, 2014, 10:10 p.m.)


Review request for accumulo.


Bugs: ACCUMULO-2984
https://issues.apache.org/jira/browse/ACCUMULO-2984


Repository: accumulo


Description
---

Adds a change to SiteConfiguration to allow external setting of the xml 
configuration file.
Adds a single method to MiniAccumuloConfig which allows a user to point to 
accumulo-site.xml and HADOOP_CONF_DIR to use for pulling out requisite instance 
information
Clusters configurations into those required to run inside a MAC-sized footprint 
and those which are for arbitrary naming schemes for MAC
Provides flagging to prevent unnecessary folder creation
Provides flagging to prevent running zookeeper and initializing


Diffs
-

  core/src/main/java/org/apache/accumulo/core/conf/SiteConfiguration.java 
4c7d95e 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloCluster.java
 50bb14a 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloConfig.java
 be80f85 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java
 977968e 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloConfigImpl.java
 337eda0 
  server/base/src/main/java/org/apache/accumulo/server/util/AccumuloStatus.java 
PRE-CREATION 

Diff: https://reviews.apache.org/r/23397/diff/


Testing
---

Ran the following test code-
public class TestMACWithRealInstance {
  public static void main(String args[]) throws IOException, AccumuloException, 
AccumuloSecurityException, TableExistsException, InterruptedException {
MiniAccumuloConfig macConfig = new MiniAccumuloConfig(new File("/tmp/mac"), 
"secret");
macConfig.setNumTservers(2);
macConfig.setMemory(ServerType.TABLET_SERVER, 2, MemoryUnit.GIGABYTE);
macConfig.useExistingInstance(new 
File("/usr/lib/accumulo/conf/accumulo-site.xml"), new 
File("/usr/lib/hadoop/conf"));
MiniAccumuloCluster mac = new MiniAccumuloCluster(macConfig);
mac.start();
System.out.println("Started");
mac.getConnector("root", "secret").tableOperations().create("macCreated");
System.out.println("Stopping");
mac.stop();
System.out.println("Stopped");
  }
}
Which runs fine, except for stopping issues which seem to be related to 
ACCUMULO-2985

After running this, I validated that the table was created in the real accumulo 
instance via zkCli


File Attachments (updated)


0001-ACCUMULO-2984-support-running-MAC-against-a-real-acc.patch
  
https://reviews.apache.org/media/uploaded/files/2014/10/15/1ee9c409-65de-4a44-a86e-e905b4593a8f__0001-ACCUMULO-2984-support-running-MAC-against-a-real-acc.patch
0001-ACCUMULO-2984-support-running-MAC-against-a-real-acc.patch
  
https://reviews.apache.org/media/uploaded/files/2014/10/15/d38c6150-4320-41e1-8495-b42d050ddc93__0001-ACCUMULO-2984-support-running-MAC-against-a-real-acc.patch


Thanks,

John Vines



Re: Review Request 23397: ACCUMULO-2984 Support Running MAC against a standard instance

2014-10-15 Thread John Vines

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23397/
---

(Updated Oct. 15, 2014, 10:09 p.m.)


Review request for accumulo.


Changes
---

Updated for 1.6.2, added some additional checking


Bugs: ACCUMULO-2984
https://issues.apache.org/jira/browse/ACCUMULO-2984


Repository: accumulo


Description (updated)
---

Adds a change to SiteConfiguration to allow external setting of the xml 
configuration file.
Adds a single method to MiniAccumuloConfig which allows a user to point to 
accumulo-site.xml and HADOOP_CONF_DIR to use for pulling out requisite instance 
information
Clusters configurations into those required to run inside a MAC-sized footprint 
and those which are for arbitrary naming schemes for MAC
Provides flagging to prevent unnecessary folder creation
Provides flagging to prevent running zookeeper and initializing


Diffs
-

  core/src/main/java/org/apache/accumulo/core/conf/SiteConfiguration.java 
4c7d95e 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloCluster.java
 50bb14a 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloConfig.java
 be80f85 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java
 977968e 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloConfigImpl.java
 337eda0 
  server/base/src/main/java/org/apache/accumulo/server/util/AccumuloStatus.java 
PRE-CREATION 

Diff: https://reviews.apache.org/r/23397/diff/


Testing (updated)
---

Ran the following test code-
public class TestMACWithRealInstance {
  public static void main(String args[]) throws IOException, AccumuloException, 
AccumuloSecurityException, TableExistsException, InterruptedException {
MiniAccumuloConfig macConfig = new MiniAccumuloConfig(new File("/tmp/mac"), 
"secret");
macConfig.setNumTservers(2);
macConfig.setMemory(ServerType.TABLET_SERVER, 2, MemoryUnit.GIGABYTE);
macConfig.useExistingInstance(new 
File("/usr/lib/accumulo/conf/accumulo-site.xml"), new 
File("/usr/lib/hadoop/conf"));
MiniAccumuloCluster mac = new MiniAccumuloCluster(macConfig);
mac.start();
System.out.println("Started");
mac.getConnector("root", "secret").tableOperations().create("macCreated");
System.out.println("Stopping");
mac.stop();
System.out.println("Stopped");
  }
}
Which runs fine, except for stopping issues which seem to be related to 
ACCUMULO-2985

After running this, I validated that the table was created in the real accumulo 
instance via zkCli


File Attachments (updated)


0001-ACCUMULO-2984-support-running-MAC-against-a-real-acc.patch
  
https://reviews.apache.org/media/uploaded/files/2014/10/15/1ee9c409-65de-4a44-a86e-e905b4593a8f__0001-ACCUMULO-2984-support-running-MAC-against-a-real-acc.patch


Thanks,

John Vines



Re: Review Request 23397: ACCUMULO-2984 Support Running MAC against a standard instance

2014-10-15 Thread John Vines


> On July 15, 2014, 6:58 p.m., kturner wrote:
> > minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloConfig.java,
> >  line 259
> > <https://reviews.apache.org/r/23397/diff/4/?file=628112#file628112line259>
> >
> > Would be nice to document how this method relates to setSiteConfig() 
> > and setZookeeperPort() 
> > 
> > I assume if calling this and setZookeeperPort() that the zookeeper port 
> > set would be ignored, but I have not looked.  
> > 
> > Calling useExistingInstance and then setSiteConfig() could gum things 
> > up.

Addressed with strict checking
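
To illustrate, the kind of guard meant by "strict checking" is roughly the 
following shape (a sketch only -- the class, field, and exception messages here 
are assumptions, not the actual MiniAccumuloConfig code):

import java.util.HashMap;
import java.util.Map;

// Sketch of a mutual-exclusion guard between useExistingInstance() and setSiteConfig().
class ConfigGuardSketch {
  private boolean existingInstance = false;
  private Map<String,String> siteConfig = new HashMap<>();

  void useExistingInstance() {
    // refuse to reuse an existing instance once a site config was set explicitly
    if (!siteConfig.isEmpty())
      throw new IllegalStateException("site config was already set explicitly");
    existingInstance = true;
  }

  void setSiteConfig(Map<String,String> conf) {
    // refuse to override the site config pulled from the existing instance
    if (existingInstance)
      throw new IllegalStateException("cannot override site config when reusing an existing instance");
    siteConfig = new HashMap<>(conf);
  }
}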


> On July 15, 2014, 6:58 p.m., kturner wrote:
> > minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloConfig.java,
> >  line 267
> > <https://reviews.apache.org/r/23397/diff/4/?file=628112#file628112line267>
> >
> > A test to prevent regressions would be really nice.  Might be able to 
> > start a mini instance, stop it, use internal exec methods to start a 
> > zookeeper, and then point another mac to the old mini instance.

I'm gonna be blunt - this is really really hard and I think it can be punted to 
a ticket for future work


- John


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23397/#review47792
---


On July 10, 2014, 10:48 p.m., John Vines wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/23397/
> ---
> 
> (Updated July 10, 2014, 10:48 p.m.)
> 
> 
> Review request for accumulo.
> 
> 
> Bugs: ACCUMULO-2984
> https://issues.apache.org/jira/browse/ACCUMULO-2984
> 
> 
> Repository: accumulo
> 
> 
> Description
> ---
> 
> Adds a change to SiteConfiguration to allow external setting of the xml 
> configuration file.
> Adds a single method to MiniAccumuloConfig which allows a user to point to 
> accumulo-site.xml and HADOOP_CONF_DIR to use for pulling out requisite 
> instance information
> Clusters configurations into those required to run inside a MAC-sized 
> footprint and those which are for arbitrary naming schemes for MAC
> Provides flagging to prevent unnecessary folder creation
> Provides flagging to prevent running zookeeper and initializing
> 
> 
> Diffs
> -
> 
>   core/src/main/java/org/apache/accumulo/core/conf/SiteConfiguration.java 
> 4c7d95e 
>   
> minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloCluster.java
>  50bb14a 
>   
> minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloConfig.java
>  be80f85 
>   
> minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java
>  977968e 
>   
> minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloConfigImpl.java
>  337eda0 
>   
> server/base/src/main/java/org/apache/accumulo/server/util/AccumuloStatus.java 
> PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/23397/diff/
> 
> 
> Testing
> ---
> 
> Ran the following test code-
> public class TestMACWithRealInstance {
>   public static void main(String args[]) throws IOException, 
> AccumuloException, AccumuloSecurityException, TableExistsException, 
> InterruptedException {
> MiniAccumuloConfig macConfig = new MiniAccumuloConfig(new 
> File("/tmp/mac"), "secret");
> macConfig.setNumTservers(2);
> macConfig.setMemory(ServerType.TABLET_SERVER, 2, MemoryUnit.GIGABYTE);
> macConfig.useExistingInstance(new 
> File("/usr/lib/accumulo/conf/accumulo-site.xml"), new 
> File("/usr/lib/hadoop/conf"));
> MiniAccumuloCluster mac = new MiniAccumuloCluster(macConfig);
> mac.start();
> System.out.println("Started");
> mac.getConnector("root", "secret").tableOperations().create("macCreated");
> System.out.println("Stopping");
> mac.stop();
> System.out.println("Stopped");
>   }
> }
> Which runs fine, except for stopping issues which seem to be related to 
> ACCUMULO-2985
> 
> After running this, I validated that the table was created in the real 
> accumulo instance via zkCli
> 
> 
> Thanks,
> 
> John Vines
> 
>



Re: Review Request 23397: ACCUMULO-2984 Support Running MAC against a standard instance

2014-10-15 Thread John Vines


> On July 11, 2014, 5:38 p.m., Josh Elser wrote:
> > minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java,
> >  line 389
> > <https://reviews.apache.org/r/23397/diff/4/?file=628113#file628113line389>
> >
> > Would be good to add a note here about changing this to fs.defaultFS 
> > once we fully move to hadoop-2 only.
> > 
> > (no need to roll a new patch, IMO)

I replaced it with CommonConfigurationKeys.FS_DEFAULT_NAME_KEY
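
For illustration, resolving the default filesystem through that constant from a 
HADOOP_CONF_DIR looks roughly like this (a sketch, not the patch itself; the 
conf dir path is just an example):

import java.io.File;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeys;
import org.apache.hadoop.fs.Path;

public class DefaultFsLookup {
  public static void main(String[] args) {
    // load only the named core-site.xml, skipping the classpath defaults
    Configuration conf = new Configuration(false);
    conf.addResource(new Path(new File("/usr/lib/hadoop/conf", "core-site.xml").toURI()));
    // FS_DEFAULT_NAME_KEY resolves to the default filesystem key ("fs.defaultFS" on Hadoop 2)
    System.out.println(conf.get(CommonConfigurationKeys.FS_DEFAULT_NAME_KEY));
  }
}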


- John


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23397/#review47661
-------


On July 10, 2014, 10:48 p.m., John Vines wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/23397/
> ---
> 
> (Updated July 10, 2014, 10:48 p.m.)
> 
> 
> Review request for accumulo.
> 
> 
> Bugs: ACCUMULO-2984
> https://issues.apache.org/jira/browse/ACCUMULO-2984
> 
> 
> Repository: accumulo
> 
> 
> Description
> ---
> 
> Adds a change to SiteConfiguration to allow external setting of the xml 
> configuration file.
> Adds a single method to MiniAccumuloConfig which allows a user to point to 
> accumulo-site.xml and HADOOP_CONF_DIR to use for pulling out requisite 
> instance information
> Clusters configurations into those required to run inside a MAC-sized 
> footprint and those which are for arbitrary naming schemes for MAC
> Provides flagging to prevent unnecessary folder creation
> Provides flagging to prevent running zookeeper and initializing
> 
> 
> Diffs
> -
> 
>   core/src/main/java/org/apache/accumulo/core/conf/SiteConfiguration.java 
> 4c7d95e 
>   
> minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloCluster.java
>  50bb14a 
>   
> minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloConfig.java
>  be80f85 
>   
> minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java
>  977968e 
>   
> minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloConfigImpl.java
>  337eda0 
>   
> server/base/src/main/java/org/apache/accumulo/server/util/AccumuloStatus.java 
> PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/23397/diff/
> 
> 
> Testing
> ---
> 
> Ran the following test code-
> public class TestMACWithRealInstance {
>   public static void main(String args[]) throws IOException, 
> AccumuloException, AccumuloSecurityException, TableExistsException, 
> InterruptedException {
> MiniAccumuloConfig macConfig = new MiniAccumuloConfig(new 
> File("/tmp/mac"), "secret");
> macConfig.setNumTservers(2);
> macConfig.setMemory(ServerType.TABLET_SERVER, 2, MemoryUnit.GIGABYTE);
> macConfig.useExistingInstance(new 
> File("/usr/lib/accumulo/conf/accumulo-site.xml"), new 
> File("/usr/lib/hadoop/conf"));
> MiniAccumuloCluster mac = new MiniAccumuloCluster(macConfig);
> mac.start();
> System.out.println("Started");
> mac.getConnector("root", "secret").tableOperations().create("macCreated");
> System.out.println("Stopping");
> mac.stop();
> System.out.println("Stopped");
>   }
> }
> Which runs fine, except for stopping issues which seem to be related to 
> ACCUMULO-2985
> 
> After running this, I validated that the table was created in the real 
> accumulo instance via zkCli
> 
> 
> Thanks,
> 
> John Vines
> 
>



1.7 release timeline

2014-10-06 Thread John Vines
Moving this to its own thread...

On Mon, Oct 6, 2014 at 5:54 PM, Mike Drob  wrote:

> Related: Do we have a release timeline for 1.7?
>
> On Mon, Oct 6, 2014 at 4:51 PM, Christopher  wrote:
>
> > On Mon, Oct 6, 2014 at 5:20 PM, Sean Busbey  wrote:
> >
> > > On Mon, Oct 6, 2014 at 4:12 PM, Mike Drob  wrote:
> > >
> > > >
> > > >
> > > > In general, I'm inclined to leave as much in as possible, and then if
> > we
> > > > must remove things then do so in 2.0.0. I know that our compatibility
> > > > statement only promises one minor version, but that doesn't mean we
> > have
> > > to
> > > > be strict at every opportunity.
> > > >
> > > > Mike
> > > >
> > > >
> > >
> > > Related, I'd like to EOL 1.5 shortly after 1.7 gets released. I don't
> > want
> > > to derail this thread with that discussion, but my guess is it's a much
> > > easier sell if we're conservative about removing things. Just so
> everyone
> > > knows where I'm coming from.
> > >
> > >
> > >
> > (+1 for EOL 1.5 after)
> >
> > In general, does this mean that you're okay with removing stuff
> deprecated
> > prior to 1.5? With the exception of the instance.getConfiguration stuff,
> > which was deprecated in 1.6.0 and I'd like to remove in 1.7.0, due to its
> > problematic nature (requires further discussion), I could restrict the
> > remaining cleanup to only stuff deprecated prior to 1.5.
> >
>


Re: Accumulo-2841

2014-09-03 Thread John Vines
Go for it!

Sent from my phone, please pardon the typos and brevity.
On Sep 3, 2014 4:13 PM, "Jenna Huston"  wrote:

> John Vines, would it be ok if I worked on and submitted a patch to
> Accumulo ticket 2841?
>
> Thanks!
> Jenna
>


Re: [DISCUSS] Removing relative paths from metadata

2014-07-23 Thread John Vines
That is my issue with Keith. Requiring a double dot upgrade to do a single
dot upgrade is a pretty big break in our operating procedure up until this
point.

How do we ensure people don't go to 1.7.0 straight from 1.6.1 and earlier
versions?


On Wed, Jul 23, 2014 at 10:03 AM, Keith Turner  wrote:

> On Wed, Jul 23, 2014 at 3:33 AM, Mike Drob  wrote:
>
> > I do not want to see anything get re-written between a 1.6.1 system going
> > down and a 1.6.2 system coming up. We have a wire compatibility promise
> > amongst the double-dot releases, and parts moving around really make me
> > nervous. I think it's just too big of a change.
> >
>
> One thing that's really screwy about the proposal to do this in 1.6.2 (and
> completely drop support for relative paths in 1.7.0), is that you have to
> run 1.6.2 before you can upgrade to 1.7.0. This is something
> Christopher pointed out in an offline discussion yesterday.   Is this the
> concern you had?  This may be the biggest reason not to do it.  I think in
> practice most production users will end up on later bug fix versions of
> 1.6.0 anyway.   No one runs 1.4.0 or 1.4.1 anymore.  But not sure if we can
> count on that.  If 1.6.1 is stable and works for a user, they may just
> stick with it.
>
>
> >
> > I have no problem with rewriting anything in the internals between 1.6.x
> > and 1.7.0 (or 2.0.0). Based on experience, it will be a lot harder to
> > implement as a stand-alone utility, but I do not have strong preferences
> on
> > stand-alone or part of the upgrade process.
> >
> >
> > On Tue, Jul 22, 2014 at 8:37 PM, Josh Elser 
> wrote:
> >
> > > On 7/22/14, 12:51 PM, Keith Turner wrote:
> > >
> > >> Had some discussion w/ Dave Marion about the need to drop relative
> > paths
> > >> from internal metadata.  From a user standpoint the requirement to
> > >> possibly
> > >> configure instance.dfs.uri and instance.dfs.dir if they might have
> > >> relative
> > >> paths is confusing over the long term.  Also it places more of a
> > >> maintenance burden on us if we need to ensure all bug fixes and new
> > >> features work properly w/ relative paths.
> > >>
> > >
> > > Assuming that we squash relative paths by 1.7.0, we shouldn't have any
> > > additional burden on new feature work because there should be no new
> > > features in 1.6. Bug fixes are still potentially more complex.
> > >
> > > I think everyone would agree that 1.6.0 should've nuked relative paths
> > > (I'm sorry if I squash anyone's opinions, but that was the impression I
> > got
> > > before 1.6.0 came out). I think trying to eradicate them in 1.6 would
> > just
> > > add even more confusion to an already sufficiently confusing situation.
> > If
> > > a sufficiently simple approach came be thought of for a 1.6.x, I would
> be
> > > open to hear it.
> > >
> > >
> > >  What are our options and what should the timeline be?  We could
> require
> > >> the
> > >> user to do something to remove all relative paths before before
> starting
> > >> 1.7.0 for example.
> > >>
> > >> Some of the things we discussed
> > >>
> > >>   * Provide a utility to rewrite all relative paths
> > >>   * Rework the volume replacement code to work w/ relative paths
> > >>
> > >> A stand alone utility is tricky.  Don't want to modify tablet metadata
> > if
> > >> the table is loaded.  That's why the volume replacement code has the
> > >> tablets
> > >> themselves do the replacement.
> > >>
> > >
> > > I think I like the idea of writing a standalone utility as, while the
> > > "safe" conditions to run such a utility are harder, getting the rewrite
> > > correct is much easier. Didn't Sean already write some sort of check
> for
> > an
> > > "is Accumulo off" environment?
> > >
> > >
> > >  I like the idea of reworking the volume replacement code, but I do not
> > >> like
> > >> the idea of it happening automatically (like the first time 1.6.2 is
> > >> started).   Could possibly have a boolean config
> > >> instance.volume.replaceRelative.  When this is set, as tablets are
> > loaded
> > >> and when the GC starts relative paths would be replaced using current
> > >> instance.dfs.* config or hdfs config.
> > >>
> > >> Still uncertain about the best solution.  Looking for the course of
> > least
> > >> user confusion and least maintenance.  I think
> > >> instance.volume.replaceRelative is a bit confusing from a user
> > >> perspective.
> > >>
> > >> What other options are there to solve this problem?  Any issue w/ the
> > >> premise?
> > >>
> > >>
> >
>


Re: Review Request 23397: ACCUMULO-2984 Support Running MAC against a standard instance

2014-07-10 Thread John Vines

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23397/
---

(Updated July 10, 2014, 10:48 p.m.)


Review request for accumulo.


Changes
---

Revised file with an attempt to check whether Accumulo is already running, as 
well as fixes to javadoc whitespace in the altered files (so others don't make my 
copy/paste mistake)


Bugs: ACCUMULO-2984
https://issues.apache.org/jira/browse/ACCUMULO-2984


Repository: accumulo


Description
---

Adds a change to SiteConfiguration to allow external setting of the xml 
configuration file.
Adds a single method to MiniAccumuloConfig which allows a user to point to 
accumulo-site.xml and HADOOP_CONF_DIR to use for pulling out requisite instance 
information
Clusters configurations into those required to run inside a MAC-sized footprint 
and those which are for arbitrary naming schemes for MAC
Provides flagging to prevent unnecessary folder creation
Provides flagging to prevent running zookeeper and initializing


Diffs (updated)
-

  core/src/main/java/org/apache/accumulo/core/conf/SiteConfiguration.java 
4c7d95e 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloCluster.java
 50bb14a 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloConfig.java
 be80f85 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java
 977968e 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloConfigImpl.java
 337eda0 
  server/base/src/main/java/org/apache/accumulo/server/util/AccumuloStatus.java 
PRE-CREATION 

Diff: https://reviews.apache.org/r/23397/diff/


Testing
---

Ran the following test code-
public class TestMACWithRealInstance {
  public static void main(String args[]) throws IOException, AccumuloException, 
AccumuloSecurityException, TableExistsException, InterruptedException {
MiniAccumuloConfig macConfig = new MiniAccumuloConfig(new File("/tmp/mac"), 
"secret");
macConfig.setNumTservers(2);
macConfig.setMemory(ServerType.TABLET_SERVER, 2, MemoryUnit.GIGABYTE);
macConfig.useExistingInstance(new 
File("/usr/lib/accumulo/conf/accumulo-site.xml"), new 
File("/usr/lib/hadoop/conf"));
MiniAccumuloCluster mac = new MiniAccumuloCluster(macConfig);
mac.start();
System.out.println("Started");
mac.getConnector("root", "secret").tableOperations().create("macCreated");
System.out.println("Stopping");
mac.stop();
System.out.println("Stopped");
  }
}
Which runs fine, except for stopping issues which seem to be related to 
ACCUMULO-2985

After running this, I validated that the table was created in the real accumulo 
instance via zkCli


Thanks,

John Vines



Re: Review Request 23397: ACCUMULO-2984 Support Running MAC against a standard instance

2014-07-10 Thread John Vines


> On July 10, 2014, 9:13 p.m., Sean Busbey wrote:
> > Overall looks good. Still concerned that it has no check to ensure the 
> > existing instance is down prior to starting up the MAC.

Oh right, spaced on that comment. Let me see what I can plumb in.
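
Roughly, one way to plumb that in is to look for live tablet server lock nodes 
in ZooKeeper, something like the sketch below (illustrative only -- the connect 
string is an example, and the eventual AccumuloStatus check may differ):

import org.apache.zookeeper.ZooKeeper;

public class TserverLockCheck {
  public static void main(String[] args) throws Exception {
    String instanceId = args[0]; // instance id as listed under /accumulo in ZooKeeper
    ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> {});
    String tservers = "/accumulo/" + instanceId + "/tservers";
    boolean running = false;
    for (String server : zk.getChildren(tservers, false)) {
      // a live tablet server holds an ephemeral lock node under its host:port entry
      if (!zk.getChildren(tservers + "/" + server, false).isEmpty()) {
        running = true;
      }
    }
    System.out.println(running ? "tablet servers appear to be up" : "no tserver locks found");
    zk.close();
  }
}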


- John


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23397/#review47622
---


On July 10, 2014, 6:43 p.m., John Vines wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/23397/
> ---
> 
> (Updated July 10, 2014, 6:43 p.m.)
> 
> 
> Review request for accumulo.
> 
> 
> Bugs: ACCUMULO-2984
> https://issues.apache.org/jira/browse/ACCUMULO-2984
> 
> 
> Repository: accumulo
> 
> 
> Description
> ---
> 
> Adds a change to SiteConfiguration to allow external setting of the xml 
> configuration file.
> Adds a single method to MiniAccumuloConfig which allows a user to point to 
> accumulo-site.xml and HADOOP_CONF_DIR to use for pulling out requisite 
> instance information
> Clusters configurations into those required to run inside a MAC-sized 
> footprint and those which are for arbitrary naming schemes for MAC
> Provides flagging to prevent unnecessary folder creation
> Provides flagging to prevent running zookeeper and initializing
> 
> 
> Diffs
> -
> 
>   core/src/main/java/org/apache/accumulo/core/conf/SiteConfiguration.java 
> 4c7d95e 
>   
> minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloConfig.java
>  be80f85 
>   
> minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java
>  977968e 
>   
> minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloConfigImpl.java
>  337eda0 
> 
> Diff: https://reviews.apache.org/r/23397/diff/
> 
> 
> Testing
> ---
> 
> Ran the following test code-
> public class TestMACWithRealInstance {
>   public static void main(String args[]) throws IOException, 
> AccumuloException, AccumuloSecurityException, TableExistsException, 
> InterruptedException {
> MiniAccumuloConfig macConfig = new MiniAccumuloConfig(new 
> File("/tmp/mac"), "secret");
> macConfig.setNumTservers(2);
> macConfig.setMemory(ServerType.TABLET_SERVER, 2, MemoryUnit.GIGABYTE);
> macConfig.useExistingInstance(new 
> File("/usr/lib/accumulo/conf/accumulo-site.xml"), new 
> File("/usr/lib/hadoop/conf"));
> MiniAccumuloCluster mac = new MiniAccumuloCluster(macConfig);
> mac.start();
> System.out.println("Started");
> mac.getConnector("root", "secret").tableOperations().create("macCreated");
> System.out.println("Stopping");
> mac.stop();
> System.out.println("Stopped");
>   }
> }
> Which runs fine, except for stopping issues which seem to be related to 
> ACCUMULO-2985
> 
> After running this, I validated that the table was created in the real 
> accumulo instance via zkCli
> 
> 
> Thanks,
> 
> John Vines
> 
>



MiniAccumuloConfig setProperty

2014-07-10 Thread John Vines
In using MAC on 1.6.1-SNAPSHOT, I noticed that the config impl
(MiniAccumuloConfigImpl) has a setProperty that is not exposed in the general
MiniAccumuloConfig. Is this intentional or an oversight?


Re: Review Request 23391: ACCUMULO-2986 ease use of rat plugin.

2014-07-10 Thread John Vines


> On July 10, 2014, 8:42 p.m., Christopher Tubbs wrote:
> > -1
> > There's absolutely no reason to create a profile and options to skip the 
> > execution of the plugin since there is a built-in mechanism for controlling 
> > the rat check's impact on the build (eg. by setting rat.ignoreErrors). This 
> > change unnecessarily complicates the pom files, with insufficient utility 
> > to justify it, because the rat check is relatively fast, and there's 
> > already a mechanism to ignore its results. Even if we were able to come to 
> > consensus on defaulting rat.ignoreErrors to true, I'd still be opposed to 
> > the added profiles and property to control that profile, due to this added 
> > complexity with almost no utility.
> 
> Sean Busbey wrote:
> Even with ignore errors on, the plugin itself running introduces overhead 
> for those attempting to iterate tightly on e.g. a particular IT or unit test.
> 
> "Relatively fast" is subjective. I know that both myself and Keith have 
> complained about the overhead in the past.

Review Board is not a place for discussions about whether or not work should be 
done. This is a place for analyzing patches, plain and simple. Please take this 
conversation to JIRA.


- John


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23391/#review47614
---


On July 10, 2014, 8:34 a.m., Sean Busbey wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/23391/
> ---
> 
> (Updated July 10, 2014, 8:34 a.m.)
> 
> 
> Review request for accumulo.
> 
> 
> Bugs: ACCUMULO-2986
> https://issues.apache.org/jira/browse/ACCUMULO-2986
> 
> 
> Repository: accumulo
> 
> 
> Description
> ---
> 
> * puts rat plugin into profile
> * activates profile by default
> * ignores rat errors by default
> 
> 
> Diffs
> -
> 
>   pom.xml 2bc87cf082dfeb7bfe1a3fac1fe4fba1eaa87edd 
> 
> Diff: https://reviews.apache.org/r/23391/diff/
> 
> 
> Testing
> ---
> 
> verified rat warnings and failures given move from master -> 1.6.1-SNAPSHOT 
> branch with no definitions, -Drat.skip=true, -Drat.skip=false, 
> -Drat.ignoreErrors=true, and profile to check errors in ~/.m2/settings.xml
> 
> 
> Thanks,
> 
> Sean Busbey
> 
>



Re: Review Request 23397: ACCUMULO-2984 Support Running MAC against a standard instance

2014-07-10 Thread John Vines


> On July 10, 2014, 5:50 p.m., Sean Busbey wrote:
> > minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloConfig.java,
> >  line 259
> > <https://reviews.apache.org/r/23397/diff/1/?file=627812#file627812line259>
> >
> > nit: whitespace
> 
> John Vines wrote:
> Are these nits for the patch or nits about whitespace changes? Because 
> for whatever reason this file is not properly formatted and if we're 
> stripping all whitespace changes from patches, I don't know when we expect 
> these to be addressed. If we're going to nit everyone over whitespace issues 
> outside the scope of the patch, we're going to discourage people from auto 
> formatting files they're editing, which could result in even worse formatting 
> nits being introduced in their code.
> 
> Josh Elser wrote:
> The eclipse formatter, at least how I have it set up with the 
> configuration we have set, will be adding these extra spaces.
> 
> Sean Busbey wrote:
> The nits are about the whitespace being left in the changes the patch 
> introduces. Generally I only push back on formatting errors related to the 
> scope of the patch.
> 
> Yes, there is a bug in the eclipse formatter that causes it to 
> incorrectly miss the end of line whitespace in javadoc sections it populates.

Ahh! That's been fixed, but I copied other javadoc comments to use as a 
template so it didn't get addressed. Should probably see if there's any work 
about adding that to the formatter.


- John


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23397/#review47585
---


On July 10, 2014, 6:43 p.m., John Vines wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/23397/
> ---
> 
> (Updated July 10, 2014, 6:43 p.m.)
> 
> 
> Review request for accumulo.
> 
> 
> Bugs: ACCUMULO-2984
> https://issues.apache.org/jira/browse/ACCUMULO-2984
> 
> 
> Repository: accumulo
> 
> 
> Description
> ---
> 
> Adds a change to SiteConfiguration to allow external setting of the xml 
> configuration file.
> Adds a single method to MiniAccumuloConfig which allows a user to point to 
> accumulo-site.xml and HADOOP_CONF_DIR to use for pulling out requisite 
> instance information
> Clusters configurations into those required to run inside a MAC-sized 
> footprint and those which are for arbitrary naming schemes for MAC
> Provides flagging to prevent unnecessary folder creation
> Provides flagging to prevent running zookeeper and initializing
> 
> 
> Diffs
> -
> 
>   core/src/main/java/org/apache/accumulo/core/conf/SiteConfiguration.java 
> 4c7d95e 
>   
> minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloConfig.java
>  be80f85 
>   
> minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java
>  977968e 
>   
> minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloConfigImpl.java
>  337eda0 
> 
> Diff: https://reviews.apache.org/r/23397/diff/
> 
> 
> Testing
> ---
> 
> Ran the following test code:
> 
> public class TestMACWithRealInstance {
>   public static void main(String[] args) throws IOException, AccumuloException,
>       AccumuloSecurityException, TableExistsException, InterruptedException {
>     MiniAccumuloConfig macConfig = new MiniAccumuloConfig(new File("/tmp/mac"), "secret");
>     macConfig.setNumTservers(2);
>     macConfig.setMemory(ServerType.TABLET_SERVER, 2, MemoryUnit.GIGABYTE);
>     macConfig.useExistingInstance(new File("/usr/lib/accumulo/conf/accumulo-site.xml"),
>         new File("/usr/lib/hadoop/conf"));
>     MiniAccumuloCluster mac = new MiniAccumuloCluster(macConfig);
>     mac.start();
>     System.out.println("Started");
>     mac.getConnector("root", "secret").tableOperations().create("macCreated");
>     System.out.println("Stopping");
>     mac.stop();
>     System.out.println("Stopped");
>   }
> }
> 
> This runs fine, except for stopping issues which seem to be related to 
> ACCUMULO-2985.
> 
> After running this, I validated that the table was created in the real 
> Accumulo instance via zkCli.
> 
> 
> Thanks,
> 
> John Vines
> 
>



Re: Review Request 23397: ACCUMULO-2984 Support Running MAC against a standard instance

2014-07-10 Thread John Vines

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23397/
---

(Updated July 10, 2014, 6:43 p.m.)


Review request for accumulo.


Changes
---

Apparently the diff that was generated ignoring whitespace was busted.


Bugs: ACCUMULO-2984
https://issues.apache.org/jira/browse/ACCUMULO-2984


Repository: accumulo


Description
---

Adds a change to SiteConfiguration to allow external setting of the xml 
configuration file.
Adds a single method to MiniAccumuloConfig which allows a user to point to 
accumulo-site.xml and HADOOP_CONF_DIR to use for pulling out requisite instance 
information
Clusters configurations into those required to run inside a MAC-sized footprint 
and those which are for arbitrary naming schemes for MAC
Provides flagging to prevent unnecessary folder creation
Provides flagging to prevent running ZooKeeper and initializing


Diffs (updated)
-

  core/src/main/java/org/apache/accumulo/core/conf/SiteConfiguration.java 
4c7d95e 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloConfig.java
 be80f85 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java
 977968e 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloConfigImpl.java
 337eda0 

Diff: https://reviews.apache.org/r/23397/diff/


Testing
---

Ran the following test code:

public class TestMACWithRealInstance {
  public static void main(String[] args) throws IOException, AccumuloException,
      AccumuloSecurityException, TableExistsException, InterruptedException {
    MiniAccumuloConfig macConfig = new MiniAccumuloConfig(new File("/tmp/mac"), "secret");
    macConfig.setNumTservers(2);
    macConfig.setMemory(ServerType.TABLET_SERVER, 2, MemoryUnit.GIGABYTE);
    macConfig.useExistingInstance(new File("/usr/lib/accumulo/conf/accumulo-site.xml"),
        new File("/usr/lib/hadoop/conf"));
    MiniAccumuloCluster mac = new MiniAccumuloCluster(macConfig);
    mac.start();
    System.out.println("Started");
    mac.getConnector("root", "secret").tableOperations().create("macCreated");
    System.out.println("Stopping");
    mac.stop();
    System.out.println("Stopped");
  }
}

This runs fine, except for stopping issues which seem to be related to 
ACCUMULO-2985.

After running this, I validated that the table was created in the real Accumulo 
instance via zkCli.


Thanks,

John Vines



Re: Review Request 23397: ACCUMULO-2984 Support Running MAC against a standard instance

2014-07-10 Thread John Vines

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23397/
---

(Updated July 10, 2014, 6:43 p.m.)


Review request for accumulo.


Changes
---

Apparently the diff that was generated ignoring whitespace was busted.


Bugs: ACCUMULO-2984
https://issues.apache.org/jira/browse/ACCUMULO-2984


Repository: accumulo


Description
---

Adds a change to SiteConfiguration to allow external setting of the xml 
configuration file.
Adds a single method to MiniAccumuloConfig which allows a user to point to 
accumulo-site.xml and HADOOP_CONF_DIR to use for pulling out requisite instance 
information
Clusters configurations into those required to run inside a MAC-sized footprint 
and those which are for arbitrary naming schemes for MAC
Provides flagging to prevent unnecessary folder creation
Provides flagging to prevent running ZooKeeper and initializing


Diffs
-

  core/src/main/java/org/apache/accumulo/core/conf/SiteConfiguration.java 
4c7d95e 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloConfig.java
 be80f85 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java
 977968e 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloConfigImpl.java
 337eda0 

Diff: https://reviews.apache.org/r/23397/diff/


Testing
---

Ran the following test code:

public class TestMACWithRealInstance {
  public static void main(String[] args) throws IOException, AccumuloException,
      AccumuloSecurityException, TableExistsException, InterruptedException {
    MiniAccumuloConfig macConfig = new MiniAccumuloConfig(new File("/tmp/mac"), "secret");
    macConfig.setNumTservers(2);
    macConfig.setMemory(ServerType.TABLET_SERVER, 2, MemoryUnit.GIGABYTE);
    macConfig.useExistingInstance(new File("/usr/lib/accumulo/conf/accumulo-site.xml"),
        new File("/usr/lib/hadoop/conf"));
    MiniAccumuloCluster mac = new MiniAccumuloCluster(macConfig);
    mac.start();
    System.out.println("Started");
    mac.getConnector("root", "secret").tableOperations().create("macCreated");
    System.out.println("Stopping");
    mac.stop();
    System.out.println("Stopped");
  }
}

This runs fine, except for stopping issues which seem to be related to 
ACCUMULO-2985.

After running this, I validated that the table was created in the real Accumulo 
instance via zkCli.


Thanks,

John Vines



Re: Review Request 23397: ACCUMULO-2984 Support Running MAC against a standard instance

2014-07-10 Thread John Vines

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23397/
---

(Updated July 10, 2014, 6:16 p.m.)


Review request for accumulo.


Changes
---

Addressing javadoc concerns and the ACCUMULO-2944 merge issues; the diff was 
generated ignoring whitespace.


Bugs: ACCUMULO-2984
https://issues.apache.org/jira/browse/ACCUMULO-2984


Repository: accumulo


Description
---

Adds a change to SiteConfiguration to allow external setting of the xml 
configuration file.
Adds a single method to MiniAccumuloConfig which allows a user to point to 
accumulo-site.xml and HADOOP_CONF_DIR to use for pulling out requisite instance 
information
Clusters configurations into those required to run inside a MAC-sized footprint 
and those which are for arbitrary naming schemes for MAC
Provides flagging to prevent unnecessary folder creation
Provides flagging to prevent running ZooKeeper and initializing


Diffs (updated)
-

  core/src/main/java/org/apache/accumulo/core/conf/SiteConfiguration.java 
4c7d95e 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloConfig.java
 be80f85 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java
 977968e 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloConfigImpl.java
 337eda0 

Diff: https://reviews.apache.org/r/23397/diff/


Testing
---

Ran the following test code:

public class TestMACWithRealInstance {
  public static void main(String[] args) throws IOException, AccumuloException,
      AccumuloSecurityException, TableExistsException, InterruptedException {
    MiniAccumuloConfig macConfig = new MiniAccumuloConfig(new File("/tmp/mac"), "secret");
    macConfig.setNumTservers(2);
    macConfig.setMemory(ServerType.TABLET_SERVER, 2, MemoryUnit.GIGABYTE);
    macConfig.useExistingInstance(new File("/usr/lib/accumulo/conf/accumulo-site.xml"),
        new File("/usr/lib/hadoop/conf"));
    MiniAccumuloCluster mac = new MiniAccumuloCluster(macConfig);
    mac.start();
    System.out.println("Started");
    mac.getConnector("root", "secret").tableOperations().create("macCreated");
    System.out.println("Stopping");
    mac.stop();
    System.out.println("Stopped");
  }
}

This runs fine, except for stopping issues which seem to be related to 
ACCUMULO-2985.

After running this, I validated that the table was created in the real Accumulo 
instance via zkCli.


Thanks,

John Vines



Re: Review Request 23397: ACCUMULO-2984 Support Running MAC against a standard instance

2014-07-10 Thread John Vines


> On July 10, 2014, 5:50 p.m., Sean Busbey wrote:
> > minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloConfig.java,
> >  line 259
> > <https://reviews.apache.org/r/23397/diff/1/?file=627812#file627812line259>
> >
> > nit: whitespace

Are these nits about the patch itself or about the whitespace changes? For 
whatever reason this file is not properly formatted, and if we're stripping all 
whitespace changes from patches, I don't know when we expect that to be 
addressed. If we're going to nitpick everyone over whitespace issues outside the 
scope of a patch, we're going to discourage people from auto-formatting the 
files they're editing, which could result in even worse formatting problems 
being introduced in their code.


> On July 10, 2014, 5:50 p.m., Sean Busbey wrote:
> > minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java,
> >  lines 267-268
> > <https://reviews.apache.org/r/23397/diff/1/?file=627813#file627813line267>
> >
> > nit: can we skip this change in line wrapping?
> > 
> > I think this change set is already likely to conflict with 
> > ACCUMULO-2944 and I'd like to minimize it.

Addressing the two ACCUMULO-2944 conflict items in r2 of the diff.


- John


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23397/#review47585
---


On July 10, 2014, 5:24 p.m., John Vines wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/23397/
> ---
> 
> (Updated July 10, 2014, 5:24 p.m.)
> 
> 
> Review request for accumulo.
> 
> 
> Bugs: ACCUMULO-2984
> https://issues.apache.org/jira/browse/ACCUMULO-2984
> 
> 
> Repository: accumulo
> 
> 
> Description
> ---
> 
> Adds a change to SiteConfiguration to allow external setting of the xml 
> configuration file.
> Adds a single method to MiniAccumuloConfig which allows a user to point to 
> accumulo-site.xml and HADOOP_CONF_DIR to use for pulling out requisite 
> instance information
> Clusters configurations into those required to run inside a MAC-sized 
> footprint and those which are for arbitrary naming schemes for MAC
> Provides flagging to prevent unnecessary folder creation
> Provides flagging to prevent running ZooKeeper and initializing
> 
> 
> Diffs
> -
> 
>   core/src/main/java/org/apache/accumulo/core/conf/SiteConfiguration.java 
> 4c7d95e 
>   
> minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloConfig.java
>  be80f85 
>   
> minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java
>  977968e 
>   
> minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloConfigImpl.java
>  337eda0 
> 
> Diff: https://reviews.apache.org/r/23397/diff/
> 
> 
> Testing
> ---
> 
> Ran the following test code:
> 
> public class TestMACWithRealInstance {
>   public static void main(String[] args) throws IOException, AccumuloException,
>       AccumuloSecurityException, TableExistsException, InterruptedException {
>     MiniAccumuloConfig macConfig = new MiniAccumuloConfig(new File("/tmp/mac"), "secret");
>     macConfig.setNumTservers(2);
>     macConfig.setMemory(ServerType.TABLET_SERVER, 2, MemoryUnit.GIGABYTE);
>     macConfig.useExistingInstance(new File("/usr/lib/accumulo/conf/accumulo-site.xml"),
>         new File("/usr/lib/hadoop/conf"));
>     MiniAccumuloCluster mac = new MiniAccumuloCluster(macConfig);
>     mac.start();
>     System.out.println("Started");
>     mac.getConnector("root", "secret").tableOperations().create("macCreated");
>     System.out.println("Stopping");
>     mac.stop();
>     System.out.println("Stopped");
>   }
> }
> 
> This runs fine, except for stopping issues which seem to be related to 
> ACCUMULO-2985.
> 
> After running this, I validated that the table was created in the real 
> Accumulo instance via zkCli.
> 
> 
> Thanks,
> 
> John Vines
> 
>



Review Request 23397: ACCUMULO-2984 Support Running MAC against a standard instance

2014-07-10 Thread John Vines

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23397/
---

Review request for accumulo.


Bugs: ACCUMULO-2984
https://issues.apache.org/jira/browse/ACCUMULO-2984


Repository: accumulo


Description
---

Adds a change to SiteConfiguration to allow external setting of the xml 
configuration file.
Adds a single method to MiniAccumuloConfig which allows a user to point to 
accumulo-site.xml and HADOOP_CONF_DIR to use for pulling out requisite instance 
information
Clusters configurations into those required to run inside a MAC-sized footprint 
and those which are for arbitrary naming schemes for MAC
Provides flagging to prevent unnecessary folder creation
Provides flagging to prevent running ZooKeeper and initializing


Diffs
-

  core/src/main/java/org/apache/accumulo/core/conf/SiteConfiguration.java 
4c7d95e 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloConfig.java
 be80f85 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java
 977968e 
  
minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloConfigImpl.java
 337eda0 

Diff: https://reviews.apache.org/r/23397/diff/


Testing
---

Ran the following test code:

public class TestMACWithRealInstance {
  public static void main(String[] args) throws IOException, AccumuloException,
      AccumuloSecurityException, TableExistsException, InterruptedException {
    MiniAccumuloConfig macConfig = new MiniAccumuloConfig(new File("/tmp/mac"), "secret");
    macConfig.setNumTservers(2);
    macConfig.setMemory(ServerType.TABLET_SERVER, 2, MemoryUnit.GIGABYTE);
    macConfig.useExistingInstance(new File("/usr/lib/accumulo/conf/accumulo-site.xml"),
        new File("/usr/lib/hadoop/conf"));
    MiniAccumuloCluster mac = new MiniAccumuloCluster(macConfig);
    mac.start();
    System.out.println("Started");
    mac.getConnector("root", "secret").tableOperations().create("macCreated");
    System.out.println("Stopping");
    mac.stop();
    System.out.println("Stopped");
  }
}

This runs fine, except for stopping issues which seem to be related to 
ACCUMULO-2985.

After running this, I validated that the table was created in the real Accumulo 
instance via zkCli.


Thanks,

John Vines



Re: Make accumulo shell easy to use.

2014-07-02 Thread John Vines
Unfortunately, to run the shell it does seem like accumulo-env.sh needs to
be readable, even in the case where you have those variables set (and are
using the shell's -z flag). There seems to be a workaround of setting the
ACCUMULO_TEST env var, but that seems really hacky and ridiculous.


On Wed, Jun 11, 2014 at 1:46 PM, Mike Drob  wrote:

> Yes, you will need to have HADOOP_PREFIX and ZOOKEEPER_HOME set, although
> not necessarily in bootstrap_config.sh -- I think you can just set them in
> your shell (like in .bashrc) and expect things to work.
>
>
> On Wed, Jun 11, 2014 at 1:44 PM, Vicky Kak  wrote:
>
> > To get it fixed I have to run $ACCUMULO_HOME/bin/bootstrap_config.sh
> >
> > and then set these env variables:
> >
> > 1) HADOOP_PREFIX
> > 2) ZOOKEEPER_HOME
> >
> > So every time I build Accumulo from source I have to set ACCUMULO_HOME,
> > then run bootstrap_config.sh, and finally set the env variables for
> > HADOOP_PREFIX and ZOOKEEPER_HOME.
> >
> > Is this what other folks are doing?
> >
> > Thanks,
> > Vicky
> >
> >
> >
> >
> >
> >
> > On Wed, Jun 11, 2014 at 10:53 PM, Vicky Kak  wrote:
> >
> > > Hi Guys,
> > >
> > > Every time I build Accumulo, the distribution gets created, and when I
> > > run the accumulo shell I get a message on the console like this:
> > >
> > >
> > >
> >
> ***
> > > Accumulo is not properly configured.
> > >
> > > Try running $ACCUMULO_HOME/bin/bootstrap_config.sh and then editing
> > > $ACCUMULO_HOME/conf/accumulo-env.sh
> > >
> > >
> >
> ***
> > >
> > > If I am using the fake option and trying to connect to the MAC, I should
> > > not be getting the above messages. I should be allowed to connect to the
> > > MAC seamlessly.
> > > This simply speeds up the development spike.
> > >
> > > So every day when I pull the latest changes and build Accumulo, the
> > > previous day's configuration gets wiped out and I have to do it again.
> > > Another way would be to take a backup of the earlier configured Accumulo
> > > build and use the accumulo shell from it. That would save time.
> > >
> > > I would be interested to know what procedure other developers are
> using.
> > >
> > > Right now I can say the development and setup process is cumbersome and
> > > we should make it simple and seamless; maybe there is already a better
> > > option which I am not aware of. We need to have a wiki page or document
> > > about it.
> > >
> > > Looking forward to hearing from the community about it.
> > >
> > > Thanks,
> > > Vicky
> > >
> > >
> > >
> >
>


Re: [DISCUSS] Should we support upgrading 1.4 -> 1.6 w/o going through 1.5?

2014-06-16 Thread John Vines
My comfort with this is based solely on the amount of effort and complexity
involved. I have no direct qualms with providing that path, on the assumption
that we don't spend an exorbitant number of hours developing it and don't end
up with a ridiculous amount of code brought in to support it that we then have
to maintain.

I would also be happier to have some sort of story for how we ensure that 1.4
users migrate to 1.6 in such a way that we don't hit a race condition similar
to the one we found, if/when they jump to 1.7/2.0.


On Mon, Jun 16, 2014 at 5:24 PM, Sean Busbey  wrote:

> In an effort to get more users off of our now unsupported 1.4 release,
> should we support upgrading directly to 1.6 without going through a 1.5
> upgrade?
>
> More directly for those on user@: would you be more likely to upgrade off
> of 1.4 if you could do so directly to 1.6?
>
> We have this working locally at Cloudera as a part of our CDH integration
> (we shipped 1.4 and we're planning to ship 1.6 next).
>
> We can get into implementation details on a jira if there's positive
> consensus, but the changes weren't very complicated. They're mostly
>
> * forward porting and consolidating some upgrade code
> * additions to the README for instructions
>
> Personally, I can see the both sides of the argument. On the plus side,
> anything to get more users off of 1.4 is a good thing. On the negative
> side, it means we have the 1.4 related upgrade code sitting in a supported
> code branch longer.
>
> Thoughts?
>
> --
> Sean
>



-- 
Cheers
~John


Re: [DISCUSS] Do we want contributors assigning to themselves?

2014-05-16 Thread John Vines
Yes, restore the old behavior


On Wed, May 14, 2014 at 4:38 PM, Sean Busbey  wrote:

> We don't have a formal onboarding process for drawing in new contributors,
> but a recent ASF Infra change impacts what I've observed historically.
>
> Here's what I've seen historically, more or less:
>
> 1) Someone expresses interest in a ticket
>
> 2) PMC/committers add them to the list of contributors in jira
>
> 3) respond to the interest, informing the person of this change and
> encouraging them to assign the ticket to themselves
>
> 4) work happens on ticket
>
> 5) review/commit happens eventually
>
> 6) If contributor wants, added to website
>
> 7) contributor thanked and encouraged to find more tickets to assign to
> themselves.
>
> Due to a request from Spark, the ASF Jira got changed to default to not
> allow contributors to assign tickets[1].
>
> Before I speak for the PMC and file a follow on to change things back, I
> just wanted a gut check that we like the above as a general approach.
>
>
> [1]: https://issues.apache.org/jira/browse/INFRA-7675
>
> --
> Sean
>


Re: Review Request 21557: ACCUMULO-2766 fix wal group commit

2014-05-16 Thread John Vines

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/21557/#review43223
---



server/tserver/src/main/java/org/apache/accumulo/tserver/log/DfsLogger.java
<https://reviews.apache.org/r/21557/#comment77307>

I think that before this loop, closeLock and/or closed should be checked; if it 
is closed, then a LogClosedException should be placed on all of the LogWork 
objects, not the Exception received.
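
A minimal, hypothetical sketch of what I mean is below. The names (closeLock, 
closed, LogWork, LogClosedException, the work queue) follow the wording of this 
comment rather than the actual DfsLogger source, so treat it as an illustration 
of the pattern, not a drop-in patch.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical stand-in for the exception named above.
class LogClosedException extends Exception {
  private static final long serialVersionUID = 1L;
}

// Hypothetical stand-in for a queued write waiting to be synced.
class LogWork {
  final CountDownLatch latch = new CountDownLatch(1);
  volatile Exception exception;

  void fail(Exception e) {
    exception = e;     // record why this write will never be synced
    latch.countDown(); // release the thread waiting on this work item
  }
}

// Sketch of the suggested guard, not the actual DfsLogger code.
class LogSyncSketch {
  private final Object closeLock = new Object();
  private boolean closed = false;
  private final BlockingQueue<LogWork> workQueue = new LinkedBlockingQueue<>();

  void syncLoop() throws InterruptedException {
    List<LogWork> batch = new ArrayList<>();
    batch.add(workQueue.take());
    workQueue.drainTo(batch);

    // Before processing the batch, check closed under closeLock; if the log
    // was already closed, put a LogClosedException on every LogWork instead
    // of whatever exception a later sync attempt might happen to throw.
    synchronized (closeLock) {
      if (closed) {
        for (LogWork w : batch) {
          w.fail(new LogClosedException());
        }
        return;
      }
    }
    // ...otherwise sync the batch to the write-ahead log and complete each
    // LogWork normally.
  }
}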


- John Vines


On May 16, 2014, 5:17 p.m., kturner wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/21557/
> ---
> 
> (Updated May 16, 2014, 5:17 p.m.)
> 
> 
> Review request for accumulo.
> 
> 
> Bugs: ACCUMULO-2766
> https://issues.apache.org/jira/browse/ACCUMULO-2766
> 
> 
> Repository: accumulo
> 
> 
> Description
> ---
> 
> A possible fix for ACCUMULO-2766.
> 
> 
> Diffs
> -
> 
>   server/tserver/src/main/java/org/apache/accumulo/tserver/log/DfsLogger.java 
> eb04f09 
> 
> Diff: https://reviews.apache.org/r/21557/diff/
> 
> 
> Testing
> ---
> 
> Only performance testing so far.  This patch dramatically improves 
> performance.
> 
> 
> Thanks,
> 
> kturner
> 
>



Re: [DISCUSS] Dev branches reflecting minor/major versions, not bugfixes

2014-05-12 Thread John Vines
Why eliminate the 1.6.1-SNAPSHOT branch for 1.7.0-SNAPSHOT? Why not just
branch the master and insert a 1.7.0-SNAPSHOT into our workflow after
1.6.1-SNAPSHOT and before master?


On Mon, May 12, 2014 at 11:10 AM, Bill Havanki wrote:

> I like this plan overall. I am definitely in favor of more frequent,
> lighter-weight bugfix releases. We can start to move toward a regular
> schedule of them, based on whether there is enough there to warrant one
> each month / two months / whatever.
>
> We could start by branching off 1.6.0 now, and merging in whatever bug fix
> commits make sense (pending a discussion as Christopher suggested). It can
> be kept in a ready-to-release condition, for whenever it's "time" for
> 1.6.1.
>
> What about 1.5.x? That will still receive feature changes as well as bug
> fixes, I assume, until it goes EOL.
>
>
> On Mon, May 12, 2014 at 10:44 AM, Josh Elser  wrote:
>
> > On 5/12/14, 10:41 AM, Keith Turner wrote:
> >
> >> On Sun, May 11, 2014 at 6:54 PM, Josh Elser
>  wrote:
> >>
> >>  >SGTM. Looks like there aren't currently any fixes of much substance
> for
> >>> >1.6.1 presently, but there are a few that would make for a very-low
> >>> impact
> >>> >1.6.1, and a good 1.5.2 which also includes the fallout tickets
> shortly
> >>> >after 1.5.1. Timeframe looks good to me too.
> >>> >
> >>> >If we can get that reduced test burden for "real" bug-fix releases
> >>> >hammered out, a month sounds good to me.
> >>>
> >>
> >> Rather than reduce the test burden, it would be nice to make the cluster
> >> testing more automated, like you and others have discussed.
> >>
> >
> > I think that would be a good parallel goal, but I would still think that
> 7
> > days of testing for a bug-fix release is excessive. Most times for me the
> > pain is getting resources to test for such a long period, not necessarily
> > setting up the test.
> >
>
>
>
> --
> // Bill Havanki
> // Solutions Architect, Cloudera Govt Solutions
> // 443.686.9283
>

