Re: Machine agents uninstall themselves upon worker.ErrTerminateAgent.

2016-05-10 Thread Eric Snow
On Mon, May 9, 2016 at 4:54 PM, Andrew Wilkins
 wrote:
> For 2.0 that'd be due to the regression mentioned earlier in the thread.
> That's something we ought to fix before releasing 2.0.

I've opened lp:1580231 for that.  I've also opened a couple of related bugs:

lp:1580233 (1.25) Machine agent is uninstalled even when
"uninstall-agent" file does not exist.
lp:1580221 (both) Auto-uninstall of machine agent should be more
strictly limited.

We'll keep lp:1514874 focused on the issue of why auth is failing for
valid credentials.

-eric


https://bugs.launchpad.net/juju-core/+bug/1580231
https://bugs.launchpad.net/juju-core/+bug/1580233
https://bugs.launchpad.net/juju-core/+bug/1580221
https://bugs.launchpad.net/juju-core/+bug/1514874

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Machine agents uninstall themselves upon worker.ErrTerminateAgent.

2016-05-09 Thread Eric Snow
On Mon, May 9, 2016 at 10:50 AM, Eric Snow  wrote:
> I'll check it with 1.25 too, though I expect that the result will be the same.

Under 1.25, using the local provider, I was able to reproduce the
behavior and even got the same confusing log entry:

  2016-05-09 16:48:51 DEBUG juju.cmd.jujud machine.go:1741 uninstall
file "/var/lib/juju/uninstall-agent" does not exist

So it's not readily apparent what is deleting the files and removing
the init services.  However, it *is* happening.

-eric



Re: Machine agents uninstall themselves upon worker.ErrTerminateAgent.

2016-05-09 Thread Eric Snow
On Mon, May 9, 2016 at 1:56 AM, Andrew Wilkins
 wrote:
> But... I've just looked at the 1.25 branch again, and the fix *was* made
> there. And from Jorge's comment
> https://bugs.launchpad.net/juju-core/+bug/1514874/comments/4, we can see
> that the uninstall logic isn't actually running (see `uninstall file
> "/var/lib/juju/uninstall-agent" does not exist`
> https://github.com/juju/juju/blob/1.25/cmd/jujud/agent/machine.go#L1741)
>
> I'm not sure what to make of that. Eric, have you confirmed that that code
> is what's causing the issue? Are we sure we're not barking up the wrong
> tree?

Those logs certainly don't line up with the reported bug. :/  It's
possible that they aren't the right logs.  Regardless, my initial
investigation was based on inspecting code that "uninstalls" the
agent.  The only such code I found was machineAgent.uninstallAgent()
(and in the manual provider's Destroy()).

To be sure about it, I verified the issue using the LXD provider under
2.0, following the steps Jorge specified in the bug:

1) Modify the apipassword in /var/lib/juju/agents/machine-8/agent.conf
2) Restart jujud-machine-8

The agent is uninstalled and the uninstall file is there:

  2016-05-09 16:21:48 INFO juju.agent uninstall.go:47 agent already
marked ready for uninstall

However, the machine is still there:

  $ juju status
  ...
  0  started 10.235.227.113
juju-af8e08f2-3953-4bcb-84de-767a15f46b4f-machine-0 xenial

I'll check it with 1.25 too, though I expect that the result will be the same.

-eric



Re: Machine agents uninstall themselves upon worker.ErrTerminateAgent.

2016-05-06 Thread Eric Snow
On Fri, May 6, 2016 at 11:37 AM, William Reade
 wrote:
> On Fri, May 6, 2016 at 5:50 PM, Eric Snow  wrote:
>>
>> See https://bugs.launchpad.net/juju-core/+bug/1514874.
>
>
> Mainly, we just shouldn't do it. The only *actual reason* for us to do this
> is to support the manual provider; any other machine that happens to be dead
> will be cleaned up by the responsible provisioner in good time. There's a
> file we write to enable the suicidal behaviour when we enlist a manual
> machine, and we *shouldn't* have ever written it for any other reason.

So how about this: other than the container bits, we drop the
"uninstall" code entirely.  To address the manual provider case, during
that provider's StartInstance() we add a "clean-up-juju" script
somewhere on the machine.  Then folks can use it to confidently clean
up after the fact.

> So, narrowly, fixing this involves relocating the more-widely-useful
> (in-container loop device detach IIRC?) code inside MachineAgent.uninstall;

Yep.

> I don't think we want any of this -- it's all a consequence of
> inappropriately triggering the uninstall stuff in the first place.

Yeah, if we simply did no automatic cleanup and (on the manual
provider) gave users a script that would let them clean up afterward,
then they could happily(?) repair their machines manually with much
less trouble.

>> * Could this lead to collisions if the instance is re-purposed as a
>> different machine?  I suppose it could also expose sensitive data when
>> likewise re-purposed, since it won't necessarily be in the same model
>> or controller.  However, given the need for admin access that probably
>> isn't a likely problem.
>
> If substrates are leaking data between separate instances, the substrate is
> screwed up, and there's not much we can do about it. The manual provider has
> no such guarantees, and that's the only justification for trying to clean up
> so drastically in the first place.

Fair enough.

-eric



Machine agents uninstall themselves upon worker.ErrTerminateAgent.

2016-05-06 Thread Eric Snow
See https://bugs.launchpad.net/juju-core/+bug/1514874.

When starting, the machine agent starts up several workers in a runner
(more or less the same in 1.25 and 2.0).  Then it waits for the runner
to stop.  If one of the workers has a "fatal" error then the runner
stops and the machine agent handles the error.  If the error is
worker.ErrTerminateAgent then the machine agent uninstalls (cleans up)
itself.  This amounts to deleting its data dir and uninstalling its
jujud and mongo init services.  However, all other files are left in
place* and the machine/instance is not removed from Juju (agent stays
"lost").

Notably, the unit agent handles fatal errors a bit differently, with
some retry logic (e.g. for API connections) and never cleans up after
itself.

The problem here is that a fatal error *may* be recoverable with
manual intervention or with retries.  Cleaning up like we do makes it
harder to manually recover.  Such errors are theoretically extremely
uncommon so we don't worry about doing anything other than stop and
"uninstall".  However, as seen with the above-referenced bug report,
bugs can lead to fatal errors happening with enough frequency that the
agent's clean-up becomes a pain point.

The history of this behavior isn't particularly clear so I'd first
like to hear what the original rationale was for cleaning up the agent
when "terminating" it.  Then I'd like to know if perhaps that
rationale has changed.  Finally, I'd like us to consider alternatives
that allow for better recoverability.

Regarding that third part, we have a number of options.  I introduced
several in the above-referenced bug report and will expand on them
here:

1. do nothing
   This would be easy :) but does not help with the pain point.
2. be smarter about retrying (e.g. retry connecting to API) when
   running into fatal errors
   This would probably be good to do but the effort might not pay for
   itself.
3. do not clean up (data dir, init service, or either)
   Leaving the init service installed really isn't an option because we
   don't want the init service to try restarting the agent over and
   over.  Leaving the data dir in place isn't good because it will
   conflict with any new agent dir the controller tries to put on the
   instance in the future.
4. add a DO_NOT_UNINSTALL or DO_NOT_CLEAN_UP flag (e.g. in the
   machine/model config or as a sentinel file on instances) and do not
   clean up if it is set to true (default?)
   This would provide a reasonable quick fix for the above bug, even if
   temporary and even if it defaults to false.
5. disable (instead of uninstall) the init services and move the data
   dir to some obvious but out-of-the-way place (e.g.
   /home/ubuntu/agent-data-dir-XXMODELUUIDXX-machine-X) instead of
   deleting it
   This is a reasonable longer-term solution as the concerns described
   for 3 are addressed and manual intervention becomes more feasible.
6. same as 5 but also provide an "enable-juju" (or "revive-juju",
   "repair-juju", etc.) command *on the machine* that would re-enable
   the init services and restore (or rebuild) the agent's data dir
   This would make it even easier to manually recover.
7. first try to automatically recover (from machine or controller)
   This would require a sizable effort for a problem that shouldn't
   normally happen.
8. do what the unit agent does
   I haven't looked closely enough to see if this is a good fit.

I'd consider 4 with a default of false to be an acceptable quick fix.
Additionally, I'll advocate for 6 (or 5) as the most appropriate
solution in support of manual recovery from "fatal" errors.

-eric


* Could this lead to collisions if the instance is re-purposed as a
different machine?  I suppose it could also expose sensitive data when
likewise re-purposed, since it won't necessarily be in the same model
or controller.  However, given the need for admin access that probably
isn't a likely problem.



Re: Static Analysis tests

2016-04-28 Thread Eric Snow
On Wed, Apr 27, 2016 at 9:14 PM, Nate Finch  wrote:
> So, I don't really think the method of testing should determine where a test
> lives or how it is run.  I could test the exact same things with a more
> common unit test - check the tls config we use when dialing the API is using
> tls 1.2, that it only uses these specific ciphersuites, etc.  In fact, we
> have some unit tests that do just that, to verify that SSL is disabled.
> However, then we'd need to remember to write those same tests for every
> place we make a tls.Config.

The distinction here is between testing the code (execution) and
verifying that the code base follows prescribed constraints, whether
technical or organizational.  Perhaps you could call the latter
"meta-testing". :)  For testing the code, it is really useful to have
the tests next to the code.  For meta-tests, e.g. static-analysis
against the code base as a whole, the relationship with any particular
package is tenuous at best.

So I agree with the others that "tests" which focus on the code-base
as a whole, via static analysis or otherwise, should be grouped
together in the directory tree.  There's prior art with the
featuretests package, though it's not quite the same thing.

I also agree that Lingo would be an effective tool in cases like this
where we are enforcing a code-base-wide policy, e.g. "don't import
testing packages in non-test files" or "use the helper function when
creating a tls.Config".  However, that does not mean we can't wrap
calls to Lingo in test methods (more on this below).

>
> The thing I like about having this as part of the unit tests is that it's
> zero friction.  They already gate landings.  We can write them and run them
> them just like we write and run go tests 1000 times a day.  They're not
> special.  There's no other commands I need to remember to run, scripts I
> need to remember to set up.  It's go test, end of story.

I agree that is useful, particularly if we can isolate them, e.g. with
"go test -short" or the testing tags that I proposed.  I don't think
the objection is necessarily about running these "meta" tests via "go
test".  Personally I kind of like that approach.  It wouldn't be hard
to write test suites with methods which call out to the tools.  That
would thereby allow us to meet all our testing needs through "go test"
while allowing us to continue using the other tools independently to
meet additional specific needs.

For me, the objection is more about *where* such tests live.

>
> The comment about Lingo is valid, though I think we have room for both in
> our processes.  Lingo, in my mind, is more appropriate at review-time, which
> allows us to write lingo rules that may not have 100% confidence.  They can
> be strong suggestions rather than gating rules.  The type of test I wrote
> should be a gating rule - there are no false positives.

Again, having granular testing tags would allow us to make that
distinction in a way that the test tooling could easily distinguish.

>
> To give a little more context, I wrote the test as a suite, where you can
> add tests to hook into the code parsing, so we can trivially add more tests
> that use the full parsed code, while only incurring the 16.5 second parsing
> hit once for the entire suite.  That doesn't really affect this discussion
> at all, but I figured people might appreciate that this could be extended
> for more than my one specific test.

Fair enough.

>  I certainly wouldn't advocate people
> writing new 17 seconds tests all over the place.

Oh good! :)

-eric



Re: adding unit tests that take a long time

2016-04-28 Thread Eric Snow
On Wed, Apr 27, 2016 at 8:51 PM, Nate Finch  wrote:
> I do still want to talk about what we can do for unit tests that take a long
> time.

If it takes a long time then it probably isn't a unit test. :)

>  I think giving developers the option to skip long tests is handy -
> getting a reasonable amount of coverage when you're in the middle of the
> develop/test/fix cycle.  It would be really useful for when you're making
> changes that affect a lot of packages and so you end up having to run full
> tests over and over.  Of course, running just the short tests would not give
> you 100% confidence, but once you've fixed everything so the short tests
> pass, *then* you could do a long run for thorough coverage.
>
> This is a very low friction way to increase developer productivity, and
> something we can implement incrementally.  It can also lead to better test
> coverage over all.  If you write 10 unit tests that complete in
> milliseconds, but were thinking about writing a couple longer-running unit
> tests that make sure things are working end-to-end, you don't have the
> disincentive of "well, this will make everyone's full test runs 30 seconds
> longer", since you can always skip them with -short.

Support (both technical and organizational) for explicitly
distinguishing short/long tests in the code would be great.

Further, I would like it if tests could be distinguished with a bit
more granularity than short/long.  It's often helpful to divide tests
into short/medium/long.  We've had a number of discussions on this
list to that effect.  There are other categories of tests that would
be worth segregating like this, such as CI tests (as we move them into
the core repo), other feature tests, and high-level acceptance tests.

In support of that idea, I proposed something along these lines a
while back (the patch didn't go anywhere, mostly because I had other
things to get done):

  http://reviews.vapour.ws/r/1647/

Regardless, I'm in favor of leveraging the existing tooling (e.g. "go
test -short") right now.  I also don't see why going that route would
preclude something later like what I proposed in that patch.  Maybe
I'll dust it off and make it compatible with "go test -short"... :)

>
> The only real negative I see is that it makes it less painful to write long
> tests for no reason, which would still affect landing times but
> hopefully everyone is still aware of the impact of long-running tests, and
> will avoid them whenever possible.

I think that's a lesser issue we can deal with later if it becomes a
problem, which I don't think it would. :)

-eric



Re: auto-upgrading models

2016-04-26 Thread Eric Snow
On Mon, Apr 25, 2016 at 5:24 PM, Menno Smits  wrote:
> A per-model flag which indicates that the model should automatically upgrade
> when the controller does might be nice too (this what #7 means I think?).

I meant a CLI flag for "juju upgrade-juju".  If I understood right,
you are suggesting a per-model config option.  I like that.

-eric



Re: auto-upgrading models

2016-04-26 Thread Eric Snow
On Mon, Apr 25, 2016 at 6:38 PM, Andrew Wilkins
 wrote:
> Whatever we do, #2 should never be optional.

Agreed.

>
> I would like us to have all of #2, #3, #4, #7.

This is my opinion too.  Menno suggested another option (call it #8):
a model config setting equivalent to the flag suggested by #7.  I
support that one too.

> For #3, we could say "upgrade
> each model ... or run `juju upgrade-juju --all-models
> `".

+1

>
> I don't think this should be restricted to "--upload-tools", but rather just
> upgrading the admin model in general.

Agreed.  I was focused on --upload-tools because that's where I had
actually seen a problem with the upgrade story.  This is partly
because you can only upload tools to the admin model now.  However, I
agree that this should not be limited to --upload-tools.

-eric



auto-upgrading models

2016-04-22 Thread Eric Snow
In a recent bug I was working on the issue of auto-upgrading models
came up.  I also ran into this personally not too long ago.
Currently, running "juju upgrade-juju -m admin --upload-tools"[1]
upgrades the admin model to a new version.  To set the version of any
other model to the uploaded one, you must do so separately afterward,
e.g. "juju upgrade-juju -m default 2.0-rc1.3". [2]

The fact that you must upgrade the non-admin model separately is
something new with multi-model support.  Perhaps it is only something
that will throw off folks that have relied on --upload-tools
previously and perhaps it is something that we'll just adjust to.
However, I wanted to bring the matter up here and identify potential
courses of action (not all mutually exclusive):

1. do nothing (rely on users to know to always upgrade juju per-model)
2. clearly point this out in the documentation
3. add a note in the output of "juju upgrade-juju --upload-tools"
reminding admins to manually upgrade each model
4. make the "juju is out-of-date" warnings also show up for models
relative to the controller
5. auto-upgrade models when the controller is upgraded
6. auto-upgrade but have a flag to not auto-upgrade
7. have a flag to auto-upgrade

FWIW, I don't consider #1 or #5 to be acceptable options.  I'm on the
fence about #6; it aligns with what I expect would be a better default
experience but hesitate to make unrequested changes of that sort by
default.  So #7 might be more appropriate if the consequences of #6
would be too risky.  Regardless, I do think the user experience of
upgrade-juju could be improved.

Thoughts?

-eric


[1] You can no longer upload tools to any other model than admin.
[2] Thankfully, due to some recent work by axw the new version is
*available* to all models.



Re: Unable to kill-controller

2016-04-08 Thread Eric Snow
On Wed, Apr 6, 2016 at 6:36 PM, Rick Harding  wrote:
> But this comes back to what started this. People are using it because
> destroy-model isn't working for them. It's one part bug that it's not the
> reverse of bootstrap in its current form and one part that we've broken
> cleanup between betas so folks are looking for something that works. People
> are looking for it because they've got no choice.

If it really is only about compatibility breaks between
builds/releases, how about only including kill-controller in non-final
(or pre-RC) releases?

-eric



Move provider implementations under their related projects.

2016-03-24 Thread Eric Snow
Perhaps we should move the implementation of some providers over to
the repos for the projects with which the providers interact (and then
just import them in providers/all).  Then those providers would be
more likely to stay up-to-date in the face of changes in the project,
particularly backward-incompatible ones.  For example:

* provider/maas -> github.com/juju/gomaasapi/maasprovider
* provider/lxd -> github.com/lxc/lxd/lxdprovider
* ...

or something like that.  It's not a perfect solution nor a complete
one, but it should help.

-eric



Re: Planning for Juju 2.2 (16.10 timeframe)

2016-03-18 Thread Eric Snow
On Fri, Mar 18, 2016 at 8:57 AM, Tom Barber  wrote:
> c) upload files with actions. Currently for some things I need to pass in
> some files then trigger an action on the unit upon that file. It would be
> good to say path=/tmp/myfile.xyz and have the action upload that to a place
> you define.

Have you taken a look at resources in the upcoming 2.0?  You define
resources in your charm metadata and use "juju attach" to upload them
to the controller (e.g. "juju attach my-service/0
my-resource=/tmp/myfile.xyz"). *  Then charms can use the
"resource-get" hook command to download the resource file from the
controller.  "resource-get" returns the path where the downloaded file
was saved.

-eric


* You will also upload the resources to the charm store for charm store charms.



Re: LXD support (maybe)

2016-02-25 Thread Eric Snow
I also left a review.  Mostly LGTM.  I just wanted a bit of feedback
before giving it a ship-it. :)

-eric

On Thu, Feb 25, 2016 at 8:11 AM, Andrew McDermott
 wrote:
> I took a quick look through it as I was OCR - comments in RB.
>
> On 25 February 2016 at 15:00, John Meinel  wrote:
>>
>> So I have a patch up http://reviews.vapour.ws/r/3973
>>
>> Which with the help from Tycho does look like it allows you to "juju
>> bootstrap lxd lxd" again, but there are a fair number of caveats.
>>
>> The big one is that it is only compatible with lxd-2.0.0~beta4 (because
>> the API has changed since beta3 and very different from 0.21/0.26). And
>> currently 0.26 is the version that is in Trusty-backports. (I do see
>> 2.0.0~beta4 in Xenial main, though.)
>>
>> For people that want to try it out, you can use ppa:ubuntu-lxc/lxd-stable
>> as it has 2.0.0~beta4 in it. I'm not sure if CI will let it end up in master
>> or the next Juju release because of not having 2.0.0~beta4 easily available
>> in Trusty.
>>
>> I'll try to respond to any reviews or feedback that people give, though
>> I'm technically off for the weekend. I know its one of the blockers for the
>> next release, though, so I'll try to help.
>>
>> John
>> =:->
>>
>>
>
>
>
> --
> Andrew McDermott 
> Juju Core Sapphire team 
>



Re: Diffing string checker

2016-02-25 Thread Eric Snow
On Wed, Feb 24, 2016 at 11:26 PM, Andrew Wilkins
 wrote:
> Howdy,
>
> Occasionally I'll change a test, and some string equality test will fail
> with a wall of text. Sometimes we shouldn't be checking the whole string,
> but sometimes it's legitimate to do so, and it can be difficult/tedious to
> spot the differences.
>
> I've just written a checker which diffs the two string args, and colourises
> the output. You may find it useful. I'm using red/green background, but I
> also added bold for insertions, strike-through for deletions, in case you're
> red/green colour blind. My terminal doesn't do strike-through, and yours
> probably doesn't either. Anyway, the important thing is you can see the
> difference between bits that are the same vs. insertions/deletions.
>
> Code is at github.com/axw/fancycheck. Just replace
> c.Assert("x", gc.Equals, "y")
> with
> c.Assert("x", fancycheck.StringEquals, "y")

FYI, I added something like this to the juju repo a while back:

https://github.com/juju/juju/blob/master/testing/base.go#L166

It's not a gc checker like yours but it has the same effect.

-eric



Re: Juju terminology change: controllers and models

2016-02-03 Thread Eric Snow
Thanks for this in-depth break-down, Ian.  It really helps.

-eric

On Tue, Feb 2, 2016 at 6:33 PM, Ian Booth  wrote:
> Hey all
>
> As has been mentioned previously in this list, for the Juju 2.0 release we 
> have
> been working on fundamental terminology changes. In particular, we now talk
> about controllers and models instead of state servers and environments.
>
> To this end, a rather large change has landed in master and the upcoming
> 2.0-alpha2 release of Juju will reflect these changes. There are several 
> things
> to be aware of. We have also taken the opportunity to remove a lot of code 
> which
> existed to support older Juju clients. Needless to say, this Juju 2.0 release
> will not support upgrading from 1.x - it works only as a clean install.
>
> Note: some of the changes will initially break the GUI and users of the Python
> Juju Client - work is underway to update these products for the next alpha3
> release scheduled for next week. For those wishing to continue to test Juju 
> 2.0
> without the breaking changes, the alpha1 release is still available via
> ppa:juju/experimental. Separate communications to affected stakeholders 
> has/will
> be made as part of the 2.0-alpha2 release.
>
> So, the changes are roughly broken down as follows:
>
> - CLI command name changes
> - facade name changes
> - api method and parameter name changes
> - facade method restructure
> - internal api name changes
> - external artifact/data changes (including on the wire changes)
> - deprecated and older version facades are removed
>
> 1. CLI command name changes
>
> As an obvious example, create-environment becomes create-model. We also have
> destroy-controller etc. This alpha2 release will also contain many of the 
> other
> CLI changes targeted for 2.0 eg juju backup create becomes juju 
> create-backup.
> Not all 2.0 CLI syntax is supported yet, but all the environment -> model
> changes are done.
>
> You will also use -m  instead of -e .
>
> The release notes will go into more detail.
>
> All user facing text now refers to model instead of environment.
>
> 2. Facade name changes
>
> If you are curious, see https://goo.gl/l4JqGd for a representative listing of
> all facade and method names and which ones have been changed.
>
> The main one is EnvironmentManager becomes ModelManager. These changes affect
> external API clients like the GUI and Python Juju client.
>
> 3. api method and parameter name changes
>
> By way of example:
> EnvironInfo() on the undertaker facade becomes ModelInfo().
> The param struct ModifyEnvironUsers becomes ModifyModelUsers etc.
> EnvironTag attributes become ModelTag.
>
> 4. Service facade method restructure
>
> As part of making our facades more manageable and maintainable when API 
> changes
> are required, a whole bunch of service related methods are moved off the 
> Client
> facade and onto the Service facade. This had already been started months ago,
> and there were shims in place to keep existing clients working, but now the 
> job
> is finished.
> eg Client.AddRelation() becomes Service.AddRelation() etc.
>
> This change will break the GUI and Python Juju client.
>
> 5. Internal API name changes
>
> Things like state.AllEnvironments() becomes state.AllModels(), we now use
> names.ModelTag instead of names.EnvironTag, and many, many more.
>
> Note: the names package has not been forked into a .V2 yet (with EnvironTag
> removed) as there are dependencies to sort out. Please do not use EnvironTag
> anymore.
>
> 6. External artifact/data changes (including on the wire changes)
>
> There are several main examples here.
> On the wire, we transmit model-uuid tags rather than environment-uuid tags.
> In mongo, we store model-uuid doc fields rather than env-uuid.
> In agent.conf files we store Model info rather than Environment tags.
> In the controller blob store, we store and manage blobs for buckets rather 
> than
> environments.
> The controller HTTP endpoints are now under /model/.
> In backups we store model tags and details, not environment.
>
> With the blobstore, we've create a .V2 branch which core uses. The charmstore
> will continue to use V1 for now.
>
> 7. Deprecated and older version facades are removed
>
> All facade versions have been bumped up. Older facades are removed, and we 
> don't
> support fallback to older servers. The main example for facade removal is 
> uniter
> v0 and v1 are gone. With deprecated API removal, service deployment now 
> expects
> placement parameters rather than catering for older clients that did not 
> support
> placement, so there's only one ServiceDeployMethod() instead of 3. All in all,
> the code base has been simplified by removing all support for deprecated 
> facades
> and API calls.
>
> There are still a couple of grey areas internally to be sorted out. But the 
> user
> facing changes are done (pending more CLI work between now and release).
>
>
> If you have any questions, please ask. As juju-core developers, I suggest
> merging master

Re: Problem with lxd on master - How do I debug this?

2016-02-01 Thread Eric Snow
On Mon, Feb 1, 2016 at 7:12 AM, Matthew Williams
 wrote:
> Hey folks,
>
> the lxd provider has been working fine for me - until this morning. I'm
> running on master (1cb8f0)
>
> I've attached output from juju debug-log juju status and lxd list, can
> someone help me work out what I should do to resolve this.
>
> The history leading up the attached logs is:
>
> 1) Everything was working
> 2) A deployed a charm that had a hook install error (I was debugging said
> charm, so that was expected). I couldn't ssh to the unit. I got the
> following error:
> $ juju ssh foobar/0
> ERROR error fetching address for unit "foobar/0": private no address.
>
> juju debug log showed:
>
>  machine-4: 2016-02-01 11:53:39 ERROR juju.worker runner.go:229 exited
> "machiner": setting machine addresses: cannot set machine addresses of
> machine 4: state changing too quickly; try again soon
>
> After doing what debug I can I destroyed the controller. Now whenever I
> bootstrap a lxd environment I get the state server and everything else gets
> stuck in allocation. Attached is more detailed stuff from the logs. What
> should I do to work out what's going on here?

After "juju destroy-model" (or destroy-environment) you should not
have any LXC containers left, so "lxc list" should come up empty.  If
the model was not properly destroyed then you can run "lxc delete
juju-..." to manually destroy the container.

I'd also check to see if there is an open issue that matches.  If not,
be sure to open one.

Also, from what Casey reported in IRC, it sounds like there might be
some relationship with the LXD config file.  We haven't really touched
the provider in a while, so possibly something the provider depends on
changed recently.

-eric

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Juju devel 1.26-alpha2 is available for testing

2015-11-27 Thread Eric Snow
On Fri, Nov 27, 2015 at 9:00 AM, Marco Ceppi  wrote:
> On Fri, Nov 27, 2015 at 9:37 AM Aaron Bentley 
> wrote:
>>  Requirements
>>
>> - Running Wily (LXD is installed by default)
>>
>
> For the LXD provider, I have the latest LXD installed on trusty, will that
> work or is it hard-coded to wily+ ?

It will work fine if using a juju built with Go 1.3+ (e.g. the juju
package on wily or building locally and using --upload-tools).  The
provider uses the LXD Go bindings which require Go 1.3+.  It is a
compile-time issue rather than a run-time check.

Until trusty ships with Go 1.3+ we cannot ship a Juju that depends on
the LXD Go bindings, thus the LXD provider is disabled if Juju is
compiled with anything older than Go 1.3.  As Aaron said, there's a
separate thread discussing how to solve the problem of getting the
latest full-featured Juju on trusty.
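The compile-time gating described above is typically done with a Go build constraint on a release tag. Here is a minimal sketch of the idea (modern `//go:build` syntax shown; juju in 2015 would have used the older `// +build go1.3` form, and the `lxdEnabled` name is illustrative, not from the juju source):

```go
// A file guarded like this only compiles under Go 1.3 or newer; with an
// older toolchain the package simply builds without this file, which is
// how a provider can be compiled out rather than checked at run time.

//go:build go1.3

package main

import "fmt"

// lxdEnabled stands in for the provider registration that the guarded
// file would normally perform.
const lxdEnabled = true

func main() {
	fmt.Println("lxd provider enabled:", lxdEnabled) // prints "lxd provider enabled: true"
}
```

On any modern toolchain the `go1.3` release tag is satisfied, so the file compiles; under Go 1.2 it would be silently excluded from the build.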

-eric



Re: trouble with dependent repos

2015-11-24 Thread Eric Snow
On Mon, Nov 23, 2015 at 9:32 PM, Nate Finch  wrote:
> Last week, I had trouble landing code in gopkg.in/juju/charm.v6-unstable,

I had a similar experience last week with backward-incompatible
changes in the utils repo. :(

> and using that code from master of github.com/juju/juju.  I was hoping
> someone could help me figure out a way to avoid problems I've encountered.
>
> [snip]
>
> Anyone have ideas on how can we prevent incompatible changes landing in a
> dependent repo without the corresponding fixes landing in juju soon
> thereafter?

I'd recommend avoiding backward-incompatible changes if at all
possible.  Often the temptation is to "fix" some signature while
you're working on related code.  Sometimes you really need a different
signature.  Either way, almost every time you can instead introduce a
new function with the new signature and deprecate the old one.  Using
that approach you do not break backward compatibility and thus do not
cause the problem for dependent repos that you've described.
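The pattern is straightforward: add the new function alongside the old one and turn the old one into a deprecated wrapper. A minimal sketch (the function names here are hypothetical, not from the utils repo):

```go
package main

import (
	"fmt"
	"strings"
)

// SplitHost is the original exported function. Rather than changing its
// signature (a backward-incompatible break for every dependent repo
// pinning this package), it stays as a thin wrapper over the new API.
//
// Deprecated: use SplitHostZone, which also reports the zone.
func SplitHost(s string) string {
	host, _ := SplitHostZone(s)
	return host
}

// SplitHostZone is the new function with the richer signature.
func SplitHostZone(s string) (host, zone string) {
	if i := strings.IndexByte(s, '%'); i >= 0 {
		return s[:i], s[i+1:]
	}
	return s, ""
}

func main() {
	// Old callers keep compiling and behaving exactly as before.
	fmt.Println(SplitHost("fe80::1%eth0")) // prints "fe80::1"
}
```

Existing importers are untouched, new code gets the better signature, and the wrapper can be dropped in a coordinated cleanup later.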

-eric



Re: lxd provider in master

2015-11-23 Thread Eric Snow
On Sun, Nov 22, 2015 at 8:50 PM, Tim Penhey  wrote:
> Hi all,
>
> I noticed that the lxd provider was now in master and attempted to use
> it to help debug an HA problem.
>
> In environments.yaml file I added a very simple clause:
>
>lxd:
>   type: lxd

This is what I've been using.  It works fine.

>
> But when I try to bootstrap, I get the following:
>
> $ juju bootstrap --debug
> 2015-11-23 03:26:14 INFO juju.cmd supercommand.go:58 running juju
> [1.26-alpha2 gc]
> 2015-11-23 03:26:14 DEBUG juju.environs.config config.go:1473 coercion
> failed attributes: map[string]interface {}{"namespace":"tim-lxd",
> "container":"lxd"}, checker:

This helps identify the problem.  The environment you tried to
bootstrap has "container: lxd" and there is no "lxd" container type.
Perhaps it's an old environment, set up back when the LXD provider was
first prototyped (with its own container type)?  It does not appear to
be the simple "lxd" environment you showed above.

Also, FYI, "namespace" is still around but is not necessary.  It is
useful when you want to have multiple LXD providers running on your
local LXD.

-eric



Re: use --upload-tools when bootstrapping the LXD provider

2015-11-06 Thread Eric Snow
On Fri, Nov 6, 2015 at 8:41 AM, Aaron Bentley
 wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> That may be true, but the LXD provider branch has a stale version
> number.  Once the version number is updated to 1.26-alpha2, I imagine
> the lxd provider will no longer try to use the released streams.
>
> The stale version number is also causing builds to fail:
> http://reports.vapour.ws/releases/3272
> http://reports.vapour.ws/releases/3277

Ah.  Thanks.

-eric



use --upload-tools when bootstrapping the LXD provider

2015-11-06 Thread Eric Snow
If you are using the LXD provider, built from the lxd-provider feature
branch, then make sure you use the --upload-tools option when
bootstrapping.

Since the 1.26-alpha1 stream has been released, Juju will download
that instead of the juju you built from the lxd-provider feature
branch.  That means that the machine agent on new instances (e.g. the
new controller) will not know about the LXD provider.  Note that this
is the same as any other provider.  Just a heads up. :)

-eric



link to review request in launchpad

2015-06-17 Thread Eric Snow
All,

After posting a review request (i.e. a PR), please be sure to add a
link to the review request/PR (as a comment) to the lp issue the patch
addresses.  Otherwise it's a pain trying to track down which commits
actually relate to the issue.

Similarly, be sure to include a link to the related issue in the PR description.

-eric



Re: Reminder: on June 15th, Go 1.4 will stop compiling things from code.google.com.

2015-06-02 Thread Eric Snow
On Tue, Jun 2, 2015 at 5:57 AM, Katherine Cox-Buday
 wrote:
> I think there was an effort to remove this from our code, but I thought I'd
> send out a reminder to check and for personal projects. May the source be
> with you...

code.google.com/p/winsvc is the only one left and there's a patch up
for review that switches that one over.

-eric



Re: gce provider options

2015-05-28 Thread Eric Snow
On Thu, May 28, 2015 at 1:46 AM, William Reade
 wrote:
> It looks like it's a wart -- AFAICS it's just another name for
> image-metadata-url, but it's not actually hooked up to anything. Eric,
> Wayne, any comments?

image-endpoint isn't an alternative to image-metadata-url, which
allows you to specify where to look for the simplestreams metadata.
From what I can tell, image-endpoint is something new.

-eric



Re: possible approach for "test tags"

2015-05-14 Thread Eric Snow
On Wed, May 13, 2015 at 9:59 PM, Andrew Wilkins
 wrote:
> Main thing that concerns me is that it's all opt-in.

Same here.

> I guess it doesn't
> matter
> too much, as long as CI continues to run all the tests. There's nothing
> stopping
> people from skipping running the tests and proposing junk today.

Right.

> A couple of things I'd like to see added to the proposal:
>  - the ability to have negated tags, e.g. -tags=!long or -tags=^long or
> whatever.

I already have that ("-" instead of "!"), just not well documented. :)

>  - a meta "all" tag, or command line flag to set all the tags (e.g. for CI)

Great idea.  I added "base", but that's something else.
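For illustration, tag selection with "-" negation might be matched like this. This is only a sketch of the idea, not the code in the review request; the spec format and function name are assumptions:

```go
package main

import (
	"fmt"
	"strings"
)

// matchTags reports whether a test tagged with testTags should run
// under a comma-separated tag spec. A leading '-' negates a tag, e.g.
// "-long" excludes tests tagged "long" and selects everything else.
func matchTags(spec string, testTags ...string) bool {
	tagged := make(map[string]bool)
	for _, t := range testTags {
		tagged[t] = true
	}
	matched := false
	for _, raw := range strings.Split(spec, ",") {
		if neg := strings.TrimPrefix(raw, "-"); neg != raw {
			if tagged[neg] {
				return false // explicitly excluded
			}
			matched = true // a negation alone selects the rest
			continue
		}
		if tagged[raw] {
			matched = true
		}
	}
	return matched
}

func main() {
	fmt.Println(matchTags("-long", "small"))    // true
	fmt.Println(matchTags("-long", "long"))     // false
	fmt.Println(matchTags("base,long", "long")) // true
}
```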

-eric



possible approach for "test tags"

2015-05-11 Thread Eric Snow
We've had discussions here a couple of times (since I joined the team)
about classifying tests in our suite so they could be run more
flexibly.  We've also already added explicit handling for a specific
kind of test along those same lines: featuretests.  Additionally you
can think of the CI tests as another test classification.  So we're
already taking the approach in a limited fashion.

I'd like to see something more generic that we could use in the juju
test suite.  I've put up a proof of concept patch that demonstrates
what I think would fit the bill:

  http://reviews.vapour.ws/r/1647/

If the approach seems reasonable then we could start using it for new
tests and as appropriate when touching existing code.

-eric



Re: Migrating imports away from code.google.com/p to golang.org/x

2015-03-27 Thread Eric Snow
On Fri, Mar 27, 2015 at 9:59 AM, Nate Finch  wrote:
> Just an FYI:
>
> As you've probably heard, code.google.com is going away.  It'll be read-only
> for a year IIRC and then poof.
>
> Most of the packages from the go authors lived there (crypto, etc), but now
> will be maintained at golang.org/x.  We'll be changing our imports to point
> at the new repos, so our code doesn't break when google code goes away.  The
> only outlier is the GCE wrapper, which now lives at github.com/google.

I'm working up a patch right now for juju core (master-only).  Some of
our other dependencies will need to be updated to use the new
locations:

gopkg.in/macaroon-bakery.v0
code.google.com/p/go.net -> golang.org/x/net
bitbucket.org/kardianos/service
code.google.com/p/winsvc -> ???

Also, the GCE provider relies on code.google.com/p/goauth2 which is
deprecated and the replacement is not compatible, so I'll post a
separate patch to address that for 1.23 and master.

-eric



Re: backporting changes to packages that use gopkg.in

2015-03-27 Thread Eric Snow
On Thu, Mar 26, 2015 at 7:53 PM, Nate Finch  wrote:
> tl;dr: godeps overrides gopkg.in, so you can have godeps pin a commit from a
> different branch than gopkg.in is retrieving (i.e. make a release-number
> branch, like "1.22" and use godeps to pin commits from there, even if go get
> & gopkg.in is getting a different branch).

So basically, gopkg.in is just a means for a project to explicitly
control, in a sane way, the git revision that `go get` uses.  If your
project does *not* use something like gopkg.in, then either the tip of
the master branch must always be correct (messy) or folks who import
your code must manually manage the revision (as we do with godeps).

There is a lot less magic going on with gopkg.in than you might think.
In the context of `go get`, gopkg.in just proxies the github URL to
the corresponding named revision (branch or tag).  It does not cache a
copy of the branch or repo or otherwise restrict the revisions
available to the git fetch that `go get` does.  So non-ancestor
revisions still get downloaded.  That's why we could still take the
godeps/"1.22"-branch approach without a lot of complexity.

In the context of juju core, using gopkg.in for dependencies doesn't
buy us as much since we use godeps.  It's still fine though since the
import "URL" indicates the intended target and it doesn't impede our
ability to adjust via godeps (as I originally thought it did).

-eric



Re: GCE provider

2015-03-25 Thread Eric Snow
On Wed, Mar 25, 2015 at 8:00 PM, Dimiter Naydenov
 wrote:
> One thing to bear in mind. It seems the europe-west1-a AZ is
> considered "deprecated" and will be shut down soon (end of this month
> I believe).
> It will be nice to skip this AZ when trying to place instances.

Would you mind opening a bug on that?  Thanks.

-eric



Re: GCE provider

2015-03-25 Thread Eric Snow
On Wed, Mar 25, 2015 at 5:55 PM, Andrew Wilkins
 wrote:
> On Wed, Mar 25, 2015 at 9:33 PM, Curtis Hovey-Canonical
>  wrote:
>>
>> On Wed, Mar 25, 2015 at 3:04 AM, Andrew Wilkins
>>  wrote:
>> > Hey folks,
>> >
>> > I just signed up for a GCE trial, and tested out the provider on master.
>> > Found a couple of issues.
>> >
>> > I've filed https://bugs.launchpad.net/juju-core/+bug/1436191. I suspect
>> > the
>> > same issue would exist on 1.23, but haven't tested. I haven't set the
>> > status/importance, because I don't know what the importance is (there's
>> > no
>> > CI or docs?), and I'd like for someone to confirm the bug.
>>
>> I just got CI credentials yesterday evening. As CI is using a project
>> shared by several Canonical staff, it isn't clear to me if I should be
>> creating new client ids or how to generate keys and not interfere with
>> other people.
>>
>> Contrary to text generated by init, 'region' is required.
>
>
> Indeed, I forgot that one. The boilerplate we generate should
> not comment out "region".

Yeah, I'm not sure what happened there.  Either we had a default and
removed it but failed to update the config text, or we were going to
have a default but missed actually adding it.  Would it make sense to
have a default region?

>
>>
>>
>> > Second, a UX thing. It was mostly clear how to generate keys, but it
>> > wasn't
>> > super clear what format to put them into environments.yaml. Obviously
>> > this'll be documented, but I think it'd be nice if in environments.yaml
>> > we
>> > could just specify the location of the JSON file that GCE spits out, and
>> > have Juju extract the info it needs, like we do with certs and SSH keys.
>>
>> I assumed I could use authorized-keys.
>
>
> authorized-keys is fine. That was a bad example.
>
> In the Azure provider, credentials take the form of a certificate and key.
> You *can* enter the cert/key inline in your environments.yaml, but you
> can also specify management-certificate-path and the provider will read
> that file and add the contents into the management-certificate attribute.

I'm writing up a patch that basically does this.

-eric



Re: GCE provider

2015-03-25 Thread Eric Snow
On Wed, Mar 25, 2015 at 1:04 AM, Andrew Wilkins
 wrote:
> Hey folks,
>
> I just signed up for a GCE trial, and tested out the provider on master.
> Found a couple of issues.
>
> I've filed https://bugs.launchpad.net/juju-core/+bug/1436191. I suspect the
> same issue would exist on 1.23, but haven't tested. I haven't set the
> status/importance, because I don't know what the importance is (there's no
> CI or docs?), and I'd like for someone to confirm the bug.

I'll take a look.

>
> Second, a UX thing. It was mostly clear how to generate keys, but it wasn't
> super clear what format to put them into environments.yaml. Obviously
> this'll be documented, but I think it'd be nice if in environments.yaml we
> could just specify the location of the JSON file that GCE spits out, and
> have Juju extract the info it needs, like we do with certs and SSH keys.

That's a good idea.  I'll check it out.

-eric



Re: restoring juju state

2015-03-25 Thread Eric Snow
On Tue, Mar 24, 2015 at 9:25 PM, Tim Penhey  wrote:
> I'm seriously not in favour of calling it "recover", and not just
> because it has a particular meaning in Go.

I'm glad you said so then. :)

>
> Perhaps "rebuild" or "recreate", but "recover" feels wrong.

Got it.  I'm not married to "recover".  It was just the first
reasonable (to me) alternative that popped into my head.

"rebuild" and "recreate" could work, though they don't seem quite
right either.  Perhaps something more like "replace" or "reset"?  So
far it's all re* names, so maybe "wipe"? :)  The catch is that it will
be a subcommand of "juju backups" so it also needs to be something
that implies applying a backup rather than affecting a backup.  Hmm.
"apply"?

-eric



Re: restoring juju state

2015-03-25 Thread Eric Snow
On Wed, Mar 25, 2015 at 4:14 AM, roger peppe  wrote:
> Ooh, a proper bikeshed!

:)

>
> If this command is still doing the same thing it was when
> I first wrote the original hack version, then it's more like
> restore-state-servers-and-hope-that-nothing-much-changed-since-we-dumped.

Pretty much.

>
> I'm guessing that Eric's issue with "restore" is that the word implies that
> it restores the entire environment, which it does not.

That's actually the problem I have with calling the whole thing
"backups" when it's really
state-DB-backups-plus-a-couple-other-key-state-artifacts.  But that is
a separate issue. *  :)

> But "recover"
> has a similar issue.

The problem I have with the name "restore" is that it implies we are
going to fix up the state server (and possibly the environment) to
match some earlier backup.  Instead, we replace it with a new one and
don't do anything about fixing the replicaset (if any existed).
Furthermore, as you indicated, the machines in the restored state
might not line up properly with the current set of instances in the
environment.  Restore doesn't clean up that situation either.

>
> Perhaps "dump-state" and "restore-state" might convey more
> accurately what's being done here?

You're probably right, though "restore" in the name still carries the
same implications in my mind.

-eric

*  I almost think the "juju backups" super command would be better
named "juju state".  backups would then be just a part of that.



Re: restoring juju state

2015-03-23 Thread Eric Snow
I've opened https://bugs.launchpad.net/juju-core/+bug/1435413 for
renaming "restore" to "recover".

-eric

On Mon, Mar 23, 2015 at 10:00 AM, Horacio Duran
 wrote:
> Nope this is not very accurate I believe that you got some parts wrong out
> of our conversation. I am currently on vacation but upon return I'll submit
> a writeup with some graphic flows explaining how this all works (the recover
> part I agree with, in any case what is being recovered/restored is the state
> server so that is what we should be trying to convey with the name)



restoring juju state

2015-03-23 Thread Eric Snow
juju 1.23 adds the new restore, all written in Go and integrated into
core.  This is a great thing and the result of a lot of effort by
Horacio.  In discussions with him about it, there are two things that
came up that I wanted to bring up here.

First, the name "restore" is misleading.  It gives the impression of
capability that it does not actually provide.  I propose that we
rename it to "recover" in 1.23 and forward.

Second, I am now under the impression that after restore is complete,
the environment is left with instances on the provider that juju
should be cleaning up but isn't.  In the case of a single state
server, the old state server instance is left dead but still active.
In the case of HA, in addition to the state server that handles the
restore API request, the other state servers are also left in that
same situation.

So for a 3 server replicaset you are left with 3 instances that juju
is not using but that are still active on the provider side.  Users
may not realize they need to manually remove those instances to avoid
further costs (depending on the provider, I suppose).  Please explain
if I've misunderstood this or if juju has some other mechanism by
which such instances are cleaned up.  If I haven't missed something
then I think we need to fix restore to clean up those state machines.

-eric



Re: Testing on windows

2015-03-19 Thread Eric Snow
That would be awesome!  I was thinking along the same lines.

-eric

On Thu, Mar 19, 2015 at 8:51 AM, Gabriel Samfira
 wrote:
> That is actually a great idea :). We can try and do this in Nuremberg next 
> month ;). We have some tooling to do an unattended install. We can use that 
> to create the base qcow2 image, and install dependencies.
>
> for testing we can simply use the base image as a qcow2 backing file, and use 
> WinRM to run commands inside it from a simple python script.
>
> We can implement this during the sprint :D.
> ____
> From: Eric Snow [eric.s...@canonical.com]
> Sent: Thursday, March 19, 2015 4:33 PM
> To: Gabriel Samfira
> Cc: juju-dev@lists.ubuntu.com
> Subject: Re: Testing on windows
>
> Thanks for bringing this up.  And thanks to Bogdan (and anyone else
> involved) for writing up that great walkthrough!  I had expected it to
> be more complicated.  I'm going to be running the unit tests on
> windows from now on. :)  I'm sure I've broken them a bunch in the last
> couple weeks.
>
> What would be nice is a script that does the following:
>
> * start up a KVM with windows
> * pull a specified branch from a github juju repo (and run godeps)
> * run the test suite (and show me the results)
> * shut down the KVM
>
> Bonus points for optionally setting up a fresh KVM each time and for
> optionally running other provided commands (e.g. to test bootstrap,
> deploy).
>
> If I get a chance I'll write something up.  (I have so much free time. )
>
> -eric
>
> On Wed, Mar 18, 2015 at 7:40 PM, Gabriel Samfira
>  wrote:
>> Hello folks,
>>
>> It has come to my attention that there may be some confusion in regards
>> to some Windows testing. There have been a couple of branches that have
>> merged which break windows tests in juju-core.
>>
>> I would like to remind everyone that there is a guide available at:
>>
>> http://wiki.cloudbase.it/juju-testing
>>
>> that will help you set up a testing environment on Windows. Also, if
>> there are any questions regarding Windows weirdness, please feel free to
>> contact me on irc (gsamfira) or bogdanteleaga. We will be more then
>> happy to help you navigate any Windows issues you might have.
>>
>>
>> Kind regards,
>> Gabriel
>> --
>> Juju-dev mailing list
>> Juju-dev@lists.ubuntu.com
>> Modify settings or unsubscribe at: 
>> https://lists.ubuntu.com/mailman/listinfo/juju-dev



Re: Testing on windows

2015-03-19 Thread Eric Snow
Thanks for bringing this up.  And thanks to Bogdan (and anyone else
involved) for writing up that great walkthrough!  I had expected it to
be more complicated.  I'm going to be running the unit tests on
windows from now on. :)  I'm sure I've broken them a bunch in the last
couple weeks.

What would be nice is a script that does the following:

* start up a KVM with windows
* pull a specified branch from a github juju repo (and run godeps)
* run the test suite (and show me the results)
* shut down the KVM

Bonus points for optionally setting up a fresh KVM each time and for
optionally running other provided commands (e.g. to test bootstrap,
deploy).

If I get a chance I'll write something up.  (I have so much free time. )

-eric

On Wed, Mar 18, 2015 at 7:40 PM, Gabriel Samfira
 wrote:
> Hello folks,
>
> It has come to my attention that there may be some confusion in regards
> to some Windows testing. There have been a couple of branches that have
> merged which break windows tests in juju-core.
>
> I would like to remind everyone that there is a guide available at:
>
> http://wiki.cloudbase.it/juju-testing
>
> that will help you set up a testing environment on Windows. Also, if
> there are any questions regarding Windows weirdness, please feel free to
> contact me on irc (gsamfira) or bogdanteleaga. We will be more then
> happy to help you navigate any Windows issues you might have.
>
>
> Kind regards,
> Gabriel
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at: 
> https://lists.ubuntu.com/mailman/listinfo/juju-dev



using the reviewboard dashboard effectively

2015-03-17 Thread Eric Snow
I realized today that some folks may not be aware of some of the
helpful columns in the reviewboard dashboard.  You can see the full
list of available columns by clicking on the pencil icon on the right.
There are a handful of columns that I've found really helpful to me as
a reviewer.  Most notably:

* "Ship It!" (the green check mark) - indicates how many ship-its a
review request has. It also shows how many open issues there are.
This helps me know what point the request is at in the review process.
* "Reviews" - shows how many reviews the request has received.  This
allows me to quickly know if a request hasn't gotten enough attention.
* "New Updates" (the ... bubble) - shows whether or not there has been
any activity on the review request since you last looked at it. That
way you can quickly see when you may need to follow up on a review you
gave.
* "My Comments" (the green boxy thing) - indicates whether or not you
have commented on a review request.  This allows me to visually filter
the requests on which I should follow up.

FYI, here's the list of columns I have in my dashboard, ordered the
way I like it:

ID
Ship It!
Reviews
New Updates
My Comments
Diff Size
Summary
Submitter
Repository
Last Updated (this is my sort column)

Hope this helps all the reviewers be more productive. :)

-eric



Re: Please, no more types called "State"

2015-03-12 Thread Eric Snow
On Thu, Mar 12, 2015 at 5:08 AM, Michael Foord
 wrote:
> When I was new to Juju the fact that we had a central "State", core to the
> Juju model, but we had umpteen types called State - so where you saw a State
> you had no idea what it actually was and when someone mentioned State you
> couldn't be sure what they meant - was a significant part of the learning
> curve.

I had the same experience.

>
> Perhaps a better solution would have been a better name for the core State.

+1

-eric



Re: Joyent SSH key

2015-03-04 Thread Eric Snow
On Wed, Mar 4, 2015 at 10:04 AM, Nate Finch  wrote:
> My suggested solution is that we do what we do for all the rest of the
> providers, which is to make the user give us authentication credentials in
> the environments.yaml file, and we just use that, and not create anything
> ourselves.

Makes sense to me.  Was there any reason why we did something
different for Joyent in the first place?

-eric



Re: Feedback on a base "fake" type in the testing repo

2015-02-13 Thread Eric Snow
Again, thanks for the feedback.  Responses inline again.

-eric

On Fri, Feb 13, 2015 at 11:36 AM, Gustavo Niemeyer  wrote:
> On Fri, Feb 13, 2015 at 3:25 PM, Eric Snow  wrote:
>>> This is a "mock object" under some well known people's terminology [1].
>>
>> With all due respect to Fowler, the terminology in this space is
>> fairly muddled still. :)
>
> Sure, I'm happy to use any terminology, but I'd prefer to not make one
> up just now.

I've decided to just follow the definitions to which Fowler refers
(citing Meszaros).  That will be my small effort to help standardize
the terminology. :)  The name then for what I'm talking about is
"stub".

>
>>> The most problematic aspect of this approach is that tests are pretty
>>> much always very closely tied to the implementation, in a way that you
>>> suddenly cannot touch the implementation anymore without also fixing a
>>> vast number tests to comply.
>>
>> Let's look at this from the context of "unit" (i.e. function
>> signature) testing.  By "implementation" do you mean you mean the
>> function you are testing, or the low-level API the function is using,
>> or both?  If the low-level API then it seems like the "real fake
>> object" you describe further on would help by moving at least part of
>> the test setup down out of the test and down into the fake.  However
>> aren't you then just as susceptible to changes in the fake with the
>> same maintenance consequences?
>
> No, because the fake should behave as a normal type would, instead of
> expecting a very precisely constrained orchestration of calls into its
> interface. If we hand the implementation a fake value, it should be
> able to call that value as many times as it wants, with whatever
> parameters it wants, in whatever order it wants, and its behavior
> should be consistent with a realistic implementation. Again, see the
> dummy provider for a convenient example of that in practice.

Right, that became clear to me after I send my message.  So it seems
to me that fakes should typically only be written by the maintainer of
the thing they fake; and they should leverage as much of the real
thing as possible.  Otherwise you have to write your own fake and try
to duplicate all the business logic in the real deal and run the risk
of getting it wrong.  Either way you'd end up with the risk of getting
out-of-sync (if you don't leverage the production code).

So with a fake you still have to manage its state for at least some
tests (e.g. stick data into its DB) to satisfy each test's
preconditions.  That implies you need knowledge of how to manage the
fake's state, which isn't free, particularly for a large or complex
faked system.  So I guess fake vs. stub is still a trade-off.

>
>> Ultimately I just don't see how you can avoid depending on low-level
>> details ("closely tied to the implementation") in your tests and still
>> have confidence that you are testing things rigorously.  I think the
>
> I could perceive that on your original email, and it's precisely why
> I'm worried and responding to this thread.

I'm realizing that we're talking about the same thing from two
different points of view.  Your objection is to testing a function's
code in contrast to validating its outputs in response to inputs.  If
that's the case then I think we're on the same page.  I've just been
describing a function's low-level dependencies in terms of it's
implementation.  I agree that tests focused on the implementation
rather than the "contract" are fragile and misguided, though I admit
that I do it from time to time. :)

My point is just that the low-level API used by a function (regardless
of how) is an input for the function, even if often just an implicit
input.  Furthermore, the state backing that low-level API is
necessarily part of that input.  Code should be written with that in
mind and so should tests.

Using a fake for that input means you don't have to encode the
low-level business logic in each test (just any setup of the fake's
state).  You can be confident about the low-level behavior during
tests as matching production operation (as long as the fake
implementation is correct and bug free).  The potential downsides are
any performance costs of using the fake, maintaining the fake (if
applicable), and knowing how to manage the fake's state.
Consequently, there should be a mechanism to ensure that the fake's
behavior matches the real thing.

Alternately you can use a "stub" (what I was calling a fake) for that
input.  On the upside, stubs are lightweight, both performance-wise
and in terms of engineer time.  

Re: Feedback on a base "fake" type in the testing repo

2015-02-13 Thread Eric Snow
Thanks for the great feedback, Gustavo!  I have some follow-up inline.
It reflects my understanding of what you wrote, so don't hate me if
any of it reflects a *mis*understanding. :)  If anything I say *seems*
to imply some deeper criticism or veiled aspersion, just assume I
didn't actually mean to imply anything and take my statement/question
at face value. I'm a pretty straight-forward person and I likely just
phrased things poorly. :)

-eric

On Fri, Feb 13, 2015 at 6:00 AM, Gustavo Niemeyer  wrote:
> On Wed, Feb 11, 2015 at 4:53 PM, Eric Snow  wrote:
>> tl;dr Using fakes for testing works well so I wrote a base fake type. [1]
>>
>> While working on the GCE provider, Wayne and I started taking a
>> different approach to unit testing than the usual 1. expose an
external dependency as an unexported var; 2. export it in
>> export_test.go; 3. patch it out in tests. [2]  Instead we wrote an
>> interface for the provider's low-level API and implemented the
>> provider relative to that. [3]  Then in tests we used a fake
>> implementation of that interface instead of the concrete one.  Instead
>> of making the actual API requests, the fake simply tracked method
>> calls and controlled return values.
>
> This is a "mock object" under some well known people's terminology [1].

With all due respect to Fowler, the terminology in this space is
fairly muddled still. :)

>
> There are certainly benefits, but there are also very well known
> problems in that approach which should be kept in mind. We've
> experienced them very closely in the first Python-based implementation
> of juju, and I did many other times elsewhere. I would probably not
> have written [2] if I had a time machine.
>
> The most problematic aspect of this approach is that tests are pretty
> much always very closely tied to the implementation, in a way that you
> suddenly cannot touch the implementation anymore without also fixing a
> vast number tests to comply.

Let's look at this from the context of "unit" (i.e. function
signature) testing.  By "implementation" do you mean the
function you are testing, or the low-level API the function is using,
or both?  If the low-level API then it seems like the "real fake
object" you describe further on would help by moving at least part of
the test setup down out of the test and down into the fake.  However
aren't you then just as susceptible to changes in the fake with the
same maintenance consequences?

Ultimately I just don't see how you can avoid depending on low-level
details ("closely tied to the implementation") in your tests and still
have confidence that you are testing things rigorously.  I think the
best you can do is define an interface encapsulating the low-level
behavior your functions depend on and then implement them relative to
that interface and write your tests with that interface faked out.

Also, the testing world puts a lot of emphasis on branch coverage in
tests.  It almost sounds like you are suggesting that is not such an
important goal.  Could you clarify?  Perhaps I'm inferring too much
from what you've said. :)

> Tests organized in this fashion also tend
> to obfuscate the important semantics being tested, and instead of that
> you see a sequence of apparently meaningless calls and results out of
> context. Understanding and working on these tests just a month after
> you cooked them is a nightmare.

I think this is alleviated if you implement and test against an
interface that strictly defines the low-level API on which you depend.

>
> The perfect test breaks if and only if the real assumption being
> challenged changes. Anything walking away from this is decreasing the
> test quality.

Agreed.  The challenge is in managing the preconditions of each test
in a way that is robust to low-level changes but also doesn't
"obfuscate the important semantics being tested".  In my experience,
this is not solved by any single testing approach so a judiciously
chosen mix is required.

>
> As a recommendation to avoid digging a hole -- one that is pretty
> difficult to climb out of once you're in -- instead of testing method
> calls and cooking fake return values in your own test, build a real
> fake object: one that pretends to be a real implementation of that
> interface, and understands the business logic of it. Then, have
> methods on it that allow tailoring its behavior, but in a high-level
> way, closer to the problem than to the code.

Ah, I like that!  So to rephrase, instead of a type where you just
track calls and explicitly control return values, it is better to use
a type that implements your expectations about the low-level system,
exposed via the same API as the actual one?  This

Re: Feedback on a base "fake" type in the testing repo

2015-02-13 Thread Eric Snow
Thanks for the feedback. :)  Your points are helpful and I just need
some clarification.  Before that, though, I want to make sure we are
on the same page with the term "fake".  Unfortunately testing
nomenclature is a bit muddled so you might be thinking of something
(even slightly) different.  What gave me pause is you contrasted fakes
with doubles.  So perhaps you could briefly explain what you mean by
both in the context of Go.

As for me, by "fake" I mean a struct that implements an interface with
essentially no logic other than to keep track of method calls and
facilitate controlling their return values explicitly.  For examples
see the implementations for GCE and in `testing/fake.go`.  Thus in
tests a fake may be used in place of the concrete implementation of
the interface that would be used in production.

The onus is on the test writer to populate the fake with the correct
return values such that they would match the expected behavior of the
concrete implementation.  This contrasts with the approach (e.g. taken
by EC2) of plugging in an API server that simulates the necessary
portions of the low-level API. At that point you're testing your
simulated API server just as much as the code you actually want to
test.  Furthermore, you still have to explicitly control the return
values.  You're just doing it at a lower level.

Regardless, I'm convinced that testing needs to include both high
coverage via isolated unit tests and good enough coverage via "full
stack" integration tests.  Essentially we have to ensure that layers
work together properly and that low-level APIs work the way we expect
(and don't change unexpectedly).

However, the discussion of when fakes are appropriate is somewhat
orthogonal to that of how to use fakes in Go (and juju) and what they
should look like there.  I was hoping to focus on the latter, assuming
we agree that fakes are appropriate in at least some situations (which
it appears we do). :)

On Fri, Feb 13, 2015 at 5:22 AM, John Meinel  wrote:
> High level thoughts:
>
> 1. Using an interface{} instead of patching function pointers sounds like a
> great thing

Exactly! :) Basically that was the original motivator.

> 2. Writing a provider via faking out rather than implementing a double
> means that it is really hard to run the cross-provider interface tests
> (environs/jujutest), which means we run into what we have with Azure, where
> *some* of the functions return a different error than the one we expect (see
> the recent 'juju restore' regression). Our interface tests aren't amazing,
> but if we can push more on that, then we can avoid having implementations
> leak through their abstractions.

Could you elaborate on this?  I'm not terribly familiar with
environs/jujutest and hopefully we didn't need to take it into
consideration for the GCE provider.  It sounds like you are saying we
should have tests that tie into the livetests code (I noticed at least
the EC2 tests that do so).

So I'm otherwise not clear on how our use of fakes in the GCE provider
has an impact.  Wayne and I definitely want it be correct.
Furthermore, one goal we have is for the GCE provider to be the go-to
example for a provider implementation.  If we're missing something
then we want to correct that. [1]

> 3. I do realize that time-to-something-working is a lot faster with a Fake
> implementation, but ongoing support and maintenance might pay off more. (I
> love how quickly the GCE provider was available, I'm concerned if we know
> that it actually acts like EC2/Openstack when it comes to edge cases.)

By "pay off more" I presume you mean "cost more", based on the
context. :)  Are you saying that there is a uniform mechanism (via
environs/jujutest?) for checking all those edge cases for a provider?
If so, we certainly need to use it for the GCE provider.  However, I'm
not clear on how our use of fakes relates to that, other than that it
implies we did not use the aforementioned uniform mechanism.

-eric

[1] We've discussed these motivations on numerous occasions but Wayne
can chime in if I've mischaracterized them. :)

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Feedback on a base "fake" type in the testing repo

2015-02-11 Thread Eric Snow
On Wed, Feb 11, 2015 at 6:43 PM, Andrew Wilkins
 wrote:
> Looks okay in principle, though I'm not sure how much benefit it would
> add to the existing, tailored fakes in the codebase. It seems a little bit
> weird that error results are encapsulated, but not non-error results.

Yeah, I've tried to make that work several different ways to no avail.
It's messy since return types vary greatly and a mechanism to
accommodate them would require extra boilerplate in faked methods to
handle type conversions.  So it didn't seem worth bothering to
me.*  Error returns are a different story since there is only one type
to deal with (error).

-eric

* I suppose I could use reflection to make it all work out, but I'd
rather avoid the complexity (at least in this initial patch).



Feedback on a base "fake" type in the testing repo

2015-02-11 Thread Eric Snow
tl;dr Using fakes for testing works well so I wrote a base fake type. [1]

While working on the GCE provider, Wayne and I started taking a
different approach to unit testing than the usual 1. expose an
external dependency as an unexported var; 2. export it in
export_test.go; 3. patch it out in tests. [2]  Instead we wrote an
interface for the provider's low-level API and implemented the
provider relative to that. [3]  Then in tests we used a fake
implementation of that interface instead of the concrete one.  Instead
of making the actual API requests, the fake simply tracked method
calls and controlled return values.

Using the fake had a number of benefits:

1. our tests were very focused on what code gets exercised, meaning we
don't take responsibility for what amounts to testing our dependencies
at the same time as the code at hand.
2. the tests ran faster since they aren't making HTTP requests (even
if just to localhost).
3. the provider code isn't cluttered up with
vars-only-there-to-be-patched-out. [2]
4. the test code isn't cluttered with many separate s.PatchValue calls. [2]
5. setting up the faked return values was cleaner.
6. checking which methods were called (in order) along with the
arguments, was easier and cleaner.

In addition to all that, taking the fake approach required that we
encapsulate our low-level dependencies in a single interface.  This
was good because it forced us to spell out those dependencies.  That
helped us write the provider better.  The low-level interface also
made the code more maintainable since it makes the boundary layers
explicit, and formats it in a concise display (the interface
definition).

So I've taken the base fake type from the GCE patch and pulled it into
a patch against the testing repo. [1]  I've made some adjustments based
on use cases I've had in subsequent patches.  Nate has the bright idea
of getting some feedback from the team before landing anything "since
it's the kind of thing that'll start popping up everywhere, and I want
to make sure it passes muster with others."

I'm not suggesting that fakes are the solution to all testing problems
so let's please avoid that discussion. :)  Neither am I proposing that
any existing tests should be converted to use fakes.  Nor am I
suggesting the need for any policy recommending the use of fakes.  As
well, we still need to make sure we have enough "full stack" test
coverage to be confident about integration between layers.  Relatedly,
the concrete implementation of low-level interfaces needs adequate
test coverage to ensure the underlying APIs (remote, etc.) continue to
match expected behavior.  So yeah, neither fakes nor isolated unit
tests meet all our testing needs.  I just think the base fake type
I've written will help facilitate a cleaner approach where it's
appropriate.

Thoughts?

-eric

[1] https://github.com/juju/testing/pull/48
[2] We were still feeling out the approach so we still took the
var-and-patch in some cases.
[3] See "gceConnection" in http://reviews.vapour.ws/r/771/diff/#46.



Re: A cautionary tale of names

2015-01-12 Thread Eric Snow
On Mon, Jan 12, 2015 at 2:12 PM, Eric Snow  wrote:
> In the case of GCE, I'd rather keep the human-readable instance ID the
> same.  To make that work, we'd store the env UUID in the instance
> metadata and make the implementation of environ.Instances (and
> AllInstances) check that, in addition to matching against the env
> name.  See any problems with that?

Wayne just pointed out that we would still end up with a problem due to
the name colliding when creating the new instance.  So having the UUID
in the name is still the better way to go.  Furthermore, Wayne also
noted that having the UUID in the instance ID would make folks more
cautious about directly messing around with the instance. :)

-eric



Re: A cautionary tale of names

2015-01-12 Thread Eric Snow
On Sun, Jan 11, 2015 at 8:17 PM, Tim Penhey  wrote:
> Hi folks,
>
> I hit an issue that surprised me at the time, but became obvious to me
> when I thought about our implementation.
>
> I managed to destroy my production environment by accident, and this is
> always a problem.
>
> What I was trying to do was to bring up my new production environment in
> parallel and then switch the DNS when it was up and working.
>
> I had a working environment, which I then when and renamed the .jenv
> file for.  I then bootstrapped the environment again.
>
> This had the side-effect of destroying all the machines that were
> running previously.  I'm using the trusty released juju packages, so
> 1.20.14.
>
> This is because the EC2 provider tags the machines with the environment
> name, and not environment UUID.  I think we should change this ASAP.

In the case of GCE, I'd rather keep the human-readable instance ID the
same.  To make that work, we'd store the env UUID in the instance
metadata and make the implementation of environ.Instances (and
AllInstances) check that, in addition to matching against the env
name.  See any problems with that?

I'm not sure the same would work for other resources that rely on the
env name, like firewalls (in the case of GCE).

-eric



Re: A cautionary tale of names

2015-01-12 Thread Eric Snow
On Sun, Jan 11, 2015 at 8:17 PM, Tim Penhey  wrote:
> This is because the EC2 provider tags the machines with the environment
> name, and not environment UUID.  I think we should change this ASAP.

Tim, I'm glad you brought this up.  We have followed the lead of the
EC2 provider in this regard, so we'll need to fix this.

Perhaps other providers need fixing too?

On Mon, Jan 12, 2015 at 8:43 AM, Gustavo Niemeyer  wrote:
> A few quick notes:
>
> - Having an understandable name in a resource is useful

Agreed.  For GCE we may have both the env name and UUID.

>
> - Being a tag means it's possible to have a name *and* a UUID

What do you mean by this?  Do tags incorporate the env UUID somehow?

-eric



Re: local provider

2014-12-12 Thread Eric Snow
On Fri, Dec 12, 2014 at 9:26 AM, Nate Finch  wrote:
> I think one easy thing we could do to better indicate the purpose of the
> local provider is to simply rename it.  If we named it the "demo" provider,
> it would be much more clear to users that it is not expected to be used for
> deploying a production environment. This could be as easy as aliasing
> "local" to "demo" and making the default environments.yaml print out with
> the local provider renamed to "demo".  (feel free to s/demo/testing/ or any
> other "not ready for production" word)

I'd favor keeping "local" in the name, e.g. "local-only" or
"local-restricted".  A really explicit name would be even better, e.g.
"local-testing-dev-and-demos" (seriously).  Then "local" would just
alias back to it for backward compatibility.  Do we support emitting
warnings based on the environment type?  If so, we could do that for
"local".

Then to make the purpose clear we *could* register more aliases for
it. Some ideas (just brainstorming):

"demo"
"deployment-testing"
"development"
"isolated"

I'm not sure just "testing" is clear enough, particularly in relation
to the "dummy" provider.

-eric

p.s. I'd love to see local provider fixed up to work like the rest,
but I'll start another thread about that to avoid getting side-tracked
here.



Re: Things which may be of interest in Go v1.4

2014-12-11 Thread Eric Snow
Thanks for writing this up, Katherine!

On Thu, Dec 11, 2014 at 9:21 AM, Katherine Cox-Buday
 wrote:
> go generate
> https://golang.org/doc/go1.4#gogenerate
> This is *very* powerful and could reduce the number of copy/paste snippets,
> or unsafe reflection code we have to write. For those of you who are
> familiar with the lisps, this is essentially macros. You write code that
> examines Go expressions and does something. This code is triggered by
> //go:generate command arg comments in the code. As a quick example, our
> StringSet type could be generated and even expanded to encompass any type.

Cool.  So generics via macros?

> https://golang.org/doc/go1.4#internalpackages
> Just what it sounds like. We can now hide implementation details from
> importers. Probably more useful for the ancillary Juju packages as Juju Core
> is probably not imported very much.

Nice.  This would be good for our various testing sub-packages, among
other things.

-eric



Re: Too space, or not two space

2014-11-19 Thread Eric Snow
On Wed, Nov 19, 2014 at 10:31 AM, Eric Snow  wrote:
> Apparently I'm not the only one.

FWIW, Python's PEP 8 dictates the use of two spaces in its stdlib. [1]
Presumably this is because code is typically(?)
mono-spaced fonts.  Regardless, both are at least part of why I do it.

-eric


[1] https://www.python.org/dev/peps/pep-0008#comments



Re: Too space, or not two space

2014-11-19 Thread Eric Snow
On Tue, Nov 18, 2014 at 5:13 PM, David Cheney
 wrote:
> To quote Butterick's
> http://practicaltypography.com/one-space-between-sentences.html

And to quote that same page:

"If you have to use a typewriter-style font, you can use two spaces
after sentences. Otherwise, don’t."

I use a mono-space font.

Furthermore, I disagree with the statement "In the second paragraph,
the extra spaces disrupt the balance of white space."  I find that
having each sentence be visually distinct to help with readability.
Apparently I'm not the only one.  From what I can tell, studies on the
matter are inconclusive. [1]

Regardless, I don't think it matters all that much, particularly since
(as Roger pointed out) godoc automatically converts to single-space
for you.  Like Nate, I think there are more significant things to
worry about, though perhaps not as controversial. :)

It seems like the overall consensus is to use single space.  If that
means it becomes a rule for our code then naturally I'll go along.
Otherwise I'd rather not bother.  To be honest, this all seems like a
waste of time.

-eric


[1] 
http://en.wikipedia.org/wiki/Sentence_spacing#Effects_on_readability_and_legibility



github-reviewboard integration for more repos

2014-11-17 Thread Eric Snow
Does anyone object to adding the github-reviewboard integration
webhook to the remaining repos that reviewboard knows about?
Currently only juju and utils have the webhooks in place.

-eric



Re: reviewboard update

2014-11-17 Thread Eric Snow
On Mon, Nov 17, 2014 at 5:10 AM, Dimiter Naydenov
 wrote:
> I can confirm RB diffs for backports to 1.21 get generated correctly
> now, and the PR description is updated to include a link to the RB diff.

Thanks for reporting on this.

>
> There's one issue however -- the link in the PR is
> "https://reviews.vapour.ws/r/", rather than "http://..";, which
> causes some problems:

I've switched it to HTTP.

-eric



reviewboard update

2014-11-14 Thread Eric Snow
FYI, I was able to solve 3 reviewboard-github integration issues today:

1. pull requests for branches other than master now work (e.g. 1.21 backports)
2. no more hitting rate limits (5000 requests/hour limit instead of 60)
3. pull request bodies now get updated with a link to the new review request

If you have any trouble with any of these please let me know.  Thanks!

-eric



Re: Please stop adding dependencies to the state package

2014-11-13 Thread Eric Snow
On Thu, Nov 13, 2014 at 4:05 PM, David Cheney
 wrote:
> Hello,
>
> Please stop adding dependencies to the state package.
>
> The dependency tree for state must go
>
> mgo <- state <- things that depend on state
>
> At the moment state depends on things like backups, multiwatchers,
> presence, toolstorage.
>
> This is wrong.
>
> Eric, i'm not picking on you here, it's just that b sorts first, and
> this is the one I saw today. There are many other examples, including
> the multiwatcher which also break this model
>
> If you add functionality to state, then the code has to go ***INSIDE**
> state, or be written to take a *state.State.
>
> Please help me out by not adding more dependencies to state.

Thanks for pointing this out, Dave.  I'll fix this for backups.

-eric



Re: review board not syncing?

2014-11-12 Thread Eric Snow
On Tue, Nov 11, 2014 at 12:28 PM, Jesse Meek  wrote:
> The latest three reviews on GitHub (#1103,#1102,#1101) I cannot see in
> Review Board. Do we have a loose wire?

For 1101 at least, it's in reviewboard (402).

I'll look into it.  I'm pretty sure the github integration struggles
with large patches (perhaps due to GH API throttling).

-eric



Re: how to update dependencies.tsv

2014-10-31 Thread Eric Snow
On Fri, Oct 31, 2014 at 2:11 AM, roger peppe  wrote:
> On 30 October 2014 14:34, Eric Snow  wrote:
>> On Thu, Oct 30, 2014 at 4:40 AM, roger peppe  
>> wrote:
>>> It's a pity then that if you "go get" a package, origin is the repo
>>> you've branched from. There is always one of those, but you
>>> don't necessary have a fork of the repo yourself.
>>
>> Why wouldn't you want go get to fetch from your clone by default?
>
> What Andrew said.
>
> Also, the only information that "go get" has when a repository
> doesn't exist locally is the home of that repository.
> So when "go get" gets a new package, the only place
> that origin can point to is that repository's home.

Right.  I'm not suggesting that "go get" (or godeps) change the
remotes for a repo that didn't already have a local clone, nor even
for any existing local clones.  In fact, I'm saying that the current
behavior of "go get" is just fine.  It shouldn't need to worry about
any other remotes than "origin", regardless of whether the local clone
already existed.  Neither should it worry about where "origin" is
pointing, nor about syncing origin with upstream.

If someone has pointed "origin" to point to their own remote [1] clone
(which is what the juju CONTRIBUTING doc suggests), then they are
responsible for keeping that synced up with upstream.  If they don't
and godeps has issues because it can't find the revision it wants,
that isn't the fault of godeps or "go get".

I believe the request from earlier (Tim?) is that godeps help out in
that case. [2][3]  However, that should not require that "go get" have
any awareness of remotes.

>
> When starting work on a package, the first thing I will do is
> invariably "go get" that package, which will fetch all new dependencies
> too. For the majority of those dependencies, I won't already
> have forked them on github. If I come to start work on
> one of those dependencies, it will already have an "origin"
> remote pointing to the source of the repo.
>
> So it's much easier IMHO just to go with the flow and keep origin
> always meaning the same thing, whether you've forked the
> repo or not.

Right.  That is fine and should not change, nor need to.  It's the
case where you did change your remotes that godeps *could* be more
helpful, regardless of whether or not it's worth doing.

-eric

[1] I suppose you could also point "origin" to another repo elsewhere
on your local filesystem.
[2] I was going to pull this conversation into a feature request on
the godeps issue tracker, since it is basically a distraction from the
original thread topic.  However, I didn't find any such tracker.  The
launchpad project does not seem to have one enabled and there is no
github repo.
[3] Regarding the feature request, here's a possible solution.  If the
revision for a repo is not found in "origin" (after running "go get"),
look for it in "upstream" (if that remote exists).  If "upstream"
exists but the revision isn't there, give a warning that "upstream"
may be out of date.  I suppose godeps could also try fetching from
upstream first (without pushing any changes to upstream).



Re: how to update dependencies.tsv

2014-10-31 Thread Eric Snow
On Thu, Oct 30, 2014 at 8:36 PM, Andrew Wilkins
 wrote:
> On Thu, Oct 30, 2014 at 10:34 PM, Eric Snow  wrote:
>> Why wouldn't you want go get to fetch from your clone by default?
>> Naturally master on origin should match upstream, which may require
>> you to pull-upstream-push-origin.
>
> Why naturally? I don't use "master" in my fork of juju, I just use
> feature/bug branches.
> The "master" branch in my repo tracks upstream/master.

Agreed.  I don't commit anything to master either.  However, all my
branches are based on master (or another upstream branch), which I
expect is the case for everyone.  If local master isn't in sync with
upstream (regardless of what you call the remote) then I don't see how
there can be as much confidence that any "feature" branches are valid
for merging upstream.  Hence "naturally".  Sorry I was unclear or if I
misunderstood.

-eric



Re: deprecating the juju-backup (and juju-restore) CLI plugin

2014-10-31 Thread Eric Snow
On Fri, Oct 31, 2014 at 8:35 AM, Curtis Hovey-Canonical
 wrote:
> On Thu, Oct 30, 2014 at 2:10 PM, Eric Snow  wrote:
>> What would be the best approach for deprecating the old backup plugin?
>>  I was going to simply gut it (it's a bash script) and stick in a
>> deprecation warning message and a call to "juju backups create
>> --download".  However, jam rightly pointed out to me that this may be
>> a sticky situation (as described above).  I'd really like to find a
>> way to eliminate the old backup implementation.  Thoughts?
>
> Look before you leap. Call juju status and check that the version is
> 1.21.+ and use new command, else old command.

Good idea.

> We expect new clients to
> manage old state-servers during transitions. The transition my be long
> if the enterprise is not upgrading their envs. I think we are stuck
> with the old implementation for as long as we support trusty.

Yeah, that's what I figured but hoped wasn't the case.

>
> Users do not know that backup is a bash script...
> juju backup
> is the command, so we can write an alternate plugin that checks the
> state-server version, then either calls to new command or the old
> script which is renamed, If we rename/change juju-backup.bash, we need
> a simultaneous packaging change.

Since it is transparent to users and otherwise mostly irrelevant, I'll
leave the filename alone and add the conditional operation to the
existing script.

Is it okay to add a deprecation warning message (to stderr) that
instructs "juju backup" users to use "juju backups create" instead?
I'm not sure what precedent there is for doing so with our CLI.

-eric



deprecating the juju-backup (and juju-restore) CLI plugin

2014-10-30 Thread Eric Snow
The new "juju backups create" can be used instead of "juju backup" (a
CLI plugin).  So you would think we could deprecate and later remove
the old plugin. Unfortunately, "juju backups create" won't work with
1.20 or earlier API servers.  So it's not quite as simple as I'd hoped
as long as newer clients will be connecting to older servers.

What would be the best approach for deprecating the old backup plugin?
 I was going to simply gut it (it's a bash script) and stick in a
deprecation warning message and a call to "juju backups create
--download".  However, jam rightly pointed out to me that this may be
a sticky situation (as described above).  I'd really like to find a
way to eliminate the old backup implementation.  Thoughts?

-eric



Re: mongodump/mongorestore and oplog.bson

2014-10-30 Thread Eric Snow
On Thu, Oct 30, 2014 at 9:38 AM, Gustavo Niemeyer  wrote:
> The source code of mongodump lives at:
>
> https://github.com/mongodb/mongo-tools/tree/master/mongodump
>
> It is now written in Go, using mgo. Should be easy to find out any details
> you need there.

Perfect.  Thanks.

-eric



mongodump/mongorestore and oplog.bson

2014-10-30 Thread Eric Snow
When running mongodump with the --oplog, you end up getting an
oplog.bson file dumped, along with a directory for each dumped
database.  mongorestore likewise has an --oplogReplay option, which
presumably causes it to use that oplog.bson file.

Who could tell me about that oplog.bson file?  In my testing for
backups, it always comes up empty.  If I can always expect it to be
empty then I'll be a happy camper.  Otherwise, what are the cases
where it would not be empty and is there a safe way I could parse/edit
oplog.bson in those cases?  My guess is that it will be non-empty when
mongodump is run while mongo has writes queued up or when requests
come in while mongodump is running.  Regardless, any insight here
would be helpful.  Thanks!
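
Since the expectation (per this thread) is that oplog.bson comes up empty, a backup tool could verify that assumption explicitly rather than rely on it.  A minimal sketch, assuming mongodump's convention of writing oplog.bson at the top of the dump directory:

```python
import os

def oplog_is_empty(dump_dir):
    """Return True if the dump's oplog.bson is absent or zero bytes.

    A non-empty oplog.bson means writes happened while mongodump was
    running and would need to be replayed via mongorestore
    --oplogReplay; an empty one means the dump needs no replay.
    """
    path = os.path.join(dump_dir, "oplog.bson")
    return not os.path.exists(path) or os.path.getsize(path) == 0
```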

-eric



Re: how to update dependencies.tsv

2014-10-30 Thread Eric Snow
On Thu, Oct 30, 2014 at 4:40 AM, roger peppe  wrote:
> It's a pity then that if you "go get" a package, origin is the repo
> you've branched from. There is always one of those, but you
> don't necessary have a fork of the repo yourself.

Why wouldn't you want go get to fetch from your clone by default?
Naturally, master on origin should match upstream, which may require
you to pull-upstream-push-origin.  For local clones in my $GOPATH
where I'm not doing development (i.e. ones created by go get), I don't
have upstream set and origin still points to github (or wherever).  So
the behavior of go get works fine for me.

As to making godeps honor the "upstream" remote, the focus for godeps
should be on what's in each official repo.  The revisions in our
dependencies.tsv should be coming only from there.  If it's
conventional to use "upstream" for those official remotes (it *is* our
recommended convention in {juju}/CONTRIBUTING), then it would be nice
for godeps to try it first.  The alternative is that folks be more
diligent about making sure their "origin" clones are up-to-date with
upstream, which is sometimes easy to do before running godeps.

-eric



Re: Menno's reviewer graduation

2014-10-29 Thread Eric Snow
On Wed, Oct 29, 2014 at 7:13 PM, Tim Penhey  wrote:
> Hey there fellow Juju developers,
>
> I'd like to announce Menno's reviewer graduation.  I have been very
> happy with the quality and understanding shown in Menno's reviews.
>
> Congratulations.

Good job!

-eric



Re: per-database mongodump

2014-10-28 Thread Eric Snow
On Mon, Oct 27, 2014 at 9:26 PM, John Meinel  wrote:
> What are the tables we're wanting to filter? (charm storage because of its
> size?)

The presence DB and a new one for backups.  We actually do want to
back up the blobstore DB (charms/tools).

> Certainly 3 and 4 aren't reasonable if the point is to avoid having 4GB of
> charm data in the backup.

Yeah, I'd rather avoid that if possible.  It would also make the
restore situation messier.  We should be able to keep unnecessary
(even invalid) data out of the backup archive.

>
> If we went with (1) for consistency sake, we should really go back to "stop
> the db, dump to a backup".

Yep.  Alternatively we could lock the databases.  Either way it would
mean juju would be unresponsive during backup.  I suppose when we have
a replicaset we could take one member offline to make the backup, but
I'd rather avoid that sort of complication.

> It is also sensitive to us adding a collection and forgetting to include it
> in the dump. (since it is a whitelist, right?)

I've already implemented option 1 (http://reviews.vapour.ws/r/268/).
The patch actually uses a blacklist, taking advantage of
mgo.Session.DatabaseNames().
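
The blacklist step amounts to filtering the output of mgo.Session.DatabaseNames() before building the --db arguments.  A sketch of that filter, with the default blacklist taken from the databases named in this thread (treat the names as illustrative):

```python
def databases_to_dump(all_db_names, ignored=("presence", "backups")):
    """Filter a DatabaseNames()-style list down to the databases
    worth dumping, preserving order."""
    skip = set(ignored)
    return [name for name in all_db_names if name not in skip]
```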

>
> I'd really like (2) as a blacklist option, but I don't know how hard that is
> to do, and how much we expend to dump GB of charm data just to filter them
> out again.

mongodump actually dumps each database to its own directory.  If
necessary, the trick would be to hack the dumped oplog to remove
references to the ignored databases.  I just need to verify that
mongorestore --oplogReplay will be happy with that, or even if
mongorestore offers us any other route.  I'll be taking some time
today to look into it.

Thanks for the feedback.  After hearing your thoughts and taking
another look with fresh eyes, option 2 seems much more feasible and
palatable.  My only concern is that it relies on what amounts to
implementation details of mongodump (and mongorestore).
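
Since mongodump writes each database to its own subdirectory, the pruning half of option 2 could look like the sketch below.  Note it only handles the directory layout; as discussed above, the dumped oplog may still reference the removed databases and would need separate handling:

```python
import os
import shutil

def prune_dump(dump_dir, ignored):
    """Remove the per-database subdirectories for ignored databases
    from a mongodump output directory, leaving oplog.bson and the
    remaining databases untouched.  Returns the names removed."""
    removed = []
    for name in sorted(os.listdir(dump_dir)):
        path = os.path.join(dump_dir, name)
        if name in ignored and os.path.isdir(path):
            shutil.rmtree(path)
            removed.append(name)
    return removed
```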

-eric

>
> John
> =:->
>
> On Tue, Oct 28, 2014 at 3:07 AM, Eric Snow  wrote:
>>
>> For backup we currently call mongodump with the --oplog option.  This
>> dumps all databases at the same moment in time, which gives us
>> assurances about the consistency of each database.  However, we want
>> to exclude some databases.  Unfortunately --oplog is not compatible
>> with the --db option.
>>
>> I'm looking for options here.  I'm already considering, but am not
>> satisfied with, the following:
>>
>> 1. (mongodump --db ...) Forgo the moment-in-time guarantee of --oplog
>> (seems risky).
>> 2. (mongodump --oplog ...) Manually remove the undesired databases
>> from the dumped data (possible? risky!).
>> 3. (mongodump --oplog ...) Skip the undesired databases when calling
>> mongorestore (possible?).
>> 4. (mongodump --oplog ...) Clear the undesired databases after calling
>> mongorestore.
>>
>> The problem with 3 and 4 is that the backup archive will contain a
>> bunch of erroneous data (the dumps of the databases we want to
>> ignore).



Re: reviewboard - most recent diff by default?

2014-10-28 Thread Eric Snow
On Mon, Oct 27, 2014 at 9:05 PM, Jesse Meek  wrote:
> Is it possible and preferable to show the most recent diff by default?

If you mean instead of showing the "reviews" page by default,
ReviewBoard doesn't support that out of the box.  Certainly we could
customize RB to do so, and for all I know it would be easy.

-eric



per-database mongodump

2014-10-27 Thread Eric Snow
For backup we currently call mongodump with the --oplog option.  This
dumps all databases at the same moment in time, which gives us
assurances about the consistency of each database.  However, we want
to exclude some databases.  Unfortunately --oplog is not compatible
with the --db option.

I'm looking for options here.  I'm already considering, but am not
satisfied with, the following:

1. (mongodump --db ...) Forgo the moment-in-time guarantee of --oplog
(seems risky).
2. (mongodump --oplog ...) Manually remove the undesired databases
from the dumped data (possible? risky!).
3. (mongodump --oplog ...) Skip the undesired databases when calling
mongorestore (possible?).
4. (mongodump --oplog ...) Clear the undesired databases after calling
mongorestore.

The problem with 3 and 4 is that the backup archive will contain a
bunch of erroneous data (the dumps of the databases we want to
ignore).

-eric



Re: reviewboard-github integration

2014-10-21 Thread Eric Snow
On Tue, Oct 21, 2014 at 10:12 AM, Eric Snow  wrote:
> For now I've hard-coded adding juju-team.  If anyone still has trouble
> with this please let me know.

The issue with not finding files in the repo should be fixed now.
This means that auto-generated review requests should have diffs and
be published automatically.  If you have any trouble with it, let me
know.

-eric



Re: reviewboard-github integration

2014-10-21 Thread Eric Snow
On Tue, Oct 21, 2014 at 3:40 AM, Michael Foord
 wrote:
>
> On 20/10/14 22:38, Eric Snow wrote:
>>
>> This should be resolved now.  I've verified it works for me.  If it
>> still impacts anyone, just let me know.
>
>
> I still have the issue I'm afraid. No reviewer set, no diff.
>
> http://reviews.vapour.ws/r/211/

For now I've hard-coded adding juju-team.  If anyone still has trouble
with this please let me know.

-eric



Re: reviewboard-github integration

2014-10-21 Thread Eric Snow
On Tue, Oct 21, 2014 at 11:40 AM, Michael Foord
 wrote:
>
> On 20/10/14 22:38, Eric Snow wrote:
>>
>> This should be resolved now.  I've verified it works for me.  If it
>> still impacts anyone, just let me know.
>
>
> I still have the issue I'm afraid. No reviewer set, no diff.
>
> http://reviews.vapour.ws/r/211/
>
> Michael

:(

The frustrating part is that I haven't been impacted by this.  I'll
hack a solution in the immediate term and go from there.

-eric



Re: reviewboard-github integration

2014-10-20 Thread Eric Snow
This should be resolved now.  I've verified it works for me.  If it
still impacts anyone, just let me know.

-eric

On Mon, Oct 20, 2014 at 7:34 PM, Eric Snow  wrote:
> Yeah, this is the same issue that Ian brought up.  I'm looking into
> it.  Sorry for the pain.
>
> -eric
>
> On Mon, Oct 20, 2014 at 5:31 PM, Dimiter Naydenov
>  wrote:
>> -BEGIN PGP SIGNED MESSAGE-
>> Hash: SHA1
>>
>> Hey Eric,
>>
>> Today I tried proposing a PR and the RB issue (#202) was created, but
>> it didn't have "Reviewers" field set (as described below), it wasn't
>> published (due to the former), but MOST importantly didn't have a diff
>> uploaded. After fiddling around with rbt I managed to do:
>> $ rbt diff > ~/patch
>> (while on the proposed feature branch)
>>
>> And then went to the RB issue page and manually uploaded the generated
>> diff and published it.
>>
>> So most definitely the hook generating RB issues has to upload the
>> diff as well :)
>>
>> It's coming together, keep up the good work!
>>
>> Cheers,
>> Dimiter
>>
>> On 20.10.2014 16:53, Eric Snow wrote:
>>> On Mon, Oct 20, 2014 at 6:06 AM, Ian Booth
>>>  wrote:
>>>> Hey Eric
>>>>
>>>> This is awesome, thank you.
>>>>
>>>> I did run into a gotcha - I created a PR and then looked at the
>>>> Incoming review queue and there was nothing new there. I then
>>>> clicked on All in the Outgoing review queue and saw that the
>>>> review was unpublished. I then went to publish it and it
>>>> complained at least one reviewer was needed. So I had to fill in
>>>> "juju-team" and all was good.
>>>>
>>>> 1. Can we make it so that the review is published automatically?
>>>> 2. Can we pre-fill "juju-team" as the reviewer?
>>>
>>> Good catch.  The two are actually related.  The review is
>>> published, but that fails because no reviewer got set.  I'll get
>>> that fixed.
>>>
>>> -eric
>>>
>>
>>
>> - --
>> Dimiter Naydenov 
>> juju-core team
>> -BEGIN PGP SIGNATURE-
>> Version: GnuPG v1
>>
>> iQEcBAEBAgAGBQJURSrnAAoJENzxV2TbLzHw0BQH/16P4qPDI28kkGs398qRKY5s
>> eUtcHBpYs+JuLV2ZA0LjCpTds89RBDW6cKsxcfXxaAmawIb0KHh920VzKb1Wl2OT
>> z/iMOF2q91LnV58dqPf7mZjHaT1LPRdSRxg6aAZW/mjexwVRtRDT4Asd5w6JpKrH
>> 9Tkqfy86OilJ70X8qNbegvjJrBAttwoLLI4jwJq4dNWUbWCBbuumryh0k6+GlmNH
>> NiKbpi45pPy/RIFVA7ewbLIOpUXleHm5NIGlA/liZOMHpz0w5QHK3FYGLuGMNzQC
>> fq4qW6rfb1ITdr7XWsA3gooV6FUndw3mbNsod3QgSv82RDA6GGECHeYimGG94/g=
>> =POJ4
>> -END PGP SIGNATURE-
>>



Re: reviewboard-github integration

2014-10-20 Thread Eric Snow
Yeah, this is the same issue that Ian brought up.  I'm looking into
it.  Sorry for the pain.

-eric

On Mon, Oct 20, 2014 at 5:31 PM, Dimiter Naydenov
 wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> Hey Eric,
>
> Today I tried proposing a PR and the RB issue (#202) was created, but
> it didn't have "Reviewers" field set (as described below), it wasn't
> published (due to the former), but MOST importantly didn't have a diff
> uploaded. After fiddling around with rbt I managed to do:
> $ rbt diff > ~/patch
> (while on the proposed feature branch)
>
> And then went to the RB issue page and manually uploaded the generated
> diff and published it.
>
> So most definitely the hook generating RB issues has to upload the
> diff as well :)
>
> It's coming together, keep up the good work!
>
> Cheers,
> Dimiter
>
> On 20.10.2014 16:53, Eric Snow wrote:
>> On Mon, Oct 20, 2014 at 6:06 AM, Ian Booth
>>  wrote:
>>> Hey Eric
>>>
>>> This is awesome, thank you.
>>>
>>> I did run into a gotcha - I created a PR and then looked at the
>>> Incoming review queue and there was nothing new there. I then
>>> clicked on All in the Outgoing review queue and saw that the
>>> review was unpublished. I then went to publish it and it
>>> complained at least one reviewer was needed. So I had to fill in
>>> "juju-team" and all was good.
>>>
>>> 1. Can we make it so that the review is published automatically?
>>> 2. Can we pre-fill "juju-team" as the reviewer?
>>
>> Good catch.  The two are actually related.  The review is
>> published, but that fails because no reviewer got set.  I'll get
>> that fixed.
>>
>> -eric
>>
>
>
> - --
> Dimiter Naydenov 
> juju-core team
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v1
>
> iQEcBAEBAgAGBQJURSrnAAoJENzxV2TbLzHw0BQH/16P4qPDI28kkGs398qRKY5s
> eUtcHBpYs+JuLV2ZA0LjCpTds89RBDW6cKsxcfXxaAmawIb0KHh920VzKb1Wl2OT
> z/iMOF2q91LnV58dqPf7mZjHaT1LPRdSRxg6aAZW/mjexwVRtRDT4Asd5w6JpKrH
> 9Tkqfy86OilJ70X8qNbegvjJrBAttwoLLI4jwJq4dNWUbWCBbuumryh0k6+GlmNH
> NiKbpi45pPy/RIFVA7ewbLIOpUXleHm5NIGlA/liZOMHpz0w5QHK3FYGLuGMNzQC
> fq4qW6rfb1ITdr7XWsA3gooV6FUndw3mbNsod3QgSv82RDA6GGECHeYimGG94/g=
> =POJ4
> -END PGP SIGNATURE-
>



Re: reviewboard-github integration

2014-10-20 Thread Eric Snow
On Mon, Oct 20, 2014 at 6:06 AM, Ian Booth  wrote:
> Hey Eric
>
> This is awesome, thank you.
>
> I did run into a gotcha - I created a PR and then looked at the Incoming
> review queue and there was nothing new there. I then clicked on All in the
> Outgoing review queue and saw that the review was unpublished. I then went
> to publish it and it complained at least one reviewer was needed. So I had
> to fill in "juju-team" and all was good.
>
> 1. Can we make it so that the review is published automatically?
> 2. Can we pre-fill "juju-team" as the reviewer?

Good catch.  The two are actually related.  The review is published,
but that fails because no reviewer got set.  I'll get that fixed.

-eric



Re: reviewboard-github integration

2014-10-18 Thread Eric Snow
And of course if you see any problems please let me know. :)

-eric

On Sat, Oct 18, 2014 at 7:38 AM, Eric Snow  wrote:
> With the switch to Reviewboard we introduced extra steps to our
> workflow (mostly involving rbt).  This in turn made things more
> difficult for new/existing contributors.  I've been able to take some
> time in the last couple weeks to improve the situation by adding some
> integration between github and reviewboard.
>
> As of tonight that integration has reached an initial milestone.  The
> barriers to contribution introduced by Reviewboard are essentially
> gone.  Furthermore, the automation means the review requests should
> stay in sync with the pull requests.  So I'm happy to report that,
> unless you are chaining branches (which github PRs don't support
> anyway), you shouldn't need to use rbt anymore.
>
> Currently:
>
> * a new PR automatically triggers the creation of a new review request
> * the review request has a link back to the pull request
> * updates to the PR (i.e. pushes to your branch) automatically trigger
> an update to the review request
> * closing (discard/merge) a PR automatically triggers closing the
> corresponding review request
> * re-opening a PR automatically triggers re-opening the corresponding
> review request
> * a reviewboard user gets created if there wasn't one already
>
> Nearly working:
>
> * after the review request is created, a link to it is added to a pull
> request comment
>
> Future work:
>
> * support patch queues/chained branches/etc. (via trigger in PR summary)
> * add reviewboard support to the merge bot (check for ship-it before
> doing anything)
>
> Will not happen:
>
> * automatically merge PR when given ship-it
> * PR comments (including review comments) will not be pushed to the
> corresponding review request
> * likewise reviewboard comments won't be pushed up to the corresponding PR
>
> I can't promise that the "future" work will happen in the short term,
> but I'll post any updates as they come.  Enjoy!
>
> -eric



Re: reviewboard-github integration

2014-10-17 Thread Eric Snow
Also, I did this as a reviewboard extension.  All the code is online
(BSD license): https://bitbucket.org/ericsnowcurrently/rb_webhooks_extension.
I haven't charmed it up, but it should be pretty easy to adapt the
charm I wrote for rb_oauth.

-eric

On Sat, Oct 18, 2014 at 7:38 AM, Eric Snow  wrote:
> With the switch to Reviewboard we introduced extra steps to our
> workflow (mostly involving rbt).  This in turn made things more
> difficult for new/existing contributors.  I've been able to take some
> time in the last couple weeks to improve the situation by adding some
> integration between github and reviewboard.
>
> As of tonight that integration has reached an initial milestone.  The
> barriers to contribution introduced by Reviewboard are essentially
> gone.  Furthermore, the automation means the review requests should
> stay in sync with the pull requests.  So I'm happy to report that,
> unless you are chaining branches (which github PRs don't support
> anyway), you shouldn't need to use rbt anymore.
>
> Currently:
>
> * a new PR automatically triggers the creation of a new review request
> * the review request has a link back to the pull request
> * updates to the PR (i.e. pushes to your branch) automatically trigger
> an update to the review request
> * closing (discard/merge) a PR automatically triggers closing the
> corresponding review request
> * re-opening a PR automatically triggers re-opening the corresponding
> review request
> * a reviewboard user gets created if there wasn't one already
>
> Nearly working:
>
> * after the review request is created, a link to it is added to a pull
> request comment
>
> Future work:
>
> * support patch queues/chained branches/etc. (via trigger in PR summary)
> * add reviewboard support to the merge bot (check for ship-it before
> doing anything)
>
> Will not happen:
>
> * automatically merge PR when given ship-it
> * PR comments (including review comments) will not be pushed to the
> corresponding review request
> * likewise reviewboard comments won't be pushed up to the corresponding PR
>
> I can't promise that the "future" work will happen in the short term,
> but I'll post any updates as they come.  Enjoy!
>
> -eric



reviewboard-github integration

2014-10-17 Thread Eric Snow
With the switch to Reviewboard we introduced extra steps to our
workflow (mostly involving rbt).  This in turn made things more
difficult for new/existing contributors.  I've been able to take some
time in the last couple weeks to improve the situation by adding some
integration between github and reviewboard.

As of tonight that integration has reached an initial milestone.  The
barriers to contribution introduced by Reviewboard are essentially
gone.  Furthermore, the automation means the review requests should
stay in sync with the pull requests.  So I'm happy to report that,
unless you are chaining branches (which github PRs don't support
anyway), you shouldn't need to use rbt anymore.

Currently:

* a new PR automatically triggers the creation of a new review request
* the review request has a link back to the pull request
* updates to the PR (i.e. pushes to your branch) automatically trigger
an update to the review request
* closing (discard/merge) a PR automatically triggers closing the
corresponding review request
* re-opening a PR automatically triggers re-opening the corresponding
review request
* a reviewboard user gets created if there wasn't one already
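
The event-to-action mapping driving the list above could be sketched roughly as follows.  The GitHub "action" values are the standard pull_request webhook ones; the ReviewBoard operation names are illustrative, not the extension's actual API:

```python
def review_request_action(event):
    """Map a GitHub pull_request webhook payload to the ReviewBoard
    operation the integration should perform."""
    action = event.get("action")
    if action == "opened":
        return "create-review-request"
    if action == "synchronize":  # commits pushed to the PR branch
        return "update-diff"
    if action == "closed":
        merged = event.get("pull_request", {}).get("merged", False)
        return "close-submitted" if merged else "close-discarded"
    if action == "reopened":
        return "reopen-review-request"
    return "ignore"
```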

Nearly working:

* after the review request is created, a link to it is added to a pull
request comment

Future work:

* support patch queues/chained branches/etc. (via trigger in PR summary)
* add reviewboard support to the merge bot (check for ship-it before
doing anything)

Will not happen:

* automatically merge PR when given ship-it
* PR comments (including review comments) will not be pushed to the
corresponding review request
* likewise reviewboard comments won't be pushed up to the corresponding PR

I can't promise that the "future" work will happen in the short term,
but I'll post any updates as they come.  Enjoy!

-eric



Re: Is ReviewBoard a good thing?

2014-09-22 Thread Eric Snow
On Sat, Sep 20, 2014 at 12:01 AM, Jesse Meek  wrote:
> On 20/09/14 02:34, Eric Snow wrote:
> I was not seriously suggesting we return to lp. Using ReviewBoard
> reintroduces what we gave up with lp: both the good (tooling that addresses
> pain points) and the bad (not a well known/widely adopted process of
> contributing). In this regard, using ReviewBoard is akin to returning to lp.

Sorry, I misread.  Thanks for clarifying.

>
> It is not a question of "does ReviewBoard address our pain points" nor is it
> a question of "is this just teething?". Both valid questions in their own
> right, but they miss the point. Is raising the bar to outside contributors
> necessary and justified?

What do you mean by raising the bar?  If you mean the extra steps
we've introduced for reviewboard, I agree to the extent that we do not
yet have any automation between github and reviewboard, and we must
take the steps manually.  However, once we have automated those steps
it will be a non-issue.

Furthermore, I will argue that reviewboard provides a better code
review experience than github.  That's a relatively subjective
assessment, but there are also concrete benefits that I hope I've
outlined well in the "pros and cons" thread.

FWIW, I do not plan on updating the CONTRIBUTING doc until after we
have github-reviewboard automation in place.  Until then outside
contributers won't have to worry about reviewboard.  And afterward
they still won't have to worry about more than simply following the
link to the review request for their PR.

>
> Is it necessary? Many of us have addressed our own pain points with GitHub
> already. I use browser plugins, git hooks and scripts to augment my
> workflow.

I'd be interested to know what folks are using to work around the
deficiencies in github (reviews or otherwise).  I expect such things
would be generally useful.  Ideally no one would need to worry about
such workarounds, which is what we're trying to address via
reviewboard, but I expect that any such tools would be useful,
reviewboard or not.

> Yet I can work alongside the first-time contributor that has
> nothing more than git and a GitHub account. What pain point necessitates
> raising the bar to contributors?

I agree that, without the github-reviewboard automation, any
requirements to use reviewboard are more roadblocks to contribution.

>
> Is it justified? Given such a pain point exists, does solving it justify
> *forcing* a new tool on a developer? This is the decision we are making and
> it is not just for 'us' the team. It is for our would-be external
> contributors. The ones that we moved to GitHub for.

I'm glad you spoke up on this.  It's important we keep this firmly in
mind when making any changes to workflow.  FYI, I had a patch for
CONTRIBUTING.md that updated the workflow relative to the new
reviewboard steps/tooling.  However, you've convinced me to abandon it
in favor of simply waiting until we have automation in place, to avoid
adding barriers to entry.  Thanks. :)

-eric



Re: The Pros and Cons of ReviewBoard.

2014-09-19 Thread Eric Snow
On Fri, Sep 19, 2014 at 9:48 AM, Jonathan Aquilina
 wrote:
> I am more than willing to help out wity those modifications

Cool.  I'll get in touch.

-eric



Re: The Pros and Cons of ReviewBoard.

2014-09-19 Thread Eric Snow
On Fri, Sep 19, 2014 at 9:37 AM, Eric Snow  wrote:
> Given that I've in some part driven the switch to ReviewBoard, I want
> to make sure we are all on the same page and any decision on its
> future can be made objectively.  This is an outgrowth of the current
> discussion on whether or not we should ditch reviewboard.
>
> Let's look at the pros and cons of using it (at least relative to
> github).  Feel free to expand on any point here or add to them.
>
> -eric
>
> ReviewBoard Pros:
>
> * self-hosted (flexibility, ownership)
> * unified review queue with detailed info
> * reviews are composed of multiple comments, not just one
> * reviews have workflow-supporting metadata (ship-it, issues)
> * reviews can be edited as a whole before publishing
> * review comments are threaded (provides context)
> * customizable (3rd party and custom extensions)
> * extensive remote API
> * some github integration
> * supports chained branches (anti-pattern?)
> * allows you to look at new changes in context of old comments
> * allows you to look at changes between review request updates
> * does not require a PR to exist
>
> ReviewBoard Cons:
>
> * self-hosted (hosting, maintenance, etc.)
> * adds manual steps to our workflow
> * extra steps increase the barrier to contributing
> * not a part of the mainstream github workflow
> * requires adjusting to a new tool for most people
> * web UI has some usability issues (list?)
> * email formatting is complicated (subjective)

Solutions:

* add integration between github and reviewboard (github webhooks)
  - addresses manual steps (i.e. barrier-to-entry/workflow concerns)
* provide a git plugin that wraps rbt and better supports our workflow
  - addresses complex workflow concerns
* (unlikely) Modify and add to the web UI (via an extension)
  - addresses web UI concerns
* (unlikely) Modify and add to the email formatting (via an extension)
  - addresses email formatting concerns



The Pros and Cons of ReviewBoard.

2014-09-19 Thread Eric Snow
Given that I've in some part driven the switch to ReviewBoard, I want
to make sure we are all on the same page and any decision on its
future can be made objectively.  This is an outgrowth of the current
discussion on whether or not we should ditch reviewboard.

Let's look at the pros and cons of using it (at least relative to
github).  Feel free to expand on any point here or add to them.

-eric

ReviewBoard Pros:

* self-hosted (flexibility, ownership)
* unified review queue with detailed info
* reviews are composed of multiple comments, not just one
* reviews have workflow-supporting metadata (ship-it, issues)
* reviews can be edited as a whole before publishing
* review comments are threaded (provides context)
* customizable (3rd party and custom extensions)
* extensive remote API
* some github integration
* supports chained branches (anti-pattern?)
* allows you to look at new changes in context of old comments
* allows you to look at changes between review request updates
* does not require a PR to exist

ReviewBoard Cons:

* self-hosted (hosting, maintenance, etc.)
* adds manual steps to our workflow
* extra steps increase the barrier to contributing
* not a part of the mainstream github workflow
* requires adjusting to a new tool for most people
* web UI has some usability issues (list?)
* email formatting is complicated (subjective)



Re: Is ReviewBoard a good thing?

2014-09-19 Thread Eric Snow
On Thu, Sep 18, 2014 at 6:32 PM, David Cheney
 wrote:
> There were three problems reviewboard was supposed to address: review
> comments are sent immediately, no side-by-side diffs, and no way to do
> chained proposals.

It will be worth being extra clear on ReviewBoard's pros and cons.
I'll open a new thread.

>
> I think that over the last few months we've all learnt to live with
> the first issue

I for one haven't.  On several occasions I have made a comment that I
later realized was not valid, but was unable to simply remove it since
it had already been sent.

>
> On the second, github now does nice side by side diffs.

Agreed.

>
> On the third, it appears reviewboard leaves you solidly to your own
> devices to implement chained proposals.

rbt supports it just fine and reviewboard itself supports it (see the
"Depends on" field on every review request).  What support did you
have in mind?

-eric



Re: Is ReviewBoard a good thing?

2014-09-19 Thread Eric Snow
On Fri, Sep 19, 2014 at 6:11 AM, Gabriel Samfira
 wrote:
> Just a suggestion:
>
> A git plugin similar to what Gerrit has would simplify things. For example,
> Gerrit has a nice little plugin called "Review". Simply doing:
>
> git review
>
> In your current branch would push the patchset to gerrit. Something similar
> for RB, would probably simplify things a lot. Chained PR's could probably be
> done by specifying in the commit message something like:
>
> depends on #

That would definitely ease the pain.  Even if we have automation of
the main workflow, a git plugin would still help with chained branches
(which github does not support).

-eric



Re: Is ReviewBoard a good thing?

2014-09-19 Thread Eric Snow
On Fri, Sep 19, 2014 at 8:34 AM, Eric Snow  wrote:
> I agree that reviewboard as we currently have it now adds extra work
> to our workflow. Not only does this impact the juju team, but it does
> add a stumbling block to more community involvement.  However, my firm
> belief is that the real pain points are addressable in the short term.
> Let's give it a chance.

Just to be clear, while I have spent a lot of time on setting up
ReviewBoard and believe it's our best option, I appreciate this
discussion and gladly accept both positive and negative feedback.  If
we decide that ReviewBoard isn't worth it, I'll support the decision
and move on. :)  Furthermore, I recognize that the future of
reviewboard for our team is mostly up to the team leads but ultimately
is driven by the whole team.

-eric



Re: Is ReviewBoard a good thing?

2014-09-19 Thread Eric Snow
On Thu, Sep 18, 2014 at 6:19 PM, Jesse Meek  wrote:
> We moved to GitHub in the hope of lowering the bar to contributors outside
> the team. GitHub is *the* platform and process for open source software.
> This was the logic behind the move. It was deemed to have the most mindshare
> and we sacrificed our prefered platform and process to be part of that
> mindshare.
>
> We are now leaving that 'main stream' process to something that suits the
> tastes of our team - ReviewBoard. This adds friction for new contributors
> (friction everyone has experienced this week).

This is mostly a consequence of moving to a new tool.  Not only are we
still getting used to a new tool, we are also learning what pain
points it introduces.  We have not had a chance yet to find reasonable
solutions (or determine that the concerns are practically
insurmountable).  My belief is that the main concerns are solvable in
the short term (github and reviewboard have good APIs and reviewboard
is super extendable).

> If we value our preferred
> methods of reviewing over keeping to a well known process for outside
> contributors, the best process was launchpad + rietveld. Shouldn't we simply
> return to that.

If we ditch ReviewBoard, let's just go back to GitHub.  However, I
don't think we should ditch ReviewBoard yet.  It is way too early to
make that kind of decision.  Let's give it a chance and take this up
in Brussels.

>
> Considering we have been successfully using GitHub for several months now,
> using reviewboard is not a necessity. Obviously, I will go with whatever the
> team decides, but I'm concerned that we have moved to reviewboard without
> considering that it undermines (as far as I can see) our primary reason for
> using GitHub.

I agree that reviewboard as we currently have it now adds extra work
to our workflow. Not only does this impact the juju team, but it does
add a stumbling block to more community involvement.  However, my firm
belief is that the real pain points are addressable in the short term.
Let's give it a chance.

-eric



Re: Is ReviewBoard a good thing?

2014-09-19 Thread Eric Snow
On Fri, Sep 19, 2014 at 7:55 AM, Matthew Williams
 wrote:
> I do think it's too early to tell though. Why not give it another week or 2
> then discuss in the cross teams if we want to keep it or not

+1

-eric



Re: Is ReviewBoard a good thing?

2014-09-19 Thread Eric Snow
On Fri, Sep 19, 2014 at 7:55 AM, Dimiter Naydenov
 wrote:
> +1, In addition you can always check
> https://github.com/juju/juju/pulls to see what's in the queue. For
> sub-repositories it's the same, like
> https://github.com/juju/names/pulls. While I agree it's not all in one
> place, I tend to work on the main repo mostly, and alternatively you
> can check the Notifications page (https://github.com/notifications) to
> see all activity.

The problem is when your PR in another repo sits there because
everyone "tends to work on the main repo mostly".  A unified review
queue solves that quite nicely. :)

-eric



Re: Doing chained diffs w/ Reviewboard

2014-09-18 Thread Eric Snow
On Thu, Sep 18, 2014 at 4:45 AM, Ian Booth  wrote:
> Eric, can we do something on the review board and/or github side to make this
> sort of thing easier? Implement a script which follows the Launchpad workflow 
> or
> something?

Totally.  Writing it as a git plugin ("git-review" on $PATH -> "git
review" subcommand) should be pretty straight-forward since rbt
supports uploading a diff.  However, given that I'm trying to wrap up
backups work before the upcoming sprint I'm not sure I'll have time to
work on anything like that before then.  Regardless, since it's just a
matter of generating a diff and using rbt to upload it, I expect
anyone could write the script.

-eric

p.s. It is a shame that git does not have better support for a prereq
branch workflow.  I chain branches constantly.  Sounds like a job for
another git plugin. :)

p.p.s. Github support for chained branches is another matter entirely,
though I expect we could come up with a work-around for that too. :)



rbt username and password

2014-09-17 Thread Eric Snow
The auth credentials you need for rbt (reviewboard CLI) are easy to
miss in earlier emails that have this info.

username: 
password: oauth:@github

As always, please let me know if you have any feedback on
ReviewBoard/rbt or need a hand spinning up on ReviewBoard.

-eric


Bonus Info:

You will only need this information the first time you use "rbt post".
After that it is cached (unless you clear that cache or otherwise
force rbt to authenticate).  The password piggybacks off OAuth
authentication you have already done via your browser.  This is
necessary since users are set up without passwords.



Re: Client Facades

2014-09-16 Thread Eric Snow
On Tue, Sep 16, 2014 at 4:27 PM, John Weldon  wrote:
> Hi;
>
> Has there been any discussion or have any decisions made about Client API
> Facades in Juju?
>
> It seems all of the Client API is exposed on one Client facade, and we'll
> just be adding more to it in rolling out Actions.
>
> I'd like to consider a new client facade just for Actions.
>
> Thoughts?

For backups I've added a new client, in its own package, rather than
adding a bunch of Backups_* methods to api/client.go (see
https://github.com/juju/juju/pull/736 and
http://reviews.vapour.ws/r/37).  FWIW, I was under the same impression
as you are that all user client functionality is captured in the one
client.  Regardless, there are several clients under the api package
that provide good examples.  Basically look at any of the subpackages
with "manager" in their name.  They (including api/client.go) all make
use of the base ClientFacade type and other code in api/base.

-eric



Re: ReviewBoard is now the official review tool for juju

2014-09-15 Thread Eric Snow
On Mon, Sep 15, 2014 at 12:54 PM, Dimiter Naydenov
 wrote:
> - From my meager experience with writing git plugins (any executable in
> $PATH with "git-" prefix), what are you describing can be easily
> achieved. If you write a git plugin, named e.g. "git-rbpropose", using
> the GitHub CLI (https://github.com/github/hub) and rbt, you can
> automate the process:
>  1. Pushing the branch to origin (checking for uncommitted changes).
>  2. Creating a GitHub PR for the branch, which includes launching the
> default editor to fill-in the PR description (using hub).
>  3. Creating a RB review (using rbt).
>  4. Optionally opening the default browser with it.
>
> I have a couple of handy scripts that do this, which I've shared before:
> git-sync (), and git-propose (), along with a few aliases (). git-sync
> fits especially well with the RB workflow, because it pull
> upstream/master into your local repo's master, then pushes it back to
> origin/master, and finally (when you're on a branch other than master)
> rebases all branch commits over origin/master, interactively. What I
> usually do is run "git propose" after the final "git sync" to create
> a PR - the only thing missing is the RB steps.

Cool.  I'll take a look.

>
> Cheers and thanks for all the hard work around putting RB workflow in
> place,

Glad to do it. :)

-eric



Re: ReviewBoard is now the official review tool for juju

2014-09-15 Thread Eric Snow
On Mon, Sep 15, 2014 at 11:05 AM, Katherine Cox-Buday
 wrote:
> Let me preface this by saying I like the Reviewboard style of reviewing
> changes.
>
> It's somewhat concerning to me that our reviews are now disconnected from
> what will actually be landed into trunk. In Github, you were reviewing the
> actual diff which would be landed. In reviewboard, we're now reviewing a
> diff manually uploaded by the developer. There's a lot of room for error in
> terms of what diff we review vs. what diff we land.
>
> Any thoughts on how to couple these things once again?

I'm working on integration between github and reviewboard such that
creation of a PR creates a new review request automatically.  The same
goes for updating a PR.  Not only will that address what you are
talking about, it will remove the extra steps of creating/updating the
review request yourself and of closing a review request as submitted.
Ergo, rbt would not be needed for the typical workflow.  You would
still use it for "pipelined" branches, though that could probably be
automated as well.

There is the possibility of pushing info from ReviewBoard back to
github (e.g. "ship-it" -> "LGTM" comment), but I don't think it buys
us enough to make it worth it (it's notably trickier).

-eric



Re: ReviewBoard is now the official review tool for juju

2014-09-15 Thread Eric Snow
On Sun, Sep 14, 2014 at 10:28 PM, Menno Smits  wrote:
> It looks like the TRACKING_BRANCH option in .reviewboardrc could be helpful.
> It defaults to "origin/master" but if we changed it to "upstream/master" I
> suspect Reviewboard will then generate diffs against the shared master
> branch instead of what we might happen to have in master in our personal
> forks. This of course relies on every developer having a remote called
> "upstream" that points to the right place (which isn't necessarily true).

FWIW, I have the following in $GOPATH/src/github.com/juju/juju/.git/config:

[remote "origin"]
url = g...@github.com:ericsnowcurrently/juju.git
fetch = +refs/heads/*:refs/remotes/origin/*
[remote "upstream"]
url = https://github.com/juju/juju.git
fetch = +refs/heads/*:refs/remotes/upstream/*

and in ~/.reviewboardrc:

TRACKING_BRANCH = "upstream/master"

This has worked fine for me (once I realized master had to be up-to-date).

-eric



Re: ReviewBoard is now the official review tool for juju

2014-09-15 Thread Eric Snow
On Mon, Sep 15, 2014 at 8:09 AM, Eric Snow  wrote:
> Yeah, those steps are a lot, though keep in mind that effectively it's
> only 2 steps more than before if you use the -p flag to rbt post and
> were already keeping your local master up to date.

Just to be clear, here are the steps again, slightly reformatted:

(0). Rebase relative to upstream master.
  - if origin is different than upstream, sync and push it
1. Create a pull request via github.
2. Run "rbt post -p" while at your branch to create a review request.
3. add a comment to the PR with a link to the review request.
4. address reviews until you get a "Ship It!" (like normal, with LGTM).
5. add a $$merge$$ comment to the PR (like normal).
6. mark the review request as submitted.

So, steps 2, 3, and 6 are completely new.  They don't add a lot of
work and I plan on automating all 3 of those new steps.

Step (0) is also pretty easy and I'll argue that people should be
doing it anyway.
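
For what it's worth, the new steps could collapse into a tiny shell
plugin along these lines.  This is a hypothetical sketch, not shipped
tooling: it assumes hub and rbt are on $PATH, leaves the PR-comment and
mark-submitted steps manual, and uses a DRY_RUN mode just to show the
sequence without touching a real repo.

```shell
# Hypothetical "git review" plugin (a sketch of the automation described
# above, not real tooling).  Assumes hub and rbt are installed.
run() {
    if [ -n "$DRY_RUN" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

git_review() {
    run git fetch upstream            # step (0): rebase onto upstream master
    run git rebase upstream/master
    run hub pull-request              # step 1: create the PR
    run rbt post --publish --open     # step 2: post and publish the review
    # steps 3 and 6 (PR comment, mark submitted) stay manual in this sketch
}

# Dry-run demonstration:
DRY_RUN=1
git_review
```

Dropping a script like this on $PATH as "git-review" makes it available
as a "git review" subcommand.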

-eric



Re: ReviewBoard is now the official review tool for juju

2014-09-15 Thread Eric Snow
On Sun, Sep 14, 2014 at 10:28 PM, Menno Smits  wrote:
> Eric,
>
> Thanks for setting this up.
>
> Firstly, in your email you mention "rbt pull" several times. I think you
> mean "rbt post" right? I don't see a pull command documented anywhere.

Correct.  I meant "rbt post". :)

>
> I've run in to one issue so far. When I tried to get my first review in to
> Reviewboard today it took me a long time to figure out how to get it to
> generate the correct diff. After much gnashing of teeth I figured out that
> "rbt post" generates a diff by comparing "origin/master" against the current
> branch. This means that if you haven't updated your local master branch
> recently and pushed your local master branch to your personal fork on Github
> (this is the part I missed) you'll end up with diffs that include lots of
> changes that have already been merged and have nothing to do with your
> branch.
>
> As things stand the workflow actually needs to be:
>
> 1. Ensure your feature branch is rebased against upstream/master
> 2. Create a pull request like normal via github.
> 3. Switch to your local master branch.
> 4. "git pull" to update master
> 5. "git push origin master" to update your personal master on github.
> 6. Switch back to your feature branch ("git checkout -" helps here)
> 7. Run "rbt post" while at your branch to create a review request.
> 8. open the review request in your browser and "publish" it.
>   - alternately use the rbt --open (-o) and/or --publish (-p) flags.
> 9. add a comment to the PR with a link to the review request.
> 10. address reviews until you get a "Ship It!" (like normal, with LGTM).
> 11. add a $$merge$$ comment to the PR (like normal).
>
> This is a bit confusing and inconvenient. I can see us all forgetting to
> keep our personal master branches on GH updated.

Yeah, those steps are a lot, though keep in mind that effectively it's
only 2 steps more than before if you use the -p flag to rbt post and
were already keeping your local master up to date.

However, I'll look into a cleaner approach.  If rbt cannot handle it,
it may make sense to write a quick "git review" command that wraps all
that.  FYI, I've been using a "git refresh" command for a while that
wraps the pull/rebase steps, so writing one that does the extra review
steps shouldn't be too onerous.
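
The "git refresh" script itself wasn't posted, but its body is
presumably something like the following.  This is only a guess at the
shape; the DRY_RUN switch just prints the sequence so it can be shown
without a real repo.

```shell
# Rough sketch of a "git refresh" plugin: sync master from upstream,
# push it to the personal fork, then rebase the feature branch on top.
git_refresh() {
    for cmd in \
        "git checkout master" \
        "git pull upstream master" \
        "git push origin master" \
        "git checkout -" \
        "git rebase master"
    do
        if [ -n "$DRY_RUN" ]; then
            echo "would run: $cmd"
        else
            $cmd || return 1
        fi
    done
}

# Dry-run demonstration:
DRY_RUN=1
git_refresh
```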

All this is part of why I had considered waiting until we could roll
out proper github-reviewboard integration.  However, I felt like
ReviewBoard was enough of a benefit on its own that waiting for the
integration was unwarranted.

>
> It looks like the TRACKING_BRANCH option in .reviewboardrc could be helpful.
> It defaults to "origin/master" but if we changed it to "upstream/master" I
> suspect Reviewboard will then generate diffs against the shared master
> branch instead of what we might happen to have in master in our personal
> forks. This of course relies on every developer having a remote called
> "upstream" that points to the right place (which isn't necessarily true).

I'll take a look at that.

>
> If TRACKING_BRANCH isn't going to work then whatever automation we come up
> with to streamline RB/GH integration is probably going to have to sort this
> out.

Ultimately I think it's worth getting the tooling right in the very
short term. :)

-eric



Re: ReviewBoard is now the official review tool for juju

2014-09-14 Thread Eric Snow
On Sun, Sep 14, 2014 at 2:59 PM, Jesse Meek  wrote:
> Thank you for setting this up. Is it possible to add 'unpublished' review
> comments such that you can add/remove/edit comments until you've completed a
> review, at which point you 'publish' all review comments, making them
> visible to the author?

That's how ReviewBoard already works.  Everything is a draft until you
hit the "publish" button.

-eric



ReviewBoard is now the official review tool for juju

2014-09-13 Thread Eric Snow
Hi all,

Starting now, new code review requests should be made on
http://reviews.vapour.ws (see more below on creating review requests).
We will continue to use github for everything else (pull requests,
merging, etc.).  I've already posted some of the below information
elsewhere, but am repeating it here for the sake of reference.  I plan
on updating CONTRIBUTING.md with this information in the near future.
Please let me know if you have any feedback.  Happy reviewing!

-eric

Authentication
--
Use the Github OAuth button on the login page to log in.  If you don't
have an account yet on ReviewBoard, your github account name will
automatically be registered for you.  ReviewBoard uses session
cookies, so once you have logged in you shouldn't need to log in again
unless you log out first.

For the reviewboard commandline client (rbt), use your reviewboard
username and a password of "oauth:@github".  This should
only be needed the first time.

RBTools
--

ReviewBoard has a command-line tool that you can install on your local
system called "rbt".  rbt is the recommended tool for creating and
updating review requests.  The documentation covers installation
and usage.  It has satisfied my questions thus far.

https://www.reviewboard.org/docs/rbtools/0.6/

The key sub-command is "post" (see rbt post -h).

To install you can follow the instructions in the rbtools docs.  You
can also install using pip (which itself may need to be installed
first):

$ virtualenv ~/.venvs/reviewboard
$ ~/.venvs/reviewboard/bin/pip install --allow-unverified rbtools \
    --allow-external rbtools rbtools
$ alias rbt='~/.venvs/reviewboard/bin/rbt'

(you could just "sudo pip install" it, but the --allow-unverified flag
makes it kind of sketchy.)

Workflow
---

1. Create a pull request like normal via github.
2. Run "rbt pull" while at your branch to create a review request.
  - if the repo does not have a .reviewboardrc file yet, you'll need
to run "rbt setup-repo".
  - make sure your branch is based on an up-to-date master.
  - if the revision already has a review request you will need to
update it (see below).
3. open the review request in your browser and "publish" it.
  - alternately use the rbt --open (-o) and/or --publish (-p) flags.
4. add a comment to the PR with a link to the review request.
5. address reviews until you get a "Ship It!" (like normal, with LGTM).
6. add a $$merge$$ comment to the PR (like normal).

Keep in mind that the github-related steps aren't strictly necessary
for the sake of getting a code review.  They are if you want to merge
the patch though. :)  I mention this because sometimes you want a
review on something for which you can't create a decent PR in github
(see "Patch Series" below).

Updates
--
To update a review request use "rbt pull -u" or "rbt pull -r #".  This
will update the corresponding existing review request.

Note: Reviewboard links revision IDs to review requests.  So if you
already have a review request for a particular revision (e.g. your
branch), then a simple "rbt post" will fail.

Patch Series
-
Sometimes you have one branch that depends on another, or perhaps
several such forming a chain of branches.  While github does not cope
well with this, ReviewBoard does just fine.  Use the parent flag: rbt
post --parent .
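
For example, a chain of dependent branches could be posted like this
(branch names are made up, and the rbt invocations are echoed rather
than run, since no ReviewBoard server is involved here):

```shell
# Post a chain of dependent branches, each review request naming the one
# below it as parent so ReviewBoard diffs only the new commits.  Echoed
# as a sketch; drop the echo to actually run rbt.
post_chain() {
    parent=master
    for branch in "$@"; do
        echo "rbt post --parent $parent $branch"
        parent=$branch
    done
}

post_chain step-1 step-2 step-3
```

That prints one rbt invocation per branch, each depending on the
previous one.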

Workflow Automation
--
Currently we do not have any integration set up between reviewboard
and github.  However, we plan on doing so later, at which point you
won't need to create/update review requests manually.  There is other
automation we could target, but that can be addressed later.

Email
---
ReviewBoard is currently set up to send out notification emails.
However, currently registered users do not have an email address set.
So if you want to get review-related email, please make sure you set
your account's email address in ReviewBoard.

SSL
---
We are working out the details of procuring an SSL certificate for
reviews.vapour.ws.  We have a self-signed cert, but that isn't a
long-term solution and makes browsers fussy, so we are going to wait
on a CA-signed cert.  Once we switch over the URL will change to
https://reviews.vapour.ws.  However, that should be relatively
transparent because reviewboard will automatically redirect to https
at that point.



repos in reviewboard

2014-09-12 Thread Eric Snow
Which of the repositories listed at https://github.com/juju should be
set up in reviewboard?  I'm pretty sure on most of them, but a more
authoritative list would help me out.  In the meantime I'm adding the
ones I'm sure on.

-eric



consensus on using separate *_test packages

2014-09-12 Thread Eric Snow
In Go we put tests in *_test.go files that are only built when
testing.  There are two conventions for what package to declare in
those files (relative to the corresponding non-test package):
"" and "_test".  In our code we have a mix of both.
This can be confusing.  We should come to a consensus on which to use
and stick with it, both for new code and when given the opportunity
while refactoring.  (As usual, a whole-sale refactor wouldn't be a
good idea).

This came up on IRC a couple months back. [1]  At the time I referred
to a post from Gustavo on golang-nuts where he stated a preference for
using a separate package for *_test.go files. [2]  Also note that
there are some cases where test files must be in a separate package
(e.g. to break circular imports).  So unless we always use separate
test packages, we will almost certainly end up with a mix (which is
exactly the issue at hand!).

From my perspective, I agree with Gustavo that using a separate _test
package is a good idea.  So I would favor us being consistent in doing
that. :)
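
To make the two conventions concrete, here is a throwaway sketch laying
the two kinds of test file side by side (the package name and import
path are made up for illustration).  The file declaring a separate
_test package sees only the exported API and has to import the package
under test like any other caller:

```shell
# Sketch: the same package tested both ways.
workdir=$(mktemp -d)
cd "$workdir"

# White-box tests: same package, so unexported names are visible.
cat > stack_internal_test.go <<'EOF'
package stack

import "testing"

// Can reach unexported helpers (e.g. a resize() function) directly.
func TestResize(t *testing.T) { /* ... */ }
EOF

# Black-box tests: separate package, exported API only.
cat > stack_test.go <<'EOF'
package stack_test

import (
	"testing"

	"github.com/juju/utils/stack" // hypothetical import path
)

// Sees only what any other caller of the package would see.
func TestPushPop(t *testing.T) { _ = stack.New /* exported API only */ }
EOF

grep -h '^package' stack_internal_test.go stack_test.go
```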

-eric

[1] http://irclogs.ubuntu.com/2014/07/22/%23juju-dev.html#t15:36
[2] https://groups.google.com/forum/#!msg/Golang-nuts/dkk0X1tIs6k/nO3CKFqbIxYJ



  1   2   >