Re: Github Reviews vs Reviewboard

2016-10-14 Thread James Tunnicliffe
-1

On 14 October 2016 at 02:47, Tim Penhey  wrote:
> -1, like Menno I was initially quite hopeful for the github reviews.
>
> My main concerns are around easily having a list to pull from, and being
> able to see status, comments on a single dashboard.
>
> Tim
>
> On 14/10/16 11:44, Menno Smits wrote:
>>
>> We've been trialling Github Reviews for some time now and it's time to
>> decide whether we stick with it or go back to Reviewboard.
>>
>> We're going to have a vote. If you have an opinion on the issue please
>> reply to this email with a +1, 0 or -1, optionally followed by any
>> further thoughts.
>>
>>   * +1 means you prefer Github Reviews
>>   * -1 means you prefer Reviewboard
>>   * 0 means you don't mind.
>>
>> If you don't mind which review system we use there's no need to reply
>> unless you want to voice some opinions.
>>
>> The voting period starts *now* and ends at *EOD next Friday (October 21)*.
>>
>> As a refresher, here are the concerns raised for each option.
>>
>> *Github Reviews*
>>
>>   * Comments disrupt the flow of the code and can't be minimised,
>> hindering readability.
>>   * Comments can't be marked as done making it hard to see what's still
>> to be taken care of.
>>   * There's no way to distinguish between a problem and a comment.
>>   * There's no summary of issues raised. You need to scroll through the
>> often busy discussion page.
>>   * There's no indication of which PRs have been reviewed from the pull
>> request index page nor is it possible to see which PRs have been
>> approved or otherwise.
>>   * It's hard to see when a review has been updated.
>>
>> *Reviewboard*
>>
>>   * Another piece of infrastructure for us to maintain
>>   * Higher barrier to entry for newcomers and outside contributors
>>   * Occasionally misses Github pull requests (likely a problem with our
>> integration so is fixable)
>>   * Poor handling of deleted and renamed files
>>   * Falls over with very large diffs
>>   * 1990's looks :)
>>   * May make future integration of tools which work with Github into our
>> process more difficult (e.g. static analysis or automated review
>> tools)
>>
>> There has been talk of evaluating other review tools such as Gerrit and
>> that may still happen. For now, let's decide between the two options we
>> have recent experience with.
>>
>> - Menno
>>
>>
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev



Juju Networking - an overview and proposal

2016-07-25 Thread James Tunnicliffe
I have wanted to put the world to rights, at least in terms of Juju
networking, for a while now. It was suggested that I stop being
grumpy and get my thoughts down, so I did. My aim was to write down
what, in my opinion, we need to do in order to make Juju elegant on
the inside and capable on the outside when it comes to network
management and use.

I have covered the model we have for Juju networking now, where I
think we need to be, why, and some high level ideas to have in our
heads when implementing networking code (which should probably be part
of another review checklist). I hope it is helpful.

https://docs.google.com/a/canonical.com/document/d/1IvvczhtVCnVoadvOg1JPb3uRrhOfFMLfBYbFwLBWVV8/edit?usp=sharing

James



Re: juju status show blank for public name for one machine.

2016-07-04 Thread James Tunnicliffe
Hi Narinder,

Do you have logs from a failed deployment? The best thing to do when
something is stuck like this is open a bug and attach a couple of nice
big tarballs of everything from /var/log/juju from machine 0 and the
unhappy units/machines (in your case machine 2) so one of us can take
a look.

Thanks,

James

On 1 July 2016 at 22:51, Narinder Gupta  wrote:
> I am seeing strange behavior with juju 1.25.5 recently.
>
> http://pastebin.ubuntu.com/18269577/
>
> You can see machine 2 in the pending state while MAAS shows it as deployed.
> Even though it is pending, the containers on machine 2 have started
> installing charms. I have been seeing this since last week, and in the end
> the deployment failed.
>
> After some time the install fails, but the container installation is still
> in progress.
>
> http://pastebin.ubuntu.com/18269692/
>
> Thanks and Regards,
> Narinder Gupta (PMP)   narinder.gu...@canonical.com
> Canonical, Ltd.narindergupta [irc.freenode.net]
> +1.281.736.5150narindergupta2007[skype]
>
> Ubuntu- Linux for human beings | www.ubuntu.com | www.canonical.com
>
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>



Re: Automatic commit squashing

2016-06-16 Thread James Tunnicliffe
One justification I just had first-hand experience of: a change
ported from master to 1.25 where the change already has a +1 on
master. I would like to see only the changes that haven't already been
reviewed, so squashing to two commits makes the reviewer's life
easier.

Will shut up now.

On 16 June 2016 at 09:54, James Tunnicliffe
<james.tunnicli...@canonical.com> wrote:
> I would love to be given small, isolated, well planned work to do, the
> specification of which never changes. That isn't the world I live in,
> which is why I would like our ideals to be good and to have guidelines
> to support them, but not to have unbreakable rules that may trip us
> up. I am only advocating flexibility when it is justified.
>
> On 16 June 2016 at 09:26, Rick Harding <rick.hard...@canonical.com> wrote:
>> I'm going to push back on the why squash or even the "let's make it just
>> auto clean it up".
>>
>> A commit to the tree should be a single, well thought out, chunk of work
>> that another can review and process easily. Having the history of how you
>> got there isn't valuable in my opinion. The most important thing is to have
>> something I can look at in a diff, code review, etc and make sense of it. If
>> I need to go through the 10 commits that built it up to make sense of it,
>> then the commit is too big or fails in some other way.
>>
>> Doing this helps with the ideas we're all talking about. Making things
>> easier to review, easier to land against master, easier to review and debug.
>> This comes down to things I've discussed about breaking kanban cards down
>> into a single pull request worth of work, and breaking it down smaller and
>> smaller until you get there. The ideas need to get cut down and be something
>> that someone else can look at and understand. Doing it any other way I think
>> just continues with many of the issues we're fighting today.
>>
>> On Thu, Jun 16, 2016 at 12:16 PM James Tunnicliffe
>> <james.tunnicli...@canonical.com> wrote:
>>>
>>> TLDR: Having guidelines rather than rules is good. Having a tool
>>> mindlessly squashing commits can throw away valuable information.
>>>
>>> I am a little confused as to why we need to squash stuff at all. Git
>>> history isn't flat so if you don't want to see every commit in a
>>> branch that has landed then you don't have to. I realise that I am a
>>> scummy GUI user and I haven't looked at how to use the git CLI to do
>>> this. I am not against squashing commits to provide a nice logical
>>> history without the 'fix the fact that I am dumb and missed that
>>> rename' noise, but I don't think squashing to a single commit is
>>> always the right thing to do.
>>>
>>> Once code is up for review I want the history to remain from the start
>>> to the end of the review loop so if I ask someone to change something
>>> I can actually see that change. I have no problem with those commits
>>> being squashed pre-merge if they are minor changes to the originally
>>> proposed code.
>>>
>>> James
>>>
>>> On 16 June 2016 at 08:25, Marco Ceppi <marco.ce...@canonical.com> wrote:
>>> > This is purely anecdotal, but on the ecosystem side for a lot of our
>>> > projects I've tried to pseudo-enforce the "one commit", or really, a
>>> > change/fix/feature per commit. Thereby allowing me to cherry-pick for
>>> > patch releases to stable (or revert a commit) with confidence and
>>> > without a lot of hunting for the right grouping.
>>> >
>>> > With the advent of squashing in GitHub I've dropped this push and use
>>> > this unless the author has already done the logical grouping of commits,
>>> > in which case I'll merge them myself, out of GitHub, to avoid merge
>>> > messages but retain their grouping (and potentially modify commit
>>> > messages, to make it easier to identify the PR number and the bug number
>>> > it fixes).
>>> >
>>> > I don't think the Juju core project can just carte blanche squash every
>>> > pull request, but I do think it's up to the code authors to put an effort
>>> > into squashing/rewriting/managing their commits prior to submission to
>>> > make the code's history more observable and manageable over time. Much in
>>> > the same way you would document or comment blocks of code, commits are a
>>> > window into what this patch does; if you want to keep your history, for
>>> > reference, branching is cheap in git and you absolutely can.

Re: Automatic commit squashing

2016-06-16 Thread James Tunnicliffe
I would love to be given small, isolated, well planned work to do, the
specification of which never changes. That isn't the world I live in,
which is why I would like our ideals to be good and to have guidelines
to support them, but not to have unbreakable rules that may trip us
up. I am only advocating flexibility when it is justified.

On 16 June 2016 at 09:26, Rick Harding <rick.hard...@canonical.com> wrote:
> I'm going to push back on the why squash or even the "let's make it just
> auto clean it up".
>
> A commit to the tree should be a single, well thought out, chunk of work
> that another can review and process easily. Having the history of how you
> got there isn't valuable in my opinion. The most important thing is to have
> something I can look at in a diff, code review, etc and make sense of it. If
> I need to go through the 10 commits that built it up to make sense of it,
> then the commit is too big or fails in some other way.
>
> Doing this helps with the ideas we're all talking about. Making things
> easier to review, easier to land against master, easier to review and debug.
> This comes down to things I've discussed about breaking kanban cards down
> into a single pull request worth of work, and breaking it down smaller and
> smaller until you get there. The ideas need to get cut down and be something
> that someone else can look at and understand. Doing it any other way I think
> just continues with many of the issues we're fighting today.
>
> On Thu, Jun 16, 2016 at 12:16 PM James Tunnicliffe
> <james.tunnicli...@canonical.com> wrote:
>>
>> TLDR: Having guidelines rather than rules is good. Having a tool
>> mindlessly squashing commits can throw away valuable information.
>>
>> I am a little confused as to why we need to squash stuff at all. Git
>> history isn't flat so if you don't want to see every commit in a
>> branch that has landed then you don't have to. I realise that I am a
>> scummy GUI user and I haven't looked at how to use the git CLI to do
>> this. I am not against squashing commits to provide a nice logical
>> history without the 'fix the fact that I am dumb and missed that
>> rename' noise, but I don't think squashing to a single commit is
>> always the right thing to do.
>>
>> Once code is up for review I want the history to remain from the start
>> to the end of the review loop so if I ask someone to change something
>> I can actually see that change. I have no problem with those commits
>> being squashed pre-merge if they are minor changes to the originally
>> proposed code.
>>
>> James
>>
>> On 16 June 2016 at 08:25, Marco Ceppi <marco.ce...@canonical.com> wrote:
>> > This is purely anecdotal, but on the ecosystem side for a lot of our
>> > projects I've tried to pseudo-enforce the "one commit", or really, a
>> > change/fix/feature per commit. Thereby allowing me to cherry-pick for
>> > patch releases to stable (or revert a commit) with confidence and
>> > without a lot of hunting for the right grouping.
>> >
>> > With the advent of squashing in GitHub I've dropped this push and use
>> > this unless the author has already done the logical grouping of commits,
>> > in which case I'll merge them myself, out of GitHub, to avoid merge
>> > messages but retain their grouping (and potentially modify commit
>> > messages, to make it easier to identify the PR number and the bug number
>> > it fixes).
>> >
>> > I don't think the Juju core project can just carte blanche squash every
>> > pull request, but I do think it's up to the code authors to put an effort
>> > into squashing/rewriting/managing their commits prior to submission to
>> > make the code's history more observable and manageable over time. Much in
>> > the same way you would document or comment blocks of code, commits are a
>> > window into what this patch does; if you want to keep your history, for
>> > reference, branching is cheap in git and you absolutely can.
>> >
>> > Happy to share more of the latter-mentioned workflow for those
>> > interested, but otherwise just some 2¢
>> >
>> > Marco
>> >
>> > On Thu, Jun 16, 2016 at 10:12 AM John Meinel <j...@arbash-meinel.com>
>> > wrote:
>> >>
>> >> Note that if github is squashing the commits when it lands into Master,
>> >> I believe that this breaks the ancestry with your local branch.

Re: Automatic commit squashing

2016-06-16 Thread James Tunnicliffe
TLDR: Having guidelines rather than rules is good. Having a tool
mindlessly squashing commits can throw away valuable information.

I am a little confused as to why we need to squash stuff at all. Git
history isn't flat so if you don't want to see every commit in a
branch that has landed then you don't have to. I realise that I am a
scummy GUI user and I haven't looked at how to use the git CLI to do
this. I am not against squashing commits to provide a nice logical
history without the 'fix the fact that I am dumb and missed that
rename' noise, but I don't think squashing to a single commit is
always the right thing to do.

Once code is up for review I want the history to remain from the start
to the end of the review loop so if I ask someone to change something
I can actually see that change. I have no problem with those commits
being squashed pre-merge if they are minor changes to the originally
proposed code.

James

On 16 June 2016 at 08:25, Marco Ceppi  wrote:
> This is purely anecdotal, but on the ecosystem side for a lot of our
> projects I've tried to pseudo-enforce the "one commit", or really, a
> change/fix/feature per commit. Thereby allowing me to cherry-pick for patch
> releases to stable (or revert a commit) with confidence and without a lot
> of hunting for the right grouping.
>
> With the advent of squashing in GitHub I've dropped this push and use this
> unless the author has already done the logical grouping of commits, in which
> case I'll merge them myself, out of GitHub, to avoid merge messages but
> retain their grouping (and potentially modify commit messages, to make it
> easier to identify the PR number and the bug number it fixes).
>
> I don't think the Juju core project can just carte blanche squash every pull
> request, but I do think it's up to the code authors to put an effort into
> squashing/rewriting/managing their commits prior to submission to make the
> code's history more observable and manageable over time. Much in the same way
> you would document or comment blocks of code, commits are a window into what
> this patch does; if you want to keep your history, for reference, branching
> is cheap in git and you absolutely can.
>
> Happy to share more of the latter mentioned workflow for those interested,
> but otherwise just some 2¢
>
> Marco
>
> On Thu, Jun 16, 2016 at 10:12 AM John Meinel  wrote:
>>
>> Note that if github is squashing the commits when it lands into Master, I
>> believe that this breaks the ancestry with your local branch. So it isn't a
>> matter of "the history just isn't present in the master branch", but "it
>> looks like a completely independent commit revision, and you have no obvious
>> way to associate it with the branch that you have at all".
>>
>> It may be that git adds information to the commit ("this commit is a
>> rollup of hash deadbeef"), in which case the git tool could look it up.
>>
>> I don't know the github UI around this. If I do "git merge --squash" then
>> it leaves me with an uncommitted tree with the file contents updated, and
>> then I can do "git commit -m new-content"
>>
>> And then if I try to do:
>> $ git branch -d test-branch
>> error: The branch 'test-branch' is not fully merged.
>> If you are sure you want to delete it, run 'git branch -D test-branch'
>>
>> Which indicates to me that it intentionally forgets everything about all
>> of your commits, which means you need to know when it got merged so that you
>> can prune your branches, because the tool isn't going to track what has and
>> hasn't been merged.
>>
>> (I don't know about other people, but because of the delays of waiting for
>> reviews and merge bot bouncing things, it can take a while for something to
>> actually land. I often have branches that sit for a while, and it is easy
>> for me to not be 100% sure if that quick bugfix I did last week actually
>> made it through to master, and having 'git branch -d ' as a short hand was
>> quite useful.)
>>
>> Note that if we are going to go with "only 1 commit for each thing
>> landed", then I do think that using github's squash feature is probably
>> better than rebasing your branches. Because if we just rebase your branch,
>> then you end up with 2 revisions that represent your commit (the one you
>> proposed, and the merge revision), vs just having the "revision of master
>> that represents your changes rebased onto master". We could 'fast forward
>> when possible' but that just means there is a window where sometimes you
>> rebased your branch and it landed fast enough to be only 1 commit, vs
>> someone else landed a change just before you and now you have a merge
>> commit. I would like us to be consistent.
>>
>> For people who do want to throw away history with a rebase, what's your
>> feeling on whether there should be a merge commit (the change as I proposed
>> it) separate from the change-as-it-landed-on-master. I mean, if you're
>> getting rid of the history, 

Re: A cautionary tale - mgo asserts

2016-06-09 Thread James Tunnicliffe
Surely we want to remove any ordering from the txn logic if Mongo
makes no guarantees about keeping ordering? Being explicitly unordered
at both ends seems right.

James

On Thu, Jun 9, 2016 at 8:35 AM, roger peppe  wrote:
> On 9 June 2016 at 01:20, Menno Smits  wrote:
>> On 8 June 2016 at 22:36, John Meinel  wrote:

 ...


   ops := []txn.Op{{
       C:  "collection",
       Id: ...,
       Assert: bson.M{
           "some-field.A": "foo",
           "some-field.B": 99,
       },
       Update: ...,
   }}

> ...
>>> If loading into a bson.M is the problem, wouldn't using a bson.M to start
>>> with also be a problem?
>>
>>
>> No this is fine. The assert above defines that each field should match the
>> values given. Each field is checked separately - order doesn't matter.
>>
>> This would be a problem though:
>>
>>   ops := []txn.Op{{
>>       C:  "collection",
>>       Id: ...,
>>       Assert: bson.M{"some-field": bson.M{
>>           "A": "foo",
>>           "B": 99,
>>       }},
>>       Update: ...,
>>   }}
>>
>>
>> In this case, mgo is being asked to assert that some-field is an embedded
>> document equal to a document defined by the bson.M{"A": "foo", "B": 99} map.
>> This is what's happening now when you provide a struct value to compare
>> against a field because the struct gets round-tripped through bson.M. That
>> bson.M eventually gets converted to actual bson and sent to mongodb but you
>> have no control of the field ordering that will ultimately be used.
>
> Actually, this *could* be OK (I thought it was, in fact) if the bson encoder
> sorted map keys before encoding like the json encoder does. As
> it doesn't, using bson.M is indeed definitely wrong there.
>
> This order-dependency of ostensibly order-independent objects really is an
> unfortunate property of MongoDB.
>



Re: adding unit tests that take a long time

2016-04-29 Thread James Tunnicliffe
Go test compiles a binary for each package with tests in it, then runs
it. Go 1.7 helps with the compile step. Within each binary, all tests
run sequentially unless you call
https://golang.org/pkg/testing/#T.Parallel to indicate that a test can
run in parallel with other tests flagged the same way. For any test that
hangs around waiting for something to happen (JujuConnSuite), making
them thread safe would be a massive help.

https://github.com/dooferlad/jujuWand/blob/master/testJuju.py may be
of interest to those who are just as crazy as I am about testing.

testJuju.py --changed
  Run tests in packages where it finds changed files

testJuju.py --fast
  Instead of running go test ./..., run go test for each package, in
  parallel, as many processes wide as you have cores (like go test ./...),
  but with the known long-running test packages started first.

There are other options. I make no correctness guarantees, but it
works well for me...

James

On Fri, Apr 29, 2016 at 4:43 AM, Anastasia Macmood
 wrote:
> Well, now that you ask :D
>
> On 29/04/16 12:10, Nate Finch wrote:
>
> I don't really understand what you mean by stages of development.
>
> I mean - developing a unit of work, as opposed to developing a component,
> as opposed to developing the wiring of several components, etc. On top of
> that, besides the usual development activities, you'd also need to include
> bugs and regression fixes, which entail a slightly different mindset and
> considerations than when you are writing code from scratch. Let's say
> "different development activities", if it helps to clear the mud \o/
>
> So, you'd start developing code by yourself, then your code is amalgamated
> with your team, then between teams, etc...
>
> At the end of the day, they all test the exact same thing - is our code
> correct?  The form of the test seems like it should be unrelated to when
> they are run.
>
> This statement is worthy of a discussion over a drinks :)
> Let's start by making a clear distinction - all tests are important to
> deliver a quality product \o/ However, there are different types of testing:
>
> unit testing;
> component testing;
> integration testing (including top-down, bottom-up, Big Bang, incremental,
> component integration, system integration, etc);
> system testing;
> acceptance testing (and just for fun, let's bundle in here alpha and beta
> testing);
> functional testing;
> non functional testing;
> functionality testing;
> reliability testing;
> usability testing;
> efficiency testing;
> maintainability testing;
> portability testing;
> baseline testing;
> compliance testing;
> documentation testing;
> endurance testing;
> load testing (large amount of users, etc);
> performance testing;
> compatibility testing;
> security testing;
> scalability testing;
> volume testing (large amounts of data);
> stress testing (too many users, too much data, too little time and too
> little room);
> recovery testing;
> regression testing
>
> Can you explain why you think running tests of different sorts at the same
> time would be a bad thing?
>
> All different types of testing that I have attempted to enumerate are
> written at different times, and when they are run makes a difference to the
> efficiency of the development process. They may live in different phases of
> the SDLC. Focusing on all of these types will improve product quality at the
> expense of team(s) momentum as well as will affect individual developer's
> habits (and other factors).
>
> When you as a developer work on a task, the most relevant to you would be:
> a. unit tests (does this little unit of work do what I want?),
> b. integration (does my change work with the rest of the system?),
> c. functional (does my work address requirements?).
>
> Depending on your personal development habits, you may only want to run
> either unit tests and/or integration and/or functional tests while you work
> on your task. Before you add your code to common codebase, you should make
> sure that your code is consistent with:
> * coding guidelines (gofmt, in our case),
> * agreed and recommended coding practices (like the check that you are
> adding).
> These checks test code for conformity ensuring that our code looks the same
> and is written to the highest agreed standard.
>
>
>
> Note that I only want to "divide up tests" temporally... not necessarily
> spatially.  If we want to put all our static analysis tests in one
> directory, our integration tests in another directory, unit tests in the
> directory of the unit... that's totally fine.  I just want an easy way to
> run all the fast tests (regardless of what or how they test) to get a
> general idea of how badly I've broken juju during development.
>
> I understand your desire for a quick turn around.
> But I question the value that you would get from running "fast" (short)
> tests - would this set include some fast running unit tests, integration
> tests and functional tests? 

Re: I think master is leaking

2016-04-11 Thread James Tunnicliffe
https://bugs.launchpad.net/juju-core/+bug/1564511

The reboot tests are broken - they patch out the creation of
containers when they expect to create them, but not when they don't.
They patch out the watching of containers when they want to see them
reboot, but not when they don't. AFAIK they always interact with LXD!

James

On Sat, Apr 9, 2016 at 1:45 AM, David Cheney  wrote:
> If the test suite panics, especially during a tear-down after a failed
> set-up, you leak mongos and data in /tmp. I have a cron job that runs
> a few times a day to keep this leaking below a few gig.
>
> On Sat, Apr 9, 2016 at 8:13 AM, Horacio Duran
>  wrote:
>> Hey, this is more of an open question/sharing.
>> I spent a good half hour trying to find out why the reboot test was timing
>> out, and found that I had a couple of lxcs that I don't recall creating (at
>> least one of them was a juju api server) and a good 5-8 mongos running.
>> I killed the test, the mongos and the lxcs and, magic, the reboot tests ran
>> well.
>> Did anyone else detect this kind of behavior? I believe either destroy (I
>> deployed a couple of times earlier and destroyed) or the tests are leaking
>> lxcs, at least. I'll try to repro later, but so far I would like someone
>> else's input.
>>



Re: Overlay network for Juju LXC containers?

2016-02-02 Thread James Tunnicliffe
Andrew and I took a look at this yesterday.

Digital Ocean don't support DHCP for private addresses, which is
unfortunate because if they did this would just work with Juju 2.0 and
with a feature flag for Juju 1.5. For this reason we need our own
overlay network. Unfortunately we have been overly prescriptive with
our network configuration, so we always expect to use lxcbr0 for
container connectivity instead of using the defaults in
/etc/default/lxc-net. If we weren't we could set up the fan quite
easily on each DO Droplet and then use the manual provisioner to
enlist each Droplet into Juju's control.

I have got a bug open to track this issue:
https://bugs.launchpad.net/bugs/1540832

James

On Mon, Feb 1, 2016 at 2:26 PM, Andrew McDermott
<andrew.mcderm...@canonical.com> wrote:
> Merlijn & Patrik:
>
> Adding +James Tunnicliffe as he will be looking into your questions today
> (and this week).
>
> On 29 January 2016 at 13:18, Andrew McDermott
> <andrew.mcderm...@canonical.com> wrote:
>>
>> I will look into this this afternoon for you.
>>
>> On 29 January 2016 at 13:16, Rick Harding <rick.hard...@canonical.com>
>> wrote:
>>>
>>> Sorry dimiter, I know Andrew is out. Can you investigate please?
>>>
>>>
>>> On Fri, Jan 29, 2016, 8:13 AM Merlijn Sebrechts
>>> <merlijn.sebrec...@gmail.com> wrote:
>>>>
>>>> Any follow up to this? I'm also interested in using fan with lxc and
>>>> Juju.
>>>>
>>>> 2016-01-07 19:19 GMT+01:00 Andrew McDermott
>>>> <andrew.mcderm...@canonical.com>:
>>>>>
>>>>> Hi Patrik,
>>>>>
>>>>> I will look into this tomorrow. Apologies for the delay.
>>>>>
>>>>> On 7 January 2016 at 14:39, Patrik Karisch <patrik.kari...@gmail.com>
>>>>> wrote:
>>>>>>
>>>>>> Hi Andrew,
>>>>>>
>>>>>> Thanks for the answer.
>>>>>>
>>>>>> So on AWS, all the instances must be created inside a VPC to
>>>>>> bind lxcbr0 to the AWS network and get an IP allocated?
>>>>>>
>>>>>> Since the Digital Ocean provider is a simple plugin and basically based
>>>>>> on manual provisioning, the best solution would be to activate Fan
>>>>>> networking on my machines manually? Are there any docs on how I can
>>>>>> point Juju to get a Fan IP address for the containers? Mark
>>>>>> Shuttleworth's blog post says it's super easy for LXD, Docker and Juju
>>>>>> but shows only a Docker cli example.
>>>>>>
>>>>>> Best regards
>>>>>> Patrik
>>>>>>
>>>>>> Andrew McDermott <andrew.mcderm...@canonical.com> schrieb am Do., 7.
>>>>>> Jan. 2016 um 14:14 Uhr:
>>>>>>>
>>>>>>> Hi Patrik,
>>>>>>>
>>>>>>> There is no current solution for Digital Ocean.
>>>>>>>
>>>>>>> On AWS a container gets an IP address on the lxcbr0 network. We then
>>>>>>> add iptables rules that make the container visible on the host's
>>>>>>> network - the host can see the container, the container can see the
>>>>>>> host.
>>>>>>>
>>>>>>> On MAAS (for 16.04) we create a bridge per NIC, and the container,
>>>>>>> depending on how many interfaces are configured, will get an address
>>>>>>> on each subnet. Please note that all of this is currently work in
>>>>>>> progress and is only available on a feature branch (maas-spaces).
>>>>>>>
>>>>>>> AWS and MAAS do not use the fan.
>>>>>>>
>>>>>>> We are currently working on Juju's network model to make it easier to
>>>>>>> do what you are asking for. My colleague Dimiter Naydenov has been 
>>>>>>> blogging
>>>>>>> about this recently:
>>>>>>>
>>>>>>>
>>>>>>> https://insights.ubuntu.com/2015/11/08/deploying-openstack-on-maas-1-9-with-juju/
>>>>>>>
>>>>>>> So for DO we don't have any transparent Juju solution for you, but we
>>>>>>> are actively developing the capabilities of Juju's networking model.

Re: Proposal: doc.go for each package

2015-08-27 Thread James Tunnicliffe
A good way of reading Go docs is to run godoc -http=:6060 and point your
browser at http://localhost:6060/pkg/github.com/juju/juju/ - no grep
required.

James



On Wed, Aug 26, 2015 at 6:17 PM, roger peppe roger.pe...@canonical.com
wrote:

 +1.

 We should definitely have package docs for every package,
 explaining what it's for, how it is intended to be used and explaining
 any overarching concepts.

 There's no particular need for it to be in a separate doc.go file
 though.

 On 26 August 2015 at 14:11, Frank Mueller frank.muel...@canonical.com
 wrote:
  Hi,
 
  I would like to share an idea with you.
 
  As our codebase gets larger and larger, the chance of touching areas you've
  never worked in before grows too, so you often need some time to understand
  the concepts involved. For example, the many testing packages we have follow
  different ideas: some contain only small helpers, others larger stub
  environments that serve as a test bed, allowing you to inject errors and
  trace calls. And that's only testing. ;)
 
  So my proposal is to establish writing a doc.go for each package containing
  a high-level description of the intention and usage of the package (details
  stay, as usual, with the corresponding code). This file contains only our
  copyright, this documentation, and the package statement.
 
  // Copyright 2015 Canonical Ltd.
  // Licensed under the AGPLv3, see LICENCE file for details.
 
  // Package foo is intended to provide access to the foo
  // cloud provider. Here it ...
  package foo
 
  This way we not only provide good documentation for sites like godoc.org
  but also for ourselves when walking through the code. It's a fixed anchor
  for initial orientation, without searching. Sure, often this already is in
  (or could be moved into) the major package file, like foo.go in the example.
  But having a fixed name for the doc-containing file makes it simpler to
  navigate to, a recursive grep through all doc.go files is easy, and last but
  not least it's more obvious when this documentation is missing (an existence
  check could be part of CI, sadly not a quality check *smile*).
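
  For illustration, the existence check mentioned above could be sketched as a
  small Go tool. This is only a sketch under assumptions: the rule "every
  directory containing .go files must also contain doc.go" and the root
  argument are my inventions, not Juju's actual CI policy.

```go
// doccheck lists package directories that lack a doc.go file.
// A minimal sketch of the CI existence check suggested above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// needsDoc reports whether a directory listing (file names only) describes
// a Go package that is missing its doc.go file.
func needsDoc(names []string) bool {
	hasGo, hasDoc := false, false
	for _, n := range names {
		if strings.HasSuffix(n, ".go") {
			hasGo = true
		}
		if n == "doc.go" {
			hasDoc = true
		}
	}
	return hasGo && !hasDoc
}

func main() {
	root := "." // assumed default; pass the repo root as the first argument
	if len(os.Args) > 1 {
		root = os.Args[1]
	}
	err := filepath.Walk(root, func(path string, info os.FileInfo, walkErr error) error {
		if walkErr != nil || !info.IsDir() {
			return walkErr
		}
		entries, err := os.ReadDir(path)
		if err != nil {
			return err
		}
		names := make([]string, 0, len(entries))
		for _, e := range entries {
			names = append(names, e.Name())
		}
		if needsDoc(names) {
			fmt.Println(path) // directory with .go files but no doc.go
		}
		return nil
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```

  A CI job could then fail if the tool prints anything.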
 
  So, what do you think?
 
  mue
 
  --
  Frank Mueller frank.muel...@canonical.com
  Juju Core Sapphire Team http://jujucharms.com
 
  --
  Juju-dev mailing list
  Juju-dev@lists.ubuntu.com
  Modify settings or unsubscribe at:
  https://lists.ubuntu.com/mailman/listinfo/juju-dev
 




Re: ppc64le timeouts cursing things

2015-07-17 Thread James Tunnicliffe
On 17 July 2015 at 13:08, Dimiter Naydenov
dimiter.nayde...@canonical.com wrote:

 On 17.07.2015 12:07, James Tunnicliffe wrote:
 /me opens can of worms
 Thanks for starting the discussion :)


 Having spent perhaps too long trying to parallelise the running of
 the unit test suite over multiple machines using various bat guano
 crazy ideas, I know too much about this but haven't got an easy
 fix. I do know the right fix is to re-write the very long tests
 that we have.

 If you want to find long running tests, go test -v ./... -check.v
 is the command to run at top level. You will get a lot of output,
 but it isn't difficult to find tests that take longer than 10
 seconds with grep and I am sure I could dig the script out that I
 wrote that examines the output and tells you all tests over a
 certain runtime.

 When you run go test ./... at the top of juju/juju it runs suites
 in parallel. If you have multiple long tests in a suite then it has
 a significant impact on the total runtime. We have no way with the
 current tools to exclude single tests without modifying the tests
 themselves;

 How about GOMAXPROCS=1 go test ./... ? Won't that force the runtime to
 run all suites sequentially?

I don't want to run them sequentially - that would be slower.

There are several things going on. First, long tests are bad, but if
they have to be long then starting them as soon as possible is good,
because it is more efficient to pack big things first and small things
last (think of a bucket: put the big rocks in first and the sand in
last, and you can easily level the sand off, but if you put the sand in
first you end up with a lumpy surface). Second, long tests tend to be
ones sitting and waiting for things to happen rather than being very
CPU intensive, but if you increase GOMAXPROCS in the hope of taking
advantage of unused CPU time, you mostly end up making other,
timing-dependent tests fail because you just slowed them down enough to
fail. Third, the test scheduler seems to (though I haven't looked at
the code) run one suite per process, each suite single threaded, in
alphabetical directory order; since our longer suites tend to be closer
to the end of that list, it doesn't schedule optimally.

I know there is work ongoing to improve the Go scheduler, which may
help if it looks at load and not just the number of active processes.

 if we did we could run all the tests that take less than a few
 seconds by maintaining a list of long tests, and run those long
 tests as a separate, parallel task. The real fix is to put some
 effort into making the long running tests more unit test and less
 full stack test. 30+ seconds is not what we want. The least worst
 idea I have is making a sub-suite for tests that take > 10 seconds,
 one test per suite, so the standard tools will run them in parallel
 with everything else. Providing you have many CPUs there is a
 reasonable chance this will help. It is not remotely nice though.

 Using go tool pprof can also help figuring out why certain tests take
 a long time and/or memory. I'm planning to experiment with it and come
 up with some feedback.

I did take a quick look a while ago, but I was a young Juju hacker and
a young Go hacker, so I didn't get much further than looking at the
numbers and thinking yep, they are big. I would be very surprised if
there was an easy fix for the long running tests. I expect that
testing in a different way is required. The good news is the number of
long tests is small.

These are the long tests as found by the combination of these two:
http://pastebin.ubuntu.com/11892666/
http://pastebin.ubuntu.com/11892667/

PASS: pinger_test.go:131: mongoPingerSuite.TestAgentConnectionsShutDownWhenStateDies 30.368s
PASS: fetch_test.go:60: FetchSuite.TestRun 9.003s
PASS: fetch_test.go:60: FetchSuite.TestRun 9.002s
PASS: status_test.go:2673: StatusSuite.TestStatusAllFormats 13.327s
PASS: upgradejuju_test.go:308: UpgradeJujuSuite.TestUpgradeJuju 16.219s
PASS: machine_test.go:409: MachineSuite.TestHostUnits 10.795s
PASS: machine_test.go:498: MachineSuite.TestManageEnviron 9.919s
PASS: machine_test.go:1941: mongoSuite.TestStateWorkerDialSetsWriteMajority 12.071s
PASS: unit_test.go:225: UnitSuite.TestUpgradeFailsWithoutTools 10.116s
PASS: bootstrap_test.go:142: bootstrapSuite.TestBootstrapNoToolsDevelopmentConfig 11.892s
PASS: bootstrap_test.go:123: bootstrapSuite.TestBootstrapNoToolsNonReleaseStream 11.623s
PASS: leadership_test.go:130: leadershipSuite.TestClaimLeadership 10.021s
PASS: dblog_test.go:65: dblogSuite.TestMachineAgentWithoutFeatureFlag 10.012s
PASS: dblog_test.go:83: dblogSuite.TestUnitAgentWithoutFeatureFlag 10.060s
PASS: oplog_test.go:26: oplogSuite.TestWithRealOplog 14.208s
PASS: assign_test.go:1259: assignCleanSuite.TestAssignUnitPolicyConcurrently 10.530s
PASS: assign_test.go:1259: assignCleanSuite.TestAssignUnitPolicyConcurrently 10.834s
PASS
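
For reference, the "script that examines the output" mentioned above could
look roughly like the sketch below. The 10-second threshold and the gocheck
line format ("PASS: file.go:NN: Suite.TestName 12.345s") are assumptions
based on the output pasted above, not the actual script:

```go
// slowtests filters gocheck -check.v output, printing only test result
// lines whose recorded runtime exceeds a threshold.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// slowerThan reports whether a gocheck result line records a runtime above
// limit seconds. Lines it cannot parse are treated as not slow.
func slowerThan(line string, limit float64) bool {
	fields := strings.Fields(line)
	if len(fields) < 2 {
		return false
	}
	if !strings.HasPrefix(fields[0], "PASS") && !strings.HasPrefix(fields[0], "FAIL") {
		return false
	}
	last := fields[len(fields)-1]
	if !strings.HasSuffix(last, "s") {
		return false
	}
	secs, err := strconv.ParseFloat(strings.TrimSuffix(last, "s"), 64)
	return err == nil && secs > limit
}

func main() {
	// Pipe `go test -v ./... -check.v` output through this filter.
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		if slowerThan(sc.Text(), 10.0) {
			fmt.Println(sc.Text())
		}
	}
}
```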

Re: ppc64le timeouts cursing things

2015-07-17 Thread James Tunnicliffe
/me opens can of worms

Having spent perhaps too long trying to parallelise the running of the
unit test suite over multiple machines using various bat guano crazy
ideas, I know too much about this but haven't got an easy fix. I do
know the right fix is to re-write the very long tests that we have.

If you want to find long running tests, go test -v ./... -check.v is
the command to run at top level. You will get a lot of output, but it
isn't difficult to find tests that take longer than 10 seconds with
grep and I am sure I could dig the script out that I wrote that
examines the output and tells you all tests over a certain runtime.

When you run go test ./... at the top of juju/juju it runs suites in
parallel. If you have multiple long tests in a suite then it has a
significant impact on the total runtime. We have no way with the
current tools to exclude single tests without modifying the tests
themselves; if we did we could run all the tests that take less than a
few seconds by maintaining a list of long tests, and run those long
tests as a separate, parallel task. The real fix is to put some effort
into making the long running tests more unit test and less full stack
test. 30+ seconds is not what we want. The least worst idea I have is
making a sub-suite for tests that take > 10 seconds, one test per
suite, so the standard tools will run them in parallel with everything
else. Providing you have many CPUs there is a reasonable chance this
will help. It is not remotely nice though.

0 ✓ dooferlad@homework2
~/dev/go/src/github.com/juju/juju/worker/uniter $ go test -check.v

Shorter tests deleted from this list. The longest are:
PASS: uniter_test.go:1508: UniterSuite.TestActionEvents 39.711s
PASS: uniter_test.go:1114: UniterSuite.TestUniterRelations 16.276s
PASS: uniter_test.go:970: UniterSuite.TestUniterUpgradeGitConflicts 11.354s

These are worth a look:
PASS: uniter_test.go:2053: UniterSuite.TestLeadership 5.146s
PASS: util_unix_test.go:103: UniterSuite.TestRunCommand 6.946s
PASS: uniter_test.go:2104: UniterSuite.TestStorage 4.593s
PASS: uniter_test.go:1367: UniterSuite.TestUniterCollectMetrics 4.102s
PASS: uniter_test.go:774: UniterSuite.TestUniterDeployerConversion 6.904s
PASS: uniter_test.go:427: UniterSuite.TestUniterDyingReaction 5.772s
PASS: uniter_test.go:393: UniterSuite.TestUniterHookSynchronisation 4.546s
PASS: uniter_test.go:1274: UniterSuite.TestUniterRelationErrors 4.536s
PASS: uniter_test.go:476: UniterSuite.TestUniterSteadyStateUpgrade 6.405s
PASS: uniter_test.go:895: UniterSuite.TestUniterUpgradeConflicts 6.430s

ok   github.com/juju/juju/worker/uniter 175.014s

James

On 17 July 2015 at 04:59, Tim Penhey tim.pen...@canonical.com wrote:
 Hi Curtis,

 I have been looking at some of the recent cursings from ppc64le, and the
 last two included timeouts for the worker/uniter tests.

 On my machine, amd64, i7, 16 gig ram, I get the following:

 $ time go test
 2015-07-17 03:53:03 WARNING juju.worker.uniter upgrade123.go:26 no
 uniter state file found for unit unit-mysql-0, skipping uniter upgrade step
 OK: 51 passed
 PASS
 ok  github.com/juju/juju/worker/uniter  433.256s

 real7m24.270s
 user3m18.647s
 sys 1m2.472s

 Now let's ignore the logging output that someone should fix; we can
 see how long it takes here. Given that gccgo on power is slower, we are
 going to do two things:

 1) increase the timeouts for the uniter

 2) change the uniter tests

 WRT point 2, most of the uniter tests are actually fully functional
 end to end tests, and should not be run every time we land code.

 They should be moved into the featuretest package.

 Thanks,
 Tim




Re: juju bootstrap failed in latest version(1.24.2-trusty-amd64) of juju

2015-07-16 Thread James Tunnicliffe
Hi Sunitha,

If you are using the local provider then you need to have LXC
installed. The instructions are here:
https://jujucharms.com/docs/stable/config-local though if that is the
problem we could do with a better error message. Please let us know if
those instructions help.

The default environments.yaml contains a lot of commented-out options
that represent the defaults, and since installing LXC creates lxcbr0,
that is the bridge Juju expects unless you tell it to use a different
one.

I hope this helps.

James

On 16 July 2015 at 08:52, Sunitha Radharapu sradh...@in.ibm.com wrote:
 Hi Team.

  juju bootstrap failed with below error.

  ERROR cannot find network interface lxcbr0: net: no such interface
 Bootstrap failed, cleaning up the environment.
 ERROR there was an issue examining the environment: failure setting config:
 cannot find address of network-bridge: lxcbr0: net: no such interface.

 In my .juju/environments.yaml the #network-bridge: lxcbr0 line is commented
 out by default.

 please find  below  my .juju/environments.yaml file.

 environments:
 vmware:
   type: vsphere

   # IP address or DNS name of vsphere API host.
   host:

   # Vsphere API user credentials.
   user:
   password:

   # Name of vsphere datacenter.
   datacenter:

   # Name of the network that all created VMs will use to obtain a public
   # IP address.
   # This network should have an IP pool configured or a DHCP server
   # connected to it.
   # This parameter is optional.
   extenal-network:
 # https://juju.ubuntu.com/docs/config-local.html
 local:
 type: local

 # root-dir holds the directory that is used for the storage files
 and
 # database. The default location is $JUJU_HOME/env-name.
 # $JUJU_HOME defaults to ~/.juju. Override if needed.
 #
 # root-dir: ~/.juju/local

 # storage-port holds the port where the local provider starts the
 # HTTP file server. Override the value if you have multiple local
 # providers, or if the default port is used by another program.
 #
 # storage-port: 8040

 # network-bridge holds the name of the LXC network bridge to use.
 # Override if the default LXC network bridge is different.
 #

 #network-bridge: lxcbr0

 Thanks,
 Sunitha.


 --
 Juju mailing list
 Juju@lists.ubuntu.com
 Modify settings or unsubscribe at:
 https://lists.ubuntu.com/mailman/listinfo/juju




Re: Blocking bugs process

2015-07-14 Thread James Tunnicliffe
On 14 July 2015 at 15:31, Ian Booth ian.bo...@canonical.com wrote:


 On 14/07/15 23:26, Aaron Bentley wrote:
 On 2015-07-13 07:43 PM, Ian Booth wrote:
 By the definition given

 If a bug must be fixed for the next minor release, it is
 considered a ‘blocker’ and will prevent all landing on that
 branch.

 that bug and any other that we say we must include in a release
 would block landings. That's the bit I'm having an issue with. I
 think landings need to be blocked when appropriate, but not by that
 definition.

 Here's my rationale:
 1. We have held the principle that our trunk and stable branches
 should always be releaseable.
 2. We have said we should stop-the-line when a branch becomes
 unreleasable.
 3. Therefore, I have concluded that we should stop-the-line when a bug
 is present that makes the branch unreleasable.

 Do you agree with 1 and 2?  I think 3 simply follows from 1 and 2, but
 am I wrong?


 Agree with 1 and 2 (depending on the definition of unreleasable - one 
 definition
 of releasable is CI passing).
 3 does not follow from the definition though.

 A milestone may have many bugs assigned to it that we agree must be fixed
 before we release that milestone, simply because we think those bugs are of
 high importance and fit our schedule in terms of resources etc. Holding up a
 20+ person development team because we have a bunch of bugs assigned to a
 milestone is neither practical nor productive. Software has bugs. Bugs are
 assigned to milestones so we can plan releases. We generally agree that we
 want all bugs on a milestone to be fixed prior to releasing (or else why add
 them to that milestone). This does not (and should not, IMO) make them
 blockers.

 I am happy with the process we have now. CI passing means a branch is
 releasable. That's our current definition (we wait for a bless before
 releasing). When CI breaks, we stop the line to fix CI (and rolling back the
 revision that just landed and broke things is a viable option there). Some
 bugs that have been around for a while and finally get assigned to a
 milestone should not block landings. They may be complex and hard to
 diagnose, and a few people fixing them is enough. It doesn't help anyone to
 hold up the entire dev team over such bugs. Whereas with a CI breakage you
 have clear choices: fix quickly or roll back to unblock.


 Depends on the changes. I think we should be pragmatic and make
 considered decisions. I guess that's why we have the jfdi flag.

 It's true that the particulars of the bug may matter in deciding
 whether it should block, and that's why there's a process for
 overriding the blocking tag: Exceptions are raised to the release team.

 I think JFDI should be considered a nuclear option.  If you need it,
 it's good that it exists, but you shouldn't ever need it.  If you
 think you need it, there may be a problem with our process.


 There have been many times we have legitimately needed jfdi. Dev teams
 exist in a world where pragmatism is usually the best policy, rather than
 strict adherence to rules that have the potential to kill velocity without
 corresponding benefit.

+2

If the only thing that needs to change before a release is for a bug
to be fixed, I am quite happy with that branch blocking. If the
situation is more nuanced than that, our process shouldn't hinder us.
As soon as we encode absolutes into an automated process we hinder or
remove our ability to be pragmatic and actually do the right thing.
This is why it is irritating when trunk blocks when we know someone is
working on a fix: we know we are doing valuable work and the issue is
being worked on. We don't need to be hit with the bug fixing stick and
should be able to work as a team on more than one thing. If our
team(s) are working well, then anyone who needs assistance should be
getting it no matter what they are working on, not just bugs.

James



Re: please check your spam folders

2015-03-27 Thread James Tunnicliffe
Lots of CI emails end up in spam for me unless I specifically tell
Gmail not to. You can do this by creating a filter and ticking the
"Never send it to Spam" box, so for internal lists you could trust
them. People in your contacts are trusted more than those who aren't
for spam-filtering purposes, but I haven't tried importing the entire
Canonical directory into my own contact list as that seems dumb.

James

On 27 March 2015 at 03:05, Nate Finch nate.fi...@canonical.com wrote:
 I just checked my spam folder and found several false positives... so maybe
 gmail is having a bad month. I dunno.

 On Thu, Mar 26, 2015 at 11:04 PM, David Cheney david.che...@canonical.com
 wrote:

 Be careful with this message. Many people marked similar messages as
 spam.  Learn more

 Looks like this was deliberate

 On Fri, Mar 27, 2015 at 1:57 PM, Nate Finch nate.fi...@canonical.com
 wrote:
  I recently sent an email about backporting changes to repos that use
  gopkg,
  but I've gotten a report that it got sent to spam, so please look for it
  there if you don't see it in your inbox.
 
 




