Re: Time to do a new release?

2024-01-16 Thread Alex Heneveld
+1 from me too - new features have stabilized, including the workflow and
Kubernetes updates, so this makes sense.

On Mon, 15 Jan 2024 at 20:38, Geoff Macartney 
wrote:

> Hi Juan,
>
> +1 from me. A new release is long overdue.
>
> Cheers
> Geoff
>
>
>
> On Mon, 15 Jan 2024, 12:31 Juan Cabrerizo,  wrote:
>
> > Hello, Brooklyn developers and users,
> >
> > I completely missed a comment on a closed PR [1] last December asking to
> > cut a new Brooklyn release that includes it.
> >
> > The fact is, Brooklyn 1.0.0 was released almost four years ago, and since
> > then, new features have been added and vulnerable dependencies replaced.
> >
> > It is probably a good time to release 1.1.0. I wonder if anyone is
> > working on something that they would like to include in it.
> >
> > It would be great to have others' opinions here.
> >
> > [1]
> >
> https://github.com/apache/brooklyn-client/pull/86#issuecomment-1869669313
> >
>


Re: Apache Brooklyn dependency injection quick question - skip DI on stop

2023-05-01 Thread Alex Heneveld
Geoff -

On stop/destroy it wouldn't run the pre-apply workflow - it would just use
the value that was set when that workflow was run, on the last apply.

Best
Alex


On Mon, 1 May 2023, 19:32 Geoff Macartney,  wrote:

> Hi Alex,
>
> How does (A) work exactly? What difference does it make if the
> attributeWhenReady is in a pre-apply workflow or not? Why doesn't the same
> problem occur as you've described?
>
> Geoff
>
>
>
> On Mon, 1 May 2023 at 09:21, Alex Heneveld  wrote:
>
> > Hi folks,
> >
> > I've got a question about the best way to do some "start-only" DI using
> > the downstream Brooklyn-Terraform project, and I wondered what people's
> > thoughts are on the best approach.
> >
> > The situation is that we have a mid-tier TF and a data-tier TF, and as
> you
> > might expect we need to pass the data-tier URL to the mid-tier.  We
> > currently do this as a config `tf_var.db_url =
> > $brooklyn:entity("data_tier").attributeWhenReady("db_url")`.  It works
> > great in almost all cases.
> >
> > Where it doesn't work well is if we stop the data-tier before the
> > mid-tier.  In this case when we stop mid-tier, it fails because the
> > attribute sensor `db_url` isn't available and isn't expected to be
> > available.  Of course we don't need the `db_url` to tear down the
> mid-tier,
> > but there's no way currently to indicate that.
> >
> > So what's the best way to indicate that `db_url` is only needed during
> > plan/apply?
> >
> > There are a few options I can think of:
> >
> > (A) Add a `pre_apply.workflow` (and probably a `pre_plan.workflow`,
> > `pre_stop.workflow`, and maybe also `post_.workflow`) config to the
> > terraform entity, providing a way to supply an optional workflow to be
> run
> > prior to apply/plan.  In this workflow we could say `wait db_url =
> > $brooklyn:entity("data_tier").attributeWhenReady("db_url")` then
> > `set-config db_url = db_url`.  And the `tf_var.db_url` points at
> > `$brooklyn:config("db_url")`.  This means it is only updated on apply,
> and
> > a `stop` or other `plan` instruction will simply use the last set
> > variable.  (And in the `pre_plan.workflow` we could be more forgiving,
> > update config `db_url` if there is a new `db_url` available but otherwise
> > just leave it as it was.)
> >
> > (B) Add new special handling for vars of the form `apply.tf_vars.V`
> > (optionally `plan.tf_vars.V`, `stop.tf_vars.V`) where these vars are only
> > used if that is the step being done.
> >
> > (C) Add a `$brooklyn:if(condition, when_matched, when_unmatched)`
> function
> > to the DSL in Apache Brooklyn.  Then in Brooklyn Terraform we could say
> > something like `$brooklyn:if( { sensor: service.status, equals: starting
> },
> > $brooklyn:entity("data_tier").attributeWhenReady("db_url"),
> > $brooklyn:sensor("last_vars.db_url"))`.
> >
> > (D) Add a `depends_on` keyword which creates a relationship between two
> > entities.  If such a relationship is present, the dependent entity won't
> be
> > allowed to start until the dependency is ready, and the dependency isn't
> > allowed to stop until the dependent is stopped.  This would solve the
> > problem in a different way, because the `data_tier` wouldn't be allowed
> to
> > be stopped while the `mid_tier` is active.  This is how it is addressed
> > within Terraform.
> >
> > I think I lean towards (A): although (B) and (C) are more concise, they
> > aren't as clear or as powerful.  (D) would be nice but feels like quite
> > a bit more work, with some subtle things to consider around checking
> > these dependencies on start/stop; if enforced out-of-the-box it is
> > likely to be slightly crude and obscure.  (D) still seems like a nice
> > thing to consider, maybe adding workflow steps to facilitate it, so e.g.
> > as part of (A) a developer could say in the `pre_start.workflow` "wait
> > for all dependencies to be ready" and in the `pre_stop.workflow` "wait
> > for all dependents to be stopped", but it is not something to bake in.
> >
> > If you have thoughts on the above, or can think of other good ways to
> > do it, please let me know!
> >
> > Best
> > Alex
> >
> >
> > [1] https://github.com/cloudsoft/brooklyn-terraform
> >
>


Apache Brooklyn dependency injection quick question - skip DI on stop

2023-05-01 Thread Alex Heneveld
Hi folks,

I've got a question about the best way to do some "start-only" DI using the
downstream Brooklyn-Terraform project, and I wondered what people's
thoughts are on the best approach.

The situation is that we have a mid-tier TF and a data-tier TF, and as you
might expect we need to pass the data-tier URL to the mid-tier.  We
currently do this as a config `tf_var.db_url =
$brooklyn:entity("data_tier").attributeWhenReady("db_url")`.  It works
great in almost all cases.

Where it doesn't work well is if we stop the data-tier before the
mid-tier.  In this case when we stop mid-tier, it fails because the
attribute sensor `db_url` isn't available and isn't expected to be
available.  Of course we don't need the `db_url` to tear down the mid-tier,
but there's no way currently to indicate that.

So what's the best way to indicate that `db_url` is only needed during
plan/apply?

There are a few options I can think of:

(A) Add a `pre_apply.workflow` (and probably a `pre_plan.workflow`,
`pre_stop.workflow`, and maybe also `post_.workflow`) config to the
terraform entity, providing a way to supply an optional workflow to be run
prior to apply/plan.  In this workflow we could say `wait db_url =
$brooklyn:entity("data_tier").attributeWhenReady("db_url")` then
`set-config db_url = db_url`.  And the `tf_var.db_url` points at
`$brooklyn:config("db_url")`.  This means it is only updated on apply, and
a `stop` or other `plan` instruction will simply use the last set
variable.  (And in the `pre_plan.workflow` we could be more forgiving,
update config `db_url` if there is a new `db_url` available but otherwise
just leave it as it was.)
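
For illustration, option (A) might look roughly like this in the blueprint
(a sketch only: `pre_apply.workflow` is the proposed, not-yet-existing,
config key, and the step syntax is borrowed from the workflow proposal):

```yaml
# Hypothetical sketch of option (A); `pre_apply.workflow` does not
# exist yet on the terraform entity.
brooklyn.config:
  pre_apply.workflow:
    steps:
      # block until the data tier has published its URL, then record it
      - wait db_url = $brooklyn:entity("data_tier").attributeWhenReady("db_url")
      - set-config db_url = db_url
  # stop and plan reuse the last value set here, so teardown never blocks
  tf_var.db_url: $brooklyn:config("db_url")
```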

(B) Add new special handling for vars of the form `apply.tf_vars.V`
(optionally `plan.tf_vars.V`, `stop.tf_vars.V`) where these vars are only
used if that is the step being done.

(C) Add a `$brooklyn:if(condition, when_matched, when_unmatched)` function
to the DSL in Apache Brooklyn.  Then in Brooklyn Terraform we could say
something like `$brooklyn:if( { sensor: service.status, equals: starting },
$brooklyn:entity("data_tier").attributeWhenReady("db_url"),
$brooklyn:sensor("last_vars.db_url"))`.

(D) Add a `depends_on` keyword which creates a relationship between two
entities.  If such a relationship is present, the dependent entity won't be
allowed to start until the dependency is ready, and the dependency isn't
allowed to stop until the dependent is stopped.  This would solve the
problem in a different way, because the `data_tier` wouldn't be allowed to
be stopped while the `mid_tier` is active.  This is how it is addressed
within Terraform.
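
Option (D) could be sketched as below; this is entirely hypothetical syntax,
with `depends_on` borrowed from Terraform's meta-argument of the same name
and not an existing Brooklyn keyword:

```yaml
# Hypothetical: `depends_on` would prevent mid_tier starting before
# data_tier is ready, and prevent data_tier stopping while mid_tier
# is still active.
services:
  - type: terraform
    id: data_tier
  - type: terraform
    id: mid_tier
    depends_on:
      - data_tier
    brooklyn.config:
      tf_var.db_url: $brooklyn:entity("data_tier").attributeWhenReady("db_url")
```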

I think I lean towards (A): although (B) and (C) are more concise, they
aren't as clear or as powerful.  (D) would be nice but feels like quite a
bit more work, with some subtle things to consider around checking these
dependencies on start/stop; if enforced out-of-the-box it is likely to be
slightly crude and obscure.  (D) still seems like a nice thing to consider,
maybe adding workflow steps to facilitate it, so e.g. as part of (A) a
developer could say in the `pre_start.workflow` "wait for all dependencies
to be ready" and in the `pre_stop.workflow` "wait for all dependents to be
stopped", but it is not something to bake in.

If you have thoughts on the above, or can think of other good ways to do
it, please let me know!

Best
Alex


[1] https://github.com/cloudsoft/brooklyn-terraform


Re: Karaf 4.3.8 upgrade issues

2022-12-21 Thread Alex Heneveld
Thanks again JB. So far it has proven impossible to get a working combo of
versions on Java 11 (or anything 9+). We've been locked on 8, which has
been working well, with a lot of babysitting of the jre.properties.

Best
Alex

On Wed, 21 Dec 2022, 05:28 Jean-Baptiste Onofré,  wrote:

> Just a note, as you can see here
> https://karaf.apache.org/download.html, Karaf 4.3.x is supposed to
> work with JDK 11 (not JDK 8). Do you have the same issues with JDK 11
> ?
>
> Regarding the client disconnects, I think I fixed that already. I will
> double check.
>
> Regarding jaspic, it's normal: it's a change in Jetty. I remember I
> made changes about that. Let me find commits/Jira related to that.
>
> At first glance, I don't see any new changes required in Karaf (for
> 4.4.3), but I will do a pass anyway.
>
> Regards
> JB
>
> On Tue, Dec 20, 2022 at 11:59 AM Alex Heneveld  wrote:
> >
> > Thanks JB. Java 8 (Zulu, Mac M1). Hope all goes well w 4.3.9.
> >
> > Best
> > Alex
> >
> > On Tue, 20 Dec 2022, 04:58 Jean-Baptiste Onofré, 
> wrote:
> >
> > > Hi Alex,
> > >
> > > Thanks for the detailed report.
> > >
> > > Which JDK are you using ? I would like to reproduce each topic step by
> > > step.
> > >
> > > I'm right now preparing 4.4.3 and 4.3.9 releases. Not sure I will have
> > > time to include any change related to these points, but I will try.
> > > Else, I will move forward pretty fast new releases just after.
> > >
> > > Regards
> > > JB
> > >
> > > On Mon, Dec 19, 2022 at 2:20 PM Alex Heneveld 
> wrote:
> > > >
> > > > Hi JB, Team,
> > > >
> > > > I'm upgrading various OSGi deps in Apache Brooklyn and have hit three
> > > > curious issues:
> > > >
> > > >
> > > > (1) bin/client disconnects immediately after authentication if no
> command
> > > > specified; this is after enabling the karaf user in
> etc/users/properties,
> > > > `bin/client bundle:list` works, but `bin/client` now disconnects
> > > > immediately.  `ssh karaf@localhost -p 8101` still works.
> > > >
> > > > At first I thought it was related to (2) but since commands actually
> work
> > > > and an ssh shell works I now suspect a TTY issue.  Logging on client
> (`-l
> > > > 4`) and server don't show much apart from the sudden closing after
> the
> > > > server says "no command provided".
> > > >
> > > >
> > > > (2) The log shows scary stack traces when
> > > > enabling org.apache.karaf.shell.core/4.3.8 about config commands
> failing
> > > to
> > > > load, eg:
> > > >
> > > > 2022-12-19T10:47:40,020Z - INFO   64 o.a.k.s.i.a.o.CommandExtension
> > > > [tures-3-thread-1] Inspection of class
> > > > org.apache.karaf.config.command.CancelCommand failed.
> > > > java.lang.NoClassDefFoundError:
> org/apache/karaf/shell/api/action/Action
> > > >
> > > > This is almost certainly due to loading karaf.config earlier (
> > > >
> > >
> https://github.com/apache/karaf/commit/b783f279c78f79005d15657f10fbe3a84bfdd863
> > > ).
> > > > The stack traces are at INFO level and it doesn't have noticeable
> impact
> > > so
> > > > not a big deal but thought I would say.
> > > >
> > > >
> > > > (3) Using Eclipse Jetty, servlets and WARs no longer start unless
> > > geronimo
> > > > jaspi specs _and_ provider are explicitly added; I was getting errors
> > > such
> > > > as:
> > > >
> > > > Unable to resolve org.eclipse.jetty.security.jaspi/9.4.49.v20220914:
> > > > missing requirement
> [org.eclipse.jetty.security.jaspi/9.4.49.v20220914]
> > > > osgi.wiring.package;
> > > >
> > >
> filter:="(&(osgi.wiring.package=javax.security.auth.message)(version>=1.0.0)(!(version>=2.0.0)))"
> > > >
> > > > Could not start the servlet context for context path []
> > > > java.lang.SecurityException: AuthConfigFactory error:
> > > > java.lang.ClassNotFoundException:
> > > > org.apache.geronimo.components.jaspi.AuthConfigFactoryImpl not found
> by
> > > > org.apache.geronimo.specs.geronimo-jaspic_1.0_spec [46]
> > > >
> > > > The first was solved by explicitly adding this:
> > > >
> > > >

Re: Karaf 4.3.8 upgrade issues

2022-12-20 Thread Alex Heneveld
Thanks JB. Java 8 (Zulu, Mac M1). Hope all goes well w 4.3.9.

Best
Alex

On Tue, 20 Dec 2022, 04:58 Jean-Baptiste Onofré,  wrote:

> Hi Alex,
>
> Thanks for the detailed report.
>
> Which JDK are you using ? I would like to reproduce each topic step by
> step.
>
> I'm right now preparing 4.4.3 and 4.3.9 releases. Not sure I will have
> time to include any change related to these points, but I will try.
> Else, I will move forward pretty fast new releases just after.
>
> Regards
> JB
>
> On Mon, Dec 19, 2022 at 2:20 PM Alex Heneveld  wrote:
> >
> > Hi JB, Team,
> >
> > I'm upgrading various OSGi deps in Apache Brooklyn and have hit three
> > curious issues:
> >
> >
> > (1) bin/client disconnects immediately after authentication if no command
> > specified; this is after enabling the karaf user in etc/users/properties,
> > `bin/client bundle:list` works, but `bin/client` now disconnects
> > immediately.  `ssh karaf@localhost -p 8101` still works.
> >
> > At first I thought it was related to (2) but since commands actually work
> > and an ssh shell works I now suspect a TTY issue.  Logging on client (`-l
> > 4`) and server don't show much apart from the sudden closing after the
> > server says "no command provided".
> >
> >
> > (2) The log shows scary stack traces when
> > enabling org.apache.karaf.shell.core/4.3.8 about config commands failing
> to
> > load, eg:
> >
> > 2022-12-19T10:47:40,020Z - INFO   64 o.a.k.s.i.a.o.CommandExtension
> > [tures-3-thread-1] Inspection of class
> > org.apache.karaf.config.command.CancelCommand failed.
> > java.lang.NoClassDefFoundError: org/apache/karaf/shell/api/action/Action
> >
> > This is almost certainly due to loading karaf.config earlier (
> >
> https://github.com/apache/karaf/commit/b783f279c78f79005d15657f10fbe3a84bfdd863
> ).
> > The stack traces are at INFO level and it doesn't have noticeable impact
> so
> > not a big deal but thought I would say.
> >
> >
> > (3) Using Eclipse Jetty, servlets and WARs no longer start unless
> geronimo
> > jaspi specs _and_ provider are explicitly added; I was getting errors
> such
> > as:
> >
> > Unable to resolve org.eclipse.jetty.security.jaspi/9.4.49.v20220914:
> > missing requirement [org.eclipse.jetty.security.jaspi/9.4.49.v20220914]
> > osgi.wiring.package;
> >
> filter:="(&(osgi.wiring.package=javax.security.auth.message)(version>=1.0.0)(!(version>=2.0.0)))"
> >
> > Could not start the servlet context for context path []
> > java.lang.SecurityException: AuthConfigFactory error:
> > java.lang.ClassNotFoundException:
> > org.apache.geronimo.components.jaspi.AuthConfigFactoryImpl not found by
> > org.apache.geronimo.specs.geronimo-jaspic_1.0_spec [46]
> >
> > The first was solved by explicitly adding this:
> >
> >
> mvn:org.apache.geronimo.specs/geronimo-jaspic_1.0_spec/1.1
> >
> > That used to come with pax, but doesn't any more, and is needed for
> > jetty-websocket, so makes sense.  But that caused the second error,
> which I
> > could only solve by adding a new (never before included) bundle:
> >
> > mvn:org.apache.geronimo.components/geronimo-jaspi/2.0.0
> >
> > It looks like in the old versions something had possibly been setting a
> > BasicAuthenticator prior to this code block, which meant it previously
> > bypassed this jaspi lookup altogether.  No idea why it is doing this
> > lookup now.  Also I note there is an Eclipse Jetty
> > jaspi DefaultAuthConfigFactory but I can't see how to wire it.  It works
> > fine with geronimo-jaspi -- though we don't do anything special with
> > that; I don't even really know what it is, just that the WARs stopped
> > launching.
> >
> > (Probably this is nothing to do with the Karaf changes, but since they
> are
> > all version-linked and it was the most irritating, I figured I'd say!)
> >
> >
> > The versions being upgraded are:
> >
> > * CXF 3.4.1 -> 3.4.10
> > * Pax web 7.3.23 -> 7.3.27
> > * Karaf 4.3.6 -> 4.3.8
> > * Eclipse Jetty 9.4.43.v20210629 -> 9.4.49.v20220914
> >
> >
> > It all seems to be working now but I thought people might want to know,
> and
> > quite possibly there are better solutions you can point me at!
> >
> > Many thanks,
> > Alex
>


Karaf 4.3.8 upgrade issues

2022-12-19 Thread Alex Heneveld
Hi JB, Team,

I'm upgrading various OSGi deps in Apache Brooklyn and have hit three
curious issues:


(1) bin/client disconnects immediately after authentication if no command
specified; this is after enabling the karaf user in etc/users/properties,
`bin/client bundle:list` works, but `bin/client` now disconnects
immediately.  `ssh karaf@localhost -p 8101` still works.

At first I thought it was related to (2) but since commands actually work
and an ssh shell works I now suspect a TTY issue.  Logging on client (`-l
4`) and server don't show much apart from the sudden closing after the
server says "no command provided".


(2) The log shows scary stack traces when
enabling org.apache.karaf.shell.core/4.3.8 about config commands failing to
load, eg:

2022-12-19T10:47:40,020Z - INFO   64 o.a.k.s.i.a.o.CommandExtension
[tures-3-thread-1] Inspection of class
org.apache.karaf.config.command.CancelCommand failed.
java.lang.NoClassDefFoundError: org/apache/karaf/shell/api/action/Action

This is almost certainly due to loading karaf.config earlier (
https://github.com/apache/karaf/commit/b783f279c78f79005d15657f10fbe3a84bfdd863).
The stack traces are at INFO level and it doesn't have noticeable impact so
not a big deal but thought I would say.


(3) Using Eclipse Jetty, servlets and WARs no longer start unless geronimo
jaspi specs _and_ provider are explicitly added; I was getting errors such
as:

Unable to resolve org.eclipse.jetty.security.jaspi/9.4.49.v20220914:
missing requirement [org.eclipse.jetty.security.jaspi/9.4.49.v20220914]
osgi.wiring.package;
filter:="(&(osgi.wiring.package=javax.security.auth.message)(version>=1.0.0)(!(version>=2.0.0)))"

Could not start the servlet context for context path []
java.lang.SecurityException: AuthConfigFactory error:
java.lang.ClassNotFoundException:
org.apache.geronimo.components.jaspi.AuthConfigFactoryImpl not found by
org.apache.geronimo.specs.geronimo-jaspic_1.0_spec [46]

The first was solved by explicitly adding this:

mvn:org.apache.geronimo.specs/geronimo-jaspic_1.0_spec/1.1

That used to come with pax, but doesn't any more, and is needed for
jetty-websocket, so makes sense.  But that caused the second error, which I
could only solve by adding a new (never before included) bundle:

mvn:org.apache.geronimo.components/geronimo-jaspi/2.0.0

It looks like in the old versions something had possibly been setting a
BasicAuthenticator prior to this code block, which meant it previously
bypassed this jaspi lookup altogether.  No idea why it is doing this lookup
now.  Also I note there is an Eclipse Jetty jaspi DefaultAuthConfigFactory
but I can't see how to wire it.  It works fine with geronimo-jaspi --
though we don't do anything special with that; I don't even really know
what it is, just that the WARs stopped launching.

(Probably this is nothing to do with the Karaf changes, but since they are
all version-linked and it was the most irritating, I figured I'd say!)


The versions being upgraded are:

* CXF 3.4.1 -> 3.4.10
* Pax web 7.3.23 -> 7.3.27
* Karaf 4.3.6 -> 4.3.8
* Eclipse Jetty 9.4.43.v20210629 -> 9.4.49.v20220914


It all seems to be working now but I thought people might want to know, and
quite possibly there are better solutions you can point me at!

Many thanks,
Alex


Re: Declarative Workflow update, mutex, replay and retention

2022-11-21 Thread Alex Heneveld
Following some feedback re "replayable" I've rejigged that section [1].  It
separates the concept of whether steps are "resumable" from that of noting
explicit "replayable" waypoints; in most cases either can allow a workflow
to be replayed, e.g. if a resumable step is interrupted (such as a sleep,
but not, for instance, an http call), or if the workflow author indicated
that a completed step was "replayable here".

Thanks to those who gave their input!  I am much happier with this rejigged
plan.  More comments are welcome!

Best
Alex

[1]
https://docs.google.com/document/d/1u02Bi6sS8Fkf1s7UzRRMnvLhA477bqcyxGa0nJesqkI/edit#heading=h.63aibdplmze


On Fri, 18 Nov 2022 at 13:44, Alex Heneveld  wrote:

> Hi team,
>
> I've got most of the proposed "lock" implementation completed, as
> discussed in the previous mail (below); PR to come shortly.  It works
> well, though there are a huge number of subtleties so test cases were a
> challenge; that makes it all the better that we provide this, however, so
> workflow authors have a much easier time.  The biggest challenge was to
> make sure that if an author writes e.g. `{ type: workflow, lock: my-lock,
> on-error: [ retry ], steps: [ ... ] }`, and Brooklyn does a failover, the
> retry (at the new server) preserves the lock ... but if there is no retry,
> or the retry fails, the lock is cleared.
>
> As part of this I've been mulling over "replayable"; as I mentioned below
> it was one aspect I'm not entirely sure of, and it's quite closely related
> to "expiration" which I think might be better described as "retention".  I
> think I've got a better way to handle those, and a tweak to error
> handling.  It's described in this section:
>
>
> https://docs.google.com/document/d/1u02Bi6sS8Fkf1s7UzRRMnvLhA477bqcyxGa0nJesqkI/edit#heading=h.63aibdplmze
>
> There are two questions towards the end that I'd especially value input on.
>
> Thanks
> Alex
>
>
> On Fri, 11 Nov 2022 at 13:40, Alex Heneveld  wrote:
>
>> Hi team,
>>
>> Workflow is in a pretty good state -- nearly done mostly as per
>> proposal, with nearly all step types, retries, docs [1], and integrated
>> into the activities view in the inspector.  My feeling is that it radically
>> transforms how easy it is to write sensors and effectors.
>>
>> Thanks to everyone for their reviews and feedback!
>>
>> The biggest change from the original proposal is a switch to the list
>> syntax from the init.d syntax.  Extra thanks to those who agitated for that.
>>
>> The other really nice aspect is how the shorthand step notation functions
>> as a DSL in simple cases so extra thanks for the suggestion to make it
>> DSL-like.
>>
>> The two items remaining are nested workflows and controlling how long
>> workflows are remembered.
>>
>>
>> There is one new feature which seems to be needed, which I wanted to
>> raise.  As the subject suggests, this is mutexes.  I had hoped we wouldn't
>> need this but actually quite often you want to ensure no other workflows
>> are conflicting.  Consider the simple case where you want to atomically
>> increment a sensor:
>>
>> ```
>> - let i = ${entity.sensor.count} ?? 0
>> - let i = ${i} + 1
>> - set-sensor count = ${i}
>> ```
>>
>> Running this twice we'd hope to get count=2.  But if the runs are
>> concurrent you won't.  So how can we ensure no other instances of the
>> workflow are running concurrently?
>>
>> There are three options I see.
>>
>>
>> (1) set-sensor allows arithmetic
>>
>> We could support arithmetic on set-sensor and require it to run
>> atomically against that sensor.  For instance `set-sensor count =
>> (${entity.sensor.count} ?? 0) + 1`.  We could fairly easily ensure the RHS
>> is evaluated with the caller holding the lock on the sensor count.  However
>> our arithmetic support is quite limited (we don't support grouping at
>> present, so either you'd have to write `${entity.sensor.count} + 1 ?? 1` or
>> we'd beef that up), and I think there is something nice about at present
>> where arithmetic is only allowed on `let` so it is more inspectable.
>>
>>
>> (2) set-sensor with mutex check
>>
>> We could introduce a check which is done while the lock on the sensor is
>> held.  So you could check the sensor is unset before setting it, and fail
>> if it isn't, or check the value is as expected.  You can then set up
>> whatever retry behaviour you want in the usual way.  For instance:

Re: Declarative Workflow update, mutex, replay and retention

2022-11-18 Thread Alex Heneveld
Hi team,

I've got most of the proposed "lock" implementation completed, as discussed
in the previous mail (below); PR to come shortly.  It works well, though
there are a huge number of subtleties so test cases were a challenge; that
makes it all the better that we provide this, however, so workflow authors
have a much easier time.  The biggest challenge was to make sure that if an
author writes e.g. `{ type: workflow, lock: my-lock, on-error: [ retry ],
steps: [ ... ] }`, and Brooklyn does a failover, the retry (at the new
server) preserves the lock ... but if there is no retry, or the retry
fails, the lock is cleared.
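
Spelled out in full, that might look like the following (a sketch: the step
syntax follows the workflow proposal, and `my-lock` and the counter steps,
taken from the atomic-increment example in the earlier mail, are
illustrative):

```yaml
# Sketch only: a nested workflow holding "my-lock" around the
# atomic-increment example.  On failover, the retry re-acquires the
# lock at the new server; if there is no retry, or the retry fails,
# the lock is released.
- type: workflow
  lock: my-lock
  on-error: [ retry ]
  steps:
    - let i = ${entity.sensor.count} ?? 0
    - let i = ${i} + 1
    - set-sensor count = ${i}
```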

As part of this I've been mulling over "replayable"; as I mentioned below
it was one aspect I'm not entirely sure of, and it's quite closely related
to "expiration" which I think might be better described as "retention".  I
think I've got a better way to handle those, and a tweak to error
handling.  It's described in this section:

https://docs.google.com/document/d/1u02Bi6sS8Fkf1s7UzRRMnvLhA477bqcyxGa0nJesqkI/edit#heading=h.63aibdplmze

There are two questions towards the end that I'd especially value input on.

Thanks
Alex


On Fri, 11 Nov 2022 at 13:40, Alex Heneveld  wrote:

> Hi team,
>
> Workflow is in a pretty good state -- nearly done mostly as per
> proposal, with nearly all step types, retries, docs [1], and integrated
> into the activities view in the inspector.  My feeling is that it radically
> transforms how easy it is to write sensors and effectors.
>
> Thanks to everyone for their reviews and feedback!
>
> The biggest change from the original proposal is a switch to the list
> syntax from the init.d syntax.  Extra thanks to those who agitated for that.
>
> The other really nice aspect is how the shorthand step notation functions
> as a DSL in simple cases so extra thanks for the suggestion to make it
> DSL-like.
>
> The two items remaining are nested workflows and controlling how long
> workflows are remembered.
>
>
> There is one new feature which seems to be needed, which I wanted to
> raise.  As the subject suggests, this is mutexes.  I had hoped we wouldn't
> need this but actually quite often you want to ensure no other workflows
> are conflicting.  Consider the simple case where you want to atomically
> increment a sensor:
>
> ```
> - let i = ${entity.sensor.count} ?? 0
> - let i = ${i} + 1
> - set-sensor count = ${i}
> ```
>
> Running this twice we'd hope to get count=2.  But if the runs are
> concurrent you won't.  So how can we ensure no other instances of the
> workflow are running concurrently?
>
> There are three options I see.
>
>
> (1) set-sensor allows arithmetic
>
> We could support arithmetic on set-sensor and require it to run atomically
> against that sensor.  For instance `set-sensor count =
> (${entity.sensor.count} ?? 0) + 1`.  We could fairly easily ensure the RHS
> is evaluated with the caller holding the lock on the sensor count.  However
> our arithmetic support is quite limited (we don't support grouping at
> present, so either you'd have to write `${entity.sensor.count} + 1 ?? 1` or
> we'd beef that up), and I think there is something nice about at present
> where arithmetic is only allowed on `let` so it is more inspectable.
>
>
> (2) set-sensor with mutex check
>
> We could introduce a check which is done while the lock on the sensor is
> held.  So you could check the sensor is unset before setting it, and fail
> if it isn't, or check the value is as expected.  You can then set up
> whatever retry behaviour you want in the usual way.  For instance:
>
> ```
> # pessimistic locking
> - let i = ${entity.sensor.count} ?? 0
> - let j = ${i} + 1
> - step: set-sensor count = ${j}
>   require: ${i}
>   on-error:
>   - goto start
> - clear-sensor lock-for-count
> ```
>
> ```
> # mutex lock acquisition
> - step: set-sensor lock-for-count = ${workflow.id}
>   require:
> when: absent
>   on-error:
>   - retry backoff 50ms increasing 2x up to 5s
> - let i = ${entity.sensor.count} ?? 0
> - let i = ${i} + 1
> - set-sensor count = ${i}
> - clear-sensor lock-for-count
> ```
>
> (A couple subtleties for those of you new to the workflow conditions; they
> always have an implicit target depending on context, which for `require` we
> would make be the old sensor value; "when: absent" is a special predicate
> DSL keyword to say that a sensor is unavailable (you could also use `when:
> absent_or_null` or `when: falsy` or `not: { when: truthy }`.  Finally
> `require: ${i}` uses the fact that conditions default to being an equality
> check.  That call is equivalent to `require: { target:
&

Re: move jclouds to the attic?

2022-11-14 Thread Alex Heneveld
As a member of the Apache Brooklyn PMC I'd be pleased to see jclouds
sustained a bit longer.

Increasingly in AB people are using custom containers (eg AWS CLI),
terraform, helm, and other tools to drive creation, but for well-behaved
VMs without much thought jclouds is usually simpler than any of those.  So
while the long-term future of jclouds in AB isn't clear to me, in the near
term it would be great to have maintenance support for jclouds at least, if
people are willing.  Thank you!

Re (2) I am definitely curious how much effort it would be for both
Brooklyn and jclouds to move to Karaf 5.  I think there's a lot of subtle
use of OSGi capabilities in both, so they would be interesting exercises,
and if not too hard it would be a great step forward in making them more
lightweight.

Best
Alex


On Mon, 14 Nov 2022 at 04:53, Jean-Baptiste Onofré  wrote:

> Hi guys,
>
> thanks for your update !
>
> I propose to prepare a quick plan describing:
> 1. PMC set proposal
> 2. Roadmap/ideas for jclouds future (I would like to mention Karaf Minho
> here)
> 3. Send the proposal on the mailing list to move forward on vote and
> inform the board
>
> Thoughts ?
>
> Regards
> JB
>
> On Sun, Nov 13, 2022 at 11:12 AM Juan Cabrerizo  wrote:
> >
> > Hi, I'm a PMC member of Brooklyn, happy to try to help JClouds and
> joining
> > the committee. It's a core dependency for us.
> >
> > Regards
> > Juan
> >
> > On Sat, 12 Nov 2022 at 16:22, Geoff Macartney 
> wrote:
> >
> > > I would also be willing to join the Jclouds PMC if that would be
> helpful.
> > >
> > > Regards
> > > Geoff
> > >
> > > On Thu, 10 Nov 2022 at 11:15, Jean-Baptiste Onofré 
> > > wrote:
> > > >
> > > > I’m in ;)
> > > >
> > > > Regards
> > > > JB
> > > >
> > > > Le jeu. 10 nov. 2022 à 11:56, fpapon  a écrit :
> > > >
> > > > > Hi,
> > > > >
> > > > > After some discussions with JB, we are ok to propose our help to
> join
> > > > > the PMC of JCloud and contribute to keep the project alive if
> anybody
> > > is
> > > > > ok.
> > > > >
> > > > > Regards,
> > > > >
> > > > > Francois
> > > > >
> > > > > On 09/11/2022 21:57, Geoff Macartney wrote:
> > > > > > Hello Andrew, and Jclouds PMC,
> > > > > >
> > > > > > I'm sorry to be so late in replying to this, I confess I had
> missed
> > > it
> > > > > > when it was sent last month and only became aware of it today.
> > > > > >
> > > > > > Speaking as a member of the Apache Brooklyn PMC I must confess I
> am
> > > > > > sad to hear this proposal. Jclouds is one of our most critical
> > > > > > dependencies, and I would worry about the implications for
> Brooklyn
> > > if
> > > > > > Jclouds moved to the Attic. I am worried in any case about the
> > > > > > implications of the lower activity in the community, but that is
> > > > > > another issue.
> > > > > >
> > > > > > I have been refreshing my memory about the PMC guidelines on
> moving
> > > to
> > > > > > the Attic [1]. These note that
> > > > > >
> > > > > > "In summary, the only reason for a project to move to the Attic
> is
> > > > > > lack of oversight due to an insufficient number of active PMC
> > > members"
> > > > > >
> > > > > > (the minimum being three), and that electing willing community
> > > members
> > > > > > to the PMC would be the best way to keep it viable. If the worst
> > > comes
> > > > > > to the worst "the Board can "reboot" a PMC by re-establishing it
> with
> > > > > > a new or modified PMC".
> > > > > >
> > > > > > Perhaps it would be worth doing a formal [VOTE] poll within
> Jclouds
> > > > > > PMC itself to see if at least three PMC members would be willing
> to
> > > > > > continue to carry out that role? If not, maybe other options
> could be
> > > > > > explored before deciding to move to the Attic, such as some
> community
> > > > > > members joining the PMC.
> > > > > >
> > > > > > What do you think?
> > > > > >
> > > > > > Kind regards
> > > > > > Geoff
> > > > > >
> > > > > > [1] https://apache.org/dev/pmc#move-to-attic
> > > > > >
> > > > > >
> > > > > > On Mon, 10 Oct 2022 at 14:03, Andrew Gaul 
> wrote:
> > > > > >> jclouds development has slowed from 123 commits from 26
> > > contributors in
> > > > > >> 2018 to just 24 from 6 contributors in 2022.  This is despite
> > > growing
> > > > > >> downloads over the last 12 months from 50,000 to 80,000 for
> > > jclouds-core
> > > > > >> alone.  Unfortunately the number of active committers has shrunk
> > > and we
> > > > > >> will soon lack quorum for future releases.  This means that the
> > > project
> > > > > >> must move to the Apache attic.
> > > > > >>
> > > > > >> Ideally the community could step up to sustain the project,
> e.g.,
> > > > > >> reviewing pull requests, fixing issues, responding to mailing
> list
> > > > > >> queries, and eventually becoming committers themselves.  Does
> anyone
> > > > > >> have a multi-year interest in jclouds that wants to help out?
> > > > > >>
> > > > > >> If not, I will cut a final 2.6.0 release before retiring the
> > > project.
> > > > > >>
> >

Declarative Workflow update & mutex

2022-11-11 Thread Alex Heneveld
Hi team,

Workflow is in a pretty good state -- nearly done mostly as per
proposal, with nearly all step types, retries, docs [1], and integrated
into the activities view in the inspector.  My feeling is that it radically
transforms how easy it is to write sensors and effectors.

Thanks to everyone for their reviews and feedback!

The biggest change from the original proposal is a switch to the list
syntax from the init.d syntax.  Extra thanks to those who agitated for that.

The other really nice aspect is how the shorthand step notation functions
as a DSL in simple cases so extra thanks for the suggestion to make it
DSL-like.

The two items remaining are nested workflows and controlling how long
workflows are remembered.


There is one new feature which seems to be needed, which I wanted to
raise.  As the subject suggests, this is mutexes.  I had hoped we wouldn't
need this but actually quite often you want to ensure no other workflows
are conflicting.  Consider the simple case where you want to atomically
increment a sensor:

```
- let i = ${entity.sensor.count} ?? 0
- let i = ${i} + 1
- set-sensor count = ${i}
```

Running this twice we'd hope to get count=2.  But if the runs are
concurrent you may get count=1, since both runs can read the old value
before either writes (a lost update).  So how can we ensure no other
instances of the workflow are running concurrently?
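A hypothetical interleaving (an illustration, not from the original message) showing how two concurrent runs lose an update:

```yaml
# Two concurrent runs, A and B, of the workflow above, starting from count=0:
#
#   A: let i = ${entity.sensor.count} ?? 0    # A reads 0
#   B: let i = ${entity.sensor.count} ?? 0    # B also reads 0
#   A: let i = ${i} + 1                       # A's i = 1
#   B: let i = ${i} + 1                       # B's i = 1
#   A: set-sensor count = ${i}                # count = 1
#   B: set-sensor count = ${i}                # count = 1 -- one increment lost
```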

There are three options I see.


(1) set-sensor allows arithmetic

We could support arithmetic on set-sensor and require it to run atomically
against that sensor.  For instance `set-sensor count =
(${entity.sensor.count} ?? 0) + 1`.  We could fairly easily ensure the RHS
is evaluated with the caller holding the lock on the sensor count.  However
our arithmetic support is quite limited (we don't support grouping at
present, so either you'd have to write `${entity.sensor.count} + 1 ?? 1` or
we'd need to beef that up), and I think there is something nice about the
present situation where arithmetic is only allowed on `let`, so it is more
inspectable.
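A sketch of how option (1) might read in a blueprint (hypothetical syntax: atomic evaluation of the RHS under the sensor's lock is the proposed behaviour, and the grouped form assumes the grouping support mentioned above were added):

```yaml
# hypothetical: with grouping support, the RHS would be evaluated atomically
# while holding the lock on the `count` sensor
- set-sensor count = (${entity.sensor.count} ?? 0) + 1

# without grouping support, the workaround form from the text:
- set-sensor count = ${entity.sensor.count} + 1 ?? 1
```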


(2) set-sensor with mutex check

We could introduce a check which is done while the lock on the sensor is
held.  So you could check the sensor is unset before setting it, and fail
if it isn't, or check the value is as expected.  You can then set up
whatever retry behaviour you want in the usual way.  For instance:

```
# optimistic locking (compare-and-set)
- let i = ${entity.sensor.count} ?? 0
- let j = ${i} + 1
- step: set-sensor count = ${j}
  require: ${i}
  on-error:
  - goto start
```

```
# mutex lock acquisition
- step: set-sensor lock-for-count = ${workflow.id}
  require:
    when: absent
  on-error:
  - retry backoff 50ms increasing 2x up to 5s
- let i = ${entity.sensor.count} ?? 0
- let i = ${i} + 1
- set-sensor count = ${i}
- clear-sensor lock-for-count
```

(A couple of subtleties for those new to workflow conditions: they always
have an implicit target depending on context, which for `require` would be
the old sensor value.  `when: absent` is a special predicate DSL keyword
saying that a sensor is unavailable; you could also use `when:
absent_or_null`, `when: falsy`, or `not: { when: truthy }`.  Finally,
`require: ${i}` uses the fact that conditions default to an equality
check; it is equivalent to `require: { target: ${entity.sensor.count},
equals: ${i} }`.)

The retry with backoff is pretty handy here.  But there is still one
problem in the lock-acquisition case: if the workflow fails after step 1,
what will clear the lock?  (The compare-and-set approach doesn't have this
problem.)  Happily we have an easy solution, because workflows were
designed with recovery in mind.  If Brooklyn detects an interrupted
workflow on startup, it will fail it with a "dangling workflow" exception,
and you can attach recovery to it, specifying replay points and making
steps idempotent.

```
# rock-solid mutex lock acquisition
steps:
- step: set-sensor lock-for-count = ${workflow.id}
  require:
    any:
    - when: absent
    - equals: ${workflow.id}
  on-error:
  - retry backoff 50ms increasing 2x up to 5s
- let i = ${entity.sensor.count} ?? 0
- let i = ${i} + 1
- step: set-sensor count = ${i}
  replayable: yes
- clear-sensor lock-for-count
on-error:
  - condition:
      error-cause:
        glob: "*DanglingWorkflowException*"
    step: retry replay
replayable: yes
```

The `require` block now allows re-entrancy.  We rely on three facts:
Brooklyn gives each workflow instance a unique ID; on failover, Dangling is
thrown from the interrupted step, preserving the workflow ID (but giving it
a different task ID so replays can be distinguished, with support for this
in the UI); and Brooklyn persistence handles election of a single primary,
with any demoted instance interrupting its tasks.  The workflow is
replayable from the start, and on Dangling it replays.  Additionally we can
replay from the `set-sensor` step, which will use local copies of the
workflow variable, so if that step is interrupted and runs twice it is
idempotent.


(3) explicit `lock` keyword on `workflow` step
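A sketch of what option (3) might look like (purely hypothetical syntax: the `lock` parameter name and its semantics are assumptions, as the option is not elaborated in this message):

```yaml
# hypothetical: `lock` names a mutex held for the duration of the nested
# workflow; Brooklyn would release it on completion, failure, or recovery,
# avoiding the manual lock-sensor management of option (2)
- step: workflow
  lock: count-update
  steps:
  - let i = ${entity.sensor.count} ?? 0
  - let i = ${i} + 1
  - set-sensor count = ${i}
```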

Re: Declarative Workflow update & shorthand/DSL

2022-09-23 Thread Alex Heneveld
Thanks Geoff.  I've had informal feedback from others in favour of the list
approach, and in my trials it is working nice.  I will apply the changes in
the docs and PRs.

Two other things:

* Geoff's comment made me wonder about having a "function" step, where
within a workflow one could define a local function.  I'm thinking *not*
for just now, but FYI it would not be too hard to add.

* The semantics of referencing a sensor which isn't yet "ready" are
ambiguous.  Do we wait, return null, or give an error?  By default it
blocks, but this feels wrong.  I think we should (1) add a new `wait` step
which allows waiting for a value; (2) give an error when such a sensor is
referenced elsewhere (which is also the behaviour if you reference a
non-existent key in a map); and (3) support, in local variables only (i.e.
in `let`), some limited evaluation, including the `??` nullish operator
which forgives an error on the LHS (and basic maths).

A bit more detail on that last, it would allow:

- let x = ${entity.sensor.DoesNotExist} ?? 0
- let x = ${x} + 1

But it would give an error if you reference it in log or other commands:

- log the value is ${entity.sensor.DoesNotExist}
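A sketch of how the proposed `wait` step from point (1) might be used to make the blocking explicit (hypothetical syntax and sensor name):

```yaml
# hypothetical: block until the sensor is published, then reference it safely
- wait ${entity.sensor.SlowSensor}
- log the value is ${entity.sensor.SlowSensor}
```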

Best
Alex


On Wed, 21 Sept 2022 at 20:35, Geoff Macartney  wrote:

> Hi Alex, Mykola,
>
> By the way I should mention that I'm very busy in the evenings this week so
> might not get to look at the latest PR for a while. By all means go ahead
> and merge it if Mykola and/or others are happy with it, no need to wait for
> me.
>
> Cheers
> Geoff
>
>
> On Tue, 20 Sept 2022, 22:20 Geoff Macartney,  wrote:
>
> > Hi Alex,
> >
> > +1 This updated proposal looks good - I do think the list based
> > approach will be simpler and less error prone, and the fact that you
> > will support an optional `id` anyway, if that is desired, means it
> > retains much of the flexibility of the map based approach. The custom
> > workflow step looks a little like the "functions" that we discussed
> > previously. Putting this all together will be pretty powerful.
> >
> > Will try to get a look at the latest PR if I can.
> >
> > Cheers
> > Geoff
> >
> >
> > On Mon, 19 Sept 2022 at 17:31, Alex Heneveld  wrote:
> > >
> > > Geoff-  Thanks.  Comments addressed in #1361 along with a major
> addition
> > to
> > > support variables -- inputs/outputs/etc.
> > >
> > > All-  One of the points Geoff makes concerns how steps are defined.  I
> > > think along with other comments that tips the balance in favour of
> > > revisiting how steps are defined.
> > >
> > > I propose we switch from the OLD proposed approach -- the map of
> ordered
> > > IDs -- to a NEW LIST-BASED approach.  There's a lot of detail below but
> > > in-short it's shifting from:
> > >
> > > steps:
> > >   1-say-hi:  log hi
> > >   2-step-two:  log step 2
> > >
> > > To:
> > >
> > > steps:
> > >   - log hi
> > >   - log step 2
> > >
> > >
> > > Specifically, based on feedback and more hands-on experience, I
> propose:
> > >
> > > * steps now be supplied as a list (not a map)
> > > * users are no longer required to supply an ID for each step (in the
> old
> > > approach, the ID was required as the key for every step)
> > > * users can if they wish supply an ID for any step (now as an explicit
> > `id:
> > > ` rule)
> > > * the default order, if no `next: ` instruction is supplied, is the
> > > order of the list (in the old approach the order was based on the ID)
> > >
> > > Also, the shorthand idea has evolved a little bit; instead of a
> ":
> > > " single-key map, we've suggested:
> > >
> > > * it be a string " "
> > > * shorthand can also be supplied in a map using the key "s" or the key
> > > "shorthand" (to allow shorthand along with other step key values)
> > > * custom steps can define custom shorthand templates (e.g. "${key} "="
> > > ${value}")
> > > * (there is also some evolution in how custom steps are defined)
> > >
> > >
> > > To illustrate:
> > >
> > > The OLD EXAMPLE:
> > >
> > > steps:
> > >   1:
> > >     type: container
> > >     image: my/google-cloud
> > >     command: gcloud dataproc jobs submit spark --BUCKET=gs://${BUCKET}
> > >     env:
> > >       BUCKET: $brooklyn:config("bucket")
> > >     on-error:

Re: Declarative Workflow update & shorthand/DSL

2022-09-19 Thread Alex Heneveld
Geoff-  Thanks.  Comments addressed in #1361 along with a major addition to
support variables -- inputs/outputs/etc.

All-  One of the points Geoff makes concerns how steps are defined.  I
think along with other comments that tips the balance in favour of
revisiting how steps are defined.

I propose we switch from the OLD proposed approach -- the map of ordered
IDs -- to a NEW LIST-BASED approach.  There's a lot of detail below but
in-short it's shifting from:

steps:
  1-say-hi:  log hi
  2-step-two:  log step 2

To:

steps:
  - log hi
  - log step 2


Specifically, based on feedback and more hands-on experience, I propose:

* steps now be supplied as a list (not a map)
* users are no longer required to supply an ID for each step (in the old
approach, the ID was required as the key for every step)
* users can if they wish supply an ID for any step (now as an explicit `id:
` rule)
* the default order, if no `next: ` instruction is supplied, is the
order of the list (in the old approach the order was based on the ID)

Also, the shorthand idea has evolved a little bit; instead of a ":
" single-key map, we've suggested:

* it be a string " "
* shorthand can also be supplied in a map using the key "s" or the key
"shorthand" (to allow shorthand along with other step key values)
* custom steps can define custom shorthand templates (e.g. "${key} "="
${value}")
* (there is also some evolution in how custom steps are defined)


To illustrate:

The OLD EXAMPLE:

steps:
  1:
    type: container
    image: my/google-cloud
    command: gcloud dataproc jobs submit spark --BUCKET=gs://${BUCKET}
    env:
      BUCKET: $brooklyn:config("bucket")
    on-error: retry
  2:
    set-sensor: spark-output=${1.stdout}

Would become in the NEW proposal:

steps:
- type: container
  image: my/google-cloud
  command: gcloud dataproc jobs submit spark --BUCKET=gs://${BUCKET}
  env:
    BUCKET: $brooklyn:config("bucket")
  on-error: retry
- set-sensor spark-output = ${1.stdout}

If we wanted to attach an `id` to the second step (e.g. for use with
"next") we could write it either as:

# full long-hand map
- type: set-sensor
  input:
    sensor: spark-output
    value: ${1.stdout}
  id: set-spark-output

# mixed "s" shorthand key and other fields
- s: set-sensor spark-output = ${1.stdout}
  id: set-spark-output

To explain the reasoning:

The advantages of steps:

* Slightly less verbose when no ID is needed on a step
* Easier to read and understand flow
* Avoids hassle of renumbering when introducing step
* Avoids risk of error where same key defined multiple time

The advantages of OLD map-based scheme (implied disadvantages of the new
steps process):

* Easier user-facing correlation on steps (e.g. in UI) by always having an
explicit ID for easier correlation
* Easier to extend a workflow by inserting or overriding explicit steps

After some initial usage of the workflow, it seems these advantages of the
old approach are outweighed by the advantages of the list approach.  In
particular the "correlation" can be done in other ways, and extending a
workflow is probably not so useful, whereas supplying and maintaining an ID
is a hassle, error-prone, and harder to understand.

Finally, to explain the custom steps idea: it works out nicely in the
code, and we think it will be straightforward for users to add a
"compound-step" to the catalog, e.g. as follows for the workflow shown
above:

  id: retryable-gcloud-dataproc-with-bucket-and-sensor
  item:
    type: custom-workflow-step
    parameters:
      bucket:
        type: string
      sensor_name:
        type: string
        default: spark-output
    shorthand_definition: [ " bucket " ${bucket} ] [ " sensor " ${sensor_name} ]
    steps:
    - type: container
      image: my/google-cloud
      command: gcloud dataproc jobs submit spark --BUCKET=gs://${BUCKET}
      env:
        BUCKET: ${bucket}
      on-error: retry
    - set-sensor ${sensor_name} = ${1.stdout}

A user could then write a step:

- retryable-gcloud-dataproc-with-bucket-and-sensor

And optionally use the shorthand per the shorthand_definition, matching the
quoted string literals and inferring the indicated parameters, e.g.:

- retryable-gcloud-dataproc-with-bucket-and-sensor bucket my-bucket sensor
my-spark-output

They could of course also use the longhand:

- type: retryable-gcloud-dataproc-with-bucket-and-sensor
  input:
    bucket: my-bucket
    sensor_name: my-spark-output


Best
Alex



On Sat, 17 Sept 2022 at 21:13, Geoff Macartney  wrote:

> Hi Alex,
>
> Belatedly reviewed the PR. It's looking good! And surprisingly simple
> in the end. Made a couple of minor comments on it.
>
> Cheers
> Geoff
>
> On Thu, 8 Sept 2022 at 09:35, Alex Heneveld  wrote:
> >
> > Hi team,
> 

Declarative Workflow update & shorthand/DSL

2022-09-08 Thread Alex Heneveld
Hi team,

An initial PR with a few types and the ability to define an effector is
available [1].

This is enough for the next steps to be parallelized, e.g. new steps
added.  The proposal has been updated with a work plan / list of tasks
[2].  Any volunteers to help with some of the upcoming tasks let me know.

Finally I've been thinking about the "shorthand syntax" and how to bring us
closer to Peter's proposal of a DSL.  The original proposal allowed instead
of a map e.g.

step_sleep:
  type: sleep
  duration: 5s

or

step_update_service_up:
  type: set-sensor
  sensor:
    name: service.isUp
    type: boolean
  value: true

being able to use a shorthand _map_ with a single key being the type, and
value interpreted by that type, so in the OLD SHORTHAND PROPOSAL the above
could be written:

step_sleep:
  sleep: 5s

step_update_service_up:
  set-sensor: service.isUp = true

Having played with syntaxes a bit I wonder if we should instead say the
shorthand DSL kicks in when the step _body_ is a string (instead of a
single-key map), and the first word of the string being the type, and the
remainder interpreted by the type, and we allow it to be a bit more
ambitious.

Concretely this NEW SHORTHAND PROPOSAL would look something like:

step_sleep: sleep 5s
step_update_service_up: set-sensor service.isUp = true
# also supporting a type, i.e. `set-sensor [TYPE] NAME = VALUE`, e.g.
step_update_service_up: set-sensor boolean service.isUp = true

You would still need the full map syntax whenever defining flow logic -- e.g.
condition, next, retry, or timeout -- or any property not supported by the
shorthand syntax.  But for the (majority?) simple cases the expression
would be very concise.  In most cases I think it would feel like a DSL but
has the virtue of a very clear translation to the actual workflow model and
the underlying (YAML) model needed for resumption and UI.
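For instance (a hypothetical example under this proposal), a step that needs retry logic keeps the map form while its neighbour uses the string shorthand:

```yaml
steps:
  1-wait: sleep 5s
  2-mark-up:
    type: set-sensor
    sensor: service.isUp
    value: true
    on-error: retry   # flow logic, so the full map syntax is required
```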

As a final example, the example used at the start of the proposal
(simplified a little -- removing on-error retry and env map as those
wouldn't be supported by shorthand):

brooklyn.initializers:
- type: workflow-effector
  name: run-spark-on-gcp
  steps:
    1:
      type: container
      image: my/google-cloud
      command: gcloud dataproc jobs submit spark --BUCKET=gs://$brooklyn:config("bucket")
    2:
      type: set-sensor
      sensor: spark-output
      value: ${1.stdout}

Could be written in this shorthand as follows:

 steps:
   1: container my/google-cloud command "gcloud dataproc jobs submit spark --BUCKET=gs://${entity.config.bucket}"
   2: set-sensor spark-output ${1.stdout}

Thoughts?

Best
Alex


[1] https://github.com/apache/brooklyn-server/pull/1358
[2]
https://docs.google.com/document/d/1u02Bi6sS8Fkf1s7UzRRMnvLhA477bqcyxGa0nJesqkI/edit#heading=h.gbadaqa2yql6


On Wed, 7 Sept 2022 at 09:58, Alex Heneveld  wrote:

> Hi Peter,
>
> Yes - thanks for the extra details.  I did take your suggestion to be a
> procedural DSL not YAML, per the illustration at [1] (second code block).
> Probably where I was confusing was in saying that unlike DSLs which just
> run (and where the execution can be delegated to eg java/groovy/ruby), here
> we need to understand and display, store and resume the workflow progress.
> So I think it needs to be compiled to some representation that is well
> described and that new Apache Brooklyn code can reason about, both in the
> UI (JS) and backend (Java).  Parsing a DSL is much harder than using YAML
> for this "reasonable" representation (as in we can reason _about_ it :) ),
> because we already have good backend processing, persistence,
> serialization; and frontend processing and visualization support for
> YAML-based models.  So I think we almost definitely want a well-described
> declarative YAML model of the workflow.
>
> We might *also* want a Workflow DSL because I agree with you a DSL would
> be nicer for a user to write (if writing by hand; although if composing
> visually a drag-and-drop to YAML is probably easier).  However it should
> probably get "compiled" into a Workflow YAML.  So I'm suggesting we do the
> workflow YAML at this stage, and a DSL that compiles into that YAML can be
> designed later.  (Designing a good DSL and parser and reason-about-able
> representation is a big task, so being able to separate it feels good too!)
>
> Best
> Alex
>
> [1]
> https://docs.google.com/document/d/1u02Bi6sS8Fkf1s7UzRRMnvLhA477bqcyxGa0nJesqkI/edit#heading=h.75wm48pjvx0h
>
>
> On Fri, 2 Sept 2022 at 20:17, Geoff Macartney 
> wrote:
>
>> Hi Peter,
>>
>> Thanks for such a detailed writeup of how you see this working. I fear
>> I've too little experience with this sort of thing to be able to say
>> anything very useful about it. My thought on the matter would be,
>> let's get started with the yaml base

Re: A catch-up on progress?

2022-09-07 Thread Alex Heneveld
Confirmed for today 4pm UK time.  Anyone else who wants to attend please
drop me a mail.

Best
Alex


On Thu, 1 Sept 2022 at 17:24, Geoff Macartney 
wrote:

> Hi Alex,
>
> That's great, I'll be excited to hear all about it.  7th September
> suits me fine; I would probably prefer 4.00 p.m. over 11.00.
>
> Cheers
> Geoff
>
> On Thu, 1 Sept 2022 at 12:41, Alex Heneveld  wrote:
> >
> > Thanks for the excellent feedback Geoff and yes there are some very cool
> and exciting things added recently -- containers, conditions, and terraform
> and kubernetes support, all of which make writing complex blueprints much
> easier.
> >
> > I'd love to host a session to showcase these.
> >
> > How does Wed 7 Sept sound?  I could do 11am UK or 4pm UK -- depending
> what time suits for people who are interested.  Please RSVP and indicate
> your time preference!
> >
> > Best
> > Alex
> >
> >
> > On Wed, 31 Aug 2022 at 22:17, Geoff Macartney 
> wrote:
> >>
> >> Hi Alex,
> >>
> >> Another thought occurred to me when reading that workflow proposal. You
> wrote
> >>
> >> "and with the recent support for container-based tasks and declarative
> >> conditions, we have taken big steps towards enabling YAML authorship"
> >>
> >> Unfortunately over the past while I haven't been able to keep up as
> >> closely as I would like with developments in Brooklyn. I'm just
> >> wondering if it might be possible to get together some time, on Google
> >> Meet or Zoom or whatnot, if you or a colleague could spare half an
> >> hour to demo some of these recent developments? But don't worry about
> >> it if you're too busy at present.
> >>
> >> Adding dev@ to this in CC for the sake of Openness. Others might also
> >> be interested!
> >>
> >> Cheers
> >> Geoff
>


Re: A catch-up on progress?

2022-09-07 Thread Alex Heneveld
in up resources
> > workflow() - the main launch sequence using aspects of the DSL
> > monitoring() - an asynchronous workflow used to manage sensor output or
> for
> > whatever needs to be done while the "orchestra" is playing
> > shutdownHook() - called whenever shutdown is happening
> > }
> >
> > For those who don't like the smell of Java, the source file could just be
> > the contents, which would then be injected into the class framing code
> > before compilation.
> >
> > These are just ideas.  I'm not familiar enough with Brooklyn in its
> current
> > implementation to be able to create realistic pseudocode.
> >
> > Peter
> >
> > On Thu, Sep 1, 2022 at 9:24 AM Geoff Macartney <
> geoff.macart...@gmail.com>
> > wrote:
> >
> > > Hi Alex,
> > >
> > > That's great, I'll be excited to hear all about it.  7th September
> > > suits me fine; I would probably prefer 4.00 p.m. over 11.00.
> > >
> > > Cheers
> > > Geoff
> > >
> > > On Thu, 1 Sept 2022 at 12:41, Alex Heneveld  wrote:
> > > >
> > > > Thanks for the excellent feedback Geoff and yes there are some very
> cool
> > > and exciting things added recently -- containers, conditions, and
> terraform
> > > and kubernetes support, all of which make writing complex blueprints
> much
> > > easier.
> > > >
> > > > I'd love to host a session to showcase these.
> > > >
> > > > How does Wed 7 Sept sound?  I could do 11am UK or 4pm UK -- depending
> > > what time suits for people who are interested.  Please RSVP and
> indicate
> > > your time preference!
> > > >
> > > > Best
> > > > Alex
> > > >
> > > >
> > > > On Wed, 31 Aug 2022 at 22:17, Geoff Macartney <
> geoff.macart...@gmail.com>
> > > wrote:
> > > >>
> > > >> Hi Alex,
> > > >>
> > > >> Another thought occurred to me when reading that workflow proposal.
> You
> > > wrote
> > > >>
> > > >> "and with the recent support for container-based tasks and
> declarative
> > > >> conditions, we have taken big steps towards enabling YAML
> authorship"
> > > >>
> > > >> Unfortunately over the past while I haven't been able to keep up as
> > > >> closely as I would like with developments in Brooklyn. I'm just
> > > >> wondering if it might be possible to get together some time, on
> Google
> > > >> Meet or Zoom or whatnot, if you or a colleague could spare half an
> > > >> hour to demo some of these recent developments? But don't worry
> about
> > > >> it if you're too busy at present.
> > > >>
> > > >> Adding dev@ to this in CC for the sake of Openness. Others might
> also
> > > >> be interested!
> > > >>
> > > >> Cheers
> > > >> Geoff
> > >
>


Re: A catch-up on progress?

2022-09-01 Thread Alex Heneveld
Thanks for the excellent feedback Geoff and yes there are some very cool
and exciting things added recently -- containers, conditions, and terraform
and kubernetes support, all of which make writing complex blueprints much
easier.

I'd love to host a session to showcase these.

How does Wed 7 Sept sound?  I could do 11am UK or 4pm UK -- depending what
time suits for people who are interested.  Please RSVP and indicate your
time preference!

Best
Alex


On Wed, 31 Aug 2022 at 22:17, Geoff Macartney 
wrote:

> Hi Alex,
>
> Another thought occurred to me when reading that workflow proposal. You
> wrote
>
> "and with the recent support for container-based tasks and declarative
> conditions, we have taken big steps towards enabling YAML authorship"
>
> Unfortunately over the past while I haven't been able to keep up as
> closely as I would like with developments in Brooklyn. I'm just
> wondering if it might be possible to get together some time, on Google
> Meet or Zoom or whatnot, if you or a colleague could spare half an
> hour to demo some of these recent developments? But don't worry about
> it if you're too busy at present.
>
> Adding dev@ to this in CC for the sake of Openness. Others might also
> be interested!
>
> Cheers
> Geoff
>


Re: Brooklyn Feature Proposal - Declarative and Retryable Workflow

2022-08-31 Thread Alex Heneveld
s proposed which I will try to write up this week.
> >
> > I share the concerns about YAML which I think Peter expressed very well.
> His suggestion of a DSL instead of YAML is interesting and I think would be
> worth considering. I also have some reservations about some of the
> constructs you're proposing (well, at least one of them) and some perhaps
> relatively minor suggestions for changes in structure. My bigger concern is
> that adding a new programming language within Blueprints like this could
> add a whole new dimension of complexity. I'm asking myself, "how would I
> debug this" when things go wrong. I think that's worth some discussion as
> much as the details of the language. There are also points where I simply
> have questions and would like some more detail.
> >
> > I'll try to get more detailed thoughts written up this week.
> >
> > Cheers
> > Geoff
> >
> >
> >
> > On Sat, 27 Aug 2022 at 00:05, Peter Abramowitsch <
> pabramowit...@gmail.com> wrote:
> >>
> >> Hi Alex,
> >> I haven't been involved with the Brooklyn team for a long while so take
> >> this suggestion with as little or as much importance as you see at face
> >> value.   Your proposal for a richer specification language to guide
> >> realtime behavior is much appreciated and I think it is a great idea.
> >> You've obviously thought very deeply as to how it could be applied in
> >> different areas of a blueprint.
> >>
> >> My one comment is whether going for a declarative solution, especially
> one
> >> based on YAML is optimal.  Sure Yaml is well known, easy to eyeball,
> but it
> >> has two drawbacks that make me wonder if it is the best platform for
> your
> >> idea.  The first is that it is a format-based language.  Working in
> large
> >> infrastructure projects, small errors can have disastrous consequences,
> so
> >> as little as a missing or extra tab could result in destroying a data
> >> resource or bringing down a complex system.   The other, more
> philosophical
> >> comment has to do with the clumsiness of describing procedural concepts
> in
> >> a declarative language.  (anyone have fun with XSL doing anything
> >> significant?)
> >>
> >> So my suggestion would be to look into DSLs instead of Yaml.  Very nice
> >> ones can be created with little effort in Ruby Python, JS - and even
> Java.
> >> In addition to having the language's own interpreter check the syntax
> for
> >> you, you get lots of freebies such as being able to do line by line
> >> debugging - and of course the obvious advantage that there is no code
> layer
> >> between the DSL and its implementation, whereas with Yaml, someone
> needs to
> >> write the code that converts the grammar into behavior, catch errors
> etc.
> >>
> >> What do you think?
> >>
> >> Peter
> >>
> >> On Wed, Aug 24, 2022 at 8:44 AM Alex Heneveld 
> wrote:
> >>
> >> > Hi folks,
> >> >
> >> > I'd like Apache Brooklyn to allow more sophisticated workflow to be
> written
> >> > in YAML.
> >> >
> >> > As many of you know, we have a powerful task framework in java, but
> only a
> >> > very limited subset is currently exposed via YAML.  I think we could
> >> > generalize this without a mammoth effort, and get a very nice way for
> users
> >> > to write complex effectors, sensor feeds, etc, directly in YAML.
> >> >
> >> > At [1] please find details of the proposal.
> >> >
> >> > This includes the ability to branch and retry on error.  It can also
> give
> >> > us the ability to retry/resume on an Apache Brooklyn server failover.
> >> >
> >> > Comments welcome!
> >> >
> >> > Best
> >> > Alex
> >> >
> >> >
> >> > [1]
> >> >
> >> >
> https://docs.google.com/document/d/1u02Bi6sS8Fkf1s7UzRRMnvLhA477bqcyxGa0nJesqkI/edit?usp=sharing
> >> >
>


Brooklyn Feature Proposal - Declarative and Retryable Workflow

2022-08-24 Thread Alex Heneveld
Hi folks,

I'd like Apache Brooklyn to allow more sophisticated workflow to be written
in YAML.

As many of you know, we have a powerful task framework in java, but only a
very limited subset is currently exposed via YAML.  I think we could
generalize this without a mammoth effort, and get a very nice way for users
to write complex effectors, sensor feeds, etc, directly in YAML.

At [1] please find details of the proposal.

This includes the ability to branch and retry on error.  It can also give
us the ability to retry/resume on an Apache Brooklyn server failover.

Comments welcome!

Best
Alex


[1]
https://docs.google.com/document/d/1u02Bi6sS8Fkf1s7UzRRMnvLhA477bqcyxGa0nJesqkI/edit?usp=sharing


Re: Proposal to simplify predicates and specs

2022-06-27 Thread Alex Heneveld
Hi Devs,

I've implemented this in https://github.com/apache/brooklyn-server/pull/1326
.  For documentation, see:

https://github.com/apache/brooklyn-docs/pull/358/files

This should make working with dynamic groups and multigroups much nicer!

Best
Alex



On Fri, 17 Jun 2022 at 14:52, Alex Heneveld  wrote:

> Hi devs,
>
> We are seeing more use of predicates and specs in blueprints.  These are
> powerful features and not too hard to express in blueprints using the
> `$brooklyn:object` and `$brooklyn:entitySpec` DSL constructs, but I'm
> thinking it's worth what I think will be a fairly small level of effort to
> make these nicer to work with.
>
> Currently for predicates, e.g. in a DynamicGroup or DynamicMultiGroup, to
> filter the entities to accept we write something like this:
>
>   entityFilter:
> $brooklyn:object:
>   type: org.apache.brooklyn.core.entity.EntityPredicates
>   factoryMethod.name: attributeEqualTo
>   factoryMethod.args:
>   - grouping
>   - web-app-tier
>
> And for entity specs, e.g. in DynamicCluster or DynamicMultiGroup, to
> define the spec of what should be created we write something like:
>
>   dynamiccluster.memberspec:
> $brooklyn:entitySpec:
>   type: org.apache.brooklyn.entity.group.BasicGroup
>   brooklyn.config:
> ...
>   brooklyn.policies:
> - ...
>
> With the bean support and type coercion we have, it would not be too hard
> to allow a simpler YAML without the $brooklyn DSL.
>
> For predicates we could define a default deserialization to be used,
> effectively a DSL for predicates.  And for entity specs, where we know that
> a spec is expected, if YAML is supplied we can support coercion from YAML
> to specs directly.  In both cases the current $brooklyn DSL syntax would
> still be supported, but users would have the option to write simpler and
> more readable YAML.  I propose something like:
>
>   entityFilter:
> sensor: grouping
> equals: web-app-tier
>
> and
>
>   dynamiccluster.memberspec:
> type: org.apache.brooklyn.entity.group.BasicGroup
> brooklyn.config:
>   ...
> brooklyn.policies:
>   - ...
>
> For predicates we can easily add things like config and tags, and location
> config and tags, and regex, not, any, all, {less,greater}-than{,-or-equal}.
>
> I think this would make it much more natural to read and write blueprints
> that make use of the powerful filters and specs.
>
> Thoughts?
>
> Best
> Alex
>
>


Proposal to simplify predicates and specs

2022-06-17 Thread Alex Heneveld
Hi devs,

We are seeing more use of predicates and specs in blueprints.  These are
powerful features and not too hard to express in blueprints using the
`$brooklyn:object` and `$brooklyn:entitySpec` DSL constructs, but I'm
thinking it's worth what I think will be a fairly small level of effort to
make these nicer to work with.

Currently for predicates, e.g. in a DynamicGroup or DynamicMultiGroup, to
filter the entities to accept we write something like this:

  entityFilter:
$brooklyn:object:
  type: org.apache.brooklyn.core.entity.EntityPredicates
  factoryMethod.name: attributeEqualTo
  factoryMethod.args:
  - grouping
  - web-app-tier

And for entity specs, e.g. in DynamicCluster or DynamicMultiGroup, to
define the spec of what should be created we write something like:

  dynamiccluster.memberspec:
$brooklyn:entitySpec:
  type: org.apache.brooklyn.entity.group.BasicGroup
  brooklyn.config:
...
  brooklyn.policies:
- ...

With the bean support and type coercion we have, it would not be too hard
to allow a simpler YAML without the $brooklyn DSL.

For predicates we could define a default deserialization to be used,
effectively a DSL for predicates.  And for entity specs, where we know that
a spec is expected, if YAML is supplied we can support coercion from YAML
to specs directly.  In both cases the current $brooklyn DSL syntax would
still be supported, but users would have the option to write simpler and
more readable YAML.  I propose something like:

  entityFilter:
sensor: grouping
equals: web-app-tier

and

  dynamiccluster.memberspec:
type: org.apache.brooklyn.entity.group.BasicGroup
brooklyn.config:
  ...
brooklyn.policies:
  - ...

For predicates we can easily add things like config and tags, and location
config and tags, and regex, not, any, all, {less,greater}-than{,-or-equal}.
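For example, those extensions could allow compound filters to be written directly. The following is an illustrative rendering of how such a syntax might look, not a final design; the `deployment.environment` config key and the `tag` test are invented for this sketch:

```yaml
entityFilter:
  all:
  - sensor: grouping
    equals: web-app-tier
  - config: deployment.environment   # hypothetical config key
    regex: "prod-.*"
  - not:
      tag: excluded
```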

I think this would make it much more natural to read and write blueprints
that make use of the powerful filters and specs.

Thoughts?

Best
Alex


[jira] [Deleted] (BROOKLYN-631) Vapespring

2021-04-19 Thread Alex Heneveld (Jira)


 [ 
https://issues.apache.org/jira/browse/BROOKLYN-631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Heneveld deleted BROOKLYN-631:
---


> Vapespring
> --
>
> Key: BROOKLYN-631
> URL: https://issues.apache.org/jira/browse/BROOKLYN-631
> Project: Brooklyn
>  Issue Type: New Feature
>Reporter: Vape Spring
>Priority: Major
>
> [*VAPESPRING*|https://www.vapespring.com/]
>  
> Vape Spring is a leading online store in collaboration with the Certified 
> E-liquids manufacturer from Canada. All E-Liquids are USP & FDA Certified and 
> is produced in an ISO 7 Certified Clean Room.
>  
>  
> *[Products|https://www.vapespring.com/products]  | 
> [Flavors|https://www.vapespring.com/flavors] | 
> [NicSalts|https://www.vapespring.com/nic-salts] | 
> [MixNix|https://www.vapespring.com/mixnix]  | 
> [Shisha|https://www.vapespring.com/shisha] | 
> [Blog|https://www.vapespring.com/blog] |*  
> [*Videos*|https://www.vapespring.com/videos]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Fwd: [jira] [Created] (BROOKLYN-631) Vapespring

2021-04-19 Thread Alex Heneveld



Deleted -- I had the power, under "More"

--A


On 19/04/2021 12:04, Duncan Grant wrote:

I'm probably logging into jira using the wrong creds but I don't seem to be
able to delete this.  Can someone please delete it.

-- Forwarded message -
From: Vape Spring (Jira) 
Date: Sat, 17 Apr 2021 at 00:10
Subject: [jira] [Created] (BROOKLYN-631) Vapespring
To: 


Vape Spring created BROOKLYN-631:


  Summary: Vapespring
  Key: BROOKLYN-631
  URL: https://issues.apache.org/jira/browse/BROOKLYN-631
  Project: Brooklyn
   Issue Type: New Feature
 Reporter: Vape Spring







--
This message was sent by Atlassian Jira
(v8.3.4#803005)






[jira] [Closed] (BROOKLYN-631) Vapespring

2021-04-19 Thread Alex Heneveld (Jira)


 [ 
https://issues.apache.org/jira/browse/BROOKLYN-631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Heneveld closed BROOKLYN-631.
--
Resolution: Invalid

Bloody spam

> Vapespring
> --
>
> Key: BROOKLYN-631
> URL: https://issues.apache.org/jira/browse/BROOKLYN-631
> Project: Brooklyn
>  Issue Type: New Feature
>Reporter: Vape Spring
>Priority: Major
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Brooklyn DSL improvement proposal

2021-02-19 Thread Alex Heneveld



Hi Iuliana,

This would be an awesome addition.  The use of beans is really powerful 
and this lets us do even more with them.


I'd like us to support *both* the `["user"]` and the `.user` syntax.  
The former could allow accessing other DSL values (if you use a DSL 
expression rather than a quoted string).  I'd also say for list indices 
it should be a number not a string, ie `[0]` not `["0"]`.  So the 
following would be supported (and equivalent):


$brooklyn:config("user_credentials")[config("user_name")].roles[0]
$brooklyn:config("user_credentials")[config("user_name")]["roles"][0]

^ assuming your object is of type `map` and 
`user_credentials` has a field `roles` of type `list`.


BTW the Apache mailing lists don't like any fancy formatting so it 
mangled your mail a little bit but it was still totally readable.


Best
Alex



On 19/02/2021 16:46, Iuliana Cosmina wrote:

Hi Brooklyners,



I am new here, but while writing CAMP blueprints  using Brooklyn DSL I hit
a wall.



Here’s my situation: I have a brooklyn.parameter of a custom type named
Credential, that is defined like this:

- id: Credential
  format: bean-with-type
  item:
    type: my.type.Credential



The Credential class has two properties: user and token.
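Such a bean might look like the following — a guess at the shape of `my.type.Credential`, shown only to ground the example. The property names come from the email; everything else is assumed:

```java
// Hypothetical bean backing the Credential parameter type from the email.
class Credential {
    private String user;
    private String token;

    public String getUser() { return user; }
    public void setUser(String user) { this.user = user; }

    public String getToken() { return token; }
    public void setToken(String token) { this.token = token; }
}
```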



I want to be able to use the value of the ‘user’ property when configuring
my service, like this:



items:
- itemType: template
  item:
    brooklyn.parameters:
    - name: user_credentials
      type: Credential
      default:
        user: "username"
        token: "password"
    services:
    - id: credential-service
      type: my.credential.Service
      brooklyn.initializers:
      - type: my.initializer.AAA
        brooklyn.config:
          name: user.name
          static.value: $brooklyn:config("user_credentials")["user"]


Currently I see no way to access the value of the "user" property of my
brooklyn parameter.  Also, in case the Credential type also has a "roles"
property which is a list or an array, I would very much like to be
able to write

*$brooklyn:config("user_credentials")["roles"]["0"]*

Or if the "user_credentials" is of type list, it would be
nice if I could do

*$brooklyn:config("user_credentials")["0"]["user"]*


I personally like the [x] approach where x is tried (in order):

- an argument to a `get(x)` method (works for lists and maps: index or
key)
- a bean property (if x is a string, look for a method called getX() or a
field called x)
- a config key, if the target is Configurable , ie getConfig(x) or
config().get(x)
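The lookup order above could be sketched roughly as follows. This is a hypothetical illustration, not Brooklyn's implementation; `DslIndexResolver` and its behaviour are invented for this example, and the config-key fallback is only noted in a comment:

```java
import java.util.List;
import java.util.Map;

class DslIndexResolver {
    // Hypothetical sketch of the proposed [x] lookup order:
    // 1) get(x) for maps and lists, 2) a bean property getX(), 3) (not shown) a config key.
    static Object resolve(Object target, Object x) {
        if (target instanceof Map) {
            return ((Map<?, ?>) target).get(x);              // map key lookup
        }
        if (target instanceof List && x instanceof Integer) {
            return ((List<?>) target).get((Integer) x);      // list index lookup
        }
        if (x instanceof String) {
            String name = (String) x;
            String getter = "get" + Character.toUpperCase(name.charAt(0)) + name.substring(1);
            try {
                return target.getClass().getMethod(getter).invoke(target);  // bean property
            } catch (ReflectiveOperationException ignored) {
                // fall through; a real implementation would next try getConfig(x)
            }
        }
        return null;
    }
}
```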


Or we could go for something pretty similar to a JsonPath style (
https://github.com/json-path/JsonPath)



This would allow us to write constructs like:

*$brooklyn:config("user_credentials").user*

*$brooklyn:config("user_credentials").roles[0]*

*$brooklyn:config("user_credentials")[0].user*



I think a change like this would make Brooklyn DSL more flexible and open
the door to further improvements and also make blueprints more readable.



I propose to work on the above and welcome any thoughts from the community.



Cheers,

Iuliana





Re: Bundle resolvers loading too late

2020-11-24 Thread Alex Heneveld



Cheers Geoff -- helpful input.

Absolutely I think (a) should be _supported_ -- but I don't think it 
should be preferred.


Blueprint authors who just write a simple yaml/BOM file shouldn't be 
required to make/maintain an OSGI-INF/MANIFEST.MF.  And nor do we want 
to stipulate that bundles that get installed _have_ to be OSGi, or have 
to be OSGi _if_ they depend on certain resolvers -- that would rule out, 
for instance, installing a TOSCA CSAR ZIP.  So while (a) should be an 
option, I think it's important we offer at least one other.  Also a 
failure to delare that will have non-deterministic failures, it would 
probably work on first install but then might fail on rebind ... and 
while some of the other options have the same issue, they can be fixed 
at a platform level whereas this would require (and potentially be an 
error) on every bundle.


I spent some time on (d) but it is quite invasive and start levels don't 
work the way I expected. I'm talking to JB later today and he will 
hopefully clear it up.


Option (b) is winning out and actually works out quite nicely.  Again 
it's optional, but the current spike seems to be working to add 
additional optional properties in brooklyn.cfg:


brooklyn.osgi.dependencies.services.filters=(&(osgi.service.blueprint.compname=toscaCsarBundleResolver))

The RHS is an OSGi filter for a service, or a list of such filters.  The 
effect is that it won't init the catalog/rebind until the 
brooklyn.osgi.dependencies.services.filters are all satisfied, but it 
feels quite OSGi-friendly: we specify in config the services that the 
catalog depends on, and let OSGi tell us when they're available; we just 
wait on that.
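Stripped of the OSGi APIs, the gating behaviour amounts to "wait until every required service check passes, or time out and proceed with a warning". A minimal, purely illustrative sketch — `ServiceGate` is invented for this example; the real code would use a ServiceTracker against the configured filters:

```java
import java.util.List;
import java.util.function.Supplier;

// Illustrative only: block until all required "service available?" checks pass,
// or give up after a timeout so startup can proceed with a logged warning.
class ServiceGate {
    static boolean awaitAll(List<Supplier<Boolean>> checks, long timeoutMillis, long pollMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (checks.stream().allMatch(Supplier::get)) {
                return true;   // all required services are available
            }
            Thread.sleep(pollMillis);
        }
        return false;          // timed out; caller warns and proceeds anyway
    }
}
```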


I'm gonna tidy this up and open a PR for review.

Best
Alex



On 24/11/2020 11:16, Geoff Macartney wrote:

Hi Alex,

I'd actually say (a) is the way to do it, using the OSGI service
dependency mechanism in some way (not *quite* sure how that is done
these days!). That would be the more "OSGI native" style of doing it,
would it not? Start levels wouldn't be what we want [1] and making all
catalog init block until all bundle loaders are started (b) sounds
workable but coarse grained. If each bundle specified a dependency on
a service that loaded it, then the normal OSGI service mechanism would
control the startup order for us very naturally. I share your dislike
of options (c) and (e).

What do you think?

Geoff


[1] https://www.aqute.biz/2017/04/24/aggregate-state.html

On Mon, 23 Nov 2020 at 11:12, Alex Heneveld  wrote:


Hi Brooklyn devs,

Regarding the recent addition to allow custom Bundle Resolver OSGi
services [1], we've discovered a bothersome issue with load order at
runtime.  It is non-deterministic whether a custom resolver bundle loads
before or after the initial catalog.bom and persisted state.  If the
custom resolver loads _after_, then it won't be available to handle the
catalog.bom and persisted state, which means the bundle might be loaded
by the wrong resolver or it might fail to load altogether.

We want to introduce a mechanism to prevent those errors.  There are
several options:

(a) Specify a dependency on the resolver bundle/service inside the
bundle that needs it

(b) Specify any resolver OSGi bundle or service names that are required
in brooklyn.cfg, and then wait until they are available before
initializing Brooklyn catalog (eg using BundleListener / ServiceTracker)

(c) Require the bundle to be explicitly included in the Brooklyn/OSGi
startup sequence (boot bundlers or startup.properties) before the
catalog/rebind initializes

(d) Wait for "all startup and deploy bundles" to be in their final state
(usually active) or a start level before the catalog/rebind initializes

(e) Re-install bundles if we've added a new bundle-parser service (so
while it might fail initially, it eventually succeeds)


Option (d) would be the nicest I think, simplest for user, leaning on
OSGi "start levels":  but Karaf does not seem to respect startlevels.
I'll send an email to the Karaf list to ask.

Option (c) is quite tricky AFAIK, obscure edits needed to the etc/*
directory and some tricky listeners (OSGi doesn't encourage the notion
of "wait for everything else to be ready", for obvious reasons if two
bundles use that philosophy they will deadlock!). So I don't like it.

Option (a) makes writing a bundle that uses a custom resolver more
difficult (e.g. requiring an OSGi MANIFEST.MF) so I don't like it either.

Option (e) is quite hard to code up, and inefficient, and will cause a
lot of warnings in the log as part of the normal case, and potentially
disrupt operations if we re-install bundles whenever a resolver is
added.  That said, it is a common pattern in OSGi ... but I don't much
like it.


That leaves option (b) which is what I'm leaning towards (unless we get
an answer re (d)).  Specifically we'd say something like this in `brooklyn.cfg`.

Re: Karaf/Felix skips startlevel increments ... so how to delay fileinstall hot-deploy?

2020-11-23 Thread Alex Heneveld



> If you want, I can help on these issues, just ping me, we can work 
together about that (and I think you will be interested by Karaf 5 ;)).


JB - Thanks.  I'll do that.  Intrigued to hear about Karaf 5.

Lists - I'll report back here in due course for closure!

Best
Alex


On 23/11/2020 13:40, Jean-Baptiste Onofre wrote:

Hi,

FrameworkStartLevel (framework level), and start level adapt (at bundle level) 
and DefaultBundleStartLevel are different things.
I think you are referencing the adapt start level (not the framework start 
level, that should be pretty low).

For the deploy folder, Felix FileInstall is started before the feature service. IMHO, the 
right approach for you is to create a service and the "client" bundles should 
depend on it. Like this, you will be sure that the client bundles will be started only 
once your service is there.

Regarding dom4j bundle, I bet you have a massive refresh performed (due to 
resolved optional import or new version) and it could be cascading.

If you want, I can help on these issues, just ping me, we can work together 
about that (and I think you will be interested by Karaf 5 ;)).

Regards
JB


Le 23 nov. 2020 à 14:30, Alex Heneveld  a écrit :


Thanks very much JB.

To answer your question, we use a boot feature.  But consumers might extend 
with the deploy folder, specifically placing their extension bundles into the 
deploy/ folder _before_ the first start of Karaf.  It is the interplay of 
hot-deploy `fileinstall` (OSGi startlevel) with boot features/bundles start 
order that seems to be the problem.  The deploy/ folder is populated when Karaf 
starts, but is being read too early I think.  It is not to do with adding items 
to deploy/ after startup.

As you say, the feature resolver respects the order of the `start-level="xxx"...` items, but it seems to do that _independently_ of the OSGi FrameworkStartLevel.getStartLevel() -- so `fileinstall` detects that `felix.fileinstall.active.level=95` is satisfied and starts hot-deploying even though the boot features at startlevel 80 are not yet installed.  This feels contrary to this part of the OSGi spec:
The Framework must then increase or decrease the active start level by 1 until 
the requested start level is reached.
The Framework must not increase to the next active start level until all 
started bundles have returned from their BundleActivator.start method

Is there any way to make sure that `FrameworkStartLevel.getStartLevel()` is 
consistent with the start-level that the feature resolver is starting, on 
first/clean startup?  (On subsequent start, it obeys start levels.)  I've tried 
adding things to startup.properties but nothing I've tried seems to have any 
impact on what `MyBundleActivator.start() { 
log(FrameworkStartLevel.getStartLevel()) }` displays, it also shows `100` (the 
`beginning` level).

Or is there any other way to support consumers populating `deploy/ ` (as an 
easy way for them to add bundles) but ensuring they don't get activated until 
_after_ our boot features?

While I've got you JB any thoughts on the post-script question, where 
the`org.apache.servicemix.bundles.dom4j` bundle being added to the deploy/ 
directory (after startup) is causing Karaf to destroy then recreate lots of the 
blueprint services we created during boot, even though there aren't any obvious 
package dependencies -- our services start fine without that dom4j bundle?  
I've turned on all the logging I can find and it's not displaying any messages 
about why things should be destroyed/uninstalled.  I know you wrote the bundle 
and it's really useful but weird it is wreaking havoc when hot-deployed. Maybe 
the optional imports of sun.xxx.xxx packages is forcing some major rewiring?

Many thanks,
Alex


On 23/11/2020 12:50, Jean-Baptiste Onofre wrote:

Hi Alex,

It depends the way you deploy your bundle (especially if you use the deploy 
folder).

The start level is respected in etc/startup.properties or in a feature (as the 
resolver can evaluate the order).
However, if you drop first a bundle with startLevel 90 and then, later, a 
bundle with startLevel 85, the first bundle will be deployed before the second 
one.

So, in your case, I would use a well formed features set.

Regards
JB


Le 23 nov. 2020 à 12:44, Alex Heneveld  a écrit :


hi Karaf devs-

i have a question about start-level behaviour in karaf/felix.  the osgi spec says that start-levels 
should increase 1 by 1 during startup [1].  this doesn't seem to be happening in a clean 
karaf-based environment.  what we observe is that startlevel jumps directly to the `beginning` 
level (100 in our case) before our boot bundles (at karaf.startlevel.bundle=80) are started -- a 
log(startlevel) in one of the boot bundle activators shows "100" on a clean install.  on 
a subsequent startup it shows "80":

 /bin/start
 
 2020-11-23T11:32:01,159 - INFO  175 o.a.b.u.o.OsgiActivator 
[tures-3-thread-1] Starting org.apache.brooklyn.utils-common [175], at start 
level 100, state 8
 

 /bin/stop
 
 2020-11-23T11:40:04,651 - INFO  175 o.a.b.u.o.OsgiActivator 
[FelixStartLevel] Stopping org.apache.brooklyn.utils-common [175]
 

 /bin/start
 

Re: Karaf/Felix skips startlevel increments ... so how to delay fileinstall hot-deploy?

2020-11-23 Thread Alex Heneveld



Thanks very much JB.

To answer your question, we use a boot feature.  But consumers might 
extend with the deploy folder, specifically placing their extension 
bundles into the deploy/ folder _before_ the first start of Karaf.  It 
is the interplay of hot-deploy `fileinstall` (OSGi startlevel) with boot 
features/bundles start order that seems to be the problem.  The deploy/ 
folder is populated when Karaf starts, but is being read too early I 
think.  It is not to do with adding items to deploy/ after startup.


As you say, the feature resolver respects the order of the `start-level="xxx"...` items, but it seems to do that _independently_ of 
the OSGi FrameworkStartLevel.getStartLevel() -- so `fileinstall` detects 
that `felix.fileinstall.active.level=95` is satisfied and starts 
hot-deploying ... even though the boot features at startlevel 80 are not 
yet installed.


Would that make sense?

This feels contrary to this part of the OSGi spec:

> The Framework must then increase or decrease the active start level 
by 1 until the requested start level is reached.
> The Framework must not increase to the next active start level until 
all started bundles have returned from their BundleActivator.start method


Is there any way to make sure that `FrameworkStartLevel.getStartLevel()` 
is consistent with the start-level that the feature resolver is 
starting, on first/clean startup?  (On subsequent start, it obeys start 
levels.)  I've tried adding things to startup.properties but nothing 
I've tried seems to have any impact on what `MyBundleActivator.start() { 
log(FrameworkStartLevel.getStartLevel()) }` displays, it also shows 
`100` (the `beginning` level).


Or is there any other way to support consumers populating `deploy/ ` (as 
an easy way for them to add bundles) but ensuring they don't get 
activated until _after_ our boot features?


While I've got you JB any thoughts on the post-script question, where 
the`org.apache.servicemix.bundles.dom4j` bundle being added to the 
deploy/ directory (after startup) is causing Karaf to destroy then 
recreate lots of the blueprint services we created during boot, even 
though there aren't any obvious package dependencies -- our services 
start fine without that dom4j bundle?  I've turned on all the logging I 
can find and it's not displaying any messages about why things should be 
destroyed/uninstalled.  I know you wrote the bundle and it's really 
useful but weird it is wreaking havoc when hot-deployed. Maybe the 
optional imports of sun.xxx.xxx packages is forcing some major rewiring?


Many thanks,
Alex


On 23/11/2020 12:50, Jean-Baptiste Onofre wrote:

Hi Alex,

It depends the way you deploy your bundle (especially if you use the deploy 
folder).

The start level is respected in etc/startup.properties or in a feature (as the 
resolver can evaluate the order).
However, if you drop first a bundle with startLevel 90 and then, later, a 
bundle with startLevel 85, the first bundle will be deployed before the second 
one.

So, in your case, I would use a well formed features set.

Regards
JB


Le 23 nov. 2020 à 12:44, Alex Heneveld  a écrit :


hi Karaf devs-

i have a question about start-level behaviour in karaf/felix.  the osgi spec says that start-levels 
should increase 1 by 1 during startup [1].  this doesn't seem to be happening in a clean 
karaf-based environment.  what we observe is that startlevel jumps directly to the `beginning` 
level (100 in our case) before our boot bundles (at karaf.startlevel.bundle=80) are started -- a 
log(startlevel) in one of the boot bundle activators shows "100" on a clean install.  on 
a subsequent startup it shows "80":

 /bin/start
 
 2020-11-23T11:32:01,159 - INFO  175 o.a.b.u.o.OsgiActivator 
[tures-3-thread-1] Starting org.apache.brooklyn.utils-common [175], at start 
level 100, state 8
 

 /bin/stop
 
 2020-11-23T11:40:04,651 - INFO  175 o.a.b.u.o.OsgiActivator 
[FelixStartLevel] Stopping org.apache.brooklyn.utils-common [175]
 

 /bin/start
 
 2020-11-23T11:40:15,431 - INFO  175 o.a.b.u.o.OsgiActivator 
[FelixStartLevel] Starting org.apache.brooklyn.utils-common [175], at start 
level 80, state 8

we are on 4.2.8.  there are related issues [2] where this has been observed, 
but this particular issue wasn't the focus; other suggestions in those issues, 
to set `featuresBootAsynchronous=false` and to add items to 
`startup.properties` are not working for us (although maybe I'm not adding the 
right bundles to startup.properties).

i totally buy the argument that declarative dependencies are better in most 
cases, but i think this is one of those use cases where relying on start-levels 
is justified.  one actual problem we're trying to solve is preventing hot 
deployment until after all the boot bundles are started.  but because 
startlevel is jumping directly to 100, these settings don't work as expected.

Karaf/Felix skips startlevel increments ... so how to delay fileinstall hot-deploy?

2020-11-23 Thread Alex Heneveld



hi Karaf devs-

i have a question about start-level behaviour in karaf/felix.  the osgi 
spec says that start-levels should increase 1 by 1 during startup [1].  
this doesn't seem to be happening in a clean karaf-based environment.  
what we observe is that startlevel jumps directly to the `beginning` 
level (100 in our case) before our boot bundles (at 
karaf.startlevel.bundle=80) are started -- a log(startlevel) in one of 
the boot bundle activators shows "100" on a clean install.  on a 
subsequent startup it shows "80":


    /bin/start
    
    2020-11-23T11:32:01,159 - INFO  175 o.a.b.u.o.OsgiActivator 
[tures-3-thread-1] Starting org.apache.brooklyn.utils-common [175], at 
start level 100, state 8

    

    /bin/stop
    
    2020-11-23T11:40:04,651 - INFO  175 o.a.b.u.o.OsgiActivator 
[FelixStartLevel] Stopping org.apache.brooklyn.utils-common [175]

    

    /bin/start
    
    2020-11-23T11:40:15,431 - INFO  175 o.a.b.u.o.OsgiActivator 
[FelixStartLevel] Starting org.apache.brooklyn.utils-common [175], at 
start level 80, state 8


we are on 4.2.8.  there are related issues [2] where this has been 
observed, but this particular issue wasn't the focus; other suggestions 
in those issues, to set `featuresBootAsynchronous=false` and to add 
items to `startup.properties` are not working for us (although maybe I'm 
not adding the right bundles to startup.properties).


i totally buy the argument that declarative dependencies are better in 
most cases, but i think this is one of those use cases where relying on 
start-levels is justified.  one actual problem we're trying to solve is 
preventing hot deployment until after all the boot bundles are started.  
but because startlevel is jumping directly to 100, these settings don't 
work as expected:


    felix.fileinstall.start.level=95
    felix.fileinstall.active.level=95

we'd expect based on startlevel that fileinstall shouldn't start until 
boot bundles are installed (startlevel 80).  but instead fileinstall 
starts trying to hot-deploy right away, because startlevel jumped to 
100, and because our boot bundles aren't yet available, it fails for a 
while.  once the boot bundles are installed, the hot-deploy bundles get 
wired in fine and it all works, and the start-levels as shown in 
`bundle:list` are as expected (80 and 95), but we'd ilke not to have all 
the failed hot-deployment attempts, and there might be hot-deployed 
bundles that users install which interfere with the boot wiring in ways 
we don't want (offering other services, etc).  so this seems a common 
and desirable use case for startlevels to be obeyed -- useful enough 
anyway that the fileinstall authors provided those settings!


we also have another related problem that this is blocking, that we 
would like some of our bundles not to do some initalization until 
user-supplied hot-deploy bundles are installed, as discussed on the 
Apache Brooklyn ML (and hence the cross-post).


so ... is there a way to have a karaf clean startup see our boot bundles 
and start levels and not jump to 100, so it completes startlevel 80 
before startlevel 95 kicks in? ... or some other way to have fileinstall 
not run until our boot bundles are installed?


many thanks.

best
alex

[1] 
https://docs.osgi.org/specification/osgi.core/7.0.0/framework.startlevel.html 
-- section 9.3.1


> The Framework must then increase or decrease the active start level 
by 1 until the requested start level is reached.
> The Framework must not increase to the next active start level until 
all started bundles have returned from their BundleActivator.start method


[2] Related issues:

https://issues.apache.org/jira/browse/KARAF-4261
https://issues.apache.org/jira/browse/KARAF-4723
https://issues.apache.org/jira/browse/KARAF-4578
https://issues.apache.org/jira/browse/KARAF-4498



PS:  related curiosity, if i set `beginning=90` in the above case and 
then manually increase the startlevel to 100 later, it works, but i have 
the `org.apache.servicemix.bundles.dom4j` bundle in my deploy/ 
directory, and that makes Karaf destroy lots of the blueprint services 
we created during boot.  i can't see why they would, as a bundle it 
seems pretty simple, our other bundles don't use the dom4j classes, the 
logs don't give any reason why in this case, and if it's hot-deployed 
early we don't have any issues; so again I'm grateful if anyone has 
thoughts on why this would happen!





Bundle resolvers loading too late

2020-11-23 Thread Alex Heneveld



Hi Brooklyn devs,

Regarding the recent addition to allow custom Bundle Resolver OSGi 
services [1], we've discovered a bothersome issue with load order at 
runtime.  It is non-deterministic whether a custom resolver bundle loads 
before or after the initial catalog.bom and persisted state.  If the 
custom resolver loads _after_, then it won't be available to handle the 
catalog.bom and persisted state, which means the bundle might be loaded 
by the wrong resolver or it might fail to load altogether.


We want to introduce a mechanism to prevent those errors.  There are 
several options:


(a) Specify a dependency on the resolver bundle/service inside the 
bundle that needs it


(b) Specify any resolver OSGi bundle or service names that are required 
in brooklyn.cfg, and then wait until they are available before 
initializing Brooklyn catalog (eg using BundleListener / ServiceTracker)


(c) Require the bundle to be explicitly included in the Brooklyn/OSGi 
startup sequence (boot bundlers or startup.properties) before the 
catalog/rebind initializes


(d) Wait for "all startup and deploy bundles" to be in their final state 
(usually active) or a start level before the catalog/rebind initializes


(e) Re-install bundles if we've added a new bundle-parser service (so 
while it might fail initially, it eventually succeeds)



Option (d) would be the nicest I think, simplest for user, leaning on 
OSGi "start levels":  but Karaf does not seem to respect startlevels.  
I'll send an email to the Karaf list to ask.


Option (c) is quite tricky AFAIK: obscure edits are needed to the etc/* 
directory plus some tricky listeners (OSGi doesn't encourage the notion 
of "wait for everything else to be ready", for the obvious reason that if two 
bundles use that philosophy they will deadlock!). So I don't like it.


Option (a) makes writing a bundle that uses a custom resolver more 
difficult (e.g. requiring an OSGi MANIFEST.MF) so I don't like it either.


Option (e) is quite hard to code up, and inefficient, and will cause a 
lot of warnings in the log as part of the normal case, and potentially 
disrupt operations if we re-install bundles whenever a resolver is 
added.  That said, it is a common pattern in OSGi ... but I don't much 
like it.



That leaves option (b) which is what I'm leaning towards (unless we get 
an answer re (d)).  Specifically we'd say something like this in 
`brooklyn.cfg`:


    brooklyn.resolvers.require.services = custom.Resolver1,custom.Resolver2

and then in catalog/rebind we block (with logging) if those are not yet 
available.  Possibly we would also have


    brooklyn.resolvers.require.timeout = 5m

And if not available within that timeframe it logs a warning and proceeds.
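To illustrate the blocking behaviour option (b) describes, here is a minimal plain-Java sketch. In a real implementation the lookup would be an OSGi ServiceTracker; here a `Predicate` stands in for the service registry so the logic is self-contained, and all names are illustrative rather than Brooklyn's actual API.

```java
import java.util.List;
import java.util.function.Predicate;

// Sketch of option (b): block catalog/rebind initialization until the required
// resolver services are present, or log a warning and proceed after a timeout.
// `isServiceAvailable` stands in for an OSGi ServiceTracker lookup.
public class RequiredResolverWait {

    static boolean awaitServices(List<String> required,
                                 Predicate<String> isServiceAvailable,
                                 long timeoutMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (true) {
            if (required.stream().allMatch(isServiceAvailable)) {
                return true;  // safe to initialize catalog/rebind
            }
            if (System.currentTimeMillis() >= deadline) {
                // per the proposal: warn and proceed rather than fail hard
                System.err.println("WARN: required resolvers not available after timeout; proceeding");
                return false;
            }
            Thread.sleep(50);  // poll; a real ServiceTracker would notify instead
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate custom.Resolver1 registered but custom.Resolver2 missing.
        boolean ok = awaitServices(List.of("custom.Resolver1", "custom.Resolver2"),
                name -> name.equals("custom.Resolver1"), 200);
        System.out.println(ok ? "all-resolvers-up" : "timed-out");
    }
}
```

A ServiceTracker-based version would replace the polling loop with `waitForService(timeout)`, but the warn-and-proceed shape would be the same.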


Note there are potentially similar issues with PlanTransformers but I 
think that is simple to solve once we've solved ^.


Best
Alex

[1] https://github.com/apache/brooklyn-server/pull/1115




Re: Multiple formats for catalog add and for deploy

2020-10-08 Thread Alex Heneveld



Hi Geoff-

That's exactly the type of thing I was thinking.

I'd probably go with a "format" API parameter rather than a header, 
since it isn't a standard MIME type, but that's a detail.


For deployments you are correct, the registered transformer for the 
service would have to be able to tell where it should be deployed.  Not 
many systems have a multi-location concept like Apache Brooklyn does so 
you'd have to "cheat" in many cases.  But any of the following cheats 
seem okay to me:


* the transformer has a default location that it expects to be set up, 
e.g. one called "docker.default.location" registered in Brooklyn, and 
uses it
* the location is specified by extension metadata in the plan or even a 
comment, e.g. the Dockerfile has a line "# brooklyn-deploy: 
my-docker-target"
* you wrap the foreign type, so we register a type whose implementation 
is a dockerfile, as "my-docker-file", then deploy
    "{ services: [ { type: my-docker-file } ], location: 
my-docker-target }"


While the last one looks pointless -- and on its own it is -- it allows 
you to compose multiple things together, eg docker, k8s, terraform, 
tosca, cloudformation, arm, cloud foundry, etc, and add Apache Brooklyn 
policies to it.  As people combine cloud services with container 
platforms within the same service, and as they want consistent 
management across them, I think that becomes a strong suit for Apache 
Brooklyn.


For k8s I could see "br catalog add ." running over a directory 
containing a helm chart, and a bundle gets added with the deployments 
from the chart, which you could then deploy (or compose) using standard 
Brooklyn CAMP syntax.  Or simpler, "br deploy --file 
myk8s.deployment.yaml" if Brooklyn has been configured with a 
"kubernetes.default.location" pointing at your KubernetesLocation (and 
the k8s transformer written so that that is what it uses).


PS - I expect a first draft of the PR to land shortly

HTH
Alex



On 07/10/2020 21:41, Geoff Macartney wrote:

Hi Alex,

That certainly sounds like an interesting idea. I'm not entirely sure
I've got my head round it though. I guess I can see how you could add
a new type to the catalog, say by posting a Dockerfile as the request
body and having an API (HTTP) header of `x-brooklyn-type: dockerfile`.
In this case I expect the value "dockerfile" is some format specifier
registered with your BundleInstallResolver, e.g. by a bundle that
knows how to deploy to Docker.

I'm a little less clear how that could work for deployments. You can't
just post a Dockerfile to deploy, you'd have to at least add a bit of
Brooklyn meta-data, namely a location (`location: my-local-docker`)
and application name.

Could you maybe sketch out what the requests would look like to add a
Dockerfile to the catalog, and to deploy it to a Docker instance?

What about, say, a K8s deployment descriptor, again to add it to
catalog, and to deploy it to a KubernetesLocation?

Cheers
Geoff


On Mon, 5 Oct 2020 at 13:41, Alex Heneveld  wrote:



Multiple formats for catalog add and for deploy

2020-10-05 Thread Alex Heneveld



Hi Brooklyners,

Since the Catalog was overhauled many years ago with the TypeRegistry 
approach, we have had support in the back-end for custom types and for 
custom plan formats.  The former is useful e.g. if you want to add a new 
type to be used as a config key or initializer.  The latter is useful if 
the usual entity-centric CAMP YAML isn't the native format.  The latter 
is extensible, using the TypePlanTransformer OSGi service at runtime, so 
new bundles can register services to support new format types -- e.g. 
TOSCA, Kubernetes, pick your poison.  The only requirements are that the 
service register the type saying what its supertypes are and is then 
able to instantiate it when requested (using whatever it wants, 
including calling to a CLI or REST endpoint).  We've also added a 
BeanWithType transformer (specify "format: bean-with-type" in the 
catalog BOM) to use simple POJO/Jackson YAML deserialization, which has 
been really useful for Initializers.


There are two related features that would complete this nicely:

* Offer a similar extensible approach to resolving the overall 
artifact/bundle added to the catalog:  currently _within_ the 
catalog.bom file one can define the format for items, but the BOM YAML 
format is required, and there is a simple try/catch to allow upload of 
just a BOM YAML or a ZIP which contains either catalog.bom or OSGi 
metadata (or both); having an extensible "services" mechanism for 
scoring and resolving catalog bundles would allow Apache Brooklyn to 
handle other items being added to the catalog in their native format 
with no extra metadata. Similar to TypePlanTransformer, the e.g. 
BundleInstallResolver would be responsible for scoring its applicability 
and adding the items to the catalog.  It could lean on the existing 
catalog.bom format to do so, or it could replace that with its own 
reading of what items are within the bundle.  For instance a 
BundleInstallResolver could accept a Dockerfile or a Helm Chart or a 
TOSCA CSAR or a CloudFoundry project, and make it known within the catalog.


* On the REST API, when deploying or adding to catalog, add an argument 
so that the caller can specify the format of the artifact being deployed 
or added to the catalog.  This will link one-to-one with:  for 
deploying, the TypePlanTransformer; and for adding to catalog, the (new) 
BundleInstallResolver.  By default auto-detect will operate, asking 
services to score their applicability.  (eg. if it's a ZIP with a 
catalog.bom the current OsgiArchiveInstaller will score 1.0, if it's 
OSGi but no catalog.bom then score 0.2.) Currently, deployment will use 
auto-detect against all TypePlanTransformers, with no way on the API to 
specify a format. And for adding to catalog, it is only the BOM YAML or 
ZIP-with-BOM-and-or-OSGi formats that are supported.
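The auto-detect scoring described above can be sketched in a few lines of plain Java. This is a hedged illustration only: the resolver names and score values are hypothetical (the 1.0 / 0.2 figures come from the example in the paragraph above), not Brooklyn's actual API.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of auto-detect: each registered resolver scores its applicability
// for an artifact and the highest positive score wins.
public class ResolverAutoDetect {

    static String pickResolver(Map<String, Double> scores) {
        return scores.entrySet().stream()
                .filter(e -> e.getValue() > 0)            // drop resolvers that can't handle it
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElseThrow(() -> new IllegalArgumentException("no resolver applicable"));
    }

    public static void main(String[] args) {
        // e.g. a ZIP containing catalog.bom: the BOM-aware installer scores 1.0,
        // a plain-OSGi fallback scores 0.2, and a Dockerfile resolver scores 0.
        Map<String, Double> scores = new LinkedHashMap<>();
        scores.put("OsgiArchiveInstaller", 1.0);
        scores.put("PlainOsgiResolver", 0.2);
        scores.put("DockerfileResolver", 0.0);
        System.out.println(pickResolver(scores));
    }
}
```

An explicit "format" API parameter would simply bypass the scoring and select the named resolver directly.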


I think this could go a long way to making Apache Brooklyn more 
accessible, because with just a small code investment, the software can 
then support potentially many other existing deployment formats, 
reducing (removing!) the need for a consumer to learn the Apache 
Brooklyn syntax for plans and bundles.


I propose to work on the above and welcome any thoughts from the community.

Once the API is updated it would be good to add a CLI argument to 
support this.  I could do with help here!


Best
Alex




Re: jclouds use of gson

2020-07-06 Thread Alex Heneveld



Thanks Geoff, and Jclouds team, and Markus for the contribution.

FWIW regretfully I'd rather jclouds *didn't* do this.  Personally I 
don't see benefits to Apache Brooklyn and I *do* see it increasing our 
dependency hell -- as ugly as shading is, it spares us that problem.  
Given that it sounds like GSON OSGi support is flaky (thanks Robert) and 
the community is unresponsive (no release since the issues were fixed in 
2019), it seems especially unwise.


There was a suggestion that Jackson be used, instead of GSON (whether 
official or forked and shaded), as the way to modernize and go 
proper/clean OSGi and simplify integration.  I'd +1 that.


On a side note, the use of BND here seemed nice.  Would that be a 
candidate for a separate PR?


Best
Alex


On 06/07/2020 22:16, Geoff Macartney wrote:

Hi all,

Just copying the Apache Brooklyn community on this.

As Ignasi mentioned [2] Brooklyn uses jclouds' OSGI support. I am not
saying we need to do anything but it might be worth us at least being
aware of the ongoing discussion.

Regards
Geoff

[2] https://github.com/apache/jclouds/pull/78#issuecomment-650507931

On Mon, 6 Jul 2020 at 17:21, Robert Varga  wrote:

On 06/07/2020 14:59, Andrew Gaul wrote:

A contributor recently submitted a pull request to jclouds which
proposes unshadowing gson, part of our efforts to modernizing our
dependencies:

https://github.com/apache/jclouds/pull/78

However, our team lacks the OSGi expertise to review this change.  Could
someone from the Karaf team help us out?  The Karaf project took over
maintenance of jclouds-karaf last year and we prefer not to break any
users:

https://www.mail-archive.com/dev@karaf.apache.org/msg13623.html


Hello,

I cannot contribute to an explicit review, sorry.

I would suggest steering clear of 2.8.6 due to a number of OSGi
packaging mistakes:
https://github.com/google/gson/issues/1601
https://github.com/google/gson/issues/1602

We are using non-shaded gson-2.8.5 in OpenDaylight for a few years now
without any issues -- the latest packaged features.xml lives here:
https://repo1.maven.org/maven2/org/opendaylight/odlparent/odl-gson/7.0.3/odl-gson-7.0.3-features.xml

Regards,
Robert





Re: [VOTE] Release Apache Brooklyn 1.0.0 [rc3]

2020-06-02 Thread Alex Heneveld



Hi team-

I've taken a deeper look at the license/notice issues raised by Justin 
and I think have resolved them in PR [1] (and various PRs it 
references).  A summary is below.  Justin, thank you for spotting these 
bugs.


If anyone has comments please reply here or on the issue [1].

Regarding the use of Category-X [2] licenses:

* net.java.dev.jna - this is dual-licensed under LGPL and ASL; the 
NOTICE incorrectly stated it was being used under the former; it now 
correctly states it is being used under the latter


* com.google.code.findbugs.annotations - Apache Brooklyn does not use 
or depend on this LGPL project. It is a compile-time-only dependency of 
libraries we use, but is not accurately reported in those libraries as 
a compile-time-only dependency and so was picked up as a transitive 
dependency of Apache Brooklyn.  Our Maven POMs now explicitly exclude 
it, so it is no longer treated as a dependency, is not included in our 
binary dist, and is not noted in NOTICE.


* With the above two fixes there are no longer any Category-X [2] 
licenses in our source or binary builds.



Regarding the information included in our NOTICE files:

* Our source dist and JAR NOTICE files (in the root of projects, in JARs 
and in the source dist artifact) previously for convenience reported the 
binary dependencies pulled in.  These were clearly labelled as such but 
nevertheless contrary to the philosophy that NOTICE files should contain 
only what is legally required.  These NOTICES have been fixed so that 
they only list third-party artifacts actually included in our source. 
Consequently they are much, much smaller.


* Our binary dist NOTICE files (in binary TGZs, RPMs, WARs and all other 
binary artifacts) list all runtime dependencies included in the binary 
dist where a custom notice, attribution, and/or license for that 
dependency is appropriate.  Where there is doubt about any such 
obligation we have erred on the side of inclusion.


* A non-statutory DEPENDENCIES file is now included alongside the source 
dist NOTICE files advising what binary dependencies will be included in 
the built artifact.  This file contains what was formerly in the source 
dist NOTICE files. This makes it easy for users to analyse the full set 
of dependencies of Apache Brooklyn without conferring the undue legal 
burden entailed by including this information in any of the statutory 
NOTICE files.



There are some additional changes:

* Some libraries have been updated or added recently and use the new 
licenses EPL v2 and EDL v1 which were not previously recognised


* Some dependencies were overlooked in some reports where the "karaf" 
project did not depend on the bundles it incorporates; this is remedied, 
and the license/notice generation only applies to that relevant project 
(and license-gen running faster by only running on that project)


* Some icons had been added from Apache projects and elsewhere, with no 
NOTICE; this is remedied


I believe with [1] all LICENSE and NOTICE files will now be current, 
correct, and compliant with Apache policy.


Best
Alex


[1] https://github.com/apache/brooklyn-dist/pull/164

[2] https://www.apache.org/legal/resolved.html#category-x


On 18/05/2020 10:18, Aled Sage wrote:

Hi Justin,

Thanks for spotting this and reaching out.

Looking at the license/notice generation, I think there are two things 
that went wrong for 1.0 release:


1. The maven license plugin [1] picked the wrong license for 
dependencies when there were multiple to choose from (i.e. LGPL vs 
Apache 2.0 in [2]).


2. We're trying to include far too much stuff in NOTICE. Quoting the 
really useful link you shared [3]:


        "Do not add anything to NOTICE which is not legally required."

---

We should review point 1 above to confirm there really are no licenses 
that are forbidden in apache projects. And we should review point 2 to 
change the way we generate NOTICE files so it doesn't include everything.


Aled

[1] https://github.com/ahgittin/license-audit-maven-plugin

[2] https://github.com/java-native-access/jna/blob/master/pom-jna.xml

[3] http://www.apache.org/dev/licensing-howto.html

[4] https://www.apache.org/legal/resolved.html#category-x


On 17/05/2020 10:20, Justin Mclean wrote:

Hi,

I was reviewing your board report and mailing list and took a 
look at your release. The current LICENSE and NOTICE are not in line 
with ASF policy. For instance, your license contains licenses that 
can't be used in a source release. I think what you have 
misunderstood is that you're listing the licenses of all dependencies 
rather than just what is bundled in the release. Your notice file 
also doesn't need to list dependencies but just required notices, 
content from other ALv2 notice files and relocated copyright notices. 
This is a good guide [1] if you need help on fixing this, please 
reach out.


Thanks,
Justin

1. http://www.apache.org/dev/licensing-howto.html





Re: [VOTE] Release Apache Brooklyn 1.0.0 [rc2]

2020-02-06 Thread Alex Heneveld
Good find. I agree this sounds like a blocker. (Easy to fix, fortunately!
I'm guessing the log message format changed with the Karaf bump? We should grep
on the Brooklyn started message instead of a Karaf message!)

Best
Alex

On Thu, 6 Feb 2020, 14:26 Iuliana Cosmina, <
iuliana.cosm...@cloudsoftcorp.com> wrote:

> -1
>
> Here is a summary of my tests.
> ———
>
> [X] Tested
> the apache-brooklyn-1.0.0-rc2-vagrant.zip & 
> apache-brooklyn-1.0.0-rc2-vagrant.tar.gz
>
> The condition to check that Brooklyn is up is wrong.  This means that when
> calling vagrant up, the Brooklyn node is created, but the command never
> ends, and the other nodes are not created.
>
> In files/install_brooklyn.sh, Brooklyn being successfully up is checked by
> the presence of the "BundleEvent STARTED - org.apache.brooklyn.karaf-init”
> statement in the Brooklyn.debug.log file.
>
> But that statement is not part of the log.
>
>
> [✓] apache-brooklyn-1.0.0-rc2-bin.tar.gz
> - installed correctly on macOs 10.15.3
> - web interface is ok in Firefox 72.0.2 & Safari
> - 1-server-template and 2-bash-web-server-template(corrected) were
> deployed to AWS machines
> - start/stop/restart work correctly
> [✓] apache-brooklyn-1.0.0-rc2-bin.zip
> - installed correctly on macOs 10.15.3
> - web interface is ok in Firefox 72.0.2 & Safari
> - 1-server-template and 2-bash-web-server-template(corrected) were
> deployed to AWS machines
>- start/stop/restart work correctly
> [✓] apache-brooklyn-1.0.0-rc2-1.noarch.rpm
> - installed correctly on CentOS 7.3
> - web interface is ok in Firefox 72.0.2 & Safari
> - 1-server-template and 2-bash-web-server-template(corrected) were
> deployed to AWS machines
> - start/stop/restart work correctly
>
> [X] QuickLaunch template errors:
> - The 2-bash-server-template contains an error in line 13. The apt-get
> install command is called without the -y argument. This means that the VM
> is not configured correctly since it is stuck waiting for confirmation.
> - Template 3-bash-web-and-riak-template cannot be deployed on an Ubuntu 18
> AWS machine. The error message is  E: Unable to locate package riak.
> - Template 4-resilient-bash-web-cluster-template cannot be deployed on an
> Ubuntu 18 AWS machine, because of a compile error while building nginx-1.8.0.
>
>
> Although the issues are not critical, a new user of Brooklyn might see
> them as such. Especially since working templates would convince a new user
> that Brooklyn works as expected.
>
> Iuliana Cosmina
> Engineer
>
> Cloudsoft | Bringing Business to the Cloud
>
> Twitter: _iulianacosmina
> GitHub: https://github.com/iuliana
>
>


Re: PAX Exam Issue with Brooklyn Build

2020-01-26 Thread Alex Heneveld



Geoff / team-

> the sensor in question is defined as
>
> static.value: $brooklyn:config("my.app.port")
>
> not sure why it is reported as static.value:
> 
!!org.apache.brooklyn.camp.brooklyn.spi.dsl.methods.DslComponent$DslConfigSupplier 


> in the error logs.

Good spot!  It seems like it is reading the YAML once, then serializing 
it, then trying to read that output YAML and that's where it's messing 
up.  I can't see any other way it could know the type is 
`org.apache.brooklyn.camp.brooklyn.spi.dsl.methods.DslComponent$DslConfigSupplier` 
when complaining about the YAML parse.


I don't understand how or why it could be doing that though -- why try 
YAML parse the output of what is just YAML parsed!?


(I'm not surprised it fails, as the classname syntax might need special 
OSGi handling ... as far as I know we haven't invested in being able to 
write plans or entities as YAML.  What you get for free probably works 
sometimes but it isn't supported and edge cases won't work.  The 
persistence of course is XML based.)


The fact that this is system dependent is also weird.  That would 
suggest that something hasn't entirely loaded or registered, maybe a 
service or something, or it was done in an unusual order.  But I can't 
think of any reason why that (or anything!) would cause it to do this 
YAML deserialize->serialize->deserialize cycle for plan load.


Best
Alex


On 26/01/2020 23:33, Geoff Macartney wrote:

Hi all,

OK, I've made a bit of progress investigating
ExternalConfigBrooklynPropertiesOsgiTest (haven't looked into
AssemblyTest). I have a handle on *where* it is failing but haven't got to
the bottom of *why*.

I'm afraid I'm going to be flat out this week and won't have time in the
evenings to look at it, but perhaps the info below may be of some use if
someone has bandwidth to investigate. In any case I would be entirely happy
to support the proposal to disable these tests, I don't see why they should
gate the release. We can investigate afterwards.

Cheers,
Geoff

Notes:

mvn test
-Dtest=org.apache.brooklyn.core.dsl.external.ExternalConfigBrooklynPropertiesOsgiTest

When you run the test and while it is pausing waiting to connect to the pax
container, you can cd into the container deploy directory:

±
cd 
brooklyn-dist/karaf/itest/target/paxexam/unpack/06e8b942-6fc1-499a-b64a-cedcf6194640

and either examine logs or even run bin/client to connect to Karaf.  Doing
the latter and listing bundles seems to indicate that all the bundles are
up and running happily:



karaf@brooklyn()> list
START LEVEL 100 , List Threshold: 50
  ID │ State│ Lvl │ Version  │ Name
┼──┼─┼──┼─
  13 │ Active   │  80 │ 1.0.0.SNAPSHOT   │ Brooklyn Library Catalog
  15 │ Active   │  80 │ 1.3.1│ javax.annotation API
  57 │ Active   │  80 │ 2.1.2│ jclouds atmos components
  58 │ Active   │  80 │ 2.1.2│ jclouds Amazon EC2
provider
  59 │ Active   │  80 │ 2.1.2│ jclouds Amazon Simple
Storage Service (S3) provider
  60 │ Active   │  80 │ 2.1.2│ jclouds Azure Storage
provider
  61 │ Active   │  80 │ 2.1.2│ jclouds Azure Compute ARM
API
  62 │ Active   │  80 │ 2.1.2│ Apache jclouds B2 API
  63 │ Active   │  80 │ 1.51 │ bcpkix
  64 │ Active   │  80 │ 1.51 │ bcprov-ext
  65 │ Active   │  80 │ 1.0.0.SNAPSHOT   │ Brooklyn Karaf Shell
Commands
  66 │ Active   │  80 │ 2.1.2│ jclouds bring your own
node provider
  67 │ Active   │  80 │ 1.0.7│ Logback Classic Module
  68 │ Active   │  80 │ 1.0.7│ Logback Core Module
  69 │ Active   │  80 │ 2.1.2│ jclouds cloudstack core
  71 │ Active   │  80 │ 2.10.1   │ Jackson-annotations
  73 │ Active   │  80 │ 2.10.1   │ Jackson-core
  75 │ Active   │  80 │ 2.10.1   │ jackson-databind
  77 │ Active   │  80 │ 2.9.8│ Jackson-JAXRS-base
  78 │ Active   │  80 │ 2.9.8│ Jackson-JAXRS-JSON
  81 │ Active   │  80 │ 2.5  │ Gson
  82 │ Active   │  80 │ 18.0.0   │ Guava: Google Core
Libraries for Java
  83 │ Active   │  80 │ 3.0.0│ guice, Fragments: 85, 84
  84 │ Resolved │  80 │ 3.0.0│ guice-assistedinject,
Hosts: 83
  85 │ Resolved │  80 │ 3.0.0│ guice-multibindings,
Hosts: 83
  86 │ Active   │  80 │ 0.20.0   │ sshj
  87 │ Active   │  80 │ 2.4.0│ json-path
  88 │ Active   │  80 │ 0.0.9│ a connector factory
  89 │ Active   │  80 │ 0.0.9│ a connector for ssh-agent
  90 │ Active   │  80 │ 0.0.9│ an impleme

Re: Brooklyn 1.0.0 release candidate?

2020-01-17 Thread Alex Heneveld



+1

On 17/01/2020 16:03, Duncan Grant wrote:

+1

On Fri, 17 Jan 2020 at 16:03, Paul Campbell 
wrote:


+1 :)

On Fri, 17 Jan 2020 at 16:02, Martin Harris  wrote:


+1

M

On Fri, Jan 17, 2020 at 4:00 PM Aled Sage  wrote:


Hi all,

I believe we are (finally!) ready to produce Apache Brooklyn 1.0.0 RC1.

Assuming folk agree, then we can kick off the RC1 release process and
build it over the weekend or Monday.

Please give an informal +1 or -1 (we'll do a proper vote on the
actual RCs).

---

We proposed a "code freeze" for 13th Dec [1] and by lazy consensus we've
focused on bug fixes / test fixes since then.

Aled

[1] See Martin Harris' email: "Open pull requests".



--
Martin Harris
Lead Software Engineer

*Cloudsoft  *| Bringing Business to the Cloud

E: mar...@cloudsoft.io
M: 07989 047855

Need a hand with AWS? Get a Free Consultation.




--
Paul Campbell
Software Engineer
*Cloudsoft  *| Bringing Business to the Cloud

E: p...@cloudsoft.io
M: 07476981644 <+447476981644>
T: kemitixcode 
L: https://www.linkedin.com/in/paulkcampbell/







Re: Trying to get a dropin secrets provider working

2019-08-27 Thread Alex Heneveld
Hi Peter,

It needs to be the JAR not the class. Either the dropins folder or 'br
catalog add' is fine. You can confirm the former in the karaf bin/client
bundle:list or the latter in the Brooklyn catalog UI view.

It may be that the reference in OSGi needs to be to
'your.bundle:package.Clazz'.

Let us know how that goes.

If someone could improve the docs here that would be great (including
saying to put an OSGi bundle JAR not the classes into dropins!).

Best
Alex

On Tue, 27 Aug 2019, 17:47 Juan Cabrerizo,  wrote:

> Hi Peter, I've installed it in two different ways, but I'm using the drop-in
> folder. (BTW I think the current path is /deploy, or at least that is what I
> tried to use and it seems to work, but I found other unrelated problems.)
> The easier way is using the CLI, running
> * br login 
> * br catalog add 
>
> The other approach i've used is installing it on the Karaf repo inside the
> system folder and modify the brooklyn default feature. But this way is
> error prone. I can give you more details if you need, but i recommend you
> use the BR command.
>
> You can run the Karaf client to list the bundles and classes installed. That is
> how I check whether my classes are exported or not.
> For that, you have to execute the script `client` in the bin folder. It
> will open the console. For listing the classes use `class` with grep:
> class | grep -i UcsfSecretsProvider
> If you don't grep the output it will take ages to cancel it. If your class
> is there, check that it is exported.
>
> I'm not sure if you only need to create a jar file with your
> implementation. I think you should check that this jar file is a valid
> OSGi bundle. Using the parent project I mentioned before should do it.
>
> Juan
>
>
>
> On Mon, 26 Aug 2019 at 17:16, Peter Abramowitsch 
> wrote:
>
> > Hi Juan
> > Thanks for responding. Your note implied something I was suspecting..
>  The
> > documentation literally says  "Classes implementing this interface can be
> > placed in the lib/dropins folder of Brooklyn" and that's what I initially
> > did.  But your note suggested its a jar file containing the
> implementation
> > class (and its dependencies) that needs to be put in lib/dropins - not
> > naked class files
> >
> > So this morning I created a runnable jar with a no-op main method in the
> > provider class.  This guarantees that the provider and all its
> dependencies
> > are pulled into the jar.
> >
> > I verified this too, with jar
> > jar tvf ./lib/dropins/ucsf_auth.jar | grep UcsfSecretsProvider
> >   9673 Mon Aug 26 08:26:04 PDT 2019
> > org/ucsf/ctakes_auth/UcsfSecretsProvider.class
> >
> > So my problem seems to be elsewhere.
> >
> > In your implementation, did you put your security provider jar in a
> > ./lib/dropins of your Brooklyn install folder?
> > Did you have to add a whitelist entry for your jar file anywhere else?
> >
> > Peter
> >
> >
> >
> >
> > On Mon, Aug 26, 2019 at 2:28 AM Juan Cabrerizo 
> wrote:
> >
> > > Hi Peter,
> > >
> > > I found a similar problem when I started working on an implementation
> of
> > > SecurityProvider. My problem was the configuration of the
> > > maven-bundle-plugin: I wasn't exporting the class. You can check it in
> the
> > > Manifest.MF file inside META-INF in the jar file. Look for the package
> > > "org.ucsf.ctakes_auth" in the "Export-Package" block.
> > >
> > > I solved it by setting brooklyn-downstream-parent as the parent project. It
> is
> > > configured by default to export the classes built.
> > >
> > >  
> > >   org.apache.brooklyn
> > >   brooklyn-downstream-parent
> > >   1.0.0-SNAPSHOT
> > > 
> > >
> > > I hope this helps
> > >
> > > Juan
> > >
> > >
> > > On Sun, 25 Aug 2019 at 05:08, Peter Abramowitsch <
> > pabramowit...@gmail.com>
> > > wrote:
> > >
> > > > Hi All
> > > >
> > > > I have created a small secrets provider and unit tested it on it's
> own
> > > > first.  But I am having an issue with Brooklyn loading up my class
> and
> > > its
> > > > dependencies.
> > > >
> > > > Following the instructions, I put the class file into a new dropins
> > > folder
> > > > inside lib.  And added a call to the provider in brooklyn.cfg
> > > >
> > > > >>
> > brooklyn.external.ucsfsecrets=org.ucsf.ctakes_auth.UcsfSecretsProvider
> > > >
> > > > I then put the class's dependent jars in the lib/boot folder.  Not
> sure
> > > > whether they should be there or in lib itself... because I'm not sure
> > > which
> > > > classloader the dropins will be using
> > > >
> > > > Because I wasn't sure what dropins was expecting, I put the class in
> > > twice,
> > > > as you see
> > > >
> > > > lib/dropins
> > > > ├── UcsfSecretsProvider.class
> > > > └── org
> > > > └── ucsf
> > > > └── ctakes_auth
> > > > └── UcsfSecretsProvider.class
> > > >
> > > > My provider needs a couple of JVM defines, so I added these to the
> > setenv
> > > > script.
> > > >
> > > > On startup,  the exception blew up the launch of the Brooklyn
> container
> > > >
> > > > My error message is this E

Re: Allowing OAUTH in Apache Brooklyn Client (BR)

2019-05-23 Thread Alex Heneveld



Juan-

This looks really good to me.

If you don't use OAuth, no change, and `br` works as it always has.

If you do need OAuth, you define the process to get the header (maybe 
another script, say br-my-oauth), which can then use`br` to persist the 
header for subsequent usage by `br`.  You'll have to explicitly 
re-generate the header whenever the session times out.


Alternatively the user could rename `br` to `br-original` then write 
their own script `br` which tries `br-original` but if auth fails it 
automatically invokes `br-my-oauth` and retries `br-original`.  That 
way login would be as seamless as possible.


Can we make it so `br` returns a special non-zero exit code on auth 
failure?  (Different to the exit code on other errors.)
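The retry-on-auth-failure wrapper could look like the following sketch. This is a hedged illustration: the dedicated exit code 43 is hypothetical (`br` defines no such code today), and the simulated commands stand in for `br` and a `br-my-oauth` re-auth script so the flow is runnable on a POSIX system.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: run a command; if it exits with a dedicated auth-failure code,
// run a re-auth command and retry once.
public class BrAuthRetry {
    static final int AUTH_FAILURE_EXIT = 43;  // hypothetical dedicated code

    static int run(String command) throws IOException, InterruptedException {
        Process p = new ProcessBuilder("sh", "-c", command).inheritIO().start();
        return p.waitFor();
    }

    static int runWithReauth(String command, String reauthCommand)
            throws IOException, InterruptedException {
        int code = run(command);
        if (code == AUTH_FAILURE_EXIT) {
            run(reauthCommand);   // e.g. a br-my-oauth script refreshing the header
            code = run(command);  // retry once with fresh credentials
        }
        return code;
    }

    public static void main(String[] args) throws Exception {
        // Simulated "br": fails with the auth code until a flag file exists;
        // simulated "br-my-oauth": creates the flag file.
        Path flag = Files.createTempFile("br-authed", "");
        Files.delete(flag);
        int code = runWithReauth(
                "test -e " + flag + " || exit " + AUTH_FAILURE_EXIT,
                "touch " + flag);
        System.out.println("exit=" + code);
        Files.deleteIfExists(flag);
    }
}
```

With a distinct auth-failure code, the wrapper never needs to parse `br`'s output to decide whether re-authenticating is worthwhile.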


Best
Alex


On 23/05/2019 10:33, Juan Cabrerizo wrote:

Hi Geoff. Thanks for your thoughts.

As different OAuth providers can implement login in very different ways,
for the moment I'm bypassing the process of requesting the token. Once the
user gets it with an external mechanism, they can add it when logging in with br.
As you said, once you use the token for login, you need to send it on all REST
calls for authentication.
I also added a new parameter to not prompt for user and password.
If it is added, br won't send the current basic authentication header.
The command will be executed as:

br login http://server:port --noCredentials --header 'Authorization: Bearer
x.yy.z' --header 'OtherHeader=headerValue'

This execution will create this persisted YAML:
{
 "credentials": {
 "http://localhost:8081": "Og=="
 },
 "credentialsRequired": false,
 "headers": {
 "Authorization: Bearer x.yy.z": "",
 "OtherHeader": "headerValue"
 },
 "skipSslChecks": false,
 "target": "http://localhost:8081"
}

If you don't add this new parameter, the behavior will be the same as
before, so it won't affect users who don't use OAuth.
I defined `headers` and `credentialsRequired` as global for the moment (not
related to any target), as is `skipSslChecks`.

A more ambitious change could be to modify the structure to something like:
{
 "servers": {
  "http://localhost:8081": {
   "credentials": "Odddg=="
  },
  "http://remote1:8081": {
   "credentials": "x=="
  },
  "http://remote2:8081": {
   "credentialsRequired": false,
   "headers": {
    "Authorization: Bearer x.yy.z": "",
    "OtherHeader": "headerValue"
   }
  }
 },
 "skipSslChecks": false,
 "target": "http://localhost:8081"
}

I already have the first version working and I'm testing it in a couple of
environments; I'll probably create a PR this week. I'm not sure if it's worth
making more changes.

Thanks again for your ideas.

Regards

Juan

On Wed, 22 May 2019 at 23:12, Geoff Macartney 
wrote:


Hi Juan,

If I understand you right you mean that the login command will do a
request to Brooklyn, sending the username and collected password, and
Brooklyn will do the interaction with the OAuth server to obtain a token,
which it will return to the br tool (in the form of a JWT token). Then br
can store that in its local .brooklyn_cli file.  What is the REST request
to be used for that login action?

Subsequently all requests will be sent with

Authorization: Bearer x.yy.z

Where x.yy.z is the JWT token.

Is the above understanding right?

How will your design support br being able to log in to some Brooklyn
servers without OAuth configured, and others that do use OAuth?

Just a couple of thoughts, but I’m keen to see this. Happy to review any
early draft pull requests you may have.

Regards
Geoff


On 21 May 2019, at 15:11, Juan Cabrerizo  wrote:

Hello all

I'm continuing with changes for allow oauth authentication in Apache
Brooklyn. I'm working now on the BR command line tool.
The initial idea is to send the JWT access token in the headers, instead of
sending the basic authentication header, when the token is available after
executing a login command to get the token.

I'll modify the BR login command to allow injecting headers, store them in
the yaml file, and send them on the API calls. If a new Authorization header
is available, send it rather than the default authentication.

Any concerns about this approach? I'm fully open to any idea.

Best regards.

Juan

--
Juan Cabrerizo
Software Engineer

*Cloudsoft  *| Bringing Business to the Cloud
E: j...@cloudsoft.io
L: https://www.linkedin.com/in/juancabrerizo/

Need a hand with AWS? Get a Free Consultation.






Re: Login changes & OAuth support

2019-01-22 Thread Alex Heneveld



Thanks Duncan.

The PR [1] is on this email now (sorry!).

The brooklyn.cfg that ships with karaf sets up AnyoneSecurityProvider I 
believe, not requiring login, rather than basic (this PR hasn't changed 
that).  The docs to configure basic are at [2], but they need a minor 
update to reflect the change in the default or the default should be 
changed back to ExplicitUser or RandomUser.  We should also review the 
docs at [3] and [4] as some parts of those are no longer current with 1.0.


Best
Alex

[1] https://github.com/apache/brooklyn-server/pull/1024
[2] 
http://brooklyn.apache.org/v/latest/ops/configuration/brooklyn_cfg.html#authentication

[3] http://brooklyn.apache.org/v/latest/ops/security-guidelines.html
[4] http://brooklyn.apache.org/v/latest/ops/configuration/index.html



On 22/01/2019 11:32, Duncan Grant wrote:

Alex,

Nice work - this seems to be a regular feature request.  I have a couple of
questions.

I think you're missing a link to the PR (ref[1]).

When I try the latest Brooklyn snapshot, basic auth is not enabled.  How
would I re-enable it?  Or do we now need to write a basic-auth security
provider?

Regards

Duncan


On Wed, 16 Jan 2019 at 11:20, Alex Heneveld 
wrote:


Hi All-

We've had quite a few requests to support OAuth, and I'm pleased to say
we can now support it via plugins.

Folks are still working on sample plugins for Google and/or GitHub, but
enough is there already for people to write their own to integrate with
their choice of OAuth servers.

It is a bigger change than probably anyone expected however.  It is in
[1], which has been merged with the help of Juan Cabrerizo (thanks
Juan!).  This is not expected to break any APIs, but it does have
implications for people using the code directly or developing extensions
(REST endpoints and UI modules).

Firstly background -- JAAS is geared around Basic auth, which is
incompatible with OAuth and other modern auth schemes, so we've had to
rip out the LoginModule and replace it with Filters.  Jersey (REST
bundles) and resources (WAR bundles) need different filters so there's a
bit of extra complexity, but we've refactored to share code.  Then it
needed a few version bumps to make it all work as expected -- CXF,
javax.ws.rs, Resteasy, and a point bump to karaf itself.  All of which
are good things, though there was quite a bit of pain in getting them
all aligned and playing nicely.

They all wrap the Brooklyn SecurityProvider class, so configuring
security providers (what users do) is unchanged.  There is a new method
on the interface to say whether it needs basic auth or not, so custom
SecurityProviders will need a minor update.  But you now have a lot more
flexibility in writing the provider:  you can for instance throw a
SecurityProviderDeniedAuthentication exception containing a Response to
have that Response returned to the HTTP caller.  This allows us to
handle the redirects needed for OAuth.

We've also enabled auth for all the static resources modules (WARs). This
is so the redirect happens for the user's browser request, rather than
loading html and JS, and the 302 redirect only occurring within angular
which is unhelpful.  It also is a bit more secure, as now nothing is
available if you aren't logged in.

Changes to downstream REST projects are pretty simple, see the
`blueprint.xml` change; and WAR bundles also, an update to `web.xml` and
the `pom.xml`.

Any questions just let us know.

Best
Alex
(with a lot of work by Juan!)





[jira] [Closed] (BROOKLYN-606) ATG Stores (client) impression & MUV usage is incorrect

2019-01-18 Thread Alex Heneveld (JIRA)


 [ 
https://issues.apache.org/jira/browse/BROOKLYN-606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Heneveld closed BROOKLYN-606.
--
Resolution: Invalid

> ATG Stores (client) impression & MUV usage is incorrect
> ---
>
> Key: BROOKLYN-606
> URL: https://issues.apache.org/jira/browse/BROOKLYN-606
> Project: Brooklyn
>  Issue Type: Test
>Reporter: Jim Cadigan
>Priority: Critical
> Attachments: ATG impressions calculator 1.jpg, ATG impressions 
> calculator 2.jpg
>
>
> I don't know how to log a Jira ticket.  I'm the AE on the ATG Stores account 
> (Lowe's Canada) with Subscription 
> #[285656357|https://optimizely.lightning.force.com/lightning/r/Subscription__c/a5rC0008SRsIAM/view]
>  in Salesforce.  They are interested in adding more websites which they will 
> have to upgrade to the Impressions model to do.   When we calculate the 
> Impressions / MUV in ChartIO impressions calculator (go/impressions) it is 
> estimating the Impressions at 39 million and the MUVs at 6.76 million - which 
> is way too high according to the CSM.  The CSM believes based on the number 
> of tests they've run it shouldn't be close to that high of amount.   Can 
> someone please look into this further?   We have a call to discuss their 
> options on Tuesday (1/22) so if possible we would like an accurate estimate 
> by that time (or soon thereafter).   Jim Cadigan 
> [jim.cadi...@optimizely.com|mailto:jim.cadi...@optimizely.com]
>  
> https://chartio.com/optimizelyinc/impressions/?ev459912=285656357&ev456029=12690680070&ev456029=12109898642&ev456029=11925673319&ev456029=11617832294&ev456029=11359124158&ev456029=11081888145&ev456029=10941983808&ev456029=10793822095&ev456029=10673231236&ev456029=10523345766&ev456029=10408300039&ev456029=10200184135



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (BROOKLYN-606) ATG Stores (client) impression & MUV usage is incorrect

2019-01-18 Thread Alex Heneveld (JIRA)


[ 
https://issues.apache.org/jira/browse/BROOKLYN-606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16746533#comment-16746533
 ] 

Alex Heneveld commented on BROOKLYN-606:


Thanks, you guys are in the wrong Jira.  This is for the Apache Foundation's 
Brooklyn project.  :)

> ATG Stores (client) impression & MUV usage is incorrect
> ---
>
> Key: BROOKLYN-606
> URL: https://issues.apache.org/jira/browse/BROOKLYN-606
> Project: Brooklyn
>  Issue Type: Test
>Reporter: Jim Cadigan
>Priority: Critical
> Attachments: ATG impressions calculator 1.jpg, ATG impressions 
> calculator 2.jpg
>
>
> I don't know how to log a Jira ticket.  I'm the AE on the ATG Stores account 
> (Lowe's Canada) with Subscription 
> #[285656357|https://optimizely.lightning.force.com/lightning/r/Subscription__c/a5rC0008SRsIAM/view]
>  in Salesforce.  They are interested in adding more websites which they will 
> have to upgrade to the Impressions model to do.   When we calculate the 
> Impressions / MUV in ChartIO impressions calculator (go/impressions) it is 
> estimating the Impressions at 39 million and the MUVs at 6.76 million - which 
> is way too high according to the CSM.  The CSM believes based on the number 
> of tests they've run it shouldn't be close to that high of amount.   Can 
> someone please look into this further?   We have a call to discuss their 
> options on Tuesday (1/22) so if possible we would like an accurate estimate 
> by that time (or soon thereafter).   Jim Cadigan 
> [jim.cadi...@optimizely.com|mailto:jim.cadi...@optimizely.com]
>  
> https://chartio.com/optimizelyinc/impressions/?ev459912=285656357&ev456029=12690680070&ev456029=12109898642&ev456029=11925673319&ev456029=11617832294&ev456029=11359124158&ev456029=11081888145&ev456029=10941983808&ev456029=10793822095&ev456029=10673231236&ev456029=10523345766&ev456029=10408300039&ev456029=10200184135



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (BROOKLYN-606) ATG Stores (client) impression & MUV usage is incorrect

2019-01-18 Thread Alex Heneveld (JIRA)


[ 
https://issues.apache.org/jira/browse/BROOKLYN-606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16746512#comment-16746512
 ] 

Alex Heneveld commented on BROOKLYN-606:


Is this spam or just mis-reported? cc [~infrastruct...@apache.org] 

Unless anyone can explain how this relates to Apache Brooklyn let's close.

> ATG Stores (client) impression & MUV usage is incorrect
> ---
>
> Key: BROOKLYN-606
> URL: https://issues.apache.org/jira/browse/BROOKLYN-606
> Project: Brooklyn
>  Issue Type: Test
>Reporter: Jim Cadigan
>Priority: Critical
> Attachments: ATG impressions calculator 1.jpg, ATG impressions 
> calculator 2.jpg
>
>
> I don't know how to log a Jira ticket.  I'm the AE on the ATG Stores account 
> (Lowe's Canada) with Subscription 
> #[285656357|https://optimizely.lightning.force.com/lightning/r/Subscription__c/a5rC0008SRsIAM/view]
>  in Salesforce.  They are interested in adding more websites which they will 
> have to upgrade to the Impressions model to do.   When we calculate the 
> Impressions / MUV in ChartIO impressions calculator (go/impressions) it is 
> estimating the Impressions at 39 million and the MUVs at 6.76 million - which 
> is way too high according to the CSM.  The CSM believes based on the number 
> of tests they've run it shouldn't be close to that high of amount.   Can 
> someone please look into this further?   We have a call to discuss their 
> options on Tuesday (1/22) so if possible we would like an accurate estimate 
> by that time (or soon thereafter).   Jim Cadigan 
> [jim.cadi...@optimizely.com|mailto:jim.cadi...@optimizely.com]
>  
> https://chartio.com/optimizelyinc/impressions/?ev459912=285656357&ev456029=12690680070&ev456029=12109898642&ev456029=11925673319&ev456029=11617832294&ev456029=11359124158&ev456029=11081888145&ev456029=10941983808&ev456029=10793822095&ev456029=10673231236&ev456029=10523345766&ev456029=10408300039&ev456029=10200184135



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Login changes & OAuth support

2019-01-16 Thread Alex Heneveld



Hi All-

We've had quite a few requests to support OAuth, and I'm pleased to say 
we can now support it via plugins.


Folks are still working on sample plugins for Google and/or GitHub, but 
enough is there already for people to write their own to integrate with 
their choice of OAuth servers.


It is a bigger change than probably anyone expected however.  It is in 
[1], which has been merged with the help of Juan Cabrerizo (thanks 
Juan!).  This is not expected to break any APIs, but it does have 
implications for people using the code directly or developing extensions 
(REST endpoints and UI modules).


Firstly background -- JAAS is geared around Basic auth, which is 
incompatible with OAuth and other modern auth schemes, so we've had to 
rip out the LoginModule and replace it with Filters.  Jersey (REST 
bundles) and resources (WAR bundles) need different filters so there's a 
bit of extra complexity, but we've refactored to share code.  Then it 
needed a few version bumps to make it all work as expected -- CXF, 
javax.ws.rs, Resteasy, and a point bump to karaf itself.  All of which 
are good things, though there was quite a bit of pain in getting them 
all aligned and playing nicely.


They all wrap the Brooklyn SecurityProvider class, so configuring 
security providers (what users do) is unchanged.  There is a new method 
on the interface to say whether it needs basic auth or not, so custom 
SecurityProviders will need a minor update.  But you now have a lot more 
flexibility in writing the provider:  you can for instance throw a 
SecurityProviderDeniedAuthentication exception containing a Response to 
have that Response returned to the HTTP caller.  This allows us to 
handle the redirects needed for OAuth.


We've also enabled auth for all the static resources modules (WARs). This 
is so the redirect happens for the user's browser request, rather than 
loading html and JS, and the 302 redirect only occurring within angular 
which is unhelpful.  It also is a bit more secure, as now nothing is 
available if you aren't logged in.


Changes to downstream REST projects are pretty simple, see the 
`blueprint.xml` change; and WAR bundles also, an update to `web.xml` and 
the `pom.xml`.


Any questions just let us know.

Best
Alex
(with a lot of work by Juan!)


Re: Build depends on Docker

2018-12-15 Thread Alex Heneveld
It should have a mvn profile like rpm and the go cli IMO. The README in the
master project describes these.

Best
Alex

On Fri, 14 Dec 2018, 22:51 Geoff Macartney wrote:
> We added this in https://github.com/apache/brooklyn-dist/pull/118, but I do
> dislike having to have Docker to build Brooklyn.  IMHO anyone should be
> able to build and use Brooklyn without knowing anything about Docker. Could
> we remove the image build from the mvn install and have a separate shell
> script that you would run manually to build the image? And yes it should
> use the karaf distro, didn't realise it doesn't.
>
> Geoff
>
>
> On Wed, 12 Dec 2018 at 16:58 Richard Downer  wrote:
>
> > All,
> >
> > The Apache Brooklyn build depends on having a working Docker instance.
> This
> > I did not know.
> >
> > The build failure happens in the `brooklyn-dist` project, which
> > incorporates into execution `dockerfile-maven-plugin` which invokes
> Docker
> > during the build phase. If Docker is not running, it tries to connect to
> a
> > non-existent UNIX socket and the build fails.
> >
> > This presents a few discussion points...
> >
> > What exactly is it building? There's a Dockerfile there and it seems that
> > it builds an image which contains the Brooklyn distribution and starts
> > Brooklyn. I don't know much about Docker, what happens to that image? Is
> it
> > local to my computer?
> >
> > Is it necessary to have the build depend on Docker? To me this seems
> > unreasonable. Docker has a large footprint and I don't think it's
> > reasonable to require it for a normal, local build of Brooklyn.
> >
> > We're not releasing Docker images. Should we be? Should we not be? Is it
> > even possible for us to do that in Apache?
> >
> > Why haven't I seen this before? The changes to add this to brooklyn-dist
> > were made in 2017. I've performed release builds on clean EC2 instances
> and
> > never seen this. Was this dormant, and has something changed which has
> > kicked this into life?
> >
> > brooklyn-dist is obsolete now. If the Docker build is still something
> > important, then firstly it needs moved to another project (hopefully one
> > exclusive to that task) and secondly it needs to use the Karaf
> > distribution.
> >
> > Can anyone shed some light on this?
> >
> > Thanks!
> >
> > Richard.
> >
>


Brooklyn git -- git-wip-us GONE now gitbox or github

2018-12-14 Thread Alex Heneveld


As per thread below:  my `git pull` just failed, but this command in the 
root brooklyn dir worked seamlessly for me:


    for x in .git/{config,modules/*/config} ; do sed -i.bk s/git-wip-us/gitbox/g $x ; done

(I use submodules; slight tweaks should take care of it for other 
topologies.)
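For a plain checkout without submodules, the same substitution can be previewed on the remote URL directly; the repository URL below is just an example.

```shell
# Preview the git-wip-us -> gitbox rewrite on an example remote URL;
# in a real checkout you would feed the result to `git remote set-url origin`.
old=https://git-wip-us.apache.org/repos/asf/brooklyn-server.git
new=$(printf '%s' "$old" | sed 's/git-wip-us/gitbox/')
echo "$new"   # prints https://gitbox.apache.org/repos/asf/brooklyn-server.git
```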


Best
Alex


On 14/12/2018 10:52, Thomas Bouron wrote:

Erratum: we now have a team for the people who did the Gitbox <-> GitHub
link, which gives committers the ability to merge PR directly from the
GitHub UI 🎉


On Fri, 14 Dec 2018 at 10:18 Thomas Bouron 
wrote:


Migration successful[1]. It means that committers can push to either
Gitbox or GitHub (still cannot merge directly through the UI though)

[1]
https://issues.apache.org/jira/browse/INFRA-17440?page=com.atlassian.jira.plugin.system.issuetabpanels%3Aall-tabpanel

On Fri, 14 Dec 2018 at 08:44 Thomas Bouron 
wrote:


Hi all.

I think we have reached a consensus here as there is no +0 or -1.
I went ahead and created an INFRA ticket[1] to request the move.

Best.

[1] https://issues.apache.org/jira/browse/INFRA-17440

On Thu, 13 Dec 2018 at 13:53 John Campbell <
john.campb...@cloudsoftcorp.com> wrote:


No strong feelings, so +1 from me

John Campbell
Software Engineer

Cloudsoft | Bringing Business to the Cloud

E: john.campb...@cloudsoftcorp.com
M: 07779 576614
T: -
L: www.linkedin.com/in/john-campbell-42105267



On 13 Dec 2018, at 09:31, Duncan Grant 

wrote:

+1

On Wed, 12 Dec 2018 at 16:42 Richard Downer 

wrote:

Brooklyn team,

Apart from myself, I don't think anyone has clearly come out in

favour or

opposed to this. I'd rather we got consensus and moved to gitbox

early - so

that if some people do object, we have time to work out the

objections with

infra, before we are involuntarily moved.

Thoughts, anyone? Does this need a formal vote to get people

responding?

Richard.


On Fri, 7 Dec 2018 at 16:54, Daniel Gruno 

wrote:

[IF YOUR PROJECT DOES NOT HAVE GIT REPOSITORIES ON GIT-WIP-US PLEASE
  DISREGARD THIS EMAIL; IT WAS MASS-MAILED TO ALL APACHE PROJECTS]

Hello Apache projects,

I am writing to you because you may have git repositories on the
git-wip-us server, which is slated to be decommissioned in the coming
months. All repositories will be moved to the new gitbox service

which

includes direct write access on github as well as the standard ASF
commit access via gitbox.apache.org.

## Why this move? ##
The move comes as a result of retiring the git-wip service, as the
hardware it runs on is longing for retirement. In lieu of this, we
have decided to consolidate the two services (git-wip and gitbox), to
ease the management of our repository systems and future-proof the
underlying hardware. The move is fully automated, and ideally,

nothing

will change in your workflow other than added features and access to
GitHub.

## Timeframe for relocation ##
Initially, we are asking that projects voluntarily request to move
their repositories to gitbox, hence this email. The voluntary
timeframe is between now and January 9th 2019, during which projects
are free to either move over to gitbox or stay put on git-wip. After
this phase, we will be requiring the remaining projects to move

within

one month, after which we will move the remaining projects over.

To have your project moved in this initial phase, you will need:

- Consensus in the project (documented via the mailing list)
- File a JIRA ticket with INFRA to voluntarily move your project

repos

   over to gitbox (as stated, this is highly automated and will take
   between a minute and an hour, depending on the size and number of
   your repositories)

To sum up the preliminary timeline;

- December 9th 2018 -> January 9th 2019: Voluntary (coordinated)
   relocation
- January 9th -> February 6th: Mandated (coordinated) relocation
- February 7th: All remaining repositories are mass migrated.

This timeline may change to accommodate various scenarios.

## Using GitHub with ASF repositories ##
When your project has moved, you are free to use either the ASF
repository system (gitbox.apache.org) OR GitHub for your development
and code pushes. To be able to use GitHub, please follow the primer
at: https://reference.apache.org/committer/github


We appreciate your understanding of this issue, and hope that your
project can coordinate voluntarily moving your repositories in a
timely manner.

All settings, such as commit mail targets, issue linking, PR
notification schemes etc will automatically be migrated to gitbox as
well.

With regards, Daniel on behalf of ASF Infra.

PS:For inquiries, please reply to us...@infra.apache.org, not your
project's dev list :-).




--
Duncan Grant
Software Engineer

*Cloudsoft  *| Bringing Business to the Cloud



--

Thomas Bouron
Senior Software Engineer

*Cloudsoft  *| Bringing Business to the Cloud

Re: [NOTICE] Mandatory relocation of Apache git repositories on git-wip-us.apache.org

2018-12-12 Thread Alex Heneveld

+1

On 12/12/2018 17:08, Mark McKenna wrote:

+1 Moving to gitbox makes sense

On Wed, 12 Dec 2018, 16:56 Aled Sage wrote:
+1 from me.

We need "Consensus in the project (documented via the mailing list)" - I
interpreted that as us needing a formal vote, but if an informal email
thread will do then that's fine with me.

Aled


On 12/12/2018 16:41, Richard Downer wrote:

Brooklyn team,

Apart from myself, I don't think anyone has clearly come out in favour or
opposed to this. I'd rather we got consensus and moved to gitbox early -

so

that if some people do object, we have time to work out the objections

with

infra, before we are involuntarily moved.

Thoughts, anyone? Does this need a formal vote to get people responding?

Richard.


On Fri, 7 Dec 2018 at 16:54, Daniel Gruno  wrote:


[IF YOUR PROJECT DOES NOT HAVE GIT REPOSITORIES ON GIT-WIP-US PLEASE
DISREGARD THIS EMAIL; IT WAS MASS-MAILED TO ALL APACHE PROJECTS]

Hello Apache projects,

I am writing to you because you may have git repositories on the
git-wip-us server, which is slated to be decommissioned in the coming
months. All repositories will be moved to the new gitbox service which
includes direct write access on github as well as the standard ASF
commit access via gitbox.apache.org.

## Why this move? ##
The move comes as a result of retiring the git-wip service, as the
hardware it runs on is longing for retirement. In lieu of this, we
have decided to consolidate the two services (git-wip and gitbox), to
ease the management of our repository systems and future-proof the
underlying hardware. The move is fully automated, and ideally, nothing
will change in your workflow other than added features and access to
GitHub.

## Timeframe for relocation ##
Initially, we are asking that projects voluntarily request to move
their repositories to gitbox, hence this email. The voluntary
timeframe is between now and January 9th 2019, during which projects
are free to either move over to gitbox or stay put on git-wip. After
this phase, we will be requiring the remaining projects to move within
one month, after which we will move the remaining projects over.

To have your project moved in this initial phase, you will need:

- Consensus in the project (documented via the mailing list)
- File a JIRA ticket with INFRA to voluntarily move your project repos
 over to gitbox (as stated, this is highly automated and will take
 between a minute and an hour, depending on the size and number of
 your repositories)

To sum up the preliminary timeline;

- December 9th 2018 -> January 9th 2019: Voluntary (coordinated)
 relocation
- January 9th -> February 6th: Mandated (coordinated) relocation
- February 7th: All remaining repositories are mass migrated.

This timeline may change to accommodate various scenarios.

## Using GitHub with ASF repositories ##
When your project has moved, you are free to use either the ASF
repository system (gitbox.apache.org) OR GitHub for your development
and code pushes. To be able to use GitHub, please follow the primer
at: https://reference.apache.org/committer/github


We appreciate your understanding of this issue, and hope that your
project can coordinate voluntarily moving your repositories in a
timely manner.

All settings, such as commit mail targets, issue linking, PR
notification schemes etc will automatically be migrated to gitbox as
well.

With regards, Daniel on behalf of ASF Infra.

PS:For inquiries, please reply to us...@infra.apache.org, not your
project's dev list :-).







Re: [PROPOSAL] Separate palette and spec editor in blueprint composer

2018-12-11 Thread Alex Heneveld



Hi Sylvain-

My 2p ... I like all of these ideas, been thinking the same myself, with 
one exception.


I like the green "+" button as it gives an alternative way to add 
something.  But could be talked out of it if others think it's too 
confusing having these multiple ways.


The rest of it, tidying up the palette schizophrenia, bring it on ASAP!

Maybe it makes sense to evaluate the green + once we have those working?

Best
Alex




On 29/11/2018 16:24, Sylvain FEROT wrote:

Sorry for that, here is a zip file with the pictures :
https://drive.google.com/open?id=1yLIK71qefZWum-JI15AfUq1ZIQKjy22a

Le jeu. 29 nov. 2018 à 17:03, Thomas Bouron  a
écrit :


Hi Sylvain, and thanks for the proposal!

The Apache ML don't like images embedded inside emails and strip them out.
Could you instead make them available somewhere, and post the links to
them?

Thanks.

Best.

On Thu, 29 Nov 2018 at 15:41 Sylvain FEROT 
wrote:


Hello everyone,

Currently we have a problem with the palette. It can be shown on both the
left and right panels at the same time. It does the same thing in each, but
with slight differences:
[image: image.png]


- Left palette opens a popup before adding to the blueprint. Right
  palette does not.
  [image: image.png]

- A toolbar is shown on the right, but always with one element. If you
  close this element the whole panel is empty and useless.
  [image: image.png]

- If you select one entity and click on the plus button of another, the
  palettes add to different entities.

- When we compose a blueprint, we always have both panels open. On small
  screens there’s not much place left to see the graph.

To improve this we could do something like this:

- Put everything that adds things in the left panel, and keep the
  configuration in the right panel.

- Add a toolbar to the right panel, like the left panel's. The tabs
  would be the different spec editor sections: configuration,
  policies, …

- Add a close button to the right panel, to be coherent with the left
  panel. Combined with the previous point, it allows keeping the right
  panel opened or closed.
  [image: with-arrows.png][image: image.png]

- If no entity is selected, remove the right toolbar. It appears again
  when we select an entity.
  [image: image.png]

- Delete the green plus button; it’s no longer needed and creates
  confusion for users.
  [image: image.png]

- Everything else works like before, including drag & drop.

With the points before, we could remove the add buttons of the different
sections of the spec editor. If you prefer to keep the buttons, we could
also make them just open and highlight sections on the left panel.
[image: image.png]


Best regards,

Sylvain


--
Thomas Bouron
Senior Software Engineer

*Cloudsoft  *| Bringing Business to the Cloud

GitHub: https://github.com/tbouron
Twitter: https://twitter.com/eltibouron






Re: PROPOSAL: conditional config constraints

2018-09-21 Thread Alex Heneveld



+1

I've implemented this for the server side at 
https://github.com/apache/brooklyn-server/pull/999 .


One minor tweak to Aled's proposal, I think we should output a 
structured YAML rather than the toString, so clients don't have to do 
complex parsing.  IE instead of sending the string syntax 
`requiredUnless("X")` we'd send `requiredUnless: X`.  (On input we can 
accept either representation, a toString or yaml.)


Will add support in the UI next (new PR obviously).

Best
Alex


On 21/09/2018 09:08, Geoff Macartney wrote:

Hi Aled,

I'd say go for it, that looks like something that could be valuable in
various cases.
I take it your example of "exactly one of config X or config Y" would be
expressed
along the lines of a parallel set of constraints between each config -

On X:
  constraints:
- requiredUnless("Y")
- forbiddenIf("Y")

On Y:
  constraints:
- requiredUnless("X")
- forbiddenIf("X")

Geoff


On Tue, 18 Sep 2018 at 17:05 Aled Sage  wrote:


Hi all,

I'd like to add support for more sophisticated config constraints, where
there are inter-dependencies between config keys. I'd like the blueprint
composer web-console to understand (a small number of) these, and thus
to give feedback to the user about what is required and whether their
blueprint is valid.

For example, a blueprint that requires exactly one of config X or config
Y. Another example: config X2 is required if and only if config X1 is
supplied.

There are a few questions / decision points:

   1. What constraints should we support out-of-the-box?
   2. What naming convention do we use, so that the UI can parse +
understand these?
   3. Should we support multiple constraints (we do in the REST api, but
not currently in the Java API or in the Blueprint Composer UI)

---
I suggest we support things like:

  constraints:
- requiredUnless("Y")
- forbiddenIf("Y")

and:

  constraints:
- requiredIf("X1")
- forbiddenUnless("X1")

The structure of this string would be:

  {required,forbidden}{If,Unless}("<otherConfigKeyName>")

  requiredIf("X1"):      value is required if config X1 is set; otherwise optional.
  forbiddenUnless("X1"): value must be null if config X1 is not set; otherwise optional.
  requiredUnless("Y"):   value is required if config Y is not set; otherwise optional.
  forbiddenIf("Y"):      value must be null if config Y is set; otherwise optional.
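As a quick illustration of the proposed semantics — not Brooklyn code, just a truth-table sketch, where the first argument is whether the other config key is set (0/1) and the second is whether this value is set:

```shell
#!/bin/sh
# Evaluate the four proposed constraints for given set/unset states.
# Args: <other-config-set: 0|1> <this-value-set: 0|1>; prints valid/invalid.
requiredIf()      { if [ "$1" -eq 1 ] && [ "$2" -eq 0 ]; then echo invalid; else echo valid; fi; }
forbiddenUnless() { if [ "$1" -eq 0 ] && [ "$2" -eq 1 ]; then echo invalid; else echo valid; fi; }
requiredUnless()  { if [ "$1" -eq 0 ] && [ "$2" -eq 0 ]; then echo invalid; else echo valid; fi; }
forbiddenIf()     { if [ "$1" -eq 1 ] && [ "$2" -eq 1 ]; then echo invalid; else echo valid; fi; }

requiredIf 1 0        # X1 set, value missing  -> invalid
requiredIf 0 0        # X1 unset               -> valid
forbiddenUnless 0 1   # X1 unset, value set    -> invalid
requiredUnless 0 0    # Y unset, value missing -> invalid
forbiddenIf 1 1       # Y set, value set       -> invalid
```

Pairing requiredIf with forbiddenUnless (or requiredUnless with forbiddenIf) on two keys gives the "exactly one of X or Y" behaviour discussed above.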

I don't think we want to get too sophisticated. For example, do *not*
support "must match regex '[a-z]+' unless config Y is present". I don't
think we should create a full-blown DSL for this!

---
Implementation notes:

We already have the basis of this in our Java code. We support a
predicate of type
`org.apache.brooklyn.core.objs.BrooklynObjectPredicate`, which has the
additional method `boolean apply(T input, BrooklynObject context)`. The
`BrooklynObject` could be an entity, or location, etc. An implementation
of this predicate can therefore lookup other config key's values, when
validating the value.

For the UI, the Blueprint Composer calls:

http://localhost:8081/v1/catalog/bundles/<bundleSymbolicName>/<bundleVersion>/types/<typeSymbolicName>/latest

This returns things like:

  "config": [
{
  "name": "myPredicate",
  "type": "java.lang.String",
  "reconfigurable": false,
  "label": "myPredicate",
  "pinned": false,
  "constraints": [
"MyPredicateToString()"
  ],
  "links": {}
},
...

The constraint returned here is the toString() of the predicate.

In the UI [1], there is currently some very simple logic to interpret
this string for particular types of constraint.

Aled

[1]

brooklyn-ui/ui-modules/blueprint-composer/app/components/providers/blueprint-service.provider.js






Re: Complete org.apache.brooklyn.entity.osgi.karaf.KarafContainer YAML Blueprint example

2018-09-14 Thread Alex Heneveld



hi Nino

In that case you're probably better to create a new YAML-based entity to
launch Karaf. Brooklyn supports Windows; it's just that a lot of the
blueprints that come pre-bundled are limited to certain environments.

what are you looking to do?

best
Alex
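For reference, such a YAML-based Windows entity would look roughly like
the sketch below, using Brooklyn's `VanillaWindowsProcess` with
PowerShell commands. This is an illustrative, untested sketch: the
download URL, paths, and check-running test are placeholders to adapt.

```yaml
services:
- type: org.apache.brooklyn.entity.software.base.VanillaWindowsProcess
  name: Karaf (Windows)
  brooklyn.config:
    pre.install.powershell.command: |
      # placeholder URL - point at the Karaf distribution you want
      Invoke-WebRequest -Uri "https://archive.apache.org/dist/karaf/4.2.1/apache-karaf-4.2.1.zip" -OutFile karaf.zip
    install.powershell.command: |
      Expand-Archive karaf.zip -DestinationPath C:\karaf
    launch.powershell.command: |
      C:\karaf\apache-karaf-4.2.1\bin\start.bat
    checkRunning.powershell.command: |
      # placeholder liveness check - a real blueprint should test Karaf itself
      if (-not (Get-Process -Name java -ErrorAction SilentlyContinue)) { exit 1 }
```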


On 14/09/2018 10:38, nino martinez wael wrote:

Ahh, so that means that currently the KarafContainer does not work on
Windows? We are heavily forced to use Windows here. Making it work on
Windows would require us to make a set of PowerShell scripts, right?

regards Nino

On Wed, Sep 12, 2018 at 1:10 PM Duncan Grant 
wrote:


So KarafContainer is using ssh and bash commands so would need a linux
target.

In general with brooklyn blueprints they will either explicitly document
the required target or by default it'll be "any" linux distro.

There's an example here

https://github.com/apache/brooklyn-library/blob/master/software/nosql/src/main/resources/org/apache/brooklyn/entity/nosql/mongodb/mongodb_win.yaml
of a blueprint that targets an MS Windows location.

Regards

Duncan

On Wed, 12 Sep 2018 at 11:27 nino martinez wael <
nino.martinez.w...@gmail.com> wrote:


services:
- type: org.apache.brooklyn.entity.osgi.karaf.KarafContainer:0.12.0
location: win7client1

brooklyn.locations:
- type: byon
  brooklyn.config:
    displayName: Windows 7 Client 1
    user: admin
    password: 
    hosts: [192.168.167.61]

And the error:
ON_FIRE
Service Up: false
Type: org.apache.brooklyn.entity.stock.BasicApplication
ID: ecj47361lz
Required entity not healthy: Karaf (omzghor3xd)

Failure running task invoking start[locations] on 1 node (Q2idAeJ3)
<http://192.168.167.60:8081/#/v1/applications/ecj47361lz/entities/ecj47361lz/activities/subtask/Q2idAeJ3>:
Error invoking start at KarafContainerImpl{id=omzghor3xd}: SshException:
(admin@192.168.167.61:22) error acquiring
Put(path=[/tmp/brooklyn-20180906-170013638-H9HK-initializing_on-box_base_dir_..sh 387])
(attempt 4/4, in time 2.13s/2m); out of retries: No such file

On Wed, Sep 12, 2018 at 11:27 AM Duncan Grant wrote:

Hi,

There's an example here

https://github.com/brooklyncentral/brooklyn-karaf.

I just tried it and it worked fine but it isn't really any different to
your yaml above so I assume the problem may be due to your location.

What error are you getting and what does your location look like?

Regards

Duncan

On Wed, 12 Sep 2018 at 08:55 nino martinez wael <
nino.martinez.w...@gmail.com> wrote:


Hi

I'm playing around with Brooklyn and am having a little trouble
getting a KarafContainer to install.

IF I just do this:


services:
- type: org.apache.brooklyn.entity.osgi.karaf.KarafContainer:0.12.0
location: mynode

It fails, so does someone have an example that is complete enough to
install Karaf?

--
Best regards / Med venlig hilsen
Nino Martinez


--
Duncan Grant
Software Engineer

*Cloudsoft  *| Bringing Business to the Cloud

Need a hand with AWS? Get a Free Consultation.




--
Best regards / Med venlig hilsen
Nino Martinez


--
Duncan Grant
Software Engineer

*Cloudsoft  *| Bringing Business to the Cloud

Need a hand with AWS? Get a Free Consultation.








Re: Time to make a release?

2018-09-03 Thread Alex Heneveld



+1

We might want to make an effort to get these merged (but I don't think 
it's critical):


* https://github.com/apache/brooklyn-server/pulls
#982 - better type coercion
#971 - location DSL

There are a few others in library, docs, and dist, which we should take 
care of as general housekeeping but not ahead of an M1 IMO.


Best
Alex


On 03/09/2018 13:24, Aled Sage wrote:

+1

Assuming there's consensus, shall we give 24 hours for folk to get any 
outstanding PRs submitted/merged, before we kick off an RC1 etc?


Aled


On 03/09/2018 13:17, Richard Downer wrote:

All,

It's probably about time we made a release - shockingly, it's been 
nearly a

year since the last one.

We want to start the release train towards a 1.0 release, but I think we
could benefit from an interim release first, given that the paint is 
still

drying on the new UI. Therefore I propose a "milestone 1" release,
1.0.0-M1. I'm happy to volunteer to be release manager on this one.

Thoughts, comments, +1 or -1?

Cheers
Richard







Re: [VOTE] New Angular UI for Brooklyn

2018-07-26 Thread Alex Heneveld



Thanks Richard, Thomas.

I have done as you suggest, pushing from a merged branch directly to 
master.  [1] is the result of `mkdir new ; cd new ;  curl 
https://issues.apache.org/jira/secure/attachment/12932670/brooklyn-ui-angular.tgz 
| tar xvz`.  [2] is a few minor RAT/license header tweaks needed.


I confirm that `mvn clean install` now works:

* At `brooklyn-ui/` (root) to build the old UI with no errors
* In `brooklyn-ui-new/` to build the new UI with no errors

The next will be a set of linked PRs to replace `/brooklyn-ui/` with 
`/brooklyn-ui/new/`, and update all related code and documentation.


Best
Alex

[1] 
https://github.com/apache/brooklyn-ui/commit/29927b734a8db268814ea180c1d117858a747c3c
[2] 
https://github.com/apache/brooklyn-ui/commit/9b8bb9c156368143219c13ecbdeb5d1ba1e60b3b



On 26/07/2018 11:19, Richard Downer wrote:

Alex,

Agree with Thomas - the code has been [VOTE]ed in so there's no need for a
second stage of review, and pushing a massive PR isn't easy to navigate in
the GitHub UI. So just push straight to master on the Apache git repo.

There's a safety net in that the code is going into a `new` subdirectory so
interested individuals can still conduct a detailed code review.

Thanks
Richard.


On Thu, 26 Jul 2018 at 10:41, Thomas Bouron 
wrote:


Hi Alex.

Great news, I'm super excited to see this land!
I don't think the PR approach makes sense in this context; the code is
completely different from the current code base, so there isn't much benefit
in doing a PR to see the differences between the two.

I would push directly IMO.

Best.

On Thu, 26 Jul 2018 at 10:32 Alex Heneveld <
alex.henev...@cloudsoftcorp.com>
wrote:


Hi Brooklyners-

I also give a +1 (binding).

With 72h elapsed I declare the VOTE closed and passed.

The [IP-CLEARANCE] also passed as folks will see.  I will incorporate
the new code now.  I will do this as a PR for visibility and call out
the IP-CLEARANCE process links in that PR, unless anyone thinks a
different approach is more suitable (e.g. pushing directly?).

Best regards
Alex


On 23/07/2018 14:22, Geoff Macartney wrote:

+1 (binding)

Excited to see this go into Brooklyn


On Mon, 23 Jul 2018 at 14:13 Richard Downer 

wrote:

+1 (binding)

Having seen this UI in action I'd be very happy for it to land into
Brooklyn. It's a much more modern-looking, aesthetically-pleasing UI

with

much more extensibility, but at its core the UI works very similarly to the
old one, so there's very little learning curve for a user moving from the
old UI to the new.

Richard.


On Mon, 23 Jul 2018 at 10:07, Alex Heneveld <
alex.henev...@cloudsoftcorp.com>
wrote:


Hi Brooklyners-

This is a vote on whether to accept the brooklyn-ui-angular

contribution

at [1] once IP clearance is completed.

For background, as previously discussed a new UI based on Angular/JS

has

been offered to the Apache Brooklyn project.  The formal grant has

been

completed and is on file -- thank you Cloudsoft and Fujitsu -- and is
currently going through IP Clearance (see prior email to this list)

and

barring obstacles we may have that clearance after 72 hours.  The

vote

to accept can occur in parallel with the clearance so that is what we
are doing.

We propose for the code to be added initially to a `new/`

subdirectory

in the `brooklyn-ui` repo, once IP clearance is completed and if this
vote is successful.  We will then create a set of PRs to replace the
contents at the root with the contents under `new/` and make changes
elsewhere as needed for the project to build, run, and be documented
cleanly.  It is proposed that those PRs be reviewed in the usual way

(no

further votes) unless anyone thinks otherwise.

This vote will run for 72 hours.

Best
Alex

[1]  https://issues.apache.org/jira/browse/INCUBATOR-214


On 20/07/2018 16:14, Alex Heneveld wrote:

Hi All-

The codebase for the UI is staged for review here:

https://github.com/ahgittin/brooklyn-ui/tree/new-ui-for-review/new

We have created the ip-clearance record [1] to track steps and the
legal grant is in process (as per [2]).  We will call for an [IP
CLEARANCE] at general@incubator once those are completed, and then

we

will look for a vote here.  If you have any comments on the code or

on

the process in the meantime please let me know.

Best
Alex

[1]


http://svn.apache.org/viewvc/incubator/public/trunk/content/ip-clearance/brooklyn-ui-angular.xml?view=markup

[2]


https://incubator.apache.org/ip-clearance/ip-clearance-template.html#form-filling

On 28/05/2018 12:46, Alex Heneveld wrote:

Dear Brooklyners,

Our users at Fujitsu, UShareSoft, and Cloudsoft have generously
sponsored the contribution of a new UI for Apache Brooklyn. This is
based on the previously-proprietary Cloudsoft AMP UI, for those of
you familiar with that.

The proposed newly contributed UI has all the functionality of the
existing UI including an inspector, groovy console, a

[RESULT][VOTE] New Angular UI for Brooklyn

2018-07-26 Thread Alex Heneveld



To formally close this, I note the vote passes with:

* 6 binding +1s
* 0 non-binding +1s
* no 0 or -1 votes

Vote thread link:

http://mail-archives.apache.org/mod_mbox/brooklyn-dev/201807.mbox/%3C5dc7ad59-795e-fa7a-af32-918848229b32%40CloudsoftCorp.com%3E

Binding +1s:

Thomas Bouron
Mark McKenna
Aled Sage
Richard Downer
Geoff Macartney
Alex Heneveld

Best,
Alex


On 26/07/2018 10:32, Alex Heneveld wrote:


Hi Brooklyners-

I also give a +1 (binding).

With 72h elapsed I declare the VOTE closed and passed.

The [IP-CLEARANCE] also passed as folks will see.  I will incorporate 
the new code now.  I will do this as a PR for visibility and call out 
the IP-CLEARANCE process links in that PR, unless anyone thinks a 
different approach is more suitable (e.g. pushing directly?).


Best regards
Alex


On 23/07/2018 14:22, Geoff Macartney wrote:

+1 (binding)

Excited to see this go into Brooklyn


On Mon, 23 Jul 2018 at 14:13 Richard Downer wrote:


+1 (binding)

Having seen this UI in action I'd be very happy for it to land into
Brooklyn. It's a much more modern-looking, aesthetically-pleasing UI 
with
much more extensibility, but at its core the UI works very similarly to the
old one, so there's very little learning curve for a user moving from the

old UI to the new.

Richard.


On Mon, 23 Jul 2018 at 10:07, Alex Heneveld <
alex.henev...@cloudsoftcorp.com>
wrote:


Hi Brooklyners-

This is a vote on whether to accept the brooklyn-ui-angular 
contribution

at [1] once IP clearance is completed.

For background, as previously discussed a new UI based on 
Angular/JS has
been offered to the Apache Brooklyn project.  The formal grant has 
been

completed and is on file -- thank you Cloudsoft and Fujitsu -- and is
currently going through IP Clearance (see prior email to this list) 
and

barring obstacles we may have that clearance after 72 hours.  The vote
to accept can occur in parallel with the clearance so that is what we
are doing.

We propose for the code to be added initially to a `new/` 
subdirectory

in the `brooklyn-ui` repo, once IP clearance is completed and if this
vote is successful.  We will then create a set of PRs to replace the
contents at the root with the contents under `new/` and make changes
elsewhere as needed for the project to build, run, and be documented
cleanly.  It is proposed that those PRs be reviewed in the usual 
way (no

further votes) unless anyone thinks otherwise.

This vote will run for 72 hours.

Best
Alex

[1]  https://issues.apache.org/jira/browse/INCUBATOR-214


On 20/07/2018 16:14, Alex Heneveld wrote:

Hi All-

The codebase for the UI is staged for review here:

https://github.com/ahgittin/brooklyn-ui/tree/new-ui-for-review/new

We have created the ip-clearance record [1] to track steps and the
legal grant is in process (as per [2]).  We will call for an [IP
CLEARANCE] at general@incubator once those are completed, and then we
will look for a vote here.  If you have any comments on the code 
or on

the process in the meantime please let me know.

Best
Alex

[1]

http://svn.apache.org/viewvc/incubator/public/trunk/content/ip-clearance/brooklyn-ui-angular.xml?view=markup 


[2]

https://incubator.apache.org/ip-clearance/ip-clearance-template.html#form-filling 



On 28/05/2018 12:46, Alex Heneveld wrote:

Dear Brooklyners,

Our users at Fujitsu, UShareSoft, and Cloudsoft have generously
sponsored the contribution of a new UI for Apache Brooklyn. This is
based on the previously-proprietary Cloudsoft AMP UI, for those of
you familiar with that.

The proposed newly contributed UI has all the functionality of the
existing UI including an inspector, groovy console, and online REST
docs.  It is much more recent (angular, webpack), modular, easy to
develop against, and lovely to look at, and so would be a great
contribution based solely on that.

But even better, it provides a lot of new features:

*  A visual blueprint composer:  drag-and-drop elements from the
catalog onto a canvas, with a bi-directional YAML editor

* More live activity update:  a kilt view for activities, tailing
output from SSH commands

* A bundle-oriented catalog:  with search, bundle- or type- view,
delete bundles

* An extensible, skinnable, and reusable modular architecture: embed
angular directives and components from this project in others, build
a branded version of the UI, and/or add your own modules (e.g. to
accompany specific blueprints)

The last point in particular I think will be very valuable:  it will
allow people to use Brooklyn in many more good ways! There are plans
to make the Composer embeddable and able to work with other input
libraries (think e.g. of pointing it at a Docker repo or an image
catalog), and with widgets for configuring items, all ultimately
generating Brooklyn blueprints.

Re: [VOTE] New Angular UI for Brooklyn

2018-07-26 Thread Alex Heneveld



Hi Brooklyners-

I also give a +1 (binding).

With 72h elapsed I declare the VOTE closed and passed.

The [IP-CLEARANCE] also passed as folks will see.  I will incorporate 
the new code now.  I will do this as a PR for visibility and call out 
the IP-CLEARANCE process links in that PR, unless anyone thinks a 
different approach is more suitable (e.g. pushing directly?).


Best regards
Alex


On 23/07/2018 14:22, Geoff Macartney wrote:

+1 (binding)

Excited to see this go into Brooklyn


On Mon, 23 Jul 2018 at 14:13 Richard Downer  wrote:


+1 (binding)

Having seen this UI in action I'd be very happy for it to land into
Brooklyn. It's a much more modern-looking, aesthetically-pleasing UI with
much more extensibility, but at its core the UI works very similarly to the
old one, so there's very little learning curve for a user moving from the
old UI to the new.

Richard.


On Mon, 23 Jul 2018 at 10:07, Alex Heneveld <
alex.henev...@cloudsoftcorp.com>
wrote:


Hi Brooklyners-

This is a vote on whether to accept the brooklyn-ui-angular contribution
at [1] once IP clearance is completed.

For background, as previously discussed a new UI based on Angular/JS has
been offered to the Apache Brooklyn project.  The formal grant has been
completed and is on file -- thank you Cloudsoft and Fujitsu -- and is
currently going through IP Clearance (see prior email to this list) and
barring obstacles we may have that clearance after 72 hours.  The vote
to accept can occur in parallel with the clearance so that is what we
are doing.

We propose for the code to be added initially to a `new/` subdirectory
in the `brooklyn-ui` repo, once IP clearance is completed and if this
vote is successful.  We will then create a set of PRs to replace the
contents at the root with the contents under `new/` and make changes
elsewhere as needed for the project to build, run, and be documented
cleanly.  It is proposed that those PRs be reviewed in the usual way (no
further votes) unless anyone thinks otherwise.

This vote will run for 72 hours.

Best
Alex

[1]  https://issues.apache.org/jira/browse/INCUBATOR-214


On 20/07/2018 16:14, Alex Heneveld wrote:

Hi All-

The codebase for the UI is staged for review here:

https://github.com/ahgittin/brooklyn-ui/tree/new-ui-for-review/new

We have created the ip-clearance record [1] to track steps and the
legal grant is in process (as per [2]).  We will call for an [IP
CLEARANCE] at general@incubator once those are completed, and then we
will look for a vote here.  If you have any comments on the code or on
the process in the meantime please let me know.

Best
Alex

[1]


http://svn.apache.org/viewvc/incubator/public/trunk/content/ip-clearance/brooklyn-ui-angular.xml?view=markup

[2]


https://incubator.apache.org/ip-clearance/ip-clearance-template.html#form-filling


On 28/05/2018 12:46, Alex Heneveld wrote:

Dear Brooklyners,

Our users at Fujitsu, UShareSoft, and Cloudsoft have generously
sponsored the contribution of a new UI for Apache Brooklyn. This is
based on the previously-proprietary Cloudsoft AMP UI, for those of
you familiar with that.

The proposed newly contributed UI has all the functionality of the
existing UI including an inspector, groovy console, and online REST
docs.  It is much more recent (angular, webpack), modular, easy to
develop against, and lovely to look at, and so would be a great
contribution based solely on that.

But even better, it provides a lot of new features:

*  A visual blueprint composer:  drag-and-drop elements from the
catalog onto a canvas, with a bi-directional YAML editor

* More live activity update:  a kilt view for activities, tailing
output from SSH commands

* A bundle-oriented catalog:  with search, bundle- or type- view,
delete bundles

* An extensible, skinnable, and reusable modular architecture: embed
angular directives and components from this project in others, build
a branded version of the UI, and/or add your own modules (e.g. to
accompany specific blueprints)

The last point in particular I think will be very valuable:  it will
allow people to use Brooklyn in many more good ways!  There are plans
to make the Composer embeddable and able to work with other input
libraries (think e.g. of pointing it at a Docker repo or an image
catalog), and with widgets for configuring items, all ultimately
generating Brooklyn blueprints.

Note that this is proposed to replace the existing UI, and as we have
already deprecated the non-OSGi build, it is proposed to make this
compatible only with the OSGi build.

It is also worth pointing out that the main authors on this UI are
already Brooklyn contributors, so there is enough experience among
active project members to maintain, explain, and extend this.

Assuming this proposal finds favour, we will open a repo for review
purposes (but it will not be a merged via PR, with the actual
contribution to come via the IP clearance process [1]), followed by
associated PRs in other projects so that everything works seamlessly.

[RESULT] [IP-CLEARANCE] "brooklyn-ui-angular" contribution of new UI to Apache Brooklyn

2018-07-26 Thread Alex Heneveld


Hi Incubator PMC,

With 72h elapsed and no -1's I understand this to have passed. This 
thread is now closed.


I will update the form [3] and we will continue the process in the 
Brooklyn PMC.


Best
Alex


On 23/07/2018 09:56, Alex Heneveld wrote:


Hi Incubator PMC,

// cc dev@brooklyn

The top-level Apache Brooklyn project has been donated code for a new 
UI, being referred to as "brooklyn-ui-angular".


This is the formal request for IP clearance to be checked as per [1]. 
The donated code can be found at [2] (*), along with links to the 
completed (but for vote email records) XML and HTML clearance records 
[3] detailing the grant form and other compliance requirements.


Thanks in advance,
Alex


[1] 
https://incubator.apache.org/ip-clearance/ip-clearance-template.html#form-filling

[2] https://issues.apache.org/jira/browse/INCUBATOR-214
[3] 
http://svn.apache.org/viewvc/incubator/public/trunk/content/ip-clearance/brooklyn-ui-angular.xml?view=markup


(*) The "incubator drop area (/repos/asf/incubator/donations)" 
referred to in step 5 of the instructions at [1] does not exist (svn: 
E17 -- URL doesn't exist), and I saw a few issues where it had 
been attached to a Jira issue, so that's what I've done.  If there is 
a better method please accept my apologies and advise.




[VOTE] New Angular UI for Brooklyn

2018-07-23 Thread Alex Heneveld



Hi Brooklyners-

This is a vote on whether to accept the brooklyn-ui-angular contribution 
at [1] once IP clearance is completed.


For background, as previously discussed a new UI based on Angular/JS has 
been offered to the Apache Brooklyn project.  The formal grant has been 
completed and is on file -- thank you Cloudsoft and Fujitsu -- and is 
currently going through IP Clearance (see prior email to this list) and 
barring obstacles we may have that clearance after 72 hours.  The vote 
to accept can occur in parallel with the clearance so that is what we 
are doing.


We propose for the code to be added initially to a `new/` subdirectory 
in the `brooklyn-ui` repo, once IP clearance is completed and if this 
vote is successful.  We will then create a set of PRs to replace the 
contents at the root with the contents under `new/` and make changes 
elsewhere as needed for the project to build, run, and be documented 
cleanly.  It is proposed that those PRs be reviewed in the usual way (no 
further votes) unless anyone thinks otherwise.


This vote will run for 72 hours.

Best
Alex

[1]  https://issues.apache.org/jira/browse/INCUBATOR-214


On 20/07/2018 16:14, Alex Heneveld wrote:


Hi All-

The codebase for the UI is staged for review here:

https://github.com/ahgittin/brooklyn-ui/tree/new-ui-for-review/new

We have created the ip-clearance record [1] to track steps and the 
legal grant is in process (as per [2]).  We will call for an [IP 
CLEARANCE] at general@incubator once those are completed, and then we 
will look for a vote here.  If you have any comments on the code or on 
the process in the meantime please let me know.


Best
Alex

[1] 
http://svn.apache.org/viewvc/incubator/public/trunk/content/ip-clearance/brooklyn-ui-angular.xml?view=markup 

[2] 
https://incubator.apache.org/ip-clearance/ip-clearance-template.html#form-filling



On 28/05/2018 12:46, Alex Heneveld wrote:


Dear Brooklyners,

Our users at Fujitsu, UShareSoft, and Cloudsoft have generously 
sponsored the contribution of a new UI for Apache Brooklyn. This is 
based on the previously-proprietary Cloudsoft AMP UI, for those of 
you familiar with that.


The proposed newly contributed UI has all the functionality of the 
existing UI including an inspector, groovy console, and online REST 
docs.  It is much more recent (angular, webpack), modular, easy to 
develop against, and lovely to look at, and so would be a great 
contribution based solely on that.


But even better, it provides a lot of new features:

*  A visual blueprint composer:  drag-and-drop elements from the 
catalog onto a canvas, with a bi-directional YAML editor


* More live activity update:  a kilt view for activities, tailing 
output from SSH commands


* A bundle-oriented catalog:  with search, bundle- or type- view, 
delete bundles


* An extensible, skinnable, and reusable modular architecture: embed 
angular directives and components from this project in others, build 
a branded version of the UI, and/or add your own modules (e.g. to 
accompany specific blueprints)


The last point in particular I think will be very valuable:  it will 
allow people to use Brooklyn in many more good ways!  There are plans 
to make the Composer embeddable and able to work with other input 
libraries (think e.g. of pointing it at a Docker repo or an image 
catalog), and with widgets for configuring items, all ultimately 
generating Brooklyn blueprints.


Note that this is proposed to replace the existing UI, and as we have 
already deprecated the non-OSGi build, it is proposed to make this 
compatible only with the OSGi build.


It is also worth pointing out that the main authors on this UI are 
already Brooklyn contributors, so there is enough experience among 
active project members to maintain, explain, and extend this.


Assuming this proposal finds favour, we will open a repo for review 
purposes (but it will not be a merged via PR, with the actual 
contribution to come via the IP clearance process [1]), followed by 
associated PRs in other projects so that everything works seamlessly 
(which as minor changes to existing code is more suited to PRs than 
the IP clearance process).  Specifically we will:


* Ensure it builds and runs with the new UI in place of the old (note 
below on the Karaf switch)


* Ensure all tests are passing (esp UI tests)

* Ensure there are effective dev/test pathways and that documentation 
is updated (in particular for testing the UI and with the UI; this 
should be much simpler as the new UI can run separately, point at a 
REST endpoint, and can do incremental updates for UI code changes 
made while running!)


* Ensure we have IP clearance, license, and are duly diligent in the 
approval (as this is a large contribution we recognise this will need 
special attention)


Are there any objections at this point, or any suggestions for other 
tasks we should do to ensure its smooth integration?  Note that this 
is purely advisory at this stage

[IP-CLEARANCE] "brooklyn-ui-angular" contribution of new UI to Apache Brooklyn

2018-07-23 Thread Alex Heneveld


Hi Incubator PMC,

// cc dev@brooklyn

The top-level Apache Brooklyn project has been donated code for a new 
UI, being referred to as "brooklyn-ui-angular".


This is the formal request for IP clearance to be checked as per [1]. 
The donated code can be found at [2] (*), along with links to the 
completed (but for vote email records) XML and HTML clearance records 
[3] detailing the grant form and other compliance requirements.


Thanks in advance,
Alex


[1] 
https://incubator.apache.org/ip-clearance/ip-clearance-template.html#form-filling

[2]  https://issues.apache.org/jira/browse/INCUBATOR-214
[3] 
http://svn.apache.org/viewvc/incubator/public/trunk/content/ip-clearance/brooklyn-ui-angular.xml?view=markup


(*) The "incubator drop area (/repos/asf/incubator/donations)" referred 
to in step 5 of the instructions at [1] does not exist (svn: E17 -- 
URL doesn't exist), and I saw a few issues where it had been attached to 
a Jira issue, so that's what I've done.  If there is a better method 
please accept my apologies and advise.


Re: New Angular UI for Brooklyn [DISCUSS]

2018-07-20 Thread Alex Heneveld



Hi All-

The codebase for the UI is staged for review here:

https://github.com/ahgittin/brooklyn-ui/tree/new-ui-for-review/new

We have created the ip-clearance record [1] to track steps and the legal 
grant is in process (as per [2]).  We will call for an [IP CLEARANCE] at 
general@incubator once those are completed, and then we will look for a 
vote here.  If you have any comments on the code or on the process in 
the meantime please let me know.


Best
Alex

[1] 
http://svn.apache.org/viewvc/incubator/public/trunk/content/ip-clearance/brooklyn-ui-angular.xml?view=markup 

[2] 
https://incubator.apache.org/ip-clearance/ip-clearance-template.html#form-filling



On 28/05/2018 12:46, Alex Heneveld wrote:


Dear Brooklyners,

Our users at Fujitsu, UShareSoft, and Cloudsoft have generously 
sponsored the contribution of a new UI for Apache Brooklyn.  This is 
based on the previously-proprietary Cloudsoft AMP UI, for those of you 
familiar with that.


The proposed newly contributed UI has all the functionality of the 
existing UI including an inspector, groovy console, and online REST 
docs.  It is much more recent (angular, webpack), modular, easy to 
develop against, and lovely to look at, and so would be a great 
contribution based solely on that.


But even better, it provides a lot of new features:

*  A visual blueprint composer:  drag-and-drop elements from the 
catalog onto a canvas, with a bi-directional YAML editor


* More live activity update:  a kilt view for activities, tailing 
output from SSH commands


* A bundle-oriented catalog:  with search, bundle- or type- view, 
delete bundles


* An extensible, skinnable, and reusable modular architecture: embed 
angular directives and components from this project in others, build a 
branded version of the UI, and/or add your own modules (e.g. to 
accompany specific blueprints)


The last point in particular I think will be very valuable:  it will 
allow people to use Brooklyn in many more good ways!  There are plans 
to make the Composer embeddable and able to work with other input 
libraries (think e.g. of pointing it at a Docker repo or an image 
catalog), and with widgets for configuring items, all ultimately 
generating Brooklyn blueprints.


Note that this is proposed to replace the existing UI, and as we have 
already deprecated the non-OSGi build, it is proposed to make this 
compatible only with the OSGi build.


It is also worth pointing out that the main authors on this UI are 
already Brooklyn contributors, so there is enough experience among 
active project members to maintain, explain, and extend this.


Assuming this proposal finds favour, we will open a repo for review 
purposes (but it will not be a merged via PR, with the actual 
contribution to come via the IP clearance process [1]), followed by 
associated PRs in other projects so that everything works seamlessly 
(which as minor changes to existing code is more suited to PRs than 
the IP clearance process).  Specifically we will:


* Ensure it builds and runs with the new UI in place of the old (note 
below on the Karaf switch)


* Ensure all tests are passing (esp UI tests)

* Ensure there are effective dev/test pathways and that documentation 
is updated (in particular for testing the UI and with the UI; this 
should be much simpler as the new UI can run separately, point at a 
REST endpoint, and can do incremental updates for UI code changes made 
while running!)


* Ensure we have IP clearance, license, and are duly diligent in the 
approval (as this is a large contribution we recognise this will need 
special attention)


Are there any objections at this point, or any suggestions for other 
tasks we should do to ensure its smooth integration?  Note that this 
is purely advisory at this stage but we would very much appreciate 
early sight of any potential obstacles.


Once the above list is complete we will commence the IP clearance 
process including formal vote.


Best,
Alex


[1] https://incubator.apache.org/ip-clearance/ip-clearance-template.html





Re: LICENSE file questions - MIT, binary, process

2018-06-22 Thread Alex Heneveld


Hi folks-

Thanks Geoff.  As per discussion at 
https://github.com/apache/brooklyn/pull/15 and reviewing the Apache docs 
([1]-[4] below) I think the best thing is:


* For `LICENSE` to contain the Apache License, followed by dependency 
licenses *and practically nothing else*;
* For the listing of dependencies and their notices and statement of 
which licenses apply to be moved to the `NOTICE` file;
* For that listing to include all dependencies (not just source-bundled 
ones)


This allows the popular `licensee` software to detect the license 
correctly, while making it easy for human readers to
find the information they need.  It also ensures we don't omit required 
notices, many of which are currently omitted.
And ASF compliance-wise this actually seems slightly better -- LICENSE 
is _just_ licenses, and NOTICE is all the legally-required text.


I've put together an example at 
*https://github.com/ahgittin/license-sample* .  Note the files are human 
readable but if you look closely you'll see both NOTICE and LICENSE are 
also valid YAML so machines can work with them.
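To illustrate the idea (a purely hypothetical sketch -- the real format is in the sample repo linked above, and every key name here is invented), a dependency entry can read as prose while still parsing as a YAML mapping:

```yaml
# Hypothetical illustration only; see the license-sample repo for the real layout.
notices:
  - project: example-lib        # invented name, for illustration
    version: 1.2.3
    license: MIT
    notice: |
      Copyright (c) 2018 Example Author
      Permission is hereby granted, free of charge, ...
```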


*Please let me know if anyone objects to this model!*

See below for more detailed notes.

Best
Alex

[1] http://www.apache.org/dev/licensing-howto.html#permissive-deps
[2]  http://www.apache.org/dev/licensing-howto.html#mod-notice
[3]  http://www.apache.org/foundation/license-faq.html#Scope
[4]  https://www.apache.org/legal/resolved.html


Relevant snippets:

(a) It is suggested at [1] that licenses for dependencies should be 
included in the LICENSE file.
(b) It is also noted at [1] that "Under normal circumstances, there is 
no need to modify NOTICE.".
(c) Elsewhere in the same doc at [2] it notes, "It is important to keep 
NOTICE as brief and simple as possible, as each addition places a burden 
on downstream consumers.  Do not add anything to NOTICE which is not 
legally required."
(d) The FAQ [3] is more forgiving, noting that "other third party works 
may have been included and their license text may have been added to the 
Apache projects' LICENSE or NOTICE files".
(e) The "Resolved" document [4] notes:  "Apache releases should contain 
a copy of each license, usually contained in the LICENSE document. For 
many licenses this is a sufficient notice. For some licenses some 
additional notice is required. In many cases, this will be included 
within the dependent artifact.  A required third-party notice is any 
third party notice which isn't covered by the above cases."


Potential mistakes (many of which seem to be widespread):

(A) NOTICE and LICENSE must be included in all built JARs (we don't do this)
(B) Most of the licenses we use require inclusion of specific text (eg MIT 
and BSD for copyright/attribution, Apache for NOTICE files, etc); as 
compilation is essentially a translation and copyright extends to 
translations, it seems clear if surprising that this requirement applies 
to binaries as well as source code
(C) Where relying on required specific text being included in the 
third-party artifacts the build process must ensure these are preserved 
in the resulting binary (this is a risk eg for JS code which is minified 
or Go code which is compiled or JARs which don't include NOTICE)

(D) The suggestion (b) above is misleading in many cases given (B) and (C).

Finally:

The proposed approach of declaring _all_ dependencies and their notices 
in NOTICE is arguably more than is necessary and contrary to the 
observation (c), but given the risk of accidentally omitting something 
that is required, particularly given the frequency of build processes 
(ours or upstream) removing required notices elsewhere (e.g. C, A), I 
think it's wise.  It also seems good practice to give attribution where 
we benefit from others' software and to require downstream users to do 
the same.  The increased burden of a few hundred extra lines in a NOTICE 
file is negligible in my view, compared with 100+ MB for the artifacts!




On 21/06/2018 15:56, Geoff Macartney wrote:

Hi Alex,

I'm not an expert on licensing requirements but the above sounds convincing
to me.  Your proposal sounds like a good plan.  Re. question 3 I think it
is okay to include the Apache licensed dependencies in the binary LICENSE
too.

A minor note  - have you seen https://github.com/apache/brooklyn/pull/15 "edit
Brooklyn license info so that GitHub recognizes it"?  I intend to merge
this but added the comment to clarify whether we should do the same for all
our repos (I assume so).  I intend to raise PRs to do this if so.

Geoff



On Wed, 20 Jun 2018 at 13:47 Alex Heneveld 
wrote:


Hi Brooklyn devs-

In prepping the new UI contribution I've been working on the LICENSE
file generation.  It is rather extensive because by using Angular we
pull in hundreds of JS deps for the binary, most of them under MIT
license which as I understand it means copyr

LICENSE file questions - MIT, binary, process

2018-06-20 Thread Alex Heneveld



Hi Brooklyn devs-

In prepping the new UI contribution I've been working on the LICENSE 
file generation.  It is rather extensive because by using Angular we 
pull in hundreds of JS deps for the binary, most of them under MIT 
license which as I understand it means copyright information must be 
reproduced in the LICENSE for the binary dist.  This is based on the MIT 
clause "The above copyright notice and this permission notice shall be 
included in all copies or substantial portions of the Software" in 
accordance with the principle that copyright extends to translations.  
While it would be tempting to treat the compiled/minified version as not 
a copy and so not requiring the copyright -- and that may well be the 
intention of many MIT license users (contrasted with BSD which 
explicitly calls out binaries as requiring the copyright) -- I don't 
believe we can hide behind that.  (So JS devs please take note, please 
use the Apache License! :) )



Question 1:  Is this correct, our binaries LICENSE files need to list 
all MIT, BSD, ISC licensed dependencies whose minified/compiled output 
is included in our binary dist?



In the process I've noticed we in Brooklyn don't currently distinguish 
consistently between the source LICENSE and binary LICENSE.  As I 
understand it from [1], the LICENSE file included with source projects 
-- including I believe the one at the root of the git repo -- should 
refer to resources included in the source only.  Dependencies that are 
downloaded as part of the build and included in the binary should not be 
listed in those LICENSE files, but they must be included in any binary 
build (eg the RPM, TGZ).


It's not yet a big issue, as Apache-licensed dependencies do not require 
copyright inclusion or attribution, and these are the bulk of what we use.  
Where I think we do need to look more closely:


(A) The Go CLI -- we use a few libraries (mainly MIT licensed) 
downloaded at build time.  The LICENSE file [2] includes these 
libraries.  This is included in the binary build, which is correct, but 
it is also present at the root of that sub-project where it is 
incorrect, and our source build also references these libraries which is 
incorrect.


(B) JS in "brooklyn-server" -- we have a few JS libraries included in 
the source tree of brooklyn-server (not downloaded during the build), 
for some of the CLI commands; these are indicated in that project's 
LICENSE [3], correctly, and in the binary build's LICENSE, also 
correctly.  But the project source LICENSE [3] seems to include all the 
JS used in the "brooklyn-ui" project which is not correct.


(C) JS in existing (old) "brooklyn-ui" -- this source project includes 
all the JS deps checked in, and it is listed in the LICENSE [4], 
correctly, and is included in the build binary, also correctly; no 
action is needed here


(D) JS in new (proposed) "brooklyn-ui" -- this project updates to use 
npm and package.json so downloads dependencies, with no dependencies in 
the source tree, so the project source LICENSE shouldn't list any 
dependencies.  However the binary license should include the ~100 
dependencies that npm downloads and uglifies. Fortunately npm 
license-checker [5] automates much of this (although the copyright line 
will sometimes have to be teased out manually).
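As a sketch of the post-processing involved -- assuming license-checker's JSON shape of "name@version" keys with `licenses` and `repository` fields, which should be verified against the real tool's output -- a jq one-liner can turn that JSON into LICENSE-style attribution lines:

```shell
# Feed a tiny hand-written sample through jq, producing one attribution line
# per dependency; in practice the input would come from `license-checker --json`.
printf '%s' '{"lodash@4.17.4":{"licenses":"MIT","repository":"https://github.com/lodash/lodash"}}' \
  | jq -r 'to_entries[] | "  \(.key) (\(.value.licenses)): \(.value.repository)"'
```

The copyright line itself is often only in the package's LICENSE file, so some manual teasing-out remains.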



Question 2:  Does the above sound right?


I'm reasonably confident of this so if no objections I will adjust our 
LICENSE generation process to distinguish between binary and source, and 
tidy up (A) and (B) above, and set up the contribution as per (D).


Finally one more question -- it's easy to tweak the process to include 
Apache-licensed dependencies used in the binary.  While this isn't 
legally required AFAIK it seems like a nice thing to do.



Question 3:  Is everyone okay with giving a shout-out to Apache-licensed 
deps in addition to MIT, BSD, etc, within our binary LICENSE ?



Best
Alex


[1]  https://apache.org/dev/licensing-howto.html
[2] https://github.com/apache/brooklyn-client/blob/master/cli/LICENSE
[3]  https://github.com/apache/brooklyn-server/blob/master/LICENSE
[4]  https://github.com/apache/brooklyn-ui/blob/master/LICENSE
[5]  https://www.npmjs.com/package/license-checker


New Angular UI for Brooklyn [DISCUSS]

2018-05-28 Thread Alex Heneveld


Dear Brooklyners,

Our users at Fujitsu, UShareSoft, and Cloudsoft have generously 
sponsored the contribution of a new UI for Apache Brooklyn.  This is 
based on the previously-proprietary Cloudsoft AMP UI, for those of you 
familiar with that.


The proposed newly contributed UI has all the functionality of the 
existing UI including an inspector, groovy console, and online REST 
docs.  It is much more recent (angular, webpack), modular, easy to 
develop against, and lovely to look at, and so would be a great 
contribution based solely on that.


But even better, it provides a lot of new features:

*  A visual blueprint composer:  drag-and-drop elements from the catalog 
onto a canvas, with a bi-directional YAML editor


* More live activity update:  a kilt view for activities, tailing output 
from SSH commands


* A bundle-oriented catalog:  with search, bundle- or type- view, delete 
bundles


* An extensible, skinnable, and reusable modular architecture: embed 
angular directives and components from this project in others, build a 
branded version of the UI, and/or add your own modules (e.g. to 
accompany specific blueprints)


The last point in particular I think will be very valuable:  it will 
allow people to use Brooklyn in many more good ways!  There are plans to 
make the Composer embeddable and able to work with other input libraries 
(think e.g. of pointing it at a Docker repo or an image catalog), and 
with widgets for configuring items, all ultimately generating Brooklyn 
blueprints.


Note that this is proposed to replace the existing UI, and as we have 
already deprecated the non-OSGi build, it is proposed to make this 
compatible only with the OSGi build.


It is also worth pointing out that the main authors on this UI are 
already Brooklyn contributors, so there is enough experience among 
active project members to maintain, explain, and extend this.


Assuming this proposal finds favour, we will open a repo for review 
purposes (but it will not be merged via PR, with the actual 
contribution to come via the IP clearance process [1]), followed by 
associated PRs in other projects so that everything works seamlessly 
(which as minor changes to existing code is more suited to PRs than the 
IP clearance process).  Specifically we will:


* Ensure it builds and runs with the new UI in place of the old (note 
below on the Karaf switch)


* Ensure all tests are passing (esp UI tests)

* Ensure there are effective dev/test pathways and that documentation is 
updated (in particular for testing the UI and with the UI; this should 
be much simpler as the new UI can run separately, point at a REST 
endpoint, and can do incremental updates for UI code changes made while 
running!)


* Ensure we have IP clearance, license, and are duly diligent in the 
approval (as this is a large contribution we recognise this will need 
special attention)


Are there any objections at this point, or any suggestions for other 
tasks we should do to ensure its smooth integration?  Note that this is 
purely advisory at this stage but we would very much appreciate early 
sight of any potential obstacles.


Once the above list is complete we will commence the IP clearance 
process including formal vote.


Best,
Alex


[1] https://incubator.apache.org/ip-clearance/ip-clearance-template.html



[jira] [Commented] (BROOKLYN-582) AWS EC2 blueprint fails due to invalid instance type proposed

2018-05-09 Thread Alex Heneveld (JIRA)

[ 
https://issues.apache.org/jira/browse/BROOKLYN-582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469692#comment-16469692
 ] 

Alex Heneveld commented on BROOKLYN-582:


Your analysis is correct.  This is due to 
https://issues.apache.org/jira/browse/JCLOUDS-1379 .  A fix to that should fix 
this issue.

> AWS EC2 blueprint fails due to invalid instance type proposed
> -
>
> Key: BROOKLYN-582
> URL: https://issues.apache.org/jira/browse/BROOKLYN-582
> Project: Brooklyn
>  Issue Type: Bug
>Reporter: Thach Mai
>Priority: Major
>
> When running a blueprint for AWS EC2, specifying "minRam" and "minCores" 
> combination will not work for some regions.
> Example of a blueprint that works:
> {{name: WORK}}
>  {{services:}}
>  {{- type: server}}
>  {{  name: WORK}}
>  {{location:}}
>  {{  jclouds:aws-ec2:}}
>  {{    identity: }}
>  {{    credential: }}
>  {{    }}
>  {{    region:   eu-west-1}}
>  {{    }}
>  {{    osFamily: ubuntu}}
>  {{    minRam: 4096M}}
>  {{    minCores: 4}}
>  {{    }}
>  {{    user: sample}}
>  {{    password: s4mpl3}}
>  
> If we run the same blueprint, but changing the region to "eu-west-2", it 
> results in an error:
> {\{Error invoking start at EmptySoftwareProcessImpl{id=nd40mzs7n0}: Failed to 
> get VM after 2 attempts. - First cause is 
> org.jclouds.aws.AWSResponseException: request POST 
> [https://ec2.eu-west-2.amazonaws.com/] HTTP/1.1 failed with code 400, error: 
> AWSError{requestId='11cc4fed-1a3a-483b-8661-7c1ff68e37ac', 
> requestToken='null', code='Unsupported', message='The requested configuration 
> is currently not supported. Please check the documentation for supported 
> configurations.', context='
> {Response=, Errors=}'} (listed in primary trace); plus 1 more (e.g. the last 
> is org.jclouds.aws.AWSResponseException: request POST 
> [https://ec2.eu-west-2.amazonaws.com/] HTTP/1.1 failed with code 400, error: 
> AWSError\{requestId='ddf4f700-642c-401a-91b0-9e84f39d2208', 
> requestToken='null', code='Unsupported', message='The requested configuration 
> is currently not supported. Please check the documentation for supported 
> configurations.', context='{Response=, Errors=}'}): AWSResponseException: 
> request POST [https://ec2.eu-west-2.amazonaws.com/] HTTP/1.1 failed with code 
> 400, error: AWSError\{requestId='11cc4fed-1a3a-483b-8661-7c1ff68e37ac', 
> requestToken='null', code='Unsupported', message='The requested configuration 
> is currently not supported. Please check the documentation for supported 
> configurations.', context='{Response=, Errors=}'}}}
>  
> The root cause is likely because jClouds algorithm proposes an instance type 
> that doesn't exist for some regions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [PROPOSAL] Deprecated JBoss 7 entity

2018-03-09 Thread Alex Heneveld


go for it.  is there a yaml entity in the community we can point people 
to as part of the deprecation?


wherever possible i'd like to replace the system-specific java entities 
like jboss with yaml equivalents in the community (ie outside apache 
brooklyn).


--a


On 09/03/2018 10:28, Thomas Bouron wrote:

Hi Brooklyners

I made a PR yesterday[1] (which has been merged, thanks Geoff) to fix an
issue with JBoss 7 entity. As it turns out, JBoss 7 is now EOL and does not
work with java 7u171 onward. This is due to the `jboss-module.jar` not
being compatible with the newest versions of Java.

The patch I made is a trick really but it works: the version of
`jboss-module.jar` shipped with JBoss 7.1.1 is `1.1.1.GA`. However, this
particular jar has been updated to `1.1.5.GA` and using this one fixes the
issue. While this works, it is still a hacky thing to do, therefore I would
like to deprecate this entity.

Any objection before I do this?

Best.

[1] https://github.com/apache/brooklyn-library/pull/148




[jira] [Created] (BROOKLYN-579) DNS lookups cached for too long

2018-01-31 Thread Alex Heneveld (JIRA)
Alex Heneveld created BROOKLYN-579:
--

 Summary: DNS lookups cached for too long
 Key: BROOKLYN-579
 URL: https://issues.apache.org/jira/browse/BROOKLYN-579
 Project: Brooklyn
  Issue Type: Bug
Reporter: Alex Heneveld


I've had issues where DNS values are changed but Brooklyn doesn't see those.  I 
think Java caches hostnames forever by default, ignoring DNS TTL.  (Controlling 
Route 53 from Brooklyn is one obvious such example!)

We should consider overriding this.

Oracle Cloud describe how 
(https://docs.us-phoenix-1.oraclecloud.com/Content/API/SDKDocs/javasdk.htm):

 
{quote}The JVM uses the 
[networkaddress.cache.ttl|http://docs.oracle.com/javase/8/docs/technotes/guides/net/properties.html]
 property to specify the caching policy for DNS name lookups. The value is an 
integer that represents the number of seconds to cache the successful lookup. 
The default value for many JVMs, {{-1}}, indicates that the lookup should be 
cached forever.

Because resources in Oracle Cloud Infrastructure use DNS names that can change, 
we recommend that you change the TTL value to 60 seconds. This ensures that 
the new IP address for the resource is returned on next DNS query. You can 
change this value globally or specifically for your application:
{quote} * 
{quote}To set TTL globally for all applications using the JVM, add the 
following in the {{$JAVA_HOME/jre/lib/security/java.security}} file:
{{networkaddress.cache.ttl=60}}{quote}
 * 
{quote}To set TTL only for your application, set the following in your 
application's initialization code:
{{java.security.Security.setProperty("networkaddress.cache.ttl" , 
"60");}}{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Getting NodeMetadata available to entities

2018-01-23 Thread Alex Heneveld


Richard,

What values does it need?  Would it make sense to put it into 
BasicMachineMetadata and adjust JcloudsLocation.getMachineMetadata to 
add it for everything?


The other option if you want to opt-in at an initializer is for it to do 
`location.getConfig(CALLER_CONTEXT)` -- there isn't guaranteed to be 
exactly one Entity for each Location
(might have multiple, might have none) so there isn't a 
`Location.getEntity()` and this value might not be an `Entity`, but if 
there _is_ such a one-to-one releationship where an entity creates a 
location, the usual pattern is that the Entity will set itself as this 
context.


Best
Alex


On 23/01/2018 10:30, Richard Downer wrote:

Hi all.

I'm trying to solve a problem which basically involves getting a value from
the jclouds `NodeMetadata` into a sensor on all entities in that
`JcloudsLocation`. A `JcloudsLocationCustomizer` would appear to be useful
as it gets all the `NodeMetadata`, but as it doesn’t get a reference to the
entity it can’t add the sensor itself.

I looked at other possible solution but it seems that the `NodeMetadata` is
thrown away by Brooklyn once the machine is started, so a solution like
`EntityInitializer` isn’t going to work either. My current plan is to use a
location customizer to get the data needed and stash it somewhere in the
`MachineLocation` config, and an initializer to retrieve it and set a
sensor, but this sounds messy and also requires blueprints to be modified
to add the initializer.

Are there any other possible solutions?

Thanks
Richard.





Re: Brooklyn Vagrant

2017-11-20 Thread Alex Heneveld


sounds reasonable

On 20/11/2017 15:08, Duncan Godwin wrote:

Hi All,

Brooklyn Vagrant has not been working since the release of 0.12.0 due to an
error in one of the URLs, this was noted in the email thread [1]. I propose
we remove vagrant from the 0.12.0 docs and website branch. Once we do a new
1.0.0 release or 0.12.1 release these will be updated and it will include
it again. It seems better, however, to temporarily remove the non-working
instructions.

What does everyone think?

Many thanks

Duncan


[1]
http://markmail.org/search/?q=Issue+running+Brooklyn+in+Vagrant#query:Issue%20running%20Brooklyn%20in%20Vagrant+page:1+mid:wmgietdo64cdv4xg+state:results





Re: Onboarding feedback

2017-11-20 Thread Alex Heneveld


Good catches Svet.

The user in quotes is an easy fix, following Ivana's #754 returning 
correct json.  One line fix at [1].


The CLI needs a lot more error checking.  That's the downside of Go!  
Geoff fwiw I suggest it be one issue with many things to track.


The eu-west-2 failure is indeed BROOKLYN-412.  Harder to fix but worth 
attention so defaults work nicely.  Possibly should be addressed in 
jclouds instead of Brooklyn?


Best
Alex


[1]  https://github.com/apache/brooklyn-ui/pull/50


On 20/11/2017 11:27, Светослав Нейков wrote:

Hi folks,

Wanted to do a quick test with Brooklyn yesterday so went and downloaded
latest 0.12 release. Here are some of the things that didn't work out quite
as expected, mostly related to the command line client.

* The web app still shows the user in quotes - I thought we got this fixed
but might've slipped through the cracks.
* Deploying a basic app to AWS London with a minimal location results in
the error:

AWSResponseException: request POST https://ec2.eu-west-2.amazonaws.com/
HTTP/1.1 failed with code 400, error:
AWSError{requestId='fb5dc6dc-562e-4572-b854-80f61718a9b5',
requestToken='null', code='Unsupported', message='The requested
configuration is currently not supported. Please check the documentation
for supported configurations.', context='{Response=, Errors=}'}

Adding a hardwareId fixes it. Probably a case of BROOKLYN-412.


* "br --help" crashes (or any other option)
* br will not tell you if you got the argument ordering wrong - needed some
time to remember how it worked. For example:
   "br entity xxx" always results in 404 - note the URL it's trying to fetch
is "GET /v1/applications//entities/cl8z1sxvl7" (missing the app part)
   "br delete app xxx entity yyy" always results in 405 and tries to hit
"DELETE /v1/applications/"
* "br app xxx delete" doesn't quite work - removes the app without tearing
it down. Seems it's doing unmanage and not expunge as the help suggests.
Also throws the following exception:


./br app sgfcmfwfyb delete
{1511176504033 0 false In progress { {   }} destroying
BasicApplicationImpl{id=sgfcmfwfyb} map[] REST call to destroy application
Paystubs (BasicApplicationImpl{id=sgfcmfwfyb})   false { {   }}  false
Task[destroying BasicApplicationImpl{id=sgfcmfwfyb}]@EGXv12Vu

In progress (RUNNABLE)
At:
org.apache.brooklyn.core.mgmt.internal.NonDeploymentManagementContext.getSubscriptionManager(NonDeploymentManagementContext.java:249)

org.apache.brooklyn.core.mgmt.internal.EntityManagementSupport.onManagementStopping(EntityManagementSupport.java:311)

org.apache.brooklyn.core.mgmt.internal.LocalEntityManager$3.apply(LocalEntityManager.java:511)

org.apache.brooklyn.core.mgmt.internal.LocalEntityManager$3.apply(LocalEntityManager.java:508)

org.apache.brooklyn.core.mgmt.internal.LocalEntityManager.recursively(LocalEntityManager.java:645)

org.apache.brooklyn.core.mgmt.internal.LocalEntityManager.unmanage(LocalEntityManager.java:508)

org.apache.brooklyn.core.mgmt.internal.LocalEntityManager.unmanage(LocalEntityManager.java:461)

org.apache.brooklyn.core.mgmt.internal.LocalEntityManager.unmanage(LocalEntityManager.java:456)

org.apache.brooklyn.rest.util.BrooklynRestResourceUtils$2.run(BrooklynRestResourceUtils.java:421)

org.apache.brooklyn.util.core.task.BasicExecutionManager$SubmissionCallable.call(BasicExecutionManager.java:529)
[]  false map[self:/v1/activities/EGXv12Vu
children:/v1/activities/EGXv12Vu/children] EGXv12Vu 1511176504033}


Best,
Svet.





Brooklyn Tip of the Day: Injecting Script Vars

2017-11-16 Thread Alex Heneveld


Hi All-

I've noticed in a few blueprints using VanillaSoftwareProcess there is a 
pattern like this:


```install.command:
  $brooklyn:formatString:
  - |
V1=%s
V2=%s
some_command ${V1} ${V2}
  - $brooklyn:config("v1")
  - $brooklyn:config("v2")
```

This is handy and has its uses but note it doesn't play nice with 
special characters in variables.  For instance if a user says `v1: 
pa$$word` it won't do what you want.


It's usually better to use:

```shell.env:
  V1: $brooklyn:config("v1")
  V2: $brooklyn:config("v2")
install.command:
  some_command "${V1}" "${V2}"
```

Brooklyn will bash-escape-for-double-quotes the values in `shell.env` -- 
so the above will work with any chars in `v1` and `v2`!  (Note you need 
the double quotes around the reference _to_ `"${V..}"` - that's what 
it's escaped for.)


Unfortunately `shell.env` is coarse-grained: it applies to all commands, 
and in particular needs to resolve as early as `pre.install.command` 
(don't put `RUN_DIR` in there for instance!) -- but for config like 
passwords etc it's the way to go.


Best
Alex


PS.  Addition of config like 
`{pre.,,post.}{install,customize,launch}.shell.env` - to be overlaid on 
top of `shell.env` - would be easy and useful!


PPS.  I've also noticed augeas - http://augeas.net/ - being used in more 
complex blueprints.  Very elegant way to manage config files.




[jira] [Commented] (BROOKLYN-552) Tomcat entity stop / start fails as entity id changes which update run_dir but tc obviously not moved

2017-11-09 Thread Alex Heneveld (JIRA)

[ 
https://issues.apache.org/jira/browse/BROOKLYN-552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16245532#comment-16245532
 ] 

Alex Heneveld commented on BROOKLYN-552:


almost certain this has been working.  why does the `run_dir` sensor change?  
the entity ID shouldn't change.

> Tomcat entity stop / start fails as entity id changes which update run_dir 
> but tc obviously not moved
> -
>
> Key: BROOKLYN-552
> URL: https://issues.apache.org/jira/browse/BROOKLYN-552
> Project: Brooklyn
>  Issue Type: Bug
>Reporter: Duncan Grant
>Priority: Critical
>
> Reproduce:
> Deploy tomcat with:
> name: Test Tomcat
> location:
> centos7_gce_europe
> services:
> type: org.apache.brooklyn.entity.webapp.tomcat.TomcatServer
> after entity is running do 
> br app  ent  stop
> br app  ent  start
> The isrunning check will never pass as the run_dir sensor will have updated 
> but obviously the directory is not moved on the vm
> This is presumably a problem with all blueprints that use the standard pid 
> check for running???



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (BROOKLYN-551) Cannot add location

2017-11-09 Thread Alex Heneveld (JIRA)

[ 
https://issues.apache.org/jira/browse/BROOKLYN-551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16245528#comment-16245528
 ] 

Alex Heneveld commented on BROOKLYN-551:


[~drigodwin] debug exceptions aren't necessarily a problem so probably 
disregard the log as posted.  (they can happen eg when it tries to infer the 
right structure for the plan.  messy, and to be improved, but for now just 
ignore in many cases.)

are there any errors/warns?  what is the evidence the addition failed?

what you're doing is tested in CatalogYamlLocationTest so this is curious.

> Cannot add location
> ---
>
> Key: BROOKLYN-551
> URL: https://issues.apache.org/jira/browse/BROOKLYN-551
> Project: Brooklyn
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Duncan Godwin
>
> I cannot add a location either through the add yaml to catalog using:
> {code}
> brooklyn.catalog:
>   id: 'my-id'
>   name: 'my-name'
>   itemType: location
>   item:
> type: jclouds:aws-ec2
> brooklyn.config:
>   region: eu-central-1
>   identity: aaa
>   credential: 
> {code}
> nor using the location wizard
> I get the following in the logs:
> {code}
> 017-11-09T10:58:33,798 DEBUG 124 o.a.b.c.c.i.CatalogBundleLoader 
> [qtp296483222-103] Catalog load, found catalog BOM in 299 some-id 
> 0.0.0.SNAPSHOT
> 2017-11-09T10:58:33,799 DEBUG 124 o.a.b.c.c.i.BasicBrooklynCatalog 
> [qtp296483222-103] Catalog load, adding catalog item to 
> LocalManagementContext[QTViF5xW-z6LjNoy9]: brooklyn.catalog:
>   id: some-id
>   itemType: location
>   item:
> type: jclouds:aws-ec2
> brooklyn.config:
>   displayName: some-name
>   region: eu-central-1
>   identity: aa
>   credential: bb
> 2017-11-09T10:58:33,801 DEBUG 124 o.a.b.c.p.PlanToSpecFactory 
> [qtp296483222-103] Plan could not be transformed; failure will be propagated 
> (other transformers tried = [Java type instantiator 
> (org.apache.brooklyn.core.catalog.internal.JavaCatalogToSpecTransformer 
> parses only old-style catalog items containing only 'type: JavaClass' or 
> javaType in DTO)]): [org.apa
> che.brooklyn.util.exceptions.PropagatedRuntimeException: Transformer for 
> Brooklyn OASIS CAMP interpreter gave an error creating this plan: No class or 
> resolver found for location type jclouds:aws-ec2]
> 2017-11-09T10:58:33,802 DEBUG 124 o.a.b.c.p.PlanToSpecFactory 
> [qtp296483222-103] Plan could not be transformed; failure will be propagated 
> (other transformers tried = [Java type instantiator 
> (org.apache.brooklyn.core.catalog.internal.JavaCatalogToSpecTransformer 
> parses only old-style catalog items containing only 'type: JavaClass' or 
> javaType in DTO)]): [org.apa
> che.brooklyn.util.exceptions.PropagatedRuntimeException: Transformer for 
> Brooklyn OASIS CAMP interpreter gave an error creating this plan: No class or 
> resolver found for location type jclouds:aws-ec2]
> 2017-11-09T10:58:33,802 DEBUG 124 o.a.b.c.c.i.BasicBrooklynCatalog 
> [qtp296483222-103] No version specified for catalog item some-id. Using 
> default value.
> 2017-11-09T10:58:33,803 DEBUG 124 o.a.b.c.p.PlanToSpecFactory 
> [qtp296483222-103] Plan could not be transformed; failure will be propagated 
> (other transformers tried = [Java type instantiator 
> (org.apache.brooklyn.core.catalog.internal.JavaCatalogToSpecTransformer 
> parses only old-style catalog items containing only 'type: JavaClass' or 
> javaType in DTO)]): [org.apa
> che.brooklyn.util.exceptions.PropagatedRuntimeException: Transformer for 
> Brooklyn OASIS CAMP interpreter gave an error creating this plan: No class or 
> resolver found for location type jclouds:aws-ec2]
> 2017-11-09T10:58:33,805 DEBUG 124 o.a.b.c.p.PlanToSpecFactory 
> [qtp296483222-103] Plan could not be transformed; failure will be propagated 
> (other transformers tried = [Java type instantiator 
> (org.apache.brooklyn.core.catalog.internal.JavaCatalogToSpecTransformer 
> parses only old-style catalog items containing only 'type: JavaClass' or 
> javaType in DTO)]): [org.apa
> che.brooklyn.util.exceptions.PropagatedRuntimeException: Transformer for 
> Brooklyn OASIS CAMP interpreter gave an error creating this plan: No class or 
> resolver found for location type jclouds:aws-ec2]
> 2017-11-09T10:58:33,805 DEBUG 124 o.a.b.c.t.BasicBrooklynTypeRegistry 
> [qtp296483222-103] Inserting 
> BasicRegisteredType[some-id:0.0.0-SNAPSHOT;some-id:0.0.0-SNAPSHOT] into 
> org.apache.brooklyn.core.typereg.BasicBrooklynTypeRegistry@5fe75b4
> 2

[jira] [Commented] (BROOKLYN-551) Cannot add location

2017-11-09 Thread Alex Heneveld (JIRA)

[ 
https://issues.apache.org/jira/browse/BROOKLYN-551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16245527#comment-16245527
 ] 

Alex Heneveld commented on BROOKLYN-551:


[~drigodwin] debug exceptions aren't necessarily a problem so probably 
disregard the log as posted.  (they can happen eg when it tries to infer the 
right structure for the plan.  messy, and to be improved, but for now just 
ignore in many cases.)

are there any errors/warns?  what is the evidence the addition failed?

what you're doing is tested in CatalogYamlLocationTest so this is curious.

> Cannot add location
> ---
>
> Key: BROOKLYN-551
> URL: https://issues.apache.org/jira/browse/BROOKLYN-551
> Project: Brooklyn
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Duncan Godwin
>
> I cannot add a location either through the add yaml to catalog using:
> {code}
> brooklyn.catalog:
>   id: 'my-id'
>   name: 'my-name'
>   itemType: location
>   item:
> type: jclouds:aws-ec2
> brooklyn.config:
>   region: eu-central-1
>   identity: aaa
>   credential: 
> {code}
> nor using the location wizard
> I get the following in the logs:
> {code}
> 2017-11-09T10:58:33,798 DEBUG 124 o.a.b.c.c.i.CatalogBundleLoader 
> [qtp296483222-103] Catalog load, found catalog BOM in 299 some-id 
> 0.0.0.SNAPSHOT
> 2017-11-09T10:58:33,799 DEBUG 124 o.a.b.c.c.i.BasicBrooklynCatalog 
> [qtp296483222-103] Catalog load, adding catalog item to 
> LocalManagementContext[QTViF5xW-z6LjNoy9]: brooklyn.catalog:
>   id: some-id
>   itemType: location
>   item:
> type: jclouds:aws-ec2
> brooklyn.config:
>   displayName: some-name
>   region: eu-central-1
>   identity: aa
>   credential: bb
> 2017-11-09T10:58:33,801 DEBUG 124 o.a.b.c.p.PlanToSpecFactory 
> [qtp296483222-103] Plan could not be transformed; failure will be propagated 
> (other transformers tried = [Java type instantiator 
> (org.apache.brooklyn.core.catalog.internal.JavaCatalogToSpecTransformer 
> parses only old-style catalog items containing only 'type: JavaClass' or 
> javaType in DTO)]): [org.apa
> che.brooklyn.util.exceptions.PropagatedRuntimeException: Transformer for 
> Brooklyn OASIS CAMP interpreter gave an error creating this plan: No class or 
> resolver found for location type jclouds:aws-ec2]
> 2017-11-09T10:58:33,802 DEBUG 124 o.a.b.c.p.PlanToSpecFactory 
> [qtp296483222-103] Plan could not be transformed; failure will be propagated 
> (other transformers tried = [Java type instantiator 
> (org.apache.brooklyn.core.catalog.internal.JavaCatalogToSpecTransformer 
> parses only old-style catalog items containing only 'type: JavaClass' or 
> javaType in DTO)]): [org.apa
> che.brooklyn.util.exceptions.PropagatedRuntimeException: Transformer for 
> Brooklyn OASIS CAMP interpreter gave an error creating this plan: No class or 
> resolver found for location type jclouds:aws-ec2]
> 2017-11-09T10:58:33,802 DEBUG 124 o.a.b.c.c.i.BasicBrooklynCatalog 
> [qtp296483222-103] No version specified for catalog item some-id. Using 
> default value.
> 2017-11-09T10:58:33,803 DEBUG 124 o.a.b.c.p.PlanToSpecFactory 
> [qtp296483222-103] Plan could not be transformed; failure will be propagated 
> (other transformers tried = [Java type instantiator 
> (org.apache.brooklyn.core.catalog.internal.JavaCatalogToSpecTransformer 
> parses only old-style catalog items containing only 'type: JavaClass' or 
> javaType in DTO)]): [org.apa
> che.brooklyn.util.exceptions.PropagatedRuntimeException: Transformer for 
> Brooklyn OASIS CAMP interpreter gave an error creating this plan: No class or 
> resolver found for location type jclouds:aws-ec2]
> 2017-11-09T10:58:33,805 DEBUG 124 o.a.b.c.p.PlanToSpecFactory 
> [qtp296483222-103] Plan could not be transformed; failure will be propagated 
> (other transformers tried = [Java type instantiator 
> (org.apache.brooklyn.core.catalog.internal.JavaCatalogToSpecTransformer 
> parses only old-style catalog items containing only 'type: JavaClass' or 
> javaType in DTO)]): [org.apa
> che.brooklyn.util.exceptions.PropagatedRuntimeException: Transformer for 
> Brooklyn OASIS CAMP interpreter gave an error creating this plan: No class or 
> resolver found for location type jclouds:aws-ec2]
> 2017-11-09T10:58:33,805 DEBUG 124 o.a.b.c.t.BasicBrooklynTypeRegistry 
> [qtp296483222-103] Inserting 
> BasicRegisteredType[some-id:0.0.0-SNAPSHOT;some-id:0.0.0-SNAPSHOT] into 
> org.apache.brooklyn.core.typereg.BasicBrooklynTypeRegistry@5fe75b4
> 2

Re: Brooklyn REST API - omitting fields in JSON objects

2017-11-08 Thread Alex Heneveld


note previously we were inconsistent, with NON_NULL used some places, 
NON_EMPTY elsewhere, and no exclusions elsewhere, and in many places 
without much thought.  so we have three options:


1 - dogmatic - remove all `@Include(NON_*)` annotations
2 - compatibility-first - restore anything changed since 0.12.0
3 - forward-looking - leave as is, with documentation added that 
empty/null fields may not be included in json response objects, and 
release note that this has been applied in more places


(note that 1 is technically a breaking change if i had code that 
expected not to see a field unless it was non-empty ... it will probably 
break tests)


i tend to think API consumers impacted by this are small and can take 
the pain, and better to have a nice set of API response objects for new 
users.  and when i see lots of nulls and empties in an api response 
object i think the designers care more about dogma than human users.  i 
don't expect the v2 rest api will land any time soon.


so i am strongly for option 3, taking the ui pain now, but i'll defer if 
i'm a singleton


--a


On 08/11/2017 11:35, Geoff Macartney wrote:

My tuppence worth - agree with Thomas, it looks like a change best done in
a V2. Since it's actually a real breakage for the API it would be at least
worth passing through a deprecation cycle, but as Alex points out there's
no efficient way to do that. The cost of restoring ugliness in the response
objects at least keeps the pain away from client API consumers.

G

On Wed, 8 Nov 2017 at 11:22 Graeme Miller  wrote:


Hello,

I agree with Thomas here. This seems like an API breaking change, and
should be reserved for V2 if it at all.
I lean towards reverting.

Regards,
Graeme

On 8 November 2017 at 10:01, Thomas Bouron <
thomas.bou...@cloudsoftcorp.com>
wrote:


Hi all.

I'm not a fan of excluding fields from the JSON payload, if empty, for

few

reasons:
1. this is a breaking change for the UI and CLI which will be time
consuming to fix (very fiddly to guard against this in JS for example)
2. this makes it harder to design clients, because you need to guard
against the presence of those fields. The swagger page displays all

fields

leading the user/dev to think that they are always returned in the

payload

3. I don't think it saves bandwidth to remove a `constraints: []` from

the

final JSON. If you want to have a smaller payload, I would much prefer to
set a query string flag to restrict (or expand) the full body, something
like `?summary=true` => returns only necessary information.

Now, my comments apply for the API endpoints under /v1 only. I'm all in
favour of break things with a v2 API though.

Best.

On Mon, 6 Nov 2017 at 15:17 Alex Heneveld <

alex.henev...@cloudsoftcorp.com

wrote:


Hi All-

Until recently our REST API returned full records in most cases,
including often lots of empty lists and maps and sometimes nulls --

such

as `constraints: []` on all config keys.

The widespread preference in REST / JSON community seems to be to omit
these unless there is a very good reason for having them, e.g. Google
JSON Style Guide [1].

Recently in replacing deprecated FasterXML annotations with newer ones
many fields were changed to be included only if NON_EMPTY, in line with
that preference, but this is technically a breaking change.  And it's
not just technical, as there are a few places in UI code which assume
fields exist which now do break.  These are easy to fix once they're
encountered at runtime but tedious to ensure no problems at design

time.

Thus we have two choices:

* Yes, make this break now.  This (v1.0) is probably the best time.
There is no efficient way to pass this through a deprecation cycle.

Cost

is adding to release notes and REST API client breakages and fixes.
* No, revert any `Include(NON_EMPTY)` annotations recently introduced

to

be strict about backwards-compatibility here.  Cost is restoring old
behaviour and bloat/ugliness in the API response objects.

I have a preference for "Yes", roll-forward and tolerate some breakages
with the shift to 1.0.

Best
Alex


[1]

https://google.github.io/styleguide/jsoncstyleguide.xml#Empty/Null_Property_Values

--

Thomas Bouron • Senior Software Engineer @ Cloudsoft Corporation •
https://cloudsoft.io/
Github: https://github.com/tbouron
Twitter: https://twitter.com/eltibouron





Brooklyn REST API - omitting fields in JSON objects

2017-11-06 Thread Alex Heneveld


Hi All-

Until recently our REST API returned full records in most cases, 
including often lots of empty lists and maps and sometimes nulls -- such 
as `constraints: []` on all config keys.


The widespread preference in REST / JSON community seems to be to omit 
these unless there is a very good reason for having them, e.g. Google 
JSON Style Guide [1].


Recently in replacing deprecated FasterXML annotations with newer ones 
many fields were changed to be included only if NON_EMPTY, in line with 
that preference, but this is technically a breaking change.  And it's 
not just technical, as there are a few places in UI code which assume 
fields exist which now do break.  These are easy to fix once they're 
encountered at runtime but tedious to ensure no problems at design time.


Thus we have two choices:

* Yes, make this break now.  This (v1.0) is probably the best time. 
There is no efficient way to pass this through a deprecation cycle. Cost 
is adding to release notes and REST API client breakages and fixes.
* No, revert any `Include(NON_EMPTY)` annotations recently introduced to 
be strict about backwards-compatibility here.  Cost is restoring old 
behaviour and bloat/ugliness in the API response objects.


I have a preference for "Yes", roll-forward and tolerate some breakages 
with the shift to 1.0.


Best
Alex


[1] 
https://google.github.io/styleguide/jsoncstyleguide.xml#Empty/Null_Property_Values
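Brooklyn's REST layer does this in Java with Jackson's `Include(NON_EMPTY)` annotations, but the effect on a response object can be sketched in Python. This is an illustration only, not Brooklyn code; the `drop_empty` helper and the sample field values are mine, with `constraints: []` taken from the example above:

```python
import json

def drop_empty(obj):
    """Recursively drop None values and empty lists/maps/strings,
    mimicking the effect of Jackson's Include.NON_EMPTY rule."""
    if isinstance(obj, dict):
        cleaned = {k: drop_empty(v) for k, v in obj.items()}
        return {k: v for k, v in cleaned.items() if v not in (None, [], {}, "")}
    if isinstance(obj, list):
        return [drop_empty(v) for v in obj]
    return obj

# A config-key record as the API returned it before the change:
config_key = {"name": "http.port", "type": "integer",
              "constraints": [], "description": None}

print(json.dumps(drop_empty(config_key)))
# → {"name": "http.port", "type": "integer"}
```

The breaking-change risk discussed in the thread is exactly the second form: a client that reads `response.constraints` unconditionally now has to guard against the field being absent.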




Re: New `website` branch in brooklyn-docs

2017-11-01 Thread Alex Heneveld


Great points.

> "Non-guide updates for new features" appears to be the crux of why we
> disagree (please correct me if I am wrong).

100% agree.  Like your positioning "what do we optimize for", and the 
question how much of non-guide items are release-specific.


Main non-guide items are:

* learn more - theory
* learn more - features list
* learn more - catalog
* download links
* community resources
* developer guide - IDE setup, code structure, and code standards

The majority of these are things that should update immediately, I'm now 
persuaded by you of that and consequently slightly more comfortable with 
a project or branch bifurcation.


One remaining question is whether we want to encourage more 
release-specific info to go onto the non-guide website -- but I could be 
convinced we want to force most such updates into the guide so that 
historic versions are available.  We can still reference content in the 
guide from the main website.  "Getting started" for instance does this.  
Any changes to that structure would require an embargoed change to 
non-guide website but this could be handled with PRs as you suggest.


Final point to note is that a fix to the guide still requires two PRs 
against two branches so there is an asymmetry here as opposed to the 
website.  Still feels simpler to me to have the same consistent process 
for guide fixes as for non-guide website fixes, and do the bifurcation 
in directories rather than project or branch ... but I don't feel as 
strongly as I did and willing to support you & Thomas if I'm a singleton 
in my opinion!


Thanks for exploring with me Richard.

Best
Alex



On 01/11/2017 15:45, Richard Downer wrote:

Hi Alex,

ObDisclaimer: I have an opinion. I am not suggesting that it's a correct
one :-)


On 1 November 2017 at 12:26, Alex Heneveld 
wrote:


I think it's wrong to split user guide and other parts of the web site.

I don't see a good reason for it apart from us liking more bifurcations
along technology lines (and these are evolving).  I don't see why sub-dirs
aren't appropriate or are difficult.  (And was it really necessary to
abandon the subdirs in order to support gitbooks?)


My motivation here is unrelated to the technology. GitBooks does not make
any impact on this decision.

My motivation here can be simply stated as:

1. There will be multiple published versions of the documentation
(potentially multiple active simultaneously, and many historic versions)
2. There will only ever be one published website  (no matter what URL
manipulations you do, you will never find a different version of the
website on our server)

Furthermore:

3. The documentation is the product of a release that has been made and
voted on
4. The website is not the product of a release

Given these differences, we should not assume that the documentation and
the website are treated in the same way.



To me, `brooklyn-docs` is the canonical collection of resources that
explain Brooklyn - what it is, how to use it, examples, reference, etc.
The user experiences this through our web site. It is a coherent piece.
The guide is a large chunk of this which has some unique aspects -- it is
built differently (gitbooks, and generates PDF) and we maintain old
versions of it (but not the website) but that is it.


Yes, the docs and the website are both forms of information, but they have
different forces acting on them. Consider the OOP "counterexample" of
Rectangles and Squares[1] - intuitively it makes sense to write `public
class Square extends Rectangle` until you realise that squares have
constraints that break the abstraction. Here I don't think it makes sense
to think of `public class Documentation extends Website` or vice-versa, as
both have constraints unique to each. I don't think we should be fighting
those constraints by trying to make a Square into a Rectangle; instead we
should just accept that they have differences and deal with them
differently.

There are two classes of changes:

* new features - these only apply the next release, so are not published


This is a rare event. To my knowledge we've never had a website update that
had to be synchronised with a release, other than the obvious one of
updating links to the downloads. However it *is* a worthwhile thing to be
able to support. I think that there are other ways to do it, such as
tagging the PR title.

* fixes and non-code enhancements (eg listing new members) - these apply

to the next release and the current version (and in the case of guide
possibly older versions though updating this is less important) - normal
workflow is to push to master branch and last-released branch, then publish
last-released branch


This is a common event in the website history so far. But under this
workflow, an immediate change to the website would require either *two*
PRs, or to push the effort onto the committe

Re: New `website` branch in brooklyn-docs

2017-11-01 Thread Alex Heneveld
 split our
website into its own repository.

Alex I think the technology of the website and the docs should diverge ie
we choose the best technology for each.

RE PRs ... splitting the site / docs makes the most sense as we deal with
making prs across Brooklyn's numerous repositories


Mark



On 27 October 2017 at 09:01, Alex Heneveld 
wrote:


Sorry - I really don't like website being hidden on a branch.

Why?

* It's non-obvious and one more thing to remember (or forget, or explain)
* Primary reason for branching (in my view) is lifecycle, and here the

two

have very similar versioning and release lifecycles: master for both will
be relevant to master of codebase, where eg v0.12.0 will be relevant to
last release of 0.12.0 and what is live; there will be times we want to
update what is live with something more recent that is relevant to that version,
but we'll do that by pushing to 0.12, and we can release (publish) that
immediately unlike code, but what we _can't_ assume for both is that

master

can always be published, because we may have added unreleased features to
docs for both site and guide; with site in a `website` branch we will

have

to have branches there eg `website_0.12` alongside `0.12` of guide
* PRs become more difficult, remembering which branch to open them

against

and merge against
* PRs become even more difficult if adding a feature which goes onto both
the site and into the docs (which we should do more of), we add to one
branch, switch branches add to the other, then open PRs for both branches
* Ideally they converge to use the same technology (not both gitbook and
markdown) so if in the same project they can share themes and utils etc;

if

in different branches we have to cherry-pick between branches (always
opening and reviewing 2 identical PRs!)

My ideal would be different subdirs but if they really need to be

separate

then we could do a different project (resolves reasons 1 2 3 and

improves 4

and 5)

Best
Alex



On 26/10/2017 14:04, Richard Downer wrote:


All,

Following on from Thomas's gitbook work[1] - now merged[2] - it means

that

the brooklyn-docs `master` branch does not contain the public website
(just
the documentation).

In those threads it was discussed that most branches would remove the
website, and a separate branch would be used to store the public website
(a
bit like how GitHub pages used a `gh-pages` branch for a repository's
public website). I'm assuming that the acceptance and merging of

gitbooks

is passive consensus of this concept too.

Therefore, I've created a new branch in the Apache brooklyn-docs repo
called `website`, which is taken from `master` immediately prior to the
merge of the gitbook conversion.

If anybody disagrees with this passive consensus approach and thinks

that

this branch is a bad idea, then please feel free to speak up. After all,
source control means that things can always be undone :-)

Expect a forthcoming PR on the `website` branch to remove the
documentation
files and leave just the website files (like the opposite of Thomas's

PR).

Richard.

[1]
https://lists.apache.org/thread.html/760a3e2fdefaff8d8ac4ea3f98a45060fbaa0d886fa57c8f44e4742e@%3Cdev.brooklyn.apache.org%3E
[2] https://github.com/apache/brooklyn-docs/pull/222

-- Forwarded message --
From: 
Date: 26 October 2017 at 13:50
Subject: [brooklyn-docs] Git Push Summary
To: comm...@brooklyn.apache.org


Repository: brooklyn-docs
Updated Branches:
refs/heads/website [created] e1d08e53c






Re: [PROPOSAL] catalog bundle id:version mandatory in v1.0.0

2017-10-27 Thread Alex Heneveld


We jump through quite a few hoops to ensure rebind works, including 
current PR [1].  (I might feel differently if I hadn't just spent too 
long making those hoops.)


Code will be able to be simplified quite a bit in this area if we 
disallow the so-called "anonymous bundles" - but feels premature to go 
that far.


--A

[1] https://github.com/apache/brooklyn-server/pull/866


On 27/10/2017 09:21, Thomas Bouron wrote:

Fair points Alex, you are absolutely right.

Although, I still think that forcing to have `bundle:` now in a bom is 
a better way: It will be annoying for users but easy to fix, whereas 
the alternative means that Brooklyn **will be broken on 
restart/rebind**, which is a clear no-go from my point of view.


Best

On Fri, 27 Oct 2017 at 09:12 Alex Heneveld 
<alex.henev...@cloudsoftcorp.com> wrote:



Thinking about it I'm not convinced that breaking users blueprints
is the best way to update their mental model. With a shift to use
the CLI with catalog.bom in root and updating exemplar projects,
and updating UI to reflect bundles, I expect we'll achieve this in
a less disruptive way.

At minimum I don't think we should make it a breaking change to
try to force a mental model switch until _we_ have:

* made catalog UI be bundle-centric (currently bundle name is
shown but you can't navigate by bundles)
* made composer be bundle-centric - at minimum including a bundle
name in the template Brooklyn shows, but ideally more, allowing
people to edit multiple resources in a bundle via the UI (else we
aren't really operating on the new mental model)

We will also need a non-trivial amount of work to update tests
probably the majority of which use anonymous bundles and many of
which test aspects of anonymous bundles (and we'd have to be
careful in updating/fixing tests to identify the latter ones).


Some halfway house measures would be to provide more obvious
feedback to users when they install or use anonymous bundles:

* CLI and UI show the name of the bundle added, with a warning if
it's an anonymous one (user can then easily delete and release
with better name)
* UI show warnings on items from anonymous bundles


Also maybe it's worth adding a "strict" mode to Brooklyn, where in
strict mode this is disabled (and other things, like
persisting/rebinding anonymous types or other deprecated
features).  We could even persist this setting so new users work
in strict mode but if you're rebinding you get non-strict mode.

Best

Alex



On 27/10/2017 08:16, Thomas Bouron wrote:

+1, this sounds sensible to me too.
I also vote in favor of making this a breaking change. With 1.0.0
coming, this is the perfect time to do it.

Best.

On Fri, 27 Oct 2017 at 08:06 Geoff Macartney
<geoff.macart...@cloudsoft.io> wrote:

+1 to your suggestion Aled.   Also I'd side with making it a
breaking
change, I prefer forcing that mental model.

On Thu, 26 Oct 2017 at 13:12 Aled Sage <aled.s...@gmail.com> wrote:

> Alex,
>
> I say we break it - force the user to have the correct
mental model for
> 1.0.0!
>
> It's a simple one-line addition to fix a given .bom file.
We should
> obviously ensure that historic persisted state continues to
work.
>
> ---
>
> Also note that our deprecation warnings are too hidden. For
example, if
> you do `br catalog add ...` then deprecation warnings will
only be
> visible via the Brooklyn log (rather than being returned to
the user of
> `br` or via the REST api).
>
> But that's a separate topic!
>
> Aled
>
>
> On 26/10/2017 12:58, Alex Heneveld wrote:
> >
> > Absolutely.
> >
> > Question is whether we should deprecate or make it a
breaking change.
> >
> > Deprecating feels right, though I think it would mean we
can't
> > actually remove until 2.0 ?
> >
> > Best
> > Alex
> >
> >
> > On 26/10/2017 12:06, Aled Sage wrote:
> >> Hi all,
> >>
> >> I propose we make it mandatory to specify the bundle id
and version
> >> (currently it is optional).
> >>
> >> I think we should do this asap, before 1.0.0 is released.
> >>
> >> This is a breaking change - it will require users to add
 

Re: [PROPOSAL] catalog bundle id:version mandatory in v1.0.0

2017-10-27 Thread Alex Heneveld


Thinking about it I'm not convinced that breaking users blueprints is 
the best way to update their mental model.  With a shift to use the CLI 
with catalog.bom in root and updating exemplar projects, and updating UI 
to reflect bundles, I expect we'll achieve this in a less disruptive way.


At minimum I don't think we should make it a breaking change to try to 
force a mental model switch until _we_ have:


* made catalog UI be bundle-centric (currently bundle name is shown but 
you can't navigate by bundles)
* made composer be bundle-centric - at minimum including a bundle name 
in the template Brooklyn shows, but ideally more, allowing people to 
edit multiple resources in a bundle via the UI (else we aren't really 
operating on the new mental model)


We will also need a non-trivial amount of work to update tests probably 
the majority of which use anonymous bundles and many of which test 
aspects of anonymous bundles (and we'd have to be careful in 
updating/fixing tests to identify the latter ones).



Some halfway house measures would be to provide more obvious feedback to 
users when they install or use anonymous bundles:


* CLI and UI show the name of the bundle added, with a warning if it's 
an anonymous one (user can then easily delete and release with better name)

* UI show warnings on items from anonymous bundles


Also maybe it's worth adding a "strict" mode to Brooklyn, where in 
strict mode this is disabled (and other things, like 
persisting/rebinding anonymous types or other deprecated features).  We 
could even persist this setting so new users work in strict mode but if 
you're rebinding you get non-strict mode.


Best
Alex


On 27/10/2017 08:16, Thomas Bouron wrote:

+1, this sounds sensible to me too.
I also vote in favor of making this a breaking change. With 1.0.0 
coming, this is the perfect time to do it.


Best.

On Fri, 27 Oct 2017 at 08:06 Geoff Macartney 
<geoff.macart...@cloudsoft.io> wrote:


+1 to your suggestion Aled.   Also I'd side with making it a breaking
change, I prefer forcing that mental model.

On Thu, 26 Oct 2017 at 13:12 Aled Sage <aled.s...@gmail.com> wrote:

> Alex,
>
> I say we break it - force the user to have the correct mental
model for
> 1.0.0!
>
> It's a simple one-line addition to fix a given .bom file. We should
> obviously ensure that historic persisted state continues to work.
>
> ---
>
> Also note that our deprecation warnings are too hidden. For
example, if
> you do `br catalog add ...` then deprecation warnings will only be
> visible via the Brooklyn log (rather than being returned to the
user of
> `br` or via the REST api).
>
> But that's a separate topic!
>
> Aled
>
>
> On 26/10/2017 12:58, Alex Heneveld wrote:
> >
> > Absolutely.
> >
> > Question is whether we should deprecate or make it a breaking
change.
> >
> > Deprecating feels right, though I think it would mean we can't
> > actually remove until 2.0 ?
> >
> > Best
> > Alex
> >
> >
> > On 26/10/2017 12:06, Aled Sage wrote:
> >> Hi all,
> >>
> >> I propose we make it mandatory to specify the bundle id and
version
> >> (currently it is optional).
> >>
> >> I think we should do this asap, before 1.0.0 is released.
> >>
> >> This is a breaking change - it will require users to add a
`bundle:
> >> ...` line to their .bom files (if they are not supplying the
metadata
> >> another way, such as building their project as an OSGi bundle).
> >>
> >> TL;DR: for backwards compatibility, we let users have a different
> >> mental model from what Brooklyn now actually does, which can
lead to
> >> confusing behaviour when upgrading or working with snapshots.
> >>
> >> *## Current Behaviour*
> >>
> >> Currently, we'll auto-wrap things as a bundle, generating a
unique
> >> anonymous bundle name:version if one is not supplied.
> >>
> >> This is important for users doing `br catalog add myfile.bom`
or `br
> >> catalog add mydir/`. In both cases we automatically create an
OSGi
> >> bundle with those contents. For that bundle's name:version,
one can
> >> explicitly supply it (e.g. via `bundle: ...` in the .bom
file). But,
> >> for backwards compatibility, we support .bom files that have
no such
> >> metadata.
> >>

Re: New `website` branch in brooklyn-docs

2017-10-27 Thread Alex Heneveld


Sorry - I really don't like website being hidden on a branch.

Why?

* It's non-obvious and one more thing to remember (or forget, or explain)
* Primary reason for branching (in my view) is lifecycle, and here the 
two have very similar versioning and release lifecycles: master for both 
will be relevant to master of codebase, where eg v0.12.0 will be 
relevant to last release of 0.12.0 and what is live; there will be times 
we want to update what is live with something more reason relevant to 
that version, but we'll do that by pushing to 0.12, and we can release 
(publish) that immediately unlike code, but what we _can't_ assume for 
both is that master can always be published, because we may have added 
unreleased features to docs for both site and guide; with site in a 
`website` branch we will have to have branches there eg `website_0.12` 
alongside `0.12` of guide
* PRs become more difficult, remembering which branch to open them 
against and merge against
* PRs become even more difficult if adding a feature which goes onto 
both the site and into the docs (which we should do more of), we add to 
one branch, switch branches add to the other, then open PRs for both 
branches
* Ideally they converge to use the same technology (not both gitbook and 
markdown) so if in the same project they can share themes and utils etc; 
if in different branches we have to cherry-pick between branches (always 
opening and reviewing 2 identical PRs!)


My ideal would be different subdirs but if they really need to be 
separate then we could do a different project (resolves reasons 1 2 3 
and improves 4 and 5)


Best
Alex


On 26/10/2017 14:04, Richard Downer wrote:

All,

Following on from Thomas's gitbook work[1] - now merged[2] - it means that
the brooklyn-docs `master` branch does not contain the public website (just
the documentation).

In those threads it was discussed that most branches would remove the
website, and a separate branch would be used to store the public website (a
bit like how GitHub pages used a `gh-pages` branch for a repository's
public website). I'm assuming that the acceptance and merging of gitbooks
is passive consensus of this concept too.

Therefore, I've created a new branch in the Apache brooklyn-docs repo
called `website`, which is taken from `master` immediately prior to the
merge of the gitbook conversion.

If anybody disagrees with this passive consensus approach and thinks that
this branch is a bad idea, then please feel free to speak up. After all,
source control means that things can always be undone :-)

Expect a forthcoming PR on the `website` branch to remove the documentation
files and leave just the website files (like the opposite of Thomas's PR).

Richard.

[1]
https://lists.apache.org/thread.html/760a3e2fdefaff8d8ac4ea3f98a45060fbaa0d886fa57c8f44e4742e@%3Cdev.brooklyn.apache.org%3E
[2] https://github.com/apache/brooklyn-docs/pull/222

-- Forwarded message --
From: 
Date: 26 October 2017 at 13:50
Subject: [brooklyn-docs] Git Push Summary
To: comm...@brooklyn.apache.org


Repository: brooklyn-docs
Updated Branches:
   refs/heads/website [created] e1d08e53c





Re: [PROPOSAL] catalog bundle id:version mandatory in v1.0.0

2017-10-26 Thread Alex Heneveld


Absolutely.

Question is whether we should deprecate or make it a breaking change.

Deprecating feels right, though I think it would mean we can't actually 
remove until 2.0 ?


Best
Alex


On 26/10/2017 12:06, Aled Sage wrote:

Hi all,

I propose we make it mandatory to specify the bundle id and version 
(currently it is optional).


I think we should do this asap, before 1.0.0 is released.

This is a breaking change - it will require users to add a `bundle: 
...` line to their .bom files (if they are not supplying the metadata 
another way, such as building their project as an OSGi bundle).


TL;DR: for backwards compatibility, we let users have a different 
mental model from what Brooklyn now actually does, which can lead to 
confusing behaviour when upgrading or working with snapshots.


*## Current Behaviour*

Currently, we'll auto-wrap things as a bundle, generating a unique 
anonymous bundle name:version if one is not supplied.


This is important for users doing `br catalog add myfile.bom` or `br 
catalog add mydir/`. In both cases we automatically create an OSGi 
bundle with those contents. For that bundle's name:version, one can 
explicitly supply it (e.g. via `bundle: ...` in the .bom file). But, 
for backwards compatibility, we support .bom files that have no such 
metadata.


*## Old Behaviour (0.11.0 and Earlier)*

Before we added full support and use of bundles in the catalog, the 
user's .bom file was parsed and its items added to the catalog. The 
raw .bom file was discarded.


For upgrade/dependencies/versioning/usability reasons described in 
[1,2,3], this was a bad idea.



*## Reason the Current Behaviour is Bad!*

By auto-generating bundle names, we allow users to have a completely 
different mental model from what is actually happening in Brooklyn.


For simple cases (e.g. their .bom file only ever contains one item), 
that's fine.


However, it leads to surprising behaviour (and more complicated 
Brooklyn code), particularly when using snapshots or forced upgrade.


Consider a (developer) user writing a .bom file, with version 
1.0.0-SNAPSHOT containing entities A and B. If the user modifies it 
and runs `br catalog add myfile.bom` then we successfully replace the 
old auto-generated bundle with this new one. However, if the user then 
deletes B (so the .bom contains only A) and does `catalog add` again, 
it's unclear to Brooklyn whether this is meant to be the same .bom 
file. You could argue that it should be forbidden (because if we kept 
the old .bom we'd end up with two different versions of A, and 
deleting B would be wrong if it was a different .bom).


The right thing in my opinion is to force the user to have the same 
mental model as Brooklyn does: they need to include a `bundle: ` line 
in their .bom file so they are explicit about what they are doing.


Note this does not force the user to understand OSGi/java; it just 
requires them to think in terms of a "versioned bundle" of catalog items.


Aled

[1] "Uploading ZIPs for a better dev workflow" dev@brooklyn email thread
[2] "Bundles in Brooklyn" dev@brooklyn email thread
[3] "Bundles of fun" dev@brooklyn email thread
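For illustration, the explicit `bundle:` metadata being proposed can be sketched in a catalog.bom roughly as follows (the bundle and item names here are hypothetical, not from the thread):

```yaml
# Sketch of a .bom with explicit bundle id and version, so Brooklyn
# can recognise re-uploads of the same bundle across versions
brooklyn.catalog:
  bundle: my-company-blueprints      # hypothetical bundle id
  version: 1.0.0-SNAPSHOT
  items:
    - id: entity-a
      item:
        type: org.apache.brooklyn.entity.software.base.VanillaSoftwareProcess
```

With this metadata, `br catalog add myfile.bom` replaces the previous `my-company-blueprints:1.0.0-SNAPSHOT` bundle rather than creating a new anonymous one.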






Blueprint project structure

2017-10-25 Thread Alex Heneveld


Hi All-

With the move towards bundles, people have been experimenting with 
project structure, and there seem to be two schools:


(1) use maven, include a `pom.xml` in root, and put the `catalog.bom` 
and blueprints into either `src/main/resources/` or a custom resources 
directory eg `catalog/`


(2) put `catalog.bom` in root, and have yaml blueprint files, and 
scripts and other resources, in whatever structure underneath makes sense


Both have benefits.  For instance with (2) you can deploy straight from 
the root, or even `br catalog add 
https://github.com/brooklyncentral/brooklyn-dns/archive/master.zip`, and 
it's obvious where the entry point is.  With (1) you can build and 
install to maven, and access the dependency as `mvn:...` URLs in karaf; 
and it of course becomes very useful when including java.


I've done some experimenting at [1] on a pom.xml configuration that 
works with (2).   This means a blueprint developer can do what they like 
without ever touching maven, not having a pom.xml, etc.  Then if they 
want maven at some point it can be added on without having to change 
anything else in the project.  It also means community blueprints can be 
used as (1) or as (2), and people looking at them just look at the 
subset of bom and yaml files that are the blueprint and it will be 
sensible, without any odd structure imposed by maven.  And we can 
encourage putting a `catalog.bom` in the root of every project on the 
planet so there is never ever any question about how to deploy software 
ever again.  :)


Is this structure something we should converge on as a recommendation 
for blueprint projects?  Basically I'm proposing we say (2) is _always_ 
best practice, even when (1) is needed, and we have exemplars (one so far 
and more to follow if we like this) for how to structure projects.


Best
Alex


[1]  https://github.com/brooklyncentral/brooklyn-dns/pull/7
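A minimal sketch of structure (2) — `catalog.bom` at the root, resources underneath — might look like this (file and item names are hypothetical):

```yaml
# Hypothetical layout for option (2):
#   catalog.bom          <- this file, at the project root
#   blueprints/...       <- yaml blueprints
#   scripts/install.sh   <- scripts and other resources
# The whole directory can then be added with `br catalog add mydir/`.
brooklyn.catalog:
  bundle: example-blueprints
  version: 0.1.0-SNAPSHOT
  items:
    - id: my-entity
      item:
        type: org.apache.brooklyn.entity.software.base.VanillaSoftwareProcess
        brooklyn.config:
          install.command: bash scripts/install.sh
```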



Re: Call for release: Brooklyn 0.12.1

2017-10-20 Thread Alex Heneveld

+1

On 20/10/2017 10:06, Duncan Godwin wrote:

+1

On 20 October 2017 at 09:55, Geoff Macartney 
wrote:


+1 (and agree we should just do it with the cherry pick)



On Fri, 20 Oct 2017 at 09:50 Thomas Bouron 
wrote:


I agree with you Mark. I would cherry-pick only the fix for vagrant.

On Fri, 20 Oct 2017 at 09:46 Duncan Grant 
wrote:


+1 (non-binding)


On Fri, 20 Oct 2017 at 09:44 Mark McKenna 

wrote:

+1

IMO 0.12.1 should only contain this fix

*Mark McKenna*

*Twitter ::* @m4rkmckenna 

*Github :: *m4rkmckenna 

*PGP :: A7A9 24DE 638C 681A 8DEA FAD4 2B5D C759 B1EB 76A7
*

On 19 October 2017 at 16:12, Richard Downer 

wrote:

+1

On 19 October 2017 at 16:07, Thomas Bouron


com>
wrote:


Hi all

A couple of people on IRC and the ML had issues with `brooklyn-vagrant`.

This is unfortunately broken in Brooklyn 0.12.0, but it is the main
installation method that we promote on the Brooklyn website.

As a new user, this can be a very frustrating first experience with
Brooklyn which, IMO, does not send the right message.

For this reason, I think we should do a Brooklyn 0.12.1 release as soon as
possible.

Let us know your opinions and if there are no other outstanding problems.

Best.
--

Thomas Bouron • Senior Software Engineer @ Cloudsoft Corporation

•

https://cloudsoft.io/
Github: https://github.com/tbouron
Twitter: https://twitter.com/eltibouron


--

Thomas Bouron • Senior Software Engineer @ Cloudsoft Corporation •
https://cloudsoft.io/
Github: https://github.com/tbouron
Twitter: https://twitter.com/eltibouron





[jira] [Commented] (BROOKLYN-544) Deadlock in enricher's subscription

2017-10-11 Thread Alex Heneveld (JIRA)

[ 
https://issues.apache.org/jira/browse/BROOKLYN-544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16200913#comment-16200913
 ] 

Alex Heneveld commented on BROOKLYN-544:


pretty sure the above fixes it.  with the fix above, running the test 1000 times works fine. 
 (without it, it fails for me after about 200 runs.)

> Deadlock in enricher's subscription
> ---
>
> Key: BROOKLYN-544
> URL: https://issues.apache.org/jira/browse/BROOKLYN-544
> Project: Brooklyn
>  Issue Type: Bug
>Reporter: Aled Sage
>Priority: Blocker
>
> With brooklyn 1.0.0-SNAPSHOT, running the test 
> {{YamlRollingTimeWindowMeanEnricherTest}} many times, it hung with the 
> deadlock shown below:
> {noformat}
> Found one Java-level deadlock:
> =
> "brooklyn-execmanager-CkWF5Jg7-1":
>   waiting to lock monitor 0x7f8e9c025de8 (object 0x0007b5fee528, a 
> org.apache.brooklyn.core.mgmt.internal.LocalSubscriptionManager),
>   which is held by "main"
> "main":
>   waiting to lock monitor 0x7f8e9b95ebe8 (object 0x0007b6042cf8, a 
> java.util.Collections$SynchronizedMap),
>   which is held by "brooklyn-execmanager-CkWF5Jg7-1"
> Java stack information for the threads listed above:
> ===
> "brooklyn-execmanager-CkWF5Jg7-1":
> at 
> org.apache.brooklyn.core.mgmt.internal.LocalSubscriptionManager.getSubscriptionsForEntitySensor(LocalSubscriptionManager.java:207)
> - waiting to lock <0x0007b5fee528> (a 
> org.apache.brooklyn.core.mgmt.internal.LocalSubscriptionManager)
> at 
> org.apache.brooklyn.core.mgmt.internal.LocalSubscriptionManager.publish(LocalSubscriptionManager.java:255)
> at 
> org.apache.brooklyn.core.mgmt.internal.BasicSubscriptionContext.publish(BasicSubscriptionContext.java:176)
> at 
> org.apache.brooklyn.core.entity.AbstractEntity$BasicSensorSupport.emitInternal(AbstractEntity.java:1059)
> at 
> org.apache.brooklyn.core.sensor.AttributeMap.update(AttributeMap.java:151)
> - locked <0x0007b6042cf8> (a 
> java.util.Collections$SynchronizedMap)
> at 
> org.apache.brooklyn.core.entity.AbstractEntity$BasicSensorSupport.set(AbstractEntity.java:957)
> at 
> org.apache.brooklyn.core.enricher.AbstractEnricher.emit(AbstractEnricher.java:144)
> at 
> org.apache.brooklyn.enricher.stock.AbstractTransformer.onEvent(AbstractTransformer.java:157)
> at 
> org.apache.brooklyn.core.mgmt.internal.LocalSubscriptionManager$1.run(LocalSubscriptionManager.java:336)
> at 
> org.apache.brooklyn.core.mgmt.internal.LocalSubscriptionManager.submitPublishEvent(LocalSubscriptionManager.java:352)
> at 
> org.apache.brooklyn.core.mgmt.internal.LocalSubscriptionManager.lambda$0(LocalSubscriptionManager.java:192)
> at 
> org.apache.brooklyn.core.mgmt.internal.LocalSubscriptionManager$$Lambda$3/871160466.run(Unknown
>  Source)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at 
> org.apache.brooklyn.util.core.task.BasicExecutionManager$SubmissionCallable.call(BasicExecutionManager.java:565)
> at 
> org.apache.brooklyn.util.core.task.SingleThreadedScheduler$1.call(SingleThreadedScheduler.java:116)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> "main":
> at java.util.Collections$SynchronizedMap.get(Collections.java:2584)
> - waiting to lock <0x0007b6042cf8> (a 
> java.util.Collections$SynchronizedMap)
> at 
> org.apache.brooklyn.core.sensor.AttributeMap.getValue(AttributeMap.java:219)
> at 
> org.apache.brooklyn.core.sensor.AttributeMap.getValue(AttributeMap.java:225)
> at 
> org.apache.brooklyn.core.entity.AbstractEntity$BasicSensorSupport.get(AbstractEntity.java:932)
> at 
> org.apache.brooklyn.core.mgmt.internal.LocalSubscriptionManager.subscribe(LocalSubscriptionManager.java:147)
> - locked <0x0007b5fee528> (a 
> org.apache.brooklyn.core.mgmt.internal.LocalSubscriptionManager)
> at 
> org.apache.brooklyn.core.mgmt.internal.AbstractSubscriptionManager.subscribe(AbstractSubscriptionManager.java:106)
> at 
> org.apache.brookly

Re: [PROPOSAL] Delete "classic-mode" (always use Karaf)

2017-10-10 Thread Alex Heneveld


Good idea -- classic mode still used for tests/debug and Main exists in 
tests bundle.  Nice if it can keep the same package then my IDE configs 
can all still work.


I'm on Eclipse so would be useful to hear whether Intelli-J devs are 
happy with this resolution.


Best
Alex


On 10/10/2017 12:29, Thomas Bouron wrote:

Hi Alex.

So you can still debug the code if you attach a debugger before running the
karaf shell. However, this does not support hot-deployment like you could
do from within an IDE.
So I think you're right, we need to keep the Main class. But since we want
to use it for debugging purposes, I would move it to the `test` folder rather
than deprecating it.

Best.

On Tue, 10 Oct 2017 at 12:24 Alex Heneveld 
wrote:


+1 for removing classic mode in the dist and in the docs

NOT for removing the Main class itself however, I don't think -- this is
my standard way to run Brooklyn from the IDE and I don't have a karaf
replacement that supports debug mode in the same way, and also note most
of our unit tests still run classic mode rather than karaf.

Best
Alex


On 09/10/2017 14:04, Mark McKenna wrote:

+1

*Mark McKenna*

*Twitter ::* @m4rkmckenna <https://twitter.com/m4rkmckenna>

*Github :: *m4rkmckenna <https://github.com/m4rkmckenna>

*PGP :: A7A9 24DE 638C 681A 8DEA FAD4 2B5D C759 B1EB 76A7
<https://pgp.mit.edu/pks/lookup?op=get&search=0x2B5DC759B1EB76A7>*

On 9 October 2017 at 13:09, Graeme Miller  wrote:


+1

On 9 October 2017 at 12:56, Thomas Bouron <

thomas.bou...@cloudsoftcorp.com

wrote:


+1

On Mon, 9 Oct 2017 at 12:39 Geoff Macartney <

geoff.macart...@cloudsoft.io>

wrote:


+1

On Mon, 9 Oct 2017 at 12:38 Duncan Godwin 
com>

wrote:


+1

On 9 October 2017 at 12:24, Richard Downer 

wrote:

I offer my opinion through the medium of GIFs:

https://media.giphy.com/media/vohOR29F78sGk/giphy.gif

Richard.


On 9 October 2017 at 12:13, Aled Sage  wrote:


Hi all,

I propose that we *delete* Brooklyn classic-mode from master now, in
preparation for the 1.0.0 release.

---

In 0.12.0, we switched the main distro to be the karaf-mode. We also built
the classic-mode distro - the intent being for users to have a usable
classic mode, rather than being forced immediately to switch to karaf
without advance warning.

However, we unfortunately did not deprecate classic-mode as clearly as we
should have (e.g. didn't explicitly say that it will be deleted in an
upcoming release, and didn't deprecate the `Main` class).

I think it's still ok to delete it for several reasons:

1. This is a "mode" of running Brooklyn, rather than underlying
   Brooklyn functionality.
2. The `Main` class [1] etc should not be considered part of our
   *public* API (even though we didn't explicitly tell people that it
   was internal).
3. As part of the 0.12.0 release, we added to the docs instructions for
   upgrading to Karaf [2].
4. Supporting classic to the same standard as Karaf will become
   increasingly painful as we do more and more with Bundles!
5. It's a major release, so if we are going to delete it then 1.0.0 is
   the perfect time!

Aled

[1] https://github.com/apache/brooklyn-server/blob/master/server-cli/src/main/java/org/apache/brooklyn/cli/Main.java

[2] http://brooklyn.apache.org/v/0.12.0/ops/upgrade.html#upgrade-from-apache-brooklyn-0110-and-below




--

Thomas Bouron • Senior Software Engineer @ Cloudsoft Corporation •
https://cloudsoft.io/
Github: https://github.com/tbouron
Twitter: https://twitter.com/eltibouron


--

Thomas Bouron • Senior Software Engineer @ Cloudsoft Corporation •
https://cloudsoft.io/
Github: https://github.com/tbouron
Twitter: https://twitter.com/eltibouron





Re: [PROPOSAL] Delete "classic-mode" (always use Karaf)

2017-10-10 Thread Alex Heneveld


+1 for removing classic mode in the dist and in the docs

NOT for removing the Main class itself however, I don't think -- this is 
my standard way to run Brooklyn from the IDE and I don't have a karaf 
replacement that supports debug mode in the same way, and also note most 
of our unit tests still run classic mode rather than karaf.


Best
Alex


On 09/10/2017 14:04, Mark McKenna wrote:

+1

*Mark McKenna*

*Twitter ::* @m4rkmckenna 

*Github :: *m4rkmckenna 

*PGP :: A7A9 24DE 638C 681A 8DEA FAD4 2B5D C759 B1EB 76A7
*

On 9 October 2017 at 13:09, Graeme Miller  wrote:


+1

On 9 October 2017 at 12:56, Thomas Bouron 
+1

On Mon, 9 Oct 2017 at 12:39 Geoff Macartney <

geoff.macart...@cloudsoft.io>

wrote:


+1

On Mon, 9 Oct 2017 at 12:38 Duncan Godwin 
com>

wrote:


+1

On 9 October 2017 at 12:24, Richard Downer 

wrote:

I offer my opinion through the medium of GIFs:

https://media.giphy.com/media/vohOR29F78sGk/giphy.gif

Richard.


On 9 October 2017 at 12:13, Aled Sage  wrote:


Hi all,

I propose that we *delete* Brooklyn classic-mode from master now, in
preparation for the 1.0.0 release.

---

In 0.12.0, we switched the main distro to be the karaf-mode. We also built
the classic-mode distro - the intent being for users to have a usable
classic mode, rather than being forced immediately to switch to karaf
without advance warning.

However, we unfortunately did not deprecate classic-mode as clearly as we
should have (e.g. didn't explicitly say that it will be deleted in an
upcoming release, and didn't deprecate the `Main` class).

I think it's still ok to delete it for several reasons:

1. This is a "mode" of running Brooklyn, rather than underlying
   Brooklyn functionality.
2. The `Main` class [1] etc should not be considered part of our
   *public* API (even though we didn't explicitly tell people that it
   was internal).
3. As part of the 0.12.0 release, we added to the docs instructions for
   upgrading to Karaf [2].
4. Supporting classic to the same standard as Karaf will become
   increasingly painful as we do more and more with Bundles!
5. It's a major release, so if we are going to delete it then 1.0.0 is
   the perfect time!

Aled

[1] https://github.com/apache/brooklyn-server/blob/master/server-cli/src/main/java/org/apache/brooklyn/cli/Main.java

[2] http://brooklyn.apache.org/v/0.12.0/ops/upgrade.html#upgrade-from-apache-brooklyn-0110-and-below




--

Thomas Bouron • Senior Software Engineer @ Cloudsoft Corporation •
https://cloudsoft.io/
Github: https://github.com/tbouron
Twitter: https://twitter.com/eltibouron





Re: [PROPOSAL] Split Brooklyn website and documentation

2017-10-10 Thread Alex Heneveld


Thomas-

Had a deeper look -- gitbook has moved things forward a lot. Sounds like 
it will let us throw away a lot of our home-grown docs-building and 
toc-building code and have good search.  Look forward to seeing how it 
shapes up with styling and guide-v-website integration.


Best
Alex


On 09/10/2017 09:54, Thomas Bouron wrote:

Thanks Mark.

Regarding maintenance, it will be as easy as the current version. Updating
docs means updating markdown files. Adding/moving pages requires to modify
the `SUMMARY.md` but that's it.
One really cool thing is that Gitbook is a node app: really simple to
install/run compared to our current solution which runs only on an old
version of ruby => no more pain of using different versions of ruby on your
environment.

In terms of feature gaps, Gitbook provides the same or more features than
Jekyll out the box:
- search! That is a big one, not available with Jekyll
- include of external files
- syntax highlighting
- plugins system
- custom theme

Best.

On Sat, 7 Oct 2017, 17:10 Mark McKenna,  wrote:


Thomas this looks really clean great work.

How much work do you think it will take to maintain vs our current
solution?
What do you see as being the current feature gaps?

M

On Fri, 6 Oct 2017 at 14:55 Thomas Bouron 
Hi Richard.

Of course, I pushed it to my fork on the branch `experiment/gitbook`[1]
Glad you like it :)

Best.

[1] https://github.com/tbouron/brooklyn-docs/tree/experiment/gitbook

On Fri, 6 Oct 2017 at 13:53 Andrea Turli  wrote:


+1 Thomas, didn't know Gitbook at all (that's why I suggested

readthedocs)

but looks pretty good!

Il 06/ott/2017 15:37, "Richard Downer"  ha

scritto:

Hi Thomas,

I withdraw my previous comments - I looked at ReadTheDocs last year and

was

pessimistic, but it seems that GitBook this year is a different story

:-)

This is worth pursuing IMO. What did you need to do to get this

working?

Did you have to do any work on the brooklyn-docs source - if so could

you

share a link to your repo?

Thanks
Richard.


On 6 October 2017 at 13:18, Thomas Bouron 
com

wrote:


Hi All.

A demo is worth a thousand words so here is a gitbook adaptation of our
current documentation[1] (and only documentation).
This took me only a couple of hours. There are still things to
fix/update/remove like unsupported liquid tags but for the most part, it
works like a charm.
Search is available from the search field on the top left and PDF[2],
epub[3] and mobi[4] versions are also available.
The build took only 10 sec + 10 more per offline version.

The table of contents mirrors exactly what we currently have, except that
I have limited it to only 2 sub-levels. It means that some pages are
missing but I think it demonstrates that our current menu organisation
could be vastly improved.

Couple of thoughts on Alex's points:

* for the examples, import source code that is actually used in tests (!!!)

Indeed, an overhaul does not solve it, nor our current framework. But both
can implement it.


* check links

Gitbook checks internal links at compile time and refuses to build if
something is wrong. AFAIK, there is nothing in the Gitbook world to check
the validity of external links like the Jekyll plugin does. There are
probably external tools that we can integrate in our build pipeline to
cover this. However, it seems that even if we have this tool, we don't use
it when pushing the website (as I get a lot of errors locally).
Realistically, we will always have broken links, things move around all the
time. Checking external links is a nice-to-have but far from being a
perfect solution. In any case, I don't see this point as important as you
do.


* think through user flow

The clear Gitbook menu exposes this pretty well IMO, and is better compared
to the current version, so that's a win.

Best.

[1] https://tbouron.github.io/brooklyn-docs/
[2] https://tbouron.github.io/brooklyn-docs/brooklyn.pdf
[3] https://tbouron.github.io/brooklyn-docs/brooklyn.epub
[4] https://tbouron.github.io/brooklyn-docs/brooklyn.mobi


On Thu, 5 Oct 2017 at 12:47 Richard Downer 

wrote:

Thank you for the research you have done Thomas. I've had similar thoughts
myself. The original goal of our web+docs was to integrate them in such a
way that we had a versioned user guide that integrated perfectly with the
main website. At the time, Markdown tools were relatively immature, with
Jekyll leading the pack (and being the fashionable choice), and very little
in the way of viable apps for generating books with structure and tables of
contents. We did the best we could with the tools we had, but they needed
significant extensions (via Jekyll plugins and build scripting). Those
plugins and scripts have turned into something fairly hairy - IMO we
shouldn't need to have to write this much code[1] to generate a static site
and manual. With hindsight, I would not have argued in favour of this
model. If I do write my book[2] I will most likely be

Re: [PROPOSAL] Split Brooklyn website and documentation

2017-10-05 Thread Alex Heneveld
I think we should at least consider "making good" the existing system
which, although it has some technical warts, I think does much of what we
need.  What are the issues it has?

I am dubious that any radical overhaul will give us a perfect solution.
Some of the issues are quite hard, eg menu order, nice PDF, and there's a
big chance we'd spend a lot more effort in one of the new shiny systems
bending it to our needs and end up with something just as jury-rigged.
Better the devil you know...

FWIW the biggest issues IMO are:

* for the examples, import source code that is actually used in tests (!!!)
* check links
* think through user flow

None of these would be addressed by an overhaul.  (And fwiw getting jekyll
to import source code nicely was painful, that pain may likely still be
there!)

Best
Alex



On 5 October 2017 at 11:23, Thomas Bouron 
wrote:

> Hi all.
>
> It's been a couple of weeks that I started to look at how to improve and
> simplify the Brooklyn website[1]. As I said on the Brooklyn 1.0 thread[2],
> I think we need to sort this out before releasing 1.0.
>
> I have looked for a framework / library to handle both the website and
> documentation the same way we do it right now. To determine what was the
> best fit, I based my analysis on the following criteria:
> - Able to take markdown files and generate HTML from them.
> - Keep the folder structure intact (currently, pages that seem in the same
> logical group - take pages in the download section[3] menu - jump into a
> different folder/category/section which is very confusing)
> - Be skinnable
> - Able to handle versions for documentation.
> - Able to generate PDF version of documentation.
> - Be as "stock" as possible to limit maintenance and pain during upgrade
> (our current website still uses Jekyll 2.x).
>
> 2 contenders clearly jumped out from this:
> - Jekyll[4]
> - Gitbook[5]
>
> 
> Jekyll
>
> With the version 3, Jekyll now has a concept of collections[6] which can
> generate pages from markdown files and keep the folder structure.
> The menu can be generated based on this folder structure (with depth
> limitation for example) in combination of some clever liquid tags and
> `include`. However, it will be hard to control the order of items appearing
> on the menu. Another easy solution would be maintain list of links for the
> menu to be generated.
> There are plugins to generate PDF[7], which happens during compile time.
> Finally, Jekyll is highly skinnable with built-in or custom themes.
>
> 
> Gitbook
>
> Gitbook, in its open source version, handles out of the box doc versioning,
> PDF generation at runtime (so it seems) and HTML page generation from
> markdown. The menu is a built-in feature, based on a simple markdown list of
> links[8]. This means we need to maintain it but there is a good chance we
> will have to do this with Jekyll as well. Finally, Gitbook is also easily
> skinnable[9].
>
> 
> Both frameworks offer mostly the same features. However, with Jekyll it is
> easier to build a website that looks like a "corporate" one whereas with
> Gitbook, you are "stuck" with the design principles it was created with,
> i.e. serving documentation only. But for this very purpose, it is extremely good and
> easy.
>
> Our website is the combination of both a "corporate website" (i.e. about,
> getting started, community, etc - few pages that describe the project) and
> a documentation.
>
> Which leads me to my proposal: separate the website from the documentation,
> at least in terms of how we build it. What I mean by this is:
> - Use Jekyll (or even nothing) for the website, except the documentation
> part. This will let us build a nice theme (based on Bootstrap 4 for
> example) without worrying about complicated plugins and custom code for the
> documentation.
> - Use Gitbook for the documentation alone, applying/adapting the theme we
> will create from the point above.
>
> Best.
>
> [1] https://brooklyn.apache.org/
> [2]
> https://lists.apache.org/thread.html/dae4468aa7ef77af9dc8aca24b8434
> e9782efbd50fa876618cccf980@%3Cdev.brooklyn.apache.org%3E
> [3] https://brooklyn.apache.org/download/index.html
> [4] https://jekyllrb.com/
> [5] https://github.com/GitbookIO/gitbook
> [6] https://jekyllrb.com/docs/collections/
> [7] http://abemedia.co.uk/jekyll-pdf/
> [8] https://toolchain.gitbook.com/pages.html
> [9] https://toolchain.gitbook.com/themes/
> --
>
> Thomas Bouron • Senior Software Engineer @ Cloudsoft Corporation •
> https://cloudsoft.io/
> Github: https://github.com/tbouron
> Twitter: https://twitter.com/eltibouron
>
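For reference, the Jekyll 3 collections feature mentioned in [6] can be sketched in `_config.yml` roughly as follows (the collection name is hypothetical):

```yaml
# _config.yml sketch: expose a "guide" collection so markdown files
# under _guide/ keep their folder structure in the generated site
collections:
  guide:
    output: true
    permalink: /:collection/:path/
```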


Re: Proposal: delete `--catalogReset`?

2017-09-27 Thread Alex Heneveld


+1

good point Thomas, if someone wants to reset they should point at a new 
persistence location,

because other persisted state will go haywire otherwise.

adding might be useful on upgrades but that's a separate topic and am 
thinking that should be
done via the default catalog.bom (have been noodling on this at [1] -- 
proposal to follow).


best
alex


[1] 
https://docs.google.com/document/d/1Lm47Kx-cXPLe8BO34-qrL3ZMPosuUHJILYVQUswEH6Y/edit?ts=59ca5a5e#heading=h.tr946lhtwsn2 



On 27/09/2017 14:54, Thomas Bouron wrote:

Hi Aled.

I think this is the right thing to do. As you said, this does not exist in
the karaf world and would even be tricky to implement now that we have moved
to bundle-ise all the things. Plus, if it can remove old and unused code, I'm
all in for it.

Best.

On Tue, 26 Sep 2017 at 11:13 Aled Sage  wrote:


p.s. And also *delete* `--catalogAdd` from classic (again it's not
supported in Karaf).

We should be encouraging people to use `br catalog add ...` after
Brooklyn is started, or updating the .bom file at their
`default.catalog.location`, rather than using this command line option.

Aled


On 26/09/2017 09:53, Aled Sage wrote:

Hi all,

TL;DR: I propose we *delete* support for `--catalogReset`, for the
next release. It's not supported in karaf mode, and it's dangerous
(could break rebind of entities)!

Note that I'm not suggesting deprecate. We've moved to karaf as the
default in 0.12 release, so effectively the functionality has already
disappeared (in the recommended way of running Brooklyn).

---

In classic mode, we have support for `--catalogReset`. This works in
combination with `--rebind ...` to tell Brooklyn that it should delete
the persisted catalog (resetting it to the default initial catalog),
while keeping the rest of the persisted state as-is.

We don't support this in Karaf mode, and no-one has missed it.

It is dangerous: if you have an entity instance then that will be
persisted along with a catalogItemId, but resetting the catalog might
cause that catalog item to disappear. This can cause serious problems
when we try to rebind that entity instance (because we've lost type
information).

It is better to explicitly manage the catalog via
additions/deprecations/removals.

This is yet another example of where we can (slightly) simplify our
startup/catalog code by removing unnecessary functionality.

---

Slightly longer term, we want to rethink how we load the catalog (from
"initial" and persisted state): rebind is confusing when upgrading
Brooklyn, because it keeps the persisted catalog and doesn't
automatically add any of the catalog items/versions for the new
Brooklyn version. One has to do that addition manually.

But that will be a topic for a new email thread!

Aled



--

Thomas Bouron • Senior Software Engineer @ Cloudsoft Corporation •
https://cloudsoft.io/
Github: https://github.com/tbouron
Twitter: https://twitter.com/eltibouron
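The recommended alternative above — maintaining the .bom file at the `default.catalog.location` rather than resetting persisted state — can be sketched like this (item name hypothetical):

```yaml
# Sketch of a default catalog .bom: items declared here are added at
# startup, instead of wiping the persisted catalog with --catalogReset
brooklyn.catalog:
  version: 1.0.0
  items:
    - id: my-server
      item:
        type: org.apache.brooklyn.entity.software.base.VanillaSoftwareProcess
```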





[jira] [Commented] (BROOKLYN-539) ClassCastExceptions on rebind (trying to persist catalog items)

2017-09-22 Thread Alex Heneveld (JIRA)

[ 
https://issues.apache.org/jira/browse/BROOKLYN-539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16176350#comment-16176350
 ] 

Alex Heneveld commented on BROOKLYN-539:


aha, it was following https://github.com/apache/brooklyn-server/pull/814 - 
fixed in https://github.com/apache/brooklyn-server/pull/843

> ClassCastExceptions on rebind (trying to persist catalog items)
> ---
>
> Key: BROOKLYN-539
> URL: https://issues.apache.org/jira/browse/BROOKLYN-539
> Project: Brooklyn
>  Issue Type: Bug
>Reporter: Aled Sage
>
> With bleeding edge 0.12.0-SNAPSHOT, I started Brooklyn (in Karaf mode), then 
> stopped it and restarted it again.
> On rebind, I see exceptions like that below many times in the info log. I 
> think it's logged once per item in the catalog.
> {noformat}
> 2017-09-20T11:42:36,406 WARN  122 o.a.b.c.m.r.PersistenceExceptionHandlerImpl 
> [FelixStartLevel] Problem persisting (ignoring): generate memento for 
> CATALOG_ITEM 
> org.apache.brooklyn.core.typereg.RegisteredTypes$CatalogItemFromRegisteredType
> (pr-128-top-level:1.0.0)
> java.lang.ClassCastException: 
> org.apache.brooklyn.core.typereg.RegisteredTypes$CatalogItemFromRegisteredType

[jira] [Commented] (BROOKLYN-539) ClassCastExceptions on rebind (trying to persist catalog items)

2017-09-22 Thread Alex Heneveld (JIRA)

[ 
https://issues.apache.org/jira/browse/BROOKLYN-539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16176335#comment-16176335
 ] 

Alex Heneveld commented on BROOKLYN-539:


Correct, there is no need to persist types that come from bundles.  Persistence 
of catalog is only needed for legacy items.

Curious why it is (now) trying to persist this (and others).

> ClassCastExceptions on rebind (trying to persist catalog items)
> ---
>
> Key: BROOKLYN-539
> URL: https://issues.apache.org/jira/browse/BROOKLYN-539
> Project: Brooklyn
>  Issue Type: Bug
>Reporter: Aled Sage
>
> With bleeding edge 0.12.0-SNAPSHOT, I started Brooklyn (in Karaf mode), then 
> stopped it and restarted it again.
> On rebind, I see exceptions like that below many times in the info log. I 
> think it's logged once per item in the catalog.
> {noformat}
> 2017-09-20T11:42:36,406 WARN  122 o.a.b.c.m.r.PersistenceExceptionHandlerImpl 
> [FelixStartLevel] Problem persisting (ignoring): generate memento for 
> CATALOG_ITEM 
> org.apache.brooklyn.core.typereg.RegisteredTypes$CatalogItemFromRegisteredType
> (pr-128-top-level:1.0.0)
> java.lang.ClassCastException: 
> org.apache.brooklyn.core.typereg.RegisteredTypes$CatalogItemFromRegisteredType
>  cannot be cast to org.apache.brooklyn.core.objs.BrooklynObjectInternal
> at 
> org.apache.brooklyn.core.mgmt.rebind.PeriodicDeltaChangeListener.persistNowInternal(PeriodicDeltaChangeListener.java:454)
>  [122:org.apache.brooklyn.core:0.12.0.SNAPSHOT]
> at 
> org.apache.brooklyn.core.mgmt.rebind.PeriodicDeltaChangeListener.persistNowSafely(PeriodicDeltaChangeListener.java:379)
>  [122:org.apache.brooklyn.core:0.12.0.SNAPSHOT]
> at 
> org.apache.brooklyn.core.mgmt.rebind.PeriodicDeltaChangeListener.persistNowSafely(PeriodicDeltaChangeListener.java:373)
>  [122:org.apache.brooklyn.core:0.12.0.SNAPSHOT]
> at 
> org.apache.brooklyn.core.mgmt.rebind.RebindManagerImpl.forcePersistNow(RebindManagerImpl.java:476)
>  [122:org.apache.brooklyn.core:0.12.0.SNAPSHOT]
> at 
> org.apache.brooklyn.launcher.common.BasicLauncher.persist(BasicLauncher.java:434)
>  [126:org.apache.brooklyn.launcher-common:0.12.0.SNAPSHOT]
> at 
> org.apache.brooklyn.launcher.common.BasicLauncher.startPartTwo(BasicLauncher.java:426)
>  [126:org.apache.brooklyn.launcher-common:0.12.0.SNAPSHOT]
> at 
> org.apache.brooklyn.launcher.osgi.OsgiLauncherImpl.startOsgi(OsgiLauncherImpl.java:116)
>  [333:org.apache.brooklyn.karaf-init:0.12.0.SNAPSHOT]
> at Proxy5b14e94c_cf63_4a4e_a22b_a2fdda9c9134.startOsgi(Unknown 
> Source) [?:?]
> at 
> org.apache.brooklyn.launcher.osgi.start.OsgiLauncherCompleter.init(OsgiLauncherCompleter.java:36)
>  [335:org.apache.brooklyn.karaf-start:0.12.0.SNAPSHOT]
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:?]
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:?]
> at java.lang.reflect.Method.invoke(Method.java:498) ~[?:?]
> at 
> org.apache.aries.blueprint.utils.ReflectionUtils.invoke(ReflectionUtils.java:299)
>  [15:org.apache.aries.blueprint.core:1.8.2]
> at 
> org.apache.aries.blueprint.container.BeanRecipe.invoke(BeanRecipe.java:980) 
> [15:org.apache.aries.blueprint.core:1.8.2]
> at 
> org.apache.aries.blueprint.container.BeanRecipe.runBeanProcInit(BeanRecipe.java:736)
>  [15:org.apache.aries.blueprint.core:1.8.2]
> at 
> org.apache.aries.blueprint.container.BeanRecipe.internalCreate2(BeanRecipe.java:848)
>  [15:org.apache.aries.blueprint.core:1.8.2]
> at 
> org.apache.aries.blueprint.container.BeanRecipe.internalCreate(BeanRecipe.java:811)
>  [15:org.apache.aries.blueprint.core:1.8.2]
> at 
> org.apache.aries.blueprint.di.AbstractRecipe$1.call(AbstractRecipe.java:79) 
> [15:org.apache.aries.blueprint.core:1.8.2]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:?]
> at 
> org.apache.aries.blueprint.di.AbstractRecipe.create(AbstractRecipe.java:88) 
> [15:org.apache.aries.blueprint.core:1.8.2]
> at 
> org.apache.aries.blueprint.container.BlueprintRepository.createInstances(BlueprintRepository.java:255)
>  [15:org.apache.aries.blueprint.core:1.8.2]
> at 
> org.apache.aries.blueprint.container.BlueprintRepository.createAll(BlueprintRep

Re: REST API: bundles + types + subtypes endpoints

2017-09-22 Thread Alex Heneveld
s comment about the type registry being sufficiently different
from catalog, I'm not sure about that. Personally I just stretched what
I think of as the catalog, and thus it is a catalog v2 which supports
more things.

I think if "catalog" disappeared, being replaced by "types", it would be
more confusing than having a new v2 catalog that does more.

Aled


On 20/09/2017 17:37, Geoff Macartney wrote:

My two cents -

0.  Keep /bundles API PR separate from /type and do /bundles now
1. While I like '/catalog' for being more meaningful than '/type', I think
what the type registry does is sufficiently different from the classic
catalog that it needs a new name in the API.
1.1. /type would be my preference to /types
2. I'd deprecate catalog when we introduce /v2 (see later)
3. I'd have said do /types now or very soon but put it under a (clearly
beta) /v2; however I take Aled's point about clutter-free.  Not sure how
best otherwise to indicate its beta status; possibly /beta/type?  And then
migrate to /v2/type at the later time?
4. The only real use of /subtype is to have an easy way to say
'listApplications', 'listEntities', 'listPolicies', 'listEnrichers',
'listLocation'.  I don't think this merits a separate endpoint, and in
particular I would put SubTypeApi.list into /type as
/type/?super=whatever.  As for 'listPolicies' and friends, I would have
that as an extra query parameter (not 'super') on /type.  I don't quite
know what to call it! What name should we use to distinguish between
things that are Applications, Entities, Locations, Policies, and Enrichers?

G



On Wed, 20 Sep 2017 at 16:31 Robert Moss 

wrote:

1) like the `/catalog` name and agree with Aled about its power to
communicate the idea
2 & 3) run `/v1` and `/v2` in parallel, with the latter being experimental
and iterative until a 1.0 release
4) no preference

Robert

On 20 September 2017 at 15:45, Alex Heneveld <
alex.henev...@cloudsoftcorp.com> wrote:


Aled, Thomas, Mark - thanks for the good feedback.

All - There are some good suggestions which I agree deserve discussion.
Specific points.

(1) Should /types and /bundles be top-level or under a /catalog prefix ?
Thomas suggested the latter which also fits.  My reason for doing top-level
is simply that most REST APIs these days seem to have more top-level
resources.  /catalog is not necessary and in some ways it's cleaner having
separate endpoints.  On the other hand the /catalog prefix is nice to
orient consumers, as Aled says:  `bundles` and `types` on their own aren't
as self-evident.  And it means we could continue to have `POST /catalog` as
the main way to install.

(2) Should we deprecate `/catalog` ?  Thomas suggests maybe not yet.  I
think we should as equivalent functionality and more is available at the
new endpoints.  (Though if we go with (1), obviously we wouldn't deprecate
the whole endpoint, just the old calls.)  Also worth noting:  I don't
think we should remove the deprecated endpoint anytime soon however.

(3) Should we switch to /v2 ?  Aled has suggested we do as the generic
`types` support is a significant shift from the old more restrictive
`catalog`.  I don't think we should yet, however:  I'd prefer to make that
move when we are ready to remove all old endpoints so v2 is clutter-free.
To the extent v2 can look like a subset of v1 we make life easier for users
and ourselves.  Also there is significant work to add a /v2 endpoint and I
don't think there is benefit yet to justify this work.

(4) Should `/subtypes` be an endpoint peer of `/types` ?  It has been noted
the same functionality as `/subtypes/entity` could be done with
`/types?super=entity` (with appropriate query parameter).  My reasoning for
making it a separate endpoint is that this is an activity I expect people
will want to do a lot, avoiding the query parameter makes for a slightly
nicer API.  But it is repetition in the code.

(There are some other minor issues but I think I agree with the points made
there.)

My answers:

(1) slight preference for the `/catalog` prefix
(2) strong pref to deprecate old calls - they are redundant and multiple is
confusing
(3) strong pref to stay with `/v1` for now
(4) slight pref for explicit `[/catalog]/subtypes` endpoint

Best
Alex


On 20 September 2017 at 12:38, Aled Sage  wrote:


Thanks Alex.

As per my comment in the PR at https://github.com/apache/brooklyn-server/pull/810#issuecomment-330824643.

---
TL;DR:
Given this is a big API change and given I'm suggesting a `/v2` REST api
then I wanted to raise it on the list as well.

I propose we split this PR into two. The `/bundles` part we can merge
pretty quickly. However, the `/types` and `/subtypes` is too controversial
in my opinion - it probably deserves a `/v2/` of the REST a

Re: [VOTE] Release Apache Brooklyn 0.12.0 [rc2]

2017-09-21 Thread Alex Heneveld


Good catches Thomas -- suggest we cancel and do a new RC.

The build-from-source problem I suspect is simply down to network setup 
/ firewall on your box.  But it would be good to force use of localhost 
for those tests or mark them integration so that it doesn't bite others 
or ideally be smart about detecting interfaces.  Probably due to [1] -- 
if it checks that random ports are locally accessible on the NIC it 
tries to use for localhost then it should prevent binding to that IP 
(unless it was the case that you changed IP addresses during the build 
which seems unlikely).


But lack of license (we know how this happened) and empty catalog (did 
someone do a too-broad search-and-replace?) are showstoppers.


Best
Alex


[1]  https://github.com/apache/brooklyn-server/pull/768


On 21/09/2017 11:40, Thomas Bouron wrote:

-1

Quick summary of the tests I've done:
[✓] Download links work.

[✓] Checksums and PGP signatures are valid.
[✓] Expanded source archive matches contents of RC tag.
[x] Expanded source archive builds and passes tests.
[✓] LICENSE is present and correct.
[✓] NOTICE is present and correct, including copyright date.
[✓] No compiled archives bundled in source archive.

Checks left to do manually with the help of above instructions:
[-] All files have license headers where appropriate.
[-] All dependencies have compatible licenses.

Remaining items from checklist:
[✓] Binaries work.
[✓] I follow this project’s commits list.


Ran the verification script, which tries to build the sources but got the
following error:

```
===
 Surefire test
 Tests run: 59, Failures: 3, Skips: 3
===
Tests run: 59, Failures: 3, Errors: 0, Skipped: 3, Time elapsed: 15.322 sec
<<< FAILURE! - in TestSuite
verifyHttp(org.apache.brooklyn.launcher.BrooklynWebServerTest)  Time
elapsed: 0.22 sec  <<< FAILURE!
org.apache.brooklyn.util.exceptions.PropagatedRuntimeException:
at
org.apache.brooklyn.launcher.BrooklynWebServerTest.verifyHttp(BrooklynWebServerTest.java:99)
Caused by: org.apache.http.NoHttpResponseException: 192.168.101.104:8081
failed to respond
at
org.apache.brooklyn.launcher.BrooklynWebServerTest.verifyHttp(BrooklynWebServerTest.java:99)

verifySecurityInitialized(org.apache.brooklyn.launcher.BrooklynWebServerTest)
  Time elapsed: 0.156 sec  <<< FAILURE!
org.apache.brooklyn.util.exceptions.PropagatedRuntimeException:
at
org.apache.brooklyn.launcher.BrooklynWebServerTest.verifySecurityInitialized(BrooklynWebServerTest.java:111)
Caused by: org.apache.http.NoHttpResponseException: 192.168.101.104:8081
failed to respond
at
org.apache.brooklyn.launcher.BrooklynWebServerTest.verifySecurityInitialized(BrooklynWebServerTest.java:111)

verifySecurityInitializedExplicitUser(org.apache.brooklyn.launcher.BrooklynWebServerTest)
  Time elapsed: 0.166 sec  <<< FAILURE!
org.apache.brooklyn.util.exceptions.PropagatedRuntimeException:
at
org.apache.brooklyn.launcher.BrooklynWebServerTest.verifySecurityInitializedExplicitUser(BrooklynWebServerTest.java:131)
Caused by: org.apache.http.NoHttpResponseException: 192.168.101.104:8081
failed to respond
at
org.apache.brooklyn.launcher.BrooklynWebServerTest.verifySecurityInitializedExplicitUser(BrooklynWebServerTest.java:131)

2017-09-21 10:51:31,132 INFO  Brooklyn shutdown: stopping entities
[Application[9cxo585b], BasicApplicationImpl{id=hmg9nnlah0},
Application[1x28b8p2], Application[7c5vs0rx]]

Results :

Failed tests:
   BrooklynWebServerTest.verifyHttp:99 » PropagatedRuntime
   BrooklynWebServerTest.verifySecurityInitialized:111 » PropagatedRuntime
   BrooklynWebServerTest.verifySecurityInitializedExplicitUser:131 »
PropagatedRuntime

Tests run: 59, Failures: 3, Errors: 0, Skipped: 3
```


Also, the RPM and DEB packages don't have LICENSE and NOTICE files. This one
is on me; I pushed a PR [1] to fix it.


Finally, tried to run the bin distribution (karaf). Brooklyn starts but the
catalog is empty. In the startup log, I can see the following exceptions
which looks like a regression. A quick workaround would be to wrap the text
in quotes (and there is already a PR for it [2]) but I think there might be
a deeper issue:

```
2017-09-21 10:53:09,164 | WARN  | nager-k0hvRLoI-0 | OsgiArchiveInstaller
   | 123 - org.apache.brooklyn.core - 0.12.0 | Error adding Brooklyn
items from bundle brooklyn-default-karaf-catalog:0.12.0, uninstalling,
restoring any old bundle and items, then re-throwing error: Error
installing catalog items: ParserException: while parsing a block mapping
  in 'reader', line 241, column 9:
 type: org.apache.brooklyn.policy ...
 ^
expected <block end>, but found Scalar
  in 'reader', line 242, column 28:
 name: [DEPRECATED] Rolling Mean in Time Window
^
2017-09-21 10:53:09,173 | WARN  | nager-k0hvRLoI-0 | CatalogInitialization
| 1

Re: REST API: bundles + types + subtypes endpoints

2017-09-20 Thread Alex Heneveld
Aled, Thomas, Mark - thanks for the good feedback.

All - There are some good suggestions which I agree deserve discussion.
Specific points.

(1) Should /types and /bundles be top-level or under a /catalog prefix ?
Thomas suggested the latter which also fits.  My reason for doing top-level
is simply that most REST APIs these days seem to have more top-level
resources.  /catalog is not necessary and in some ways it's cleaner having
separate endpoints.  On the other hand the /catalog prefix is nice to
orient consumers, as Aled says:  `bundles` and `types` on their own aren't
as self-evident.  And it means we could continue to have `POST /catalog` as
the main way to install.

(2) Should we deprecate `/catalog` ?  Thomas suggests maybe not yet.  I
think we should as equivalent functionality and more is available at the
new endpoints.  (Though if we go with (1), obviously we wouldn't deprecate
the whole endpoint, just the old calls.).  Also worth noting:  I don't
think we should remove the deprecated endpoint anytime soon however.

(3) Should we switch to /v2 ?  Aled has suggested we do as the generic
`types` support is a significant shift from the old more restrictive
`catalog`.  I don't think we should yet, however:  I'd prefer to make that
move when we are ready to remove all old endpoints so v2 is clutter-free.
To the extent v2 can look like a subset of v1 we make life easier for users
and ourselves.  Also there is significant work to add a /v2 endpoint and I
don't think there is benefit yet to justify this work.

(4) Should `/subtypes` be an endpoint peer of `/types` ?  It has been noted
the same functionality as `/subtypes/entity` could be done with
`/types?super=entity` (with appropriate query parameter).  My reasoning for
making it a separate endpoint is that this is an activity I expect people
will want to do a lot, avoiding the query parameter makes for a slightly
nicer API.  But it is repetition in the code.

(There are some other minor issues but I think I agree with the points made
there.)

My answers:

(1) slight preference for the `/catalog` prefix
(2) strong pref to deprecate old calls - they are redundant and multiple is
confusing
(3) strong pref to stay with `/v1` for now
(4) slight pref for explicit `[/catalog]/subtypes` endpoint

Best
Alex


On 20 September 2017 at 12:38, Aled Sage  wrote:

> Thanks Alex.
>
> As per my comment in the PR at https://github.com/apache/brooklyn-server/pull/810#issuecomment-330824643.
>
> ---
> TL;DR:
> Given this is a big API change and given I'm suggesting a `/v2` REST api
> then I wanted to raise it on the list as well.
>
> I propose we split this PR into two. The `/bundles` part we can merge
> pretty quickly. However, the `/types` and `/subtypes` is too controversial
> in my opinion - it probably deserves a `/v2/` of the REST api.
>
> We can continue detailed discussion in the PR.
>
> ---
> I don't want to lose the word "catalog" in the REST api - it's so good at
> getting across the meaning for users! The alternative `/type` is just not
> as good, in my opinion.
>
> The multiple endpoints of `/types` and `/subtypes` is confusing. I'd model
> the latter as just a filter on `/type`. It would be better achieved with an
> additional query parameter rather than a separate endpoint.
>
> If designing a `/v2` REST api, we could use `/catalog` instead of `/type`.
> However, it will likely take a while to get to a stable and good `/v2` api.
> There are other cleanup/improvements we should probably do to the REST api
> if we're releasing a new version of it (e.g. exclude the deprecated stuff,
> get rid of `/locations` but figure out if we really need to support
> locations from brooklyn.properties, find out from the community about other
> inconsistencies or hard-to-understand parts of the api).
>
> The meaning of `GET /subtypes/application` looks completely different from
> `GET /catalog/applications`. The latter retrieves the catalog items marked
> as `template`, but the new api returns everything that implements
> `Application`. Perhaps this is an opportunity to get rid of the "entity"
> versus "template" distinction (at least in the REST api). The original
> meaning/intent of "template" has entirely changed / been misused! I believe
> it was originally intended as a partially-complete YAML blueprint that
> someone would retrieve via the REST api, and then modify. They'd then POST
> their .yaml file to deploy their app. It has now been used as the list of
> apps to include in a "quick launch" view. If designing a new `/v2` api, I'd
> explicitly support a "quick launch" list and would get rid of the
> "template" versus "application" versus "entity" distinction in the REST a

Using a default external provider in examples ?

2017-09-08 Thread Alex Heneveld


Hi All-

I've had a couple of exchanges where people are put off by passwords shown 
in plain text in our demos.  It's all well and good explaining it's for a 
demo, but if we say "of course you should use an external provider" then 
our examples should use one too.


In order not to require additional setup of this by the user, I've added 
a default external provider for review [1].  You can use this as follows:


$brooklyn:external("brooklyn-demo-sample", "hidden-brooklyn-password")
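
For example, such a reference could appear in a blueprint roughly as follows (a sketch only: the entity type and the `example.password` config key are illustrative assumptions; the provider and key names are the demo values above):

```yaml
# Sketch: 'example.password' is a hypothetical config key for illustration;
# the external provider name and key are the demo values from this proposal.
services:
- type: org.apache.brooklyn.entity.software.base.EmptySoftwareProcess
  brooklyn.config:
    example.password:
      $brooklyn:external("brooklyn-demo-sample", "hidden-brooklyn-password")
```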

I've added documentation explaining this at [2].  I'll update our 
examples in the examples repo and the docs shortly and open new PRs.


If anyone thinks this isn't the way to go, just say.  If no one speaks 
up then could someone who really likes this review and merge the various 
PRs?  (Give 24h for people to comment and me to finish the rest of the 
items.)


Best
Alex

PRs are:

[1] https://github.com/apache/brooklyn-server/pull/812
[2] https://github.com/apache/brooklyn-docs/pull/209


REST API: bundles + types + subtypes endpoints

2017-09-07 Thread Alex Heneveld


Hi team-

As mentioned earlier, I've been working on adding bundle support to the 
REST API, so we can add/remove/query bundles.  And related to this, and 
the type registry, is the ability to add arbitrary types but until now 
there was no way to query those, so there are endpoints for types/ and 
subtypes/.  This is in #810 [1].


In brief you have:

*GET /bundles* - list bundle summaries
*POST /bundles* - add ZIP or BOM YAML
*GET /bundles/com.acme/1.0* - get details on a specific bundle
*DELETE /bundles/com.acme/1.0* - remove a bundle

*GET /types* - list all types (optionally filter by regex or fragment)
*GET /types/acme-entity/1.0* - get details on a specific type
*GET /subtypes**/entity* - list all entities (optionally filter by regex 
or fragment); same for many other categories


A full list including arguments is shown in the PR.
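
As a rough illustration (not taken from the PR), the endpoints above could be driven from a shell as sketched below; the base URL is an assumption, and the script only prints the request lines rather than contacting a live server:

```shell
#!/bin/sh
# Assumed local Brooklyn API root; adjust host, port, and credentials as needed.
BASE="http://localhost:8081/v1"

# Print the proposed request lines; against a live server each could be
# issued via e.g.:  curl -u admin:password -X GET "$BASE/bundles"
list_proposed_requests() {
  printf '%s\n' \
    "GET    $BASE/bundles" \
    "POST   $BASE/bundles" \
    "GET    $BASE/bundles/com.acme/1.0" \
    "DELETE $BASE/bundles/com.acme/1.0" \
    "GET    $BASE/types" \
    "GET    $BASE/types/acme-entity/1.0" \
    "GET    $BASE/subtypes/entity"
}

list_proposed_requests
```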

Another good thing about this besides bundle-centric management and 
deletion in particular is that it entirely replaces the "catalog/" 
endpoint allowing us to deprecate it.  I expect we'll keep it around for 
a while as clients (the UI, CLI) still use it but we now have equivalent 
methods that are better aligned to how we do things with bundles.  
They're also quite a bit faster so if you've gotten bored waiting for 
catalog to load this should help (when clients are updated).  And one 
final benefit: we can now register and explore other types, e.g. custom 
task types, predicates, and more.


One thing to note is that we have fewer and simpler REST objects using 
freeform maps where we return extended type info -- eg config on 
entities, policies, etc, sensors and effectors on entities.  I'd like to 
use the same pattern for returning data on adjunct instances so that we 
can support policies, enrichers, and feeds in a consistent way (removing 
duplication there).  This should tie in with Graeme's highlights work.


Follow-on work will see the CLI updated to allow `br bundle delete 
com.acme:1.0` and similar.  No immediate plans to put lots of bundle 
info into the UI as bundle devs are probably comfortable with the CLI 
but if anyone would like that speak up.  I have updated UI to _show_ the 
containing bundle ([2], also needs review!).


Best
Alex

[1]  https://github.com/apache/brooklyn-server/pull/810
[2]  https://github.com/apache/brooklyn-ui/pull/48



On 07/09/2017 14:58, Alex Heneveld wrote:


+1 to this, with Thomas's suggestion of a list instead of map and 
Geoff's suggestion of doing it on adjuncts.  would be nice to have an 
adjunct api which lets clients treat policies, enrichers, and feeds 
the same.


i can see this being useful for policies to record selected highlights 
of their activity so a consumer doesn't have to trawl through all 
activity to see what a policy has done lately.  last value is a good 
compromise between having some info without trying to remember 
everything.  sensors on adjuncts could be another way -- maybe we'd 
move to that in future -- but for now that seems overly complex.


--a


On 07/09/2017 14:02, Thomas Bouron wrote:

Hi Graeme.

Sounds very useful to me. Would be great to have this info properly
formatted in the CLI and UI.

As for the structure, I would suggest avoiding spaces in map keys as a best
practice, so having either:

[{
...
   "highlights": {
 "lastConfirmation": {
   "name": "Last Confirmation",
   "description": "sdjnfvsdjvfdjsng",
   "time": 12345689,
   "taskId": 1345
 },
 ...
   }
}]

or maybe even better, something like this:
[{
...
   "highlights": [
 {
   "name": "Last Confirmation",
   "description": "sdjnfvsdjvfdjsng",
   "time": 12345689,
   "taskId": 1345
 },
 ...
   ]
}]

In terms of implementation, it would be useful to extend it to other types
of Brooklyn Object such as enrichers, etc. Although it looks like Geoff
has already made the same comment/suggestion.

Cheers


On Thu, 7 Sep 2017 at 13:30 Graeme Miller  wrote:


Hello,

I'd like to make a change to the REST API for policies. As this is a REST
API change I figured it would be best to flag it on the mailing list first
in case anyone has any objections.

It would be useful when consuming this API to be able to find out more
information about the policy. Specifically, it would be useful to find out
things like last action performed, last policy violation, last
confirmation, what the triggers are etc.

To do so, I plan to amend the REST API to include 'highlights' for a
policy. Highlights are a map of a name to a tuple of information including
description, time and task.

Essentially this endpoint:
"GET /applications/{application}/entities/{entity}/policies"
Will now include this:
[{
...
   "highlights": {
 "

Re: REST API: Policy endpoint addition

2017-09-07 Thread Alex Heneveld


+1 to this, with Thomas's suggestion of a list instead of map and 
Geoff's suggestion of doing it on adjuncts.  would be nice to have an 
adjunct api which lets clients treat policies, enrichers, and feeds the 
same.


i can see this being useful for policies to record selected highlights 
of their activity so a consumer doesn't have to trawl through all 
activity to see what a policy has done lately.  last value is a good 
compromise between having some info without trying to remember 
everything.  sensors on adjuncts could be another way -- maybe we'd move 
to that in future -- but for now that seems overly complex.


--a


On 07/09/2017 14:02, Thomas Bouron wrote:

Hi Graeme.

Sounds very useful to me. Would be great to have this info properly
formatted in the CLI and UI.

As for the structure, I would suggest avoiding spaces in map keys as a best
practice, so having either:

[{
...
   "highlights": {
 "lastConfirmation": {
   "name": "Last Confirmation",
   "description": "sdjnfvsdjvfdjsng",
   "time": 12345689,
   "taskId": 1345
 },
 ...
   }
}]

or maybe even better, something like this:
[{
...
   "highlights": [
 {
   "name": "Last Confirmation",
   "description": "sdjnfvsdjvfdjsng",
   "time": 12345689,
   "taskId": 1345
 },
 ...
   ]
}]

In terms of implementation, it would be useful to extend it to other types
of Brooklyn Object such as enrichers, etc. Although it looks like Geoff
has already made the same comment/suggestion.

Cheers


On Thu, 7 Sep 2017 at 13:30 Graeme Miller  wrote:


Hello,

I'd like to make a change to the REST API for policies. As this is a REST
API change I figured it would be best to flag it on the mailing list first
in case anyone has any objections.

It would be useful when consuming this API to be able to find out more
information about the policy. Specifically, it would be useful to find out
things like last action performed, last policy violation, last
confirmation, what the triggers are etc.

To do so, I plan to amend the REST API to include 'highlights' for a
policy. Highlights are a map of a name to a tuple of information including
description, time and task.

Essentially this endpoint:
"GET /applications/{application}/entities/{entity}/policies"
Will now include this:
[{
...
   "highlights": {
 "Last Confirmation": {
   "description": "sdjnfvsdjvfdjsng",
   "time": 12345689,
   "taskId": 1345
 },
 ...
   }
}]

Please shout if you have any problems with this, otherwise I'll submit a PR
shortly with this change.

Regards,
Graeme Miller





Re: DynamicCluster entity and cluster.first.first sensor

2017-09-07 Thread Alex Heneveld


Thanks Thomas, and Richard.  More details:

This removed the boolean `cluster.first` sensor set on group members -- code 
that was already marked for removal (and which was ambiguous and buggy, 
e.g. if an entity is in two groups, and which caused deadlocks when 
using dynamic group membership).


It does have a simple workaround as Thomas notes -- access the 
`cluster.first.entity` on the group (though note it's not necessarily 
the `parent`).  The PR also now correctly updates this and clears this 
as the group changes or becomes empty, respectively.  Alternatively you 
can set the `firstMember` specially to advertise its first-ness.  There 
are also some policies for electing a primary node in a group (one of 
the main use cases of this I believe) which I've been working on and 
will add these soon.


As the removal is a breaking change I should have flagged it on ML and 
in release notes:  mea maxima culpa!  ML absence has been remedied here 
-- and it is going in to release notes imminently.


Best
Alex


On 07/09/2017 12:17, Thomas Bouron wrote:

Hi Brooklyner.

Just a quick heads-up for those who are using `DynamicCluster` and more
specifically, `cluster.first.entity` sensor on cluster members.

There was a change introduced by this PR[1] that removes the sensor from
cluster members. This breaking change was merged to fix a potential
deadlock of the Brooklyn server. The sensor is still present at the cluster
level though, so DSL users can still get it from members with the following:

$brooklyn:parent().attributeWhenReady("cluster.first.entity")

This affects only the bleeding-edge Brooklyn, *not the latest stable
release*.

Cheers.

[1] https://github.com/apache/brooklyn-server/pull/777
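
The workaround above can be sketched in blueprint form as follows (illustrative only: the member entity type and the `example.first.entity` key are assumptions, not from the PR):

```yaml
# Each member resolves the first entity via its parent cluster's sensor.
services:
- type: org.apache.brooklyn.entity.group.DynamicCluster
  brooklyn.config:
    cluster.initial.size: 2
    dynamiccluster.memberspec:
      $brooklyn:entitySpec:
        type: org.apache.brooklyn.entity.software.base.EmptySoftwareProcess
        brooklyn.config:
          example.first.entity:
            $brooklyn:parent().attributeWhenReady("cluster.first.entity")
```

Here the DSL is evaluated on each member, whose parent is the cluster, matching the DSL shown in the message above.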




Bundles and Type Registry discussion - today 3pm UK

2017-08-17 Thread Alex Heneveld


Hi All-

There's been a fair bit of discussion and PRs lately about the use of 
bundles and the new type registry as compared with old approaches, and 
the next steps with bundles.


I thought it would be useful to give a recap of this and field 
questions.  I'll do this later this afternoon, 3pm UK at the coordinates 
below.  If you can't make it or have any questions just let me know.


Best
Alex



Alex's Bridge

GTM 322-069-629

https://global.gotomeeting.com/join/322069629

United Kingdom: +44 20 3713 5010
United States: +1 (646) 749-3117

Canada: +1 (647) 497-9371
France: +33 170 950 585
Germany: +49 692 5736 7303
Ireland: +353 19 030 050
Italy: +39 0 693 38 75 50
Switzerland: +41 435 0167 65
United Kingdom: +44 20 3713 5010

First GoToMeeting? Try a test session: 
https://care.citrixonline.com/g2m/getready


END


Re: Proposal: Add appId optional paramater to deploy api

2017-07-27 Thread Alex Heneveld


Thanks Aled -- the inheritance of config from catalog items convinces me.

Can we mark it @Beta / internal in case we need to change the approach?  
With that I'd be happy with your proposal.


Best
Alex


On 27/07/2017 07:23, Aled Sage wrote:

Hi Alex,

I explored setting a config key in my PR. The downsides of that 
compared to setting the app-id:


1. Code is more complicated - in particular, there are very low-level 
changes to check that the uid is not already in use.
2. Config keys (and tags) are mutable - we can't enforce the proposed 
semantics of the deploymentUid.
3. You can always rely on looking up an entity's id, but config is 
more complicated and can be "resolved" - leads to more complicated 
code (e.g. my PR currently has test-failures for hot-standby).
4. Appropriate search not supported in the REST api (would require 
additional changes - i.e. more work).
5. Server-side searching would either be inefficient or require a new 
index to be exposed (e.g. low-level changes in `EntityManager`).

6. Adds another concept - a new kind of id.

---
By "camp.id", do you mean "camp.plan.id"? (there is also 
"camp.template.id", but I can't find a "camp.id".)


I don't think we should mix this up with that existing concept, which 
is set and used in a completely different way:
* One can already set a camp id on the top-level app, and reference it 
with `$brooklyn:entity(...)` - we don't want to break that.
* The camp id does not have to be unique - changing that would break 
backwards compatibility.
* Existing catalog items can set the camp id, which are used by the 
app instance - e.g. see example in the appendix.


---
What are your main concerns about allowing the id to be set (with the 
regex validation that Mark suggested)? The reason that the id is used by 
other internal parts of Brooklyn seems too vague to me. For example, 
is it:
1. Security (e.g. if we didn't validate, then it would risk allowing 
code injection).
2. Makes future changes harder (but the sorts of changes I envisage I 
can also see how we could handle).

3. Point of principle to keep internal things internal wherever possible.
4. Risks breaking places that we use the id in strange ways (e.g. if 
an entity uses the id to generate a dns name, then it implies 
case-insensitivity for the uniqueness check).


That last concern is real - I recall that Andrew Kennedy changed our 
ids to be lower case for that reason!


However, I don't think we should have given such "undocumented 
guarantees" on the internal id format. But given some entities rely on 
it, we can be stricter with the validation regex of user-supplied ids.
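A stricter validation of user-supplied ids, along the lines of the `[a-zA-Z0-9-]{8,}` pattern Mark suggests elsewhere in the thread, might be sketched as follows. The class, method, and the exact pattern here are illustrative assumptions, not Brooklyn's actual API; the lowercase-only restriction reflects the case-insensitivity concern (e.g. ids used in DNS names) mentioned above:

```java
import java.util.regex.Pattern;

public class AppIdValidator {
    // Illustrative rule: at least 8 characters, lowercase alphanumerics and
    // hyphens only, so ids stay safe for case-insensitive uses (e.g. DNS
    // names) and cannot carry injection payloads. Not Brooklyn's actual rule.
    private static final Pattern USER_ID_PATTERN = Pattern.compile("[a-z0-9-]{8,}");

    public static void checkUserSuppliedId(String id) {
        if (id == null || !USER_ID_PATTERN.matcher(id).matches()) {
            throw new IllegalArgumentException("Invalid user-supplied app id: " + id);
        }
    }
}
```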


Aled

=== Appendix ===
Example of setting a `camp.plan.id` - we don't want to subtly change 
the semantics of this.


Add this to the catalog:
```
brooklyn.catalog:
  id: app-with-camp-plan-id
  version: 0.0.0-SNAPSHOT
  itemType: template
  item:
services:
  - type: org.apache.brooklyn.entity.stock.BasicApplication
id: myAppCampPlanId
```

Deploy this app:
```
services:
  - type: app-with-camp-plan-id
```

The resulting app instance will have config `camp.plan.id` with value 
`myAppCampPlanId`.
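For illustration, this is the kind of existing `$brooklyn:entity(...)` reference that relies on those semantics and must not break (the second service and the sensor/config names below are made up for the sketch):

```yaml
services:
  - type: org.apache.brooklyn.entity.stock.BasicApplication
    id: myAppCampPlanId
  - type: org.apache.brooklyn.entity.stock.BasicEntity
    brooklyn.config:
      # looks up the sibling entity by its camp plan id
      peer.ready: $brooklyn:entity("myAppCampPlanId").attributeWhenReady("service.isUp")
```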




On 27/07/2017 00:40, Alex Heneveld wrote:
The core `id` is a low-level part of `BrooklynObject` used by all 
adjuncts

and entities and persistence.  It feels wrong and risky making this
something that is user- or client- settable.  I gave one example but 
there

are others.

What's wrong with a new config key or reusing `camp.id`?  We already use
the latter one if there is a user-specified ID on an entity so it feels
natural to use it, give it special meaning for apps (blocking repeat
deployments), and add support for searching for it.  (Apologies if 
this was

explained and I missed it.)

--A


On 26 July 2017 at 22:42, Aled Sage  wrote:


Hi Alex,

Other things get a lot simpler for us if we can just supply the app-id
(e.g. subsequently searching for the app, or ensuring that a 
duplicate app
is not deployed). It also means we're not adding another concept 
that we

need to explain to users.

To me, that simplicity is very compelling.

If we want to support conformance to external id requirements, we could
have a config key for a predicate or regex that the supplied id must
satisfy. A given user could thus enforce their id standards in the 
future.

I'd favour deferring that until there is a need to support it (e.g. we
could add it at the same time as adding support for a pluggable id
generator, if we ever do that).

Aled



On 26/07/2017 15:42, Alex Heneveld wrote:

2 feels compelling to me. I want us to have the ability easily to 
change
the ID generation eg to conform with external reqs such as 
timestamp or

source.

Go with deploymentUid or similar? Or camp.id?

Best
Alex

On 26 Jul 2017 15:00, "Aled Sage"  wrote:

Hi Mark,

We removed from EntitySpec the 

Re: Proposal: Add appId optional parameter to deploy api

2017-07-26 Thread Alex Heneveld
The core `id` is a low-level part of `BrooklynObject` used by all adjuncts
and entities and persistence.  It feels wrong and risky making this
something that is user- or client- settable.  I gave one example but there
are others.

What's wrong with a new config key or reusing `camp.id`?  We already use
the latter one if there is a user-specified ID on an entity so it feels
natural to use it, give it special meaning for apps (blocking repeat
deployments), and add support for searching for it.  (Apologies if this was
explained and I missed it.)

--A


On 26 July 2017 at 22:42, Aled Sage  wrote:

> Hi Alex,
>
> Other things get a lot simpler for us if we can just supply the app-id
> (e.g. subsequently searching for the app, or ensuring that a duplicate app
> is not deployed). It also means we're not adding another concept that we
> need to explain to users.
>
> To me, that simplicity is very compelling.
>
> If we want to support conformance to external id requirements, we could
> have a config key for a predicate or regex that the supplied id must
> satisfy. A given user could thus enforce their id standards in the future.
> I'd favour deferring that until there is a need to support it (e.g. we
> could add it at the same time as adding support for a pluggable id
> generator, if we ever do that).
>
> Aled
>
>
>
> On 26/07/2017 15:42, Alex Heneveld wrote:
>
>> 2 feels compelling to me. I want us to have the ability easily to change
>> the ID generation eg to conform with external reqs such as timestamp or
>> source.
>>
>> Go with deploymentUid or similar? Or camp.id?
>>
>> Best
>> Alex
>>
>> On 26 Jul 2017 15:00, "Aled Sage"  wrote:
>>
>> Hi Mark,
>>
>> We removed from EntitySpec the ability to set the id for two reasons:
>>
>> 1. there was no use-case at that time; simplifying the code by deleting it
>> was therefore sensible - we'd deprecated it for several releases.
>>
>> 2. allowing all uids to be generated/managed internally is simpler to
>> reason about, and gives greatest freedom for future refactoring.
>>
>>
>> I don't see (2) as a compelling reason.  And we now have a use-case, so
>> that changes (1).
>>
>> I'd still be tempted to treat this as an internal part of the api, rather
>> than it going on the public `EntitySpec`, but need to look at that more to
>> see how feasible it is.
>>
>> Aled
>>
>>
>>
>> On 26/07/2017 13:36, Mark Mc Kenna wrote:
>>
>> Thanks Geoff for the good summary
>>>
>>> IMO the path of least resistance that provides the best / most
>>> predictable
>>> behaviour is allowing clients to optionally set the app id.
>>>
>>> Off the top of my head I can't see this causing any issue, as long as we
>>> sanitise the supplied id with something like [a-zA-Z0-9-]{8,}.
>>>
>>> Was there a particular reason this was removed in the past?
>>>
>>> Cheers
>>> M
>>>
>>> On 26 July 2017 at 13:07, Duncan Grant 
>>> wrote:
>>>
>>> Thanks all for the advice.
>>>
>>>> I think Geoff's email summarises the issue nicely.  I like Alex's
>>>> suggestion but agree that limiting ourselves to deploy in the first is
>>>> probably significantly easier.
>>>>
>>>> Personally I don't feel comfortable with using a tag for idempotency
>>>> and I
>>>> do like Aled's suggestion of using PUT with a path with /{id} but would
>>>> be
>>>> happy with either as I think they both fit our use case.
>>>>
>>>> thanks
>>>>
>>>> Duncan
>>>>
>>>> On Wed, 26 Jul 2017 at 11:00 Geoff Macartney <
>>>> geoff.macart...@cloudsoft.io
>>>> wrote:
>>>>
>>>> If I understand correctly this isn't quite what Duncan/Aled are asking
>>>> for
>>>>
>>>> though?
>>>>> Which is not a "request id" but an idempotency token for an
>>>>> application.
>>>>>
>>>>> It
>>>>
>>>> would
>>>>> need to work long term, not just cached for a short time, and it would
>>>>>
>>>>> need
>>>>
>>>> to work
>>>>> across e.g. HA failover, so it wouldn't be just a matter of caching it
>>>>> on
>>>>> the server.
>>>>>
>>>>> For what it's worth, I'd have said thi

Re: Proposal: Add appId optional parameter to deploy api

2017-07-26 Thread Alex Heneveld
2 feels compelling to me. I want us to have the ability easily to change
the ID generation eg to conform with external reqs such as timestamp or
source.

Go with deploymentUid or similar? Or camp.id?

Best
Alex

On 26 Jul 2017 15:00, "Aled Sage"  wrote:

Hi Mark,

We removed from EntitySpec the ability to set the id for two reasons:

1. there was no use-case at that time; simplifying the code by deleting it
was therefore sensible - we'd deprecated it for several releases.

2. allowing all uids to be generated/managed internally is simpler to
reason about, and gives greatest freedom for future refactoring.


I don't see (2) as a compelling reason.  And we now have a use-case, so
that changes (1).

I'd still be tempted to treat this as an internal part of the api, rather
than it going on the public `EntitySpec`, but need to look at that more to
see how feasible it is.

Aled



On 26/07/2017 13:36, Mark Mc Kenna wrote:

> Thanks Geoff for the good summary
>
> IMO the path of least resistance that provides the best / most predictable
> behaviour is allowing clients to optionally set the app id.
>
> Off the top of my head I can't see this causing any issue, as long as we
> sanitise the supplied id with something like [a-zA-Z0-9-]{8,}.
>
> Was there a particular reason this was removed in the past?
>
> Cheers
> M
>
> On 26 July 2017 at 13:07, Duncan Grant 
> wrote:
>
> Thanks all for the advice.
>>
>> I think Geoff's email summarises the issue nicely.  I like Alex's
>> suggestion but agree that limiting ourselves to deploy in the first is
>> probably significantly easier.
>>
>> Personally I don't feel comfortable with using a tag for idempotency and I
>> do like Aled's suggestion of using PUT with a path with /{id} but would be
>> happy with either as I think they both fit our use case.
>>
>> thanks
>>
>> Duncan
>>
>> On Wed, 26 Jul 2017 at 11:00 Geoff Macartney <
>> geoff.macart...@cloudsoft.io
>> wrote:
>>
>> If I understand correctly this isn't quite what Duncan/Aled are asking
>>>
>> for
>>
>>> though?
>>> Which is not a "request id" but an idempotency token for an application.
>>>
>> It
>>
>>> would
>>> need to work long term, not just cached for a short time, and it would
>>>
>> need
>>
>>> to work
>>> across e.g. HA failover, so it wouldn't be just a matter of caching it on
>>> the server.
>>>
>>> For what it's worth, I'd have said this is a good use for tags but maybe
>>> for ease of reading
>>> it's better to have it as a config key as Aled does. As to how to supply
>>> the value
>>> I agree it should just be on the "deploy" operation.
>>>
>>> On Tue, 25 Jul 2017 at 19:56 Alex Heneveld <
>>> alex.henev...@cloudsoftcorp.com>
>>> wrote:
>>>
>>> Aled-
>>>>
>>>> Should this be applicable to all POST/DELETE calls?  Imagine an
>>>> `X-caller-request-uid` and a filter which caches them server side for a
>>>> short period of time, blocking duplicates.
>>>>
>>>> Solves an overlapping set of problems.  Your way deals with a
>>>> "deploy-if-not-present" much later in time.
>>>>
>>>> --A
>>>>
>>>> On 25 July 2017 at 17:44, Aled Sage  wrote:
>>>>
>>>> Hi all,
>>>>>
>>>>> I've been exploring adding support for `&deploymentUid=...` - please
>>>>>
>>>> see
>>>
>>>> my work-in-progress PR [1].
>>>>>
>>>>> Do people think that is a better or worse direction than supporting
>>>>> `&appId=...` (which would likely be simpler code, but exposes the
>>>>>
>>>> Brooklyn
>>>>
>>>>> internals more).
>>>>>
>>>>> For `&appId=...`, we could either revert [2] (so we could set the id
>>>>>
>>>> in
>>
>>> the EntitySpec), or we could inject it via a different (i.e. add a
>>>>>
>>>> new)
>>
>>> internal way so that it isn't exposed on our Java api classes.
>>>>>
>>>>> Thoughts?
>>>>>
>>>>> Aled
>>>>>
>>>>> [1] https://github.com/apache/brooklyn-server/pull/778
>>>>>
>>>>> [2] https://github.com/apache/brooklyn-server/pull/687/commits/5

Re: Proposal: Add appId optional parameter to deploy api

2017-07-25 Thread Alex Heneveld
Aled-

Should this be applicable to all POST/DELETE calls?  Imagine an
`X-caller-request-uid` and a filter which caches them server side for a
short period of time, blocking duplicates.

Solves an overlapping set of problems.  Your way deals with a
"deploy-if-not-present" much later in time.
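A server-side cache of the kind described could be quite small. This is a hypothetical sketch, not an actual Brooklyn filter; the class and method names are made up:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Remembers recently seen request uids for a time window, so repeats can be rejected. */
public class RequestUidCache {
    private final Map<String, Long> seen = new ConcurrentHashMap<>();
    private final long windowMillis;

    public RequestUidCache(long windowMillis) {
        this.windowMillis = windowMillis;
    }

    /** Returns true if this uid is new (request proceeds), false if it is a recent duplicate. */
    public boolean markIfNew(String requestUid) {
        long now = System.currentTimeMillis();
        // Lazily evict entries older than the window.
        seen.entrySet().removeIf(e -> now - e.getValue() > windowMillis);
        return seen.putIfAbsent(requestUid, now) == null;
    }
}
```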

--A

On 25 July 2017 at 17:44, Aled Sage  wrote:

> Hi all,
>
> I've been exploring adding support for `&deploymentUid=...` - please see
> my work-in-progress PR [1].
>
> Do people think that is a better or worse direction than supporting
> `&appId=...` (which would likely be simpler code, but exposes the Brooklyn
> internals more).
>
> For `&appId=...`, we could either revert [2] (so we could set the id in
> the EntitySpec), or we could inject it via a different (i.e. add a new)
> internal way so that it isn't exposed on our Java api classes.
>
> Thoughts?
>
> Aled
>
> [1] https://github.com/apache/brooklyn-server/pull/778
>
> [2] https://github.com/apache/brooklyn-server/pull/687/commits/5
> 5eb11fa91e9091447d56bb45116ccc3dc6009df
>
>
>
> On 07/07/2017 18:28, Aled Sage wrote:
>
>> Hi,
>>
>> Taking a step back to justify why this kind of thing is really
>> important...
>>
>> This has come up because we want to call Brooklyn in a robust way from
>> another system, and to handle a whole load of failure scenarios (e.g. that
>> Brooklyn is temporarily down, connection fails at some point during the
>> communication, the client in the other system goes down and another
>> instance tries to pick up where it left off, etc).
>>
>> Those kinds of things become much easier if you can make certain
>> assumptions such as an API call being idempotent, or it guaranteeing to
>> fail with a given error if that exact request has already been accepted.
>>
>> ---
>>
>> I much prefer the semantics of the call failing (with a meaningful error)
>> if the app already exists - that will make retry a lot easier to do safely.
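The safe retry that fail-if-exists semantics enables might look like this sketch. The `deploy` callback and `AlreadyExistsException` are hypothetical stand-ins for the proposed API, not real Brooklyn classes:

```java
import java.util.concurrent.Callable;

public class IdempotentDeploy {
    /** Hypothetical marker for the proposed "app already exists" error. */
    public static class AlreadyExistsException extends RuntimeException {}

    /**
     * Retries a deploy that uses a caller-chosen id. A duplicate-id failure on
     * a retry means an earlier attempt actually succeeded, so it is treated as
     * success rather than an error.
     */
    public static String deployWithRetry(Callable<String> deploy, String appId, int maxAttempts) throws Exception {
        Exception last = null;
        for (int i = 0; i < maxAttempts; i++) {
            try {
                return deploy.call();
            } catch (AlreadyExistsException e) {
                return appId;  // a previous attempt was accepted by the server
            } catch (Exception e) {
                last = e;  // transient failure (e.g. connection reset); retry
            }
        }
        if (last != null) throw last;
        throw new IllegalStateException("maxAttempts must be positive");
    }
}
```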
>>
>> As for config keys on the app, in Duncan's use-case he'd much prefer to
>> not mess with the user's YAML (e.g. to inject another config key before
>> passing it to Brooklyn). It would be simpler in his case to supply in the
>> url `?appId=...` or `?deploymentId=...`.
>>
>> For using `deploymentId`, we could, but that feels like more work. We'd
>> want to create a lookup of applications indexed by `deploymentId` as well as
>> `appId`, and to fail if it already exists. Also, what if someone also
>> defines a config key called `deploymentId` - would that be forbidden? Or
>> would we name-space the config key with `org.apache.brooklyn.deploymentId`?
>> Even with those concerns, I could be persuaded of the
>> `org.apache.brooklyn.deploymentId` approach.
>>
>> For "application's ID is not meant to be user-supplied", that has
>> historically been the case but why should we stick to that? What matters is
>> that the appId is definitely unique. That will be checked when creating the
>> application entity. We could also include a regex check on the supplied id
>> to make sure it looks reasonable (in case someone is already relying on app
>> ids in weird ways like for filename generation, which would lead to a risk
>> of script injection).
>>
>> Aled
>>
>>
>> On 07/07/2017 17:38, Svetoslav Neykov wrote:
>>
>>> Hi Duncan,
>>>
>>> I've solved this problem before by adding a caller generated config key
>>> on the app (now it's also possible to tag them), then iterating over the
>>> deployed apps, looking for the key.
>>>
>>> An alternative which I'd like to mention is creating an async deploy
>>> operation which immediately returns an ID generated by Brooklyn. There's
>>> still a window where the client connection could fail though, however small
>>> it is, so it doesn't fully solve your use case.
>>>
>>> Your use case sounds reasonable so agree a solution to it would be nice
>>> to have.
>>>
>>> Svet.
>>>
>>>
>>> On 7.07.2017, at 18:33, Duncan Grant 
 wrote:

 I'd like to propose adding an appId parameter to the deploy endpoint.
 This
 would be optional and would presumably reject any attempt to start a
 second
 app with the same id.  If set the appId would obviously be used in
 place of
 the generated id.

 This proposal would be of use in scripting deployments in a distributed
 environment where deployment is not the first step in a number of
 asynchronous jobs and would give us a way of "connecting" those jobs up.
 Hopefully it will help a lot in making things more robust for end-users.
 Currently, if the client’s connection to the Brooklyn server fails while
 waiting for a response, it’s impossible to tell if the app was
 provisioned
 (e.g. how can you tell the difference between a likely-looking app, and
 another one deployed with an identical blueprint?). This would make it
 safe
 to either retry the deploy request, or to query for the app with the
 expected id to see if it exists.

 Initially I'm hoping to use this in a downst
