Re: Karaf 4.0.5 cannot resolve some OSGi services

2016-05-14 Thread Arnaud Deprez
Hi Christian,

I didn't know there was a new feature resolver between 4.0.4 and 4.0.5.
I couldn't find any issue related to this change here:
https://issues.apache.org/jira/browse/KARAF-4497?jql=project%20%3D%20KARAF%20AND%20fixVersion%20%3D%204.0.5

I'll try your suggested solution and keep you informed.
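For reference, the change Christian suggests amounts to one line in the features service configuration; the file path and comments below are my additions, and the property value is quoted from this thread (check it against your Karaf version's documentation):

```
# etc/org.apache.karaf.features.cfg
# Turn off checking of osgi.service requirements during feature resolution
serviceRequirements=disabled
```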

Regards,

On Thu, May 12, 2016 at 9:40 PM Christian Schneider 
wrote:

> I think the issue is because of the new feature resolver. When you
> reference a service, this results in a service requirement in your bundle
> manifest. The Karaf 4 feature resolver checks these requirements and
> only allows installation if it finds a bundle that has a matching
> capability in its manifest. Apparently Camel does not provide the
> capability, so the resolve fails.
>
> You can disable this behaviour in the config file
> org.apache.karaf.features.cfg:
> serviceRequirements=disabled
>
> Christian
>
> 2016-05-12 19:35 GMT+02:00 Arnaud Deprez :
>
>> Hi again,
>>
>> Sorry to insist but according to me it's a critical issue in karaf 4.0.5.
>>
>> So I have few questions:
>> Anyone else meets this issue ?
>> Am I the only one who is using aries blueprint on karaf 4.0.5 ?
>> Would you recommend me to not use blueprint anymore ? If so which
>> solution would you recommend ?
>> I know that there are some discussions about the more statical way of
>> working of blueprint vs more dynamic with some CDI implementations (scr,
>> what else ?)
>>
>> Regards,
>>
>> On Mon, May 9, 2016 at 4:40 PM Arnaud Deprez 
>> wrote:
>>
>>> Yep, I'm aware of this :-)
>>>
>>> On Mon, May 9, 2016 at 4:37 PM Morgan Hautman 
>>> wrote:
>>>
>>>> There was indeed a breaking change in Aries blueprint core for this
>>>> release during the vote but it was normally fixed by JB and he made a new
>>>> release vote including the fix... Let's wait till he checks his mails. :)
>>>>
>>>>
>>>> On 2016-05-09 15:29, Arnaud Deprez wrote:
>>>>
>>>> Nope, as you could see in my previous mail, I can see the service when
>>>> I execute the command: service:list org.apache.camel.Component.
>>>> Moreover, when I install the exact same features in karaf 4.0.4, it
>>>> just works.
>>>>
>>>> It also happens with other OSGi services, so the problem isn't related
>>>> to hazelcast.
>>>> According to me, it smells a breaking change in aries blueprint but I'm
>>>> not sure right now.
>>>>
>>>> Rgds,
>>>>
>>>> On Mon, May 9, 2016 at 3:12 PM Morgan Hautman 
>>>> wrote:
>>>>
>>>>> Hi Arnaud,
>>>>>
>>>>> Didn't you forgot to install camel-hazelcast feature?
>>>>>
>>>>> Regards,
>>>>> Morgan
>>>>>
>>>>> 2016-05-09 14:53 GMT+02:00 Arnaud Deprez :
>>>>>
>>>>>> Hi folks,
>>>>>>
>>>>>> Just tried the new karaf release and I met this issue when I install
>>>>>> my bundles that are using blueprint as DI engine:
>>>>>> Error executing command: Unable to resolve root: missing requirement
>>>>>> [root] osgi.identity; osgi.identity=enterprise-contract;
>>>>>> type=karaf.feature; version="[1.4.0.SNAPSHOT,1.4.0.SNAPSHOT]";
>>>>>> filter:="(&(osgi.identity=enterprise-contract)(type=karaf.feature)(version>=1.4.0.SNAPSHOT)(version<=1.4.0.SNAPSHOT))"
>>>>>> [caused by: Unable to resolve enterprise-contract/1.4.0.SNAPSHOT: missing
>>>>>> requirement [enterprise-contract/1.4.0.SNAPSHOT] osgi.identity;
>>>>>> osgi.identity=enterprise-customer; type=karaf.feature [caused by: Unable 
>>>>>> to
>>>>>> resolve enterprise-customer/1.4.0.SNAPSHOT: missing requirement
>>>>>> [enterprise-customer/1.4.0.SNAPSHOT] osgi.identity;
>>>>>> osgi.identity=be.lampiris.api.customer-rest; type=osgi.bundle;
>>>>>> version="[1.4.0.SNAPSHOT,1.4.0.SNAPSHOT]"; resolution:=mandatory [caused
>>>>>> by: Unable to resolve be.lampiris.api.customer-rest/1.4.0.SNAPSHOT: 
>>>>>> missing
>>>>>> requirement [be.lampiris.api.customer-rest/1.4.0.SNAPSHOT] osgi.service;
>>>>>> effective:=active;
>>>>>> filter:="(objectClass=be.lampiris.api.customer.CustomerQueryService)"
>>>>&

Re: Karaf 4.0.5 cannot resolve some OSGi services

2016-05-12 Thread Arnaud Deprez
Hi again,

Sorry to insist, but in my opinion this is a critical issue in Karaf 4.0.5.

So I have a few questions:
Does anyone else meet this issue?
Am I the only one using Aries Blueprint on Karaf 4.0.5?
Would you recommend no longer using Blueprint? If so, which solution
would you recommend?
I know that there are some discussions about the more static way
Blueprint works vs. the more dynamic approach of some DI implementations
(SCR, what else?)

Regards,

On Mon, May 9, 2016 at 4:40 PM Arnaud Deprez  wrote:

> Yep, I'm aware of this :-)
>
> On Mon, May 9, 2016 at 4:37 PM Morgan Hautman 
> wrote:
>
>> There was indeed a breaking change in Aries blueprint core for this
>> release during the vote but it was normally fixed by JB and he made a new
>> release vote including the fix... Let's wait till he checks his mails. :)
>>
>>
>> On 2016-05-09 15:29, Arnaud Deprez wrote:
>>
>> Nope, as you could see in my previous mail, I can see the service when I
>> execute the command: service:list org.apache.camel.Component.
>> Moreover, when I install the exact same features in karaf 4.0.4, it just
>> works.
>>
>> It also happens with other OSGi services, so the problem isn't related to
>> hazelcast.
>> According to me, it smells a breaking change in aries blueprint but I'm
>> not sure right now.
>>
>> Rgds,
>>
>> On Mon, May 9, 2016 at 3:12 PM Morgan Hautman 
>> wrote:
>>
>>> Hi Arnaud,
>>>
>>> Didn't you forgot to install camel-hazelcast feature?
>>>
>>> Regards,
>>> Morgan
>>>
>>> 2016-05-09 14:53 GMT+02:00 Arnaud Deprez :
>>>
>>>> Hi folks,
>>>>
>>>> Just tried the new karaf release and I met this issue when I install my
>>>> bundles that are using blueprint as DI engine:
>>>> Error executing command: Unable to resolve root: missing requirement
>>>> [root] osgi.identity; osgi.identity=enterprise-contract;
>>>> type=karaf.feature; version="[1.4.0.SNAPSHOT,1.4.0.SNAPSHOT]";
>>>> filter:="(&(osgi.identity=enterprise-contract)(type=karaf.feature)(version>=1.4.0.SNAPSHOT)(version<=1.4.0.SNAPSHOT))"
>>>> [caused by: Unable to resolve enterprise-contract/1.4.0.SNAPSHOT: missing
>>>> requirement [enterprise-contract/1.4.0.SNAPSHOT] osgi.identity;
>>>> osgi.identity=enterprise-customer; type=karaf.feature [caused by: Unable to
>>>> resolve enterprise-customer/1.4.0.SNAPSHOT: missing requirement
>>>> [enterprise-customer/1.4.0.SNAPSHOT] osgi.identity;
>>>> osgi.identity=be.lampiris.api.customer-rest; type=osgi.bundle;
>>>> version="[1.4.0.SNAPSHOT,1.4.0.SNAPSHOT]"; resolution:=mandatory [caused
>>>> by: Unable to resolve be.lampiris.api.customer-rest/1.4.0.SNAPSHOT: missing
>>>> requirement [be.lampiris.api.customer-rest/1.4.0.SNAPSHOT] osgi.service;
>>>> effective:=active;
>>>> filter:="(objectClass=be.lampiris.api.customer.CustomerQueryService)"
>>>> [caused by: Unable to resolve be.lampiris.api.customer-impl/1.4.0.SNAPSHOT:
>>>> missing requirement [be.lampiris.api.customer-impl/1.4.0.SNAPSHOT]
>>>> osgi.service; effective:=active;
>>>> filter:="(&(objectClass=org.apache.camel.Component)(type=hazelcast))"
>>>>
>>>> However I can see my service with the following command:
>>>> karaf@root(feature)> service:list org.apache.camel.Component
>>>> [org.apache.camel.Component]
>>>> ----
>>>>  osgi.service.blueprint.compname = hazelcastComponent
>>>>  service.bundleid = 290
>>>>  service.id = 292
>>>>  service.scope = bundle
>>>>  type = hazelcast
>>>> Provided by :
>>>>  Bundle 290
>>>>
>>>> My blueprint configuration is :
>>>> >>> filter="(type=hazelcast)"/>
>>>>
>>>> My features works in 4.0.4 so I don't know what is broken here.
>>>> Any help is welcome.
>>>>
>>>> Regards,
>>>> --
>>>> Arnaud Deprez
>>>> Software Engineer
>>>> Phone: +32 497 23 30 44
>>>> Linked'In: https://www.linkedin.com/in/deprezarnaud
>>>> Github: https://github.com/arnaud-deprez
>>>>
>>>
>>> --
>> Arnaud Deprez
>> Software Engineer
>> Phone: +32 497 23 30 44
>> Linked'In: https://www.linkedin.com/in/deprezarnaud
>> Github: https://github.com/arnaud-deprez
>>
>>
>> --
> Arnaud Deprez
> Software Engineer
> Phone: +32 497 23 30 44
> Linked'In: https://www.linkedin.com/in/deprezarnaud
> Github: https://github.com/arnaud-deprez
>
-- 
Arnaud Deprez
Software Engineer
Phone: +32 497 23 30 44
Linked'In: https://www.linkedin.com/in/deprezarnaud
Github: https://github.com/arnaud-deprez


Re: Karaf 4.0.5 cannot resolve some OSGi services

2016-05-09 Thread Arnaud Deprez
Yep, I'm aware of this :-)

On Mon, May 9, 2016 at 4:37 PM Morgan Hautman 
wrote:

> There was indeed a breaking change in Aries Blueprint core for this
> release during the vote, but it should have been fixed by JB, who made a
> new release vote including the fix... Let's wait till he checks his mails. :)
>
>
> On 2016-05-09 15:29, Arnaud Deprez wrote:
>
> Nope, as you could see in my previous mail, I can see the service when I
> execute the command: service:list org.apache.camel.Component.
> Moreover, when I install the exact same features in karaf 4.0.4, it just
> works.
>
> It also happens with other OSGi services, so the problem isn't related to
> hazelcast.
> According to me, it smells a breaking change in aries blueprint but I'm
> not sure right now.
>
> Rgds,
>
> On Mon, May 9, 2016 at 3:12 PM Morgan Hautman 
> wrote:
>
>> Hi Arnaud,
>>
>> Didn't you forgot to install camel-hazelcast feature?
>>
>> Regards,
>> Morgan
>>
>> 2016-05-09 14:53 GMT+02:00 Arnaud Deprez :
>>
>>> Hi folks,
>>>
>>> Just tried the new karaf release and I met this issue when I install my
>>> bundles that are using blueprint as DI engine:
>>> Error executing command: Unable to resolve root: missing requirement
>>> [root] osgi.identity; osgi.identity=enterprise-contract;
>>> type=karaf.feature; version="[1.4.0.SNAPSHOT,1.4.0.SNAPSHOT]";
>>> filter:="(&(osgi.identity=enterprise-contract)(type=karaf.feature)(version>=1.4.0.SNAPSHOT)(version<=1.4.0.SNAPSHOT))"
>>> [caused by: Unable to resolve enterprise-contract/1.4.0.SNAPSHOT: missing
>>> requirement [enterprise-contract/1.4.0.SNAPSHOT] osgi.identity;
>>> osgi.identity=enterprise-customer; type=karaf.feature [caused by: Unable to
>>> resolve enterprise-customer/1.4.0.SNAPSHOT: missing requirement
>>> [enterprise-customer/1.4.0.SNAPSHOT] osgi.identity;
>>> osgi.identity=be.lampiris.api.customer-rest; type=osgi.bundle;
>>> version="[1.4.0.SNAPSHOT,1.4.0.SNAPSHOT]"; resolution:=mandatory [caused
>>> by: Unable to resolve be.lampiris.api.customer-rest/1.4.0.SNAPSHOT: missing
>>> requirement [be.lampiris.api.customer-rest/1.4.0.SNAPSHOT] osgi.service;
>>> effective:=active;
>>> filter:="(objectClass=be.lampiris.api.customer.CustomerQueryService)"
>>> [caused by: Unable to resolve be.lampiris.api.customer-impl/1.4.0.SNAPSHOT:
>>> missing requirement [be.lampiris.api.customer-impl/1.4.0.SNAPSHOT]
>>> osgi.service; effective:=active;
>>> filter:="(&(objectClass=org.apache.camel.Component)(type=hazelcast))"
>>>
>>> However I can see my service with the following command:
>>> karaf@root(feature)> service:list org.apache.camel.Component
>>> [org.apache.camel.Component]
>>> 
>>>  osgi.service.blueprint.compname = hazelcastComponent
>>>  service.bundleid = 290
>>>  service.id = 292
>>>  service.scope = bundle
>>>  type = hazelcast
>>> Provided by :
>>>  Bundle 290
>>>
>>> My blueprint configuration is :
>>> >> filter="(type=hazelcast)"/>
>>>
>>> My features works in 4.0.4 so I don't know what is broken here.
>>> Any help is welcome.
>>>
>>> Regards,
>>> --
>>> Arnaud Deprez
>>> Software Engineer
>>> Phone: +32 497 23 30 44
>>> Linked'In: https://www.linkedin.com/in/deprezarnaud
>>> Github: https://github.com/arnaud-deprez
>>>
>>
>> --
> Arnaud Deprez
> Software Engineer
> Phone: +32 497 23 30 44
> Linked'In: https://www.linkedin.com/in/deprezarnaud
> Github: https://github.com/arnaud-deprez
>
>
> --
Arnaud Deprez
Software Engineer
Phone: +32 497 23 30 44
Linked'In: https://www.linkedin.com/in/deprezarnaud
Github: https://github.com/arnaud-deprez


Re: Karaf 4.0.5 cannot resolve some OSGi services

2016-05-09 Thread Arnaud Deprez
Nope, as you can see in my previous mail, I can see the service when I
execute the command: service:list org.apache.camel.Component.
Moreover, when I install the exact same features in Karaf 4.0.4, it just
works.

It also happens with other OSGi services, so the problem isn't related to
Hazelcast.
To me, it smells like a breaking change in Aries Blueprint, but I'm not
sure right now.

Rgds,

On Mon, May 9, 2016 at 3:12 PM Morgan Hautman 
wrote:

> Hi Arnaud,
>
> Didn't you forget to install the camel-hazelcast feature?
>
> Regards,
> Morgan
>
> 2016-05-09 14:53 GMT+02:00 Arnaud Deprez :
>
>> Hi folks,
>>
>> Just tried the new karaf release and I met this issue when I install my
>> bundles that are using blueprint as DI engine:
>> Error executing command: Unable to resolve root: missing requirement
>> [root] osgi.identity; osgi.identity=enterprise-contract;
>> type=karaf.feature; version="[1.4.0.SNAPSHOT,1.4.0.SNAPSHOT]";
>> filter:="(&(osgi.identity=enterprise-contract)(type=karaf.feature)(version>=1.4.0.SNAPSHOT)(version<=1.4.0.SNAPSHOT))"
>> [caused by: Unable to resolve enterprise-contract/1.4.0.SNAPSHOT: missing
>> requirement [enterprise-contract/1.4.0.SNAPSHOT] osgi.identity;
>> osgi.identity=enterprise-customer; type=karaf.feature [caused by: Unable to
>> resolve enterprise-customer/1.4.0.SNAPSHOT: missing requirement
>> [enterprise-customer/1.4.0.SNAPSHOT] osgi.identity;
>> osgi.identity=be.lampiris.api.customer-rest; type=osgi.bundle;
>> version="[1.4.0.SNAPSHOT,1.4.0.SNAPSHOT]"; resolution:=mandatory [caused
>> by: Unable to resolve be.lampiris.api.customer-rest/1.4.0.SNAPSHOT: missing
>> requirement [be.lampiris.api.customer-rest/1.4.0.SNAPSHOT] osgi.service;
>> effective:=active;
>> filter:="(objectClass=be.lampiris.api.customer.CustomerQueryService)"
>> [caused by: Unable to resolve be.lampiris.api.customer-impl/1.4.0.SNAPSHOT:
>> missing requirement [be.lampiris.api.customer-impl/1.4.0.SNAPSHOT]
>> osgi.service; effective:=active;
>> filter:="(&(objectClass=org.apache.camel.Component)(type=hazelcast))"
>>
>> However I can see my service with the following command:
>> karaf@root(feature)> service:list org.apache.camel.Component
>> [org.apache.camel.Component]
>> 
>>  osgi.service.blueprint.compname = hazelcastComponent
>>  service.bundleid = 290
>>  service.id = 292
>>  service.scope = bundle
>>  type = hazelcast
>> Provided by :
>>  Bundle 290
>>
>> My blueprint configuration is :
>> > filter="(type=hazelcast)"/>
>>
>> My features works in 4.0.4 so I don't know what is broken here.
>> Any help is welcome.
>>
>> Regards,
>> --
>> Arnaud Deprez
>> Software Engineer
>> Phone: +32 497 23 30 44
>> Linked'In: https://www.linkedin.com/in/deprezarnaud
>> Github: https://github.com/arnaud-deprez
>>
>
> --
Arnaud Deprez
Software Engineer
Phone: +32 497 23 30 44
Linked'In: https://www.linkedin.com/in/deprezarnaud
Github: https://github.com/arnaud-deprez


Karaf 4.0.5 cannot resolve some OSGi services

2016-05-09 Thread Arnaud Deprez
Hi folks,

I just tried the new Karaf release and hit this issue when installing my
bundles that use Blueprint as the DI engine:
Error executing command: Unable to resolve root: missing requirement [root]
osgi.identity; osgi.identity=enterprise-contract; type=karaf.feature;
version="[1.4.0.SNAPSHOT,1.4.0.SNAPSHOT]";
filter:="(&(osgi.identity=enterprise-contract)(type=karaf.feature)(version>=1.4.0.SNAPSHOT)(version<=1.4.0.SNAPSHOT))"
[caused by: Unable to resolve enterprise-contract/1.4.0.SNAPSHOT: missing
requirement [enterprise-contract/1.4.0.SNAPSHOT] osgi.identity;
osgi.identity=enterprise-customer; type=karaf.feature [caused by: Unable to
resolve enterprise-customer/1.4.0.SNAPSHOT: missing requirement
[enterprise-customer/1.4.0.SNAPSHOT] osgi.identity;
osgi.identity=be.lampiris.api.customer-rest; type=osgi.bundle;
version="[1.4.0.SNAPSHOT,1.4.0.SNAPSHOT]"; resolution:=mandatory [caused
by: Unable to resolve be.lampiris.api.customer-rest/1.4.0.SNAPSHOT: missing
requirement [be.lampiris.api.customer-rest/1.4.0.SNAPSHOT] osgi.service;
effective:=active;
filter:="(objectClass=be.lampiris.api.customer.CustomerQueryService)"
[caused by: Unable to resolve be.lampiris.api.customer-impl/1.4.0.SNAPSHOT:
missing requirement [be.lampiris.api.customer-impl/1.4.0.SNAPSHOT]
osgi.service; effective:=active;
filter:="(&(objectClass=org.apache.camel.Component)(type=hazelcast))"

However I can see my service with the following command:
karaf@root(feature)> service:list org.apache.camel.Component
[org.apache.camel.Component]
----
 osgi.service.blueprint.compname = hazelcastComponent
 service.bundleid = 290
 service.id = 292
 service.scope = bundle
 type = hazelcast
Provided by :
 Bundle 290

My blueprint configuration is :
<reference interface="org.apache.camel.Component" filter="(type=hazelcast)"/>

My features work in 4.0.4, so I don't know what is broken here.
Any help is welcome.
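A note for readers hitting the same resolution error: the unresolved requirement above is an osgi.service requirement, and a providing bundle can advertise the matching capability in its manifest. A sketch using the Felix maven-bundle-plugin; the capability attributes mirror the filter in the error, while the plugin placement and exact syntax should be checked against your build:

```xml
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <configuration>
    <instructions>
      <!-- Advertise the service so the Karaf 4 resolver can match the
           requirement (&(objectClass=org.apache.camel.Component)(type=hazelcast)) -->
      <Provide-Capability>
        osgi.service;effective:=active;objectClass=org.apache.camel.Component;type=hazelcast
      </Provide-Capability>
    </instructions>
  </configuration>
</plugin>
```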

Regards,
-- 
Arnaud Deprez
Software Engineer
Phone: +32 497 23 30 44
Linked'In: https://www.linkedin.com/in/deprezarnaud
Github: https://github.com/arnaud-deprez


Re: JPA and transaction issue in Karaf 4.0.4

2016-02-17 Thread Arnaud Deprez
Hi Christian,

It has nothing to do directly with this issue, but I thought we should
use pax-jdbc-pool-aries instead of pax-jdbc-pool-dbcp2.
It's still unclear to me :-).
What is the status of these 2 libraries?
Which one should we use?
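For context, the pax-jdbc-config approach Christian recommends below boils down to dropping a config file that pax-jdbc-config turns into a pooled DataSource service. A sketch; the file name pattern and property keys follow the PAX-JDBC wiki, while the concrete driver and URL values here are placeholders:

```
# etc/org.ops4j.datasource-test.cfg
# Read by pax-jdbc-config; the -pool-xa suffix picks the pooled XA
# DataSourceFactory contributed by pax-jdbc-pool-dbcp2
osgi.jdbc.driver.name=H2-pool-xa
dataSourceName=test
url=jdbc:h2:mem:test
```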

Regards,

Arnaud Deprez

On Mon, Feb 8, 2016 at 1:01 PM Dutertry Nicolas <
nicolas.duter...@soprahr.com> wrote:

> Thank you Christian !
>
>
>
> --
>
> Nicolas Dutertry
>
>
>
> *From:* Christian Schneider [mailto:cschneider...@gmail.com] *On behalf
> of* Christian Schneider
> *Sent:* Monday, 8 February 2016 12:01
>
>
> *To:* user@karaf.apache.org
> *Subject:* Re: JPA and transaction issue in Karaf 4.0.4
>
>
>
> Hi Nicolas,
>
> I was able to reproduce the issue and created a jira issue for it
> https://issues.apache.org/jira/browse/ARIES-1494
>
> I also found a workaround: make the method TestServiceImpl.delete
> @Transactional as well. Still, this is a severe issue and I will try to
> fix it as soon as possible.
>
> Christian
>
> On 08.02.2016 11:17, Dutertry Nicolas wrote:
>
> Hi Christian,
>
>
>
> Thanks for your answer. I have fixed the 2 issues (correct JPA namespace
> and pax-jdbc for the datasource) but unfortunately the error is still there.
>
> The new code is committed on github.
>
> Regards,
>
> --
>
> Nicolas
>
>
>
> *From:* cschneider...@gmail.com [mailto:cschneider...@gmail.com] *On
> behalf of* Christian Schneider
> *Sent:* Saturday, 6 February 2016 19:09
> *To:* user@karaf.apache.org
> *Subject:* Re: JPA and transaction issue in Karaf 4.0.4
>
>
>
> Hi Nicolas,
>
>
>
> I found some issues with your example but nothing that could fully explain
> the error.
>
>
>
> 1. In blueprint.xml you use the namespace xmlns:jpa="
> http://aries.apache.org/xmlns/jpa/v1.0.0". This is deprecated. The
> correct one is xmlns:jpa="http://aries.apache.org/xmlns/jpa/v2.0.0"
>
> 2. Your DataSource is not fully JTA enabled. (You are using
> org.apache.commons.dbcp2.managed.BasicManagedDataSource)
>
>
>
> You can see how to setup dbcp2 in
>
>
> https://github.com/ops4j/org.ops4j.pax.jdbc/blob/master/pax-jdbc-pool-dbcp2/src/main/java/org/ops4j/pax/jdbc/pool/dbcp2/impl/ds/DbcpXAPooledDataSourceFactory.java
>
>
>
> I suggest to simply use pax-jdbc-pool-dbcp2 and pax-jdbc-config to create
> your DataSource. See
>
>
> https://ops4j1.jira.com/wiki/display/PAXJDBC/Pooling+and+XA+support+for+DataSourceFactory
>
>
>
> Can you try if fixing these two issues helps?
>
> If not then it might be a bug and I will investigate it deeper.
>
>
>
> Christian
>
>
>
> 2016-02-05 15:07 GMT+01:00 Dutertry Nicolas  >:
>
> Hi,
>
>
>
> I’m trying to migrate an application working with Karaf 3.0.5 to Karaf
> 4.0.4.
>
> I ran into a problem with JPA and transaction management so I have created
> a small maven project showing it.
>
> This sample is available on GitHub :
> https://github.com/nicolas-dutertry/test-jpa
>
>
>
> The class TestServiceImpl in test-jpa-service has an EntityManager
> annotated with @PersistenceContext(unitName = "test").
>
> The class DeleteManager also has an EntityManager annotated with
> @PersistenceContext(unitName = "test").
>
>
>
> The method TestServiceImpl.delete is not transactional and it calls the
> transactional method DeleteManager.delete several times :
>
>
>
> public class TestServiceImpl implements TestService {
>
>     @PersistenceContext(unitName = "test")
>     private EntityManager entityManager;
>
>     private DeleteManager deleteManager;
>
>     // …
>
>     @Override
>     public void delete(String... names) {
>         for (String name : names) {
>             System.out.println("Deleting " + name);
>             deleteManager.delete(name);
>         }
>     }
> }
>
> public class DeleteManager {
>
>     @PersistenceContext(unitName = "test")
>     private EntityManager entityManager;
>
>     // …
>
>     @Transactional
>     public void delete(String lastName) {
>         Query query = entityManager.createQuery(
>                 "delete from Person where lastName = :lastName");
>         query.setParameter("lastName", lastName);
>         query.executeUpdate();
>     }
> }
>
>
>
> At runtime it raised a javax.persistence.TransactionRequiredException
> during the second call to DeleteManager.delete in the for loop.
>
>
>
> 

Re: Merry Christmas

2015-12-25 Thread Arnaud Deprez
Thank you !
Merry Christmas to all of you !

Arnaud

On Fri, Dec 25, 2015 at 12:57 PM Morgan  wrote:

> Merry Christmas to everyone!
>
> On 2015-12-25 11:00, j...@nanthrax.net wrote:
> > On behalf of the Karaf team, we wish a happy christmas to all Karaf
> > users !
> >
> > We are preparing a couple of gifts for you, especially the new
> > website. I worked on it yesterday and I will work again on it today. I
> > hope to send a vote e-mail soon.
> >
> > Again Merry Christmas
> > JB
>
>


Re: Karaf Maven Plugin: Non-central repository

2015-12-11 Thread Arnaud Deprez
Just add the repository to your pom.xml or to your settings.xml (Maven
config).
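For example, in the pom.xml; the repository id and URL below are placeholders for your own repository:

```xml
<repositories>
  <repository>
    <id>my-internal-repo</id>
    <url>https://repo.example.com/maven2</url>
  </repository>
</repositories>
```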

On Fri, Dec 11, 2015 at 11:34 AM Daniel McGreal 
wrote:

> Hi Karaf users,
> I have a project which relies on a jar not in Maven Central, how can I get
> the maven plugin to generate a feature file which references this
> repository?
> Dan.


karaf integration tests inside docker

2015-12-09 Thread Arnaud Deprez
Hi folks,

I'm trying to develop integration tests for Karaf in Docker.
Basically, I build a Docker image with all my features/bundles installed
and I want to run some integration tests against this image. The goal of
this is isolation and portability.

I know I can use Citrus to test the image as a black box by sending some
inputs and checking the outputs. This is fine, but it can't test the OSGi
registry and check that everything is well configured/injected.

As I was already a bit familiar with pax-exam, I tried to see if it was
possible to use it on an "already installed remote Karaf instance", and it
seems it's not...

Then I took a look at arquillian-osgi and its module for a remote Karaf
instance. As Arquillian also has recent support for Docker via
arquillian-cube, I thought it might be a good match, but it doesn't work:
arquillian-container-karaf-remote relies on JMX/RMI, which doesn't work
well inside Docker. Even if I set the environment
variable EXTRA_JAVA_OPTS=-Djava.rmi.server.hostname=192.168.99.101, which
allows me to connect via JConsole, the Arquillian plugin is still not
able to connect.
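One thing that may help (an untested sketch): Karaf's RMI ports are configurable in etc/org.apache.karaf.management.cfg, so they can be pinned and published by Docker. The property names below are from a stock 4.x distribution; the port values and bind addresses are assumptions:

```
# etc/org.apache.karaf.management.cfg
rmiRegistryPort = 1099
rmiServerPort = 44444
rmiRegistryHost = 0.0.0.0
rmiServerHost = 0.0.0.0
```

Combined with `docker run -p 1099:1099 -p 44444:44444 ...` and the java.rmi.server.hostname option already set via EXTRA_JAVA_OPTS, the RMI stubs should then point at ports that are actually reachable from outside the container.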

So my question here is:

   - Has anyone already tried to perform such integration tests with Karaf?
   - If so, what do you use? Do you have examples?

Regards,

Arnaud


Re: karaf 4.0.3 & cellar 4.0.0: cluster:feature-install doesn't work properly

2015-12-04 Thread Arnaud Deprez
One of mine

On Fri, Dec 4, 2015 at 2:27 PM Jean-Baptiste Onofré  wrote:

> Hi Arnaud,
>
> is it one of your features, or is it reproducible with a "public" feature ?
>
> Regards
> JB
>
> On 12/04/2015 02:20 PM, Arnaud Deprez wrote:
> > Hi folks,
> >
> > I recently tried cellar and it doesn't seem to work properly.
> > Actually when I try to install with cluster:feature-install, some of my
> > bundles stays in grace period which is not the case if I use
> > feature:install command on each node.
> >
> > Should I create an issue or is it a well known problem ?
> >
> > Rgds,
> >
> > Arnaud
>
> --
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com
>


karaf 4.0.3 & cellar 4.0.0: cluster:feature-install doesn't work properly

2015-12-04 Thread Arnaud Deprez
Hi folks,

I recently tried Cellar and it doesn't seem to work properly.
When I try to install with cluster:feature-install, some of my
bundles stay in grace period, which is not the case if I use the
feature:install command on each node.

Should I create an issue, or is this a well-known problem?

Rgds,

Arnaud


Re: Keep repository definition while using karaf-maven-plugin

2015-11-23 Thread Arnaud Deprez
Hi JB,

I dug a bit into the code, and the issue seems to come from the
class GenerateDescriptorMojo:

if (this.dependencyHelper.isArtifactAFeature(artifact)) {
    if (aggregateFeatures
            && FEATURE_CLASSIFIER.equals(this.dependencyHelper.getClassifier(artifact))) {
        File featuresFile = this.dependencyHelper.resolve(artifact, getLog());
        if (featuresFile == null || !featuresFile.exists()) {
            throw new MojoExecutionException(
                    "Cannot locate file for feature: " + artifact + " at " + featuresFile);
        }
        Features includedFeatures = readFeaturesFile(featuresFile);
        //TODO check for duplicates?
        features.getFeature().addAll(includedFeatures.getFeature());
    }
}

We see that the features are aggregated but not the repositories.
Apparently, for the features-generate-descriptor goal, the repository
tag is simply not used, whatever options you choose.
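To illustrate the gap, here is a self-contained toy model of the aggregation; the class and field names are illustrative stand-ins, not the mojo's real JAXB model. The loop merges features, and a fix would have to merge repositories the same way:

```java
import java.util.ArrayList;
import java.util.List;

// Toy stand-in for the generated Features model: holds feature entries
// and repository URLs read from an included features.xml.
class FeaturesModel {
    final List<String> features = new ArrayList<>();
    final List<String> repositories = new ArrayList<>();
}

public class AggregateSketch {
    // Mirrors what GenerateDescriptorMojo does today (features only),
    // plus the repository merge that appears to be missing.
    static FeaturesModel aggregate(List<FeaturesModel> included) {
        FeaturesModel out = new FeaturesModel();
        for (FeaturesModel f : included) {
            out.features.addAll(f.features);          // done by the mojo
            out.repositories.addAll(f.repositories);  // the missing step
        }
        return out;
    }

    public static void main(String[] args) {
        FeaturesModel feature2 = new FeaturesModel();
        feature2.features.add("el2-common-query");
        feature2.repositories.add("mvn:org.example/features/1.0/xml/features");
        FeaturesModel merged = aggregate(List.of(feature2));
        System.out.println(merged.repositories);
    }
}
```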

Regards,

Arnaud

On Sun, Nov 22, 2015 at 10:19 PM Arnaud Deprez 
wrote:

> Here is the example :
> https://github.com/arnaud-deprez/karaf-maven-plugin-samples
>
> So after performing mvn install, the feature.xml generated in
> aggregate-features/target/feature/feature.xml doesn't contain the
> repository tag from the features.xml from module feature2.
>
> Rgds,
>
> Arnaud
>
> On Sun, Nov 22, 2015 at 9:34 PM Jean-Baptiste Onofré 
> wrote:
>
>> That would be great. Else I will do it myself.
>>
>> Regards
>> JB
>>
>> On 11/22/2015 09:28 PM, Arnaud Deprez wrote:
>> > Sure but I can't give the whole project as it's a corporate project and
>> > I'm not allowed to that.
>> > If you need a whole project, I can try to reproduce it and push it my
>> > github.
>> >
>> > - features1.xml is the feature from the first project
>> > - features2.xml is the feature from my module in the second project
>> > - pom.xml is the pom my distribution module
>> >
>> > Rgds,
>> >
>> > Arnaud
>> >
>> > On Sun, Nov 22, 2015 at 9:14 PM Jean-Baptiste Onofré > > <mailto:j...@nanthrax.net>> wrote:
>> >
>> > Can you share the pom.xml and features.xml ?
>> >
>> > Regards
>> > JB
>> >
>> > On 11/22/2015 09:11 PM, Arnaud Deprez wrote:
>> >  > Hi JB,
>> >  >
>> >  > Sorry, I don't get what you mean by template or dependencies set.
>> >  >
>> >  > So basically, in my second project, I have a features maven
>> > module where
>> >  > I'm defining the features.xml file. This file is templated with
>> maven
>> >  > properties and I use the maven resource plugin to replace
>> properties
>> >  > with maven property placeholder.
>> >  >
>> >  > Then, I have another module distribution where my configuration
>> is :
>> >  >
>> >  > ...
>> >  > 
>> >  >  
>> >  >  ${project.groupId}
>> >  >  features
>> >  >  ${project.version}
>> >  >  xml
>> >  >  features
>> >  >  
>> >  >  
>> >  >  be.lampiris.pie2.el2
>> >  >  el2-common-query-features
>> >  >  ${lampiris.query.version}
>> >  >  xml
>> >  >  features
>> >  >  
>> >  >  
>> >  > ...
>> >  >  
>> >  >  org.apache.karaf.tooling
>> >  >  karaf-maven-plugin
>> >  >  ${karaf-plugin.version}
>> >  >  true
>> >  >  
>> >  >  
>> >  >  features-generate-descriptor
>> >  >  package
>> >  >  
>> >  >
>> > features-generate-descriptor
>> >  >  
>> >  >  
>> >  >
>> > true
>> >  >  
>> >  >  
>> >  >  
>>

Re: Keep repository definition while using karaf-maven-plugin

2015-11-22 Thread Arnaud Deprez
Here is the example :
https://github.com/arnaud-deprez/karaf-maven-plugin-samples

So after running mvn install, the feature.xml generated in
aggregate-features/target/feature/feature.xml doesn't contain the
repository tag from the features.xml of module feature2.

Rgds,

Arnaud

On Sun, Nov 22, 2015 at 9:34 PM Jean-Baptiste Onofré 
wrote:

> That would be great. Else I will do it myself.
>
> Regards
> JB
>
> On 11/22/2015 09:28 PM, Arnaud Deprez wrote:
> > Sure but I can't give the whole project as it's a corporate project and
> > I'm not allowed to that.
> > If you need a whole project, I can try to reproduce it and push it my
> > github.
> >
> > - features1.xml is the feature from the first project
> > - features2.xml is the feature from my module in the second project
> > - pom.xml is the pom my distribution module
> >
> > Rgds,
> >
> > Arnaud
> >
> > On Sun, Nov 22, 2015 at 9:14 PM Jean-Baptiste Onofré  > <mailto:j...@nanthrax.net>> wrote:
> >
> > Can you share the pom.xml and features.xml ?
> >
> > Regards
> > JB
> >
> > On 11/22/2015 09:11 PM, Arnaud Deprez wrote:
> >  > Hi JB,
> >  >
> >  > Sorry, I don't get what you mean by template or dependencies set.
> >  >
> >  > So basically, in my second project, I have a features maven
> > module where
> >  > I'm defining the features.xml file. This file is templated with
> maven
> >  > properties and I use the maven resource plugin to replace
> properties
> >  > with maven property placeholder.
> >  >
> >  > Then, I have another module distribution where my configuration
> is :
> >  >
> >  > ...
> >  > 
> >  >  
> >  >  ${project.groupId}
> >  >  features
> >  >  ${project.version}
> >  >  xml
> >  >  features
> >  >  
> >  >  
> >  >  be.lampiris.pie2.el2
> >  >  el2-common-query-features
> >  >  ${lampiris.query.version}
> >  >  xml
> >  >  features
> >  >  
> >  >  
> >  > ...
> >  >  
> >  >  org.apache.karaf.tooling
> >  >  karaf-maven-plugin
> >  >  ${karaf-plugin.version}
> >  >  true
> >  >  
> >  >  
> >  >  features-generate-descriptor
> >  >  package
> >  >  
> >  >
> > features-generate-descriptor
> >  >  
> >  >  
> >  >
> > true
> >  >  
> >  >  
> >  >  
> >  >  kar
> >  >  install
> >  >  
> >  >  kar
> >  >  
> >  >  
> >  >
> >  >
> >
>  
> Lampiris-${project.parent.artifactId}-${project.version}
> >  >
> >  > true
> >  >
> >  >
> >
>  ${project.build.directory}/feature/feature.xml
> >  >  
> >  >  
> >  >  
> >  >  
> >  > ...
> >  >
> >  > So I use dependencies to import my 2 features files.
> >  > Does it help ?
> >  >
> >  > Regards,
> >  >
> >  > Arnaud
> >  >
> >  > On Sun, Nov 22, 2015 at 8:51 PM Jean-Baptiste Onofré
> > mailto:j...@nanthrax.net>
> >  > <mailto:j...@nanthrax.net <mailto:j...@nanthrax.net>>> wrote:
> >  >
> >  > Hi Arnaud,
> >  >
> >  > Hmmm, it sounds like a bug.
> >  >
> >  > Do you use a template for the generate descriptor or does it
> > use the
> >  > dependencies set ?
> >  >
> >  

Re: Keep repository definition while using karaf-maven-plugin

2015-11-22 Thread Arnaud Deprez
Sure, but I can't share the whole project as it's a corporate project and
I'm not allowed to do that.
If you need a whole project, I can try to reproduce it and push it to my
GitHub.

- features1.xml is the feature from the first project
- features2.xml is the feature from my module in the second project
- pom.xml is the pom my distribution module

Rgds,

Arnaud

On Sun, Nov 22, 2015 at 9:14 PM Jean-Baptiste Onofré 
wrote:

> Can you share the pom.xml and features.xml ?
>
> Regards
> JB
>
> On 11/22/2015 09:11 PM, Arnaud Deprez wrote:
> > Hi JB,
> >
> > Sorry, I don't get what you mean by template or dependencies set.
> >
> > So basically, in my second project, I have a features maven module where
> > I'm defining the features.xml file. This file is templated with maven
> > properties and I use the maven resource plugin to replace properties
> > with maven property placeholder.
> >
> > Then, I have another module distribution where my configuration is :
> >
> > ...
> > 
> >  
> >  ${project.groupId}
> >  features
> >  ${project.version}
> >  xml
> >  features
> >  
> >  
> >  be.lampiris.pie2.el2
> >  el2-common-query-features
> >  ${lampiris.query.version}
> >  xml
> >  features
> >  
> >  
> > ...
> >  
> >  org.apache.karaf.tooling
> >  karaf-maven-plugin
> >  ${karaf-plugin.version}
> >  true
> >  
> >  
> >  features-generate-descriptor
> >  package
> >  
> >  features-generate-descriptor
> >  
> >  
> >  true
> >  
> >  
> >  
> >  kar
> >  install
> >  
> >  kar
> >  
> >  
> >
> >
> Lampiris-${project.parent.artifactId}-${project.version}
> >
> > true
> >
> >
> ${project.build.directory}/feature/feature.xml
> >  
> >  
> >  
> >  
> > ...
> >
> > So I use dependencies to import my 2 features files.
> > Does it help ?
> >
> > Regards,
> >
> > Arnaud
> >
> > On Sun, Nov 22, 2015 at 8:51 PM Jean-Baptiste Onofré  > <mailto:j...@nanthrax.net>> wrote:
> >
> > Hi Arnaud,
> >
> > Hmmm, it sounds like a bug.
> >
> > Do you use a template for the generate descriptor or does it use the
> > dependencies set ?
> >
> > Regards
> > JB
> >
> > On 11/22/2015 08:34 PM, Arnaud Deprez wrote:
> >  > Hi folks,
> >  >
> >  > I'm trying to use the karaf-maven-plugin to generate a kar file.
> >  >
> >  > Here is my configuration:
> >  > I have 2 projects, one depends on the other. Each project has its
> own
> >  > feature file.
> >  > In my second project, I defined the following configuration :
> >  > 
> >  >  org.apache.karaf.tooling
> >  >  karaf-maven-plugin
> >  >  ${karaf-plugin.version}
> >  >  true
> >  >  
> >  >  
> >  >  features-generate-descriptor
> >  >  package
> >  >  
> >  >
> > features-generate-descriptor
> >  >  
> >  >  
> >  >
> > true
> >  >  
> >  >  
> >  >  
> >  >  kar
> >  >  install
> >  >  
> >  >  kar
> >  >  
> >  >  
> >  >

Re: Keep repository definition while using karaf-maven-plugin

2015-11-22 Thread Arnaud Deprez
Hi JB,

Sorry, I don't get what you mean by template or dependencies set.

So basically, in my second project, I have a features Maven module where
I'm defining the features.xml file. This file is templated with Maven
properties, and I use the maven-resources-plugin to substitute the
property placeholders.

Then, I have another distribution module where my configuration is:

...


${project.groupId}
features
${project.version}
xml
features


be.lampiris.pie2.el2
el2-common-query-features
${lampiris.query.version}
xml
features


...

org.apache.karaf.tooling
karaf-maven-plugin
${karaf-plugin.version}
true


features-generate-descriptor
package

features-generate-descriptor


true



kar
install

kar



Lampiris-${project.parent.artifactId}-${project.version}

true

${project.build.directory}/feature/feature.xml




...
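
The mailing-list archive stripped the XML tags from the listing above. Reconstructed, it would read roughly as follows; the element names are a best guess from the karaf-maven-plugin conventions, not the author's verbatim pom:

```xml
<!-- Best-guess reconstruction; element names inferred, not verbatim -->
<dependencies>
  <dependency>
    <groupId>${project.groupId}</groupId>
    <artifactId>features</artifactId>
    <version>${project.version}</version>
    <type>xml</type>
    <classifier>features</classifier>
  </dependency>
  <dependency>
    <groupId>be.lampiris.pie2.el2</groupId>
    <artifactId>el2-common-query-features</artifactId>
    <version>${lampiris.query.version}</version>
    <type>xml</type>
    <classifier>features</classifier>
  </dependency>
</dependencies>
...
<plugin>
  <groupId>org.apache.karaf.tooling</groupId>
  <artifactId>karaf-maven-plugin</artifactId>
  <version>${karaf-plugin.version}</version>
  <extensions>true</extensions>
  <executions>
    <execution>
      <id>features-generate-descriptor</id>
      <phase>package</phase>
      <goals>
        <goal>features-generate-descriptor</goal>
      </goals>
      <configuration>
        <aggregateFeatures>true</aggregateFeatures>
      </configuration>
    </execution>
    <execution>
      <id>kar</id>
      <phase>install</phase>
      <goals>
        <goal>kar</goal>
      </goals>
      <configuration>
        <finalName>Lampiris-${project.parent.artifactId}-${project.version}</finalName>
        <attach>true</attach> <!-- element name uncertain in the original -->
        <featuresFile>${project.build.directory}/feature/feature.xml</featuresFile>
      </configuration>
    </execution>
  </executions>
</plugin>
```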

So I use dependencies to import my 2 features files.
Does it help ?

Regards,

Arnaud

On Sun, Nov 22, 2015 at 8:51 PM Jean-Baptiste Onofré 
wrote:

> Hi Arnaud,
>
> Hmmm, it sounds like a bug.
>
> Do you use a template for the generate descriptor or does it use the
> dependencies set ?
>
> Regards
> JB
>
> On 11/22/2015 08:34 PM, Arnaud Deprez wrote:
> > Hi folks,
> >
> > I'm trying to use the karaf-maven-plugin to generate a kar file.
> >
> > Here is my configuration:
> > I have 2 projects, one depends on the other. Each project has its own
> > feature file.
> > In my second project, I defined the following configuration :
> > 
> >  org.apache.karaf.tooling
> >  karaf-maven-plugin
> >  ${karaf-plugin.version}
> >  true
> >  
> >  
> >  features-generate-descriptor
> >  package
> >  
> >  features-generate-descriptor
> >  
> >  
> >  true
> >  
> >  
> >  
> >  kar
> >  install
> >  
> >  kar
> >  
> >  
> >
> >
> Lampiris-${project.parent.artifactId}-${project.version}
> >
> > true
> >
> >
> 
> >
> >
> ${project.build.directory}/feature/feature.xml
> >  
> >  
> >  
> >  
> >
> > As I've imported the 2 features files in my dependencies, it works fine.
> > Except that in the second feature, I've defined  (for
> > example to choose the right camel version) and those tags aren't
> > aggregated in the final feature.xml generated.
> >
> > I didn't find any useful information to achieve that in the
> > documentation. So that's my question : is there a way to also aggregate
> > repository tags ?
> >
> > Regards,
> >
> > Arnaud
>
> --
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com
>


Keep repository definition while using karaf-maven-plugin

2015-11-22 Thread Arnaud Deprez
Hi folks,

I'm trying to use the karaf-maven-plugin to generate a kar file.

Here is my configuration:
I have 2 projects, one depends on the other. Each project has its own
feature file.
In my second project, I defined the following configuration :

org.apache.karaf.tooling
karaf-maven-plugin
${karaf-plugin.version}
true


features-generate-descriptor
package

features-generate-descriptor


true



kar
install

kar



Lampiris-${project.parent.artifactId}-${project.version}

true



${project.build.directory}/feature/feature.xml





As I've imported the 2 features files in my dependencies, it works fine,
except that in the second feature I've defined <repository> tags (for
example to choose the right Camel version), and those tags aren't
aggregated into the final generated feature.xml.
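
For reference, a minimal sketch of the kind of features file in question (names, versions, and the namespace are illustrative, not the actual project files); the repository element is the part that gets lost during aggregation:

```xml
<features name="example-features"
          xmlns="http://karaf.apache.org/xmlns/features/v1.2.0">
  <!-- this <repository> line is what the generated descriptor drops -->
  <repository>mvn:org.apache.camel.karaf/apache-camel/2.15.2/xml/features</repository>
  <feature name="example-query" version="1.0.0">
    <feature>camel-core</feature>
    <bundle>mvn:be.example/example-query-impl/1.0.0</bundle>
  </feature>
</features>
```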

I didn't find any useful information to achieve that in the documentation.
So that's my question : is there a way to also aggregate repository tags ?

Regards,

Arnaud


Re: Bug in karaf 4.0.2

2015-10-13 Thread Arnaud Deprez
I've just tested it: I get the same NPE in 4.0.0 as in 4.0.1.

I should be able to modify my project a bit to use netty instead of servlet
by tomorrow, or at the latest by the end of the week.

On Tue, Oct 13, 2015 at 3:21 PM Jean-Baptiste Onofré 
wrote:

> Or even with Karaf 4.0.0 ?
>
> Regards
> JB
>
> On 10/13/2015 03:19 PM, Arnaud Deprez wrote:
> > Actually,
> > I had another bug with karaf 4.0.1 and jetty which was related to this
> > thread :
> >
> http://karaf.922171.n3.nabble.com/Nullpointer-Exception-in-jetty-ContextHandler-on-4-0-0-M2-and-M3-td4041050.html
> .
> > So I couldn't test it with karaf 4.0.1.
> >
> > I can test it quickly if needed but I have to replace servlet by netty
> > or another rest component.
> >
> > On Tue, Oct 13, 2015 at 3:07 PM Jean-Baptiste Onofré  > <mailto:j...@nanthrax.net>> wrote:
> >
> > Hi Arnaud,
> >
> > weird. Does it work with Karaf 4.0.1 ?
> >
> > Regards
> > JB
> >
> > On 10/13/2015 03:05 PM, Arnaud Deprez wrote:
> >  > Hi,
> >  >
> >  > I've found a bug in karaf 4.0.2. Informations is there as I first
> >  > thought it was related to camel 2.16.0.
> >  >
> >
> http://camel.465427.n5.nabble.com/Camel-2-16-0-ProducerTemplate-has-not-been-started-td5772612.html
> >  >
> >  > Rgds,
> >  >
> >  > Arnaud
> >
> > --
> > Jean-Baptiste Onofré
> > jbono...@apache.org <mailto:jbono...@apache.org>
> > http://blog.nanthrax.net
> > Talend - http://www.talend.com
> >
>
> --
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com
>


Re: Bug in karaf 4.0.2

2015-10-13 Thread Arnaud Deprez
Actually,
I had another bug with karaf 4.0.1 and jetty which was related to this
thread :
http://karaf.922171.n3.nabble.com/Nullpointer-Exception-in-jetty-ContextHandler-on-4-0-0-M2-and-M3-td4041050.html
.
So I couldn't test it with karaf 4.0.1.

I can test it quickly if needed, but I have to replace servlet with netty
or another REST component.

On Tue, Oct 13, 2015 at 3:07 PM Jean-Baptiste Onofré 
wrote:

> Hi Arnaud,
>
> weird. Does it work with Karaf 4.0.1 ?
>
> Regards
> JB
>
> On 10/13/2015 03:05 PM, Arnaud Deprez wrote:
> > Hi,
> >
> > I've found a bug in karaf 4.0.2. Informations is there as I first
> > thought it was related to camel 2.16.0.
> >
> http://camel.465427.n5.nabble.com/Camel-2-16-0-ProducerTemplate-has-not-been-started-td5772612.html
> >
> > Rgds,
> >
> > Arnaud
>
> --
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com
>


Bug in karaf 4.0.2

2015-10-13 Thread Arnaud Deprez
Hi,

I've found a bug in Karaf 4.0.2. Information is there, as I first thought
it was related to Camel 2.16.0:
http://camel.465427.n5.nabble.com/Camel-2-16-0-ProducerTemplate-has-not-been-started-td5772612.html

Rgds,

Arnaud


Re: [ANN] Apache Karaf 4.0.2 Released!

2015-10-13 Thread Arnaud Deprez
OK, I was a bit too early.
Anyway, I won't use this mirror anymore, as I've had some trouble with it
and Docker Hub.
dockerhub.

Rgds,

On Tue, Oct 13, 2015 at 11:24 AM Jean-Baptiste Onofré 
wrote:

> The mirror sync is in progress. Sorry about that.
>
> Regards
> JB
>
> On 10/13/2015 11:23 AM, Arnaud Deprez wrote:
> > Apparently, it's not yet available here :
> > http://archive.apache.org/dist/karaf/
> >
> > On Tue, Oct 13, 2015 at 10:24 AM xlogger  > <mailto:xloggers...@gmail.com>> wrote:
> >
> > Thanks for the latest release!
> >
> > I found a strange issue though... If I use a win7 PC as client and
> > enter the
> > karaf shell, the backspace button is not functioning well...
> >
> > Its okay if I used a linux vagrant box to run the karaf shell...
> >
> > Not sure if it's just me having such issue...
> >
> >
> >
> > --
> > View this message in context:
> >
> http://karaf.922171.n3.nabble.com/ANN-Apache-Karaf-4-0-2-Released-tp4043040p4043041.html
> > Sent from the Karaf - User mailing list archive at Nabble.com.
> >
>
> --
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com
>


Re: [ANN] Apache Karaf 4.0.2 Released!

2015-10-13 Thread Arnaud Deprez
Apparently, it's not yet available here :
http://archive.apache.org/dist/karaf/

On Tue, Oct 13, 2015 at 10:24 AM xlogger  wrote:

> Thanks for the latest release!
>
> I found a strange issue though... If I use a win7 PC as client and enter
> the
> karaf shell, the backspace button is not functioning well...
>
> Its okay if I used a linux vagrant box to run the karaf shell...
>
> Not sure if it's just me having such issue...
>
>
>
> --
> View this message in context:
> http://karaf.922171.n3.nabble.com/ANN-Apache-Karaf-4-0-2-Released-tp4043040p4043041.html
> Sent from the Karaf - User mailing list archive at Nabble.com.
>


Re: Best practices : Configuration shared between bundles

2015-09-25 Thread Arnaud Deprez
Indeed, I didn't read your last name carefully ;-).

You are right: I actually use an OSGi service to expose that service, but in
my implementation I use Camel (with ProducerTemplate and some routes if
needed). It allows me to easily change the technology stack if needed (for
example netty, jetty, restlet, spark-rest and so on).

Thanks again for sharing your experience!

On Fri, Sep 25, 2015 at 3:45 PM Christian Schneider 
wrote:

> Hi Arnaud,
>
> I guess your mean Christian Müller who I know worked at Worldline some
> time ago. That is not me .. though we both are active in the Camel project
> which already led to some confusions :-)
>
> I think I would skip sharing the common prefix and simply have one config
> per service you call like http://example.com/api/customers and
> http://example.com/api/contracts <http://example.com/api/contacts>. I
> would wrap each such service in kind of a proxy as an OSGi service with a
> plain java interface. So then each service proxy can have its own config
> for the uri. You would then still not share the common prefix but you
> achieve a nice separation of business logic and technology. It might be
> some additional effort at the start but it makes your project more
> manageable when it grows.
>
> In general I would try to implement business logic completely outside of
> camel. It should just offer and consume OSGi services. The main effect of
> this is that you have a much clearer picture of what the business
> requirements are when looking at the code. Unfortunately camel makes it
> very easy to mix technology and business logic but this does not scale well
> with project size.
>
>
> Christian
>
>
> On 25.09.2015 15:25, Arnaud Deprez wrote:
>
> Thanks for your answers guys and glad to have some news of your Christian
> (It's been a while since Worldline).
>
> So for example, what I'd like to share is endpoint url. For the database
> it's very easy to expose a datasource as a service.
>
> So to give you the picture, I'm designing an API gateway in front of
> multiple backends (I can't say Micro Services right now because it's not as
> micro as it's supposed to be right now).
> The goal of this API is to be the gateway for third party application.
> My design choices was to split my APIs and my Implementations by business
> context.
> The thing is that for different context, I've to reach the same backend
> right now.
>
> So let's say this backend as the following context-path url :
> http://example.com/api
> In one bundle I need to call http://example.com/api/customers and in
> another one http://example.com/api/contracts
> <http://example.com/api/contacts>.
> My first idea was to have a environment PID configuration which contains
> "backend.base.url = http://example.com/api"; and in each bundles
> "customers.url = ${backend.base.url}/customers" and "contracts.url =
> ${backend.base.url}/contracts".
> I know that it's possible to merge properties like that but only in the
> same configuration PID.
> The advantage with this solution is that if they decided to split their
> backend into one for customer and one for backend (and if they keep the
> same API), I can just update my properties customers.url and contracts.url.
>
> As it doesn't seem to be possible, I think creating proxies for those
> service is the best design choice right now but it's a bit over work when I
> can simply the http4 camel component.
>
> Regards,
>
> Arnaud
>
>
> --
> Christian Schneiderhttp://www.liquid-reality.de
>
> Open Source Architecthttp://www.talend.com
>
>


Re: Best practices : Configuration shared between bundles

2015-09-25 Thread Arnaud Deprez
Thanks for your answers, guys, and glad to have some news of you, Christian
(it's been a while since Worldline).

So for example, what I'd like to share is endpoint url. For the database
it's very easy to expose a datasource as a service.

So to give you the picture: I'm designing an API gateway in front of
multiple backends (I can't say microservices right now because they're not
as micro as they're supposed to be).
The goal of this API is to be the gateway for third-party applications.
My design choice was to split my APIs and my implementations by business
context.
The thing is that, for different contexts, I have to reach the same backend
right now.

So let's say this backend has the following context-path URL:
http://example.com/api
In one bundle I need to call http://example.com/api/customers and in
another one http://example.com/api/contracts
<http://example.com/api/contacts>.
My first idea was to have an environment PID configuration which contains
"backend.base.url = http://example.com/api"; and in each bundles
"customers.url = ${backend.base.url}/customers" and "contracts.url =
${backend.base.url}/contracts".
I know that it's possible to merge properties like that, but only within the
same configuration PID.
The advantage of this solution is that if they decided to split their
backend into one for customers and one for contracts (and if they keep the
same API), I could just update my customers.url and contracts.url properties.
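
That idea can be sketched as two config files (the PIDs and keys here are hypothetical). The catch, as discussed above, is that Config Admin placeholder substitution only works within a single PID, so the references in the second file stay unresolved:

```properties
# etc/be.example.environment.cfg -- hypothetical shared/environment PID
backend.base.url = http://example.com/api

# etc/be.example.clients.cfg -- hypothetical per-bundle PID
# ${backend.base.url} lives in ANOTHER PID, so it is NOT substituted here
customers.url = ${backend.base.url}/customers
contracts.url = ${backend.base.url}/contracts
```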

As it doesn't seem to be possible, I think creating proxies for those
services is the best design choice right now, but it's a bit of extra work
when I could simply use the http4 Camel component.
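
The proxy approach Christian suggests can be sketched in plain Java (no OSGi wiring shown; the interface and class names are hypothetical): each client proxy hides the HTTP stack behind a business interface and owns its single endpoint property.

```java
import java.util.Map;

// Business-facing interface: callers never see the backend URL or HTTP stack.
interface CustomerService {
    String customerEndpoint(String id);
}

// Proxy implementation: owns its one piece of config (its endpoint URL).
class HttpCustomerService implements CustomerService {
    private final String baseUrl;

    HttpCustomerService(Map<String, String> config) {
        // in OSGi this map would come from the bundle's own PID via Config Admin
        this.baseUrl = config.getOrDefault("customers.url",
                "http://example.com/api/customers");
    }

    @Override
    public String customerEndpoint(String id) {
        // a real implementation would do the HTTP call here (camel-http4, netty, ...);
        // we just build the target URL to keep the sketch runnable
        return baseUrl + "/" + id;
    }
}

public class ProxyDemo {
    public static void main(String[] args) {
        CustomerService customers = new HttpCustomerService(
                Map.of("customers.url", "http://example.com/api/customers"));
        System.out.println(customers.customerEndpoint("42"));
    }
}
```

If the backend is later split, only the one URL property behind the proxy changes; no business code is touched.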

Regards,

Arnaud

On Fri, Sep 25, 2015 at 1:20 PM Christian Schneider 
wrote:

> I have seen a similar thing at a customer. They also used a db as config
> backend.
> Typically the config is then represented inside the service as either
> service.getProperty(key) or service.getConfig as a java bean class.
> The problem is that such a solution typically does not provide a way to
> push the config to the blueprint beans. You have to actively get it from
> the service.
>
> One way to solve this is to create a custom blueprint namespace and create
> your own property placeholder support there. This is pretty advanced stuff
> though, so I would not recommend it for every case.
>
> What I found is that there are typically two kinds of configs you want to
> share:
> 1. Database properties
> 2. Service endpoint URLs so you know what URL to use in a client per
> Environment
>
> These can also be solved differently though.
> 1. Use pax-jdbc-config to configure the DataSource in one config. Then
> share the DataSource service between bundles instead of the config
> 2. You can create client proxies that offer an OSGi service to the inside
> and do the e.g. Rest call in one central place per service. So there is
> again just one place to configure. Alternatively OSGi Remote Services can
> provide this in a general way
>
> There are also some similar things like e.g. mail server configs if you
> need to send mails. Again the solution is to create a central mail service
> that abstracts from the details and is configured in one place. Such
> services do not only solve the config problem but also make your modules
> more loosely coupled to the technologies used.
>
>
> Christian
>
>
>
>
> On 25.09.2015 12:45, Kevin Carr wrote:
>
> Christian we use a shared config for each "environment".  So bundles know
> certain entries will be available in dev qa and prod.
>
> We also have a gui over the db to make it easier for us to update said
> properties.
>
> On Fri, Sep 25, 2015, 5:43 AM Christian Schneider 
> wrote:
>
>> You should also look into your architecture to see why you need to share
>> some config. In some cases you
>> can extract the commonalities into a service that can then be regularly
>> configured by config admin.
>>
>> Can you explain a bit what kind of configuration you need to share
>> between bundles?
>>
>> Christian
>>
>>
>> On 25.09.2015 12:33, Arnaud Deprez wrote:
>>
>> Hi,
>>
>> @JP: The problem is not using 2 property placeholders, but sharing one
>> property placeholder across multiple bundles. And in that case, it seems
>> that we have to avoid it because it can cause trouble for the Config Admin
>> due to concurrency issues (it's actually what I read; I didn't take a look
>> at the code).
>>
>> @Christian: Thanks for your answer. I actually had the same idea, but I'm
>> a guy who is looking for icing by using Config Admin with its dynamic
>> behavior :-). But if there is no other solution, I'll go in that direction.

Re: Best practices : Configuration shared between bundles

2015-09-25 Thread Arnaud Deprez
Hi,

@JP: The problem is not using 2 property placeholders, but sharing one
property placeholder across multiple bundles. And in that case, it seems
that we have to avoid it because it can cause trouble for the Config Admin
due to concurrency issues (it's actually what I read; I didn't take a look
at the code).

@Christian: Thanks for your answer. I actually had the same idea, but I'm a
guy who is looking for icing by using Config Admin with its dynamic
behavior :-). But if there is no other solution, I'll go in that direction.

On Fri, Sep 25, 2015 at 11:34 AM Christian Schneider <
ch...@die-schneider.net> wrote:

> You can use the <ext:property-placeholder placeholder-prefix="$[" placeholder-suffix="]"/>.
> It allows access to the System properties and lets you use them for shared config.
> See:
> https://github.com/apache/karaf/blob/karaf-3.0.x/bundle/core/src/main/resources/OSGI-INF/blueprint/blueprint.xml
>
> It can be used together with a cm:property-placeholder that contains
> bundle specific config from config admin.
>
> One problem is that the System properties do not support updates in case
> of changes. So you can only use it for relatively fixed configs.
>
>
> Christian
>
>
> Am 25.09.2015 um 10:04 schrieb Arnaud Deprez:
>
> Hi folks,
>
> I would like to have your opinion about my needs.
> I've several bundles which all need their custom configuration (own PID)
> but all of them depends on some common configuration depending on which
> environment (ie: test, production) my karaf instance is running.
>
> I'm currently using apache blueprint. So I would like to do is for
> example, if I use [[env.name]] it will retrieve the config value from
> global configuration and if I use {{bundle.name}}.
>
> So my questions is :
> Is there a way to use common/global configuration with my current bundle
> configuration ?
> If yes, is it possible with blueprint or do I have to change my tech stack
> ?
> What are best practices ?
> And icing on the cake, do you have example ?
>
> Rgds,
>
> Arnaud
>
>
>


Re: Best practices : Configuration shared between bundles

2015-09-25 Thread Arnaud Deprez
Yeah, OK,

That's what I thought when I read some documentation about that.
The thing is that the OSGi service is not usable directly in my blueprint
configuration.

Is there a way to achieve that (an OSGi service with common configuration)
using DS or something at a higher level than the plain OSGi Activator API?
I've actually never used DS yet, but it seems to give more fine-grained
configuration than blueprint.

Rgds,

On Fri, Sep 25, 2015 at 10:13 AM Achim Nierbeck 
wrote:

> Hi Arnaud,
>
> sorry no icing here ;)
>
> The principal way of configuration is one configuration for one service,
> or a couple of configuration for a couple of services of the same interface
> (Managed Service Factory).
> So there is no way of sharing configuration between services.
> But you can have a service which only contains shared configuration which
> is used by the other services as depending service.
> This way your dependent services are only started if the "common" service
> is available.
>
> regards, Achim
>
>
>
> 2015-09-25 10:04 GMT+02:00 Arnaud Deprez :
>
>> Hi folks,
>>
>> I would like to have your opinion about my needs.
>> I've several bundles which all need their custom configuration (own PID)
>> but all of them depends on some common configuration depending on which
>> environment (ie: test, production) my karaf instance is running.
>>
>> I'm currently using apache blueprint. So I would like to do is for
>> example, if I use [[env.name]] it will retrieve the config value from
>> global configuration and if I use {{bundle.name}}.
>>
>> So my questions is :
>> Is there a way to use common/global configuration with my current bundle
>> configuration ?
>> If yes, is it possible with blueprint or do I have to change my tech
>> stack ?
>> What are best practices ?
>> And icing on the cake, do you have example ?
>>
>> Rgds,
>>
>> Arnaud
>>
>
>
>
> --
>
> Apache Member
> Apache Karaf <http://karaf.apache.org/> Committer & PMC
> OPS4J Pax Web <http://wiki.ops4j.org/display/paxweb/Pax+Web/> Committer &
> Project Lead
> blog <http://notizblog.nierbeck.de/>
> Co-Author of Apache Karaf Cookbook <http://bit.ly/1ps9rkS>
>
> Software Architect / Project Manager / Scrum Master
>
>


Best practices : Configuration shared between bundles

2015-09-25 Thread Arnaud Deprez
Hi folks,

I would like to have your opinion about my needs.
I've several bundles which all need their own custom configuration (own
PID), but all of them depend on some common configuration that varies with
the environment (i.e. test, production) my Karaf instance is running in.

I'm currently using Apache Aries Blueprint. What I would like to do is, for
example: if I use [[env.name]] it will retrieve the config value from the
global configuration, and if I use {{bundle.name}} it will retrieve it from
the bundle's own configuration.
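
What the question asks for, two placeholder syntaxes with different scopes, can be sketched in Blueprint along these lines (the PIDs, prefixes, and bean names are hypothetical; the ext placeholder resolves shared values such as system properties, the cm one resolves this bundle's own PID):

```xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.1.0"
           xmlns:ext="http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.0.0">

    <!-- shared values, e.g. from system properties: $[...] syntax -->
    <ext:property-placeholder placeholder-prefix="$[" placeholder-suffix="]"/>

    <!-- bundle-specific config from Config Admin: {{...}} syntax -->
    <cm:property-placeholder persistent-id="be.example.mybundle"
                             placeholder-prefix="{{" placeholder-suffix="}}"/>

    <bean id="client" class="be.example.Client">
        <property name="envName" value="$[env.name]"/>
        <property name="bundleName" value="{{bundle.name}}"/>
    </bean>
</blueprint>
```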

So my questions are:
Is there a way to use common/global configuration together with my current
bundle configuration?
If yes, is it possible with blueprint, or do I have to change my tech
stack?
What are the best practices?
And, icing on the cake, do you have an example?

Rgds,

Arnaud


Re: camel-swagger in karaf

2015-07-16 Thread Arnaud Deprez
Hi again,

I just pushed my example here; it works fine with spring-boot, but I still
can't see the model in Karaf:
https://github.com/arnaud-deprez/camel-examples/tree/master/rest

The features I use are there as well.
If someone finds what's wrong, I'd be glad! :-)

A.

On Thu, Jul 16, 2015 at 2:46 PM Arnaud Deprez  wrote:

> Hi guys,
>
> I don't know if this mail is for camel folks or karaf folks.
>
> I'm using camel 2.15.2 with its REST DSL and camel-swagger in an OSGi
> environment (I tried both karaf 2.4.3 and karaf 4).
>
> I'm using the servlet component by exposing it as OSGi services :
>
>- CamelHttpTransportServlet for my camel route
>- DefaultCamelSwaggerServlet for the swagger documentation.
>
> Then I use the swagger ui to see the documentation.
>
> I can see my rest endpoint and the documentation I put in "description"
> but it's not able to show me the model for the request and the response
> (defined in "type" and "outType").
>
> However if I use the same routes and the same servlet in a spring-boot
> environment, it works like a charm.
>
> I tried to find some information on the web but I didn't find anything
> about missing supports or bug with karaf and camel and the latest version.
>
> So is it a known bug ? Or camel will support it in future version ? Or is
> it a lack of support in karaf ?
>
> Regards,
>
> A.
>


camel-swagger in karaf

2015-07-16 Thread Arnaud Deprez
Hi guys,

I don't know if this mail is for camel folks or karaf folks.

I'm using camel 2.15.2 with its REST DSL and camel-swagger in an OSGi
environment (I tried both karaf 2.4.3 and karaf 4).

I'm using the servlet component, exposing these as OSGi services:

   - CamelHttpTransportServlet for my camel route
   - DefaultCamelSwaggerServlet for the swagger documentation.
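
That registration looks roughly like this in Blueprint, a sketch under the Pax Web whiteboard convention of that era; the aliases and the servlet-name value are assumptions, not taken from the project:

```xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
    <bean id="camelServlet"
          class="org.apache.camel.component.servlet.CamelHttpTransportServlet"/>
    <service ref="camelServlet" interface="javax.servlet.Servlet">
        <service-properties>
            <!-- pax-web whiteboard mounts the servlet at the alias; the
                 camel servlet component matches on the servlet name -->
            <entry key="alias" value="/rest"/>
            <entry key="servlet-name" value="CamelServlet"/>
        </service-properties>
    </service>
</blueprint>
```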

Then I use the swagger ui to see the documentation.

I can see my rest endpoint and the documentation I put in "description" but
it's not able to show me the model for the request and the response
(defined in "type" and "outType").

However if I use the same routes and the same servlet in a spring-boot
environment, it works like a charm.

I tried to find some information on the web, but I didn't find anything
about missing support or bugs with Karaf and Camel in the latest versions.

So is it a known bug? Will Camel support it in a future version? Or is it a
lack of support in Karaf?

Regards,

A.


Re: Karaf 3.0.3 OSGi Version

2015-03-17 Thread Arnaud Deprez
I'd rather would like to see K4 released too but I don't know when it will
be planned.
As far as I know there are still lot of work to do for a K4 release.
As I'm not (yet) involved in it, it's a supposition based on the changelog
:-).

Currently, I think there are some confusion with the actual version 2.4.Z
and 3.0.Z.
I think it's a bit strange to a lambda user that karaf 2.4.Z has a full
support for OSGi 5 and Karaf 3.0.Z has only a partial support.

So I think that

   - either the documentation should be clearer and say that K3 will be
   abandoned
   - or we should have a K3.1.Z release, as JB said

I think solution 2 is better for the average user than solution 1, but the
work may be too hard if we have to throw it away once K4 arrives.
I don't know, actually.

Regards,

Arnaud

2015-03-17 13:51 GMT+01:00 Achim Nierbeck :

> well, I guess that would be true as a minor version upgrade on the
> framework would suggest a minor version bump on K3.
>
> tbh, I'd rather see K4 released ... but it might be a
> possible solution to have a lighter "upgrade" to it again.
>
> regards, Achim
>
> 2015-03-17 13:48 GMT+01:00 Jean-Baptiste Onofré :
>
>> Hi Arnaud,
>>
>> if we move this way, it would make sense to go to Karaf 3.1.x as it's a
>> major "update" on K3.
>>
>> But I think it makes sense.
>>
>> Thoughts ?
>>
>> Regards
>> JB
>>
>> On 03/17/2015 01:42 PM, Arnaud Deprez wrote:
>>
>>> Hi JB,
>>>
>>> I think it makes sense to upgrade the felix framework in K3 at least to
>>> be aligned with K2.4.Z in order to avoid confusion.
>>> Or maybe K3 won't have any improvements and all the effort should be
>>> concentrated on K4.
>>> I think the documentation should be clearer for this topic, shouldn't it
>>> ?
>>>
>>> Regards,
>>>
>>> Arnaud
>>>
>>> 2015-03-17 11:23 GMT+01:00 agrz >> <mailto:alexander.grze...@medisite.de>>:
>>>
>>> Thank you for the Clarification.
>>> Alex
>>>
>>>
>>>
>>> --
>>> View this message in context:
>>> http://karaf.922171.n3.nabble.com/Karaf-3-0-3-OSGi-Version-
>>> tp4039091p4039114.html
>>> Sent from the Karaf - User mailing list archive at Nabble.com.
>>>
>>>
>>>
>> --
>> Jean-Baptiste Onofré
>> jbono...@apache.org
>> http://blog.nanthrax.net
>> Talend - http://www.talend.com
>>
>
>
>
> --
>
> Apache Member
> Apache Karaf <http://karaf.apache.org/> Committer & PMC
> OPS4J Pax Web <http://wiki.ops4j.org/display/paxweb/Pax+Web/> Committer &
> Project Lead
> blog <http://notizblog.nierbeck.de/>
> Co-Author of Apache Karaf Cookbook <http://bit.ly/1ps9rkS>
>
> Software Architect / Project Manager / Scrum Master
>
>


Re: Karaf 3.0.3 OSGi Version

2015-03-17 Thread Arnaud Deprez
Hi JB,

I think it makes sense to upgrade the felix framework in K3 at least to be
aligned with K2.4.Z in order to avoid confusion.
Or maybe K3 won't have any improvements and all the effort should be
concentrated on K4.
I think the documentation should be clearer for this topic, shouldn't it ?

Regards,

Arnaud

2015-03-17 11:23 GMT+01:00 agrz :

> Thank you for the Clarification.
> Alex
>
>
>
> --
> View this message in context:
> http://karaf.922171.n3.nabble.com/Karaf-3-0-3-OSGi-Version-tp4039091p4039114.html
> Sent from the Karaf - User mailing list archive at Nabble.com.
>


Re: Which version of Karaf should we choose

2015-02-03 Thread Arnaud Deprez
Ok, thanks a lot for your quick answer.

Regards,

Arnaud Deprez

2015-02-02 12:03 GMT+01:00 Jean-Baptiste Onofré :

> Hi,
>
> I second Achim there.
>
> Again, the Karaf 2.4.x purpose is for migration: it's for the user that
> wants to easily move from 2.3 to 3.0.
>
> For a new project, from scratch, I would advice 3.x for GA.
>
> Regards
> JB
>
> On 02/02/2015 10:56 AM, Achim Nierbeck wrote:
>
>> Hi,
>>
>> actually version 3 is "older" compared to version 2.4.
>>
>> 2.4 is there to have an easier transition phase between 2.3. and 3.0
>> since we changed APIs in those versions.
>> So right now I'd go for 3.0.3 which has a good support.
>> OSGi 5 is only because the 3.0.3 line uses Felix 4.2.1, cause at that
>> time 4.4. hasn't been released. [1]
>> While 2.4. (which has been released later) uses Felix 4.4.1. An upgrade
>> to it would be a major change and therefore requires a major version bump.
>> The next version to come is 4.0 which also has Felix 4.4.1 as
>> dependency, so if you're are starting with a new Project you
>> might want to work with the soon to come 4.0.0.M2 as 4.0.0 will be our
>> next focused version.
>>
>> Regards, Achim
>>
>> [1] -
>> http://karaf.apache.org/index/documentation/karaf-
>> dependencies/karaf-deps-3.0.x.html
>> [2] -
>> http://karaf.apache.org/index/documentation/karaf-
>> dependencies/karaf-deps-2.4.x.html
>>
>>
>> 2015-02-02 10:49 GMT+01:00 Arnaud Deprez > <mailto:arnaudep...@gmail.com>>:
>>
>> Hello,
>>
>> My question is in the title : for a new project, which version of
>> karaf should we choose ?
>>
>> When I read the blog post from Christian
>> (http://www.liquid-reality.de/display/liquid/2013/12/28/10+
>> reasons+to+switch+to+Apache+Karaf+3).
>> I should use the version 3.
>>
>> But on the official web site, in the release schedule section :
>> http://karaf.apache.org/index/community/releases-schedule.html
>> I see that karaf 3 has only a partial support for OSGi 5 and karaf
>> 2.4.x has full support.
>>
>> I don't get why previous version of karaf have a better support of
>> latest OSGi version.
>>
>> Could someone enlighten me on ?
>>
>> Thanks,
>>
>> Arnaud Deprez
>>
>>
>>
>>
>> --
>>
>> Apache Member
>> Apache Karaf <http://karaf.apache.org/> Committer & PMC
>> OPS4J Pax Web <http://wiki.ops4j.org/display/paxweb/Pax+Web/> Committer
>> & Project Lead
>> blog <http://notizblog.nierbeck.de/>
>> Co-Author of Apache Karaf Cookbook <http://bit.ly/1ps9rkS>
>>
>> Software Architect / Project Manager / Scrum Master
>>
>>
> --
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com
>


Which version of Karaf should we choose

2015-02-02 Thread Arnaud Deprez
Hello,

My question is in the title: for a new project, which version of Karaf
should we choose?

When I read the blog post from Christian
(http://www.liquid-reality.de/display/liquid/2013/12/28/10+reasons+to+switch+to+Apache+Karaf+3),
I should use version 3.

But on the official website, in the release schedule section
(http://karaf.apache.org/index/community/releases-schedule.html),
I see that Karaf 3 has only partial support for OSGi 5 while Karaf 2.4.x
has full support.

I don't get why an older version of Karaf has better support for the
latest OSGi version.

Could someone enlighten me on this?

Thanks,

Arnaud Deprez


Re: [PROPOSAL] Karaf Decanter monitoring

2014-10-14 Thread Arnaud Deprez
Thanks JB for bringing me some light.

I was just wondering; I don't want to start a discussion/troll either :-).
Anyway, it's a very good idea. It can be a very good alternative and it
can improve both projects!

I say +1 but I'm not sure if my vote will be taken into account :-).

Cheers

2014-10-14 20:59 GMT+02:00 Jamie G. :

> Thank you JB for the description, sounds very interesting.
>
> +1 as a subproject idea, nice name choice too :)
>
> Cheers,
> Jamie
>
> On Tue, Oct 14, 2014 at 3:54 PM, Achim Nierbeck 
> wrote:
> > Hi JB,
> >
> > This is a very nice and detailed description.
> > I like it right away, so +1
> > For calling it decanter and as extra subproject.
> >
> > Regards, Achim
> >
> > sent from mobile device
> >
> > On 14.10.2014 17:13, "Jean-Baptiste Onofré" wrote:
> >
> >> Hi all,
> >>
> >> First of all, sorry for this long e-mail ;)
> >>
> >> Some weeks ago, I blogged about the usage of ELK
> >> (Logstash/Elasticsearch/Kibana) with Karaf, Camel, ActiveMQ, etc. to
> >> provide a monitoring dashboard (to know what's happening in Karaf and
> >> be able to store it for a long period):
> >>
> >> http://blog.nanthrax.net/2014/03/apache-karaf-cellar-camel-activemq-monitoring-with-elk-elasticsearch-logstash-and-kibana/
> >>
> >> While this solution works fine, there are some drawbacks:
> >> - it requires additional middleware on the machines: in addition to
> >> Karaf itself, we have to install Logstash, Elasticsearch nodes, and the
> >> Kibana console
> >> - it's not usable "out of the box": you need at least to configure
> >> Logstash (with the different input/output plugins) and Kibana (to
> >> create the dashboards that you need)
> >> - it doesn't cover all the monitoring needs, especially in terms of
> >> SLA: we want to be able to raise alerts depending on some events (for
> >> instance, when a regex matches in the log messages, when a feature is
> >> uninstalled, when a JMX metric is greater than a given value, etc.)
> >>
> >> Actually, Karaf (and related projects) already provides most (if not
> >> all) of the data required for monitoring. However, it would be very
> >> helpful to have some "glue", ready to use and more user friendly,
> >> including storage of the metrics/monitoring data.
> >>
> >> Regarding this, I started a prototype of a monitoring solution for
> >> Karaf and the applications running in Karaf.
> >> The purpose is to be very extensible, flexible, and easy to install
> >> and use.
> >>
> >> In terms of architecture, we find the following components:
> >>
> >> 1/ Collectors & SLA Policies
> >> The collectors are services responsible for harvesting monitoring data.
> >> We have two kinds of collectors:
> >> - the polling collectors are invoked periodically by a scheduler.
> >> - the event-driven collectors react to some events.
> >> Two collectors are already available:
> >> - the JMX collector is a polling collector which harvests all MBean
> >> attributes
> >> - the Log collector is an event-driven collector, implementing a
> >> PaxAppender which reacts when a log message occurs
> >> We plan the following collectors:
> >> - a Camel Tracer collector would be an event-driven collector, acting
> >> as a Camel Interceptor. It would allow tracing any Exchange in Camel.
> >>
> >> It's very dynamic (thanks to OSGi services), so it's possible to add a
> >> new custom collector (user/custom implementation).
> >>
> >> The collectors are also responsible for checking the SLA. As the SLA
> >> policies are tied to the collected data, it makes sense that the
> >> collector validates the SLA and calls/delegates the alert to the SLA
> >> services.
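The polling-collector contract described above can be sketched in plain Java. The `PollingCollector` interface and `JmxCollector` class below are illustrative assumptions, not the actual Decanter API; a real collector would be registered as an OSGi service, while this sketch simply polls the platform MBean server the way the JMX collector is described to.

```java
import java.lang.management.ManagementFactory;
import java.util.HashMap;
import java.util.Map;
import javax.management.MBeanAttributeInfo;
import javax.management.MBeanServer;
import javax.management.ObjectName;

/** Hypothetical shape of a polling collector (illustrative only). */
interface PollingCollector {
    Map<String, Object> collect() throws Exception;
}

/** Harvests every readable attribute of the MBeans matching a pattern. */
class JmxCollector implements PollingCollector {
    private final MBeanServer server = ManagementFactory.getPlatformMBeanServer();
    private final ObjectName pattern;

    JmxCollector(String pattern) throws Exception {
        this.pattern = new ObjectName(pattern);
    }

    @Override
    public Map<String, Object> collect() throws Exception {
        Map<String, Object> data = new HashMap<>();
        for (ObjectName name : server.queryNames(pattern, null)) {
            for (MBeanAttributeInfo attr : server.getMBeanInfo(name).getAttributes()) {
                if (!attr.isReadable()) continue;
                try {
                    data.put(name + "." + attr.getName(),
                             server.getAttribute(name, attr.getName()));
                } catch (Exception ignored) {
                    // some attributes throw on access; skip them
                }
            }
        }
        return data;
    }
}

public class CollectorDemo {
    public static void main(String[] args) throws Exception {
        // Harvest the JVM's Runtime MBean as a small, always-available example.
        Map<String, Object> data = new JmxCollector("java.lang:type=Runtime").collect();
        System.out.println("harvested " + data.size() + " attributes");
    }
}
```

An event-driven collector would instead push such a map whenever its trigger (a log event, a Camel Exchange) fires.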
> >>
> >> 2/ Scheduler
> >> The scheduler service is responsible for calling the polling
> >> collectors, gathering the harvested data, and delegating it to the
> >> dispatcher.
> >> We already have a simple scheduler (just a thread), but we can plan a
> >> Quartz scheduler (for advanced cron/trigger configuration) and another
> >> one leveraging the Karaf scheduler.
> >>
> >> 3/ Dispatcher
> >> The dispatcher is called by the scheduler or by the event-driven
> >> collectors to dispatch the collected data to the appenders.
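The scheduler/dispatcher interaction from sections 2/ and 3/ can be sketched as follows. All names here are hypothetical; the real services would be wired via OSGi, whereas this sketch uses a plain `ScheduledExecutorService` for the "simple scheduler (just a thread)" variant.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;
import java.util.function.Supplier;

/** Hypothetical dispatcher: fans collected data out to every registered appender. */
class Dispatcher {
    private final List<Consumer<Map<String, Object>>> appenders;
    Dispatcher(List<Consumer<Map<String, Object>>> appenders) { this.appenders = appenders; }
    void dispatch(Map<String, Object> data) {
        appenders.forEach(a -> a.accept(data));
    }
}

/** Simple scheduler: periodically polls the collectors and hands results to the dispatcher. */
class SimpleScheduler {
    private final ScheduledExecutorService executor =
        Executors.newSingleThreadScheduledExecutor();
    void start(List<Supplier<Map<String, Object>>> collectors,
               Dispatcher dispatcher, long periodMs) {
        executor.scheduleAtFixedRate(
            () -> collectors.forEach(c -> dispatcher.dispatch(c.get())),
            0, periodMs, TimeUnit.MILLISECONDS);
    }
    void stop() { executor.shutdownNow(); }
}

public class SchedulerDemo {
    public static void main(String[] args) throws Exception {
        List<Map<String, Object>> received = new CopyOnWriteArrayList<>();
        Dispatcher dispatcher = new Dispatcher(List.of(received::add));
        SimpleScheduler scheduler = new SimpleScheduler();
        // Poll a trivial "collector" every 10 ms; the cast keeps the value an Object.
        scheduler.start(
            List.of(() -> Map.of("heap.total", (Object) Runtime.getRuntime().totalMemory())),
            dispatcher, 10);
        Thread.sleep(100);
        scheduler.stop();
        System.out.println("dispatched " + received.size() + " records");
    }
}
```

Event-driven collectors would bypass the scheduler and call `dispatch` directly from their callback.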
> >>
> >> 4/ Appenders
> >> The appender services are responsible for sending/storing the
> >> collected data to target systems.
> >> For now, we have two appenders:
> >> - a log appender which just logs the collected data
> >> - an Elasticsearch appender which sends the collected data to an
> >> Elasticsearch instance. For now, it uses an "external" Elasticsearch,
> >> but I'm working on an Elasticsearch feature allowing Elasticsearch to
> >> be embedded in Karaf (it's mostly done).
> >> We can plan the following other appenders:
> >> - Redis, to send the collected data to the Redis messaging system
> >> - JDBC, to store the collected data in a database
> >> - JMS, to send the collected data to a JMS broker (like ActiveMQ)
> >> - Camel, to send the collected data to a Camel direct-vm/vm endpoint
> >> of a route (it would create an internal route)
> >>
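A minimal sketch of the appender contract from section 4/, using the log appender as the example. The `Appender` interface and `LogAppender` class are illustrative assumptions, not the actual Decanter API.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.StringJoiner;

/** Hypothetical appender contract (illustrative only). */
interface Appender {
    void append(Map<String, Object> data);
}

/** Formats each collected record as key=value pairs and writes it to stdout. */
class LogAppender implements Appender {
    String format(Map<String, Object> data) {
        StringJoiner joiner = new StringJoiner(", ");
        data.forEach((k, v) -> joiner.add(k + "=" + v));
        return joiner.toString();
    }

    @Override
    public void append(Map<String, Object> data) {
        System.out.println(format(data));
    }
}

public class AppenderDemo {
    public static void main(String[] args) {
        Map<String, Object> data = new LinkedHashMap<>();
        data.put("karaf.feature", "decanter-log");
        data.put("level", "INFO");
        // prints "karaf.feature=decanter-log, level=INFO"
        new LogAppender().append(data);
    }
}
```

A JDBC, JMS, or Elasticsearch appender would implement the same `append` method against its own backend.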
> >> 5/ Console/K

Re: [PROPOSAL] Karaf Decanter monitoring

2014-10-14 Thread Arnaud Deprez
Hi,

I find the idea very interesting, but I'm not sure I get the whole point.

Just for my information, compared to what hawtio and its log plugin
provide, what are the differences with your solution?

2014-10-14 18:29 GMT+02:00 David Bosschaert :

> +1 this looks like a very useful set of components!
>
>
> On 14 October 2014 17:17, Matt Sicker  wrote:
> > I had never heard of a decanter before, but now that I have, it's an
> > awesome name.
> >
> > On 14 October 2014 11:06, Krzysztof Sobkowiak wrote:
> >>
> >> +1
> >>
> >> I think it's a good idea. It's good to have monitoring functionality
> >> for Karaf. I would prefer to make it a separate subproject like
> >> Cellar, to keep the Karaf code base simple and to allow a separate
> >> release cycle (for the same reason we had plans to extract the
> >> enterprise features into a separate subproject). It could be a Karaf
> >> add-on. Karaf Decanter is a good name.
> >>
> >> Regards
> >> Krzysztof
> >>
> >> On 14.10.2014 17:12, Jean-Baptiste Onofré wrote:
> >> > Hi all,
> >> >
> >> > First of all, sorry for this long e-mail ;)
> >> >
> >> > Some weeks ago, I blogged about the usage of ELK
> >> > (Logstash/Elasticsearch/Kibana) with Karaf, Camel, ActiveMQ, etc. to
> >> > provide a monitoring dashboard (to know what's happening in Karaf
> >> > and be able to store it for a long period):
> >> >
> >> > http://blog.nanthrax.net/2014/03/apache-karaf-cellar-camel-activemq-monitoring-with-elk-elasticsearch-logstash-and-kibana/
> >> >
> >> >
> >> > While this solution works fine, there are some drawbacks:
> >> > - it requires additional middleware on the machines: in addition to
> >> > Karaf itself, we have to install Logstash, Elasticsearch nodes, and
> >> > the Kibana console
> >> > - it's not usable "out of the box": you need at least to configure
> >> > Logstash (with the different input/output plugins) and Kibana (to
> >> > create the dashboards that you need)
> >> > - it doesn't cover all the monitoring needs, especially in terms of
> >> > SLA: we want to be able to raise alerts depending on some events
> >> > (for instance, when a regex matches in the log messages, when a
> >> > feature is uninstalled, when a JMX metric is greater than a given
> >> > value, etc.)
> >> >
> >> > Actually, Karaf (and related projects) already provides most (if
> >> > not all) of the data required for monitoring. However, it would be
> >> > very helpful to have some "glue", ready to use and more user
> >> > friendly, including storage of the metrics/monitoring data.
> >> >
> >> > Regarding this, I started a prototype of a monitoring solution for
> >> > Karaf and the applications running in Karaf.
> >> > The purpose is to be very extensible, flexible, and easy to install
> >> > and use.
> >> >
> >> > In terms of architecture, we find the following components:
> >> >
> >> > 1/ Collectors & SLA Policies
> >> > The collectors are services responsible for harvesting monitoring
> >> > data.
> >> > We have two kinds of collectors:
> >> > - the polling collectors are invoked periodically by a scheduler.
> >> > - the event-driven collectors react to some events.
> >> > Two collectors are already available:
> >> > - the JMX collector is a polling collector which harvests all MBean
> >> > attributes
> >> > - the Log collector is an event-driven collector, implementing a
> >> > PaxAppender which reacts when a log message occurs
> >> > We plan the following collectors:
> >> > - a Camel Tracer collector would be an event-driven collector,
> >> > acting as a Camel Interceptor. It would allow tracing any Exchange
> >> > in Camel.
> >> >
> >> > It's very dynamic (thanks to OSGi services), so it's possible to add
> >> > a new custom collector (user/custom implementation).
> >> >
> >> > The collectors are also responsible for checking the SLA. As the SLA
> >> > policies are tied to the collected data, it makes sense that the
> >> > collector validates the SLA and calls/delegates the alert to the SLA
> >> > services.
> >> >
> >> > 2/ Scheduler
> >> > The scheduler service is responsible for calling the polling
> >> > collectors, gathering the harvested data, and delegating it to the
> >> > dispatcher.
> >> > We already have a simple scheduler (just a thread), but we can plan
> >> > a Quartz scheduler (for advanced cron/trigger configuration) and
> >> > another one leveraging the Karaf scheduler.
> >> >
> >> > 3/ Dispatcher
> >> > The dispatcher is called by the scheduler or by the event-driven
> >> > collectors to dispatch the collected data to the appenders.
> >> >
> >> > 4/ Appenders
> >> > The appender services are responsible for sending/storing the
> >> > collected data to target systems.
> >> > For now, we have two appenders:
> >> > - a log appender which just logs the collected data
> >> > - an Elasticsearch appender which sends the collected data to an
> >> > Elasticsearch instance. For now, it uses an "external"
> >> > Elasticsearch, but I'm working on an Elasticsearch feature allowing
> >> > Elasticsearch to be embedded in Karaf (it's mostly done).
> >> > We ca

Re: Using visualvm to profile/monitor karaf instance

2014-08-03 Thread Arnaud Deprez
OK, but an NPE isn't normal. If it's due to the ACL, we should get
something like "Forbidden" instead.


2014-08-02 21:53 GMT+02:00 j...@nanthrax.net :

> Remote connection works. Local doesn't, and that's normal due to the ACL.
>
> --
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com
>
>
> - Reply message -
> From: "Achim Nierbeck" 
> To: "user@karaf.apache.org" 
> Subject: Using visualvm to profile/monitor karaf instance
> Date: Fri, Aug 1, 2014 7:35 pm
>
>
> Hmm, you're right: I've been able to connect neither to a local process
> (NullPointerException) nor via the remote JMX connection.
> I've created a bug for this. [1]
>
> regards, Achim
>
> [1] - https://issues.apache.org/jira/browse/KARAF-3147
>
>
> 2014-08-01 19:08 GMT+02:00 Kevin Schmidt :
>
>> I'm having the same problem connecting to a local process.
>>
>> I've started up a fresh install of Karaf 3.0.1, run jconsole, picked
>> the org.apache.karaf.main.Main local process, and got the message about a
>> secure connection failing, but proceeded with the insecure connection. It
>> appears to connect, but the console shows no statistics, just like for
>> Vinu.
>>
>> However, if I use a remote connection and use
>>
>> service:jmx:rmi://0.0.0.0:4/jndi/rmi://0.0.0.0:1099/karaf-root
>>
>> Then it all works ok, albeit with the same secure connection failing
>> message.  Is there an issue with the local process connection?
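The same remote connection that works from jconsole can also be made programmatically. The sketch below only parses the JMX service URL (the host and ports are assumptions based on a default Karaf install: RMI registry on 1099, RMI server on 44444, user/password karaf/karaf) and leaves the actual connect call commented out, since it needs a running Karaf instance.

```java
import javax.management.remote.JMXServiceURL;

public class KarafJmxUrl {
    public static void main(String[] args) throws Exception {
        // Assumed default Karaf JMX endpoints; adjust to your etc/org.apache.karaf.management.cfg.
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi://localhost:44444/jndi/rmi://localhost:1099/karaf-root");
        System.out.println(url.getProtocol() + " " + url.getHost() + ":" + url.getPort());

        // To actually connect (requires a running Karaf):
        // Map<String, Object> env = new HashMap<>();
        // env.put(JMXConnector.CREDENTIALS, new String[] {"karaf", "karaf"});
        // try (JMXConnector connector = JMXConnectorFactory.connect(url, env)) {
        //     System.out.println(connector.getMBeanServerConnection().getMBeanCount());
        // }
    }
}
```

If jconsole fails on the local process but this remote URL works, the remote RMI connector (rather than the local attach mechanism) is the reliable path.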
>>
>>
>> On Fri, Aug 1, 2014 at 9:27 AM, Vinu Raj 
>> wrote:
>>
>>> I have tried this with no luck. Connected with an insecure connection,
>>> as I got an error for the SSL connection.
>>>
>>> 
>>>
>>>
>>>
>>> --
>>> View this message in context:
>>> http://karaf.922171.n3.nabble.com/Using-visualvm-to-profile-monitor-karaf-instance-tp4034536p4034542.html
>>> Sent from the Karaf - User mailing list archive at Nabble.com.
>>>
>>
>>
>
>
> --
>
> Apache Member
> Apache Karaf  Committer & PMC
> OPS4J Pax Web  Committer &
> Project Lead
> blog 
>
> Software Architect / Project Manager / Scrum Master
>
>