Re: Use case: Per tenant deployments talking to multi tenant kafka cluster

2021-12-08 Thread Christian Schneider
Indeed. Unfortunately I confused the two in auto completion.

Thanks for pointing out,

Christian

Am Mi., 8. Dez. 2021 um 09:33 Uhr schrieb Jean-Baptiste Onofré <
j...@nanthrax.net>:

> Hi Christian,
>
> I guess you wanted to send this message on the kafka mailing list, right ?
>
> Regards
> JB
>
> On 08/12/2021 09:31, Christian Schneider wrote:
> > We have a single tenant application that we deploy to a kubernetes
> > cluster in many instances.
> > Every customer has several environments of the application. Each
> > application lives in a separate namespace and should be isolated from
> > other applications.
> >
> > We plan to use kafka to communicate inside an environment (between the
> > different pods).
> > As setting up one kafka cluster per such environment is a lot of
> > overhead and cost, we would like to just use a single multi-tenant kafka
> > cluster.
> >
> > Let's assume we just have one topic with 10 partitions for simplicity.
> > We can now use the environment id as a key for the messages to make sure
> > the messages of each environment arrive in order while sharing the load
> > on the partitions.
> >
> > Now we want each environment to read only the minimal number of messages
> > while consuming. Ideally each environment would consume only its own
> > messages. Can we somehow filter to only
> > receive messages with a certain key? Or can we at least listen to only a
> > certain partition?
> >
> > Additionally we would ideally like to have enforced isolation, so that each
> > environment can only see its own messages even if it might receive
> > messages of other environments from the same partition.
> > I think in the worst case we can make this happen by encrypting the
> > messages, but it would be great if we could filter on the broker side.
> >
> > Christian
> >
> > --
> > --
> > Christian Schneider
> > http://www.liquid-reality.de <http://www.liquid-reality.de>
> >
> > Computer Scientist
> > http://www.adobe.com <http://www.adobe.com>
> >
>


-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Use case: Per tenant deployments talking to multi tenant kafka cluster

2021-12-08 Thread Christian Schneider
We have a single tenant application that we deploy to a kubernetes cluster
in many instances.
Every customer has several environments of the application. Each
application lives in a separate namespace and should be isolated from other
applications.

We plan to use kafka to communicate inside an environment (between the
different pods).
As setting up one kafka cluster per such environment is a lot of overhead
and cost, we would like to just use a single multi-tenant kafka cluster.

Let's assume we just have one topic with 10 partitions for simplicity.
We can now use the environment id as a key for the messages to make sure
the messages of each environment arrive in order while sharing the load on
the partitions.
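Keying by environment id works because the producer maps equal keys to the same partition. The hash below is only a simplified stand-in (Kafka's Java client actually uses murmur2 on the serialized key), but any stable hash shows the property that matters here: the same key always lands on the same partition.

```java
import java.nio.charset.StandardCharsets;

public class KeyPartitioner {
    // Simplified stand-in for Kafka's default partitioner: a stable hash of
    // the key modulo the partition count. Kafka really uses murmur2; this
    // sketch only demonstrates the stable key-to-partition mapping.
    static int partitionFor(String key, int numPartitions) {
        int hash = 0;
        for (byte b : key.getBytes(StandardCharsets.UTF_8)) {
            hash = 31 * hash + (b & 0xff);
        }
        return Math.floorMod(hash, numPartitions);
    }

    public static void main(String[] args) {
        int p1 = partitionFor("env-42", 10);
        int p2 = partitionFor("env-42", 10);
        System.out.println(p1 == p2); // prints true: same key, same partition
    }
}
```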

Now we want each environment to read only the minimal number of messages
while consuming. Ideally each environment would consume only its own messages.
Can we somehow filter to only receive messages with a certain key? Or can we
at least listen to only a certain partition?
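For context: a consumer can pin itself to specific partitions with KafkaConsumer.assign(...), but with keyed partitioning several environments still share each partition, and Kafka offers no broker-side filter on record keys. So the consumer has to drop foreign records after polling. A minimal sketch of that client-side filter, with a hypothetical Message type standing in for ConsumerRecord<String, String>:

```java
import java.util.ArrayList;
import java.util.List;

public class EnvFilter {

    // Hypothetical stand-in for Kafka's ConsumerRecord<String, String>:
    // the key carries the environment id, the value is the payload.
    public static class Message {
        public final String envId;
        public final String payload;

        public Message(String envId, String payload) {
            this.envId = envId;
            this.payload = payload;
        }
    }

    // Kafka has no broker-side filter on record keys, so a consumer that
    // shares a partition with other environments must skip foreign records.
    public static List<Message> ownMessages(List<Message> polled, String ownEnvId) {
        List<Message> own = new ArrayList<>();
        for (Message m : polled) {
            if (ownEnvId.equals(m.envId)) {
                own.add(m);
            }
        }
        return own;
    }

    public static void main(String[] args) {
        List<Message> polled = List.of(
                new Message("env-a", "m1"),
                new Message("env-b", "m2"),
                new Message("env-a", "m3"));
        for (Message m : ownMessages(polled, "env-a")) {
            System.out.println(m.payload); // prints m1 then m3
        }
    }
}
```

Note this only reduces what the application sees, not what it receives over the network, and it provides no isolation guarantee on its own.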

Additionally we would ideally like to have enforced isolation, so that each
environment can only see its own messages even if it might receive messages
of other environments from the same partition.
I think in the worst case we can make this happen by encrypting the messages,
but it would be great if we could filter on the broker side.
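The encryption fallback can be sketched with plain JDK crypto: give each environment its own AES key, so tenants that happen to share a partition cannot read each other's payloads. This is a minimal sketch only; key management (one key per environment, distribution, rotation) is assumed and out of scope.

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class EnvCrypto {
    private static final SecureRandom RANDOM = new SecureRandom();

    // Encrypt a payload with the environment's key using AES-GCM
    // (authenticated encryption); the random IV is prepended to the output.
    static byte[] encrypt(SecretKey key, byte[] plain) throws Exception {
        byte[] iv = new byte[12];
        RANDOM.nextBytes(iv);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ct = c.doFinal(plain);
        byte[] out = new byte[iv.length + ct.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        return out;
    }

    // Split off the IV and decrypt; fails if the key or ciphertext is wrong.
    static byte[] decrypt(SecretKey key, byte[] message) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key,
                new GCMParameterSpec(128, Arrays.copyOfRange(message, 0, 12)));
        return c.doFinal(Arrays.copyOfRange(message, 12, message.length));
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        SecretKey envKey = kg.generateKey(); // one such key per environment
        byte[] wire = encrypt(envKey, "hello env-a".getBytes(StandardCharsets.UTF_8));
        System.out.println(new String(decrypt(envKey, wire), StandardCharsets.UTF_8));
    }
}
```

A consumer in another environment, holding a different key, gets an authentication failure instead of plaintext, which gives the enforced confidentiality the broker cannot provide here.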

Christian

-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: Data isolation between Karaf bundles/features: how to achieve?

2021-10-12 Thread Christian Schneider
I would solve this in a fashion that is independent of OSGi.
Think in terms of bounded contexts in domain driven design terminology.

In your case you have to decide whether person and department belong to the
same bounded context or not.
If they belong to the same one, then share the db schema and a single
transaction, maybe even the bundle and persistence context. There is no
need to fully shield them from each other.
If they belong to different bounded contexts, then separate them completely
(different db schema, no shared transaction). In this case you can easily
extract each bounded context as a microservice if needed, but of course you
lose quite a few nice features like XA transactions.

Christian

Am So., 10. Okt. 2021 um 20:38 Uhr schrieb jgfrm :

> Hi
>
> I am interested to understand if Isolation of data (between bundles) is
> possible in Karaf.
>
> Suppose the following case:
> - there is a Person bundle, responsible of managing persons
> - there is a Department bundle, responsible of managing departments.
> - there is a constraint: A person always works for precisely one
> department;
> hence a person can not exist without the department s/he works for.
>
> I have the following requirements:
> - The Person bundle can not use the data (departments) stored by the
> Department bundle (e.g. a table "Department")
> - The Department bundle can not use the data (persons) stored by the Person
> bundle (e.g. a table "Person")
> - If a department is deleted, all persons that work for that department
> would be deleted too (due to the formulated constraint).
>
> How to do this in Karaf/OSGI?
> My ideas:
> - each bundle (Department and Person) has its own persistence unit. The
> persistence unit refers to different databases with their own access
> control, or the different schemas in the same database with proper access
> control on the database level. Bundles can not see each other's persistence
> unit. Because these are effectively two separate databases, we can not use
> foreign keys to represent that an employee always works for precisely one
> department.
> Therefore:
> - there is a bundle that represents a transaction coordinator
> - the Person bundle tells the transaction coordinator that there is a
> constraint: namely that it wants to be informed if a department is deleted
> - in case a department is deleted the following happens:
>  - the department bundle builds a XA transaction to delete the department
>  - the department bundle informs the transaction coordinator that it
> created
> a transaction to delete the department, including the identifier of the
> department to be deleted
>  - the transaction coordinator informs the person bundle that a department
> with a particular identifier is going to be deleted
>  - the person bundle looks up all persons that work for the department, and
> creates a XA transaction to delete all these persons
>  - the person bundle informs the transaction coordinator that it created a
> transaction to delete the persons, including a list with identifiers of the
> persons to be deleted
>  - the transaction coordinator executes both transactions as one enclosing
> transaction using the 2-phase commit protocol.
>
> Questions:
> - is this a feasible scenario in Karaf?
> - if so, how should it be implemented? Specific tips?
> - are there any examples doing similar things?
>
> Best,
>
> -- Jaap
>
>

-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: Apache Karaf twitter account

2020-02-08 Thread Christian Schneider
Hi JB,

please also add me.

@schneider_chris

Christian

Am Fr., 7. Feb. 2020 um 09:05 Uhr schrieb Jean-Baptiste Onofre <
j...@nanthrax.net>:

> Hi everyone,
>
> I’m happy to announce that ApacheKaraf twitter account has been created.
>
> I’m adding some content right now (logo, background, etc) and I’m linking
> this account with the Karaf private mailing list.
>
> If you are PMC or committer and you want to post on behalf of ApacheKaraf,
> please let me know I will link your twitter account on twitter desk.
>
> Regards
> JB



-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: Karaf community Meetup in Karlsruhe (Germany) - "call for Talks"

2020-01-14 Thread Christian Schneider
I could do a talk about OSGi best practices or OSGi testing.

Christian

Am Di., 14. Jan. 2020 um 09:12 Uhr schrieb Achim Nierbeck <
bcanh...@googlemail.com>:

> Hi,
>
> looks like we have a date for a community meetup in Karlsruhe Germany.
> It's going to be the 19th of March.
> Now I would like to call for talks, so we can decide how "big" we're going
> to have it ;)
>
> So please be welcome to start proposing your talks ...
>
> best regards, Achim
>
> --
>
> Apache Member
> Apache Karaf <http://karaf.apache.org/> Committer & PMC
> OPS4J Pax Web <http://wiki.ops4j.org/display/paxweb/Pax+Web/> Committer &
> Project Lead
> blog <http://notizblog.nierbeck.de/>
> Co-Author of Apache Karaf Cookbook <http://bit.ly/1ps9rkS>
>
>

-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: Karaf CXF SCR REST example...

2019-11-20 Thread Christian Schneider
The elegant way is to use the Aries JAXRS Whiteboard.

Christian

Am Mi., 20. Nov. 2019 um 00:54 Uhr schrieb Ranx0r0x <
regis...@bradleejohnson.com>:

> I noticed that when I stopped/uninstalled the bundle the CXF endpoint was
> still up and the bundle couldn't be reinstalled. By saving the Server and
> destroying it on @Deactivate it correctly went away. There may be a more
> elegant or better way to do this but it may be that it should be part of
> the
> sample code.
>
> import org.apache.cxf.BusFactory;
> import org.apache.cxf.endpoint.Server;
> import org.apache.cxf.jaxrs.JAXRSServerFactoryBean;
> import org.osgi.service.component.annotations.Activate;
> import org.osgi.service.component.annotations.Component;
> import org.osgi.service.component.annotations.Deactivate;
> import org.osgi.service.component.annotations.Reference;
>
> @Component
> public class RestServiceBootstrap {
>
>     // Injected via DS so the service is available before activate() runs
>     @Reference
>     private MyInjectedService injectedService;
>     private Server server;
>
>     @Activate
>     public void activate() throws Exception {
>         System.out.println("Activate the MemberServiceImpl");
>         JAXRSServerFactoryBean bean = new JAXRSServerFactoryBean();
>         bean.setAddress("/foo");
>         bean.setBus(BusFactory.getDefaultBus());
>         bean.setServiceBean(new RestServiceImpl(injectedService));
>         server = bean.create();
>     }
>
>     @Deactivate
>     public void deactivate() {
>         // Destroy the CXF server so the endpoint goes away with the bundle
>         System.out.println("Deactivating server: " + server);
>         if (server != null) {
>             server.destroy();
>         }
>     }
> }
>
>
>
> --
> Sent from: http://karaf.922171.n3.nabble.com/Karaf-User-f930749.html
>


-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: A command like config:edit that will let you replace only some properties?

2019-11-11 Thread Christian Schneider
Maybe it is related to the fact that the original configuration is a
factory config.

Perhaps you created a new non-factory config that overrides the factory one.

Christian

Am Mo., 11. Nov. 2019 um 22:55 Uhr schrieb Steinar Bang :

> >>>>> Steinar Bang :
> >>>>> Jean-Baptiste Onofré :
>
> >> If you do:
>
> >> karaf@root()> config:edit my.config
> >> karaf@root()> config:property-set foo bar
> >> karaf@root()> config:update
>
> >> only foo will be changed, other properties are kept.
>
> >> Not sure I follow what you mean.
>
> > Hm... I thought that when I tried this last week the resulting file
> > only had those properties I had added with config:property-set,
> > ie. that the other properties had gone missing...?
>
> > Maybe I fooled myself while testing?  I will try again and keep track
> > of what I'm doing...:-)
>
> Nope, I tried it again now, and it didn't work for me.
>
> Platform: karaf 4.2.7, openjdk 11, debian 10.1 "buster", amd64
>
> Here's what I did:
>  1. I installed my application using
>  karaf@root()> feature:repo-add
> mvn:no.priv.bang.authservice/authservice/LATEST/xml/features
>  Adding feature url
> mvn:no.priv.bang.authservice/authservice/LATEST/xml/features
>  karaf@root()> feature:install user-admin-with-postgresql
>  karaf@root()>
>  2. This created the following file in etc/
>  -rw-r--r--  1 sb sb   250 Nov 11 22:45
> org.ops4j.datasource-authservice-production.cfg
>  3. The content of the org.ops4j.datasource-authservice-production.cfg
> file is:
>  osgi.jdbc.driver.name=PostgreSQL JDBC Driver
>  dataSourceName=jdbc/authservice
>  url=jdbc:postgresql:///authservice
>  user=karaf
>  password=karaf
>  ops4j.preHook=authservicedb
>  org.apache.karaf.features.configKey =
> org.ops4j.datasource-authservice-production
>  4. Then I tried replacing just the url property:
>  karaf@root()> config:edit org.ops4j.datasource-authservice-production
>  karaf@root()> config:property-set url "jdbc:postgresql:///ukelonn"
>  karaf@root()> config:update
>  karaf@root()>
>  5. The resulting file in the etc directory was much smaller
>  -rw-r--r--  1 sb sb33 Nov 11 22:50
> org.ops4j.datasource-authservice-production.cfg
>  6. The updated file content is just the url setting
>  url = jdbc:postgresql:///ukelonn
>
> Did I do something wrong?  Or should I report this as a bug in karaf JIRA?
>
>

-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: question on cxf bus creation in Karaf

2019-11-11 Thread Christian Schneider
Karaf has no direct dependency on CXF, so there is no special handling.
CXF has some implicit rules about bus creation though. As far as I recall
it creates a single default bus if you do not specifically set a bus by hand.

I propose to ask this on the CXF list.

Christian


Am Mi., 6. Nov. 2019 um 03:08 Uhr schrieb Scott Lewis :

> I have some code running in a non-Karaf OSGi framework (e.g. equinox
> framework with bundles) that creates and uses multiple CXF Bus
> instances...in order to register multiple CXFNonSpringJaxrsServlets that
> are isolated from one another.
>
> When run in Karaf with CXF 3.3.4, however, this code does not work the
> same...and I suspect it has something to do with how busses are created
> and used in Karaf.   I noticed there are a few Karaf commands (e.g.
> cxf:list-busses) but I don't see from the documentation how cxf busses
> are created/added at runtime.
>
> Any insights about what is different about CXF in Karaf as opposed to
> plain 'ol OSGi framework?  Any docs on using/configuring/extending CXF
> in Karaf specifically?
>
> Thanks in advance,
>
> Scott
>
>
>

-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: A command like config:edit that will let you replace only some properties?

2019-11-10 Thread Christian Schneider
Be careful though. After each such command the config is provided to the
application. So if you want to change more than one property in one go then
the approach from JB is better.

Christian

Am So., 10. Nov. 2019 um 15:55 Uhr schrieb Steinar Bang :

> >>>>> Christian Schneider :
>
> > If you only want to change a single property you can also do this in one
> > step:
> > config:property-set -p my.config foo bar
>
> Thanks!  I will try this.
>
>

-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: A command like config:edit that will let you replace only some properties?

2019-11-10 Thread Christian Schneider
If you only want to change a single property you can also do this in one
step:
config:property-set -p my.config foo bar

Am So., 10. Nov. 2019 um 05:53 Uhr schrieb Jean-Baptiste Onofré <
j...@nanthrax.net>:

> Hi Steinar,
>
> If you do:
>
> karaf@root()> config:edit my.config
> karaf@root()> config:property-set foo bar
> karaf@root()> config:update
>
> only foo will be changed, other properties are kept.
>
> Not sure I follow what you mean.
>
> Regards
> JB
>
> On 09/11/2019 17:58, Steinar Bang wrote:
> > Is there a command similar to config:edit that will let you replace some
> > properties of an existing config instead of replacing all of them?
> >
> > With config:edit you need to do a config:property-set for all of the
> > existing properties of an existing config, even the properties that
> > haven't changed.
> >
> > Thanks!
> >
> >
> > - Steinar
> >
>
> --
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com
>


-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: Define Deployment Sequence of Activator Bundles in Karaf

2019-11-05 Thread Christian Schneider
Does the problem only happen when you put the bundles into the deploy
folder while karaf is running or does it also happen when you first put all
bundles into the deploy folder and then start karaf?



Am Mo., 4. Nov. 2019 um 15:18 Uhr schrieb Kirti Arora <
kirti.ar...@hotwaxsystems.com>:

> Hello,
>
> I am new to Karaf. I have three bundles in my Karaf_Home/deploy directory
> and
> bundles are dependent on each other. So, I want to define a specific
> sequence of
> bundle deployment in Karaf to avoid runtime ClassNotFoundException.
>
> Can someone please guide me, how can I define bundle deployment sequence
> in Karaf?
>
> Thanks,
> Kirti Arora
>


-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: Create my own Whiteboard/Application with extensions

2019-10-30 Thread Christian Schneider
I created an example of an application that uses jax-rs-whiteboard.

https://github.com/cschneider/osgi-best-practices/blob/master/backend/src/main/java/net/lr/tasklist/resource/TaskResource.java

It uses the jackson extension. As you see the only thing you need in your
code is this:
@Component(service = TaskResource.class)
@JaxrsResource
@Produces(MediaType.APPLICATION_JSON)
@JSONRequired
@Path("tasks")

Alternatively you can also publish a jax-rs Application as an OSGi service
but I did not try this.

Christian

Am Mi., 30. Okt. 2019 um 01:36 Uhr schrieb Oleg Cohen <
oleg.co...@assurebridge.com>:

> Greetings,
>
> I am using Aries HTTP Whiteboard in Karaf 4.2.7. All is working fine with
> the default Whiteboard instance. I would like to create my own Jax-RS
> Application with the same extensions that are configured in the default
> Whiteboard, for example jaxb-json one. Is there a convenient way to clone
> the default Whiteboard or create a fully-featured Application?
>
> I would appreciate any guidance in this!
>
> Thank you,
> Oleg
>
>
>
>

-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: Pluggable databases in apache karaf

2019-10-26 Thread Christian Schneider
Hi Steinar,

you do not have to build your own layer to use liquibase. Pax-jdbc-config
has a preHook that can help with this.

See
https://github.com/cschneider/Karaf-Tutorial/blob/master/liquibase/service/src/main/java/net/lr/tutorial/db/service/Migrator.java
and
https://github.com/cschneider/Karaf-Tutorial/blob/master/liquibase/org.ops4j.datasource-person.cfg#L4


The preHook attribute allows you to select a PreHook service by name.
This service is called before the DataSource is published.

Christian


Am Sa., 26. Okt. 2019 um 16:06 Uhr schrieb Steinar Bang :

> >>>>> j...@nanthrax.net:
>
> > Hi
> > Thanks for sharing, I will take a look.
>
> > The purpose is to have a service layer ?
>
> The purpose is to have a database that is ready to be used
> (ie. connected and with a schema) by the business logic code, and to
> make it easy to switch databases.
>
> > What's the difference with pax-jdbc and karaf JDBC feature ?
>
> It builds on top of them.
>
> pax-jdbc provides a DataSourceFactory.
>
> The components described in the blog post provides a DataSource.
>
> The addition to pax-jdbc is actually connecting to the database and
> using liquibase to set up/modify the schema and insert initial data.
>
> Ie. my application specific database DS components use pax-jdbc (in the
> case of derby) and the PostgreSQL driver to get the DataSourceFactory
>
> When the application specific database DS components receive a
> DataSourceFactory injection and they are activated, the first thing they
> do is get a DataSource from the DataSourceFactory. This DataSource is
> kept around while the DS component is active.
>
> Before exposing any service, the application specific database DS
> component will run liquibase scripts to set up/update the schema (and
> add initial data), and when the scripts have run expose the application
> specific database service.
>
>

-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: How to install bundle Offline

2019-10-10 Thread Christian Schneider
If you do not change the server in production at runtime, then I recommend
the karaf custom distribution approach.
The name may sound daunting, but it is much easier than it seems.
The result is a single deployment unit (tar.gz or zip) that you simply unpack
on your system and run. It contains karaf and your bundles and auto-starts
your needed features.

Christian

Am Mi., 9. Okt. 2019 um 23:50 Uhr schrieb Miroslav Beranič <
miroslav.bera...@mibesis.si>:

> I do it in production like had Christian described -- as production is
> "final" server, there are few servers before it - that have internet
> connection, so just rsync it for there to the production server once the
> "local" Maven repo is updated.
>
> But based on this thread, I will look into how to build a kar package, I
> did not put much effort into any other way as this works for me quite OK.
> Or even a Cave, but all this looks too complicated.
>
> Kind Regards,
> Miroslav
>
>
> V V sre., 9. okt. 2019 ob 18:13 je oseba Jean-Baptiste Onofré <
> j...@nanthrax.net> napisala:
>
>> Good point, or even copy the repository somewhere on the machine and add
>> in etc/org.ops4j.pax.url.mvn.cfg.
>>
>> Regards
>> JB
>>
>> On 09/10/2019 17:51, Christian Schneider wrote:
>> > Another approach is to start with an empty maven repo on a machine with
>> > an internet connection.
>> > Then install the feature in karaf. This will populate the local maven
>> repo.
>> > You can then copy the local maven repo to your machine without internet
>> > access.
>> > This is ideal for quick testing.
>> >
>> > For a production setup a custom karaf distro that contains karaf + your
>> > features is a good solution.
>> >
>> > Christian
>> >
>> > Am Mi., 9. Okt. 2019 um 13:57 Uhr schrieb Jean-Baptiste Onofré
>> > mailto:j...@nanthrax.net>>:
>> >
>> > Hi,
>> >
>> > you can:
>> >
>> > 1. populate the Karaf system folder manually or using mvn
>> > deploy:deploy-file or mvn install:install-file
>> > 2. you can create a kar on a machine connected to Internet and then
>> > deploy the kar
>> > 3. you can install cave on a machine with Internet access and use as
>> > a proxy
>> >
>> > Probably the quickest approach is 1, you just populate the Karaf
>> system
>> > folder (which is basically a embedded Maven repository).
>> >
>> > Regards
>> > JB
>> >
>> > On 09/10/2019 13:09, imranrazakhan wrote:
>> > > I have karaf deployed but with no internet connectivity, Now i
>> have to
>> > > install jolokia on it but getting below error
>> > >
>> > > @root()> feature:install jolokia
>> >
>> > > Error executing command: Error:
>> >
>> > >Error downloading mvn:org.jolokia/jolokia-osgi/1.3.5
>> > >
>> > > How i can install offline?
>> > >
>> > >
>> > >
>> > > --
>> > > Sent from:
>> http://karaf.922171.n3.nabble.com/Karaf-User-f930749.html
>> > >
>> >
>> > --
>> > Jean-Baptiste Onofré
>> > jbono...@apache.org <mailto:jbono...@apache.org>
>> > http://blog.nanthrax.net
>> > Talend - http://www.talend.com
>> >
>> >
>> >
>> > --
>> > --
>> > Christian Schneider
>> > http://www.liquid-reality.de
>> >
>> > Computer Scientist
>> > http://www.adobe.com
>> >
>>
>> --
>> Jean-Baptiste Onofré
>> jbono...@apache.org
>> http://blog.nanthrax.net
>> Talend - http://www.talend.com
>>
>
>
> --
> Miroslav Beranič
> MIBESIS
> miroslav.bera...@mibesis.si
> https://www.mibesis.si
>


-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: How to install bundle Offline

2019-10-09 Thread Christian Schneider
Another approach is to start with an empty maven repo on a machine with an
internet connection.
Then install the feature in karaf. This will populate the local maven repo.
You can then copy the local maven repo to your machine without internet
access.
This is ideal for quick testing.

For a production setup a custom karaf distro that contains karaf + your
features is a good solution.

Christian

Am Mi., 9. Okt. 2019 um 13:57 Uhr schrieb Jean-Baptiste Onofré <
j...@nanthrax.net>:

> Hi,
>
> you can:
>
> 1. populate the Karaf system folder manually or using mvn
> deploy:deploy-file or mvn install:install-file
> 2. you can create a kar on a machine connected to Internet and then
> deploy the kar
> 3. you can install cave on a machine with Internet access and use as a
> proxy
>
> Probably the quickest approach is 1, you just populate the Karaf system
> folder (which is basically a embedded Maven repository).
>
> Regards
> JB
>
> On 09/10/2019 13:09, imranrazakhan wrote:
> > I have karaf deployed but with no internet connectivity, Now i have to
> > install jolokia on it but getting below error
> >
> > @root()> feature:install jolokia
>
> > Error executing command: Error:
>
> >Error downloading mvn:org.jolokia/jolokia-osgi/1.3.5
> >
> > How i can install offline?
> >
> >
> >
> > --
> > Sent from: http://karaf.922171.n3.nabble.com/Karaf-User-f930749.html
> >
>
> --
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com
>


-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: Wiring issue

2019-09-17 Thread Christian Schneider
By default the maven bundle plugin introspects your classes and the jars in
the maven build.
If you need a package then an import is created. If the jar offering the
package has an OSGi Manifest then the version is taken from there; if not,
it uses the maven version as the package version.
From this version bnd computes the import range. By default the lower bound
is the exported version cut to major.minor, and the upper bound is the next
major version, exclusive.
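This default range computation can be sketched as follows. It is a simplification of bnd's default consumer policy; real bnd also handles qualifiers and lets you override the policy per import.

```java
public class ImportRange {
    // bnd's default consumer policy: exported version a.b.c becomes the
    // import range [a.b, a+1) - lower bound cut to major.minor, upper bound
    // the next major version, exclusive.
    static String toImportRange(String exportedVersion) {
        String[] parts = exportedVersion.split("\\.");
        int major = Integer.parseInt(parts[0]);
        String minor = parts.length > 1 ? parts[1] : "0";
        return "[" + major + "." + minor + "," + (major + 1) + ")";
    }

    public static void main(String[] args) {
        // The case from this thread: exported 1.1.1 yields [1.1,2),
        // which the exported version 2.0.1 does not satisfy.
        System.out.println(toImportRange("1.1.1")); // prints [1.1,2)
        System.out.println(toImportRange("2.0.1")); // prints [2.0,3)
    }
}
```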

You use this rs api to build your project:

  <dependency>
    <groupId>javax.ws.rs</groupId>
    <artifactId>jsr311-api</artifactId>
    <version>1.1.1</version>
  </dependency>

So for example for the javax.ws.rs import it determined version 1.1.1 from
the api jar. The jar has no OSGi metadata.
So this creates an import range of [1.1, 2).

The servicemix bundle for the spec exports this package at version 2.0.1,
which is outside that range.
You can configure the package imports by hand in the <Import-Package>
instruction of the maven-bundle-plugin.
This is fragile though.

One simple workaround is to use the servicemix spec bundle in your maven
build instead of the one you used.
JB mentioned that the exports of the servicemix bundle might be wrong.
Actually I do not know which exports would be correct. Spec bundles do not
always follow semantic versioning.
So if you use the servicemix jar, be prepared to readjust if JB fixes the
export version.

There is also a difference between your spec and the one karaf offers. You
used jsr311, which is JAX-RS 1.1, while karaf offers JAX-RS 2.1. Normally
of course this should be compatible.

@JB I have no idea what the correct exports should be. I hope this is
defined in some OSGi spec.

Christian

Am Mo., 16. Sept. 2019 um 23:34 Uhr schrieb Greg Logan <
gregorydlo...@gmail.com>:

> Hi Christian,
>
> That's the really odd part: Neither the module pom (6.5: [1], 6.6: [2]),
> nor the main pom (6.5: [3]. 6.6: [4]) make any restriction on the package
> version.  Is there a way to enumerate which bits are imposing which
> restrictions?
>
> G
>
> 1: https://github.com/opencast/opencast/blob/6.5/modules/engage-ui/pom.xml
> 2: https://github.com/opencast/opencast/blob/6.6/modules/engage-ui/pom.xml
> 3: https://github.com/opencast/opencast/blob/6.5/pom.xml#L794
> 4: https://github.com/opencast/opencast/blob/6.6/pom.xml#L794
>
>
> On Mon, Sep 16, 2019 at 2:20 AM Christian Schneider <
> ch...@die-schneider.net> wrote:
>
>> You seem to be using the spec bundle :
>> org.apache.servicemix.specs.jsr339-api-2.0.1
>> This has
>> Export-Package: javax.ws.rs;version="2.0.1"
>> This version is outside the range < 2 you are looking for in your bundle.
>> So the question is of course why a spec bundle exports a 2.0.1 version of
>> this package. Maybe there is an error in the servicemix bundle.
>>
>> As a quick fix you can allow a package import >2 in your ui bundle.
>>
>> Christian
>>
>>
>> Am Fr., 13. Sept. 2019 um 23:24 Uhr schrieb Greg Logan <
>> gregorydlo...@gmail.com>:
>>
>>> Hi all,
>>>
>>> I'm hitting a very strange wiring issue with our features.  The error
>>> I'm seeing look like this:
>>>
>>> >feature:install opencast-adminpresentation
>>> org.osgi.service.resolver.ResolutionException: Unable to resolve root:
>>> missing requirement [root] osgi.identity;
>>> osgi.identity=opencast-adminpresentation; type=karaf.feature;
>>> version="[0,0.0.0]";
>>> filter:="(&(osgi.identity=opencast-adminpresentation)(type=karaf.feature)(version>=0.0.0)(version<=0.0.0))"
>>> [caused by: Unable to resolve opencast-adminpresentation/0.0.0: missing
>>> requirement [opencast-adminpresentation/0.0.0] osgi.identity;
>>> osgi.identity=opencast-engage-ui; type=osgi.bundle;
>>> version="[6.6.0,6.6.0]"; resolution:=mandatory [caused by: Unable to
>>> resolve opencast-engage-ui/6.6.0: missing requirement
>>> [opencast-engage-ui/6.6.0] osgi.wiring.package;
>>> filter:="(&(osgi.wiring.package=javax.ws.rs
>>> )(version>=1.1.0)(!(version>=2.0.0)))"]]
>>> at
>>> org.apache.felix.resolver.ResolutionError.toException(ResolutionError.java:42)[6:org.apache.karaf.features.core:4.0.10]
>>> at
>>> org.apache.felix.resolver.ResolverImpl.doResolve(ResolverImpl.java:391)[6:org.apache.karaf.features.core:4.0.10]
>>> at
>>> org.apache.felix.resolver.ResolverImpl.resolve(ResolverImpl.java:377)[6:org.apache.karaf.features.core:4.0.10]
>>> at
>>> org.apache.felix.resolver.ResolverImpl.resolve(ResolverImpl.java:349)[6:org.apache.karaf.features.core:4.0.10]
>>> at
>>> org.apache.karaf.features.internal.region.SubsystemResolver.resolve(SubsystemResolver.jav

Re: Wiring issue

2019-09-16 Thread Christian Schneider
You seem to be using the spec bundle :
org.apache.servicemix.specs.jsr339-api-2.0.1
This has
Export-Package: javax.ws.rs;version="2.0.1"
This version is outside the range < 2 you are looking for in your bundle.
So the question is of course why a spec bundle exports a 2.0.1 version of
this package. Maybe there is an error in the servicemix bundle.

As a quick fix you can allow a package import >2 in your ui bundle.

Christian


Am Fr., 13. Sept. 2019 um 23:24 Uhr schrieb Greg Logan <
gregorydlo...@gmail.com>:

> Hi all,
>
> I'm hitting a very strange wiring issue with our features.  The error I'm
> seeing look like this:
>
> >feature:install opencast-adminpresentation
> org.osgi.service.resolver.ResolutionException: Unable to resolve root:
> missing requirement [root] osgi.identity;
> osgi.identity=opencast-adminpresentation; type=karaf.feature;
> version="[0,0.0.0]";
> filter:="(&(osgi.identity=opencast-adminpresentation)(type=karaf.feature)(version>=0.0.0)(version<=0.0.0))"
> [caused by: Unable to resolve opencast-adminpresentation/0.0.0: missing
> requirement [opencast-adminpresentation/0.0.0] osgi.identity;
> osgi.identity=opencast-engage-ui; type=osgi.bundle;
> version="[6.6.0,6.6.0]"; resolution:=mandatory [caused by: Unable to
> resolve opencast-engage-ui/6.6.0: missing requirement
> [opencast-engage-ui/6.6.0] osgi.wiring.package;
> filter:="(&(osgi.wiring.package=javax.ws.rs
> )(version>=1.1.0)(!(version>=2.0.0)))"]]
> at
> org.apache.felix.resolver.ResolutionError.toException(ResolutionError.java:42)[6:org.apache.karaf.features.core:4.0.10]
> at
> org.apache.felix.resolver.ResolverImpl.doResolve(ResolverImpl.java:391)[6:org.apache.karaf.features.core:4.0.10]
> at
> org.apache.felix.resolver.ResolverImpl.resolve(ResolverImpl.java:377)[6:org.apache.karaf.features.core:4.0.10]
> at
> org.apache.felix.resolver.ResolverImpl.resolve(ResolverImpl.java:349)[6:org.apache.karaf.features.core:4.0.10]
> at
> org.apache.karaf.features.internal.region.SubsystemResolver.resolve(SubsystemResolver.java:216)[6:org.apache.karaf.features.core:4.0.10]
> at
> org.apache.karaf.features.internal.service.Deployer.deploy(Deployer.java:263)[6:org.apache.karaf.features.core:4.0.10]
> at
> org.apache.karaf.features.internal.service.FeaturesServiceImpl.doProvision(FeaturesServiceImpl.java:1188)[6:org.apache.karaf.features.core:4.0.10]
> at
> org.apache.karaf.features.internal.service.FeaturesServiceImpl$1.call(FeaturesServiceImpl.java:1086)[6:org.apache.karaf.features.core:4.0.10]
> at
> java.util.concurrent.FutureTask.run(FutureTask.java:266)[:1.8.0_222]
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)[:1.8.0_222]
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)[:1.8.0_222]
> at java.lang.Thread.run(Thread.java:748)[:1.8.0_222]
>
> This occurs on three of our profiles, all involving the opencast-engage-ui
> bundle.  The odd part is that this appears in our 6.6 version, but *not*
> our 6.5 - but there's no part of the changeset between 6.5[1] and 6.6[2]
> which should be causing this.  We're using the servicemix bundle[3], which
> is the same across both of our 6.5 and 6.6 versions.  The bundle headers
> for 6.5 look like this:
>
> >bundle:headers opencast-engage-ui
>
> Opencast :: engage-ui (345)
> ---
> Bnd-LastModified = 1560502504114
> Build-Jdk = 1.8.0_212
> Build-Number = 618eec6
> Built-By = lars
> Created-By = Apache Maven Bundle Plugin
> Http-Alias = /engage/ui
> Http-Classpath = /ui
> Http-Welcome = index.html
> Manifest-Version = 1.0
> Tool = Bnd-3.5.0.201709291849
>
> Bundle-Category = opencastproject
> Bundle-Description = Opencast is a media capture, processing, management
> and distribution system
> Bundle-DocURL = http://opencastproject.org/
> Bundle-License = http://www.osedu.org/licenses/ECL-2.0/ecl2.txt
> Bundle-ManifestVersion = 2
> Bundle-Name = Opencast :: engage-ui
> Bundle-SymbolicName = opencast-engage-ui
> Bundle-Vendor = The Opencast Project
> Bundle-Version = 6.5.0
>
> But I'm not sure how to get the headers for 6.6 since the feature won't
> even start :(
>
> Any clues about how to proceed here?
>
> Thanks,
> G
>
> 1: https://github.com/opencast/opencast/releases/tag/6.5
> 2: https://github.com/opencast/opencast/releases/tag/6.6
>
> 3: 
> mvn:org.apache.servicemix.specs/org.apache.servicemix.specs.jsr339-api-2.0.1/2.6.0
>


-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: Karaf-4.2.5 - Duplicate MySQL DataSourceFactories

2019-05-28 Thread Christian Schneider
> >>>>> Not sure if I am doing something wrong or if this is a known issue.
> >>>>> I am
> >>>>> using Karaf-4.2.5 with pax-jdbc-1.3.1 and I end up with two identical
> >>>>> DataSourceFactories and 2 identical Data Sources. One from the
> >>>>> mysql-5.1.34 Oracle bundle and one from the pax-jdbc-mysql adapter
> >>>>> bundle.
> >>>>>
> >>>>> When I use jdbc:ds-list I see 2 Datasources for ea. database and
> Karaf
> >>>>> even generates a warning msg that I have duplicate DataSources and
> that
> >>>>> I should check my config.
> >>>>>
> >>>>> I only have ONE config file for ea. database.
> >>>>>
> >>>>> Name   │ Product │ Version│ URL
>
> >>>>> │ Status
> >>>>>
> ───┼─┼┼─┼───
> >>>>> jdbc/database1 │ MySQL   │ 5.5.61-cll │
> >>>>>
> jdbc:mysql://p.q.r.s:3306/Schema?useSSL=false=convertToNull
> >>>>> │ OK
> >>>>> jdbc/database2 │ MySQL   │ 5.6.31-log │
> >>>>> jdbc:mysql://the_db_server:3306/schema?useSSL=false
> >>>>>
> >>>>> │ OK
> >>>>> jdbc/database2 │ MySQL   │ 5.6.31-log │
> >>>>> jdbc:mysql://the_db_server:3306/schema?useSSL=false
> >>>>>
> >>>>> │ OK
> >>>>> jdbc/databawe1 │ MySQL   │ 5.5.61-cll │
> >>>>>
> jdbc:mysql://p.q.r.s:3306/Schema?useSSL=false=convertToNull
> >>>>> │ OK
> >>>>>
> >>>>>   [pipe-jdbc:ds-list] WARN
> >>>>> org.apache.karaf.jdbc.internal.JdbcServiceImpl - Multiple JDBC
> >>>>> datasources found with the same service ranking for jdbc/myDB
> >>>>>
> >>>>>
> >>>>> [org.osgi.service.jdbc.DataSourceFactory]
> >>>>> -
> >>>>>  osgi.jdbc.driver.class = com.mysql.jdbc.Driver
> >>>>>  osgi.jdbc.driver.name = com.mysql.jdbc
> >>>>>  osgi.jdbc.driver.version = 5.1.34
> >>>>>  service.bundleid = 172
> >>>>>  service.id <http://service.id> <http://service.id>
> >>>>> <http://service.id> = 415
> >>>>>  service.scope = singleton
> >>>>> *Provided by :
> >>>>>  Oracle Corporation's JDBC Driver for MySQL (172)*
> >>>>> Used by:
> >>>>>  OPS4J Pax JDBC Config (12)
> >>>>>
> >>>>> [org.osgi.service.jdbc.DataSourceFactory]
> >>>>> -
> >>>>>  osgi.jdbc.driver.class = com.mysql.jdbc.Driver
> >>>>>  osgi.jdbc.driver.name = mysql
> >>>>>  service.bundleid = 235
> >>>>>  service.id <http://service.id> <http://service.id>
> >>>>> <http://service.id> = 420
> >>>>>  service.scope = singleton
> >>>>> *Provided by :
> >>>>>  OPS4J Pax JDBC MySQL Driver Adapter (235)*
> >>>>> Used by:
> >>>>>  OPS4J Pax JDBC Config (12)
> >>>>>
> >>>>>
> >>>>> Kind Regards,
> >>>>>
> >>>>> Erwin
> >>>>
> >>>> --
> >>>> Jean-Baptiste Onofré
> >>>> jbono...@apache.org <mailto:jbono...@apache.org>
> >>>> <mailto:jbono...@apache.org>
> >>>> http://blog.nanthrax.net
> >>>> Talend - http://www.talend.com
> >>>
> >>
> >> --
> >> Jean-Baptiste Onofré
> >> jbono...@apache.org <mailto:jbono...@apache.org>
> >> http://blog.nanthrax.net
> >> Talend - http://www.talend.com
> >
>
> --
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com
>


-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: PAX-CDI producer as OSGI service

2019-05-28 Thread Christian Schneider
You can also take a look at Aries CDI and the OSGi CDI spec.

https://github.com/apache/aries-cdi
https://osgi.org/specification/osgi.enterprise/7.0.0/service.cdi.html

Christian

On Tue., May 28, 2019 at 00:03 keal wrote:

> Hi there,
>
> Does anyone have experience using pax-cdi?
> Is there any way to expose as a OSGI Service a CDI producer?
>
> pax-cdi version 0.5.0 had a @OsgiServiceProvider annotation, but on
> current version was removed.
>
> Unfortunately, pax-cdi docs are not up to date (
> https://github.com/ops4j/org.ops4j.pax.cdi/blob/master/pax-cdi-manual/src/main/asciidoc/index.adoc#jboss-weld
> ),
> any help is welcome.
>
> Thanks!
>


-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: What's the lifecycle of a bundle in a feature vs a bundle loaded directly by a feature? (Was: "No suitable driver found" after postgresql driver bundle reload)

2019-04-26 Thread Christian Schneider
Generally the karaf feature mechanism tries to load and start bundles only
once.
Unfortunately there are cases when bundles have to be refreshed and
restarted.

The main reason is an optional package import: a bundle may first be resolved
while the optional package is absent, and a newly installed feature may later
provide that package. In this case the bundle is refreshed to pick up the
package, which includes restarting the bundle.

The second reason is if a bundle depends on another bundle that is being
refreshed. In this case the bundle also needs to be refreshed.

In combination this leads to a lot of refreshes happening in Karaf.

When designing a bundle you can influence this by:
1. Avoiding optional dependencies.
2. Placing APIs in separate bundles, as APIs are typically used by many other
bundles while the implementations are not.
3. Keeping API bundles free of optional dependencies and having them depend on
as few other packages as possible.
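As an illustration of point 1, this is the kind of manifest header that causes late refreshes (the package name here is made up for the example):

```
Import-Package: com.example.metrics;resolution:=optional
```

If such a bundle resolves before com.example.metrics is available and a later feature provides it, the framework refreshes (and restarts) the bundle to wire the now-present package.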

Christian


On Sat., Apr. 27, 2019 at 07:23 Steinar Bang wrote:

> >>>>> Steinar Bang :
>
> > (I can think of one possible workaround: create a karaf feature that
> > loads the PostgreSQL driver and its dependencies, and load this directly
> > before loading any other features.  Then maybe PostgreSQL won't reload
> > and this problem won't occur)
>
> Here's a followup question: What's the lifecycle of a bundle in a
> feature vs a bundle loaded directly by a feature?
>
> What I mean, is: if I create a feature for the postgresql JDBC driver,
> and then require that feature, will the driver be just loaded and
> started once?
>
> Or will the PostgreSQL JDBC driver bundle be restarted once for every
> feature loading it, like it is today?
>
> Thanks!
>
>
> - Steinar
>
>

-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Behaviour of BundleState for scr

2019-04-04 Thread Christian Schneider
We are currently determining the BundleState for SCR similarly to Blueprint
and Spring.

A bundle is shown as waiting when a component is missing any service or
config.
https://github.com/apache/karaf/blob/master/scr/state/src/main/java/org/apache/karaf/scr/state/ScrBundleStateService.java

For diag I think this is totally fine but I think it often looks strange in
the bundle list "la".

At least in Sling, but I think also in many other Declarative Services based
projects, service dependencies or config dependencies are often used as a
kind of feature toggle.
So you might want to switch a backend or enable / disable functionality by
adding or removing config.
In this case it is strange when the bundle is listed as waiting while in
effect it is behaving completely normally.

So I see two ways of improving this:

1. Treat BundleState differently for scr and never report waiting because
of missing scr deps.
This would require some ugly special handling.

2. Simply show both the OSGi bundle state as well as the state from
BundleState service as two different columns in la.
We could name the column "diag state" or "injection state" to distinguish
from the OSGi bundle state.

WDYT?

Christian

-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: Bndtools & Karaf : the right way

2019-02-15 Thread Christian Schneider
In Maven projects you have to run mvn install to build your project into the
local repo.

So it is normal that saving a file does not lead to an update of the local
repo. The reason is that some Maven projects take quite a while to
build, so it would not be wise to do that after each save.

I think bndtools at least builds the jar on each save. (This should also
be true if you use the maven setup for your project instead of the
workspace one.) I am not sure if it also deploys to the local repo, but
maybe it does, as it is a cheap operation if the jar is already built. You
should ask on the bndtools list for clarification.

When you start your project in bndtools from a bndrun then I tested that
saving a file indeed leads to that bundle being updated in the running
process. This also works if bndtools uses the plain maven build.

I am planning to do a webinar with Ray about OSGi development. We will demo
how to debug with bndtools as well as with karaf there. So after the
preparation for it we should have some good information on how it actually works.

In the meantime, try out the new enRoute; it should not take long to
validate how bndtools behaves with the maven build.

Christian


On Fri., Feb. 15, 2019 at 14:11 Alex Weirig <alex.wei...@technolink.lu> wrote:

> Hi Christian,
>
> I'm not sure if I'm missing something obvious ... in my project I have a
> few bundles that have to be maven based (Vaadin related projects, so they
> need to build using maven).
>
> When I change code in my regular bndtools workspace related projects,
> whenever I save code it get's automatically build and deployed to my local
> maven repository.
>
> This is unfortunately not the case with the maven projects, I always have
> to manually build the project in order to make it deploy ... and the maven
> build is way slower than the bnd(tools) build... As I only have a couple of
> hours per week to focus on development, I don't want to spend that time
> waiting for maven :-)
>
> Is that different with the "maven build of bndtools" enroute is using...
> I'm ashamed but I still couldn't find the time to look into that new
> approach after Eclipse/OSGi Con last year ... so maybe I'm still
> complaining about things that are actually solved.
>
> Thanks
>
>
> Mat frëndleche Gréiss,
> Mit freundlichen Grüßen,
> Meilleures salutations,
> Kind regards,
>
> Alex Weirig
> Responsable Technique
> Ville de Luxembourg
> Service Enseignement
> Centre Technolink
> *Tel* +352 4796 - 6127
> *Fax* +352 42 88 81
> *Email* alex.wei...@technolink.lu
> www.vdl.lu // www.technolink.lu
>
> Centre Technolink
> 2, rue Charles de Tornaco
> L-2623 LUXEMBOURG
>
> On 15/02/2019 13:03, Christian Schneider wrote:
>
> I agree with Alex about using bundle:watch. It gives you a similar
> experience like bndtools once your bundles are running in karaf.
> To get the bundles running easier I propose you also create a karaf
> feature in your build.
>
> I also propose you move away from the bndtools workspace model and instead
> use the maven build of bndtools like enroute now shows. It is much nearer
> to how karaf projects are built.
>
> One other thing that might come handy is to start karaf with the "debug"
> argument. This opens karaf for remote debugging and allows you to also
> debug your bundles easily. Basically it is like running the bndtools
> starter in debug mode.
>
> Christian
>
> On Fri., Feb. 15, 2019 at 12:55 Alex Weirig <alex.wei...@technolink.lu> wrote:
>
>> Hi Kamil,
>>
>> let me try and see if this can already help you, it's very basic but
>> works really fine depending on the scope / size of your development project
>> ... this is based on bnd(tools) 4.0.0 but should still be valid in 4.1.0 I
>> guess.
>>
>> If you look at your build.bnd file in your bndtools workspace, make sure
>> you have the following plugin defined:
>>
>> -plugin.5.LocalMaven: \
>> aQute.bnd.repository.maven.provider.MavenBndRepository; \
>> name = *LocalMaven*
>>
>> then you define the buildRepo:
>>
>> -buildrepo: \
>> *LocalMaven*
>>
>> finally set some maven data:
>>
>> -pom: \
>> groupid=*your.group*,\
>> version =${versionmask;===;${@version}}-SNAPSHOT
>>
>>
>> Now when bnd(tools) builds your project it should end up in your local
>> maven repository (your home folder/.m2/repository/*your/group*). So no
>> need to gradle here.
>>
>> You can now run a karaf on your local machine and install your bundles
>> using:
>>
>> bundle:install mvn:your.group/your bundle name here/version here
>>
>

Re: Bndtools & Karaf : the right way

2019-02-15 Thread Christian Schneider
osgi-plugin)
> to create bundles and I used Gradle's maven publish plugin to publish them
> to Maven's local repo. Then I install them in Karaf one by one using
> bundle:install mvn:xxx/yyy/zzz command
> Problem:
> a) How to create set of bundles (feature) in Bnd?
> b) How to deploy this to Karaf without manually executing bundle:install
> commands?
>
> Thank you in advance,
> Kamil
>
>
> On Thu, Feb 14, 2019 at 1:37 PM Jean-Baptiste Onofré 
> wrote:
>
>> Hi,
>>
>> We didn't move forward a lot. I remember there was some discussion to
>> have a "Karaf exporter" in bndtools and I proposed my help on this.
>> I didn't move forward yet.
>>
>> Do you already know what you have in mind (if you could describe the use
>> case, that would be great) ?
>>
>> Regards
>> JB
>>
>> On 14/02/2019 13:20, kamilantlgc wrote:
>> > Dear Karaf User group,
>> >
>> > I have stumbled upon the exact problem - how to join Karaf and Bndtools
>> > together "the right way" (this conversation is the first result in
>> Google by
>> > the way: https://www.google.com/search?q=karaf+bnd).
>> >
>> > I've read the topic and was happy to see that Guillaume asked to fill
>> Jira
>> > issue.
>> > Then I've read with interest that JB is just "building the new SNAPSHOT
>> to
>> > test if the couple of issues".
>> > And then I navigated to the issue created by dleangen
>> > (https://issues.apache.org/jira/browse/KARAF-4160) just to see that
>> it's
>> > status is "Won't fix"...
>> >
>> > Anyway - does anybody on this group already figured it out how to join
>> Karaf
>> > and Bnd to play nicely together?
>> >
>> > Kind regards,
>> > Kamil
>> >
>> >
>> >
>> > --
>> > Sent from: http://karaf.922171.n3.nabble.com/Karaf-User-f930749.html
>> >
>>
>> --
>> Jean-Baptiste Onofré
>> jbono...@apache.org
>> http://blog.nanthrax.net
>> Talend - http://www.talend.com
>>
>

-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: Bndtools & Karaf : the right way

2019-02-15 Thread Christian Schneider
Hi Ray,

sounds good. Maybe we can plan this next week when we meet in Berlin.
I wonder if we could show karaf and bndtools variants in the same webinar
or do separate ones.

Another thing to show is deploying to kubernetes. That might also get some
attention.

Christian

On Thu., Feb. 14, 2019 at 14:04 Raymond Auge <raymond.a...@liferay.com> wrote:

> Maybe it might be worth in the next month or so if Christian and I could
> do a kind of webinar on the modern Maven way to do OSGi development.
>
> What do you think Christian, would you be up for something like that?
>
> - Ray
>
> On Thu, Feb 14, 2019, 07:59 Christian Schneider  wrote:
>
>> The new maven build in bndtools is now a plain maven build.
>> Of course the bndtools examples favour the bnd-maven-plugin instead of
>> the maven-bundle-plugin but that is more like left and right twix :-)
>>
>> The only real difference is how the bundles are then assembled into a
>> runable application.
>>
>> There bndtools uses a bndrun file which points to a repository (based on
>> maven now) and requirements. The output is a runnable jar.
>>
>> For karaf you create a feature with your bundles and dependencies
>> typically as feature dependencies. Then in a second step you can also have
>> a karaf custom distro to create a complete server (application).
>>
>> So it is now perfectly possible to produce a bndtools based assembly and
>> a karaf feature and optionally a custom distro in the same build.
>>
>> The only sad thing is that the deployment descriptions are not yet
>> compatible. As both karaf and bndtools use a bundle repository under the
>> hood I think this can be improved.
>>
>> Christian
>>
>>> On Thu., Feb. 14, 2019 at 13:46 Jean-Baptiste Onofré <j...@nanthrax.net> wrote:
>>
>>> Hi Ray,
>>>
>>> does it mean that bundle can be created in bndtools, then upload in a
>>> Maven repository, and installed in Karaf ?
>>> Maybe the only missing piece is that bndtools generate a features XML to
>>> have a complete story.
>>>
>>> Regards
>>> JB
>>>
>>> On 14/02/2019 13:44, Raymond Auge wrote:
>>> > Perhaps it would be better, rather than switching to the BND Workspace
>>> > model to simply use the BND/Bndtools Maven/m2e support. It's very good
>>> > now and has parity in large part with the BND Workspace. The upcoming
>>> > release especially should break lots of barriers.
>>> >
>>> > Just a suggestion.
>>> >
>>> > - Ray
>>> >
>>> > On Thu, Feb 14, 2019, 07:37 Jean-Baptiste Onofré >> > <mailto:j...@nanthrax.net> wrote:
>>> >
>>> > Hi,
>>> >
>>> > We didn't move forward a lot. I remember there was some discussion
>>> to
>>> > have a "Karaf exporter" in bndtools and I proposed my help on this.
>>> > I didn't move forward yet.
>>> >
>>> > Do you already know what you have in mind (if you could describe
>>> the use
>>> > case, that would be great) ?
>>> >
>>> > Regards
>>> > JB
>>> >
>>> > On 14/02/2019 13:20, kamilantlgc wrote:
>>> > > Dear Karaf User group,
>>> > >
>>> > > I have stumbled upon the exact problem - how to join Karaf and
>>> > Bndtools
>>> > > together "the right way" (this conversation is the first result
>>> in
>>> > Google by
>>> > > the way: https://www.google.com/search?q=karaf+bnd).
>>> > >
>>> > > I've read the topic and was happy to see that Guillaume asked to
>>> > fill Jira
>>> > > issue.
>>> > > Then I've read with interest that JB is just "building the new
>>> > SNAPSHOT to
>>> > > test if the couple of issues".
>>> > > And then I navigated to the issue created by dleangen
>>> > > (https://issues.apache.org/jira/browse/KARAF-4160) just to see
>>> > that it's
>>> > > status is "Won't fix"...
>>> > >
>>> > > Anyway - does anybody on this group already figured it out how to
>>> > join Karaf
>>> > > and Bnd to play nicely together?
>>> > >
>>> > > Kind regards,
>>> > > Kamil
>>> > >
>>> > >
>>> > >
>>> > > --
>>> > > Sent from:
>>> http://karaf.922171.n3.nabble.com/Karaf-User-f930749.html
>>> > >
>>> >
>>> > --
>>> > Jean-Baptiste Onofré
>>> > jbono...@apache.org <mailto:jbono...@apache.org>
>>> > http://blog.nanthrax.net
>>> > Talend - http://www.talend.com
>>> >
>>>
>>> --
>>> Jean-Baptiste Onofré
>>> jbono...@apache.org
>>> http://blog.nanthrax.net
>>> Talend - http://www.talend.com
>>>
>>
>>
>> --
>> --
>> Christian Schneider
>> http://www.liquid-reality.de
>>
>> Computer Scientist
>> http://www.adobe.com
>>
>>

-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: Bndtools & Karaf : the right way

2019-02-14 Thread Christian Schneider
The new maven build in bndtools is now a plain maven build.
Of course the bndtools examples favour the bnd-maven-plugin instead of the
maven-bundle-plugin but that is more like left and right twix :-)

The only real difference is how the bundles are then assembled into a
runnable application.

There bndtools uses a bndrun file which points to a repository (based on
maven now) and requirements. The output is a runnable jar.

For karaf you create a feature with your bundles and dependencies typically
as feature dependencies. Then in a second step you can also have a karaf
custom distro to create a complete server (application).
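Such a feature is a small XML file; a skeleton might look like this (all names and versions here are placeholders, not from the thread):

```xml
<features name="myapp-features"
          xmlns="http://karaf.apache.org/xmlns/features/v1.4.0">
  <feature name="myapp" version="1.0.0">
    <!-- depend on other features, e.g. declarative services -->
    <feature>scr</feature>
    <bundle>mvn:com.example/myapp-api/1.0.0</bundle>
    <bundle>mvn:com.example/myapp-impl/1.0.0</bundle>
  </feature>
</features>
```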

So it is now perfectly possible to produce a bndtools based assembly and a
karaf feature and optionally a custom distro in the same build.

The only sad thing is that the deployment descriptions are not yet
compatible. As both karaf and bndtools use a bundle repository under the
hood I think this can be improved.

Christian

On Thu., Feb. 14, 2019 at 13:46 Jean-Baptiste Onofré <j...@nanthrax.net> wrote:

> Hi Ray,
>
> does it mean that bundle can be created in bndtools, then upload in a
> Maven repository, and installed in Karaf ?
> Maybe the only missing piece is that bndtools generate a features XML to
> have a complete story.
>
> Regards
> JB
>
> On 14/02/2019 13:44, Raymond Auge wrote:
> > Perhaps it would be better, rather than switching to the BND Workspace
> > model to simply use the BND/Bndtools Maven/m2e support. It's very good
> > now and has parity in large part with the BND Workspace. The upcoming
> > release especially should break lots of barriers.
> >
> > Just a suggestion.
> >
> > - Ray
> >
> > On Thu, Feb 14, 2019, 07:37 Jean-Baptiste Onofré  > <mailto:j...@nanthrax.net> wrote:
> >
> > Hi,
> >
> > We didn't move forward a lot. I remember there was some discussion to
> > have a "Karaf exporter" in bndtools and I proposed my help on this.
> > I didn't move forward yet.
> >
> > Do you already know what you have in mind (if you could describe the
> use
> > case, that would be great) ?
> >
> > Regards
> > JB
> >
> > On 14/02/2019 13:20, kamilantlgc wrote:
> > > Dear Karaf User group,
> > >
> > > I have stumbled upon the exact problem - how to join Karaf and
> > Bndtools
> > > together "the right way" (this conversation is the first result in
> > Google by
> > > the way: https://www.google.com/search?q=karaf+bnd).
> > >
> > > I've read the topic and was happy to see that Guillaume asked to
> > fill Jira
> > > issue.
> > > Then I've read with interest that JB is just "building the new
> > SNAPSHOT to
> > > test if the couple of issues".
> > > And then I navigated to the issue created by dleangen
> > > (https://issues.apache.org/jira/browse/KARAF-4160) just to see
> > that it's
> > > status is "Won't fix"...
> > >
> > > Anyway - does anybody on this group already figured it out how to
> > join Karaf
> > > and Bnd to play nicely together?
> > >
> > > Kind regards,
> > > Kamil
> >     >
> > >
> > >
> > > --
> > > Sent from:
> http://karaf.922171.n3.nabble.com/Karaf-User-f930749.html
> > >
> >
> > --
> > Jean-Baptiste Onofré
> > jbono...@apache.org <mailto:jbono...@apache.org>
> > http://blog.nanthrax.net
> > Talend - http://www.talend.com
> >
>
> --
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com
>


-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: Karaf and Transaction Control Service Specification

2019-02-05 Thread Christian Schneider
I am pretty sure you can use it.

There is an example in enroute:
https://github.com/osgi/osgi.enroute/tree/master/examples/microservice/rest-app-jpa

If you use the same bundles it should work.
We can create a karaf feature in the aries-tx-control repo for it to make
it easier to install.

Christian

On Tue., Feb. 5, 2019 at 16:34 Alex Soto wrote:

> Hi,
>
> Is it possible to use Transaction Control Service Specification with Karaf?
>
>
> https://osgi.org/specification/osgi.cmpn/7.0.0/service.transaction.control.html
>
>
> Any good example or tutorial?
>
> (I am currently using Aries JPA Template, but I would like to move on to
> use the standard)
>
> Best regards,
> Alex soto
>
>
>
>
>

-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: Features install options...

2019-02-04 Thread Christian Schneider
Be aware that the configs in features are normally only meant for default
configs.
You do not want your DB password to be in a feature XML that is deployed to
a Maven repo.
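To make the point concrete: a <config> element in a feature like the one below ships inside the feature XML, so anything in it ends up in the Maven repo; real credentials belong in an etc/*.cfg file on the target system instead (all values here are placeholders):

```xml
<!-- Default config only: this file is published, so no real passwords -->
<feature name="my-datasource" version="1.0.0">
  <config name="org.ops4j.datasource-mydb">
    osgi.jdbc.driver.name = mysql
    dataSourceName = mydb
    url = jdbc:mysql://localhost:3306/mydb
    user = CHANGE_ME
  </config>
</feature>
```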

Christian

On Tue., Feb. 5, 2019 at 00:26 Ranx wrote:

> One thing I realized as I was looking over the docs is that what I'm
> looking
> for is something like how the install="auto" works when you hot deploy a
> features file by dropping it in the deploy folder. I'm not really wanting
> to
> install anything that way but it would be nice if I could have some similar
> functionality when doing it via repo-add or a features command line. Just
> some way to make the features autoload but retain their identity so I can
> uninstall them if I wish.
>
>
>
>
> --
> Sent from: http://karaf.922171.n3.nabble.com/Karaf-User-f930749.html
>


-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: OPS4J Pax JDBC + Derby = Duplicate DataSource

2019-02-01 Thread Christian Schneider
he datasource cfg file ?
> >>>>> Using a feature or by dropping the file in the etc folder ?
> >>>>>
> >>>>> I will check when my build is complete.
> >>>>>
> >>>>> Regards
> >>>>> JB
> >>>>>
> >>>>> On 01/02/2019 18:48, Alex Soto wrote:
> >>>>>> Hello,
> >>>>>>
> >>>>>> I am experiencing a problem where /pax-jdbc-config/ (version 1.3.0)
> is
> >>>>>> creating duplicate Derby Data Sources.  I copy the data source
> >>>>>> configuration file to  Karaf's /etc/ /directory, after a while I
> >>>>>> can see
> >>>>>> it created two identical Data Sources.
> >>>>>>
> >>>>>> The configuration file: /org.ops4j.datasource-querier.cfg/
> >>>>>>
> >>>>>>osgi.jdbc.driver.name = derby
> >>>>>>dataSourceName=querier
> >>>>>>url=jdbc:derby:derby-data/querier;create=true
> >>>>>>
> >>>>>>user=enquery
> >>>>>>password=
> >>>>>>databaseName=querier
> >>>>>>
> >>>>>>ops4j.preHook=querierDB
> >>>>>>
> >>>>>>
> >>>>>> It creates duplicate Data Sources:
> >>>>>>
> >>>>>>karaf@root()> service:list DataSource
>
> >>>>>>
>
> >>>>>>
>
> >>>>>>
> >>>>>>[javax.sql.DataSource]
> >>>>>>--
> >>>>>> databaseName = querier
> >>>>>> dataSourceName = querier
> >>>>>> felix.fileinstall.filename =
> >>>>>>file:/Users/asoto/test/etc/org.ops4j.datasource-querier.cfg
> >>>>>> ops4j.preHook = querierDB
> >>>>>> osgi.jdbc.driver.name = derby
> >>>>>> osgi.jndi.service.name = querier
> >>>>>> password = enquery
> >>>>>> pax.jdbc.managed = true
> >>>>>> service.bundleid = 169
> >>>>>> service.factoryPid = org.ops4j.datasource
> >>>>>> service.id
> >>>>>> <http://service.id/> <http://service.id/> <http://service.id
> >>>>>> <http://service.id/>
> >>>>>> <http://service.id/>> = 238
> >>>>>> service.pid =
> >>>>>> org.ops4j.datasource.b161e768-e5f8-40bb-b19f-40cab9111316
> >>>>>> service.scope = singleton
> >>>>>> url = jdbc:derby:derby-data/querier;create=true
> >>>>>> user = enquery
> >>>>>>Provided by :
> >>>>>> OPS4J Pax JDBC Config (169)
> >>>>>>Used by:
> >>>>>> JPA (22)
> >>>>>>[javax.sql.DataSource]
> >>>>>>--
> >>>>>> databaseName = querier
> >>>>>> dataSourceName = querier
> >>>>>> felix.fileinstall.filename
> >>>>>>= file:/Users/asoto/test/etc/org.ops4j.datasource-querier.cfg
> >>>>>> ops4j.preHook = querierDB
> >>>>>> osgi.jdbc.driver.name = derby
> >>>>>> osgi.jndi.service.name = querier
> >>>>>> password = enquery
> >>>>>> pax.jdbc.managed = true
> >>>>>> service.bundleid = 169
> >>>>>> service.factoryPid = org.ops4j.datasource
> >>>>>> service.id
> >>>>>> <http://service.id/> <http://service.id/> <http://service.id
> >>>>>> <http://service.id/>
> >>>>>> <http://service.id/>> = 282
> >>>>>> service.pid =
> >>>>>> org.ops4j.datasource.b161e768-e5f8-40bb-b19f-40cab9111316
> >>>>>> service.scope = singleton
> >>>>>> url = jdbc:derby:derby-data/querier;create=true
> >>>>>> user = enquery
> >>>>>>Provided by :
> >>>>>> OPS4J Pax JDBC Config (169)
> >>>>>>
> >>>>>>
> >>>>>> Also:
> >>>>>>
> >>>>>>karaf@root()> jdbc:ds-list
> >>>>>>Name│ Product  │ Version   │ URL
>
> >>>>>>  │ Status
> >>>>>>
>
> ┼──┼───┼───┼───
> >>>>>>querier │ Apache Derby │ 10.13.1.1 - (1765088) │
> >>>>>>jdbc:derby:derby-data/querier │ OK
> >>>>>>querier │ Apache Derby │ 10.13.1.1 - (1765088) │
> >>>>>>jdbc:derby:derby-data/querier │ OK
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>> I think it must be specific to Derby, as the same works fine if the
> >>>>>> driver is MariaDB.
> >>>>>> Any clues?
> >>>>>>
> >>>>>> Best regards,
> >>>>>> Alex soto
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>
> >>>>> --
> >>>>> Jean-Baptiste Onofré
> >>>>> jbono...@apache.org
> >>>>> <mailto:jbono...@apache.org> <mailto:jbono...@apache.org>
> >>>>> http://blog.nanthrax.net
> >>>>> <http://blog.nanthrax.net/> <http://blog.nanthrax.net/>
> >>>>> Talend - http://www.talend.com
> >>>>> <http://www.talend.com/> <http://www.talend.com/>
> >>>>
> >>>
> >>> --
> >>> Jean-Baptiste Onofré
> >>> jbono...@apache.org <mailto:jbono...@apache.org>
> >>> http://blog.nanthrax.net <http://blog.nanthrax.net/>
> >>> Talend - http://www.talend.com <http://www.talend.com/>
> >>
> >
>
> --
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com
>


-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: How can I create a JMS connection factory for OracleAQ?

2019-01-16 Thread Christian Schneider
You showed how you get the DataSource from JNDI but not how you created it.
Do you use a pax-jdbc config for this?
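For reference, a pax-jdbc-config style setup would be a file like etc/org.ops4j.datasource-av-ds.cfg with properties along these lines (the values are placeholders, not taken from the thread):

```
osgi.jdbc.driver.name = oracle
dataSourceName = av-ds
url = jdbc:oracle:thin:@//dbhost:1521/SERVICE
user = scott
password = secret
```

Note that if the published DataSource wraps connections in a pool, OracleAQ's cast to oracle.jdbc.internal.OracleConnection can fail with the ClassCastException shown in the quoted message.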

Christian

On Wed., Jan. 16, 2019 at 14:44 fred wrote:

>
> You can create just one DataSource and expose it as an OSGi service. There
> are several ways to do that, like pax-jdbc.
>
> After that, you can just reference the dataSource at your bundle and
> connect
> to the OracleAQ.
>
> --
>
> I think that's what I tried to do, but it did not work:
> javax.sql.DataSource dataSource = (javax.sql.DataSource) ((new
> javax.naming.InitialContext()).lookup("osgi:service/jdbc/av-ds"));
> jmsConnectionFactory =
> oracle.jms.AQjmsFactory.getQueueConnectionFactory(dataSource);
>
> Errormessage when accessing JMS:
> Error creating the db_connection; nested exception is
> java.lang.ClassCastException:
> org.apache.commons.dbcp.PoolingDataSource$PoolGuardConnectionWrapper cannot
> be cast to oracle.jdbc.internal.OracleConnection
>
> To me, it seems that by getting the datasource as an OSGi service, it is
> weirdly wrapped or proxied in a way that the Oracle QueueConnectionFactory
> cannot work with (and I don't know how to get around this).
>
>
>
> --
> Sent from: http://karaf.922171.n3.nabble.com/Karaf-User-f930749.html
>


-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: First user contributed story from openhab .. please participate too

2019-01-11 Thread Christian Schneider
Many thanks, Serge. These personal comments on the stories page improve the
stories a lot.

Christian

On Tue., Jan. 8, 2019 at 16:23 Serge Huber wrote:

> Actually I think I'll just put them in this thread, that'll go much
> faster for me :)
>
> For Apache Unomi:
>
> "Apache Unomi was directly created using Apache Karaf as a runtime. We
> needed a highly modular, high performance, scalable and open source
> runtime for building this implementation. Apache Unomi has to store
> millions of profiles and handle real-time processing of rules based on
> events generated from visitor behavior into an ElasticSearch backend.
> OSGi fit the bill perfectly for the technology architecture and Apache
> Karaf came out of the box with everything we needed to have a
> full-fledged and reliable runtime" - Serge Huber, Apache Unomi creator
>
> For Jahia :
>
> "Jahia has used various technologies at its core and has been using
> OSGi for its module system for quite some time now. Initially, we used
> Apache Felix as a runtime, but quite quickly we realized we needed
> more powerful logging, better provisioning, clustering and monitoring
> support. Apache Karaf fit the bill perfectly and to the point where we
> are using it as much as possible, especially since the community
> around the project has been very helpful and addressing a lot of our
> needs. We can therefore strongly recommend Apache Karaf as a runtime."
> - Serge Huber, Jahia CTO.
>
> Regards,
>   Serge...
>
> On Tue, Jan 8, 2019 at 1:44 PM Jean-Baptiste Onofré 
> wrote:
> >
> > That would be great, Serge ;)
> >
> > Ready to help if you need ;)
> >
> > Regards
> > JB
> >
> > On 08/01/2019 12:14, Serge Huber wrote:
> > > Actually I think I'd like to improve the Apache Unomi and Jahia ones,
> > > to give more details about how Karaf is used and why it was chosen.
> > >
> > > I'll try to see if I can get a PR ready.
> > >
> > > On Tue, Jan 8, 2019 at 12:05 PM Christian Schneider
> > >  wrote:
> > >>
> > >> I just added a first user story with some personal experiences from
> the openhab founder Kai Kreuzer to our stories page:
> > >> https://karaf.apache.org/stories.html
> > >>
> > >> If you are also using Apache Karaf please take the chance to give us
> some feedback and at the same time promote your project with a link and
> logo.
> > >>
> > >> You can either do a PR for the karaf-site repo:
> https://github.com/apache/karaf-site/blob/trunk/src/main/webapp/stories.html
> > >> or simply let us know of your story on the list or by personal mail.
> We will make sure your story is told.
> > >>
> > >> Best
> > >> Christian
> > >>
> > >> --
> > >> --
> > >> Christian Schneider
> > >> http://www.liquid-reality.de
> > >>
> > >> Computer Scientist
> > >> http://www.adobe.com
> > >>
> >
> > --
> > Jean-Baptiste Onofré
> > jbono...@apache.org
> > http://blog.nanthrax.net
> > Talend - http://www.talend.com
>


-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


First user contributed story from openhab .. please participate too

2019-01-08 Thread Christian Schneider
I just added a first user story with some personal experiences from the
openhab founder Kai Kreuzer to our stories page:
https://karaf.apache.org/stories.html

If you are also using Apache Karaf please take the chance to give us some
feedback and at the same time promote your project with a link and logo.

You can either do a PR for the karaf-site repo:
https://github.com/apache/karaf-site/blob/trunk/src/main/webapp/stories.html
or simply let us know of your story on the list or by personal mail. We
will make sure your story is told.

Best
Christian

-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: JAX-RS Whiteboard and CXF Mechanics

2018-12-28 Thread Christian Schneider
You can try an Application subclass.

See
https://osgi.org/specification/osgi.cmpn/7.0.0/service.jaxrs.html#service.jaxrs.resource.services

@Component(service = Application.class)
@JaxrsName("myApp")
@JaxrsApplicationBase("foo")
public class MyApplication extends Application {
}

Cheers

Christian

On Fri., Dec. 28, 2018 at 16:24 Oliver Schweitzer <
oschweit...@me.com> wrote:

> Hi,
>
> I have an existing REST JAX-RS application based on CXF mechanics; that is,
> the resources are lifecycle-managed by a programmatically set up
> JAXRSServerFactoryBean, which itself gets started and stopped by an
> immediate @Component.
>
> Now I want to build new entrypoints (and eventually migrate old ones)
> using the JAX-RS whiteboard, as discussed e.g. here
> http://karaf.922171.n3.nabble.com/Aries-JAX-RS-Whiteboard-td4054440.html,
> and go full dynamically discovered declarative services with my REST API
> and application.
>
> The basic setup (without extensions) works very well. Now I want to
> replace all my programmatic CXF configuration with @Components annotated
> with @JaxrsExtension, use only JAX-RS/Whiteboard mechanics and no more
> CXF specialties if possible.
>
> Now for the questions:
>
> 1. Some of the providers I configure programmatically implement JAX-RS
> interfaces and are provided by CXF or other frameworks,
> e.g. CrossOriginResourceSharingFilter, MultipartProvider, 
> JacksonJsonProvider, WebApplicationExceptionMapper.
> How do I make these known to my JAX-RS Whiteboard  (Whiteboard service)
> ?Just derive a new class, annotate as a Component and JaxrsExtension?
>
> 2. Swagger2Feature, JAASLoginInterceptor, FastInfosetFeature, GZIPFeature,
>  CORSPreflightInterceptor (my code, extends
> AbstractPhaseInterceptor) are provided by CXF and implement CXF
> interfaces. I gather I can't use these directly with the Whiteboard? What
> do I do with those?  Reimplement/port to JAX-RS?
>
> Best regards,
>
> Oliver
>
>
>
>
>

-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: Blueprint, DS and CDI State of the Art...

2018-12-04 Thread Christian Schneider
The whole Camel Blueprint support, as well as Camel's OSGi integration in
general, is kind of shoehorned on top of a non-OSGi system.
It works, but it is a bit fragile.

Christian

On Tue., Dec. 4, 2018 at 16:03 Ryan Moquin <
fragility...@gmail.com> wrote:

> I didn't see this thread until now, but just wanted to add that I use
> blueprint with Camel all the time very successfully. There were a few
> hiccups that were resolved around injecting configurations into the tests
> for a specific PID, but in the testing stuff was put together nicely as
> well.
>
> I'd be curious what specific problems you have with it since I was able to
> figure it out pretty easily from the Camel documentation.
>
> I would however like to see some of these hurdles in general get
> addressed.  I'd like to see open source projects in general modularize
> themselves.  When I need to use one that just half-@ssed some osgi
> support or no osgi support but split packages in their jars, it's quite
> frustrating.  I love writing code using osgi.  The power you have is tough
> to wield at first, but you can do some awesome stuff when you figure it out
> (and workaround some of the current hurdles still be hashed out).
>
> Ryan
>
> Ryan
>
> On Wed, Nov 21, 2018, 1:34 PM Raymond Auge  wrote:
>
>>
>>
>> On Wed, Nov 21, 2018 at 11:23 AM Ranx  wrote:
>>
>>> Raymond,
>>>
>>> Thanks for the information. I was probably unaware of the RI because it
>>> isn't listed on the Aries website
>>
>>
>> Good point! So I did some updates to the main page [1]. I will try to
>> make further updates to other pages as time permits.
>>
>> [1] http://aries.apache.org/
>>
>>
>>> and the only annotations I was aware of
>>> from there were the Blueprint annotations. Also, PAX CDI has been
>>> installed
>>> in Fuse for some time now although in the 6.x version (Karaf 2.x) it was
>>> only the RC so i refrained from using it for production code. Fuse 7 is
>>> currently Karaf 4.2.x and has the 1.2 version of PAX CDI installed as a
>>> default.
>>>
>>> I think I saw a presentation you gave at Eclipsecon Europe (on Youtube)
>>> on
>>> the work you were doing with CDI and OSGi/J2EE. There seemed to be a lot
>>> of
>>> work going on there for interoperability with J2EE and not just as OSGi.
>>
>>
> >> So there's nothing really specific to Java EE per se other than to
>> ensure that OSGi CDI Integration could naturally accommodate other CDI
>> Portable Extensions in a friendly, portable way. As such, integration of
>> Java EE specs could be accomplished without any hacks. This model could be
>> used just as easily to make your own features available to your CDI bundles.
>>
>>
>>> For
>>> me the J2EE part isn't as relevant but if the OSGi service dynamism and
>>> injection and wire up work correctly that works for me.
>>>
>>
>> Perfect, so there's nothing for you to worry about because this is the
>> base model.
>>
>>
>>>
>>> Now to get Red Hat to embrace it and it'll be golden.
>>>
>>
>> Sure, let's see what we can do! ;)
>>
>> - Ray
>>
>>
>>>
>>>
>>>
>>> --
>>> Sent from: http://karaf.922171.n3.nabble.com/Karaf-User-f930749.html
>>>
>>
>>
>> --
>> *Raymond Augé* <http://www.liferay.com/web/raymond.auge/profile>
>>  (@rotty3000)
>> Senior Software Architect *Liferay, Inc.* <http://www.liferay.com>
>>  (@Liferay)
>> Board Member & EEG Co-Chair, OSGi Alliance <http://osgi.org>
>> (@OSGiAlliance)
>>
>

-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: Aries JAX-RS Whiteboard

2018-12-03 Thread Christian Schneider
Pretty cool .. sorry I missed that.

Christian

On Mon., Dec. 3, 2018 at 08:33 Jean-Baptiste Onofré <
j...@nanthrax.net> wrote:

> Hi,
>
> yes, it's what I said in my previous e-mail: Karaf example + features in
> Aries.
>
> Regards
> JB
>
> On 03/12/2018 08:27, Christian Schneider wrote:
> > Hi JB,
> >
> > can you add the jax-rs-whiteboard feature to the aries repo? I think it
> > makes sense to have it with the jax-rs code so it can be used with
> > different karaf versions.
> >
> > Christian
> >
> > On Mon., Dec. 3, 2018 at 07:39 Jean-Baptiste Onofré
> > <j...@nanthrax.net> wrote:
> >
> > Hi Markus,
> >
> > yes I will push the PRs (Karaf example with feature repo + Aries
> > features repo that will replace the one in Karaf example) later
> today.
> >
> > Regards
> > JB
> >
> > On 03/12/2018 06:03, Markus Rathgeb wrote:
> > > Hi JB,
> > >
> > > as I'm creating the Aries JAXRS feature for Karaf, I'm
> > currently using
> > > the one from Aries.
> > >
> > >
> > > could you share your feature?
> > >
> > > Best regards,
> > > Markus
> >
> > --
> > Jean-Baptiste Onofré
> > jbono...@apache.org
> > http://blog.nanthrax.net
> > Talend - http://www.talend.com
> >
> >
> >
> > --
> > --
> > Christian Schneider
> > http://www.liquid-reality.de
> >
> > Computer Scientist
> > http://www.adobe.com
> >
>
> --
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com
>


-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: Aries JAX-RS Whiteboard

2018-12-02 Thread Christian Schneider
Hi JB,

can you add the jax-rs-whiteboard feature to the aries repo? I think it
makes sense to have it with the jax-rs code so it can be used with
different karaf versions.

Christian

On Mon., Dec. 3, 2018 at 07:39 Jean-Baptiste Onofré <
j...@nanthrax.net> wrote:

> Hi Markus,
>
> yes I will push the PRs (Karaf example with feature repo + Aries
> features repo that will replace the one in Karaf example) later today.
>
> Regards
> JB
>
> On 03/12/2018 06:03, Markus Rathgeb wrote:
> > Hi JB,
> >
> > as I'm creating the Aries JAXRS feature for Karaf, I'm currently
> using
> > the one from Aries.
> >
> >
> > could you share your feature?
> >
> > Best regards,
> > Markus
>
> --
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com
>


-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: Aries

2018-11-27 Thread Christian Schneider
I understand that you are seeking a more standard way than karaf features
to deploy parts of an application. Indeed subsystems look like a good way
at first. Unfortunately they add a lot of complexity to a system. So almost
no one uses them.

Currently there are two major ways of packaging an application:
- karaf features (uses repository + requirements under the covers). A
feature repo is described in xml. The bundles from all the required
features form the repository. The bundles with dependency=false form the
requirements.
- repository + requirements based approach like used by bnd (without
features). They currently use a pom file to describe a repository +
requirements in a bndrun file.
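In feature XML terms, the first approach above looks roughly like the following sketch (the Maven coordinates are made up). The plain bundle entry forms a requirement; the dependency="true" entry is only part of the backing repository and gets installed just when something requires it:

```xml
<!-- Sketch of a feature repo; group/artifact ids are invented -->
<features name="example-repo" xmlns="http://karaf.apache.org/xmlns/features/v1.4.0">
  <feature name="my-app" version="1.0.0">
    <!-- required bundle: part of the requirements -->
    <bundle>mvn:org.example/my-app-impl/1.0.0</bundle>
    <!-- dependency bundle: repository only, resolved on demand -->
    <bundle dependency="true">mvn:org.example/some-lib/1.0.0</bundle>
  </feature>
</features>
```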

So I agree it would be great to have a more standard way to package
applications. I discussed with JB that we could make more explicit use of
repositories for karaf features. The idea is to describe karaf features
using a backing repository + required bundles for each feature. We could
describe the repository for the feature in a pom and refer to it in the
feature repo file. The features would then only contain the required
bundles.

This approach would provide a repository in pom form for all karaf features
that is then also usable by bnd for packaging. So projects like aries would
only need to provide one common form of feature description.

Besides this there is a standardisation effort at the OSGi Alliance for
features. Currently the work in progress there looks more like Karaf 2
features, so it is not usable for Karaf, but maybe in the next iteration a
repository-based approach will be considered.

Christian


On Tue., Nov. 27, 2018 at 21:56 Leschke, Scott <
slesc...@medline.com> wrote:

> It wasn’t really a dev request per se, more of a curiosity question as to
> whether something along those lines was being considered as it would seem
> to make the implementations more easily consumable in a variety of OSGi
> environments.  My primary interest is in Karaf which is why I guess I
> targeted this list. Perhaps I should have thought that through better.
>
>
>
> As for how something like that were structured, I don’t know really.  I
> only have passing familiarity with the Subsystem spec and that it sort of
> overlaps and extends what Karaf Features do, at least to my knowledge. My
> take is that a Karaf Feature commonly maps to an OSGi service spec.
> implementation, even if the names don’t match exactly
>
>
>
> I readily admit that I could be grossly mistaken on that.
>
>
>
> Scott
>
>
>
> *From:* David Jencks 
> *Sent:* Tuesday, November 27, 2018 2:08 PM
> *To:* user@karaf.apache.org
> *Subject:* Re: Aries
>
>
>
> I’m somewhat curious how you decided on this karaf list for a Dev request
> for Aries.
>
> I’m more curious how a feature subsystem would help deploying an aries
> osgi service implementation. I haven’t looked for several years at how
> Aries sub projects divide up their artifact functionality, but I’d hope
> that all the spec functionality, and api, would be from a single bundle,
> with, possibly additional bundles for extensions.  If this is how a project
> is structured, how does a feature subsystem make deployment easier? If not,
> would it make more sense to adopt such a structure than to imitate it with
> a feature subsystem?
>
> Thanks
>
> David Jencks
>
> Sent from my iPhone
>
>
> On Nov 27, 2018, at 11:27 AM, Leschke, Scott  wrote:
>
> I was wondering if there is a possibility that the Aries project would
> provide OSGi Feature Subsystems for each of the OSGi services they’ve
> implemented (with the exception of the subsystem spec of course).  There is
> a Karaf Feature for installing the Subsystem service so it would be nice if
> the remaining services were available as Feature Subsystems (or Karaf
> Features I guess but the former seems like a more neutral solution).
>
>
>
> Scott
>
>

-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: Troubles upgrading to Karaf-4.2.1/DOSGi-2.3.0

2018-11-04 Thread Christian Schneider
> Name                             │ Version │ Required │ State       │ Repository
> …                                │         │          │ Uninstalled │ cxf-3.2.0
> cxf-rs-security-oauth            │ 3.2.0   │          │ Uninstalled │ cxf-3.2.0
> cxf-rs-security-jose             │ 3.2.0   │          │ Uninstalled │ cxf-3.2.0
> cxf-rs-security-oauth2           │ 3.2.0   │          │ Uninstalled │ cxf-3.2.0
> cxf-jackson                      │ 3.2.0   │          │ Uninstalled │ cxf-3.2.0
> cxf-jsr-json                     │ 3.2.0   │          │ Uninstalled │ cxf-3.2.0
> cxf-tracing-brave                │ 3.2.0   │          │ Uninstalled │ cxf-3.2.0
> cxf-rs-description-swagger2      │ 3.2.0   │          │ Uninstalled │ cxf-3.2.0
> cxf-databinding-aegis            │ 3.2.0   │          │ Started     │ cxf-3.2.0
> cxf-databinding-jaxb             │ 3.2.0   │          │ Started     │ cxf-3.2.0
> cxf-features-clustering          │ 3.2.0   │          │ Started     │ cxf-3.2.0
> cxf-features-logging             │ 3.2.0   │          │ Started     │ cxf-3.2.0
> cxf-features-throttling          │ 3.2.0   │          │ Started     │ cxf-3.2.0
> cxf-features-metrics             │ 3.2.0   │          │ Started     │ cxf-3.2.0
> cxf-bindings-corba               │ 3.2.0   │          │ Started     │ cxf-3.2.0
> cxf-bindings-coloc               │ 3.2.0   │          │ Started     │ cxf-3.2.0
> cxf-transports-local             │ 3.2.0   │          │ Started     │ cxf-3.2.0
> cxf-transports-jms               │ 3.2.0   │          │ Started     │ cxf-3.2.0
> cxf-transports-udp               │ 3.2.0   │          │ Started     │ cxf-3.2.0
> cxf-transports-websocket-client  │ 3.2.0   │          │ Uninstalled │ cxf-3.2.0
> cxf-transports-websocket-server  │ 3.2.0   │          │ Uninstalled │ cxf-3.2.0
> cxf-javascript                   │ 3.2.0   │          │ Started     │ cxf-3.2.0
> cxf-frontend-javascript          │ 3.2.0   │          │ Started     │ cxf-3.2.0
> cxf-xjc-runtime                  │ 3.2.0   │          │ Started     │ cxf-3.2.0
> cxf-tools                        │ 3.2.0   │          │ Uninstalled │ cxf-3.2.0
> cxf                              │ 3.2.0   │ x        │ Started     │ cxf-3.2.0
> cxf-sts                          │ 3.2.0   │          │ Uninstalled │ cxf-3.2.0
> cxf-wsn-api                      │ 3.2.0   │          │ Uninstalled │ cxf-3.2.0
> cxf-wsn                          │ 3.2.0   │          │ Uninstalled │ cxf-3.2.0
> cxf-ws-discovery-api             │ 3.2.0   │ x        │ Started     │ cxf-3.2.0
> cxf-ws-discovery                 │ 3.2.0   │ x        │ Started     │ cxf-3.2.0
> cxf-bean-validation-core         │ 3.2.0   │          │ Uninstalled │ cxf-3.2.0
> cxf-bean-validation              │ 3.2.0   │          │ Uninstalled │ cxf-3.2.0
> cxf-jaxrs-cdi                    │ 3.2.0   │          │ Uninstalled │ cxf-3.2.0
> cxf-dosgi-common                 │ 2.3.0   │ x        │ Started     │ cxf-dosgi-2.3.0
> cxf-dosgi-provider-ws            │ 2.3.0   │ x        │ Started     │ cxf-dosgi-2.3.0
> cxf-dosgi-provider-rs            │ 2.3.0   │          │ Uninstalled │ cxf-dosgi-2.3.0
>


-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: Comparison between Karaf, Spring & NodeJS

2018-10-07 Thread Christian Schneider
omote Apache Karaf
> > >> > > with JB at ApacheCon and in Paris, I suggested to compare Karaf to
> > >> > > Spring in a grid format and publish it on the website.
> > >> > >
> > >> > > I wanted to get the ball rolling so I started a Google Spreadsheet
> > > here:
> > >> > >
> > >
> https://docs.google.com/spreadsheets/d/1Js5qTXXugEOsp-5kUYoUbCKP1xt1dUx8efWZcGbm6C4/edit?usp=sharing
> > >> > >
> > >> > > By default commenting is allowed, so please don't hesitate. The
> > >> > > entries with a question mark are answers I don't have.
> > >> > >
> > >> > > Also, the point is to make this as precise as possible, so it's
> quite
> > >> > > probable I made some mistakes because I might be biased towards
> Karaf
> > >> > > :)
> > >> > >
> > >> > > cheers,
> > >> > >   Serge...
> > >> > >
> > >> >
> > >> > --
> > >> > Jean-Baptiste Onofré
> > >> > jbono...@apache.org
> > >> > http://blog.nanthrax.net
> > >> > Talend - http://www.talend.com
> >
> > --
> > Jean-Baptiste Onofré
> > jbono...@apache.org
> > http://blog.nanthrax.net
> > Talend - http://www.talend.com
>


-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: Karaf pax exam test works locally but fails on travis-ci

2018-08-27 Thread Christian Schneider
I suspect pax exam does not find your built bundles. Maybe a non-default
local Maven repo location?


Steinar Bang  wrote on Sun., Aug. 26, 2018, 22:45:

> I've merged in a large change to master of my project[1], transforming a
> vaadin webapp into a react webapp.
>
> In the last commit on master before the merge, the pax exam test[2] runs
> as expected.
>
> After the merge the test[3] fails to start on travis-ci.
>
> The test after the merge runs fine when I run it locally, but as said,
> it fails to start on travis-ci.
>
> Does anyone have any idea how to debug this?  The travis-ci log doesn't
> say much other than that it times out waiting for the karaf node to
> start in the pax exam test, and I don't know how to get hold of the
> karaf.log after the build.
>
> Thanks!
>
>  - Steinar
>
>
> References;
> [1] 
> [2] <
> https://github.com/steinarb/ukelonn/blob/a3e9b928a781c78252c65a69c6298b433ae065aa/ukelonn.tests/src/test/java/no/priv/bang/ukelonn/tests/UkelonnServiceIntegrationTest.java#L54
> >
> [3] <
> https://github.com/steinarb/ukelonn/blob/5755236cc036bc256348dc6d7b38ff2c95748a97/ukelonn.tests/src/test/java/no/priv/bang/ukelonn/tests/UkelonnServiceIntegrationTest.java#L45
> >
>
>


OSGi and karaf in the cloud. Looking for some experiences and stories

2018-08-18 Thread Christian Schneider
I am currently looking into optimized ways to run OSGi applications in the
cloud.
I would like to describe what cloud-native OSGi could look like.

Apart from my own experiments, I would like to hear from other Karaf users.
Do you run OSGi and especially Karaf applications in the cloud?

How do you build your application? How do you do releases and deployments?
What do you do differently compared to a non-cloud setup?

Regards
Christian

-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: Creating an JDBC

2018-07-24 Thread Christian Schneider
You need a DataSourceFactory for this to work. As Ingres probably does not
offer one, you can create your own bundle with a class that implements
DataSourceFactory and returns a DataSource. This class then must be
published as a service with the above property.
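A rough sketch of that service registration in Blueprint follows; the implementation class name is made up, while the property key matches the filter in your log:

```xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
  <!-- Hypothetical bundle content: publish a DataSourceFactory for Ingres
       so that jdbc:ds-create / pax-jdbc can find it -->
  <bean id="ingresDsf" class="org.example.IngresDataSourceFactory"/>
  <service ref="ingresDsf" interface="org.osgi.service.jdbc.DataSourceFactory">
    <service-properties>
      <entry key="osgi.jdbc.driver.class" value="com.ingres.jdbc.IngresDataSource"/>
    </service-properties>
  </service>
</blueprint>
```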

Christian

On Tue., Jul. 24, 2018 at 18:02 Paul Spencer wrote:

> Karaf 4.2
>
> I am trying to create a JDBC datasource for a DBMS not natively supported
> by
> PAX-JDBC, specifically Ingres.   Below are the commands I am using to
> install the Ingres JDBC driver and create the datasource.  Based on the log
> files, the creation process is waiting on a service dependency.
>
> karaf@root()> bundle:install wrap:mvn:com.ingres.jdbc/iijdbc/9.2-3.4.10
> karaf@root()> jdbc:ds-create -url jdbc:ingres://localhost/dbname -u user
> -p
> password -dc com.ingres.jdbc.IngresDataSource myDS
> karaf@root()> log:display
> 11:47:35.465 INFO [CM Configuration Updater (Update:
> pid=org.ops4j.datasource.b91f90b0-b399-49ab-9f55-9ab522d24833)] Waiting for
> service dependency:
>
> (&(objectClass=org.osgi.service.jdbc.DataSourceFactory)(osgi.jdbc.driver.class=com.ingres.jdbc.IngresDataSource))
>
> How to I create a JDBC datasource using com.ingres.jdbc.IngresDataSource?
>
>
>
> --
> Sent from: http://karaf.922171.n3.nabble.com/Karaf-User-f930749.html
>


-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: aries remote service admin: Why needs consumer implementation?

2018-07-23 Thread Christian Schneider
In very simple cases it is possible to transfer JPA entities but it is not
a good practice to do so.

For a remote service it makes sense to have a separate DTO. Often you can
also tailor the DTO to the use case of the remote service. Like in the
service facade pattern.
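As a minimal sketch of that split (class and field names invented), the DTO is just a flat, serializable class; the JPA entity never leaves the server, and the mapping happens inside the service facade:

```java
import java.io.Serializable;

// Hypothetical DTO for a remote service; the JPA entity stays server-side
// and is mapped to this flat, serializable shape in the service facade.
public class CarDto implements Serializable {
    private static final long serialVersionUID = 1L;

    private final String model;
    private final int year;

    public CarDto(String model, int year) {
        this.model = model;
        this.year = year;
    }

    public String getModel() {
        return model;
    }

    public int getYear() {
        return year;
    }
}
```

Because the DTO is tailored to the remote call, it can stay stable even when the entity model changes.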

Christian

On Mon., Jul. 23, 2018 at 14:53 ceugster <
christian.eugs...@gmx.net> wrote:

> So if I get it right, I have theoretically two classes for Car, an entity
> Car
> and a DTO Car. Both have same fields and getters/setters. and as next step
> I
> use one class (instead of interface) car, that is used for both? Or is it
> better to separate each implementation (jpa entity and data transfer
> object)?
>
>
>
> --
> Sent from: http://karaf.922171.n3.nabble.com/Karaf-User-f930749.html
>


-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: aries remote service admin: Why needs consumer implementation?

2018-07-23 Thread Christian Schneider
rg.apache.felix.framework.BundleWiringImpl.doImplicitBootDelegation(BundleWiringImpl.java:1859)
> ~[?:?]
> at
>
> org.apache.felix.framework.BundleWiringImpl.tryImplicitBootDelegation(BundleWiringImpl.java:1788)
> ~[?:?]
> at
>
> org.apache.felix.framework.BundleWiringImpl.searchDynamicImports(BundleWiringImpl.java:1741)
> ~[?:?]
> at
>
> org.apache.felix.framework.BundleWiringImpl.findClassOrResourceByDelegation(BundleWiringImpl.java:1617)
> ~[?:?]
> at
>
> org.apache.felix.framework.BundleWiringImpl.access$200(BundleWiringImpl.java:80)
> ~[?:?]
> at
>
> org.apache.felix.framework.BundleWiringImpl$BundleClassLoader.loadClass(BundleWiringImpl.java:2053)
> ~[?:?]
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[?:?]
> at java.lang.Class.forName0(Native Method) ~[?:?]
> at java.lang.Class.forName(Class.java:348) ~[?:?]
> at
> java.io.ObjectInputStream.resolveClass(ObjectInputStream.java:686)
> ~[?:?]
> at
>
> org.apache.aries.rsa.provider.tcp.ser.BasicObjectInputStream.resolveClass(BasicObjectInputStream.java:56)
> ~[?:?]
> at
> java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1866)
> ~[?:?]
> at
> java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1749)
> ~[?:?]
> at
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2040)
> ~[?:?]
> at
> java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1571)
> ~[?:?]
> at
> java.io.ObjectInputStream.readObject(ObjectInputStream.java:431) ~[?:?]
> at java.util.ArrayList.readObject(ArrayList.java:797) ~[?:?]
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> ~[?:?]
> at
>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> ~[?:?]
> at
>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> ~[?:?]
> at java.lang.reflect.Method.invoke(Method.java:498) ~[?:?]
> at
> java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1158)
> ~[?:?]
> at
> java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2176)
> ~[?:?]
> at
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2067)
> ~[?:?]
> at
> java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1571)
> ~[?:?]
> at
> java.io.ObjectInputStream.readObject(ObjectInputStream.java:431) ~[?:?]
> at
>
> org.apache.aries.rsa.provider.tcp.TcpInvocationHandler.parseResult(TcpInvocationHandler.java:139)
> ~[?:?]
> at
>
> org.apache.aries.rsa.provider.tcp.TcpInvocationHandler.handleSyncCall(TcpInvocationHandler.java:111)
> ~[?:?]
> ... 54 more
>
> Thanks!
>
>
>
> --
> Sent from: http://karaf.922171.n3.nabble.com/Karaf-User-f930749.html
>


-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: Karaf Tutorials moved to new site https://cschneider.github.io/Karaf-Tutorial/

2018-07-05 Thread Christian Schneider
On Thu., Jul. 5, 2018 at 00:02 Jean-Baptiste Onofré <
j...@nanthrax.net> wrote:

> We have different perspective there. My standpoint is simpler: we need
> to help our users to start easily with Karaf.
>

Not sure if this is different from my view. I also want to help people to
start easily.

>
> Users don't care about some technologies, bnd or maven-bundle-plugin, or
> whatever: they just need turnkey examples.
>

This is exactly the point. New users take the examples as a starting point
for their software. So whatever we put in there will be what users keep
using for a long time.
That is why I think we need to have consistent and opinionated examples to
provide real best practices to avoid leading users on a way that turns out
to be a dead end.

Typically users will either choose Blueprint or DS and will not want to mix
both. My opinion is to simply not have Blueprint examples, but I am totally
fine if we have them, as long as they are separate from the DS ones. So I
propose the directories below examples be ds and blueprint. That should
prevent quite a bit of confusion.

I see one other problem with the parent. The examples use the karaf parent.
I think this is not good as users will want to copy the examples but they
will not want to keep the karaf parent.
Actually I am not sure if the examples must live in the karaf repo at all.
They are not strictly tied to the karaf release and often have a different
lifecycle. Maybe there could be a repo karaf-examples or karaf-tutorials.

Christian


Re: Karaf Tutorials moved to new site https://cschneider.github.io/Karaf-Tutorial/

2018-07-04 Thread Christian Schneider
Hi JB,

yes having some tutorials in the karaf docs makes sense.
I am not yet sure how to best structure these. The current examples are
often too simple to explain a certain technology. On the other hand I also
do not yet have a really good concept.

I would also like to focus on just declarative services. I do not see a big
future for blueprint.

Another thing is the way we build bundles in the examples. In my newer
tutorials I only use the bnd-maven-plugin.
I also define the exported packages simply by a package-info file with a
version annotation. I think this makes the OSGi configs a lot simpler and
avoids errors.
See here for an example:
https://github.com/apache/felix/blob/trunk/systemready/src/main/java/org/apache/felix/systemready/package-info.java

This config can be put in a parent pom for all modules of an example. So
indiviual modules do not need any OSGi config.
https://github.com/apache/felix/blob/trunk/systemready/pom.xml#L38-L58
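For illustration, such a parent-pom setup looks roughly like the fragment below. The plugin versions are assumptions; the maven-jar-plugin part makes Maven pick up the manifest that bnd-process generates:

```xml
<!-- Sketch of a parent pom build section; versions are assumptions -->
<build>
  <plugins>
    <plugin>
      <groupId>biz.aQute.bnd</groupId>
      <artifactId>bnd-maven-plugin</artifactId>
      <version>4.1.0</version>
      <executions>
        <execution>
          <goals>
            <goal>bnd-process</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-jar-plugin</artifactId>
      <configuration>
        <archive>
          <!-- use the OSGi manifest generated by bnd-process -->
          <manifestFile>${project.build.outputDirectory}/META-INF/MANIFEST.MF</manifestFile>
        </archive>
      </configuration>
    </plugin>
  </plugins>
</build>
```

With this in the parent, individual modules need no per-module OSGi configuration at all.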

I think we really need to bring the examples and tutorials to the next
level of best practices. As OSGi R7 support will arrive soon I plan to aim
at
R7 as target platform for upcoming tutorials (Similar to the new enroute).

Christian

On Wed., Jul. 4, 2018 at 11:54 Jean-Baptiste Onofré <
j...@nanthrax.net> wrote:

> Hi Christian
>
> Thanks for the update.
>
> Did you see the effort we are doing right now on examples that will be
> part of the distribution? Maybe it would make sense to have tutorials as
> part of the exemples ?
>
> Regards
> JB
> On Jul. 4, 2018, at 11:50, Christian Schneider  wrote:
>>
>> Recently I had quite a few issues with the availability of my homepage
>> http://liquid-reality.de .
>> So I am in the process of moving my whole site to github io pages. The
>> first and most important part is done now.
>>
>> The Tutorials for Apache Karaf now live at:
>> https://cschneider.github.io/Karaf-Tutorial/
>>
>> This has some advantages:
>> - The tutorials now live closer to the code (same repo)
>> - You can provide PRs for the tutorials
>> - The hosting by github should be a lot more stable
>>
>> Christian
>>
>> --
>> --
>> Christian Schneider
>> http://www.liquid-reality.de
>>
>> Computer Scientist
>> http://www.adobe.com
>>
>>

-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Karaf Tutorials moved to new site https://cschneider.github.io/Karaf-Tutorial/

2018-07-04 Thread Christian Schneider
Recently I had quite a few issues with the availability of my homepage
http://liquid-reality.de .
So I am in the process of moving my whole site to GitHub Pages. The
first and most important part is done now.

The Tutorials for Apache Karaf now live at:
https://cschneider.github.io/Karaf-Tutorial/

This has some advantages:
- The tutorials now live closer to the code (same repo)
- You can provide PRs for the tutorials
- The hosting by GitHub should be a lot more stable

Christian

-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: execute only at first startup?

2018-06-07 Thread Christian Schneider
How about storing the fact that the initialization is done in the data
itself?

I have seen this with Liquibase.
Liquibase is a tool that manages database updates. It stores the version of
the installed schema in a table.
So when you install a new version, it can apply the necessary updates and
then store the new version.
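That Liquibase-style idea can be sketched in a few lines of plain Java. This is only an illustration: the Map stands in for the version table that would live in the database, and the key name and update list are made up:

```java
import java.util.*;

public class VersionedInit {
    // Applies only the updates newer than the recorded version, then
    // records the new version (the store stands in for a DB table).
    public static int applyUpdates(Map<String, Integer> store,
                                   List<Runnable> updates) {
        int installed = store.getOrDefault("schema.version", 0);
        for (int v = installed; v < updates.size(); v++) {
            updates.get(v).run();
            store.put("schema.version", v + 1);
        }
        return store.getOrDefault("schema.version", 0);
    }
}
```

On the very first startup the store is empty, so every update runs; on later startups only the missing ones run, which also covers the plain "execute only at first startup" case.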

Best
Christian

Am Fr., 8. Juni 2018 um 03:42 Uhr schrieb Max Spring :

> Not sure I understand your notion of "service changed" :-)
> I use the term "service" in a colloquial sense. I don't mean "OSGi
> service".
>
> Let me try it that way:
>
> My release build produces a self-contained tarball of the entire code of
> my service.
> My "devops" automation (in Jenkins) deploys the tarball on a test VM
> together with the production data.
>
> After this deployment, the very first time the Karaf container starts up,
> I want to run some initialization logic.
> But only this very first time. Any subsequent Karaf container start up
> should not do this initialization any more.
>
> -Max
>
>
> On 06/07/2018 05:48 PM, Leschke, Scott wrote:
> > You mean the logic should only execute if the service is "changed", but
> not in the case where the service is stopped and restarted?
> >
> > -Original Message-
> > From: Max Spring [mailto:m2spr...@springdot.org]
> > Sent: Thursday, June 07, 2018 7:40 PM
> > To: user@karaf.apache.org
> > Subject: execute only at first startup?
> >
> > I've got a Karaf-based service.
> > Whenever I deploy a new revision of my service, I need to execute some
> code only at the very first startup.
> > I have this first-time functionality available as a Karaf command which
> I currently run manually each time right after startup after a new
> deployment.
> > I'd like to automate this.
> >
> > I'm thinking of using a marker file somewhere to indicate "first
> > startup".
> > I'd have a new bundle checking for this file when it starts up. When it
> > detects the file, the bundle executes my business logic initialization
> > and then deletes the marker file.
> >
> > Or, is there something better for this scenario?
> >
> > -Max
> >
>


-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: REST - Declarative Services

2018-05-28 Thread Christian Schneider
Aries JAX-RS should work. It is not yet released though. So currently there
is only a snapshot. A release should follow soon.

Another option is to use CXF-DOSGi. You can find an example below. It is
similar to Aries JAX-RS so a later switch should be easy.

https://github.com/apache/cxf-dosgi/tree/master/samples/rest

Christian

2018-05-28 19:40 GMT+02:00 Guenther Schmidt :

> Hello All,
>
> I’ve been developing services using Declarative Services for dependency
> injection and it was a breeze so far. Now I want to expose some of the
> functionality via a REST API and I’m stuck. So far I’ve deployed my bundles
> through bundle:install -s man: …. all very easy. But what should be simple,
> exposing this through REST is becoming difficult. There are tips out there
> suggesting to use Blueprint, which I don’t want, others seem to suggest
> that I need to create a “feature” package.
>
> Then there’s also the requirements to “feature” install cxf. That’s OK
> btw, I only have to do that once. But is there really no simple way to
> create a simple REST service using merely DS?
>
> Guenther
>
>
>


-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: pax-jdbc-config connection pool configuration

2018-05-15 Thread Christian Schneider
j.pax.jdbc.config:1.2.0]
> at org.apache.felix.cm.impl.helper.ManagedServiceFactoryTracker.updated(
> ManagedServiceFactoryTracker.java:159) [8:org.apache.felix.
> configadmin:1.8.16]
> at org.apache.felix.cm.impl.helper.ManagedServiceFactoryTracker.
> provideConfiguration(ManagedServiceFactoryTracker.java:93)
> [8:org.apache.felix.configadmin:1.8.16]
> at org.apache.felix.cm.impl.ConfigurationManager$UpdateConfiguration.run(
> ConfigurationManager.java:1792) [8:org.apache.felix.configadmin:1.8.16]
> at org.apache.felix.cm.impl.UpdateThread.run0(
> UpdateThread.java:141) [8:org.apache.felix.configadmin:1.8.16]
> at org.apache.felix.cm.impl.UpdateThread.run(UpdateThread.
> java:109) [8:org.apache.felix.configadmin:1.8.16]
> at java.lang.Thread.run(Thread.java:748) [?:?]
>
>
>
> How do I configure the various parameters of the connection pool?
>
>
> Best regards,
> Alex soto
>
>
>
>
>


-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Some tips and tricks for writing pax exam OSGi tests

2018-04-17 Thread Christian Schneider
I have just finished an example and docs of some tips and tricks I learned
for writing OSGi tests using pax exam.

Some highlights:

- Mock tests for DS components using Mockito
- Debug pax exam based tests like plain java code (edit, save, debug)
- Full support for hamcrest matchers
- Use Awaitility for polling asynchronous external systems
- Create test bundles with bnd and DS components on the fly using
TinyBundles
- Use logback in pax exam

See
https://github.com/cschneider/osgi-testing-example
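The Awaitility-style polling from the list boils down to a loop like the following plain-Java sketch (this is not Awaitility's API; the timeout handling and 50 ms poll interval are illustrative):

```java
import java.util.function.BooleanSupplier;

public class Poller {
    // Blocks until the condition becomes true or the timeout expires.
    public static void await(BooleanSupplier condition, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                throw new AssertionError(
                        "condition not met within " + timeoutMs + " ms");
            }
            Thread.sleep(50); // poll interval
        }
    }
}
```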

Cheers
Christian

-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: Karaf 4.2.0M2 issue with classpath

2018-03-21 Thread Christian Schneider
If you have control over that code then the TCCL is a good solution.

Christian

2018-03-21 16:54 GMT+01:00 bobanbp <boba...@gmail.com>:

> Hi Christian,
>
> Thanks for your reply. I tried your suggestion but without success. The
> only
> solution that works is to explicitly change class loader:
>
> ClassLoader tccl = Thread.currentThread().getContextClassLoader();
> Thread.currentThread().setContextClassLoader(this.getClass().getClassLoader());
> // ... code that uses the AWS SDK ...
> Thread.currentThread().setContextClassLoader(tccl);
>
>
>
> --
> Sent from: http://karaf.922171.n3.nabble.com/Karaf-User-f930749.html
>
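A safer variant of that swap (a sketch, not from the thread) restores the previous loader in a finally block, so it survives exceptions in the wrapped code:

```java
import java.util.concurrent.Callable;

public final class Tccl {
    // Runs the given work with the anchor class's classloader as the
    // thread context classloader, restoring the previous one afterwards.
    public static <T> T callWith(Class<?> anchor, Callable<T> work)
            throws Exception {
        Thread t = Thread.currentThread();
        ClassLoader old = t.getContextClassLoader();
        t.setContextClassLoader(anchor.getClassLoader());
        try {
            return work.call();
        } finally {
            t.setContextClassLoader(old); // always restore
        }
    }
}
```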



-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: Karaf + Hibernate + Oracle: missing dependency

2018-03-20 Thread Christian Schneider
The debug log message is an early warning that something might be wrong. It
says that Aries JPA needs a DataSourceFactory to initialize your
persistence unit.

As OSGi services can start in any order, this can be a temporary state
(if the DataSourceFactory is just not yet up). In your case it is a
permanent state, as you seem to be missing this service.
Blueprint reports this after a while, but the timeout is quite high. This is
why there is the debug log message.

Another way to get an early warning is to use the karaf diag command.

Now back to your original problem. You need to install a DataSourceFactory
OSGi service. Many databases already provide this in their driver jar.
Unsurprisingly Oracle does not.
You can solve this by installing the feature pax-jdbc-oracle.
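From the Karaf/ServiceMix shell this amounts to the following (the pax-jdbc version is illustrative; note that the Oracle driver jar itself still has to be installed as a bundle, since it is not on Maven Central):

```
karaf@root()> feature:repo-add mvn:org.ops4j.pax.jdbc/pax-jdbc-features/1.2.0/xml/features
karaf@root()> feature:install pax-jdbc-oracle
```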

Cheers
Christian

2018-03-20 15:35 GMT+01:00 GFO <guillaume.forg...@soprasteria.com>:

> Hello,
>
> I am trying to connect my bundles to a Oracle Database (10g) through
> ServiceMix.
>
> I have a bundle which contains my entities and my persistence.xml file in
> META-INF.
>
> Alongside, I have a DAO bundle in which I inject my entity manager.
>
> When I start my entities+persistence bundle, it has the state "Active". But
> when I look at the logs (I turned them into DEBUG), I have the following
> line : " org.apache.aries.jpa.container - 1.0.4 | The persistence unit
> my-unit in bundle my-bundle/0.0.1.SNAPSHOT cannot be registered because no
> DataSourceFactory service for JDBC driver oracle.jdbc.driver.OracleDriver
> exists.".
>
> You'll find the result of service:list DataSource and DataSourceFactory
> here: https://pastebin.com/H1UB4D4h. Is the log line "normal", given that
> it is at DEBUG level?
>
> Plus, when I launch my DAO bundle, it stays in "GRACE_PERIOD" and then
> "FAILED" states.
> In the logs, my DAO bundle seems to find the persistence infos from the
> other bundle : "Registering bundle bundle-dao_0.0.1.SNAPSHOT as a client of
> persistence unit my-unit with properties
> {org.apache.aries.jpa.context.type=TRANSACTION}.".
>
> But my DAO bundle seems to wait for a dependency : "Bundle
> bundle-dao/0.0.1.SNAPSHOT is waiting for dependencies
> [(&(&(org.apache.aries.jpa.proxy.factory=true)(osgi.unit.name
> =my-unit))(objectClass=javax.persistence.EntityManagerFactory))]".
>
> I don't know why.
>
> Here you'll find the logs of the launch of the DAO bundle :
> https://pastebin.com/YnVirQqm.
>
> Here is my persistence.xml (from entity+persistence bundle):
> https://pastebin.com/FY2s5AAT
>
> Here is my blueprint context (from DAO bundle):
> https://pastebin.com/WxpeBAte
>
> Here is the result of bundle:list -t 0 command:
> https://pastebin.com/2L80Ud6L
>
> Here is the MANIFEST of my DAO bundle: https://pastebin.com/fpYmALgg
>
>
> Please can you help me on this problem? I'm completely stuck. :(
>
> Thanks a lot!
>
>
>
> --
> Sent from: http://karaf.922171.n3.nabble.com/Karaf-User-f930749.html
>



-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: service to list all Karaf commands?

2018-03-13 Thread Christian Schneider
I am not sure it would be a good idea to tap into commands for a web UI.
The commands are very much tailored for usage in the shell.
You will also face another problem: commands in Karaf are not OSGi
services. They use custom annotations and are processed in a special way.

Instead I propose an approach similar to JB's example. Offer your
functionality as a plain OSGi service. Then, on top of it, build a layer of
commands that use the service and a layer of web UIs that use the same
service.
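A minimal sketch of that layering (all names are hypothetical; in Karaf the command class would additionally carry the shell annotations, and the service would be registered via DS or similar):

```java
public class Layering {
    // The plain service: this is what gets registered as an OSGi service.
    public interface UserAdmin {
        String createUser(String name);
    }

    public static class UserAdminImpl implements UserAdmin {
        public String createUser(String name) {
            return "created " + name;
        }
    }

    // Thin shell-command layer: parses arguments and delegates only.
    public static String userCreateCommand(UserAdmin admin, String arg) {
        return admin.createUser(arg);
    }

    // A REST/web layer would delegate to the very same service.
}
```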

Christian

2018-03-13 4:12 GMT+01:00 Max Spring <m2spr...@springdot.org>:

> Hi François,
>
> my own UI is a Web UI sitting on top of my own REST service.
> The UI part is actually not important for what I want to do.
>
> In essence, I want a programmatic way of listing all Karaf commands.
> Then I'll filter out just "my" Karaf commands (which implement my own
> interface in addition to extending OsgiCommandSupport).
>
> Ultimately, I want to have my own set of functions exposed as Karaf
> commands (for development time and admins) and exposed as my own type of
> "command" on the Web UI (via REST for regular users).
>
> I'm on Karaf 3.0.5, migrating to 4.1.x.
>
> -Max
>
>
>
> On 03/12/2018 07:56 PM, Francois Papon wrote:
>
>> Hi Max,
>>
>> What do you mean by your own UI from ? It's a terminal or a webUI ? You
>> are using a custom distribution of Karaf ?
>>
>> François
>>
>>
>> Le 13/03/2018 à 04:05, Max Spring a écrit :
>>
>>> I want to implement the ability to execute my own Karaf commands from
>>> my own UI.
>>> How can I list all command classes at runtime?
>>> Thanks!
>>> -Max
>>>
>>


-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: What feature is needed to make DS work?

2018-03-09 Thread Christian Schneider
You only need the scr feature.

Then you can use the scr:* commands to look into your components.
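For example, on Karaf 4 (scr:list then shows every DS component and its state, which quickly reveals unsatisfied references):

```
karaf@root()> feature:install scr
karaf@root()> scr:list
```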

How do you see that it does not work?

Maybe you use the wrong annotations or the wrong version of the bundle
plugin.

Christian

2018-03-09 19:18 GMT+01:00 Steinar Bang <s...@dod.no>:

> What karaf feature is needed to make Declarative Services work?
>
> I thought it was the "scr" feature, but that wasn't enough.
>
> Thanks!
>
>
> - Steinar
>
>


-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: How to start and immediately stop karaf?

2018-01-12 Thread Christian Schneider
The better way would be to create a custom distribution. As far as I know,
you can create the etc files at build time this way.
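A custom distribution is built with the karaf-maven-plugin and `<packaging>karaf-assembly</packaging>`; a rough pom fragment (version and feature list are illustrative):

```xml
<plugin>
  <groupId>org.apache.karaf.tooling</groupId>
  <artifactId>karaf-maven-plugin</artifactId>
  <version>4.1.4</version>
  <extensions>true</extensions>
  <configuration>
    <!-- features baked into the assembly at build time, so the
         generated etc/ files are final before first startup -->
    <bootFeatures>
      <feature>standard</feature>
      <feature>scr</feature>
    </bootFeatures>
  </configuration>
</plugin>
```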

Christian

2018-01-11 23:35 GMT+01:00 Steinar Bang <s...@dod.no>:

> How I start karaf and stop it immediately after it has started?
>
> The platform is karaf 4.1.4 on debian GNU/linux, on amd64.
> The karaf is unpacked from a tarball.
>
> The reason I want to do start and stop karaf, is to modify the etc files
> that are touched by a startup, so that they get the exact same md5
> checksum they get after karaf has modified them, before I package up
> karaf to become a .deb package.
>
> I'm trying to improve my deb package and the /etc/karaf files showed up
> as modified, even though I hadn't touched them.
>
> I eventually figured out that it was karaf touching them.  Some are
> changed a little (they get a lf at the end of the last line), and some
> are changed more.
>
> That the date changes doesn't change the files' behaviour in a .deb
> package.  But the content changing (and therefore the md5 checksum
> changing) affects the behaviour.
>
>


-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: Recommended CDI tool, Blueprint / DS / Dependency Management / Low level API?

2017-12-17 Thread Christian Schneider
I started with blueprint, but today I am almost exclusively using
declarative services. DS is a lot more dynamic than blueprint, so it causes
fewer issues in OSGi as it adapts better to changes. DS has a bit of a
learning curve but is very well worth the effort.

One issue with DS is that it does not have special support for JPA or CXF.
For JPA you will not be able to use the @Transactional annotation, but there
are good solutions for it like the Aries JPA JPATemplate or Aries
tx-control. For CXF you can use either CXF-DOSGi or the Aries JAX-RS
whiteboard.

See the OSGi enRoute introduction to DS:
http://enroute.osgi.org/services/org.osgi.service.component.html
You can also find some hints about DS here:
http://liquid-reality.de:8090/x/CYACAQ

Christian

2017-12-17 21:59 GMT+01:00 Guenther Schmidt <schmi...@gmail.com>:

> Hello All,
>
>
> what is the recommended CDI tool?
>
>
> Should I use
>
>- the low level API (BundleActivator),
>- Felix Dependency Management,
>- Blueprint,
>- or annotation based DS?
>
> I'd rather not use the first two options, i don't want to buy into
> anything non-standard.
>
>
> Guenther
>



-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: ActiveMQ not starting soon enough

2017-12-10 Thread Christian Schneider
Why can't your other bundles start without ActiveMQ?
What would happen if they did?

Christian

2017-12-08 14:13 GMT+01:00 smunro <stephen.ross.mu...@gmail.com>:

> Hello,
>
> I'm having a small issue where the ActiveMQ Service has not initialized in
> time for a bundle I have developed. With our own bundles, we can resolve
> ordering with declarative services, using @Reference, but I'm not sure how
> to achieve the same with ActiveMQ. Basically, I want to make sure the
> ActiveMQ bundle is actively up and running before any of my bundles,
> without
> messing with start orders
>
> Stephen
>
>
>
> --
> Sent from: http://karaf.922171.n3.nabble.com/Karaf-User-f930749.html
>



-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: Adding an @Activate to a DS bundle causes the bundle not to load

2017-12-05 Thread Christian Schneider
 full error message from karaf.log below.
> >
> > Thanks!
> >
> >
> > - Steinar
> >
> > Error message from karaf.log follows:
> >
> > 2017-12-04T20:28:57,555 | ERROR | Karaf local console user karaf |
> ShellUtil| 42 - org.apache.karaf.shell.core - 4.1.3
> | Exception caught while executing command
> > org.osgi.service.resolver.ResolutionException: Unable to resolve root:
> missing requirement [root] osgi.identity; 
> osgi.identity=sonar-collector-webhook;
> type=karaf.feature; version="[1.0.0.SNAPSHOT,1.0.0.SNAPSHOT]";
> filter:="(&(osgi.identity=sonar-collector-webhook)(type=
> karaf.feature)(version>=1.0.0.SNAPSHOT)(version<=1.0.0.SNAPSHOT))"
> [caused by: Unable to resolve sonar-collector-webhook/1.0.0.SNAPSHOT:
> missing requirement [sonar-collector-webhook/1.0.0.SNAPSHOT]
> osgi.identity; osgi.identity=no.priv.bang.sonar.sonar-collector-webhook;
> type=osgi.bundle; version="[1.0.0.SNAPSHOT,1.0.0.SNAPSHOT]";
> resolution:=mandatory [caused by: Unable to resolve
> no.priv.bang.sonar.sonar-collector-webhook/1.0.0.SNAPSHOT: missing
> requirement [no.priv.bang.sonar.sonar-collector-webhook/1.0.0.SNAPSHOT]
> osgi.service; effective:=active; filter:="(objectClass=org.osgi.service.jdbc.DataSourceFactory)"]]
> >at 
> > org.apache.felix.resolver.ResolutionError.toException(ResolutionError.java:42)
> ~[?:?]
> >at 
> > org.apache.felix.resolver.ResolverImpl.doResolve(ResolverImpl.java:391)
> ~[?:?]
> >at org.apache.felix.resolver.ResolverImpl.resolve(ResolverImpl.java:377)
> ~[?:?]
> >at org.apache.felix.resolver.ResolverImpl.resolve(ResolverImpl.java:349)
> ~[?:?]
> >at org.apache.karaf.features.internal.region.
> SubsystemResolver.resolve(SubsystemResolver.java:218) ~[?:?]
> >at 
> > org.apache.karaf.features.internal.service.Deployer.deploy(Deployer.java:291)
> ~[?:?]
> >at org.apache.karaf.features.internal.service.FeaturesServiceImpl.
> doProvision(FeaturesServiceImpl.java:1248) ~[?:?]
> >at org.apache.karaf.features.internal.service.
> FeaturesServiceImpl.lambda$doProvisionInThread$1(FeaturesServiceImpl.java:1147)
> ~[?:?]
> >at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:?]
> >at 
> > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> [?:?]
> >at 
> > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> [?:?]
> >at java.lang.Thread.run(Thread.java:748) [?:?]
> >
> >
> >
>



-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: JMS log4j appender

2017-11-25 Thread Christian Schneider
In your case you would have to make sure the pax-logging bundle has access
to the JMS API. So you would need to write a fragment for it.

An easier way is to use Apache Decanter with the JMS appender. It is
already well prepared for OSGi.
See:
https://karaf.apache.org/manual/decanter/latest-1/#_jms
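Installing it is two features: a log collector plus the JMS appender (this assumes the `decanter` feature-repo alias is available, as it is in recent Karaf versions; otherwise add the Decanter features repository by its full mvn URL):

```
karaf@root()> feature:repo-add decanter
karaf@root()> feature:install decanter-collector-log decanter-appender-jms
```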

Christian

2017-11-25 9:39 GMT+01:00 eric56 <equefel...@laposte.net>:

> Hi,
>
> I created this appender in my org.ops4j.pax.logging.cfg file:
> *log4j.appender.JMS=org.apache.log4j.net.JMSAppender
> log4j.appender.JMS.InitialContextFactoryName=org.apache.activemq.jndi.
> ActiveMQInitialContextFactory
> log4j.appender.JMS.ProviderURL=tcp://localhost:61616
> log4j.appender.JMS.TopicBindingName=logTopic
> log4j.appender.JMS.TopicConnectionFactoryBindingName=ConnectionFactory
> log4j.appender.JMS.Threshold=ERROR*
>
> I get this error message is:
> *Unexpected problem updating configuration org.ops4j.pax.logging
> java.lang.NoClassDefFoundError: javax/jms/JMSException*
>
> I installed this additionnal bundle:
> *bundle:install mvn:javax.jms/javax.jms-api/2.0.1*
>
> But it didn't help...
>
> Currently I have this bundle installed:
> *162 | Active   |  50 | 2.16.3| camel-jms
> 279 | Active   |  80 | 4.0.5 | Apache Karaf :: JMS :: Core
> 280 | Active   |  80 | 2.0.1 | JMS API
> *
> And these features:
*karaf@trun()> feature:list | grep -i JMS
cxf-transports-jms | 3.1.5            |   | Started | cxf-3.1.5        |
spring-jms         | 3.2.14.RELEASE_1 |   | Started | spring-4.0.5     | Spring 3.2.x JMS support
camel-jms          | 2.16.3           | x | Started | camel-2.16.3     |
jms                | 4.0.5            | x | Started | enterprise-4.0.5 | JMS service and commands*
>
> Could you help me ?
>
> Regards.
>
> Eric
>
>
>
> --
> Sent from: http://karaf.922171.n3.nabble.com/Karaf-User-f930749.html
>



-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: Pax JDBC DataSourceFactory connection pooling config

2017-11-21 Thread Christian Schneider
When I look into the source I see that the hikari pooling checks for the
prefix "hikari.". Maybe you can set a breakpoint there and check what it
actually does.

See:
https://github.com/ops4j/org.ops4j.pax.jdbc/blob/master/pax-jdbc-pool-hikaricp/src/main/java/org/ops4j/pax/jdbc/pool/hikaricp/impl/HikariPooledDataSourceFactory.java
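So a pax-jdbc-config file selecting the Hikari pool would look roughly like this (file name, driver, and values are illustrative):

```
# etc/org.ops4j.pax.jdbc.config-mydb.cfg
osgi.jdbc.driver.name = H2
dataSourceName = mydb
url = jdbc:h2:mem:mydb
pool = hikari
# keys with the "hikari." prefix are stripped and handed to HikariCP
hikari.maximumPoolSize = 1
hikari.minimumIdle = 0
```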

Christian

2017-11-20 22:47 GMT+01:00 Leschke, Scott <slesc...@medline.com>:

> How does one configure the underlying connection pool when using Pax JDBC
> DataSourceFactory?  I’ve been using this for a while and recently
> discovered it’s not behaving as I intended. I’m using Hikari as my CP, and
> want to configure the following Hikari properties:
>
>
>
> poolName
>
> maximumPoolSize
>
> minimumIdle
>
> idleTimeout
>
> maxLifetime
>
>
>
> I’ve been prefixing each of these “hikari.” (which I concluded was the
> proper way to do it some months ago), but it appears that Hikari is using
> defaults.
>
> When I configure as follows,
>
>
>
> hikari.poolName= Composite Enterprise Data
>
> hikari.maximumPoolSize = 1
>
> hikari.minimumIdle = 0
>
> hikari.idleTimeout = 2880
>
> hikari.maxLifetime = 0
>
>
>
> I immediately get 10 connections to the datastore, even before a
> connection is actually requested to run a query (Cisco Information Server
> (aka, Composite)).
>
> This would be the default behavior if none of the above get used.  I also
> tried prefixing with “pool.” btw (which makes more sense to me), but get
> the same behavior.
>
>
>
> Scott
>



-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: Aries Blueprint Annotations Feature

2017-11-05 Thread Christian Schneider
If you want to use blueprint with annotations, I propose using the
blueprint-maven-plugin:
http://aries.apache.org/modules/blueprint-maven-plugin.html
It creates plain blueprint XML at build time, so at runtime you do not need
any special support for it.
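A rough pom fragment for it (version illustrative); the plugin scans the annotated classes and writes the generated blueprint XML into the bundle:

```xml
<plugin>
  <groupId>org.apache.aries.blueprint</groupId>
  <artifactId>blueprint-maven-plugin</artifactId>
  <version>1.9.0</version>
  <executions>
    <execution>
      <goals>
        <goal>blueprint-generate</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <!-- package(s) to scan for annotated beans -->
    <scanPaths>
      <scanPath>org.example.myapp</scanPath>
    </scanPaths>
  </configuration>
</plugin>
```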

Christian

2017-11-05 11:52 GMT+01:00 JT <karaf-u...@avionicengineers.com>:

> Hi,
>
> It looks as though the Aries blueprint annotations feature has been
> removed in recent releases of Karaf, or is it possible to add it like came
> with 'repo-add' etc?
>
> thanks
>
> Kerry
>
>


-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: General question about DOSGi

2017-10-25 Thread Christian Schneider
Yes, this is correct. Actually, exporting and importing of services are
completely decoupled in Remote Service Admin.

When you offer a suitable OSGi service, it will be exported as a REST
service. The properties of the service are then also sent to the Discovery
implementation. If you just want to export the service, then you simply do
not use that discovery information.

On the client side, the Discovery information can be used to create a proxy
that is offered as an OSGi service and that acts as a REST client. Again,
this is completely decoupled from your REST service.

You can even feed Discovery information into the system when there is no
DOSGi REST service on the other side. This can be used to create a CXF
DOSGi proxy to a service that lives outside of OSGi.
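The export is driven purely by service properties; for a JAX-RS service they look roughly like this (keys as used in the CXF DOSGi 2.x samples; treat the exact keys and the address as assumptions and check the sample project):

```
service.exported.interfaces = *
service.exported.configs    = org.apache.cxf.rs
org.apache.cxf.rs.address   = /echo
```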

Christian

2017-10-24 9:24 GMT+02:00 Massimo Bono <massimobo...@gmail.com>:

> So, it's like saying:
>
> We know DOSGI implements RPC with REST-ful services, so we exploit that in
> order to create some rest webservices. Then, instead of query them from
> another OSGi container, we directly query them from the browser.
>
> Is my understanding correct?
>
> 2017-10-24 6:29 GMT+02:00 Jean-Baptiste Onofré <j...@nanthrax.net>:
>
>> Hi,
>>
>> CXF DOSGi implementation is based on CXF and exposes OSGi services as
>> REST service.
>>
>> That's an approach for DOSGi, but it's not the only one.
>>
>> In Cellar, you have another DOSGi implementation based on NIO/Hazelcast.
>> Another one is Eclipse RemoteService.
>>
>> Each has pros/cons.
>>
>> Anyway, the purpose of DOSGi is to provide remote service invocation. So,
>> a service is exposed on a node and used remotely on another one. It should
>> be transparent for your code (the only minor change is that the service
>> that has to be exposed for remote call should contain
>> exported.service.interface property).
>>
>> Regards
>> JB
>>
>> On 10/23/2017 10:13 PM, Massimo Bono wrote:
>>
>>> Hello,
>>>
>>> I'm trying to grasp my mind on DOSGi; I want to have a general idea on
>>> the main concepts before start coding.
>>>
>>> A while ago I tried (with success) to replicate the awesome tutorial
>>> Christian provided (available https://github.com/apache/cxf-
>>> dosgi/tree/master/samples/rest).
>>>
>>> Now, before continuing coding, I want to understand why DOSGi is useful
>>> in my use case.
>>>
>>> Briefly, I want to code with Declarative Services with Karaf because i
>>> feel it's a more OSGi oriented way to define and bind services.
>>> Furthermore, I want my OSGi framework to recreate a web page the user
>>> can interact with: CXF can easily be deployed in Karaf, so I felt like it
>>> was a good choice over the other alternatives (like jetty). I used RESTful
>>> services as well, just to have something well structured.
>>> In a previous question, Christian suggested me to use DOSGi to fullly
>>> implement this scenario.
>>> After the successful attempt, I read the following resources on the
>>> topic.
>>>
>>> 1) http://cxf.apache.org/distributed-osgi-reference.html;
>>> 2) https://github.com/apache/cxf-dosgi;
>>> http://www.liquid-reality.de/display/liquid/2013/02/13/Apach
>>> e+Karaf+Tutorial+Part+8+-+Distributed+OSGi;
>>>
>>> Especially from the last one: It seems that DOSGi is used to let an OSGi
>>> framework B access to services located on a OSGi framework A. This is all
>>> good and dandy but in my scenario (Karaf + CXF exposing a REST service)
>>> where are the 2 OSGI containers? I can see only one, namely the one on my
>>> laptop in localhost!
>>>
>>> I'm sure I'm missing something, probably for my inexperience.
>>> Can someone solves this question of mine?
>>>
>>> Thanks!
>>>
>>> --
>>> *Ing. Massimo Bono*
>>>
>>
>> --
>> Jean-Baptiste Onofré
>> jbono...@apache.org
>> http://blog.nanthrax.net
>> Talend - http://www.talend.com
>>
>
>
>
> --
> *Ing. Massimo Bono*
>



-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: Using Blueprint & DI in the same bundle

2017-10-16 Thread Christian Schneider
You can have DS and blueprint in the same bundle but they can only
communicate via OSGi services. So it would be the same as if each was in
its own bundle.

The recommended way to use SOAP and JAX-RS services with DS is to use
CXF-DOSGi. It has DS examples and now allows configuring almost every CXF
feature.
https://github.com/apache/cxf-dosgi/tree/master/samples

You might also want to look into the Aries JAX-RS whiteboard, which
implements the new OSGi spec.

Best
Christian

2017-10-16 18:08 GMT+02:00 Stephen Munro <stephen.ross.mu...@gmail.com>:

> I'm in the process of swapping some blueprint based bundles over to use
> declarative services. While I've read up on blueprint vs ds, I was
> wondering if it was feasible to make use of them both in the same bundle.
> I have a bundle which has various CXF interceptors and a Rest service,
> which blueprint has excellent support for, whereas I've not found an easy
> way to manage this with DI. So, unless I can do this, I would like to keep
> the CXF beans within blueprint and use DI for just about everything else.
> Even if it is feasible, is it safe to do so?
> Stephen
>
> --
> Warmest Regards,
>
> Stephen Munro
>



-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: What kind of things would prevent a set of bundles from going Active?

2017-10-03 Thread Christian Schneider
t surprising, as the bundle in question probably is not
> > active.
> > >
> > > I also tried installing the web console.  I just did "feature:install
> > webconsole" and then went to "http://localhost:8181/system/console; in
> > my browser.  This timed out.
> > >
> > > What should I be looking at to diagnose this?
> > >
> >
> > --
> > Jean-Baptiste Onofré
> > jbono...@apache.org
> > http://blog.nanthrax.net
> > Talend - http://www.talend.com
>



-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Computer Scientist
http://www.adobe.com


Re: weblogic t3 thin client

2017-08-24 Thread Christian Schneider
Can you describe your exact setup and what error you get?

Christian

2017-08-25 4:00 GMT+02:00 Matthew Shaw <matthew.s...@ambulance.qld.gov.au>:

> Hi All,
>
>
>
> I’ve upgraded to karaf 4.1.2 from 4.1.1. I had wlthint3client.jar  on the
> karaf/lib folder and subsequent classpath and my bundle that was using it
> worked fine.
>
>
>
> In the new version I don't seem to be able to load this jar at startup.
> Any ideas?
>
>
>
> Cheers,
>
> Matt.
>



-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com


Re: CXF 3.1.12, karaf 4.1.1 & 4.1.2-SNAPSHOT

2017-07-31 Thread Christian Schneider
I fixed the InterruptedException today. Can you try with the current master
or the karaf-4.1.x branch?

Btw. the exception does not prevent CXF from being installed, so it should
even work in your version.
How do you see that it does not work?

Christian

2017-07-31 19:58 GMT+02:00 Michal Hlavac <hla...@hlavki.eu>:

> Hi,
>
> I am trying to install cxf to karaf 4.1.x and it fails.
>
> Commands:
> feature:repo-add cxf 3.1.12
> feature:install cxf
>
> It fails with restart and error:
> Error executing command: java.lang.InterruptedException
>
> Environment:
> Linux 4.11.8-2-default #1 SMP PREEMPT Thu Jun 29 14:37:33 UTC 2017
> (42bd7a0) x86_64 x86_64 x86_64 GNU/Linux
> java version "1.8.0_141"
> Java(TM) SE Runtime Environment (build 1.8.0_141-b15)
> Java HotSpot(TM) 64-Bit Server VM (build 25.141-b15, mixed mode)
>
> Full log: http://paste.opensuse.org/view/simple/78027503
>
> m.
>



-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com


Re: DBCP2 & Karaf

2017-07-05 Thread Christian Schneider
If I understand correctly, you are trying to use the plain Java
approach of creating a data source using the Driver class.
This does not work well in OSGi, as the Driver approach expects a flat
classloader that sees all classes. You can try to work around this, but
it will suck.

Instead I recommend using the DataSourceFactory approach, as used by
pax-jdbc. In OSGi, a compliant database driver bundle will offer a
DataSourceFactory as an OSGi service.

You can use this factory to create a DataSource.

In the case of H2 this is really easy; you just need two bundles:
install -s mvn:org.osgi/org.osgi.service.jdbc/1.0.0
install -s mvn:com.h2database/h2/1.3.172

This is already enough to use the DataSourceFactory.

Pax-jdbc makes this even easier by providing the feature pax-jdbc-h2,
which can be directly installed in Karaf.

It also provides wrappers for databases that do not yet support this.

On top of that, pax-jdbc can create pooling- and XA-ready DataSources. You can
even create such DataSources from a plain config without any coding.
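As a sketch of such a plain config, a pax-jdbc-config style file could look like this (the file name, pool setting, and connection details are illustrative assumptions):

```
# etc/org.ops4j.datasource-mydb.cfg
osgi.jdbc.driver.name = H2
dataSourceName = mydb
url = jdbc:h2:mem:mydb
pool = dbcp2
```

With pax-jdbc-config and the matching pool feature installed, a pooled DataSource service is then published for other bundles to consume.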


See:
https://ops4j1.jira.com/wiki/display/PAXJDBC/Documentation
https://ops4j1.jira.com/wiki/display/PAXJDBC/H2+Driver+Adapter
https://ops4j1.jira.com/wiki/display/PAXJDBC/Pooling+and+XA+support+for+DataSourceFactory
https://ops4j1.jira.com/wiki/display/PAXJDBC/Pooling+and+XA+support+in+1.0.0

Christian

On 04.07.2017 23:35, smunro wrote:

Hello,

I've got a question regarding DBCP2 & Karaf. When using DBCP2, I get a
driver not found error. If I use a straight Class.forName("org.h2.Driver")
it works as expected. I'm not looking to use fragments at the moment as I
need to get a working example quickly, but before I bin all the DBCP2 code I
have, does anyone know of a quick way to get the above working?

I've tried the DynamicImport-Package entry (which doesn't appear
in intellisense as an option when adding it to the maven plugin). And while
the bundle does boot up, none of the breakpoints are hit when running in
debug mode (when I take it out, the breakpoints are hit), so I'm guessing
this isn't supported.

Can anyone suggest a quick way to get the DBCP2 BasicDataSource to work
correctly in an osgi bundle without it throwing an exception that it cannot
locate the driver. I know it's a classpath issue with the current thread,
I'm just looking for a fast way to get around it before moving onto a more
long term solution.

Stephen



--
View this message in context: 
http://karaf.922171.n3.nabble.com/DBCP2-Karaf-tp4050942.html
Sent from the Karaf - User mailing list archive at Nabble.com.



--
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com



Re: Problem using drop-in deploy feature

2017-07-04 Thread Christian Schneider

Hi David,

you will need to configure the authentication in the maven settings.xml. 
Either on server or mirror level.


https://maven.apache.org/settings.html
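For server-level credentials, the relevant settings.xml fragment looks roughly like this (the id must match the repository or mirror id used in your repo configuration; all values are placeholders):

```xml
<settings>
  <servers>
    <server>
      <!-- must match the <id> of the repository or mirror needing auth -->
      <id>my-private-repo</id>
      <username>deploy-user</username>
      <password>deploy-password</password>
    </server>
  </servers>
</settings>
```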

Best
Christian

On 04.07.2017 11:55, David Leangen wrote:

Thanks, Christian.

(BTW, I just noticed that somehow this went off-list. I assume that this was an
error, so I'm bringing it back on-list.)

I took a look at the pax-mvn urls, but could not find how to configure 
authentication.

I checked these places:
   * pax-url website
   * The org.ops4j.pax.url.mvn.cfg file
   * The Karaf doc

Is there some other place I should look to figure this out?

Or perhaps there are some examples somewhere?


Thanks!
=David



On Jul 4, 2017, at 6:18 PM, Christian Schneider <ch...@die-schneider.net> wrote:

If you are not using mvn urls then pax url maven is not involved.
I recommend switching to mvn urls. It will take care of the download as well as
the caching in the local repo.

If you want to get authentication working with plain http urls you should 
create another issue.

Christian

On 04.07.2017 11:01, David Leangen wrote:

Thanks, Christian, I’ll take a look at pax-url.

The URLs are simple http(s). I don’t use Maven (though I am now using Nexus), 
so I’m not sure…


Cheers,
=David



--
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com



--
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com



Re: Re: Karaf 4 - how to set properties for commands

2017-06-17 Thread Christian Schneider
The Karaf @Reference annotation is not just a special annotation: the Karaf
commands also run completely independently of the Blueprint container.
So the only way to use a Blueprint bean from a command is to export it as a
service. As @Reference does not support a filter, you should use a specific,
unique interface for this service.
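A minimal sketch of exporting such a bean from Blueprint under a dedicated interface (the bean, class, and interface names are assumptions for illustration):

```xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
  <!-- the bean the command needs, e.g. a holder around the ProducerTemplate -->
  <bean id="templateHolder" class="org.example.shell.TemplateHolderImpl"/>
  <!-- export it under a unique interface so @Reference resolves unambiguously -->
  <service ref="templateHolder"
           interface="org.example.shell.TemplateHolder"/>
</blueprint>
```

The command class can then declare a field like `@Reference TemplateHolder holder;` and Karaf injects the matching service on each invocation.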

Christian

2017-06-16 18:36 GMT+02:00 Martin Lichtin <lich...@yahoo.com>:

> Hi Achim
>
> Right, I can still use the "old" style with Blueprint. But all this code
> is marked as deprecated.
> So I'm desperately trying to find a solution using the "new"-style Karaf
> commands, as the old style will disappear.
> I would not spend time on this if it wasn't marked deprecated.
>
> I could switch if BlueprintContainer.getComponentInstance("id") was
> fixed, so I could use the ids as set in Blueprint cfg file.
>
> - Martin
>
> On 15.06.2017 10:03, Achim Nierbeck wrote:
>
>> Hi Martin,
>>
>> afaik you still can also use the "old" style with blueprint.
>> As you are using blueprint anyway that shouldn't be much of a big deal.
>> The idea about the new command way is to not depend on blueprint for
>> Karaf internals.
>>
>> The @Reference annotation is actually a karaf own annotation,
>> org.apache.karaf.shell.api.action.lifecycle.Reference
>> There is no filtering available on that annotation.
>>
>> regards, Achim
>>
>>
>> 2017-06-15 8:38 GMT+02:00 Martin Lichtin <lich...@yahoo.com>:
>>
>> So far I could not find a way to do this in the new Karaf command
>> framework.
>> A command is now instantiated each time it is invoked.
>> It can use OSGi services (@Reference) but there doesn't seem to be a
>> way to set a filter for it.
>> I can access the BlueprintContainer (it's  available as a service),
>> but not the beans by their name.
>> oh well..
>>
>>
>>
>> On 02.06.2017 20:23, Martin Lichtin wrote:
>>
>> In Karaf 3, a command can be defined in Blueprint as:
>>
>> <command-bundle xmlns="http://karaf.apache.org/xmlns/shell/v1.1.0">
>>   <command>
>>     <action class="...">
>>       <property name="producerTemplate" ref="producerTemplate" />
>>     </action>
>>   </command>
>> </command-bundle>
>>
>> where in my case "producerTemplate" comes from a CamelContext
>> created in the same Blueprint context.
>>
>> Now in Karaf 4, how would I do the same, i.e. set the property?
>>
>>     - Martin
>>
>>
>>
>>
>> --
>>
>> Apache Member
>> Apache Karaf <http://karaf.apache.org/> Committer & PMC
>> OPS4J Pax Web <http://wiki.ops4j.org/display/paxweb/Pax+Web/> Committer
>> & Project Lead
>> blog <http://notizblog.nierbeck.de/>
>> Co-Author of Apache Karaf Cookbook <http://bit.ly/1ps9rkS>
>>
>> Software Architect / Project Manager / Scrum Master
>>
>>
>


-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com


Re: Karaf Feature vs. OBR

2017-06-15 Thread Christian Schneider
You need both. During active development you want to use the newest
dependency, at least for your own artifacts, sometimes also for remote
ones. Then once you do a release you of course want it to be completely
static and not change over time.

Christian

2017-06-15 13:12 GMT+02:00 David Leangen <apa...@leangen.net>:

>
> Hi Christian,
>
> I don’t know. I think I rather like the idea of the curated repositories
> that I know won’t shift over time. I can package up a small OBR (with URLs
> related to the index file) and deploy it somewhere. I can have very fine
> grained management of my “sets” (I’ll refrain from calling them
> “features”), including versioned “sets” so I can easily roll back to a
> previously known working state if needed.
>
> I believe that this is how bnd/enroute is intended to work.
>
> Otherwise, you get pure chaos, like the npm world, which works one day and
> is broken the next because a butterfly in China flapped its wings.
>
> (By the way, OBR or Maven, either way I don’t mind, but it’s the
> predictability/stability that is important to me.)
>
>
> Cheers,
> =David
>
>
>
> On Jun 15, 2017, at 4:54 PM, Christian Schneider <ch...@die-schneider.net>
> wrote:
>
> Without mvn urls you can either use file urls or http urls. Both suck in
> some important regards:
>
> A file url requires that the referred jar resides near the OBR. Today most
> people work with maven repos. So a natural place for an artifact is the
> local maven repository. It is difficult to point a file url to it, as it
> will always depend on the user setup.
> As a workaround, file urls worked well for me in bndtools. What I did was
> to always generate my OBR content during the build, so it was no problem
> that the urls depend on my setup.
> This means though that you can never publish the contents of the OBR. The
> good thing is that you can work with maven SNAPSHOTs this way.
>
> For http urls you can point to the http url of a jar in a maven repo like
> maven central .. there are many downsides though.
> 1. The url is absolute. So you always point to a certain external
> resource. So you need additional measures if you want to cache the artifact
> locally.
> 2. For maven snapshots the urls in a remote maven repo always change. So
> you can not point to the newest snapshot which you want to do during active
> development
> 3. These absolute urls make you dependent on the external repo
> availability and you need to open your firewall to access it.
>
> mvn urls on the other hand work extremely well in enterprise environments,
> as you can leverage local mvn proxies. You can also access secured
> repositories that require authentication.
>
> So if you really try to use plain OBR without mvn urls in a maven build, it
> sucks a lot. This is why Peter did the mvn based repos in bndtools 3.3.0. I
> discussed a lot with him about these. The current state works quite well,
> but unfortunately it is completely outside the OBR spec. So I hope we see
> improvements in the spec so we can have a solution that is both spec
> compliant and works well for maven builds.
>
> Actually I think this is not only about maven builds. It is about using
> maven repos which is also very relevant for gradle and other builds.
>
> Christian
>
>
>
> 2017-06-15 8:51 GMT+02:00 David Leangen <apa...@leangen.net>:
>
>>
>> Hi Christian,
>>
>> Thanks for this information.
>>
>> > There is one big downside to OBR unfortunately. Inside OBR the content
>> of bundles needs to be referenced as a url. While karaf can work with mvn urls
>> other systems like bndtools can not. Now the problem is that you can not
>> populate an OBR well without maven urls if your artifacts reside in maven
>> and you also want to use maven SNAPSHOTs.
>>
>> My understanding is that this is very much by design. The reason is to
>> have a “strongly” curated repository to promote well-behaved and
>> predictable builds and deployments.
>>
>> I could be wrong. Maybe would be good to verify with some of the alliance
>> peeps.
>>
>>
>> > Another solution is to consider the OBR just as a cache and define the
>> list of bundles in a pom. Bndtools is going that way and I think karaf
>> could do the same. So a feature could simply point to a pom or when
>> deployed to maven we could assume to use the pom of the project the feature
>> resides in as default. This would allow to simply define OBRs by using a
>> pom.
>>
>> You think bndtools is going that way? Or rather, they are just trying to
>> bring in more Maven people? I don’t understand what the purpose would be.
>> If the bundles are

Re: Karaf Feature vs. OBR

2017-06-15 Thread Christian Schneider
Without mvn urls you can either use file urls or http urls. Both suck in
some important regards:

A file url requires that the referred jar resides near the OBR. Today most
people work with maven repos. So a natural place for an artifact is the
local maven repository. It is difficult to point a file url to it, as it
will always depend on the user setup.
As a workaround, file urls worked well for me in bndtools. What I did was to
always generate my OBR content during the build, so it was no problem that
the urls depend on my setup.
This means though that you can never publish the contents of the OBR. The
good thing is that you can work with maven SNAPSHOTs this way.

For http urls you can point to the http url of a jar in a maven repo like
maven central .. there are many downsides though.
1. The url is absolute. So you always point to a certain external resource.
So you need additional measures if you want to cache the artifact locally.
2. For maven snapshots the urls in a remote maven repo always change. So
you can not point to the newest snapshot which you want to do during active
development
3. These absolute urls make you dependent on the external repo availability
and you need to open your firewall to access it.

mvn urls on the other hand work extremely well in enterprise environments, as
you can leverage local mvn proxies. You can also access secured
repositories that require authentication.

So if you really try to use plain OBR without mvn urls in a maven build, it
sucks a lot. This is why Peter did the mvn based repos in bndtools 3.3.0. I
discussed a lot with him about these. The current state works quite well,
but unfortunately it is completely outside the OBR spec. So I hope we see
improvements in the spec so we can have a solution that is both spec
compliant and works well for maven builds.

Actually I think this is not only about maven builds. It is about using
maven repos which is also very relevant for gradle and other builds.

Christian



2017-06-15 8:51 GMT+02:00 David Leangen <apa...@leangen.net>:

>
> Hi Christian,
>
> Thanks for this information.
>
> > There is one big downside to OBR unfortunately. Inside OBR the content
> of bundles needs to be referenced as a url. While karaf can work with mvn urls
> other systems like bndtools can not. Now the problem is that you can not
> populate an OBR well without maven urls if your artifacts reside in maven
> and you also want to use maven SNAPSHOTs.
>
> My understanding is that this is very much by design. The reason is to
> have a “strongly” curated repository to promote well-behaved and
> predictable builds and deployments.
>
> I could be wrong. Maybe would be good to verify with some of the alliance
> peeps.
>
>
> > Another solution is to consider the OBR just as a cache and define the
> list of bundles in a pom. Bndtools is going that way and I think karaf
> could do the same. So a feature could simply point to a pom or when
> deployed to maven we could assume to use the pom of the project the feature
> resides in as default. This would allow to simply define OBRs by using a
> pom.
>
> You think bndtools is going that way? Or rather, they are just trying to
> bring in more Maven people? I don’t understand what the purpose would be.
> If the bundles are already defined in the OBR, why have a second list?
>
>
> Cheers,
> =David
>
>
>
> > On Jun 15, 2017, at 3:18 PM, Christian Schneider <
> ch...@die-schneider.net> wrote:
> >
> > There is one big downside to OBR unfortunately. Inside OBR the content
> of bundles needs to be referenced as a url. While karaf can work with mvn urls
> other systems like bndtools can not. Now the problem is that you can not
> populate an OBR well without maven urls if your artifacts reside in maven
> and you also want to use maven SNAPSHOTs.
> >
> > So I think there is a gap in OBR that needs to be closed. One solution
> would be to add the mvn urls to the spec so they are universally accepted.
> >
> > Another solution is to consider the OBR just as a cache and define the
> list of bundles in a pom. Bndtools is going that way and I think karaf
> could do the same. So a feature could simply point to a pom or when
> deployed to maven we could assume to use the pom of the project the feature
> resides in as default. This would allow to simply define OBRs by using a
> pom.
> >
> > Christian
> >
> >
> > 2017-06-14 23:58 GMT+02:00 David Leangen <apa...@leangen.net>:
> >
> > Hi Guillaume,
> >
> > Thank you for this assessment.
> >
> > I agree that Features adds value. Your post explains a lot of good
> reasons why this is so.
> >
> > My question is more about “why compete with OBR?”. Instead of embracing
> OBR and working on top of it, it s

Re: Karaf Feature vs. OBR

2017-06-14 Thread Christian Schneider
Hi David,

I think the reason is more historical: features in Karaf started out a lot
simpler. They were simply a list of bundles to install. Over
time, features gained more and more abilities.
So it is less about locking people in and more simply a matter of history.

Since Karaf 4, features use the Felix resolver. You can imagine a feature as
a mix of an OBR and requirements for the resolver.
If a bundle in a feature is marked as dependency=true then it behaves in
the same way as a bundle listed in an OBR when the feature is installed: it
is simply there to be selected if necessary. If dependency=false (the
default) then the bundle is also a requirement for the resolver when the
feature is installed.
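Sketched as a feature file (all names are illustrative assumptions):

```xml
<feature name="example" version="1.0.0">
  <!-- dependency="false" (the default): a hard requirement for the resolver -->
  <bundle>mvn:org.example/example-core/1.0.0</bundle>
  <!-- dependency="true": OBR-like, only installed if something requires it -->
  <bundle dependency="true">mvn:org.example/example-optional/1.0.0</bundle>
</feature>
```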

I agree with you that it would be great to move to a more general way that
then also works in different environments.
Some time ago I wrote down some ideas for a feature replacement that is
less karaf specific.
http://liquid-reality.de/display/liquid/Design+repository+based+features+for+Apache+Karaf

The main things features provide:

- List of bundles to choose from
- List of bundles to install (requirements)
- Configs to install
- Conditionally install additional bundles if other features are present

The first three things can already be done without features:
- An OBR index can supply the list of bundles to choose from (I already
started to provide OBR repos in some projects like Aries RSA)
- We could use a list of top level bundles as initial requirements
- A bundle can require other bundles using Require-Bundle headers. This
could allow feature-like bundles that list other top level bundles
- Configurations can be provided inside of bundles using the Configurer
spec and impl from enroute

For conditional bundles there is no replacement outside of features.
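For reference, the conditional mechanism in feature files looks like this (feature and bundle names are assumptions):

```xml
<feature name="example" version="1.0.0">
  <bundle>mvn:org.example/example-core/1.0.0</bundle>
  <conditional>
    <!-- only installed when the webconsole feature is also present -->
    <condition>webconsole</condition>
    <bundle>mvn:org.example/example-webconsole-plugin/1.0.0</bundle>
  </conditional>
</feature>
```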

So we could develop a replacement of features that works in all OSGi
environments. It is just matter of knowledge and effort to implement this.

You can see in the CXF-DOSGi SOAP sample what can already be done with OBR
and a resolver:
https://github.com/apache/cxf-dosgi/blob/master/samples/soap/soap.bndrun#L1-L41
The runbundles are automatically determined by the resolver.
As you can see it is already possible but still quite a bit more effort
than with karaf features at the moment.

Christian



2017-06-14 7:49 GMT+02:00 David Leangen <apa...@leangen.net>:

>
> Hi!
>
> I am trying to wrap my head around the differences between an OBR and a
> Karaf Feature. The concepts seem to be overlapping.
>
> An OBR has an index of the contained bundles, as well as meta information,
> which includes requirements and capabilities. An OBR is therefore very
> useful for resolving bundles, and partitioning bundles into some kind of
> category. It can also be versioned, and can contained different versions of
> bundles. An OBR could potentially be used to keep snapshots of system
> releases. I believe that this is somewhat how Apache ACE works. (A
> Distribution can be rolled back by simply referring to a different OBR and
> allowing the system to re-resolve.) The actual bundles need to be stored
> somewhere. The OBR index needs to provide links to that storage.
>
> A Karaf Feature is basically an index of bundles (and configurations),
> too. I think that it can also be versioned, and can contain different
> versions of bundles. Like an OBR, it is very useful for partitioning
> bundles into some kind of category, so the groups of bundles can be
> manipulated as a single unit. Just like an OBR, the Karaf Feature also
> needs to provide a link to the bundles. AFAIU, resolution is done somehow
> in Karaf, based on the bundles available via the Features, so in the end
> the entire mechanism seems almost identical to what the OBR is doing.
>
>
> So many similarities!
>
>
> I understand that a Feature can include configurations, which is nice, but
> why have a competing non-official standard against an official standard? If
> configurations is the only problem, then why not build it on top of OBRs,
> rather than creating something completely new and different and competing?
>
> Is it to try to force lock-in to Karaf? Or am I completely missing
> something?
>
>
> Thanks for explaining! :-)
>
>
> Cheers,
> =David
>
>
>


-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com


Re: Hibernate, JPA and Karaf 4

2017-06-10 Thread Christian Schneider

I did a PR with some fixes.

https://github.com/JackMic/Hibernate-Postgressql-Karaf-4.1.0/pull/1

Christian

On 10.06.2017 11:08, Jack wrote:

Stephen, I have shared all the code here
https://github.com/JackMic/Hibernate-Postgressql-Karaf-4.1.0



--
View this message in context: 
http://karaf.922171.n3.nabble.com/Hibernate-JPA-and-Karaf-4-tp4050569p4050657.html
Sent from the Karaf - User mailing list archive at Nabble.com.



--
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com



Re: Hibernate, JPA and Karaf 4

2017-06-09 Thread Christian Schneider
This looks better now. Your bundle is now waiting for the EntityManager 
service:


DemoProject.DemoData/0.0.1.SNAPSHOT due to unresolved dependencies
[(&(osgi.unit.name=store)(objectClass=javax.persistence.EntityManager))]

This is created by aries jpa from the information in the persistence.xml.
Things you need to check now:
Is your persistence.xml picked up? You should see this in the log.

Do you have a suitable PersistenceProvider service?
service:list PersistenceProvider

Do you have a suitable DataSource service?
service:list DataSource

Christian

On 09.06.2017 12:53, Jack wrote:

Hi Stephen,

I have removed  and  tags in xml file and added

 and 

Below is my blueprint file


<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
    xmlns:jpa="http://aries.apache.org/xmlns/jpa/v2.0.0"
    xmlns:tx="http://aries.apache.org/xmlns/transactions/v2.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.osgi.org/xmlns/blueprint/v1.0.0
        https://osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd
        http://aries.apache.org/xmlns/jpa/v2.0.0
        http://aries.apache.org/xmlns/transactions/v2.0.0">

  

 















When I build and run in container, below exception is throwing

java.lang.IllegalArgumentException: No matching bundles
at
org.apache.karaf.bundle.command.BundlesCommand.doExecute(BundlesCommand.java:59)
[22:org.apache.karaf.bundle.core:4.1.0]
at
org.apache.karaf.bundle.command.BundlesCommand.execute(BundlesCommand.java:54)
[22:org.apache.karaf.bundle.core:4.1.0]
at
org.apache.karaf.shell.impl.action.command.ActionCommand.execute(ActionCommand.java:84)
[43:org.apache.karaf.shell.core:4.1.0]
at
org.apache.karaf.shell.impl.console.osgi.secured.SecuredCommand.execute(SecuredCommand.java:67)
[43:org.apache.karaf.shell.core:4.1.0]
at
org.apache.karaf.shell.impl.console.osgi.secured.SecuredCommand.execute(SecuredCommand.java:82)
[43:org.apache.karaf.shell.core:4.1.0]
at org.apache.felix.gogo.runtime.Closure.executeCmd(Closure.java:552)
[43:org.apache.karaf.shell.core:4.1.0]
at 
org.apache.felix.gogo.runtime.Closure.executeStatement(Closure.java:478)
[43:org.apache.karaf.shell.core:4.1.0]
at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:367)
[43:org.apache.karaf.shell.core:4.1.0]
at org.apache.felix.gogo.runtime.Pipe.doCall(Pipe.java:417)
[43:org.apache.karaf.shell.core:4.1.0]
at org.apache.felix.gogo.runtime.Pipe.call(Pipe.java:229)
[43:org.apache.karaf.shell.core:4.1.0]
at org.apache.felix.gogo.runtime.Pipe.call(Pipe.java:59)
[43:org.apache.karaf.shell.core:4.1.0]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:?]
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[?:?]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[?:?]
at java.lang.Thread.run(Thread.java:745) [?:?]
2017-06-09T15:29:26,814 | ERROR | Blueprint Extender: 1 |
BlueprintContainerImpl   | 12 - org.apache.aries.blueprint.core -
1.7.1 | Unable to start blueprint container for bundle
DemoProject.DemoData/0.0.1.SNAPSHOT due to unresolved dependencies
[(&(osgi.unit.name=store)(objectClass=javax.persistence.EntityManager))]
java.util.concurrent.TimeoutException
at
org.apache.aries.blueprint.container.BlueprintContainerImpl$1.run(BlueprintContainerImpl.java:371)
[12:org.apache.aries.blueprint.core:1.7.1]
at
org.apache.aries.blueprint.utils.threading.impl.DiscardableRunnable.run(DiscardableRunnable.java:48)
[12:org.apache.aries.blueprint.core:1.7.1]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
[?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:?]
at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
[?:?]
at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
[?:?]
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[?:?]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[?:?]
at java.lang.Thread.run(Thread.java:745) [?:?]




--
View this message in context: 
http://karaf.922171.n3.nabble.com/Hibernate-JPA-and-Karaf-4-tp4050569p4050643.html
Sent from the Karaf - User mailing list archive at Nabble.com.



--
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com



Re: Hibernate, JPA and Karaf 4

2017-06-09 Thread Christian Schneider

Your log shows this:
2017-06-09T15:24:26,909 | INFO | pool-5-thread-1 | DataSourceTracker | 
415 - org.apache.aries.jpa.container - 2.5.0 | Tracking DataSource for 
punit store with filter

(&(objectClass=javax.sql.DataSource)(osgi.jndi.service.name=store))

I think your datasource config is wrong. You should use:
osgi.jdbc.driver.class = org.postgresql.Driver
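A complete config along those lines might look like this (pax-jdbc-config style; the file name and connection details are assumptions for illustration):

```
# etc/org.ops4j.datasource-store.cfg
osgi.jdbc.driver.class = org.postgresql.Driver
dataSourceName = store
osgi.jndi.service.name = store
url = jdbc:postgresql://localhost:5432/store
user = store
password = secret
```

The osgi.jndi.service.name=store property is what the Aries JPA DataSource tracker filter shown in the log would then match.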

Christian

On 09.06.2017 13:33, Jack wrote:

Hi Stephen,

I have attached complete log file
log.txt <http://karaf.922171.n3.nabble.com/file/n4050646/log.txt>




--
View this message in context: 
http://karaf.922171.n3.nabble.com/Hibernate-JPA-and-Karaf-4-tp4050569p4050646.html
Sent from the Karaf - User mailing list archive at Nabble.com.



--
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com



Re: Experiences with karaf and liquibase?

2017-06-03 Thread Christian Schneider
Sounds great. Can you also open issues for what you found and changed at
liquibase? Maybe we can persuade them to deliver the necessary changes
themselves.
Christian

2017-06-02 23:24 GMT+02:00 Steinar Bang <s...@dod.no>:

> >>>>> Steinar Bang <s...@dod.no>:
>
> >>>>> Hello Steinar! I'm using liquibase in karaf for some time, and to
> >>>>> fix that you need to repackage the liquibase-slf4j for it to be a
> >>>>> fragment of the liquibase bundle. Here's how:
>
> >>>>> https://gist.github.com/YgorCastor/44fb3a13520d28aa328c4975f8bf5e8c
>
> >>>>> and in your feature:
>
> >>>>> <feature name="..." start-level="40" version="3.5.1">
> >>>>>   <bundle>mvn:org.liquibase/liquibase-core/3.5.1</bundle>
> >>>>>   <bundle>mvn:org.yaml/snakeyaml/1.17</bundle>
> >>>>>   <bundle>mvn:com.mattbertolini/liquibase-slf4j-osgi/2.0.0</bundle>
> >>>>> </feature>
>
> > I was thinking "attach to the bundle using liquibase", so that was what
> > I read...
>
> > But what you clearly say here is: "attach to the org.liquibase.core
> > bundle". :-)
>
> > When I followed your instructions, and:
> >  - Created a maven module that rebundled the liquibase-slf4j jar into an
> >OSGi bundle fragment (as outlined in your gist)
> >  - Modified the feature.xml file as outlined in the quoted file above
> >(the start levels are important)
>
> > I have some modifications to the bundling compared to the gist.  I will
> > post a followup to this article with a link to the code, when I
> > eventually push the liquibase changes.
>
> I've now pushed the liquibase changes to this branch (including the
> liquibase-slf4j based logging):
>  https://github.com/steinarb/ukelonn/tree/work/use-liquibase
>
> The liquibase-slf4j changes were made in this commit:
>  https://github.com/steinarb/ukelonn/commit/1a85c03fe00e63b5ecbdbb507dae9b
> 3b53cb68e7
>
> I've also created a standalone liquibase-core feature for karaf:
>  https://github.com/steinarb/liquibase-karaf-feature
>
>


-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com


Re: Problem with Karaf 4 feature installation

2017-05-23 Thread Christian Schneider
The requirements are solved purely via the Provide-Capability headers of
the bundles. It does not matter whether bundle A actually provides such a
service. The important thing is that bundle A has a suitable
Provide-Capability manifest header to announce it will provide a suitable
service. If this is not the case then you should make sure the header is
added. If you cannot change the bundle then you can also add the
capability in the feature.
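As a sketch, either declaration works; the service interface name here is an assumption:

```xml
<!-- Option 1: in bundle A's MANIFEST.MF
     Provide-Capability: osgi.service;objectClass=org.example.ServiceA;effective:=active
-->
<!-- Option 2: declared in the feature that ships bundle A -->
<feature name="A" version="1.0">
  <bundle>mvn:grp/bundle-A/1.0</bundle>
  <capability>
    osgi.service;objectClass=org.example.ServiceA;effective:=active
  </capability>
</feature>
```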

Christian

2017-05-23 10:44 GMT+02:00 Martin Lichtin <lich...@yahoo.com>:

> I'm in the process of moving a system from Karaf 3 to 4.0. The 4.0
> 'features' changes turn out to be quite painful to upgrade.
>
> In particular, with Pax-Exam, I have a situation with a feature B, where
> bundle B requires a service from bundle A, from feature A:
>
> <features xmlns="http://karaf.apache.org/xmlns/features/v1.4.0" name="B">
>   <repository>mvn:grp/artifact-A/1.0/xml/features</repository>
>   <feature name="B">
>     <feature prerequisite="true">A</feature>
>     <bundle>mvn:grp/bundle-B/1.0</bundle>
>   </feature>
> </features>
>
> <features xmlns="http://karaf.apache.org/xmlns/features/v1.4.0" name="A">
>   <feature name="A">
>     <feature>aries-blueprint</feature>
>     <feature>deployer</feature>
>     <bundle>blueprint:mvn:grp/bundle-A/1.0/xml/idA</bundle>
>   </feature>
> </features>
>
> The location and name of feature "B" is provided to Pax-Exam to install it.
> What is puzzling is that I see how the "blueprint" XML file is downloaded
> and the BlueprintURLHandler
> seems to kick in, but at about the same time the Resolver throws a
> "missing requirement"
> regarding bundle-B missing the service that bundle-A is about to provide.
> At this time, blueprint bundle-A has not been fully activated yet.
> But should not the prerequisite=true assure that services from bundle A
> are all visible
> before bundle B is installed and resolved?
>
> In another, similar situation it seems to work, so perhaps the special
> "blueprint:" loader is an issue?
> Any other ideas how to better debug? I turn on org.apache.karaf TRACE but
> no real info comes out.
>
> - Martin
>



-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com


Re: materialize bundles?

2017-05-18 Thread Christian Schneider
bundle:watch * is also very useful. See
http://karaf.apache.org/manual/latest/#_watch

So while debugging you can run mvn install on a single project after
making changes to it. Karaf will then automatically update the bundle(s)
you built. This even works without disconnecting the debugger.

So in practice the debugging experience is not much worse than with the PDE.
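
The edit/build/update loop then looks roughly like this (abbreviated console transcript; note that bundle:watch only updates bundles installed from mvn: URLs with SNAPSHOT versions):

```
karaf@root()> bundle:watch *
Watched URLs/IDs: *

# in a second terminal, inside the bundle project:
mvn install
# karaf notices the new snapshot in the local maven repo and updates the bundle
```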

Christian

2017-05-19 2:00 GMT+02:00 Scott Lewis <sle...@composent.com>:

> One more question:
>
> The pom you pointed me to:
>
> https://github.com/apache/karaf/blob/master/assemblies/apach
> e-karaf-minimal/pom.xml#L102-L148
>
> Seems to be the 'karaf minimal' or 'karaf boot' (not sure if these are the
> same thing), but the version appears to be 4.2...which is not out yet.
>  When is the expected release of karaf minimal/boot?
>
> Thanks,
>
> Scott
>
>
>
> On 5/18/2017 4:48 PM, Scott Lewis wrote:
>
>> On 5/18/2017 3:26 PM, Guillaume Nodet wrote:
>>
>>> The karaf maven plugin is perfectly suited to create custom
>>> distributions.
>>>
>>
>> We do use it to create the karaf official distributions, so unless
>>> something is missing, I'd suggest having a look at it.
>>> See for example:
>>> https://github.com/apache/karaf/blob/master/assemblies/apach
>>> e-karaf-minimal/pom.xml#L102-L148
>>>
>>
>> Ok, thanks.   Is there some further support for
>> developing/debugging/testing to these and other features...e.g. in Eclipse
>> and/or other IDE?  e.g. target platform?
>>
>> Excepting bndtools, which I know about.
>>
>> Scott
>>
>>
>


-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com


Re: materialize bundles?

2017-05-18 Thread Christian Schneider
karaf-minimal is just a smaller distro than the normal karaf. It is
released for each version of karaf. I guess Guillaume pointed you to it
as an example of how to create your own custom distro.

There is no support to start karaf with a target platform. In practice
debugging karaf distributions works quite well though.
You simply start karaf with "karaf debug". Then go to the project you want
to debug in eclipse and start a remote debugging session for it with port
5005.
See
http://karaf.apache.org/manual/latest/#_debugging
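
As a shell sketch (the exact JVM options differ between Karaf versions, so treat the expanded form as an approximation):

```
# start karaf with remote debugging enabled, listening on port 5005
bin/karaf debug

# roughly equivalent to:
export KARAF_DEBUG=true
export JAVA_DEBUG_OPTS='-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005'
bin/karaf
```

In Eclipse you then create a "Remote Java Application" debug configuration with host localhost and port 5005.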

This works very well for pure maven projects. I think it is not very well
suited for tycho based projects.

Christian

2017-05-19 2:00 GMT+02:00 Scott Lewis <sle...@composent.com>:

> One more question:
>
> The pom you pointed me to:
>
> https://github.com/apache/karaf/blob/master/assemblies/apach
> e-karaf-minimal/pom.xml#L102-L148
>
> Seems to be the 'karaf minimal' or 'karaf boot' (not sure if these are the
> same thing), but the version appears to be 4.2...which is not out yet.
>  When is the expected release of karaf minimal/boot?
>
> Thanks,
>
> Scott
>
>
>
> On 5/18/2017 4:48 PM, Scott Lewis wrote:
>
>> On 5/18/2017 3:26 PM, Guillaume Nodet wrote:
>>
>>> The karaf maven plugin is perfectly suited to create custom
>>> distributions.
>>>
>>
>> We do use it to create the karaf official distributions, so unless
>>> something is missing, I'd suggest having a look at it.
>>> See for example:
>>> https://github.com/apache/karaf/blob/master/assemblies/apach
>>> e-karaf-minimal/pom.xml#L102-L148
>>>
>>
>> Ok, thanks.   Is there some further support for
>> developing/debugging/testing to these and other features...e.g. in Eclipse
>> and/or other IDE?  e.g. target platform?
>>
>> Excepting bndtools, which I know about.
>>
>> Scott
>>
>>
>


-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com


Re: Fileinstall, feature config and pax-exam

2017-04-20 Thread Christian Schneider
Not sure if this is a good idea. This way the test is quite different 
from your real system.


Better use editConfigurationFilePut to create the data source config you 
need for the test.


Christian

On 20.04.2017 16:18, Matteo Rulli wrote:
Thank you Christian. Yes you are right, we could simply avoid embedding 
the config placeholders in the feature file.


But there is also another way that just came to my mind:

Option[] options = new Option[] {
    editConfigurationFilePut("etc/config.properties",
        "felix.fileinstall.poll", String.valueOf(Integer.MAX_VALUE)),
    features(biepiRepo, "my-feature-to-test")
};

as this prevents fileinstall from triggering bundle updates.

Thanks,
Matteo


--
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com



Re: Fileinstall, feature config and pax-exam

2017-04-20 Thread Christian Schneider
I think there is or was an issue with loading factory configs as 
defaults in a feature.


As your database config normally contains secret information I propose 
to not install it in the feature. Instead you can deploy the config as 
part of your pax exam setup.


Christian

On 20.04.2017 12:09, Matteo Rulli wrote:

Hello,
We are running a couple of integration tests using pax-exam 4.10.0 and 
Karaf 4.0.8. In the test's @Configuration method we install a feature 
that contains both the bundle under test and the corresponding 
configuration files.


The tests work fine on most dev machines except one where the test 
fails. As far as I understood, the test failure is triggered by 
fileinstall restarting some bundles while the tests are running:


... test already started...

2017-04-20 09:30:48,638 | INFO  | 3952a6dd0975/etc | fileinstall | 4 - 
org.apache.felix.fileinstall - 3.5.6 | Updating configuration from 
org.apache.aries.transaction.cfg
2017-04-20 09:30:48,642 | INFO  | 3952a6dd0975/etc | fileinstall | 4 - 
org.apache.felix.fileinstall - 3.5.6 | Creating configuration from 
org.ops4j.datasource-dsone-postgres.cfg
2017-04-20 09:30:48,645 | INFO  | 3952a6dd0975/etc | fileinstall | 4 - 
org.apache.felix.fileinstall - 3.5.6 | Creating configuration from 
org.ops4j.datasource-dsone-postgres-plain.cfg


... test fails with:
 
org.apache.openjpa.persistence.InvalidStateException: This operation 
failed for some instances.  See the nested exceptions array for details.
Caused by:  
org.apache.openjpa.persistence.InvalidStateException: This operation 
cannot be performed while a Transaction is active.

FailedObject: org.apache.openjpa.persistence.EntityManagerImpl@28c1622b
at 
org.apache.openjpa.kernel.AbstractBrokerFactory.assertNoActiveTransaction(AbstractBrokerFactory.java:708)[87:org.apache.openjpa:2.4.1]

... 63 more

And if I query the ConfigAdmin, it reports the "duplicated" 
configuration entries for the DataSource:


databaseName: dsone
dataSourceName: xa-com.example.dsone.persistence
org.apache.karaf.features.configKey: org.ops4j.datasource-dsone-postgres
osgi.jdbc.driver.name: PostgreSQL JDBC Driver-pool-xa
password: **
portNumber: 5432
serverName: localhost
service.factoryPid: org.ops4j.datasource
service.pid: org.ops4j.datasource.211f5de1-8c00-4eaa-b501-b35ac6e9b6c4
user: **

databaseName: dsone
dataSourceName: com.example.dsone.persistence
org.apache.karaf.features.configKey: 
org.ops4j.datasource-dsone-postgres-plain

osgi.jdbc.driver.name: PostgreSQL JDBC Driver-pool
password: **
portNumber: 5432
serverName: localhost
service.factoryPid: org.ops4j.datasource
service.pid: org.ops4j.datasource.15452283-8188-4761-b134-f1d987eb3302
user: **

databaseName: dsone
dataSourceName: com.example.dsone.persistence
felix.fileinstall.filename: file:/Users/roberto/Documents/git/example/com.example.osgi.dsone/com.example.dsone.storage.provider/target/exam/b157e12e-78cc-4138-9be3-8841da76b972/etc/org.ops4j.datasource-dsone-postgres-plain.cfg
osgi.jdbc.driver.name: PostgreSQL JDBC Driver-pool
password: **
portNumber: 5432
serverName: localhost
service.factoryPid: org.ops4j.datasource
service.pid: org.ops4j.datasource.78917eb8-051b-454d-905b-758b125ab631
user: **

databaseName: dsone
dataSourceName: xa-com.example.dsone.persistence
felix.fileinstall.filename: file:/Users/roberto/Documents/git/example/com.example.osgi.dsone/com.example.dsone.storage.provider/target/exam/b157e12e-78cc-4138-9be3-8841da76b972/etc/org.ops4j.datasource-dsone-postgres.cfg
osgi.jdbc.driver.name: PostgreSQL JDBC Driver-pool-xa
password: **
portNumber: 5432
serverName: localhost
service.factoryPid: org.ops4j.datasource
service.pid: org.ops4j.datasource.0dfef597-9c1c-4cfc-864c-1a50c73b7350
user: **

So my question is:

1. Is this plausible? I mean: is it true that fileinstall could 
interpret config files that are installed by the feature service as 
new config files, triggering the problem above?
2. Is installing the config files along with the karaf feature the 
right way to go in this kind of scenario?
3. What is the best way to solve this problem? I could wait in the @Before 
method until I see all the config entries but this seems a little bit 
ugly to me...


Thank you,
matteo



--
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com



Re: Using ActiveMQ via karaf bundle

2017-04-11 Thread Christian Schneider

Spring is just used inside activemq to set up the broker.

You can use the activemq-cf feature to create a ConnectionFactory from 
config.

This is how it works:
https://github.com/apache/activemq/blob/master/activemq-cf/src/main/java/org/apache/activemq/osgi/cf/ConnectionFactoryProvider.java

and this is an example config:
https://github.com/apache/activemq/blob/master/activemq-cf/org.apache.activemq.cfg

You can then reference the ConnectionFactory in your blueprint context 
as an OSGi service and inject it into any bean.
Camel-jms is the simplest way to get started with jms but you can also 
use plain java.


See this example for using jms with camel and injecting the 
ConnectionFactory:

https://github.com/Talend/tesb-rt-se/blob/master/examples/tesb/ebook/ebook-importer/src/main/resources/OSGI-INF/blueprint/blueprint.xml

The example uses jta. In your case you can remove the TransactionManager 
parts.


So the context would look like this (roughly, without the JTA parts):

<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

    <reference id="connectionFactory" interface="javax.jms.ConnectionFactory"/>

    <bean id="jms" class="org.apache.camel.component.jms.JmsComponent">
        <property name="connectionFactory" ref="connectionFactory"/>
    </bean>

    <bean id="importRoutes" class="org.talend.esb.examples.ebook.importer.ImportRoutes"/>

    <camelContext xmlns="http://camel.apache.org/schema/blueprint">
        <routeBuilder ref="importRoutes"/>
    </camelContext>
</blueprint>
And this is the camel route
https://github.com/Talend/tesb-rt-se/blob/master/examples/tesb/ebook/ebook-importer/src/main/java/org/talend/esb/examples/ebook/importer/ImportRoutes.java 



For accessing JMS with plain Java you can use any plain Java example and 
just create a bean for it and inject the ConnectionFactory. Do not 
underestimate the complexity of doing JMS by hand though.

Christian

On 11.04.2017 18:23, smunro wrote:

Hello,

I have a small issue (possibly down to my lack of knowledge of Karaf) around
accessing a BrokerService (activemq component).

Following the activemq cookbook, I had installed the jms, activemq-broker &
activemq-components features, and I noticed a (spring) configuration file was created
after doing this.

I was hoping to be able to have an osgi bundle with a blueprint.xml file
that could reference connection settings defined in the config file.

This is obviously pretty trivial, but I have not found any simple examples
on how to do this; more often than not, camel is thrown into the mixture,
which I do not wish to make use of at the moment.

So, to be clear, I want to use the embedded activemq feature, be able to
reference the broker service within my own osgi bundle and via code, manage
subscriptions.

Can anyone point me at some good samples and/or offer any suggestions to
achieve what I want.  It's possible blueprint is now the way to go, but I
would like to have some declarative control and we are not using spring at
the moment either.



--
View this message in context: 
http://karaf.922171.n3.nabble.com/Using-ActiveMQ-via-karaf-bundle-tp4050100.html
Sent from the Karaf - User mailing list archive at Nabble.com.



--
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com



Re: Why is karaf so much easier to get working than older OSGi containers?

2017-04-09 Thread Christian Schneider
I think it is a mixture of both. Since the start of karaf its biggest
advantage was that it has predefined features for popular building blocks
of applications.
You are completely right that the packaging of bundles to form a consistent
deployment was always the biggest hurdle in using OSGi.

At the start these features simply defined a list of bundles to install.
Since then the feature resolver has become a lot more sophisticated
(largely thanks to Guillaume). It now uses the felix resolver with a few
extensions to build the optimal set of bundles for any combination of
features you install. Besides this karaf also comes with definitions of
system packages and other little tweaks that make it easier to use the
built-in spec implementations the JDK contains.

There are also other environments like bndtools that provide a very good
resolution of bundles but only karaf has the prepackaged features that make
it so easy to start.

Christian

2017-04-09 8:37 GMT+02:00 Steinar Bang <s...@dod.no>:

> I first encountered OSGi in 2006.  The place I worked at that time had
> (prior to my hiring) selected OSGi as the platform for server side
> components.
>
> The team I worked on extended this into the GUI space by creating an
> eclipse GEF-based IDE for data flows in the server system, where we
> integrated the server components into the eclipse instance for
> debugging.
>
> At that time it was a very promising technology, it was defined in a
> standard document that was actually readable, and it had (at that time,
> if memory serves me right) one complete free software implementation
> (eclipse equinox), two commercial implementations, and one free
> implementation (apache felix) just getting started.
>
> For my own part I was attracted to the lego building block possibilities
> of OSGi, and the fact that we were able to get the server components
> running inside eclipse and talking to eclipse GUI components by
> using OSGi services (even though what the server side components and
> eclipse used on top of OSGi services was very different).
>
> But... the problem with OSGi both then, and when I started looking at it
> back in 2013, was the practicalities in getting all bundle dependencies
> satisfied, and finding, and working around bundle version issues.
>
> In contrast to this, karaf has just worked for me (I took the plunge
> into learning karaf in the autumn of 2016).
>
> Or let me qualify that a little: since I started creating features for
> my own bundles, as a part of the maven build, karaf has just worked for
> me.
>
> So what I'm wondering, is: why is karaf so easy when everything before
> has been so hard?
>
> Is it because there is something magical in the feature resolution,
> compared to other way of starting OSGi runtimes?
>
> Or is it just that karaf comes prepackaged with features for the pax
> stuff (web, jdbc)? And that it is these prepackaged features that just
> works?
>
> Just some idle curiosity on a Sunday morning...:-)
>
>
> - Steinar
>
>


-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com


Re: Is there an upgrade process for Karaf 4.1.0 to 4.1.1?

2017-04-05 Thread Christian Schneider
Many people use the karaf-maven-plugin to create a custom distro that can
define the boot features (karaf ones as well as your own) as well as
changes to configs.
If you do this in your build or deployment pipeline then upgrading to a new
karaf version is mainly changing the karaf version to the new one.
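
A sketch of the relevant assembly pom fragment (the module uses packaging karaf-assembly; versions and feature names are placeholders):

```xml
<plugin>
  <groupId>org.apache.karaf.tooling</groupId>
  <artifactId>karaf-maven-plugin</artifactId>
  <version>${karaf.version}</version>
  <extensions>true</extensions>
  <configuration>
    <bootFeatures>
      <feature>standard</feature>
      <feature>my-app</feature> <!-- your own feature -->
    </bootFeatures>
  </configuration>
</plugin>
```

Upgrading then mostly means bumping ${karaf.version} and rebuilding the assembly.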

Unfortunately this fully automated packaging does not play well with using
hot deployments. It works best when you define the whole karaf setup in
your pipelines and do not touch it when it is running (immutable server
pattern).

For upgrades without downtime the typical scheme is a blue / green
deployment where you have failover of at least two servers and upgrade one
at a time while the other takes the load.

Christian

2017-04-05 19:49 GMT+02:00 mtod09 <m...@thetods.net>:

> I have a test bed with a master / slave cluster setup for Artemis each node
> is on different Karaf instance and using a replicated persistence model. I
> guess I can give it a try under load to see if this will function as an
> upgrade model.
>
> I'm not using the Karaf cluster options only the Artemis master/slave I
> mainly use Karaf's hot deployment for camel routes.
> We implemented an event base model so our clients need a single point to
> get
> the latest event. So a split brain is a real issue running Active/Active
> models.
>
> I was planning on having 4 geographical clusters at min 2 US and 2 EU in a
> network of brokers configuration. This will all be running in AWS and has
> to
> be fully scripted and auto discovery.
>
> Thanks
>
> Mike
>
>
>
>
>
>
>
>
> --
> View this message in context: http://karaf.922171.n3.nabble.
> com/Is-there-an-upgrade-process-for-Karaf-4-1-0-to-4-
> 1-1-tp4050010p4050045.html
> Sent from the Karaf - User mailing list archive at Nabble.com.
>



-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com


Re: Karaf JDBC

2017-04-05 Thread Christian Schneider

Yes you can create a DataSource that is available on first boot.

To make this work you need to add the pax-jdbc features that are needed 
for your DataSource to the boot features.
In your example that should be: pax-jdbc-config, pax-jdbc-pool-dbcp2, 
pax-jdbc-h2 and transaction.

Then you also need to put a DataSource config in etc.
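
For example, a pax-jdbc-config file along these lines (the file name and values follow the tasklist example quoted below; treat it as a sketch):

```
# etc/org.ops4j.datasource-tasklist.cfg
osgi.jdbc.driver.name=H2-pool-xa
url=jdbc:h2:mem:tasklist
dataSourceName=tasklist
```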

If you then start karaf the DataSource service should come up.

Christian

On 05.04.2017 11:43, Cristiano Costantini wrote:

hello,

Is it possible to specify in the configuration a JDBC datasource to be 
created when Karaf boots for the first time?


For information, I'm learning to use Karaf JDBC 
(https://karaf.apache.org/manual/latest/#_datasources_jdbc) and then 
Karaf JPA (https://karaf.apache.org/manual/latest/#_persistence_jpa) 
to replace our current implementation which is based on Spring 
(spring-orm and spring-jdbc)


I'm following the examples of Christian from 
https://github.com/apache/aries/tree/trunk/jpa/examples/


where I've found the example command
jdbc:ds-create -dn H2-pool-xa -url jdbc:h2:mem:tasklist tasklist

that creates a data source and publish to OSGi registry its service:
karaf@root>jdbc:ds-info tasklist
Property   | Value
-
driver.version | 1.3.172 (2013-05-25)
db.version | 1.3.172 (2013-05-25)
db.product | H2
url| jdbc:h2:mem:tasklist
driver.name | H2 JDBC Driver
username   |

karaf@root>service:list javax.sql.DataSource
[javax.sql.DataSource]
--
 dataSourceName = tasklist
osgi.jdbc.driver.name = H2-pool-xa
osgi.jndi.service.name = tasklist
 service.bundleid = 222
 service.factoryPid = org.ops4j.datasource
service.id = 403
 service.pid = org.ops4j.datasource.ac08f704-67e1-40c8-8855-9e3e262f8a9e
 service.scope = singleton
 url = jdbc:h2:mem:tasklist
Provided by :
 OPS4J Pax JDBC Config (222)
Used by:
 Apache Karaf :: JDBC :: Core (226)


Thank you !
Cristiano





--
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com



Re: Is there an upgrade process for Karaf 4.1.0 to 4.1.1?

2017-04-05 Thread Christian Schneider
You can try to copy over the etc dir but you will have to reinstall your 
feature repos and features unless you set them as boot features.


Christian

On 04.04.2017 15:02, mtod09 wrote:

Does anyone have a process? I would like to upgrade without having to reinstall
everything.



--
View this message in context: 
http://karaf.922171.n3.nabble.com/Is-there-an-upgrade-process-for-Karaf-4-1-0-to-4-1-1-tp4050010p4050025.html
Sent from the Karaf - User mailing list archive at Nabble.com.



--
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com



Re: Best Practices for Web application published by Karaf

2017-03-30 Thread Christian Schneider
I know Web-ContextPath is implemented by pax-web but I do not know if it is
part of a standard. I am sure Achim knows this.

Christian

2017-03-30 11:34 GMT+02:00 Cristiano Costantini <
cristiano.costant...@gmail.com>:

> Hi All and thank you for your support,
> the solution suggested by Christian seems to me the simplest at the moment.
>
> Where can I find more documentation on how "Web-ContextPath" works?
> Do "Web-ContextPath" is an OSGi header or is it a Karaf specific header?
> does it is implemented using some  some sub-project (i.e. Pax Web) ?
>
> thank you,
> Cristiano
>
>
>
>
> Il giorno mer 29 mar 2017 alle ore 12:12 Christian Schneider <
> ch...@die-schneider.net> ha scritto:
>
>> I used this approach for a small angular UI:
>> https://github.com/cschneider/Karaf-Tutorial/tree/master/
>> tasklist-blueprint-cdi/angular-ui
>>
>> I just added Web-ContextPath to the Manifest and used the
>> src/main/resources to deploy the static files.
>> Web-ContextPath: /tasklist
>>
>> Christian
>>
>> 2017-03-29 11:10 GMT+02:00 Cristiano Costantini <
>> cristiano.costant...@gmail.com>:
>>
>> Hello all,
>>
>> what are the best practices to publish a web application from
>> Karaf currently in 2017?
>>
>> I know that it exists the WebContainer (https://karaf.apache.org/
>> manual/latest/webcontainer) which supports both publishing of WABs or
>> WARs to Karaf.
>>
>> I however see different approaches, used in conjunction with Javascript
>> frameworks like Angular or Polymer, where the server side logic is reduced
>> to REST services (no JSP required) which rely simply on the OSGi HTTP
>> service to serve HTML and JS files.
>>
>> (one example, I stumbled onto the Openmuc project, it has an Angular
>> application,  https://github.com/gythialy/openmuc/tree/master/projects/
>> webui/base, which is published by a regular bundle, its web files are
>> inside the src/main/resource folder - note: this is a bundle running on
>> felix, does not need karaf).
>>
>> I like the second kind of approach and it seems to me more suitable for
>> JS framework like Polymer which I am studying right now. Also, I've tried
>> the first approach but it was hard to make it work in Karaf the server side
>> dependencies we had, so as today, we run our application on a standalone
>> jetty or glassfish container.
>>
>> To go back to my initial question, what kind of approach do you
>> recommend?
>> Are there any recent sample or well made open source project or any maven
>> archetype to use as starting point reference?
>>
>> I would like to hear also personal opinions :-)
>>
>> Thank you very much!
>> Cristiano
>>
>>
>>
>>
>>
>>
>>
>> --
>> --
>> Christian Schneider
>> http://www.liquid-reality.de
>>
>> Open Source Architect
>> http://www.talend.com
>>
>


-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com


Re: Best Practices for Web application published by Karaf

2017-03-29 Thread Christian Schneider
I used this approach for a small angular UI:
https://github.com/cschneider/Karaf-Tutorial/tree/master/tasklist-blueprint-cdi/angular-ui

I just added Web-ContextPath to the Manifest and used the
src/main/resources to deploy the static files.
Web-ContextPath: /tasklist
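
With the maven-bundle-plugin that is a single extra instruction (sketch; /tasklist is the context path from the example above):

```xml
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <extensions>true</extensions>
  <configuration>
    <instructions>
      <!-- pax-web serves the bundle's static resources under this path -->
      <Web-ContextPath>/tasklist</Web-ContextPath>
    </instructions>
  </configuration>
</plugin>
```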

Christian

2017-03-29 11:10 GMT+02:00 Cristiano Costantini <
cristiano.costant...@gmail.com>:

> Hello all,
>
> what are the best practices to publish a web application from Karaf currently
> in 2017?
>
> I know that it exists the WebContainer (https://karaf.apache.org/
> manual/latest/webcontainer) which supports both publishing of WABs or
> WARs to Karaf.
>
> I however see different approaches, used in conjunction with Javascript
> frameworks like Angular or Polymer, where the server side logic is reduced
> to REST services (no JSP required) which rely simply on the OSGi HTTP
> service to serve HTML and JS files.
>
> (one example, I stumbled onto the Openmuc project, it has an Angular
> application,  https://github.com/gythialy/openmuc/tree/master/projects/
> webui/base, which is published by a regular bundle, its web files are
> inside the src/main/resource folder - note: this is a bundle running on
> felix, does not need karaf).
>
> I like the second kind of approach and it seems to me more suitable for JS
> framework like Polymer which I am studying right now. Also, I've tried the
> first approach but it was hard to make the server side dependencies we had
> work in Karaf, so as of today, we run our application on a standalone
> jetty or glassfish container.
>
> To go back to my initial question, what kind of approach do you recommend?
> Are there any recent sample or well made open source project or any maven
> archetype to use as starting point reference?
>
> I would like to hear also personal opinions :-)
>
> Thank you very much!
> Cristiano
>
>
>
>



-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com


Re: ActiveMq + karaf + shutdown

2017-03-29 Thread Christian Schneider

In OSGi you should solve such things using service dependencies.

I just checked which services activemq provides (see below).
So you could have a service dependency on one of these. If you use 
declarative services for your own components and add a reference to one 
of these services then your component should be stopped before activemq 
goes down.

As a dependency on the specific blueprint context service is a bit of a 
tight coupling, a good solution might be to create a DS component named 
something like ActiveMQPresent which holds the reference to the blueprint 
context of activemq, and just reference this service in your other components.
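
A minimal sketch of such a marker component using DS annotations (the class name and target filter are assumptions based on the service listing below; it needs org.osgi.service.component.annotations and the blueprint API on the classpath and runs only inside an OSGi framework):

```java
import org.osgi.service.blueprint.container.BlueprintContainer;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

// Marker service: only registered while the activemq blueprint container is up,
// so components referencing it are deactivated before activemq shuts down.
@Component(service = ActiveMQPresent.class)
public class ActiveMQPresent {

    @Reference(target = "(osgi.blueprint.container.symbolicname=org.apache.activemq.activemq-osgi)")
    BlueprintContainer activemqContainer;
}
```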


Be aware though that this does not work with blueprint as blueprint does 
not shut down the context when a referenced service goes down.


Christian


karaf@root()> bundle:services -p org.apache.activemq.activemq-osgi

activemq-osgi (62) provides:

objectClass = [org.osgi.service.cm.ManagedServiceFactory]
osgi.service.blueprint.compname = activeMQServiceFactory
service.bundleid = 62
service.id = 149
service.pid = org.apache.activemq.server
service.scope = bundle

objectClass = [org.osgi.service.blueprint.container.BlueprintContainer]
osgi.blueprint.container.symbolicname = org.apache.activemq.activemq-osgi
osgi.blueprint.container.version = 5.14.5.SNAPSHOT
service.bundleid = 62
service.id = 150
service.scope = singleton


On 29.03.2017 10:07, xav wrote:

Hi all,

I have a question about ActiveMq embedded in Karaf.
I would like to shut down the feature activemq-broker-noweb last.
Why? Because I have a feature (bundles connected with the broker) which stops
(on a karaf shutdown) after the activemq one, and I would like the reverse, i.e.
shut down my feature first, and afterwards the activemq feature.
Do we only have the start-level at our disposal?
  
Thx a lot for a help


Regards

Xav



--
View this message in context: 
http://karaf.922171.n3.nabble.com/ActiveMq-karaf-shutdown-tp4049950.html
Sent from the Karaf - User mailing list archive at Nabble.com.



--
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com



Re: Multiple bundles dependencies injection (pax-cdi or blueprint)

2017-03-28 Thread Christian Schneider

On 28.03.2017 11:07, erwan wrote:

Hello,
Coming back again to get light!
I have a bundle of questions regarding DS.
How do you use a service without being yourself a service?
You cannot use a service without being a service in DS. You can hide 
your service though.
If you set serviceClass to a class in a private package then no other 
bundle can use this service.

So this works nicely if you have some internal wiring.


I understand that each service is managed by the SCR and so needs to be
activated at a time. If I have a class that need to do a reference to a
service but that I want to control instantiation, is this possible to
instantiate a service on demand (newInstance?) ?
I think you can create a DS component instance programmatically but I do 
not remember any more how to do it exactly.


How do you manage to deal with multiple instances of a service?
For example:

class dummy {

@Reference
private Service service1

@Reference
private Service service2

}


For this case you typically use a filter and set a service property on 
each of the implementing classes.


Btw you can even set a service property for a DS service that you have 
not written.
See 
http://liquid-reality.de/display/liquid/2016/09/27/Some+hints+to+boost+your+productivity+with+declarative+services

Paragraph "Override service properties using config "

Christian


--
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com



Re: CXF 3.1.8 in karaf 4.1.1 ?

2017-03-17 Thread Christian Schneider
terxml.jackson.dataformat.jackson-dataformat-yaml/2.8.6: missing
requirement [com.fasterxml.jackson.dataformat.jackson-dataformat-yaml/2.8.6]
osgi.wiring.package;
filter:="(&(osgi.wiring.package=org.yaml.snakeyaml)(version>=1.17.0)(!(version>=2.0.0)))"]]
at
org.apache.felix.resolver.ResolutionError.toException(ResolutionError.java:42)
~[?:?]
at 
org.apache.felix.resolver.ResolverImpl.doResolve(ResolverImpl.java:389)
~[?:?]
at org.apache.felix.resolver.ResolverImpl.resolve(ResolverImpl.java:375)
~[?:?]
at org.apache.felix.resolver.ResolverImpl.resolve(ResolverImpl.java:347)
~[?:?]
at
org.apache.karaf.features.internal.region.SubsystemResolver.resolve(SubsystemResolver.java:218)
~[?:?]
at
org.apache.karaf.features.internal.service.Deployer.deploy(Deployer.java:285)
~[?:?]
at
org.apache.karaf.features.internal.service.FeaturesServiceImpl.doProvision(FeaturesServiceImpl.java:1170)
~[?:?]
at
org.apache.karaf.features.internal.service.FeaturesServiceImpl.lambda$doProvisionInThread$0(FeaturesServiceImpl.java:1069)
~[?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:?]
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[?:?]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[?:?]
at java.lang.Thread.run(Thread.java:745) [?:?]

I'm going to replicate the dosgi feature on my custom feature. From what
I've seen I think it's the only way. I don't fully understand why after
blacklisting the repository version I still have 3.1.7 cxf bundles
installed. Also I've tried using the oneVersion property of
karaf-maven-plugin but it probably doesn't do what I was thinking (despite
the documentation saying "If set to true then for each bundle symbolic name
only the highest version will be used")



--
View this message in context: 
http://karaf.922171.n3.nabble.com/CXF-3-1-8-in-karaf-4-1-1-tp4049844p4049869.html
Sent from the Karaf - User mailing list archive at Nabble.com.



--
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com



Re: CXF 3.1.8 in karaf 4.1.1 ?

2017-03-16 Thread Christian Schneider
I think it should work; Karaf should then use the newer CXF version together
with CXF-DOSGi.

Christian



2017-03-16 0:13 GMT+01:00 ivoleitao <ivo.lei...@gmail.com>:

> Hi,
>
> I'm sorry I was not very clear in the question, I'm using CXF DOSGI which
> is currently bound to version 3.1.7 of CXF via repository declaration in
> the latest DOSGi karaf feature (
> http://repo.maven.apache.org/maven2/org/apache/cxf/dosgi/
> cxf-dosgi/2.1.0/cxf-dosgi-2.1.0-features.xml).
> Also I'm building a custom karaf distribution via the karaf maven plugin
>
> Actually I'm not completely sure what happens with the karaf plugin if I
> use as a dependency a cxf feature with a higher version like 3.1.10.
> Since the CXF DOSGi feature uses a cxf repository with version 3.1.7 it's
> not completely clear to me. Following the semantic versioning rules it
> should be possible. I'm going to try it out.
>
> Anyway thank you for your response
> Best Regards.
>
>
> On 15 March 2017 at 22:33, Thomas PEREZ [via Karaf] <
> ml-node+s922171n4049857...@n3.nabble.com> wrote:
>
> > Hi,
> >
> > Karaf 4.1.0 has the cxf feature in RELEASE version, so it takes 3.1.10
> > (RELEASE refers to the latest non-snapshot release in the repository.)
> > *cxf=mvn:org.apache.cxf.karaf/apache-cxf/RELEASE/xml/features*
> > https://mvnrepository.com/artifact/org.apache.cxf.karaf/apache-cxf
> >
> >
> > I think you need to wait for the 3.2.0 release of CXF ...
> > But you can try to uninstall the 3.1.10 feature, then install the 3.1.8
> > feature via command or webconsole.
> >
> > Best Regards
> >
>
>
>
>
>



-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com


Re: Multiple bundles dependencies injection (pax-cdi or blueprint)

2017-03-15 Thread Christian Schneider
Indeed you cannot use @Inject in DS. Instead use @Reference and make 
sure you export the class you want to inject as a service using @Component.
Each dependency injection solution has its own set of annotations with 
slightly different abilities and limitations.


Christian

On 15.03.2017 16:46, erwan wrote:

Ok thanks Christian.
I managed to make things work after moving the cxf part from the blueprint xml
description to component property elements.
For cxf, it seems to be really sensitive to configuration, as I wasn't able to
set "/" as org.apache.cxf.servlet.context and neither was I able to
configure "org.apache.cxf.rs.address=/" in the properties. Anyway, my requests
are received using curl and processed to the database.
I still have issues coming from injected fields.
Can you also confirm that I can't use @Inject in a @Component annotated class?
I thought it was possible to inject classes within the same bundle and
inside a @Component annotated class.
I don't really understand why all these solutions are mutually exclusive as well...







--
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com



Re: Multiple bundles dependencies injection (pax-cdi or blueprint)

2017-03-15 Thread Christian Schneider

You are right.

@Reference is only meaningful inside an @Component, so this only works if 
your class is instantiated by DS.


If you want to use DS then I strongly recommend starting with my 
tutorial code:

https://github.com/cschneider/Karaf-Tutorial/tree/master/tasklist-ds

So at least you have a working base.

Again - Do not mix blueprint and DS in the same bundle. Choose one and 
use it for all classes in a bundle.


Christian

On 15.03.2017 09:21, erwan wrote:

I'm still working on a solution for my problem, and for the moment the injected
@Reference doesn't seem to be filled.
I'm getting a NullPointerException every time I try to access methods
on this @Reference.
What makes it difficult is that I had to piece key information together from several sources.
For example, I found these lines in hibernate site:
"Technically, multiple persistence units are supported by Enterprise OSGi
JPA and unmanaged Hibernate JPA use. However, we cannot currently support
this in OSGi. In Hibernate 4, only one instance of the OSGi-specific
ClassLoader is used per Hibernate bundle, mainly due to heavy use of static
TCCL utilities. We hope to support one OSGi ClassLoader per persistence unit
in Hibernate 5.
Scanning is supported to find non-explicitly listed entities and mappings.
However, they MUST be in the same bundle as your persistence unit (fairly
typical anyway). Our OSGi ClassLoader only considers the "requesting bundle"
(hence the requirement on using services to create EMF/SF), rather than
attempting to scan all available bundles. This is primarily for versioning
considerations, collision protections, etc. "

I also see that a @Reference seems to be only taken into account inside a
@Component? Is this right?

Maybe the way I use DS with hibernate isn't possible.






--
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com



Re: Multiple bundles dependencies injection (pax-cdi or blueprint)

2017-03-10 Thread Christian Schneider

[blueprint bean and service elements stripped by the mail archive]

and what might be unnecessary:

<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
   xmlns:jpa="http://aries.apache.org/xmlns/jpa/v2.0.0"
   xmlns:tx="http://aries.apache.org/xmlns/transactions/v1.2.0"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://www.osgi.org/xmlns/blueprint/v1.0.0
       https://osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd">




In the consumer bundle (something like a REST api):

@Path("")
@Component
public class EndPoint {

    private IRepository repository;

    @Reference
    public void setRepository(IRepository repository) {
        this.repository = repository;
    }

    @DELETE
    @Path("removeAll")
    public void clearDB() {
        repository.removeAll();
    }
}

With a third bundle API:

public interface IRepository {
    void removeAll();
}

In this example, I think I'm missing something that would instantiate the
IRepository, although I thought that would be done by the declarative service. I
also think that I probably didn't understand well how it works internally.

On karaf side:
scr:details com.example.persistence.JPAResourceRepositoryImpl
Component Details
   Name: com.example.persistence.JPAResourceRepositoryImpl
   State   : ACTIVE
References
   Reference   : UniqueIdManager
 State : satisfied
 Multiple  : single
 Optional  : mandatory
 Policy: static
 Service Reference : Bound Service ID 556
(com.example.persistence.UniqueIdManager)

As you can understand, I really need more reading sessions of chapter 5
of the OSGi Core document!







--
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com



Re: DOSGI 2.1.0 calling soap service results in "javax.xml.stream.XMLOutputFactory cannot be created"

2017-03-08 Thread Christian Schneider
The karaf way to provide Java specs is to use "libraries" in the custom
build. These override the spec APIs to make them more OSGi friendly.
As the DOSGi feature is built for this style, you should add the necessary
libraries:

See:
https://github.com/apache/karaf/blob/master/assemblies/apache-karaf/pom.xml#L192-L209


2017-03-08 0:33 GMT+01:00 ivoleitao <ivo.lei...@gmail.com>:

> Also and sorry for the spam :-)
>
> In my pax exam my test code above returns the following:
>
> 07-03-2017 23:27:39 [ERROR] -  XOF INSTANCED 
> 07-03-2017 23:27:39 [ERROR] -  XOF CLASSNAME:
> com.sun.xml.internal.stream.XMLOutputFactoryImpl
>
> For paxexam I'm not installing all the bundles from the feature cxf_specs
> (described at
> http://repo.maven.apache.org/maven2/org/apache/cxf/karaf/
> apache-cxf/3.1.7/apache-cxf-3.1.7-features.xml)
>
> I'm doing something like this:
>
> public static Option cxf_specs() {
>     return composite(
>         systemPackages("javax.xml.stream; version=\"1.0.0\"",
>             "javax.xml.stream.events; version=\"1.0.0\"",
>             "javax.xml.stream.util; version=\"1.0.0\""),
>         mavenBundle().groupId("org.codehaus.woodstox")
>             .artifactId("stax2-api").version(versionResolver),
>         mavenBundle().groupId("org.codehaus.woodstox")
>             .artifactId("woodstox-core-asl").version(versionResolver),
>         mavenBundle().groupId("org.apache.servicemix.specs")
>             .artifactId("org.apache.servicemix.specs.jsr339-api-2.0.1")
>             .version(versionResolver));
> }
>
> I'm not sure whether the systemPackages configuration is the reason this
> works in paxexam, or the removal of the other bundles from the cxf_specs
> feature (I've had a number of other problems and this was the winning
> combination :-) for paxexam)
>
>
>
>



-- 
-- 
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com


Re: PAX JDBC 1.0.1 pools

2017-03-06 Thread Christian Schneider

Hi Scott,

sorry for the late response. It took a while until I found time to look 
into the hikari pool code.


You need to prefix the hikari properties with "hikari.".
All these properties will be stripped of the prefix and given to Hikari 
as the config map.
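For example (a sketch; `maximumPoolSize` and `minimumIdle` are standard HikariCP property names, the datasource values are illustrative):

```properties
# etc/org.ops4j.datasource-pooled.cfg  (illustrative)
osgi.jdbc.driver.name=H2
pool=hikari
databaseName=test
dataSourceName=test
# Stripped of the "hikari." prefix and handed to Hikari as its config map:
hikari.maximumPoolSize=8
hikari.minimumIdle=2
```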


Christian

On 24.02.2017 18:35, Leschke, Scott wrote:


That much I have. I was talking about the configuration that might be 
more pool specific, like:

poolName =

maximumPoolSize =

minimumIdle =

idleTimeout =

maxLifetime =

*From:*Christian Schneider [mailto:cschneider...@gmail.com] *On Behalf 
Of *Christian Schneider

*Sent:* Friday, February 24, 2017 11:30 AM
*To:* user@karaf.apache.org
*Subject:* Re: PAX JDBC 1.0.1 pools

See 
https://ops4j1.jira.com/wiki/display/PAXJDBC/Pooling+and+XA+support+in+1.0.0


For H2 and hikari you could use:
osgi.jdbc.driver.name=H2
pool=hikari
databaseName=test
user=sa
password=
dataSourceName=test2

To install in karaf:
feature:repo-add pax-jdbc 1.0.1
feature:install pax-jdbc-config pax-jdbc-pool-hikaricp pax-jdbc-h2

You should see a DataSource

service:list DataSource

[javax.sql.DataSource]
--
 databaseName = test
 dataSourceName = test2
 felix.fileinstall.filename = 
file:/home/cschneider/java/apache-karaf-4.1.0/etc/org.ops4j.datasource-local.cfg 


 osgi.jdbc.driver.name = H2
 osgi.jndi.service.name = test2
 password =
 service.bundleid = 55
 service.factoryPid = org.ops4j.datasource
 service.id = 120
 service.pid = org.ops4j.datasource.78e4961e-be81-4328-9d2e-6e6af73bebd1
 service.scope = singleton
 user = sa
Provided by :
 OPS4J Pax JDBC Config (55)


As far as I know hikari has no XA support or at least we do not 
support it.


Christian



On 24.02.2017 17:12, Leschke, Scott wrote:

I’m a bit confused about how to configure the underlying connection
pool. I’ll be using the Hikari pool
service (pax-jdbc-pool-hikaricp). Could someone point me to the docs
or something? The only example I see is for DBCP and all my
experiments thus far have failed.
Thx, Scott

--
Christian Schneider
http://www.liquid-reality.de
Open Source Architect
http://www.talend.com



--
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com



Re: Multiple bundles dependencies injection (pax-cdi or blueprint)

2017-03-06 Thread Christian Schneider

A simple @Inject only works inside a bundle.
If you want to inject an object from another bundle, then export the 
object as a service in bundle B and inject it as a service in bundle C.


Btw. using the blueprint-maven-plugin you can even use the CDI/JEE 
annotations for blueprint.


See this to inject a service:
https://github.com/cschneider/Karaf-Tutorial/blob/master/tasklist-blueprint-cdi/tasklist-service/src/main/java/net/lr/tasklist/service/impl/TaskServiceRest.java#L31

and this to offer a service:
https://github.com/cschneider/Karaf-Tutorial/blob/master/tasklist-blueprint-cdi/persistence/src/main/java/net/lr/tasklist/persistence/impl/TaskServiceImpl.java#L18

The approach works with both pax-cdi and blueprint with the 
blueprint-maven-plugin.
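A minimal sketch of both sides, using the @OsgiServiceProvider/@OsgiService annotations from org.ops4j.pax.cdi.api (class names taken from the question below; treat this as an illustration, not tested code):

```java
// Bundle B: publish the implementation as an OSGi service
@Singleton
@OsgiServiceProvider(classes = Itf.class)
public class ItfImpl implements Itf {
    public void PrintITf() {
        System.out.println("Hello");
    }
}

// Bundle C: inject the service from the OSGi registry
@Singleton
public class Use {
    @Inject
    @OsgiService
    private Itf itf;
}
```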


Having said this I support what Achim said. In many cases it is even 
better to use declarative services instead as they are better tailored 
for OSGi.
In your case CDI might be the better choice as you have more experience 
with it but I would experiment with both.


Christian

On 06.03.2017 11:04, erwan wrote:

Hello,
I'm new to the osgi world (coming from a JEE background) and trying to use pax-cdi at
first to manage injection.
I have 3 bundles.
Bundle A declares an interface:
package com.eg;
public interface Itf {
    public void PrintITf();
}

with Manifest generation using maven:
<Export-Package>com.eg*</Export-Package>


Bundle B implements this interface:
package com.eg.impl;
public class ItfImpl implements Itf {

    public void PrintITf() {
        System.out.println("Hello");
    }
}


<Import-Package>com.eg*,*</Import-Package>


Bundle C is trying to inject Itf (as it can be done using JEE):
@Inject
private Itf itf;

public void setItf(Itf itf) {
    this.itf = itf;
}

public Use() {
    itf.PrintITf();
}

<Import-Package>com.eg*, com.eg.impl*, *</Import-Package>


<Require-Capability>
osgi.extender; filter:="(osgi.extender=pax.cdi)",
org.ops4j.pax.cdi.extension; filter:="(extension=pax-cdi-extension)"
</Require-Capability>


It doesn't seem to work, as I always get a NullPointerException when starting
bundle C.

Is this a classical use case in the OSGi framework?
Do you think I should change the way I split these 3 bundles (moving the
interface and implementation into the same bundle)?

I also tried with blueprint without being able to make it work either...
Thanks for your help






--
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com



Re: Bundle

2017-03-03 Thread Christian Schneider
In OSGi dependencies are resolved in the resolve state of a bundle, so this
resolution is completely independent of eventual start levels.
When a bundle is started, the only thing that happens is that the
activator is executed, or in the case of spring dm or blueprint, the context is
started.

Equinox as well as felix allow setting start levels for bundles, so there
should be no difference in that regard.
The problem with spring is that it was never designed for OSGi, and spring
dm is a quite buggy way to bridge spring to OSGi. So it might be a good
idea to try whether switching to blueprint solves the issue. If you look into
blueprint then you should also take a look at the blueprint-maven-plugin.
It allows you to use a lot of the spring and JEE annotations to do the wiring
and creates a blueprint xml at build time.

Christian

2017-03-04 0:01 GMT+01:00 IgorS <ige.simjano...@gmail.com>:

> Hi, I'm relatively new to OSGi and Karaf, so please forgive me if this
> problem was answered elsewhere (I looked but couldn't find it).
>
> In the past few days I was working with Karaf, deploying OSGi
> bundles (containing routes) using different DI frameworks within Karaf:
> blueprint and spring.
>
> I'm running Karaf 4.1 on Windows 7 and java 1.8.
>
> As far as I understand blueprint is recommended, but I had to check spring as
> well in order to 'integrate' Spring and OSGi. Eventually I managed to get it
> working.
>
> Feature list is the following (my bundle is FileRouteSpring and it contains
> very simple route , just copy file(s) from->to):
> karaf@root()> list
> START LEVEL 100 , List Threshold: 50
>  ID | State  | Lvl | Version| Name
> ++-++---
> -
>  29 | Active |  80 | 4.1.0  | Apache Karaf :: OSGi Services ::
> Event
>  53 | Active |  50 | 2.18.2 | camel-blueprint
>  54 | Active |  50 | 2.18.2 | camel-catalog
>  55 | Active |  50 | 2.18.2 | camel-commands-core
>  56 | Active |  50 | 2.18.2 | camel-core
>  57 | Active |  80 | 2.18.2 | camel-karaf-commands
>  64 | Active |  50 | 2.18.2 | camel-spring
>  86 | Active |  50 | 2.18.2 | camel-spring-dm
>  87 | Active |  50 | 1.1.1  | geronimo-jta_1.1_spec
> 109 | Active |  80 | 0.0.1.SNAPSHOT | FileRouteSpring
>
> But, sometimes after Karaf is restarted the bundle can't start due to errors
> like this:
> ("org.apache.camel.model.config" doesnt contain ObjectFactory.class or
> jaxb.index)
>
> It looks like a timing issue. I guess the OSGi runtime (Felix) resolves all
> bundle dependencies during boot based on start level, and I suspect that
> sometimes the resolution part fails (or the wrong class is wired). There are
> several bundles that have the same level (50), which could explain the timing
> problem. Or maybe I'm completely off the mark with my thoughts :)
>
> I was wondering if anyone experienced this problem or maybe can give his
> thoughts.
>
> I was thinking of switching to Equinox or giving each bundle a different
> start level.
>
>
>
>



-- 
-- 
Christian Schneider
http://www.liquid-reality.de
<https://owa.talend.com/owa/redir.aspx?C=3aa4083e0c744ae1ba52bd062c5a7e46=http%3a%2f%2fwww.liquid-reality.de>

Open Source Architect
http://www.talend.com
<https://owa.talend.com/owa/redir.aspx?C=3aa4083e0c744ae1ba52bd062c5a7e46=http%3a%2f%2fwww.talend.com>


Re: PAX JDBC 1.0.1 pools

2017-03-01 Thread Christian Schneider
It would be nice if the transaction control service also supported
DataSources as services. Then we would only need to teach people one
variant for configuring them.
Transaction control is special in its config, and it cannot be reused for
other usages of a database.

Christian

2017-03-01 10:44 GMT+01:00 Timothy Ward <tim.w...@paremus.com>:

> Again, I can recommend the OSGi Transaction Control service. The Aries
> implementation has support for configuration defined resources, which make
> connection and pooling configuration extremely easy. See
> http://aries.apache.org/modules/tx-control/localJDBC.
> html#creating-a-resource-using-a-factory-configuration for details.
>
> The Aries Transaction Control implementation also has support for XA
> transactions if that’s of interest to you.
>
> Best Regards,
>
> Tim Ward
>
> Author, Enterprise OSGi in Action https://www.manning.com/cummins
>
>
>
> On 1 Mar 2017, at 08:11, schmke <ktschm...@gmail.com> wrote:
>
> I too am trying out the HikariCP pooling and haven't figured out how to
> change/specify pool settings.
>
> I have a .cfg file that creates a pooled data source just fine, with TRACE
> logging on I see HikariCP initializing and all the default settings.  And
> the pool is used as I use the data source.
>
> But when I try to specify pooling configuration in the .cfg file, the
> property I set is passed on to the underlying data source factory, not the
> pool.  For example, I want to set the minimumIdle to 5 rather than the
> default 10.
>
> If I specify pool.minimumIdle=5 I see this in the log:
>
> 2017-03-01T00:08:13,848 | WARN  | CM Configuration Updater
> (ManagedServiceFactory Update: factoryPid=[org.ops4j.datasource]) |
> DataSourceRegistration   | 76 - org.ops4j.pax.jdbc.config - 1.0.1 |
> cannot set properties [pool.minimumIdle]
> java.sql.SQLException: cannot set properties [pool.minimumIdle]
> at
> org.ops4j.pax.jdbc.mysql.impl.MysqlDataSourceFactory.setProperties(
> MysqlDataSourceFactory.java:71)
> [77:org.ops4j.pax.jdbc.mysql:1.0.1]
>
> If I instead specify jdbc.pool.minimumIdle=5, the same thing:
>
> 2017-03-01T00:09:04,034 | WARN  | CM Configuration Updater
> (ManagedServiceFactory Update: factoryPid=[org.ops4j.datasource]) |
> DataSourceRegistration   | 76 - org.ops4j.pax.jdbc.config - 1.0.1 |
> cannot set properties [pool.minimumIdle]
> java.sql.SQLException: cannot set properties [pool.minimumIdle]
> at
> org.ops4j.pax.jdbc.mysql.impl.MysqlDataSourceFactory.setProperties(
> MysqlDataSourceFactory.java:71)
> [77:org.ops4j.pax.jdbc.mysql:1.0.1]
>
> So how are the properties to be specified so they get passed to the pool
> and
> not the underlying JDBC data source?
>
>
>
>
>
>


-- 
-- 
Christian Schneider
http://www.liquid-reality.de
<https://owa.talend.com/owa/redir.aspx?C=3aa4083e0c744ae1ba52bd062c5a7e46=http%3a%2f%2fwww.liquid-reality.de>

Open Source Architect
http://www.talend.com
<https://owa.talend.com/owa/redir.aspx?C=3aa4083e0c744ae1ba52bd062c5a7e46=http%3a%2f%2fwww.talend.com>

